waypipe-v0.10.5 (commit f1a5acbae0728eac3d63614b4b9232ca3b65d575)

---- waypipe-v0.10.5/.clang-format ----

# Only including options for C only
Language: Cpp
AlignAfterOpenBracket: DontAlign
AlignConsecutiveAssignments: false
AlignConsecutiveDeclarations: false
AlignEscapedNewlines: Right
AlignOperands: true
AlignTrailingComments: true
AllowAllParametersOfDeclarationOnNextLine: true
AllowShortBlocksOnASingleLine: false
AllowShortCaseLabelsOnASingleLine: false
AllowShortFunctionsOnASingleLine: All
AllowShortIfStatementsOnASingleLine: false
AllowShortLoopsOnASingleLine: false
AlwaysBreakAfterDefinitionReturnType: None
AlwaysBreakAfterReturnType: None
AlwaysBreakBeforeMultilineStrings: false
BinPackArguments: true
BinPackParameters: true
BraceWrapping:
  AfterControlStatement: false
  AfterEnum: false
  AfterFunction: false
  AfterStruct: false
  AfterUnion: false
  AfterExternBlock: false
  BeforeCatch: false
  BeforeElse: false
  IndentBraces: false
  SplitEmptyFunction: true
  SplitEmptyRecord: true
  SplitEmptyNamespace: true
BreakBeforeBinaryOperators: None
BreakBeforeBraces: Linux
BreakBeforeInheritanceComma: false
BreakBeforeTernaryOperators: true
BreakStringLiterals: false
ColumnLimit: 80
CommentPragmas: '^ IWYU pragma:'
CompactNamespaces: false
ConstructorInitializerAllOnOneLineOrOnePerLine: false
ConstructorInitializerIndentWidth: 4
ContinuationIndentWidth: 16
Cpp11BracedListStyle: true
DerivePointerAlignment: false
DisableFormat: false
ExperimentalAutoDetectBinPacking: false
FixNamespaceComments: true
ForEachMacros:
  - foreach
  - Q_FOREACH
  - BOOST_FOREACH
IncludeBlocks: Preserve
IncludeCategories:
  - Regex: '^"(llvm|llvm-c|clang|clang-c)/'
    Priority: 2
  - Regex: '^(<|"(gtest|gmock|isl|json)/)'
    Priority: 3
  - Regex: '.*'
    Priority: 1
IncludeIsMainRegex: '(Test)?$'
IndentCaseLabels: false
IndentPPDirectives: None
IndentWidth: 8
IndentWrappedFunctionNames: false
KeepEmptyLinesAtTheStartOfBlocks: true
MacroBlockBegin: ''
MacroBlockEnd: ''
MaxEmptyLinesToKeep: 1
NamespaceIndentation: None
PenaltyBreakAssignment: 2
PenaltyBreakBeforeFirstCallParameter: 19
PenaltyBreakComment: 300
PenaltyBreakFirstLessLess: 120
PenaltyBreakString: 1000
PenaltyBreakTemplateDeclaration: 10
PenaltyExcessCharacter: 1000000
PenaltyReturnTypeOnItsOwnLine: 60
PointerAlignment: Right
ReflowComments: true
SortIncludes: true
SortUsingDeclarations: true
SpaceAfterCStyleCast: false
SpaceBeforeParens: ControlStatements
SpaceBeforeRangeBasedForLoopColon: true
SpaceInEmptyParentheses: false
SpacesBeforeTrailingComments: 1
SpacesInAngles: false
SpacesInContainerLiterals: true
SpacesInCStyleCastParentheses: false
SpacesInParentheses: false
SpacesInSquareBrackets: false
Standard: Cpp03
TabWidth: 8
UseTab: ForContinuationAndIndentation
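As a rough illustration (hand-written for this document, not taken from the
repository), C code formatted under the configuration above looks like this:
Linux-style brace placement, 8-column indentation, and right-aligned pointers.

```c
#include <stdint.h>

struct pixel_buffer {
        uint32_t width, height;
        uint8_t *data; /* PointerAlignment: Right */
};

/* BreakBeforeBraces: Linux puts function braces on their own line,
 * but keeps control-statement braces on the same line. */
static int fill_buffer(struct pixel_buffer *buf, uint8_t value)
{
        if (!buf->data) {
                return -1;
        }
        for (uint32_t i = 0; i < buf->width * buf->height; i++) {
                buf->data[i] = value;
        }
        return 0;
}
```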
---- waypipe-v0.10.5/.gitignore ----

/waypipe
/build/
/Doxyfile
/html
/latex
/doc
/waypipe-c/test/matrix
/target
/build-minimal/
/wrap-ffmpeg/target
/wrap-lz4/target
/wrap-zstd/target
/wrap-gbm/target
/shaders/target

---- waypipe-v0.10.5/CONTRIBUTING.md ----

Contributing guidelines
===============================================================================

## Formatting

To avoid needless time spent formatting things, this project has
autoformatting set up. Yes, it's often ugly, but after using it long enough
you'll forget that code can look nice.

Python scripts are formatted with ruff[0], C code with clang-format[1], and
Rust code with `cargo fmt`. The script `autoformat.sh` at the root of the
directory should format all source code files in the project.

[0] https://docs.astral.sh/ruff/
[1] https://clang.llvm.org/docs/ClangFormat.html

## Comments

Explain precisely that which is not obvious. `/* ... */` is preferred to
`// ...` for longer comments; the leading `/*` and trailing `*/` do not need
lines of their own. Use Doxygen style (`/** ... */`) for functions and structs
that need commenting, but not to the point where it hinders source code
readability. Waypipe is not a library.

## Memory and errors

Some errors are unrecoverable, and for those cases Waypipe should shut down
cleanly. For instance, if Waypipe cannot replicate a file descriptor, then an
application connected through it will almost certainly crash, and it's better
to have Waypipe exit instead. Other errors can be safely ignored -- if
fine-grained damage tracking fails, a sane fallback would be to assume that an
entire surface is damaged. All error conditions should be handled, including
the errors produced by allocation failures. (It is relatively easy to test for
allocation failure by `LD_PRELOAD`ing a library that redefines malloc et al.;
see for instance "mallocfail" and "failmalloc". `ulimit -v` may be less
effective.)
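To make the `LD_PRELOAD` approach mentioned above concrete, here is a minimal
sketch of such a preload library. The file names and the every-1000th-failure
policy are invented for the example; the dedicated tools mentioned above use
more thorough injection strategies.

```c
/* fail.c -- minimal allocation-failure injector (example only).
 * Build with:  cc -shared -fPIC fail.c -o fail.so -ldl
 * Run with:    LD_PRELOAD=./fail.so some-program */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>

static void *(*real_malloc)(size_t) = NULL;
static unsigned long counter = 0;

void *malloc(size_t size)
{
        if (!real_malloc) {
                /* Look up the libc malloc the first time through. */
                real_malloc = dlsym(RTLD_NEXT, "malloc");
        }
        /* Invented policy: make every 1000th allocation fail. */
        if (++counter % 1000 == 0) {
                return NULL;
        }
        return real_malloc(size);
}
```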
## Types

- Typedefs should be used only for function signatures, and never applied to
  structs.
- `short`, `long`, and `long long` should not be used, in favor of `int16_t`
  and `int64_t`.
- All wire-format structures should use fixed size types. It's safe to assume
  that buffers will never be larger than about 1 GB, so buffer sizes and
  indices do not require 64 bit types when used in protocol message headers.
- `printf` should be called with the correct format codes. For example, `%zd`
  for `ssize_t`, and the `PRIu32` macro for `uint32_t`; a short example
  follows this list.
- Avoid unnecessary casts.
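As a small illustration of the format-code rule above (the variable names are
invented for the example):

```c
#include <inttypes.h>
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
        uint32_t object_id = 7;
        ssize_t nread = -1;
        /* %zd matches ssize_t; PRIu32 expands to the right conversion
         * specifier for uint32_t on any platform. */
        printf("object %" PRIu32 ": read returned %zd\n", object_id, nread);
        return 0;
}
```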
"libc" version = "0.2.164" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "433bfe06b8c75da9b2e3fbea6e5329ff87748f0b144ef75306e674c3f6f7c13f" [[package]] name = "libloading" version = "0.8.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4979f22fdb869068da03c9f7528f8297c6fd2606bc3a4affe42e6a823fdb8da4" dependencies = [ "cfg-if", "windows-targets", ] [[package]] name = "log" version = "0.4.22" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a7a70ba024b9dc04c27ea2f0c0548feb474ec5c54bba33a7f72f873a39d07b24" [[package]] name = "memoffset" version = "0.9.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "488016bfae457b036d996092f6cb448677611ce4449e970ceaf42695203f218a" dependencies = [ "autocfg", ] [[package]] name = "nix" version = "0.29.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "71e2746dc3a24dd78b3cfcb7be93368c6de9963d30f43a6a73998a9cf4b17b46" dependencies = [ "bitflags", "cfg-if", "cfg_aliases", "libc", "memoffset", ] [[package]] name = "pkg-config" version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "953ec861398dccce10c670dfeaf3ec4911ca479e9c02154b3a215178c5f566f2" [[package]] name = "strsim" version = "0.11.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7da8b5736845d9f2fcb837ea5d9e2628564b3b043a70948a3f0b778838c5fb4f" [[package]] name = "utf8parse" version = "0.2.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821" [[package]] name = "wasi" version = "0.11.0+wasi-snapshot-preview1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9c8d87e72b64a3b4db28d11ce29237c246188f4f51057d65a7eab63b7987e423" [[package]] name = "waypipe" version = "0.10.5" dependencies = [ "ash", "clap", "getrandom", "log", "nix", "waypipe-ffmpeg-wrapper", "waypipe-gbm-wrapper", "waypipe-lz4-wrapper", "waypipe-shaders", "waypipe-zstd-wrapper", ] [[package]] name = "waypipe-ffmpeg-wrapper" version = "0.1.0" dependencies = [ "libloading", "pkg-config", ] [[package]] name = "waypipe-gbm-wrapper" version = "0.1.0" dependencies = [ "libloading", "pkg-config", ] [[package]] name = "waypipe-lz4-wrapper" version = "0.1.0" dependencies = [ "pkg-config", ] [[package]] name = "waypipe-shaders" version = "0.1.0" [[package]] name = "waypipe-zstd-wrapper" version = "0.1.0" dependencies = [ "pkg-config", ] [[package]] name = "windows-sys" version = "0.59.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1e38bc4d79ed67fd075bcc251a1c39b32a1776bbe92e5bef1f0bf1f8c531853b" dependencies = [ "windows-targets", ] [[package]] name = "windows-targets" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9b724f72796e036ab90c1021d4780d4d3d648aca59e491e6b98e725b84e99973" dependencies = [ "windows_aarch64_gnullvm", "windows_aarch64_msvc", "windows_i686_gnu", "windows_i686_gnullvm", "windows_i686_msvc", "windows_x86_64_gnu", "windows_x86_64_gnullvm", "windows_x86_64_msvc", ] [[package]] name = "windows_aarch64_gnullvm" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "32a4622180e7a0ec044bb555404c800bc9fd9ec262ec147edd5989ccd0c02cd3" [[package]] name = "windows_aarch64_msvc" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"09ec2a7bb152e2252b53fa7803150007879548bc709c039df7627cabbd05d469" [[package]] name = "windows_i686_gnu" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8e9b5ad5ab802e97eb8e295ac6720e509ee4c243f69d781394014ebfe8bbfa0b" [[package]] name = "windows_i686_gnullvm" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0eee52d38c090b3caa76c563b86c3a4bd71ef1a819287c19d586d7334ae8ed66" [[package]] name = "windows_i686_msvc" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "240948bc05c5e7c6dabba28bf89d89ffce3e303022809e73deaefe4f6ec56c66" [[package]] name = "windows_x86_64_gnu" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "147a5c80aabfbf0c7d901cb5895d1de30ef2907eb21fbbab29ca94c5b08b1a78" [[package]] name = "windows_x86_64_gnullvm" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "24d5b23dc417412679681396f2b49f3de8c1473deb516bd34410872eff51ed0d" [[package]] name = "windows_x86_64_msvc" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec" waypipe-v0.10.5/Cargo.toml000066400000000000000000000047721507071711100154030ustar00rootroot00000000000000[package] name = "waypipe" version = "0.10.5" # To ensure Waypipe can still be built on older/unupdated systems, do # not update the edition or version unless necessary. edition = "2021" rust-version = "1.77.0" default-run = "waypipe" [[bin]] name = "waypipe" path = "src/main.rs" [[bin]] name = "test_proto" path = "src/test_proto.rs" required-features = ["test_proto"] [features] default = ["video", "dmabuf", "lz4", "zstd", "gbmfallback", "test_proto"] video = ["dep:waypipe-ffmpeg-wrapper", "dep:waypipe-shaders", "dmabuf"] dmabuf = ["dep:ash"] lz4 = ["dep:waypipe-lz4-wrapper"] zstd = ["dep:waypipe-zstd-wrapper"] gbmfallback = ["dep:waypipe-gbm-wrapper"] test_proto = [] [dependencies] getrandom = "0.2.15" clap = { version = "4.5.21", features = [] } log = { version = "0.4.22", features = ["std"] } nix = { version = "0.29.0", features = ["fs", "poll", "uio", "socket", "process", "signal", "ioctl"] } ash = { version = "0.38.0", optional = true } waypipe-ffmpeg-wrapper = { path = "wrap-ffmpeg", optional = true} waypipe-lz4-wrapper = { path = "wrap-lz4", optional = true} waypipe-zstd-wrapper = { path = "wrap-zstd", optional = true} waypipe-gbm-wrapper = { path = "wrap-gbm", optional = true} waypipe-shaders= { path = "shaders", optional = true} # Build profiles. The default 'release' and 'dev' are for use when building # waypipe directly with cargo; debug info is _on_ for both. There are also # variants used by meson corresponding to each optimization level and debug amount. # Note: tt is recommended to split debuginfo _after_ building, as Rust's # split-debuginfo still keeps a lot of debug info in the executable. 
[profile.release]
opt-level = 3
debug = "full"
panic = "abort"
lto = "thin"

[profile.dev]
panic = "abort"

[profile.meson-0]
inherits = "dev"
opt-level = 0

[profile.meson-1]
inherits = "release"
opt-level = 1
lto = false

[profile.meson-2]
inherits = "release"
lto = "thin"
opt-level = 2

[profile.meson-3]
inherits = "release"
opt-level = 3
lto = true
codegen-units = 1

[profile.meson-s]
inherits = "release"
lto = true
opt-level = "s"
codegen-units = 1
# to shave off ~100kB: -Z build-std="std,panic_abort"

[profile.meson-0-strip]
inherits = "meson-0"
debug = false
strip = "debuginfo"

[profile.meson-1-strip]
inherits = "meson-1"
debug = false
strip = "debuginfo"

[profile.meson-2-strip]
inherits = "meson-2"
debug = false
strip = "debuginfo"

[profile.meson-3-strip]
inherits = "meson-3"
debug = false
strip = "debuginfo"

[profile.meson-s-strip]
inherits = "meson-s"
debug = false
strip = "debuginfo"

---- waypipe-v0.10.5/Cross.toml ----

[build]
pre-build = [
    "dpkg --add-architecture $CROSS_DEB_ARCH",
    "apt-get update && apt-get --assume-yes install libvulkan-dev:$CROSS_DEB_ARCH"
]

[target.i686-unknown-freebsd]
pre-build = []

[target.x86_64-unknown-freebsd]
pre-build = []

[target.powerpc64-unknown-linux-gnu]
pre-build = []

---- waypipe-v0.10.5/LICENSE.GPLv3 ----

(Verbatim text of the GNU General Public License, version 3, 29 June 2007,
Copyright (C) 2007 Free Software Foundation, Inc.)
---- waypipe-v0.10.5/LICENSE.MIT ----

Copyright © 2019 Manuel Stoeckl

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice (including the next
paragraph) shall be included in all copies or substantial portions of the
Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
---

The above is the version of the MIT "Expat" License used by X.org:
http://cgit.freedesktop.org/xorg/xserver/tree/COPYING

---- waypipe-v0.10.5/README.md ----

Waypipe
================================================================================

`waypipe` is a proxy for [Wayland][l0] clients. It forwards Wayland messages
and serializes changes to shared memory buffers over a single socket. This
makes application forwarding similar to [`ssh -X`][l1] feasible.

[l0]: https://wayland.freedesktop.org/
[l1]: https://wiki.archlinux.org/title/OpenSSH#X11_forwarding

## Usage

`waypipe` should be installed on both the local and remote computers. There is
a user-friendly command line pattern which prefixes a call to `ssh` and
automatically sets up a reverse tunnel for protocol data. For example,

```
waypipe ssh user@theserver weston-terminal
```

will run `ssh`, connect to `theserver`, and remotely run `weston-terminal`,
using local and remote `waypipe` processes to synchronize the shared memory
buffers used by Wayland clients between both computers. Command line arguments
before `ssh` apply only to `waypipe`; those after `ssh` belong to `ssh`.

Alternatively, one can launch the local and remote processes by hand, with the
following set of shell commands:

```
/usr/bin/waypipe -s /tmp/socket-local client &
ssh -R /tmp/socket-remote:/tmp/socket-local -t user@theserver \
        /usr/bin/waypipe -s /tmp/socket-remote server -- \
        /usr/bin/weston-terminal
kill %1
```

It's possible to set up the local and remote processes so that, when the
connection between the sockets used by each end breaks, one can create a new
forwarded socket on the remote side and reconnect the two processes. For a
more detailed example, see the man page.

## Installing

Waypipe's build uses a mixture of [meson][i0] and [cargo][i1]. For example:

```
cd /path/to/waypipe/
cargo fetch --locked
cd ..
mkdir build-waypipe
meson --buildtype debugoptimized waypipe build-waypipe
ninja -C build-waypipe install
```

Core build requirements:

- [meson][i0] (build, ≥ 0.57, with dependencies `ninja`, `pkg-config`,
  `python3`)
- [cargo][i1] (build)
- rust (build, edition 2021, version ≥ 1.77)
- [scdoc][i2] (optional, to generate a man page)
- ssh (runtime, OpenSSH ≥ 6.7, for Unix domain socket forwarding)

Rust library dependencies (can be acquired through `cargo fetch --locked`):

- clap: command line parsing
- getrandom: picking random socket names
- log: logging
- nix: safer libc wrapper
- ash: (optional, provides vulkan bindings for dmabuf support)
- libloading: (optional, used by ffmpeg bindings)
- pkg-config: (optional, used to find libraries to link to)

Optional linked dependencies, broken out by feature:

- lz4 compression:
  - liblz4 (≥ 1.7.0)
  - bindgen (build, ≥ 0.70.0)
- zstd compression:
  - libzstd (≥ 0.4.6)
  - bindgen (build, ≥ 0.70.0)
- dmabuf support:
  - vulkan (to support programs using GPU rendering and DMABUFs)
  - vulkan validation layers (runtime, optional, used for tests and with
    --debug flag)
- dmabuf support fallback with gbm:
  - libgbm (mesa, ≥ 11.0.0)
  - bindgen (build, ≥ 0.70.0)
- video encoding/decoding support:
  - dmabuf support
  - ffmpeg (≥ 7.1, needs avcodec/avutil for lossy video encoding)
  - bindgen (build, ≥ 0.70.0)
  - glslc (build, to compile shaders for image format conversion)

Note: in practice, bindgen requires certain C library headers from clang, but
some distributions have not made them a dependency of bindgen. If the build
fails because `limits.h` or `stddef.h` is missing, try installing `clang`.

[i0]: https://mesonbuild.com/
[i1]: https://doc.rust-lang.org/cargo/
[i2]: https://git.sr.ht/~sircmpwn/scdoc
### `waypipe-c`

Originally, Waypipe was developed in C; it was later ported to use Rust. The C
implementation, now producing an executable called `waypipe-c`, has been kept
in the repository for use on older systems which do not have the Rust
version's dependencies installed. `waypipe-c` also includes some features
(like reconnection support) dropped in later versions. There are two ways to
build it: with `meson`,

```
cd /path/to/waypipe/ && cd ..
mkdir build-waypipe
meson --buildtype debugoptimized waypipe build-waypipe -Dbuild_c=true -Dbuild_rs=false
ninja -C build-waypipe install
```

or by running the `./minimal_build.sh` script. In addition to `meson`,
`python`, `liblz4`, `libzstd`, `ssh`, `waypipe-c` requires:

- ffmpeg (≥ 3.1, needs avcodec/avutil/swscale for lossy video encoding)
- libva (for hardware video encoding and decoding)
- libgbm (to support programs using OpenGL via DMABUFs)
- libdrm (same as for libgbm)
- sys/sdt.h (to provide static tracepoints for profiling)
- libwayland-client (for security context support)

## License

`waypipe` is licensed GPLv3-or-later; `waypipe-c` is MIT. In both cases, the
compiled executable is derived from the Wayland protocol files in
`./protocols`, which have their own (permissive) licenses. `waypipe`'s Rust
dependency tree can be shown with `cargo tree`. See git history for a list of
authors.

## Reporting issues

Waypipe is developed at [gitlab.freedesktop.org/mstoeckl/waypipe][r0]; file
bug reports or submit patches here. In general, if a program does not work
properly under Waypipe, it is a bug worth reporting. If possible, before doing
so ensure both computers are using the most recently released version of
Waypipe (or are built from git master).

A workaround that may help for some programs using OpenGL or Vulkan is to run
Waypipe with the `--no-gpu` flag, which may force them to use software
rendering and shared memory buffers. (Please still file a bug.) Some programs
may require specific environment variable settings or command line flags to
run remotely; a few examples are given in the [man page][r1].

Useful information for bug reports includes:

- If a Waypipe process has crashed on either end of the connection, a full
  stack trace, with debug symbols. (In gdb, `bt full`).
- If the program uses OpenGL or Vulkan, the graphics cards and drivers on
  both computers.
- The output of `waypipe --version` on both ends of the connection
- Logs when Waypipe is run with the `--debug` flag, or when the program is
  run with the environment variable setting `WAYLAND_DEBUG=1`.
- Screenshots of any visual glitches.

[r0]: https://gitlab.freedesktop.org/mstoeckl/waypipe/
[r1]: https://gitlab.freedesktop.org/mstoeckl/waypipe/-/blob/master/waypipe.scd

## Technical Limitations

### Security

Waypipe sends Wayland messages and updates to file descriptors (like window
surface content or copy-paste data transfers) over a socket. When these
messages are sent over a network, even if encrypted and with ssh's
(lightweight) timing mitigations, an observer may be able to guess what is
being done (typing, mouse motion, scrolling a page, and more...) from the
sizes and timing of messages alone, even without knowing the precise contents.
Against broken or malicious compositors or applications: Waypipe does not
impose resource limits, and the applications or compositors it forwards may
instruct it to allocate large amounts of memory or do CPU or GPU-intensive
work. Waypipe exposes compression and video encoding libraries which parse
complicated formats and may have potential vulnerabilities; of these the
compression libraries are simpler and better tested. See the man page for more
details.

### Partial protocol processing

Waypipe does not have a full view of the Wayland protocol. It includes a
compiled form of the base protocol and several extension protocols, but is not
able to parse all messages that the programs it connects send. Fortunately, the
Wayland wire protocol is partially self-describing, so Waypipe can parse the
messages it needs (those related to resources shared with file descriptors)
while ignoring the rest. This makes Waypipe partially forward-compatible: if a
future protocol comes out about details (for example, about window positioning)
which do not require that file descriptors be sent, then applications will be
able to use that protocol even with older versions of Waypipe.

The tradeoff to allowing messages that Waypipe cannot parse is that Waypipe can
only make minor modifications to the wire protocol. In particular, adding or
removing any Wayland protocol objects would require changing all messages that
refer to them, including those messages that Waypipe does not parse. This
precludes, for example, global object deduplication tricks that could reduce
startup time for complicated applications.

### Latency

Shared memory buffer updates, including those for the contents of windows, are
tracked by keeping a "mirror" copy of the buffer that represents the view which
the opposing instance of Waypipe has. This way, Waypipe can send only the
regions of the buffer that have changed relative to the remote copy (a toy
sketch of this idea is given at the end of this README). This is more efficient
than resending the entire buffer on every update, which is good for
applications with reasonably static user interfaces (like a text editor or
email client). However, for programs with animations where interaction latency
matters (like games or certain audio tools), major window updates will
unavoidably produce a lag spike. The additional memory cost of keeping mirrors
is moderate.

The ssh ObscureKeystrokeTiming feature may introduce delays to obscure when
input events and responses occur; reducing the delay interval should reduce
latency/improve frame rates at the cost of sending more packets.

### Other

The video encoding option for DMABUFs currently maintains a video stream for
each buffer that is used by a window surface. Since surfaces typically rotate
between a small number of buffers, a video-encoded window will appear to
flicker as it switches rapidly between the underlying buffers, each of whose
video streams has different encoding artifacts.

As of writing, hardware video support with Vulkan is somewhat experimental and
may require that driver-specific environment variables be set.

Since little-endian computers are vastly more common than big-endian, Waypipe
only receives and produces little-endian Wayland protocol messages. For
big-endian machines, run applications under a tool like `wswapendian` to adjust
the protocol endianness. (Having Waypipe do this itself would require that it
embed or load many more Wayland protocol descriptions and restrict clients to
use them; at the moment it is more practical to do the endianness conversion in
a separate program.)
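To make the mirror-based damage tracking described under "Latency" more
concrete, here is a minimal, illustrative sketch of the diff-against-mirror
idea. This is not Waypipe's actual implementation (the real code merges nearby
intervals, compares word-sized chunks, and compresses the result), and the
function name `diff_against_mirror` is invented for this example:

```
/* Toy sketch: find the byte ranges of `current` that differ from `mirror`,
 * update the mirror to match, and return (offset, data) pairs that would be
 * transmitted to the other Waypipe instance instead of the whole buffer. */
fn diff_against_mirror(current: &[u8], mirror: &mut [u8]) -> Vec<(usize, Vec<u8>)> {
    assert_eq!(current.len(), mirror.len());
    let mut changes = Vec::new();
    let mut i = 0;
    while i < current.len() {
        if current[i] == mirror[i] {
            i += 1;
            continue;
        }
        /* Start of a changed region: extend until the buffers agree again */
        let start = i;
        while i < current.len() && current[i] != mirror[i] {
            i += 1;
        }
        /* Record the changed span and bring the mirror up to date */
        mirror[start..i].copy_from_slice(&current[start..i]);
        changes.push((start, current[start..i].to_vec()));
    }
    changes
}
```

A production implementation would not diff byte-by-byte; the principle of
sending diffs against a mirrored copy, then updating the mirror, is the same.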
waypipe-v0.10.5/autoformat.sh000077500000000000000000000006001507071711100161550ustar00rootroot00000000000000#!/bin/sh set -e cargo fmt --all ruff format -q -n protogen.py waypipe-c/test/*.py waypipe-c/protocols/*.py clang-format -style=file --assume-filename=C -i waypipe-c/*.h waypipe-c/*.c waypipe-c/test/*.c waypipe-c/test/*.h clang-format -style=llvm -i shaders/*.glsl meson fmt -i meson.build waypipe-c/meson.build waypipe-c/protocols/meson.build waypipe-c/test/meson.build cd wrap-zstd waypipe-v0.10.5/compile_wrapper.sh000066400000000000000000000012731507071711100171700ustar00rootroot00000000000000#!/bin/sh # This script is run by meson to make cargo build in a specific folder and produce a specific output file, # because --artifact-dir is still nightly-only set -e if [ $# -ne 10 ] ; then echo "Incorrect number of arguments: $#" exit 1 fi # This is a workaround for Rust having no simple and stable compile # time conditional string concatenation; and meson not properly # handling newlines or backslashes in custom targets version="$6 features: lz4: $7 zstd: $8 dmabuf: $9 video: ${10}" env WAYPIPE_VERSION="$version" cargo build --frozen -v --bin waypipe --profile "$1" --manifest-path "$2" --no-default-features --target-dir "$3" --features "$4" cp "$3/$1/waypipe" "$5" waypipe-v0.10.5/cross_testing.sh000077500000000000000000000026101507071711100166650ustar00rootroot00000000000000#!/bin/sh # WARNING: cross may download target-specific images from the internet # WARNING: cargo msrv may install toolchains via rustup set -ex cargo fmt --check cargo test cargo test --no-default-features cargo test --no-default-features --features lz4,dmabuf,test_proto cargo test --no-default-features --features zstd,video,test_proto cargo test --no-default-features --features gbmfallback,test_proto # Tier 1 cross test --target x86_64-unknown-linux-gnu --no-default-features --features dmabuf,test_proto cross test --target aarch64-unknown-linux-gnu --no-default-features --features dmabuf,test_proto # Linux support, 32 bit cross test --target i686-unknown-linux-gnu --no-default-features --features dmabuf,test_proto cross test --target armv7-unknown-linux-gnueabihf --no-default-features --features dmabuf,test_proto # Big-endian representative cross test --target powerpc64-unknown-linux-gnu --no-default-features --features test_proto # FreeBSD support (testing not available, needs full emulation?) 
cross build --target x86_64-unknown-freebsd --no-default-features --features test_proto
cross build --target i686-unknown-freebsd --no-default-features --features test_proto

# Check that the build still works with older Rust versions
cargo msrv verify

# on 64 bit systems with the necessary libraries, cross build to 32 bit
# PKG_CONFIG_ALLOW_CROSS=1 cargo build --target=i686-unknown-linux-gnu
waypipe-v0.10.5/meson.build000066400000000000000000000146271507071711100156130ustar00rootroot00000000000000project(
    'waypipe',
    'c',
    license: ['GPL3+', 'MIT/Expat'],
    meson_version: '>=0.56.0',
    default_options: ['c_std=c11', 'warning_level=3', 'werror=true'],
    version: '0.10.5',
)

# Mention the git commit in the version string, when available
raw_version = '@0@'.format(meson.project_version())
git = find_program('git', native: true, required: false)
if git.found()
    dir_arg = '--git-dir=@0@/.git'.format(meson.project_source_root())
    commit = run_command(
        [git, dir_arg, 'rev-parse', '--verify', '-q', 'HEAD'],
        check: false,
    )
    if commit.returncode() == 0
        raw_version = '@0@ (commit @1@)'.format(
            meson.project_version(),
            commit.stdout().strip(),
        )
    endif
endif
version = '"@0@"'.format(raw_version)

if get_option('build_rs')
    cargo = find_program('cargo', native: true)
    sh = find_program('sh', native: true)
    objcopy = find_program('objcopy', native: true)

    # Because meson and cargo interact badly, option implementation and dependency resolution are deferred to cargo
    # However, to 'autodetect' what features are available, we duplicate some of the build logic
    features = []
    has_lz4 = false
    has_zstd = false
    has_gbm = false
    has_dmabuf = false
    has_video = false

    # compute maximum of with_lz4/with_zstd/with_gbm/with_video features
    max_feature = get_option('with_lz4')
    if (max_feature.auto() and get_option('with_zstd').enabled()) or (not max_feature.enabled() and get_option(
        'with_zstd',
    ).auto())
        max_feature = get_option('with_zstd')
    endif
    if (max_feature.auto() and get_option('with_gbm').enabled()) or (not max_feature.enabled() and get_option(
        'with_gbm',
    ).auto())
        max_feature = get_option('with_gbm')
    endif
    if (max_feature.auto() and get_option('with_video').enabled()) or (not max_feature.enabled() and get_option(
        'with_video',
    ).auto())
        max_feature = get_option('with_video')
    endif

    bindgen = find_program(
        'bindgen',
        version: '>= 0.66.0',
        native: true,
        required: max_feature,
    )

    liblz4 = dependency(
        'liblz4',
        version: '>=1.7.0',
        required: get_option('with_lz4'),
    )
    if bindgen.found() and liblz4.found()
        has_lz4 = true
        features += ['lz4']
    endif

    libzstd = dependency(
        'libzstd',
        version: '>=0.4.6',
        required: get_option('with_zstd'),
    )
    if bindgen.found() and libzstd.found()
        has_zstd = true
        features += ['zstd']
    endif

    libvulkan = dependency('vulkan', required: get_option('with_dmabuf'))
    if libvulkan.found()
        # Note: 'ash' is the vulkan wrapper used, and may require other libraries/programs to be present
        has_dmabuf = true
        features += ['dmabuf']
    endif

    libgbm = dependency(
        'gbm',
        version: '>=11.0.0', # gbm is part of Mesa and has a matching version
        required: get_option('with_gbm'),
    )
    if bindgen.found() and libgbm.found()
        has_gbm = true
        features += ['gbmfallback']
    endif

    glslc = find_program(
        'glslc',
        native: true,
        required: get_option('with_video'),
    )
    libavutil = dependency(
        'libavutil',
        version: '>=58.11.100',
        required: get_option('with_video'),
    )
    libavcodec = dependency(
        'libavcodec',
        version: '>=59.0.100',
        required: get_option('with_video'),
    )
    if bindgen.found() and libavcodec.found() and libavutil.found() and glslc.found()
        has_video = true
        features += ['video']
    endif

    target_dir =
meson.project_build_root() / 'target' output = target_dir / 'release/waypipe' manifest = meson.project_source_root() / 'Cargo.toml' # Cargo has no direct analogue for 'g' or 'plain' optimization levels, # so the choices for them are somewhat arbitrary if get_option('optimization') == '0' or get_option('optimization') == 'g' profile = 'meson-0' elif get_option('optimization') == '1' or get_option('optimization') == 'plain' profile = 'meson-1' elif get_option('optimization') == '3' profile = 'meson-3' elif get_option('optimization') == 's' profile = 'meson-s' else profile = 'meson-2' endif if get_option('debug') == false profile += '-strip' endif message('Cargo will be run with profile: @0@'.format(profile)) binary = custom_target( 'waypipe', build_always_stale: true, output: 'waypipe', command: [ sh, '@SOURCE_ROOT@' / 'compile_wrapper.sh', profile, manifest, target_dir, ','.join(features), '@OUTPUT@', raw_version, has_lz4.to_string('true', 'false'), has_zstd.to_string('true', 'false'), has_dmabuf.to_string('true', 'false'), has_video.to_string('true', 'false'), ], install: true, install_dir: get_option('bindir'), ) test( 'running cargo test', cargo, args: [ 'test', '--frozen', '--bin', 'waypipe', '--test', 'proto', '-v', '--manifest-path', manifest, '--no-default-features', '--target-dir', target_dir, '--features', ','.join(features + ['test_proto']), ], timeout: 200, priority: 100, ) test( 'running (shortened) waypipe bench', sh, args: [ '-c', ' '.join([binary.full_path(), '--test-fast-bench', 'bench']), ], depends: binary, ) endif if get_option('build_c') subdir('waypipe-c') endif scdoc = dependency( 'scdoc', version: '>=1.9.4', native: true, required: get_option('man-pages'), ) if scdoc.found() scdoc_prog = find_program( scdoc.get_variable(pkgconfig: 'scdoc'), native: true, ) sh = find_program('sh', native: true) mandir = get_option('mandir') custom_target( 'waypipe.1', input: 'waypipe.scd', output: 'waypipe.1', command: [ sh, '-c', '@0@ < @INPUT@ > @1@'.format(scdoc_prog.full_path(), 'waypipe.1'), ], install: true, install_dir: '@0@/man1'.format(mandir), ) endif waypipe-v0.10.5/meson_options.txt000066400000000000000000000043501507071711100171000ustar00rootroot00000000000000# Control what to build option('man-pages', type: 'feature', value: 'auto', description: 'Generate and install man pages') option('build_c', description: 'Build old version of waypipe (waypipe-c)', type: 'boolean', value: false) option('build_rs', description: 'Build current version of waypipe', type: 'boolean', value: true) option('tests', description: 'Compile Waypipe tests', type: 'boolean', value: true) # Options that apply to waypipe and waypipe-c option('with_video', type : 'feature', value : 'auto', description : 'Link with ffmpeg libraries and provide a command line option to display all buffers using a video stream') option('with_dmabuf', type : 'feature', value : 'auto', description : 'Support DMABUFs, the file descriptors used to exchange data for e.g. 
OpenGL applications') option('with_lz4', type : 'feature', value : 'auto', description : 'Support LZ4 as a compression mechanism') option('with_zstd', type : 'feature', value : 'auto', description : 'Support ZStandard as a compression mechanism') # Options for: waypipe option('with_gbm', type : 'feature', value : 'auto', description : 'Use libgbm as a limited fallback path to support DMABUFs') # Options for: waypipe-c option('with_vaapi', type : 'feature', value : 'auto', description : 'C only: Link with libva and use VAAPI to perform hardware video output color space conversions on GPU') option('with_systemtap', type: 'boolean', value: true, description: 'C only: Enable tracing using sdt and provide static tracepoints for profiling') option('with_secctx', type: 'feature', value: 'auto', description: 'C only: Enable support for the Wayland security context protocol') ## It is recommended to keep these on; Waypipe will automatically select the highest available instruction set at runtime option('with_avx512f', type: 'boolean', value: true, description: 'C only: Compile with support for AVX512f SIMD instructions') option('with_avx2', type: 'boolean', value: true, description: 'C only: Compile with support for AVX2 SIMD instructions') option('with_sse3', type: 'boolean', value: true, description: 'C only: Compile with support for SSE3 SIMD instructions') option('with_neon_opts', type: 'boolean', value: true, description: 'C only: Compile with support for ARM64 neon instructions') waypipe-v0.10.5/minimal_build.sh000077500000000000000000000020311507071711100166010ustar00rootroot00000000000000#!/bin/sh set -e echo "This script is a backup build system in case meson/ninja are unavailable." echo "No optional features or optimizations are included. Waypipe will be slow." echo "Requirements: python3, gcc, libc+pthreads" echo "Enter to continue, interrupt to exit." read unused mkdir -p build-minimal cd build-minimal root=../waypipe-c echo "Generating code..." python3 $root/protocols/symgen.py data $root/protocols/function_list.txt protocols.c \ ../protocols/*.xml python3 $root/protocols/symgen.py header $root/protocols/function_list.txt protocols.h \ ../protocols/*.xml echo '#define WAYPIPE_VERSION "minimal"' > config-waypipe.h echo "Compiling..." gcc -D_DEFAULT_SOURCE -Os -I. -I$root/protocols/ -lpthread -o waypipe-c protocols.c \ $root/bench.c $root/client.c $root/dmabuf.c $root/handlers.c \ $root/interval.c $root/kernel.c $root/mainloop.c $root/parsing.c \ $root/platform.c $root/server.c $root/shadow.c $root/util.c \ $root/video.c $root/waypipe.c cd .. echo "Done. See ./build-minimal/waypipe-c" waypipe-v0.10.5/protocols/000077500000000000000000000000001507071711100154655ustar00rootroot00000000000000waypipe-v0.10.5/protocols/color-management-v1.xml000066400000000000000000002221271507071711100217710ustar00rootroot00000000000000 Copyright 2019 Sebastian Wick Copyright 2019 Erwin Burema Copyright 2020 AMD Copyright 2020-2024 Collabora, Ltd. Copyright 2024 Xaver Hugl Copyright 2022-2025 Red Hat, Inc. 
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. The aim of the color management extension is to allow clients to know the color properties of outputs, and to tell the compositor about the color properties of their content on surfaces. Doing this enables a compositor to perform automatic color management of content for different outputs according to how content is intended to look like. The color properties are represented as an image description object which is immutable after it has been created. A wl_output always has an associated image description that clients can observe. A wl_surface always has an associated preferred image description as a hint chosen by the compositor that clients can also observe. Clients can set an image description on a wl_surface to denote the color characteristics of the surface contents. An image description includes SDR and HDR colorimetry and encoding, HDR metadata, and viewing environment parameters. An image description does not include the properties set through color-representation extension. It is expected that the color-representation extension is used in conjunction with the color management extension when necessary, particularly with the YUV family of pixel formats. Recommendation ITU-T H.273 "Coding-independent code points for video signal type identification" shall be referred to as simply H.273 here. The color-and-hdr repository (https://gitlab.freedesktop.org/pq/color-and-hdr) contains background information on the protocol design and legacy color management. It also contains a glossary, learning resources for digital color, tools, samples and more. The terminology used in this protocol is based on common color science and color encoding terminology where possible. The glossary in the color-and-hdr repository shall be the authority on the definition of terms in this protocol. Warning! The protocol described in this file is currently in the testing phase. Backward compatible changes may be added together with the corresponding interface version bump. Backward incompatible changes can only be done by creating a new major version of the extension. A singleton global interface used for getting color management extensions for wl_surface and wl_output objects, and for creating client defined image description objects. The extension interfaces allow getting the image description of outputs and setting the image description of surfaces. Compositors should never remove this global. Destroy the wp_color_manager_v1 object. 
This does not affect any other objects in any way. See the ICC.1:2022 specification from the International Color Consortium for more details about rendering intents. The principles of ICC defined rendering intents apply with all types of image descriptions, not only those with ICC file profiles. Compositors must support the perceptual rendering intent. Other rendering intents are optional. The compositor supports set_mastering_display_primaries request with a target color volume fully contained inside the primary color volume. The compositor additionally supports target color volumes that extend outside of the primary color volume. This can only be advertised if feature set_mastering_display_primaries is supported as well. Named color primaries used to encode well-known sets of primaries. H.273 is the authority, when it comes to the exact values of primaries and authoritative specifications, where an equivalent code point exists. A value of 0 is invalid and will never be present in the list of enums. Descriptions do list the specifications for convenience. Color primaries as defined by - Rec. ITU-R BT.709-6 - Rec. ITU-R BT.1361-0 conventional colour gamut system and extended colour gamut system (historical) - IEC 61966-2-1 sRGB or sYCC - IEC 61966-2-4 - Society of Motion Picture and Television Engineers (SMPTE) RP 177 (1993) Annex B Equivalent to H.273 ColourPrimaries code point 1. Color primaries as defined by - Rec. ITU-R BT.470-6 System M (historical) - United States National Television System Committee 1953 Recommendation for transmission standards for color television - United States Federal Communications Commission (2003) Title 47 Code of Federal Regulations 73.682 (a)(20) Equivalent to H.273 ColourPrimaries code point 4. Color primaries as defined by - Rec. ITU-R BT.470-6 System B, G (historical) - Rec. ITU-R BT.601-7 625 - Rec. ITU-R BT.1358-0 625 (historical) - Rec. ITU-R BT.1700-0 625 PAL and 625 SECAM Equivalent to H.273 ColourPrimaries code point 5. Color primaries as defined by - Rec. ITU-R BT.601-7 525 - Rec. ITU-R BT.1358-1 525 or 625 (historical) - Rec. ITU-R BT.1700-0 NTSC - SMPTE 170M (2004) - SMPTE 240M (1999) (historical) Equivalent to H.273 ColourPrimaries code point 6 and 7. Color primaries as defined by H.273 for generic film. Equivalent to H.273 ColourPrimaries code point 8. Color primaries as defined by - Rec. ITU-R BT.2020-2 - Rec. ITU-R BT.2100-0 Equivalent to H.273 ColourPrimaries code point 9. Color primaries as defined as the maximum of the CIE 1931 XYZ color space by - SMPTE ST 428-1 - (CIE 1931 XYZ as in ISO 11664-1) Equivalent to H.273 ColourPrimaries code point 10. Color primaries as defined by Digital Cinema System and published in SMPTE RP 431-2 (2011). Equivalent to H.273 ColourPrimaries code point 11. Color primaries as defined by Digital Cinema System and published in SMPTE EG 432-1 (2010). Equivalent to H.273 ColourPrimaries code point 12. Color primaries as defined by Adobe as "Adobe RGB" and later published by ISO 12640-4 (2011). Named transfer functions used to represent well-known transfer characteristics. H.273 is the authority, when it comes to the exact formulas and authoritative specifications, where an equivalent code point exists. A value of 0 is invalid and will never be present in the list of enums. Descriptions do list the specifications for convenience. Rec. ITU-R BT.1886 is the display transfer characteristic assumed by - Rec. ITU-R BT.601-7 525 and 625 - Rec. ITU-R BT.709-6 - Rec. 
ITU-R BT.2020-2 These recommendations are referred to by H.273 TransferCharacteristics code points 1, 6, 14, and 15, which are all equivalent. This TF implies these default luminances from Rec. ITU-R BT.2035: - primary color volume minimum: 0.01 cd/m² - primary color volume maximum: 100 cd/m² - reference white: 100 cd/m² Transfer characteristics as defined by - Rec. ITU-R BT.470-6 System M (historical) - United States National Television System Committee 1953 Recommendation for transmission standards for color television - United States Federal Communications Commission (2003) Title 47 Code of Federal Regulations 73.682 (a) (20) - Rec. ITU-R BT.1700-0 625 PAL and 625 SECAM Equivalent to H.273 TransferCharacteristics code point 4. Transfer characteristics as defined by - Rec. ITU-R BT.470-6 System B, G (historical) Equivalent to H.273 TransferCharacteristics code point 5. Transfer characteristics as defined by - SMPTE ST 240 (1999) Equivalent to H.273 TransferCharacteristics code point 7. Linear transfer function defined over all real numbers. Normalised electrical values are equal to the normalised optical values. The differences to H.273 TransferCharacteristics code point 8 are the definition over all real numbers. Logarithmic transfer characteristic (100:1 range). Equivalent to H.273 TransferCharacteristics code point 9. Logarithmic transfer characteristic (100 * Sqrt(10) : 1 range). Equivalent to H.273 TransferCharacteristics code point 10. Transfer characteristics as defined by - IEC 61966-2-4 Equivalent to H.273 TransferCharacteristics code point 11. Transfer characteristics as defined by - IEC 61966-2-1 sRGB Equivalent to H.273 TransferCharacteristics code point 13 with MatrixCoefficients set to 0. Transfer characteristics as defined by - IEC 61966-2-1 sYCC Equivalent to H.273 TransferCharacteristics code point 13 with MatrixCoefficients set to anything but 0. Transfer characteristics as defined by - SMPTE ST 2084 (2014) for 10-, 12-, 14- and 16-bit systems - Rec. ITU-R BT.2100-2 perceptual quantization (PQ) system Equivalent to H.273 TransferCharacteristics code point 16. This TF implies these default luminances - primary color volume minimum: 0.005 cd/m² - primary color volume maximum: 10000 cd/m² - reference white: 203 cd/m² The difference between the primary color volume minimum and maximum must be approximately 10000 cd/m² as that is the swing of the EOTF defined by ST 2084 and BT.2100. The default value for the reference white is a protocol addition: it is suggested by Report ITU-R BT.2408-7 and is not part of ST 2084 or BT.2100. Transfer characteristics as defined by - SMPTE ST 428-1 (2019) Equivalent to H.273 TransferCharacteristics code point 17. Transfer characteristics as defined by - ARIB STD-B67 (2015) - Rec. ITU-R BT.2100-2 hybrid log-gamma (HLG) system Equivalent to H.273 TransferCharacteristics code point 18. This TF implies these default luminances - primary color volume minimum: 0.005 cd/m² - primary color volume maximum: 1000 cd/m² - reference white: 203 cd/m² HLG is a relative display-referred signal with a specified non-linear mapping to the display peak luminance (the HLG OOTF). All absolute luminance values used here for HLG assume a 1000 cd/m² peak display. The default value for the reference white is a protocol addition: it is suggested by Report ITU-R BT.2408-7 and is not part of ARIB STD-B67 or BT.2100. This creates a new wp_color_management_output_v1 object for the given wl_output. See the wp_color_management_output_v1 interface for more details.
If a wp_color_management_surface_v1 object already exists for the given wl_surface, the protocol error surface_exists is raised. This creates a new color wp_color_management_surface_v1 object for the given wl_surface. See the wp_color_management_surface_v1 interface for more details. This creates a new color wp_color_management_surface_feedback_v1 object for the given wl_surface. See the wp_color_management_surface_feedback_v1 interface for more details. Makes a new ICC-based image description creator object with all properties initially unset. The client can then use the object's interface to define all the required properties for an image description and finally create a wp_image_description_v1 object. This request can be used when the compositor advertises wp_color_manager_v1.feature.icc_v2_v4. Otherwise this request raises the protocol error unsupported_feature. Makes a new parametric image description creator object with all properties initially unset. The client can then use the object's interface to define all the required properties for an image description and finally create a wp_image_description_v1 object. This request can be used when the compositor advertises wp_color_manager_v1.feature.parametric. Otherwise this request raises the protocol error unsupported_feature. This creates a pre-defined image description for the so-called Windows-scRGB stimulus encoding. This comes from the Windows 10 handling of its own definition of an scRGB color space for an HDR screen driven in BT.2100/PQ signalling mode. Windows-scRGB uses sRGB (BT.709) color primaries and white point. The transfer characteristic is extended linear. The nominal color channel value range is extended, meaning it includes negative and greater than 1.0 values. Negative values are used to escape the sRGB color gamut boundaries. To make use of the extended range, the client needs to use a pixel format that can represent those values, e.g. floating-point 16 bits per channel. Nominal color value R=G=B=0.0 corresponds to BT.2100/PQ system 0 cd/m², and R=G=B=1.0 corresponds to BT.2100/PQ system 80 cd/m². The maximum is R=G=B=125.0 corresponding to 10k cd/m². Windows-scRGB is displayed by Windows 10 by converting it to BT.2100/PQ, maintaining the CIE 1931 chromaticity and mapping the luminance as above. No adjustment is made to the signal to account for the viewing conditions. The reference white level of Windows-scRGB is unknown. If a reference white level must be assumed for compositor processing, it should be R=G=B=2.5375 corresponding to 203 cd/m² of Report ITU-R BT.2408-7. The target color volume of Windows-scRGB is unknown. The color gamut may be anything between sRGB and BT.2100. Note: EGL_EXT_gl_colorspace_scrgb_linear definition differs from Windows-scRGB by using R=G=B=1.0 as the reference white level, while Windows-scRGB reference white level is unknown or varies. However, it seems probable that Windows implements both EGL_EXT_gl_colorspace_scrgb_linear and Vulkan VK_COLOR_SPACE_EXTENDED_SRGB_LINEAR_EXT as Windows-scRGB. This request can be used when the compositor advertises wp_color_manager_v1.feature.windows_scrgb. Otherwise this request raises the protocol error unsupported_feature. The resulting image description object does not allow get_information request. The wp_image_description_v1.ready event shall be sent. When this object is created, it shall immediately send this event once for each rendering intent the compositor supports. 
When this object is created, it shall immediately send this event once for each compositor supported feature listed in the enumeration. When this object is created, it shall immediately send this event once for each named transfer function the compositor supports with the parametric image description creator. When this object is created, it shall immediately send this event once for each named set of primaries the compositor supports with the parametric image description creator. This event is sent when all supported rendering intents, features, transfer functions and named primaries have been sent. A wp_color_management_output_v1 describes the color properties of an output. The wp_color_management_output_v1 is associated with the wl_output global underlying the wl_output object. Therefore the client destroying the wl_output object has no impact, but the compositor removing the output global makes the wp_color_management_output_v1 object inert. Destroy the color wp_color_management_output_v1 object. This does not affect any remaining protocol objects. This event is sent whenever the image description of the output changed, followed by one wl_output.done event common to output events across all extensions. If the client wants to use the updated image description, it needs to do get_image_description again, because image description objects are immutable. This creates a new wp_image_description_v1 object for the current image description of the output. There always is exactly one image description active for an output so the client should destroy the image description created by earlier invocations of this request. This request is usually sent as a reaction to the image_description_changed event or when creating a wp_color_management_output_v1 object. The image description of an output represents the color encoding the output expects. There might be performance and power advantages, as well as improved color reproduction, if a content update matches the image description of the output it is being shown on. If a content update is shown on any other output than the one it matches the image description of, then the color reproduction on those outputs might be considerably worse. The created wp_image_description_v1 object preserves the image description of the output from the time the object was created. The resulting image description object allows get_information request. If this protocol object is inert, the resulting image description object shall immediately deliver the wp_image_description_v1.failed event with the no_output cause. If the interface version is inadequate for the output's image description, meaning that the client does not support all the events needed to deliver the crucial information, the resulting image description object shall immediately deliver the wp_image_description_v1.failed event with the low_version cause. Otherwise the object shall immediately deliver the ready event. A wp_color_management_surface_v1 allows the client to set the color space and HDR properties of a surface. If the wl_surface associated with the wp_color_management_surface_v1 is destroyed, the wp_color_management_surface_v1 object becomes inert. Destroy the wp_color_management_surface_v1 object and do the same as unset_image_description. If this protocol object is inert, the protocol error inert is raised. Set the image description of the underlying surface. The image description and rendering intent are double-buffered state, see wl_surface.commit. 
It is the client's responsibility to understand the image description it sets on a surface, and to provide content that matches that image description. Compositors might convert images to match their own or any other image descriptions. Image descriptions which are not ready (see wp_image_description_v1) are forbidden in this request, and in such case the protocol error image_description is raised. All image descriptions which are ready (see wp_image_description_v1) are allowed and must always be accepted by the compositor. A rendering intent provides the client's preference on how content colors should be mapped to each output. The render_intent value must be one advertised by the compositor with wp_color_manager_v1.render_intent event, otherwise the protocol error render_intent is raised. When an image description is set on a surface, the Transfer Characteristics of the image description defines the valid range of the nominal (real-valued) color channel values. The processing of out-of-range color channel values is undefined, but compositors are recommended to clamp the values to the valid range when possible. By default, a surface does not have an associated image description nor a rendering intent. The handling of color on such surfaces is compositor implementation defined. Compositors should handle such surfaces as sRGB, but may handle them differently if they have specific requirements. Setting the image description has copy semantics; after this request, the image description can be immediately destroyed without affecting the pending state of the surface. If this protocol object is inert, the protocol error inert is raised. This request removes any image description from the surface. See set_image_description for how a compositor handles a surface without an image description. This is double-buffered state, see wl_surface.commit. A wp_color_management_surface_feedback_v1 allows the client to get the preferred image description of a surface. If the wl_surface associated with this object is destroyed, the wp_color_management_surface_feedback_v1 object becomes inert. Destroy the wp_color_management_surface_feedback_v1 object. The preferred image description is the one which likely has the most performance and/or quality benefits for the compositor if used by the client for its wl_surface contents. This event is sent whenever the compositor changes the wl_surface's preferred image description. This event sends the identity of the new preferred state as the argument, so clients who are aware of the image description already can reuse it. Otherwise, if the client wants to know what the preferred image description is, it shall use the get_preferred request. The preferred image description is not automatically used for anything. It is only a hint, and clients may set any valid image description with set_image_description, but there might be performance and color accuracy improvements by providing the wl_surface contents in the preferred image description. Therefore clients that can, should render according to the preferred image description. If this protocol object is inert, the protocol error inert is raised. The preferred image description represents the compositor's preferred color encoding for this wl_surface at the current time. There might be performance and power advantages, as well as improved color reproduction, if the image description of a content update matches the preferred image description.
This creates a new wp_image_description_v1 object for the currently preferred image description for the wl_surface. The client should stop using and destroy the image descriptions created by earlier invocations of this request for the associated wl_surface. This request is usually sent as a reaction to the preferred_changed event or when creating a wp_color_management_surface_feedback_v1 object if the client is capable of adapting to image descriptions. The created wp_image_description_v1 object preserves the preferred image description of the wl_surface from the time the object was created. The resulting image description object allows get_information request. If the image description is parametric, the client should set it on its wl_surface only if the image description is an exact match with the client content. Particularly if everything else matches, but the target color volume is greater than what the client needs, the client should create its own parametric image description with its exact parameters. If the interface version is inadequate for the preferred image description, meaning that the client does not support all the events needed to deliver the crucial information, the resulting image description object shall immediately deliver the wp_image_description_v1.failed event with the low_version cause, otherwise the object shall immediately deliver the ready event. The same description as for get_preferred applies, except the returned image description is guaranteed to be parametric. This is meant for clients that can only deal with parametric image descriptions. If the compositor doesn't support parametric image descriptions, the unsupported_feature error is emitted. This type of object is used for collecting all the information required to create a wp_image_description_v1 object from an ICC file. A complete set of required parameters consists of these properties: - ICC file Each required property must be set exactly once if the client is to create an image description. The set requests verify that a property was not already set. The create request verifies that all required properties are set. There may be several alternative requests for setting each property, and in that case the client must choose one of them. Once all properties have been set, the create request must be used to create the image description object, destroying the creator in the process. Create an image description object based on the ICC information previously set on this object. A compositor must parse the ICC data in some undefined but finite amount of time. The completeness of the parameter set is verified. If the set is not complete, the protocol error incomplete_set is raised. For the definition of a complete set, see the description of this interface. If the particular combination of the information is not supported by the compositor, the resulting image description object shall immediately deliver the wp_image_description_v1.failed event with the 'unsupported' cause. If a valid image description was created from the information, the wp_image_description_v1.ready event will eventually be sent instead. This request destroys the wp_image_description_creator_icc_v1 object. The resulting image description object does not allow get_information request. Sets the ICC profile file to be used as the basis of the image description. The data shall be found through the given fd at the given offset, having the given length. The fd must be seekable and readable.
Violating these requirements raises the bad_fd protocol error. If reading the data fails due to an error independent of the client, the compositor shall send the wp_image_description_v1.failed event on the created wp_image_description_v1 with the 'operating_system' cause. The maximum size of the ICC profile is 32 MB. If length is greater than that or zero, the protocol error bad_size is raised. If offset + length exceeds the file size, the protocol error out_of_file is raised. A compositor may read the file at any time starting from this request and only until whichever happens first: - If create request was issued, the wp_image_description_v1 object delivers either failed or ready event; or - if create request was not issued, this wp_image_description_creator_icc_v1 object is destroyed. A compositor shall not modify the contents of the file, and the fd may be sealed for writes and size changes. The client must ensure to its best ability that the data does not change while the compositor is reading it. The data must represent a valid ICC profile. The ICC profile version must be 2 or 4, it must be a 3 channel profile and the class must be Display or ColorSpace. Violating these requirements will not result in a protocol error, but will eventually send the wp_image_description_v1.failed event on the created wp_image_description_v1 with the 'unsupported' cause. See the International Color Consortium specification ICC.1:2022 for more details about ICC profiles. If ICC file has already been set on this object, the protocol error already_set is raised. This type of object is used for collecting all the parameters required to create a wp_image_description_v1 object. A complete set of required parameters consists of these properties: - transfer characteristic function (tf) - chromaticities of primaries and white point (primary color volume) The following properties are optional and have a well-defined default if not explicitly set: - primary color volume luminance range - reference white luminance level - mastering display primaries and white point (target color volume) - mastering luminance range The following properties are optional and will be ignored if not explicitly set: - maximum content light level - maximum frame-average light level Each required property must be set exactly once if the client is to create an image description. The set requests verify that a property was not already set. The create request verifies that all required properties are set. There may be several alternative requests for setting each property, and in that case the client must choose one of them. Once all properties have been set, the create request must be used to create the image description object, destroying the creator in the process. Create an image description object based on the parameters previously set on this object. The completeness of the parameter set is verified. If the set is not complete, the protocol error incomplete_set is raised. For the definition of a complete set, see the description of this interface. The protocol error invalid_luminance is raised if any of the following requirements is not met: - When max_cll is set, it must be greater than min L and less or equal to max L of the mastering luminance range. - When max_fall is set, it must be greater than min L and less or equal to max L of the mastering luminance range. - When both max_cll and max_fall are set, max_fall must be less or equal to max_cll. 
If the particular combination of the parameter set is not supported by the compositor, the resulting image description object shall immediately deliver the wp_image_description_v1.failed event with the 'unsupported' cause. If a valid image description was created from the parameter set, the wp_image_description_v1.ready event will eventually be sent instead. This request destroys the wp_image_description_creator_params_v1 object. The resulting image description object does not allow get_information request. Sets the transfer characteristic using explicitly enumerated named functions. When the resulting image description is attached to an image, the content should be encoded and decoded according to the industry standard practices for the transfer characteristic. Only names advertised with wp_color_manager_v1 event supported_tf_named are allowed. Other values shall raise the protocol error invalid_tf. If transfer characteristic has already been set on this object, the protocol error already_set is raised. Sets the color component transfer characteristic to a power curve with the given exponent. Negative values are handled by mirroring the positive half of the curve through the origin. The valid domain and range of the curve are all finite real numbers. This curve represents the conversion from electrical to optical color channel values. When the resulting image description is attached to an image, the content should be encoded with the inverse of the power curve. The curve exponent shall be multiplied by 10000 to get the argument eexp value to carry the precision of 4 decimals. The curve exponent must be at least 1.0 and at most 10.0. Otherwise the protocol error invalid_tf is raised. If transfer characteristic has already been set on this object, the protocol error already_set is raised. This request can be used when the compositor advertises wp_color_manager_v1.feature.set_tf_power. Otherwise this request raises the protocol error unsupported_feature. Sets the color primaries and white point using explicitly named sets. This describes the primary color volume which is the basis for color value encoding. Only names advertised with wp_color_manager_v1 event supported_primaries_named are allowed. Other values shall raise the protocol error invalid_primaries_named. If primaries have already been set on this object, the protocol error already_set is raised. Sets the color primaries and white point using CIE 1931 xy chromaticity coordinates. This describes the primary color volume which is the basis for color value encoding. Each coordinate value is multiplied by 1 million to get the argument value to carry precision of 6 decimals. If primaries have already been set on this object, the protocol error already_set is raised. This request can be used if the compositor advertises wp_color_manager_v1.feature.set_primaries. Otherwise this request raises the protocol error unsupported_feature. Sets the primary color volume luminance range and the reference white luminance level. These values include the minimum display emission and ambient flare luminances, assumed to be optically additive and have the chromaticity of the primary color volume white point. The default luminances from https://www.color.org/chardata/rgb/srgb.xalter are - primary color volume minimum: 0.2 cd/m² - primary color volume maximum: 80 cd/m² - reference white: 80 cd/m² Setting a named transfer characteristic can imply other default luminances. The default luminances get overwritten when this request is used. 
With transfer_function.st2084_pq the given 'max_lum' value is ignored, and 'max_lum' is taken as 'min_lum' + 10000 cd/m². 'min_lum' and 'max_lum' specify the minimum and maximum luminances of the primary color volume as reproduced by the targeted display. 'reference_lum' specifies the luminance of the reference white as reproduced by the targeted display, and reflects the targeted viewing environment. Compositors should make sure that all content is anchored, meaning that an input signal level of 'reference_lum' on one image description and another input signal level of 'reference_lum' on another image description should produce the same output level, even though the 'reference_lum' on both image representations can be different. 'reference_lum' may be higher than 'max_lum'. In that case reaching the reference white output level in image content requires the 'extended_target_volume' feature support. If 'max_lum' or 'reference_lum' are less than or equal to 'min_lum', the protocol error invalid_luminance is raised. The minimum luminance is multiplied by 10000 to get the argument 'min_lum' value and carries precision of 4 decimals. The maximum luminance and reference white luminance values are unscaled. If the primary color volume luminance range and the reference white luminance level have already been set on this object, the protocol error already_set is raised. This request can be used if the compositor advertises wp_color_manager_v1.feature.set_luminances. Otherwise this request raises the protocol error unsupported_feature. Provides the color primaries and white point of the mastering display using CIE 1931 xy chromaticity coordinates. This is compatible with the SMPTE ST 2086 definition of HDR static metadata. The mastering display primaries and mastering display luminances define the target color volume. If mastering display primaries are not explicitly set, the target color volume is assumed to have the same primaries as the primary color volume. The target color volume is defined by all tristimulus values between 0.0 and 1.0 (inclusive) of the color space defined by the given mastering display primaries and white point. The colorimetry is identical between the container color space and the mastering display color space, including that no chromatic adaptation is applied even if the white points differ. The target color volume can exceed the primary color volume to allow for a greater color volume with an existing color space definition (for example scRGB). It can be smaller than the primary color volume to minimize gamut and tone mapping distances for big color spaces (HDR metadata). To make use of the entire target color volume a suitable pixel format has to be chosen (e.g. floating point to exceed the primary color volume, or abusing limited quantization range as with xvYCC). Each coordinate value is multiplied by 1 million to get the argument value to carry precision of 6 decimals. If mastering display primaries have already been set on this object, the protocol error already_set is raised. This request can be used if the compositor advertises wp_color_manager_v1.feature.set_mastering_display_primaries. Otherwise this request raises the protocol error unsupported_feature. The advertisement implies support only for target color volumes fully contained within the primary color volume. If a compositor additionally supports target color volume exceeding the primary color volume, it must advertise wp_color_manager_v1.feature.extended_target_volume. 
If a client uses target color volume exceeding the primary color volume and the compositor does not support it, the result is implementation defined. Compositors are recommended to detect this case and fail the image description gracefully, but it may as well result in color artifacts. Sets the luminance range that was used during the content mastering process as the minimum and maximum absolute luminance L. These values include the minimum display emission and ambient flare luminances, assumed to be optically additive and have the chromaticity of the primary color volume white point. This should be compatible with the SMPTE ST 2086 definition of HDR static metadata. The mastering display primaries and mastering display luminances define the target color volume. If mastering luminances are not explicitly set, the target color volume is assumed to have the same min and max luminances as the primary color volume. If max L is less than or equal to min L, the protocol error invalid_luminance is raised. Min L value is multiplied by 10000 to get the argument min_lum value and carry precision of 4 decimals. Max L value is unscaled for max_lum. This request can be used if the compositor advertises wp_color_manager_v1.feature.set_mastering_display_primaries. Otherwise this request raises the protocol error unsupported_feature. The advertisement implies support only for target color volumes fully contained within the primary color volume. If a compositor additionally supports target color volume exceeding the primary color volume, it must advertise wp_color_manager_v1.feature.extended_target_volume. If a client uses target color volume exceeding the primary color volume and the compositor does not support it, the result is implementation defined. Compositors are recommended to detect this case and fail the image description gracefully, but it may as well result in color artifacts. Sets the maximum content light level (max_cll) as defined by CTA-861-H. max_cll is undefined by default. Sets the maximum frame-average light level (max_fall) as defined by CTA-861-H. max_fall is undefined by default. An image description carries information about the color encoding used on a surface when attached to a wl_surface via wp_color_management_surface_v1.set_image_description. A compositor can use this information to decode pixel values into colorimetrically meaningful quantities. Note, that the wp_image_description_v1 object is not ready to be used immediately after creation. The object eventually delivers either the 'ready' or the 'failed' event, specified in all requests creating it. The object is deemed "ready" after receiving the 'ready' event. An object which is not ready is illegal to use, it can only be destroyed. Any other request in this interface shall result in the 'not_ready' protocol error. Attempts to use an object which is not ready through other interfaces shall raise protocol errors defined there. Once created and regardless of how it was created, a wp_image_description_v1 object always refers to one fixed image description. It cannot change after creation. Destroy this object. It is safe to destroy an object which is not ready. Destroying a wp_image_description_v1 object has no side-effects, not even if a wp_color_management_surface_v1.set_image_description has not yet been followed by a wl_surface.commit. If creating a wp_image_description_v1 object fails for a reason that is not defined as a protocol error, this event is sent. 
The requests that create image description objects define whether and when this can occur. Only such creation requests can trigger this event. This event cannot be triggered after the image description was successfully formed. Once this event has been sent, the wp_image_description_v1 object will never become ready and it can only be destroyed. Once this event has been sent, the wp_image_description_v1 object is deemed "ready". Ready objects can be used to send requests and can be used through other interfaces. Every ready wp_image_description_v1 protocol object refers to an underlying image description record in the compositor. Multiple protocol objects may end up referring to the same record. Clients may identify these "copies" by comparing their id numbers: if the numbers from two protocol objects are identical, the protocol objects refer to the same image description record. Two different image description records cannot have the same id number simultaneously. The id number does not change during the lifetime of the image description record. The id number is valid only as long as the protocol object is alive. If all protocol objects referring to the same image description record are destroyed, the id number may be recycled for a different image description record. Image description id number is not a protocol object id. Zero is reserved as an invalid id number. It shall not be possible for a client to refer to an image description by its id number in protocol. The id numbers might not be portable between Wayland connections. A compositor shall not send an invalid id number. This identity allows clients to de-duplicate image description records and avoid get_information request if they already have the image description information. Creates a wp_image_description_info_v1 object which delivers the information that makes up the image description. Not all image description protocol objects allow get_information request. Whether it is allowed or not is defined by the request that created the object. If get_information is not allowed, the protocol error no_information is raised. Sends all matching events describing an image description object exactly once and finally sends the 'done' event. This means - if the image description is parametric, it must send - primaries - named_primaries, if applicable - at least one of tf_power and tf_named, as applicable - luminances - target_primaries - target_luminance - if the image description is parametric, it may send, if applicable, - target_max_cll - target_max_fall - if the image description contains an ICC profile, it must send the icc_file event Once a wp_image_description_info_v1 object has delivered a 'done' event it is automatically destroyed. Every wp_image_description_info_v1 created from the same wp_image_description_v1 shall always return the exact same data. Signals the end of information events and destroys the object. The icc argument provides a file descriptor to the client which may be memory-mapped to provide the ICC profile matching the image description. The fd is read-only, and if mapped then it must be mapped with MAP_PRIVATE by the client. The ICC profile version and other details are determined by the compositor. There is no provision for a client to ask for a specific kind of a profile. Delivers the primary color volume primaries and white point using CIE 1931 xy chromaticity coordinates. Each coordinate value is multiplied by 1 million to get the argument value to carry precision of 6 decimals. 
Delivers the primary color volume primaries and white point using an explicitly enumerated named set. The color component transfer characteristic of this image description is a pure power curve. This event provides the exponent of the power function. This curve represents the conversion from electrical to optical pixel or color values. The curve exponent has been multiplied by 10000 to get the argument eexp value to carry the precision of 4 decimals. Delivers the transfer characteristic using an explicitly enumerated named function. Delivers the primary color volume luminance range and the reference white luminance level. These values include the minimum display emission and ambient flare luminances, assumed to be optically additive and have the chromaticity of the primary color volume white point. The minimum luminance is multiplied by 10000 to get the argument 'min_lum' value and carries precision of 4 decimals. The maximum luminance and reference white luminance values are unscaled. Provides the color primaries and white point of the target color volume using CIE 1931 xy chromaticity coordinates. This is compatible with the SMPTE ST 2086 definition of HDR static metadata for mastering displays. While primary color volume is about how color is encoded, the target color volume is the actually displayable color volume. If target color volume is equal to the primary color volume, then this event is not sent. Each coordinate value is multiplied by 1 million to get the argument value to carry precision of 6 decimals. Provides the luminance range that the image description is targeting as the minimum and maximum absolute luminance L. These values include the minimum display emission and ambient flare luminances, assumed to be optically additive and have the chromaticity of the primary color volume white point. This should be compatible with the SMPTE ST 2086 definition of HDR static metadata. This luminance range is only theoretical and may not correspond to the luminance of light emitted on an actual display. Min L value is multiplied by 10000 to get the argument min_lum value and carry precision of 4 decimals. Max L value is unscaled for max_lum. Provides the targeted max_cll of the image description. max_cll is defined by CTA-861-H. This luminance is only theoretical and may not correspond to the luminance of light emitted on an actual display. Provides the targeted max_fall of the image description. max_fall is defined by CTA-861-H. This luminance is only theoretical and may not correspond to the luminance of light emitted on an actual display. waypipe-v0.10.5/protocols/commit-timing-v1.xml000066400000000000000000000126301507071711100213120ustar00rootroot00000000000000 Copyright © 2023 Valve Corporation Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. When a compositor latches on to new content updates it will check for any number of requirements of the available content updates (such as fences of all buffers being signalled) to consider the update ready. This protocol provides a method for adding a time constraint to surface content. This constraint indicates to the compositor that a content update should be presented as closely as possible to, but not before, a specified time. This protocol does not change the Wayland property that content updates are applied in the order they are received, even when some content updates contain timestamps and others do not. To provide timestamps, this global factory interface must be used to acquire a wp_commit_timing_v1 object for a surface, which may then be used to provide timestamp information for commits. Warning! The protocol described in this file is currently in the testing phase. Backward compatible changes may be added together with the corresponding interface version bump. Backward incompatible changes can only be done by creating a new major version of the extension. Informs the server that the client will no longer be using this protocol object. Existing objects created by this object are not affected. Establish a timing controller for a surface. Only one commit timer can be created for a surface, or a commit_timer_exists protocol error will be generated. An object to set a time constraint for a content update on a surface. Provide a timing constraint for a surface content update. A set_timestamp request may be made before a wl_surface.commit to tell the compositor that the content is intended to be presented as closely as possible to, but not before, the specified time. The time is in the domain of the compositor's presentation clock. An invalid_timestamp error will be generated for invalid tv_nsec. If a timestamp already exists on the surface, a timestamp_exists error is generated. Requesting set_timestamp after the commit_timer object's surface is destroyed will generate a "surface_destroyed" error. Informs the server that the client will no longer be using this protocol object. Existing timing constraints are not affected by the destruction. waypipe-v0.10.5/protocols/ext-data-control-v1.xml000066400000000000000000000273751507071711100217360ustar00rootroot00000000000000 Copyright © 2018 Simon Ser Copyright © 2019 Ivan Molodetskikh Copyright © 2024 Neal Gompa Permission to use, copy, modify, distribute, and sell this software and its documentation for any purpose is hereby granted without fee, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of the copyright holders not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. The copyright holders make no representations about the suitability of this software for any purpose. It is provided "as is" without express or implied warranty. 
THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. This protocol allows a privileged client to control data devices. In particular, the client will be able to manage the current selection and take the role of a clipboard manager. Warning! The protocol described in this file is currently in the testing phase. Backward compatible changes may be added together with the corresponding interface version bump. Backward incompatible changes can only be done by creating a new major version of the extension. This interface is a manager that allows creating per-seat data device controls. Create a new data source. Create a data device that can be used to manage a seat's selection. All objects created by the manager will still remain valid, until their appropriate destroy request has been called. This interface allows a client to manage a seat's selection. When the seat is destroyed, this object becomes inert. This request asks the compositor to set the selection to the data from the source on behalf of the client. The given source may not be used in any further set_selection or set_primary_selection requests. Attempting to use a previously used source triggers the used_source protocol error. To unset the selection, set the source to NULL. Destroys the data device object. The data_offer event introduces a new ext_data_control_offer object, which will subsequently be used in either the ext_data_control_device.selection event (for the regular clipboard selections) or the ext_data_control_device.primary_selection event (for the primary clipboard selections). Immediately following the ext_data_control_device.data_offer event, the new data_offer object will send out ext_data_control_offer.offer events to describe the MIME types it offers. The selection event is sent out to notify the client of a new ext_data_control_offer for the selection for this device. The ext_data_control_device.data_offer and the ext_data_control_offer.offer events are sent out immediately before this event to introduce the data offer object. The selection event is sent to a client when a new selection is set. The ext_data_control_offer is valid until a new ext_data_control_offer or NULL is received. The client must destroy the previous selection ext_data_control_offer, if any, upon receiving this event. Regardless, the previous selection will be ignored once a new selection ext_data_control_offer is received. The first selection event is sent upon binding the ext_data_control_device object. This data control object is no longer valid and should be destroyed by the client. The primary_selection event is sent out to notify the client of a new ext_data_control_offer for the primary selection for this device. The ext_data_control_device.data_offer and the ext_data_control_offer.offer events are sent out immediately before this event to introduce the data offer object. The primary_selection event is sent to a client when a new primary selection is set. The ext_data_control_offer is valid until a new ext_data_control_offer or NULL is received. 
The client must destroy the previous primary selection ext_data_control_offer, if any, upon receiving this event. Regardless, the previous primary selection will be ignored once a new primary selection ext_data_control_offer is received. If the compositor supports primary selection, the first primary_selection event is sent upon binding the ext_data_control_device object. This request asks the compositor to set the primary selection to the data from the source on behalf of the client. The given source may not be used in any further set_selection or set_primary_selection requests. Attempting to use a previously used source triggers the used_source protocol error. To unset the primary selection, set the source to NULL. The compositor will ignore this request if it does not support primary selection. The ext_data_control_source object is the source side of an ext_data_control_offer. It is created by the source client in a data transfer and provides a way to describe the offered data and a way to respond to requests to transfer the data. This request adds a MIME type to the set of MIME types advertised to targets. Can be called several times to offer multiple types. Calling this after ext_data_control_device.set_selection is a protocol error. Destroys the data source object. Request for data from the client. Send the data as the specified MIME type over the passed file descriptor, then close it. This data source is no longer valid. The data source has been replaced by another data source. The client should clean up and destroy this data source. An ext_data_control_offer represents a piece of data offered for transfer by another client (the source client). The offer describes the different MIME types that the data can be converted to and provides the mechanism for transferring the data directly from the source client. To transfer the offered data, the client issues this request and indicates the MIME type it wants to receive. The transfer happens through the passed file descriptor (typically created with the pipe system call). The source client writes the data in the MIME type representation requested and then closes the file descriptor. The receiving client reads from the read end of the pipe until EOF and then closes its end, at which point the transfer is complete. This request may happen multiple times for different MIME types. Destroys the data offer object. Sent immediately after creating the ext_data_control_offer object. One event per offered MIME type. waypipe-v0.10.5/protocols/ext-foreign-toplevel-list-v1.xml000066400000000000000000000234461507071711100235730ustar00rootroot00000000000000 Copyright © 2018 Ilia Bozhinov Copyright © 2020 Isaac Freund Copyright © 2022 wb9688 Copyright © 2023 i509VCB Permission to use, copy, modify, distribute, and sell this software and its documentation for any purpose is hereby granted without fee, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of the copyright holders not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. The copyright holders make no representations about the suitability of this software for any purpose. It is provided "as is" without express or implied warranty.
THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. The purpose of this protocol is to provide protocol object handles for toplevels, possibly originating from another client. This protocol is intentionally minimalistic and expects additional functionality (e.g. creating a screencopy source from a toplevel handle, getting information about the state of the toplevel) to be implemented in extension protocols. The compositor may choose to restrict this protocol to a special client launched by the compositor itself or expose it to all clients; this is compositor policy. The key words "must", "must not", "required", "shall", "shall not", "should", "should not", "recommended", "may", and "optional" in this document are to be interpreted as described in IETF RFC 2119. Warning! The protocol described in this file is currently in the testing phase. Backward compatible changes may be added together with the corresponding interface version bump. Backward incompatible changes can only be done by creating a new major version of the extension. A toplevel is defined as a surface with a role similar to xdg_toplevel. XWayland surfaces may be treated like toplevels in this protocol. After a client binds the ext_foreign_toplevel_list_v1, each mapped toplevel window will be sent using the ext_foreign_toplevel_list_v1.toplevel event. Clients which only care about the current state can perform a roundtrip after binding this global. For each instance of ext_foreign_toplevel_list_v1, the compositor must create a new ext_foreign_toplevel_handle_v1 object for each mapped toplevel. If a compositor implementation sends the ext_foreign_toplevel_list_v1.finished event after the global is bound, the compositor must not send any ext_foreign_toplevel_list_v1.toplevel events. This event is emitted whenever a new toplevel window is created. It is emitted for all toplevels, regardless of the app that has created them. All initial properties of the toplevel (identifier, title, app_id) will be sent immediately after this event using the corresponding events for ext_foreign_toplevel_handle_v1. The compositor will use the ext_foreign_toplevel_handle_v1.done event to indicate when all data has been sent. This event indicates that the compositor is done sending events to this object. The client should destroy the object. See ext_foreign_toplevel_list_v1.destroy for more information. The compositor must not send any more toplevel events after this event. This request indicates that the client no longer wishes to receive events for new toplevels. The Wayland protocol is asynchronous, meaning the compositor may send further toplevel events until the stop request is processed. The client should wait for an ext_foreign_toplevel_list_v1.finished event before destroying this object. This request should be called either when the client will no longer use the ext_foreign_toplevel_list_v1 or after the finished event has been received to allow destruction of the object.
If a client wishes to destroy this object it should send an ext_foreign_toplevel_list_v1.stop request and wait for an ext_foreign_toplevel_list_v1.finished event, then destroy the handles and then this object. An ext_foreign_toplevel_handle_v1 object represents a mapped toplevel window. A single app may have multiple mapped toplevels. This request should be used when the client will no longer use the handle or after the closed event has been received to allow destruction of the object. When a handle is destroyed, a new handle may not be created by the server until the toplevel is unmapped and then remapped. Destroying a toplevel handle is not recommended unless the client is cleaning up child objects before destroying the ext_foreign_toplevel_list_v1 object, the toplevel was closed or the toplevel handle will not be used in the future. Other protocols which extend the ext_foreign_toplevel_handle_v1 interface should require destructors for extension interfaces be called before allowing the toplevel handle to be destroyed. The server will emit no further events on the ext_foreign_toplevel_handle_v1 after this event. Any requests received aside from the destroy request must be ignored. Upon receiving this event, the client should destroy the handle. Other protocols which extend the ext_foreign_toplevel_handle_v1 interface must also ignore requests other than destructors. This event is sent after all changes in the toplevel state have been sent. This allows changes to the ext_foreign_toplevel_handle_v1 properties to be atomically applied. Other protocols which extend the ext_foreign_toplevel_handle_v1 interface may use this event to also atomically apply any pending state. This event must not be sent after the ext_foreign_toplevel_handle_v1.closed event. The title of the toplevel has changed. The configured state must not be applied immediately. See ext_foreign_toplevel_handle_v1.done for details. The app id of the toplevel has changed. The configured state must not be applied immediately. See ext_foreign_toplevel_handle_v1.done for details. This identifier is used to check if two or more toplevel handles belong to the same toplevel. The identifier is useful for command line tools or privileged clients which may need to reference an exact toplevel across processes or instances of the ext_foreign_toplevel_list_v1 global. The compositor must only send this event when the handle is created. The identifier must be unique per toplevel and its handles. Two different toplevels must not have the same identifier. The identifier is only valid as long as the toplevel is mapped. If the toplevel is unmapped the identifier must not be reused. An identifier must not be reused by the compositor to ensure there are no races when sharing identifiers between processes. An identifier is a string that contains up to 32 printable ASCII bytes. An identifier must not be an empty string. It is recommended that a compositor includes an opaque generation value in identifiers. How the generation value is used when generating the identifier is implementation dependent.
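Given the identifier rules above (1 to 32 printable ASCII bytes, never empty, never reused while mapped), a client-side sanity check could look like this hypothetical helper:

#include <stdbool.h>
#include <string.h>

/* Check that a toplevel identifier is 1..32 printable ASCII bytes. */
static bool toplevel_identifier_valid(const char *id)
{
        size_t len = strlen(id);
        if (len == 0 || len > 32)
                return false;
        for (size_t i = 0; i < len; i++) {
                if (id[i] < 0x20 || id[i] > 0x7e)
                        return false; /* non-printable or non-ASCII byte */
        }
        return true;
}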
waypipe-v0.10.5/protocols/ext-image-capture-source-v1.xml000066400000000000000000000116351507071711100233600ustar00rootroot00000000000000 Copyright © 2022 Andri Yngvason Copyright © 2024 Simon Ser Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. This protocol serves as an intermediary between capturing protocols and potential image capture sources such as outputs and toplevels. This protocol may be extended to support more image capture sources in the future, thereby adding those image capture sources to other protocols that use the image capture source object without having to modify those protocols. Warning! The protocol described in this file is currently in the testing phase. Backward compatible changes may be added together with the corresponding interface version bump. Backward incompatible changes can only be done by creating a new major version of the extension. The image capture source object is an opaque descriptor for a capturable resource. This resource may be any sort of entity from which an image may be derived. Note that, because ext_image_capture_source_v1 objects are created from multiple independent factory interfaces, the ext_image_capture_source_v1 interface is frozen at version 1. Destroys the image capture source. This request may be sent at any time by the client. A manager for creating image capture source objects for wl_output objects. Creates a source object for an output. Images captured from this source will show the same content as the output. Some elements may be omitted, such as cursors and overlays that have been marked as transparent to capturing. Destroys the manager. This request may be sent at any time by the client and objects created by the manager will remain valid after its destruction. A manager for creating image capture source objects for ext_foreign_toplevel_handle_v1 objects. Creates a source object for a foreign toplevel handle. Images captured from this source will show the same content as the toplevel. Destroys the manager. This request may be sent at any time by the client and objects created by the manager will remain valid after its destruction.
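For illustration, deriving a source from a wl_output could look like the sketch below. The function names and the generated header follow the usual wayland-scanner C binding conventions and are assumptions here, not part of the XML itself:

#include <wayland-client.h>
#include "ext-image-capture-source-v1-client-protocol.h" /* scanner output */

/* Create a capture source for an output, then drop the manager; per the
 * protocol, sources stay valid after the manager is destroyed. */
static struct ext_image_capture_source_v1 *
make_output_source(struct ext_output_image_capture_source_manager_v1 *mgr,
                struct wl_output *output)
{
        struct ext_image_capture_source_v1 *src =
                ext_output_image_capture_source_manager_v1_create_source(mgr,
                                output);
        ext_output_image_capture_source_manager_v1_destroy(mgr);
        return src;
}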
waypipe-v0.10.5/protocols/ext-image-copy-capture-v1.xml000066400000000000000000000450761507071711100230400ustar00rootroot00000000000000 Copyright © 2021-2023 Andri Yngvason Copyright © 2024 Simon Ser Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. This protocol allows clients to ask the compositor to capture image sources such as outputs and toplevels into user submitted buffers. Warning! The protocol described in this file is currently in the testing phase. Backward compatible changes may be added together with the corresponding interface version bump. Backward incompatible changes can only be done by creating a new major version of the extension. This object is a manager which offers requests to start capturing from a source. Create a capturing session for an image capture source. If the paint_cursors option is set, cursors shall be composited onto the captured frame. The cursor must not be composited onto the frame if this flag is not set. If the options bitfield is invalid, the invalid_option protocol error is sent. Create a cursor capturing session for the pointer of an image capture source. Destroy the manager object. Other objects created via this interface are unaffected. This object represents an active image copy capture session. After a capture session is created, buffer constraint events will be emitted from the compositor to tell the client which buffer types and formats are supported for reading from the session. The compositor may re-send buffer constraint events whenever they change. To advertise buffer constraints, the compositor must send in no particular order: zero or more shm_format and dmabuf_format events, zero or one dmabuf_device event, and exactly one buffer_size event. Then the compositor must send a done event. When the client has received all the buffer constraints, it can create a buffer accordingly, attach it to the capture session using the attach_buffer request, set the buffer damage using the damage_buffer request and then send the capture request. Provides the dimensions of the source image in buffer pixel coordinates. The client must attach buffers that match this size. Provides the format that must be used for shared-memory buffers. This event may be emitted multiple times, in which case the client may choose any given format. This event advertises the device buffers must be allocated on for dma-buf buffers. In general the device is a DRM node. The DRM node type (primary vs. render) is unspecified. Clients must not rely on the compositor sending a particular node type. 
Clients cannot check two devices for equality by comparing the dev_t value. Provides the format that must be used for dma-buf buffers. The client may choose any of the modifiers advertised in the array of 64-bit unsigned integers. This event may be emitted multiple times, in which case the client may choose any given format. This event is sent once when all buffer constraint events have been sent. The compositor must always end a batch of buffer constraint events with this event, regardless of whether it sends the initial constraints or an update. This event indicates that the capture session has stopped and is no longer available. This can happen in a number of cases, e.g. when the underlying source is destroyed, if the user decides to end the image capture, or if an unrecoverable runtime error has occurred. The client should destroy the session after receiving this event. Create a capture frame for this session. At most one frame object can exist for a given session at any time. If a client sends a create_frame request before a previous frame object has been destroyed, the duplicate_frame protocol error is raised. Destroys the session. This request can be sent at any time by the client. This request doesn't affect ext_image_copy_capture_frame_v1 objects created by this object. This object represents an image capture frame. The client should attach a buffer, damage the buffer, and then send a capture request. If the capture is successful, the compositor must send the frame metadata (transform, damage, presentation_time in any order) followed by the ready event. If the capture fails, the compositor must send the failed event. Destroys the frame. This request can be sent at any time by the client. Attach a buffer to the session. The wl_buffer.release request is unused. The new buffer replaces any previously attached buffer. This request must not be sent after capture, or else the already_captured protocol error is raised. Apply damage to the buffer which is to be captured next. This request may be sent multiple times to describe a region. The client indicates the accumulated damage since this wl_buffer was last captured. During capture, the compositor will update the buffer with at least the union of the region passed by the client and the region advertised by ext_image_copy_capture_frame_v1.damage. When a wl_buffer is captured for the first time, or when the client doesn't track damage, the client must damage the whole buffer. This is for optimisation purposes. The compositor may use this information to reduce copying. These coordinates originate from the upper left corner of the buffer. If x or y are strictly negative, or if width or height are negative or zero, the invalid_buffer_damage protocol error is raised. This request must not be sent after capture, or else the already_captured protocol error is raised. Capture a frame. Unless this is the first successful captured frame performed in this session, the compositor may wait an indefinite amount of time for the source content to change before performing the copy. This request may only be sent once, or else the already_captured protocol error is raised. A buffer must be attached before this request is sent, or else the no_buffer protocol error is raised. This event is sent before the ready event and holds the transform that the compositor has applied to the buffer contents. This event is sent before the ready event. It may be generated multiple times to describe a region. 
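(A brief aside on the dmabuf_format event above: its modifiers arrive as a wl_array of 64-bit values. A hypothetical listener body, assuming libwayland's wl_array_for_each and the wayland-scanner-generated session type:)

#include <inttypes.h>
#include <stdio.h>
#include <wayland-util.h>

struct ext_image_copy_capture_session_v1; /* from the generated header */

/* Print each (format, modifier) pair advertised by dmabuf_format. */
static void handle_dmabuf_format(void *data,
                struct ext_image_copy_capture_session_v1 *session,
                uint32_t format, struct wl_array *modifiers)
{
        uint64_t *mod;
        wl_array_for_each(mod, modifiers)
                printf("format 0x%08" PRIx32 ", modifier 0x%016" PRIx64 "\n",
                                format, *mod);
}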
The first captured frame in a session will always carry full damage. Subsequent frames' damaged regions describe which parts of the buffer have changed since the last ready event. These coordinates originate in the upper left corner of the buffer. This event indicates the time at which the frame is presented to the output in system monotonic time. This event is sent before the ready event. The timestamp is expressed as tv_sec_hi, tv_sec_lo, tv_nsec triples, each component being an unsigned 32-bit value. Whole seconds are in tv_sec which is a 64-bit value combined from tv_sec_hi and tv_sec_lo, and the additional fractional part in tv_nsec as nanoseconds. Hence, for valid timestamps tv_nsec must be in [0, 999999999]. Called as soon as the frame is copied, indicating it is available for reading. The buffer may be re-used by the client after this event. After receiving this event, the client must destroy the object. An unspecified runtime error has occurred. The client may retry. The buffer submitted by the client doesn't match the latest session constraints. The client should re-allocate its buffers and retry. The session has stopped. See ext_image_copy_capture_session_v1.stopped. This event indicates that the attempted frame copy has failed. After receiving this event, the client must destroy the object. This object represents a cursor capture session. It extends the base capture session with cursor-specific metadata. Destroys the session. This request can be sent at any time by the client. This request doesn't affect ext_image_copy_capture_frame_v1 objects created by this object. Gets the image copy capture session for this cursor session. The session will produce frames of the cursor image. The compositor may pause the session when the cursor leaves the captured area. This request must not be sent more than once, or else the duplicate_session protocol error is raised. Sent when a cursor enters the captured area. It shall be generated before the "position" and "hotspot" events when and only when a cursor enters the area. The cursor enters the captured area when the cursor image intersects with the captured area. Note that this is different from e.g. wl_pointer.enter. Sent when a cursor leaves the captured area. No "position" or "hotspot" event is generated for the cursor until the cursor enters the captured area again. Cursors outside the image capture source do not get captured and no event will be generated for them. The given position is the position of the cursor's hotspot and it is relative to the main buffer's top left corner in transformed buffer pixel coordinates. The coordinates may be negative or greater than the main buffer size. The hotspot describes the offset between the cursor image and the position of the input device. The given coordinates are the hotspot's offset from the origin in buffer coordinates. Clients should not apply the hotspot immediately: the hotspot becomes effective when the next ext_image_copy_capture_frame_v1.ready event is received. Compositors may delay this event until the client captures a new frame.
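The split-timestamp convention described above (tv_sec carried as two 32-bit halves plus nanoseconds) reassembles in a few lines; a minimal sketch with a hypothetical helper name:

#include <stdint.h>
#include <time.h>

/* Rebuild a timespec from presentation_time's three event arguments. */
static struct timespec combine_timestamp(uint32_t tv_sec_hi,
                uint32_t tv_sec_lo, uint32_t tv_nsec)
{
        struct timespec ts;
        ts.tv_sec = (time_t)(((uint64_t)tv_sec_hi << 32) | tv_sec_lo);
        ts.tv_nsec = (long)tv_nsec; /* valid range is [0, 999999999] */
        return ts;
}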
waypipe-v0.10.5/protocols/gtk-primary-selection.xml000066400000000000000000000237111507071711100224440ustar00rootroot00000000000000 Copyright © 2015, 2016 Red Hat Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. This protocol provides the ability to have a primary selection device to match that of the X server. This primary selection is a shortcut to the common clipboard selection, where text just needs to be selected in order to allow copying it elsewhere. The de facto way to perform this action is the middle mouse button, although it is not limited to this one. Clients wishing to honor primary selection should create a primary selection source and set it as the selection through wp_primary_selection_device.set_selection whenever the text selection changes. In order to minimize calls in pointer-driven text selection, it should happen only once after the operation finished. Similarly, a NULL source should be set when text is unselected. wp_primary_selection_offer objects are first announced through the wp_primary_selection_device.data_offer event. Immediately after this event, the primary data offer will emit wp_primary_selection_offer.offer events to announce the mime types being offered. When the primary selection changes, the client with the keyboard focus will receive wp_primary_selection_device.selection events. Only the client with the keyboard focus will receive such events with a non-NULL wp_primary_selection_offer. Across keyboard focus changes, previously focused clients will receive wp_primary_selection_device.selection events with a NULL wp_primary_selection_offer. In order to request the primary selection data, the client must pass a recent serial pertaining to the press event that is triggering the operation. If the compositor deems the serial valid and recent, the wp_primary_selection_source.send event will happen at the other end to let the transfer begin. The client owning the primary selection should write the requested data, and close the file descriptor immediately. If the primary selection owner client disappeared during the transfer, the client reading the data will receive a wp_primary_selection_device.selection event with a NULL wp_primary_selection_offer; the client should take this as a hint to finish the reads related to the no longer existing offer. The primary selection owner should be checking for errors during writes, merely cancelling the ongoing transfer if any happened.
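The transfer path described above is an ordinary POSIX pipe. A minimal receive-side sketch; the gtk_primary_selection_offer_receive name assumes the wayland-scanner C bindings for this file's interfaces, and a real client must flush the request (e.g. wl_display_flush) before blocking on the read end:

#include <stdio.h>
#include <unistd.h>
#include "gtk-primary-selection-client-protocol.h" /* scanner output */

/* Ask for one mime type and read the contents until EOF. */
static void read_offer(struct gtk_primary_selection_offer *offer,
                const char *mime_type)
{
        int fds[2];
        char buf[4096];
        ssize_t n;

        if (pipe(fds) == -1)
                return;
        gtk_primary_selection_offer_receive(offer, mime_type, fds[1]);
        close(fds[1]); /* the source keeps its own duplicate of the fd */
        while ((n = read(fds[0], buf, sizeof(buf))) > 0)
                fwrite(buf, 1, (size_t)n, stdout);
        close(fds[0]);
}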
The primary selection device manager is a singleton global object that provides access to the primary selection. It allows clients to create wp_primary_selection_source objects, as well as retrieving the per-seat wp_primary_selection_device objects. Create a new primary selection source. Create a new data device for a given seat. Destroy the primary selection device manager. Replaces the current selection. The previous owner of the primary selection will receive a wp_primary_selection_source.cancelled event. To unset the selection, set the source to NULL. Introduces a new wp_primary_selection_offer object that may be used to receive the current primary selection. Immediately following this event, the new wp_primary_selection_offer object will send wp_primary_selection_offer.offer events to describe the offered mime types. The wp_primary_selection_device.selection event is sent to notify the client of a new primary selection. This event is sent after the wp_primary_selection.data_offer event introducing this object, and after the offer has announced its mimetypes through wp_primary_selection_offer.offer. The data_offer is valid until a new offer or NULL is received or until the client loses keyboard focus. The client must destroy the previous selection data_offer, if any, upon receiving this event. Destroy the primary selection device. A wp_primary_selection_offer represents an offer to transfer the contents of the primary selection clipboard to the client. Similar to wl_data_offer, the offer also describes the mime types that the data can be converted to and provides the mechanisms for transferring the data directly to the client. To transfer the contents of the primary selection clipboard, the client issues this request and indicates the mime type that it wants to receive. The transfer happens through the passed file descriptor (typically created with the pipe system call). The source client writes the data in the mime type representation requested and then closes the file descriptor. The receiving client reads from the read end of the pipe until EOF and closes its end, at which point the transfer is complete. This request may happen multiple times for different mime types. Destroy the primary selection offer. Sent immediately after announcing the wp_primary_selection_offer through wp_primary_selection_device.data_offer. One event is sent per offered mime type. The source side of a wp_primary_selection_offer. It provides a way to describe the offered data and respond to requests to transfer the requested contents of the primary selection clipboard. This request adds a mime type to the set of mime types advertised to targets. Can be called several times to offer multiple types. Destroy the primary selection source. Request for the current primary selection contents from the client. Send the specified mime type over the passed file descriptor, then close it. This primary selection source is no longer valid. The client should clean up and destroy this primary selection source. waypipe-v0.10.5/protocols/input-method-unstable-v2.xml000066400000000000000000000515521507071711100227730ustar00rootroot00000000000000 Copyright © 2008-2011 Kristian Høgsberg Copyright © 2010-2011 Intel Corporation Copyright © 2012-2013 Collabora, Ltd. Copyright © 2012, 2013 Intel Corporation Copyright © 2015, 2016 Jan Arne Petersen Copyright © 2017, 2018 Red Hat, Inc.
Copyright © 2018 Purism SPC Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. This protocol allows applications to act as input methods for compositors. An input method context is used to manage the state of the input method. Text strings are UTF-8 encoded; their indices and lengths are in bytes. This document adheres to RFC 2119 when using words like "must", "should", "may", etc. Warning! The protocol described in this file is experimental and backward incompatible changes may be made. Backward compatible changes may be added together with the corresponding interface version bump. Backward incompatible changes are done by bumping the version number in the protocol and interface names and resetting the interface version. Once the protocol is to be declared stable, the 'z' prefix and the version number in the protocol and interface names are removed and the interface version number is reset. An input method object allows clients to compose text. The object connects the client to a text input in an application, and lets the client serve as an input method for a seat. The zwp_input_method_v2 object can occupy two distinct states: active and inactive. In the active state, the object is associated to and communicates with a text input. In the inactive state, there is no associated text input, and the only communication is with the compositor. Initially, the input method is in the inactive state. Requests issued in the inactive state must be accepted by the compositor. Because of the serial mechanism and the state reset on the activate event, they will not have any effect on the state of the next text input. There must be no more than one input method object per seat. Notification that a text input focused on this seat requested the input method to be activated. This event serves the purpose of providing the compositor with an active input method. This event resets all state associated with previous enable, disable, surrounding_text, text_change_cause, and content_type events, as well as the state associated with set_preedit_string, commit_string, and delete_surrounding_text requests. In addition, it marks the zwp_input_method_v2 object as active, and makes any existing zwp_input_popup_surface_v2 objects visible. The surrounding_text and content_type events must follow before the next done event if the text input supports the respective functionality. State set with this event is double-buffered. It will get applied on the next zwp_input_method_v2.done event, and stay valid until changed.
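One way a client might model the double-buffering described above, as a purely illustrative sketch (the field selection is not mandated by the protocol):

#include <stdbool.h>
#include <stdint.h>

/* Pending vs. current input-method state: events fill 'pending', the
 * done event promotes it atomically and advances the commit serial. */
struct im_state {
        bool active;
        char *surrounding_text; /* owned elsewhere; shallow copy suffices */
        uint32_t cursor, anchor;
        uint32_t content_hint, content_purpose;
};

struct im {
        struct im_state pending, current;
        uint32_t done_count; /* serial to pass to the commit request */
};

static void handle_done(struct im *im)
{
        im->current = im->pending;
        im->done_count++;
}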
Notification that no focused text input currently needs an active input method on this seat. This event marks the zwp_input_method_v2 object as inactive. The compositor must make all existing zwp_input_popup_surface_v2 objects invisible until the next activate event. State set with this event is double-buffered. It will get applied on the next zwp_input_method_v2.done event, and stay valid until changed. Updates the surrounding plain text around the cursor, excluding the preedit text. If any preedit text is present, it is replaced with the cursor for the purpose of this event. The argument text is a buffer containing the surrounding text, and must include the cursor position and the complete selection. It should contain additional characters before and after these. There is a maximum length of wayland messages, so text cannot be longer than 4000 bytes. cursor is the byte offset of the cursor within the text buffer. anchor is the byte offset of the selection anchor within the text buffer. If there is no selected text, anchor must be the same as cursor. If this event does not arrive before the first done event, the input method may assume that the text input does not support this functionality and ignore following surrounding_text events. Values set with this event are double-buffered. They will get applied and set to initial values on the next zwp_input_method_v2.done event. The initial state for affected fields is empty, meaning that the text input does not support sending surrounding text. If the empty values get applied, subsequent attempts to change them may have no effect. Tells the input method why the text surrounding the cursor changed. Whenever the client detects an external change in text, cursor, or anchor position, it must issue this request to the compositor. This request is intended to give the input method a chance to update the preedit text in an appropriate way, e.g. by removing it when the user starts typing with a keyboard. cause describes the source of the change. The value set with this event is double-buffered. It will get applied and set to its initial value on the next zwp_input_method_v2.done event. The initial value of cause is input_method. Indicates the content type and hint for the current zwp_input_method_v2 instance. Values set with this event are double-buffered. They will get applied on the next zwp_input_method_v2.done event. The initial value for hint is none, and the initial value for purpose is normal. Atomically applies state changes recently sent to the client. The done event establishes and updates the state of the client, and must be issued after any changes to apply them. Text input state (content purpose, content hint, surrounding text, and change cause) is conceptually double-buffered within an input method context. Events modify the pending state, as opposed to the current state in use by the input method. A done event atomically applies all pending state, replacing the current state. After done, the new pending state is as documented for each related request. Events must be applied in the order of arrival. Neither current nor pending state are modified unless noted otherwise. Send the commit string text for insertion to the application. Inserts a string at the current cursor position (see commit event sequence). The string to commit could be either just a single character after a key press or the result of some composing. The argument text is a buffer containing the string to insert.
There is a maximum length of wayland messages, so text cannot be longer than 4000 bytes. Values set with this request are double-buffered. They must be applied and reset to initial values on the next zwp_input_method_v2.commit request. The initial value of text is an empty string. Send the pre-edit string text to the application text input. Place a new composing text (pre-edit) at the current cursor position. Any previously set composing text must be removed. Any previously existing selected text must be removed. The cursor is moved to a new position within the preedit string. The argument text is a buffer containing the preedit string. There is a maximum length of wayland messages, so text cannot be longer than 4000 bytes. The arguments cursor_begin and cursor_end are counted in bytes relative to the beginning of the submitted string buffer. Cursor should be hidden by the text input when both are equal to -1. cursor_begin indicates the beginning of the cursor. cursor_end indicates the end of the cursor. It may be equal to or different from cursor_begin. Values set with this request are double-buffered. They must be applied on the next zwp_input_method_v2.commit request. The initial value of text is an empty string. The initial values of cursor_begin and cursor_end are both 0. Remove the surrounding text. before_length and after_length are the number of bytes before and after the current cursor index (excluding the preedit text) to delete. If any preedit text is present, it is replaced with the cursor for the purpose of this event. In effect, before_length is counted from the beginning of preedit text, and after_length from its end (see commit event sequence). Values set with this request are double-buffered. They must be applied and reset to initial values on the next zwp_input_method_v2.commit request. The initial values of both before_length and after_length are 0. Apply state changes from commit_string, set_preedit_string and delete_surrounding_text requests. The state relating to these requests is double-buffered, and each one modifies the pending state. This request replaces the current state with the pending state. The connected text input is expected to proceed by evaluating the changes in the following order:

1. Replace existing preedit string with the cursor.
2. Delete requested surrounding text.
3. Insert commit string with the cursor at its end.
4. Calculate surrounding text to send.
5. Insert new preedit text in cursor position.
6. Place cursor inside preedit text.

The serial number reflects the last state of the zwp_input_method_v2 object known to the client. The value of the serial argument must be equal to the number of done events already issued by that object. When the compositor receives a commit request with a serial different from the number of past done events, it must proceed as normal, except it should not change the current state of the zwp_input_method_v2 object. Creates a new zwp_input_popup_surface_v2 object wrapping a given surface. The surface gets assigned the "input_popup" role. If the surface already has an assigned role, the compositor must issue a protocol error. Allow an input method to receive hardware keyboard input and process key events to generate text events (with pre-edit) over the wire. This allows input methods which compose multiple key events for inputting text like it is done for CJK languages. The compositor should send all keyboard events on the seat to the grab holder via the returned wl_keyboard object.
Nevertheless, the compositor may decide not to forward any particular event. The compositor must not further process any event after it has been forwarded to the grab holder. Releasing the resulting wl_keyboard object releases the grab. The input method ceased to be available. The compositor must issue this event as the only event on the object if there was another input_method object associated with the same seat at the time of its creation. The compositor must issue this event when the object is no longer usable, e.g. due to seat removal. The input method context becomes inert and should be destroyed after deactivation is handled. Any further requests and events except for the destroy request must be ignored. Destroys the zwp_input_method_v2 object and any associated child objects, i.e. zwp_input_popup_surface_v2 and zwp_input_method_keyboard_grab_v2. This interface marks a surface as a popup for interacting with an input method. The compositor should place it near the active text input area. It must be visible if and only if the input method is in the active state. The client must not destroy the underlying wl_surface while the zwp_input_popup_surface_v2 object exists. Notify about the position of the area of the text input expressed as a rectangle in surface local coordinates. This is a hint to the input method telling it the relative position of the text being entered. The zwp_input_method_keyboard_grab_v2 interface represents an exclusive grab of the wl_keyboard interface associated with the seat. This event provides a file descriptor to the client which can be memory-mapped to provide a keyboard mapping description. A key was pressed or released. The time argument is a timestamp with millisecond granularity, with an undefined base. Notifies clients that the modifier and/or group state has changed, and that they should update their local state. Informs the client about the keyboard's repeat rate and delay. This event is sent as soon as the zwp_input_method_keyboard_grab_v2 object has been created, and is guaranteed to be received by the client before any key press event. Negative values for either rate or delay are illegal. A rate of zero will disable any repeating (regardless of the value of delay). This event can be sent later on as well with a new value if necessary, so clients should continue listening for the event past the creation of zwp_input_method_keyboard_grab_v2. The input method manager allows the client to become the input method on a chosen seat. No more than one input method must be associated with any seat at any given time. Request a new zwp_input_method_v2 object associated with a given seat. Destroys the zwp_input_method_manager_v2 object. The zwp_input_method_v2 objects originating from it remain valid. waypipe-v0.10.5/protocols/linux-dmabuf-v1.xml000066400000000000000000000676521507071711100211440ustar00rootroot00000000000000 Copyright © 2014, 2015 Collabora, Ltd. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Following the interfaces from: https://www.khronos.org/registry/egl/extensions/EXT/EGL_EXT_image_dma_buf_import.txt https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_image_dma_buf_import_modifiers.txt and the Linux DRM sub-system's AddFb2 ioctl. This interface offers ways to create generic dmabuf-based wl_buffers. Clients can use the get_surface_feedback request to get dmabuf feedback for a particular surface. If the client wants to retrieve feedback not tied to a surface, they can use the get_default_feedback request. The following are required from clients: - Clients must ensure that either all data in the dma-buf is coherent for all subsequent read access or that coherency is correctly handled by the underlying kernel-side dma-buf implementation. - Don't make any more attachments after sending the buffer to the compositor. Making more attachments later increases the risk of the compositor not being able to use (re-import) an existing dmabuf-based wl_buffer. The underlying graphics stack must ensure the following: - The dmabuf file descriptors relayed to the server will stay valid for the whole lifetime of the wl_buffer. This means the server may at any time use those fds to import the dmabuf into any kernel sub-system that might accept it. However, when the underlying graphics stack fails to deliver the promise, because of e.g. a device hot-unplug which raises internal errors, after the wl_buffer has been successfully created the compositor must not raise protocol errors to the client when dmabuf import later fails. To create a wl_buffer from one or more dmabufs, a client creates a zwp_linux_dmabuf_params_v1 object with a zwp_linux_dmabuf_v1.create_params request. All planes required by the intended format are added with the 'add' request. Finally, a 'create' or 'create_immed' request is issued, which has the following outcome depending on the import success. The 'create' request, - on success, triggers a 'created' event which provides the final wl_buffer to the client. - on failure, triggers a 'failed' event to convey that the server cannot use the dmabufs received from the client. For the 'create_immed' request, - on success, the server immediately imports the added dmabufs to create a wl_buffer. No event is sent from the server in this case. - on failure, the server can choose to either: - terminate the client by raising a fatal error. - mark the wl_buffer as failed, and send a 'failed' event to the client. If the client uses a failed wl_buffer as an argument to any request, the behaviour is compositor implementation-defined. For all DRM formats and unless specified in another protocol extension, pre-multiplied alpha is used for pixel values. Unless specified otherwise in another protocol extension, implicit synchronization is used. In other words, compositors and clients must wait and signal fences implicitly passed via the DMA-BUF's reservation mechanism. Objects created through this interface, especially wl_buffers, will remain valid. 
This temporary object is used to collect multiple dmabuf handles into a single batch to create a wl_buffer. It can only be used once and should be destroyed after a 'created' or 'failed' event has been received. This event advertises one buffer format that the server supports. All the supported formats are advertised once when the client binds to this interface. A roundtrip after binding guarantees that the client has received all supported formats. For the definition of the format codes, see the zwp_linux_buffer_params_v1::create request. Starting version 4, the format event is deprecated and must not be sent by compositors. Instead, use get_default_feedback or get_surface_feedback. This event advertises the formats that the server supports, along with the modifiers supported for each format. All the supported modifiers for all the supported formats are advertised once when the client binds to this interface. A roundtrip after binding guarantees that the client has received all supported format-modifier pairs. For legacy support, DRM_FORMAT_MOD_INVALID (that is, modifier_hi == 0x00ffffff and modifier_lo == 0xffffffff) is allowed in this event. It indicates that the server can support the format with an implicit modifier. When a plane has DRM_FORMAT_MOD_INVALID as its modifier, it is as if no explicit modifier is specified. The effective modifier will be derived from the dmabuf. A compositor that sends valid modifiers and DRM_FORMAT_MOD_INVALID for a given format supports both explicit modifiers and implicit modifiers. For the definition of the format and modifier codes, see the zwp_linux_buffer_params_v1::create and zwp_linux_buffer_params_v1::add requests. Starting version 4, the modifier event is deprecated and must not be sent by compositors. Instead, use get_default_feedback or get_surface_feedback. This request creates a new wp_linux_dmabuf_feedback object not bound to a particular surface. This object will deliver feedback about dmabuf parameters to use if the client doesn't support per-surface feedback (see get_surface_feedback). This request creates a new wp_linux_dmabuf_feedback object for the specified wl_surface. This object will deliver feedback about dmabuf parameters to use for buffers attached to this surface. If the surface is destroyed before the wp_linux_dmabuf_feedback object, the feedback object becomes inert. This temporary object is a collection of dmabufs and other parameters that together form a single logical buffer. The temporary object may eventually create one wl_buffer unless cancelled by destroying it before requesting 'create'. Single-planar formats only require one dmabuf, however multi-planar formats may require more than one dmabuf. For all formats, an 'add' request must be called once per plane (even if the underlying dmabuf fd is identical). You must use consecutive plane indices ('plane_idx' argument for 'add') from zero to the number of planes used by the drm_fourcc format code. All planes required by the format must be given exactly once, but can be given in any order. Each plane index can be set only once. Cleans up the temporary data sent to the server for dmabuf-based wl_buffer creation. This request adds one dmabuf to the set in this zwp_linux_buffer_params_v1. The 64-bit unsigned value combined from modifier_hi and modifier_lo is the dmabuf layout modifier. DRM AddFB2 ioctl calls this the fb modifier, which is defined in drm_mode.h of Linux UAPI. This is an opaque token. Drivers use this token to express tiling, compression, etc. 
driver-specific modifications to the base format defined by the DRM fourcc code. Starting from version 4, the invalid_format protocol error is sent if the format + modifier pair was not advertised as supported. Starting from version 5, the invalid_format protocol error is sent if all planes don't use the same modifier. This request raises the PLANE_IDX error if plane_idx is too large. The error PLANE_SET is raised if attempting to set a plane that was already set. This asks for creation of a wl_buffer from the added dmabuf buffers. The wl_buffer is not created immediately but returned via the 'created' event if the dmabuf sharing succeeds. The sharing may fail at runtime for reasons a client cannot predict, in which case the 'failed' event is triggered. The 'format' argument is a DRM_FORMAT code, as defined by libdrm's drm_fourcc.h. The Linux kernel's DRM sub-system is the authoritative source on how the format codes should work. The 'flags' argument is a bitfield of the flags defined in enum "flags". 'y_invert' means that the image needs to be y-flipped. Flag 'interlaced' means that the frame in the buffer is not progressive as usual, but interlaced. An interlaced buffer as supported here must always contain both top and bottom fields. The top field always begins on the first pixel row. The temporal ordering between the two fields is top field first, unless 'bottom_first' is specified. It is undefined whether 'bottom_first' is ignored if 'interlaced' is not set. This protocol does not convey any information about field rate, duration, or timing, other than the relative ordering between the two fields in one buffer. A compositor may have to estimate the intended field rate from the incoming buffer rate. It is undefined whether the time of receiving wl_surface.commit with a new buffer attached, applying the wl_surface state, wl_surface.frame callback trigger, presentation, or any other point in the compositor cycle is used to measure the frame or field times. There is no support for detecting missed or late frames/fields/buffers either, and there is no support whatsoever for cooperating with interlaced compositor output. The composited image quality resulting from the use of interlaced buffers is explicitly undefined. A compositor may use elaborate hardware features or software to deinterlace and create progressive output frames from a sequence of interlaced input buffers, or it may produce substandard image quality. However, compositors that cannot guarantee reasonable image quality in all cases are recommended to just reject all interlaced buffers. Any argument errors, including non-positive width or height, mismatch between the number of planes and the format, bad format, bad offset or stride, may be indicated by fatal protocol errors: INCOMPLETE, INVALID_FORMAT, INVALID_DIMENSIONS, OUT_OF_BOUNDS. Dmabuf import errors in the server that are not obvious client bugs are returned via the 'failed' event as non-fatal. This allows attempting dmabuf sharing and falling back in the client if it fails. This request can be sent only once in the object's lifetime, after which the only legal request is destroy. This object should be destroyed after issuing a 'create' request. Attempting to use this object after issuing 'create' raises the ALREADY_USED protocol error. It is not mandatory to issue 'create'. If a client wants to cancel the buffer creation, it can just destroy this object. This event indicates that the attempted buffer creation was successful.
It provides the new wl_buffer referencing the dmabuf(s). Upon receiving this event, the client should destroy the zwp_linux_buffer_params_v1 object. This event indicates that the attempted buffer creation has failed. It usually means that one of the dmabuf constraints has not been fulfilled. Upon receiving this event, the client should destroy the zwp_linux_buffer_params_v1 object. This asks for immediate creation of a wl_buffer by importing the added dmabufs. In case of import success, no event is sent from the server, and the wl_buffer is ready to be used by the client. Upon import failure, either of the following may happen, as seen fit by the implementation: - the client is terminated with one of the following fatal protocol errors: - INCOMPLETE, INVALID_FORMAT, INVALID_DIMENSIONS, OUT_OF_BOUNDS, in case of argument errors such as mismatch between the number of planes and the format, bad format, non-positive width or height, or bad offset or stride. - INVALID_WL_BUFFER, in case the cause for failure is unknown or platform specific. - the server creates an invalid wl_buffer, marks it as failed and sends a 'failed' event to the client. The result of using this invalid wl_buffer as an argument in any request by the client is defined by the compositor implementation. This takes the same arguments as a 'create' request, and obeys the same restrictions. This object advertises dmabuf parameters feedback. This includes the preferred devices and the supported formats/modifiers. The parameters are sent once when this object is created and whenever they change. The done event is always sent once after all parameters have been sent. When a single parameter changes, all parameters are re-sent by the compositor. Compositors can re-send the parameters when the current client buffer allocations are sub-optimal. Compositors should not re-send the parameters if re-allocating the buffers would not result in a more optimal configuration. In particular, compositors should avoid sending the exact same parameters multiple times in a row. The tranche_target_device and tranche_formats events are grouped by tranches of preference. For each tranche, a tranche_target_device, one tranche_flags and one or more tranche_formats events are sent, followed by a tranche_done event finishing the list. The tranches are sent in descending order of preference. All formats and modifiers in the same tranche have the same preference. To send parameters, the compositor sends one main_device event, tranches (each consisting of one tranche_target_device event, one tranche_flags event, tranche_formats events and then a tranche_done event), then one done event. Using this request a client can tell the server that it is not going to use the wp_linux_dmabuf_feedback object anymore. This event is sent after all parameters of a wp_linux_dmabuf_feedback object have been sent. This allows changes to the wp_linux_dmabuf_feedback parameters to be seen as atomic, even if they happen via multiple events. This event provides a file descriptor which can be memory-mapped to access the format and modifier table. The table contains a tightly packed array of consecutive format + modifier pairs. Each pair is 16 bytes wide. It contains a format as a 32-bit unsigned integer, followed by 4 bytes of unused padding, and a modifier as a 64-bit unsigned integer. The native endianness is used. The client must map the file descriptor in read-only private mode. Compositors are not allowed to mutate the table file contents once this event has been sent. 
Instead, compositors must create a new, separate table file and re-send feedback parameters. Compositors are allowed to store duplicate format + modifier pairs in the table. This event advertises the main device that the server prefers to use when direct scan-out to the target device isn't possible. The advertised main device may be different for each wp_linux_dmabuf_feedback object, and may change over time. There is exactly one main device. The compositor must send at least one preference tranche with tranche_target_device equal to main_device. Clients need to create buffers that the main device can import and read from, otherwise creating the dmabuf wl_buffer will fail (see the wp_linux_buffer_params.create and create_immed requests for details). The main device will also likely be kept active by the compositor, so clients can use it instead of waking up another device for power savings. In general the device is a DRM node. The DRM node type (primary vs. render) is unspecified. Clients must not rely on the compositor sending a particular node type. Clients cannot check two devices for equality by comparing the dev_t value. If explicit modifiers are not supported and the client performs buffer allocations on a different device than the main device, then the client must force the buffer to have a linear layout. This event splits tranche_target_device and tranche_formats events in preference tranches. It is sent after a set of tranche_target_device and tranche_formats events; it represents the end of a tranche. The next tranche will have a lower preference. This event advertises the target device that the server prefers to use for a buffer created given this tranche. The advertised target device may be different for each preference tranche, and may change over time. There is exactly one target device per tranche. The target device may be a scan-out device, for example if the compositor prefers to directly scan-out a buffer created given this tranche. The target device may be a rendering device, for example if the compositor prefers to texture from said buffer. The client can use this hint to allocate the buffer in a way that makes it accessible from the target device, ideally directly. The buffer must still be accessible from the main device, either through direct import or through a potentially more expensive fallback path. If the buffer can't be directly imported from the main device then clients must be prepared for the compositor changing the tranche priority or making wl_buffer creation fail (see the wp_linux_buffer_params.create and create_immed requests for details). If the device is a DRM node, the DRM node type (primary vs. render) is unspecified. Clients must not rely on the compositor sending a particular node type. Clients cannot check two devices for equality by comparing the dev_t value. This event is tied to a preference tranche, see the tranche_done event. This event advertises the format + modifier combinations that the compositor supports. It carries an array of indices, each referring to a format + modifier pair in the last received format table (see the format_table event). Each index is a 16-bit unsigned integer in native endianness. For legacy support, DRM_FORMAT_MOD_INVALID is an allowed modifier. It indicates that the server can support the format with an implicit modifier. When a buffer has DRM_FORMAT_MOD_INVALID as its modifier, it is as if no explicit modifier is specified. The effective modifier will be derived from the dmabuf. 
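The 16-byte table layout described above maps directly onto a C struct. A minimal sketch of a format_table handler, assuming wayland-scanner-generated bindings; the handler name and where the table is stashed are illustrative:

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

/* One tightly packed entry of the format table. */
struct dmabuf_format_entry {
        uint32_t format;   /* DRM fourcc code */
        uint32_t padding;  /* 4 unused bytes */
        uint64_t modifier; /* layout modifier, native endianness */
};

static void handle_format_table(void *data,
                struct zwp_linux_dmabuf_feedback_v1 *feedback, int32_t fd,
                uint32_t size)
{
        /* The protocol requires a read-only, private mapping. */
        struct dmabuf_format_entry *table =
                        mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);
        if (table == MAP_FAILED)
                return;
        size_t n_entries = size / sizeof(struct dmabuf_format_entry);
        /* tranche_formats later delivers 16-bit indices into this array,
         * i.e. table[index].format and table[index].modifier. */
        (void)n_entries;
        /* ... stash 'table' and 'n_entries' for the tranche events ... */
}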
A compositor that sends valid modifiers and DRM_FORMAT_MOD_INVALID for a given format supports both explicit modifiers and implicit modifiers. Compositors must not send duplicate format + modifier pairs within the same tranche or across two different tranches with the same target device and flags. This event is tied to a preference tranche, see the tranche_done event. For the definition of the format and modifier codes, see the wp_linux_buffer_params.create request. This event sets tranche-specific flags. The scanout flag is a hint that direct scan-out may be attempted by the compositor on the target device if the client appropriately allocates a buffer. How to allocate a buffer that can be scanned out on the target device is implementation-defined. This event is tied to a preference tranche, see the tranche_done event. waypipe-v0.10.5/protocols/linux-drm-syncobj-v1.xml000066400000000000000000000305571507071711100221310ustar00rootroot00000000000000 Copyright 2016 The Chromium Authors. Copyright 2017 Intel Corporation Copyright 2018 Collabora, Ltd Copyright 2021 Simon Ser Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. This protocol allows clients to request explicit synchronization for buffers. It is tied to the Linux DRM synchronization object framework. Synchronization refers to co-ordination of pipelined operations performed on buffers. Most GPU clients will schedule an asynchronous operation to render to the buffer, then immediately send the buffer to the compositor to be attached to a surface. With implicit synchronization, ensuring that the rendering operation is complete before the compositor displays the buffer is an implementation detail handled by either the kernel or userspace graphics driver. By contrast, with explicit synchronization, DRM synchronization object timeline points mark when the asynchronous operations are complete. When submitting a buffer, the client provides a timeline point which will be waited on before the compositor accesses the buffer, and another timeline point that the compositor will signal when it no longer needs to access the buffer contents for the purposes of the surface commit. Linux DRM synchronization objects are documented at: https://dri.freedesktop.org/docs/drm/gpu/drm-mm.html#drm-sync-objects Warning! The protocol described in this file is currently in the testing phase. Backward compatible changes may be added together with the corresponding interface version bump. 
Backward incompatible changes can only be done by creating a new major version of the extension. This global is a factory interface, allowing clients to request explicit synchronization for buffers on a per-surface basis. See wp_linux_drm_syncobj_surface_v1 for more information. Destroy this explicit synchronization factory object. Other objects shall not be affected by this request. Instantiate an interface extension for the given wl_surface to provide explicit synchronization. If the given wl_surface already has an explicit synchronization object associated, the surface_exists protocol error is raised. Graphics APIs, like EGL or Vulkan, that manage the buffer queue and commits of a wl_surface themselves, are likely to be using this extension internally. If a client is using such an API for a wl_surface, it should not directly use this extension on that surface, to avoid raising a surface_exists protocol error. Import a DRM synchronization object timeline. If the FD cannot be imported, the invalid_timeline error is raised. This object represents an explicit synchronization object timeline imported by the client to the compositor. Destroy the synchronization object timeline. Other objects are not affected by this request, in particular timeline points set by set_acquire_point and set_release_point are not unset. This object is an add-on interface for wl_surface to enable explicit synchronization. Each surface can be associated with only one object of this interface at any time. Explicit synchronization is guaranteed to be supported for buffers created with any version of the linux-dmabuf protocol. Compositors are free to support explicit synchronization for additional buffer types. If at surface commit time the attached buffer does not support explicit synchronization, an unsupported_buffer error is raised. As long as the wp_linux_drm_syncobj_surface_v1 object is alive, the compositor may ignore implicit synchronization for buffers attached and committed to the wl_surface. The delivery of wl_buffer.release events for buffers attached to the surface becomes undefined. Clients must set both acquire and release points if and only if a non-null buffer is attached in the same surface commit. See the no_buffer, no_acquire_point and no_release_point protocol errors. If at surface commit time the acquire and release DRM syncobj timelines are identical, the acquire point value must be strictly less than the release point value, or else the conflicting_points protocol error is raised. Destroy this surface synchronization object. Any timeline point set by this object with set_acquire_point or set_release_point since the last commit may be discarded by the compositor. Any timeline point set by this object before the last commit will not be affected. Set the timeline point that must be signalled before the compositor may sample from the buffer attached with wl_surface.attach. The 64-bit unsigned value combined from point_hi and point_lo is the point value. The acquire point is double-buffered state, and will be applied on the next wl_surface.commit request for the associated surface. Thus, it applies only to the buffer that is attached to the surface at commit time. If an acquire point has already been attached during the same commit cycle, the new point replaces the old one. If the associated wl_surface was destroyed, a no_surface error is raised. If at surface commit time there is a pending acquire timeline point set but no pending buffer attached, a no_buffer error is raised. 
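The hi/lo split of timeline points is easy to get wrong. A minimal sketch of a commit that sets an acquire point together with the matching release point (described next), assuming wayland-scanner-generated bindings for the interface names used in this file; the header name is illustrative:

#include <stdint.h>
#include <wayland-client.h>
#include "linux-drm-syncobj-v1-client-protocol.h" /* assumed header name */

static void commit_with_explicit_sync(struct wl_surface *surface,
                struct wl_buffer *buffer,
                struct wp_linux_drm_syncobj_surface_v1 *sync_surface,
                struct wp_linux_drm_syncobj_timeline_v1 *acquire_timeline,
                uint64_t acquire_point,
                struct wp_linux_drm_syncobj_timeline_v1 *release_timeline,
                uint64_t release_point)
{
        wl_surface_attach(surface, buffer, 0, 0);
        /* Each 64-bit timeline point is split into hi/lo 32-bit halves. */
        wp_linux_drm_syncobj_surface_v1_set_acquire_point(sync_surface,
                        acquire_timeline,
                        (uint32_t)(acquire_point >> 32),
                        (uint32_t)acquire_point);
        wp_linux_drm_syncobj_surface_v1_set_release_point(sync_surface,
                        release_timeline,
                        (uint32_t)(release_point >> 32),
                        (uint32_t)release_point);
        /* Both points are double-buffered state and take effect with
         * this commit, which must also carry a non-null buffer. */
        wl_surface_commit(surface);
}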
If at surface commit time there is a pending buffer attached but no pending acquire timeline point set, the no_acquire_point protocol error is raised. Set the timeline point that must be signalled by the compositor when it has finished its usage of the buffer attached with wl_surface.attach for the relevant commit. Once the timeline point is signaled, and assuming the associated buffer is not pending release from other wl_surface.commit requests, no additional explicit or implicit synchronization with the compositor is required to safely re-use the buffer. Note that clients cannot rely on the release point being always signaled after the acquire point: compositors may release buffers without ever reading from them. In addition, the compositor may use different presentation paths for different commits, which may have different release behavior. As a result, the compositor may signal the release points in a different order than the client committed them. Because signaling a timeline point also signals every previous point, it is generally not safe to use the same timeline object for the release points of multiple buffers. The out-of-order signaling described above may lead to a release point being signaled before the compositor has finished reading. To avoid this, it is strongly recommended that each buffer should use a separate timeline for its release points. The 64-bit unsigned value combined from point_hi and point_lo is the point value. The release point is double-buffered state, and will be applied on the next wl_surface.commit request for the associated surface. Thus, it applies only to the buffer that is attached to the surface at commit time. If a release point has already been attached during the same commit cycle, the new point replaces the old one. If the associated wl_surface was destroyed, a no_surface error is raised. If at surface commit time there is a pending release timeline point set but no pending buffer attached, a no_buffer error is raised. If at surface commit time there is a pending buffer attached but no pending release timeline point set, the no_release_point protocol error is raised. waypipe-v0.10.5/protocols/presentation-time.xml000066400000000000000000000311321507071711100216560ustar00rootroot00000000000000 Copyright © 2013-2014 Collabora, Ltd. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. The main feature of this interface is accurate presentation timing feedback to ensure smooth video playback while maintaining audio/video synchronization. 
Some features use the concept of a presentation clock, which is defined in the presentation.clock_id event. A content update for a wl_surface is submitted by a wl_surface.commit request. Request 'feedback' associates with the wl_surface.commit and provides feedback on the content update, particularly the final realized presentation time. When the final realized presentation time is available, e.g. after a framebuffer flip completes, the requested presentation_feedback.presented events are sent. The final presentation time can differ from the compositor's predicted display update time and the update's target time, especially when the compositor misses its target vertical blanking period. These fatal protocol errors may be emitted in response to illegal presentation requests. Informs the server that the client will no longer be using this protocol object. Existing objects created by this object are not affected. Request presentation feedback for the current content submission on the given surface. This creates a new presentation_feedback object, which will deliver the feedback information once. If multiple presentation_feedback objects are created for the same submission, they will all deliver the same information. For details on what information is returned, see the presentation_feedback interface. This event tells the client in which clock domain the compositor interprets the timestamps used by the presentation extension. This clock is called the presentation clock. The compositor sends this event when the client binds to the presentation interface. The presentation clock does not change during the lifetime of the client connection. The clock identifier is platform dependent. On POSIX platforms, the identifier value is one of the clockid_t values accepted by clock_gettime(). clock_gettime() is defined by POSIX.1-2001. Timestamps in this clock domain are expressed as tv_sec_hi, tv_sec_lo, tv_nsec triples, each component being an unsigned 32-bit value. Whole seconds are in tv_sec which is a 64-bit value combined from tv_sec_hi and tv_sec_lo, and the additional fractional part in tv_nsec as nanoseconds. Hence, for valid timestamps tv_nsec must be in [0, 999999999]. Note that clock_id applies only to the presentation clock, and implies nothing about e.g. the timestamps used in the Wayland core protocol input events. Compositors should prefer a clock which does not jump and is not slewed e.g. by NTP. The absolute value of the clock is irrelevant. Precision of one millisecond or better is recommended. Clients must be able to query the current clock value directly, not by asking the compositor. A presentation_feedback object returns an indication that a wl_surface content update has become visible to the user. One object corresponds to one content update submission (wl_surface.commit). There are two possible outcomes: the content update is presented to the user, and a presentation timestamp delivered; or, the user did not see the content update because it was superseded or its surface destroyed, and the content update is discarded. Once a presentation_feedback object has delivered a 'presented' or 'discarded' event it is automatically destroyed. As presentation can be synchronized to only one output at a time, this event tells which output it was. This event is only sent prior to the presented event. As clients may bind to the same global wl_output multiple times, this event is sent for each bound instance that matches the synchronized output. 
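Because clients must be able to sample the presentation clock themselves, a clock_id handler typically just stashes the identifier for later clock_gettime() calls. A minimal sketch, assuming wayland-scanner-generated bindings for wp_presentation:

#include <stdint.h>
#include <time.h>

/* Saved from the clock_id event; often CLOCK_MONOTONIC in practice. */
static clockid_t presentation_clock;

static void handle_clock_id(void *data, struct wp_presentation *presentation,
                uint32_t clk_id)
{
        presentation_clock = (clockid_t)clk_id;
}

static uint64_t presentation_now_ns(void)
{
        struct timespec ts;
        clock_gettime(presentation_clock, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}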
If a client has not bound to the right wl_output global at all, this event is not sent. These flags provide information about how the presentation of the related content update was done. The intent is to help clients assess the reliability of the feedback and the visual quality with respect to possible tearing and timings. The presentation was synchronized to the "vertical retrace" by the display hardware such that tearing does not happen. Relying on software scheduling is not acceptable for this flag. If presentation is done by a copy to the active frontbuffer, then it must guarantee that tearing cannot happen. The display hardware provided measurements that the hardware driver converted into a presentation timestamp. Sampling a clock in software is not acceptable for this flag. The display hardware signalled that it started using the new image content. The opposite of this is e.g. a timer being used to guess when the display hardware has switched to the new image content. The presentation of this update was done zero-copy. This means the buffer from the client was given to display hardware as is, without copying it. Compositing with OpenGL counts as copying, even if textured directly from the client buffer. Possible zero-copy cases include direct scanout of a fullscreen surface and a surface on a hardware overlay. The associated content update was displayed to the user at the indicated time (tv_sec_hi/lo, tv_nsec). For the interpretation of the timestamp, see presentation.clock_id event. The timestamp corresponds to the time when the content update turned into light the first time on the surface's main output. Compositors may approximate this from the framebuffer flip completion events from the system, and the latency of the physical display path if known. This event is preceded by all related sync_output events telling which output's refresh cycle the feedback corresponds to, i.e. the main output for the surface. Compositors are recommended to choose the output containing the largest part of the wl_surface, or keeping the output they previously chose. Having a stable presentation output association helps clients predict future output refreshes (vblank). The 'refresh' argument gives the compositor's prediction of how many nanoseconds after tv_sec, tv_nsec the very next output refresh may occur. This is to further aid clients in predicting future refreshes, i.e., estimating the timestamps targeting the next few vblanks. If such prediction cannot usefully be done, the argument is zero. For version 2 and later, if the output does not have a constant refresh rate, explicit video mode switches excluded, then the refresh argument must be either an appropriate rate picked by the compositor (e.g. fastest rate), or 0 if no such rate exists. For version 1, if the output does not have a constant refresh rate, the refresh argument must be zero. The 64-bit value combined from seq_hi and seq_lo is the value of the output's vertical retrace counter when the content update was first scanned out to the display. This value must be compatible with the definition of MSC in GLX_OML_sync_control specification. Note, that if the display path has a non-zero latency, the time instant specified by this counter may differ from the timestamp's. If the output does not have a concept of vertical retrace or a refresh cycle, or the output device is self-refreshing without a way to query the refresh count, then the arguments seq_hi and seq_lo must be zero. The content update was never displayed to the user. 
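A minimal sketch of the matching feedback handlers, assuming wayland-scanner-generated bindings (the sync_output handler is omitted); note how the split 64-bit timestamp and retrace counter are reassembled:

#include <stdint.h>

static void feedback_presented(void *data,
                struct wp_presentation_feedback *feedback, uint32_t tv_sec_hi,
                uint32_t tv_sec_lo, uint32_t tv_nsec, uint32_t refresh,
                uint32_t seq_hi, uint32_t seq_lo, uint32_t flags)
{
        /* Reassemble the split 64-bit fields. */
        uint64_t sec = ((uint64_t)tv_sec_hi << 32) | tv_sec_lo;
        uint64_t msc = ((uint64_t)seq_hi << 32) | seq_lo;
        uint64_t when_ns = sec * 1000000000ull + tv_nsec;
        /* 'refresh' predicts the next refresh in nanoseconds; 0 if unknown. */
        (void)when_ns; (void)msc; (void)refresh; (void)flags;
        /* The object is single-shot; destroy the client-side proxy. */
        wp_presentation_feedback_destroy(feedback);
}

static void feedback_discarded(void *data,
                struct wp_presentation_feedback *feedback)
{
        /* The update was never shown; no timestamp is available. */
        wp_presentation_feedback_destroy(feedback);
}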
waypipe-v0.10.5/protocols/primary-selection-unstable-v1.xml000066400000000000000000000243451507071711100240240ustar00rootroot00000000000000 Copyright © 2015, 2016 Red Hat Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. This protocol provides the ability to have a primary selection device to match that of the X server. This primary selection is a shortcut to the common clipboard selection, where text just needs to be selected in order to allow copying it elsewhere. The de facto way to perform this action is the middle mouse button, although it is not limited to this one. Clients wishing to honor primary selection should create a primary selection source and set it as the selection through wp_primary_selection_device.set_selection whenever the text selection changes. In order to minimize calls in pointer-driven text selection, it should happen only once after the operation finished. Similarly, a NULL source should be set when text is unselected. wp_primary_selection_offer objects are first announced through the wp_primary_selection_device.data_offer event. Immediately after this event, the primary data offer will emit wp_primary_selection_offer.offer events to let know of the mime types being offered. When the primary selection changes, the client with the keyboard focus will receive wp_primary_selection_device.selection events. Only the client with the keyboard focus will receive such events with a non-NULL wp_primary_selection_offer. Across keyboard focus changes, previously focused clients will receive wp_primary_selection_device.events with a NULL wp_primary_selection_offer. In order to request the primary selection data, the client must pass a recent serial pertaining to the press event that is triggering the operation, if the compositor deems the serial valid and recent, the wp_primary_selection_source.send event will happen in the other end to let the transfer begin. The client owning the primary selection should write the requested data, and close the file descriptor immediately. If the primary selection owner client disappeared during the transfer, the client reading the data will receive a wp_primary_selection_device.selection event with a NULL wp_primary_selection_offer, the client should take this as a hint to finish the reads related to the no longer existing offer. The primary selection owner should be checking for errors during writes, merely cancelling the ongoing transfer if any happened. 
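The ownership flow just described, as a minimal sketch assuming wayland-scanner-generated bindings; note that the generated symbols of this unstable protocol carry a zwp_ prefix and _v1 suffix, and the mime type and helper name are illustrative:

#include <stdint.h>

static void own_primary_selection(
                struct zwp_primary_selection_device_manager_v1 *manager,
                struct zwp_primary_selection_device_v1 *device,
                const struct zwp_primary_selection_source_v1_listener *listener,
                void *user_data, uint32_t serial)
{
        struct zwp_primary_selection_source_v1 *source =
                        zwp_primary_selection_device_manager_v1_create_source(
                                        manager);
        /* The listener supplies the 'send' handler (write the data, close
         * the fd) and the 'cancelled' handler (clean up the source). */
        zwp_primary_selection_source_v1_add_listener(source, listener,
                        user_data);
        zwp_primary_selection_source_v1_offer(source,
                        "text/plain;charset=utf-8");
        /* 'serial' must come from the press event triggering the selection. */
        zwp_primary_selection_device_v1_set_selection(device, source, serial);
}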
The primary selection device manager is a singleton global object that provides access to the primary selection. It allows clients to create wp_primary_selection_source objects, as well as retrieving the per-seat wp_primary_selection_device objects. Create a new primary selection source. Create a new data device for a given seat. Destroy the primary selection device manager. Replaces the current selection. The previous owner of the primary selection will receive a wp_primary_selection_source.cancelled event. To unset the selection, set the source to NULL. Introduces a new wp_primary_selection_offer object that may be used to receive the current primary selection. Immediately following this event, the new wp_primary_selection_offer object will send wp_primary_selection_offer.offer events to describe the offered mime types. The wp_primary_selection_device.selection event is sent to notify the client of a new primary selection. This event is sent after the wp_primary_selection.data_offer event introducing this object, and after the offer has announced its mimetypes through wp_primary_selection_offer.offer. The data_offer is valid until a new offer or NULL is received or until the client loses keyboard focus. The client must destroy the previous selection data_offer, if any, upon receiving this event. Destroy the primary selection device. A wp_primary_selection_offer represents an offer to transfer the contents of the primary selection clipboard to the client. Similar to wl_data_offer, the offer also describes the mime types that the data can be converted to and provides the mechanisms for transferring the data directly to the client. To transfer the contents of the primary selection clipboard, the client issues this request and indicates the mime type that it wants to receive. The transfer happens through the passed file descriptor (typically created with the pipe system call). The source client writes the data in the mime type representation requested and then closes the file descriptor. The receiving client reads from the read end of the pipe until EOF and closes its end, at which point the transfer is complete. Destroy the primary selection offer. Sent immediately after announcing the wp_primary_selection_offer through wp_primary_selection_device.data_offer. One event is sent per offered mime type. The source side of a wp_primary_selection_offer. It provides a way to describe the offered data and respond to requests to transfer the requested contents of the primary selection clipboard. This request adds a mime type to the set of mime types advertised to targets. Can be called several times to offer multiple types. Destroy the primary selection source. Request for the current primary selection contents from the client. Send the specified mime type over the passed file descriptor, then close it. This primary selection source is no longer valid. The client should clean up and destroy this primary selection source.
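And the receiving side of the pipe-based transfer described above, again as a hedged sketch; the roundtrip stands in for integration with a real event loop:

#include <unistd.h>
#include <wayland-client.h>

static int read_primary_selection(struct wl_display *display,
                struct zwp_primary_selection_offer_v1 *offer,
                const char *mime_type)
{
        int fds[2];
        if (pipe(fds) == -1)
                return -1;
        zwp_primary_selection_offer_v1_receive(offer, mime_type, fds[1]);
        close(fds[1]); /* keep only the read end on our side */
        wl_display_roundtrip(display); /* ensure the request reaches the source */
        char buf[4096];
        ssize_t n;
        while ((n = read(fds[0], buf, sizeof buf)) > 0) {
                /* ... consume n bytes until EOF ... */
        }
        close(fds[0]);
        return 0;
}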
waypipe-v0.10.5/protocols/security-context-v1.xml000066400000000000000000000176351507071711100221000ustar00rootroot00000000000000 Copyright © 2021 Simon Ser Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. This interface allows a client to register a new Wayland connection to the compositor and attach a security context to it. This is intended to be used by sandboxes. Sandbox engines attach a security context to all connections coming from inside the sandbox. The compositor can then restrict the features that the sandboxed connections can use. Compositors should forbid nesting multiple security contexts by not exposing wp_security_context_manager_v1 global to clients with a security context attached, or by sending the nested protocol error. Nested security contexts are dangerous because they can potentially allow privilege escalation of a sandboxed client. Warning! The protocol described in this file is currently in the testing phase. Backward compatible changes may be added together with the corresponding interface version bump. Backward incompatible changes can only be done by creating a new major version of the extension. Destroy the manager. This doesn't destroy objects created with the manager. Creates a new security context with a socket listening FD. The compositor will accept new client connections on listen_fd. listen_fd must be ready to accept new connections when this request is sent by the client. In other words, the client must call bind(2) and listen(2) before sending the FD. close_fd is a FD that will signal hangup when the compositor should stop accepting new connections on listen_fd. The compositor must continue to accept connections on listen_fd when the Wayland client which created the security context disconnects. After sending this request, closing listen_fd and close_fd remains the only valid operation on them. The security context allows a client to register a new client and attach security context metadata to the connections. When both are set, the combination of the application ID and the sandbox engine must uniquely identify an application. The same application ID will be used across instances (e.g. if the application is restarted, or if the application is started multiple times). When both are set, the combination of the instance ID and the sandbox engine must uniquely identify a running instance of an application. Destroy the security context object. Attach a unique sandbox engine name to the security context. The name should follow the reverse-DNS style (e.g. "org.flatpak"). 
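A minimal sketch of the listener setup just described, assuming wayland-scanner-generated bindings and the commit request described below; the pipe-based close_fd arrangement and the header name are illustrative, and error handling is omitted:

#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include "security-context-v1-client-protocol.h" /* assumed header name */

/* Export a listening socket whose future connections get a security
 * context attached. keep_open_fd[1] must stay open for the sandbox's
 * lifetime; closing it signals the compositor to stop accepting. */
static void export_sandboxed_socket(
                struct wp_security_context_manager_v1 *manager,
                const char *path, int keep_open_fd[2])
{
        int listen_fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
        /* The protocol requires bind(2) and listen(2) before sending. */
        bind(listen_fd, (struct sockaddr *)&addr, sizeof addr);
        listen(listen_fd, 16);

        /* One pipe end goes to the compositor as close_fd; hangup occurs
         * when our retained write end is eventually closed. */
        pipe(keep_open_fd);

        struct wp_security_context_v1 *ctx =
                        wp_security_context_manager_v1_create_listener(manager,
                                        listen_fd, keep_open_fd[0]);
        wp_security_context_v1_set_sandbox_engine(ctx, "org.flatpak");
        wp_security_context_v1_commit(ctx);
        wp_security_context_v1_destroy(ctx);
        /* Closing our copies is the only valid operation left on
         * listen_fd and the close_fd we sent. */
        close(listen_fd);
        close(keep_open_fd[0]);
}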
A list of well-known engines is maintained at: https://gitlab.freedesktop.org/wayland/wayland-protocols/-/blob/main/staging/security-context/engines.md It is a protocol error to call this request twice. The already_set error is sent in this case. Attach an application ID to the security context. The application ID is an opaque, sandbox-specific identifier for an application. See the well-known engines document for more details: https://gitlab.freedesktop.org/wayland/wayland-protocols/-/blob/main/staging/security-context/engines.md The compositor may use the application ID to group clients belonging to the same security context application. Whether this request is optional or not depends on the sandbox engine used. It is a protocol error to call this request twice. The already_set error is sent in this case. Attach an instance ID to the security context. The instance ID is an opaque, sandbox-specific identifier for a running instance of an application. See the well-known engines document for more details: https://gitlab.freedesktop.org/wayland/wayland-protocols/-/blob/main/staging/security-context/engines.md Whether this request is optional or not depends on the sandbox engine used. It is a protocol error to call this request twice. The already_set error is sent in this case. Atomically register the new client and attach the security context metadata. If the provided metadata is inconsistent or does not match with out of band metadata (see https://gitlab.freedesktop.org/wayland/wayland-protocols/-/blob/main/staging/security-context/engines.md), the invalid_metadata error may be sent eventually. It's a protocol error to send any request other than "destroy" after this request. In this case, the already_used error is sent. waypipe-v0.10.5/protocols/viewporter.xml000066400000000000000000000177051507071711100204270ustar00rootroot00000000000000 Copyright © 2013-2016 Collabora, Ltd. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. The global interface exposing surface cropping and scaling capabilities is used to instantiate an interface extension for a wl_surface object. This extended interface will then allow cropping and scaling the surface contents, effectively disconnecting the direct relationship between the buffer and the surface size. Informs the server that the client will not be using this protocol object anymore. This does not affect any other objects, wp_viewport objects included. Instantiate an interface extension for the given wl_surface to crop and scale its content. 
If the given wl_surface already has a wp_viewport object associated, the viewport_exists protocol error is raised. An additional interface to a wl_surface object, which allows the client to specify the cropping and scaling of the surface contents. This interface works with two concepts: the source rectangle (src_x, src_y, src_width, src_height), and the destination size (dst_width, dst_height). The contents of the source rectangle are scaled to the destination size, and content outside the source rectangle is ignored. This state is double-buffered, see wl_surface.commit. The two parts of crop and scale state are independent: the source rectangle, and the destination size. Initially both are unset, that is, no scaling is applied. The whole of the current wl_buffer is used as the source, and the surface size is as defined in wl_surface.attach. If the destination size is set, it causes the surface size to become dst_width, dst_height. The source (rectangle) is scaled to exactly this size. This overrides whatever the attached wl_buffer size is, unless the wl_buffer is NULL. If the wl_buffer is NULL, the surface has no content and therefore no size. Otherwise, the size is always at least 1x1 in surface local coordinates. If the source rectangle is set, it defines what area of the wl_buffer is taken as the source. If the source rectangle is set and the destination size is not set, then src_width and src_height must be integers, and the surface size becomes the source rectangle size. This results in cropping without scaling. If src_width or src_height are not integers and destination size is not set, the bad_size protocol error is raised when the surface state is applied. The coordinate transformations from buffer pixel coordinates up to the surface-local coordinates happen in the following order: 1. buffer_transform (wl_surface.set_buffer_transform) 2. buffer_scale (wl_surface.set_buffer_scale) 3. crop and scale (wp_viewport.set*) This means, that the source rectangle coordinates of crop and scale are given in the coordinates after the buffer transform and scale, i.e. in the coordinates that would be the surface-local coordinates if the crop and scale was not applied. If src_x or src_y are negative, the bad_value protocol error is raised. Otherwise, if the source rectangle is partially or completely outside of the non-NULL wl_buffer, then the out_of_buffer protocol error is raised when the surface state is applied. A NULL wl_buffer does not raise the out_of_buffer error. If the wl_surface associated with the wp_viewport is destroyed, all wp_viewport requests except 'destroy' raise the protocol error no_surface. If the wp_viewport object is destroyed, the crop and scale state is removed from the wl_surface. The change will be applied on the next wl_surface.commit. The associated wl_surface's crop and scale state is removed. The change is applied on the next wl_surface.commit. Set the source rectangle of the associated wl_surface. See wp_viewport for the description, and relation to the wl_buffer size. If all of x, y, width and height are -1.0, the source rectangle is unset instead. Any other set of values where width or height are zero or negative, or x or y are negative, raise the bad_value protocol error. The crop and scale state is double-buffered, see wl_surface.commit. Set the destination size of the associated wl_surface. See wp_viewport for the description, and relation to the wl_buffer size. If width is -1 and height is -1, the destination size is unset instead. 
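A minimal sketch of crop-and-scale in use, assuming wayland-scanner-generated viewporter bindings; note that set_source takes 24.8 fixed-point wl_fixed_t values while set_destination takes plain integers:

#include <wayland-client.h>
#include "viewporter-client-protocol.h" /* assumed header name */

static void crop_and_scale(struct wp_viewporter *viewporter,
                struct wl_surface *surface)
{
        struct wp_viewport *viewport =
                        wp_viewporter_get_viewport(viewporter, surface);
        /* Source rectangle: a 128x128 region at the buffer's origin, in
         * post-buffer-transform, post-buffer-scale coordinates. */
        wp_viewport_set_source(viewport,
                        wl_fixed_from_int(0), wl_fixed_from_int(0),
                        wl_fixed_from_int(128), wl_fixed_from_int(128));
        /* Destination size: the surface becomes 256x256 surface-local
         * units, scaling the source region up. */
        wp_viewport_set_destination(viewport, 256, 256);
        wl_surface_commit(surface); /* both are double-buffered state */
}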
Any other pair of values for width and height that contains zero or negative values raises the bad_value protocol error. The crop and scale state is double-buffered, see wl_surface.commit. waypipe-v0.10.5/protocols/virtual-keyboard-unstable-v1.xml000066400000000000000000000114261507071711100236360ustar00rootroot00000000000000 Copyright © 2008-2011 Kristian Høgsberg Copyright © 2010-2013 Intel Corporation Copyright © 2012-2013 Collabora, Ltd. Copyright © 2018 Purism SPC Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. The virtual keyboard provides an application with requests which emulate the behaviour of a physical keyboard. This interface can be used by clients on its own to provide raw input events, or it can accompany the input method protocol. Provide a file descriptor to the compositor which can be memory-mapped to provide a keyboard mapping description. Format carries a value from the keymap_format enumeration. A key was pressed or released. The time argument is a timestamp with millisecond granularity, with an undefined base. All requests regarding a single object must share the same clock. Keymap must be set before issuing this request. State carries a value from the key_state enumeration. Notifies the compositor that the modifier and/or group state has changed, and that it should update its state. The client should use the wl_keyboard.modifiers event to synchronize its internal state with seat state. Keymap must be set before issuing this request. A virtual keyboard manager allows an application to provide keyboard input events as if they came from a physical keyboard. Creates a new virtual keyboard associated with a seat. If the compositor enables a keyboard to perform arbitrary actions, it should present an error when an untrusted client requests a new keyboard. waypipe-v0.10.5/protocols/wayland-drm.xml000066400000000000000000000173051507071711100204330ustar00rootroot00000000000000 Copyright © 2008-2011 Kristian Høgsberg Copyright © 2010-2011 Intel Corporation Permission to use, copy, modify, distribute, and sell this software and its documentation for any purpose is hereby granted without fee, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of the copyright holders not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission.
The copyright holders make no representations about the suitability of this software for any purpose. It is provided "as is" without express or implied warranty. THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. Bitmask of capabilities. waypipe-v0.10.5/protocols/wayland.xml000066400000000000000000004463761507071711100176720ustar00rootroot00000000000000 Copyright © 2008-2011 Kristian Høgsberg Copyright © 2010-2011 Intel Corporation Copyright © 2012-2013 Collabora, Ltd. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. The core global object. This is a special singleton object. It is used for internal Wayland protocol features. The sync request asks the server to emit the 'done' event on the returned wl_callback object. Since requests are handled in-order and events are delivered in-order, this can be used as a barrier to ensure all previous requests and the resulting events have been handled. The object returned by this request will be destroyed by the compositor after the callback is fired and as such the client must not attempt to use it after that point. The callback_data passed in the callback is undefined and should be ignored. This request creates a registry object that allows the client to list and bind the global objects available from the compositor. It should be noted that the server side resources consumed in response to a get_registry request can only be released when the client disconnects, not when the client side proxy is destroyed. Therefore, clients should invoke get_registry as infrequently as possible to avoid wasting memory. The error event is sent out when a fatal (non-recoverable) error has occurred. The object_id argument is the object where the error occurred, most often in response to a request to that object. The code identifies the error and is defined by the object interface. As such, each interface defines its own set of error codes. The message is a brief description of the error, for (debugging) convenience. These errors are global and can be emitted in response to any server request. This event is used internally by the object ID management logic. 
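The sync request described above is the building block behind wl_display_roundtrip(). A minimal hand-rolled barrier, using only core libwayland-client API:

#include <wayland-client.h>

static void sync_done(void *data, struct wl_callback *callback,
                uint32_t callback_data)
{
        /* callback_data is undefined and must be ignored. */
        *(int *)data = 1;
        wl_callback_destroy(callback); /* one-shot object */
}

static const struct wl_callback_listener sync_listener = {
        .done = sync_done,
};

static void barrier(struct wl_display *display)
{
        int done = 0;
        struct wl_callback *callback = wl_display_sync(display);
        wl_callback_add_listener(callback, &sync_listener, &done);
        /* All requests sent before the sync, and their resulting events,
         * have been handled once 'done' fires. */
        while (!done && wl_display_dispatch(display) != -1)
                ;
}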
When a client deletes an object that it had created, the server will send this event to acknowledge that it has seen the delete request. When the client receives this event, it will know that it can safely reuse the object ID. The singleton global registry object. The server has a number of global objects that are available to all clients. These objects typically represent an actual object in the server (for example, an input device) or they are singleton objects that provide extension functionality. When a client creates a registry object, the registry object will emit a global event for each global currently in the registry. Globals come and go as a result of device or monitor hotplugs, reconfiguration or other events, and the registry will send out global and global_remove events to keep the client up to date with the changes. To mark the end of the initial burst of events, the client can use the wl_display.sync request immediately after calling wl_display.get_registry. A client can bind to a global object by using the bind request. This creates a client-side handle that lets the object emit events to the client and lets the client invoke requests on the object. Binds a new, client-created object to the server using the specified name as the identifier. Notify the client of global objects. The event notifies the client that a global object with the given name is now available, and it implements the given version of the given interface. Notify the client of removed global objects. This event notifies the client that the global identified by name is no longer available. If the client bound to the global using the bind request, the client should now destroy that object. The object remains valid and requests to the object will be ignored until the client destroys it, to avoid races between the global going away and a client sending a request to it. Clients can handle the 'done' event to get notified when the related request is done. Note, because wl_callback objects are created from multiple independent factory interfaces, the wl_callback interface is frozen at version 1. Notify the client when the related request is done. A compositor. This object is a singleton global. The compositor is in charge of combining the contents of multiple surfaces into one displayable output. Ask the compositor to create a new surface. Ask the compositor to create a new region. The wl_shm_pool object encapsulates a piece of memory shared between the compositor and client. Through the wl_shm_pool object, the client can allocate shared memory wl_buffer objects. All objects created through the same pool share the same underlying mapped memory. Reusing the mapped memory avoids the setup/teardown overhead and is useful when interactively resizing a surface or for many small buffers. Create a wl_buffer object from the pool. The buffer is created offset bytes into the pool and has width and height as specified. The stride argument specifies the number of bytes from the beginning of one row to the beginning of the next. The format is the pixel format of the buffer and must be one of those advertised through the wl_shm.format event. A buffer will keep a reference to the pool it was created from so it is valid to destroy the pool immediately after creating a buffer from it. Destroy the shared memory pool. The mmapped memory will be released when all buffers that have been created from this pool are gone. 
This request will cause the server to remap the backing memory for the pool from the file descriptor passed when the pool was created, but using the new size. This request can only be used to make the pool bigger. This request only changes the amount of bytes that are mmapped by the server and does not touch the file corresponding to the file descriptor passed at creation time. It is the client's responsibility to ensure that the file is at least as big as the new pool size. A singleton global object that provides support for shared memory. Clients can create wl_shm_pool objects using the create_pool request. On binding the wl_shm object one or more format events are emitted to inform clients about the valid pixel formats that can be used for buffers. These errors can be emitted in response to wl_shm requests. This describes the memory layout of an individual pixel. All renderers should support argb8888 and xrgb8888 but any other formats are optional and may not be supported by the particular renderer in use. The drm format codes match the macros defined in drm_fourcc.h, except argb8888 and xrgb8888. The formats actually supported by the compositor will be reported by the format event. For all wl_shm formats and unless specified in another protocol extension, pre-multiplied alpha is used for pixel values. Create a new wl_shm_pool object. The pool can be used to create shared memory based buffer objects. The server will mmap size bytes of the passed file descriptor, to use as backing memory for the pool. Informs the client about a valid pixel format that can be used for buffers. Known formats include argb8888 and xrgb8888. Using this request a client can tell the server that it is not going to use the shm object anymore. Objects created via this interface remain unaffected. A buffer provides the content for a wl_surface. Buffers are created through factory interfaces such as wl_shm, wp_linux_buffer_params (from the linux-dmabuf protocol extension) or similar. It has a width and a height and can be attached to a wl_surface, but the mechanism by which a client provides and updates the contents is defined by the buffer factory interface. Color channels are assumed to be electrical rather than optical (in other words, encoded with a transfer function) unless otherwise specified. If the buffer uses a format that has an alpha channel, the alpha channel is assumed to be premultiplied into the electrical color channel values (after transfer function encoding) unless otherwise specified. Note, because wl_buffer objects are created from multiple independent factory interfaces, the wl_buffer interface is frozen at version 1. Destroy a buffer. If and how you need to release the backing storage is defined by the buffer factory interface. For possible side-effects to a surface, see wl_surface.attach. Sent when this wl_buffer is no longer used by the compositor. The client is now free to reuse or destroy this buffer and its backing storage. If a client receives a release event before the frame callback requested in the same wl_surface.commit that attaches this wl_buffer to a surface, then the client is immediately free to reuse the buffer and its backing storage, and does not need a second buffer for the next surface content update. Typically this is possible, when the compositor maintains a copy of the wl_surface contents, e.g. as a GL texture. This is an important optimization for GL(ES) compositors with wl_shm clients. 
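As a sketch of the pool and buffer lifetime rules above, the following allocates one 256x256 argb8888 wl_buffer and listens for its release event. Using memfd_create for the backing file descriptor is a Linux-specific assumption (this protocol does not mandate it), create_argb_buffer is an invented helper name, and error handling is omitted:

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <unistd.h>
    #include <wayland-client.h>

    static void handle_release(void *data, struct wl_buffer *buffer)
    {
            /* The compositor no longer reads the buffer; it may now be
             * reused or destroyed, per the release event above. */
            *(int *)data = 0;
    }

    static const struct wl_buffer_listener buffer_listener = {
            handle_release,
    };

    static struct wl_buffer *create_argb_buffer(struct wl_shm *shm,
            int *busy, void **pixels_out)
    {
            const int32_t width = 256, height = 256, stride = width * 4;
            const int32_t size = stride * height;
            int fd = memfd_create("wl-shm-pool", 0);
            ftruncate(fd, size);
            *pixels_out = mmap(NULL, size, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);
            struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
            struct wl_buffer *buffer = wl_shm_pool_create_buffer(pool, 0,
                    width, height, stride, WL_SHM_FORMAT_ARGB8888);
            /* The buffer keeps a reference to the pool's memory, so the
             * pool object and the fd can be dropped right away, as the
             * create_buffer and destroy descriptions above permit. */
            wl_shm_pool_destroy(pool);
            close(fd);
            *busy = 0;
            wl_buffer_add_listener(buffer, &buffer_listener, busy);
            return buffer;
    }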
A wl_data_offer represents a piece of data offered for transfer by another client (the source client). It is used by the copy-and-paste and drag-and-drop mechanisms. The offer describes the different mime types that the data can be converted to and provides the mechanism for transferring the data directly from the source client. Indicate that the client can accept the given mime type, or NULL for not accepted. For objects of version 2 or older, this request is used by the client to give feedback whether the client can receive the given mime type, or NULL if none is accepted; the feedback does not determine whether the drag-and-drop operation succeeds or not. For objects of version 3 or newer, this request determines the final result of the drag-and-drop operation. If the end result is that no mime types were accepted, the drag-and-drop operation will be cancelled and the corresponding drag source will receive wl_data_source.cancelled. Clients may still use this event in conjunction with wl_data_source.action for feedback. To transfer the offered data, the client issues this request and indicates the mime type it wants to receive. The transfer happens through the passed file descriptor (typically created with the pipe system call). The source client writes the data in the mime type representation requested and then closes the file descriptor. The receiving client reads from the read end of the pipe until EOF and then closes its end, at which point the transfer is complete. This request may happen multiple times for different mime types, both before and after wl_data_device.drop. Drag-and-drop destination clients may preemptively fetch data or examine it more closely to determine acceptance. Destroy the data offer. Sent immediately after creating the wl_data_offer object. One event per offered mime type. Notifies the compositor that the drag destination successfully finished the drag-and-drop operation. Upon receiving this request, the compositor will emit wl_data_source.dnd_finished on the drag source client. It is a client error to perform other requests than wl_data_offer.destroy after this one. It is also an error to perform this request after a NULL mime type has been set in wl_data_offer.accept or no action was received through wl_data_offer.action. If wl_data_offer.finish request is received for a non drag and drop operation, the invalid_finish protocol error is raised. Sets the actions that the destination side client supports for this operation. This request may trigger the emission of wl_data_source.action and wl_data_offer.action events if the compositor needs to change the selected action. This request can be called multiple times throughout the drag-and-drop operation, typically in response to wl_data_device.enter or wl_data_device.motion events. This request determines the final result of the drag-and-drop operation. If the end result is that no action is accepted, the drag source will receive wl_data_source.cancelled. The dnd_actions argument must contain only values expressed in the wl_data_device_manager.dnd_actions enum, and the preferred_action argument must only contain one of those values set, otherwise it will result in a protocol error. 
While managing an "ask" action, the destination drag-and-drop client may perform further wl_data_offer.receive requests, and is expected to perform one last wl_data_offer.set_actions request with a preferred action other than "ask" (and optionally wl_data_offer.accept) before requesting wl_data_offer.finish, in order to convey the action selected by the user. If the preferred action is not in the wl_data_offer.source_actions mask, an error will be raised. If the "ask" action is dismissed (e.g. user cancellation), the client is expected to perform wl_data_offer.destroy right away. This request can only be made on drag-and-drop offers, a protocol error will be raised otherwise. This event indicates the actions offered by the data source. It will be sent immediately after creating the wl_data_offer object, or anytime the source side changes its offered actions through wl_data_source.set_actions. This event indicates the action selected by the compositor after matching the source/destination side actions. Only one action (or none) will be offered here. This event can be emitted multiple times during the drag-and-drop operation in response to destination side action changes through wl_data_offer.set_actions. This event will no longer be emitted after wl_data_device.drop happened on the drag-and-drop destination, the client must honor the last action received, or the last preferred one set through wl_data_offer.set_actions when handling an "ask" action. Compositors may also change the selected action on the fly, mainly in response to keyboard modifier changes during the drag-and-drop operation. The most recent action received is always the valid one. Prior to receiving wl_data_device.drop, the chosen action may change (e.g. due to keyboard modifiers being pressed). At the time of receiving wl_data_device.drop the drag-and-drop destination must honor the last action received. Action changes may still happen after wl_data_device.drop, especially on "ask" actions, where the drag-and-drop destination may choose another action afterwards. Action changes happening at this stage are always the result of inter-client negotiation, the compositor shall no longer be able to induce a different action. Upon "ask" actions, it is expected that the drag-and-drop destination may potentially choose a different action and/or mime type, based on wl_data_offer.source_actions and finally chosen by the user (e.g. popping up a menu with the available options). The final wl_data_offer.set_actions and wl_data_offer.accept requests must happen before the call to wl_data_offer.finish. The wl_data_source object is the source side of a wl_data_offer. It is created by the source client in a data transfer and provides a way to describe the offered data and a way to respond to requests to transfer the data. This request adds a mime type to the set of mime types advertised to targets. Can be called several times to offer multiple types. Destroy the data source. Sent when a target accepts pointer_focus or motion events. If a target does not accept any of the offered types, type is NULL. Used for feedback during drag-and-drop. Request for data from the client. Send the data as the specified mime type over the passed file descriptor, then close it. This data source is no longer valid. There are several reasons why this could happen: - The data source has been replaced by another data source. 
- The drag-and-drop operation was performed, but the drop destination did not accept any of the mime types offered through wl_data_source.target.

- The drag-and-drop operation was performed, but the drop destination did not select any of the actions present in the mask offered through wl_data_source.action.

- The drag-and-drop operation was performed but didn't happen over a surface.

- The compositor cancelled the drag-and-drop operation (e.g. compositor dependent timeouts to avoid stale drag-and-drop transfers).

The client should clean up and destroy this data source.

For objects of version 2 or older, wl_data_source.cancelled will only be emitted if the data source was replaced by another data source.

Sets the actions that the source side client supports for this operation. This request may trigger wl_data_source.action and wl_data_offer.action events if the compositor needs to change the selected action. The dnd_actions argument must contain only values expressed in the wl_data_device_manager.dnd_actions enum, otherwise it will result in a protocol error.

This request must be made once only, and can only be made on sources used in drag-and-drop, so it must be performed before wl_data_device.start_drag. Attempting to use the source other than for drag-and-drop will raise a protocol error.

The user performed the drop action. This event does not indicate acceptance; wl_data_source.cancelled may still be emitted afterwards if the drop destination does not accept any mime type. However, this event might not be received if the compositor cancelled the drag-and-drop operation before this event could happen.

Note that the data_source may still be used in the future and should not be destroyed here.

The drop destination finished interoperating with this data source, so the client is now free to destroy this data source and free all associated data. If the action used to perform the operation was "move", the source can now delete the transferred data.

This event indicates the action selected by the compositor after matching the source/destination side actions. Only one action (or none) will be offered here. This event can be emitted multiple times during the drag-and-drop operation, mainly in response to destination side changes through wl_data_offer.set_actions, and as the data device enters/leaves surfaces. It is only possible to receive this event after wl_data_source.dnd_drop_performed if the drag-and-drop operation ended in an "ask" action, in which case the final wl_data_source.action event will happen immediately before wl_data_source.dnd_finished.

Compositors may also change the selected action on the fly, mainly in response to keyboard modifier changes during the drag-and-drop operation. The most recent action received is always the valid one. The chosen action may change alongside negotiation (e.g. an "ask" action can turn into a "move" operation), so the effects of the final action must always be applied in wl_data_offer.dnd_finished. Clients can trigger cursor surface changes from this point, so they reflect the current action.

There is one wl_data_device per seat which can be obtained from the global wl_data_device_manager singleton. A wl_data_device provides access to inter-client data transfer mechanisms such as copy-and-paste and drag-and-drop.

This request asks the compositor to start a drag-and-drop operation on behalf of the client. The source argument is the data source that provides the data for the eventual data transfer.
If source is NULL, enter, leave and motion events are sent only to the client that initiated the drag and the client is expected to handle the data passing internally. If source is destroyed, the drag-and-drop session will be cancelled. The origin surface is the surface where the drag originates and the client must have an active implicit grab that matches the serial. The icon surface is an optional (can be NULL) surface that provides an icon to be moved around with the cursor. Initially, the top-left corner of the icon surface is placed at the cursor hotspot, but subsequent wl_surface.offset requests can move the relative position. Attach requests must be confirmed with wl_surface.commit as usual. The icon surface is given the role of a drag-and-drop icon. If the icon surface already has another role, it raises a protocol error. The input region is ignored for wl_surfaces with the role of a drag-and-drop icon. The given source may not be used in any further set_selection or start_drag requests. Attempting to reuse a previously-used source may send a used_source error. This request asks the compositor to set the selection to the data from the source on behalf of the client. To unset the selection, set the source to NULL. The given source may not be used in any further set_selection or start_drag requests. Attempting to reuse a previously-used source may send a used_source error. The data_offer event introduces a new wl_data_offer object, which will subsequently be used in either the data_device.enter event (for drag-and-drop) or the data_device.selection event (for selections). Immediately following the data_device.data_offer event, the new data_offer object will send out data_offer.offer events to describe the mime types it offers. This event is sent when an active drag-and-drop pointer enters a surface owned by the client. The position of the pointer at enter time is provided by the x and y arguments, in surface-local coordinates. This event is sent when the drag-and-drop pointer leaves the surface and the session ends. The client must destroy the wl_data_offer introduced at enter time at this point. This event is sent when the drag-and-drop pointer moves within the currently focused surface. The new position of the pointer is provided by the x and y arguments, in surface-local coordinates. The event is sent when a drag-and-drop operation is ended because the implicit grab is removed. The drag-and-drop destination is expected to honor the last action received through wl_data_offer.action, if the resulting action is "copy" or "move", the destination can still perform wl_data_offer.receive requests, and is expected to end all transfers with a wl_data_offer.finish request. If the resulting action is "ask", the action will not be considered final. The drag-and-drop destination is expected to perform one last wl_data_offer.set_actions request, or wl_data_offer.destroy in order to cancel the operation. The selection event is sent out to notify the client of a new wl_data_offer for the selection for this device. The data_device.data_offer and the data_offer.offer events are sent out immediately before this event to introduce the data offer object. The selection event is sent to a client immediately before receiving keyboard focus and when a new selection is set while the client has keyboard focus. The data_offer is valid until a new data_offer or NULL is received or until the client loses keyboard focus. 
Switching surface with keyboard focus within the same client doesn't mean a new selection will be sent. The client must destroy the previous selection data_offer, if any, upon receiving this event. This request destroys the data device. The wl_data_device_manager is a singleton global object that provides access to inter-client data transfer mechanisms such as copy-and-paste and drag-and-drop. These mechanisms are tied to a wl_seat and this interface lets a client get a wl_data_device corresponding to a wl_seat. Depending on the version bound, the objects created from the bound wl_data_device_manager object will have different requirements for functioning properly. See wl_data_source.set_actions, wl_data_offer.accept and wl_data_offer.finish for details. Create a new data source. Create a new data device for a given seat. This is a bitmask of the available/preferred actions in a drag-and-drop operation. In the compositor, the selected action is a result of matching the actions offered by the source and destination sides. "action" events with a "none" action will be sent to both source and destination if there is no match. All further checks will effectively happen on (source actions ∩ destination actions). In addition, compositors may also pick different actions in reaction to key modifiers being pressed. One common design that is used in major toolkits (and the behavior recommended for compositors) is: - If no modifiers are pressed, the first match (in bit order) will be used. - Pressing Shift selects "move", if enabled in the mask. - Pressing Control selects "copy", if enabled in the mask. Behavior beyond that is considered implementation-dependent. Compositors may for example bind other modifiers (like Alt/Meta) or drags initiated with other buttons than BTN_LEFT to specific actions (e.g. "ask"). This interface is implemented by servers that provide desktop-style user interfaces. It allows clients to associate a wl_shell_surface with a basic surface. Note! This protocol is deprecated and not intended for production use. For desktop-style user interfaces, use xdg_shell. Compositors and clients should not implement this interface. Create a shell surface for an existing surface. This gives the wl_surface the role of a shell surface. If the wl_surface already has another role, it raises a protocol error. Only one shell surface can be associated with a given surface. An interface that may be implemented by a wl_surface, for implementations that provide a desktop-style user interface. It provides requests to treat surfaces like toplevel, fullscreen or popup windows, move, resize or maximize them, associate metadata like title and class, etc. On the server side the object is automatically destroyed when the related wl_surface is destroyed. On the client side, wl_shell_surface_destroy() must be called before destroying the wl_surface object. A client must respond to a ping event with a pong request or the client may be deemed unresponsive. Start a pointer-driven move of the surface. This request must be used in response to a button press event. The server may ignore move requests depending on the state of the surface (e.g. fullscreen or maximized). These values are used to indicate which edge of a surface is being dragged in a resize operation. The server may use this information to adapt its behavior, e.g. choose an appropriate cursor image. Start a pointer-driven resizing of the surface. This request must be used in response to a button press event. 
The server may ignore resize requests depending on the state of the surface (e.g. fullscreen or maximized). Map the surface as a toplevel surface. A toplevel surface is not fullscreen, maximized or transient. These flags specify details of the expected behaviour of transient surfaces. Used in the set_transient request. Map the surface relative to an existing surface. The x and y arguments specify the location of the upper left corner of the surface relative to the upper left corner of the parent surface, in surface-local coordinates. The flags argument controls details of the transient behaviour. Hints to indicate to the compositor how to deal with a conflict between the dimensions of the surface and the dimensions of the output. The compositor is free to ignore this parameter. Map the surface as a fullscreen surface. If an output parameter is given then the surface will be made fullscreen on that output. If the client does not specify the output then the compositor will apply its policy - usually choosing the output on which the surface has the biggest surface area. The client may specify a method to resolve a size conflict between the output size and the surface size - this is provided through the method parameter. The framerate parameter is used only when the method is set to "driver", to indicate the preferred framerate. A value of 0 indicates that the client does not care about framerate. The framerate is specified in mHz, that is framerate of 60000 is 60Hz. A method of "scale" or "driver" implies a scaling operation of the surface, either via a direct scaling operation or a change of the output mode. This will override any kind of output scaling, so that mapping a surface with a buffer size equal to the mode can fill the screen independent of buffer_scale. A method of "fill" means we don't scale up the buffer, however any output scale is applied. This means that you may run into an edge case where the application maps a buffer with the same size of the output mode but buffer_scale 1 (thus making a surface larger than the output). In this case it is allowed to downscale the results to fit the screen. The compositor must reply to this request with a configure event with the dimensions for the output on which the surface will be made fullscreen. Map the surface as a popup. A popup surface is a transient surface with an added pointer grab. An existing implicit grab will be changed to owner-events mode, and the popup grab will continue after the implicit grab ends (i.e. releasing the mouse button does not cause the popup to be unmapped). The popup grab continues until the window is destroyed or a mouse button is pressed in any other client's window. A click in any of the client's surfaces is reported as normal, however, clicks in other clients' surfaces will be discarded and trigger the callback. The x and y arguments specify the location of the upper left corner of the surface relative to the upper left corner of the parent surface, in surface-local coordinates. Map the surface as a maximized surface. If an output parameter is given then the surface will be maximized on that output. If the client does not specify the output then the compositor will apply its policy - usually choosing the output on which the surface has the biggest surface area. The compositor will reply with a configure event telling the expected new surface size. The operation is completed on the next buffer attach to this surface. 
A maximized surface typically fills the entire output it is bound to, except for desktop elements such as panels. This is the main difference between a maximized shell surface and a fullscreen shell surface. The details depend on the compositor implementation. Set a short title for the surface. This string may be used to identify the surface in a task bar, window list, or other user interface elements provided by the compositor. The string must be encoded in UTF-8. Set a class for the surface. The surface class identifies the general class of applications to which the surface belongs. A common convention is to use the file name (or the full path if it is a non-standard location) of the application's .desktop file as the class. Ping a client to check if it is receiving events and sending requests. A client is expected to reply with a pong request. The configure event asks the client to resize its surface. The size is a hint, in the sense that the client is free to ignore it if it doesn't resize, pick a smaller size (to satisfy aspect ratio or resize in steps of NxM pixels). The edges parameter provides a hint about how the surface was resized. The client may use this information to decide how to adjust its content to the new size (e.g. a scrolling area might adjust its content position to leave the viewable content unmoved). The client is free to dismiss all but the last configure event it received. The width and height arguments specify the size of the window in surface-local coordinates. The popup_done event is sent out when a popup grab is broken, that is, when the user clicks a surface that doesn't belong to the client owning the popup surface. A surface is a rectangular area that may be displayed on zero or more outputs, and shown any number of times at the compositor's discretion. They can present wl_buffers, receive user input, and define a local coordinate system. The size of a surface (and relative positions on it) is described in surface-local coordinates, which may differ from the buffer coordinates of the pixel content, in case a buffer_transform or a buffer_scale is used. A surface without a "role" is fairly useless: a compositor does not know where, when or how to present it. The role is the purpose of a wl_surface. Examples of roles are a cursor for a pointer (as set by wl_pointer.set_cursor), a drag icon (wl_data_device.start_drag), a sub-surface (wl_subcompositor.get_subsurface), and a window as defined by a shell protocol (e.g. wl_shell.get_shell_surface). A surface can have only one role at a time. Initially a wl_surface does not have a role. Once a wl_surface is given a role, it is set permanently for the whole lifetime of the wl_surface object. Giving the current role again is allowed, unless explicitly forbidden by the relevant interface specification. Surface roles are given by requests in other interfaces such as wl_pointer.set_cursor. The request should explicitly mention that this request gives a role to a wl_surface. Often, this request also creates a new protocol object that represents the role and adds additional functionality to wl_surface. When a client wants to destroy a wl_surface, they must destroy this role object before the wl_surface, otherwise a defunct_role_object error is sent. Destroying the role object does not remove the role from the wl_surface, but it may stop the wl_surface from "playing the role". For instance, if a wl_subsurface object is destroyed, the wl_surface it was created for will be unmapped and forget its position and z-order. 
It is allowed to create a wl_subsurface for the same wl_surface again, but it is not allowed to use the wl_surface as a cursor (cursor is a different role than sub-surface, and role switching is not allowed). These errors can be emitted in response to wl_surface requests. Deletes the surface and invalidates its object ID. Set a buffer as the content of this surface. The new size of the surface is calculated based on the buffer size transformed by the inverse buffer_transform and the inverse buffer_scale. This means that at commit time the supplied buffer size must be an integer multiple of the buffer_scale. If that's not the case, an invalid_size error is sent. The x and y arguments specify the location of the new pending buffer's upper left corner, relative to the current buffer's upper left corner, in surface-local coordinates. In other words, the x and y, combined with the new surface size define in which directions the surface's size changes. Setting anything other than 0 as x and y arguments is discouraged, and should instead be replaced with using the separate wl_surface.offset request. When the bound wl_surface version is 5 or higher, passing any non-zero x or y is a protocol violation, and will result in an 'invalid_offset' error being raised. The x and y arguments are ignored and do not change the pending state. To achieve equivalent semantics, use wl_surface.offset. Surface contents are double-buffered state, see wl_surface.commit. The initial surface contents are void; there is no content. wl_surface.attach assigns the given wl_buffer as the pending wl_buffer. wl_surface.commit makes the pending wl_buffer the new surface contents, and the size of the surface becomes the size calculated from the wl_buffer, as described above. After commit, there is no pending buffer until the next attach. Committing a pending wl_buffer allows the compositor to read the pixels in the wl_buffer. The compositor may access the pixels at any time after the wl_surface.commit request. When the compositor will not access the pixels anymore, it will send the wl_buffer.release event. Only after receiving wl_buffer.release, the client may reuse the wl_buffer. A wl_buffer that has been attached and then replaced by another attach instead of committed will not receive a release event, and is not used by the compositor. If a pending wl_buffer has been committed to more than one wl_surface, the delivery of wl_buffer.release events becomes undefined. A well behaved client should not rely on wl_buffer.release events in this case. Alternatively, a client could create multiple wl_buffer objects from the same backing storage or use wp_linux_buffer_release. Destroying the wl_buffer after wl_buffer.release does not change the surface contents. Destroying the wl_buffer before wl_buffer.release is allowed as long as the underlying buffer storage isn't re-used (this can happen e.g. on client process termination). However, if the client destroys the wl_buffer before receiving the wl_buffer.release event and mutates the underlying buffer storage, the surface contents become undefined immediately. If wl_surface.attach is sent with a NULL wl_buffer, the following wl_surface.commit will remove the surface content. If a pending wl_buffer has been destroyed, the result is not specified. Many compositors are known to remove the surface content on the following wl_surface.commit, but this behaviour is not universal. 
Clients seeking to maximise compatibility should not destroy pending buffers and should ensure that they explicitly remove content from surfaces, even after destroying buffers. This request is used to describe the regions where the pending buffer is different from the current surface contents, and where the surface therefore needs to be repainted. The compositor ignores the parts of the damage that fall outside of the surface. Damage is double-buffered state, see wl_surface.commit. The damage rectangle is specified in surface-local coordinates, where x and y specify the upper left corner of the damage rectangle. The initial value for pending damage is empty: no damage. wl_surface.damage adds pending damage: the new pending damage is the union of old pending damage and the given rectangle. wl_surface.commit assigns pending damage as the current damage, and clears pending damage. The server will clear the current damage as it repaints the surface. Note! New clients should not use this request. Instead damage can be posted with wl_surface.damage_buffer which uses buffer coordinates instead of surface coordinates. Request a notification when it is a good time to start drawing a new frame, by creating a frame callback. This is useful for throttling redrawing operations, and driving animations. When a client is animating on a wl_surface, it can use the 'frame' request to get notified when it is a good time to draw and commit the next frame of animation. If the client commits an update earlier than that, it is likely that some updates will not make it to the display, and the client is wasting resources by drawing too often. The frame request will take effect on the next wl_surface.commit. The notification will only be posted for one frame unless requested again. For a wl_surface, the notifications are posted in the order the frame requests were committed. The server must send the notifications so that a client will not send excessive updates, while still allowing the highest possible update rate for clients that wait for the reply before drawing again. The server should give some time for the client to draw and commit after sending the frame callback events to let it hit the next output refresh. A server should avoid signaling the frame callbacks if the surface is not visible in any way, e.g. the surface is off-screen, or completely obscured by other opaque surfaces. The object returned by this request will be destroyed by the compositor after the callback is fired and as such the client must not attempt to use it after that point. The callback_data passed in the callback is the current time, in milliseconds, with an undefined base. This request sets the region of the surface that contains opaque content. The opaque region is an optimization hint for the compositor that lets it optimize the redrawing of content behind opaque regions. Setting an opaque region is not required for correct behaviour, but marking transparent content as opaque will result in repaint artifacts. The opaque region is specified in surface-local coordinates. The compositor ignores the parts of the opaque region that fall outside of the surface. Opaque region is double-buffered state, see wl_surface.commit. wl_surface.set_opaque_region changes the pending opaque region. wl_surface.commit copies the pending region to the current region. Otherwise, the pending and current regions are never changed. The initial value for an opaque region is empty. 
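Stepping back for a moment: the attach, damage and frame requests described above are normally combined into a single redraw cycle, with wl_surface.commit applying all of the pending state at once. A minimal sketch follows, assuming hypothetical helpers draw_frame() and next_buffer() that fill and select a non-busy buffer:

    #include <stdint.h>
    #include <wayland-client.h>

    extern void draw_frame(void);               /* hypothetical */
    extern struct wl_buffer *next_buffer(void); /* hypothetical */

    static void frame_done(void *data, struct wl_callback *callback,
            uint32_t time_ms);
    static const struct wl_callback_listener frame_listener = {
            frame_done,
    };

    static void submit(struct wl_surface *surface)
    {
            draw_frame();
            /* x and y must be 0 on wl_surface version 5 and later. */
            wl_surface_attach(surface, next_buffer(), 0, 0);
            /* Damage in buffer coordinates, as recommended above; the
             * compositor clips damage to the surface. */
            wl_surface_damage_buffer(surface, 0, 0, INT32_MAX, INT32_MAX);
            /* Ask to be told when it is a good time to draw again. */
            struct wl_callback *cb = wl_surface_frame(surface);
            wl_callback_add_listener(cb, &frame_listener, surface);
            wl_surface_commit(surface); /* apply pending state atomically */
    }

    static void frame_done(void *data, struct wl_callback *callback,
            uint32_t time_ms)
    {
            wl_callback_destroy(callback); /* one-shot, fired once */
            submit(data); /* throttled: one commit per frame callback */
    }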
Setting the pending opaque region has copy semantics, and the wl_region object can be destroyed immediately. A NULL wl_region causes the pending opaque region to be set to empty. This request sets the region of the surface that can receive pointer and touch events. Input events happening outside of this region will try the next surface in the server surface stack. The compositor ignores the parts of the input region that fall outside of the surface. The input region is specified in surface-local coordinates. Input region is double-buffered state, see wl_surface.commit. wl_surface.set_input_region changes the pending input region. wl_surface.commit copies the pending region to the current region. Otherwise the pending and current regions are never changed, except cursor and icon surfaces are special cases, see wl_pointer.set_cursor and wl_data_device.start_drag. The initial value for an input region is infinite. That means the whole surface will accept input. Setting the pending input region has copy semantics, and the wl_region object can be destroyed immediately. A NULL wl_region causes the input region to be set to infinite. Surface state (input, opaque, and damage regions, attached buffers, etc.) is double-buffered. Protocol requests modify the pending state, as opposed to the active state in use by the compositor. A commit request atomically creates a content update from the pending state, even if the pending state has not been touched. The content update is placed in a queue until it becomes active. After commit, the new pending state is as documented for each related request. When the content update is applied, the wl_buffer is applied before all other state. This means that all coordinates in double-buffered state are relative to the newly attached wl_buffers, except for wl_surface.attach itself. If there is no newly attached wl_buffer, the coordinates are relative to the previous content update. All requests that need a commit to become effective are documented to affect double-buffered state. Other interfaces may add further double-buffered surface state. This is emitted whenever a surface's creation, movement, or resizing results in some part of it being within the scanout region of an output. Note that a surface may be overlapping with zero or more outputs. This is emitted whenever a surface's creation, movement, or resizing results in it no longer having any part of it within the scanout region of an output. Clients should not use the number of outputs the surface is on for frame throttling purposes. The surface might be hidden even if no leave event has been sent, and the compositor might expect new surface content updates even if no enter event has been sent. The frame event should be used instead. This request sets the transformation that the client has already applied to the content of the buffer. The accepted values for the transform parameter are the values for wl_output.transform. The compositor applies the inverse of this transformation whenever it uses the buffer contents. Buffer transform is double-buffered state, see wl_surface.commit. A newly created surface has its buffer transformation set to normal. wl_surface.set_buffer_transform changes the pending buffer transformation. wl_surface.commit copies the pending buffer transformation to the current one. Otherwise, the pending and current values are never changed. 
The purpose of this request is to allow clients to render content according to the output transform, thus permitting the compositor to use certain optimizations even if the display is rotated. Using hardware overlays and scanning out a client buffer for fullscreen surfaces are examples of such optimizations. Those optimizations are highly dependent on the compositor implementation, so the use of this request should be considered on a case-by-case basis. Note that if the transform value includes 90 or 270 degree rotation, the width of the buffer will become the surface height and the height of the buffer will become the surface width. If transform is not one of the values from the wl_output.transform enum the invalid_transform protocol error is raised. This request sets an optional scaling factor on how the compositor interprets the contents of the buffer attached to the window. Buffer scale is double-buffered state, see wl_surface.commit. A newly created surface has its buffer scale set to 1. wl_surface.set_buffer_scale changes the pending buffer scale. wl_surface.commit copies the pending buffer scale to the current one. Otherwise, the pending and current values are never changed. The purpose of this request is to allow clients to supply higher resolution buffer data for use on high resolution outputs. It is intended that you pick the same buffer scale as the scale of the output that the surface is displayed on. This means the compositor can avoid scaling when rendering the surface on that output. Note that if the scale is larger than 1, then you have to attach a buffer that is larger (by a factor of scale in each dimension) than the desired surface size. If scale is not greater than 0 the invalid_scale protocol error is raised. This request is used to describe the regions where the pending buffer is different from the current surface contents, and where the surface therefore needs to be repainted. The compositor ignores the parts of the damage that fall outside of the surface. Damage is double-buffered state, see wl_surface.commit. The damage rectangle is specified in buffer coordinates, where x and y specify the upper left corner of the damage rectangle. The initial value for pending damage is empty: no damage. wl_surface.damage_buffer adds pending damage: the new pending damage is the union of old pending damage and the given rectangle. wl_surface.commit assigns pending damage as the current damage, and clears pending damage. The server will clear the current damage as it repaints the surface. This request differs from wl_surface.damage in only one way - it takes damage in buffer coordinates instead of surface-local coordinates. While this generally is more intuitive than surface coordinates, it is especially desirable when using wp_viewport or when a drawing library (like EGL) is unaware of buffer scale and buffer transform. Note: Because buffer transformation changes and damage requests may be interleaved in the protocol stream, it is impossible to determine the actual mapping between surface and buffer damage until wl_surface.commit time. Therefore, compositors wishing to take both kinds of damage into account will have to accumulate damage from the two requests separately and only transform from one to the other after receiving the wl_surface.commit. The x and y arguments specify the location of the new pending buffer's upper left corner, relative to the current buffer's upper left corner, in surface-local coordinates. 
In other words, the x and y, combined with the new surface size define in which directions the surface's size changes.

The exact semantics of wl_surface.offset are role-specific. Refer to the documentation of specific roles for more information.

Surface location offset is double-buffered state, see wl_surface.commit.

This request is semantically equivalent to, and replaces, the x and y arguments in the wl_surface.attach request in wl_surface versions prior to 5. See wl_surface.attach for details.

This event indicates the preferred buffer scale for this surface. It is sent whenever the compositor's preference changes. Before receiving this event the preferred buffer scale for this surface is 1.

It is intended that scaling aware clients use this event to scale their content and use wl_surface.set_buffer_scale to indicate the scale they have rendered with. This allows clients to supply a higher detail buffer. The compositor shall emit a scale value greater than 0.

This event indicates the preferred buffer transform for this surface. It is sent whenever the compositor's preference changes. Before receiving this event the preferred buffer transform for this surface is normal.

Applying this transformation to the surface buffer contents and using wl_surface.set_buffer_transform might allow the compositor to use the surface buffer more efficiently.

A seat is a group of keyboards, pointer and touch devices. This object is published as a global during start up, or when such a device is hot plugged. A seat typically has a pointer and maintains a keyboard focus and a pointer focus.

This is a bitmask of capabilities this seat has; if a member is set, then it is present on the seat.

These errors can be emitted in response to wl_seat requests.

This is emitted whenever a seat gains or loses the pointer, keyboard or touch capabilities. The argument is a capability enum containing the complete set of capabilities this seat has.

When the pointer capability is added, a client may create a wl_pointer object using the wl_seat.get_pointer request. This object will receive pointer events until the capability is removed in the future.

When the pointer capability is removed, a client should destroy the wl_pointer objects associated with the seat where the capability was removed, using the wl_pointer.release request. No further pointer events will be received on these objects.

In some compositors, if a seat regains the pointer capability and a client has a previously obtained wl_pointer object of version 4 or less, that object may start sending pointer events again. This behavior is considered a misinterpretation of the intended behavior and must not be relied upon by the client. wl_pointer objects of version 5 or later must not send events if created before the most recent event notifying the client of an added pointer capability.

The above behavior also applies to wl_keyboard and wl_touch with the keyboard and touch capabilities, respectively.

The ID provided will be initialized to the wl_pointer interface for this seat. This request only takes effect if the seat has the pointer capability, or has had the pointer capability in the past. It is a protocol violation to issue this request on a seat that has never had the pointer capability. The missing_capability error will be sent in this case.

The ID provided will be initialized to the wl_keyboard interface for this seat. This request only takes effect if the seat has the keyboard capability, or has had the keyboard capability in the past.
It is a protocol violation to issue this request on a seat that has never had the keyboard capability. The missing_capability error will be sent in this case. The ID provided will be initialized to the wl_touch interface for this seat. This request only takes effect if the seat has the touch capability, or has had the touch capability in the past. It is a protocol violation to issue this request on a seat that has never had the touch capability. The missing_capability error will be sent in this case. In a multi-seat configuration the seat name can be used by clients to help identify which physical devices the seat represents. The seat name is a UTF-8 string with no convention defined for its contents. Each name is unique among all wl_seat globals. The name is only guaranteed to be unique for the current compositor instance. The same seat names are used for all clients. Thus, the name can be shared across processes to refer to a specific wl_seat global. The name event is sent after binding to the seat global. This event is only sent once per seat object, and the name does not change over the lifetime of the wl_seat global. Compositors may re-use the same seat name if the wl_seat global is destroyed and re-created later. Using this request a client can tell the server that it is not going to use the seat object anymore. The wl_pointer interface represents one or more input devices, such as mice, which control the pointer location and pointer_focus of a seat. The wl_pointer interface generates motion, enter and leave events for the surfaces that the pointer is located over, and button and axis events for button presses, button releases and scrolling. Set the pointer surface, i.e., the surface that contains the pointer image (cursor). This request gives the surface the role of a cursor. If the surface already has another role, it raises a protocol error. The cursor actually changes only if the pointer focus for this device is one of the requesting client's surfaces or the surface parameter is the current pointer surface. If there was a previous surface set with this request it is replaced. If surface is NULL, the pointer image is hidden. The parameters hotspot_x and hotspot_y define the position of the pointer surface relative to the pointer location. Its top-left corner is always at (x, y) - (hotspot_x, hotspot_y), where (x, y) are the coordinates of the pointer location, in surface-local coordinates. On wl_surface.offset requests to the pointer surface, hotspot_x and hotspot_y are decremented by the x and y parameters passed to the request. The offset must be applied by wl_surface.commit as usual. The hotspot can also be updated by passing the currently set pointer surface to this request with new values for hotspot_x and hotspot_y. The input region is ignored for wl_surfaces with the role of a cursor. When the use as a cursor ends, the wl_surface is unmapped. The serial parameter must match the latest wl_pointer.enter serial number sent to the client. Otherwise the request will be ignored. Notification that this seat's pointer is focused on a certain surface. When a seat's focus enters a surface, the pointer image is undefined and a client should respond to this event by setting an appropriate pointer image with the set_cursor request. Notification that this seat's pointer is no longer focused on a certain surface. The leave notification is sent before the enter notification for the new focus. Notification of pointer location change. 
The arguments surface_x and surface_y are the location relative to the focused surface. Describes the physical state of a button that produced the button event. Mouse button click and release notifications. The location of the click is given by the last motion or enter event. The time argument is a timestamp with millisecond granularity, with an undefined base. The button is a button code as defined in the Linux kernel's linux/input-event-codes.h header file, e.g. BTN_LEFT. Any 16-bit button code value is reserved for future additions to the kernel's event code list. All other button codes above 0xFFFF are currently undefined but may be used in future versions of this protocol. Describes the axis types of scroll events. Scroll and other axis notifications. For scroll events (vertical and horizontal scroll axes), the value parameter is the length of a vector along the specified axis in a coordinate space identical to those of motion events, representing a relative movement along the specified axis. For devices that support movements non-parallel to axes multiple axis events will be emitted. When applicable, for example for touch pads, the server can choose to emit scroll events where the motion vector is equivalent to a motion event vector. When applicable, a client can transform its content relative to the scroll distance. Using this request a client can tell the server that it is not going to use the pointer object anymore. This request destroys the pointer proxy object, so clients must not call wl_pointer_destroy() after using this request. Indicates the end of a set of events that logically belong together. A client is expected to accumulate the data in all events within the frame before proceeding. All wl_pointer events before a wl_pointer.frame event belong logically together. For example, in a diagonal scroll motion the compositor will send an optional wl_pointer.axis_source event, two wl_pointer.axis events (horizontal and vertical) and finally a wl_pointer.frame event. The client may use this information to calculate a diagonal vector for scrolling. When multiple wl_pointer.axis events occur within the same frame, the motion vector is the combined motion of all events. When a wl_pointer.axis and a wl_pointer.axis_stop event occur within the same frame, this indicates that axis movement in one axis has stopped but continues in the other axis. When multiple wl_pointer.axis_stop events occur within the same frame, this indicates that these axes stopped in the same instance. A wl_pointer.frame event is sent for every logical event group, even if the group only contains a single wl_pointer event. Specifically, a client may get a sequence: motion, frame, button, frame, axis, frame, axis_stop, frame. The wl_pointer.enter and wl_pointer.leave events are logical events generated by the compositor and not the hardware. These events are also grouped by a wl_pointer.frame. When a pointer moves from one surface to another, a compositor should group the wl_pointer.leave event within the same wl_pointer.frame. However, a client must not rely on wl_pointer.leave and wl_pointer.enter being in the same wl_pointer.frame. Compositor-specific policies may require the wl_pointer.leave and wl_pointer.enter event being split across multiple wl_pointer.frame groups. Describes the source types for axis events. This indicates to the client how an axis event was physically generated; a client may adjust the user interface accordingly. 
For example, scroll events from a "finger" source may be in a smooth coordinate space with kinetic scrolling whereas a "wheel" source may be in discrete steps of a number of lines. The "continuous" axis source is a device generating events in a continuous coordinate space, but using something other than a finger. One example for this source is button-based scrolling where the vertical motion of a device is converted to scroll events while a button is held down. The "wheel tilt" axis source indicates that the actual device is a wheel but the scroll event is not caused by a rotation but a (usually sideways) tilt of the wheel. Source information for scroll and other axes. This event does not occur on its own. It is sent before a wl_pointer.frame event and carries the source information for all events within that frame. The source specifies how this event was generated. If the source is wl_pointer.axis_source.finger, a wl_pointer.axis_stop event will be sent when the user lifts the finger off the device. If the source is wl_pointer.axis_source.wheel, wl_pointer.axis_source.wheel_tilt or wl_pointer.axis_source.continuous, a wl_pointer.axis_stop event may or may not be sent. Whether a compositor sends an axis_stop event for these sources is hardware-specific and implementation-dependent; clients must not rely on receiving an axis_stop event for these scroll sources and should treat scroll sequences from these scroll sources as unterminated by default. This event is optional. If the source is unknown for a particular axis event sequence, no event is sent. Only one wl_pointer.axis_source event is permitted per frame. The order of wl_pointer.axis_discrete and wl_pointer.axis_source is not guaranteed. Stop notification for scroll and other axes. For some wl_pointer.axis_source types, a wl_pointer.axis_stop event is sent to notify a client that the axis sequence has terminated. This enables the client to implement kinetic scrolling. See the wl_pointer.axis_source documentation for information on when this event may be generated. Any wl_pointer.axis events with the same axis_source after this event should be considered as the start of a new axis motion. The timestamp is to be interpreted identical to the timestamp in the wl_pointer.axis event. The timestamp value may be the same as a preceding wl_pointer.axis event. Discrete step information for scroll and other axes. This event carries the axis value of the wl_pointer.axis event in discrete steps (e.g. mouse wheel clicks). This event is deprecated with wl_pointer version 8 - this event is not sent to clients supporting version 8 or later. This event does not occur on its own, it is coupled with a wl_pointer.axis event that represents this axis value on a continuous scale. The protocol guarantees that each axis_discrete event is always followed by exactly one axis event with the same axis number within the same wl_pointer.frame. Note that the protocol allows for other events to occur between the axis_discrete and its coupled axis event, including other axis_discrete or axis events. A wl_pointer.frame must not contain more than one axis_discrete event per axis type. This event is optional; continuous scrolling devices like two-finger scrolling on touchpads do not have discrete steps and do not generate this event. The discrete value carries the directional information. e.g. a value of -2 is two steps towards the negative direction of this axis. The axis number is identical to the axis number in the associated axis event. 
The order of wl_pointer.axis_discrete and wl_pointer.axis_source is not guaranteed.

Discrete high-resolution scroll information. This event carries high-resolution wheel scroll information, with each multiple of 120 representing one logical scroll step (a wheel detent). For example, an axis_value120 of 30 is one quarter of a logical scroll step in the positive direction, and a value120 of -240 is two logical scroll steps in the negative direction within the same hardware event. Clients that rely on discrete scrolling should accumulate the value120 to multiples of 120 before processing the event (a sketch of this follows below). The value120 must not be zero.

This event replaces the wl_pointer.axis_discrete event in clients supporting wl_pointer version 8 or later.

Where a wl_pointer.axis_source event occurs in the same wl_pointer.frame, the axis source applies to this event.

The order of wl_pointer.axis_value120 and wl_pointer.axis_source is not guaranteed.

This specifies the direction of the physical motion that caused a wl_pointer.axis event, relative to the wl_pointer.axis direction.

Relative directional information of the entity causing the axis motion. For a wl_pointer.axis event, the wl_pointer.axis_relative_direction event specifies the movement direction of the entity causing the wl_pointer.axis event. For example:

- if a user's fingers on a touchpad move down and this causes a wl_pointer.axis vertical_scroll down event, the physical direction is 'identical'

- if a user's fingers on a touchpad move down and this causes a wl_pointer.axis vertical_scroll up event ('natural scrolling'), the physical direction is 'inverted'.

A client may use this information to adjust scroll motion of components. Specifically, enabling natural scrolling causes the content to change direction compared to traditional scrolling. Some widgets like volume control sliders should usually match the physical direction regardless of whether natural scrolling is active. This event enables clients to match the scroll direction of a widget to the physical direction.

This event does not occur on its own; it is coupled with a wl_pointer.axis event that represents this axis value. The protocol guarantees that each axis_relative_direction event is always followed by exactly one axis event with the same axis number within the same wl_pointer.frame. Note that the protocol allows for other events to occur between the axis_relative_direction and its coupled axis event. The axis number is identical to the axis number in the associated axis event.

The order of wl_pointer.axis_relative_direction, wl_pointer.axis_discrete and wl_pointer.axis_source is not guaranteed.

The wl_keyboard interface represents one or more keyboards associated with a seat. Each wl_keyboard has the following logical state:

- an active surface (possibly null),
- the keys currently logically down,
- the active modifiers,
- the active group.

By default, the active surface is null, the keys currently logically down are empty, the active modifiers and the active group are 0.

This specifies the format of the keymap provided to the client with the wl_keyboard.keymap event.

This event provides a file descriptor to the client which can be memory-mapped in read-only mode to provide a keyboard mapping description. From version 7 onwards, the fd must be mapped with MAP_PRIVATE by the recipient, as MAP_SHARED may fail.

Notification that this seat's keyboard focus is on a certain surface. The compositor must send the wl_keyboard.modifiers event after this event.
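Returning briefly to wl_pointer: a minimal sketch of the axis_value120 accumulation rule flagged above. The scroll_accum type and helper are invented for illustration; the axis index corresponds to the vertical_scroll/horizontal_scroll axis values:

    #include <stdint.h>

    struct scroll_accum {
            int32_t v120[2]; /* 0: vertical_scroll, 1: horizontal_scroll */
    };

    /* Feed each axis_value120 event in; returns the signed number of
     * whole detents accumulated so far and keeps the remainder. C
     * division truncates toward zero, which treats both scroll
     * directions symmetrically. */
    static int32_t accumulate_value120(struct scroll_accum *acc,
            uint32_t axis, int32_t value120)
    {
            acc->v120[axis] += value120;
            int32_t steps = acc->v120[axis] / 120;
            acc->v120[axis] -= steps * 120;
            return steps;
    }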
Notification that this seat's keyboard focus is on a certain surface.

The compositor must send the wl_keyboard.modifiers event after this event.

In the wl_keyboard logical state, this event sets the active surface to the surface argument and the keys currently logically down to the keys in the keys argument. The compositor must not send this event if the wl_keyboard already had an active surface immediately before this event.

Clients should not use the list of pressed keys to emulate key-press events. The order of keys in the list is unspecified.

Notification that this seat's keyboard focus is no longer on a certain surface.

The leave notification is sent before the enter notification for the new focus.

In the wl_keyboard logical state, this event resets all values to their defaults. The compositor must not send this event if the active surface of the wl_keyboard was not equal to the surface argument immediately before this event.

Describes the physical state of a key that produced the key event.

Since version 10, the key can be in a "repeated" pseudo-state which means the same as "pressed", but is used to signal repetition in the key event.

The key may only enter the repeated state after entering the pressed state and before entering the released state. This event may be generated multiple times while the key is down.

A key was pressed or released. The time argument is a timestamp with millisecond granularity, with an undefined base.

The key is a platform-specific key code that can be interpreted by feeding it to the keyboard mapping (see the keymap event).

If this event produces a change in modifiers, then the resulting wl_keyboard.modifiers event must be sent after this event.

In the wl_keyboard logical state, this event adds the key to the keys currently logically down (if the state argument is pressed) or removes the key from the keys currently logically down (if the state argument is released).

The compositor must not send this event if the wl_keyboard did not have an active surface immediately before this event. The compositor must not send this event if state is pressed (resp. released) and the key was already logically down (resp. was not logically down) immediately before this event.

Since version 10, compositors may send key events with the "repeated" key state when a wl_keyboard.repeat_info event with a rate argument of 0 has been received. This allows the compositor to take over the responsibility of key repetition.

Notifies clients that the modifier and/or group state has changed, and they should update their local state.

The compositor may send this event without a surface of the client having keyboard focus, for example to tie modifier information to pointer focus instead. If a modifier event with pressed modifiers is sent without a prior enter event, the client can assume the modifier state is valid until it receives the next wl_keyboard.modifiers event. In order to reset the modifier state again, the compositor can send a wl_keyboard.modifiers event with no pressed modifiers.

In the wl_keyboard logical state, this event updates the modifiers and group.

Informs the client about the keyboard's repeat rate and delay.

This event is sent as soon as the wl_keyboard object has been created, and is guaranteed to be received by the client before any key press event.

Negative values for either rate or delay are illegal. A rate of zero will disable any repeating (regardless of the value of delay).

This event can be sent later on as well with a new value if necessary, so clients should continue listening for the event past the creation of wl_keyboard.
The wl_touch interface represents a touchscreen associated with a seat.

Touch interactions can consist of one or more contacts. For each contact, a series of events is generated, starting with a down event, followed by zero or more motion events, and ending with an up event. Events relating to the same contact point can be identified by the ID of the sequence.

A new touch point has appeared on the surface. This touch point is assigned a unique ID. Future events from this touch point reference this ID. The ID ceases to be valid after a touch up event and may be reused in the future.

The touch point has disappeared. No further events will be sent for this touch point and the touch point's ID is released and may be reused in a future touch down event.

A touch point has changed coordinates.

Indicates the end of a set of events that logically belong together. A client is expected to accumulate the data in all events within the frame before proceeding.

A wl_touch.frame terminates at least one event but otherwise no guarantee is provided about the set of events within a frame. A client must assume that any state not updated in a frame is unchanged from the previously known state.

Sent if the compositor decides the touch stream is a global gesture. No further events are sent to the clients from that particular gesture. Touch cancellation applies to all touch points currently active on this client's surface. The client is responsible for finalizing the touch points; future touch points on this surface may reuse the touch point ID.

No frame event is required after the cancel event.

Sent when a touchpoint has changed its shape.

This event does not occur on its own. It is sent before a wl_touch.frame event and carries the new shape information for any previously reported, or new touch points of that frame.

Other events describing the touch point such as wl_touch.down, wl_touch.motion or wl_touch.orientation may be sent within the same wl_touch.frame. A client should treat these events as a single logical touch point update. The order of wl_touch.shape, wl_touch.orientation and wl_touch.motion is not guaranteed. A wl_touch.down event is guaranteed to occur before the first wl_touch.shape event for this touch ID but both events may occur within the same wl_touch.frame.

A touchpoint shape is approximated by an ellipse through the major and minor axis length. The major axis length describes the longer diameter of the ellipse, while the minor axis length describes the shorter diameter. Major and minor are orthogonal and both are specified in surface-local coordinates. The center of the ellipse is always at the touchpoint location as reported by wl_touch.down or wl_touch.move.

This event is only sent by the compositor if the touch device supports shape reports. The client has to make reasonable assumptions about the shape if it did not receive this event.
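A sketch in C of tracking contacts by ID and latching the accumulated state at wl_touch.frame; MAX_TOUCH and touch_apply() are hypothetical application choices:

    #include <stdbool.h>
    #include <wayland-client.h>

    #define MAX_TOUCH 16
    struct contact { int32_t id; bool active; wl_fixed_t x, y; };
    static struct contact contacts[MAX_TOUCH];
    static void touch_apply(struct contact *c, int n); /* app hook */

    static void on_down(void *data, struct wl_touch *t, uint32_t serial,
                    uint32_t time, struct wl_surface *surface, int32_t id,
                    wl_fixed_t x, wl_fixed_t y) {
            for (int i = 0; i < MAX_TOUCH; i++)
                    if (!contacts[i].active) {
                            contacts[i] = (struct contact){id, true, x, y};
                            break;
                    }
    }
    static void on_motion(void *data, struct wl_touch *t, uint32_t time,
                    int32_t id, wl_fixed_t x, wl_fixed_t y) {
            for (int i = 0; i < MAX_TOUCH; i++)
                    if (contacts[i].active && contacts[i].id == id) {
                            contacts[i].x = x;
                            contacts[i].y = y;
                    }
    }
    static void on_up(void *data, struct wl_touch *t, uint32_t serial,
                    uint32_t time, int32_t id) {
            for (int i = 0; i < MAX_TOUCH; i++)
                    if (contacts[i].active && contacts[i].id == id)
                            contacts[i].active = false;
    }
    static void on_frame(void *data, struct wl_touch *t) {
            /* All pending down/motion/up events now form one logical
             * update; hand the latched contact set to the app. */
            touch_apply(contacts, MAX_TOUCH);
    }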
Sent when a touchpoint has changed its orientation.

This event does not occur on its own. It is sent before a wl_touch.frame event and carries the new shape information for any previously reported, or new touch points of that frame.

Other events describing the touch point such as wl_touch.down, wl_touch.motion or wl_touch.shape may be sent within the same wl_touch.frame. A client should treat these events as a single logical touch point update. The order of wl_touch.shape, wl_touch.orientation and wl_touch.motion is not guaranteed. A wl_touch.down event is guaranteed to occur before the first wl_touch.orientation event for this touch ID but both events may occur within the same wl_touch.frame.

The orientation describes the clockwise angle of a touchpoint's major axis to the positive surface y-axis and is normalized to the -180 to +180 degree range. The granularity of orientation depends on the touch device; some devices only support binary rotation values between 0 and 90 degrees.

This event is only sent by the compositor if the touch device supports orientation reports.

An output describes part of the compositor geometry. The compositor works in the 'compositor coordinate system' and an output corresponds to a rectangular area in that space that is actually visible. This typically corresponds to a monitor that displays part of the compositor space.

This object is published as global during start up, or when a monitor is hotplugged.

This enumeration describes how the physical pixels on an output are laid out.

This describes transformations that clients and compositors apply to buffer contents. The flipped values correspond to an initial flip around a vertical axis followed by rotation.

The purpose is mainly to allow clients to render accordingly and tell the compositor, so that for fullscreen surfaces, the compositor will still be able to scan out directly from client surfaces.

The geometry event describes geometric properties of the output. The event is sent when binding to the output object and whenever any of the properties change.

The physical size can be set to zero if it doesn't make sense for this output (e.g. for projectors or virtual outputs).

The geometry event will be followed by a done event (starting from version 2).

Clients should use wl_surface.preferred_buffer_transform instead of the transform advertised by this event to find the preferred buffer transform to use for a surface.

Note: wl_output only advertises partial information about the output position and identification. Some compositors, for instance those not implementing a desktop-style output layout or those exposing virtual outputs, might fake this information. Instead of using x and y, clients should use xdg_output.logical_position. Instead of using make and model, clients should use name and description.

These flags describe properties of an output mode. They are used in the flags bitfield of the mode event.

The mode event describes an available mode for the output.

The event is sent when binding to the output object and there will always be one mode, the current mode. The event is sent again if an output changes mode, for the mode that is now current. In other words, the current mode is always the last mode that was received with the current flag set.

Non-current modes are deprecated. A compositor can decide to only advertise the current mode and never send other modes. Clients should not rely on non-current modes.

The size of a mode is given in physical hardware units of the output device. This is not necessarily the same as the output size in the global compositor space. For instance, the output may be scaled, as described in wl_output.scale, or transformed, as described in wl_output.transform. Clients willing to retrieve the output size in the global compositor space should use xdg_output.logical_size instead.

The vertical refresh rate can be set to zero if it doesn't make sense for this output (e.g. for virtual outputs).

The mode event will be followed by a done event (starting from version 2).

Clients should not use the refresh rate to schedule frames. Instead, they should use the wl_surface.frame event or the presentation-time protocol.

Note: this information is not always meaningful for all outputs. Some compositors, such as those exposing virtual outputs, might fake the refresh rate or the size.
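A sketch of collecting wl_output state in C and only applying it at the "done" event described next, so that multi-event updates are observed atomically; output_commit() is a hypothetical application hook, and the geometry/name/description handlers are omitted:

    #include <wayland-client.h>

    struct output_state {
            int32_t width, height; /* current mode, hardware pixels */
            int32_t refresh_mhz;   /* refresh in mHz, may be 0 */
            int32_t scale;         /* defaults to 1 if never sent */
    };
    static void output_commit(struct output_state *s); /* app hook */

    static void on_mode(void *data, struct wl_output *out, uint32_t flags,
                    int32_t w, int32_t h, int32_t refresh) {
            struct output_state *s = data;
            if (flags & WL_OUTPUT_MODE_CURRENT) {
                    s->width = w;
                    s->height = h;
                    s->refresh_mhz = refresh;
            }
    }
    static void on_scale(void *data, struct wl_output *out, int32_t scale) {
            ((struct output_state *)data)->scale = scale;
    }
    static void on_done(void *data, struct wl_output *out) {
            output_commit(data); /* atomic application point */
    }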
This event is sent after all other properties have been sent after binding to the output object and after any other property changes done after that. This allows changes to the output properties to be seen as atomic, even if they happen via multiple events.

This event contains scaling geometry information that is not in the geometry event. It may be sent after binding the output object or if the output scale changes later. The compositor will emit a non-zero, positive value for scale. If it is not sent, the client should assume a scale of 1.

A scale larger than 1 means that the compositor will automatically scale surface buffers by this amount when rendering. This is used for very high resolution displays where applications rendering at the native resolution would be too small to be legible.

Clients should use wl_surface.preferred_buffer_scale instead of this event to find the preferred buffer scale to use for a surface.

The scale event will be followed by a done event.

Using this request a client can tell the server that it is not going to use the output object anymore.

Many compositors will assign user-friendly names to their outputs, show them to the user, allow the user to refer to an output, etc. The client may wish to know this name as well to offer the user similar behaviors.

The name is a UTF-8 string with no convention defined for its contents. Each name is unique among all wl_output globals. The name is only guaranteed to be unique for the compositor instance.

The same output name is used for all clients for a given wl_output global. Thus, the name can be shared across processes to refer to a specific wl_output global.

The name is not guaranteed to be persistent across sessions, thus cannot be used to reliably identify an output in e.g. configuration files.

Examples of names include 'HDMI-A-1', 'WL-1', 'X11-1', etc. However, do not assume that the name is a reflection of an underlying DRM connector, X11 connection, etc.

The name event is sent after binding the output object. This event is only sent once per output object, and the name does not change over the lifetime of the wl_output global.

Compositors may re-use the same output name if the wl_output global is destroyed and re-created later. Compositors should avoid re-using the same name if possible.

The name event will be followed by a done event.

Many compositors can produce human-readable descriptions of their outputs. The client may wish to know this description as well, e.g. for output selection purposes.

The description is a UTF-8 string with no convention defined for its contents. The description is not guaranteed to be unique among all wl_output globals. Examples might include 'Foocorp 11" Display' or 'Virtual X11 output via :1'.

The description event is sent after binding the output object and whenever the description changes. The description is optional, and may not be sent at all.

The description event will be followed by a done event.

A region object describes an area.

Region objects are used to describe the opaque and input regions of a surface.

Destroy the region. This will invalidate the object ID.

Add the specified rectangle to the region.

Subtract the specified rectangle from the region.
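A short C sketch of the typical wl_region use: marking the fully opaque part of a surface so the compositor can skip blending beneath it. The surface and compositor objects are assumed to already exist; the 32x32 cut-out is purely illustrative.

    #include <wayland-client.h>

    static void mark_opaque(struct wl_compositor *compositor,
                    struct wl_surface *surface, int32_t w, int32_t h) {
            struct wl_region *region =
                            wl_compositor_create_region(compositor);
            wl_region_add(region, 0, 0, w, h);
            /* e.g. punch out a translucent corner badge: */
            wl_region_subtract(region, w - 32, 0, 32, 32);
            wl_surface_set_opaque_region(surface, region);
            /* The surface copies the region state, so the object can
             * be destroyed right away; the change is double-buffered
             * and takes effect on commit. */
            wl_region_destroy(region);
            wl_surface_commit(surface);
    }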
The global interface exposing sub-surface compositing capabilities. A wl_surface that has sub-surfaces associated is called the parent surface. Sub-surfaces can be arbitrarily nested and create a tree of sub-surfaces.

The root surface in a tree of sub-surfaces is the main surface. The main surface cannot be a sub-surface, because sub-surfaces must always have a parent.

A main surface with its sub-surfaces forms a (compound) window. For window management purposes, this set of wl_surface objects is to be considered as a single window, and it should also behave as such.

The aim of sub-surfaces is to offload some of the compositing work within a window from clients to the compositor. A prime example is a video player with decorations and video in separate wl_surface objects. This should allow the compositor to pass YUV video buffer processing to dedicated overlay hardware when possible.

Informs the server that the client will not be using this protocol object anymore. This does not affect any other objects, wl_subsurface objects included.

Create a sub-surface interface for the given surface, and associate it with the given parent surface. This turns a plain wl_surface into a sub-surface.

The to-be sub-surface must not already have another role, and it must not have an existing wl_subsurface object. Otherwise the bad_surface protocol error is raised.

Adding sub-surfaces to a parent is a double-buffered operation on the parent (see wl_surface.commit). The effect of adding a sub-surface becomes visible on the next time the state of the parent surface is applied.

The parent surface must not be one of the child surface's descendants, and the parent must be different from the child surface, otherwise the bad_parent protocol error is raised.

This request modifies the behaviour of wl_surface.commit request on the sub-surface, see the documentation on wl_subsurface interface.
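A sketch of the video-player case from the paragraph above, in C; the bound globals, parent surface and buffer are assumed to already exist, and the 16,48 position is illustrative:

    #include <wayland-client.h>

    static void add_video_subsurface(struct wl_compositor *compositor,
                    struct wl_subcompositor *subcompositor,
                    struct wl_surface *parent_surface,
                    struct wl_buffer *video_buffer) {
            struct wl_surface *video =
                            wl_compositor_create_surface(compositor);
            struct wl_subsurface *sub = wl_subcompositor_get_subsurface(
                            subcompositor, video, parent_surface);
            wl_subsurface_set_position(sub, 16, 48); /* parent-relative */
            /* The sub-surface maps once it has a buffer and the parent
             * is mapped; adding it becomes visible when the parent's
             * state is next applied. */
            wl_surface_attach(video, video_buffer, 0, 0);
            wl_surface_commit(video);
            wl_surface_commit(parent_surface);
    }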
An additional interface to a wl_surface object, which has been made a sub-surface. A sub-surface has one parent surface. A sub-surface's size and position are not limited to that of the parent. Particularly, a sub-surface is not automatically clipped to its parent's area.

A sub-surface becomes mapped, when a non-NULL wl_buffer is applied and the parent surface is mapped. The order of which one happens first is irrelevant. A sub-surface is hidden if the parent becomes hidden, or if a NULL wl_buffer is applied. These rules apply recursively through the tree of surfaces.

The behaviour of a wl_surface.commit request on a sub-surface depends on the sub-surface's mode. The possible modes are synchronized and desynchronized, see methods wl_subsurface.set_sync and wl_subsurface.set_desync. Synchronized mode caches the wl_surface state to be applied when the parent's state gets applied, and desynchronized mode applies the pending wl_surface state directly. A sub-surface is initially in the synchronized mode.

Sub-surfaces also have another kind of state, which is managed by wl_subsurface requests, as opposed to wl_surface requests. This state includes the sub-surface position relative to the parent surface (wl_subsurface.set_position), and the stacking order of the parent and its sub-surfaces (wl_subsurface.place_above and .place_below). This state is applied when the parent surface's wl_surface state is applied, regardless of the sub-surface's mode. As the exception, set_sync and set_desync are effective immediately.

The main surface can be thought to be always in desynchronized mode, since it does not have a parent in the sub-surfaces sense.

Even if a sub-surface is in desynchronized mode, it will behave as in synchronized mode, if its parent surface behaves as in synchronized mode. This rule is applied recursively throughout the tree of surfaces. This means that one can set a sub-surface into synchronized mode, and then assume that all its child and grand-child sub-surfaces are synchronized, too, without explicitly setting them.

Destroying a sub-surface takes effect immediately. If you need to synchronize the removal of a sub-surface to the parent surface update, unmap the sub-surface first by attaching a NULL wl_buffer, update parent, and then destroy the sub-surface.

If the parent wl_surface object is destroyed, the sub-surface is unmapped.

A sub-surface never has the keyboard focus of any seat.

The wl_surface.offset request is ignored: clients must use set_position instead to move the sub-surface.

The sub-surface interface is removed from the wl_surface object that was turned into a sub-surface with a wl_subcompositor.get_subsurface request. The wl_surface's association to the parent is deleted. The wl_surface is unmapped immediately.

This schedules a sub-surface position change. The sub-surface will be moved so that its origin (top left corner pixel) will be at the location x, y of the parent surface coordinate system. The coordinates are not restricted to the parent surface area. Negative values are allowed.

The scheduled coordinates will take effect whenever the state of the parent surface is applied.

If more than one set_position request is invoked by the client before the commit of the parent surface, the position of a new request always replaces the scheduled position from any previous request.

The initial position is 0, 0.

This sub-surface is taken from the stack, and put back just above the reference surface, changing the z-order of the sub-surfaces. The reference surface must be one of the sibling surfaces, or the parent surface. Using any other surface, including this sub-surface, will cause a protocol error.

The z-order is double-buffered. Requests are handled in order and applied immediately to a pending state. The final pending state is copied to the active state the next time the state of the parent surface is applied.

A new sub-surface is initially added as the top-most in the stack of its siblings and parent.

The sub-surface is placed just below the reference surface. See wl_subsurface.place_above.

Change the commit behaviour of the sub-surface to synchronized mode, also described as the parent dependent mode.

In synchronized mode, wl_surface.commit on a sub-surface will accumulate the committed state in a cache, but the state will not be applied and hence will not change the compositor output. The cached state is applied to the sub-surface immediately after the parent surface's state is applied. This ensures atomic updates of the parent and all its synchronized sub-surfaces. Applying the cached state will invalidate the cache, so further parent surface commits do not (re-)apply old state.

See wl_subsurface for the recursive effect of this mode.

Change the commit behaviour of the sub-surface to desynchronized mode, also described as independent or freely running mode.

In desynchronized mode, wl_surface.commit on a sub-surface will apply the pending state directly, without caching, as happens normally with a wl_surface. Calling wl_surface.commit on the parent surface has no effect on the sub-surface's wl_surface state. This mode allows a sub-surface to be updated on its own.

If cached state exists when wl_surface.commit is called in desynchronized mode, the pending state is added to the cached state, and applied as a whole. This invalidates the cache.

Note: even if a sub-surface is set to desynchronized, a parent sub-surface may override it to behave as synchronized. For details, see wl_subsurface.

If a surface's parent surface behaves as desynchronized, then the cached state is applied on set_desync.
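A sketch of an atomic parent-plus-child update using synchronized mode in C: the child's commit is cached and only takes effect with the parent's commit. All buffers and dimensions are assumed to be prepared elsewhere; the position is illustrative.

    #include <wayland-client.h>

    static void atomic_update(struct wl_subsurface *sub,
                    struct wl_surface *video, struct wl_buffer *video_buf,
                    int32_t vid_w, int32_t vid_h,
                    struct wl_surface *parent, struct wl_buffer *parent_buf,
                    int32_t win_w, int32_t win_h) {
            wl_subsurface_set_sync(sub); /* effective immediately */

            wl_surface_attach(video, video_buf, 0, 0);
            wl_surface_damage_buffer(video, 0, 0, vid_w, vid_h);
            wl_surface_commit(video); /* cached, not yet visible */

            wl_subsurface_set_position(sub, 16, 48); /* applies with parent */
            wl_surface_attach(parent, parent_buf, 0, 0);
            wl_surface_damage_buffer(parent, 0, 0, win_w, win_h);
            wl_surface_commit(parent); /* both updates land together */
    }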
waypipe-v0.10.5/protocols/wlr-data-control-unstable-v1.xml

Copyright © 2018 Simon Ser
Copyright © 2019 Ivan Molodetskikh

Permission to use, copy, modify, distribute, and sell this software and its documentation for any purpose is hereby granted without fee, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of the copyright holders not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. The copyright holders make no representations about the suitability of this software for any purpose. It is provided "as is" without express or implied warranty.

THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

This protocol allows a privileged client to control data devices. In particular, the client will be able to manage the current selection and take the role of a clipboard manager.

Warning! The protocol described in this file is experimental and backward incompatible changes may be made. Backward compatible changes may be added together with the corresponding interface version bump. Backward incompatible changes are done by bumping the version number in the protocol and interface names and resetting the interface version. Once the protocol is to be declared stable, the 'z' prefix and the version number in the protocol and interface names are removed and the interface version number is reset.

This interface is a manager that allows creating per-seat data device controls.

Create a new data source.

Create a data device that can be used to manage a seat's selection.

All objects created by the manager will still remain valid, until their appropriate destroy request has been called.

This interface allows a client to manage a seat's selection.

When the seat is destroyed, this object becomes inert.

This request asks the compositor to set the selection to the data from the source on behalf of the client.

The given source may not be used in any further set_selection or set_primary_selection requests. Attempting to use a previously used source is a protocol error.

To unset the selection, set the source to NULL.

Destroys the data device object.
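A sketch of publishing a selection in C, assuming headers generated by wayland-scanner from this XML and already-bound manager and seat globals; the payload and MIME type are illustrative and error handling is trimmed:

    #include <string.h>
    #include <unistd.h>
    #include <wayland-client.h>

    static void on_send(void *data,
                    struct zwlr_data_control_source_v1 *src,
                    const char *mime_type, int32_t fd) {
            /* Write the payload as `mime_type`, then close the fd. */
            const char *payload = "hello";
            write(fd, payload, strlen(payload));
            close(fd);
    }
    static void on_cancelled(void *data,
                    struct zwlr_data_control_source_v1 *src) {
            /* Replaced by another source: clean up. */
            zwlr_data_control_source_v1_destroy(src);
    }
    static const struct zwlr_data_control_source_v1_listener src_listener = {
            .send = on_send,
            .cancelled = on_cancelled,
    };

    static void publish_selection(
                    struct zwlr_data_control_manager_v1 *manager,
                    struct wl_seat *seat) {
            struct zwlr_data_control_device_v1 *device =
                            zwlr_data_control_manager_v1_get_data_device(
                                            manager, seat);
            struct zwlr_data_control_source_v1 *source =
                            zwlr_data_control_manager_v1_create_data_source(
                                            manager);
            zwlr_data_control_source_v1_add_listener(source,
                            &src_listener, NULL);
            /* Offer MIME types before set_selection, per the text above. */
            zwlr_data_control_source_v1_offer(source,
                            "text/plain;charset=utf-8");
            zwlr_data_control_device_v1_set_selection(device, source);
    }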
The data_offer event introduces a new wlr_data_control_offer object, which will subsequently be used in either the wlr_data_control_device.selection event (for the regular clipboard selections) or the wlr_data_control_device.primary_selection event (for the primary clipboard selections). Immediately following the wlr_data_control_device.data_offer event, the new data_offer object will send out wlr_data_control_offer.offer events to describe the MIME types it offers.

The selection event is sent out to notify the client of a new wlr_data_control_offer for the selection for this device. The wlr_data_control_device.data_offer and the wlr_data_control_offer.offer events are sent out immediately before this event to introduce the data offer object. The selection event is sent to a client when a new selection is set. The wlr_data_control_offer is valid until a new wlr_data_control_offer or NULL is received. The client must destroy the previous selection wlr_data_control_offer, if any, upon receiving this event.

The first selection event is sent upon binding the wlr_data_control_device object.

This data control object is no longer valid and should be destroyed by the client.

The primary_selection event is sent out to notify the client of a new wlr_data_control_offer for the primary selection for this device. The wlr_data_control_device.data_offer and the wlr_data_control_offer.offer events are sent out immediately before this event to introduce the data offer object. The primary_selection event is sent to a client when a new primary selection is set. The wlr_data_control_offer is valid until a new wlr_data_control_offer or NULL is received. The client must destroy the previous primary selection wlr_data_control_offer, if any, upon receiving this event.

If the compositor supports primary selection, the first primary_selection event is sent upon binding the wlr_data_control_device object.

This request asks the compositor to set the primary selection to the data from the source on behalf of the client.

The given source may not be used in any further set_selection or set_primary_selection requests. Attempting to use a previously used source is a protocol error.

To unset the primary selection, set the source to NULL.

The compositor will ignore this request if it does not support primary selection.

The wlr_data_control_source object is the source side of a wlr_data_control_offer. It is created by the source client in a data transfer and provides a way to describe the offered data and a way to respond to requests to transfer the data.

This request adds a MIME type to the set of MIME types advertised to targets. Can be called several times to offer multiple types.

Calling this after wlr_data_control_device.set_selection is a protocol error.

Destroys the data source object.

Request for data from the client. Send the data as the specified MIME type over the passed file descriptor, then close it.

This data source is no longer valid. The data source has been replaced by another data source. The client should clean up and destroy this data source.

A wlr_data_control_offer represents a piece of data offered for transfer by another client (the source client). The offer describes the different MIME types that the data can be converted to and provides the mechanism for transferring the data directly from the source client.

To transfer the offered data, the client issues this request and indicates the MIME type it wants to receive. The transfer happens through the passed file descriptor (typically created with the pipe system call). The source client writes the data in the MIME type representation requested and then closes the file descriptor.

The receiving client reads from the read end of the pipe until EOF and then closes its end, at which point the transfer is complete.

This request may happen multiple times for different MIME types.
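The receiving side of the transfer described above, as a C sketch; scanner-generated headers are assumed, consume() is a hypothetical sink, and the blocking read loop is kept deliberately simple:

    #include <unistd.h>
    #include <wayland-client.h>

    static void consume(const char *buf, size_t n); /* app hook */

    static void read_offer(struct wl_display *display,
                    struct zwlr_data_control_offer_v1 *offer) {
            int fds[2];
            if (pipe(fds) < 0)
                    return;
            zwlr_data_control_offer_v1_receive(offer,
                            "text/plain;charset=utf-8", fds[1]);
            close(fds[1]); /* keep only the read end on our side */
            wl_display_flush(display); /* push the request out */
            char buf[4096];
            ssize_t n;
            while ((n = read(fds[0], buf, sizeof buf)) > 0)
                    consume(buf, (size_t)n);
            close(fds[0]); /* EOF: transfer complete */
    }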
Destroys the data offer object.

Sent immediately after creating the wlr_data_control_offer object. One event per offered MIME type.

waypipe-v0.10.5/protocols/wlr-export-dmabuf-unstable-v1.xml

Copyright © 2018 Rostislav Pehlivanov

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

An interface to capture surfaces in an efficient way by exporting DMA-BUFs.

Warning! The protocol described in this file is experimental and backward incompatible changes may be made. Backward compatible changes may be added together with the corresponding interface version bump. Backward incompatible changes are done by bumping the version number in the protocol and interface names and resetting the interface version. Once the protocol is to be declared stable, the 'z' prefix and the version number in the protocol and interface names are removed and the interface version number is reset.

This object is a manager with which to start capturing from sources.

Capture the next frame of an entire output.

All objects created by the manager will still remain valid, until their appropriate destroy request has been called.

This object represents a single DMA-BUF frame.

If the capture is successful, the compositor will first send a "frame" event, followed by one or several "object". When the frame is available for readout, the "ready" event is sent.

If the capture failed, the "cancel" event is sent. This can happen anytime before the "ready" event.

Once either a "ready" or a "cancel" event is received, the client should destroy the frame. Once an "object" event is received, the client is responsible for closing the associated file descriptor.

All frames are read-only and may not be written into or altered.

Special flags that should be respected by the client.

Main event supplying the client with information about the frame. If the capture didn't fail, this event is always emitted first before any other events.

This event is followed by a number of "object" events as specified by the "num_objects" argument.

Event which serves to supply the client with the file descriptors containing the data for each object.

After receiving this event, the client must always close the file descriptor as soon as they're done with it and even if the frame fails.

This event is sent as soon as the frame is presented, indicating it is available for reading. This event includes the time at which presentation happened.

The timestamp is expressed as tv_sec_hi, tv_sec_lo, tv_nsec triples, each component being an unsigned 32-bit value. Whole seconds are in tv_sec which is a 64-bit value combined from tv_sec_hi and tv_sec_lo, and the additional fractional part in tv_nsec as nanoseconds. Hence, for valid timestamps tv_nsec must be in [0, 999999999]. The seconds part may have an arbitrary offset at start.

After receiving this event, the client should destroy this object.
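A small C sketch of reassembling the split 64-bit timestamp carried by these "ready" events (the same encoding is used by wlr-screencopy below):

    #include <stdint.h>
    #include <time.h>

    static struct timespec frame_time(uint32_t tv_sec_hi,
                    uint32_t tv_sec_lo, uint32_t tv_nsec) {
            struct timespec ts;
            /* Whole seconds are split across two 32-bit arguments. */
            ts.tv_sec = (time_t)(((uint64_t)tv_sec_hi << 32) | tv_sec_lo);
            ts.tv_nsec = tv_nsec; /* protocol guarantees [0, 999999999] */
            return ts;
    }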
Indicates reason for cancelling the frame. If the capture failed or if the frame is no longer valid after the "frame" event has been emitted, this event will be used to inform the client to scrap the frame.

If the failure is temporary, the client may capture again the same source. If the failure is permanent, any further attempts to capture the same source will fail again.

After receiving this event, the client should destroy this object.

Unreferences the frame. This request must be called as soon as it's no longer used. It can be called at any time by the client. The client will still have to close any FDs it has been given.

waypipe-v0.10.5/protocols/wlr-gamma-control-unstable-v1.xml

Copyright © 2015 Giulio camuffo
Copyright © 2018 Simon Ser

Permission to use, copy, modify, distribute, and sell this software and its documentation for any purpose is hereby granted without fee, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of the copyright holders not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. The copyright holders make no representations about the suitability of this software for any purpose. It is provided "as is" without express or implied warranty.

THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

This protocol allows a privileged client to set the gamma tables for outputs.

Warning! The protocol described in this file is experimental and backward incompatible changes may be made. Backward compatible changes may be added together with the corresponding interface version bump. Backward incompatible changes are done by bumping the version number in the protocol and interface names and resetting the interface version. Once the protocol is to be declared stable, the 'z' prefix and the version number in the protocol and interface names are removed and the interface version number is reset.

This interface is a manager that allows creating per-output gamma controls.

Create a gamma control that can be used to adjust gamma tables for the provided output.

All objects created by the manager will still remain valid, until their appropriate destroy request has been called.

This interface allows a client to adjust gamma tables for a particular output.

The client will receive the gamma size, and will then be able to set gamma tables. At any time the compositor can send a failed event indicating that this object is no longer valid.
There can only be at most one gamma control object per output, which has exclusive access to this particular output. When the gamma control object is destroyed, the gamma table is restored to its original value.

Advertise the size of each gamma ramp. This event is sent immediately when the gamma control object is created.

Set the gamma table. The file descriptor can be memory-mapped to provide the raw gamma table, which contains successive gamma ramps for the red, green and blue channels. Each gamma ramp is an array of 16-bit unsigned integers which has the same length as the gamma size.

The file descriptor data must have the same length as three times the gamma size.

This event indicates that the gamma control is no longer valid. This can happen for a number of reasons, including:
- The output doesn't support gamma tables
- Setting the gamma tables failed
- Another client already has exclusive gamma control for this output
- The compositor has transferred gamma control to another client

Upon receiving this event, the client should destroy this object.

Destroys the gamma control object. If the object is still valid, this restores the original gamma tables.
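A C sketch of filling and uploading an identity gamma table, assuming scanner-generated headers and a ramp_size taken from the gamma_size event; a memfd is used here, but any mmappable fd works, and error handling is trimmed:

    #define _GNU_SOURCE
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void set_identity_gamma(struct zwlr_gamma_control_v1 *ctl,
                    uint32_t ramp_size) {
            if (ramp_size < 2)
                    return;
            size_t bytes = (size_t)ramp_size * 3 * sizeof(uint16_t);
            int fd = memfd_create("gamma", 0);
            if (fd < 0 || ftruncate(fd, (off_t)bytes) < 0)
                    return;
            uint16_t *table = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
            /* Successive ramps: red, then green, then blue. */
            uint16_t *r = table, *g = table + ramp_size,
                     *b = table + 2 * ramp_size;
            for (uint32_t i = 0; i < ramp_size; i++) {
                    uint16_t v = (uint16_t)((i * 0xffffu) /
                                    (ramp_size - 1));
                    r[i] = g[i] = b[i] = v;
            }
            munmap(table, bytes);
            zwlr_gamma_control_v1_set_gamma(ctl, fd);
            close(fd);
    }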
waypipe-v0.10.5/protocols/wlr-screencopy-unstable-v1.xml

Copyright © 2018 Simon Ser
Copyright © 2019 Andri Yngvason

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

This protocol allows clients to ask the compositor to copy part of the screen content to a client buffer.

Warning! The protocol described in this file is experimental and backward incompatible changes may be made. Backward compatible changes may be added together with the corresponding interface version bump. Backward incompatible changes are done by bumping the version number in the protocol and interface names and resetting the interface version. Once the protocol is to be declared stable, the 'z' prefix and the version number in the protocol and interface names are removed and the interface version number is reset.

This object is a manager which offers requests to start capturing from a source.

Capture the next frame of an entire output.

Capture the next frame of an output's region. The region is given in output logical coordinates, see xdg_output.logical_size. The region will be clipped to the output's extents.

All objects created by the manager will still remain valid, until their appropriate destroy request has been called.

This object represents a single frame.

When created, a series of buffer events will be sent, each representing a supported buffer type. The "buffer_done" event is sent afterwards to indicate that all supported buffer types have been enumerated. The client will then be able to send a "copy" request. If the capture is successful, the compositor will send a "flags" followed by a "ready" event.

For objects version 2 or lower, wl_shm buffers are always supported, i.e. the "buffer" event is guaranteed to be sent.

If the capture failed, the "failed" event is sent. This can happen anytime before the "ready" event.

Once either a "ready" or a "failed" event is received, the client should destroy the frame.

Provides information about wl_shm buffer parameters that need to be used for this frame. This event is sent once after the frame is created if wl_shm buffers are supported.

Copy the frame to the supplied buffer. The buffer must have the correct size, see zwlr_screencopy_frame_v1.buffer and zwlr_screencopy_frame_v1.linux_dmabuf. The buffer needs to have a supported format.

If the frame is successfully copied, "flags" and "ready" events are sent. Otherwise, a "failed" event is sent.

Provides flags about the frame. This event is sent once before the "ready" event.

Called as soon as the frame is copied, indicating it is available for reading. This event includes the time at which presentation happened.

The timestamp is expressed as tv_sec_hi, tv_sec_lo, tv_nsec triples, each component being an unsigned 32-bit value. Whole seconds are in tv_sec which is a 64-bit value combined from tv_sec_hi and tv_sec_lo, and the additional fractional part in tv_nsec as nanoseconds. Hence, for valid timestamps tv_nsec must be in [0, 999999999]. The seconds part may have an arbitrary offset at start.

After receiving this event, the client should destroy the object.

This event indicates that the attempted frame copy has failed. After receiving this event, the client should destroy the object.

Destroys the frame. This request can be sent at any time by the client.

Same as copy, except it waits until there is damage to copy.

This event is sent right before the ready event when copy_with_damage is requested. It may be generated multiple times for each copy_with_damage request. The arguments describe a box around an area that has changed since the last copy request that was derived from the current screencopy manager instance.

The union of all regions received between the call to copy_with_damage and a ready event is the total damage since the prior ready event.

Provides information about linux-dmabuf buffer parameters that need to be used for this frame. This event is sent once after the frame is created if linux-dmabuf buffers are supported.

This event is sent once after all buffer events have been sent. The client should proceed to create a buffer of one of the supported types, and send a "copy" request.
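A C sketch of answering the wl_shm "buffer" parameters with a matching buffer; scanner-generated headers and a bound wl_shm global are assumed, create_shm_file() is the usual hypothetical memfd helper, and for simplicity this records the parameters and sends "copy" directly from the buffer handler (with a version 3 compositor one would instead wait for "buffer_done" before choosing a buffer type):

    #include <unistd.h>
    #include <wayland-client.h>

    static int create_shm_file(size_t size); /* hypothetical helper */

    static struct wl_shm *shm; /* bound elsewhere */

    static void on_buffer(void *data,
                    struct zwlr_screencopy_frame_v1 *frame,
                    uint32_t format, uint32_t width, uint32_t height,
                    uint32_t stride) {
            size_t size = (size_t)stride * height;
            int fd = create_shm_file(size);
            struct wl_shm_pool *pool =
                            wl_shm_create_pool(shm, fd, (int)size);
            struct wl_buffer *buffer = wl_shm_pool_create_buffer(pool,
                            0, (int)width, (int)height, (int)stride,
                            format);
            wl_shm_pool_destroy(pool);
            close(fd);
            zwlr_screencopy_frame_v1_copy(frame, buffer);
            /* Next: wait for "flags" + "ready" (or "failed"), read the
             * pixels out of the shm mapping, then destroy the frame. */
    }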
waypipe-v0.10.5/protocols/xdg-shell.xml

Copyright © 2008-2013 Kristian Høgsberg
Copyright © 2013 Rafael Antognolli
Copyright © 2013 Jasper St. Pierre
Copyright © 2010-2013 Intel Corporation
Copyright © 2015-2017 Samsung Electronics Co., Ltd
Copyright © 2015-2017 Red Hat Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

The xdg_wm_base interface is exposed as a global object enabling clients to turn their wl_surfaces into windows in a desktop environment. It defines the basic functionality needed for clients and the compositor to create windows that can be dragged, resized, maximized, etc, as well as creating transient windows such as popup menus.

Destroy this xdg_wm_base object.

Destroying a bound xdg_wm_base object while there are surfaces still alive created by this xdg_wm_base object instance is illegal and will result in a defunct_surfaces error.

Create a positioner object. A positioner object is used to position surfaces relative to some parent surface. See the interface description and xdg_surface.get_popup for details.

This creates an xdg_surface for the given surface. While xdg_surface itself is not a role, the corresponding surface may only be assigned a role extending xdg_surface, such as xdg_toplevel or xdg_popup. It is illegal to create an xdg_surface for a wl_surface which already has an assigned role and this will result in a role error.

This creates an xdg_surface for the given surface. An xdg_surface is used as basis to define a role to a given surface, such as xdg_toplevel or xdg_popup. It also manages functionality shared between xdg_surface based surface roles.

See the documentation of xdg_surface for more details about what an xdg_surface is and how it is used.

A client must respond to a ping event with a pong request or the client may be deemed unresponsive. See xdg_wm_base.ping and xdg_wm_base.error.unresponsive.

The ping event asks the client if it's still alive. Pass the serial specified in the event back to the compositor by sending a "pong" request back with the specified serial. See xdg_wm_base.pong.

Compositors can use this to determine if the client is still alive. It's unspecified what will happen if the client doesn't respond to the ping request, or in what timeframe. Clients should try to respond in a reasonable amount of time. The “unresponsive” error is provided for compositors that wish to disconnect unresponsive clients.

A compositor is free to ping in any way it wants, but a client must always respond to any xdg_wm_base object it created.
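The mandatory liveness reply is a one-liner; a C sketch, assuming the header that wayland-scanner generates from this file:

    #include <wayland-client.h>
    #include "xdg-shell-client-protocol.h" /* wayland-scanner output */

    static void on_ping(void *data, struct xdg_wm_base *wm_base,
                    uint32_t serial) {
            /* Echo the serial back so the compositor does not deem
             * this client unresponsive. */
            xdg_wm_base_pong(wm_base, serial);
    }
    static const struct xdg_wm_base_listener wm_base_listener = {
            .ping = on_ping,
    };
    /* After binding the global:
     * xdg_wm_base_add_listener(wm_base, &wm_base_listener, NULL); */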
The xdg_positioner provides a collection of rules for the placement of a child surface relative to a parent surface. Rules can be defined to ensure the child surface remains within the visible area's borders, and to specify how the child surface changes its position, such as sliding along an axis, or flipping around a rectangle. These positioner-created rules are constrained by the requirement that a child surface must intersect with or be at least partially adjacent to its parent surface.

See the various requests for details about possible rules.

At the time of the request, the compositor makes a copy of the rules specified by the xdg_positioner. Thus, after the request is complete the xdg_positioner object can be destroyed or reused; further changes to the object will have no effect on previous usages.

For an xdg_positioner object to be considered complete, it must have a non-zero size set by set_size, and a non-zero anchor rectangle set by set_anchor_rect. Passing an incomplete xdg_positioner object when positioning a surface raises an invalid_positioner error.

Notify the compositor that the xdg_positioner will no longer be used.

Set the size of the surface that is to be positioned with the positioner object. The size is in surface-local coordinates and corresponds to the window geometry. See xdg_surface.set_window_geometry.

If a zero or negative size is set the invalid_input error is raised.

Specify the anchor rectangle within the parent surface that the child surface will be placed relative to. The rectangle is relative to the window geometry as defined by xdg_surface.set_window_geometry of the parent surface.

When the xdg_positioner object is used to position a child surface, the anchor rectangle may not extend outside the window geometry of the positioned child's parent surface.

If a negative size is set the invalid_input error is raised.

Defines the anchor point for the anchor rectangle. The specified anchor is used to derive an anchor point that the child surface will be positioned relative to. If a corner anchor is set (e.g. 'top_left' or 'bottom_right'), the anchor point will be at the specified corner; otherwise, the derived anchor point will be centered on the specified edge, or in the center of the anchor rectangle if no edge is specified.

Defines in what direction a surface should be positioned, relative to the anchor point of the parent surface. If a corner gravity is specified (e.g. 'bottom_right' or 'top_left'), then the child surface will be placed towards the specified gravity; otherwise, the child surface will be centered over the anchor point on any axis that had no gravity specified. If the gravity is not in the ‘gravity’ enum, an invalid_input error is raised.

The constraint adjustment value defines ways the compositor will adjust the position of the surface, if the unadjusted position would result in the surface being partly constrained.

Whether a surface is considered 'constrained' is left to the compositor to determine. For example, the surface may be partly outside the compositor's defined 'work area', thus necessitating the child surface's position be adjusted until it is entirely inside the work area.

The adjustments can be combined, according to a defined precedence: 1) Flip, 2) Slide, 3) Resize.

Don't alter the surface position even if it is constrained on some axis, for example partially outside the edge of an output.

Slide the surface along the x axis until it is no longer constrained.
First try to slide towards the direction of the gravity on the x axis until either the edge in the opposite direction of the gravity is unconstrained or the edge in the direction of the gravity is constrained. Then try to slide towards the opposite direction of the gravity on the x axis until either the edge in the direction of the gravity is unconstrained or the edge in the opposite direction of the gravity is constrained.

Slide the surface along the y axis until it is no longer constrained.

First try to slide towards the direction of the gravity on the y axis until either the edge in the opposite direction of the gravity is unconstrained or the edge in the direction of the gravity is constrained. Then try to slide towards the opposite direction of the gravity on the y axis until either the edge in the direction of the gravity is unconstrained or the edge in the opposite direction of the gravity is constrained.

Invert the anchor and gravity on the x axis if the surface is constrained on the x axis. For example, if the left edge of the surface is constrained, the gravity is 'left' and the anchor is 'left', change the gravity to 'right' and the anchor to 'right'.

If the adjusted position also ends up being constrained, the resulting position of the flip_x adjustment will be the one before the adjustment.

Invert the anchor and gravity on the y axis if the surface is constrained on the y axis. For example, if the bottom edge of the surface is constrained, the gravity is 'bottom' and the anchor is 'bottom', change the gravity to 'top' and the anchor to 'top'.

The adjusted position is calculated given the original anchor rectangle and offset, but with the new flipped anchor and gravity values. If the adjusted position also ends up being constrained, the resulting position of the flip_y adjustment will be the one before the adjustment.

Resize the surface horizontally so that it is completely unconstrained.

Resize the surface vertically so that it is completely unconstrained.

Specify how the window should be positioned if the originally intended position caused the surface to be constrained, meaning at least partially outside positioning boundaries set by the compositor. The adjustment is set by constructing a bitmask describing the adjustment to be made when the surface is constrained on that axis.

If no bit for one axis is set, the compositor will assume that the child surface should not change its position on that axis when constrained.

If more than one bit for one axis is set, the order of how adjustments are applied is specified in the corresponding adjustment descriptions.

The default adjustment is none.

Specify the surface position offset relative to the position of the anchor on the anchor rectangle and the anchor on the surface. For example if the anchor of the anchor rectangle is at (x, y), the surface has the gravity bottom|right, and the offset is (ox, oy), the calculated surface position will be (x + ox, y + oy). The offset position of the surface is the one used for constraint testing. See set_constraint_adjustment.

An example use case is placing a popup menu on top of a user interface element, while aligning the user interface element of the parent surface with some user interface element placed somewhere in the popup surface.
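Putting the positioner requests together, a C sketch of placing a popup menu below a button's rectangle, with flip/slide fallbacks; the coordinates, sizes, and the two xdg_surface objects are illustrative:

    #include "xdg-shell-client-protocol.h" /* wayland-scanner output */

    static struct xdg_popup *place_menu(struct xdg_wm_base *wm_base,
                    struct xdg_surface *popup_xdg_surface,
                    struct xdg_surface *parent_xdg_surface,
                    int32_t btn_x, int32_t btn_y,
                    int32_t btn_w, int32_t btn_h) {
            struct xdg_positioner *pos =
                            xdg_wm_base_create_positioner(wm_base);
            xdg_positioner_set_size(pos, 200, 300); /* window geometry */
            xdg_positioner_set_anchor_rect(pos, btn_x, btn_y,
                            btn_w, btn_h);
            xdg_positioner_set_anchor(pos,
                            XDG_POSITIONER_ANCHOR_BOTTOM_LEFT);
            xdg_positioner_set_gravity(pos,
                            XDG_POSITIONER_GRAVITY_BOTTOM_RIGHT);
            xdg_positioner_set_constraint_adjustment(pos,
                            XDG_POSITIONER_CONSTRAINT_ADJUSTMENT_FLIP_Y |
                            XDG_POSITIONER_CONSTRAINT_ADJUSTMENT_SLIDE_X);
            struct xdg_popup *popup = xdg_surface_get_popup(
                            popup_xdg_surface, parent_xdg_surface, pos);
            /* The compositor copied the rules, so the positioner can
             * go away immediately. */
            xdg_positioner_destroy(pos);
            return popup;
    }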
When set reactive, the surface is reconstrained if the conditions used for constraining changed, e.g. the parent window moved.

If the conditions changed and the popup was reconstrained, an xdg_popup.configure event is sent with updated geometry, followed by an xdg_surface.configure event.

Set the parent window geometry the compositor should use when positioning the popup. The compositor may use this information to determine the future state the popup should be constrained using. If this doesn't match the dimension of the parent the popup is eventually positioned against, the behavior is undefined.

The arguments are given in the surface-local coordinate space.

Set the serial of an xdg_surface.configure event this positioner will be used in response to. The compositor may use this information together with set_parent_size to determine what future state the popup should be constrained using.

An interface that may be implemented by a wl_surface, for implementations that provide a desktop-style user interface.

It provides a base set of functionality required to construct user interface elements requiring management by the compositor, such as toplevel windows, menus, etc. The types of functionality are split into xdg_surface roles.

Creating an xdg_surface does not set the role for a wl_surface. In order to map an xdg_surface, the client must create a role-specific object using, e.g., get_toplevel, get_popup. The wl_surface for any given xdg_surface can have at most one role, and may not be assigned any role not based on xdg_surface.

A role must be assigned before any other requests are made to the xdg_surface object.

The client must call wl_surface.commit on the corresponding wl_surface for the xdg_surface state to take effect.

Creating an xdg_surface from a wl_surface which has a buffer attached or committed is a client error, and any attempts by a client to attach or manipulate a buffer prior to the first xdg_surface.configure call must also be treated as errors.

After creating a role-specific object and setting it up (e.g. by sending the title, app ID, size constraints, parent, etc), the client must perform an initial commit without any buffer attached. The compositor will reply with initial wl_surface state such as wl_surface.preferred_buffer_scale followed by an xdg_surface.configure event. The client must acknowledge it and is then allowed to attach a buffer to map the surface.

Mapping an xdg_surface-based role surface is defined as making it possible for the surface to be shown by the compositor. Note that a mapped surface is not guaranteed to be visible once it is mapped.

For an xdg_surface to be mapped by the compositor, the following conditions must be met:
(1) the client has assigned an xdg_surface-based role to the surface
(2) the client has set and committed the xdg_surface state and the role-dependent state to the surface
(3) the client has committed a buffer to the surface

A newly-unmapped surface is considered to have met condition (1) out of the 3 required conditions for mapping a surface if its role surface has not been destroyed, i.e. the client must perform the initial commit again before attaching a buffer.
If null is passed as a parent, a parent surface must be specified using some other protocol, before committing the initial state. See the documentation of xdg_popup for more details about what an xdg_popup is and how it is used. The window geometry of a surface is its "visible bounds" from the user's perspective. Client-side decorations often have invisible portions like drop-shadows which should be ignored for the purposes of aligning, placing and constraining windows. The window geometry is double-buffered state, see wl_surface.commit. When maintaining a position, the compositor should treat the (x, y) coordinate of the window geometry as the top left corner of the window. A client changing the (x, y) window geometry coordinate should in general not alter the position of the window. Once the window geometry of the surface is set, it is not possible to unset it, and it will remain the same until set_window_geometry is called again, even if a new subsurface or buffer is attached. If never set, the value is the full bounds of the surface, including any subsurfaces. This updates dynamically on every commit. This unset is meant for extremely simple clients. The arguments are given in the surface-local coordinate space of the wl_surface associated with this xdg_surface, and may extend outside of the wl_surface itself to mark parts of the subsurface tree as part of the window geometry. When applied, the effective window geometry will be the set window geometry clamped to the bounding rectangle of the combined geometry of the surface of the xdg_surface and the associated subsurfaces. The effective geometry will not be recalculated unless a new call to set_window_geometry is done and the new pending surface state is subsequently applied. The width and height of the effective window geometry must be greater than zero. Setting an invalid size will raise an invalid_size error. When a configure event is received, if a client commits the surface in response to the configure event, then the client must make an ack_configure request sometime before the commit request, passing along the serial of the configure event. For instance, for toplevel surfaces the compositor might use this information to move a surface to the top left only when the client has drawn itself for the maximized or fullscreen state. If the client receives multiple configure events before it can respond to one, it only has to ack the last configure event. Acking a configure event that was never sent raises an invalid_serial error. A client is not required to commit immediately after sending an ack_configure request - it may even ack_configure several times before its next surface commit. A client may send multiple ack_configure requests before committing, but only the last request sent before a commit indicates which configure event the client really is responding to. Sending an ack_configure request consumes the serial number sent with the request, as well as serial numbers sent by all configure events sent on this xdg_surface prior to the configure event referenced by the committed serial. It is an error to issue multiple ack_configure requests referencing a serial from the same configure event, or to issue an ack_configure request referencing a serial from a configure event issued before the event identified by the last ack_configure request for the same xdg_surface. Doing so will raise an invalid_serial error. The configure event marks the end of a configure sequence. 
A configure sequence is a set of one or more events configuring the state of the xdg_surface, including the final xdg_surface.configure event. Where applicable, xdg_surface surface roles will, during a configure sequence, extend this event as a latched state sent as events before the xdg_surface.configure event. Such events should be considered to make up a set of atomically applied configuration states, where the xdg_surface.configure commits the accumulated state. Clients should arrange their surface for the new states, and then send an ack_configure request with the serial sent in this configure event at some point before committing the new surface. If the client receives multiple configure events before it can respond to one, it is free to discard all but the last event it received. This interface defines an xdg_surface role which allows a surface to, among other things, set window-like properties such as maximize, fullscreen, and minimize, set application-specific metadata like title and id, as well as trigger user interactive operations such as interactive resize and move. An xdg_toplevel by default is responsible for providing the full intended visual representation of the toplevel, which, depending on the window state, may mean things like a title bar, window controls and drop shadow. Unmapping an xdg_toplevel means that the surface cannot be shown by the compositor until it is explicitly mapped again. All active operations (e.g., move, resize) are canceled and all attributes (e.g. title, state, stacking, ...) are discarded for an xdg_toplevel surface when it is unmapped. The xdg_toplevel returns to the state it had right after xdg_surface.get_toplevel. The client can re-map the toplevel by performing a commit without any buffer attached, waiting for a configure event and handling it as usual (see xdg_surface description). Attaching a null buffer to a toplevel unmaps the surface. This request destroys the role surface and unmaps the surface; see "Unmapping" behavior in interface section for details. Set the "parent" of this surface. This surface should be stacked above the parent surface and all other ancestor surfaces. Parent surfaces should be set on dialogs, toolboxes, or other "auxiliary" surfaces, so that the parent is raised when the dialog is raised. Setting a null parent for a child surface unsets its parent. Setting a null parent for a surface which currently has no parent is a no-op. Only mapped surfaces can have child surfaces. Setting a parent which is not mapped is equivalent to setting a null parent. If a surface becomes unmapped, its children's parent is set to the parent of the now-unmapped surface. If the now-unmapped surface has no parent, its children's parent is unset. If the now-unmapped surface becomes mapped again, its parent-child relationship is not restored. The parent toplevel must not be one of the child toplevel's descendants, and the parent must be different from the child toplevel, otherwise the invalid_parent protocol error is raised. Set a short title for the surface. This string may be used to identify the surface in a task bar, window list, or other user interface elements provided by the compositor. The string must be encoded in UTF-8. Set an application identifier for the surface. The app ID identifies the general class of applications to which the surface belongs. The compositor can use this to group multiple surfaces together, or to determine how to launch a new application.
For D-Bus activatable applications, the app ID is used as the D-Bus service name. The compositor shell will try to group application surfaces together by their app ID. As a best practice, it is suggested to select app IDs that match the basename of the application's .desktop file. For example, "org.freedesktop.FooViewer" where the .desktop file is "org.freedesktop.FooViewer.desktop". Like other properties, a set_app_id request can be sent after the xdg_toplevel has been mapped to update the property. See the desktop-entry specification [0] for more details on application identifiers and how they relate to well-known D-Bus names and .desktop files. [0] https://standards.freedesktop.org/desktop-entry-spec/ Clients implementing client-side decorations might want to show a context menu when right-clicking on the decorations, giving the user a menu that they can use to maximize or minimize the window. This request asks the compositor to pop up such a window menu at the given position, relative to the local surface coordinates of the parent surface. There are no guarantees as to what menu items the window menu contains, or even if a window menu will be drawn at all. This request must be used in response to some sort of user action like a button press, key press, or touch down event. Start an interactive, user-driven move of the surface. This request must be used in response to some sort of user action like a button press, key press, or touch down event. The passed serial is used to determine the type of interactive move (touch, pointer, etc). The server may ignore move requests depending on the state of the surface (e.g. fullscreen or maximized), or if the passed serial is no longer valid. If triggered, the surface will lose the focus of the device (wl_pointer, wl_touch, etc) used for the move. It is up to the compositor to visually indicate that the move is taking place, such as updating a pointer cursor, during the move. There is no guarantee that the device focus will return when the move is completed. These values are used to indicate which edge of a surface is being dragged in a resize operation. Start a user-driven, interactive resize of the surface. This request must be used in response to some sort of user action like a button press, key press, or touch down event. The passed serial is used to determine the type of interactive resize (touch, pointer, etc). The server may ignore resize requests depending on the state of the surface (e.g. fullscreen or maximized). If triggered, the client will receive configure events with the "resize" state enum value and the expected sizes. See the "resize" enum value for more details about what is required. The client must also acknowledge configure events using "ack_configure". After the resize is completed, the client will receive another "configure" event without the resize state. If triggered, the surface will also lose the focus of the device (wl_pointer, wl_touch, etc) used for the resize. It is up to the compositor to visually indicate that the resize is taking place, such as updating a pointer cursor, during the resize. There is no guarantee that the device focus will return when the resize is completed. The edges parameter specifies how the surface should be resized, and is one of the values of the resize_edge enum. Values not matching a variant of the enum will cause the invalid_resize_edge protocol error. The compositor may use this information to update the surface position, for example when dragging the top left corner.
The compositor may also use this information to adapt its behavior, e.g. choose an appropriate cursor image. The different state values used on the surface. This is designed for state values like maximized, fullscreen. It is paired with the configure event to ensure that both the client and the compositor setting the state can be synchronized. States set in this way are double-buffered, see wl_surface.commit. The surface is maximized. The window geometry specified in the configure event must be obeyed by the client, or the xdg_wm_base.invalid_surface_state error is raised. The client should draw without shadow or other decoration outside of the window geometry. The surface is fullscreen. The window geometry specified in the configure event is a maximum; the client cannot resize beyond it. For a surface to cover the whole fullscreened area, the geometry dimensions must be obeyed by the client. For more details, see xdg_toplevel.set_fullscreen. The surface is being resized. The window geometry specified in the configure event is a maximum; the client cannot resize beyond it. Clients that have aspect ratio or cell sizing configuration can use a smaller size, however. Client window decorations should be painted as if the window is active. Do not assume this means that the window actually has keyboard or pointer focus. The window is currently in a tiled layout and the left edge is considered to be adjacent to another part of the tiling grid. The client should draw without shadow or other decoration outside of the window geometry on the left edge. The window is currently in a tiled layout and the right edge is considered to be adjacent to another part of the tiling grid. The client should draw without shadow or other decoration outside of the window geometry on the right edge. The window is currently in a tiled layout and the top edge is considered to be adjacent to another part of the tiling grid. The client should draw without shadow or other decoration outside of the window geometry on the top edge. The window is currently in a tiled layout and the bottom edge is considered to be adjacent to another part of the tiling grid. The client should draw without shadow or other decoration outside of the window geometry on the bottom edge. The surface is currently not ordinarily being repainted; for example because its content is occluded by another window, or its outputs are switched off due to screen locking. Set a maximum size for the window. The client can specify a maximum size so that the compositor does not try to configure the window beyond this size. The width and height arguments are in window geometry coordinates. See xdg_surface.set_window_geometry. Values set in this way are double-buffered, see wl_surface.commit. The compositor can use this information to allow or disallow different states like maximize or fullscreen and draw accurate animations. Similarly, a tiling window manager may use this information to place and resize client windows in a more effective way. The client should not rely on the compositor to obey the maximum size. The compositor may decide to ignore the values set by the client and request a larger size. If never set, or set to zero in the request, the client has no expected maximum size in the given dimension. As a result, a client wishing to reset the maximum size to an unspecified state can use zero for width and height in the request.
Requesting a maximum size to be smaller than the minimum size of a surface is illegal and will result in an invalid_size error. The width and height must be greater than or equal to zero. Using strictly negative values for width or height will result in an invalid_size error. Set a minimum size for the window. The client can specify a minimum size so that the compositor does not try to configure the window below this size. The width and height arguments are in window geometry coordinates. See xdg_surface.set_window_geometry. Values set in this way are double-buffered, see wl_surface.commit. The compositor can use this information to allow or disallow different states like maximize or fullscreen and draw accurate animations. Similarly, a tiling window manager may use this information to place and resize client windows in a more effective way. The client should not rely on the compositor to obey the minimum size. The compositor may decide to ignore the values set by the client and request a smaller size. If never set, or set to zero in the request, the client has no expected minimum size in the given dimension. As a result, a client wishing to reset the minimum size to an unspecified state can use zero for width and height in the request. Requesting a minimum size to be larger than the maximum size of a surface is illegal and will result in an invalid_size error. The width and height must be greater than or equal to zero. Using strictly negative values for width and height will result in an invalid_size error. Maximize the surface. After requesting that the surface should be maximized, the compositor will respond by emitting a configure event. Whether this configure actually sets the window maximized is subject to compositor policies. The client must then update its content, drawing in the configured state. The client must also acknowledge the configure when committing the new content (see ack_configure). It is up to the compositor to decide how and where to maximize the surface, for example which output and what region of the screen should be used. If the surface was already maximized, the compositor will still emit a configure event with the "maximized" state. If the surface is in a fullscreen state, this request has no direct effect. It may alter the state the surface is returned to when unmaximized unless overridden by the compositor. Unmaximize the surface. After requesting that the surface should be unmaximized, the compositor will respond by emitting a configure event. Whether this actually un-maximizes the window is subject to compositor policies. If available and applicable, the compositor will include the window geometry dimensions the window had prior to being maximized in the configure event. The client must then update its content, drawing it in the configured state. The client must also acknowledge the configure when committing the new content (see ack_configure). It is up to the compositor to position the surface after it was unmaximized; usually the position the surface had before maximizing, if applicable. If the surface was already not maximized, the compositor will still emit a configure event without the "maximized" state. If the surface is in a fullscreen state, this request has no direct effect. It may alter the state the surface is returned to when unmaximized unless overridden by the compositor. Make the surface fullscreen. After requesting that the surface should be fullscreened, the compositor will respond by emitting a configure event.
Whether the client is actually put into a fullscreen state is subject to compositor policies. The client must also acknowledge the configure when committing the new content (see ack_configure). The output passed by the request indicates the client's preference as to which display it should be set fullscreen on. If this value is NULL, it's up to the compositor to choose which display will be used to map this surface. If the surface doesn't cover the whole output, the compositor will position the surface in the center of the output and compensate with border fill covering the rest of the output. The content of the border fill is undefined, but should be assumed to be rendered in some way that attempts to blend into the surrounding area (e.g. solid black). If the fullscreened surface is not opaque, the compositor must make sure that other screen content not part of the same surface tree (made up of subsurfaces, popups or similarly coupled surfaces) is not visible below the fullscreened surface. Make the surface no longer fullscreen. After requesting that the surface should be unfullscreened, the compositor will respond by emitting a configure event. Whether this actually removes the fullscreen state of the client is subject to compositor policies. Making a surface unfullscreen sets states for the surface based on the following: * the state(s) it may have had before becoming fullscreen * any state(s) decided by the compositor * any state(s) requested by the client while the surface was fullscreen The compositor may include the previous window geometry dimensions in the configure event, if applicable. The client must also acknowledge the configure when committing the new content (see ack_configure). Request that the compositor minimize your surface. There is no way to know if the surface is currently minimized, nor is there any way to unset minimization on this surface. If you are looking to throttle redrawing when minimized, please instead use the wl_surface.frame event for this, as this will also work with live previews on windows in Alt-Tab, Expose or similar compositor features. This configure event asks the client to resize its toplevel surface or to change its state. The configured state should not be applied immediately. See xdg_surface.configure for details. The width and height arguments specify a hint to the window about how its surface should be resized in window geometry coordinates. See set_window_geometry. If the width or height arguments are zero, it means the client should decide its own window dimension. This may happen when the compositor needs to configure the state of the surface but doesn't have any information about any previous or expected dimension. The states listed in the event specify how the width/height arguments should be interpreted, and possibly how it should be drawn. Clients must send an ack_configure in response to this event. See xdg_surface.configure and xdg_surface.ack_configure for details. The close event is sent by the compositor when the user wants the surface to be closed. This should be equivalent to the user clicking the close button in client-side decorations, if your application has any. This is only a request that the user intends to close the window. The client may choose to ignore this request, or show a dialog to ask the user to save their data, etc. The configure_bounds event may be sent prior to an xdg_toplevel.configure event to communicate the bounds a window geometry size is recommended to constrain to.
The passed width and height are in surface coordinate space. If width and height are 0, it means the bounds are unknown, as if no configure_bounds event had ever been sent for this surface. The bounds can, for example, correspond to the size of a monitor excluding any panels or other shell components, so that a surface isn't created in a way that it cannot fit. The bounds may change at any point, and in such a case, a new xdg_toplevel.configure_bounds will be sent, followed by xdg_toplevel.configure and xdg_surface.configure. This event advertises the capabilities supported by the compositor. If a capability isn't supported, clients should hide or disable the UI elements that expose this functionality. For instance, if the compositor doesn't advertise support for minimized toplevels, a button triggering the set_minimized request should not be displayed. The compositor will ignore requests it doesn't support. For instance, a compositor which doesn't advertise support for minimized will ignore set_minimized requests. Compositors must send this event once before the first xdg_surface.configure event. When the capabilities change, compositors must send this event again and then send an xdg_surface.configure event. The configured state should not be applied immediately. See xdg_surface.configure for details. The capabilities are sent as an array of 32-bit unsigned integers in native endianness. A popup surface is a short-lived, temporary surface. It can be used to implement, for example, menus, popovers, tooltips and other similar user interface concepts. A popup can be made to take an explicit grab. See xdg_popup.grab for details. When the popup is dismissed, a popup_done event will be sent out, and at the same time the surface will be unmapped. See the xdg_popup.popup_done event for details. Explicitly destroying the xdg_popup object will also dismiss the popup and unmap the surface. Clients that want to dismiss the popup when another surface of their own is clicked should dismiss the popup using the destroy request. A newly created xdg_popup will be stacked on top of all previously created xdg_popup surfaces associated with the same xdg_toplevel. The parent of an xdg_popup must be mapped (see the xdg_surface description) before the xdg_popup itself. The client must call wl_surface.commit on the corresponding wl_surface for the xdg_popup state to take effect. This destroys the popup. Explicitly destroying the xdg_popup object will also dismiss the popup, and unmap the surface. If this xdg_popup is not the "topmost" popup, the xdg_wm_base.not_the_topmost_popup protocol error will be sent. This request makes the created popup take an explicit grab. An explicit grab will be dismissed when the user dismisses the popup, or when the client destroys the xdg_popup. This can be done by the user clicking outside the surface, using the keyboard, or even locking the screen through closing the lid or a timeout. If the compositor denies the grab, the popup will be immediately dismissed. This request must be used in response to some sort of user action like a button press, key press, or touch down event. The serial number of the event should be passed as 'serial'. The parent of a grabbing popup must either be an xdg_toplevel surface or another xdg_popup with an explicit grab. If the parent is another xdg_popup, it means that the popups are nested, with this popup now being the topmost popup. Nested popups must be destroyed in the reverse order they were created in, i.e.
the only popup you are allowed to destroy at all times is the topmost one. When compositors choose to dismiss a popup, they may dismiss every nested grabbing popup as well. When a compositor dismisses popups, it will follow the same dismissing order as required from the client. If the topmost grabbing popup is destroyed, the grab will be returned to the parent of the popup, if that parent previously had an explicit grab. If the parent is a grabbing popup which has already been dismissed, this popup will be immediately dismissed. If the parent is a popup that did not take an explicit grab, an error will be raised. During a popup grab, the client owning the grab will receive pointer and touch events for all their surfaces as normal (similar to an "owner-events" grab in X11 parlance), while the topmost grabbing popup will always have keyboard focus. This event asks the popup surface to configure itself given the configuration. The configured state should not be applied immediately. See xdg_surface.configure for details. The x and y arguments represent the position the popup was placed at given the xdg_positioner rule, relative to the upper left corner of the window geometry of the parent surface. For version 2 or older, the configure event for an xdg_popup is only ever sent once for the initial configuration. Starting with version 3, it may be sent again if the popup is set up with an xdg_positioner with set_reactive requested, or in response to xdg_popup.reposition requests. The popup_done event is sent out when a popup is dismissed by the compositor. The client should destroy the xdg_popup object at this point. Reposition an already-mapped popup. The popup will be placed given the details in the passed xdg_positioner object, and an xdg_popup.repositioned followed by xdg_popup.configure and xdg_surface.configure will be emitted in response. Any parameters set by the previous positioner will be discarded. The passed token will be sent in the corresponding xdg_popup.repositioned event. The new popup position will not take effect until the corresponding configure event is acknowledged by the client. See xdg_popup.repositioned for details. The token itself is opaque, and has no other special meaning. If multiple reposition requests are sent, the compositor may skip all but the last one. If the popup is repositioned in response to a configure event for its parent, the client should send an xdg_positioner.set_parent_configure and possibly an xdg_positioner.set_parent_size request to allow the compositor to properly constrain the popup. If the popup is repositioned together with a parent that is being resized, but not in response to a configure event, the client should send an xdg_positioner.set_parent_size request. The repositioned event is sent as part of a popup configuration sequence, together with xdg_popup.configure and lastly xdg_surface.configure to notify the completion of a reposition request. The repositioned event is to notify about the completion of an xdg_popup.reposition request. The token argument is the token passed in the xdg_popup.reposition request. Immediately after this event is emitted, xdg_popup.configure and xdg_surface.configure will be sent with the updated size and position, as well as a new configure serial. The client should optionally update the content of the popup, but must acknowledge the new popup configuration for the new position to take effect. See xdg_surface.ack_configure for details.
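The configure/ack_configure contract described in the xdg-shell text above is the core of its state machine and is easy to get subtly wrong. Below is a minimal, self-contained Rust sketch of the bookkeeping a client might keep; all names here (PendingConfigure, Toplevel, take_state_to_ack) are hypothetical and not part of Waypipe or any protocol binding. It only illustrates latching role state, superseding stale configure sequences, and consuming each serial once.

```rust
/// State latched by role events during a configure sequence; applied
/// atomically when the xdg_surface.configure event ends the sequence.
#[derive(Default, Clone, Copy)]
struct PendingConfigure {
    serial: u32,
    width: i32,
    height: i32,
    maximized: bool,
}

#[derive(Default)]
struct Toplevel {
    latched: PendingConfigure,
    /* Last completed but not-yet-acked sequence; a newer sequence
     * replaces it, since only the last configure event needs an ack. */
    pending: Option<PendingConfigure>,
    acked_serial: Option<u32>,
}

impl Toplevel {
    /* xdg_toplevel.configure: latch the state, apply nothing yet. */
    fn on_toplevel_configure(&mut self, width: i32, height: i32, maximized: bool) {
        self.latched.width = width;
        self.latched.height = height;
        self.latched.maximized = maximized;
    }

    /* xdg_surface.configure: the accumulated state becomes a complete
     * sequence, superseding any earlier unanswered one. */
    fn on_surface_configure(&mut self, serial: u32) {
        self.latched.serial = serial;
        self.pending = Some(self.latched);
    }

    /* When ready to redraw: take the state, remember the serial, and
     * (in a real client) send ack_configure(serial) before the commit. */
    fn take_state_to_ack(&mut self) -> Option<PendingConfigure> {
        let state = self.pending.take()?;
        assert_ne!(self.acked_serial, Some(state.serial), "serial already consumed");
        self.acked_serial = Some(state.serial);
        Some(state)
    }
}

fn main() {
    let mut t = Toplevel::default();
    t.on_toplevel_configure(640, 480, false);
    t.on_surface_configure(1);
    t.on_toplevel_configure(800, 600, true);
    t.on_surface_configure(2); // supersedes serial 1; only 2 needs acking
    let s = t.take_state_to_ack().unwrap();
    assert_eq!((s.serial, s.width, s.maximized), (2, 800, true));
}
```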
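The set_max_size/set_min_size rules above reduce to a small validation predicate: zero means "unset" per dimension, strictly negative values are invalid, and a set maximum below a set minimum is invalid. A hedged sketch of that check follows; the SizeError name is illustrative, and a real compositor would raise the protocol's invalid_size error instead.

```rust
#[derive(Debug, PartialEq)]
enum SizeError {
    InvalidSize, // stands in for the invalid_size protocol error
}

/// Validate (min_w, min_h) and (max_w, max_h) hints; zero means unset.
fn check_size_hints(min: (i32, i32), max: (i32, i32)) -> Result<(), SizeError> {
    // Strictly negative values are always an error.
    if min.0 < 0 || min.1 < 0 || max.0 < 0 || max.1 < 0 {
        return Err(SizeError::InvalidSize);
    }
    // A set maximum must not be smaller than a set minimum.
    for (lo, hi) in [(min.0, max.0), (min.1, max.1)] {
        if lo != 0 && hi != 0 && hi < lo {
            return Err(SizeError::InvalidSize);
        }
    }
    Ok(())
}

fn main() {
    assert!(check_size_hints((200, 100), (0, 0)).is_ok()); // no max set
    assert_eq!(
        check_size_hints((200, 100), (100, 100)),
        Err(SizeError::InvalidSize) // max width below min width
    );
}
```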
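The nesting rules for grabbing popups amount to a stack discipline: a new popup is stacked on top, and only the topmost one may be destroyed. A small illustrative sketch, with a hypothetical PopupStack type and plain u32 object ids:

```rust
struct PopupStack {
    stack: Vec<u32>, // xdg_popup ids, bottom to top
}

impl PopupStack {
    /* A new popup is stacked above all previously created ones. */
    fn push(&mut self, popup: u32) {
        self.stack.push(popup);
    }

    /* Popups must be destroyed in reverse creation order; destroying a
     * non-topmost popup is the not_the_topmost_popup protocol error. */
    fn destroy(&mut self, popup: u32) -> Result<(), &'static str> {
        if self.stack.last() == Some(&popup) {
            self.stack.pop();
            Ok(())
        } else {
            Err("not_the_topmost_popup")
        }
    }
}

fn main() {
    let mut popups = PopupStack { stack: Vec::new() };
    popups.push(10);
    popups.push(11); // nested popup, now the topmost
    assert!(popups.destroy(10).is_err()); // not the topmost: error
    assert!(popups.destroy(11).is_ok());
    assert!(popups.destroy(10).is_ok());
}
```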
waypipe-v0.10.5/protocols/xdg-toplevel-icon-v1.xml000066400000000000000000000225371507071711100221040ustar00rootroot00000000000000 Copyright © 2023-2024 Matthias Klumpp Copyright © 2024 David Edmundson Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. This protocol allows clients to set icons for their toplevel surfaces either via the XDG icon stock (using an icon name), or from pixel data. A toplevel icon represents the individual toplevel (unlike the application or launcher icon, which represents the application as a whole), and may be shown in window switchers, window overviews and taskbars that list individual windows. This document adheres to RFC 2119 when using words like "must", "should", "may", etc. Warning! The protocol described in this file is currently in the testing phase. Backward compatible changes may be added together with the corresponding interface version bump. Backward incompatible changes can only be done by creating a new major version of the extension. This interface allows clients to create toplevel window icons and set them on toplevel windows to be displayed to the user. Destroy the toplevel icon manager. This does not destroy objects created with the manager. Creates a new icon object. This icon can then be attached to a xdg_toplevel via the 'set_icon' request. This request assigns the icon 'icon' to 'toplevel', or clears the toplevel icon if 'icon' was null. This state is double-buffered and is applied on the next wl_surface.commit of the toplevel. After making this call, the xdg_toplevel_icon_v1 provided as 'icon' can be destroyed by the client without 'toplevel' losing its icon. The xdg_toplevel_icon_v1 is immutable from this point, and any future attempts to change it must raise the 'xdg_toplevel_icon_v1.immutable' protocol error. The compositor must set the toplevel icon from either the pixel data the icon provides, or by loading a stock icon using the icon name. See the description of 'xdg_toplevel_icon_v1' for details. If 'icon' is set to null, the icon of the respective toplevel is reset to its default icon (usually the icon of the application, derived from its desktop-entry file, or a placeholder icon). If this request is passed an icon with no pixel buffers or icon name assigned, the icon must be reset just like if 'icon' was null. This event indicates an icon size the compositor prefers to be available if the client has scalable icons and can render to any size. 
When the 'xdg_toplevel_icon_manager_v1' object is created, the compositor may send one or more 'icon_size' events to describe the list of preferred icon sizes. If the compositor has no size preference, it might not send any 'icon_size' event, and it is up to the client to decide a suitable icon size. A sequence of 'icon_size' events must be finished with a 'done' event. If the compositor has no size preferences, it must still send the 'done' event, without any preceding 'icon_size' events. This event is sent after all 'icon_size' events have been sent. This interface defines a toplevel icon. An icon can have a name, and multiple buffers. In order to be applied, the icon must have either a name, or at least one buffer assigned. Applying an empty icon (with no buffer or name) to a toplevel should reset its icon to the default icon. It is up to compositor policy whether to prefer using a buffer or loading an icon via its name. See 'set_name' and 'add_buffer' for details. Destroys the 'xdg_toplevel_icon_v1' object. The icon must still remain set on every toplevel it was assigned to, until the toplevel icon is reset explicitly. This request assigns an icon name to this icon. Any previously set name is overridden. The compositor must resolve 'icon_name' according to the lookup rules described in the XDG icon theme specification[1] using the environment's current icon theme. If the compositor does not support icon names or cannot resolve 'icon_name' according to the XDG icon theme specification, it must fall back to using pixel buffer data instead. If this request is made after the icon has been assigned to a toplevel via 'set_icon', an 'immutable' error must be raised. [1]: https://specifications.freedesktop.org/icon-theme-spec/icon-theme-spec-latest.html This request adds pixel data supplied as a wl_buffer to the icon. The client should add pixel data for all icon sizes and scales that it can provide, or which are explicitly requested by the compositor via 'icon_size' events on xdg_toplevel_icon_manager_v1. The wl_buffer supplying pixel data as 'buffer' must be backed by wl_shm and must be a square (width and height being equal). If any of these buffer requirements are not fulfilled, an 'invalid_buffer' error must be raised. If this icon instance already has a buffer of the same size and scale from a previous 'add_buffer' request, data from the last request overrides the preexisting pixel data. The wl_buffer must be kept alive for as long as the xdg_toplevel_icon it is associated with is not destroyed, otherwise a 'no_buffer' error is raised. The buffer contents must not be modified after it was assigned to the icon. As a result, the region of the wl_shm_pool's backing storage used for the wl_buffer must not be modified after this request is sent. The wl_buffer.release event is unused. If this request is made after the icon has been assigned to a toplevel via 'set_icon', an 'immutable' error must be raised.
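The set_name and add_buffer requirements above (shm-backed square buffers, same-size/same-scale replacement, immutability once the icon was applied via set_icon) can be summarized in a few checks. The following is a hedged sketch with hypothetical types; only the error strings mirror the protocol error names.

```rust
struct IconBuffer {
    width: u32,
    height: u32,
    scale: u32,
    is_shm: bool, // whether the wl_buffer is backed by wl_shm
}

#[derive(Default)]
struct ToplevelIcon {
    name: Option<String>,
    buffers: Vec<IconBuffer>,
    applied: bool, // set once the icon was assigned via set_icon
}

impl ToplevelIcon {
    fn set_name(&mut self, name: &str) -> Result<(), &'static str> {
        if self.applied {
            return Err("immutable");
        }
        self.name = Some(name.to_string()); // overrides any previous name
        Ok(())
    }

    fn add_buffer(&mut self, buf: IconBuffer) -> Result<(), &'static str> {
        if self.applied {
            return Err("immutable"); // already assigned to a toplevel
        }
        if !buf.is_shm || buf.width != buf.height {
            return Err("invalid_buffer"); // must be shm-backed and square
        }
        // A buffer of the same size and scale replaces the earlier one.
        self.buffers
            .retain(|b| (b.width, b.scale) != (buf.width, buf.scale));
        self.buffers.push(buf);
        Ok(())
    }
}

fn main() {
    let mut icon = ToplevelIcon::default();
    icon.set_name("org.freedesktop.FooViewer").unwrap();
    let square = IconBuffer { width: 64, height: 64, scale: 1, is_shm: true };
    assert!(icon.add_buffer(square).is_ok());
    let wide = IconBuffer { width: 64, height: 32, scale: 1, is_shm: true };
    assert_eq!(icon.add_buffer(wide), Err("invalid_buffer")); // not square
}
```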
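The protogen.py script that follows generates per-message Rust helpers (write_header, write_u32, write_string, length_string, and so on) for every interface listed in its handled_funcs table. As a rough orientation, here is a self-contained sketch of the wire format those helpers target: each message begins with a 32-bit object id, then a word packing the total message size (upper 16 bits) and the opcode (lower 16 bits), then the arguments, with strings length-prefixed (the length counts the terminating NUL) and zero-padded to 32-bit alignment. This sketch is illustrative only and is not Waypipe's actual implementation.

```rust
/* All functions here are illustrative stand-ins, not Waypipe's API. */

fn write_u32(dst: &mut Vec<u8>, v: u32) {
    dst.extend_from_slice(&v.to_ne_bytes());
}

/* Strings are length-prefixed (the length counts the terminating NUL)
 * and padded with zeros up to 32-bit alignment. */
fn write_string(dst: &mut Vec<u8>, s: &[u8]) {
    write_u32(dst, (s.len() + 1) as u32);
    dst.extend_from_slice(s);
    dst.push(0);
    while dst.len() % 4 != 0 {
        dst.push(0);
    }
}

/* A full message: object id, then size (upper 16 bits) | opcode (lower
 * 16 bits), then the argument body. */
fn write_message(object_id: u32, opcode: u16, body: &[u8]) -> Vec<u8> {
    let size = 8 + body.len();
    let mut msg = Vec::with_capacity(size);
    write_u32(&mut msg, object_id);
    write_u32(&mut msg, ((size as u32) << 16) | opcode as u32);
    msg.extend_from_slice(body);
    msg
}

fn main() {
    let mut body = Vec::new();
    write_string(&mut body, b"hello");
    let msg = write_message(3, 0, &body);
    assert_eq!(msg.len(), 8 + 4 + 8); // header + length word + padded "hello\0"
}
```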
waypipe-v0.10.5/protogen.py000077500000000000000000000602211507071711100156540ustar00rootroot00000000000000#!/usr/bin/env python3 # SPDX-License-Identifier: GPL-3.0-or-later """ Read protocol files, and convert to a rust-readable format """ import os import xml.etree.ElementTree import subprocess # Table of functions for which to generate helper code handled_funcs = [ ("ext_data_control_device_v1", "data_offer"), ("ext_data_control_device_v1", "selection"), ("ext_data_control_device_v1", "set_selection"), ("ext_data_control_manager_v1", "create_data_source"), ("ext_data_control_manager_v1", "get_data_device"), ("ext_data_control_offer_v1", "offer"), ("ext_data_control_offer_v1", "receive"), ("ext_data_control_source_v1", "offer"), ("ext_data_control_source_v1", "send"), ("ext_image_copy_capture_cursor_session_v1", "get_capture_session"), ("ext_image_copy_capture_frame_v1", "attach_buffer"), ("ext_image_copy_capture_frame_v1", "capture"), ("ext_image_copy_capture_frame_v1", "damage_buffer"), ("ext_image_copy_capture_frame_v1", "destroy"), ("ext_image_copy_capture_frame_v1", "failed"), ("ext_image_copy_capture_frame_v1", "presentation_time"), ("ext_image_copy_capture_frame_v1", "ready"), ("ext_image_copy_capture_manager_v1", "create_session"), ("ext_image_copy_capture_session_v1", "buffer_size"), ("ext_image_copy_capture_session_v1", "create_frame"), ("ext_image_copy_capture_session_v1", "destroy"), ("ext_image_copy_capture_session_v1", "dmabuf_device"), ("ext_image_copy_capture_session_v1", "dmabuf_format"), ("ext_image_copy_capture_session_v1", "done"), ("ext_image_copy_capture_session_v1", "shm_format"), ("ext_output_image_capture_source_manager_v1", "create_source"), ("gtk_primary_selection_device", "data_offer"), ("gtk_primary_selection_device", "selection"), ("gtk_primary_selection_device", "set_selection"), ("gtk_primary_selection_device_manager", "create_source"), ("gtk_primary_selection_device_manager", "get_device"), ("gtk_primary_selection_offer", "offer"), ("gtk_primary_selection_offer", "receive"), ("gtk_primary_selection_source", "offer"), ("gtk_primary_selection_source", "send"), ("wl_buffer", "destroy"), ("wl_buffer", "release"), ("wl_callback", "done"), ("wl_compositor", "create_surface"), ("wl_data_device", "data_offer"), ("wl_data_device", "selection"), ("wl_data_device", "set_selection"), ("wl_data_device_manager", "create_data_source"), ("wl_data_device_manager", "get_data_device"), ("wl_data_offer", "offer"), ("wl_data_offer", "receive"), ("wl_data_source", "offer"), ("wl_data_source", "send"), ("wl_display", "delete_id"), ("wl_display", "error"), ("wl_display", "get_registry"), ("wl_display", "sync"), ("wl_keyboard", "keymap"), ("wl_registry", "bind"), ("wl_registry", "global"), ("wl_registry", "global_remove"), ("wl_seat", "capabilities"), ("wl_seat", "get_keyboard"), ("wl_shm", "create_pool"), ("wl_shm_pool", "create_buffer"), ("wl_shm_pool", "destroy"), ("wl_shm_pool", "resize"), ("wl_surface", "attach"), ("wl_surface", "commit"), ("wl_surface", "damage"), ("wl_surface", "damage_buffer"), ("wl_surface", "destroy"), ("wl_surface", "set_buffer_scale"), ("wl_surface", "set_buffer_transform"), ("wp_color_management_output_v1", "get_image_description"), ("wp_color_manager_v1", "create_icc_creator"), ("wp_color_manager_v1", "get_output"), ("wp_commit_timer_v1", "set_timestamp"), ("wp_commit_timing_manager_v1", "get_timer"), ("wp_image_description_creator_icc_v1", "set_icc_file"), ("wp_image_description_info_v1", "icc_file"), ("wp_image_description_v1", "get_information"), 
("wp_linux_drm_syncobj_manager_v1", "get_surface"), ("wp_linux_drm_syncobj_manager_v1", "import_timeline"), ("wp_linux_drm_syncobj_surface_v1", "set_acquire_point"), ("wp_linux_drm_syncobj_surface_v1", "set_release_point"), ("wp_presentation", "clock_id"), ("wp_presentation", "destroy"), ("wp_presentation", "feedback"), ("wp_presentation_feedback", "presented"), ("wp_security_context_manager_v1", "create_listener"), ("wp_security_context_v1", "commit"), ("wp_security_context_v1", "set_app_id"), ("wp_security_context_v1", "set_instance_id"), ("wp_security_context_v1", "set_sandbox_engine"), ("wp_viewport", "destroy"), ("wp_viewport", "set_destination"), ("wp_viewport", "set_source"), ("wp_viewporter", "get_viewport"), ("xdg_surface", "get_toplevel"), ("xdg_toplevel", "set_title"), ("xdg_toplevel_icon_manager_v1", "create_icon"), ("xdg_toplevel_icon_manager_v1", "done"), ("xdg_toplevel_icon_manager_v1", "icon_size"), ("xdg_toplevel_icon_v1", "add_buffer"), ("xdg_wm_base", "get_xdg_surface"), ("zwlr_data_control_device_v1", "data_offer"), ("zwlr_data_control_device_v1", "selection"), ("zwlr_data_control_device_v1", "set_selection"), ("zwlr_data_control_manager_v1", "create_data_source"), ("zwlr_data_control_manager_v1", "get_data_device"), ("zwlr_data_control_offer_v1", "offer"), ("zwlr_data_control_offer_v1", "receive"), ("zwlr_data_control_source_v1", "offer"), ("zwlr_data_control_source_v1", "send"), ("zwlr_gamma_control_manager_v1", "get_gamma_control"), ("zwlr_gamma_control_v1", "gamma_size"), ("zwlr_gamma_control_v1", "set_gamma"), ("zwlr_screencopy_frame_v1", "buffer"), ("zwlr_screencopy_frame_v1", "buffer_done"), ("zwlr_screencopy_frame_v1", "copy"), ("zwlr_screencopy_frame_v1", "copy_with_damage"), ("zwlr_screencopy_frame_v1", "destroy"), ("zwlr_screencopy_frame_v1", "failed"), ("zwlr_screencopy_frame_v1", "linux_dmabuf"), ("zwlr_screencopy_frame_v1", "ready"), ("zwlr_screencopy_manager_v1", "capture_output"), ("zwlr_screencopy_manager_v1", "capture_output_region"), ("zwp_linux_buffer_params_v1", "add"), ("zwp_linux_buffer_params_v1", "create"), ("zwp_linux_buffer_params_v1", "create_immed"), ("zwp_linux_buffer_params_v1", "created"), ("zwp_linux_buffer_params_v1", "destroy"), ("zwp_linux_buffer_params_v1", "failed"), ("zwp_linux_dmabuf_feedback_v1", "done"), ("zwp_linux_dmabuf_feedback_v1", "format_table"), ("zwp_linux_dmabuf_feedback_v1", "main_device"), ("zwp_linux_dmabuf_feedback_v1", "tranche_done"), ("zwp_linux_dmabuf_feedback_v1", "tranche_flags"), ("zwp_linux_dmabuf_feedback_v1", "tranche_formats"), ("zwp_linux_dmabuf_feedback_v1", "tranche_target_device"), ("zwp_linux_dmabuf_v1", "create_params"), ("zwp_linux_dmabuf_v1", "format"), ("zwp_linux_dmabuf_v1", "get_default_feedback"), ("zwp_linux_dmabuf_v1", "get_surface_feedback"), ("zwp_linux_dmabuf_v1", "modifier"), ("zwp_primary_selection_device_manager_v1", "create_source"), ("zwp_primary_selection_device_manager_v1", "get_device"), ("zwp_primary_selection_device_v1", "data_offer"), ("zwp_primary_selection_device_v1", "selection"), ("zwp_primary_selection_device_v1", "set_selection"), ("zwp_primary_selection_offer_v1", "offer"), ("zwp_primary_selection_offer_v1", "receive"), ("zwp_primary_selection_source_v1", "offer"), ("zwp_primary_selection_source_v1", "send"), ] assert handled_funcs == sorted(handled_funcs), "\n".join( [str(x) for x in sorted(handled_funcs)] ) handled_funcs = set(handled_funcs) handled_enums = set([("wl_output", "transform"), ("wl_shm", "format")]) header = """/*! 
Wayland protocol interface and method data and functions. Code automatically generated from protocols/ folder. */ #![allow(clippy::all,dead_code)] use crate::wayland::*; use crate::wayland::WaylandArgument::*; use WaylandInterface::*; """ def unsnake(s): return "".join(map(lambda x: x.capitalize(), s.split("_"))) def get_signature(method): """ returns a string like "[Int, Uint, NewId(WlOutput)]" """ vals = [] for arg in method: if arg.tag != "arg": continue allow_null = "allow-null" in arg.attrib and arg.attrib["allow-null"] == "true" if arg.attrib["type"] == "object": if "interface" in arg.attrib: vals.append("Object(" + unsnake(arg.attrib["interface"]) + ")") else: vals.append("GenericObject") elif arg.attrib["type"] == "new_id": if "interface" in arg.attrib: vals.append("NewId(" + unsnake(arg.attrib["interface"]) + ")") else: vals.append("GenericNewId") elif arg.attrib["type"] == "int": vals.append("Int") elif arg.attrib["type"] == "uint": vals.append("Uint") elif arg.attrib["type"] == "fixed": vals.append("Fixed") elif arg.attrib["type"] == "string": if allow_null: vals.append("OptionalString") else: vals.append("String") elif arg.attrib["type"] == "array": vals.append("Array") elif arg.attrib["type"] == "fd": vals.append("Fd") else: raise NotImplementedError(arg.attrib) return "&[" + ", ".join(vals) + "]" def write_method_length(meth_name, method, write): """ Create a function to report how long the method would be """ lines = [] args = [] base_len = 8 for arg in method: if arg.tag != "arg": continue if arg.attrib["type"] == "new_id": if "interface" in arg.attrib: base_len += 4 else: arg_name = arg.attrib["name"] + "_iface_name_len" args.append(arg_name + ": usize") lines.append(" v += length_string({});".format(arg_name)) base_len += 8 elif arg.attrib["type"] in ("int", "uint", "object", "fixed"): base_len += 4 elif arg.attrib["type"] == "string": args.append(arg.attrib["name"] + "_len : usize") lines.append( " v += length_string({});".format(arg.attrib["name"] + "_len") ) elif arg.attrib["type"] == "array": args.append(arg.attrib["name"] + "_len : usize") lines.append( " v += length_array({});".format(arg.attrib["name"] + "_len") ) elif arg.attrib["type"] == "fd": pass else: raise NotImplementedError(arg.attrib) write("pub fn length_{}({}) -> usize {{".format(meth_name, ", ".join(args))) if lines: write(" let mut v = {};".format(base_len)) for l in lines: write(l) write(" v") else: write(" {}".format(base_len)) write("}") def write_method_write(meth_name, meth_num, method, write): """ Create a function to write the method to a buffer """ num_fds = 0 length_args = [] for arg in method: if arg.tag != "arg": continue if arg.attrib["type"] == "new_id": if "interface" not in arg.attrib: length_args.append(arg.attrib["name"] + "_iface_name") elif arg.attrib["type"] == "string": length_args.append(arg.attrib["name"]) elif arg.attrib["type"] == "array": length_args.append(arg.attrib["name"]) elif arg.attrib["type"] == "fd": num_fds += 1 elif arg.attrib["type"] in ("uint", "int", "object", "fixed"): pass else: raise NotImplementedError(arg.attrib) length_args = [x + ".len()" for x in length_args] args = ["dst: &mut &mut [u8]", "for_id: ObjId"] lines = [ " let l = length_{}({});".format(meth_name, ", ".join(length_args)), ] if num_fds > 0: args.append("tag_fds: bool") lines.append( " write_header(dst, for_id, l, {}, if tag_fds {{ {} }} else {{ 0 }});".format( meth_num, num_fds ) ) else: lines.append(" write_header(dst, for_id, l, {}, 0);".format(meth_num)) base_len = 8 for arg in method: if 
arg.tag != "arg": continue allow_null = "allow-null" in arg.attrib and arg.attrib["allow-null"] == "true" if arg.attrib["type"] == "object": args.append(arg.attrib["name"] + ": ObjId") lines.append(" write_obj(dst, {});".format(arg.attrib["name"])) elif arg.attrib["type"] == "new_id": assert not allow_null if "interface" in arg.attrib: args.append(arg.attrib["name"] + ": ObjId") lines.append(" write_obj(dst, {});".format(arg.attrib["name"])) else: args.append(arg.attrib["name"] + "_iface_name" + ": &[u8]") args.append(arg.attrib["name"] + "_version" + ": u32") args.append(arg.attrib["name"] + ": ObjId") lines.append( " write_string(dst, Some({}));".format( arg.attrib["name"] + "_iface_name" ) ) lines.append( " write_u32(dst, {});".format(arg.attrib["name"] + "_version") ) lines.append(" write_obj(dst, {});".format(arg.attrib["name"])) elif arg.attrib["type"] == "int": args.append(arg.attrib["name"] + ": i32") lines.append(" write_i32(dst, {});".format(arg.attrib["name"])) elif arg.attrib["type"] == "uint": args.append(arg.attrib["name"] + ": u32") lines.append(" write_u32(dst, {});".format(arg.attrib["name"])) elif arg.attrib["type"] == "fixed": args.append(arg.attrib["name"] + ": i32") lines.append(" write_i32(dst, {});".format(arg.attrib["name"])) elif arg.attrib["type"] == "string": if allow_null: args.append(arg.attrib["name"] + ": Option<&[u8]>") lines.append(" write_string(dst, {});".format(arg.attrib["name"])) else: args.append(arg.attrib["name"] + ": &[u8]") lines.append( " write_string(dst, Some({}));".format(arg.attrib["name"]) ) elif arg.attrib["type"] == "array": args.append(arg.attrib["name"] + ": &[u8]") lines.append(" write_array(dst, {});".format(arg.attrib["name"])) elif arg.attrib["type"] == "fd": pass else: raise NotImplementedError(arg.attrib) write("pub fn write_{}({}) {{".format(meth_name, ", ".join(args))) for l in lines: write(l) write("}") def write_method_parse(meth_name, method, write): """ Create a function to parse the method tail """ length_args = [] for arg in method: if arg.tag != "arg": continue if arg.attrib["type"] == "new_id": if "interface" not in arg.attrib: length_args.append(arg.attrib["name"] + "_iface_name") elif arg.attrib["type"] == "string": length_args.append(arg.attrib["name"]) elif arg.attrib["type"] == "array": length_args.append(arg.attrib["name"]) elif arg.attrib["type"] in ("fd", "uint", "int", "object", "fixed"): pass else: raise NotImplementedError(arg.attrib) sig = [] lines = [] ret = [] for i, arg in enumerate(method): if arg.tag != "arg": continue allow_null = "allow-null" in arg.attrib and arg.attrib["allow-null"] == "true" if arg.attrib["type"] == "object": lines.append(" let arg{} = parse_obj(&mut msg)?;".format(i)) sig.append("ObjId") ret.append("arg{}".format(i)) elif arg.attrib["type"] == "new_id": if "interface" in arg.attrib: lines.append(" let arg{} = parse_obj(&mut msg)?;".format(i)) ret.append("arg{}".format(i)) sig.append("ObjId") else: lines.append( " let arg{}_iface_name = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?;".format( i ) ) lines.append(" let arg{}_version = parse_u32(&mut msg)?;".format(i)) lines.append(" let arg{} = parse_obj(&mut msg)?;".format(i)) sig.append("&'a [u8]") sig.append("u32") sig.append("ObjId") ret.append("arg{}_iface_name".format(i)) ret.append("arg{}_version".format(i)) ret.append("arg{}".format(i)) elif arg.attrib["type"] in ("int", "fixed"): lines.append(" let arg{} = parse_i32(&mut msg)?;".format(i)) sig.append("i32") ret.append("arg{}".format(i)) elif arg.attrib["type"] == "uint": 
lines.append(" let arg{} = parse_u32(&mut msg)?;".format(i)) sig.append("u32") ret.append("arg{}".format(i)) elif arg.attrib["type"] == "string": if allow_null: lines.append(" let arg{} = parse_string(&mut msg)?;".format(i)) sig.append("Option<&'a [u8]>") else: lines.append( " let arg{} = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?;".format( i ) ) sig.append("&'a [u8]") ret.append("arg{}".format(i)) elif arg.attrib["type"] == "array": lines.append(" let arg{} = parse_array(&mut msg)?;".format(i)) sig.append("&'a [u8]") ret.append("arg{}".format(i)) elif arg.attrib["type"] == "fd": pass else: raise NotImplementedError(arg.attrib) tail_prefix = "" if not sig else "mut " paren = (lambda x: "(" + x + ")") if len(sig) != 1 else (lambda x: x) write( "pub fn parse_{}<'a>({}msg: &'a [u8]) -> Result<{}, &'static str> {{".format( meth_name, tail_prefix, paren(", ".join(sig)) ) ) if sig: write(" msg = msg.get(8..).ok_or(PARSE_ERROR)?;") for l in lines: write(l) write(" if !msg.is_empty() { return Err(PARSE_ERROR); }") else: write(" if msg.len() != 8 { return Err(PARSE_ERROR); }") write(" Ok(" + paren(", ".join(ret)) + ")") write("}") def write_enum(enum_name, enum_entries, enum_values, with_try, write): write("pub enum " + enum_name + " {") for i, (name, value) in enumerate(zip(enum_entries, enum_values)): write(" " + name + " = " + str(value) + ",") write("}") if with_try: write("impl TryFrom for " + enum_name + " {") write(" type Error = ();") write(" fn try_from(v: u32) -> Result {") write(" Ok(match v {") for i, (name, value) in enumerate(zip(enum_entries, enum_values)): write(" {} => {}::{},".format(value, enum_name, name)) write(" _ => return Err(()),") write(" })") write(" }") write("}") def process_interface(interface, uid, write): iface_name = interface.attrib["name"] iface_version = interface.attrib["version"] evts = [] reqs = [] for thing in interface: if thing.tag == "event" or thing.tag == "request": signature = get_signature(thing) destructor = "type" in thing.attrib and thing.attrib["type"] == "destructor" dst = evts if thing.tag == "event" else reqs meth_num = len(dst) name = thing.attrib["name"] dst.append((name, signature, destructor)) meth_name = ( ("evt" if thing.tag == "event" else "req") + "_" + iface_name + "_" + name ) if (iface_name, name) in handled_funcs: write_method_write(meth_name, meth_num, thing, write) write_method_length(meth_name, thing, write) write_method_parse(meth_name, thing, write) write( "pub const {} : MethodId = {}({});".format( "OPCODE_" + iface_name.upper() + "_" + name.upper(), "MethodId::Event" if thing.tag == "event" else "MethodId::Request", len(dst) - 1, ) ) handled_funcs.remove((iface_name, name)) elif thing.tag == "enum": enum_name = iface_name + "_" + thing.attrib["name"] names = [] values = [] for elt in thing: if elt.tag == "entry": name = unsnake(elt.attrib["name"]) if name.isnumeric(): name = "Item" + name names.append(name) values.append(elt.attrib["value"]) do_try = (iface_name, thing.attrib["name"]) in handled_enums if do_try: handled_enums.remove((iface_name, thing.attrib["name"])) write("#[derive(Debug,Clone,Copy,PartialEq,Eq)]") write_enum(unsnake(enum_name), names, values, do_try, write) write("const " + "DATA_" + iface_name.upper() + ": WaylandData = WaylandData {") write(' name: "' + iface_name + '",') if evts: write(" evts: &[") for name, sig, destructor in evts: write(" WaylandMethod {") write(' name: "' + name + '",') write(" sig: " + sig + ",") write(" destructor: " + str(destructor).lower() + ",") write(" },") write(" ],") else: 
write(" evts: &[],") if reqs: write(" reqs: &[") for name, sig, destructor in reqs: write(" WaylandMethod {") write(' name: "' + name + '",') write(" sig: " + sig + ",") write(" destructor: " + str(destructor).lower() + ",") write(" },") write(" ],") else: write(" reqs: &[],") write(" version: " + iface_version + ",") write("};") write( "pub const " + iface_name.upper() + ": &[u8] = DATA_" + iface_name.upper() + ".name.as_bytes();" ) if False: if evts: write_enum( unsnake(iface_name) + "EvtIDs", [unsnake(name) for name, _, _ in evts], list(range(len(evts))), False, write, ) if reqs: write_enum( unsnake(iface_name) + "ReqIDs", [unsnake(name) for name, _, _ in reqs], list(range(len(reqs))), False, write, ) return iface_name if __name__ == "__main__": protocols = sorted(os.listdir(path="protocols")) if not all(map(lambda x: x.endswith(".xml"), protocols)): print("Not all files in protocols/ are XML files:", protocols) quit() with open("src/wayland_gen.rs", "w") as output: def write(*x): print(*x, file=output) write(header) interfaces = [] uid = 0 for protocol_file in protocols: root = xml.etree.ElementTree.parse("protocols/" + protocol_file).getroot() for interface in root: if interface.tag == "interface": interfaces.append(process_interface(interface, uid, write)) uid += 1 write("#[repr(u8)]") write("#[derive(Debug,Clone,Copy)]") write_enum( "WaylandInterface", [unsnake(x) for x in sorted(interfaces)], list(range(len(interfaces))), True, write, ) write("pub const INTERFACE_TABLE : &[WaylandData] = &[") for i, intf in enumerate(sorted(interfaces)): write(" {},".format("DATA_" + intf.upper())) write("];") if handled_enums or handled_funcs: raise Exception("Unhandled: {} {}".format(handled_funcs, handled_enums)) subprocess.call(["rustfmt", "src/wayland_gen.rs"]) waypipe-v0.10.5/shaders/000077500000000000000000000000001507071711100150725ustar00rootroot00000000000000waypipe-v0.10.5/shaders/Cargo.lock000066400000000000000000000002371507071711100170010ustar00rootroot00000000000000# This file is automatically @generated by Cargo. # It is not intended for manual editing. version = 3 [[package]] name = "waypipe-shaders" version = "0.1.0" waypipe-v0.10.5/shaders/Cargo.toml000066400000000000000000000001361507071711100170220ustar00rootroot00000000000000[package] name = "waypipe-shaders" version = "0.1.0" edition = "2021" rust-version = "1.56.0" waypipe-v0.10.5/shaders/build.rs000066400000000000000000000032171507071711100165420ustar00rootroot00000000000000use std::env; use std::ffi::OsStr; use std::fmt::Write; use std::fs; use std::path::Path; use std::process::Command; fn main() { let paths = fs::read_dir("./").unwrap(); let mut shaders = Vec::new(); for p in paths { let q = p.unwrap(); if q.path().extension() == Some(OsStr::new("glsl")) { shaders.push(q.path()); } } /* No rerun-if directives -- these will track changes to * existing files, but not register any new shaders. 
*/ let compiler = "glslc"; /* If parallelization is ever needed, use jobserver */ shaders.sort(); let mut contents = String::new(); for shader in shaders { let args: &[&OsStr] = &[ OsStr::new("-O"), OsStr::new("-fshader-stage=compute"), shader.as_os_str(), OsStr::new("-o"), OsStr::new("-"), ]; let spirv = Command::new(compiler) .args(args) .output() .expect("Failed to compile file") .stdout; let s: &str = shader.file_stem().unwrap().to_str().unwrap(); write!( &mut contents, "pub const {}: &[u32] = &[\n", s.to_uppercase() ) .unwrap(); assert!(spirv.len() % 4 == 0); for w in spirv.chunks_exact(4) { let block = u32::from_ne_bytes(w.try_into().unwrap()); write!(&mut contents, " {:#010x},\n", block).unwrap(); } write!(&mut contents, "];\n").unwrap(); } let generated_path = Path::new(&env::var("OUT_DIR").unwrap()).join("shaders.rs"); fs::write(&generated_path, contents).unwrap(); } waypipe-v0.10.5/shaders/check.sh000077500000000000000000000005061507071711100165070ustar00rootroot00000000000000#!/bin/sh glslc -Werror -O -fshader-stage=compute nv12_img_to_rgb.glsl -o - >/dev/null glslc -Werror -O -fshader-stage=compute rgb_to_nv12_img.glsl -o - >/dev/null glslc -Werror -O -fshader-stage=compute rgb_to_yuv420_buf.glsl -o - >/dev/null glslc -Werror -O -fshader-stage=compute yuv420_buf_to_rgb.glsl -o - >/dev/null waypipe-v0.10.5/shaders/nv12_img_to_rgb.glsl000066400000000000000000000017051507071711100207360ustar00rootroot00000000000000#version 450 layout(binding = 0, rgba8) uniform writeonly image2D output_img; layout(binding = 1) uniform sampler2D input_y; layout(binding = 2) uniform sampler2D input_vu; layout(push_constant) uniform constants { mat3x4 ybr_to_rgb; } push; layout(local_size_x = 8, local_size_y = 8, local_size_z = 1) in; void main() { ivec2 brdim = textureSize(input_vu, 0); ivec2 ydim = textureSize(input_y, 0); ivec2 outdim = imageSize(output_img); /* input_vu has exactly half the size of input_y */ /* invocation x/y ranges over all pixels of output */ vec2 pos = min(vec2(gl_GlobalInvocationID.xy) + 0.5, vec2(ydim)); float y = texture(input_y, pos).r; vec2 br = texture(input_vu, pos / 2.0).gr; vec4 ybro = vec4(y, br.r, br.g, 1.0); vec3 rgb = transpose(push.ybr_to_rgb) * ybro; vec4 val = vec4(rgb, 1); ivec2 opos = ivec2(gl_GlobalInvocationID.xy); if (opos.x < outdim.x && opos.y < outdim.y) { imageStore(output_img, opos, val); } } waypipe-v0.10.5/shaders/rgb_to_nv12_img.glsl000066400000000000000000000024251507071711100207360ustar00rootroot00000000000000#version 450 layout(binding = 0) uniform sampler2D input_rgb; layout(binding = 1, r8) uniform writeonly image2D output_y; layout(binding = 2, rg8) uniform writeonly image2D output_vu; layout(push_constant) uniform constants { mat3x4 rgb_to_yrb; } push; /* Each individual task fills 2x2 pixels. 
*/ /* note: AMD subgroupSize is 64, so 8x8 is needed to fully use it */ layout(local_size_x = 8, local_size_y = 8, local_size_z = 1) in; void main() { ivec2 brdim = imageSize(output_vu); ivec2 ydim = imageSize(output_y); ivec2 rgbdim = textureSize(input_rgb, 0); vec4 avg = vec4(0.); for (int i = 0; i < 2; i++) { for (int j = 0; j < 2; j++) { ivec2 pos = ivec2(2 * gl_GlobalInvocationID.x + i, 2 * gl_GlobalInvocationID.y + j); vec2 sample_pos = vec2(pos) + 0.5; vec4 rgbo = vec4(texture(input_rgb, sample_pos).rgb, 1.0); float y = (transpose(push.rgb_to_yrb) * rgbo).r; imageStore(output_y, pos, vec4(y, 1., 1., 1.)); avg += rgbo; } } ivec2 pos = ivec2(gl_GlobalInvocationID.xy); vec2 sample_pos = vec2(2 * pos + 0.5) + 0.5; // vec4 rgbo = vec4(texture(input_rgb, sample_pos).rgb, 1.0); vec4 rgbo = avg / 4; vec2 vu = (transpose(push.rgb_to_yrb) * rgbo).bg; imageStore(output_vu, pos, vec4(vu, 1., 1.)); } waypipe-v0.10.5/shaders/rgb_to_yuv420_buf.glsl000066400000000000000000000030561507071711100212220ustar00rootroot00000000000000#version 450 layout(binding = 0) uniform sampler2D input_rgb; layout(binding = 1, r8) writeonly uniform imageBuffer output_y; layout(binding = 2, r8) writeonly uniform imageBuffer output_u; layout(binding = 3, r8) writeonly uniform imageBuffer output_v; layout(push_constant) uniform constants { mat3x4 rgb_to_yrb; int stride_y; int stride_u; int stride_v; } push; /* Each individual task fills 2x2 pixels. */ /* note: AMD subgroupSize is 64, so 8x8 is needed to fully use it */ layout(local_size_x = 8, local_size_y = 8, local_size_z = 1) in; void main() { ivec2 rgbdim = textureSize(input_rgb, 0); vec4 avg = vec4(0.); for (int i = 0; i < 2; i++) { for (int j = 0; j < 2; j++) { ivec2 pos = ivec2(2 * gl_GlobalInvocationID.x + i, 2 * gl_GlobalInvocationID.y + j); vec2 sample_pos = vec2(pos) + 0.5; vec4 rgbo = vec4(texture(input_rgb, sample_pos).rgb, 1.0); float y = (transpose(push.rgb_to_yrb) * rgbo).r; int store_pos = pos.y * push.stride_y + pos.x; imageStore(output_y, store_pos, vec4(y, 1., 1., 1.)); avg += rgbo; } } ivec2 pos = ivec2(gl_GlobalInvocationID.xy); vec2 sample_pos = vec2(2 * pos + 0.5) + 0.5; // vec4 rgbo = vec4(texture(input_rgb, sample_pos).rgb, 1.0); vec4 rgbo = avg / 4; vec2 vu = (transpose(push.rgb_to_yrb) * rgbo).bg; int store_pos_v = pos.y * push.stride_v + pos.x; imageStore(output_v, store_pos_v, vec4(vu.g, 1., 1., 1.)); int store_pos_u = pos.y * push.stride_u + pos.x; imageStore(output_u, store_pos_u, vec4(vu.r, 1., 1., 1.)); } waypipe-v0.10.5/shaders/src/000077500000000000000000000000001507071711100156615ustar00rootroot00000000000000waypipe-v0.10.5/shaders/src/lib.rs000066400000000000000000000000631507071711100167740ustar00rootroot00000000000000include!(concat!(env!("OUT_DIR"), "/shaders.rs")); waypipe-v0.10.5/shaders/yuv420_buf_to_rgb.glsl000066400000000000000000000022301507071711100212130ustar00rootroot00000000000000#version 450 layout(binding = 0, rgba8) uniform writeonly image2D output_img; layout(binding = 1) uniform textureBuffer input_y; layout(binding = 2) uniform textureBuffer input_u; layout(binding = 3) uniform textureBuffer input_v; layout(push_constant) uniform constants { mat3x4 ybr_to_rgb; int stride_y; int stride_u; int stride_v; } push; layout(local_size_x = 8, local_size_y = 8, local_size_z = 1) in; void main() { ivec2 outdim = imageSize(output_img); /* invocation x/y ranges over all pixels of output */ ivec2 pos = ivec2(gl_GlobalInvocationID.xy); // TODO: any better strategy than doubling U/V coordinates? 
int y_pos = pos.y * push.stride_y + pos.x; int u_pos = (pos.y / 2) * push.stride_u + (pos.x / 2); int v_pos = (pos.y / 2) * push.stride_v + (pos.x / 2); float y = texelFetch(input_y, y_pos).r; float b = texelFetch(input_v, v_pos).r; float r = texelFetch(input_u, u_pos).r; vec4 ybro = vec4(y, b, r, 1.0); vec3 rgb = transpose(push.ybr_to_rgb) * ybro; vec4 val = vec4(rgb, 1); ivec2 opos = ivec2(gl_GlobalInvocationID.xy); if (opos.x < outdim.x && opos.y < outdim.y) { imageStore(output_img, opos, val); } } waypipe-v0.10.5/src/000077500000000000000000000000001507071711100142305ustar00rootroot00000000000000waypipe-v0.10.5/src/bench.rs000066400000000000000000000505561507071711100156700ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-3.0-or-later */ /*! `waypipe bench` implementation */ use crate::compress::*; use crate::kernel::{apply_diff_one, construct_diff_segment_two}; use crate::util::*; use crate::Compression; use crate::Options; use std::time::Instant; #[derive(Debug, PartialEq, Eq)] enum DiffPattern { On, IdealOn, Off, MemcmpOff, MinimumOff, Alternating100, // very short cycle, should be papered over Alternating1K, // 2kb ~~ 512 pixels Alternating2K, // 4kb cycle ~ 1024 pixels, } fn fill_alternating(rng: &mut BadRng, data: &mut [u8], span_min: usize, span_max: usize) { let mut i = 0; let mut change = false; while i < data.len() { let jump = span_min + rng.next_usize(1 + span_max - span_min); let j = std::cmp::min(data.len(), i + jump); data[i..j].fill(change as u8); i = j; change = !change; } } #[cfg(any(target_arch = "x86_64", target_arch = "x86"))] #[target_feature(enable = "avx2")] unsafe fn read_replace_write_avx2(src: &[u8], rdwrite: &mut [u8], dst: &mut [u8]) -> u32 { #[cfg(target_arch = "x86")] use std::arch::x86::*; #[cfg(target_arch = "x86_64")] use std::arch::x86_64::*; assert!(src.as_ptr() as usize % 64 == 0); assert!(rdwrite.as_ptr() as usize % 64 == 0); assert!(dst.as_ptr() as usize % 4 == 0); assert!(src.len() == rdwrite.len()); assert!(dst.len() >= src.len()); if true { let ones = _mm256_set1_epi64x(u64::MAX as i64); const UNROLL: usize = 4; for k in 0..(src.len() / (32 * UNROLL)) { // Widely unrolled let mut xs = [_mm256_undefined_si256(); UNROLL]; let mut ys = [_mm256_undefined_si256(); UNROLL]; for j in 0..UNROLL { let i = UNROLL * k + j; xs[j] = _mm256_load_si256(src.as_ptr().add(i * 32) as *const _); ys[j] = _mm256_load_si256(rdwrite.as_ptr().add(i * 32) as *const _); } let diff0 = _mm256_cmpeq_epi32(xs[0], ys[0]); let diff1 = _mm256_cmpeq_epi32(xs[1], ys[1]); let diff2 = _mm256_cmpeq_epi32(xs[2], ys[2]); let diff3 = _mm256_cmpeq_epi32(xs[3], ys[3]); let sum0 = _mm256_add_epi32(diff0, diff1); let sum2 = _mm256_add_epi32(diff2, diff3); let sum = _mm256_add_epi32(sum0, sum2); if _mm256_testc_si256(sum, ones) != 0 { // Introduce early exit option to consume reads and inhibit memcpy optimization panic!(); } for (j, x) in xs.iter().enumerate().take(UNROLL) { let i = UNROLL * k + j; _mm256_store_si256(rdwrite.as_mut_ptr().add(i * 32) as *mut _, *x); } for (j, x) in xs.iter().enumerate().take(UNROLL) { let i = UNROLL * k + j; _mm256_storeu_si256(dst.as_mut_ptr().add(i * 32) as *mut _, *x); } } } if false { let ones = _mm256_set1_epi64x(u64::MAX as i64); for i in 0..(src.len() / 32) { let x = _mm256_load_si256(src.as_ptr().add(i * 32) as *const _); let y = _mm256_load_si256(rdwrite.as_ptr().add(i * 32) as *const _); let s = _mm256_cmpeq_epi32(x, y); _mm256_store_si256(rdwrite.as_mut_ptr().add(i * 32) as *mut _, x); _mm256_storeu_si256(dst.as_mut_ptr().add(i * 32) as 
*mut _, x); if i % 4 == 0 && _mm256_testc_si256(s, ones) != 0 { /* Should never happen; this test should prevent memcpy optimization */ panic!(); } } } if false { for i in 0..(src.len() / 32) { let x = _mm256_load_si256(src.as_ptr().add(i * 32) as *const _); let y = _mm256_load_si256(rdwrite.as_ptr().add(i * 32) as *const _); let s = _mm256_add_epi32(x, y); /* Store "s" instead of the correct "x", because otherwise the compiler will * mis-optimize by extracting a memcpy and increase total memory bandwidth used. */ _mm256_store_si256(rdwrite.as_mut_ptr().add(i * 32) as *mut _, s); _mm256_storeu_si256(dst.as_mut_ptr().add(i * 32) as *mut _, s); } } src.len() as u32 } /* Test function: count the number of differences between src1 and rdwrite, and then * copy src1 to rdwrite and dst2. The performance of this, if properly optimized, should be * a lower bound on diff construction performance in the "everything changed" scenario. */ fn read_replace_write(src: &[u8], rdwrite: &mut [u8], dst: &mut [u8]) -> u32 { const CHUNK_SIZE: usize = 256; let mut any_nondiff = false; #[cfg(any(target_arch = "x86_64", target_arch = "x86"))] if is_x86_feature_detected!("avx2") { return unsafe { read_replace_write_avx2(src, rdwrite, dst) }; } for (bsrc, (brdwr, bdst)) in std::iter::zip( src.chunks_exact(CHUNK_SIZE), std::iter::zip( rdwrite.chunks_exact_mut(CHUNK_SIZE), dst.chunks_exact_mut(CHUNK_SIZE), ), ) { any_nondiff |= bsrc.cmp(brdwr).is_eq(); brdwr.copy_from_slice(bsrc); bdst.copy_from_slice(bsrc); } any_nondiff as u32 } #[cfg(any(target_arch = "x86_64", target_arch = "x86"))] #[target_feature(enable = "avx2")] unsafe fn read_once_avx2(src: &[u8]) -> u32 { #[cfg(target_arch = "x86")] use std::arch::x86::*; #[cfg(target_arch = "x86_64")] use std::arch::x86_64::*; assert!(src.as_ptr() as usize % 64 == 0); let mut xs = [_mm256_set1_epi8(0); 4]; for i in 0..(src.len() / 128) { for (j, x) in xs.iter_mut().enumerate() { let k = 4 * i + j; let y = _mm256_load_si256(src.as_ptr().add(k * 32) as *const _); *x = _mm256_max_epi32(*x, y); // max epi32 may be faster than epi8 } } let mut nonzero = false; for x in xs { let ones = _mm256_set1_epi8(0xff_u8 as i8); if _mm256_testz_si256(x, ones) != 0 { nonzero = true; } } nonzero as u32 } /* Test function: how long does it take to just _read_ the source, and compute something trivial * (like whether it is all-zero); only run on all 0 input */ fn read_once(src: &[u8]) -> u32 { #[cfg(any(target_arch = "x86_64", target_arch = "x86"))] if is_x86_feature_detected!("avx2") { return unsafe { read_once_avx2(src) }; } src.contains(&1) as u32 } fn estimate_diff_speed(pattern: DiffPattern) { let length: usize = 1 << 20; let mut reference_arr = AlignedArray::new(length); let mut source_arr = AlignedArray::new(length); let mut baseline_arr = AlignedArray::new(length); let baseline = baseline_arr.get_mut(); let reference = reference_arr.get_mut(); let source = source_arr.get_mut(); baseline.fill(0); reference.fill(0); let mut diff = vec![0; length + 8]; let mut rng = BadRng { state: 0x1 }; match pattern { DiffPattern::IdealOn | DiffPattern::On => { /* disagree with baseline everwhere */ source.fill(1); } DiffPattern::MemcmpOff | DiffPattern::Off | DiffPattern::MinimumOff => { /* match baseline everwhere */ source.fill(0); } DiffPattern::Alternating100 => { fill_alternating(&mut rng, &mut source[..], 90, 110); } DiffPattern::Alternating1K => { fill_alternating(&mut rng, &mut source[..], 900, 1100); } DiffPattern::Alternating2K => { fill_alternating(&mut rng, &mut source[..], 1800, 2200); } 
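/* Added note: the Alternating patterns emulate partial damage. fill_alternating writes runs of changed and unchanged bytes whose lengths vary randomly around the nominal span, so diff construction must repeatedly open and close copy segments. */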
} let ntrials = 1600; let mut elapsed_times = Vec::<f64>::new(); let mut last_diff_len = None; for i in 0..ntrials { /* Design: construct diffs, periodically diffing the reference against the source and baseline patterns */ let test_src: &[u8] = std::hint::black_box(if i % 2 == 0 { source } else { baseline }); let start = Instant::now(); let diff_len: u32 = if pattern == DiffPattern::MemcmpOff { (test_src.cmp(reference) as i32) as u32 } else if pattern == DiffPattern::MinimumOff { read_once(test_src) } else if pattern == DiffPattern::IdealOn { read_replace_write(test_src, &mut reference[..], &mut diff[..]) } else { construct_diff_segment_two(&mut diff[..], test_src, &mut reference[..], 0, 32) }; let end = Instant::now(); if let Some(d) = last_diff_len { /* Light sanity check: diff length should only depend on the pattern of differences, not on the * specific contents of the data */ assert!(d == diff_len); } elapsed_times.push(end.duration_since(start).as_secs_f64()); last_diff_len = Some(diff_len); } let diff_len = last_diff_len.unwrap(); /* Skip the first run (may not be in cache) */ let hot_times = &elapsed_times[1..]; // minor issue: not stable to outliers or numerical issues, there are better algorithms let mean = hot_times.iter().sum::<f64>() / hot_times.len() as f64; let sample_var = hot_times .iter() .map(|x| (*x - mean) * (*x - mean)) .sum::<f64>() / ((hot_times.len() - 1) as f64); let min_time = hot_times.iter().fold(f64::INFINITY, |x, y| x.min(*y)); let max_time = hot_times.iter().fold(-f64::INFINITY, |x, y| x.max(*y)); let f: f64 = 1e9 / (length as f64); println!( "{:>8?} (diff len={:>8}): {:.4} +/- {:.4} ns/byte, range [{:.4},{:.3}] ns/byte", pattern, diff_len, mean * f, sample_var.sqrt() * f, min_time * f, max_time * f ); } fn estimate_diff_compress_speed( length: usize, comp: Compression, text_like: bool, ) -> ((f32, f32), (f32, f32), f32) { assert!(length % 64 == 0); let nshards = std::cmp::max(3, (length / 64) / (1 << 12)); let mut src_arr = AlignedArray::new(length); let mut src_mirror_arr = AlignedArray::new(length); let mut diff_arr = AlignedArray::new(length + 8 * nshards); let mut dst_arr = AlignedArray::new(length); let mut dst_mirror_arr = AlignedArray::new(length); let src = src_arr.get_mut(); let src_mirror = src_mirror_arr.get_mut(); let diff = diff_arr.get_mut(); let dst = dst_arr.get_mut(); let dst_mirror = dst_mirror_arr.get_mut(); let mut rng = BadRng { state: 1 }; if text_like { let mut i = 0; let mut change = false; while i < length { let jump = 4096 + rng.next_usize(1 + 4096); let j = std::cmp::min(length, i + jump); if change { for s in &mut src[i..j] { *s = (0x7f * rng.next_usize(3)) as u8; } } i = j; change = !change; } } else { let mut k: usize = 0; for i in 0..(length / 4) { if i % 1024 == 0 { k = rng.next_usize(1 << 24); } else { k += 4; } let noise = rng.next_usize(2); src[4 * i] = (k + noise) as u8; src[4 * i + 1] = (k >> 8) as u8; src[4 * i + 2] = (k >> 16) as u8; src[4 * i + 3] = 0xff; } } /* Operate on the large region chunk-by-chunk -- this provides crude statistics * to estimate uncertainty for timing and makes it possible to stop early if * compression is very slow */ let mut diff_start = 0; let mut data = Vec::new(); let total_start = Instant::now(); for i in 0..nshards { let istart = 64 * split_interval(0, (length / 64) as u32, nshards as u32, i as u32) as usize; let iend = 64 * split_interval(0, (length / 64) as u32, nshards as u32, (i + 1) as u32) as usize; let start = Instant::now(); let diff_len = construct_diff_segment_two( &mut diff[diff_start..],
&src[istart..iend], &mut src_mirror[istart..iend], istart as u32, 32, ); let shard_diff = &diff[diff_start..diff_start + diff_len as usize]; let (ndiff, mid, comp_len): (Vec<u8>, Instant, usize) = match comp { Compression::None => { let x = std::hint::black_box(Vec::from(shard_diff)); (x, Instant::now(), diff_len as usize) } Compression::Lz4(lvl) => { let mut ctx = lz4_make_cctx().unwrap(); let comp = std::hint::black_box(lz4_compress_to_vec(&mut ctx, shard_diff, lvl, 0, 0)); let t = Instant::now(); ( lz4_decompress_to_vec(&comp, diff_len as usize).unwrap(), t, comp.len(), ) } Compression::Zstd(lvl) => { let mut cctx = zstd_make_cctx().unwrap(); let mut dctx = zstd_make_dctx().unwrap(); let comp = std::hint::black_box(zstd_compress_to_vec(&mut cctx, shard_diff, lvl, 0, 0)); let t = Instant::now(); ( zstd_decompress_to_vec(&mut dctx, &comp, diff_len as usize).unwrap(), t, comp.len(), ) } }; apply_diff_one(&ndiff, 0, 0, dst).unwrap(); apply_diff_one(&ndiff, 0, 0, dst_mirror).unwrap(); diff_start += diff_len as usize; let end = Instant::now(); let time_diff = mid.duration_since(start).as_secs_f32(); let time_apply = end.duration_since(mid).as_secs_f32(); data.push((time_diff, time_apply, iend - istart, comp_len)); /* If replicating the buffer takes a long time, then stop; cache * effects will be negligible compared to compression/decompression time */ if end.duration_since(total_start).as_secs_f32() >= 1.0 && data.len() >= 3 { break; } } let copy_end = 64 * split_interval(0, (length / 64) as u32, nshards as u32, data.len() as u32) as usize; std::hint::black_box(dst); assert!(std::hint::black_box(dst_mirror)[..copy_end] == src[..copy_end]); let mut comp_len = 0; let mut proc_len = 0; let mut speeds_diff: Vec<f32> = Vec::new(); let mut speeds_apply: Vec<f32> = Vec::new(); for (time_diff, time_apply, input_len, output_len) in data.iter() { proc_len += input_len; comp_len += output_len; /* The shards are almost equal in length */ speeds_diff.push((*input_len as f32) / time_diff); speeds_apply.push((*input_len as f32) / time_apply); } let ratio = (comp_len as f32) / (proc_len as f32); let n = data.len(); let diff_speed: f32 = speeds_diff.iter().sum::<f32>() / (n as f32); let diff_sstdev2: f32 = speeds_diff .iter() .map(|x| (x - diff_speed) * (x - diff_speed)) .sum::<f32>() / ((n - 1) as f32); let apply_speed: f32 = speeds_apply.iter().sum::<f32>() / (n as f32); let apply_sstdev2: f32 = speeds_apply .iter() .map(|x| (x - apply_speed) * (x - apply_speed)) .sum::<f32>() / ((n - 1) as f32); ( (diff_speed, diff_sstdev2.sqrt()), (apply_speed, apply_sstdev2.sqrt()), ratio, ) } fn run_diff_speed_benchmark() { println!("Diff pattern speed for 2^20 bytes, using sample stdev"); estimate_diff_speed(DiffPattern::On); /* NOTE: these are _not_ perfect benchmarks, and should not be used for comparisons; * thermal or other throttling makes results after the first run slower. Thus, run 'On' twice.
*/ estimate_diff_speed(DiffPattern::On); estimate_diff_speed(DiffPattern::IdealOn); estimate_diff_speed(DiffPattern::Off); estimate_diff_speed(DiffPattern::MemcmpOff); estimate_diff_speed(DiffPattern::MinimumOff); estimate_diff_speed(DiffPattern::Alternating100); estimate_diff_speed(DiffPattern::Alternating1K); estimate_diff_speed(DiffPattern::Alternating2K); } pub fn run_benchmark(opts: &Options, fast_mode: bool) -> Result<(), String> { if opts.debug { run_diff_speed_benchmark(); } let mut cvs = Vec::<Compression>::new(); cvs.push(Compression::None); if cfg!(feature = "lz4") { for i in -10..=12 { cvs.push(Compression::Lz4(i)); } } else { println!("Waypipe was not built with lz4 compression/decompression support, skipping measurements with lz4"); } if cfg!(feature = "zstd") { for i in -10..=22 { cvs.push(Compression::Zstd(i)); } } else { println!("Waypipe was not built with zstd compression/decompression support, skipping measurements with zstd"); } let mut text_results = Vec::new(); let mut img_results = Vec::new(); /* A long test length is useful for realism (to avoid having *all data fit into cache), but makes * the test slower to run, so no repetitions will be done. */ let test_size = if fast_mode { 1 << 16 } else { 1 << 22 }; println!("Measured (diff+compress,decompress+apply) speeds and compression ratios"); println!("(single-threaded, unpipelined, quite artificial, single measurement)"); for c in cvs.iter() { let (textspeed_diff, textspeed_apply, textratio) = estimate_diff_compress_speed(test_size, *c, true); let (imgspeed_diff, imgspeed_apply, imgratio) = estimate_diff_compress_speed(test_size, *c, false); let max_pad = &[' '; 8]; let padding = max_pad[..8 - c.to_string().len()] .iter() .collect::<String>(); println!( "{}:{} text-like ({:.2e}±{:.1}%,{:.2e}±{:.1}%) bytes/sec, ratio {:.3} image-like ({:.2e}±{:.1}%,{:.2e}±{:.1}%) bytes/sec, ratio {:.3}", c, padding, textspeed_diff.0, textspeed_diff.1 / textspeed_diff.0, textspeed_apply.0, textspeed_apply.1 / textspeed_apply.0, textratio, imgspeed_diff.0, imgspeed_diff.1 / imgspeed_diff.0, imgspeed_apply.0, imgspeed_apply.1 / imgspeed_apply.0, imgratio ); text_results.push((textspeed_diff.0, textspeed_apply.0, textratio)); img_results.push((imgspeed_diff.0, imgspeed_apply.0, imgratio)); } let nthreads = if opts.threads == 0 { std::cmp::max(1, std::thread::available_parallelism().unwrap().get() / 2) } else { opts.threads as usize }; println!( "With {} threads, estimated time for a 32 MB (4k) image transfer, assuming:", nthreads ); println!("- perfect utilization and pipelining; no buffering, transfer latency or jitter"); println!("- equally fast local and remote computers"); let mbps_opts = [ 1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0, 200.0, 500.0, 1000.0, 2000.0, 5000.0, 10000.0, 20000.0, ]; let test_size = (1 << 25) as f32; for (typ, results) in &[("text-like", &text_results), ("image-like", &img_results)] { for m in mbps_opts { let bandwidth = m * 1e6; /* optimal choice: */ let mut best = Compression::None; let mut best_time = f32::INFINITY; for (c, (speed_diff, speed_apply, ratio)) in cvs.iter().zip(results.iter()) { let full_speed_diff = speed_diff * (nthreads as f32); let full_speed_apply = speed_apply * (nthreads as f32); let transfer_speed = bandwidth / ratio; let processing_speed = transfer_speed.min(full_speed_apply).min(full_speed_diff); let test_time = test_size / processing_speed; let test_time_unpipelined = test_size / full_speed_diff + test_size / transfer_speed + test_size / full_speed_apply; if opts.debug { println!( "comp={}, bandwidth={:e}
bytes/sec: estimated {} msec (unpipelined: {} msec)", c, bandwidth, 1e3 * test_time, 1e3 * test_time_unpipelined ); } if test_time < best_time { best_time = test_time; best = *c; } } println!( "bandwidth={:e} bytes/sec, {} suggested comp={} with: estimated {} msec", bandwidth, typ, best, 1e3 * best_time ); } } Ok(()) } waypipe-v0.10.5/src/compress.rs000066400000000000000000000313001507071711100164300ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-3.0-or-later */ /*! Safe LZ4 and ZSTD compression wrappers */ use core::ffi::{c_char, c_void}; #[cfg(feature = "lz4")] use waypipe_lz4_wrapper::*; #[cfg(feature = "zstd")] use waypipe_zstd_wrapper::*; pub struct LZ4CCtx { #[cfg(feature = "lz4")] state: *mut u8, } pub struct ZstdCCtx { #[cfg(feature = "zstd")] ctx: *mut ZSTD_CCtx, } pub struct ZstdDCtx { #[cfg(feature = "zstd")] ctx: *mut ZSTD_DCtx, } #[cfg(feature = "zstd")] pub fn zstd_make_cctx() -> Option<ZstdCCtx> { unsafe { // SAFETY: ZSTD_createCCtx is thread-safe let x = ZSTD_createCCtx(); if x.is_null() { return None; } Some(ZstdCCtx { ctx: x }) } } #[cfg(not(feature = "zstd"))] pub fn zstd_make_cctx() -> Option<ZstdCCtx> { unreachable!(); } #[cfg(feature = "zstd")] pub fn zstd_make_dctx() -> Option<ZstdDCtx> { unsafe { // SAFETY: ZSTD_createDCtx is thread-safe let x = ZSTD_createDCtx(); if x.is_null() { return None; } Some(ZstdDCtx { ctx: x }) } } #[cfg(not(feature = "zstd"))] pub fn zstd_make_dctx() -> Option<ZstdDCtx> { unreachable!(); } #[cfg(feature = "zstd")] impl Drop for ZstdCCtx { fn drop(&mut self) { unsafe { // SAFETY: ZSTD_freeCCtx is thread-safe, operates // on non-null pointer made by ZSTD_createCCtx ZSTD_freeCCtx(self.ctx); } } } #[cfg(feature = "zstd")] impl Drop for ZstdDCtx { fn drop(&mut self) { unsafe { // SAFETY: ZSTD_freeDCtx is thread-safe, operates // on non-null pointer made by ZSTD_createDCtx ZSTD_freeDCtx(self.ctx); } } } #[cfg(feature = "lz4")] pub fn lz4_make_cctx() -> Option<LZ4CCtx> { unsafe { let sz = std::cmp::max(LZ4_sizeofState(), LZ4_sizeofStateHC()) as usize; assert!(sz > 0); // LZ4_compress_extState and LZ4_compress_HC_extStateHC // require that layout is 8-aligned let layout = std::alloc::Layout::from_size_align(sz, 8).unwrap(); // SAFETY: layout size is verified to be > 0 let data = std::alloc::alloc(layout); if data.is_null() { return None; } Some(LZ4CCtx { state: data }) } } #[cfg(not(feature = "lz4"))] pub fn lz4_make_cctx() -> Option<LZ4CCtx> { unreachable!(); } #[cfg(feature = "lz4")] impl Drop for LZ4CCtx { fn drop(&mut self) { unsafe { let sz = std::cmp::max(LZ4_sizeofState(), LZ4_sizeofStateHC()) as usize; let layout = std::alloc::Layout::from_size_align(sz, 8).unwrap(); // SAFETY: self.state only set in lz4_make_cctx, which uses the same layout // because LZ4_sizeofState / LZ4_sizeofStateHC always return same result std::alloc::dealloc(self.state, layout); } } } /* Create a vector containing the compressed input, preceded by pad_pre zeros, and followed by pad_post zeros */ // typically used with pad_pre = 16, pad_post = 4, with Vec::truncate called after (which does not reallocate) #[cfg(feature = "zstd")] pub fn zstd_compress_to_vec( ctx: &mut ZstdCCtx, input: &[u8], level: i8, pad_pre: usize, pad_post: usize, ) -> Vec<u8> { let mut v = Vec::new(); unsafe { let max_space: usize = ZSTD_compressBound(input.len()); // Compute required space used without overflow let req_space = max_space .checked_add(pad_pre) .unwrap() .checked_add(pad_post) .unwrap(); assert!(req_space <= isize::MAX as usize); v.reserve_exact(req_space); // SAFETY: function checks inputs for validity, ctx.ctx is non-null
from ZstdCCtx construction let ret = ZSTD_CCtx_setParameter( ctx.ctx, ZSTD_cParameter_ZSTD_c_compressionLevel, level as i32, ); assert!( ZSTD_isError(ret) == 0, "Failed to set Zstd CCtx compression level" ); let dst: *mut u8 = v.as_mut_ptr(); // SAFETY: v has reserved reserve_exact > pad_pre bytes, has aligned plain data contents std::ptr::write_bytes(dst, 0, pad_pre); // SAFETY: ctx.ctx is not null; ZSTD_compress2 should never write outside dst[..max_space], // and pad_pre + max_space is <= req_space so all input regions are allocated let sz = ZSTD_compress2( ctx.ctx, dst.add(pad_pre) as *mut c_void, max_space, input.as_ptr() as *const c_void, input.len(), ); assert!(ZSTD_isError(sz) == 0, "Failed to compress with Zstd"); assert!(sz <= max_space); // SAFETY: dst has aligned plain data contents, and written interval is in range // // (because pad_pre + sz + pad_post <= pad_pre + max_space + pad_post = req_space) std::ptr::write_bytes(dst.add(pad_pre + sz), 0, pad_post); // SAFETY: ZSTD_compress2 wrote as many bytes as its return value indicated v.set_len(sz + pad_pre + pad_post); } v } #[cfg(not(feature = "zstd"))] pub fn zstd_compress_to_vec( ctx: &mut ZstdCCtx, input: &[u8], level: i8, pad_pre: usize, pad_post: usize, ) -> Vec<u8> { unreachable!(); } /* Returns None if the input does not decompress to exactly uncomp_len, or if input/output is too large */ #[cfg(feature = "zstd")] pub fn zstd_decompress_to_vec( ctx: &mut ZstdDCtx, input: &[u8], uncomp_len: usize, ) -> Option<Vec<u8>> { let mut v = Vec::new(); unsafe { // SAFETY: iff ZSTD_decompressDCtx succeeds, all of dst will be overwritten. v.reserve_exact // ensures `uncomp_len` bytes are available in v, and ZSTD_decompressDCtx should not read or // write outside these bounds, or read outside the input. There is no alignment requirement v.reserve_exact(uncomp_len); let ndecomp = ZSTD_decompressDCtx( ctx.ctx, v.as_mut_ptr() as *mut c_void, uncomp_len, input.as_ptr() as *const c_void, input.len(), ); if ndecomp != uncomp_len { return None; } // SAFETY: all of dst[..uncomp_len] has been written to by ZSTD_decompressDCtx v.set_len(uncomp_len); } Some(v) } #[cfg(not(feature = "zstd"))] pub fn zstd_decompress_to_vec( ctx: &mut ZstdDCtx, input: &[u8], uncomp_len: usize, ) -> Option<Vec<u8>> { unreachable!(); } #[cfg(feature = "zstd")] pub fn zstd_decompress_to_slice(ctx: &mut ZstdDCtx, input: &[u8], dst: &mut [u8]) -> Option<()> { unsafe { // SAFETY: iff ZSTD_decompressDCtx succeeds, all of dst will be overwritten; ZSTD_decompressDCtx // should never write outside `dst` vector allocated by v, or read outside the input.
There is // no alignment requirement let ndecomp = ZSTD_decompressDCtx( ctx.ctx, dst.as_mut_ptr() as *mut c_void, dst.len(), input.as_ptr() as *const c_void, input.len(), ); if ndecomp != dst.len() { return None; } } Some(()) } #[cfg(not(feature = "zstd"))] pub fn zstd_decompress_to_slice(ctx: &mut ZstdDCtx, input: &[u8], dst: &mut [u8]) -> Option<()> { unreachable!(); } #[cfg(feature = "lz4")] pub fn lz4_compress_to_vec( ctx: &mut LZ4CCtx, input: &[u8], level: i8, pad_pre: usize, pad_post: usize, ) -> Vec<u8> { let mut v = Vec::new(); unsafe { let max_space: i32 = LZ4_compressBound(input.len().try_into().unwrap()); let req_space = TryInto::<usize>::try_into(max_space) .unwrap() .checked_add(pad_pre) .unwrap() .checked_add(pad_post) .unwrap(); assert!(req_space < isize::MAX as usize); v.reserve_exact(req_space as usize); let dst: *mut u8 = v.as_mut_ptr(); // SAFETY: the req_space >= pad_pre bytes written here have been allocated; // dst is aligned and contents valid as data is u8 std::ptr::write_bytes(dst, 0, pad_pre); // SAFETY: Same in both cases. ctx.state is not null, is 8-aligned, and was made at // least as large as required for both LZ4/LZ4HC. The LZ4_compress functions do // not write outside their provided intervals, and dst[pad_pre..pad_pre+max_space] // is contained in dst[..req_space] so all output regions are allocated and pointer // calculations are in bounds let sz: i32 = if level <= 0 { // todo: currently input values <= 1 are replaced with 1, so level=0,level=1 are equivalent // waypipe-c has the same behavior LZ4_compress_fast_extState( ctx.state as *mut c_void, input.as_ptr() as *const c_char, dst.add(pad_pre) as *mut c_char, input.len().try_into().unwrap(), max_space, -(level as i32), ) } else { LZ4_compress_HC_extStateHC( ctx.state as *mut c_void, input.as_ptr() as *const c_char, dst.add(pad_pre) as *mut c_char, input.len().try_into().unwrap(), max_space, level as i32, ) }; assert!(sz >= 0 && sz <= max_space, "Failed to compress with LZ4"); let usz = sz as usize; // SAFETY: the region up to pad_pre + usz + pad_post has been allocated // dst is aligned and contents valid as data is u8 std::ptr::write_bytes(dst.add(pad_pre + usz), 0, pad_post); // SAFETY: usz + pad_pre + pad_post < req_space, and the write_bytes/LZ4_compress // functions have written to every byte v.set_len(usz + pad_pre + pad_post); } v } #[cfg(not(feature = "lz4"))] pub fn lz4_compress_to_vec( ctx: &mut LZ4CCtx, input: &[u8], level: i8, pad_pre: usize, pad_post: usize, ) -> Vec<u8> { unreachable!(); } #[cfg(feature = "lz4")] pub fn lz4_decompress_to_vec(input: &[u8], uncomp_len: usize) -> Option<Vec<u8>> { let mut v = Vec::new(); let ilen: i32 = input.len().try_into().ok()?; let olen: i32 = uncomp_len.try_into().ok()?; unsafe { // SAFETY: iff LZ4_decompress_safe succeeds, all of dst will be overwritten; LZ4_decompress_safe // should never write outside `dst` vector allocated by v, or read outside the input. There is // no alignment requirement. The reserved space `uncomp_len` equals `olen` as overflow was checked.
v.reserve_exact(uncomp_len); let ndecomp = LZ4_decompress_safe( input.as_ptr() as *const c_char, v.as_mut_ptr() as *mut c_char, ilen, olen, ); if ndecomp != olen { return None; } // SAFETY: all of dst[..uncomp_len] has been written to by LZ4_decompress_safe v.set_len(uncomp_len); } Some(v) } #[cfg(not(feature = "lz4"))] pub fn lz4_decompress_to_vec(input: &[u8], uncomp_len: usize) -> Option<Vec<u8>> { unreachable!(); } #[cfg(feature = "lz4")] pub fn lz4_decompress_to_slice(input: &[u8], dst: &mut [u8]) -> Option<()> { let ilen: i32 = input.len().try_into().ok()?; let olen: i32 = dst.len().try_into().ok()?; unsafe { // SAFETY: iff LZ4_decompress_safe succeeds, all of dst will be overwritten; LZ4_decompress_safe // should never operate outside its input/output regions. There is no alignment requirement. // The reserved space `uncomp_len` equals `olen` as overflow was checked. let ndecomp = LZ4_decompress_safe( input.as_ptr() as *const c_char, dst.as_mut_ptr() as *mut c_char, ilen, olen, ); if ndecomp != olen { return None; } } Some(()) } #[cfg(not(feature = "lz4"))] pub fn lz4_decompress_to_slice(input: &[u8], dst: &mut [u8]) -> Option<()> { unreachable!(); } #[cfg(feature = "zstd")] #[test] fn test_zstd_compression() { let mut x: Vec<u8> = vec![0; 1000]; for (i, v) in x.iter_mut().enumerate() { *v = ((11 * i) % 256) as u8; } let mut c = zstd_make_cctx().unwrap(); let w = zstd_compress_to_vec(&mut c, &x[..], 0, 16, 4); let mut d = zstd_make_dctx().unwrap(); let y = zstd_decompress_to_vec(&mut d, &w[16..w.len() - 4], x.len()).unwrap(); assert_eq!(x, y); } #[cfg(feature = "lz4")] #[test] fn test_lz4_compression() { let mut x: Vec<u8> = vec![0; 1000]; for (i, v) in x.iter_mut().enumerate() { *v = ((11 * i) % 256) as u8; } let mut c = lz4_make_cctx().unwrap(); let w = lz4_compress_to_vec(&mut c, &x[..], 0, 16, 4); let y = lz4_decompress_to_vec(&w[16..w.len() - 4], x.len()).unwrap(); assert_eq!(x, y); } waypipe-v0.10.5/src/damage.rs000066400000000000000000000364461507071711100160250ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-3.0-or-later */ /*! Damage merging logic */ use log::debug; use std::cmp::Reverse; use std::collections::BinaryHeap; /* Note: (the use of u32 x1/x2 instead of i32 x+width ensures sign and overflow checks need * be done only once.) */ #[derive(Debug, Copy, Clone)] pub struct Rect { pub x1: u32, pub x2: u32, pub y1: u32, pub y2: u32, } /* Note: this _saturates_ on overflow, and assumes a well-formed rectangle with x1 < x2, y1 < y2 */ fn bounding_interval(r: &Rect, offset: usize, stride: usize, bpp: usize) -> (usize, usize) { let start = offset .saturating_add((r.y1 as usize).saturating_mul(stride)) .saturating_add((r.x1 as usize).saturating_mul(bpp)); let end = offset .saturating_add((r.y2 as usize).saturating_mul(stride)) .saturating_add((r.x2 as usize).saturating_mul(bpp)); (start, end) } fn align_down(x: usize, align_bits: u32) -> usize { (x >> align_bits) << align_bits } fn align_up(x: usize, align_bits: u32) -> usize { x.checked_next_multiple_of(1_usize << align_bits).unwrap() } // todo: handling overflow? Cannot align max_int - 1; better to error?
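/* Illustrative example (added; not part of the upstream source): align_down/align_up
 * round interval endpoints outward to multiples of 1 << align_bits, which is how
 * damage segments get snapped to transfer-block boundaries below. */
#[test]
fn test_alignment_helpers_example() {
    assert_eq!(align_down(70, 6), 64);
    assert_eq!(align_up(70, 6), 128);
    /* Already-aligned values are left unchanged */
    assert_eq!(align_down(128, 6), 128);
    assert_eq!(align_up(128, 6), 128);
}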
fn aligned_bounding_interval( r: &Rect, offset: usize, stride: usize, bpp: usize, align_bits: u32, ) -> (usize, usize) { let x = bounding_interval(r, offset, stride, bpp); (align_down(x.0, align_bits), align_up(x.1, align_bits)) } /* Requires: All rectangles, after projection, will fit inside the interval given by bound */ fn process_group( rects: &[Rect], bound: (usize, usize), output: &mut Vec<(usize, usize)>, align_bits: u32, min_gap: usize, offset: usize, stride: usize, bpp: usize, ) { // TODO: implement a pre-pass that computes a disjoint collection of rectangles (e.g.: using a sweep // line algorithm plus interval tree); this would be far more efficient than the current solution, which // tends to compute almost the same union once per horizontal line. The fact that the output space is // almost a cylinder is not actually important -- it at most introduces one seam per horizontal line, // which is a minor cost. min_gap can be handled by horizontally expanding the rectangles beforehand; // it is permissible to erase gaps slightly larger than min_gap. /* A heap of indices into the rectangle, using lex sorted key to track start position and rect index */ let mut rect_heap: BinaryHeap<(Reverse<usize>, usize)> = BinaryHeap::new(); rect_heap.reserve_exact(rects.len()); let mut row_counters: Vec<usize> = vec![0; rects.len()]; let mut work_estimate: usize = 0; // if this grows too large relative to bound.1-bound.0, fall back to dumb strategy. for (i, r) in rects.iter().enumerate() { if (r.x2 - r.x1) as usize * bpp > stride.saturating_sub(min_gap) { /* All segments from this rectangle will be merged; this is a common case, so optimize it */ row_counters[i] = usize::MAX; } // TODO: special case optimization to merge entire rectangle into contiguous segment if gaps are small; // not _strictly_ necessary, but good for perf let start_pos = align_down( offset + (r.y1 as usize) * stride + (r.x1 as usize) * bpp, align_bits, ); rect_heap.push((Reverse(start_pos), i)); } let mut cur: Option<(usize, usize)> = None; while let Some((Reverse(start), i)) = rect_heap.pop() { /* Process next segment */ let rect = &rects[i]; let merge_opt = row_counters[i] == usize::MAX; let end = if merge_opt { align_up( offset + (rect.y2 - 1) as usize * stride + (rect.x2 as usize) * bpp, align_bits, ) } else { align_up( offset + (rect.y1 as usize + row_counters[i]) * stride + (rect.x2 as usize) * bpp, align_bits, ) }; if let Some(cv) = cur { if start <= cv.1 || (start - cv.1) < min_gap { /* either approximate containment or overlap */ cur = Some((cv.0, std::cmp::max(cv.1, end))); } else { output.push(cv); cur = Some((start, end)); } } else { cur = Some((start, end)); } if !merge_opt { /* Update row counter and return rectangle if not done */ row_counters[i] += 1; if row_counters[i] < (rect.y2 - rect.y1) as usize { let start_pos = align_down( offset + (rect.y1 as usize + row_counters[i]) * stride + (rect.x1 as usize) * bpp, align_bits, ); rect_heap.push((Reverse(start_pos), i)); } } /* Do not spend much more time computing damage regions than it would * take to just scan the entire interval for differences.
This doesn't * need to be a _great_ estimate, just within a few orders of magnitude * to avoid pathological behavior */ work_estimate += std::cmp::max(16, rect_heap.len()).ilog2() as usize; if work_estimate > (bound.1 - bound.0) / 8 { debug!( "Stopped processing block after estimated {} work; length {}", work_estimate, (bound.1 - bound.0) ); output.push((cur.unwrap().0, bound.1)); return; } } if let Some(cv) = cur { output.push(cv); } } // todo: eventually add support for multiplanar and subsampled formats? /** * Note: all rectangles must be clipped to the wl_buffer bounds, otherwise function may produce nonsense or panic * * min_gap: minimum allowed gap on output * align_bits: e.g. value 6 => output segment positions are multiples of 64 * * The last interval may be rounded up to align(buffer end, 1<<align_bits). */ pub fn compute_damaged_segments( rects: &mut [Rect], align_bits: u32, min_gap: usize, offset: usize, stride: usize, bpp: usize, ) -> Vec<(usize, usize)> { if rects.is_empty() { return Vec::new(); } assert!(stride > 0); assert!(bpp > 0); let mut output = Vec::new(); for r in rects.iter() { assert!(r.x1 < r.x2 && r.y1 < r.y2); } rects.sort_unstable_by_key(|rect: &Rect| -> usize { bounding_interval(rect, offset, stride, bpp).0 }); struct Group { i_start: usize, i_end: usize, /* The aligned region covered by the corresponding rectangles */ region: (usize, usize), } /* First: group intervals into overlapping clumps separated by >= min_gap, post alignment. * */ let mut spans: Vec<Group> = Vec::new(); let mut rect_iter = rects.iter().enumerate(); let mut current = Group { i_start: 0, i_end: 1, region: aligned_bounding_interval( rect_iter.next().unwrap().1, offset, stride, bpp, align_bits, ), }; for (i, rect) in rect_iter { let b = aligned_bounding_interval(rect, offset, stride, bpp, align_bits); if b.0 <= current.region.1 || (b.0 - current.region.1) < min_gap { current = Group { i_start: current.i_start, i_end: i + 1, region: (current.region.0, std::cmp::max(current.region.1, b.1)), } } else { spans.push(current); current = Group { i_start: i, i_end: i + 1, region: b, } } } spans.push(current); /* Next: process each span (or if it isn't worth the time, complete it) */ for group in spans { process_group( &rects[group.i_start..group.i_end], group.region, &mut output, align_bits, min_gap, offset, stride, bpp, ); } output } /* Given two sorted lists of disjoint intervals, compute their union, automatically closing any * gaps of size < min_gap */ pub fn union_damage( a: &[(usize, usize)], b: &[(usize, usize)], min_gap: usize, ) -> Vec<(usize, usize)> { assert!(validate_output(a, 0, min_gap).is_ok()); assert!(validate_output(b, 0, min_gap).is_ok()); let mut output = Vec::new(); let mut iter_a = a.iter().peekable(); let mut iter_b = b.iter().peekable(); let mut last: Option<(usize, usize)> = None; loop { // Extract element with next earliest endpoint from either of A or B let pa = iter_a.peek(); let pb = iter_b.peek(); let nxt = *match (pa, pb) { (Some(ea), Some(eb)) => { if ea.0 <= eb.0 { iter_a.next().unwrap() } else { iter_b.next().unwrap() } } (Some(_), None) => iter_a.next().unwrap(), (None, Some(_)) => iter_b.next().unwrap(), (None, None) => { break; } }; let Some(mut y) = last else { last = Some(nxt); continue; }; // Merge intervals or push old if nxt.0 <= y.1 || (nxt.0 - y.1) < min_gap { y.1 = std::cmp::max(y.1, nxt.1); last = Some(y); } else { output.push(y); last = Some(nxt); } } if let Some(e) = last { output.push(e); } output } fn validate_output(a: &[(usize, usize)], align_bits: u32, min_gap: usize) -> Result<(), String> { for (x, y) in a { if x >= y { return Err(format!("negative or empty interval {} {}", x, y)); } let mask = (1_usize << align_bits) - 1; if x & mask != 0 || y & mask != 0 { return Err(format!("misaligned {} {}", x,
y)); } } for i in 1..a.len() { if a[i].0 < a[i - 1].1 { return Err(format!("overlapping {:?} {:?}", a[i - 1], a[i])); } if a[i].0 < a[i - 1].1 + min_gap { return Err(format!( "min gap too small {}-{}={} < {}", a[i].0, a[i - 1].1, a[i].0 - a[i - 1].1, min_gap )); } } Ok(()) } #[test] fn test_union_damage() { let x: &[(usize, usize)] = &[(0, 6)]; let y: &[(usize, usize)] = &[(8, 10), (14, 20)]; let align_bits = 1; let max_gap = 4; assert!(validate_output(x, align_bits, max_gap).is_ok()); assert!(validate_output(y, align_bits, max_gap).is_ok()); let bad1: &[(usize, usize)] = &[(8, 10), (15, 20)]; let bad2: &[(usize, usize)] = &[(8, 10), (12, 20)]; let bad3: &[(usize, usize)] = &[(8, 10), (6, 20)]; assert!(validate_output(bad1, align_bits, max_gap).is_err()); assert!(validate_output(bad2, align_bits, max_gap).is_err()); assert!(validate_output(bad3, align_bits, max_gap).is_err()); let output = union_damage(x, y, max_gap); println!("output: {:?}", output); assert_eq!(&output, &[(0, 10), (14, 20)]); // TODO: test overflow conditions } #[test] fn test_damage_computation() { { let w = 100; let h = 50; let bpp = 1; let stride = bpp * w; let example_pattern = [ Rect { x1: 0, x2: 10, y1: 0, y2: 10, }, Rect { x1: 90, x2: 100, y1: 40, y2: 50, }, ]; let align_bits = 0; let offset = 0; let mut tmp = example_pattern; let slices = compute_damaged_segments(&mut tmp, align_bits, 0, offset, stride, bpp); assert!(slices.len() == 20); println!("slices: {:?}", slices); /* Check that, when min_gap is huge, only one big interval is reported. */ let mut tmp = example_pattern; let min_gap = usize::MAX; let slices = compute_damaged_segments(&mut tmp, align_bits, min_gap, offset, stride, bpp); assert_eq!(slices, &[(0, w * h * bpp)]); } fn fill_mask(mask: &mut [bool], w: usize, h: usize, stride: usize, bpp: usize, rects: &[Rect]) { mask.fill(false); for r in rects { assert!(r.x1 < r.x2 && r.x2 <= w as u32, "{:?}", r); assert!(r.y1 < r.y2 && r.y2 <= h as u32, "{:?}", r); for y in r.y1..r.y2 { mask[((y as usize) * stride + (r.x1 as usize) * bpp) ..((y as usize) * stride + (r.x2 as usize) * bpp)] .fill(true); } } } fn test_segments(mask: &mut [bool], segments: &[(usize, usize)]) { for (a, b) in segments { let b = std::cmp::min(*b, mask.len()); mask[*a..b].fill(false); } assert!(mask.iter().all(|x| !*x)); } let w = 100; let h = 100; let bpp = 1; let stride = 200; assert!(stride >= w * bpp); let mut mask = vec![false; h * stride]; for i in 0..100_usize { /* A variety of test patterns */ let mut rects: Vec<Rect> = Vec::new(); if i == 0 { for x in 0..((w / 2) as u32) { for y in 0..((h / 2) as u32) { rects.push(Rect { x1: 2 * x, x2: 2 * x + 1, y1: 2 * y, y2: 2 * y + 1, }); } } } else if i % 4 == 0 { for j in 0..(i as u32) { rects.push(Rect { x1: j, x2: j + 1, y1: 0, y2: (h as u32) - j, }); rects.push(Rect { x1: 0, x2: (w as u32) - j, y1: j, y2: j + 1, }); } } else if i % 2 == 0 { for j in 0..(i as u32) { rects.push(Rect { x1: j, x2: j + 2, y1: j, y2: j + 2, }); } } else { let (dw, dh, di) = ((w / 2) as u32, (h / 2) as u32, (i / 2) as u32); for j in 1..di { rects.push(Rect { x1: dw - j, x2: dw + j, y1: dh - (di - j), y2: dh + (di - j), }); } } let align_bits = 2; let min_gap = 1; fill_mask(&mut mask, w, h, stride, bpp, &rects); let nset = mask.iter().map(|x| *x as usize).sum::<usize>(); let mut tmp = rects; let slices = compute_damaged_segments(&mut tmp, align_bits, min_gap, 0, stride, bpp); let ncover = slices.iter().map(|(x, y)| y - x).sum::<usize>(); println!( "test {}, {} rects, {} slices, {} filled, {} covered", i, tmp.len(), slices.len(),
nset, ncover ); validate_output(&slices, align_bits, min_gap).unwrap(); test_segments(&mut mask, &slices); } } waypipe-v0.10.5/src/dmabuf.rs000066400000000000000000004152711507071711100160360ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-3.0-or-later */ /*! Support for DMABUFs and timeline semaphores (using Vulkan) */ #![cfg(feature = "dmabuf")] use crate::platform::*; use crate::tag; use crate::util::*; #[cfg(feature = "video")] pub use crate::video::*; use crate::wayland_gen::*; use ash::*; use log::{debug, error}; use nix::{errno, libc, request_code_readwrite}; use std::collections::BTreeMap; use std::ffi::{c_char, c_int, c_uint, c_void, CStr, CString}; use std::os::fd::{AsFd, AsRawFd, BorrowedFd, FromRawFd, IntoRawFd, OwnedFd}; use std::path::Path; use std::ptr::{slice_from_raw_parts, slice_from_raw_parts_mut}; use std::sync::{Arc, Mutex, MutexGuard}; /** Properties of a specific format+modifier combination */ #[derive(Debug, Clone, Eq, Ord, PartialEq, PartialOrd)] pub struct ModifierData { pub plane_count: u32, pub max_size_transfer: (u32, u32), pub max_size_store_and_sample: Option<(u32, u32)>, } /** A list of modifiers and associated metadata. */ #[derive(Debug)] pub struct FormatData { /** The message handling logic generally only needs the list of which modifiers * are available, so store these separately to provide &[u64] access. */ pub modifiers: Vec<u64>, /** A list matching 'modifiers'. */ modifier_data: Vec<ModifierData>, } /** Structure holding a queue and associated mutable metadata */ pub struct VulkanQueue { /** Queue object, for which access must be externally synchronized */ pub queue: vk::Queue, /** The last semaphore value planned to be signalled by a submission to the queue */ pub last_semaphore_value: u64, } /** MutexGuard for a VulkanQueue. * * Unlocks the ffmpeg lock for the queue when dropped. */ pub struct VulkanQueueGuard<'a> { pub inner: MutexGuard<'a, VulkanQueue>, vulk: &'a VulkanDevice, } /** A Vulkan entrypoint, instance, and associated information. */ pub struct VulkanInstance { entry: Entry, instance: Instance, physdevs: Vec<vk::PhysicalDevice>, } /** A Vulkan logical device, with its associated state, main queue and timeline semaphore, * extensions, and cached properties. */ pub struct VulkanDevice { _instance: Arc<VulkanInstance>, dev_info: DeviceInfo, /** Queue family indices. Order: [compute+transfer, graphics+transfer, encode, decode] */ qfis: [u32; 4], /** Timeline semaphore; when it reaches 'queue.last_semaphore_value', all preceding work using * the semaphore is done */ pub semaphore: vk::Semaphore, /** DRM handle and event_fd exported from [VulkanDevice::semaphore], for easy access from main loop */ semaphore_external: Option<VulkanExternalTimelineSemaphore>, drm_fd: OwnedFd, #[cfg(feature = "video")] pub video: Option, /** The compute+transfer queue to use. Do NOT access via queue.lock() -- use * vulkan_lock_queue() instead, to also ensure ffmpeg is locked out of using * the queue. */ queue: Mutex<VulkanQueue>, pub dev: Device, get_modifier: ext::image_drm_format_modifier::Device, get_mem_reqs2: khr::get_memory_requirements2::Device, bind_mem2: khr::bind_memory2::Device, ext_mem_fd: khr::external_memory_fd::Device, pub timeline_semaphore: khr::timeline_semaphore::Device, ext_semaphore_fd: khr::external_semaphore_fd::Device, pub formats: BTreeMap<vk::Format, FormatData>, // todo: the set of possible formats is small and known at compile time; use a table and perfect hashing instead?
device_id: u64, pub queue_family: u32, memory_properties: vk::PhysicalDeviceMemoryProperties, } pub enum VulkanImageParameterMismatch { Format, Modifier, Size((u32, u32)), } /** The associated drm handle/eventfd associated with a timeline semaphore, which can be * used to wait for updates on it. * * [Drop] has not been implemented for this type; it must be destroyed by the containing struct. */ struct VulkanExternalTimelineSemaphore { drm_handle: u32, event_fd: OwnedFd, } /** A Vulkan timeline semaphore and the eventfd used to wait for updates on it */ pub struct VulkanTimelineSemaphore { pub vulk: Arc<VulkanDevice>, pub semaphore: vk::Semaphore, external: VulkanExternalTimelineSemaphore, } /** A sync file (e.g.: exported from a DMABUF, for implicit sync) */ pub struct VulkanSyncFile { vulk: Arc<VulkanDevice>, fd: OwnedFd, } /** A binary semaphore (e.g.: resulting from DMABUF's exported implicit sync file) */ pub struct VulkanBinarySemaphore { vulk: Arc<VulkanDevice>, pub semaphore: vk::Semaphore, } /** A Vulkan command pool */ pub struct VulkanCommandPool { pub vulk: Arc<VulkanDevice>, pub pool: Mutex<vk::CommandPool>, } /** Mutable state for a [VulkanDmabuf] */ pub struct VulkanDmabufInner { // TODO: need to store metadata about all pending operations to the dmabuf pub image_layout: vk::ImageLayout, } /** Structure for a Vulkan image imported from/exported to a DMABUF */ pub struct VulkanDmabuf { // No RefCell -- unsafe is used throughout anyway, exclusivity is not needed, and no recursion should be done pub vulk: Arc<VulkanDevice>, pub image: vk::Image, // todo: store memory, to be able to free it properly when the VulkanDmabuf is dropped? // also: the file descriptors probably have a separate lifespan; check this pub width: u32, pub height: u32, // note: the drm_format should never be needed after casting to nearest Vulkan approximation pub vk_format: vk::Format, /* True iff vkImage was created with storage and sample usage */ pub can_store_and_sample: bool, // todo: use a <=4 vector size optimization without any heap allocation memory_planes: Vec<(vk::DeviceMemory, u32, u32)>, /* mem / offset / stride */ /** In order to extract sync files for implicit synchronization, and to poll the implicit * fences if sync file extraction is not supported, store a copy of the DMABUF fd. * (This may need to be changed to support disjoint multi-planar images.) */ pub main_fd: OwnedFd, pub inner: Mutex<VulkanDmabufInner>, } /** Data for a [VulkanBuffer] which is mutable or requires exclusive access. */ struct VulkanBufferInner { data: *mut c_void, reader_count: usize, has_writer: bool, } /** A mapped staging buffer, either for use when reading or writing data */ pub struct VulkanBuffer { pub vulk: Arc<VulkanDevice>, // todo: central handling of vk::Buffer objects, to allow bump-allocating // segments from a larger central buffer. pub buffer: vk::Buffer, mem: vk::DeviceMemory, pub memory_len: u64, pub buffer_len: u64, /* Mutex-wrapped to ensure only one referent can read/write from data at a time */ inner: Mutex<VulkanBufferInner>, } unsafe impl Send for VulkanBuffer {} unsafe impl Sync for VulkanBuffer {} /** Handle storing (and keeping alive) command buffer for a transfer between a [VulkanBuffer] and a [VulkanDmabuf] */ pub struct VulkanCopyHandle { vulk: Arc<VulkanDevice>, /* Copy operation is between these two objects */ _image: Arc<VulkanDmabuf>, _buffer: Arc<VulkanBuffer>, pool: Arc<VulkanCommandPool>, // note: not safe to reuse unless image is still alive // TODO: not safe to free a 'pending' command buffer; give Vulkan itself a list of copy-handles?
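/* Command buffer recording the copy; per the Drop impl it may only be freed once the timeline semaphore shows that completion_time_point was reached. */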
cb: vk::CommandBuffer, // on the queue's timeline semaphore completion_time_point: u64, } impl Drop for VulkanInstance { fn drop(&mut self) { unsafe { self.instance.destroy_instance(None); } } } impl Drop for VulkanDevice { fn drop(&mut self) { unsafe { #[cfg(feature = "video")] { if let Some(ref v) = self.video { destroy_video(&self.dev, v); } /* Drop video, if present */ self.video = None; } // These are probably only safe if operations using them have completed; need // a centralized registry self.dev.destroy_semaphore(self.semaphore, None); self.dev.destroy_device(None); } } } impl Drop for VulkanQueue { fn drop(&mut self) {} } impl Drop for VulkanCommandPool { fn drop(&mut self) { unsafe { let p = self.pool.lock().unwrap(); self.vulk.dev.destroy_command_pool(*p, None); } } } impl Drop for VulkanDmabuf { fn drop(&mut self) { unsafe { self.vulk.dev.destroy_image(self.image, None); for (mem, _offset, _stride) in &self.memory_planes { self.vulk.dev.free_memory(*mem, None); } } // The Arc> should keep Vulkan alive until after VulkanDmabuf is dropped } } impl Drop for VulkanTimelineSemaphore { fn drop(&mut self) { unsafe { drm_syncobj_destroy(&self.vulk.drm_fd, self.external.drm_handle).unwrap(); // SAFETY: only if semaphore is not being used self.vulk.dev.destroy_semaphore(self.semaphore, None); // event_fd cleanup is automatic } } } impl Drop for VulkanBinarySemaphore { fn drop(&mut self) { unsafe { // SAFETY: only if semaphore is not being used self.vulk.dev.destroy_semaphore(self.semaphore, None); } } } impl Drop for VulkanBuffer { fn drop(&mut self) { unsafe { assert!(self.inner.lock().unwrap().reader_count == 0); assert!(!self.inner.lock().unwrap().has_writer); self.vulk.dev.destroy_buffer(self.buffer, None); self.vulk.dev.unmap_memory(self.mem); self.vulk.dev.free_memory(self.mem, None); } } } impl Drop for VulkanCopyHandle { fn drop(&mut self) { let cmd_pool = self.pool.pool.lock().unwrap(); unsafe { /* Verify that the command buffer execution has completed; if not, panic, as it's a program error */ if let Ok(counter) = self .vulk .timeline_semaphore .get_semaphore_counter_value(self.vulk.semaphore) { assert!( counter >= self.completion_time_point, "copy handle deleted at {} >!= {}; dropped too early?", counter, self.completion_time_point ); } self.vulk.dev.free_command_buffers(*cmd_pool, &[self.cb]); } } } impl std::fmt::Display for VulkanImageParameterMismatch { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match self { Self::Format => f.write_str("unsupported format"), Self::Modifier => f.write_str("unsupported modifier"), Self::Size(max_size) => f.write_fmt(format_args!( "provided size is ≰ max size=({},{})", max_size.0, max_size.1 )), } } } /** Function to use to lock the main queue; will ensure queue is also locked * for ffmpeg, if video encoding/decoding is enabled. */ pub fn vulkan_lock_queue(vulk: &VulkanDevice) -> VulkanQueueGuard<'_> { /* Lock order: vulk.queue before video lock */ let inner = vulk.queue.lock().unwrap(); #[cfg(feature = "video")] if let Some(ref v) = vulk.video { unsafe { video_lock_queue(v, vulk.queue_family); } } VulkanQueueGuard { inner, vulk } } impl Drop for VulkanQueueGuard<'_> { fn drop(&mut self) { #[cfg(feature = "video")] if let Some(ref v) = self.vulk.video { unsafe { video_unlock_queue(v, self.vulk.queue_family); } } } } /** Check whether a given extension is available in the list with the given version. 
*/ fn exts_has_prop(exts: &[vk::ExtensionProperties], name: &CStr, version: u32) -> bool { exts.iter() .any(|x| x.extension_name_as_c_str().unwrap() == name && x.spec_version >= version) } /** Additional information for Vulkan formats */ pub struct FormatLayoutInfo { pub bpp: u32, pub planes: usize, // TODO: while the number of _memory_ planes can be looked up by modifier // (e.g., to account for CCS planes), need to track number of logical planes // (e.g.: vk::ImageAspectFlags::PLANE_0 vs vk::ImageAspectFlags::MEMORY_PLANE_0_EXT) // todo: subsampling in vulkan requires width/height to be multiples of 2 sometimes; // would need to bump up canonical size to match } // TODO: determine if it is worth it to deduplicate shm and dmabuf format information. // (the code pathways will probably become very different.) /** List of Vulkan formats which Waypipe supports. * * Channel interpretations _usually_ do not affect processing; given a linear * layout it shouldn't matter which channel is R and which is B as long as the * transfer operations (vkCmdCopyImageToBuffer, etc.) preserve the order. * * However, the DMABUF layout may depend on the precise (format, modifier) pair and * so buffers should be imported/exported using the closest matching Vulkan format. * In particular, DRM_FORMAT_ABGR8888 should use R8G8B8A8_UNORM, and * DRM_FORMAT_ARGB8888 should use B8G8R8A8_UNORM. It is possible that a driver * implements both ABGR and ARGB buffers using a fixed ABGR memory layout (using * the format value to determine whether to swap R and B channels when transferring * the data to a buffer, instead of just copying raw data as would occur with a naive * linear layout). As a result, it is _not_ safe in general to import or export * a DRM format using anything but the exact corresponding Vulkan format. */ const SUPPORTED_FORMAT_LIST: &[vk::Format] = &[ vk::Format::R4G4B4A4_UNORM_PACK16, vk::Format::B4G4R4A4_UNORM_PACK16, // vk::Format::A4R4G4B4_UNORM_PACK16_EXT, A4B4G4R4_UNORM_PACK16_EXT require VK_EXT_4444_formats or Vulkan 1.3 vk::Format::R5G6B5_UNORM_PACK16, vk::Format::B5G6R5_UNORM_PACK16, // vk::Format::A1B5G5R5_UNORM_PACK16_KHR, // requires VK_KHR_maintenance5 or Vulkan 1.4. 
vk::Format::A1R5G5B5_UNORM_PACK16, vk::Format::B5G5R5A1_UNORM_PACK16, vk::Format::R5G5B5A1_UNORM_PACK16, vk::Format::R8_UNORM, vk::Format::R8G8_UNORM, vk::Format::R8G8B8_UNORM, vk::Format::B8G8R8_UNORM, vk::Format::R8G8B8A8_UNORM, vk::Format::B8G8R8A8_UNORM, vk::Format::A2R10G10B10_UNORM_PACK32, vk::Format::A2B10G10R10_UNORM_PACK32, vk::Format::R16_UNORM, vk::Format::R16G16_UNORM, vk::Format::R16G16B16A16_UNORM, vk::Format::R16G16B16A16_SFLOAT, vk::Format::G8_B8_R8_3PLANE_444_UNORM, ]; /** Get properties of a [vk::Format] */ pub fn get_vulkan_info(f: vk::Format) -> FormatLayoutInfo { match f { vk::Format::R4G4B4A4_UNORM_PACK16 | vk::Format::B4G4R4A4_UNORM_PACK16 | vk::Format::R5G6B5_UNORM_PACK16 | vk::Format::B5G6R5_UNORM_PACK16 | vk::Format::A1R5G5B5_UNORM_PACK16 | vk::Format::B5G5R5A1_UNORM_PACK16 | vk::Format::R5G5B5A1_UNORM_PACK16 => FormatLayoutInfo { bpp: 2, planes: 1 }, vk::Format::R8_UNORM => FormatLayoutInfo { bpp: 1, planes: 1 }, vk::Format::R8G8_UNORM => FormatLayoutInfo { bpp: 2, planes: 1 }, vk::Format::R8G8B8_UNORM | vk::Format::B8G8R8_UNORM => { FormatLayoutInfo { bpp: 3, planes: 1 } } vk::Format::R8G8B8A8_UNORM | vk::Format::B8G8R8A8_UNORM => { FormatLayoutInfo { bpp: 4, planes: 1 } } vk::Format::A2R10G10B10_UNORM_PACK32 | vk::Format::A2B10G10R10_UNORM_PACK32 => { FormatLayoutInfo { bpp: 4, planes: 1 } } vk::Format::R16_UNORM => FormatLayoutInfo { bpp: 2, planes: 1 }, vk::Format::R16G16_UNORM => FormatLayoutInfo { bpp: 4, planes: 1 }, vk::Format::R16G16B16A16_UNORM => FormatLayoutInfo { bpp: 8, planes: 1 }, vk::Format::R16G16B16A16_SFLOAT => FormatLayoutInfo { bpp: 8, planes: 1 }, vk::Format::G8B8G8R8_422_UNORM => FormatLayoutInfo { bpp: 2, planes: 1 }, vk::Format::G8_B8_R8_3PLANE_420_UNORM => FormatLayoutInfo { bpp: 2, planes: 3 }, vk::Format::G8_B8_R8_3PLANE_422_UNORM => FormatLayoutInfo { bpp: 2, planes: 3 }, vk::Format::G8_B8_R8_3PLANE_444_UNORM => FormatLayoutInfo { bpp: 2, planes: 3 }, vk::Format::G8_B8R8_2PLANE_420_UNORM => FormatLayoutInfo { bpp: 2, planes: 2 }, vk::Format::G8_B8R8_2PLANE_422_UNORM => FormatLayoutInfo { bpp: 2, planes: 2 }, vk::Format::G16_B16R16_2PLANE_420_UNORM => FormatLayoutInfo { bpp: 2, planes: 2 }, vk::Format::G16_B16_R16_3PLANE_444_UNORM => FormatLayoutInfo { bpp: 2, planes: 3 }, _ => unreachable!("Format {:?} should have been implemented", f), } } /** Convert a Wayland fourcc format code to a DRM format code. * * Wayland and DRM differ in encodings for Argb8888 and Xrgb8888 only */ #[allow(dead_code)] #[cfg(any(test, feature = "test_proto"))] pub const fn wayland_to_drm(wl_format: WlShmFormat) -> u32 { match wl_format { WlShmFormat::Argb8888 => fourcc('A', 'R', '2', '4'), WlShmFormat::Xrgb8888 => fourcc('X', 'R', '2', '4'), _ => wl_format as u32, } } /** Convert a DRM fourcc format to a canonical Vulkan format with an equivalent layout * (but possibly different channel names -- those can be either ignored or fixed by swizzling.) * * (The other direction is not well defined, because _multiple_ DRM formats may map * onto a Vulkan format.) 
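* For example, both DRM_FORMAT_ABGR8888 and DRM_FORMAT_XBGR8888 map onto R8G8B8A8_UNORM below; only the interpretation of the alpha channel differs.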
*/ pub fn drm_to_vulkan(drm_format: u32) -> Option<vk::Format> { use WlShmFormat::*; if drm_format == 0 || drm_format == 1 { /* not a DRM format, and clients should not send this for DMABUFs */ return None; } /* First, convert to Wayland format */ let shm_format = if let Ok(shm_format) = drm_format.try_into() { shm_format } else { /* Unfortunately wayland uses different values for Argb8888, Xrgb8888 */ if drm_format == fourcc('A', 'R', '2', '4') { Argb8888 } else if drm_format == fourcc('X', 'R', '2', '4') { Xrgb8888 } else { return None; } }; // TODO: is it safe to bundle multiple original channels into one? e.g. RGB233 => R8_UNORM // TODO: UNORM vs interpreting everything as UINT /* 8-bit RGB, the endianness-independent formats */ match shm_format { R8 => return Some(vk::Format::R8_UNORM), Gr88 => return Some(vk::Format::R8G8_UNORM), Rgb888 => return Some(vk::Format::B8G8R8_UNORM), Bgr888 => return Some(vk::Format::R8G8B8_UNORM), Abgr8888 | Xbgr8888 => return Some(vk::Format::R8G8B8A8_UNORM), Argb8888 | Xrgb8888 => return Some(vk::Format::B8G8R8A8_UNORM), /* Bgra/Rgba/Ayuv/Xyuv-type have no direct analogue and using * R8G8B8A8_UNORM/B8G8R8A8_UNORM for either could potentially cause * channel swaps on import */ _ => (), } /* Vulkan physical device endianness matches host endianness, while the Wayland and * DRM formats are endianness-independent (typically little-endian). To keep things simple, * on big endian systems, only permit formats with endianness-independent layout. */ if cfg!(not(target_endian = "little")) { return None; } Some(match shm_format { /* Packed formats */ Rgba4444 | Rgbx4444 => vk::Format::R4G4B4A4_UNORM_PACK16, Bgra4444 | Bgrx4444 => vk::Format::B4G4R4A4_UNORM_PACK16, Rgb565 => vk::Format::R5G6B5_UNORM_PACK16, Bgr565 => vk::Format::B5G6R5_UNORM_PACK16, Abgr1555 | Xbgr1555 => vk::Format::A1B5G5R5_UNORM_PACK16_KHR, Argb1555 | Xrgb1555 => vk::Format::A1R5G5B5_UNORM_PACK16, Bgra5551 | Bgrx5551 => vk::Format::B5G5R5A1_UNORM_PACK16, Rgba5551 | Rgbx5551 => vk::Format::R5G5B5A1_UNORM_PACK16, /* 10-bit RGB */ Argb2101010 | Xrgb2101010 => vk::Format::A2R10G10B10_UNORM_PACK32, Abgr2101010 | Xbgr2101010 => vk::Format::A2B10G10R10_UNORM_PACK32, /* 16-bit RGB */ R16 => vk::Format::R16_UNORM, Gr1616 => vk::Format::R16G16_UNORM, Abgr16161616 | Xbgr16161616 => vk::Format::R16G16B16A16_UNORM, Abgr16161616f | Xbgr16161616f => vk::Format::R16G16B16A16_SFLOAT, /* YUV */ Yuyv => vk::Format::G8B8G8R8_422_UNORM, Uyvy => vk::Format::B8G8R8G8_422_UNORM, Yuv420 => vk::Format::G8_B8_R8_3PLANE_420_UNORM, Yuv422 => vk::Format::G8_B8_R8_3PLANE_422_UNORM, Yuv444 => vk::Format::G8_B8_R8_3PLANE_444_UNORM, Nv12 => vk::Format::G8_B8R8_2PLANE_420_UNORM, Nv16 => vk::Format::G8_B8R8_2PLANE_422_UNORM, P016 => vk::Format::G16_B16R16_2PLANE_420_UNORM, Q401 => vk::Format::G16_B16_R16_3PLANE_444_UNORM, _ => return None, }) } /* Definitions from drm.h and linux/dma-buf.h */ const DRM_IOCTL_SYNCOBJ_DESTROY: (u32, u8) = ('d' as u32, 0xC0); const DRM_IOCTL_SYNCOBJ_FD_TO_HANDLE: (u32, u8) = ('d' as u32, 0xC2); const DRM_IOCTL_SYNCOBJ_EVENTFD: (u32, u8) = ('d' as u32, 0xCF); const DMABUF_IOCTL_EXPORT_SYNC_FILE: (u32, u8) = ('b' as u32, 0x02); #[repr(C)] struct DrmSyncobjDestroy { handle: u32, pad: u32, } #[repr(C)] struct DrmSyncobjHandle { handle: u32, flags: u32, fd: i32, pad: u32, } #[repr(C)] struct DrmSyncobjEventFd { handle: u32, flags: u32, point: u64, fd: i32, pad: u32, } #[repr(C)] struct DmabufExportSyncFile { flags: u32, fd: i32, } const DMA_BUF_SYNC_READ: u32 = 1 << 0; /** Requirements: for the specific ioctl used,
 * arg must be properly aligned, have the right type, and have the correct lifespan. */
unsafe fn ioctl_loop<T>(
    drm_fd: &OwnedFd,
    typ: u32,
    code: u8,
    arg: *mut T,
    about: &str,
) -> Result<(), String> {
    loop {
        // Note: Platforms vary in the signedness of the argument type passed
        // to libc, and some old big-endian platforms have a different ioctl code format
        let ret = libc::ioctl(
            drm_fd.as_raw_fd(),
            request_code_readwrite!(typ, code, std::mem::size_of::<T>()),
            arg as *mut c_void,
        );
        let errno = errno::Errno::last_raw();
        if ret == 0 {
            return Ok(());
        } else if (errno == errno::Errno::EINTR as i32) || (errno == errno::Errno::EAGAIN as i32) {
            continue;
        } else {
            return Err(tag!("ioctl {:x} ({}) failed: {}", code, about, errno));
        }
    }
}

/** Link a drm syncobj handle and time point to an eventfd */
fn drm_syncobj_eventfd(
    drm_fd: &OwnedFd,
    event_fd: &OwnedFd,
    handle: u32,
    point: u64,
) -> Result<(), String> {
    let mut x = DrmSyncobjEventFd {
        handle,
        flags: 0, // or: DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE
        point,
        fd: event_fd.as_raw_fd(),
        pad: 0,
    };
    unsafe {
        // SAFETY: x is repr(C), x has proper type for the ioctl,
        // and the ioctl does not use the pointer after the call
        ioctl_loop::<DrmSyncobjEventFd>(
            drm_fd,
            DRM_IOCTL_SYNCOBJ_EVENTFD.0,
            DRM_IOCTL_SYNCOBJ_EVENTFD.1,
            &mut x,
            "eventfd",
        )
    }
}

/** Convert a DRM syncobj fd to a handle */
fn drm_syncobj_fd_to_handle(drm_fd: &OwnedFd, syncobj_fd: &OwnedFd) -> Result<u32, String> {
    let mut x = DrmSyncobjHandle {
        handle: 0,
        flags: 0,
        fd: syncobj_fd.as_raw_fd(),
        pad: 0,
    };
    unsafe {
        // SAFETY: x is repr(C), x has proper type for the ioctl,
        // and the ioctl does not use the pointer after the call
        ioctl_loop::<DrmSyncobjHandle>(
            drm_fd,
            DRM_IOCTL_SYNCOBJ_FD_TO_HANDLE.0,
            DRM_IOCTL_SYNCOBJ_FD_TO_HANDLE.1,
            &mut x,
            "fd to handle",
        )?;
        Ok(x.handle)
    }
}

/** Destroy a drm syncobj handle.
 *
 * To get full IO-safety for DRM handles, would need an OwnedDrmHandle
 * wrapper. As-is, the caller should only call this on syncobj handles
 * it exclusively controls. */
fn drm_syncobj_destroy(drm_fd: &OwnedFd, handle: u32) -> Result<(), String> {
    let mut x = DrmSyncobjDestroy { handle, pad: 0 };
    unsafe {
        // SAFETY: x is repr(C), x has proper type for the ioctl,
        // and the ioctl does not use the pointer after the call
        ioctl_loop(
            drm_fd,
            DRM_IOCTL_SYNCOBJ_DESTROY.0,
            DRM_IOCTL_SYNCOBJ_DESTROY.1,
            &mut x,
            "handle destroy",
        )
    }
}

/** Export the sync file holding read fences associated with the given dmabuf.
 *
 * Returns Some(fd) if successful, None if the operation is possibly not supported, Err
 * on error.
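 *
 * A minimal usage sketch (assuming the caller waits for the returned sync
 * file to become readable before touching the dmabuf contents):
 *
 * ```ignore
 * if let Some(sync_fd) = dmabuf_sync_file_export(&dmabuf_fd)? {
 *     // poll sync_fd; once readable, pending reads on the dmabuf are done
 * } else {
 *     // old kernel: fall back to polling the dmabuf fd directly
 * }
 * ```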
 */
fn dmabuf_sync_file_export(dmabuf_fd: &OwnedFd) -> Result<Option<OwnedFd>, String> {
    let mut x = DmabufExportSyncFile {
        flags: DMA_BUF_SYNC_READ,
        fd: -1,
    };
    unsafe {
        // SAFETY: x is repr(C), x has proper type for the ioctl,
        // and the ioctl does not use the pointer after the call
        let code = request_code_readwrite!(
            DMABUF_IOCTL_EXPORT_SYNC_FILE.0,
            DMABUF_IOCTL_EXPORT_SYNC_FILE.1,
            std::mem::size_of::<DmabufExportSyncFile>()
        );
        let arg: *mut c_void = &mut x as *mut DmabufExportSyncFile as *mut c_void;
        loop {
            let ret = libc::ioctl(dmabuf_fd.as_raw_fd(), code, arg);
            let errno = errno::Errno::last_raw();
            if ret == 0 {
                break;
            } else if (errno == errno::Errno::EINTR as i32)
                || (errno == errno::Errno::EAGAIN as i32)
            {
                continue;
            } else if errno == errno::Errno::EINVAL as i32 {
                // EINVAL: the request is not valid (= kernel might be old and may not support this)
                return Ok(None);
            } else {
                return Err(tag!(
                    "ioctl {:x} (sync file export) failed: {}",
                    code,
                    errno
                ));
            }
        }
    }
    assert!(x.fd != -1);
    unsafe {
        // SAFETY: fd was just created, has been checked valid, and has no other references
        Ok(Some(OwnedFd::from_raw_fd(x.fd)))
    }
}

/** Identify the maximum supported image extents for a given format-modifier pair.
 *
 * Returns None if the image format/modifier/usage/etc. combination is not supported. */
fn get_max_external_image_size(
    instance: &Instance,
    physdev: vk::PhysicalDevice,
    queue_family: u32,
    format: vk::Format,
    modifier: u64,
    flags: vk::ImageUsageFlags,
) -> Result<Option<(u32, u32)>, String> {
    let mut ext_create_info = vk::PhysicalDeviceExternalImageFormatInfo::default()
        .handle_type(vk::ExternalMemoryHandleTypeFlags::DMA_BUF_EXT);
    let img_qfis = &[queue_family];
    let mut modifier_create_info = vk::PhysicalDeviceImageDrmFormatModifierInfoEXT::default()
        .drm_format_modifier(modifier)
        .sharing_mode(vk::SharingMode::EXCLUSIVE)
        .queue_family_indices(img_qfis);
    let format_info = vk::PhysicalDeviceImageFormatInfo2KHR::default()
        .format(format)
        .ty(vk::ImageType::TYPE_2D)
        .tiling(vk::ImageTiling::DRM_FORMAT_MODIFIER_EXT)
        .usage(flags)
        .flags(vk::ImageCreateFlags::empty())
        .push_next(&mut ext_create_info)
        .push_next(&mut modifier_create_info);

    let mut ext_info = vk::ExternalImageFormatProperties::default();
    let mut image_prop = vk::ImageFormatProperties2::default().push_next(&mut ext_info);
    unsafe {
        match instance.get_physical_device_image_format_properties2(
            physdev,
            &format_info,
            &mut image_prop,
        ) {
            Err(vk::Result::ERROR_FORMAT_NOT_SUPPORTED) => Ok(None),
            Err(x) => Err(tag!(
                "Failed to get image format properties for (format={:?},modifier={:x}): {:?}",
                format,
                modifier,
                x
            )),
            Ok(()) => Ok(Some((
                image_prop.image_format_properties.max_extent.width,
                image_prop.image_format_properties.max_extent.height,
            ))),
        }
    }
}

/** Rank physical devices by performance and video enc/decoding capabilities.
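 *
 * For example, under the scoring below a discrete GPU with both hardware
 * H.264 encode and decode scores 0*4 + 0 + 0 = 0 (best), while an
 * integrated GPU with hardware decode but no hardware encode scores
 * 1*4 + 1 + 0 = 5.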
* * Lower values are assumed better */ fn device_rank(info: &DeviceInfo) -> u8 { let base_score = match info.typ { vk::PhysicalDeviceType::DISCRETE_GPU => 0, vk::PhysicalDeviceType::INTEGRATED_GPU => 1, vk::PhysicalDeviceType::VIRTUAL_GPU => 2, vk::PhysicalDeviceType::CPU => 3, vk::PhysicalDeviceType::OTHER => 3, _ => 3, }; let hw_enc = info.hw_enc_h264; let hw_dec = info.hw_dec_h264 | info.hw_dec_av1; /* prefers: faster gpu, then hw decoding availability, then hw encoding availability */ base_score * 4 + ((!hw_enc) as u8) + 2 * ((!hw_dec) as u8) } /** Basic information about a [vk::PhysicalDevice] */ #[derive(Copy, Clone)] pub struct DeviceInfo { physdev: vk::PhysicalDevice, device_id: u64, typ: vk::PhysicalDeviceType, /* If hardware video encoding/decoding is supported; set to false if !with_video */ pub hw_enc_h264: bool, pub hw_dec_h264: bool, pub hw_dec_av1: bool, /** Iff true, device supports import and export of timeline semaphores from/to sync files */ supports_timeline_import_export: bool, /** Iff true, device supports import of binary semaphores from sync files*/ supports_binary_import: bool, } /** List of instance extensions that Waypipe requires */ const INSTANCE_EXTS: &[*const c_char] = &[ vk::KHR_GET_PHYSICAL_DEVICE_PROPERTIES2_NAME.as_ptr(), // needed to link device and DRM node vk::KHR_EXTERNAL_MEMORY_CAPABILITIES_NAME.as_ptr(), // needed for buffer import/export vk::KHR_EXTERNAL_SEMAPHORE_CAPABILITIES_NAME.as_ptr(), // needed to export/poll on timeline semaphores ]; /** List of device extensions that Waypipe requires */ const EXT_LIST: &[(&CStr, u32)] = &[ ( /* Needed to get drm node details */ vk::EXT_PHYSICAL_DEVICE_DRM_NAME, 1, ), ( /* This and dependencies needed to import/export dmabufs */ vk::EXT_IMAGE_DRM_FORMAT_MODIFIER_NAME, 1, ), ( /* Used to bind dmabuf memory */ vk::KHR_BIND_MEMORY2_NAME, 1, ), (vk::KHR_SAMPLER_YCBCR_CONVERSION_NAME, 12), (vk::KHR_IMAGE_FORMAT_LIST_NAME, 1), (vk::KHR_EXTERNAL_MEMORY_NAME, 1), (vk::EXT_EXTERNAL_MEMORY_DMA_BUF_NAME, 1), (vk::KHR_GET_MEMORY_REQUIREMENTS2_NAME, 1), (vk::KHR_EXTERNAL_MEMORY_FD_NAME, 1), (vk::KHR_DEDICATED_ALLOCATION_NAME, 3), (vk::KHR_MAINTENANCE1_NAME, 1), /* For synchronization with the client/compositor, which need * not be Vulkan programs themselves */ (vk::EXT_QUEUE_FAMILY_FOREIGN_NAME, 1), /* For timeline semaphores and explicit sync */ (vk::KHR_TIMELINE_SEMAPHORE_NAME, 2), /* To import/export semaphores to fds, for explicit sync protocols & event loop */ (vk::KHR_EXTERNAL_SEMAPHORE_FD_NAME, 1), /* Needed by external_semaphore_fd */ (vk::KHR_EXTERNAL_SEMAPHORE_NAME, 1), ]; /** List of device extensions required to support hardware video encoding. */ const EXT_LIST_VIDEO_ENC_BASE: &[(&CStr, u32)] = &[( vk::KHR_VIDEO_ENCODE_QUEUE_NAME, vk::KHR_VIDEO_ENCODE_QUEUE_SPEC_VERSION, )]; /** List of device extensions required for hardware video encoding with H.264 */ const EXT_VIDEO_ENC_H264: (&CStr, u32) = ( vk::KHR_VIDEO_ENCODE_H264_NAME, vk::KHR_VIDEO_ENCODE_H264_SPEC_VERSION, ); /** List of device extensions required to support hardware video decoding. 
*/ const EXT_LIST_VIDEO_DEC_BASE: &[(&CStr, u32)] = &[( vk::KHR_VIDEO_DECODE_QUEUE_NAME, vk::KHR_VIDEO_DECODE_QUEUE_SPEC_VERSION, )]; /** List of device extensions required for hardware video decoding with H.264 */ const EXT_VIDEO_DEC_H264: (&CStr, u32) = ( vk::KHR_VIDEO_DECODE_H264_NAME, vk::KHR_VIDEO_DECODE_H264_SPEC_VERSION, ); /** List of device extensions required for hardware video decoding with AV1 */ const EXT_VIDEO_DEC_AV1: (&CStr, u32) = ( vk::KHR_VIDEO_DECODE_AV1_NAME, vk::KHR_VIDEO_DECODE_AV1_SPEC_VERSION, ); /** List of device extensions required at minimum for hardware video * encoding or decoding. * * Require the latest known version for video related extensions, * to be safe, because AVVulkanDeviceContext is only given extension * names and not their versions.*/ const EXT_LIST_VIDEO_BASE: &[(&CStr, u32)] = &[ (vk::KHR_VIDEO_QUEUE_NAME, vk::KHR_VIDEO_QUEUE_SPEC_VERSION), /* Also required, for ffmpeg's vkQueueSubmit2 */ ( vk::KHR_SYNCHRONIZATION2_NAME, vk::KHR_SYNCHRONIZATION2_SPEC_VERSION, ), /* YCbCR support */ ( vk::KHR_SAMPLER_YCBCR_CONVERSION_NAME, vk::KHR_SAMPLER_YCBCR_CONVERSION_SPEC_VERSION, ), /* For ffmpeg encoding */ ( vk::KHR_VIDEO_MAINTENANCE1_NAME, vk::KHR_VIDEO_MAINTENANCE1_SPEC_VERSION, ), ]; /** Create a Vulkan instance and record which devices are acceptable */ pub fn setup_vulkan_instance( debug: bool, video: &VideoSetting, test_no_timeline_export: bool, test_no_binary_import: bool, ) -> Result>, String> { let app_name = CString::new(env!("CARGO_PKG_NAME")).unwrap(); let version: u32 = (env!("CARGO_PKG_VERSION_MAJOR").parse::().unwrap() << 24) | (env!("CARGO_PKG_VERSION_MINOR").parse::().unwrap() << 16); let info = vk::ApplicationInfo::default() .application_name(&app_name) .application_version(version) .engine_name(c"waypipe") .engine_version(0) .api_version( if video.dec_pref != Some(CodecPreference::SW) || video.enc_pref != Some(CodecPreference::SW) { // TODO: get best API version available, and turn off video enc/decoding if not? vk::make_api_version(0, 1, 3, 0) } else { vk::make_api_version(0, 1, 0, 0) }, ); let mut create = vk::InstanceCreateInfo::default() .application_info(&info) .enabled_extension_names(INSTANCE_EXTS) .flags(vk::InstanceCreateFlags::default()); let validation = c"VK_LAYER_KHRONOS_validation"; // let dump = c"VK_LAYER_LUNARG_api_dump"; let debug_layers = &[validation.as_ptr()]; unsafe { let entry = Entry::load().map_err(|x| tag!("Failed to load Vulkan library: {:?}", x))?; if debug { /* Only use validation layers with --debug flag if supported */ let has_validation = entry .enumerate_instance_layer_properties() .map_err(|x| tag!("Failed to get vulkan layer properties: {:?}", x))? .iter() .any(|layer| CStr::from_ptr(layer.layer_name.as_ptr()) == validation); if has_validation { create = create.enabled_layer_names(debug_layers); } } let instance: Instance = match entry.create_instance(&create, None) { Err(x) => { /* The set of available extensions and layers can (in theory) change at runtime, * so delay extension checks until now. 
*/ if x == vk::Result::ERROR_EXTENSION_NOT_PRESENT { debug!("Vulkan instance does not support all required instance extensions"); return Ok(None); } return Err(tag!("Failed to create Vulkan instance: {}", x)); } Ok(i) => i, }; /* Note: initial enumeration can be expensive since some details may be loaded * even for devices that are not needed */ let devices = instance .enumerate_physical_devices() .map_err(|x| tag!("Failed to get physical devices: {:?}", x))?; let mut physdevs = Vec::new(); for p in devices { let exts = instance .enumerate_device_extension_properties(p) .map_err(|x| tag!("Failed to enumerate device extensions: {:?}", x))?; let mut drm_prop = vk::PhysicalDeviceDrmPropertiesEXT::default(); let mut prop = vk::PhysicalDeviceProperties2::default(); let has_drm_name = exts_has_prop( &exts, vk::EXT_PHYSICAL_DEVICE_DRM_NAME, vk::EXT_PHYSICAL_DEVICE_DRM_SPEC_VERSION, ); if has_drm_name { prop = prop.push_next(&mut drm_prop); } instance.get_physical_device_properties2(p, &mut prop); let dev_type = prop.properties.device_type; debug!( "Physical device: {}", prop.properties .device_name_as_c_str() .unwrap() .to_str() .unwrap() ); debug!( "API {}.{}.{}/{} driver {:#x} vendor {:#x} device {:#x} type {:?}", vk::api_version_major(prop.properties.api_version), vk::api_version_minor(prop.properties.api_version), vk::api_version_patch(prop.properties.api_version), vk::api_version_variant(prop.properties.api_version), prop.properties.driver_version, prop.properties.vendor_id, prop.properties.device_id, prop.properties.device_type ); if debug { if has_drm_name { let primary = if drm_prop.has_primary != 0 { format!("{}.{}", drm_prop.primary_major, drm_prop.primary_minor) } else { String::from("none") }; let render = if drm_prop.has_render != 0 { format!("{}.{}", drm_prop.primary_major, drm_prop.primary_minor) } else { String::from("none") }; debug!("DRM: primary: {} render: {}", primary, render); } fn list_missing( specs: &[(&CStr, u32)], exts: &[vk::ExtensionProperties], ) -> Vec { specs .iter() .filter_map(|spec| { if !exts_has_prop(exts, spec.0, spec.1) { Some(format!("{}:{}", spec.0.to_str().unwrap(), spec.1)) } else { None } }) .collect() } debug!( "Baseline extensions: missing {:?}", list_missing(EXT_LIST, &exts) ); debug!( "Video base extensions: missing {:?}", list_missing(EXT_LIST_VIDEO_BASE, &exts) ); let mut video_enc_list = Vec::new(); video_enc_list.extend_from_slice(EXT_LIST_VIDEO_ENC_BASE); video_enc_list.push(EXT_VIDEO_ENC_H264); debug!( "Video enc extensions: missing {:?}", list_missing(&video_enc_list, &exts) ); let mut video_dec_list = Vec::new(); video_dec_list.extend_from_slice(EXT_LIST_VIDEO_DEC_BASE); video_dec_list.push(EXT_VIDEO_DEC_H264); video_dec_list.push(EXT_VIDEO_DEC_AV1); debug!( "Video dec extensions: missing {:?}", list_missing(&video_dec_list, &exts) ); } let all_present = EXT_LIST .iter() .all(|(name, version)| exts_has_prop(&exts, name, *version)); if !all_present { continue; } let mut hw_enc_h264 = false; let mut hw_dec_h264 = false; let mut hw_dec_av1 = false; if video.format.is_some() && EXT_LIST_VIDEO_BASE .iter() .all(|(name, version)| exts_has_prop(&exts, name, *version)) { // TODO: first verify that libavcodec has the appropriate encoders/decoders available, if possible if video.dec_pref != Some(CodecPreference::SW) && EXT_LIST_VIDEO_DEC_BASE .iter() .all(|(name, version)| exts_has_prop(&exts, name, *version)) { if exts_has_prop(&exts, EXT_VIDEO_DEC_H264.0, EXT_VIDEO_DEC_H264.1) { hw_dec_h264 = true; } if exts_has_prop(&exts, EXT_VIDEO_DEC_AV1.0, 
EXT_VIDEO_DEC_AV1.1) { hw_dec_av1 = true; } } if video.enc_pref != Some(CodecPreference::SW) && EXT_LIST_VIDEO_ENC_BASE .iter() .all(|(name, version)| exts_has_prop(&exts, name, *version)) { if exts_has_prop(&exts, EXT_VIDEO_ENC_H264.0, EXT_VIDEO_ENC_H264.1) { hw_enc_h264 = true; } } } let mut timeline_semaphore_info = vk::SemaphoreTypeCreateInfoKHR::default() .semaphore_type(vk::SemaphoreType::TIMELINE) .initial_value(0); /* TODO: should not SYNC_FD be supported and preferable? */ let ext_semaphore_req = vk::PhysicalDeviceExternalSemaphoreInfo::default() .handle_type(vk::ExternalSemaphoreHandleTypeFlags::OPAQUE_FD) .push_next(&mut timeline_semaphore_info); let mut ext_semaphore_info = vk::ExternalSemaphoreProperties::default(); instance.get_physical_device_external_semaphore_properties( p, &ext_semaphore_req, &mut ext_semaphore_info, ); let mut supports_timeline_import_export = ext_semaphore_info.external_semaphore_features.contains( vk::ExternalSemaphoreFeatureFlags::IMPORTABLE_KHR | vk::ExternalSemaphoreFeatureFlags::EXPORTABLE_KHR, ); let mut binary_semaphore_info = vk::SemaphoreTypeCreateInfoKHR::default() .semaphore_type(vk::SemaphoreType::BINARY) .initial_value(0); let ext_binary_semaphore_req = vk::PhysicalDeviceExternalSemaphoreInfo::default() .handle_type(vk::ExternalSemaphoreHandleTypeFlags::SYNC_FD) .push_next(&mut binary_semaphore_info); let mut ext_binary_semaphore_info = vk::ExternalSemaphoreProperties::default(); instance.get_physical_device_external_semaphore_properties( p, &ext_binary_semaphore_req, &mut ext_binary_semaphore_info, ); let mut supports_binary_import = ext_binary_semaphore_info .external_semaphore_features .contains(vk::ExternalSemaphoreFeatureFlags::IMPORTABLE_KHR); if cfg!(target_os = "freebsd") { /* FreeBSD does not (as of writing) support the EXPORT_SYNC_FILE ioctl, and * thus cannot, even if binary semaphores can be imported, extract a sync file * from a dmabuf; with implicitly synchronized applications it will need to * fall back to polling the dmabuf. This may change in the future. * * It also does not currently support DRM_IOCTL_SYNCOBJ_EVENTFD, needed to * use exported timeline semaphores by linking them to eventfds. * * TODO: introduce a runtime check; and/or move to finer-grained feature * detection. Old Linux and other kernels also do not support these ioctls. 
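 * A runtime probe could, for instance, attempt the export on a freshly
 * created dmabuf and treat failure as "unsupported". A rough, hypothetical
 * sketch of the shape such a check might take:
 *
 *     fn probe_sync_file_export(dmabuf_fd: &OwnedFd) -> bool {
 *         // hypothetical: Ok(None) corresponds to EINVAL from an old
 *         // kernel, i.e. sync file export is not supported
 *         matches!(dmabuf_sync_file_export(dmabuf_fd), Ok(Some(_)))
 *     }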
*/ debug!("No EXPORT_SYNC_FILE, disabling binary semaphore import/export"); supports_binary_import = false; supports_timeline_import_export = false; } debug!( "Timeline semaphore import/export: {}; binary semaphore import: {}", fmt_bool(supports_timeline_import_export), fmt_bool(supports_binary_import) ); if test_no_timeline_export { supports_timeline_import_export = false; debug!("Test override: timeline semaphore import/export disabled"); } if test_no_binary_import { supports_binary_import = false; debug!("Test override: binary semaphore import disabled"); } let render_id = if drm_prop.has_render != 0 { Some(((drm_prop.render_major as u64) << 8) | (drm_prop.render_minor as u64)) } else { None }; let primary_id = if drm_prop.has_primary != 0 { Some(((drm_prop.primary_major as u64) << 8) | (drm_prop.primary_minor as u64)) } else { None }; /* Some device_id is needed for the Wayland application to use */ let Some(device_id) = render_id.or(primary_id) else { continue; }; physdevs.push(DeviceInfo { physdev: p, device_id, typ: dev_type, hw_enc_h264, hw_dec_h264, hw_dec_av1, supports_binary_import, supports_timeline_import_export, }) } Ok(Some(Arc::new(VulkanInstance { entry, instance, physdevs, }))) } } impl VulkanInstance { /** Return true if Vulkan has a physical device available that Waypipe can use */ pub fn has_device(&self, main_device: Option) -> bool { self.pick_device(main_device).is_some() } /** Return true if the device exists and will support timeline semaphore import and export */ pub fn device_supports_timeline_import_export(&self, main_device: Option) -> bool { self.pick_device(main_device) .is_some_and(|d| d.supports_timeline_import_export) } /** If `main_device` is None, provide device info for the "best" available * device; otherwise, for the device with the specified ID, if available. */ fn pick_device(&self, main_device: Option) -> Option<&DeviceInfo> { if let Some(d) = main_device { for x in &self.physdevs { if x.device_id == d { return Some(x); } } None } else { let mut best_device: Option<&DeviceInfo> = None; for x in &self.physdevs { if let Some(cur) = best_device { if device_rank(x) < device_rank(cur) { best_device = Some(x); } } else { best_device = Some(x) } } best_device } } } /** Return a vector listing all extensions that should be enabled for * the device. (Pointers are constants / from &'static CStr.) 
*/ fn get_enabled_exts(dev_info: &DeviceInfo) -> Vec<*const c_char> { let mut enabled_exts: Vec<*const c_char> = Vec::new(); // TODO: use a static (stack) array instead, since max number of extensions is small enabled_exts.extend(EXT_LIST.iter().map(|(name, _)| name.as_ptr())); if dev_info.hw_enc_h264 || dev_info.hw_dec_h264 || dev_info.hw_dec_av1 { enabled_exts.extend(EXT_LIST_VIDEO_BASE.iter().map(|(name, _)| name.as_ptr())); } if dev_info.hw_enc_h264 { enabled_exts.extend( EXT_LIST_VIDEO_ENC_BASE .iter() .map(|(name, _)| name.as_ptr()), ); } if dev_info.hw_enc_h264 { enabled_exts.push(EXT_VIDEO_ENC_H264.0.as_ptr()); } if dev_info.hw_dec_h264 | dev_info.hw_dec_av1 { enabled_exts.extend( EXT_LIST_VIDEO_DEC_BASE .iter() .map(|(name, _)| name.as_ptr()), ); } if dev_info.hw_dec_h264 { enabled_exts.push(EXT_VIDEO_DEC_H264.0.as_ptr()); } if dev_info.hw_dec_av1 { enabled_exts.push(EXT_VIDEO_DEC_AV1.0.as_ptr()); } enabled_exts } /** Set up a physical device */ pub fn setup_vulkan_device_base( instance: &Arc, main_device: Option, format_filter_for_video: bool, ) -> Result, String> { let Some(dev_info) = instance.pick_device(main_device) else { if let Some(d) = main_device { error!("Failed to find a Vulkan physical device with device id {}, or it does not meet all requirements.", d); } else { error!("Failed to find any Vulkan physical device meeting all requirements."); } return Ok(None); }; debug!( "Chose physical device with device id: {}", dev_info.device_id ); let physdev = dev_info.physdev; let using_hw_video = dev_info.hw_enc_h264 | dev_info.hw_dec_h264 | dev_info.hw_dec_av1; unsafe { let memory_properties = instance .instance .get_physical_device_memory_properties(physdev); let queue_families = instance .instance .get_physical_device_queue_family_properties(physdev); let mut qfis = [u32::MAX, u32::MAX, u32::MAX, u32::MAX]; let mut nqis = [0, 0, 0, 0]; for (u, family) in queue_families.iter().enumerate().rev() { let i: u32 = u.try_into().unwrap(); if family .queue_flags .contains(vk::QueueFlags::COMPUTE | vk::QueueFlags::TRANSFER) { qfis[0] = i; nqis[0] = family.queue_count; } if family .queue_flags .contains(vk::QueueFlags::GRAPHICS | vk::QueueFlags::TRANSFER) { qfis[1] = i; nqis[1] = family.queue_count; } if family .queue_flags .contains(vk::QueueFlags::VIDEO_ENCODE_KHR) { qfis[2] = i; nqis[2] = family.queue_count; } if family .queue_flags .contains(vk::QueueFlags::VIDEO_DECODE_KHR) { qfis[3] = i; nqis[3] = family.queue_count; } } let queue_family = qfis[0]; let prio = &[1.0]; // make a single queue let cg_queue = qfis[0] == qfis[1]; let nqf = if using_hw_video { if qfis.contains(&u32::MAX) { return Err(tag!("Not all queue types needed available: compute {} graphics {} encode {} decode {}", qfis[0], qfis[1], qfis[2], qfis[3])); } if cg_queue { 3 } else { 4 } } else { 1 }; let qstart = if cg_queue { 1 } else { 0 }; let chosen_queues = [ vk::DeviceQueueCreateInfo::default() .queue_family_index(qfis[0]) .queue_priorities(prio), vk::DeviceQueueCreateInfo::default() .queue_family_index(qfis[1]) .queue_priorities(prio), vk::DeviceQueueCreateInfo::default() .queue_family_index(qfis[2]) .queue_priorities(prio), vk::DeviceQueueCreateInfo::default() .queue_family_index(qfis[3]) .queue_priorities(prio), ]; let enabled_exts = get_enabled_exts(dev_info); let mut featuresv11 = vk::PhysicalDeviceVulkan11Features::default().sampler_ycbcr_conversion(true); let mut featuresv13 = vk::PhysicalDeviceVulkan13Features::default().synchronization2(true); let mut feature_vid1 = 
vk::PhysicalDeviceVideoMaintenance1FeaturesKHR::default().video_maintenance1(true); let mut features2x = vk::PhysicalDeviceTimelineSemaphoreFeatures::default().timeline_semaphore(true); let mut features2 = vk::PhysicalDeviceFeatures2::default().push_next(&mut features2x); let mut logical_info = vk::DeviceCreateInfo::default() .flags(vk::DeviceCreateFlags::empty()) .queue_create_infos(&chosen_queues[qstart..qstart + nqf]) .enabled_extension_names(&enabled_exts) .push_next(&mut features2); if using_hw_video { logical_info = logical_info .push_next(&mut featuresv11) .push_next(&mut featuresv13) .push_next(&mut feature_vid1); } let dev = match instance .instance .create_device(physdev, &logical_info, None) { Ok(x) => x, Err(x) => { // TODO: cleanup more return Err(tag!("Failed to create logical device: {}", x)); } }; let queue = dev.get_device_queue(queue_family, 0); let get_modifier = ext::image_drm_format_modifier::Device::new(&instance.instance, &dev); let get_mem_reqs2 = khr::get_memory_requirements2::Device::new(&instance.instance, &dev); let bind_mem2 = khr::bind_memory2::Device::new(&instance.instance, &dev); let ext_mem_fd = khr::external_memory_fd::Device::new(&instance.instance, &dev); let timeline_semaphore = khr::timeline_semaphore::Device::new(&instance.instance, &dev); let ext_semaphore_fd = khr::external_semaphore_fd::Device::new(&instance.instance, &dev); let mut formats = BTreeMap::::new(); for f in SUPPORTED_FORMAT_LIST { // note: VkFormatProperties3KHR exists, providing more bits for flags let mut format_drm_props = vk::DrmFormatModifierPropertiesListEXT::default(); let mut format_props = vk::FormatProperties2::default().push_next(&mut format_drm_props); instance.instance.get_physical_device_format_properties2( physdev, *f, &mut format_props, ); if format_drm_props.drm_format_modifier_count == 0 { /* No associated modifiers / format not supported for import/export */ continue; } let mut dst = Vec::new(); dst.resize_with(format_drm_props.drm_format_modifier_count as usize, || { vk::DrmFormatModifierPropertiesEXT::default() }); format_drm_props = format_drm_props.drm_format_modifier_properties(&mut dst); let mut format_props = vk::FormatProperties2::default().push_next(&mut format_drm_props); instance.instance.get_physical_device_format_properties2( physdev, *f, &mut format_props, ); let info = get_vulkan_info(*f); let mut mod_list: Vec = Vec::new(); let mut mod_data_list: Vec = Vec::new(); for m in dst.iter() { /* YUV formats are only fully supported if one can create and import disjoint planes */ if info.planes > 1 && !m .drm_format_modifier_tiling_features .contains(vk::FormatFeatureFlags::DISJOINT) { continue; } let base_feature = vk::FormatFeatureFlags::TRANSFER_SRC | vk::FormatFeatureFlags::TRANSFER_DST; let base_usage = vk::ImageUsageFlags::TRANSFER_SRC | vk::ImageUsageFlags::TRANSFER_DST; if !m.drm_format_modifier_tiling_features.contains(base_feature) { continue; } let Some(max_size_transfer) = get_max_external_image_size( &instance.instance, physdev, queue_family, *f, m.drm_format_modifier, base_usage, )? else { /* Specific combination of format/modifier/usage/queue not supported. 
*/ continue; }; let store_feature = vk::FormatFeatureFlags::TRANSFER_SRC | vk::FormatFeatureFlags::TRANSFER_DST | vk::FormatFeatureFlags::STORAGE_IMAGE | vk::FormatFeatureFlags::SAMPLED_IMAGE; let store_usage = vk::ImageUsageFlags::TRANSFER_SRC | vk::ImageUsageFlags::TRANSFER_DST | vk::ImageUsageFlags::STORAGE | vk::ImageUsageFlags::SAMPLED; let max_size_store_and_sample = if m .drm_format_modifier_tiling_features .contains(store_feature) { get_max_external_image_size( &instance.instance, physdev, queue_family, *f, m.drm_format_modifier, store_usage, )? } else { None }; mod_list.push(m.drm_format_modifier); mod_data_list.push(ModifierData { plane_count: m.drm_format_modifier_plane_count, max_size_transfer, max_size_store_and_sample, }); } if format_filter_for_video { // todo: only restrict modifiers when the format is usable for video? // (in general, if a format supports video encoding, clients should preferably use it.) // Alternatively, a preference for video-encodable formats could be made part of dmabuf-feedback, // and/or an intermediate storage image could be added to allow video encoding for modifiers // which do no support storage or only support small sizes for i in (0..mod_list.len()).rev() { /* Iterating in reverse order ensures each entry is considered exactly once */ if mod_data_list[i].max_size_store_and_sample.is_none() { mod_list.remove(i); mod_data_list.remove(i); } } } if mod_list.is_empty() { continue; } formats.insert( *f, FormatData { modifiers: mod_list, modifier_data: mod_data_list, }, ); } if log::log_enabled!(log::Level::Debug) { /* Print the entire set of formats/modifiers that Waypipe supports, along with the * properties of them that Waypipe uses, compactly. */ let mut entries: Vec<(ModifierData, vk::Format, u64)> = Vec::new(); for f in formats.iter() { for (m, data) in f.1.modifiers.iter().zip(f.1.modifier_data.iter()) { entries.push((data.clone(), *f.0, *m)); } } entries.sort_unstable(); let mut primary_segments: Vec = vec![0]; for i in 1..entries.len() { if entries[i].0 != entries[i - 1].0 { primary_segments.push(i); } } debug!("Format/modifier type classes: {}", primary_segments.len()); primary_segments.push(entries.len()); for w in primary_segments.windows(2) { let segment = &entries[w[0]..w[1]]; debug!( "{} pairs with: planes: {} max tx size: {:?}, max vid size: {:?}", segment.len(), segment[0].0.plane_count, segment[0].0.max_size_transfer, segment[0].0.max_size_store_and_sample ); let mut fmt_segment = vec![0]; for i in 1..segment.len() { if segment[i].1 != segment[i - 1].1 { fmt_segment.push(i); } } fmt_segment.push(segment.len()); for v in fmt_segment.windows(2) { let mods: Vec = segment[v[0]..v[1]].iter().map(|x| x.2).collect(); debug!("- format: {:?}, modifiers: 0x{:x?}", segment[v[0]].1, mods); } } } let init_sem_value = 0; let drm_fd = drm_open_render(dev_info.device_id, false)?; let (semaphore, semaphore_external) = if dev_info.supports_timeline_import_export { let (semaphore, semaphore_drm_handle, semaphore_fd, semaphore_event_fd) = vulkan_create_timeline_parts(&dev, &ext_semaphore_fd, &drm_fd, init_sem_value)?; drop(semaphore_fd); ( semaphore, Some(VulkanExternalTimelineSemaphore { drm_handle: semaphore_drm_handle, event_fd: semaphore_event_fd, }), ) } else { let semaphore = vulkan_create_simple_timeline(&dev, init_sem_value)?; (semaphore, None) }; Ok(Some(VulkanDevice { _instance: instance.clone(), dev_info: *dev_info, qfis, queue: Mutex::new(VulkanQueue { queue, last_semaphore_value: init_sem_value, }), #[cfg(feature = "video")] video: None, 
dev, drm_fd, semaphore, semaphore_external, get_modifier, get_mem_reqs2, bind_mem2, ext_mem_fd, memory_properties, timeline_semaphore, ext_semaphore_fd, device_id: dev_info.device_id, formats, queue_family, })) } } /** Set up a physical device, including video encoding/decoding setup */ pub fn setup_vulkan_device( instance: &Arc, main_device: Option, video: &VideoSetting, debug: bool, ) -> Result>, String> { let Some(mut dev) = setup_vulkan_device_base(instance, main_device, video.format.is_some())? else { return Ok(None); }; #[cfg(feature = "video")] { let enabled_exts = get_enabled_exts(&dev.dev_info); dev.video = if video.format.is_some() { unsafe { setup_video( &instance.entry, &instance.instance, &dev.dev_info.physdev, &dev.dev, &dev.dev_info, debug, dev.qfis, &enabled_exts, INSTANCE_EXTS, )? } } else { None }; } Ok(Some(Arc::new(dev))) } /** Get the first available memory type in `bitmask` with the given property flags.*/ fn vulkan_get_memory_type_index( info: &VulkanDevice, bitmask: u32, flags: vk::MemoryPropertyFlags, ) -> Option { for (i, t) in info.memory_properties.memory_types [..(info.memory_properties.memory_type_count as usize)] .iter() .enumerate() { if t.property_flags.contains(flags) && (bitmask & (1u32 << i)) != 0 { return Some(i as u32); } } None } /** Image memory barrier for use when transferring image to the current queue from the FOREIGN queue. * * The access range is COLOR for the single level/layer of the entire image. */ pub fn qfot_acquire_image_memory_barrier( image: vk::Image, old_layout: vk::ImageLayout, new_layout: vk::ImageLayout, queue_family_idx: u32, dst_access_mask: vk::AccessFlags, ) -> vk::ImageMemoryBarrier<'static> { let standard_access_range = vk::ImageSubresourceRange::default() .aspect_mask(vk::ImageAspectFlags::COLOR) .level_count(1) .layer_count(1); vk::ImageMemoryBarrier::default() .image(image) .old_layout(old_layout) .new_layout(new_layout) .src_queue_family_index(vk::QUEUE_FAMILY_FOREIGN_EXT) .dst_queue_family_index(queue_family_idx) /* queue family transfer acquire = srcAccessMask ignored, zero value recommended */ .src_access_mask(vk::AccessFlags::empty()) .dst_access_mask(dst_access_mask) .subresource_range(standard_access_range) } /** Image memory barrier for use when transferring image from the current queue to the FOREIGN queue. * * The access range is COLOR for the single level/layer of the entire image. */ pub fn qfot_release_image_memory_barrier( image: vk::Image, old_layout: vk::ImageLayout, new_layout: vk::ImageLayout, queue_family_idx: u32, src_access_mask: vk::AccessFlags, ) -> vk::ImageMemoryBarrier<'static> { let standard_access_range = vk::ImageSubresourceRange::default() .aspect_mask(vk::ImageAspectFlags::COLOR) .level_count(1) .layer_count(1); vk::ImageMemoryBarrier::default() .image(image) .old_layout(old_layout) .new_layout(new_layout) .src_queue_family_index(queue_family_idx) .dst_queue_family_index(vk::QUEUE_FAMILY_FOREIGN_EXT) /* queue family transfer release = dstAccessMask ignored, zero value recommended */ .src_access_mask(src_access_mask) .dst_access_mask(vk::AccessFlags::empty()) .subresource_range(standard_access_range) } /** Image aspect flags for the `x`th memory plane. */ fn memory_plane(x: usize) -> vk::ImageAspectFlags { match x { 0 => vk::ImageAspectFlags::MEMORY_PLANE_0_EXT, 1 => vk::ImageAspectFlags::MEMORY_PLANE_1_EXT, 2 => vk::ImageAspectFlags::MEMORY_PLANE_2_EXT, 3 => vk::ImageAspectFlags::MEMORY_PLANE_3_EXT, _ => panic!("Out of bounds"), } } /** Create a CPU-visible buffer. 
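 *
 * Returns the buffer, its backing device memory, and the actual allocation
 * size (which may exceed the requested `size`). A minimal usage sketch:
 *
 * ```ignore
 * let (buf, mem, len) = create_cpu_visible_buffer(&vulk, 4096, false)?;
 * // map `mem`, write up to 4096 bytes, flush, then use `buf` as a copy source
 * ```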
* * If `read_optimized` is true, the buffer _may_ be allocated in a fashion allowing * for more efficient reads than otherwise. */ fn create_cpu_visible_buffer( vulk: &VulkanDevice, size: usize, read_optimized: bool, ) -> Result<(vk::Buffer, vk::DeviceMemory, u64), String> { let buf_create = vk::BufferCreateInfo::default() .size(size as u64) .sharing_mode(vk::SharingMode::EXCLUSIVE) .flags(vk::BufferCreateFlags::empty()) .usage( vk::BufferUsageFlags::TRANSFER_SRC | vk::BufferUsageFlags::TRANSFER_DST | vk::BufferUsageFlags::STORAGE_TEXEL_BUFFER | vk::BufferUsageFlags::UNIFORM_TEXEL_BUFFER, ); unsafe { let buffer = vulk .dev .create_buffer(&buf_create, None) .map_err(|_| "Failed to create buffer")?; let memreq = vulk.dev.get_buffer_memory_requirements(buffer); assert!(memreq.size >= size as u64); /* note: not asking for HOST_COHERENT, so memory must be explicitly flushed or invalidated */ let Some(mut mem_index) = vulkan_get_memory_type_index( vulk, memreq.memory_type_bits, vk::MemoryPropertyFlags::HOST_VISIBLE, ) else { return Err(tag!( "No acceptable host visible memory type index for buffer" )); }; if read_optimized { /* HOST_CACHED sometimes allows for much faster processing from CPU side; * without it, individual read movs can become very slow. This isn't needed * when only writing because write combination is typically used, and * the write pattern is sequential. However, Vulkan devices are not * guaranteed to have this: they only need to have DEVICE_LOCAL and * HOST_VISIBLE|HOST_COHERENT types. */ if let Some(cached_mem_index) = vulkan_get_memory_type_index( vulk, memreq.memory_type_bits, vk::MemoryPropertyFlags::HOST_VISIBLE | vk::MemoryPropertyFlags::HOST_CACHED, ) { mem_index = cached_mem_index; } } let alloc_info = vk::MemoryAllocateInfo::default() .allocation_size(memreq.size) .memory_type_index(mem_index); /* note: allocate_memory output will at least as aligned as memreq.alignment */ let mem = vulk .dev .allocate_memory(&alloc_info, None) .map_err(|_| "Failed to allocate memory for buffer")?; vulk.dev .bind_buffer_memory(buffer, mem, 0) .map_err(|_| "Failed to bind memory to buffer")?; Ok((buffer, mem, memreq.size)) } } /** Get a CPU-visible buffer of at least the specified length. * * If `read_optimized` is true, the buffer _may_ be allocated in a fashion allowing * for more efficient reads than otherwise. */ pub fn vulkan_get_buffer( vulk: &Arc, nom_len: usize, read_optimized: bool, ) -> Result { let (buffer, mem, len) = create_cpu_visible_buffer(vulk, nom_len, read_optimized)?; unsafe { let data: *mut c_void = vulk .dev .map_memory(mem, 0, len, vk::MemoryMapFlags::empty()) .map_err(|_| "Failed to map memory")?; // TODO: proper error handling; need to clean everything up Ok(VulkanBuffer { vulk: vulk.clone(), buffer, mem, memory_len: len, buffer_len: nom_len as u64, inner: Mutex::new(VulkanBufferInner { data, reader_count: 0, has_writer: false, }), }) } } /** Create a new command pool. */ pub fn vulkan_get_cmd_pool(vulk: &Arc) -> Result, String> { let pool_info = vk::CommandPoolCreateInfo::default() .queue_family_index(vulk.queue_family) .flags(vk::CommandPoolCreateFlags::empty()); // todo: transient? reset? let pool = unsafe { vulk.dev .create_command_pool(&pool_info, None) .map_err(|_| "Failed to create command pool")? }; Ok(Arc::new(VulkanCommandPool { vulk: vulk.clone(), pool: Mutex::new(pool), })) } /** A & view of a [VulkanBuffer]'s mapped contents. 
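 *
 * Typical read path (sketch; `prepare_read` invalidates the mapped range so
 * that device writes become visible before `data` is inspected):
 *
 * ```ignore
 * buffer.prepare_read()?;
 * let view = buffer.get_read_view();
 * process(view.data); // `process` stands in for caller logic
 * drop(view); // decrements the reader count
 * ```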
 */
pub struct VulkanBufferReadView<'a> {
    buffer: &'a VulkanBuffer,
    pub data: &'a [u8],
}

/** A &mut view of a [VulkanBuffer]'s mapped contents. */
pub struct VulkanBufferWriteView<'a> {
    buffer: &'a VulkanBuffer,
    pub data: &'a mut [u8],
}

impl VulkanBuffer {
    pub fn prepare_read(self: &VulkanBuffer) -> Result<(), String> {
        unsafe {
            let ranges = &[vk::MappedMemoryRange::default()
                .offset(0)
                .size(self.memory_len)
                .memory(self.mem)];
            self.vulk
                .dev
                .invalidate_mapped_memory_ranges(ranges)
                .map_err(|_| "Failed to invalidate mapped memory range")?;
        }
        Ok(())
    }
    pub fn complete_write(self: &VulkanBuffer) -> Result<(), String> {
        unsafe {
            let ranges = &[vk::MappedMemoryRange::default()
                .offset(0)
                .size(self.memory_len)
                .memory(self.mem)];
            self.vulk
                .dev
                .flush_mapped_memory_ranges(ranges)
                .map_err(|_| "Failed to flush mapped memory range")?;
        }
        Ok(())
    }
    pub fn get_read_view<'a>(self: &'a VulkanBuffer) -> VulkanBufferReadView<'a> {
        let mut inner = self.inner.lock().unwrap();
        let dst = slice_from_raw_parts(inner.data as *const u8, self.buffer_len as usize);
        assert!(!inner.has_writer);
        inner.reader_count += 1;
        unsafe {
            // SAFETY: the mapping is valid for `buffer_len` bytes as long as `self`
            // is alive, and the reader/writer counts keep out concurrent writers
            VulkanBufferReadView {
                buffer: self,
                data: &*dst,
            }
        }
    }
    pub fn get_write_view<'a>(self: &'a VulkanBuffer) -> VulkanBufferWriteView<'a> {
        let mut inner = self.inner.lock().unwrap();
        let dst = slice_from_raw_parts_mut(inner.data as *mut u8, self.buffer_len as usize);
        assert!(inner.reader_count == 0);
        inner.has_writer = true;
        unsafe {
            // SAFETY: the mapping is valid for `buffer_len` bytes as long as `self`
            // is alive, and the reader/writer counts make this the only view
            VulkanBufferWriteView {
                buffer: self,
                data: &mut *dst,
            }
        }
    }
}

impl Drop for VulkanBufferReadView<'_> {
    fn drop(&mut self) {
        self.buffer.inner.lock().unwrap().reader_count -= 1;
    }
}
impl Drop for VulkanBufferWriteView<'_> {
    fn drop(&mut self) {
        self.buffer.inner.lock().unwrap().has_writer = false;
    }
}

/** Import a [VulkanDmabuf] with specified dimensions and format
 * from the given list of planes. */
pub fn vulkan_import_dmabuf(
    vulk: &Arc<VulkanDevice>,
    planes: Vec<AddDmabufPlane>, // takes ownership, consumes fd. TODO: proper cleanup if this fails early
    width: u32,
    height: u32,
    drm_format: u32,
    can_store_and_sample: bool,
) -> Result<Arc<VulkanDmabuf>, String> {
    let vk_format = drm_to_vulkan(drm_format)
        .ok_or_else(|| tag!("Did not find matching Vulkan format for {}", drm_format))?;
    let format_info = get_vulkan_info(vk_format);
    // note: we should still handle planes together, if only because it will be
    // needed for video work
    let mut layout = Vec::new(); // todo: any 'size-limited-vec-from-array' type to use?
    let mut plane_perm: Vec<usize> = (0..planes.len()).collect();
    plane_perm.sort_by_key(|i| planes[*i].plane_idx);
    let modifier = planes[0].modifier;
    /* post linux-dmabuf version 5, all planes must have same modifier */
    assert!(planes.iter().all(|x| x.modifier == modifier));
    /* Check that all planes are represented */
    assert!(plane_perm
        .iter()
        .enumerate()
        .all(|(i, x)| planes[*x].plane_idx == i as u32));
    let mod_index = vulk.formats[&vk_format]
        .modifiers
        .iter()
        .position(|x| *x == modifier)
        .unwrap();
    let mod_data = &vulk.formats[&vk_format].modifier_data[mod_index];
    let max_size = if can_store_and_sample {
        mod_data.max_size_store_and_sample.unwrap()
    } else {
        mod_data.max_size_transfer
    };
    assert!(width <= max_size.0 && height <= max_size.1); // todo: invert?
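    /* Illustrative example of the permutation above: for a hypothetical
     * 2-plane NV12 import whose planes arrive out of order with
     * plane_idx = [1, 0], plane_perm sorts to [1, 0], so that
     * planes[plane_perm[k]].plane_idx == k and the subresource layouts
     * below are filled in plane order. */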
for j in plane_perm.iter() { layout.push(vk::SubresourceLayout { offset: planes[*j].offset as u64, row_pitch: planes[*j].stride as u64, size: 0, /* required to be zero per VK_EXT_image_drm_format_modifier */ array_pitch: 0, /* not relevant for 2d image */ depth_pitch: 0, /* not relevant for 2d image */ }); } let main_fd = planes .first() .unwrap() .fd .try_clone() .map_err(|x| tag!("Failed to clone dmabuf fd: {}", x))?; let mut modifier_info = vk::ImageDrmFormatModifierExplicitCreateInfoEXT::default() .plane_layouts(&layout) .drm_format_modifier(modifier); let mut ext_create_info = vk::ExternalMemoryImageCreateInfo::default() .handle_types(vk::ExternalMemoryHandleTypeFlags::DMA_BUF_EXT); let usage_bits = if can_store_and_sample { vk::ImageUsageFlags::TRANSFER_SRC | vk::ImageUsageFlags::TRANSFER_DST | vk::ImageUsageFlags::STORAGE | vk::ImageUsageFlags::SAMPLED } else { vk::ImageUsageFlags::TRANSFER_SRC | vk::ImageUsageFlags::TRANSFER_DST }; /* Import the DMABUF with UNDEFINED layout, but thereafter assume GENERAL layout * (so that no layout transition is done). This is needed to avoid doing an * UNDEFINED->GENERAL layout transition that drops CCS plane contents and * corrupts the image. This may not be strictly per Vulkan spec but seems to * be the best solution; Gamescope and wlroots have done the same thing. * Future spec updates may resolve the issue. */ let import_layout = vk::ImageLayout::UNDEFINED; let init_layout = vk::ImageLayout::GENERAL; let image_info = vk::ImageCreateInfo::default() .flags(if format_info.planes > 1 { vk::ImageCreateFlags::DISJOINT } else { vk::ImageCreateFlags::empty() }) .image_type(vk::ImageType::TYPE_2D) .format(vk_format) .extent(vk::Extent3D { width, height, depth: 1, }) .mip_levels(1) .array_layers(1) .samples(vk::SampleCountFlags::TYPE_1) .tiling(vk::ImageTiling::DRM_FORMAT_MODIFIER_EXT) .usage(usage_bits) .sharing_mode(vk::SharingMode::EXCLUSIVE) // only one queue family may use this .initial_layout(import_layout) .push_next(&mut ext_create_info) .push_next(&mut modifier_info); unsafe { let image = vulk.dev.create_image(&image_info, None).map_err(|x| { tag!( "Failed to create Vulkan image when importing dmabuf: {:?}", x ) })?; if format_info.planes > 1 { /* Unclear how to handle hypothetical multi-planar formats with metadata memory planes */ assert!(planes.len() == format_info.planes); } // TODO: handle non-disjoint YUV let nbindplanes = format_info.planes; let fds: Vec = planes.into_iter().map(|x| x.fd).take(nbindplanes).collect(); let mut bind_planes: Vec = (0..nbindplanes) .map(|i| vk::BindImagePlaneMemoryInfo::default().plane_aspect(memory_plane(i))) .collect(); let mut bind_infos: Vec> = Vec::new(); for ((plane, fd), bind_plane) in fds.into_iter().enumerate().zip(bind_planes.iter_mut()) { let mut mem_props = vk::MemoryFdPropertiesKHR::default(); if let Err(x) = vulk.ext_mem_fd.get_memory_fd_properties( vk::ExternalMemoryHandleTypeFlags::DMA_BUF_EXT, fd.as_raw_fd(), &mut mem_props, ) { for b in bind_infos { vulk.dev.free_memory(b.memory, None); } vulk.dev.destroy_image(image, None); return Err(tag!( "Failed to get memory fd properties for plane {}: {}", plane, x )); }; let plane_aspect = memory_plane(plane); /* Needed for allocation size */ let mut req_plane_info = vk::ImagePlaneMemoryRequirementsInfo::default().plane_aspect(plane_aspect); let mut req_info = vk::ImageMemoryRequirementsInfo2::default().image(image); if nbindplanes > 1 { req_info = req_info.push_next(&mut req_plane_info); } let mut req_out = vk::MemoryRequirements2::default(); 
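            /* The memory type for each imported plane is chosen below by
             * intersecting what the fd supports with what the image requires
             * and taking the lowest set bit; e.g., with hypothetical masks,
             * 0b0110 & 0b0011 == 0b0010, giving memory type index 1. */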
            vulk.get_mem_reqs2
                .get_image_memory_requirements2(&req_info, &mut req_out);

            let mem_candidates =
                mem_props.memory_type_bits & req_out.memory_requirements.memory_type_bits;
            assert!(mem_candidates != 0);
            let mem_index = mem_candidates.trailing_zeros();

            // TODO: importing transfers the file descriptor!
            let mut import_info = vk::ImportMemoryFdInfoKHR::default()
                .fd(fd.into_raw_fd())
                .handle_type(vk::ExternalMemoryHandleTypeFlags::DMA_BUF_EXT);
            let mut dedicate_info = vk::MemoryDedicatedAllocateInfo::default().image(image);
            let alloc_info = vk::MemoryAllocateInfo::default()
                .allocation_size(req_out.memory_requirements.size)
                .memory_type_index(mem_index)
                .push_next(&mut import_info)
                .push_next(&mut dedicate_info);
            let mem = match vulk.dev.allocate_memory(&alloc_info, None) {
                Ok(x) => x,
                Err(x) => {
                    for b in bind_infos {
                        vulk.dev.free_memory(b.memory, None);
                    }
                    vulk.dev.destroy_image(image, None);
                    return Err(tag!("Failed to allocate memory: {}", x));
                }
            };

            let bind_img = vk::BindImageMemoryInfo::default()
                .image(image)
                .memory(mem)
                .memory_offset(0); /* memory offset must be zero unless aliasing */
            bind_infos.push(if nbindplanes > 1 {
                bind_img.push_next(bind_plane)
            } else {
                bind_img
            });
        }

        if let Err(x) = (vulk.bind_mem2.fp().bind_image_memory2_khr)(
            vulk.bind_mem2.device(),
            bind_infos.len().try_into().unwrap(),
            bind_infos.as_ptr(),
        )
        .result()
        {
            for b in bind_infos {
                vulk.dev.free_memory(b.memory, None);
            }
            vulk.dev.destroy_image(image, None);
            return Err(tag!("Failed to bind memory: {}", x));
        }

        // Keep VkDeviceMemory around, to free properly later
        let mut mem_planes: Vec<(vk::DeviceMemory, u32, u32)> = Vec::new();
        for i in 0..bind_infos.len() {
            mem_planes.push((
                bind_infos[i].memory,
                layout[i].offset.try_into().unwrap(),
                layout[i].row_pitch.try_into().unwrap(),
            ));
        }

        Ok(Arc::new(VulkanDmabuf {
            vulk: vulk.clone(),
            image,
            inner: Mutex::new(VulkanDmabufInner {
                image_layout: init_layout,
            }),
            memory_planes: mem_planes,
            vk_format,
            can_store_and_sample,
            width,
            height,
            main_fd,
        }))
    }
}

/** Create a [VulkanDmabuf] and the planes exported from it.
 *
 * (Note: plane parameters may not exactly match image dimensions.)
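 *
 * A minimal calling sketch, where `mods` is assumed to hold the modifiers
 * negotiated with the other side:
 *
 * ```ignore
 * let (img, planes) =
 *     vulkan_create_dmabuf(&vulk, 640, 480, fourcc('X', 'R', '2', '4'), &mods, false)?;
 * // each AddDmabufPlane (fd/offset/stride/modifier) can now be sent to the peer
 * ```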
*/ pub fn vulkan_create_dmabuf( vulk: &Arc, width: u32, height: u32, drm_format: u32, modifier_options: &[u64], can_store_and_sample: bool, ) -> Result<(Arc, Vec), String> { let vk_format = drm_to_vulkan(drm_format) .ok_or_else(|| tag!("Did not find matching Vulkan format for {}", drm_format))?; let format_info = get_vulkan_info(vk_format); let format_data = &vulk.formats[&vk_format]; /*<- the list of modifiers that may be chosen */ let mut mod_options = Vec::new(); for (v, data) in format_data .modifiers .iter() .zip(format_data.modifier_data.iter()) { if !modifier_options.contains(v) { continue; } let max_size = if can_store_and_sample { let Some(s) = data.max_size_store_and_sample else { continue; }; s } else { data.max_size_transfer }; if width > max_size.0 || height > max_size.1 { continue; } mod_options.push(*v); } if mod_options.is_empty() { return Err(tag!( "No available modifiers for image with format {}, size {}x{}, store+sample={}", drm_format, width, height, can_store_and_sample )); } unsafe { let nplanes = format_info.planes; let init_layout = vk::ImageLayout::UNDEFINED; let mut modifier_info = vk::ImageDrmFormatModifierListCreateInfoEXT::default() .drm_format_modifiers(&mod_options); let mut ext_create_info = vk::ExternalMemoryImageCreateInfo::default() .handle_types(vk::ExternalMemoryHandleTypeFlags::DMA_BUF_EXT); let usage_bits = if can_store_and_sample { vk::ImageUsageFlags::TRANSFER_SRC | vk::ImageUsageFlags::TRANSFER_DST | vk::ImageUsageFlags::STORAGE | vk::ImageUsageFlags::SAMPLED } else { vk::ImageUsageFlags::TRANSFER_SRC | vk::ImageUsageFlags::TRANSFER_DST }; let image_info = vk::ImageCreateInfo::default() .flags(if format_info.planes > 1 { vk::ImageCreateFlags::DISJOINT } else { vk::ImageCreateFlags::empty() }) .image_type(vk::ImageType::TYPE_2D) .format(vk_format) .extent(vk::Extent3D { width, height, depth: 1, }) .mip_levels(1) .array_layers(1) .samples(vk::SampleCountFlags::TYPE_1) .tiling(vk::ImageTiling::DRM_FORMAT_MODIFIER_EXT) .usage(usage_bits) .sharing_mode(vk::SharingMode::EXCLUSIVE) // only one queue family may use this .initial_layout(init_layout) .push_next(&mut ext_create_info) .push_next(&mut modifier_info); let mut props = vk::ImageDrmFormatModifierPropertiesEXT::default(); let image = vulk .dev .create_image(&image_info, None) .map_err(|_| "Failed to create image")?; if let Err(x) = vulk .get_modifier .get_image_drm_format_modifier_properties(image, &mut props) { vulk.dev.destroy_image(image, None); return Err(tag!("Failed to get image format modifiers: {}", x)); } let mod_info = &format_data.modifier_data[format_data .modifiers .iter() .position(|x| *x == props.drm_format_modifier) .unwrap()]; let nmemoryplanes = mod_info.plane_count as usize; let import_size_limit = if can_store_and_sample { mod_info.max_size_store_and_sample.unwrap() } else { mod_info.max_size_transfer }; if width > import_size_limit.0 || height > import_size_limit.1 { /* In theory, Vulkan could export images at a given size with a modifier * that Vulkan cannot import at that size. Only weird implementations would * do this in practice, so warn if this happens. 
*/ debug!("Warning: created dmabuf with format={:08x}, modifier={:016x} has size {}x{} larger than the {}x{} allowed for import", drm_format, props.drm_format_modifier, width, height, import_size_limit.0, import_size_limit.1); } let mut bind_infos: Vec> = Vec::new(); // todo: fixed size array let mut planes = Vec::::new(); let mut mem_fds = Vec::new(); for plane in 0..nplanes { let plane_aspect = memory_plane(plane); let mut req_plane_info = vk::ImagePlaneMemoryRequirementsInfo::default().plane_aspect(plane_aspect); let mut req_info = vk::ImageMemoryRequirementsInfo2::default().image(image); if nplanes > 1 { req_info = req_info.push_next(&mut req_plane_info); } let mut req_out = vk::MemoryRequirements2::default(); // TODO: load via KHR instead? vulk.get_mem_reqs2 .get_image_memory_requirements2(&req_info, &mut req_out); assert!(req_out.memory_requirements.memory_type_bits != 0); // Pick first/fastest type which is acceptable let mem_index = req_out .memory_requirements .memory_type_bits .trailing_zeros(); let mut export_info = vk::ExportMemoryAllocateInfoKHR::default() .handle_types(vk::ExternalMemoryHandleTypeFlags::DMA_BUF_EXT); let mut dedicate_info = vk::MemoryDedicatedAllocateInfo::default().image(image); let alloc_info = vk::MemoryAllocateInfo::default() .allocation_size(req_out.memory_requirements.size) .memory_type_index(mem_index) .push_next(&mut dedicate_info) .push_next(&mut export_info); let mem = match vulk.dev.allocate_memory(&alloc_info, None) { Ok(x) => x, Err(x) => { for b in bind_infos { vulk.dev.free_memory(b.memory, None); } vulk.dev.destroy_image(image, None); return Err(tag!("Failed to allocate memory: {}", x)); } }; bind_infos.push( vk::BindImageMemoryInfo::default() .image(image) .memory(mem) .memory_offset(0), ); let memory_fd_get_info = vk::MemoryGetFdInfoKHR::default() .memory(mem) .handle_type(vk::ExternalMemoryHandleTypeFlags::DMA_BUF_EXT); let fd = vulk .ext_mem_fd .get_memory_fd(&memory_fd_get_info) .map_err(|_| "Failed to get memory fd")?; // SAFETY: fd only captured here and vkGetMemoryFdKHR transfers ownership mem_fds.push(OwnedFd::from_raw_fd(fd)); } if mem_fds.len() != nmemoryplanes { assert!(mem_fds.len() == 1); /* duplicate fd for each memory plane */ let fd = mem_fds.pop().unwrap(); mem_fds.resize_with(nmemoryplanes - 1, || fd.try_clone().unwrap()); mem_fds.push(fd); } for (plane, fd) in std::iter::zip(0..nmemoryplanes, mem_fds) { let plane_aspect = memory_plane(plane); let layout = vulk.dev.get_image_subresource_layout( image, vk::ImageSubresource::default() .mip_level(0) .array_layer(0) .aspect_mask(plane_aspect), ); planes.push(AddDmabufPlane { fd, plane_idx: plane as u32, offset: layout.offset.try_into().unwrap(), stride: layout.row_pitch.try_into().unwrap(), modifier: props.drm_format_modifier, }); } if let Err(x) = (vulk.bind_mem2.fp().bind_image_memory2_khr)( vulk.bind_mem2.device(), bind_infos.len().try_into().unwrap(), bind_infos.as_ptr(), ) .result() { for b in bind_infos { vulk.dev.free_memory(b.memory, None); } vulk.dev.destroy_image(image, None); return Err(tag!("Failed to bind memory: {}", x)); } let mut mem_planes: Vec<(vk::DeviceMemory, u32, u32)> = Vec::new(); for i in 0..bind_infos.len() { mem_planes.push((bind_infos[i].memory, planes[i].offset, planes[i].stride)); } let main_fd = match planes[0].fd.try_clone() { Err(x) => { for b in bind_infos { vulk.dev.free_memory(b.memory, None); } vulk.dev.destroy_image(image, None); return Err(tag!("Failed to clone dmabuf fd: {}", x)); } Ok(f) => f, }; Ok(( Arc::new(VulkanDmabuf { vulk: 
vulk.clone(), image, inner: Mutex::new(VulkanDmabufInner { image_layout: init_layout, }), memory_planes: mem_planes, vk_format, can_store_and_sample, width, height, main_fd, }), planes, )) } } /** Create a (nonblocking) eventfd. */ fn make_evt_fd() -> Result { unsafe { let event_init: c_uint = 0; // EFD_SEMAPHORE ?? let ev_flags: c_int = nix::libc::EFD_CLOEXEC | nix::libc::EFD_NONBLOCK; let ev_fd: i32 = nix::libc::eventfd(event_init, ev_flags); if ev_fd == -1 { return Err(tag!("Failed to create eventfd: {}", errno::Errno::last())); } // SAFETY: ev_fd only captured here and was checked valid let event_fd = OwnedFd::from_raw_fd(ev_fd); Ok(event_fd) } } /** Import a Vulkan timeline semaphore from the given file descriptor. */ pub fn vulkan_import_timeline( vulk: &Arc, fd: OwnedFd, ) -> Result, String> { let mut sem_exp_info = vk::ExportSemaphoreCreateInfo::default() .handle_types(vk::ExternalSemaphoreHandleTypeFlags::OPAQUE_FD); let mut sem_type = vk::SemaphoreTypeCreateInfoKHR::default() .semaphore_type(vk::SemaphoreType::TIMELINE_KHR) .initial_value(0); let create_semaphore_info = vk::SemaphoreCreateInfo::default() .flags(vk::SemaphoreCreateFlags::empty()) .push_next(&mut sem_type) .push_next(&mut sem_exp_info); unsafe { let semaphore_drm_handle = drm_syncobj_fd_to_handle(&vulk.drm_fd, &fd)?; /* This semaphore's contents will be _replaced_ by the import */ let semaphore = match vulk.dev.create_semaphore(&create_semaphore_info, None) { Ok(x) => x, Err(_) => { drm_syncobj_destroy(&vulk.drm_fd, semaphore_drm_handle).unwrap(); return Err(tag!("Failed to create semaphore")); } }; let raw_fd = fd.into_raw_fd(); // TODO: if import or any following step fails, need to free. let import = vk::ImportSemaphoreFdInfoKHR::default() .fd(raw_fd) .flags(vk::SemaphoreImportFlags::empty()) .handle_type(vk::ExternalSemaphoreHandleTypeFlags::OPAQUE_FD) .semaphore(semaphore); match vulk.ext_semaphore_fd.import_semaphore_fd(&import) { Ok(()) => (), Err(_) => { /* Import failed, must clean up fd */ nix::unistd::close(raw_fd).unwrap(); vulk.dev.destroy_semaphore(semaphore, None); drm_syncobj_destroy(&vulk.drm_fd, semaphore_drm_handle).unwrap(); return Err(tag!("Failed to import semaphore")); } }; let event_fd = match make_evt_fd() { Ok(x) => x, Err(y) => { vulk.dev.destroy_semaphore(semaphore, None); drm_syncobj_destroy(&vulk.drm_fd, semaphore_drm_handle).unwrap(); return Err(y); } }; Ok(Arc::new(VulkanTimelineSemaphore { vulk: vulk.clone(), semaphore, external: VulkanExternalTimelineSemaphore { drm_handle: semaphore_drm_handle, event_fd, }, })) } } /** Helper function to create a timeline semaphore for internal use only */ unsafe fn vulkan_create_simple_timeline( dev: &Device, start_pt: u64, ) -> Result { let mut sem_type = vk::SemaphoreTypeCreateInfoKHR::default() .semaphore_type(vk::SemaphoreType::TIMELINE_KHR) .initial_value(start_pt); let create_semaphore_info = vk::SemaphoreCreateInfo::default() .flags(vk::SemaphoreCreateFlags::empty()) .push_next(&mut sem_type); let semaphore = dev .create_semaphore(&create_semaphore_info, None) .map_err(|x| tag!("Failed to create semaphore: {:?}", x))?; Ok(semaphore) } /** Helper function to create a timeline semaphore and matching drm syncobj, * event_fd, and fd for export. 
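 *
 * The four return values are linked views of one timeline: the
 * [vk::Semaphore] for queue submits, a drm syncobj handle mirroring it,
 * an exportable OPAQUE_FD to hand to another process, and an eventfd that
 * can be armed per time point via [drm_syncobj_eventfd]. Sketch:
 *
 * ```ignore
 * let (sem, handle, sem_fd, ev_fd) =
 *     unsafe { vulkan_create_timeline_parts(&dev, &ext_semaphore_fd, &drm_fd, 0)? };
 * drm_syncobj_eventfd(&drm_fd, &ev_fd, handle, 1)?; // ev_fd readable once point 1 signals
 * ```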
*/ unsafe fn vulkan_create_timeline_parts( dev: &Device, ext_semaphore_fd: &khr::external_semaphore_fd::Device, drm_fd: &OwnedFd, start_pt: u64, ) -> Result<(vk::Semaphore, u32, OwnedFd, OwnedFd), String> { let mut sem_exp_info = vk::ExportSemaphoreCreateInfo::default() .handle_types(vk::ExternalSemaphoreHandleTypeFlags::OPAQUE_FD); let mut sem_type = vk::SemaphoreTypeCreateInfoKHR::default() .semaphore_type(vk::SemaphoreType::TIMELINE_KHR) .initial_value(start_pt); let create_semaphore_info = vk::SemaphoreCreateInfo::default() .flags(vk::SemaphoreCreateFlags::empty()) .push_next(&mut sem_type) .push_next(&mut sem_exp_info); let semaphore = dev .create_semaphore(&create_semaphore_info, None) .map_err(|x| tag!("Failed to create semaphore: {:?}", x))?; let sem_fd_info = vk::SemaphoreGetFdInfoKHR::default() .semaphore(semaphore) .handle_type(vk::ExternalSemaphoreHandleTypeFlags::OPAQUE_FD); let semaphore_fd = match ext_semaphore_fd.get_semaphore_fd(&sem_fd_info) { Ok(x) => { // SAFETY: fd only captured here, vkGetSemaphoreFdKHR transfers ownership OwnedFd::from_raw_fd(x) } Err(x) => { dev.destroy_semaphore(semaphore, None); return Err(tag!("Failed to export semaphore: {:?}", x)); } }; let semaphore_drm_handle = match drm_syncobj_fd_to_handle(drm_fd, &semaphore_fd) { Ok(handle) => handle, Err(x) => { /* semaphore_fd automatically destroyed */ dev.destroy_semaphore(semaphore, None); return Err(x); } }; let event_fd = match make_evt_fd() { Ok(fd) => fd, Err(x) => { /* semaphore_fd automatically destroyed */ drm_syncobj_destroy(drm_fd, semaphore_drm_handle).unwrap(); dev.destroy_semaphore(semaphore, None); return Err(x); } }; Ok((semaphore, semaphore_drm_handle, semaphore_fd, event_fd)) } /** Create a new Vulkan timeline semaphore and matching exported file * descriptor to share with another process */ pub fn vulkan_create_timeline( vulk: &Arc, start_pt: u64, ) -> Result<(Arc, OwnedFd), String> { unsafe { let (semaphore, drm_handle, semaphore_fd, event_fd) = vulkan_create_timeline_parts( &vulk.dev, &vulk.ext_semaphore_fd, &vulk.drm_fd, start_pt, )?; Ok(( Arc::new(VulkanTimelineSemaphore { vulk: vulk.clone(), semaphore, external: VulkanExternalTimelineSemaphore { drm_handle, event_fd, }, }), semaphore_fd, )) } } /** Start a copy operation from the given dmabuf to a buffer. * * Related: [start_copy_segments_onto_dmabuf] */ pub fn start_copy_segments_from_dmabuf( img: &Arc, copy: &Arc, pool: &Arc, segments: &[(u32, u32, u32)], view_row_length: Option, wait_semaphores: &[(Arc, u64)], wait_binary_semaphores: &[VulkanBinarySemaphore], ) -> Result { // TODO: validate that buffer/dmabuf regions affected are not being used by any other transfer // (otherwise callers risk unsoundness) let vulk: &VulkanDevice = &img.vulk; let format_info = get_vulkan_info(img.vk_format); // todo: fully synchronous code; even if not high performance enough in practice, // will be useful for testing or initial creation // TODO: try to have everything necessary allocated in advance / so failures cannot happen unsafe { /* 1: run and wait for copy command */ let cmd_pool = pool.pool.lock().unwrap(); let alloc_cb_info = vk::CommandBufferAllocateInfo::default() .command_pool(*cmd_pool) .command_buffer_count(1) .level(vk::CommandBufferLevel::PRIMARY); let cbvec = vulk .dev .allocate_command_buffers(&alloc_cb_info) .map_err(|_| "Failed to allocate command buffers")?; drop(cmd_pool); let cb = cbvec[0]; // copy-out target buffer -- store per image? 
2x memory is OK; but ultimately this is temporary // possible: VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT let begin_cb_info = vk::CommandBufferBeginInfo::default().flags(vk::CommandBufferUsageFlags::empty()); vulk.dev .begin_command_buffer(cb, &begin_cb_info) .map_err(|_| "Failed to begin command buffer")?; let regions = make_copy_regions(segments, format_info, view_row_length, img); // TODO: eventually, might copy plane0/plane1/plane2 for multiplanar formats let op_layout = vk::ImageLayout::TRANSFER_SRC_OPTIMAL; let mut img_inner = img.inner.lock().unwrap(); let acq_barriers = &[qfot_acquire_image_memory_barrier( img.image, img_inner.image_layout, op_layout, vulk.queue_family, vk::AccessFlags::TRANSFER_READ, )]; let rel_barriers = &[qfot_release_image_memory_barrier( img.image, op_layout, vk::ImageLayout::GENERAL, vulk.queue_family, vk::AccessFlags::TRANSFER_READ, )]; let buf_rel_barrier = &[vk::BufferMemoryBarrier::default() .src_access_mask(vk::AccessFlags::TRANSFER_WRITE) .dst_access_mask(vk::AccessFlags::HOST_READ) .buffer(copy.buffer) .offset(0) .size(copy.buffer_len) .src_queue_family_index(vk::QUEUE_FAMILY_IGNORED) .dst_queue_family_index(vk::QUEUE_FAMILY_IGNORED)]; // Perform layout transition, even though it is unclear how useful it is for DMABUFs vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::TOP_OF_PIPE, vk::PipelineStageFlags::TRANSFER, vk::DependencyFlags::empty(), &[], &[], acq_barriers, ); if !regions.is_empty() { vulk.dev .cmd_copy_image_to_buffer(cb, img.image, op_layout, copy.buffer, ®ions[..]); } vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::TRANSFER, vk::PipelineStageFlags::HOST, vk::DependencyFlags::empty(), &[], buf_rel_barrier, &[], ); vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::TRANSFER, vk::PipelineStageFlags::BOTTOM_OF_PIPE, vk::DependencyFlags::empty(), &[], &[], rel_barriers, ); img_inner.image_layout = vk::ImageLayout::GENERAL; vulk.dev .end_command_buffer(cb) .map_err(|_| "Failed to end command buffer")?; let cbs = &[cb]; let mut waitv_values: Vec = wait_semaphores.iter().map(|x| x.1).collect(); let mut waitv_semaphores: Vec = wait_semaphores.iter().map(|x| x.0.semaphore).collect(); for bs in wait_binary_semaphores { /* Wait values for non-timeline semaphores are ignored */ waitv_values.push(u64::MAX); waitv_semaphores.push(bs.semaphore); } let mut waitv_stage_flags = Vec::new(); waitv_stage_flags.resize(waitv_semaphores.len(), vk::PipelineStageFlags::ALL_COMMANDS); let mut queue = vulkan_lock_queue(vulk); queue.inner.last_semaphore_value += 1; let completion_time_point = queue.inner.last_semaphore_value; let values = &[completion_time_point]; let semaphores = &[vulk.semaphore]; let mut signal = vk::TimelineSemaphoreSubmitInfoKHR::default() .wait_semaphore_values(&waitv_values[..]) .signal_semaphore_values(values); let submits = &[vk::SubmitInfo::default() .command_buffers(cbs) .wait_semaphores(&waitv_semaphores[..]) .wait_dst_stage_mask(&waitv_stage_flags) .signal_semaphores(semaphores) .push_next(&mut signal)]; vulk.dev .queue_submit(queue.inner.queue, submits, vk::Fence::null()) .map_err(|_| "Queue submit failed")?; // <- can fail with OOM drop(queue); Ok(VulkanCopyHandle { vulk: img.vulk.clone(), _image: img.clone(), _buffer: copy.clone(), pool: pool.clone(), cb, completion_time_point, }) } } /** Given a collection of damaged segments for a linear view of the image, return a * list of image copy operations. 
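 *
 * Worked example (hypothetical numbers): with a 4 byte-per-pixel format, a
 * 256-pixel-wide image, and a 1024-byte row length, a segment covering bytes
 * [512, 1536) touches the right half of row 0 and the left half of row 1,
 * and so is split into two single-row copy regions.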
 *
 * This can return an empty vector when all the segments are in the padding region of
 * the linear view and do not correspond to actual pixels.
 */
fn make_copy_regions(
    segments: &[(u32, u32, u32)],
    format_info: FormatLayoutInfo,
    view_row_length: Option<u32>,
    img: &VulkanDmabuf,
) -> Vec<vk::BufferImageCopy> {
    let row_length = view_row_length.unwrap_or(img.width * format_info.bpp);
    assert!(row_length >= img.width * format_info.bpp);
    assert!(row_length % format_info.bpp == 0);

    let prototype = vk::BufferImageCopy::default()
        .buffer_row_length(row_length / format_info.bpp)
        .buffer_image_height(0) /* not needed, single layer */
        .image_subresource(
            vk::ImageSubresourceLayers::default()
                .aspect_mask(vk::ImageAspectFlags::COLOR)
                .mip_level(0)
                .base_array_layer(0)
                .layer_count(1),
        );
    let z = vk::Offset3D::default();
    let e = vk::Extent3D::default().depth(1);

    // TODO: use an arrayvec equivalent
    let mut regions = Vec::<vk::BufferImageCopy>::new();
    for (mut source_offset, start, end) in segments {
        // TODO: this assumes disjoint segments; validate time?
        // (Can avoid this by super-aligning diffs?)
        let ubpp = format_info.bpp;
        let mut start_row = start / row_length;
        let end_row = (end - 1) / row_length;
        assert!(
            (start % row_length) % ubpp == 0,
            "non-{}-aligned interval [{},{}) with row length {}",
            ubpp, start, end, row_length
        );
        assert!(
            (end % row_length) % ubpp == 0,
            "non-{}-aligned interval [{},{}) with row length {}",
            ubpp, start, end, row_length
        );
        let mut start_pos = (start % row_length) / ubpp;
        let mut end_pos = 1 + ((end - 1) % row_length) / ubpp;
        let w = img.width;
        if start_pos >= w {
            /* Advance to next row */
            source_offset += row_length - start_pos * ubpp;
            start_pos = 0;
            start_row += 1;
        }
        if end_pos > w {
            end_pos = w;
        }
        if start_row > end_row || (start_row == end_row && start_pos >= end_pos) {
            /* Can happen if start_pos/end_pos are both in padding region of row */
            continue;
        }
        /* Cases:
         * x--x | x-- | x--
         *      | --x |----
         *      |     |-x
         */
        if start_row == end_row {
            regions.push(
                prototype
                    .buffer_offset(source_offset as u64)
                    .image_offset(z.x(start_pos as i32).y(start_row as i32))
                    .image_extent(e.width(end_pos - start_pos).height(1)),
            );
        } else {
            let (mid_start, mid_row_start): (u32, u32) = if start_pos == 0 {
                (start_row, source_offset)
            } else {
                (
                    start_row + 1,
                    source_offset + (row_length - start_pos * ubpp),
                )
            };
            let mid_end = if end_pos >= w { end_row } else { end_row - 1 };
            assert!(
                mid_end + 1 >= mid_start,
                "{} {} {} {} => {} {}",
                start_pos, start_row, end_pos, end_row, mid_end, mid_start
            );
            if start_pos > 0 {
                regions.push(
                    prototype
                        .buffer_offset(source_offset as u64)
                        .image_offset(z.x(start_pos as i32).y(start_row as i32))
                        .image_extent(e.width(w - start_pos).height(1)),
                );
            }
            if mid_end >= mid_start {
                regions.push(
                    prototype
                        .buffer_offset(mid_row_start as u64)
                        .image_offset(z.x(0).y(mid_start as i32))
                        .image_extent(e.width(w).height(mid_end + 1 - mid_start)),
                );
            }
            if end_pos < w {
                let adv2 = mid_row_start + (row_length * (mid_end + 1 - mid_start));
                regions.push(
                    prototype
                        .buffer_offset(adv2 as u64)
                        .image_offset(z.x(0).y(end_row as i32))
                        .image_extent(e.width(end_pos).height(1)),
                );
            }
        }
    }
    regions
}

/** Start a copy operation from the given buffer to a dmabuf.
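 *
 * For example (hypothetical values, mirroring the test helpers at the end of
 * this file): the single tuple (0, 0, 4096) requests that the first 4096
 * bytes of the staging buffer be written to the first 4096 bytes of the
 * image's linear view.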
* * `segments` is a list of (src_start, dst_start, dst_end) tuples * * This may be moderately slow; 100 segments once took 0.5msec to * create and queue, while 1 took 0.1msec * * `wait_semaphores`: timeline semaphores and associated points to * to wait for until buffer is safe to modify */ pub fn start_copy_segments_onto_dmabuf( img: &Arc, copy: &Arc, pool: &Arc, segments: &[(u32, u32, u32)], view_row_length: Option, wait_semaphores: &[(Arc, u64)], ) -> Result { let vulk: &VulkanDevice = &img.vulk; let format_info = get_vulkan_info(img.vk_format); // Design: each image gets its own command pool, with four options (diff, fill, etc.) unsafe { let cmd_pool = pool.pool.lock().unwrap(); let alloc_cb_info = vk::CommandBufferAllocateInfo::default() .command_pool(*cmd_pool) .command_buffer_count(1) .level(vk::CommandBufferLevel::PRIMARY); let cbvec = vulk .dev .allocate_command_buffers(&alloc_cb_info) .map_err(|_| "Failed to allocate command buffers")?; drop(cmd_pool); let cb = cbvec[0]; // copy-out target buffer -- store per image? 2x memory is OK; but ultimately this is temporary // possible: VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT let begin_cb_info = vk::CommandBufferBeginInfo::default().flags(vk::CommandBufferUsageFlags::empty()); vulk.dev .begin_command_buffer(cb, &begin_cb_info) .map_err(|_| "Failed to begin command buffer")?; let regions = make_copy_regions(segments, format_info, view_row_length, img); let mut img_inner = img.inner.lock().unwrap(); let op_layout = vk::ImageLayout::TRANSFER_DST_OPTIMAL; let acq_barriers = &[qfot_acquire_image_memory_barrier( img.image, img_inner.image_layout, op_layout, vulk.queue_family, vk::AccessFlags::TRANSFER_WRITE, )]; let buf_acq_barrier = &[vk::BufferMemoryBarrier::default() .src_access_mask(vk::AccessFlags::HOST_WRITE) .dst_access_mask(vk::AccessFlags::TRANSFER_READ) .buffer(copy.buffer) .offset(0) .size(copy.buffer_len) .src_queue_family_index(vk::QUEUE_FAMILY_IGNORED) .dst_queue_family_index(vk::QUEUE_FAMILY_IGNORED)]; let rel_barriers = &[qfot_release_image_memory_barrier( img.image, op_layout, vk::ImageLayout::GENERAL, vulk.queue_family, vk::AccessFlags::TRANSFER_WRITE, )]; // Perform layout transition, even though it is unclear how useful it is for DMABUFs vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::TOP_OF_PIPE, vk::PipelineStageFlags::TRANSFER, vk::DependencyFlags::empty(), &[], &[], acq_barriers, ); vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::HOST, vk::PipelineStageFlags::TRANSFER, vk::DependencyFlags::empty(), &[], buf_acq_barrier, &[], ); if !regions.is_empty() { vulk.dev .cmd_copy_buffer_to_image(cb, copy.buffer, img.image, op_layout, ®ions[..]); } vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::TRANSFER, vk::PipelineStageFlags::BOTTOM_OF_PIPE, vk::DependencyFlags::empty(), &[], &[], rel_barriers, ); img_inner.image_layout = vk::ImageLayout::GENERAL; vulk.dev .end_command_buffer(cb) .map_err(|_| "Failed to end command buffer")?; let cbs = &[cb]; let waitv_values: Vec = wait_semaphores.iter().map(|x| x.1).collect(); let waitv_semaphores: Vec = wait_semaphores.iter().map(|x| x.0.semaphore).collect(); let mut waitv_stage_flags = Vec::new(); waitv_stage_flags.resize(waitv_semaphores.len(), vk::PipelineStageFlags::ALL_COMMANDS); let mut queue = vulkan_lock_queue(vulk); queue.inner.last_semaphore_value += 1; let completion_time_point = queue.inner.last_semaphore_value; let values = &[completion_time_point]; let semaphores = &[vulk.semaphore]; let mut signal = 
vk::TimelineSemaphoreSubmitInfoKHR::default()
            .wait_semaphore_values(&waitv_values[..])
            .signal_semaphore_values(values);
        let submits = &[vk::SubmitInfo::default()
            .command_buffers(cbs)
            .wait_semaphores(&waitv_semaphores[..])
            .wait_dst_stage_mask(&waitv_stage_flags[..])
            .signal_semaphores(semaphores)
            .push_next(&mut signal)];

        vulk.dev
            .queue_submit(queue.inner.queue, submits, vk::Fence::null())
            .map_err(|_| "Queue submit failed")?; // <- can fail with OOM
        drop(queue);

        // TODO: clean up or recycle command buffer

        Ok(VulkanCopyHandle {
            vulk: img.vulk.clone(),
            _image: img.clone(),
            _buffer: copy.clone(),
            pool: pool.clone(),
            cb,
            completion_time_point,
        })
    }
}

impl VulkanCopyHandle {
    /** Blocks the thread until the copy operation has completed.
     *
     * (Should not be used except for test code, because it blocks.) */
    #[cfg(any(test, feature = "test_proto"))]
    pub fn wait_until_done(self: &VulkanCopyHandle) -> Result<(), String> {
        self.vulk
            .wait_for_timeline_pt(self.completion_time_point, u64::MAX)
            .map(|_| ())
    }
    /** Get the point on the main timeline which will be signaled once the copy
     * operation has completed. */
    pub fn get_timeline_point(self: &VulkanCopyHandle) -> u64 {
        self.completion_time_point
    }
}

/** Get the device id for the special file at `path`, cast to a u64. */
pub fn get_dev_for_drm_node_path(path: &Path) -> Result<u64, String> {
    get_rdev_for_file(path)
        .ok_or_else(|| tag!("Failed to get st_rdev for drm node at {}", path.display()))
}

impl VulkanDevice {
    /** Wait until the main timeline semaphore has reached the given point,
     * or `max_wait` nanoseconds have elapsed. (Note: u64::MAX corresponds to
     * ~585 years and provides an effectively indefinite delay, assuming the
     * clock does not jump very far forward.)
     *
     * Returns `true` if wait successful. */
    pub fn wait_for_timeline_pt(&self, pt: u64, max_wait: u64) -> Result<bool, String> {
        unsafe {
            let sem = &[self.semaphore];
            let values = &[pt];
            let wait_info = vk::SemaphoreWaitInfoKHR::default()
                .semaphores(sem)
                .values(values)
                .flags(vk::SemaphoreWaitFlags::empty());
            match self
                .timeline_semaphore
                .wait_semaphores(&wait_info, max_wait)
            {
                Ok(()) => Ok(true),
                Err(vk::Result::TIMEOUT) => Ok(false),
                Err(x) => Err(tag!("Waiting for completion failed: {:?}", x)),
            }
        }
    }
    /** Return the device ID */
    pub fn get_device(&self) -> u64 {
        self.device_id
    }
    /** Get the event_fd associated with the main timeline semaphore, if
     * semaphore import/export is supported and there is one. */
    pub fn get_event_fd<'a>(
        &'a self,
        timeline_point: u64,
    ) -> Result<Option<BorrowedFd<'a>>, String> {
        if let Some(ref ext) = self.semaphore_external {
            drm_syncobj_eventfd(&self.drm_fd, &ext.event_fd, ext.drm_handle, timeline_point)?;
            Ok(Some(ext.event_fd.as_fd()))
        } else {
            Ok(None)
        }
    }
    /** Return the timeline point for the main timeline semaphore for this device */
    pub fn get_current_timeline_pt(&self) -> Result<u64, String> {
        unsafe {
            self.timeline_semaphore
                .get_semaphore_counter_value(self.semaphore)
                .map_err(|x| tag!("Failed to get timeline point: {:?}", x))
        }
    }
    /** Return whether a DMABUF with specified parameters can be imported.
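     *
     * On mismatch, the returned [VulkanImageParameterMismatch] indicates
     * whether the format, the modifier, or the maximum image size was the
     * parameter that could not be satisfied.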
*/ pub fn can_import_image( &self, drm_format: u32, width: u32, height: u32, planes: &[AddDmabufPlane], // todo: move modifier extraction out of the function can_store_and_sample: bool, ) -> Result<(), VulkanImageParameterMismatch> { /* post linux-dmabuf version 5, all planes must have same modifier */ let modifier = planes[0].modifier; assert!(planes.iter().all(|x| x.modifier == modifier)); let Some(vk_format) = drm_to_vulkan(drm_format) else { return Err(VulkanImageParameterMismatch::Format); }; let Some(data) = self.formats.get(&vk_format) else { return Err(VulkanImageParameterMismatch::Format); }; let Some(idx) = data.modifiers.iter().position(|x| *x == modifier) else { return Err(VulkanImageParameterMismatch::Modifier); }; let mod_data = &data.modifier_data[idx]; let max_size = if can_store_and_sample { mod_data.max_size_store_and_sample.unwrap() } else { mod_data.max_size_transfer }; if width <= max_size.0 && height <= max_size.1 { Ok(()) } else { Err(VulkanImageParameterMismatch::Size(max_size)) } } /** Return whether the device supports the given format */ pub fn supports_format(&self, drm_format: u32, drm_modifier: u64) -> bool { let Some(vk_fmt) = drm_to_vulkan(drm_format) else { return false; }; let Some(data) = self.formats.get(&vk_fmt) else { return false; }; data.modifiers.contains(&drm_modifier) } /** Returns empty vector if format is not supported; otherwise a list of permissible modifiers */ pub fn get_supported_modifiers(&self, drm_format: u32) -> &[u64] { let Some(vk_fmt) = drm_to_vulkan(drm_format) else { return &[]; }; let Some(data) = self.formats.get(&vk_fmt) else { return &[]; }; &data.modifiers } /** Returns true if the device can import binary semaphores from sync files */ pub fn supports_binary_semaphore_import(&self) -> bool { self.dev_info.supports_binary_import } /** Returns true if the device can import/export timeline semaphores from sync files */ pub fn supports_timeline_import_export(&self) -> bool { self.dev_info.supports_timeline_import_export } } impl VulkanDmabuf { /** The total length of the canonical representation of the format * * If 'view_row_length' is not None, it specifies the row stride to use. */ pub fn nominal_size(self: &VulkanDmabuf, view_row_length: Option) -> usize { let format_info = get_vulkan_info(self.vk_format); // TODO: handle multiplanar formats if let Some(r) = view_row_length { (self.height as usize) * (r as usize) } else { (self.width as usize) * (self.height as usize) * (format_info.bpp as usize) } } /** Get the number of bytes per pixel (assuming DMABUF has a single-plane format) */ pub fn get_bpp(&self) -> u32 { // todo: will need modification for multi-planar support let format_info = get_vulkan_info(self.vk_format); format_info.bpp } /** Export the read sync file for the DMABUF, returning None * if the operation is not supported (as may happen with old kernels). */ pub fn export_sync_file(&self) -> Result, String> { let Some(sync_fd) = dmabuf_sync_file_export(&self.main_fd)? else { debug!("Failed to export sync file from dmabuf, possible old kernel."); return Ok(None); }; Ok(Some(VulkanSyncFile { vulk: self.vulk.clone(), fd: sync_fd, })) } } impl VulkanSyncFile { /** Export the sync file to a (single-use) binary semaphore. 
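 *
 * The sync file fd is duplicated and imported with
 * vk::SemaphoreImportFlags::TEMPORARY, so the resulting semaphore is only
 * good for a single wait operation.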
*/ pub fn export_binary_semaphore(&self) -> Result { let mut sem_exp_info = vk::ExportSemaphoreCreateInfo::default() .handle_types(vk::ExternalSemaphoreHandleTypeFlags::SYNC_FD); let mut sem_type = vk::SemaphoreTypeCreateInfoKHR::default() .semaphore_type(vk::SemaphoreType::BINARY) .initial_value(0); let create_semaphore_info = vk::SemaphoreCreateInfo::default() .flags(vk::SemaphoreCreateFlags::empty()) // VK_SEMAPHORE_IMPORT_TEMPORARY_BIT ? .push_next(&mut sem_type) .push_next(&mut sem_exp_info); let vulk: &Arc = &self.vulk; let sync_fd = self .fd .try_clone() .map_err(|x| tag!("Failed to clone sync file fd: {}", x))?; unsafe { let semaphore = match vulk.dev.create_semaphore(&create_semaphore_info, None) { Ok(x) => x, Err(x) => { return Err(tag!("Failed to create semaphore: {}", x)); } }; let raw_fd = sync_fd.into_raw_fd(); let import = vk::ImportSemaphoreFdInfoKHR::default() .fd(raw_fd) .flags(vk::SemaphoreImportFlags::TEMPORARY) .handle_type(vk::ExternalSemaphoreHandleTypeFlags::SYNC_FD) .semaphore(semaphore); match vulk.ext_semaphore_fd.import_semaphore_fd(&import) { Ok(()) => (), Err(x) => { /* Import failed, must clean up fd */ nix::unistd::close(raw_fd).unwrap(); vulk.dev.destroy_semaphore(semaphore, None); return Err(tag!("Failed to import semaphore: {}", x)); } }; Ok(VulkanBinarySemaphore { vulk: vulk.clone(), semaphore, }) } } } impl VulkanTimelineSemaphore { /** Block for up to `timeout_ns` for the timeline semaphore to * reach the given point (or for some error to occur). */ #[allow(dead_code)] #[cfg(any(test, feature = "test_proto"))] pub fn wait_for_timeline_pt( self: &VulkanTimelineSemaphore, pt: u64, timeout_ns: u64, ) -> Result<(), String> { unsafe { let sem = &[self.semaphore]; let values = &[pt]; let wait_info = vk::SemaphoreWaitInfoKHR::default() .semaphores(sem) .values(values) .flags(vk::SemaphoreWaitFlags::empty()); self.vulk .timeline_semaphore .wait_semaphores(&wait_info, timeout_ns) // u64::MAX is ~585 years .map_err(|x| tag!("Waiting for completion failed: {:?}", x))?; } Ok(()) } /** Get the current value of the semaphore. */ pub fn get_current_pt(self: &VulkanTimelineSemaphore) -> Result { unsafe { self.vulk .timeline_semaphore .get_semaphore_counter_value(self.semaphore) .map_err(|x| tag!("Failed to get timeline point: {:?}", x)) } } /** Get the eventfd used with this timeline semaphore */ pub fn get_event_fd<'a>(self: &'a VulkanTimelineSemaphore) -> BorrowedFd<'a> { self.external.event_fd.as_fd() } /** Configure the eventfd to be ready when the semaphore has reached the * timeline point */ pub fn link_event_fd<'a>( self: &'a VulkanTimelineSemaphore, timeline_point: u64, ) -> Result, String> { drm_syncobj_eventfd( &self.vulk.drm_fd, &self.external.event_fd, self.external.drm_handle, timeline_point, )?; Ok(self.external.event_fd.as_fd()) } /** Signal the timeline semaphore with the given timeline point. */ pub fn signal_timeline_pt(self: &VulkanTimelineSemaphore, pt: u64) -> Result<(), String> { unsafe { let signal_info = vk::SemaphoreSignalInfo::default() .semaphore(self.semaphore) .value(pt); self.vulk .timeline_semaphore .signal_semaphore(&signal_info) .map_err(|_| tag!("Signalling timeline semaphore failed"))?; } Ok(()) } } /** Copy the entire contents of the buffer onto the dmabuf. * * The buffer length must match the packed linear dmabuf size. 
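 *
 * This helper blocks until the copy has completed, and is only compiled for
 * test configurations; see the cfg attribute below.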
*/ #[allow(dead_code)] #[cfg(any(test, feature = "test_proto"))] pub fn copy_onto_dmabuf( buf: &Arc, copy: &Arc, data: &[u8], ) -> Result<(), String> { unsafe { let nom_len = buf.nominal_size(None); // Safety: requires valid region, and no other reader or writer at this time assert!(data.len() == nom_len); let inner = copy.inner.lock().unwrap(); let dst = std::ptr::slice_from_raw_parts_mut(inner.data as *mut u8, nom_len); (*dst).copy_from_slice(data); let ranges = &[vk::MappedMemoryRange::default() .offset(0) .size(copy.memory_len) .memory(copy.mem)]; let vulk: &VulkanDevice = &buf.vulk; vulk.dev .flush_mapped_memory_ranges(ranges) .map_err(|_| "Failed to flush mapped memory range")?; let pool = vulkan_get_cmd_pool(&buf.vulk)?; let handle = start_copy_segments_onto_dmabuf( buf, copy, &pool, &[(0, 0, nom_len as u32)], None, &[], )?; handle.wait_until_done()?; drop(handle); } Ok(()) } /** Copy the entire contents of the dmabuf onto the buffer. * * The buffer length must match the packed linear dmabuf size. */ #[allow(dead_code)] #[cfg(any(test, feature = "test_proto"))] pub fn copy_from_dmabuf( buf: &Arc, copy: &Arc, ) -> Result, String> { let pool = vulkan_get_cmd_pool(&buf.vulk)?; let handle = start_copy_segments_from_dmabuf( buf, copy, &pool, &[(0, 0, buf.nominal_size(None) as u32)], None, &[], &[], )?; handle.wait_until_done()?; drop(handle); let nom_len = buf.nominal_size(None); let mut output = vec![0; nom_len]; let vulk: &VulkanDevice = &buf.vulk; unsafe { let ranges = &[vk::MappedMemoryRange::default() .offset(0) .size(copy.memory_len) .memory(copy.mem)]; vulk.dev .invalidate_mapped_memory_ranges(ranges) .map_err(|_| "Failed to invalidate mapped memory range")?; assert!(nom_len as u64 <= copy.memory_len); // Safety: requires valid region, and no other writers at this time let inner = copy.inner.lock().unwrap(); let src = slice_from_raw_parts(inner.data as *mut u8, nom_len); output.copy_from_slice(&*src); } Ok(output) } /** A list of DRM formats, used to test that operations on each supported format work. 
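 *
 * Not every entry need be supported by a given driver; tests are expected to
 * skip (format, modifier) pairs that the device does not advertise.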
*/ #[cfg(test)] pub const DRM_FORMATS: &[u32] = &[ fourcc('A', 'R', '2', '4'), fourcc('X', 'R', '2', '4'), WlShmFormat::C8 as u32, WlShmFormat::Rgb332 as u32, WlShmFormat::Bgr233 as u32, WlShmFormat::Xrgb4444 as u32, WlShmFormat::Xbgr4444 as u32, WlShmFormat::Rgbx4444 as u32, WlShmFormat::Bgrx4444 as u32, WlShmFormat::Argb4444 as u32, WlShmFormat::Abgr4444 as u32, WlShmFormat::Rgba4444 as u32, WlShmFormat::Bgra4444 as u32, WlShmFormat::Xrgb1555 as u32, WlShmFormat::Xbgr1555 as u32, WlShmFormat::Rgbx5551 as u32, WlShmFormat::Bgrx5551 as u32, WlShmFormat::Argb1555 as u32, WlShmFormat::Abgr1555 as u32, WlShmFormat::Rgba5551 as u32, WlShmFormat::Bgra5551 as u32, WlShmFormat::Rgb565 as u32, WlShmFormat::Bgr565 as u32, WlShmFormat::Rgb888 as u32, WlShmFormat::Bgr888 as u32, WlShmFormat::Xbgr8888 as u32, WlShmFormat::Rgbx8888 as u32, WlShmFormat::Bgrx8888 as u32, WlShmFormat::Abgr8888 as u32, WlShmFormat::Rgba8888 as u32, WlShmFormat::Bgra8888 as u32, WlShmFormat::Xrgb2101010 as u32, WlShmFormat::Xbgr2101010 as u32, WlShmFormat::Rgbx1010102 as u32, WlShmFormat::Bgrx1010102 as u32, WlShmFormat::Argb2101010 as u32, WlShmFormat::Abgr2101010 as u32, WlShmFormat::Rgba1010102 as u32, WlShmFormat::Bgra1010102 as u32, WlShmFormat::Yuyv as u32, WlShmFormat::Yvyu as u32, WlShmFormat::Uyvy as u32, WlShmFormat::Vyuy as u32, WlShmFormat::Ayuv as u32, WlShmFormat::Nv12 as u32, WlShmFormat::Nv21 as u32, WlShmFormat::Nv16 as u32, WlShmFormat::Nv61 as u32, WlShmFormat::Yuv410 as u32, WlShmFormat::Yvu410 as u32, WlShmFormat::Yuv411 as u32, WlShmFormat::Yvu411 as u32, WlShmFormat::Yuv420 as u32, WlShmFormat::Yvu420 as u32, WlShmFormat::Yuv422 as u32, WlShmFormat::Yvu422 as u32, WlShmFormat::Yuv444 as u32, WlShmFormat::Yvu444 as u32, WlShmFormat::R8 as u32, WlShmFormat::R16 as u32, WlShmFormat::Rg88 as u32, WlShmFormat::Gr88 as u32, WlShmFormat::Rg1616 as u32, WlShmFormat::Gr1616 as u32, WlShmFormat::Xrgb16161616f as u32, WlShmFormat::Xbgr16161616f as u32, WlShmFormat::Argb16161616f as u32, WlShmFormat::Abgr16161616f as u32, WlShmFormat::Xyuv8888 as u32, WlShmFormat::Vuy888 as u32, WlShmFormat::Vuy101010 as u32, WlShmFormat::Y210 as u32, WlShmFormat::Y212 as u32, WlShmFormat::Y216 as u32, WlShmFormat::Y410 as u32, WlShmFormat::Y412 as u32, WlShmFormat::Y416 as u32, WlShmFormat::Xvyu2101010 as u32, WlShmFormat::Xvyu1216161616 as u32, WlShmFormat::Xvyu16161616 as u32, WlShmFormat::Y0l0 as u32, WlShmFormat::X0l0 as u32, WlShmFormat::Y0l2 as u32, WlShmFormat::X0l2 as u32, WlShmFormat::Yuv4208bit as u32, WlShmFormat::Yuv42010bit as u32, WlShmFormat::Xrgb8888A8 as u32, WlShmFormat::Xbgr8888A8 as u32, WlShmFormat::Rgbx8888A8 as u32, WlShmFormat::Bgrx8888A8 as u32, WlShmFormat::Rgb888A8 as u32, WlShmFormat::Bgr888A8 as u32, WlShmFormat::Rgb565A8 as u32, WlShmFormat::Bgr565A8 as u32, WlShmFormat::Nv24 as u32, WlShmFormat::Nv42 as u32, WlShmFormat::P210 as u32, WlShmFormat::P010 as u32, WlShmFormat::P012 as u32, WlShmFormat::P016 as u32, WlShmFormat::Axbxgxrx106106106106 as u32, WlShmFormat::Nv15 as u32, WlShmFormat::Q410 as u32, WlShmFormat::Q401 as u32, WlShmFormat::Xrgb16161616 as u32, WlShmFormat::Xbgr16161616 as u32, WlShmFormat::Argb16161616 as u32, WlShmFormat::Abgr16161616 as u32, WlShmFormat::C1 as u32, WlShmFormat::C2 as u32, WlShmFormat::C4 as u32, WlShmFormat::D1 as u32, WlShmFormat::D2 as u32, WlShmFormat::D4 as u32, WlShmFormat::D8 as u32, WlShmFormat::R1 as u32, WlShmFormat::R2 as u32, WlShmFormat::R4 as u32, WlShmFormat::R10 as u32, WlShmFormat::R12 as u32, WlShmFormat::Avuy8888 as u32, WlShmFormat::Xvuy8888 
as u32, WlShmFormat::P030 as u32, ]; /* Creating a Vulkan instance or connecting ffmpeg to it should be thread safe (safe to * run in multiple test threads), but in practice there may be memory corruption visible * every few hundred runs. This should not be a problem in practice for Waypipe since * setup_vulkan() is only called once; but could make tests flaky. */ #[cfg(test)] pub static VULKAN_MUTEX: Mutex<()> = Mutex::new(()); #[test] fn test_dmabuf() { let _serialize_test = VULKAN_MUTEX.lock().unwrap(); let Ok(Some(instance)) = setup_vulkan_instance(true, &VideoSetting::default(), false, false) else { return; }; for dev_id in list_render_device_ids() { let Ok(Some(vulk)) = setup_vulkan_device(&instance, Some(dev_id), &VideoSetting::default(), true) else { continue; }; println!("Setup complete for device id {}", dev_id); let mut format_modifiers = Vec::<(u32, u64)>::new(); for f in DRM_FORMATS { let Some(vkf) = drm_to_vulkan(*f) else { continue; }; let Some(data) = vulk.formats.get(&vkf) else { continue; }; for m in &data.modifiers { format_modifiers.push((*f, *m)); } } println!("formats: {:#?}", vulk.formats); for (j, (format, modifier)) in format_modifiers.iter().enumerate() { let (format, modifier) = (*format, *modifier); let vkf = drm_to_vulkan(format).unwrap(); println!( "\nTesting format 0x{:x} => {:?}, modifier 0x{:x}", format, vkf, modifier ); let (width, height) = (110, 44); let bpp = get_vulkan_info(drm_to_vulkan(format).unwrap()).bpp; let start_time = std::time::Instant::now(); let mod_options = &[modifier]; let (dmabuf1, planes) = vulkan_create_dmabuf(&vulk, width, height, format, mod_options, false).unwrap(); println!("DMABUF for 0x{:x} created with planes {:?}", format, planes); let dmabuf2 = vulkan_import_dmabuf(&vulk, planes, width, height, format, false).unwrap(); println!("DMABUF imported"); let mut pattern: Vec = vec![0; (width * height * bpp) as usize]; for x in pattern.iter_mut().enumerate() { *x.1 = (x.0 * (j + 1)) as u8; } let copy1 = Arc::new(vulkan_get_buffer(&vulk, dmabuf1.nominal_size(None), true).unwrap()); let copy2 = Arc::new(vulkan_get_buffer(&vulk, dmabuf2.nominal_size(None), true).unwrap()); copy_onto_dmabuf(&dmabuf1, ©1, &pattern[..]).unwrap(); let output = copy_from_dmabuf(&dmabuf2, ©2).unwrap(); let end_time = std::time::Instant::now(); let duration = end_time.duration_since(start_time); println!( "pattern max {} output max {}, {} msec", pattern.iter().max().unwrap(), output.iter().max().unwrap(), duration.as_secs_f32() * 1e3, ); if vkf != vk::Format::R16G16B16A16_SFLOAT { // TODO: Nans need not roundtrip exactly, need a check for this assert!(pattern == output); } } } } waypipe-v0.10.5/src/gbm.rs000066400000000000000000000451651507071711100153560ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-3.0-or-later */ /*! DMABUF handling with libgbm; should only be used if Vulkan is not available. * * To maximize compatibility, newer features and optimizations should be avoided. * It is very hard to test them fully without having a variety of old hardware and * library versions. To be safe, libgbm functions should only ever be called on * the main thread, and mapped memory only accessed from a single thread. 
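 *
 * Only single-plane RGB-like formats with a linear (or driver-chosen
 * "invalid") modifier are handled here; see [GBM_SUPPORTED_FORMATS] below.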
 */
#![cfg(feature = "gbmfallback")]

use crate::tag;
use crate::util::*;
use crate::wayland_gen::WlShmFormat;
use log::{debug, error};
use std::cell::OnceCell;
use std::collections::BTreeMap;
use std::ffi::c_void;
use std::os::fd::{AsRawFd, FromRawFd, OwnedFd};
use std::rc::Rc;
use waypipe_gbm_wrapper::*;

/** A GBM device for a render node, and associated information */
pub struct GBMDevice {
    device: *mut gbm_device,
    bindings: gbm,
    device_id: u64,
    /** Keep the render node fd alive, as the gbm device appears to refer to it. */
    _drm_fd: OwnedFd,
    /** Cache listing all available modifiers. Modifiers are typically requested for
     * use by protocol editing code for dmabuf-feedback and so if a request for one format
     * is made, usually requests for the other formats will follow. Mesa typically supports,
     * and compositors advertise, about half of the options in [GBM_SUPPORTED_FORMATS], so
     * the overhead of checking for all possible formats is not very large. If individual
     * format queries become expensive, BTreeMap<u32, OnceCell<Box<[u64]>>> can be used. */
    supported_modifiers: OnceCell<BTreeMap<u32, Box<[u64]>>>,
}

/** A type corresponding to a DMABUF object */
pub struct GBMDmabuf {
    /** Reference to keep device alive at least as long as the gbm_bo; the documentation does
     * not state that this is necessary, but it also does not state that it isn't. */
    device: Rc<GBMDevice>,
    bo: *mut gbm_bo,
    pub width: u32,
    pub height: u32,
    pub format: u32,
}

impl Drop for GBMDevice {
    fn drop(&mut self) {
        unsafe {
            (self.bindings.gbm_device_destroy)(self.device);
        }
    }
}
impl Drop for GBMDmabuf {
    fn drop(&mut self) {
        unsafe {
            (self.device.bindings.gbm_bo_destroy)(self.bo);
        }
    }
}

const LINEAR_MODIFIER: u64 = 0;
const INVALID_MODIFIER: u64 = 0x00ffffffffffffff;

/** Submitting overly large dimensions can make libgbm (or at least, some older version of it)
 * crash, and libgbm does not expose buffer size limits; so set an arbitrary limit which is
 * well under u16::MAX. */
const MAX_DIMENSION: u32 = 16384;

/** List of formats GBM can support which are RGB, single-plane, and have a possible
 * linear layout. */
const GBM_SUPPORTED_FORMATS: &[u32] = &[
    /* Note: GBM also accepts 0=GBM_BO_FORMAT_XRGB8888 and 1=GBM_BO_FORMAT_ARGB8888,
     * but these are not valid DRM format codes. (Note: in an unfortunate coincidence,
     * wl_shm::format::argb8888 is 0 and wl_shm::format::xrgb8888 is 1; but this should
     * not matter because wl_shm format codes should never be passed to libgbm.)
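     * The two fourcc() entries below are DRM_FORMAT_ARGB8888 and
     * DRM_FORMAT_XRGB8888, spelled out explicitly because the corresponding
     * WlShmFormat values are the special wl_shm codes 0 and 1 rather than
     * fourcc codes.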
*/ fourcc('A', 'R', '2', '4'), // Argb8888 fourcc('X', 'R', '2', '4'), // Xrgb8888 WlShmFormat::Rgb332 as u32, WlShmFormat::Bgr233 as u32, WlShmFormat::Xrgb4444 as u32, WlShmFormat::Xbgr4444 as u32, WlShmFormat::Rgbx4444 as u32, WlShmFormat::Bgrx4444 as u32, WlShmFormat::Argb4444 as u32, WlShmFormat::Abgr4444 as u32, WlShmFormat::Rgba4444 as u32, WlShmFormat::Bgra4444 as u32, WlShmFormat::Xrgb1555 as u32, WlShmFormat::Xbgr1555 as u32, WlShmFormat::Rgbx5551 as u32, WlShmFormat::Bgrx5551 as u32, WlShmFormat::Argb1555 as u32, WlShmFormat::Abgr1555 as u32, WlShmFormat::Rgba5551 as u32, WlShmFormat::Bgra5551 as u32, WlShmFormat::Rgb565 as u32, WlShmFormat::Bgr565 as u32, WlShmFormat::Rgb888 as u32, WlShmFormat::Bgr888 as u32, WlShmFormat::Xbgr8888 as u32, WlShmFormat::Rgbx8888 as u32, WlShmFormat::Bgrx8888 as u32, WlShmFormat::Abgr8888 as u32, WlShmFormat::Rgba8888 as u32, WlShmFormat::Bgra8888 as u32, WlShmFormat::Xrgb2101010 as u32, WlShmFormat::Xbgr2101010 as u32, WlShmFormat::Rgbx1010102 as u32, WlShmFormat::Bgrx1010102 as u32, WlShmFormat::Argb2101010 as u32, WlShmFormat::Abgr2101010 as u32, WlShmFormat::Rgba1010102 as u32, WlShmFormat::Bgra1010102 as u32, WlShmFormat::R8 as u32, WlShmFormat::R16 as u32, WlShmFormat::Rg88 as u32, WlShmFormat::Gr88 as u32, WlShmFormat::Rg1616 as u32, WlShmFormat::Gr1616 as u32, WlShmFormat::Xrgb16161616f as u32, WlShmFormat::Xbgr16161616f as u32, WlShmFormat::Argb16161616f as u32, WlShmFormat::Abgr16161616f as u32, WlShmFormat::Xrgb16161616 as u32, WlShmFormat::Xbgr16161616 as u32, WlShmFormat::Argb16161616 as u32, WlShmFormat::Abgr16161616 as u32, ]; fn get_bpp_if_rgb_planar(fmt: u32) -> Option { use WlShmFormat::*; if fmt == fourcc('A', 'R', '2', '4') || fmt == fourcc('X', 'R', '2', '4') { return Some(4); } let f: WlShmFormat = fmt.try_into().ok()?; match f { Argb8888 | Xrgb8888 => Some(4), Rgb332 | Bgr233 => Some(1), Xrgb4444 | Xbgr4444 | Rgbx4444 | Bgrx4444 | Argb4444 | Abgr4444 | Rgba4444 | Bgra4444 | Xrgb1555 | Xbgr1555 | Rgbx5551 | Bgrx5551 | Argb1555 | Abgr1555 | Rgba5551 | Bgra5551 | Rgb565 | Bgr565 => Some(2), Rgb888 | Bgr888 => Some(3), Xbgr8888 | Rgbx8888 | Bgrx8888 | Abgr8888 | Rgba8888 | Bgra8888 | Xrgb2101010 | Xbgr2101010 | Rgbx1010102 | Bgrx1010102 | Argb2101010 | Abgr2101010 | Rgba1010102 | Bgra1010102 => Some(4), R8 => Some(1), R16 | Rg88 | Gr88 => Some(2), Rg1616 | Gr1616 => Some(4), Xrgb16161616f | Xbgr16161616f | Argb16161616f | Abgr16161616f => Some(8), Xrgb16161616 | Xbgr16161616 | Argb16161616 | Abgr16161616 => Some(8), _ => None, } } /** Create a GBMDevice, if one with the specified device id exists */ pub fn setup_gbm_device(device: Option) -> Result>, String> { let mut id_list = if let Some(d) = device { vec![d] } else { list_render_device_ids() }; id_list.sort_unstable(); debug!("Candidate device ids for gbm backend: 0x{:x?}", id_list); if id_list.is_empty() { return Ok(None); } unsafe { let bindings = match gbm::new("libgbm.so.1") { Err(x) => { error!("Failed to load libgbm.so.1: {}", x); return Ok(None); } Ok(x) => x, }; for id in id_list { let render_fd = match drm_open_render(id, true) { Ok(x) => x, Err(_) => continue, }; let dev = bindings.gbm_create_device(render_fd.as_raw_fd()); if dev.is_null() { continue; } debug!("Created gbm device at id: 0x{:x}", id); return Ok(Some(Rc::new(GBMDevice { bindings, device: dev, device_id: id, _drm_fd: render_fd, supported_modifiers: OnceCell::new(), }))); } } Ok(None) } /** Import a dmabuf. 
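 *
 * Only a single plane with zero offset is accepted, and the modifier must be
 * linear or "invalid" (driver-chosen); any other layout is rejected below.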
 */
pub fn gbm_import_dmabuf(
    device: &Rc<GBMDevice>,
    mut planes: Vec<AddDmabufPlane>,
    width: u32,
    height: u32,
    drm_format: u32,
) -> Result<GBMDmabuf, String> {
    if width > MAX_DIMENSION || height > MAX_DIMENSION {
        return Err(tag!(
            "DMABUF size to import is too large: ({},{}) > ({},{})",
            width, height, MAX_DIMENSION, MAX_DIMENSION
        ));
    }
    if planes.len() != 1 {
        return Err(tag!(
            "Received {} DMABUF planes when single plane expected",
            planes.len(),
        ));
    };
    let plane = planes.pop().unwrap();
    if plane.plane_idx != 0 {
        return Err(tag!("Incorrect plane index {}!=0", plane.plane_idx,));
    }
    if plane.offset != 0 {
        return Err(tag!(
            "Expected zero offset for gbm import, not {}",
            plane.offset,
        ));
    }
    let flags = match plane.modifier {
        LINEAR_MODIFIER => gbm_bo_flags_GBM_BO_USE_LINEAR | gbm_bo_flags_GBM_BO_USE_RENDERING,
        INVALID_MODIFIER => gbm_bo_flags_GBM_BO_USE_RENDERING,
        _ => {
            return Err(tag!(
                "Importing is only supported with invalid/unspecified or linear modifier, not {:#016x}",
                plane.modifier,
            ));
        }
    };
    let modifier = plane.modifier;
    let stride = plane.stride;
    let mut data = gbm_import_fd_data {
        fd: plane.fd.as_raw_fd(),
        width,
        height,
        stride,
        format: drm_format,
    };
    unsafe {
        let bo = device.bindings.gbm_bo_import(
            device.device,
            GBM_BO_IMPORT_FD,
            &mut data as *mut gbm_import_fd_data as *mut c_void,
            flags,
        );
        /* Keep the fd alive until after the import. */
        drop(plane);
        if bo.is_null() {
            return Err(tag!(
                "Failed to import DMABUF with (format, modifier) = ({:#08x}, {:#016x})",
                drm_format, modifier,
            ));
        }
        Ok(GBMDmabuf {
            device: device.clone(),
            bo,
            width,
            height,
            format: drm_format,
        })
    }
}

/** Create a dmabuf with the specified properties and a modifier chosen from the list,
 * if possible. */
pub fn gbm_create_dmabuf(
    device: &Rc<GBMDevice>,
    width: u32,
    height: u32,
    format: u32,
    modifier_options: &[u64],
) -> Result<(GBMDmabuf, Vec<AddDmabufPlane>), String> {
    if width > MAX_DIMENSION || height > MAX_DIMENSION {
        return Err(tag!(
            "DMABUF size to create is too large: ({},{}) > ({},{})",
            width, height, MAX_DIMENSION, MAX_DIMENSION
        ));
    }
    let (flags, actual_mod) = if modifier_options.contains(&LINEAR_MODIFIER) {
        (
            gbm_bo_flags_GBM_BO_USE_RENDERING | gbm_bo_flags_GBM_BO_USE_LINEAR,
            LINEAR_MODIFIER,
        )
    } else if modifier_options.contains(&INVALID_MODIFIER) {
        (gbm_bo_flags_GBM_BO_USE_RENDERING, INVALID_MODIFIER)
    } else {
        return Err(tag!(
            "Unsupported DMABUF modifier options: ({:#08x},{:#016x?})",
            format, modifier_options,
        ));
    };
    if get_bpp_if_rgb_planar(format).is_none() {
        return Err(tag!(
            "Unsupported DMABUF format or modifier: ({:#08x},{:#016x?})",
            format, modifier_options,
        ));
    }
    unsafe {
        let bo = (device.bindings.gbm_bo_create)(device.device, width, height, format, flags);
        if bo.is_null() {
            return Err(tag!(
                "Failed to create DMABUF with (format, modifier) = ({:#08x}, {:#016x})",
                format, actual_mod,
            ));
        }
        let fd = match (device.bindings.gbm_bo_get_fd)(bo) {
            -1 => {
                (device.bindings.gbm_bo_destroy)(bo);
                return Err(tag!(
                    "Failed to export DMABUF with (format, modifier) = ({:#08x}, {:#016x})",
                    format, actual_mod,
                ));
            }
            x => OwnedFd::from_raw_fd(x),
        };
        /* No failure mechanism is documented */
        let stride = (device.bindings.gbm_bo_get_stride)(bo);
        Ok((
            GBMDmabuf {
                device: device.clone(),
                bo,
                width,
                height,
                format,
            },
            vec![AddDmabufPlane {
                fd,
                plane_idx: 0,
                /* gbm_bo_get_offset was added in 2016 and appears to be used only for
                 * plane indices; */
                offset: 0,
                stride,
                modifier: actual_mod,
            }],
        ))
    }
}

enum MapType {
    Read,
    WriteAll,
}

/** Map a dmabuf using gbm's API.
 *
 * It is unclear how safe multi-threaded access to buffers is.
*/ unsafe fn map_dmabuf( bindings: &gbm, bo: *mut gbm_bo, width: u32, height: u32, map: MapType, ) -> Result<(*mut u8, u32, *mut c_void), String> { /* With i965, the map handle MUST initially point to a NULL pointer; otherwise * the handler may silently exit, sometimes with misleading errno :-( */ let mut map_handle: *mut c_void = std::ptr::null_mut(); /* As of 2022, with amdgpu, GBM_BO_TRANSFER_WRITE invalidates * regions not written to during the mapping, while iris preserves * the original buffer contents. GBM documentation does not say which * WRITE behavior is correct. What the individual drivers do may change * in the future. Specifying READ_WRITE preserves the old contents with * both drivers. */ let flags = match map { MapType::Read => gbm_bo_transfer_flags_GBM_BO_TRANSFER_READ, MapType::WriteAll => gbm_bo_transfer_flags_GBM_BO_TRANSFER_WRITE, }; let mut stride = 0; let data = (bindings.gbm_bo_map)(bo, 0, 0, width, height, flags, &mut stride, &mut map_handle); if data.is_null() { return Err(tag!("Failed to map dmabuf with gbm")); } Ok((data as *mut u8, stride, map_handle)) } unsafe fn unmap_dmabuf(bindings: &gbm, bo: *mut gbm_bo, handle: *mut c_void) { (bindings.gbm_bo_unmap)(bo, handle); } fn stride_adjusted_copy(dst: &mut [u8], dst_stride: u32, src: &[u8], src_stride: u32, height: u32) { let common = dst_stride.min(src_stride); for row in 0..height { dst[(dst_stride * row) as usize..((dst_stride * row) + common) as usize].copy_from_slice( &src[(src_stride * row) as usize..((src_stride * row) + common) as usize], ) } } impl GBMDmabuf { /** Copy out the entire contents of the dmabuf onto an array (which is either densely * packed or uses the nominal stride. */ pub fn copy_from_dmabuf( &mut self, view_row_stride: Option, data: &mut [u8], ) -> Result<(), String> { let data_stride = view_row_stride.unwrap_or( self.width .checked_mul(get_bpp_if_rgb_planar(self.format).unwrap()) .unwrap(), ); unsafe { let (map_data, map_stride, map_handle) = map_dmabuf( &self.device.bindings, self.bo, self.width, self.height, MapType::Read, )?; let mapped_length: usize = map_stride .checked_mul(self.height) .unwrap() .try_into() .unwrap(); assert!(mapped_length <= isize::MAX as usize); let mapped_region = std::slice::from_raw_parts(map_data, mapped_length); stride_adjusted_copy(data, data_stride, mapped_region, map_stride, self.height); unmap_dmabuf(&self.device.bindings, self.bo, map_handle); } Ok(()) } /** Copy data onto the dmabuf. 
*/ pub fn copy_onto_dmabuf( &mut self, view_row_stride: Option, data: &[u8], ) -> Result<(), String> { let data_stride = view_row_stride.unwrap_or( self.width .checked_mul(get_bpp_if_rgb_planar(self.format).unwrap()) .unwrap(), ); unsafe { let (map_data, map_stride, map_handle) = map_dmabuf( &self.device.bindings, self.bo, self.width, self.height, MapType::WriteAll, )?; let mapped_length: usize = map_stride .checked_mul(self.height) .unwrap() .try_into() .unwrap(); assert!(mapped_length <= isize::MAX as usize); let mapped_region = std::slice::from_raw_parts_mut(map_data, mapped_length); stride_adjusted_copy(mapped_region, map_stride, data, data_stride, self.height); unmap_dmabuf(&self.device.bindings, self.bo, map_handle); } Ok(()) } // TODO: deduplicate with Vulkan pub fn nominal_size(&self, view_row_length: Option) -> usize { if let Some(r) = view_row_length { (self.height * r) as usize } else { let bpp = get_bpp_if_rgb_planar(self.format).unwrap(); (self.width * self.height * bpp) as usize } } pub fn get_bpp(&self) -> u32 { get_bpp_if_rgb_planar(self.format).unwrap() } } /** Build table to identify which formats and modifiers are supported. */ fn gbm_build_modifier_table(device: &Rc) -> &BTreeMap> { device.supported_modifiers.get_or_init(|| { let mut supported_modifiers = BTreeMap::new(); /* Identify which modifiers are available at startup. In practice, this is not * too expensive compared to initializing gbm itself */ for format in GBM_SUPPORTED_FORMATS { let mut mods = Vec::new(); unsafe { if (device.bindings.gbm_device_is_format_supported)( device.device, *format, gbm_bo_flags_GBM_BO_USE_RENDERING, ) == 1 { mods.push(INVALID_MODIFIER); } if (device.bindings.gbm_device_is_format_supported)( device.device, *format, gbm_bo_flags_GBM_BO_USE_RENDERING | gbm_bo_flags_GBM_BO_USE_LINEAR, ) == 1 { mods.push(LINEAR_MODIFIER); } } if !mods.is_empty() { supported_modifiers.insert(*format, mods.into_boxed_slice()); } } supported_modifiers }) } /** Return supported GBM modifiers for a format, or empty list if format not supported. * * Restrict to known single-plane RGB-type formats, and to LINEAR or INVALID modifiers. * Other modifiers are not supported, because a) they may require auxiliary control planes * or other features which are awkward or impossible to use in all versions of libgbm; b) * performance can be terrible (using e.g. Strong Uncacheable mappings that forbid pipelining * or caching read/write operations). */ pub fn gbm_supported_modifiers(device: &Rc, format: u32) -> &[u64] { let table = gbm_build_modifier_table(device); if let Some(mods) = table.get(&format) { mods } else { &[] } } /** Get the dev_t identifying the device. */ pub fn gbm_get_device_id(device: &Rc) -> u64 { device.device_id } waypipe-v0.10.5/src/kernel.rs000066400000000000000000000753411507071711100160700ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-3.0-or-later */ /*! Fast diff calculation and application code */ use crate::tag; use nix::errno::Errno; use nix::libc; use std::os::fd::{AsRawFd, OwnedFd}; use std::sync::atomic::{AtomicU32, AtomicU8, Ordering}; /** A memory mapped buffer from a file, which may be externally modified. * * Note: a commonly used library for Wayland applications, memmap2::Mmap has * the flaw that it is unsound if the mapped file is ever modified while actions * are being performed on it. 
It produces a &[u8], which Rust considers
 * immutable; compilation can then use the assumption that reading from the same
 * entry twice will produce the same result, which does not hold if there was a
 * modification in between. The correct thing to do is probably to use
 * &[AtomicU8] buffers, for which the compiler (hopefully) cannot rule out
 * mutations. (Although even this might be insufficient, because the compiler
 * technically only needs to be correct against the other code it makes, and
 * hardware features could in theory escape the memory model; consider foreign
 * and external queue types in Vulkan.)
 *
 * Note: ensuring that get_u32 sees up to date data is not, due to the Atomic type,
 * a soundness issue. Adding atomic fences (Acquire before, Release after) should
 * ensure that any other user sees the updates; in practice, these aren't needed,
 * because Wayland communication acts as synchronization.
 */
pub struct ExternalMapping {
    /* Addr must be 64-aligned */
    addr: *mut libc::c_void,
    size: usize,
}

// SAFETY: only either an Atomic view of data is exposed, or simd access is used
// (which hopefully is not mis-optimized and satisfies a read-once guarantee)
// Creation/drop are not bound to specific threads. `.addr`, `.size` never
// change during lifespan of object, and are not public. Therefore, safe to
// & access from multiple threads, and safe to move between threads.
unsafe impl Send for ExternalMapping {}
unsafe impl Sync for ExternalMapping {}

impl Drop for ExternalMapping {
    fn drop(&mut self) {
        if self.size > 0 {
            let ret = unsafe {
                /* SAFETY: region addr[..size] was mmapped, and is valid munmap input.
                 * `addr` is not null. */
                libc::munmap(self.addr, self.size)
            };
            assert!(ret != libc::EINVAL);
        }
    }
}

impl ExternalMapping {
    pub fn new(fd: &OwnedFd, size: usize, readonce: bool) -> Result<ExternalMapping, String> {
        if size == 0 {
            return Ok(ExternalMapping {
                addr: std::ptr::null_mut(),
                size: 0,
            });
        }
        let the_fd: libc::c_int = fd.as_raw_fd();
        if size > isize::MAX as usize {
            return Err(tag!("Failed to mmap {} bytes, region too large", size));
        }
        let (prot_type, map_type) = if readonce {
            /* For things like keymaps, ICC profiles, or dmabuf feedback lists,
             * which correct programs should never change after sending, and
             * which therefore should never be updated by the other party.
             * File sealing can be used to enforce this. */
            (libc::PROT_READ, libc::MAP_PRIVATE)
        } else {
            (libc::PROT_READ | libc::PROT_WRITE, libc::MAP_SHARED)
        };
        // todo: handle read-only sealing
        // TODO: need a special F_SEAL_WRITE-compatible branch for 'one-shot' reads
        // note: while newer kernels may allow MAP_SHARED+PROT_READ,
        // MAP_PRIVATE+PROT_READ is enough for one-shot
        let addr: *mut libc::c_void = unsafe {
            /* SAFETY: external function call, no references to existing memory;
             * if successful will allocate at least `size` bytes. */
            libc::mmap(std::ptr::null_mut(), size, prot_type, map_type, the_fd, 0)
        };
        if addr == libc::MAP_FAILED {
            Err(tag!("Failed to mmap {}", Errno::last_raw()))
        } else {
            assert!(!addr.is_null(), "Weird system allocating null page");
            /* Verify 64-alignment.
(mmap _should_ page align) */ assert!(addr as usize % 64 == 0); Ok(ExternalMapping { addr, size }) } } pub fn get_u32(&self) -> &[AtomicU32] { // containing everything, rounded down let nblocks = self.size / 4; if nblocks == 0 { &[] } else { unsafe { /* SAFETY: have verified 64-alignment, so ptr is 4-aligned; * Allocated size was >= nblocks * 4, containing this slice * nblocks * 4 <= .size, which was verified to be < isize::MAX * ptr was checked to not be null * &AtomicU32 permits (in-memory-model) modifications at any time * No &mut derived from self.addr will ever be created */ let ptr = self.addr as *const AtomicU32; std::slice::from_raw_parts(ptr, nblocks) } } } pub fn get_u8(&self) -> &[AtomicU8] { // containing everything, exact size if self.size == 0 { &[] } else { unsafe { /* SAFETY: no alignment requirement * Allocated size was >= self.size, and have checked self.size < isize::MAX * ptr was checked to not be null * &AtomicU8 permits (in-memory-model) modifications at any time * No &mut derived from self.addr will ever be created */ let ptr = self.addr as *const AtomicU8; std::slice::from_raw_parts(ptr, self.size) } } } } pub fn construct_diff( diff: &mut [u8], fd: &ExternalMapping, intervals: &[(u32, u32)], reference: &mut [u8], // of length end-start reference_base: u32, ) -> usize { let target = &fd.get_u32(); let mut output_len = 0; for intv in intervals { assert!(intv.0 % 64 == 0 && intv.1 % 64 == 0); assert!(reference_base <= intv.0 && intv.0 < intv.1); output_len += construct_diff_segment_two_iter::( &mut diff[output_len..], ShmIterator::new( &target[(intv.0 / 4) as usize..(intv.1 / 4) as usize], &mut reference [(intv.0 - reference_base) as usize..(intv.1 - reference_base) as usize], ), intv.0, 16, ) as usize; } output_len } #[cfg(target_arch = "x86_64")] #[target_feature(enable = "avx2,lzcnt,bmi1")] unsafe fn construct_diff_segment_two_avx2( mut diff: &mut [u8], target: &[u8], reference: &mut [u8], reference_base: u32, skip_gap_len: usize, ) -> u32 { use std::arch::x86_64::*; /* 1. load and operate on chunks of 64 bytes at a time (i.e, by cache line). * 2. Use separate loops depending on whether data is being written to the diff */ let mut i = 0; let nslabs = target.len() / 64; assert!(reference.len() == target.len()); assert!(target.len() % 64 == 0); assert!(target.as_ptr() as usize % 64 == 0); assert!(reference.as_ptr() as usize % 64 == 0); assert!(diff.as_ptr() as usize % 4 == 0); assert!(diff.len() >= target.len() + 8); assert!(diff.len() < (u32::MAX as usize)); let refbase_block: u32 = reference_base / 4; /* At least one line, otherwise can have errors */ assert!(skip_gap_len >= 16); let skip_slab_len = skip_gap_len / 16; assert!(skip_slab_len >= 1); let ones = _mm256_set1_epi64x(u64::MAX as i64); let mut dc = 0; while i < nslabs { /* Scan for next difference */ let (ctrl_blocks, x) = diff.split_at_mut(8); diff = x; dc += 2; /* start of diff position */ let idc = dc; /* Start of diff interval */ let mut trailing_unchanged = 0; // let mut last_nontrivial_trailing = 0; let mut start = 0; while i < nslabs { // Q: are nontemporal memory hints worthwhile when buffer size is large? 
// (_mm256_stream_load_si256 _may_ be useful for certain GPU-visible memory types, // but not elsewhere) let t1 = _mm256_load_si256(target.as_ptr().add(64 * i) as *const _); let t2 = _mm256_load_si256(target.as_ptr().add(64 * i + 32) as *const _); let r1 = _mm256_load_si256(reference.as_ptr().add(64 * i) as *const _); let r2 = _mm256_load_si256(reference.as_ptr().add(64 * i + 32) as *const _); let d1 = _mm256_cmpeq_epi32(t1, r1); let d2 = _mm256_cmpeq_epi32(t2, r2); // There are a few ways to implement this: movemask x2, combine, test; or blend, test, movemask x1 // In practice, time is probably mostly spent waiting for memory, so the number of instructions' // in this loop is not too important. /* let m1 = _mm256_movemask_epi8(d1) as u32; let m2 = _mm256_movemask_epi8(d2) as u32; let mask: u64 = ((m2 as u64) << 32) | (m1 as u64); if !mask != 0 { let ncom = (_tzcnt_u64(!mask) >> 2) as usize; trailing_unchanged = (_lzcnt_u64(!mask) >> 2) as usize; // */ let merged = _mm256_blend_epi16::<0b01010101>(d1, d2); let identical: bool = _mm256_testc_si256(merged, ones) != 0; if !identical { let merged_mask = _mm256_movemask_epi8(merged) as u32; let part1 = 0b00110011001100110011001100110011; let part2 = !part1; let new_mask = (((merged_mask & part1) as u64) << 32) | (merged_mask & part2) as u64; let ncom = (_tzcnt_u64(!new_mask) >> 2) as usize; trailing_unchanged = (_lzcnt_u64(!new_mask) >> 2) as usize; // */ _mm256_store_si256(reference.as_mut_ptr().add(64 * i) as *mut _, t1); _mm256_store_si256(reference.as_mut_ptr().add(64 * i + 32) as *mut _, t2); // last_nontrivial_trailing = trailing_unchanged; let block_shift = ncom & 7; let esmask: u64 = 0xffffffffu64 << (block_shift * 4); let halfsize = _mm_set_epi64x(0i64, esmask as i64); let estoremask = _mm256_cvtepi8_epi64(halfsize); _mm256_maskstore_epi32( /* the overwritten portion gets masked out, but can technically cover _before_ the diff array start. 
alternatively use _mm256_permutevar8x32_epi32 */ diff.as_mut_ptr().sub(block_shift * 4) as *mut _, estoremask, if ncom < 8 { t1 } else { t2 }, ); if ncom < 8 { _mm256_storeu_si256(diff.as_mut_ptr().add(4 * (8 - block_shift)) as *mut _, t2); } dc += 16 - ncom; start = 16 * i as u32 + ncom as u32 + refbase_block; i += 1; break; } i += 1; } // let mut nclear = 0; /* Produce diff */ while i < nslabs { let t1 = _mm256_load_si256(target.as_ptr().add(64 * i) as *const _); let t2 = _mm256_load_si256(target.as_ptr().add(64 * i + 32) as *const _); let r1 = _mm256_load_si256(reference.as_ptr().add(64 * i) as *const _); let r2 = _mm256_load_si256(reference.as_ptr().add(64 * i + 32) as *const _); let d1 = _mm256_cmpeq_epi32(t1, r1); let d2 = _mm256_cmpeq_epi32(t2, r2); let m1 = _mm256_movemask_epi8(d1) as u32; let m2 = _mm256_movemask_epi8(d2) as u32; let mask = ((m2 as u64) << 32) | (m1 as u64); let clear = (!mask == 0) as usize; let trail_count = (_lzcnt_u64(!mask) >> 2) as usize; trailing_unchanged = trailing_unchanged * clear + trail_count; // nclear = nclear * clear + clear; // last_nontrivial_trailing = clear * last_nontrivial_trailing + (1 - clear) * trail_count; _mm256_storeu_si256(diff.as_mut_ptr().add(4 * (dc - idc)) as *mut _, t1); _mm256_storeu_si256(diff.as_mut_ptr().add(4 * (dc - idc) + 32) as *mut _, t2); dc += 16; // todo: consider requiring 'clear' X times in a row, rather than branching based off trailing_unchanged // if nclear > skip_slab_len { if trailing_unchanged > skip_gap_len { i += 1; break; } _mm256_store_si256(reference.as_ptr().add(64 * i) as *mut _, t1); _mm256_store_si256(reference.as_ptr().add(64 * i + 32) as *mut _, t2); i += 1; } if i >= nslabs && dc == idc { /* No change detected in this run */ dc -= 2; break; } // assert!(last_nontrivial_trailing + 16 * nclear == trailing_unchanged); // trailing_unchanged = 0; dc -= trailing_unchanged; let end = 16 * i as u32 - trailing_unchanged as u32 + refbase_block; ctrl_blocks[..4].copy_from_slice(&start.to_le_bytes()); ctrl_blocks[4..].copy_from_slice(&end.to_le_bytes()); diff = &mut diff[(4 * (end - start)) as usize..]; } (dc * 4) as u32 } pub trait DiffIterator { type Next: DiffWriteback; fn next(self) -> Option<(Self::Next, [u8; 64], [u8; 64])>; } pub trait DiffWriteback { type Next: DiffIterator; fn next(self, store: bool) -> Self::Next; } struct LocalIterator<'a> { target: &'a [u8], reference: &'a mut [u8], pos: usize, } struct LocalWriteback<'a> { target: &'a [u8], reference: &'a mut [u8], pos: usize, values: [u8; 64], } impl LocalIterator<'_> { fn new<'b>(target: &'b [u8], reference: &'b mut [u8]) -> LocalIterator<'b> { assert!(target.len() == reference.len()); assert!(target.len() % 64 == 0); assert!(target.as_ptr() as usize % 64 == 0); assert!(reference.as_ptr() as usize % 64 == 0); LocalIterator { target, reference, pos: 0, } } } impl<'a> DiffIterator for LocalIterator<'a> { type Next = LocalWriteback<'a>; fn next(self) -> Option<(Self::Next, [u8; 64], [u8; 64])> { if self.pos >= self.target.len() { return None; } let values: [u8; 64] = self.target[self.pos..self.pos + 64].try_into().unwrap(); let refvals: [u8; 64] = self.reference[self.pos..self.pos + 64].try_into().unwrap(); Some(( LocalWriteback { target: self.target, reference: self.reference, pos: self.pos, values, }, values, refvals, )) } } impl<'a> DiffWriteback for LocalWriteback<'a> { type Next = LocalIterator<'a>; fn next(self, store: bool) -> Self::Next { if store { self.reference[self.pos..self.pos + 64].copy_from_slice(&self.values); } LocalIterator { 
target: self.target, reference: self.reference, pos: self.pos + 64, } } } struct ShmIterator<'a> { target: &'a [AtomicU32], reference: &'a mut [u8], pos: usize, } struct ShmWriteback<'a> { target: &'a [AtomicU32], reference: &'a mut [u8], pos: usize, values: [u8; 64], } impl ShmIterator<'_> { fn new<'b>(target: &'b [AtomicU32], reference: &'b mut [u8]) -> ShmIterator<'b> { assert!(target.len() * 4 == reference.len()); assert!(target.len() % 16 == 0); assert!(target.as_ptr() as usize % 64 == 0); assert!(reference.as_ptr() as usize % 64 == 0); ShmIterator { target, reference, pos: 0, } } } impl<'a> DiffIterator for ShmIterator<'a> { type Next = ShmWriteback<'a>; fn next(self) -> Option<(Self::Next, [u8; 64], [u8; 64])> { if self.pos >= self.target.len() { return None; } let mut values = [0_u8; 64]; for i in 0..16 { values[4 * i..4 * (i + 1)].copy_from_slice( &self.target[self.pos + i] .load(Ordering::Relaxed) .to_le_bytes(), ); } let refvals: [u8; 64] = self.reference[(4 * self.pos)..(4 * self.pos + 64)] .try_into() .unwrap(); Some(( ShmWriteback { target: self.target, reference: self.reference, pos: self.pos, values, }, values, refvals, )) } } impl<'a> DiffWriteback for ShmWriteback<'a> { type Next = ShmIterator<'a>; fn next(self, store: bool) -> Self::Next { if store { self.reference[(4 * self.pos)..(4 * self.pos + 64)].copy_from_slice(&self.values); } ShmIterator { target: self.target, reference: self.reference, pos: self.pos + 16, } } } /* A cache-line oriented diff implementation, without SIMD */ fn construct_diff_segment_two_iter( mut diff: &mut [u8], mut iter: A, reference_base: u32, skip_gap_len: usize, ) -> u32 where A: DiffIterator, B: DiffWriteback, { assert!(reference_base % 4 == 0); assert!(diff.as_ptr() as usize % 4 == 0); let refbase_block: u32 = reference_base / 4; assert!(skip_gap_len >= 16); let mut dc: usize = 0; let mut line_no = 0; 'outer: loop { let (ctrl_blocks, x) = diff.split_at_mut(8); diff = x; dc += 2; /* start of diff position */ let idc = dc; /* Start of diff interval */ let mut trailing_unchanged: usize; let start: u32; /* Scan: unchanged region */ loop { let Some((res, values, refvals)) = iter.next() else { /* Nothing to record */ dc -= 2; break 'outer; }; /* Problem: llvm inserts a memcmp here, even though it is _much_ less efficient * (requires placing values on the stack?) */ if values != refvals { /* Leading/trailing difference calculations are expensive without SIMD, * so round up the leading/trailing diff portions; this has low * overhead iff changed runs are long. 
                 * (While doing this usually worsens compression ratio, it can in
                 * rare instances reduce the compressed size) */
                let leading_unchanged = if values[..32] == refvals[..32] { 8 } else { 0 };
                trailing_unchanged = 0;
                diff[..(4 * (16 - leading_unchanged))]
                    .copy_from_slice(&values[4 * leading_unchanged..]);
                dc += 16 - leading_unchanged;
                start = 16 * line_no as u32 + leading_unchanged as u32 + refbase_block;
                iter = res.next(true);
                line_no += 1;
                break;
            } else {
                iter = res.next(false);
                line_no += 1;
            }
        }
        /* Scan: changed region */
        loop {
            let Some((res, values, refvals)) = iter.next() else {
                /* Exit, end of input: */
                dc -= trailing_unchanged;
                let end = (16 * line_no - trailing_unchanged) as u32 + refbase_block;
                ctrl_blocks[..4].copy_from_slice(&start.to_le_bytes());
                ctrl_blocks[4..].copy_from_slice(&end.to_le_bytes());
                break 'outer;
            };
            /* Write unconditionally -- this is technically not necessary if values match, but
             * doing this avoids a branch */
            iter = res.next(true);
            let clear = unsafe {
                // Workaround to not compile into memcmp, which would make code twice as slow
                // SAFETY: in both cases, input and output are plain data and have size 64
                let a: [u64; 8] = std::mem::transmute(values);
                let b: [u64; 8] = std::mem::transmute(refvals);
                (a[0] == b[0])
                    && (a[1] == b[1])
                    && (a[2] == b[2])
                    && (a[3] == b[3])
                    && (a[4] == b[4])
                    && (a[5] == b[5])
                    && (a[6] == b[6])
                    && (a[7] == b[7])
            };
            // let clear = values == refvals;
            let trail_count = (clear as usize) * 16;
            trailing_unchanged = trailing_unchanged * (clear as usize) + trail_count;
            diff[4 * (dc - idc)..(4 * (dc - idc)) + 64].copy_from_slice(&values);
            dc += 16;
            if trailing_unchanged > skip_gap_len {
                /* Exit, diff interval ended */
                dc -= trailing_unchanged;
                let end = (16 + 16 * line_no - trailing_unchanged) as u32 + refbase_block;
                ctrl_blocks[..4].copy_from_slice(&start.to_le_bytes());
                ctrl_blocks[4..].copy_from_slice(&end.to_le_bytes());
                diff = &mut diff[(4 * (end - start)) as usize..];
                line_no += 1;
                break;
            }
            line_no += 1;
        }
    }
    (dc * 4).try_into().unwrap()
}

pub fn construct_diff_segment_two(
    diff: &mut [u8],
    target: &[u8],
    reference: &mut [u8],
    reference_base: u32,
    skip_gap_len: usize,
) -> u32 {
    #[cfg(target_arch = "x86_64")]
    if is_x86_feature_detected!("avx2")
        && is_x86_feature_detected!("lzcnt")
        && is_x86_feature_detected!("bmi1")
    {
        return unsafe {
            // SAFETY: feature detection matches target features
            construct_diff_segment_two_avx2(diff, target, reference, reference_base, skip_gap_len)
        };
    }
    assert!(diff.len() >= target.len() + 8);
    construct_diff_segment_two_iter::<LocalIterator, LocalWriteback>(
        diff,
        LocalIterator::new(target, reference),
        reference_base,
        skip_gap_len,
    )
}

pub fn copy_tail_if_diff(
    diff_tail: &mut [u8],
    fd: &ExternalMapping,
    tail_len: usize,
    reference: &mut [u8],
) -> bool {
    assert!(reference.len() == tail_len);
    let byte_level = &fd.get_u8();
    let start = byte_level.len() - tail_len;
    /* only read from the mmapped region once, to avoid race conditions */
    let mut any_change = false;
    for i in 0..tail_len {
        diff_tail[i] = byte_level[start + i].load(Ordering::Relaxed);
        if diff_tail[i] != reference[i] {
            any_change = true;
        }
        reference[i] = diff_tail[i];
    }
    any_change
}

pub fn copy_from_mapping(dest: &mut [u8], fd: &ExternalMapping, start: usize) {
    let byte_level = &fd.get_u8();
    for i in 0..dest.len() {
        dest[i] = byte_level[i + start].load(Ordering::Relaxed);
    }
}

pub fn copy_onto_mapping(src: &[u8], fd: &ExternalMapping, start: usize) {
    let byte_level = &fd.get_u8();
    for i in 0..src.len() {
        byte_level[i + start].store(src[i], Ordering::Relaxed);
    }
}

pub fn apply_diff_one(
    diff:
&[u8], ntrailing: usize, /* Starting point in the mapping of the mirror */ mir_start: usize, mirror: &mut [u8], ) -> Result<(), String> { assert!((diff.len() - ntrailing) % 4 == 0); let mlen = mirror.len(); let nblocks = (diff.len() - ntrailing) / 4; // todo: check that diff() is 4-aligned let buf_end = mir_start + mlen; let mut pos: usize = 0; while pos < 4 * nblocks { // todo: out of bounds read? let start = u32::from_le_bytes(diff[pos..pos + 4].try_into().unwrap()) as usize; let end = u32::from_le_bytes(diff[pos + 4..pos + 8].try_into().unwrap()) as usize; if end <= start || end > buf_end || pos + 4 * (end - start) > 4 * nblocks { return Err(tag!( "copy interval invalid: pos {} segment [{},{}) mirror range [{},{}) remaining {}", pos, start, end, mir_start / 4, buf_end / 4, nblocks - pos / 4 )); } pos += 8; mirror[(start * 4 - mir_start)..(end * 4 - mir_start)] .copy_from_slice(&diff[pos..pos + 4 * (end - start)]); pos += (end - start) * 4; } if ntrailing > 0 { let offset = mlen - ntrailing; mirror[offset..].copy_from_slice(&diff[nblocks * 4..nblocks * 4 + ntrailing]); } Ok(()) } pub fn apply_diff( diff: &[u8], ntrailing: usize, fd: &ExternalMapping, /* Starting point, in the external mapping, of the mirror */ mir_start: usize, mirror: &mut [u8], ) -> Result<(), &'static str> { let target = &fd.get_u32(); assert!((diff.len() - ntrailing) % 4 == 0); let nblocks = (diff.len() - ntrailing) / 4; let buf_end = target.len(); let mut pos: usize = 0; while pos < 4 * nblocks { let start = u32::from_le_bytes(diff[pos..pos + 4].try_into().unwrap()) as usize; let end = u32::from_le_bytes(diff[pos + 4..pos + 8].try_into().unwrap()) as usize; if end <= start || end > buf_end || pos + 8 + 4 * (end - start) > 4 * nblocks { return Err("Copy interval invalid"); } pos += 8; mirror[(start * 4 - mir_start)..(end * 4 - mir_start)] .copy_from_slice(&diff[pos..pos + 4 * (end - start)]); for i in 0..(end - start) { target[i + start].store( u32::from_le_bytes(diff[(pos + 4 * i)..(pos + 4 * (i + 1))].try_into().unwrap()), Ordering::Relaxed, ); } pos += (end - start) * 4; } if ntrailing > 0 { let byte_level = &fd.get_u8(); let offset = byte_level.len() - ntrailing; for i in 0..ntrailing { byte_level[offset + i].store(diff[nblocks * 4 + i], Ordering::Relaxed); mirror[offset + i - mir_start] = diff[nblocks * 4 + i]; } } Ok(()) } /* Report the interval (in bytes) in which the diff will update data */ pub fn compute_diff_span( diff: &[u8], ntrailing: usize, buf_len: usize, ) -> Result<(usize, usize), &'static str> { let mut start = buf_len; let mut end = 0; assert!((diff.len() - ntrailing) % 4 == 0); let nblocks = (diff.len() - ntrailing) / 4; if nblocks == 0 { if ntrailing == 0 { return Err("computed diff span on empty diff"); } else { return Ok((buf_len - ntrailing, buf_len)); } } let mut pos = 0; while pos < 4 * nblocks { // todo: error on out of bounds read? 
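        // Illustrative note (added commentary, not from the original source):
        // each diff segment consists of a control block of two little-endian u32
        // values, `start` and `end`, counted in 4-byte units, followed by the
        // (end - start) replacement words. For example, the bytes
        //   03 00 00 00  05 00 00 00  aa aa aa aa  bb bb bb bb
        // replace the two u32 words at block indices 3 and 4. Any `ntrailing`
        // bytes of the diff are applied verbatim to the tail of the buffer.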
let span_start = u32::from_le_bytes(diff[pos..pos + 4].try_into().unwrap()) as usize; let span_end = u32::from_le_bytes(diff[pos + 4..pos + 8].try_into().unwrap()) as usize; pos += 8; pos += (span_end - span_start) * 4; start = std::cmp::min(start, span_start); end = std::cmp::max(end, span_end); } assert!(start < end); if ntrailing > 0 { Ok((start * 4, buf_len)) } else { Ok((start * 4, end * 4)) } } #[test] fn test_buffer_replication() { use crate::util::AlignedArray; let local_fd = nix::sys::memfd::memfd_create( c"/test", nix::sys::memfd::MemFdCreateFlag::MFD_CLOEXEC | nix::sys::memfd::MemFdCreateFlag::MFD_ALLOW_SEALING, ) .unwrap(); let size = 4096; nix::unistd::ftruncate(&local_fd, size as libc::off_t).unwrap(); let mapping: ExternalMapping = ExternalMapping::new(&local_fd, size, false).unwrap(); let mut reference_arr = AlignedArray::new(size); let reference = reference_arr.get_mut(); /* keeping the mapping as all-zero, modify the reference. The exact * values aren't so important here */ reference[123..789].fill(1u8); reference[1023..1889].fill(1u8); reference[1901..2000].fill(1u8); reference[size - 1] = 1; let mut diff = vec![0; size + 16]; let intvs = &[(0, size as u32)]; let diff_len = construct_diff(&mut diff, &mapping, intvs, &mut reference[..], 0); println!("diff len (from fd): {}", diff_len); apply_diff(&diff[..diff_len], 0, &mapping, 0, reference).unwrap(); assert!(reference.iter().all(|x| *x == 0)); } #[test] fn test_memory_replication() { use crate::util::AlignedArray; fn test_pattern(name: &str, diff_start_pos: usize, fill: &dyn Fn(&mut [u8])) { let size = 4096; let mut mem_arr = AlignedArray::new(size); let mut reference_arr = AlignedArray::new(size); let mut copy_arr = AlignedArray::new(size); let mem = mem_arr.get_mut(); let reference = reference_arr.get_mut(); let copy = copy_arr.get_mut(); /* Set initial pattern */ for (i, x) in mem.chunks_exact_mut(4).enumerate() { x.copy_from_slice(&((i + 0xf) as u32).to_le_bytes()); } reference.copy_from_slice(mem); copy.copy_from_slice(mem); fill(mem); let start = std::time::Instant::now(); let mut diff = vec![0; size + 16]; let diff_len = construct_diff_segment_two( &mut diff, &mem[diff_start_pos..], &mut reference[diff_start_pos..], diff_start_pos as u32, 16, ); let end = std::time::Instant::now(); println!( "pattern {}, diff len (from memory): {}, elapsed {:.6} msecs", name, diff_len, end.duration_since(start).as_secs_f64() * 1e3 ); apply_diff_one( &diff[..diff_len as usize], 0, diff_start_pos, &mut copy[diff_start_pos..], ) .unwrap(); /* Check that replication worked */ assert!(copy[diff_start_pos..] == mem[diff_start_pos..]); /* Check that reference was updated */ assert!(reference[diff_start_pos..] 
== mem[diff_start_pos..]); } test_pattern("no change", 1024, &|_| ()); test_pattern("all change", 64, &|x| x.fill(1)); test_pattern("irregular", 64, &|x| { x[123..789].fill(1); x[1023..1889].fill(1); x[1901..2000].fill(1); x[x.len() - 1] = 1; }); test_pattern("mark some even lines", 64, &|x| { for i in 5..23 { x[128 * i] = 1u8; } }); test_pattern("mark every fourth line", 64, &|x| { for i in 0..x.len() / 256 { x[256 * i] = 1u8; } }); test_pattern("small gaps", 64, &|x| { x[123..246].fill(1); x[369..1200].fill(1); x[1421..2000].fill(1); }); for osc in [1, 7, 63, 64, 65, 127, 128, 129, 255, 256, 257] { let s = format!("alternating {}, with gap", osc); test_pattern(&s, 64, &|x| { for i in 0..x.len() / (2 * osc) { let end = std::cmp::min(x.len(), (2 * i + 1) * osc); x[(2 * i * osc)..end].fill(1); } x[1500..2500].fill(1); }); } test_pattern("short start", 0, &|x| { x[0..4].fill(1); }); } waypipe-v0.10.5/src/main.rs000066400000000000000000002335701507071711100155340ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-3.0-or-later */ /*! Main executable for Waypipe, handles all subcommands. */ use clap::{value_parser, Arg, ArgAction, Command}; use log::{debug, error, Log, Record}; use nix::errno::Errno; use nix::libc; use nix::poll::{PollFd, PollFlags}; use nix::sys::{signal, socket, wait}; use nix::{fcntl, unistd}; use std::collections::BTreeMap; use std::env; use std::ffi::{OsStr, OsString}; use std::fmt; use std::io::Write; use std::os::fd::{AsFd, AsRawFd, FromRawFd, OwnedFd, RawFd}; use std::os::unix::ffi::OsStrExt; use std::os::unix::process::CommandExt; use std::path::{Path, PathBuf}; use std::process::Child; use std::str::FromStr; use std::sync::atomic::{AtomicBool, Ordering}; mod bench; mod compress; mod damage; mod dmabuf; mod gbm; mod kernel; mod mainloop; mod mirror; mod platform; mod read; mod secctx; mod stub; mod tracking; mod util; mod video; mod wayland; mod wayland_gen; use crate::mainloop::*; use crate::util::*; /** Logger configuration data */ struct Logger { max_level: log::LevelFilter, pid: u32, color_output: bool, anti_staircase: bool, color: usize, label: &'static str, } impl Log for Logger { fn enabled(&self, meta: &log::Metadata<'_>) -> bool { meta.level() <= self.max_level } fn log(&self, record: &Record<'_>) { if record.level() > self.max_level { return; } let time = std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH); let t = if let Ok(t) = time { (t.as_nanos() % 100000000000u128) / 1000u128 } else { 0 }; let (esc1a, esc1b, esc1c) = if self.color_output { let c = if self.color == 0 { "36" } else if self.color == 1 { "34" } else { "35" }; if record.level() <= log::Level::Error { ("\x1b[0;", c, ";1m") } else { ("\x1b[0;", c, "m") } } else { ("", "", "") }; let esc2 = if self.color_output { "\x1b[0m" } else { "" }; let esc3 = if self.anti_staircase { "\r\n" } else { "\n" }; let lvl_str: &str = match record.level() { log::Level::Error => "ERR", log::Level::Warn => "Wrn", log::Level::Debug => "dbg", log::Level::Info => "inf", log::Level::Trace => "trc", }; const MAX_LOG_LEN: usize = 512; let mut buf = [0u8; MAX_LOG_LEN]; let mut cursor = std::io::Cursor::new(&mut buf[..MAX_LOG_LEN - 5]); let _ = write!( &mut cursor, "{}{}{}[{:02}.{:06} {} {}({}) {}:{}]{} {}{}", esc1a, esc1b, esc1c, t / 1000000u128, t % 1000000u128, lvl_str, self.label, self.pid, record .file() .unwrap_or("src/unknown") .strip_prefix("src/") .unwrap(), record.line().unwrap_or(0), esc2, record.args(), esc3 ); let mut str_end = cursor.position() as usize; if str_end >= MAX_LOG_LEN - 9 { 
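            /* Illustrative note (added commentary, not from the original source):
             * `write!` into the fixed-size cursor may stop mid-codepoint; e.g. a
             * message ending in 'é' (encoded as 0xC3 0xA9) could leave only the
             * 0xC3 byte in the buffer. `Utf8Error::valid_up_to` trims back to the
             * last complete character before the "..." truncation marker is
             * appended below. */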
/* Deal with possible partial UTF-8 char */ str_end = match std::str::from_utf8(&buf[..str_end]) { Ok(x) => x.len(), Err(y) => y.valid_up_to(), }; } if str_end >= MAX_LOG_LEN - 9 { /* Assume message was truncated */ assert!(str_end <= MAX_LOG_LEN - 5, "{} {}", str_end, MAX_LOG_LEN); buf[str_end..str_end + 3].fill(b'.'); if self.anti_staircase { buf[str_end + 3] = b'\r'; buf[str_end + 4] = b'\n'; str_end += 5; } else { buf[str_end + 3] = b'\n'; str_end += 4; } } let handle = &mut std::io::stderr().lock(); let _ = handle.write_all(&buf[..str_end]); let _ = handle.flush(); } fn flush(&self) { /* not needed */ } } /** Return a random list of 10 alphanumeric characters */ fn get_rand_tag() -> Result<[u8; 10], String> { let mut rand_buf = [0_u8; 16]; getrandom::getrandom(&mut rand_buf).map_err(|x| tag!("Failed to get random bits: {}", x))?; let mut n: u128 = u128::from_le_bytes(rand_buf); // Note: log2(62^10) ≈ 59.5 which is ≪ 128, so the resulting // random strings are only very slightly biased. let mut rand_tag = [0u8; 10]; for i in rand_tag.iter_mut() { let v = (n % 62) as u32; n /= 62; *i = if v < 26 { (v + ('a' as u32)) as u8 } else if v < 52 { (v - 26 + ('A' as u32)) as u8 } else { (v - 52 + ('0' as u32)) as u8 } } Ok(rand_tag) } /** Flags for `open()` to open a reference to a directory */ #[cfg(target_os = "linux")] fn dir_flags() -> fcntl::OFlag { /* O_PATH is from 2.6.39 Linux */ fcntl::OFlag::O_PATH | fcntl::OFlag::O_DIRECTORY } #[cfg(not(target_os = "linux"))] fn dir_flags() -> fcntl::OFlag { fcntl::OFlag::O_DIRECTORY } /** Get a file descriptor corresponding to a path, suitable for `fchdir()` */ fn open_folder(p: &Path) -> Result { let raw_fd = fcntl::open( p, dir_flags() | fcntl::OFlag::O_CLOEXEC | fcntl::OFlag::O_NOCTTY, nix::sys::stat::Mode::empty(), ) .map_err(|x| tag!("Failed to open folder '{:?}': {}", p, x))?; Ok(unsafe { // SAFETY: freshly created, checked valid, exclusively owned OwnedFd::from_raw_fd(raw_fd) }) } /** Connection information for a VSOCK socket */ #[derive(Debug, Copy, Clone)] struct VSockConfig { // todo: use option for (to_host, cid)? to_host: bool, cid: u32, port: u32, } /** Specification for a Unix (or VSOCK) socket that Waypipe may bind or connect to. 
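 *
 * Examples (illustrative, not from the original source): `--socket /tmp/waypipe.sock`
 * yields `SocketSpec::Unix`, while with `--vsock`, `--socket 3:1234` yields
 * `SocketSpec::VSock` with CID 3 and port 1234, and a bare `--socket 1234`
 * uses the default CID `VMADDR_CID_HOST`.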
*/
#[derive(Debug, Clone)]
enum SocketSpec {
    VSock(VSockConfig),
    Unix(PathBuf),
}

/* Wrapper for `libc::VMADDR_CID_HOST` or 0 if not supported */
#[cfg(target_os = "linux")]
const VMADDR_CID_HOST: u32 = libc::VMADDR_CID_HOST;
#[cfg(not(target_os = "linux"))]
const VMADDR_CID_HOST: u32 = 0;

impl FromStr for VSockConfig {
    type Err = &'static str;
    fn from_str(mut s: &str) -> Result<Self, Self::Err> {
        const FAILURE: &str = "VSOCK spec should have format [[s]CID:]port";
        let (to_host, cid) = if let Some((mut prefix, suffix)) = s.split_once(':') {
            let to_host = if prefix.starts_with('s') {
                prefix = &prefix[1..];
                true
            } else {
                false
            };
            let cid = prefix.parse::<u32>().map_err(|_| FAILURE)?;
            s = suffix;
            (to_host, cid)
        } else {
            (false, VMADDR_CID_HOST)
        };
        let port = s.parse::<u32>().map_err(|_| FAILURE)?;
        Ok(VSockConfig { to_host, cid, port })
    }
}

impl fmt::Display for VSockConfig {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        /* Print the bare port only for the default CID, so that the output
         * round-trips through the FromStr parser above. */
        if self.cid == VMADDR_CID_HOST {
            write!(f, "{}", self.port)
        } else {
            let prefix = if self.to_host { "s" } else { "" };
            write!(f, "{}{}:{}", prefix, self.cid, self.port)
        }
    }
}

/** Remove the dead child indicated by `pid` from the connection set */
fn prune_connections(connections: &mut BTreeMap<u32, Child>, pid: nix::unistd::Pid) {
    if let Some(mut child) = connections.remove(&(pid.as_raw() as u32)) {
        debug!("Waiting for dead child {} to reveal status", child.id());
        let _ = child.wait();
        debug!("Status received");
    } else {
        let status = wait::waitpid(pid, Some(wait::WaitPidFlag::WNOHANG));
        error!(
            "Received SIGCHLD for unexpected child {}: {:?}",
            pid.as_raw(),
            status
        );
    }
}

/** Wait until all processes in the set have died */
fn wait_for_connnections(mut connections: BTreeMap<u32, Child>) {
    while let Some((_, mut child)) = connections.pop_first() {
        debug!("Waiting for dead child {} to reveal status", child.id());
        let _ = child.wait();
        debug!("Status received");
    }
}

/** Create the argument list for `waypipe client-conn` or `waypipe server-conn`
 *
 * New strings allocated for this will be stored into `strings`.
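 *
 * For illustration (hypothetical values, not from the original source), a
 * client-side invocation might produce an argument list along the lines of
 *   `-s /tmp/waypipe-client-abcDE12345.sock --threads 0 -c lz4 client-conn`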
*/ fn build_connection_command<'a>( strings: &'a mut Vec, socket_path: &'a SocketSpec, options: &'a Options, client: bool, anti_staircase: bool, ) -> Vec<&'a OsStr> { let mut args: Vec<&'a OsStr> = Vec::new(); strings.push(OsString::from(options.compression.to_string())); strings.push(OsString::from(options.threads.to_string())); strings.push(OsString::from(format!("--video={}", options.video))); match socket_path { SocketSpec::VSock(x) => strings.push(format!("{}", x).into()), SocketSpec::Unix(x) => strings.push(x.into()), }; let comp_str = &strings[0]; let thread_str = &strings[1]; let vid_str = &strings[2]; let socket_str = &strings[3]; /* Unlike the parent process, use short option names for these, making the command * line shorter and easier to skim */ args.push(OsStr::new("-s")); args.push(socket_str); if matches!(socket_path, SocketSpec::VSock(_)) { args.push(OsStr::new("--vsock")); } if options.debug { args.push(OsStr::new("-d")); } if options.no_gpu { args.push(OsStr::new("-n")); } args.push(OsStr::new("--threads")); args.push(thread_str); args.push(OsStr::new("-c")); args.push(comp_str); if !options.title_prefix.is_empty() { args.push(OsStr::new("--title-prefix")); args.push(OsStr::new(&options.title_prefix)); } if options.video.format.is_some() { args.push(vid_str); } if let Some(d) = &options.drm_node { assert!(!client); args.push(OsStr::new("--drm-node")); args.push(OsStr::new(d)); } if anti_staircase { args.push(OsStr::new("--anti-staircase")); } if let Some(ref path) = options.debug_store_video { args.push(OsStr::new("--test-store-video")); args.push(path.as_os_str()); } if options.test_skip_vulkan { args.push(OsStr::new("--test-skip-vulkan")); } if options.test_no_timeline_export { args.push(OsStr::new("--test-no-timeline-export")); } if options.test_no_binary_semaphore_import { args.push(OsStr::new("--test-no-binary-semaphore-import")); } if client { args.push(OsStr::new("client-conn")); } else { args.push(OsStr::new("server-conn")); } args } /** Send connection header and run the main proxy loop for a new * `waypipe server` connection. `link_fd` is expected to be a _blocking_ * socket. */ fn handle_server_conn( link_fd: OwnedFd, wayland_fd: OwnedFd, opts: &Options, wire_version_override: Option, ) -> Result<(), String> { /* Note: the last 12 bytes of the header will stay at zero. They would * only need be set to a unique (random) value if reconnection support were * implemented. 
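     *
     * Header bit layout, as reconstructed from the code below (illustrative
     * commentary, not an authoritative wire specification):
     *   bits 16 and up: upper bits of the wire version (version >> 4)
     *   bits 3..7:      low four bits of the wire version
     *   remaining flag bits: CONN_FIXED_BIT, the compression choice
     *   (none/lz4/zstd), the video choice (none/H264/VP9/AV1), and
     *   CONN_NO_DMABUF_SUPPORT.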
*/ let mut header: [u8; 16] = [0_u8; 16]; let ver = wire_version_override .map(|x| x.clamp(MIN_PROTOCOL_VERSION, WAYPIPE_PROTOCOL_VERSION)) .unwrap_or(WAYPIPE_PROTOCOL_VERSION); let ver_hi = ver >> 4; let ver_lo = ver & ((1 << 4) - 1); let mut lead: u32 = (ver_hi << 16) | (ver_lo << 3) | CONN_FIXED_BIT; match opts.compression { Compression::None => lead |= CONN_NO_COMPRESSION, Compression::Lz4(_) => lead |= CONN_LZ4_COMPRESSION, Compression::Zstd(_) => lead |= CONN_ZSTD_COMPRESSION, } if opts.no_gpu { lead |= CONN_NO_DMABUF_SUPPORT; } if let Some(ref f) = opts.video.format { match f { VideoFormat::H264 => { lead |= CONN_H264_VIDEO; } VideoFormat::VP9 => { lead |= CONN_VP9_VIDEO; } VideoFormat::AV1 => { lead |= CONN_AV1_VIDEO; } } } else { lead |= CONN_NO_VIDEO; } debug!("header: {:0x}", lead); header[..4].copy_from_slice(&u32::to_le_bytes(lead)); write_exact(&link_fd, &header).map_err(|x| tag!("Failed to write connection header: {}", x))?; debug!("have written initial bytes"); set_nonblock(&link_fd)?; set_nonblock(&wayland_fd)?; let (sigmask, sigint_received) = setup_sigint_handler()?; mainloop::main_interface_loop( link_fd, wayland_fd, opts, MIN_PROTOCOL_VERSION, false, sigmask, sigint_received, ) } /** Connect to a socket (and possibly unlink its path); the socket fd returned is * cloexec. the socket file status flags may or may not include O_NONBLOCK and * should be set by the caller. */ fn socket_connect( spec: &SocketSpec, cwd: &OwnedFd, nonblocking: bool, unlink_after: bool, /* Unlink after connecting? */ ) -> Result { let socket_flags = if nonblocking { socket::SockFlag::SOCK_CLOEXEC | socket::SockFlag::SOCK_NONBLOCK } else { socket::SockFlag::SOCK_CLOEXEC }; let socket = match spec { SocketSpec::Unix(path) => { let socket = socket::socket( socket::AddressFamily::Unix, socket::SockType::Stream, socket_flags, None, ) .map_err(|x| tag!("Failed to create socket: {}", x))?; let file = path .file_name() .ok_or_else(|| tag!("Socket path {:?} missing file name", path))?; let addr = socket::UnixAddr::new(file) .map_err(|x| tag!("Failed to create Unix socket address from file name: {}", x))?; let r = if let Some(folder) = path.parent() { nix::unistd::chdir(folder).map_err(|x| tag!("Failed to visit folder: {}", x))?; // eventually: is a 'connectat' equivalent available? 
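            // Illustrative sketch (added commentary, not from the original source)
            // of the workaround used here for paths longer than sockaddr_un's
            // ~108-byte limit:
            //     chdir(parent_dir);        // enter the socket's folder
            //     connect(fd, "file_name"); // the address holds only the file name
            //     fchdir(saved_cwd);        // restore the working directory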
// can use /proc/self/fd to workaround socket path length issues
                let x = socket::connect(socket.as_raw_fd(), &addr);
                if x.is_ok() && unlink_after {
                    // race condition possible if socket is moved or replaced, but
                    // unlinking should have no ill effect in such scenarios
                    nix::unistd::unlink(file)
                        .map_err(|x| tag!("Failed to unlink socket: {}", x))?;
                }
                nix::unistd::fchdir(cwd.as_raw_fd())
                    .map_err(|x| tag!("Failed to return to original path: {}", x))?;
                x
            } else {
                let x = socket::connect(socket.as_raw_fd(), &addr);
                if x.is_ok() && unlink_after {
                    nix::unistd::unlink(file)
                        .map_err(|x| tag!("Failed to unlink socket: {}", x))?;
                }
                x
            };
            r.map_err(|x| tag!("Failed to connect to socket at {:?}: {}", path, x))?;
            socket
        }
        #[cfg(target_os = "linux")]
        SocketSpec::VSock(v) => {
            let socket = socket::socket(
                socket::AddressFamily::Vsock,
                socket::SockType::Stream,
                socket::SockFlag::SOCK_CLOEXEC,
                None,
            )
            .map_err(|x| tag!("Failed to create socket: {}", x))?;
            unsafe {
                /* nix does not yet support svm_flags, so directly use libc */
                const VMADDR_FLAG_TO_HOST: u8 = 0x1;
                let svm_flags = if v.to_host { VMADDR_FLAG_TO_HOST } else { 0 };
                let addr = libc::sockaddr_vm {
                    svm_family: libc::AF_VSOCK as u16,
                    svm_reserved1: 0,
                    svm_port: v.port,
                    svm_cid: v.cid,
                    svm_zero: [svm_flags, 0, 0, 0],
                };
                assert!(std::mem::align_of::<libc::sockaddr_vm>() == 4);
                assert!(std::mem::size_of::<libc::sockaddr_vm>() == 16);
                // SAFETY: `addr` is repr(C) and fully initialized;
                // connect() only reads within the size bounds given
                let r = libc::connect(
                    socket.as_raw_fd(),
                    &addr as *const libc::sockaddr_vm as *const libc::sockaddr,
                    std::mem::size_of::<libc::sockaddr_vm>() as _,
                );
                if r != 0 {
                    return Err(tag!(
                        "Failed to connect to socket at {}: {}",
                        v.to_string(),
                        Errno::last()
                    ));
                }
                socket
            }
        }
        #[cfg(not(target_os = "linux"))]
        SocketSpec::VSock(_) => unreachable!(),
    };
    set_nonblock(&socket)?;
    Ok(socket)
}

/** Helper structure to unlink a created file when it is dropped.
 *
 * This keeps the folder in which the file was created alive. */
struct FileCleanup {
    folder: OwnedFd,
    full_path: PathBuf,
}
impl Drop for FileCleanup {
    fn drop(&mut self) {
        let file_name = self.full_path.file_name().unwrap();
        debug!("Trying to unlink socket created at: {:?}", self.full_path);
        if let Err(x) = unistd::unlinkat(
            Some(self.folder.as_raw_fd()),
            file_name,
            unistd::UnlinkatFlags::NoRemoveDir,
        ) {
            error!(
                "Failed to unlink display socket at: {:?}: {:?}",
                self.full_path, x
            )
        }
    }
}

/** Create and bind to a Unix socket
 *
 * This returns the socket and a file cleanup structure to unlink it when
 * it is no longer needed. */
fn unix_socket_create_and_bind(
    path: &Path,
    cwd: &OwnedFd,
    flags: socket::SockFlag,
) -> Result<(OwnedFd, FileCleanup), String> {
    let socket: OwnedFd = socket::socket(
        socket::AddressFamily::Unix,
        socket::SockType::Stream,
        flags,
        None,
    )
    .map_err(|x| tag!("Failed to create socket: {}", x))?;
    let file = path
        .file_name()
        .ok_or_else(|| tag!("Socket path {:?} missing file name", path))?;
    let addr = socket::UnixAddr::new(file)
        .map_err(|x| tag!("Failed to create Unix socket address from file name: {}", x))?;
    let (f, r) = if let Some(folder) = path.parent() {
        let f = open_folder(folder)?;
        unistd::fchdir(f.as_raw_fd()).map_err(|x| tag!("Failed to visit folder: {}", x))?;
        // eventually: is a 'bindat' equivalent available?
// can use /proc/self/fd to workaround socket path length issues let x = socket::bind(socket.as_raw_fd(), &addr); unistd::fchdir(cwd.as_raw_fd()) .map_err(|x| tag!("Failed to return to original path: {}", x))?; (f, x) } else { let f: OwnedFd = OwnedFd::try_clone(cwd).map_err(|x| tag!("Failed to duplicate cwd: {}", x))?; let x = socket::bind(socket.as_raw_fd(), &addr); (f, x) }; r.map_err(|x| tag!("Failed to bind socket at {:?}: {}", path, x))?; Ok(( socket, FileCleanup { folder: f, full_path: PathBuf::from(path), }, )) } /** Create and bind to a socket */ fn socket_create_and_bind( path: &SocketSpec, cwd: &OwnedFd, flags: socket::SockFlag, ) -> Result<(OwnedFd, Option), String> { match path { #[cfg(target_os = "linux")] SocketSpec::VSock(spec) => { let socket: OwnedFd = socket::socket( socket::AddressFamily::Vsock, socket::SockType::Stream, flags, None, ) .map_err(|x| tag!("Failed to create socket: {}", x))?; let addr = socket::VsockAddr::new(libc::VMADDR_CID_ANY, spec.port); socket::bind(socket.as_raw_fd(), &addr) .map_err(|x| tag!("Failed to bind socket at {}: {}", spec.to_string(), x))?; Ok((socket, None)) } #[cfg(not(target_os = "linux"))] SocketSpec::VSock(_) => unreachable!(), SocketSpec::Unix(path) => { let (socket, cleanup) = unix_socket_create_and_bind(path, cwd, flags)?; Ok((socket, Some(cleanup))) } } } /** Connect to the Wayland display socket at the given `path`. */ fn connect_to_display_at(cwd: &OwnedFd, path: &Path) -> Result { socket_connect(&SocketSpec::Unix(path.into()), cwd, true, false) } /** Connect to the Wayland display socket indicated by `WAYLAND_DISPLAY` and * (if not using absolute path) `XDG_RUNTIME_DIR` */ fn connect_to_wayland_display(cwd: &OwnedFd) -> Result { let wayl_disp = std::env::var_os("WAYLAND_DISPLAY") .ok_or("Missing environment variable WAYLAND_DISPLAY")?; let leading_slash: &[u8] = b"/"; if wayl_disp.as_encoded_bytes().starts_with(leading_slash) { connect_to_display_at(cwd, Path::new(&wayl_disp)) } else if let Some(dir) = std::env::var_os("XDG_RUNTIME_DIR") { let mut path = PathBuf::new(); path.push(dir); path.push(wayl_disp); connect_to_display_at(cwd, &path) } else { Err(tag!("XDG_RUNTIME_DIR was not in environment")) } } /** Get the socket fd number indicated by environment variable `WAYLAND_SOCKET` */ fn get_wayland_socket_id() -> Result, String> { if let Some(x) = std::env::var_os("WAYLAND_SOCKET") { let y = x .into_string() .ok() .and_then(|x| x.parse::().ok()) .ok_or("Failed to parse connection fd")?; Ok(Some(y)) } else { Ok(None) } } /** For SIGINT handler; is set to true after SIGINT was received */ static SIGINT_RECEIVED: AtomicBool = AtomicBool::new(false); /** Handler to record whether SIGINT was received */ extern "C" fn sigint_handler(_signo: i32) { SIGINT_RECEIVED.store(true, Ordering::Release); } /** Setup a SIGINT handler, and return a modified poll mask in which SIGINT is not blocked. */ fn setup_sigint_handler() -> Result<(signal::SigSet, &'static AtomicBool), String> { /* Block SIGINT, except when polling; this prevents a race in which SIGINT is received outside the poll. 
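     *
     * (Illustrative explanation, added commentary: if SIGINT were unblocked and
     * arrived just before poll() was entered, the handler would set the flag but
     * the poll would still sleep until the next unrelated event. With the signal
     * blocked everywhere except inside ppoll(), a SIGINT sent at any other time
     * stays pending and interrupts the very next ppoll() call, so it always
     * wakes the loop.)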
*/ let mut mask = signal::SigSet::empty(); mask.add(signal::SIGINT); let mut pollmask = mask .thread_swap_mask(signal::SigmaskHow::SIG_BLOCK) .map_err(|x| tag!("Failed to set sigmask: {}", x))?; pollmask.remove(signal::SIGINT); let sigaction = signal::SigAction::new( signal::SigHandler::Handler(sigint_handler), signal::SaFlags::SA_NOCLDSTOP, signal::SigSet::empty(), ); unsafe { // SAFETY: this is only called once, so existing signal handler is being overwritten; // Also, sigint_handler is async signal safe signal::sigaction(signal::Signal::SIGINT, &sigaction) .map_err(|x| tag!("Failed to set sigaction: {}", x))?; } Ok((pollmask, &SIGINT_RECEIVED)) } /** Check connection header and run the main proxy loop for a new * `waypipe client` connection. `link_fd` is expected to be a * _blocking_ socket. */ fn handle_client_conn(link_fd: OwnedFd, wayland_fd: OwnedFd, opts: &Options) -> Result<(), String> { let mut buf = [0_u8; 16]; read_exact(&link_fd, &mut buf) .map_err(|x| tag!("Failed to read connection header: {:?}", x))?; let header = u32::from_le_bytes(buf[..4].try_into().unwrap()); debug!("Connection header: 0x{:08x}", header); if header & CONN_FIXED_BIT == 0 && header & CONN_UNSET_BIT != 0 { error!("Possible endianness mismatch"); return Err(tag!( "Header failure: possible endianness mismatch, or garbage input" )); } let version = (((header >> 16) & 0xff) << 4) | (header >> 3) & 0xf; let min_version = std::cmp::min(version, WAYPIPE_PROTOCOL_VERSION); debug!( "Connection remote version is {}, local is {}, using {}", version, WAYPIPE_PROTOCOL_VERSION, min_version ); let comp = header & CONN_COMPRESSION_MASK; /* Validate compression type */ let expected_comp = match opts.compression { Compression::None => CONN_NO_COMPRESSION, Compression::Lz4(_) => CONN_LZ4_COMPRESSION, Compression::Zstd(_) => CONN_ZSTD_COMPRESSION, }; if comp != expected_comp { error!("Rejecting connection header {:x} due to compression type mismatch: header has {:x} != own {:x}", header, comp, expected_comp); return Err(tag!("Header compression failure")); } let video = header & CONN_VIDEO_MASK; /* Ignore video details for now? OpenDMAVidSrcV2/OpenDMAVidDstV2 explicitly specify format */ match video { CONN_NO_VIDEO => { debug!("Connected waypipe-server not receiving video"); } CONN_H264_VIDEO => { debug!("Connected waypipe-server may send H264 video"); } CONN_VP9_VIDEO => { debug!("Connected waypipe-server may send VP9 video"); } CONN_AV1_VIDEO => { debug!("Connected waypipe-server may send AV1 video"); } _ => { debug!("Unknown video format specification") } } /* Nothing to do here: dmabuf creation requires letting linux-dmabuf-v1 through */ let remote_using_dmabuf = header & CONN_NO_DMABUF_SUPPORT == 0; debug!( "Connected waypipe-server may use dmabufs: {}", remote_using_dmabuf ); set_nonblock(&link_fd)?; set_nonblock(&wayland_fd)?; let (sigmask, sigint_received) = setup_sigint_handler()?; mainloop::main_interface_loop( link_fd, wayland_fd, opts, min_version, true, sigmask, sigint_received, ) } /** Start a process to handle a specific Wayland connection. * * (Use spawn instead of forking. Spawning ensures ASLR is reseeded, and * provides a nicer command line string, with the mixed-cost-benefit of * using the latest dynamic library versions, and repeating process * setup costs.). * * Note: the connection file descriptor `connection_fd` is expected to * not be cloexec, and not have the O_NONBLOCK file status flag. 
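 *
 * Sketch of the resulting handoff (illustrative, not from the original source):
 * if the accepted connection is file descriptor 7, the child is spawned roughly as
 *   WAYPIPE_CONNECTION_FD=7 waypipe [options] client-conn
 * with the `client-conn`/`server-conn` subcommand recovering the socket from
 * that environment variable.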
*/ fn spawn_connection_handler( self_path: &OsStr, conn_args: &[&OsStr], connection_fd: OwnedFd, wayland_display: Option<&OsStr>, ) -> Result { /* Launch connection handler using explicit path to self. An alternative * would be to directly use /proc/self/exe, but some tools would end up * using the file name 'exe' to label the client process, instead of the * value of argv0, which could be confusing. */ let mut process = std::process::Command::new(self_path); process .arg0(env!("CARGO_PKG_NAME")) .args(conn_args) .env( "WAYPIPE_CONNECTION_FD", format!("{}", connection_fd.as_raw_fd()), ) .env_remove("WAYLAND_SOCKET"); if let Some(disp) = wayland_display { process.env("WAYLAND_DISPLAY", disp); } let child = process.spawn().map_err(|x| { tag!( "Failed to run connection subprocess with path {:?}: {}", self_path, x ) })?; drop(connection_fd); Ok(child) } /** `waypipe-server` logic for the single connection case */ fn run_server_oneshot( command: &[&std::ffi::OsStr], argv0: &std::ffi::OsStr, options: &Options, unlink_at_end: bool, socket_path: &SocketSpec, cwd: &OwnedFd, ) -> Result<(), String> { let (sock1, sock2) = socket::socketpair( socket::AddressFamily::Unix, socket::SockType::Stream, None, socket::SockFlag::SOCK_NONBLOCK | socket::SockFlag::SOCK_CLOEXEC, ) .map_err(|x| tag!("Failed to create socketpair: {}", x))?; let sock_str = format!("{}", sock2.as_raw_fd()); set_cloexec(&sock2, false)?; let mut cmd_child: std::process::Child = std::process::Command::new(command[0]) .arg0(argv0) .args(&command[1..]) .env("WAYLAND_SOCKET", &sock_str) .env_remove("WAYLAND_DISPLAY") .spawn() .map_err(|x| tag!("Failed to run program {:?}: {}", command[0], x))?; drop(sock2); let link_fd = socket_connect(socket_path, cwd, false, unlink_at_end)?; handle_server_conn(link_fd, sock1, options, None)?; debug!("Waiting for only child {} to reveal status", cmd_child.id()); let _ = cmd_child.wait(); debug!("Status received"); Ok(()) } /** Inner function for `run_server_multi`, used to ensure its cleanup always runs */ fn run_server_inner( display_socket: &OwnedFd, argv0: &std::ffi::OsStr, command: &[&OsStr], display_short: &OsStr, conn_args: &[&OsStr], connections: &mut BTreeMap, ) -> Result { let mut cmd_child: std::process::Child = std::process::Command::new(command[0]) .arg0(argv0) .args(&command[1..]) .env("WAYLAND_DISPLAY", display_short) .env_remove("WAYLAND_SOCKET") .spawn() .map_err(|x| tag!("Failed to run program {:?}: {}", command[0], x))?; /* Block SIGCHLD _after_ spawning subprocess, to avoid having cmd_child inherit * signal disposition changes; note this process should be single threaded. */ let mut mask = signal::SigSet::empty(); mask.add(signal::SIGCHLD); let mut pollmask = mask .thread_swap_mask(signal::SigmaskHow::SIG_BLOCK) .map_err(|x| tag!("Failed to set sigmask: {}", x))?; pollmask.remove(signal::SIGCHLD); let sigaction = signal::SigAction::new( signal::SigHandler::Handler(noop_signal_handler), signal::SaFlags::SA_NOCLDSTOP, signal::SigSet::empty(), ); unsafe { // SAFETY: signal handler installed is trivial signal::sigaction(signal::Signal::SIGCHLD, &sigaction) .map_err(|x| tag!("Failed to set sigaction: {}", x))?; } let self_path = env::current_exe() .map_err(|x| tag!("Failed to lookup path to own executable: {}", x))? 
.into_os_string(); socket::listen(&display_socket, socket::Backlog::MAXCONN) .map_err(|x| tag!("Failed to listen to socket: {}", x))?; 'outer: loop { loop { /* Handle any child process exits */ let res = wait::waitid( wait::Id::All, wait::WaitPidFlag::WEXITED | wait::WaitPidFlag::WNOHANG | wait::WaitPidFlag::WNOWAIT, ); match res { Ok(status) => match status { wait::WaitStatus::Exited(pid, _code) => { if pid.as_raw() as u32 == cmd_child.id() { let _ = cmd_child.wait(); debug!("Exiting, main command has stopped"); break 'outer; } prune_connections(connections, pid); } wait::WaitStatus::Signaled(pid, _signal, _bool) => { if pid.as_raw() as u32 == cmd_child.id() { let _ = cmd_child.wait(); debug!("Exiting, main command has stopped"); break 'outer; } prune_connections(connections, pid); } wait::WaitStatus::StillAlive => { break; } _ => { panic!("Unexpected process status: {:?}", status); } }, Err(Errno::ECHILD) => { error!("Unexpected: no unwaited for children"); break 'outer; } Err(errno) => { assert!(errno == Errno::EINTR); } } } /* Wait for SIGCHLD or action */ let mut pfds = [PollFd::new(display_socket.as_fd(), PollFlags::POLLIN)]; let res = nix::poll::ppoll(&mut pfds, None, Some(pollmask)); if let Err(errno) = res { assert!(errno == Errno::EINTR || errno == Errno::EAGAIN); continue; } let rev = pfds[0].revents().unwrap(); if rev.contains(PollFlags::POLLERR) { debug!("Exiting, socket error"); break 'outer; } if !rev.contains(PollFlags::POLLIN) { continue; } /* We have a connection */ debug!("Connection received"); let res = socket::accept(display_socket.as_raw_fd()); match res { Ok(conn_fd) => { let wrapped_fd = unsafe { // SAFETY: freshly created file descriptor, exclusively captured here OwnedFd::from_raw_fd(conn_fd) }; set_blocking(&wrapped_fd)?; let child = spawn_connection_handler(&self_path, conn_args, wrapped_fd, None)?; let cid = child.id(); if connections.insert(cid, child).is_some() { return Err(tag!("Pid reuse: {}", cid)); } } Err(errno) => { assert!(errno != Errno::EBADF && errno != Errno::EINVAL); // This can fail for a variety of reasons, including OOM // and the connection being aborted debug!("Failed to receive connection"); } } } Ok(cmd_child) } /** `waypipe-server` logic for the multiple connection case */ fn run_server_multi( command: &[&std::ffi::OsStr], argv0: &std::ffi::OsStr, options: &Options, unlink_at_end: bool, socket_path: &SocketSpec, display_short: &OsStr, display: &OsStr, cwd: &OwnedFd, ) -> Result<(), String> { let mut connections = BTreeMap::new(); let mut conn_strings = Vec::new(); let conn_args = build_connection_command(&mut conn_strings, socket_path, options, false, false); let (display_socket, sock_cleanup) = unix_socket_create_and_bind( &PathBuf::from(display), cwd, socket::SockFlag::SOCK_NONBLOCK | socket::SockFlag::SOCK_CLOEXEC, )?; let res = run_server_inner( &display_socket, argv0, command, display_short, &conn_args, &mut connections, ); /* Unlink the connection socket that was used */ let sock_err = if unlink_at_end { if let SocketSpec::Unix(p) = socket_path { nix::unistd::unlink(p).map_err(|x| tag!("Failed to unlink socket: {}", x)) } else { Ok(()) } } else { Ok(()) }; /* Unlink the Wayland display socket that was created */ drop(sock_cleanup); if let Err(err) = res { if let Err(e) = sock_err { error!("While cleaning up: {}", e); } return Err(err); } sock_err?; debug!("Shutting down"); // Wait for subprocesses to wait_for_connnections(connections); debug!("Done"); Ok(()) } /** `waypipe-client` logic for the single connection case */ fn 
run_client_oneshot( command: Option<&[&std::ffi::OsStr]>, options: &Options, wayland_fd: OwnedFd, socket_path: &SocketSpec, cwd: &OwnedFd, ) -> Result<(), String> { let (channel_socket, sock_cleanup) = socket_create_and_bind(socket_path, cwd, socket::SockFlag::SOCK_CLOEXEC)?; socket::listen(&channel_socket, socket::Backlog::new(1).unwrap()) .map_err(|x| tag!("Failed to listen to socket: {}", x))?; /* After the socket has been created, start ssh if necessary */ let mut cmd_child: Option = None; if let Some(command_seq) = command { cmd_child = Some( std::process::Command::new(command_seq[0]) .args(&command_seq[1..]) .env_remove("WAYLAND_DISPLAY") .env_remove("WAYLAND_SOCKET") .spawn() .map_err(|x| tag!("Failed to run program {:?}: {}", command_seq[0], x))?, ); } let link_fd = loop { let res = socket::accept(channel_socket.as_raw_fd()); match res { Ok(conn_fd) => { break unsafe { // SAFETY: freshly created file descriptor, exclusively captured here OwnedFd::from_raw_fd(conn_fd) }; } Err(Errno::EINTR) => continue, Err(x) => { return Err(tag!("Failed to accept for socket: {}", x)); } } }; set_cloexec(&link_fd, true)?; set_blocking(&link_fd)?; /* Now that ssh has connected to the socket, it can safely be removed * from the file system */ drop(sock_cleanup); handle_client_conn(link_fd, wayland_fd, options)?; if let Some(mut c) = cmd_child { debug!("Waiting for only child {} to reveal status", c.id()); let _ = c.wait(); debug!("Status received"); } debug!("Done"); Ok(()) } /** No-op signal handler (used to ensure SIGCHLD interrupts poll) */ extern "C" fn noop_signal_handler(_: i32) {} /** Inner function for `run_client_multi`, used to ensure its cleanup always runs */ fn run_client_inner( channel_socket: &OwnedFd, command: Option<&[&OsStr]>, conn_args: &[&OsStr], wayland_display: &OsStr, connections: &mut BTreeMap, ) -> Result, String> { socket::listen(&channel_socket, socket::Backlog::MAXCONN) .map_err(|x| tag!("Failed to listen to socket: {}", x))?; /* Only run ssh once the necessary socket to forward has been set up */ let mut cmd_child: Option = None; if let Some(command_seq) = command { cmd_child = Some( std::process::Command::new(command_seq[0]) .args(&command_seq[1..]) .env_remove("WAYLAND_DISPLAY") .env_remove("WAYLAND_SOCKET") .spawn() .map_err(|x| tag!("Failed to run program {:?}: {}", command_seq[0], x))?, ); } /* Block SIGCHLD _after_ spawning subprocess, to avoid having cmd_child inherit * signal disposition changes; note this process should be single threaded. */ let mut mask = signal::SigSet::empty(); mask.add(signal::SIGCHLD); let mut pollmask = mask .thread_swap_mask(signal::SigmaskHow::SIG_BLOCK) .map_err(|x| tag!("Failed to set sigmask: {}", x))?; pollmask.remove(signal::SIGCHLD); let sigaction = signal::SigAction::new( signal::SigHandler::Handler(noop_signal_handler), signal::SaFlags::SA_NOCLDSTOP, signal::SigSet::empty(), ); unsafe { // SAFETY: signal handler installed is trivial signal::sigaction(signal::Signal::SIGCHLD, &sigaction) .map_err(|x| tag!("Failed to set sigaction: {}", x))?; } /* Collect path to self now, instead of later, for use when spawning subprocesses. * (When the executable is deleted, /proc/self/exe is modified with a " (deleted)" * suffix, so identifying the executable path could fail.) */ let self_path = env::current_exe() .map_err(|x| tag!("Failed to lookup path to own executable: {}", x))? 
.into_os_string(); 'outer: loop { /* Handle any child process exits (including the case where cmd_child exited immediately)*/ loop { let res = wait::waitid( wait::Id::All, wait::WaitPidFlag::WEXITED | wait::WaitPidFlag::WNOHANG | wait::WaitPidFlag::WNOWAIT, ); match res { Ok(status) => match status { wait::WaitStatus::Exited(pid, _code) => { if let Some(ref mut c) = cmd_child { if pid.as_raw() as u32 == c.id() { let _ = c.wait(); debug!("Exiting, main command has stopped"); break 'outer; } } prune_connections(connections, pid); } wait::WaitStatus::Signaled(pid, _signal, _bool) => { if let Some(ref mut c) = cmd_child { if pid.as_raw() as u32 == c.id() { let _ = c.wait(); debug!("Exiting, main command has stopped"); break 'outer; } } prune_connections(connections, pid); } wait::WaitStatus::StillAlive => { break; } _ => { panic!("Unexpected process status: {:?}", status); } }, Err(Errno::ECHILD) => { /* no children to wait for; can happen if no command is run */ break; } Err(errno) => { assert!(errno == Errno::EINTR); break; } } } /* Wait for SIGCHLD or socket connection */ let mut pfds = [PollFd::new(channel_socket.as_fd(), PollFlags::POLLIN)]; let res = nix::poll::ppoll(&mut pfds, None, Some(pollmask)); if let Err(errno) = res { assert!(errno == Errno::EINTR || errno == Errno::EAGAIN); continue; } let rev = pfds[0].revents().unwrap(); if rev.contains(PollFlags::POLLERR) { debug!("Exiting, socket error"); break 'outer; } if !rev.contains(PollFlags::POLLIN) { continue; } /* We have a connection */ debug!("Connection received"); let res = socket::accept(channel_socket.as_raw_fd()); match res { Ok(conn_fd) => { // note: since no reconnection is done, we do not need to // keep track of a subprocess let wrapped_fd = unsafe { // SAFETY: freshly created file descriptor, exclusively captured here OwnedFd::from_raw_fd(conn_fd) }; set_blocking(&wrapped_fd)?; let child = spawn_connection_handler( &self_path, conn_args, wrapped_fd, Some(wayland_display), )?; let cid = child.id(); if connections.insert(cid, child).is_some() { return Err(tag!("Pid reuse: {}", cid)); } } Err(errno) => { assert!(errno != Errno::EBADF && errno != Errno::EINVAL); // This can fail for a variety of reasons, including OOM // and the connection being aborted debug!("Failed to receive connection"); } } } Ok(cmd_child) } /** `waypipe-client` logic for the multiple connection case */ fn run_client_multi( command: Option<&[&std::ffi::OsStr]>, options: &Options, socket_path: &SocketSpec, wayland_display: &OsStr, anti_staircase: bool, cwd: &OwnedFd, ) -> Result<(), String> { let mut conn_strings = Vec::new(); let conn_args = build_connection_command( &mut conn_strings, socket_path, options, true, anti_staircase, ); let (channel_socket, sock_cleanup) = socket_create_and_bind( socket_path, cwd, socket::SockFlag::SOCK_NONBLOCK | socket::SockFlag::SOCK_CLOEXEC, )?; let mut connections = BTreeMap::new(); let cmd_child = run_client_inner( &channel_socket, command, &conn_args, wayland_display, &mut connections, )?; drop(sock_cleanup); debug!("Shutting down"); wait_for_connnections(connections); if let Some(mut child) = cmd_child { debug!( "Waiting for client command child {} to reveal status", child.id() ); let _ = child.wait(); debug!("Status received"); } debug!("Done"); Ok(()) } /** Main logic for `waypipe client` */ fn run_client( command: Option<&[&std::ffi::OsStr]>, opts: &Options, oneshot: bool, socket_path: &SocketSpec, anti_staircase: bool, cwd: &OwnedFd, wayland_socket: Option, secctx: Option<&str>, ) -> Result<(), String> { if let 
Some(app_id) = secctx { let (wayland_disp, sock_cleanup, close_fd) = setup_secctx(cwd, app_id, wayland_socket)?; if oneshot { let c = connect_to_display_at(cwd, Path::new(&wayland_disp))?; drop(sock_cleanup); drop(close_fd); run_client_oneshot(command, opts, c, socket_path, cwd) } else { let r = run_client_multi( command, opts, socket_path, &wayland_disp, anti_staircase, cwd, ); drop(close_fd); drop(sock_cleanup); r } } else if oneshot || wayland_socket.is_some() { let wayland_fd: OwnedFd = if let Some(s) = wayland_socket { s } else { connect_to_wayland_display(cwd)? }; run_client_oneshot(command, opts, wayland_fd, socket_path, cwd) } else { let wayland_disp = std::env::var_os("WAYLAND_DISPLAY").ok_or_else(|| tag!("The environment variable WAYLAND_DISPLAY is not set, cannot connect to Wayland server."))?; run_client_multi( command, opts, socket_path, &wayland_disp, anti_staircase, cwd, ) } } /** Connect to upstream Wayland compositor, create a security context, * and return the path to the security context's socket */ fn setup_secctx( cwd: &OwnedFd, app_id: &str, wayland_socket: Option, ) -> Result<(OsString, FileCleanup, OwnedFd), String> { let xdg_runtime = std::env::var_os("XDG_RUNTIME_DIR"); let mut secctx_sock_path = PathBuf::from(xdg_runtime.as_deref().unwrap_or(OsStr::new("/tmp/"))); secctx_sock_path.push(format!("waypipe-secctx-{}", std::process::id())); debug!( "Setting up security context socket at: {:?}", secctx_sock_path ); let (sock, sock_cleanup) = unix_socket_create_and_bind( &secctx_sock_path, cwd, socket::SockFlag::SOCK_NONBLOCK | socket::SockFlag::SOCK_CLOEXEC, )?; socket::listen(&sock, socket::Backlog::MAXCONN) .map_err(|x| tag!("Failed to listen to socket: {}", x))?; let wayland_conn = if let Some(s) = wayland_socket { s } else { connect_to_wayland_display(cwd)? 
}; let flags = fcntl::fcntl(wayland_conn.as_raw_fd(), fcntl::FcntlArg::F_GETFL) .map_err(|x| tag!("Failed to get wayland socket flags: {}", x))?; let mut flags = fcntl::OFlag::from_bits(flags).unwrap(); flags.remove(fcntl::OFlag::O_NONBLOCK); fcntl::fcntl(wayland_conn.as_raw_fd(), fcntl::FcntlArg::F_SETFL(flags)) .map_err(|x| tag!("Failed to set wayland socket flags: {}", x))?; let (close_r, close_w) = unistd::pipe2(fcntl::OFlag::O_CLOEXEC | fcntl::OFlag::O_NONBLOCK) .map_err(|x| tag!("Failed to create pipe: {:?}", x))?; secctx::provide_secctx(wayland_conn, app_id, sock, close_r)?; debug!("Security context is ready"); Ok((secctx_sock_path.into_os_string(), sock_cleanup, close_w)) } /** Identify the index of the hostname in ssh_args, and detect * whether ssh will force pseudo-terminal allocation */ fn locate_openssh_cmd_hostname(ssh_args: &[&OsStr]) -> Result<(usize, bool), String> { /* Based on command line help for openssh 8.0 and 9.7 */ // let fix_letters = b"46AaCfGgKkMNnqsTtVvXxYy"; let arg_letters = b"BbcDEeFIiJLlmOopQRSWw"; let mut dst_idx = 0; let mut allocates_pty = false; /* Note: a valid hostname never has a - prefix */ while dst_idx < ssh_args.len() { let base_arg: &[u8] = ssh_args[dst_idx].as_encoded_bytes(); if !base_arg.starts_with(b"-") { /* Not an argument, must be hostname */ break; } if base_arg.len() == 1 { return Err(tag!("Failed to parse arguments after ssh: single '-'?")); } if base_arg == [b'-', b'-'] { /* No arguments after -- */ dst_idx += 1; break; } // loop over letters; fix* arg ( value ) for i in 1..base_arg.len() { if arg_letters.contains(&base_arg[i]) { if i == base_arg.len() - 1 { // Value is next argument dst_idx += 1; } else { // Value is tail of this arguent } } else if base_arg[i] == b't' { allocates_pty = true; } else if base_arg[i] == b'T' { allocates_pty = false; } else { /* All other letters interpreted as fixed arg letters */ } } // Eat this argument dst_idx += 1; } if dst_idx >= ssh_args.len() || ssh_args[dst_idx].as_encoded_bytes().starts_with(b"-") { Err(tag!("Failed to locate ssh hostname in {:?}", ssh_args)) } else { Ok((dst_idx, allocates_pty)) } } #[test] fn test_ssh_parsing() { let x: &[(&[&str], usize, bool)] = &[ (&["-tlfoo", "host", "command"], 2, true), (&["-t", "-l", "foo", "host", "command"], 3, true), (&["host"], 0, false), (&["host", "-t"], 0, false), (&["-T", "--", "host"], 2, false), (&["-T", "-t", "--", "host"], 3, true), ]; for entry in x { let y: Vec<&std::ffi::OsStr> = entry.0.iter().map(|s| OsStr::new(*s)).collect(); let r = locate_openssh_cmd_hostname(&y); println!("{:?} > {:?}", entry.0, r); assert!(r == Ok((entry.1, entry.2))); } } // an available/unavailable list could be constructed if #[cfg] where to apply to expressions const VERSION_STRING_CARGO: &str = concat!( env!("CARGO_PKG_VERSION"), "\nfeatures:", "\n lz4: ", cfg!(feature = "lz4"), "\n zstd: ", cfg!(feature = "zstd"), "\n dmabuf: ", cfg!(feature = "dmabuf"), "\n video: ", cfg!(feature = "video"), ); /** String providing Waypipe's version and which features it has */ pub const VERSION_STRING: &str = match option_env!("WAYPIPE_VERSION") { Some(x) => x, None => VERSION_STRING_CARGO, }; /** Main entrypoint */ fn main() -> Result<(), String> { let command = Command::new(env!("CARGO_PKG_NAME")) .disable_help_subcommand(true) .subcommand_required(true) .help_expected(true) .flatten_help(false) .subcommand_help_heading("Modes") .subcommand_value_name("MODE") .about( "A proxy to remotely use Wayland protocol applications\n\ Example: waypipe ssh user@server 
weston-terminal\n\ See `man 1 waypipe` for detailed help.", ) .next_line_help(false) .version(option_env!("WAYPIPE_VERSION").unwrap_or(VERSION_STRING)); let command = command .subcommand( Command::new("ssh") .about("Wrap an ssh invocation to run Waypipe on both ends of the connection, and\nautomatically forward Wayland applications") .disable_help_flag(true) // collect all following arguments .arg(Arg::new("ssh_args").num_args(0..).trailing_var_arg(true).allow_hyphen_values(true).help("Arguments for ssh")) ).subcommand( Command::new("server") .about("Run remotely to run a process and forward application data through a socket\nto a matching `waypipe client` instance") .disable_help_flag(true) // collect all following arguments as the command .arg(Arg::new("command").num_args(0..).trailing_var_arg(true).help("Command to execute") .allow_hyphen_values(true) ) ).subcommand( Command::new("client") .disable_help_flag(true) .about("Run locally to set up a Unix socket that `waypipe server` can connect to") // forbid all following arguments ).subcommand( Command::new("bench") .about("Estimate the best compression level used to send data, for each bandwidth") .disable_help_flag(true) ).subcommand( Command::new("server-conn") .disable_help_flag(true).hide(true) ).subcommand( Command::new("client-conn") .disable_help_flag(true).hide(true) ); let command = command .arg( Arg::new("compress") .short('c') .long("compress") .value_name("comp") .help("Choose compression method: lz4[=#], zstd[=#], none") .value_parser(value_parser!(Compression)) .default_value("lz4"), ) .arg( Arg::new("debug") .short('d') .long("debug") .help("Print debug messages") .action(ArgAction::SetTrue), ) .arg( Arg::new("no-gpu") .short('n') .long("no-gpu") .help("Block protocols using GPU memory transfers (via DMABUFs)") .action(ArgAction::SetTrue), ) .arg( Arg::new("oneshot") .short('o') .long("oneshot") .help("Only permit one connected application") .action(ArgAction::SetTrue), ) .arg( Arg::new("socket") .short('s') .long("socket") .value_name("path") .help( "Set the socket path to either create or connect to.\n\ - server default: /tmp/waypipe-server.sock\n\ - client default: /tmp/waypipe-client.sock\n\ - ssh: sets the prefix for client and server socket paths\n\ - vsock: [[s]CID:]port", ) // todo: decide value parser based on --vsock flag? 
.value_parser(value_parser!(OsString)), ) .arg( Arg::new("display") .long("display") .value_name("display") .help("server,ssh: Set the Wayland display name or path") .value_parser(value_parser!(PathBuf)), ) .arg( Arg::new("drm-node") .long("drm-node") .value_name("path") .help("Set preferred DRM node (may be ignored in ssh/client modes)") .value_parser(value_parser!(PathBuf)), ) .arg( Arg::new("remote-node") .long("remote-node") .value_name("path") .help("ssh: Set the preferred remote DRM node") .value_parser(value_parser!(PathBuf)), ) .arg( Arg::new("remote-bin") .long("remote-bin") .value_name("path") .help("ssh: Set the remote Waypipe binary to use") .value_parser(value_parser!(PathBuf)) .default_value(env!("CARGO_PKG_NAME")), ) .arg( Arg::new("remote-socket") .long("remote-socket") .value_name("path") .help("ssh: sets prefix of the remote server socket path") .value_parser(value_parser!(OsString)), ) .arg( Arg::new("login-shell") .long("login-shell") .help("server: If server command is empty, run a login shell") .action(ArgAction::SetTrue), ) .arg( Arg::new("threads") .long("threads") .help("Number of worker threads to use: 0 ⇒ hardware threads/2") .value_parser(value_parser!(u32)) .default_value("0"), ) .arg( Arg::new("title-prefix") .long("title-prefix") .value_name("str") .help("Prepend string to all window titles") .default_value(""), ) .arg( Arg::new("unlink-socket") .long("unlink-socket") .help("server: Unlink the socket that Waypipe connects to") .action(ArgAction::SetTrue), ) .arg( Arg::new("video") .long("video") .value_name("options") .help( "Video-encode DMABUFs when possible\n\ option format: (none|h264|vp9|av1)[,bpf=]", ) .default_value("none") .value_parser(value_parser!(VideoSetting)), ) .arg( Arg::new("vsock") .long("vsock") .help("Connect over vsock-type socket instead of unix socket") .action(ArgAction::SetTrue), ) .arg( Arg::new("secctx") .long("secctx") .value_name("str") .help("client,ssh: Use security-context protocol with application ID") .value_parser(value_parser!(String)), ) .arg( Arg::new("anti-staircase") .long("anti-staircase") .hide(true) .help("Prevent staircasing effect in terminal logs") .action(ArgAction::SetTrue), ) .arg( Arg::new("test-loop") .long("test-loop") .hide(true) .help("Test option: act like `ssh localhost` without ssh") .action(ArgAction::SetTrue), ) .arg( Arg::new("test-wire-version") .long("test-wire-version") .hide(true) .help("Test option: set the wire protocol version tried for `waypipe server`; must be >= 16") .value_parser(value_parser!(u32)), ) .arg( Arg::new("test-store-video") .long("test-store-video") .hide(true) .help("Test option: client,server: save received video packets to folder") .value_parser(value_parser!(PathBuf)), ) .arg( Arg::new("test-skip-vulkan") .long("test-skip-vulkan") .hide(true) .help("Test option: make Vulkan initialization fail and fall back to gbm backend if available") .action(ArgAction::SetTrue), ) .arg( Arg::new("test-no-timeline-export") .long("test-no-timeline-export") .hide(true) .help("Test option: assume Vulkan timeline semaphore import/export is not available") .action(ArgAction::SetTrue), ) .arg( Arg::new("test-no-binary-semaphore-import") .long("test-no-binary-semaphore-import") .hide(true) .help("Test option: assume Vulkan binary semaphore import is not available") .action(ArgAction::SetTrue), ) .arg( Arg::new("test-fast-bench") .long("test-fast-bench") .hide(true) .help("Test option: run 'bench' mode on a tiny input") .action(ArgAction::SetTrue), ) .arg( Arg::new("trace") .long("trace") 
.action(ArgAction::SetTrue)
                .help("Test option: log all Wayland messages received and sent")
                .hide(true),
        );

    let matches = command.get_matches();

    let debug = matches.get_one::<bool>("debug").unwrap();
    let trace = matches.get_one::<bool>("trace").unwrap();
    let max_level = if *trace {
        log::LevelFilter::Trace
    } else if *debug {
        log::LevelFilter::Debug
    } else {
        log::LevelFilter::Error
    };

    let (log_color, log_label) = match matches.subcommand() {
        Some(("ssh", _)) => (1, "waypipe-client"),
        Some(("client", _)) => (1, "waypipe-client"),
        Some(("server", _)) => (2, "waypipe-server"),
        Some(("client-conn", _)) => (1, "waypipe-client"),
        Some(("server-conn", _)) => (2, "waypipe-server"),
        _ => (0, "waypipe"),
    };

    let mut anti_staircase: bool = *matches.get_one::<bool>("anti-staircase").unwrap();
    if let Some(("ssh", submatch)) = matches.subcommand() {
        let subargs = submatch.get_raw("ssh_args");
        let ssh_args: Vec<&std::ffi::OsStr> = subargs.unwrap_or_default().collect();
        let (destination_idx, allocates_pty) = locate_openssh_cmd_hostname(&ssh_args)?;
        let needs_login_shell = destination_idx == ssh_args.len() - 1;
        anti_staircase = needs_login_shell || allocates_pty;
    }

    let logger = Logger {
        max_level,
        pid: std::process::id(),
        color_output: nix::unistd::isatty(2).unwrap(),
        anti_staircase,
        color: log_color,
        label: log_label,
    };
    log::set_max_level(max_level);
    log::set_boxed_logger(Box::new(logger)).unwrap();

    let mut oneshot = *matches.get_one::<bool>("oneshot").unwrap();
    let no_gpu = matches.get_one::<bool>("no-gpu").unwrap();
    let remotebin = matches.get_one::<PathBuf>("remote-bin").unwrap();
    let socket_arg = matches.get_one::<OsString>("socket");
    let remote_socket_arg = matches.get_one::<OsString>("remote-socket");
    let title_prefix = matches.get_one::<String>("title-prefix").unwrap();
    let display = matches.get_one::<PathBuf>("display");
    let threads = matches.get_one::<u32>("threads").unwrap();
    let unlink = *matches.get_one::<bool>("unlink-socket").unwrap();
    let mut compression: Compression = *matches.get_one::<Compression>("compress").unwrap();
    let mut video: VideoSetting = *matches.get_one::<VideoSetting>("video").unwrap();
    let login_shell = matches.get_one::<bool>("login-shell").unwrap();
    let remote_node = matches.get_one::<PathBuf>("remote-node");
    let drm_node = matches.get_one::<PathBuf>("drm-node");
    let loop_test = matches.get_one::<bool>("test-loop").unwrap();
    let fast_bench = *matches.get_one::<bool>("test-fast-bench").unwrap();
    let test_wire_version: Option<u32> = matches.get_one::<u32>("test-wire-version").copied();
    let test_store_video: Option<PathBuf> = matches.get_one::<PathBuf>("test-store-video").cloned();
    let test_skip_vulkan: bool = *matches.get_one::<bool>("test-skip-vulkan").unwrap();
    let test_no_timeline_export: bool = *matches.get_one::<bool>("test-no-timeline-export").unwrap();
    let test_no_binary_semaphore_import: bool = *matches
        .get_one::<bool>("test-no-binary-semaphore-import")
        .unwrap();
    let secctx = matches.get_one::<String>("secctx");
    let vsock = *matches.get_one::<bool>("vsock").unwrap();

    if !oneshot && std::env::var_os("WAYLAND_SOCKET").is_some() {
        debug!("Automatically enabling oneshot mode because WAYLAND_SOCKET is present");
        oneshot = true;
    }
    if cfg!(not(feature = "video")) && video.format.is_some() {
        error!("Waypipe was not built with video encoding support, ignoring --video command line option.");
        video.format = None;
    }
    if cfg!(not(target_os = "linux")) && vsock {
        return Err(
            "Waypipe was not built with support for VSOCK-type sockets on this platform.".into(),
        );
    }
    if vsock && socket_arg.is_none() {
        return Err("Socket must be specified with --socket when --vsock option used".into());
    }

    let (client_sock_arg, server_sock_arg) = match matches.subcommand() {
        Some(("ssh", _)) => (socket_arg,
            remote_socket_arg.or(socket_arg)),
        Some(("client", _)) | Some(("client-conn", _)) => (socket_arg, None),
        Some(("server", _)) | Some(("server-conn", _)) => (None, socket_arg),
        _ => (None, None),
    };

    let to_socket_spec = |s: &OsStr| -> Result<SocketSpec, String> {
        if vsock {
            Ok(SocketSpec::VSock(VSockConfig::from_str(
                s.to_str().unwrap(),
            )?))
        } else {
            Ok(SocketSpec::Unix(PathBuf::from(s)))
        }
    };
    let client_socket = if let Some(s) = client_sock_arg {
        Some(to_socket_spec(s)?)
    } else {
        None
    };
    let server_socket = if let Some(s) = server_sock_arg {
        Some(to_socket_spec(s)?)
    } else {
        None
    };

    if let Compression::Lz4(_) = compression {
        if cfg!(not(feature = "lz4")) {
            error!("Waypipe was not built with lz4 compression/decompression support, downgrading compression mode to 'none'");
            compression = Compression::None;
        }
    }
    if let Compression::Zstd(_) = compression {
        if cfg!(not(feature = "zstd")) {
            error!("Waypipe was not built with zstd compression/decompression support, downgrading compression mode to 'none'");
            compression = Compression::None;
        }
    }
    let opts = Options {
        debug: *debug,
        no_gpu: *no_gpu || cfg!(not(feature = "dmabuf")),
        compression,
        video,
        threads: *threads,
        title_prefix: (*title_prefix).clone(),
        drm_node: drm_node.cloned(),
        debug_store_video: test_store_video,
        test_skip_vulkan,
        test_no_timeline_export,
        test_no_binary_semaphore_import,
    };

    /* Needed to revert to the original working directory afterwards, since
     * opening sockets whose paths exceed the 108-byte sun_path limit may
     * require temporarily changing directory. */
    let cwd: OwnedFd = open_folder(&PathBuf::from("."))?;

    /* If WAYLAND_SOCKET was set, extract the socket now, ensuring it is only done once */
    let wayland_socket = if let Some(wayl_sock) = get_wayland_socket_id()? {
        let fd = unsafe {
            // SAFETY: relies on external promise that value is a Wayland connection fd,
            // and not something else (like STDIN/STDOUT/STDERR). Exclusive, because
            // this is the only place WAYLAND_SOCKET is extracted
            OwnedFd::from_raw_fd(RawFd::from(wayl_sock))
        };
        /* Ensure spawned processes cannot inherit the socket. (All spawned processes have
         * the environment variable WAYLAND_SOCKET cleared, so cannot detect it.)
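         * Clearing the environment variable alone would not stop raw-fd
         * inheritance across exec(); the set_cloexec() call just below is
         * what actually prevents that.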
*/ set_cloexec(&fd, true)?; Some(fd) } else { None }; debug!( "waypipe version: {}", VERSION_STRING.split_once('\n').unwrap().0 ); match matches.subcommand() { Some(("ssh", submatch)) => { debug!("Starting client+ssh main process"); let subargs = submatch.get_raw("ssh_args"); let ssh_args: Vec<&std::ffi::OsStr> = subargs.unwrap_or_default().collect(); let (destination_idx, _) = locate_openssh_cmd_hostname(&ssh_args)?; /* Login shell required if there are no arguments following the ssh destination */ let needs_login_shell = destination_idx == ssh_args.len() - 1; let rand_tag = get_rand_tag()?; let mut client_sock_path = OsString::new(); let client_sock = match client_socket.unwrap_or(SocketSpec::Unix(PathBuf::from("/tmp/waypipe"))) { SocketSpec::Unix(path) => { client_sock_path.push(&path); client_sock_path.push(OsStr::new("-client-")); client_sock_path.push(OsStr::from_bytes(&rand_tag)); client_sock_path.push(OsStr::new(".sock")); SocketSpec::Unix(PathBuf::from(client_sock_path.clone())) } SocketSpec::VSock(v) => SocketSpec::VSock(v), }; let mut server_sock_path = OsString::new(); match server_socket.unwrap_or(SocketSpec::Unix(PathBuf::from("/tmp/waypipe"))) { SocketSpec::Unix(path) => { server_sock_path.push(&path); server_sock_path.push(OsStr::new("-server-")); server_sock_path.push(OsStr::from_bytes(&rand_tag)); server_sock_path.push(OsStr::new(".sock")); } SocketSpec::VSock(v) => { server_sock_path = OsString::from(v.port.to_string()); } }; if *loop_test && !vsock { let client_path = PathBuf::from(&client_sock_path); let server_path = PathBuf::from(&server_sock_path); unistd::symlinkat(&client_path, None, &server_path).map_err(|x| { tag!( "Failed to create symlink from {:?} to {:?}: {}", client_path, server_path, x ) })?; } let mut linkage = OsString::new(); linkage.push(server_sock_path.clone()); linkage.push(OsStr::new(":")); linkage.push(client_sock_path.clone()); let mut wayland_display = OsString::new(); if let Some(p) = display { wayland_display.push(p); } else { wayland_display.push(OsStr::new("wayland-")); wayland_display.push(OsStr::from_bytes(&rand_tag)); } let mut ssh_cmd: Vec<&std::ffi::OsStr> = Vec::new(); if !loop_test { ssh_cmd.push(OsStr::new("ssh")); if needs_login_shell { ssh_cmd.push(OsStr::new("-t")); } if matches!(client_sock, SocketSpec::Unix(_)) { ssh_cmd.push(OsStr::new("-R")); ssh_cmd.push(&linkage); } ssh_cmd.extend_from_slice(&ssh_args[..=destination_idx]); } ssh_cmd.push(OsStr::new(remotebin)); if opts.debug { ssh_cmd.push(OsStr::new("--debug")); } if oneshot { ssh_cmd.push(OsStr::new("--oneshot")); } if needs_login_shell { ssh_cmd.push(OsStr::new("--login-shell")); } if opts.no_gpu { ssh_cmd.push(OsStr::new("--no-gpu")); } ssh_cmd.push(OsStr::new("--unlink-socket")); ssh_cmd.push(OsStr::new("--threads")); let arg_nthreads = OsString::from(opts.threads.to_string()); ssh_cmd.push(&arg_nthreads); let arg_drm_node_val; if let Some(r) = remote_node { arg_drm_node_val = r.clone().into_os_string(); ssh_cmd.push(OsStr::new("--drm-node")); ssh_cmd.push(&arg_drm_node_val); } ssh_cmd.push(OsStr::new("--compress")); let arg_compress_val = OsString::from(compression.to_string()); ssh_cmd.push(&arg_compress_val); let arg_video = OsString::from(format!("--video={}", video)); if video.format.is_some() { ssh_cmd.push(&arg_video); } if matches!(client_sock, SocketSpec::VSock(_)) { ssh_cmd.push(OsStr::new("--vsock")); } ssh_cmd.push(OsStr::new("--socket")); ssh_cmd.push(&server_sock_path); if !oneshot { ssh_cmd.push(OsStr::new("--display")); ssh_cmd.push(&wayland_display); } 
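            /* Illustrative shape of the assembled command at this point
             * (hypothetical values; oneshot mode off):
             *   ssh -R /tmp/waypipe-server-XXAB.sock:/tmp/waypipe-client-XXAB.sock \
             *       host waypipe --unlink-socket --threads 0 --compress none \
             *       --socket /tmp/waypipe-server-XXAB.sock --display wayland-XXAB
             * The test flags and the trailing "server" subcommand plus the user
             * command are appended just below. */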
if opts.test_skip_vulkan { ssh_cmd.push(OsStr::new("--test-skip-vulkan")); } if opts.test_no_timeline_export { ssh_cmd.push(OsStr::new("--test-no-timeline-export")); } if opts.test_no_binary_semaphore_import { ssh_cmd.push(OsStr::new("--test-no-binary-semaphore-import")); } ssh_cmd.push(OsStr::new("server")); ssh_cmd.extend_from_slice(&ssh_args[destination_idx + 1..]); run_client( Some(&ssh_cmd), &opts, oneshot, &client_sock, anti_staircase, &cwd, wayland_socket, secctx.map(|x| x.as_str()), ) } Some(("client", _submatch)) => { debug!("Starting client main process"); let socket_path: SocketSpec = client_socket .unwrap_or(SocketSpec::Unix(PathBuf::from("/tmp/waypipe-client.sock"))); run_client( None, &opts, oneshot, &socket_path, false, &cwd, wayland_socket, secctx.map(|x| x.as_str()), ) } Some(("server", submatch)) => { debug!("Starting server main process"); let subargs = submatch.get_raw("command"); let (shell, shell_argv0) = if let Some(shell) = std::env::var_os("SHELL") { let bt = shell.as_bytes(); let mut a = OsString::new(); a.push("-"); if let Some(idx) = bt.iter().rposition(|x| *x == b'/') { let sl: &[u8] = &bt[idx + 1..]; a.push(OsStr::from_bytes(sl)); } else { a.push(shell.clone()); }; (shell.clone(), a) } else { (OsString::from("/bin/sh"), OsString::from("-sh")) }; let (command, argv0): (Vec<&std::ffi::OsStr>, &std::ffi::OsStr) = if let Some(s) = subargs { let x: Vec<_> = s.collect(); let y: &std::ffi::OsStr = x[0]; (x, y) } else { let sv: Vec<&std::ffi::OsStr> = vec![&shell]; (sv, &shell_argv0) }; let argv0 = if *login_shell { argv0 } else { command[0] }; let socket_path: SocketSpec = server_socket .unwrap_or(SocketSpec::Unix(PathBuf::from("/tmp/waypipe-server.sock"))); if oneshot { run_server_oneshot(&command, argv0, &opts, unlink, &socket_path, &cwd) } else { let display_val: PathBuf = if let Some(s) = display { s.clone() } else { let rand_tag = get_rand_tag()?; let mut w = OsString::from("wayland-"); w.push(OsStr::from_bytes(&rand_tag)); PathBuf::from(w) }; let display_path: PathBuf = if display_val.is_absolute() { display_val.clone() } else { let runtime_dir = std::env::var_os("XDG_RUNTIME_DIR") .ok_or_else(|| tag!("Environment variable XDG_RUNTIME_DIR not present"))?; PathBuf::from(runtime_dir).join(&display_val) }; run_server_multi( &command, argv0, &opts, unlink, &socket_path, display_val.as_ref(), display_path.as_ref(), &cwd, ) } } Some(("bench", _)) => bench::run_benchmark(&opts, fast_bench), Some(("server-conn", _)) => { debug!("Starting server connection process"); let env_sock = std::env::var_os("WAYPIPE_CONNECTION_FD") .ok_or_else(|| tag!("Connection fd not provided for server-conn mode"))?; let opt_fd = env_sock .into_string() .ok() .and_then(|x| x.parse::().ok()) .ok_or_else(|| tag!("Failed to parse connection fd"))?; // Capture inherited fd from environment variable // Must only call connect_to_upstream() once, at risk of closing random fd let wayland_fd = unsafe { // SAFETY: relies on internal promise that value is a Wayland connection fd, // and not a random integer. Exclusive, because this is the only time // WAYPIPE_CONNECTION_FD is read from in this branch OwnedFd::from_raw_fd(RawFd::from(opt_fd)) }; set_cloexec(&wayland_fd, true)?; let link_fd = if let Some(s) = wayland_socket { /* For test use: provide the upstream Waypipe connection through WAYLAND_SOCKET. * (To reduce the risk of unexpected connections, the downstream Wayland connection * used WAYPIPE_CONNECTION_FD.) 
*/ s } else { socket_connect( &server_socket.ok_or_else(|| tag!("Socket path not provided"))?, &cwd, false, false, )? }; handle_server_conn(link_fd, wayland_fd, &opts, test_wire_version) } Some(("client-conn", _)) => { debug!("Starting client connection process"); let env_sock = std::env::var_os("WAYPIPE_CONNECTION_FD") .ok_or_else(|| tag!("Connection fd not provided for client-conn mode"))?; let opt_fd = env_sock .into_string() .ok() .and_then(|x| x.parse::().ok()) .ok_or("Failed to parse connection fd")?; let link_fd = unsafe { // SAFETY: relies on internal promise that value is a Wayland connection fd, // and not a random integer. Exclusive, because this is the only time // WAYPIPE_CONNECTION_FD is read from in this branch OwnedFd::from_raw_fd(RawFd::from(opt_fd)) }; let wayland_fd = if let Some(s) = wayland_socket { /* For test use: provide the Wayland connection through WAYLAND_SOCKET */ s } else { connect_to_wayland_display(&cwd)? }; debug!("have read initial bytes"); handle_client_conn(link_fd, wayland_fd, &opts) } _ => unreachable!(), } } waypipe-v0.10.5/src/mainloop.rs000066400000000000000000006720251507071711100164300ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-3.0-or-later */ /*! Logic and organization to proxy a single Wayland connection */ use crate::compress::*; #[cfg(feature = "dmabuf")] use crate::dmabuf::*; #[cfg(feature = "gbmfallback")] use crate::gbm::*; use crate::kernel::*; use crate::mirror::*; use crate::read::*; #[cfg(any(not(feature = "video"), not(feature = "gbmfallback")))] use crate::stub::*; use crate::tracking::*; use crate::util::*; use crate::wayland::*; use crate::wayland_gen::*; use crate::WAYPIPE_PROTOCOL_VERSION; use crate::{tag, Compression, VideoFormat, VideoSetting, MIN_PROTOCOL_VERSION}; use log::{debug, error}; use nix::errno::Errno; use nix::fcntl; use nix::libc; use nix::poll::{PollFd, PollFlags}; use nix::sys::{memfd, signal, socket, time, uio}; use nix::unistd; use std::cell::RefCell; use std::collections::{BTreeMap, BTreeSet, VecDeque}; use std::io::{IoSlice, IoSliceMut}; use std::os::fd::{AsFd, AsRawFd, BorrowedFd, FromRawFd, OwnedFd, RawFd}; use std::path::PathBuf; use std::rc::{Rc, Weak}; use std::sync::atomic::{AtomicBool, Ordering}; use std::sync::mpsc::{Receiver, Sender}; use std::sync::{Arc, Condvar, Mutex}; /** Whether a message transfer direction is active, shutting down, or off. * * The shutdown process for each direction is tracked using the DirectionState enum. * * We assume that the connection ends shutdown entirely; a half-shutdown * (read only, or write only) is misbehavior and is interpreted as a full shutdown. * * The initial state is On. When the source side of a transfer direction shuts down, * the state changes to Drain, and any remaining input will be read, processed, * and written. When there is no more output to write, the state transitions * to Off, and no more processing is done. On the other hand, when the target * side of a transfer direction shuts down, the state immediately changes to Off * (since this implies the target side will not respond anyway, to anything * that is done to it.) * * If a protocol error is detected (or memory or some other resource is low), * data will be flushed and the error message sent with some other event loop. */ #[derive(PartialEq, Eq, Debug)] enum DirectionState { On, Drain, Off, } /** A unique number identifying a ShadowFd. * * `waypipe client` allocates negative RIDs; `waypipe server` allocates positive RIDs. 
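 *
 * A sketch of the allocation direction (illustrative only; the starting
 * values below are assumptions, not taken from this file):
 * ```ignore
 * let mut client_next: i32 = -1;
 * assert_eq!(allocate_rid(&mut client_next), Rid(-1)); // then -2, -3, ...
 * let mut server_next: i32 = 1;
 * assert_eq!(allocate_rid(&mut server_next), Rid(1)); // then 2, 3, ...
 * ```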
 */
#[derive(PartialEq, Eq, PartialOrd, Ord, Debug, Clone, Copy)]
#[repr(transparent)]
pub struct Rid(i32);
impl std::fmt::Display for Rid {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        self.0.fmt(f)
    }
}

/** Allocate a new RID */
fn allocate_rid(max_local_id: &mut i32) -> Rid {
    let v: i32 = *max_local_id;
    *max_local_id = max_local_id.checked_add(max_local_id.signum()).unwrap();
    Rid(v)
}

/** State of DMABUF handling instances and devices. */
pub enum DmabufDevice {
    /** Initial state, not yet known if a device should be checked for */
    Unknown,
    /** Tried to create a device but failed / not available; will not try again */
    Unavailable,
    /** Partially set up Vulkan instance, device not yet chosen. This delayed setup
     * is only done on the display side, to avoid setting up a device before a client
     * would use it. */
    VulkanSetup(Arc<VulkanInstance>),
    // TODO: support multiple devices properly
    /** Vulkan instance and device set up */
    Vulkan((Arc<VulkanInstance>, Arc<VulkanDevice>)),
    /** libgbm device */
    Gbm(Rc<GBMDevice>),
}

/** Most of the state used by the main loop (excluding in-progress tasks and read/write buffers) */
pub struct Globals {
    /* Index of ShadowFd by ID; ShadowFds stay alive as long as they have
     * some type of reference by the protocol objects or main code */
    pub map: BTreeMap<Rid, Weak<RefCell<ShadowFd>>>,
    /* Newly created objects which are waiting for RID->SFD translation */
    fresh: BTreeMap<Rid, Rc<RefCell<ShadowFd>>>,
    /* Keep pipes alive until no more progress is feasible */
    pipes: Vec<Rc<RefCell<ShadowFd>>>,
    pub on_display_side: bool,
    pub max_local_id: i32,
    /* Vulkan instance and other data; lazily loaded after client binds linux-dmabuf-v1 */
    pub dmabuf_device: DmabufDevice,
    // note: slight space/time perf improvement may be possible by using different
    // maps for regular objects (which need only store 1 or 2 bytes/object)
    // and extended objects (which include a Box)
    pub objects: BTreeMap<ObjId, WpObject>,
    /** Counter to distinguish buffer objects, because ObjIds can be recycled */
    pub max_buffer_uid: u64,
    /** The clock id given by wp_presentation. This is guaranteed to never change for a
     * given client connection, and therefore should be consistent over different
     * instances of the wp_presentation global (or of different globals, if a weird
     * compositor makes multiple of them). wp_commit_timer_v1 uses timestamps
     * relative to this clock. */
    pub presentation_clock: Option<u32>,
    /** Table of DRM format modifiers that both a) have been provided by linux-dmabuf-feedback-v1
     * and b) are supported by Waypipe. Use this to restrict which modifiers to try when
     * replicating DMABUFs. */
    pub advertised_modifiers: BTreeMap<u32, Vec<u64>>,
    /** If on the display side, this stores the _most_ recent restrictions on dmabuf modifiers
     * from ext_image_copy_capture_session_v1. Because any created wl_buffer could
     * hypothetically be passed to the protocol, we must restrict all dmabufs that Waypipe
     * creates, to be safe. */
    pub screencopy_restrictions: BTreeMap<u32, Vec<u64>>,
    pub opts: Options,
    /* Waypipe communication protocol version. For waypipe-client, this is fixed;
     * for waypipe-server, this may increase from the baseline 16 to the actual
     * version on receipt of the first message */
    wire_version: u32,
    /** Has the first message arrived and had some processing done on it? */
    has_first_message: bool,
    /** Has a Close message arrived (implying a clean and deliberate connection shutdown.
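     * When this flag is set, a subsequent EOF on the channel is treated as
     * an orderly shutdown rather than reported as an error (assumed from the
     * flag's "clean and deliberate" wording; the exact handling lives
     * elsewhere in this file).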
*/ has_received_close: bool, } /** Data received from a Wayland connection */ struct WaylandInputRing<'a> { // todo: eventually, convert to be a ring buffer (with 2x overflow space // to ensure individual messages are contiguous) data: &'a mut [u8], len: usize, fds: VecDeque, } /** Limit on the size of TransferWayland.fds, and the FromChannel.rid_queue */ pub const MAX_OUTGOING_FDS: usize = 8; /** Data queued for transfer to the Wayland connection */ struct TransferWayland<'a> { data: &'a mut [u8], start: usize, len: usize, // Will be translated at runtime to the correct format // Volume is limited, so this is very much not a perf bottleneck fds: VecDeque>>, } // TODO: consistent naming. // 'channel'/'wayland' /** State corresponding to the transfers from the channel to the Wayland connection */ struct FromChannel<'a> { state: DirectionState, /* (typically) zero-copy buffer tracker */ input: ReadBuffer, /* If there is a next complete message to read, it will be stored here */ next_msg: Option, rid_queue: VecDeque>>, output: TransferWayland<'a>, // of Wayland messages /* If set, waiting for all apply operations on the given RID to complete */ waiting_for: Option, // number of messages read and processed. Note: lookahead into the buffer is // fine, but ultimately not necessary message_counter: usize, } /** Reference to a DMABUF and associated metadata needed to apply diffs/fills to it */ #[derive(Clone)] struct DecompTaskDmabuf { mirror: Option>, dst: Arc, view_row_stride: Option, } /** Reference to a shm file and its associated metadata */ struct DecompTaskFile { skip_mirror: bool, /* Whether to write onto the mirror or not */ target: Arc, } /** Reference to a mirror object and associated metadata */ #[derive(Clone)] struct DecompTaskMirror { mirror: Arc, /** If true, update RID apply task counter when the operation has been fully * applied; if false, silently apply with no notification. */ notify_on_completion: bool, } /** Destination information when applying a diff or fill to a DMABUF */ struct ApplyTaskDmabuf { target: DecompTaskDmabuf, /* ApplyTask.region_start/end can be rounded up/down to the nearest texel */ orig_start: usize, orig_end: usize, } /** Possible destinations for an ApplyTask */ enum ApplyTaskTarget { Dmabuf(ApplyTaskDmabuf), Shm(DecompTaskFile), MirrorOnly(DecompTaskMirror), } /** A task to apply a diff or fill operation to some object */ struct ApplyTask { /* This task can only safely be applied when all apply tasks with earlier * sequence values are known, and when no preceding tasks have overlapping * areas */ sequence: u64, remote_id: Rid, data: Vec, is_diff_type: bool, /* Is data a BufferDiff, or is it a BufferFill message*/ ntrailing: usize, target: ApplyTaskTarget, /* The region of the target containing the precise area on which the diff * is to be applied */ region_start: usize, region_end: usize, } /** The target of a diff or fill operation */ enum DecompTarget { Dmabuf(DecompTaskDmabuf), File(DecompTaskFile), MirrorOnly(DecompTaskMirror), } /** A task to decompress a diff or fill message. * * Applying the decompressed operation is done by ApplyTask. */ struct DecompTask { sequence: Option, msg_view: ReadBufferView, file_size: usize, compression: Compression, /* How the message is compressed */ target: DecompTarget, } /** A structure keeping track of the various compute heavy tasks to perform or * which are in progress. * * Note: to avoid race conditions, tasks whose target regions or memory overlap * may be delayed until they have exclusive access. 
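/* A minimal sketch of the half-open interval overlap test that this
 * ordering depends on (illustrative helper, not part of Waypipe):
 *
 *     fn ranges_overlap(a: (usize, usize), b: (usize, usize)) -> bool {
 *         a.0 < b.1 && b.0 < a.1
 *     }
 *     assert!(ranges_overlap((0, 16), (8, 32)));   // conflict: must serialize
 *     assert!(!ranges_overlap((0, 16), (16, 32))); // disjoint: may run in parallel
 */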
*/ struct TaskSet { /* Tasks to construct diff or fill messages. These _should_ be safe to * evaluate in parallel, in any order, because these tasks should only * be created, once per protocol segment being delivered, inside * `collect_updates()`. */ construct: VecDeque, /** Construction tasks (see [TaskSet::construct]) which are waiting for their DMABUF * to be safely readable; this is only used if the system does not support importing * a sync file from the DMABUF. */ waiting_for_implicit_acquire: BTreeMap>, /* Decompress messages to apply a diff. */ last_seqno: u64, decompress: VecDeque, /* Sequence numbers for in-progress decompression tasks */ in_progress_decomp: BTreeSet, /* Diffs to apply (once ordering rules permit) */ apply: BTreeMap, /* Diff regions being applied */ in_progress_apply: BTreeSet<(u64, (usize, usize))>, // TODO: regions need tags for applicable shadowfd /* A list of pending copy operation timeline points (& target RIDs) which the main thread has not * yet noticed as complete */ apply_operations: Vec<(u64, Rid)>, // TODO: should these be sent in a channel or stored under mutex? /* List of fill tasks to start, once data has been copied out of the image */ dmabuf_fill_tasks: Vec, dmabuf_diff_tasks: Vec, /* Shutting down? */ stop: bool, } /** Structure shared between the main thread and workers, to keep track of tasks * and notify the workers or main thread when they must act. */ struct TaskSystem { task_notify: Condvar, tasks: Mutex, wake_fd: OwnedFd, } /** Context objects for compression and decompression. */ struct ThreadCacheComp { lz4_c: Option, zstd_c: Option, zstd_d: Option, } /** Data specific to a worker thread. */ struct ThreadCache { /* Large (~256KB) vector in which to store intermediate diff / decompression contents; size * dynamically increased as needed. */ large: Vec, cmd_pool: Option>, /* List of in-flight command operations, to be cleaned up on this thread */ copy_ops: Vec, /* List of in-flight decode operations, to be cleaned up on this thread */ decode_ops: Vec, // todo: sort by completion point? /* Contexts for compression/decompression */ comp: ThreadCacheComp, } /** State for [VulkanWaitThread] */ struct VulkanWaitThreadInner { pipe_w: OwnedFd, /* If true, have written 8 bytes to the pipe, but main loop has not read them */ pending_read: bool, target_sequence_point: Option, stop: bool, /* If there was an error on the thread, store it here to be forwarded to the main loop */ error: Option, } /** Data for a thread that notifies the main loop whenever the main vulkan timeline semaphore updates */ struct VulkanWaitThread { notify: Condvar, inner: Mutex, } /** Struct which on Drop notifies Vulkan semaphore wait thread to stop */ struct VulkanWaitThreadShutdown { state: Arc, } /** Messages to be written through the channel (and associated metadata.) */ struct TransferQueue<'a> { // protocol data: translated wayland messages protocol_data: &'a mut [u8], protocol_len: usize, protocol_header_added: bool, protocol_rids: Vec>>, // TODO: optimize later last_ack: u32, // most recent ack value put into one of the message fields needs_new_ack: bool, // does any ack message need to be sent? 
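    /* Note on the two 8-byte slots below (framing assumed: one complete ack
     * message per slot): `ack_msg_cur` is the message currently being written
     * (progress tracked by `ack_nwritten`), while `ack_msg_nxt` stages a
     * newer ack; once the partial write of `ack_msg_cur` completes,
     * `ack_msg_nxt` is shifted into its place. */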
ack_msg_cur: [u8; 8], // if nwritten = 0 and needs_new_ack, inject this; or continue if nwritten > 0 ack_msg_nxt: [u8; 8], // if nwritten != 0 and needs_new_ack, append this ack_nwritten: usize, // todo: iovec of messages to send before sending the 'output' buffer // TODO: indirection should not be necessary; regular Mutex should be OK other_messages: Vec>, recv_msgs: Receiver, // queue, receiving possibly empty messages (or errors to handle) expected_recvd_msgs: u64, // number of messages expected to be received before protocol can be sent // number of bytes that have been written; when other_messages + protocol // have been written, reset this nbytes_written: usize, } /** State corresponding to the transfers from the Wayland connection to the channel */ struct FromWayland<'a> { state: DirectionState, input: WaylandInputRing<'a>, // of Wayland messages & fds output: TransferQueue<'a>, } /** Structure containing semaphores to wait on before reading from a DMABUF. */ struct ExplicitSyncAcquires { explicit: Vec<(Arc, u64)>, implicit: Option, } /** Task to compute the changes for a shared memory file */ struct DiffTask { rid: Rid, compression: Compression, /* The region to compute the diff on (contains all intervals; excludes trailing bits). * May be "none" if there are only trailing bits. */ region: Option<(u32, u32)>, /* Damaged intervals */ intervals: Vec<(u32, u32)>, trailing: u32, target: Arc, } /** Task to compute the changed data for a DMABUF: initial step to start copying data from the image */ struct DiffDmabufTask { rid: Rid, compression: Compression, /* The region to compute the diff on (contains all intervals; excludes trailing bits) */ region: Option<(u32, u32)>, /* Damaged intervals */ intervals: Vec<(u32, u32)>, trailing: u32, img: Arc, mirror: Arc, view_row_stride: Option, acquires: ExplicitSyncAcquires, } enum ReadDmabufResult { Vulkan(Arc), Shm(Vec), } /** Task to compute the changed data for a DMABUF: final step to compute the diff */ struct DiffDmabufTask2 { rid: Rid, compression: Compression, /* The region to compute the diff on (contains all intervals; excludes trailing bits) */ region: Option<(u32, u32)>, /* Damaged intervals */ intervals: Vec<(u32, u32)>, trailing: u32, wait_until: u64, nominal_size: usize, read_buf: ReadDmabufResult, mirror: Arc, } /** Task to copy data for a DMABUF: initial step to start copying data */ struct FillDmabufTask { rid: Rid, compression: Compression, region_start: u32, region_end: u32, // Reading to small buffers; mirror: Option>, dst: Arc, view_row_stride: Option, acquires: ExplicitSyncAcquires, } /** Task to copy data for a DMABUF: final step to construct and compress message */ struct FillDmabufTask2 { rid: Rid, compression: Compression, region_start: u32, region_end: u32, wait_until: u64, // timeline value for copy to complete mirror: Option>, read_buf: ReadDmabufResult, } /** Task to encode DMABUF changes as a video packet */ struct VideoEncodeTask { vulk: Arc, state: Arc, remote_id: Rid, acquires: ExplicitSyncAcquires, } /** Task to apply a video packet to a DMABUF */ struct VideoDecodeTask { msg: ReadBufferView, remote_id: Rid, vulk: Arc, state: Arc, } /** A task to be performed by a worker thread */ enum WorkTask { FillDmabuf(FillDmabufTask), FillDmabuf2(FillDmabufTask2), Diff(DiffTask), DiffDmabuf(DiffDmabufTask), DiffDmabuf2(DiffDmabufTask2), Decomp(DecompTask), Apply(ApplyTask), VideoEncode(VideoEncodeTask), VideoDecode(VideoDecodeTask), } /** The result of a typical task */ enum TaskOutput { /** Have updated a mirror, so far; 
task completion will be signalled by some other pathway */
    MirrorApply,
    /** A new message to append to the output queue */
    Msg(Vec<u8>),
    /** Finished applying the message to the ShadowFd with the given RID */
    ApplyDone(Rid),
    /** Applying the message to the ShadowFd with the RID will be done when the main timeline semaphore
     * reaches the given point */
    DmabufApplyOp((u64, Rid)),
}

/** Result of a typical task (or an error message) */
type TaskResult = Result<TaskOutput, String>;

/** Damaged region of a DMABUF or File ShadowFd */
#[derive(PartialEq, Eq, Debug)]
pub enum Damage {
    Everything,
    Intervals(Vec<(usize, usize)>),
}

/** Data for a ShadowFdFile to be shared between threads */
pub struct ShadowFdFileCore {
    pub mem_mirror: Mirror,
    pub mapping: ExternalMapping,
}

/** A ShadowFd for a shared memory file object (either a mutable wl_shm_pool, or something readonly) */
pub struct ShadowFdFile {
    pub buffer_size: usize,
    pub remote_bufsize: usize,
    readonly: bool,
    pub damage: Damage,
    // note: Option used to extend safely; reconsider once mirror is mmapped
    // and ShadowFdFileCore contains just 'ExternalMapping' + 'InternalMapping'?
    pub core: Option<Arc<ShadowFdFileCore>>,
    pub fd: OwnedFd,
    /* Number of apply tasks received but whose work has not yet completed */
    pub pending_apply_tasks: u64,
}

/** Structure to hold a DMABUF */
pub enum DmabufImpl {
    Vulkan(Arc<VulkanDmabuf>),
    Gbm(GBMDmabuf),
}

/** A ShadowFd associated with a DMABUF */
pub struct ShadowFdDmabuf {
    pub buf: DmabufImpl,
    /* Mirror copy of the dmabuf; only present after the first request */
    mirror: Option<Arc<Mirror>>,
    pub drm_format: u32,
    pub drm_modifier: u64,
    pub damage: Damage,
    /* For compatibility with the C implementation of Waypipe, which acts as if
     * dmabufs had linear layout with stride matching the dmabuf stride parameter. */
    pub view_row_stride: Option<u32>,
    /* Is true until the first damage is applied and sent; typically will start
     * with a fill transfer, and do diff transfers afterwards */
    first_damage: bool,
    /* Vector of plane FDs to send, in order, plus aux info */
    pub export_planes: Vec<AddDmabufPlane>,
    /* If not None, the data necessary for video decoding */
    video_decode: Option<Arc<VideoDecodeState>>,
    video_encode: Option<Arc<VideoEncodeState>>,
    /** Set to true if the DMABUF was committed to a wl_surface which does
     * _not_ have a wp_linux_drm_syncobj_surface_v1 attached. Reset to false
     * after processing updates. Note: This variable is true when the
     * wl_buffer is committed to both explicitly and implicitly
     * synchronized surfaces. */
    pub using_implicit_sync: bool,
    /* Must wait for these before processing the buffer. */
    pub acquires: Vec<(u64, Rc<RefCell<ShadowFd>>)>,
    /* Must signal these when processing is complete. Keeps timeline objects alive. */
    pub releases: BTreeMap<(Rid, u64), Rc<RefCell<ShadowFd>>>,
    /* Number of apply tasks received but whose work has not yet completed */
    pub pending_apply_tasks: u64,
    /* wayland wl_buffer object id corresponding to this DMABUF */
    pub debug_wayland_id: ObjId,
}

/** State of a ShadowFdPipe */
enum ShadowFdPipeBuffer {
    // todo: use a ring buffer, and/or increase size to 32K
    ReadFromWayland((Box<[u8; 4096]>, usize)),
    // Note: no mechanism for backpressure at the moment. Not too critical for pipes since typically
    // application processing is faster than the network. Some client pairs could lead to this growing
    // large, but they could also waste memory as is.
    ReadFromChannel(VecDeque<u8>),
}

/** A ShadowFd associated with a pipe
 *
 * Note: Wayland protocols need one-directional transfers only */
pub struct ShadowFdPipe {
    buf: ShadowFdPipeBuffer,
    program_closed: bool, // can data be read or written from the program?
channel_closed: bool, /** File descriptor, end of pipe to read/write from */ fd: OwnedFd, /** File descriptor to be sent over Wayland connection */ export_fd: Option, } /** A ShadowFd associated with a DRM syncobj timeline object */ pub struct ShadowFdTimeline { pub timeline: Arc, export_fd: Option, /* wayland timeline object id corresponding to this timeline */ pub debug_wayland_id: ObjId, /* List of dmabufs which have pending associated release operations */ pub releases: Vec<(u64, Rc>)>, } /** Type of ShadowFd */ pub enum ShadowFdVariant { File(ShadowFdFile), Pipe(ShadowFdPipe), Dmabuf(ShadowFdDmabuf), Timeline(ShadowFdTimeline), } /** Structure keeping metadata and content for a file descriptor that Waypipe * is proxying over its connection */ pub struct ShadowFd { pub remote_id: Rid, /** This is true if the ShadowFd has not yet been replicated. This flag * is useful in particular for shm pool fds, which may be resized after * creation. Delaying the creation message allows some resize operations * to be avoided. * */ pub only_here: bool, pub data: ShadowFdVariant, } /** Read data and fds from a Wayland socket.. * * Returns true if the Wayland connection fd closed */ fn read_from_socket(socket: &OwnedFd, buf: &mut WaylandInputRing<'_>) -> Result { // assume the socket starts _empty_, for now if buf.len == buf.data.len() { panic!( "no remaining space: {} used, {} total", buf.len, buf.data.len() ); } let mut iovs = [IoSliceMut::new(&mut buf.data[buf.len..])]; /* libwayland sends at most 28 file descriptors at a time */ let mut cmsg_fds = nix::cmsg_space!([RawFd; 32]); let r = socket::recvmsg::( socket.as_raw_fd(), &mut iovs, Some(&mut cmsg_fds), socket::MsgFlags::empty(), ); match r { Ok(resp) => { buf.len += resp.bytes; for msg in resp .cmsgs() .map_err(|x| tag!("Failed to get cmsgs: {:?}", x))? { match msg { socket::ControlMessageOwned::ScmRights(tfds) => { for f in &tfds { if *f == -1 { return Err(tag!("Received too many file descriptors")); } buf.fds.push_back(unsafe { // SAFETY: fd was just created, checked valid, and is recorded nowhere else OwnedFd::from_raw_fd(*f) }); } } _ => { error!("Unexpected control message: {:?}, ignoring", msg); } } } Ok(resp.bytes == 0) } Err(nix::errno::Errno::ECONNRESET) => Ok(true), Err(nix::errno::Errno::EINTR) | Err(nix::errno::Errno::EAGAIN) => { // Having no data (EAGAIN) for nonblocking FDs // is unexpected due to use of poll; but can safely ignore this. // For EINTR, we could retry in this a loop, but instead will // return OK and let the main loop handle retrying for us Ok(false) } Err(x) => Err(tag!("Error reading from socket: {:?}", x)), } } /** Write data and fds to a Wayland socket. 
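 *
 * At most 16 file descriptors are attached per sendmsg() call; when the
 * queued fds would exceed that, only a single byte of message data is sent
 * alongside them (the `trunc` path below), so descriptors always arrive no
 * later than the message bytes that reference them.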
* * Returns true if the Wayland connection fd closed */ fn write_to_socket(socket: &OwnedFd, buf: &mut TransferWayland<'_>) -> Result { assert!(buf.len > 0); // let nfds_sent = std::cmp::min(buf.fds.len(), 16); let mut raw_fds: [RawFd; 16] = [0; 16]; let mut nfds_sent = 0; let mut i = 0; let mut trunc = false; while let Some(r) = buf.fds.get(i) { i += 1; let sfd = r.borrow(); match sfd.data { ShadowFdVariant::File(ref data) => { if nfds_sent + 1 > 16 { trunc = true; break; } raw_fds[nfds_sent] = data.fd.as_raw_fd(); nfds_sent += 1; } ShadowFdVariant::Pipe(ref data) => { if nfds_sent + 1 > 16 { trunc = true; break; } raw_fds[nfds_sent] = data.export_fd.as_ref().unwrap().as_raw_fd(); nfds_sent += 1; } ShadowFdVariant::Timeline(ref data) => { if nfds_sent + 1 > 16 { trunc = true; break; } raw_fds[nfds_sent] = data.export_fd.as_ref().unwrap().as_raw_fd(); nfds_sent += 1; } ShadowFdVariant::Dmabuf(ref data) => { if nfds_sent + data.export_planes.len() > 16 { trunc = true; break; } for (i, e) in data.export_planes.iter().enumerate() { raw_fds[nfds_sent + i] = e.fd.as_raw_fd(); } nfds_sent += data.export_planes.len(); } } } /* File descriptors and message bytes will always be queued _together_, but * currently the association is lost, and the file descriptors must arrive before * the message bytes. Therefore: eagerly send all the fds. The 16 fd limit per * message is OK, since no message should have >16 fds/byte. */ let nbytes_sent = if trunc { 1 } else { buf.len }; let iovs = [IoSlice::new(&buf.data[buf.start..buf.start + nbytes_sent])]; let cmsgs = [nix::sys::socket::ControlMessage::ScmRights( &raw_fds[..nfds_sent], )]; let r = nix::sys::socket::sendmsg::<()>( socket.as_raw_fd(), &iovs, if nfds_sent > 0 { &cmsgs } else { &[] }, nix::sys::socket::MsgFlags::empty(), None, ); match r { Ok(s) => { /* Because the buffer is flushed entirely before anything new is * added to it, */ buf.start += s; buf.len -= s; if buf.len == 0 { buf.start = 0; } /* All fds provided were sent; drop them if necessary */ while nfds_sent > 0 { let r = buf.fds.pop_front().unwrap(); let mut sfd = r.borrow_mut(); match sfd.data { ShadowFdVariant::File(_) => { nfds_sent -= 1; } ShadowFdVariant::Pipe(ref mut data) => { data.export_fd = None; nfds_sent -= 1; } ShadowFdVariant::Timeline(ref mut data) => { data.export_fd = None; nfds_sent -= 1; } ShadowFdVariant::Dmabuf(ref mut data) => { nfds_sent -= data.export_planes.len(); data.export_planes = Vec::new(); } } } Ok(false) } Err(nix::errno::Errno::EINTR) | Err(nix::errno::Errno::EAGAIN) => { // Having no space (EAGAIN) for nonblocking FDs is unexpected due // to use of poll; but not impossible; can safely ignore this. // For EINTR, we could retry in this a loop, but instead will // return OK and let the main loop handle retrying for us Ok(false) } Err(nix::errno::Errno::ECONNRESET) | Err(nix::errno::Errno::EPIPE) => { /* Socket has disconnected or at least partially shut down */ Ok(true) } Err(e) => Err(tag!("Error writing to socket: {:?}", e)), } } /** Write data to the channel connecting this Waypipe instance with the other one. 
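 *
 * Drain order, as implemented below: any partially written ack message
 * first, then `other_messages` (with a fresh 8-byte ack injected only at a
 * whole-message boundary), and the translated protocol data last, once
 * `expected_recvd_msgs` reaches zero.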
* * Returns true if the channel connection fd closed */ fn write_to_channel(socket: &OwnedFd, queue: &mut TransferQueue) -> Result { let send_protocol = queue.protocol_len > 0 && queue.expected_recvd_msgs == 0; let mut net_len = if send_protocol { queue.protocol_len } else { 0 }; for v in &queue.other_messages { net_len += v.len(); } assert!(net_len % 4 == 0); debug!( "Write to channel: {} protocol bytes (send: {}), {} net, {} written, {} needs-ack, {} ack-written ", queue.protocol_len, send_protocol, net_len, queue.nbytes_written, queue.needs_new_ack, queue.ack_nwritten, ); if net_len == 0 && !queue.needs_new_ack && queue.ack_nwritten == 0 { /* Nothing to write, cannot EOF */ return Ok(false); } // After writing everything, should wipe entries assert!(queue.nbytes_written < net_len || (queue.needs_new_ack || queue.ack_nwritten > 0)); let mut nwritten = queue.nbytes_written; let mut iovs: Vec = Vec::new(); if queue.ack_nwritten > 0 { iovs.push(IoSlice::new(&queue.ack_msg_cur[queue.ack_nwritten..])); } let opt_whole_ack: Option<&[u8]> = if queue.needs_new_ack { Some(if queue.ack_nwritten > 0 { &queue.ack_msg_nxt } else { &queue.ack_msg_cur }) } else { None }; let mut injected_whole_ack = false; let mut first_partial_len = 0; for v in &queue.other_messages { if v.len() <= nwritten { nwritten -= v.len(); } else { if let Some(ackmsg) = opt_whole_ack { // only inject ack message after a complete message if !injected_whole_ack && nwritten == 0 { iovs.push(IoSlice::new(ackmsg)); injected_whole_ack = true; } } if nwritten > 0 { first_partial_len = v.len() - nwritten; } iovs.push(IoSlice::new(&v[nwritten..])); nwritten = 0; } } if let Some(ackmsg) = opt_whole_ack { // only inject ack message after a complete message if !injected_whole_ack && nwritten == 0 { iovs.push(IoSlice::new(ackmsg)); injected_whole_ack = true; } } if nwritten < queue.protocol_len && send_protocol { iovs.push(IoSlice::new( &queue.protocol_data[nwritten..queue.protocol_len], )); } let r = uio::writev(socket, &iovs); match r { Ok(mut len) => { debug!("Wrote: {} bytes", len); /* This is complicated somewhat by the out-of-order ack messages. * First, record amount of partial ack that was written. */ if queue.ack_nwritten > 0 { let absorbed = std::cmp::min(8 - queue.ack_nwritten, len); debug!("Absorbed {} bytes from partial ack message", absorbed); queue.ack_nwritten += absorbed; len -= absorbed; if queue.ack_nwritten == 8 { /* Partial message complete, move next message forward. * Copying garbage is fine if needs_new_ack is false. */ queue.ack_nwritten = 0; queue.ack_msg_cur = queue.ack_msg_nxt; } } /* Next, account first partial message to nbytes_written */ if first_partial_len > 0 { let absorbed = std::cmp::min(first_partial_len, len); debug!("Absorbed {} bytes from first partial message", absorbed); queue.nbytes_written += absorbed; len -= absorbed; } /* Next, account to whole ack */ if injected_whole_ack { let absorbed = std::cmp::min(8, len); debug!("Absorbed {} bytes from whole ack message", absorbed); len -= absorbed; if absorbed < 8 { // write in progress queue.ack_nwritten = absorbed; } /* whole ack has been sent or is being sent */ queue.needs_new_ack = false; } /* Finally, account remainder to nbytes_written */ debug!("Absorbed {} bytes from the rest", len); queue.nbytes_written += len; if queue.nbytes_written == net_len { debug!("Completed write to channel of total length: {}", net_len); if send_protocol { // All done. Reset. 
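                    /* The protocol buffer was flushed in full, so its length
                     * and header flag are cleared to let the next batch of
                     * translated Wayland messages accumulate from scratch. */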
queue.protocol_len = 0; queue.protocol_header_added = false; } // issue: clear does not reset allocation amount queue.other_messages.clear(); queue.nbytes_written = 0; } Ok(false) } Err(nix::errno::Errno::EPIPE) | Err(nix::errno::Errno::ECONNRESET) => Ok(true), Err(nix::errno::Errno::EINTR) | Err(nix::errno::Errno::EAGAIN) => Ok(false), Err(x) => Err(tag!("Error writing to socket: {:?}", x)), } } /** Read data from the channel connecting this Waypipe instance with the other one. * * Returns true if the channel connection fd closed */ fn read_from_channel(socket: &OwnedFd, from_chan: &mut FromChannel) -> Result { let eof = from_chan.input.read_more(socket)?; if from_chan.next_msg.is_none() { from_chan.next_msg = from_chan.input.pop_next_msg(); } Ok(eof) } /** Set up a vulkan or gbm instance but do not fully initialize it (since it is not yet clear if the * client will try to use it). Will return Unavailable if there are no devices available. */ pub fn try_setup_dmabuf_instance_light( opts: &Options, device: Option, ) -> Result { if !opts.test_skip_vulkan { if let Some(instance) = setup_vulkan_instance( opts.debug, &opts.video, opts.test_no_timeline_export, opts.test_no_binary_semaphore_import, )? { if instance.has_device(device) { return Ok(DmabufDevice::VulkanSetup(instance)); } } } /* Fallback path if Vulkan is not available */ if let Some(dev) = setup_gbm_device(device)? { return Ok(DmabufDevice::Gbm(dev)); } Ok(DmabufDevice::Unavailable) } /** Set up a vulkan or gbm instance and initialize it */ pub fn try_setup_dmabuf_instance_full( opts: &Options, device: Option, ) -> Result { if !opts.test_skip_vulkan { if let Some(instance) = setup_vulkan_instance( opts.debug, &opts.video, opts.test_no_timeline_export, opts.test_no_binary_semaphore_import, )? { if let Some(device) = setup_vulkan_device(&instance, device, &opts.video, opts.debug)? { return Ok(DmabufDevice::Vulkan((instance, device))); } }; } /* Fallback path if Vulkan is not available */ if let Some(dev) = setup_gbm_device(device)? { return Ok(DmabufDevice::Gbm(dev)); } Ok(DmabufDevice::Unavailable) } /** Fully initialize Vulkan device, and error if this does not work */ pub fn complete_dmabuf_setup( opts: &Options, device: Option, dmabuf_dev: &mut DmabufDevice, ) -> Result<(), String> { if matches!(dmabuf_dev, DmabufDevice::VulkanSetup(_)) { let mut tmp = DmabufDevice::Unknown; std::mem::swap(dmabuf_dev, &mut tmp); let DmabufDevice::VulkanSetup(instance) = tmp else { unreachable!(); }; let device = setup_vulkan_device(&instance, device, &opts.video, opts.debug)? 
.expect("Vulkan device existence should already have been checked"); *dmabuf_dev = DmabufDevice::Vulkan((instance, device)); } Ok(()) } pub fn dmabuf_dev_supports_format(dmabuf_dev: &DmabufDevice, format: u32, modifier: u64) -> bool { match dmabuf_dev { DmabufDevice::Unknown | DmabufDevice::Unavailable | DmabufDevice::VulkanSetup(_) => { unreachable!() } DmabufDevice::Vulkan((_, vulk)) => vulk.supports_format(format, modifier), DmabufDevice::Gbm(gbm) => gbm_supported_modifiers(gbm, format).contains(&modifier), } } pub fn dmabuf_dev_modifier_list(dmabuf_dev: &DmabufDevice, format: u32) -> &[u64] { match dmabuf_dev { DmabufDevice::Unknown | DmabufDevice::Unavailable | DmabufDevice::VulkanSetup(_) => { unreachable!() } DmabufDevice::Vulkan((_, vulk)) => vulk.get_supported_modifiers(format), DmabufDevice::Gbm(gbm) => gbm_supported_modifiers(gbm, format), } } pub fn dmabuf_dev_get_id(dmabuf_dev: &DmabufDevice) -> u64 { match dmabuf_dev { DmabufDevice::Unknown | DmabufDevice::Unavailable | DmabufDevice::VulkanSetup(_) => { unreachable!() } DmabufDevice::Vulkan((_, vulk)) => vulk.get_device(), DmabufDevice::Gbm(gbm) => gbm_get_device_id(gbm), } } /** When using GBM for DMABUFs, changes are accumulated in the mirror and synchronously * copied to the DMABUF after all changes have been received. This function does this */ pub fn dmabuf_post_apply_task_operations(data: &mut ShadowFdDmabuf) -> Result<(), String> { if let DmabufImpl::Gbm(ref mut buf) = data.buf { /* Synchronize mirror, which has collected all updates so far, * with the DMABUF. */ let len = buf.nominal_size(data.view_row_stride); let src = data .mirror .as_ref() .unwrap() .get_mut_range(0..len) .ok_or_else(|| tag!("Failed to get entire mirror, to apply changes to DMABUF"))?; buf.copy_onto_dmabuf(data.view_row_stride, src.data)?; } Ok(()) } /** For a DMABUF: * - If required and supported, return a [VulkanSyncFile] from which to derive binary acquire semaphores * - If required, return true to indicate that the DMABUF should be polled to determine when it is safe to read */ pub fn dmabuf_setup_implicit_sync( data: &ShadowFdDmabuf, ) -> Result<(Option, bool), String> { if data.using_implicit_sync { /* Using implicit sync, so export a sync file to be waited on */ if let DmabufImpl::Vulkan(ref vulk_buf) = data.buf { if !vulk_buf.vulk.supports_binary_semaphore_import() { Ok((None, true)) } else if let Some(f) = vulk_buf.export_sync_file()? { Ok((Some(f), false)) } else { Ok((None, true)) } } else { Ok((None, false)) } } else { Ok((None, false)) } } /** Construct a ShadowFd for a shared memory file descriptor * * `readonce`: is this just a raw file transfer? 
*/ pub fn translate_shm_fd( fd: OwnedFd, size_lb: usize, map: &mut BTreeMap>>, max_local_id: &mut i32, default_damage: bool, readonce: bool, local_only: bool, ) -> Result>, String> { let remote_id = allocate_rid(max_local_id); let mapping: ExternalMapping = ExternalMapping::new(&fd, size_lb, readonce)?; let mir_size = if readonce { 0 } else { size_lb }; let core = Some(Arc::new(ShadowFdFileCore { /* read-once files do not need a mirror */ mem_mirror: Mirror::new(mir_size, !readonce)?, mapping, })); let sfd = Rc::new(RefCell::new(ShadowFd { remote_id, only_here: !local_only, data: ShadowFdVariant::File(ShadowFdFile { fd, buffer_size: size_lb, readonly: readonce, remote_bufsize: 0, damage: if default_damage { Damage::Everything } else { Damage::Intervals(Vec::new()) }, pending_apply_tasks: 0, core, }), })); map.insert(remote_id, Rc::downgrade(&sfd)); Ok(sfd) } /** Construct a ShadowFd for a DMABUF file descriptor */ pub fn translate_dmabuf_fd( width: u32, height: u32, drm_format: u32, planes: Vec, opts: &Options, device: &DmabufDevice, max_local_id: &mut i32, map: &mut BTreeMap>>, wayland_id: ObjId, ) -> Result>, String> { let remote_id = allocate_rid(max_local_id); debug!("Translating dmabuf fd"); let drm_modifier = planes[0].modifier; let (buf, video_encode) = match device { DmabufDevice::Unknown | DmabufDevice::Unavailable | DmabufDevice::VulkanSetup(_) => { unreachable!() } DmabufDevice::Vulkan((_, vulk)) => { let mut use_video = false; if let Some(ref f) = opts.video.format { if supports_video_format(vulk, *f, drm_format, width, height) { use_video = true; } } if use_video { if vulk .can_import_image(drm_format, width, height, &planes, true) .is_err() { use_video = false; } } if !use_video { if let Err(e) = vulk.can_import_image(drm_format, width, height, &planes, false) { return Err(tag!("Cannot import DMABUF, format/size/modifier combination: {:x}, {}x{}, {:x}, has: {}", drm_format, width, height, planes[0].modifier, e)); } } let buf = vulkan_import_dmabuf(vulk, planes, width, height, drm_format, use_video)?; let video_encode = if use_video { if let Some(f) = opts.video.format { Some(Arc::new(setup_video_encode( &buf, f, opts.video.bits_per_frame, )?)) } else { None } } else { None }; (DmabufImpl::Vulkan(buf), video_encode) } DmabufDevice::Gbm(gbm) => ( DmabufImpl::Gbm(gbm_import_dmabuf(gbm, planes, width, height, drm_format)?), None, ), }; let sfd = Rc::new(RefCell::new(ShadowFd { remote_id, only_here: true, data: ShadowFdVariant::Dmabuf(ShadowFdDmabuf { buf, mirror: None, drm_format, drm_modifier, first_damage: true, export_planes: Vec::new(), damage: Damage::Intervals(Vec::new()), video_decode: None, video_encode, acquires: Vec::new(), releases: BTreeMap::new(), pending_apply_tasks: 0, using_implicit_sync: false, /* Use the optimal (packed) stride for fill/diff operations */ view_row_stride: None, debug_wayland_id: wayland_id, }), })); map.insert(remote_id, Rc::downgrade(&sfd)); Ok(sfd) } /** Construct a ShadowFd for a timeline object file descriptor */ pub fn translate_timeline( fd: OwnedFd, glob: &mut Globals, object_id: ObjId, ) -> Result>, String> { let remote_id = allocate_rid(&mut glob.max_local_id); debug!("Translating timeline semaphore fd"); let DmabufDevice::Vulkan((_, ref vulk)) = glob.dmabuf_device else { unreachable!(); }; let tm = vulkan_import_timeline(vulk, fd)?; let sfd = Rc::new(RefCell::new(ShadowFd { remote_id, only_here: true, data: ShadowFdVariant::Timeline(ShadowFdTimeline { timeline: tm, export_fd: None, debug_wayland_id: object_id, releases: Vec::new(), }), 
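        /* `only_here: true` above marks this timeline as not yet replicated;
         * per the ShadowFd field docs, a creation message is expected to be
         * emitted before the RID is first referenced remotely (convention
         * assumed). */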
})); glob.map.insert(remote_id, Rc::downgrade(&sfd)); Ok(sfd) } /** Construct a ShadowFd for a pipe-like file descriptor */ pub fn translate_pipe_fd( fd: OwnedFd, glob: &mut Globals, reading_from_channel: bool, ) -> Result>, String> { let remote_id = allocate_rid(&mut glob.max_local_id); debug!( "Translating pipe fd: reading from channel: {}", reading_from_channel ); let sfd = Rc::new(RefCell::new(ShadowFd { remote_id, only_here: true, data: ShadowFdVariant::Pipe(ShadowFdPipe { fd, export_fd: None, buf: if reading_from_channel { ShadowFdPipeBuffer::ReadFromChannel(VecDeque::new()) } else { ShadowFdPipeBuffer::ReadFromWayland((Box::new([0; 4096]), 0)) }, program_closed: false, channel_closed: false, }), })); glob.map.insert(remote_id, Rc::downgrade(&sfd)); glob.pipes.push(sfd.clone()); Ok(sfd) } /** Update mapping and mirror of a shared memory file descriptor for an increased * size */ pub fn update_core_for_new_size( fd: &OwnedFd, size: usize, core: &mut Option>, ) -> Result<(), String> { let mapping: ExternalMapping = ExternalMapping::new(fd, size, false)?; // mutating data.core requires exclusive access let mut alt: Option> = None; std::mem::swap(core, &mut alt); let mut inner = Arc::::into_inner(alt.unwrap()) .ok_or("ExtendFile invoked without exclusive access to ShadowFd")?; inner.mem_mirror.extend(size)?; let mut new: Option> = Some(Arc::new(ShadowFdFileCore { mem_mirror: inner.mem_mirror, mapping, })); std::mem::swap(core, &mut new); Ok(()) } /** Lookup a ShadowFd by its RID, if it has been created and is still referenced by something */ fn get_sfd( map: &BTreeMap>>, rid: Rid, ) -> Option>> { map.get(&rid)?.upgrade() } /** Process a message directed to a ShadowFd */ fn process_sfd_msg( typ: WmsgType, length: usize, msg_view: ReadBufferView, glob: &mut Globals, tasksys: &TaskSystem, ) -> Result<(), String> { let msg = &msg_view.get()[..length]; if msg.len() < 8 { return Err(tag!( "message to shadowfd is too short, {} bytes", msg.len() )); } let remote_id = Rid(i32::from_le_bytes(msg[4..8].try_into().unwrap())); match typ { WmsgType::OpenFile => { /* Note: a slight optimization is possible by caching all messages to * a new RID, and delaying their application until the protocol actually * needs them. This would reveal whether the file is used to send or * receive data, or in a one-shot or repeated fashion, allowing some * optimizations. Alternative: make an OpenFileV2 message to encode this. */ // todo: handle error let size = i32::from_le_bytes(msg[8..12].try_into().unwrap()); let local_fd = memfd::memfd_create( c"/waypipe", memfd::MemFdCreateFlag::MFD_CLOEXEC | memfd::MemFdCreateFlag::MFD_ALLOW_SEALING, ) .map_err(|x| tag!("Failed to create memfd: {:?}", x))?; unistd::ftruncate(&local_fd, size as libc::off_t) .map_err(|x| tag!("Failed to resize memfd: {:?}", x))?; // Newly created memfds are fully zeroed // TODO: delay mirror creation if file will never be read from let mirror = Mirror::new(size as usize, true)?; // TODO: resize and seal (to grow only?) 
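                /* Wire layout consumed so far (little-endian): bytes 0..4 are
                 * the message header (parsed by the caller), 4..8 the RID, and
                 * 8..12 the initial file size. */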
let mapping: ExternalMapping = ExternalMapping::new(&local_fd, size as usize, false)?; let core = Some(Arc::new(ShadowFdFileCore { mem_mirror: mirror, mapping, })); let sfd = Rc::new(RefCell::new(ShadowFd { remote_id, only_here: false, data: ShadowFdVariant::File(ShadowFdFile { fd: local_fd, buffer_size: size as usize, readonly: false, remote_bufsize: size as usize, damage: Damage::Intervals(Vec::new()), core, pending_apply_tasks: 0, }), })); glob.map.insert(remote_id, Rc::downgrade(&sfd)); glob.fresh.insert(remote_id, sfd); Ok(()) } WmsgType::ExtendFile => { let size = i32::from_le_bytes(msg[8..12].try_into().unwrap()); let new_size: usize = size .try_into() .map_err(|_| tag!("Invalid size: {}", size))?; let sfd_handle = get_sfd(&glob.map, remote_id).ok_or("RID does not exist")?; let mut sfd = sfd_handle.borrow_mut(); let ShadowFdVariant::File(data) = &mut sfd.data else { return Err(tag!("Applying ExtendFile to non-File ShadowFd")); }; if data.buffer_size > new_size { return Err(tag!("ExtendFile would shrink size")); } else if data.buffer_size == new_size { return Ok(()); // no-op } if data.readonly { return Err(tag!("Tried to apply ExtendFile to read-only file")); } debug!( "Extending file at RID={} from {} to {} bytes", remote_id, data.buffer_size, new_size ); unistd::ftruncate(&data.fd, new_size.try_into().unwrap()) .map_err(|x| tag!("Failed to resize file: {:?}", x))?; data.buffer_size = new_size; data.remote_bufsize = new_size; update_core_for_new_size(&data.fd, new_size, &mut data.core)?; Ok(()) } WmsgType::BufferDiff => { let sfd_handle = get_sfd(&glob.map, remote_id).ok_or("RID not in map")?; let mut sfd = sfd_handle.borrow_mut(); match &mut sfd.data { ShadowFdVariant::Dmabuf(data) => { match data.buf { DmabufImpl::Vulkan(ref buf) => { // TODO: check that all preceding releases actually were signalled beforehand // (i.e., check for client misbehavior) data.pending_apply_tasks += 1; let t = DecompTask { sequence: None, msg_view, file_size: buf.nominal_size(data.view_row_stride), compression: glob.opts.compression, target: DecompTarget::Dmabuf(DecompTaskDmabuf { dst: buf.clone(), view_row_stride: data.view_row_stride, mirror: data.mirror.clone(), }), }; tasksys.tasks.lock().unwrap().decompress.push_back(t); tasksys.task_notify.notify_one(); } DmabufImpl::Gbm(ref buf) => { data.pending_apply_tasks += 1; let nominal_size = buf.nominal_size(data.view_row_stride); if data.mirror.is_none() { data.mirror = Some(Arc::new(Mirror::new(nominal_size, false)?)); } let t = DecompTask { sequence: None, msg_view, file_size: nominal_size, compression: glob.opts.compression, target: DecompTarget::MirrorOnly(DecompTaskMirror { mirror: data.mirror.as_ref().unwrap().clone(), notify_on_completion: true, }), }; tasksys.tasks.lock().unwrap().decompress.push_back(t); tasksys.task_notify.notify_one(); } } Ok(()) } ShadowFdVariant::File(data) => { if data.readonly { return Err(tag!("Applying BufferDiff to readonly ShadowFd")); } data.pending_apply_tasks += 1; let t = DecompTask { sequence: None, msg_view, file_size: data.buffer_size, compression: glob.opts.compression, target: DecompTarget::File(DecompTaskFile { skip_mirror: data.readonly, target: data.core.as_ref().unwrap().clone(), }), }; tasksys.tasks.lock().unwrap().decompress.push_back(t); tasksys.task_notify.notify_one(); Ok(()) } _ => Err(tag!("Applying BufferDiff to non-File ShadowFd")), } } WmsgType::BufferFill => { let sfd_handle = get_sfd(&glob.map, remote_id).ok_or("RID not in map")?; let mut sfd = sfd_handle.borrow_mut(); match &mut sfd.data { 
ShadowFdVariant::Dmabuf(data) => { match data.buf { DmabufImpl::Vulkan(ref buf) => { data.pending_apply_tasks += 1; let t = DecompTask { sequence: None, msg_view, compression: glob.opts.compression, file_size: buf.nominal_size(data.view_row_stride), target: DecompTarget::Dmabuf(DecompTaskDmabuf { dst: buf.clone(), view_row_stride: data.view_row_stride, mirror: data.mirror.clone(), }), }; tasksys.tasks.lock().unwrap().decompress.push_back(t); tasksys.task_notify.notify_one(); } DmabufImpl::Gbm(ref buf) => { data.pending_apply_tasks += 1; let nominal_size = buf.nominal_size(data.view_row_stride); if data.mirror.is_none() { data.mirror = Some(Arc::new(Mirror::new(nominal_size, false)?)); } let t = DecompTask { sequence: None, msg_view, compression: glob.opts.compression, file_size: nominal_size, target: DecompTarget::MirrorOnly(DecompTaskMirror { mirror: data.mirror.as_ref().unwrap().clone(), notify_on_completion: true, }), }; tasksys.tasks.lock().unwrap().decompress.push_back(t); tasksys.task_notify.notify_one(); /* The mirror will be copied onto the DMABUF synchronously * when the next Wayland message requires it */ } } Ok(()) } ShadowFdVariant::File(data) => { data.pending_apply_tasks += 1; let t = DecompTask { sequence: None, msg_view, file_size: data.buffer_size, target: DecompTarget::File(DecompTaskFile { skip_mirror: data.readonly, target: data.core.as_ref().unwrap().clone(), }), compression: glob.opts.compression, }; tasksys.tasks.lock().unwrap().decompress.push_back(t); tasksys.task_notify.notify_one(); Ok(()) } _ => Err(tag!("Applying BufferFill to non-File ShadowFd")), } } WmsgType::OpenDMABUF => { // todo: handle error let width = u32::from_le_bytes(msg[12..16].try_into().unwrap()); let height = u32::from_le_bytes(msg[16..20].try_into().unwrap()); let drm_format = u32::from_le_bytes(msg[20..24].try_into().unwrap()); /* Ignore all other parameters -- these should be chosen locally */ /* For compatibility reasons, will interpret dmabuf data diff/fill ops as having * linear layout with the specified stride */ let view_row_stride = Some(dmabuf_slice_get_first_stride( msg[12..76].try_into().unwrap(), )); /* Restrict to compositor preferred modifiers, if any are available, or the (probably) * sublist of modifiers that were at one point in time acceptable for screencopy; * otherwise, try arbitrary format, since e.g. a Wayland client could be blindly guessing * the compositor's format support matches its own. */ let modifier_list = glob.screencopy_restrictions.get(&drm_format).or_else(|| glob.advertised_modifiers.get(&drm_format)).map(|x| &x[..]).unwrap_or_else(|| { debug!("No advertised modifiers for {}, falling back to arbitrary supported modifier", drm_format); dmabuf_dev_modifier_list(&glob.dmabuf_device, drm_format) } ); let (buf, nom_size, add_planes) = match glob.dmabuf_device { DmabufDevice::Unknown | DmabufDevice::Unavailable | DmabufDevice::VulkanSetup(_) => { return Err(tag!("Received OpenDMABUF too early")); } DmabufDevice::Vulkan((_, ref vulk)) => { let (buf, add_planes) = vulkan_create_dmabuf( vulk, width, height, drm_format, modifier_list, /* force linear; these might be the fastest to create/update on compositor side, * although they might make compositor rendering a bit less efficient. 
*/ // &[0], false, )?; let nom_size = buf.nominal_size(view_row_stride); (DmabufImpl::Vulkan(buf), nom_size, add_planes) } DmabufDevice::Gbm(ref gbm) => { let mods = gbm_supported_modifiers(gbm, drm_format); let (buf, add_planes) = gbm_create_dmabuf(gbm, width, height, drm_format, mods)?; let nom_size = buf.nominal_size(view_row_stride); (DmabufImpl::Gbm(buf), nom_size, add_planes) } }; /* Eagerly create a mirror copy of the dmabuf contents; this is currently needed * to properly handle non-texel-block-aligned diff messages, as those cannot be directly * written to the dmabuf and must update the mirror first. */ // TODO: for protocol version 2, this can be dropped let mirror = Some(Arc::new(Mirror::new(nom_size, false)?)); let drm_modifier = add_planes[0].modifier; let sfd = Rc::new(RefCell::new(ShadowFd { remote_id, only_here: false, data: ShadowFdVariant::Dmabuf(ShadowFdDmabuf { buf, mirror, drm_format, drm_modifier, view_row_stride, first_damage: true, damage: Damage::Intervals(Vec::new()), export_planes: add_planes, video_decode: None, video_encode: None, using_implicit_sync: false, acquires: Vec::new(), releases: BTreeMap::new(), pending_apply_tasks: 0, debug_wayland_id: ObjId(0), }), })); glob.map.insert(remote_id, Rc::downgrade(&sfd)); glob.fresh.insert(remote_id, sfd); Ok(()) } WmsgType::OpenDMAVidDstV2 => { // 8..12: file size, can ignore let vid_flags = u32::from_le_bytes(msg[12..16].try_into().unwrap()); let width = u32::from_le_bytes(msg[16..20].try_into().unwrap()); let height = u32::from_le_bytes(msg[20..24].try_into().unwrap()); let drm_format = u32::from_le_bytes(msg[24..28].try_into().unwrap()); /* Ignore all other parameters -- these should be chosen locally */ /* For compatibility reasons, will interpret dmabuf data diff/fill ops as having * linear layout with the specified stride */ // TODO: need method to prevent diff/fill ops from conflicting with video: is this even needed? let view_row_stride = Some(dmabuf_slice_get_first_stride( msg[12..76].try_into().unwrap(), )); const H264: u32 = VideoFormat::H264 as u32; const VP9: u32 = VideoFormat::VP9 as u32; const AV1: u32 = VideoFormat::AV1 as u32; let vid_type: VideoFormat = match vid_flags & 0xff { H264 => VideoFormat::H264, VP9 => VideoFormat::VP9, AV1 => VideoFormat::AV1, _ => { return Err(tag!("Unidentified video format {}", vid_flags & 0xff)); } }; let DmabufDevice::Vulkan((_, ref vulk)) = glob.dmabuf_device else { return Err(tag!( "Received OpenDMAVidDstV2 before Vulkan device was set up" )); }; if !supports_video_format(vulk, vid_type, drm_format, width, height) { return Err(tag!( "Video format {:?} at {}x{} is not supported", vid_type, width, height )); } let modifier_list = glob.screencopy_restrictions.get(&drm_format).or_else(|| glob.advertised_modifiers.get(&drm_format)).map(|x| &x[..]).unwrap_or_else(|| { debug!("No advertised modifiers for {}, falling back to arbitrary supported modifier", drm_format); dmabuf_dev_modifier_list(&glob.dmabuf_device, drm_format) } ); let (buf, add_planes) = vulkan_create_dmabuf( vulk, width, height, drm_format, modifier_list, /* force linear; these might be the fastest to create/update on compositor side, * although they might make compositor rendering a bit less efficient. 
*/ // &[0], true, )?; let drm_modifier = add_planes[0].modifier; let video_decode_state = setup_video_decode(&buf, vid_type)?; let sfd = Rc::new(RefCell::new(ShadowFd { remote_id, only_here: false, data: ShadowFdVariant::Dmabuf(ShadowFdDmabuf { buf: DmabufImpl::Vulkan(buf), mirror: None, drm_format, drm_modifier, view_row_stride, first_damage: true, damage: Damage::Intervals(Vec::new()), export_planes: add_planes, video_decode: Some(Arc::new(video_decode_state)), video_encode: None, using_implicit_sync: false, acquires: Vec::new(), releases: BTreeMap::new(), pending_apply_tasks: 0, debug_wayland_id: ObjId(0), }), })); glob.map.insert(remote_id, Rc::downgrade(&sfd)); glob.fresh.insert(remote_id, sfd); Ok(()) } WmsgType::SendDMAVidPacket => { let sfd_handle = get_sfd(&glob.map, remote_id).ok_or("RID not in map")?; let mut sfd = sfd_handle.borrow_mut(); let ShadowFdVariant::Dmabuf(data) = &mut sfd.data else { return Err(tag!("Applying DMAVid to non-DMABUF ShadowFd")); }; let Some(ref video_decode) = data.video_decode else { return Err(tag!( "Applying DMAVid to non-DMABUF video-decode-type ShadowFd" )); }; if let Some(ref folder) = glob.opts.debug_store_video { /* Debug option: all received video packets */ let mut full_path = folder.clone(); let filename = format!("packets-{}-{}", unistd::getpid(), remote_id); full_path.push(std::ffi::OsStr::new(&filename)); let mut logfile = std::fs::File::options() .create(true) .append(true) .open(full_path) .unwrap(); let packet = &msg[8..]; use std::io::Write; logfile.write_all(packet).unwrap(); } let DmabufDevice::Vulkan((_, ref vulk)) = glob.dmabuf_device else { unreachable!(); }; let task = VideoDecodeTask { msg: msg_view, remote_id, vulk: vulk.clone(), state: video_decode.clone(), }; tasksys .tasks .lock() .unwrap() .construct .push_back(WorkTask::VideoDecode(task)); tasksys.task_notify.notify_one(); data.pending_apply_tasks += 1; Ok(()) } WmsgType::OpenTimeline => { let DmabufDevice::Vulkan((_, ref vulk)) = glob.dmabuf_device else { return Err(tag!( "Received OpenTimeline before Vulkan device was set up" )); }; let start_pt = u64::from_le_bytes(msg[8..16].try_into().unwrap()); let (timeline, fd) = vulkan_create_timeline(vulk, start_pt)?; let sfd = Rc::new(RefCell::new(ShadowFd { remote_id, only_here: false, data: ShadowFdVariant::Timeline(ShadowFdTimeline { timeline, export_fd: Some(fd), debug_wayland_id: ObjId(0), releases: Vec::new(), }), })); glob.map.insert(remote_id, Rc::downgrade(&sfd)); glob.fresh.insert(remote_id, sfd); Ok(()) } WmsgType::SignalTimeline => { let sfd_handle = get_sfd(&glob.map, remote_id).ok_or("RID not in map")?; let mut sfd = sfd_handle.borrow_mut(); let ShadowFdVariant::Timeline(ref mut data) = sfd.data else { return Err(tag!("Applying SignalTimeline to non-timeline ShadowFd")); }; if glob.on_display_side { return Err(tag!("Received SignalTimeline on compositor side")); } let pt = u64::from_le_bytes(msg[8..16].try_into().unwrap()); prune_releases(&mut data.releases, pt, remote_id); data.timeline.signal_timeline_pt(pt) } WmsgType::OpenIRPipe | WmsgType::OpenIWPipe => { let (pipe_r, pipe_w) = unistd::pipe2(fcntl::OFlag::O_CLOEXEC | fcntl::OFlag::O_NONBLOCK) .map_err(|x| tag!("Failed to create pipe: {:?}", x))?; let (local_pipe, export_pipe) = if typ == WmsgType::OpenIRPipe { (pipe_r, pipe_w) } else { (pipe_w, pipe_r) }; let sfd = Rc::new(RefCell::new(ShadowFd { remote_id, only_here: false, data: ShadowFdVariant::Pipe(ShadowFdPipe { fd: local_pipe, export_fd: Some(export_pipe), buf: if typ == WmsgType::OpenIWPipe { 
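/* For an inherited-write pipe (OpenIWPipe), data arrives over the channel and is queued here until it can be written into the local pipe */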
ShadowFdPipeBuffer::ReadFromChannel(VecDeque::new()) } else { ShadowFdPipeBuffer::ReadFromWayland((Box::new([0; 4096]), 0)) }, program_closed: false, channel_closed: false, }), })); glob.map.insert(remote_id, Rc::downgrade(&sfd)); glob.pipes.push(sfd.clone()); glob.fresh.insert(remote_id, sfd); Ok(()) } WmsgType::PipeTransfer => { let Some(sfd_handle) = get_sfd(&glob.map, remote_id) else { debug!("PipeTransfer message directed to RID={}, which does not exist or was already closed and deleted. Ignoring message, because it may have been sent before the remote was notified.", remote_id); return Ok(()); }; let mut sfd = sfd_handle.borrow_mut(); let ShadowFdVariant::Pipe(data) = &mut sfd.data else { return Err(tag!("Applying PipeTransfer to non-Pipe ShadowFd")); }; if data.program_closed { debug!("Received transfer to pipe with program connection closed, dropping"); /* Silently ignore, no point in doing anything */ return Ok(()); } let add = &msg[8..]; let ShadowFdPipeBuffer::ReadFromChannel(ref mut x) = data.buf else { return Err(tag!( "Applying PipeTransfer to pipe ShadowFd not reading from channel" )); }; x.extend(add.iter()); Ok(()) } WmsgType::PipeShutdownR | WmsgType::PipeShutdownW => { let map = &mut glob.map; let mut delete = false; let Some(sfd_handle) = get_sfd(map, remote_id) else { debug!("Shutdown message directed to RID={}, which does not exist or was already closed and deleted. Ignoring message, because it may have been sent before the remote was notified.", remote_id); return Ok(()); }; let mut sfd = sfd_handle.borrow_mut(); let ShadowFdVariant::Pipe(data) = &mut sfd.data else { return Err(tag!("Applying PipeShutdown to non-Pipe ShadowFd")); }; // reading from channel => expect (remote) write shutdown: PipeShutdownW if let ShadowFdPipeBuffer::ReadFromChannel(ref v) = data.buf { if typ != WmsgType::PipeShutdownW { return Err(tag!( "Did not receive (remote) write shutdown when reading from channel" )); } data.channel_closed = true; if !v.is_empty() { // Need to flush all data; will close when that is complete debug!( "Received write shutdown for RID={}, {} bytes pending", remote_id, v.len() ); } else { // Nothing left to do; drop the FD debug!( "Received write shutdown for RID={}, nothing pending", remote_id ); delete = true; } } else { if typ != WmsgType::PipeShutdownR { return Err(tag!( "Did not receive (remote) read shutdown when writing to channel" )); } /* other side not receiving anything more */ debug!("Received read shutdown for RID={}", remote_id); data.channel_closed = true; delete = true; } drop(sfd); if delete { // TODO: make a drop_list for sfd items, like for collect_updates let pos = glob .pipes .iter() .position(|x| x.borrow().remote_id == remote_id) .unwrap(); glob.pipes.remove(pos); map.remove(&remote_id); } Ok(()) } _ => unreachable!("Unhandled message type: {:?}", typ), } } /** Maximum total length of the intervals on which a single diff or fill operation * should be performed */ const DIFF_CHUNKSIZE: u32 = 262144; /** Construct BufferFill messages for an entire shared memory file descriptor */ fn construct_fill_transfers( rid: Rid, bufsize: usize, mapping: &ExternalMapping, way_msg_output: &mut TransferQueue, compression: Compression, ) -> Result<(), String> { // TODO: parallelize (although this is lower priority, since oneshot transfers aren't that common) let div_intv = (0_u32, (bufsize / 64) as u32); let len = div_intv.1 - div_intv.0; let mut nshards = ceildiv(len, DIFF_CHUNKSIZE / 64); let trail_size = bufsize % 64; if nshards == 0 && trail_size > 0 { nshards = 1; } for i in 0..nshards {
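/* Each shard covers roughly DIFF_CHUNKSIZE bytes; for example, a 1 MiB buffer has 16384 64-byte blocks, and with DIFF_CHUNKSIZE = 262144 (4096 blocks per shard) this yields ceildiv(16384, 4096) = 4 shards. The last shard also absorbs the trailing (bufsize % 64) bytes. */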
let start = 64 * split_interval(div_intv.0, div_intv.1, nshards, i); let mut end = 64 * split_interval(div_intv.0, div_intv.1, nshards, i + 1); if i == nshards - 1 { end += trail_size as u32; } let space = (end - start) as usize; let mut output: Vec<u8> = vec![0; space]; // NOTE: to be fully safe, this copy should not be removed. // (Compressor would need to have read-once guarantee or something similar) copy_from_mapping(&mut output, mapping, start as usize); let compressed: Vec<u8> = match compression { Compression::None => output, Compression::Lz4(lvl) => { let mut ctx = lz4_make_cctx().unwrap(); lz4_compress_to_vec(&mut ctx, &output, lvl, 0, 0) } Compression::Zstd(lvl) => { let mut ctx = zstd_make_cctx().unwrap(); zstd_compress_to_vec(&mut ctx, &output, lvl, 0, 0) } }; let header = cat4x4( build_wmsg_header(WmsgType::BufferFill, (16 + compressed.len()) as usize).to_le_bytes(), rid.0.to_le_bytes(), start.to_le_bytes(), end.to_le_bytes(), ); way_msg_output.other_messages.push(Vec::from(header)); let pad = align4(compressed.len()) - compressed.len(); way_msg_output.other_messages.push(compressed); if pad > 0 { way_msg_output.other_messages.push(vec![0; pad]); } } Ok(()) } // TODO: produce an iterator instead? /** Partition a list of damaged intervals into smaller chunks, for processing on individual threads. * * Output: (ntrailing, interval lists) */ fn split_damage(intervals: &[(usize, usize)], buf_size: usize) -> (usize, Vec<Vec<(u32, u32)>>) { let final_chunk = 64 * (buf_size / 64); let mut net_len = 0; let mut has_trailing = false; for r in intervals.iter() { assert!(r.0 < r.1); assert!(r.0 % 64 == 0 && r.1 % 64 == 0); if r.1 > final_chunk { /* Last interval ended near trailing segment, and was rounded up */ if r.0 < buf_size { has_trailing = true; assert!(final_chunk >= r.0); net_len += final_chunk - r.0; } } else { net_len += r.1 - r.0; } } // TODO: net_len is _not_ a good worst-case cost measure; should add +8 for each disjoint // interval to account for diff segment headers; diffs may get >72/64=1.125 times longer than the // sum of segment lengths suggests let net_len: u32 = net_len.try_into().unwrap(); assert!(net_len % 64 == 0); if net_len == 0 { if has_trailing { return (buf_size % 64, vec![Vec::new()]); } else { return (0, Vec::new()); } } let nshards = ceildiv(net_len / 64, DIFF_CHUNKSIZE / 64); let trail_size = buf_size % 64; let mut intv_iter = intervals.iter(); let mut prev_interval: Option<(usize, usize)> = None; let mut parts = Vec::new(); for i in 0..nshards { let start = 64 * split_interval(0, net_len / 64, nshards, i); let end = 64 * split_interval(0, net_len / 64, nshards, i + 1); let mut remaining = end - start; let mut output = Vec::<(u32, u32)>::new(); while remaining > 0 { let mut cur = if let Some(x) = prev_interval { prev_interval = None; x } else if let Some(y) = intv_iter.next() { *y } else { /* nothing left */ break; }; if cur.1 > final_chunk { /* Last interval ended near trailing segment, and was rounded up */ if cur.0 >= final_chunk { /* Done, last interval started in trailing region and is covered * by that. Note: there _could_ be a few tiny damaged segments * after this one, which get dropped anyway.
*/ break; } /* Trim last interval at last complete chunk */ cur = (cur.0, final_chunk); } if (cur.1 - cur.0) as u32 <= remaining { remaining -= (cur.1 - cur.0) as u32; output.push((cur.0 as u32, cur.1 as u32)); } else { output.push((cur.0 as u32, cur.0 as u32 + remaining)); prev_interval = Some((cur.0 + remaining as usize, cur.1)); break; } } parts.push(output); } (if has_trailing { trail_size } else { 0 }, parts) } /** Check if a specific ShadowFd has been updated, and if so create the messages * or tasks required to replicate those updates. * * Return value: true = keep, false = delete the sfd */ fn collect_updates( sfd: &mut ShadowFd, way_msg_output: &mut TransferQueue, compression: Compression, tasksys: &TaskSystem, opts: &Options, ) -> Result<bool, String> { match &mut sfd.data { ShadowFdVariant::File(data) => { let first_visit = sfd.only_here; if sfd.only_here { // Send creation message let msg = cat3x4( build_wmsg_header(WmsgType::OpenFile, 12).to_le_bytes(), sfd.remote_id.0.to_le_bytes(), (data.buffer_size as u32).to_le_bytes(), ); way_msg_output.other_messages.push(Vec::from(msg)); data.remote_bufsize = data.buffer_size; sfd.only_here = false; } if data.remote_bufsize < data.buffer_size { assert!(!data.readonly); // send extend message let msg = cat3x4( build_wmsg_header(WmsgType::ExtendFile, 12).to_le_bytes(), sfd.remote_id.0.to_le_bytes(), (data.buffer_size as u32).to_le_bytes(), ); way_msg_output.other_messages.push(Vec::from(msg)); data.remote_bufsize = data.buffer_size; } if data.readonly { if !first_visit { /* file data should not be changed */ return Ok(true); } /* Construct 'Fill' transfers for entire buffer */ construct_fill_transfers( sfd.remote_id, data.buffer_size, &data.core.as_ref().unwrap().mapping, way_msg_output, compression, )?; return Ok(true); } let full_region = &[(0, align(data.buffer_size, 64))]; let damaged_intervals: &[(usize, usize)] = match &data.damage { Damage::Everything => full_region, Damage::Intervals(ref x) => &x[..], }; if damaged_intervals.is_empty() { /* Nothing to do here */ return Ok(true); } let (trail_len, parts) = split_damage(damaged_intervals, data.buffer_size); let nparts = parts.len(); for (i, output) in parts.into_iter().enumerate() { /* Note: this uses the property that the intervals are disjoint and provided * in ascending order */ if i == nparts - 1 { assert!(output.len() + trail_len > 0); } else { assert!(!output.is_empty()); } let region = if !output.is_empty() { Some((output.first().unwrap().0, output.last().unwrap().1)) } else { None }; let t = WorkTask::Diff(DiffTask { rid: sfd.remote_id, region, intervals: output, trailing: if i == nparts - 1 && trail_len > 0 { trail_len as u32 } else { 0 }, compression, target: data.core.as_ref().unwrap().clone(), }); way_msg_output.expected_recvd_msgs += 1; tasksys.tasks.lock().unwrap().construct.push_back(t); tasksys.task_notify.notify_one(); } /* Reset damage */ data.damage = Damage::Intervals(Vec::new()); Ok(true) } ShadowFdVariant::Dmabuf(data) => { if let Some(ref vid_enc) = data.video_encode { let DmabufImpl::Vulkan(ref buf) = data.buf else { unreachable!(); }; if sfd.only_here { let slice_data = dmabuf_slice_make_ideal( data.drm_format, buf.width, buf.height, buf.get_bpp(), ); let vid_flags: u32 = 0xff & (opts.video.format.unwrap() as u32); let msg = cat4x4( build_wmsg_header(WmsgType::OpenDMAVidDstV2, 16 + slice_data.len()) .to_le_bytes(), sfd.remote_id.0.to_le_bytes(), (buf.nominal_size(data.view_row_stride) as u32).to_le_bytes(), vid_flags.to_le_bytes(), );
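/* The OpenDMAVidDstV2 creation message is a 16-byte header (message type/length, RID, nominal buffer size, video format flags) followed by the dmabuf slice layout data pushed below. */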
way_msg_output.other_messages.push(Vec::from(msg)); way_msg_output.other_messages.push(Vec::from(slice_data)); sfd.only_here = false; } /* Get damage as a list of intervals */ let full_region = &[(0, align(buf.nominal_size(data.view_row_stride), 64))]; let damaged_intervals: &[(usize, usize)] = match &data.damage { Damage::Everything => full_region, Damage::Intervals(ref x) => &x[..], }; if damaged_intervals.is_empty() { /* Nothing to do here */ return Ok(true); } let mut acquires = Vec::new(); for acq in data.acquires.drain(..) { let (pt, sfd) = acq; let b = sfd.borrow_mut(); let ShadowFdVariant::Timeline(ref timeline_data) = &b.data else { panic!("Expected timeline sfd"); }; acquires.push((timeline_data.timeline.clone(), pt)); } let mut implicit_semaphore = None; let (implicit_sync_file, must_poll) = dmabuf_setup_implicit_sync(data)?; data.using_implicit_sync = false; if let Some(sync_file) = implicit_sync_file { /* Using implicit sync, so export a sync file to be waited on */ implicit_semaphore = Some(sync_file.export_binary_semaphore()?); } data.damage = Damage::Intervals(Vec::new()); let task = VideoEncodeTask { remote_id: sfd.remote_id, vulk: buf.vulk.clone(), state: vid_enc.clone(), acquires: ExplicitSyncAcquires { explicit: acquires, implicit: implicit_semaphore, }, }; let mut g = tasksys.tasks.lock().unwrap(); if must_poll { g.waiting_for_implicit_acquire .entry(sfd.remote_id) .or_default() .push(WorkTask::VideoEncode(task)); drop(g); } else { g.construct.push_back(WorkTask::VideoEncode(task)); drop(g); tasksys.task_notify.notify_one(); }; way_msg_output.expected_recvd_msgs += 1; return Ok(true); } let nominal_size = match data.buf { DmabufImpl::Vulkan(ref vulk_buf) => vulk_buf.nominal_size(data.view_row_stride), DmabufImpl::Gbm(ref gbm_buf) => gbm_buf.nominal_size(data.view_row_stride), }; if sfd.only_here { // Send creation message let (width, height, bpp) = match data.buf { DmabufImpl::Vulkan(ref vulk_buf) => { (vulk_buf.width, vulk_buf.height, vulk_buf.get_bpp()) } DmabufImpl::Gbm(ref gbm_buf) => { (gbm_buf.width, gbm_buf.height, gbm_buf.get_bpp()) } }; let slice_data = dmabuf_slice_make_ideal(data.drm_format, width, height, bpp); let msg = cat3x4( build_wmsg_header(WmsgType::OpenDMABUF, 12 + slice_data.len()).to_le_bytes(), sfd.remote_id.0.to_le_bytes(), (nominal_size as u32).to_le_bytes(), ); way_msg_output.other_messages.push(Vec::from(msg)); way_msg_output.other_messages.push(Vec::from(slice_data)); sfd.only_here = false; } /* Get damage as a list of intervals */ let full_region = &[(0, align(nominal_size, 64))]; let damaged_intervals: &[(usize, usize)] = match &data.damage { Damage::Everything => full_region, Damage::Intervals(ref x) => &x[..], }; if damaged_intervals.is_empty() { /* Nothing to do here */ return Ok(true); } // TODO: is it possible to extract all acquires unconditionally, in case there is no damage? let mut acquires = Vec::new(); for acq in data.acquires.drain(..) { let (pt, sfd) = acq; let b = sfd.borrow_mut(); let ShadowFdVariant::Timeline(ref timeline_data) = &b.data else { panic!("Expected timeline sfd"); }; acquires.push((timeline_data.timeline.clone(), pt)); } let (implicit_sync_file, must_poll) = dmabuf_setup_implicit_sync(data)?; data.using_implicit_sync = false; let copied = if let DmabufImpl::Gbm(ref mut gbm_buf) = data.buf { /* Copy out entire contents of buffer immediately and synchronously, do avoid * running into possible threading issues with libgbm. 
*/ let mut v = vec![0; nominal_size]; gbm_buf.copy_from_dmabuf(data.view_row_stride, &mut v)?; v } else { Vec::new() }; if data.first_damage || !copied.is_empty() { /* The first time _any_ damage is reported, do a fill transfer and * set up the mirror for future diffs. */ data.first_damage = false; if data.mirror.is_none() { /* Create mirror for use by future diff operations */ data.mirror = Some(Arc::new(Mirror::new(nominal_size, false)?)); } let div_intv = (0_u32, (nominal_size / 64) as u32); let len = div_intv.1 - div_intv.0; let nshards = ceildiv(len, DIFF_CHUNKSIZE / 64); let trail_size = nominal_size % 64; for i in 0..nshards { let start = 64 * split_interval(div_intv.0, div_intv.1, nshards, i); let mut end = 64 * split_interval(div_intv.0, div_intv.1, nshards, i + 1); if i == nshards - 1 { end += trail_size as u32; } let t = match data.buf { DmabufImpl::Vulkan(ref vulk_buf) => { let implicit_semaphore = if let Some(ref s) = implicit_sync_file { Some(s.export_binary_semaphore()?) } else { None }; WorkTask::FillDmabuf(FillDmabufTask { rid: sfd.remote_id, compression, region_start: start, region_end: end, mirror: data.mirror.clone(), dst: vulk_buf.clone(), view_row_stride: data.view_row_stride, acquires: ExplicitSyncAcquires { explicit: acquires.clone(), implicit: implicit_semaphore, }, }) } DmabufImpl::Gbm(_) => WorkTask::FillDmabuf2(FillDmabufTask2 { rid: sfd.remote_id, compression, region_start: start, region_end: end, mirror: data.mirror.clone(), wait_until: 0, read_buf: ReadDmabufResult::Shm(Vec::from( &copied[start as usize..end as usize], )), }), }; way_msg_output.expected_recvd_msgs += 1; let mut g = tasksys.tasks.lock().unwrap(); if must_poll { g.waiting_for_implicit_acquire .entry(sfd.remote_id) .or_default() .push(t); drop(g); } else { g.construct.push_back(t); drop(g); tasksys.task_notify.notify_one(); }; } } else { // Then make diff tasks that copy the _bound_ of the slice , // and run the diff routine inside it (against the mirror) // via diff_two let (trail_len, parts) = split_damage(damaged_intervals, nominal_size); let nparts = parts.len(); for (i, output) in parts.into_iter().enumerate() { /* Note: this uses the property that the intervals are disjoint and provided * in ascending order */ if i == nparts - 1 { assert!(output.len() + trail_len > 0); } else { assert!(!output.is_empty()); } let region = if !output.is_empty() { Some((output.first().unwrap().0, output.last().unwrap().1)) } else { None }; let trailing = if i == nparts - 1 && trail_len > 0 { trail_len as u32 } else { 0 }; let t = match data.buf { DmabufImpl::Vulkan(ref vulk_buf) => { let implicit_semaphore = if let Some(ref s) = implicit_sync_file { Some(s.export_binary_semaphore()?) } else { None }; WorkTask::DiffDmabuf(DiffDmabufTask { rid: sfd.remote_id, region, intervals: output, trailing, compression, mirror: data.mirror.as_ref().unwrap().clone(), img: vulk_buf.clone(), view_row_stride: data.view_row_stride, acquires: ExplicitSyncAcquires { explicit: acquires.clone(), implicit: implicit_semaphore, }, }) } DmabufImpl::Gbm(_) => todo!(), }; way_msg_output.expected_recvd_msgs += 1; let mut g = tasksys.tasks.lock().unwrap(); if must_poll { g.waiting_for_implicit_acquire .entry(sfd.remote_id) .or_default() .push(t); } else { g.construct.push_back(t); }; drop(g); tasksys.task_notify.notify_one(); } } data.damage = Damage::Intervals(Vec::new()); Ok(true) } ShadowFdVariant::Timeline(_data) => { // Send creation message if sfd.only_here { // TODO: which timeline value should be used? 
// (there might be updates between initial sending and this message) let pt: u64 = 0; let pt_val = pt.to_le_bytes(); let msg = cat4x4( build_wmsg_header(WmsgType::OpenTimeline, 16).to_le_bytes(), sfd.remote_id.0.to_le_bytes(), pt_val[..4].try_into().unwrap(), pt_val[4..].try_into().unwrap(), ); way_msg_output.other_messages.push(Vec::from(msg)); sfd.only_here = false; } Ok(true) } ShadowFdVariant::Pipe(data) => { let reading_from_channel = match data.buf { ShadowFdPipeBuffer::ReadFromWayland(_) => false, ShadowFdPipeBuffer::ReadFromChannel(_) => true, }; if sfd.only_here { // Send creation message let mtype = if reading_from_channel { WmsgType::OpenIRPipe } else { WmsgType::OpenIWPipe }; let msg = cat2x4( build_wmsg_header(mtype, 8).to_le_bytes(), sfd.remote_id.0.to_le_bytes(), ); way_msg_output.other_messages.push(Vec::from(msg)); sfd.only_here = false; debug!("Queueing message: {:?}", mtype); } // Send any data that has been read from the Wayland side if let ShadowFdPipeBuffer::ReadFromWayland((ref mut buf, ref mut len)) = data.buf { if *len > 0 { let sz = 8 + *len; let msg_header = cat2x4( build_wmsg_header(WmsgType::PipeTransfer, sz).to_le_bytes(), sfd.remote_id.0.to_le_bytes(), ); let pad = align4(*len) - *len; way_msg_output.other_messages.push(Vec::from(msg_header)); way_msg_output.other_messages.push(Vec::from(&buf[..*len])); if pad > 0 { way_msg_output.other_messages.push(vec![0; pad]); } debug!("Queueing message: {:?}", WmsgType::PipeTransfer); *len = 0; } } if data.program_closed { /* Cannot read nor write to the program side, nothing left to do */ // TODO: check data.fd is not None ? let ctype = if reading_from_channel { WmsgType::PipeShutdownR } else { WmsgType::PipeShutdownW }; let msg = cat2x4( build_wmsg_header(ctype, 8).to_le_bytes(), sfd.remote_id.0.to_le_bytes(), ); way_msg_output.other_messages.push(Vec::from(msg)); debug!("Queueing message: {:?}", ctype); // Remove this from the local map Ok(false) } else { Ok(true) } } } } /** Remove buffer release events which have occurred. * * Requires that none of the ShadowFds in the list are currently borrowed */ fn prune_releases( releases: &mut Vec<(u64, Rc<RefCell<ShadowFd>>)>, current_pt: u64, this_timeline: Rid, ) { releases.retain(|(pt, sfd)| { if *pt > current_pt { /* Keep, *pt has not occurred yet */ return true; } let mut c = sfd.borrow_mut(); let ShadowFdVariant::Dmabuf(ref mut dmabuf) = c.data else { panic!(); }; dmabuf.releases.remove(&(this_timeline, *pt)).unwrap(); false }); } /** Signal the timeline points in a list of buffer acquires */ pub fn signal_timeline_acquires( acquires: &mut Vec<(u64, Rc<RefCell<ShadowFd>>)>, ) -> Result<(), String> { for acq in acquires.drain(..)
{ let (pt, timeline) = acq; let c = timeline.borrow_mut(); let ShadowFdVariant::Timeline(ref timeline_data) = c.data else { panic!("expected timeline-type shadowfd"); }; debug!( "Signalling timeline acquire for {}, pt {}", timeline_data.debug_wayland_id, pt ); timeline_data.timeline.signal_timeline_pt(pt)?; } Ok(()) } /** Central logic to compute a diff on a shared memory region * * Returns: core diff length, trailing diff bytes */ fn diff_inner(task: &DiffTask, dst: &mut [u8]) -> Result<(u32, u32), String> { let buflen = task.target.mapping.get_u8().len(); /* Compute the diff */ let diff_len = if let Some((start, end)) = task.region { assert!(start % 64 == 0); assert!(end % 64 == 0); let mirror = task .target .mem_mirror .get_mut_range((start as usize)..(end as usize)) .ok_or("Failed to acquire mirror for diff")?; construct_diff( dst, &task.target.mapping, &task.intervals[..], mirror.data, start, ) as u32 } else { 0 }; let mut ntrailing: u32 = 0; if task.trailing > 0 { let trail_mirror = task .target .mem_mirror .get_mut_range((buflen - (task.trailing as usize))..buflen) .ok_or("Failed to acquire trailing mirror")?; let tail_changed = copy_tail_if_diff( &mut dst[(diff_len as usize)..], &task.target.mapping, task.trailing as usize, trail_mirror.data, ); if tail_changed { ntrailing = task.trailing; } } let region = task.region.unwrap_or((0, 0)); debug!( "{} mid diff task: {}..{},+{} -> diff {} {}", std::thread::current().name().unwrap_or(""), region.0, region.1, task.trailing, diff_len, ntrailing ); Ok((diff_len, ntrailing)) } /** Run a [DiffTask] */ fn run_diff_task(task: &DiffTask, cache: &mut ThreadCache) -> TaskResult { debug!( "{} running diff task: {}..{},+{}", std::thread::current().name().unwrap_or(""), task.region.unwrap_or((0, 0)).0, task.region.unwrap_or((0, 0)).1, task.trailing ); // Maximum space usage let mut diffspace = 0; for t in task.intervals.iter() { diffspace += 8 + t.1 - t.0; } let space = diffspace + task.trailing; let (mut msg, unpadded_len, diff_len, ntrailing) = match task.compression { Compression::None => { let mut buf: Vec<u8> = vec![0; align4(space as usize) + 16]; let (diff_len, ntrailing) = diff_inner(task, &mut buf[16..(16 + space as usize)])?; if diff_len == 0 && ntrailing == 0 { /* Null message */ return Ok(TaskOutput::Msg(Vec::new())); } let raw_len = (diff_len + ntrailing) as usize; (buf, 16 + raw_len, diff_len, ntrailing) } Compression::Lz4(_) | Compression::Zstd(_) => { cache .large .resize(std::cmp::max(cache.large.len(), space as usize), 0); let (diff_len, ntrailing) = diff_inner(task, &mut cache.large[..(space as usize)])?; if diff_len == 0 && ntrailing == 0 { /* Null message */ return Ok(TaskOutput::Msg(Vec::new())); } let raw_len = (diff_len + ntrailing) as usize; let nxt = comp_into_vec( task.compression, &mut cache.comp, &cache.large[..raw_len], 16, 4, )?; let sz = nxt.len() - 4 - 16; (nxt, 16 + sz, diff_len, ntrailing) } }; /* Discard extra data */ assert!(msg.len() >= align4(unpadded_len)); msg.truncate(align4(unpadded_len)); /* Set trailing padding */ for i in 0..(align4(unpadded_len) - unpadded_len) { msg[unpadded_len + i] = 0; } let header = cat4x4( build_wmsg_header(WmsgType::BufferDiff, unpadded_len).to_le_bytes(), task.rid.0.to_le_bytes(), diff_len.to_le_bytes(), ntrailing.to_le_bytes(), ); msg[..16].copy_from_slice(&header); Ok(TaskOutput::Msg(msg)) } /** Central logic to compute a diff using a buffer copied from a DMABUF * * Returns: core diff length, trailing diff bytes */ fn diff_dmabuf_inner(task: &DiffDmabufTask2, dst: &mut [u8]) ->
Result<(u32, u32), String> { let img_len = task.nominal_size; let data = match task.read_buf { ReadDmabufResult::Vulkan(ref buf) => { buf.prepare_read()?; buf.get_read_view().data } ReadDmabufResult::Shm(ref v) => &v[..], }; let mut dst_view = dst; let diff_len: u32 = if let Some((region_start, region_end)) = task.region { assert!(region_start % 64 == 0); assert!(region_end % 64 == 0); /* Compute the diff */ let mirror = task .mirror .get_mut_range((region_start as usize)..(region_end as usize)) .ok_or("Failed to acquire mirror for diff")?; let mut start: usize = 0; let mut diff_len: u32 = 0; for intv in task.intervals.iter() { let intv_len = (intv.1 - intv.0) as usize; let mirr_range = &mut mirror.data [((intv.0 - region_start) as usize)..((intv.1 - region_start) as usize)]; let mut diff_segment_len = construct_diff_segment_two( dst_view, &data[start..start + intv_len], mirr_range, intv.0, 32, // skip gaps of size 4*32; every individual transfer is _expensive_ ); if false { // test: copy entire damaged region dst_view[..4].copy_from_slice((intv.0 / 4).to_le_bytes().as_slice()); dst_view[4..8].copy_from_slice((intv.1 / 4).to_le_bytes().as_slice()); dst_view[8..8 + intv_len].copy_from_slice(&data[start..start + intv_len]); diff_segment_len = (intv_len + 8) as u32; } dst_view = &mut std::mem::take(&mut dst_view)[diff_segment_len as usize..]; diff_len += diff_segment_len; start += intv_len; } assert!(start + (task.trailing as usize) == data.len()); diff_len } else { 0 }; let mut ntrailing: u32 = 0; if task.trailing > 0 { let trail_mirror = task .mirror .get_mut_range((img_len - (task.trailing as usize))..img_len) .ok_or("Failed to acquire trailing mirror")?; let tail_segment: &[u8] = &data[data.len() - task.trailing as usize..]; assert!(tail_segment.len() == trail_mirror.data.len()); if tail_segment != trail_mirror.data { trail_mirror.data.copy_from_slice(tail_segment); dst_view[..task.trailing as usize].copy_from_slice(tail_segment); ntrailing = task.trailing; } } debug!( "{} mid dmabuf diff task: {}..{},+{} -> diff {} {}", std::thread::current().name().unwrap_or(""), task.region.unwrap_or((0, 0)).0, task.region.unwrap_or((0, 0)).1, task.trailing, diff_len, ntrailing ); Ok((diff_len, ntrailing)) } /** Run a [DiffDmabufTask] */ fn run_diff_dmabuf_task( task: DiffDmabufTask, cache: &mut ThreadCache, ) -> Result<DiffDmabufTask2, String> { debug!( "{} running diff task for dmabuf: {}..{},+{}", std::thread::current().name().unwrap_or(""), task.region.unwrap_or((0, 0)).0, task.region.unwrap_or((0, 0)).1, task.trailing ); let mut segments = Vec::new(); let mut start = 0; for intv in task.intervals.iter() { segments.push((start, intv.0, intv.1)); start += intv.1 - intv.0; } let nom_len = task.img.nominal_size(task.view_row_stride); if task.trailing > 0 { segments.push((start, (nom_len as u32) - task.trailing, (nom_len as u32))); } let pool: &Arc = cache.get_cmd_pool(&task.img.vulk)?; let buf_len = start + task.trailing; let read_buf = Arc::new(vulkan_get_buffer(&task.img.vulk, buf_len as usize, true)?); /* Extract data into staging buffer */ let handle = start_copy_segments_from_dmabuf( &task.img, &read_buf, pool, &segments[..], task.view_row_stride, &task.acquires.explicit[..], task.acquires.implicit.as_slice(), )?; let pt = handle.get_timeline_point(); cache.copy_ops.push(handle); Ok(DiffDmabufTask2 { rid: task.rid, compression: task.compression, region: task.region, intervals: task.intervals, trailing: task.trailing, wait_until: pt, read_buf: ReadDmabufResult::Vulkan(read_buf), mirror: task.mirror, nominal_size:
task.img.nominal_size(task.view_row_stride), }) } /** Run a [DiffDmabufTask2] */ fn run_diff_dmabuf_task_2(task: DiffDmabufTask2, cache: &mut ThreadCache) -> TaskResult { // Maximum space usage let mut diffspace = 0; for t in task.intervals.iter() { diffspace += 8 + t.1 - t.0; } let space = diffspace + task.trailing; // TODO: if view_row_stride is not None, there may be padding bytes that need to be zeroed; // Vulkan does not guarantee anything with them; scan intervals to do this, and account for // edge propagation let (mut msg, unpadded_len, diff_len, ntrailing) = match task.compression { Compression::None => { let mut buf: Vec<u8> = vec![0; align4(space as usize) + 16]; let (diff_len, ntrailing) = diff_dmabuf_inner(&task, &mut buf[16..(16 + space as usize)])?; if diff_len == 0 && ntrailing == 0 { /* Null message */ return Ok(TaskOutput::Msg(Vec::new())); } let raw_len = (diff_len + ntrailing) as usize; (buf, 16 + raw_len, diff_len, ntrailing) } Compression::Lz4(_) | Compression::Zstd(_) => { cache .large .resize(std::cmp::max(cache.large.len(), space as usize), 0); let (diff_len, ntrailing) = diff_dmabuf_inner(&task, &mut cache.large[..(space as usize)])?; if diff_len == 0 && ntrailing == 0 { /* Null message */ return Ok(TaskOutput::Msg(Vec::new())); } let raw_len = (diff_len + ntrailing) as usize; let nxt = comp_into_vec( task.compression, &mut cache.comp, &cache.large[..raw_len], 16, 4, )?; let sz = nxt.len() - 4 - 16; (nxt, 16 + sz, diff_len, ntrailing) } }; /* Discard extra data */ assert!(msg.len() >= align4(unpadded_len)); msg.truncate(align4(unpadded_len)); /* Set trailing padding */ for i in 0..(align4(unpadded_len) - unpadded_len) { msg[unpadded_len + i] = 0; } let header = cat4x4( build_wmsg_header(WmsgType::BufferDiff, unpadded_len).to_le_bytes(), task.rid.0.to_le_bytes(), diff_len.to_le_bytes(), ntrailing.to_le_bytes(), ); msg[..16].copy_from_slice(&header); Ok(TaskOutput::Msg(msg)) } impl ThreadCache { fn get_cmd_pool<'a>( &'a mut self, vulk: &Arc, ) -> Result<&'a Arc, String> { if self.cmd_pool.is_none() { let p = vulkan_get_cmd_pool(vulk)?; self.cmd_pool = Some(p); } Ok(self.cmd_pool.as_ref().unwrap()) } } impl ThreadCacheComp { fn get_lz4_cctx(&mut self) -> Result<&mut LZ4CCtx, String> { if self.lz4_c.is_none() { self.lz4_c = Some( lz4_make_cctx().ok_or_else(|| tag!("Failed to make LZ4 compression context"))?, ); } Ok(self.lz4_c.as_mut().unwrap()) } fn get_zstd_cctx(&mut self) -> Result<&mut ZstdCCtx, String> { if self.zstd_c.is_none() { self.zstd_c = Some( zstd_make_cctx().ok_or_else(|| tag!("Failed to make Zstd compression context"))?, ); } Ok(self.zstd_c.as_mut().unwrap()) } fn get_zstd_dctx(&mut self) -> Result<&mut ZstdDCtx, String> { if self.zstd_d.is_none() { self.zstd_d = Some( zstd_make_dctx().ok_or_else(|| tag!("Failed to make Zstd decompression context"))?, ); } Ok(self.zstd_d.as_mut().unwrap()) } } /** Run a [FillDmabufTask] */ fn run_fill_dmabuf_task( task: FillDmabufTask, cache: &mut ThreadCache, ) -> Result<FillDmabufTask2, String> { debug!( "{} running fill task: {}..{}", std::thread::current().name().unwrap_or(""), task.region_start, task.region_end, ); let pool: &Arc = cache.get_cmd_pool(&task.dst.vulk)?; let space = task.region_end - task.region_start; let read_buf = Arc::new(vulkan_get_buffer(&task.dst.vulk, space as usize, true)?); /* Extract data into staging buffer */ let handle = start_copy_segments_from_dmabuf( &task.dst, &read_buf, pool, &[(0, task.region_start, task.region_end)], task.view_row_stride, &task.acquires.explicit[..], task.acquires.implicit.as_slice(), )?; let
pt = handle.get_timeline_point(); cache.copy_ops.push(handle); Ok(FillDmabufTask2 { rid: task.rid, compression: task.compression, region_start: task.region_start, region_end: task.region_end, mirror: task.mirror, read_buf: ReadDmabufResult::Vulkan(read_buf), wait_until: pt, }) } /** Run a [FillDmabufTask2] */ fn run_dmabuf_fill_task_2(task: FillDmabufTask2, cache: &mut ThreadCache) -> TaskResult { let data = match task.read_buf { ReadDmabufResult::Vulkan(ref buf) => { buf.prepare_read()?; buf.get_read_view().data } ReadDmabufResult::Shm(ref v) => &v[..], }; // TODO: parallelizable sub tasks, after 'prepare_read' is done: compress staging buffer (critical path), // and copy to mirror (not so critical, but must complete before next fill/diff op in region) if let Some(mir) = &task.mirror { let range = mir .get_mut_range(task.region_start as usize..task.region_end as usize) .ok_or("Failed to acquire mirror range")?; range.data.copy_from_slice(data); } let mut msg: Vec<u8> = comp_into_vec(task.compression, &mut cache.comp, data, 16, 4)?; let msg_len = msg.len() - 4; msg.truncate(align4(msg_len)); let header = cat4x4( build_wmsg_header(WmsgType::BufferFill, msg_len as usize).to_le_bytes(), task.rid.0.to_le_bytes(), task.region_start.to_le_bytes(), task.region_end.to_le_bytes(), ); msg[..16].copy_from_slice(&header); Ok(TaskOutput::Msg(msg)) } /** The result of a message decompression task */ enum DecompReturn { Shm(ApplyTask), Dmabuf((u64, Rid, Option<ApplyTask>)), } /** Return whether an interval is aligned with texel boundaries (multiples of `bpp`) */ fn is_segment_texel_aligned(start: usize, end: usize, bpp: usize) -> bool { start % bpp == 0 && end % bpp == 0 } /** Decompress the input `src` with the specified compression method into `dst`, which * must have exactly the decompressed size of the input. */ fn decomp_into_slice( comp: Compression, cache: &mut ThreadCacheComp, src: &[u8], dst: &mut [u8], ) -> Result<(), String> { match comp { Compression::None => dst.copy_from_slice(src), Compression::Lz4(_) => { lz4_decompress_to_slice(src, dst).ok_or_else(|| tag!("Failed to decompress"))? } Compression::Zstd(_) => zstd_decompress_to_slice(cache.get_zstd_dctx()?, src, dst) .ok_or_else(|| tag!("Failed to decompress"))?, }; Ok(()) } /** Decompress the input `src` with the specified compression method and return the result. * * note: this copies the input when the compression mode is None, which is slightly inefficient */ fn decomp_into_vec( comp: Compression, cache: &mut ThreadCacheComp, src: &[u8], uncomp_len: usize, ) -> Result<Vec<u8>, String> { Ok(match comp { Compression::None => { assert!(src.len() == uncomp_len); Vec::from(src) } Compression::Lz4(_) => { lz4_decompress_to_vec(src, uncomp_len).ok_or_else(|| tag!("Failed to decompress"))? } Compression::Zstd(_) => zstd_decompress_to_vec(cache.get_zstd_dctx()?, src, uncomp_len) .ok_or_else(|| tag!("Failed to decompress"))?, }) } /** Compress the input `src` with the specified compression method and return the result.
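* The output is padded with `pad_pre` leading and `pad_post` trailing zero bytes, leaving room for the caller to write a message header and alignment padding in place.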
* * note: this copies the input when the compression mode is None, which is slightly inefficient; * instead, use a fast path which avoids comp=None entirely */ fn comp_into_vec( comp: Compression, cache: &mut ThreadCacheComp, src: &[u8], pad_pre: usize, pad_post: usize, ) -> Result<Vec<u8>, String> { Ok(match comp { Compression::None => { let mut v = vec![0; pad_pre]; v.extend_from_slice(src); v.resize(pad_pre + src.len() + pad_post, 0); v } Compression::Lz4(lvl) => { lz4_compress_to_vec(cache.get_lz4_cctx()?, src, lvl, pad_pre, pad_post) } Compression::Zstd(lvl) => { zstd_compress_to_vec(cache.get_zstd_cctx()?, src, lvl, pad_pre, pad_post) } }) } /** Run a [DecompTask] */ fn run_decomp_task(task: &DecompTask, cache: &mut ThreadCache) -> Result<DecompReturn, String> { let msg = task.msg_view.get(); let header = u32::from_le_bytes(msg[0..4].try_into().unwrap()); let (len, t) = parse_wmsg_header(header).unwrap(); let remote_id = Rid(i32::from_le_bytes(msg[4..8].try_into().unwrap())); if t == WmsgType::BufferDiff { let diff_size = u32::from_le_bytes(msg[8..12].try_into().unwrap()); let ntrailing = u32::from_le_bytes(msg[12..16].try_into().unwrap()); match &task.target { DecompTarget::Dmabuf(target) => { let decomp_len = (diff_size + ntrailing) as usize; // Note: this is 'read-optimized', since the diff span and segment calculation // depend on the buffer contents. TODO: any better solution? let write_buf = Arc::new(vulkan_get_buffer(&target.dst.vulk, decomp_len, true)?); let write_view = write_buf.get_write_view(); decomp_into_slice( task.compression, &mut cache.comp, &msg[16..len], write_view.data, )?; drop(write_view); write_buf.complete_write()?; /* Compute region from diff */ let reread_view = write_buf.get_read_view(); let (region_start, region_end) = compute_diff_span(reread_view.data, ntrailing as usize, task.file_size)?; let mut misaligned: bool = false; let bpp = target.dst.get_bpp(); let mut segments: Vec<(u32, u32, u32)> = Vec::new(); let mut pos: usize = 0; while pos + 8 <= (diff_size as usize) { let span_start = u32::from_le_bytes(reread_view.data[pos..pos + 4].try_into().unwrap()); let span_end = u32::from_le_bytes(reread_view.data[pos + 4..pos + 8].try_into().unwrap()); if (4 * span_start) % bpp != 0 || (4 * span_end) % bpp != 0 { misaligned = true; } pos += 8; /* Copy start location is just after the header */ segments.push((pos as u32, 4 * span_start, 4 * span_end)); pos += (span_end - span_start) as usize * 4; } assert!(pos == diff_size as usize); if ntrailing > 0 { segments.push(( diff_size, task.file_size as u32 - ntrailing, task.file_size as u32, )); } if !misaligned { /* Fast path: all updates texel aligned, apply buffer immediately */ let pool: &Arc = cache.get_cmd_pool(&target.dst.vulk)?; let copy_handle: VulkanCopyHandle = start_copy_segments_onto_dmabuf( &target.dst, &write_buf, pool, &segments[..], target.view_row_stride, &[], )?; let copy_id = copy_handle.get_timeline_point(); /* Store copy handle immediately to avoid dropping it early on error return */ cache.copy_ops.push(copy_handle); let apply_task = if let Some(mir) = &target.mirror.as_ref() { // TODO: avoid a copy by passing the Arc as data let data = Vec::from(reread_view.data); Some(ApplyTask { sequence: task.sequence.unwrap(), data, is_diff_type: true, ntrailing: ntrailing as usize, target: ApplyTaskTarget::MirrorOnly(DecompTaskMirror { mirror: (*mir).clone(), notify_on_completion: false, }), region_start, region_end, remote_id, }) } else { None }; Ok(DecompReturn::Dmabuf((copy_id, remote_id, apply_task))) } else { /* Slow path: to
write entire pixels, must acquire the remainder of the pixel * data if there was no change; this requires reading from the mirror, which * needs to be delayed to avoid races */ debug!("Using slow path for diff application, a segment is not pixel aligned"); let diff = decomp_into_vec( task.compression, &mut cache.comp, &msg[16..len], decomp_len, )?; let b = target.dst.get_bpp() as usize; let (ext_start, ext_end) = (b * (region_start / b), align(region_end, b)); /* The new interval might overlap */ Ok(DecompReturn::Shm(ApplyTask { sequence: task.sequence.unwrap(), data: diff, is_diff_type: true, ntrailing: ntrailing as usize, target: ApplyTaskTarget::Dmabuf(ApplyTaskDmabuf { target: target.clone(), orig_start: region_start, orig_end: region_end, }), region_start: ext_start, region_end: ext_end, remote_id, })) } } DecompTarget::File(target) => { let diff = decomp_into_vec( task.compression, &mut cache.comp, &msg[16..len], (diff_size + ntrailing) as usize, )?; /* Compute region from diff */ let (region_start, region_end) = compute_diff_span(&diff, ntrailing as usize, task.file_size)?; Ok(DecompReturn::Shm(ApplyTask { sequence: task.sequence.unwrap(), data: diff, is_diff_type: true, ntrailing: ntrailing as usize, target: ApplyTaskTarget::Shm(DecompTaskFile { skip_mirror: target.skip_mirror, target: target.target.clone(), }), region_start, region_end, remote_id, })) } DecompTarget::MirrorOnly(target) => { let diff = decomp_into_vec( task.compression, &mut cache.comp, &msg[16..len], (diff_size + ntrailing) as usize, )?; /* Compute region from diff */ let (region_start, region_end) = compute_diff_span(&diff, ntrailing as usize, task.file_size)?; Ok(DecompReturn::Shm(ApplyTask { sequence: task.sequence.unwrap(), data: diff, is_diff_type: true, ntrailing: ntrailing as usize, target: ApplyTaskTarget::MirrorOnly(target.clone()), region_start, region_end, remote_id, })) } } } else if t == WmsgType::BufferFill { let region_start = u32::from_le_bytes(msg[8..12].try_into().unwrap()) as usize; let region_end = u32::from_le_bytes(msg[12..16].try_into().unwrap()) as usize; if region_end <= region_start { return Err(tag!("Invalid region: {} {}", region_start, region_end)); } let reg_len = region_end - region_start; match &task.target { DecompTarget::Dmabuf(target) => { // TODO: create a 'pending_writes' state region of the VulkanDmabuf to // verify that there are no overlapping segments with other tasks if is_segment_texel_aligned(region_start, region_end, target.dst.get_bpp() as usize) { /* Fast path: decompress into write-buf, copy immediately to image, and copy to mirror afterwards */ let write_buf = Arc::new(vulkan_get_buffer(&target.dst.vulk, reg_len, true)?); let write_view = write_buf.get_write_view(); decomp_into_slice( task.compression, &mut cache.comp, &msg[16..len], write_view.data, )?; drop(write_view); write_buf.complete_write()?; let pool: &Arc = cache.get_cmd_pool(&target.dst.vulk)?; let copy_handle: VulkanCopyHandle = start_copy_segments_onto_dmabuf( &target.dst, &write_buf, pool, &[(0, region_start as u32, region_end as u32)], target.view_row_stride, &[], )?; let copy_id = copy_handle.get_timeline_point(); cache.copy_ops.push(copy_handle); /* Delay the mirror application to an ApplyTask, where disjointness is guaranteed */ let mir_task = if let Some(mir) = &target.mirror.as_ref() { // TODO: is repeating the decompression worth it to get a better memory // type for the critical path / not need to read back here? 
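/* Copy the decompressed bytes back out of the staging buffer, so the deferred ApplyTask can update the mirror after the copy has been scheduled */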
let reread_view = write_buf.get_read_view(); let data = Vec::from(reread_view.data); // let v = mir // .get_mut_range(region_start..region_end) // .ok_or_else(|| tag!("Failed to get mirror segment"))?; // v.data.copy_from_slice(reread_view.data); Some(ApplyTask { sequence: task.sequence.unwrap(), data, is_diff_type: false, ntrailing: 0, target: ApplyTaskTarget::MirrorOnly(DecompTaskMirror { mirror: (*mir).clone(), notify_on_completion: false, }), region_start, region_end, remote_id, }) } else { None }; Ok(DecompReturn::Dmabuf((copy_id, remote_id, mir_task))) } else { /* Slow path: decompress, and when the aligned region is available, * copy onto mirror and writebuf, and then onto image. (This might produce * races on the border texels, but those are probably hard to see) */ debug!( "Using slow path for fill application {}..{} is not bpp={} aligned", region_start, region_end, target.dst.get_bpp() ); let fill = decomp_into_vec(task.compression, &mut cache.comp, &msg[16..len], reg_len)?; let b = target.dst.get_bpp() as usize; let (ext_start, ext_end) = (b * (region_start / b), align(region_end, b)); /* The new interval might overlap */ Ok(DecompReturn::Shm(ApplyTask { sequence: task.sequence.unwrap(), data: fill, is_diff_type: false, ntrailing: 0, target: ApplyTaskTarget::Dmabuf(ApplyTaskDmabuf { target: target.clone(), orig_start: region_start, orig_end: region_end, }), region_start: ext_start, region_end: ext_end, remote_id, })) } } DecompTarget::File(target) => { let fill = decomp_into_vec(task.compression, &mut cache.comp, &msg[16..len], reg_len)?; Ok(DecompReturn::Shm(ApplyTask { sequence: task.sequence.unwrap(), data: fill, is_diff_type: false, ntrailing: 0, target: ApplyTaskTarget::Shm(DecompTaskFile { skip_mirror: target.skip_mirror, target: target.target.clone(), }), region_start, region_end, remote_id, })) } DecompTarget::MirrorOnly(target) => { let fill = decomp_into_vec(task.compression, &mut cache.comp, &msg[16..len], reg_len)?; Ok(DecompReturn::Shm(ApplyTask { sequence: task.sequence.unwrap(), data: fill, is_diff_type: false, ntrailing: 0, target: ApplyTaskTarget::MirrorOnly(target.clone()), region_start, region_end, remote_id, })) } } } else { unreachable!(); } } /** Run an [ApplyTask] */ fn run_apply_task(task: &ApplyTask, cache: &mut ThreadCache) -> TaskResult { match task.target { ApplyTaskTarget::MirrorOnly(ref d) => { if task.is_diff_type { let v = d .mirror .get_mut_range(task.region_start..task.region_end) .ok_or_else(|| { tag!( "Failed to get mirror segment {}..{} from mirror of length {}", task.region_start, task.region_end, d.mirror.len(), ) })?; apply_diff_one(&task.data, task.ntrailing, task.region_start, v.data)?; } else { let v = d .mirror .get_mut_range(task.region_start..task.region_end) .ok_or_else(|| { tag!( "Failed to get mirror segment {}..{} from mirror of length {}", task.region_start, task.region_end, d.mirror.len(), ) })?; v.data.copy_from_slice(&task.data); } if d.notify_on_completion { Ok(TaskOutput::ApplyDone(task.remote_id)) } else { Ok(TaskOutput::MirrorApply) } } ApplyTaskTarget::Dmabuf(ref d) => { assert!( task.region_start <= d.orig_start && d.orig_start <= d.orig_end && d.orig_end <= task.region_end ); let copy_handle = if task.is_diff_type { let mirror = d.target.mirror.as_ref().unwrap(); let v = mirror .get_mut_range(task.region_start..task.region_end) .ok_or_else(|| { tag!( "Failed to get mirror segment {}..{}", task.region_start, task.region_end ) })?; /* Apply diff to mirror */ apply_diff_one( &task.data, task.ntrailing, d.orig_start, &mut 
v.data [(d.orig_start - task.region_start)..(d.orig_end - task.region_start)], )?; // TODO: instead of copying entire region, perform a properly pixel aligned diff let write_len = task.region_end - task.region_start; let write_buf = Arc::new(vulkan_get_buffer(&d.target.dst.vulk, write_len, false)?); let write_view = write_buf.get_write_view(); write_view.data.copy_from_slice(v.data); drop(write_view); write_buf.complete_write()?; let pool: &Arc = cache.get_cmd_pool(&d.target.dst.vulk)?; let copy_handle: VulkanCopyHandle = start_copy_segments_onto_dmabuf( &d.target.dst, &write_buf, pool, &[(0, task.region_start as u32, task.region_end as u32)], d.target.view_row_stride, &[], )?; copy_handle } else { let mirror = d.target.mirror.as_ref().unwrap(); let v = mirror .get_mut_range(task.region_start..task.region_end) .ok_or_else(|| { tag!( "Failed to get mirror segment: {}..{} (orig {}..{}); bufsize {}", task.region_start, task.region_end, d.orig_start, d.orig_end, d.target.dst.nominal_size(d.target.view_row_stride) ) })?; let write_len = task.region_end - task.region_start; let write_buf = Arc::new(vulkan_get_buffer(&d.target.dst.vulk, write_len, false)?); let write_view = write_buf.get_write_view(); write_view.data[0..d.orig_start - task.region_start] .copy_from_slice(&v.data[0..d.orig_start - task.region_start]); write_view.data[d.orig_start - task.region_start..(d.orig_end - task.region_start)] .copy_from_slice(&task.data); write_view.data [(d.orig_end - task.region_start)..(task.region_end - task.region_start)] .copy_from_slice( &v.data[(d.orig_end - task.region_start) ..(task.region_end - task.region_start)], ); drop(write_view); write_buf.complete_write()?; let pool: &Arc = cache.get_cmd_pool(&d.target.dst.vulk)?; let copy_handle: VulkanCopyHandle = start_copy_segments_onto_dmabuf( &d.target.dst, &write_buf, pool, &[(0, task.region_start as u32, task.region_end as u32)], d.target.view_row_stride, &[], )?; /* Update mirror */ v.data[d.orig_start - task.region_start..(d.orig_end - task.region_start)] .copy_from_slice(&task.data); copy_handle }; let copy_id = copy_handle.get_timeline_point(); cache.copy_ops.push(copy_handle); Ok(TaskOutput::DmabufApplyOp((copy_id, task.remote_id))) } ApplyTaskTarget::Shm(ref d) => { if task.is_diff_type { debug!( "Applying diff: seq: {} len: {} ntrailing {} region {}..{} buflen {}", task.sequence, task.data.len(), task.ntrailing, task.region_start, task.region_end, d.target.mapping.get_u8().len() ); /* Ask just for required range; if scheduling was done right, this should succeed */ let m = d .target .mem_mirror .get_mut_range(task.region_start..task.region_end) .ok_or_else(|| { tag!( "Failed to acquire mirror range {}..{} when applying", task.region_start, task.region_end ) })?; apply_diff( &task.data, task.ntrailing, // todo: specify interval start/end &d.target.mapping, task.region_start, m.data, )?; } else { debug!( "Applying fill: seq: {} len: {} region {}..{} buflen {}", task.sequence, task.data.len(), task.region_start, task.region_end, d.target.mapping.get_u8().len() ); copy_onto_mapping(&task.data, &d.target.mapping, task.region_start); if !d.skip_mirror { let m = d .target .mem_mirror .get_mut_range(task.region_start..task.region_end) .ok_or_else(|| { tag!( "Failed to acquire mirror range {}..{} when applying", task.region_start, task.region_end ) })?; m.data.copy_from_slice(&task.data); } } Ok(TaskOutput::ApplyDone(task.remote_id)) } } } /** Run a [VideoEncodeTask] */ fn run_encode_task(task: VideoEncodeTask, cache: &mut ThreadCache) -> TaskResult 
{ let pool: &Arc = cache.get_cmd_pool(&task.vulk)?; let packet = start_dmavid_encode( &task.state, pool, &task.acquires.explicit[..], task.acquires.implicit.as_slice(), )?; let npadding = align4(packet.len()) - packet.len(); let update_header = cat2x4( build_wmsg_header(WmsgType::SendDMAVidPacket, 8 + packet.len()).to_le_bytes(), task.remote_id.0.to_le_bytes(), ); // TODO: reduce number of copies let mut msg = Vec::from(update_header); msg.extend_from_slice(&packet); if npadding > 0 { msg.extend_from_slice(&vec![0; npadding]); } Ok(TaskOutput::Msg(msg)) } /** Run a [VideoDecodeTask] */ fn run_decode_task(task: VideoDecodeTask, cache: &mut ThreadCache) -> TaskResult { let pool: &Arc = cache.get_cmd_pool(&task.vulk)?; let msg = &task.msg.get(); let (len, _t) = parse_wmsg_header(u32::from_le_bytes(msg[..4].try_into().unwrap())).unwrap(); let packet = &msg[8..len]; let decode_handle = start_dmavid_apply(&task.state, pool, packet)?; let completion_point = decode_handle.get_timeline_point(); cache.decode_ops.push(decode_handle); Ok(TaskOutput::DmabufApplyOp(( completion_point, task.remote_id, ))) } /** Process messages received from the channel to the other Waypipe instance */ fn process_channel( chan_msg: &mut FromChannel, glob: &mut Globals, tasksys: &TaskSystem, ) -> Result<(), String> { debug!("Process channel"); loop { let Some(ref mut msg_view) = &mut chan_msg.next_msg else { /* No more messages */ break; }; let data = msg_view.get_mut(); let header = u32::from_le_bytes(data[0..4].try_into().unwrap()); let (length, typ) = parse_wmsg_header(header) .ok_or_else(|| tag!("Failed to parse wmsg header: {:x}", header))?; if typ != WmsgType::Close && typ != WmsgType::AckNblocks && typ != WmsgType::Restart { chan_msg.message_counter += 1; } /* the message, without padding */ let msg = &mut data[..length]; debug!("Received {:?} message of length {}", typ, length); let is_first = if !glob.has_first_message { glob.has_first_message = true; true } else { false }; match typ { WmsgType::Version => { let version = u32::from_le_bytes(data[4..8].try_into().unwrap()); if version > WAYPIPE_PROTOCOL_VERSION { return Err(tag!( "waypipe client replied with larger protocol version ({}) than requested ({})", version, WAYPIPE_PROTOCOL_VERSION )); } if version < MIN_PROTOCOL_VERSION { return Err(tag!( "waypipe client requested too small of a version: {}", version )); } if !is_first { return Err(tag!( "Version message must be the first sent by waypipe-client" )); } if glob.on_display_side { return Err(tag!("waypipe-server should not send Version message")); } glob.wire_version = version; debug!("Wire version has been set to: {}", version); } WmsgType::InjectRIDs => { /* note: this should immediately precede the 'protocol' */ if length % 4 != 0 { return Err(tag!("InjectRIDs length {} not divisible by four", length)); } let nnew = (length - 4) / 4; for i in 0..nnew { let rid = Rid(i32::from_le_bytes( msg[4 + 4 * i..8 + 4 * i].try_into().unwrap(), )); /* The ShadowFd with this RID needs to be created at this point */ let sfd = glob.fresh.remove(&rid).ok_or_else(|| { tag!( "Injecting RID {} which has no matching created ShadowFd", rid ) })?; chan_msg.rid_queue.push_back(sfd); } } WmsgType::Protocol => { /* sanity check: only append to channel that has not started writing */ assert!(chan_msg.output.start == 0); let mut msg_region = &msg[4..]; if msg_region.is_empty() { debug!("Note: received empty protocol message"); } while !msg_region.is_empty() { if msg_region.len() < 8 { return Err(tag!("Truncated Wayland message 
inside Protocol message")); } let header2 = u32::from_le_bytes(msg_region[4..8].try_into().unwrap()); let length = (header2 >> 16) as usize; if length < 8 || length % 4 != 0 || length > msg_region.len() { return Err(tag!("invalid Wayland message: bad length field: {} (compare region length {})", length, msg_region.len())); } let (waymsg, tail) = msg_region.split_at(length); let mut chan_out_tail = &mut chan_msg.output.data[chan_msg.output.len..]; let orig_tail_len = chan_out_tail.len(); let aux = TranslationInfo::FromChannel(( &mut chan_msg.rid_queue, &mut chan_msg.output.fds, )); let ret = process_way_msg(waymsg, &mut chan_out_tail, aux, glob)?; let new_dst_len = chan_out_tail.len(); match ret { ProcMsg::Done => { log_way_msg_output( waymsg, &chan_msg.output.data [chan_msg.output.len..chan_msg.output.data.len() - new_dst_len], &glob.objects, glob.on_display_side, ); } ProcMsg::WaitFor(r) => { chan_msg.waiting_for = Some(r); break; } ProcMsg::NeedsSpace((nbytes, nfds)) => { if nbytes > chan_msg.output.data.len() || nfds > MAX_OUTGOING_FDS { /* Output message or messages are too large to send */ return Err(tag!("Failed to send message(s): not enough space, ({}, {}) vs ({}, {})", nbytes, nfds, chan_msg.output.data.len(), MAX_OUTGOING_FDS)); } debug!("Skipping last message: not enough space"); // Edit the protocol message in place, and stop processing. break; } } // update amount written chan_msg.output.len += orig_tail_len - new_dst_len; // after message processed, update region msg_region = tail; } let unproc = msg_region.len(); // msg_region reference expires here if unproc > 0 { debug!( "Adjusting protocol message for {} unprocessed bytes", unproc ); let trail_length = unproc + 4; let skip = msg.len() - trail_length; msg[skip..skip + 4].copy_from_slice( &build_wmsg_header(WmsgType::Protocol, trail_length).to_le_bytes(), ); chan_msg.message_counter -= 1; /* Stop here -- cannot process any more messages */ msg_view.advance(skip); // no point in updating 'region' /* Return immediately, without replacing msg_view */ return Ok(()); } /* Note: a similar trick may be possible for InjectRIDs, assuming the * existing behavior in which InjectRIDs + Protocol messages arrive * together. However, in practice it is probably better just to queue * up all RIDs, possibly reallocating memory if necessary; or give up * if there are more RIDs than message bytes, since no real protocol * requires this. */ } WmsgType::AckNblocks => { /* This message type is silently dropped, because reconnection * support is explicitly not supported, and so this implementation * has no use for it. 
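                 * Peers speaking wire version <= 16 may still emit these, so they are
                 * tolerated there; at wire version 17 and above their presence is a
                 * protocol error, as checked below.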
*/ if glob.wire_version > 16 { return Err(tag!("Received AckNBlocks message, but reconnection support is explicitly disabled at wire version {}", glob.wire_version)); } } WmsgType::Restart => { return Err("Unsupported Restart message".into()); } WmsgType::Close => { glob.has_received_close = true; } _ => { let mut tmp = None; std::mem::swap(&mut tmp, &mut chan_msg.next_msg); process_sfd_msg(typ, length, tmp.unwrap(), glob, tasksys)?; } } chan_msg.next_msg = chan_msg.input.pop_next_msg(); } Ok(()) } /** Process messages received from the Wayland connection */ fn process_wayland_1( way_msg: &mut FromWayland, glob: &mut Globals, tasksys: &TaskSystem, ) -> Result<(), String> { debug!("Process wayland 1: {} bytes", way_msg.input.len); /* while there is output space, and the first message is complete, process * messages from Wayland */ let max_len = way_msg.input.data.len(); let mut region: &[u8] = &way_msg.input.data[..way_msg.input.len]; let mut nread: usize = 0; loop { if region.len() < 8 { // Message header is incomplete break; } let header1 = u32::from_le_bytes(region[0..4].try_into().unwrap()); let header2 = u32::from_le_bytes(region[4..8].try_into().unwrap()); let length = (header2 >> 16) as usize; if length < 8 || length % 4 != 0 { error!("Bad length field: {}", length); return Err(tag!("invalid Wayland message: bad length field {}", length)); } if length >= max_len { return Err(tag!( "Message to object {} (length {}) is longer than {}-byte receive buffer", header1, length, max_len )); } if length > region.len() { // Message is incomplete break; } let (msg, tail) = region.split_at(length); region = tail; let data_max_len = way_msg.output.protocol_data.len(); let mut dst = &mut way_msg.output.protocol_data[way_msg.output.protocol_len..]; let orig_dst_len = dst.len(); let aux = TranslationInfo::FromWayland(( &mut way_msg.input.fds, &mut way_msg.output.protocol_rids, )); let ret = process_way_msg(msg, &mut dst, aux, glob)?; let new_dst_len = dst.len(); match ret { ProcMsg::Done => { log_way_msg_output( msg, &way_msg.output.protocol_data[way_msg.output.protocol_len ..way_msg.output.protocol_data.len() - new_dst_len], &glob.objects, !glob.on_display_side, ); } ProcMsg::WaitFor(_) => { unreachable!("Unexpected ProcMsg::WaitFor") } ProcMsg::NeedsSpace((nbytes, nfds)) => { if nbytes > data_max_len || nfds > MAX_OUTGOING_FDS { /* Output message or messages are too large to send */ return Err(tag!( "Failed to send message(s): not enough space, ({}, {}) vs ({}, {})", nbytes, nfds, way_msg.output.protocol_data.len(), MAX_OUTGOING_FDS )); } debug!( "Skipping last message: not enough space ({},{}) vs ({},{})", nbytes, nfds, new_dst_len, MAX_OUTGOING_FDS - way_msg.output.protocol_rids.len() ); break; } } // update amount read and written nread += length; way_msg.output.protocol_len += orig_dst_len - new_dst_len; } way_msg.input.len -= nread; way_msg .input .data .copy_within(nread..(nread + way_msg.input.len), 0); /* Collect updates to all FDs here, to be sent across the channel */ let comp = glob.opts.compression; let mut drop_list: Vec = Vec::new(); for (rid, sfd) in &glob.map { /* Collecting updates for some pipes can lead to their removal * (todo: make this more efficient with upcoming 'extract_if') */ let Some(s) = sfd.upgrade() else { // todo: also clear out empty weak pointers here? 
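                // (stale Weak entries are pruned from glob.map at the end of each
                // loop_inner iteration, so merely skipping them here is enough)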
continue; }; let keep = collect_updates( &mut s.borrow_mut(), &mut way_msg.output, comp, tasksys, &glob.opts, )?; if !keep { drop_list.push(*rid); } } let mut delete_idxs: Vec = Vec::new(); for (i, sfd) in glob.pipes.iter().enumerate() { let rid = sfd.borrow().remote_id; if drop_list.contains(&rid) { delete_idxs.push(i); } } for drop_pos in delete_idxs.iter().rev() { glob.pipes.remove(*drop_pos); } for rid in drop_list { debug!("Dropping RID {} from pipe list", rid); glob.map.remove(&rid); } Ok(()) } /** Add RID and protocol transfer messages to send to the channel */ fn process_wayland_2(way_msg: &mut FromWayland) { // (todo: dynamically glue on when constructing the iovecs, instead?) if !way_msg.output.protocol_rids.is_empty() { debug!( "Inserting RID message with {} rids", way_msg.output.protocol_rids.len() ); let len = way_msg.output.protocol_rids.len() * 4 + 4; let rid_header: u32 = build_wmsg_header(WmsgType::InjectRIDs, len); let mut v = Vec::with_capacity(len); for e in rid_header.to_le_bytes() { v.push(e); } for rid in &way_msg.output.protocol_rids { let r = rid.borrow().remote_id; for e in r.0.to_le_bytes() { v.push(e); } } way_msg.output.other_messages.push(v); way_msg.output.protocol_rids.clear(); } if way_msg.output.protocol_len > 0 && !way_msg.output.protocol_header_added { debug!( "Inserting protocol header, for {} bytes of content", way_msg.output.protocol_len ); let proto_header: u32 = build_wmsg_header( WmsgType::Protocol, way_msg.output.protocol_len + std::mem::size_of::(), ); let mut v = Vec::new(); for e in proto_header.to_le_bytes() { v.push(e); } way_msg.output.other_messages.push(v); way_msg.output.protocol_header_added = true; /* Note: this pushes only the header, under the assumption that the * protocol data will immediately follow */ // is this a good idea? } } /** Process task completion events indicated by the main Vulkan timeline semaphore */ fn process_vulkan_updates( glob: &mut Globals, tasksys: &TaskSystem, from_chan: &mut FromChannel, ) -> Result<(), String> { let DmabufDevice::Vulkan((_, ref vulk)) = glob.dmabuf_device else { unreachable!(); }; let current: u64 = vulk.get_current_timeline_pt()?; let mut g = tasksys.tasks.lock().unwrap(); // TODO: more efficient filtering, this is O(n^2) as typically only // one element gets removed at a time. Use a heap, or some other method // to ensure g.apply_operations, fill tasks, etc. are kept sorted retain_err(&mut g.apply_operations, |x| -> Result { if x.0 <= current { let rid = x.1; let Some(wsfd) = glob.map.get(&rid) else { return Err(tag!("ShadowFd no longer in map")); }; let Some(sfd) = wsfd.upgrade() else { return Err(tag!("ShadowFd no longer strongly referenced")); }; let mut b = sfd.borrow_mut(); let rid = b.remote_id; let ShadowFdVariant::Dmabuf(ref mut data) = &mut b.data else { panic!(); }; data.pending_apply_tasks = data .pending_apply_tasks .checked_sub(1) .ok_or("Task miscount")?; /* The acquire list is provided by protocol processing, after all relevant * apply tasks have been sent. 
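             * Waiting for pending_apply_tasks to reach zero before signalling means
             * the compositor never observes a partially-updated buffer.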
*/ if data.pending_apply_tasks == 0 { debug!( "Tasks completed, signalling {} acquires for (dmabuf rid={}, wlbuf={})", data.acquires.len(), rid, data.debug_wayland_id ); if !data.acquires.is_empty() { assert!(glob.on_display_side); signal_timeline_acquires(&mut data.acquires)?; } } Ok(false) } else { Ok(true) } })?; /* Indicate if waiting for operations on this specific dmabuf */ if let Some(rid) = from_chan.waiting_for { let sfd = glob.map.get(&rid).unwrap().upgrade().unwrap(); let b = sfd.borrow_mut(); if let ShadowFdVariant::Dmabuf(ref data) = &b.data { if data.pending_apply_tasks == 0 { from_chan.waiting_for = None; } }; } let mut new_tasks = false; for i in (0..g.dmabuf_fill_tasks.len()).rev() { if g.dmabuf_fill_tasks[i].wait_until <= current { let t = g.dmabuf_fill_tasks.remove(i); g.construct.push_back(WorkTask::FillDmabuf2(t)); new_tasks = true; } } for i in (0..g.dmabuf_diff_tasks.len()).rev() { if g.dmabuf_diff_tasks[i].wait_until <= current { let t = g.dmabuf_diff_tasks.remove(i); g.construct.push_back(WorkTask::DiffDmabuf2(t)); new_tasks = true; } } if new_tasks { tasksys.task_notify.notify_all(); } /* Protocol processing will be started when all copy operations are complete */ Ok(()) } /** Are there complete messages for the wayland->channel direction that * can be processed. */ fn is_from_way_processable(way_msg_input: &WaylandInputRing, glob: &Globals) -> bool { // todo: this should be a cached flag; iterating and borrowing is slow in the long run for sfd in &glob.pipes { let x = sfd.borrow(); if let ShadowFdVariant::Pipe(data) = &x.data { if data.program_closed { /* Need to send shutdown messages and/or final transfer to other side */ return true; } if let ShadowFdPipeBuffer::ReadFromWayland((_, len)) = data.buf { if len > 0 { /* Have read data from a pipe that could be sent */ return true; } } } } // peek at message header if way_msg_input.len < 8 { return false; } let header2 = u32::from_le_bytes(way_msg_input.data[4..8].try_into().unwrap()); let length = header2 >> 16; if length >= way_msg_input.data.len() as u32 { /* Overly long message, processing it will trigger an error */ return true; } // is the first message complete? way_msg_input.len >= length as usize } /** Are there complete messages for the channel->wayland direction that * can be processed, assuming prerequisites are complete */ fn is_from_chan_processable(chan_msg: &FromChannel) -> bool { chan_msg.next_msg.is_some() } /** Are there messages for the wayland->channel direction that are ready to send? */ fn has_from_way_output(from_way_output: &TransferQueue) -> bool { let send_protocol = from_way_output.protocol_len > 0 && from_way_output.expected_recvd_msgs == 0; !from_way_output.other_messages.is_empty() || (send_protocol && from_way_output.protocol_len > 0) || from_way_output.needs_new_ack || from_way_output.ack_nwritten > 0 } /** Are there messages for the channel->wayland direction that are ready to send? */ fn has_from_chan_output(from_chan_output: &TransferWayland) -> bool { from_chan_output.len > 0 } /** Write a message to the file descriptor to wake up the thread that polls `poll()` it. 
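 * (i.e., a reader blocked in `ppoll` on the matching read end of a non-blocking
 * self-pipe; one byte suffices, and `EAGAIN` means a wakeup is already pending).
 *
 * A minimal usage sketch, mirroring the call in `work_thread`:
 * ```ignore
 * let zero = [0u8];
 * if wakeup_fd(&tasksys.wake_fd, &zero).is_err() {
 *     // the read end was dropped, so the polling loop is gone; stop this worker
 *     break;
 * }
 * ```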
*/ fn wakeup_fd(fd: &OwnedFd, val: &[u8]) -> Result<(), ()> { loop { match unistd::write(fd, val) { Ok(_) => return Ok(()), Err(nix::errno::Errno::EINTR) => continue, Err(nix::errno::Errno::EAGAIN) => { /* pipe is full, will be woken up anyway */ return Ok(()); } Err(nix::errno::Errno::EPIPE) => { /* Remote end shut down */ return Err(()); } Err(e) => { // TODO: ENOBUFS is theoretically possible, so errors here // should probably be propagated to the main loop panic!("Pipe wakeup failed {:?}", e); } } } } /** Return true iff the two half-open intervals intersect */ fn interval_overlaps(a: &(usize, usize), b: &(usize, usize)) -> bool { b.0 < a.1 && a.0 < b.1 } /** Return true if waiting for pending tasks that may produce a message */ fn has_pending_compute_tasks(from_way_output: &TransferQueue) -> bool { from_way_output.expected_recvd_msgs > 0 } /** Identify a task which can be run, if one exists */ fn pop_task(tasksys: &mut TaskSet) -> Option { /* Task priorities are chosen to minimize the amount of data in-flight */ /* First priority: apply-type tasks, choose lowest sequence number */ if let Some((_, t)) = tasksys.apply.first_key_value() { /* Task can be applied when no preceding tasks could have overlapping region */ let mut ok: bool = true; for q in &tasksys.in_progress_decomp { if *q < t.sequence { ok = false; break; } } for (v, (l, h)) in &tasksys.in_progress_apply { if interval_overlaps(&(*l, *h), &(t.region_start, t.region_end)) { assert!(*v < t.sequence); ok = false; break; } } if ok { let (_, r) = tasksys.apply.pop_first().unwrap(); tasksys .in_progress_apply .insert((r.sequence, (r.region_start, r.region_end))); return Some(WorkTask::Apply(r)); } } if let Some(mut x) = tasksys.decompress.pop_front() { let s = tasksys.last_seqno; tasksys.last_seqno += 1; x.sequence = Some(s); tasksys.in_progress_decomp.insert(s); return Some(WorkTask::Decomp(x)); } /* Finally: construct diffs */ tasksys.construct.pop_front() } /** Main loop for a worker thread to do compute-heavy tasks */ fn work_thread(tasksys: &TaskSystem, output: Sender) { let notify: &Condvar = &tasksys.task_notify; let mtx: &Mutex<_> = &tasksys.tasks; let mut cache = ThreadCache { large: Vec::new(), cmd_pool: None, copy_ops: Vec::new(), decode_ops: Vec::new(), comp: ThreadCacheComp { lz4_c: None, zstd_c: None, zstd_d: None, }, }; let mut guard = mtx.lock().unwrap(); while !guard.stop { /* Choose a task to do */ let Some(task) = pop_task(&mut guard) else { /* If no tasks left, wait. 
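                 * Condvar::wait atomically releases the task mutex while sleeping and
                 * re-acquires it on wakeup; spurious wakeups are harmless, as pop_task
                 * is simply retried on the next loop iteration.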
*/ guard = match notify.wait(guard) { Ok(g) => g, Err(_) => { error!("Mutex poisoned, stopping worker"); break; } }; continue; }; drop(guard); /* Run the task */ match task { WorkTask::Diff(x) => { let result = run_diff_task(&x, &mut cache); /* one task, one message */ output.send(result).unwrap(); } WorkTask::DiffDmabuf(x) => { let result = run_diff_dmabuf_task(x, &mut cache); match result { Err(z) => { output.send(Err(z)).unwrap(); } Ok(t) => { let mut g = mtx.lock().unwrap(); g.dmabuf_diff_tasks.push(t); drop(g); } } } WorkTask::DiffDmabuf2(x) => { let result = run_diff_dmabuf_task_2(x, &mut cache); /* one task, one message */ output.send(result).unwrap(); } WorkTask::FillDmabuf(x) => { let result = run_fill_dmabuf_task(x, &mut cache); match result { Err(z) => { output.send(Err(z)).unwrap(); } Ok(t) => { let mut g = mtx.lock().unwrap(); g.dmabuf_fill_tasks.push(t); drop(g); } } } WorkTask::FillDmabuf2(x) => { let result = run_dmabuf_fill_task_2(x, &mut cache); /* one task, one message */ output.send(result).unwrap(); } WorkTask::Decomp(x) => { let y = run_decomp_task(&x, &mut cache); match y { Err(z) => { output.send(Err(z)).unwrap(); } Ok(t) => { let mut g = mtx.lock().unwrap(); // Drop from in-progress list g.in_progress_decomp.remove(&x.sequence.unwrap()); match t { DecompReturn::Shm(x) => { g.apply.insert(x.sequence, x); } DecompReturn::Dmabuf((seq, rid, mir_task)) => { // TODO: consider sending these over the `output` channel? // Push the _id_; and have tasksys store the most recent value to cull // (to be cleared up locally.) g.apply_operations.push((seq, rid)); if let Some(m) = mir_task { g.apply.insert(m.sequence, m); } } } drop(g); } } } WorkTask::Apply(x) => { let y = run_apply_task(&x, &mut cache); let mut g = mtx.lock().unwrap(); /* Send error or notify of completion */ if let Ok(TaskOutput::DmabufApplyOp(x)) = y { // Add to list for main thread to act once complete g.apply_operations.push(x); } else if let Ok(TaskOutput::MirrorApply) = y { // No action } else { // Send to main thread output.send(y).unwrap(); } g.in_progress_apply .remove(&(x.sequence, (x.region_start, x.region_end))); drop(g); } WorkTask::VideoEncode(x) => { let y = run_encode_task(x, &mut cache); /* Output is the message to send */ output.send(y).unwrap(); } WorkTask::VideoDecode(x) => { let y = run_decode_task(x, &mut cache); if let Ok(TaskOutput::DmabufApplyOp(x)) = y { // Add to list for main thread to act once complete let mut g = mtx.lock().unwrap(); g.apply_operations.push(x); drop(g); } else { // Send to main thread output.send(y).unwrap(); } } }; // write->read establishes happens-before and thus that any new messages // sent over the channel will be seen, as will updates to the lists of // available tasks. // TODO: better notification mechanism for apply tasks being done // is atomic_bool enough? 
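        // In short: (1) publish the result (mpsc send or shared task-list update),
        // (2) write a byte to the self-pipe; (3) the poller wakes, drains the pipe,
        // and only then checks the result channel and the task lists.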
let zero = [0]; if wakeup_fd(&tasksys.wake_fd, &zero).is_err() { /* Remote end shut down */ break; } /* Periodic cleanup work */ if let Some(c) = &cache.cmd_pool { let current: u64 = c.vulk.get_current_timeline_pt().unwrap(); // TODO: more efficient filtering for i in (0..cache.copy_ops.len()).rev() { if cache.copy_ops[i].get_timeline_point() <= current { cache.copy_ops.remove(i); } } for i in (0..cache.decode_ops.len()).rev() { if cache.decode_ops[i].get_timeline_point() <= current { cache.decode_ops.remove(i); } } } guard = mtx.lock().unwrap(); } if !cache.copy_ops.is_empty() || !cache.decode_ops.is_empty() { let final_pt = cache .copy_ops .iter() .map(|x| x.get_timeline_point()) .chain(cache.decode_ops.iter().map(|x| x.get_timeline_point())) .max() .unwrap(); debug!( "Work thread {} waiting for Vulkan timeline point {}", std::thread::current().name().unwrap_or("unknown"), final_pt, ); cache .cmd_pool .as_ref() .unwrap() .vulk .wait_for_timeline_pt(final_pt, u64::MAX) .unwrap(); } debug!( "Work thread {} complete", std::thread::current().name().unwrap_or("unknown") ); } /** Main loop for the thread waiting on the main Vulkan timeline semaphore, if it cannot * be exported to a file descriptor. */ fn vulkan_wait_thread(state: Arc, vulk: Arc) { let notify: &Condvar = &state.notify; let mtx: &Mutex<_> = &state.inner; debug!( "Vulkan waiting thread {} started", std::thread::current().name().unwrap_or("unknown") ); let mut guard = mtx.lock().unwrap(); while !guard.stop { let Some(pt) = guard.target_sequence_point else { /* No Vulkan timeline point to wait for, so wait until a new timeline point is * requested */ guard = match notify.wait(guard) { Ok(g) => g, Err(_) => { error!("Mutex poisoned, stopping Vulkan waiter thread"); break; } }; continue; }; drop(guard); /* Note: setting a short timeout (even 1ms) should be acceptable since (barring * client deadlock) either the client should be actively rendering a frame or * Waypipe should be actively processing one. */ let res = vulk.wait_for_timeline_pt(pt, u64::MAX); guard = mtx.lock().unwrap(); /* In any case, wake up main loop; spurious wakeups are acceptable and rare. */ if !guard.pending_read { /* Try to wake up main loop */ let long_zero = [0u8; 8]; if wakeup_fd(&guard.pipe_w, &long_zero).is_err() { /* Failed to write to pipe because read end must have dropped or there is * a system error. In either case, exit silently. */ break; } guard.pending_read = true; } match res { Err(e) => { guard.error = Some(e); break; } Ok(true) => { /* Have reached the target point, do not need to wait again if there was a * spurious wakeup */ guard.target_sequence_point = None; } /* This case is theoretically possible; for example, if the monotonic clock may * have jumped ahead by u64::MAX nanoseconds. In that case, keep the current state * and wait again */ Ok(false) => (), } } debug!( "Vulkan waiting thread {} complete", std::thread::current().name().unwrap_or("unknown") ); } impl Drop for VulkanWaitThreadShutdown { fn drop(&mut self) { match self.state.inner.lock() { Ok(mut guard) => { guard.stop = true; } Err(_) => { error!("Mutex poisoned, Vulkan wait thread expected to shut down"); } } self.state.notify.notify_all(); } } /** Inner loop of Waypipe's proxy logic, which reads and writes Wayland messages for * the Wayland application or compositor, and Waypipe messages for the matching Waypipe instance. 
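 *
 * The loop multiplexes every event source through a single `ppoll` call: the
 * worker self-pipe, the channel socket, the Wayland socket, and any shadowed
 * pipes, timeline semaphores, dmabuf fences, or Vulkan timeline eventfds.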
* * Returns: if unsuccessful, an error message to print and send to the Wayland application */ fn loop_inner<'a>( scope: &'a std::thread::Scope<'a, '_>, glob: &mut Globals, from_chan: &mut FromChannel, from_way: &mut FromWayland, tasksys: &TaskSystem, wake_r: OwnedFd, chanfd: &OwnedFd, progfd: &OwnedFd, pollmask: &signal::SigSet, sigint_received: &AtomicBool, ) -> Result<(), String> { let mut vulkan_wait_state: Option> = None; let mut _vulkan_wait_thread_shutdown: Option = None; let mut vulkan_wait_pipe_r: Option = None; while from_chan.state != DirectionState::Off || from_way.state != DirectionState::Off { let has_chan_output = has_from_way_output(&from_way.output) && from_way.state != DirectionState::Off; let has_way_output = has_from_chan_output(&from_chan.output) && from_chan.state != DirectionState::Off; let read_chan_input = !is_from_chan_processable(from_chan) && from_chan.state == DirectionState::On && !has_from_chan_output(&from_chan.output); let read_way_input = !is_from_way_processable(&from_way.input, glob) && from_way.state == DirectionState::On && !has_from_way_output(&from_way.output) && !has_pending_compute_tasks(&from_way.output); // Is there unprocessed and complete content for either direction? let work_way = is_from_way_processable(&from_way.input, glob) && !has_pending_compute_tasks(&from_way.output) && (from_way.state == DirectionState::On || from_way.state == DirectionState::Drain); let work_chan = is_from_chan_processable(from_chan) && from_chan.waiting_for.is_none() && (from_chan.state == DirectionState::On || from_chan.state == DirectionState::Drain); let work_to_do_now = (work_way && !has_chan_output) || (work_chan && !has_way_output); debug!( "poll: from_chan ({:?}{}{}{}; wait={}) from_way ({:?}{}{}{}; wait={}) work now {}", from_chan.state, string_if_bool(is_from_chan_processable(from_chan), ",proc"), string_if_bool(has_from_chan_output(&from_chan.output), ",output"), string_if_bool(read_chan_input, ",read"), from_chan.waiting_for.unwrap_or(Rid(0)), from_way.state, string_if_bool(is_from_way_processable(&from_way.input, glob), ",proc"), string_if_bool(has_from_way_output(&from_way.output), ",output"), string_if_bool(read_way_input, ",read"), from_way.output.expected_recvd_msgs, fmt_bool(work_to_do_now) ); // TODO: avoid reallocating pfds. One useful structure: // [chan | prog | wake | pipes...] 
// swap chan/prog order and use slices as necessary to avoid polling nval // Or: use an EPOLL wrapper let mut pfds = Vec::new(); // avoid realloc, maintain with capacity pfds.push(PollFd::new(wake_r.as_fd(), PollFlags::POLLIN)); let chan_id: Option = if read_chan_input || has_chan_output { let mut flags = PollFlags::empty(); flags.set(PollFlags::POLLIN, read_chan_input); flags.set(PollFlags::POLLOUT, has_chan_output); pfds.push(PollFd::new(chanfd.as_fd(), flags)); Some(pfds.len() - 1) } else { None }; let prog_id: Option = if read_way_input || has_way_output { let mut flags = PollFlags::empty(); flags.set(PollFlags::POLLIN, read_way_input); flags.set(PollFlags::POLLOUT, has_way_output); pfds.push(PollFd::new(progfd.as_fd(), flags)); Some(pfds.len() - 1) } else { None }; let (vulk_id, borrowed_fd): (Option, Option) = if let DmabufDevice::Vulkan((_, ref vulk)) = glob.dmabuf_device { let g = tasksys.tasks.lock().unwrap(); let mut first_pt = u64::MAX; first_pt = std::cmp::min( first_pt, g.apply_operations .iter() .map(|x| x.0) .min() .unwrap_or(u64::MAX), ); first_pt = std::cmp::min( first_pt, g.dmabuf_fill_tasks .iter() .map(|x| x.wait_until) .min() .unwrap_or(u64::MAX), ); first_pt = std::cmp::min( first_pt, g.dmabuf_diff_tasks .iter() .map(|x| x.wait_until) .min() .unwrap_or(u64::MAX), ); if first_pt < u64::MAX { /* Wait until the first incomplete timeline point, at which point some * progress can be made */ let mut flags = PollFlags::empty(); flags.set(PollFlags::POLLIN, true); let obfd = vulk.get_event_fd(first_pt).unwrap(); if let Some(bfd) = obfd { pfds.push(PollFd::new(bfd, flags)); (Some(pfds.len() - 1), Some(bfd)) } else { /* Emulate the eventfd using a thread that waits on the Vulkan main semaphore */ if let Some(ref w) = vulkan_wait_state { /* Update target sequence point. Note: it is possible `first_pt` may be _smaller_ * than the current target sequence point value depending on the precise task execution * order. There is no mechanism to immediately cancel currently running wait operations * without e.g. polling, so the wait thread will wait somewhat longer than necessary. * This will probably not cause a deadlock since (assuming sane clients) all submitted * Vulkan command buffers should make progress eventually if the main thread does * nothing, and is unlikely to be significant in practice. Insane clients can of * course never signal the acquire semaphore they provided. */ w.inner.lock().unwrap().target_sequence_point = Some(first_pt); w.notify.notify_all(); } else { let (wake_r, wake_w) = unistd::pipe2(fcntl::OFlag::O_CLOEXEC | fcntl::OFlag::O_NONBLOCK) .map_err(|x| tag!("Failed to create pipe: {}", x))?; let wait_state = Arc::new(VulkanWaitThread { inner: Mutex::new(VulkanWaitThreadInner { pipe_w: wake_w, pending_read: false, target_sequence_point: Some(first_pt), stop: false, error: None, }), notify: Condvar::new(), }); let shutdown = VulkanWaitThreadShutdown { state: wait_state.clone(), }; _vulkan_wait_thread_shutdown = Some(shutdown); let vkclone = vulk.clone(); let waitclone = wait_state.clone(); let thread = std::thread::Builder::new() .name(format!( "{}-vulkan-wait", if glob.on_display_side { "c" } else { "s" }, )) .spawn_scoped(scope, move || vulkan_wait_thread(waitclone, vkclone)) .map_err(|x| tag!("Failed to spawn thread: {:?}", x))?; /* Thread will automatically join on scope end, and will be asked to shut * down when vulkan_wait_thread_shutdown drops. (The shutdown itself may * be delayed until all in-progress Vulkan operations complete.) 
*/ drop(thread); vulkan_wait_state = Some(wait_state); vulkan_wait_pipe_r = Some(wake_r); } let bfd = vulkan_wait_pipe_r.as_ref().unwrap().as_fd(); pfds.push(PollFd::new(bfd, flags)); (Some(pfds.len() - 1), Some(bfd)) } } else { (None, None) } } else { (None, None) }; let nbase_fds = pfds.len(); // todo: this is an awkward way to avoid the borrow checker, by allocating // an array of references to clear when pfds have been processed. let sfd_refs: Vec> = glob.pipes.iter().map(|v| v.borrow()).collect(); for x in &sfd_refs { if let ShadowFdVariant::Pipe(data) = &x.data { if data.program_closed { continue; } match &data.buf { ShadowFdPipeBuffer::ReadFromWayland((buf, len)) => { if *len < buf.len() { pfds.push(PollFd::new(data.fd.as_fd(), PollFlags::POLLIN)); } } ShadowFdPipeBuffer::ReadFromChannel(v) => { if !v.is_empty() { pfds.push(PollFd::new(data.fd.as_fd(), PollFlags::POLLOUT)); } } }; } } let ntimelinebase_fds = pfds.len(); let mut timelines = Vec::new(); for (_rid, v) in glob.map.iter() { let Some(w) = v.upgrade() else { continue; }; let b = w.borrow(); let ShadowFdVariant::Timeline(ref data) = b.data else { continue; }; if !glob.on_display_side { /* Only back-propagate releases from compositor to program; * do not wait for or signal releases on the program side */ continue; } let Some(min_pt) = data.releases.iter().map(|x| x.0).min() else { continue; }; drop(b); timelines.push((w, min_pt)); } let timeline_refs: Vec<(std::cell::Ref<'_, ShadowFd>, u64)> = timelines.iter().map(|v| (v.0.borrow(), v.1)).collect(); for (b, pt) in timeline_refs.iter() { let ShadowFdVariant::Timeline(ref data) = b.data else { continue; }; let evfd = data.timeline.link_event_fd(*pt)?; pfds.push(PollFd::new(evfd, PollFlags::POLLIN)); } /* Add DMABUFs that need to be polled to get implicit fence state */ let ndmabufbase_fds = pfds.len(); let mut dmabufs = Vec::new(); { let g = tasksys.tasks.lock().unwrap(); let mut dmabuf_rids: Vec = Vec::new(); dmabuf_rids.extend(g.waiting_for_implicit_acquire.keys().copied()); for rid in dmabuf_rids { let Some(wsfd) = glob.map.get(&rid) else { error!( "ShadowFd with RID={} for dmabuf destroyed before task completion", rid ); continue; }; let Some(sfd) = wsfd.upgrade() else { error!( "ShadowFd with RID={} for dmabuf destroyed before task completion", rid ); continue; }; dmabufs.push(sfd); } } let dmabuf_refs: Vec> = dmabufs.iter().map(|v| v.borrow()).collect(); for d in dmabuf_refs.iter() { let ShadowFdVariant::Dmabuf(ref data) = d.data else { unreachable!(); }; let DmabufImpl::Vulkan(ref v) = data.buf else { /* Only DMABUFs handled with Vulkan should have tasks put in TaskSet::waiting_for_implicit_acquire */ unreachable!(); }; let dfd = v.main_fd.as_fd(); pfds.push(PollFd::new(dfd, PollFlags::POLLIN)); } let zero_timeout = time::TimeSpec::new(0, 0); let res = nix::poll::ppoll( &mut pfds, if work_to_do_now { Some(zero_timeout) } else { None }, Some(*pollmask), ); if let Err(errno) = res { assert!(errno == Errno::EINTR || errno == Errno::EAGAIN); } // This avoids the borrow checker by ensuring pfds can drop references // to pipe fds. todo: this is technically unnecessary, how to avoid? 
let pfd_returns: Vec = pfds.iter().map(|x| x.revents().unwrap()).collect(); drop(pfds); drop(sfd_refs); drop(timeline_refs); drop(dmabuf_refs); if sigint_received.load(Ordering::Acquire) { error!("SIGINT"); break; } let mut self_wakeup = false; if pfd_returns[0].contains(PollFlags::POLLIN) { debug!("Self-pipe wakeup"); let mut tmp: [u8; 64] = [0; 64]; let res = unistd::read(wake_r.as_raw_fd(), &mut tmp[..]); match res { Ok(_) => { /* worker thread may have a message */ self_wakeup = true; } Err(nix::errno::Errno::EINTR) | Err(nix::errno::Errno::EAGAIN) => { /* No action */ } Err(code) => { return Err(tag!("Failed to read from self-pipe: {:?}", code)); } }; } let (mut prog_read_eof, mut prog_write_eof) = (false, false); if let Some(id) = prog_id { let evts = pfd_returns[id]; if evts.contains(PollFlags::POLLIN) { // Read from program prog_read_eof |= read_from_socket(progfd, &mut from_way.input)?; if prog_read_eof { debug!("EOF reading from program"); } } if evts.contains(PollFlags::POLLOUT) { // Write to program prog_write_eof |= write_to_socket(progfd, &mut from_chan.output)?; if prog_write_eof { debug!("EOF writing to program"); } } if evts.contains(PollFlags::POLLHUP) { /* As POLLHUP is mutually exclusive with POLLOUT, prog fd is * no longer writable, so transfers to it can make no more * progress. */ debug!("POLLHUP from wayland side"); prog_write_eof = true; } } let (mut chan_read_eof, mut chan_write_eof) = (false, false); if let Some(id) = chan_id { let evts = pfd_returns[id]; if evts.contains(PollFlags::POLLIN) { // Read from channel chan_read_eof |= read_from_channel(chanfd, from_chan)?; if chan_read_eof { debug!( "EOF reading from channel: has close message? {}", fmt_bool(glob.has_received_close) ); } } if evts.contains(PollFlags::POLLOUT) { // Write to channel chan_write_eof |= write_to_channel(chanfd, &mut from_way.output)?; if chan_write_eof { debug!( "EOF writing to channel: has close message? {}", fmt_bool(glob.has_received_close) ); } } if evts.contains(PollFlags::POLLHUP) { /* channel fd is no longer writable */ debug!( "POLLHUP from channel side: has close message? {}", fmt_bool(glob.has_received_close) ); chan_write_eof = true; } } /* Process EOFs */ if prog_read_eof || prog_write_eof { /* In either case, no more data can be sent to the program, * and there is no point in updating FDs -- not even pipes. * As the compositor might have learned about the Wayland connection * closure already, we can safely behave as if this was the case. */ from_chan.state = DirectionState::Off; } if chan_read_eof || chan_write_eof { /* Same reasoning as for the program. 
*/ from_way.state = DirectionState::Off; } if prog_read_eof { /* Drain and send any buffered messages; reduce to Drain or lesser */ from_way.state = match from_way.state { DirectionState::On | DirectionState::Drain => DirectionState::Drain, DirectionState::Off => DirectionState::Off, }; /* Note: if prog_write_eof = true, do nothing to from_way.state -- there might * still be buffered messages to process and send */ } if chan_read_eof { /* Drain and send any buffered messages; reduce to Drain or lesser */ from_chan.state = match from_chan.state { DirectionState::On | DirectionState::Drain => DirectionState::Drain, DirectionState::Off => DirectionState::Off, }; /* Note: if chan_write_eof = true, do nothing to from_chan.state -- there might * still be buffered messages to process and send */ } /* React to Vulkan timeline updates */ let mut timeline_update = false; if let Some(id) = vulk_id { let evts = pfd_returns[id]; if evts.contains(PollFlags::POLLIN) { let mut data = [0u8; 8]; let r = nix::unistd::read(borrowed_fd.unwrap().as_raw_fd(), &mut data); match r { Ok(s) => { /* Reads from an eventfd should always return 8 bytes; and short * 8-byte reads from a pipe should be atomic in practice */ assert!(s == 8); /* The u64 counter returned by data indicates the number of times * drmSyncObjEventfd was called since last read, and is not important. */ timeline_update = true; if let Some(ref w) = vulkan_wait_state { let mut m = w.inner.lock().unwrap(); m.pending_read = false; let mut tmp = None; std::mem::swap(&mut tmp, &mut m.error); if let Some(e) = tmp { /* Propagate errors from the wait thread */ error!("Vulkan timeline waiting thread failed: {}", e); return Err(e); } } } Err(nix::errno::Errno::EINTR) | Err(nix::errno::Errno::EAGAIN) => { /* no action */ } Err(code) => { return Err(tag!("Failed to read from eventfd: {:?}", code)); } } } } /* Process any pipes before doing other work; at this point the sfd ordering will be the same */ let mut base_id = nbase_fds; retain_err(&mut glob.pipes, |sfd| -> Result { let mut r = sfd.borrow_mut(); let rid = r.remote_id; let ShadowFdVariant::Pipe(ref mut data) = &mut r.data else { panic!("Pipe list contained a non-pipe"); }; if data.program_closed { return Ok(true); } let has_pfd = match &data.buf { ShadowFdPipeBuffer::ReadFromWayland((buf, len)) => *len < buf.len(), ShadowFdPipeBuffer::ReadFromChannel(v) => !v.is_empty(), }; if !has_pfd { return Ok(true); } // TODO: move into function let evts = pfd_returns[base_id]; base_id += 1; let mut keep = true; match data.buf { ShadowFdPipeBuffer::ReadFromWayland((ref mut buf, ref mut used_len)) => { if evts.contains(PollFlags::POLLIN) { /* read whatever is in buffer, append to region; fixed buffer size is OK? 
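                         * A full buffer simply means this fd is not re-registered for
                         * POLLIN (see the has_pfd checks above) until some of the data
                         * has been forwarded over the channel.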
*/ let res = unistd::read(data.fd.as_raw_fd(), &mut buf[*used_len..]); match res { Ok(len) => { if len == 0 { debug!("Pipe closed when reading from it"); data.program_closed = true; } else { debug!("Have read {} bytes from pipe at RID {}", len, rid); *used_len += len; } } Err(nix::errno::Errno::EINTR) | Err(nix::errno::Errno::EAGAIN) => { /* no action */ } Err(code) => { return Err(tag!("Failed to read from pipe: {:?}", code)); } }; } else if evts.contains(PollFlags::POLLHUP) || evts.contains(PollFlags::POLLERR) { /* pipe has closed */ debug!( "Pipe at RID={} received{}{}", rid, string_if_bool(evts.contains(PollFlags::POLLHUP), " POLLHUP"), string_if_bool(evts.contains(PollFlags::POLLERR), " POLLERR"), ); data.program_closed = true; } } ShadowFdPipeBuffer::ReadFromChannel(ref mut buf) => { if evts.contains(PollFlags::POLLOUT) { let (slice1, slice2) = buf.as_slices(); let io_slices = &[IoSlice::new(slice1), IoSlice::new(slice2)]; /* write buffer to output */ let res = uio::writev(&data.fd, io_slices); match res { Ok(len) => { buf.drain(0..len).count(); if buf.is_empty() && data.channel_closed { /* nothing left to send; can drop this sfd */ debug!("Deleting pipe at RID={} after all data written", rid); keep = false; } } Err(nix::errno::Errno::EPIPE) | Err(nix::errno::Errno::ECONNRESET) => { debug!("Pipe closed when writing to it"); data.program_closed = true; } Err(nix::errno::Errno::EINTR) | Err(nix::errno::Errno::EAGAIN) => { /* no action */ } Err(code) => { return Err(tag!("Failed to write to pipe: {:?}", code)); } } } else if evts.contains(PollFlags::POLLHUP) || evts.contains(PollFlags::POLLERR) { /* pipe has closed */ debug!( "Pipe at RID={} received{}{}", rid, string_if_bool(evts.contains(PollFlags::POLLHUP), " POLLHUP"), string_if_bool(evts.contains(PollFlags::POLLERR), " POLLERR"), ); data.program_closed = true; } } } Ok(keep) })?; /* Process timeline fds being waited on */ let mut base_timeline_id = ntimelinebase_fds; for (w, min_pt) in timelines.iter() { let evts = pfd_returns[base_timeline_id]; base_timeline_id += 1; let mut b = w.borrow_mut(); let rid = b.remote_id; let ShadowFdVariant::Timeline(ref mut data) = b.data else { continue; }; if evts.contains(PollFlags::POLLIN) { let mut ret = [0u8; 8]; let r = nix::unistd::read(data.timeline.get_event_fd().as_raw_fd(), &mut ret); match r { Ok(s) => { assert!(s == 8); /* The u64 counter returned by data indicates the number of times * drmSyncObjEventfd was called since last read, and is not important. */ let current = data.timeline.get_current_pt()?; if current < *min_pt { /* Spurious wakeup? 
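                         * The eventfd can fire for an earlier timeline point than
                         * min_pt; keep the releases queued and wait for the next round.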
*/ continue; } prune_releases(&mut data.releases, current, rid); if let Some(new_min) = data.releases.iter().map(|x| x.0).min() { assert!(new_min > *min_pt); } let pt_bytes = current.to_le_bytes(); let msg = cat4x4( build_wmsg_header(WmsgType::SignalTimeline, 16).to_le_bytes(), rid.0.to_le_bytes(), pt_bytes[..4].try_into().unwrap(), pt_bytes[4..].try_into().unwrap(), ); from_way.output.other_messages.push(Vec::from(msg)); } Err(nix::errno::Errno::EINTR) | Err(nix::errno::Errno::EAGAIN) => { /* no action */ } Err(code) => { return Err(tag!("Failed to read from eventfd: {:?}", code)); } } } } /* Process DMABUFs being polled */ let mut base_dmabuf_id = ndmabufbase_fds; for d in dmabufs.iter() { let evts = pfd_returns[base_dmabuf_id]; base_dmabuf_id += 1; let rid = d.borrow().remote_id; if evts.contains(PollFlags::POLLIN) { let mut g = tasksys.tasks.lock().unwrap(); /* Tasks should now be ready to process */ if let Some(mut tasks) = g.waiting_for_implicit_acquire.remove(&rid) { debug!( "Dmabuf with RID={} may safely be read from, queueing {} tasks", rid, tasks.len() ); g.construct.extend(tasks.drain(..)); } drop(g); tasksys.task_notify.notify_one(); } } /* Process Vulkan events */ if timeline_update { process_vulkan_updates(glob, tasksys, from_chan)?; } /* Process work results */ if self_wakeup { while let Ok(msg) = from_way.output.recv_msgs.try_recv() { match msg { Ok(TaskOutput::Msg(vec)) => { debug!( "Received off thread work result: new output message of length: {}", vec.len() ); from_way.output.expected_recvd_msgs = from_way.output.expected_recvd_msgs.checked_sub(1).unwrap(); if (from_way.state == DirectionState::On || from_way.state == DirectionState::Drain) && !vec.is_empty() { from_way.output.other_messages.push(vec); } } Ok(TaskOutput::ApplyDone(rid)) => { debug!( "Received off thread work result: completed apply operation for RID={}", rid ); let Some(wsfd) = glob.map.get(&rid) else { return Err(tag!("Completed apply for RID that no longer exists")); }; let r = wsfd.upgrade().unwrap(); let mut sfd = r.borrow_mut(); if let ShadowFdVariant::File(ref mut d) = sfd.data { d.pending_apply_tasks = d.pending_apply_tasks.checked_sub(1).unwrap(); } else if let ShadowFdVariant::Dmabuf(ref mut d) = sfd.data { d.pending_apply_tasks = d.pending_apply_tasks.checked_sub(1).unwrap(); } else { unreachable!(); } } Ok(TaskOutput::MirrorApply) | Ok(TaskOutput::DmabufApplyOp(_)) => { unreachable!() } Err(e) => { error!("worker failed: {}", e); return Err(e); } } if let Some(rid) = from_chan.waiting_for { let sfd = glob.map.get(&rid).unwrap().upgrade().unwrap(); let b = sfd.borrow(); if let ShadowFdVariant::File(ref data) = &b.data { if data.pending_apply_tasks == 0 { from_chan.waiting_for = None; } } else if let ShadowFdVariant::Dmabuf(ref data) = &b.data { if data.pending_apply_tasks == 0 { from_chan.waiting_for = None; } } else { unreachable!(); } } } } /* Process results of reads/writes */ if from_way.state == DirectionState::On || from_way.state == DirectionState::Drain { if is_from_way_processable(&from_way.input, glob) && !has_from_way_output(&from_way.output) && !has_pending_compute_tasks(&from_way.output) { // Read protocol messages and collect corresponding updates process_wayland_1(from_way, glob, tasksys)?; } if !has_pending_compute_tasks(&from_way.output) { // Push final protocol messages, after all thread work cleared process_wayland_2(from_way); } } if is_from_chan_processable(from_chan) && (from_chan.state == DirectionState::On || from_chan.state == DirectionState::Drain) && 
from_chan.waiting_for.is_none() && !has_from_chan_output(&from_chan.output) { process_channel(from_chan, glob, tasksys)?; } /* Acknowledgement messages are only included in version <= 16; they were needed for * reconnection support with original Waypipe, but reconnection support is best done * either at a lower (tcp, ssh) or higher (Wayland application or multiplexer) level */ if from_way.state == DirectionState::On && from_chan.state == DirectionState::On && glob.wire_version <= 16 { /* inject acknowledgement messages */ let val = from_chan.message_counter as u32; if val != from_way.output.last_ack { let dst = if from_way.output.ack_nwritten > 0 { &mut from_way.output.ack_msg_nxt } else { &mut from_way.output.ack_msg_cur }; dst[0..4] .copy_from_slice(&build_wmsg_header(WmsgType::AckNblocks, 8).to_le_bytes()); dst[4..8].copy_from_slice(&val.to_le_bytes()); debug!("Queued ack: counter {}", val); from_way.output.needs_new_ack = true; from_way.output.last_ack = val; } } /* Cleanup all weak references from global table */ glob.map.retain(|_rid, sfd| sfd.upgrade().is_some()); /* If, after processing and writing, there is nothing left for a direction * being drained, turn the direction off */ if from_chan.state == DirectionState::Drain // TODO: what if there is still something readable? && !is_from_chan_processable(from_chan) && !has_from_chan_output(&from_chan.output) { from_chan.state = DirectionState::Off; } if from_way.state == DirectionState::Drain && !is_from_way_processable(&from_way.input, glob) && !has_from_way_output(&from_way.output) { from_way.state = DirectionState::Off; } } Ok(()) } /** Write data to a socket. * * Returns (eof, nwritten) */ fn write_bytes(sockfd: &OwnedFd, data: &[u8]) -> Result<(bool, usize), String> { let slice = &[IoSlice::new(data)]; match uio::writev(sockfd, slice) { Ok(len) => Ok((false, len)), Err(nix::errno::Errno::EPIPE) | Err(nix::errno::Errno::ECONNRESET) => { debug!("Channel closed while writing error"); Ok((true, 0)) } Err(nix::errno::Errno::EINTR) | Err(nix::errno::Errno::EAGAIN) => { /* Do nothing */ Ok((false, 0)) } Err(x) => Err(tag!("Error writing to socket: {:?}", x)), } } /** Finish writing any queued data and then send a close message (and maybe * also error message) over the channel */ fn loop_end_to_channel( error: Option<&str>, from_way: &mut FromWayland, chanfd: &OwnedFd, pollmask: &signal::SigSet, sigint_received: &AtomicBool, send_close_msg: bool, ) -> Result<(), String> { let mut endmsg = Vec::new(); if let Some(msg) = error { let err = format!("waypipe-client internal error: {}", msg); let errmsg_len = 4 + length_evt_wl_display_error(err.len()); endmsg.extend_from_slice(&build_wmsg_header(WmsgType::Protocol, errmsg_len).to_le_bytes()); assert!(errmsg_len % 4 == 0); endmsg.resize(errmsg_len, 0); let mut tmp = &mut endmsg[4..]; write_evt_wl_display_error(&mut tmp, ObjId(1), ObjId(1), 0, err.as_bytes()); } if send_close_msg { /* Send 'Close' message to channel to make it clear the shutdown/disconnection * is deliberate and that this instance of Waypipe has not crashed */ endmsg.extend_from_slice(&build_wmsg_header(WmsgType::Close, 8).to_le_bytes()); endmsg.extend_from_slice(&0_u32.to_le_bytes()); } let mut nwritten_err = 0; /* Attach protocol header, RIDs to in progress messages if needed */ process_wayland_2(from_way); while has_from_way_output(&from_way.output) || nwritten_err < endmsg.len() { let mut pfds = [PollFd::new(chanfd.as_fd(), PollFlags::POLLOUT)]; let res = nix::poll::ppoll(&mut pfds, None, Some(*pollmask)); if let Err(errno) 
= res { assert!(errno == Errno::EINTR || errno == Errno::EAGAIN); } if sigint_received.load(Ordering::Acquire) { error!("SIGINT"); break; } let evts = pfds[0].revents().unwrap(); if evts.contains(PollFlags::POLLOUT) { let eof = if has_from_way_output(&from_way.output) { write_to_channel(chanfd, &mut from_way.output)? } else { let (eof, nwritten) = write_bytes(chanfd, &endmsg[nwritten_err..])?; debug!( "Wrote bytes {}..{} of length {} final message; eof: {}", nwritten_err, nwritten_err + nwritten, endmsg.len(), eof ); nwritten_err += nwritten; eof }; if eof { debug!("Channel closed while writing final message"); break; } } if evts.contains(PollFlags::POLLHUP) { /* Disconnected */ debug!("Channel closed"); break; } } Ok(()) } /** Finish writing any queued data and then send an error message to the Wayland application */ fn loop_error_to_wayland( error: &str, from_chan: &mut FromChannel, progfd: &OwnedFd, pollmask: &signal::SigSet, sigint_received: &AtomicBool, ) -> Result<(), String> { let err = format!("waypipe-server internal error: {}", error); let mut errmsg = Vec::new(); let errmsg_len = length_evt_wl_display_error(err.len()); errmsg.resize(errmsg_len, 0); let mut tmp = &mut errmsg[..]; write_evt_wl_display_error(&mut tmp, ObjId(1), ObjId(1), 0, err.as_bytes()); let mut nwritten_err = 0; while has_from_chan_output(&from_chan.output) || nwritten_err < errmsg_len { let mut pfds = [PollFd::new(progfd.as_fd(), PollFlags::POLLOUT)]; let res = nix::poll::ppoll(&mut pfds, None, Some(*pollmask)); if let Err(errno) = res { assert!(errno == Errno::EINTR || errno == Errno::EAGAIN); } if sigint_received.load(Ordering::Acquire) { error!("SIGINT"); break; } let evts = pfds[0].revents().unwrap(); if evts.contains(PollFlags::POLLOUT) { let eof = if has_from_chan_output(&from_chan.output) { write_to_socket(progfd, &mut from_chan.output)? 
} else { let (eof, nwritten) = write_bytes(progfd, &errmsg[nwritten_err..])?; debug!( "Wrote bytes {}..{} of length {} error message; eof: {}", nwritten_err, nwritten_err + nwritten, errmsg.len(), eof ); nwritten_err += nwritten; eof }; if eof { debug!("Wayland connection closed while writing error"); break; } } if evts.contains(PollFlags::POLLHUP) { /* Disconnected */ debug!("Wayland connection closed"); break; } } Ok(()) } /** Struct which on Drop notifies worker threads to stop */ struct ThreadShutdown<'a> { tasksys: &'a TaskSystem, } impl Drop for ThreadShutdown<'_> { fn drop(&mut self) { /* Notify all worker threads that they should shut down */ match self.tasksys.tasks.lock() { Ok(mut guard) => { guard.stop = true; } Err(_) => { error!("Mutex poisoned, workers expected to shut down"); } } self.tasksys.task_notify.notify_all(); } } /** Options for the main interface loop */ #[derive(Debug, Clone)] pub struct Options { /** Whether to print debug messages and add extra correctness checks */ pub debug: bool, /** Compression type to use */ pub compression: Compression, /** If set, video encoding type to try to send */ pub video: VideoSetting, /** Number of worker threads to use for diff/compression operations; 0=autoselect */ pub threads: u32, /** A valid utf8 string with which to prefix xdg toplevel titles */ pub title_prefix: String, /* If true, filter out protocols and messages using dmabufs */ pub no_gpu: bool, /* The drm render node to use (if the Wayland compositor does not specify a specific node) */ pub drm_node: Option, /** If nonzero, path to a folder in which all received video streams will be stored */ pub debug_store_video: Option, /** If true, make vulkan initialization fail so that gbm fallback will be tried if available */ pub test_skip_vulkan: bool, /** If true, assume the platform does not support timeline semaphore importing/exporting */ pub test_no_timeline_export: bool, /** If true, assume the platform does not support binary semaphore importing */ pub test_no_binary_semaphore_import: bool, } /** The main entrypoint for Wayland protocol proxying; should be given already opened and connected sockets * for the Wayland program and for the other Waypipe instance. 
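 *
 * A hypothetical call (sketch only; socket setup, signal masking, and the
 * `sigint_flag` AtomicBool are the caller's responsibility):
 * ```ignore
 * main_interface_loop(chanfd, progfd, &opts, WAYPIPE_PROTOCOL_VERSION,
 *     /* on_display_side */ true, pollmask, &sigint_flag)?;
 * ```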
*/ pub fn main_interface_loop( chanfd: OwnedFd, progfd: OwnedFd, opts: &Options, init_wire_version: u32, on_display_side: bool, pollmask: signal::SigSet, sigint_received: &AtomicBool, ) -> Result<(), String> { debug!("Entered main loop"); let (sender, receiver): (Sender, Receiver) = std::sync::mpsc::channel(); let mut from_chan = FromChannel { state: DirectionState::On, input: ReadBuffer::new(), next_msg: None, rid_queue: VecDeque::new(), output: TransferWayland { data: &mut [0; 4096], start: 0, len: 0, fds: VecDeque::new(), }, message_counter: 0, waiting_for: None, }; let mut from_way = FromWayland { state: DirectionState::On, input: WaylandInputRing { data: &mut [0; 4096], len: 0, fds: VecDeque::new(), }, output: TransferQueue { protocol_data: &mut [0; 4096], protocol_len: 0, protocol_header_added: false, protocol_rids: Vec::new(), ack_msg_cur: [0; 8], ack_msg_nxt: [0; 8], ack_nwritten: 0, needs_new_ack: false, last_ack: 0, other_messages: Vec::new(), expected_recvd_msgs: 0, recv_msgs: receiver, nbytes_written: 0, }, }; if init_wire_version > MIN_PROTOCOL_VERSION { let version_resp = cat2x4( build_wmsg_header(WmsgType::Version, 8).to_le_bytes(), init_wire_version.to_le_bytes(), ); from_way.output.other_messages.push(Vec::from(version_resp)); } let mut glob = Globals { map: BTreeMap::new(), fresh: BTreeMap::new(), pipes: Vec::new(), on_display_side, dmabuf_device: if opts.no_gpu { DmabufDevice::Unavailable } else { DmabufDevice::Unknown }, max_local_id: if on_display_side { -1 } else { 1 }, objects: setup_object_map(), max_buffer_uid: 1, /* Start at 1 to ensure 0 is never valid */ presentation_clock: None, advertised_modifiers: BTreeMap::new(), screencopy_restrictions: BTreeMap::new(), opts: (*opts).clone(), // todo: reference opts instead? wire_version: init_wire_version, has_first_message: false, has_received_close: false, }; let (wake_r, wake_w) = unistd::pipe2(fcntl::OFlag::O_CLOEXEC | fcntl::OFlag::O_NONBLOCK) .map_err(|x| tag!("Failed to create pipe: {}", x))?; let tasksys = TaskSystem { task_notify: Condvar::new(), tasks: Mutex::new(TaskSet { construct: VecDeque::new(), waiting_for_implicit_acquire: BTreeMap::new(), last_seqno: 0, decompress: VecDeque::new(), apply: BTreeMap::new(), in_progress_decomp: BTreeSet::new(), in_progress_apply: BTreeSet::new(), apply_operations: Vec::new(), dmabuf_fill_tasks: Vec::new(), dmabuf_diff_tasks: Vec::new(), stop: false, }), wake_fd: wake_w, }; let nthreads = if opts.threads == 0 { std::cmp::max(1, std::thread::available_parallelism().unwrap().get() / 2) } else { opts.threads as usize }; let tasksys_ref: &TaskSystem = &tasksys; let ret = std::thread::scope(|scope| { let shutdown = ThreadShutdown { tasksys: &tasksys }; let mut threads: Vec> = Vec::new(); for i in 0..nthreads { let senderclone = sender.clone(); let t = std::thread::Builder::new() .name(format!( "{}-worker{}", if on_display_side { "c" } else { "s" }, i )) .spawn_scoped(scope, move || work_thread(tasksys_ref, senderclone)) .map_err(|x| tag!("Failed to spawn thread: {:?}", x))?; threads.push(t); } let ret = loop_inner( scope, &mut glob, &mut from_chan, &mut from_way, &tasksys, wake_r, &chanfd, &progfd, &pollmask, sigint_received, ); /* Ask worker threads to stop processing now. (This is implemented using Drop to * ensure that, if built with panic=unwind, worker threads spawned in this scope * will still stop if the main thread panics before this.) 
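        // Dropping `shutdown` sets `stop` under the task mutex and wakes all workers
        // through the condvar (see `impl Drop for ThreadShutdown`).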
*/ drop(shutdown); let end_res = if let Err(err) = ret { error!("Sending error: {}", err); if on_display_side { loop_end_to_channel( Some(&err), &mut from_way, &chanfd, &pollmask, sigint_received, !glob.has_received_close, ) } else { let x = loop_error_to_wayland( &err, &mut from_chan, &progfd, &pollmask, sigint_received, ); let y = loop_end_to_channel( None, &mut from_way, &chanfd, &pollmask, sigint_received, !glob.has_received_close, ); x.and(y) } } else { loop_end_to_channel( None, &mut from_way, &chanfd, &pollmask, sigint_received, !glob.has_received_close, ) }; if let Err(errerr) = end_res { error!("Error while trying to send final messages: {}", errerr); } /* Errors from the main loop have been handled */ Ok(()) }); debug!("Done."); ret } waypipe-v0.10.5/src/mirror.rs000066400000000000000000000270651507071711100161220ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-3.0-or-later */ /*! Structures to hold copy of a buffer */ use crate::tag; use crate::util::*; use nix::errno::Errno; use nix::libc; use std::collections::BTreeSet; use std::ops::Range; use std::sync::Mutex; enum MirrorBacking { Alloc(AlignedArray), Mmap(*mut u8), } // SAFETY: no operations for MirrorBacking::Mmap linked to a specific thread unsafe impl Send for MirrorBacking {} struct MirrorState { data: MirrorBacking, region_size: usize, ranges: BTreeSet<(usize, usize)>, } /** A vector with range-based access control. */ pub struct Mirror { data: Mutex, } pub struct MirrorRange<'a> { mirror: &'a Mirror, span: (usize, usize), pub data: &'a mut [u8], } fn nonempty_range_overlap(a: &(usize, usize), b: &(usize, usize)) -> bool { b.0 < a.1 && a.0 < b.1 } impl Drop for MirrorRange<'_> { fn drop(&mut self) { // Need to mark the range as being free let mut x = self.mirror.data.lock().unwrap(); x.ranges.remove(&self.span); } } impl Drop for MirrorState { fn drop(&mut self) { if let MirrorBacking::Mmap(v) = self.data { if !v.is_null() { unsafe { /* SAFETY: region_size is kept in sync with the amount mapped to * self.data, so munmap will unmap precisely the mapped region. */ let ret = libc::munmap(v as *mut libc::c_void, self.region_size); /* munmap should only fail if EINVAL or bad security policy */ assert!(ret == 0); } } } } } /* Requires size > 0; data will be initialized to 0 */ unsafe fn do_mmap(size: usize) -> Result<*mut libc::c_void, String> { /* Note: freshly mmapped data is zero-initialized */ let addr: *mut libc::c_void = unsafe { /* SAFETY: no references to memory and will only create fresh allocation if successful */ libc::mmap( std::ptr::null_mut(), size, libc::PROT_READ | libc::PROT_WRITE, libc::MAP_PRIVATE | libc::MAP_ANONYMOUS, -1, 0, ) }; if addr == libc::MAP_FAILED { let errno = Errno::last_raw(); return Err(tag!("Failed to mmap size {}: {}", size, errno)); } /* Code later will check whether addr is null */ assert!(!addr.is_null()); /* mmap should provide page-alignment */ assert!( (addr as usize) % 64 == 0, "Insufficient mmap address alignment: {:?}", addr ); Ok(addr) } /* Assumes 'src' is not null and was mmapped at size old_size; and new_size > old_size; * initializes the new memory to 0 */ #[cfg(target_os = "linux")] unsafe fn do_mremap( src: *mut libc::c_void, old_size: usize, new_size: usize, ) -> Result<*mut libc::c_void, String> { let new_addr: *mut libc::c_void = unsafe { /* SAFETY: src was previously mapped at size `old_size`, as required, so if * successful mmap will ensure returned allocation has `new_size`. * New pages are zero-initialized. * On failure, no change. 
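     * MREMAP_MAYMOVE permits the kernel to relocate the mapping, so `src` must
     * not be used again after a successful call.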
         */
        libc::mremap(src, old_size, new_size, libc::MREMAP_MAYMOVE)
    };
    if new_addr == libc::MAP_FAILED {
        let errno = Errno::last_raw();
        return Err(tag!(
            "Failed to remap from size {} to size {}: {}",
            old_size,
            new_size,
            errno,
        ));
    }
    assert!(!new_addr.is_null());
    assert!(
        (new_addr as usize) % 64 == 0,
        "Insufficient mmap address alignment: {:?}",
        new_addr
    );
    Ok(new_addr)
}

#[cfg(not(target_os = "linux"))]
unsafe fn do_mremap(
    src: *mut libc::c_void,
    old_size: usize,
    new_size: usize,
) -> Result<*mut libc::c_void, String> {
    let new_addr: *mut libc::c_void = unsafe {
        /* SAFETY: no references to existing memory; creates new allocation if successful */
        libc::mmap(
            std::ptr::null_mut(),
            new_size,
            libc::PROT_READ | libc::PROT_WRITE,
            libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
            -1,
            0,
        )
    };
    if new_addr == libc::MAP_FAILED {
        let errno = Errno::last_raw();
        return Err(tag!(
            "Failed to remap from size {} to size {}: {}",
            old_size,
            new_size,
            errno,
        ));
    }
    assert!(!new_addr.is_null());
    assert!(
        (new_addr as usize) % 64 == 0,
        "Insufficient mmap address alignment: {:?}",
        new_addr
    );
    unsafe {
        /* SAFETY: pointers to u8 always aligned; new_addr is mapped at
         * new_size > old_size, src mapped at old_size, and the mmap
         * creating new_addr ensures a disjoint region */
        std::ptr::copy_nonoverlapping(src, new_addr, old_size);
        /* SAFETY: src is not null and was created by mmap of size old_size */
        let ret = libc::munmap(src, old_size);
        /* munmap should only fail if EINVAL or bad security policy */
        assert!(ret == 0);
    }
    Ok(new_addr)
}

impl Mirror {
    pub fn new(size: usize, mmapped: bool) -> Result<Mirror, String> {
        if size > isize::MAX as usize {
            return Err(tag!("Creating mirror too large: {} > {}", size, isize::MAX));
        }
        let s = if mmapped {
            // TODO: instead of mmapping when foot provides a huge (512MB) buffer,
            // consider a 'sparse mirror' construction that only provides a mirror for
            // memory regions as needed, and adjust diff logic to not cross underlying
            // boundaries. This could help mitigate memory map exhaustion attacks?
            let addr: *mut libc::c_void = if size > 0 {
                unsafe {
                    // SAFETY: size is >0
                    do_mmap(size)?
                }
            } else {
                std::ptr::null_mut()
            };
            MirrorState {
                data: MirrorBacking::Mmap(addr as *mut u8),
                region_size: size,
                ranges: BTreeSet::new(),
            }
        } else {
            MirrorState {
                data: MirrorBacking::Alloc(AlignedArray::new(size)),
                region_size: size,
                ranges: BTreeSet::new(),
            }
        };
        Ok(Mirror {
            data: Mutex::new(s),
        })
    }

    /* Get a subrange of the vector, assuming it is available */
    pub fn get_mut_range<'a>(&'a self, span: Range<usize>) -> Option<MirrorRange<'a>> {
        if span.end <= span.start {
            return None;
        }
        let x = (span.start, span.end);
        let mut guard = self.data.lock().unwrap();
        // todo: faster search, using e.g. experimental BTreeSet::lower_bound()
        // or existing BTreeSet::intersection() with custom secretly partial Ord
        for sp in &guard.ranges {
            if nonempty_range_overlap(sp, &x) {
                /* part of range is already accounted for */
                return None;
            }
        }
        /* Check bounds before reserving the span; reserving first would leak the
         * entry on the early return, since only MirrorRange::drop removes it */
        if x.1 > guard.region_size {
            return None;
        }
        guard.ranges.insert(x);
        let len = x.1 - x.0;
        let start: isize = x.0.try_into().unwrap();
        /* SAFETY: similar logic in both cases. u8 has no alignment requirement;
         * allocated size is > 0 because 0 <= x.0 < x.1 <= guard.region_size,
         * and thus base pointer is not null. Have checked above that no
         * overlapping range to the given memory region has been created, and
         * these are only released at the drop of the corresponding MirrorRange.
         *
         * len is <= isize::MAX because len <= guard.region_size, which was
         * checked <= isize::MAX at construction and extend.
* * Memory was originally initialized to zero by mmap, mremap, or AlignedArray. */ match guard.data { MirrorBacking::Mmap(ref mut p) => { unsafe { /* SAFETY: see above; p+start is in bounds of the array allocation */ let s: &mut [u8] = std::slice::from_raw_parts_mut(p.offset(start), len); Some(MirrorRange { mirror: self, span: x, data: s, }) } } MirrorBacking::Alloc(ref mut v) => { unsafe { /* SAFETY: see above; p+start is in bounds of the array allocation */ let (p, size) = v.get_parts(); assert!(start >= 0 && (start as usize).saturating_add(len) <= size); let s: &mut [u8] = std::slice::from_raw_parts_mut(p.offset(start), len); Some(MirrorRange { mirror: self, span: x, data: s, }) } } } } /* Increase the size of the mirror; panics if any range is being accessed */ pub fn extend(&mut self, new_size: usize) -> Result<(), String> { if new_size > isize::MAX as usize { return Err(tag!( "Extending mirror too large: {} >= {}", new_size, isize::MAX )); } let mut guard = self.data.lock().unwrap(); let old_size = guard.region_size; /* Resizing the vector invalidates all references; check there are none */ assert!(guard.ranges.is_empty()); assert!( old_size <= new_size, "region_size = {} <= new_size = {}", old_size, new_size ); if new_size == old_size { return Ok(()); // no change } assert!(new_size > old_size); match guard.data { MirrorBacking::Mmap(ref mut p) => { let new_addr = unsafe { if old_size == 0 { // SAFETY: new_size is > 0 do_mmap(new_size)? } else { // SAFETY: have checked new_size > old_size do_mremap(*p as *mut libc::c_void, old_size, new_size)? } }; *p = new_addr as *mut u8; } MirrorBacking::Alloc(ref mut v) => { let mut new = AlignedArray::new(new_size); new.get_mut()[..v.get().len()].copy_from_slice(v.get()); *v = new; } } guard.region_size = new_size; Ok(()) } pub fn len(&self) -> usize { self.data.lock().unwrap().region_size } } #[cfg(test)] use std::sync::Arc; #[test] fn test_mirror_type() { for use_mmap in &[false, true] { let m: Arc = Arc::new(Mirror::new(1024, *use_mmap).unwrap()); let m1 = m.clone(); let m2 = m.clone(); let j1 = std::thread::spawn(move || { let x = m1.get_mut_range(0..20).unwrap(); x.data[0] = 1; }); let j2 = std::thread::spawn(move || { let x = m2.get_mut_range(20..100).unwrap(); x.data[0] = 1; }); j1.join().unwrap(); j2.join().unwrap(); let mut y = Arc::into_inner(m).unwrap(); y.extend(2048).unwrap(); let a = y.get_mut_range(0..10).unwrap(); let b = y.get_mut_range(10..1500).unwrap(); let c = y.get_mut_range(15..200); let d = y.get_mut_range(1600..5000); assert!(c.is_none()); assert!(a.data[0] == 1); assert!(b.data[10] == 1); assert!(d.is_none()); // todo: more tests, including with threads } } waypipe-v0.10.5/src/platform.rs000066400000000000000000000056021507071711100164250ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-3.0-or-later */ /*! Platform-specific code. * * For platforms where Rust's `struct stat` is inaccurate, the key parameters can be * acquired using the following C script, to be compiled using `clang -std=c11`. 
* * ``` * #include * #include * #include * #include // for dev_t * #include // for struct stat * * int main(int argc, char **argv) { * (void)argc; * (void)argv; * printf("sizeof(dev_t) %zu\n", sizeof(dev_t)); * printf("alignof(dev_t) %zu\n", alignof(dev_t)); * printf("sizeof(struct stat) %zu\n", sizeof(struct stat)); * printf("alignof(struct stat) %zu\n", alignof(struct stat)); * printf("offsetof(struct stat, st_rdev) %zu\n", offsetof(struct stat, st_rdev)); * } * ``` */ use std::path::Path; #[cfg(target_os = "freebsd")] pub const SIZEOF_DEV_T: usize = 8; #[cfg(not(target_os = "freebsd"))] pub const SIZEOF_DEV_T: usize = std::mem::size_of::(); #[cfg(target_os = "freebsd")] fn get_rdev_for_file_freebsd(path: &Path) -> Option { use core::ffi::c_char; use core::ffi::c_int; use std::ffi::CStr; use std::os::unix::ffi::OsStrExt; const SIZEOF_FREEBSD_STAT: usize = 224; const OFFSETOF_FREEBSD_ST_RDEV: usize = 40; #[repr(C, align(8))] struct FreeBSDStat([u8; SIZEOF_FREEBSD_STAT]); unsafe extern "C" { fn stat(path: *const c_char, buf: *mut FreeBSDStat) -> c_int; } let mut path_bytes = Vec::from(path.as_os_str().as_bytes()); path_bytes.push(0); let path_str = CStr::from_bytes_with_nul(&path_bytes).unwrap(); let mut output: FreeBSDStat = FreeBSDStat([0; SIZEOF_FREEBSD_STAT]); let ret: i32 = unsafe { /* SAFETY: requires FreeBSD >= 12.0 for the definition of the function * and struct stat to be correct. `stat()` does not hold onto references * to its input so there is no lifetime issue. */ stat(path_str.as_ptr(), &mut output) }; if ret != 0 { return None; } let st_rdev_bytes = &output.0[OFFSETOF_FREEBSD_ST_RDEV..OFFSETOF_FREEBSD_ST_RDEV + SIZEOF_DEV_T]; let st_rdev = u64::from_ne_bytes(st_rdev_bytes.try_into().unwrap()); Some(st_rdev) } #[cfg(not(target_os = "freebsd"))] fn get_rdev_for_file_base(path: &Path) -> Option { use nix::sys::stat; let result = stat::stat(path).ok()?; /* st_rdev size varies by platform and may be 4 on old * architectures, but is typically 8 and always <= 8 */ #[allow(clippy::useless_conversion)] Some(result.st_rdev.into()) } /** Get the (`st_rdev`) device id for the special file at `path`, cast to a u64. */ pub fn get_rdev_for_file(path: &Path) -> Option { #[cfg(target_os = "freebsd")] { get_rdev_for_file_freebsd(path) } #[cfg(not(target_os = "freebsd"))] { get_rdev_for_file_base(path) } } waypipe-v0.10.5/src/read.rs000066400000000000000000000573521507071711100155250ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-3.0-or-later */ /*! Structures to read messages from a socket and keep references to them */ use crate::tag; use crate::util::*; use log::debug; use nix::errno::Errno; use nix::libc; use std::ffi::c_void; use std::os::fd::AsRawFd; use std::{collections::VecDeque, os::fd::OwnedFd, sync::Arc}; /** A region of memory filled by a ReadBuffer and referred to by some * number of ReadBufferView objects. */ struct ReadBufferChunk { data: *mut u8, size: usize, } // SAFETY: Multi-thread access is handled through ReadBufferView, of which only one is created // for any given byte in data. unsafe impl Sync for ReadBufferChunk {} // SAFETY: It is safe to move and drop ReadBufferChunk in a different thread as dealloc is thread-safe // (also because Arc is used, drop will only happen when there are no references.) unsafe impl Send for ReadBufferChunk {} /** A data structure to allow readv-ing a sequence of large packets, which can be * loaned to a different thread for some variable time before the space can be * reused. 
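 *
 * A minimal usage sketch (`chan_fd` and `handle_msg` are illustrative names,
 * not part of this module):
 *
 * ```ignore
 * let mut rb = ReadBuffer::new();
 * loop {
 *     // Pull whatever is available from the socket
 *     let eof = rb.read_more(&chan_fd)?;
 *     // Drain any complete messages that were assembled
 *     while let Some(mut view) = rb.pop_next_msg() {
 *         handle_msg(view.get_mut());
 *     }
 *     if eof { break; }
 * }
 * ```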
* * To limit the number of allocations and read operations needed, multiple packets may * share the same backing storage, which will only be freed when all packets are dropped. */ pub struct ReadBuffer { /* Buffer with one last message in it to complete; if present, last_msg_start/end refer * to _this_ buffer */ old: Option>, /* current.capacity() is the available space in the vector */ current: Arc, // span of the current partial message; everything _before_ this span has been loaned out // everything _after_ this span is uninitialized. last_msg_start: usize, /* Start of the in progress message/next message to be completed */ end: usize, msg_queue: VecDeque, } impl ReadBufferChunk { fn new(len: usize) -> ReadBufferChunk { // 4 alignment in theory allows for faster u32 reads when applying diff // (but only for uncompressed inputs), and may make Wayland parsing faster let layout = std::alloc::Layout::from_size_align(len, 4).unwrap(); assert!(len > 0); unsafe { // SAFETY: have checked size is > 0 let data = std::alloc::alloc(layout); assert!(!data.is_null()); ReadBufferChunk { data, size: len } } } } impl Drop for ReadBufferChunk { fn drop(&mut self) { let layout = std::alloc::Layout::from_size_align(self.size, 4).unwrap(); unsafe { // SAFETY: deallocating pointer only constructed through alloc with same layout std::alloc::dealloc(self.data, layout); } } } /** A reference to a single message from a ReadBuffer */ pub struct ReadBufferView { // Keep the chunk alive _base: Arc, // alternatively: panic on drop, but let the main object consume these data: *mut u8, data_len: usize, } // SAFETY: Safe to move between threads, allocation in `.data` has no thread-specific ties. // Furthermore, `._base` is Send+Sync (if Rc were used, moving to another thread could break it) unsafe impl Send for ReadBufferView {} // SAFETY: Safe to share between threads -- &/&mut self control data access unsafe impl Sync for ReadBufferView {} /** The maximum size expected for a typical message. Messages larger than this size may * require copying memory around to ensure an contiguous allocation. */ const MAX_NORMAL_MSG_SIZE: usize = 1 << 18; impl ReadBuffer { /** Create a new ReadBuffer */ pub fn new() -> Self { let chunksize = 4 * MAX_NORMAL_MSG_SIZE; ReadBuffer { old: None, current: Arc::new(ReadBufferChunk::new(chunksize)), last_msg_start: 0, end: 0, msg_queue: VecDeque::new(), } } /* Caller is responsible for verifying base[start..start+4] is available, initialized, constant. */ unsafe fn get_message_padded_len(base: *const u8, start: usize) -> Result { // SAFETY: caller responsibility that start is inbounds and < isize::MAX let header_start = base.add(start); // SAFETY: *u8 is always aligned; caller responsibility that header_start // is in bounds, that base[start..start+4] is initialized, and nothing mutates // it. Slice length 4 < isize::MAX. let header_slice = std::ptr::slice_from_raw_parts(header_start, 4); let header = u32::from_le_bytes((&*header_slice).try_into().unwrap()); let (msg_len, _typ) = parse_wmsg_header(header) .ok_or_else(|| tag!("Failed to parse wmsg header: {}", header))?; if msg_len < 4 { return Err(tag!("Message lengths must be at least 4, not {}", msg_len)); } Ok(align4(msg_len)) } /* bool is true iff EOF; usize = number of bytes read, may be zero if EINTR. 
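 * (A peer ECONNRESET is likewise reported as EOF; see the match below.)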
* Caller must ensure iovecs indicate valid *mut pointers to regions of u8 data */ unsafe fn read_inner(iovs: &[libc::iovec], src_fd: &OwnedFd) -> Result<(bool, usize), String> { assert!(iovs.iter().map(|x| x.iov_len).sum::() > 0); /* Cannot use uio::readv wrapper here, because it takes IoSliceMut which take * &mut [u8] which require "initialized" memory. Rust also currently has no * way to make memory as initialized, so to avoid UB, use raw libc and pointers * to also handle the first-write scenario. */ // SAFETY: caller ensures iovs are valid and exclusively owned intervals; there // is no alignment requirement. Also, iovs.len() (typically 1 or 2) is < i32::MAX. #[cfg(not(miri))] let ret = libc::readv(src_fd.as_raw_fd(), iovs.as_ptr(), iovs.len() as _); #[cfg(miri)] let ret = test_readv(src_fd.as_raw_fd(), iovs.as_ptr(), iovs.len() as _); if ret < 0 { return match Errno::last() { Errno::ECONNRESET => Ok((true, 0)), Errno::EINTR | Errno::EAGAIN => Ok((false, 0)), x => Err(tag!("Error reading from channel socket: {:?}", x)), }; } debug!("Read {} bytes from channel", ret); Ok((ret == 0, ret.try_into().unwrap())) } /* Copy the last incomplete message into a new buffer */ fn move_tail_to_new(&mut self, new_size: usize) { let base = self.current.data; let nxt = Arc::new(ReadBufferChunk::new(new_size)); /* Copy data over and adjust */ assert!(new_size >= self.end - self.last_msg_start); unsafe { let c_dst = nxt.data; // SAFETY: inside allocation, because last_msg_start < old.size < isize::MAX let c_src = base.add(self.last_msg_start); // SAFETY: c_src/c_dst are from different allocations because self.current/ // self.old never refer to the same chunk. u8 pointers are always aligned. // regions are valid, because self.end stays in [0,base.size]: // (c_src + self.end - self.last_msg_start) = base + self.end <= base.size // and have asserted (c_dst + self.end - self.last_msg_start) <= nxt.size std::ptr::copy_nonoverlapping(c_src, c_dst, self.end - self.last_msg_start); } self.current = nxt; self.end -= self.last_msg_start; self.last_msg_start = 0; } fn extract_messages(&mut self) -> Result<(), String> { let cur_ptr = self.current.data; while self.end - self.last_msg_start >= 4 { let msg_len = unsafe { // SAFETY: everything in cur_ptr[..self.end] is initialized, // last_msg_start + 4 < self.end, and self.end < current.size <= isize::MAX Self::get_message_padded_len(cur_ptr, self.last_msg_start)? }; if self.end - self.last_msg_start >= msg_len { /* Message is complete, queue it */ let ptr = unsafe { // SAFETY: cur_ptr + self.last_msg_start is inside allocation of // length current.size, which is > self.end > self.last_msg_start cur_ptr.add(self.last_msg_start) }; self.msg_queue.push_back(ReadBufferView { _base: self.current.clone(), data: ptr, data_len: msg_len, }); self.last_msg_start += msg_len; } else { /* Incomplete message, stop scanning */ break; } } Ok(()) } fn read_with_old(&mut self, src_fd: &OwnedFd) -> Result { let old = self.old.as_ref().unwrap(); assert!(self.end - self.last_msg_start >= 4); let msg_len = unsafe { // SAFETY: everything in old.data[..self.end] is initialized, // have asserted last_msg_start + 4 < self.end, // and self.end < current.size <= isize::MAX Self::get_message_padded_len(old.data, self.last_msg_start)? 
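            // note: get_message_padded_len already rounds the length up to 4-byte alignment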
}; let msg_end = self.last_msg_start + msg_len; let (eof, mut nread) = unsafe { // SAFETY: self.end < old.size, so the pointer addition remains in bounds let iovs = [ libc::iovec { iov_base: old.data.add(self.end) as *mut c_void, iov_len: msg_end - self.end, }, libc::iovec { iov_base: self.current.data as *mut c_void, iov_len: self.current.size - MAX_NORMAL_MSG_SIZE, }, ]; // SAFETY: both intervals are contained in existing allocations // (The first ends at old + msg_end < old + last_msg_start + (last message len) // which read_more() ensures is < old.data + old.size; the second ends well // before current.data + current.size since MAX_NORMAL_MSG_SIZE >= 0.) // They are also not used anywhere else -- data after old.data + last_msg_start // has not yet been shared through a ReadBufferView, and nothing in self.current // was shared yet. Self::read_inner(&iovs, src_fd)? }; if nread < msg_end - self.end { /* Did not complete reading current message */ self.end += nread; return Ok(eof); } nread -= msg_end - self.end; let ptr = unsafe { // SAFETY: old.data + self.last_msg_start is inside allocation of length old.size old.data.add(self.last_msg_start) }; let mut tmp = None; std::mem::swap(&mut self.old, &mut tmp); self.msg_queue.push_back(ReadBufferView { _base: tmp.unwrap(), data: ptr, data_len: msg_len, }); self.last_msg_start = 0; self.end = nread; self.extract_messages()?; Ok(eof) } /** Try to read more input. Returns `true` iff source has no more data. * * Use `pop_next_msg()` to get the complete messages read, if there are any. */ pub fn read_more(&mut self, src_fd: &OwnedFd) -> Result { if self.old.is_some() { return self.read_with_old(src_fd); } if self.end - self.last_msg_start >= 4 { let chunk: &ReadBufferChunk = &self.current; let cap = chunk.size; /* Message length is known */ let msg_len = unsafe { // SAFETY: everything in chunk.data[..self.end] is initialized, // this branch requires last_msg_start + 4 < self.end, // and self.end < current.size <= isize::MAX Self::get_message_padded_len(chunk.data, self.last_msg_start)? }; let msg_end = self.last_msg_start + msg_len; if msg_end > cap { /* Not enough space: must move message and create a new chunk for it */ debug!( "oversized message, length {}, end {} is > capacity {}", msg_len, msg_end, cap ); let new_size = std::cmp::max(2 * msg_len, 4 * MAX_NORMAL_MSG_SIZE); assert!(new_size > msg_len + MAX_NORMAL_MSG_SIZE); self.move_tail_to_new(new_size); } else if cap - msg_end <= MAX_NORMAL_MSG_SIZE { /* Complete this message, and start a new chunk */ let new_size = 4 * MAX_NORMAL_MSG_SIZE; let mut nxt = Arc::new(ReadBufferChunk::new(new_size)); std::mem::swap(&mut self.current, &mut nxt); self.old = Some(nxt); return self.read_with_old(src_fd); } } else { let chunk: &ReadBufferChunk = &self.current; let cap = chunk.size; /* No message or incomplete header */ let msg_end = self.last_msg_start; assert!(msg_end <= cap); if cap - msg_end <= MAX_NORMAL_MSG_SIZE { /* Not enough space for a long message; start a new chunk (and move any partial header) */ /* Not enough space: must move message and create a new chunk for it */ debug!( "partial header move, {} {}", msg_end, cap - MAX_NORMAL_MSG_SIZE, ); let new_size = 4 * MAX_NORMAL_MSG_SIZE; self.move_tail_to_new(new_size); } } /* Try to read up to (cap - MAX_NORMAL_MSG_SIZE), which may be small; the exact message * breaking point can only be determined after reading. 
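         * Reserving MAX_NORMAL_MSG_SIZE of slack keeps a read from overcommitting
         * this chunk: only oversized messages (or a few partial-header bytes)
         * ever need to be copied into a fresh chunk.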
*/ let chunk: &ReadBufferChunk = &self.current; assert!(chunk.size - self.end >= MAX_NORMAL_MSG_SIZE); let (eof, nread) = unsafe { // SAFETY: at this point, have self.end < chunk.size, so iov_base is inside // chunk's allocation let iovs = [libc::iovec { iov_base: chunk.data.add(self.end) as *mut c_void, iov_len: chunk.size - MAX_NORMAL_MSG_SIZE - self.end, }]; // SAFETY: Have asserted chunk.size - self.end >= MAX_NORMAL_MSG_SIZE, // so iov_len does not overflow. Since MAX_NORMAL_MSG_SIZE >= 0, the // interval ends at or before chunk.data + chunk.size and is part of chunk's allocation // Exclusive access, because data past current.data + self.end has not // been shared via ReadBufferView Self::read_inner(&iovs, src_fd)? }; self.end += nread; self.extract_messages()?; Ok(eof) } /** If there is a complete message available, extract and return it */ pub fn pop_next_msg(&mut self) -> Option { self.msg_queue.pop_front() } } impl ReadBufferView { /** Get an &mut-reference to the data of this buffer */ pub fn get_mut(&mut self) -> &mut [u8] { unsafe { // SAFETY: ReadBufferView has exclusive access to the portion // self.data[..self.data_len] of the underlying ReadBufferChunk, // which is kept alive as long as self by self._base. // Data was initialized by ReadBuffer. u8 pointers are always aligned. // &mut self delegates exclusivity from ReadBufferView let dst = std::ptr::slice_from_raw_parts_mut(self.data, self.data_len); &mut *dst } } /** Get an &-reference to the data of this buffer */ pub fn get(&self) -> &[u8] { unsafe { // SAFETY: ReadBufferView has exclusive access to the portion // self.data[..self.data_len] of the underlying ReadBufferChunk, // which is kept alive as long as self by self._base. // Data was initialized by ReadBuffer. u8 pointers are always aligned. // Mutable access only provided through &mut self which conflicts with &self let dst = std::ptr::slice_from_raw_parts_mut(self.data, self.data_len); &mut *dst } } /** Move the start of the buffer forward by the indicated amount * * This will panic if skip is larger than the remaining buffer size. */ pub fn advance(&mut self, skip: usize) { assert!(skip % 4 == 0); // preserve alignment assert!(skip <= self.data_len, "{} isize { let mut nread: isize = 0; let mut first = true; for i in 0..(len as isize) { let iov = *iovs.offset(i); if iov.iov_len == 0 { continue; } let r = libc::read(fd, iov.iov_base, iov.iov_len); if r == -1 { if !first { return nread; } else { return -1; } } nread = nread.checked_add(r).unwrap(); first = false; } nread } #[test] fn test_read_buffer() { use nix::fcntl; use nix::poll; use nix::unistd; use std::os::fd::AsFd; use std::time::Instant; let mut rb = ReadBuffer::new(); let (pipe_r, pipe_w) = unistd::pipe2(fcntl::OFlag::O_CLOEXEC | fcntl::OFlag::O_NONBLOCK).unwrap(); #[cfg(not(miri))] fn read_all(rb: &mut ReadBuffer, fd: &OwnedFd) { loop { let mut p = [poll::PollFd::new(fd.as_fd(), poll::PollFlags::POLLIN)]; let r = poll::poll(&mut p, poll::PollTimeout::ZERO); match r { Err(Errno::EINTR) => { continue; } Err(Errno::EAGAIN) => { break; } Err(x) => panic!("{:?}", x), Ok(_) => { let rev = p[0].revents().unwrap(); if rev.contains(poll::PollFlags::POLLIN) { rb.read_more(fd).unwrap(); } else if rev.intersects(poll::PollFlags::POLLHUP | poll::PollFlags::POLLERR) { panic!(); } else { return; } } } } } #[cfg(miri)] fn read_all(rb: &mut ReadBuffer, fd: &OwnedFd) { // Calling `read_more` multiple times will consume the entire pipe // buffer, as long as it is not too large. 
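        // (20 is an arbitrary iteration bound, comfortably above what this test's payloads need)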
for i in 0..20 { rb.read_more(fd).unwrap(); } } let start = Instant::now(); println!( "Many small messages, immediately dequeued: {}", Instant::now().duration_since(start).as_secs_f32() ); { for i in 0..(MAX_NORMAL_MSG_SIZE / 2) { let mut small_msg = [0u8; 16]; small_msg[..4] .copy_from_slice(&build_wmsg_header(WmsgType::AckNblocks, 16).to_le_bytes()); small_msg[4..8].copy_from_slice(&(i as u32).to_le_bytes()); unistd::write(&pipe_w, &small_msg).unwrap(); read_all(&mut rb, &pipe_r); let nxt = rb.pop_next_msg().unwrap(); assert!(nxt.get() == small_msg); } } println!( "Long input with many small chunks: {}", Instant::now().duration_since(start).as_secs_f32() ); { let mut long_fragmented_input = Vec::::new(); for i in 0..(2 * MAX_NORMAL_MSG_SIZE) { let mut small_msg = [0u8; 8]; /* actual length of 7 pads to 8 */ small_msg[..4] .copy_from_slice(&build_wmsg_header(WmsgType::AckNblocks, 7).to_le_bytes()); small_msg[4..].copy_from_slice(&(i as u32).to_le_bytes()); long_fragmented_input.extend_from_slice(&small_msg); } let mut x = 0; while x < long_fragmented_input.len() { let y = unistd::write(&pipe_w, &long_fragmented_input[x..]).unwrap(); x += y; assert!(y > 0); rb.read_more(&pipe_r).unwrap(); } /* Read remainder of data in case writing outpaced reads */ read_all(&mut rb, &pipe_r); for i in 0..(2 * MAX_NORMAL_MSG_SIZE) { let nxt = rb.pop_next_msg().unwrap(); let val = u32::from_le_bytes(nxt.get()[4..].try_into().unwrap()); assert!(val == i as u32); } } println!( "Very long input, needs oversize buffer: {}", Instant::now().duration_since(start).as_secs_f32() ); { let mut ultra_long_input = Vec::::new(); let len = 10 * MAX_NORMAL_MSG_SIZE; ultra_long_input.resize(len, 0); ultra_long_input[..4] .copy_from_slice(&build_wmsg_header(WmsgType::Protocol, len).to_le_bytes()); let mut x = 0; while x < ultra_long_input.len() { let y = unistd::write(&pipe_w, &ultra_long_input[x..]).unwrap(); x += y; assert!(y > 0); rb.read_more(&pipe_r).unwrap(); } read_all(&mut rb, &pipe_r); assert!(rb.pop_next_msg().unwrap().get_mut().len() == len); } /* This has the side effect of clearing out the overcapacity buffer */ println!( "Many long chunks: {}", Instant::now().duration_since(start).as_secs_f32() ); { let mut long_block_input = Vec::::new(); let mut long_msg = vec![0; align4(MAX_NORMAL_MSG_SIZE)]; long_msg[..4].copy_from_slice( &build_wmsg_header(WmsgType::AckNblocks, MAX_NORMAL_MSG_SIZE).to_le_bytes(), ); for _ in 0..20 { long_block_input.extend_from_slice(&long_msg); } let mut x = 0; while x < long_block_input.len() { let y = unistd::write(&pipe_w, &long_block_input[x..]).unwrap(); x += y; assert!(y > 0); rb.read_more(&pipe_r).unwrap(); } read_all(&mut rb, &pipe_r); let mut concat = Vec::::new(); while concat.len() < long_block_input.len() { concat.extend_from_slice(rb.pop_next_msg().unwrap().get()); } assert!(concat == long_block_input); } println!( "Mixture of lengths, initially sent byte-by-byte: {}", Instant::now().duration_since(start).as_secs_f32() ); { let mut long_mixed_input = Vec::::new(); let mut i = 0; let zero_slice = &[0; 1004]; while long_mixed_input.len() < 8 * MAX_NORMAL_MSG_SIZE { let length = 4 + i % 1000; i += 1; long_mixed_input .extend_from_slice(&build_wmsg_header(WmsgType::AckNblocks, length).to_le_bytes()); long_mixed_input.extend_from_slice(&zero_slice[..align4(length - 4)]); } let mut x = 0; while x < long_mixed_input.len() { let step = if x < 10000 { 1 } else { 100 }; let y = unistd::write( &pipe_w, &long_mixed_input[x..std::cmp::min(x + step, long_mixed_input.len())], ) .unwrap(); x 
+= y; assert!(y > 0); rb.read_more(&pipe_r).unwrap(); } read_all(&mut rb, &pipe_r); let mut concat = Vec::::new(); while concat.len() < long_mixed_input.len() { let mut nxt = rb.pop_next_msg().unwrap(); concat.extend_from_slice(&nxt.get()[..4]); nxt.advance(4); concat.extend_from_slice(nxt.get()); } assert!(concat == long_mixed_input); } println!( "Done: {}", Instant::now().duration_since(start).as_secs_f32() ); } waypipe-v0.10.5/src/secctx.rs000066400000000000000000000141631507071711100160740ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-3.0-or-later */ /*! Security context setup communication with the Wayland compositor */ use crate::tag; use crate::util::*; use crate::wayland::*; use crate::wayland_gen::*; use log::debug; use nix::sys::socket; use std::io::{Cursor, IoSlice, Write}; use std::os::fd::{AsRawFd, OwnedFd}; fn read_event(connection: &OwnedFd) -> Result, String> { let mut msg = vec![0; 8]; read_exact(connection, &mut msg) .map_err(|x| tag!("Reading from compositor failed: {:?}", x))?; let (_object_id, length, _opcode) = parse_wl_header(&msg); msg.resize(length, 0); read_exact(connection, &mut msg[8..]) .map_err(|x| tag!("Reading from compositor failed: {:?}", x))?; Ok(msg) } /** * Set up a security context. * * `connection` is a blocking socket connecting to the compositor. */ pub fn provide_secctx( connection: OwnedFd, app_id: &str, listen_fd: OwnedFd, close_fd: OwnedFd, ) -> Result<(), String> { let mut tmp = [0_u8; 64]; let tmp_len = tmp.len(); let mut dst = &mut tmp[..]; let (display, registry, callback, manager, context, callback2) = (ObjId(1), ObjId(2), ObjId(3), ObjId(4), ObjId(5), ObjId(6)); write_req_wl_display_get_registry(&mut dst, display, registry); write_req_wl_display_sync(&mut dst, display, callback); let msg_len = tmp_len - dst.len(); let msg = &tmp[..msg_len]; debug!("Requesting compositor globals"); write_exact(&connection, msg).map_err(|x| tag!("Failed to write to compositor: {:?}", x))?; let secctx_name: u32 = loop { let msg = read_event(&connection)?; let (object_id, _length, opcode) = parse_wl_header(&msg); if object_id == callback { /* wl_callback; only event is 'done' */ return Err(tag!( "Compositor did not provide wp_security_context_manager_v1 global" )); } else if object_id == registry { if opcode != OPCODE_WL_REGISTRY_GLOBAL.code() { // global remove should not happen in first roundtrip for reasonable compositors debug!("Unexpected event from registry {}: {}", object_id, opcode); continue; } let (name, interface, version) = parse_evt_wl_registry_global(&msg[..])?; if interface != WP_SECURITY_CONTEXT_MANAGER_V1 { continue; } assert!(version >= 1); break name; } else { debug!("Unexpected event from object {}: {}", object_id, opcode); } }; let mut tmp = [0_u8; 512]; let tmp_len = tmp.len(); let mut dst = &mut tmp[..]; write_req_wl_registry_bind( &mut dst, registry, secctx_name, WP_SECURITY_CONTEXT_MANAGER_V1, 1, manager, ); write_req_wp_security_context_manager_v1_create_listener(&mut dst, manager, false, context); write_req_wp_security_context_v1_set_app_id(&mut dst, context, app_id.as_bytes()); /* Set the instance id to indicate the root process */ let pid = std::process::id(); let mut pid_str = [0u8; 10]; let mut pid_cursor = Cursor::new(&mut pid_str[..]); write!(pid_cursor, "{}", pid).unwrap(); let pid_len = pid_cursor.position() as usize; write_req_wp_security_context_v1_set_instance_id(&mut dst, context, &pid_str[..pid_len]); write_req_wp_security_context_v1_set_sandbox_engine(&mut dst, context, "waypipe".as_bytes()); 
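    /* Committing activates the context; once processed, the compositor accepts
     * clients on listen_fd until close_fd is hung up */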
write_req_wp_security_context_v1_commit(&mut dst, context); write_req_wl_display_sync(&mut dst, display, callback2); let msg_len = tmp_len - dst.len(); let iovs = [IoSlice::new(&tmp[..msg_len])]; let fds = [listen_fd.as_raw_fd(), close_fd.as_raw_fd()]; let cmsgs = [nix::sys::socket::ControlMessage::ScmRights(&fds)]; debug!( "Setting security context with app_id: {}, instance id {}", app_id, std::str::from_utf8(&pid_str[..pid_len]).unwrap() ); loop { match socket::sendmsg::<()>( connection.as_raw_fd(), &iovs, &cmsgs, nix::sys::socket::MsgFlags::empty(), None, ) { Ok(s) => { assert!(s == msg_len); break; } Err(nix::errno::Errno::EINTR) => { continue; } Err(x) => { return Err(tag!("Failed to write to compositor: {:?}", x)); } } } drop(close_fd); drop(listen_fd); /* Wait for callback2 to return. Technically this should not be necessary, since * in the event of failure later connections will break; but even libwayland 1.23 * has the misbehavior of dropping trailing messages when the connection closes, * so a roundtrip is necessary to verify arrival. */ loop { let msg = read_event(&connection)?; let (object_id, _length, opcode) = parse_wl_header(&msg); let opcode = MethodId::Event(opcode); if object_id == display { if opcode == OPCODE_WL_DISPLAY_DELETE_ID { continue; } else if opcode == OPCODE_WL_DISPLAY_ERROR { let (objid, code, errmsg) = parse_evt_wl_display_error(&msg)?; return Err(tag!( "Failed to set security context: Wayland error on {}, {}: {}", objid, code, EscapeAsciiPrintable(errmsg) )); } else { debug!("Unexpected event from object {}: {}", object_id, opcode); } } else if object_id == callback2 { if opcode == OPCODE_WL_CALLBACK_DONE { break; } else { debug!("Unexpected event from object {}: {}", object_id, opcode); } } else if object_id == registry || object_id == callback { // ignore continue; } else { debug!("Unexpected event from object {}: {}", object_id, opcode); } } /* Connection is safe to drop: compositor will keep listening until close_fd is hung up on */ drop(connection); Ok(()) } waypipe-v0.10.5/src/stub.rs000066400000000000000000000247671507071711100155730ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-3.0-or-later */ /*! 
No-op implementations to build with specific features disabled */ #![allow(unused_variables)] #[cfg(not(feature = "dmabuf"))] mod dmabuf_stub { use crate::util::AddDmabufPlane; use std::os::fd::{BorrowedFd, OwnedFd}; use std::path::PathBuf; use std::sync::Arc; use crate::VideoSetting; pub struct VulkanInstance(()); pub struct VulkanDevice(()); pub struct VulkanCommandPool { pub vulk: Arc, } pub struct VulkanSyncFile(()); pub struct VulkanBinarySemaphore(()); pub struct VulkanTimelineSemaphore(()); pub struct VulkanCopyHandle(()); pub struct VulkanDmabuf { pub vulk: Arc, pub width: u32, pub height: u32, pub main_fd: OwnedFd, } pub struct VulkanBuffer(()); pub struct VulkanBufferReadView<'a> { pub data: &'a [u8], } pub struct VulkanBufferWriteView<'a> { pub data: &'a mut [u8], } pub fn setup_vulkan_instance( debug: bool, video: &VideoSetting, test_no_timeline_export: bool, test_no_binary_import: bool, ) -> Result>, String> { unreachable!(); } pub fn setup_vulkan_device_base( instance: &Arc, main_device: Option, format_filter_for_video: bool, ) -> Result, String> { unreachable!(); } pub fn setup_vulkan_device( instance: &Arc, main_device: Option, video: &VideoSetting, debug: bool, ) -> Result>, String> { unreachable!(); } pub fn start_copy_segments_from_dmabuf( img: &Arc, copy: &Arc, pool: &Arc, segments: &[(u32, u32, u32)], view_row_length: Option, wait_semaphores: &[(Arc, u64)], wait_binary_semaphores: &[VulkanBinarySemaphore], ) -> Result { unreachable!(); } pub fn start_copy_segments_onto_dmabuf( img: &Arc, copy: &Arc, pool: &Arc, segments: &[(u32, u32, u32)], view_row_length: Option, wait_semaphores: &[(Arc, u64)], ) -> Result { unreachable!(); } pub fn vulkan_get_cmd_pool( vulk: &Arc, ) -> Result, &'static str> { unreachable!(); } pub fn vulkan_import_timeline( vulk: &Arc, fd: OwnedFd, ) -> Result, String> { unreachable!(); } pub fn vulkan_create_timeline( vulk: &Arc, start_pt: u64, ) -> Result<(Arc, OwnedFd), String> { unreachable!(); } pub fn get_dev_for_drm_node_path(path: &PathBuf) -> Result { unreachable!(); } pub fn vulkan_get_buffer( vulk: &Arc, nom_len: usize, read_optimized: bool, ) -> Result { unreachable!(); } pub fn vulkan_create_dmabuf( vulk: &Arc, width: u32, height: u32, drm_format: u32, modifier_options: &[u64], can_store_and_sample: bool, ) -> Result<(Arc, Vec), String> { unreachable!(); } pub fn vulkan_import_dmabuf( vulk: &Arc, planes: Vec, width: u32, height: u32, drm_format: u32, can_store_and_sample: bool, ) -> Result, String> { unreachable!(); } impl VulkanBuffer { pub fn prepare_read(self: &VulkanBuffer) -> Result<(), &'static str> { unreachable!(); } pub fn complete_write(self: &VulkanBuffer) -> Result<(), &'static str> { unreachable!(); } pub fn get_read_view(self: &VulkanBuffer) -> VulkanBufferReadView { unreachable!(); } pub fn get_write_view(self: &VulkanBuffer) -> VulkanBufferWriteView { unreachable!(); } } impl VulkanCopyHandle { pub fn get_timeline_point(self: &VulkanCopyHandle) -> u64 { unreachable!(); } } impl VulkanInstance { pub fn has_device(&self, main_device: Option) -> bool { unreachable!(); } pub fn device_supports_timeline_import_export(&self, main_device: Option) -> bool { unreachable!(); } } pub struct VulkanImageParameterMismatch(()); impl std::fmt::Display for VulkanImageParameterMismatch { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { unreachable!(); } } impl VulkanDevice { pub fn wait_for_timeline_pt(&self, pt: u64, max_wait: u64) -> Result { unreachable!(); } pub fn get_device(&self) -> u64 { unreachable!(); } pub 
fn get_event_fd(&self, timeline_point: u64) -> Result, String> { unreachable!(); } pub fn get_current_timeline_pt(&self) -> Result { unreachable!(); } pub fn supports_format(&self, drm_format: u32, drm_modifier: u64) -> bool { unreachable!(); } pub fn get_supported_modifiers(&self, drm_format: u32) -> &[u64] { unreachable!(); } pub fn can_import_image( &self, drm_format: u32, width: u32, height: u32, planes: &[AddDmabufPlane], can_store_and_sample: bool, ) -> Result<(), VulkanImageParameterMismatch> { unreachable!(); } pub fn supports_binary_semaphore_import(&self) -> bool { unreachable!(); } pub fn supports_timeline_import_export(&self) -> bool { unreachable!(); } } impl VulkanDmabuf { pub fn nominal_size(self: &VulkanDmabuf, view_row_length: Option) -> usize { unreachable!(); } pub fn get_bpp(&self) -> u32 { unreachable!(); } pub fn export_sync_file(&self) -> Result, String> { unreachable!(); } } impl VulkanSyncFile { pub fn export_binary_semaphore(&self) -> Result { unreachable!(); } } impl VulkanTimelineSemaphore { pub fn get_current_pt(self: &VulkanTimelineSemaphore) -> Result { unreachable!(); } pub fn get_event_fd(self: &VulkanTimelineSemaphore) -> BorrowedFd { unreachable!(); } pub fn link_event_fd( self: &VulkanTimelineSemaphore, timeline_point: u64, ) -> Result { unreachable!(); } pub fn signal_timeline_pt(self: &VulkanTimelineSemaphore, pt: u64) -> Result<(), String> { unreachable!(); } } } #[cfg(not(feature = "dmabuf"))] pub use dmabuf_stub::*; #[cfg(not(feature = "gbmfallback"))] mod gbm_stub { use crate::util::AddDmabufPlane; use std::rc::Rc; pub struct GBMDevice {} pub struct GBMDmabuf { pub width: u32, pub height: u32, } pub fn setup_gbm_device(device: Option) -> Result>, String> { Ok(None) } pub fn gbm_create_dmabuf( device: &Rc, width: u32, height: u32, format: u32, modifier_options: &[u64], ) -> Result<(GBMDmabuf, Vec), String> { unreachable!(); } pub fn gbm_import_dmabuf( device: &Rc, planes: Vec, width: u32, height: u32, drm_format: u32, ) -> Result { unreachable!(); } pub fn gbm_supported_modifiers(device: &Rc, format: u32) -> &[u64] { unreachable!(); } pub fn gbm_get_device_id(device: &Rc) -> u64 { unreachable!(); } impl GBMDmabuf { pub fn copy_from_dmabuf( &mut self, view_row_stride: Option, data: &mut [u8], ) -> Result<(), String> { unreachable!(); } pub fn copy_onto_dmabuf( &mut self, view_row_stride: Option, data: &[u8], ) -> Result<(), String> { unreachable!(); } pub fn nominal_size(&self, view_row_length: Option) -> usize { unreachable!(); } pub fn get_bpp(&self) -> u32 { unreachable!(); } } } #[cfg(not(feature = "gbmfallback"))] pub use gbm_stub::*; #[cfg(not(feature = "video"))] mod video_stub { use std::sync::Arc; #[cfg(feature = "dmabuf")] use crate::dmabuf::*; #[cfg(not(feature = "dmabuf"))] use crate::stub::*; use crate::VideoFormat; pub struct VideoEncodeState(()); pub struct VideoDecodeState(()); pub struct VulkanDecodeOpHandle(()); pub fn start_dmavid_apply( state: &Arc, pool: &Arc, packet: &[u8], ) -> Result { unreachable!(); } pub fn start_dmavid_encode( state: &Arc, pool: &Arc, wait_semaphores: &[(Arc, u64)], wait_binary_semaphores: &[VulkanBinarySemaphore], ) -> Result, String> { unreachable!(); } pub fn setup_video_decode( img: &Arc, fmt: VideoFormat, ) -> Result { unreachable!(); } pub fn setup_video_encode( img: &Arc, fmt: VideoFormat, bpf: Option, ) -> Result { unreachable!(); } pub fn supports_video_format( vulk: &VulkanDevice, fmt: VideoFormat, drm_format: u32, width: u32, height: u32, ) -> bool { false } impl VulkanDecodeOpHandle { pub fn 
get_timeline_point(&self) -> u64 { unreachable!(); } } } #[cfg(not(feature = "video"))] pub use video_stub::*; waypipe-v0.10.5/src/test_proto.rs000066400000000000000000006577741507071711100170320ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-3.0-or-later */ /*! Protocol test runner and framework. * * Emulate a Wayland client/compositor pair and verify that Waypipe properly * forwards messages and file descriptors. * * This is needed to: * - Properly capture debug output from processes under test * - Run Waypipe instances as individual processes instead of threads (to avoid * possible bugs if Vulkan validation layers are used for independent instances) * - Test different versions of Waypipe against each other. * - Break out test variants by the Vulkan physical device being used */ use clap::{value_parser, Arg, ArgAction, Command as ClapCommand}; use nix::libc; use nix::sys::wait::WaitStatus; use nix::sys::{memfd, signal, socket, time, wait}; use nix::{errno::Errno, fcntl, poll, unistd}; use std::collections::{BTreeMap, BTreeSet}; use std::ffi::{OsStr, OsString}; use std::io::{IoSlice, IoSliceMut, Write}; use std::os::fd::{AsFd, AsRawFd, BorrowedFd, FromRawFd, OwnedFd, RawFd}; use std::process::{Child, Command, ExitCode, Stdio}; use std::sync::atomic::AtomicBool; use std::sync::Arc; use std::time::{Duration, Instant}; #[allow(dead_code)] mod dmabuf; #[allow(dead_code)] mod kernel; #[allow(dead_code)] mod platform; #[allow(dead_code)] mod util; #[allow(dead_code)] mod video; /* Only included because it is required by 'video' */ #[allow(dead_code)] mod wayland; mod wayland_gen; #[cfg(feature = "dmabuf")] use dmabuf::*; use kernel::*; use platform::*; use util::*; use wayland::*; use wayland_gen::*; /** Test result codes for which error-type control flow is not needed. */ #[derive(Debug)] enum StatusOk { /** Corresponds to [TestCategory::Pass] */ Pass, /** Corresponds to [TestCategory::Skipped] */ Skipped, } /** Test result codes which, like errors, typically are worth stopping a test over. */ #[derive(Debug)] enum StatusBad { /** Corresponds to [TestCategory::Unclear] */ Unclear(String), /** Corresponds to [TestCategory::Fail] */ Fail(String), } /** Result of a test, organized so that StatusBad can be progated with ? */ type TestResult = Result; /** The result of a test. */ #[derive(Debug)] enum TestCategory { /** Test passed */ Pass, /** Test was skipped. */ Skipped, /** Test failed */ Fail, /** Something was broken, but it may be an issue in a library used by Waypipe */ Unclear, } /** 77 is the automake return code for a skipped test */ const EXITCODE_SKIPPED: u8 = 77; /** 99 is the automake return code for a hard error (failure of test set up, * segfault, failure of an external library, or something else very unexpected) */ const EXITCODE_UNCLEAR: u8 = 99; /** Specifications for a filter of test names. */ struct Filter<'a> { substrings: &'a [&'a str], exact: bool, } /** Basic parameters needed to run a test. */ struct TestInfo<'a> { /** The name of the test. */ test_name: &'a str, /** File to execute (posix_spawnp) for the `waypipe client` connection handling instance. */ waypipe_client: &'a OsStr, /** File to execute (posix_spawnp) for the `waypipe server` connection handling instance. 
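     * (May name a different binary than `waypipe_server`, so that Waypipe
     * versions can be tested against each other.)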
*/ waypipe_server: &'a OsStr, } /** A constant that saves some code when allocating sequentual object ids */ const ID_SEQUENCE: [ObjId; 12] = [ ObjId(1), ObjId(2), ObjId(3), ObjId(4), ObjId(5), ObjId(6), ObjId(7), ObjId(8), ObjId(9), ObjId(10), ObjId(11), ObjId(12), ]; /** Write messages to the Wayland connection, followed by a test message that should directly pass through; * possibly stopping early if the connection is closed. * * Returns: the messages received on the other end, minus the test message; if there was an error details * will be returned */ fn test_interact( prog: &OwnedFd, comp: &OwnedFd, write_to_prog: bool, data: &[u8], fds: &[&OwnedFd], ) -> (Vec>, Vec, Option<(ObjId, u32, String)>) { let mut end_msg = [0_u8; 8]; let mut end_msg_view: &mut [u8] = &mut end_msg; write_header(&mut end_msg_view, ObjId(u32::MAX), 8, 0, 0); let mut nbytes_sent: usize = 0; let mut nfds_sent: usize = 0; let net_len = data.len() + end_msg.len(); let raw_fds: Vec = fds.iter().map(|x| x.as_raw_fd()).collect(); /* Not: libwayland sends up to 28 fds per byte */ const MAX_FDS_PER_BYTE: usize = 32; assert!(data.len() >= fds.len().div_ceil(MAX_FDS_PER_BYTE)); struct ReadState { data: Vec, rmsgs: Vec>, fds: Vec, eof: bool, } let mut recv_prog = ReadState { data: Vec::new(), fds: Vec::new(), rmsgs: Vec::new(), eof: false, }; let mut recv_comp = ReadState { data: Vec::new(), fds: Vec::new(), rmsgs: Vec::new(), eof: false, }; let start = Instant::now(); let timeout = Duration::from_secs(1); let mut err: Option<(ObjId, u32, String)> = None; 'outer: loop { let current = Instant::now(); let elapsed = current.duration_since(start); if elapsed >= timeout { panic!("timeout: {:?}", elapsed); } let remaining = time::TimeSpec::from_duration(timeout.saturating_sub(elapsed)); let mut pfds = Vec::new(); let mut recvs: Vec<&mut ReadState> = Vec::new(); let writing = nbytes_sent < net_len; if !recv_prog.eof { pfds.push(poll::PollFd::new( prog.as_fd(), if writing && write_to_prog { poll::PollFlags::POLLIN | poll::PollFlags::POLLOUT } else { poll::PollFlags::POLLIN }, )); recvs.push(&mut recv_prog); } if !recv_comp.eof { pfds.push(poll::PollFd::new( comp.as_fd(), if writing && !write_to_prog { poll::PollFlags::POLLIN | poll::PollFlags::POLLOUT } else { poll::PollFlags::POLLIN }, )); recvs.push(&mut recv_comp); } /* Connections should not close before either error or end message is received */ assert!( !pfds.is_empty(), "connections should not close before error or end message received" ); let res = nix::poll::ppoll(&mut pfds, Some(remaining), None); if let Err(e) = res { assert!(e == Errno::EINTR || e == Errno::EAGAIN); } for (pfd, recv) in pfds.into_iter().zip(recvs.into_iter()) { let evt = pfd.revents().unwrap(); if evt.contains(poll::PollFlags::POLLIN) { assert!(!recv.eof); let mut tmp = vec![0u8; 16384]; let mut iovs = [IoSliceMut::new(&mut tmp)]; let mut cmsg_fds = nix::cmsg_space!([RawFd; 32]); let r = socket::recvmsg::( pfd.as_fd().as_raw_fd(), &mut iovs, Some(&mut cmsg_fds), socket::MsgFlags::empty(), ); match r { Ok(resp) => { for msg in resp.cmsgs().unwrap() { match msg { socket::ControlMessageOwned::ScmRights(tfds) => { for f in &tfds { assert!(*f != -1); recv.fds.push(unsafe { // SAFETY: fd was just created, checked valid, // and is recorded nowhere else OwnedFd::from_raw_fd(*f) }); } } _ => { panic!("Unexpected control message"); } } } let nbytes = resp.bytes; recv.data.extend_from_slice(&tmp[..nbytes]); recv.eof = nbytes == 0; } Err(nix::errno::Errno::ECONNRESET) => { recv.eof = true; } 
Err(nix::errno::Errno::EINTR) | Err(nix::errno::Errno::EAGAIN) => (), Err(x) => { panic!("Error reading from socket: {:?}", x) } } /* Extract any complete messages from stream */ let mut tail: &[u8] = &recv.data; while tail.len() >= 8 { let (_obj_id, length, _opcode) = parse_wl_header(tail); assert!(length >= 8 && length % 4 == 0); if tail.len() >= length { let (msg, nxt) = tail.split_at(length); recv.rmsgs.push(msg.into()); tail = nxt; } } recv.data.drain(..(recv.data.len() - tail.len())); } else if evt.contains(poll::PollFlags::POLLHUP) { recv.eof = true; } else if evt.contains(poll::PollFlags::POLLERR) { panic!("unexpected pollerr"); } if evt.contains(poll::PollFlags::POLLOUT) { /* Only the writing side checks for POLLOUT, so write messages here */ let iovs_long = [ IoSlice::new(&data[std::cmp::min(nbytes_sent, data.len())..]), IoSlice::new(&end_msg[(std::cmp::max(nbytes_sent, data.len()) - data.len())..]), ]; let iovs_short = [IoSlice::new(if nbytes_sent >= data.len() { &end_msg[nbytes_sent - data.len()..(nbytes_sent - data.len() + 1)] } else { &data[nbytes_sent..(nbytes_sent + 1)] })]; let sfds = &raw_fds[nfds_sent..std::cmp::min(fds.len(), nfds_sent + MAX_FDS_PER_BYTE)]; let short_transfer = fds.len() > nfds_sent + MAX_FDS_PER_BYTE; let iovs: &[IoSlice] = if short_transfer { &iovs_short } else { &iovs_long }; let cmsgs = [socket::ControlMessage::ScmRights(sfds)]; let r = nix::sys::socket::sendmsg::<()>( pfd.as_fd().as_raw_fd(), iovs, if sfds.is_empty() { &[] } else { &cmsgs }, nix::sys::socket::MsgFlags::empty(), None, ); match r { Ok(s) => { nbytes_sent += s; nfds_sent += sfds.len(); } Err(Errno::EINTR) | Err(Errno::EAGAIN) => { println!("eintr"); } Err(Errno::EPIPE) | Err(Errno::ECONNRESET) => { recv.eof = true; } Err(e) => { panic!("{:?}", e); } } } } let opp_recv = if write_to_prog { &mut recv_comp } else { &mut recv_prog }; for (i, msg) in opp_recv.rmsgs.iter().enumerate() { let (obj_id, _length, opcode) = parse_wl_header(msg); if obj_id == ObjId(u32::MAX) && opcode == 0 { /* encountered pass-through message; stop, are done */ opp_recv.rmsgs.remove(i); break 'outer; } } for msg in &recv_prog.rmsgs { let (obj_id, _length, opcode) = parse_wl_header(msg); if obj_id == ObjId(1) && MethodId::Event(opcode) == OPCODE_WL_DISPLAY_ERROR { /* encountered error message; stop and return error */ let (obj, code, msg) = parse_evt_wl_display_error(msg).unwrap(); err = Some((obj, code, String::from_utf8(Vec::from(msg)).unwrap())); break 'outer; } } } if write_to_prog { assert!(recv_prog.fds.is_empty()); assert!((recv_prog.rmsgs.len() == (err.is_some() as usize)) && recv_prog.data.is_empty()); (recv_comp.rmsgs, recv_comp.fds, err) } else { assert!(recv_comp.fds.is_empty()); assert!(recv_comp.rmsgs.is_empty() && recv_comp.data.is_empty()); (recv_prog.rmsgs, recv_prog.fds, err) } } /** Write messages to the Wayland connection, followed by a test message that should directly pass through; * possibly stopping early if the connection is closed. (Note: in that case, test_read_msgs should * capture an error message). 
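 *
 * A sketch of the intended pairing with `test_read_msgs`, using the
 * `ProtocolTestContext` fields defined below (`msgs` is an illustrative
 * byte buffer):
 * ```ignore
 * test_write_msgs(&ctx.sock_prog, &msgs, &[]);
 * let (echoed, fds, err) = test_read_msgs(&ctx.sock_comp, Some(&ctx.sock_prog));
 * assert!(err.is_none());
 * ```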
*/ fn test_write_msgs(socket: &OwnedFd, data: &[u8], fds: &[&OwnedFd]) { let mut end_msg = [0_u8; 8]; let mut end_msg_view: &mut [u8] = &mut end_msg; write_header(&mut end_msg_view, ObjId(u32::MAX), 8, 0, 0); let mut nbytes_sent: usize = 0; let mut nfds_sent: usize = 0; let net_len = data.len() + end_msg.len(); let raw_fds: Vec = fds.iter().map(|x| x.as_raw_fd()).collect(); assert!(data.len() >= fds.len().div_ceil(32)); let start = Instant::now(); let timeout = Duration::from_secs(1); while nbytes_sent < net_len { let iovs_long = [ IoSlice::new(&data[std::cmp::min(nbytes_sent, data.len())..]), IoSlice::new(&end_msg[(std::cmp::max(nbytes_sent, data.len()) - data.len())..]), ]; let iovs_short = [IoSlice::new(if nbytes_sent >= data.len() { &end_msg[nbytes_sent - data.len()..(nbytes_sent - data.len() + 1)] } else { &data[nbytes_sent..(nbytes_sent + 1)] })]; let fds = &raw_fds[nfds_sent..std::cmp::min(fds.len(), nfds_sent + 32)]; let short_transfer = fds.len() - nfds_sent >= 32; let iovs: &[IoSlice] = if short_transfer { &iovs_short } else { &iovs_long }; let cmsgs = [socket::ControlMessage::ScmRights(fds)]; let mut pfd = [poll::PollFd::new(socket.as_fd(), poll::PollFlags::POLLOUT)]; let current = Instant::now(); let elapsed = current.duration_since(start); if elapsed >= timeout { panic!("timeout: {:?}", elapsed); } let remaining = time::TimeSpec::from_duration(timeout.saturating_sub(elapsed)); let res = nix::poll::ppoll(&mut pfd, Some(remaining), None); if let Err(e) = res { assert!(e == Errno::EINTR || e == Errno::EAGAIN); } let evts = pfd[0].revents().unwrap(); if evts.contains(poll::PollFlags::POLLHUP) { println!("Pollhup on write"); return; } if evts.contains(poll::PollFlags::POLLERR) { panic!("Unexpected pollerr"); } if !evts.contains(poll::PollFlags::POLLOUT) { continue; } let r = nix::sys::socket::sendmsg::<()>( socket.as_raw_fd(), iovs, if fds.is_empty() { &[] } else { &cmsgs }, nix::sys::socket::MsgFlags::empty(), None, ); match r { Ok(s) => { nbytes_sent += s; if short_transfer { nfds_sent += 32; } else { nfds_sent = fds.len(); } } Err(Errno::EINTR) | Err(Errno::EAGAIN) => { println!("eintr"); } Err(e) => { panic!("{:?}", e); } } } } /** Buffer holding data read from a socket and whether the connection has closed. */ struct ReadRecv { data: Vec, fds: Vec, eof: bool, } /** Read data/fds from a socket, and return true on EOF. */ fn test_read_from_socket(socket: BorrowedFd, recv: &mut ReadRecv) -> bool { let mut tmp = vec![0u8; 16384]; let mut iovs = [IoSliceMut::new(&mut tmp)]; let mut cmsg_fds = nix::cmsg_space!([RawFd; 32]); let r = socket::recvmsg::( socket.as_raw_fd(), &mut iovs, Some(&mut cmsg_fds), socket::MsgFlags::empty(), ); match r { Ok(resp) => { for msg in resp.cmsgs().unwrap() { match msg { socket::ControlMessageOwned::ScmRights(tfds) => { for f in &tfds { assert!(*f != -1); recv.fds.push(unsafe { // SAFETY: fd was just created, checked valid, // and is recorded nowhere else OwnedFd::from_raw_fd(*f) }); } } _ => { panic!("Unexpected control message"); } } } let nbytes = resp.bytes; recv.data.extend_from_slice(&tmp[..nbytes]); nbytes == 0 } Err(nix::errno::Errno::ECONNRESET) => true, Err(nix::errno::Errno::EINTR) | Err(nix::errno::Errno::EAGAIN) => false, Err(x) => { panic!("Error reading from socket: {:?}", x) } } } /** Read messages from the Wayland connection, until an error is returned or the test message is found. * If opposite_fd is not None, then it is the program-side socket and will receive wl_display_error. 
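 * (The "opposite_fd" mentioned above is the `opposite_socket` parameter.)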
*/ fn test_read_msgs( socket: &OwnedFd, opposite_socket: Option<&OwnedFd>, ) -> (Vec>, Vec, Option<(ObjId, u32, String)>) { let mut msgs = Vec::new(); let mut fds = Vec::new(); let mut pfds = Vec::new(); let mut recv = Vec::new(); pfds.push(poll::PollFd::new(socket.as_fd(), poll::PollFlags::POLLIN)); recv.push(ReadRecv { data: Vec::new(), fds: Vec::new(), eof: false, }); if let Some(f) = opposite_socket { pfds.push(poll::PollFd::new(f.as_fd(), poll::PollFlags::POLLIN)); recv.push(ReadRecv { data: Vec::new(), fds: Vec::new(), eof: false, }); } let start = Instant::now(); let timeout = Duration::from_secs(1); let mut err: Option<(ObjId, u32, String)> = None; 'outer: loop { let current = Instant::now(); let elapsed = current.duration_since(start); if elapsed >= timeout { panic!("timeout: {:?}", elapsed); } let remaining = time::TimeSpec::from_duration(timeout.saturating_sub(elapsed)); let res = nix::poll::ppoll(&mut pfds, Some(remaining), None); if let Err(e) = res { assert!(e == Errno::EINTR || e == Errno::EAGAIN); } for (p, r) in pfds.iter().zip(recv.iter_mut()) { let evt = p.revents().unwrap(); if evt.contains(poll::PollFlags::POLLIN) { let eof = test_read_from_socket(p.as_fd(), r); if eof { r.eof = true; } } if evt.contains(poll::PollFlags::POLLERR) { panic!("unexpected pollerr"); } } if !recv[0].fds.is_empty() { fds.append(&mut recv[0].fds); } while recv[0].data.len() >= 8 { let (obj_id, length, opcode) = parse_wl_header(&recv[0].data); assert!(length >= 8 && length % 4 == 0); if obj_id == ObjId(u32::MAX) && opcode == 0 { /* encountered pass-through message; we are done */ break 'outer; } if recv[0].data.len() >= length { let data: Vec = recv[0].data.drain(..length).collect(); if opposite_socket.is_none() { if obj_id == ObjId(1) && MethodId::Event(opcode) == OPCODE_WL_DISPLAY_ERROR { let (obj, code, msg) = parse_evt_wl_display_error(&data).unwrap(); err = Some((obj, code, String::from_utf8(Vec::from(msg)).unwrap())); break 'outer; } } msgs.push(data); } } if opposite_socket.is_some() { assert!(recv[1].fds.is_empty()); if recv[1].data.len() >= 8 { let (obj_id, length, opcode) = parse_wl_header(&recv[1].data); if recv[1].data.len() >= length { let data: Vec = recv[1].data.drain(..length).collect(); if obj_id == ObjId(1) && MethodId::Event(opcode) == OPCODE_WL_DISPLAY_ERROR { let (obj, code, msg) = parse_evt_wl_display_error(&data).unwrap(); err = Some((obj, code, String::from_utf8(Vec::from(msg)).unwrap())); break 'outer; } else { panic!("unexpected message on program side"); } } } } } (msgs, fds, err) } /** Helper function to return a `Vec`, given a function that builds its * contents by writing to a `&mut &mut [u8]`. Used to build Wayland message sequences. */ fn build_msgs(f: F) -> Vec where F: FnOnce(&mut &mut [u8]), { let len = 16384; let mut buf = vec![0u8; len]; let mut rest = &mut buf[..]; f(&mut rest); let nwritten = len - rest.len(); Vec::from(&buf[..nwritten]) } /** Return true iff the interaction result in `x` matches concatenated messages `concat`. */ fn is_plain_msgs( x: Result<(Vec>, Vec), (ObjId, u32, String)>, concat: Vec, ) -> bool { if let Ok((msg, fds)) = x { fds.is_empty() && msg.concat() == concat } else { false } } /** Things needed for the test program to interact with a linked pair of Waypipe instances. */ struct ProtocolTestContext { sock_prog: OwnedFd, sock_comp: OwnedFd, } impl ProtocolTestContext { /** Write data and fds to the program-side Waypipe instance, and return the resulting messages or error message. 
*/ fn prog_write( &mut self, data: &[u8], fds: &[&OwnedFd], ) -> Result<(Vec>, Vec), (ObjId, u32, String)> { let (msg, ofds, err) = test_interact(&self.sock_prog, &self.sock_comp, true, data, fds); if let Some(e) = err { Err(e) } else { Ok((msg, ofds)) } } /** Write data and fds to the compositor-side Waypipe instance, and return the resulting messages or error message. */ fn comp_write( &mut self, data: &[u8], fds: &[&OwnedFd], ) -> Result<(Vec>, Vec), (ObjId, u32, String)> { let (msg, ofds, err) = test_interact(&self.sock_prog, &self.sock_comp, false, data, fds); if let Some(e) = err { Err(e) } else { Ok((msg, ofds)) } } /** Write messages to a program-side Waypipe instance, and panic if they are not passed through unchanged. */ fn prog_write_passthrough(&mut self, data: Vec) { assert!(is_plain_msgs(self.prog_write(&data, &[]), data)); } /** Write messages to a compositor-side Waypipe instance, and panic if they are not passed through unchanged. */ fn comp_write_passthrough(&mut self, data: Vec) { assert!(is_plain_msgs(self.comp_write(&data, &[]), data)); } } /** Options to be passed to an instance of Waypipe */ struct WaypipeOptions<'a> { wire_version: Option, drm_node: Option, device_type: RenderDeviceType, video: VideoSetting, title_prefix: &'a str, compression: Compression, } /** Construct the command to run a Waypipe connection handling process, with specified options. */ fn build_arguments(waypipe_bin: &OsStr, opts: &WaypipeOptions, is_client: bool) -> Vec { let mut v = Vec::new(); let cross_runner = std::env::var_os("CROSS_TARGET_RUNNER"); let cx: &OsStr = cross_runner.as_deref().unwrap_or_default(); /* note: CROSS_TARGET_RUNNER is typically something like 'CROSS_TARGET_RUNNER=/linux-runner aarch64' */ for chunk in cx.as_encoded_bytes().split(|x| *x == b' ') { if !chunk.is_empty() { v.push(std::str::from_utf8(chunk).unwrap().into()); } } v.push(waypipe_bin.to_str().unwrap().into()); v.push("--debug".into()); v.push("--threads=1".into()); v.push(format!("--compress={}", opts.compression)); if !opts.title_prefix.is_empty() { v.push(format!("--title-prefix={}", opts.title_prefix)); } if let Some(device_id) = opts.drm_node { v.push(format!("--drm-node=/dev/dri/renderD{}", (device_id & 0xff))); } else { v.push("--no-gpu".into()); } if matches!(opts.device_type, RenderDeviceType::Gbm) { v.push("--test-skip-vulkan".into()); } if opts.video.format.is_some() { v.push(format!("--video={}", opts.video)); } if let Some(ver) = opts.wire_version { v.push(format!("--test-wire-version={}", ver)); } if is_client { v.push("client-conn".into()); } else { v.push("server-conn".into()); } v } /** Run a protocol test with the specified options. 
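 *
 * This spawns linked `waypipe client-conn` and `waypipe server-conn`
 * processes, hands the program- and compositor-side sockets to `test_fn`,
 * and fails unless both processes exit cleanly. A typical invocation,
 * mirroring the tests in this file:
 * ```ignore
 * run_protocol_test_with_opts(&info, &opts, &opts, &|mut ctx| {
 *     ctx.prog_write_passthrough(build_msgs(|dst| {
 *         write_req_wl_display_sync(dst, ObjId(1), ObjId(2));
 *     }));
 * })?;
 * ```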
*/ fn run_protocol_test_with_opts( info: &TestInfo, opts_client: &WaypipeOptions, opts_server: &WaypipeOptions, test_fn: &dyn Fn(ProtocolTestContext), ) -> Result<(), StatusBad> { let (channel1, channel2) = socket::socketpair( socket::AddressFamily::Unix, socket::SockType::Stream, None, socket::SockFlag::SOCK_CLOEXEC, ) .unwrap(); let (prog_appl1, prog_appl2) = socket::socketpair( socket::AddressFamily::Unix, socket::SockType::Stream, None, socket::SockFlag::SOCK_NONBLOCK | socket::SockFlag::SOCK_CLOEXEC, ) .unwrap(); let (prog_comp1, prog_comp2) = socket::socketpair( socket::AddressFamily::Unix, socket::SockType::Stream, None, socket::SockFlag::SOCK_NONBLOCK | socket::SockFlag::SOCK_CLOEXEC, ) .unwrap(); let client_args = build_arguments(info.waypipe_client, opts_client, true); let server_args = build_arguments(info.waypipe_server, opts_server, false); set_cloexec(&prog_comp2, false).unwrap(); set_cloexec(&channel1, false).unwrap(); let mut client: Child = Command::new(&client_args[0]) .args(&client_args[1..]) .env("WAYLAND_SOCKET", format!("{}", prog_comp2.as_raw_fd())) .env("WAYPIPE_CONNECTION_FD", format!("{}", channel1.as_raw_fd())) .stdin(Stdio::null()) .stdout(Stdio::inherit()) .stderr(Stdio::inherit()) .spawn() .unwrap(); drop(prog_comp2); drop(channel1); set_cloexec(&prog_appl2, false).unwrap(); set_cloexec(&channel2, false).unwrap(); let mut server: Child = Command::new(&server_args[0]) .args(&server_args[1..]) .env("WAYLAND_SOCKET", format!("{}", channel2.as_raw_fd())) .env( "WAYPIPE_CONNECTION_FD", format!("{}", prog_appl2.as_raw_fd()), ) .stdin(Stdio::null()) .stdout(Stdio::inherit()) .stderr(Stdio::inherit()) .spawn() .unwrap(); drop(prog_appl2); drop(channel2); let ctx = ProtocolTestContext { sock_prog: prog_appl1, sock_comp: prog_comp1, }; test_fn(ctx); /* Wait for processes to die */ // todo: this needs a 1 second timeout to properly catch deadlocked processes let exit_c = client.wait().unwrap(); let exit_s = server.wait().unwrap(); if !exit_s.success() || !exit_c.success() { let msg = tag!( "Waypipe connection handlers for {} did not exit cleanly: client {} server {}", info.test_name, exit_c.success(), exit_s.success() ); return Err(StatusBad::Fail(msg)); } Ok(()) } /** Run a protocol test with both sides of the connection using the DRM node for `device`. */ #[cfg(feature = "dmabuf")] fn run_protocol_test_with_drm_node( info: &TestInfo, device: &RenderDevice, test_fn: &dyn Fn(ProtocolTestContext), ) -> Result<(), StatusBad> { let options = WaypipeOptions { wire_version: None, drm_node: Some(device.id), device_type: device.device_type, compression: Compression::None, title_prefix: "", video: VideoSetting::default(), }; run_protocol_test_with_opts(info, &options, &options, test_fn) } /** Run a protocol test with default options. */ fn run_protocol_test( info: &TestInfo, test_fn: &dyn Fn(ProtocolTestContext), ) -> Result<(), StatusBad> { let options = WaypipeOptions { wire_version: None, drm_node: None, device_type: RenderDeviceType::Vulkan, compression: Compression::None, title_prefix: "", video: VideoSetting::default(), }; run_protocol_test_with_opts(info, &options, &options, test_fn) } /** Try to setup a Vulkan instance and device for the given `device_id`. */ #[cfg(feature = "dmabuf")] fn setup_vulkan(device_id: u64) -> Result, String> { let instance = setup_vulkan_instance(true, &VideoSetting::default(), false, false)? .ok_or_else(|| tag!("Vulkan instance not available"))?; Ok(Arc::new( setup_vulkan_device_base(&instance, Some(device_id), false)? 
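            // Ok(None) means the requested device was not found/usable; turn that into an error below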
            .ok_or_else(|| tag!("Vulkan device {} not available", device_id))?,
    ))
}
/** Return a memfd whose contents are precisely `data`. */
fn make_file_with_contents(data: &[u8]) -> Result<OwnedFd, String> {
    let local_fd = memfd::memfd_create(
        c"/waypipe",
        memfd::MemFdCreateFlag::MFD_CLOEXEC | memfd::MemFdCreateFlag::MFD_ALLOW_SEALING,
    )
    .map_err(|x| tag!("Failed to create memfd: {:?}", x))?;
    unistd::ftruncate(&local_fd, data.len().try_into().unwrap())
        .map_err(|x| tag!("Failed to resize memfd: {:?}", x))?;
    let mapping = ExternalMapping::new(&local_fd, data.len(), false)?;
    copy_onto_mapping(data, &mapping, 0);
    Ok(local_fd)
}
/** Replace the contents of a memfd, whose length is already `data.len()`, with `data`. */
fn update_file_contents(fd: &OwnedFd, data: &[u8]) -> Result<(), String> {
    let mapping = ExternalMapping::new(fd, data.len(), false)?;
    copy_onto_mapping(data, &mapping, 0);
    drop(mapping);
    Ok(())
}
/** Replace the contents of a memfd with `data`, changing the file size as needed. */
fn resize_file_with_contents(fd: &OwnedFd, data: &[u8]) -> Result<(), String> {
    unistd::ftruncate(fd, data.len().try_into().unwrap())
        .map_err(|x| tag!("Failed to resize memfd: {:?}", x))?;
    let mapping = ExternalMapping::new(fd, data.len(), false)?;
    copy_onto_mapping(data, &mapping, 0);
    Ok(())
}
/** Get the contents of the file, assuming its length is `len`. May error or SIGBUS if the actual
 * length is shorter */
fn get_file_contents(fd: &OwnedFd, len: usize) -> Result<Vec<u8>, String> {
    let mapping = ExternalMapping::new(fd, len, true)?;
    let mut data = vec![0xff_u8; len];
    copy_from_mapping(&mut data, &mapping, 0);
    drop(mapping);
    Ok(data)
}
/** Return true iff the name matches the given filter. */
fn test_is_included(name: &str, filter: &Filter) -> bool {
    if filter.substrings.is_empty() {
        return true;
    }
    for x in filter.substrings {
        if filter.exact {
            let extended_x = format!("::{}::", x);
            let extended_name = format!("::{}::", name);
            if extended_name.contains(&extended_x) {
                return true;
            }
        } else if name.contains(x) {
            return true;
        }
    }
    false
}
/** Register a single test, if the name passes the filter. */
fn register_single<'a>(
    tests: &mut Vec<(String, Box<dyn Fn(TestInfo) -> TestResult + 'a>)>,
    filter: &Filter,
    name: &str,
    func: fn(TestInfo) -> TestResult,
) {
    let ext_name = format!("proto::{}", name);
    if !test_is_included(&ext_name, filter) {
        return;
    }
    tests.push((ext_name, Box::new(func)));
}
/** Register a test instance for each device in the list, if the resulting test name passes
 * the filter. */
fn register_per_device<'a>(
    tests: &mut Vec<(String, Box<dyn Fn(TestInfo) -> TestResult + 'a>)>,
    filter: &Filter,
    devices: &[(String, u64)],
    name: &str,
    func: fn(TestInfo, RenderDevice) -> TestResult,
) {
    for (dev_name, dev_id) in devices {
        for (tp, tpname) in [
            (RenderDeviceType::Vulkan, "vk"),
            (RenderDeviceType::Gbm, "gbm"),
        ] {
            if !cfg!(feature = "gbmfallback") && matches!(tp, RenderDeviceType::Gbm) {
                continue;
            }
            let ext_name = format!("proto::{}::{}::{}", name, dev_name, tpname);
            if !test_is_included(&ext_name, filter) {
                continue;
            }
            let m: (String, u64) = (dev_name.clone(), *dev_id);
            tests.push((
                ext_name,
                Box::new(move |info| {
                    let s = &m;
                    let dev = RenderDevice {
                        name: &s.0,
                        id: s.1,
                        device_type: tp,
                    };
                    func(info, dev)
                }),
            ));
        }
    }
}
#[derive(Clone, Copy)]
enum RenderDeviceType {
    Gbm,
    Vulkan,
}
/** Information about a render device. */
struct RenderDevice<'a> {
    #[allow(unused)]
    name: &'a str,
    id: u64,
    /** The device type for the _instances under test_ to use; test_proto itself,
     * for now, uses Vulkan to handle buffers.
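     * (A `Gbm` device type is forwarded to the instances under test via the
     * `--test-skip-vulkan` flag; see `build_arguments`.)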
*/ device_type: RenderDeviceType, } /** List all render devices on this system. This just checks file properties * and does not test that they are actually usable. */ pub fn list_vulkan_device_ids() -> Vec<(String, u64)> { use std::os::unix::ffi::OsStrExt; let mut dev_ids = Vec::new(); let Ok(dir_iter) = std::fs::read_dir("/dev/dri") else { /* On failure, assume Vulkan is not available */ return dev_ids; }; for r in dir_iter { let std::io::Result::Ok(entry) = r else { continue; }; if !entry.file_name().as_bytes().starts_with(b"renderD") { continue; } let Some(rdev) = get_rdev_for_file(&entry.path()) else { continue; }; dev_ids.push((entry.file_name().into_string().unwrap(), rdev)); } dev_ids } /** No-op signal handler (used to ensure SIGCHLD interrupts poll) */ extern "C" fn noop_signal_handler(_: i32) {} /** Test to verify that a simple message exchange behaves as expected */ fn proto_basic(info: TestInfo) -> TestResult { run_protocol_test(&info, &|mut ctx: ProtocolTestContext| { let write_prog = build_msgs(|dst| { write_req_wl_display_get_registry(dst, ObjId(1), ObjId(2)); write_req_wl_display_sync(dst, ObjId(1), ObjId(3)); }); let (resp, resp_fds) = ctx.prog_write(&write_prog, &[]).unwrap(); assert!(resp_fds.is_empty()); assert!(resp.concat() == write_prog); let write_comp = build_msgs(|dst| { write_evt_wl_registry_global(dst, ObjId(2), 1, WL_COMPOSITOR, 3); write_evt_wl_callback_done(dst, ObjId(3), 0); }); assert!(is_plain_msgs(ctx.comp_write(&write_comp, &[]), write_comp)); })?; Ok(StatusOk::Pass) } /** Test that using the base protocol version still works, for basic operations */ fn proto_base_wire(info: TestInfo) -> TestResult { let opts = WaypipeOptions { wire_version: Some(MIN_PROTOCOL_VERSION), drm_node: None, device_type: RenderDeviceType::Vulkan, title_prefix: "", compression: Compression::None, video: VideoSetting::default(), }; run_protocol_test_with_opts(&info, &opts, &opts, &|mut ctx: ProtocolTestContext| { ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, ObjId(1), ObjId(2)); write_req_wl_display_sync(dst, ObjId(1), ObjId(3)); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_callback_done(dst, ObjId(3), 0); })); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_sync(dst, ObjId(1), ObjId(3)); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_callback_done(dst, ObjId(3), 0); })); })?; Ok(StatusOk::Pass) } /** Test to verify that keymap files can be transferred reliably */ fn proto_keymap(info: TestInfo) -> TestResult { for length in [0, 9, 4096, 300001] { run_protocol_test(&info, &|mut ctx: ProtocolTestContext| { let source_text = "test data ".as_bytes(); let mut source_data = source_text.repeat(align(length, source_text.len()) / source_text.len()); source_data.truncate(length); let source_fd = make_file_with_contents(&source_data).unwrap(); let (display, registry, callback, seat, keyboard) = (ObjId(1), ObjId(2), ObjId(3), ObjId(4), ObjId(5)); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); write_req_wl_display_sync(dst, display, callback); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, WL_SEAT, 7); write_evt_wl_callback_done(dst, callback, 0); })); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, WL_SEAT, 7, seat); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_seat_capabilities(dst, seat, 3); })); 
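            /* The wl_keyboard::keymap event below transfers a file by copy:
             * Waypipe should emit the event with a fresh fd whose first `size`
             * bytes match the source keymap data. */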
ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_seat_get_keyboard(dst, seat, keyboard); })); let (msgs, mut fds) = ctx .comp_write( &build_msgs(|dst| { write_evt_wl_keyboard_keymap( dst, keyboard, false, 1, source_data.len() as _, ); }), &[&source_fd], ) .unwrap(); drop(source_fd); let (_format, keymap_length) = parse_evt_wl_keyboard_keymap(&msgs[0]).unwrap(); let new_kb_fd = fds.remove(0); let new_data = get_file_contents(&new_kb_fd, keymap_length as usize).unwrap(); assert!( new_data == source_data, "{} {:?} {:?}", keymap_length, &new_data[..new_data.len().min(1000)], &source_data[..source_data.len().min(1000)] ); })?; } Ok(StatusOk::Pass) } /** Test to verify that data for wp_image_description_creator_icc_v1::set_icc_file and * wp_image_description_info_v1::icc_file is correctly transferred */ fn proto_icc(info: TestInfo) -> TestResult { run_protocol_test(&info, &|mut ctx: ProtocolTestContext| { let [display, registry, color_mgr, output, creator, color_output, desc, info, ..] = ID_SEQUENCE; ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, WL_OUTPUT, 4); write_evt_wl_registry_global(dst, registry, 2, WP_COLOR_MANAGER_V1, 1); })); let offset: u32 = 2048; let length: u32 = 1024; let file_sz: u32 = 4096; assert!(file_sz >= offset + length); let data: Vec = (0..file_sz / 2) .flat_map(|x| (x as u16).to_le_bytes()) .collect(); let base_fd = make_file_with_contents(&data).unwrap(); // Check that wp_image_description_creator_icc_v1::set_icc_file works let msg_start = build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 2, WP_COLOR_MANAGER_V1, 1, color_mgr); write_req_wl_registry_bind(dst, registry, 1, WL_OUTPUT, 4, output); write_req_wp_color_manager_v1_create_icc_creator(dst, color_mgr, creator); write_req_wp_color_manager_v1_get_output(dst, color_mgr, color_output, output); write_req_wp_color_management_output_v1_get_image_description(dst, color_output, desc); write_req_wp_image_description_v1_get_information(dst, desc, info); }); let msg_fd = build_msgs(|dst| { write_req_wp_image_description_creator_icc_v1_set_icc_file( dst, creator, false, offset, length, ); }); let mut smsgs = msg_start.clone(); smsgs.extend_from_slice(&msg_fd); let (mut rmsgs, mut rfds) = ctx.prog_write(&smsgs, &[&base_fd]).unwrap(); let rmsg_fd = rmsgs.pop().unwrap(); let fd_from_prog = rfds.pop().unwrap(); assert!(rfds.is_empty()); assert!( parse_wl_header(&rmsg_fd) == ( creator, length_req_wp_image_description_creator_icc_v1_set_icc_file(), OPCODE_WP_IMAGE_DESCRIPTION_CREATOR_ICC_V1_SET_ICC_FILE.code() ) ); let (roffset, rlength) = parse_req_wp_image_description_creator_icc_v1_set_icc_file(&rmsg_fd).unwrap(); assert!(rlength == length); assert!(rmsgs.concat() == msg_start); let data_from_prog = get_file_contents( &fd_from_prog, (offset.checked_add(length).unwrap()) as usize, ) .unwrap(); assert!( data_from_prog[roffset as usize..(roffset + length) as usize] == data[offset as usize..(offset + length) as usize] ); // Check that wp_image_description_info_v1::icc_file works let cmsgs = build_msgs(|dst| { write_evt_wp_image_description_info_v1_icc_file(dst, info, false, length); }); let (rcmsgs, mut rcfds) = ctx.comp_write(&cmsgs, &[&base_fd]).unwrap(); assert!(rcmsgs.concat() == cmsgs); let fd_from_comp = rcfds.pop().unwrap(); assert!(rcfds.is_empty()); let data_from_comp = get_file_contents(&fd_from_comp, length as usize).unwrap(); 
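        /* The icc_file event advertised `length` bytes; check that exactly that
         * prefix of the source data was replicated. */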
        assert!(data_from_comp == data[..length as usize]);
    })?;
    Ok(StatusOk::Pass)
}
/** Test to verify proper replication for wlr_gamma_control_unstable_v1 */
fn proto_gamma_control(info: TestInfo) -> TestResult {
    let gamma_size: usize = 4096;
    const BYTES_PER_ENTRY: usize = 6;
    let mut source_pattern = Vec::new();
    /* Pattern for a single gamma ramp channel */
    for i in 0..gamma_size {
        source_pattern.extend_from_slice(&u16::to_le_bytes(i as u16));
    }
    println!("Subtest: sending gamma ramp fd before size provided");
    run_protocol_test(&info, &|mut ctx: ProtocolTestContext| {
        let [display, registry, output, manager, gamma, ..] = ID_SEQUENCE;
        ctx.prog_write_passthrough(build_msgs(|dst| {
            write_req_wl_display_get_registry(dst, display, registry);
        }));
        ctx.comp_write_passthrough(build_msgs(|dst| {
            write_evt_wl_registry_global(dst, registry, 1, WL_OUTPUT, 4);
            write_evt_wl_registry_global(dst, registry, 1, ZWLR_GAMMA_CONTROL_MANAGER_V1, 1);
        }));
        ctx.prog_write_passthrough(build_msgs(|dst| {
            write_req_wl_registry_bind(dst, registry, 1, WL_OUTPUT, 4, output);
            write_req_wl_registry_bind(dst, registry, 1, ZWLR_GAMMA_CONTROL_MANAGER_V1, 1, manager);
            write_req_zwlr_gamma_control_manager_v1_get_gamma_control(dst, manager, gamma, output);
        }));
        let source_data = source_pattern.repeat(3);
        let source_fd = make_file_with_contents(&source_data).unwrap();
        let msg = build_msgs(|dst| {
            write_req_zwlr_gamma_control_v1_set_gamma(dst, gamma, false);
        });
        let err_msg = ctx
            .prog_write(&msg, &[&source_fd])
            .expect_err("sending ramp early should fail");
        println!(
            "Error message from sending gamma ramp fd too early: {} {} {}",
            err_msg.0, err_msg.1, err_msg.2
        );
    })?;
    let correct_length = gamma_size * BYTES_PER_ENTRY;
    let lengths = [
        /* correct size */
        gamma_size * BYTES_PER_ENTRY,
        /* oversize: OK */
        gamma_size * BYTES_PER_ENTRY * 2,
        /* slightly undersize (typically zero extended) */
        // gamma_size * BYTES_PER_ENTRY - 2,
        /* very undersize (SIGBUS) */
        // gamma_size,
    ];
    for length in lengths {
        println!(
            "Subtest: specified size {}={}*{}, provided file size {}",
            gamma_size * BYTES_PER_ENTRY,
            gamma_size,
            BYTES_PER_ENTRY,
            length
        );
        run_protocol_test(&info, &|mut ctx: ProtocolTestContext| {
            let mut source_data =
                source_pattern.repeat(align(length, source_pattern.len()) / source_pattern.len());
            source_data.truncate(length);
            let source_fd = make_file_with_contents(&source_data).unwrap();
            let [display, registry, output, manager, gamma, ..]
= ID_SEQUENCE; ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, WL_OUTPUT, 4); write_evt_wl_registry_global(dst, registry, 1, ZWLR_GAMMA_CONTROL_MANAGER_V1, 1); })); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, WL_OUTPUT, 4, output); write_req_wl_registry_bind( dst, registry, 1, ZWLR_GAMMA_CONTROL_MANAGER_V1, 1, manager, ); write_req_zwlr_gamma_control_manager_v1_get_gamma_control( dst, manager, gamma, output, ); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_zwlr_gamma_control_v1_gamma_size( dst, gamma, gamma_size.try_into().unwrap(), ); })); let msg = build_msgs(|dst| { write_req_zwlr_gamma_control_v1_set_gamma(dst, gamma, false); }); let (msgs, mut fds) = ctx.prog_write(&msg, &[&source_fd]).unwrap(); drop(source_fd); if length >= gamma_size * BYTES_PER_ENTRY { assert!(msgs.concat() == msg); assert!(fds.len() == 1); let new_gamma_fd = fds.remove(0); let new_data = get_file_contents(&new_gamma_fd, correct_length).unwrap(); assert!(new_data == source_data[..correct_length]); } else { /* Corruption, error message, or SIGBUS */ } })?; } Ok(StatusOk::Pass) } /** Generate a stream of data using seed `seed`, and send the first `max_write` bytes of it (or as much as possible), * while receiving the first `max_read` bytes of it (or as much as possible), and check that the data read matches * what was sent. */ fn check_pipe_transfer( pipe_w: OwnedFd, pipe_r: OwnedFd, seed: u64, max_write: Option, max_read: Option, ) { assert!(max_write.is_some() || max_read.is_some()); let mut nwritten = 0; let start = Instant::now(); let timeout = Duration::from_secs(1); let mut ord = Some(pipe_r); let mut owr = Some(pipe_w); if max_write == Some(0) { owr = None; } let mut recv = Vec::new(); let mut to_send = Vec::new(); let mut gen = BadRng { state: seed }; let mut tmp = vec![0; 4096]; while ord.is_some() || owr.is_some() { let mut pfds = Vec::new(); let wr_idx = owr.as_ref().map(|x| { pfds.push(poll::PollFd::new(x.as_fd(), poll::PollFlags::POLLOUT)); pfds.len() - 1 }); let rd_idx = ord.as_ref().map(|x| { pfds.push(poll::PollFd::new(x.as_fd(), poll::PollFlags::POLLIN)); pfds.len() - 1 }); let current = Instant::now(); let elapsed = current.duration_since(start); if elapsed >= timeout { panic!("timeout: {:?}", elapsed); } let remaining = time::TimeSpec::from_duration(timeout.saturating_sub(elapsed)); let ret = poll::ppoll(&mut pfds, Some(remaining), None); if let Err(e) = ret { assert!(e == Errno::EINTR); } let rev_wr = wr_idx.map(|i| pfds[i].revents().unwrap()); let rev_rd = rd_idx.map(|i| pfds[i].revents().unwrap()); if let Some(evts) = rev_rd { if evts.contains(poll::PollFlags::POLLIN) { let read_len = if let Some(limit) = max_read { limit.checked_sub(recv.len()).unwrap().min(tmp.len()) } else { tmp.len() }; match unistd::read(ord.as_ref().unwrap().as_raw_fd(), &mut tmp[..read_len]) { Err(Errno::EINTR) | Err(Errno::EAGAIN) => { /* do nothing */ } Err(Errno::ECONNRESET) | Err(Errno::ENOTCONN) => { ord = None; } Err(x) => panic!("{:?}", x), Ok(len) => { if len > 0 { recv.extend_from_slice(&tmp[..len]); } else { /* nothing more to read */ ord = None; } if let Some(limit) = max_read { if recv.len() >= limit { /* Stop reading, have read enough */ ord = None; } } } } } else if evts.contains(poll::PollFlags::POLLHUP) || evts.contains(poll::PollFlags::POLLERR) { /* case: hangup, no pending data */ ord = 
None; } } if let Some(evts) = rev_wr { if evts.contains(poll::PollFlags::POLLHUP) || evts.contains(poll::PollFlags::POLLERR) { owr = None; } else if evts.contains(poll::PollFlags::POLLOUT) { let extension = if let Some(len) = max_write { len - to_send.len() } else { std::cmp::max(nwritten + (1 << 20), to_send.len()) - to_send.len() }; for _ in 0..extension { to_send.push(gen.next() as u8); } assert!(to_send.len() > nwritten); match unistd::write(owr.as_ref().unwrap(), &to_send[nwritten..]) { Err(Errno::EINTR) | Err(Errno::EAGAIN) => { /* do nothing */ } Err(Errno::EPIPE) | Err(Errno::ECONNRESET) => { owr = None; } Err(x) => panic!("{:?}", x), Ok(len) => { nwritten += len; if max_write == Some(nwritten) { owr = None; } } } } } } /* Data received must be a prefix of data sent */ assert!(recv == to_send[..recv.len()]); if let Some(len) = max_read { /* Should have read exactly the requested amount */ assert!(recv.len() == len); } else { /* Should read all of input */ assert!(recv.len() == to_send.len()); } } /** Test to check that basic pipe replication works for the various copy-paste type protocols, and * that the pipe replicated can be used to transfer various amounts of data. */ fn proto_pipe_write(info: TestInfo) -> TestResult { let (display, registry, manager, seat, dev, source) = (ObjId(1), ObjId(2), ObjId(3), ObjId(4), ObjId(5), ObjId(6)); let seat_name = WL_SEAT; let ddev_name = WL_DATA_DEVICE_MANAGER; let prim_name = ZWP_PRIMARY_SELECTION_DEVICE_MANAGER_V1; let data_name = EXT_DATA_CONTROL_MANAGER_V1; let gtk_name = GTK_PRIMARY_SELECTION_DEVICE_MANAGER; let wlr_name = ZWLR_DATA_CONTROL_MANAGER_V1; let mime = "text/plain;charset=utf-8".as_bytes(); /* Protocol sequences leading to a pipe receipt; in all cases the pipe is provided with the last message */ let ex_wl: &[Vec] = &[ build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); }), build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, seat_name, 7); write_evt_wl_registry_global(dst, registry, 2, ddev_name, 3); }), build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, seat_name, 7, seat); write_req_wl_registry_bind(dst, registry, 2, ddev_name, 3, manager); write_req_wl_data_device_manager_get_data_device(dst, manager, dev, seat); write_req_wl_data_device_manager_create_data_source(dst, manager, source); write_req_wl_data_source_offer(dst, source, mime); write_req_wl_data_device_set_selection(dst, dev, source, 99); }), build_msgs(|dst| { write_evt_wl_data_source_send(dst, source, false, mime); }), ]; let ex_prim: &[Vec] = &[ build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); }), build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, seat_name, 7); write_evt_wl_registry_global(dst, registry, 2, prim_name, 1); }), build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, seat_name, 7, seat); write_req_wl_registry_bind(dst, registry, 2, prim_name, 1, manager); write_req_zwp_primary_selection_device_manager_v1_get_device(dst, manager, dev, seat); write_req_zwp_primary_selection_device_manager_v1_create_source(dst, manager, source); write_req_zwp_primary_selection_source_v1_offer(dst, source, mime); write_req_zwp_primary_selection_device_v1_set_selection(dst, dev, source, 99); }), build_msgs(|dst| { write_evt_zwp_primary_selection_source_v1_send(dst, source, false, mime); }), ]; let ex_data: &[Vec] = &[ build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); }), build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, 
seat_name, 7); write_evt_wl_registry_global(dst, registry, 2, data_name, 1); }), build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, seat_name, 7, seat); write_req_wl_registry_bind(dst, registry, 2, data_name, 1, manager); write_req_ext_data_control_manager_v1_get_data_device(dst, manager, dev, seat); write_req_ext_data_control_manager_v1_create_data_source(dst, manager, source); write_req_ext_data_control_source_v1_offer(dst, source, mime); write_req_ext_data_control_device_v1_set_selection(dst, dev, source); }), build_msgs(|dst| { write_evt_ext_data_control_source_v1_send(dst, source, false, mime); }), ]; let ex_gtk: &[Vec] = &[ build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); }), build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, seat_name, 7); write_evt_wl_registry_global(dst, registry, 2, gtk_name, 1); }), build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, seat_name, 7, seat); write_req_wl_registry_bind(dst, registry, 2, gtk_name, 1, manager); write_req_gtk_primary_selection_device_manager_get_device(dst, manager, dev, seat); write_req_gtk_primary_selection_device_manager_create_source(dst, manager, source); write_req_gtk_primary_selection_source_offer(dst, source, mime); write_req_gtk_primary_selection_device_set_selection(dst, dev, source, 99); }), build_msgs(|dst| { write_evt_gtk_primary_selection_source_send(dst, source, false, mime); }), ]; let ex_wlr: &[Vec] = &[ build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); }), build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, seat_name, 7); write_evt_wl_registry_global(dst, registry, 2, wlr_name, 1); }), build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, seat_name, 7, seat); write_req_wl_registry_bind(dst, registry, 2, wlr_name, 1, manager); write_req_zwlr_data_control_manager_v1_get_data_device(dst, manager, dev, seat); write_req_zwlr_data_control_manager_v1_create_data_source(dst, manager, source); write_req_zwlr_data_control_source_v1_offer(dst, source, mime); write_req_zwlr_data_control_device_v1_set_selection(dst, dev, source); }), build_msgs(|dst| { write_evt_zwlr_data_control_source_v1_send(dst, source, false, mime); }), ]; let (display, registry, manager, seat, dev, offer) = (ObjId(1), ObjId(2), ObjId(3), ObjId(4), ObjId(5), ObjId(6)); /* Protocol sequences leading to a pipe receipt; in all cases the pipe is provided with the last message */ let ex2_wl: &[Vec] = &[ build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); }), build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, seat_name, 7); write_evt_wl_registry_global(dst, registry, 2, ddev_name, 3); }), build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, seat_name, 7, seat); write_req_wl_registry_bind(dst, registry, 2, ddev_name, 3, manager); write_req_wl_data_device_manager_get_data_device(dst, manager, dev, seat); }), build_msgs(|dst| { write_evt_wl_data_device_data_offer(dst, dev, offer); write_evt_wl_data_offer_offer(dst, offer, mime); write_evt_wl_data_device_selection(dst, dev, offer); }), build_msgs(|dst| { write_req_wl_data_offer_receive(dst, offer, false, mime); }), ]; let ex2_prim: &[Vec] = &[ build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); }), build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, seat_name, 7); write_evt_wl_registry_global(dst, registry, 2, prim_name, 1); }), build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, seat_name, 7, 
seat); write_req_wl_registry_bind(dst, registry, 2, prim_name, 1, manager); write_req_zwp_primary_selection_device_manager_v1_get_device(dst, manager, dev, seat); }), build_msgs(|dst| { write_evt_zwp_primary_selection_device_v1_data_offer(dst, dev, offer); write_evt_zwp_primary_selection_offer_v1_offer(dst, offer, mime); write_evt_zwp_primary_selection_device_v1_selection(dst, dev, offer); }), build_msgs(|dst| { write_req_zwp_primary_selection_offer_v1_receive(dst, offer, false, mime); }), ]; let ex2_data: &[Vec] = &[ build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); }), build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, seat_name, 7); write_evt_wl_registry_global(dst, registry, 2, data_name, 1); }), build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, seat_name, 7, seat); write_req_wl_registry_bind(dst, registry, 2, data_name, 1, manager); write_req_ext_data_control_manager_v1_get_data_device(dst, manager, dev, seat); }), build_msgs(|dst| { write_evt_ext_data_control_device_v1_data_offer(dst, dev, offer); write_evt_ext_data_control_offer_v1_offer(dst, offer, mime); write_evt_ext_data_control_device_v1_selection(dst, dev, offer); }), build_msgs(|dst| { write_req_ext_data_control_offer_v1_receive(dst, offer, false, mime); }), ]; let ex2_gtk: &[Vec] = &[ build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); }), build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, seat_name, 7); write_evt_wl_registry_global(dst, registry, 2, gtk_name, 1); }), build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, seat_name, 7, seat); write_req_wl_registry_bind(dst, registry, 2, gtk_name, 1, manager); write_req_gtk_primary_selection_device_manager_get_device(dst, manager, dev, seat); }), build_msgs(|dst| { write_evt_gtk_primary_selection_device_data_offer(dst, dev, offer); write_evt_gtk_primary_selection_offer_offer(dst, offer, mime); write_evt_gtk_primary_selection_device_selection(dst, dev, offer); }), build_msgs(|dst| { write_req_gtk_primary_selection_offer_receive(dst, offer, false, mime); }), ]; let ex2_wlr: &[Vec] = &[ build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); }), build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, seat_name, 7); write_evt_wl_registry_global(dst, registry, 2, wlr_name, 1); }), build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, seat_name, 7, seat); write_req_wl_registry_bind(dst, registry, 2, wlr_name, 1, manager); write_req_zwlr_data_control_manager_v1_get_data_device(dst, manager, dev, seat); }), build_msgs(|dst| { write_evt_zwlr_data_control_device_v1_data_offer(dst, dev, offer); write_evt_zwlr_data_control_offer_v1_offer(dst, offer, mime); write_evt_zwlr_data_control_device_v1_selection(dst, dev, offer); }), build_msgs(|dst| { write_req_zwlr_data_control_offer_v1_receive(dst, offer, false, mime); }), ]; let test_cases: &[&[Vec]] = &[ ex_wl, ex_prim, ex_data, ex_gtk, ex_wlr, ex2_wl, ex2_prim, ex2_data, ex2_gtk, ex2_wlr, ]; let lengths = [usize::MAX, 100_usize, 0_usize, 131073_usize] .iter() .chain(std::iter::repeat(&256_usize)); for (test_no, (test, length)) in test_cases.iter().zip(lengths).enumerate() { println!("Test {}.", test_no); run_protocol_test(&info, &|mut ctx: ProtocolTestContext| { let (pipe_r, pipe_w) = unistd::pipe2(fcntl::OFlag::O_CLOEXEC | fcntl::OFlag::O_NONBLOCK).unwrap(); for (i, line) in test.iter().enumerate().take(test.iter().len() - 1) { if i % 2 == 0 { ctx.prog_write_passthrough(line.clone()); } else { 
ctx.comp_write_passthrough(line.clone()); } } let end = test.last().unwrap(); let ifds = [&pipe_w]; let (msg, mut ofds) = if test.iter().len() % 2 == 0 { ctx.comp_write(end, &ifds) } else { ctx.prog_write(end, &ifds) } .unwrap(); assert!(msg.concat() == *end); drop(pipe_w); let pipe_w = ofds.remove(0); if *length < usize::MAX { /* Send and receive a given length of message. */ check_pipe_transfer( pipe_w, pipe_r, test_no as u64, Some(*length), if test_no == 5 { Some(*length) } else { None }, ); } else { /* Send infinite message, and receive only the first part */ check_pipe_transfer(pipe_w, pipe_r, test_no as u64, None, Some(50000)); } /* Pass through empty message to determine if there was an error */ ctx.comp_write_passthrough(Vec::new()); })?; } Ok(StatusOk::Pass) } /** Test to verify that presentation time handling does not introduce major errors */ fn proto_presentation_time(info: TestInfo) -> TestResult { for (pres_clock, fast_start) in [ (libc::CLOCK_MONOTONIC as u32, true), (libc::CLOCK_REALTIME as u32, false), ] { run_protocol_test(&info, &|mut ctx: ProtocolTestContext| { let (display, registry, pres, comp, surface, feedback) = (ObjId(1), ObjId(2), ObjId(3), ObjId(4), ObjId(5), ObjId(6)); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, WP_PRESENTATION, 1); write_evt_wl_registry_global(dst, registry, 2, WL_COMPOSITOR, 1); })); let start = Instant::now(); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, WP_PRESENTATION, 1, pres); write_req_wl_registry_bind(dst, registry, 2, WL_COMPOSITOR, 1, comp); write_req_wl_compositor_create_surface(dst, comp, surface); write_req_wl_surface_damage(dst, surface, 0, 0, 64, 64); if fast_start { write_req_wp_presentation_feedback(dst, pres, surface, feedback); write_req_wp_presentation_destroy(dst, pres); write_req_wl_surface_commit(dst, surface); } })); if !fast_start { ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wp_presentation_clock_id(dst, pres, pres_clock); })); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wp_presentation_feedback(dst, pres, surface, feedback); })); } let init_time_ns = 1111111111111u128; let data = build_msgs(|dst| { if fast_start { write_evt_wp_presentation_clock_id(dst, pres, pres_clock); write_evt_wl_display_delete_id(dst, display, pres.0); } write_evt_wp_presentation_feedback_presented( dst, feedback, 0, (init_time_ns / 1000000000) as u32, (init_time_ns % 1000000000) as u32, 500000000, 0, 10, 0x9, ); }); let (msgs, fds) = ctx.comp_write(&data, &[]).unwrap(); let end = Instant::now(); assert!(fds.is_empty()); if fast_start { assert!( msgs[..2].concat() == data[..length_evt_wp_presentation_clock_id() + length_evt_wl_display_delete_id()] ); } assert!( parse_wl_header(msgs.last().unwrap()) == ( feedback, length_evt_wp_presentation_feedback_presented(), OPCODE_WP_PRESENTATION_FEEDBACK_PRESENTED.code() ) ); let (tv_sec_hi, tv_sec_lo, tv_nsec, _refresh, _seq_hi, _seq_lo, _flags) = parse_evt_wp_presentation_feedback_presented(msgs.last().unwrap()).unwrap(); let output_ns = 1000000000 * (join_u64(tv_sec_hi, tv_sec_lo) as u128) + (tv_nsec as u128); /* The time adjustment uses two XYX measurements, whose absolute error * is ≤ half the elapsed time each, assuming the clocks run at the same * rate and do not change. 
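             * (An XYX measurement samples clock X, then clock Y, then X again,
             * and maps the Y reading to the midpoint of the two X readings, so
             * its error is at most half the X-to-X interval; bounding the total
             * error by twice the measured round-trip time below is conservative.)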
*/ let max_time_error = end.duration_since(start).saturating_mul(2); let abs_diff = output_ns.abs_diff(init_time_ns); println!( "clock {}: roundtrip diff: {} ns, max permissible error: {} ns", pres_clock, (abs_diff as i128) * (if output_ns > init_time_ns { 1 } else { -1 }), max_time_error.as_nanos() ); assert!(abs_diff < max_time_error.as_nanos()); })?; } Ok(StatusOk::Pass) } /** Test to verify that presentation time handling does not introduce major errors */ fn proto_commit_timing(info: TestInfo) -> TestResult { for pres_clock in [libc::CLOCK_MONOTONIC as u32, libc::CLOCK_REALTIME as u32] { run_protocol_test(&info, &|mut ctx: ProtocolTestContext| { let [display, registry, pres, comp, manager, surface, timer, ..] = ID_SEQUENCE; ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, WP_PRESENTATION, 1); write_evt_wl_registry_global(dst, registry, 2, WP_COMMIT_TIMING_MANAGER_V1, 1); write_evt_wl_registry_global(dst, registry, 3, WL_COMPOSITOR, 1); })); let start = Instant::now(); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, WP_PRESENTATION, 1, pres); write_req_wl_registry_bind(dst, registry, 3, WL_COMPOSITOR, 1, comp); write_req_wl_registry_bind( dst, registry, 2, WP_COMMIT_TIMING_MANAGER_V1, 1, manager, ); write_req_wl_compositor_create_surface(dst, comp, surface); write_req_wl_surface_damage(dst, surface, 0, 0, 64, 64); write_req_wp_commit_timing_manager_v1_get_timer(dst, manager, timer, surface); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wp_presentation_clock_id(dst, pres, pres_clock); })); let init_time_ns = 11111111111111111111111u128; let ((tv_sec_hi, tv_sec_lo), tv_nsec) = ( split_u64((init_time_ns / 1000000000) as u64), (init_time_ns % 1000000000) as u32, ); let data = build_msgs(|dst| { write_req_wp_commit_timer_v1_set_timestamp( dst, timer, tv_sec_hi, tv_sec_lo, tv_nsec, ); }); let (msgs, fds) = ctx.prog_write(&data, &[]).unwrap(); let end = Instant::now(); assert!(fds.is_empty()); assert!(msgs.len() == 1); let msg = &msgs[0]; assert!( parse_wl_header(msg) == ( timer, length_req_wp_commit_timer_v1_set_timestamp(), OPCODE_WP_COMMIT_TIMER_V1_SET_TIMESTAMP.code() ) ); let (new_sec_hi, new_sec_lo, new_nsec) = parse_req_wp_commit_timer_v1_set_timestamp(msg).unwrap(); let output_ns = 1000000000 * (join_u64(new_sec_hi, new_sec_lo) as u128) + (new_nsec as u128); /* The time adjustment uses two XYX measurements, whose absolute error * is ≤ half the elapsed time each, assuming the clocks run at the same * rate and do not change. */ let max_time_error = end.duration_since(start).saturating_mul(2); let abs_diff = output_ns.abs_diff(init_time_ns); println!( "clock {}: roundtrip diff: {} ns, max permissible error: {} ns", pres_clock, (abs_diff as i128) * (if output_ns > init_time_ns { 1 } else { -1 }), max_time_error.as_nanos() ); assert!(abs_diff < max_time_error.as_nanos()); })?; } Ok(StatusOk::Pass) } /** Test that an error is reported when a Wayland client attempts to make two objects * with the same ID at the same time. 
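 * Waypipe parses the message stream and tracks the object id space itself, so
 * the program-side instance should report the collision as an error.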
*/ fn proto_object_collision(info: TestInfo) -> TestResult { run_protocol_test(&info, &|mut ctx: ProtocolTestContext| { let (display, registry) = (ObjId(1), ObjId(2)); let msgs = build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); write_req_wl_display_get_registry(dst, display, registry); }); let res = ctx.prog_write(&msgs, &[]); if let Err(ref e) = res { println!("error: {:?}", e); } assert!(res.is_err()); })?; Ok(StatusOk::Pass) } /** Test to check that a wl_shm buffer can be created and its contents replicated. */ fn proto_shm_buffer(info: TestInfo) -> TestResult { run_protocol_test(&info, &|mut ctx: ProtocolTestContext| { let (display, registry, shm, comp, surface, pool, buffer) = ( ObjId(1), ObjId(2), ObjId(3), ObjId(4), ObjId(5), ObjId(6), ObjId(7), ); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, WL_SHM, 2); write_evt_wl_registry_global(dst, registry, 2, WL_COMPOSITOR, 6); })); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, WL_SHM, 2, shm); write_req_wl_registry_bind(dst, registry, 2, WL_COMPOSITOR, 6, comp); write_req_wl_compositor_create_surface(dst, comp, surface); })); /* First, simple test case: an empty shm pool, never modified */ let empty_fd = make_file_with_contents(&[]).unwrap(); let msg = build_msgs(|dst| { write_req_wl_shm_create_pool(dst, shm, false, pool, 0); }); let (rmsg, mut rfd) = ctx.prog_write(&msg[..], &[&empty_fd]).unwrap(); assert!(rmsg.concat() == msg); drop(empty_fd); assert!(rfd.len() == 1); let output_fd = rfd.remove(0); assert!(get_file_contents(&output_fd, 0).unwrap().is_empty()); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_shm_pool_destroy(dst, pool); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_display_delete_id(dst, display, pool.0); })); /* pools and images in various sizes */ for (w, h) in [(3, 3), (16, 16), (1023, 1025)] { let sz: usize = w * h; let mut data = vec![0; sz]; let mut i: u8 = 0x80; /* Draw a square inside */ for y in (h / 3)..(2 * h) / 3 { for x in (w / 3)..(2 * w) / 3 { data[y * w + x] = i; i = i.wrapping_add(1); } } let buf_fd = make_file_with_contents(&data[..]).unwrap(); let msgs = build_msgs(|dst| { write_req_wl_shm_create_pool(dst, shm, false, pool, sz as i32); write_req_wl_shm_pool_create_buffer( dst, pool, buffer, 0, w as i32, h as i32, w as i32, WlShmFormat::R8 as u32, ); write_req_wl_surface_attach(dst, surface, buffer, 0, 0); write_req_wl_surface_damage_buffer(dst, surface, 0, 0, i32::MAX, i32::MAX); write_req_wl_surface_commit(dst, surface); }); let (rmsg, mut rfd) = ctx.prog_write(&msgs[..], &[&buf_fd]).unwrap(); assert!(rmsg.concat() == msgs); drop(buf_fd); assert!(rfd.len() == 1); let output_fd = rfd.remove(0); let output = get_file_contents(&output_fd, sz).unwrap(); assert!(output == data); /* Cleanup */ ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_shm_pool_destroy(dst, pool); write_req_wl_buffer_destroy(dst, buffer); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_display_delete_id(dst, display, pool.0); write_evt_wl_display_delete_id(dst, display, buffer.0); })); } })?; Ok(StatusOk::Pass) } /** Test that wl_shm_pool::resize is handled correctly. 
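 * The pool is grown in several steps, creating a new buffer in the extended
 * region each time; after each commit the mirrored file must match the full
 * pool contents at its new size.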
*/ fn proto_shm_extend(info: TestInfo) -> TestResult { run_protocol_test(&info, &|mut ctx: ProtocolTestContext| { let (display, registry, shm, comp, surface, pool, buf_a, buf_b, buf_c) = ( ObjId(1), ObjId(2), ObjId(3), ObjId(4), ObjId(5), ObjId(6), ObjId(7), ObjId(8), ObjId(9), ); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, WL_SHM, 2); write_evt_wl_registry_global(dst, registry, 2, WL_COMPOSITOR, 6); })); let pool_fd = make_file_with_contents(&[]).unwrap(); let msg = build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, WL_SHM, 2, shm); write_req_wl_registry_bind(dst, registry, 2, WL_COMPOSITOR, 6, comp); write_req_wl_compositor_create_surface(dst, comp, surface); write_req_wl_shm_create_pool(dst, shm, false, pool, 0); }); let (rmsg, mut rfd) = ctx.prog_write(&msg[..], &[&pool_fd]).unwrap(); assert!(rmsg.concat() == msg); assert!(rfd.len() == 1); let mirror_fd = rfd.remove(0); assert!(get_file_contents(&mirror_fd, 0).unwrap().is_empty()); let (w, h): (usize, usize) = (15, 15); let fmt = WlShmFormat::Abgr16161616; let buf_ids = [buf_a, buf_b, buf_c]; let mut i: u16 = 0x1000; for nblocks in 1..=3 { let new_sz = w * h * 8 * nblocks; let mut img_data = vec![0u8; new_sz]; for k in 0..nblocks { /* Fill with gradient */ let block_data = &mut img_data[(w * h * 8 * k)..(w * h * 8 * (k + 1))]; for j in 0..(w * h) { for h in 0..4 { block_data[(8 * j + 2 * h)..(8 * j + 2 * h + 2)] .copy_from_slice(&((h as u16 + 1) * i).to_le_bytes()); } i = i.wrapping_add(1); } } resize_file_with_contents(&pool_fd, &img_data).unwrap(); ctx.prog_write_passthrough(build_msgs(|dst| { for i in 0..nblocks { if i == nblocks - 1 { write_req_wl_shm_pool_resize(dst, pool, new_sz as i32); write_req_wl_shm_pool_create_buffer( dst, pool, buf_ids[nblocks - 1], (w * h * 8 * (nblocks - 1)) as i32, w as i32, h as i32, (w * 8) as i32, fmt as u32, ); } write_req_wl_surface_attach(dst, surface, buf_ids[i], 0, 0); write_req_wl_surface_damage_buffer(dst, surface, 0, 0, i32::MAX, i32::MAX); write_req_wl_surface_commit(dst, surface); } })); let mir = get_file_contents(&mirror_fd, new_sz).unwrap(); assert!(mir == img_data); } })?; Ok(StatusOk::Pass) } /** Helper function: _process_ dmabuf_feedback events for the given `feedback` object, and * return the map of format-modifier combinations supported by the program-side Waypipe * instance. `format_table` should be, and will be updated to, the most recently received * format table. 
 */
fn process_linux_dmabuf_feedback(
    rmsgs: Vec<Vec<u8>>,
    mut rfds: Vec<OwnedFd>,
    format_table: &mut Vec<(u32, u64)>,
    feedback: ObjId,
) -> BTreeMap<u32, Vec<u64>> {
    rfds.reverse();
    let mut mod_table: BTreeMap<u32, Vec<u64>> = BTreeMap::new();
    let mut tranches: Vec<Vec<(u32, u64)>> = Vec::new();
    let mut current_tranche = Vec::new();
    for msg in rmsgs {
        let (obj, _len, opcode) = parse_wl_header(&msg);
        /* Only events for this dmabuf_feedback object should have been sent */
        assert!(obj == feedback);
        match MethodId::Event(opcode) {
            OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_MAIN_DEVICE => {
                /* ignore, Waypipe may choose a different one */
            }
            OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_TRANCHE_TARGET_DEVICE => {
                /* ignore, Waypipe currently doesn't do anything interesting here */
            }
            OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_TRANCHE_FLAGS => {
                /* ignore, Waypipe currently doesn't do anything interesting here */
            }
            OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_TRANCHE_FORMATS => {
                let format_list =
                    parse_evt_zwp_linux_dmabuf_feedback_v1_tranche_formats(&msg).unwrap();
                /* The indices correspond to the last received format table, and must be
                 * interpreted immediately in case the format table is changed later on */
                for c in format_list.chunks_exact(2) {
                    let idx = u16::from_le_bytes(c.try_into().unwrap());
                    let entry: (u32, u64) = *format_table
                        .get(idx as usize)
                        .expect("format index out of range");
                    current_tranche.push(entry);
                }
            }
            OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_TRANCHE_DONE => {
                tranches.push(current_tranche.clone());
                current_tranche = Vec::new();
            }
            OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_FORMAT_TABLE => {
                format_table.clear();
                let fd = rfds.pop().unwrap();
                let len = parse_evt_zwp_linux_dmabuf_feedback_v1_format_table(&msg).unwrap();
                let table_contents = get_file_contents(&fd, len as usize).unwrap();
                for chunk in table_contents.chunks_exact(16) {
                    let format: u32 = u32::from_le_bytes(chunk[..4].try_into().unwrap());
                    let modifier: u64 = u64::from_le_bytes(chunk[8..].try_into().unwrap());
                    format_table.push((format, modifier));
                }
            }
            OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_DONE => {
                mod_table = BTreeMap::new();
                for t in &tranches {
                    for (fmt, modifier) in t {
                        mod_table
                            .entry(*fmt)
                            .and_modify(|x| x.push(*modifier))
                            .or_insert(vec![*modifier]);
                    }
                }
            }
            _ => {
                panic!("Unexpected opcode: {}", opcode);
            }
        }
    }
    assert!(rfds.is_empty());
    mod_table
}
/** Helper function: send dmabuf_feedback events for the given `feedback` object, and
 * return the map of format-modifier combinations supported by the program-side Waypipe
 * instance.
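 *
 * Each format table entry is 16 bytes, little endian: a u32 DRM format code,
 * 4 bytes of padding, and a u64 modifier. For example, a linear-modifier entry
 * for DRM_FORMAT_XRGB8888 (fourcc 'XR24' = 0x34325258) is the byte sequence
 * 58 52 32 34 00 00 00 00 00 00 00 00 00 00 00 00.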
*/ #[cfg(feature = "dmabuf")] fn send_linux_dmabuf_feedback( ctx: &mut ProtocolTestContext, vulk: &VulkanDevice, feedback: ObjId, ) -> BTreeMap> { let main_device = vulk.get_device(); let advertised_formats = [ wayland_to_drm(WlShmFormat::R8), wayland_to_drm(WlShmFormat::Rgb565), wayland_to_drm(WlShmFormat::Argb8888), wayland_to_drm(WlShmFormat::Xrgb8888), wayland_to_drm(WlShmFormat::Xbgr16161616), wayland_to_drm(WlShmFormat::Abgr16161616), ]; let mut table: Vec = Vec::new(); let mut array: Vec = Vec::new(); let mut i: u16 = 0; for fmt in advertised_formats { let modifier_list = vulk.get_supported_modifiers(fmt); for m in modifier_list { table.extend_from_slice(&fmt.to_le_bytes()); table.extend_from_slice(&0_u32.to_le_bytes()); table.extend_from_slice(&m.to_le_bytes()); array.extend_from_slice(&i.to_le_bytes()); i += 1; } } let table_fd = make_file_with_contents(&table).unwrap(); let msgs = build_msgs(|dst| { write_evt_zwp_linux_dmabuf_feedback_v1_format_table( dst, feedback, false, table.len() as u32, ); write_evt_zwp_linux_dmabuf_feedback_v1_main_device( dst, feedback, &main_device.to_le_bytes(), ); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_target_device( dst, feedback, &main_device.to_le_bytes(), ); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_flags(dst, feedback, 0); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_formats(dst, feedback, &array[..]); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_done(dst, feedback); write_evt_zwp_linux_dmabuf_feedback_v1_done(dst, feedback); }); let (rmsgs, rfds) = ctx.comp_write(&msgs[..], &[&table_fd]).unwrap(); let mut format_table = Vec::new(); process_linux_dmabuf_feedback(rmsgs, rfds, &mut format_table, feedback) } /** Helper function: setup wl_compositor and zwp_linux_dmabuf_v1 globals and a surface, * and return the map of format-modifier combinations supported by the program-side Waypipe * instance. */ #[cfg(feature = "dmabuf")] fn setup_linux_dmabuf( ctx: &mut ProtocolTestContext, vulk: &VulkanDevice, display: ObjId, registry: ObjId, dmabuf: ObjId, comp: ObjId, surface: ObjId, feedback: ObjId, ) -> BTreeMap> { ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, ZWP_LINUX_DMABUF_V1, 5); write_evt_wl_registry_global(dst, registry, 2, WL_COMPOSITOR, 6); })); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, ZWP_LINUX_DMABUF_V1, 5, dmabuf); write_req_wl_registry_bind(dst, registry, 2, WL_COMPOSITOR, 6, comp); write_req_wl_compositor_create_surface(dst, comp, surface); write_req_zwp_linux_dmabuf_v1_get_default_feedback(dst, dmabuf, feedback); })); send_linux_dmabuf_feedback(ctx, vulk, feedback) } /** Test that `wl_buffer` objects backed by DMABUFs are properly replicated. 
*/ #[cfg(feature = "dmabuf")] fn proto_dmabuf(info: TestInfo, device: RenderDevice) -> TestResult { let Ok(vulk) = setup_vulkan(device.id) else { return Ok(StatusOk::Skipped); }; run_protocol_test_with_drm_node(&info, &device, &|mut ctx: ProtocolTestContext| { let (display, registry, dmabuf, comp, surface, feedback, params, buffer, sbuffer) = ( ObjId(1), ObjId(2), ObjId(3), ObjId(4), ObjId(5), ObjId(6), ObjId(7), ObjId(8), ObjId(0xff000000), ); let supported_modifier_table = setup_linux_dmabuf( &mut ctx, &vulk, display, registry, dmabuf, comp, surface, feedback, ); let fmt = wayland_to_drm(WlShmFormat::Rgb565); let bpp = 2; let Some(mod_list) = supported_modifier_table.get(&fmt) else { println!("Skipping test, format not supported"); return; }; for (w, h, immed) in [(3, 4, true), (64, 64, true), (513, 511, false)] { let mut img_data = vec![0u8; w * h * bpp]; let mut i: u16 = 0x1234; /* Draw a square inside */ for y in (h / 3)..(2 * h) / 3 { for x in (w / 3)..(2 * w) / 3 { img_data[2 * (y * w + x)..2 * (y * w + x) + 2] .copy_from_slice(&i.to_le_bytes()); i = i.wrapping_add(1); } } let (img, planes) = vulkan_create_dmabuf(&vulk, w as u32, h as u32, fmt, mod_list, false).unwrap(); let msgs = build_msgs(|dst| { write_req_zwp_linux_dmabuf_v1_create_params(dst, dmabuf, params); for p in planes.iter() { let (mod_hi, mod_lo) = split_u64(p.modifier); write_req_zwp_linux_buffer_params_v1_add( dst, params, false, p.plane_idx, p.offset, p.stride, mod_hi, mod_lo, ); } if immed { write_req_zwp_linux_buffer_params_v1_create_immed( dst, params, buffer, w as i32, h as i32, fmt, 0, ); } else { write_req_zwp_linux_buffer_params_v1_create( dst, params, w as i32, h as i32, fmt, 0, ); } }); let plane_fds: Vec<&OwnedFd> = planes.iter().map(|p| &p.fd).collect(); let (rmsgs, mut rfds) = ctx.prog_write(&msgs[..], &plane_fds[..]).unwrap(); drop(planes); let add_msgs = &rmsgs[1..rmsgs.len() - 1]; assert!(rfds.len() == add_msgs.len()); let create_msg = &rmsgs[rmsgs.len() - 1]; let (rw, rh, rfmt) = if immed { let (_rbuf, rw, rh, rfmt, _rflags) = parse_req_zwp_linux_buffer_params_v1_create_immed(&create_msg[..]).unwrap(); (rw, rh, rfmt) } else { let (rw, rh, rfmt, _rflags) = parse_req_zwp_linux_buffer_params_v1_create(&create_msg[..]).unwrap(); (rw, rh, rfmt) }; assert!((rw, rh, rfmt) == (w as i32, h as i32, fmt)); let mut planes = Vec::new(); for (fd, msg) in rfds.drain(..).zip(add_msgs.iter()) { let (plane_idx, offset, stride, mod_hi, mod_lo) = parse_req_zwp_linux_buffer_params_v1_add(msg).unwrap(); let modifier = join_u64(mod_hi, mod_lo); planes.push(AddDmabufPlane { fd, plane_idx, offset, stride, modifier, }); } let mirror = vulkan_import_dmabuf(&vulk, planes, w as u32, h as u32, fmt, false).unwrap(); let tmp = Arc::new(vulkan_get_buffer(&vulk, img.nominal_size(None), true).unwrap()); copy_onto_dmabuf(&img, &tmp, &img_data).unwrap(); if immed { ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_surface_attach(dst, surface, buffer, 0, 0); write_req_wl_surface_damage_buffer(dst, surface, 0, 0, i32::MAX, i32::MAX); write_req_wl_surface_commit(dst, surface); })); } else { ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_zwp_linux_buffer_params_v1_created(dst, params, sbuffer); })); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_surface_attach(dst, surface, sbuffer, 0, 0); write_req_wl_surface_damage_buffer(dst, surface, 0, 0, i32::MAX, i32::MAX); write_req_wl_surface_commit(dst, surface); })); } let tmp = Arc::new(vulkan_get_buffer(&vulk, mirror.nominal_size(None), true).unwrap()); let 
mir_data = copy_from_dmabuf(&mirror, &tmp).unwrap();
            assert!(img_data == mir_data);
            /* Cleanup buffer and params objects, for reuse with next image */
            ctx.prog_write_passthrough(build_msgs(|dst| {
                write_req_zwp_linux_buffer_params_v1_destroy(dst, params);
                write_req_wl_buffer_destroy(dst, if immed { buffer } else { sbuffer });
            }));
            ctx.comp_write_passthrough(build_msgs(|dst| {
                write_evt_wl_display_delete_id(dst, display, params.0);
                if immed {
                    write_evt_wl_display_delete_id(dst, display, buffer.0);
                }
            }));
        }
    })?;
    Ok(StatusOk::Pass)
}
/** Test that Waypipe can process dmabuf feedback _changes_ (including table changes
 * during the feedback events) properly */
#[cfg(feature = "dmabuf")]
fn proto_dmabuf_feedback_table(info: TestInfo, device: RenderDevice) -> TestResult {
    if setup_vulkan(device.id).is_err() {
        return Ok(StatusOk::Skipped);
    };
    fn make_simple_format_table(formats: &[u32]) -> (OwnedFd, u32) {
        let mod_linear = 0u64;
        let mut table: Vec<u8> = Vec::new();
        for fmt in formats {
            table.extend_from_slice(&fmt.to_le_bytes());
            table.extend_from_slice(&0_u32.to_le_bytes());
            table.extend_from_slice(&mod_linear.to_le_bytes());
        }
        (make_file_with_contents(&table).unwrap(), table.len() as u32)
    }
    fn make_index_array(indices: &[usize]) -> Vec<u8> {
        let mut arr: Vec<u8> = Vec::new();
        for i in indices {
            arr.extend_from_slice(&(*i as u16).to_le_bytes());
        }
        arr
    }
    run_protocol_test_with_drm_node(&info, &device, &|mut ctx: ProtocolTestContext| {
        let [display, registry, dmabuf, feedback, ..] = ID_SEQUENCE;
        /* Note: Vulkan implementations are required to support TRANSFER_SRC_BIT |
         * TRANSFER_DST_BIT for the following formats, so Waypipe should be able to
         * process all their corresponding formats.
         *
         * R5G6B5_UNORM_PACK16, R8G8_UNORM, R8G8B8A8_UNORM, A2B10G10R10_UNORM
         * R16G16B16A16_SFLOAT.
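         *
         * A deliberately invalid format code is also advertised below, to
         * exercise Waypipe's handling of formats it cannot use.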
*/ let formats: [u32; 4] = [ wayland_to_drm(WlShmFormat::R8), wayland_to_drm(WlShmFormat::Rgb565), wayland_to_drm(WlShmFormat::Argb8888), wayland_to_drm(WlShmFormat::Xrgb8888), ]; println!("Formats: {:?}", formats); let unsupported_format: u32 = 0xFFFFFFFF; let main_device = &device.id.to_le_bytes(); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, ZWP_LINUX_DMABUF_V1, 5); })); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, ZWP_LINUX_DMABUF_V1, 5, dmabuf); write_req_zwp_linux_dmabuf_v1_get_default_feedback(dst, dmabuf, feedback); })); /* Round 1: Regular setup */ let r1_formats = [formats[0], formats[1]]; let (r1_fd, r1_size) = make_simple_format_table(&r1_formats); let mut format_table = Vec::new(); let ret = ctx .comp_write( &build_msgs(|dst| { write_evt_zwp_linux_dmabuf_feedback_v1_format_table( dst, feedback, false, r1_size, ); write_evt_zwp_linux_dmabuf_feedback_v1_main_device(dst, feedback, main_device); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_target_device( dst, feedback, main_device, ); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_flags(dst, feedback, 0); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_formats( dst, feedback, &make_index_array(&[1, 0]), ); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_done(dst, feedback); write_evt_zwp_linux_dmabuf_feedback_v1_done(dst, feedback); }), &[&r1_fd], ) .unwrap(); let map = process_linux_dmabuf_feedback(ret.0, ret.1, &mut format_table, feedback); println!("Round 1: {:?}", map); for f in r1_formats { assert!(map.contains_key(&f), "missing {:x}", f); } /* Round 2: Repeat setup with slightly different tranches, keeping the same table */ let ret = ctx .comp_write( &build_msgs(|dst| { write_evt_zwp_linux_dmabuf_feedback_v1_main_device(dst, feedback, main_device); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_target_device( dst, feedback, main_device, ); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_flags(dst, feedback, 0); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_formats( dst, feedback, &make_index_array(&[0]), ); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_done(dst, feedback); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_target_device( dst, feedback, main_device, ); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_flags(dst, feedback, 0); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_formats( dst, feedback, &make_index_array(&[1]), ); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_done(dst, feedback); write_evt_zwp_linux_dmabuf_feedback_v1_done(dst, feedback); }), &[], ) .unwrap(); let map = process_linux_dmabuf_feedback(ret.0, ret.1, &mut format_table, feedback); println!("Round 2: {:?}", map); for f in r1_formats { assert!(map.contains_key(&f), "missing {:x}", f); } /* Round 3: Changing format table for each tranche, or more frequently. * (Note: the protocol specification implies but does not explicitly state that * there is at most one format table per feedback::done; in practice most * implementations will work even if the table is changed more often.) 
* * Table 1; tranche 1 * Table 2: tranche 2 * Table 3: ignored * Table 4: empty, ignored * Table 5: tranches 3+4 */ let t1_formats = [formats[1]]; let t2_formats = [unsupported_format, formats[0]]; let t3_formats = [formats[0], formats[3], formats[3], formats[0]]; let t4_formats: [u32; 0] = []; let t5_formats = [formats[3], formats[2], unsupported_format]; let (t1, t2, t3, t4, t5) = ( make_simple_format_table(&t1_formats), make_simple_format_table(&t2_formats), make_simple_format_table(&t3_formats), make_simple_format_table(&t4_formats), make_simple_format_table(&t5_formats), ); let ret = ctx .comp_write( &build_msgs(|dst| { write_evt_zwp_linux_dmabuf_feedback_v1_main_device(dst, feedback, main_device); write_evt_zwp_linux_dmabuf_feedback_v1_format_table(dst, feedback, false, t1.1); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_target_device( dst, feedback, main_device, ); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_flags(dst, feedback, 0); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_formats( dst, feedback, &make_index_array(&[0]), ); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_done(dst, feedback); write_evt_zwp_linux_dmabuf_feedback_v1_format_table(dst, feedback, false, t2.1); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_target_device( dst, feedback, main_device, ); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_flags(dst, feedback, 0); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_formats( dst, feedback, &make_index_array(&[1]), ); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_done(dst, feedback); write_evt_zwp_linux_dmabuf_feedback_v1_format_table(dst, feedback, false, t3.1); write_evt_zwp_linux_dmabuf_feedback_v1_format_table(dst, feedback, false, t4.1); write_evt_zwp_linux_dmabuf_feedback_v1_format_table(dst, feedback, false, t5.1); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_target_device( dst, feedback, main_device, ); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_flags(dst, feedback, 0); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_formats( dst, feedback, &make_index_array(&[0, 1, 2]), ); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_done(dst, feedback); write_evt_zwp_linux_dmabuf_feedback_v1_done(dst, feedback); }), &[&t1.0, &t2.0, &t3.0, &t4.0, &t5.0], ) .unwrap(); let map = process_linux_dmabuf_feedback(ret.0, ret.1, &mut format_table, feedback); println!("Round 3: {:?}", map); for f in formats { assert!(map.contains_key(&f), "missing {:x}", f); } })?; Ok(StatusOk::Pass) } /** Test that Waypipe correctly processes linux-dmabuf format and modifier * advertisements for linux-dmabuf version ≤ 3 */ #[cfg(feature = "dmabuf")] fn proto_dmabuf_pre_v4(info: TestInfo, device: RenderDevice) -> TestResult { let Ok(vulk) = setup_vulkan(device.id) else { return Ok(StatusOk::Skipped); }; let supported_modifier: u64 = 0x0; let unsupported_modifier: u64 = 0x1; let unsupported_format: u32 = 0x0; let formats: [u32; 5] = [ wayland_to_drm(WlShmFormat::R8), wayland_to_drm(WlShmFormat::Rgb565), wayland_to_drm(WlShmFormat::Argb8888), wayland_to_drm(WlShmFormat::Xrgb8888), unsupported_format, /* unsupported */ ]; let mut ext_mods = Vec::from(vulk.get_supported_modifiers(formats[2])); assert!(ext_mods.contains(&supported_modifier)); ext_mods.insert(0, unsupported_modifier); let modifiers: [&[u64]; 5] = [ &[supported_modifier], &[supported_modifier, unsupported_modifier], &ext_mods, &[unsupported_modifier], &[supported_modifier], ]; println!("Initial formats: {:?}", formats); println!("Initial modifiers: {:?}", modifiers); run_protocol_test_with_drm_node(&info, &device, &|mut ctx: 
ProtocolTestContext| { /* version 3 introduced zwp_linux_dmabuf_v1::modifier */ let [display, registry, dmabuf_v1, dmabuf_v3, ..] = ID_SEQUENCE; ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, ZWP_LINUX_DMABUF_V1, 1); write_evt_wl_registry_global(dst, registry, 2, ZWP_LINUX_DMABUF_V1, 3); })); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, ZWP_LINUX_DMABUF_V1, 1, dmabuf_v1); write_req_wl_registry_bind(dst, registry, 2, ZWP_LINUX_DMABUF_V1, 3, dmabuf_v3); })); let msgs_v1 = build_msgs(|dst| { for f in formats.iter() { write_evt_zwp_linux_dmabuf_v1_format(dst, dmabuf_v1, *f); } }); let (rmsgs_v1, rfds_v1) = ctx.comp_write(&msgs_v1, &[]).unwrap(); assert!(rfds_v1.is_empty()); let mut recvd_fmts_v1 = BTreeSet::<u32>::new(); for msg in rmsgs_v1 { assert!( parse_wl_header(&msg) == ( dmabuf_v1, length_evt_zwp_linux_dmabuf_v1_format(), OPCODE_ZWP_LINUX_DMABUF_V1_FORMAT.code() ) ); let fmt = parse_evt_zwp_linux_dmabuf_v1_format(&msg).unwrap(); recvd_fmts_v1.insert(fmt); } println!("Received formats for v1: {:?}", recvd_fmts_v1); for f in formats.iter() { assert!(recvd_fmts_v1.contains(f) == (*f != unsupported_format)); } let msgs_v3 = build_msgs(|dst| { for (f, mods) in formats.iter().zip(modifiers.iter()) { write_evt_zwp_linux_dmabuf_v1_format(dst, dmabuf_v3, *f); for m in mods.iter() { let (m_hi, m_lo) = split_u64(*m); write_evt_zwp_linux_dmabuf_v1_modifier(dst, dmabuf_v3, *f, m_hi, m_lo); } } }); let (rmsgs_v3, rfds_v3) = ctx.comp_write(&msgs_v3, &[]).unwrap(); assert!(rfds_v3.is_empty()); let mut recvd_fmts_v3 = BTreeSet::<u32>::new(); let mut recvd_mods_v3 = BTreeMap::<u32, Vec<u64>>::new(); for msg in rmsgs_v3 { if parse_wl_header(&msg).2 == OPCODE_ZWP_LINUX_DMABUF_V1_FORMAT.code() { assert!( parse_wl_header(&msg) == ( dmabuf_v3, length_evt_zwp_linux_dmabuf_v1_format(), OPCODE_ZWP_LINUX_DMABUF_V1_FORMAT.code() ) ); let fmt = parse_evt_zwp_linux_dmabuf_v1_format(&msg).unwrap(); recvd_fmts_v3.insert(fmt); } else { assert!( parse_wl_header(&msg) == ( dmabuf_v3, length_evt_zwp_linux_dmabuf_v1_modifier(), OPCODE_ZWP_LINUX_DMABUF_V1_MODIFIER.code() ) ); let (fmt, mod_hi, mod_lo) = parse_evt_zwp_linux_dmabuf_v1_modifier(&msg).unwrap(); recvd_mods_v3 .entry(fmt) .or_default() .push(join_u64(mod_hi, mod_lo)); } } println!("Received formats for v3: {:?}", recvd_fmts_v3); println!("Received modifiers for v3: {:?}", recvd_mods_v3); for (i, f) in formats.iter().enumerate() { assert!(recvd_fmts_v3.contains(f) == (*f != unsupported_format)); if i < 3 { let mod_list = recvd_mods_v3.get(f); /* Modifiers returned depend on what Waypipe supports, which should at least * include the linear modifier */ assert!(mod_list.is_some() && mod_list.unwrap().contains(&supported_modifier)); } else { assert!(!recvd_mods_v3.contains_key(f)); } } })?; Ok(StatusOk::Pass) } /** Helper function to return data for a test image split into four quadrants with * different colors */ #[cfg(feature = "video")] fn fill_blocks_xrgb(w: usize, h: usize) -> Vec<u8> { let bpp = 4; let colors: [u32; 4] = [0xaf0080ff, 0xbf00ff00, 0xcf800080, 0xdf808080]; let mut img_data = vec![0u8; w * h * bpp]; for y in 0..h { for x in 0..w { let by = (2 * y) / h; let bx = (2 * x) / w; let c = colors[2 * by + bx] ^ ((((x * y) % 2) as u32) * 0x00010101); img_data[bpp * (y * w + x)..bpp * (y * w + x) + bpp].copy_from_slice(&c.to_le_bytes()); } } img_data } /** Fill a buffer with a pattern
that does not have any major obvious patterns and * is unlikely to video-encode well */ #[cfg(feature = "video")] fn fill_pseudorand_xrgb(w: usize, h: usize) -> Vec { let bpp = 4; let mut img_data = vec![0u8; w * h * bpp]; let mut a: u32 = 1; for y in 0..h { for x in 0..w { a = (2 * a) % 16777213; let c = a | 0xff000000; img_data[bpp * (y * w + x)..bpp * (y * w + x) + bpp].copy_from_slice(&c.to_le_bytes()); } } img_data } /** Helper function to create and share a DMABUF with a wl_buffer. */ #[cfg(feature = "dmabuf")] fn create_dmabuf_and_copy( vulk: &Arc, ctx: &mut ProtocolTestContext, params: ObjId, dmabuf: ObjId, buffer: ObjId, w: usize, h: usize, fmt: u32, modifier_list: &[u64], initial_data: &[u8], ) -> Result<(Arc, Arc), String> { let (img, planes) = vulkan_create_dmabuf(vulk, w as u32, h as u32, fmt, modifier_list, false).unwrap(); /* Initialize the image with garbage data */ let tmp_img = Arc::new(vulkan_get_buffer(vulk, img.nominal_size(None), false).unwrap()); copy_onto_dmabuf(&img, &tmp_img, initial_data).unwrap(); let msgs = build_msgs(|dst| { write_req_zwp_linux_dmabuf_v1_create_params(dst, dmabuf, params); for p in planes.iter() { let (mod_hi, mod_lo) = split_u64(p.modifier); write_req_zwp_linux_buffer_params_v1_add( dst, params, false, p.plane_idx, p.offset, p.stride, mod_hi, mod_lo, ); } write_req_zwp_linux_buffer_params_v1_create_immed( dst, params, buffer, w as i32, h as i32, fmt, 0, ); }); let plane_fds: Vec<&OwnedFd> = planes.iter().map(|p| &p.fd).collect(); let (rmsgs, mut rfds) = ctx .prog_write(&msgs[..], &plane_fds[..]) .map_err(|_| tag!("Failed to replicate dmabuf"))?; drop(planes); let add_msgs = &rmsgs[1..rmsgs.len() - 1]; assert!(rfds.len() == add_msgs.len()); let create_msg = &rmsgs[rmsgs.len() - 1]; let (_rbuf, rw, rh, rfmt, _rflags) = parse_req_zwp_linux_buffer_params_v1_create_immed(&create_msg[..]).unwrap(); assert!((rw, rh, rfmt) == (w as i32, h as i32, fmt)); let mut planes = Vec::new(); for (fd, msg) in rfds.drain(..).zip(add_msgs.iter()) { let (plane_idx, offset, stride, mod_hi, mod_lo) = parse_req_zwp_linux_buffer_params_v1_add(msg).unwrap(); let modifier = join_u64(mod_hi, mod_lo); planes.push(AddDmabufPlane { fd, plane_idx, offset, stride, modifier, }); } let mirror = vulkan_import_dmabuf(vulk, planes, w as u32, h as u32, fmt, false).unwrap(); Ok((img, mirror)) } /** Helper function to test that video encoding works and approximately replicates test patterns; * the format and other video properties are specified in `opts`. */ #[cfg(feature = "video")] fn test_dmavid_inner( vulk: &Arc, info: &TestInfo, opts: &WaypipeOptions, sizes: &[(usize, usize)], ) -> Vec { let mut accurate_replication = Vec::new(); for _ in sizes { accurate_replication.push(AtomicBool::new(true)); } run_protocol_test_with_opts( info, opts, opts, &|mut ctx: ProtocolTestContext| { let (display, registry, dmabuf, comp, surface, feedback, params, buffer) = ( ObjId(1), ObjId(2), ObjId(3), ObjId(4), ObjId(5), ObjId(6), ObjId(7), ObjId(8), ); let supported_modifier_table = setup_linux_dmabuf( &mut ctx, vulk, display, registry, dmabuf, comp, surface, feedback, ); let fmt = wayland_to_drm(WlShmFormat::Xrgb8888); let Some(modifier_list) = supported_modifier_table.get(&fmt) else { println!("Skipping test, format {:#08x} not supported", fmt); return; }; // todo: run these in parallel? // In older libavcodec versions, the small (width <= 32, height <= 16) test sizes // failed with uniform (88) green channel. More recent versions enforce minimum sizes. 
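// A sketch of the acceptance criterion applied below: net_diff sums the absolute
// B/G/R channel errors over the whole image, avg_diff = net_diff / (w * h), and the
// threshold 32 + (255 - 32) / w + (255 - 32) / h allows +/- 32 of noise on flat
// regions plus roughly one fully-wrong pixel row and column along the block edges.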
for (i, (w, h)) in sizes.iter().copied().enumerate() { println!("Testing image transfer for WxH: {}x{}", w, h); let seed = fill_pseudorand_xrgb(w, h); let (img, mirror) = create_dmabuf_and_copy( vulk, &mut ctx, params, dmabuf, buffer, w, h, fmt, modifier_list, &seed, ) .unwrap(); let img_data = fill_blocks_xrgb(w, h); let tmp_img = Arc::new(vulkan_get_buffer(vulk, img.nominal_size(None), true).unwrap()); copy_onto_dmabuf(&img, &tmp_img, &img_data).unwrap(); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_surface_attach(dst, surface, buffer, 0, 0); write_req_wl_surface_damage_buffer(dst, surface, 0, 0, i32::MAX, i32::MAX); write_req_wl_surface_commit(dst, surface); })); let tmp_mirror = Arc::new(vulkan_get_buffer(vulk, mirror.nominal_size(None), true).unwrap()); let mir_data = copy_from_dmabuf(&mirror, &tmp_mirror).unwrap(); let mut net_diff: u64 = 0; for (px_i, px_m) in img_data.chunks_exact(4).zip(mir_data.chunks_exact(4)) { let (ib, ig, ir, _ix) = (px_i[0], px_i[1], px_i[2], px_i[3]); let (mb, mg, mr, _mx) = (px_m[0], px_m[1], px_m[2], px_m[3]); net_diff += ib.abs_diff(mb) as u64 + ig.abs_diff(mg) as u64 + ir.abs_diff(mr) as u64; } let avg_diff = net_diff / ((w * h) as u64); /* Allow up to +/- 32 error on flat regions, and +/- 255 on edges between colored blocks */ let mut threshold: u64 = 32 + (255 - 32) / (w as u64) + (255 - 32) / (h as u64); if w == 1 && h == 1 { threshold = 32; } if net_diff == 0 { /* Either lossless video, or no video encoding at all */ println!( "Perfect replication of {}x{} image, likely not using video enc/decoding", w, h ); } else { println!( "Average difference on slightly noisy {}x{} block pattern: {}, threshold {}", w, h, (net_diff as f32) / ((w * h) as f32), threshold, ); if avg_diff >= threshold { /* XRGB8888 bytes are ordered: B G R x */ for (i, channel) in ["Blue", "Green", "Red"].iter().enumerate() { println!("{} channel, original / replicated", channel); for y in 0..h { for x in 0..w { let base: [u8; 4] = img_data [(4 * y * w + 4 * x)..(4 * y * w + 4 * x + 4)] .try_into() .unwrap(); let rep: [u8; 4] = mir_data [(4 * y * w + 4 * x)..(4 * y * w + 4 * x + 4)] .try_into() .unwrap(); print!("{:2x}/{:2x} ", base[i], rep[i]); } println!(); } } } } if avg_diff > threshold { accurate_replication[i].store(false, std::sync::atomic::Ordering::SeqCst); } // TODO: once video library bugs are resolved, set this // assert!(avg_diff < threshold); /* Cleanup buffer and params objects, for reuse with next image */ ctx.prog_write_passthrough(build_msgs(|dst| { write_req_zwp_linux_buffer_params_v1_destroy(dst, params); write_req_wl_buffer_destroy(dst, buffer); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_display_delete_id(dst, display, params.0); write_evt_wl_display_delete_id(dst, display, buffer.0); })); } } ).unwrap(); accurate_replication .iter() .map(|x| x.load(std::sync::atomic::Ordering::SeqCst)) .collect() } /** Test that video encoding works and approximately replicates test patterns. 
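 *
 * Each invocation exercises one (format, hardware/software encoder, hardware/software
 * decoder) combination via `test_dmavid_inner` over several buffer sizes; when
 * `accurate_on_small_images` is false, sizes below 128x128 may replicate
 * inaccurately without failing the test.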
*/ #[cfg(feature = "video")] fn proto_dmavid( info: TestInfo, device: RenderDevice, video_format: VideoFormat, try_hw_dec: bool, try_hw_enc: bool, accurate_on_small_images: bool, ) -> TestResult { let Ok(vulk) = setup_vulkan(device.id) else { return Ok(StatusOk::Skipped); }; let opts = WaypipeOptions { compression: Compression::None, video: VideoSetting { format: Some(video_format), bits_per_frame: None, // note: very high/low values can cause codec failure enc_pref: Some(if try_hw_enc { CodecPreference::HW } else { CodecPreference::SW }), dec_pref: Some(if try_hw_dec { CodecPreference::HW } else { CodecPreference::SW }), }, title_prefix: "", drm_node: Some(device.id), device_type: device.device_type, wire_version: None, }; println!( "\nTrying combination: video={:?}, try_hw_dec={}, try_hw_enc={}", video_format, try_hw_dec, try_hw_enc ); let sizes = [(257, 240), (222, 129), (128, 333), (64, 64), (1, 1)]; let passes = test_dmavid_inner(&vulk, &info, &opts, &sizes); let mut overall_pass = true; let mut perfect = true; for (pass, (w, h)) in passes.iter().zip(sizes.iter().copied()) { let is_small = w < 128 || h < 128; println!( "Result for video={:?}, try_hw_dec={}, try_hw_enc={}, size={:?}: {}", video_format, try_hw_dec, try_hw_enc, (w, h), if *pass { "pass" } else { "fail" } ); if accurate_on_small_images || !is_small { if !pass { overall_pass = false; } } if !pass { perfect = false; } } // NOTE: as of writing, AMD hardware video decoding, and Intel hardware video // encoding, of size w<=32, h<=16 (w=32,h=16) images does not accurately // reproduce colors. Nor do x264 or vp9 software encoding/decoding for 1x1 images. println!( "Overall result for video={:?}, try_hw_dec={}, try_hw_enc={}: expected accurate on small {}; was accurate {}; accurate on all {}", video_format, try_hw_dec, try_hw_enc, accurate_on_small_images, overall_pass, perfect ); assert!(overall_pass); if perfect { Ok(StatusOk::Pass) } else { Err(StatusBad::Unclear("Video replication not exact".into())) } } /** Test that very long messages are either cleanly accepted or cleanly rejected. */ fn proto_oversized(info: TestInfo) -> TestResult { run_protocol_test(&info, &|mut ctx: ProtocolTestContext| { let (display, registry) = (ObjId(1), ObjId(2)); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); })); let msg_len = (1 << 16) - 4; let str_length = msg_len - length_evt_wl_registry_global(0); assert!(length_evt_wl_registry_global(str_length) == msg_len); let mut msg = vec![0; msg_len]; let mut dst = &mut msg[..]; let long_name = vec![b'a'; str_length]; write_evt_wl_registry_global(&mut dst, registry, 1, &long_name, 1); println!( "header: {:?}, {}", parse_wl_header(&msg), length_evt_wl_registry_global(0) ); /* Waypipe should either reject or pass the message, but not hang or crash */ let res = ctx.comp_write(&msg, &[]); assert!(res.is_err() || res.is_ok_and(|x| x.0.concat() == msg)); })?; Ok(StatusOk::Pass) } /** Damage a buffer and return rectangles containing the damage. * * Individual sub-patterns are labeled by the high 4 bits of each byte, with the low 4 bits used * to make data harder to replicate by accident. 
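 *
 * For example, iteration 0 fills the top-left and bottom-left w/4-wide corner blocks
 * with bytes 0x10..=0x1a and 0x20..=0x2a respectively (cycling modulo 11), and
 * returns the two matching damage rectangles.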
*/ fn get_diff_damage( iter: usize, base: &mut [u8], w: usize, h: usize, stride: usize, bpp: usize, ) -> Vec<(i32, i32, i32, i32)> { let iw: i32 = w.try_into().unwrap(); let ih: i32 = h.try_into().unwrap(); let mut ctr = 0; const CYCLE: u8 = 11; match iter { 0 => { /* Test: large disjoint blocks, left side */ for y in 0..h / 4 { for x in 0..w / 4 { base[(y * stride + bpp * x)..(y * stride + bpp * x + bpp)].fill(0x10 + ctr); ctr = (ctr + 1) % CYCLE; } } for y in (3 * h) / 4..h { for x in 0..w / 4 { base[(y * stride + bpp * x)..(y * stride + bpp * x + bpp)].fill(0x20 + ctr); ctr = (ctr + 1) % CYCLE; } } vec![ (0, 0, iw / 4, ih / 4), (0, (3 * ih) / 4, iw / 4, (ih - (3 * ih) / 4)), ] } 1 => { /* Test: sparse differences in middle */ for y in (3 * h) / 8..(5 * h) / 8 { let x = y.clamp(w / 8, (7 * w) / 8); base[y * stride + bpp * x + bpp / 2] = 0x30 + ctr; ctr = (ctr + 1) % CYCLE; } vec![(0, (3 * ih) / 8, iw, (5 * ih) / 8 - (3 * ih) / 8)] } 2 => { /* Test: large disjoint blocks, right side */ for y in 0..h / 4 { for x in (w / 8)..w { base[(y * stride + bpp * x)..(y * stride + bpp * x + bpp)].fill(0x40 + ctr); ctr = (ctr + 1) % CYCLE; } } for y in (3 * h) / 4..h { for x in (w / 8)..w { base[(y * stride + bpp * x)..(y * stride + bpp * x + bpp)].fill(0x50 + ctr); ctr = (ctr + 1) % CYCLE; } } vec![ (0, 0, iw, ih / 4), (0, (3 * ih) / 4, iw, (ih - (3 * ih) / 4)), ] } _ => unreachable!(), } } /** Test to check that damaged regions of shm-type `wl_buffer`s are replicated. */ fn proto_shm_damage(info: TestInfo) -> TestResult { run_protocol_test(&info, &|mut ctx: ProtocolTestContext| { let (display, registry, shm, comp, surface, pool, buffer) = ( ObjId(1), ObjId(2), ObjId(3), ObjId(4), ObjId(5), ObjId(6), ObjId(7), ); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, WL_SHM, 2); write_evt_wl_registry_global(dst, registry, 2, WL_COMPOSITOR, 6); })); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, WL_SHM, 2, shm); write_req_wl_registry_bind(dst, registry, 2, WL_COMPOSITOR, 6, comp); write_req_wl_compositor_create_surface(dst, comp, surface); })); /* For wl_shm buffer replication, the format is not very important as it only affects damage calculations */ let (w, h) = (257, 257); for format in [WlShmFormat::Rgb565, WlShmFormat::Argb8888] { let bpp = match format { WlShmFormat::Argb8888 => 4, WlShmFormat::Rgb565 => 2, _ => unreachable!(), }; let stride = align(bpp * w, 19); let file_sz = h * stride; let mut base = vec![0xf0; file_sz]; for (i, x) in base.iter_mut().enumerate() { *x = 0xf0 + (i % 11) as u8; } let buffer_fd = make_file_with_contents(&base).unwrap(); let msg = build_msgs(|dst| { write_req_wl_shm_create_pool(dst, shm, false, pool, file_sz as i32); write_req_wl_shm_pool_create_buffer( dst, pool, buffer, 0, w as i32, h as i32, stride as i32, format as u32, ); write_req_wl_shm_pool_destroy(dst, pool); write_req_wl_surface_attach(dst, surface, buffer, 0, 0); write_req_wl_surface_damage_buffer(dst, surface, 0, 0, i32::MAX, i32::MAX); write_req_wl_surface_commit(dst, surface); }); let (rmsg, mut rfd) = ctx.prog_write(&msg[..], &[&buffer_fd]).unwrap(); assert!(rmsg.concat() == msg); assert!(rfd.len() == 1); let output_fd = rfd.remove(0); assert!(get_file_contents(&output_fd, file_sz).unwrap() == base); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_display_delete_id(dst, display, pool.0); })); for 
iter in 0..3 { let damage: Vec<(i32, i32, i32, i32)> = get_diff_damage(iter, &mut base, w, h, stride, bpp); update_file_contents(&buffer_fd, &base).unwrap(); ctx.prog_write_passthrough(build_msgs(|dst| { for d in damage { write_req_wl_surface_damage_buffer(dst, surface, d.0, d.1, d.2, d.3); } write_req_wl_surface_commit(dst, surface); })); assert!(get_file_contents(&output_fd, file_sz).unwrap() == base); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_buffer_release(dst, buffer); })); } ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_buffer_destroy(dst, buffer); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_display_delete_id(dst, display, buffer.0); })); } })?; Ok(StatusOk::Pass) } /** Test to check that damage calculations do not severely overestimate the damaged region; * in particular, that even the entire buffer contents are changed, but only a small region * is damaged, Waypipe will act as if the stated damage is correct and only update a small * region. */ fn proto_damage_efficiency(info: TestInfo) -> TestResult { run_protocol_test(&info, &|mut ctx: ProtocolTestContext| { let [display, registry, shm, comp, surface, pool, buffer, ..] = ID_SEQUENCE; ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, WL_SHM, 2); write_evt_wl_registry_global(dst, registry, 2, WL_COMPOSITOR, 6); })); let format = WlShmFormat::Argb8888; let bpp = 4; let (w, h): (u32, u32) = (128, 32); let stride = w * bpp; let file_sz = h * stride; let mut base = vec![0; file_sz as usize]; for (i, x) in base.iter_mut().enumerate() { *x = 0xf0 + ((i as u32).wrapping_mul(0x01234567) >> 28) as u8; } let buffer_fd = make_file_with_contents(&base).unwrap(); let setup_msgs = build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, WL_SHM, 2, shm); write_req_wl_registry_bind(dst, registry, 2, WL_COMPOSITOR, 6, comp); write_req_wl_compositor_create_surface(dst, comp, surface); write_req_wl_shm_create_pool(dst, shm, false, pool, file_sz as i32); write_req_wl_shm_pool_create_buffer( dst, pool, buffer, 0, w as i32, h as i32, stride as i32, format as u32, ); write_req_wl_shm_pool_destroy(dst, pool); write_req_wl_surface_attach(dst, surface, buffer, 0, 0); /* The initial attachment effectively damages everything */ write_req_wl_surface_damage_buffer(dst, surface, 0, 0, i32::MAX, i32::MAX); write_req_wl_surface_commit(dst, surface); }); let (rmsgs, mut rfds) = ctx.prog_write(&setup_msgs, &[&buffer_fd]).unwrap(); assert!(rmsgs.concat() == setup_msgs); assert!(rfds.len() == 1); let output_fd = rfds.pop().unwrap(); assert!(get_file_contents(&output_fd, file_sz as usize).unwrap() == base); for x in base.iter_mut() { *x = 0xff - *x; } update_file_contents(&buffer_fd, &base).unwrap(); ctx.prog_write_passthrough(build_msgs(|dst| { /* Update just the opposite corners. */ write_req_wl_surface_damage_buffer(dst, surface, 0, 0, 1, 1); write_req_wl_surface_damage_buffer(dst, surface, w as i32 - 1, h as i32 - 1, 1, 1); write_req_wl_surface_commit(dst, surface); })); let mut n_updated = 0; let updated = get_file_contents(&output_fd, file_sz as usize).unwrap(); for (u, b) in updated.iter().zip(base.iter()) { if u == b { n_updated += 1; } } println!( "Updated bytes: {} of ideal {}/{}", n_updated, 2 * bpp, file_sz ); assert!(n_updated <= file_sz / 8); })?; Ok(StatusOk::Pass) } /** Test to check that damaged regions of DMABUF-type `wl_buffer`s are replicated. 
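 *
 * This mirrors `proto_shm_damage`, but over DMABUFs, and uses texel sizes of 1, 2,
 * 4, and 8 bytes (R8, RGB565, ARGB8888, ABGR16161616), since the texel size affects
 * how diffs must be aligned.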
*/ #[cfg(feature = "dmabuf")] fn proto_dmabuf_damage(info: TestInfo, device: RenderDevice) -> TestResult { let Ok(vulk) = setup_vulkan(device.id) else { return Ok(StatusOk::Skipped); }; run_protocol_test_with_drm_node(&info, &device, &|mut ctx: ProtocolTestContext| { let (display, registry, dmabuf, comp, surface, feedback, params, buffer) = ( ObjId(1), ObjId(2), ObjId(3), ObjId(4), ObjId(5), ObjId(6), ObjId(7), ObjId(8), ); let supported_modifier_table = setup_linux_dmabuf( &mut ctx, &vulk, display, registry, dmabuf, comp, surface, feedback, ); /* For dmabuf replication, the format (well, texel size) affects diff alignment */ for (w, h) in [(64_usize, 64_usize), (257_usize, 257_usize)] { for wl_format in [ WlShmFormat::R8, WlShmFormat::Rgb565, WlShmFormat::Argb8888, WlShmFormat::Abgr16161616, ] { let bpp = match wl_format { WlShmFormat::Abgr16161616 => 8, WlShmFormat::Argb8888 => 4, WlShmFormat::Rgb565 => 2, WlShmFormat::R8 => 1, _ => unreachable!(), }; let format = wayland_to_drm(wl_format); let stride = bpp * w; let file_sz = h * stride; let Some(modifier_list) = supported_modifier_table.get(&format) else { println!("Skipping test, format {:#08x} not supported", format); continue; }; let mut base = vec![0xf0; file_sz]; for (i, x) in base.iter_mut().enumerate() { *x = 0xf0 + (i % 11) as u8; } let (img, mirror) = create_dmabuf_and_copy( &vulk, &mut ctx, params, dmabuf, buffer, w, h, format, modifier_list, &base, ) .unwrap(); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_surface_attach(dst, surface, buffer, 0, 0); write_req_wl_surface_damage_buffer(dst, surface, 0, 0, i32::MAX, i32::MAX); write_req_wl_surface_commit(dst, surface); })); let tmp_wr = Arc::new(vulkan_get_buffer(&vulk, img.nominal_size(None), false).unwrap()); let tmp_rd = Arc::new(vulkan_get_buffer(&vulk, img.nominal_size(None), true).unwrap()); let dup = copy_from_dmabuf(&mirror, &tmp_rd).unwrap(); fn freq_counts(s: &[u8]) -> Vec { let mut c = vec![0; 256]; for x in s { c[*x as usize] += 1; } c } assert!( dup == base, "initial mismatch {} {}\n{:?}\n{:?}\n{:?}\n{:?}", dup.len(), base.len(), &dup, &base, freq_counts(&dup), freq_counts(&base), ); for iter in 0..3 { let damage: Vec<(i32, i32, i32, i32)> = get_diff_damage(iter, &mut base, w, h, stride, bpp); copy_onto_dmabuf(&img, &tmp_wr, &base).unwrap(); ctx.prog_write_passthrough(build_msgs(|dst| { for d in damage { write_req_wl_surface_damage_buffer(dst, surface, d.0, d.1, d.2, d.3); } write_req_wl_surface_commit(dst, surface); })); let dup = copy_from_dmabuf(&mirror, &tmp_rd).unwrap(); assert!( dup == base, "mismatch iter {}, {} {}\n{:?}\n{:?}", iter, dup.len(), base.len(), dup, base ); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_buffer_release(dst, buffer); })); } ctx.prog_write_passthrough(build_msgs(|dst| { write_req_zwp_linux_buffer_params_v1_destroy(dst, params); write_req_wl_buffer_destroy(dst, buffer); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_display_delete_id(dst, display, params.0); write_evt_wl_display_delete_id(dst, display, buffer.0); })); } } })?; Ok(StatusOk::Pass) } /** Test to verify the damage is correctly calculated (or at least, overestimated) * when using wp_viewport together with wl_surface.damage */ fn proto_viewporter_damage(info: TestInfo) -> TestResult { run_protocol_test(&info, &|mut ctx: ProtocolTestContext| { let [display, registry, shm, comp, viewporter, surface, pool, buffer, viewport, ..] 
= ID_SEQUENCE; let scale: u32 = 2; let w: u32 = 50 * scale; let h: u32 = 7 * scale; let fmt = WlShmFormat::Argb8888; let bpp = 4; let stride = w * bpp; ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, WL_SHM, 2); write_evt_wl_registry_global(dst, registry, 2, WL_COMPOSITOR, 6); write_evt_wl_registry_global(dst, registry, 3, WP_VIEWPORTER, 1); })); let file_sz: usize = (w * h * bpp) as usize; let mut base = vec![0x01; file_sz]; let buffer_fd = make_file_with_contents(&base).unwrap(); let msg = build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, WL_SHM, 2, shm); write_req_wl_registry_bind(dst, registry, 2, WL_COMPOSITOR, 6, comp); write_req_wl_registry_bind(dst, registry, 3, WP_VIEWPORTER, 1, viewporter); write_req_wl_compositor_create_surface(dst, comp, surface); write_req_wl_shm_create_pool(dst, shm, false, pool, file_sz as i32); write_req_wl_shm_pool_create_buffer( dst, pool, buffer, 0, w as i32, h as i32, (w * bpp) as i32, fmt as u32, ); write_req_wp_viewporter_get_viewport(dst, viewporter, viewport, surface); write_req_wl_surface_set_buffer_scale(dst, surface, scale as i32); write_req_wl_surface_attach(dst, surface, buffer, 0, 0); }); let (rmsg, mut rfd) = ctx.prog_write(&msg[..], &[&buffer_fd]).unwrap(); assert!(rmsg.concat() == msg); assert!(rfd.len() == 1); let output_fd = rfd.remove(0); /* First, ensure replicated buffer matches */ ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_surface_damage(dst, surface, 0, 0, i32::MAX, i32::MAX); write_req_wl_surface_commit(dst, surface); })); assert!(get_file_contents(&output_fd, file_sz).unwrap() == base); /* Scale-only test */ ctx.prog_write_passthrough(build_msgs(|dst| { /* Set viewport parameters (which effectively invalidates buffer contents, * making Waypipe replicate the entire visible buffer contents */ write_req_wp_viewport_set_destination(dst, viewport, 200, 200); write_req_wl_surface_damage(dst, surface, 0, 0, i32::MAX, i32::MAX); write_req_wl_surface_commit(dst, surface); })); for x in w - 2..w { for y in h - 1..h { let o = (y * stride + x * bpp) as usize; base[o..o + bpp as usize].copy_from_slice(&0x11223344_u32.to_le_bytes()); } } update_file_contents(&buffer_fd, &base).unwrap(); ctx.prog_write_passthrough(build_msgs(|dst| { /* Now that viewport parameters are set, _this_ damage should be efficiently * tracked */ write_req_wl_surface_damage(dst, surface, 196, 196, 4, 4); write_req_wl_surface_commit(dst, surface); })); assert!(get_file_contents(&output_fd, file_sz).unwrap() == base); /* Crop-only test */ ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wp_viewport_set_destination(dst, viewport, -1, -1); write_req_wp_viewport_set_source( dst, viewport, 24 * 256 + 128, 4 * 256, 7 * 256, 3 * 256, ); write_req_wl_surface_damage(dst, surface, 0, 0, i32::MAX, i32::MAX); write_req_wl_surface_commit(dst, surface); })); for x in 25 * scale..(24 + 8) * scale { for y in 5 * scale..7 * scale { let o = (y * stride + x * bpp) as usize; base[o..o + bpp as usize].copy_from_slice(&0xaabbccdd_u32.to_le_bytes()); } } update_file_contents(&buffer_fd, &base).unwrap(); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_surface_damage(dst, surface, 1, 1, i32::MAX, i32::MAX); write_req_wl_surface_commit(dst, surface); })); assert!(get_file_contents(&output_fd, file_sz).unwrap() == base); /* Combination scale and crop test */ 
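/* Note: wp_viewport::set_source takes wl_fixed_t coordinates (24.8 fixed point,
 * 256 units per pixel), so e.g. 7 * 256 + 128 + 1 below encodes just over 7.5
 * surface-local units. */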
ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wp_viewport_set_source( dst, viewport, 7 * 256 + 128 + 1, 1, 7 * 256 + 254, 6 * 256 + 254, ); write_req_wp_viewport_set_destination(dst, viewport, 2, 2); write_req_wl_surface_damage(dst, surface, 0, 0, i32::MAX, i32::MAX); write_req_wl_surface_commit(dst, surface); })); for x in (7 + 3) * scale..(7 + 7) * scale { for y in 3 * scale..7 * scale { let o = (y * stride + x * bpp) as usize; base[o..o + bpp as usize].copy_from_slice(&0x12345678_u32.to_le_bytes()); } } update_file_contents(&buffer_fd, &base).unwrap(); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_surface_damage(dst, surface, 1, 1, 1, 1); write_req_wl_surface_commit(dst, surface); })); assert!(get_file_contents(&output_fd, file_sz).unwrap() == base); /* Test destroying the viewport */ ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wp_viewport_destroy(dst, viewport); write_req_wl_surface_damage(dst, surface, 0, 0, i32::MAX, i32::MAX); write_req_wl_surface_commit(dst, surface); })); for x in 20 * scale..21 * scale { for y in scale..2 * scale { let o = (y * stride + x * bpp) as usize; base[o..o + bpp as usize].copy_from_slice(&0x22222222_u32.to_le_bytes()); } } update_file_contents(&buffer_fd, &base).unwrap(); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_surface_damage(dst, surface, 20, 1, 1, 1); write_req_wl_surface_commit(dst, surface); })); assert!(get_file_contents(&output_fd, file_sz).unwrap() == base); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_surface_destroy(dst, surface); })); })?; Ok(StatusOk::Pass) } /** Test that the entire buffer is updated when a transform change implies buffer content * changes. Flipping the way the buffer is linked to the surface may require changing its * contents when there is no damage -- because damage is defined as the change to _surface_ * contents, not that to buffer contents */ fn proto_flip_damage(info: TestInfo) -> TestResult { run_protocol_test(&info, &|mut ctx: ProtocolTestContext| { let [display, registry, shm, comp, surface, pool, buffer, ..] 
= ID_SEQUENCE; let w: u32 = 80; let h: u32 = 2; let fmt = WlShmFormat::R8; let mut local_data = vec![0x00; (w * h) as usize]; for x in 0..w / 2 { for y in 0..h { local_data[(y * w + x) as usize] = 0xff; } } ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, WL_SHM, 2); write_evt_wl_registry_global(dst, registry, 2, WL_COMPOSITOR, 6); })); let local_fd = make_file_with_contents(&local_data).unwrap(); let msg = build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, WL_SHM, 2, shm); write_req_wl_registry_bind(dst, registry, 2, WL_COMPOSITOR, 6, comp); write_req_wl_compositor_create_surface(dst, comp, surface); write_req_wl_shm_create_pool(dst, shm, false, pool, (w * h) as i32); write_req_wl_shm_pool_create_buffer( dst, pool, buffer, 0, w as i32, h as i32, w as i32, fmt as u32, ); write_req_wl_surface_attach(dst, surface, buffer, 0, 0); write_req_wl_surface_damage(dst, surface, 0, 0, i32::MAX, i32::MAX); write_req_wl_surface_commit(dst, surface); }); let (rmsg, mut rfd) = ctx.prog_write(&msg[..], &[&local_fd]).unwrap(); assert!(rmsg.concat() == msg); assert!(rfd.len() == 1); let remote_fd = rfd.remove(0); assert!(get_file_contents(&remote_fd, (w * h) as usize).unwrap() == local_data); /* Flip the buffer and its transform horizontally, but do not report any damage, * since the pending buffer still agrees with the surface contents. */ local_data.fill(0xff); for x in 0..w / 2 { for y in 0..h { local_data[(y * w + x) as usize] = 0x00; } } update_file_contents(&local_fd, &local_data).unwrap(); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_surface_set_buffer_transform( dst, surface, WlOutputTransform::Flipped as i32, ); write_req_wl_surface_commit(dst, surface); })); let remote_data = get_file_contents(&remote_fd, (w * h) as usize).unwrap(); assert!( remote_data == local_data, "{:?} {:?}", remote_data, local_data ); })?; Ok(StatusOk::Pass) } /** Test that damage tracking is done _correctly_ when the surface transform and scale change. * * Design: with pseudorandomly transformed and scaled underlying buffers, render a scene in * which a sequence of pixels (in surface coordinates) are drawn, using precise damage * tracking to indicate in either surface or buffer coordinates what has changed. If Waypipe * correctly and efficiently tracks damage, it should always successfully replicate the * buffers. */ fn proto_rotating_damage(info: TestInfo) -> TestResult { let transforms = [ WlOutputTransform::Normal, WlOutputTransform::Item90, WlOutputTransform::Item180, WlOutputTransform::Item270, WlOutputTransform::Flipped, WlOutputTransform::Flipped90, WlOutputTransform::Flipped180, WlOutputTransform::Flipped270, ]; /** Transform pixel from buffer coordinates to surface coordinates, * assuming scale 1. * * `buf_sz` is the size of the buffer. * * `transform` is the transform passed to `wl_surface::set_buffer_transform`, which * is the "transformation the client has already applied to the content of the buffer". * As a result, when moving from buffer to surface, one needs to do the _inverse_ of the * operation that the transform says to do. 
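 *
 * A worked example, matching the sanity check further below: under `Item90`, a
 * buffer pixel (x, y) in a w x h buffer maps to surface pixel (h - 1 - y, x), so
 * buf_to_surface_tx(Item90, (1, 2), (100, 200)) == (197, 1), and
 * surface_to_buf_tx(Item90, (197, 1), (100, 200)) == (1, 2) round-trips it.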
*/ fn buf_to_surface_tx( transform: WlOutputTransform, mut px: (i32, i32), buf_sz: (u32, u32), ) -> (i32, i32) { let s: (i32, i32) = (buf_sz.0.try_into().unwrap(), buf_sz.1.try_into().unwrap()); px = match transform { WlOutputTransform::Normal | WlOutputTransform::Flipped => (px.0, px.1), WlOutputTransform::Item90 | WlOutputTransform::Flipped90 => (s.1 - 1 - px.1, px.0), WlOutputTransform::Item180 | WlOutputTransform::Flipped180 => { (s.0 - 1 - px.0, s.1 - 1 - px.1) } WlOutputTransform::Item270 | WlOutputTransform::Flipped270 => (px.1, s.0 - 1 - px.0), }; match transform { WlOutputTransform::Normal | WlOutputTransform::Item90 | WlOutputTransform::Item180 | WlOutputTransform::Item270 => px, WlOutputTransform::Flipped | WlOutputTransform::Flipped180 => (s.0 - 1 - px.0, px.1), WlOutputTransform::Flipped90 | WlOutputTransform::Flipped270 => (s.1 - 1 - px.0, px.1), } } /** Like buf_to_surface_tx; here the transform operation is applied to the input pixel * within a surface. * * `buf_sz` is the size of the _buffer_, not the surface. */ fn surface_to_buf_tx( transform: WlOutputTransform, mut px: (i32, i32), buf_sz: (u32, u32), ) -> (i32, i32) { let transpose = match transform { WlOutputTransform::Normal | WlOutputTransform::Item180 | WlOutputTransform::Flipped | WlOutputTransform::Flipped180 => false, WlOutputTransform::Item90 | WlOutputTransform::Item270 | WlOutputTransform::Flipped90 | WlOutputTransform::Flipped270 => true, }; let surf_sz: (i32, i32) = if transpose { (buf_sz.1.try_into().unwrap(), buf_sz.0.try_into().unwrap()) } else { (buf_sz.0.try_into().unwrap(), buf_sz.1.try_into().unwrap()) }; /* flip over vertical axis happens before rotation */ px = match transform { WlOutputTransform::Normal | WlOutputTransform::Item90 | WlOutputTransform::Item180 | WlOutputTransform::Item270 => px, /* Subtract 1 when flipping since pixels are actually [(x,x+1,y,y+1)] * rectangles whose bounds swap on subtraction. 
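 * For instance, flipping a width-4 surface maps pixel column 1 (covering [1, 2))
 * to column 4 - 1 - 1 = 2 (covering [2, 3)), not to column 4 - 1 = 3.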
*/ WlOutputTransform::Flipped | WlOutputTransform::Flipped90 | WlOutputTransform::Flipped180 | WlOutputTransform::Flipped270 => (surf_sz.0 - 1 - px.0, px.1), }; match transform { WlOutputTransform::Normal | WlOutputTransform::Flipped => (px.0, px.1), /* contents rotated 90 deg ccw around the top left corner (0,0) * * surface buffer * 0--------X 0---X * | | | *| * | * | -> | | * X--------X | | * | | * X---X */ WlOutputTransform::Item90 | WlOutputTransform::Flipped90 => { (px.1, surf_sz.0 - 1 - px.0) } WlOutputTransform::Item180 | WlOutputTransform::Flipped180 => { (surf_sz.0 - 1 - px.0, surf_sz.1 - 1 - px.1) } WlOutputTransform::Item270 | WlOutputTransform::Flipped270 => { (surf_sz.1 - 1 - px.1, px.0) } } } /* Sanity check, that transforms work correctly */ for t in transforms { let p = (1, 2); let q = buf_to_surface_tx(t, p, (100, 200)); let r = surface_to_buf_tx(t, q, (100, 200)); println!("{:?} {:?} {:?} {:?}", t, p, q, r); assert!(p == r); } fn draw_pixel( region: &mut [u8], buf_size: (u32, u32), bpp: u32, spx: (i32, i32), scale: u32, transform: WlOutputTransform, value: u16, ) -> Vec<(u32, u32)> { assert!(buf_size.0 % scale == 0 && buf_size.1 % scale == 0); let bpx = surface_to_buf_tx(transform, spx, (buf_size.0 / scale, buf_size.1 / scale)); println!( "Draw pixel: {:?} {} {:?}, surface {:?} -> buffer {:?}", transform, scale, (buf_size.0 / scale, buf_size.1 / scale), spx, (bpx.0 * scale as i32, bpx.1 * scale as i32), ); let mut mod_pixels = Vec::new(); if bpx.0 >= 0 && bpx.1 >= 0 { for x in (bpx.0 as u32) * scale..(bpx.0 as u32 + 1) * scale { for y in (bpx.1 as u32) * scale..(bpx.1 as u32 + 1) * scale { if x >= buf_size.0 || y >= buf_size.1 { continue; } region[(y * buf_size.0 * bpp + x * bpp) as usize ..(y * buf_size.0 * bpp + x * bpp + bpp) as usize] .copy_from_slice(&value.to_le_bytes()); mod_pixels.push((x, y)); } } } mod_pixels } run_protocol_test(&info, &|mut ctx: ProtocolTestContext| { let [display, registry, shm, comp, surface, pool, buffer_base, ..] 
= ID_SEQUENCE; let mut rng = BadRng { state: 1 }; let fmt = WlShmFormat::Rgb565; let bpp = 2; let pixel_s = 20; /* surface square size for pixel selection */ let s = 2 * pixel_s + 30; /* So that all pixels fall inside a scale 2 surface */ assert!(s * bpp >= 64); /* A randomly shuffled sequence which uses every transform+scale combination at least once */ let buffer_sequence: &[(usize, u32, u32, u32)] = &[ /* transform idx, scale, width, height */ (0, 1, s, s), /* seed, default setup */ (5, 2, s, s), (5, 1, s + 2, s - 2), (3, 2, s + 4, s - 4), (0, 1, s + 6, s - 6), (6, 1, s + 8, s - 8), (0, 2, s + 10, s - 10), (7, 2, s + 12, s - 12), (2, 1, s + 14, s - 14), (2, 2, s + 16, s - 16), (1, 1, s + 18, s - 18), (7, 1, s + 20, s - 20), (6, 2, s + 22, s - 22), (3, 1, s + 24, s - 24), (4, 1, s + 26, s - 26), (4, 2, s + 28, s - 28), (1, 2, s + 30, s - 30), ]; let mut offsets: Vec = Vec::new(); offsets.push(0); let mut file_sz = 0; for b in buffer_sequence { file_sz += (bpp * b.2 * b.3) as usize; offsets.push(file_sz as u32); } /* Setup: zero initialize three wl_buffers */ let mut local_data = vec![0x01; file_sz]; ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, WL_SHM, 2); write_evt_wl_registry_global(dst, registry, 2, WL_COMPOSITOR, 6); })); let local_fd = make_file_with_contents(&local_data).unwrap(); let msg = build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, WL_SHM, 2, shm); write_req_wl_registry_bind(dst, registry, 2, WL_COMPOSITOR, 6, comp); write_req_wl_compositor_create_surface(dst, comp, surface); write_req_wl_shm_create_pool(dst, shm, false, pool, file_sz as i32); for i in 0..buffer_sequence.len() { write_req_wl_shm_pool_create_buffer( dst, pool, ObjId(buffer_base.0 + i as u32), offsets[i] as i32, buffer_sequence[i].2 as i32, buffer_sequence[i].3 as i32, (buffer_sequence[i].2 * bpp) as i32, fmt as u32, ); } write_req_wl_surface_attach(dst, surface, buffer_base, 0, 0); write_req_wl_surface_damage(dst, surface, 0, 0, i32::MAX, i32::MAX); write_req_wl_surface_commit(dst, surface); }); let (rmsg, mut rfd) = ctx.prog_write(&msg[..], &[&local_fd]).unwrap(); assert!(rmsg.concat() == msg); assert!(rfd.len() == 1); let remote_fd = rfd.remove(0); let remote_data = get_file_contents(&remote_fd, file_sz).unwrap(); /* Check that the seed buffer was correctly replicated; others have not yet been used and may have arbitrary contents */ assert!( remote_data[offsets[0] as usize..offsets[1] as usize] == local_data[offsets[0] as usize..offsets[1] as usize] ); let mut px_seq = Vec::new(); for step in 0..(3 * buffer_sequence.len() - 4) { let px = ( (rng.next() as u32 % pixel_s) as i32, (rng.next() as u32 % pixel_s) as i32, ); px_seq.push(px); /* Sequence; 0 121 232 343 ... 15.16.15 + 16 */ let buf_no = if step == 0 { 0 } else { let b = 1 + (step - 1) / 3; let o = ((step - 1) % 3 == 1) as usize; b + o }; let on_surface = rng.next() % 2 == 0; let transform = transforms[buffer_sequence[buf_no].0]; let scale = buffer_sequence[buf_no].1; let buf_size = (buffer_sequence[buf_no].2, buffer_sequence[buf_no].3); println!( "Step {}: buffer {}, scale {}, transform {:?}, size {:?}, px {:?}", step, buf_no, scale, transform, buf_size, px ); /* extract sub-buffer contents ? 
but note the stride difference, and that the pixel may miss */ let local_region = &mut local_data[offsets[buf_no] as usize..offsets[buf_no + 1] as usize]; for (step, p) in px_seq[..px_seq.len() - 1].iter().enumerate() { /* draw entire past pixel sequence; since each buffer only ever uses the same * scale/transform parameters the data will be consistent between commits. */ draw_pixel( local_region, buf_size, bpp, *p, scale, transform, (step + 100) as u16, ); } let mod_pixels = draw_pixel( local_region, buf_size, bpp, px, scale, transform, (step + 100) as u16, ); /* If pixel is not drawn this round, its damage can be ignored and Waypipe should * not be expected to synchronize the not-actually-"damaged" spot for later commits. */ assert!(!mod_pixels.is_empty()); let cfg = build_msgs(|dst| { write_req_wl_surface_attach( dst, surface, ObjId(buffer_base.0 + buf_no as u32), 0, 0, ); write_req_wl_surface_set_buffer_scale(dst, surface, scale as i32); write_req_wl_surface_set_buffer_transform(dst, surface, transform as i32); }); update_file_contents(&local_fd, &local_data).unwrap(); if on_surface { /* plain wl_surface::damage; just mark old and new pixel positions */ ctx.prog_write_passthrough( [ cfg, build_msgs(|dst| { write_req_wl_surface_damage(dst, surface, px.0, px.1, 1, 1); write_req_wl_surface_commit(dst, surface); }), ] .concat(), ); } else { /* damage_buffer: compute difference between old and new images, and * report changed pixels from those */ ctx.prog_write_passthrough( [ cfg, build_msgs(|dst| { for (x, y) in mod_pixels { write_req_wl_surface_damage_buffer( dst, surface, x as i32, y as i32, 1, 1, ); } write_req_wl_surface_commit(dst, surface); }), ] .concat(), ); } let remote_data = get_file_contents(&remote_fd, file_sz).unwrap(); let remote_region = &remote_data[offsets[buf_no] as usize..offsets[buf_no + 1] as usize]; let local_region = &local_data[offsets[buf_no] as usize..offsets[buf_no + 1] as usize]; assert!(remote_region == local_region); } })?; Ok(StatusOk::Pass) } /** Test that timeline semaphores for the linux-drm-syncobj-v1 protocol are correctly handled. 
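 *
 * The test imports a timeline, attaches acquire/release points to commits, and
 * checks both directions: an application-side acquire signal must become visible on
 * the compositor-side timeline (with the buffer contents replicated by then), and a
 * compositor-side release signal must reach the application-side timeline.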
*/ #[cfg(feature = "dmabuf")] fn proto_explicit_sync(info: TestInfo, device: RenderDevice) -> TestResult { if matches!(device.device_type, RenderDeviceType::Gbm) { return Ok(StatusOk::Skipped); } let Ok(vulk) = setup_vulkan(device.id) else { return Ok(StatusOk::Skipped); }; run_protocol_test_with_drm_node(&info, &device, &|mut ctx: ProtocolTestContext| { let ( display, registry, dmabuf, comp, surface, feedback, params, manager, sync_surf, timeline, buffer, ) = ( ObjId(1), ObjId(2), ObjId(3), ObjId(4), ObjId(5), ObjId(6), ObjId(7), ObjId(8), ObjId(9), ObjId(10), ObjId(11), ); let supported_modifier_table = setup_linux_dmabuf( &mut ctx, &vulk, display, registry, dmabuf, comp, surface, feedback, ); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 3, WP_LINUX_DRM_SYNCOBJ_MANAGER_V1, 1); })); let msg = build_msgs(|dst| { write_req_wl_registry_bind( dst, registry, 3, WP_LINUX_DRM_SYNCOBJ_MANAGER_V1, 1, manager, ); write_req_wp_linux_drm_syncobj_manager_v1_get_surface(dst, manager, sync_surf, surface); write_req_wp_linux_drm_syncobj_manager_v1_import_timeline( dst, manager, false, timeline, ); }); let start_pt = 150; let (prog_timeline, timeline_fd) = vulkan_create_timeline(&vulk, start_pt).unwrap(); let (rmsg, mut rfd) = ctx.prog_write(&msg[..], &[&timeline_fd]).unwrap(); assert!(rmsg.concat() == msg); drop(timeline_fd); assert!(rfd.len() == 1); let output_fd = rfd.remove(0); let comp_timeline = vulkan_import_timeline(&vulk, output_fd).unwrap(); let (w, h) = (512, 512); let format = wayland_to_drm(WlShmFormat::R8); let file_sz = h * w; let mut base = vec![0x80; file_sz]; let Some(modifier_list) = supported_modifier_table.get(&format) else { println!("Skipping test, format {:#08x} not supported", format); return; }; let (img, mirror) = create_dmabuf_and_copy( &vulk, &mut ctx, params, dmabuf, buffer, w, h, format, modifier_list, &base, ) .unwrap(); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_surface_attach(dst, surface, buffer, 0, 0); write_req_wl_surface_damage_buffer(dst, surface, 0, 0, i32::MAX, i32::MAX); write_req_wl_surface_commit(dst, surface); })); let tmp_wr = Arc::new(vulkan_get_buffer(&vulk, img.nominal_size(None), false).unwrap()); let tmp_rd = Arc::new(vulkan_get_buffer(&vulk, img.nominal_size(None), true).unwrap()); for iter in 0..4 { /* Change entire image, except on the second iteration */ if iter != 1 { base.fill(iter); } copy_onto_dmabuf(&img, &tmp_wr, &base).unwrap(); let acq_pt: u64 = start_pt + 11 + (iter as u64) * 7; let rel_pt = acq_pt + 2; if iter == 2 { /* Special case: signal before writing */ println!("Signalling acquire {}", acq_pt); prog_timeline.signal_timeline_pt(acq_pt).unwrap(); } let msg = build_msgs(|dst| { write_req_wl_surface_damage_buffer(dst, surface, 0, 0, w as i32, h as i32); let (acq_hi, acq_lo) = split_u64(acq_pt); let (rel_hi, rel_lo) = split_u64(rel_pt); write_req_wp_linux_drm_syncobj_surface_v1_set_acquire_point( dst, sync_surf, timeline, acq_hi, acq_lo, ); write_req_wp_linux_drm_syncobj_surface_v1_set_release_point( dst, sync_surf, timeline, rel_hi, rel_lo, ); write_req_wl_surface_commit(dst, surface); }); test_write_msgs(&ctx.sock_prog, &msg, &[]); /* Signal after writing; this is safe because the messages sent will at minimum * fit in the pipe buffer */ if iter != 2 { println!("Signalling acquire {}", acq_pt); prog_timeline.signal_timeline_pt(acq_pt).unwrap(); } /* Only start reading messages after signalling; this prevents a possible deadlock, * because the code might wait for the signal 
before sending messages further */ let (rmsg, rfds, err) = test_read_msgs(&ctx.sock_comp, Some(&ctx.sock_prog)); assert!(err.is_none()); assert!(rfds.is_empty()); assert!(rmsg.concat() == msg); let max_wait = 1000000000; println!("Waiting for acquire {}", acq_pt); comp_timeline .wait_for_timeline_pt(acq_pt, max_wait) .unwrap(); let dup = copy_from_dmabuf(&mirror, &tmp_rd).unwrap(); assert!(dup == base); println!("Signalling release {}", rel_pt); comp_timeline.signal_timeline_pt(rel_pt).unwrap(); println!("Waiting for release {}", rel_pt); prog_timeline .wait_for_timeline_pt(rel_pt, max_wait) .unwrap(); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_buffer_release(dst, buffer); })); } ctx.prog_write_passthrough(build_msgs(|dst| { write_req_zwp_linux_buffer_params_v1_destroy(dst, params); write_req_wl_buffer_destroy(dst, buffer); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_display_delete_id(dst, display, params.0); write_evt_wl_display_delete_id(dst, display, buffer.0); })); })?; Ok(StatusOk::Pass) } /** Test that Waypipe can successfully process a large number of FDs */ fn proto_many_fds(info: TestInfo) -> TestResult { let mut files: Vec<(Vec<u8>, OwnedFd)> = Vec::new(); /* 100 is greater than the 28 max fds sent in a batch by libwayland, but also * not so large that having four copies of each would break a standard * 1024-fd ulimit. */ for i in 0..100 { let x: Vec<u8> = format!("{}", i).into(); let fd = make_file_with_contents(&x).unwrap(); files.push((x, fd)); } run_protocol_test(&info, &|mut ctx: ProtocolTestContext| { /* Setup a wl_keyboard */ let (display, registry, seat, keyboard) = (ObjId(1), ObjId(2), ObjId(3), ObjId(4)); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, WL_SEAT, 7); })); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, WL_SEAT, 7, seat); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_seat_capabilities(dst, seat, 3); })); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_seat_get_keyboard(dst, seat, keyboard); })); /* Send all the files, in order */ let fds: Vec<&OwnedFd> = files.iter().map(|(_c, f)| f).collect(); let m = &build_msgs(|dst| { for (contents, _fd) in &files { write_evt_wl_keyboard_keymap(dst, keyboard, false, 1, contents.len() as u32); } }); let (msgs, ofds) = ctx.comp_write(m, &fds).unwrap(); assert!(msgs.concat() == *m); assert!(msgs.len() == ofds.len() && msgs.len() == files.len()); for ((_msg, rfd), (contents, _fd)) in msgs.into_iter().zip(ofds.into_iter()).zip(files.iter()) { let v = get_file_contents(&rfd, contents.len()).unwrap(); assert!(v == *contents); } })?; Ok(StatusOk::Pass) } /** Test that Waypipe either cleanly processes or errors when applying a title prefix.
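 *
 * Prefix lengths of 0 and 100 bytes should be applied cleanly (once per Waypipe
 * instance, so twice in total); 10000 and 100000 bytes should make Waypipe error
 * out, as the prefixed title can no longer fit in small internal buffers or in a
 * single Wayland message.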
*/ fn proto_title_prefix(info: TestInfo) -> TestResult { // /* test lengths are: empty, short, too long for small buffers, cannot fit in a Wayland message */ for (prefix_len, fail) in [(0, false), (100, false), (10000, true), (100000, true)] { let prefix = String::from("a").repeat(prefix_len); let options = WaypipeOptions { wire_version: None, drm_node: None, device_type: RenderDeviceType::Vulkan, video: VideoSetting::default(), title_prefix: &prefix, compression: Compression::None, }; run_protocol_test_with_opts( &info, &options, &options, &|mut ctx: ProtocolTestContext| { let [display, reg, compositor, xdg_wm_base, wl_surf, xdg_surf, toplevel, ..] = ID_SEQUENCE; ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, reg); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, reg, 1, WL_COMPOSITOR, 6); write_evt_wl_registry_global(dst, reg, 2, XDG_WM_BASE, 6); })); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_registry_bind(dst, reg, 1, WL_COMPOSITOR, 6, compositor); write_req_wl_registry_bind(dst, reg, 2, XDG_WM_BASE, 6, xdg_wm_base); write_req_wl_compositor_create_surface(dst, compositor, wl_surf); write_req_xdg_wm_base_get_xdg_surface(dst, xdg_wm_base, xdg_surf, wl_surf); write_req_xdg_surface_get_toplevel(dst, xdg_surf, toplevel); })); let title_lengths = [0, 200]; let test = build_msgs(|dst| { for title_len in title_lengths { let title = vec![b'b'; title_len]; write_req_xdg_toplevel_set_title(dst, toplevel, &title); } }); match ctx.prog_write(&test, &[]) { Err(_) => { assert!(fail); } Ok((rmsgs, rfds)) => { assert!(!fail); assert!(rfds.is_empty()); for (title_len, msg) in title_lengths.iter().zip(rmsgs.iter()) { assert!( parse_wl_header(msg) == (toplevel, msg.len(), OPCODE_XDG_TOPLEVEL_SET_TITLE.code()) ); let title = parse_req_xdg_toplevel_set_title(msg).unwrap(); let orig_title = vec![b'b'; *title_len]; /* The title prefix is added twice, once per main loop instance */ let mut ref_title = vec![b'a'; prefix_len * 2]; ref_title.extend_from_slice(&orig_title); assert!(title == ref_title); } } } }, )?; } Ok(StatusOk::Pass) } /** Protocol to use for a screencopy test */ #[derive(Clone, Copy)] enum ScreencopyType { WlrScreencopy, ExtImageCopyCapture, } /** Send event(s) signaling completion of a screencopy frame, and check they are correctly * replicated */ fn send_screencopy_ready(ctx: &mut ProtocolTestContext, frame: ObjId, style: ScreencopyType) { let time_ns: u32 = 123456789; let time_s: u32 = 1; let init_time_ns: u128 = (time_s as u128) * 1000000000 + (time_ns as u128); let ready_msg = build_msgs(|dst| match style { ScreencopyType::WlrScreencopy => { write_evt_zwlr_screencopy_frame_v1_ready(dst, frame, 0, time_s, time_ns) } ScreencopyType::ExtImageCopyCapture => { write_evt_ext_image_copy_capture_frame_v1_presentation_time( dst, frame, 0, time_s, time_ns, ); write_evt_ext_image_copy_capture_frame_v1_ready(dst, frame) } }); let start = Instant::now(); let (rmsgs, rfds) = ctx.comp_write(&ready_msg, &[]).unwrap(); assert!(rfds.is_empty()); let end = Instant::now(); let (tv_sec_hi, tv_sec_lo, tv_nsec) = match style { ScreencopyType::WlrScreencopy => { assert!(rmsgs.len() == 1); assert!( parse_wl_header(&rmsgs[0]) == ( frame, length_evt_zwlr_screencopy_frame_v1_ready(), OPCODE_ZWLR_SCREENCOPY_FRAME_V1_READY.code() ) ); parse_evt_zwlr_screencopy_frame_v1_ready(&rmsgs[0]).unwrap() } ScreencopyType::ExtImageCopyCapture => { assert!(rmsgs.len() == 2); assert!( parse_wl_header(&rmsgs[0]) == ( frame, 
length_evt_ext_image_copy_capture_frame_v1_presentation_time(), OPCODE_EXT_IMAGE_COPY_CAPTURE_FRAME_V1_PRESENTATION_TIME.code() ) ); assert!( parse_wl_header(&rmsgs[1]) == ( frame, length_evt_ext_image_copy_capture_frame_v1_ready(), OPCODE_EXT_IMAGE_COPY_CAPTURE_FRAME_V1_READY.code() ) ); parse_evt_ext_image_copy_capture_frame_v1_presentation_time(&rmsgs[0]).unwrap() } }; let output_ns = 1000000000 * (join_u64(tv_sec_hi, tv_sec_lo) as u128) + (tv_nsec as u128); /* The time adjustment uses two XYX measurements, whose absolute error * is ≤ half the elapsed time each, assuming the clocks run at the same * rate and do not change. */ let max_time_error = end.duration_since(start).saturating_mul(2); let abs_diff = output_ns.abs_diff(init_time_ns); assert!(abs_diff < max_time_error.as_nanos()); } /** Test that basic screencopy operations work with wl_shm buffers */ fn proto_screencopy_shm(info: TestInfo, style: ScreencopyType) -> TestResult { run_protocol_test(&info, &|mut ctx: ProtocolTestContext| { let [display, reg, shm, screencopy, output, shm_pool, buffer1, buffer2, ..] = ID_SEQUENCE; let (capture_manager, capture_source, session, frame) = match style { ScreencopyType::WlrScreencopy => (ObjId(0), ObjId(0), ObjId(0), ObjId(9)), ScreencopyType::ExtImageCopyCapture => (ObjId(9), ObjId(10), ObjId(11), ObjId(12)), }; ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, reg); })); let scrcopy_name = ZWLR_SCREENCOPY_MANAGER_V1; ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, reg, 1, WL_SHM, 2); write_evt_wl_registry_global(dst, reg, 2, WL_OUTPUT, 4); match style { ScreencopyType::WlrScreencopy => { write_evt_wl_registry_global(dst, reg, 3, scrcopy_name, 3); } ScreencopyType::ExtImageCopyCapture => { write_evt_wl_registry_global(dst, reg, 3, EXT_IMAGE_COPY_CAPTURE_MANAGER_V1, 1); write_evt_wl_registry_global( dst, reg, 4, EXT_OUTPUT_IMAGE_CAPTURE_SOURCE_MANAGER_V1, 1, ); } } })); let (w, h, fmt) = (33, 17, WlShmFormat::Xrgb8888 as u32); let pool_sz: usize = 3 * (w * h * 4) / 2; let offset1: usize = 10; let offset2: usize = pool_sz - (w * h * 4); let seed_contents: Vec = (0..pool_sz).map(|x| (x * 101) as u8).collect(); let mut copy_contents: Vec = seed_contents.clone(); assert!(offset1 <= 333 && 666 <= offset1 + w * h * 4); copy_contents[333..666].fill(0xaa); let shm_fd = make_file_with_contents(&seed_contents).unwrap(); let setup = build_msgs(|dst| { write_req_wl_registry_bind(dst, reg, 1, WL_SHM, 2, shm); write_req_wl_registry_bind(dst, reg, 2, WL_OUTPUT, 4, output); match style { ScreencopyType::WlrScreencopy => { write_req_wl_registry_bind(dst, reg, 3, scrcopy_name, 3, screencopy) } ScreencopyType::ExtImageCopyCapture => write_req_wl_registry_bind( dst, reg, 3, EXT_IMAGE_COPY_CAPTURE_MANAGER_V1, 1, screencopy, ), } write_req_wl_shm_create_pool(dst, shm, false, shm_pool, pool_sz.try_into().unwrap()); write_req_wl_shm_pool_create_buffer( dst, shm_pool, buffer1, offset1.try_into().unwrap(), w.try_into().unwrap(), h.try_into().unwrap(), (w * 4).try_into().unwrap(), fmt, ); write_req_wl_shm_pool_create_buffer( dst, shm_pool, buffer2, offset2.try_into().unwrap(), w.try_into().unwrap(), h.try_into().unwrap(), (w * 4).try_into().unwrap(), fmt, ); match style { ScreencopyType::WlrScreencopy => { write_req_zwlr_screencopy_manager_v1_capture_output_region( dst, screencopy, frame, 0, output, 1, 1, (w - 2) as _, (h - 2) as _, ); } ScreencopyType::ExtImageCopyCapture => { write_req_wl_registry_bind( dst, reg, 4, 
EXT_OUTPUT_IMAGE_CAPTURE_SOURCE_MANAGER_V1, 1, capture_manager, ); write_req_ext_output_image_capture_source_manager_v1_create_source( dst, capture_manager, capture_source, output, ); write_req_ext_image_copy_capture_manager_v1_create_session( dst, screencopy, session, capture_source, 0, ); } } }); let fds = [&shm_fd]; let (rmsgs, mut rfds) = ctx.prog_write(&setup, &fds).unwrap(); assert!(rmsgs.concat() == setup); assert!(rfds.len() == 1); let rfd = rfds.pop().unwrap(); update_file_contents(&rfd, ©_contents).unwrap(); ctx.comp_write_passthrough(build_msgs(|dst| match style { ScreencopyType::WlrScreencopy => { write_evt_zwlr_screencopy_frame_v1_buffer( dst, frame, fmt, w as _, h as _, (4 * w) as _, ); write_evt_zwlr_screencopy_frame_v1_buffer_done(dst, frame); } ScreencopyType::ExtImageCopyCapture => { write_evt_ext_image_copy_capture_session_v1_buffer_size( dst, session, w as _, h as _, ); write_evt_ext_image_copy_capture_session_v1_shm_format(dst, session, fmt); write_evt_ext_image_copy_capture_session_v1_done(dst, session); } })); ctx.prog_write_passthrough(build_msgs(|dst| match style { ScreencopyType::WlrScreencopy => { write_req_zwlr_screencopy_frame_v1_copy(dst, frame, buffer2); } ScreencopyType::ExtImageCopyCapture => { write_req_ext_image_copy_capture_session_v1_create_frame(dst, session, frame); write_req_ext_image_copy_capture_frame_v1_attach_buffer(dst, frame, buffer2); write_req_ext_image_copy_capture_frame_v1_damage_buffer( dst, frame, 0, 0, i32::MAX, i32::MAX, ); write_req_ext_image_copy_capture_frame_v1_capture(dst, frame); } })); ctx.comp_write_passthrough(build_msgs(|dst| match style { ScreencopyType::WlrScreencopy => write_evt_zwlr_screencopy_frame_v1_failed(dst, frame), ScreencopyType::ExtImageCopyCapture => { write_evt_ext_image_copy_capture_frame_v1_failed(dst, frame, 0) } })); /* Check that the failed screencopy does not lead to replication * (Although technically Waypipe _could_ eagerly make updates in this scenario, * since the buffer contents were already updated, doing so would be inefficient; * the wlr-screencopy protocol also says nothing about the buffer state on failure.) 
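 * The second capture attempt below therefore repeats the sequence with buffer1 and,
 * once the ready/presentation-time events arrive, expects the updated contents to
 * have been replicated.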
*/ let check1 = get_file_contents(&shm_fd, pool_sz).unwrap(); assert!(check1 == seed_contents); // Check diff ctx.prog_write_passthrough(build_msgs(|dst| match style { ScreencopyType::WlrScreencopy => write_req_zwlr_screencopy_frame_v1_destroy(dst, frame), ScreencopyType::ExtImageCopyCapture => { write_req_ext_image_copy_capture_session_v1_destroy(dst, session); write_req_ext_image_copy_capture_frame_v1_destroy(dst, frame) } })); ctx.comp_write_passthrough(build_msgs(|dst| match style { ScreencopyType::WlrScreencopy => write_evt_wl_display_delete_id(dst, display, frame.0), ScreencopyType::ExtImageCopyCapture => { write_evt_wl_display_delete_id(dst, display, session.0); write_evt_wl_display_delete_id(dst, display, frame.0); } })); ctx.prog_write_passthrough(build_msgs(|dst| match style { ScreencopyType::WlrScreencopy => { write_req_zwlr_screencopy_manager_v1_capture_output( dst, screencopy, frame, 0, output, ); } ScreencopyType::ExtImageCopyCapture => { write_req_ext_image_copy_capture_manager_v1_create_session( dst, screencopy, session, capture_source, 0, ); } })); ctx.comp_write_passthrough(build_msgs(|dst| match style { ScreencopyType::WlrScreencopy => { write_evt_zwlr_screencopy_frame_v1_buffer( dst, frame, fmt, w as _, h as _, (4 * w) as _, ); write_evt_zwlr_screencopy_frame_v1_buffer_done(dst, frame); } ScreencopyType::ExtImageCopyCapture => { write_evt_ext_image_copy_capture_session_v1_buffer_size( dst, session, w as _, h as _, ); write_evt_ext_image_copy_capture_session_v1_shm_format(dst, session, fmt); write_evt_ext_image_copy_capture_session_v1_done(dst, session); } })); ctx.prog_write_passthrough(build_msgs(|dst| match style { ScreencopyType::WlrScreencopy => { write_req_zwlr_screencopy_frame_v1_copy(dst, frame, buffer1); } ScreencopyType::ExtImageCopyCapture => { write_req_ext_image_copy_capture_session_v1_create_frame(dst, session, frame); write_req_ext_image_copy_capture_frame_v1_attach_buffer(dst, frame, buffer1); write_req_ext_image_copy_capture_frame_v1_damage_buffer( dst, frame, 0, 0, i32::MAX, i32::MAX, ); write_req_ext_image_copy_capture_frame_v1_capture(dst, frame); } })); send_screencopy_ready(&mut ctx, frame, style); /* Check that the update is replicated */ let check2 = get_file_contents(&shm_fd, pool_sz).unwrap(); assert!(check2 == copy_contents); })?; Ok(StatusOk::Pass) } /** Test that basic wlr-screencopy operations work with wl_shm buffers */ fn proto_screencopy_shm_wlr(info: TestInfo) -> TestResult { proto_screencopy_shm(info, ScreencopyType::WlrScreencopy) } /** Test that basic ext-image-copy-capture operations work with wl_shm buffers */ fn proto_screencopy_shm_ext(info: TestInfo) -> TestResult { proto_screencopy_shm(info, ScreencopyType::ExtImageCopyCapture) } /** Test that basic wlr_screencopy operations work with dmabufs */ #[cfg(feature = "dmabuf")] fn proto_screencopy_dmabuf( info: TestInfo, device: RenderDevice, style: ScreencopyType, ) -> TestResult { let Ok(vulk) = setup_vulkan(device.id) else { return Ok(StatusOk::Skipped); }; run_protocol_test_with_drm_node(&info, &device, &|mut ctx: ProtocolTestContext| { let [display, reg, dmabuf, feedback, output, screencopy, ..] 
= ID_SEQUENCE; let (capture_manager, capture_source, session, frame, params, buffer) = match style { ScreencopyType::WlrScreencopy => { (ObjId(0), ObjId(0), ObjId(0), ObjId(9), ObjId(10), ObjId(11)) } ScreencopyType::ExtImageCopyCapture => ( ObjId(9), ObjId(10), ObjId(11), ObjId(12), ObjId(13), ObjId(14), ), }; ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, reg); })); let scrcopy_name = ZWLR_SCREENCOPY_MANAGER_V1; ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, reg, 1, ZWP_LINUX_DMABUF_V1, 3); write_evt_wl_registry_global(dst, reg, 2, WL_OUTPUT, 4); match style { ScreencopyType::WlrScreencopy => { write_evt_wl_registry_global(dst, reg, 3, scrcopy_name, 3) } ScreencopyType::ExtImageCopyCapture => { write_evt_wl_registry_global(dst, reg, 3, EXT_IMAGE_COPY_CAPTURE_MANAGER_V1, 1); write_evt_wl_registry_global( dst, reg, 4, EXT_OUTPUT_IMAGE_CAPTURE_SOURCE_MANAGER_V1, 1, ); } } })); ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_registry_bind(dst, reg, 1, ZWP_LINUX_DMABUF_V1, 3, dmabuf); write_req_zwp_linux_dmabuf_v1_get_default_feedback(dst, dmabuf, feedback); write_req_wl_registry_bind(dst, reg, 2, WL_OUTPUT, 4, output); match style { ScreencopyType::WlrScreencopy => { write_req_wl_registry_bind(dst, reg, 3, scrcopy_name, 3, screencopy); write_req_zwlr_screencopy_manager_v1_capture_output( dst, screencopy, frame, 0, output, ); } ScreencopyType::ExtImageCopyCapture => { write_req_wl_registry_bind( dst, reg, 3, EXT_IMAGE_COPY_CAPTURE_MANAGER_V1, 1, screencopy, ); write_req_wl_registry_bind( dst, reg, 4, EXT_OUTPUT_IMAGE_CAPTURE_SOURCE_MANAGER_V1, 1, capture_manager, ); write_req_ext_output_image_capture_source_manager_v1_create_source( dst, capture_manager, capture_source, output, ); write_req_ext_image_copy_capture_manager_v1_create_session( dst, screencopy, session, capture_source, 0, ); } } })); let supported_modifier_table = send_linux_dmabuf_feedback(&mut ctx, &vulk, feedback); let fmt = wayland_to_drm(WlShmFormat::R8); let bpp = 1; let (w, h) = (8, 8); let Some(mod_list) = supported_modifier_table.get(&fmt) else { println!("Skipping test, format not supported"); return; }; let msg_batch = build_msgs(|dst| { write_evt_ext_image_copy_capture_session_v1_buffer_size(dst, session, w as _, h as _); write_evt_ext_image_copy_capture_session_v1_dmabuf_device( dst, session, &u64::to_le_bytes(device.id), ); let mut mod_array = Vec::new(); for m in mod_list { mod_array.extend_from_slice(&u64::to_le_bytes(*m)); } write_evt_ext_image_copy_capture_session_v1_dmabuf_format( dst, session, fmt, &mod_array, ); write_evt_ext_image_copy_capture_session_v1_done(dst, session); }); let (rmsgs, rfds) = ctx.comp_write(&msg_batch, &[]).unwrap(); assert!(rfds.is_empty()); let nmsgs = rmsgs.len(); let mut received_mod_table: Vec<(u32, Vec)> = Vec::new(); let (mut has_size, mut has_device) = (false, false); for (i, msg) in rmsgs.into_iter().enumerate() { let (obj, _len, opcode) = parse_wl_header(&msg); assert!(obj == session); match MethodId::Event(opcode) { OPCODE_EXT_IMAGE_COPY_CAPTURE_SESSION_V1_BUFFER_SIZE => { assert!( parse_evt_ext_image_copy_capture_session_v1_buffer_size(&msg).unwrap() == (w as _, h as _) ); has_size = true; } OPCODE_EXT_IMAGE_COPY_CAPTURE_SESSION_V1_DMABUF_DEVICE => { assert!( parse_evt_ext_image_copy_capture_session_v1_dmabuf_device(&msg).unwrap() == u64::to_le_bytes(device.id) ); has_device = true; } OPCODE_EXT_IMAGE_COPY_CAPTURE_SESSION_V1_DMABUF_FORMAT => { let (fmt, mods) = 
parse_evt_ext_image_copy_capture_session_v1_dmabuf_format(&msg).unwrap(); assert!(mods.len() % 8 == 0); received_mod_table.push(( fmt, mods.chunks_exact(8) .map(|x| u64::from_le_bytes(x.try_into().unwrap())) .collect::<Vec<u64>>(), )); } OPCODE_EXT_IMAGE_COPY_CAPTURE_SESSION_V1_DONE => { assert!(i == nmsgs - 1); } _ => panic!("Unexpected message opcode {}", opcode), } } assert!(has_size); assert!(has_device); assert!(received_mod_table.len() == 1); assert!(received_mod_table[0].0 == fmt); received_mod_table[0].1.sort(); let mut mod_copy = mod_list.clone(); mod_copy.sort(); /* All modifiers should make the roundtrip, since they were already filtered by linux-dmabuf */ assert!(received_mod_table[0].1 == mod_copy); let img_size = (w * h) as usize * bpp; let img_data = vec![0x33u8; img_size]; let mod_data = vec![0x44u8; img_size]; let copy_buf = Arc::new(vulkan_get_buffer(&vulk, img_size, true).unwrap()); let (prog_img, comp_img) = create_dmabuf_and_copy( &vulk, &mut ctx, params, dmabuf, buffer, w as _, h as _, fmt, mod_list, &img_data, ) .unwrap(); ctx.prog_write_passthrough(build_msgs(|dst| match style { ScreencopyType::WlrScreencopy => { write_req_zwlr_screencopy_frame_v1_copy(dst, frame, buffer); } ScreencopyType::ExtImageCopyCapture => { write_req_ext_image_copy_capture_session_v1_create_frame(dst, session, frame); write_req_ext_image_copy_capture_frame_v1_attach_buffer(dst, frame, buffer); write_req_ext_image_copy_capture_frame_v1_damage_buffer( dst, frame, 0, 0, i32::MAX, i32::MAX, ); write_req_ext_image_copy_capture_frame_v1_capture(dst, frame); } })); copy_onto_dmabuf(&comp_img, &copy_buf, &mod_data).unwrap(); send_screencopy_ready(&mut ctx, frame, style); let output = copy_from_dmabuf(&prog_img, &copy_buf).unwrap(); assert!(output == mod_data); ctx.prog_write_passthrough(build_msgs(|dst| match style { ScreencopyType::WlrScreencopy => { write_req_zwlr_screencopy_frame_v1_destroy(dst, frame); } ScreencopyType::ExtImageCopyCapture => { write_req_ext_image_copy_capture_frame_v1_destroy(dst, frame); write_req_ext_image_copy_capture_session_v1_destroy(dst, session); } })); })?; Ok(StatusOk::Pass) } /** Test that basic wlr-screencopy operations work with dmabufs */ #[cfg(feature = "dmabuf")] fn proto_screencopy_dmabuf_wlr(info: TestInfo, device: RenderDevice) -> TestResult { proto_screencopy_dmabuf(info, device, ScreencopyType::WlrScreencopy) } /** Test that basic ext-image-copy-capture operations work with dmabufs */ #[cfg(feature = "dmabuf")] fn proto_screencopy_dmabuf_ext(info: TestInfo, device: RenderDevice) -> TestResult { proto_screencopy_dmabuf(info, device, ScreencopyType::ExtImageCopyCapture) } /** Register an array of video tests for various video format and hardware/software * encoding/decoding parameters */ #[cfg(feature = "video")] fn register_video_tests<'a>( tests: &mut Vec<(String, Box<dyn Fn(TestInfo) -> TestResult + 'a>)>, filter: &Filter, devices: &[(String, u64)], ) { let table = [ (VideoFormat::H264, false, false, true), (VideoFormat::H264, true, false, false), (VideoFormat::H264, true, true, false), (VideoFormat::H264, false, true, false), (VideoFormat::VP9, false, false, true), (VideoFormat::AV1, false, false, true), (VideoFormat::AV1, false, true, true), ]; for (format, try_hwenc, try_hwdec, expect_accurate_on_small) in table { for (dev_name, dev_id) in devices { let name = format!( "proto::dmavid_{}::{}::{}::{}", match format { VideoFormat::H264 => "h264", VideoFormat::VP9 => "vp9", VideoFormat::AV1 => "av1", }, if try_hwenc { "try_hwenc" } else { "swenc" }, if try_hwdec { "try_hwdec" } else { "swdec"
}, dev_name, ); if !test_is_included(&name, filter) { continue; } let m: (String, u64) = (dev_name.clone(), *dev_id); tests.push(( name, Box::new(move |info| { let s = &m; let dev = RenderDevice { name: &s.0, id: s.1, /* Video encoding/decoding only works with Vulkan right now */ device_type: RenderDeviceType::Vulkan, }; proto_dmavid( info, dev, format, try_hwdec, try_hwenc, expect_accurate_on_small, ) }), )); } } } /** Test that Waypipe correctly updates buffers when xdg_toplevel_icon_v1::add_buffer is used */ fn proto_toplevel_icon(info: TestInfo) -> TestResult { run_protocol_test(&info, &|mut ctx: ProtocolTestContext| { let [display, registry, shm, ico_mgr, pool, icon, buffer1, buffer2, buffer3, ..] = ID_SEQUENCE; ctx.prog_write_passthrough(build_msgs(|dst| { write_req_wl_display_get_registry(dst, display, registry); })); ctx.comp_write_passthrough(build_msgs(|dst| { write_evt_wl_registry_global(dst, registry, 1, WL_SHM, 2); write_evt_wl_registry_global(dst, registry, 2, XDG_TOPLEVEL_ICON_MANAGER_V1, 1); })); let size = 1300; let mut data = vec![0x01; size]; let offsets: [u32; 3] = [0, 99, 600]; let sizes: [u32; 3] = [1, 10, 13]; let format = WlShmFormat::Argb8888; let buffers = [buffer1, buffer2, buffer3]; let msgs = build_msgs(|dst| { write_req_wl_registry_bind(dst, registry, 1, WL_SHM, 2, shm); write_req_wl_registry_bind(dst, registry, 2, XDG_TOPLEVEL_ICON_MANAGER_V1, 1, ico_mgr); write_req_wl_shm_create_pool(dst, shm, false, pool, size as i32); write_req_xdg_toplevel_icon_manager_v1_create_icon(dst, ico_mgr, icon); for ((size, offset), buffer) in sizes.iter().zip(offsets.iter()).zip(buffers.iter()) { write_req_wl_shm_pool_create_buffer( dst, pool, *buffer, *offset as i32, *size as i32, *size as i32, *size as i32 * 4, format as u32, ); } }); let pool_fd = make_file_with_contents(&data).unwrap(); let (rmsg, mut rfd) = ctx.prog_write(&msgs[..], &[&pool_fd]).unwrap(); assert!(rmsg.concat() == msgs); assert!(rfd.len() == 1); let output_fd = rfd.remove(0); ctx.comp_write_passthrough(build_msgs(|dst| { for size in sizes { write_evt_xdg_toplevel_icon_manager_v1_icon_size(dst, ico_mgr, size as i32); } write_evt_xdg_toplevel_icon_manager_v1_done(dst, ico_mgr); })); // Fill icon buffer contents for (size, offset) in sizes.iter().zip(offsets.iter()) { for (z, c) in data[(*offset as usize)..(*offset + size * size * 4) as usize] .chunks_exact_mut(4) .enumerate() { c.copy_from_slice(&(z as u32 + 10).to_le_bytes()); } } update_file_contents(&pool_fd, &data).unwrap(); ctx.prog_write_passthrough(build_msgs(|dst| { for buffer in buffers { write_req_xdg_toplevel_icon_v1_add_buffer(dst, icon, buffer, 1); } })); // Verify buffers were replicated let rcontents = get_file_contents(&output_fd, data.len()).unwrap(); for (size, offset) in sizes.iter().zip(offsets.iter()) { let range = (*offset as usize)..(*offset + size * size * 4) as usize; assert!(data[range.clone()] == rcontents[range.clone()]); } })?; Ok(StatusOk::Pass) } /** Main entry point. */ fn main() -> ExitCode { let command = ClapCommand::new(env!("CARGO_BIN_NAME")) .help_expected(true) .flatten_help(false) .about( "A collection of protocol tests to be run against the Waypipe client\n\ and server connection subprocesses. 
This is not expected to be packaged\n\ and has no stability guarantee.", ) .next_line_help(false) .version(env!("CARGO_PKG_VERSION")) .arg( Arg::new("client-path") .help("waypipe binary to use as client") .required(true) .value_parser(value_parser!(OsString)), ) .arg( Arg::new("server-path") .help("waypipe binary to use as server") .required(true) .value_parser(value_parser!(OsString)), ) .arg( Arg::new("test-filter") .help("If present, run tests whose names contain any test filter string") .num_args(0..) .action(ArgAction::Append) .value_parser(value_parser!(String)), ) .arg( Arg::new("list") .long("list") .action(ArgAction::SetTrue) .conflicts_with_all(["client-path", "server-path", "test-filter"]) .help("List available tests"), ) .arg( Arg::new("exact") .long("exact") .action(ArgAction::SetTrue) .help("When matching test names, treat parts between :: as indivisible tokens"), ) .arg( Arg::new("core") .long("core") .action(ArgAction::SetTrue) .help("Allow subprocesses to create core dumps on crash"), ) .arg( Arg::new("quiet") .long("quiet") .short('q') .action(ArgAction::SetTrue) .help("Hide captured output"), ); let matches = command.get_matches(); let substrings: Vec<&str> = matches .get_many::("test-filter") .map(|x| x.map(|y| y.as_str()).collect()) .unwrap_or_default(); let f = Filter { substrings: &substrings, exact: matches.get_flag("exact"), }; let core = matches.get_flag("core"); let vk_device_ids = list_vulkan_device_ids(); /* Construct a list of all tests */ let mut tests: Vec<(String, Box TestResult>)> = Vec::new(); let t = &mut tests; register_single(t, &f, "basic", proto_basic); register_single(t, &f, "base_wire", proto_base_wire); register_single(t, &f, "commit_timing", proto_commit_timing); register_single(t, &f, "damage_efficiency", proto_damage_efficiency); register_single(t, &f, "flip_damage", proto_flip_damage); register_single(t, &f, "gamma_control", proto_gamma_control); register_single(t, &f, "keymap", proto_keymap); register_single(t, &f, "icc", proto_icc); register_single(t, &f, "many_fds", proto_many_fds); register_single(t, &f, "object_collision", proto_object_collision); register_single(t, &f, "oversized", proto_oversized); register_single(t, &f, "pipe_write", proto_pipe_write); register_single(t, &f, "presentation_time", proto_presentation_time); register_single(t, &f, "rotating_damage", proto_rotating_damage); register_single(t, &f, "screencopy_shm_wlr", proto_screencopy_shm_wlr); register_single(t, &f, "screencopy_shm_ext", proto_screencopy_shm_ext); register_single(t, &f, "shm_buffer", proto_shm_buffer); register_single(t, &f, "shm_damage", proto_shm_damage); register_single(t, &f, "shm_extend", proto_shm_extend); register_single(t, &f, "title_prefix", proto_title_prefix); register_single(t, &f, "toplevel_icon", proto_toplevel_icon); register_single(t, &f, "viewporter_damage", proto_viewporter_damage); #[cfg(feature = "dmabuf")] { register_per_device(t, &f, &vk_device_ids, "dmabuf", proto_dmabuf); register_per_device(t, &f, &vk_device_ids, "dmabuf_damage", proto_dmabuf_damage); register_per_device( t, &f, &vk_device_ids, "dmabuf_feedback_table", proto_dmabuf_feedback_table, ); register_per_device(t, &f, &vk_device_ids, "dmabuf_pre_v4", proto_dmabuf_pre_v4); register_per_device(t, &f, &vk_device_ids, "explicit_sync", proto_explicit_sync); register_per_device( t, &f, &vk_device_ids, "screencopy_dmabuf_ext", proto_screencopy_dmabuf_ext, ); register_per_device( t, &f, &vk_device_ids, "screencopy_dmabuf_wlr", proto_screencopy_dmabuf_wlr, ); } #[cfg(feature = 
"video")] { register_video_tests(t, &f, &vk_device_ids); } if matches.get_flag("list") { for (name, _) in tests { println!("{}", name); } return ExitCode::SUCCESS; } let client_file: &OsString = matches.get_one("client-path").unwrap(); let server_file: &OsString = matches.get_one("server-path").unwrap(); let quiet: bool = matches.get_flag("quiet"); let mut mask = signal::SigSet::empty(); mask.add(signal::SIGCHLD); let mut pollmask = mask .thread_swap_mask(signal::SigmaskHow::SIG_BLOCK) .map_err(|x| tag!("Failed to set sigmask: {}", x)) .unwrap(); pollmask.remove(signal::SIGCHLD); let sigaction = signal::SigAction::new( signal::SigHandler::Handler(noop_signal_handler), signal::SaFlags::SA_NOCLDSTOP, signal::SigSet::empty(), ); unsafe { // SAFETY: signal handler installed is trivial and replaces nothing signal::sigaction(signal::Signal::SIGCHLD, &sigaction) .map_err(|x| tag!("Failed to set sigaction: {}", x)) .unwrap(); } let mut nfail: usize = 0; let mut nskip: usize = 0; for (name, func) in &tests { let mut stdout = std::io::stdout().lock(); write!(&mut stdout, "{} ...", name).unwrap(); /* flush immediately so that buffer is empty before fork and will not be written twice */ stdout.flush().unwrap(); drop(stdout); let info = TestInfo { test_name: name, waypipe_client: client_file, waypipe_server: server_file, }; let (read_out, new_stdouterr) = unistd::pipe2(fcntl::OFlag::empty()).unwrap(); let new_stdin = unsafe { /* SAFETY: newly opened file descriptor */ OwnedFd::from_raw_fd( fcntl::open( "/dev/null", fcntl::OFlag::O_RDONLY | fcntl::OFlag::O_NOCTTY, nix::sys::stat::Mode::empty(), ) .unwrap(), ) }; let res = match unsafe { /* SAFETY: this program is not multi-threaded at this point. * (No vulkan setup or initialization has been done.) */ unistd::fork().unwrap() } { unistd::ForkResult::Child => { /* Blocking wait for child to complete */ #[allow(unused_unsafe)] /* dup2 is file-descriptor-unsafe */ unsafe { /* Atomically replace STDOUT, STDERR, STDIN; this may break library code which * incorrectly assumes standard io file descriptors never change properties * (e.g., by caching isatty()). */ unistd::dup2(new_stdouterr.as_raw_fd(), libc::STDOUT_FILENO).unwrap(); unistd::dup2(new_stdouterr.as_raw_fd(), libc::STDERR_FILENO).unwrap(); unistd::dup2(new_stdin.as_raw_fd(), libc::STDIN_FILENO).unwrap(); } drop(read_out); if !core { /* Disable core dumps for child process, because the tests report errors * using using panic! or a failed assert!, and core dumps should not need * recording. This _also_ prevents the Waypipe instances from making a * core dump. */ assert!( unsafe { let x = libc::rlimit { rlim_cur: 0, rlim_max: 0, }; /* SAFETY: x is properly aligned, is not captured by setrlimit, and lives until end of scope */ libc::setrlimit(libc::RLIMIT_CORE, &x) } != -1 ); } // TODO: process group configuration to kill the child process and // everything it spawns on timeout events? 
let ret = func(info); return match ret { Ok(StatusOk::Pass) => ExitCode::SUCCESS, Ok(StatusOk::Skipped) => ExitCode::from(EXITCODE_SKIPPED), Err(StatusBad::Fail(msg)) => { println!("Test {} failed with error: {}", name, msg); ExitCode::FAILURE } Err(StatusBad::Unclear(msg)) => { println!("Test {} unclear, with error: {}", name, msg); ExitCode::from(EXITCODE_UNCLEAR) } }; } unistd::ForkResult::Parent { child } => { drop(new_stdin); drop(new_stdouterr); set_nonblock(&read_out).unwrap(); /* Wait for the child to complete, and capture its output */ let mut log = Vec::new(); let mut tmp = vec![0u8; 1 << 18]; let child_status: WaitStatus; loop { let status = wait::waitpid(child, Some(wait::WaitPidFlag::WNOHANG)).unwrap(); match status { wait::WaitStatus::Exited(..) | wait::WaitStatus::Signaled(..) => { child_status = status; break; } _ => (), } let mut pfds = [poll::PollFd::new(read_out.as_fd(), poll::PollFlags::POLLIN)]; let res = poll::ppoll(&mut pfds, None, Some(pollmask)); if let Err(errno) = res { assert!(errno == Errno::EINTR || errno == Errno::EAGAIN); continue; } let rev = pfds[0].revents().unwrap(); if rev.contains(poll::PollFlags::POLLERR) || rev.contains(poll::PollFlags::POLLHUP) { /* Child closed connection (or more likely, died). * Data remaining in pipe will be read. */ child_status = wait::waitpid(child, None).unwrap(); break; } if !rev.contains(poll::PollFlags::POLLIN) { continue; } let eof_or_err = match unistd::read(read_out.as_raw_fd(), &mut tmp) { Ok(n) => { if n == 0 { true } else { log.extend_from_slice(&tmp[..n]); false } } Err(Errno::EAGAIN) => false, Err(Errno::EINTR) => false, Err(_) => true, }; if eof_or_err { child_status = wait::waitpid(child, None).unwrap(); break; } } /* Read all remaining data in the pipe, dropping anything * that was read after receipt of the information of test process death */ loop { match unistd::read(read_out.as_raw_fd(), &mut tmp) { Ok(n) => { if n == 0 { break; } log.extend_from_slice(&tmp[..n]); } Err(Errno::EINTR) => continue, Err(_) => break, } } if !quiet { println!("captured output:"); println!("{}", String::from_utf8_lossy(&log)); } match child_status { wait::WaitStatus::Exited(pid, exit_code) => { assert!(pid == child); let e = if exit_code > (u8::MAX as i32) || exit_code < 0 { u8::MAX } else { exit_code as u8 }; match e { 0 => TestCategory::Pass, EXITCODE_SKIPPED => TestCategory::Skipped, EXITCODE_UNCLEAR => TestCategory::Unclear, _ => TestCategory::Fail, } } wait::WaitStatus::Signaled(pid, signal, dump) => { if false { println!( "Test process {} crashed with signal={}; coredump={}", pid, signal, dump ); } /* The test process aborting signals either the Waypipe instance * failing (making the test panic) or a major bug in the test. */ TestCategory::Fail } _ => { todo!("Unexpected exit status"); } } } }; let s = match res { TestCategory::Pass => "ok", TestCategory::Fail => { nfail += 1; "FAILED" } TestCategory::Skipped => { nskip += 1; "skipped" } TestCategory::Unclear => "UNCLEAR", }; println!(" {}", s); } if nfail > 0 { ExitCode::FAILURE } else if nskip == tests.len() { /* This can happen in regular use, when there are no usable render devices */ ExitCode::from(EXITCODE_SKIPPED) } else { ExitCode::SUCCESS } } waypipe-v0.10.5/src/tracking.rs000066400000000000000000005274561507071711100164230ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-3.0-or-later */ /*! 
Wayland protocol message handling and translation logic */ use crate::damage::*; #[cfg(feature = "dmabuf")] use crate::dmabuf::*; #[cfg(feature = "gbmfallback")] use crate::gbm::*; use crate::kernel::*; use crate::mainloop::*; use crate::platform::*; #[cfg(any(not(feature = "video"), not(feature = "gbmfallback")))] use crate::stub::*; use crate::tag; use crate::util::*; use crate::wayland::*; use crate::wayland_gen::*; use core::str; use log::{debug, error}; use nix::libc; use nix::sys::memfd; use nix::unistd; use std::cell::RefCell; use std::collections::{BTreeMap, BTreeSet, VecDeque}; use std::fmt::{Display, Formatter}; use std::os::fd::OwnedFd; use std::rc::Rc; /** Structure storing information for a specific Wayland object */ pub struct WpObject { /** Wayland interface associated with the object */ obj_type: WaylandInterface, /** Extra data associated with the object; in practice this enum either matches * WaylandInterface or is WpExtra::None */ extra: WpExtra, } /** Damage rectangle, of the form directly provided by wl_surface.damage and wl_surface.damage_buffer */ #[derive(Clone, Copy, Debug)] struct WlRect { x: i32, y: i32, width: i32, height: i32, } /** Properties of a buffer's attachment to the surface, which are required to interpret how * surface coordinates and buffer coordinates interact. */ #[derive(Eq, PartialEq, Clone)] struct BufferAttachment { /** Buffer scale */ scale: u32, /** Transform (from buffer to surface) */ transform: WlOutputTransform, /** Viewport source x-offset,y-offset,width,height, all 24.8 fixed point and >=,>=,>,> 0 */ viewport_src: Option<(i32, i32, i32, i32)>, /** Viewport destination width/height, should both be > 0 */ viewport_dst: Option<(i32, i32)>, /** The buffer attached at the time the damage was committed (or an arbitrary value if * the batch was not yet committed.) */ buffer_uid: u64, /** Dimensions of the attached buffer; these are needed to properly evaluate transforms. */ buffer_size: (i32, i32), } /** Damage recorded for a surface during a commit interval, plus information * needed to interpret the damage and record the associated buffer. */ #[derive(Clone)] struct DamageBatch { /** Information about which buffer is used in this batch and how it is attached to the surface */ attachment: BufferAttachment, /** wl_surface::damage directly applies in the surface coordinate space, and the conversion to buffer * coordinate space can only be done at commit time, so these must be cached instead of immediately * converted. */ damage: Vec<WlRect>, /** Most clients use wl_surface::damage_buffer, which is easier to use and more precise than wl_surface::damage. * For Waypipe, it is usually easier, except in the rare case of accumulating damage over different * buffers whose attachment parameters differ (and thus whose buffer coordinate spaces differ); then * one must convert between spaces. */ damage_buffer: Vec<WlRect>, } /** Additional information for wl_surface */ struct ObjWlSurface { attached_buffer_id: Option<ObjId>, /* The total damage for a buffer since the last time it was committed is given * by the accumulated damage committed. The pending state is at index 0, * last commit at index 1, etc. */ damage_history: [DamageBatch; 7], /* acquire/release timeline points for explicit sync */ acquire_pt: Option<(u64, Rc<RefCell<ShadowFd>>)>, release_pt: Option<(u64, Rc<RefCell<ShadowFd>>)>, /* unique wp_viewport object associated with this surface */ viewport_id: Option<ObjId>, } /** Additional information for wp_viewport */ struct ObjWpViewport { /** Surface to which this wp_viewport is attached.
Is None when wl_surface has been destroyed. */ wl_surface: Option, } /** Additional information for wl_shm_pool */ struct ObjWlShmPool { buffer: Rc>, } /** Metadata indicating how a wl_buffer created from a wl_shm pool maps * onto the underlying pool */ #[derive(Clone, Copy)] struct ObjWlBufferShm { width: i32, height: i32, format: u32, offset: i32, stride: i32, } /** Additional information for wl_buffer */ struct ObjWlBuffer { sfd: Rc>, /* Metadata explaining how a wl_buffer relates to its underlying wl_shm_pool.' * DMABUFs do not need this metadata because the shadowfd stores this information. */ shm_info: Option, unique_id: u64, } /** Data of a format tranche from zwp_linux_dmabuf_feedback_v1 */ struct DmabufTranche { flags: u32, /** Format table entries; these are interpreted _immediately_ when the * tranche is provided, using the last format table to arrive. */ values: Vec<(u32, u64)>, /** When translating tranche, cache output indices here */ indices: Vec, device: u64, } /** Additional information for zwp_linux_dmabuf_v1 */ struct ObjZwpLinuxDmabuf { /** Set of formats seen in .modifier events. This makes it possible to * replace the first .modifier received with a full list of modifiers events * for that format, and then drop all subsequent .modifier events with same * format. (Alternatively, one could wait for "a roundtrip after binding" to * determine when all format events have arrived, but this approach will never * introduce any delay. */ formats_seen: BTreeSet, } /** Additional information for zwp_linux_dmabuf_feedback_v1 */ struct ObjZwpLinuxDmabufFeedback { /** Last format table received from the Wayland compositor */ input_format_table: Option>, /** Contents of last format table sent (or which will be sent to) to the Wayland client */ output_format_table: Option>, main_device: Option, tranches: Vec, current: DmabufTranche, /* If true, have already processed tranches */ processed: bool, /* If true, should send format table when processing ::done */ queued_format_table: Option<(Rc>, u32)>, } /** Additional information for zwp_linux_buffer_params_v1 */ struct ObjZwpLinuxDmabufParams { /* if using 'create', output dmabuf reference stored here before transferring to wl_buffer */ dmabuf: Option>>, // A bunch of zwp_linux_buffer_params_v1.add calls will be followed by create or create_immed, // which provide the dimensions and format planes: Vec, // todo: the size is limited in practice, so instead of Vec one could use an array; // however, this would require some way of handling drop (e.g., using Option) // which might be awkward } /** Additional information for wp_linux_drm_syncobj_surface_v1 */ struct ObjWpDrmSyncobjSurface { /* Corresponding wl_surface object */ surface: ObjId, } /** Additional information for wp_linux_drm_syncobj_timeline_v1 */ struct ObjWpDrmSyncobjTimeline { timeline: Rc>, } /** Additional information for zwlr_screencopy_frame_v1 */ struct ObjZwlrScreencopyFrame { /* Store sfd and shm metadata of target buffer in case the wl_buffer is destroyed early */ buffer: Option<(Rc>, Option)>, } /** Additional information for ext_image_copy_capture_session_v1. * * The dmabuf device and formats need to be processed together in order to * restrict to what the upstream device and Waypipe both support. shm formats * are safe to pass through unmodified since, in the worst case, any shm format * can be handled by replicating the entire shm pool. 
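*
* As a rough sketch (with invented example values, not the exact algorithm),
* the dmabuf restriction step behaves like an intersection keyed on
* (format, modifier) pairs:
*
*     compositor advertises:  [(ARGB8888, LINEAR), (NV12, X_TILED)]
*     local device supports:  [(ARGB8888, LINEAR), (ARGB8888, X_TILED)]
*     forwarded to client:    [(ARGB8888, LINEAR)]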
*/ struct ObjExtImageCopyCaptureSession { dmabuf_device: Option<u64>, dmabuf_formats: Vec<(u32, Vec<u64>)>, /* The sorted list of format/modifier pairs from the last batch of buffer constraint events */ last_format_mod_list: Vec<(u32, u64)>, frame_list: Vec<ObjId>, /* At most one frame _should_ exist at a time */ } /** Additional information for ext_image_copy_capture_frame_v1 */ struct ObjExtImageCopyCaptureFrame { /* Store sfd and shm metadata of target buffer in case the wl_buffer is destroyed early */ buffer: Option<(Rc<RefCell<ShadowFd>>, Option<ObjWlBufferShm>)>, /** This is reset to None when the parent session is destroyed. */ capture_session: Option<ObjId>, /** If the parent session is destroyed, retain the last list of supported modifiers here. */ supported_modifiers: Vec<(u32, u64)>, } /** Additional information for zwlr_gamma_control_v1 */ struct ObjZwlrGammaControl { gamma_size: Option<u32>, } /** Additional information for wl_registry */ struct ObjWlRegistry { /** Store global advertisements for wp_linux_drm_syncobj_manager_v1 * until zwp_linux_dmabuf_v1 arrives and it is known whether Waypipe * is able to handle DMABUFs and timeline semaphores. */ syncobj_manager_replay: Vec<(u32, u32)>, } /** Additional information attached to specific Wayland objects */ enum WpExtra { WlSurface(Box<ObjWlSurface>), WlBuffer(Box<ObjWlBuffer>), WlRegistry(Box<ObjWlRegistry>), WlShmPool(Box<ObjWlShmPool>), WpViewport(Box<ObjWpViewport>), ZwpDmabuf(Box<ObjZwpLinuxDmabuf>), ZwpDmabufFeedback(Box<ObjZwpLinuxDmabufFeedback>), ZwpDmabufParams(Box<ObjZwpLinuxDmabufParams>), ZwlrScreencopyFrame(Box<ObjZwlrScreencopyFrame>), ExtImageCopyCaptureSession(Box<ObjExtImageCopyCaptureSession>), ExtImageCopyCaptureFrame(Box<ObjExtImageCopyCaptureFrame>), ZwlrGammaControl(Box<ObjZwlrGammaControl>), WpDrmSyncobjSurface(Box<ObjWpDrmSyncobjSurface>), WpDrmSyncobjTimeline(Box<ObjWpDrmSyncobjTimeline>), None, } /** This enum indicates in which direction messages are being processed, and * provides queues for ShadowFds or file descriptors to receive or send. */ pub enum TranslationInfo<'a> { // RID -> SFD/FD FromChannel( ( &'a mut VecDeque<Rc<RefCell<ShadowFd>>>, &'a mut VecDeque<Rc<RefCell<ShadowFd>>>, ), ), // FD to SFD/RID FromWayland( ( &'a mut VecDeque<OwnedFd>, &'a mut Vec<Rc<RefCell<ShadowFd>>>, ), ), } /** Given a Wayland rectangle, clip it with the \[0,0,w,h\] rectangle and return * the result, if nonempty */ fn clip_wlrect_to_buffer(a: &WlRect, w: i32, h: i32) -> Option<Rect> { let x1 = a.x; let x2 = a.x.saturating_add(a.width); let y1 = a.y; let y2 = a.y.saturating_add(a.height); let nx1 = x1.clamp(0, w); let nx2 = x2.clamp(0, w); let ny1 = y1.clamp(0, h); let ny2 = y2.clamp(0, h); if nx2 > nx1 && ny2 > ny1 { Some(Rect { x1: nx1 as u32, x2: nx2 as u32, y1: ny1 as u32, y2: ny2 as u32, }) } else { None } } /** Apply the viewport crop and scale to a rectangle in surface-local coordinates. After this, * apply buffer scale/transforms. * * `buffer_size` is the size of the buffer _after_ scale/transform operations. * * This rounds boundaries outward by up to a pixel, under the assumption that "linear" scaling * is used, not nearest-neighbor, cubic, fft, etc. * * view_src/view_dst should be valid for wp_viewporter * * This function saturates on overflow.
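*
* Worked example (invented numbers): for a 100x100 buffer with
* view_src = Some((0, 0, 50 * 256, 50 * 256)) (crop to the top-left 50x50
* source region, in 24.8 fixed point) and view_dst = Some((200, 200))
* (a 4x upscale), the destination rectangle {x: 0, y: 0, width: 4, height: 4}
* spans 4 * (50 * 256) / 200 = 256 fixed-point units = 1.0 source pixels,
* and so rounds outward to exactly the single buffer pixel at (0, 0).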
*/ fn apply_viewport_transform( a: &WlRect, buffer_size: (i32, i32), view_src: Option<(i32, i32, i32, i32)>, view_dst: Option<(i32, i32)>, ) -> Option<Rect> { assert!(a.width >= 0 && a.height >= 0); let dst: (i32, i32) = if let Some(x) = view_dst { (x.0, x.1) } else if let Some(x) = view_src { /* in crop-only case, the crop rectangle size should be an integer */ (x.2 / 256, x.3 / 256) } else { (buffer_size.0, buffer_size.1) }; /* 1: clip rectangle to 'dst' */ let x1 = a.x.clamp(0, dst.0); let x2 = a.x.saturating_add(a.width).clamp(0, dst.0); let y1 = a.y.clamp(0, dst.1); let y2 = a.y.saturating_add(a.height).clamp(0, dst.1); if x2 <= x1 || y2 <= y1 { /* Rectangle intersection with 'dst' region is empty. */ return None; } /* Fixed point. Expand to u64 to avoid early clipping. */ if let Some(v) = view_src { assert!(v.0 >= 0 && v.1 >= 0 && v.2 > 0 && v.3 > 0); fn source_floor(v: i32, dst: i32, src_sz_fixed: i32, src_offset_fixed: i32) -> u32 { /* Fixed point calculation: floor(x1 * src.width / dst.width + src.x1), where src.width, src.x1 are in 24.8 fixed point. * Worst case: may use 62 bits for multiplication result and 32 for addition. */ (((v as u64) * (src_sz_fixed as u64) / (dst as u64) + (src_offset_fixed as u64)) / 256) as u32 } fn source_ceil(v: i32, dst: i32, src_sz_fixed: i32, src_offset_fixed: i32) -> u32 { ((((v as u64) * (src_sz_fixed as u64)).div_ceil(dst as u64) + (src_offset_fixed as u64)) .div_ceil(256)) as u32 } /* The rectangle should either map into the rectangle given by buffer_transformed_size, * or else there should be a protocol error because the viewport src is out of bounds; * however, it is safe for Waypipe to just clip the region and not raise an error. */ let sx1 = source_floor(x1, dst.0, v.2, v.0).min(buffer_size.0 as u32); let sx2 = source_ceil(x2, dst.0, v.2, v.0).min(buffer_size.0 as u32); let sy1 = source_floor(y1, dst.1, v.3, v.1).min(buffer_size.1 as u32); let sy2 = source_ceil(y2, dst.1, v.3, v.1).min(buffer_size.1 as u32); if sx1 >= sx2 || sy1 >= sy2 { return None; } Some(Rect { x1: sx1, x2: sx2, y1: sy1, y2: sy2, }) } else { /* When view_dst=None, buffer_size = dst and this scaling has no effect. */ let sx1 = (((x1 as u64) * (buffer_size.0 as u64)) / (dst.0 as u64)) as u32; let sx2 = (((x2 as u64) * (buffer_size.0 as u64)).div_ceil(dst.0 as u64)) as u32; let sy1 = (((y1 as u64) * (buffer_size.1 as u64)) / (dst.1 as u64)) as u32; let sy2 = (((y2 as u64) * (buffer_size.1 as u64)).div_ceil(dst.1 as u64)) as u32; /* Only clip by transformed size */ Some(Rect { x1: sx1, x2: sx2, y1: sy1, y2: sy2, }) } } /** Transform a rectangle indicating damage on a surface to a rectangle indicating * damage on a buffer. * * `scale` and `transform` are those of the surface; `width` and `height` those of the buffer. * * `view_src`/`view_dst` are validated viewport source and destination parameters. * * This clips the damage to the surface and returns None if the result is empty.
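*
* Worked example (invented numbers): with scale = 2, a normal (identity)
* transform, no viewport, and a 200x100 buffer, the surface damage
* {x: 3, y: 5, width: 10, height: 2} becomes the buffer rectangle
* [6, 26) x [10, 14).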
*/ fn apply_damage_rect_transform( a: &WlRect, scale: u32, transform: WlOutputTransform, view_src: Option<(i32, i32, i32, i32)>, view_dst: Option<(i32, i32)>, width: i32, height: i32, ) -> Option<Rect> { /* Each of the eight transformations corresponds to a * unique set of reflections: X<->Y | Xflip | Yflip */ let seq = [0b000, 0b011, 0b110, 0b101, 0b010, 0b001, 0b100, 0b111]; let code = seq[transform as u32 as usize]; let swap_xy = code & 0x1 != 0; let flip_x = code & 0x2 != 0; let flip_y = code & 0x4 != 0; let pre_vp_size = if swap_xy { (height / scale as i32, width / scale as i32) } else { (width / scale as i32, height / scale as i32) }; let b = apply_viewport_transform(a, pre_vp_size, view_src, view_dst)?; // These should not overflow, since b should be clipped to `pre_vp_size`. let mut xl = b.x1.checked_mul(scale).unwrap(); let mut yl = b.y1.checked_mul(scale).unwrap(); let mut xh = b.x2.checked_mul(scale).unwrap(); let mut yh = b.y2.checked_mul(scale).unwrap(); let end_w = if swap_xy { height } else { width } as u32; let end_h = if swap_xy { width } else { height } as u32; if flip_x { (xh, xl) = (end_w - xl, end_w - xh); } if flip_y { (yh, yl) = (end_h - yl, end_h - yh); } if swap_xy { (xl, xh, yl, yh) = (yl, yh, xl, xh); } Some(Rect { x1: xl, x2: xh, y1: yl, y2: yh, }) } /** Invert a viewport transform, rounding damage rectangle sizes up. * * The input is a nondegenerate rectangle _contained in_ the [0..buffer_size] rectangle. */ fn inverse_viewport_transform( a: &Rect, buffer_size: (i32, i32), view_src: Option<(i32, i32, i32, i32)>, view_dst: Option<(i32, i32)>, ) -> Option<WlRect> { assert!(buffer_size.0 > 0 && buffer_size.1 > 0); assert!( a.x1 < a.x2 && a.y1 < a.y2 && a.x2 <= buffer_size.0 as u32 && a.y2 <= buffer_size.1 as u32 ); /* 1: convert to .8 fixed point, and clip rectangle to 'src' */ let (mut x1, mut x2, mut y1, mut y2) = ( (a.x1 as u64) * 256, (a.x2 as u64) * 256, (a.y1 as u64) * 256, (a.y2 as u64) * 256, ); if let Some((sx, sy, sw, sh)) = view_src { assert!(sx >= 0 && sy >= 0 && sw > 0 && sh > 0); let e = (sx as u64 + sw as u64, sy as u64 + sh as u64); x1 = x1.clamp(sx as u64, e.0) - (sx as u64); x2 = x2.clamp(sx as u64, e.0) - (sx as u64); y1 = y1.clamp(sy as u64, e.1) - (sy as u64); y2 = y2.clamp(sy as u64, e.1) - (sy as u64); /* Per protocol, parts of damage outside surface dimensions are ignored */ if x2 <= x1 || y2 <= y1 { return None; } }; /* src size, in .8 fixed point */ let src: (u64, u64) = if let Some(x) = view_src { (x.2 as u64, x.3 as u64) } else { (buffer_size.0 as u64 * 256, buffer_size.1 as u64 * 256) }; /* 2: scale to 'dst' and round result to integer */ let dst: (u32, u32) = if let Some(x) = view_dst { (x.0 as u32, x.1 as u32) } else if let Some(x) = view_src { /* in crop-only case, the crop rectangle size should be an integer */ (x.2 as u32 / 256, x.3 as u32 / 256) } else { (buffer_size.0 as u32, buffer_size.1 as u32) }; /* Rectangle coordinates, in fixed point, are up to 31+8 bits, * dst is at most 31 bits, and src.0 is at most 31 bits. Do operations * in u128 to ensure 31+31+8 bit intermediate products are not clipped. * (Note: it is possible to reduce the intermediate product size, since * if view_src is not None, coordinates are clipped to 32 bits, and * otherwise are divisible by 256.)
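*
* Bit-count sketch of why u128 suffices: x2 <= buffer_size.0 * 256 < 2^39
* and dst.0 < 2^31, so an intermediate product may need up to 70 bits;
* a u64 multiplication could overflow here, a u128 one cannot.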
*/ let xl = ((x1 as u128) * (dst.0 as u128)) / (src.0 as u128); let xh = ((x2 as u128) * (dst.0 as u128)).div_ceil(src.0 as u128); let yl = ((y1 as u128) * (dst.1 as u128)) / (src.1 as u128); let yh = ((y2 as u128) * (dst.1 as u128)).div_ceil(src.1 as u128); assert!(xh > xl && yh > yl); /* Results should fall within [0,i32::MAX] since dst is so bounded, and coordinates are <= src*256. */ assert!(xh <= i32::MAX as _ && yh <= i32::MAX as _); Some(WlRect { x: xl as i32, y: yl as i32, width: (xh - xl) as i32, height: (yh - yl) as i32, }) } /** Transform a rectangle indicating damage on a buffer to a rectangle * indicating damage on a surface. * * The rectangle processed may be clipped to the buffer size and to the viewport size; * None will be returned if it has no overlap with the surface extents. */ fn inverse_damage_rect_transform( a: &WlRect, scale: u32, transform: WlOutputTransform, view_src: Option<(i32, i32, i32, i32)>, view_dst: Option<(i32, i32)>, width: i32, height: i32, ) -> Option<WlRect> { assert!(width > 0 && height > 0 && scale > 0); /* Each of the eight transformations corresponds to a * unique set of reflections: X<->Y | Xflip | Yflip */ let seq = [0b000, 0b011, 0b110, 0b101, 0b010, 0b001, 0b100, 0b111]; let code = seq[transform as u32 as usize]; let swap_xy = code & 0x1 != 0; let flip_x = code & 0x2 != 0; let flip_y = code & 0x4 != 0; let mut xl = a.x.clamp(0, width) as u32; let mut xh = a.x.saturating_add(a.width).clamp(0, width) as u32; let mut yl = a.y.clamp(0, height) as u32; let mut yh = a.y.saturating_add(a.height).clamp(0, height) as u32; if xh <= xl || yh <= yl { /* Rectangle is degenerate after clipping */ return None; } let (end_w, end_h) = if swap_xy { (height as u32, width as u32) } else { (width as u32, height as u32) }; if swap_xy { (xl, xh, yl, yh) = (yl, yh, xl, xh); } if flip_y { (yh, yl) = (end_h - yl, end_h - yh); } if flip_x { (xh, xl) = (end_w - xl, end_w - xh); } (xl, xh) = (xl / scale, xh.div_ceil(scale)); (yl, yh) = (yl / scale, yh.div_ceil(scale)); let post_vp_size = if swap_xy { (height / scale as i32, width / scale as i32) } else { (width / scale as i32, height / scale as i32) }; let b = Rect { x1: xl, x2: xh, y1: yl, y2: yh, }; inverse_viewport_transform(&b, post_vp_size, view_src, view_dst) } /** Get an interval containing the entire memory region corresponding to the buffer */ fn damage_for_entire_buffer(buffer: &ObjWlBufferShm) -> (usize, usize) { let start = (buffer.offset) as usize; let end = if let Some(layout) = get_shm_format_layout(buffer.format) { let mut end = start; assert!(buffer.stride >= 0 && buffer.width >= 0 && buffer.height >= 0); // TODO: use checked_mul and return None on overflow? let ext_stride = (buffer.stride as u32) * layout.planes[0].hsub.get(); for plane in layout.planes { /* In practice, divisions should always be exact/with no remainder; * in case odd sizes are for some reason used, the most sensible thing * to do is round up. (Exactly how to calculate these parameters is left * entirely unspecified by the Wayland protocol as of writing.
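*
* Illustration of the intent, using an NV12-like two-plane layout with
* hypothetical parameter values: a 100x50 buffer with stride 100,
* plane 0 = {bpt: 1, hsub: 1, vsub: 1}, and plane 1 = {bpt: 2, hsub: 2,
* vsub: 2} gives plane strides 100 and 100 and plane heights 50 and 25,
* so end - start = 100 * 50 + 100 * 25 = 7500 bytes, the usual w*h*3/2
* total for a 4:2:0 format.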
*/ let plane_stride = (ext_stride * plane.bpt.get()) .div_ceil(layout.planes[0].bpt.get() * plane.hsub.get()); let plane_height = (buffer.height as u32).div_ceil(plane.vsub.get()); end = end.saturating_add(plane_height.saturating_mul(plane_stride) as usize); } end } else { start.saturating_add(buffer.stride.saturating_mul(buffer.height) as usize) }; (64 * (start / 64), align(end, 64)) } /** Return a list of rectangular areas damaged on a surface since the last time the given * buffer was committed; the rectangles are not guaranteed to be disjoint. */ fn get_damage_rects(surface: &ObjWlSurface, attachment: &BufferAttachment) -> Vec<Rect> { let (width, height) = attachment.buffer_size; let mut rects = Vec::<Rect>::new(); let full_damage = Rect { x1: 0, x2: width.try_into().unwrap(), y1: 0, y2: height.try_into().unwrap(), }; /* The first slot will later be updated to match `attachment`. */ let Some(first_idx_offset) = surface .damage_history .iter() .skip(1) .position(|x| x.attachment.buffer_uid == attachment.buffer_uid) else { /* First time this buffer was seen at all; mark the entire buffer as damaged */ rects.push(full_damage); return rects; }; let first_idx = first_idx_offset + 1; if surface.damage_history[first_idx].attachment != *attachment { /* Require an exact match: it is possible that the surface->buffer coordinate * transform and the buffer contents have changed in concert so that the surface * pixels remain the same, even though all buffer pixels are different. In this * scenario no damage will be reported. However, Waypipe must still update all * pixels, because the compositor may read from and rerender the surface at any * time. It is only when the visible surface contents have no damage and the * coordinate transform stays the same that the contents of the buffer are pinned. * * Note: in theory, one could special-case viewport-resize operations which preserve * the mapping for all retained surface pixels, and then automatically damage the * buffer pixels that are newly made visible. (Clients may do this to enable * high-performance smooth resizing.) */ rects.push(full_damage); return rects; } /* Scan all recorded damage from previous commits */ for (i, batch) in surface.damage_history[..first_idx].iter().enumerate() { if i == 0 { /* Special case: damage to the target buffer needs no conversion */ for w in &batch.damage_buffer { if let Some(r) = clip_wlrect_to_buffer(w, width, height) { rects.push(r); } } } else { for w in &batch.damage_buffer { /* Translate damage from this buffer's coordinate space to the current * buffer's coordinate space, through the surface coordinate space. * * Note: buffer damage can be more precise than surface damage when the * scale is > 1 or wp_viewporter is involved, so this can round up buffer * sizes more than strictly necessary. */ let Some(s) = inverse_damage_rect_transform( w, batch.attachment.scale, batch.attachment.transform, batch.attachment.viewport_src, batch.attachment.viewport_dst, batch.attachment.buffer_size.0, batch.attachment.buffer_size.1, ) else { continue; }; if let Some(r) = apply_damage_rect_transform( &s, attachment.scale, attachment.transform, attachment.viewport_src, attachment.viewport_dst, width, height, ) { rects.push(r); } } } for w in &batch.damage { /* Convert from surface space to local space, using the _current_ transform.
* * Note: the damage is clipped according to the current surface size; * technically one could clip damage at each commit according to the then-current * surface size, and then reclip it for this one; however, such an optimization * would be complicated and only benefits weird clients, so is not worth doing. */ if let Some(r) = apply_damage_rect_transform( w, attachment.scale, attachment.transform, attachment.viewport_src, attachment.viewport_dst, width, height, ) { rects.push(r); } } } rects } /** Compute the (disjoint) damage intervals for a shared memory pool used by the buffer, * using damage accumulated since the last time the buffer was committed */ fn get_damage_for_shm( buffer: &ObjWlBuffer, surface: &ObjWlSurface, attachment: &BufferAttachment, ) -> Vec<(usize, usize)> { let Some(shm_info) = &buffer.shm_info else { panic!(); }; let Some(layout) = get_shm_format_layout(shm_info.format) else { debug!("Format without known linear layout {}", shm_info.format); return vec![damage_for_entire_buffer(shm_info)]; }; let [p0] = layout.planes else { debug!( "Format {} has {} planes", shm_info.format, layout.planes.len() ); return vec![damage_for_entire_buffer(shm_info)]; }; if p0.hsub.get() != 1 || p0.vsub.get() != 1 || p0.htex.get() != 1 || p0.vtex.get() != 1 { debug!( "Format {} has nontrivial texels or subsampling", shm_info.format ); return vec![damage_for_entire_buffer(shm_info)]; } let bpp = p0.bpt.get(); let mut rects = get_damage_rects(surface, attachment); compute_damaged_segments( &mut rects[..], 6, 128, shm_info.offset.try_into().unwrap(), shm_info.stride.try_into().unwrap(), bpp as usize, ) } /** Compute the (disjoint) damage intervals for the linear view of a DMABUF's contents, * using damage accumulated since the last time the buffer was committed */ fn get_damage_for_dmabuf( sfdd: &ShadowFdDmabuf, surface: &ObjWlSurface, attachment: &BufferAttachment, ) -> Vec<(usize, usize)> { // TODO: deduplicate implementations let (nom_len, width) = match sfdd.buf { DmabufImpl::Vulkan(ref buf) => (buf.nominal_size(sfdd.view_row_stride), buf.width), DmabufImpl::Gbm(ref buf) => (buf.nominal_size(sfdd.view_row_stride), buf.width), }; assert!( sfdd.drm_format != WlShmFormat::Xrgb8888 as u32 && sfdd.drm_format != WlShmFormat::Argb8888 as u32 ); let wayl_format = drm_to_wayland(sfdd.drm_format); let Some(layout) = get_shm_format_layout(wayl_format) else { debug!("Format without known bpp {}", sfdd.drm_format); return vec![(0, align(nom_len, 64))]; }; let [p0] = layout.planes else { debug!( "Format {} has {} planes", sfdd.drm_format, layout.planes.len() ); return vec![(0, align(nom_len, 64))]; }; if p0.hsub.get() != 1 || p0.vsub.get() != 1 || p0.htex.get() != 1 || p0.vtex.get() != 1 { debug!( "Format {} has nontrivial texels or subsampling", sfdd.drm_format ); return vec![(0, align(nom_len, 64))]; } let bpp = p0.bpt.get(); let mut rects = get_damage_rects(surface, attachment); /* Stride: tightly packed. */ // except: possibly gcd(4,bpp) aligned ? 
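/* Example (illustrative numbers only): an 800-pixel-wide R8 image with
 * bpp = 1 and no view_row_stride set is treated below as having rows of
 * exactly 800 bytes; a producer whose rows are padded (say, to 832 bytes)
 * is expected to supply view_row_stride so the damage intervals line up. */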
let stride = sfdd.view_row_stride.unwrap_or(width * bpp); compute_damaged_segments(&mut rects[..], 6, 128, 0, stride as usize, bpp as usize) } /** Construct a format table for use by the zwp_linux_dmabuf_v1 protocol, * from the tranches */ fn process_dmabuf_feedback(feedback: &mut ObjZwpLinuxDmabufFeedback) -> Result, String> { let mut index: BTreeMap<(u32, u64), u16> = BTreeMap::new(); for t in feedback.tranches.iter() { for f in t.values.iter() { index.insert(*f, u16::MAX); } } if index.len() > u16::MAX as usize { return Err(tag!( "Format table is too large ({} > {})", index.len(), u16::MAX )); } let mut table = Vec::new(); for (i, (f, v)) in index.iter_mut().enumerate() { table.extend_from_slice(&f.0.to_le_bytes()); table.extend_from_slice(&0u32.to_le_bytes()); table.extend_from_slice(&f.1.to_le_bytes()); *v = i as u16; } for t in feedback.tranches.iter_mut() { let mut indices = Vec::new(); for f in t.values.iter() { let idx = index.get(f).expect("Inserted key should still be present"); indices.extend_from_slice(&idx.to_le_bytes()); } t.indices = indices; } Ok(table) } /** Construct a format table for use by the zwp_linux_dmabuf_v1 protocol */ fn rebuild_format_table( dmabuf_dev: &DmabufDevice, feedback: &mut ObjZwpLinuxDmabufFeedback, ) -> Result<(), String> { /* Identify the remotely supported formats */ let mut remote_formats = BTreeSet::::new(); for t in feedback.tranches.iter() { if t.device != feedback.main_device.unwrap() { /* Only use main device of compositor; at least one tranche will use it. */ continue; } for (fmt, _modifier) in t.values.iter() { remote_formats.insert(*fmt); } } /* Regenerate tranches, roughly matching original _format_ preferences */ let mut new_tranches = Vec::::new(); for t in feedback.tranches.iter() { if t.device != feedback.main_device.unwrap() { continue; } let mut n = DmabufTranche { device: feedback.main_device.unwrap(), flags: 0, values: Vec::new(), indices: Vec::new(), }; for (fmt, _modifier) in t.values.iter() { /* Record each format in exactly one tranche, preferring earlier tranches; * all modifiers for a format are put in the same tranche. */ if remote_formats.remove(fmt) { let mods = dmabuf_dev_modifier_list(dmabuf_dev, *fmt); for m in mods { n.values.push((*fmt, *m)); } } } if !n.values.is_empty() { new_tranches.push(n); } } if new_tranches.is_empty() { return Err(tag!( "Failed to build new format tranches: no formats with common support" )); } feedback.tranches = new_tranches; Ok(()) } /** Add new modifiers to the format-to-modifier-list table. * * Modifiers added are not deduplicated; if `modifiers` is empty nothing will be done. * * This can theoretically lead to quadratic runtime, but should only be used with * modifiers that are a subset of what Waypipe supports, so list lengths will * never be long. 
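*
* Usage sketch (invented constants): starting from an empty map,
*
*     add_advertised_modifiers(&mut map, FMT_ARGB8888, &[MOD_LINEAR]);
*     add_advertised_modifiers(&mut map, FMT_ARGB8888, &[MOD_LINEAR, MOD_X_TILED]);
*
* leaves map[&FMT_ARGB8888] equal to [MOD_LINEAR, MOD_X_TILED]: duplicates
* are skipped and first-insertion order is preserved.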
*/ fn add_advertised_modifiers(map: &mut BTreeMap>, format: u32, modifiers: &[u64]) { if modifiers.is_empty() { return; } let entries: &mut Vec = map.entry(format).or_default(); for m in modifiers { if !entries.contains(m) { entries.push(*m); } } } /** Record a new Wayland object, checking that its ID was not already used */ fn insert_new_object( objects: &mut BTreeMap, id: ObjId, obj: WpObject, ) -> Result<(), String> { let t = obj.obj_type; if let Some(old) = objects.insert(id, obj) { return Err(tag!( "Creating object of type {:?} with id {}, but object of type {:?} with same id already exists", t, id, old.obj_type, )); } Ok(()) } /** Register any new objects created by a message */ fn register_generic_new_ids( msg: &[u8], meth: &WaylandMethod, glob: &mut Globals, ) -> Result<(), String> { let mut tail = &msg[8..]; for op in meth.sig { match op { WaylandArgument::Uint | WaylandArgument::Int | WaylandArgument::Fixed => { let _ = parse_u32(&mut tail); } WaylandArgument::Fd => { // do nothing } WaylandArgument::Object(_) | WaylandArgument::GenericObject => { let _ = parse_obj(&mut tail)?; } WaylandArgument::NewId(new_intf) => { let id = parse_obj(&mut tail)?; insert_new_object( &mut glob.objects, id, WpObject { obj_type: *new_intf, extra: WpExtra::None, }, )?; } WaylandArgument::GenericNewId => { // order: (string, version, new_id) let string = parse_string(&mut tail)? .ok_or_else(|| tag!("New id string should not be null"))?; let _version = parse_u32(&mut tail)?; let id = parse_obj(&mut tail)?; if glob.objects.contains_key(&id) { return Err(tag!("Creating object with id not detected as deleted")); } if let Some(new_intf) = lookup_intf_by_name(string) { /* Only track recognized object types; messages for * unrecognized types will be ignored. */ glob.objects.insert( id, WpObject { obj_type: new_intf, extra: WpExtra::None, }, ); } } WaylandArgument::String => { let x = parse_string(&mut tail)?; if x.is_none() { return Err(tag!("Received null string where none allowed")); } } WaylandArgument::OptionalString => { let _ = parse_string(&mut tail)?; } WaylandArgument::Array => { let _ = parse_array(&mut tail)?; } } } Ok(()) } /** Copy a message to the destination buffer */ fn copy_msg(msg: &[u8], dst: &mut &mut [u8]) { dst[..msg.len()].copy_from_slice(msg); *dst = &mut std::mem::take(dst)[msg.len()..]; } /** Copy a message, stripping or adding the Waypipe-specific fd count field */ fn copy_msg_tag_fd(msg: &[u8], dst: &mut &mut [u8], from_channel: bool) -> Result<(), String> { dst[..msg.len()].copy_from_slice(msg); let mut h2 = u32::from_le_bytes(dst[4..8].try_into().unwrap()); if from_channel { // drop the upper bits of the opcode entirely if h2 & (1 << 11) == 0 { return Err(tag!("header part {:x} missing fd tag", h2)); } h2 &= !0xff00; } else { // tag message as using one file descriptor h2 = (h2 & !0xff00) | (1 << 11); } dst[4..8].copy_from_slice(&u32::to_le_bytes(h2)); *dst = &mut std::mem::take(dst)[msg.len()..]; Ok(()) } /** Return true iff `id` in the Wayland server allocation range */ fn is_server_object(id: ObjId) -> bool { id.0 >= 0xff000000 } /** Default processing for an identified Wayland message */ fn default_proc_way_msg( msg: &[u8], dst: &mut &mut [u8], meth: &WaylandMethod, is_req: bool, object_id: ObjId, glob: &mut Globals, ) -> Result { if dst.len() < msg.len() { // Not enough space to store the message. Then edit the // protocol message in place, and stop processing. 
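/* NeedsSpace carries a (byte count, fd count) pair; here msg.len() bytes of
 * output space and no fd slots are requested, so that the caller can retry
 * this message once the destination buffer has room. */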
return Ok(ProcMsg::NeedsSpace((msg.len(), 0))); } register_generic_new_ids(msg, meth, glob)?; if meth.destructor { if !is_req { // Deletion events; process immediately; object is already in map glob.objects.remove(&object_id).unwrap(); } else if is_server_object(object_id) { // TODO: is zombie handling necessary, when clients delete server generated objects? glob.objects.remove(&object_id).unwrap(); } else { // client object, destructor request: wait until: wl_display.delete_id } } copy_msg(msg, dst); Ok(ProcMsg::Done) } /** Process an unidentified Wayland message */ fn proc_unknown_way_msg( msg: &[u8], dst: &mut &mut [u8], transl: TranslationInfo, ) -> Result<ProcMsg, String> { /* Untracked, unidentified object with unknown message */ if let TranslationInfo::FromChannel((x, y)) = transl { let mut header2 = u32::from_le_bytes(msg[4..8].try_into().unwrap()); let ntagfds = ((header2 & ((1 << 16) - 1)) >> 11) as usize; // This is usually safe for simple file transfers (like keymaps), // where as long as the sending side recognizes the file it can // be replicated OK. if ntagfds > 0 { error!("Unidentified message has {} fds attached according to other Waypipe instance; blindly transferring them", ntagfds); } if dst.len() < msg.len() || y.len() + ntagfds > MAX_OUTGOING_FDS { // Not enough space to store the message. return Ok(ProcMsg::NeedsSpace((msg.len(), ntagfds))); } if x.len() < ntagfds { return Err(tag!("Missing sfd")); } for sfd in x.iter().take(ntagfds) { let b = sfd.borrow(); if let ShadowFdVariant::File(data) = &b.data { if data.pending_apply_tasks > 0 { return Ok(ProcMsg::WaitFor(b.remote_id)); } } } for _ in 0..ntagfds { let sfd = x.pop_front().unwrap(); y.push_back(sfd); } /* Copy message to wayland, removing fd tags */ dst[..msg.len()].copy_from_slice(msg); header2 &= !0xf800; dst[4..8].copy_from_slice(&u32::to_le_bytes(header2)); } else { if dst.len() < msg.len() { // Not enough space to store the message. return Ok(ProcMsg::NeedsSpace((msg.len(), 0))); } /* Copy message to channel, assuming no fds are attached */ dst[..msg.len()].copy_from_slice(msg); } *dst = &mut std::mem::take(dst)[msg.len()..]; Ok(ProcMsg::Done) } /** Parse and return the entries of a zwp_linux_dmabuf_v1 format table. Any * trailing bytes after the last multiple of 16 will be ignored */ fn parse_format_table(data: &[u8]) -> Vec<(u32, u64)> { let mut t = Vec::new(); for chunk in data.chunks_exact(16) { let format: u32 = u32::from_le_bytes(chunk[..4].try_into().unwrap()); let modifier: u64 = u64::from_le_bytes(chunk[8..16].try_into().unwrap()); t.push((format, modifier)); } t } /** Parse a `dev_t` array into a u64 */ fn parse_dev_array(arr: &[u8]) -> Option<u64> { /* dev_t may be smaller on some old systems, but detection of this is complicated, so * accept both reasonable options */ if arr.len() == 4 { Some(u32::from_le_bytes(arr.try_into().unwrap()) as u64) } else if arr.len() == 8 { Some(u64::from_le_bytes(arr.try_into().unwrap())) } else { None } } /** Convert a `u64` into an array of size `dev_t`. On systems where dev_t * is u32, the high bits are expected to be zero. */ fn write_dev_array(dev: u64) -> [u8; SIZEOF_DEV_T] { let b = dev.to_le_bytes(); let (dev, leftover) = b.split_at(SIZEOF_DEV_T); assert!(leftover.iter().all(|x| *x == 0)); dev.try_into().unwrap() } /** Assuming the ShadowFd has file type, return whether it has pending apply tasks.
/** Assuming the ShadowFd is of file type, return whether it has pending apply tasks */
fn file_has_pending_apply_tasks(sfd: &RefCell<ShadowFd>) -> Result<bool, String> {
    let b = sfd.borrow();
    let ShadowFdVariant::File(data) = &b.data else {
        // TODO: make this a helper function
        return Err(tag!("ShadowFd is not of file type"));
    };
    Ok(data.pending_apply_tasks > 0)
}

/** Compute the midpoint of two timestamps, rounding to -∞ */
fn timespec_midpoint(stamp_1: libc::timespec, stamp_3: libc::timespec) -> libc::timespec {
    let mut mid_nsec = if stamp_1.tv_nsec < stamp_3.tv_nsec {
        stamp_1.tv_nsec + (stamp_3.tv_nsec - stamp_1.tv_nsec) / 2
    } else {
        stamp_3.tv_nsec + (stamp_1.tv_nsec - stamp_3.tv_nsec) / 2
    };
    let mut mid_sec = if stamp_1.tv_sec < stamp_3.tv_sec {
        stamp_1.tv_sec + (stamp_3.tv_sec - stamp_1.tv_sec) / 2
    } else {
        stamp_3.tv_sec + (stamp_1.tv_sec - stamp_3.tv_sec) / 2
    };
    if stamp_3.tv_sec % 2 != stamp_1.tv_sec % 2 {
        mid_nsec += 500_000_000;
    }
    /* >= keeps tv_nsec in [0, 1_000_000_000) when the carry lands exactly on a second */
    if mid_nsec >= 1_000_000_000 {
        mid_sec += 1;
        mid_nsec -= 1_000_000_000;
    }
    libc::timespec {
        tv_sec: mid_sec,
        tv_nsec: mid_nsec,
    }
}

/** Estimate the time difference between two clocks.
 *
 * Return value is: (sec_diff, nsec_diff) where nsec_diff >= 0 */
fn clock_sub(clock_a: u32, clock_b: u32) -> Result<(i64, u32), String> {
    let mut stamp_1 = libc::timespec {
        tv_sec: 0,
        tv_nsec: 0,
    };
    let mut stamp_2 = stamp_1;
    let mut stamp_3 = stamp_1;
    let ca: libc::clockid_t = clock_a.try_into().unwrap();
    let cb: libc::clockid_t = clock_b.try_into().unwrap();
    unsafe {
        // SAFETY: clock_gettime only writes into second argument, which is repr(C)
        // and thus properly aligned
        let ret1 = libc::clock_gettime(ca, &mut stamp_1);
        let ret2 = libc::clock_gettime(cb, &mut stamp_2);
        let ret3 = libc::clock_gettime(ca, &mut stamp_3);
        if ret1 != 0 || ret2 != 0 || ret3 != 0 {
            return Err(tag!(
                "clock_gettime failed for clock {} or {}",
                clock_a, clock_b
            ));
        }
    }
    let stamp_avg = timespec_midpoint(stamp_1, stamp_3);
    /* tv_sec is i32 on pre-Y2K38 systems */
    #[allow(clippy::unnecessary_cast)]
    let mut tv_sec = stamp_avg
        .tv_sec
        .checked_sub(stamp_2.tv_sec)
        .ok_or_else(|| tag!("overflow"))? as i64;
    let mut tv_nsec = stamp_avg
        .tv_nsec
        .checked_sub(stamp_2.tv_nsec)
        .ok_or_else(|| tag!("overflow"))? as i32;
    if tv_nsec < 0 {
        tv_sec -= 1;
        tv_nsec += 1_000_000_000;
    }
    assert!((0..1_000_000_000).contains(&tv_nsec));
    Ok((tv_sec, tv_nsec as u32))
}

/** Add a signed time offset to a time point, returning None on overflow. */
fn time_add(mut a: (u64, u32), b: (i64, u32)) -> Option<(u64, u32)> {
    assert!(a.1 < 1_000_000_000 && b.1 < 1_000_000_000);
    let mut nsec = a.1.checked_add(b.1)?;
    /* >= keeps the nanosecond part in [0, 1_000_000_000) when the sum is exactly one second */
    if nsec >= 1_000_000_000 {
        nsec -= 1_000_000_000;
        a.0 = a.0.checked_add(1)?;
    }
    // 64-bit time overflow should never happen in practice
    Some((a.0.checked_add_signed(b.0)?, nsec))
}
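/* A worked example for the midpoint helper above, assuming the usual Linux
 * libc::timespec layout (constructed the same way clock_sub does): the
 * midpoint of t=1.75s and t=4.25s is exactly t=3.0s, with the odd/even-second
 * parity test contributing the half second. */
#[cfg(test)]
mod midpoint_example {
    #[test]
    fn midpoint_with_odd_second_gap() {
        let a = libc::timespec {
            tv_sec: 1,
            tv_nsec: 750_000_000,
        };
        let b = libc::timespec {
            tv_sec: 4,
            tv_nsec: 250_000_000,
        };
        let m = super::timespec_midpoint(a, b);
        assert_eq!((m.tv_sec, m.tv_nsec), (3, 0));
    }
}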
/** Convert the given timestamp from/to the specified clock ID to/from the realtime
 * clock, depending on whether the message is going toward the channel or not. */
fn translate_timestamp(
    tv_sec_hi: u32,
    tv_sec_lo: u32,
    tv_nsec: u32,
    clock_id: u32,
    to_channel: bool,
) -> Result<(u32, u32, u32), String> {
    let tv_sec = join_u64(tv_sec_hi, tv_sec_lo);
    let realtime = libc::CLOCK_REALTIME as u32;
    let (new_sec, new_nsec) = if to_channel {
        /* Convert from clock_id to CLOCK_REALTIME */
        let (diff_sec, diff_nsec) = clock_sub(realtime, clock_id)?;
        time_add((tv_sec, tv_nsec), (diff_sec, diff_nsec)).ok_or_else(|| tag!("overflow"))?
    } else {
        /* Convert from CLOCK_REALTIME to clock_id */
        let (diff_sec, diff_nsec) = clock_sub(clock_id, realtime)?;
        time_add((tv_sec, tv_nsec), (diff_sec, diff_nsec)).ok_or_else(|| tag!("overflow"))?
    };
    let (new_sec_hi, new_sec_lo) = split_u64(new_sec);
    Ok((new_sec_hi, new_sec_lo, new_nsec))
}

/** Handle a readonly file whose contents need exact replication.
 *
 * When the message comes from Wayland, translate it; when the message comes from the
 * channel, return Ok(Some(ProcMsg::WaitFor(...))) if the file is not yet ready to export. */
fn translate_or_wait_for_fixed_file(
    transl: TranslationInfo,
    glob: &mut Globals,
    file_sz: u32,
) -> Result<Option<ProcMsg>, String> {
    match transl {
        TranslationInfo::FromChannel((x, y)) => {
            let sfd = &x.front().ok_or_else(|| tag!("Missing fd"))?;
            let rid = sfd.borrow().remote_id;
            if file_has_pending_apply_tasks(sfd)? {
                return Ok(Some(ProcMsg::WaitFor(rid)));
            }
            y.push_back(x.pop_front().unwrap());
        }
        TranslationInfo::FromWayland((x, y)) => {
            let v = translate_shm_fd(
                x.pop_front().ok_or_else(|| tag!("Missing fd"))?,
                file_sz.try_into().unwrap(),
                &mut glob.map,
                &mut glob.max_local_id,
                true,
                true,
                false,
            )?;
            y.push(v);
        }
    };
    Ok(None)
}

/** A helper structure to print a Wayland method's arguments, on demand */
struct MethodArguments<'a> {
    meth: &'a WaylandMethod,
    msg: &'a [u8],
}

/** Display a Wayland method's arguments; return Err if parsing failed, and
 * Ok(true) if the formatter produced an error */
fn fmt_method(arg: &MethodArguments, f: &mut Formatter<'_>) -> Result<bool, &'static str> {
    assert!(arg.msg.len() >= 8);
    let mut tail: &[u8] = &arg.msg[8..];
    let mut first = true;
    for op in arg.meth.sig {
        if !first {
            if write!(f, ", ").is_err() {
                return Ok(true);
            }
        } else {
            first = false;
        }
        match op {
            WaylandArgument::Uint => {
                let v = parse_u32(&mut tail)?;
                if write!(f, "{}:u", v).is_err() {
                    return Ok(true);
                }
            }
            WaylandArgument::Int => {
                let v = parse_i32(&mut tail)?;
                if write!(f, "{}:i", v).is_err() {
                    return Ok(true);
                }
            }
            WaylandArgument::Fixed => {
                let v = parse_i32(&mut tail)?;
                if write!(f, "{:.8}:f", (v as f64) * 0.00390625).is_err() {
                    return Ok(true);
                }
            }
            WaylandArgument::Fd => {
                // fds carry no inline wire data; just mark their position
                if write!(f, "fd").is_err() {
                    return Ok(true);
                }
            }
            WaylandArgument::Object(t) => {
                let id = parse_u32(&mut tail)?;
                if write!(f, "{}#{}:obj", INTERFACE_TABLE[*t as usize].name, id).is_err() {
                    return Ok(true);
                }
            }
            WaylandArgument::NewId(t) => {
                let id = parse_u32(&mut tail)?;
                if write!(f, "{}#{}:new_id", INTERFACE_TABLE[*t as usize].name, id).is_err() {
                    return Ok(true);
                }
            }
            WaylandArgument::GenericObject => {
                let id = parse_u32(&mut tail)?;
                if write!(f, "{}:gobj", id).is_err() {
                    return Ok(true);
                }
            }
            WaylandArgument::GenericNewId => {
                // order: (string, version, new_id)
                let ostring = parse_string(&mut tail)?;
                let version = parse_u32(&mut tail)?;
                let id = parse_u32(&mut tail)?;
                if (if let Some(string) = ostring {
                    write!(
                        f,
                        "(\"{}\", {}:u, {}:new_id)",
                        EscapeAsciiPrintable(string),
                        version,
                        id
                    )
                } else {
                    write!(f, "(null_str, {}:u, {}:new_id)", version, id)
                })
                .is_err()
                {
                    return Ok(true);
                }
            }
            WaylandArgument::String | WaylandArgument::OptionalString => {
                let ostring = parse_string(&mut tail)?;
                if (if let Some(string) = ostring {
                    write!(f, "\"{}\"", EscapeAsciiPrintable(string))
                } else {
                    write!(f, "null_str")
                })
                .is_err()
                {
                    return Ok(true);
                }
            }
            WaylandArgument::Array => {
                let a = parse_array(&mut tail)?;
                if write!(f, "{:?}", a).is_err() {
                    return Ok(true);
                }
            }
        }
    }
    if !tail.is_empty() {
        return Err(PARSE_ERROR);
    }
    Ok(false)
}

impl Display for MethodArguments<'_> {
    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
        match fmt_method(self, f) {
            Err(e) => write!(f, "...format error: {}", e),
            Ok(eof) => {
                if eof {
                    Err(std::fmt::Error)
                } else {
                    Ok(())
                }
            }
        }
    }
}
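/* The 0.00390625 factor above is exactly 1/256: wl_fixed_t is a signed 24.8
 * fixed-point format, so dividing the raw i32 value by 256 yields the real
 * number. A couple of spot checks (both products are exact in f64): */
#[cfg(test)]
mod fixed_point_example {
    #[test]
    fn wl_fixed_scale() {
        assert_eq!((256_i32 as f64) * 0.00390625, 1.0);
        assert_eq!((-384_i32 as f64) * 0.00390625, -1.5);
    }
}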
/** Result of a message processing attempt */
#[derive(Debug, Eq, PartialEq)]
pub enum ProcMsg {
    /** Message successfully processed */
    Done,
    /** Not enough bytes or fds in the output queue to process the message.
     *
     * The argument is the (number of bytes, number of fds) needed */
    NeedsSpace((usize, usize)),
    /** Need to wait for all operations on a given RID to complete.
     *
     * This is only useful for messages coming from the channel whose associated
     * processing might still be in progress. */
    WaitFor(Rid),
}

/** Returns true iff `x` is <= `y` in both coordinates */
fn space_le(x: (usize, usize), y: (usize, usize)) -> bool {
    x.0 <= y.0 && x.1 <= y.1
}

/** Macro to check if the required space (`x` bytes, `y` fds) is more than
 * the available space (`r.0` and `r.1`) and if so, return the required space. */
macro_rules! check_space {
    ($x:expr, $y:expr, $r:expr) => {
        let space: (usize, usize) = ($x, $y);
        if !space_le(space, $r) {
            return Ok(ProcMsg::NeedsSpace(space));
        }
    };
}

/** Process a Wayland message; typically this just copies the message from the source
 * to the destination buffer, but sometimes messages are dropped or inserted, and file
 * descriptor processing is done or queued.
 *
 * The function returns a ProcMsg indicating whether the message was processed or
 * whether some condition must be met first. */
pub fn process_way_msg(
    msg: &[u8],
    dst: &mut &mut [u8],
    transl: TranslationInfo,
    glob: &mut Globals,
) -> Result<ProcMsg, String> {
    let object_id = ObjId(u32::from_le_bytes(msg[0..4].try_into().unwrap()));
    let header2 = u32::from_le_bytes(msg[4..8].try_into().unwrap());
    let length = (header2 >> 16) as usize;
    assert!(msg.len() == length);
    // drop bits 11-15 from the opcode, as these may encode fds
    let opcode = (header2 & ((1 << 11) - 1)) as usize;
    let (from_channel, outgoing_fds): (bool, usize) = match &transl {
        TranslationInfo::FromChannel((_x, y)) => (true, y.len()),
        TranslationInfo::FromWayland((_x, y)) => (false, y.len()),
    };
    let is_req = glob.on_display_side == from_channel;
    let Some(ref mut obj) = glob.objects.get_mut(&object_id) else {
        debug!(
            "Processing {} on unknown object {}; opcode {} length {}",
            if is_req { "request" } else { "event" },
            object_id,
            opcode,
            length
        );
        return proc_unknown_way_msg(msg, dst, transl);
    };
    let opt_meth: Option<&WaylandMethod> = if is_req {
        INTERFACE_TABLE[obj.obj_type as usize].reqs.get(opcode)
    } else {
        INTERFACE_TABLE[obj.obj_type as usize].evts.get(opcode)
    };
    if opt_meth.is_none() {
        debug!(
            "Method out of range: {}#{}, opcode {}",
            INTERFACE_TABLE[obj.obj_type as usize].name,
            object_id,
            opcode
        );
        return proc_unknown_way_msg(msg, dst, transl);
    }
    let meth = opt_meth.unwrap();
    /* note: this may fail */
    if log::log_enabled!(log::Level::Debug) {
        debug!(
            "Processing {}: {}#{}.{}({})",
            if is_req { "request" } else { "event" },
            INTERFACE_TABLE[obj.obj_type as usize].name,
            object_id,
            meth.name,
            MethodArguments { meth, msg }
        );
    }
    assert!(opcode <= u8::MAX as usize);
    let mod_opcode = if is_req {
        MethodId::Request(opcode as u8)
    } else {
        MethodId::Event(opcode as u8)
    };
    let remaining_space = (dst.len(), MAX_OUTGOING_FDS - outgoing_fds);
    match (obj.obj_type, mod_opcode) {
        (WaylandInterface::WlDisplay, OPCODE_WL_DISPLAY_DELETE_ID) => {
            check_space!(msg.len(), 0, remaining_space);
            let object_id = ObjId(parse_evt_wl_display_delete_id(msg)?);
            if object_id == ObjId(1) {
                return Err(tag!("Tried to delete wl_display object"));
            }
            if let Some(_removed) = glob.objects.remove(&object_id) {
                /* Cleanup .extra state: currently nothing to do */
            } else {
                debug!("Deleted untracked object");
            }
            copy_msg(msg, dst);
            Ok(ProcMsg::Done)
        }
        (WaylandInterface::WlDisplay, OPCODE_WL_DISPLAY_GET_REGISTRY) => {
            check_space!(msg.len(), 0, remaining_space);
            let registry_id = parse_req_wl_display_get_registry(msg)?;
            insert_new_object(
                &mut glob.objects,
                registry_id,
                WpObject {
                    obj_type: WaylandInterface::WlRegistry,
                    extra: WpExtra::WlRegistry(Box::new(ObjWlRegistry {
                        syncobj_manager_replay: Vec::new(),
                    })),
                },
            )?;
            copy_msg(msg, dst);
            Ok(ProcMsg::Done)
        }
        (WaylandInterface::WlCallback, OPCODE_WL_CALLBACK_DONE) => {
            check_space!(msg.len(), 0, remaining_space);
            copy_msg(msg, dst);
            /* This object has a destructor event, and no methods, so
             * it is considered deleted immediately on receipt of the message */
            glob.objects.remove(&object_id);
            // TODO: handle all 'destructor-events' like this, including those for compositor-created objects?
            Ok(ProcMsg::Done)
        }
        (WaylandInterface::WlShm, OPCODE_WL_SHM_CREATE_POOL) => {
            check_space!(msg.len(), 1, remaining_space);
            let (pool_id, pool_size) = parse_req_wl_shm_create_pool(msg)?;
            let pos_size = pool_size
                .try_into()
                .map_err(|_| tag!("Need nonnegative shm pool size, given {}", pool_size))?;
            let buffer: Rc<RefCell<ShadowFd>> = match transl {
                TranslationInfo::FromChannel((x, y)) => {
                    let sfd = x.pop_front().ok_or_else(|| tag!("Missing fd"))?;
                    y.push_back(sfd.clone());
                    sfd
                }
                TranslationInfo::FromWayland((x, y)) => {
                    // Note: actual file size provided may be larger than pool_size,
                    // since there may be following wl_shm_pool::resize calls
                    let v = translate_shm_fd(
                        x.pop_front().ok_or_else(|| tag!("Missing fd"))?,
                        pos_size,
                        &mut glob.map,
                        &mut glob.max_local_id,
                        false,
                        false,
                        false,
                    )?;
                    y.push(v.clone());
                    v
                }
            };
            insert_new_object(
                &mut glob.objects,
                pool_id,
                WpObject {
                    obj_type: WaylandInterface::WlShmPool,
                    extra: WpExtra::WlShmPool(Box::new(ObjWlShmPool { buffer })),
                },
            )?;
            copy_msg_tag_fd(msg, dst, from_channel)?;
            Ok(ProcMsg::Done)
        }
        (WaylandInterface::WlShmPool, OPCODE_WL_SHM_POOL_CREATE_BUFFER) => {
            check_space!(msg.len(), 0, remaining_space);
            copy_msg(msg, dst);
            let (buffer_id, offset, width, height, stride, format) =
                parse_req_wl_shm_pool_create_buffer(msg)?;
            let sfd = if let WpExtra::WlShmPool(ref x) = &obj.extra {
                x.buffer.clone()
            } else {
                return Err(tag!("wl_shm_pool object has invalid extra type"));
            };
            insert_new_object(
                &mut glob.objects,
                buffer_id,
                WpObject {
                    obj_type: WaylandInterface::WlBuffer,
                    extra: WpExtra::WlBuffer(Box::new(ObjWlBuffer {
                        sfd,
                        shm_info: Some(ObjWlBufferShm {
                            width,
                            height,
                            format,
                            offset,
                            stride,
                        }),
                        unique_id: glob.max_buffer_uid,
                    })),
                },
            )?;
            glob.max_buffer_uid += 1;
            Ok(ProcMsg::Done)
        }
        (WaylandInterface::WlShmPool, OPCODE_WL_SHM_POOL_RESIZE) => {
            check_space!(msg.len(), 0, remaining_space);
            let WpExtra::WlShmPool(ref x) = &obj.extra else {
                return Err(tag!("wl_shm_pool object has invalid extra type"));
            };
            if file_has_pending_apply_tasks(&x.buffer)? {
                let b = x.buffer.borrow();
                return Ok(ProcMsg::WaitFor(b.remote_id));
            }
            copy_msg(msg, dst);
            if glob.on_display_side {
                /* The application side is responsible for notifying the display
                 * side of the updated buffer size and its contents. */
                return Ok(ProcMsg::Done);
            }
            let size = parse_req_wl_shm_pool_resize(msg)?;
            let new_size: usize = size
                .try_into()
                .map_err(|_| tag!("Invalid buffer size: {}", size))?;
            let x: &mut ShadowFd = &mut x.buffer.borrow_mut();
            if let ShadowFdVariant::File(ref mut y) = x.data {
                y.buffer_size = new_size;
                // extend the mirror and initialize a new mapping;
                // requires that no other operations be in progress on the mapping
                update_core_for_new_size(&y.fd, new_size, &mut y.core)?;
            }
            Ok(ProcMsg::Done)
        }
        (WaylandInterface::WlCompositor, OPCODE_WL_COMPOSITOR_CREATE_SURFACE) => {
            check_space!(msg.len(), 0, remaining_space);
            let surf_id = parse_req_wl_compositor_create_surface(msg)?;
            let d = DamageBatch {
                damage: Vec::new(),
                damage_buffer: Vec::new(),
                attachment: BufferAttachment {
                    scale: 1,
                    transform: WlOutputTransform::Normal,
                    viewport_src: None,
                    viewport_dst: None,
                    buffer_uid: 0,
                    buffer_size: (0, 0),
                },
            };
            insert_new_object(
                &mut glob.objects,
                surf_id,
                WpObject {
                    obj_type: WaylandInterface::WlSurface,
                    extra: WpExtra::WlSurface(Box::new(ObjWlSurface {
                        attached_buffer_id: None,
                        damage_history: [
                            d.clone(),
                            d.clone(),
                            d.clone(),
                            d.clone(),
                            d.clone(),
                            d.clone(),
                            d.clone(),
                        ],
                        acquire_pt: None,
                        release_pt: None,
                        viewport_id: None,
                    })),
                },
            )?;
            copy_msg(msg, dst);
            Ok(ProcMsg::Done)
        }
        (WaylandInterface::WlSurface, OPCODE_WL_SURFACE_DESTROY) => {
            check_space!(msg.len(), 0, remaining_space);
            copy_msg(msg, dst);
            let WpExtra::WlSurface(ref mut surf) = obj.extra else {
                return Err(tag!("Surface object has invalid extra type"));
            };
            let mut tmp = None;
            std::mem::swap(&mut tmp, &mut surf.viewport_id);
            if let Some(vp_id) = tmp {
                if let Some(ref mut object) = glob.objects.get_mut(&vp_id) {
                    let WpExtra::WpViewport(ref mut viewport) = object.extra else {
                        return Err(tag!("Viewport object has invalid extra type"));
                    };
                    viewport.wl_surface = None;
                }
            }
            Ok(ProcMsg::Done)
        }
        (WaylandInterface::WlSurface, OPCODE_WL_SURFACE_ATTACH) => {
            check_space!(msg.len(), 0, remaining_space);
            let (buf_id, _x, _y) = parse_req_wl_surface_attach(msg)?;
            if let Some(ref mut object) = glob.objects.get_mut(&object_id) {
                if let WpExtra::WlSurface(ref mut x) = &mut object.extra {
                    x.attached_buffer_id = if buf_id != ObjId(0) {
                        Some(buf_id)
                    } else {
                        None
                    };
                } else {
                    return Err(tag!("Surface object has invalid extra type"));
                }
            } else {
                return Err(tag!("Attaching to nonexistent object"));
            }
            copy_msg(msg, dst);
            Ok(ProcMsg::Done)
        }
        (WaylandInterface::WlSurface, OPCODE_WL_SURFACE_SET_BUFFER_SCALE) => {
            check_space!(msg.len(), 0, remaining_space);
            copy_msg(msg, dst);
            let s = parse_req_wl_surface_set_buffer_scale(msg)?;
            if s <= 0 {
                return Err(tag!("wl_surface.set_buffer_scale used nonpositive scale"));
            }
            let WpExtra::WlSurface(ref mut surf) = &mut obj.extra else {
                return Err(tag!("Surface object has invalid extra type"));
            };
            surf.damage_history[0].attachment.scale = s as u32;
            Ok(ProcMsg::Done)
        }
        (WaylandInterface::WlSurface, OPCODE_WL_SURFACE_SET_BUFFER_TRANSFORM) => {
            check_space!(msg.len(), 0, remaining_space);
            copy_msg(msg, dst);
            let t = parse_req_wl_surface_set_buffer_transform(msg)?;
            let WpExtra::WlSurface(ref mut surf) = &mut obj.extra else {
                return Err(tag!("Surface object has invalid extra type"));
            };
            if t < 0 {
                return Err(tag!("Buffer transform value should be nonnegative"));
            }
            surf.damage_history[0].attachment.transform = (t as u32)
                .try_into()
                .map_err(|()| "Not a valid transform type")?;
            Ok(ProcMsg::Done)
        }
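        /* Descriptive note: wl_surface.damage and wl_surface.damage_buffer (below)
         * differ only in coordinate system (surface-local vs. buffer coordinates).
         * Waypipe records both lists per commit and only resolves them against the
         * scale/transform/viewport state captured at commit time, since that state
         * may still change before the commit (see the commit handler below). */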
        (WaylandInterface::WlSurface, OPCODE_WL_SURFACE_DAMAGE) => {
            check_space!(msg.len(), 0, remaining_space);
            copy_msg(msg, dst);
            if glob.on_display_side {
                /* Only the buffers on the application side will be dirty */
                return Ok(ProcMsg::Done);
            }
            let WpExtra::WlSurface(ref mut surf) = &mut obj.extra else {
                return Err(tag!("Surface object has invalid extra type"));
            };
            let (x, y, width, height) = parse_req_wl_surface_damage(msg)?;
            if width <= 0 || height <= 0 {
                /* This doesn't appear to be a protocol error; still, filter out degenerate rectangles */
                error!(
                    "Received degenerate damage rectangle: x={} y={} w={} h={}",
                    x, y, width, height
                );
            } else {
                surf.damage_history[0].damage.push(WlRect {
                    x,
                    y,
                    width,
                    height,
                });
            }
            Ok(ProcMsg::Done)
        }
        (WaylandInterface::WlSurface, OPCODE_WL_SURFACE_DAMAGE_BUFFER) => {
            check_space!(msg.len(), 0, remaining_space);
            copy_msg(msg, dst);
            if glob.on_display_side {
                /* Only the buffers on the application side will be dirty */
                return Ok(ProcMsg::Done);
            }
            let WpExtra::WlSurface(ref mut surf) = &mut obj.extra else {
                return Err(tag!("Surface object has invalid extra type"));
            };
            let (x, y, width, height) = parse_req_wl_surface_damage_buffer(msg)?;
            if width <= 0 || height <= 0 {
                /* This doesn't appear to be a protocol error; still, filter out degenerate rectangles */
                error!(
                    "Received degenerate damage rectangle: x={} y={} w={} h={}",
                    x, y, width, height
                );
            } else {
                surf.damage_history[0].damage_buffer.push(WlRect {
                    x,
                    y,
                    width,
                    height,
                });
            }
            Ok(ProcMsg::Done)
        }
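        /* Descriptive note: the commit handler below is where buffer replication is
         * actually driven. On the application side, the damage accumulated above is
         * merged into the shadow file/DMABUF state so that diffs can be queued; on
         * the display side, a commit may need to wait for pending apply tasks and
         * handles explicit-sync acquire/release timeline points. */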
        (WaylandInterface::WlSurface, OPCODE_WL_SURFACE_COMMIT) => {
            check_space!(msg.len(), 0, remaining_space);
            let () = parse_req_wl_surface_commit(msg)?;
            let WpExtra::WlSurface(ref x) = obj.extra else {
                return Err(tag!("Surface object has invalid extra type"));
            };
            let opt_buf_id: Option<ObjId> = x.attached_buffer_id;
            if x.acquire_pt.is_some() != x.release_pt.is_some() {
                return Err(tag!("Acquire/release points must both be set"));
            }
            let has_timelines = x.acquire_pt.is_some() || x.release_pt.is_some();
            if from_channel {
                if let Some(buf_id) = opt_buf_id {
                    if let Some(buf) = glob.objects.get(&buf_id) {
                        if let WpExtra::WlBuffer(ref buf_data) = buf.extra {
                            let b = buf_data.sfd.borrow();
                            let apply_count = if let ShadowFdVariant::File(data) = &b.data {
                                data.pending_apply_tasks
                            } else if let ShadowFdVariant::Dmabuf(data) = &b.data {
                                /* Note: there must be a wait _somewhere_ when using timelines,
                                 * otherwise diffs may be received faster than they can be
                                 * processed. */
                                if has_timelines {
                                    0 /* do not wait for tasks; will signal acquire semaphore on last task completion */
                                } else {
                                    data.pending_apply_tasks
                                }
                            } else {
                                return Err(tag!("Attached buffer is not of file or dmabuf type"));
                            };
                            if apply_count > 0 {
                                return Ok(ProcMsg::WaitFor(b.remote_id));
                            }
                        }
                    }
                }
            }
            copy_msg(msg, dst);
            if glob.on_display_side {
                /* Only the buffers on the application side will be dirty */
                let obj = &mut glob.objects.get_mut(&object_id).unwrap();
                let WpExtra::WlSurface(ref mut x) = &mut obj.extra else {
                    return Err(tag!("Surface object has invalid extra type"));
                };
                if x.acquire_pt.is_some() != x.release_pt.is_some() {
                    return Err(tag!("Acquire/release points must both be set"));
                }
                /* These must be set every swap cycle, so take them */
                let mut acq_pt = None;
                let mut rel_pt = None;
                std::mem::swap(&mut x.acquire_pt, &mut acq_pt);
                std::mem::swap(&mut x.release_pt, &mut rel_pt);
                let opt_buf_id: Option<ObjId> = x.attached_buffer_id;
                if let Some(buf_id) = opt_buf_id {
                    if let Some(buf) = glob.objects.get(&buf_id) {
                        if let WpExtra::WlBuffer(ref buf_data) = buf.extra {
                            let mut sfd = buf_data.sfd.borrow_mut();
                            if let ShadowFdVariant::Dmabuf(ref mut y) = &mut sfd.data {
                                dmabuf_post_apply_task_operations(y)?;
                                if let Some((pt, timeline)) = acq_pt {
                                    y.acquires.push((pt, timeline));
                                }
                                if let Some((pt, timeline)) = rel_pt {
                                    let mut tsfd = timeline.borrow_mut();
                                    let ShadowFdVariant::Timeline(ref mut timeline_data) = tsfd.data else {
                                        panic!("Expected timeline sfd");
                                    };
                                    timeline_data.releases.push((pt, buf_data.sfd.clone()));
                                    let trid = tsfd.remote_id;
                                    drop(tsfd);
                                    y.releases.insert((trid, pt), timeline);
                                }
                                if y.pending_apply_tasks == 0 {
                                    /* All tasks completed before this was processed, so signal immediately */
                                    debug!("Tasks already done, signalling acquires");
                                    signal_timeline_acquires(&mut y.acquires)?;
                                }
                            }
                        }
                    }
                }
                return Ok(ProcMsg::Done);
            }
            let obj = &mut glob.objects.get_mut(&object_id).unwrap();
            let WpExtra::WlSurface(ref mut x) = &mut obj.extra else {
                return Err(tag!("Surface object has invalid extra type"));
            };
            if x.acquire_pt.is_some() != x.release_pt.is_some() {
                return Err(tag!("Acquire/release points must both be set"));
            }
            /* These must be set every swap cycle, so take them */
            let mut acq_pt = None;
            let mut rel_pt = None;
            std::mem::swap(&mut x.acquire_pt, &mut acq_pt);
            std::mem::swap(&mut x.release_pt, &mut rel_pt);
            let mut current_attachment = x.damage_history[0].attachment.clone();
            /* This shifts all entries of x.damage_history right by one */
            // mutable wl_surface reference dropped; now reading from wl_buffer and wl_surface objects
            let mut found_buffer = false;
            if let Some(buf_id) = opt_buf_id {
                /* Note: this is vulnerable to ABA-type problems when the buffer is
                 * destroyed and a new one with the same id is recreated; however,
                 * this is client misbehavior and Waypipe can silently ignore it,
                 * apply extra damage to the new buffer, and let the compositor
                 * handle it.
*/ if let Some(buf) = glob.objects.get(&buf_id) { let obj = &glob.objects.get(&object_id).unwrap(); let WpExtra::WlSurface(ref x) = &obj.extra else { unreachable!(); }; if let WpExtra::WlBuffer(ref buf_data) = buf.extra { let mut sfd = buf_data.sfd.borrow_mut(); let buffer_size: (i32, i32) = if let ShadowFdVariant::File(_) = sfd.data { let Some(shm_info) = buf_data.shm_info else { return Err(tag!( "Expected shm info for wl_buffer with File-type ShadowFd" )); }; (shm_info.width, shm_info.height) } else if let ShadowFdVariant::Dmabuf(ref y) = sfd.data { match y.buf { DmabufImpl::Vulkan(ref buf) => ( buf.width.try_into().unwrap(), buf.height.try_into().unwrap(), ), DmabufImpl::Gbm(ref buf) => ( buf.width.try_into().unwrap(), buf.height.try_into().unwrap(), ), } } else { return Err(tag!("Expected buffer shadowfd to be of file type")); }; current_attachment.buffer_uid = buf_data.unique_id; current_attachment.buffer_size = buffer_size; found_buffer = true; if let ShadowFdVariant::File(ref mut y) = &mut sfd.data { match &y.damage { Damage::Everything => {} Damage::Intervals(old) => { let dmg = get_damage_for_shm(buf_data, x, ¤t_attachment); y.damage = Damage::Intervals(union_damage(&old[..], &dmg[..], 128)); } } } else if let ShadowFdVariant::Dmabuf(ref mut y) = &mut sfd.data { match &y.damage { Damage::Everything => {} Damage::Intervals(old) => { let dmg = get_damage_for_dmabuf(y, x, ¤t_attachment); y.damage = Damage::Intervals(union_damage(&old[..], &dmg[..], 128)); } } if acq_pt.is_none() { y.using_implicit_sync = true; } /* todo: in theory these should still work with shm buffers */ if let Some((pt, timeline)) = acq_pt { y.acquires.push((pt, timeline)); } if let Some((pt, timeline)) = rel_pt { let mut tsfd = timeline.borrow_mut(); let ShadowFdVariant::Timeline(ref mut timeline_data) = tsfd.data else { panic!("Expected timeline sfd"); }; timeline_data.releases.push((pt, buf_data.sfd.clone())); let trid = tsfd.remote_id; drop(tsfd); y.releases.insert((trid, pt), timeline); } } else { unreachable!(); } } } else { debug!("Attached wl_buffer {} for wl_surface {} destroyed before commit: the result of this is not specified and compositors may do anything. 
Interpreting as null attachment.", buf_id, object_id); } } /* acquire mutable reference again */ let obj = &mut glob.objects.get_mut(&object_id).unwrap(); let WpExtra::WlSurface(ref mut x) = &mut obj.extra else { return Err(tag!("Surface object has invalid extra type")); }; if found_buffer { /* Have not yet updated properties for the current buffer attachment, so do so now */ x.damage_history[0].attachment = current_attachment.clone(); /* Rotate the damage log */ let mut fresh = DamageBatch { attachment: current_attachment.clone(), damage: Vec::new(), damage_buffer: Vec::new(), }; std::mem::swap(&mut x.damage_history[6], &mut fresh); x.damage_history.swap(5, 6); x.damage_history.swap(4, 5); x.damage_history.swap(3, 4); x.damage_history.swap(2, 3); x.damage_history.swap(1, 2); x.damage_history.swap(0, 1); } else { /* Null attachment (or buffer of unknown properties and size), wipe history */ current_attachment.buffer_uid = 0; current_attachment.buffer_size = (0, 0); for i in 0..7 { x.damage_history[i] = DamageBatch { attachment: current_attachment.clone(), damage: Vec::new(), damage_buffer: Vec::new(), }; } } Ok(ProcMsg::Done) } (WaylandInterface::WpViewporter, OPCODE_WP_VIEWPORTER_GET_VIEWPORT) => { check_space!(msg.len(), 0, remaining_space); copy_msg(msg, dst); let (new_id, surface) = parse_req_wp_viewporter_get_viewport(msg)?; insert_new_object( &mut glob.objects, new_id, WpObject { obj_type: WaylandInterface::WpViewport, extra: WpExtra::WpViewport(Box::new(ObjWpViewport { wl_surface: Some(surface), })), }, )?; Ok(ProcMsg::Done) } (WaylandInterface::WpViewport, OPCODE_WP_VIEWPORT_DESTROY) => { check_space!(msg.len(), 0, remaining_space); copy_msg(msg, dst); let WpExtra::WpViewport(ref mut viewport) = obj.extra else { return Err(tag!("Viewport object has invalid extra type")); }; let mut tmp = None; std::mem::swap(&mut tmp, &mut viewport.wl_surface); if let Some(surf_id) = tmp { if let Some(ref mut object) = glob.objects.get_mut(&surf_id) { let WpExtra::WlSurface(ref mut surface) = object.extra else { return Err(tag!("Surface object has invalid extra type")); }; surface.damage_history[0].attachment.viewport_src = None; surface.damage_history[0].attachment.viewport_dst = None; surface.viewport_id = None; } } Ok(ProcMsg::Done) } (WaylandInterface::WpViewport, OPCODE_WP_VIEWPORT_SET_DESTINATION) => { check_space!(msg.len(), 0, remaining_space); copy_msg(msg, dst); let (w, h) = parse_req_wp_viewport_set_destination(msg)?; let destination: Option<(i32, i32)> = if w == -1 && h == -1 { None } else if w <= 0 || h <= 0 { return Err(tag!("invalid wp_viewport destination ({},{})", w, h)); } else { Some((w, h)) }; let WpExtra::WpViewport(ref mut viewport) = obj.extra else { return Err(tag!("Viewport object has invalid extra type")); }; if let Some(surf_id) = viewport.wl_surface { if let Some(ref mut object) = glob.objects.get_mut(&surf_id) { let WpExtra::WlSurface(ref mut surface) = object.extra else { return Err(tag!("Surface object has invalid extra type")); }; surface.damage_history[0].attachment.viewport_dst = destination; } } Ok(ProcMsg::Done) } (WaylandInterface::WpViewport, OPCODE_WP_VIEWPORT_SET_SOURCE) => { check_space!(msg.len(), 0, remaining_space); copy_msg(msg, dst); let (x, y, w, h) = parse_req_wp_viewport_set_source(msg)?; let source: Option<(i32, i32, i32, i32)> = if x == -256 && y == -256 && w == -256 && h == -256 { None } else if x < 0 || y < 0 || w <= 0 || h <= 0 { return Err(tag!("invalid wp_viewport source ({},{},{},{})", x, y, w, h)); } else { Some((x, y, w, h)) }; let 
WpExtra::WpViewport(ref mut viewport) = obj.extra else { return Err(tag!("Viewport object has invalid extra type")); }; if let Some(surf_id) = viewport.wl_surface { if let Some(ref mut object) = glob.objects.get_mut(&surf_id) { let WpExtra::WlSurface(ref mut surface) = object.extra else { return Err(tag!("Surface object has invalid extra type")); }; surface.damage_history[0].attachment.viewport_src = source; } } Ok(ProcMsg::Done) } ( WaylandInterface::WpLinuxDrmSyncobjManagerV1, OPCODE_WP_LINUX_DRM_SYNCOBJ_MANAGER_V1_GET_SURFACE, ) => { check_space!(msg.len(), 0, remaining_space); copy_msg(msg, dst); let (new_id, surf_id) = parse_req_wp_linux_drm_syncobj_manager_v1_get_surface(msg)?; // Currently unclear how drm_syncobj interacts with wl_shm-type buffers; in theory, // with GPU access to CPU memory, it _should_ be possible. insert_new_object( &mut glob.objects, new_id, WpObject { obj_type: WaylandInterface::WpLinuxDrmSyncobjSurfaceV1, extra: WpExtra::WpDrmSyncobjSurface(Box::new(ObjWpDrmSyncobjSurface { // TODO: check if ABA problem applies surface: surf_id, })), }, )?; Ok(ProcMsg::Done) } ( WaylandInterface::WpLinuxDrmSyncobjManagerV1, OPCODE_WP_LINUX_DRM_SYNCOBJ_MANAGER_V1_IMPORT_TIMELINE, ) => { check_space!(msg.len(), 1, remaining_space); let new_id = parse_req_wp_linux_drm_syncobj_manager_v1_import_timeline(msg)?; let sfd = match transl { TranslationInfo::FromChannel((x, y)) => { let sfd = x.pop_front().ok_or_else(|| tag!("Missing sfd"))?; let mut b = sfd.borrow_mut(); if let ShadowFdVariant::Timeline(t) = &mut b.data { t.debug_wayland_id = new_id; } else { return Err(tag!("Expected timeline fd")); } drop(b); y.push_back(sfd.clone()); sfd } TranslationInfo::FromWayland((x, y)) => { let v = translate_timeline( x.pop_front().ok_or_else(|| tag!("Missing fd"))?, glob, new_id, )?; y.push(v.clone()); v } }; insert_new_object( &mut glob.objects, new_id, WpObject { obj_type: WaylandInterface::WpLinuxDrmSyncobjTimelineV1, extra: WpExtra::WpDrmSyncobjTimeline(Box::new(ObjWpDrmSyncobjTimeline { // TODO: check if ABA problem applies timeline: sfd, })), }, )?; copy_msg_tag_fd(msg, dst, from_channel)?; Ok(ProcMsg::Done) } ( WaylandInterface::WpLinuxDrmSyncobjSurfaceV1, OPCODE_WP_LINUX_DRM_SYNCOBJ_SURFACE_V1_SET_ACQUIRE_POINT, ) | ( WaylandInterface::WpLinuxDrmSyncobjSurfaceV1, OPCODE_WP_LINUX_DRM_SYNCOBJ_SURFACE_V1_SET_RELEASE_POINT, ) => { check_space!(msg.len(), 0, remaining_space); let acquire = mod_opcode == OPCODE_WP_LINUX_DRM_SYNCOBJ_SURFACE_V1_SET_ACQUIRE_POINT; let (timeline_id, pt_hi, pt_lo) = if acquire { parse_req_wp_linux_drm_syncobj_surface_v1_set_acquire_point(msg)? } else { parse_req_wp_linux_drm_syncobj_surface_v1_set_release_point(msg)? }; let pt = join_u64(pt_hi, pt_lo); let WpExtra::WpDrmSyncobjSurface(s) = &obj.extra else { panic!("Incorrect extra type"); }; let surf_id = s.surface; let timeline_obj = glob.objects.get(&timeline_id).ok_or("")?; let WpExtra::WpDrmSyncobjTimeline(t) = &timeline_obj.extra else { return Err(tag!("Incorrect extra type, expected timeline")); }; let sfd = t.timeline.clone(); // Currently unclear how drm_syncobj interacts with wl_shm-type buffers; in theory, // with GPU access to CPU memory, it _should_ be possible. 
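            // Descriptive note: the (point, timeline) pair stored below is consumed
            // by the next wl_surface.commit, which takes both fields each swap cycle.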
let surface_obj = glob.objects.get_mut(&surf_id).ok_or("")?; let WpExtra::WlSurface(ref mut surf) = &mut surface_obj.extra else { return Err(tag!("Incorrect extra type, expected surface")); }; if acquire { surf.acquire_pt = Some((pt, sfd)); } else { surf.release_pt = Some((pt, sfd)); } copy_msg(msg, dst); Ok(ProcMsg::Done) } (WaylandInterface::ZwpLinuxDmabufV1, OPCODE_ZWP_LINUX_DMABUF_V1_FORMAT) => { let format = parse_evt_zwp_linux_dmabuf_v1_format(msg)?; match glob.dmabuf_device { DmabufDevice::Unknown | DmabufDevice::Unavailable | DmabufDevice::VulkanSetup(_) => unreachable!(), DmabufDevice::Vulkan((_, ref vulk)) => { let mod_linear = 0; if !vulk.supports_format(format, mod_linear) { /* Drop message, format not supported in standard scenario (linear modifier); * and this event cannot communicate any modifiers. */ return Ok(ProcMsg::Done); } } DmabufDevice::Gbm(ref gbm) => { if gbm_supported_modifiers(gbm, format).is_empty() { /* Format not supported */ return Ok(ProcMsg::Done); } } } check_space!(msg.len(), 0, remaining_space); copy_msg(msg, dst); Ok(ProcMsg::Done) } (WaylandInterface::ZwpLinuxDmabufV1, OPCODE_ZWP_LINUX_DMABUF_V1_MODIFIER) => { let (format, mod_hi, mod_lo) = parse_evt_zwp_linux_dmabuf_v1_modifier(msg)?; let modifier = join_u64(mod_hi, mod_lo); if glob.on_display_side { /* Restrict the format/modifier pairs to what this instance of Waypipe supports */ if !dmabuf_dev_supports_format(&glob.dmabuf_device, format, modifier) { return Ok(ProcMsg::Done); } check_space!(msg.len(), 0, remaining_space); add_advertised_modifiers(&mut glob.advertised_modifiers, format, &[modifier]); copy_msg(msg, dst); Ok(ProcMsg::Done) } else { /* For each format, replace the modifiers listed with what this instance of Waypipe accepts */ let WpExtra::ZwpDmabuf(d) = &mut obj.extra else { panic!(); }; if d.formats_seen.contains(&format) { return Ok(ProcMsg::Done); } d.formats_seen.insert(format); let mods = dmabuf_dev_modifier_list(&glob.dmabuf_device, format); check_space!( mods.len() * length_evt_zwp_linux_dmabuf_v1_modifier(), 0, remaining_space ); add_advertised_modifiers(&mut glob.advertised_modifiers, format, mods); for new_mod in mods { let (nmod_hi, nmod_lo) = split_u64(*new_mod); write_evt_zwp_linux_dmabuf_v1_modifier( dst, object_id, format, nmod_hi, nmod_lo, ); } Ok(ProcMsg::Done) } } (WaylandInterface::ZwpLinuxDmabufV1, OPCODE_ZWP_LINUX_DMABUF_V1_CREATE_PARAMS) => { check_space!(msg.len(), 0, remaining_space); copy_msg(msg, dst); let params_id = parse_req_zwp_linux_dmabuf_v1_create_params(msg)?; insert_new_object( &mut glob.objects, params_id, WpObject { obj_type: WaylandInterface::ZwpLinuxBufferParamsV1, extra: WpExtra::ZwpDmabufParams(Box::new(ObjZwpLinuxDmabufParams { planes: Vec::new(), dmabuf: None, })), }, )?; Ok(ProcMsg::Done) } (WaylandInterface::ZwpLinuxBufferParamsV1, OPCODE_ZWP_LINUX_BUFFER_PARAMS_V1_ADD) => { /* This message will be consumed and recreated later once full information has arrived; * it can always be processed */ let (plane_idx, offset, stride, modifier_hi, modifier_lo) = parse_req_zwp_linux_buffer_params_v1_add(msg)?; let modifier: u64 = join_u64(modifier_hi, modifier_lo); let WpExtra::ZwpDmabufParams(ref mut p) = &mut obj.extra else { return Err(tag!("Incorrect extra type")); /* TODO: make a helper method for this, with standardized error ? */ }; match transl { TranslationInfo::FromChannel((_, _)) => { /* Do nothing: the OpenDMABUF message will be sent at time of ::create or ::create_immed ; * there is no point in dequeing it earlier. 
                     * Also, there will only be one message, even if there are
                     * multiple planes. */
                }
                TranslationInfo::FromWayland((x, _)) => {
                    let fd = x.pop_front().ok_or_else(|| tag!("Missing fd"))?;
                    p.planes.push(AddDmabufPlane {
                        fd,
                        plane_idx,
                        offset,
                        stride,
                        modifier,
                    });
                }
            };
            Ok(ProcMsg::Done)
        }
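        /* Descriptive note: ::create_immed (below) differs from ::create in that the
         * wl_buffer is created immediately rather than announced later by a ::created
         * event; Waypipe therefore registers the wl_buffer object right away here,
         * while for ::create it parks the ShadowFd in params.dmabuf until the
         * ::created event arrives. */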
        (
            WaylandInterface::ZwpLinuxBufferParamsV1,
            OPCODE_ZWP_LINUX_BUFFER_PARAMS_V1_CREATE_IMMED,
        ) => {
            let WpExtra::ZwpDmabufParams(ref mut params) = &mut obj.extra else {
                return Err(tag!("Incorrect extra type"));
                /* TODO: make a helper method for this, with standardized error? */
            };
            let (buffer_id, width, height, drm_format, flags) =
                parse_req_zwp_linux_buffer_params_v1_create_immed(msg)?;
            if width <= 0 || height <= 0 {
                return Err(tag!("DMABUF width or height should be positive"));
            }
            let (width, height) = (width as u32, height as u32);
            if flags != 0 {
                return Err(tag!("DMABUF flags not yet supported"));
            }
            let sfd: Rc<RefCell<ShadowFd>> = match transl {
                TranslationInfo::FromChannel((x, y)) => {
                    let sfd = &x.front().ok_or_else(|| tag!("Missing sfd"))?;
                    let mut b = sfd.borrow_mut();
                    let ShadowFdVariant::Dmabuf(ref mut data) = &mut b.data else {
                        return Err(tag!("Incorrect extra type"));
                    };
                    data.debug_wayland_id = buffer_id;
                    let estimated_msg_len = length_req_zwp_linux_buffer_params_v1_add()
                        * data.export_planes.len()
                        + length_req_zwp_linux_buffer_params_v1_create_immed();
                    check_space!(estimated_msg_len, data.export_planes.len(), remaining_space);
                    for plane in data.export_planes.iter() {
                        let (mod_hi, mod_lo) = split_u64(plane.modifier);
                        write_req_zwp_linux_buffer_params_v1_add(
                            dst,
                            object_id,
                            false,
                            plane.plane_idx,
                            plane.offset,
                            plane.stride,
                            mod_hi,
                            mod_lo,
                        );
                    }
                    copy_msg(msg, dst);
                    drop(b);
                    let sfd = x.pop_front().unwrap();
                    y.push_back(sfd.clone());
                    sfd
                }
                TranslationInfo::FromWayland((_, y)) => {
                    /* Finally process all the SFDs, introduce add messages */
                    let estimated_msg_len = length_req_zwp_linux_buffer_params_v1_add()
                        /* send add messages as-if the buffer were actually linear,
                         * for improved compatibility with older Waypipe */
                        + length_req_zwp_linux_buffer_params_v1_create_immed();
                    check_space!(estimated_msg_len, params.planes.len(), remaining_space);
                    let mut planes = Vec::new();
                    std::mem::swap(&mut params.planes, &mut planes);
                    let sfd = translate_dmabuf_fd(
                        width,
                        height,
                        drm_format,
                        planes,
                        &glob.opts,
                        &glob.dmabuf_device,
                        &mut glob.max_local_id,
                        &mut glob.map,
                        buffer_id,
                    )?;
                    y.push(sfd.clone());
                    /* Recreate all the 'add' messages at once. While this is technically
                     * pointless when Waypipe runs against itself, since those add messages
                     * get ignored and dropped, older versions may care. */
                    let mod_linear: u64 = 0;
                    /* Assuming format is single planar, instruct other end to create a
                     * linear buffer; older Waypipe will do this, and newer will ignore it. */
                    let (mod_hi, mod_lo) = split_u64(mod_linear);
                    let wayl_format = drm_to_wayland(drm_format);
                    // TODO: this will need to change for multiplanar or packed formats
                    let bpp = get_shm_format_layout(wayl_format).unwrap().planes[0].bpt;
                    write_req_zwp_linux_buffer_params_v1_add(
                        dst,
                        object_id,
                        true,
                        /* plane idx */ 0,
                        /* offset */ 0,
                        bpp.get().checked_mul(width).unwrap(), // stride as if tightly packed
                        mod_hi,
                        mod_lo,
                    );
                    write_req_zwp_linux_buffer_params_v1_create_immed(
                        dst,
                        object_id,
                        buffer_id,
                        width as i32,
                        height as i32,
                        drm_format,
                        flags,
                    );
                    sfd
                }
            };
            insert_new_object(
                &mut glob.objects,
                buffer_id,
                WpObject {
                    obj_type: WaylandInterface::WlBuffer,
                    extra: WpExtra::WlBuffer(Box::new(ObjWlBuffer {
                        sfd,
                        shm_info: None,
                        unique_id: glob.max_buffer_uid,
                    })),
                },
            )?;
            glob.max_buffer_uid += 1;
            Ok(ProcMsg::Done)
        }
        (WaylandInterface::ZwpLinuxBufferParamsV1, OPCODE_ZWP_LINUX_BUFFER_PARAMS_V1_CREATE) => {
            let WpExtra::ZwpDmabufParams(ref mut params) = &mut obj.extra else {
                return Err(tag!("Incorrect extra type"));
                /* TODO: make a helper method for this, with standardized error? */
            };
            if params.dmabuf.is_some() {
                return Err(tag!("Can only create dmabuf from params once"));
            }
            let (width, height, drm_format, flags) =
                parse_req_zwp_linux_buffer_params_v1_create(msg)?;
            if width <= 0 || height <= 0 {
                return Err(tag!("DMABUF width or height should be positive"));
            }
            let (width, height) = (width as u32, height as u32);
            let sfd: Rc<RefCell<ShadowFd>> = match transl {
                TranslationInfo::FromChannel((x, y)) => {
                    let sfd = &x.front().ok_or_else(|| tag!("Missing sfd"))?;
                    let mut b = sfd.borrow_mut();
                    let ShadowFdVariant::Dmabuf(ref mut data) = &mut b.data else {
                        return Err(tag!("Incorrect extra type"));
                    };
                    let estimated_msg_len = length_req_zwp_linux_buffer_params_v1_add()
                        * data.export_planes.len()
                        + length_req_zwp_linux_buffer_params_v1_create_immed();
                    check_space!(estimated_msg_len, data.export_planes.len(), remaining_space);
                    for plane in data.export_planes.iter() {
                        let (mod_hi, mod_lo) = split_u64(plane.modifier);
                        write_req_zwp_linux_buffer_params_v1_add(
                            dst,
                            object_id,
                            false,
                            plane.plane_idx,
                            plane.offset,
                            plane.stride,
                            mod_hi,
                            mod_lo,
                        );
                    }
                    copy_msg(msg, dst);
                    drop(b);
                    let sfd = x.pop_front().unwrap();
                    y.push_back(sfd.clone());
                    sfd
                }
                TranslationInfo::FromWayland((_, y)) => {
                    /* Finally process all the SFDs, introduce add messages */
                    let estimated_msg_len = length_req_zwp_linux_buffer_params_v1_add()
                        /* send add messages as-if the buffer were actually linear,
                         * for improved compatibility with older Waypipe */
                        + length_req_zwp_linux_buffer_params_v1_create_immed();
                    check_space!(estimated_msg_len, params.planes.len(), remaining_space);
                    let mut planes = Vec::new();
                    std::mem::swap(&mut params.planes, &mut planes);
                    let sfd = translate_dmabuf_fd(
                        width,
                        height,
                        drm_format,
                        planes,
                        &glob.opts,
                        &glob.dmabuf_device,
                        &mut glob.max_local_id,
                        &mut glob.map,
                        ObjId(0),
                    )?;
                    y.push(sfd.clone());
                    /* Recreate all the 'add' messages at once. While this is technically
                     * pointless when Waypipe runs against itself, since those add messages
                     * get ignored and dropped, older versions may care. */
                    let mod_linear: u64 = 0;
                    /* Assuming format is single planar, instruct other end to create a linear buffer;
                     * older Waypipe will do this, and newer will ignore it.
*/ let (mod_hi, mod_lo) = split_u64(mod_linear); let wayl_format = drm_to_wayland(drm_format); // TODO: update for multiplanar/complicated pixel formats let bpp = get_shm_format_layout(wayl_format).unwrap().planes[0].bpt; write_req_zwp_linux_buffer_params_v1_add( dst, object_id, true, /* plane idx */ 0, /* offset */ 0, bpp.get().checked_mul(width).unwrap(), // stride as if tightly packed mod_hi, mod_lo, ); write_req_zwp_linux_buffer_params_v1_create( dst, object_id, width as i32, height as i32, drm_format, flags, ); sfd } }; params.dmabuf = Some(sfd); Ok(ProcMsg::Done) } (WaylandInterface::ZwpLinuxBufferParamsV1, OPCODE_ZWP_LINUX_BUFFER_PARAMS_V1_FAILED) => { check_space!(msg.len(), 0, remaining_space); copy_msg(msg, dst); /* Discard reference to dmabuf now; compositor rejected it */ let WpExtra::ZwpDmabufParams(ref mut params) = &mut obj.extra else { return Err(tag!("Incorrect extra type")); }; params.dmabuf = None; Ok(ProcMsg::Done) } (WaylandInterface::ZwpLinuxBufferParamsV1, OPCODE_ZWP_LINUX_BUFFER_PARAMS_V1_CREATED) => { check_space!(msg.len(), 0, remaining_space); copy_msg(msg, dst); let obj_id = parse_evt_zwp_linux_buffer_params_v1_created(msg)?; let WpExtra::ZwpDmabufParams(ref mut params) = &mut obj.extra else { return Err(tag!("Incorrect extra type")); }; let mut osfd = None; std::mem::swap(&mut params.dmabuf, &mut osfd); let Some(sfd) = osfd else { return Err(tag!( "zwp_linux_buffer_params_v1::created event must follow ::create request and must occur at most once" )); }; let mut b = sfd.borrow_mut(); let ShadowFdVariant::Dmabuf(ref mut d) = b.data else { return Err(tag!("Incorrect shadowfd type")); }; d.debug_wayland_id = obj_id; drop(b); insert_new_object( &mut glob.objects, obj_id, WpObject { obj_type: WaylandInterface::WlBuffer, extra: WpExtra::WlBuffer(Box::new(ObjWlBuffer { sfd, shm_info: None, unique_id: glob.max_buffer_uid, })), }, )?; glob.max_buffer_uid += 1; Ok(ProcMsg::Done) } (WaylandInterface::ZwpLinuxDmabufV1, OPCODE_ZWP_LINUX_DMABUF_V1_GET_DEFAULT_FEEDBACK) | (WaylandInterface::ZwpLinuxDmabufV1, OPCODE_ZWP_LINUX_DMABUF_V1_GET_SURFACE_FEEDBACK) => { check_space!(msg.len(), 0, remaining_space); copy_msg(msg, dst); let feedback_id = if mod_opcode == OPCODE_ZWP_LINUX_DMABUF_V1_GET_DEFAULT_FEEDBACK { parse_req_zwp_linux_dmabuf_v1_get_default_feedback(msg)? } else { let (f, _surface) = parse_req_zwp_linux_dmabuf_v1_get_surface_feedback(msg)?; f }; insert_new_object( &mut glob.objects, feedback_id, WpObject { obj_type: WaylandInterface::ZwpLinuxDmabufFeedbackV1, extra: WpExtra::ZwpDmabufFeedback(Box::new(ObjZwpLinuxDmabufFeedback { input_format_table: None, output_format_table: None, main_device: None, tranches: Vec::new(), processed: false, queued_format_table: None, current: DmabufTranche { flags: 0, values: Vec::new(), indices: Vec::new(), device: 0, }, })), }, )?; Ok(ProcMsg::Done) } ( WaylandInterface::ZwpLinuxDmabufFeedbackV1, OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_FORMAT_TABLE, ) => { let table_size = parse_evt_zwp_linux_dmabuf_feedback_v1_format_table(msg)?; let WpExtra::ZwpDmabufFeedback(ref mut f) = &mut obj.extra else { return Err(tag!("Expected object to have dmabuf_feedback data")); }; /* Load the format table */ let mut table_data = vec![0; table_size as usize]; match transl { TranslationInfo::FromChannel((x, _)) => { let sfd = x.front().ok_or_else(|| tag!("Missing sfd"))?; if file_has_pending_apply_tasks(sfd)? 
{ let b = sfd.borrow(); return Ok(ProcMsg::WaitFor(b.remote_id)); } let sfd = x.pop_front().unwrap(); let b = sfd.borrow_mut(); let ShadowFdVariant::File(ref f) = b.data else { return Err(tag!("Received non-File ShadowFd for format table")); }; if table_size as usize != f.buffer_size { return Err(tag!( "Wrong buffer size for format table: got {}, expected {}", f.buffer_size, table_size )); } copy_from_mapping(&mut table_data, &f.core.as_ref().unwrap().mapping, 0); } TranslationInfo::FromWayland((x, _)) => { let fd = x.pop_front().ok_or_else(|| tag!("Missing fd"))?; let mapping = ExternalMapping::new(&fd, table_size as usize, true)?; copy_from_mapping(&mut table_data, &mapping, 0); } }; f.input_format_table = Some(parse_format_table(&table_data)); Ok(ProcMsg::Done) } ( WaylandInterface::ZwpLinuxDmabufFeedbackV1, OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_MAIN_DEVICE, ) => { let WpExtra::ZwpDmabufFeedback(ref mut feedback) = &mut obj.extra else { return Err(tag!("Unexpected object extra type")); }; let dev = parse_evt_zwp_linux_dmabuf_feedback_v1_main_device(msg)?; let main_device = parse_dev_array(dev) .ok_or_else(|| tag!("Unexpected size for dev_t: {}", dev.len()))?; feedback.main_device = Some(main_device); if glob.on_display_side && matches!(glob.dmabuf_device, DmabufDevice::VulkanSetup(_)) { complete_dmabuf_setup(&glob.opts, Some(main_device), &mut glob.dmabuf_device)?; } Ok(ProcMsg::Done) } ( WaylandInterface::ZwpLinuxDmabufFeedbackV1, OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_TRANCHE_FLAGS, ) => { let WpExtra::ZwpDmabufFeedback(ref mut feedback) = &mut obj.extra else { return Err(tag!("Unexpected object extra type")); }; let flags = parse_evt_zwp_linux_dmabuf_feedback_v1_tranche_flags(msg)?; feedback.current.flags = flags; Ok(ProcMsg::Done) } ( WaylandInterface::ZwpLinuxDmabufFeedbackV1, OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_TRANCHE_TARGET_DEVICE, ) => { let WpExtra::ZwpDmabufFeedback(ref mut feedback) = &mut obj.extra else { return Err(tag!("Unexpected object extra type")); }; let dev = parse_evt_zwp_linux_dmabuf_feedback_v1_tranche_target_device(msg)?; feedback.current.device = parse_dev_array(dev) .ok_or_else(|| tag!("Unexpected size for dev_t: {}", dev.len()))?; Ok(ProcMsg::Done) } ( WaylandInterface::ZwpLinuxDmabufFeedbackV1, OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_TRANCHE_FORMATS, ) => { let WpExtra::ZwpDmabufFeedback(ref mut feedback) = &mut obj.extra else { return Err(tag!("Unexpected object extra type")); }; let fmts = parse_evt_zwp_linux_dmabuf_feedback_v1_tranche_formats(msg)?; if fmts.len() % 2 != 0 { return Err(tag!("Format array not of even length")); } if fmts.is_empty() { return Ok(ProcMsg::Done); } let Some(ref table) = feedback.input_format_table else { return Err(tag!( "No format table provided before tranche_formats was received" )); }; for chunk in fmts.chunks_exact(2) { let idx = u16::from_le_bytes(chunk.try_into().unwrap()); let Some(pair) = table.get(idx as usize) else { return Err(tag!( "Tranche format index {} out of range for format table of length {}", idx, table.len() )); }; /* On display side, immediately filter out unsupported format-modifier pairs. 
*/ if glob.on_display_side && !dmabuf_dev_supports_format(&glob.dmabuf_device, pair.0, pair.1) { continue; } feedback.current.values.push(*pair); } Ok(ProcMsg::Done) } ( WaylandInterface::ZwpLinuxDmabufFeedbackV1, OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_TRANCHE_DONE, ) => { let WpExtra::ZwpDmabufFeedback(ref mut feedback) = &mut obj.extra else { return Err(tag!("Unexpected object extra type")); }; feedback.tranches.push(DmabufTranche { flags: 0, values: Vec::new(), indices: Vec::new(), device: 0, }); std::mem::swap(feedback.tranches.last_mut().unwrap(), &mut feedback.current); Ok(ProcMsg::Done) } (WaylandInterface::ZwpLinuxDmabufFeedbackV1, OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_DONE) => { let WpExtra::ZwpDmabufFeedback(ref mut feedback) = &mut obj.extra else { return Err(tag!("Unexpected object extra type")); }; let dev_len = write_dev_array(0).len(); if !feedback.processed { /* Process feedback now, to determine how much space is needed for it. */ feedback.processed = true; if !glob.on_display_side { rebuild_format_table(&glob.dmabuf_device, feedback)?; } let new_table = process_dmabuf_feedback(feedback)?; if Some(&new_table) != feedback.output_format_table.as_ref() { // Table has changed; send new fd let local_fd = memfd::memfd_create( c"/waypipe", memfd::MemFdCreateFlag::MFD_CLOEXEC | memfd::MemFdCreateFlag::MFD_ALLOW_SEALING, ) .map_err(|x| tag!("Failed to create memfd: {:?}", x))?; let sz: u32 = new_table.len().try_into().unwrap(); assert!(sz > 0); unistd::ftruncate(&local_fd, sz as libc::off_t) .map_err(|x| tag!("Failed to resize memfd: {:?}", x))?; let mapping: ExternalMapping = ExternalMapping::new(&local_fd, sz as usize, false).map_err(|x| { tag!("Failed to mmap fd when building new format table: {}", x) })?; copy_onto_mapping(&new_table[..], &mapping, 0); let sfd = translate_shm_fd( local_fd, sz as usize, &mut glob.map, &mut glob.max_local_id, glob.on_display_side, true, !glob.on_display_side, )?; feedback.output_format_table = Some(new_table); feedback.queued_format_table = Some((sfd, sz)); } } let mut space_est = 0; if feedback.queued_format_table.is_some() { space_est += length_evt_zwp_linux_dmabuf_feedback_v1_format_table(); } space_est += length_evt_zwp_linux_dmabuf_feedback_v1_main_device(dev_len); space_est += length_evt_zwp_linux_dmabuf_feedback_v1_done(); /* note: this is now using the _converted_ tranche lengths and counts */ for t in feedback.tranches.iter() { space_est += length_evt_zwp_linux_dmabuf_feedback_v1_tranche_done() + length_evt_zwp_linux_dmabuf_feedback_v1_tranche_flags() + length_evt_zwp_linux_dmabuf_feedback_v1_tranche_target_device(dev_len) + length_evt_zwp_linux_dmabuf_feedback_v1_tranche_formats(t.indices.len()); } check_space!( space_est, if feedback.queued_format_table.is_some() { 1 } else { 0 }, remaining_space ); let mut queued_table = None; std::mem::swap(&mut queued_table, &mut feedback.queued_format_table); if let Some((sfd, sz)) = queued_table { match transl { TranslationInfo::FromChannel((_, y)) => { write_evt_zwp_linux_dmabuf_feedback_v1_format_table( dst, object_id, false, sz, ); y.push_back(sfd); } TranslationInfo::FromWayland((_, y)) => { write_evt_zwp_linux_dmabuf_feedback_v1_format_table( dst, object_id, true, sz, ); y.push(sfd); } } } /* Write messages, filtering as necessary. 
*/ let dev_id = dmabuf_dev_get_id(&glob.dmabuf_device); write_evt_zwp_linux_dmabuf_feedback_v1_main_device( dst, object_id, write_dev_array(dev_id).as_slice(), ); for t in feedback.tranches.iter() { for f in t.values.iter() { add_advertised_modifiers(&mut glob.advertised_modifiers, f.0, &[f.1]); } write_evt_zwp_linux_dmabuf_feedback_v1_tranche_target_device( dst, object_id, write_dev_array(dev_id).as_slice(), ); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_flags(dst, object_id, t.flags); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_formats( dst, object_id, &t.indices[..], ); write_evt_zwp_linux_dmabuf_feedback_v1_tranche_done(dst, object_id); } write_evt_zwp_linux_dmabuf_feedback_v1_done(dst, object_id); /* Reset state for next batch */ feedback.processed = false; feedback.tranches = Vec::new(); // note: `feedback.current` _should_ already be reset in _tranche_done feedback.current = DmabufTranche { flags: 0, values: Vec::new(), indices: Vec::new(), device: 0, }; Ok(ProcMsg::Done) } (WaylandInterface::WlKeyboard, OPCODE_WL_KEYBOARD_KEYMAP) => { check_space!(msg.len(), 1, remaining_space); let (_, keymap_size) = parse_evt_wl_keyboard_keymap(msg)?; if let Some(wait) = translate_or_wait_for_fixed_file(transl, glob, keymap_size)? { return Ok(wait); } copy_msg_tag_fd(msg, dst, from_channel)?; Ok(ProcMsg::Done) } ( WaylandInterface::WpImageDescriptionInfoV1, OPCODE_WP_IMAGE_DESCRIPTION_INFO_V1_ICC_FILE, ) => { check_space!(msg.len(), 1, remaining_space); let file_size = parse_evt_wp_image_description_info_v1_icc_file(msg)?; if let Some(wait) = translate_or_wait_for_fixed_file(transl, glob, file_size)? { return Ok(wait); } copy_msg_tag_fd(msg, dst, from_channel)?; Ok(ProcMsg::Done) } ( WaylandInterface::WpImageDescriptionCreatorIccV1, OPCODE_WP_IMAGE_DESCRIPTION_CREATOR_ICC_V1_SET_ICC_FILE, ) => { check_space!(msg.len(), 1, remaining_space); // TODO: only damage the portion of the file between 'offset' and 'offset+length'. // Or, to save remote resources: reduce the offset to zero and have Waypipe // convert between coordinates. // Also: mmap is not required to be supported, only seek+read, so this file may // need different handling anyway. let (offset, length) = parse_req_wp_image_description_creator_icc_v1_set_icc_file(msg)?; if length == 0 { return Err(tag!("File length for wp_image_description_creator_icc_v1::set_icc_file should not be zero")); } let Some(file_sz) = offset.checked_add(length) else { return Err(tag!("File offset+length={}+{} overflow for wp_image_description_creator_icc_v1::set_icc_file", offset, length)); }; match transl { TranslationInfo::FromChannel((x, y)) => { let sfd = &x.front().ok_or_else(|| tag!("Missing fd"))?; let rid = sfd.borrow().remote_id; if file_has_pending_apply_tasks(sfd)? 
{ return Ok(ProcMsg::WaitFor(rid)); } y.push_back(x.pop_front().unwrap()); } TranslationInfo::FromWayland((x, y)) => { let v = translate_shm_fd( x.pop_front().ok_or_else(|| tag!("Missing fd"))?, file_sz.try_into().unwrap(), &mut glob.map, &mut glob.max_local_id, true, true, false, )?; y.push(v); } }; copy_msg_tag_fd(msg, dst, from_channel)?; Ok(ProcMsg::Done) } ( WaylandInterface::ZwlrGammaControlManagerV1, OPCODE_ZWLR_GAMMA_CONTROL_MANAGER_V1_GET_GAMMA_CONTROL, ) => { check_space!(msg.len(), 0, remaining_space); let (gamma, _output) = parse_req_zwlr_gamma_control_manager_v1_get_gamma_control(msg)?; insert_new_object( &mut glob.objects, gamma, WpObject { obj_type: WaylandInterface::ZwlrGammaControlV1, extra: WpExtra::ZwlrGammaControl(Box::new(ObjZwlrGammaControl { gamma_size: None, })), }, )?; copy_msg(msg, dst); Ok(ProcMsg::Done) } (WaylandInterface::ZwlrGammaControlV1, OPCODE_ZWLR_GAMMA_CONTROL_V1_GAMMA_SIZE) => { check_space!(msg.len(), 0, remaining_space); let WpExtra::ZwlrGammaControl(ref mut gamma) = obj.extra else { unreachable!(); }; let gamma_size = parse_evt_zwlr_gamma_control_v1_gamma_size(msg)?; if gamma_size > u32::MAX / 6 { return Err(tag!( "Gamma size too large (ramps would use >u32::MAX bytes)" )); } gamma.gamma_size = Some(gamma_size); copy_msg(msg, dst); Ok(ProcMsg::Done) } (WaylandInterface::ZwlrGammaControlV1, OPCODE_ZWLR_GAMMA_CONTROL_V1_SET_GAMMA) => { check_space!(msg.len(), 1, remaining_space); let WpExtra::ZwlrGammaControl(ref gamma) = obj.extra else { unreachable!(); }; let Some(gamma_size) = gamma.gamma_size else { return Err(tag!( "zwlr_gamma_control_v1::set_gamma called before gamma size provided" )); }; if let Some(wait) = translate_or_wait_for_fixed_file(transl, glob, gamma_size.checked_mul(6).unwrap())? { return Ok(wait); } copy_msg_tag_fd(msg, dst, from_channel)?; Ok(ProcMsg::Done) } ( WaylandInterface::ZwpPrimarySelectionSourceV1, OPCODE_ZWP_PRIMARY_SELECTION_SOURCE_V1_SEND, ) | ( WaylandInterface::ZwpPrimarySelectionOfferV1, OPCODE_ZWP_PRIMARY_SELECTION_OFFER_V1_RECEIVE, ) | (WaylandInterface::GtkPrimarySelectionSource, OPCODE_GTK_PRIMARY_SELECTION_SOURCE_SEND) | ( WaylandInterface::GtkPrimarySelectionOffer, OPCODE_GTK_PRIMARY_SELECTION_OFFER_RECEIVE, ) | (WaylandInterface::ExtDataControlSourceV1, OPCODE_EXT_DATA_CONTROL_SOURCE_V1_SEND) | (WaylandInterface::ExtDataControlOfferV1, OPCODE_EXT_DATA_CONTROL_OFFER_V1_RECEIVE) | (WaylandInterface::ZwlrDataControlSourceV1, OPCODE_ZWLR_DATA_CONTROL_SOURCE_V1_SEND) | (WaylandInterface::ZwlrDataControlOfferV1, OPCODE_ZWLR_DATA_CONTROL_OFFER_V1_RECEIVE) | (WaylandInterface::WlDataSource, OPCODE_WL_DATA_SOURCE_SEND) | (WaylandInterface::WlDataOffer, OPCODE_WL_DATA_OFFER_RECEIVE) => { check_space!(msg.len(), 1, remaining_space); /* Message format is mimetype + fd, and the mimetype doesn't matter * for Waypipe */ match transl { TranslationInfo::FromChannel((x, y)) => { let sfd = x.pop_front().ok_or_else(|| tag!("Missing sfd"))?; y.push_back(sfd); } TranslationInfo::FromWayland((x, y)) => { let v = translate_pipe_fd( x.pop_front().ok_or_else(|| tag!("Missing fd"))?, glob, true, // reading from channel )?; y.push(v); } }; copy_msg_tag_fd(msg, dst, from_channel)?; Ok(ProcMsg::Done) } ( WaylandInterface::ZwlrScreencopyManagerV1, OPCODE_ZWLR_SCREENCOPY_MANAGER_V1_CAPTURE_OUTPUT_REGION, ) => { check_space!(msg.len(), 0, remaining_space); let (frame, _overlay_cursor, _output, _x, _y, _w, _h) = parse_req_zwlr_screencopy_manager_v1_capture_output_region(msg)?; insert_new_object( &mut glob.objects, frame, WpObject { obj_type: 
WaylandInterface::ZwlrScreencopyFrameV1, extra: WpExtra::ZwlrScreencopyFrame(Box::new(ObjZwlrScreencopyFrame { buffer: None, })), }, )?; copy_msg(msg, dst); Ok(ProcMsg::Done) } ( WaylandInterface::ZwlrScreencopyManagerV1, OPCODE_ZWLR_SCREENCOPY_MANAGER_V1_CAPTURE_OUTPUT, ) => { check_space!(msg.len(), 0, remaining_space); let (frame, _overlay_cursor, _output) = parse_req_zwlr_screencopy_manager_v1_capture_output(msg)?; insert_new_object( &mut glob.objects, frame, WpObject { obj_type: WaylandInterface::ZwlrScreencopyFrameV1, extra: WpExtra::ZwlrScreencopyFrame(Box::new(ObjZwlrScreencopyFrame { buffer: None, })), }, )?; copy_msg(msg, dst); Ok(ProcMsg::Done) } ( WaylandInterface::ExtImageCopyCaptureManagerV1, OPCODE_EXT_IMAGE_COPY_CAPTURE_MANAGER_V1_CREATE_SESSION, ) => { check_space!(msg.len(), 0, remaining_space); let (frame, _output, _options) = parse_req_ext_image_copy_capture_manager_v1_create_session(msg)?; insert_new_object( &mut glob.objects, frame, WpObject { obj_type: WaylandInterface::ExtImageCopyCaptureSessionV1, extra: WpExtra::ExtImageCopyCaptureSession(Box::new( ObjExtImageCopyCaptureSession { dmabuf_device: None, dmabuf_formats: Vec::new(), last_format_mod_list: Vec::new(), frame_list: Vec::new(), }, )), }, )?; copy_msg(msg, dst); Ok(ProcMsg::Done) } ( WaylandInterface::ExtImageCopyCaptureCursorSessionV1, OPCODE_EXT_IMAGE_COPY_CAPTURE_CURSOR_SESSION_V1_GET_CAPTURE_SESSION, ) => { check_space!(msg.len(), 0, remaining_space); let frame = parse_req_ext_image_copy_capture_cursor_session_v1_get_capture_session(msg)?; insert_new_object( &mut glob.objects, frame, WpObject { obj_type: WaylandInterface::ExtImageCopyCaptureSessionV1, extra: WpExtra::ExtImageCopyCaptureSession(Box::new( ObjExtImageCopyCaptureSession { dmabuf_device: None, dmabuf_formats: Vec::new(), last_format_mod_list: Vec::new(), frame_list: Vec::new(), }, )), }, )?; copy_msg(msg, dst); Ok(ProcMsg::Done) } ( WaylandInterface::ExtImageCopyCaptureSessionV1, OPCODE_EXT_IMAGE_COPY_CAPTURE_SESSION_V1_DMABUF_DEVICE, ) => { check_space!(msg.len(), 0, remaining_space); if glob.opts.no_gpu { return Ok(ProcMsg::Done); } let WpExtra::ExtImageCopyCaptureSession(ref mut session) = obj.extra else { unreachable!(); }; let dev = parse_evt_ext_image_copy_capture_session_v1_dmabuf_device(msg)?; let main_device = parse_dev_array(dev) .ok_or_else(|| tag!("Unexpected size for dev_t: {}", dev.len()))?; session.dmabuf_device = Some(main_device); /* Drop message; will be recreated when ::done arrives */ Ok(ProcMsg::Done) } ( WaylandInterface::ExtImageCopyCaptureSessionV1, OPCODE_EXT_IMAGE_COPY_CAPTURE_SESSION_V1_DMABUF_FORMAT, ) => { check_space!(msg.len(), 0, remaining_space); if glob.opts.no_gpu { return Ok(ProcMsg::Done); } let WpExtra::ExtImageCopyCaptureSession(ref mut session) = obj.extra else { unreachable!(); }; let (fmt, modifiers) = parse_evt_ext_image_copy_capture_session_v1_dmabuf_format(msg)?; let mut mod_list = Vec::new(); for mb in modifiers.chunks_exact(std::mem::size_of::()) { let m = u64::from_le_bytes(mb.try_into().unwrap()); mod_list.push(m); } if !mod_list.is_empty() { session.dmabuf_formats.push((fmt, mod_list)); } /* Drop message; will be recreated when ::done arrives */ Ok(ProcMsg::Done) } ( WaylandInterface::ExtImageCopyCaptureSessionV1, OPCODE_EXT_IMAGE_COPY_CAPTURE_SESSION_V1_DONE, ) => { /* Replay messages */ let WpExtra::ExtImageCopyCaptureSession(ref mut session) = obj.extra else { unreachable!(); }; let mut space_needed = length_evt_ext_image_copy_capture_session_v1_done(); if let Some(main_device) = 
session.dmabuf_device { let dev: Option = if matches!( glob.dmabuf_device, DmabufDevice::Unknown | DmabufDevice::VulkanSetup(_) ) { /* Identify which device to use, if needed */ if glob.on_display_side { Some(main_device) } else if let Some(node) = &glob.opts.drm_node { Some(get_dev_for_drm_node_path(node)?) } else { None } } else { None }; match glob.dmabuf_device { DmabufDevice::Unknown => { glob.dmabuf_device = try_setup_dmabuf_instance_full(&glob.opts, dev)?; } DmabufDevice::VulkanSetup(_) => { complete_dmabuf_setup(&glob.opts, dev, &mut glob.dmabuf_device)?; } _ => (), } if matches!(glob.dmabuf_device, DmabufDevice::Unavailable) { return Err(tag!( "DMABUF device specified, but DMABUFs are not supported" )); } let current_device_id = dmabuf_dev_get_id(&glob.dmabuf_device); if glob.on_display_side && main_device != current_device_id { // todo: handle this case return Err(tag!("image copy device did not match existing device; multiple devices are not yet supported")); } space_needed += length_evt_ext_image_copy_capture_session_v1_dmabuf_device( write_dev_array(current_device_id).len(), ); for (fmt, mod_list) in session.dmabuf_formats.iter() { let new_list_len = if glob.on_display_side { mod_list .iter() .filter(|m| dmabuf_dev_supports_format(&glob.dmabuf_device, *fmt, **m)) .count() } else { dmabuf_dev_modifier_list(&glob.dmabuf_device, *fmt).len() }; if new_list_len == 0 { continue; } space_needed += length_evt_ext_image_copy_capture_session_v1_dmabuf_format( new_list_len * std::mem::size_of::(), ); } } check_space!(space_needed, 0, remaining_space); if session.dmabuf_device.is_some() { let current_device_id = dmabuf_dev_get_id(&glob.dmabuf_device); write_evt_ext_image_copy_capture_session_v1_dmabuf_device( dst, object_id, write_dev_array(current_device_id).as_slice(), ); for (fmt, mod_list) in session.dmabuf_formats.iter() { let mut output = Vec::new(); if glob.on_display_side { /* Filter list of available modifiers */ for m in mod_list.iter() { if dmabuf_dev_supports_format(&glob.dmabuf_device, *fmt, *m) { output.extend_from_slice(&u64::to_le_bytes(*m)); add_advertised_modifiers( &mut glob.advertised_modifiers, *fmt, &[*m], ); } } } else { /* Replace modifier list with what is available locally */ let local_mods = dmabuf_dev_modifier_list(&glob.dmabuf_device, *fmt); add_advertised_modifiers(&mut glob.advertised_modifiers, *fmt, local_mods); for m in local_mods { output.extend_from_slice(&u64::to_le_bytes(*m)); } } if output.is_empty() { continue; } write_evt_ext_image_copy_capture_session_v1_dmabuf_format( dst, object_id, *fmt, &output, ); } let mut format_mod_list = Vec::new(); for (fmt, mods) in session.dmabuf_formats.iter() { for m in mods.iter() { format_mod_list.push((*fmt, *m)) } } format_mod_list.sort_unstable(); session.last_format_mod_list = format_mod_list; if glob.on_display_side { for (fmt, mods) in session.dmabuf_formats.iter() { glob.screencopy_restrictions.insert(*fmt, mods.clone()); } } } else { session.last_format_mod_list = Vec::new(); } write_evt_ext_image_copy_capture_session_v1_done(dst, object_id); /* Reset state: formats do not persist between ::done calls, because * ext_image_copy_capture_session_v1 has no event to remove a format; * presumably the dmabuf_device behaves similarly. 
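* (The next batch of dmabuf_device/dmabuf_format events will rebuild this state from scratch before the following ::done replay, so nothing stale can survive the reset below.)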
*/ session.dmabuf_device = None; session.dmabuf_formats = Vec::new(); Ok(ProcMsg::Done) } ( WaylandInterface::ExtImageCopyCaptureSessionV1, OPCODE_EXT_IMAGE_COPY_CAPTURE_SESSION_V1_CREATE_FRAME, ) => { check_space!(msg.len(), 0, remaining_space); let WpExtra::ExtImageCopyCaptureSession(ref session) = obj.extra else { unreachable!(); }; let supported_modifiers = session.last_format_mod_list.clone(); let frame = parse_req_ext_image_copy_capture_session_v1_create_frame(msg)?; insert_new_object( &mut glob.objects, frame, WpObject { obj_type: WaylandInterface::ExtImageCopyCaptureFrameV1, extra: WpExtra::ExtImageCopyCaptureFrame(Box::new( ObjExtImageCopyCaptureFrame { buffer: None, supported_modifiers, capture_session: Some(object_id), }, )), }, )?; copy_msg(msg, dst); Ok(ProcMsg::Done) } ( WaylandInterface::ExtImageCopyCaptureSessionV1, OPCODE_EXT_IMAGE_COPY_CAPTURE_SESSION_V1_DESTROY, ) => { check_space!(msg.len(), 0, remaining_space); copy_msg(msg, dst); let WpExtra::ExtImageCopyCaptureSession(ref mut session) = obj.extra else { unreachable!(); }; let mut frames = Vec::new(); std::mem::swap(&mut session.frame_list, &mut frames); let last_format_mod_list = session.last_format_mod_list.clone(); for frame_id in frames { let object = glob.objects.get_mut(&frame_id).unwrap(); let WpExtra::ExtImageCopyCaptureFrame(ref mut frame) = object.extra else { unreachable!(); }; frame.capture_session = None; frame.supported_modifiers = last_format_mod_list.clone(); } Ok(ProcMsg::Done) } (WaylandInterface::ZwlrScreencopyFrameV1, OPCODE_ZWLR_SCREENCOPY_FRAME_V1_COPY) => { check_space!(msg.len(), 0, remaining_space); copy_msg(msg, dst); let buffer = parse_req_zwlr_screencopy_frame_v1_copy(msg)?; if buffer.0 == 0 { return Err(tag!( "zwlr_screencopy_frame_v1::copy requires non-null object" )); } let buf_obj = glob.objects.get(&buffer).ok_or_else(|| { tag!( "Failed to lookup buffer (id {}) for zwlr_screencopy_frame_v1::copy", buffer ) })?; let WpExtra::WlBuffer(ref d) = buf_obj.extra else { return Err(tag!("Expected wl_buffer object")); }; let buf_info = (d.sfd.clone(), d.shm_info); let object = glob.objects.get_mut(&object_id).unwrap(); let WpExtra::ZwlrScreencopyFrame(ref mut frame) = object.extra else { unreachable!(); }; frame.buffer = Some(buf_info); Ok(ProcMsg::Done) } ( WaylandInterface::ZwlrScreencopyFrameV1, OPCODE_ZWLR_SCREENCOPY_FRAME_V1_COPY_WITH_DAMAGE, ) => { check_space!(msg.len(), 0, remaining_space); copy_msg(msg, dst); let buffer = parse_req_zwlr_screencopy_frame_v1_copy_with_damage(msg)?; if buffer.0 == 0 { return Err(tag!( "zwlr_screencopy_frame_v1::copy requires non-null object" )); } let buf_obj = glob.objects.get(&buffer).ok_or_else(|| { tag!( "Failed to lookup buffer (id {}) for zwlr_screencopy_frame_v1::copy", buffer ) })?; let WpExtra::WlBuffer(ref d) = buf_obj.extra else { return Err(tag!("Expected wl_buffer object")); }; let buf_info = (d.sfd.clone(), d.shm_info); let object = glob.objects.get_mut(&object_id).unwrap(); let WpExtra::ZwlrScreencopyFrame(ref mut frame) = object.extra else { unreachable!(); }; frame.buffer = Some(buf_info); Ok(ProcMsg::Done) } ( WaylandInterface::ExtImageCopyCaptureFrameV1, OPCODE_EXT_IMAGE_COPY_CAPTURE_FRAME_V1_DESTROY, ) => { check_space!(msg.len(), 0, remaining_space); copy_msg(msg, dst); let WpExtra::ExtImageCopyCaptureFrame(ref mut frame) = obj.extra else { unreachable!(); }; let mut session: Option = None; std::mem::swap(&mut frame.capture_session, &mut session); if let Some(session_id) = session { let object = 
glob.objects.get_mut(&session_id).unwrap(); let WpExtra::ExtImageCopyCaptureSession(ref mut session) = object.extra else { unreachable!(); }; if let Some(i) = session.frame_list.iter().position(|x| *x == object_id) { session.frame_list.remove(i); } } Ok(ProcMsg::Done) } ( WaylandInterface::ExtImageCopyCaptureFrameV1, OPCODE_EXT_IMAGE_COPY_CAPTURE_FRAME_V1_ATTACH_BUFFER, ) => { check_space!(msg.len(), 0, remaining_space); copy_msg(msg, dst); let buffer = parse_req_ext_image_copy_capture_frame_v1_attach_buffer(msg)?; if buffer.0 == 0 { return Err(tag!( "ext_image_copy_capture_frame_v1::attach_buffer requires non-null object" )); } let buf_obj = glob.objects.get(&buffer).ok_or_else(|| { tag!( "Failed to lookup buffer (id {}) for ext_image_copy_capture_frame_v1::attach_buffer", buffer ) })?; let WpExtra::WlBuffer(ref d) = buf_obj.extra else { return Err(tag!("Expected wl_buffer object")); }; let buf_info = (d.sfd.clone(), d.shm_info); let object = glob.objects.get_mut(&object_id).unwrap(); let WpExtra::ExtImageCopyCaptureFrame(ref mut frame) = object.extra else { unreachable!(); }; frame.buffer = Some(buf_info); Ok(ProcMsg::Done) } ( WaylandInterface::ExtImageCopyCaptureFrameV1, OPCODE_EXT_IMAGE_COPY_CAPTURE_FRAME_V1_CAPTURE, ) => { check_space!(msg.len(), 0, remaining_space); copy_msg(msg, dst); if glob.on_display_side { let WpExtra::ExtImageCopyCaptureFrame(ref frame) = obj.extra else { unreachable!(); }; /* Warn if the buffer being submitted does not have a format/modifier pair in the * buffer constraints list; Waypipe currently does not have a mechanism to reliably * ensure this occurs. */ let fmtmod = if let Some(ref buffer) = frame.buffer { let b = buffer.0.borrow(); if let ShadowFdVariant::Dmabuf(ref d) = b.data { Some((d.drm_format, d.drm_modifier)) } else { None } } else { None }; if let Some(pair) = fmtmod { let err = if let Some(session_id) = frame.capture_session { let object = glob.objects.get_mut(&session_id).unwrap(); let WpExtra::ExtImageCopyCaptureSession(ref session) = object.extra else { unreachable!(); }; session.last_format_mod_list.binary_search(&pair).is_err() } else { frame.supported_modifiers.binary_search(&pair).is_err() }; if err { error!("A wl_buffer Waypipe created with (format, modifier) = (0x{:08x},0x{:016x}) \ is being submitted to ext_image_copy_capture_frame_v1#{}, whose parent ext_image_copy_capture_session_v1's \ most recent update did not include the (format, modifier) combination. 
This may be a known Waypipe issue.", pair.0, pair.1, object_id); } } } Ok(ProcMsg::Done) } (WaylandInterface::ZwlrScreencopyFrameV1, OPCODE_ZWLR_SCREENCOPY_FRAME_V1_READY) => { check_space!(msg.len(), 0, remaining_space); let WpExtra::ZwlrScreencopyFrame(ref mut frame) = obj.extra else { unreachable!(); }; let Some((ref sfd, ref shm_info)) = frame.buffer else { return Err(tag!( "zwlr_screencopy_frame_v1::ready is missing buffer information" )); }; if !glob.on_display_side { let b = sfd.borrow(); let apply_count = if let ShadowFdVariant::File(data) = &b.data { data.pending_apply_tasks } else if let ShadowFdVariant::Dmabuf(data) = &b.data { /* Assuming no timelines for screencopy-frame-v1 */ data.pending_apply_tasks } else { return Err(tag!("Attached buffer is not of file or dmabuf type")); }; if apply_count > 0 { return Ok(ProcMsg::WaitFor(b.remote_id)); } } let (tv_sec_hi, tv_sec_lo, tv_nsec) = parse_evt_zwlr_screencopy_frame_v1_ready(msg)?; let (new_sec_hi, new_sec_lo, new_nsec) = translate_timestamp( tv_sec_hi, tv_sec_lo, tv_nsec, libc::CLOCK_MONOTONIC as u32, glob.on_display_side, )?; write_evt_zwlr_screencopy_frame_v1_ready( dst, object_id, new_sec_hi, new_sec_lo, new_nsec, ); if !glob.on_display_side { let mut sfd = sfd.borrow_mut(); if let ShadowFdVariant::Dmabuf(ref mut y) = &mut sfd.data { dmabuf_post_apply_task_operations(y)?; } } if glob.on_display_side { /* Mark damage */ let mut sfd = sfd.borrow_mut(); if let ShadowFdVariant::File(ref mut y) = &mut sfd.data { let damage_interval = damage_for_entire_buffer(shm_info.as_ref().unwrap()); match &y.damage { Damage::Everything => {} Damage::Intervals(old) => { let dmg = &[damage_interval]; y.damage = Damage::Intervals(union_damage(&old[..], &dmg[..], 128)); } } } else if let ShadowFdVariant::Dmabuf(ref mut y) = &mut sfd.data { y.damage = Damage::Everything; y.using_implicit_sync = true; } else { return Err(tag!("Expected buffer shadowfd to be of file type")); } } frame.buffer = None; Ok(ProcMsg::Done) } ( WaylandInterface::ExtImageCopyCaptureFrameV1, OPCODE_EXT_IMAGE_COPY_CAPTURE_FRAME_V1_PRESENTATION_TIME, ) => { check_space!(msg.len(), 0, remaining_space); let (tv_sec_hi, tv_sec_lo, tv_nsec) = parse_evt_ext_image_copy_capture_frame_v1_presentation_time(msg)?; let (new_sec_hi, new_sec_lo, new_nsec) = translate_timestamp( tv_sec_hi, tv_sec_lo, tv_nsec, libc::CLOCK_MONOTONIC as u32, glob.on_display_side, )?; write_evt_ext_image_copy_capture_frame_v1_presentation_time( dst, object_id, new_sec_hi, new_sec_lo, new_nsec, ); Ok(ProcMsg::Done) } ( WaylandInterface::ExtImageCopyCaptureFrameV1, OPCODE_EXT_IMAGE_COPY_CAPTURE_FRAME_V1_READY, ) => { // TODO: deduplicate with wlr_screencopy_frame_v1::ready check_space!(msg.len(), 0, remaining_space); let WpExtra::ExtImageCopyCaptureFrame(ref mut frame) = obj.extra else { unreachable!(); }; let Some((ref sfd, ref shm_info)) = frame.buffer else { return Err(tag!( "ext_image_copy_capture_frame_v1::ready is missing buffer information" )); }; if !glob.on_display_side { let b = sfd.borrow(); let apply_count = if let ShadowFdVariant::File(data) = &b.data { data.pending_apply_tasks } else if let ShadowFdVariant::Dmabuf(data) = &b.data { /* Assuming no timelines for screencopy-frame-v1 */ data.pending_apply_tasks } else { return Err(tag!("Attached buffer is not of file or dmabuf type")); }; if apply_count > 0 { return Ok(ProcMsg::WaitFor(b.remote_id)); } } copy_msg(msg, dst); if !glob.on_display_side { let mut sfd = sfd.borrow_mut(); if let ShadowFdVariant::Dmabuf(ref mut y) = &mut sfd.data {
dmabuf_post_apply_task_operations(y)?; } } if glob.on_display_side { /* Mark damage */ let mut sfd = sfd.borrow_mut(); if let ShadowFdVariant::File(ref mut y) = &mut sfd.data { let damage_interval = damage_for_entire_buffer(shm_info.as_ref().unwrap()); match &y.damage { Damage::Everything => {} Damage::Intervals(old) => { let dmg = &[damage_interval]; y.damage = Damage::Intervals(union_damage(&old[..], &dmg[..], 128)); } } } else if let ShadowFdVariant::Dmabuf(ref mut y) = &mut sfd.data { y.using_implicit_sync = true; y.damage = Damage::Everything; } else { return Err(tag!("Expected buffer shadowfd to be of file type")); } } frame.buffer = None; Ok(ProcMsg::Done) } (WaylandInterface::ZwlrScreencopyFrameV1, OPCODE_ZWLR_SCREENCOPY_FRAME_V1_FAILED) => { check_space!(msg.len(), 0, remaining_space); copy_msg(msg, dst); let WpExtra::ZwlrScreencopyFrame(ref mut frame) = obj.extra else { unreachable!(); }; frame.buffer = None; Ok(ProcMsg::Done) } ( WaylandInterface::ExtImageCopyCaptureFrameV1, OPCODE_EXT_IMAGE_COPY_CAPTURE_FRAME_V1_FAILED, ) => { check_space!(msg.len(), 0, remaining_space); copy_msg(msg, dst); let WpExtra::ExtImageCopyCaptureFrame(ref mut frame) = obj.extra else { unreachable!(); }; frame.buffer = None; Ok(ProcMsg::Done) } (WaylandInterface::XdgToplevel, OPCODE_XDG_TOPLEVEL_SET_TITLE) => { let title = parse_req_xdg_toplevel_set_title(msg)?; let prefix = glob.opts.title_prefix.as_bytes(); let space_needed = length_req_xdg_toplevel_set_title(title.len() + prefix.len()); check_space!(space_needed, 0, remaining_space); // TODO: direct manipulation is appropriate here, because the output // already provides the necessary space let mut concat: Vec = Vec::new(); concat.extend_from_slice(prefix); concat.extend_from_slice(title); write_req_xdg_toplevel_set_title(dst, object_id, &concat); Ok(ProcMsg::Done) } (WaylandInterface::XdgToplevelIconV1, OPCODE_XDG_TOPLEVEL_ICON_V1_ADD_BUFFER) => { check_space!(msg.len(), 0, remaining_space); let (buffer_id, _scale) = parse_req_xdg_toplevel_icon_v1_add_buffer(msg)?; let Some(buffer) = glob.objects.get(&buffer_id) else { return Err(tag!( "Provided buffer is null, was never created, or is not tracked" )); }; let WpExtra::WlBuffer(ref extra) = buffer.extra else { return Err(tag!("Expected wl_buffer object")); }; /* This request makes the current buffer contents available to the compositor. 
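* On the display side, forwarding therefore waits below until all pending updates to the file have been applied; on the application side, the entire buffer is instead marked damaged so that its current contents get replicated.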
*/ if glob.on_display_side { let b = extra.sfd.borrow(); // Only wl_shm buffers are allowed for xdg_toplevel_icon_v1::add_buffer let apply_count = if let ShadowFdVariant::File(data) = &b.data { data.pending_apply_tasks } else { return Err(tag!("Attached buffer shadowfd is not of file type")); }; if apply_count > 0 { return Ok(ProcMsg::WaitFor(b.remote_id)); } } copy_msg(msg, dst); if !glob.on_display_side { /* Mark entire buffer as damaged */ let mut sfd = extra.sfd.borrow_mut(); if let ShadowFdVariant::File(ref mut y) = &mut sfd.data { let damage_interval = damage_for_entire_buffer(extra.shm_info.as_ref().unwrap()); match &y.damage { Damage::Everything => {} Damage::Intervals(old) => { let dmg = &[damage_interval]; y.damage = Damage::Intervals(union_damage(&old[..], &dmg[..], 128)); } } } else { return Err(tag!("Expected buffer shadowfd to be of file type")); } } Ok(ProcMsg::Done) } (WaylandInterface::WpPresentationFeedback, OPCODE_WP_PRESENTATION_FEEDBACK_PRESENTED) => { check_space!(msg.len(), 0, remaining_space); let (tv_sec_hi, tv_sec_lo, tv_nsec, refresh, seq_hi, seq_lo, flags) = parse_evt_wp_presentation_feedback_presented(msg)?; let clock_id = glob.presentation_clock.unwrap_or_else(|| { error!("wp_presentation_feedback::presented timestamp was received before any wp_presentation::clock event,\ so Waypipe assumes CLOCK_MONOTONIC was used and may misconvert times if wrong."); libc::CLOCK_MONOTONIC as u32 } ); let (new_sec_hi, new_sec_lo, new_nsec) = translate_timestamp( tv_sec_hi, tv_sec_lo, tv_nsec, clock_id, glob.on_display_side, )?; write_evt_wp_presentation_feedback_presented( dst, object_id, new_sec_hi, new_sec_lo, new_nsec, refresh, seq_hi, seq_lo, flags, ); Ok(ProcMsg::Done) } (WaylandInterface::WpPresentation, OPCODE_WP_PRESENTATION_CLOCK_ID) => { check_space!(msg.len(), 0, remaining_space); let clock_id = parse_evt_wp_presentation_clock_id(msg)?; if let Some(old) = glob.presentation_clock { if clock_id != old { return Err(tag!( "The wp_presentation clock was already set to {} and cannot be changed to {}.", old, clock_id )); } } // note: in theory, `waypipe server` could choose a preferred clock of its // own (like CLOCK_REALTIME or CLOCK_TAI) to reduce the number of clock // conversions. 
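// note: the clock id is latched below so that later wp_presentation_feedback::presented // and wp_commit_timer_v1::set_timestamp messages can be converted between the two sides' // clocks via translate_timestamp.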
glob.presentation_clock = Some(clock_id); copy_msg(msg, dst); Ok(ProcMsg::Done) } (WaylandInterface::WpCommitTimerV1, OPCODE_WP_COMMIT_TIMER_V1_SET_TIMESTAMP) => { check_space!(msg.len(), 0, remaining_space); let (tv_sec_hi, tv_sec_lo, tv_nsec) = parse_req_wp_commit_timer_v1_set_timestamp(msg)?; let clock_id = glob.presentation_clock.unwrap_or_else(|| { error!("wp_commit_timer_v1::set_timestamp was received before wp_presentation::clock,\ so Waypipe assumes CLOCK_MONOTONIC was used and may misconvert times if wrong."); libc::CLOCK_MONOTONIC as u32 } ); let (new_sec_hi, new_sec_lo, new_nsec) = translate_timestamp( tv_sec_hi, tv_sec_lo, tv_nsec, clock_id, !glob.on_display_side, )?; write_req_wp_commit_timer_v1_set_timestamp( dst, object_id, new_sec_hi, new_sec_lo, new_nsec, ); Ok(ProcMsg::Done) } (WaylandInterface::WlRegistry, OPCODE_WL_REGISTRY_GLOBAL) => { // filter out events let (name, intf, mut version) = parse_evt_wl_registry_global(msg)?; /* Note: nothing that gets filtered out should ever be removed, * so it is not necessary to track interface name codes for global_remove */ let blacklist: &'static [&'static [u8]] = &[ b"wl_drm", /* very old/deprecated API: linux-dmabuf-v4 replaces */ b"wp_drm_lease_device_v1", b"zwlr_export_dmabuf_manager_v1", b"zwp_linux_explicit_synchronization_v1", // outdated, uses fences instead of timelines b"wp_security_context_manager_v1", // sends socket listen fd over network ]; /* Limit the version of the following protocols to what Waypipe has compiled * in, because future versions are very likely to require additional handling. */ let intf_code = match intf { WL_SHM => Some(WaylandInterface::WlShm), ZWP_LINUX_DMABUF_V1 => Some(WaylandInterface::ZwpLinuxDmabufV1), WP_LINUX_DRM_SYNCOBJ_MANAGER_V1 => { Some(WaylandInterface::WpLinuxDrmSyncobjManagerV1) } EXT_IMAGE_COPY_CAPTURE_MANAGER_V1 => { Some(WaylandInterface::ExtImageCopyCaptureManagerV1) } ZWLR_SCREENCOPY_MANAGER_V1 => Some(WaylandInterface::ZwlrScreencopyManagerV1), ZWLR_EXPORT_DMABUF_MANAGER_V1 => Some(WaylandInterface::ZwlrExportDmabufManagerV1), _ => None, }; if let Some(code) = intf_code { let max_v = INTERFACE_TABLE[code as usize].version; if version > max_v { debug!( "Downgrading {} version from {} to {}", EscapeWlName(intf), version, max_v ); version = max_v; } } if blacklist.contains(&intf) { /* Drop interface entirely */ debug!("Dropping interface: {}", EscapeWlName(intf)); return Ok(ProcMsg::Done); } if intf == ZWP_LINUX_DMABUF_V1 { /* waypipe-server side: Filter out dmabuf support if the target device (or _any_ device) * is not available; this must be done now to prevent advertising this global when * DMABUF support is not actually available. */ match glob.dmabuf_device { DmabufDevice::Unavailable => (), /* case handled later */ DmabufDevice::Vulkan(_) | DmabufDevice::Gbm(_) => (), DmabufDevice::VulkanSetup(_) => (), DmabufDevice::Unknown => { if !glob.on_display_side { let dev = if let Some(node) = &glob.opts.drm_node { /* Pick specified device */ Some(get_dev_for_drm_node_path(node)?) 
} else { /* Pick best device */ None }; glob.dmabuf_device = try_setup_dmabuf_instance_light(&glob.opts, dev)?; assert!(!matches!(glob.dmabuf_device, DmabufDevice::Unknown)); } } } if matches!(glob.dmabuf_device, DmabufDevice::Unavailable) { debug!( "No DMABUF handling device available: Dropping interface: {}", EscapeWlName(intf) ); return Ok(ProcMsg::Done); } /* note: with versions < 4, the compositor has no way to specify the preferred * drm node, so it may be chosen arbitrarily */ } if intf == WP_LINUX_DRM_SYNCOBJ_MANAGER_V1 { match &glob.dmabuf_device { DmabufDevice::Unknown => { /* store globals for replay later */ let WpExtra::WlRegistry(ref mut reg) = obj.extra else { return Err(tag!("Unexpected extra type for wl_registry")); }; reg.syncobj_manager_replay.push((name, version)); } DmabufDevice::Gbm(_) | DmabufDevice::Unavailable => { /* drop, not supported */ debug!( "No timeline semaphore handling device available: Dropping interface: {}", EscapeWlName(intf) ); return Ok(ProcMsg::Done); } DmabufDevice::VulkanSetup(vkinst) => { let dev = if let Some(node) = &glob.opts.drm_node { Some(get_dev_for_drm_node_path(node)?) } else { None }; if !vkinst.device_supports_timeline_import_export(dev) { debug!( "Timeline semaphore import/export will not be supported: Dropping interface: {}", EscapeWlName(intf) ); return Ok(ProcMsg::Done); } } DmabufDevice::Vulkan((_, vulk)) => { /* Keep if timeline semaphores supported */ if !vulk.supports_timeline_import_export() { debug!( "Timeline semaphore import/export is not supported: Dropping interface: {}", EscapeWlName(intf) ); return Ok(ProcMsg::Done); } } } } let mut space = msg.len(); if intf == ZWP_LINUX_DMABUF_V1 { let WpExtra::WlRegistry(ref mut reg) = obj.extra else { return Err(tag!("Unexpected extra type for wl_registry")); }; if !reg.syncobj_manager_replay.is_empty() { space += length_evt_wl_registry_global(WP_LINUX_DRM_SYNCOBJ_MANAGER_V1.len()) * reg.syncobj_manager_replay.len(); } } check_space!(space, 0, remaining_space); write_evt_wl_registry_global(dst, object_id, name, intf, version); if intf == ZWP_LINUX_DMABUF_V1 { /* Replay syncobj manager events once it is certain Vulkan is available. */ let WpExtra::WlRegistry(ref mut reg) = obj.extra else { return Err(tag!("Unexpected extra type for wl_registry")); }; let timelines_supported = match &glob.dmabuf_device { /* Assume true since, once set up, Waypipe will most likely support drm syncobj iff the compositor does */ DmabufDevice::VulkanSetup(_) => true, DmabufDevice::Unknown => { if glob.on_display_side { true } else { unreachable!(); } } DmabufDevice::Unavailable => unreachable!(), DmabufDevice::Gbm(_) => false, DmabufDevice::Vulkan((_, vulk)) => vulk.supports_timeline_import_export(), }; if !timelines_supported && !reg.syncobj_manager_replay.is_empty() { debug!( "Timeline semaphore import/export is not supported, not replaying {} advertisements for {}", reg.syncobj_manager_replay.len(), EscapeWlName(WP_LINUX_DRM_SYNCOBJ_MANAGER_V1) ); } for (sync_name, sync_version) in reg.syncobj_manager_replay.drain(..)
{ if timelines_supported { write_evt_wl_registry_global( dst, object_id, sync_name, WP_LINUX_DRM_SYNCOBJ_MANAGER_V1, sync_version, ); } } } Ok(ProcMsg::Done) } (WaylandInterface::WlRegistry, OPCODE_WL_REGISTRY_BIND) => { // filter out events let (_id, name, version, oid) = parse_req_wl_registry_bind(msg)?; if name == ZWP_LINUX_DMABUF_V1 { let light_setup = version >= 4 && glob.on_display_side; if matches!(glob.dmabuf_device, DmabufDevice::Unknown) { let dev = if let Some(node) = &glob.opts.drm_node { /* Pick specified device */ Some(get_dev_for_drm_node_path(node)?) } else { /* Pick best device */ None }; if light_setup { /* In this case, device will be provided later through dmabuf-feedback * main_device event */ glob.dmabuf_device = try_setup_dmabuf_instance_light(&glob.opts, dev)?; } else { debug!( "Client bound zwp_linux_dmabuf_v1 at version {} older than 4, using best-or-specified drm node", version ); glob.dmabuf_device = try_setup_dmabuf_instance_full(&glob.opts, dev)?; } assert!(!matches!(glob.dmabuf_device, DmabufDevice::Unknown)); } if !light_setup && matches!(glob.dmabuf_device, DmabufDevice::VulkanSetup(_)) { let dev = if let Some(node) = &glob.opts.drm_node { Some(get_dev_for_drm_node_path(node)?) } else { None }; complete_dmabuf_setup(&glob.opts, dev, &mut glob.dmabuf_device)?; } if matches!(glob.dmabuf_device, DmabufDevice::Unavailable) { return Err(tag!("Failed to set up a device to handle DMABUFS")); } check_space!(msg.len(), 0, remaining_space); copy_msg(msg, dst); insert_new_object( &mut glob.objects, oid, WpObject { obj_type: WaylandInterface::ZwpLinuxDmabufV1, extra: WpExtra::ZwpDmabuf(Box::new(ObjZwpLinuxDmabuf { formats_seen: BTreeSet::new(), })), }, )?; return Ok(ProcMsg::Done); } /* create new global objects */ default_proc_way_msg(msg, dst, meth, is_req, object_id, glob) } _ => { // Default handling: copy message, and create IDs default_proc_way_msg(msg, dst, meth, is_req, object_id, glob) } } } /** Log the _changed_ messages in the given buffer whose corresponding object is * currently in the provided set of objects. * * To avoid issues resulting from object deletion, this should be called * promptly after processing a message and producing `output_msgs`. */ pub fn log_way_msg_output( orig_msg: &[u8], mut output_msgs: &[u8], objects: &BTreeMap, is_req: bool, ) { if !log::log_enabled!(log::Level::Debug) { return; } if output_msgs.is_empty() { debug!("Dropped last {}", if is_req { "request" } else { "event" },); return; } if orig_msg[0..4] == output_msgs[0..4] && orig_msg[8..] == output_msgs[8..] { /* No change, only message was copied as is, modulo fd tagging changes */ return; } /* Output messages should be well formed. 
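* Each message starts with a 32-bit object id, followed by a 32-bit word whose upper 16 bits * give the total message length in bytes; the loop below steps through the buffer using * exactly this framing.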
*/ while !output_msgs.is_empty() { let object_id = ObjId(u32::from_le_bytes(output_msgs[0..4].try_into().unwrap())); let header2 = u32::from_le_bytes(output_msgs[4..8].try_into().unwrap()); let length = (header2 >> 16) as usize; let opcode = (header2 & ((1 << 11) - 1)) as usize; let msg = &output_msgs[..length]; output_msgs = &output_msgs[length..]; let Some(obj) = objects.get(&object_id) else { /* Unknown messages will always be copied, so no point in logging them again */ continue; }; let opt_meth: Option<&WaylandMethod> = if is_req { INTERFACE_TABLE[obj.obj_type as usize].reqs.get(opcode) } else { INTERFACE_TABLE[obj.obj_type as usize].evts.get(opcode) }; let Some(meth) = opt_meth else { /* Method out of range, will be copied */ continue; }; debug!( "Modified {}: {}#{}.{}({})", if is_req { "request" } else { "event" }, INTERFACE_TABLE[obj.obj_type as usize].name, object_id, meth.name, MethodArguments { meth, msg } ); } } /** Construct the Wayland object map with the initial object, wl_display#1 */ pub fn setup_object_map() -> BTreeMap<ObjId, WpObject> { let mut map = BTreeMap::new(); map.insert( ObjId(1), WpObject { obj_type: WaylandInterface::WlDisplay, extra: WpExtra::None, }, ); map } waypipe-v0.10.5/src/util.rs000066400000000000000000001133661507071711100155640ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-3.0-or-later */ /*! Misc utilities and types */ use crate::platform::*; use crate::wayland_gen::WlShmFormat; use core::num::NonZeroU32; use nix::{fcntl, unistd}; use std::fmt; use std::fmt::{Display, Formatter}; use std::fs::ReadDir; use std::os::fd::{AsRawFd, FromRawFd, OwnedFd}; use std::os::unix::ffi::OsStrExt; use std::str::FromStr; /** Like `format!`, but prepends file and line number. * * Example: `tag!("Failed to X: {} {}", arg1, arg2)` */ #[macro_export] macro_rules! tag { ($x:tt) => { format!(concat!(std::file!(), ":", std::line!(), ": ", $x)) }; ($x:tt, $($arg:tt)+) => { format!(concat!(std::file!(), ":", std::line!(), ": ", $x), $($arg)+) }; } /* Connection header constants. The original header layout is: * * 0: set iff reconnectable * 1: set iff update for reconnectable connection * 2: no dmabuf support (can be ignored as we lazily initialize) * 3-6: ignored * 7: fixed to 1 * 8-10: compression type * 11-13: video type * 14-15: ignored * 16-30: version field, original Waypipe only accepts 0x1 * 31: fixed to 0 * * Waypipe's protocol does not use any interesting features early on; * the application side always starts by sending a Protocol-type message. * * To allow for a "silent" version upgrade, where a new version is * only used if acknowledged, the version field will now be interpreted as * follows: * * 3-6: lower bits of version * 16-23: upper bits of version * * All versions from 16 (=1) to 31 should be able to interoperate * with original Waypipe.
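* * As an illustrative sketch (these helper names are hypothetical, not part of the wire code), * the split version field could be packed and unpacked as: * fn pack_version(v: u32) -> u32 { ((v & 0xf) << 3) | (((v >> 4) & 0xff) << 16) } * fn unpack_version(h: u32) -> u32 { ((h >> 3) & 0xf) | (((h >> 16) & 0xff) << 4) } * so that version 0x10 sets only bit 16, which original Waypipe reads as version field 0x1.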
*/ pub const MIN_PROTOCOL_VERSION: u32 = 0x10; pub const WAYPIPE_PROTOCOL_VERSION: u32 = 0x11; pub const CONN_FIXED_BIT: u32 = 0x1 << 7; pub const CONN_UNSET_BIT: u32 = 0x1 << 31; pub const _CONN_RECONNECTABLE_BIT: u32 = 0x1 << 0; pub const _CONN_UPDATE_BIT: u32 = 0x1 << 1; pub const CONN_NO_DMABUF_SUPPORT: u32 = 0x1 << 2; pub const CONN_COMPRESSION_MASK: u32 = 0x7 << 8; pub const CONN_NO_COMPRESSION: u32 = 0x1 << 8; pub const CONN_LZ4_COMPRESSION: u32 = 0x2 << 8; pub const CONN_ZSTD_COMPRESSION: u32 = 0x3 << 8; pub const CONN_VIDEO_MASK: u32 = 0x7 << 11; pub const CONN_NO_VIDEO: u32 = 0x1 << 11; pub const CONN_VP9_VIDEO: u32 = 0x2 << 11; pub const CONN_H264_VIDEO: u32 = 0x3 << 11; pub const CONN_AV1_VIDEO: u32 = 0x4 << 11; #[derive(Debug, PartialEq, Eq, Copy, Clone)] pub enum WmsgType { /** Send over a set of Wayland protocol messages. Preceding messages * must create or update file descriptors and inject file descriptors * to the queue. */ Protocol = 0, // header uint32_t, then protocol messages /** Inject file descriptors into the receiver's buffer, for use by the * protocol parser. */ InjectRIDs = 1, // header uint32_t, then fds /** Create a new shared memory file of the given size. * Format: \ref wmsg_open_file */ OpenFile = 2, /** Provide a new (larger) size for the file buffer. * Format: \ref wmsg_open_file */ ExtendFile = 3, /** Create a new DMABUF with the given size and \ref dmabuf_slice_data. * Format: \ref wmsg_open_dmabuf */ OpenDMABUF = 4, /** Fill the region of the file with the following data. The data * should be compressed according to the global compression option. * Format: \ref wmsg_buffer_fill */ BufferFill = 5, /** Apply a diff to the file. The diff contents may be compressed. * Format: \ref wmsg_buffer_diff */ BufferDiff = 6, /** Create a new pipe, with the given remote R/W status */ OpenIRPipe = 7, // wmsg_basic OpenIWPipe = 8, // wmsg_basic OpenRWPipe = 9, // wmsg_basic /** Transfer data to the pipe */ PipeTransfer = 10, // wmsg_basic /** Shutdown the read end of the pipe that waypipe uses. */ PipeShutdownR = 11, // wmsg_basic /** Shutdown the write end of the pipe that waypipe uses. */ PipeShutdownW = 12, // wmsg_basic /** Create a DMABUF (with following data parameters) that will be used * to produce/consume video frames. Format: \ref wmsg_open_dmabuf. * Deprecated and may be disabled/removed in the future. */ OpenDMAVidSrc = 13, OpenDMAVidDst = 14, /** Send a packet of video data to the destination */ SendDMAVidPacket = 15, // wmsg_basic /** Acknowledge that a given number of messages has been received, so * that the sender of those messages no longer needs to store them * for replaying in case of reconnection. Format: \ref wmsg_ack */ AckNblocks = 16, /** When restarting a connection, indicate the number of the message * which will be sent next. Format: \ref wmsg_restart */ Restart = 17, // wmsg_restart /** When the remote program is closing. Format: only the header */ Close = 18, /** Create a DMABUF (with following data parameters) that will be used * to produce/consume video frames. Format: \ref wmsg_open_dmavid */ OpenDMAVidSrcV2 = 19, OpenDMAVidDstV2 = 20, /* Create a DRM syncobj timeline semaphore. Format: header, u64-le initial point */ OpenTimeline = 21, /* Signal the indicated DRM syncobj timeline semaphore. Format: header, u64-le initial point. */ SignalTimeline = 22, /* Sent as the first message from the client to reveal the negotiated wire protocol * version.
Format: header, u32 version field */ Version = 23, } pub fn align(x: usize, y: usize) -> usize { y * ((x.checked_add(y - 1).unwrap()) / y) } pub fn align4(x: usize) -> usize { align(x, 4) } pub fn cat2x4(x: [u8; 4], y: [u8; 4]) -> [u8; 8] { [x[0], x[1], x[2], x[3], y[0], y[1], y[2], y[3]] } pub fn cat3x4(x: [u8; 4], y: [u8; 4], z: [u8; 4]) -> [u8; 12] { [ x[0], x[1], x[2], x[3], y[0], y[1], y[2], y[3], z[0], z[1], z[2], z[3], ] } pub fn cat4x4(x: [u8; 4], y: [u8; 4], z: [u8; 4], a: [u8; 4]) -> [u8; 16] { [ x[0], x[1], x[2], x[3], y[0], y[1], y[2], y[3], z[0], z[1], z[2], z[3], a[0], a[1], a[2], a[3], ] } pub fn split_interval(lo: u32, hi: u32, nparts: u32, index: u32) -> u32 { assert!(nparts < 1 << 15 && hi - lo < 1 << 31); lo + index * ((hi - lo) / nparts) + (index * ((hi - lo) % nparts)) / nparts } pub fn ceildiv(v: u32, u: u32) -> u32 { v.div_ceil(u) } /* Split u64 into high (32:63) and low (0:31) parts */ pub fn split_u64(x: u64) -> (u32, u32) { ((x >> 32) as u32, x as u32) } pub fn join_u64(hi: u32, lo: u32) -> u64 { ((hi as u64) << 32) | (lo as u64) } pub fn build_wmsg_header(typ: WmsgType, len: usize) -> u32 { u32::try_from(len).unwrap().checked_mul(1 << 5).unwrap() | (typ as u32) } /** The size excludes trailing padding (to multiple of 4). */ pub fn parse_wmsg_header(header: u32) -> Option<(usize, WmsgType)> { let code = header & ((1 << 5) - 1); let len = (header >> 5) as usize; let t = match code { 0 => WmsgType::Protocol, 1 => WmsgType::InjectRIDs, 2 => WmsgType::OpenFile, 3 => WmsgType::ExtendFile, 4 => WmsgType::OpenDMABUF, 5 => WmsgType::BufferFill, 6 => WmsgType::BufferDiff, 7 => WmsgType::OpenIRPipe, 8 => WmsgType::OpenIWPipe, 9 => WmsgType::OpenRWPipe, 10 => WmsgType::PipeTransfer, 11 => WmsgType::PipeShutdownR, 12 => WmsgType::PipeShutdownW, 13 => WmsgType::OpenDMAVidSrc, 14 => WmsgType::OpenDMAVidDst, 15 => WmsgType::SendDMAVidPacket, 16 => WmsgType::AckNblocks, 17 => WmsgType::Restart, 18 => WmsgType::Close, 19 => WmsgType::OpenDMAVidSrcV2, 20 => WmsgType::OpenDMAVidDstV2, 21 => WmsgType::OpenTimeline, 22 => WmsgType::SignalTimeline, 23 => WmsgType::Version, _ => { return None; } }; Some((len, t)) } pub fn retain_err<T, E, F>(x: &mut Vec<T>, mut f: F) -> Result<(), E> where F: FnMut(&mut T) -> Result<bool, E>, { let mut e: Result<(), E> = Ok(()); x.retain_mut(|y| match f(y) { Ok(b) => b, Err(x) => { e = Err(x); /* It doesn't matter whether we keep or exit in this case */ true } }); e } /** A type to escape Wayland interface names, which should only consist of [a-zA-Z0-9_] */ pub struct EscapeWlName<'a>(pub &'a [u8]); impl Display for EscapeWlName<'_> { fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result { for c in self.0 { match *c { b'_' | b'a'..=b'z' | b'0'..=b'9' | b'A'..=b'Z' => { write!(f, "{}", char::from_u32(*c as u32).unwrap()) } _ => { write!(f, "\\x{:02x}", *c) } }? } Ok(()) } } /** A type to escape all non-ascii-printable characters when Displayed, to leave strings * somewhat legible but make it clear exactly what bytes they contain */ pub struct EscapeAsciiPrintable<'a>(pub &'a [u8]); impl Display for EscapeAsciiPrintable<'_> { fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result { for c in self.0 { match *c { b' '..=b'~' => write!(f, "{}", char::from_u32(*c as u32).unwrap()), _ => { write!(f, "\\x{:02x}", *c) } }? } Ok(()) } } /** Format a bool as 'T' or 'f' */ pub fn fmt_bool(x: bool) -> char { if x { 'T' } else { 'f' } } /** Return the string iff `x`, otherwise empty string. Can be efficient * for logging conditions that are rarely true.
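* For example, string_if_bool(pending, " (pending)") yields " (pending)" when pending is * true and "" otherwise.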
*/ pub fn string_if_bool(x: bool, y: &str) -> &str { if x { y } else { "" } } /* A heap-allocated 64-aligned array */ pub struct AlignedArray { data: *mut u8, size: usize, } unsafe impl Send for AlignedArray {} unsafe impl Sync for AlignedArray {} impl AlignedArray { pub fn new(size: usize) -> AlignedArray { if size == 0 { AlignedArray { data: std::ptr::null_mut(), size: 0, } } else { let layout = std::alloc::Layout::from_size_align(size, 64).unwrap(); unsafe { // SAFETY: layout size was checked to be > 0 let mem = std::alloc::alloc_zeroed(layout).cast::(); assert!(!mem.is_null()); AlignedArray { data: mem, size } } } } /* Returns (ptr, len); ptr is promised to be 64 aligned */ pub fn get_parts(&self) -> (*mut u8, usize) { (self.data, self.size) } pub fn get_mut(&mut self) -> &mut [u8] { if self.size == 0 { return &mut []; } unsafe { // SAFETY: self.data is not null since size > 0 was checked // data is 64-aligned, and only 1-alignment needed for u8 // size matches allocated amount // &mut self argument ensures no other calls to get_mut() can // overlap in lifespan, so slice is not otherwise accessed; other unsafe // users of AlignedArray should enforce similar behavior &mut *std::ptr::slice_from_raw_parts_mut(self.data, self.size) } } pub fn get(&self) -> &[u8] { if self.size == 0 { return &[]; } unsafe { // SAFETY: bounds OK else allocation would fail, todo &*std::ptr::slice_from_raw_parts(self.data, self.size) } } } impl Drop for AlignedArray { fn drop(&mut self) { if self.size > 0 { let layout = std::alloc::Layout::from_size_align(self.size, 64).unwrap(); unsafe { // SAFETY: self.data is not null and was allocated with the same layout std::alloc::dealloc(self.data, layout); } } } } #[derive(Clone, Copy, Debug, PartialEq, Eq)] pub enum Compression { None, Lz4(i8), Zstd(i8), } impl FromStr for Compression { type Err = &'static str; fn from_str(s: &str) -> Result { const FAILURE: &str = "Compression should have format: 'none', 'lz4[=#]', or 'zstd[=#]'"; if s == "none" { Ok(Compression::None) } else if s.starts_with("lz4") { let lvl: i8; if s == "lz4" { lvl = 0; } else if let Some(suffix) = s.strip_prefix("lz4=") { lvl = suffix.parse::().map_err(|_| FAILURE)?; } else { return Err(FAILURE); } Ok(Compression::Lz4(lvl)) } else if s.starts_with("zstd") { let lvl: i8; if s == "zstd" { lvl = 0; } else if let Some(suffix) = s.strip_prefix("zstd=") { lvl = suffix.parse::().map_err(|_| FAILURE)?; } else { return Err(FAILURE); } Ok(Compression::Zstd(lvl)) } else { Err(FAILURE) } } } impl fmt::Display for Compression { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self { Compression::None => write!(f, "none"), Compression::Lz4(i) => { if *i == 0 { write!(f, "lz4") } else { write!(f, "lz4={}", i) } } Compression::Zstd(i) => { if *i == 0 { write!(f, "zstd") } else { write!(f, "zstd={}", i) } } } } } #[test] fn compression_enum_roundtrip() { assert_eq!( Compression::from_str(&Compression::None.to_string()), Ok(Compression::None) ); for i in i8::MIN..=i8::MAX { assert_eq!( Compression::from_str(&Compression::Lz4(i).to_string()), Ok(Compression::Lz4(i)) ); assert_eq!( Compression::from_str(&Compression::Zstd(i).to_string()), Ok(Compression::Zstd(i)) ); } } #[derive(Debug, Copy, Clone, PartialEq, Eq)] pub enum VideoFormat { /* Values are used in wire protocol */ H264 = 0, VP9 = 1, AV1 = 2, } /** Whether to prefer software or hardware encoding, when available */ #[derive(Debug, Copy, Clone, PartialEq, Eq)] pub enum CodecPreference { SW = 0, HW = 1, } /** Configuration for video 
encoding/decoding */ #[derive(Debug, Copy, Clone, PartialEq, Default)] pub struct VideoSetting { /* If not set, no video encoding done */ pub format: Option, /* If not set, default */ pub bits_per_frame: Option, pub enc_pref: Option, pub dec_pref: Option, } impl FromStr for VideoSetting { type Err = &'static str; fn from_str(s: &str) -> Result { const FAILURE: &str = "Video spec should be comma-separated list containing any of: 'none', 'h264', 'vp9', 'av1', 'sw', 'hw', 'hwenc', 'swenc', 'hwdec', 'swdec', 'bpf='"; let mut f = VideoSetting { format: None, bits_per_frame: None, enc_pref: None, dec_pref: None, }; for chunk in s.split_terminator(',') { if chunk == "none" { f.format = None; } else if chunk == "hw" { f.enc_pref = Some(CodecPreference::HW); f.dec_pref = Some(CodecPreference::HW); } else if chunk == "sw" { f.enc_pref = Some(CodecPreference::SW); f.dec_pref = Some(CodecPreference::SW); } else if chunk == "swenc" { f.enc_pref = Some(CodecPreference::SW); } else if chunk == "hwenc" { f.enc_pref = Some(CodecPreference::HW); } else if chunk == "swdec" { f.dec_pref = Some(CodecPreference::SW); } else if chunk == "hwdec" { f.dec_pref = Some(CodecPreference::HW); } else if chunk == "h264" { f.format = Some(VideoFormat::H264); } else if chunk == "vp9" { f.format = Some(VideoFormat::VP9); } else if chunk == "av1" { f.format = Some(VideoFormat::AV1); } else if let Some(suffix) = chunk.strip_prefix("bpf=") { f.bits_per_frame = Some(suffix.parse::().map_err(|_| FAILURE)?); } else { return Err(FAILURE); } } Ok(f) } } impl fmt::Display for VideoSetting { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { if let Some(fmt) = &self.format { match fmt { VideoFormat::H264 => write!(f, "h264")?, VideoFormat::AV1 => write!(f, "av1")?, VideoFormat::VP9 => write!(f, "vp9")?, }; } else { write!(f, "none")?; } if self.enc_pref == Some(CodecPreference::SW) && self.dec_pref == Some(CodecPreference::SW) { write!(f, ",sw")?; } else if self.enc_pref == Some(CodecPreference::HW) && self.dec_pref == Some(CodecPreference::HW) { write!(f, ",hw")?; } else { if let Some(p) = self.enc_pref { match p { CodecPreference::SW => write!(f, ",swenc")?, CodecPreference::HW => write!(f, ",hwenc")?, } } if let Some(p) = self.dec_pref { match p { CodecPreference::SW => write!(f, ",swdec")?, CodecPreference::HW => write!(f, ",hwdec")?, } } } if let Some(bpf) = self.bits_per_frame { write!(f, ",bpf={}", bpf)?; } Ok(()) } } #[test] fn video_setting_roundtrip() { let examples = [ VideoSetting { format: None, bits_per_frame: None, enc_pref: None, dec_pref: None, }, VideoSetting { format: None, bits_per_frame: Some(1e9), enc_pref: Some(CodecPreference::SW), dec_pref: None, }, VideoSetting { format: Some(VideoFormat::H264), bits_per_frame: Some(100.0), enc_pref: None, dec_pref: Some(CodecPreference::HW), }, VideoSetting { format: Some(VideoFormat::VP9), bits_per_frame: Some(4321.0), enc_pref: Some(CodecPreference::SW), dec_pref: Some(CodecPreference::HW), }, VideoSetting { format: Some(VideoFormat::H264), bits_per_frame: None, enc_pref: Some(CodecPreference::SW), dec_pref: Some(CodecPreference::SW), }, VideoSetting { format: Some(VideoFormat::AV1), bits_per_frame: None, enc_pref: Some(CodecPreference::HW), dec_pref: Some(CodecPreference::HW), }, ]; for v in examples { println!("{}", VideoSetting::to_string(&v)); assert_eq!(VideoSetting::from_str(&VideoSetting::to_string(&v)), Ok(v)); } } #[derive(Debug)] pub struct AddDmabufPlane { pub fd: OwnedFd, pub plane_idx: u32, pub offset: u32, pub stride: u32, pub modifier: u64, } /** 
Construct a fourcc code from the component letters */ pub const fn fourcc(a: char, b: char, c: char, d: char) -> u32 { u32::from_le_bytes([(a as u8), (b as u8), (c as u8), (d as u8)]) } pub fn list_render_device_ids() -> Vec { use std::os::unix::ffi::OsStrExt; let mut dev_ids = Vec::new(); let Ok(dir_iter) = std::fs::read_dir("/dev/dri") else { /* On failure, assume Vulkan is not available */ return dev_ids; }; for r in dir_iter { let std::io::Result::Ok(entry) = r else { continue; }; if !entry.file_name().as_bytes().starts_with(b"renderD") { continue; } let Some(rdev) = get_rdev_for_file(&entry.path()) else { continue; }; dev_ids.push(rdev); } dev_ids } /** Open the render node with specified device id*/ pub fn drm_open_render(dev_id: u64, rdrw: bool) -> Result { /* On Linux, the render node is usually /dev/dri/renderD$X where $X is the * minor value/lowest 8 bits, but this may not be the case on all platforms */ let rd: ReadDir = std::fs::read_dir("/dev/dri").map_err(|x| tag!("Failed to read /dev/dri/: {}", x))?; for entry in rd { let e = entry.map_err(|x| tag!("Failed to read entry in /dev/dri: {}", x))?; /* Restrict to render nodes */ if e.file_name().as_bytes().starts_with(b"renderD") { let path = e.path(); let Some(rdev) = get_rdev_for_file(&path) else { continue; }; /* Note: technically there is a check-vs-open race condition here, but * render nodes rarely change. It could be avoided using `fstat`. */ if rdev == dev_id { let mut flags = fcntl::OFlag::O_CLOEXEC | fcntl::OFlag::O_NOCTTY; if rdrw { flags |= fcntl::OFlag::O_RDWR; } let raw_fd = fcntl::open(&path, flags, nix::sys::stat::Mode::empty()) .map_err(|x| tag!("Failed to open drm node fd at '{:?}': {}", path, x))?; return Ok(unsafe { // SAFETY: fd was just created, was checked valid, and is recorded nowhere else OwnedFd::from_raw_fd(raw_fd) }); } } } Err(tag!( "Failed to find render node with device id 0x{:x}", dev_id, )) } /** Provide contents of dmabuf_slice_data, pretending the buffer has a linear modifier * and is tightly packed. */ pub fn dmabuf_slice_make_ideal(drm_format: u32, width: u32, height: u32, bpp: u32) -> [u8; 64] { let mut out = [0; 64]; out[0..4].copy_from_slice(&width.to_le_bytes()); out[4..8].copy_from_slice(&height.to_le_bytes()); out[8..12].copy_from_slice(&drm_format.to_le_bytes()); out[12..16].copy_from_slice(&1u32.to_le_bytes()); let offset = 0_u32; out[16..20].copy_from_slice(&offset.to_le_bytes()); let stride = width.checked_mul(bpp).unwrap(); out[32..36].copy_from_slice(&stride.to_le_bytes()); /* This modifier is only ever used by waypipe-c to decide what buffer type to create */ out[48..56].copy_from_slice(&0_u64.to_le_bytes()); /* Link plane to dmabuf */ out[56] = 1; out } /** Get the stride from a dmabuf_slice_data; waypipe-c will interpret this as the nominal stride. */ pub fn dmabuf_slice_get_first_stride(data: [u8; 64]) -> u32 { u32::from_le_bytes(data[32..36].try_into().unwrap()) } /** Set the close-on-exec flag for a file descriptor */ pub fn set_cloexec(fd: &OwnedFd, cloexec: bool) -> Result<(), String> { fcntl::fcntl( fd.as_raw_fd(), fcntl::FcntlArg::F_SETFD(if cloexec { fcntl::FdFlag::FD_CLOEXEC } else { fcntl::FdFlag::empty() }), ) .map_err(|x| tag!("Failed to set cloexec flag: {:?}", x))?; Ok(()) } /** Set the O_NONBLOCK flag for the file description, clearing all other flags. 
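* (F_SETFL replaces the entire status-flag set, which is why the complementary set_blocking * below passes an empty flag set.)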
*/ pub fn set_nonblock(fd: &OwnedFd) -> Result<(), String> { fcntl::fcntl( fd.as_raw_fd(), fcntl::FcntlArg::F_SETFL(nix::fcntl::OFlag::O_NONBLOCK), ) .map_err(|x| tag!("Failed to set nonblocking: {:?}", x))?; Ok(()) } /** Unset the O_NONBLOCK flag for the file description, clearing all other flags. */ pub fn set_blocking(fd: &OwnedFd) -> Result<(), String> { fcntl::fcntl( fd.as_raw_fd(), fcntl::FcntlArg::F_SETFL(nix::fcntl::OFlag::empty()), ) .map_err(|x| tag!("Failed to set blocking: {:?}", x))?; Ok(()) } /** Given blocking `fd`, read exactly enough to fill `data`, or return error. * This blocks until completion or error. Returns Err(None) on EOF. */ pub fn read_exact(fd: &OwnedFd, data: &mut [u8]) -> Result<(), Option<nix::errno::Errno>> { let mut offset = 0; while offset < data.len() { match unistd::read(fd.as_raw_fd(), &mut data[offset..]) { Ok(s) => { if s == 0 { return Err(None); } offset += s; } Err(nix::errno::Errno::EINTR) => { continue; } Err(code) => { /* Note: since `fd` should not have O_NONBLOCK, EAGAIN is unexpected */ return Err(Some(code)); } } } Ok(()) } /** Given blocking `fd`, write `data`, or return error if not everything was written. * This blocks until completion or error. */ pub fn write_exact(fd: &OwnedFd, data: &[u8]) -> nix::Result<()> { let mut offset = 0; while offset < data.len() { match unistd::write(fd, &data[offset..]) { Ok(s) => { offset += s; } Err(nix::errno::Errno::EINTR) => { continue; } Err(code) => { /* Note: since `fd` should not have O_NONBLOCK, EAGAIN is unexpected */ return Err(code); } } } Ok(()) } /** A very simple and fast pseudorandom generator; output is only hard to * predict for very restricted adversaries; this should be enough to fool a branch * predictor or general-purpose compression algorithm, but should not be * used outside test or benchmarking code. */ pub struct BadRng { pub state: u64, } impl BadRng { /** Get a new u64 value */ pub fn next(&mut self) -> u64 { // Xorshift RNG, see Marsaglia 2003 self.state ^= self.state << 13; self.state ^= self.state >> 7; self.state ^= self.state << 17; self.state } /** Get a new value, in the range 0..maxval; this is only approximately uniform */ pub fn next_usize(&mut self, maxval: usize) -> usize { (self.next() % maxval as u64) as usize } } /** Basic layout parameters of a Wayland/drm_fourcc.h linear layout format plane; these are * sufficient to describe where data for a pixel is, within a plane, but do not describe * the exact way the data is encoded. */ #[derive(Clone, Copy)] pub struct PlaneLayout { /** Bytes per (subsampled) texel block */ pub bpt: NonZeroU32, /** Horizontal subsampling ratio (width / texel block count) */ pub hsub: NonZeroU32, /** Vertical subsampling ratio (height / texel block count) */ pub vsub: NonZeroU32, /** Width of a texel block in pixels; this should divide hsub. */ pub htex: NonZeroU32, /** Height of a texel block in pixels; this should divide vsub. */ pub vtex: NonZeroU32, } /** Basic layout parameters for a (possibly) multiplanar Wayland/drm_fourcc.h linear layout format. * These are sufficient to determine, given width/height/offset/stride parameters, where the data * corresponding to a given pixel is, but do not determine the exact way the data is encoded. */ pub struct FormatLayout { pub planes: &'static [PlaneLayout], } /** Convert a DRM fourcc format code to a Wayland format code. * * Wayland and DRM differ in encodings for Argb8888 and Xrgb8888 only.
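* (DRM names these by the fourcc codes 'AR24' and 'XR24', while wl_shm reserves the special * enum values 0 and 1 for them; the function below maps the former to the latter and passes * all other codes through unchanged.)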
* * Both names for Argb8888/Xrgb8888 can safely be used to specify wl_shm formats if * the compositor advertised both, but only DRM formats are permitted for linux-dmabuf. */ pub fn drm_to_wayland(drm_format: u32) -> u32 { if drm_format == fourcc('A', 'R', '2', '4') { WlShmFormat::Argb8888 as u32 } else if drm_format == fourcc('X', 'R', '2', '4') { WlShmFormat::Xrgb8888 as u32 } else { drm_format } } /** Get the layout for a wl_shm/drm_fourcc format. Returns None if the format is unsupported * (either being entirely invalid, or a format lacking any linear layout.) */ pub fn get_shm_format_layout(format: u32) -> Option { use crate::wayland_gen::WlShmFormat::*; /* Safety: values are not zero */ const N1: NonZeroU32 = unsafe { NonZeroU32::new_unchecked(1) }; const N2: NonZeroU32 = unsafe { NonZeroU32::new_unchecked(2) }; const N3: NonZeroU32 = unsafe { NonZeroU32::new_unchecked(3) }; const N4: NonZeroU32 = unsafe { NonZeroU32::new_unchecked(4) }; const N5: NonZeroU32 = unsafe { NonZeroU32::new_unchecked(5) }; const N6: NonZeroU32 = unsafe { NonZeroU32::new_unchecked(6) }; const N8: NonZeroU32 = unsafe { NonZeroU32::new_unchecked(8) }; const SIMPLE_1: PlaneLayout = PlaneLayout { bpt: N1, hsub: N1, vsub: N1, htex: N1, vtex: N1, }; const SIMPLE_2: PlaneLayout = PlaneLayout { bpt: N2, hsub: N1, vsub: N1, htex: N1, vtex: N1, }; /* Formats which do not have a wl_shm name yet */ const NV20: u32 = fourcc('N', 'V', '2', '0'); const NV30: u32 = fourcc('N', 'V', '3', '0'); match format { NV20 => { return Some(FormatLayout { planes: &[ PlaneLayout { bpt: N5, hsub: N4, vsub: N1, htex: N4, vtex: N1, }, PlaneLayout { bpt: N5, hsub: N4, vsub: N1, htex: N2, vtex: N1, }, ], }) } NV30 => { return Some(FormatLayout { planes: &[ PlaneLayout { bpt: N5, hsub: N4, vsub: N1, htex: N4, vtex: N1, }, PlaneLayout { // ? 
bpt: N5, hsub: N2, vsub: N1, htex: N2, vtex: N1, }, ], }); } _ => (), } let f: WlShmFormat = (drm_to_wayland(format)).try_into().ok()?; Some(match f { R8 | C8 | D8 | Rgb332 | Bgr233 => FormatLayout { planes: &[SIMPLE_1], }, Xrgb4444 | Xbgr4444 | Rgbx4444 | Bgrx4444 | Argb4444 | Abgr4444 | Rgba4444 | Bgra4444 | Xrgb1555 | Xbgr1555 | Rgbx5551 | Bgrx5551 | Argb1555 | Abgr1555 | Rgba5551 | Bgra5551 | Rgb565 | Bgr565 | R10 | R12 | R16 | Rg88 | Gr88 => FormatLayout { planes: &[SIMPLE_2], }, Rgb888 | Bgr888 | Vuy888 => FormatLayout { planes: &[PlaneLayout { bpt: N3, hsub: N1, vsub: N1, htex: N1, vtex: N1, }], }, Argb8888 | Xrgb8888 | Xbgr8888 | Rgbx8888 | Bgrx8888 | Abgr8888 | Rgba8888 | Bgra8888 | Xrgb2101010 | Xbgr2101010 | Rgbx1010102 | Bgrx1010102 | Argb2101010 | Abgr2101010 | Rgba1010102 | Bgra1010102 | Ayuv | Avuy8888 | Xvuy8888 | Xyuv8888 | Xvyu2101010 | Rg1616 | Gr1616 | Y410 => FormatLayout { planes: &[PlaneLayout { bpt: N4, hsub: N1, vsub: N1, htex: N1, vtex: N1, }], }, Xrgb16161616f | Xbgr16161616f | Argb16161616f | Abgr16161616f | Xrgb16161616 | Xbgr16161616 | Argb16161616 | Abgr16161616 | Axbxgxrx106106106106 | Xvyu1216161616 | Xvyu16161616 | Y412 | Y416 => FormatLayout { planes: &[PlaneLayout { bpt: N8, hsub: N1, vsub: N1, htex: N1, vtex: N1, }], }, R1 | C1 | D1 => FormatLayout { planes: &[PlaneLayout { bpt: N1, hsub: N8, vsub: N1, htex: N8, vtex: N1, }], }, R2 | C2 | D2 => FormatLayout { planes: &[PlaneLayout { bpt: N1, hsub: N4, vsub: N1, htex: N4, vtex: N1, }], }, R4 | C4 | D4 => FormatLayout { planes: &[PlaneLayout { bpt: N1, hsub: N2, vsub: N1, htex: N2, vtex: N1, }], }, Yuyv | Yvyu | Uyvy | Vyuy => FormatLayout { planes: &[PlaneLayout { bpt: N4, hsub: N2, vsub: N1, htex: N2, vtex: N1, }], }, Y210 | Y212 | Y216 => FormatLayout { planes: &[PlaneLayout { bpt: N8, hsub: N2, vsub: N1, htex: N2, vtex: N1, }], }, Y0l0 | X0l0 | Y0l2 | X0l2 => FormatLayout { planes: &[PlaneLayout { bpt: N8, hsub: N2, vsub: N2, htex: N2, vtex: N2, }], }, Rgb565A8 | Bgr565A8 => FormatLayout { planes: &[SIMPLE_2, SIMPLE_1], }, Rgb888A8 | Bgr888A8 => FormatLayout { planes: &[ PlaneLayout { bpt: N3, hsub: N1, vsub: N1, htex: N1, vtex: N1, }, SIMPLE_1, ], }, Xrgb8888A8 | Xbgr8888A8 | Rgbx8888A8 | Bgrx8888A8 => FormatLayout { planes: &[ PlaneLayout { bpt: N4, hsub: N1, vsub: N1, htex: N1, vtex: N1, }, SIMPLE_1, ], }, Nv12 | Nv21 => FormatLayout { planes: &[ SIMPLE_1, PlaneLayout { bpt: N2, hsub: N2, vsub: N2, htex: N1, vtex: N1, }, ], }, Nv16 | Nv61 => FormatLayout { planes: &[ SIMPLE_1, PlaneLayout { bpt: N2, hsub: N2, vsub: N1, htex: N1, vtex: N1, }, ], }, Nv24 | Nv42 => FormatLayout { planes: &[SIMPLE_1, SIMPLE_2], }, Nv15 => FormatLayout { planes: &[ PlaneLayout { bpt: N5, hsub: N4, vsub: N1, htex: N4, vtex: N1, }, PlaneLayout { bpt: N5, hsub: N4, vsub: N2, htex: N2, vtex: N1, }, ], }, P210 | P010 | P012 | P016 => FormatLayout { planes: &[ SIMPLE_2, PlaneLayout { bpt: N4, hsub: N2, vsub: N2, htex: N1, vtex: N1, }, ], }, P030 => FormatLayout { planes: &[ PlaneLayout { bpt: N4, hsub: N3, vsub: N1, htex: N3, vtex: N1, }, PlaneLayout { bpt: N8, hsub: N6, vsub: N2, htex: N3, vtex: N1, }, ], }, Yuv410 | Yvu410 => FormatLayout { planes: &[ SIMPLE_1, PlaneLayout { bpt: N1, hsub: N4, vsub: N4, htex: N1, vtex: N1, }, PlaneLayout { bpt: N1, hsub: N4, vsub: N4, htex: N1, vtex: N1, }, ], }, Yuv411 | Yvu411 => FormatLayout { planes: &[ SIMPLE_1, PlaneLayout { bpt: N1, hsub: N4, vsub: N1, htex: N1, vtex: N1, }, PlaneLayout { bpt: N1, hsub: N4, vsub: N1, htex: N1, vtex: N1, }, ], }, Yuv420 | Yvu420 => FormatLayout { 
planes: &[ SIMPLE_1, PlaneLayout { bpt: N1, hsub: N2, vsub: N2, htex: N1, vtex: N1, }, PlaneLayout { bpt: N1, hsub: N2, vsub: N2, htex: N1, vtex: N1, }, ], }, Yuv422 | Yvu422 => FormatLayout { planes: &[ SIMPLE_1, PlaneLayout { bpt: N1, hsub: N2, vsub: N1, htex: N1, vtex: N1, }, PlaneLayout { bpt: N1, hsub: N2, vsub: N1, htex: N1, vtex: N1, }, ], }, Yuv444 | Yvu444 => FormatLayout { planes: &[SIMPLE_1, SIMPLE_1, SIMPLE_1], }, Q401 | Q410 => FormatLayout { planes: &[SIMPLE_2, SIMPLE_2, SIMPLE_2], }, Yuv4208bit | Yuv42010bit | Vuy101010 => { return None; } }) } waypipe-v0.10.5/src/video.rs000066400000000000000000003617751507071711100157270ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-3.0-or-later */ /*! Video encoding for dmabufs */ #![cfg(feature = "video")] use crate::dmabuf::*; use crate::tag; use crate::util::*; use crate::wayland_gen::*; use ash::vk::Handle; use ash::*; use log::{debug, error}; use std::ffi::{c_char, c_int, CStr}; use std::ptr::slice_from_raw_parts; use std::sync::{Arc, Mutex}; use waypipe_ffmpeg_wrapper::{ ffmpeg, AVBufferRef, AVCodec, AVCodecContext, AVDictionary, AVFrame, AVHWDeviceContext, AVHWDeviceType_AV_HWDEVICE_TYPE_VULKAN, AVHWFramesContext, AVPacket, AVPixelFormat_AV_PIX_FMT_NONE, AVPixelFormat_AV_PIX_FMT_NV12, AVPixelFormat_AV_PIX_FMT_VULKAN, AVPixelFormat_AV_PIX_FMT_YUV420P, AVRational, AVVkFrame, AVVulkanDeviceContext, AVVulkanFramesContext, VkStructureType_VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2, AV_LOG_VERBOSE, AV_LOG_WARNING, AV_NUM_DATA_POINTERS, LIBAVCODEC_VERSION_MAJOR, }; use waypipe_shaders::{NV12_IMG_TO_RGB, RGB_TO_NV12_IMG, RGB_TO_YUV420_BUF, YUV420_BUF_TO_RGB}; struct VulkanComputePipeline { shader_module: vk::ShaderModule, ds_layout: vk::DescriptorSetLayout, pipeline_layout: vk::PipelineLayout, pipeline: vk::Pipeline, } /** The preferred hardware and software decoders and encoders for a format. * Entries may be null if the codec is not available, and the same AVCodec may * be used for both software and hardware video. */ struct CodecSet { decoder: *const AVCodec, sw_decoder: *const AVCodec, encoder: *const AVCodec, sw_encoder: *const AVCodec, } /** Structure holding all video-related state (ffmpeg bindings, devices, pipelines, etc.) * linked to a specific Vulkan logical device. */ pub struct VulkanVideo { bindings: ffmpeg, av_hwdevice: *mut AVBufferRef, // to AVHWDeviceContext codecs_h264: CodecSet, codecs_vp9: CodecSet, codecs_av1: CodecSet, can_hw_enc_h264: bool, can_hw_dec_h264: bool, can_hw_dec_av1: bool, // TODO: is it possible to detect in advance when hardware en/decoding works? // and only create the necessary pipeline? nv12_img_to_rgb: VulkanComputePipeline, rgb_to_nv12_img: VulkanComputePipeline, yuv420_buf_to_rgb: VulkanComputePipeline, rgb_to_yuv420_buf: VulkanComputePipeline, yuv_to_rgb_sampler_y: vk::Sampler, yuv_to_rgb_sampler_rb: vk::Sampler, rgb_to_yuv_sampler_rgb: vk::Sampler, } unsafe impl Send for VulkanVideo {} unsafe impl Sync for VulkanVideo {} struct VulkanSWDecodeData { _buf_y: VulkanBuffer, _buf_u: VulkanBuffer, _buf_v: VulkanBuffer, buf_y_view: vk::BufferView, buf_u_view: vk::BufferView, buf_v_view: vk::BufferView, } struct VulkanHWDecodeData {} enum VulkanDecodeOpData { Software(VulkanSWDecodeData), Hardware(VulkanHWDecodeData), } pub struct VulkanDecodeOpHandle { /* Copy operation is between these two objects */ decode: Arc, pool: Arc, // TODO: not safe to free a 'pending' command buffer; give Vulkan itself a list of copy-handles? 
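    /* Sketch of the intended lifetime contract (as enforced by the Drop impl
     * below, which panics if the GPU has not yet signaled completion):
     *
     *     let op = start_dmavid_apply(&state, &pool, &packet)?;
     *     // ... keep `op` alive until vulk.semaphore reaches
     *     // op.get_timeline_point(), then drop it ...
     */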
    cb: vk::CommandBuffer,
    desc_pool: vk::DescriptorPool,
    descriptor_set: vk::DescriptorSet,
    data: VulkanDecodeOpData,
    // on the main queue's timeline semaphore
    pub completion_time_point: u64,
}

impl Drop for VulkanDecodeOpHandle {
    fn drop(&mut self) {
        let cmd_pool = self.pool.pool.lock().unwrap();
        let vulk: &VulkanDevice = &self.decode.target.vulk;
        unsafe {
            /* Verify that the command buffer execution has completed; if not,
             * panic, as it's a program error */
            if let Ok(counter) = vulk
                .timeline_semaphore
                .get_semaphore_counter_value(vulk.semaphore)
            {
                assert!(
                    counter >= self.completion_time_point,
                    "decode op handle deleted at {} < {}; dropped too early?",
                    counter,
                    self.completion_time_point
                );
            }

            vulk.dev.free_command_buffers(*cmd_pool, &[self.cb]);

            match self.data {
                VulkanDecodeOpData::Software(ref x) => {
                    vulk.dev.destroy_buffer_view(x.buf_y_view, None);
                    vulk.dev.destroy_buffer_view(x.buf_u_view, None);
                    vulk.dev.destroy_buffer_view(x.buf_v_view, None);
                    // VulkanBuffer has Drop impl
                }
                VulkanDecodeOpData::Hardware(_) => {}
            }

            vulk.dev
                .free_descriptor_sets(self.desc_pool, &[self.descriptor_set])
                .map_err(|_| "Failed to free descriptor set")
                .unwrap();
            vulk.dev.destroy_descriptor_pool(self.desc_pool, None);
        }
    }
}

impl Drop for VulkanVideo {
    fn drop(&mut self) {
        unsafe {
            let r = &mut self.av_hwdevice;
            self.bindings.av_buffer_unref(r);
        }
    }
}

unsafe fn destroy_compute_pipeline(dev: &Device, p: &VulkanComputePipeline) {
    dev.destroy_pipeline(p.pipeline, None);
    dev.destroy_pipeline_layout(p.pipeline_layout, None);
    dev.destroy_descriptor_set_layout(p.ds_layout, None);
    dev.destroy_shader_module(p.shader_module, None);
}

pub unsafe fn destroy_video(dev: &Device, video: &VulkanVideo) {
    /* Cleanup vulkan bits of video state */
    destroy_compute_pipeline(dev, &video.nv12_img_to_rgb);
    destroy_compute_pipeline(dev, &video.rgb_to_nv12_img);
    destroy_compute_pipeline(dev, &video.yuv420_buf_to_rgb);
    destroy_compute_pipeline(dev, &video.rgb_to_yuv420_buf);
    dev.destroy_sampler(video.yuv_to_rgb_sampler_y, None);
    dev.destroy_sampler(video.yuv_to_rgb_sampler_rb, None);
    dev.destroy_sampler(video.rgb_to_yuv_sampler_rgb, None);
}

struct VideoDecodeInner {
    ctx: *mut AVCodecContext,
}
unsafe impl Send for VideoDecodeInner {}

/** Hardware video decoding produces, and encoding uses, a multi-planar image with a
 * format like G8_B8R8_2PLANE_420_UNORM; however, these typically only have
 * TRANSFER and various SAMPLED format features, but not BLIT, STORAGE, or
 * COLOR_ATTACHMENT. (Note: STORAGE sometimes works in practice despite not being
 * supported; doing this may be faster but is risky and could lead to exciting bugs.)
 * Also, aliasing individual planes appears to require ALIAS|DISJOINT, which Vulkan
 * video does not appear to always support (or if so, figuring it out and
 * integrating it with ffmpeg's code is complicated.) Therefore: To make encoding and
 * decoding more reliable and symmetric, transfer image data to/from a staging image
 * before doing the YUV<->RGB conversion.
 *
 * This does not hold a reference to the vulkan instance and cleaning up this struct
 * is the owner's responsibility.
*/ struct VideoHWStagingImages { plane_memories: [vk::DeviceMemory; 2], plane_images: [vk::Image; 2], plane_image_views: [vk::ImageView; 2], } enum VideoDecodeStateData { SW, HW(VideoHWStagingImages), } pub struct VideoDecodeState { // for now, only updating a single dmabuf target: Arc, inner: Mutex, /* Image view for `target`, type COLOR, entire image */ output_image_view: vk::ImageView, /* State specific to hardware vs software decoding pathways */ data: VideoDecodeStateData, } struct VideoEncodeInner { ctx: *mut AVCodecContext, } unsafe impl Send for VideoEncodeInner {} enum VideoEncodeStateData { SW, HW(VideoHWStagingImages), } pub struct VideoEncodeState { target: Arc, inner: Mutex, /* Image view for `target`, type COLOR, entire image */ output_image_view: vk::ImageView, /* State specific to hardware vs software encoding pathways */ data: VideoEncodeStateData, } impl Drop for VideoDecodeState { fn drop(&mut self) { unsafe { let mut x = self.inner.lock().unwrap(); self.target .vulk .video .as_ref() .unwrap() .bindings .avcodec_free_context(&mut x.ctx); self.target .vulk .dev .destroy_image_view(self.output_image_view, None); match self.data { VideoDecodeStateData::HW(ref data) => { free_staging_images(&self.target.vulk, data); } VideoDecodeStateData::SW => (), } } } } #[cfg(feature = "video")] impl Drop for VideoEncodeState { fn drop(&mut self) { unsafe { let mut x = self.inner.lock().unwrap(); self.target .vulk .video .as_ref() .unwrap() .bindings .avcodec_free_context(&mut x.ctx); self.target .vulk .dev .destroy_image_view(self.output_image_view, None); match self.data { VideoEncodeStateData::HW(ref data) => { free_staging_images(&self.target.vulk, data); } VideoEncodeStateData::SW => (), } } } } impl VulkanDecodeOpHandle { /* Not recommended in general -- blocks the thread. Returns true if point reached. */ #[cfg(test)] pub fn wait_until_done(self: &VulkanDecodeOpHandle) -> Result<(), String> { self.decode .target .vulk .wait_for_timeline_pt(self.completion_time_point, u64::MAX) .map(|_| ()) } pub fn get_timeline_point(self: &VulkanDecodeOpHandle) -> u64 { self.completion_time_point } } /** Return the length (excluding trailing 0) of a 0-terminated string. */ unsafe fn strlen(s: *const c_char) -> usize { for i in 0.. 
{
        if s.add(i).read() == 0 {
            return i;
        }
    }
    unreachable!();
}

fn av_strerror<'a>(bindings: &ffmpeg, err_buf: &'a mut [u8], ret: c_int) -> &'a str {
    unsafe {
        // SAFETY: av_strerror null-terminates, sizeof(u8) = sizeof(char), todo
        if bindings.av_strerror(ret, err_buf.as_mut_ptr() as *mut c_char, err_buf.len()) == 0 {
            std::str::from_utf8(&err_buf[..err_buf.iter().position(|x| *x == 0).unwrap()]).unwrap()
        } else {
            "unknown error"
        }
    }
}

unsafe fn av_hwframe_ctx_init(
    bindings: &ffmpeg,
    frames_ref: *mut AVBufferRef,
) -> Result<(), String> {
    let ret = bindings.av_hwframe_ctx_init(frames_ref);
    if ret != 0 {
        let mut err_buf = [0_u8; 1024];
        let err = av_strerror(bindings, &mut err_buf, ret);
        return Err(tag!(
            "Failed to initialize hwframe context: {}: {:?}",
            ret,
            err
        ));
    }
    Ok(())
}

unsafe fn av_hwdevice_ctx_init(
    bindings: &ffmpeg,
    device_ref: *mut AVBufferRef,
) -> Result<(), String> {
    let ret = bindings.av_hwdevice_ctx_init(device_ref);
    if ret != 0 {
        let mut err_buf = [0_u8; 1024];
        let err = av_strerror(bindings, &mut err_buf, ret);
        return Err(tag!(
            "Failed to initialize vulkan hwdevice: {}: {:?}",
            ret,
            err
        ));
    }
    Ok(())
}

unsafe fn avcodec_open(
    bindings: &ffmpeg,
    avctx: *mut AVCodecContext,
    codec: *const AVCodec,
    opts: *mut *mut AVDictionary,
) -> Result<(), String> {
    let ret = bindings.avcodec_open2(avctx, codec, opts);
    if ret != 0 {
        let mut err_buf = [0_u8; 1024];
        let err = av_strerror(bindings, &mut err_buf, ret);
        let name = CStr::from_ptr((*codec).name);
        return Err(tag!(
            "Failed to open codec context for {:?}: {}: {:?}",
            name,
            ret,
            err
        ));
    }
    Ok(())
}

unsafe fn avcodec_send_packet(
    bindings: &ffmpeg,
    avctx: *mut AVCodecContext,
    packet: *const AVPacket,
) -> Result<(), String> {
    let ret = bindings.avcodec_send_packet(avctx, packet);
    if ret != 0 {
        let mut err_buf = [0_u8; 1024];
        let err = av_strerror(bindings, &mut err_buf, ret);
        let name = CStr::from_ptr((*(*avctx).codec).name);
        return Err(tag!(
            "Failed to send video packet to {:?}: {}: {:?}",
            name,
            ret,
            err
        ));
    }
    Ok(())
}

unsafe fn avcodec_receive_packet(
    bindings: &ffmpeg,
    avctx: *mut AVCodecContext,
    packet: *mut AVPacket,
) -> Result<(), String> {
    let ret = bindings.avcodec_receive_packet(avctx, packet);
    if ret != 0 {
        let mut err_buf = [0_u8; 1024];
        let err = av_strerror(bindings, &mut err_buf, ret);
        let name = CStr::from_ptr((*(*avctx).codec).name);
        if name == c"libsvtav1" {
            debug!("Note: libsvtav1 version > 2.3.0 is required for low delay encoding to work");
        }
        return Err(tag!(
            "Failed to receive video packet from {:?}: {}: {:?}",
            name,
            ret,
            err
        ));
    }
    Ok(())
}

unsafe fn avcodec_send_frame(
    bindings: &ffmpeg,
    avctx: *mut AVCodecContext,
    frame: *const AVFrame,
) -> Result<(), String> {
    let ret = bindings.avcodec_send_frame(avctx, frame);
    if ret != 0 {
        let mut err_buf = [0_u8; 1024];
        let err = av_strerror(bindings, &mut err_buf, ret);
        let name = CStr::from_ptr((*(*avctx).codec).name);
        return Err(tag!(
            "Failed to send video frame to {:?}: {}: {:?}",
            name,
            ret,
            err
        ));
    }
    Ok(())
}

unsafe fn avcodec_receive_frame(
    bindings: &ffmpeg,
    avctx: *mut AVCodecContext,
    frame: *mut AVFrame,
) -> Result<(), String> {
    let ret = bindings.avcodec_receive_frame(avctx, frame);
    if ret != 0 {
        let mut err_buf = [0_u8; 1024];
        let err = av_strerror(bindings, &mut err_buf, ret);
        let name = CStr::from_ptr((*(*avctx).codec).name);
        return Err(tag!(
            "Failed to receive video frame from {:?}: {}: {:?}",
            name,
            ret,
            err
        ));
    }
    Ok(())
}

fn pack_glsl_mat3x4(mtx: &[[f32; 4]; 3]) -> [u8; 48] {
    let mut push_u8 = [0u8; 48];
    // 3 columns, 4 rows; each column's rows are packed contiguously (column-major)
    for (j, col) in
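    // Byte layout (GLSL std430 mat3x4: three vec4 columns, 16 bytes apart):
    // element (row i, column j) lands at byte offset 4 * (4 * j + i), for
    // 48 bytes total.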
mtx.iter().enumerate().take(3) {
        for (i, px) in col.iter().enumerate().take(4) {
            let k = 4 * j + i;
            push_u8[k * 4..(k + 1) * 4].copy_from_slice(&px.to_le_bytes());
        }
    }
    push_u8
}

/* For compatibility with original Waypipe; align to 16-pixel blocks. This will
 * suffice for most alignment requirements. This is not a big deal since we should
 * copy to an intermediate buffer anyway. */
fn align_size(width: u32, height: u32, format: VideoFormat) -> (i32, i32) {
    let mut w = width.next_multiple_of(16) as i32;
    if format == VideoFormat::H264 {
        /* libavcodec requires width >= 32 for software encoding H264 */
        w = w.max(32);
    }
    let h = height.next_multiple_of(16) as i32;
    (w, h)
}

fn set_context_extensions(
    bindings: &ffmpeg,
    ctx: &mut AVVulkanDeviceContext,
    device_exts: &[*const c_char],
    instance_exts: &[*const c_char],
) -> Result<(), String> {
    /* Provide instance and device extensions being used; all associated data
     * (including strings) must be allocated, as it will be freed with av_free later */
    unsafe {
        let inst_exts: *mut *const c_char =
            bindings.av_malloc(std::mem::size_of_val(instance_exts)) as _;
        if inst_exts.is_null() {
            return Err(tag!("failed to allocate instance extensions"));
        }
        let dev_exts: *mut *const c_char =
            bindings.av_malloc(std::mem::size_of_val(device_exts)) as _;
        if dev_exts.is_null() {
            bindings.av_free(inst_exts as _);
            return Err(tag!("failed to allocate device extensions"));
        }

        for (i, e) in instance_exts.iter().enumerate() {
            let len = strlen(*e);
            let v: *mut c_char = bindings.av_malloc(len + 1) as _;
            if v.is_null() {
                for j in 0..i {
                    bindings.av_free(*inst_exts.add(j) as _);
                }
                bindings.av_free(inst_exts as _);
                bindings.av_free(dev_exts as _);
                return Err(tag!("failed to allocate extension name"));
            }
            v.copy_from_nonoverlapping(*e, len + 1);
            (*inst_exts.add(i)) = v as _;
        }
        for (i, e) in device_exts.iter().enumerate() {
            let len = strlen(*e);
            let v: *mut c_char = bindings.av_malloc(len + 1) as _;
            if v.is_null() {
                for j in 0..i {
                    bindings.av_free(*dev_exts.add(j) as _);
                }
                for j in 0..instance_exts.len() {
                    bindings.av_free(*inst_exts.add(j) as _);
                }
                bindings.av_free(inst_exts as _);
                bindings.av_free(dev_exts as _);
                return Err(tag!("failed to allocate extension name"));
            }
            v.copy_from_nonoverlapping(*e, len + 1);
            (*dev_exts.add(i)) = v as _;
        }

        ctx.enabled_inst_extensions = inst_exts;
        ctx.nb_enabled_inst_extensions = instance_exts.len().try_into().unwrap();
        ctx.nb_enabled_dev_extensions = device_exts.len().try_into().unwrap();
        ctx.enabled_dev_extensions = dev_exts;
    }
    Ok(())
}

fn create_compute_pipeline(
    dev: &Device,
    shader: &[u32],
    bindings: &[vk::DescriptorSetLayoutBinding],
    push_len: usize,
) -> Result<VulkanComputePipeline, String> {
    unsafe {
        let shader_create = vk::ShaderModuleCreateInfo::default()
            .flags(vk::ShaderModuleCreateFlags::empty())
            .code(shader);
        let shader_module = dev
            .create_shader_module(&shader_create, None)
            .map_err(|_| "Failed to create shader")?;

        let layout_info = vk::DescriptorSetLayoutCreateInfo::default()
            .flags(vk::DescriptorSetLayoutCreateFlags::empty())
            .bindings(bindings);
        let ds_layout = dev
            .create_descriptor_set_layout(&layout_info, None)
            .map_err(|_| "Failed to create descriptor set layout")?;

        let layouts = &[ds_layout];
        let push_ranges = &[vk::PushConstantRange::default()
            .stage_flags(vk::ShaderStageFlags::COMPUTE)
            .offset(0)
            .size(push_len.try_into().unwrap())];
        let pipeline_layout_info = vk::PipelineLayoutCreateInfo::default()
            .flags(vk::PipelineLayoutCreateFlags::empty())
            .set_layouts(layouts)
            .push_constant_ranges(push_ranges);
        let pipeline_layout = dev
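        // The single push-constant range is push_len bytes, visible to the
        // compute stage only; callers below size it as 4 * 3 * 4 (a packed
        // mat3x4 color matrix) or 4 * 3 * 4 + 3 * 4 (matrix plus three i32
        // plane strides, for the buffer-based YUV paths).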
.create_pipeline_layout(&pipeline_layout_info, None) .map_err(|_| "Failed to create pipeline layout")?; let entrypoint = c"main"; let pipeline_shader_create = vk::PipelineShaderStageCreateInfo::default() .stage(vk::ShaderStageFlags::COMPUTE) .module(shader_module) .name(entrypoint); // no specialization info let pipeline_info = vk::ComputePipelineCreateInfo::default() .flags(vk::PipelineCreateFlags::empty()) .stage(pipeline_shader_create) .layout(pipeline_layout); let pipeline = dev .create_compute_pipelines(vk::PipelineCache::null(), &[pipeline_info], None) .map_err(|_| "Failed to create compute pipeline")? .pop() .unwrap(); Ok(VulkanComputePipeline { shader_module, ds_layout, pipeline_layout, pipeline, }) } } pub unsafe fn setup_video( entry: &Entry, instance: &Instance, physdev: &vk::PhysicalDevice, dev: &Device, pdev_info: &DeviceInfo, debug: bool, qfis: [u32; 4], device_exts: &[*const c_char], instance_exts: &[*const c_char], ) -> Result, String> { /* loading libavcodec transitively loads a matching libavutil */ let lib = match ffmpeg::new(format!("libavcodec.so.{}", LIBAVCODEC_VERSION_MAJOR)) { Ok(x) => x, Err(x) => { error!("Failed to load libavcodec (+ libavutil, etc.): {}. Video encoding/decoding is disabled.", x); return Ok(None); } }; lib.av_log_set_level(if debug { AV_LOG_VERBOSE } else { AV_LOG_WARNING } as _); // av_log_set_level(AV_LOG_TRACE as _); lib.av_log_set_callback(Some(lib.av_log_default_callback)); let hw_video = pdev_info.hw_enc_h264 | pdev_info.hw_dec_h264 | pdev_info.hw_dec_av1; let device_ref = if hw_video { debug!("Setting up video hardware device context"); // Option let device_ref: *mut AVBufferRef = lib.av_hwdevice_ctx_alloc(AVHWDeviceType_AV_HWDEVICE_TYPE_VULKAN); if device_ref.is_null() { return Err(tag!("Failed to allocate vulkan type hwdevice")); } let hw_context = (*device_ref).data.cast::(); let vk_context = (*hw_context).hwctx.cast::(); let ctx: &mut AVVulkanDeviceContext = vk_context.as_mut().unwrap(); // todo: sanity check this ctx.get_proc_addr = Some(core::mem::transmute::< unsafe extern "system" fn( ash::vk::Instance, *const c_char, ) -> std::option::Option, unsafe extern "C" fn( *mut waypipe_ffmpeg_wrapper::VkInstance_T, *const c_char, ) -> std::option::Option, >(entry.static_fn().get_instance_proc_addr)); // u64? 
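        // ash exposes Vulkan handles via Handle::as_raw() -> u64, while ffmpeg's
        // AVVulkanDeviceContext declares them as raw pointer types, hence the
        // `as_raw() as *mut _` casts below; presumably fine on 64-bit targets,
        // where the u64 round-trips through a pointer.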
ctx.inst = instance.handle().as_raw() as *mut _; ctx.phys_dev = physdev.as_raw() as *mut _; ctx.act_dev = dev.handle().as_raw() as *mut _; let mut feats = vk::PhysicalDeviceFeatures2 { ..Default::default() }; instance.get_physical_device_features2(*physdev, &mut feats); ctx.device_features.sType = VkStructureType_VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2; ctx.device_features.pNext = std::ptr::null_mut(); ctx.device_features.features = std::mem::transmute::< ash::vk::PhysicalDeviceFeatures, waypipe_ffmpeg_wrapper::VkPhysicalDeviceFeatures, >(feats.features); set_context_extensions(&lib, ctx, device_exts, instance_exts)?; /* Note: the queue_family_indices are deprecated and will be replaced * by `.qf`/`.nb_qf` */ ctx.queue_family_tx_index = qfis[0].try_into().unwrap(); ctx.queue_family_comp_index = qfis[0].try_into().unwrap(); ctx.queue_family_index = qfis[1].try_into().unwrap(); ctx.queue_family_encode_index = qfis[2].try_into().unwrap(); ctx.queue_family_decode_index = qfis[3].try_into().unwrap(); ctx.nb_graphics_queues = 1; ctx.nb_tx_queues = 1; ctx.nb_comp_queues = 1; ctx.nb_encode_queues = 1; ctx.nb_decode_queues = 1; av_hwdevice_ctx_init(&lib, device_ref)?; // For vulkan, hwconfig currently ignored let hwframes_constraints = lib.av_hwdevice_get_hwframe_constraints(device_ref, std::ptr::null_mut()); // NOTE: these are all formats supported by Vulkan; must constrain with decoding details... let _hw_fmtlist = (*hwframes_constraints).valid_hw_formats; let _sw_fmtlist = (*hwframes_constraints).valid_sw_formats; /* TODO: hwframes_constraints only gives Vulkan limitations; decoder may have other limits (like <= 4096 wide), as seen by trace output */ device_ref } else { std::ptr::null_mut() }; // todo: loading earlier may simplify video availability detection; loading on demand may reduce latency let h264dec = lib.avcodec_find_decoder_by_name("h264\0".as_bytes().as_ptr() as *const _); let codecs_h264 = CodecSet { decoder: h264dec, sw_decoder: h264dec, encoder: lib.avcodec_find_encoder_by_name("h264_vulkan\0".as_bytes().as_ptr() as *const _), sw_encoder: lib.avcodec_find_encoder_by_name("libx264\0".as_bytes().as_ptr() as *const _), }; let codecs_vp9 = CodecSet { decoder: std::ptr::null(), sw_decoder: lib.avcodec_find_decoder_by_name("vp9\0".as_bytes().as_ptr() as *const _), encoder: std::ptr::null(), sw_encoder: lib .avcodec_find_encoder_by_name("libvpx-vp9\0".as_bytes().as_ptr() as *const _), }; let codecs_av1 = CodecSet { decoder: lib.avcodec_find_decoder_by_name("av1\0".as_bytes().as_ptr() as *const _), sw_decoder: lib.avcodec_find_decoder_by_name("libdav1d\0".as_bytes().as_ptr() as *const _), encoder: std::ptr::null(), /* AV1 encoder comparison. As of writing: * - librav1e: may require a minimum frame lookahead, unknown if this was ever fixed * - libsvtav1: as of version 2.3.0, zero latency is attainable with pred-struct=1:rc=2. 
* but: the setup/memory allocation for encoding takes a large fraction of a second, * which is impractical for Waypipe's current one-stream-per-buffer approach * - libaom-av1: works, has a zero lag mode */ sw_encoder: lib .avcodec_find_encoder_by_name("libaom-av1\0".as_bytes().as_ptr() as *const _), }; debug!( "H264 support: hwenc {} swenc {} hwdec {} swdec {}", fmt_bool(!codecs_h264.encoder.is_null() && pdev_info.hw_enc_h264), fmt_bool(!codecs_h264.sw_encoder.is_null()), fmt_bool(!codecs_h264.decoder.is_null() && pdev_info.hw_dec_h264), fmt_bool(!codecs_h264.sw_decoder.is_null()), ); debug!( "VP9 support: hwenc f swenc {} hwdec f swdec {}", fmt_bool(!codecs_vp9.sw_encoder.is_null()), fmt_bool(!codecs_vp9.sw_decoder.is_null()), ); debug!( "AV1 support: hwenc f swenc {} hwdec {} swdec {}", fmt_bool(!codecs_av1.sw_encoder.is_null()), fmt_bool(!codecs_av1.decoder.is_null() && pdev_info.hw_dec_av1), fmt_bool(!codecs_av1.sw_decoder.is_null()), ); let bindings = &[ vk::DescriptorSetLayoutBinding::default() .descriptor_type(vk::DescriptorType::STORAGE_IMAGE) .binding(0) .descriptor_count(1) .stage_flags(vk::ShaderStageFlags::COMPUTE), vk::DescriptorSetLayoutBinding::default() .descriptor_type(vk::DescriptorType::COMBINED_IMAGE_SAMPLER) .binding(1) .descriptor_count(1) .stage_flags(vk::ShaderStageFlags::COMPUTE), vk::DescriptorSetLayoutBinding::default() .descriptor_type(vk::DescriptorType::COMBINED_IMAGE_SAMPLER) .binding(2) .descriptor_count(1) .stage_flags(vk::ShaderStageFlags::COMPUTE), ]; let nv12_img_to_rgb = create_compute_pipeline(dev, NV12_IMG_TO_RGB, bindings, 4 * 3 * 4)?; let bindings = &[ vk::DescriptorSetLayoutBinding::default() .descriptor_type(vk::DescriptorType::COMBINED_IMAGE_SAMPLER) .binding(0) .descriptor_count(1) .stage_flags(vk::ShaderStageFlags::COMPUTE), vk::DescriptorSetLayoutBinding::default() .descriptor_type(vk::DescriptorType::STORAGE_IMAGE) .binding(1) .descriptor_count(1) .stage_flags(vk::ShaderStageFlags::COMPUTE), vk::DescriptorSetLayoutBinding::default() .descriptor_type(vk::DescriptorType::STORAGE_IMAGE) .binding(2) .descriptor_count(1) .stage_flags(vk::ShaderStageFlags::COMPUTE), ]; let rgb_to_nv12_img = create_compute_pipeline(dev, RGB_TO_NV12_IMG, bindings, 4 * 3 * 4)?; let bindings = &[ vk::DescriptorSetLayoutBinding::default() .descriptor_type(vk::DescriptorType::COMBINED_IMAGE_SAMPLER) .binding(0) .descriptor_count(1) .stage_flags(vk::ShaderStageFlags::COMPUTE), vk::DescriptorSetLayoutBinding::default() .descriptor_type(vk::DescriptorType::STORAGE_TEXEL_BUFFER) .binding(1) .descriptor_count(1) .stage_flags(vk::ShaderStageFlags::COMPUTE), vk::DescriptorSetLayoutBinding::default() .descriptor_type(vk::DescriptorType::STORAGE_TEXEL_BUFFER) .binding(2) .descriptor_count(1) .stage_flags(vk::ShaderStageFlags::COMPUTE), vk::DescriptorSetLayoutBinding::default() .descriptor_type(vk::DescriptorType::STORAGE_TEXEL_BUFFER) .binding(3) .descriptor_count(1) .stage_flags(vk::ShaderStageFlags::COMPUTE), ]; let rgb_to_yuv420_buf = create_compute_pipeline(dev, RGB_TO_YUV420_BUF, bindings, 4 * 3 * 4 + 3 * 4)?; let bindings = &[ vk::DescriptorSetLayoutBinding::default() .descriptor_type(vk::DescriptorType::STORAGE_IMAGE) .binding(0) .descriptor_count(1) .stage_flags(vk::ShaderStageFlags::COMPUTE), vk::DescriptorSetLayoutBinding::default() .descriptor_type(vk::DescriptorType::UNIFORM_TEXEL_BUFFER) .binding(1) .descriptor_count(1) .stage_flags(vk::ShaderStageFlags::COMPUTE), vk::DescriptorSetLayoutBinding::default() .descriptor_type(vk::DescriptorType::UNIFORM_TEXEL_BUFFER) 
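            // Descriptor interfaces set up here, for reference:
            //   nv12_img_to_rgb:   storage image (out) + 2 combined image samplers (in)
            //   rgb_to_nv12_img:   1 sampler (in) + 2 storage images (out)
            //   rgb_to_yuv420_buf: 1 sampler (in) + 3 storage texel buffers (out)
            //   yuv420_buf_to_rgb: storage image (out) + 3 uniform texel buffers (in)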
.binding(2) .descriptor_count(1) .stage_flags(vk::ShaderStageFlags::COMPUTE), vk::DescriptorSetLayoutBinding::default() .descriptor_type(vk::DescriptorType::UNIFORM_TEXEL_BUFFER) .binding(3) .descriptor_count(1) .stage_flags(vk::ShaderStageFlags::COMPUTE), ]; let yuv420_buf_to_rgb = create_compute_pipeline(dev, YUV420_BUF_TO_RGB, bindings, 4 * 3 * 4 + 3 * 4)?; let rect_lin_sampler = vk::SamplerCreateInfo::default() .flags(vk::SamplerCreateFlags::empty()) .mag_filter(vk::Filter::LINEAR) .min_filter(vk::Filter::LINEAR) .mipmap_mode(vk::SamplerMipmapMode::NEAREST) .address_mode_u(vk::SamplerAddressMode::CLAMP_TO_EDGE) .address_mode_v(vk::SamplerAddressMode::CLAMP_TO_EDGE) .address_mode_w(vk::SamplerAddressMode::REPEAT) .mip_lod_bias(0.) .anisotropy_enable(false) .max_anisotropy(0.) .compare_enable(false) .min_lod(0.) .max_lod(0.) .border_color(vk::BorderColor::default()) .unnormalized_coordinates(true); let sampler_y_info = rect_lin_sampler; let sampler_rb_info = rect_lin_sampler; let yuv_to_rgb_sampler_y = dev .create_sampler(&sampler_y_info, None) .map_err(|_| "Failed to allocate sampler Y")?; let yuv_to_rgb_sampler_rb = dev .create_sampler(&sampler_rb_info, None) .map_err(|_| "Failed to allocate sampler CrCb")?; let sampler_rgb_info = rect_lin_sampler; let rgb_to_yuv_sampler_rgb = dev .create_sampler(&sampler_rgb_info, None) .map_err(|_| "Failed to allocate sampler RGB")?; Ok(Some(VulkanVideo { bindings: lib, av_hwdevice: device_ref, codecs_h264, codecs_vp9, codecs_av1, can_hw_enc_h264: pdev_info.hw_enc_h264, can_hw_dec_h264: pdev_info.hw_dec_h264, can_hw_dec_av1: pdev_info.hw_dec_av1, rgb_to_yuv420_buf, yuv420_buf_to_rgb, nv12_img_to_rgb, rgb_to_nv12_img, yuv_to_rgb_sampler_y, yuv_to_rgb_sampler_rb, rgb_to_yuv_sampler_rgb, })) } /** Lock the first queue in the given family, if a hardware context was set up */ pub unsafe fn video_lock_queue(video: &VulkanVideo, queue_family: u32) { if video.av_hwdevice.is_null() { return; } let hw_context = (*video.av_hwdevice).data.cast::(); let vk_context = (*hw_context).hwctx.cast::(); (*vk_context).lock_queue.unwrap()(hw_context, queue_family, 0); } /** Unlock the first queue in the given family, if a hardware context was set up */ pub unsafe fn video_unlock_queue(video: &VulkanVideo, queue_family: u32) { if video.av_hwdevice.is_null() { return; } let hw_context = (*video.av_hwdevice).data.cast::(); let vk_context = (*hw_context).hwctx.cast::(); (*vk_context).unlock_queue.unwrap()(hw_context, queue_family, 0); } /* Pick format: Vulkan, and setup hw frames context */ #[cfg(feature = "video")] unsafe extern "C" fn pick_video_format_hw(ctx: *mut AVCodecContext, fmts: *const i32) -> i32 { /* Return AV_PIX_FMT_VULKAN if present in list */ for i in 0.. 
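    // libavcodec's AVCodecContext.get_format contract: `fmts` is an array
    // terminated by AV_PIX_FMT_NONE, and the callback returns the chosen entry
    // (or AV_PIX_FMT_NONE to fail). For the hardware path this callback is also
    // the point where the hw_frames_ctx must be created, as done below.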
{ let f = fmts.add(i).read(); if f == AVPixelFormat_AV_PIX_FMT_NONE { /* Failure */ error!("Did not find AV_PIX_FMT_VULKAN in format list"); return AVPixelFormat_AV_PIX_FMT_NONE; } if f == AVPixelFormat_AV_PIX_FMT_VULKAN { break; } } let bindings: &ffmpeg = ((*ctx).opaque as *const ffmpeg).as_ref().unwrap(); { let out_frames_ref = &mut (*ctx).hw_frames_ctx; let ret = bindings.avcodec_get_hw_frames_parameters( ctx, (*ctx).hw_device_ctx, AVPixelFormat_AV_PIX_FMT_VULKAN, out_frames_ref, ); if ret != 0 { error!("Failed to get hw frame parameters: {}", ret); return AVPixelFormat_AV_PIX_FMT_NONE; } } if let Err(e) = av_hwframe_ctx_init(bindings, (*ctx).hw_frames_ctx) { error!("Failed to initialize hw frames: {}", e); return AVPixelFormat_AV_PIX_FMT_NONE; } AVPixelFormat_AV_PIX_FMT_VULKAN } unsafe fn free_staging_images(vulk: &VulkanDevice, data: &VideoHWStagingImages) { for i in 0..2 { vulk.dev.destroy_image_view(data.plane_image_views[i], None); vulk.dev.destroy_image(data.plane_images[i], None); vulk.dev.free_memory(data.plane_memories[i], None); } } fn create_staging_images( vulk: &VulkanDevice, width: u32, height: u32, for_encode: bool, fmt: vk::Format, ) -> Result { assert!(fmt == vk::Format::G8_B8R8_2PLANE_420_UNORM); assert!(width % 2 == 0 && height % 2 == 0); let planes = &[ (vk::Format::R8_UNORM, width, height), (vk::Format::R8G8_UNORM, width / 2, height / 2), ]; let mut plane_images = [vk::Image::null(); 2]; let mut plane_memories = [vk::DeviceMemory::null(); 2]; let mut plane_image_views = [vk::ImageView::null(); 2]; unsafe { for plane in 0..=1 { let image_info = vk::ImageCreateInfo::default() .flags(vk::ImageCreateFlags::empty()) .image_type(vk::ImageType::TYPE_2D) .format(planes[plane].0) .extent(vk::Extent3D { width: planes[plane].1, height: planes[plane].2, depth: 1, }) .mip_levels(1) .array_layers(1) .samples(vk::SampleCountFlags::TYPE_1) .tiling(vk::ImageTiling::OPTIMAL) .usage(if for_encode { vk::ImageUsageFlags::STORAGE | vk::ImageUsageFlags::TRANSFER_SRC } else { vk::ImageUsageFlags::TRANSFER_DST | vk::ImageUsageFlags::SAMPLED }) .sharing_mode(vk::SharingMode::EXCLUSIVE) .initial_layout(vk::ImageLayout::UNDEFINED); let image = vulk .dev .create_image(&image_info, None) .map_err(|x| tag!("Failed to create Vulkan image: {:?}", x))?; let mem_reqs = vulk.dev.get_image_memory_requirements(image); assert!(mem_reqs.memory_type_bits != 0); let mem_index = mem_reqs.memory_type_bits.trailing_zeros(); let alloc_info = vk::MemoryAllocateInfo::default() .allocation_size(mem_reqs.size) .memory_type_index(mem_index); let memory = vulk .dev .allocate_memory(&alloc_info, None) .map_err(|x| tag!("Failed to allocate image memory: {:?}", x))?; vulk.dev .bind_image_memory(image, memory, 0) .map_err(|x| tag!("Failed to bind image memory: {:?}", x))?; let image_view_info = vk::ImageViewCreateInfo::default() .flags(vk::ImageViewCreateFlags::empty()) .image(image) .view_type(vk::ImageViewType::TYPE_2D) .format(planes[plane].0) .components(vk::ComponentMapping::default().r(vk::ComponentSwizzle::IDENTITY)) .subresource_range( vk::ImageSubresourceRange::default() .aspect_mask(vk::ImageAspectFlags::COLOR) .base_mip_level(0) .level_count(1) .base_array_layer(0) .layer_count(1), ); let image_view = vulk .dev .create_image_view(&image_view_info, None) .map_err(|_| "Failed to create plane 1 image view")?; plane_images[plane] = image; plane_memories[plane] = memory; plane_image_views[plane] = image_view; } Ok(VideoHWStagingImages { plane_memories, plane_images, plane_image_views, }) } } /** Create a 
vkImageSubresourceLayers for an entire image (single layer, no mipmaps) */ fn subresource_layer(aspect_mask: vk::ImageAspectFlags) -> vk::ImageSubresourceLayers { vk::ImageSubresourceLayers { aspect_mask, mip_level: 0, base_array_layer: 0, layer_count: 1, } } /** Image memory barrier for a same-queue layout transition. * * The access range is COLOR for the single level/layer of the entire image. */ fn image_layout_transition( image: vk::Image, old_layout: vk::ImageLayout, new_layout: vk::ImageLayout, src_access_mask: vk::AccessFlags, dst_access_mask: vk::AccessFlags, ) -> vk::ImageMemoryBarrier<'static> { let standard_access_range = vk::ImageSubresourceRange::default() .aspect_mask(vk::ImageAspectFlags::COLOR) .level_count(1) .layer_count(1); vk::ImageMemoryBarrier::default() .image(image) .old_layout(old_layout) .new_layout(new_layout) .src_queue_family_index(vk::QUEUE_FAMILY_IGNORED) .dst_queue_family_index(vk::QUEUE_FAMILY_IGNORED) .src_access_mask(src_access_mask) .dst_access_mask(dst_access_mask) .subresource_range(standard_access_range) } fn create_dmabuf_view(img: &VulkanDmabuf) -> Result { let idswizzle = vk::ComponentMapping::default() .r(vk::ComponentSwizzle::IDENTITY) .g(vk::ComponentSwizzle::IDENTITY) .b(vk::ComponentSwizzle::IDENTITY) .a(vk::ComponentSwizzle::IDENTITY); let output_image_view_info = vk::ImageViewCreateInfo::default() .flags(vk::ImageViewCreateFlags::empty()) .image(img.image) .view_type(vk::ImageViewType::TYPE_2D) .format(img.vk_format) .components(idswizzle) .subresource_range( vk::ImageSubresourceRange::default() .aspect_mask(vk::ImageAspectFlags::COLOR) .base_mip_level(0) .level_count(1) .base_array_layer(0) .layer_count(1), ); unsafe { let output_image_view = img .vulk .dev .create_image_view(&output_image_view_info, None) .map_err(|x| tag!("Failed to create image view for dmabuf image: {}", x))?; Ok(output_image_view) } } pub fn setup_video_decode_hw( img: &Arc, fmt: VideoFormat, ) -> Result { let video = img.vulk.video.as_ref().unwrap(); let decoder: *const AVCodec = match fmt { VideoFormat::H264 => video.codecs_h264.decoder, VideoFormat::VP9 => video.codecs_vp9.decoder, VideoFormat::AV1 => video.codecs_av1.decoder, }; assert!(!decoder.is_null()); unsafe { let ctx: *mut AVCodecContext = video.bindings.avcodec_alloc_context3(decoder); if ctx.is_null() { return Err(tag!("Failed to allocate context")); } let (awidth, aheight) = align_size(img.width, img.height, fmt); { let cr = ctx.as_mut().unwrap(); let nref = video.bindings.av_buffer_ref(video.av_hwdevice); if nref.is_null() { return Err(tag!("Failed to add reference for av_hwdevice")); } cr.hw_device_ctx = nref; // todo: need to ensure video bindings are not moved; do Arc ? cr.opaque = &video.bindings as *const ffmpeg as *mut _; cr.get_format = Some(pick_video_format_hw); (cr.width, cr.height) = (awidth, aheight); } /* ctx->get_format will be called to do setup work once a packet is received */ avcodec_open(&video.bindings, ctx, decoder, std::ptr::null_mut())?; let output_image_view = create_dmabuf_view(img)?; let staging_images = create_staging_images( &img.vulk, awidth.try_into().unwrap(), aheight.try_into().unwrap(), false, vk::Format::G8_B8R8_2PLANE_420_UNORM, )?; Ok(VideoDecodeState { target: img.clone(), inner: Mutex::new(VideoDecodeInner { ctx }), output_image_view, data: VideoDecodeStateData::HW(staging_images), }) } } /* Pick format: NV12 */ unsafe extern "C" fn pick_video_format_sw(_ctx: *mut AVCodecContext, fmts: *const i32) -> i32 { for i in 0.. 
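    // Same get_format contract as above, but picking the software YUV420P format.
    // Overall decode flow (sketch): setup_video_decode() builds per-buffer state
    // once, then each received packet is applied to it:
    //
    //     let state = Arc::new(setup_video_decode(&img, VideoFormat::H264)?);
    //     let op = start_dmavid_apply(&state, &pool, &packet)?;
    //     // keep `op` alive until vulk.semaphore reaches op.get_timeline_point()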
{ let f = fmts.add(i).read(); if f == AVPixelFormat_AV_PIX_FMT_NONE { /* Failure */ error!("Did not find AVPixelFormat_AV_PIX_FMT_YUV420P in list"); return AVPixelFormat_AV_PIX_FMT_NONE; } if f == AVPixelFormat_AV_PIX_FMT_YUV420P { break; } } AVPixelFormat_AV_PIX_FMT_YUV420P } pub fn setup_video_decode_sw( img: &Arc, fmt: VideoFormat, ) -> Result { let video = img.vulk.video.as_ref().unwrap(); let decoder: *const AVCodec = match fmt { VideoFormat::H264 => video.codecs_h264.sw_decoder, VideoFormat::VP9 => video.codecs_vp9.sw_decoder, VideoFormat::AV1 => video.codecs_av1.sw_decoder, }; unsafe { let ctx: *mut AVCodecContext = video.bindings.avcodec_alloc_context3(decoder); if ctx.is_null() { return Err(tag!("Failed to allocate context")); } { let cr = ctx.as_mut().unwrap(); // todo: need to ensure video bindings are not moved; do Arc ? cr.opaque = &video.bindings as *const ffmpeg as *mut _; cr.get_format = Some(pick_video_format_sw); (cr.width, cr.height) = align_size(img.width, img.height, fmt); } /* ctx->get_format will be called to do setup work once a packet is received */ avcodec_open(&video.bindings, ctx, decoder, std::ptr::null_mut())?; let output_image_view = create_dmabuf_view(img)?; Ok(VideoDecodeState { target: img.clone(), inner: Mutex::new(VideoDecodeInner { ctx }), output_image_view, data: VideoDecodeStateData::SW, }) } } /** libavcodec does not currently appear to have a way to expose the range of sizes * for which hardware encoding/decoding is supported. (Which may be graphics driver * dependent.) Technically one could try avcodec_open() and look for an EINVAL, but * doing that could hide bugs in Waypipe.) Hardcode a relatively safe threshold * for now, until a better solution is found. */ const MIN_H264_HW_SIZE: (u32, u32) = (128, 128); pub fn setup_video_decode( img: &Arc, fmt: VideoFormat, ) -> Result { assert!(img.can_store_and_sample); let video = img.vulk.video.as_ref().unwrap(); let can_hw_decode = match fmt { VideoFormat::AV1 => video.can_hw_dec_av1 && !video.codecs_av1.decoder.is_null(), VideoFormat::VP9 => false, VideoFormat::H264 => { video.can_hw_dec_h264 && !video.codecs_h264.decoder.is_null() && (img.width >= MIN_H264_HW_SIZE.0 && img.height >= MIN_H264_HW_SIZE.1) } }; if can_hw_decode { setup_video_decode_hw(img, fmt) } else { setup_video_decode_sw(img, fmt) } } pub fn supports_video_format( vulk: &VulkanDevice, fmt: VideoFormat, drm_format: u32, _width: u32, _height: u32, ) -> bool { let Some(ref vid) = vulk.video else { return false; }; let Ok(wlfmt) = TryInto::::try_into(drm_to_wayland(drm_format)) else { return false; }; match wlfmt { WlShmFormat::Xrgb8888 => (), WlShmFormat::Xbgr8888 => (), _ => { return false; } }; // TODO: lookup max size available for format match fmt { VideoFormat::H264 => { !vid.codecs_h264.sw_decoder.is_null() && !vid.codecs_h264.sw_encoder.is_null() } VideoFormat::VP9 => { !vid.codecs_vp9.sw_decoder.is_null() && !vid.codecs_vp9.sw_encoder.is_null() } VideoFormat::AV1 => { !vid.codecs_av1.sw_decoder.is_null() && !vid.codecs_av1.sw_encoder.is_null() } } } /* YUV to RGB conversion matrices. For compatibility with original Waypipe, * broadcast-limited output ranges (16-235 & 16-240, not 0-255) are used, and * Rec. 601 (where Y = 0.299 R + 0.587 G + 0.114 B, instead of * Rec. 
709's Y = 0.2126 R + 0.7152 G + 0.0722 B).
 *
 * The magic constants follow from scaling to the limited ranges: the Y row is
 * (219/255) * (0.299, 0.587, 0.114) with offset 16/255 = 0.0627451, and the two
 * chroma rows are the standard Cr/Cb rows scaled by 224/255 (so 0.5 * 224/255 =
 * 0.4392157) with offset 128/255 = 0.5019608. The component order of these
 * matrices is (Y, Cr, Cb). */
const RGB_TO_YUV: &[[f32; 4]; 3] = &[
    /* Limited range */
    [0.25678822, 0.5041294, 0.09790588, 0.0627451],  // Y
    [0.4392157, -0.3677883, -0.07142738, 0.5019608], // V (Cr)
    [-0.1482229, -0.2909928, 0.4392157, 0.5019608],  // U (Cb)
];
const YUV_TO_RGB: &[[f32; 4]; 3] = &[
    [1.1643835, 1.5960268, 0., -0.8742022],
    [1.1643835, -0.81296766, -0.3917623, 0.5316678],
    [1.1643835, 0., 2.0172322, -1.0856308],
];

pub fn start_dmavid_decode_hw(
    state: &Arc<VideoDecodeState>,
    pool: &Arc<VulkanCommandPool>,
    packet: &[u8],
) -> Result<VulkanDecodeOpHandle, String> {
    let vulk: &VulkanDevice = &state.target.vulk;
    let video = vulk.video.as_ref().unwrap();
    debug!(
        "Hardware decoding frame for {}x{} image, packet len {}",
        state.target.width,
        state.target.height,
        packet.len()
    );
    let VideoDecodeStateData::HW(ref state_data) = state.data else {
        unreachable!()
    };

    unsafe {
        let av_packet = video.bindings.av_packet_alloc();
        video
            .bindings
            .av_new_packet(av_packet, packet.len().try_into().unwrap());
        (*av_packet).data.copy_from(packet.as_ptr(), packet.len());

        let dec_inner = state.inner.lock().unwrap();
        avcodec_send_packet(&video.bindings, dec_inner.ctx, av_packet)?;
        let frame: *mut AVFrame = video.bindings.av_frame_alloc();
        // ignoring EAGAIN, since Waypipe's video streaming does one packet, one frame
        avcodec_receive_frame(&video.bindings, dec_inner.ctx, frame)?;

        let (frame_width, frame_height): (u32, u32) = (
            (*frame).width.try_into().unwrap(),
            (*frame).height.try_into().unwrap(),
        );
        assert!(frame_width >= state.target.width && frame_height >= state.target.height);

        let hw_fr_ref = (*frame).hw_frames_ctx.as_ref().unwrap();
        let hwfc_ref = hw_fr_ref.data.cast::<AVHWFramesContext>().as_mut().unwrap();
        let avvulk = hwfc_ref
            .hwctx
            .cast::<AVVulkanFramesContext>()
            .as_mut()
            .unwrap();
        let vkframe = ((*frame).data[0]).cast::<AVVkFrame>().as_mut().unwrap();

        /* Lock frame while recording command buffer */
        avvulk.lock_frame.as_ref().unwrap()(hwfc_ref, vkframe);

        /* Assert single image output */
        assert!(vkframe.img[1..]
.iter() .all(|x| vk::Image::from_raw(*x as _).is_null())); assert!(avvulk.format[0] == (vk::Format::G8_B8R8_2PLANE_420_UNORM.as_raw() as u32)); let wait_sems = &[vk::Semaphore::from_raw(vkframe.sem[0] as _)]; let wait_values = &[vkframe.sem_value[0]]; let init_layout = vkframe.layout[0]; let src_img = vk::Image::from_raw(vkframe.img[0] as _); let sizes = &[ vk::DescriptorPoolSize::default() .descriptor_count(1) .ty(vk::DescriptorType::STORAGE_IMAGE), vk::DescriptorPoolSize::default() .descriptor_count(2) .ty(vk::DescriptorType::COMBINED_IMAGE_SAMPLER), ]; // at most 1 descriptor set let pool_storage_info = vk::DescriptorPoolCreateInfo::default() .flags(vk::DescriptorPoolCreateFlags::FREE_DESCRIPTOR_SET) .max_sets(1) .pool_sizes(sizes); let desc_pool = vulk .dev .create_descriptor_pool(&pool_storage_info, None) .map_err(|_| "Failed to create descriptor pool")?; let layouts = &[video.nv12_img_to_rgb.ds_layout]; let desc_set_alloc_info = vk::DescriptorSetAllocateInfo::default() .descriptor_pool(desc_pool) .set_layouts(layouts); let descs = vulk .dev .allocate_descriptor_sets(&desc_set_alloc_info) .map_err(|_| "Failed to allocate descriptor sets")?; let descriptor_set = descs[0]; let output_image_info = &[vk::DescriptorImageInfo::default() .image_view(state.output_image_view) .image_layout(vk::ImageLayout::GENERAL)]; let input_1_image_info = &[vk::DescriptorImageInfo::default() .image_view(state_data.plane_image_views[0]) .image_layout(vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL) .sampler(video.yuv_to_rgb_sampler_y)]; let input_2_image_info = &[vk::DescriptorImageInfo::default() .image_view(state_data.plane_image_views[1]) .image_layout(vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL) .sampler(video.yuv_to_rgb_sampler_rb)]; let descriptor_writes = &[ vk::WriteDescriptorSet::default() .dst_set(descriptor_set) .dst_binding(0) .descriptor_count(1) .descriptor_type(vk::DescriptorType::STORAGE_IMAGE) .image_info(output_image_info), vk::WriteDescriptorSet::default() .dst_set(descriptor_set) .dst_binding(1) .descriptor_count(1) .descriptor_type(vk::DescriptorType::COMBINED_IMAGE_SAMPLER) .image_info(input_1_image_info), vk::WriteDescriptorSet::default() .dst_set(descriptor_set) .dst_binding(2) .descriptor_count(1) .descriptor_type(vk::DescriptorType::COMBINED_IMAGE_SAMPLER) .image_info(input_2_image_info), ]; vulk.dev.update_descriptor_sets(descriptor_writes, &[]); let inner_pool = pool.pool.lock().unwrap(); let alloc_cb_info = vk::CommandBufferAllocateInfo::default() .command_pool(*inner_pool) .command_buffer_count(1) .level(vk::CommandBufferLevel::PRIMARY); drop(inner_pool); let cbvec = vulk .dev .allocate_command_buffers(&alloc_cb_info) .map_err(|_| "Failed to allocate command buffers")?; let cb = cbvec[0]; let cb_info = vk::CommandBufferBeginInfo::default().flags(vk::CommandBufferUsageFlags::empty()); vulk.dev .begin_command_buffer(cb, &cb_info) .map_err(|_| "Failed to begin command buffer")?; let target_layout = vk::ImageLayout::GENERAL; let src_layout = vk::ImageLayout::TRANSFER_SRC_OPTIMAL; let standard_access_range = vk::ImageSubresourceRange::default() .aspect_mask(vk::ImageAspectFlags::COLOR) .level_count(1) .layer_count(1); let mut img_inner = state.target.inner.lock().unwrap(); let pre_transfer_barriers = &[vk::ImageMemoryBarrier::default() .image(src_img) .old_layout(vk::ImageLayout::from_raw(init_layout as _)) .new_layout(src_layout) .src_queue_family_index(vk::QUEUE_FAMILY_IGNORED) .dst_queue_family_index(vk::QUEUE_FAMILY_IGNORED) .src_access_mask(vk::AccessFlags::MEMORY_WRITE) 
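            // The freshly decoded NV12 image is only read via TRANSFER here: its
            // PLANE_0/PLANE_1 aspects are copied into the R8/R8G8 staging images
            // (which do have SAMPLED usage) before the compute conversion runs.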
.dst_access_mask(vk::AccessFlags::TRANSFER_READ) .subresource_range(standard_access_range)]; vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::TOP_OF_PIPE, vk::PipelineStageFlags::TRANSFER, vk::DependencyFlags::empty(), &[], &[], pre_transfer_barriers, ); /* Transition the staging images from undefined, discarding their previous * contents; the images will be entirely filled by the following copy operations. */ let pre_transfer_barriers2 = &[ image_layout_transition( state_data.plane_images[0], vk::ImageLayout::UNDEFINED, vk::ImageLayout::TRANSFER_DST_OPTIMAL, vk::AccessFlags::SHADER_READ, vk::AccessFlags::TRANSFER_WRITE, ), image_layout_transition( state_data.plane_images[1], vk::ImageLayout::UNDEFINED, vk::ImageLayout::TRANSFER_DST_OPTIMAL, vk::AccessFlags::SHADER_READ, vk::AccessFlags::TRANSFER_WRITE, ), ]; vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::COMPUTE_SHADER, vk::PipelineStageFlags::TRANSFER, vk::DependencyFlags::empty(), &[], &[], pre_transfer_barriers2, ); let copy_plane_1 = &[vk::ImageCopy { src_subresource: subresource_layer(vk::ImageAspectFlags::PLANE_0), src_offset: vk::Offset3D::default(), dst_subresource: subresource_layer(vk::ImageAspectFlags::COLOR), dst_offset: vk::Offset3D::default(), extent: vk::Extent3D { width: frame_width, height: frame_height, depth: 1, }, }]; let copy_plane_2 = &[vk::ImageCopy { src_subresource: subresource_layer(vk::ImageAspectFlags::PLANE_1), src_offset: vk::Offset3D::default(), dst_subresource: subresource_layer(vk::ImageAspectFlags::COLOR), dst_offset: vk::Offset3D::default(), extent: vk::Extent3D { width: frame_width / 2, height: frame_height / 2, depth: 1, }, }]; vulk.dev.cmd_copy_image( cb, src_img, src_layout, state_data.plane_images[0], vk::ImageLayout::TRANSFER_DST_OPTIMAL, copy_plane_1, ); vulk.dev.cmd_copy_image( cb, src_img, src_layout, state_data.plane_images[1], vk::ImageLayout::TRANSFER_DST_OPTIMAL, copy_plane_2, ); let output_image_barrier = &[qfot_acquire_image_memory_barrier( state.target.image, img_inner.image_layout, target_layout, vulk.queue_family, vk::AccessFlags::SHADER_WRITE, )]; vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::TOP_OF_PIPE, vk::PipelineStageFlags::COMPUTE_SHADER, vk::DependencyFlags::empty(), &[], &[], output_image_barrier, ); let staging_barrier = &[ image_layout_transition( state_data.plane_images[0], vk::ImageLayout::TRANSFER_DST_OPTIMAL, vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL, vk::AccessFlags::TRANSFER_WRITE, vk::AccessFlags::SHADER_READ, ), image_layout_transition( state_data.plane_images[1], vk::ImageLayout::TRANSFER_DST_OPTIMAL, vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL, vk::AccessFlags::TRANSFER_WRITE, vk::AccessFlags::SHADER_READ, ), ]; vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::TRANSFER, vk::PipelineStageFlags::COMPUTE_SHADER, vk::DependencyFlags::empty(), &[], &[], staging_barrier, ); vulk.dev.cmd_bind_pipeline( cb, vk::PipelineBindPoint::COMPUTE, video.nv12_img_to_rgb.pipeline, ); let push_u8 = pack_glsl_mat3x4(YUV_TO_RGB); vulk.dev.cmd_push_constants( cb, video.nv12_img_to_rgb.pipeline_layout, vk::ShaderStageFlags::COMPUTE, 0, &push_u8, ); let bind_descs = &[descriptor_set]; vulk.dev.cmd_bind_descriptor_sets( cb, vk::PipelineBindPoint::COMPUTE, video.nv12_img_to_rgb.pipeline_layout, 0, bind_descs, &[], ); let xgroups = state.target.width.div_ceil(8); let ygroups = state.target.height.div_ceil(8); vulk.dev.cmd_dispatch(cb, xgroups, ygroups, 1); // Only for main image; other barriers are let exit_barriers = 
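            // Pairs with the qfot_acquire_* barrier above: release the target
            // dmabuf image back to the external queue family in GENERAL layout,
            // so other queues/processes can use it once this submit completes.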
&[qfot_release_image_memory_barrier( state.target.image, target_layout, vk::ImageLayout::GENERAL, vulk.queue_family, vk::AccessFlags::SHADER_WRITE, )]; vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::COMPUTE_SHADER, vk::PipelineStageFlags::BOTTOM_OF_PIPE, vk::DependencyFlags::empty(), &[], &[], exit_barriers, ); img_inner.image_layout = vk::ImageLayout::GENERAL; vkframe.layout[0] = src_layout.as_raw() as _; vulk.dev .end_command_buffer(cb) .map_err(|_| "Failed to end command buffer")?; /* Wait for _everything_ to complete -- do not know if graphics/compute/decode is last */ // vkframe.access not used? let waitv_stage_flags = &[vk::PipelineStageFlags::ALL_COMMANDS]; let cbs = &[cb]; let mut queue = vulkan_lock_queue(vulk); queue.inner.last_semaphore_value += 1; let completion_time_point = queue.inner.last_semaphore_value; vkframe.sem_value[0] += 1; /* Signal vkframe's semaphore to indicate when this operation is done, * and main semaphore to notify main loop. */ let signal_values = &[completion_time_point, vkframe.sem_value[0]]; let signal_semaphores = &[vulk.semaphore, wait_sems[0]]; let mut wait_timeline_info = vk::TimelineSemaphoreSubmitInfo::default() .wait_semaphore_values(wait_values) .signal_semaphore_values(signal_values); let submits = &[vk::SubmitInfo::default() .command_buffers(cbs) .wait_semaphores(wait_sems) .wait_dst_stage_mask(waitv_stage_flags) .signal_semaphores(signal_semaphores) .push_next(&mut wait_timeline_info)]; vulk.dev .queue_submit(queue.inner.queue, submits, vk::Fence::null()) .map_err(|_| "Queue submit failed")?; // <- can fail with OOM drop(queue); /* Unlock frame, now that command is submitted. (Note: unlocking before * submission could risk timeline semaphore value updates and monotonicity * violations */ avvulk.unlock_frame.as_ref().unwrap()(hwfc_ref, vkframe); let mut av_packet_ref = av_packet; video .bindings .av_packet_free(std::ptr::from_mut(&mut av_packet_ref)); // av_hwframe_transfer_data: does not work, width/height do not match // (and dmabuf sharing does not have a reliable way to have "display" dimensions <= "allocated" dimensions) let mut frame_ref: *mut AVFrame = frame; video.bindings.av_frame_free(&mut frame_ref); Ok(VulkanDecodeOpHandle { decode: state.clone(), pool: pool.clone(), data: VulkanDecodeOpData::Hardware(VulkanHWDecodeData {}), desc_pool, descriptor_set, cb, completion_time_point, }) } } pub fn start_dmavid_decode_sw( state: &Arc, pool: &Arc, packet: &[u8], ) -> Result { let vulk: &VulkanDevice = &state.target.vulk; let video = vulk.video.as_ref().unwrap(); debug!( "Software decoding frame for {}x{} image, packet len {}", state.target.width, state.target.height, packet.len() ); unsafe { let av_packet = video.bindings.av_packet_alloc(); video .bindings .av_new_packet(av_packet, packet.len().try_into().unwrap()); (*av_packet).data.copy_from(packet.as_ptr(), packet.len()); let dec_inner = state.inner.lock().unwrap(); avcodec_send_packet(&video.bindings, dec_inner.ctx, av_packet)?; let frame: *mut AVFrame = video.bindings.av_frame_alloc(); avcodec_receive_frame(&video.bindings, dec_inner.ctx, frame)?; let ext_w = (*dec_inner.ctx).width as usize; let ext_h = (*dec_inner.ctx).height as usize; assert!(ext_w % 2 == 0 && ext_h % 2 == 0); let ystride = (*frame).linesize[0] as usize; let ustride = (*frame).linesize[1] as usize; let vstride = (*frame).linesize[2] as usize; let buf_y = vulkan_get_buffer(&state.target.vulk, ystride * ext_h, true)?; let buf_u = vulkan_get_buffer(&state.target.vulk, ustride * (ext_h / 2), true)?; let buf_v 
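        // One host-visible staging buffer per YUV plane, each sized with the
        // decoder's linesize so rows keep whatever stride libavcodec chose; the
        // strides are later passed to the compute shader as push constants.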
= vulkan_get_buffer(&state.target.vulk, vstride * (ext_h / 2), true)?; let view_y = buf_y.get_write_view(); let view_u = buf_u.get_write_view(); let view_v = buf_v.get_write_view(); { // TODO: avoid this copy by implementing AVCodecContext.get_buffer2 let ydata: &[u8] = &*slice_from_raw_parts((*frame).data[0], ystride * ext_h); let udata: &[u8] = &*slice_from_raw_parts((*frame).data[1], ustride * (ext_h / 2)); let vdata: &[u8] = &*slice_from_raw_parts((*frame).data[2], vstride * (ext_h / 2)); view_y.data.copy_from_slice(ydata); view_u.data.copy_from_slice(udata); view_v.data.copy_from_slice(vdata); } drop(view_y); drop(view_u); drop(view_v); buf_y.complete_write()?; buf_u.complete_write()?; buf_v.complete_write()?; let sizes = &[ vk::DescriptorPoolSize::default() .descriptor_count(1) .ty(vk::DescriptorType::STORAGE_IMAGE), vk::DescriptorPoolSize::default() .descriptor_count(3) .ty(vk::DescriptorType::UNIFORM_TEXEL_BUFFER), ]; // at most 1 descriptor set let pool_storage_info = vk::DescriptorPoolCreateInfo::default() .flags(vk::DescriptorPoolCreateFlags::FREE_DESCRIPTOR_SET) .max_sets(1) .pool_sizes(sizes); let desc_pool = vulk .dev .create_descriptor_pool(&pool_storage_info, None) .map_err(|_| "Failed to create descriptor pool")?; let layouts = &[video.yuv420_buf_to_rgb.ds_layout]; let desc_set_alloc_info = vk::DescriptorSetAllocateInfo::default() .descriptor_pool(desc_pool) .set_layouts(layouts); let descs = vulk .dev .allocate_descriptor_sets(&desc_set_alloc_info) .map_err(|_| "Failed to allocate descriptor sets")?; let descriptor_set = descs[0]; let output_image_info = &[vk::DescriptorImageInfo::default() .image_view(state.output_image_view) .image_layout(vk::ImageLayout::GENERAL)]; let buf_y_image_view_info = vk::BufferViewCreateInfo::default() .flags(vk::BufferViewCreateFlags::empty()) .buffer(buf_y.buffer) .format(vk::Format::R8_UNORM) .offset(0) .range(vk::WHOLE_SIZE); // todo: with buffer pooling, precise size will need specifying let buf_u_image_view_info = vk::BufferViewCreateInfo::default() .flags(vk::BufferViewCreateFlags::empty()) .buffer(buf_u.buffer) .format(vk::Format::R8_UNORM) .offset(0) .range(vk::WHOLE_SIZE); let buf_v_image_view_info = vk::BufferViewCreateInfo::default() .flags(vk::BufferViewCreateFlags::empty()) .buffer(buf_v.buffer) .format(vk::Format::R8_UNORM) .offset(0) .range(vk::WHOLE_SIZE); let buf_y_view = vulk .dev .create_buffer_view(&buf_y_image_view_info, None) .map_err(|_| tag!("Failed to create y buffer image view"))?; let buf_u_view = vulk .dev .create_buffer_view(&buf_u_image_view_info, None) .map_err(|_| tag!("Failed to create u buffer image view"))?; let buf_v_view = vulk .dev .create_buffer_view(&buf_v_image_view_info, None) .map_err(|_| tag!("Failed to create v buffer image view"))?; let buf_y_info = &[buf_y_view]; let buf_u_info = &[buf_u_view]; let buf_v_info = &[buf_v_view]; let descriptor_writes = &[ vk::WriteDescriptorSet::default() .dst_set(descriptor_set) .dst_binding(0) .descriptor_count(1) .descriptor_type(vk::DescriptorType::STORAGE_IMAGE) .image_info(output_image_info), vk::WriteDescriptorSet::default() .dst_set(descriptor_set) .dst_binding(1) .descriptor_count(1) .descriptor_type(vk::DescriptorType::UNIFORM_TEXEL_BUFFER) .texel_buffer_view(buf_y_info), vk::WriteDescriptorSet::default() .dst_set(descriptor_set) .dst_binding(2) .descriptor_count(1) .descriptor_type(vk::DescriptorType::UNIFORM_TEXEL_BUFFER) .texel_buffer_view(buf_u_info), vk::WriteDescriptorSet::default() .dst_set(descriptor_set) .dst_binding(3) .descriptor_count(1) 
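            // Each R8_UNORM buffer view lets the shader read a plane as a texel
            // buffer, roughly `texelFetch(plane, y * stride + x).r` in GLSL
            // (illustrative; the real shaders live in the `waypipe_shaders` crate).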
.descriptor_type(vk::DescriptorType::UNIFORM_TEXEL_BUFFER) .texel_buffer_view(buf_v_info), ]; vulk.dev.update_descriptor_sets(descriptor_writes, &[]); let inner_pool = pool.pool.lock().unwrap(); let alloc_cb_info = vk::CommandBufferAllocateInfo::default() .command_pool(*inner_pool) .command_buffer_count(1) .level(vk::CommandBufferLevel::PRIMARY); drop(inner_pool); let cbvec = vulk .dev .allocate_command_buffers(&alloc_cb_info) .map_err(|_| "Failed to allocate command buffers")?; let cb = cbvec[0]; let cb_info = vk::CommandBufferBeginInfo::default().flags(vk::CommandBufferUsageFlags::empty()); vulk.dev .begin_command_buffer(cb, &cb_info) .map_err(|_| "Failed to begin command buffer")?; let target_layout = vk::ImageLayout::GENERAL; let mut img_inner = state.target.inner.lock().unwrap(); // note: original contents of image do not need to be preserved as // all pixels should be written; can the queue transfer be skipped? let entry_barriers = &[qfot_acquire_image_memory_barrier( state.target.image, img_inner.image_layout, target_layout, vulk.queue_family, vk::AccessFlags::SHADER_WRITE, )]; let buf_memory_barriers = &[ vk::BufferMemoryBarrier::default() .src_access_mask(vk::AccessFlags::HOST_WRITE) .dst_access_mask(vk::AccessFlags::SHADER_READ) .buffer(buf_y.buffer) .offset(0) .size(buf_y.buffer_len) .src_queue_family_index(vk::QUEUE_FAMILY_IGNORED) .dst_queue_family_index(vk::QUEUE_FAMILY_IGNORED), vk::BufferMemoryBarrier::default() .src_access_mask(vk::AccessFlags::HOST_WRITE) .dst_access_mask(vk::AccessFlags::SHADER_READ) .buffer(buf_u.buffer) .offset(0) .size(buf_u.buffer_len) .src_queue_family_index(vk::QUEUE_FAMILY_IGNORED) .dst_queue_family_index(vk::QUEUE_FAMILY_IGNORED), vk::BufferMemoryBarrier::default() .src_access_mask(vk::AccessFlags::HOST_WRITE) .dst_access_mask(vk::AccessFlags::SHADER_READ) .buffer(buf_v.buffer) .offset(0) .size(buf_v.buffer_len) .src_queue_family_index(vk::QUEUE_FAMILY_IGNORED) .dst_queue_family_index(vk::QUEUE_FAMILY_IGNORED), ]; vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::HOST, vk::PipelineStageFlags::COMPUTE_SHADER, vk::DependencyFlags::empty(), &[], buf_memory_barriers, &[], ); vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::TOP_OF_PIPE, vk::PipelineStageFlags::COMPUTE_SHADER, vk::DependencyFlags::empty(), &[], &[], entry_barriers, ); vulk.dev.cmd_bind_pipeline( cb, vk::PipelineBindPoint::COMPUTE, video.yuv420_buf_to_rgb.pipeline, ); let push_u8_mtx = pack_glsl_mat3x4(YUV_TO_RGB); let mut push_u8: [u8; 60] = [0; 60]; push_u8[..48].copy_from_slice(&push_u8_mtx); push_u8[48..52].copy_from_slice(&(ystride as i32).to_le_bytes()); push_u8[52..56].copy_from_slice(&(ustride as i32).to_le_bytes()); push_u8[56..60].copy_from_slice(&(vstride as i32).to_le_bytes()); vulk.dev.cmd_push_constants( cb, video.yuv420_buf_to_rgb.pipeline_layout, vk::ShaderStageFlags::COMPUTE, 0, &push_u8, ); let bind_descs = &[descriptor_set]; vulk.dev.cmd_bind_descriptor_sets( cb, vk::PipelineBindPoint::COMPUTE, video.yuv420_buf_to_rgb.pipeline_layout, 0, bind_descs, &[], ); let xgroups = state.target.width.div_ceil(8); let ygroups = state.target.height.div_ceil(8); vulk.dev.cmd_dispatch(cb, xgroups, ygroups, 1); // Only for main image; other barriers are let exit_barriers = &[qfot_release_image_memory_barrier( state.target.image, target_layout, vk::ImageLayout::GENERAL, vulk.queue_family, vk::AccessFlags::SHADER_WRITE, )]; vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::COMPUTE_SHADER, vk::PipelineStageFlags::BOTTOM_OF_PIPE, vk::DependencyFlags::empty(), 
&[], &[], exit_barriers, ); img_inner.image_layout = vk::ImageLayout::GENERAL; vulk.dev .end_command_buffer(cb) .map_err(|_| "Failed to end command buffer")?; let cbs = &[cb]; let mut queue = vulkan_lock_queue(vulk); queue.inner.last_semaphore_value += 1; let completion_time_point = queue.inner.last_semaphore_value; /* Signal vkframe's semaphore to indicate when this operation is done, * and main semaphore to notify main loop. */ let signal_values = &[completion_time_point]; let signal_semaphores = &[vulk.semaphore]; let mut wait_timeline_info = vk::TimelineSemaphoreSubmitInfo::default() .wait_semaphore_values(&[]) .signal_semaphore_values(signal_values); let submits = &[vk::SubmitInfo::default() .command_buffers(cbs) .wait_semaphores(&[]) .wait_dst_stage_mask(&[]) .signal_semaphores(signal_semaphores) .push_next(&mut wait_timeline_info)]; vulk.dev .queue_submit(queue.inner.queue, submits, vk::Fence::null()) .map_err(|_| "Queue submit failed")?; // <- can fail with OOM drop(queue); let mut av_packet_ref = av_packet; video .bindings .av_packet_free(std::ptr::from_mut(&mut av_packet_ref)); let mut frame_ref: *mut AVFrame = frame; video.bindings.av_frame_free(&mut frame_ref); Ok(VulkanDecodeOpHandle { decode: state.clone(), pool: pool.clone(), data: VulkanDecodeOpData::Software(VulkanSWDecodeData { _buf_y: buf_y, _buf_u: buf_u, _buf_v: buf_v, buf_y_view, buf_u_view, buf_v_view, }), desc_pool, descriptor_set, cb, completion_time_point, }) } } pub fn start_dmavid_apply( state: &Arc, pool: &Arc, packet: &[u8], ) -> Result { if matches!(state.data, VideoDecodeStateData::SW) { start_dmavid_decode_sw(state, pool, packet) } else { start_dmavid_decode_hw(state, pool, packet) } } /* kv must contain _null terminated_ strings */ fn build_av_dict(bindings: &ffmpeg, kv: &[(&[u8], &[u8])]) -> Result<*mut AVDictionary, String> { let mut options = std::ptr::null_mut(); for (k, v) in kv.iter() { assert!(k.ends_with(&[0]) && v.ends_with(&[0])); unsafe { // SAFETY: null termination verified; // todo: let r = bindings.av_dict_set( &mut options, k.as_ptr() as *const c_char, v.as_ptr() as *const c_char, 0, ); if r < 0 { bindings.av_dict_free(&mut options); return Err(tag!("Failed to set key/value pair in dictionary")); } } } Ok(options) } pub fn setup_video_encode_hw( img: &Arc, fmt: VideoFormat, bpf: Option, ) -> Result { let video = img.vulk.video.as_ref().unwrap(); let encoder: *const AVCodec = match fmt { VideoFormat::H264 => video.codecs_h264.encoder, VideoFormat::VP9 => video.codecs_vp9.encoder, VideoFormat::AV1 => video.codecs_av1.encoder, }; assert!(!encoder.is_null()); unsafe { let ctx = video.bindings.avcodec_alloc_context3(encoder); if ctx.is_null() { return Err(tag!("Failed to allocate codec context")); } let hctx_ref = video.bindings.av_hwframe_ctx_alloc(video.av_hwdevice); if hctx_ref.is_null() { return Err(tag!("Failed to allocate hardware frames context")); } let (frame_width, frame_height) = align_size(img.width, img.height, fmt); { let hr = (*hctx_ref) .data .cast::() .as_mut() .unwrap(); hr.format = AVPixelFormat_AV_PIX_FMT_VULKAN; hr.sw_format = AVPixelFormat_AV_PIX_FMT_NV12; hr.height = frame_height; hr.width = frame_width; } av_hwframe_ctx_init(&video.bindings, hctx_ref)?; { let cr = ctx.as_mut().unwrap(); let nref = video.bindings.av_buffer_ref(video.av_hwdevice); if nref.is_null() { return Err(tag!("Failed to add reference for av_hwdevice")); } cr.hw_device_ctx = nref; cr.hw_frames_ctx = hctx_ref; cr.width = frame_width; cr.height = frame_height; /* Arbitrary, since Waypipe currently does 
per-buffer video instead of per-surface */ let nom_fps = 100; cr.time_base = AVRational { num: 1, den: nom_fps, }; cr.framerate = AVRational { num: nom_fps, den: 1, }; /* Streaming, no latency, only I and P frames */ cr.delay = 0; cr.max_b_frames = 0; // todo: instead of bpf, use a 'bpp' -- bits-per-pixel equivalent, which scales // properly with image size. Or crf? let b = bpf.unwrap_or(1e5); // todo: sanity checks cr.bit_rate = (b * (nom_fps as f32)) as i64; cr.pix_fmt = AVPixelFormat_AV_PIX_FMT_VULKAN; // cr.color_range = AVColorRange_AVCOL_RANGE_MPEG; } /* Encoder specific options */ let mut options = build_av_dict( &video.bindings, &[ (b"tune\0", b"ull\0"), (b"usage\0", b"stream\0"), (b"async_depth\0", b"1\0"), ], )?; avcodec_open(&video.bindings, ctx, encoder, &mut options)?; video.bindings.av_dict_free(&mut options); let output_image_view = create_dmabuf_view(img)?; let staging_images = create_staging_images( &img.vulk, frame_width.try_into().unwrap(), frame_height.try_into().unwrap(), true, vk::Format::G8_B8R8_2PLANE_420_UNORM, )?; Ok(VideoEncodeState { target: img.clone(), inner: Mutex::new(VideoEncodeInner { ctx }), output_image_view, data: VideoEncodeStateData::HW(staging_images), }) } } pub fn setup_video_encode_sw( img: &Arc, fmt: VideoFormat, bpf: Option, ) -> Result { let video = img.vulk.video.as_ref().unwrap(); let sw_encoder: *const AVCodec = match fmt { VideoFormat::H264 => video.codecs_h264.sw_encoder, VideoFormat::VP9 => video.codecs_vp9.sw_encoder, VideoFormat::AV1 => video.codecs_av1.sw_encoder, }; assert!(!sw_encoder.is_null()); unsafe { let ctx = video.bindings.avcodec_alloc_context3(sw_encoder); if ctx.is_null() { return Err(tag!("Failed to allocate codec context")); } let (frame_width, frame_height) = align_size(img.width, img.height, fmt); { let cr = ctx.as_mut().unwrap(); cr.width = frame_width; cr.height = frame_height; /* Arbitrary, since Waypipe currently does per-buffer video instead of per-surface */ let nom_fps = 100; cr.time_base = AVRational { num: 1, den: nom_fps, }; cr.framerate = AVRational { num: nom_fps, den: 1, }; /* Streaming, no latency, only I and P frames */ cr.delay = 0; cr.max_b_frames = 0; // todo: instead of bpf, use a 'bpp' -- bits-per-pixel equivalent, which scales // properly with image size. Or crf? let b = bpf.unwrap_or(1e5); // todo: sanity checks cr.bit_rate = (b * (nom_fps as f32)) as i64; cr.pix_fmt = AVPixelFormat_AV_PIX_FMT_YUV420P; // cr.color_range = AVColorRange_AVCOL_RANGE_MPEG; } /* Encoder specific options. 
In general, minimize latency */ let mut options = match fmt { VideoFormat::H264 => build_av_dict( &video.bindings, &[(b"tune\0", b"zerolatency\0"), (b"preset\0", b"ultrafast\0")], )?, VideoFormat::VP9 => build_av_dict( &video.bindings, &[ (b"lag-in-frames\0", b"0\0"), (b"deadline\0", b"realtime\0"), (b"quality\0", b"realtime\0"), (b"cpu-used\0", b"8\0"), ], )?, VideoFormat::AV1 => build_av_dict( &video.bindings, &[ (b"usage\0", b"realtime\0"), (b"lag-in-frames\0", b"0\0"), (b"cpu-used\0", b"8\0"), ], )?, }; avcodec_open(&video.bindings, ctx, sw_encoder, &mut options)?; video.bindings.av_dict_free(&mut options); let output_image_view = create_dmabuf_view(img)?; Ok(VideoEncodeState { target: img.clone(), inner: Mutex::new(VideoEncodeInner { ctx }), output_image_view, data: VideoEncodeStateData::SW, }) } } pub fn setup_video_encode( img: &Arc, fmt: VideoFormat, bpf: Option, ) -> Result { assert!(img.can_store_and_sample); let video = img.vulk.video.as_ref().unwrap(); let can_hw_encode = match fmt { VideoFormat::AV1 => false, VideoFormat::VP9 => false, VideoFormat::H264 => { video.can_hw_enc_h264 && !video.codecs_h264.encoder.is_null() && (img.width >= MIN_H264_HW_SIZE.0 && img.height >= MIN_H264_HW_SIZE.1) } }; if can_hw_encode { setup_video_encode_hw(img, fmt, bpf) } else { setup_video_encode_sw(img, fmt, bpf) } } pub fn start_dmavid_encode_hw( state: &Arc, pool: &Arc, wait_semaphores: &[(Arc, u64)], wait_binary_semaphores: &[VulkanBinarySemaphore], ) -> Result, String> { let vulk: &VulkanDevice = &state.target.vulk; let video = vulk.video.as_ref().unwrap(); let VideoEncodeStateData::HW(ref state_data) = state.data else { unreachable!() }; debug!( "Hardware encoding frame for {}x{} image", state.target.width, state.target.height ); unsafe { let enc_inner = state.inner.lock().unwrap(); let hwframe_ctx_ref = (*enc_inner.ctx).hw_frames_ctx; let frame: *mut AVFrame = video.bindings.av_frame_alloc(); if frame.is_null() { return Err(tag!("Failed to allocate frame")); } let get_buf_ret = video .bindings .av_hwframe_get_buffer(hwframe_ctx_ref, frame, 0); if get_buf_ret != 0 { return Err(tag!("Failed to get buffer for frame: {}", get_buf_ret)); } let hw_fr_ref = (*frame).hw_frames_ctx.as_ref().unwrap(); let hwfc_ref = hw_fr_ref.data.cast::().as_mut().unwrap(); let vk_fc = hwfc_ref .hwctx .cast::() .as_mut() .unwrap(); let vkframe = ((*frame).data[0]).cast::().as_mut().unwrap(); /* Lock frame, to prevent concurrent modifications */ vk_fc.lock_frame.as_ref().unwrap()(hwfc_ref, vkframe); assert!(vk_fc.format[0] == vk::Format::G8_B8R8_2PLANE_420_UNORM.as_raw() as _); assert!(vkframe.img[1..] 
.iter() .all(|x| vk::Image::from_raw(*x as _).is_null())); /* Blocking wait for semaphores; remove this later */ let mut wait_sems = vec![vk::Semaphore::from_raw(vkframe.sem[0] as _)]; let mut wait_values = vec![vkframe.sem_value[0]]; wait_sems.extend(wait_semaphores.iter().map(|x| x.0.semaphore)); wait_values.extend(wait_semaphores.iter().map(|x| x.1)); wait_sems.extend(wait_binary_semaphores.iter().map(|x| x.semaphore)); wait_values.extend(wait_binary_semaphores.iter().map(|_| u64::MAX)); let init_layout = vkframe.layout[0]; let dst_img = vk::Image::from_raw(vkframe.img[0] as _); let sizes = &[ vk::DescriptorPoolSize::default() .descriptor_count(1) .ty(vk::DescriptorType::COMBINED_IMAGE_SAMPLER), vk::DescriptorPoolSize::default() .descriptor_count(2) .ty(vk::DescriptorType::STORAGE_IMAGE), ]; // at most 1 descriptor set let pool_storage_info = vk::DescriptorPoolCreateInfo::default() .flags(vk::DescriptorPoolCreateFlags::FREE_DESCRIPTOR_SET) .max_sets(1) .pool_sizes(sizes); let desc_pool = vulk .dev .create_descriptor_pool(&pool_storage_info, None) .map_err(|_| "Failed to create descriptor pool")?; let layouts = &[video.rgb_to_nv12_img.ds_layout]; let desc_set_alloc_info = vk::DescriptorSetAllocateInfo::default() .descriptor_pool(desc_pool) .set_layouts(layouts); let descs = vulk .dev .allocate_descriptor_sets(&desc_set_alloc_info) .map_err(|_| "Failed to allocate descriptor sets")?; let descriptor_set = descs[0]; let output_image_info = &[vk::DescriptorImageInfo::default() .image_view(state.output_image_view) .image_layout(vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL) .sampler(video.rgb_to_yuv_sampler_rgb)]; let input_1_image_info = &[vk::DescriptorImageInfo::default() .image_view(state_data.plane_image_views[0]) .image_layout(vk::ImageLayout::GENERAL)]; let input_2_image_info = &[vk::DescriptorImageInfo::default() .image_view(state_data.plane_image_views[1]) .image_layout(vk::ImageLayout::GENERAL)]; let descriptor_writes = &[ vk::WriteDescriptorSet::default() .dst_set(descriptor_set) .dst_binding(0) .descriptor_count(1) .descriptor_type(vk::DescriptorType::COMBINED_IMAGE_SAMPLER) .image_info(output_image_info), vk::WriteDescriptorSet::default() .dst_set(descriptor_set) .dst_binding(1) .descriptor_count(1) .descriptor_type(vk::DescriptorType::STORAGE_IMAGE) .image_info(input_1_image_info), vk::WriteDescriptorSet::default() .dst_set(descriptor_set) .dst_binding(2) .descriptor_count(1) .descriptor_type(vk::DescriptorType::STORAGE_IMAGE) .image_info(input_2_image_info), ]; vulk.dev.update_descriptor_sets(descriptor_writes, &[]); let inner_pool = pool.pool.lock().unwrap(); let alloc_cb_info = vk::CommandBufferAllocateInfo::default() .command_pool(*inner_pool) .command_buffer_count(1) .level(vk::CommandBufferLevel::PRIMARY); drop(inner_pool); let cbvec = vulk .dev .allocate_command_buffers(&alloc_cb_info) .map_err(|_| "Failed to allocate command buffers")?; let cb = cbvec[0]; // TODO: figure out proper pipeline barriers & queue transfers // want image memory barriers on all three images let cb_info = vk::CommandBufferBeginInfo::default().flags(vk::CommandBufferUsageFlags::empty()); vulk.dev .begin_command_buffer(cb, &cb_info) .map_err(|_| "Failed to begin command buffer")?; let target_layout = vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL; let dst_layout = vk::ImageLayout::TRANSFER_DST_OPTIMAL; let standard_access_range = vk::ImageSubresourceRange::default() .aspect_mask(vk::ImageAspectFlags::COLOR) .level_count(1) .layer_count(1); let mut img_inner = state.target.inner.lock().unwrap(); let 
entry_barriers = &[qfot_acquire_image_memory_barrier( state.target.image, img_inner.image_layout, target_layout, vulk.queue_family, vk::AccessFlags::SHADER_READ, )]; vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::TOP_OF_PIPE, vk::PipelineStageFlags::COMPUTE_SHADER, vk::DependencyFlags::empty(), &[], &[], entry_barriers, ); /* Set staging image layout and discard old contents */ let staging_entry_barriers = &[ image_layout_transition( state_data.plane_images[0], vk::ImageLayout::UNDEFINED, vk::ImageLayout::GENERAL, vk::AccessFlags::TRANSFER_READ, vk::AccessFlags::SHADER_WRITE, ), image_layout_transition( state_data.plane_images[1], vk::ImageLayout::UNDEFINED, vk::ImageLayout::GENERAL, vk::AccessFlags::TRANSFER_READ, vk::AccessFlags::SHADER_WRITE, ), ]; vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::TRANSFER, vk::PipelineStageFlags::COMPUTE_SHADER, vk::DependencyFlags::empty(), &[], &[], staging_entry_barriers, ); vulk.dev.cmd_bind_pipeline( cb, vk::PipelineBindPoint::COMPUTE, video.rgb_to_nv12_img.pipeline, ); let bind_descs = &[descriptor_set]; vulk.dev.cmd_bind_descriptor_sets( cb, vk::PipelineBindPoint::COMPUTE, video.rgb_to_nv12_img.pipeline_layout, 0, bind_descs, &[], ); let push_u8 = pack_glsl_mat3x4(RGB_TO_YUV); vulk.dev.cmd_push_constants( cb, video.rgb_to_nv12_img.pipeline_layout, vk::ShaderStageFlags::COMPUTE, 0, &push_u8, ); /* Fill every pixel of the Y and CbCr planes */ assert!(hwfc_ref.width % 16 == 0); assert!(hwfc_ref.height % 16 == 0); let xgroups = (hwfc_ref.width / 16) as u32; let ygroups = (hwfc_ref.height / 16) as u32; vulk.dev.cmd_dispatch(cb, xgroups, ygroups, 1); let exit_barriers = &[qfot_release_image_memory_barrier( state.target.image, target_layout, vk::ImageLayout::GENERAL, vulk.queue_family, vk::AccessFlags::SHADER_READ, )]; vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::COMPUTE_SHADER, vk::PipelineStageFlags::BOTTOM_OF_PIPE, vk::DependencyFlags::empty(), &[], &[], exit_barriers, ); let post_compute_barriers = &[vk::ImageMemoryBarrier::default() .image(dst_img) .old_layout(vk::ImageLayout::from_raw(init_layout as _)) .new_layout(dst_layout) .src_queue_family_index(vk::QUEUE_FAMILY_IGNORED) .dst_queue_family_index(vk::QUEUE_FAMILY_IGNORED) .src_access_mask(vk::AccessFlags::MEMORY_WRITE) .dst_access_mask(vk::AccessFlags::MEMORY_WRITE) .subresource_range(standard_access_range)]; vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::TOP_OF_PIPE, vk::PipelineStageFlags::TRANSFER, vk::DependencyFlags::empty(), &[], &[], post_compute_barriers, ); let staging_tx_barriers = &[ image_layout_transition( state_data.plane_images[0], vk::ImageLayout::GENERAL, vk::ImageLayout::TRANSFER_SRC_OPTIMAL, vk::AccessFlags::SHADER_WRITE, vk::AccessFlags::TRANSFER_READ, ), image_layout_transition( state_data.plane_images[1], vk::ImageLayout::GENERAL, vk::ImageLayout::TRANSFER_SRC_OPTIMAL, vk::AccessFlags::SHADER_WRITE, vk::AccessFlags::TRANSFER_READ, ), ]; vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::COMPUTE_SHADER, vk::PipelineStageFlags::TRANSFER, vk::DependencyFlags::empty(), &[], &[], staging_tx_barriers, ); let copy_plane_1 = &[vk::ImageCopy { src_subresource: subresource_layer(vk::ImageAspectFlags::COLOR), src_offset: vk::Offset3D::default(), dst_subresource: subresource_layer(vk::ImageAspectFlags::PLANE_0), dst_offset: vk::Offset3D::default(), extent: vk::Extent3D { width: hwfc_ref.width as u32, height: hwfc_ref.height as u32, depth: 1, }, }]; let copy_plane_2 = &[vk::ImageCopy { src_subresource: 
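            /* Added note: this second copy moves the chroma staging image into
             * plane 1 of the NV12 frame; its extent below is half the frame
             * size in each axis, matching 4:2:0 subsampling. */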
subresource_layer(vk::ImageAspectFlags::COLOR), src_offset: vk::Offset3D::default(), dst_subresource: subresource_layer(vk::ImageAspectFlags::PLANE_1), dst_offset: vk::Offset3D::default(), extent: vk::Extent3D { width: (hwfc_ref.width as u32) / 2, height: (hwfc_ref.height as u32) / 2, depth: 1, }, }]; vulk.dev.cmd_copy_image( cb, state_data.plane_images[0], vk::ImageLayout::TRANSFER_SRC_OPTIMAL, dst_img, dst_layout, copy_plane_1, ); vulk.dev.cmd_copy_image( cb, state_data.plane_images[1], vk::ImageLayout::TRANSFER_SRC_OPTIMAL, dst_img, dst_layout, copy_plane_2, ); // ffmpeg will handle barriers / layout transitions for the vkframe img_inner.image_layout = vk::ImageLayout::GENERAL; vkframe.layout[0] = dst_layout.as_raw() as _; vulk.dev .end_command_buffer(cb) .map_err(|_| "Failed to end command buffer")?; /* Wait for _everything_ to complete -- do not know if graphics/compute/decode is last */ // vkframe.access not used? let waitv_stage_flags = vec![vk::PipelineStageFlags::ALL_COMMANDS; wait_values.len()]; let cbs = &[cb]; vkframe.sem_value[0] += 1; /* Signal vkframe's semaphore to indicate when this operation is done, * and main semaphore to notify main loop. */ let signal_values = &[vkframe.sem_value[0]]; let signal_semaphores = &[wait_sems[0]]; let mut wait_timeline_info = vk::TimelineSemaphoreSubmitInfo::default() .wait_semaphore_values(&wait_values) .signal_semaphore_values(signal_values); let submits = &[vk::SubmitInfo::default() .command_buffers(cbs) .wait_semaphores(&wait_sems) .wait_dst_stage_mask(&waitv_stage_flags) .signal_semaphores(signal_semaphores) .push_next(&mut wait_timeline_info)]; let queue = vulkan_lock_queue(vulk); vulk.dev .queue_submit(queue.inner.queue, submits, vk::Fence::null()) .map_err(|_| "Queue submit failed")?; // <- can fail with OOM drop(queue); /* Unlock frame, now that command is submitted. (Note: unlocking before * submission could risk timeline semaphore value updates and monotonicity * violations */ vk_fc.unlock_frame.as_ref().unwrap()(hwfc_ref, vkframe); avcodec_send_frame(&video.bindings, enc_inner.ctx, frame)?; let mut packet: *mut AVPacket = video.bindings.av_packet_alloc(); avcodec_receive_packet(&video.bindings, enc_inner.ctx, packet)?; /* Cleanup; receiving packet should mean preceding operation is entirely done? 
*/ let inner_pool = pool.pool.lock().unwrap(); vulk.dev.free_command_buffers(*inner_pool, &[cb]); vulk.dev .free_descriptor_sets(desc_pool, &[descriptor_set]) .map_err(|_| "Failed to free descriptor set") .unwrap(); vulk.dev.destroy_descriptor_pool(desc_pool, None); let mut f = frame; video.bindings.av_frame_free(&mut f); let data = std::slice::from_raw_parts((*packet).data, (*packet).size.try_into().unwrap()); let mut packet_data = Vec::::new(); packet_data.extend_from_slice(data); video.bindings.av_packet_free(&mut packet); Ok(packet_data) } } pub fn start_dmavid_encode_sw( state: &Arc, pool: &Arc, wait_semaphores: &[(Arc, u64)], wait_binary_semaphores: &[VulkanBinarySemaphore], ) -> Result, String> { let vulk: &VulkanDevice = &state.target.vulk; let video = vulk.video.as_ref().unwrap(); debug!( "Software encoding a frame for {}x{} image", state.target.width, state.target.height ); unsafe { let frame: *mut AVFrame = video.bindings.av_frame_alloc(); if frame.is_null() { return Err(tag!("Failed to allocate frame")); } let enc_inner = state.inner.lock().unwrap(); // Y plane let ext_w = (*enc_inner.ctx).width as usize; let ext_h = (*enc_inner.ctx).height as usize; assert!(ext_w % 2 == 0 && ext_h % 2 == 0); let mut w: i32 = (*enc_inner.ctx).width; let mut h: i32 = (*enc_inner.ctx).height; let mut line_alignments: [i32; AV_NUM_DATA_POINTERS as usize] = [0, 0, 0, 0, 0, 0, 0, 0]; // todo: variable alignment? // TODO: can be computed _once_ and cached video.bindings.avcodec_align_dimensions2( enc_inner.ctx, &mut w, &mut h, &mut line_alignments as *mut i32, ); assert!(w as usize == ext_w); assert!(h as usize >= ext_h); // TODO: handle ffmpeg extra height request to allow overreading let stride_y = align(ext_w, line_alignments[0].try_into().unwrap()); // 1bpp, no subsampling let stride_u = align(ext_w / 2, line_alignments[1].try_into().unwrap()); // 1bpp and 2x subsampled let stride_v = align(ext_w / 2, line_alignments[2].try_into().unwrap()); // 1bpp and 2x subsampled let buf_y = vulkan_get_buffer(&state.target.vulk, stride_y * ext_h, true)?; let buf_u = vulkan_get_buffer(&state.target.vulk, stride_u * (ext_h / 2), true)?; let buf_v = vulkan_get_buffer(&state.target.vulk, stride_v * (ext_h / 2), true)?; let sizes = &[ vk::DescriptorPoolSize::default() .descriptor_count(1) .ty(vk::DescriptorType::COMBINED_IMAGE_SAMPLER), vk::DescriptorPoolSize::default() .descriptor_count(3) .ty(vk::DescriptorType::STORAGE_TEXEL_BUFFER), ]; // at most 1 descriptor set let pool_storage_info = vk::DescriptorPoolCreateInfo::default() .flags(vk::DescriptorPoolCreateFlags::FREE_DESCRIPTOR_SET) .max_sets(1) .pool_sizes(sizes); let desc_pool = vulk .dev .create_descriptor_pool(&pool_storage_info, None) .map_err(|_| "Failed to create descriptor pool")?; let layouts = &[video.rgb_to_yuv420_buf.ds_layout]; let desc_set_alloc_info = vk::DescriptorSetAllocateInfo::default() .descriptor_pool(desc_pool) .set_layouts(layouts); let descs = vulk .dev .allocate_descriptor_sets(&desc_set_alloc_info) .map_err(|_| "Failed to allocate descriptor sets")?; let descriptor_set = descs[0]; let buf_y_image_view_info = vk::BufferViewCreateInfo::default() .flags(vk::BufferViewCreateFlags::empty()) .buffer(buf_y.buffer) .format(vk::Format::R8_UNORM) .offset(0) .range(vk::WHOLE_SIZE); // todo: with buffer pooling, precise size will need specifying let buf_u_image_view_info = vk::BufferViewCreateInfo::default() .flags(vk::BufferViewCreateFlags::empty()) .buffer(buf_u.buffer) .format(vk::Format::R8_UNORM) .offset(0) .range(vk::WHOLE_SIZE); let 
buf_v_image_view_info = vk::BufferViewCreateInfo::default() .flags(vk::BufferViewCreateFlags::empty()) .buffer(buf_v.buffer) .format(vk::Format::R8_UNORM) .offset(0) .range(vk::WHOLE_SIZE); let buf_y_image_view = vulk .dev .create_buffer_view(&buf_y_image_view_info, None) .map_err(|_| tag!("Failed to create y buffer image view"))?; let buf_u_image_view = vulk .dev .create_buffer_view(&buf_u_image_view_info, None) .map_err(|_| tag!("Failed to create u buffer image view"))?; let buf_v_image_view = vulk .dev .create_buffer_view(&buf_v_image_view_info, None) .map_err(|_| tag!("Failed to create v buffer image view"))?; let output_image_info = &[vk::DescriptorImageInfo::default() .image_view(state.output_image_view) .image_layout(vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL) .sampler(video.rgb_to_yuv_sampler_rgb)]; let buf_y_info = &[buf_y_image_view]; let buf_u_info = &[buf_u_image_view]; let buf_v_info = &[buf_v_image_view]; let descriptor_writes = &[ vk::WriteDescriptorSet::default() .dst_set(descriptor_set) .dst_binding(0) .descriptor_count(1) .descriptor_type(vk::DescriptorType::COMBINED_IMAGE_SAMPLER) .image_info(output_image_info), vk::WriteDescriptorSet::default() .dst_set(descriptor_set) .dst_binding(1) .descriptor_count(1) .descriptor_type(vk::DescriptorType::STORAGE_TEXEL_BUFFER) .texel_buffer_view(buf_y_info), vk::WriteDescriptorSet::default() .dst_set(descriptor_set) .dst_binding(2) .descriptor_count(1) .descriptor_type(vk::DescriptorType::STORAGE_TEXEL_BUFFER) .texel_buffer_view(buf_u_info), vk::WriteDescriptorSet::default() .dst_set(descriptor_set) .dst_binding(3) .descriptor_count(1) .descriptor_type(vk::DescriptorType::STORAGE_TEXEL_BUFFER) .texel_buffer_view(buf_v_info), ]; vulk.dev.update_descriptor_sets(descriptor_writes, &[]); let inner_pool = pool.pool.lock().unwrap(); let alloc_cb_info = vk::CommandBufferAllocateInfo::default() .command_pool(*inner_pool) .command_buffer_count(1) .level(vk::CommandBufferLevel::PRIMARY); drop(inner_pool); let cbvec = vulk .dev .allocate_command_buffers(&alloc_cb_info) .map_err(|_| "Failed to allocate command buffers")?; let cb = cbvec[0]; // TODO: figure out proper pipeline barriers & queue transfers // want image memory barriers on all three images let cb_info = vk::CommandBufferBeginInfo::default().flags(vk::CommandBufferUsageFlags::empty()); vulk.dev .begin_command_buffer(cb, &cb_info) .map_err(|_| "Failed to begin command buffer")?; let target_layout = vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL; let mut img_inner = state.target.inner.lock().unwrap(); let entry_barriers = &[qfot_acquire_image_memory_barrier( state.target.image, img_inner.image_layout, target_layout, vulk.queue_family, vk::AccessFlags::SHADER_READ, )]; vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::TOP_OF_PIPE, vk::PipelineStageFlags::COMPUTE_SHADER, vk::DependencyFlags::empty(), &[], &[], entry_barriers, ); vulk.dev.cmd_bind_pipeline( cb, vk::PipelineBindPoint::COMPUTE, video.rgb_to_yuv420_buf.pipeline, ); let bind_descs = &[descriptor_set]; vulk.dev.cmd_bind_descriptor_sets( cb, vk::PipelineBindPoint::COMPUTE, video.rgb_to_yuv420_buf.pipeline_layout, 0, bind_descs, &[], ); let push_u8_mtx = pack_glsl_mat3x4(RGB_TO_YUV); let mut push_u8: [u8; 60] = [0; 60]; push_u8[..48].copy_from_slice(&push_u8_mtx); push_u8[48..52].copy_from_slice(&(stride_y as i32).to_le_bytes()); push_u8[52..56].copy_from_slice(&(stride_u as i32).to_le_bytes()); push_u8[56..60].copy_from_slice(&(stride_v as i32).to_le_bytes()); vulk.dev.cmd_push_constants( cb, 
video.rgb_to_yuv420_buf.pipeline_layout, vk::ShaderStageFlags::COMPUTE, 0, &push_u8, ); /* Fill every pixel of the Y and CbCr planes */ assert!(ext_w % 16 == 0); assert!(ext_h % 16 == 0); let xgroups = (ext_w / 16) as u32; let ygroups = (ext_h / 16) as u32; vulk.dev.cmd_dispatch(cb, xgroups, ygroups, 1); // Only for main image; buffers let exit_barriers = &[qfot_release_image_memory_barrier( state.target.image, target_layout, vk::ImageLayout::GENERAL, vulk.queue_family, vk::AccessFlags::SHADER_READ, )]; let buf_memory_barriers = &[ vk::BufferMemoryBarrier::default() .src_access_mask(vk::AccessFlags::SHADER_WRITE) .dst_access_mask(vk::AccessFlags::HOST_READ) .buffer(buf_y.buffer) .offset(0) .size(buf_y.buffer_len) .src_queue_family_index(vk::QUEUE_FAMILY_IGNORED) .dst_queue_family_index(vk::QUEUE_FAMILY_IGNORED), vk::BufferMemoryBarrier::default() .src_access_mask(vk::AccessFlags::SHADER_WRITE) .dst_access_mask(vk::AccessFlags::HOST_READ) .buffer(buf_u.buffer) .offset(0) .size(buf_u.buffer_len) .src_queue_family_index(vk::QUEUE_FAMILY_IGNORED) .dst_queue_family_index(vk::QUEUE_FAMILY_IGNORED), vk::BufferMemoryBarrier::default() .src_access_mask(vk::AccessFlags::SHADER_WRITE) .dst_access_mask(vk::AccessFlags::HOST_READ) .buffer(buf_v.buffer) .offset(0) .size(buf_v.buffer_len) .src_queue_family_index(vk::QUEUE_FAMILY_IGNORED) .dst_queue_family_index(vk::QUEUE_FAMILY_IGNORED), ]; vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::COMPUTE_SHADER, vk::PipelineStageFlags::HOST, vk::DependencyFlags::empty(), &[], buf_memory_barriers, &[], ); vulk.dev.cmd_pipeline_barrier( cb, vk::PipelineStageFlags::COMPUTE_SHADER, vk::PipelineStageFlags::BOTTOM_OF_PIPE, vk::DependencyFlags::empty(), &[], &[], exit_barriers, ); img_inner.image_layout = vk::ImageLayout::GENERAL; vulk.dev .end_command_buffer(cb) .map_err(|_| "Failed to end command buffer")?; /* Wait for _everything_ to complete -- do not know if graphics/compute/decode is last */ let cbs = &[cb]; // todo: deduplicate with `start_copy_segments_from_dmabuf`, and structure to reduce allocations let mut waitv_values: Vec = wait_semaphores.iter().map(|x| x.1).collect(); let mut waitv_semaphores: Vec = wait_semaphores.iter().map(|x| x.0.semaphore).collect(); for bs in wait_binary_semaphores { waitv_values.push(u64::MAX); waitv_semaphores.push(bs.semaphore); } let mut waitv_stage_flags = Vec::new(); waitv_stage_flags.resize(waitv_semaphores.len(), vk::PipelineStageFlags::ALL_COMMANDS); let mut wait_timeline_info = vk::TimelineSemaphoreSubmitInfo::default() .wait_semaphore_values(&waitv_values) .signal_semaphore_values(&[]); let submits = &[vk::SubmitInfo::default() .command_buffers(cbs) .wait_semaphores(&waitv_semaphores) .wait_dst_stage_mask(&waitv_stage_flags) .signal_semaphores(&[]) .push_next(&mut wait_timeline_info)]; let queue = vulkan_lock_queue(vulk); vulk.dev .queue_submit(queue.inner.queue, submits, vk::Fence::null()) .map_err(|_| "Queue submit failed")?; // <- can fail with OOM vulk.dev .queue_wait_idle(queue.inner.queue) .map_err(|_| tag!("Queue wait idle failed"))?; drop(queue); buf_y.prepare_read()?; buf_u.prepare_read()?; buf_v.prepare_read()?; let view_y = buf_y.get_read_view(); let view_u = buf_u.get_read_view(); let view_v = buf_v.get_read_view(); (*frame).width = ext_w as i32; (*frame).height = ext_h as i32; (*frame).format = AVPixelFormat_AV_PIX_FMT_YUV420P; (*frame).linesize[0] = stride_y.try_into().unwrap(); (*frame).linesize[1] = stride_u.try_into().unwrap(); (*frame).linesize[2] = stride_v.try_into().unwrap(); 
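        /* Descriptive note (added): the AVFrame borrows the mapped Vulkan
         * staging buffers directly. data[0]/data[1]/data[2] below point at the
         * Y, U, and V planes, stored row by row with the strides set above;
         * the chroma planes are 2x subsampled in each axis (ext_w/2 by
         * ext_h/2). */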
assert!(view_y.data.as_ptr() as usize % 64 == 0); assert!(view_u.data.as_ptr() as usize % 64 == 0); assert!(view_v.data.as_ptr() as usize % 64 == 0); (*frame).data[0] = view_y.data.as_ptr() as *mut u8; (*frame).data[1] = view_u.data.as_ptr() as *mut u8; (*frame).data[2] = view_v.data.as_ptr() as *mut u8; // TODO: refcounting frame may avoid ffmpeg-side copies? avcodec_send_frame(&video.bindings, enc_inner.ctx, frame)?; let mut packet: *mut AVPacket = video.bindings.av_packet_alloc(); avcodec_receive_packet(&video.bindings, enc_inner.ctx, packet)?; /* Keep frame data alive until processing is done? TODO: what does ffmpeg require? It might want to hold onto the frame a bit longer */ drop(view_y); drop(view_u); drop(view_v); /* Cleanup; receiving packet should mean preceding operation is entirely done? */ let inner_pool = pool.pool.lock().unwrap(); vulk.dev.free_command_buffers(*inner_pool, &[cb]); vulk.dev .free_descriptor_sets(desc_pool, &[descriptor_set]) .map_err(|_| "Failed to free descriptor set") .unwrap(); vulk.dev.destroy_descriptor_pool(desc_pool, None); vulk.dev.destroy_buffer_view(buf_y_image_view, None); vulk.dev.destroy_buffer_view(buf_u_image_view, None); vulk.dev.destroy_buffer_view(buf_v_image_view, None); drop(buf_y); drop(buf_u); drop(buf_v); let mut f = frame; video.bindings.av_frame_free(&mut f); // TODO: allocate the packet data ourselves, using AVCodecContext.get_encode_buffer let data = std::slice::from_raw_parts((*packet).data, (*packet).size.try_into().unwrap()); let mut packet_data = Vec::::new(); packet_data.extend_from_slice(data); video.bindings.av_packet_free(&mut packet); Ok(packet_data) } } pub fn start_dmavid_encode( state: &Arc, pool: &Arc, wait_semaphores: &[(Arc, u64)], wait_binary_semaphores: &[VulkanBinarySemaphore], ) -> Result, String> { if matches!(state.data, VideoEncodeStateData::SW) { start_dmavid_encode_sw(state, pool, wait_semaphores, wait_binary_semaphores) } else { start_dmavid_encode_hw(state, pool, wait_semaphores, wait_binary_semaphores) } } /* Fill with specified RGB color */ #[cfg(test)] fn fill_with_color(w: usize, h: usize, format: u32, color: (f32, f32, f32)) -> Vec { use crate::wayland_gen::*; /* using: byte order of channels */ fn pack8888(b0: f32, b1: f32, b2: f32, b3: f32) -> [u8; 4] { [ (b0 * 255.0).clamp(0., 255.0).round() as u8, (b1 * 255.0).clamp(0., 255.0).round() as u8, (b2 * 255.0).clamp(0., 255.0).round() as u8, (b3 * 255.0).clamp(0., 255.0).round() as u8, ] } fn replicate(pattern: &[u8], len: usize) -> Vec { pattern .iter() .cycle() .take(pattern.len() * len) .copied() .collect() } match drm_to_wayland(format).try_into().unwrap() { WlShmFormat::Xrgb8888 => replicate(&pack8888(color.2, color.1, color.0, 1.0), w * h), WlShmFormat::Xbgr8888 => replicate(&pack8888(color.0, color.1, color.2, 1.0), w * h), _ => todo!(), } } #[cfg(test)] fn get_average_color(w: usize, h: usize, format: u32, data: &[u8]) -> (f32, f32, f32) { use crate::wayland_gen::*; /* from: byte order of channels -> rgb */ fn swizzle_bgrx(x: (f32, f32, f32, f32)) -> (f32, f32, f32) { (x.2, x.1, x.0) } fn swizzle_rgbx(x: (f32, f32, f32, f32)) -> (f32, f32, f32) { (x.0, x.1, x.2) } fn unpack8888(x: &[u8]) -> (f32, f32, f32, f32) { ( x[0] as f32 / 255.0, x[1] as f32 / 255.0, x[2] as f32 / 255.0, x[3] as f32 / 255.0, ) } fn add_color(x: (f32, f32, f32), y: (f32, f32, f32)) -> (f32, f32, f32) { (x.0 + y.0, x.1 + y.1, x.2 + y.2) } let base: (f32, f32, f32) = (0., 0., 0.); let rgb = match drm_to_wayland(format).try_into().unwrap() { WlShmFormat::Xrgb8888 => data 
.chunks_exact(4) .map(|x| swizzle_bgrx(unpack8888(x))) .fold(base, add_color), WlShmFormat::Xbgr8888 => data .chunks_exact(4) .map(|x| swizzle_rgbx(unpack8888(x))) .fold(base, add_color), _ => todo!(), }; let scale = 1. / ((w * h) as f32); (rgb.0 * scale, rgb.1 * scale, rgb.2 * scale) } #[cfg(test)] fn test_video(try_hardware: bool) { let _serialize_test = VULKAN_MUTEX.lock().unwrap(); /* A crude error metric */ fn color_error(x: (f32, f32, f32), y: (f32, f32, f32)) -> f32 { (x.0 - y.0).abs() + (x.1 - y.1).abs() + (x.2 - y.2).abs() } // debug disabled as libavcodec logging escapes test framework let debug = false; for dev_id in list_render_device_ids() { let pref = Some(if try_hardware { CodecPreference::HW } else { CodecPreference::SW }); let vid_setting = VideoSetting { format: Some(VideoFormat::H264), /* the actual format given here does not matter */ bits_per_frame: None, enc_pref: pref, dec_pref: pref, }; let Ok(Some(instance)) = setup_vulkan_instance(debug, &vid_setting, false, false) else { continue; }; let Ok(Some(vulk)) = setup_vulkan_device(&instance, Some(dev_id), &vid_setting, debug) else { continue; }; println!("Setup complete for device id {}", dev_id); /* Test relatively small image sizes, since many formats will be tested */ let sizes: [(usize, usize); 2] = [(63, 65), (1, 1)]; for video_format in [VideoFormat::H264, VideoFormat::VP9, VideoFormat::AV1] { let mut format_modifiers = Vec::<(u32, u64, bool)>::new(); 'scan: for f in DRM_FORMATS { for s in sizes { if !supports_video_format(&vulk, video_format, *f, s.0 as u32, s.1 as u32) { continue 'scan; } } /* no point in testing all modifiers for all video formats, since the intermediate * copy step does not depend on the format. So only do this in one case. */ if video_format == VideoFormat::H264 && !try_hardware { let mut first = false; for m in vulk.get_supported_modifiers(*f) { format_modifiers.push((*f, *m, first)); first = false; } } else if let [m, ..] 
= vulk.get_supported_modifiers(*f) { format_modifiers.push((*f, *m, true)); } } let pool = vulkan_get_cmd_pool(&vulk).unwrap(); for (j, (format, modifier, first)) in format_modifiers.iter().enumerate() { let (format, modifier) = (*format, *modifier); let vkf = drm_to_vulkan(format).unwrap(); println!( "\nTesting {:?}, format {}/{} 0x{:x} => {:?}, modifier 0x{:x}", video_format, j + 1, format_modifiers.len(), format, vkf, modifier ); let start_time = std::time::Instant::now(); let mut color_errs = Vec::new(); let mut elapsed_times = Vec::new(); for (w, h) in sizes { let mod_options = &[modifier]; let (dmabuf1, planes) = vulkan_create_dmabuf(&vulk, w as u32, h as u32, format, mod_options, true) .unwrap(); drop(planes); let (dmabuf2, planes) = vulkan_create_dmabuf(&vulk, w as u32, h as u32, format, mod_options, true) .unwrap(); drop(planes); let enc_state = Arc::new(setup_video_encode(&dmabuf1, video_format, None).unwrap()); let dec_state = Arc::new(setup_video_decode(&dmabuf2, video_format).unwrap()); println!( "Enc is sw: {}, Dec is sw: {}", matches!(enc_state.data, VideoEncodeStateData::SW), matches!(dec_state.data, VideoDecodeStateData::SW) ); let copy1 = Arc::new( vulkan_get_buffer(&vulk, dmabuf1.nominal_size(None), false).unwrap(), ); let copy2 = Arc::new( vulkan_get_buffer(&vulk, dmabuf2.nominal_size(None), true).unwrap(), ); let colors_long = &[ (0.0, 0.0, 0.0), (0.5, 0.5, 0.5), (1.0, 0.5, 0.2), (0.3, 0.0, 0.7), (0.0, 1.0, 0.0), (1.0, 1.0, 1.0), ]; let colors_short = &[(1.0, 0.5, 0.2), (0.3, 0.0, 0.7)]; let colors: &[(f32, f32, f32)] = if *first { colors_long } else { colors_short }; let mut net_err = 0.0; for color in colors { let data = fill_with_color(w, h, format, *color); let check = get_average_color(w, h, format, &data); copy_onto_dmabuf(&dmabuf1, ©1, &data).unwrap(); let packet = start_dmavid_encode(&enc_state, &pool, &[], &[]).unwrap(); let vid_op = start_dmavid_apply(&dec_state, &pool, &packet).unwrap(); vid_op.wait_until_done().unwrap(); let mirror = copy_from_dmabuf(&dmabuf2, ©2).unwrap(); let output = get_average_color(w, h, format, &mirror); let check_err = color_error(*color, check); let rtrip_err = color_error(*color, output); /* Verify that the video encoding gets the color relatively close */ assert!(check_err <= 0.1); if !try_hardware { // As of writing, H264+radeon hardware video decoding fails on <=32x32 images let thresh = if video_format == VideoFormat::AV1 { 0.2 } else { 0.1 }; assert!( rtrip_err <= thresh, "size: {:?} color: {:?} output: {:?}", (w, h), *color, output ); } net_err += rtrip_err; } let end_time = std::time::Instant::now(); let duration = end_time.duration_since(start_time); elapsed_times.push(duration.as_secs_f32()); color_errs.push(net_err / (colors.len() as f32)); } println!( "Tested sizes: {:?}; average errors: {:?}; times: {:?}", sizes, color_errs, elapsed_times, ); } } } } #[test] fn test_video_try_hw() { test_video(true) } #[test] fn test_video_sw() { test_video(false) } waypipe-v0.10.5/src/wayland.rs000066400000000000000000000137371507071711100162500ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-3.0-or-later */ /*! Types and basic parsing/writing code for Wayland protocol */ use crate::wayland_gen::{WaylandInterface, INTERFACE_TABLE}; pub const PARSE_ERROR: &str = "Failed to parse message"; /* Wayland object Id. The null object 0 is not special cased and may require checking. 
*/
#[derive(PartialEq, Eq, PartialOrd, Ord, Debug, Clone, Copy)]
#[repr(transparent)]
pub struct ObjId(pub u32);
impl std::fmt::Display for ObjId {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}", self.0)
    }
}

pub enum WaylandArgument {
    Int,
    Uint,
    Fixed,
    String,
    OptionalString,
    Object(WaylandInterface),
    GenericObject,
    NewId(WaylandInterface),
    GenericNewId,
    Array,
    Fd,
}

pub struct WaylandMethod {
    pub name: &'static str,
    pub sig: &'static [WaylandArgument],
    pub destructor: bool,
}

// TODO: memory layout optimizations. As with the C implementation, could
// compact this data a lot by merging all these arrays into a single table,
// and using indices into it
// In particular: unify the event/req tables, by making them use an index
// into a global table, placing events at coordinates x+0,x+1,x+2..., and placing
// requests at coordinates x-1,x-2,x-3...
/** Data for a Wayland interface. */
pub struct WaylandData {
    pub name: &'static str,
    pub evts: &'static [WaylandMethod],
    pub reqs: &'static [WaylandMethod],
    pub version: u32,
}

#[derive(PartialEq, Eq, Copy, Clone)]
pub enum MethodId {
    Event(u8),
    Request(u8),
}
impl MethodId {
    pub fn code(&self) -> u8 {
        match self {
            Self::Event(ref x) => *x,
            Self::Request(ref x) => *x,
        }
    }
}
impl std::fmt::Display for MethodId {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::Event(ref x) => {
                write!(f, "e{}", x)
            }
            Self::Request(ref x) => {
                write!(f, "r{}", x)
            }
        }
    }
}

pub fn lookup_intf_by_name(name: &[u8]) -> Option<WaylandInterface> {
    let r = INTERFACE_TABLE.binary_search_by(|cand_intf| cand_intf.name.as_bytes().cmp(name));
    if let Ok(idx) = r {
        Some(WaylandInterface::try_from(idx as u32).unwrap())
    } else {
        None
    }
}

pub fn parse_wl_header(msg: &[u8]) -> (ObjId, usize, u8) {
    let obj_id = ObjId(u32::from_le_bytes(msg[0..4].try_into().unwrap()));
    let x = u32::from_le_bytes(msg[4..8].try_into().unwrap());
    let length = (x >> 16) as usize;
    let opcode = (x & 0x00ff) as u8;
    (obj_id, length, opcode)
}

pub fn parse_string<'a>(msg_tail: &mut &'a [u8]) -> Result<Option<&'a [u8]>, &'static str> {
    let v = parse_array(msg_tail)?;
    // Null values for Wayland strings are encoded with empty arrays
    if v.is_empty() {
        Ok(None)
    } else {
        // drop null terminator
        Ok(Some(&v[..v.len() - 1]))
    }
}

pub fn parse_array<'a>(msg_tail: &mut &'a [u8]) -> Result<&'a [u8], &'static str> {
    if msg_tail.len() < 4 {
        return Err(PARSE_ERROR);
    }
    let length = u32::from_le_bytes(msg_tail[..4].try_into().unwrap()) as usize;
    // length includes null terminator, if string
    let space = length.checked_next_multiple_of(4).ok_or(PARSE_ERROR)?;
    if space > msg_tail.len() - 4 {
        return Err(PARSE_ERROR);
    }
    let ret = &msg_tail[4..4 + length];
    *msg_tail = &std::mem::take(msg_tail)[4 + space..];
    Ok(ret)
}

pub fn parse_obj(msg_tail: &mut &[u8]) -> Result<ObjId, &'static str> {
    Ok(ObjId(parse_u32(msg_tail)?))
}

pub fn parse_u32(msg_tail: &mut &[u8]) -> Result<u32, &'static str> {
    if msg_tail.len() < 4 {
        return Err(PARSE_ERROR);
    }
    let val = u32::from_le_bytes(msg_tail[0..4].try_into().unwrap());
    *msg_tail = &std::mem::take(msg_tail)[4..];
    Ok(val)
}

pub fn parse_i32(msg_tail: &mut &[u8]) -> Result<i32, &'static str> {
    if msg_tail.len() < 4 {
        return Err(PARSE_ERROR);
    }
    let val = i32::from_le_bytes(msg_tail[0..4].try_into().unwrap());
    *msg_tail = &std::mem::take(msg_tail)[4..];
    Ok(val)
}

pub fn length_array(arr_len: usize) -> usize {
    let arrlen = arr_len;
    arrlen
        .checked_next_multiple_of(4)
        .unwrap_or(usize::MAX)
        .saturating_add(4)
}

pub fn length_string(str_len_unterminated: usize) -> usize {
    length_array(str_len_unterminated.saturating_add(1))
}
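
/* Illustrative sketch (added example, not part of the original source): a
 * minimal round-trip check of the 8-byte message header as implemented by
 * parse_wl_header above and write_header below. Word 0 is the object id;
 * word 1 packs the message length in bytes into its upper 16 bits and the
 * opcode into its low 8 bits (the ntagfds bits written by write_header are
 * a Waypipe-side annotation which parse_wl_header ignores). The module and
 * test names here are hypothetical. */
#[cfg(test)]
mod header_wire_format_example {
    use super::*;

    #[test]
    fn header_round_trip() {
        let mut buf = [0u8; 8];
        let mut tail: &mut [u8] = &mut buf;
        /* A header-only message: length 8 bytes, opcode 3, no tagged fds */
        write_header(&mut tail, ObjId(7), 8, 3, 0);
        assert!(tail.is_empty());
        let (obj, length, opcode) = parse_wl_header(&buf);
        assert_eq!(obj, ObjId(7));
        assert_eq!(length, 8);
        assert_eq!(opcode, 3);
    }
}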
pub fn write_header( tail: &mut &mut [u8], obj_id: ObjId, length: usize, opcode: usize, ntagfds: u32, ) { let id = u32::to_le_bytes(obj_id.0); assert!(length < (1 << 16) && ntagfds < (1 << 5) && opcode < (1 << 8)); let p = u32::to_le_bytes(((length as u32) << 16) | (ntagfds << 11) | ((opcode as u32) & 0x00ff)); tail[..4].copy_from_slice(&id); tail[4..8].copy_from_slice(&p); *tail = &mut std::mem::take(tail)[8..]; } pub fn write_u32(tail: &mut &mut [u8], val: u32) { let v = u32::to_le_bytes(val); tail[..4].copy_from_slice(&v); *tail = &mut std::mem::take(tail)[4..]; } pub fn write_obj(tail: &mut &mut [u8], val: ObjId) { write_u32(tail, val.0) } pub fn write_i32(tail: &mut &mut [u8], val: i32) { let v = i32::to_le_bytes(val); tail[..4].copy_from_slice(&v); *tail = &mut std::mem::take(tail)[4..]; } pub fn write_array(tail: &mut &mut [u8], s: &[u8]) { let l = u32::to_le_bytes(s.len() as u32); tail[..4].copy_from_slice(&l); tail[4..(s.len() + 4)].copy_from_slice(s); let e = length_array(s.len()); tail[(s.len() + 4)..e].fill(0); *tail = &mut std::mem::take(tail)[e..]; } pub fn write_string(tail: &mut &mut [u8], os: Option<&[u8]>) { if let Some(s) = os { let l = u32::to_le_bytes(s.len() as u32 + 1); tail[..4].copy_from_slice(&l); tail[4..(s.len() + 4)].copy_from_slice(s); let e = length_string(s.len()); tail[(s.len() + 4)..e].fill(0); *tail = &mut std::mem::take(tail)[e..]; } else { tail[0..4].fill(0); *tail = &mut std::mem::take(tail)[4..]; } } waypipe-v0.10.5/src/wayland_gen.rs000066400000000000000000006446741507071711100171130ustar00rootroot00000000000000/*! Wayland protocol interface and method data and functions. Code automatically generated from protocols/ folder. */ #![allow(clippy::all, dead_code)] use crate::wayland::WaylandArgument::*; use crate::wayland::*; use WaylandInterface::*; pub fn write_req_wp_color_manager_v1_get_output( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, output: ObjId, ) { let l = length_req_wp_color_manager_v1_get_output(); write_header(dst, for_id, l, 1, 0); write_obj(dst, id); write_obj(dst, output); } pub fn length_req_wp_color_manager_v1_get_output() -> usize { 16 } pub fn parse_req_wp_color_manager_v1_get_output<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, ObjId), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_WP_COLOR_MANAGER_V1_GET_OUTPUT: MethodId = MethodId::Request(1); pub fn write_req_wp_color_manager_v1_create_icc_creator( dst: &mut &mut [u8], for_id: ObjId, obj: ObjId, ) { let l = length_req_wp_color_manager_v1_create_icc_creator(); write_header(dst, for_id, l, 4, 0); write_obj(dst, obj); } pub fn length_req_wp_color_manager_v1_create_icc_creator() -> usize { 12 } pub fn parse_req_wp_color_manager_v1_create_icc_creator<'a>( mut msg: &'a [u8], ) -> Result { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WP_COLOR_MANAGER_V1_CREATE_ICC_CREATOR: MethodId = MethodId::Request(4); const DATA_WP_COLOR_MANAGER_V1: WaylandData = WaylandData { name: "wp_color_manager_v1", evts: &[ WaylandMethod { name: "supported_intent", sig: &[Uint], destructor: false, }, WaylandMethod { name: "supported_feature", sig: &[Uint], destructor: false, }, WaylandMethod { name: "supported_tf_named", sig: &[Uint], destructor: false, }, WaylandMethod { name: "supported_primaries_named", sig: &[Uint], destructor: false, }, 
WaylandMethod { name: "done", sig: &[], destructor: false, }, ], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "get_output", sig: &[NewId(WpColorManagementOutputV1), Object(WlOutput)], destructor: false, }, WaylandMethod { name: "get_surface", sig: &[NewId(WpColorManagementSurfaceV1), Object(WlSurface)], destructor: false, }, WaylandMethod { name: "get_surface_feedback", sig: &[NewId(WpColorManagementSurfaceFeedbackV1), Object(WlSurface)], destructor: false, }, WaylandMethod { name: "create_icc_creator", sig: &[NewId(WpImageDescriptionCreatorIccV1)], destructor: false, }, WaylandMethod { name: "create_parametric_creator", sig: &[NewId(WpImageDescriptionCreatorParamsV1)], destructor: false, }, WaylandMethod { name: "create_windows_scrgb", sig: &[NewId(WpImageDescriptionV1)], destructor: false, }, ], version: 1, }; pub const WP_COLOR_MANAGER_V1: &[u8] = DATA_WP_COLOR_MANAGER_V1.name.as_bytes(); pub fn write_req_wp_color_management_output_v1_get_image_description( dst: &mut &mut [u8], for_id: ObjId, image_description: ObjId, ) { let l = length_req_wp_color_management_output_v1_get_image_description(); write_header(dst, for_id, l, 1, 0); write_obj(dst, image_description); } pub fn length_req_wp_color_management_output_v1_get_image_description() -> usize { 12 } pub fn parse_req_wp_color_management_output_v1_get_image_description<'a>( mut msg: &'a [u8], ) -> Result { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WP_COLOR_MANAGEMENT_OUTPUT_V1_GET_IMAGE_DESCRIPTION: MethodId = MethodId::Request(1); const DATA_WP_COLOR_MANAGEMENT_OUTPUT_V1: WaylandData = WaylandData { name: "wp_color_management_output_v1", evts: &[WaylandMethod { name: "image_description_changed", sig: &[], destructor: false, }], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "get_image_description", sig: &[NewId(WpImageDescriptionV1)], destructor: false, }, ], version: 1, }; pub const WP_COLOR_MANAGEMENT_OUTPUT_V1: &[u8] = DATA_WP_COLOR_MANAGEMENT_OUTPUT_V1.name.as_bytes(); const DATA_WP_COLOR_MANAGEMENT_SURFACE_V1: WaylandData = WaylandData { name: "wp_color_management_surface_v1", evts: &[], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "set_image_description", sig: &[Object(WpImageDescriptionV1), Uint], destructor: false, }, WaylandMethod { name: "unset_image_description", sig: &[], destructor: false, }, ], version: 1, }; pub const WP_COLOR_MANAGEMENT_SURFACE_V1: &[u8] = DATA_WP_COLOR_MANAGEMENT_SURFACE_V1.name.as_bytes(); const DATA_WP_COLOR_MANAGEMENT_SURFACE_FEEDBACK_V1: WaylandData = WaylandData { name: "wp_color_management_surface_feedback_v1", evts: &[WaylandMethod { name: "preferred_changed", sig: &[Uint], destructor: false, }], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "get_preferred", sig: &[NewId(WpImageDescriptionV1)], destructor: false, }, WaylandMethod { name: "get_preferred_parametric", sig: &[NewId(WpImageDescriptionV1)], destructor: false, }, ], version: 1, }; pub const WP_COLOR_MANAGEMENT_SURFACE_FEEDBACK_V1: &[u8] = DATA_WP_COLOR_MANAGEMENT_SURFACE_FEEDBACK_V1.name.as_bytes(); pub fn write_req_wp_image_description_creator_icc_v1_set_icc_file( dst: &mut &mut [u8], for_id: ObjId, tag_fds: bool, offset: u32, length: u32, ) { let l = length_req_wp_image_description_creator_icc_v1_set_icc_file(); write_header(dst, 
for_id, l, 1, if tag_fds { 1 } else { 0 }); write_u32(dst, offset); write_u32(dst, length); } pub fn length_req_wp_image_description_creator_icc_v1_set_icc_file() -> usize { 16 } pub fn parse_req_wp_image_description_creator_icc_v1_set_icc_file<'a>( mut msg: &'a [u8], ) -> Result<(u32, u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg2 = parse_u32(&mut msg)?; let arg3 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg2, arg3)) } pub const OPCODE_WP_IMAGE_DESCRIPTION_CREATOR_ICC_V1_SET_ICC_FILE: MethodId = MethodId::Request(1); const DATA_WP_IMAGE_DESCRIPTION_CREATOR_ICC_V1: WaylandData = WaylandData { name: "wp_image_description_creator_icc_v1", evts: &[], reqs: &[ WaylandMethod { name: "create", sig: &[NewId(WpImageDescriptionV1)], destructor: true, }, WaylandMethod { name: "set_icc_file", sig: &[Fd, Uint, Uint], destructor: false, }, ], version: 1, }; pub const WP_IMAGE_DESCRIPTION_CREATOR_ICC_V1: &[u8] = DATA_WP_IMAGE_DESCRIPTION_CREATOR_ICC_V1.name.as_bytes(); const DATA_WP_IMAGE_DESCRIPTION_CREATOR_PARAMS_V1: WaylandData = WaylandData { name: "wp_image_description_creator_params_v1", evts: &[], reqs: &[ WaylandMethod { name: "create", sig: &[NewId(WpImageDescriptionV1)], destructor: true, }, WaylandMethod { name: "set_tf_named", sig: &[Uint], destructor: false, }, WaylandMethod { name: "set_tf_power", sig: &[Uint], destructor: false, }, WaylandMethod { name: "set_primaries_named", sig: &[Uint], destructor: false, }, WaylandMethod { name: "set_primaries", sig: &[Int, Int, Int, Int, Int, Int, Int, Int], destructor: false, }, WaylandMethod { name: "set_luminances", sig: &[Uint, Uint, Uint], destructor: false, }, WaylandMethod { name: "set_mastering_display_primaries", sig: &[Int, Int, Int, Int, Int, Int, Int, Int], destructor: false, }, WaylandMethod { name: "set_mastering_luminance", sig: &[Uint, Uint], destructor: false, }, WaylandMethod { name: "set_max_cll", sig: &[Uint], destructor: false, }, WaylandMethod { name: "set_max_fall", sig: &[Uint], destructor: false, }, ], version: 1, }; pub const WP_IMAGE_DESCRIPTION_CREATOR_PARAMS_V1: &[u8] = DATA_WP_IMAGE_DESCRIPTION_CREATOR_PARAMS_V1.name.as_bytes(); pub fn write_req_wp_image_description_v1_get_information( dst: &mut &mut [u8], for_id: ObjId, information: ObjId, ) { let l = length_req_wp_image_description_v1_get_information(); write_header(dst, for_id, l, 1, 0); write_obj(dst, information); } pub fn length_req_wp_image_description_v1_get_information() -> usize { 12 } pub fn parse_req_wp_image_description_v1_get_information<'a>( mut msg: &'a [u8], ) -> Result { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WP_IMAGE_DESCRIPTION_V1_GET_INFORMATION: MethodId = MethodId::Request(1); const DATA_WP_IMAGE_DESCRIPTION_V1: WaylandData = WaylandData { name: "wp_image_description_v1", evts: &[ WaylandMethod { name: "failed", sig: &[Uint, String], destructor: false, }, WaylandMethod { name: "ready", sig: &[Uint], destructor: false, }, ], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "get_information", sig: &[NewId(WpImageDescriptionInfoV1)], destructor: false, }, ], version: 1, }; pub const WP_IMAGE_DESCRIPTION_V1: &[u8] = DATA_WP_IMAGE_DESCRIPTION_V1.name.as_bytes(); pub fn write_evt_wp_image_description_info_v1_icc_file( dst: &mut &mut [u8], for_id: ObjId, tag_fds: bool, icc_size: u32, ) { let l = 
length_evt_wp_image_description_info_v1_icc_file(); write_header(dst, for_id, l, 1, if tag_fds { 1 } else { 0 }); write_u32(dst, icc_size); } pub fn length_evt_wp_image_description_info_v1_icc_file() -> usize { 12 } pub fn parse_evt_wp_image_description_info_v1_icc_file<'a>( mut msg: &'a [u8], ) -> Result { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg2 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg2) } pub const OPCODE_WP_IMAGE_DESCRIPTION_INFO_V1_ICC_FILE: MethodId = MethodId::Event(1); const DATA_WP_IMAGE_DESCRIPTION_INFO_V1: WaylandData = WaylandData { name: "wp_image_description_info_v1", evts: &[ WaylandMethod { name: "done", sig: &[], destructor: true, }, WaylandMethod { name: "icc_file", sig: &[Fd, Uint], destructor: false, }, WaylandMethod { name: "primaries", sig: &[Int, Int, Int, Int, Int, Int, Int, Int], destructor: false, }, WaylandMethod { name: "primaries_named", sig: &[Uint], destructor: false, }, WaylandMethod { name: "tf_power", sig: &[Uint], destructor: false, }, WaylandMethod { name: "tf_named", sig: &[Uint], destructor: false, }, WaylandMethod { name: "luminances", sig: &[Uint, Uint, Uint], destructor: false, }, WaylandMethod { name: "target_primaries", sig: &[Int, Int, Int, Int, Int, Int, Int, Int], destructor: false, }, WaylandMethod { name: "target_luminance", sig: &[Uint, Uint], destructor: false, }, WaylandMethod { name: "target_max_cll", sig: &[Uint], destructor: false, }, WaylandMethod { name: "target_max_fall", sig: &[Uint], destructor: false, }, ], reqs: &[], version: 1, }; pub const WP_IMAGE_DESCRIPTION_INFO_V1: &[u8] = DATA_WP_IMAGE_DESCRIPTION_INFO_V1.name.as_bytes(); pub fn write_req_wp_commit_timing_manager_v1_get_timer( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, surface: ObjId, ) { let l = length_req_wp_commit_timing_manager_v1_get_timer(); write_header(dst, for_id, l, 1, 0); write_obj(dst, id); write_obj(dst, surface); } pub fn length_req_wp_commit_timing_manager_v1_get_timer() -> usize { 16 } pub fn parse_req_wp_commit_timing_manager_v1_get_timer<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, ObjId), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_WP_COMMIT_TIMING_MANAGER_V1_GET_TIMER: MethodId = MethodId::Request(1); const DATA_WP_COMMIT_TIMING_MANAGER_V1: WaylandData = WaylandData { name: "wp_commit_timing_manager_v1", evts: &[], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "get_timer", sig: &[NewId(WpCommitTimerV1), Object(WlSurface)], destructor: false, }, ], version: 1, }; pub const WP_COMMIT_TIMING_MANAGER_V1: &[u8] = DATA_WP_COMMIT_TIMING_MANAGER_V1.name.as_bytes(); pub fn write_req_wp_commit_timer_v1_set_timestamp( dst: &mut &mut [u8], for_id: ObjId, tv_sec_hi: u32, tv_sec_lo: u32, tv_nsec: u32, ) { let l = length_req_wp_commit_timer_v1_set_timestamp(); write_header(dst, for_id, l, 0, 0); write_u32(dst, tv_sec_hi); write_u32(dst, tv_sec_lo); write_u32(dst, tv_nsec); } pub fn length_req_wp_commit_timer_v1_set_timestamp() -> usize { 20 } pub fn parse_req_wp_commit_timer_v1_set_timestamp<'a>( mut msg: &'a [u8], ) -> Result<(u32, u32, u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; let arg2 = parse_u32(&mut msg)?; let arg3 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3)) } pub const 
OPCODE_WP_COMMIT_TIMER_V1_SET_TIMESTAMP: MethodId = MethodId::Request(0); const DATA_WP_COMMIT_TIMER_V1: WaylandData = WaylandData { name: "wp_commit_timer_v1", evts: &[], reqs: &[ WaylandMethod { name: "set_timestamp", sig: &[Uint, Uint, Uint], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const WP_COMMIT_TIMER_V1: &[u8] = DATA_WP_COMMIT_TIMER_V1.name.as_bytes(); pub fn write_req_ext_data_control_manager_v1_create_data_source( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, ) { let l = length_req_ext_data_control_manager_v1_create_data_source(); write_header(dst, for_id, l, 0, 0); write_obj(dst, id); } pub fn length_req_ext_data_control_manager_v1_create_data_source() -> usize { 12 } pub fn parse_req_ext_data_control_manager_v1_create_data_source<'a>( mut msg: &'a [u8], ) -> Result { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_EXT_DATA_CONTROL_MANAGER_V1_CREATE_DATA_SOURCE: MethodId = MethodId::Request(0); pub fn write_req_ext_data_control_manager_v1_get_data_device( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, seat: ObjId, ) { let l = length_req_ext_data_control_manager_v1_get_data_device(); write_header(dst, for_id, l, 1, 0); write_obj(dst, id); write_obj(dst, seat); } pub fn length_req_ext_data_control_manager_v1_get_data_device() -> usize { 16 } pub fn parse_req_ext_data_control_manager_v1_get_data_device<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, ObjId), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_EXT_DATA_CONTROL_MANAGER_V1_GET_DATA_DEVICE: MethodId = MethodId::Request(1); const DATA_EXT_DATA_CONTROL_MANAGER_V1: WaylandData = WaylandData { name: "ext_data_control_manager_v1", evts: &[], reqs: &[ WaylandMethod { name: "create_data_source", sig: &[NewId(ExtDataControlSourceV1)], destructor: false, }, WaylandMethod { name: "get_data_device", sig: &[NewId(ExtDataControlDeviceV1), Object(WlSeat)], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const EXT_DATA_CONTROL_MANAGER_V1: &[u8] = DATA_EXT_DATA_CONTROL_MANAGER_V1.name.as_bytes(); pub fn write_req_ext_data_control_device_v1_set_selection( dst: &mut &mut [u8], for_id: ObjId, source: ObjId, ) { let l = length_req_ext_data_control_device_v1_set_selection(); write_header(dst, for_id, l, 0, 0); write_obj(dst, source); } pub fn length_req_ext_data_control_device_v1_set_selection() -> usize { 12 } pub fn parse_req_ext_data_control_device_v1_set_selection<'a>( mut msg: &'a [u8], ) -> Result { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_EXT_DATA_CONTROL_DEVICE_V1_SET_SELECTION: MethodId = MethodId::Request(0); pub fn write_evt_ext_data_control_device_v1_data_offer( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, ) { let l = length_evt_ext_data_control_device_v1_data_offer(); write_header(dst, for_id, l, 0, 0); write_obj(dst, id); } pub fn length_evt_ext_data_control_device_v1_data_offer() -> usize { 12 } pub fn parse_evt_ext_data_control_device_v1_data_offer<'a>( mut msg: &'a [u8], ) -> Result { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const 
OPCODE_EXT_DATA_CONTROL_DEVICE_V1_DATA_OFFER: MethodId = MethodId::Event(0); pub fn write_evt_ext_data_control_device_v1_selection( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, ) { let l = length_evt_ext_data_control_device_v1_selection(); write_header(dst, for_id, l, 1, 0); write_obj(dst, id); } pub fn length_evt_ext_data_control_device_v1_selection() -> usize { 12 } pub fn parse_evt_ext_data_control_device_v1_selection<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_EXT_DATA_CONTROL_DEVICE_V1_SELECTION: MethodId = MethodId::Event(1); const DATA_EXT_DATA_CONTROL_DEVICE_V1: WaylandData = WaylandData { name: "ext_data_control_device_v1", evts: &[ WaylandMethod { name: "data_offer", sig: &[NewId(ExtDataControlOfferV1)], destructor: false, }, WaylandMethod { name: "selection", sig: &[Object(ExtDataControlOfferV1)], destructor: false, }, WaylandMethod { name: "finished", sig: &[], destructor: false, }, WaylandMethod { name: "primary_selection", sig: &[Object(ExtDataControlOfferV1)], destructor: false, }, ], reqs: &[ WaylandMethod { name: "set_selection", sig: &[Object(ExtDataControlSourceV1)], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "set_primary_selection", sig: &[Object(ExtDataControlSourceV1)], destructor: false, }, ], version: 1, }; pub const EXT_DATA_CONTROL_DEVICE_V1: &[u8] = DATA_EXT_DATA_CONTROL_DEVICE_V1.name.as_bytes(); pub fn write_req_ext_data_control_source_v1_offer( dst: &mut &mut [u8], for_id: ObjId, mime_type: &[u8], ) { let l = length_req_ext_data_control_source_v1_offer(mime_type.len()); write_header(dst, for_id, l, 0, 0); write_string(dst, Some(mime_type)); } pub fn length_req_ext_data_control_source_v1_offer(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_req_ext_data_control_source_v1_offer<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_EXT_DATA_CONTROL_SOURCE_V1_OFFER: MethodId = MethodId::Request(0); pub fn write_evt_ext_data_control_source_v1_send( dst: &mut &mut [u8], for_id: ObjId, tag_fds: bool, mime_type: &[u8], ) { let l = length_evt_ext_data_control_source_v1_send(mime_type.len()); write_header(dst, for_id, l, 0, if tag_fds { 1 } else { 0 }); write_string(dst, Some(mime_type)); } pub fn length_evt_ext_data_control_source_v1_send(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_evt_ext_data_control_source_v1_send<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_EXT_DATA_CONTROL_SOURCE_V1_SEND: MethodId = MethodId::Event(0); const DATA_EXT_DATA_CONTROL_SOURCE_V1: WaylandData = WaylandData { name: "ext_data_control_source_v1", evts: &[ WaylandMethod { name: "send", sig: &[String, Fd], destructor: false, }, WaylandMethod { name: "cancelled", sig: &[], destructor: false, }, ], reqs: &[ WaylandMethod { name: "offer", sig: &[String], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const EXT_DATA_CONTROL_SOURCE_V1: &[u8] =
DATA_EXT_DATA_CONTROL_SOURCE_V1.name.as_bytes(); pub fn write_req_ext_data_control_offer_v1_receive( dst: &mut &mut [u8], for_id: ObjId, tag_fds: bool, mime_type: &[u8], ) { let l = length_req_ext_data_control_offer_v1_receive(mime_type.len()); write_header(dst, for_id, l, 0, if tag_fds { 1 } else { 0 }); write_string(dst, Some(mime_type)); } pub fn length_req_ext_data_control_offer_v1_receive(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_req_ext_data_control_offer_v1_receive<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_EXT_DATA_CONTROL_OFFER_V1_RECEIVE: MethodId = MethodId::Request(0); pub fn write_evt_ext_data_control_offer_v1_offer( dst: &mut &mut [u8], for_id: ObjId, mime_type: &[u8], ) { let l = length_evt_ext_data_control_offer_v1_offer(mime_type.len()); write_header(dst, for_id, l, 0, 0); write_string(dst, Some(mime_type)); } pub fn length_evt_ext_data_control_offer_v1_offer(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_evt_ext_data_control_offer_v1_offer<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_EXT_DATA_CONTROL_OFFER_V1_OFFER: MethodId = MethodId::Event(0); const DATA_EXT_DATA_CONTROL_OFFER_V1: WaylandData = WaylandData { name: "ext_data_control_offer_v1", evts: &[WaylandMethod { name: "offer", sig: &[String], destructor: false, }], reqs: &[ WaylandMethod { name: "receive", sig: &[String, Fd], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const EXT_DATA_CONTROL_OFFER_V1: &[u8] = DATA_EXT_DATA_CONTROL_OFFER_V1.name.as_bytes(); const DATA_EXT_FOREIGN_TOPLEVEL_LIST_V1: WaylandData = WaylandData { name: "ext_foreign_toplevel_list_v1", evts: &[ WaylandMethod { name: "toplevel", sig: &[NewId(ExtForeignToplevelHandleV1)], destructor: false, }, WaylandMethod { name: "finished", sig: &[], destructor: false, }, ], reqs: &[ WaylandMethod { name: "stop", sig: &[], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const EXT_FOREIGN_TOPLEVEL_LIST_V1: &[u8] = DATA_EXT_FOREIGN_TOPLEVEL_LIST_V1.name.as_bytes(); const DATA_EXT_FOREIGN_TOPLEVEL_HANDLE_V1: WaylandData = WaylandData { name: "ext_foreign_toplevel_handle_v1", evts: &[ WaylandMethod { name: "closed", sig: &[], destructor: false, }, WaylandMethod { name: "done", sig: &[], destructor: false, }, WaylandMethod { name: "title", sig: &[String], destructor: false, }, WaylandMethod { name: "app_id", sig: &[String], destructor: false, }, WaylandMethod { name: "identifier", sig: &[String], destructor: false, }, ], reqs: &[WaylandMethod { name: "destroy", sig: &[], destructor: true, }], version: 1, }; pub const EXT_FOREIGN_TOPLEVEL_HANDLE_V1: &[u8] = DATA_EXT_FOREIGN_TOPLEVEL_HANDLE_V1.name.as_bytes(); const DATA_EXT_IMAGE_CAPTURE_SOURCE_V1: WaylandData = WaylandData { name: "ext_image_capture_source_v1", evts: &[], reqs: &[WaylandMethod { name: "destroy", sig: &[], destructor: true, }], version: 1, }; pub const EXT_IMAGE_CAPTURE_SOURCE_V1: &[u8] = DATA_EXT_IMAGE_CAPTURE_SOURCE_V1.name.as_bytes(); pub fn 
write_req_ext_output_image_capture_source_manager_v1_create_source( dst: &mut &mut [u8], for_id: ObjId, source: ObjId, output: ObjId, ) { let l = length_req_ext_output_image_capture_source_manager_v1_create_source(); write_header(dst, for_id, l, 0, 0); write_obj(dst, source); write_obj(dst, output); } pub fn length_req_ext_output_image_capture_source_manager_v1_create_source() -> usize { 16 } pub fn parse_req_ext_output_image_capture_source_manager_v1_create_source<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, ObjId), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_EXT_OUTPUT_IMAGE_CAPTURE_SOURCE_MANAGER_V1_CREATE_SOURCE: MethodId = MethodId::Request(0); const DATA_EXT_OUTPUT_IMAGE_CAPTURE_SOURCE_MANAGER_V1: WaylandData = WaylandData { name: "ext_output_image_capture_source_manager_v1", evts: &[], reqs: &[ WaylandMethod { name: "create_source", sig: &[NewId(ExtImageCaptureSourceV1), Object(WlOutput)], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const EXT_OUTPUT_IMAGE_CAPTURE_SOURCE_MANAGER_V1: &[u8] = DATA_EXT_OUTPUT_IMAGE_CAPTURE_SOURCE_MANAGER_V1 .name .as_bytes(); const DATA_EXT_FOREIGN_TOPLEVEL_IMAGE_CAPTURE_SOURCE_MANAGER_V1: WaylandData = WaylandData { name: "ext_foreign_toplevel_image_capture_source_manager_v1", evts: &[], reqs: &[ WaylandMethod { name: "create_source", sig: &[ NewId(ExtImageCaptureSourceV1), Object(ExtForeignToplevelHandleV1), ], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const EXT_FOREIGN_TOPLEVEL_IMAGE_CAPTURE_SOURCE_MANAGER_V1: &[u8] = DATA_EXT_FOREIGN_TOPLEVEL_IMAGE_CAPTURE_SOURCE_MANAGER_V1 .name .as_bytes(); pub fn write_req_ext_image_copy_capture_manager_v1_create_session( dst: &mut &mut [u8], for_id: ObjId, session: ObjId, source: ObjId, options: u32, ) { let l = length_req_ext_image_copy_capture_manager_v1_create_session(); write_header(dst, for_id, l, 0, 0); write_obj(dst, session); write_obj(dst, source); write_u32(dst, options); } pub fn length_req_ext_image_copy_capture_manager_v1_create_session() -> usize { 20 } pub fn parse_req_ext_image_copy_capture_manager_v1_create_session<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, ObjId, u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_obj(&mut msg)?; let arg3 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3)) } pub const OPCODE_EXT_IMAGE_COPY_CAPTURE_MANAGER_V1_CREATE_SESSION: MethodId = MethodId::Request(0); const DATA_EXT_IMAGE_COPY_CAPTURE_MANAGER_V1: WaylandData = WaylandData { name: "ext_image_copy_capture_manager_v1", evts: &[], reqs: &[ WaylandMethod { name: "create_session", sig: &[ NewId(ExtImageCopyCaptureSessionV1), Object(ExtImageCaptureSourceV1), Uint, ], destructor: false, }, WaylandMethod { name: "create_pointer_cursor_session", sig: &[ NewId(ExtImageCopyCaptureCursorSessionV1), Object(ExtImageCaptureSourceV1), Object(WlPointer), ], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const EXT_IMAGE_COPY_CAPTURE_MANAGER_V1: &[u8] = DATA_EXT_IMAGE_COPY_CAPTURE_MANAGER_V1.name.as_bytes(); pub fn write_evt_ext_image_copy_capture_session_v1_buffer_size( dst: &mut &mut [u8], for_id: ObjId, width: u32, height: u32, ) { let l = 
length_evt_ext_image_copy_capture_session_v1_buffer_size(); write_header(dst, for_id, l, 0, 0); write_u32(dst, width); write_u32(dst, height); } pub fn length_evt_ext_image_copy_capture_session_v1_buffer_size() -> usize { 16 } pub fn parse_evt_ext_image_copy_capture_session_v1_buffer_size<'a>( mut msg: &'a [u8], ) -> Result<(u32, u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; let arg2 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_EXT_IMAGE_COPY_CAPTURE_SESSION_V1_BUFFER_SIZE: MethodId = MethodId::Event(0); pub fn write_evt_ext_image_copy_capture_session_v1_shm_format( dst: &mut &mut [u8], for_id: ObjId, format: u32, ) { let l = length_evt_ext_image_copy_capture_session_v1_shm_format(); write_header(dst, for_id, l, 1, 0); write_u32(dst, format); } pub fn length_evt_ext_image_copy_capture_session_v1_shm_format() -> usize { 12 } pub fn parse_evt_ext_image_copy_capture_session_v1_shm_format<'a>( mut msg: &'a [u8], ) -> Result<u32, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_EXT_IMAGE_COPY_CAPTURE_SESSION_V1_SHM_FORMAT: MethodId = MethodId::Event(1); pub fn write_evt_ext_image_copy_capture_session_v1_dmabuf_device( dst: &mut &mut [u8], for_id: ObjId, device: &[u8], ) { let l = length_evt_ext_image_copy_capture_session_v1_dmabuf_device(device.len()); write_header(dst, for_id, l, 2, 0); write_array(dst, device); } pub fn length_evt_ext_image_copy_capture_session_v1_dmabuf_device(device_len: usize) -> usize { let mut v = 8; v += length_array(device_len); v } pub fn parse_evt_ext_image_copy_capture_session_v1_dmabuf_device<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_array(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_EXT_IMAGE_COPY_CAPTURE_SESSION_V1_DMABUF_DEVICE: MethodId = MethodId::Event(2); pub fn write_evt_ext_image_copy_capture_session_v1_dmabuf_format( dst: &mut &mut [u8], for_id: ObjId, format: u32, modifiers: &[u8], ) { let l = length_evt_ext_image_copy_capture_session_v1_dmabuf_format(modifiers.len()); write_header(dst, for_id, l, 3, 0); write_u32(dst, format); write_array(dst, modifiers); } pub fn length_evt_ext_image_copy_capture_session_v1_dmabuf_format(modifiers_len: usize) -> usize { let mut v = 12; v += length_array(modifiers_len); v } pub fn parse_evt_ext_image_copy_capture_session_v1_dmabuf_format<'a>( mut msg: &'a [u8], ) -> Result<(u32, &'a [u8]), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; let arg2 = parse_array(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_EXT_IMAGE_COPY_CAPTURE_SESSION_V1_DMABUF_FORMAT: MethodId = MethodId::Event(3); pub fn write_evt_ext_image_copy_capture_session_v1_done(dst: &mut &mut [u8], for_id: ObjId) { let l = length_evt_ext_image_copy_capture_session_v1_done(); write_header(dst, for_id, l, 4, 0); } pub fn length_evt_ext_image_copy_capture_session_v1_done() -> usize { 8 } pub fn parse_evt_ext_image_copy_capture_session_v1_done<'a>( msg: &'a [u8], ) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_EXT_IMAGE_COPY_CAPTURE_SESSION_V1_DONE: MethodId = MethodId::Event(4); pub fn write_req_ext_image_copy_capture_session_v1_create_frame( dst: &mut &mut [u8], for_id: ObjId, frame: ObjId, ) {
let l = length_req_ext_image_copy_capture_session_v1_create_frame(); write_header(dst, for_id, l, 0, 0); write_obj(dst, frame); } pub fn length_req_ext_image_copy_capture_session_v1_create_frame() -> usize { 12 } pub fn parse_req_ext_image_copy_capture_session_v1_create_frame<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_EXT_IMAGE_COPY_CAPTURE_SESSION_V1_CREATE_FRAME: MethodId = MethodId::Request(0); pub fn write_req_ext_image_copy_capture_session_v1_destroy(dst: &mut &mut [u8], for_id: ObjId) { let l = length_req_ext_image_copy_capture_session_v1_destroy(); write_header(dst, for_id, l, 1, 0); } pub fn length_req_ext_image_copy_capture_session_v1_destroy() -> usize { 8 } pub fn parse_req_ext_image_copy_capture_session_v1_destroy<'a>( msg: &'a [u8], ) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_EXT_IMAGE_COPY_CAPTURE_SESSION_V1_DESTROY: MethodId = MethodId::Request(1); const DATA_EXT_IMAGE_COPY_CAPTURE_SESSION_V1: WaylandData = WaylandData { name: "ext_image_copy_capture_session_v1", evts: &[ WaylandMethod { name: "buffer_size", sig: &[Uint, Uint], destructor: false, }, WaylandMethod { name: "shm_format", sig: &[Uint], destructor: false, }, WaylandMethod { name: "dmabuf_device", sig: &[Array], destructor: false, }, WaylandMethod { name: "dmabuf_format", sig: &[Uint, Array], destructor: false, }, WaylandMethod { name: "done", sig: &[], destructor: false, }, WaylandMethod { name: "stopped", sig: &[], destructor: false, }, ], reqs: &[ WaylandMethod { name: "create_frame", sig: &[NewId(ExtImageCopyCaptureFrameV1)], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const EXT_IMAGE_COPY_CAPTURE_SESSION_V1: &[u8] = DATA_EXT_IMAGE_COPY_CAPTURE_SESSION_V1.name.as_bytes(); pub fn write_req_ext_image_copy_capture_frame_v1_destroy(dst: &mut &mut [u8], for_id: ObjId) { let l = length_req_ext_image_copy_capture_frame_v1_destroy(); write_header(dst, for_id, l, 0, 0); } pub fn length_req_ext_image_copy_capture_frame_v1_destroy() -> usize { 8 } pub fn parse_req_ext_image_copy_capture_frame_v1_destroy<'a>( msg: &'a [u8], ) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_EXT_IMAGE_COPY_CAPTURE_FRAME_V1_DESTROY: MethodId = MethodId::Request(0); pub fn write_req_ext_image_copy_capture_frame_v1_attach_buffer( dst: &mut &mut [u8], for_id: ObjId, buffer: ObjId, ) { let l = length_req_ext_image_copy_capture_frame_v1_attach_buffer(); write_header(dst, for_id, l, 1, 0); write_obj(dst, buffer); } pub fn length_req_ext_image_copy_capture_frame_v1_attach_buffer() -> usize { 12 } pub fn parse_req_ext_image_copy_capture_frame_v1_attach_buffer<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_EXT_IMAGE_COPY_CAPTURE_FRAME_V1_ATTACH_BUFFER: MethodId = MethodId::Request(1); pub fn write_req_ext_image_copy_capture_frame_v1_damage_buffer( dst: &mut &mut [u8], for_id: ObjId, x: i32, y: i32, width: i32, height: i32, ) { let l = length_req_ext_image_copy_capture_frame_v1_damage_buffer(); write_header(dst, for_id, l, 2, 0); write_i32(dst, x); write_i32(dst, y); write_i32(dst, width); write_i32(dst, height); } pub fn length_req_ext_image_copy_capture_frame_v1_damage_buffer() -> usize { 24 }
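// --- Editor's sketch (not part of the generated bindings) ---
// Every write_*/length_*/parse_* triple in this module follows one pattern:
// length_* returns the exact wire size, write_* fills the 8-byte message
// header and the fixed-width arguments while advancing the `dst` slice, and
// parse_* skips the header and requires the payload to be consumed exactly.
// A minimal round trip for the damage_buffer writer above and its parser just
// below, assuming ObjId is the u32 newtype defined earlier in this module and
// that write_header emits the 8-byte header:
#[cfg(test)]
mod damage_buffer_roundtrip_sketch {
    use super::*;

    #[test]
    fn roundtrip() {
        let len = length_req_ext_image_copy_capture_frame_v1_damage_buffer();
        let mut buf = vec![0u8; len];
        let mut dst: &mut [u8] = &mut buf[..];
        // The writer advances `dst`, so an exactly-sized buffer is fully used.
        write_req_ext_image_copy_capture_frame_v1_damage_buffer(&mut dst, ObjId(1), 0, 0, 64, 64);
        assert!(dst.is_empty());
        // The parser rejects trailing bytes, so the written message parses back losslessly.
        assert_eq!(
            parse_req_ext_image_copy_capture_frame_v1_damage_buffer(&buf),
            Ok((0, 0, 64, 64))
        );
    }
}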
pub fn parse_req_ext_image_copy_capture_frame_v1_damage_buffer<'a>( mut msg: &'a [u8], ) -> Result<(i32, i32, i32, i32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_i32(&mut msg)?; let arg2 = parse_i32(&mut msg)?; let arg3 = parse_i32(&mut msg)?; let arg4 = parse_i32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3, arg4)) } pub const OPCODE_EXT_IMAGE_COPY_CAPTURE_FRAME_V1_DAMAGE_BUFFER: MethodId = MethodId::Request(2); pub fn write_req_ext_image_copy_capture_frame_v1_capture(dst: &mut &mut [u8], for_id: ObjId) { let l = length_req_ext_image_copy_capture_frame_v1_capture(); write_header(dst, for_id, l, 3, 0); } pub fn length_req_ext_image_copy_capture_frame_v1_capture() -> usize { 8 } pub fn parse_req_ext_image_copy_capture_frame_v1_capture<'a>( msg: &'a [u8], ) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_EXT_IMAGE_COPY_CAPTURE_FRAME_V1_CAPTURE: MethodId = MethodId::Request(3); pub fn write_evt_ext_image_copy_capture_frame_v1_presentation_time( dst: &mut &mut [u8], for_id: ObjId, tv_sec_hi: u32, tv_sec_lo: u32, tv_nsec: u32, ) { let l = length_evt_ext_image_copy_capture_frame_v1_presentation_time(); write_header(dst, for_id, l, 2, 0); write_u32(dst, tv_sec_hi); write_u32(dst, tv_sec_lo); write_u32(dst, tv_nsec); } pub fn length_evt_ext_image_copy_capture_frame_v1_presentation_time() -> usize { 20 } pub fn parse_evt_ext_image_copy_capture_frame_v1_presentation_time<'a>( mut msg: &'a [u8], ) -> Result<(u32, u32, u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; let arg2 = parse_u32(&mut msg)?; let arg3 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3)) } pub const OPCODE_EXT_IMAGE_COPY_CAPTURE_FRAME_V1_PRESENTATION_TIME: MethodId = MethodId::Event(2); pub fn write_evt_ext_image_copy_capture_frame_v1_ready(dst: &mut &mut [u8], for_id: ObjId) { let l = length_evt_ext_image_copy_capture_frame_v1_ready(); write_header(dst, for_id, l, 3, 0); } pub fn length_evt_ext_image_copy_capture_frame_v1_ready() -> usize { 8 } pub fn parse_evt_ext_image_copy_capture_frame_v1_ready<'a>( msg: &'a [u8], ) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_EXT_IMAGE_COPY_CAPTURE_FRAME_V1_READY: MethodId = MethodId::Event(3); pub fn write_evt_ext_image_copy_capture_frame_v1_failed( dst: &mut &mut [u8], for_id: ObjId, reason: u32, ) { let l = length_evt_ext_image_copy_capture_frame_v1_failed(); write_header(dst, for_id, l, 4, 0); write_u32(dst, reason); } pub fn length_evt_ext_image_copy_capture_frame_v1_failed() -> usize { 12 } pub fn parse_evt_ext_image_copy_capture_frame_v1_failed<'a>( mut msg: &'a [u8], ) -> Result<u32, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_EXT_IMAGE_COPY_CAPTURE_FRAME_V1_FAILED: MethodId = MethodId::Event(4); const DATA_EXT_IMAGE_COPY_CAPTURE_FRAME_V1: WaylandData = WaylandData { name: "ext_image_copy_capture_frame_v1", evts: &[ WaylandMethod { name: "transform", sig: &[Uint], destructor: false, }, WaylandMethod { name: "damage", sig: &[Int, Int, Int, Int], destructor: false, }, WaylandMethod { name: "presentation_time", sig: &[Uint, Uint, Uint], destructor: false, }, WaylandMethod { name: "ready", sig: &[], destructor: false, }, WaylandMethod { name: "failed", sig: &[Uint], destructor: false, }, ], reqs: &[ WaylandMethod { name:
"destroy", sig: &[], destructor: true, }, WaylandMethod { name: "attach_buffer", sig: &[Object(WlBuffer)], destructor: false, }, WaylandMethod { name: "damage_buffer", sig: &[Int, Int, Int, Int], destructor: false, }, WaylandMethod { name: "capture", sig: &[], destructor: false, }, ], version: 1, }; pub const EXT_IMAGE_COPY_CAPTURE_FRAME_V1: &[u8] = DATA_EXT_IMAGE_COPY_CAPTURE_FRAME_V1.name.as_bytes(); pub fn write_req_ext_image_copy_capture_cursor_session_v1_get_capture_session( dst: &mut &mut [u8], for_id: ObjId, session: ObjId, ) { let l = length_req_ext_image_copy_capture_cursor_session_v1_get_capture_session(); write_header(dst, for_id, l, 1, 0); write_obj(dst, session); } pub fn length_req_ext_image_copy_capture_cursor_session_v1_get_capture_session() -> usize { 12 } pub fn parse_req_ext_image_copy_capture_cursor_session_v1_get_capture_session<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_EXT_IMAGE_COPY_CAPTURE_CURSOR_SESSION_V1_GET_CAPTURE_SESSION: MethodId = MethodId::Request(1); const DATA_EXT_IMAGE_COPY_CAPTURE_CURSOR_SESSION_V1: WaylandData = WaylandData { name: "ext_image_copy_capture_cursor_session_v1", evts: &[ WaylandMethod { name: "enter", sig: &[], destructor: false, }, WaylandMethod { name: "leave", sig: &[], destructor: false, }, WaylandMethod { name: "position", sig: &[Int, Int], destructor: false, }, WaylandMethod { name: "hotspot", sig: &[Int, Int], destructor: false, }, ], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "get_capture_session", sig: &[NewId(ExtImageCopyCaptureSessionV1)], destructor: false, }, ], version: 1, }; pub const EXT_IMAGE_COPY_CAPTURE_CURSOR_SESSION_V1: &[u8] = DATA_EXT_IMAGE_COPY_CAPTURE_CURSOR_SESSION_V1 .name .as_bytes(); pub fn write_req_gtk_primary_selection_device_manager_create_source( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, ) { let l = length_req_gtk_primary_selection_device_manager_create_source(); write_header(dst, for_id, l, 0, 0); write_obj(dst, id); } pub fn length_req_gtk_primary_selection_device_manager_create_source() -> usize { 12 } pub fn parse_req_gtk_primary_selection_device_manager_create_source<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_GTK_PRIMARY_SELECTION_DEVICE_MANAGER_CREATE_SOURCE: MethodId = MethodId::Request(0); pub fn write_req_gtk_primary_selection_device_manager_get_device( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, seat: ObjId, ) { let l = length_req_gtk_primary_selection_device_manager_get_device(); write_header(dst, for_id, l, 1, 0); write_obj(dst, id); write_obj(dst, seat); } pub fn length_req_gtk_primary_selection_device_manager_get_device() -> usize { 16 } pub fn parse_req_gtk_primary_selection_device_manager_get_device<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, ObjId), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_GTK_PRIMARY_SELECTION_DEVICE_MANAGER_GET_DEVICE: MethodId = MethodId::Request(1); const DATA_GTK_PRIMARY_SELECTION_DEVICE_MANAGER: WaylandData = WaylandData { name: "gtk_primary_selection_device_manager", evts: &[], reqs: &[ WaylandMethod { name: "create_source", sig:
&[NewId(GtkPrimarySelectionSource)], destructor: false, }, WaylandMethod { name: "get_device", sig: &[NewId(GtkPrimarySelectionDevice), Object(WlSeat)], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const GTK_PRIMARY_SELECTION_DEVICE_MANAGER: &[u8] = DATA_GTK_PRIMARY_SELECTION_DEVICE_MANAGER.name.as_bytes(); pub fn write_req_gtk_primary_selection_device_set_selection( dst: &mut &mut [u8], for_id: ObjId, source: ObjId, serial: u32, ) { let l = length_req_gtk_primary_selection_device_set_selection(); write_header(dst, for_id, l, 0, 0); write_obj(dst, source); write_u32(dst, serial); } pub fn length_req_gtk_primary_selection_device_set_selection() -> usize { 16 } pub fn parse_req_gtk_primary_selection_device_set_selection<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_GTK_PRIMARY_SELECTION_DEVICE_SET_SELECTION: MethodId = MethodId::Request(0); pub fn write_evt_gtk_primary_selection_device_data_offer( dst: &mut &mut [u8], for_id: ObjId, offer: ObjId, ) { let l = length_evt_gtk_primary_selection_device_data_offer(); write_header(dst, for_id, l, 0, 0); write_obj(dst, offer); } pub fn length_evt_gtk_primary_selection_device_data_offer() -> usize { 12 } pub fn parse_evt_gtk_primary_selection_device_data_offer<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_GTK_PRIMARY_SELECTION_DEVICE_DATA_OFFER: MethodId = MethodId::Event(0); pub fn write_evt_gtk_primary_selection_device_selection( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, ) { let l = length_evt_gtk_primary_selection_device_selection(); write_header(dst, for_id, l, 1, 0); write_obj(dst, id); } pub fn length_evt_gtk_primary_selection_device_selection() -> usize { 12 } pub fn parse_evt_gtk_primary_selection_device_selection<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_GTK_PRIMARY_SELECTION_DEVICE_SELECTION: MethodId = MethodId::Event(1); const DATA_GTK_PRIMARY_SELECTION_DEVICE: WaylandData = WaylandData { name: "gtk_primary_selection_device", evts: &[ WaylandMethod { name: "data_offer", sig: &[NewId(GtkPrimarySelectionOffer)], destructor: false, }, WaylandMethod { name: "selection", sig: &[Object(GtkPrimarySelectionOffer)], destructor: false, }, ], reqs: &[ WaylandMethod { name: "set_selection", sig: &[Object(GtkPrimarySelectionSource), Uint], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const GTK_PRIMARY_SELECTION_DEVICE: &[u8] = DATA_GTK_PRIMARY_SELECTION_DEVICE.name.as_bytes(); pub fn write_req_gtk_primary_selection_offer_receive( dst: &mut &mut [u8], for_id: ObjId, tag_fds: bool, mime_type: &[u8], ) { let l = length_req_gtk_primary_selection_offer_receive(mime_type.len()); write_header(dst, for_id, l, 0, if tag_fds { 1 } else { 0 }); write_string(dst, Some(mime_type)); } pub fn length_req_gtk_primary_selection_offer_receive(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_req_gtk_primary_selection_offer_receive<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg =
msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_GTK_PRIMARY_SELECTION_OFFER_RECEIVE: MethodId = MethodId::Request(0); pub fn write_evt_gtk_primary_selection_offer_offer( dst: &mut &mut [u8], for_id: ObjId, mime_type: &[u8], ) { let l = length_evt_gtk_primary_selection_offer_offer(mime_type.len()); write_header(dst, for_id, l, 0, 0); write_string(dst, Some(mime_type)); } pub fn length_evt_gtk_primary_selection_offer_offer(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_evt_gtk_primary_selection_offer_offer<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_GTK_PRIMARY_SELECTION_OFFER_OFFER: MethodId = MethodId::Event(0); const DATA_GTK_PRIMARY_SELECTION_OFFER: WaylandData = WaylandData { name: "gtk_primary_selection_offer", evts: &[WaylandMethod { name: "offer", sig: &[String], destructor: false, }], reqs: &[ WaylandMethod { name: "receive", sig: &[String, Fd], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const GTK_PRIMARY_SELECTION_OFFER: &[u8] = DATA_GTK_PRIMARY_SELECTION_OFFER.name.as_bytes(); pub fn write_req_gtk_primary_selection_source_offer( dst: &mut &mut [u8], for_id: ObjId, mime_type: &[u8], ) { let l = length_req_gtk_primary_selection_source_offer(mime_type.len()); write_header(dst, for_id, l, 0, 0); write_string(dst, Some(mime_type)); } pub fn length_req_gtk_primary_selection_source_offer(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_req_gtk_primary_selection_source_offer<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_GTK_PRIMARY_SELECTION_SOURCE_OFFER: MethodId = MethodId::Request(0); pub fn write_evt_gtk_primary_selection_source_send( dst: &mut &mut [u8], for_id: ObjId, tag_fds: bool, mime_type: &[u8], ) { let l = length_evt_gtk_primary_selection_source_send(mime_type.len()); write_header(dst, for_id, l, 0, if tag_fds { 1 } else { 0 }); write_string(dst, Some(mime_type)); } pub fn length_evt_gtk_primary_selection_source_send(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_evt_gtk_primary_selection_source_send<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_GTK_PRIMARY_SELECTION_SOURCE_SEND: MethodId = MethodId::Event(0); const DATA_GTK_PRIMARY_SELECTION_SOURCE: WaylandData = WaylandData { name: "gtk_primary_selection_source", evts: &[ WaylandMethod { name: "send", sig: &[String, Fd], destructor: false, }, WaylandMethod { name: "cancelled", sig: &[], destructor: false, }, ], reqs: &[ WaylandMethod { name: "offer", sig: &[String], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const GTK_PRIMARY_SELECTION_SOURCE: &[u8] = DATA_GTK_PRIMARY_SELECTION_SOURCE.name.as_bytes(); const DATA_ZWP_INPUT_METHOD_V2: WaylandData = WaylandData { name:
"zwp_input_method_v2", evts: &[ WaylandMethod { name: "activate", sig: &[], destructor: false, }, WaylandMethod { name: "deactivate", sig: &[], destructor: false, }, WaylandMethod { name: "surrounding_text", sig: &[String, Uint, Uint], destructor: false, }, WaylandMethod { name: "text_change_cause", sig: &[Uint], destructor: false, }, WaylandMethod { name: "content_type", sig: &[Uint, Uint], destructor: false, }, WaylandMethod { name: "done", sig: &[], destructor: false, }, WaylandMethod { name: "unavailable", sig: &[], destructor: false, }, ], reqs: &[ WaylandMethod { name: "commit_string", sig: &[String], destructor: false, }, WaylandMethod { name: "set_preedit_string", sig: &[String, Int, Int], destructor: false, }, WaylandMethod { name: "delete_surrounding_text", sig: &[Uint, Uint], destructor: false, }, WaylandMethod { name: "commit", sig: &[Uint], destructor: false, }, WaylandMethod { name: "get_input_popup_surface", sig: &[NewId(ZwpInputPopupSurfaceV2), Object(WlSurface)], destructor: false, }, WaylandMethod { name: "grab_keyboard", sig: &[NewId(ZwpInputMethodKeyboardGrabV2)], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const ZWP_INPUT_METHOD_V2: &[u8] = DATA_ZWP_INPUT_METHOD_V2.name.as_bytes(); const DATA_ZWP_INPUT_POPUP_SURFACE_V2: WaylandData = WaylandData { name: "zwp_input_popup_surface_v2", evts: &[WaylandMethod { name: "text_input_rectangle", sig: &[Int, Int, Int, Int], destructor: false, }], reqs: &[WaylandMethod { name: "destroy", sig: &[], destructor: true, }], version: 1, }; pub const ZWP_INPUT_POPUP_SURFACE_V2: &[u8] = DATA_ZWP_INPUT_POPUP_SURFACE_V2.name.as_bytes(); const DATA_ZWP_INPUT_METHOD_KEYBOARD_GRAB_V2: WaylandData = WaylandData { name: "zwp_input_method_keyboard_grab_v2", evts: &[ WaylandMethod { name: "keymap", sig: &[Uint, Fd, Uint], destructor: false, }, WaylandMethod { name: "key", sig: &[Uint, Uint, Uint, Uint], destructor: false, }, WaylandMethod { name: "modifiers", sig: &[Uint, Uint, Uint, Uint, Uint], destructor: false, }, WaylandMethod { name: "repeat_info", sig: &[Int, Int], destructor: false, }, ], reqs: &[WaylandMethod { name: "release", sig: &[], destructor: true, }], version: 1, }; pub const ZWP_INPUT_METHOD_KEYBOARD_GRAB_V2: &[u8] = DATA_ZWP_INPUT_METHOD_KEYBOARD_GRAB_V2.name.as_bytes(); const DATA_ZWP_INPUT_METHOD_MANAGER_V2: WaylandData = WaylandData { name: "zwp_input_method_manager_v2", evts: &[], reqs: &[ WaylandMethod { name: "get_input_method", sig: &[Object(WlSeat), NewId(ZwpInputMethodV2)], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const ZWP_INPUT_METHOD_MANAGER_V2: &[u8] = DATA_ZWP_INPUT_METHOD_MANAGER_V2.name.as_bytes(); pub fn write_req_zwp_linux_dmabuf_v1_create_params( dst: &mut &mut [u8], for_id: ObjId, params_id: ObjId, ) { let l = length_req_zwp_linux_dmabuf_v1_create_params(); write_header(dst, for_id, l, 1, 0); write_obj(dst, params_id); } pub fn length_req_zwp_linux_dmabuf_v1_create_params() -> usize { 12 } pub fn parse_req_zwp_linux_dmabuf_v1_create_params<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWP_LINUX_DMABUF_V1_CREATE_PARAMS: MethodId = MethodId::Request(1); pub fn write_evt_zwp_linux_dmabuf_v1_format(dst: &mut &mut [u8], for_id: ObjId, format: u32) { let l = length_evt_zwp_linux_dmabuf_v1_format(); write_header(dst, for_id, l, 0, 0); write_u32(dst,
format); } pub fn length_evt_zwp_linux_dmabuf_v1_format() -> usize { 12 } pub fn parse_evt_zwp_linux_dmabuf_v1_format<'a>(mut msg: &'a [u8]) -> Result<u32, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWP_LINUX_DMABUF_V1_FORMAT: MethodId = MethodId::Event(0); pub fn write_evt_zwp_linux_dmabuf_v1_modifier( dst: &mut &mut [u8], for_id: ObjId, format: u32, modifier_hi: u32, modifier_lo: u32, ) { let l = length_evt_zwp_linux_dmabuf_v1_modifier(); write_header(dst, for_id, l, 1, 0); write_u32(dst, format); write_u32(dst, modifier_hi); write_u32(dst, modifier_lo); } pub fn length_evt_zwp_linux_dmabuf_v1_modifier() -> usize { 20 } pub fn parse_evt_zwp_linux_dmabuf_v1_modifier<'a>( mut msg: &'a [u8], ) -> Result<(u32, u32, u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; let arg2 = parse_u32(&mut msg)?; let arg3 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3)) } pub const OPCODE_ZWP_LINUX_DMABUF_V1_MODIFIER: MethodId = MethodId::Event(1); pub fn write_req_zwp_linux_dmabuf_v1_get_default_feedback( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, ) { let l = length_req_zwp_linux_dmabuf_v1_get_default_feedback(); write_header(dst, for_id, l, 2, 0); write_obj(dst, id); } pub fn length_req_zwp_linux_dmabuf_v1_get_default_feedback() -> usize { 12 } pub fn parse_req_zwp_linux_dmabuf_v1_get_default_feedback<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWP_LINUX_DMABUF_V1_GET_DEFAULT_FEEDBACK: MethodId = MethodId::Request(2); pub fn write_req_zwp_linux_dmabuf_v1_get_surface_feedback( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, surface: ObjId, ) { let l = length_req_zwp_linux_dmabuf_v1_get_surface_feedback(); write_header(dst, for_id, l, 3, 0); write_obj(dst, id); write_obj(dst, surface); } pub fn length_req_zwp_linux_dmabuf_v1_get_surface_feedback() -> usize { 16 } pub fn parse_req_zwp_linux_dmabuf_v1_get_surface_feedback<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, ObjId), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_ZWP_LINUX_DMABUF_V1_GET_SURFACE_FEEDBACK: MethodId = MethodId::Request(3); const DATA_ZWP_LINUX_DMABUF_V1: WaylandData = WaylandData { name: "zwp_linux_dmabuf_v1", evts: &[ WaylandMethod { name: "format", sig: &[Uint], destructor: false, }, WaylandMethod { name: "modifier", sig: &[Uint, Uint, Uint], destructor: false, }, ], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "create_params", sig: &[NewId(ZwpLinuxBufferParamsV1)], destructor: false, }, WaylandMethod { name: "get_default_feedback", sig: &[NewId(ZwpLinuxDmabufFeedbackV1)], destructor: false, }, WaylandMethod { name: "get_surface_feedback", sig: &[NewId(ZwpLinuxDmabufFeedbackV1), Object(WlSurface)], destructor: false, }, ], version: 5, }; pub const ZWP_LINUX_DMABUF_V1: &[u8] = DATA_ZWP_LINUX_DMABUF_V1.name.as_bytes(); pub fn write_req_zwp_linux_buffer_params_v1_destroy(dst: &mut &mut [u8], for_id: ObjId) { let l = length_req_zwp_linux_buffer_params_v1_destroy(); write_header(dst, for_id, l, 0, 0); } pub fn length_req_zwp_linux_buffer_params_v1_destroy() -> usize { 8 } pub fn
parse_req_zwp_linux_buffer_params_v1_destroy<'a>(msg: &'a [u8]) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_ZWP_LINUX_BUFFER_PARAMS_V1_DESTROY: MethodId = MethodId::Request(0); pub fn write_req_zwp_linux_buffer_params_v1_add( dst: &mut &mut [u8], for_id: ObjId, tag_fds: bool, plane_idx: u32, offset: u32, stride: u32, modifier_hi: u32, modifier_lo: u32, ) { let l = length_req_zwp_linux_buffer_params_v1_add(); write_header(dst, for_id, l, 1, if tag_fds { 1 } else { 0 }); write_u32(dst, plane_idx); write_u32(dst, offset); write_u32(dst, stride); write_u32(dst, modifier_hi); write_u32(dst, modifier_lo); } pub fn length_req_zwp_linux_buffer_params_v1_add() -> usize { 28 } pub fn parse_req_zwp_linux_buffer_params_v1_add<'a>( mut msg: &'a [u8], ) -> Result<(u32, u32, u32, u32, u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg2 = parse_u32(&mut msg)?; let arg3 = parse_u32(&mut msg)?; let arg4 = parse_u32(&mut msg)?; let arg5 = parse_u32(&mut msg)?; let arg6 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg2, arg3, arg4, arg5, arg6)) } pub const OPCODE_ZWP_LINUX_BUFFER_PARAMS_V1_ADD: MethodId = MethodId::Request(1); pub fn write_req_zwp_linux_buffer_params_v1_create( dst: &mut &mut [u8], for_id: ObjId, width: i32, height: i32, format: u32, flags: u32, ) { let l = length_req_zwp_linux_buffer_params_v1_create(); write_header(dst, for_id, l, 2, 0); write_i32(dst, width); write_i32(dst, height); write_u32(dst, format); write_u32(dst, flags); } pub fn length_req_zwp_linux_buffer_params_v1_create() -> usize { 24 } pub fn parse_req_zwp_linux_buffer_params_v1_create<'a>( mut msg: &'a [u8], ) -> Result<(i32, i32, u32, u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_i32(&mut msg)?; let arg2 = parse_i32(&mut msg)?; let arg3 = parse_u32(&mut msg)?; let arg4 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3, arg4)) } pub const OPCODE_ZWP_LINUX_BUFFER_PARAMS_V1_CREATE: MethodId = MethodId::Request(2); pub fn write_evt_zwp_linux_buffer_params_v1_created( dst: &mut &mut [u8], for_id: ObjId, buffer: ObjId, ) { let l = length_evt_zwp_linux_buffer_params_v1_created(); write_header(dst, for_id, l, 0, 0); write_obj(dst, buffer); } pub fn length_evt_zwp_linux_buffer_params_v1_created() -> usize { 12 } pub fn parse_evt_zwp_linux_buffer_params_v1_created<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWP_LINUX_BUFFER_PARAMS_V1_CREATED: MethodId = MethodId::Event(0); pub fn write_evt_zwp_linux_buffer_params_v1_failed(dst: &mut &mut [u8], for_id: ObjId) { let l = length_evt_zwp_linux_buffer_params_v1_failed(); write_header(dst, for_id, l, 1, 0); } pub fn length_evt_zwp_linux_buffer_params_v1_failed() -> usize { 8 } pub fn parse_evt_zwp_linux_buffer_params_v1_failed<'a>(msg: &'a [u8]) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_ZWP_LINUX_BUFFER_PARAMS_V1_FAILED: MethodId = MethodId::Event(1); pub fn write_req_zwp_linux_buffer_params_v1_create_immed( dst: &mut &mut [u8], for_id: ObjId, buffer_id: ObjId, width: i32, height: i32, format: u32, flags: u32, ) { let l = length_req_zwp_linux_buffer_params_v1_create_immed(); write_header(dst, for_id, l, 3, 0); write_obj(dst, buffer_id); write_i32(dst, width); write_i32(dst, height); write_u32(dst, format);
write_u32(dst, flags); } pub fn length_req_zwp_linux_buffer_params_v1_create_immed() -> usize { 28 } pub fn parse_req_zwp_linux_buffer_params_v1_create_immed<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, i32, i32, u32, u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_i32(&mut msg)?; let arg3 = parse_i32(&mut msg)?; let arg4 = parse_u32(&mut msg)?; let arg5 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3, arg4, arg5)) } pub const OPCODE_ZWP_LINUX_BUFFER_PARAMS_V1_CREATE_IMMED: MethodId = MethodId::Request(3); const DATA_ZWP_LINUX_BUFFER_PARAMS_V1: WaylandData = WaylandData { name: "zwp_linux_buffer_params_v1", evts: &[ WaylandMethod { name: "created", sig: &[NewId(WlBuffer)], destructor: false, }, WaylandMethod { name: "failed", sig: &[], destructor: false, }, ], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "add", sig: &[Fd, Uint, Uint, Uint, Uint, Uint], destructor: false, }, WaylandMethod { name: "create", sig: &[Int, Int, Uint, Uint], destructor: false, }, WaylandMethod { name: "create_immed", sig: &[NewId(WlBuffer), Int, Int, Uint, Uint], destructor: false, }, ], version: 5, }; pub const ZWP_LINUX_BUFFER_PARAMS_V1: &[u8] = DATA_ZWP_LINUX_BUFFER_PARAMS_V1.name.as_bytes(); pub fn write_evt_zwp_linux_dmabuf_feedback_v1_done(dst: &mut &mut [u8], for_id: ObjId) { let l = length_evt_zwp_linux_dmabuf_feedback_v1_done(); write_header(dst, for_id, l, 0, 0); } pub fn length_evt_zwp_linux_dmabuf_feedback_v1_done() -> usize { 8 } pub fn parse_evt_zwp_linux_dmabuf_feedback_v1_done<'a>(msg: &'a [u8]) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_DONE: MethodId = MethodId::Event(0); pub fn write_evt_zwp_linux_dmabuf_feedback_v1_format_table( dst: &mut &mut [u8], for_id: ObjId, tag_fds: bool, size: u32, ) { let l = length_evt_zwp_linux_dmabuf_feedback_v1_format_table(); write_header(dst, for_id, l, 1, if tag_fds { 1 } else { 0 }); write_u32(dst, size); } pub fn length_evt_zwp_linux_dmabuf_feedback_v1_format_table() -> usize { 12 } pub fn parse_evt_zwp_linux_dmabuf_feedback_v1_format_table<'a>( mut msg: &'a [u8], ) -> Result<u32, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg2 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg2) } pub const OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_FORMAT_TABLE: MethodId = MethodId::Event(1); pub fn write_evt_zwp_linux_dmabuf_feedback_v1_main_device( dst: &mut &mut [u8], for_id: ObjId, device: &[u8], ) { let l = length_evt_zwp_linux_dmabuf_feedback_v1_main_device(device.len()); write_header(dst, for_id, l, 2, 0); write_array(dst, device); } pub fn length_evt_zwp_linux_dmabuf_feedback_v1_main_device(device_len: usize) -> usize { let mut v = 8; v += length_array(device_len); v } pub fn parse_evt_zwp_linux_dmabuf_feedback_v1_main_device<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_array(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_MAIN_DEVICE: MethodId = MethodId::Event(2); pub fn write_evt_zwp_linux_dmabuf_feedback_v1_tranche_done(dst: &mut &mut [u8], for_id: ObjId) { let l = length_evt_zwp_linux_dmabuf_feedback_v1_tranche_done(); write_header(dst, for_id, l, 3, 0); } pub fn length_evt_zwp_linux_dmabuf_feedback_v1_tranche_done() -> usize { 8 } pub fn
parse_evt_zwp_linux_dmabuf_feedback_v1_tranche_done<'a>( msg: &'a [u8], ) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_TRANCHE_DONE: MethodId = MethodId::Event(3); pub fn write_evt_zwp_linux_dmabuf_feedback_v1_tranche_target_device( dst: &mut &mut [u8], for_id: ObjId, device: &[u8], ) { let l = length_evt_zwp_linux_dmabuf_feedback_v1_tranche_target_device(device.len()); write_header(dst, for_id, l, 4, 0); write_array(dst, device); } pub fn length_evt_zwp_linux_dmabuf_feedback_v1_tranche_target_device(device_len: usize) -> usize { let mut v = 8; v += length_array(device_len); v } pub fn parse_evt_zwp_linux_dmabuf_feedback_v1_tranche_target_device<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_array(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_TRANCHE_TARGET_DEVICE: MethodId = MethodId::Event(4); pub fn write_evt_zwp_linux_dmabuf_feedback_v1_tranche_formats( dst: &mut &mut [u8], for_id: ObjId, indices: &[u8], ) { let l = length_evt_zwp_linux_dmabuf_feedback_v1_tranche_formats(indices.len()); write_header(dst, for_id, l, 5, 0); write_array(dst, indices); } pub fn length_evt_zwp_linux_dmabuf_feedback_v1_tranche_formats(indices_len: usize) -> usize { let mut v = 8; v += length_array(indices_len); v } pub fn parse_evt_zwp_linux_dmabuf_feedback_v1_tranche_formats<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_array(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_TRANCHE_FORMATS: MethodId = MethodId::Event(5); pub fn write_evt_zwp_linux_dmabuf_feedback_v1_tranche_flags( dst: &mut &mut [u8], for_id: ObjId, flags: u32, ) { let l = length_evt_zwp_linux_dmabuf_feedback_v1_tranche_flags(); write_header(dst, for_id, l, 6, 0); write_u32(dst, flags); } pub fn length_evt_zwp_linux_dmabuf_feedback_v1_tranche_flags() -> usize { 12 } pub fn parse_evt_zwp_linux_dmabuf_feedback_v1_tranche_flags<'a>( mut msg: &'a [u8], ) -> Result<u32, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWP_LINUX_DMABUF_FEEDBACK_V1_TRANCHE_FLAGS: MethodId = MethodId::Event(6); const DATA_ZWP_LINUX_DMABUF_FEEDBACK_V1: WaylandData = WaylandData { name: "zwp_linux_dmabuf_feedback_v1", evts: &[ WaylandMethod { name: "done", sig: &[], destructor: false, }, WaylandMethod { name: "format_table", sig: &[Fd, Uint], destructor: false, }, WaylandMethod { name: "main_device", sig: &[Array], destructor: false, }, WaylandMethod { name: "tranche_done", sig: &[], destructor: false, }, WaylandMethod { name: "tranche_target_device", sig: &[Array], destructor: false, }, WaylandMethod { name: "tranche_formats", sig: &[Array], destructor: false, }, WaylandMethod { name: "tranche_flags", sig: &[Uint], destructor: false, }, ], reqs: &[WaylandMethod { name: "destroy", sig: &[], destructor: true, }], version: 5, }; pub const ZWP_LINUX_DMABUF_FEEDBACK_V1: &[u8] = DATA_ZWP_LINUX_DMABUF_FEEDBACK_V1.name.as_bytes(); pub fn write_req_wp_linux_drm_syncobj_manager_v1_get_surface( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, surface: ObjId, ) { let l = length_req_wp_linux_drm_syncobj_manager_v1_get_surface(); write_header(dst, for_id, l, 1, 0); write_obj(dst, id); write_obj(dst, surface); } pub fn
length_req_wp_linux_drm_syncobj_manager_v1_get_surface() -> usize { 16 } pub fn parse_req_wp_linux_drm_syncobj_manager_v1_get_surface<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, ObjId), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_WP_LINUX_DRM_SYNCOBJ_MANAGER_V1_GET_SURFACE: MethodId = MethodId::Request(1); pub fn write_req_wp_linux_drm_syncobj_manager_v1_import_timeline( dst: &mut &mut [u8], for_id: ObjId, tag_fds: bool, id: ObjId, ) { let l = length_req_wp_linux_drm_syncobj_manager_v1_import_timeline(); write_header(dst, for_id, l, 2, if tag_fds { 1 } else { 0 }); write_obj(dst, id); } pub fn length_req_wp_linux_drm_syncobj_manager_v1_import_timeline() -> usize { 12 } pub fn parse_req_wp_linux_drm_syncobj_manager_v1_import_timeline<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WP_LINUX_DRM_SYNCOBJ_MANAGER_V1_IMPORT_TIMELINE: MethodId = MethodId::Request(2); const DATA_WP_LINUX_DRM_SYNCOBJ_MANAGER_V1: WaylandData = WaylandData { name: "wp_linux_drm_syncobj_manager_v1", evts: &[], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "get_surface", sig: &[NewId(WpLinuxDrmSyncobjSurfaceV1), Object(WlSurface)], destructor: false, }, WaylandMethod { name: "import_timeline", sig: &[NewId(WpLinuxDrmSyncobjTimelineV1), Fd], destructor: false, }, ], version: 1, }; pub const WP_LINUX_DRM_SYNCOBJ_MANAGER_V1: &[u8] = DATA_WP_LINUX_DRM_SYNCOBJ_MANAGER_V1.name.as_bytes(); const DATA_WP_LINUX_DRM_SYNCOBJ_TIMELINE_V1: WaylandData = WaylandData { name: "wp_linux_drm_syncobj_timeline_v1", evts: &[], reqs: &[WaylandMethod { name: "destroy", sig: &[], destructor: true, }], version: 1, }; pub const WP_LINUX_DRM_SYNCOBJ_TIMELINE_V1: &[u8] = DATA_WP_LINUX_DRM_SYNCOBJ_TIMELINE_V1.name.as_bytes(); pub fn write_req_wp_linux_drm_syncobj_surface_v1_set_acquire_point( dst: &mut &mut [u8], for_id: ObjId, timeline: ObjId, point_hi: u32, point_lo: u32, ) { let l = length_req_wp_linux_drm_syncobj_surface_v1_set_acquire_point(); write_header(dst, for_id, l, 1, 0); write_obj(dst, timeline); write_u32(dst, point_hi); write_u32(dst, point_lo); } pub fn length_req_wp_linux_drm_syncobj_surface_v1_set_acquire_point() -> usize { 20 } pub fn parse_req_wp_linux_drm_syncobj_surface_v1_set_acquire_point<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, u32, u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_u32(&mut msg)?; let arg3 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3)) } pub const OPCODE_WP_LINUX_DRM_SYNCOBJ_SURFACE_V1_SET_ACQUIRE_POINT: MethodId = MethodId::Request(1); pub fn write_req_wp_linux_drm_syncobj_surface_v1_set_release_point( dst: &mut &mut [u8], for_id: ObjId, timeline: ObjId, point_hi: u32, point_lo: u32, ) { let l = length_req_wp_linux_drm_syncobj_surface_v1_set_release_point(); write_header(dst, for_id, l, 2, 0); write_obj(dst, timeline); write_u32(dst, point_hi); write_u32(dst, point_lo); } pub fn length_req_wp_linux_drm_syncobj_surface_v1_set_release_point() -> usize { 20 } pub fn parse_req_wp_linux_drm_syncobj_surface_v1_set_release_point<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, u32, u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 =
parse_obj(&mut msg)?; let arg2 = parse_u32(&mut msg)?; let arg3 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3)) } pub const OPCODE_WP_LINUX_DRM_SYNCOBJ_SURFACE_V1_SET_RELEASE_POINT: MethodId = MethodId::Request(2); const DATA_WP_LINUX_DRM_SYNCOBJ_SURFACE_V1: WaylandData = WaylandData { name: "wp_linux_drm_syncobj_surface_v1", evts: &[], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "set_acquire_point", sig: &[Object(WpLinuxDrmSyncobjTimelineV1), Uint, Uint], destructor: false, }, WaylandMethod { name: "set_release_point", sig: &[Object(WpLinuxDrmSyncobjTimelineV1), Uint, Uint], destructor: false, }, ], version: 1, }; pub const WP_LINUX_DRM_SYNCOBJ_SURFACE_V1: &[u8] = DATA_WP_LINUX_DRM_SYNCOBJ_SURFACE_V1.name.as_bytes(); pub fn write_req_wp_presentation_destroy(dst: &mut &mut [u8], for_id: ObjId) { let l = length_req_wp_presentation_destroy(); write_header(dst, for_id, l, 0, 0); } pub fn length_req_wp_presentation_destroy() -> usize { 8 } pub fn parse_req_wp_presentation_destroy<'a>(msg: &'a [u8]) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_WP_PRESENTATION_DESTROY: MethodId = MethodId::Request(0); pub fn write_req_wp_presentation_feedback( dst: &mut &mut [u8], for_id: ObjId, surface: ObjId, callback: ObjId, ) { let l = length_req_wp_presentation_feedback(); write_header(dst, for_id, l, 1, 0); write_obj(dst, surface); write_obj(dst, callback); } pub fn length_req_wp_presentation_feedback() -> usize { 16 } pub fn parse_req_wp_presentation_feedback<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, ObjId), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_WP_PRESENTATION_FEEDBACK: MethodId = MethodId::Request(1); pub fn write_evt_wp_presentation_clock_id(dst: &mut &mut [u8], for_id: ObjId, clk_id: u32) { let l = length_evt_wp_presentation_clock_id(); write_header(dst, for_id, l, 0, 0); write_u32(dst, clk_id); } pub fn length_evt_wp_presentation_clock_id() -> usize { 12 } pub fn parse_evt_wp_presentation_clock_id<'a>(mut msg: &'a [u8]) -> Result<u32, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WP_PRESENTATION_CLOCK_ID: MethodId = MethodId::Event(0); const DATA_WP_PRESENTATION: WaylandData = WaylandData { name: "wp_presentation", evts: &[WaylandMethod { name: "clock_id", sig: &[Uint], destructor: false, }], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "feedback", sig: &[Object(WlSurface), NewId(WpPresentationFeedback)], destructor: false, }, ], version: 2, }; pub const WP_PRESENTATION: &[u8] = DATA_WP_PRESENTATION.name.as_bytes(); pub fn write_evt_wp_presentation_feedback_presented( dst: &mut &mut [u8], for_id: ObjId, tv_sec_hi: u32, tv_sec_lo: u32, tv_nsec: u32, refresh: u32, seq_hi: u32, seq_lo: u32, flags: u32, ) { let l = length_evt_wp_presentation_feedback_presented(); write_header(dst, for_id, l, 1, 0); write_u32(dst, tv_sec_hi); write_u32(dst, tv_sec_lo); write_u32(dst, tv_nsec); write_u32(dst, refresh); write_u32(dst, seq_hi); write_u32(dst, seq_lo); write_u32(dst, flags); } pub fn length_evt_wp_presentation_feedback_presented() -> usize { 36 } pub fn parse_evt_wp_presentation_feedback_presented<'a>( mut msg: &'a [u8], ) -> Result<(u32,
u32, u32, u32, u32, u32, u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; let arg2 = parse_u32(&mut msg)?; let arg3 = parse_u32(&mut msg)?; let arg4 = parse_u32(&mut msg)?; let arg5 = parse_u32(&mut msg)?; let arg6 = parse_u32(&mut msg)?; let arg7 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3, arg4, arg5, arg6, arg7)) } pub const OPCODE_WP_PRESENTATION_FEEDBACK_PRESENTED: MethodId = MethodId::Event(1); const DATA_WP_PRESENTATION_FEEDBACK: WaylandData = WaylandData { name: "wp_presentation_feedback", evts: &[ WaylandMethod { name: "sync_output", sig: &[Object(WlOutput)], destructor: false, }, WaylandMethod { name: "presented", sig: &[Uint, Uint, Uint, Uint, Uint, Uint, Uint], destructor: true, }, WaylandMethod { name: "discarded", sig: &[], destructor: true, }, ], reqs: &[], version: 2, }; pub const WP_PRESENTATION_FEEDBACK: &[u8] = DATA_WP_PRESENTATION_FEEDBACK.name.as_bytes(); pub fn write_req_zwp_primary_selection_device_manager_v1_create_source( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, ) { let l = length_req_zwp_primary_selection_device_manager_v1_create_source(); write_header(dst, for_id, l, 0, 0); write_obj(dst, id); } pub fn length_req_zwp_primary_selection_device_manager_v1_create_source() -> usize { 12 } pub fn parse_req_zwp_primary_selection_device_manager_v1_create_source<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWP_PRIMARY_SELECTION_DEVICE_MANAGER_V1_CREATE_SOURCE: MethodId = MethodId::Request(0); pub fn write_req_zwp_primary_selection_device_manager_v1_get_device( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, seat: ObjId, ) { let l = length_req_zwp_primary_selection_device_manager_v1_get_device(); write_header(dst, for_id, l, 1, 0); write_obj(dst, id); write_obj(dst, seat); } pub fn length_req_zwp_primary_selection_device_manager_v1_get_device() -> usize { 16 } pub fn parse_req_zwp_primary_selection_device_manager_v1_get_device<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, ObjId), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_ZWP_PRIMARY_SELECTION_DEVICE_MANAGER_V1_GET_DEVICE: MethodId = MethodId::Request(1); const DATA_ZWP_PRIMARY_SELECTION_DEVICE_MANAGER_V1: WaylandData = WaylandData { name: "zwp_primary_selection_device_manager_v1", evts: &[], reqs: &[ WaylandMethod { name: "create_source", sig: &[NewId(ZwpPrimarySelectionSourceV1)], destructor: false, }, WaylandMethod { name: "get_device", sig: &[NewId(ZwpPrimarySelectionDeviceV1), Object(WlSeat)], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const ZWP_PRIMARY_SELECTION_DEVICE_MANAGER_V1: &[u8] = DATA_ZWP_PRIMARY_SELECTION_DEVICE_MANAGER_V1.name.as_bytes(); pub fn write_req_zwp_primary_selection_device_v1_set_selection( dst: &mut &mut [u8], for_id: ObjId, source: ObjId, serial: u32, ) { let l = length_req_zwp_primary_selection_device_v1_set_selection(); write_header(dst, for_id, l, 0, 0); write_obj(dst, source); write_u32(dst, serial); } pub fn length_req_zwp_primary_selection_device_v1_set_selection() -> usize { 16 } pub fn parse_req_zwp_primary_selection_device_v1_set_selection<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, u32), &'static str> { msg =
msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_ZWP_PRIMARY_SELECTION_DEVICE_V1_SET_SELECTION: MethodId = MethodId::Request(0); pub fn write_evt_zwp_primary_selection_device_v1_data_offer( dst: &mut &mut [u8], for_id: ObjId, offer: ObjId, ) { let l = length_evt_zwp_primary_selection_device_v1_data_offer(); write_header(dst, for_id, l, 0, 0); write_obj(dst, offer); } pub fn length_evt_zwp_primary_selection_device_v1_data_offer() -> usize { 12 } pub fn parse_evt_zwp_primary_selection_device_v1_data_offer<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWP_PRIMARY_SELECTION_DEVICE_V1_DATA_OFFER: MethodId = MethodId::Event(0); pub fn write_evt_zwp_primary_selection_device_v1_selection( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, ) { let l = length_evt_zwp_primary_selection_device_v1_selection(); write_header(dst, for_id, l, 1, 0); write_obj(dst, id); } pub fn length_evt_zwp_primary_selection_device_v1_selection() -> usize { 12 } pub fn parse_evt_zwp_primary_selection_device_v1_selection<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWP_PRIMARY_SELECTION_DEVICE_V1_SELECTION: MethodId = MethodId::Event(1); const DATA_ZWP_PRIMARY_SELECTION_DEVICE_V1: WaylandData = WaylandData { name: "zwp_primary_selection_device_v1", evts: &[ WaylandMethod { name: "data_offer", sig: &[NewId(ZwpPrimarySelectionOfferV1)], destructor: false, }, WaylandMethod { name: "selection", sig: &[Object(ZwpPrimarySelectionOfferV1)], destructor: false, }, ], reqs: &[ WaylandMethod { name: "set_selection", sig: &[Object(ZwpPrimarySelectionSourceV1), Uint], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const ZWP_PRIMARY_SELECTION_DEVICE_V1: &[u8] = DATA_ZWP_PRIMARY_SELECTION_DEVICE_V1.name.as_bytes(); pub fn write_req_zwp_primary_selection_offer_v1_receive( dst: &mut &mut [u8], for_id: ObjId, tag_fds: bool, mime_type: &[u8], ) { let l = length_req_zwp_primary_selection_offer_v1_receive(mime_type.len()); write_header(dst, for_id, l, 0, if tag_fds { 1 } else { 0 }); write_string(dst, Some(mime_type)); } pub fn length_req_zwp_primary_selection_offer_v1_receive(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_req_zwp_primary_selection_offer_v1_receive<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWP_PRIMARY_SELECTION_OFFER_V1_RECEIVE: MethodId = MethodId::Request(0); pub fn write_evt_zwp_primary_selection_offer_v1_offer( dst: &mut &mut [u8], for_id: ObjId, mime_type: &[u8], ) { let l = length_evt_zwp_primary_selection_offer_v1_offer(mime_type.len()); write_header(dst, for_id, l, 0, 0); write_string(dst, Some(mime_type)); } pub fn length_evt_zwp_primary_selection_offer_v1_offer(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_evt_zwp_primary_selection_offer_v1_offer<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 =
parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWP_PRIMARY_SELECTION_OFFER_V1_OFFER: MethodId = MethodId::Event(0); const DATA_ZWP_PRIMARY_SELECTION_OFFER_V1: WaylandData = WaylandData { name: "zwp_primary_selection_offer_v1", evts: &[WaylandMethod { name: "offer", sig: &[String], destructor: false, }], reqs: &[ WaylandMethod { name: "receive", sig: &[String, Fd], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const ZWP_PRIMARY_SELECTION_OFFER_V1: &[u8] = DATA_ZWP_PRIMARY_SELECTION_OFFER_V1.name.as_bytes(); pub fn write_req_zwp_primary_selection_source_v1_offer( dst: &mut &mut [u8], for_id: ObjId, mime_type: &[u8], ) { let l = length_req_zwp_primary_selection_source_v1_offer(mime_type.len()); write_header(dst, for_id, l, 0, 0); write_string(dst, Some(mime_type)); } pub fn length_req_zwp_primary_selection_source_v1_offer(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_req_zwp_primary_selection_source_v1_offer<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWP_PRIMARY_SELECTION_SOURCE_V1_OFFER: MethodId = MethodId::Request(0); pub fn write_evt_zwp_primary_selection_source_v1_send( dst: &mut &mut [u8], for_id: ObjId, tag_fds: bool, mime_type: &[u8], ) { let l = length_evt_zwp_primary_selection_source_v1_send(mime_type.len()); write_header(dst, for_id, l, 0, if tag_fds { 1 } else { 0 }); write_string(dst, Some(mime_type)); } pub fn length_evt_zwp_primary_selection_source_v1_send(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_evt_zwp_primary_selection_source_v1_send<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWP_PRIMARY_SELECTION_SOURCE_V1_SEND: MethodId = MethodId::Event(0); const DATA_ZWP_PRIMARY_SELECTION_SOURCE_V1: WaylandData = WaylandData { name: "zwp_primary_selection_source_v1", evts: &[ WaylandMethod { name: "send", sig: &[String, Fd], destructor: false, }, WaylandMethod { name: "cancelled", sig: &[], destructor: false, }, ], reqs: &[ WaylandMethod { name: "offer", sig: &[String], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const ZWP_PRIMARY_SELECTION_SOURCE_V1: &[u8] = DATA_ZWP_PRIMARY_SELECTION_SOURCE_V1.name.as_bytes(); pub fn write_req_wp_security_context_manager_v1_create_listener( dst: &mut &mut [u8], for_id: ObjId, tag_fds: bool, id: ObjId, ) { let l = length_req_wp_security_context_manager_v1_create_listener(); write_header(dst, for_id, l, 1, if tag_fds { 2 } else { 0 }); write_obj(dst, id); } pub fn length_req_wp_security_context_manager_v1_create_listener() -> usize { 12 } pub fn parse_req_wp_security_context_manager_v1_create_listener<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WP_SECURITY_CONTEXT_MANAGER_V1_CREATE_LISTENER: MethodId = MethodId::Request(1); const DATA_WP_SECURITY_CONTEXT_MANAGER_V1: WaylandData = WaylandData { name: "wp_security_context_manager_v1",
evts: &[], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "create_listener", sig: &[NewId(WpSecurityContextV1), Fd, Fd], destructor: false, }, ], version: 1, }; pub const WP_SECURITY_CONTEXT_MANAGER_V1: &[u8] = DATA_WP_SECURITY_CONTEXT_MANAGER_V1.name.as_bytes(); pub fn write_req_wp_security_context_v1_set_sandbox_engine( dst: &mut &mut [u8], for_id: ObjId, name: &[u8], ) { let l = length_req_wp_security_context_v1_set_sandbox_engine(name.len()); write_header(dst, for_id, l, 1, 0); write_string(dst, Some(name)); } pub fn length_req_wp_security_context_v1_set_sandbox_engine(name_len: usize) -> usize { let mut v = 8; v += length_string(name_len); v } pub fn parse_req_wp_security_context_v1_set_sandbox_engine<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WP_SECURITY_CONTEXT_V1_SET_SANDBOX_ENGINE: MethodId = MethodId::Request(1); pub fn write_req_wp_security_context_v1_set_app_id( dst: &mut &mut [u8], for_id: ObjId, app_id: &[u8], ) { let l = length_req_wp_security_context_v1_set_app_id(app_id.len()); write_header(dst, for_id, l, 2, 0); write_string(dst, Some(app_id)); } pub fn length_req_wp_security_context_v1_set_app_id(app_id_len: usize) -> usize { let mut v = 8; v += length_string(app_id_len); v } pub fn parse_req_wp_security_context_v1_set_app_id<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WP_SECURITY_CONTEXT_V1_SET_APP_ID: MethodId = MethodId::Request(2); pub fn write_req_wp_security_context_v1_set_instance_id( dst: &mut &mut [u8], for_id: ObjId, instance_id: &[u8], ) { let l = length_req_wp_security_context_v1_set_instance_id(instance_id.len()); write_header(dst, for_id, l, 3, 0); write_string(dst, Some(instance_id)); } pub fn length_req_wp_security_context_v1_set_instance_id(instance_id_len: usize) -> usize { let mut v = 8; v += length_string(instance_id_len); v } pub fn parse_req_wp_security_context_v1_set_instance_id<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WP_SECURITY_CONTEXT_V1_SET_INSTANCE_ID: MethodId = MethodId::Request(3); pub fn write_req_wp_security_context_v1_commit(dst: &mut &mut [u8], for_id: ObjId) { let l = length_req_wp_security_context_v1_commit(); write_header(dst, for_id, l, 4, 0); } pub fn length_req_wp_security_context_v1_commit() -> usize { 8 } pub fn parse_req_wp_security_context_v1_commit<'a>(msg: &'a [u8]) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_WP_SECURITY_CONTEXT_V1_COMMIT: MethodId = MethodId::Request(4); const DATA_WP_SECURITY_CONTEXT_V1: WaylandData = WaylandData { name: "wp_security_context_v1", evts: &[], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "set_sandbox_engine", sig: &[String], destructor: false, }, WaylandMethod { name: "set_app_id", sig: &[String], destructor: false, }, WaylandMethod { name: "set_instance_id", sig: &[String], destructor: false, }, WaylandMethod { name: "commit", sig: &[], destructor: false, }, ], version: 1, 
}; pub const WP_SECURITY_CONTEXT_V1: &[u8] = DATA_WP_SECURITY_CONTEXT_V1.name.as_bytes(); pub fn write_req_wp_viewporter_get_viewport( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, surface: ObjId, ) { let l = length_req_wp_viewporter_get_viewport(); write_header(dst, for_id, l, 1, 0); write_obj(dst, id); write_obj(dst, surface); } pub fn length_req_wp_viewporter_get_viewport() -> usize { 16 } pub fn parse_req_wp_viewporter_get_viewport<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, ObjId), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_WP_VIEWPORTER_GET_VIEWPORT: MethodId = MethodId::Request(1); const DATA_WP_VIEWPORTER: WaylandData = WaylandData { name: "wp_viewporter", evts: &[], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "get_viewport", sig: &[NewId(WpViewport), Object(WlSurface)], destructor: false, }, ], version: 1, }; pub const WP_VIEWPORTER: &[u8] = DATA_WP_VIEWPORTER.name.as_bytes(); pub fn write_req_wp_viewport_destroy(dst: &mut &mut [u8], for_id: ObjId) { let l = length_req_wp_viewport_destroy(); write_header(dst, for_id, l, 0, 0); } pub fn length_req_wp_viewport_destroy() -> usize { 8 } pub fn parse_req_wp_viewport_destroy<'a>(msg: &'a [u8]) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_WP_VIEWPORT_DESTROY: MethodId = MethodId::Request(0); pub fn write_req_wp_viewport_set_source( dst: &mut &mut [u8], for_id: ObjId, x: i32, y: i32, width: i32, height: i32, ) { let l = length_req_wp_viewport_set_source(); write_header(dst, for_id, l, 1, 0); write_i32(dst, x); write_i32(dst, y); write_i32(dst, width); write_i32(dst, height); } pub fn length_req_wp_viewport_set_source() -> usize { 24 } pub fn parse_req_wp_viewport_set_source<'a>( mut msg: &'a [u8], ) -> Result<(i32, i32, i32, i32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_i32(&mut msg)?; let arg2 = parse_i32(&mut msg)?; let arg3 = parse_i32(&mut msg)?; let arg4 = parse_i32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3, arg4)) } pub const OPCODE_WP_VIEWPORT_SET_SOURCE: MethodId = MethodId::Request(1); pub fn write_req_wp_viewport_set_destination( dst: &mut &mut [u8], for_id: ObjId, width: i32, height: i32, ) { let l = length_req_wp_viewport_set_destination(); write_header(dst, for_id, l, 2, 0); write_i32(dst, width); write_i32(dst, height); } pub fn length_req_wp_viewport_set_destination() -> usize { 16 } pub fn parse_req_wp_viewport_set_destination<'a>( mut msg: &'a [u8], ) -> Result<(i32, i32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_i32(&mut msg)?; let arg2 = parse_i32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_WP_VIEWPORT_SET_DESTINATION: MethodId = MethodId::Request(2); const DATA_WP_VIEWPORT: WaylandData = WaylandData { name: "wp_viewport", evts: &[], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "set_source", sig: &[Fixed, Fixed, Fixed, Fixed], destructor: false, }, WaylandMethod { name: "set_destination", sig: &[Int, Int], destructor: false, }, ], version: 1, }; pub const WP_VIEWPORT: &[u8] = DATA_WP_VIEWPORT.name.as_bytes(); const DATA_ZWP_VIRTUAL_KEYBOARD_V1: WaylandData = WaylandData { name: "zwp_virtual_keyboard_v1", evts: &[], reqs: &[ WaylandMethod { name:
"keymap", sig: &[Uint, Fd, Uint], destructor: false, }, WaylandMethod { name: "key", sig: &[Uint, Uint, Uint], destructor: false, }, WaylandMethod { name: "modifiers", sig: &[Uint, Uint, Uint, Uint], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const ZWP_VIRTUAL_KEYBOARD_V1: &[u8] = DATA_ZWP_VIRTUAL_KEYBOARD_V1.name.as_bytes(); const DATA_ZWP_VIRTUAL_KEYBOARD_MANAGER_V1: WaylandData = WaylandData { name: "zwp_virtual_keyboard_manager_v1", evts: &[], reqs: &[WaylandMethod { name: "create_virtual_keyboard", sig: &[Object(WlSeat), NewId(ZwpVirtualKeyboardV1)], destructor: false, }], version: 1, }; pub const ZWP_VIRTUAL_KEYBOARD_MANAGER_V1: &[u8] = DATA_ZWP_VIRTUAL_KEYBOARD_MANAGER_V1.name.as_bytes(); const DATA_WL_DRM: WaylandData = WaylandData { name: "wl_drm", evts: &[ WaylandMethod { name: "device", sig: &[String], destructor: false, }, WaylandMethod { name: "format", sig: &[Uint], destructor: false, }, WaylandMethod { name: "authenticated", sig: &[], destructor: false, }, WaylandMethod { name: "capabilities", sig: &[Uint], destructor: false, }, ], reqs: &[ WaylandMethod { name: "authenticate", sig: &[Uint], destructor: false, }, WaylandMethod { name: "create_buffer", sig: &[NewId(WlBuffer), Uint, Int, Int, Uint, Uint], destructor: false, }, WaylandMethod { name: "create_planar_buffer", sig: &[ NewId(WlBuffer), Uint, Int, Int, Uint, Int, Int, Int, Int, Int, Int, ], destructor: false, }, WaylandMethod { name: "create_prime_buffer", sig: &[ NewId(WlBuffer), Fd, Int, Int, Uint, Int, Int, Int, Int, Int, Int, ], destructor: false, }, ], version: 2, }; pub const WL_DRM: &[u8] = DATA_WL_DRM.name.as_bytes(); pub fn write_req_wl_display_sync(dst: &mut &mut [u8], for_id: ObjId, callback: ObjId) { let l = length_req_wl_display_sync(); write_header(dst, for_id, l, 0, 0); write_obj(dst, callback); } pub fn length_req_wl_display_sync() -> usize { 12 } pub fn parse_req_wl_display_sync<'a>(mut msg: &'a [u8]) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WL_DISPLAY_SYNC: MethodId = MethodId::Request(0); pub fn write_req_wl_display_get_registry(dst: &mut &mut [u8], for_id: ObjId, registry: ObjId) { let l = length_req_wl_display_get_registry(); write_header(dst, for_id, l, 1, 0); write_obj(dst, registry); } pub fn length_req_wl_display_get_registry() -> usize { 12 } pub fn parse_req_wl_display_get_registry<'a>(mut msg: &'a [u8]) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WL_DISPLAY_GET_REGISTRY: MethodId = MethodId::Request(1); pub fn write_evt_wl_display_error( dst: &mut &mut [u8], for_id: ObjId, object_id: ObjId, code: u32, message: &[u8], ) { let l = length_evt_wl_display_error(message.len()); write_header(dst, for_id, l, 0, 0); write_obj(dst, object_id); write_u32(dst, code); write_string(dst, Some(message)); } pub fn length_evt_wl_display_error(message_len: usize) -> usize { let mut v = 16; v += length_string(message_len); v } pub fn parse_evt_wl_display_error<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, u32, &'a [u8]), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_u32(&mut msg)?; let arg3 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3)) } pub const OPCODE_WL_DISPLAY_ERROR: MethodId =
MethodId::Event(0); pub fn write_evt_wl_display_delete_id(dst: &mut &mut [u8], for_id: ObjId, id: u32) { let l = length_evt_wl_display_delete_id(); write_header(dst, for_id, l, 1, 0); write_u32(dst, id); } pub fn length_evt_wl_display_delete_id() -> usize { 12 } pub fn parse_evt_wl_display_delete_id<'a>(mut msg: &'a [u8]) -> Result<u32, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WL_DISPLAY_DELETE_ID: MethodId = MethodId::Event(1); const DATA_WL_DISPLAY: WaylandData = WaylandData { name: "wl_display", evts: &[ WaylandMethod { name: "error", sig: &[GenericObject, Uint, String], destructor: false, }, WaylandMethod { name: "delete_id", sig: &[Uint], destructor: false, }, ], reqs: &[ WaylandMethod { name: "sync", sig: &[NewId(WlCallback)], destructor: false, }, WaylandMethod { name: "get_registry", sig: &[NewId(WlRegistry)], destructor: false, }, ], version: 1, }; pub const WL_DISPLAY: &[u8] = DATA_WL_DISPLAY.name.as_bytes(); pub fn write_req_wl_registry_bind( dst: &mut &mut [u8], for_id: ObjId, name: u32, id_iface_name: &[u8], id_version: u32, id: ObjId, ) { let l = length_req_wl_registry_bind(id_iface_name.len()); write_header(dst, for_id, l, 0, 0); write_u32(dst, name); write_string(dst, Some(id_iface_name)); write_u32(dst, id_version); write_obj(dst, id); } pub fn length_req_wl_registry_bind(id_iface_name_len: usize) -> usize { let mut v = 20; v += length_string(id_iface_name_len); v } pub fn parse_req_wl_registry_bind<'a>( mut msg: &'a [u8], ) -> Result<(u32, &'a [u8], u32, ObjId), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; let arg2_iface_name = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; let arg2_version = parse_u32(&mut msg)?; let arg2 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2_iface_name, arg2_version, arg2)) } pub const OPCODE_WL_REGISTRY_BIND: MethodId = MethodId::Request(0); pub fn write_evt_wl_registry_global( dst: &mut &mut [u8], for_id: ObjId, name: u32, interface: &[u8], version: u32, ) { let l = length_evt_wl_registry_global(interface.len()); write_header(dst, for_id, l, 0, 0); write_u32(dst, name); write_string(dst, Some(interface)); write_u32(dst, version); } pub fn length_evt_wl_registry_global(interface_len: usize) -> usize { let mut v = 16; v += length_string(interface_len); v } pub fn parse_evt_wl_registry_global<'a>( mut msg: &'a [u8], ) -> Result<(u32, &'a [u8], u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; let arg2 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; let arg3 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3)) } pub const OPCODE_WL_REGISTRY_GLOBAL: MethodId = MethodId::Event(0); pub fn write_evt_wl_registry_global_remove(dst: &mut &mut [u8], for_id: ObjId, name: u32) { let l = length_evt_wl_registry_global_remove(); write_header(dst, for_id, l, 1, 0); write_u32(dst, name); } pub fn length_evt_wl_registry_global_remove() -> usize { 12 } pub fn parse_evt_wl_registry_global_remove<'a>(mut msg: &'a [u8]) -> Result<u32, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WL_REGISTRY_GLOBAL_REMOVE: MethodId = MethodId::Event(1); const DATA_WL_REGISTRY: WaylandData = WaylandData { name: "wl_registry", evts: &[ WaylandMethod { name: "global", sig: &[Uint, String, Uint], destructor: false, },
WaylandMethod { name: "global_remove", sig: &[Uint], destructor: false, }, ], reqs: &[WaylandMethod { name: "bind", sig: &[Uint, GenericNewId], destructor: false, }], version: 1, }; pub const WL_REGISTRY: &[u8] = DATA_WL_REGISTRY.name.as_bytes(); pub fn write_evt_wl_callback_done(dst: &mut &mut [u8], for_id: ObjId, callback_data: u32) { let l = length_evt_wl_callback_done(); write_header(dst, for_id, l, 0, 0); write_u32(dst, callback_data); } pub fn length_evt_wl_callback_done() -> usize { 12 } pub fn parse_evt_wl_callback_done<'a>(mut msg: &'a [u8]) -> Result<u32, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WL_CALLBACK_DONE: MethodId = MethodId::Event(0); const DATA_WL_CALLBACK: WaylandData = WaylandData { name: "wl_callback", evts: &[WaylandMethod { name: "done", sig: &[Uint], destructor: true, }], reqs: &[], version: 1, }; pub const WL_CALLBACK: &[u8] = DATA_WL_CALLBACK.name.as_bytes(); pub fn write_req_wl_compositor_create_surface(dst: &mut &mut [u8], for_id: ObjId, id: ObjId) { let l = length_req_wl_compositor_create_surface(); write_header(dst, for_id, l, 0, 0); write_obj(dst, id); } pub fn length_req_wl_compositor_create_surface() -> usize { 12 } pub fn parse_req_wl_compositor_create_surface<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WL_COMPOSITOR_CREATE_SURFACE: MethodId = MethodId::Request(0); const DATA_WL_COMPOSITOR: WaylandData = WaylandData { name: "wl_compositor", evts: &[], reqs: &[ WaylandMethod { name: "create_surface", sig: &[NewId(WlSurface)], destructor: false, }, WaylandMethod { name: "create_region", sig: &[NewId(WlRegion)], destructor: false, }, ], version: 6, }; pub const WL_COMPOSITOR: &[u8] = DATA_WL_COMPOSITOR.name.as_bytes(); pub fn write_req_wl_shm_pool_create_buffer( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, offset: i32, width: i32, height: i32, stride: i32, format: u32, ) { let l = length_req_wl_shm_pool_create_buffer(); write_header(dst, for_id, l, 0, 0); write_obj(dst, id); write_i32(dst, offset); write_i32(dst, width); write_i32(dst, height); write_i32(dst, stride); write_u32(dst, format); } pub fn length_req_wl_shm_pool_create_buffer() -> usize { 32 } pub fn parse_req_wl_shm_pool_create_buffer<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, i32, i32, i32, i32, u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_i32(&mut msg)?; let arg3 = parse_i32(&mut msg)?; let arg4 = parse_i32(&mut msg)?; let arg5 = parse_i32(&mut msg)?; let arg6 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3, arg4, arg5, arg6)) } pub const OPCODE_WL_SHM_POOL_CREATE_BUFFER: MethodId = MethodId::Request(0); pub fn write_req_wl_shm_pool_destroy(dst: &mut &mut [u8], for_id: ObjId) { let l = length_req_wl_shm_pool_destroy(); write_header(dst, for_id, l, 1, 0); } pub fn length_req_wl_shm_pool_destroy() -> usize { 8 } pub fn parse_req_wl_shm_pool_destroy<'a>(msg: &'a [u8]) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_WL_SHM_POOL_DESTROY: MethodId = MethodId::Request(1); pub fn write_req_wl_shm_pool_resize(dst: &mut &mut [u8], for_id: ObjId, size: i32) { let l = length_req_wl_shm_pool_resize(); write_header(dst, for_id, l, 2, 0); write_i32(dst, size); } pub fn
length_req_wl_shm_pool_resize() -> usize { 12 } pub fn parse_req_wl_shm_pool_resize<'a>(mut msg: &'a [u8]) -> Result<i32, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_i32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WL_SHM_POOL_RESIZE: MethodId = MethodId::Request(2); const DATA_WL_SHM_POOL: WaylandData = WaylandData { name: "wl_shm_pool", evts: &[], reqs: &[ WaylandMethod { name: "create_buffer", sig: &[NewId(WlBuffer), Int, Int, Int, Int, Uint], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "resize", sig: &[Int], destructor: false, }, ], version: 2, }; pub const WL_SHM_POOL: &[u8] = DATA_WL_SHM_POOL.name.as_bytes(); #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub enum WlShmFormat { Argb8888 = 0, Xrgb8888 = 1, C8 = 0x20203843, Rgb332 = 0x38424752, Bgr233 = 0x38524742, Xrgb4444 = 0x32315258, Xbgr4444 = 0x32314258, Rgbx4444 = 0x32315852, Bgrx4444 = 0x32315842, Argb4444 = 0x32315241, Abgr4444 = 0x32314241, Rgba4444 = 0x32314152, Bgra4444 = 0x32314142, Xrgb1555 = 0x35315258, Xbgr1555 = 0x35314258, Rgbx5551 = 0x35315852, Bgrx5551 = 0x35315842, Argb1555 = 0x35315241, Abgr1555 = 0x35314241, Rgba5551 = 0x35314152, Bgra5551 = 0x35314142, Rgb565 = 0x36314752, Bgr565 = 0x36314742, Rgb888 = 0x34324752, Bgr888 = 0x34324742, Xbgr8888 = 0x34324258, Rgbx8888 = 0x34325852, Bgrx8888 = 0x34325842, Abgr8888 = 0x34324241, Rgba8888 = 0x34324152, Bgra8888 = 0x34324142, Xrgb2101010 = 0x30335258, Xbgr2101010 = 0x30334258, Rgbx1010102 = 0x30335852, Bgrx1010102 = 0x30335842, Argb2101010 = 0x30335241, Abgr2101010 = 0x30334241, Rgba1010102 = 0x30334152, Bgra1010102 = 0x30334142, Yuyv = 0x56595559, Yvyu = 0x55595659, Uyvy = 0x59565955, Vyuy = 0x59555956, Ayuv = 0x56555941, Nv12 = 0x3231564e, Nv21 = 0x3132564e, Nv16 = 0x3631564e, Nv61 = 0x3136564e, Yuv410 = 0x39565559, Yvu410 = 0x39555659, Yuv411 = 0x31315559, Yvu411 = 0x31315659, Yuv420 = 0x32315559, Yvu420 = 0x32315659, Yuv422 = 0x36315559, Yvu422 = 0x36315659, Yuv444 = 0x34325559, Yvu444 = 0x34325659, R8 = 0x20203852, R16 = 0x20363152, Rg88 = 0x38384752, Gr88 = 0x38385247, Rg1616 = 0x32334752, Gr1616 = 0x32335247, Xrgb16161616f = 0x48345258, Xbgr16161616f = 0x48344258, Argb16161616f = 0x48345241, Abgr16161616f = 0x48344241, Xyuv8888 = 0x56555958, Vuy888 = 0x34325556, Vuy101010 = 0x30335556, Y210 = 0x30313259, Y212 = 0x32313259, Y216 = 0x36313259, Y410 = 0x30313459, Y412 = 0x32313459, Y416 = 0x36313459, Xvyu2101010 = 0x30335658, Xvyu1216161616 = 0x36335658, Xvyu16161616 = 0x38345658, Y0l0 = 0x304c3059, X0l0 = 0x304c3058, Y0l2 = 0x324c3059, X0l2 = 0x324c3058, Yuv4208bit = 0x38305559, Yuv42010bit = 0x30315559, Xrgb8888A8 = 0x38415258, Xbgr8888A8 = 0x38414258, Rgbx8888A8 = 0x38415852, Bgrx8888A8 = 0x38415842, Rgb888A8 = 0x38413852, Bgr888A8 = 0x38413842, Rgb565A8 = 0x38413552, Bgr565A8 = 0x38413542, Nv24 = 0x3432564e, Nv42 = 0x3234564e, P210 = 0x30313250, P010 = 0x30313050, P012 = 0x32313050, P016 = 0x36313050, Axbxgxrx106106106106 = 0x30314241, Nv15 = 0x3531564e, Q410 = 0x30313451, Q401 = 0x31303451, Xrgb16161616 = 0x38345258, Xbgr16161616 = 0x38344258, Argb16161616 = 0x38345241, Abgr16161616 = 0x38344241, C1 = 0x20203143, C2 = 0x20203243, C4 = 0x20203443, D1 = 0x20203144, D2 = 0x20203244, D4 = 0x20203444, D8 = 0x20203844, R1 = 0x20203152, R2 = 0x20203252, R4 = 0x20203452, R10 = 0x20303152, R12 = 0x20323152, Avuy8888 = 0x59555641, Xvuy8888 = 0x59555658, P030 = 0x30333050, } impl TryFrom<u32> for WlShmFormat { type Error = (); fn try_from(v: u32) -> Result<Self, Self::Error> {
Ok(match v { 0 => WlShmFormat::Argb8888, 1 => WlShmFormat::Xrgb8888, 0x20203843 => WlShmFormat::C8, 0x38424752 => WlShmFormat::Rgb332, 0x38524742 => WlShmFormat::Bgr233, 0x32315258 => WlShmFormat::Xrgb4444, 0x32314258 => WlShmFormat::Xbgr4444, 0x32315852 => WlShmFormat::Rgbx4444, 0x32315842 => WlShmFormat::Bgrx4444, 0x32315241 => WlShmFormat::Argb4444, 0x32314241 => WlShmFormat::Abgr4444, 0x32314152 => WlShmFormat::Rgba4444, 0x32314142 => WlShmFormat::Bgra4444, 0x35315258 => WlShmFormat::Xrgb1555, 0x35314258 => WlShmFormat::Xbgr1555, 0x35315852 => WlShmFormat::Rgbx5551, 0x35315842 => WlShmFormat::Bgrx5551, 0x35315241 => WlShmFormat::Argb1555, 0x35314241 => WlShmFormat::Abgr1555, 0x35314152 => WlShmFormat::Rgba5551, 0x35314142 => WlShmFormat::Bgra5551, 0x36314752 => WlShmFormat::Rgb565, 0x36314742 => WlShmFormat::Bgr565, 0x34324752 => WlShmFormat::Rgb888, 0x34324742 => WlShmFormat::Bgr888, 0x34324258 => WlShmFormat::Xbgr8888, 0x34325852 => WlShmFormat::Rgbx8888, 0x34325842 => WlShmFormat::Bgrx8888, 0x34324241 => WlShmFormat::Abgr8888, 0x34324152 => WlShmFormat::Rgba8888, 0x34324142 => WlShmFormat::Bgra8888, 0x30335258 => WlShmFormat::Xrgb2101010, 0x30334258 => WlShmFormat::Xbgr2101010, 0x30335852 => WlShmFormat::Rgbx1010102, 0x30335842 => WlShmFormat::Bgrx1010102, 0x30335241 => WlShmFormat::Argb2101010, 0x30334241 => WlShmFormat::Abgr2101010, 0x30334152 => WlShmFormat::Rgba1010102, 0x30334142 => WlShmFormat::Bgra1010102, 0x56595559 => WlShmFormat::Yuyv, 0x55595659 => WlShmFormat::Yvyu, 0x59565955 => WlShmFormat::Uyvy, 0x59555956 => WlShmFormat::Vyuy, 0x56555941 => WlShmFormat::Ayuv, 0x3231564e => WlShmFormat::Nv12, 0x3132564e => WlShmFormat::Nv21, 0x3631564e => WlShmFormat::Nv16, 0x3136564e => WlShmFormat::Nv61, 0x39565559 => WlShmFormat::Yuv410, 0x39555659 => WlShmFormat::Yvu410, 0x31315559 => WlShmFormat::Yuv411, 0x31315659 => WlShmFormat::Yvu411, 0x32315559 => WlShmFormat::Yuv420, 0x32315659 => WlShmFormat::Yvu420, 0x36315559 => WlShmFormat::Yuv422, 0x36315659 => WlShmFormat::Yvu422, 0x34325559 => WlShmFormat::Yuv444, 0x34325659 => WlShmFormat::Yvu444, 0x20203852 => WlShmFormat::R8, 0x20363152 => WlShmFormat::R16, 0x38384752 => WlShmFormat::Rg88, 0x38385247 => WlShmFormat::Gr88, 0x32334752 => WlShmFormat::Rg1616, 0x32335247 => WlShmFormat::Gr1616, 0x48345258 => WlShmFormat::Xrgb16161616f, 0x48344258 => WlShmFormat::Xbgr16161616f, 0x48345241 => WlShmFormat::Argb16161616f, 0x48344241 => WlShmFormat::Abgr16161616f, 0x56555958 => WlShmFormat::Xyuv8888, 0x34325556 => WlShmFormat::Vuy888, 0x30335556 => WlShmFormat::Vuy101010, 0x30313259 => WlShmFormat::Y210, 0x32313259 => WlShmFormat::Y212, 0x36313259 => WlShmFormat::Y216, 0x30313459 => WlShmFormat::Y410, 0x32313459 => WlShmFormat::Y412, 0x36313459 => WlShmFormat::Y416, 0x30335658 => WlShmFormat::Xvyu2101010, 0x36335658 => WlShmFormat::Xvyu1216161616, 0x38345658 => WlShmFormat::Xvyu16161616, 0x304c3059 => WlShmFormat::Y0l0, 0x304c3058 => WlShmFormat::X0l0, 0x324c3059 => WlShmFormat::Y0l2, 0x324c3058 => WlShmFormat::X0l2, 0x38305559 => WlShmFormat::Yuv4208bit, 0x30315559 => WlShmFormat::Yuv42010bit, 0x38415258 => WlShmFormat::Xrgb8888A8, 0x38414258 => WlShmFormat::Xbgr8888A8, 0x38415852 => WlShmFormat::Rgbx8888A8, 0x38415842 => WlShmFormat::Bgrx8888A8, 0x38413852 => WlShmFormat::Rgb888A8, 0x38413842 => WlShmFormat::Bgr888A8, 0x38413552 => WlShmFormat::Rgb565A8, 0x38413542 => WlShmFormat::Bgr565A8, 0x3432564e => WlShmFormat::Nv24, 0x3234564e => WlShmFormat::Nv42, 0x30313250 => WlShmFormat::P210, 0x30313050 => WlShmFormat::P010, 0x32313050 => 
WlShmFormat::P012, 0x36313050 => WlShmFormat::P016, 0x30314241 => WlShmFormat::Axbxgxrx106106106106, 0x3531564e => WlShmFormat::Nv15, 0x30313451 => WlShmFormat::Q410, 0x31303451 => WlShmFormat::Q401, 0x38345258 => WlShmFormat::Xrgb16161616, 0x38344258 => WlShmFormat::Xbgr16161616, 0x38345241 => WlShmFormat::Argb16161616, 0x38344241 => WlShmFormat::Abgr16161616, 0x20203143 => WlShmFormat::C1, 0x20203243 => WlShmFormat::C2, 0x20203443 => WlShmFormat::C4, 0x20203144 => WlShmFormat::D1, 0x20203244 => WlShmFormat::D2, 0x20203444 => WlShmFormat::D4, 0x20203844 => WlShmFormat::D8, 0x20203152 => WlShmFormat::R1, 0x20203252 => WlShmFormat::R2, 0x20203452 => WlShmFormat::R4, 0x20303152 => WlShmFormat::R10, 0x20323152 => WlShmFormat::R12, 0x59555641 => WlShmFormat::Avuy8888, 0x59555658 => WlShmFormat::Xvuy8888, 0x30333050 => WlShmFormat::P030, _ => return Err(()), }) } } pub fn write_req_wl_shm_create_pool( dst: &mut &mut [u8], for_id: ObjId, tag_fds: bool, id: ObjId, size: i32, ) { let l = length_req_wl_shm_create_pool(); write_header(dst, for_id, l, 0, if tag_fds { 1 } else { 0 }); write_obj(dst, id); write_i32(dst, size); } pub fn length_req_wl_shm_create_pool() -> usize { 16 } pub fn parse_req_wl_shm_create_pool<'a>(mut msg: &'a [u8]) -> Result<(ObjId, i32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg3 = parse_i32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg3)) } pub const OPCODE_WL_SHM_CREATE_POOL: MethodId = MethodId::Request(0); const DATA_WL_SHM: WaylandData = WaylandData { name: "wl_shm", evts: &[WaylandMethod { name: "format", sig: &[Uint], destructor: false, }], reqs: &[ WaylandMethod { name: "create_pool", sig: &[NewId(WlShmPool), Fd, Int], destructor: false, }, WaylandMethod { name: "release", sig: &[], destructor: true, }, ], version: 2, }; pub const WL_SHM: &[u8] = DATA_WL_SHM.name.as_bytes(); pub fn write_req_wl_buffer_destroy(dst: &mut &mut [u8], for_id: ObjId) { let l = length_req_wl_buffer_destroy(); write_header(dst, for_id, l, 0, 0); } pub fn length_req_wl_buffer_destroy() -> usize { 8 } pub fn parse_req_wl_buffer_destroy<'a>(msg: &'a [u8]) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_WL_BUFFER_DESTROY: MethodId = MethodId::Request(0); pub fn write_evt_wl_buffer_release(dst: &mut &mut [u8], for_id: ObjId) { let l = length_evt_wl_buffer_release(); write_header(dst, for_id, l, 0, 0); } pub fn length_evt_wl_buffer_release() -> usize { 8 } pub fn parse_evt_wl_buffer_release<'a>(msg: &'a [u8]) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_WL_BUFFER_RELEASE: MethodId = MethodId::Event(0); const DATA_WL_BUFFER: WaylandData = WaylandData { name: "wl_buffer", evts: &[WaylandMethod { name: "release", sig: &[], destructor: false, }], reqs: &[WaylandMethod { name: "destroy", sig: &[], destructor: true, }], version: 1, }; pub const WL_BUFFER: &[u8] = DATA_WL_BUFFER.name.as_bytes(); pub fn write_req_wl_data_offer_receive( dst: &mut &mut [u8], for_id: ObjId, tag_fds: bool, mime_type: &[u8], ) { let l = length_req_wl_data_offer_receive(mime_type.len()); write_header(dst, for_id, l, 1, if tag_fds { 1 } else { 0 }); write_string(dst, Some(mime_type)); } pub fn length_req_wl_data_offer_receive(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_req_wl_data_offer_receive<'a>(mut msg: &'a [u8]) -> Result<&'a [u8], &'static str> { msg = 
msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WL_DATA_OFFER_RECEIVE: MethodId = MethodId::Request(1); pub fn write_evt_wl_data_offer_offer(dst: &mut &mut [u8], for_id: ObjId, mime_type: &[u8]) { let l = length_evt_wl_data_offer_offer(mime_type.len()); write_header(dst, for_id, l, 0, 0); write_string(dst, Some(mime_type)); } pub fn length_evt_wl_data_offer_offer(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_evt_wl_data_offer_offer<'a>(mut msg: &'a [u8]) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WL_DATA_OFFER_OFFER: MethodId = MethodId::Event(0); const DATA_WL_DATA_OFFER: WaylandData = WaylandData { name: "wl_data_offer", evts: &[ WaylandMethod { name: "offer", sig: &[String], destructor: false, }, WaylandMethod { name: "source_actions", sig: &[Uint], destructor: false, }, WaylandMethod { name: "action", sig: &[Uint], destructor: false, }, ], reqs: &[ WaylandMethod { name: "accept", sig: &[Uint, OptionalString], destructor: false, }, WaylandMethod { name: "receive", sig: &[String, Fd], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "finish", sig: &[], destructor: false, }, WaylandMethod { name: "set_actions", sig: &[Uint, Uint], destructor: false, }, ], version: 3, }; pub const WL_DATA_OFFER: &[u8] = DATA_WL_DATA_OFFER.name.as_bytes(); pub fn write_req_wl_data_source_offer(dst: &mut &mut [u8], for_id: ObjId, mime_type: &[u8]) { let l = length_req_wl_data_source_offer(mime_type.len()); write_header(dst, for_id, l, 0, 0); write_string(dst, Some(mime_type)); } pub fn length_req_wl_data_source_offer(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_req_wl_data_source_offer<'a>(mut msg: &'a [u8]) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WL_DATA_SOURCE_OFFER: MethodId = MethodId::Request(0); pub fn write_evt_wl_data_source_send( dst: &mut &mut [u8], for_id: ObjId, tag_fds: bool, mime_type: &[u8], ) { let l = length_evt_wl_data_source_send(mime_type.len()); write_header(dst, for_id, l, 1, if tag_fds { 1 } else { 0 }); write_string(dst, Some(mime_type)); } pub fn length_evt_wl_data_source_send(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_evt_wl_data_source_send<'a>(mut msg: &'a [u8]) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WL_DATA_SOURCE_SEND: MethodId = MethodId::Event(1); const DATA_WL_DATA_SOURCE: WaylandData = WaylandData { name: "wl_data_source", evts: &[ WaylandMethod { name: "target", sig: &[OptionalString], destructor: false, }, WaylandMethod { name: "send", sig: &[String, Fd], destructor: false, }, WaylandMethod { name: "cancelled", sig: &[], destructor: false, }, WaylandMethod { name: "dnd_drop_performed", sig: &[], destructor: false, }, WaylandMethod { name: "dnd_finished", sig: &[], destructor: false, }, WaylandMethod { name: "action", sig: &[Uint], destructor: 
false, }, ], reqs: &[ WaylandMethod { name: "offer", sig: &[String], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "set_actions", sig: &[Uint], destructor: false, }, ], version: 3, }; pub const WL_DATA_SOURCE: &[u8] = DATA_WL_DATA_SOURCE.name.as_bytes(); pub fn write_req_wl_data_device_set_selection( dst: &mut &mut [u8], for_id: ObjId, source: ObjId, serial: u32, ) { let l = length_req_wl_data_device_set_selection(); write_header(dst, for_id, l, 1, 0); write_obj(dst, source); write_u32(dst, serial); } pub fn length_req_wl_data_device_set_selection() -> usize { 16 } pub fn parse_req_wl_data_device_set_selection<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_WL_DATA_DEVICE_SET_SELECTION: MethodId = MethodId::Request(1); pub fn write_evt_wl_data_device_data_offer(dst: &mut &mut [u8], for_id: ObjId, id: ObjId) { let l = length_evt_wl_data_device_data_offer(); write_header(dst, for_id, l, 0, 0); write_obj(dst, id); } pub fn length_evt_wl_data_device_data_offer() -> usize { 12 } pub fn parse_evt_wl_data_device_data_offer<'a>(mut msg: &'a [u8]) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WL_DATA_DEVICE_DATA_OFFER: MethodId = MethodId::Event(0); pub fn write_evt_wl_data_device_selection(dst: &mut &mut [u8], for_id: ObjId, id: ObjId) { let l = length_evt_wl_data_device_selection(); write_header(dst, for_id, l, 5, 0); write_obj(dst, id); } pub fn length_evt_wl_data_device_selection() -> usize { 12 } pub fn parse_evt_wl_data_device_selection<'a>(mut msg: &'a [u8]) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WL_DATA_DEVICE_SELECTION: MethodId = MethodId::Event(5); const DATA_WL_DATA_DEVICE: WaylandData = WaylandData { name: "wl_data_device", evts: &[ WaylandMethod { name: "data_offer", sig: &[NewId(WlDataOffer)], destructor: false, }, WaylandMethod { name: "enter", sig: &[Uint, Object(WlSurface), Fixed, Fixed, Object(WlDataOffer)], destructor: false, }, WaylandMethod { name: "leave", sig: &[], destructor: false, }, WaylandMethod { name: "motion", sig: &[Uint, Fixed, Fixed], destructor: false, }, WaylandMethod { name: "drop", sig: &[], destructor: false, }, WaylandMethod { name: "selection", sig: &[Object(WlDataOffer)], destructor: false, }, ], reqs: &[ WaylandMethod { name: "start_drag", sig: &[ Object(WlDataSource), Object(WlSurface), Object(WlSurface), Uint, ], destructor: false, }, WaylandMethod { name: "set_selection", sig: &[Object(WlDataSource), Uint], destructor: false, }, WaylandMethod { name: "release", sig: &[], destructor: true, }, ], version: 3, }; pub const WL_DATA_DEVICE: &[u8] = DATA_WL_DATA_DEVICE.name.as_bytes(); pub fn write_req_wl_data_device_manager_create_data_source( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, ) { let l = length_req_wl_data_device_manager_create_data_source(); write_header(dst, for_id, l, 0, 0); write_obj(dst, id); } pub fn length_req_wl_data_device_manager_create_data_source() -> usize { 12 } pub fn parse_req_wl_data_device_manager_create_data_source<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if
!msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WL_DATA_DEVICE_MANAGER_CREATE_DATA_SOURCE: MethodId = MethodId::Request(0); pub fn write_req_wl_data_device_manager_get_data_device( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, seat: ObjId, ) { let l = length_req_wl_data_device_manager_get_data_device(); write_header(dst, for_id, l, 1, 0); write_obj(dst, id); write_obj(dst, seat); } pub fn length_req_wl_data_device_manager_get_data_device() -> usize { 16 } pub fn parse_req_wl_data_device_manager_get_data_device<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, ObjId), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_WL_DATA_DEVICE_MANAGER_GET_DATA_DEVICE: MethodId = MethodId::Request(1); const DATA_WL_DATA_DEVICE_MANAGER: WaylandData = WaylandData { name: "wl_data_device_manager", evts: &[], reqs: &[ WaylandMethod { name: "create_data_source", sig: &[NewId(WlDataSource)], destructor: false, }, WaylandMethod { name: "get_data_device", sig: &[NewId(WlDataDevice), Object(WlSeat)], destructor: false, }, ], version: 3, }; pub const WL_DATA_DEVICE_MANAGER: &[u8] = DATA_WL_DATA_DEVICE_MANAGER.name.as_bytes(); const DATA_WL_SHELL: WaylandData = WaylandData { name: "wl_shell", evts: &[], reqs: &[WaylandMethod { name: "get_shell_surface", sig: &[NewId(WlShellSurface), Object(WlSurface)], destructor: false, }], version: 1, }; pub const WL_SHELL: &[u8] = DATA_WL_SHELL.name.as_bytes(); const DATA_WL_SHELL_SURFACE: WaylandData = WaylandData { name: "wl_shell_surface", evts: &[ WaylandMethod { name: "ping", sig: &[Uint], destructor: false, }, WaylandMethod { name: "configure", sig: &[Uint, Int, Int], destructor: false, }, WaylandMethod { name: "popup_done", sig: &[], destructor: false, }, ], reqs: &[ WaylandMethod { name: "pong", sig: &[Uint], destructor: false, }, WaylandMethod { name: "move", sig: &[Object(WlSeat), Uint], destructor: false, }, WaylandMethod { name: "resize", sig: &[Object(WlSeat), Uint, Uint], destructor: false, }, WaylandMethod { name: "set_toplevel", sig: &[], destructor: false, }, WaylandMethod { name: "set_transient", sig: &[Object(WlSurface), Int, Int, Uint], destructor: false, }, WaylandMethod { name: "set_fullscreen", sig: &[Uint, Uint, Object(WlOutput)], destructor: false, }, WaylandMethod { name: "set_popup", sig: &[Object(WlSeat), Uint, Object(WlSurface), Int, Int, Uint], destructor: false, }, WaylandMethod { name: "set_maximized", sig: &[Object(WlOutput)], destructor: false, }, WaylandMethod { name: "set_title", sig: &[String], destructor: false, }, WaylandMethod { name: "set_class", sig: &[String], destructor: false, }, ], version: 1, }; pub const WL_SHELL_SURFACE: &[u8] = DATA_WL_SHELL_SURFACE.name.as_bytes(); pub fn write_req_wl_surface_destroy(dst: &mut &mut [u8], for_id: ObjId) { let l = length_req_wl_surface_destroy(); write_header(dst, for_id, l, 0, 0); } pub fn length_req_wl_surface_destroy() -> usize { 8 } pub fn parse_req_wl_surface_destroy<'a>(msg: &'a [u8]) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_WL_SURFACE_DESTROY: MethodId = MethodId::Request(0); pub fn write_req_wl_surface_attach( dst: &mut &mut [u8], for_id: ObjId, buffer: ObjId, x: i32, y: i32, ) { let l = length_req_wl_surface_attach(); write_header(dst, for_id, l, 1, 0); write_obj(dst, buffer); write_i32(dst, x); write_i32(dst, y); } pub fn length_req_wl_surface_attach() 
-> usize { 20 } pub fn parse_req_wl_surface_attach<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, i32, i32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_i32(&mut msg)?; let arg3 = parse_i32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3)) } pub const OPCODE_WL_SURFACE_ATTACH: MethodId = MethodId::Request(1); pub fn write_req_wl_surface_damage( dst: &mut &mut [u8], for_id: ObjId, x: i32, y: i32, width: i32, height: i32, ) { let l = length_req_wl_surface_damage(); write_header(dst, for_id, l, 2, 0); write_i32(dst, x); write_i32(dst, y); write_i32(dst, width); write_i32(dst, height); } pub fn length_req_wl_surface_damage() -> usize { 24 } pub fn parse_req_wl_surface_damage<'a>( mut msg: &'a [u8], ) -> Result<(i32, i32, i32, i32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_i32(&mut msg)?; let arg2 = parse_i32(&mut msg)?; let arg3 = parse_i32(&mut msg)?; let arg4 = parse_i32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3, arg4)) } pub const OPCODE_WL_SURFACE_DAMAGE: MethodId = MethodId::Request(2); pub fn write_req_wl_surface_commit(dst: &mut &mut [u8], for_id: ObjId) { let l = length_req_wl_surface_commit(); write_header(dst, for_id, l, 6, 0); } pub fn length_req_wl_surface_commit() -> usize { 8 } pub fn parse_req_wl_surface_commit<'a>(msg: &'a [u8]) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_WL_SURFACE_COMMIT: MethodId = MethodId::Request(6); pub fn write_req_wl_surface_set_buffer_transform( dst: &mut &mut [u8], for_id: ObjId, transform: i32, ) { let l = length_req_wl_surface_set_buffer_transform(); write_header(dst, for_id, l, 7, 0); write_i32(dst, transform); } pub fn length_req_wl_surface_set_buffer_transform() -> usize { 12 } pub fn parse_req_wl_surface_set_buffer_transform<'a>( mut msg: &'a [u8], ) -> Result<i32, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_i32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WL_SURFACE_SET_BUFFER_TRANSFORM: MethodId = MethodId::Request(7); pub fn write_req_wl_surface_set_buffer_scale(dst: &mut &mut [u8], for_id: ObjId, scale: i32) { let l = length_req_wl_surface_set_buffer_scale(); write_header(dst, for_id, l, 8, 0); write_i32(dst, scale); } pub fn length_req_wl_surface_set_buffer_scale() -> usize { 12 } pub fn parse_req_wl_surface_set_buffer_scale<'a>(mut msg: &'a [u8]) -> Result<i32, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_i32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WL_SURFACE_SET_BUFFER_SCALE: MethodId = MethodId::Request(8); pub fn write_req_wl_surface_damage_buffer( dst: &mut &mut [u8], for_id: ObjId, x: i32, y: i32, width: i32, height: i32, ) { let l = length_req_wl_surface_damage_buffer(); write_header(dst, for_id, l, 9, 0); write_i32(dst, x); write_i32(dst, y); write_i32(dst, width); write_i32(dst, height); } pub fn length_req_wl_surface_damage_buffer() -> usize { 24 } pub fn parse_req_wl_surface_damage_buffer<'a>( mut msg: &'a [u8], ) -> Result<(i32, i32, i32, i32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_i32(&mut msg)?; let arg2 = parse_i32(&mut msg)?; let arg3 = parse_i32(&mut msg)?; let arg4 = parse_i32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3, arg4)) } pub const OPCODE_WL_SURFACE_DAMAGE_BUFFER: MethodId = MethodId::Request(9); const
DATA_WL_SURFACE: WaylandData = WaylandData { name: "wl_surface", evts: &[ WaylandMethod { name: "enter", sig: &[Object(WlOutput)], destructor: false, }, WaylandMethod { name: "leave", sig: &[Object(WlOutput)], destructor: false, }, WaylandMethod { name: "preferred_buffer_scale", sig: &[Int], destructor: false, }, WaylandMethod { name: "preferred_buffer_transform", sig: &[Uint], destructor: false, }, ], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "attach", sig: &[Object(WlBuffer), Int, Int], destructor: false, }, WaylandMethod { name: "damage", sig: &[Int, Int, Int, Int], destructor: false, }, WaylandMethod { name: "frame", sig: &[NewId(WlCallback)], destructor: false, }, WaylandMethod { name: "set_opaque_region", sig: &[Object(WlRegion)], destructor: false, }, WaylandMethod { name: "set_input_region", sig: &[Object(WlRegion)], destructor: false, }, WaylandMethod { name: "commit", sig: &[], destructor: false, }, WaylandMethod { name: "set_buffer_transform", sig: &[Int], destructor: false, }, WaylandMethod { name: "set_buffer_scale", sig: &[Int], destructor: false, }, WaylandMethod { name: "damage_buffer", sig: &[Int, Int, Int, Int], destructor: false, }, WaylandMethod { name: "offset", sig: &[Int, Int], destructor: false, }, ], version: 6, }; pub const WL_SURFACE: &[u8] = DATA_WL_SURFACE.name.as_bytes(); pub fn write_evt_wl_seat_capabilities(dst: &mut &mut [u8], for_id: ObjId, capabilities: u32) { let l = length_evt_wl_seat_capabilities(); write_header(dst, for_id, l, 0, 0); write_u32(dst, capabilities); } pub fn length_evt_wl_seat_capabilities() -> usize { 12 } pub fn parse_evt_wl_seat_capabilities<'a>(mut msg: &'a [u8]) -> Result<u32, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WL_SEAT_CAPABILITIES: MethodId = MethodId::Event(0); pub fn write_req_wl_seat_get_keyboard(dst: &mut &mut [u8], for_id: ObjId, id: ObjId) { let l = length_req_wl_seat_get_keyboard(); write_header(dst, for_id, l, 1, 0); write_obj(dst, id); } pub fn length_req_wl_seat_get_keyboard() -> usize { 12 } pub fn parse_req_wl_seat_get_keyboard<'a>(mut msg: &'a [u8]) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_WL_SEAT_GET_KEYBOARD: MethodId = MethodId::Request(1); const DATA_WL_SEAT: WaylandData = WaylandData { name: "wl_seat", evts: &[ WaylandMethod { name: "capabilities", sig: &[Uint], destructor: false, }, WaylandMethod { name: "name", sig: &[String], destructor: false, }, ], reqs: &[ WaylandMethod { name: "get_pointer", sig: &[NewId(WlPointer)], destructor: false, }, WaylandMethod { name: "get_keyboard", sig: &[NewId(WlKeyboard)], destructor: false, }, WaylandMethod { name: "get_touch", sig: &[NewId(WlTouch)], destructor: false, }, WaylandMethod { name: "release", sig: &[], destructor: true, }, ], version: 10, }; pub const WL_SEAT: &[u8] = DATA_WL_SEAT.name.as_bytes(); const DATA_WL_POINTER: WaylandData = WaylandData { name: "wl_pointer", evts: &[ WaylandMethod { name: "enter", sig: &[Uint, Object(WlSurface), Fixed, Fixed], destructor: false, }, WaylandMethod { name: "leave", sig: &[Uint, Object(WlSurface)], destructor: false, }, WaylandMethod { name: "motion", sig: &[Uint, Fixed, Fixed], destructor: false, }, WaylandMethod { name: "button", sig: &[Uint, Uint, Uint, Uint], destructor: false, }, WaylandMethod { name: "axis", sig: &[Uint, Uint, Fixed],
destructor: false, }, WaylandMethod { name: "frame", sig: &[], destructor: false, }, WaylandMethod { name: "axis_source", sig: &[Uint], destructor: false, }, WaylandMethod { name: "axis_stop", sig: &[Uint, Uint], destructor: false, }, WaylandMethod { name: "axis_discrete", sig: &[Uint, Int], destructor: false, }, WaylandMethod { name: "axis_value120", sig: &[Uint, Int], destructor: false, }, WaylandMethod { name: "axis_relative_direction", sig: &[Uint, Uint], destructor: false, }, ], reqs: &[ WaylandMethod { name: "set_cursor", sig: &[Uint, Object(WlSurface), Int, Int], destructor: false, }, WaylandMethod { name: "release", sig: &[], destructor: true, }, ], version: 10, }; pub const WL_POINTER: &[u8] = DATA_WL_POINTER.name.as_bytes(); pub fn write_evt_wl_keyboard_keymap( dst: &mut &mut [u8], for_id: ObjId, tag_fds: bool, format: u32, size: u32, ) { let l = length_evt_wl_keyboard_keymap(); write_header(dst, for_id, l, 0, if tag_fds { 1 } else { 0 }); write_u32(dst, format); write_u32(dst, size); } pub fn length_evt_wl_keyboard_keymap() -> usize { 16 } pub fn parse_evt_wl_keyboard_keymap<'a>(mut msg: &'a [u8]) -> Result<(u32, u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; let arg3 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg3)) } pub const OPCODE_WL_KEYBOARD_KEYMAP: MethodId = MethodId::Event(0); const DATA_WL_KEYBOARD: WaylandData = WaylandData { name: "wl_keyboard", evts: &[ WaylandMethod { name: "keymap", sig: &[Uint, Fd, Uint], destructor: false, }, WaylandMethod { name: "enter", sig: &[Uint, Object(WlSurface), Array], destructor: false, }, WaylandMethod { name: "leave", sig: &[Uint, Object(WlSurface)], destructor: false, }, WaylandMethod { name: "key", sig: &[Uint, Uint, Uint, Uint], destructor: false, }, WaylandMethod { name: "modifiers", sig: &[Uint, Uint, Uint, Uint, Uint], destructor: false, }, WaylandMethod { name: "repeat_info", sig: &[Int, Int], destructor: false, }, ], reqs: &[WaylandMethod { name: "release", sig: &[], destructor: true, }], version: 10, }; pub const WL_KEYBOARD: &[u8] = DATA_WL_KEYBOARD.name.as_bytes(); const DATA_WL_TOUCH: WaylandData = WaylandData { name: "wl_touch", evts: &[ WaylandMethod { name: "down", sig: &[Uint, Uint, Object(WlSurface), Int, Fixed, Fixed], destructor: false, }, WaylandMethod { name: "up", sig: &[Uint, Uint, Int], destructor: false, }, WaylandMethod { name: "motion", sig: &[Uint, Int, Fixed, Fixed], destructor: false, }, WaylandMethod { name: "frame", sig: &[], destructor: false, }, WaylandMethod { name: "cancel", sig: &[], destructor: false, }, WaylandMethod { name: "shape", sig: &[Int, Fixed, Fixed], destructor: false, }, WaylandMethod { name: "orientation", sig: &[Int, Fixed], destructor: false, }, ], reqs: &[WaylandMethod { name: "release", sig: &[], destructor: true, }], version: 10, }; pub const WL_TOUCH: &[u8] = DATA_WL_TOUCH.name.as_bytes(); #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub enum WlOutputTransform { Normal = 0, Item90 = 1, Item180 = 2, Item270 = 3, Flipped = 4, Flipped90 = 5, Flipped180 = 6, Flipped270 = 7, } impl TryFrom<u32> for WlOutputTransform { type Error = (); fn try_from(v: u32) -> Result<Self, Self::Error> { Ok(match v { 0 => WlOutputTransform::Normal, 1 => WlOutputTransform::Item90, 2 => WlOutputTransform::Item180, 3 => WlOutputTransform::Item270, 4 => WlOutputTransform::Flipped, 5 => WlOutputTransform::Flipped90, 6 => WlOutputTransform::Flipped180, 7 => WlOutputTransform::Flipped270, _ => return Err(()), }) } } const DATA_WL_OUTPUT:
WaylandData = WaylandData { name: "wl_output", evts: &[ WaylandMethod { name: "geometry", sig: &[Int, Int, Int, Int, Int, String, String, Int], destructor: false, }, WaylandMethod { name: "mode", sig: &[Uint, Int, Int, Int], destructor: false, }, WaylandMethod { name: "done", sig: &[], destructor: false, }, WaylandMethod { name: "scale", sig: &[Int], destructor: false, }, WaylandMethod { name: "name", sig: &[String], destructor: false, }, WaylandMethod { name: "description", sig: &[String], destructor: false, }, ], reqs: &[WaylandMethod { name: "release", sig: &[], destructor: true, }], version: 4, }; pub const WL_OUTPUT: &[u8] = DATA_WL_OUTPUT.name.as_bytes(); const DATA_WL_REGION: WaylandData = WaylandData { name: "wl_region", evts: &[], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "add", sig: &[Int, Int, Int, Int], destructor: false, }, WaylandMethod { name: "subtract", sig: &[Int, Int, Int, Int], destructor: false, }, ], version: 1, }; pub const WL_REGION: &[u8] = DATA_WL_REGION.name.as_bytes(); const DATA_WL_SUBCOMPOSITOR: WaylandData = WaylandData { name: "wl_subcompositor", evts: &[], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "get_subsurface", sig: &[NewId(WlSubsurface), Object(WlSurface), Object(WlSurface)], destructor: false, }, ], version: 1, }; pub const WL_SUBCOMPOSITOR: &[u8] = DATA_WL_SUBCOMPOSITOR.name.as_bytes(); const DATA_WL_SUBSURFACE: WaylandData = WaylandData { name: "wl_subsurface", evts: &[], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "set_position", sig: &[Int, Int], destructor: false, }, WaylandMethod { name: "place_above", sig: &[Object(WlSurface)], destructor: false, }, WaylandMethod { name: "place_below", sig: &[Object(WlSurface)], destructor: false, }, WaylandMethod { name: "set_sync", sig: &[], destructor: false, }, WaylandMethod { name: "set_desync", sig: &[], destructor: false, }, ], version: 1, }; pub const WL_SUBSURFACE: &[u8] = DATA_WL_SUBSURFACE.name.as_bytes(); pub fn write_req_zwlr_data_control_manager_v1_create_data_source( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, ) { let l = length_req_zwlr_data_control_manager_v1_create_data_source(); write_header(dst, for_id, l, 0, 0); write_obj(dst, id); } pub fn length_req_zwlr_data_control_manager_v1_create_data_source() -> usize { 12 } pub fn parse_req_zwlr_data_control_manager_v1_create_data_source<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWLR_DATA_CONTROL_MANAGER_V1_CREATE_DATA_SOURCE: MethodId = MethodId::Request(0); pub fn write_req_zwlr_data_control_manager_v1_get_data_device( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, seat: ObjId, ) { let l = length_req_zwlr_data_control_manager_v1_get_data_device(); write_header(dst, for_id, l, 1, 0); write_obj(dst, id); write_obj(dst, seat); } pub fn length_req_zwlr_data_control_manager_v1_get_data_device() -> usize { 16 } pub fn parse_req_zwlr_data_control_manager_v1_get_data_device<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, ObjId), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_ZWLR_DATA_CONTROL_MANAGER_V1_GET_DATA_DEVICE: MethodId = MethodId::Request(1); const DATA_ZWLR_DATA_CONTROL_MANAGER_V1:
WaylandData = WaylandData { name: "zwlr_data_control_manager_v1", evts: &[], reqs: &[ WaylandMethod { name: "create_data_source", sig: &[NewId(ZwlrDataControlSourceV1)], destructor: false, }, WaylandMethod { name: "get_data_device", sig: &[NewId(ZwlrDataControlDeviceV1), Object(WlSeat)], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 2, }; pub const ZWLR_DATA_CONTROL_MANAGER_V1: &[u8] = DATA_ZWLR_DATA_CONTROL_MANAGER_V1.name.as_bytes(); pub fn write_req_zwlr_data_control_device_v1_set_selection( dst: &mut &mut [u8], for_id: ObjId, source: ObjId, ) { let l = length_req_zwlr_data_control_device_v1_set_selection(); write_header(dst, for_id, l, 0, 0); write_obj(dst, source); } pub fn length_req_zwlr_data_control_device_v1_set_selection() -> usize { 12 } pub fn parse_req_zwlr_data_control_device_v1_set_selection<'a>( mut msg: &'a [u8], ) -> Result { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWLR_DATA_CONTROL_DEVICE_V1_SET_SELECTION: MethodId = MethodId::Request(0); pub fn write_evt_zwlr_data_control_device_v1_data_offer( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, ) { let l = length_evt_zwlr_data_control_device_v1_data_offer(); write_header(dst, for_id, l, 0, 0); write_obj(dst, id); } pub fn length_evt_zwlr_data_control_device_v1_data_offer() -> usize { 12 } pub fn parse_evt_zwlr_data_control_device_v1_data_offer<'a>( mut msg: &'a [u8], ) -> Result { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWLR_DATA_CONTROL_DEVICE_V1_DATA_OFFER: MethodId = MethodId::Event(0); pub fn write_evt_zwlr_data_control_device_v1_selection( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, ) { let l = length_evt_zwlr_data_control_device_v1_selection(); write_header(dst, for_id, l, 1, 0); write_obj(dst, id); } pub fn length_evt_zwlr_data_control_device_v1_selection() -> usize { 12 } pub fn parse_evt_zwlr_data_control_device_v1_selection<'a>( mut msg: &'a [u8], ) -> Result { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWLR_DATA_CONTROL_DEVICE_V1_SELECTION: MethodId = MethodId::Event(1); const DATA_ZWLR_DATA_CONTROL_DEVICE_V1: WaylandData = WaylandData { name: "zwlr_data_control_device_v1", evts: &[ WaylandMethod { name: "data_offer", sig: &[NewId(ZwlrDataControlOfferV1)], destructor: false, }, WaylandMethod { name: "selection", sig: &[Object(ZwlrDataControlOfferV1)], destructor: false, }, WaylandMethod { name: "finished", sig: &[], destructor: false, }, WaylandMethod { name: "primary_selection", sig: &[Object(ZwlrDataControlOfferV1)], destructor: false, }, ], reqs: &[ WaylandMethod { name: "set_selection", sig: &[Object(ZwlrDataControlSourceV1)], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "set_primary_selection", sig: &[Object(ZwlrDataControlSourceV1)], destructor: false, }, ], version: 2, }; pub const ZWLR_DATA_CONTROL_DEVICE_V1: &[u8] = DATA_ZWLR_DATA_CONTROL_DEVICE_V1.name.as_bytes(); pub fn write_req_zwlr_data_control_source_v1_offer( dst: &mut &mut [u8], for_id: ObjId, mime_type: &[u8], ) { let l = length_req_zwlr_data_control_source_v1_offer(mime_type.len()); write_header(dst, for_id, l, 0, 0); write_string(dst, Some(mime_type)); } pub fn 
length_req_zwlr_data_control_source_v1_offer(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_req_zwlr_data_control_source_v1_offer<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWLR_DATA_CONTROL_SOURCE_V1_OFFER: MethodId = MethodId::Request(0); pub fn write_evt_zwlr_data_control_source_v1_send( dst: &mut &mut [u8], for_id: ObjId, tag_fds: bool, mime_type: &[u8], ) { let l = length_evt_zwlr_data_control_source_v1_send(mime_type.len()); write_header(dst, for_id, l, 0, if tag_fds { 1 } else { 0 }); write_string(dst, Some(mime_type)); } pub fn length_evt_zwlr_data_control_source_v1_send(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_evt_zwlr_data_control_source_v1_send<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWLR_DATA_CONTROL_SOURCE_V1_SEND: MethodId = MethodId::Event(0); const DATA_ZWLR_DATA_CONTROL_SOURCE_V1: WaylandData = WaylandData { name: "zwlr_data_control_source_v1", evts: &[ WaylandMethod { name: "send", sig: &[String, Fd], destructor: false, }, WaylandMethod { name: "cancelled", sig: &[], destructor: false, }, ], reqs: &[ WaylandMethod { name: "offer", sig: &[String], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const ZWLR_DATA_CONTROL_SOURCE_V1: &[u8] = DATA_ZWLR_DATA_CONTROL_SOURCE_V1.name.as_bytes(); pub fn write_req_zwlr_data_control_offer_v1_receive( dst: &mut &mut [u8], for_id: ObjId, tag_fds: bool, mime_type: &[u8], ) { let l = length_req_zwlr_data_control_offer_v1_receive(mime_type.len()); write_header(dst, for_id, l, 0, if tag_fds { 1 } else { 0 }); write_string(dst, Some(mime_type)); } pub fn length_req_zwlr_data_control_offer_v1_receive(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_req_zwlr_data_control_offer_v1_receive<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWLR_DATA_CONTROL_OFFER_V1_RECEIVE: MethodId = MethodId::Request(0); pub fn write_evt_zwlr_data_control_offer_v1_offer( dst: &mut &mut [u8], for_id: ObjId, mime_type: &[u8], ) { let l = length_evt_zwlr_data_control_offer_v1_offer(mime_type.len()); write_header(dst, for_id, l, 0, 0); write_string(dst, Some(mime_type)); } pub fn length_evt_zwlr_data_control_offer_v1_offer(mime_type_len: usize) -> usize { let mut v = 8; v += length_string(mime_type_len); v } pub fn parse_evt_zwlr_data_control_offer_v1_offer<'a>( mut msg: &'a [u8], ) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWLR_DATA_CONTROL_OFFER_V1_OFFER: MethodId = MethodId::Event(0); const DATA_ZWLR_DATA_CONTROL_OFFER_V1: WaylandData = WaylandData { name: "zwlr_data_control_offer_v1", evts: &[WaylandMethod { name: "offer", sig: &[String], destructor: false, }], reqs: &[ WaylandMethod { name: "receive", sig: &[String, Fd], 
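/* Editorial sketch: receive, like the send event above, pairs a MIME type
string with an out-of-band pipe fd. A hypothetical proxy-side call, where
offer_id is an ObjId chosen by the caller and buf is a large-enough scratch
buffer:

let mut dst: &mut [u8] = &mut buf[..];
write_req_zwlr_data_control_offer_v1_receive(
        &mut dst, offer_id, true, b"text/plain;charset=utf-8");

The length helper adds length_string(mime_type_len) to the 8-byte header;
per the Wayland wire format this presumably covers a u32 length prefix plus
the NUL-terminated contents padded to a 4-byte boundary. */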
destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const ZWLR_DATA_CONTROL_OFFER_V1: &[u8] = DATA_ZWLR_DATA_CONTROL_OFFER_V1.name.as_bytes(); const DATA_ZWLR_EXPORT_DMABUF_MANAGER_V1: WaylandData = WaylandData { name: "zwlr_export_dmabuf_manager_v1", evts: &[], reqs: &[ WaylandMethod { name: "capture_output", sig: &[NewId(ZwlrExportDmabufFrameV1), Int, Object(WlOutput)], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const ZWLR_EXPORT_DMABUF_MANAGER_V1: &[u8] = DATA_ZWLR_EXPORT_DMABUF_MANAGER_V1.name.as_bytes(); const DATA_ZWLR_EXPORT_DMABUF_FRAME_V1: WaylandData = WaylandData { name: "zwlr_export_dmabuf_frame_v1", evts: &[ WaylandMethod { name: "frame", sig: &[Uint, Uint, Uint, Uint, Uint, Uint, Uint, Uint, Uint, Uint], destructor: false, }, WaylandMethod { name: "object", sig: &[Uint, Fd, Uint, Uint, Uint, Uint], destructor: false, }, WaylandMethod { name: "ready", sig: &[Uint, Uint, Uint], destructor: false, }, WaylandMethod { name: "cancel", sig: &[Uint], destructor: false, }, ], reqs: &[WaylandMethod { name: "destroy", sig: &[], destructor: true, }], version: 1, }; pub const ZWLR_EXPORT_DMABUF_FRAME_V1: &[u8] = DATA_ZWLR_EXPORT_DMABUF_FRAME_V1.name.as_bytes(); pub fn write_req_zwlr_gamma_control_manager_v1_get_gamma_control( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, output: ObjId, ) { let l = length_req_zwlr_gamma_control_manager_v1_get_gamma_control(); write_header(dst, for_id, l, 0, 0); write_obj(dst, id); write_obj(dst, output); } pub fn length_req_zwlr_gamma_control_manager_v1_get_gamma_control() -> usize { 16 } pub fn parse_req_zwlr_gamma_control_manager_v1_get_gamma_control<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, ObjId), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_ZWLR_GAMMA_CONTROL_MANAGER_V1_GET_GAMMA_CONTROL: MethodId = MethodId::Request(0); const DATA_ZWLR_GAMMA_CONTROL_MANAGER_V1: WaylandData = WaylandData { name: "zwlr_gamma_control_manager_v1", evts: &[], reqs: &[ WaylandMethod { name: "get_gamma_control", sig: &[NewId(ZwlrGammaControlV1), Object(WlOutput)], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const ZWLR_GAMMA_CONTROL_MANAGER_V1: &[u8] = DATA_ZWLR_GAMMA_CONTROL_MANAGER_V1.name.as_bytes(); pub fn write_evt_zwlr_gamma_control_v1_gamma_size(dst: &mut &mut [u8], for_id: ObjId, size: u32) { let l = length_evt_zwlr_gamma_control_v1_gamma_size(); write_header(dst, for_id, l, 0, 0); write_u32(dst, size); } pub fn length_evt_zwlr_gamma_control_v1_gamma_size() -> usize { 12 } pub fn parse_evt_zwlr_gamma_control_v1_gamma_size<'a>( mut msg: &'a [u8], ) -> Result { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWLR_GAMMA_CONTROL_V1_GAMMA_SIZE: MethodId = MethodId::Event(0); pub fn write_req_zwlr_gamma_control_v1_set_gamma( dst: &mut &mut [u8], for_id: ObjId, tag_fds: bool, ) { let l = length_req_zwlr_gamma_control_v1_set_gamma(); write_header(dst, for_id, l, 0, if tag_fds { 1 } else { 0 }); } pub fn length_req_zwlr_gamma_control_v1_set_gamma() -> usize { 8 } pub fn parse_req_zwlr_gamma_control_v1_set_gamma<'a>(msg: &'a [u8]) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const 
OPCODE_ZWLR_GAMMA_CONTROL_V1_SET_GAMMA: MethodId = MethodId::Request(0); const DATA_ZWLR_GAMMA_CONTROL_V1: WaylandData = WaylandData { name: "zwlr_gamma_control_v1", evts: &[ WaylandMethod { name: "gamma_size", sig: &[Uint], destructor: false, }, WaylandMethod { name: "failed", sig: &[], destructor: false, }, ], reqs: &[ WaylandMethod { name: "set_gamma", sig: &[Fd], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 1, }; pub const ZWLR_GAMMA_CONTROL_V1: &[u8] = DATA_ZWLR_GAMMA_CONTROL_V1.name.as_bytes(); pub fn write_req_zwlr_screencopy_manager_v1_capture_output( dst: &mut &mut [u8], for_id: ObjId, frame: ObjId, overlay_cursor: i32, output: ObjId, ) { let l = length_req_zwlr_screencopy_manager_v1_capture_output(); write_header(dst, for_id, l, 0, 0); write_obj(dst, frame); write_i32(dst, overlay_cursor); write_obj(dst, output); } pub fn length_req_zwlr_screencopy_manager_v1_capture_output() -> usize { 20 } pub fn parse_req_zwlr_screencopy_manager_v1_capture_output<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, i32, ObjId), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_i32(&mut msg)?; let arg3 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3)) } pub const OPCODE_ZWLR_SCREENCOPY_MANAGER_V1_CAPTURE_OUTPUT: MethodId = MethodId::Request(0); pub fn write_req_zwlr_screencopy_manager_v1_capture_output_region( dst: &mut &mut [u8], for_id: ObjId, frame: ObjId, overlay_cursor: i32, output: ObjId, x: i32, y: i32, width: i32, height: i32, ) { let l = length_req_zwlr_screencopy_manager_v1_capture_output_region(); write_header(dst, for_id, l, 1, 0); write_obj(dst, frame); write_i32(dst, overlay_cursor); write_obj(dst, output); write_i32(dst, x); write_i32(dst, y); write_i32(dst, width); write_i32(dst, height); } pub fn length_req_zwlr_screencopy_manager_v1_capture_output_region() -> usize { 36 } pub fn parse_req_zwlr_screencopy_manager_v1_capture_output_region<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, i32, ObjId, i32, i32, i32, i32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_i32(&mut msg)?; let arg3 = parse_obj(&mut msg)?; let arg4 = parse_i32(&mut msg)?; let arg5 = parse_i32(&mut msg)?; let arg6 = parse_i32(&mut msg)?; let arg7 = parse_i32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3, arg4, arg5, arg6, arg7)) } pub const OPCODE_ZWLR_SCREENCOPY_MANAGER_V1_CAPTURE_OUTPUT_REGION: MethodId = MethodId::Request(1); const DATA_ZWLR_SCREENCOPY_MANAGER_V1: WaylandData = WaylandData { name: "zwlr_screencopy_manager_v1", evts: &[], reqs: &[ WaylandMethod { name: "capture_output", sig: &[NewId(ZwlrScreencopyFrameV1), Int, Object(WlOutput)], destructor: false, }, WaylandMethod { name: "capture_output_region", sig: &[ NewId(ZwlrScreencopyFrameV1), Int, Object(WlOutput), Int, Int, Int, Int, ], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, ], version: 3, }; pub const ZWLR_SCREENCOPY_MANAGER_V1: &[u8] = DATA_ZWLR_SCREENCOPY_MANAGER_V1.name.as_bytes(); pub fn write_evt_zwlr_screencopy_frame_v1_buffer( dst: &mut &mut [u8], for_id: ObjId, format: u32, width: u32, height: u32, stride: u32, ) { let l = length_evt_zwlr_screencopy_frame_v1_buffer(); write_header(dst, for_id, l, 0, 0); write_u32(dst, format); write_u32(dst, width); write_u32(dst, height); write_u32(dst, stride); } pub fn length_evt_zwlr_screencopy_frame_v1_buffer() -> usize { 
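/* 24 bytes: the 8-byte message header plus four u32 arguments
(format, width, height, stride) at 4 bytes each. */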
24 } pub fn parse_evt_zwlr_screencopy_frame_v1_buffer<'a>( mut msg: &'a [u8], ) -> Result<(u32, u32, u32, u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; let arg2 = parse_u32(&mut msg)?; let arg3 = parse_u32(&mut msg)?; let arg4 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3, arg4)) } pub const OPCODE_ZWLR_SCREENCOPY_FRAME_V1_BUFFER: MethodId = MethodId::Event(0); pub fn write_req_zwlr_screencopy_frame_v1_copy(dst: &mut &mut [u8], for_id: ObjId, buffer: ObjId) { let l = length_req_zwlr_screencopy_frame_v1_copy(); write_header(dst, for_id, l, 0, 0); write_obj(dst, buffer); } pub fn length_req_zwlr_screencopy_frame_v1_copy() -> usize { 12 } pub fn parse_req_zwlr_screencopy_frame_v1_copy<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWLR_SCREENCOPY_FRAME_V1_COPY: MethodId = MethodId::Request(0); pub fn write_evt_zwlr_screencopy_frame_v1_ready( dst: &mut &mut [u8], for_id: ObjId, tv_sec_hi: u32, tv_sec_lo: u32, tv_nsec: u32, ) { let l = length_evt_zwlr_screencopy_frame_v1_ready(); write_header(dst, for_id, l, 2, 0); write_u32(dst, tv_sec_hi); write_u32(dst, tv_sec_lo); write_u32(dst, tv_nsec); } pub fn length_evt_zwlr_screencopy_frame_v1_ready() -> usize { 20 } pub fn parse_evt_zwlr_screencopy_frame_v1_ready<'a>( mut msg: &'a [u8], ) -> Result<(u32, u32, u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; let arg2 = parse_u32(&mut msg)?; let arg3 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3)) } pub const OPCODE_ZWLR_SCREENCOPY_FRAME_V1_READY: MethodId = MethodId::Event(2); pub fn write_evt_zwlr_screencopy_frame_v1_failed(dst: &mut &mut [u8], for_id: ObjId) { let l = length_evt_zwlr_screencopy_frame_v1_failed(); write_header(dst, for_id, l, 3, 0); } pub fn length_evt_zwlr_screencopy_frame_v1_failed() -> usize { 8 } pub fn parse_evt_zwlr_screencopy_frame_v1_failed<'a>(msg: &'a [u8]) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_ZWLR_SCREENCOPY_FRAME_V1_FAILED: MethodId = MethodId::Event(3); pub fn write_req_zwlr_screencopy_frame_v1_destroy(dst: &mut &mut [u8], for_id: ObjId) { let l = length_req_zwlr_screencopy_frame_v1_destroy(); write_header(dst, for_id, l, 1, 0); } pub fn length_req_zwlr_screencopy_frame_v1_destroy() -> usize { 8 } pub fn parse_req_zwlr_screencopy_frame_v1_destroy<'a>(msg: &'a [u8]) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_ZWLR_SCREENCOPY_FRAME_V1_DESTROY: MethodId = MethodId::Request(1); pub fn write_req_zwlr_screencopy_frame_v1_copy_with_damage( dst: &mut &mut [u8], for_id: ObjId, buffer: ObjId, ) { let l = length_req_zwlr_screencopy_frame_v1_copy_with_damage(); write_header(dst, for_id, l, 2, 0); write_obj(dst, buffer); } pub fn length_req_zwlr_screencopy_frame_v1_copy_with_damage() -> usize { 12 } pub fn parse_req_zwlr_screencopy_frame_v1_copy_with_damage<'a>( mut msg: &'a [u8], ) -> Result<ObjId, &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_ZWLR_SCREENCOPY_FRAME_V1_COPY_WITH_DAMAGE: MethodId = MethodId::Request(2); pub fn write_evt_zwlr_screencopy_frame_v1_linux_dmabuf( dst: &mut &mut [u8], for_id: ObjId, format: u32, width: u32, height:
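/* Editorial note: MethodId opcodes index into the evts and reqs tables of the
interface's WaylandData. For example, OPCODE_ZWLR_SCREENCOPY_FRAME_V1_READY is
MethodId::Event(2), matching the "ready" entry at index 2 of
DATA_ZWLR_SCREENCOPY_FRAME_V1.evts below ("buffer" is 0, "flags" is 1). */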
u32, ) { let l = length_evt_zwlr_screencopy_frame_v1_linux_dmabuf(); write_header(dst, for_id, l, 5, 0); write_u32(dst, format); write_u32(dst, width); write_u32(dst, height); } pub fn length_evt_zwlr_screencopy_frame_v1_linux_dmabuf() -> usize { 20 } pub fn parse_evt_zwlr_screencopy_frame_v1_linux_dmabuf<'a>( mut msg: &'a [u8], ) -> Result<(u32, u32, u32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_u32(&mut msg)?; let arg2 = parse_u32(&mut msg)?; let arg3 = parse_u32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2, arg3)) } pub const OPCODE_ZWLR_SCREENCOPY_FRAME_V1_LINUX_DMABUF: MethodId = MethodId::Event(5); pub fn write_evt_zwlr_screencopy_frame_v1_buffer_done(dst: &mut &mut [u8], for_id: ObjId) { let l = length_evt_zwlr_screencopy_frame_v1_buffer_done(); write_header(dst, for_id, l, 6, 0); } pub fn length_evt_zwlr_screencopy_frame_v1_buffer_done() -> usize { 8 } pub fn parse_evt_zwlr_screencopy_frame_v1_buffer_done<'a>( msg: &'a [u8], ) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_ZWLR_SCREENCOPY_FRAME_V1_BUFFER_DONE: MethodId = MethodId::Event(6); const DATA_ZWLR_SCREENCOPY_FRAME_V1: WaylandData = WaylandData { name: "zwlr_screencopy_frame_v1", evts: &[ WaylandMethod { name: "buffer", sig: &[Uint, Uint, Uint, Uint], destructor: false, }, WaylandMethod { name: "flags", sig: &[Uint], destructor: false, }, WaylandMethod { name: "ready", sig: &[Uint, Uint, Uint], destructor: false, }, WaylandMethod { name: "failed", sig: &[], destructor: false, }, WaylandMethod { name: "damage", sig: &[Uint, Uint, Uint, Uint], destructor: false, }, WaylandMethod { name: "linux_dmabuf", sig: &[Uint, Uint, Uint], destructor: false, }, WaylandMethod { name: "buffer_done", sig: &[], destructor: false, }, ], reqs: &[ WaylandMethod { name: "copy", sig: &[Object(WlBuffer)], destructor: false, }, WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "copy_with_damage", sig: &[Object(WlBuffer)], destructor: false, }, ], version: 3, }; pub const ZWLR_SCREENCOPY_FRAME_V1: &[u8] = DATA_ZWLR_SCREENCOPY_FRAME_V1.name.as_bytes(); pub fn write_req_xdg_wm_base_get_xdg_surface( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, surface: ObjId, ) { let l = length_req_xdg_wm_base_get_xdg_surface(); write_header(dst, for_id, l, 2, 0); write_obj(dst, id); write_obj(dst, surface); } pub fn length_req_xdg_wm_base_get_xdg_surface() -> usize { 16 } pub fn parse_req_xdg_wm_base_get_xdg_surface<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, ObjId), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_XDG_WM_BASE_GET_XDG_SURFACE: MethodId = MethodId::Request(2); const DATA_XDG_WM_BASE: WaylandData = WaylandData { name: "xdg_wm_base", evts: &[WaylandMethod { name: "ping", sig: &[Uint], destructor: false, }], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "create_positioner", sig: &[NewId(XdgPositioner)], destructor: false, }, WaylandMethod { name: "get_xdg_surface", sig: &[NewId(XdgSurface), Object(WlSurface)], destructor: false, }, WaylandMethod { name: "pong", sig: &[Uint], destructor: false, }, ], version: 6, }; pub const XDG_WM_BASE: &[u8] = DATA_XDG_WM_BASE.name.as_bytes(); const DATA_XDG_POSITIONER: WaylandData = WaylandData { name: "xdg_positioner", evts: &[], reqs: &[ WaylandMethod { name: 
"destroy", sig: &[], destructor: true, }, WaylandMethod { name: "set_size", sig: &[Int, Int], destructor: false, }, WaylandMethod { name: "set_anchor_rect", sig: &[Int, Int, Int, Int], destructor: false, }, WaylandMethod { name: "set_anchor", sig: &[Uint], destructor: false, }, WaylandMethod { name: "set_gravity", sig: &[Uint], destructor: false, }, WaylandMethod { name: "set_constraint_adjustment", sig: &[Uint], destructor: false, }, WaylandMethod { name: "set_offset", sig: &[Int, Int], destructor: false, }, WaylandMethod { name: "set_reactive", sig: &[], destructor: false, }, WaylandMethod { name: "set_parent_size", sig: &[Int, Int], destructor: false, }, WaylandMethod { name: "set_parent_configure", sig: &[Uint], destructor: false, }, ], version: 6, }; pub const XDG_POSITIONER: &[u8] = DATA_XDG_POSITIONER.name.as_bytes(); pub fn write_req_xdg_surface_get_toplevel(dst: &mut &mut [u8], for_id: ObjId, id: ObjId) { let l = length_req_xdg_surface_get_toplevel(); write_header(dst, for_id, l, 1, 0); write_obj(dst, id); } pub fn length_req_xdg_surface_get_toplevel() -> usize { 12 } pub fn parse_req_xdg_surface_get_toplevel<'a>(mut msg: &'a [u8]) -> Result { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_XDG_SURFACE_GET_TOPLEVEL: MethodId = MethodId::Request(1); const DATA_XDG_SURFACE: WaylandData = WaylandData { name: "xdg_surface", evts: &[WaylandMethod { name: "configure", sig: &[Uint], destructor: false, }], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "get_toplevel", sig: &[NewId(XdgToplevel)], destructor: false, }, WaylandMethod { name: "get_popup", sig: &[NewId(XdgPopup), Object(XdgSurface), Object(XdgPositioner)], destructor: false, }, WaylandMethod { name: "set_window_geometry", sig: &[Int, Int, Int, Int], destructor: false, }, WaylandMethod { name: "ack_configure", sig: &[Uint], destructor: false, }, ], version: 6, }; pub const XDG_SURFACE: &[u8] = DATA_XDG_SURFACE.name.as_bytes(); pub fn write_req_xdg_toplevel_set_title(dst: &mut &mut [u8], for_id: ObjId, title: &[u8]) { let l = length_req_xdg_toplevel_set_title(title.len()); write_header(dst, for_id, l, 2, 0); write_string(dst, Some(title)); } pub fn length_req_xdg_toplevel_set_title(title_len: usize) -> usize { let mut v = 8; v += length_string(title_len); v } pub fn parse_req_xdg_toplevel_set_title<'a>(mut msg: &'a [u8]) -> Result<&'a [u8], &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_string(&mut msg)?.ok_or(PARSE_ERROR)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_XDG_TOPLEVEL_SET_TITLE: MethodId = MethodId::Request(2); const DATA_XDG_TOPLEVEL: WaylandData = WaylandData { name: "xdg_toplevel", evts: &[ WaylandMethod { name: "configure", sig: &[Int, Int, Array], destructor: false, }, WaylandMethod { name: "close", sig: &[], destructor: false, }, WaylandMethod { name: "configure_bounds", sig: &[Int, Int], destructor: false, }, WaylandMethod { name: "wm_capabilities", sig: &[Array], destructor: false, }, ], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "set_parent", sig: &[Object(XdgToplevel)], destructor: false, }, WaylandMethod { name: "set_title", sig: &[String], destructor: false, }, WaylandMethod { name: "set_app_id", sig: &[String], destructor: false, }, WaylandMethod { name: "show_window_menu", sig: &[Object(WlSeat), Uint, Int, Int], destructor: false, }, WaylandMethod 
{ name: "move", sig: &[Object(WlSeat), Uint], destructor: false, }, WaylandMethod { name: "resize", sig: &[Object(WlSeat), Uint, Uint], destructor: false, }, WaylandMethod { name: "set_max_size", sig: &[Int, Int], destructor: false, }, WaylandMethod { name: "set_min_size", sig: &[Int, Int], destructor: false, }, WaylandMethod { name: "set_maximized", sig: &[], destructor: false, }, WaylandMethod { name: "unset_maximized", sig: &[], destructor: false, }, WaylandMethod { name: "set_fullscreen", sig: &[Object(WlOutput)], destructor: false, }, WaylandMethod { name: "unset_fullscreen", sig: &[], destructor: false, }, WaylandMethod { name: "set_minimized", sig: &[], destructor: false, }, ], version: 6, }; pub const XDG_TOPLEVEL: &[u8] = DATA_XDG_TOPLEVEL.name.as_bytes(); const DATA_XDG_POPUP: WaylandData = WaylandData { name: "xdg_popup", evts: &[ WaylandMethod { name: "configure", sig: &[Int, Int, Int, Int], destructor: false, }, WaylandMethod { name: "popup_done", sig: &[], destructor: false, }, WaylandMethod { name: "repositioned", sig: &[Uint], destructor: false, }, ], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "grab", sig: &[Object(WlSeat), Uint], destructor: false, }, WaylandMethod { name: "reposition", sig: &[Object(XdgPositioner), Uint], destructor: false, }, ], version: 6, }; pub const XDG_POPUP: &[u8] = DATA_XDG_POPUP.name.as_bytes(); pub fn write_req_xdg_toplevel_icon_manager_v1_create_icon( dst: &mut &mut [u8], for_id: ObjId, id: ObjId, ) { let l = length_req_xdg_toplevel_icon_manager_v1_create_icon(); write_header(dst, for_id, l, 1, 0); write_obj(dst, id); } pub fn length_req_xdg_toplevel_icon_manager_v1_create_icon() -> usize { 12 } pub fn parse_req_xdg_toplevel_icon_manager_v1_create_icon<'a>( mut msg: &'a [u8], ) -> Result { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_XDG_TOPLEVEL_ICON_MANAGER_V1_CREATE_ICON: MethodId = MethodId::Request(1); pub fn write_evt_xdg_toplevel_icon_manager_v1_icon_size( dst: &mut &mut [u8], for_id: ObjId, size: i32, ) { let l = length_evt_xdg_toplevel_icon_manager_v1_icon_size(); write_header(dst, for_id, l, 0, 0); write_i32(dst, size); } pub fn length_evt_xdg_toplevel_icon_manager_v1_icon_size() -> usize { 12 } pub fn parse_evt_xdg_toplevel_icon_manager_v1_icon_size<'a>( mut msg: &'a [u8], ) -> Result { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_i32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok(arg1) } pub const OPCODE_XDG_TOPLEVEL_ICON_MANAGER_V1_ICON_SIZE: MethodId = MethodId::Event(0); pub fn write_evt_xdg_toplevel_icon_manager_v1_done(dst: &mut &mut [u8], for_id: ObjId) { let l = length_evt_xdg_toplevel_icon_manager_v1_done(); write_header(dst, for_id, l, 1, 0); } pub fn length_evt_xdg_toplevel_icon_manager_v1_done() -> usize { 8 } pub fn parse_evt_xdg_toplevel_icon_manager_v1_done<'a>(msg: &'a [u8]) -> Result<(), &'static str> { if msg.len() != 8 { return Err(PARSE_ERROR); } Ok(()) } pub const OPCODE_XDG_TOPLEVEL_ICON_MANAGER_V1_DONE: MethodId = MethodId::Event(1); const DATA_XDG_TOPLEVEL_ICON_MANAGER_V1: WaylandData = WaylandData { name: "xdg_toplevel_icon_manager_v1", evts: &[ WaylandMethod { name: "icon_size", sig: &[Int], destructor: false, }, WaylandMethod { name: "done", sig: &[], destructor: false, }, ], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "create_icon", sig: &[NewId(XdgToplevelIconV1)], 
destructor: false, }, WaylandMethod { name: "set_icon", sig: &[Object(XdgToplevel), Object(XdgToplevelIconV1)], destructor: false, }, ], version: 1, }; pub const XDG_TOPLEVEL_ICON_MANAGER_V1: &[u8] = DATA_XDG_TOPLEVEL_ICON_MANAGER_V1.name.as_bytes(); pub fn write_req_xdg_toplevel_icon_v1_add_buffer( dst: &mut &mut [u8], for_id: ObjId, buffer: ObjId, scale: i32, ) { let l = length_req_xdg_toplevel_icon_v1_add_buffer(); write_header(dst, for_id, l, 2, 0); write_obj(dst, buffer); write_i32(dst, scale); } pub fn length_req_xdg_toplevel_icon_v1_add_buffer() -> usize { 16 } pub fn parse_req_xdg_toplevel_icon_v1_add_buffer<'a>( mut msg: &'a [u8], ) -> Result<(ObjId, i32), &'static str> { msg = msg.get(8..).ok_or(PARSE_ERROR)?; let arg1 = parse_obj(&mut msg)?; let arg2 = parse_i32(&mut msg)?; if !msg.is_empty() { return Err(PARSE_ERROR); } Ok((arg1, arg2)) } pub const OPCODE_XDG_TOPLEVEL_ICON_V1_ADD_BUFFER: MethodId = MethodId::Request(2); const DATA_XDG_TOPLEVEL_ICON_V1: WaylandData = WaylandData { name: "xdg_toplevel_icon_v1", evts: &[], reqs: &[ WaylandMethod { name: "destroy", sig: &[], destructor: true, }, WaylandMethod { name: "set_name", sig: &[String], destructor: false, }, WaylandMethod { name: "add_buffer", sig: &[Object(WlBuffer), Int], destructor: false, }, ], version: 1, }; pub const XDG_TOPLEVEL_ICON_V1: &[u8] = DATA_XDG_TOPLEVEL_ICON_V1.name.as_bytes(); #[repr(u8)] #[derive(Debug, Clone, Copy)] pub enum WaylandInterface { ExtDataControlDeviceV1 = 0, ExtDataControlManagerV1 = 1, ExtDataControlOfferV1 = 2, ExtDataControlSourceV1 = 3, ExtForeignToplevelHandleV1 = 4, ExtForeignToplevelImageCaptureSourceManagerV1 = 5, ExtForeignToplevelListV1 = 6, ExtImageCaptureSourceV1 = 7, ExtImageCopyCaptureCursorSessionV1 = 8, ExtImageCopyCaptureFrameV1 = 9, ExtImageCopyCaptureManagerV1 = 10, ExtImageCopyCaptureSessionV1 = 11, ExtOutputImageCaptureSourceManagerV1 = 12, GtkPrimarySelectionDevice = 13, GtkPrimarySelectionDeviceManager = 14, GtkPrimarySelectionOffer = 15, GtkPrimarySelectionSource = 16, WlBuffer = 17, WlCallback = 18, WlCompositor = 19, WlDataDevice = 20, WlDataDeviceManager = 21, WlDataOffer = 22, WlDataSource = 23, WlDisplay = 24, WlDrm = 25, WlKeyboard = 26, WlOutput = 27, WlPointer = 28, WlRegion = 29, WlRegistry = 30, WlSeat = 31, WlShell = 32, WlShellSurface = 33, WlShm = 34, WlShmPool = 35, WlSubcompositor = 36, WlSubsurface = 37, WlSurface = 38, WlTouch = 39, WpColorManagementOutputV1 = 40, WpColorManagementSurfaceFeedbackV1 = 41, WpColorManagementSurfaceV1 = 42, WpColorManagerV1 = 43, WpCommitTimerV1 = 44, WpCommitTimingManagerV1 = 45, WpImageDescriptionCreatorIccV1 = 46, WpImageDescriptionCreatorParamsV1 = 47, WpImageDescriptionInfoV1 = 48, WpImageDescriptionV1 = 49, WpLinuxDrmSyncobjManagerV1 = 50, WpLinuxDrmSyncobjSurfaceV1 = 51, WpLinuxDrmSyncobjTimelineV1 = 52, WpPresentation = 53, WpPresentationFeedback = 54, WpSecurityContextManagerV1 = 55, WpSecurityContextV1 = 56, WpViewport = 57, WpViewporter = 58, XdgPopup = 59, XdgPositioner = 60, XdgSurface = 61, XdgToplevel = 62, XdgToplevelIconManagerV1 = 63, XdgToplevelIconV1 = 64, XdgWmBase = 65, ZwlrDataControlDeviceV1 = 66, ZwlrDataControlManagerV1 = 67, ZwlrDataControlOfferV1 = 68, ZwlrDataControlSourceV1 = 69, ZwlrExportDmabufFrameV1 = 70, ZwlrExportDmabufManagerV1 = 71, ZwlrGammaControlManagerV1 = 72, ZwlrGammaControlV1 = 73, ZwlrScreencopyFrameV1 = 74, ZwlrScreencopyManagerV1 = 75, ZwpInputMethodKeyboardGrabV2 = 76, ZwpInputMethodManagerV2 = 77, ZwpInputMethodV2 = 78, ZwpInputPopupSurfaceV2 = 79, ZwpLinuxBufferParamsV1 = 
80, ZwpLinuxDmabufFeedbackV1 = 81, ZwpLinuxDmabufV1 = 82, ZwpPrimarySelectionDeviceManagerV1 = 83, ZwpPrimarySelectionDeviceV1 = 84, ZwpPrimarySelectionOfferV1 = 85, ZwpPrimarySelectionSourceV1 = 86, ZwpVirtualKeyboardManagerV1 = 87, ZwpVirtualKeyboardV1 = 88, } impl TryFrom<u32> for WaylandInterface { type Error = (); fn try_from(v: u32) -> Result<Self, Self::Error> { Ok(match v { 0 => WaylandInterface::ExtDataControlDeviceV1, 1 => WaylandInterface::ExtDataControlManagerV1, 2 => WaylandInterface::ExtDataControlOfferV1, 3 => WaylandInterface::ExtDataControlSourceV1, 4 => WaylandInterface::ExtForeignToplevelHandleV1, 5 => WaylandInterface::ExtForeignToplevelImageCaptureSourceManagerV1, 6 => WaylandInterface::ExtForeignToplevelListV1, 7 => WaylandInterface::ExtImageCaptureSourceV1, 8 => WaylandInterface::ExtImageCopyCaptureCursorSessionV1, 9 => WaylandInterface::ExtImageCopyCaptureFrameV1, 10 => WaylandInterface::ExtImageCopyCaptureManagerV1, 11 => WaylandInterface::ExtImageCopyCaptureSessionV1, 12 => WaylandInterface::ExtOutputImageCaptureSourceManagerV1, 13 => WaylandInterface::GtkPrimarySelectionDevice, 14 => WaylandInterface::GtkPrimarySelectionDeviceManager, 15 => WaylandInterface::GtkPrimarySelectionOffer, 16 => WaylandInterface::GtkPrimarySelectionSource, 17 => WaylandInterface::WlBuffer, 18 => WaylandInterface::WlCallback, 19 => WaylandInterface::WlCompositor, 20 => WaylandInterface::WlDataDevice, 21 => WaylandInterface::WlDataDeviceManager, 22 => WaylandInterface::WlDataOffer, 23 => WaylandInterface::WlDataSource, 24 => WaylandInterface::WlDisplay, 25 => WaylandInterface::WlDrm, 26 => WaylandInterface::WlKeyboard, 27 => WaylandInterface::WlOutput, 28 => WaylandInterface::WlPointer, 29 => WaylandInterface::WlRegion, 30 => WaylandInterface::WlRegistry, 31 => WaylandInterface::WlSeat, 32 => WaylandInterface::WlShell, 33 => WaylandInterface::WlShellSurface, 34 => WaylandInterface::WlShm, 35 => WaylandInterface::WlShmPool, 36 => WaylandInterface::WlSubcompositor, 37 => WaylandInterface::WlSubsurface, 38 => WaylandInterface::WlSurface, 39 => WaylandInterface::WlTouch, 40 => WaylandInterface::WpColorManagementOutputV1, 41 => WaylandInterface::WpColorManagementSurfaceFeedbackV1, 42 => WaylandInterface::WpColorManagementSurfaceV1, 43 => WaylandInterface::WpColorManagerV1, 44 => WaylandInterface::WpCommitTimerV1, 45 => WaylandInterface::WpCommitTimingManagerV1, 46 => WaylandInterface::WpImageDescriptionCreatorIccV1, 47 => WaylandInterface::WpImageDescriptionCreatorParamsV1, 48 => WaylandInterface::WpImageDescriptionInfoV1, 49 => WaylandInterface::WpImageDescriptionV1, 50 => WaylandInterface::WpLinuxDrmSyncobjManagerV1, 51 => WaylandInterface::WpLinuxDrmSyncobjSurfaceV1, 52 => WaylandInterface::WpLinuxDrmSyncobjTimelineV1, 53 => WaylandInterface::WpPresentation, 54 => WaylandInterface::WpPresentationFeedback, 55 => WaylandInterface::WpSecurityContextManagerV1, 56 => WaylandInterface::WpSecurityContextV1, 57 => WaylandInterface::WpViewport, 58 => WaylandInterface::WpViewporter, 59 => WaylandInterface::XdgPopup, 60 => WaylandInterface::XdgPositioner, 61 => WaylandInterface::XdgSurface, 62 => WaylandInterface::XdgToplevel, 63 => WaylandInterface::XdgToplevelIconManagerV1, 64 => WaylandInterface::XdgToplevelIconV1, 65 => WaylandInterface::XdgWmBase, 66 => WaylandInterface::ZwlrDataControlDeviceV1, 67 => WaylandInterface::ZwlrDataControlManagerV1, 68 => WaylandInterface::ZwlrDataControlOfferV1, 69 => WaylandInterface::ZwlrDataControlSourceV1, 70 => WaylandInterface::ZwlrExportDmabufFrameV1, 71 =>
WaylandInterface::ZwlrExportDmabufManagerV1, 72 => WaylandInterface::ZwlrGammaControlManagerV1, 73 => WaylandInterface::ZwlrGammaControlV1, 74 => WaylandInterface::ZwlrScreencopyFrameV1, 75 => WaylandInterface::ZwlrScreencopyManagerV1, 76 => WaylandInterface::ZwpInputMethodKeyboardGrabV2, 77 => WaylandInterface::ZwpInputMethodManagerV2, 78 => WaylandInterface::ZwpInputMethodV2, 79 => WaylandInterface::ZwpInputPopupSurfaceV2, 80 => WaylandInterface::ZwpLinuxBufferParamsV1, 81 => WaylandInterface::ZwpLinuxDmabufFeedbackV1, 82 => WaylandInterface::ZwpLinuxDmabufV1, 83 => WaylandInterface::ZwpPrimarySelectionDeviceManagerV1, 84 => WaylandInterface::ZwpPrimarySelectionDeviceV1, 85 => WaylandInterface::ZwpPrimarySelectionOfferV1, 86 => WaylandInterface::ZwpPrimarySelectionSourceV1, 87 => WaylandInterface::ZwpVirtualKeyboardManagerV1, 88 => WaylandInterface::ZwpVirtualKeyboardV1, _ => return Err(()), }) } } pub const INTERFACE_TABLE: &[WaylandData] = &[ DATA_EXT_DATA_CONTROL_DEVICE_V1, DATA_EXT_DATA_CONTROL_MANAGER_V1, DATA_EXT_DATA_CONTROL_OFFER_V1, DATA_EXT_DATA_CONTROL_SOURCE_V1, DATA_EXT_FOREIGN_TOPLEVEL_HANDLE_V1, DATA_EXT_FOREIGN_TOPLEVEL_IMAGE_CAPTURE_SOURCE_MANAGER_V1, DATA_EXT_FOREIGN_TOPLEVEL_LIST_V1, DATA_EXT_IMAGE_CAPTURE_SOURCE_V1, DATA_EXT_IMAGE_COPY_CAPTURE_CURSOR_SESSION_V1, DATA_EXT_IMAGE_COPY_CAPTURE_FRAME_V1, DATA_EXT_IMAGE_COPY_CAPTURE_MANAGER_V1, DATA_EXT_IMAGE_COPY_CAPTURE_SESSION_V1, DATA_EXT_OUTPUT_IMAGE_CAPTURE_SOURCE_MANAGER_V1, DATA_GTK_PRIMARY_SELECTION_DEVICE, DATA_GTK_PRIMARY_SELECTION_DEVICE_MANAGER, DATA_GTK_PRIMARY_SELECTION_OFFER, DATA_GTK_PRIMARY_SELECTION_SOURCE, DATA_WL_BUFFER, DATA_WL_CALLBACK, DATA_WL_COMPOSITOR, DATA_WL_DATA_DEVICE, DATA_WL_DATA_DEVICE_MANAGER, DATA_WL_DATA_OFFER, DATA_WL_DATA_SOURCE, DATA_WL_DISPLAY, DATA_WL_DRM, DATA_WL_KEYBOARD, DATA_WL_OUTPUT, DATA_WL_POINTER, DATA_WL_REGION, DATA_WL_REGISTRY, DATA_WL_SEAT, DATA_WL_SHELL, DATA_WL_SHELL_SURFACE, DATA_WL_SHM, DATA_WL_SHM_POOL, DATA_WL_SUBCOMPOSITOR, DATA_WL_SUBSURFACE, DATA_WL_SURFACE, DATA_WL_TOUCH, DATA_WP_COLOR_MANAGEMENT_OUTPUT_V1, DATA_WP_COLOR_MANAGEMENT_SURFACE_FEEDBACK_V1, DATA_WP_COLOR_MANAGEMENT_SURFACE_V1, DATA_WP_COLOR_MANAGER_V1, DATA_WP_COMMIT_TIMER_V1, DATA_WP_COMMIT_TIMING_MANAGER_V1, DATA_WP_IMAGE_DESCRIPTION_CREATOR_ICC_V1, DATA_WP_IMAGE_DESCRIPTION_CREATOR_PARAMS_V1, DATA_WP_IMAGE_DESCRIPTION_INFO_V1, DATA_WP_IMAGE_DESCRIPTION_V1, DATA_WP_LINUX_DRM_SYNCOBJ_MANAGER_V1, DATA_WP_LINUX_DRM_SYNCOBJ_SURFACE_V1, DATA_WP_LINUX_DRM_SYNCOBJ_TIMELINE_V1, DATA_WP_PRESENTATION, DATA_WP_PRESENTATION_FEEDBACK, DATA_WP_SECURITY_CONTEXT_MANAGER_V1, DATA_WP_SECURITY_CONTEXT_V1, DATA_WP_VIEWPORT, DATA_WP_VIEWPORTER, DATA_XDG_POPUP, DATA_XDG_POSITIONER, DATA_XDG_SURFACE, DATA_XDG_TOPLEVEL, DATA_XDG_TOPLEVEL_ICON_MANAGER_V1, DATA_XDG_TOPLEVEL_ICON_V1, DATA_XDG_WM_BASE, DATA_ZWLR_DATA_CONTROL_DEVICE_V1, DATA_ZWLR_DATA_CONTROL_MANAGER_V1, DATA_ZWLR_DATA_CONTROL_OFFER_V1, DATA_ZWLR_DATA_CONTROL_SOURCE_V1, DATA_ZWLR_EXPORT_DMABUF_FRAME_V1, DATA_ZWLR_EXPORT_DMABUF_MANAGER_V1, DATA_ZWLR_GAMMA_CONTROL_MANAGER_V1, DATA_ZWLR_GAMMA_CONTROL_V1, DATA_ZWLR_SCREENCOPY_FRAME_V1, DATA_ZWLR_SCREENCOPY_MANAGER_V1, DATA_ZWP_INPUT_METHOD_KEYBOARD_GRAB_V2, DATA_ZWP_INPUT_METHOD_MANAGER_V2, DATA_ZWP_INPUT_METHOD_V2, DATA_ZWP_INPUT_POPUP_SURFACE_V2, DATA_ZWP_LINUX_BUFFER_PARAMS_V1, DATA_ZWP_LINUX_DMABUF_FEEDBACK_V1, DATA_ZWP_LINUX_DMABUF_V1, DATA_ZWP_PRIMARY_SELECTION_DEVICE_MANAGER_V1, DATA_ZWP_PRIMARY_SELECTION_DEVICE_V1, DATA_ZWP_PRIMARY_SELECTION_OFFER_V1, DATA_ZWP_PRIMARY_SELECTION_SOURCE_V1, 
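/* Editorial sketch: the INTERFACE_TABLE above is ordered to match the
WaylandInterface discriminants, so interface metadata is a direct index, e.g.:

let intf = WaylandInterface::WlOutput; // discriminant 27
assert_eq!(INTERFACE_TABLE[intf as usize].name, "wl_output");
*/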
DATA_ZWP_VIRTUAL_KEYBOARD_MANAGER_V1, DATA_ZWP_VIRTUAL_KEYBOARD_V1, ]; waypipe-v0.10.5/tests/000077500000000000000000000000001507071711100146035ustar00rootroot00000000000000waypipe-v0.10.5/tests/proto.rs000066400000000000000000000100461507071711100163150ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-3.0-or-later */ /*! Protocol tests using the `test_proto` test runner */ #![cfg(feature = "test_proto")] use std::ffi::OsStr; use std::os::unix::ffi::OsStrExt; use std::process::{Command, ExitCode, Stdio}; /** Run the named test, print its output, and return an appropriate exit code */ fn do_test(name: &str) -> ExitCode { let waypipe_bin = env!("CARGO_BIN_EXE_waypipe"); let test_proto_bin = env!("CARGO_BIN_EXE_test_proto"); let cross_runner = std::env::var_os("CROSS_TARGET_RUNNER"); let mut args: Vec<&OsStr> = Vec::new(); let cx: &OsStr = cross_runner.as_deref().unwrap_or(OsStr::new("")); /* note: CROSS_TARGET_RUNNER is typically something like 'CROSS_TARGET_RUNNER=/linux-runner aarch64' */ for chunk in cx.as_encoded_bytes().split(|x| *x == b' ') { if !chunk.is_empty() { args.push(OsStr::from_bytes(chunk)); } } args.push(OsStr::new(test_proto_bin)); args.push(OsStr::new("--exact")); args.push(OsStr::new(waypipe_bin)); args.push(OsStr::new(waypipe_bin)); args.push(OsStr::new(name)); let mut command = Command::new(args[0]); command .args(&args[1..]) .stdin(Stdio::null()) .stdout(Stdio::piped()) .stderr(Stdio::piped()); let child = match command.spawn() { Err(x) => { println!("Error when running test_proto: {:?}", x); return ExitCode::FAILURE; } Ok(c) => c, }; let output = match child.wait_with_output() { Err(x) => { println!("Error when waiting for test_proto to complete: {:?}", x); return ExitCode::FAILURE; } Ok(output) => output, }; /* Write with println! so Rust's test framework properly captures this */ println!("Test stdout ({} bytes):", output.stdout.len()); println!("{}", String::from_utf8_lossy(&output.stdout)); /* test_proto is not expected to write anything to stderr; print it just in case */ println!("Test stderr ({} bytes):", output.stderr.len()); println!("{}", String::from_utf8_lossy(&output.stderr)); match output.status.code() { Some(0) => { println!("Test {} passed.", name); ExitCode::SUCCESS } Some(77) => { println!("Test {} was skipped.", name); ExitCode::SUCCESS } Some(x) => { println!("Test {} failed (exit code {}).", name, x); ExitCode::FAILURE } None => { println!("Test {} failed (no exit code).", name); ExitCode::FAILURE } } } macro_rules! define_test { ($x:ident) => { #[test] fn $x() -> ExitCode { do_test(stringify!($x)) } }; } mod proto { use crate::{do_test, ExitCode}; define_test! {basic} define_test! {base_wire} define_test! {commit_timing} define_test! {damage_efficiency} define_test! {flip_damage} define_test! {gamma_control} define_test! {keymap} define_test! {icc} define_test! {many_fds} define_test! {object_collision} define_test! {oversized} define_test! {pipe_write} define_test! {presentation_time} define_test! {rotating_damage} define_test! {screencopy_shm_ext} define_test! {screencopy_shm_wlr} define_test! {shm_buffer} define_test! {shm_damage} define_test! {shm_extend} define_test! {title_prefix} define_test! {toplevel_icon} define_test! {viewporter_damage} #[cfg(feature = "dmabuf")] define_test! {dmabuf} #[cfg(feature = "dmabuf")] define_test! {dmabuf_damage} #[cfg(feature = "dmabuf")] define_test! {dmabuf_feedback_table} #[cfg(feature = "dmabuf")] define_test! {dmabuf_pre_v4} #[cfg(feature = "video")] define_test! 
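/* For reference: each define_test!{name} invocation in this module expands,
per the macro_rules above, to `#[test] fn name() -> ExitCode { do_test("name") }`,
so every protocol test is an ordinary Rust test that shells out to the
test_proto runner. */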
{dmavid_h264} #[cfg(feature = "video")] define_test! {dmavid_vp9} #[cfg(feature = "dmabuf")] define_test! {explicit_sync} #[cfg(feature = "dmabuf")] define_test! {screencopy_dmabuf_ext} #[cfg(feature = "dmabuf")] define_test! {screencopy_dmabuf_wlr} } waypipe-v0.10.5/waypipe-c/000077500000000000000000000000001507071711100153375ustar00rootroot00000000000000waypipe-v0.10.5/waypipe-c/bench.c000066400000000000000000000306131507071711100165650ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "shadow.h" #include "util.h" #include #include #include #include #include #include struct compression_range { enum compression_mode mode; int min_val; int max_val; const char *desc; }; static const struct compression_range comp_ranges[] = { {COMP_NONE, 0, 0, "none"}, #ifdef HAS_LZ4 {COMP_LZ4, -10, 16, "lz4"}, #endif #ifdef HAS_ZSTD {COMP_ZSTD, -10, 22, "zstd"}, #endif }; static void *create_text_like_image(size_t size) { uint8_t *data = malloc(size); if (!data) { return NULL; } for (size_t i = 0; i < size; i++) { size_t step = i / 203 - i / 501; bool s = step % 2 == 0; data[i] = (uint8_t)(s ? ((step >> 1) & 0x2) + 0xfe : 0x00); } // int f = open("1.rgb", O_RDONLY); // read(f, data, size); // close(f); return data; } static void *create_video_like_image(size_t size) { uint8_t *data = malloc(size); if (!data) { return NULL; } for (size_t i = 0; i < size; i++) { /* primary sequence, with runs, but avoiding obvious repetition * then add fine grain, a main source of complexity in real * images */ uint32_t noise = (uint32_t)rand() % 2; data[i] = (uint8_t)(i + i / 101 + i / 33 + noise); } // int f = open("0.rgb", O_RDONLY); // read(f, data, size); // close(f); return data; } /** Create a shuffled variation of the original image. 
*/ static void perturb(void *data, size_t size) { uint8_t *bytes = (uint8_t *)data; for (int i = 0; i < 50; i++) { // TODO: avoid redundant motion, and make this very fast size_t low = (size_t)rand() % size; size_t high = (size_t)rand() % size; if (low >= high) { continue; } for (size_t k = 0; k < (high - low) / 2; k++) { uint8_t tmp = bytes[low + k]; bytes[low + k] = bytes[high - k]; bytes[high - k] = tmp; } } } struct bench_result { const struct compression_range *rng; int level; float comp_time, dcomp_time; }; static int float_compare(const void *a, const void *b) { float va = *(const float *)a; float vb = *(const float *)b; if (va < vb) return -1; if (va > vb) return 1; return 0; } static int compare_bench_result(const void *a, const void *b) { const struct bench_result *va = (const struct bench_result *)a; const struct bench_result *vb = (const struct bench_result *)b; if (va->comp_time < vb->comp_time) return -1; if (va->comp_time > vb->comp_time) return 1; return 0; } struct diff_comp_results { /* Compressed packet size, in bytes */ float packet_size; /* Time to construct compressed diff, in seconds */ float diffcomp_time; /* Diff size / buffer size */ float diff_frac; /* Compressed size / original size */ float comp_frac; }; static int compare_timespec(const struct timespec *a, const struct timespec *b) { if (a->tv_sec != b->tv_sec) return a->tv_sec < b->tv_sec ? -1 : 1; if (a->tv_nsec != b->tv_nsec) return a->tv_nsec < b->tv_nsec ? -1 : 1; return 0; } /* requires delta >= 0 */ static struct timespec timespec_add(struct timespec base, int64_t delta_ns) { struct timespec ret; ret.tv_sec = base.tv_sec + delta_ns / 1000000000LL; ret.tv_nsec = base.tv_nsec + delta_ns % 1000000000LL; /* tv_nsec must stay in [0, 999999999], so carry on >= one second */ if (ret.tv_nsec >= 1000000000LL) { ret.tv_nsec -= 1000000000LL; ret.tv_sec++; } return ret; } static int64_t timespec_sub(struct timespec a, struct timespec b) { return (a.tv_sec - b.tv_sec) * 1000000000LL + (a.tv_nsec - b.tv_nsec); } #define NSAMPLES 5 static struct bench_result run_sub_bench(bool first, const struct compression_range *rng, int level, float bandwidth_mBps, int n_worker_threads, unsigned int seed, bool text_like, size_t test_size, void *image) { /* Reset seed, so that all random image * perturbations are consistent between runs */ srand(seed); /* Setup a shadow structure */ struct thread_pool pool; setup_thread_pool(&pool, rng->mode, level, n_worker_threads); if (first) { printf("Running compression level benchmarks, assuming bandwidth=%g MB/s, with %d threads\n", bandwidth_mBps, pool.nthreads); } struct fd_translation_map map; setup_translation_map(&map, false); struct wmsg_open_file file_msg; file_msg.remote_id = 0; file_msg.file_size = (uint32_t)test_size; file_msg.size_and_type = transfer_header( sizeof(struct wmsg_open_file), WMSG_OPEN_FILE); struct render_data render; memset(&render, 0, sizeof(render)); render.disabled = true; render.drm_fd = 1; render.av_disabled = true; struct bytebuf msg = {.size = sizeof(struct wmsg_open_file), .data = (char *)&file_msg}; (void)apply_update(&map, &pool, &render, WMSG_OPEN_FILE, 0, &msg); struct shadow_fd *sfd = get_shadow_for_rid(&map, 0); int iter = 0; float samples[NSAMPLES]; float diff_frac[NSAMPLES], comp_frac[NSAMPLES]; for (; !shutdown_flag && iter < NSAMPLES; iter++) { /* Reset image state */ memcpy(sfd->mem_local, image, test_size); memcpy(sfd->mem_mirror, image, test_size); perturb(sfd->mem_local, test_size); sfd->is_dirty = true; damage_everything(&sfd->damage); /* Create transfer queue */ struct transfer_queue transfer_data;
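/* The loop below models a fixed-bandwidth link: each transfer block of
 * v.iov_len bytes is charged v.iov_len / (bandwidth_mBps * 1e6) seconds of
 * wire time; e.g. a 4 MB block at bandwidth_mBps=100 advances
 * next_write_time by 40 ms. */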
memset(&transfer_data, 0, sizeof(struct transfer_queue)); pthread_mutex_init(&transfer_data.async_recv_queue.lock, NULL); struct timespec t0, t1; clock_gettime(CLOCK_REALTIME, &t0); collect_update(&pool, sfd, &transfer_data, false); start_parallel_work(&pool, &transfer_data.async_recv_queue); /* A restricted main loop, in which transfer blocks are * instantaneously consumed when previous blocks have been * 'sent' */ struct timespec next_write_time = {.tv_sec = 0, .tv_nsec = 0}; size_t total_wire_size = 0; size_t net_diff_size = 0; while (1) { uint8_t flush[64]; (void)read(pool.selfpipe_r, flush, sizeof(flush)); /* Run tasks on main thread, just like the main loop */ bool done = false; struct task_data task; bool has_task = request_work_task(&pool, &task, &done); if (has_task) { run_task(&task, &pool.threads[0]); pthread_mutex_lock(&pool.work_mutex); pool.tasks_in_progress--; pthread_mutex_unlock(&pool.work_mutex); } struct timespec cur_time; clock_gettime(CLOCK_REALTIME, &cur_time); if (compare_timespec(&next_write_time, &cur_time) < 0) { transfer_load_async(&transfer_data); if (transfer_data.start < transfer_data.end) { struct iovec v = transfer_data.vecs[transfer_data .start++]; float delay_s = (float)v.iov_len / (bandwidth_mBps * 1e6f); total_wire_size += v.iov_len; /* Only one message type will be * produced for diffs */ struct wmsg_buffer_diff *header = v.iov_base; net_diff_size += (size_t)(header->diff_size + header->ntrailing); /* Advance timer for next receipt */ int64_t delay_ns = (int64_t)(delay_s * 1e9f); next_write_time = timespec_add( cur_time, delay_ns); } } else { /* Very short delay, for poll loop */ bool tasks_remaining = false; pthread_mutex_lock(&pool.work_mutex); tasks_remaining = pool.stack_count > 0; pthread_mutex_unlock(&pool.work_mutex); struct timespec delay_time; delay_time.tv_sec = 0; delay_time.tv_nsec = 10000; if (!tasks_remaining) { int64_t nsecs_left = timespec_sub( next_write_time, cur_time); if (nsecs_left > 1000000000LL) { nsecs_left = 1000000000LL; } if (nsecs_left > delay_time.tv_nsec) { delay_time.tv_nsec = nsecs_left; } } nanosleep(&delay_time, NULL); } bool all_sent = false; all_sent = transfer_data.start == transfer_data.end; if (done && all_sent) { break; } } finish_update(sfd); cleanup_transfer_queue(&transfer_data); clock_gettime(CLOCK_REALTIME, &t1); struct diff_comp_results r; r.packet_size = (float)total_wire_size; r.diffcomp_time = 1.0f * (float)(t1.tv_sec - t0.tv_sec) + 1e-9f * (float)(t1.tv_nsec - t0.tv_nsec); r.comp_frac = r.packet_size / (float)net_diff_size; r.diff_frac = (float)net_diff_size / (float)test_size; samples[iter] = r.diffcomp_time; diff_frac[iter] = r.diff_frac; comp_frac[iter] = r.comp_frac; } /* Cleanup sfd and helper structures */ cleanup_thread_pool(&pool); cleanup_translation_map(&map); qsort(samples, (size_t)iter, sizeof(float), float_compare); qsort(diff_frac, (size_t)iter, sizeof(float), float_compare); qsort(comp_frac, (size_t)iter, sizeof(float), float_compare); /* Using order statistics, because moment statistics a) require * libm; b) don't work well with outliers. */ float median = samples[iter / 2]; float hiqr = (samples[(iter * 3) / 4] - samples[iter / 4]) / 2; float dmedian = diff_frac[iter / 2]; float dhiqr = (diff_frac[(iter * 3) / 4] - diff_frac[iter / 4]) / 2; float cmedian = comp_frac[iter / 2]; float chiqr = (comp_frac[(iter * 3) / 4] - comp_frac[iter / 4]) / 2; struct bench_result res; res.rng = rng; res.level = level; printf("%s, %s=%d: transfer %f+/-%f sec, diff %f+/-%f, comp %f+/-%f\n", text_like ? 
"txt" : "img", rng->desc, level, median, hiqr, dmedian, dhiqr, cmedian, chiqr); res.comp_time = median; res.dcomp_time = hiqr; return res; } int run_bench(float bandwidth_mBps, uint32_t test_size, int n_worker_threads) { /* 4MB test image - 1024x1024x4. Any smaller, and unrealistic caching * speedups may occur */ struct timespec tp; clock_gettime(CLOCK_REALTIME, &tp); srand((unsigned int)tp.tv_nsec); void *text_image = create_text_like_image(test_size); void *vid_image = create_video_like_image(test_size); if (!text_image || !vid_image) { free(text_image); free(vid_image); wp_error("Failed to allocate test images"); return EXIT_FAILURE; } /* Q: store an array of all the modes -> outputs */ // Then sort that array int ntests = 0; for (size_t c = 0; c < sizeof(comp_ranges) / sizeof(comp_ranges[0]); c++) { ntests += comp_ranges[c].max_val - comp_ranges[c].min_val + 1; } /* For the content, the mode is generally consistent */ struct bench_result *tresults = calloc((size_t)ntests, sizeof(struct bench_result)); struct bench_result *iresults = calloc((size_t)ntests, sizeof(struct bench_result)); int ntres = 0, nires = 0; for (int k = 0; k < 2; k++) { bool text_like = k == 0; int j = 0; for (size_t c = 0; !shutdown_flag && c < sizeof(comp_ranges) / sizeof(comp_ranges[0]); c++) { for (int lvl = comp_ranges[c].min_val; !shutdown_flag && lvl <= comp_ranges[c].max_val; lvl++) { struct bench_result res = run_sub_bench(j == 0, &comp_ranges[c], lvl, bandwidth_mBps, n_worker_threads, (unsigned int)tp.tv_nsec, text_like, test_size, text_like ? text_image : vid_image); if (text_like) { tresults[j++] = res; ntres++; } else { iresults[j++] = res; nires++; } } } } for (int k = 0; k < 2; k++) { bool text_like = k == 0; struct bench_result *results = text_like ? tresults : iresults; int nr = text_like ? ntres : nires; if (nr <= 0) { continue; } /* Print best recommendation */ qsort(results, (size_t)nr, sizeof(struct bench_result), compare_bench_result); struct bench_result best = results[0]; printf("%s, best compression level: \"%s=%d\", with %f+/-%f sec for sample transfer\n", text_like ? "Text heavy image" : "Photo-like image", best.rng->desc, best.level, best.comp_time, best.dcomp_time); } free(tresults); free(iresults); free(vid_image); free(text_image); return EXIT_SUCCESS; } waypipe-v0.10.5/waypipe-c/client.c000066400000000000000000000562471507071711100167770ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
*/ #include "main.h" #include #include #include #include #include #include #include #include #include #include #include static int check_conn_header(uint32_t header, const struct main_config *config, char *err, size_t err_size) { if ((header & CONN_FIXED_BIT) == 0 && (header & CONN_UNSET_BIT) != 0) { snprintf(err, err_size, "Waypipe client is rejecting connection header %08" PRIx32 "; it is either garbage or there was a wire protocol endianness mismatch.", header); return -1; } /* Skip the following checks if config is null * (i.e., called from reconnection loop) */ if (!config) { return 0; } /* Earlier versions strictly required a protocol version match; now * there is a protocol version negotiation where waypipe-server sends * its desired version, and if this is not the minimum, the * waypipe-client's first message in reply will acknowledge that * version. To ensure newer clients still work with older Waypipe (that * checked bits 16-31), the version field is now extracted from bits 3-6 * and 16-23. */ uint32_t version = (((header >> 16) & 0xff) << 4) | ((header >> 3) & 0xf); wp_debug("Waypipe server is requesting protocol version %u; using default version 16", version); /* For now, reject mismatches in compression format and video coding * setting, and print an error. Adopting whatever the server asks for * is a minor security issue -- e.g., video handling is a good target * for exploits, and compression can cost CPU time, especially if the * initial connection mechanism were to be expanded to allow setting * compression level. */ if ((header & CONN_COMPRESSION_MASK) == CONN_ZSTD_COMPRESSION) { if (config->compression != COMP_ZSTD) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client is configured for compression=%s, not the compression=ZSTD the Waypipe server expected", compression_mode_to_str( config->compression)); return -1; } } else if ((header & CONN_COMPRESSION_MASK) == CONN_LZ4_COMPRESSION) { if (config->compression != COMP_LZ4) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client is configured for compression=%s, not the compression=LZ4 the Waypipe server expected", compression_mode_to_str( config->compression)); return -1; } } else if ((header & CONN_COMPRESSION_MASK) == CONN_NO_COMPRESSION) { if (config->compression != COMP_NONE) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client is configured for compression=%s, not the compression=NONE the Waypipe server expected", compression_mode_to_str( config->compression)); return -1; } } else if ((header & CONN_COMPRESSION_MASK) != 0) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client is configured for compression=%s, not the unidentified compression type the Waypipe server expected", compression_mode_to_str(config->compression)); return -1; } if ((header & CONN_VIDEO_MASK) == CONN_VP9_VIDEO) { if (!config->video_if_possible) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client was not run with video encoding enabled, unlike Waypipe server"); return -1; } if (config->video_fmt != VIDEO_VP9) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client was not configured for the VP9 video coding format requested by the Waypipe server"); return -1; } } else if ((header & CONN_VIDEO_MASK) == CONN_H264_VIDEO) { if (!config->video_if_possible) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client was not run with video encoding enabled, unlike 
Waypipe server"); return -1; } if (config->video_fmt != VIDEO_H264) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client was not configured for the H264 video coding format requested by the Waypipe server"); return -1; } } else if ((header & CONN_VIDEO_MASK) == CONN_AV1_VIDEO) { if (!config->video_if_possible) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client was not run with video encoding enabled, unlike Waypipe server"); return -1; } if (config->video_fmt != VIDEO_AV1) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client was not configured for the AV1 video coding format requested by the Waypipe server"); return -1; } } else if ((header & CONN_VIDEO_MASK) == CONN_NO_VIDEO) { if (config->video_if_possible) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client has video encoding enabled, but Waypipe server does not"); return -1; } } else if ((header & CONN_VIDEO_MASK) != 0) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client was not configured for the unidentified video coding format requested by the Waypipe server"); return -1; } return 0; } static void apply_conn_header(uint32_t header, struct main_config *config) { if (header & CONN_NO_DMABUF_SUPPORT) { if (config) { config->no_gpu = true; } } // todo: consider allowing to disable video encoding } static void write_rejection_message(int channel_fd, char *msg) { char buf[512]; size_t len = print_wrapped_error(buf, sizeof(buf), msg); if (!len) { wp_error("Failed to print wrapped error for message of length %zu, not enough space", strlen(msg)); return; } ssize_t written = write(channel_fd, buf, len); if (written != (ssize_t)len) { wp_error("Failed to send rejection message, only %d bytes of %d written", (int)written, (int)len); } } static inline bool key_match( const uint32_t key1[static 3], const uint32_t key2[static 3]) { return key1[0] == key2[0] && key1[1] == key2[1] && key1[2] == key2[2]; } static int get_inherited_socket(const char *wayland_socket) { uint32_t val; if (parse_uint32(wayland_socket, &val) == -1 || ((int)val) < 0) { wp_error("Failed to parse \"%s\" (value of WAYLAND_SOCKET) as a nonnegative integer, exiting", wayland_socket); return -1; } int fd = (int)val; int flags = fcntl(fd, F_GETFL, 0); if (flags == -1 && errno == EBADF) { wp_error("The file descriptor WAYLAND_SOCKET=%d was invalid, exiting", fd); return -1; } return fd; } static int get_display_path(char *path, size_t max_len) { const char *display = getenv("WAYLAND_DISPLAY"); if (!display) { wp_error("WAYLAND_DISPLAY is not set, exiting"); return -1; } if (display[0] != '/') { const char *xdg_runtime_dir = getenv("XDG_RUNTIME_DIR"); if (!xdg_runtime_dir) { wp_error("XDG_RUNTIME_DIR is not set, exiting"); return -1; } if (multi_strcat(path, max_len, xdg_runtime_dir, "/", display, NULL) == 0) { wp_error("full WAYLAND_DISPLAY path '%s' is longer than %zu bytes, exiting", display, max_len); return -1; } } else { if (strlen(display) + 1 >= max_len) { wp_error("WAYLAND_DISPLAY='%s' is longer than %zu bytes, exiting", display, max_len); return -1; } strcpy(path, display); } return 0; } static int run_single_client_reconnector( int channelsock, int linkfd, struct connection_token conn_id) { int retcode = EXIT_SUCCESS; while (!shutdown_flag) { struct pollfd pf[2]; pf[0].fd = channelsock; pf[0].events = POLLIN; pf[0].revents = 0; pf[1].fd = linkfd; pf[1].events = 0; pf[1].revents = 0; int r = poll(pf, 2, -1); if (r == -1 && errno
== EINTR) { continue; } else if (r == -1) { retcode = EXIT_FAILURE; break; } else if (r == 0) { // Nothing to read continue; } if (pf[1].revents & POLLHUP) { /* Hang up, main thread has closed its link */ break; } if (!(pf[0].revents & POLLIN)) { continue; } int newclient = accept(channelsock, NULL, NULL); if (newclient == -1) { if (errno == EAGAIN || errno == EWOULDBLOCK) { // The wakeup may have been spurious continue; } wp_error("Connection failure: %s", strerror(errno)); retcode = EXIT_FAILURE; break; } wp_debug("Reconnection to oneshot client"); struct connection_token new_conn; memset(&new_conn, 0, sizeof(new_conn)); if (read(newclient, &new_conn.header, sizeof(new_conn.header)) != sizeof(new_conn.header)) { wp_error("Failed to get connection id header"); goto done; } if (check_conn_header(new_conn.header, NULL, NULL, 0) < 0) { goto done; } if (read(newclient, &new_conn.key, sizeof(new_conn.key)) != sizeof(new_conn.key)) { wp_error("Failed to get connection id key"); goto done; } if (!key_match(new_conn.key, conn_id.key)) { wp_error("Connection attempt with unmatched key"); goto done; } bool update = new_conn.header & CONN_UPDATE_BIT; if (!update) { wp_error("Connection token is missing update flag"); goto done; } if (send_one_fd(linkfd, newclient) == -1) { wp_error("Failed to send new connection fd to main process: %s", strerror(errno)); retcode = EXIT_FAILURE; checked_close(newclient); break; } done: checked_close(newclient); } checked_close(channelsock); checked_close(linkfd); return retcode; } static int run_single_client(int channelsock, pid_t *eol_pid, const struct main_config *config, int disp_fd) { /* To support reconnection attempts, this mode creates a child * reconnection watcher process, linked via socketpair */ int retcode = EXIT_SUCCESS; int chanclient = -1; struct connection_token conn_id; memset(&conn_id, 0, sizeof(conn_id)); while (!shutdown_flag) { int status = -1; if (wait_for_pid_and_clean(eol_pid, &status, WNOHANG, NULL)) { eol_pid = 0; // < in case eol_pid is recycled wp_debug("Child (ssh) died, exiting"); // Copy the exit code retcode = WEXITSTATUS(status); break; } struct pollfd cs; cs.fd = channelsock; cs.events = POLLIN; cs.revents = 0; int r = poll(&cs, 1, -1); if (r == -1) { if (errno == EINTR) { // If SIGCHLD, we will check the child.
// If SIGINT, the loop ends continue; } retcode = EXIT_FAILURE; break; } else if (r == 0) { // Nothing to read continue; } chanclient = accept(channelsock, NULL, NULL); if (chanclient == -1) { if (errno == EAGAIN || errno == EWOULDBLOCK) { // The wakeup may have been spurious continue; } wp_error("Connection failure: %s", strerror(errno)); retcode = EXIT_FAILURE; break; } char err_msg[512]; wp_debug("New connection to client"); if (read(chanclient, &conn_id.header, sizeof(conn_id.header)) != sizeof(conn_id.header)) { wp_error("Failed to get connection id header"); goto fail_cc; } if (check_conn_header(conn_id.header, config, err_msg, sizeof(err_msg)) < 0) { wp_error("%s", err_msg); write_rejection_message(chanclient, err_msg); goto fail_cc; } if (read(chanclient, &conn_id.key, sizeof(conn_id.key)) != sizeof(conn_id.key)) { wp_error("Failed to get connection id key"); goto fail_cc; } break; fail_cc: retcode = EXIT_FAILURE; checked_close(chanclient); chanclient = -1; break; } if (retcode == EXIT_FAILURE || shutdown_flag || chanclient == -1) { checked_close(channelsock); checked_close(disp_fd); return retcode; } if (conn_id.header & CONN_UPDATE_BIT) { wp_error("Initial connection token had update flag set"); checked_close(channelsock); checked_close(disp_fd); return retcode; } /* Fork a reconnection handler, only if the connection is * reconnectable/has a nonzero id */ int linkfds[2] = {-1, -1}; if (conn_id.header & CONN_RECONNECTABLE_BIT) { if (socketpair(AF_UNIX, SOCK_STREAM, 0, linkfds) == -1) { wp_error("Failed to create socketpair: %s", strerror(errno)); checked_close(chanclient); return EXIT_FAILURE; } pid_t reco_pid = fork(); if (reco_pid == -1) { wp_error("Fork failure: %s", strerror(errno)); checked_close(chanclient); return EXIT_FAILURE; } else if (reco_pid == 0) { if (linkfds[0] != -1) { checked_close(linkfds[0]); } checked_close(chanclient); checked_close(disp_fd); int rc = run_single_client_reconnector( channelsock, linkfds[1], conn_id); exit(rc); } checked_close(linkfds[1]); } checked_close(channelsock); struct main_config mod_config = *config; apply_conn_header(conn_id.header, &mod_config); return main_interface_loop( chanclient, disp_fd, linkfds[0], &mod_config, true); } void send_new_connection_fd( struct conn_map *connmap, uint32_t key[static 3], int new_fd) { for (int i = 0; i < connmap->count; i++) { if (key_match(connmap->data[i].token.key, key)) { if (send_one_fd(connmap->data[i].linkfd, new_fd) == -1) { wp_error("Failed to send new connection fd to subprocess: %s", strerror(errno)); } break; } } } static void handle_new_client_connection(int cwd_fd, struct pollfd *other_fds, int n_other_fds, int chanclient, struct conn_map *connmap, const struct main_config *config, const struct socket_path disp_path, const struct connection_token *conn_id) { bool reconnectable = conn_id->header & CONN_RECONNECTABLE_BIT; if (reconnectable && buf_ensure_size(connmap->count + 1, sizeof(struct conn_addr), &connmap->size, (void **)&connmap->data) == -1) { wp_error("Failed to allocate space to track connection"); goto fail_cc; } int linkfds[2] = {-1, -1}; if (reconnectable) { if (socketpair(AF_UNIX, SOCK_STREAM, 0, linkfds) == -1) { wp_error("Failed to create socketpair: %s", strerror(errno)); goto fail_cc; } } pid_t npid = fork(); if (npid == 0) { // Run forked process, with the only shared // state being the new channel socket for (int i = 0; i < n_other_fds; i++) { if (other_fds[i].fd != chanclient) { checked_close(other_fds[i].fd); } } if (reconnectable) { checked_close(linkfds[0]); } 
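/* Also close every reconnection link fd inherited via the parent's connection map in the loop below; after that, the child keeps only chanclient, its own link fd (linkfds[1]), and the display socket it is about to open. */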
for (int i = 0; i < connmap->count; i++) { checked_close(connmap->data[i].linkfd); } int display_fd = -1; if (connect_to_socket(cwd_fd, disp_path, NULL, &display_fd) == -1) { exit(EXIT_FAILURE); } checked_close(cwd_fd); struct main_config mod_config = *config; apply_conn_header(conn_id->header, &mod_config); int rc = main_interface_loop(chanclient, display_fd, linkfds[1], &mod_config, true); check_unclosed_fds(); exit(rc); } else if (npid == -1) { wp_error("Fork failure: %s", strerror(errno)); goto fail_ps; } // Remove connection from this process if (reconnectable) { checked_close(linkfds[1]); connmap->data[connmap->count++] = (struct conn_addr){.linkfd = linkfds[0], .token = *conn_id, .pid = npid}; } return; fail_ps: checked_close(linkfds[0]); fail_cc: checked_close(chanclient); return; } #define NUM_INCOMPLETE_CONNECTIONS 63 static void drop_incoming_connection(struct pollfd *fds, struct connection_token *tokens, uint8_t *bytes_read, int index, int incomplete) { checked_close(fds[index].fd); if (index != incomplete - 1) { size_t shift = (size_t)(incomplete - 1 - index); memmove(fds + index, fds + index + 1, sizeof(struct pollfd) * shift); memmove(tokens + index, tokens + index + 1, sizeof(struct connection_token) * shift); memmove(bytes_read + index, bytes_read + index + 1, sizeof(uint8_t) * shift); } memset(&fds[incomplete - 1], 0, sizeof(struct pollfd)); memset(&tokens[incomplete - 1], 0, sizeof(struct connection_token)); bytes_read[incomplete - 1] = 0; } static int run_multi_client(int cwd_fd, int channelsock, pid_t *eol_pid, const struct main_config *config, const struct socket_path disp_path) { struct conn_map connmap = {.data = NULL, .count = 0, .size = 0}; /* Keep track of the main socket, and all connections which have not * yet fully provided their connection token. If we run out of space, * the oldest incomplete connection gets dropped */ struct pollfd fds[NUM_INCOMPLETE_CONNECTIONS + 1]; struct connection_token tokens[NUM_INCOMPLETE_CONNECTIONS]; uint8_t bytes_read[NUM_INCOMPLETE_CONNECTIONS]; int incomplete = 0; memset(fds, 0, sizeof(fds)); memset(tokens, 0, sizeof(tokens)); memset(bytes_read, 0, sizeof(bytes_read)); fds[0].fd = channelsock; fds[0].events = POLLIN; fds[0].revents = 0; int retcode = EXIT_SUCCESS; while (!shutdown_flag) { int status = -1; if (wait_for_pid_and_clean( eol_pid, &status, WNOHANG, &connmap)) { wp_debug("Child (ssh) died, exiting"); // Copy the exit code retcode = WEXITSTATUS(status); break; } int r = poll(fds, 1 + (nfds_t)incomplete, -1); if (r == -1) { if (errno == EINTR) { // If SIGCHLD, we will check the child. 
// If SIGINT, the loop ends continue; } retcode = EXIT_FAILURE; break; } else if (r == 0) { // Nothing to read continue; } for (int i = 0; i < incomplete; i++) { if (!(fds[i + 1].revents & POLLIN)) { continue; } int cur_fd = fds[i + 1].fd; char *dest = ((char *)&tokens[i]) + bytes_read[i]; ssize_t s = read(cur_fd, dest, 16 - bytes_read[i]); if (s == -1) { wp_error("Failed to read from connection: %s", strerror(errno)); drop_incoming_connection(fds + 1, tokens, bytes_read, i, incomplete); incomplete--; continue; } else if (s == 0) { /* connection closed */ wp_error("Connection closed early"); drop_incoming_connection(fds + 1, tokens, bytes_read, i, incomplete); incomplete--; continue; } bytes_read[i] += (uint8_t)s; if (bytes_read[i] - (uint8_t)s < 4 && bytes_read[i] >= 4) { char err_msg[512]; /* Validate connection token header */ if (check_conn_header(tokens[i].header, config, err_msg, sizeof(err_msg)) < 0) { wp_error("%s", err_msg); write_rejection_message( cur_fd, err_msg); drop_incoming_connection(fds + 1, tokens, bytes_read, i, incomplete); incomplete--; continue; } } if (bytes_read[i] < 16) { continue; } /* Validate connection token key */ if (tokens[i].header & CONN_UPDATE_BIT) { send_new_connection_fd(&connmap, tokens[i].key, cur_fd); drop_incoming_connection(fds + 1, tokens, bytes_read, i, incomplete); incomplete--; continue; } /* Failures here are logged, but should not * affect this process' ability to e.g. handle * reconnections. */ handle_new_client_connection(cwd_fd, fds, 1 + incomplete, cur_fd, &connmap, config, disp_path, &tokens[i]); drop_incoming_connection(fds + 1, tokens, bytes_read, i, incomplete); incomplete--; } /* Process new connections second, to give incomplete * connections a chance to clear first */ if (fds[0].revents & POLLIN) { int chanclient = accept(channelsock, NULL, NULL); if (chanclient == -1) { if (errno == EAGAIN || errno == EWOULDBLOCK) { // The wakeup may have been spurious continue; } // should errors like econnaborted exit? wp_error("Connection failure: %s", strerror(errno)); retcode = EXIT_FAILURE; break; } wp_debug("New connection to client"); if (set_nonblocking(chanclient) == -1) { wp_error("Error making new connection nonblocking: %s", strerror(errno)); checked_close(chanclient); continue; } if (incomplete == NUM_INCOMPLETE_CONNECTIONS) { wp_error("Dropping oldest incomplete connection (out of %d)", NUM_INCOMPLETE_CONNECTIONS); drop_incoming_connection(fds + 1, tokens, bytes_read, 0, incomplete); incomplete--; } fds[1 + incomplete].fd = chanclient; fds[1 + incomplete].events = POLLIN; fds[1 + incomplete].revents = 0; memset(&tokens[incomplete], 0, sizeof(struct connection_token)); bytes_read[incomplete] = 0; incomplete++; } } for (int i = 0; i < incomplete; i++) { checked_close(fds[i + 1].fd); } for (int i = 0; i < connmap.count; i++) { checked_close(connmap.data[i].linkfd); } free(connmap.data); checked_close(channelsock); return retcode; } int run_client(int cwd_fd, const char *sock_folder_name, int sock_folder_fd, const char *sock_filename, const struct main_config *config, bool oneshot, const char *wayland_socket, pid_t eol_pid, int channelsock) { wp_debug("I'm a client listening on '%s' / '%s'", sock_folder_name, sock_filename); wp_debug("version: %s", WAYPIPE_VERSION); /* Connect to Wayland display. 
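(Unless WAYLAND_SOCKET supplies an inherited fd, the path comes from get_display_path() above: an absolute WAYLAND_DISPLAY is used as-is, while a relative one is joined with XDG_RUNTIME_DIR; e.g. WAYLAND_DISPLAY=wayland-1 with XDG_RUNTIME_DIR=/run/user/1000 resolves to /run/user/1000/wayland-1.)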
We don't use the wayland-client * function here, because its errors aren't immediately useful, * and older Wayland versions have edge cases */ int dispfd = -1; struct sockaddr_un display_filename = {0}; char display_folder[256] = {0}; if (wayland_socket) { dispfd = get_inherited_socket(wayland_socket); if (dispfd == -1) { goto fail; } /* This socket is inherited and meant to be closed by Waypipe */ if (dispfd >= 0 && dispfd < 256) { inherited_fds[dispfd / 64] &= ~(1uLL << (dispfd % 64)); } } else { if (get_display_path(display_folder, sizeof(display_folder)) == -1) { goto fail; } if (split_socket_path(display_folder, &display_filename) == -1) { goto fail; } } struct socket_path display_path = { .folder = display_folder, .filename = &display_filename, }; if (oneshot) { if (!wayland_socket) { connect_to_socket(cwd_fd, display_path, NULL, &dispfd); } } else { int test_conn = -1; if (connect_to_socket(cwd_fd, display_path, NULL, &test_conn) == -1) { goto fail; } checked_close(test_conn); } wp_debug("A wayland compositor is available. Proceeding."); /* These handlers close the channelsock and dispfd */ int retcode; if (oneshot) { retcode = run_single_client( channelsock, &eol_pid, config, dispfd); } else { retcode = run_multi_client(cwd_fd, channelsock, &eol_pid, config, display_path); } if (!config->vsock) { unlink_at_folder(cwd_fd, sock_folder_fd, sock_folder_name, sock_filename); } int cleanup_type = shutdown_flag ? WNOHANG : 0; int status = -1; // Don't return until all child processes complete if (wait_for_pid_and_clean(&eol_pid, &status, cleanup_type, NULL)) { retcode = WEXITSTATUS(status); } return retcode; fail: close(channelsock); if (eol_pid) { waitpid(eol_pid, NULL, 0); } if (!config->vsock) { unlink_at_folder(cwd_fd, sock_folder_fd, sock_folder_name, sock_filename); } return EXIT_FAILURE; } waypipe-v0.10.5/waypipe-c/dmabuf.c000066400000000000000000000267131507071711100167520ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
*/ #include "dmabuf.h" #include "util.h" #ifndef HAS_DMABUF int init_render_data(struct render_data *data) { data->disabled = true; (void)data; return -1; } void cleanup_render_data(struct render_data *data) { (void)data; } struct gbm_bo *import_dmabuf(struct render_data *rd, int fd, size_t *size, const struct dmabuf_slice_data *info) { (void)rd; (void)fd; (void)size; (void)info; return NULL; } int get_unique_dmabuf_handle( struct render_data *rd, int fd, struct gbm_bo **temporary_bo) { (void)rd; (void)fd; (void)temporary_bo; return -1; } struct gbm_bo *make_dmabuf( struct render_data *rd, const struct dmabuf_slice_data *info) { (void)rd; (void)info; return NULL; } int export_dmabuf(struct gbm_bo *bo) { (void)bo; return -1; } void destroy_dmabuf(struct gbm_bo *bo) { (void)bo; } void *map_dmabuf(struct gbm_bo *bo, bool write, void **map_handle, uint32_t *exp_stride) { (void)bo; (void)write; (void)map_handle; (void)exp_stride; return NULL; } int unmap_dmabuf(struct gbm_bo *bo, void *map_handle) { (void)bo; (void)map_handle; return 0; } uint32_t dmabuf_get_simple_format_for_plane(uint32_t format, int plane) { (void)format; (void)plane; return 0; } uint32_t dmabuf_get_stride(struct gbm_bo *bo) { (void)bo; return 0; } #else /* HAS_DMABUF */ #include <errno.h> #include <fcntl.h> #include <inttypes.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #include <gbm.h> #ifndef DRM_FORMAT_MOD_INVALID #define DRM_FORMAT_MOD_INVALID 0x00ffffffffffffffULL #endif int init_render_data(struct render_data *data) { /* render node support can be disabled either by choice * or when a previous attempt failed */ if (data->disabled) { return -1; } if (data->drm_fd != -1) { // Silent return, idempotent return 0; } const char *card = data->drm_node_path ? data->drm_node_path : "/dev/dri/renderD128"; int drm_fd = open(card, O_RDWR | O_CLOEXEC | O_NOCTTY); if (drm_fd == -1) { wp_error("Failed to open drm fd for %s: %s", card, strerror(errno)); data->disabled = true; return -1; } struct gbm_device *dev = gbm_create_device(drm_fd); if (!dev) { data->disabled = true; checked_close(drm_fd); wp_error("Failed to create gbm device from drm_fd"); return -1; } data->drm_fd = drm_fd; data->dev = dev; /* Set the path to the card used for protocol handlers to see */ data->drm_node_path = card; /* Assume true initially, fall back to old buffer creation path * if the newer path errors out */ data->supports_modifiers = true; return 0; } void cleanup_render_data(struct render_data *data) { if (data->drm_fd != -1) { gbm_device_destroy(data->dev); checked_close(data->drm_fd); data->dev = NULL; data->drm_fd = -1; } } static bool dmabuf_info_valid(const struct dmabuf_slice_data *info) { if (info->height > (1u << 24) || info->width > (1u << 24) || info->num_planes > 4 || info->num_planes == 0) { wp_error("Invalid DMABUF slice data: height %" PRIu32 " width %" PRIu32 " num_planes %" PRId32, info->height, info->width, info->num_planes); return false; } return true; } struct gbm_bo *import_dmabuf(struct render_data *rd, int fd, size_t *size, const struct dmabuf_slice_data *info) { struct gbm_bo *bo; if (!dmabuf_info_valid(info)) { return NULL; } /* Multiplanar formats are all rather badly supported by * drivers/libgbm/libdrm/compositors/applications/everything.
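As an illustration (hypothetical values, not taken from any real buffer): a 256x256 NV12 image whose two planes share a single dmabuf would be imported below with num_fds = 2, fds = {fd, fd}, offsets = {0, 65536}, strides = {256, 256}, and a per-plane "simple" format such as GBM_FORMAT_R8 substituted for the multiplanar one.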
*/ struct gbm_import_fd_modifier_data data; // Select all plane metadata associated to planes linked // to this fd data.modifier = info->modifier; data.num_fds = 0; uint32_t simple_format = 0; for (int i = 0; i < info->num_planes; i++) { if (info->using_planes[i]) { data.fds[data.num_fds] = fd; data.strides[data.num_fds] = (int)info->strides[i]; data.offsets[data.num_fds] = (int)info->offsets[i]; data.num_fds++; if (!simple_format) { simple_format = dmabuf_get_simple_format_for_plane( info->format, i); } } } if (!simple_format) { simple_format = info->format; } data.width = info->width; data.height = info->height; data.format = simple_format; bo = gbm_bo_import(rd->dev, GBM_BO_IMPORT_FD_MODIFIER, &data, GBM_BO_USE_RENDERING); if (!bo) { wp_error("Failed to import dmabuf (format %x, modifier %" PRIx64 ") to gbm bo: %s", info->format, info->modifier, strerror(errno)); return NULL; } /* todo: find out how to correctly map multiplanar formats */ *size = gbm_bo_get_stride(bo) * gbm_bo_get_height(bo); return bo; } int get_unique_dmabuf_handle( struct render_data *rd, int fd, struct gbm_bo **temporary_bo) { struct gbm_import_fd_data data; data.fd = fd; data.width = 1; data.stride = 1; data.height = 1; data.format = GBM_FORMAT_R8; *temporary_bo = gbm_bo_import( rd->dev, GBM_BO_IMPORT_FD, &data, GBM_BO_USE_RENDERING); if (!*temporary_bo) { return -1; } // This effectively reduces to DRM_IOCTL_PRIME_FD_TO_HANDLE. Is the // runtime dependency worth it? int handle = gbm_bo_get_handle(*temporary_bo).s32; return handle; } struct gbm_bo *make_dmabuf( struct render_data *rd, const struct dmabuf_slice_data *info) { struct gbm_bo *bo; if (!dmabuf_info_valid(info)) { return NULL; } retry: if (!rd->supports_modifiers || info->modifier == DRM_FORMAT_MOD_INVALID) { uint32_t simple_format = dmabuf_get_simple_format_for_plane( info->format, 0); /* If the modifier is nonzero, assume that the backend * preferred modifier matches it. With this old API, there * really isn't any way to do this better */ bo = gbm_bo_create(rd->dev, info->width, info->height, simple_format, GBM_BO_USE_RENDERING | (info->modifier ? 0 : GBM_BO_USE_LINEAR)); if (!bo) { wp_error("Failed to make dmabuf (old path): %s", strerror(errno)); return NULL; } uint64_t mod = gbm_bo_get_modifier(bo); if (info->modifier != DRM_FORMAT_MOD_INVALID && mod != DRM_FORMAT_MOD_INVALID && mod != info->modifier) { wp_error("DMABUF with format %08x, autoselected modifier %" PRIx64 " does not match desired %" PRIx64 ", expect a crash", simple_format, mod, info->modifier); } } else { uint64_t modifiers[2] = {info->modifier, GBM_BO_USE_RENDERING}; uint32_t simple_format = dmabuf_get_simple_format_for_plane( info->format, 0); /* Whether just size and modifiers suffice to replicate * a surface is driver dependent, and requires actual testing * with the hardware. * * i915 DRM ioctls cover size, swizzling, tiling state, only. * amdgpu, size + allocation domain/caching/align flags * etnaviv, size + caching flags * tegra, vc4: size + tiling + flags * radeon: size + tiling + flags, including pitch * * Note that gbm doesn't have a specific api for creating * buffers with minimal information, or even just getting * the size of the buffer contents. 
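In short, the path below first tries gbm_bo_create_with_modifiers() with the requested modifier; if the driver reports ENOSYS, supports_modifiers is cleared and the plain gbm_bo_create() path above is retried.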
*/ bo = gbm_bo_create_with_modifiers(rd->dev, info->width, info->height, simple_format, modifiers, 2); if (!bo && errno == ENOSYS) { wp_debug("Creating a DMABUF with modifiers explicitly set is not supported; retrying"); rd->supports_modifiers = false; goto retry; } if (!bo) { wp_error("Failed to make dmabuf (with format %x, modifier %" PRIx64 "): %s", simple_format, info->modifier, strerror(errno)); return NULL; } } return bo; } int export_dmabuf(struct gbm_bo *bo) { int fd = gbm_bo_get_fd(bo); if (fd == -1) { wp_error("Failed to export dmabuf: %s", strerror(errno)); } return fd; } void destroy_dmabuf(struct gbm_bo *bo) { if (bo) { gbm_bo_destroy(bo); } } void *map_dmabuf(struct gbm_bo *bo, bool write, void **map_handle, uint32_t *exp_stride) { if (!bo) { wp_error("Tried to map null gbm_bo"); return NULL; } /* With i965, the map handle MUST initially point to a NULL pointer; * otherwise the handler silently exits, sometimes with misleading errno * :-( */ *map_handle = NULL; uint32_t stride; uint32_t width = gbm_bo_get_width(bo); uint32_t height = gbm_bo_get_height(bo); /* As of writing, with amdgpu, GBM_BO_TRANSFER_WRITE invalidates * regions not written to during the mapping, while iris preserves * the original buffer contents. GBM documentation does not say which * WRITE behavior is correct. What the individual drivers do may change * in the future. Specifying READ_WRITE preserves the old contents with * both drivers. */ uint32_t flags = write ? GBM_BO_TRANSFER_READ_WRITE : GBM_BO_TRANSFER_READ; void *data = gbm_bo_map( bo, 0, 0, width, height, flags, &stride, map_handle); if (!data) { // errno is useless here wp_error("Failed to map dmabuf"); return NULL; } *exp_stride = stride; return data; } int unmap_dmabuf(struct gbm_bo *bo, void *map_handle) { gbm_bo_unmap(bo, map_handle); return 0; } // TODO: support DRM formats, like DRM_FORMAT_RGB888_A8 and // DRM_FORMAT_ARGB16161616F, defined in drm_fourcc.h. 
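/* A minimal sketch of how the subsample/cpp entries in the table below
 * translate into per-plane byte sizes (plane_size is a hypothetical
 * helper, not part of this file):
 *
 * static size_t plane_size(const struct multiplanar_info *m, int plane,
 *                 uint32_t width, uint32_t height)
 * {
 *         uint32_t w = width / (uint32_t)m->planes[plane].subsample_w;
 *         uint32_t h = height / (uint32_t)m->planes[plane].subsample_h;
 *         return (size_t)w * h * (size_t)m->planes[plane].cpp;
 * }
 *
 * For NV12 at 256x256 this gives 65536 bytes for the Y plane ({1, 1, 1})
 * and 32768 bytes for the interleaved CbCr plane ({2, 2, 2}). */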
struct multiplanar_info { uint32_t format; struct { int subsample_w; int subsample_h; int cpp; } planes[3]; }; static const struct multiplanar_info plane_table[] = { {GBM_FORMAT_NV12, {{1, 1, 1}, {2, 2, 2}}}, {GBM_FORMAT_NV21, {{1, 1, 1}, {2, 2, 2}}}, {GBM_FORMAT_NV16, {{1, 1, 1}, {2, 1, 2}}}, {GBM_FORMAT_NV61, {{1, 1, 1}, {2, 1, 2}}}, {GBM_FORMAT_YUV410, {{1, 1, 1}, {4, 4, 1}, {4, 4, 1}}}, {GBM_FORMAT_YVU410, {{1, 1, 1}, {4, 4, 1}, {4, 4, 1}}}, {GBM_FORMAT_YUV411, {{1, 1, 1}, {4, 1, 1}, {4, 1, 1}}}, {GBM_FORMAT_YVU411, {{1, 1, 1}, {4, 1, 1}, {4, 1, 1}}}, {GBM_FORMAT_YUV420, {{1, 1, 1}, {2, 2, 1}, {2, 2, 1}}}, {GBM_FORMAT_YVU420, {{1, 1, 1}, {2, 2, 1}, {2, 2, 1}}}, {GBM_FORMAT_YUV422, {{1, 1, 1}, {2, 1, 1}, {2, 1, 1}}}, {GBM_FORMAT_YVU422, {{1, 1, 1}, {2, 1, 1}, {2, 1, 1}}}, {GBM_FORMAT_YUV444, {{1, 1, 1}, {1, 1, 1}, {1, 1, 1}}}, {GBM_FORMAT_YVU444, {{1, 1, 1}, {1, 1, 1}, {1, 1, 1}}}, {0}}; uint32_t dmabuf_get_simple_format_for_plane(uint32_t format, int plane) { const uint32_t by_cpp[] = {0, GBM_FORMAT_R8, GBM_FORMAT_GR88, GBM_FORMAT_RGB888, GBM_BO_FORMAT_ARGB8888}; for (int i = 0; plane_table[i].format; i++) { if (plane_table[i].format == format) { int cpp = plane_table[i].planes[plane].cpp; return by_cpp[cpp]; } } if (format == GBM_FORMAT_YUYV || format == GBM_FORMAT_YVYU || format == GBM_FORMAT_UYVY || format == GBM_FORMAT_VYUY || format == GBM_FORMAT_AYUV) { return by_cpp[4]; } return format; } uint32_t dmabuf_get_stride(struct gbm_bo *bo) { return gbm_bo_get_stride(bo); } #endif /* HAS_DMABUF */ waypipe-v0.10.5/waypipe-c/dmabuf.h000066400000000000000000000071561507071711100167570ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #ifndef WAYPIPE_DMABUF_H #define WAYPIPE_DMABUF_H #include <assert.h> #include <stdbool.h> #include <stddef.h> #include <stdint.h> typedef void *VADisplay; typedef unsigned int VAGenericID; typedef VAGenericID VAConfigID; struct render_data { bool disabled; int drm_fd; const char *drm_node_path; struct gbm_device *dev; bool supports_modifiers; /* video hardware context */ bool av_disabled; int av_bpf; int av_video_fmt; struct AVBufferRef *av_hwdevice_ref; struct AVBufferRef *av_drmdevice_ref; VADisplay av_vadisplay; VAConfigID av_copy_config; }; /** Additional information to help serialize a dmabuf */ struct dmabuf_slice_data { /* This information partially duplicates that of a gbm_bo.
However, for * instance with weston, it is possible for the compositor to handle * multibuffer multiplanar images, even though a driver may only support * multiplanar images derived from a single underlying dmabuf. */ uint32_t width; uint32_t height; uint32_t format; int32_t num_planes; uint32_t offsets[4]; uint32_t strides[4]; uint64_t modifier; // to which planes is the matching dmabuf assigned? uint8_t using_planes[4]; char pad[4]; }; static_assert(sizeof(struct dmabuf_slice_data) == 64, "size check"); int init_render_data(struct render_data *); void cleanup_render_data(struct render_data *); struct gbm_bo *make_dmabuf( struct render_data *rd, const struct dmabuf_slice_data *info); int export_dmabuf(struct gbm_bo *bo); /** Import DMABUF to a GBM buffer object. */ struct gbm_bo *import_dmabuf(struct render_data *rd, int fd, size_t *size, const struct dmabuf_slice_data *info); void destroy_dmabuf(struct gbm_bo *bo); /** Map a DMABUF for reading or for writing */ void *map_dmabuf(struct gbm_bo *bo, bool write, void **map_handle, uint32_t *exp_stride); int unmap_dmabuf(struct gbm_bo *bo, void *map_handle); /** The handle values are unique among the set of currently active buffer * objects. To compare a set of buffer objects, produce handles in a batch, and * then free the temporary buffer objects in a batch */ int get_unique_dmabuf_handle( struct render_data *rd, int fd, struct gbm_bo **temporary_bo); uint32_t dmabuf_get_simple_format_for_plane(uint32_t format, int plane); uint32_t dmabuf_get_stride(struct gbm_bo *bo); /** Returns the number of bytes per pixel for WL or DRM format 'format', if the * format is an RGBA-type single plane format. For YUV-type or planar formats, * returns -1. */ int get_shm_bytes_per_pixel(uint32_t format); #endif // WAYPIPE_DMABUF_H waypipe-v0.10.5/waypipe-c/handlers.c000066400000000000000000001706751507071711100173230ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
*/ #include "main.h" #include "parsing.h" #include "shadow.h" #include <errno.h> #include <fcntl.h> #include <stdlib.h> #include <string.h> #include <sys/stat.h> #include <sys/types.h> #include <unistd.h> #ifndef DRM_FORMAT_MOD_INVALID #define DRM_FORMAT_MOD_INVALID 0x00ffffffffffffffULL #endif struct obj_wl_shm_pool { struct wp_object base; struct shadow_fd *owned_buffer; }; enum buffer_type { BUF_SHM, BUF_DMA }; // This should be a safe limit for the maximum number of dmabuf planes #define MAX_DMABUF_PLANES 8 struct obj_wl_buffer { struct wp_object base; enum buffer_type type; struct shadow_fd *shm_buffer; int32_t shm_offset; int32_t shm_width; int32_t shm_height; int32_t shm_stride; uint32_t shm_format; int dmabuf_nplanes; int32_t dmabuf_width; int32_t dmabuf_height; uint32_t dmabuf_format; uint32_t dmabuf_flags; struct shadow_fd *dmabuf_buffers[MAX_DMABUF_PLANES]; uint32_t dmabuf_offsets[MAX_DMABUF_PLANES]; uint32_t dmabuf_strides[MAX_DMABUF_PLANES]; uint64_t dmabuf_modifiers[MAX_DMABUF_PLANES]; uint64_t unique_id; }; struct damage_record { int x, y, width, height; bool buffer_coordinates; }; struct damage_list { struct damage_record *list; int len; int size; }; #define SURFACE_DAMAGE_BACKLOG 7 struct obj_wl_surface { struct wp_object base; /* The zeroth list is the "current" one, 1st was damage provided at last * commit, etc. */ struct damage_list damage_lists[SURFACE_DAMAGE_BACKLOG]; /* Unique buffer identifiers to which the above damage lists apply */ uint64_t attached_buffer_uids[SURFACE_DAMAGE_BACKLOG]; uint32_t attached_buffer_id; /* protocol object id */ int32_t scale; int32_t transform; }; struct obj_wlr_screencopy_frame { struct wp_object base; /* Link to a wp_buffer instead of its underlying data, * because if the buffer object is destroyed early, then * we do not want to accidentally write over a section of a shm_pool * which is now used for transport in the reverse direction.
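Re-resolving the id when the frame completes also means a buffer destroyed in the meantime is detected and reported, instead of a stale pointer being dereferenced.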
*/ uint32_t buffer_id; }; struct obj_wp_presentation { struct wp_object base; // reference clock - given clock int64_t clock_delta_nsec; int clock_id; }; struct obj_wp_presentation_feedback { struct wp_object base; int64_t clock_delta_nsec; }; struct obj_zwp_linux_dmabuf_params { struct wp_object base; struct shadow_fd *sfds; // These variables are set by 'params.create', and passed on in // params.created int32_t create_width; int32_t create_height; uint32_t create_format; uint32_t create_flags; struct { int fd; struct shadow_fd *buffer; uint32_t offset; uint32_t stride; uint64_t modifier; } add[MAX_DMABUF_PLANES]; int nplanes; }; struct format_table_entry { uint32_t format; uint32_t padding; uint64_t modifier; }; struct dmabuf_tranche { uint32_t flags; uint16_t *tranche; size_t tranche_size; }; struct obj_zwp_linux_dmabuf_feedback { struct wp_object base; struct format_table_entry *table; size_t table_len; dev_t main_device; /* the tranche being edited until tranche_done is called */ dev_t current_device; /* the tranche being edited until tranche_done is called */ struct dmabuf_tranche current; /* list of all tranches */ struct dmabuf_tranche *tranches; size_t tranche_count; }; struct obj_wlr_export_dmabuf_frame { struct wp_object base; uint32_t width; uint32_t height; uint32_t format; uint64_t modifier; // At the moment, no message reordering support, for lack of a client // to test it with struct { struct shadow_fd *buffer; uint32_t offset; uint32_t stride; uint64_t modifier; } objects[MAX_DMABUF_PLANES]; uint32_t nobjects; }; /* List of interfaces which may be advertised as globals */ static const struct wp_interface *const global_interfaces[] = { &intf_ext_data_control_manager_v1, &intf_gtk_primary_selection_device_manager, &intf_wl_compositor, &intf_wl_data_device_manager, &intf_wl_drm, &intf_wl_output, &intf_wl_seat, &intf_wl_shm, &intf_wl_subcompositor, &intf_wp_presentation, &intf_xdg_wm_base, &intf_zwlr_data_control_manager_v1, &intf_zwlr_export_dmabuf_manager_v1, &intf_zwlr_gamma_control_manager_v1, &intf_zwlr_screencopy_manager_v1, &intf_zwp_input_method_manager_v2, &intf_zwp_linux_dmabuf_v1, &intf_zwp_primary_selection_device_manager_v1, &intf_zwp_virtual_keyboard_manager_v1, }; /* List of interfaces which are never advertised as globals */ static const struct wp_interface *const non_global_interfaces[] = { &intf_ext_data_control_device_v1, &intf_ext_data_control_offer_v1, &intf_ext_data_control_source_v1, &intf_gtk_primary_selection_offer, &intf_gtk_primary_selection_source, &intf_wl_buffer, &intf_wl_data_offer, &intf_wl_data_source, &intf_wl_display, &intf_wl_keyboard, &intf_wl_registry, &intf_wl_shm_pool, &intf_wl_surface, &intf_wp_presentation_feedback, &intf_zwlr_data_control_offer_v1, &intf_zwlr_data_control_source_v1, &intf_zwlr_export_dmabuf_frame_v1, &intf_zwlr_gamma_control_v1, &intf_zwlr_screencopy_frame_v1, &intf_zwp_linux_buffer_params_v1, &intf_zwp_primary_selection_offer_v1, &intf_zwp_primary_selection_source_v1, }; static void cleanup_dmabuf_params_fds(struct obj_zwp_linux_dmabuf_params *r) { // Sometimes multiple entries point to the same buffer for (int i = 0; i < MAX_DMABUF_PLANES; i++) { int fd = r->add[i].fd; if (fd != -1) { checked_close(fd); for (int k = 0; k < MAX_DMABUF_PLANES; k++) { if (fd == r->add[k].fd) { r->add[k].fd = -1; } } } } } void destroy_wp_object(struct wp_object *object) { if (object->type == &intf_wl_shm_pool) { struct obj_wl_shm_pool *r = (struct obj_wl_shm_pool *)object; if (r->owned_buffer) { 
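/* drop the protocol-side reference taken when the pool was created */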
shadow_decref_protocol(r->owned_buffer); } } else if (object->type == &intf_wl_buffer) { struct obj_wl_buffer *r = (struct obj_wl_buffer *)object; for (int i = 0; i < MAX_DMABUF_PLANES; i++) { if (r->dmabuf_buffers[i]) { shadow_decref_protocol(r->dmabuf_buffers[i]); } } if (r->shm_buffer) { shadow_decref_protocol(r->shm_buffer); } } else if (object->type == &intf_wl_surface) { struct obj_wl_surface *r = (struct obj_wl_surface *)object; for (int i = 0; i < SURFACE_DAMAGE_BACKLOG; i++) { free(r->damage_lists[i].list); } } else if (object->type == &intf_zwlr_screencopy_frame_v1) { struct obj_wlr_screencopy_frame *r = (struct obj_wlr_screencopy_frame *)object; (void)r; } else if (object->type == &intf_wp_presentation) { } else if (object->type == &intf_wp_presentation_feedback) { } else if (object->type == &intf_zwp_linux_buffer_params_v1) { struct obj_zwp_linux_dmabuf_params *r = (struct obj_zwp_linux_dmabuf_params *)object; for (int i = 0; i < MAX_DMABUF_PLANES; i++) { if (r->add[i].buffer) { shadow_decref_protocol(r->add[i].buffer); } } cleanup_dmabuf_params_fds(r); } else if (object->type == &intf_zwlr_export_dmabuf_frame_v1) { struct obj_wlr_export_dmabuf_frame *r = (struct obj_wlr_export_dmabuf_frame *)object; for (int i = 0; i < MAX_DMABUF_PLANES; i++) { if (r->objects[i].buffer) { shadow_decref_protocol(r->objects[i].buffer); } } } else if (object->type == &intf_zwp_linux_dmabuf_feedback_v1) { struct obj_zwp_linux_dmabuf_feedback *r = (struct obj_zwp_linux_dmabuf_feedback *)object; free(r->table); if (r->tranche_count > 0) { for (size_t i = 0; i < r->tranche_count; i++) { free(r->tranches[i].tranche); } free(r->tranches); } } free(object); } struct wp_object *create_wp_object(uint32_t id, const struct wp_interface *type) { /* Note: if custom types are ever implemented for globals, they would * need special replacement logic when the type is set */ size_t sz; if (type == &intf_wl_shm_pool) { sz = sizeof(struct obj_wl_shm_pool); } else if (type == &intf_wl_buffer) { sz = sizeof(struct obj_wl_buffer); } else if (type == &intf_wl_surface) { sz = sizeof(struct obj_wl_surface); } else if (type == &intf_zwlr_screencopy_frame_v1) { sz = sizeof(struct obj_wlr_screencopy_frame); } else if (type == &intf_wp_presentation) { sz = sizeof(struct obj_wp_presentation); } else if (type == &intf_wp_presentation_feedback) { sz = sizeof(struct obj_wp_presentation_feedback); } else if (type == &intf_zwp_linux_buffer_params_v1) { sz = sizeof(struct obj_zwp_linux_dmabuf_params); } else if (type == &intf_zwlr_export_dmabuf_frame_v1) { sz = sizeof(struct obj_wlr_export_dmabuf_frame); } else if (type == &intf_zwp_linux_dmabuf_feedback_v1) { sz = sizeof(struct obj_zwp_linux_dmabuf_feedback); } else { sz = sizeof(struct wp_object); } struct wp_object *new_obj = calloc(1, sz); if (!new_obj) { wp_error("Failed to allocate new wp_object id=%d type=%s", id, type->name); return NULL; } new_obj->obj_id = id; new_obj->type = type; new_obj->is_zombie = false; if (type == &intf_zwp_linux_buffer_params_v1) { struct obj_zwp_linux_dmabuf_params *params = (struct obj_zwp_linux_dmabuf_params *)new_obj; for (int i = 0; i < MAX_DMABUF_PLANES; i++) { params->add[i].fd = -1; } } else if (type == &intf_wl_surface) { ((struct obj_wl_surface *)new_obj)->scale = 1; } return new_obj; } void do_wl_display_evt_error(struct context *ctx, struct wp_object *object_id, uint32_t code, const char *message) { const char *type_name = object_id ? (object_id->type ? 
object_id->type->name : "") : ""; wp_error("Display sent fatal error message %s, code %u: %s", type_name, code, message ? message : ""); (void)ctx; } void do_wl_display_evt_delete_id(struct context *ctx, uint32_t id) { struct wp_object *obj = tracker_get(ctx->tracker, id); /* ensure this isn't miscalled to have wl_display delete itself */ if (obj && obj != ctx->obj) { tracker_remove(ctx->tracker, obj); destroy_wp_object(obj); } } void do_wl_display_req_get_registry( struct context *ctx, struct wp_object *registry) { (void)ctx; (void)registry; } void do_wl_display_req_sync(struct context *ctx, struct wp_object *callback) { (void)ctx; (void)callback; } void do_wl_registry_evt_global(struct context *ctx, uint32_t name, const char *interface, uint32_t version) { if (!interface) { wp_debug("Interface name provided via wl_registry::global was NULL"); return; } bool requires_rnode = false; requires_rnode |= !strcmp(interface, "wl_drm"); requires_rnode |= !strcmp(interface, "zwp_linux_dmabuf_v1"); requires_rnode |= !strcmp(interface, "zwlr_export_dmabuf_manager_v1"); if (requires_rnode) { if (init_render_data(&ctx->g->render) == -1) { /* A gpu connection supported by waypipe is required on * both sides, since data transfers may occur in both * directions, and * modifying textures may require driver support */ wp_debug("Discarding protocol advertisement for %s, render node support disabled", interface); ctx->drop_this_msg = true; return; } } if (!strcmp(interface, "zwp_linux_dmabuf_v1")) { /* Higher versions will very likely require new Waypipe code to * support, so limit this to what Waypipe supports */ if (ctx->message[2 + 1 + 1 + 5] > ZWP_LINUX_DMABUF_V1_INTERFACE_VERSION) { ctx->message[2 + 1 + 1 + 5] = ZWP_LINUX_DMABUF_V1_INTERFACE_VERSION; } } if (!strcmp(interface, "wl_shm")) { /* Higher versions will very likely require new Waypipe code to * support, so limit this to what Waypipe supports */ if (ctx->message[2 + 1 + 1 + 2] > WL_SHM_INTERFACE_VERSION) { ctx->message[2 + 1 + 1 + 2] = WL_SHM_INTERFACE_VERSION; } } bool unsupported = false; // requires novel fd translation, not yet supported unsupported |= !strcmp( interface, "zwp_linux_explicit_synchronization_v1"); unsupported |= !strcmp(interface, "wp_linux_drm_syncobj_manager_v1"); unsupported |= !strcmp(interface, "wp_security_context_manager_v1"); unsupported |= !strcmp(interface, "wp_drm_lease_device_v1"); if (unsupported) { wp_debug("Hiding %s advertisement, unsupported", interface); ctx->drop_this_msg = true; } (void)name; (void)version; } void do_wl_registry_evt_global_remove(struct context *ctx, uint32_t name) { (void)ctx; (void)name; } void do_wl_registry_req_bind(struct context *ctx, uint32_t name, const char *interface, uint32_t version, struct wp_object *id) { if (!interface) { wp_debug("Interface name provided to wl_registry::bind was NULL"); return; } /* The object has already been created, but its type is NULL */ struct wp_object *the_object = id; uint32_t obj_id = the_object->obj_id; for (size_t i = 0; i < sizeof(non_global_interfaces) / sizeof(non_global_interfaces[0]); i++) { if (!strcmp(interface, non_global_interfaces[i]->name)) { wp_error("Interface %s does not support binding globals", non_global_interfaces[i]->name); /* exit search, discard unbound object */ goto fail; } } for (size_t i = 0; i < sizeof(global_interfaces) / sizeof(global_interfaces[0]); i++) { if (!strcmp(interface, global_interfaces[i]->name)) { // Set the object type the_object->type = global_interfaces[i]; if (global_interfaces[i] == 
&intf_wp_presentation) { struct wp_object *new_object = create_wp_object( obj_id, &intf_wp_presentation); if (!new_object) { return; } tracker_replace_existing( ctx->tracker, new_object); free(the_object); } return; } } fail: wp_debug("Unhandled protocol %s name=%d id=%d (v%d)", interface, name, the_object->obj_id, version); tracker_remove(ctx->tracker, the_object); free(the_object); (void)name; (void)version; } void do_wl_buffer_evt_release(struct context *ctx) { (void)ctx; } int get_shm_bytes_per_pixel(uint32_t format) { switch (format) { case 0x34325241: /* DRM_FORMAT_ARGB8888 */ case 0x34325258: /* DRM_FORMAT_XRGB8888 */ case WL_SHM_FORMAT_ARGB8888: case WL_SHM_FORMAT_XRGB8888: return 4; case WL_SHM_FORMAT_C8: case WL_SHM_FORMAT_RGB332: case WL_SHM_FORMAT_BGR233: return 1; case WL_SHM_FORMAT_XRGB4444: case WL_SHM_FORMAT_XBGR4444: case WL_SHM_FORMAT_RGBX4444: case WL_SHM_FORMAT_BGRX4444: case WL_SHM_FORMAT_ARGB4444: case WL_SHM_FORMAT_ABGR4444: case WL_SHM_FORMAT_RGBA4444: case WL_SHM_FORMAT_BGRA4444: case WL_SHM_FORMAT_XRGB1555: case WL_SHM_FORMAT_XBGR1555: case WL_SHM_FORMAT_RGBX5551: case WL_SHM_FORMAT_BGRX5551: case WL_SHM_FORMAT_ARGB1555: case WL_SHM_FORMAT_ABGR1555: case WL_SHM_FORMAT_RGBA5551: case WL_SHM_FORMAT_BGRA5551: case WL_SHM_FORMAT_RGB565: case WL_SHM_FORMAT_BGR565: return 2; case WL_SHM_FORMAT_RGB888: case WL_SHM_FORMAT_BGR888: return 3; case WL_SHM_FORMAT_XBGR8888: case WL_SHM_FORMAT_RGBX8888: case WL_SHM_FORMAT_BGRX8888: case WL_SHM_FORMAT_ABGR8888: case WL_SHM_FORMAT_RGBA8888: case WL_SHM_FORMAT_BGRA8888: case WL_SHM_FORMAT_XRGB2101010: case WL_SHM_FORMAT_XBGR2101010: case WL_SHM_FORMAT_RGBX1010102: case WL_SHM_FORMAT_BGRX1010102: case WL_SHM_FORMAT_ARGB2101010: case WL_SHM_FORMAT_ABGR2101010: case WL_SHM_FORMAT_RGBA1010102: case WL_SHM_FORMAT_BGRA1010102: return 4; case WL_SHM_FORMAT_YUYV: case WL_SHM_FORMAT_YVYU: case WL_SHM_FORMAT_UYVY: case WL_SHM_FORMAT_VYUY: case WL_SHM_FORMAT_AYUV: case WL_SHM_FORMAT_NV12: case WL_SHM_FORMAT_NV21: case WL_SHM_FORMAT_NV16: case WL_SHM_FORMAT_NV61: case WL_SHM_FORMAT_YUV410: case WL_SHM_FORMAT_YVU410: case WL_SHM_FORMAT_YUV411: case WL_SHM_FORMAT_YVU411: case WL_SHM_FORMAT_YUV420: case WL_SHM_FORMAT_YVU420: case WL_SHM_FORMAT_YUV422: case WL_SHM_FORMAT_YVU422: case WL_SHM_FORMAT_YUV444: case WL_SHM_FORMAT_YVU444: goto planar; case WL_SHM_FORMAT_R8: return 1; case WL_SHM_FORMAT_R16: case WL_SHM_FORMAT_RG88: case WL_SHM_FORMAT_GR88: return 2; case WL_SHM_FORMAT_RG1616: case WL_SHM_FORMAT_GR1616: return 4; case WL_SHM_FORMAT_XRGB16161616F: case WL_SHM_FORMAT_XBGR16161616F: case WL_SHM_FORMAT_ARGB16161616F: case WL_SHM_FORMAT_ABGR16161616F: case WL_SHM_FORMAT_AXBXGXRX106106106106: return 8; case WL_SHM_FORMAT_XYUV8888: case WL_SHM_FORMAT_VUY888: case WL_SHM_FORMAT_VUY101010: case WL_SHM_FORMAT_Y210: case WL_SHM_FORMAT_Y212: case WL_SHM_FORMAT_Y216: case WL_SHM_FORMAT_Y410: case WL_SHM_FORMAT_Y412: case WL_SHM_FORMAT_Y416: case WL_SHM_FORMAT_XVYU2101010: case WL_SHM_FORMAT_XVYU12_16161616: case WL_SHM_FORMAT_XVYU16161616: case WL_SHM_FORMAT_Y0L0: case WL_SHM_FORMAT_X0L0: case WL_SHM_FORMAT_Y0L2: case WL_SHM_FORMAT_X0L2: case WL_SHM_FORMAT_YUV420_8BIT: case WL_SHM_FORMAT_YUV420_10BIT: case WL_SHM_FORMAT_XRGB8888_A8: case WL_SHM_FORMAT_XBGR8888_A8: case WL_SHM_FORMAT_RGBX8888_A8: case WL_SHM_FORMAT_BGRX8888_A8: case WL_SHM_FORMAT_RGB888_A8: case WL_SHM_FORMAT_BGR888_A8: case WL_SHM_FORMAT_RGB565_A8: case WL_SHM_FORMAT_BGR565_A8: case WL_SHM_FORMAT_NV24: case WL_SHM_FORMAT_NV42: case WL_SHM_FORMAT_P210: case WL_SHM_FORMAT_P010: case 
WL_SHM_FORMAT_P012: case WL_SHM_FORMAT_P016: case WL_SHM_FORMAT_NV15: case WL_SHM_FORMAT_Q410: case WL_SHM_FORMAT_Q401: goto planar; case WL_SHM_FORMAT_XRGB16161616: case WL_SHM_FORMAT_XBGR16161616: case WL_SHM_FORMAT_ARGB16161616: case WL_SHM_FORMAT_ABGR16161616: return 8; // todo: adjust API to handle bit packed formats case WL_SHM_FORMAT_C1: case WL_SHM_FORMAT_C2: case WL_SHM_FORMAT_C4: case WL_SHM_FORMAT_D1: case WL_SHM_FORMAT_D2: case WL_SHM_FORMAT_D4: goto planar; case WL_SHM_FORMAT_D8: return 1; case WL_SHM_FORMAT_R1: case WL_SHM_FORMAT_R2: case WL_SHM_FORMAT_R4: goto planar; case WL_SHM_FORMAT_R10: case WL_SHM_FORMAT_R12: return 2; case WL_SHM_FORMAT_AVUY8888: case WL_SHM_FORMAT_XVUY8888: return 4; case WL_SHM_FORMAT_P030: goto planar; default: wp_error("Unidentified WL_SHM format %x", format); return -1; } planar: return -1; } static void compute_damage_coordinates(int *xlow, int *xhigh, int *ylow, int *yhigh, const struct damage_record *rec, int buf_width, int buf_height, int transform, int scale) { if (rec->buffer_coordinates) { *xlow = rec->x; *xhigh = rec->x + rec->width; *ylow = rec->y; *yhigh = rec->y + rec->height; } else { int xl = rec->x * scale; int yl = rec->y * scale; int xh = (rec->width + rec->x) * scale; int yh = (rec->y + rec->height) * scale; /* Each of the eight transformations corresponds to a * unique set of reflections: X<->Y | Xflip | Yflip */ uint32_t magic = 0x74125630; /* idx 76543210 * xyexch = 10101010 * xflip = 11000110 * yflip = 10011100 */ bool xyexch = magic & (1u << (4 * transform)); bool xflip = magic & (1u << (4 * transform + 1)); bool yflip = magic & (1u << (4 * transform + 2)); int ew = xyexch ? buf_height : buf_width; int eh = xyexch ? buf_width : buf_height; if (xflip) { int tmp = ew - xh; xh = ew - xl; xl = tmp; } if (yflip) { int tmp = eh - yh; yh = eh - yl; yl = tmp; } if (xyexch) { *xlow = yl; *xhigh = yh; *ylow = xl; *yhigh = xh; } else { *xlow = xl; *xhigh = xh; *ylow = yl; *yhigh = yh; } } } void do_wl_surface_req_attach(struct context *ctx, struct wp_object *buffer, int32_t x, int32_t y) { (void)x; (void)y; struct wp_object *bufobj = (struct wp_object *)buffer; if (!bufobj) { /* A null buffer can legitimately be sent to remove * surface contents, presumably with shell-defined * semantics */ wp_debug("Buffer to be attached is null"); return; } if (bufobj->type != &intf_wl_buffer) { wp_error("Buffer to be attached has the wrong type"); return; } struct obj_wl_surface *surface = (struct obj_wl_surface *)ctx->obj; surface->attached_buffer_id = bufobj->obj_id; } static void rotate_damage_lists(struct obj_wl_surface *surface) { free(surface->damage_lists[SURFACE_DAMAGE_BACKLOG - 1].list); memmove(surface->damage_lists + 1, surface->damage_lists, (SURFACE_DAMAGE_BACKLOG - 1) * sizeof(struct damage_list)); memset(surface->damage_lists, 0, sizeof(struct damage_list)); memmove(surface->attached_buffer_uids + 1, surface->attached_buffer_uids, (SURFACE_DAMAGE_BACKLOG - 1) * sizeof(uint64_t)); surface->attached_buffer_uids[0] = 0; } void do_wl_surface_req_commit(struct context *ctx) { struct obj_wl_surface *surface = (struct obj_wl_surface *)ctx->obj; if (!surface->attached_buffer_id) { /* The wl_surface.commit operation applies all "pending * state", much of which we don't care about. Typically, * when a wl_surface is first created, it is soon * committed to atomically update state variables. An * attached wl_buffer is not required.
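With no attached buffer there is nothing to replicate, so such commits are ignored here.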
*/ return; } if (ctx->on_display_side) { /* commit signifies a client-side update only */ return; } struct wp_object *obj = tracker_get(ctx->tracker, surface->attached_buffer_id); if (!obj) { wp_error("Attached buffer no longer exists"); return; } if (obj->type != &intf_wl_buffer) { wp_error("Buffer to commit has the wrong type, and may have been recycled"); return; } struct obj_wl_buffer *buf = (struct obj_wl_buffer *)obj; surface->attached_buffer_uids[0] = buf->unique_id; if (buf->type == BUF_DMA) { rotate_damage_lists(surface); for (int i = 0; i < buf->dmabuf_nplanes; i++) { struct shadow_fd *sfd = buf->dmabuf_buffers[i]; if (!sfd) { wp_error("dmabuf surface buffer is missing plane %d", i); continue; } if (!(sfd->type == FDC_DMABUF || sfd->type == FDC_DMAVID_IR)) { wp_error("fd associated with dmabuf surface is not a dmabuf"); continue; } // detailed damage tracking is not yet supported sfd->is_dirty = true; damage_everything(&sfd->damage); } return; } else if (buf->type != BUF_SHM) { wp_error("wp_buffer is backed neither by DMA nor SHM, not yet supported"); return; } struct shadow_fd *sfd = buf->shm_buffer; if (!sfd) { wp_error("wp_buffer to be committed has no fd"); return; } if (sfd->type != FDC_FILE) { wp_error("fd associated with surface is not file-like"); return; } sfd->is_dirty = true; int bpp = get_shm_bytes_per_pixel(buf->shm_format); if (bpp == -1) { wp_error("Encountered unknown/planar/subsampled wl_shm format %x; marking entire buffer", buf->shm_format); goto backup; } if (surface->scale <= 0) { wp_error("Invalid buffer scale during commit (%d), assuming everything damaged", surface->scale); goto backup; } if (surface->transform < 0 || surface->transform >= 8) { wp_error("Invalid buffer transform during commit (%d), assuming everything damaged", surface->transform); goto backup; } /* The damage specified as of wl_surface commit indicates which region * of the surface has changed between the last commit and the current * one. However, the last time the attached buffer was used may have * been several commits ago, so we need to replay all the damage up * to the current point. */ int age = -1; int n_damaged_rects = surface->damage_lists[0].len; for (int j = 1; j < SURFACE_DAMAGE_BACKLOG; j++) { if (surface->attached_buffer_uids[0] == surface->attached_buffer_uids[j]) { age = j; break; } n_damaged_rects += surface->damage_lists[j].len; } if (age == -1) { /* cannot find last time buffer+surface combo was used */ goto backup; } struct ext_interval *damage_array = malloc( sizeof(struct ext_interval) * (size_t)n_damaged_rects); if (!damage_array) { wp_error("Failed to allocate damage array"); goto backup; } int i = 0; // Translate damage stack into damage records for the fd buffer for (int k = 0; k < age; k++) { const struct damage_list *frame_damage = &surface->damage_lists[k]; for (int j = 0; j < frame_damage->len; j++) { int xlow, xhigh, ylow, yhigh; compute_damage_coordinates(&xlow, &xhigh, &ylow, &yhigh, &frame_damage->list[j], buf->shm_width, buf->shm_height, surface->transform, surface->scale); /* Clip the damage rectangle to the containing * buffer. 
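As a worked example (hypothetical values): with bpp = 4 and a rectangle spanning x in [4, 14) and y in [3, 5), the interval built below becomes start = shm_offset + 3 * shm_stride + 16, rep = 2, stride = shm_stride, and width = 40.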
*/ xlow = clamp(xlow, 0, buf->shm_width); xhigh = clamp(xhigh, 0, buf->shm_width); ylow = clamp(ylow, 0, buf->shm_height); yhigh = clamp(yhigh, 0, buf->shm_height); damage_array[i].start = buf->shm_offset + buf->shm_stride * ylow + bpp * xlow; damage_array[i].rep = yhigh - ylow; damage_array[i].stride = buf->shm_stride; damage_array[i].width = bpp * (xhigh - xlow); i++; } } merge_damage_records(&sfd->damage, i, damage_array, ctx->g->threads.diff_alignment_bits); free(damage_array); rotate_damage_lists(surface); return; backup: if (1) { /* damage the entire buffer (but no other part of the shm_pool) */ struct ext_interval full_surface_damage; full_surface_damage.start = buf->shm_offset; full_surface_damage.rep = 1; full_surface_damage.stride = 0; full_surface_damage.width = buf->shm_stride * buf->shm_height; merge_damage_records(&sfd->damage, 1, &full_surface_damage, ctx->g->threads.diff_alignment_bits); } rotate_damage_lists(surface); return; } static void append_damage_record(struct obj_wl_surface *surface, int32_t x, int32_t y, int32_t width, int32_t height, bool in_buffer_coordinates) { struct damage_list *current = &surface->damage_lists[0]; if (buf_ensure_size(current->len + 1, sizeof(struct damage_record), &current->size, (void **)&current->list) == -1) { wp_error("Failed to allocate space for damage list, dropping damage record"); return; } // A rectangle of the buffer was damaged, hence backing buffers // may be updated. struct damage_record *damage = &current->list[current->len++]; damage->buffer_coordinates = in_buffer_coordinates; damage->x = x; damage->y = y; damage->width = width; damage->height = height; } void do_wl_surface_req_damage(struct context *ctx, int32_t x, int32_t y, int32_t width, int32_t height) { if (ctx->on_display_side) { // The display side does not need to track the damage return; } append_damage_record((struct obj_wl_surface *)ctx->obj, x, y, width, height, false); } void do_wl_surface_req_damage_buffer(struct context *ctx, int32_t x, int32_t y, int32_t width, int32_t height) { if (ctx->on_display_side) { // The display side does not need to track the damage return; } append_damage_record((struct obj_wl_surface *)ctx->obj, x, y, width, height, true); } void do_wl_surface_req_set_buffer_transform( struct context *ctx, int32_t transform) { struct obj_wl_surface *surface = (struct obj_wl_surface *)ctx->obj; surface->transform = transform; } void do_wl_surface_req_set_buffer_scale(struct context *ctx, int32_t scale) { struct obj_wl_surface *surface = (struct obj_wl_surface *)ctx->obj; surface->scale = scale; } void do_wl_keyboard_evt_keymap( struct context *ctx, uint32_t format, int fd, uint32_t size) { size_t fdsz = 0; enum fdcat fdtype = get_fd_type(fd, &fdsz); if (fdtype == FDC_UNKNOWN) { fdtype = FDC_FILE; fdsz = (size_t)size; } if (fdtype != FDC_FILE || fdsz != size) { wp_error("keymap candidate fd %d was not file-like (type=%s), and with size=%zu did not match %u", fd, fdcat_to_str(fdtype), fdsz, size); return; } struct shadow_fd *sfd = translate_fd(&ctx->g->map, &ctx->g->render, &ctx->g->threads, fd, FDC_FILE, fdsz, NULL, false); if (!sfd) { wp_error("Failed to create shadow for keymap fd=%d", fd); return; } /* The keyboard file descriptor is never changed after being sent. * Mark the shadow structure as owned by the protocol, so it can be * automatically deleted as soon as the fd has been transferred.
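(Without an owner, the shadow would otherwise stay in the fd translation map indefinitely.)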
*/ sfd->has_owner = true; (void)format; } void do_wl_shm_req_create_pool( struct context *ctx, struct wp_object *id, int fd, int32_t size) { struct obj_wl_shm_pool *the_shm_pool = (struct obj_wl_shm_pool *)id; if (size <= 0) { wp_error("Ignoring attempt to create a wl_shm_pool with size %d", size); } size_t fdsz = 0; enum fdcat fdtype = get_fd_type(fd, &fdsz); if (fdtype == FDC_UNKNOWN) { fdtype = FDC_FILE; fdsz = (size_t)size; } /* It may be valid for the file descriptor size to be larger * than the immediately advertised size, since the call to * wl_shm.create_pool may be followed by wl_shm_pool.resize, * which then increases the size */ if (fdtype != FDC_FILE || (int32_t)fdsz < size) { wp_error("File type or size mismatch for fd %d with claimed: %s %s | %zu %u", fd, fdcat_to_str(fdtype), fdcat_to_str(FDC_FILE), fdsz, size); return; } struct shadow_fd *sfd = translate_fd(&ctx->g->map, &ctx->g->render, &ctx->g->threads, fd, FDC_FILE, fdsz, NULL, false); if (!sfd) { return; } the_shm_pool->owned_buffer = shadow_incref_protocol(sfd); /* We only send shm_pool updates when the buffers created from the * pool are used. Some applications make the pool >> actual buffers, * so this can reduce communication by a lot*/ reset_damage(&sfd->damage); } void do_wl_shm_pool_req_resize(struct context *ctx, int32_t size) { struct obj_wl_shm_pool *the_shm_pool = (struct obj_wl_shm_pool *)ctx->obj; if (!the_shm_pool->owned_buffer) { wp_error("Pool to be resized owns no buffer"); return; } if ((int32_t)the_shm_pool->owned_buffer->buffer_size >= size) { // The underlying buffer was already resized by the time // this protocol message was received return; } /* The display side will be updated already via buffer update msg */ if (!ctx->on_display_side) { extend_shm_shadow(&ctx->g->threads, the_shm_pool->owned_buffer, (size_t)size); } } void do_wl_shm_pool_req_create_buffer(struct context *ctx, struct wp_object *id, int32_t offset, int32_t width, int32_t height, int32_t stride, uint32_t format) { struct obj_wl_shm_pool *the_shm_pool = (struct obj_wl_shm_pool *)ctx->obj; struct obj_wl_buffer *the_buffer = (struct obj_wl_buffer *)id; if (!the_buffer) { wp_error("No buffer available"); return; } struct shadow_fd *sfd = the_shm_pool->owned_buffer; if (!sfd) { wp_error("Creating a wl_buffer from a pool that does not own an fd"); return; } the_buffer->type = BUF_SHM; the_buffer->shm_buffer = shadow_incref_protocol(the_shm_pool->owned_buffer); the_buffer->shm_offset = offset; the_buffer->shm_width = width; the_buffer->shm_height = height; the_buffer->shm_stride = stride; the_buffer->shm_format = format; the_buffer->unique_id = ctx->g->tracker.buffer_seqno++; } void do_zwlr_screencopy_frame_v1_evt_ready(struct context *ctx, uint32_t tv_sec_hi, uint32_t tv_sec_lo, uint32_t tv_nsec) { struct obj_wlr_screencopy_frame *frame = (struct obj_wlr_screencopy_frame *)ctx->obj; if (!frame->buffer_id) { wp_error("frame has no copy target"); return; } struct wp_object *obj = (struct wp_object *)tracker_get( ctx->tracker, frame->buffer_id); if (!obj) { wp_error("frame copy target no longer exists"); return; } if (obj->type != &intf_wl_buffer) { wp_error("frame copy target is not a wl_buffer"); return; } struct obj_wl_buffer *buffer = (struct obj_wl_buffer *)obj; struct shadow_fd *sfd = buffer->shm_buffer; if (!sfd) { wp_error("frame copy target does not own any buffers"); return; } if (sfd->type != FDC_FILE) { wp_error("frame copy target buffer file descriptor (RID=%d) was not file-like (type=%d)", sfd->remote_id, sfd->type); return; 
} if (buffer->type != BUF_SHM) { wp_error("screencopy not yet supported for non-shm-backed buffers"); return; } if (!ctx->on_display_side) { // The display side performs the update return; } sfd->is_dirty = true; /* The protocol guarantees that the buffer attributes match * those of the written frame */ const struct ext_interval interval = {.start = buffer->shm_offset, .width = buffer->shm_height * buffer->shm_stride, .stride = 0, .rep = 1}; merge_damage_records(&sfd->damage, 1, &interval, ctx->g->threads.diff_alignment_bits); (void)tv_sec_lo; (void)tv_sec_hi; (void)tv_nsec; } void do_zwlr_screencopy_frame_v1_req_copy( struct context *ctx, struct wp_object *buffer) { struct obj_wlr_screencopy_frame *frame = (struct obj_wlr_screencopy_frame *)ctx->obj; struct wp_object *buf = (struct wp_object *)buffer; if (buf->type != &intf_wl_buffer) { wp_error("frame copy destination is not a wl_buffer"); return; } frame->buffer_id = buf->obj_id; } static int64_t timespec_diff(struct timespec val, struct timespec sub) { // Overflows only with 68 year error, insignificant return (val.tv_sec - sub.tv_sec) * 1000000000LL + (val.tv_nsec - sub.tv_nsec); } void do_wp_presentation_evt_clock_id(struct context *ctx, uint32_t clk_id) { struct obj_wp_presentation *pres = (struct obj_wp_presentation *)ctx->obj; pres->clock_id = (int)clk_id; int reference_clock = CLOCK_REALTIME; if (pres->clock_id == reference_clock) { pres->clock_delta_nsec = 0; } else { /* Estimate the difference in baseline between clocks. * (TODO: Is there a syscall for this?) do median of 3? */ struct timespec t0, t1, t2; clock_gettime(pres->clock_id, &t0); clock_gettime(reference_clock, &t1); clock_gettime(pres->clock_id, &t2); int64_t diff1m0 = timespec_diff(t1, t0); int64_t diff2m1 = timespec_diff(t2, t1); pres->clock_delta_nsec = (diff1m0 - diff2m1) / 2; } } void do_wp_presentation_req_feedback(struct context *ctx, struct wp_object *surface, struct wp_object *callback) { struct obj_wp_presentation *pres = (struct obj_wp_presentation *)ctx->obj; struct obj_wp_presentation_feedback *feedback = (struct obj_wp_presentation_feedback *)callback; (void)surface; feedback->clock_delta_nsec = pres->clock_delta_nsec; } void do_wp_presentation_feedback_evt_presented(struct context *ctx, uint32_t tv_sec_hi, uint32_t tv_sec_lo, uint32_t tv_nsec, uint32_t refresh, uint32_t seq_hi, uint32_t seq_lo, uint32_t flags) { struct obj_wp_presentation_feedback *feedback = (struct obj_wp_presentation_feedback *)ctx->obj; (void)refresh; (void)seq_hi; (void)seq_lo; (void)flags; /* convert local to reference, on display side */ int dir = ctx->on_display_side ? 
1 : -1; uint64_t sec = tv_sec_lo + tv_sec_hi * 0x100000000uLL; int64_t nsec = tv_nsec; nsec += dir * feedback->clock_delta_nsec; sec = (uint64_t)((int64_t)sec + nsec / 1000000000LL); nsec = nsec % 1000000000L; if (nsec < 0) { nsec += 1000000000L; sec--; } // Size not changed, no other edits required ctx->message[2] = (uint32_t)(sec / 0x100000000uLL); ctx->message[3] = (uint32_t)(sec % 0x100000000uLL); ctx->message[4] = (uint32_t)nsec; } void do_wl_drm_evt_device(struct context *ctx, const char *name) { if (ctx->on_display_side) { /* Replacing the (remote) DRM device path with a local * render node path only is useful on the application * side */ return; } if (!name) { wp_debug("Device name provided via wl_drm::device was NULL"); return; } if (!ctx->g->render.drm_node_path) { /* While the render node should have been initialized in * wl_registry.global, setting this path, we still don't want * to crash even if this gets called by accident */ wp_debug("wl_drm::device, local render node not set up"); return; } int path_len = (int)strlen(ctx->g->render.drm_node_path); int message_bytes = 8 + 4 + 4 * ((path_len + 1 + 3) / 4); if (message_bytes > ctx->message_available_space) { wp_error("Not enough space to modify DRM device advertisement from '%s' to '%s'", name, ctx->g->render.drm_node_path); return; } ctx->message_length = message_bytes; uint32_t *payload = ctx->message + 2; memset(payload, 0, (size_t)message_bytes - 8); payload[0] = (uint32_t)path_len + 1; memcpy(ctx->message + 3, ctx->g->render.drm_node_path, (size_t)path_len); uint32_t meth = (ctx->message[1] << 16) >> 16; ctx->message[1] = message_header_2((uint32_t)message_bytes, meth); } void do_wl_drm_req_create_prime_buffer(struct context *ctx, struct wp_object *id, int name, int32_t width, int32_t height, uint32_t format, int32_t offset0, int32_t stride0, int32_t offset1, int32_t stride1, int32_t offset2, int32_t stride2) { struct obj_wl_buffer *buf = (struct obj_wl_buffer *)id; struct dmabuf_slice_data info = { .num_planes = 1, .width = (uint32_t)width, .height = (uint32_t)height, .modifier = DRM_FORMAT_MOD_INVALID, .format = format, .offsets = {(uint32_t)offset0, (uint32_t)offset1, (uint32_t)offset2, 0}, .strides = {(uint32_t)stride0, (uint32_t)stride1, (uint32_t)stride2, 0}, .using_planes = {true, false, false, false}, }; struct shadow_fd *sfd = translate_fd(&ctx->g->map, &ctx->g->render, &ctx->g->threads, name, FDC_DMABUF, 0, &info, false); if (!sfd) { return; } buf->type = BUF_DMA; buf->dmabuf_nplanes = 1; buf->dmabuf_buffers[0] = shadow_incref_protocol(sfd); buf->dmabuf_width = width; buf->dmabuf_height = height; buf->dmabuf_format = format; // handling multiple offsets (?) buf->dmabuf_offsets[0] = (uint32_t)offset0; buf->dmabuf_strides[0] = (uint32_t)stride0; buf->unique_id = ctx->g->tracker.buffer_seqno++; if (ctx->on_display_side) { /* the new dmabuf being created is not guaranteed to * have the original offset/stride parameters, so reset * them */ ctx->message[6] = 0; ctx->message[7] = dmabuf_get_stride(sfd->dmabuf_bo); } } static bool dmabuf_format_permitted( struct context *ctx, uint32_t format, uint64_t modifier) { if (ctx->g->config->only_linear_dmabuf) { /* MOD_INVALID is allowed because some drivers don't support * LINEAR. Every modern GPU+driver should be able to handle * LINEAR. Conditionally blocking INVALID (i.e, if LINEAR is an * option) can break things when the application-side Waypipe * instance does not support LINEAR. 
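* (For reference: DRM_FORMAT_MOD_LINEAR is the modifier with numeric
* value 0, which is why the check below compares against 0 directly.)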
*/ if (modifier != 0 && modifier != DRM_FORMAT_MOD_INVALID) { return false; } } /* Filter out formats which are not recognized, or multiplane */ if (get_shm_bytes_per_pixel(format) == -1) { return false; } /* Blacklist intel modifiers which introduce a second color control * surface; todo: add support for these, eventually */ if (modifier == (1uLL << 56 | 4) || modifier == (1uLL << 56 | 5) || modifier == (1uLL << 56 | 6) || modifier == (1uLL << 56 | 7) || modifier == (1uLL << 56 | 8)) { return false; } return true; } void do_zwp_linux_dmabuf_v1_evt_modifier(struct context *ctx, uint32_t format, uint32_t modifier_hi, uint32_t modifier_lo) { (void)format; uint64_t modifier = modifier_hi * 0x100000000uLL + modifier_lo; // Prevent all advertisements for dmabufs with modifiers if (!dmabuf_format_permitted(ctx, format, modifier)) { ctx->drop_this_msg = true; } } void do_zwp_linux_dmabuf_v1_req_get_default_feedback( struct context *ctx, struct wp_object *id) { // todo: use this to find the correct main device (void)ctx; (void)id; } void do_zwp_linux_dmabuf_v1_req_get_surface_feedback(struct context *ctx, struct wp_object *id, struct wp_object *surface) { (void)ctx; (void)id; (void)surface; } void do_zwp_linux_buffer_params_v1_evt_created( struct context *ctx, struct wp_object *buffer) { struct obj_zwp_linux_dmabuf_params *params = (struct obj_zwp_linux_dmabuf_params *)ctx->obj; struct obj_wl_buffer *buf = (struct obj_wl_buffer *)buffer; buf->type = BUF_DMA; buf->dmabuf_nplanes = params->nplanes; for (int i = 0; i < params->nplanes; i++) { if (!params->add[i].buffer) { wp_error("dmabuf backed wl_buffer plane %d was missing", i); continue; } // Move protocol reference from `params` to `buf` // (The params object can only be used to create one buffer, // so this ensures that if the params object leaks, the // shadow_fd does not leak as well.) 
buf->dmabuf_buffers[i] = params->add[i].buffer; buf->dmabuf_offsets[i] = params->add[i].offset; buf->dmabuf_strides[i] = params->add[i].stride; buf->dmabuf_modifiers[i] = params->add[i].modifier; params->add[i].buffer = NULL; } cleanup_dmabuf_params_fds(params); buf->dmabuf_flags = params->create_flags; buf->dmabuf_width = params->create_width; buf->dmabuf_height = params->create_height; buf->dmabuf_format = params->create_format; buf->unique_id = ctx->g->tracker.buffer_seqno++; } void do_zwp_linux_buffer_params_v1_req_add(struct context *ctx, int fd, uint32_t plane_idx, uint32_t offset, uint32_t stride, uint32_t modifier_hi, uint32_t modifier_lo) { struct obj_zwp_linux_dmabuf_params *params = (struct obj_zwp_linux_dmabuf_params *)ctx->obj; if (params->nplanes != (int)plane_idx) { wp_error("Expected sequentially assigned plane fds: got new_idx=%d != %d=nplanes", plane_idx, params->nplanes); return; } if (params->nplanes >= MAX_DMABUF_PLANES) { wp_error("Too many planes"); return; } params->nplanes++; params->add[plane_idx].fd = fd; params->add[plane_idx].offset = offset; params->add[plane_idx].stride = stride; params->add[plane_idx].modifier = modifier_lo + modifier_hi * 0x100000000uLL; // Only perform rearrangement on the client side, for now if (true) { ctx->drop_this_msg = true; } } static uint32_t append_zwp_linux_buffer_params_v1_req_add(uint32_t *msg, bool display_side, uint32_t obj_id, uint32_t plane_idx, uint32_t offset, uint32_t stride, uint32_t modifier_hi, uint32_t modifier_lo) { uint32_t msg_size = 2; if (msg) { msg[0] = obj_id; msg[msg_size++] = plane_idx; msg[msg_size++] = offset; msg[msg_size++] = stride; msg[msg_size++] = modifier_hi; msg[msg_size++] = modifier_lo; msg[1] = ((uint32_t)msg_size << 18) | 1; /* Tag the message as having one file descriptor */ if (!display_side) { msg[1] |= (uint32_t)(1 << 11); } } else { msg_size += 5; } return msg_size; } void do_zwp_linux_buffer_params_v1_req_create(struct context *ctx, int32_t width, int32_t height, uint32_t format, uint32_t flags) { struct obj_zwp_linux_dmabuf_params *params = (struct obj_zwp_linux_dmabuf_params *)ctx->obj; params->create_flags = flags; params->create_width = width; params->create_height = height; params->create_format = format; struct dmabuf_slice_data info = {.width = (uint32_t)width, .height = (uint32_t)height, .format = format, .num_planes = params->nplanes, .strides = {params->add[0].stride, params->add[1].stride, params->add[2].stride, params->add[3].stride}, .offsets = {params->add[0].offset, params->add[1].offset, params->add[2].offset, params->add[3].offset}}; bool all_same_fds = true; for (int i = 1; i < params->nplanes; i++) { if (params->add[i].fd != params->add[0].fd) { all_same_fds = false; } } for (int i = 0; i < params->nplanes; i++) { memset(info.using_planes, 0, sizeof(info.using_planes)); for (int k = 0; k < min(params->nplanes, 4); k++) { if (params->add[k].fd == params->add[i].fd) { info.using_planes[k] = 1; info.modifier = params->add[k].modifier; } } enum fdcat res_type = FDC_DMABUF; if (ctx->g->config->video_if_possible) { // TODO: multibuffer support if (all_same_fds && video_supports_dmabuf_format(format, info.modifier)) { res_type = ctx->on_display_side ? FDC_DMAVID_IW : FDC_DMAVID_IR; } } /* note: the``info` provided includes the incoming/as-if stride * data. 
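* As a hypothetical example: if planes 0 and 1 share one fd while
* plane 2 has its own, the iterations for i = 0 and i = 1 each set
* using_planes = {1, 1, 0, 0} and should resolve to the same
* shadow_fd, while i = 2 sets using_planes = {0, 0, 1, 0}.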
*/ struct shadow_fd *sfd = translate_fd(&ctx->g->map, &ctx->g->render, &ctx->g->threads, params->add[i].fd, res_type, 0, &info, false); if (!sfd) { continue; } if (ctx->on_display_side) { /* the new dmabuf being created is not guaranteed to * have the original offset/stride parameters, so reset * them */ params->add[i].offset = 0; params->add[i].stride = dmabuf_get_stride(sfd->dmabuf_bo); } /* increment for each extra time this fd will be sent */ if (sfd->has_owner) { shadow_incref_transfer(sfd); } // Convert the stored fds to buffer pointers now. params->add[i].buffer = shadow_incref_protocol(sfd); } if (true) { // Update file descriptors int nfds = params->nplanes; if (nfds > ctx->fds->size - ctx->fds->zone_end) { wp_error("Not enough space to reintroduce zwp_linux_buffer_params_v1.add message fds"); return; } int nmoved = (ctx->fds->zone_end - ctx->fds->zone_start); memmove(ctx->fds->data + ctx->fds->zone_start + nfds, ctx->fds->data + ctx->fds->zone_start, (size_t)nmoved * sizeof(int)); for (int i = 0; i < params->nplanes; i++) { ctx->fds->data[ctx->fds->zone_start + i] = params->add[i].fd; } /* We inject `nfds` new file descriptors, and advance the zone * of queued file descriptors forward, since the injected file * descriptors will not be used by the parser, but will still * be transported out. */ ctx->fds->zone_start += nfds; ctx->fds->zone_end += nfds; ctx->fds_changed = true; // Update data int net_length = ctx->message_length; uint32_t extra = 0; for (int i = 0; i < params->nplanes; i++) { extra += append_zwp_linux_buffer_params_v1_req_add(NULL, ctx->on_display_side, params->base.obj_id, (uint32_t)i, params->add[i].offset, params->add[i].stride, (uint32_t)(params->add[i].modifier >> 32), (uint32_t)(params->add[i].modifier)); } net_length += (int)(sizeof(uint32_t) * extra); if (net_length > ctx->message_available_space) { wp_error("Not enough space to reintroduce zwp_linux_buffer_params_v1.add message data"); return; } char *cmsg = (char *)ctx->message; memmove(cmsg + net_length - ctx->message_length, cmsg, (size_t)ctx->message_length); size_t start = 0; for (int i = 0; i < params->nplanes; i++) { uint32_t step = append_zwp_linux_buffer_params_v1_req_add( (uint32_t *)(cmsg + start), ctx->on_display_side, params->base.obj_id, (uint32_t)i, params->add[i].offset, params->add[i].stride, (uint32_t)(params->add[i].modifier >> 32), (uint32_t)(params->add[i].modifier)); start += step * sizeof(uint32_t); } wp_debug("Reintroducing add requests for zwp_linux_buffer_params_v1, going from %d to %d bytes", ctx->message_length, net_length); ctx->message_length = net_length; } // Avoid closing in destroy_wp_object for (int i = 0; i < MAX_DMABUF_PLANES; i++) { params->add[i].fd = -1; } } void do_zwp_linux_buffer_params_v1_req_create_immed(struct context *ctx, struct wp_object *buffer_id, int32_t width, int32_t height, uint32_t format, uint32_t flags) { // There isn't really that much unnecessary copying. 
Note that // 'create' may modify messages do_zwp_linux_buffer_params_v1_req_create( ctx, width, height, format, flags); do_zwp_linux_buffer_params_v1_evt_created(ctx, buffer_id); } void do_zwp_linux_dmabuf_feedback_v1_evt_done(struct context *ctx) { struct obj_zwp_linux_dmabuf_feedback *obj = (struct obj_zwp_linux_dmabuf_feedback *)ctx->obj; int worst_case_space = 2; for (size_t i = 0; i < obj->tranche_count; i++) { for (size_t j = 0; j < obj->tranches[i].tranche_size; j++) { uint16_t idx = obj->tranches[i].tranche[j]; if (idx >= obj->table_len) { wp_error("Tranche format index %u out of bounds [0,%zu)", idx, obj->table_len); return; } } worst_case_space += 2 + 3 + 3 + 3 + ((int)sizeof(dev_t) + 3) / 4 + ((int)obj->tranches[i].tranche_size + 1) / 2; } if (ctx->message_available_space < worst_case_space * 4) { wp_error("Not enough space to introduce all tranche fields"); return; } /* Inject messages for filtered tranche parameters here */ size_t m = 0; for (size_t i = 0; i < obj->tranche_count; i++) { bool empty = true; for (size_t j = 0; j < obj->tranches[i].tranche_size; j++) { uint16_t idx = obj->tranches[i].tranche[j]; if (dmabuf_format_permitted(ctx, obj->table[idx].format, obj->table[idx].modifier)) { empty = false; break; } } if (empty) { /* discard tranche, has no entries */ continue; } size_t s; s = 3 + ((sizeof(dev_t) + 3) / 4); ctx->message[m] = obj->base.obj_id; ctx->message[m + 1] = message_header_2( 4 * (uint32_t)s, 4); // tranche_target_device ctx->message[m + 2] = sizeof(dev_t); memcpy(&ctx->message[m + 3], &obj->main_device, sizeof(dev_t)); m += s; s = 3; ctx->message[m] = obj->base.obj_id; ctx->message[m + 1] = message_header_2( 4 * (uint32_t)s, 6); // tranche_flags ctx->message[m + 2] = obj->tranches[i].flags; m += s; size_t w = 0; uint16_t *fmts = (uint16_t *)&ctx->message[m + 3]; for (size_t j = 0; j < obj->tranches[i].tranche_size; j++) { uint16_t idx = obj->tranches[i].tranche[j]; if (dmabuf_format_permitted(ctx, obj->table[idx].format, obj->table[idx].modifier)) { fmts[w++] = idx; } } s = 3 + ((w + 1) / 2); ctx->message[m] = obj->base.obj_id; ctx->message[m + 1] = message_header_2( 4 * (uint32_t)s, 5); // tranche_formats ctx->message[m + 2] = (uint32_t)(2 * w); m += s; s = 2; ctx->message[m] = obj->base.obj_id; ctx->message[m + 1] = message_header_2( 4 * (uint32_t)s, 3); // tranche_done m += s; } ctx->message[m] = obj->base.obj_id; ctx->message[m + 1] = message_header_2(8, 0); // done m += 2; ctx->message_length = (int)(m * 4); for (size_t i = 0; i < obj->tranche_count; i++) { free(obj->tranches[i].tranche); } free(obj->tranches); obj->tranches = NULL; obj->tranche_count = 0; } void do_zwp_linux_dmabuf_feedback_v1_evt_format_table( struct context *ctx, int fd, uint32_t size) { size_t fdsz = 0; enum fdcat fdtype = get_fd_type(fd, &fdsz); if (fdtype == FDC_UNKNOWN) { fdtype = FDC_FILE; fdsz = (size_t)size; } if (fdtype != FDC_FILE || fdsz != size) { wp_error("format table fd %d was not file-like (type=%s), and size=%zu did not match %u", fd, fdcat_to_str(fdtype), fdsz, size); return; } struct shadow_fd *sfd = translate_fd(&ctx->g->map, &ctx->g->render, &ctx->g->threads, fd, FDC_FILE, size, NULL, false); if (!sfd) { return; } /* Mark the shadow structure as owned by the protocol, but do not * increase the protocol refcount, so that as soon as it gets * transferred it is destroyed */ sfd->has_owner = true; struct obj_zwp_linux_dmabuf_feedback *obj = (struct obj_zwp_linux_dmabuf_feedback *)ctx->obj; free(obj->table); obj->table_len = sfd->buffer_size / sizeof(struct format_table_entry); 
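/* Per the linux-dmabuf-v1 format table layout, each entry is 16 bytes:
 * a 32-bit format code, 4 bytes of padding, and a 64-bit modifier;
 * struct format_table_entry is assumed to match this layout exactly. */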
obj->table = calloc(obj->table_len, sizeof(struct format_table_entry)); if (!obj->table) { wp_error("failed to allocate copy of dmabuf feedback format table"); return; } memcpy(obj->table, sfd->mem_local, obj->table_len * sizeof(struct format_table_entry)); } void do_zwp_linux_dmabuf_feedback_v1_evt_main_device(struct context *ctx, uint32_t device_count, const uint8_t *device_val) { struct obj_zwp_linux_dmabuf_feedback *obj = (struct obj_zwp_linux_dmabuf_feedback *)ctx->obj; if ((size_t)device_count != sizeof(dev_t)) { wp_error("Invalid dev_t size %zu, should be %zu", (size_t)device_count, sizeof(dev_t)); return; } if (ctx->on_display_side) { memcpy(&obj->main_device, device_val, sizeof(dev_t)); } else { // adopt the main device from the render fd being used struct stat fsdata; memset(&fsdata, 0, sizeof(fsdata)); int ret = fstat(ctx->g->render.drm_fd, &fsdata); if (ret == -1) { wp_error("Failed to get render device info"); return; } obj->main_device = fsdata.st_rdev; } /* todo: add support for changing render devices in waypipe */ } void do_zwp_linux_dmabuf_feedback_v1_evt_tranche_done(struct context *ctx) { struct obj_zwp_linux_dmabuf_feedback *obj = (struct obj_zwp_linux_dmabuf_feedback *)ctx->obj; if (obj->main_device != obj->current_device && ctx->on_display_side) { /* Filter out/ignore all tranches for anything but the main * device. */ return; } void *next = realloc(obj->tranches, (obj->tranche_count + 1) * sizeof(*obj->tranches)); if (!next) { wp_error("Failed to resize tranche list"); return; } obj->tranches = next; obj->tranches[obj->tranche_count] = obj->current; obj->tranche_count++; /* it is unclear whether flags/device get reset in a valid use of the * protocol, but assuming they do not costs nothing. */ // todo: what about the tranche? obj->current.tranche = NULL; obj->current.tranche_size = 0; /* discard message, will be resent later if needed */ ctx->drop_this_msg = true; } void do_zwp_linux_dmabuf_feedback_v1_evt_tranche_target_device( struct context *ctx, uint32_t device_count, const uint8_t *device_val) { struct obj_zwp_linux_dmabuf_feedback *obj = (struct obj_zwp_linux_dmabuf_feedback *)ctx->obj; if ((size_t)device_count != sizeof(dev_t)) { wp_error("Invalid dev_t size %zu, should be %zu", (size_t)device_count, sizeof(dev_t)); return; } memcpy(&obj->current_device, device_val, sizeof(dev_t)); /* discard message, will be resent later if needed */ ctx->drop_this_msg = true; } void do_zwp_linux_dmabuf_feedback_v1_evt_tranche_formats(struct context *ctx, uint32_t indices_count, const uint8_t *indices_val) { struct obj_zwp_linux_dmabuf_feedback *obj = (struct obj_zwp_linux_dmabuf_feedback *)ctx->obj; size_t num_indices = (size_t)indices_count / 2; free(obj->current.tranche); obj->current.tranche_size = num_indices; obj->current.tranche = calloc(num_indices, sizeof(uint16_t)); if (!obj->current.tranche) { wp_error("failed to allocate for tranche"); return; } // todo: translation to formats+modifiers should be performed // immediately, in case format table changes between tranches memcpy(obj->current.tranche, indices_val, num_indices * sizeof(uint16_t)); /* discard message, will be resent later if needed */ ctx->drop_this_msg = true; } void do_zwp_linux_dmabuf_feedback_v1_evt_tranche_flags( struct context *ctx, uint32_t flags) { struct obj_zwp_linux_dmabuf_feedback *obj = (struct obj_zwp_linux_dmabuf_feedback *)ctx->obj; obj->current.flags = flags; /* discard message, will be resent later if needed */ ctx->drop_this_msg = true; } void do_zwlr_export_dmabuf_frame_v1_evt_frame(struct context 
*ctx, uint32_t width, uint32_t height, uint32_t offset_x, uint32_t offset_y, uint32_t buffer_flags, uint32_t flags, uint32_t format, uint32_t mod_high, uint32_t mod_low, uint32_t num_objects) { struct obj_wlr_export_dmabuf_frame *frame = (struct obj_wlr_export_dmabuf_frame *)ctx->obj; frame->width = width; frame->height = height; (void)offset_x; (void)offset_y; // the 'transient' flag could be cleared, technically (void)flags; (void)buffer_flags; frame->format = format; frame->modifier = mod_high * 0x100000000uLL + mod_low; frame->nobjects = num_objects; if (frame->nobjects > MAX_DMABUF_PLANES) { wp_error("Too many (%u) frame objects required", frame->nobjects); frame->nobjects = MAX_DMABUF_PLANES; } } void do_zwlr_export_dmabuf_frame_v1_evt_object(struct context *ctx, uint32_t index, int fd, uint32_t size, uint32_t offset, uint32_t stride, uint32_t plane_index) { struct obj_wlr_export_dmabuf_frame *frame = (struct obj_wlr_export_dmabuf_frame *)ctx->obj; if (index >= frame->nobjects) { wp_error("Cannot add frame object with index %u >= %u", index, frame->nobjects); return; } if (frame->objects[index].buffer) { wp_error("Cannot add frame object with index %u, already used", index); return; } frame->objects[index].offset = offset; frame->objects[index].stride = stride; // for lack of a test program, we assume all dmabufs passed in // here are distinct, and hence need no 'multiplane' adjustments struct dmabuf_slice_data info = {.width = frame->width, .height = frame->height, .format = frame->format, .num_planes = (int32_t)frame->nobjects, .strides = {frame->objects[0].stride, frame->objects[1].stride, frame->objects[2].stride, frame->objects[3].stride}, .offsets = {frame->objects[0].offset, frame->objects[1].offset, frame->objects[2].offset, frame->objects[3].offset}, .using_planes = {false, false, false, false}, .modifier = frame->modifier}; info.using_planes[index] = true; struct shadow_fd *sfd = translate_fd(&ctx->g->map, &ctx->g->render, &ctx->g->threads, fd, FDC_DMABUF, 0, &info, false); if (!sfd) { return; } if (sfd->buffer_size < size) { wp_error("Frame object %u has a dmabuf with less (%u) than the advertised (%u) size", index, (uint32_t)sfd->buffer_size, size); } // Convert the stored fds to buffer pointers now. frame->objects[index].buffer = shadow_incref_protocol(sfd); // in practice, index+1? (void)plane_index; } void do_zwlr_export_dmabuf_frame_v1_evt_ready(struct context *ctx, uint32_t tv_sec_hi, uint32_t tv_sec_lo, uint32_t tv_nsec) { struct obj_wlr_export_dmabuf_frame *frame = (struct obj_wlr_export_dmabuf_frame *)ctx->obj; if (!ctx->on_display_side) { /* The client side does not update the buffer */ return; } (void)tv_sec_hi; (void)tv_sec_lo; (void)tv_nsec; for (uint32_t i = 0; i < frame->nobjects; i++) { struct shadow_fd *sfd = frame->objects[i].buffer; if (sfd) { sfd->is_dirty = true; damage_everything(&sfd->damage); } } } static void translate_data_transfer_fd(struct context *ctx, int32_t fd) { /* treat the fd as a one-way pipe, even if it is e.g. a file or * socketpair, with additional properties. 
The fd being sent * around should be, according to the protocol, only written into and * closed */ (void)translate_fd(&ctx->g->map, &ctx->g->render, &ctx->g->threads, fd, FDC_PIPE, 0, NULL, true); } void do_gtk_primary_selection_offer_req_receive( struct context *ctx, const char *mime_type, int fd) { translate_data_transfer_fd(ctx, fd); (void)mime_type; } void do_gtk_primary_selection_source_evt_send( struct context *ctx, const char *mime_type, int fd) { translate_data_transfer_fd(ctx, fd); (void)mime_type; } void do_zwp_primary_selection_offer_v1_req_receive( struct context *ctx, const char *mime_type, int fd) { translate_data_transfer_fd(ctx, fd); (void)mime_type; } void do_zwp_primary_selection_source_v1_evt_send( struct context *ctx, const char *mime_type, int fd) { translate_data_transfer_fd(ctx, fd); (void)mime_type; } void do_zwlr_data_control_offer_v1_req_receive( struct context *ctx, const char *mime_type, int fd) { translate_data_transfer_fd(ctx, fd); (void)mime_type; } void do_zwlr_data_control_source_v1_evt_send( struct context *ctx, const char *mime_type, int fd) { translate_data_transfer_fd(ctx, fd); (void)mime_type; } void do_ext_data_control_offer_v1_req_receive( struct context *ctx, const char *mime_type, int fd) { translate_data_transfer_fd(ctx, fd); (void)mime_type; } void do_ext_data_control_source_v1_evt_send( struct context *ctx, const char *mime_type, int fd) { translate_data_transfer_fd(ctx, fd); (void)mime_type; } void do_wl_data_offer_req_receive( struct context *ctx, const char *mime_type, int fd) { translate_data_transfer_fd(ctx, fd); (void)mime_type; } void do_wl_data_source_evt_send( struct context *ctx, const char *mime_type, int fd) { translate_data_transfer_fd(ctx, fd); (void)mime_type; } void do_zwlr_gamma_control_v1_req_set_gamma(struct context *ctx, int fd) { size_t fdsz = 0; enum fdcat fdtype = get_fd_type(fd, &fdsz); if (fdtype == FDC_UNKNOWN) { fdtype = FDC_FILE; /* fdsz fallback? 
*/ } // TODO: use file size from earlier in the protocol, because some // systems may send file-like objects not supporting fstat if (fdtype != FDC_FILE) { wp_error("gamma ramp fd %d was not file-like (type=%s)", fd, fdcat_to_str(fdtype)); return; } struct shadow_fd *sfd = translate_fd(&ctx->g->map, &ctx->g->render, &ctx->g->threads, fd, FDC_FILE, fdsz, NULL, false); if (!sfd) { return; } /* Mark the shadow structure as owned by the protocol, but do not * increase the protocol refcount, so that as soon as it gets * transferred it is destroyed */ sfd->has_owner = true; } #define MSGNO_XDG_TOPLEVEL_REQ_SET_TITLE 2 void do_xdg_toplevel_req_set_title(struct context *ctx, const char *str) { if (!ctx->g->config->title_prefix) { return; } size_t prefix_len = strlen(ctx->g->config->title_prefix); if (4 + (int)prefix_len >= ctx->message_available_space) { wp_error("Not enough space (%d left, at most %d needed) to prepend title prefix", ctx->message_available_space, 4 + prefix_len); return; } size_t title_len = strlen(str); size_t str_part = alignz(prefix_len + title_len + 1, 4); ctx->message[1] = message_header_2((uint32_t)str_part + 12, MSGNO_XDG_TOPLEVEL_REQ_SET_TITLE); ctx->message[2] = (uint32_t)(prefix_len + title_len + 1); char *v = (char *)&ctx->message[3]; // Using memmove, as str=&ctx->message[3] memmove(v + prefix_len, v, title_len); memset(v + prefix_len + title_len, 0, str_part - prefix_len - title_len); memcpy(v, ctx->g->config->title_prefix, prefix_len); ctx->message_length = 12 + (int)str_part; } const struct wp_interface *the_display_interface = &intf_wl_display; waypipe-v0.10.5/waypipe-c/interval.c000066400000000000000000000207571507071711100173420ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
*/ #include "shadow.h" #include <stdlib.h> #include <string.h> struct merge_stack_elem { int offset; int count; }; struct merge_stack { struct interval *data; int size; int count; }; static int stream_merge(int a_count, const struct interval *__restrict__ a_list, int b_count, const struct interval *__restrict__ b_list, struct interval *__restrict__ c_list, int margin) { int ia = 0, ib = 0, ic = 0; int cursor = INT32_MIN; (void)a_count; (void)b_count; /* the loop exit condition appears to be faster than checking * ia<a_count&&ib<b_count */ while (true) { struct interval sel; if (a_list[ia].start < b_list[ib].start) { sel = a_list[ia++]; } else { sel = b_list[ib++]; } if (sel.start == INT32_MAX) { break; } int new_cursor = sel.end > cursor ? sel.end : cursor; if (sel.start >= cursor + margin) { c_list[ic++] = sel; } else { c_list[ic - 1].end = new_cursor; } cursor = new_cursor; } /* add end sentinel */ c_list[ic] = (struct interval){.start = INT32_MAX, .end = INT32_MAX}; return ic; } static int fix_merge_stack_property(int size, struct merge_stack_elem *stack, struct merge_stack *base, struct merge_stack *temp, int merge_margin, bool force_compact, int *absorbed) { while (size > 1) { struct merge_stack_elem top = stack[size - 1]; struct merge_stack_elem nxt = stack[size - 2]; if (2 * top.count <= nxt.count && !force_compact) { return size; } if (buf_ensure_size(top.count + nxt.count + 1, sizeof(struct interval), &temp->size, (void **)&temp->data) == -1) { wp_error("Failed to resize a merge buffer, some damage intervals may be lost"); return size; } int xs = stream_merge(top.count, &base->data[top.offset], nxt.count, &base->data[nxt.offset], temp->data, merge_margin); /* There are more complicated/multi-buffer alternatives with * fewer memory copies, but this is already <20% of stream * merge time */ memcpy(&base->data[nxt.offset], temp->data, (size_t)(xs + 1) * sizeof(struct interval)); base->count = nxt.offset + xs + 1; stack[size - 1] = (struct merge_stack_elem){ .offset = 0, .count = 0}; stack[size - 2] = (struct merge_stack_elem){ .offset = nxt.offset, .count = xs}; size--; *absorbed += (top.count + nxt.count - xs); } return size; } static int unpack_ext_interval(struct interval *vec, const struct ext_interval e, int alignment_bits) { int iw = 0; int last_end = INT32_MIN; for (int ir = 0; ir < e.rep; ir++) { int start = e.start + ir * e.stride; int end = start + e.width; start = (start >> alignment_bits) << alignment_bits; end = ((end + (1 << alignment_bits) - 1) >> alignment_bits) << alignment_bits; if (start > last_end) { vec[iw].start = start; vec[iw].end = end; last_end = end; iw++; } else { vec[iw - 1].end = end; last_end = end; } } /* end sentinel */ vec[iw] = (struct interval){.start = INT32_MAX, .end = INT32_MAX}; return iw; } /* By writing a mergesort by hand, we can detect duplicates early. * * TODO: optimize output with run-length-encoded segments * TODO: explicit time limiting/adaptive margin! */ void merge_mergesort(const int old_count, struct interval *old_list, const int new_count, const struct ext_interval *const new_list, int *dst_count, struct interval **dst_list, int merge_margin, int alignment_bits) { /* Stack-based mergesort: the buffer at position `i+1` * should be <= 1/2 times the size of the buffer at * position `i`; buffers will be merged * to maintain this invariant */ // TODO: improve memory management! 
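/* Illustrative run with hypothetical interval counts: if the stack
 * holds runs of 9 and 4 intervals and a new run of 3 is pushed, then
 * 2*3 > 4 forces a merge of the top two runs; if that merge leaves,
 * say, 6 disjoint intervals, 2*6 > 9 forces a second merge before the
 * next input is processed. */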
struct merge_stack_elem substack[32]; int substack_size = 0; memset(substack, 0, sizeof(substack)); struct merge_stack base = {.data = NULL, .count = 0, .size = 0}; struct merge_stack temp = {.data = NULL, .count = 0, .size = 0}; if (old_count) { /* seed the stack with the previous damage * interval list, * including trailing terminator */ base.data = old_list; base.size = old_count + 1; base.count = old_count + 1; substack[substack_size++] = (struct merge_stack_elem){ .offset = 0, .count = old_count}; } int src_count = 0, absorbed = 0; for (int jn = 0; jn < new_count; jn++) { struct ext_interval e = new_list[jn]; /* ignore invalid intervals -- also, if e.start * is close to INT32_MIN, the stream merge * breaks */ if (e.width <= 0 || e.rep <= 0 || e.start < 0) { continue; } /* To limit CPU time, if it is very likely that * an interval would be merged anyway, then * replace it with its containing interval. */ int remaining = src_count - absorbed; bool force_combine = (absorbed > 30000) || 10 * remaining < src_count; int64_t intv_end = e.start + e.stride * (int64_t)(e.rep - 1) + e.width; if (intv_end >= INT32_MAX) { /* overflow protection */ e.width = INT32_MAX - 1 - e.start; e.rep = 1; } /* Remove internal gaps are smaller than the * margin and hence * would need to be merged away anyway. */ if (e.width > e.stride - merge_margin || force_combine) { e.width = e.stride * (e.rep - 1) + e.width; e.rep = 1; } if (buf_ensure_size(base.count + e.rep + 1, sizeof(struct interval), &base.size, (void **)&base.data) == -1) { wp_error("Failed to resize a merge buffer, some damage intervals may be lost"); continue; } struct interval *vec = &base.data[base.count]; int iw = unpack_ext_interval(vec, e, alignment_bits); src_count += iw; substack[substack_size] = (struct merge_stack_elem){ .offset = base.count, .count = iw}; substack_size++; base.count += iw + 1; /* merge down the stack as far as possible */ substack_size = fix_merge_stack_property(substack_size, substack, &base, &temp, merge_margin, false, &absorbed); } /* collapse the stack into a final interval */ fix_merge_stack_property(substack_size, substack, &base, &temp, merge_margin, true, &absorbed); free(temp.data); *dst_list = base.data; *dst_count = substack[0].count; } /* This value must be larger than 8, or diffs will explode */ #define MERGE_MARGIN 256 void merge_damage_records(struct damage *base, int nintervals, const struct ext_interval *const new_list, int alignment_bits) { for (int i = 0; i < nintervals; i++) { base->acc_damage_stat += new_list[i].width * new_list[i].rep; base->acc_count++; } // Fast return if there is nothing to do if (base->damage == DAMAGE_EVERYTHING || nintervals <= 0) { return; } if (nintervals >= (1 << 30) || base->ndamage_intvs >= (1 << 30)) { /* avoid overflow in merge routine; also would be cheaper to * damage everything at this point; */ damage_everything(base); return; } merge_mergesort(base->ndamage_intvs, base->damage, nintervals, new_list, &base->ndamage_intvs, &base->damage, MERGE_MARGIN, alignment_bits); } void reset_damage(struct damage *base) { if (base->damage != DAMAGE_EVERYTHING) { free(base->damage); } base->damage = NULL; base->ndamage_intvs = 0; base->acc_damage_stat = 0; base->acc_count = 0; } void damage_everything(struct damage *base) { if (base->damage != DAMAGE_EVERYTHING) { free(base->damage); } base->damage = DAMAGE_EVERYTHING; base->ndamage_intvs = 0; } waypipe-v0.10.5/waypipe-c/interval.h000066400000000000000000000065321507071711100173420ustar00rootroot00000000000000/* * Copyright © 2019 
Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #ifndef WAYPIPE_INTERVAL_H #define WAYPIPE_INTERVAL_H #include <stdint.h> /** A slight modification of the standard 'damage' rectangle * formulation, written to be agnostic of whatever buffers * underlie the system. * * [start,start+width),[start+stride,start+stride+width), * ... [start+(rep-1)*stride,start+(rep-1)*stride+width) */ struct ext_interval { int32_t start; /** Subinterval width */ int32_t width; /** Number of distinct subinterval start positions. For a single * interval, this is one. */ int32_t rep; /** Spacing between start positions, should be > width, unless * there is only one subinterval, in which case the value shouldn't * matter and is conventionally set to 0. */ int32_t stride; }; /** [start, end). (This is better than {start,width}, since width computations * are rare and trivial, while merging code branches frequently off of * endpoints) */ struct interval { int32_t start; int32_t end; }; #define DAMAGE_EVERYTHING ((struct interval *)-1) /** Interval-based damage tracking. If damage is NULL, there is * no recorded damage. If damage is DAMAGE_EVERYTHING, the entire * region should be updated. If ndamage_intvs > 0, then * damage points to an array of struct interval objects. */ struct damage { struct interval *damage; int ndamage_intvs; int64_t acc_damage_stat; int acc_count; }; /** Given an array of extended intervals, update the base damage structure * so that it contains a reasonably small disjoint set of extended intervals * which contains the old base set and the new set. Before merging, all * interval boundaries will be rounded to the next multiple of * `1 << alignment_bits`. 
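* For example, with alignment_bits = 3 (8-byte alignment) an interval
* covering bytes [5, 21) is expanded to [0, 24) before merging.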
*/ void merge_damage_records(struct damage *base, int nintervals, const struct ext_interval *const new_list, int alignment_bits); /** Set damage to empty */ void reset_damage(struct damage *base); /** Expand damage to cover everything */ void damage_everything(struct damage *base); /* internal merge driver, made visible for testing */ void merge_mergesort(const int old_count, struct interval *old_list, const int new_count, const struct ext_interval *const new_list, int *dst_count, struct interval **dst_list, int merge_margin, int alignment_bits); #endif // WAYPIPE_INTERVAL_H waypipe-v0.10.5/waypipe-c/kernel.c000066400000000000000000000216561507071711100167750ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "kernel.h" #include "interval.h" #include "util.h" #include #include #include #include #include static size_t run_interval_diff_C(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ idiff, size_t i, const size_t i_end) { const uint64_t *__restrict__ mod = imod; uint64_t *__restrict__ base = ibase; uint64_t *__restrict__ diff = (uint64_t *__restrict__)idiff; /* we paper over gaps of a given window size, to avoid fine * grained context switches */ const size_t i_start = i; size_t dc = 0; uint64_t changed_val = i < i_end ? mod[i] : 0; uint64_t base_val = i < i_end ? base[i] : 0; i++; // Alternating scanners, ending with a mispredict each. bool clear_exit = false; while (i < i_end) { while (changed_val == base_val && i < i_end) { changed_val = mod[i]; base_val = base[i]; i++; } if (i == i_end) { /* it's possible that the last value actually; * see exit block */ clear_exit = true; break; } uint32_t *ctrl_blocks = (uint32_t *)&diff[dc++]; ctrl_blocks[0] = (uint32_t)((i - 1) * 2); diff[dc++] = changed_val; base[i - 1] = changed_val; // changed_val != base_val, difference occurs at early // index size_t nskip = 0; // we could only sentinel this assuming a tiny window // size while (i < i_end && nskip <= (size_t)diff_window_size / 2) { base_val = base[i]; changed_val = mod[i]; base[i] = changed_val; i++; diff[dc++] = changed_val; nskip++; nskip *= (base_val == changed_val); } dc -= nskip; ctrl_blocks[1] = (uint32_t)((i - nskip) * 2); /* our sentinel, at worst, causes overcopy by one. 
this * is fine */ } /* If only the last block changed */ if ((clear_exit || i_start + 1 == i_end) && changed_val != base_val) { uint32_t *ctrl_blocks = (uint32_t *)&diff[dc++]; ctrl_blocks[0] = (uint32_t)(i_end - 1) * 2; ctrl_blocks[1] = (uint32_t)i_end * 2; diff[dc++] = changed_val; base[i_end - 1] = changed_val; } return dc * 2; } #ifdef HAVE_AVX512F static bool avx512f_available(void) { return __builtin_cpu_supports("avx512f"); } size_t run_interval_diff_avx512f(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ idiff, size_t i, const size_t i_end); #endif #ifdef HAVE_AVX2 static bool avx2_available(void) { return __builtin_cpu_supports("avx2"); } size_t run_interval_diff_avx2(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ idiff, size_t i, const size_t i_end); #endif #ifdef HAVE_NEON bool neon_available(void); // in platform.c size_t run_interval_diff_neon(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ idiff, size_t i, const size_t i_end); #endif #ifdef HAVE_SSE3 static bool sse3_available(void) { return __builtin_cpu_supports("sse3"); } size_t run_interval_diff_sse3(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ idiff, size_t i, const size_t i_end); #endif interval_diff_fn_t get_diff_function(enum diff_type type, int *alignment_bits) { #ifdef HAVE_AVX512F if ((type == DIFF_FASTEST || type == DIFF_AVX512F) && avx512f_available()) { *alignment_bits = 6; return run_interval_diff_avx512f; } #endif #ifdef HAVE_AVX2 if ((type == DIFF_FASTEST || type == DIFF_AVX2) && avx2_available()) { *alignment_bits = 6; return run_interval_diff_avx2; } #endif #ifdef HAVE_NEON if ((type == DIFF_FASTEST || type == DIFF_NEON) && neon_available()) { *alignment_bits = 4; return run_interval_diff_neon; } #endif #ifdef HAVE_SSE3 if ((type == DIFF_FASTEST || type == DIFF_SSE3) && sse3_available()) { *alignment_bits = 5; return run_interval_diff_sse3; } #endif if ((type == DIFF_FASTEST || type == DIFF_C)) { *alignment_bits = 3; return run_interval_diff_C; } *alignment_bits = 0; return NULL; } /** Construct the main portion of a diff. The provided arguments should * be validated beforehand. 
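* (For orientation: the diff written here is a sequence of spans, each
* being two 32-bit control words that give a [start, end) range in
* 32-bit units, followed by end - start words of replacement data;
* apply_diff() below replays such spans into both of its targets.)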
All intervals, as well as the base/changed data * pointers, should be aligned to the alignment size associated with the * interval diff function */ size_t construct_diff_core(interval_diff_fn_t idiff_fn, int alignment_bits, const struct interval *__restrict__ damaged_intervals, int n_intervals, void *__restrict__ base, const void *__restrict__ changed, void *__restrict__ diff) { uint32_t *diff_blocks = (uint32_t *)diff; size_t cursor = 0; for (int i = 0; i < n_intervals; i++) { struct interval e = damaged_intervals[i]; size_t bend = (size_t)e.end >> alignment_bits; size_t bstart = (size_t)e.start >> alignment_bits; cursor += (*idiff_fn)(24, changed, base, diff_blocks + cursor, bstart, bend); } return cursor * sizeof(uint32_t); } size_t construct_diff_trailing(size_t size, int alignment_bits, char *__restrict__ base, const char *__restrict__ changed, char *__restrict__ diff) { size_t alignment = 1u << alignment_bits; size_t ntrailing = size % alignment; size_t offset = size - ntrailing; bool tail_change = false; if (ntrailing > 0) { for (size_t i = 0; i < ntrailing; i++) { tail_change |= base[offset + i] != changed[offset + i]; } } if (tail_change) { for (size_t i = 0; i < ntrailing; i++) { diff[i] = changed[offset + i]; base[offset + i] = changed[offset + i]; } return ntrailing; } return 0; } void apply_diff(size_t size, char *__restrict__ target1, char *__restrict__ target2, size_t diffsize, size_t ntrailing, const char *__restrict__ diff) { size_t nblocks = size / sizeof(uint32_t); size_t ndiffblocks = diffsize / sizeof(uint32_t); uint32_t *__restrict__ t1_blocks = (uint32_t *)target1; uint32_t *__restrict__ t2_blocks = (uint32_t *)target2; uint32_t *__restrict__ diff_blocks = (uint32_t *)diff; for (size_t i = 0; i < ndiffblocks;) { size_t nfrom = (size_t)diff_blocks[i]; size_t nto = (size_t)diff_blocks[i + 1]; size_t span = nto - nfrom; if (nto > nblocks || nfrom >= nto || i + (nto - nfrom) >= ndiffblocks) { wp_error("Invalid copy range [%zu,%zu) > %zu=nblocks or [%zu,%zu) > %zu=ndiffblocks", nfrom, nto, nblocks, i + 1, i + 1 + span, ndiffblocks); return; } memcpy(t1_blocks + nfrom, diff_blocks + i + 2, sizeof(uint32_t) * span); memcpy(t2_blocks + nfrom, diff_blocks + i + 2, sizeof(uint32_t) * span); i += span + 2; } if (ntrailing > 0) { size_t offset = size - ntrailing; for (size_t i = 0; i < ntrailing; i++) { target1[offset + i] = diff[diffsize + i]; target2[offset + i] = diff[diffsize + i]; } } } void stride_shifted_copy(char *dest, const char *src, size_t src_start, size_t copy_length, size_t row_length, size_t src_stride, size_t dst_stride) { size_t src_end = src_start + copy_length; size_t lrow = src_start / src_stride; size_t trow = src_end / src_stride; /* special case: inside a segment */ if (lrow == trow) { size_t cstart = src_start - lrow * src_stride; if (cstart < row_length) { size_t cend = src_end - trow * src_stride; cend = cend > row_length ? row_length : cend; memcpy(dest + dst_stride * lrow + cstart, src + src_start, cend - cstart); } return; } /* leading segment */ if (src_start > lrow * src_stride) { size_t igap = src_start - lrow * src_stride; if (igap < row_length) { memcpy(dest + dst_stride * lrow + igap, src + src_start, row_length - igap); } } /* main body */ size_t srow = (src_start + src_stride - 1) / src_stride; for (size_t i = srow; i < trow; i++) { memcpy(dest + dst_stride * i, src + src_stride * i, row_length); } /* trailing segment */ if (src_end > trow * src_stride) { size_t local = src_end - trow * src_stride; local = local > row_length ? 
row_length : local; memcpy(dest + dst_stride * trow, src + trow * src_stride, local); } } waypipe-v0.10.5/waypipe-c/kernel.h000066400000000000000000000064051507071711100167750ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #ifndef WAYPIPE_KERNEL_H #define WAYPIPE_KERNEL_H #include #include struct interval; typedef size_t (*interval_diff_fn_t)(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ diff, size_t i, const size_t i_end); enum diff_type { DIFF_FASTEST, DIFF_AVX512F, DIFF_AVX2, DIFF_SSE3, DIFF_NEON, DIFF_C, }; /** Returns a function pointer to a diff construction kernel, and indicates * the alignment of the data which is to be passed in */ interval_diff_fn_t get_diff_function(enum diff_type type, int *alignment_bits); /** Given intervals aligned to 1< #include #include #include #ifdef __x86_64__ static inline int tzcnt(uint64_t v) { return (int)_tzcnt_u64(v); } #else static inline int tzcnt(uint64_t v) { return v ? __builtin_ctzll(v) : 64; } #endif #ifdef __x86_64__ static inline int lzcnt(uint64_t v) { return (int)_lzcnt_u64(v); } #else static inline int lzcnt(uint64_t v) { return v ? 
__builtin_clzll(v) : 64; } #endif size_t run_interval_diff_avx2(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ diff, size_t i, const size_t i_end) { const __m256i *__restrict__ mod = imod; __m256i *__restrict__ base = ibase; size_t dc = 0; while (1) { /* Loop: no changes */ uint32_t *ctrl_blocks = &diff[dc]; dc += 2; int trailing_unchanged = 0; for (; i < i_end; i++) { __m256i m0 = _mm256_load_si256(&mod[2 * i]); __m256i m1 = _mm256_load_si256(&mod[2 * i + 1]); __m256i b0 = _mm256_load_si256(&base[2 * i]); __m256i b1 = _mm256_load_si256(&base[2 * i + 1]); __m256i eq0 = _mm256_cmpeq_epi32(m0, b0); __m256i eq1 = _mm256_cmpeq_epi32(m1, b1); /* It's very hard to tell which loop exit method is * better, since the routine is typically bandwidth * limited */ #if 1 uint32_t mask0 = (uint32_t)_mm256_movemask_epi8(eq0); uint32_t mask1 = (uint32_t)_mm256_movemask_epi8(eq1); uint64_t mask = mask0 + mask1 * 0x100000000uLL; if (~mask) { #else __m256i andv = _mm256_and_si256(eq0, eq1); if (_mm256_testz_si256(andv, _mm256_set1_epi8(-1))) { uint32_t mask0 = (uint32_t)_mm256_movemask_epi8( eq0); uint32_t mask1 = (uint32_t)_mm256_movemask_epi8( eq1); uint64_t mask = mask0 + mask1 * 0x100000000uLL; #endif _mm256_store_si256(&base[2 * i], m0); _mm256_store_si256(&base[2 * i + 1], m1); /* Write the changed bytes, starting at the * first modified term, * and set the n_unchanged counter */ size_t ncom = (size_t)tzcnt(~mask) >> 2; size_t block_shift = (ncom & 7); uint64_t esmask = 0xffffffffuLL << (block_shift * 4); __m128i halfsize = _mm_set_epi64x( 0uLL, (long long)esmask); __m256i estoremask = _mm256_cvtepi8_epi64(halfsize); _mm256_maskstore_epi32( (int *)&diff[dc - block_shift], estoremask, ncom < 8 ? m0 : m1); if (ncom < 8) { _mm256_storeu_si256( (__m256i *)&diff[dc + 8 - block_shift], m1); } dc += 16 - ncom; trailing_unchanged = lzcnt(~mask) >> 2; ctrl_blocks[0] = (uint32_t)(16 * i + ncom); i++; if (i >= i_end) { /* Last block, hence will not enter copy * loop */ ctrl_blocks[1] = (uint32_t)(16 * i); dc += 2; } break; } } if (i >= i_end) { dc -= 2; break; } /* Loop: until no changes for DIFF_WINDOW +/- 4 spaces */ for (; i < i_end; i++) { __m256i m0 = _mm256_load_si256(&mod[2 * i]); __m256i m1 = _mm256_load_si256(&mod[2 * i + 1]); __m256i b0 = _mm256_load_si256(&base[2 * i]); __m256i b1 = _mm256_load_si256(&base[2 * i + 1]); __m256i eq0 = _mm256_cmpeq_epi32(m0, b0); __m256i eq1 = _mm256_cmpeq_epi32(m1, b1); uint32_t mask0 = (uint32_t)_mm256_movemask_epi8(eq0); uint32_t mask1 = (uint32_t)_mm256_movemask_epi8(eq1); uint64_t mask = mask0 + mask1 * 0x100000000uLL; /* Reset trailing counter if anything changed */ bool clear = ~mask == 0; trailing_unchanged = clear * trailing_unchanged + (lzcnt(~mask) >> 2); _mm256_storeu_si256((__m256i *)&diff[dc], m0); _mm256_storeu_si256((__m256i *)&diff[dc + 8], m1); dc += 16; if (trailing_unchanged > diff_window_size) { i++; break; } _mm256_store_si256(&base[2 * i], m0); _mm256_store_si256(&base[2 * i + 1], m1); } /* Write coda */ dc -= (size_t)trailing_unchanged; ctrl_blocks[1] = (uint32_t)(16 * i - (size_t)trailing_unchanged); if (i >= i_end) { break; } } return dc; } waypipe-v0.10.5/waypipe-c/kernel_avx512f.c000066400000000000000000000064221507071711100202430ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without 
restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include #include #include #include size_t run_interval_diff_avx512f(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ diff, size_t i, const size_t i_end) { const __m512i *mod = imod; __m512i *base = ibase; size_t dc = 0; while (1) { /* Loop: no changes */ uint32_t *ctrl_blocks = (uint32_t *)&diff[dc]; dc += 2; int trailing_unchanged = 0; for (; i < i_end; i++) { __m512i m = _mm512_load_si512(&mod[i]); __m512i b = _mm512_load_si512(&base[i]); uint32_t mask = (uint32_t)_mm512_cmpeq_epi32_mask(m, b); if (mask != 0xffff) { _mm512_store_si512(&base[i], m); size_t ncom = (size_t)_tzcnt_u32( ~(unsigned int)mask); __mmask16 storemask = (__mmask16)(0xffffu << ncom); #if 0 __m512i v = _mm512_maskz_compress_epi32( storemask, m); _mm512_storeu_si512(&diff[dc], v); #else _mm512_mask_storeu_epi32( &diff[dc - ncom], storemask, m); #endif dc += 16 - ncom; trailing_unchanged = (int)_lzcnt_u32(~mask & 0xffff) - 16; ctrl_blocks[0] = (uint32_t)(16 * i + ncom); i++; if (i >= i_end) { /* Last block, hence will not enter copy * loop */ ctrl_blocks[1] = (uint32_t)(16 * i); dc += 2; } break; } } if (i >= i_end) { dc -= 2; break; } /* Loop: until an entire window is clear */ for (; i < i_end; i++) { __m512i m = _mm512_load_si512(&mod[i]); __m512i b = _mm512_load_si512(&base[i]); uint32_t mask = (uint32_t)_mm512_cmpeq_epi32_mask(m, b); /* Reset trailing counter if anything changed */ uint32_t amask = ~(mask << 16); int clear = (mask == 0xffff) ? 
1 : 0; trailing_unchanged = clear * trailing_unchanged + (int)_lzcnt_u32(amask); _mm512_storeu_si512(&diff[dc], m); dc += 16; if (trailing_unchanged > diff_window_size) { i++; break; } _mm512_store_si512(&base[i], m); } /* Write coda */ dc -= (size_t)trailing_unchanged; ctrl_blocks[1] = (uint32_t)(16 * i - (size_t)trailing_unchanged); if (i >= i_end) { break; } } return dc; } waypipe-v0.10.5/waypipe-c/kernel_neon.c000066400000000000000000000070131507071711100200030ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include #include #include #include #include size_t run_interval_diff_neon(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ diff, size_t i, const size_t i_end) { const uint64_t *__restrict__ mod = imod; uint64_t *__restrict__ base = ibase; size_t dc = 0; while (1) { uint32_t *ctrl_blocks = &diff[dc]; dc += 2; /* Loop: no changes */ size_t trailing_unchanged = 0; for (; i < i_end; i++) { /* Q: does it make sense to unroll by 2, cutting branch * count in half? 
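* (Speculative: a 2x unroll would halve the branch count but double
* the data discarded when a change lands mid-pair; as with the AVX2
* kernel, this loop is usually memory-bandwidth limited, so any win
* would have to be measured rather than assumed.)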
*/ uint64x2_t b = vld1q_u64(&base[2 * i]); uint64x2_t m = vld1q_u64(&mod[2 * i]); uint64x2_t x = veorq_u64(m, b); uint32x2_t o = vqmovn_u64(x); uint64_t n = vget_lane_u64(vreinterpret_u64_u32(o), 0); if (n) { vst1q_u64(&base[2 * i], m); bool lead_empty = vget_lane_u32(o, 0) == 0; /* vtbl only works on u64 chunks, so we branch * instead */ if (lead_empty) { vst1_u64((uint64_t *)&diff[dc], vget_high_u64(m)); trailing_unchanged = 0; ctrl_blocks[0] = (uint32_t)(4 * i + 2); dc += 2; } else { vst1q_u64((uint64_t *)&diff[dc], m); trailing_unchanged = 2 * (vget_lane_u32(o, 1) == 0); ctrl_blocks[0] = (uint32_t)(4 * i); dc += 4; } trailing_unchanged = 0; i++; if (i >= i_end) { /* Last block, hence will not enter copy * loop */ ctrl_blocks[1] = (uint32_t)(4 * i); dc += 2; } break; } } if (i >= i_end) { dc -= 2; break; } /* Main copy loop */ for (; i < i_end; i++) { uint64x2_t m = vld1q_u64(&mod[2 * i]); uint64x2_t b = vld1q_u64(&base[2 * i]); uint64x2_t x = veorq_u64(m, b); uint32x2_t o = vqmovn_u64(x); uint64_t n = vget_lane_u64(vreinterpret_u64_u32(o), 0); /* Reset trailing counter if anything changed */ trailing_unchanged = trailing_unchanged * (n == 0); size_t nt = (size_t)((vget_lane_u32(o, 1) == 0) * (1 + (vget_lane_u32(o, 0) == 0))); trailing_unchanged += 2 * nt; vst1q_u64((uint64_t *)&diff[dc], m); dc += 4; if (trailing_unchanged > (size_t)diff_window_size) { i++; break; } vst1q_u64(&base[2 * i], m); } /* Write coda */ dc -= trailing_unchanged; ctrl_blocks[1] = (uint32_t)(4 * i - trailing_unchanged); if (i >= i_end) { break; } } return dc; } waypipe-v0.10.5/waypipe-c/kernel_sse3.c000066400000000000000000000101311507071711100177140ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
*/ #include #include #include #include // sse #include // sse2 #include // sse3 size_t run_interval_diff_sse3(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ diff, size_t i, const size_t i_end) { const __m128i *__restrict__ mod = imod; __m128i *__restrict__ base = ibase; size_t dc = 0; while (1) { /* Loop: no changes */ uint32_t *ctrl_blocks = (uint32_t *)&diff[dc]; dc += 2; int trailing_unchanged = 0; for (; i < i_end; i++) { __m128i b0 = _mm_load_si128(&base[2 * i]); __m128i b1 = _mm_load_si128(&base[2 * i + 1]); __m128i m0 = _mm_load_si128(&mod[2 * i]); __m128i m1 = _mm_load_si128(&mod[2 * i + 1]); /* pxor + ptest + branch could be faster, depending on * compiler choices */ __m128i eq0 = _mm_cmpeq_epi32(m0, b0); __m128i eq1 = _mm_cmpeq_epi32(m1, b1); uint32_t mask = (uint32_t)_mm_movemask_epi8(eq0); mask |= ((uint32_t)_mm_movemask_epi8(eq1)) << 16; if (mask != 0xffffffff) { _mm_storeu_si128(&base[2 * i], m0); _mm_storeu_si128(&base[2 * i + 1], m1); /* Write the changed bytes, starting at the * first modified term, and set the unchanged * counter. */ size_t ncom = (size_t)__builtin_ctz(~mask) >> 2; union { __m128i s[2]; uint32_t v[8]; } tmp; tmp.s[0] = m0; tmp.s[1] = m1; for (size_t z = ncom; z < 8; z++) { diff[dc++] = tmp.v[z]; } trailing_unchanged = __builtin_clz(~mask) >> 2; ctrl_blocks[0] = (uint32_t)(8 * i + ncom); i++; if (i >= i_end) { /* Last block, hence will not enter copy * loop */ ctrl_blocks[1] = (uint32_t)(8 * i); dc += 2; } break; } } if (i >= i_end) { dc -= 2; break; } /* Loop: until no changes for DIFF_WINDOW +/- 4 spaces */ for (; i < i_end; i++) { __m128i b0 = _mm_load_si128(&base[2 * i]); __m128i b1 = _mm_load_si128(&base[2 * i + 1]); __m128i m0 = _mm_load_si128(&mod[2 * i]); __m128i m1 = _mm_load_si128(&mod[2 * i + 1]); __m128i eq0 = _mm_cmpeq_epi32(m0, b0); __m128i eq1 = _mm_cmpeq_epi32(m1, b1); uint32_t mask = (uint32_t)_mm_movemask_epi8(eq0); mask |= ((uint32_t)_mm_movemask_epi8(eq1)) << 16; bool clear = mask == 0xffffffff; /* Because clz is undefined when mask=0, extend */ uint64_t ext_mask = ((uint64_t)mask) << 32; int nleading = __builtin_clzll(~ext_mask); trailing_unchanged = clear * (trailing_unchanged + 8) + (!clear) * (nleading >> 2); _mm_storeu_si128((__m128i *)&diff[dc], m0); _mm_storeu_si128((__m128i *)&diff[dc + 4], m1); dc += 8; if (trailing_unchanged > diff_window_size) { i++; break; } _mm_storeu_si128(&base[2 * i], m0); _mm_storeu_si128(&base[2 * i + 1], m1); } /* Write coda */ dc -= (size_t)trailing_unchanged; ctrl_blocks[1] = (uint32_t)(8 * i - (size_t)trailing_unchanged); if (i >= i_end) { break; } } return dc; } waypipe-v0.10.5/waypipe-c/main.h000066400000000000000000000056221507071711100164410ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #ifndef WAYPIPE_MAIN_H #define WAYPIPE_MAIN_H #include "parsing.h" #include "shadow.h" #include "util.h" struct main_config { const char *drm_node; int n_worker_threads; enum compression_mode compression; int compression_level; bool no_gpu; bool only_linear_dmabuf; bool video_if_possible; int video_bpf; enum video_coding_fmt video_fmt; bool prefer_hwvideo; bool old_video_mode; bool vsock; uint32_t vsock_cid; uint32_t vsock_port; bool vsock_to_host; const char *title_prefix; const char *secctx_app_id; }; struct globals { const struct main_config *config; struct fd_translation_map map; struct render_data render; struct message_tracker tracker; struct thread_pool threads; }; /** Main processing loop * * chanfd: connected socket to channel * progfd: connected socket to Wayland program * linkfd: optional socket providing new chanfds. (-1 means not provided) * * Returns either EXIT_SUCCESS or EXIT_FAILURE (if exit caused by an error.) */ int main_interface_loop(int chanfd, int progfd, int linkfd, const struct main_config *config, bool display_side); /** Act as a Wayland server */ int run_server(int cwd_fd, struct socket_path socket_path, const char *display_suffix, const char *control_path, const struct main_config *config, bool oneshot, bool unlink_at_end, char *const app_argv[], bool login_shell_if_backup); /** Act as a Wayland client */ int run_client(int cwd_fd, const char *sock_folder_name, int sock_folder_fd, const char *sock_filename, const struct main_config *config, bool oneshot, const char *wayland_socket, pid_t eol_pid, int channelsock); /** Run benchmarking tool; n_worker_threads defined as with \ref main_config */ int run_bench(float bandwidth_mBps, uint32_t test_size, int n_worker_threads); #endif // WAYPIPE_MAIN_H waypipe-v0.10.5/waypipe-c/mainloop.c000066400000000000000000001361741507071711100173350ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
*/ #include "main.h" #include #include #include #include #include #include #include #include #include #include // The maximum number of fds libwayland can recvmsg at once #define MAX_LIBWAY_FDS 28 static ssize_t iovec_read( int conn, char *buf, size_t buflen, struct int_window *fds) { char cmsgdata[(CMSG_LEN(MAX_LIBWAY_FDS * sizeof(int32_t)))] = {0}; struct iovec the_iovec; the_iovec.iov_len = buflen; the_iovec.iov_base = buf; struct msghdr msg = {0}; msg.msg_name = NULL; msg.msg_namelen = 0; msg.msg_iov = &the_iovec; msg.msg_iovlen = 1; msg.msg_control = &cmsgdata; msg.msg_controllen = sizeof(cmsgdata); msg.msg_flags = 0; ssize_t ret = recvmsg(conn, &msg, 0); if (msg.msg_flags & MSG_CTRUNC) { wp_error("Warning, control data was truncated in recvmsg"); } // Read cmsg struct cmsghdr *header = CMSG_FIRSTHDR(&msg); while (header) { struct cmsghdr *nxt_hdr = CMSG_NXTHDR(&msg, header); if (header->cmsg_level != SOL_SOCKET || header->cmsg_type != SCM_RIGHTS) { header = nxt_hdr; continue; } int *data = (int *)CMSG_DATA(header); int nf = (int)((header->cmsg_len - CMSG_LEN(0)) / sizeof(int)); if (buf_ensure_size(fds->zone_end + nf, sizeof(int), &fds->size, (void **)&fds->data) == -1) { wp_error("Failed to allocate space for new fds"); errno = ENOMEM; ret = -1; } else { for (int i = 0; i < nf; i++) { fds->data[fds->zone_end++] = data[i]; } } header = nxt_hdr; } return ret; } static ssize_t iovec_write(int conn, const char *buf, size_t buflen, const int *fds, int numfds, int *nfds_written) { bool overflow = numfds > MAX_LIBWAY_FDS; struct iovec the_iovec; the_iovec.iov_len = overflow ? 1 : buflen; the_iovec.iov_base = (char *)buf; struct msghdr msg; msg.msg_name = NULL; msg.msg_namelen = 0; msg.msg_iov = &the_iovec; msg.msg_iovlen = 1; msg.msg_control = NULL; msg.msg_controllen = 0; msg.msg_flags = 0; union { char buf[CMSG_SPACE(sizeof(int) * MAX_LIBWAY_FDS)]; struct cmsghdr align; } uc; memset(uc.buf, 0, sizeof(uc.buf)); if (numfds > 0) { msg.msg_control = uc.buf; msg.msg_controllen = sizeof(uc.buf); struct cmsghdr *frst = CMSG_FIRSTHDR(&msg); frst->cmsg_level = SOL_SOCKET; frst->cmsg_type = SCM_RIGHTS; *nfds_written = min(numfds, MAX_LIBWAY_FDS); size_t nwritten = (size_t)(*nfds_written); memcpy(CMSG_DATA(frst), fds, nwritten * sizeof(int)); for (int i = 0; i < numfds; i++) { int flags = fcntl(fds[i], F_GETFL, 0); if (flags == -1 && errno == EBADF) { wp_error("Writing invalid fd %d", fds[i]); } } frst->cmsg_len = CMSG_LEN(nwritten * sizeof(int)); msg.msg_controllen = CMSG_SPACE(nwritten * sizeof(int)); wp_debug("Writing %d fds to cmsg data", *nfds_written); } else { *nfds_written = 0; } ssize_t ret = sendmsg(conn, &msg, 0); return ret; } static int translate_fds(struct fd_translation_map *map, struct render_data *render, struct thread_pool *threads, int nfds, const int fds[], int ids[]) { for (int i = 0; i < nfds; i++) { struct shadow_fd *sfd = get_shadow_for_local_fd(map, fds[i]); if (!sfd) { /* Autodetect type + create shadow fd */ size_t fdsz = 0; enum fdcat fdtype = get_fd_type(fds[i], &fdsz); sfd = translate_fd(map, render, threads, fds[i], fdtype, fdsz, NULL, false); } if (sfd) { ids[i] = sfd->remote_id; } else { return -1; } } return 0; } /** Given a list of global ids, and an up-to-date translation map, produce local * file descriptors */ static void untranslate_ids(struct fd_translation_map *map, int nids, const int *ids, int *fds) { for (int i = 0; i < nids; i++) { struct shadow_fd *shadow = get_shadow_for_rid(map, ids[i]); if (!shadow) { wp_error("Could not untranslate remote id %d in 
map. Application will probably crash.", ids[i]); fds[i] = -1; } else { fds[i] = shadow->fd_local; } } } enum wm_state { WM_WAITING_FOR_PROGRAM, WM_WAITING_FOR_CHANNEL, WM_TERMINAL }; /** This state corresponds to the in-progress transfer from the program * (compositor or application) and its pipes/buffers to the channel. */ struct way_msg_state { enum wm_state state; /** Window zone contains the message data which has been read * but not yet parsed/copied to proto_write */ struct char_window proto_read; /** Buffer of complete protocol messages to be written to the channel */ struct char_window proto_write; /** Queue of fds to be used by protocol parser */ struct int_window fds; /** Individual messages, to be sent out via writev and deleted on * acknowledgement */ struct transfer_queue transfers; /** bytes written in this cycle, for debug */ int total_written; /** Maximum chunk size to writev at once*/ int max_iov; /** Transfers to send after the compute queue is empty */ int ntrailing; struct iovec trailing[3]; /** Statically allocated message acknowledgement messages; due * to the way they are updated out of order, at most two are needed */ struct wmsg_ack ack_msgs[2]; }; enum cm_state { CM_WAITING_FOR_PROGRAM, CM_WAITING_FOR_CHANNEL, CM_TERMINAL }; /** This state corresponds to the in-progress transfer from the channel * to the program and the buffers/pipes on which will be written. */ struct chan_msg_state { enum cm_state state; /** Edited protocol data which is being written to the program */ struct char_window proto_write; /**< FDs that should immediately be transferred to the program */ struct int_window transf_fds; /**< FD queue for the protocol parser */ struct int_window proto_fds; #define RECV_GOAL_READ_SIZE 131072 char *recv_buffer; // ring-like buffer for message data size_t recv_size; size_t recv_start; // (recv_buffer+rev_start) should be a message header size_t recv_end; // last byte read from channel, always >=recv_start int recv_unhandled_messages; // number of messages to parse }; /** State used by both forward and reverse messages */ struct cross_state { /* Which was the last received message received from the other * application, for which acknowledgement was sent? */ uint32_t last_acked_msgno; /* Which was the last message number received from the other * application? */ uint32_t last_received_msgno; /* What was the highest number message received from the other * application? (matches last_received, unless we needed a restart */ uint32_t newest_received_msgno; /* Which was the last message number sent to the other application which * was acknowledged by that side? 
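* (These counters are free-running and may wrap; comparisons go
* through msgno_gt(), which presumably implements serial-number
* ordering along the lines of (int32_t)(a - b) > 0, so wraparound
* at 2^32 is harmless.)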
*/ uint32_t last_confirmed_msgno; }; static int interpret_chanmsg(struct chan_msg_state *cmsg, struct cross_state *cxs, struct globals *g, bool display_side, char *packet) { uint32_t size_and_type = *(uint32_t *)packet; size_t unpadded_size = transfer_size(size_and_type); enum wmsg_type type = transfer_type(size_and_type); if (type == WMSG_CLOSE) { /* No new messages from the channel to the program will be * allowed after this */ cmsg->state = CM_TERMINAL; wp_debug("Other side has closed"); if (unpadded_size < 8) { return ERR_FATAL; } int32_t code = ((int32_t *)packet)[1]; if (code == ERR_FATAL) { return ERR_FATAL; } else if (code == ERR_NOMEM) { return ERR_NOMEM; } else { return ERR_STOP; } } else if (type == WMSG_RESTART) { struct wmsg_restart *ackm = (struct wmsg_restart *)packet; wp_debug("Received WMSG_RESTART: remote last saw ack %d (we last recvd %d, acked %d)", ackm->last_ack_received, cxs->last_received_msgno, cxs->last_acked_msgno); cxs->last_received_msgno = ackm->last_ack_received; return 0; } else if (type == WMSG_ACK_NBLOCKS) { struct wmsg_ack *ackm = (struct wmsg_ack *)packet; wp_debug("Received WMSG_ACK_NBLOCKS: remote recvd %u", ackm->messages_received); if (msgno_gt(ackm->messages_received, cxs->last_confirmed_msgno)) { cxs->last_confirmed_msgno = ackm->messages_received; } return 0; } else { cxs->last_received_msgno++; if (msgno_gt(cxs->newest_received_msgno, cxs->last_received_msgno)) { /* Skip packet, as we already received it */ wp_debug("Ignoring replayed message %d (newest=%d)", cxs->last_received_msgno, cxs->newest_received_msgno); return 0; } cxs->newest_received_msgno = cxs->last_received_msgno; } if (type == WMSG_INJECT_RIDS) { const int32_t *fds = &((const int32_t *)packet)[1]; int nfds = (int)((unpadded_size - sizeof(uint32_t)) / sizeof(int32_t)); wp_debug("Received WMSG_INJECT_RIDS with %d fds", nfds); if (buf_ensure_size(nfds, sizeof(int), &cmsg->transf_fds.size, (void **)&cmsg->transf_fds.data) == -1) { wp_error("Allocation failure for fd transfer queue, expect a crash"); return ERR_NOMEM; } /* Reset transfer buffer; all fds in here were already sent */ cmsg->transf_fds.zone_start = 0; cmsg->transf_fds.zone_end = nfds; untranslate_ids(&g->map, nfds, fds, cmsg->transf_fds.data); if (nfds > 0) { if (buf_ensure_size(cmsg->proto_fds.zone_end + nfds, sizeof(int), &cmsg->proto_fds.size, (void **)&cmsg->proto_fds.data) == -1) { wp_error("Allocation failure for fd protocol queue"); return ERR_NOMEM; } // Append the new file descriptors to the parsing queue memcpy(cmsg->proto_fds.data + cmsg->proto_fds.zone_end, cmsg->transf_fds.data, sizeof(int) * (size_t)nfds); cmsg->proto_fds.zone_end += nfds; } return 0; } else if (type == WMSG_PROTOCOL) { /* While by construction, the provided message buffer should be * aligned with individual message boundaries, it is not * guaranteed that all file descriptors provided will be used by * the messages. This makes fd handling more complicated. 
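* For example, an fd carried by a WMSG_INJECT_RIDS message may only be
* consumed by a protocol message (say, wl_shm.create_pool) that arrives
* in a later WMSG_PROTOCOL chunk, so any fds left over after parsing
* must remain queued in proto_fds instead of being discarded here.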
*/ int protosize = (int)(unpadded_size - sizeof(uint32_t)); wp_debug("Received WMSG_PROTOCOL with %d bytes of messages", protosize); // TODO: have message editing routines ensure size, so // that this limit can be tighter if (buf_ensure_size(protosize + 1024, 1, &cmsg->proto_write.size, (void **)&cmsg->proto_write.data) == -1) { wp_error("Allocation failure for message workspace"); return ERR_NOMEM; } cmsg->proto_write.zone_end = 0; cmsg->proto_write.zone_start = 0; struct char_window src; src.data = packet + sizeof(uint32_t); src.zone_start = 0; src.zone_end = protosize; src.size = protosize; parse_and_prune_messages(g, display_side, display_side, &src, &cmsg->proto_write, &cmsg->proto_fds); if (src.zone_start != src.zone_end) { wp_error("did not expect partial messages over channel, only parsed %d/%d bytes", src.zone_start, src.zone_end); return ERR_FATAL; } /* Update file descriptor queue */ if (cmsg->proto_fds.zone_end > cmsg->proto_fds.zone_start) { memmove(cmsg->proto_fds.data, cmsg->proto_fds.data + cmsg->proto_fds.zone_start, sizeof(int) * (size_t)(cmsg->proto_fds.zone_end - cmsg->proto_fds.zone_start)); cmsg->proto_fds.zone_end -= cmsg->proto_fds.zone_start; } return 0; } else { if (unpadded_size < sizeof(struct wmsg_basic)) { wp_error("Message is too small to contain header+RID, %d bytes", unpadded_size); return ERR_FATAL; } const struct wmsg_basic *op_header = (const struct wmsg_basic *)packet; struct bytebuf msg = { .data = packet, .size = unpadded_size, }; wp_debug("Received %s for RID=%d (len %d)", wmsg_type_to_str(type), op_header->remote_id, unpadded_size); return apply_update(&g->map, &g->threads, &g->render, type, op_header->remote_id, &msg); } } static int advance_chanmsg_chanread(struct chan_msg_state *cmsg, struct cross_state *cxs, int chanfd, bool display_side, struct globals *g) { /* Set up the read operation to be able to read a minimum number of bytes, * wrapping around as early as overlap conditions permit */ if (cmsg->recv_unhandled_messages == 0) { struct iovec vec[2]; memset(vec, 0, sizeof(vec)); int nvec; if (cmsg->recv_start == cmsg->recv_end) { /* A fresh packet */ cmsg->recv_start = 0; cmsg->recv_end = 0; nvec = 1; vec[0].iov_base = cmsg->recv_buffer; vec[0].iov_len = (size_t)(cmsg->recv_size / 2); } else if (cmsg->recv_end < cmsg->recv_start + sizeof(uint32_t)) { /* Didn't quite finish reading the header */ int recvsz = (int)cmsg->recv_size; if (buf_ensure_size((int)cmsg->recv_end + RECV_GOAL_READ_SIZE, 1, &recvsz, (void **)&cmsg->recv_buffer) == -1) { wp_error("Allocation failure, resizing receive buffer failed"); return ERR_NOMEM; } cmsg->recv_size = (size_t)recvsz; nvec = 1; vec[0].iov_base = cmsg->recv_buffer + cmsg->recv_end; vec[0].iov_len = RECV_GOAL_READ_SIZE; } else { /* Continuing an old packet; space made available last * time */ uint32_t *header = (uint32_t *)&cmsg->recv_buffer[cmsg ->recv_start]; size_t sz = alignz(transfer_size(*header), 4); size_t read_end = cmsg->recv_start + sz; bool wraparound = cmsg->recv_start >= RECV_GOAL_READ_SIZE; if (!wraparound) { read_end = maxu(read_end, cmsg->recv_end + RECV_GOAL_READ_SIZE); } int recvsz = (int)cmsg->recv_size; if (buf_ensure_size((int)read_end, 1, &recvsz, (void **)&cmsg->recv_buffer) == -1) { wp_error("Allocation failure, resizing receive buffer failed"); return ERR_NOMEM; } cmsg->recv_size = (size_t)recvsz; nvec = 1; vec[0].iov_base = cmsg->recv_buffer + cmsg->recv_end; vec[0].iov_len = read_end - cmsg->recv_end; if (wraparound) { nvec = 2; vec[1].iov_base = cmsg->recv_buffer; vec[1].iov_len =
cmsg->recv_start; } } ssize_t r = readv(chanfd, vec, nvec); if (r == -1 && (errno == EWOULDBLOCK || errno == EAGAIN)) { wp_debug("Read would block"); return 0; } else if (r == 0 || (r == -1 && errno == ECONNRESET)) { wp_debug("Channel connection closed"); return ERR_DISCONN; } else if (r == -1) { wp_error("chanfd read failure: %s", strerror(errno)); return ERR_FATAL; } else { if (nvec == 2 && (size_t)r >= vec[0].iov_len) { /* Complete parsing this message */ int cm_ret = interpret_chanmsg(cmsg, cxs, g, display_side, cmsg->recv_buffer + cmsg->recv_start); if (cm_ret < 0) { return cm_ret; } cmsg->recv_start = 0; cmsg->recv_end = (size_t)r - vec[0].iov_len; if (cmsg->proto_write.zone_start < cmsg->proto_write.zone_end) { goto next_stage; } } else { cmsg->recv_end += (size_t)r; } } } /* Recount unhandled messages */ cmsg->recv_unhandled_messages = 0; size_t i = cmsg->recv_start; while (i + sizeof(uint32_t) <= cmsg->recv_end) { uint32_t *header = (uint32_t *)&cmsg->recv_buffer[i]; size_t sz = alignz(transfer_size(*header), 4); if (sz == 0) { wp_error("Encountered malformed zero size packet"); return ERR_FATAL; } i += sz; if (i > cmsg->recv_end) { break; } cmsg->recv_unhandled_messages++; } while (cmsg->recv_unhandled_messages > 0) { char *packet_start = &cmsg->recv_buffer[cmsg->recv_start]; uint32_t *header = (uint32_t *)packet_start; size_t sz = transfer_size(*header); int cm_ret = interpret_chanmsg( cmsg, cxs, g, display_side, packet_start); if (cm_ret < 0) { return cm_ret; } cmsg->recv_start += alignz(sz, 4); cmsg->recv_unhandled_messages--; if (cmsg->proto_write.zone_start < cmsg->proto_write.zone_end) { goto next_stage; } } return 0; next_stage: /* When protocol data was sent, switch to trying to write the protocol * data to its socket, before trying to parse any other message */ cmsg->state = CM_WAITING_FOR_PROGRAM; DTRACE_PROBE(waypipe, chanmsg_program_wait); return 0; } static int advance_chanmsg_progwrite(struct chan_msg_state *cmsg, int progfd, bool display_side, struct globals *g) { const char *progdesc = display_side ? 
"compositor" : "application"; // Write as much as possible while (cmsg->proto_write.zone_start < cmsg->proto_write.zone_end) { ssize_t wc = iovec_write(progfd, cmsg->proto_write.data + cmsg->proto_write.zone_start, (size_t)(cmsg->proto_write.zone_end - cmsg->proto_write.zone_start), cmsg->transf_fds.data, cmsg->transf_fds.zone_end, &cmsg->transf_fds.zone_start); if (wc == -1 && (errno == EWOULDBLOCK || errno == EAGAIN)) { wp_debug("Write to the %s would block", progdesc); return 0; } else if (wc == -1 && (errno == EPIPE || errno == ECONNRESET)) { wp_error("%s has closed", progdesc); /* The program has closed its end of the connection, * so waypipe can also cease to process all messages and * data updates that would be directed to it */ cmsg->state = CM_TERMINAL; return ERR_STOP; } else if (wc == -1) { wp_error("%s write failure %zd: %s", progdesc, wc, strerror(errno)); return ERR_FATAL; } else { cmsg->proto_write.zone_start += (int)wc; wp_debug("Wrote to %s, %d/%d bytes in chunk %zd, %d/%d fds", progdesc, cmsg->proto_write.zone_start, cmsg->proto_write.zone_end, wc, cmsg->transf_fds.zone_start, cmsg->transf_fds.zone_end); if (cmsg->transf_fds.zone_start > 0) { decref_transferred_fds(&g->map, cmsg->transf_fds.zone_start, cmsg->transf_fds.data); memmove(cmsg->transf_fds.data, cmsg->transf_fds.data + cmsg->transf_fds.zone_start, (size_t)(cmsg->transf_fds.zone_end - cmsg->transf_fds.zone_start) * sizeof(int)); cmsg->transf_fds.zone_end -= cmsg->transf_fds.zone_start; } } } if (cmsg->proto_write.zone_start == cmsg->proto_write.zone_end) { wp_debug("Write to the %s succeeded", progdesc); cmsg->state = CM_WAITING_FOR_CHANNEL; DTRACE_PROBE(waypipe, chanmsg_channel_wait); } return 0; } static int advance_chanmsg_transfer(struct globals *g, struct chan_msg_state *cmsg, struct cross_state *cxs, bool display_side, int chanfd, int progfd, bool any_changes) { if (!any_changes) { return 0; } if (cmsg->state == CM_WAITING_FOR_CHANNEL) { return advance_chanmsg_chanread( cmsg, cxs, chanfd, display_side, g); } else if (cmsg->state == CM_WAITING_FOR_PROGRAM) { return advance_chanmsg_progwrite(cmsg, progfd, display_side, g); } return 0; } static void clear_old_transfers( struct transfer_queue *td, uint32_t inclusive_cutoff) { for (int i = 0; i < td->end; i++) { if (td->vecs[i].iov_len == 0) { wp_error("Unexpected zero sized item %d [%d,%d)", i, td->start, td->end); } } int k = 0; for (int i = 0; i < td->start; i++) { if (!msgno_gt(inclusive_cutoff, td->meta[i].msgno)) { break; } if (!td->meta[i].static_alloc) { free(td->vecs[i].iov_base); } td->vecs[i].iov_base = NULL; td->vecs[i].iov_len = 0; k = i + 1; } if (k > 0) { size_t nshift = (size_t)(td->end - k); memmove(td->meta, td->meta + k, nshift * sizeof(td->meta[0])); memmove(td->vecs, td->vecs + k, nshift * sizeof(td->vecs[0])); td->start -= k; td->end -= k; } } /* Returns 0 sucessful -1 if fatal error, -2 if closed */ static int partial_write_transfer(int chanfd, struct transfer_queue *td, int *total_written, int max_iov) { // Waiting for channel write to complete if (td->start < td->end) { /* Advance the current element by amount actually written */ char *orig_base = td->vecs[td->start].iov_base; size_t orig_len = td->vecs[td->start].iov_len; td->vecs[td->start].iov_base = orig_base + td->partial_write_amt; td->vecs[td->start].iov_len = orig_len - td->partial_write_amt; int count = min(max_iov, td->end - td->start); ssize_t wr = writev(chanfd, &td->vecs[td->start], count); td->vecs[td->start].iov_base = orig_base; td->vecs[td->start].iov_len = orig_len; if 
(wr == -1 && (errno == EWOULDBLOCK || errno == EAGAIN)) { return 0; } else if (wr == -1 && (errno == ECONNRESET || errno == EPIPE)) { wp_debug("Channel connection closed"); return ERR_DISCONN; } else if (wr == -1) { wp_error("chanfd write failure: %s", strerror(errno)); return ERR_FATAL; } size_t uwr = (size_t)wr; *total_written += (int)wr; while (uwr > 0 && td->start < td->end) { /* Skip past zero-length blocks */ if (td->vecs[td->start].iov_len == 0) { td->start++; continue; } size_t left = td->vecs[td->start].iov_len - td->partial_write_amt; if (left > uwr) { /* Block partially completed */ td->partial_write_amt += uwr; uwr = 0; } else { /* Block completed */ td->partial_write_amt = 0; uwr -= left; td->start++; } } } return 0; } static int inject_acknowledge( struct way_msg_state *wmsg, struct cross_state *cxs) { if (transfer_ensure_size(&wmsg->transfers, wmsg->transfers.end + 1) == -1) { wp_error("Failed to allocate space for ack message transfer"); return -1; } /* To avoid infinite regress, receive acknowledgement * messages do not themselves increase the message counters. */ uint32_t ack_msgno; if (wmsg->transfers.start == wmsg->transfers.end) { ack_msgno = wmsg->transfers.last_msgno; } else { ack_msgno = wmsg->transfers.meta[wmsg->transfers.start].msgno; } /* This is the next point where messages can be changed */ int next_slot = (wmsg->transfers.partial_write_amt > 0) ? wmsg->transfers.start + 1 : wmsg->transfers.start; struct wmsg_ack *not_in_prog_msg = NULL; struct wmsg_ack *queued_msg = NULL; for (size_t i = 0; i < 2; i++) { if (wmsg->transfers.partial_write_amt > 0 && wmsg->transfers.vecs[wmsg->transfers.start] .iov_base == &wmsg->ack_msgs[i]) { not_in_prog_msg = &wmsg->ack_msgs[1 - i]; } if (next_slot < wmsg->transfers.end && wmsg->transfers.vecs[next_slot].iov_base == &wmsg->ack_msgs[i]) { queued_msg = &wmsg->ack_msgs[i]; } } if (!queued_msg) { /* Insert a message--which is not partially written-- * in the next available slot, pushing forward other * messages */ if (!not_in_prog_msg) { queued_msg = &wmsg->ack_msgs[0]; } else { queued_msg = not_in_prog_msg; } if (next_slot < wmsg->transfers.end) { size_t nmoved = (size_t)(wmsg->transfers.end - next_slot); memmove(wmsg->transfers.vecs + next_slot + 1, wmsg->transfers.vecs + next_slot, sizeof(*wmsg->transfers.vecs) * nmoved); memmove(wmsg->transfers.meta + next_slot + 1, wmsg->transfers.meta + next_slot, sizeof(*wmsg->transfers.meta) * nmoved); } wmsg->transfers.vecs[next_slot].iov_len = sizeof(struct wmsg_ack); wmsg->transfers.vecs[next_slot].iov_base = queued_msg; wmsg->transfers.meta[next_slot].msgno = ack_msgno; wmsg->transfers.meta[next_slot].static_alloc = true; wmsg->transfers.end++; } /* Modify the message which is now next up in the transfer * queue */ queued_msg->size_and_type = transfer_header( sizeof(struct wmsg_ack), WMSG_ACK_NBLOCKS); queued_msg->messages_received = cxs->last_received_msgno; cxs->last_acked_msgno = cxs->last_received_msgno; return 0; } static int advance_waymsg_chanwrite(struct way_msg_state *wmsg, struct cross_state *cxs, struct globals *g, int chanfd, bool display_side) { const char *progdesc = display_side ? "compositor" : "application"; /* Copy the data in the transfer queue to the write queue. 
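* (transfer_load_async presumably drains transfers.async_recv_queue,
* the staging list that worker threads fill under its mutex, so the
* main thread remains the only writer of the iovec array used below.)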
*/ (void)transfer_load_async(&wmsg->transfers); // First, clear out any transfers that are no longer needed clear_old_transfers(&wmsg->transfers, cxs->last_confirmed_msgno); /* Acknowledge the other side's transfers as soon as possible */ if (cxs->last_acked_msgno != cxs->last_received_msgno) { (void)inject_acknowledge(wmsg, cxs); } int ret = partial_write_transfer(chanfd, &wmsg->transfers, &wmsg->total_written, wmsg->max_iov); if (ret < 0) { return ret; } bool is_done = false; struct task_data task; bool has_task = request_work_task(&g->threads, &task, &is_done); /* Run a task ourselves, making use of the main thread */ if (has_task) { run_task(&task, &g->threads.threads[0]); pthread_mutex_lock(&g->threads.work_mutex); g->threads.tasks_in_progress--; pthread_mutex_unlock(&g->threads.work_mutex); /* To skip the next poll */ uint8_t triv = 0; if (write(g->threads.selfpipe_w, &triv, 1) == -1) { wp_error("Failed to write to self-pipe"); } } if (is_done) { /* It's possible for the last task to complete between * `transfer_load_async` and `request_work_task` in this * function, so copy out any remaining messages.`*/ (void)transfer_load_async(&wmsg->transfers); } if (is_done && wmsg->ntrailing > 0) { for (int i = 0; i < wmsg->ntrailing; i++) { transfer_add(&wmsg->transfers, wmsg->trailing[i].iov_len, wmsg->trailing[i].iov_base); } wmsg->ntrailing = 0; memset(wmsg->trailing, 0, sizeof(wmsg->trailing)); } if (wmsg->transfers.start == wmsg->transfers.end && is_done) { for (struct shadow_fd_link *lcur = g->map.link.l_next, *lnxt = lcur->l_next; lcur != &g->map.link; lcur = lnxt, lnxt = lcur->l_next) { /* Note: finish_update() may delete `cur` */ struct shadow_fd *cur = (struct shadow_fd *)lcur; finish_update(cur); destroy_shadow_if_unreferenced(cur); } /* Reset work queue */ pthread_mutex_lock(&g->threads.work_mutex); if (g->threads.stack_count > 0 || g->threads.tasks_in_progress > 0) { wp_error("Multithreading state failure"); } g->threads.do_work = false; g->threads.stack_count = 0; g->threads.tasks_in_progress = 0; pthread_mutex_unlock(&g->threads.work_mutex); DTRACE_PROBE(waypipe, channel_write_end); size_t unacked_bytes = 0; for (int i = 0; i < wmsg->transfers.end; i++) { unacked_bytes += wmsg->transfers.vecs[i].iov_len; } wp_debug("Sent %d-byte message from %s to channel; %zu-bytes in flight", wmsg->total_written, progdesc, unacked_bytes); /* do not delete the used transfers yet; we need a remote * acknowledgement */ wmsg->total_written = 0; wmsg->state = WM_WAITING_FOR_PROGRAM; } return 0; } static int advance_waymsg_progread(struct way_msg_state *wmsg, struct globals *g, int progfd, bool display_side, bool progsock_readable) { const char *progdesc = display_side ? "compositor" : "application"; // We have data to read from programs/pipes bool new_proto_data = false; int old_fbuffer_end = wmsg->fds.zone_end; if (progsock_readable) { // Read /once/ ssize_t rc = iovec_read(progfd, wmsg->proto_read.data + wmsg->proto_read.zone_end, (size_t)(wmsg->proto_read.size - wmsg->proto_read.zone_end), &wmsg->fds); if (rc == -1 && (errno == EWOULDBLOCK || errno == EAGAIN)) { // do nothing } else if (rc == 0 || (rc == -1 && errno == ECONNRESET)) { wp_debug("%s has closed", progdesc); // state transitions handled in main loop return ERR_STOP; } else if (rc == -1) { wp_error("%s read failure: %s", progdesc, strerror(errno)); return ERR_FATAL; } else { // We have successfully read some data. 
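/* Note: in a char_window, [zone_start, zone_end) marks the bytes that
 * are buffered but not yet consumed; the parser below advances
 * zone_start, and any trailing partial message is shifted back to
 * offset 0 so the next read can append after it. */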
wmsg->proto_read.zone_end += (int)rc; new_proto_data = true; } } if (new_proto_data) { wp_debug("Read %d new file descriptors, have %d total now", wmsg->fds.zone_end - old_fbuffer_end, wmsg->fds.zone_end); if (buf_ensure_size(wmsg->proto_read.size + 1024, 1, &wmsg->proto_write.size, (void **)&wmsg->proto_write.data) == -1) { wp_error("Allocation failure for message workspace"); return ERR_NOMEM; } wmsg->proto_write.zone_start = 0; wmsg->proto_write.zone_end = 0; parse_and_prune_messages(g, display_side, !display_side, &wmsg->proto_read, &wmsg->proto_write, &wmsg->fds); /* Recycle partial message bytes */ if (wmsg->proto_read.zone_start > 0) { if (wmsg->proto_read.zone_end > wmsg->proto_read.zone_start) { memmove(wmsg->proto_read.data, wmsg->proto_read.data + wmsg->proto_read.zone_start, (size_t)(wmsg->proto_read.zone_end - wmsg->proto_read.zone_start)); } wmsg->proto_read.zone_end -= wmsg->proto_read.zone_start; wmsg->proto_read.zone_start = 0; } } read_readable_pipes(&g->map); for (struct shadow_fd_link *lcur = g->map.link.l_next, *lnxt = lcur->l_next; lcur != &g->map.link; lcur = lnxt, lnxt = lcur->l_next) { /* Note: finish_update() may delete `cur` */ struct shadow_fd *cur = (struct shadow_fd *)lcur; collect_update(&g->threads, cur, &wmsg->transfers, g->config->old_video_mode); /* collecting updates can reset `pipe.remote_can_X` state, so * garbage collect the sfd immediately after */ destroy_shadow_if_unreferenced(cur); } int num_mt_tasks = start_parallel_work( &g->threads, &wmsg->transfers.async_recv_queue); if (new_proto_data) { /* Send all file descriptors which have been used by the * protocol parser, translating them if this has not already * been done */ if (wmsg->fds.zone_start > 0) { size_t act_size = (size_t)wmsg->fds.zone_start * sizeof(int32_t) + sizeof(uint32_t); uint32_t *msg = malloc(act_size); if (!msg) { // TODO: use a ring buffer for allocations, // and figure out how to block until it is clear wp_error("Failed to allocate file desc tx msg"); return ERR_NOMEM; } msg[0] = transfer_header(act_size, WMSG_INJECT_RIDS); int32_t *rbuffer = (int32_t *)(msg + 1); /* Translate and adjust refcounts */ if (translate_fds(&g->map, &g->render, &g->threads, wmsg->fds.zone_start, wmsg->fds.data, rbuffer) == -1) { free(msg); return ERR_FATAL; } decref_transferred_rids( &g->map, wmsg->fds.zone_start, rbuffer); memmove(wmsg->fds.data, wmsg->fds.data + wmsg->fds.zone_start, sizeof(int) * (size_t)(wmsg->fds.zone_end - wmsg->fds.zone_start)); wmsg->fds.zone_end -= wmsg->fds.zone_start; wmsg->fds.zone_start = 0; /* Add message to trailing queue */ wmsg->trailing[wmsg->ntrailing].iov_len = act_size; wmsg->trailing[wmsg->ntrailing].iov_base = msg; wmsg->ntrailing++; } if (wmsg->proto_write.zone_end > 0) { wp_debug("We are transferring a data buffer with %d bytes", wmsg->proto_write.zone_end); size_t act_size = (size_t)wmsg->proto_write.zone_end + sizeof(uint32_t); uint32_t protoh = transfer_header( act_size, WMSG_PROTOCOL); uint8_t *copy_proto = malloc(alignz(act_size, 4)); if (!copy_proto) { wp_error("Failed to allocate protocol tx msg"); return ERR_NOMEM; } memcpy(copy_proto, &protoh, sizeof(uint32_t)); memcpy(copy_proto + sizeof(uint32_t), wmsg->proto_write.data, (size_t)wmsg->proto_write.zone_end); memset(copy_proto + sizeof(uint32_t) + wmsg->proto_write .zone_end, 0, alignz(act_size, 4) - act_size); wmsg->trailing[wmsg->ntrailing].iov_len = alignz(act_size, 4); wmsg->trailing[wmsg->ntrailing].iov_base = copy_proto; wmsg->ntrailing++; } } int n_transfers = wmsg->transfers.end - 
wmsg->transfers.start; size_t net_bytes = 0; for (int i = wmsg->transfers.start; i < wmsg->transfers.end; i++) { net_bytes += wmsg->transfers.vecs[i].iov_len; } if (n_transfers > 0 || num_mt_tasks > 0 || wmsg->ntrailing > 0) { wp_debug("Channel message start (%d blobs, %d bytes, %d trailing, %d tasks)", n_transfers, net_bytes, wmsg->ntrailing, num_mt_tasks); wmsg->state = WM_WAITING_FOR_CHANNEL; DTRACE_PROBE(waypipe, channel_write_start); } return 0; } static int advance_waymsg_transfer(struct globals *g, struct way_msg_state *wmsg, struct cross_state *cxs, bool display_side, int chanfd, int progfd, bool progsock_readable) { if (wmsg->state == WM_WAITING_FOR_CHANNEL) { return advance_waymsg_chanwrite( wmsg, cxs, g, chanfd, display_side); } else if (wmsg->state == WM_WAITING_FOR_PROGRAM) { return advance_waymsg_progread(wmsg, g, progfd, display_side, progsock_readable); } return 0; } static int read_new_chanfd(int linkfd, struct int_window *recon_fds) { uint8_t tmp = 0; ssize_t rd = iovec_read(linkfd, (char *)&tmp, 1, recon_fds); if (rd == -1 && (errno == EWOULDBLOCK || errno == EAGAIN)) { // do nothing return -1; } else if (rd == 0 || (rd == -1 && errno == ECONNRESET)) { wp_error("link has closed"); // sentinel value, to indicate that linkfd should be closed return -2; } else if (rd == -1) { wp_error("link read failure: %s", strerror(errno)); return -1; } for (int i = 0; i < recon_fds->zone_end - 1; i++) { checked_close(recon_fds->data[i]); } int ret_fd = -1; if (recon_fds->zone_end > 0) { ret_fd = recon_fds->data[recon_fds->zone_end - 1]; } recon_fds->zone_end = 0; return ret_fd; } static int reconnect_loop(int linkfd, int progfd, struct int_window *recon_fds) { while (!shutdown_flag) { struct pollfd rcfs[2]; rcfs[0].fd = linkfd; rcfs[0].events = POLLIN; rcfs[0].revents = 0; rcfs[1].fd = progfd; rcfs[1].events = 0; rcfs[1].revents = 0; int r = poll(rcfs, 2, -1); if (r == -1) { if (errno == EINTR) { continue; } else { break; } } if (rcfs[0].revents & POLLIN) { int nfd = read_new_chanfd(linkfd, recon_fds); if (nfd != -1) { return nfd; } } if (rcfs[0].revents & POLLHUP || rcfs[1].revents & POLLHUP) { return -1; } } return -1; } static void reset_connection(struct cross_state *cxs, struct chan_msg_state *cmsg, struct way_msg_state *wmsg, int chanfd) { /* Discard partial read transfer, throwing away complete but unread * messages, and trailing remnants */ cmsg->recv_end = 0; cmsg->recv_start = 0; cmsg->recv_unhandled_messages = 0; clear_old_transfers(&wmsg->transfers, cxs->last_confirmed_msgno); wp_debug("Resetting connection: %d blocks unacknowledged", wmsg->transfers.end); if (wmsg->transfers.end > 0) { /* If there was any data in flight, restart. If there wasn't * anything in flight, then the remote side shouldn't notice the * difference */ struct wmsg_restart restart; restart.size_and_type = transfer_header(sizeof(restart), WMSG_RESTART); restart.last_ack_received = cxs->last_confirmed_msgno; wmsg->transfers.start = 0; wmsg->transfers.partial_write_amt = 0; wp_debug("Sending restart message: last ack=%d", restart.last_ack_received); if (write(chanfd, &restart, sizeof(restart)) != sizeof(restart)) { wp_error("Failed to write restart message"); } } if (set_nonblocking(chanfd) == -1) { wp_error("Error making new channel connection nonblocking: %s", strerror(errno)); } (void)cxs; } static int set_connections_nonblocking( int chanfd, int progfd, int linkfd, bool display_side) { const char *progdesc = display_side ? 
"compositor" : "application"; if (set_nonblocking(chanfd) == -1) { wp_error("Error making channel connection nonblocking: %s", strerror(errno)); return -1; } if (set_nonblocking(progfd) == -1) { wp_error("Error making %s connection nonblocking: %s", progdesc, strerror(errno)); return -1; } if (linkfd != -1 && set_nonblocking(linkfd) == -1) { wp_error("Error making link connection nonblocking: %s", strerror(errno)); return -1; } return 0; } int main_interface_loop(int chanfd, int progfd, int linkfd, const struct main_config *config, bool display_side) { if (set_connections_nonblocking(chanfd, progfd, linkfd, display_side) == -1) { if (linkfd != -1) { checked_close(linkfd); } checked_close(chanfd); checked_close(progfd); return EXIT_FAILURE; } const char *progdesc = display_side ? "compositor" : "application"; wp_debug("Running main loop on %s side", progdesc); struct way_msg_state way_msg; memset(&way_msg, 0, sizeof(way_msg)); struct chan_msg_state chan_msg; memset(&chan_msg, 0, sizeof(chan_msg)); struct cross_state cross_data; memset(&cross_data, 0, sizeof(cross_data)); struct globals g; memset(&g, 0, sizeof(g)); way_msg.state = WM_WAITING_FOR_PROGRAM; /* AFAIK, there is no documented upper bound for the size of a * Wayland protocol message, but libwayland (in wl_buffer_put) * effectively limits message sizes to 4096 bytes. We must * therefore adopt a limit as least as large. */ const int max_read_size = 4096; way_msg.proto_read.size = max_read_size; way_msg.proto_read.data = malloc((size_t)way_msg.proto_read.size); way_msg.fds.size = 128; way_msg.fds.data = malloc((size_t)way_msg.fds.size * sizeof(int)); way_msg.proto_write.size = 2 * max_read_size; way_msg.proto_write.data = malloc((size_t)way_msg.proto_write.size); way_msg.max_iov = get_iov_max(); int mut_ret = pthread_mutex_init( &way_msg.transfers.async_recv_queue.lock, NULL); if (mut_ret) { wp_error("Mutex creation failed: %s", strerror(mut_ret)); goto init_failure_cleanup; } chan_msg.state = CM_WAITING_FOR_CHANNEL; chan_msg.recv_size = 2 * RECV_GOAL_READ_SIZE; chan_msg.recv_buffer = malloc((size_t)chan_msg.recv_size); chan_msg.proto_write.size = max_read_size * 2; chan_msg.proto_write.data = malloc((size_t)chan_msg.proto_write.size); if (!chan_msg.proto_write.data || !chan_msg.recv_buffer || !way_msg.proto_write.data || !way_msg.fds.data || !way_msg.proto_read.data) { wp_error("Failed to allocate a message scratch buffer"); goto init_failure_cleanup; } /* The first packet received will be #1 */ way_msg.transfers.last_msgno = 1; g.config = config; g.render = (struct render_data){ .drm_node_path = config->drm_node, .drm_fd = -1, .dev = NULL, .disabled = config->no_gpu, .av_disabled = config->no_gpu || !config->prefer_hwvideo, .av_bpf = config->video_bpf, .av_video_fmt = (int)config->video_fmt, .av_hwdevice_ref = NULL, .av_drmdevice_ref = NULL, .av_vadisplay = NULL, .av_copy_config = 0, }; if (setup_thread_pool(&g.threads, config->compression, config->compression_level, config->n_worker_threads) == -1) { goto init_failure_cleanup; } setup_translation_map(&g.map, display_side); if (init_message_tracker(&g.tracker) == -1) { goto init_failure_cleanup; } struct int_window recon_fds = { .data = NULL, .size = 0, .zone_start = 0, .zone_end = 0, }; bool needs_new_channel = false; struct pollfd *pfds = NULL; int pfds_size = 0; int exit_code = 0; while (!shutdown_flag && exit_code == 0 && !(way_msg.state == WM_TERMINAL && chan_msg.state == CM_TERMINAL)) { int psize = 4 + count_npipes(&g.map); if (buf_ensure_size(psize, sizeof(struct pollfd), 
&pfds_size, (void **)&pfds) == -1) { wp_error("Allocation failure, not enough space for pollfds"); exit_code = ERR_NOMEM; break; } pfds[0].fd = chanfd; pfds[1].fd = progfd; pfds[2].fd = linkfd; pfds[3].fd = g.threads.selfpipe_r; pfds[0].events = 0; pfds[1].events = 0; pfds[2].events = POLLIN; pfds[3].events = POLLIN; if (way_msg.state == WM_WAITING_FOR_CHANNEL) { pfds[0].events |= POLLOUT; } else if (way_msg.state == WM_WAITING_FOR_PROGRAM) { pfds[1].events |= POLLIN; } if (chan_msg.state == CM_WAITING_FOR_CHANNEL) { pfds[0].events |= POLLIN; } else if (chan_msg.state == CM_WAITING_FOR_PROGRAM) { pfds[1].events |= POLLOUT; } bool check_read = way_msg.state == WM_WAITING_FOR_PROGRAM; int npoll = 4 + fill_with_pipes(&g.map, pfds + 4, check_read); bool own_msg_pending = (cross_data.last_acked_msgno != cross_data.last_received_msgno) && way_msg.state == WM_WAITING_FOR_PROGRAM; bool unread_chan_msgs = chan_msg.state == CM_WAITING_FOR_CHANNEL && chan_msg.recv_unhandled_messages > 0; int poll_delay; if (unread_chan_msgs) { /* There is work to do, so continue */ poll_delay = 0; } else if (own_msg_pending) { /* To coalesce acknowledgements, we wait for a minimum * amount */ poll_delay = 20; } else { poll_delay = -1; } int r = poll(pfds, (nfds_t)npoll, poll_delay); if (r == -1) { if (errno == EINTR) { wp_error("poll interrupted: shutdown=%c", shutdown_flag ? 'Y' : 'n'); continue; } else { wp_error("poll failed due to, stopping: %s", strerror(errno)); exit_code = ERR_FATAL; break; } } if (pfds[3].revents & POLLIN) { /* After the self pipe has been used to wake up the * connection, drain it */ char tmp[64]; (void)read(g.threads.selfpipe_r, tmp, sizeof(tmp)); } mark_pipe_object_statuses(&g.map, npoll - 4, pfds + 4); /* POLLHUP sometimes implies POLLIN, but not on all systems. * Checking POLLHUP|POLLIN means that we can detect EOF when * we actually do try to read from the sockets, but also, if * there was data in the pipe just before the hang up, then we * can read and handle that data. */ bool progsock_readable = pfds[1].revents & (POLLIN | POLLHUP); bool chanmsg_active = (pfds[0].revents & (POLLIN | POLLHUP)) || (pfds[1].revents & POLLOUT) || unread_chan_msgs; bool maybe_new_channel = (pfds[2].revents & (POLLIN | POLLHUP)); if (maybe_new_channel) { int new_fd = read_new_chanfd(linkfd, &recon_fds); if (new_fd >= 0) { if (chanfd != -1) { checked_close(chanfd); } chanfd = new_fd; reset_connection(&cross_data, &chan_msg, &way_msg, chanfd); needs_new_channel = false; } else if (new_fd == -2) { wp_error("Link to root process hang-up detected"); checked_close(linkfd); linkfd = -1; } } if (needs_new_channel && linkfd != -1) { wp_error("Channel hang up detected, waiting for reconnection"); int new_fd = reconnect_loop(linkfd, progfd, &recon_fds); if (new_fd < 0) { // -1 is read failure or misc error, -2 is HUP exit_code = ERR_FATAL; break; } else { /* Actually handle the reconnection/reset state */ if (chanfd != -1) { checked_close(chanfd); } chanfd = new_fd; reset_connection(&cross_data, &chan_msg, &way_msg, chanfd); needs_new_channel = false; } } else if (needs_new_channel) { wp_error("Channel hang up detected, no reconnection link, fatal"); exit_code = ERR_FATAL; break; } // Q: randomize the order of these?, to highlight // accidental dependencies? 
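/* Pass m == 0 advances the channel->program direction, and m == 1 the
 * program->channel direction; each pass may flip connection state that
 * the other inspects, which is where an accidental ordering dependency
 * could hide. */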
for (int m = 0; m < 2; m++) { int tr; if (m == 0) { tr = advance_chanmsg_transfer(&g, &chan_msg, &cross_data, display_side, chanfd, progfd, chanmsg_active); } else { tr = advance_waymsg_transfer(&g, &way_msg, &cross_data, display_side, chanfd, progfd, progsock_readable); } if (tr >= 0) { /* do nothing */ } else if (tr == ERR_DISCONN) { /* Channel connection has at least * partially been shut down, so close it * fully. */ checked_close(chanfd); chanfd = -1; if (linkfd == -1) { wp_error("Channel hang up detected, no reconnection link, fatal"); exit_code = ERR_FATAL; break; } needs_new_channel = true; } else if (tr == ERR_STOP) { if (m == 0) { /* Stop returned while writing: Wayland * connection has at least partially * shut down, so close it fully. */ checked_close(progfd); progfd = -1; } else { /* Stop returned while reading */ checked_close(progfd); progfd = -1; if (way_msg.state == WM_WAITING_FOR_PROGRAM) { way_msg.state = WM_TERMINAL; } if (chan_msg.state == CM_WAITING_FOR_PROGRAM || chan_msg.recv_start == chan_msg.recv_end) { chan_msg.state = CM_TERMINAL; } } } else { /* Fatal error, close and flush */ exit_code = tr; break; } /* If the program connection has closed, and * waypipe is not currently transferring any * message to the channel, then shut down the * program->channel transfers. (The reverse * situation with the channel connection is not * a cause for permanent closure, thanks to * reconnection support.) */ if (progfd == -1) { if (way_msg.state == WM_WAITING_FOR_PROGRAM) { way_msg.state = WM_TERMINAL; } if (chan_msg.state == CM_WAITING_FOR_PROGRAM || chan_msg.recv_start == chan_msg.recv_end) { chan_msg.state = CM_TERMINAL; } } } // Periodic maintenance. It doesn't matter who does this flush_writable_pipes(&g.map); } free(pfds); free(recon_fds.data); wp_debug("Exiting main loop (%d, %d, %d), attempting close message", exit_code, way_msg.state, chan_msg.state); init_failure_cleanup: /* It's possible, but very very unlikely, that waypipe gets closed * while Wayland protocol messages are being written to the program * and the most recent message was only partially written. */ exit_code = ERR_FATAL; if (chan_msg.proto_write.zone_start != chan_msg.proto_write.zone_end) { wp_debug("Final write to %s was incomplete, %d/%d", progdesc, chan_msg.proto_write.zone_start, chan_msg.proto_write.zone_end); } if (!display_side && progfd != -1) { char error[128]; if (exit_code == ERR_FATAL) { size_t len = print_display_error(error, sizeof(error), 3, "waypipe internal error"); if (write(progfd, error, len) == -1) { wp_error("Failed to send waypipe error notification: %s", strerror(errno)); } } else if (exit_code == ERR_NOMEM) { size_t len = print_display_error( error, sizeof(error), 2, "no memory"); if (write(progfd, error, len) == -1) { wp_error("Failed to send OOM notification: %s", strerror(errno)); } } } /* Attempt to notify remote end that the application has closed, * waiting at most for a very short amount of time */ if (way_msg.transfers.start != way_msg.transfers.end) { wp_error("Final write to channel was incomplete, %d+%zu/%d", way_msg.transfers.start, way_msg.transfers.partial_write_amt, way_msg.transfers.end); } if (chanfd != -1) { struct pollfd close_poll; close_poll.fd = chanfd; close_poll.events = POLLOUT; int close_ret = poll(&close_poll, 1, 200); if (close_ret == 0) { wp_debug("Exit poll timed out"); } uint32_t close_msg[2]; close_msg[0] = transfer_header(sizeof(close_msg), WMSG_CLOSE); close_msg[1] = exit_code == ERR_STOP ?
0 : (uint32_t)exit_code; wp_debug("Sending close message, modecode=%d", close_msg[1]); if (write(chanfd, &close_msg, sizeof(close_msg)) == -1) { wp_error("Failed to send close notification: %s", strerror(errno)); } } else { wp_debug("Channel closed, hence no close notification"); } cleanup_thread_pool(&g.threads); cleanup_message_tracker(&g.tracker); cleanup_translation_map(&g.map); cleanup_render_data(&g.render); cleanup_hwcontext(&g.render); free(way_msg.proto_read.data); free(way_msg.proto_write.data); free(way_msg.fds.data); cleanup_transfer_queue(&way_msg.transfers); for (int i = 0; i < way_msg.ntrailing; i++) { free(way_msg.trailing[i].iov_base); } free(chan_msg.transf_fds.data); free(chan_msg.proto_fds.data); free(chan_msg.recv_buffer); free(chan_msg.proto_write.data); if (chanfd != -1) { checked_close(chanfd); } if (progfd != -1) { checked_close(progfd); } if (linkfd != -1) { checked_close(linkfd); } return EXIT_SUCCESS; } waypipe-v0.10.5/waypipe-c/meson.build000066400000000000000000000175711507071711100175140ustar00rootroot00000000000000# DEFAULT_SOURCE implies POSIX_C_SOURCE 200809L + extras like CMSG_LEN # requires glibc >= 4.19 (2014), freebsd libc (since 2016?), musl >= 1.15 (2014) add_project_arguments('-D_DEFAULT_SOURCE', language: 'c') # Sometimes ignoring the result of read()/write() is the right thing to do add_project_arguments('-Wno-unused-result', language: 'c') cc = meson.get_compiler('c') config_data = configuration_data() config_data.set('WAYPIPE_VERSION', version) # Make build reproducible if possible python3 = import('python').find_installation() prefix_finder = 'import os.path; print(os.path.join(os.path.relpath(\'@0@\', \'@1@\'),\'\'))' r = run_command( python3, '-c', prefix_finder.format(meson.current_source_dir(), meson.project_build_root()), check: true, ) relative_dir = r.stdout().strip() if cc.has_argument('-fmacro-prefix-map=/prefix/to/hide=') add_project_arguments( '-fmacro-prefix-map=@0@='.format(relative_dir), language: 'c', ) else add_project_arguments( '-DWAYPIPE_REL_SRC_DIR="@0@"'.format(relative_dir), language: 'c', ) endif libgbm = dependency('gbm', required: get_option('with_dmabuf')) libdrm = dependency('libdrm', required: get_option('with_dmabuf')) if libgbm.found() and libdrm.found() config_data.set('HAS_DMABUF', 1, description: 'Support DMABUF replication') has_dmabuf = true else has_dmabuf = false endif pthreads = dependency('threads') rt = cc.find_library('rt') # XXX dtrace -G (Solaris, FreeBSD, NetBSD) isn't supported yet is_linux = host_machine.system() == 'linux' is_darwin = host_machine.system() == 'darwin' if (is_linux or is_darwin) and get_option('with_systemtap') and cc.has_header( 'sys/sdt.h', ) config_data.set('HAS_USDT', 1, description: 'Enable static trace probes') endif has_flag_to_host = ''' // linux/vm_sockets.h doesn't compile on its own // "invalid application of 'sizeof' to incomplete type 'struct sockaddr'" #include #include #ifndef VMADDR_FLAG_TO_HOST #error #endif int main(void) { return 0; } ''' if is_linux and cc.has_header('linux/vm_sockets.h') and cc.compiles( has_flag_to_host, name: 'has VMADDR_FLAG_TO_HOST', ) config_data.set('HAS_VSOCK', 1, description: 'Enable VM Sockets (VSOCK)') endif liblz4 = dependency( 'liblz4', version: '>=1.7.0', required: get_option('with_lz4'), ) if liblz4.found() config_data.set('HAS_LZ4', 1, description: 'Enable LZ4 compression') endif libzstd = dependency( 'libzstd', version: '>=0.4.6', required: get_option('with_zstd'), ) if libzstd.found() config_data.set('HAS_ZSTD', 1, description: 
'Enable Zstd compression') endif libavcodec = dependency('libavcodec', required: get_option('with_video')) libavutil = dependency('libavutil', required: get_option('with_video')) libswscale = dependency('libswscale', required: get_option('with_video')) libva = dependency('libva', required: get_option('with_vaapi')) if libavcodec.found() and libavutil.found() and libswscale.found() config_data.set('HAS_VIDEO', 1, description: 'Enable video (de)compression') if libva.found() config_data.set( 'HAS_VAAPI', 1, description: 'Enable hardware video (de)compression with VAAPI', ) endif endif wayland_scanner = dependency( 'wayland-scanner', version: '>=1.15.0', required: get_option('with_secctx'), native: true, ) wayland_client = dependency( 'wayland-client', required: get_option('with_secctx'), ) if wayland_scanner.found() and wayland_client.found() config_data.set( 'HAS_SECURITY_CONTEXT', 1, description: 'Enable security-context-v1 support', ) endif waypipe_includes = [include_directories('protocols'), include_directories('.')] if libdrm.found() waypipe_includes += include_directories( libdrm.get_variable(pkgconfig: 'includedir'), ) endif subdir('protocols') waypipe_source_files = [ 'dmabuf.c', 'handlers.c', 'kernel.c', 'mainloop.c', 'parsing.c', 'platform.c', 'shadow.c', 'interval.c', 'util.c', 'video.c', ] waypipe_deps = [ pthreads, # To run expensive computations in parallel rt, # For shared memory ] if config_data.has('HAS_DMABUF') # General GPU buffer creation, aligned with dmabuf proto waypipe_deps += [libgbm] endif if config_data.has('HAS_LZ4') waypipe_deps += [liblz4] # Fast compression option endif if config_data.has('HAS_ZSTD') waypipe_deps += [libzstd] # Slow compression option endif if config_data.has('HAS_VIDEO') waypipe_deps += [libavcodec, libavutil, libswscale] endif if config_data.has('HAS_VAAPI') waypipe_deps += [libva] # For NV12->RGB conversions endif # Conditionally compile SIMD-optimized code. # (The meson simd module is a bit too limited for this) kernel_libs = [] if cc.has_argument('-mavx512f') and cc.has_argument('-mlzcnt') and cc.has_argument( '-mbmi', ) and get_option( 'with_avx512f', ) kernel_libs += static_library( 'kernel_avx512f', 'kernel_avx512f.c', c_args: ['-mavx512f', '-mlzcnt', '-mbmi'], ) config_data.set('HAVE_AVX512F', 1, description: 'Compiler supports AVX-512F') endif if cc.has_argument('-mavx2') and cc.has_argument('-mlzcnt') and cc.has_argument( '-mbmi', ) and get_option( 'with_avx2', ) kernel_libs += static_library( 'kernel_avx2', 'kernel_avx2.c', c_args: ['-mavx2', '-mlzcnt', '-mbmi'], ) config_data.set('HAVE_AVX2', 1, description: 'Compiler supports AVX2') endif if cc.has_argument('-msse3') and get_option('with_sse3') kernel_libs += static_library( 'kernel_sse3', 'kernel_sse3.c', c_args: ['-msse3'], ) config_data.set('HAVE_SSE3', 1, description: 'Compiler supports SSE 3') endif if (host_machine.cpu_family() == 'aarch64' or cc.has_argument('-mfpu=neon')) and get_option( 'with_neon_opts', ) neon_args = host_machine.cpu_family() == 'aarch64' ? [] : ['-mfpu=neon'] # Clang additionally enforces that NEON code only be compiled # to target a CPU that actually supports NEON instructions, # so bump the host CPU version for the optionally executed code only. 
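# Illustrative example (not evaluated by this build): on a 32-bit arm
# host using clang, the effective compile of the optional kernel is
# roughly `cc -mfpu=neon -march=armv7-a -c kernel_neon.c`, while the
# generic kernels keep the default -march; at runtime the NEON path is
# additionally gated by neon_available() in platform.c, so the binary
# stays safe on CPUs without NEON support.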
if host_machine.cpu_family() == 'arm' and cc.get_id() == 'clang' host_cpu = host_machine.cpu() if host_cpu.contains('4') or host_cpu.contains('5') or host_cpu.contains( '6', ) neon_args += ['-march=armv7-a'] endif endif kernel_libs += static_library( 'kernel_neon', 'kernel_neon.c', c_args: neon_args, ) config_data.set('HAVE_NEON', 1, description: 'Compiler supports NEON') endif if config_data.has('HAS_SECURITY_CONTEXT') wayland_scanner_prog = find_program( wayland_scanner.get_variable(pkgconfig: 'wayland_scanner'), ) wl_security_context_xml = '../protocols/security-context-v1.xml' protocols_src += custom_target( 'security-context-v1-protocol.c', input: wl_security_context_xml, output: 'security-context-v1-protocol.c', command: [wayland_scanner_prog, 'private-code', '@INPUT@', '@OUTPUT@'], ) protocols_src += custom_target( 'security-context-v1-protocol.h', input: wl_security_context_xml, output: 'security-context-v1-protocol.h', command: [wayland_scanner_prog, 'client-header', '@INPUT@', '@OUTPUT@'], ) waypipe_deps += [wayland_client] waypipe_source_files += ['secctx.c'] endif configure_file(output: 'config-waypipe.h', configuration: config_data) lib_waypipe_src = static_library( 'waypipe_src', waypipe_source_files + protocols_src, include_directories: waypipe_includes, link_with: kernel_libs, dependencies: waypipe_deps, ) waypipe_prog = executable( 'waypipe-c', ['waypipe.c', 'bench.c', 'client.c', 'server.c'], link_with: lib_waypipe_src, install: true, ) if get_option('tests') subdir('test') endif waypipe-v0.10.5/waypipe-c/parsing.c000066400000000000000000000410421507071711100171470ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "parsing.h" #include "main.h" #include "util.h" #include #include #include #include static const char *get_type_name(struct wp_object *obj) { return obj->type ? 
obj->type->name : ""; } const char *get_nth_packed_string(const char *pack, int n) { for (int i = 0; i < n; i++) { pack += strlen(pack) + 1; } return pack; } static struct wp_object *tree_rotate_left(struct wp_object *n) { struct wp_object *tmp = n->t_right; n->t_right = tmp->t_left; tmp->t_left = n; return tmp; } static struct wp_object *tree_rotate_right(struct wp_object *n) { struct wp_object *tmp = n->t_left; n->t_left = tmp->t_right; tmp->t_right = n; return tmp; } static void tree_link_right(struct wp_object **cur, struct wp_object **rn) { (*rn)->t_left = *cur; *rn = *cur; *cur = (*cur)->t_left; } static void tree_link_left(struct wp_object **cur, struct wp_object **ln) { (*ln)->t_right = *cur; *ln = *cur; *cur = (*cur)->t_right; } /* Splay operation, following Sleator+Tarjan, 1985 */ static struct wp_object *tree_branch_splay(struct wp_object *root, uint32_t key) { if (!root) { return NULL; } struct wp_object bg = {.t_left = NULL, .t_right = NULL}; struct wp_object *ln = &bg; struct wp_object *rn = &bg; struct wp_object *cur = root; while (key != cur->obj_id) { if (key < cur->obj_id) { if (cur->t_left && key < cur->t_left->obj_id) { cur = tree_rotate_right(cur); } if (!cur->t_left) { break; } tree_link_right(&cur, &rn); } else { if (cur->t_right && key > cur->t_right->obj_id) { cur = tree_rotate_left(cur); } if (!cur->t_right) { break; } tree_link_left(&cur, &ln); } } ln->t_right = cur->t_left; rn->t_left = cur->t_right; cur->t_left = bg.t_right; cur->t_right = bg.t_left; return cur; } static void tree_insert(struct wp_object **tree, struct wp_object *new_node) { /* Reset these, just in case */ new_node->t_left = NULL; new_node->t_right = NULL; struct wp_object *r = *tree; if (!r) { *tree = new_node; return; } r = tree_branch_splay(r, new_node->obj_id); if (new_node->obj_id < r->obj_id) { new_node->t_left = r->t_left; new_node->t_right = r; r->t_left = NULL; r = new_node; } else if (new_node->obj_id > r->obj_id) { new_node->t_right = r->t_right; new_node->t_left = r; r->t_right = NULL; r = new_node; } else { /* already in tree, no effect? 
or do silent override */ } *tree = r; } static void tree_remove(struct wp_object **tree, uint32_t key) { struct wp_object *r = *tree; r = tree_branch_splay(r, key); if (!r || r->obj_id != key) { /* wasn't in tree */ return; } struct wp_object *lbranch = r->t_left; struct wp_object *rbranch = r->t_right; if (!lbranch) { *tree = rbranch; return; } r = tree_branch_splay(lbranch, key); r->t_right = rbranch; *tree = r; } static struct wp_object *tree_lookup(struct wp_object **tree, uint32_t key) { *tree = tree_branch_splay(*tree, key); if (*tree && (*tree)->obj_id == key) { return *tree; } return NULL; } static void tree_clear(struct wp_object **tree, void (*node_free)(struct wp_object *object)) { struct wp_object *root = *tree; while (root) { root = tree_branch_splay(root, 0); struct wp_object *right = root->t_right; root->t_right = NULL; node_free(root); root = right; } *tree = NULL; } void tracker_insert(struct message_tracker *mt, struct wp_object *obj) { struct wp_object *old_obj = tree_lookup(&mt->objtree_root, obj->obj_id); if (old_obj) { /* We /always/ replace the object, to ensure that map * elements are never duplicated and make the deletion * process cause crashes */ if (!old_obj->is_zombie) { wp_error("Replacing object @%u that already exists: old type %s, new type %s", obj->obj_id, get_type_name(old_obj), get_type_name(obj)); } /* Zombie objects (server allocated, client deleted) are * only acknowledged destroyed by the server when they * are replaced. */ tree_remove(&mt->objtree_root, old_obj->obj_id); destroy_wp_object(old_obj); } tree_insert(&mt->objtree_root, obj); } void tracker_replace_existing( struct message_tracker *mt, struct wp_object *new_obj) { tree_remove(&mt->objtree_root, new_obj->obj_id); tree_insert(&mt->objtree_root, new_obj); } void tracker_remove(struct message_tracker *mt, struct wp_object *obj) { tree_remove(&mt->objtree_root, obj->obj_id); } struct wp_object *tracker_get(struct message_tracker *mt, uint32_t id) { return tree_lookup(&mt->objtree_root, id); } struct wp_object *get_object(struct message_tracker *mt, uint32_t id, const struct wp_interface *intf) { (void)intf; return tracker_get(mt, id); } int init_message_tracker(struct message_tracker *mt) { memset(mt, 0, sizeof(*mt)); /* heap allocate this, so we don't need to protect against adversarial * replacement */ struct wp_object *disp = create_wp_object(1, the_display_interface); if (!disp) { return -1; } tracker_insert(mt, disp); return 0; } void cleanup_message_tracker(struct message_tracker *mt) { tree_clear(&mt->objtree_root, destroy_wp_object); } static bool word_has_empty_bytes(uint32_t v) { return ((v & 0xFF) == 0) || ((v & 0xFF00) == 0) || ((v & 0xFF0000) == 0) || ((v & 0xFF000000) == 0); } bool size_check(const struct msg_data *data, const uint32_t *payload, unsigned int true_length, int fd_length) { if (data->n_fds > fd_length) { wp_error("Msg overflow, not enough fds %d > %d", data->n_fds, fd_length); return false; } const uint16_t *gaps = data->gaps; uint32_t pos = 0; for (;; gaps++) { uint16_t g = (*gaps >> 2); uint16_t e = (*gaps & 0x3); pos += g; if (pos > true_length) { wp_error("Msg overflow, not enough words %d > %d", pos, true_length); return false; } switch (e) { case GAP_CODE_STR: { uint32_t x_words = (payload[pos - 1] + 3) / 4; uint32_t end_idx = pos + x_words - 1; if (end_idx < true_length && !word_has_empty_bytes( payload[end_idx])) { wp_error("Msg overflow, string termination %d < %d, %d, %x %d", pos, true_length, x_words, payload[end_idx], word_has_empty_bytes( 
payload[end_idx])); return false; } pos += x_words; } break; case GAP_CODE_ARR: pos += (payload[pos - 1] + 3) / 4; break; case GAP_CODE_OBJ: break; case GAP_CODE_END: return true; } } } /* Given a size-checked request, try to construct all the new objects * that the request requires. Return true if successful, false otherwise. * * The argument `caller_obj` should be the object on which the request was * invoked; this function checks to make sure that object is not * overwritten by accident/corrupt input. */ static bool build_new_objects(const struct msg_data *data, const uint32_t *payload, struct message_tracker *mt, const struct wp_object *caller_obj, int msg_offset) { const uint16_t *gaps = data->gaps; uint32_t pos = 0; uint32_t objno = 0; for (;; gaps++) { uint16_t g = (*gaps >> 2); uint16_t e = (*gaps & 0x3); pos += g; switch (e) { case GAP_CODE_STR: case GAP_CODE_ARR: pos += (payload[pos - 1] + 3) / 4; break; case GAP_CODE_OBJ: { uint32_t new_id = payload[pos - 1]; if (new_id == caller_obj->obj_id) { wp_error("In %s.%s, tried to create object id=%u conflicting with object being called, also id=%u", caller_obj->type->name, get_nth_packed_string( caller_obj->type->msg_names, msg_offset), new_id, caller_obj->obj_id); return false; } struct wp_object *new_obj = create_wp_object( new_id, data->new_objs[objno]); if (!new_obj) { return false; } tracker_insert(mt, new_obj); objno++; } break; case GAP_CODE_END: return true; } } } int peek_message_size(const void *data) { return (int)(((const uint32_t *)data)[1] >> 16); } enum parse_state handle_message(struct globals *g, bool display_side, bool from_client, struct char_window *chars, struct int_window *fds) { bool to_wire = from_client == !display_side; const uint32_t *const header = (uint32_t *)&chars->data[chars->zone_start]; uint32_t obj = header[0]; int len = (int)(header[1] >> 16); int meth = (int)((header[1] << 16) >> 16); if (len != chars->zone_end - chars->zone_start) { wp_error("Message length disagreement %d vs %d", len, chars->zone_end - chars->zone_start); return PARSE_ERROR; } // display: object = 0? struct wp_object *objh = tracker_get(&g->tracker, obj); if (!objh || !objh->type) { wp_debug("Unidentified object %d with %s", obj, from_client ? "request" : "event"); return PARSE_UNKNOWN; } /* Identify the message type. Messages sent over the wire are tagged * with the number of file descriptors that are bound to the message. * This incidentally limits the number of fds to 31, and number of * messages per type 2047. */ int num_fds_with_message = -1; if (!to_wire) { num_fds_with_message = meth >> 11; meth = meth & ((1 << 11) - 1); if (num_fds_with_message > 0) { wp_debug("Reading message tagged with %d fds.", num_fds_with_message); } // Strip out the FD counters ((uint32_t *)&chars->data[chars->zone_start])[1] &= ~(uint32_t)((1 << 16) - (1 << 11)); } const struct wp_interface *intf = objh->type; int nmsgs = from_client ? intf->nreq : intf->nevt; if (meth < 0 || meth >= nmsgs) { wp_debug("Unidentified request #%d (of %d) on interface %s", meth, nmsgs, intf->name); return PARSE_UNKNOWN; } int meth_offset = from_client ? 
meth : meth + intf->nreq; const struct msg_data *msg = &intf->msgs[meth_offset]; const uint32_t *payload = header + 2; if (!size_check(msg, payload, (unsigned int)len / 4 - 2, fds->zone_end - fds->zone_start)) { wp_error("Message %x %s@%u.%s parse length overflow", payload, intf->name, objh->obj_id, get_nth_packed_string( intf->msg_names, meth_offset)); return PARSE_UNKNOWN; } if (!build_new_objects(msg, payload, &g->tracker, objh, meth_offset)) { return PARSE_UNKNOWN; } int fds_used = 0; struct context ctx = { .g = g, .tracker = &g->tracker, .obj = objh, .on_display_side = display_side, .drop_this_msg = false, .message = (uint32_t *)&chars->data[chars->zone_start], .message_length = len, .message_available_space = chars->size - chars->zone_start, .fds = fds, .fds_changed = false, }; if (msg->call) { (*msg->call)(&ctx, payload, &fds->data[fds->zone_start], &g->tracker); } if (num_fds_with_message >= 0 && msg->n_fds != num_fds_with_message) { wp_error("Message used %d file descriptors, but was tagged as using %d", msg->n_fds, num_fds_with_message); } fds_used += msg->n_fds; if (objh->obj_id >= 0xff000000 && msg->is_destructor) { /* Unfortunately, the wayland server library does not explicitly * acknowledge the client requested deletion of objects that the * wayland server has created; the client assumes success, * except by creating a new object that overrides the existing * id. * * To correctly vanish all events in flight, we mark the element * as having been zombified; it will only be destroyed when a * new element is created to take its place, since there could * still be e.g. data transfers in the channel, and it's best * that those only vanish when needed. * * Fortunately, wl_registry::bind objects are all client * produced. * * TODO: early struct shadow_fd closure for all deletion * requests, with a matching zombie flag to vanish transfers; * * TODO: avert the zombie apocalypse, where the compositor * sends creation notices for a full hierarchy of objects * before it receives the root's .destroy request. */ objh->is_zombie = true; } if (ctx.drop_this_msg) { wp_debug("Dropping %s.%s, with %d fds", intf->name, get_nth_packed_string( intf->msg_names, meth_offset), fds_used); chars->zone_end = chars->zone_start; int nmoved = fds->zone_end - fds->zone_start - fds_used; memmove(&fds->data[fds->zone_start], &fds->data[fds->zone_start + fds_used], (size_t)nmoved * sizeof(int)); fds->zone_end -= fds_used; return PARSE_KNOWN; } if (!ctx.fds_changed) { // Default, autoadvance fd queue, unless handler disagreed. fds->zone_start += fds_used; // Tag message with number of FDs. If the fds were modified // nontrivially, (i.e, ctx.fds_changed is true), tagging is // handler's responsibility if (to_wire) { if (fds_used >= 32 || meth >= 2048) { wp_error("Message used %d>=32 file descriptors or had index %d>=2048. 
FD tagging failed, expect a crash.", fds_used, meth); } if (fds_used > 0) { wp_debug("Tagging message with %d fds.", fds_used); ((uint32_t *)&chars->data[chars->zone_start]) [1] |= (uint32_t)(fds_used << 11); } } } if (fds->zone_end < fds->zone_start) { wp_error("Handler error after %s.%s: fdzs = %d > %d = fdze", intf->name, get_nth_packed_string( intf->msg_names, meth_offset), fds->zone_start, fds->zone_end); } // Move the end, in case there were changes chars->zone_end = chars->zone_start + ctx.message_length; return PARSE_KNOWN; } void parse_and_prune_messages(struct globals *g, bool on_display_side, bool from_client, struct char_window *source_bytes, struct char_window *dest_bytes, struct int_window *fds) { bool anything_unknown = false; struct char_window scan_bytes; scan_bytes.data = dest_bytes->data; scan_bytes.zone_start = dest_bytes->zone_start; scan_bytes.zone_end = dest_bytes->zone_start; scan_bytes.size = dest_bytes->size; DTRACE_PROBE1(waypipe, parse_enter, source_bytes->zone_end - source_bytes->zone_start); for (; source_bytes->zone_start < source_bytes->zone_end;) { if (source_bytes->zone_end - source_bytes->zone_start < 8) { // Not enough remaining bytes to parse the // header wp_debug("Insufficient bytes for header: %d %d", source_bytes->zone_start, source_bytes->zone_end); break; } int msgsz = peek_message_size( &source_bytes->data[source_bytes->zone_start]); if (msgsz % 4 != 0) { wp_debug("Wayland messages lengths must be divisible by 4"); break; } if (source_bytes->zone_start + msgsz > source_bytes->zone_end) { wp_debug("Insufficient bytes"); // Not enough remaining bytes to contain the // message break; } if (msgsz < 8) { wp_debug("Degenerate message, claimed len=%d", msgsz); // Not enough remaining bytes to contain the // message break; } /* We copy the message to the trailing end of the * in-progress buffer; the parser may elect to modify * the message's size */ memcpy(&scan_bytes.data[scan_bytes.zone_start], &source_bytes->data[source_bytes->zone_start], (size_t)msgsz); source_bytes->zone_start += msgsz; scan_bytes.zone_end = scan_bytes.zone_start + msgsz; enum parse_state pstate = handle_message(g, on_display_side, from_client, &scan_bytes, fds); if (pstate == PARSE_UNKNOWN || pstate == PARSE_ERROR) { anything_unknown = true; } scan_bytes.zone_start = scan_bytes.zone_end; } dest_bytes->zone_end = scan_bytes.zone_end; if (anything_unknown) { // All-un-owned buffers are assumed to have changed. // (Note that in some cases, a new protocol could imply // a change for an existing buffer; it may make sense to // mark everything dirty, then.) 
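/* Sketch of the traversal below (illustrative only): g->map.link is the
 * sentinel of a circular intrusive list of struct shadow_fd, so a plain
 * read-only walk could also be written as
 *
 *   for (struct shadow_fd_link *l = g->map.link.l_next;
 *                   l != &g->map.link; l = l->l_next) {
 *           struct shadow_fd *sfd = (struct shadow_fd *)l;
 *           ...
 *   }
 *
 * The two-cursor form used here caches l_next up front as well, which
 * keeps the iteration valid even if the current node gets unlinked. */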
for (struct shadow_fd_link *lcur = g->map.link.l_next, *lnxt = lcur->l_next; lcur != &g->map.link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; if (!cur->has_owner) { cur->is_dirty = true; } } } DTRACE_PROBE(waypipe, parse_exit); return; } waypipe-v0.10.5/waypipe-c/parsing.h000066400000000000000000000134301507071711100171540ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #ifndef WAYPIPE_PARSING_H #define WAYPIPE_PARSING_H #include #include struct char_window; struct int_window; struct fd_translation_map; struct main_config; struct wp_interface; /** An object used by the wayland protocol. Specific types may extend * this struct, using the following data as a header */ struct wp_object { struct wp_object *t_left, *t_right; // inline tree implementation const struct wp_interface *type; // Use to lookup the message handler uint32_t obj_id; bool is_zombie; // object deleted but not yet acknowledged remotely }; struct message_tracker { /* Tree containing all objects that are currently alive or zombie */ struct wp_object *objtree_root; /* sequence number to discriminate between wl_buffer objects; object ids * and pointers are not guaranteed to be unique */ uint64_t buffer_seqno; }; /** Context object, to be passed to the protocol handler functions */ struct context { struct globals *const g; struct message_tracker *const tracker; struct wp_object *obj; bool drop_this_msg; /* If true, running as waypipe client, and interfacing with compositor's * buffers */ const bool on_display_side; /* The transferred message can be rewritten in place, and resized, as * long as there is space available. Setting 'fds_changed' will * prevent the fd zone start from autoincrementing after running * the function, which may be useful when injecting messages with fds */ const int message_available_space; uint32_t *const message; int message_length; bool fds_changed; struct int_window *const fds; }; /** Add a protocol object to the list, replacing any preceding object with * the same id. 
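 *
 * Minimal usage sketch (illustrative; assumes an initialized tracker, and
 * uses intf_wl_surface only as an example of a generated interface table):
 *
 *   struct wp_object *obj = create_wp_object(new_id, &intf_wl_surface);
 *   if (obj) {
 *           tracker_insert(mt, obj);
 *   }
 *
 * where new_id stands in for the object id taken from the wire message.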
*/
void tracker_insert(struct message_tracker *mt, struct wp_object *obj);
void tracker_remove(struct message_tracker *mt, struct wp_object *obj);
/** Replace an object that is already in the protocol list with a new object
 * that has the same id; will silently fail if id not present */
void tracker_replace_existing(
		struct message_tracker *mt, struct wp_object *obj);
struct wp_object *tracker_get(struct message_tracker *mt, uint32_t id);
int init_message_tracker(struct message_tracker *mt);
void cleanup_message_tracker(struct message_tracker *mt);
/** Read message size from header; the 8 bytes beyond data must exist */
int peek_message_size(const void *data);
/** Generate the second uint32_t field of a message header; this assumes no
 * fds or equivalently no fd count subfield */
static inline uint32_t message_header_2(uint32_t size_bytes, uint32_t msgno)
{
	return (size_bytes << 16) | msgno;
}
const char *get_nth_packed_string(const char *pack, int n);
enum parse_state { PARSE_KNOWN, PARSE_UNKNOWN, PARSE_ERROR };
/**
 * The return value indicates whether the message was recognized and
 * processed (PARSE_KNOWN), could not be matched to a known object and
 * method or fully parsed (PARSE_UNKNOWN), or carried an inconsistent
 * length header (PARSE_ERROR).
 *
 * The message data payload may be modified and increased in size.
 *
 * The window `chars` should start at the message start, end
 * at its end, and indicate remaining space.
 * The window `fds` should start at the next fd in the queue, and end
 * with the last.
 *
 * The start and end of `chars` will be moved to the new end of the message.
 * The end of `fds` may be moved if any fds are inserted or discarded.
 * The start of fds will be moved, depending on how many fds were consumed.
 */
enum parse_state handle_message(struct globals *g, bool on_display_side,
		bool from_client, struct char_window *chars,
		struct int_window *fds);
/**
 * Given a set of messages and fds, parse the messages, and if indicated
 * by parsing logic, compact the message buffer by removing selected
 * messages, or edit message contents.
 *
 * The `source_bytes` window indicates the range of unread data; its
 * zone start point will be advanced. The `dest_bytes` window indicates
 * the range of written data; its zone end point will be advanced.
 *
 * The file descriptor queue `fds` will have its start advanced, leaving only
 * file descriptors that have not yet been read. Further edits may be made
 * to inject new file descriptors.
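 *
 * Illustrative call pattern (simplified; the real callers in mainloop.c
 * must also deal with partial reads and fd lifetimes):
 *
 *   parse_and_prune_messages(g, on_display_side, from_client,
 *                   &source_bytes, &dest_bytes, &fds);
 *   // source_bytes.zone_start has advanced past all parsed messages;
 *   // dest_bytes.zone_end now marks the end of the edited stream.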
*/ void parse_and_prune_messages(struct globals *g, bool on_display_side, bool from_client, struct char_window *source_bytes, struct char_window *dest_bytes, struct int_window *fds); // handlers.c /** Create a new Wayland protocol object of the given type; some types * produce structs extending from wp_object */ struct wp_object *create_wp_object( uint32_t it, const struct wp_interface *type); /** Type-specific destruction routines, also dereferencing linked shadow_fds */ void destroy_wp_object(struct wp_object *object); extern const struct wp_interface *the_display_interface; #endif // WAYPIPE_PARSING_H waypipe-v0.10.5/waypipe-c/platform.c000066400000000000000000000113761507071711100173370ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #ifndef _GNU_SOURCE #define _GNU_SOURCE #endif #include "config-waypipe.h" #include #include #include #include #include #include #include #include #include #include #if defined(__linux__) && defined(__arm__) #include #include #elif defined(__FreeBSD__) && defined(__arm__) #include #endif #if defined(__linux__) /* memfd_create was introduced in glibc 2.27 */ #if !defined(__GLIBC__) || (__GLIBC__ >= 2 && __GLIBC_MINOR__ >= 27) #define HAS_MEMFD 1 #endif #endif #if defined(__linux__) #define HAS_O_PATH 1 #endif int create_anon_file(void) { int new_fileno; #ifdef HAS_MEMFD new_fileno = memfd_create("waypipe", 0); #elif defined(SHM_ANON) new_fileno = shm_open(SHM_ANON, O_RDWR, 0600); #else // Fallback code. 
Should not be used from multiple threads static int counter = 0; int pid = getpid(); counter++; char tmp_name[64]; sprintf(tmp_name, "/waypipe%d-data_%d", pid, counter); new_fileno = shm_open(tmp_name, O_EXCL | O_RDWR | O_CREAT, 0644); if (new_fileno == -1) { return -1; } (void)shm_unlink(tmp_name); #endif return new_fileno; } int get_hardware_thread_count(void) { return (int)sysconf(_SC_NPROCESSORS_ONLN); } int get_iov_max(void) { return (int)sysconf(_SC_IOV_MAX); } #ifdef HAVE_NEON bool neon_available(void) { /* The actual methods are platform-dependent */ #if defined(__linux__) && defined(__arm__) return (getauxval(AT_HWCAP) & HWCAP_NEON) != 0; #elif defined(__FreeBSD__) && defined(__arm__) unsigned long hwcap = 0; elf_aux_info(AT_HWCAP, &hwcap, sizeof(hwcap)); return (hwcap & HWCAP_NEON) != 0; #endif return true; } #endif static void *align_ptr(void *ptr, size_t alignment) { return (uint8_t *)ptr + ((alignment - (uintptr_t)ptr) % alignment); } void *zeroed_aligned_alloc(size_t bytes, size_t alignment, void **handle) { if (*handle) { /* require a clean handle */ return NULL; } *handle = calloc(bytes + alignment - 1, 1); return align_ptr(*handle, alignment); } void *zeroed_aligned_realloc(size_t old_size_bytes, size_t new_size_bytes, size_t alignment, void *data, void **handle) { /* warning: this might copy a lot of data */ if (new_size_bytes <= 2 * old_size_bytes) { void *old_handle = *handle; ptrdiff_t old_offset = (uint8_t *)data - (uint8_t *)old_handle; void *new_handle = realloc( old_handle, new_size_bytes + alignment - 1); if (!new_handle) { return NULL; } void *new_data = align_ptr(new_handle, alignment); ptrdiff_t new_offset = (uint8_t *)new_data - (uint8_t *)new_handle; if (old_offset != new_offset) { /* realloc broke alignment offset */ memmove((uint8_t *)new_data + new_offset, (uint8_t *)new_data + old_offset, new_size_bytes > old_size_bytes ? old_size_bytes : new_size_bytes); } if (new_size_bytes > old_size_bytes) { memset((uint8_t *)new_data + old_size_bytes, 0, new_size_bytes - old_size_bytes); } *handle = new_handle; return new_data; } else { void *new_handle = calloc(new_size_bytes + alignment - 1, 1); if (!new_handle) { return NULL; } void *new_data = align_ptr(new_handle, alignment); memcpy(new_data, data, new_size_bytes > old_size_bytes ? old_size_bytes : new_size_bytes); free(*handle); *handle = new_handle; return new_data; } } void zeroed_aligned_free(void *data, void **handle) { (void)data; free(*handle); *handle = NULL; } int open_folder(const char *name) { const char *path = name[0] ? 
name : "."; #ifdef HAS_O_PATH return open(path, O_PATH); #else return open(path, O_RDONLY | O_DIRECTORY); #endif } waypipe-v0.10.5/waypipe-c/protocols/000077500000000000000000000000001507071711100173635ustar00rootroot00000000000000waypipe-v0.10.5/waypipe-c/protocols/function_list.txt000066400000000000000000000034241507071711100230070ustar00rootroot00000000000000ext_data_control_offer_v1_req_receive ext_data_control_source_v1_evt_send gtk_primary_selection_offer_req_receive gtk_primary_selection_source_evt_send wl_buffer_evt_release wl_data_offer_req_receive wl_data_source_evt_send wl_display_evt_delete_id wl_display_evt_error wl_display_req_get_registry wl_display_req_sync wl_drm_evt_device wl_drm_req_create_prime_buffer wl_keyboard_evt_keymap wl_registry_evt_global wl_registry_evt_global_remove wl_registry_req_bind wl_shm_req_create_pool wl_shm_pool_req_create_buffer wl_shm_pool_req_resize wl_surface_req_attach wl_surface_req_commit wl_surface_req_damage wl_surface_req_damage_buffer wl_surface_req_set_buffer_transform wl_surface_req_set_buffer_scale wp_presentation_evt_clock_id wp_presentation_feedback_evt_presented wp_presentation_req_feedback xdg_toplevel_req_set_title zwlr_data_control_offer_v1_req_receive zwlr_data_control_source_v1_evt_send zwlr_export_dmabuf_frame_v1_evt_frame zwlr_export_dmabuf_frame_v1_evt_object zwlr_export_dmabuf_frame_v1_evt_ready zwlr_gamma_control_v1_req_set_gamma zwlr_screencopy_frame_v1_evt_ready zwlr_screencopy_frame_v1_req_copy zwp_linux_dmabuf_feedback_v1_evt_done zwp_linux_dmabuf_feedback_v1_evt_format_table zwp_linux_dmabuf_feedback_v1_evt_main_device zwp_linux_dmabuf_feedback_v1_evt_tranche_done zwp_linux_dmabuf_feedback_v1_evt_tranche_target_device zwp_linux_dmabuf_feedback_v1_evt_tranche_formats zwp_linux_dmabuf_feedback_v1_evt_tranche_flags zwp_linux_buffer_params_v1_evt_created zwp_linux_buffer_params_v1_req_add zwp_linux_buffer_params_v1_req_create zwp_linux_buffer_params_v1_req_create_immed zwp_linux_dmabuf_v1_evt_modifier zwp_linux_dmabuf_v1_req_get_default_feedback zwp_linux_dmabuf_v1_req_get_surface_feedback zwp_primary_selection_offer_v1_req_receive zwp_primary_selection_source_v1_evt_send waypipe-v0.10.5/waypipe-c/protocols/meson.build000066400000000000000000000031771507071711100215350ustar00rootroot00000000000000 symgen_path = join_paths(meson.current_source_dir(), 'symgen.py') sendgen_path = join_paths(meson.current_source_dir(), 'sendgen.py') fn_list = join_paths(meson.current_source_dir(), 'function_list.txt') # Include a copy of these protocols in the repository, rather than looking # for packages containing them, to: # a) avoid versioning problems as new protocols/methods are introduced # b) keep the minimum build complexity for waypipe low # c) be able to relay through newer protocols than are default on a system protocols = [ 'wayland.xml', 'xdg-shell.xml', 'presentation-time.xml', 'linux-dmabuf-v1.xml', 'ext-data-control-v1.xml', 'gtk-primary-selection.xml', 'input-method-unstable-v2.xml', 'primary-selection-unstable-v1.xml', 'virtual-keyboard-unstable-v1.xml', 'wlr-screencopy-unstable-v1.xml', 'wlr-export-dmabuf-unstable-v1.xml', 'wlr-data-control-unstable-v1.xml', 'wlr-gamma-control-unstable-v1.xml', 'wayland-drm.xml', ] protocol_folder = meson.project_source_root() / 'protocols/' abs_protocols = [] foreach xml : protocols abs_protocols += join_paths(protocol_folder, xml) endforeach protocols_src = [] protocols_src += custom_target( 'protocol code', output: 'protocols.c', input: abs_protocols, depend_files: [fn_list, 
symgen_path], command: [python3, symgen_path, 'data', fn_list, '@OUTPUT@', '@INPUT@'], ) protocols_src += custom_target( 'protocol header', output: 'protocols.h', input: abs_protocols, depend_files: [fn_list, symgen_path], command: [python3, symgen_path, 'header', fn_list, '@OUTPUT@', '@INPUT@'], ) # For use in test waypipe-v0.10.5/waypipe-c/protocols/sendgen.py000066400000000000000000000154141507071711100213650ustar00rootroot00000000000000#!/usr/bin/env python3 import os, sys, fnmatch import xml.etree.ElementTree as ET """ A static protocol code generator for the task of creating the wire representation of a list of events/requests """ wltype_to_ctypes = { "uint": "uint32_t ", "fixed": "uint32_t ", "int": "int32_t ", "object": "struct wp_objid ", "new_id": "struct wp_objid ", "string": "const char *", "fd": "int ", } def write_enum(ostream, iface_name, enum): enum_name = enum.attrib["name"] is_bitfield = "bitfield" in enum.attrib and enum.attrib["bitfield"] == "true" for entry in enum: if entry.tag != "entry": continue entry_name = entry.attrib["name"] entry_value = entry.attrib["value"] full_name = (iface_name + "_" + enum_name + "_" + entry_name).upper() print("#define {} {}".format(full_name, entry_value), file=ostream) def is_exportable(func_name, export_list): for e in export_list: if fnmatch.fnmatchcase(func_name, e): return True return False def write_func(ostream, iface_name, func, is_request, func_no, export_list): func_name = ( iface_name + "_" + ("req" if is_request else "evt") + "_" + func.attrib["name"] ) for_export = is_exportable(func_name, export_list) if not for_export: return c_sig = ["struct transfer_states *ts", "struct wp_objid " + iface_name + "_id"] w_args = [] num_fd_args = 0 num_reg_args = 0 num_obj_args = 0 num_new_args = 0 num_stretch_args = 0 for arg in func: if arg.tag != "arg": continue arg_name = arg.attrib["name"] arg_type = arg.attrib["type"] arg_interface = arg.attrib["interface"] if "interface" in arg.attrib else None if arg_type == "new_id" and arg_interface is None: # Special case, for wl_registry_bind c_sig.append("const char *interface") c_sig.append("uint32_t version") c_sig.append("struct wp_objid id") w_args.append(("interface", "string", None)) w_args.append(("version", "uint", None)) w_args.append((arg_name, "new_id", None)) num_obj_args += 1 num_new_args += 1 num_reg_args += 3 num_stretch_args += 1 continue if arg_type == "array": c_sig.append("int " + arg_name + "_count") c_sig.append("const uint8_t *" + arg_name + "_val") else: c_sig.append(wltype_to_ctypes[arg_type] + arg_name) w_args.append((arg_name, arg_type, arg_interface)) if arg_type == "fd": num_fd_args += 1 else: num_reg_args += 1 if arg_type == "object" or arg_type == "new_id": num_obj_args += 1 if arg_type == "new_id": num_new_args += 1 if arg_type in ("array", "string"): num_stretch_args += 1 send_signature = "static void send_{}({}) ".format(func_name, ", ".join(c_sig)) W = lambda *x: print(*x, file=ostream) # Write function definition W(send_signature + " {") W("\tts->fd_size = 0;") W("\tts->msg_space[0] = {}.id;".format(iface_name + "_id")) W("\tts->msg_size = 2;") tmp_names = ["ctx"] for i, (arg_name, arg_type, arg_interface) in enumerate(w_args): if arg_type == "array": raise NotImplementedError() continue elif arg_type == "fd": W("\tts->fd_space[ts->fd_size++] = {};".format(arg_name)) continue elif arg_type == "string": W("\tserialize_string(ts, {});".format(arg_name)) continue elif arg_type == "object" or arg_type == "new_id": W("\tts->msg_space[ts->msg_size++] = 
{}.id;".format(arg_name)) elif arg_type == "int": W("\tts->msg_space[ts->msg_size++] = (uint32_t){};".format(arg_name)) elif arg_type == "uint" or arg_type == "fixed": W("\tts->msg_space[ts->msg_size++] = {};".format(arg_name)) else: raise KeyError(arg_type) W("\tts->msg_space[1] = ((uint32_t)ts->msg_size << 18) | {};".format(func_no)) if is_request: W("\tts->send(ts, ts->app, ts->comp);") else: W("\tts->send(ts, ts->comp, ts->app);") W("}") if __name__ == "__main__": req_file, dest = sys.argv[1:3] sources = sys.argv[3:] assert dest.endswith(".h") dest_shortname = dest[:-2] header_flag = dest_shortname.upper().replace("/", "_").replace("-", "_") + "_H" export_list = open(req_file).read().split("\n") with open(dest, "w") as ostream: W = lambda *x: print(*x, file=ostream) W("#ifndef {}".format(header_flag)) W("#include ") W("#include ") W("#include ") W("struct test_state;") W("struct wp_objid { uint32_t id; };") W("struct transfer_states {") W("\tuint32_t msg_space[256];") W("\tint fd_space[16];") W("\tunsigned int msg_size;") W("\tunsigned int fd_size;") W("\tstruct test_state *app;") W("\tstruct test_state *comp;") W( "\tvoid (*send)(struct transfer_states *, struct test_state *src, struct test_state *dst);" ) W("};") # note: this script assumes that serialize_string will be used W("static void serialize_string(struct transfer_states *ts, const char *str) {") W("\tif (str) {") W("\t\tsize_t slen = strlen(str) + 1;") W("\t\tts->msg_space[ts->msg_size] = (uint32_t)slen;") W("\t\tmemcpy(&ts->msg_space[ts->msg_size + 1], str, slen);") W("\t\tts->msg_size += ((uint32_t)slen + 0x7) >> 2;") W("\t} else {") W("\t\tts->msg_space[ts->msg_size++] = 0;") W("\t}") W("}") for source in sorted(sources): tree = ET.parse(source) root = tree.getroot() for interface in root: if interface.tag != "interface": continue iface_name = interface.attrib["name"] func_data = [] nreq, nevt = 0, 0 for item in interface: if item.tag == "enum": write_enum(ostream, iface_name, item) elif item.tag == "request": write_func(ostream, iface_name, item, True, nreq, export_list) nreq += 1 elif item.tag == "event": write_func(ostream, iface_name, item, False, nevt, export_list) nevt += 1 elif item.tag == "description": pass else: raise Exception(item.tag) W("#endif /* {} */".format(header_flag)) waypipe-v0.10.5/waypipe-c/protocols/symgen.py000077500000000000000000000356771507071711100212640ustar00rootroot00000000000000#!/usr/bin/env python3 import os, sys, fnmatch import xml.etree.ElementTree as ET import argparse """ A static protocol code generator. """ wltype_to_ctypes = { "uint": "uint32_t ", "fixed": "uint32_t ", "int": "int32_t ", "object": "struct wp_object *", "new_id": "struct wp_object *", "string": "const char *", "fd": "int ", } def superstring(a, b): na, nb = len(a), len(b) if nb > na: b, a, nb, na = a, b, na, nb # A contains B for i in range(na - nb + 1): if a[i : nb + i] == b: return a # suffix of B is prefix of A ba_overlap = 0 for i in range(1, nb): if b[-i:] == a[:i]: ba_overlap = i # suffix of A is prefix of B ab_overlap = 0 for i in range(1, nb): if a[-i:] == b[:i]: ab_overlap = i if ba_overlap > ab_overlap: return b + a[ba_overlap:] else: return a + b[ab_overlap:] def get_offset(haystack, needle): for i in range(len(haystack) - len(needle) + 1): if haystack[i : i + len(needle)] == needle: return i return None def shortest_superstring(strings): """ Given strings L_1,...L_n over domain U, report an approximation of the shortest superstring of the lists, and offsets of the L_i into this string. 
Has O(n^3) runtime; O(n^2 polylog) is possible. """ if not len(strings): return None pool = [] for s in strings: if s not in pool: pool.append(s) while len(pool) > 1: max_overlap = 0 best = None for i in range(len(pool)): for j in range(i): d = len(pool[i]) + len(pool[j]) - len(superstring(pool[i], pool[j])) if d >= max_overlap: max_overlap = d best = (j, i) s = superstring(pool[best[0]], pool[best[1]]) del pool[best[1]] del pool[best[0]] pool.append(s) sstring = pool[0] for s in strings: assert get_offset(sstring, s) != None, ("substring property", sstring, s) return sstring def write_enum(is_header, ostream, iface_name, enum): if not is_header: return enum_name = enum.attrib["name"] is_bitfield = "bitfield" in enum.attrib and enum.attrib["bitfield"] == "true" long_name = iface_name + "_" + enum_name print("enum " + long_name + " {", file=ostream) for entry in enum: if entry.tag != "entry": continue entry_name = entry.attrib["name"] entry_value = entry.attrib["value"] full_name = long_name.upper() + "_" + entry_name.upper() print("\t" + full_name + " = " + entry_value + ",", file=ostream) print("};", file=ostream) def write_version(is_header, ostream, iface_name, version): if not is_header: return print( "#define " + iface_name.upper() + "_INTERFACE_VERSION " + str(version), file=ostream, ) def is_exportable(func_name, export_list): for e in export_list: if fnmatch.fnmatchcase(func_name, e): return True return False def write_func(is_header, ostream, func_name, func): c_sig = ["struct context *ctx"] w_args = [] num_fd_args = 0 num_reg_args = 0 num_obj_args = 0 num_new_args = 0 num_stretch_args = 0 for arg in func: if arg.tag != "arg": continue arg_name = arg.attrib["name"] arg_type = arg.attrib["type"] arg_interface = arg.attrib["interface"] if "interface" in arg.attrib else None if arg_type == "new_id" and arg_interface is None: # Special case, for wl_registry_bind c_sig.append("const char *interface") c_sig.append("uint32_t version") c_sig.append("struct wp_object *id") w_args.append(("interface", "string", None)) w_args.append(("version", "uint", None)) w_args.append((arg_name, "new_id", None)) num_obj_args += 1 num_new_args += 1 num_reg_args += 3 num_stretch_args += 1 continue if arg_type == "array": c_sig.append("uint32_t " + arg_name + "_count") c_sig.append("const uint8_t *" + arg_name + "_val") else: c_sig.append(wltype_to_ctypes[arg_type] + arg_name) w_args.append((arg_name, arg_type, arg_interface)) if arg_type == "fd": num_fd_args += 1 else: num_reg_args += 1 if arg_type == "object" or arg_type == "new_id": num_obj_args += 1 if arg_type == "new_id": num_new_args += 1 if arg_type in ("array", "string"): num_stretch_args += 1 do_signature = "void do_{}({});".format(func_name, ", ".join(c_sig)) handle_signature = "static void call_{}(struct context *ctx, const uint32_t *payload, const int *fds, struct message_tracker *mt)".format( func_name ) W = lambda *x: print(*x, file=ostream) if is_header: W(do_signature) if not is_header: # Write function definition W(do_signature) W(handle_signature + " {") if num_reg_args > 0: W("\tunsigned int i = 0;") if num_fd_args > 0: W("\tunsigned int k = 0;") tmp_names = ["ctx"] n_fds_left = num_fd_args n_reg_left = num_reg_args for i, (arg_name, arg_type, arg_interface) in enumerate(w_args): if arg_type == "array": n_reg_left -= 1 W( "\tconst uint8_t *arg{}_b = (const uint8_t *)&payload[i + 1];".format( i ) ) W("\tuint32_t arg{}_a = payload[i];".format(i)) if n_reg_left > 0: W("\ti += 1 + (unsigned int)((arg{}_a + 0x3) >> 2);".format(i)) 
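            # Rounding note: an array argument occupies one 32-bit length
            # word plus ceil(len/4) payload words; the generated
            # (x + 0x3) >> 2 expression matches the (payload[pos - 1] + 3) / 4
            # stride that size_check() in parsing.c uses when skipping arrays.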
tmp_names.append("arg{}_a".format(i)) tmp_names.append("arg{}_b".format(i)) continue tmp_names.append("arg{}".format(i)) if arg_type == "fd": n_fds_left -= 1 W("\tint arg{} = fds[{}];".format(i, "k++" if n_fds_left > 0 else "k")) continue n_reg_left -= 1 if arg_type == "string": W("\tconst char *arg{} = (const char *)&payload[i + 1];".format(i)) W("\tif (!payload[i]) arg{} = NULL;".format(i)) if n_reg_left > 0: W("\ti += 1 + ((payload[i] + 0x3) >> 2);") continue i_incr = "i++" if n_reg_left > 0 else "i" if arg_type == "object" or arg_type == "new_id": if arg_interface is None: intf_str = "NULL" else: intf_str = "&intf_" + arg_interface W( "\tstruct wp_object *arg{} = get_object(mt, payload[{}], {});".format( i, i_incr, intf_str ) ) elif arg_type == "int": W("\tint32_t arg{} = (int32_t)payload[{}];".format(i, i_incr)) elif arg_type == "uint" or arg_type == "fixed": W("\tuint32_t arg{} = payload[{}];".format(i, i_incr)) W("\tdo_{}({});".format(func_name, ", ".join(tmp_names))) if num_obj_args == 0: W("\t(void)mt;") if num_fd_args == 0: W("\t(void)fds;") if num_reg_args == 0: W("\t(void)payload;") W("}") def load_msg_data(func_name, func, for_export): w_args = [] for arg in func: if arg.tag != "arg": continue arg_name = arg.attrib["name"] arg_type = arg.attrib["type"] arg_interface = arg.attrib["interface"] if "interface" in arg.attrib else None if arg_type == "new_id" and arg_interface is None: w_args.append(("interface", "string", None)) w_args.append(("version", "uint", None)) w_args.append((arg_name, "new_id", None)) else: w_args.append((arg_name, arg_type, arg_interface)) new_objs = [] for arg_name, arg_type, arg_interface in w_args: if arg_type == "new_id": new_objs.append( "&intf_" + arg_interface if arg_interface is not None else "NULL" ) # gap coding: 0=end,1=new_obj,2=array,3=string num_fd_args = 0 gaps = [0] gap_ends = [] for arg_name, arg_type, arg_interface in w_args: if arg_type == "fd": num_fd_args += 1 continue gaps[-1] += 1 if arg_type in ("new_id", "string", "array"): gap_ends.append({"new_id": 1, "string": 3, "array": 2}[arg_type]) gaps.append(0) gap_ends.append(0) gap_codes = [str(g * 4 + e) for g, e in zip(gaps, gap_ends)] is_destructor = "type" in func.attrib and func.attrib["type"] == "destructor" is_request = item.tag == "request" short_name = func.attrib["name"] return ( is_request, func_name, short_name, new_objs, gap_codes, is_destructor, num_fd_args, for_export, ) def write_interface( ostream, iface_name, func_data, gap_code_array, new_obj_array, dest_name ): reqs, evts = [], [] for x in func_data: if x[0]: reqs.append(x) else: evts.append(x) W = lambda *x: print(*x, file=ostream) if len(reqs) > 0 or len(evts) > 0: W("static const struct msg_data msgs_" + iface_name + "[] = {") msg_names = [] for x in reqs + evts: ( is_request, func_name, short_name, new_objs, gap_codes, is_destructor, num_fd_args, for_export, ) = x msg_names.append(short_name) mda = [] mda.append( "gaps_{} + {}".format(dest_name, get_offset(gap_code_array, gap_codes)) ) if len(new_objs) > 0: mda.append( "objt_{} + {}".format(dest_name, get_offset(new_obj_array, new_objs)) ) else: mda.append("NULL") mda.append(("call_" + func_name) if for_export else "NULL") mda.append(str(num_fd_args)) mda.append("true" if is_destructor else "false") W("\t{" + ", ".join(mda) + "},") mcn = "NULL" if len(reqs) > 0 or len(evts) > 0: W("};") mcn = "msgs_" + iface_name W("const struct wp_interface intf_" + iface_name + " = {") W("\t" + mcn + ",") W("\t" + str(len(reqs)) + ",") W("\t" + str(len(evts)) + ",") 
W('\t"{}",'.format(iface_name)) W('\t"{}",'.format("\\0".join(msg_names))) W("};") if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("mode", help="Either 'header' or 'data'.") parser.add_argument( "export_list", help="List of events/requests which need parsing." ) parser.add_argument("output_file", help="C file to create.") parser.add_argument("protocols", nargs="+", help="XML protocol files to use.") args = parser.parse_args() is_header = {"data": False, "header": True}[args.mode] if is_header: assert args.output_file[-2:] == ".h" else: assert args.output_file[-2:] == ".c" dest_name = os.path.basename(args.output_file)[:-2].replace("-", "_") export_list = open(args.export_list).read().split("\n") intfset = set() for source in args.protocols: tree = ET.parse(source) root = tree.getroot() for intf in root: if intf.tag == "interface": intfset.add(intf.attrib["name"]) for msg in intf: for arg in msg: if "interface" in arg.attrib: intfset.add(arg.attrib["interface"]) interfaces = sorted(intfset) header_guard = "{}_H".format(dest_name.upper()) with open(args.output_file, "w") as ostream: W = lambda *x: print(*x, file=ostream) if is_header: W("#ifndef {}".format(header_guard)) W("#define {}".format(header_guard)) W() W('#include "symgen_types.h"') if not is_header: W("#include ") for intf in interfaces: W("extern const struct wp_interface intf_{};".format(intf)) gap_code_list = [] new_obj_list = [] interface_data = [] for source in sorted(args.protocols): tree = ET.parse(source) root = tree.getroot() for interface in root: if interface.tag != "interface": continue iface_name = interface.attrib["name"] write_version( is_header, ostream, iface_name, interface.attrib["version"] ) func_data = [] for item in interface: if item.tag == "enum": write_enum(is_header, ostream, iface_name, item) elif item.tag == "request" or item.tag == "event": is_req = item.tag == "request" func_name = ( iface_name + "_" + ("req" if is_req else "evt") + "_" + item.attrib["name"] ) for_export = is_exportable(func_name, export_list) if for_export: write_func(is_header, ostream, func_name, item) if not is_header: func_data.append(load_msg_data(func_name, item, for_export)) elif item.tag == "description": pass else: raise Exception(item.tag) for x in func_data: gap_code_list.append(x[4]) new_obj_list.append(x[3]) interface_data.append((iface_name, func_data)) if not is_header: gap_code_array = shortest_superstring(gap_code_list) new_obj_array = shortest_superstring(new_obj_list) if new_obj_array is not None: W("static const struct wp_interface *objt_" + dest_name + "[] = {") W("\t" + ",\n\t".join(new_obj_array)) W("};") if gap_code_array is not None: W("static const uint16_t gaps_" + dest_name + "[] = {") W("\t" + ",\n\t".join(gap_code_array)) W("};") for iface_name, func_data in interface_data: write_interface( ostream, iface_name, func_data, gap_code_array, new_obj_array, dest_name, ) if is_header: W() W("#endif /* {} */".format(header_guard)) waypipe-v0.10.5/waypipe-c/protocols/symgen_types.h000066400000000000000000000025461507071711100222710ustar00rootroot00000000000000#ifndef SYMGEN_TYPES_H #define SYMGEN_TYPES_H #include #include struct context; struct message_tracker; struct wp_object; typedef void (*wp_callfn_t)(struct context *ctx, const uint32_t *payload, const int *fds, struct message_tracker *mt); #define GAP_CODE_END 0x0 #define GAP_CODE_OBJ 0x1 #define GAP_CODE_ARR 0x2 #define GAP_CODE_STR 0x3 struct msg_data { /* Number of 4-byte blocks until next nontrivial input. 
* (Note: 16-bit length is sufficient since message lengths also 16-bit) * Lowest 2 bits indicate if what follows is end/obj/array/string */ const uint16_t* gaps; /* Pointer to new object types, can be null if none indicated */ const struct wp_interface **new_objs; /* Function pointer to parse + invoke do_ handler */ const wp_callfn_t call; /* Number of associated file descriptors */ const int16_t n_fds; /* Whether message destroys the object */ bool is_destructor; }; struct wp_interface { /* msgs[0..nreq-1] are reqs; msgs[nreq..nreq+nevt-1] are evts */ const struct msg_data *msgs; const int nreq, nevt; /* The name of the interface */ const char *name; /* The names of the messages, in order; stored tightly packed */ const char *msg_names; }; /* User should define this function. */ struct wp_object *get_object(struct message_tracker *mt, uint32_t id, const struct wp_interface *intf); #endif /* SYMGEN_TYPES_H */ waypipe-v0.10.5/waypipe-c/secctx.c000066400000000000000000000073231507071711100170010ustar00rootroot00000000000000#include "security-context-v1-protocol.h" #include "util.h" #include #include #include #include #include static struct wp_security_context_manager_v1 *security_context_manager = NULL; static struct wp_security_context_v1 *security_context = NULL; static int listen_fd = -1; static int close_fd[2] = {-1, -1}; static void registry_handle_global(void *data, struct wl_registry *registry, uint32_t name, const char *interface, uint32_t version) { (void)data; (void)version; if (strcmp(interface, "wp_security_context_manager_v1") == 0) { security_context_manager = wl_registry_bind(registry, name, &wp_security_context_manager_v1_interface, 1); } } static void registry_handle_global_remove( void *data, struct wl_registry *registry, uint32_t name) { (void)data; (void)registry; (void)name; } static const struct wl_registry_listener registry_listener = { .global = registry_handle_global, .global_remove = registry_handle_global_remove}; void close_security_context(void) { if (close_fd[1] >= 0) { close(close_fd[1]); close_fd[1] = -1; } if (listen_fd >= 0) { close(listen_fd); listen_fd = -1; } } int create_security_context(const char *sock_path, const char *engine, const char *instance_id, const char *app_id) { struct wl_display *display = NULL; struct wl_registry *registry = NULL; int res = -1; wp_debug("Enabling wayland security context"); display = wl_display_connect(NULL); if (display == NULL) { wp_error("Failed to connect to the Wayland compositor"); goto cleanup; } registry = wl_display_get_registry(display); if (registry == NULL) { wp_error("Failed to get Wayland display registry"); goto cleanup; } wl_registry_add_listener(registry, ®istry_listener, NULL); wl_display_dispatch(display); if (wl_display_roundtrip(display) == -1) { wp_error("Failed to execute display roundtrip"); goto cleanup; } if (!security_context_manager) { wp_error("Security context is not supported by the Wayland compositor"); goto cleanup; } listen_fd = socket(AF_UNIX, SOCK_STREAM, 0); if (listen_fd < 0) { wp_error("Failed to create a Unix socket for security context"); goto cleanup; } struct sockaddr_un sockaddr = {0}; sockaddr.sun_family = AF_UNIX; strncpy(sockaddr.sun_path, sock_path, sizeof(sockaddr.sun_path) - 1); if (bind(listen_fd, (struct sockaddr *)&sockaddr, sizeof(sockaddr)) != 0) { wp_error("Failed to bind the Unix socket for the security context"); goto cleanup; } if (listen(listen_fd, 0) != 0) { wp_error("Failed to listen on the Unix socket for the security context"); goto cleanup; } if 
(pipe(close_fd)) { wp_error("Failed to create a pipe for the security context"); goto cleanup; } security_context = wp_security_context_manager_v1_create_listener( security_context_manager, listen_fd, close_fd[0]); if (security_context == NULL) { wp_error("Failed to create a security context listener"); goto cleanup; } wp_security_context_v1_set_sandbox_engine(security_context, engine); wp_security_context_v1_set_instance_id(security_context, instance_id); wp_security_context_v1_set_app_id(security_context, app_id); wp_security_context_v1_commit(security_context); wp_security_context_v1_destroy(security_context); if (wl_display_roundtrip(display) < 0) { wp_error("Failed to execute display roundtrip"); goto cleanup; } wp_debug("Successfully enabled Wayland security context"); res = 0; cleanup: if (res) { close_security_context(); } if (security_context_manager) { wp_security_context_manager_v1_destroy( security_context_manager); } if (registry) { wl_registry_destroy(registry); } if (display) { wl_display_disconnect(display); } return res; } waypipe-v0.10.5/waypipe-c/server.c000066400000000000000000000533451507071711100170230ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "main.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include static inline uint32_t conntoken_header(const struct main_config *config, bool reconnectable, bool update) { uint32_t header = (WAYPIPE_PROTOCOL_VERSION << 16) | CONN_FIXED_BIT; header |= (update ? CONN_UPDATE_BIT : 0); header |= (reconnectable ? CONN_RECONNECTABLE_BIT : 0); // TODO: stop compile gating the 'COMP' enum entries #ifdef HAS_LZ4 header |= (config->compression == COMP_LZ4 ? CONN_LZ4_COMPRESSION : 0); #endif #ifdef HAS_ZSTD header |= (config->compression == COMP_ZSTD ? CONN_ZSTD_COMPRESSION : 0); #endif if (config->compression == COMP_NONE) { header |= CONN_NO_COMPRESSION; } if (config->video_if_possible) { header |= (config->video_fmt == VIDEO_H264 ? CONN_H264_VIDEO : 0); header |= (config->video_fmt == VIDEO_VP9 ? CONN_VP9_VIDEO : 0); header |= (config->video_fmt == VIDEO_AV1 ? CONN_AV1_VIDEO : 0); } else { header |= CONN_NO_VIDEO; } #ifdef HAS_DMABUF header |= (config->no_gpu ? CONN_NO_DMABUF_SUPPORT : 0); #else header |= CONN_NO_DMABUF_SUPPORT; #endif return header; } /** Fill the key for a token using random data with a very low accidental * collision probability. 
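 * Back-of-envelope (assuming the /dev/urandom read succeeds): the key
 * holds 3x32 = 96 random bits, so even with on the order of 2^16 live
 * tokens the birthday bound keeps the chance of any collision near
 * 2^-65.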
Whatever data was in the key before will be shuffled * in.*/ static void fill_random_key(struct connection_token *token) { token->key[0] *= 13; token->key[1] *= 17; token->key[2] *= 29; struct timespec tp; clock_gettime(CLOCK_REALTIME, &tp); token->key[0] += (uint32_t)getpid(); token->key[1] += 1 + (uint32_t)tp.tv_sec; token->key[2] += 2 + (uint32_t)tp.tv_nsec; int devrand = open("/dev/urandom", O_RDONLY | O_NOCTTY); if (devrand != -1) { uint32_t tmp[3]; errno = 0; (void)read(devrand, tmp, sizeof(tmp)); checked_close(devrand); token->key[0] ^= tmp[0]; token->key[1] ^= tmp[1]; token->key[2] ^= tmp[2]; } } static int read_path(int control_pipe, char *path, size_t path_space) { /* It is unlikely that a signal would interrupt a read of a ~100 byte * sockaddr; and if used properly, the control pipe should never be * sent much more data than that */ ssize_t amt = read(control_pipe, path, path_space - 1); if (amt == -1) { wp_error("Failed to read from control pipe: %s", strerror(errno)); return -1; } else if (amt == (ssize_t)path_space - 1) { wp_error("Too much data sent to control pipe\n"); return -1; } path[amt] = '\0'; return 0; } static int run_single_server_reconnector(int cwd_fd, int control_pipe, int linkfd, const struct connection_token *flagged_token) { int retcode = EXIT_SUCCESS; while (!shutdown_flag) { struct pollfd pf[2]; pf[0].fd = control_pipe; pf[0].events = POLLIN; pf[0].revents = 0; pf[1].fd = linkfd; pf[1].events = 0; pf[1].revents = 0; int r = poll(pf, 2, -1); if (r == -1 && errno == EINTR) { continue; } else if (r == -1) { retcode = EXIT_FAILURE; break; } else if (r == 0) { // Nothing to read continue; } if (pf[1].revents & POLLHUP) { /* Hang up, main thread has closed its link */ break; } if (pf[0].revents & POLLIN) { char sockaddr_folder[512]; if (read_path(control_pipe, sockaddr_folder, sizeof(sockaddr_folder)) == -1) { continue; } struct sockaddr_un sockaddr_filename = {0}; if (split_socket_path(sockaddr_folder, &sockaddr_filename)) { continue; } struct socket_path sockaddr_path = { .filename = &sockaddr_filename, .folder = sockaddr_folder, }; int new_conn = -1; if (connect_to_socket(cwd_fd, sockaddr_path, NULL, &new_conn) == -1) { wp_error("Socket path \"%s\"/\"%s\" was invalid: %s", sockaddr_path.folder, sockaddr_path.filename ->sun_path, strerror(errno)); /* Socket path was invalid */ continue; } if (write(new_conn, flagged_token, sizeof(*flagged_token)) != sizeof(*flagged_token)) { wp_error("Failed to write to new connection: %s", strerror(errno)); checked_close(new_conn); continue; } if (send_one_fd(linkfd, new_conn) == -1) { wp_error("Failed to send new connection to subprocess: %s", strerror(errno)); } checked_close(new_conn); } } checked_close(control_pipe); checked_close(linkfd); return retcode; } static int run_single_server(int cwd_fd, int control_pipe, struct socket_path socket_path, bool unlink_at_end, int server_link, const struct main_config *config) { int chanfd = -1, chanfolder_fd = -1; if (connect_to_socket(cwd_fd, socket_path, &chanfolder_fd, &chanfd) == -1) { goto fail_srv; } /* Only unlink the socket if it actually was a socket */ if (unlink_at_end) { unlink_at_folder(cwd_fd, chanfolder_fd, socket_path.folder, socket_path.filename->sun_path); } checked_close(chanfolder_fd); bool reconnectable = control_pipe != -1; struct connection_token token; memset(&token, 0, sizeof(token)); fill_random_key(&token); token.header = conntoken_header(config, reconnectable, false); wp_debug("Connection token header: %08" PRIx32, token.header); if (write(chanfd, &token, 
sizeof(token)) != sizeof(token)) { wp_error("Failed to write connection token to socket"); goto fail_cfd; } int linkfds[2] = {-1, -1}; if (control_pipe != -1) { if (socketpair(AF_UNIX, SOCK_STREAM, 0, linkfds) == -1) { wp_error("Failed to create socketpair: %s", strerror(errno)); goto fail_cfd; } pid_t reco_pid = fork(); if (reco_pid == -1) { wp_error("Fork failure: %s", strerror(errno)); checked_close(linkfds[0]); checked_close(linkfds[1]); goto fail_cfd; } else if (reco_pid == 0) { checked_close(chanfd); checked_close(linkfds[0]); checked_close(server_link); /* Further uses of the token will be to reconnect */ token.header |= CONN_UPDATE_BIT; int rc = run_single_server_reconnector(cwd_fd, control_pipe, linkfds[1], &token); exit(rc); } checked_close(control_pipe); checked_close(linkfds[1]); } int ret = main_interface_loop( chanfd, server_link, linkfds[0], config, false); return ret; fail_cfd: checked_close(chanfd); fail_srv: checked_close(server_link); return EXIT_FAILURE; } static int handle_new_server_connection(int cwd_fd, struct socket_path current_sockaddr, int control_pipe, int wdisplay_socket, int appfd, struct conn_map *connmap, const struct main_config *config, const struct connection_token *new_token) { bool reconnectable = control_pipe != -1; if (reconnectable && buf_ensure_size(connmap->count + 1, sizeof(struct conn_addr), &connmap->size, (void **)&connmap->data) == -1) { wp_error("Failed to allocate memory to track new connection"); goto fail_appfd; } int chanfd = -1; if (!config->vsock) { if (connect_to_socket(cwd_fd, current_sockaddr, NULL, &chanfd) == -1) { goto fail_appfd; } } else { #ifdef HAS_VSOCK if (connect_to_vsock(config->vsock_port, config->vsock_cid, config->vsock_to_host, &chanfd) == -1) { goto fail_appfd; } #endif } if (write(chanfd, new_token, sizeof(*new_token)) != sizeof(*new_token)) { wp_error("Failed to write connection token: %s", strerror(errno)); goto fail_chanfd; } int linksocks[2] = {-1, -1}; if (reconnectable) { if (socketpair(AF_UNIX, SOCK_STREAM, 0, linksocks) == -1) { wp_error("Socketpair for process link failed: %s", strerror(errno)); goto fail_chanfd; } } pid_t npid = fork(); if (npid == 0) { // Run forked process, with the only shared state being the // new channel socket checked_close(wdisplay_socket); if (reconnectable) { checked_close(control_pipe); checked_close(linksocks[0]); } for (int i = 0; i < connmap->count; i++) { if (connmap->data[i].linkfd != -1) { checked_close(connmap->data[i].linkfd); } } int rc = main_interface_loop( chanfd, appfd, linksocks[1], config, false); check_unclosed_fds(); exit(rc); } else if (npid == -1) { wp_error("Fork failure: %s", strerror(errno)); if (reconnectable) { checked_close(linksocks[0]); checked_close(linksocks[1]); } goto fail_chanfd; } // This process no longer needs the application connection checked_close(chanfd); checked_close(appfd); if (reconnectable) { checked_close(linksocks[1]); connmap->data[connmap->count++] = (struct conn_addr){ .token = *new_token, .pid = npid, .linkfd = linksocks[0], }; } return 0; fail_chanfd: checked_close(chanfd); fail_appfd: checked_close(appfd); return -1; } static int update_connections(int cwd_fd, struct socket_path new_sock, int new_sock_folder, struct conn_map *connmap) { /* TODO: what happens if there's a partial failure? 
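 * As currently written, a failure to connect to the new socket makes
 * this function return early, so connections processed earlier in the
 * loop have already been handed the new address while later ones keep
 * the old path; a failed token write only skips that one connection.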
*/ for (int i = 0; i < connmap->count; i++) { int chanfd = -1; if (connect_to_socket_at_folder(cwd_fd, new_sock_folder, new_sock.filename, &chanfd) == -1) { wp_error("Failed to connect to socket at \"%s\"/\"%s\": %s", new_sock.folder, new_sock.filename->sun_path, strerror(errno)); return -1; } struct connection_token flagged_token = connmap->data[i].token; flagged_token.header |= CONN_UPDATE_BIT; if (write(chanfd, &flagged_token, sizeof(flagged_token)) != sizeof(flagged_token)) { wp_error("Failed to write token to replacement connection: %s", strerror(errno)); checked_close(chanfd); continue; } /* ignore return value -- errors like the other process having * closed the connection do not count as this processes' problem */ (void)send_one_fd(connmap->data[i].linkfd, chanfd); checked_close(chanfd); } return 0; } static int run_multi_server(int cwd_fd, int control_pipe, struct socket_path socket_addr, bool unlink_at_end, int wdisplay_socket, const struct main_config *config, pid_t *child_pid) { struct conn_map connmap = {.data = NULL, .count = 0, .size = 0}; struct sockaddr_un current_sockaddr_filename = *socket_addr.filename; char current_sockaddr_folder[256] = {0}; int retcode = EXIT_SUCCESS; struct socket_path current_sockaddr = socket_addr; // TODO: grab the folder, on startup; then connectat within the folder // we do not need to remember the folder name, thankfully struct pollfd pfs[2]; pfs[0].fd = wdisplay_socket; pfs[0].events = POLLIN; pfs[0].revents = 0; pfs[1].fd = control_pipe; pfs[1].events = POLLIN; pfs[1].revents = 0; struct connection_token token; memset(&token, 0, sizeof(token)); token.header = conntoken_header(config, control_pipe != -1, false); wp_debug("Connection token header: %08" PRIx32, token.header); int current_folder_fd = open_folder(current_sockaddr.folder); if (current_folder_fd == -1) { wp_error("Failed to open folder '%s' for connection socket: %s", current_sockaddr.folder, strerror(errno)); retcode = EXIT_FAILURE; shutdown_flag = true; } while (!shutdown_flag) { int status = -1; if (wait_for_pid_and_clean( child_pid, &status, WNOHANG, &connmap)) { wp_debug("Child program has died, exiting"); retcode = WEXITSTATUS(status); break; } int r = poll(pfs, 1 + (control_pipe != -1), -1); if (r == -1) { if (errno == EINTR) { // If SIGCHLD, we will check the child. // If SIGINT, the loop ends continue; } wp_error("Poll failed: %s", strerror(errno)); retcode = EXIT_FAILURE; break; } else if (r == 0) { continue; } if (pfs[1].revents & POLLIN) { struct sockaddr_un new_sockaddr_filename = {0}; char new_sockaddr_folder[sizeof( current_sockaddr_folder)] = {0}; if (read_path(control_pipe, new_sockaddr_folder, sizeof(new_sockaddr_folder)) == -1) { goto end_new_path; } if (split_socket_path(new_sockaddr_folder, &new_sockaddr_filename) == -1) { goto end_new_path; } struct socket_path new_sockaddr = { .filename = &new_sockaddr_filename, .folder = new_sockaddr_folder, }; int new_folder_fd = open_folder(new_sockaddr_folder); if (new_folder_fd == -1) { wp_error("Failed to open folder '%s' for proposed reconnection socket: %s", new_sockaddr_folder, strerror(errno)); goto end_new_path; } if (update_connections(cwd_fd, new_sockaddr, new_folder_fd, &connmap) == -1) { /* failed to connect to the new socket */ goto end_new_path; } bool same_path = !strcmp(current_sockaddr.filename ->sun_path, new_sockaddr.filename ->sun_path) && files_equiv(current_folder_fd, new_folder_fd); /* If switching connections succeeded, adopt the new * socket. 
(We avoid deleting if the old socket was * replaced by a new socket at the same name in the * same folder.) */ if (unlink_at_end && !same_path) { unlink_at_folder(cwd_fd, current_folder_fd, current_sockaddr.folder, current_sockaddr.filename ->sun_path); } checked_close(current_folder_fd); current_folder_fd = new_folder_fd; memcpy(current_sockaddr_folder, new_sockaddr_folder, sizeof(current_sockaddr_folder)); memcpy(&current_sockaddr_filename, &new_sockaddr_filename, sizeof(current_sockaddr_filename)); current_sockaddr = (struct socket_path){ .filename = &current_sockaddr_filename, .folder = current_sockaddr_folder, }; end_new_path:; } if (pfs[0].revents & POLLIN) { int appfd = accept(wdisplay_socket, NULL, NULL); if (appfd == -1) { if (errno == EAGAIN || errno == EWOULDBLOCK) { // The wakeup may have been // spurious continue; } wp_error("Connection failure: %s", strerror(errno)); retcode = EXIT_FAILURE; break; } else { wp_debug("New connection to server"); fill_random_key(&token); if (handle_new_server_connection(cwd_fd, current_sockaddr, control_pipe, wdisplay_socket, appfd, &connmap, config, &token) == -1) { retcode = EXIT_FAILURE; break; } } } } if (unlink_at_end) { unlink_at_folder(cwd_fd, current_folder_fd, current_sockaddr.folder, current_sockaddr.filename->sun_path); } checked_close(wdisplay_socket); if (control_pipe != -1) { checked_close(control_pipe); } checked_close(current_folder_fd); for (int i = 0; i < connmap.count; i++) { checked_close(connmap.data[i].linkfd); } free(connmap.data); return retcode; } /* requires >=256 byte shell/shellname buffers */ static void setup_login_shell_command(char shell[static 256], char shellname[static 256], bool login_shell) { strcpy(shellname, "-sh"); strcpy(shell, "/bin/sh"); // Select the preferred shell on the system char *shell_env = getenv("SHELL"); if (!shell_env) { return; } int len = (int)strlen(shell_env); if (len >= 254) { wp_error("Environment variable $SHELL is too long at %d bytes, falling back to %s", len, shell); return; } strcpy(shell, shell_env); if (login_shell) { /* Create a login shell.
The convention for this is to prefix * the name of the shell with a single hyphen */ int start = len; for (; start-- > 0;) { if (shell[start] == '/') { start++; break; } } shellname[0] = '-'; strcpy(shellname + 1, shell + start); } else { strcpy(shellname, shell); } } extern char **environ; int run_server(int cwd_fd, struct socket_path socket_path, const char *display_suffix, const char *control_path, const struct main_config *config, bool oneshot, bool unlink_at_end, char *const app_argv[], bool login_shell_if_backup) { wp_debug("I'm a server connecting on %s x %s, running: %s", socket_path.folder, socket_path.filename->sun_path, app_argv[0]); wp_debug("version: %s", WAYPIPE_VERSION); struct sockaddr_un display_path; memset(&display_path, 0, sizeof(display_path)); int display_folder_fd = -1; // Setup connection to program int wayland_socket = -1, server_link = -1, wdisplay_socket = -1; if (oneshot) { int csockpair[2]; if (socketpair(AF_UNIX, SOCK_STREAM, 0, csockpair) == -1) { wp_error("Socketpair failed: %s", strerror(errno)); return EXIT_FAILURE; } wayland_socket = csockpair[1]; server_link = csockpair[0]; /* only set cloexec for `server_link`, as `wayland_socket` * is meant to be inherited */ if (set_cloexec(server_link) == -1) { close(wayland_socket); close(server_link); return EXIT_FAILURE; } } else { // Bind a socket for WAYLAND_DISPLAY, and listen int nmaxclients = 128; char display_folder[512]; memset(&display_folder, 0, sizeof(display_folder)); if (display_suffix[0] == '/') { if (strlen(display_suffix) >= sizeof(display_folder)) { wp_error("Absolute path '%s' specified for WAYLAND_DISPLAY is too long (%zu bytes >= %zu)", display_suffix, strlen(display_suffix), sizeof(display_folder)); return EXIT_FAILURE; } strcpy(display_folder, display_suffix); } else { const char *xdg_dir = getenv("XDG_RUNTIME_DIR"); if (!xdg_dir) { wp_error("Env. var XDG_RUNTIME_DIR not available, cannot place display socket for WAYLAND_DISPLAY=\"%s\"", display_suffix); return EXIT_FAILURE; } if (multi_strcat(display_folder, sizeof(display_folder), xdg_dir, "/", display_suffix, NULL) == 0) { wp_error("Path '%s'/'%s' specified for WAYLAND_DISPLAY is too long (%zu bytes >= %zu)", xdg_dir, display_suffix, strlen(xdg_dir) + 1 + strlen(display_suffix), sizeof(display_folder)); return EXIT_FAILURE; } } if (split_socket_path(display_folder, &display_path) == -1) { return EXIT_FAILURE; } struct socket_path path; path.filename = &display_path; path.folder = display_folder; if (setup_nb_socket(cwd_fd, path, nmaxclients, &display_folder_fd, &wdisplay_socket) == -1) { // Error messages already made return EXIT_FAILURE; } if (set_cloexec(display_folder_fd) == -1 || set_cloexec(wdisplay_socket) == -1) { close(display_folder_fd); close(wdisplay_socket); return EXIT_FAILURE; } } /* Set env variables for child process */ if (oneshot) { char bufs2[16]; sprintf(bufs2, "%d", wayland_socket); // Provide the other socket in the pair to child // application unsetenv("WAYLAND_DISPLAY"); setenv("WAYLAND_SOCKET", bufs2, 1); } else { // Since Wayland 1.15, absolute paths are supported in // WAYLAND_DISPLAY unsetenv("WAYLAND_SOCKET"); setenv("WAYLAND_DISPLAY", display_suffix, 1); } // Launch program.
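/* Illustration (not waypipe code): a libwayland-based child inheriting
 * WAYLAND_SOCKET can pick up the pre-connected fd roughly as follows;
 * the variable names here are hypothetical:
 *
 *	const char *sockstr = getenv("WAYLAND_SOCKET");
 *	if (sockstr) {
 *		struct wl_display *disp =
 *				wl_display_connect_to_fd(atoi(sockstr));
 *		// disp now talks to waypipe's server_link end
 *	}
 *
 * wl_display_connect(NULL) performs an equivalent check internally. */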
pid_t pid = -1; { const char *application = app_argv[0]; char shell[256]; char shellname[256]; char *shellcmd[2] = {shellname, NULL}; if (!application) { setup_login_shell_command(shell, shellname, login_shell_if_backup); application = shell; app_argv = shellcmd; } int err = posix_spawnp(&pid, application, NULL, NULL, app_argv, environ); if (err) { wp_error("Spawn failure for '%s': %s", application, strerror(err)); if (!oneshot) { unlink_at_folder(cwd_fd, display_folder_fd, NULL, display_path.sun_path); checked_close(display_folder_fd); checked_close(wdisplay_socket); } else { checked_close(wayland_socket); checked_close(server_link); } return EXIT_FAILURE; } } /* Drop any env variables that were set for the child process */ unsetenv("WAYLAND_SOCKET"); unsetenv("WAYLAND_DISPLAY"); if (oneshot) { // We no longer need to see this side checked_close(wayland_socket); } int control_pipe = -1; if (control_path) { if (mkfifo(control_path, 0644) == -1) { wp_error("Failed to make a control FIFO at %s: %s", control_path, strerror(errno)); } else { /* To prevent getting POLLHUP spam after the first user * closes this pipe, open both read and write ends of * the named pipe */ control_pipe = open(control_path, O_RDWR | O_NONBLOCK | O_NOCTTY); if (control_pipe == -1) { wp_error("Failed to open created FIFO %s for reading: %s", control_path, strerror(errno)); } } } int retcode = EXIT_SUCCESS; /* These functions will close server_link, wdisplay_socket, and * control_pipe */ if (oneshot) { retcode = run_single_server(cwd_fd, control_pipe, socket_path, unlink_at_end, server_link, config); } else { retcode = run_multi_server(cwd_fd, control_pipe, socket_path, unlink_at_end, wdisplay_socket, config, &pid); } if (control_pipe != -1) { unlink(control_path); } if (!oneshot) { unlink_at_folder(cwd_fd, display_folder_fd, NULL, display_path.sun_path); checked_close(display_folder_fd); } // Wait for child processes to exit wp_debug("Waiting for child handlers and program"); int status = -1; if (wait_for_pid_and_clean( &pid, &status, shutdown_flag ? WNOHANG : 0, NULL)) { wp_debug("Child program has died, exiting"); retcode = WEXITSTATUS(status); } wp_debug("Program ended"); return retcode; } waypipe-v0.10.5/waypipe-c/shadow.c000066400000000000000000002204721507071711100167770ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE.
*/ #include "shadow.h" #include #include #include #include #include #include #include #include #include #include #include #ifdef HAS_LZ4 #include #include #endif #ifdef HAS_ZSTD #include #endif struct shadow_fd *get_shadow_for_local_fd( struct fd_translation_map *map, int lfd) { for (struct shadow_fd_link *lcur = map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; if (cur->fd_local == lfd) { return cur; } } return NULL; } struct shadow_fd *get_shadow_for_rid(struct fd_translation_map *map, int rid) { for (struct shadow_fd_link *lcur = map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; if (cur->remote_id == rid) { return cur; } } return NULL; } static void destroy_unlinked_sfd(struct shadow_fd *sfd) { wp_debug("Destroying %s RID=%d", fdcat_to_str(sfd->type), sfd->remote_id); /* video must be cleaned up before any buffers that it may rely on */ destroy_video_data(sfd); /* free all accumulated damage records */ reset_damage(&sfd->damage); free(sfd->damage_task_interval_store); if (sfd->type == FDC_FILE) { munmap(sfd->mem_local, sfd->buffer_size); zeroed_aligned_free(sfd->mem_mirror, &sfd->mem_mirror_handle); } else if (sfd->type == FDC_DMABUF || sfd->type == FDC_DMAVID_IR || sfd->type == FDC_DMAVID_IW) { if (sfd->dmabuf_map_handle) { unmap_dmabuf(sfd->dmabuf_bo, sfd->dmabuf_map_handle); } destroy_dmabuf(sfd->dmabuf_bo); zeroed_aligned_free(sfd->mem_mirror, &sfd->mem_mirror_handle); if (sfd->dmabuf_warped_handle) { zeroed_aligned_free(sfd->dmabuf_warped, &sfd->dmabuf_warped_handle); } } else if (sfd->type == FDC_PIPE) { if (sfd->pipe.fd != sfd->fd_local && sfd->pipe.fd != -1) { checked_close(sfd->pipe.fd); } free(sfd->pipe.recv.data); free(sfd->pipe.send.data); } if (sfd->fd_local != -1) { checked_close(sfd->fd_local); } free(sfd); } static void cleanup_thread_local(struct thread_data *data) { #ifdef HAS_ZSTD ZSTD_freeCCtx(data->comp_ctx.zstd_ccontext); ZSTD_freeDCtx(data->comp_ctx.zstd_dcontext); #endif #ifdef HAS_LZ4 free(data->comp_ctx.lz4_extstate); #endif free(data->tmp_buf); } static void setup_thread_local(struct thread_data *data, enum compression_mode mode, int compression_level) { struct comp_ctx *ctx = &data->comp_ctx; ctx->zstd_ccontext = NULL; ctx->zstd_dcontext = NULL; ctx->lz4_extstate = NULL; #ifdef HAS_LZ4 if (mode == COMP_LZ4) { /* Like LZ4Frame, integer codes indicate compression level. 
* Negative numbers are acceleration, positive use the HC * routines */ if (compression_level <= 0) { ctx->lz4_extstate = malloc((size_t)LZ4_sizeofState()); } else { ctx->lz4_extstate = malloc((size_t)LZ4_sizeofStateHC()); } } #endif #ifdef HAS_ZSTD if (mode == COMP_ZSTD) { ctx->zstd_ccontext = ZSTD_createCCtx(); ctx->zstd_dcontext = ZSTD_createDCtx(); } #endif (void)mode; (void)compression_level; data->tmp_buf = NULL; data->tmp_size = 0; } void cleanup_translation_map(struct fd_translation_map *map) { for (struct shadow_fd_link *lcur = map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; destroy_unlinked_sfd(cur); } map->link.l_next = &map->link; map->link.l_prev = &map->link; } bool destroy_shadow_if_unreferenced(struct shadow_fd *sfd) { bool autodelete = sfd->has_owner; if (sfd->type == FDC_PIPE && !sfd->pipe.can_read && !sfd->pipe.can_write && !sfd->pipe.remote_can_read && !sfd->pipe.remote_can_write) { autodelete = true; } if (sfd->refcount.protocol == 0 && sfd->refcount.transfer == 0 && sfd->refcount.compute == false && autodelete) { /* remove shadowfd from list */ sfd->link.l_prev->l_next = sfd->link.l_next; sfd->link.l_next->l_prev = sfd->link.l_prev; sfd->link.l_next = NULL; sfd->link.l_prev = NULL; destroy_unlinked_sfd(sfd); return true; } else if (sfd->refcount.protocol < 0 || sfd->refcount.transfer < 0) { wp_error("Negative refcount for rid=%d: %d protocol references, %d transfer references", sfd->remote_id, sfd->refcount.protocol, sfd->refcount.transfer); } return false; } static void *worker_thread_main(void *arg); void setup_translation_map(struct fd_translation_map *map, bool display_side) { map->local_sign = display_side ? -1 : 1; map->link.l_next = &map->link; map->link.l_prev = &map->link; map->max_local_id = 1; } static void shutdown_threads(struct thread_pool *pool) { pthread_mutex_lock(&pool->work_mutex); free(pool->stack); struct task_data task; memset(&task, 0, sizeof(task)); task.type = TASK_STOP; pool->stack = &task; pool->stack_count = 1; pool->stack_size = 1; pool->do_work = true; pthread_cond_broadcast(&pool->work_cond); pthread_mutex_unlock(&pool->work_mutex); if (pool->threads) { for (int i = 1; i < pool->nthreads; i++) { if (pool->threads[i].thread) { pthread_join(pool->threads[i].thread, NULL); } } } pool->stack = NULL; } int setup_thread_pool(struct thread_pool *pool, enum compression_mode compression, int comp_level, int n_threads) { memset(pool, 0, sizeof(struct thread_pool)); pool->diff_func = get_diff_function( DIFF_FASTEST, &pool->diff_alignment_bits); pool->compression = compression; pool->compression_level = comp_level; if (n_threads <= 0) { // platform dependent int nt = get_hardware_thread_count(); pool->nthreads = max(nt / 2, 1); } else { pool->nthreads = n_threads; } pool->stack_size = 0; pool->stack_count = 0; pool->stack = NULL; pool->tasks_in_progress = 0; pool->do_work = true; /* Thread #0 is the 'main' thread */ pool->threads = calloc( (size_t)pool->nthreads, sizeof(struct thread_data)); if (!pool->threads) { wp_error("Failed to allocate list of thread data"); return -1; } int ret; ret = pthread_mutex_init(&pool->work_mutex, NULL); if (ret) { wp_error("Mutex creation failed: %s", strerror(ret)); return -1; } ret = pthread_cond_init(&pool->work_cond, NULL); if (ret) { wp_error("Condition variable creation failed: %s", strerror(ret)); return -1; } pool->threads[0].pool = pool; pool->threads[0].thread = pthread_self(); for (int i = 1; i < 
pool->nthreads; i++) { pool->threads[i].pool = pool; ret = pthread_create(&pool->threads[i].thread, NULL, worker_thread_main, &pool->threads[i]); if (ret) { wp_error("Thread creation failed: %s", strerror(ret)); // Stop making new threads, but keep what is there pool->nthreads = i; break; } } /* Setup thread local data from the main thread, to avoid requiring * the worker threads to allocate pools, for a few fixed buffers */ for (int i = 0; i < pool->nthreads; i++) { setup_thread_local(&pool->threads[i], compression, comp_level); } int fds[2]; if (pipe(fds) == -1) { wp_error("Failed to create pipe: %s", strerror(errno)); } pool->selfpipe_r = fds[0]; pool->selfpipe_w = fds[1]; if (set_nonblocking(pool->selfpipe_r) == -1) { wp_error("Failed to make read end of pipe nonblocking: %s", strerror(errno)); } return 0; } void cleanup_thread_pool(struct thread_pool *pool) { shutdown_threads(pool); if (pool->threads) { for (int i = 0; i < pool->nthreads; i++) { cleanup_thread_local(&pool->threads[i]); } } pthread_mutex_destroy(&pool->work_mutex); pthread_cond_destroy(&pool->work_cond); free(pool->threads); free(pool->stack); checked_close(pool->selfpipe_r); checked_close(pool->selfpipe_w); } const char *fdcat_to_str(enum fdcat cat) { switch (cat) { case FDC_UNKNOWN: return "FDC_UNKNOWN"; case FDC_FILE: return "FDC_FILE"; case FDC_PIPE: return "FDC_PIPE"; case FDC_DMABUF: return "FDC_DMABUF"; case FDC_DMAVID_IR: return "FDC_DMAVID_IR"; case FDC_DMAVID_IW: return "FDC_DMAVID_IW"; } return ""; } const char *compression_mode_to_str(enum compression_mode mode) { switch (mode) { case COMP_NONE: return "NONE"; case COMP_LZ4: return "LZ4"; case COMP_ZSTD: return "ZSTD"; default: return ""; } } enum fdcat get_fd_type(int fd, size_t *size) { struct stat fsdata; memset(&fsdata, 0, sizeof(fsdata)); int ret = fstat(fd, &fsdata); if (ret == -1) { wp_error("The fd %d is not file-like: %s", fd, strerror(errno)); return FDC_UNKNOWN; } else if (S_ISREG(fsdata.st_mode)) { if (size) { *size = (size_t)fsdata.st_size; } return FDC_FILE; } else if (S_ISFIFO(fsdata.st_mode) || S_ISCHR(fsdata.st_mode) || S_ISSOCK(fsdata.st_mode)) { if (S_ISCHR(fsdata.st_mode)) { wp_error("The fd %d, size %" PRId64 ", mode %x is a character device. Proceeding under the assumption that it is pipe-like.", fd, (int64_t)fsdata.st_size, fsdata.st_mode); } if (S_ISSOCK(fsdata.st_mode)) { wp_error("The fd %d, size %" PRId64 ", mode %x is a socket. Proceeding under the assumption that it is pipe-like.", fd, (int64_t)fsdata.st_size, fsdata.st_mode); } return FDC_PIPE; } else { /* Note: we cannot at the moment reliably identify a dmabuf; * trying to do so by importing it may fail if we have the wrong * parameters.
*/ wp_error("The fd %d has an unusual mode %x (type=%x): blk=%d chr=%d dir=%d lnk=%d reg=%d fifo=%d sock=%d; expect an application crash!", fd, fsdata.st_mode, fsdata.st_mode & S_IFMT, S_ISBLK(fsdata.st_mode), S_ISCHR(fsdata.st_mode), S_ISDIR(fsdata.st_mode), S_ISLNK(fsdata.st_mode), S_ISREG(fsdata.st_mode), S_ISFIFO(fsdata.st_mode), S_ISSOCK(fsdata.st_mode), strerror(errno)); return FDC_UNKNOWN; } } static size_t compress_bufsize(struct thread_pool *pool, size_t max_input) { switch (pool->compression) { default: case COMP_NONE: (void)max_input; return 0; #ifdef HAS_LZ4 case COMP_LZ4: /* This bound applies for both LZ4 and LZ4HC compressors */ return (size_t)LZ4_compressBound((int)max_input); #endif #ifdef HAS_ZSTD case COMP_ZSTD: return ZSTD_compressBound(max_input); #endif } return 0; } /* With the selected compression method, compress the buffer * {isize,ibuf}, possibly modifying {msize,mbuf}, and setting * {wsize,wbuf} to indicate the result */ static void compress_buffer(struct thread_pool *pool, struct comp_ctx *ctx, size_t isize, const char *ibuf, size_t msize, char *mbuf, struct bytebuf *dst) { (void)ctx; // Ensure inputs always nontrivial if (isize == 0) { dst->size = 0; dst->data = (char *)ibuf; return; } DTRACE_PROBE1(waypipe, compress_buffer_enter, isize); switch (pool->compression) { default: case COMP_NONE: (void)msize; (void)mbuf; dst->size = isize; dst->data = (char *)ibuf; break; #ifdef HAS_LZ4 case COMP_LZ4: { int ws; if (pool->compression_level <= 0) { ws = LZ4_compress_fast_extState(ctx->lz4_extstate, ibuf, mbuf, (int)isize, (int)msize, -pool->compression_level); } else { ws = LZ4_compress_HC_extStateHC(ctx->lz4_extstate, ibuf, mbuf, (int)isize, (int)msize, pool->compression_level); } if (ws == 0) { wp_error("LZ4 compression failed for %zu bytes in %zu of space", isize, msize); } dst->size = (size_t)ws; dst->data = (char *)mbuf; break; } #endif #ifdef HAS_ZSTD case COMP_ZSTD: { size_t ws = ZSTD_compressCCtx(ctx->zstd_ccontext, mbuf, msize, ibuf, isize, pool->compression_level); if (ZSTD_isError(ws)) { wp_error("Zstd compression failed for %d bytes in %d of space: %s", (int)isize, (int)msize, ZSTD_getErrorName(ws)); } dst->size = (size_t)ws; dst->data = (char *)mbuf; break; } #endif } DTRACE_PROBE1(waypipe, compress_buffer_exit, dst->size); } /* With the selected compression method, uncompress the buffer {isize,ibuf}, * to precisely msize bytes, setting {wsize,wbuf} to indicate the result. * If the compression mode requires it. 
*/ static void uncompress_buffer(struct thread_pool *pool, struct comp_ctx *ctx, size_t isize, const char *ibuf, size_t msize, char *mbuf, size_t *wsize, const char **wbuf) { (void)ctx; // Ensure inputs always nontrivial if (isize == 0) { *wsize = 0; *wbuf = ibuf; return; } DTRACE_PROBE1(waypipe, uncompress_buffer_enter, isize); switch (pool->compression) { default: case COMP_NONE: (void)mbuf; (void)msize; *wsize = isize; *wbuf = ibuf; break; #ifdef HAS_LZ4 case COMP_LZ4: { int ws = LZ4_decompress_safe( ibuf, mbuf, (int)isize, (int)msize); if (ws < 0 || (size_t)ws != msize) { wp_error("Lz4 decompression failed for %d bytes to %d of space, used %d", (int)isize, (int)msize, ws); } *wsize = (size_t)ws; *wbuf = mbuf; break; } #endif #ifdef HAS_ZSTD case COMP_ZSTD: { size_t ws = ZSTD_decompressDCtx( ctx->zstd_dcontext, mbuf, msize, ibuf, isize); if (ZSTD_isError(ws) || (size_t)ws != msize) { wp_error("Zstd decompression failed for %d bytes to %d of space: %s", (int)isize, (int)msize, ZSTD_getErrorName(ws)); ws = 0; } *wsize = ws; *wbuf = mbuf; break; } #endif } DTRACE_PROBE1(waypipe, uncompress_buffer_exit, *wsize); } struct shadow_fd *translate_fd(struct fd_translation_map *map, struct render_data *render, struct thread_pool *threads, int fd, enum fdcat type, size_t file_sz, const struct dmabuf_slice_data *info, bool force_pipe_iw) { struct shadow_fd *sfd = get_shadow_for_local_fd(map, fd); if (sfd) { return sfd; } if (type == FDC_DMAVID_IR || type == FDC_DMAVID_IW) { if (!info) { wp_error("No dmabuf info provided"); return NULL; } } // Create a new translation map. sfd = calloc(1, sizeof(struct shadow_fd)); if (!sfd) { wp_error("Failed to allocate shadow_fd structure"); return NULL; } sfd->link.l_prev = &map->link; sfd->link.l_next = map->link.l_next; sfd->link.l_prev->l_next = &sfd->link; sfd->link.l_next->l_prev = &sfd->link; sfd->fd_local = fd; sfd->mem_local = NULL; sfd->mem_mirror = NULL; sfd->mem_mirror_handle = NULL; sfd->buffer_size = 0; sfd->remote_id = (map->max_local_id++) * map->local_sign; sfd->type = type; // File changes must be propagated sfd->is_dirty = true; /* files/dmabufs are damaged by default; shm_pools are explicitly * undamaged in handlers.c */ damage_everything(&sfd->damage); sfd->has_owner = false; /* Start the number of expected transfers to channel remaining * at one, and number of protocol objects referencing this * shadow_fd at zero.*/ sfd->refcount.transfer = 1; sfd->refcount.protocol = 0; sfd->refcount.compute = false; sfd->only_here = true; wp_debug("Creating new %s shadow RID=%d for local fd %d", fdcat_to_str(sfd->type), sfd->remote_id, fd); switch (sfd->type) { case FDC_FILE: { if (file_sz >= UINT32_MAX / 2) { wp_error("Failed to create shadow structure, file size %zu too large to transfer", file_sz); return sfd; } sfd->buffer_size = file_sz; sfd->file_readonly = false; // both r/w permissions, because the side which allocates the // memory does not always have to be the side that modifies it sfd->mem_local = mmap(NULL, sfd->buffer_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); if (sfd->mem_local == MAP_FAILED && (errno == EPERM || errno == EACCES)) { wp_debug("Initial mmap for RID=%d failed, trying private+readonly", sfd->remote_id); // Some files are memfds that are sealed // to be read-only sfd->mem_local = mmap(NULL, sfd->buffer_size, PROT_READ, MAP_PRIVATE, fd, 0); if (sfd->mem_local != MAP_FAILED) { sfd->file_readonly = true; } } if (sfd->mem_local == MAP_FAILED) { wp_error("Mmap failed when creating shadow RID=%d: %s", sfd->remote_id, 
strerror(errno)); return sfd; } // This will be created at the first transfer. // todo: why not create it now? sfd->mem_mirror = NULL; } break; case FDC_PIPE: { // Make this end of the pipe nonblocking, so that we can // include it in our main loop. if (set_nonblocking(sfd->fd_local) == -1) { wp_error("Failed to make fd nonblocking"); } sfd->pipe.fd = sfd->fd_local; if (force_pipe_iw) { sfd->pipe.can_write = true; } else { /* this classification overestimates with * socketpairs that have partially been shutdown. * what about platform-specific RW pipes? */ int flags = fcntl(fd, F_GETFL, 0); if (flags == -1) { wp_error("fcntl F_GETFL failed!"); } if ((flags & O_ACCMODE) == O_RDONLY) { sfd->pipe.can_read = true; } else if ((flags & O_ACCMODE) == O_WRONLY) { sfd->pipe.can_write = true; } else { sfd->pipe.can_read = true; sfd->pipe.can_write = true; } } } break; case FDC_DMAVID_IR: { sfd->video_fmt = render->av_video_fmt; memcpy(&sfd->dmabuf_info, info, sizeof(struct dmabuf_slice_data)); init_render_data(render); sfd->dmabuf_bo = import_dmabuf(render, sfd->fd_local, &sfd->buffer_size, &sfd->dmabuf_info); if (!sfd->dmabuf_bo) { return sfd; } if (setup_video_encode(sfd, render, threads->nthreads) == -1) { wp_error("Video encoding setup failed for RID=%d", sfd->remote_id); } } break; case FDC_DMAVID_IW: { sfd->video_fmt = render->av_video_fmt; memcpy(&sfd->dmabuf_info, info, sizeof(struct dmabuf_slice_data)); // TODO: multifd-dmabuf video surface init_render_data(render); sfd->dmabuf_bo = import_dmabuf(render, sfd->fd_local, &sfd->buffer_size, &sfd->dmabuf_info); if (!sfd->dmabuf_bo) { return sfd; } if (setup_video_decode(sfd, render) == -1) { wp_error("Video decoding setup failed for RID=%d", sfd->remote_id); } } break; case FDC_DMABUF: { sfd->buffer_size = 0; init_render_data(render); memcpy(&sfd->dmabuf_info, info, sizeof(struct dmabuf_slice_data)); sfd->dmabuf_bo = import_dmabuf(render, sfd->fd_local, &sfd->buffer_size, &sfd->dmabuf_info); if (!sfd->dmabuf_bo) { return sfd; } // to be created on first transfer sfd->mem_mirror = NULL; } break; case FDC_UNKNOWN: wp_error("Trying to create shadow_fd for unknown filedesc type"); break; } return sfd; } static void *shrink_buffer(void *buf, size_t sz) { void *nbuf = realloc(buf, sz); if (nbuf) { return nbuf; } else { wp_debug("Failed to shrink buffer with realloc, not a problem"); return buf; } } /* Construct and optionally compress a diff between sfd->mem_mirror and * the actual memmap'd data, and synchronize sfd->mem_mirror */ static void worker_run_compress_diff( struct task_data *task, struct thread_data *local) { struct shadow_fd *sfd = task->sfd; struct thread_pool *pool = local->pool; size_t diffsize = (size_t)-1; size_t damage_space = 0; for (int i = 0; i < task->damage_len; i++) { int range = task->damage_intervals[i].end - task->damage_intervals[i].start; damage_space += (size_t)range + 8; } if (task->damaged_end) { damage_space += 1u << pool->diff_alignment_bits; } DTRACE_PROBE1(waypipe, worker_compdiff_enter, damage_space); char *diff_buffer = NULL; char *diff_target = NULL; if (pool->compression == COMP_NONE) { diff_buffer = malloc( damage_space + sizeof(struct wmsg_buffer_diff)); if (!diff_buffer) { wp_error("Allocation failed, dropping diff transfer block"); goto end; } diff_target = diff_buffer + sizeof(struct wmsg_buffer_diff); } else { if (buf_ensure_size((int)damage_space, 1, &local->tmp_size, &local->tmp_buf) == -1) { wp_error("Allocation failed, dropping diff transfer block"); goto end; } diff_target = local->tmp_buf; }
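/* Worked example of the stride warp below (illustrative numbers only):
 * with a wire stride tx_stride = 256 and a mapped stride
 * dmabuf_map_stride = 320, wire offset 1000 lies at row 1000 / 256 = 3,
 * column 1000 % 256 = 232, and is therefore read from mapped offset
 * 3 * 320 + 232 = 1192; stride_shifted_copy() applies this remapping to
 * an entire damage interval at once. */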
DTRACE_PROBE1(waypipe, construct_diff_enter, task->damage_len); char *source = sfd->mem_local; if (sfd->type == FDC_DMABUF && sfd->dmabuf_map_stride != sfd->dmabuf_info.strides[0]) { size_t tx_stride = (size_t)sfd->dmabuf_info.strides[0]; size_t common = (size_t)minu(sfd->dmabuf_map_stride, tx_stride); /* copy mapped data to temporary buffer whose stride matches * what is sent over the wire */ for (int i = 0; i < task->damage_len; i++) { size_t start = (size_t)task->damage_intervals[i].start; size_t end = (size_t)task->damage_intervals[i].end; size_t loc_start = (start % tx_stride) + (start / tx_stride) * sfd->dmabuf_map_stride; size_t loc_end = (end % tx_stride) + (end / tx_stride) * sfd->dmabuf_map_stride; stride_shifted_copy(sfd->dmabuf_warped, sfd->mem_local, loc_start, loc_end - loc_start, common, sfd->dmabuf_map_stride, sfd->dmabuf_info.strides[0]); } if (task->damaged_end) { size_t alignment = 1u << pool->diff_alignment_bits; size_t start = alignment * (sfd->buffer_size / alignment); size_t end = sfd->buffer_size; size_t loc_start = (start % tx_stride) + (start / tx_stride) * sfd->dmabuf_map_stride; size_t loc_end = (end % tx_stride) + (end / tx_stride) * sfd->dmabuf_map_stride; stride_shifted_copy(sfd->dmabuf_warped, sfd->mem_local, loc_start, loc_end - loc_start, common, sfd->dmabuf_map_stride, sfd->dmabuf_info.strides[0]); } source = sfd->dmabuf_warped; } diffsize = construct_diff_core(pool->diff_func, pool->diff_alignment_bits, task->damage_intervals, task->damage_len, sfd->mem_mirror, source, diff_target); size_t ntrailing = 0; if (task->damaged_end) { ntrailing = construct_diff_trailing(sfd->buffer_size, pool->diff_alignment_bits, sfd->mem_mirror, source, diff_target + diffsize); } DTRACE_PROBE1(waypipe, construct_diff_exit, diffsize); if (diffsize == 0 && ntrailing == 0) { free(diff_buffer); goto end; } uint8_t *msg; size_t sz; size_t net_diff_sz = diffsize + ntrailing; if (pool->compression == COMP_NONE) { sz = net_diff_sz + sizeof(struct wmsg_buffer_diff); msg = (uint8_t *)diff_buffer; } else { struct bytebuf dst; size_t comp_size = compress_bufsize(pool, net_diff_sz); char *comp_buf = malloc(alignz(comp_size, 4) + sizeof(struct wmsg_buffer_diff)); if (!comp_buf) { wp_error("Allocation failed, dropping diff transfer block"); goto end; } compress_buffer(pool, &local->comp_ctx, net_diff_sz, diff_target, comp_size, comp_buf + sizeof(struct wmsg_buffer_diff), &dst); sz = dst.size + sizeof(struct wmsg_buffer_diff); msg = (uint8_t *)comp_buf; } msg = shrink_buffer(msg, alignz(sz, 4)); memset(msg + sz, 0, alignz(sz, 4) - sz); struct wmsg_buffer_diff header; header.size_and_type = transfer_header(sz, WMSG_BUFFER_DIFF); header.remote_id = sfd->remote_id; header.diff_size = (uint32_t)diffsize; header.ntrailing = (uint32_t)ntrailing; memcpy(msg, &header, sizeof(struct wmsg_buffer_diff)); transfer_async_add(task->msg_queue, msg, alignz(sz, 4)); end: DTRACE_PROBE1(waypipe, worker_compdiff_exit, diffsize); } /* Compress data for sfd->mem_mirror, and synchronize sfd->mem_mirror */ static void worker_run_compress_block( struct task_data *task, struct thread_data *local) { struct shadow_fd *sfd = task->sfd; struct thread_pool *pool = local->pool; if (task->zone_end == task->zone_start) { wp_error("Skipping task"); return; } /* Allocate a disjoint target interval to each worker */ size_t source_start = (size_t)task->zone_start; size_t source_end = (size_t)task->zone_end; DTRACE_PROBE1(waypipe, worker_comp_enter, source_end - source_start); /* Update mirror to match local */ if (sfd->type == 
FDC_DMABUF && sfd->dmabuf_map_stride != sfd->dmabuf_info.strides[0]) { uint32_t tx_stride = sfd->dmabuf_info.strides[0]; size_t common = (size_t)minu(sfd->dmabuf_map_stride, sfd->dmabuf_info.strides[0]); size_t loc_start = (source_start % tx_stride) + (source_start / tx_stride) * sfd->dmabuf_map_stride; size_t loc_end = (source_end % tx_stride) + (source_end / tx_stride) * sfd->dmabuf_map_stride; stride_shifted_copy(sfd->mem_mirror, sfd->mem_local, loc_start, loc_end - loc_start, common, sfd->dmabuf_map_stride, sfd->dmabuf_info.strides[0]); } else { memcpy(sfd->mem_mirror + source_start, sfd->mem_local + source_start, source_end - source_start); } size_t sz = 0; uint8_t *msg; if (pool->compression == COMP_NONE) { sz = sizeof(struct wmsg_buffer_fill) + (source_end - source_start); msg = malloc(alignz(sz, 4)); if (!msg) { wp_error("Allocation failed, dropping fill transfer block"); goto end; } memcpy(msg + sizeof(struct wmsg_buffer_fill), sfd->mem_mirror + source_start, source_end - source_start); } else { size_t comp_size = compress_bufsize( pool, source_end - source_start); msg = malloc(alignz(comp_size, 4) + sizeof(struct wmsg_buffer_fill)); if (!msg) { wp_error("Allocation failed, dropping fill transfer block"); goto end; } struct bytebuf dst; compress_buffer(pool, &local->comp_ctx, source_end - source_start, &sfd->mem_mirror[source_start], comp_size, (char *)msg + sizeof(struct wmsg_buffer_fill), &dst); sz = dst.size + sizeof(struct wmsg_buffer_fill); msg = shrink_buffer(msg, alignz(sz, 4)); } memset(msg + sz, 0, alignz(sz, 4) - sz); struct wmsg_buffer_fill header; header.size_and_type = transfer_header(sz, WMSG_BUFFER_FILL); header.remote_id = sfd->remote_id; header.start = (uint32_t)source_start; header.end = (uint32_t)source_end; memcpy(msg, &header, sizeof(struct wmsg_buffer_fill)); transfer_async_add(task->msg_queue, msg, alignz(sz, 4)); end: DTRACE_PROBE1(waypipe, worker_comp_exit, sz - sizeof(struct wmsg_buffer_fill)); } /* Optionally compress the data in mem_mirror, and set up the initial * transfer blocks */ static void queue_fill_transfers(struct thread_pool *threads, struct shadow_fd *sfd, struct transfer_queue *transfers) { // new transfer, we send file contents verbatim const int chunksize = 262144; int region_start = (int)sfd->remote_bufsize; int region_end = (int)sfd->buffer_size; if (region_start > region_end) { wp_error("Cannot queue fill transfers for a size reduction from %d to %d bytes", region_start, region_end); return; } if (region_start == region_end) { return; } /* Keep sfd alive at least until write to channel is done */ sfd->refcount.compute = true; int nshards = ceildiv((region_end - region_start), chunksize); pthread_mutex_lock(&threads->work_mutex); if (buf_ensure_size(threads->stack_count + nshards, sizeof(struct task_data), &threads->stack_size, (void **)&threads->stack) == -1) { wp_error("Allocation failed, dropping some fill tasks"); pthread_mutex_unlock(&threads->work_mutex); return; } /* Align fill operations to multiples of 64; this may slightly overcopy * if region_start % 64 != 0. 
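 * For example, region_start = 100 and region_end = 1000 give blocks
 * [1, 16), i.e. bytes [64, 1024): bytes 64..99 are re-sent although
 * already current, and each task's zone_end is clamped back to 1000.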
*/ int block_start = region_start / 64; int block_end = align(region_end, 64) / 64; for (int i = 0; i < nshards; i++) { struct task_data task; memset(&task, 0, sizeof(task)); task.type = TASK_COMPRESS_BLOCK; task.sfd = sfd; task.msg_queue = &transfers->async_recv_queue; task.zone_start = 64 * split_interval(block_start, block_end, nshards, i); task.zone_end = 64 * split_interval(block_start, block_end, nshards, i + 1); if (task.zone_end > region_end) { task.zone_end = region_end; } threads->stack[threads->stack_count++] = task; } pthread_mutex_unlock(&threads->work_mutex); } static void queue_diff_transfers(struct thread_pool *threads, struct shadow_fd *sfd, struct transfer_queue *transfers) { const int chunksize = 262144; if (!sfd->damage.damage) { return; } /* Keep sfd alive at least until write to channel is done */ sfd->refcount.compute = true; int bs = 1 << threads->diff_alignment_bits; int align_end = bs * ((int)sfd->buffer_size / bs); bool check_tail = false; int net_damage = 0; if (sfd->damage.damage == DAMAGE_EVERYTHING) { reset_damage(&sfd->damage); struct ext_interval all = {.start = 0, .width = align_end, .rep = 1, .stride = 0}; merge_damage_records(&sfd->damage, 1, &all, threads->diff_alignment_bits); check_tail = true; net_damage = align_end; } else { for (int ir = 0, iw = 0; ir < sfd->damage.ndamage_intvs; ir++) { /* Extend all damage to the nearest alignment block */ struct interval e = sfd->damage.damage[ir]; check_tail |= e.end > align_end; e.end = min(e.end, align_end); if (e.start < e.end) { /* End clipping may produce empty/degenerate * intervals, so filter them out now */ sfd->damage.damage[iw++] = e; net_damage += e.end - e.start; } if (e.end & (bs - 1) || e.start & (bs - 1)) { wp_error("Interval [%d, %d) is not aligned", e.start, e.end); } } } int nshards = ceildiv(net_damage, chunksize); /* Instead of allocating individual buffers for each task, create a * global damage tracking buffer into which tasks index. It will be * deleted in `finish_update`. 
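 * Layout: shard i reads intvs[offsets[i] .. offsets[i+1]), so every
 * task receives a contiguous, disjoint slice of this shared array.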
*/ struct interval *intvs = malloc( sizeof(struct interval) * (size_t)(sfd->damage.ndamage_intvs + nshards)); int *offsets = calloc((size_t)nshards + 1, sizeof(int)); if (!offsets || !intvs) { // TODO: avoid making this allocation entirely wp_error("Failed to allocate diff region control buffer, dropping diff tasks"); free(intvs); free(offsets); return; } sfd->damage_task_interval_store = intvs; int tot_blocks = net_damage / bs; int ir = 0, iw = 0, acc_prev_blocks = 0; for (int shard = 0; shard < nshards; shard++) { int s_lower = split_interval(0, tot_blocks, nshards, shard); int s_upper = split_interval(0, tot_blocks, nshards, shard + 1); while (acc_prev_blocks < s_upper && ir < sfd->damage.ndamage_intvs) { struct interval e = sfd->damage.damage[ir]; const int w = (e.end - e.start) / bs; int a_low = max(0, s_lower - acc_prev_blocks); int a_high = min(w, s_upper - acc_prev_blocks); struct interval r = { .start = e.start + bs * a_low, .end = e.start + bs * a_high, }; intvs[iw++] = r; if (acc_prev_blocks + w > s_upper) { break; } else { acc_prev_blocks += w; ir++; } } offsets[shard + 1] = iw; } /* Reset damage, once it has been applied */ reset_damage(&sfd->damage); pthread_mutex_lock(&threads->work_mutex); if (buf_ensure_size(threads->stack_count + nshards, sizeof(struct task_data), &threads->stack_size, (void **)&threads->stack) == -1) { wp_error("Allocation failed, dropping some diff tasks"); pthread_mutex_unlock(&threads->work_mutex); free(offsets); return; } for (int i = 0; i < nshards; i++) { struct task_data task; memset(&task, 0, sizeof(task)); task.type = TASK_COMPRESS_DIFF; task.sfd = sfd; task.msg_queue = &transfers->async_recv_queue; task.damage_len = offsets[i + 1] - offsets[i]; task.damage_intervals = &sfd->damage_task_interval_store[offsets[i]]; task.damaged_end = (i == nshards - 1) && check_tail; threads->stack[threads->stack_count++] = task; } pthread_mutex_unlock(&threads->work_mutex); free(offsets); } static void add_dmabuf_create_request(struct transfer_queue *transfers, struct shadow_fd *sfd, enum wmsg_type variant) { size_t actual_len = sizeof(struct wmsg_open_dmabuf) + sizeof(struct dmabuf_slice_data); size_t padded_len = alignz(actual_len, 4); uint8_t *data = calloc(1, padded_len); struct wmsg_open_dmabuf *header = (struct wmsg_open_dmabuf *)data; header->file_size = (uint32_t)sfd->buffer_size; header->remote_id = sfd->remote_id; header->size_and_type = transfer_header(actual_len, variant); memcpy(data + sizeof(struct wmsg_open_dmabuf), &sfd->dmabuf_info, sizeof(struct dmabuf_slice_data)); transfer_add(transfers, padded_len, data); } static void add_dmabuf_create_request_v2(struct transfer_queue *transfers, struct shadow_fd *sfd, enum wmsg_type variant, enum video_coding_fmt fmt) { size_t actual_len = sizeof(struct wmsg_open_dmavid) + sizeof(struct dmabuf_slice_data); static_assert((sizeof(struct wmsg_open_dmavid) + sizeof(struct dmabuf_slice_data)) % 4 == 0, "alignment"); uint8_t *data = calloc(1, actual_len); struct wmsg_open_dmavid *header = (struct wmsg_open_dmavid *)data; header->file_size = (uint32_t)sfd->buffer_size; header->remote_id = sfd->remote_id; header->size_and_type = transfer_header(actual_len, variant); header->vid_flags = (uint32_t)fmt; memcpy(data + sizeof(*header), &sfd->dmabuf_info, sizeof(struct dmabuf_slice_data)); transfer_add(transfers, actual_len, data); } static void add_file_create_request( struct transfer_queue *transfers, struct shadow_fd *sfd) { struct wmsg_open_file *header = calloc(1, sizeof(struct wmsg_open_file)); header->file_size = 
(uint32_t)sfd->buffer_size; header->remote_id = sfd->remote_id; header->size_and_type = transfer_header( sizeof(struct wmsg_open_file), WMSG_OPEN_FILE); transfer_add(transfers, sizeof(struct wmsg_open_file), header); } void finish_update(struct shadow_fd *sfd) { if (!sfd->refcount.compute) { return; } if (sfd->type == FDC_DMABUF && sfd->dmabuf_map_handle) { // if this fails, unmap_dmabuf will print error (void)unmap_dmabuf(sfd->dmabuf_bo, sfd->dmabuf_map_handle); sfd->dmabuf_map_handle = NULL; sfd->mem_local = NULL; } if (sfd->damage_task_interval_store) { free(sfd->damage_task_interval_store); sfd->damage_task_interval_store = NULL; } sfd->refcount.compute = false; } void collect_update(struct thread_pool *threads, struct shadow_fd *sfd, struct transfer_queue *transfers, bool use_old_dmavid_req) { switch (sfd->type) { case FDC_FILE: { if (!sfd->is_dirty) { // File is clean, we have no reason to believe // that its contents could have changed return; } // Clear dirty state sfd->is_dirty = false; if (sfd->only_here) { // increase space, to avoid overflow when // writing this buffer along with padding size_t alignment = 1u << threads->diff_alignment_bits; sfd->mem_mirror = zeroed_aligned_alloc( alignz(sfd->buffer_size, alignment), alignment, &sfd->mem_mirror_handle); if (!sfd->mem_mirror) { wp_error("Failed to allocate mirror"); return; } sfd->only_here = false; sfd->remote_bufsize = 0; add_file_create_request(transfers, sfd); sfd->remote_bufsize = sfd->buffer_size; queue_diff_transfers(threads, sfd, transfers); return; } if (sfd->remote_bufsize < sfd->buffer_size) { struct wmsg_open_file *header = calloc( 1, sizeof(struct wmsg_open_file)); header->file_size = (uint32_t)sfd->buffer_size; header->remote_id = sfd->remote_id; header->size_and_type = transfer_header( sizeof(struct wmsg_open_file), WMSG_EXTEND_FILE); transfer_add(transfers, sizeof(struct wmsg_open_file), header); sfd->remote_bufsize = sfd->buffer_size; } queue_diff_transfers(threads, sfd, transfers); } break; case FDC_DMABUF: { // If buffer is clean, do not check for changes if (!sfd->is_dirty) { return; } sfd->is_dirty = false; bool first = false; if (sfd->only_here) { sfd->only_here = false; first = true; add_dmabuf_create_request( transfers, sfd, WMSG_OPEN_DMABUF); } if (!sfd->dmabuf_bo) { // ^ was not previously able to create buffer return; } if (!sfd->mem_local) { sfd->mem_local = map_dmabuf(sfd->dmabuf_bo, false, &sfd->dmabuf_map_handle, &sfd->dmabuf_map_stride); if (!sfd->mem_local) { return; } } if (first) { size_t alignment = 1u << threads->diff_alignment_bits; sfd->mem_mirror = zeroed_aligned_alloc( alignz(sfd->buffer_size, alignment), alignment, &sfd->mem_mirror_handle); sfd->dmabuf_warped = zeroed_aligned_alloc( alignz(sfd->buffer_size, alignment), alignment, &sfd->dmabuf_warped_handle); if (!sfd->mem_mirror || !sfd->dmabuf_warped) { wp_error("Failed to allocate mirror"); return; } sfd->remote_bufsize = 0; queue_fill_transfers(threads, sfd, transfers); sfd->remote_bufsize = sfd->buffer_size; } else { // TODO: detailed damage tracking damage_everything(&sfd->damage); queue_diff_transfers(threads, sfd, transfers); } /* Unmapping will be handled by finish_update() */ } break; case FDC_DMAVID_IR: { if (!sfd->is_dirty) { return; } sfd->is_dirty = false; if (!sfd->dmabuf_bo || !sfd->video_context) { // ^ was not previously able to create buffer return; } if (sfd->only_here) { sfd->only_here = false; if (use_old_dmavid_req) { add_dmabuf_create_request(transfers, sfd, WMSG_OPEN_DMAVID_DST); } else { 
add_dmabuf_create_request_v2(transfers, sfd, WMSG_OPEN_DMAVID_DST_V2, sfd->video_fmt); } } collect_video_from_mirror(sfd, transfers); } break; case FDC_DMAVID_IW: { sfd->is_dirty = false; if (sfd->only_here) { sfd->only_here = false; if (use_old_dmavid_req) { add_dmabuf_create_request(transfers, sfd, WMSG_OPEN_DMAVID_SRC); } else { add_dmabuf_create_request_v2(transfers, sfd, WMSG_OPEN_DMAVID_SRC_V2, sfd->video_fmt); } } } break; case FDC_PIPE: { // Pipes always update, no matter what the message // stream indicates. if (sfd->only_here) { sfd->only_here = false; struct wmsg_basic *createh = calloc(1, sizeof(struct wmsg_basic)); enum wmsg_type type; if (sfd->pipe.can_read && !sfd->pipe.can_write) { type = WMSG_OPEN_IW_PIPE; sfd->pipe.remote_can_write = true; } else if (sfd->pipe.can_write && !sfd->pipe.can_read) { type = WMSG_OPEN_IR_PIPE; sfd->pipe.remote_can_read = true; } else { type = WMSG_OPEN_RW_PIPE; sfd->pipe.remote_can_read = true; sfd->pipe.remote_can_write = true; } createh->size_and_type = transfer_header( sizeof(struct wmsg_basic), type); createh->remote_id = sfd->remote_id; transfer_add(transfers, sizeof(struct wmsg_basic), createh); } if (sfd->pipe.recv.used > 0) { size_t msgsz = sizeof(struct wmsg_basic) + (size_t)sfd->pipe.recv.used; char *buf = malloc(alignz(msgsz, 4)); struct wmsg_basic *header = (struct wmsg_basic *)buf; header->size_and_type = transfer_header( msgsz, WMSG_PIPE_TRANSFER); header->remote_id = sfd->remote_id; memcpy(buf + sizeof(struct wmsg_basic), sfd->pipe.recv.data, (size_t)sfd->pipe.recv.used); memset(buf + msgsz, 0, alignz(msgsz, 4) - msgsz); transfer_add(transfers, alignz(msgsz, 4), buf); sfd->pipe.recv.used = 0; } if (!sfd->pipe.can_read && sfd->pipe.remote_can_write) { struct wmsg_basic *header = calloc(1, sizeof(struct wmsg_basic)); header->size_and_type = transfer_header( sizeof(struct wmsg_basic), WMSG_PIPE_SHUTDOWN_W); header->remote_id = sfd->remote_id; transfer_add(transfers, sizeof(struct wmsg_basic), header); sfd->pipe.remote_can_write = false; } if (!sfd->pipe.can_write && sfd->pipe.remote_can_read) { struct wmsg_basic *header = calloc(1, sizeof(struct wmsg_basic)); header->size_and_type = transfer_header( sizeof(struct wmsg_basic), WMSG_PIPE_SHUTDOWN_R); header->remote_id = sfd->remote_id; transfer_add(transfers, sizeof(struct wmsg_basic), header); sfd->pipe.remote_can_read = false; } } break; case FDC_UNKNOWN: break; } } static void increase_buffer_sizes(struct shadow_fd *sfd, struct thread_pool *threads, size_t new_size) { size_t old_size = sfd->buffer_size; munmap(sfd->mem_local, old_size); sfd->buffer_size = new_size; sfd->mem_local = mmap(NULL, sfd->buffer_size, PROT_READ | PROT_WRITE, MAP_SHARED, sfd->fd_local, 0); if (sfd->mem_local == MAP_FAILED) { wp_error("Mmap failed to remap increased buffer for RID=%d: %s", sfd->remote_id, strerror(errno)); return; } /* if resize happens before any transfers, mirror may still be zero */ if (sfd->mem_mirror) { // todo: handle allocation failures size_t alignment = 1u << threads->diff_alignment_bits; void *new_mirror = zeroed_aligned_realloc( alignz(old_size, alignment), alignz(sfd->buffer_size, alignment), alignment, sfd->mem_mirror, &sfd->mem_mirror_handle); if (!new_mirror) { wp_error("Failed to reallocate mirror"); return; } sfd->mem_mirror = new_mirror; } } static void pipe_close_write(struct shadow_fd *sfd) { if (sfd->pipe.can_read) { /* if pipe.fd is both readable and writable, assume * socket */ shutdown(sfd->pipe.fd, SHUT_WR); } else { checked_close(sfd->pipe.fd); if (sfd->fd_local == 
sfd->pipe.fd) { sfd->fd_local = -1; } sfd->pipe.fd = -1; } sfd->pipe.can_write = false; /* Also free any accumulated data that was not delivered */ free(sfd->pipe.send.data); memset(&sfd->pipe.send, 0, sizeof(sfd->pipe.send)); } static void pipe_close_read(struct shadow_fd *sfd) { if (sfd->pipe.can_write) { /* if pipe.fd is both readable and writable, assume * socket */ // TODO: check return value, can legitimately fail with ENOBUFS shutdown(sfd->pipe.fd, SHUT_RD); } else { checked_close(sfd->pipe.fd); if (sfd->fd_local == sfd->pipe.fd) { sfd->fd_local = -1; } sfd->pipe.fd = -1; } sfd->pipe.can_read = false; } static int open_sfd(struct fd_translation_map *map, struct shadow_fd **sfd_ptr, int remote_id) { if (*sfd_ptr) { wp_error("shadow structure for RID=%d was already created", remote_id); return ERR_FATAL; } wp_debug("Introducing new fd, remoteid=%d", remote_id); struct shadow_fd *sfd = calloc(1, sizeof(struct shadow_fd)); if (!sfd) { wp_error("failed to allocate shadow structure for RID=%d", remote_id); return ERR_FATAL; } sfd->link.l_prev = &map->link; sfd->link.l_next = map->link.l_next; sfd->link.l_prev->l_next = &sfd->link; sfd->link.l_next->l_prev = &sfd->link; sfd->remote_id = remote_id; sfd->fd_local = -1; sfd->is_dirty = false; /* a received file descriptor is up to date by default */ reset_damage(&sfd->damage); sfd->only_here = false; /* Start the object reference at one, so that, if it is owned by * some known protocol object, it can not be deleted until the * fd has at least be transferred over the Wayland connection */ sfd->refcount.transfer = 1; sfd->refcount.protocol = 0; sfd->refcount.compute = false; *sfd_ptr = sfd; return 0; } static int check_message_min_size( enum wmsg_type type, const struct bytebuf *msg, size_t min_size) { if (msg->size < min_size) { wp_error("Message size for %s is smaller than expected (%zu bytes vs %zu bytes)", wmsg_type_to_str(type), msg->size, min_size); return ERR_FATAL; } return 0; } static int check_sfd_type_2(struct shadow_fd *sfd, int remote_id, enum wmsg_type mtype, enum fdcat ftype1, enum fdcat ftype2) { if (!sfd) { wp_error("shadow structure for RID=%d was not available", remote_id); return ERR_FATAL; } if (sfd->type != ftype1 && sfd->type != ftype2) { wp_error("Trying to apply %s to RID=%d which has incompatible type=%s", wmsg_type_to_str(mtype), remote_id, fdcat_to_str(sfd->type)); return ERR_FATAL; } return 0; } static int check_sfd_type(struct shadow_fd *sfd, int remote_id, enum wmsg_type mtype, enum fdcat ftype) { return check_sfd_type_2(sfd, remote_id, mtype, ftype, ftype); } int apply_update(struct fd_translation_map *map, struct thread_pool *threads, struct render_data *render, enum wmsg_type type, int remote_id, const struct bytebuf *msg) { struct shadow_fd *sfd = get_shadow_for_rid(map, remote_id); int ret = 0; switch (type) { default: case WMSG_RESTART: case WMSG_CLOSE: case WMSG_ACK_NBLOCKS: case WMSG_INJECT_RIDS: case WMSG_PROTOCOL: { if (wmsg_type_is_known(type)) { wp_error("Unexpected update type: %s", wmsg_type_to_str(type)); } else { wp_error("Unidentified update type, number %u. 
" "This may be caused by the Waypipe instances " "on different sides of the connection having " "incompatible versions or options.", (unsigned)type); } return ERR_FATAL; } /* SFD creation messages */ case WMSG_OPEN_FILE: { if ((ret = check_message_min_size(type, msg, sizeof(struct wmsg_open_file))) < 0) { return ret; } if ((ret = open_sfd(map, &sfd, remote_id)) < 0) { return ret; } const struct wmsg_open_file header = *(const struct wmsg_open_file *)msg->data; sfd->type = FDC_FILE; sfd->mem_local = NULL; sfd->buffer_size = header.file_size; sfd->remote_bufsize = sfd->buffer_size; size_t alignment = 1u << threads->diff_alignment_bits; sfd->mem_mirror = zeroed_aligned_alloc( alignz(sfd->buffer_size, alignment), alignment, &sfd->mem_mirror_handle); if (!sfd->mem_mirror) { wp_error("Failed to allocate mirror"); return 0; } sfd->fd_local = create_anon_file(); if (sfd->fd_local == -1) { wp_error("Failed to create anon file for object %d: %s", sfd->remote_id, strerror(errno)); return 0; } /* ftruncate zero initializes the file by default, matching * the zeroed mem_mirror buffer */ if (ftruncate(sfd->fd_local, (off_t)sfd->buffer_size) == -1) { wp_error("Failed to resize anon file to size %zu for reason: %s", sfd->buffer_size, strerror(errno)); return 0; } sfd->mem_local = mmap(NULL, sfd->buffer_size, PROT_READ | PROT_WRITE, MAP_SHARED, sfd->fd_local, 0); if (sfd->mem_local == MAP_FAILED) { wp_error("Failed to mmap newly created shm file for object %d: %s", sfd->remote_id, strerror(errno)); sfd->mem_local = NULL; return 0; } return 0; } case WMSG_OPEN_DMABUF: { if ((ret = check_message_min_size(type, msg, sizeof(struct wmsg_open_dmabuf) + sizeof(struct dmabuf_slice_data))) < 0) { return ret; } if ((ret = open_sfd(map, &sfd, remote_id)) < 0) { return ret; } sfd->type = FDC_DMABUF; memcpy(&sfd->dmabuf_info, msg->data + sizeof(struct wmsg_open_dmabuf), sizeof(struct dmabuf_slice_data)); /* allocate a mirror buffer that matches dimensions of incoming * data from the remote; this may disagree with the mapped size * of the buffer */ sfd->buffer_size = sfd->dmabuf_info.height * sfd->dmabuf_info.strides[0]; size_t alignment = 1u << threads->diff_alignment_bits; sfd->mem_mirror = zeroed_aligned_alloc( alignz(sfd->buffer_size, alignment), alignment, &sfd->mem_mirror_handle); sfd->dmabuf_warped = zeroed_aligned_alloc( alignz(sfd->buffer_size, alignment), alignment, &sfd->dmabuf_warped_handle); if (!sfd->mem_mirror || !sfd->dmabuf_warped) { wp_error("Failed to allocate mirror"); return 0; } wp_debug("Creating remote DMAbuf of %d bytes", (int)sfd->buffer_size); // Create mirror from first transfer // The file can only actually be created when we know // what type it is? if (init_render_data(render) == -1) { sfd->fd_local = -1; return 0; } sfd->dmabuf_bo = make_dmabuf(render, &sfd->dmabuf_info); if (!sfd->dmabuf_bo) { sfd->fd_local = -1; return 0; } sfd->fd_local = export_dmabuf(sfd->dmabuf_bo); return 0; } case WMSG_OPEN_DMAVID_DST: case WMSG_OPEN_DMAVID_DST_V2: { const size_t min_msg_size = sizeof(struct dmabuf_slice_data) + ((type == WMSG_OPEN_DMAVID_DST_V2) ? 
sizeof(struct wmsg_open_dmavid) : sizeof(struct wmsg_open_dmabuf)); if ((ret = check_message_min_size(type, msg, min_msg_size)) < 0) { return ret; } if ((ret = open_sfd(map, &sfd, remote_id)) < 0) { return ret; } /* remote reads data, this side writes data */ sfd->type = FDC_DMAVID_IW; if (type == WMSG_OPEN_DMAVID_DST) { const struct wmsg_open_dmabuf header = *(const struct wmsg_open_dmabuf *) msg->data; sfd->buffer_size = header.file_size; memcpy(&sfd->dmabuf_info, msg->data + sizeof(struct wmsg_open_dmabuf), sizeof(struct dmabuf_slice_data)); sfd->video_fmt = VIDEO_H264; } else { const struct wmsg_open_dmavid header = *(const struct wmsg_open_dmavid *) msg->data; sfd->buffer_size = header.file_size; memcpy(&sfd->dmabuf_info, msg->data + sizeof(struct wmsg_open_dmavid), sizeof(struct dmabuf_slice_data)); uint32_t vid_type = header.vid_flags & 0xff; if (vid_type == (uint32_t)VIDEO_H264 || vid_type == (uint32_t)VIDEO_VP9 || vid_type == (uint32_t)VIDEO_AV1) { sfd->video_fmt = (enum video_coding_fmt)vid_type; } else { wp_error("Unidentified video format %u for RID=%d", vid_type, sfd->remote_id); return ERR_FATAL; } } if (init_render_data(render) == -1) { sfd->fd_local = -1; return 0; } sfd->dmabuf_bo = make_dmabuf(render, &sfd->dmabuf_info); if (!sfd->dmabuf_bo) { wp_error("FDC_DMAVID_IW: RID=%d make_dmabuf failure, sz=%d (%zu)", sfd->remote_id, (int)sfd->buffer_size, sizeof(struct dmabuf_slice_data)); return 0; } sfd->fd_local = export_dmabuf(sfd->dmabuf_bo); if (setup_video_decode(sfd, render) == -1) { wp_error("Video decoding setup failed for RID=%d", sfd->remote_id); } return 0; } case WMSG_OPEN_DMAVID_SRC: case WMSG_OPEN_DMAVID_SRC_V2: { const size_t min_msg_size = sizeof(struct dmabuf_slice_data) + ((type == WMSG_OPEN_DMAVID_SRC_V2) ? sizeof(struct wmsg_open_dmavid) : sizeof(struct wmsg_open_dmabuf)); if ((ret = check_message_min_size(type, msg, min_msg_size)) < 0) { return ret; } if ((ret = open_sfd(map, &sfd, remote_id)) < 0) { return ret; } /* remote writes data, this side reads data */ sfd->type = FDC_DMAVID_IR; // TODO: deduplicate this section with WMSG_OPEN_DMAVID_DST, // or stop handling V1 and V2 in the same branch if (type == WMSG_OPEN_DMAVID_SRC) { const struct wmsg_open_dmabuf header = *(const struct wmsg_open_dmabuf *) msg->data; sfd->buffer_size = header.file_size; memcpy(&sfd->dmabuf_info, msg->data + sizeof(struct wmsg_open_dmabuf), sizeof(struct dmabuf_slice_data)); sfd->video_fmt = VIDEO_H264; } else { const struct wmsg_open_dmavid header = *(const struct wmsg_open_dmavid *) msg->data; sfd->buffer_size = header.file_size; memcpy(&sfd->dmabuf_info, msg->data + sizeof(struct wmsg_open_dmavid), sizeof(struct dmabuf_slice_data)); uint32_t vid_type = header.vid_flags & 0xff; if (vid_type == (uint32_t)VIDEO_H264 || vid_type == (uint32_t)VIDEO_VP9 || vid_type == (uint32_t)VIDEO_AV1) { sfd->video_fmt = (enum video_coding_fmt)vid_type; } else { wp_error("Unidentified video format %u for RID=%d", vid_type, sfd->remote_id); return ERR_FATAL; } } if (init_render_data(render) == -1) { sfd->fd_local = -1; return 0; } sfd->dmabuf_bo = make_dmabuf(render, &sfd->dmabuf_info); if (!sfd->dmabuf_bo) { wp_error("FDC_DMAVID_IR: RID=%d make_dmabuf failure", sfd->remote_id); return 0; } sfd->fd_local = export_dmabuf(sfd->dmabuf_bo); if (setup_video_encode(sfd, render, threads->nthreads) == -1) { wp_error("Video encoding setup failed for RID=%d", sfd->remote_id); } return 0; } case WMSG_OPEN_RW_PIPE: case WMSG_OPEN_IW_PIPE: case WMSG_OPEN_IR_PIPE: { if ((ret = open_sfd(map, &sfd, remote_id)) < 0) { return 
ret; } sfd->type = FDC_PIPE; int pipedes[2]; if (type == WMSG_OPEN_RW_PIPE) { if (socketpair(AF_UNIX, SOCK_STREAM, 0, pipedes) == -1) { wp_error("Failed to create a socketpair: %s", strerror(errno)); return 0; } } else { if (pipe(pipedes) == -1) { wp_error("Failed to create a pipe: %s", strerror(errno)); return 0; } } /* We pass 'fd_local' to the client, although we only * read and write from pipe_fd if it exists. */ if (type == WMSG_OPEN_IR_PIPE) { // Read end is 0; the other process writes sfd->fd_local = pipedes[1]; sfd->pipe.fd = pipedes[0]; sfd->pipe.can_read = true; sfd->pipe.remote_can_write = true; } else if (type == WMSG_OPEN_IW_PIPE) { // Write end is 1; the other process reads sfd->fd_local = pipedes[0]; sfd->pipe.fd = pipedes[1]; sfd->pipe.can_write = true; sfd->pipe.remote_can_read = true; } else { // WMSG_OPEN_RW_PIPE // Here, it doesn't matter which end is which sfd->fd_local = pipedes[0]; sfd->pipe.fd = pipedes[1]; sfd->pipe.can_read = true; sfd->pipe.can_write = true; sfd->pipe.remote_can_read = true; sfd->pipe.remote_can_write = true; } if (set_nonblocking(sfd->pipe.fd) == -1) { wp_error("Failed to make private pipe end nonblocking: %s", strerror(errno)); return 0; } return 0; } /* SFD update messages */ case WMSG_EXTEND_FILE: { if ((ret = check_message_min_size(type, msg, sizeof(struct wmsg_open_file))) < 0) { return ret; } if ((ret = check_sfd_type(sfd, remote_id, type, FDC_FILE)) < 0) { return ret; } const struct wmsg_open_file *header = (const struct wmsg_open_file *)msg->data; if (header->file_size <= sfd->buffer_size) { wp_error("File extend message for RID=%d does not increase size %" PRIu32 " vs %zu", remote_id, header->file_size, sfd->buffer_size); return ERR_FATAL; } if (ftruncate(sfd->fd_local, (off_t)header->file_size) == -1) { wp_error("Failed to resize file buffer: %s", strerror(errno)); return 0; } increase_buffer_sizes(sfd, threads, (size_t)header->file_size); // the extension implies the remote buffer is at least as large sfd->remote_bufsize = sfd->buffer_size; return 0; } case WMSG_BUFFER_FILL: { if ((ret = check_message_min_size(type, msg, sizeof(struct wmsg_buffer_fill))) < 0) { return ret; } if ((ret = check_sfd_type_2(sfd, remote_id, type, FDC_FILE, FDC_DMABUF)) < 0) { return ret; } if (sfd->type == FDC_FILE && sfd->file_readonly) { wp_debug("Ignoring a fill update to readonly file at RID=%d", remote_id); return 0; } const struct wmsg_buffer_fill *header = (const struct wmsg_buffer_fill *)msg->data; size_t uncomp_size = header->end - header->start; struct thread_data *local = &threads->threads[0]; if (buf_ensure_size((int)uncomp_size, 1, &local->tmp_size, &local->tmp_buf) == -1) { wp_error("Failed to expand temporary decompression buffer, dropping update"); return 0; } const char *act_buffer = NULL; size_t act_size = 0; uncompress_buffer(threads, &threads->threads[0].comp_ctx, msg->size - sizeof(struct wmsg_buffer_fill), msg->data + sizeof(struct wmsg_buffer_fill), uncomp_size, local->tmp_buf, &act_size, &act_buffer); // `memsize+8*remote_nthreads` is the worst-case diff // expansion if (header->end > sfd->buffer_size) { wp_error("Transfer end overflow %" PRIu32 " > %zu", header->end, sfd->buffer_size); return ERR_FATAL; } if (act_size != header->end - header->start) { wp_error("Transfer size mismatch %zu %" PRIu32, act_size, header->end - header->start); return ERR_FATAL; } if (sfd->type == FDC_DMABUF) { int bpp = get_shm_bytes_per_pixel( sfd->dmabuf_info.format); if (bpp == -1) { wp_error("Skipping update of RID=%d, non-RGBA/monoplane fmt %x", sfd->remote_id, 
sfd->dmabuf_info.format); return 0; } memcpy(sfd->mem_mirror + header->start, act_buffer, header->end - header->start); void *handle = NULL; uint32_t map_stride = 0; char *mem_local = map_dmabuf(sfd->dmabuf_bo, true, &handle, &map_stride); if (!mem_local) { wp_error("Failed to apply fill to RID=%d, fd not mapped", sfd->remote_id); return 0; } uint32_t in_stride = sfd->dmabuf_info.strides[0]; if (map_stride == in_stride) { memcpy(mem_local + header->start, sfd->mem_mirror + header->start, header->end - header->start); } else { /* stride changing transfer */ uint32_t row_length = (uint32_t)bpp * sfd->dmabuf_info.width; uint32_t copy_size = (uint32_t)minu(row_length, minu(map_stride, in_stride)); stride_shifted_copy(mem_local, act_buffer - header->start, header->start, header->end - header->start, copy_size, in_stride, map_stride); } if (unmap_dmabuf(sfd->dmabuf_bo, handle) == -1) { return 0; } } else { memcpy(sfd->mem_mirror + header->start, act_buffer, header->end - header->start); memcpy(sfd->mem_local + header->start, act_buffer, header->end - header->start); } return 0; } case WMSG_BUFFER_DIFF: { if ((ret = check_message_min_size(type, msg, sizeof(struct wmsg_buffer_diff))) < 0) { return ret; } if ((ret = check_sfd_type_2(sfd, remote_id, type, FDC_FILE, FDC_DMABUF)) < 0) { return ret; } if (sfd->type == FDC_FILE && sfd->file_readonly) { wp_debug("Ignoring a diff update to readonly file at RID=%d", remote_id); return 0; } const struct wmsg_buffer_diff *header = (const struct wmsg_buffer_diff *)msg->data; struct thread_data *local = &threads->threads[0]; if (buf_ensure_size((int)(header->diff_size + header->ntrailing), 1, &local->tmp_size, &local->tmp_buf) == -1) { wp_error("Failed to expand temporary decompression buffer, dropping update"); return 0; } const char *act_buffer = NULL; size_t act_size = 0; uncompress_buffer(threads, &threads->threads[0].comp_ctx, msg->size - sizeof(struct wmsg_buffer_diff), msg->data + sizeof(struct wmsg_buffer_diff), header->diff_size + header->ntrailing, local->tmp_buf, &act_size, &act_buffer); // `memsize+8*remote_nthreads` is the worst-case diff // expansion if (act_size != header->diff_size + header->ntrailing) { wp_error("Transfer size mismatch %zu %u", act_size, header->diff_size + header->ntrailing); return ERR_FATAL; } if (sfd->type == FDC_DMABUF) { int bpp = get_shm_bytes_per_pixel( sfd->dmabuf_info.format); if (bpp == -1) { wp_error("Skipping update of RID=%d, non-RGBA/monoplane fmt %x", sfd->remote_id, sfd->dmabuf_info.format); return 0; } void *handle = NULL; uint32_t map_stride = 0; char *mem_local = map_dmabuf(sfd->dmabuf_bo, true, &handle, &map_stride); if (!mem_local) { wp_error("Failed to apply diff to RID=%d, fd not mapped", sfd->remote_id); return 0; } uint32_t in_stride = sfd->dmabuf_info.strides[0]; uint32_t row_length = (uint32_t)bpp * sfd->dmabuf_info.width; uint32_t copy_size = (uint32_t)minu(row_length, minu(map_stride, in_stride)); (void)in_stride; size_t nblocks = sfd->buffer_size / sizeof(uint32_t); size_t ndiffblocks = header->diff_size / sizeof(uint32_t); uint32_t *diff_blocks = (uint32_t *)act_buffer; for (size_t i = 0; i < ndiffblocks;) { size_t nfrom = (size_t)diff_blocks[i]; size_t nto = (size_t)diff_blocks[i + 1]; size_t span = nto - nfrom; if (nto > nblocks || nfrom >= nto || i + (nto - nfrom) >= ndiffblocks) { wp_error("Invalid copy range [%zu,%zu) > %zu=nblocks or [%zu,%zu) > %zu=ndiffblocks", nfrom, nto, nblocks, i + 1, i + 1 + span, ndiffblocks); break; } memcpy(sfd->mem_mirror + sizeof(uint32_t) * nfrom, diff_blocks 
+ i + 2, sizeof(uint32_t) * span); stride_shifted_copy(mem_local, (char *)((diff_blocks + i + 2) - nfrom), sizeof(uint32_t) * nfrom, sizeof(uint32_t) * span, copy_size, in_stride, map_stride); i += span + 2; } if (header->ntrailing > 0) { size_t offset = sfd->buffer_size - header->ntrailing; memcpy(sfd->mem_mirror + offset, act_buffer + header->diff_size, header->ntrailing); stride_shifted_copy(mem_local, (act_buffer + header->diff_size) - offset, offset, header->ntrailing, copy_size, in_stride, map_stride); } if (unmap_dmabuf(sfd->dmabuf_bo, handle) == -1) { return 0; } } else { DTRACE_PROBE2(waypipe, apply_diff_enter, sfd->buffer_size, header->diff_size); apply_diff(sfd->buffer_size, sfd->mem_mirror, sfd->mem_local, header->diff_size, header->ntrailing, act_buffer); DTRACE_PROBE(waypipe, apply_diff_exit); } return 0; } case WMSG_PIPE_TRANSFER: { if ((ret = check_sfd_type(sfd, remote_id, type, FDC_PIPE)) < 0) { return ret; } if (!sfd->pipe.can_write || sfd->pipe.pending_w_shutdown) { wp_debug("Discarding transfer to pipe RID=%d, because pipe cannot be written to", remote_id); return 0; } size_t transf_data_sz = msg->size - sizeof(struct wmsg_basic); int netsize = sfd->pipe.send.used + (int)transf_data_sz; if (buf_ensure_size(netsize, 1, &sfd->pipe.send.size, (void **)&sfd->pipe.send.data) == -1) { wp_error("Failed to expand pipe transfer buffer, dropping data"); return 0; } memcpy(sfd->pipe.send.data + sfd->pipe.send.used, msg->data + sizeof(struct wmsg_basic), transf_data_sz); sfd->pipe.send.used = netsize; // The pipe itself will be flushed/or closed later by // flush_writable_pipes sfd->pipe.writable = true; return 0; } case WMSG_PIPE_SHUTDOWN_R: { if ((ret = check_sfd_type(sfd, remote_id, type, FDC_PIPE)) < 0) { return ret; } sfd->pipe.remote_can_write = false; if (!sfd->pipe.can_read) { wp_debug("Discarding read shutdown to pipe RID=%d, which cannot read", remote_id); return 0; } pipe_close_read(sfd); return 0; } case WMSG_PIPE_SHUTDOWN_W: { if ((ret = check_sfd_type(sfd, remote_id, type, FDC_PIPE)) < 0) { return ret; } sfd->pipe.remote_can_read = false; if (!sfd->pipe.can_write) { wp_debug("Discarding write shutdown to pipe RID=%d, which cannot write", remote_id); return 0; } if (sfd->pipe.send.used <= 0) { pipe_close_write(sfd); } else { /* Shutdown as soon as the current data has been written */ sfd->pipe.pending_w_shutdown = true; } return 0; } case WMSG_SEND_DMAVID_PACKET: { if ((ret = check_sfd_type(sfd, remote_id, type, FDC_DMAVID_IW)) < 0) { return ret; } if (!sfd->dmabuf_bo) { wp_error("Applying update to nonexistent dma buffer object rid=%d", sfd->remote_id); return 0; } struct bytebuf data = { .data = msg->data + sizeof(struct wmsg_basic), .size = msg->size - sizeof(struct wmsg_basic)}; apply_video_packet(sfd, render, &data); return 0; } }; /* all returns should happen inside switch, so none here */ } bool shadow_decref_protocol(struct shadow_fd *sfd) { sfd->refcount.protocol--; return destroy_shadow_if_unreferenced(sfd); } bool shadow_decref_transfer(struct shadow_fd *sfd) { sfd->refcount.transfer--; if (sfd->refcount.transfer == 0 && sfd->type == FDC_PIPE) { /* fd_local has been transferred for the last time, so close * it and make it match pipe.fd, just as on the side where * the original pipe was introduced */ if (sfd->pipe.fd != sfd->fd_local) { checked_close(sfd->fd_local); sfd->fd_local = sfd->pipe.fd; } } return destroy_shadow_if_unreferenced(sfd); } struct shadow_fd *shadow_incref_protocol(struct shadow_fd *sfd) { sfd->has_owner = true; sfd->refcount.protocol++; 
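/* Returning the passed-in pointer lets call sites incref inline, e.g. `obj->sfd = shadow_incref_protocol(sfd);` (a usage sketch; `obj` is a hypothetical owner object). */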
return sfd; } struct shadow_fd *shadow_incref_transfer(struct shadow_fd *sfd) { sfd->has_owner = true; if (sfd->type == FDC_PIPE && sfd->refcount.transfer == 0) { wp_error("The other pipe end may have been closed"); } sfd->refcount.transfer++; return sfd; } void decref_transferred_fds(struct fd_translation_map *map, int nfds, int fds[]) { for (int i = 0; i < nfds; i++) { struct shadow_fd *sfd = get_shadow_for_local_fd(map, fds[i]); shadow_decref_transfer(sfd); } } void decref_transferred_rids( struct fd_translation_map *map, int nids, int ids[]) { for (int i = 0; i < nids; i++) { struct shadow_fd *sfd = get_shadow_for_rid(map, ids[i]); shadow_decref_transfer(sfd); } } int count_npipes(const struct fd_translation_map *map) { int np = 0; for (struct shadow_fd_link *lcur = map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; if (cur->type == FDC_PIPE) { np++; } } return np; } int fill_with_pipes(const struct fd_translation_map *map, struct pollfd *pfds, bool check_read) { int np = 0; for (struct shadow_fd_link *lcur = map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; if (cur->type == FDC_PIPE && cur->pipe.fd != -1) { pfds[np].fd = cur->pipe.fd; pfds[np].events = 0; if (check_read && cur->pipe.can_read) { pfds[np].events |= POLLIN; } if (cur->pipe.send.used > 0) { pfds[np].events |= POLLOUT; } np++; } } return np; } static struct shadow_fd *get_shadow_for_pipe_fd( struct fd_translation_map *map, int pipefd) { for (struct shadow_fd_link *lcur = map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; if (cur->type == FDC_PIPE && cur->pipe.fd == pipefd) { return cur; } } return NULL; } void mark_pipe_object_statuses( struct fd_translation_map *map, int nfds, struct pollfd *pfds) { for (int i = 0; i < nfds; i++) { int lfd = pfds[i].fd; struct shadow_fd *sfd = get_shadow_for_pipe_fd(map, lfd); if (!sfd) { wp_error("Failed to find shadow struct for .pipe_fd=%d", lfd); continue; } if (pfds[i].revents & POLLIN || pfds[i].revents & POLLHUP) { /* In */ sfd->pipe.readable = true; } if (pfds[i].revents & POLLOUT) { sfd->pipe.writable = true; } if (pfds[i].revents & POLLERR) { wp_debug("Pipe poll returned POLLERR for .pipe_fd=%d, closing", lfd); if (sfd->pipe.can_read) { pipe_close_read(sfd); } if (sfd->pipe.can_write) { pipe_close_write(sfd); } } } } void flush_writable_pipes(struct fd_translation_map *map) { for (struct shadow_fd_link *lcur = map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *sfd = (struct shadow_fd *)lcur; if (sfd->type != FDC_PIPE || !sfd->pipe.writable || sfd->pipe.send.used <= 0) { continue; } sfd->pipe.writable = false; wp_debug("Flushing %d bytes into RID=%d", sfd->pipe.send.used, sfd->remote_id); ssize_t changed = write(sfd->pipe.fd, sfd->pipe.send.data, (size_t)sfd->pipe.send.used); if (changed == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) { wp_debug("Writing to pipe RID=%d would block", sfd->remote_id); continue; } else if (changed == -1 && (errno == EPIPE || errno == EBADF)) { /* No process has access to the other end of the pipe, * or the file descriptor is otherwise permanently * unwriteable */ pipe_close_write(sfd); } else if (changed == -1) { wp_error("Failed to write into pipe with remote_id=%d: %s", sfd->remote_id, 
strerror(errno)); } else { wp_debug("Wrote %zd more bytes into pipe RID=%d", changed, sfd->remote_id); sfd->pipe.send.used -= (int)changed; if (sfd->pipe.send.used > 0) { memmove(sfd->pipe.send.data, sfd->pipe.send.data + changed, (size_t)sfd->pipe.send.used); } if (sfd->pipe.send.used <= 0 && sfd->pipe.pending_w_shutdown) { /* A shutdown request was made, but can only be * applied now that the write buffer has been * cleared */ pipe_close_write(sfd); sfd->pipe.pending_w_shutdown = false; } } } /* Destroy any new unreferenced objects */ for (struct shadow_fd_link *lcur = map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; destroy_shadow_if_unreferenced(cur); } } void read_readable_pipes(struct fd_translation_map *map) { for (struct shadow_fd_link *lcur = map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *sfd = (struct shadow_fd *)lcur; if (sfd->type != FDC_PIPE || !sfd->pipe.readable) { continue; } if (sfd->pipe.recv.size == 0) { sfd->pipe.recv.size = 32768; sfd->pipe.recv.data = malloc((size_t)sfd->pipe.recv.size); } if (sfd->pipe.recv.size > sfd->pipe.recv.used) { sfd->pipe.readable = false; ssize_t changed = read(sfd->pipe.fd, sfd->pipe.recv.data + sfd->pipe.recv.used, (size_t)(sfd->pipe.recv.size - sfd->pipe.recv.used)); if (changed == 0) { /* No process has access to the other end of the * pipe */ pipe_close_read(sfd); } else if (changed == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) { wp_debug("Reading from pipe RID=%d would block", sfd->remote_id); continue; } else if (changed == -1) { wp_error("Failed to read from pipe with remote_id=%d: %s", sfd->remote_id, strerror(errno)); } else { wp_debug("Read %zd more bytes from pipe RID=%d", changed, sfd->remote_id); sfd->pipe.recv.used += (int)changed; } } } /* Destroy any new unreferenced objects */ for (struct shadow_fd_link *lcur = map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; destroy_shadow_if_unreferenced(cur); } } void extend_shm_shadow(struct thread_pool *threads, struct shadow_fd *sfd, size_t new_size) { if (sfd->buffer_size >= new_size) { return; } // Verify that the file size actually increased struct stat st; int fs = fstat(sfd->fd_local, &st); if (fs == -1) { wp_error("Checking file size failed: %s", strerror(errno)); return; } if ((size_t)st.st_size < new_size) { wp_error("Trying to resize file larger (%d) than the actual file size (%d), ignoring", (int)new_size, (int)st.st_size); return; } increase_buffer_sizes(sfd, threads, new_size); // leave `sfd->remote_bufsize` unchanged, and mark dirty sfd->is_dirty = true; } void run_task(struct task_data *task, struct thread_data *local) { if (task->type == TASK_COMPRESS_BLOCK) { worker_run_compress_block(task, local); } else if (task->type == TASK_COMPRESS_DIFF) { worker_run_compress_diff(task, local); } else { wp_error("Unidentified task type"); } } int start_parallel_work(struct thread_pool *pool, struct thread_msg_recv_buf *recv_queue) { pthread_mutex_lock(&pool->work_mutex); if (recv_queue->zone_start != recv_queue->zone_end) { wp_error("Some async messages not yet sent"); } recv_queue->zone_start = 0; recv_queue->zone_end = 0; int num_mt_tasks = pool->stack_count; if (buf_ensure_size(num_mt_tasks, sizeof(struct iovec), &recv_queue->size, (void **)&recv_queue->data) == -1) { wp_error("Failed to provide enough space for 
receive queue, skipping all work tasks"); num_mt_tasks = 0; } pool->do_work = num_mt_tasks > 0; /* Start the work tasks here */ if (num_mt_tasks > 0) { pthread_cond_broadcast(&pool->work_cond); } pthread_mutex_unlock(&pool->work_mutex); return num_mt_tasks; } bool request_work_task( struct thread_pool *pool, struct task_data *task, bool *is_done) { pthread_mutex_lock(&pool->work_mutex); *is_done = pool->stack_count == 0 && pool->tasks_in_progress == 0; bool has_task = false; if (pool->stack_count > 0 && pool->do_work) { int i = pool->stack_count - 1; if (pool->stack[i].type != TASK_STOP) { *task = pool->stack[i]; has_task = true; pool->stack_count--; pool->tasks_in_progress++; if (pool->stack_count <= 0) { pool->do_work = false; } } } pthread_mutex_unlock(&pool->work_mutex); return has_task; } static void *worker_thread_main(void *arg) { struct thread_data *data = arg; struct thread_pool *pool = data->pool; /* The loop is globally locked by default, and only unlocked in * pthread_cond_wait. Yes, there are fancier and faster schemes. */ pthread_mutex_lock(&pool->work_mutex); while (1) { while (!pool->do_work) { pthread_cond_wait(&pool->work_cond, &pool->work_mutex); } if (pool->stack_count <= 0) { pool->do_work = false; continue; } /* Copy task, since the queue may be resized */ int i = pool->stack_count - 1; struct task_data task = pool->stack[i]; if (task.type == TASK_STOP) { break; } pool->tasks_in_progress++; pool->stack_count--; if (pool->stack_count <= 0) { pool->do_work = false; } pthread_mutex_unlock(&pool->work_mutex); run_task(&task, data); pthread_mutex_lock(&pool->work_mutex); uint8_t triv = 0; pool->tasks_in_progress--; if (write(pool->selfpipe_w, &triv, 1) == -1) { wp_error("Failed to write to self-pipe"); } } pthread_mutex_unlock(&pool->work_mutex); return NULL; } waypipe-v0.10.5/waypipe-c/shadow.h000066400000000000000000000327741507071711100170120ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
*/ #ifndef WAYPIPE_SHADOW_H #define WAYPIPE_SHADOW_H #include <pthread.h> #include <stdbool.h> #include <stdint.h> #include <stdlib.h> #include "dmabuf.h" #include "interval.h" #include "kernel.h" #include "util.h" struct pollfd; typedef unsigned int VAGenericID; /* forward declaration matching libva's definition */ typedef VAGenericID VAContextID; typedef VAGenericID VASurfaceID; typedef VAGenericID VABufferID; typedef struct ZSTD_CCtx_s ZSTD_CCtx; typedef struct ZSTD_DCtx_s ZSTD_DCtx; struct comp_ctx { void *lz4_extstate; ZSTD_CCtx *zstd_ccontext; ZSTD_DCtx *zstd_dcontext; }; enum compression_mode { COMP_NONE, COMP_LZ4, COMP_ZSTD, }; struct shadow_fd_link { struct shadow_fd_link *l_prev, *l_next; /* Doubly linked list */ }; struct fd_translation_map { struct shadow_fd_link link; /* store in first position */ int max_local_id; int local_sign; }; /** Thread pool and associated global information */ struct thread_pool { int nthreads; struct thread_data *threads; // including a slot for the zero thread /* Compression information is globally shared, to save memory, and * because most rapidly changing application buffers have similar * content and use the same settings */ enum compression_mode compression; int compression_level; interval_diff_fn_t diff_func; int diff_alignment_bits; // Mutable state pthread_mutex_t work_mutex; pthread_cond_t work_cond; bool do_work; int stack_count, stack_size; struct task_data *stack; // TODO: distinct queues for wayland->channel and channel->wayland, // to make multithreaded decompression possible int tasks_in_progress; // to wake the main loop int selfpipe_r, selfpipe_w; }; struct thread_data { pthread_t thread; struct thread_pool *pool; /* Thread local data */ struct comp_ctx comp_ctx; /* A local temporary buffer, used to e.g. store diff sections before * compression */ void *tmp_buf; int tmp_size; }; enum task_type { TASK_STOP, TASK_COMPRESS_BLOCK, TASK_COMPRESS_DIFF, }; /** Specification for a task to be run on another thread */ struct task_data { enum task_type type; struct shadow_fd *sfd; /* For block compression option */ int zone_start, zone_end; /* For diff compression option */ struct interval *damage_intervals; int damage_len; bool damaged_end; struct thread_msg_recv_buf *msg_queue; }; /** Shadow object types, signifying file descriptor type and usage */ enum fdcat { FDC_UNKNOWN, FDC_FILE, /* Shared memory buffer */ FDC_PIPE, /* pipe-like object */ FDC_DMABUF, /* DMABUF buffer (will be exactly replicated) */ FDC_DMAVID_IR, /* DMABUF-based video, reading from program */ FDC_DMAVID_IW, /* DMABUF-based video, writing to program */ }; struct pipe_buffer { char *data; int size; int used; }; /** Reference count for a struct shadow_fd; the object can be safely deleted * iff all counts are zero/false. */ struct refcount { /** How many protocol objects refer to this shadow structure */ int protocol; /** How many times must the shadow_fd still be sent to the Wayland * program */ int transfer; /** Do any thread tasks potentially refer to this */ bool compute; }; struct pipe_state { /** Temporary buffers to contain small chunks of data, before it is * transported further */ struct pipe_buffer send; struct pipe_buffer recv; /** Internal file descriptor through which all pipe interactions * are mediated. This equals fd_local, except during the time period * where the shadow_fd is created but the fd_local has not yet been * sent to the remote process. */ int fd; /** 4 bits are needed for the pipe state machine (once the pipe has * been created). 
They describe the properties of `pipe_fd` locally * and remotely */ bool can_read, can_write; bool remote_can_read, remote_can_write; /** What is the state of the pipe, according to poll? * (POLLIN|POLLHUP -> readable ; POLLOUT -> writable) */ bool readable, writable; bool pending_w_shutdown; }; /** * @brief The shadow_fd struct * * This structure is created to track each file descriptor used by the * Wayland protocol. */ struct shadow_fd { struct shadow_fd_link link; /* part of doubly linked list */ enum fdcat type; int remote_id; // + if created serverside; - if created clientside int fd_local; /** true iff the shadow structure is newly created and no message * to create a copy has been sent yet */ bool only_here; // Dirty state. bool has_owner; // Are there protocol handlers which control the // is_dirty flag? bool is_dirty; // If so, should this file be scanned for updates? struct damage damage; /* For worker threads, contains their allocated damage intervals */ struct interval *damage_task_interval_store; struct refcount refcount; // common buffers for file-like types /* total memory size of either the dmabuf or the file */ size_t buffer_size; /* mmap'd long term for files, short term for dmabufs */ char *mem_local; /* exact mirror of the contents, with proper alignment */ char *mem_mirror; void *mem_mirror_handle; // File data size_t remote_bufsize; // used to check for and send file extensions bool file_readonly; // Pipe data struct pipe_state pipe; // DMAbuf data struct gbm_bo *dmabuf_bo; struct dmabuf_slice_data dmabuf_info; void *dmabuf_map_handle; /* Nonnull when DMABUF is currently mapped */ uint32_t dmabuf_map_stride; /* stride at which mem_local is mapped */ /* temporary cache of stride-fixed mem_local. Same dimensions as * mem_mirror */ char *dmabuf_warped; void *dmabuf_warped_handle; // Video data struct AVCodecContext *video_context; struct AVFrame *video_local_frame; /* In format matching DMABUF */ struct AVFrame *video_tmp_frame; /* To hold intermediate copies */ struct AVFrame *video_yuv_frame; /* In enc/dec preferred format */ void *video_yuv_frame_data; void *video_local_frame_data; struct AVPacket *video_packet; struct SwsContext *video_color_context; int64_t video_frameno; enum video_coding_fmt video_fmt; VASurfaceID video_va_surface; VAContextID video_va_context; VABufferID video_va_pipeline; }; const char *compression_mode_to_str(enum compression_mode mode); void setup_translation_map(struct fd_translation_map *map, bool display_side); void cleanup_translation_map(struct fd_translation_map *map); int setup_thread_pool(struct thread_pool *pool, enum compression_mode compression, int compression_level, int n_threads); void cleanup_thread_pool(struct thread_pool *pool); /** Given a file descriptor, return which type code would be applied to its * shadow entry. (For example, FDC_PIPE for a pipe-like object.) Sets *size * if non-NULL and if the object is an FDC_FILE. */ enum fdcat get_fd_type(int fd, size_t *size); const char *fdcat_to_str(enum fdcat cat); /** Given a local file descriptor, type hint, and already computed size, * produce matching global id, and register it into the translation map if * not already done. The function can also be provided with optional extra * information (*info). * * This may return NULL on allocation failure; other failures will in general * warn and disable replication features. 
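 * * A minimal usage sketch (mirroring send_wayland_msg in test/common.c; it assumes `map`, `render`, and `threads` are already initialized): * size_t fdsz = 0; * enum fdcat fdtype = get_fd_type(fd, &fdsz); * struct shadow_fd *sfd = translate_fd(&map, &render, &threads, * fd, fdtype, fdsz, NULL, false); * A NULL result indicates allocation failure, as noted above.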
**/ struct shadow_fd *translate_fd(struct fd_translation_map *map, struct render_data *render, struct thread_pool *threads, int fd, enum fdcat type, size_t sz, const struct dmabuf_slice_data *info, bool force_pipe_iw); /** Given a struct shadow_fd, produce some number of corresponding file update * transfer messages. All pointers will be to existing memory. */ void collect_update(struct thread_pool *threads, struct shadow_fd *cur, struct transfer_queue *transfers, bool use_old_dmavid_req); /** After all thread pool tasks have completed, reduce refcounts and clean up * related data. The caller should then invoke destroy_shadow_if_unreferenced. */ void finish_update(struct shadow_fd *sfd); /** Apply a data update message to an element in the translation map, creating * an entry when there is none. * * Returns -1 if the error is the fault of the other waypipe instance, * 0 otherwise. (For example, syscall failure => 0, bad message length => -1.) */ int apply_update(struct fd_translation_map *map, struct thread_pool *threads, struct render_data *render, enum wmsg_type type, int remote_id, const struct bytebuf *msg); /** Get the shadow structure associated to a remote id, or NULL if it does * not exist */ struct shadow_fd *get_shadow_for_rid(struct fd_translation_map *map, int rid); /** Get shadow structure for a local file descriptor, or NULL if it does not * exist */ struct shadow_fd *get_shadow_for_local_fd( struct fd_translation_map *map, int lfd); /** Count the number of pipe fds being maintained by the translation map */ int count_npipes(const struct fd_translation_map *map); /** Fill in pollfd entries, with POLLIN | POLLOUT, for applicable pipe objects. * Specifically, if check_read is true, indicate all readable pipes. * Also, indicate all writable pipes for which we also have something to write. */ int fill_with_pipes(const struct fd_translation_map *map, struct pollfd *pfds, bool check_read); /** mark pipe shadows as being ready to read or write */ void mark_pipe_object_statuses( struct fd_translation_map *map, int nfds, struct pollfd *pfds); /** For pipes marked writable, flush as much buffered data as possible */ void flush_writable_pipes(struct fd_translation_map *map); /** For pipes marked readable, read as much data as possible without blocking */ void read_readable_pipes(struct fd_translation_map *map); /** pipe file descriptors should never be removed, since then close-detection * fails. This closes the second pipe end if we own both of them */ void close_local_pipe_ends(struct fd_translation_map *map); /** If a pipe is remotely closed, but not locally closed, then close it too */ void close_rclosed_pipes(struct fd_translation_map *map); /** Reduce the reference count for a shadow structure which is owned. The * structure should not be used by the caller after this point. Returns true if * pointer deleted. */ bool shadow_decref_protocol(struct shadow_fd *); bool shadow_decref_transfer(struct shadow_fd *); /** Increase the reference count of a shadow structure, and mark it as being * owned. For convenience, returns the passed-in structure. 
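 * A pairing sketch, with a hypothetical owner object `obj`: store the * reference via `obj->sfd = shadow_incref_protocol(sfd);`, and on teardown * call `shadow_decref_protocol(obj->sfd);`, after which the structure may * have been freed.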
*/ struct shadow_fd *shadow_incref_protocol(struct shadow_fd *); struct shadow_fd *shadow_incref_transfer(struct shadow_fd *); /** If the shadow structure has no references, destroy it and remove it from the * map */ bool destroy_shadow_if_unreferenced(struct shadow_fd *sfd); /** Decrease reference count for all objects in the given list, deleting * iff they are owned by protocol objects and have refcount zero */ void decref_transferred_fds( struct fd_translation_map *map, int nfds, int fds[]); void decref_transferred_rids( struct fd_translation_map *map, int nids, int ids[]); /** If sfd->type == FDC_FILE, increase the size of the backing data to support * at least new_size, and mark the new part of underlying file as dirty */ void extend_shm_shadow(struct thread_pool *threads, struct shadow_fd *sfd, size_t new_size); /** Notify the threads so that they can start working on the tasks in the pool, * and return the total number of tasks */ int start_parallel_work(struct thread_pool *pool, struct thread_msg_recv_buf *recv_queue); /** Return true if there is a work task (not a stop task) remaining for the * main thread to work on; also set *is_done if all tasks have completed. */ bool request_work_task(struct thread_pool *pool, struct task_data *task, bool *is_done); /** Run a work task */ void run_task(struct task_data *task, struct thread_data *local); // video.c void cleanup_hwcontext(struct render_data *rd); bool video_supports_dmabuf_format(uint32_t format, uint64_t modifier); bool video_supports_shm_format(uint32_t format); /** Fast check for whether video coding format can be used */ bool video_supports_coding_format(enum video_coding_fmt fmt); /** set redirect for ffmpeg logging through wp_log */ void setup_video_logging(void); void destroy_video_data(struct shadow_fd *sfd); /** These need to have the dmabuf/dmabuf_info set beforehand */ int setup_video_encode( struct shadow_fd *sfd, struct render_data *rd, int nthreads); int setup_video_decode(struct shadow_fd *sfd, struct render_data *rd); /** the video frame to be transferred should already have been transferred into * `sfd->mem_mirror`. */ void collect_video_from_mirror( struct shadow_fd *sfd, struct transfer_queue *transfers); /** Decompress a video packet and apply the new frame onto the shadow_fd */ void apply_video_packet(struct shadow_fd *sfd, struct render_data *rd, const struct bytebuf *data); #endif // WAYPIPE_SHADOW_H waypipe-v0.10.5/waypipe-c/test/000077500000000000000000000000001507071711100163165ustar00rootroot00000000000000waypipe-v0.10.5/waypipe-c/test/build_matrix.py000077500000000000000000000060571507071711100213660ustar00rootroot00000000000000#!/usr/bin/env python3 import sys, os, subprocess, shutil """ Script to check that Waypipe builds and that tests pass in all of its configurations. 
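Usage: build_matrix.py <waypipe_root> <build_root> [setup-name] If the optional setup-name argument is given, only the setup with that exact name is configured and tested (see the `setups` list below).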
""" waypipe_root, build_root = sys.argv[1], sys.argv[2] os.makedirs(build_root, exist_ok=True) setups = [ ("regular", ["--buildtype", "debugoptimized"], {}), ("release", ["--buildtype", "release"], {}), ("clang", ["--buildtype", "debugoptimized"], {"CC": "clang"}), ( "clang-tsan", ["--buildtype", "debugoptimized", "-Db_sanitize=thread"], {"CC": "clang"}, ), ( "clang-asan", ["--buildtype", "debugoptimized", "-Db_sanitize=address,undefined"], {"CC": "clang"}, ), ( "empty", [ "--buildtype", "debugoptimized", "-Dwith_video=disabled", "-Dwith_lz4=disabled", "-Dwith_zstd=disabled", "-Dwith_dmabuf=disabled", ], {"CC": "gcc"}, ), ( "novideo", [ "--buildtype", "debugoptimized", "-Dwith_video=disabled", ], {"CC": "gcc"}, ), ( "nolz4", [ "--buildtype", "debugoptimized", "-Dwith_lz4=disabled", ], {"CC": "gcc"}, ), ( "unity", ["--buildtype", "debugoptimized", "--unity", "on", "--unity-size", "400"], {"CC": "gcc", "CFLAGS": "-pedantic -D_GNU_SOURCE"}, ), ( "error", ["--buildtype", "debugoptimized"], {"CC": "gcc", "CFLAGS": "-Wunused-result -std=c11 -pedantic -ggdb3 -O1"}, ), ] main_options = ["video", "dmabuf", "lz4", "zstd", "vaapi"] bool_map = {True: "enabled", False: "disabled"} for compiler in ["gcc", "clang"]: for flags in range(2 ** len(main_options)): bool_options = [(2**i) & flags != 0 for i in range(len(main_options))] name = "-".join( ["poly", compiler] + [m for m, b in zip(main_options, bool_options) if b] ) flag_values = [ "-Dwith_{}={}".format(m, bool_map[b]) for m, b in zip(main_options, bool_options) ] setups.append( (name, ["--buildtype", "debugoptimized"] + flag_values, {"CC": compiler}) ) if len(sys.argv) >= 4: setups = [(s, c, e) for s, c, e in setups if s == sys.argv[3]] base_env = os.environ.copy() for key, options, env in setups: print(key, end=" ") sys.stdout.flush() nenv = base_env.copy() for e in env: nenv[e] = env[e] bdir = os.path.join(build_root, key) try: shutil.rmtree(bdir) except FileNotFoundError: pass r1 = subprocess.run( ["meson", waypipe_root, bdir] + options, capture_output=True, env=nenv ) if r1.returncode: print("failed") print(r1.stdout, r1.stderr, r1.returncode) continue r2 = subprocess.run(["ninja", "test"], cwd=bdir, capture_output=True, env=nenv) if r2.returncode: print("failed") print(r2.stdout, r2.stderr, r2.returncode) continue print("passed") waypipe-v0.10.5/waypipe-c/test/common.c000066400000000000000000000304151507071711100177550ustar00rootroot00000000000000/* * Copyright © 2021 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "common.h" #include <fcntl.h> #include <inttypes.h> #include <pthread.h> #include <stdarg.h> #include <stdint.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> uint64_t time_value = 0; uint64_t local_time_offset = 0; void *read_file_into_mem(const char *path, size_t *len) { int fd = open(path, O_RDONLY | O_NOCTTY); if (fd == -1) { fprintf(stderr, "Failed to open '%s'", path); return NULL; } *len = (size_t)lseek(fd, 0, SEEK_END); if (*len == 0) { checked_close(fd); return NULL; } lseek(fd, 0, SEEK_SET); void *buf = malloc(*len); if (read(fd, buf, *len) == -1) { return NULL; } checked_close(fd); return buf; } void send_wayland_msg(struct test_state *src, const struct msg msg, struct transfer_queue *transfers) { /* assume every message uses up 1usec */ time_value += 1000; struct char_window proto_mid; // todo: test_(re)alloc for tests, to abort (but still pass?) if // allocations fail? proto_mid.data = calloc(16384, 1); proto_mid.size = 16384; proto_mid.zone_start = 0; proto_mid.zone_end = 0; struct int_window fd_window; fd_window.size = msg.nfds + 1024; fd_window.data = calloc((size_t)fd_window.size, sizeof(int)); fd_window.zone_start = 0; fd_window.zone_end = 0; if (msg.nfds > 0) { memcpy(fd_window.data, msg.fds, sizeof(uint32_t) * (size_t)msg.nfds); } fd_window.zone_end = msg.nfds; /* The protocol source window is an exact copy of the message, and only * zone_start/zone_end are ever modified */ struct char_window proto_src; proto_src.data = calloc((size_t)msg.len, sizeof(uint32_t)); proto_src.size = msg.len * (int)sizeof(uint32_t); memcpy(proto_src.data, msg.data, (size_t)proto_src.size); proto_src.zone_start = 0; proto_src.zone_end = proto_src.size; local_time_offset = src->local_time_offset; parse_and_prune_messages(&src->glob, src->display_side, !src->display_side, &proto_src, &proto_mid, &fd_window); if (fd_window.zone_start != fd_window.zone_end) { wp_error("Not all fds were consumed, final unused window %d %d", fd_window.zone_start, fd_window.zone_end); src->failed = true; goto cleanup; } /* Replace fds with RIDs in place */ for (int i = 0; i < fd_window.zone_start; i++) { struct shadow_fd *sfd = get_shadow_for_local_fd( &src->glob.map, fd_window.data[i]); if (!sfd) { /* Autodetect type + create shadow fd */ size_t fdsz = 0; enum fdcat fdtype = get_fd_type(fd_window.data[i], &fdsz); sfd = translate_fd(&src->glob.map, &src->glob.render, &src->glob.threads, fd_window.data[i], fdtype, fdsz, NULL, false); } if (sfd) { fd_window.data[i] = sfd->remote_id; } else { wp_error("failed to translate"); src->failed = true; goto cleanup; } } for (struct shadow_fd_link *lcur = src->glob.map.link.l_next, *lnxt = lcur->l_next; lcur != &src->glob.map.link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; collect_update(&src->glob.threads, cur, transfers, src->config.old_video_mode); destroy_shadow_if_unreferenced(cur); } decref_transferred_rids( &src->glob.map, fd_window.zone_start, fd_window.data); { start_parallel_work(&src->glob.threads, &transfers->async_recv_queue); bool is_done; struct task_data task; while (request_work_task(&src->glob.threads, &task, &is_done)) { run_task(&task, &src->glob.threads.threads[0]); src->glob.threads.tasks_in_progress--; } (void)transfer_load_async(transfers); } for (struct shadow_fd_link *lcur = 
src->glob.map.link.l_next, *lnxt = lcur->l_next; lcur != &src->glob.map.link; lcur = lnxt, lnxt = lcur->l_next) { /* Note: finish_update() may delete `cur` */ struct shadow_fd *cur = (struct shadow_fd *)lcur; finish_update(cur); destroy_shadow_if_unreferenced(cur); } if (fd_window.zone_start > 0) { size_t tsz = sizeof(uint32_t) * (1 + (size_t)fd_window.zone_start); void *tmsg = calloc(tsz, 1); ((uint32_t *)tmsg)[0] = transfer_header(tsz, WMSG_INJECT_RIDS); memcpy((char *)tmsg + 4, fd_window.data, 4 * (size_t)fd_window.zone_start); transfer_add(transfers, tsz, tmsg); } if (proto_mid.zone_end > 0) { size_t tsz = sizeof(uint32_t) + (size_t)proto_mid.zone_end; void *tmsg = calloc(tsz, 1); ((uint32_t *)tmsg)[0] = transfer_header(tsz, WMSG_PROTOCOL); memcpy((char *)tmsg + 4, proto_mid.data, (size_t)proto_mid.zone_end); transfer_add(transfers, tsz, tmsg); } cleanup: free(proto_src.data); free(proto_mid.data); free(fd_window.data); } void receive_wire(struct test_state *dst, struct transfer_queue *transfers) { struct char_window proto_mid; proto_mid.data = NULL; proto_mid.size = 0; proto_mid.zone_start = 0; proto_mid.zone_end = 0; const size_t fd_padding = 1024; struct int_window fd_window; fd_window.data = calloc(fd_padding, 4); fd_window.size = (int)fd_padding; fd_window.zone_start = 0; fd_window.zone_end = 0; struct char_window proto_end; proto_end.data = calloc(16384, 1); proto_end.size = 16384; proto_end.zone_start = 0; proto_end.zone_end = 0; for (int i = 0; i < transfers->end; i++) { char *msg = transfers->vecs[i].iov_base; size_t real_sz = transfers->vecs[i].iov_len; uint32_t header = ((uint32_t *)msg)[0]; size_t sz = transfer_size(header); if (sz != real_sz) { wp_error("Transfer nominal size %zu did not match actual %zu", sz, real_sz); goto cleanup; } /* note: we assume there is at most one inj_rid message * per batch */ if (transfer_type(header) == WMSG_PROTOCOL) { void *ndata = realloc(proto_mid.data, (size_t)proto_mid.zone_end + (sz - 4)); if (!ndata) { wp_error("Failed to reallocate recv side proto data"); goto cleanup; } proto_mid.data = ndata; memcpy(proto_mid.data + proto_mid.zone_end, msg + 4, sz - 4); proto_mid.zone_end += (int)(sz - 4); proto_mid.size = proto_mid.zone_end; } else if (transfer_type(header) == WMSG_INJECT_RIDS) { void *ndata = realloc(fd_window.data, sizeof(int) * (size_t)fd_window.zone_end + (sz - 4) + fd_padding); if (!ndata) { wp_error("Failed to reallocate recv side fd data"); goto cleanup; } fd_window.data = ndata; memcpy(fd_window.data + fd_window.zone_end, msg + 4, sz - 4); fd_window.zone_end += (int)(sz - 4) / 4; fd_window.size = fd_window.zone_end; } else { int rid = (int)((uint32_t *)msg)[1]; struct bytebuf bb; bb.data = msg; bb.size = sz; int r = apply_update(&dst->glob.map, &dst->glob.threads, &dst->glob.render, transfer_type(header), rid, &bb); if (r < 0) { wp_error("Applying update failed"); goto cleanup; } } } /* Convert RIDs back to fds */ for (int i = fd_window.zone_start; i < fd_window.zone_end; i++) { struct shadow_fd *sfd = get_shadow_for_rid( &dst->glob.map, fd_window.data[i]); if (sfd) { fd_window.data[i] = sfd->fd_local; } else { wp_error("Failed to get shadow_fd for RID=%d, index %d", fd_window.data[i], i); fd_window.data[i] = -1; } } local_time_offset = dst->local_time_offset; parse_and_prune_messages(&dst->glob, dst->display_side, dst->display_side, &proto_mid, &proto_end, &fd_window); /* Finally, take the output fds, and append them to the output stack; * ditto with the output messages. 
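(By this point, all WMSG_PROTOCOL payloads have been concatenated and all RIDs mapped back to local fds by the loops above.)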
Assume for now messages are 1-in * 1-out */ dst->nrcvd++; dst->rcvd = realloc(dst->rcvd, sizeof(struct msg) * (size_t)dst->nrcvd); struct msg *lastmsg = &dst->rcvd[dst->nrcvd - 1]; memset(lastmsg, 0, sizeof(struct msg)); /* Save the fds that were marked used (which should be all of them) */ if (fd_window.zone_start > 0) { lastmsg->nfds = fd_window.zone_start; lastmsg->fds = malloc( sizeof(int) * (size_t)fd_window.zone_start); for (int i = 0; i < fd_window.zone_start; i++) { /* duplicate fd, so it's still usable if shadowfd gone */ lastmsg->fds[i] = dup(fd_window.data[i]); } } if (proto_end.zone_end > 0) { lastmsg->len = proto_end.zone_end; lastmsg->data = malloc( sizeof(uint32_t) * (size_t)proto_end.zone_end); memcpy(lastmsg->data, proto_end.data, (size_t)proto_end.zone_end); } cleanup: free(proto_end.data); free(proto_mid.data); free(fd_window.data); } /* Sends a Wayland protocol message to src, and records output messages * in dst. */ void send_protocol_msg(struct test_state *src, struct test_state *dst, const struct msg msg) { if (src->failed || dst->failed) { wp_error("at least one side broken, skipping msg"); return; } struct transfer_queue transfers; memset(&transfers, 0, sizeof(transfers)); pthread_mutex_init(&transfers.async_recv_queue.lock, NULL); /* On destination side, a bit easier; process transfers, and * then deliver all messages */ send_wayland_msg(src, msg, &transfers); receive_wire(dst, &transfers); cleanup_transfer_queue(&transfers); } int setup_state(struct test_state *s, bool display_side, bool has_gpu) { memset(s, 0, sizeof(*s)); s->config = (struct main_config){.drm_node = NULL, .n_worker_threads = 1, .compression = COMP_NONE, .compression_level = 0, .no_gpu = !has_gpu, .only_linear_dmabuf = true, .video_if_possible = false, .video_bpf = 120000, .video_fmt = VIDEO_H264, .prefer_hwvideo = false, .old_video_mode = false}; s->glob.config = &s->config; s->glob.render = (struct render_data){ .drm_node_path = s->config.drm_node, .drm_fd = -1, .dev = NULL, .disabled = s->config.no_gpu, .av_disabled = s->config.no_gpu || !s->config.prefer_hwvideo, .av_bpf = s->config.video_bpf, .av_video_fmt = (int)s->config.video_fmt, .av_hwdevice_ref = NULL, .av_drmdevice_ref = NULL, .av_vadisplay = NULL, .av_copy_config = 0, }; // leave render data to be set up on demand, just as in // main_loop? // TODO: what compositors _don't_ support GPU stuff? setup_thread_pool(&s->glob.threads, s->config.compression, s->config.compression_level, s->config.n_worker_threads); setup_translation_map(&s->glob.map, display_side); init_message_tracker(&s->glob.tracker); setup_video_logging(); s->display_side = display_side; // TODO: make a transfer queue for outgoing stuff return 0; } void cleanup_state(struct test_state *s) { cleanup_message_tracker(&s->glob.tracker); cleanup_translation_map(&s->glob.map); cleanup_render_data(&s->glob.render); cleanup_hwcontext(&s->glob.render); cleanup_thread_pool(&s->glob.threads); for (int i = 0; i < s->nrcvd; i++) { free(s->rcvd[i].data); for (int j = 0; j < s->rcvd[i].nfds; j++) { checked_close(s->rcvd[i].fds[j]); } free(s->rcvd[i].fds); } free(s->rcvd); } void test_log_handler(const char *file, int line, enum log_level level, const char *fmt, ...) { (void)level; printf("[%s:%d] ", file, line); va_list args; va_start(args, fmt); vprintf(fmt, args); va_end(args); printf("\n"); } void test_atomic_log_handler(const char *file, int line, enum log_level level, const char *fmt, ...) 
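/* Unlike test_log_handler, this formats the whole line into a local buffer and emits it with a single write() call, so log lines from concurrently running threads are not interleaved. */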
{ pthread_t tid = pthread_self(); char msg[1024]; int nwri = 0; nwri += sprintf(msg + nwri, "%" PRIx64 " [%s:%3d] ", (uint64_t)tid, file, line); va_list args; va_start(args, fmt); nwri += vsnprintf(msg + nwri, (size_t)(1022 - nwri), fmt, args); va_end(args); msg[nwri++] = '\n'; msg[nwri] = 0; (void)write(STDOUT_FILENO, msg, (size_t)nwri); (void)level; } waypipe-v0.10.5/waypipe-c/test/common.h000066400000000000000000000044701507071711100177640ustar00rootroot00000000000000/* * Copyright © 2021 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #ifndef WAYPIPE_TESTCOMMON_H #define WAYPIPE_TESTCOMMON_H #include "main.h" #include "parsing.h" #include "util.h" /** a simple log handler to STDOUT for use by test programs */ void test_log_handler(const char *file, int line, enum log_level level, const char *fmt, ...); void test_atomic_log_handler(const char *file, int line, enum log_level level, const char *fmt, ...); extern uint64_t time_value; extern uint64_t local_time_offset; void *read_file_into_mem(const char *path, size_t *len); struct msg { uint32_t *data; int len; int *fds; int nfds; }; struct test_state { struct main_config config; struct globals glob; bool display_side; bool failed; /* messages received from the other side */ int nrcvd; struct msg *rcvd; uint64_t local_time_offset; }; void send_wayland_msg(struct test_state *src, const struct msg msg, struct transfer_queue *queue); void receive_wire(struct test_state *src, struct transfer_queue *queue); void send_protocol_msg(struct test_state *src, struct test_state *dst, const struct msg msg); int setup_state(struct test_state *s, bool display_side, bool has_gpu); void cleanup_state(struct test_state *s); #endif /* WAYPIPE_TESTCOMMON_H */ waypipe-v0.10.5/waypipe-c/test/damage_merge.c000066400000000000000000000242511507071711100210630ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the 
Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "common.h" #include "shadow.h" #include #include #include #include static void fill_overcopy_pattern( int Ntotal, int margin, struct ext_interval *data) { int stride = 100 + margin + 1; for (int i = 0; i < Ntotal; i++) { data[i] = (struct ext_interval){ .start = (i) % (Ntotal / 2) * stride, .width = 100 - (i > Ntotal / 2), .rep = 100, .stride = stride, }; } } static void fill_line_crossing_pattern( int Ntotal, int margin, struct ext_interval *data) { int step = (margin + 1); int boxsize = ceildiv(Ntotal, 2) * step; for (int i = 0; i < Ntotal; i++) { if (i % 2 == 0) { data[i] = (struct ext_interval){ .start = (i / 2) * step, .width = 1, .rep = boxsize, .stride = boxsize, }; } else { data[i] = (struct ext_interval){ .start = (i / 2) * boxsize, .width = boxsize, .rep = 1, .stride = 0, }; } } } static void fill_vline_pattern( int Ntotal, int margin, struct ext_interval *data) { int step = (margin + 2); int stride = Ntotal * step; for (int i = 0; i < Ntotal; i++) { data[i] = (struct ext_interval){ .start = i * step, .width = 1, .rep = 2, .stride = stride, }; } } static int randint(int max) { int cap = RAND_MAX - RAND_MAX % max; while (1) { int x = rand(); if (x >= cap) { continue; } return x % max; } } static void fill_circle_pattern( int Ntotal, int margin, struct ext_interval *data) { srand((uint32_t)(Ntotal + 165 * margin)); int i = 0; int R = (int)((2 * margin + Ntotal) * 0.3); int s = (2 * margin + Ntotal) / 2; while (i < Ntotal) { int x = randint(2 * R); int w = randint(2 * R - x) + 1; int y = randint(2 * R); int h = randint(2 * R - y) + 1; int64_t x2a = (x - R) * (x - R); int64_t x2b = (x + w - R) * (x + w - R); int64_t x2 = x2a < x2b ? x2b : x2a; int64_t y2a = (y - R) * (y - R); int64_t y2b = (y + h - R) * (y + h - R); int64_t y2 = y2a < y2b ?
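/* Of the two squared distances, keep the one for the rectangle edge that
 * lies farther from the circle's center; the rejection test just below
 * (x2 + y2 >= R * R) then discards any candidate rectangle whose farthest
 * corner falls on or outside the radius-R circle. */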
y2b : y2a; if (x2 + y2 >= R * R) { continue; } data[i++] = (struct ext_interval){ .start = s * y + x, .width = w, .rep = h, .stride = s, }; } } static void fill_snow_pattern(int Ntotal, int margin, struct ext_interval *data) { srand((uint32_t)(Ntotal + 165 * margin)); int size = 4; while (size * size < Ntotal * margin) { size = size + size / 4; } for (int i = 0; i < Ntotal; i++) { int x = randint(size); int y = randint(size); data[i] = (struct ext_interval){ .start = size * y + x, .width = 1, .rep = 1, .stride = size, }; } } struct pattern { const char *name; void (*func)(int Ntotal, int margin, struct ext_interval *data); }; static const struct pattern patterns[] = {{"overcopy", fill_overcopy_pattern}, {"line-crossing", fill_line_crossing_pattern}, {"circle", fill_circle_pattern}, {"snow", fill_snow_pattern}, {"vline", fill_vline_pattern}, {NULL, NULL}}; static inline int eint_low(const struct ext_interval i) { return i.start; } static inline int eint_high(const struct ext_interval i) { return i.start + (i.rep - 1) * i.stride + i.width; } static void write_eint( struct ext_interval e, char *buf, int minv, uint8_t value) { for (int k = 0; k < e.rep; k++) { memset(&buf[e.start + e.stride * k - minv], value, (size_t)e.width); } } /** Verify that: * - the new set of intervals covers the old * - the new set of intervals is disjoint within margin */ static bool check_solution_properties(int nsub, const struct ext_interval *sub, int nsup, const struct interval *sup, int margin) { int minv = INT32_MAX, maxv = INT32_MIN; for (int i = 0; i < nsup; i++) { minv = min(minv, sup[i].start); maxv = max(maxv, sup[i].end); } for (int i = 0; i < nsub; i++) { minv = min(minv, eint_low(sub[i])); maxv = max(maxv, eint_high(sub[i])); } if (minv > maxv) { return true; } minv -= margin; maxv += margin; char *test = calloc((size_t)(maxv - minv), 1); // Fast & stupid containment test for (int i = 0; i < nsub; i++) { write_eint(sub[i], test, minv, 1); } for (int i = 0; i < nsup; i++) { struct interval e = sup[i]; if (memchr(&test[e.start - minv - margin], 2, (size_t)(e.end - e.start + 2 * margin)) != NULL) { printf("Internal overlap failure\n"); free(test); return false; } memset(&test[e.start - minv], 2, (size_t)(e.end - e.start)); } bool yes = memchr(test, 1, (size_t)(maxv - minv)) == NULL; if (!yes) { int count = 0; for (int i = 0; i < maxv - minv; i++) { count += test[i] == 1; } printf("Fail count: %d/%d\n", count, maxv - minv); if (maxv - minv < 200) { for (int i = 0; i < maxv - minv; i++) { printf("%d", test[i]); } printf("\n"); } } free(test); return yes; } static int convert_to_simple( struct interval *vec, int count, const struct ext_interval *ext) { int k = 0; for (int i = 0; i < count; i++) { for (int j = 0; j < ext[i].rep; j++) { vec[k].start = ext[i].start + j * ext[i].stride; vec[k].end = vec[k].start + ext[i].width; k++; } } return k; } static int simple_lexsort(const void *L, const void *R) { const struct interval *l = L; const struct interval *r = R; if (l->start != r->start) { return l->start - r->start; } return l->end - r->end; } /** A merge operation which reduces the compound intervals to simple intervals, * and then merges them that way. After all, this only expands memory use and * runtime by a factor of screen height... 
*/ static void __attribute__((noinline)) merge_simple(const int old_count, struct ext_interval *old_list, const int new_count, const struct ext_interval *const new_list, int *dst_count, struct interval **dst_list, int merge_margin) { int nintervals = 0; for (int i = 0; i < old_count; i++) { nintervals += old_list[i].rep; } for (int i = 0; i < new_count; i++) { nintervals += new_list[i].rep; } struct interval *vec = malloc((size_t)nintervals * sizeof(struct interval)); int base = convert_to_simple(vec, old_count, old_list); convert_to_simple(&vec[base], new_count, new_list); // divide and conquer would be faster here qsort(vec, (size_t)nintervals, sizeof(struct interval), simple_lexsort); int r = 0, w = 0; while (r < nintervals) { // inside loop. int end = vec[w].end; r++; // the interval already contains itself while (r < nintervals && vec[r].start < end + merge_margin) { end = max(end, vec[r].end); r++; } vec[w].end = end; w++; if (r < nintervals) { vec[w] = vec[r]; } } *dst_list = vec; *dst_count = w; } static int get_coverage(const int c, const struct interval *li) { int n = 0; for (int i = 0; i < c; i++) { n += li[i].end - li[i].start; } return n; } log_handler_func_t log_funcs[2] = {test_log_handler, test_log_handler}; int main(int argc, char **argv) { (void)argc; (void)argv; bool all_success = true; srand(0); // no larger, because e.g. test sizes are (margins*N)^2 int margins[] = {2, 11, 32, 1}; int nvec[] = {1000, 50, 10, 30}; for (int z = 0; z < (int)(sizeof(nvec) / sizeof(nvec[0])); z++) { for (int ip = 0; patterns[ip].name; ip++) { /* Pattern tests: we generate a given pattern of damage * rectangles, apply the merge function, and verify that * all the desired result properties hold */ struct ext_interval *data = calloc((size_t)nvec[z], sizeof(struct ext_interval)); printf("\n---- pattern=%s, N=%d, margin=%d\n", patterns[ip].name, nvec[z], margins[z]); (*patterns[ip].func)(nvec[z], margins[z], data); // check that minv >= 0, maxv is <= 1GB int64_t minv = 0, maxv = 0; for (int i = 0; i < nvec[z]; i++) { int64_t high = data[i].start + ((int64_t)data[i].rep) * data[i].stride + data[i].width; maxv = maxv > high ? maxv : high; minv = minv < data[i].start ? minv : data[i].start; } if (minv < 0) { printf("generated interval set violates lower bound, skipping\n"); continue; } if (maxv > 0x40000000LL) { printf("generated interval set would use too much memory to check, skipping\n"); continue; } const char *names[2] = {"simple", "merges"}; for (int k = 0; k < 2; k++) { int dst_count = 0; struct interval *dst_list = NULL; int margin = margins[z]; struct timespec t0, t1; clock_gettime(CLOCK_MONOTONIC, &t0); if (k == 0) { merge_simple(0, NULL, nvec[z], data, &dst_count, &dst_list, margin); } else if (k == 1) { merge_mergesort(0, NULL, nvec[z], data, &dst_count, &dst_list, margin, 0); } clock_gettime(CLOCK_MONOTONIC, &t1); double elapsed01 = 1.0 * (double)(t1.tv_sec - t0.tv_sec) + 1e-9 * (double)(t1.tv_nsec - t0.tv_nsec); bool pass = check_solution_properties(nvec[z], data, dst_count, dst_list, margins[z]); all_success &= pass; int coverage = get_coverage( dst_count, dst_list); printf("%s operation took %9.5f ms, %d intervals, %d bytes, %s\n", names[k], elapsed01 * 1e3, dst_count, coverage, pass ? "pass" : "FAIL"); free(dst_list); } free(data); } } return all_success ? 
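/* A recap of the structure these tests exercise (field meanings inferred
 * from write_eint above, so treat this as a sketch): an ext_interval
 * covers the byte ranges [start + k*stride, start + k*stride + width)
 * for k = 0..rep-1. For example:
 *
 *     struct ext_interval e = {
 *             .start = 10, .width = 4, .rep = 3, .stride = 100};
 *     // covers bytes [10,14), [110,114), and [210,214)
 *
 * i.e. a width x height damage rectangle laid out in a linear buffer
 * whose row pitch is the stride. */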
EXIT_SUCCESS : EXIT_FAILURE; } waypipe-v0.10.5/waypipe-c/test/diff_roundtrip.c000066400000000000000000000144371507071711100215110ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "common.h" #include "shadow.h" #include #include #include #include static int64_t rand_gap_fill(char *data, size_t size, int max_run) { if (max_run == -1) { memset(data, rand(), size); return 1; } else if (max_run == -2) { memset(data, 0, size); return 0; } max_run = max(2, max_run); size_t pos = 0; int64_t nruns = 0; while (pos < size) { int gap1 = (rand() % max_run); gap1 = min((int)(size - pos), gap1); pos += (size_t)gap1; int gap2 = (rand() % max_run); gap2 = min((int)(size - pos), gap2); int val = rand(); memset(&data[pos], val, (size_t)gap2); pos += (size_t)gap2; nruns++; } return nruns; } struct subtest { size_t size; int max_gap; uint32_t seed; int shards; }; static const struct subtest subtests[] = { {256, 128, 0x11, 3}, {333333, 128, 0x11, 3}, {39, 2, 0x13, 17}, {10000000, 262144, 0x21, 1}, {4, 4, 0x41, 1}, {65537, 177, 0x51, 1}, {17777, 2, 0x61, 1}, {60005, 60005, 0x71, 1}, {1 << 16, -1, 0x71, 4}, {1 << 16, -2, 0x71, 4}, {1 << 24, -1, 0x71, 4}, {1 << 24, -2, 0x71, 4}, }; static const enum diff_type diff_types[5] = { DIFF_AVX512F, DIFF_AVX2, DIFF_SSE3, DIFF_NEON, DIFF_C, }; static const char *diff_names[5] = { "avx512", "avx2 ", "sse3 ", "neon ", "plainC", }; static bool run_subtest(int i, const struct subtest test, char *diff, char *source, char *mirror, char *target1, char *target2, interval_diff_fn_t diff_fn, int alignment_bits, const char *diff_name) { uint64_t ns01 = 0, ns12 = 0; int64_t nruns = 0; size_t net_diffsize = 0; srand((uint32_t)test.seed); memset(mirror, 0, test.size); memset(target1, 0, test.size); memset(target2, 0, test.size); int roughtime = (int)test.size + test.shards * 500; int repetitions = min(100, max(1000000000 / roughtime, 1)); bool all_success = true; for (int x = 0; x < repetitions; x++) { nruns += rand_gap_fill(source, test.size, test.max_gap); net_diffsize = 0; for (int s = 0; s < test.shards; s++) { struct interval damage; damage.start = split_interval( 0, (int)test.size, test.shards, s); damage.end = split_interval( 0, (int)test.size, test.shards, s + 1); int alignment = 1 << alignment_bits; damage.start = alignment * (damage.start / alignment); damage.end = alignment * (damage.end / alignment); struct timespec t0, t1, t2; clock_gettime(CLOCK_MONOTONIC, &t0); size_t 
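/* Each shard diffs only its own slice of the buffer, with the slice
 * endpoints rounded down to the diff function's alignment; the last
 * shard additionally picks up whatever the rounding left at the tail,
 * via construct_diff_trailing below. As an illustration, if
 * split_interval partitions [0, size) evenly (an assumption about its
 * definition), shard s of 4 over a 1000-byte buffer would cover roughly
 * [250*s, 250*(s+1)) before the alignment rounding. */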
diffsize = 0; if (damage.start < damage.end) { diffsize = construct_diff_core(diff_fn, alignment_bits, &damage, 1, mirror, source, diff); } size_t ntrailing = 0; if (s == test.shards - 1) { ntrailing = construct_diff_trailing(test.size, alignment_bits, mirror, source, diff + diffsize); } clock_gettime(CLOCK_MONOTONIC, &t1); apply_diff(test.size, target1, target2, diffsize, ntrailing, diff); clock_gettime(CLOCK_MONOTONIC, &t2); ns01 += (uint64_t)((t1.tv_sec - t0.tv_sec) * 1000000000LL + (t1.tv_nsec - t0.tv_nsec)); ns12 += (uint64_t)((t2.tv_sec - t1.tv_sec) * 1000000000LL + (t2.tv_nsec - t1.tv_nsec)); net_diffsize += diffsize + ntrailing; } if (memcmp(target1, source, test.size)) { printf("Failed to synchronize\n"); int ndiff = 0; for (size_t k = 0; k < test.size; k++) { if (target1[k] != source[k] || mirror[k] != source[k]) { if (ndiff > 300) { printf("and still more differences\n"); break; } printf("i %d: target1 %02x mirror %02x source %02x\n", (int)k, (uint8_t)target1[k], (uint8_t)mirror[k], (uint8_t)source[k]); ndiff++; } } all_success = false; break; } } double scale = 1.0 / ((double)repetitions * (double)test.size); printf("%s #%2d, : %6.3f,%6.3f,%6.3f ns/byte create,apply,net (%d/%d@%d), %.1f bytes/run\n", diff_name, i, (double)ns01 * scale, (double)ns12 * scale, (double)(ns01 + ns12) * scale, (int)net_diffsize, (int)test.size, test.shards, (double)repetitions * (double)test.size / (double)nruns); return all_success; } log_handler_func_t log_funcs[2] = {test_log_handler, test_log_handler}; int main(int argc, char **argv) { (void)argc; (void)argv; bool all_success = true; const int nsubtests = (sizeof(subtests) / sizeof(subtests[0])); for (int i = 0; i < nsubtests; i++) { struct subtest test = subtests[i]; /* Use maximum alignment */ const size_t bufsize = alignz(test.size + 8 + 64, 64); char *diff = aligned_alloc(64, bufsize); char *source = aligned_alloc(64, bufsize); char *mirror = aligned_alloc(64, bufsize); char *target1 = aligned_alloc(64, bufsize); char *target2 = aligned_alloc(64, bufsize); const int ntypes = sizeof(diff_types) / sizeof(diff_types[0]); for (int a = 0; a < ntypes; a++) { int alignment_bits; interval_diff_fn_t diff_fn = get_diff_function( diff_types[a], &alignment_bits); if (!diff_fn) { continue; } all_success &= run_subtest(i, test, diff, source, mirror, target1, target2, diff_fn, alignment_bits, diff_names[a]); } free(diff); free(source); free(mirror); free(target1); free(target2); } return all_success ? EXIT_SUCCESS : EXIT_FAILURE; } waypipe-v0.10.5/waypipe-c/test/fake_ssh.c000066400000000000000000000045751507071711100202600ustar00rootroot00000000000000/* * Copyright © 2021 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include #include #include #include #include static int usage(void) { fprintf(stderr, "usage: fake_ssh [-R A:B] [-t] destination command...\n"); return EXIT_FAILURE; } int main(int argc, char **argv) { if (argc < 2) { return usage(); } argv++; argc--; bool pseudoterminal = false; char *link = NULL; char *destination = NULL; while (argc > 0) { if (strcmp(argv[0], "-t") == 0) { pseudoterminal = true; argv++; argc--; } else if (strcmp(argv[0], "-R") == 0) { link = argv[1]; argv += 2; argc -= 2; } else { destination = argv[0]; argv++; argc--; break; } } if (link) { char *p1 = link, *p2 = NULL; for (char *c = link; *c; c++) { if (*c == ':') { *c = '\0'; p2 = c + 1; break; } } if (!p2) { fprintf(stderr, "Failed to split forwarding descriptor '%s'\n", p1); return EXIT_FAILURE; } unlink(p1); if (symlink(p2, p1) == -1) { fprintf(stderr, "Symlinking '%s' to '%s' failed\n", p2, p1); return EXIT_FAILURE; } } (void)destination; (void)pseudoterminal; if (execvp(argv[0], argv) == -1) { fprintf(stderr, "Failed to run program '%s'\n", argv[0]); return EXIT_FAILURE; } return EXIT_SUCCESS; } waypipe-v0.10.5/waypipe-c/test/fd_mirror.c000066400000000000000000000323111507071711100204450ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
*/ #include "common.h" #include "shadow.h" #include #include #include #include #include #include #include #include #include struct compression_settings { enum compression_mode mode; int level; }; static const struct compression_settings comp_modes[] = { {COMP_NONE, 0}, #ifdef HAS_LZ4 {COMP_LZ4, 1}, #endif #ifdef HAS_ZSTD {COMP_ZSTD, 5}, #endif }; #ifdef HAS_DMABUF #include #define TEST_2CPP_FORMAT GBM_FORMAT_GR88 #else #define TEST_2CPP_FORMAT 0 #endif static int update_file(int file_fd, struct gbm_bo *bo, size_t sz, int seqno) { (void)bo; if (rand() % 11 == 0) { /* no change */ return 0; } void *data = mmap(NULL, sz, PROT_READ | PROT_WRITE, MAP_SHARED, file_fd, 0); if (data == MAP_FAILED) { return -1; } size_t start = (size_t)rand() % sz; size_t end = (size_t)rand() % sz; if (start > end) { size_t tmp = start; start = end; end = tmp; } memset((char *)data + start, seqno, end - start); munmap(data, sz); return (int)(end - start); } static int update_dmabuf(int file_fd, struct gbm_bo *bo, size_t sz, int seqno) { (void)file_fd; if (rand() % 11 == 0) { /* no change */ return 0; } void *map_handle = NULL; uint32_t stride; void *data = map_dmabuf(bo, true, &map_handle, &stride); if (data == MAP_FAILED) { return -1; } size_t start = (size_t)rand() % sz; size_t end = (size_t)rand() % sz; if (start > end) { size_t tmp = start; start = end; end = tmp; } memset((char *)data + start, seqno, end - start); unmap_dmabuf(bo, map_handle); return (int)(end - start); } static struct bytebuf combine_transfer_blocks(struct transfer_queue *td) { size_t net_size = 0; for (int i = td->start; i < td->end; i++) { net_size += td->vecs[i].iov_len; } struct bytebuf ret_block; ret_block.size = net_size; ret_block.data = malloc(net_size); size_t pos = 0; for (int i = td->start; i < td->end; i++) { memcpy(ret_block.data + pos, td->vecs[i].iov_base, td->vecs[i].iov_len); pos += td->vecs[i].iov_len; } return ret_block; } static bool check_match(int orig_fd, int copy_fd, struct gbm_bo *orig_bo, struct gbm_bo *copy_bo, enum fdcat otype, enum fdcat ctype) { if (ctype != otype) { wp_error("Mirrored file descriptor has different type: ot=%d ct=%d", otype, ctype); return false; } void *ohandle = NULL, *chandle = NULL; void *cdata = NULL, *odata = NULL; bool pass; if (otype == FDC_FILE) { struct stat ofsdata = {0}, cfsdata = {0}; if (fstat(orig_fd, &ofsdata) == -1) { wp_error("Failed to stat original file descriptor"); return false; } if (fstat(copy_fd, &cfsdata) == -1) { wp_error("Failed to stat copied file descriptor"); return false; } size_t csz = (size_t)cfsdata.st_size; size_t osz = (size_t)ofsdata.st_size; if (csz != osz) { wp_error("Mirrored file descriptor has different size: os=%d cs=%d", (int)osz, (int)csz); return false; } cdata = mmap(NULL, csz, PROT_READ, MAP_SHARED, copy_fd, 0); if (cdata == MAP_FAILED) { return false; } odata = mmap(NULL, osz, PROT_READ, MAP_SHARED, orig_fd, 0); if (odata == MAP_FAILED) { munmap(cdata, csz); return false; } pass = memcmp(cdata, odata, csz) == 0; munmap(odata, osz); munmap(cdata, csz); } else if (otype == FDC_DMABUF) { uint32_t copy_stride, orig_stride; cdata = map_dmabuf(copy_bo, false, &chandle, ©_stride); if (cdata == NULL) { return false; } odata = map_dmabuf(orig_bo, false, &ohandle, &orig_stride); if (odata == NULL) { unmap_dmabuf(copy_bo, chandle); return false; } /* todo: check the file descriptor contents */ pass = true; unmap_dmabuf(orig_bo, ohandle); unmap_dmabuf(copy_bo, chandle); } else { return false; } if (!pass) { wp_error("Mirrored file descriptor contents 
differ"); } return pass; } static void wait_for_thread_pool(struct thread_pool *pool) { bool done = false; while (!done) { uint8_t flush[64]; (void)read(pool->selfpipe_r, flush, sizeof(flush)); /* Also run tasks on main thread, just like the real version */ // TODO: create a 'threadpool.c' struct task_data task; bool has_task = request_work_task(pool, &task, &done); if (has_task) { run_task(&task, &pool->threads[0]); pthread_mutex_lock(&pool->work_mutex); pool->tasks_in_progress--; pthread_mutex_unlock(&pool->work_mutex); /* To skip the next poll */ } else { /* Wait a short amount */ struct timespec waitspec; waitspec.tv_sec = 0; waitspec.tv_nsec = 100000; nanosleep(&waitspec, NULL); } } } static bool test_transfer(struct fd_translation_map *src_map, struct fd_translation_map *dst_map, struct thread_pool *src_pool, struct thread_pool *dst_pool, int rid, bool expect_changes, struct render_data *render_data) { struct transfer_queue transfer_data; memset(&transfer_data, 0, sizeof(struct transfer_queue)); pthread_mutex_init(&transfer_data.async_recv_queue.lock, NULL); struct shadow_fd *src_shadow = get_shadow_for_rid(src_map, rid); collect_update(src_pool, src_shadow, &transfer_data, false); start_parallel_work(src_pool, &transfer_data.async_recv_queue); wait_for_thread_pool(src_pool); finish_update(src_shadow); transfer_load_async(&transfer_data); if (!expect_changes) { size_t ns = 0; for (int i = transfer_data.start; i < transfer_data.end; i++) { ns += transfer_data.vecs[i].iov_len; } if (transfer_data.end == transfer_data.start) { /* nothing sent */ cleanup_transfer_queue(&transfer_data); return true; } /* Redundant transfers are acceptable, if inefficient */ wp_error("Collecting updates gave a transfer (%zd bytes, %d blocks) when none was expected", ns, transfer_data.end - transfer_data.start); } if (transfer_data.end == transfer_data.start) { wp_error("Collecting updates gave a unexpected number (%d) of transfers", transfer_data.end - transfer_data.start); cleanup_transfer_queue(&transfer_data); return false; } struct bytebuf res = combine_transfer_blocks(&transfer_data); cleanup_transfer_queue(&transfer_data); size_t start = 0; while (start < res.size) { struct bytebuf tmp; tmp.data = &res.data[start]; uint32_t hb = ((uint32_t *)tmp.data)[0]; int32_t xid = ((int32_t *)tmp.data)[1]; tmp.size = transfer_size(hb); apply_update(dst_map, dst_pool, render_data, transfer_type(hb), xid, &tmp); start += alignz(tmp.size, 4); } free(res.data); /* first round, this only exists after the transfer */ struct shadow_fd *dst_shadow = get_shadow_for_rid(dst_map, rid); return check_match(src_shadow->fd_local, dst_shadow->fd_local, src_shadow->dmabuf_bo, dst_shadow->dmabuf_bo, src_shadow->type, dst_shadow->type); } /* This test closes the provided file fd */ static bool test_mirror(int new_file_fd, size_t sz, int (*update)(int fd, struct gbm_bo *bo, size_t sz, int seqno), struct compression_settings comp_mode, int n_src_threads, int n_dst_threads, struct render_data *rd, const struct dmabuf_slice_data *slice_data) { struct fd_translation_map src_map; setup_translation_map(&src_map, false); struct thread_pool src_pool; setup_thread_pool(&src_pool, comp_mode.mode, comp_mode.level, n_src_threads); struct fd_translation_map dst_map; setup_translation_map(&dst_map, true); struct thread_pool dst_pool; setup_thread_pool(&dst_pool, comp_mode.mode, comp_mode.level, n_dst_threads); size_t fdsz = 0; enum fdcat fdtype; if (slice_data) { fdtype = FDC_DMABUF; } else { fdtype = get_fd_type(new_file_fd, &fdsz); } struct 
shadow_fd *src_shadow = translate_fd(&src_map, rd, NULL, new_file_fd, fdtype, fdsz, slice_data, false); struct shadow_fd *dst_shadow = NULL; int rid = src_shadow->remote_id; bool pass = true; for (int i = 0; i < 7; i++) { bool fwd = i == 0 || i % 2; int target_fd = fwd ? src_shadow->fd_local : dst_shadow->fd_local; struct gbm_bo *target_bo = fwd ? src_shadow->dmabuf_bo : dst_shadow->dmabuf_bo; bool expect_changes = false; if (i == 5 && fdtype == FDC_FILE) { sz = (sz * 7) / 5; if (ftruncate(target_fd, (off_t)sz) == -1) { wp_error("failed to resize file"); break; } extend_shm_shadow(fwd ? &src_pool : &dst_pool, fwd ? src_shadow : dst_shadow, sz); expect_changes = true; } int ndiff = i > 0 ? (*update)(target_fd, target_bo, sz, i) : (int)sz; if (ndiff == -1) { pass = false; break; } expect_changes = expect_changes || (ndiff > 0); bool subpass; if (fwd) { src_shadow->is_dirty = true; damage_everything(&src_shadow->damage); subpass = test_transfer(&src_map, &dst_map, &src_pool, &dst_pool, rid, expect_changes, rd); } else { dst_shadow->is_dirty = true; damage_everything(&dst_shadow->damage); subpass = test_transfer(&dst_map, &src_map, &dst_pool, &src_pool, rid, expect_changes, rd); } pass &= subpass; if (!pass) { break; } dst_shadow = get_shadow_for_rid(&dst_map, rid); } cleanup_translation_map(&src_map); cleanup_translation_map(&dst_map); cleanup_thread_pool(&src_pool); cleanup_thread_pool(&dst_pool); return pass; } log_handler_func_t log_funcs[2] = {NULL, test_atomic_log_handler}; int main(int argc, char **argv) { (void)argc; (void)argv; if (mkdir("run", S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH) == -1 && errno != EEXIST) { wp_error("Not allowed to create test directory, cannot run tests."); return EXIT_FAILURE; } /* to avoid warnings when the driver dmabuf size constraints require * significant alignment, the width/height are already 64 aligned */ const size_t test_width = 1024; const size_t test_height = 1280; const size_t test_cpp = 2; const size_t test_size = test_width * test_height * test_cpp; const struct dmabuf_slice_data slice_data = { .width = (uint32_t)test_width, .height = (uint32_t)test_height, .format = TEST_2CPP_FORMAT, .num_planes = 1, .modifier = 0, .offsets = {0, 0, 0, 0}, .strides = {(uint32_t)(test_width * test_cpp), 0, 0, 0}, .using_planes = {true, false, false, false}, }; uint8_t *test_pattern = malloc(test_size); for (size_t i = 0; i < test_size; i++) { test_pattern[i] = (uint8_t)i; } struct render_data *rd = calloc(1, sizeof(struct render_data)); rd->drm_fd = -1; rd->av_disabled = true; bool has_dmabuf = TEST_2CPP_FORMAT != 0; if (has_dmabuf && init_render_data(rd) == -1) { has_dmabuf = false; } bool all_success = true; srand(0); for (size_t c = 0; c < sizeof(comp_modes) / sizeof(comp_modes[0]); c++) { for (int gt = 1; gt <= 5; gt++) { for (int rt = 1; rt <= 5; rt++) { int file_fd = create_anon_file(); if (file_fd == -1) { wp_error("Failed to create test file: %s", strerror(errno)); continue; } if (write(file_fd, test_pattern, test_size) != (ssize_t)test_size) { wp_error("Failed to write to test file: %s", strerror(errno)); checked_close(file_fd); continue; } bool pass = test_mirror(file_fd, test_size, update_file, comp_modes[c], gt, rt, rd, NULL); printf(" FILE comp=%d src_thread=%d dst_thread=%d, %s\n", (int)c, gt, rt, pass ?
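/* The DMABUF leg below repeats the same mirror test against a GBM buffer
 * object: fill it with the test pattern through map_dmabuf, export a
 * dmabuf fd, and mirror that instead; if any step fails (for example,
 * the driver lacks the 2-byte-per-pixel test format), dmabuf testing is
 * disabled rather than counted as a failure. */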
"pass" : "FAIL"); all_success &= pass; if (has_dmabuf) { struct gbm_bo *bo = make_dmabuf( rd, &slice_data); if (!bo) { has_dmabuf = false; continue; } void *map_handle = NULL; uint32_t stride; void *data = map_dmabuf(bo, true, &map_handle, &stride); if (!data) { destroy_dmabuf(bo); has_dmabuf = false; continue; } memcpy(data, test_pattern, test_size); unmap_dmabuf(bo, map_handle); int dmafd = export_dmabuf(bo); if (dmafd == -1) { has_dmabuf = false; continue; } destroy_dmabuf(bo); bool dpass = test_mirror(dmafd, test_size, update_dmabuf, comp_modes[c], gt, rt, rd, &slice_data); printf("DMABUF comp=%d src_thread=%d dst_thread=%d, %s\n", (int)c, gt, rt, dpass ? "pass" : "FAIL"); all_success &= dpass; } } } } cleanup_render_data(rd); free(rd); free(test_pattern); printf("All pass: %c\n", all_success ? 'Y' : 'n'); return all_success ? EXIT_SUCCESS : EXIT_FAILURE; } waypipe-v0.10.5/waypipe-c/test/fuzz_hook_det.c000066400000000000000000000103511507071711100213340ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
*/ #include "common.h" #include "main.h" #include #include #include #include #include #include #include #include #include #include #include #include #include log_handler_func_t log_funcs[2] = {NULL, NULL}; int main(int argc, char **argv) { if (argc == 1 || !strcmp(argv[1], "--help")) { printf("Usage: ./fuzz_hook_det [--server] [--log] {input_file}\n"); printf("A program to run and control Wayland and channel inputs for core Waypipe operations\n"); return EXIT_FAILURE; } bool display_side = true; if (argc > 1 && !strcmp(argv[1], "--server")) { display_side = false; argc--; argv++; } if (argc > 1 && !strcmp(argv[1], "--log")) { log_funcs[0] = test_atomic_log_handler; log_funcs[1] = test_atomic_log_handler; argc--; argv++; } size_t len; char *buf = read_file_into_mem(argv[1], &len); if (!buf) { return EXIT_FAILURE; } printf("Loaded %zu bytes\n", len); struct test_state ts; if (setup_state(&ts, display_side, true) == -1) { return -1; } char *ignore_buf = malloc(65536); /* Main loop: RW from socketpairs with sendmsg, with short wait */ int64_t file_nwords = (int64_t)len / 4; int64_t cursor = 0; uint32_t *data = (uint32_t *)buf; while (cursor < file_nwords) { uint32_t header = data[cursor++]; bool wayland_side = header & 0x1; bool add_file = header & 0x2; int new_fileno = -1; if (add_file && wayland_side && cursor < file_nwords) { uint32_t fsize = data[cursor++]; if (fsize == 0) { /* 'copy' sink */ new_fileno = open("/dev/null", O_WRONLY | O_NOCTTY); if (new_fileno == -1) { wp_error("Failed to open /dev/null"); } } else { /* avoid buffer overflow */ fsize = fsize > 1000000 ? 1000000 : fsize; new_fileno = create_anon_file(); if (ftruncate(new_fileno, (off_t)fsize) == -1) { wp_error("Failed to resize tempfile"); checked_close(new_fileno); break; } } } uint32_t packet_size = header >> 2; int64_t words_left = file_nwords - cursor; if (packet_size > 2048) { packet_size = 2048; } if (packet_size > (uint32_t)words_left) { packet_size = (uint32_t)words_left; } struct transfer_queue transfers; memset(&transfers, 0, sizeof(transfers)); pthread_mutex_init(&transfers.async_recv_queue.lock, NULL); if (wayland_side) { /* Send a message (incl fds) */ struct msg m; m.data = &data[cursor]; m.len = (int)packet_size; if (new_fileno != -1) { m.fds = &new_fileno; m.nfds = 1; } else { m.fds = NULL; m.nfds = 0; } send_wayland_msg(&ts, m, &transfers); /* ignore any created transfers, since this is only * a test of one side */ } else { /* Send a transfer */ void *msg_copy = calloc(packet_size, 4); memcpy(msg_copy, &data[cursor], packet_size * 4); transfer_add(&transfers, packet_size * 4, msg_copy); receive_wire(&ts, &transfers); } cleanup_transfer_queue(&transfers); cursor += packet_size; } cleanup_state(&ts); free(buf); free(ignore_buf); return EXIT_SUCCESS; } waypipe-v0.10.5/waypipe-c/test/fuzz_hook_ext.c000066400000000000000000000170021507071711100213600ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the 
Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "common.h" #include "main.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include struct copy_setup { int conn; int wayl; bool is_display_side; struct main_config *mc; }; static void *start_looper(void *data) { struct copy_setup *setup = (struct copy_setup *)data; main_interface_loop(setup->conn, setup->wayl, -1, setup->mc, setup->is_display_side); return NULL; } log_handler_func_t log_funcs[2] = {NULL, NULL}; int main(int argc, char **argv) { if (argc == 1 || !strcmp(argv[1], "--help")) { printf("Usage: ./fuzz_hook_ext [--log] {input_file}\n"); printf("A program to run and control Wayland inputs for a linked client/server pair, from a file.\n"); return EXIT_FAILURE; } if (argc > 1 && !strcmp(argv[1], "--log")) { log_funcs[0] = test_atomic_log_handler; log_funcs[1] = test_atomic_log_handler; argc--; argv++; } setup_video_logging(); size_t len; char *buf = read_file_into_mem(argv[1], &len); if (!buf) { return EXIT_FAILURE; } printf("Loaded %zu bytes\n", len); int srv_fds[2], cli_fds[2], conn_fds[2]; if (socketpair(AF_UNIX, SOCK_STREAM, 0, srv_fds) == -1 || socketpair(AF_UNIX, SOCK_STREAM, 0, cli_fds) == -1 || socketpair(AF_UNIX, SOCK_STREAM, 0, conn_fds) == -1) { printf("Socketpair failed\n"); return EXIT_FAILURE; } struct main_config config = { .drm_node = NULL, .n_worker_threads = 1, .compression = COMP_NONE, .compression_level = 0, .no_gpu = true, /* until we can construct dmabufs here */ .only_linear_dmabuf = false, .video_if_possible = true, .prefer_hwvideo = false, }; pthread_t thread_a, thread_b; struct copy_setup server_conf = {.conn = conn_fds[0], .wayl = srv_fds[1], .is_display_side = true, .mc = &config}; struct copy_setup client_conf = {.conn = conn_fds[1], .wayl = cli_fds[1], .is_display_side = false, .mc = &config}; if (pthread_create(&thread_a, NULL, start_looper, &server_conf) == -1) { printf("Thread failed\n"); } if (pthread_create(&thread_b, NULL, start_looper, &client_conf) == -1) { printf("Thread failed\n"); } char *ignore_buf = malloc(65536); /* Main loop: RW from socketpairs with sendmsg, with short wait */ int64_t file_nwords = (int64_t)len / 4; int64_t cursor = 0; uint32_t *data = (uint32_t *)buf; while (cursor < file_nwords) { uint32_t header = data[cursor++]; bool to_server = header & 0x1; bool add_file = header & 0x2; int new_fileno = -1; if (add_file && cursor < file_nwords) { uint32_t fsize = data[cursor++]; if (fsize == 0) { /* 'copy' sink */ new_fileno = open("/dev/null", O_WRONLY | O_NOCTTY); if (new_fileno == -1) { wp_error("Failed to open /dev/null"); } } else { /* avoid buffer overflow */ fsize = fsize > 1000000 ? 
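/* Corpus framing, as decoded above: each record starts with one header
 * word, where bit 0 routes the packet to the server or client side,
 * bit 1 requests an attached file descriptor (in which case a size word
 * follows), and header >> 2 gives the payload length in 32-bit words.
 * A worked example:
 *
 *     uint32_t header = 0x1b;             // binary 11011
 *     bool to_server = header & 0x1;      // true
 *     bool add_file = header & 0x2;       // true
 *     uint32_t packet_size = header >> 2; // 6 words (clamped to <= 2048)
 */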
1000000 : fsize; new_fileno = create_anon_file(); if (ftruncate(new_fileno, (off_t)fsize) == -1) { wp_error("Failed to resize tempfile"); checked_close(new_fileno); break; } } } uint32_t packet_size = header >> 2; int64_t words_left = file_nwords - cursor; if (packet_size > 2048) { packet_size = 2048; } if (packet_size > (uint32_t)words_left) { packet_size = (uint32_t)words_left; } /* 2 msec max delay for 8KB of data, assuming no system * interference, should be easily attainable */ int max_write_delay_ms = 1; int max_read_delay_ms = 2; int send_fd = to_server ? srv_fds[0] : cli_fds[0]; /* Write packet to stream */ struct pollfd write_pfd; write_pfd.fd = send_fd; write_pfd.events = POLLOUT; int nw; retry_poll: nw = poll(&write_pfd, 1, max_write_delay_ms); if (nw == -1) { if (new_fileno != -1) { checked_close(new_fileno); } if (errno == EINTR) { goto retry_poll; } printf("Poll error\n"); break; } else if (nw == 1) { /* Send message */ struct iovec the_iovec; the_iovec.iov_len = packet_size * 4; the_iovec.iov_base = (char *)&data[cursor]; struct msghdr msg; msg.msg_name = NULL; msg.msg_namelen = 0; msg.msg_iov = &the_iovec; msg.msg_iovlen = 1; msg.msg_control = NULL; msg.msg_controllen = 0; msg.msg_flags = 0; union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } uc; memset(uc.buf, 0, sizeof(uc.buf)); if (new_fileno != -1) { msg.msg_control = uc.buf; msg.msg_controllen = sizeof(uc.buf); struct cmsghdr *frst = CMSG_FIRSTHDR(&msg); frst->cmsg_level = SOL_SOCKET; frst->cmsg_type = SCM_RIGHTS; memcpy(CMSG_DATA(frst), &new_fileno, sizeof(int)); frst->cmsg_len = CMSG_LEN(sizeof(int)); msg.msg_controllen = CMSG_SPACE(sizeof(int)); } int target_fd = to_server ? srv_fds[0] : cli_fds[0]; ssize_t ret = sendmsg(target_fd, &msg, 0); if (ret == -1) { wp_error("Error in sendmsg"); break; } } else { wp_error("Failed to send message before timeout"); } if (new_fileno != -1) { checked_close(new_fileno); } /* Wait up to max_delay for a response. Almost all packets * should be passed on unmodified; a very small fraction * are dropped */ struct pollfd read_pfds[2]; read_pfds[0].fd = srv_fds[0]; read_pfds[1].fd = cli_fds[0]; read_pfds[0].events = POLLIN; read_pfds[1].events = POLLIN; int nr = poll(read_pfds, 2, packet_size > 0 ? 
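/* Replies are drained on a best-effort basis: the fuzz target only needs
 * the linked pair of main loops not to crash or deadlock, so a reply
 * that misses the short poll window is merely noted with wp_debug and
 * dropped. */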
max_read_delay_ms : 0); if (nr == -1) { if (errno == EINTR) { continue; } printf("Poll error\n"); break; } else if (nr == 0) { wp_debug("No reply to sent packet %d", packet_size); } for (int i = 0; i < 2; i++) { if (read_pfds[i].revents & POLLIN) { char cmsgdata[(CMSG_LEN(28 * sizeof(int32_t)))]; struct iovec the_iovec; the_iovec.iov_len = 65536; the_iovec.iov_base = ignore_buf; struct msghdr msg; msg.msg_name = NULL; msg.msg_namelen = 0; msg.msg_iov = &the_iovec; msg.msg_iovlen = 1; msg.msg_control = &cmsgdata; msg.msg_controllen = sizeof(cmsgdata); msg.msg_flags = 0; ssize_t ret = recvmsg(read_pfds[i].fd, &msg, 0); if (ret == -1) { wp_error("Error in recvmsg"); } } } cursor += packet_size; } checked_close(srv_fds[0]); checked_close(cli_fds[0]); pthread_join(thread_a, NULL); pthread_join(thread_b, NULL); free(buf); free(ignore_buf); return EXIT_SUCCESS; } waypipe-v0.10.5/waypipe-c/test/fuzz_hook_int.c000066400000000000000000000167431507071711100213650ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
*/ #include "common.h" #include "main.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include struct copy_setup { int conn; int wayl; bool is_display_side; struct main_config *mc; }; static void *start_looper(void *data) { struct copy_setup *setup = (struct copy_setup *)data; main_interface_loop(setup->conn, setup->wayl, -1, setup->mc, setup->is_display_side); return NULL; } log_handler_func_t log_funcs[2] = {NULL, NULL}; int main(int argc, char **argv) { if (argc == 1 || !strcmp(argv[1], "--help")) { printf("Usage: ./fuzz_hook_int [--server] [--log] {input_file}\n"); printf("A program to run and control Wayland and channel inputs for a waypipe main loop\n"); return EXIT_FAILURE; } bool display_side = true; if (argc > 1 && !strcmp(argv[1], "--server")) { display_side = false; argc--; argv++; } if (argc > 1 && !strcmp(argv[1], "--log")) { log_funcs[0] = test_atomic_log_handler; log_funcs[1] = test_atomic_log_handler; argc--; argv++; } setup_video_logging(); size_t len; char *buf = read_file_into_mem(argv[1], &len); if (!buf) { return EXIT_FAILURE; } printf("Loaded %zu bytes\n", len); int way_fds[2], conn_fds[2]; if (socketpair(AF_UNIX, SOCK_STREAM, 0, way_fds) == -1 || socketpair(AF_UNIX, SOCK_STREAM, 0, conn_fds) == -1) { printf("Socketpair failed\n"); return EXIT_FAILURE; } struct main_config config = { .drm_node = NULL, .n_worker_threads = 1, .compression = COMP_NONE, .compression_level = 0, .no_gpu = true, /* until we can construct dmabufs here */ .only_linear_dmabuf = false, .video_if_possible = true, .prefer_hwvideo = false, }; pthread_t thread; struct copy_setup conf = {.conn = conn_fds[1], .wayl = way_fds[1], .is_display_side = display_side, .mc = &config}; if (pthread_create(&thread, NULL, start_looper, &conf) == -1) { printf("Thread failed\n"); return EXIT_FAILURE; } char *ignore_buf = malloc(65536); /* Main loop: RW from socketpairs with sendmsg, with short wait */ int64_t file_nwords = (int64_t)len / 4; int64_t cursor = 0; uint32_t *data = (uint32_t *)buf; while (cursor < file_nwords) { uint32_t header = data[cursor++]; bool wayland_side = header & 0x1; bool add_file = header & 0x2; int new_fileno = -1; if (add_file && wayland_side && cursor < file_nwords) { uint32_t fsize = data[cursor++]; if (fsize == 0) { /* 'copy' sink */ new_fileno = open("/dev/null", O_WRONLY | O_NOCTTY); if (new_fileno == -1) { wp_error("Failed to open /dev/null"); } } else { /* avoid buffer overflow */ fsize = fsize > 1000000 ? 1000000 : fsize; new_fileno = create_anon_file(); if (ftruncate(new_fileno, (off_t)fsize) == -1) { wp_error("Failed to resize tempfile"); checked_close(new_fileno); break; } } } uint32_t packet_size = header >> 2; int64_t words_left = file_nwords - cursor; if (packet_size > 2048) { packet_size = 2048; } if (packet_size > (uint32_t)words_left) { packet_size = (uint32_t)words_left; } /* 2 msec max delay for 8KB of data, assuming no system * interference, should be easily attainable */ int max_write_delay_ms = 1; int max_read_delay_ms = 2; int send_fd = wayland_side ? 
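/* Header bit 0 selects which end of the pipeline this record feeds: the
 * Wayland socketpair, written with sendmsg so a file descriptor can ride
 * along as SCM_RIGHTS ancillary data, or the channel socketpair, which
 * takes a plain write of the raw transfer bytes. */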
way_fds[0] : conn_fds[0]; /* Write packet to stream */ struct pollfd write_pfd; write_pfd.fd = send_fd; write_pfd.events = POLLOUT; int nw; retry_poll: nw = poll(&write_pfd, 1, max_write_delay_ms); if (nw == -1) { if (new_fileno != -1) { checked_close(new_fileno); } if (errno == EINTR) { goto retry_poll; } printf("Poll error\n"); break; } else if (nw == 1 && wayland_side) { /* Send message */ struct iovec the_iovec; the_iovec.iov_len = packet_size * 4; the_iovec.iov_base = (char *)&data[cursor]; struct msghdr msg; msg.msg_name = NULL; msg.msg_namelen = 0; msg.msg_iov = &the_iovec; msg.msg_iovlen = 1; msg.msg_control = NULL; msg.msg_controllen = 0; msg.msg_flags = 0; union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } uc; memset(uc.buf, 0, sizeof(uc.buf)); if (new_fileno != -1) { msg.msg_control = uc.buf; msg.msg_controllen = sizeof(uc.buf); struct cmsghdr *frst = CMSG_FIRSTHDR(&msg); frst->cmsg_level = SOL_SOCKET; frst->cmsg_type = SCM_RIGHTS; memcpy(CMSG_DATA(frst), &new_fileno, sizeof(int)); frst->cmsg_len = CMSG_LEN(sizeof(int)); msg.msg_controllen = CMSG_SPACE(sizeof(int)); } ssize_t ret = sendmsg(way_fds[0], &msg, 0); if (ret == -1) { wp_error("Error in sendmsg"); break; } } else if (nw == 1 && !wayland_side) { ssize_t ret = write(conn_fds[0], (char *)&data[cursor], packet_size * 4); if (ret == -1) { wp_error("Error in write"); break; } } else { wp_error("Failed to send message before timeout"); } if (new_fileno != -1) { checked_close(new_fileno); } /* Wait up to max_delay for a response. Almost all packets * should be passed on unmodified; a very small fraction * are dropped */ struct pollfd read_pfds[2]; read_pfds[0].fd = way_fds[0]; read_pfds[1].fd = conn_fds[0]; read_pfds[0].events = POLLIN; read_pfds[1].events = POLLIN; int nr = poll(read_pfds, 2, packet_size > 0 ? 
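/* The drain loop below discards payload (recvmsg into ignore_buf) but
 * still supplies a control buffer of CMSG_LEN(28 * sizeof(int32_t))
 * bytes, i.e. room for a batch of up to 28 int-sized file descriptors in
 * one control message; the received fds themselves are not inspected
 * further. */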
max_read_delay_ms : 0); if (nr == -1) { if (errno == EINTR) { continue; } printf("Poll error\n"); break; } else if (nr == 0) { wp_debug("No reply to sent packet %d", packet_size); } for (int i = 0; i < 2; i++) { if (read_pfds[i].revents & POLLIN) { char cmsgdata[(CMSG_LEN(28 * sizeof(int32_t)))]; struct iovec the_iovec; the_iovec.iov_len = 65536; the_iovec.iov_base = ignore_buf; struct msghdr msg; msg.msg_name = NULL; msg.msg_namelen = 0; msg.msg_iov = &the_iovec; msg.msg_iovlen = 1; msg.msg_control = &cmsgdata; msg.msg_controllen = sizeof(cmsgdata); msg.msg_flags = 0; ssize_t ret = recvmsg(read_pfds[i].fd, &msg, 0); if (ret == -1) { wp_error("Error in recvmsg"); } } } cursor += packet_size; } checked_close(conn_fds[0]); checked_close(way_fds[0]); pthread_join(thread, NULL); free(buf); free(ignore_buf); return EXIT_SUCCESS; } waypipe-v0.10.5/waypipe-c/test/headless.py000077500000000000000000000227001507071711100204640ustar00rootroot00000000000000#!/usr/bin/env python3 if __name__ != "__main__": quit(1) import os, subprocess, time, signal import multiprocessing def try_unlink(path): try: os.unlink(path) except FileNotFoundError: pass def wait_until_exists(path): for i in range(100): if os.path.exists(path): return True time.sleep(0.01) else: return False def safe_cleanup(process): assert type(process) == subprocess.Popen for i in range(3): if process.poll() is None: # certain weston client programs appear to initiate shutdown proceedings correctly; however, they appear to wait for a frame beforehand, and the headless weston doesn't ask for additional frames process.send_signal(signal.SIGINT) time.sleep(0.5) try: process.wait(100) except subprocess.TimeoutExpired: process.kill() try: process.wait(1) except subprocess.TimeoutExpired: # no third chances process.terminate() weston_path = os.environ["TEST_WESTON_PATH"] waypipe_path = os.environ["TEST_WAYPIPE_PATH"] ld_library_path = ( os.environ["LD_LIBRARY_PATH"] if "LD_LIBRARY_PATH" in os.environ else "" ) sub_tests = { "SHM": ["TEST_WESTON_SHM_PATH"], "EGL": ["TEST_WESTON_EGL_PATH", "-o"], "DMABUF": ["TEST_WESTON_DMA_PATH"], "TERM": ["TEST_WESTON_TERM_PATH"], "PRES": ["TEST_WESTON_PRES_PATH"], "SUBSURF": ["TEST_WESTON_SUBSURF_PATH"], } for k, v in list(sub_tests.items()): if v[0] in os.environ: v[0] = os.environ[v[0]] else: del sub_tests[k] xdg_runtime_dir = os.path.abspath("./run/") # weston does not currently appear to support setting absolute socket paths socket_path = "w_sock" abs_socket_path = os.path.join(xdg_runtime_dir, socket_path) mainenv = {"XDG_RUNTIME_DIR": xdg_runtime_dir, "LD_LIBRARY_PATH": ld_library_path} weston_command = [ weston_path, "--backend=headless-backend.so", "--socket=" + socket_path, # "--use-pixman", "--width=1111", "--height=777", ] arguments = subprocess.check_output([weston_path, "--help"]).decode() if "--use-gl" in arguments: weston_command.append("--use-gl") try: import psutil except ImportError: psutil = None nontrivial_failures = False subenv = { "WAYLAND_DISPLAY": abs_socket_path, "WAYLAND_DEBUG": "1", "XDG_RUNTIME_DIR": xdg_runtime_dir, "LD_LIBRARY_PATH": ld_library_path, "ASAN_OPTIONS": "detect_leaks=0", } wp_serv_env = { "WAYLAND_DEBUG": "1", "XDG_RUNTIME_DIR": xdg_runtime_dir, "LD_LIBRARY_PATH": ld_library_path, "ASAN_OPTIONS": "detect_leaks=0", } subproc_args = {"env": subenv, "stdin": subprocess.DEVNULL, "stderr": subprocess.STDOUT} wp_serv_args = { "env": wp_serv_env, "stdin": subprocess.DEVNULL, "stderr": subprocess.STDOUT, } def get_child_process(proc_pid, expected_name, sub_test_name): if psutil 
is not None: # assuming pid has not been recycled/duplicated proc = psutil.Process(proc_pid) if proc.name() == "waypipe-c": for i in range(5): kids = proc.children() if len(kids) > 0: break time.sleep(0.01) else: print( "For test", sub_test_name, "waypipe server's command may have crashed", ) if len(kids) == 1: wp_child = kids[0] try: if wp_child.name() != expected_name: print( "Unusual child process name", wp_child.name(), "does not match", expected_name, ) except psutil.NoSuchProcess: pass def open_logfile(name): path = os.path.join(xdg_runtime_dir, name) return path, open(path, "wb") def start_waypipe(socket_path, control_path, logfile, command, oneshot): prefix = [waypipe_path, "--debug", "--socket", socket_path] if oneshot: prefix += ["--oneshot"] client_command = prefix + ["client"] server_command = prefix + ["--control", control_path, "server"] + command client = subprocess.Popen(client_command, stdout=logfile, **subproc_args) if not wait_until_exists(socket_path): raise Exception("The waypipe socket file at " + socket_path + " did not appear") server = subprocess.Popen(server_command, stdout=logfile, **wp_serv_args) return server, client def cleanup_oneshot(client, server, child): if child is not None: try: child.send_signal(signal.SIGINT) except psutil.NoSuchProcess: time.sleep(0.1) safe_cleanup(server) time.sleep(0.1) safe_cleanup(client) else: server.wait() client.wait() else: safe_cleanup(server) time.sleep(0.1) safe_cleanup(client) return client.returncode, server.returncode def cleanup_multi(client, server, child): if child is not None: try: child.send_signal(signal.SIGINT) except psutil.NoSuchProcess: pass time.sleep(0.1) safe_cleanup(server) time.sleep(0.1) safe_cleanup(client) return client.returncode, server.returncode def run_sub_test(args): sub_test_name, command = args nontrivial_failures = False ocontrol_path = os.path.join(xdg_runtime_dir, sub_test_name + "_octrl") mcontrol_path = os.path.join(xdg_runtime_dir, sub_test_name + "_mctrl") owp_socket_path = os.path.join(xdg_runtime_dir, sub_test_name + "_osocket") mwp_socket_path = os.path.join(xdg_runtime_dir, sub_test_name + "_msocket") try_unlink(owp_socket_path) try_unlink(mwp_socket_path) try_unlink(ocontrol_path) try_unlink(mcontrol_path) ref_log_path, ref_out = open_logfile(sub_test_name + "_ref_out.txt") ref_proc = subprocess.Popen(command, stdout=ref_out, **subproc_args) owp_log_path, owp_out = open_logfile(sub_test_name + "_owp_out.txt") mwp_log_path, mwp_out = open_logfile(sub_test_name + "_mwp_out.txt") owp_server, owp_client = start_waypipe( owp_socket_path, ocontrol_path, owp_out, command, True ) mwp_server, mwp_client = start_waypipe( mwp_socket_path, mcontrol_path, mwp_out, command, False ) owp_child = get_child_process( owp_server.pid, os.path.basename(command[0]), sub_test_name ) mwp_child = get_child_process( mwp_server.pid, os.path.basename(command[0]), sub_test_name ) print("Launched", sub_test_name) time.sleep(1) # Verify that replacing the control pipe (albeit with itself) doesn't break anything # (Since the connection is a unix domain socket, almost no packets will be in flight, # so the test isn't that comprehensive) print("Resetting", sub_test_name) open(ocontrol_path, "w").write(owp_socket_path) open(mcontrol_path, "w").write(mwp_socket_path) try_unlink(ocontrol_path) try_unlink(mcontrol_path) time.sleep(1) print("Closing", sub_test_name) # Beware sudden PID reuse... 
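# Tear-down order is deliberate: stop the reference client first, then the
# oneshot waypipe pair, then the multi-connection pair. Both cleanup paths
# interrupt the remote child (when psutil located one) before shutting down
# the server and client, and safe_cleanup escalates SIGINT -> kill ->
# terminate so a wedged process cannot hang the whole suite.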
safe_cleanup(ref_proc) ref_out.close() occode, oscode = cleanup_oneshot(owp_client, owp_server, owp_child) mccode, mscode = cleanup_multi(mwp_client, mwp_server, mwp_child) try_unlink(owp_socket_path) try_unlink(mwp_socket_path) owp_out.close() mwp_out.close() # -2, because applications sometimes return with the sigint error if ref_proc.returncode not in (0, -2): print( "Test {}, run directly, failed (code={}). See logfile at {}".format( sub_test_name, ref_proc.returncode, ref_log_path ) ) else: if oscode in (0, -2) and occode == 0: print("Oneshot test", sub_test_name, "passed") else: print( "Oneshot test {}, run indirectly, failed (ccode={} scode={}). See logfile at {}".format( sub_test_name, occode, oscode, owp_log_path ) ) nontrivial_failures = True if mscode in (0, -2) and mccode in (0, -2): print("Regular test", sub_test_name, "passed") else: print( "Regular test {}, run indirectly, failed (ccode={} scode={}). See logfile at {}".format( sub_test_name, mccode, mscode, mwp_log_path ) ) nontrivial_failures = True return nontrivial_failures os.makedirs(xdg_runtime_dir, mode=0o700, exist_ok=True) os.chmod(xdg_runtime_dir, 0o700) try_unlink(abs_socket_path) try_unlink(abs_socket_path + ".lock") weston_log_path = os.path.join(xdg_runtime_dir, "weston_out.txt") weston_out = open(weston_log_path, "wb") weston_proc = subprocess.Popen( weston_command, env=mainenv, stdin=subprocess.DEVNULL, stdout=weston_out, stderr=subprocess.STDOUT, ) # Otherwise it's a race between weston and the clients if not wait_until_exists(abs_socket_path): raise Exception( "weston failed to create expected display socket path, " + abs_socket_path ) with multiprocessing.Pool(3) as pool: nontriv_failures = pool.map(run_sub_test, [(k, v) for k, v in sub_tests.items()]) safe_cleanup(weston_proc) weston_out.close() if weston_proc.returncode != 0: print("Running headless weston failed. 
See logfile at ", weston_log_path) if any(nontriv_failures): quit(1) quit(0) waypipe-v0.10.5/waypipe-c/test/meson.build000066400000000000000000000131671507071711100204700ustar00rootroot00000000000000 common_src = static_library( 'common', 'common.c', include_directories: waypipe_includes, ) # Testing test_diff = executable( 'diff_roundtrip', ['diff_roundtrip.c'], include_directories: waypipe_includes, link_with: [lib_waypipe_src, common_src], ) test('waypipe-c; Whether diff operations successfully roundtrip', test_diff, timeout: 60) test_damage = executable( 'damage_merge', ['damage_merge.c'], include_directories: waypipe_includes, link_with: [lib_waypipe_src, common_src], ) test('waypipe-c; If damage rectangles merge efficiently', test_damage, timeout: 5) test_mirror = executable( 'fd_mirror', ['fd_mirror.c'], include_directories: waypipe_includes, link_with: [lib_waypipe_src, common_src], dependencies: [libgbm], ) # disable leak checking, because library code is often responsible test( 'waypipe-c; How well buffers are replicated', test_mirror, env: ['ASAN_OPTIONS=detect_leaks=0'], timeout: 40, ) test_proto_functions = files('protocol_functions.txt') proto_send_src = custom_target( 'protocol_control message serialization', output: 'protocol_functions.h', depend_files: [test_proto_functions, sendgen_path] + abs_protocols, command: [python3, sendgen_path, test_proto_functions, '@OUTPUT@'] + abs_protocols, ) test_protocol = executable( 'protocol_control', ['protocol_control.c', proto_send_src], include_directories: waypipe_includes, link_with: [lib_waypipe_src, common_src], ) test( 'waypipe-c; That common Wayland message patterns work', test_protocol, env: ['ASAN_OPTIONS=detect_leaks=0'], timeout: 20, ) test_pipe = executable( 'pipe_mirror', ['pipe_mirror.c'], include_directories: waypipe_includes, link_with: [lib_waypipe_src, common_src], ) test('waypipe-c; How well pipes are replicated', test_pipe, timeout: 20) test_fnlist = files('test_fnlist.txt') testproto_src = custom_target( 'test-proto code', output: 'protocol-@BASENAME@.c', input: 'test-proto.xml', depend_files: [test_fnlist, symgen_path], command: [python3, symgen_path, 'data', test_fnlist, '@OUTPUT@', '@INPUT@'], ) testproto_header = custom_target( 'test-proto client-header', output: 'protocol-@BASENAME@.h', input: 'test-proto.xml', depend_files: [test_fnlist, symgen_path], command: [ python3, symgen_path, 'header', test_fnlist, '@OUTPUT@', '@INPUT@', ], ) test_parse = executable( 'wire_parse', ['wire_parse.c', testproto_src, testproto_header], include_directories: waypipe_includes, link_with: [lib_waypipe_src, common_src], ) test('waypipe-c; That protocol parsing fails cleanly', test_parse, timeout: 5) fake_ssh = executable('ssh', ['fake_ssh.c']) weston_dep = dependency('weston', required: false) testprog_paths = [] if weston_dep.found() # Sometimes weston's test clients are installed here instead testprog_paths += weston_dep.get_variable(pkgconfig: 'libexecdir') endif weston_prog = find_program('weston', required: false) base_envlist = ['TEST_WAYPIPE_PATH=@0@'.format(waypipe_prog.full_path())] headless_envlist = base_envlist if weston_prog.found() headless_envlist += 'TEST_WESTON_PATH=@0@'.format(weston_prog.full_path()) endif test_programs = [ ['TEST_WESTON_SHM_PATH', 'weston-simple-shm'], # ['TEST_WESTON_EGL_PATH', 'weston-simple-egl'], ['TEST_WESTON_TERM_PATH', 'weston-terminal'], ['TEST_WESTON_PRES_PATH', 'weston-presentation-shm'], ['TEST_WESTON_SUBSURF_PATH', 'weston-subsurfaces'], ] if has_dmabuf test_programs += 
[['TEST_WESTON_DMA_PATH', 'weston-simple-dmabuf-egl']] endif have_test_progs = false foreach t : test_programs test_prog = find_program(t[1], required: false) foreach p : testprog_paths if not test_prog.found() test_prog = find_program(join_paths(p, t[1]), required: false) endif endforeach if test_prog.found() have_test_progs = true headless_envlist += '@0@=@1@'.format(t[0], test_prog.full_path()) endif endforeach if weston_prog.found() and have_test_progs test_headless = join_paths(meson.current_source_dir(), 'headless.py') test( 'waypipe-c; If clients crash when run with weston via waypipe', python3, args: test_headless, env: headless_envlist, timeout: 30, ) endif sleep_prog = find_program('sleep') startup_envlist = base_envlist startup_envlist += ['TEST_SLEEP_PATH=' + sleep_prog.full_path()] startup_envlist += ['TEST_FAKE_SSH_PATH=' + fake_ssh.full_path()] test_startup = join_paths(meson.current_source_dir(), 'startup_failure.py') test( 'waypipe-c; That waypipe exits cleanly given a bad setup', python3, args: test_startup, env: startup_envlist, timeout: 30, ) fuzz_hook_ext = executable( 'fuzz_hook_ext', ['fuzz_hook_ext.c'], include_directories: waypipe_includes, link_with: [lib_waypipe_src, common_src], dependencies: [pthreads], ) fuzz_hook_int = executable( 'fuzz_hook_int', ['fuzz_hook_int.c'], include_directories: waypipe_includes, link_with: [lib_waypipe_src, common_src], dependencies: [pthreads], ) fuzz_hook_det = executable( 'fuzz_hook_det', ['fuzz_hook_det.c'], include_directories: waypipe_includes, link_with: [lib_waypipe_src, common_src], ) test_proto_hook = executable( 'test_proto_hook', ['test_proto_hook.c'], include_directories: waypipe_includes, link_with: [lib_waypipe_src, common_src], ) test( 'waypipe-c; That `waypipe bench` doesn\'t crash', waypipe_prog, timeout: 20, args: ['--threads', '2', '--test-size', '16384', 'bench', '100.0'], ) waypipe-v0.10.5/waypipe-c/test/pipe_mirror.c000066400000000000000000000234211507071711100210130ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
*/ #include "common.h" #include "shadow.h" #include #include #include #include #include #include #include #include #include static int shadow_sync(struct fd_translation_map *src_map, struct fd_translation_map *dst_map) { struct transfer_queue queue; memset(&queue, 0, sizeof(queue)); pthread_mutex_init(&queue.async_recv_queue.lock, NULL); read_readable_pipes(src_map); for (struct shadow_fd_link *lcur = src_map->link.l_next, *lnxt = lcur->l_next; lcur != &src_map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *sfd = (struct shadow_fd *)lcur; collect_update(NULL, sfd, &queue, false); /* collecting updates can reset `remote_can_X` state, so * garbage collect the sfd */ destroy_shadow_if_unreferenced(sfd); } for (int i = 0; i < queue.end; i++) { if (queue.vecs[i].iov_len < 8) { cleanup_transfer_queue(&queue); wp_error("Invalid message"); return -1; } const uint32_t *header = (const uint32_t *)queue.vecs[i].iov_base; struct bytebuf msg; msg.data = queue.vecs[i].iov_base; msg.size = transfer_size(header[0]); if (apply_update(dst_map, NULL, NULL, transfer_type(header[0]), (int32_t)header[1], &msg) == -1) { wp_error("Update failed"); cleanup_transfer_queue(&queue); return -1; } } flush_writable_pipes(dst_map); int nt = queue.end; cleanup_transfer_queue(&queue); return nt; } static int create_pseudo_pipe(bool can_read, bool can_write, bool half_open_socket, int *spec_end, int *opp_end) { bool pipe_possible = can_read != can_write; int pipe_fds[2]; if (half_open_socket || !pipe_possible) { if (socketpair(AF_UNIX, SOCK_STREAM, 0, pipe_fds) == -1) { wp_error("Socketpair failed"); return -1; } if (!can_read) { shutdown(pipe_fds[0], SHUT_RD); } if (!can_write) { shutdown(pipe_fds[0], SHUT_WR); } } else { if (pipe(pipe_fds) == -1) { wp_error("Pipe failed"); return -1; } if (can_write) { int tmp = pipe_fds[0]; pipe_fds[0] = pipe_fds[1]; pipe_fds[1] = tmp; } } *spec_end = pipe_fds[0]; *opp_end = pipe_fds[1]; return 0; } static char fd_is_readable(int fd) { int flags = fcntl(fd, F_GETFL, 0); if (flags == -1) { wp_error("fctnl F_GETFL failed!"); return '?'; } flags = flags & O_ACCMODE; return (flags == O_RDONLY || flags == O_RDWR) ? 'R' : 'n'; } static char fd_is_writable(int fd) { int flags = fcntl(fd, F_GETFL, 0); if (flags == -1) { wp_error("fctnl F_GETFL failed!"); return '?'; } flags = flags & O_ACCMODE; return (flags == O_WRONLY || flags == O_RDWR) ? 'W' : 'n'; } static void print_pipe_state(const char *desc, struct pipe_state *p) { printf("%s state: %c %c %c %c%s\n", desc, p->can_read ? 'R' : 'n', p->can_write ? 'W' : 'n', p->remote_can_read ? 'R' : 'n', p->remote_can_write ? 'W' : 'n', p->pending_w_shutdown ? " shutdownWpending" : ""); } static bool test_pipe_mirror(bool close_src, bool can_read, bool can_write, bool half_open_socket, bool interpret_as_force_iw) { if (can_read == can_write && half_open_socket) { return true; } printf("\nTesting:%s%s%s%s%s\n", can_read ? " read" : "", can_write ? " write" : "", half_open_socket ? " socket" : "", interpret_as_force_iw ? " force_iw" : "", close_src ? 
" close_src" : " close_dst"); int spec_end, opp_end, anti_end = -1; if (create_pseudo_pipe(can_read, can_write, half_open_socket, &spec_end, &opp_end) == -1) { return false; } struct fd_translation_map src_map; setup_translation_map(&src_map, false); struct fd_translation_map dst_map; setup_translation_map(&dst_map, true); bool success = true; /* Step 1: replicate */ struct shadow_fd *src_shadow = translate_fd(&src_map, NULL, NULL, spec_end, FDC_PIPE, 0, NULL, interpret_as_force_iw); shadow_decref_transfer(src_shadow); int rid = src_shadow->remote_id; if (shadow_sync(&src_map, &dst_map) == -1) { success = false; goto cleanup; } struct shadow_fd *dst_shadow = get_shadow_for_rid(&dst_map, rid); if (!dst_shadow) { printf("Failed to create remote shadow structure\n"); success = false; goto cleanup; } anti_end = dup(dst_shadow->fd_local); shadow_decref_transfer(dst_shadow); if (set_nonblocking(anti_end) == -1 || set_nonblocking(opp_end) == -1) { printf("Failed to make user fds nonblocking\n"); success = false; goto cleanup; } printf("spec %c %c %c %c | opp %c %c | anti %c %c\n", can_read ? 'R' : 'n', can_write ? 'W' : 'n', fd_is_readable(spec_end), fd_is_writable(spec_end), fd_is_readable(opp_end), fd_is_writable(opp_end), fd_is_readable(anti_end), fd_is_writable(anti_end)); print_pipe_state("dst", &dst_shadow->pipe); print_pipe_state("src", &src_shadow->pipe); /* Step 2: transfer tests */ for (int i = 0; i < 4; i++) { bool from_src = i % 2; /* Smaller than a pipe buffer, so writing should always succeed */ char buf[4096]; memset(buf, rand(), sizeof(buf)); int write_fd = from_src ? opp_end : anti_end; int read_fd = from_src ? anti_end : opp_end; const char *target = from_src ? "src" : "dst"; const char *antitarget = from_src ? "dst" : "src"; if (fd_is_writable(write_fd) != 'W') { /* given proper replication, the reverse end should * be readable */ continue; } int amt = max(rand() % 4096, 1); ssize_t ret = write(write_fd, buf, (size_t)amt); if (ret == amt) { struct shadow_fd *mod_sfd = from_src ? src_shadow : dst_shadow; mod_sfd->pipe.readable = true; /* Write successful */ if (shadow_sync(from_src ? &src_map : &dst_map, from_src ? &dst_map : &src_map) == -1) { success = false; goto cleanup; } bool believe_read = can_read && !interpret_as_force_iw; bool expect_transfer_fail = (from_src && !believe_read) || (!from_src && !can_write); // todo: try multiple sync cycles (?) ssize_t rr = read(read_fd, buf, 4096); bool tf_pass = rr == amt; if (!expect_transfer_fail) { /* on some systems, pipe is bidirectional, * making some additional transfers succeed. * This is fine. */ success = success && tf_pass; } const char *resdesc = tf_pass != expect_transfer_fail ? "expected" : "unexpected"; if (tf_pass) { printf("Send packet to %s, and received it from %s, %s\n", target, antitarget, resdesc); } else { printf("Failed to receive packet from %s, %d %zd %s, %s\n", antitarget, read_fd, rr, strerror(errno), resdesc); } } } /* Step 3: close one end, and verify that the other end is closed */ // TODO: test partial shutdowns as well, all 2^4 cases for a single // cycle; and test epipe closing by queuing additional data struct shadow_fd *cls_shadow = close_src ? 
src_shadow : dst_shadow; if (close_src) { checked_close(opp_end); opp_end = -1; } else { checked_close(anti_end); anti_end = -1; } bool shutdown_deletes = (cls_shadow->pipe.can_read && !cls_shadow->pipe.can_write); /* Special cases, which aren't very important */ shutdown_deletes |= (interpret_as_force_iw && !cls_shadow->pipe.can_write && close_src); cls_shadow->pipe.readable = cls_shadow->pipe.can_read; cls_shadow->pipe.writable = cls_shadow->pipe.can_write; if (shadow_sync(close_src ? &src_map : &dst_map, close_src ? &dst_map : &src_map) == -1) { success = false; goto cleanup; } bool deleted_shadows = true; if (dst_map.link.l_next != &dst_map.link) { print_pipe_state("dst", &dst_shadow->pipe); deleted_shadows = false; } if (src_map.link.l_next != &src_map.link) { print_pipe_state("src", &src_shadow->pipe); deleted_shadows = false; } bool correct_teardown = deleted_shadows == shutdown_deletes; success = success && correct_teardown; printf("Deleted shadows: %c (expected %c)\n", deleted_shadows ? 'Y' : 'n', shutdown_deletes ? 'Y' : 'n'); printf("Test: %s\n", success ? "pass" : "FAIL"); cleanup: if (anti_end != -1) { checked_close(anti_end); } if (opp_end != -1) { checked_close(opp_end); } cleanup_translation_map(&src_map); cleanup_translation_map(&dst_map); return success; } log_handler_func_t log_funcs[2] = {NULL, test_log_handler}; int main(int argc, char **argv) { (void)argc; (void)argv; struct sigaction act; act.sa_handler = SIG_IGN; sigemptyset(&act.sa_mask); act.sa_flags = 0; if (sigaction(SIGPIPE, &act, NULL) == -1) { printf("Sigaction failed\n"); return EXIT_SUCCESS; } srand(0); bool all_success = true; for (uint32_t bits = 0; bits < 32; bits++) { bool pass = test_pipe_mirror(bits & 1, bits & 2, bits & 4, bits & 8, bits & 16); all_success = all_success && pass; } printf("\nSuccess: %c\n", all_success ? 'Y' : 'n'); return all_success ? EXIT_SUCCESS : EXIT_FAILURE; } waypipe-v0.10.5/waypipe-c/test/protocol_control.c000066400000000000000000000716551507071711100221010ustar00rootroot00000000000000/* * Copyright © 2020 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "common.h" #include "main.h" #include "parsing.h" #include "util.h" #include "protocol_functions.h" #include #include #include #include #include #include struct msgtransfer { struct test_state *src; struct test_state *dst; }; /* Override the libc clock_gettime, so we can test presentation-time * protocol. Note: the video drivers sometimes call this function. 
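* For reference (an interpretation of the scheme below, not upstream
* documentation): time_value is a single counter in nanoseconds that
* advances on every query, and local_time_offset is added for every
* clock except CLOCK_REALTIME, giving the presentation-time test a
* deterministic, test-controlled clock skew to translate away.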
*/ int clock_gettime(clockid_t clock_id, struct timespec *tp) { /* Assume every call costs 1ns */ time_value += 1; if (clock_id == CLOCK_REALTIME) { tp->tv_sec = (int64_t)(time_value / 1000000000uLL); tp->tv_nsec = (int64_t)(time_value % 1000000000uLL); } else { tp->tv_sec = (int64_t)((time_value + local_time_offset) / 1000000000uLL); tp->tv_nsec = (int64_t)((time_value + local_time_offset) % 1000000000uLL); } return 0; } static void print_pass(bool pass) { fprintf(stdout, "%s\n", pass ? "PASS" : "FAIL"); } static char *make_filled_pattern(size_t size, uint32_t contents) { uint32_t *mem = calloc(1, size); for (size_t i = 0; i < size / 4; i++) { mem[i] = contents; } return (char *)mem; } static int make_filled_file(size_t size, const char *contents) { int fd = create_anon_file(); ftruncate(fd, (off_t)size); uint32_t *mem = (uint32_t *)mmap( NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); memcpy(mem, contents, size); munmap(mem, size); return fd; } static bool check_file_contents(int fd, size_t size, const char *contents) { if (fd == -1) { return false; } off_t fsize = lseek(fd, 0, SEEK_END); if (fsize != (off_t)size) { wp_error("fd size mismatch: %zu %zu", (size_t)fsize, size); return false; } uint32_t *mem = (uint32_t *)mmap( NULL, size, PROT_READ, MAP_PRIVATE, fd, 0); if (mem == MAP_FAILED) { wp_error("Failed to map file"); return false; } bool match = memcmp(mem, contents, size) == 0; munmap(mem, size); return match; } static int get_only_fd_from_msg(const struct test_state *s) { if (s->rcvd && s->rcvd[s->nrcvd - 1].nfds == 1) { return s->rcvd[s->nrcvd - 1].fds[0]; } else { return -1; } } static int get_fd_from_nth_to_last_msg(const struct test_state *s, int nth) { if (!s->rcvd || s->nrcvd < nth) { return -1; } const struct msg *m = &s->rcvd[s->nrcvd - nth]; if (m->nfds != 1) { return -1; } return m->fds[0]; } static void msg_send_handler(struct transfer_states *ts, struct test_state *src, struct test_state *dst) { struct msg m; m.data = ts->msg_space; m.fds = ts->fd_space; m.len = (int)ts->msg_size; m.nfds = (int)ts->fd_size; for (int i = 0; i < m.nfds; i++) { m.fds[i] = dup(m.fds[i]); if (m.fds[i] == -1) { wp_error("Invalid fd provided"); } } send_protocol_msg(src, dst, m); memset(ts->msg_space, 0, sizeof(ts->msg_space)); memset(ts->fd_space, 0, sizeof(ts->fd_space)); } static int setup_tstate(struct transfer_states *ts) { memset(ts, 0, sizeof(*ts)); ts->send = msg_send_handler; ts->comp = calloc(1, sizeof(struct test_state)); ts->app = calloc(1, sizeof(struct test_state)); if (!ts->comp || !ts->app) { goto fail_alloc; } if (setup_state(ts->comp, true, true) == -1) { goto fail_comp_setup; } if (setup_state(ts->app, false, true) == -1) { goto fail_app_setup; } return 0; fail_app_setup: cleanup_state(ts->app); fail_comp_setup: cleanup_state(ts->comp); fail_alloc: free(ts->comp); free(ts->app); return -1; } static void cleanup_tstate(struct transfer_states *ts) { cleanup_state(ts->comp); cleanup_state(ts->app); free(ts->comp); free(ts->app); } static bool test_fixed_shm_buffer_copy(void) { fprintf(stdout, "\n shm_pool+buffer test\n"); struct transfer_states T; if (setup_tstate(&T) == -1) { wp_error("Test setup failed"); return true; } bool pass = true; char *testpat = make_filled_pattern(16384, 0xFEDCBA98); int fd = make_filled_file(16384, testpat); int ret_fd = -1; struct wp_objid display = {0x1}, registry = {0x2}, shm = {0x3}, compositor = {0x4}, pool = {0x5}, buffer = {0x6}, surface = {0x7}; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global(&T, registry,
1, "wl_shm", 1); send_wl_registry_evt_global(&T, registry, 2, "wl_compositor", 1); send_wl_registry_req_bind(&T, registry, 1, "wl_shm", 1, shm); send_wl_registry_req_bind( &T, registry, 2, "wl_compositor", 1, compositor); send_wl_shm_req_create_pool(&T, shm, pool, fd, 16384); ret_fd = get_only_fd_from_msg(T.comp); send_wl_shm_pool_req_create_buffer( &T, pool, buffer, 0, 64, 64, 256, 0x30334258); send_wl_compositor_req_create_surface(&T, compositor, surface); send_wl_surface_req_attach(&T, surface, buffer, 0, 0); send_wl_surface_req_damage(&T, surface, 0, 0, 64, 64); send_wl_surface_req_commit(&T, surface); /* confirm receipt of fd with the correct contents; if not, * reject */ if (ret_fd == -1) { wp_error("Fd not passed through"); pass = false; goto end; } pass = check_file_contents(ret_fd, 16384, testpat); if (!pass) { wp_error("Failed to transfer file"); } end: free(testpat); checked_close(fd); cleanup_tstate(&T); print_pass(pass); return pass; } static bool test_fixed_shm_screencopy_copy(void) { fprintf(stdout, "\n screencopy test\n"); struct transfer_states T; if (setup_tstate(&T) == -1) { wp_error("Test setup failed"); return true; } bool pass = true; char *testpat_orig = make_filled_pattern(16384, 0xFEDCBA98); char *testpat_screen = make_filled_pattern(16384, 0x77557755); int fd = make_filled_file(16384, testpat_orig); int ret_fd = -1; struct wp_objid display = {0x1}, registry = {0x2}, shm = {0x3}, output = {0x4}, pool = {0x5}, buffer = {0x6}, frame = {0x7}, screencopy = {0x8}; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global(&T, registry, 1, "wl_shm", 1); send_wl_registry_evt_global(&T, registry, 2, "wl_output", 1); send_wl_registry_evt_global( &T, registry, 3, "zwlr_screencopy_manager_v1", 1); send_wl_registry_req_bind(&T, registry, 1, "wl_shm", 1, shm); send_wl_registry_req_bind(&T, registry, 2, "wl_output", 1, output); send_wl_registry_req_bind(&T, registry, 3, "zwlr_screencopy_manager_v1", 1, screencopy); send_wl_shm_req_create_pool(&T, shm, pool, fd, 16384); ret_fd = get_only_fd_from_msg(T.comp); if (ret_fd == -1) { wp_error("Fd not passed through"); pass = false; goto end; } send_zwlr_screencopy_manager_v1_req_capture_output( &T, screencopy, frame, 0, output); send_zwlr_screencopy_frame_v1_evt_buffer(&T, frame, 0, 64, 64, 16384); send_wl_shm_pool_req_create_buffer( &T, pool, buffer, 0, 64, 64, 256, 0x30334258); send_zwlr_screencopy_frame_v1_req_copy(&T, frame, buffer); uint32_t *mem = (uint32_t *)mmap(NULL, 16384, PROT_READ | PROT_WRITE, MAP_SHARED, ret_fd, 0); memcpy(mem, testpat_screen, 16384); munmap(mem, 16384); send_zwlr_screencopy_frame_v1_evt_flags(&T, frame, 0); send_zwlr_screencopy_frame_v1_evt_ready(&T, frame, 0, 12345, 600000000); /* confirm receipt of fd with the correct contents; if not, * reject */ if (ret_fd == -1) { wp_error("Fd not passed through"); pass = false; goto end; } pass = check_file_contents(fd, 16384, testpat_screen); if (!pass) { wp_error("Failed to transfer file"); } end: free(testpat_screen); free(testpat_orig); checked_close(fd); cleanup_tstate(&T); print_pass(pass); return pass; } static bool test_fixed_keymap_copy(void) { fprintf(stdout, "\n Keymap test\n"); struct transfer_states T; if (setup_tstate(&T) == -1) { wp_error("Test setup failed"); return true; } bool pass = true; char *testpat = make_filled_pattern(16384, 0xFEDCBA98); int fd = make_filled_file(16384, testpat); int ret_fd = -1; struct wp_objid display = {0x1}, registry = {0x2}, seat = {0x3}, keyboard = {0x4}; send_wl_display_req_get_registry(&T, 
display, registry); send_wl_registry_evt_global(&T, registry, 1, "wl_seat", 7); send_wl_registry_req_bind(&T, registry, 1, "wl_seat", 7, seat); send_wl_seat_evt_capabilities(&T, seat, 3); send_wl_seat_req_get_keyboard(&T, seat, keyboard); send_wl_keyboard_evt_keymap(&T, keyboard, 1, fd, 16384); ret_fd = get_only_fd_from_msg(T.app); /* confirm receipt of fd with the correct contents; if not, * reject */ if (ret_fd == -1) { wp_error("Fd not passed through"); pass = false; goto end; } pass = check_file_contents(ret_fd, 16384, testpat); if (!pass) { wp_error("Failed to transfer file"); } end: free(testpat); checked_close(fd); cleanup_tstate(&T); print_pass(pass); return pass; } #define DMABUF_FORMAT 875713112 static int create_dmabuf(void) { struct render_data rd; memset(&rd, 0, sizeof(rd)); rd.drm_fd = -1; rd.av_disabled = true; const size_t test_width = 256; const size_t test_height = 384; const size_t test_cpp = 4; const size_t test_size = test_width * test_height * test_cpp; const struct dmabuf_slice_data slice_data = { .width = (uint32_t)test_width, .height = (uint32_t)test_height, .format = DMABUF_FORMAT, .num_planes = 1, .modifier = 0, .offsets = {0, 0, 0, 0}, .strides = {(uint32_t)(test_width * test_cpp), 0, 0, 0}, .using_planes = {true, false, false, false}, }; int dmafd = -1; if (init_render_data(&rd) == -1) { return -1; } struct gbm_bo *bo = make_dmabuf(&rd, &slice_data); if (!bo) { goto end; } void *map_handle = NULL; uint32_t stride; void *data = map_dmabuf(bo, true, &map_handle, &stride); if (!data) { destroy_dmabuf(bo); goto end; } /* TODO: the best test pattern is a colored gradient, so we can * check whether the copy flips things or not */ memset(data, 0x80, test_size); unmap_dmabuf(bo, map_handle); dmafd = export_dmabuf(bo); if (dmafd == -1) { goto end; } end: destroy_dmabuf(bo); cleanup_render_data(&rd); return dmafd; } enum dmabuf_copy_type { COPY_LINUX_DMABUF, COPY_LINUX_DMABUF_INDIR, COPY_DRM_PRIME, COPY_WLR_EXPORT, }; static bool test_fixed_dmabuf_copy(enum dmabuf_copy_type type) { const char *const types[] = {"linux-dmabuf", "linux-dmabuf-indir", "drm-prime", "wlr-export"}; fprintf(stdout, "\n DMABUF test, %s\n", types[(int)type]); int dmabufd = create_dmabuf(); const int width = 256, height = 384; if (dmabufd == -1) { return true; } struct transfer_states T; if (setup_tstate(&T) == -1) { wp_error("Test setup failed"); return true; } bool pass = true; int ret_fd = -1; switch (type) { case COPY_LINUX_DMABUF: { struct wp_objid display = {0x1}, registry = {0x2}, linux_dmabuf = {0x3}, compositor = {0x4}, params = {0x5}, buffer = {0x6}, surface = {0x7}; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global( &T, registry, 1, "zwp_linux_dmabuf_v1", 1); send_wl_registry_evt_global( &T, registry, 2, "wl_compositor", 1); send_wl_registry_req_bind(&T, registry, 1, "zwp_linux_dmabuf_v1", 1, linux_dmabuf); send_wl_registry_req_bind(&T, registry, 12, "wl_compositor", 1, compositor); send_zwp_linux_dmabuf_v1_evt_modifier( &T, linux_dmabuf, DMABUF_FORMAT, 0, 0); send_zwp_linux_dmabuf_v1_req_create_params( &T, linux_dmabuf, params); send_zwp_linux_buffer_params_v1_req_add( &T, params, dmabufd, 0, 0, 256 * 4, 0, 0); send_zwp_linux_buffer_params_v1_req_create_immed( &T, params, buffer, 256, 384, DMABUF_FORMAT, 0); /* this message + previous, after reordering, are treated as one * bundle; if that is fixed, this will break, and 1 should * become 2 */ ret_fd = get_fd_from_nth_to_last_msg(T.comp, 1); send_zwp_linux_buffer_params_v1_req_destroy(&T, params); 
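/* (note: the attach/damage/commit sequence below mirrors the
 * shm_pool+buffer test above, exercising the same surface-update
 * path, here with a dmabuf-backed buffer) */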
send_wl_compositor_req_create_surface(&T, compositor, surface); send_wl_surface_req_attach(&T, surface, buffer, 0, 0); send_wl_surface_req_damage(&T, surface, 0, 0, 64, 64); send_wl_surface_req_commit(&T, surface); } break; case COPY_LINUX_DMABUF_INDIR: { struct wp_objid display = {0x1}, registry = {0x2}, linux_dmabuf = {0x3}, compositor = {0x4}, params = {0x5}, buffer = {0x6}, surface = {0x7}; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global( &T, registry, 1, "zwp_linux_dmabuf_v1", 1); send_wl_registry_evt_global( &T, registry, 2, "wl_compositor", 1); send_wl_registry_req_bind(&T, registry, 1, "zwp_linux_dmabuf_v1", 1, linux_dmabuf); send_wl_registry_req_bind(&T, registry, 12, "wl_compositor", 1, compositor); send_zwp_linux_dmabuf_v1_evt_modifier( &T, linux_dmabuf, DMABUF_FORMAT, 0, 0); send_zwp_linux_dmabuf_v1_req_create_params( &T, linux_dmabuf, params); send_zwp_linux_buffer_params_v1_req_add( &T, params, dmabufd, 0, 0, 256 * 4, 0, 0); send_zwp_linux_buffer_params_v1_req_create( &T, params, 256, 384, DMABUF_FORMAT, 0); /* this message + previous, after reordering, are treated as one * bundle; if that is fixed, this will break, and 1 should * become 2 */ ret_fd = get_fd_from_nth_to_last_msg(T.comp, 1); send_zwp_linux_buffer_params_v1_evt_created(&T, params, buffer); send_zwp_linux_buffer_params_v1_req_destroy(&T, params); send_wl_compositor_req_create_surface(&T, compositor, surface); send_wl_surface_req_attach(&T, surface, buffer, 0, 0); send_wl_surface_req_damage(&T, surface, 0, 0, 64, 64); send_wl_surface_req_commit(&T, surface); } break; case COPY_DRM_PRIME: { struct wp_objid display = {0x1}, registry = {0x2}, wl_drm = {0x3}, compositor = {0x4}, buffer = {0x5}, surface = {0x6}; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global(&T, registry, 1, "wl_drm", 1); send_wl_registry_evt_global( &T, registry, 2, "wl_compositor", 1); send_wl_registry_req_bind(&T, registry, 1, "wl_drm", 1, wl_drm); send_wl_registry_req_bind(&T, registry, 12, "wl_compositor", 1, compositor); send_wl_drm_evt_device(&T, wl_drm, "/dev/dri/renderD128"); send_wl_drm_evt_format(&T, wl_drm, DMABUF_FORMAT); send_wl_drm_evt_capabilities(&T, wl_drm, 1); send_wl_drm_req_create_prime_buffer(&T, wl_drm, buffer, dmabufd, width, height, DMABUF_FORMAT, 0, width * 4, 0, 0, 0, 0); ret_fd = get_fd_from_nth_to_last_msg(T.comp, 1); send_wl_compositor_req_create_surface(&T, compositor, surface); send_wl_surface_req_attach(&T, surface, buffer, 0, 0); send_wl_surface_req_damage(&T, surface, 0, 0, 64, 64); send_wl_surface_req_commit(&T, surface); } break; case COPY_WLR_EXPORT: { /* note: here the compositor creates and sends fd to client */ struct wp_objid display = {0x1}, registry = {0x2}, export_manager = {0x3}, output = {0x4}, dmabuf_frame = {0x5}; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global(&T, registry, 1, "zwlr_export_dmabuf_manager_v1", 1); send_wl_registry_evt_global(&T, registry, 2, "wl_output", 1); send_wl_registry_req_bind(&T, registry, 1, "zwlr_export_dmabuf_manager_v1", 1, export_manager); send_wl_registry_req_bind( &T, registry, 12, "wl_output", 1, output); send_zwlr_export_dmabuf_manager_v1_req_capture_output( &T, export_manager, dmabuf_frame, 1, output); send_zwlr_export_dmabuf_frame_v1_evt_frame(&T, dmabuf_frame, width, height, 0, 0, 0, 1, DMABUF_FORMAT, 0, 0, 1); send_zwlr_export_dmabuf_frame_v1_evt_object(&T, dmabuf_frame, 0, dmabufd, width * height * 4, 0, width * 4, 0); ret_fd = 
get_only_fd_from_msg(T.app); send_zwlr_export_dmabuf_frame_v1_evt_ready( &T, dmabuf_frame, 555555, 555555555, 333333333); } break; } if (ret_fd == -1) { wp_error("Fd not passed through"); pass = false; goto end; } // TODO: verify that the FD contents are correct end: checked_close(dmabufd); /* todo: the drm_fd may be dup'd by libgbm but not freed */ cleanup_tstate(&T); print_pass(pass); return pass; } enum data_device_type { DDT_WAYLAND, DDT_GTK_PRIMARY, DDT_PRIMARY, DDT_WLR, }; static const char *const data_device_type_strs[] = {"wayland main", "gtk primary selection", "primary selection", "wlroots data control"}; /* Confirm that wl_data_offer.receive creates a pipe matching the input */ static bool test_data_offer(enum data_device_type type) { fprintf(stdout, "\n Data offer test: %s\n", data_device_type_strs[type]); struct transfer_states T; if (setup_tstate(&T) == -1) { wp_error("Test setup failed"); return true; } bool pass = true; int src_pipe[2]; pipe(src_pipe); int ret_fd = -1; struct wp_objid display = {0x1}, registry = {0x2}, ddman = {0x3}, seat = {0x4}, ddev = {0x5}, offer = {0xff000001}; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global(&T, registry, 1, "wl_seat", 7); send_wl_registry_req_bind(&T, registry, 1, "wl_seat", 7, seat); switch (type) { case DDT_WAYLAND: send_wl_registry_evt_global( &T, registry, 2, "wl_data_device_manager", 3); send_wl_registry_req_bind(&T, registry, 2, "wl_data_device_manager", 3, ddman); send_wl_data_device_manager_req_get_data_device( &T, ddman, ddev, seat); send_wl_data_device_evt_data_offer(&T, ddev, offer); send_wl_data_offer_evt_offer( &T, offer, "text/plain;charset=utf-8"); send_wl_data_device_evt_selection(&T, ddev, offer); send_wl_data_offer_req_receive(&T, offer, "text/plain;charset=utf-8", src_pipe[1]); break; case DDT_GTK_PRIMARY: send_wl_registry_evt_global(&T, registry, 2, "gtk_primary_selection_device_manager", 1); send_wl_registry_req_bind(&T, registry, 2, "gtk_primary_selection_device_manager", 1, ddman); send_gtk_primary_selection_device_manager_req_get_device( &T, ddman, ddev, seat); send_gtk_primary_selection_device_evt_data_offer( &T, ddev, offer); send_gtk_primary_selection_offer_evt_offer( &T, offer, "text/plain;charset=utf-8"); send_gtk_primary_selection_device_evt_selection( &T, ddev, offer); send_gtk_primary_selection_offer_req_receive(&T, offer, "text/plain;charset=utf-8", src_pipe[1]); break; case DDT_PRIMARY: send_wl_registry_evt_global(&T, registry, 2, "zwp_primary_selection_device_manager_v1", 1); send_wl_registry_req_bind(&T, registry, 2, "zwp_primary_selection_device_manager_v1", 1, ddman); send_zwp_primary_selection_device_manager_v1_req_get_device( &T, ddman, ddev, seat); send_zwp_primary_selection_device_v1_evt_data_offer( &T, ddev, offer); send_zwp_primary_selection_offer_v1_evt_offer( &T, offer, "text/plain;charset=utf-8"); send_zwp_primary_selection_device_v1_evt_selection( &T, ddev, offer); send_zwp_primary_selection_offer_v1_req_receive(&T, offer, "text/plain;charset=utf-8", src_pipe[1]); break; case DDT_WLR: send_wl_registry_evt_global(&T, registry, 2, "zwlr_data_control_manager_v1", 1); send_wl_registry_req_bind(&T, registry, 2, "zwlr_data_control_manager_v1", 1, ddman); send_zwlr_data_control_manager_v1_req_get_data_device( &T, ddman, ddev, seat); send_zwlr_data_control_device_v1_evt_data_offer( &T, ddev, offer); send_zwlr_data_control_offer_v1_evt_offer( &T, offer, "text/plain;charset=utf-8"); send_zwlr_data_control_device_v1_evt_selection(&T, ddev, offer); 
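/* as in the three selection protocols above, the receive request
 * below hands over the write end of a pipe; the common check after
 * the switch then verifies that the replicated fd is writable */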
send_zwlr_data_control_offer_v1_req_receive(&T, offer, "text/plain;charset=utf-8", src_pipe[1]); break; } ret_fd = get_only_fd_from_msg(T.comp); /* confirm receipt of fd with the correct contents; if not, * reject */ if (ret_fd == -1) { wp_error("Fd not passed through"); pass = false; goto end; } uint8_t tmp = 0xab; if (write(ret_fd, &tmp, 1) != 1) { wp_error("Fd not writable"); pass = false; goto end; } end: checked_close(src_pipe[0]); checked_close(src_pipe[1]); cleanup_tstate(&T); print_pass(pass); return pass; } /* Confirm that wl_data_source.data_offer creates a pipe matching the input */ static bool test_data_source(enum data_device_type type) { fprintf(stdout, "\n Data source test: %s\n", data_device_type_strs[type]); struct transfer_states T; if (setup_tstate(&T) == -1) { wp_error("Test setup failed"); return true; } bool pass = true; int dst_pipe[2]; pipe(dst_pipe); int ret_fd = -1; struct wp_objid display = {0x1}, registry = {0x2}, ddman = {0x3}, seat = {0x4}, ddev = {0x5}, dsource = {0x6}; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global(&T, registry, 1, "wl_seat", 7); send_wl_registry_req_bind(&T, registry, 1, "wl_seat", 7, seat); switch (type) { case DDT_WAYLAND: send_wl_registry_evt_global( &T, registry, 2, "wl_data_device_manager", 1); send_wl_registry_req_bind(&T, registry, 2, "wl_data_device_manager", 1, ddman); send_wl_data_device_manager_req_get_data_device( &T, ddman, ddev, seat); send_wl_data_device_manager_req_create_data_source( &T, ddman, dsource); send_wl_data_source_req_offer( &T, dsource, "text/plain;charset=utf-8"); send_wl_data_device_req_set_selection(&T, ddev, dsource, 9999); send_wl_data_source_evt_send(&T, dsource, "text/plain;charset=utf-8", dst_pipe[0]); break; case DDT_GTK_PRIMARY: send_wl_registry_evt_global(&T, registry, 2, "gtk_primary_selection_device_manager", 1); send_wl_registry_req_bind(&T, registry, 2, "gtk_primary_selection_device_manager", 1, ddman); send_gtk_primary_selection_device_manager_req_get_device( &T, ddman, ddev, seat); send_gtk_primary_selection_device_manager_req_create_source( &T, ddman, dsource); send_gtk_primary_selection_source_req_offer( &T, dsource, "text/plain;charset=utf-8"); send_gtk_primary_selection_device_req_set_selection( &T, ddev, dsource, 9999); send_gtk_primary_selection_source_evt_send(&T, dsource, "text/plain;charset=utf-8", dst_pipe[0]); break; case DDT_PRIMARY: send_wl_registry_evt_global(&T, registry, 2, "zwp_primary_selection_device_manager_v1", 1); send_wl_registry_req_bind(&T, registry, 2, "zwp_primary_selection_device_manager_v1", 1, ddman); send_zwp_primary_selection_device_manager_v1_req_get_device( &T, ddman, ddev, seat); send_zwp_primary_selection_device_manager_v1_req_create_source( &T, ddman, dsource); send_zwp_primary_selection_source_v1_req_offer( &T, dsource, "text/plain;charset=utf-8"); send_zwp_primary_selection_device_v1_req_set_selection( &T, ddev, dsource, 9999); send_zwp_primary_selection_source_v1_evt_send(&T, dsource, "text/plain;charset=utf-8", dst_pipe[0]); break; case DDT_WLR: send_wl_registry_evt_global(&T, registry, 2, "zwlr_data_control_manager_v1", 1); send_wl_registry_req_bind(&T, registry, 2, "zwlr_data_control_manager_v1", 1, ddman); send_zwlr_data_control_manager_v1_req_get_data_device( &T, ddman, ddev, seat); send_zwlr_data_control_manager_v1_req_create_data_source( &T, ddman, dsource); send_zwlr_data_control_source_v1_req_offer( &T, dsource, "text/plain;charset=utf-8"); send_zwlr_data_control_device_v1_req_set_selection( &T, ddev, dsource); 
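/* note: the set_selection request just sent, unlike its wayland,
 * gtk, and zwp counterparts above, carries no serial argument */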
send_zwlr_data_control_source_v1_evt_send(&T, dsource, "text/plain;charset=utf-8", dst_pipe[0]); break; } ret_fd = get_only_fd_from_msg(T.app); /* confirm receipt of fd with the correct contents; if not, * reject */ if (ret_fd == -1) { wp_error("Fd not passed through"); pass = false; goto end; } /* todo: check readable */ end: checked_close(dst_pipe[0]); checked_close(dst_pipe[1]); cleanup_tstate(&T); print_pass(pass); return pass; } /* Check that gamma_control copies the input file */ static bool test_gamma_control(void) { fprintf(stdout, "\n Gamma control test\n"); struct transfer_states T; if (setup_tstate(&T) == -1) { wp_error("Test setup failed"); return true; } bool pass = true; int ret_fd = -1; char *testpat = make_filled_pattern(1024, 0x12345678); int fd = make_filled_file(1024, testpat); struct wp_objid display = {0x1}, registry = {0x2}, gamma_manager = {0x3}, output = {0x4}, gamma_control = {0x5}; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global( &T, registry, 1, "zwlr_gamma_control_manager_v1", 1); send_wl_registry_req_bind(&T, registry, 1, "zwlr_gamma_control_manager_v1", 1, gamma_manager); send_wl_registry_evt_global(&T, registry, 1, "wl_output", 3); send_wl_registry_req_bind(&T, registry, 1, "wl_output", 3, output); send_zwlr_gamma_control_manager_v1_req_get_gamma_control( &T, gamma_manager, gamma_control, output); send_zwlr_gamma_control_v1_evt_gamma_size(&T, gamma_control, 1024); send_zwlr_gamma_control_v1_req_set_gamma(&T, gamma_control, fd); ret_fd = get_only_fd_from_msg(T.comp); /* confirm receipt of fd with the correct contents; if not, * reject */ if (ret_fd == -1) { wp_error("Fd not passed through"); pass = false; goto end; } pass = check_file_contents(ret_fd, 1024, testpat); if (!pass) { wp_error("Failed to transfer file"); } end: free(testpat); checked_close(fd); cleanup_tstate(&T); print_pass(pass); return pass; } /* Check that presentation feedback timestamps are translated between the two sides' clock domains */ static bool test_presentation_time(void) { fprintf(stdout, "\n Presentation time test\n"); struct transfer_states T; if (setup_tstate(&T) == -1) { wp_error("Test setup failed"); return true; } bool pass = true; struct wp_objid display = {0x1}, registry = {0x2}, presentation = {0x3}, compositor = {0x4}, surface = {0x5}, feedback = {0x6}; T.app->local_time_offset = 500; T.comp->local_time_offset = 600; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global(&T, registry, 1, "wp_presentation", 1); send_wl_registry_evt_global(&T, registry, 2, "wl_compositor", 1); send_wl_registry_req_bind( &T, registry, 1, "wp_presentation", 1, presentation); /* todo: run another branch with CLOCK_REALTIME */ send_wp_presentation_evt_clock_id(&T, presentation, CLOCK_MONOTONIC); send_wl_registry_req_bind( &T, registry, 12, "wl_compositor", 1, compositor); send_wl_compositor_req_create_surface(&T, compositor, surface); send_wl_surface_req_damage(&T, surface, 0, 0, 64, 64); send_wp_presentation_req_feedback(&T, presentation, surface, feedback); send_wl_surface_req_commit(&T, surface); send_wp_presentation_feedback_evt_presented( &T, feedback, 0, 30, 120000, 16666666, 0, 0, 7); const struct msg *const last_msg = &T.app->rcvd[T.app->nrcvd - 1]; uint32_t tv_sec_hi = last_msg->data[2], tv_sec_lo = last_msg->data[3], tv_nsec = last_msg->data[4]; if (tv_nsec != 120000 + T.app->local_time_offset - T.comp->local_time_offset) { wp_error("Time translation failed %u %u %u", tv_sec_hi, tv_sec_lo, tv_nsec); pass = false; goto end; } /* look at timestamp */ if (!pass) { goto
end; } end: cleanup_tstate(&T); print_pass(pass); return pass; } /* Check whether the video encoding feature can replicate a uniform * color image */ static bool test_fixed_video_color_copy(enum video_coding_fmt fmt, bool hw) { (void)fmt; (void)hw; /* todo: back out if no dmabuf support or no video support */ return true; } log_handler_func_t log_funcs[2] = {test_log_handler, test_log_handler}; int main(int argc, char **argv) { (void)argc; (void)argv; set_initial_fds(); int ntest = 21; int nsuccess = 0; nsuccess += test_fixed_shm_buffer_copy(); nsuccess += test_fixed_shm_screencopy_copy(); nsuccess += test_fixed_keymap_copy(); nsuccess += test_fixed_dmabuf_copy(COPY_LINUX_DMABUF); nsuccess += test_fixed_dmabuf_copy(COPY_LINUX_DMABUF_INDIR); nsuccess += test_fixed_dmabuf_copy(COPY_DRM_PRIME); nsuccess += test_fixed_dmabuf_copy(COPY_WLR_EXPORT); nsuccess += test_data_offer(DDT_WAYLAND); nsuccess += test_data_offer(DDT_PRIMARY); nsuccess += test_data_offer(DDT_GTK_PRIMARY); nsuccess += test_data_offer(DDT_WLR); nsuccess += test_data_source(DDT_WAYLAND); nsuccess += test_data_source(DDT_PRIMARY); nsuccess += test_data_source(DDT_GTK_PRIMARY); nsuccess += test_data_source(DDT_WLR); nsuccess += test_gamma_control(); nsuccess += test_presentation_time(); nsuccess += test_fixed_video_color_copy(VIDEO_H264, false); nsuccess += test_fixed_video_color_copy(VIDEO_H264, true); nsuccess += test_fixed_video_color_copy(VIDEO_VP9, false); nsuccess += test_fixed_video_color_copy(VIDEO_AV1, false); // TODO: add tests for handling of common errors, e.g. invalid fd, // or type confusion fprintf(stdout, "\n%d of %d cases passed\n", nsuccess, ntest); check_unclosed_fds(); return (nsuccess == ntest) ? EXIT_SUCCESS : EXIT_FAILURE; } waypipe-v0.10.5/waypipe-c/test/protocol_functions.txt000066400000000000000000000052241507071711100230130ustar00rootroot00000000000000gtk_primary_selection_device_evt_data_offer gtk_primary_selection_device_evt_selection gtk_primary_selection_device_manager_req_create_source gtk_primary_selection_device_manager_req_get_device gtk_primary_selection_device_req_set_selection gtk_primary_selection_offer_evt_offer gtk_primary_selection_offer_req_receive gtk_primary_selection_source_evt_send gtk_primary_selection_source_req_offer wl_compositor_req_create_surface wl_data_device_evt_data_offer wl_data_device_evt_selection wl_data_device_manager_req_create_data_source wl_data_device_manager_req_get_data_device wl_data_device_req_set_selection wl_data_offer_evt_offer wl_data_offer_req_receive wl_data_source_evt_send wl_data_source_req_offer wl_display_req_get_registry wl_drm_evt_device wl_drm_evt_format wl_drm_evt_capabilities wl_drm_req_create_prime_buffer wl_keyboard_evt_keymap wl_registry_evt_global wl_registry_req_bind wl_seat_evt_capabilities wl_seat_req_get_keyboard wl_shm_pool_req_create_buffer wl_shm_req_create_pool wl_surface_req_attach wl_surface_req_commit wl_surface_req_damage wp_presentation_evt_clock_id wp_presentation_req_feedback wp_presentation_feedback_evt_presented zwlr_data_control_device_v1_evt_data_offer zwlr_data_control_device_v1_evt_selection zwlr_data_control_device_v1_req_set_selection zwlr_data_control_manager_v1_req_create_data_source zwlr_data_control_manager_v1_req_get_data_device zwlr_data_control_offer_v1_evt_offer zwlr_data_control_offer_v1_req_receive zwlr_data_control_source_v1_evt_send zwlr_data_control_source_v1_req_offer zwlr_export_dmabuf_manager_v1_req_capture_output zwlr_export_dmabuf_frame_v1_evt_frame zwlr_export_dmabuf_frame_v1_evt_object 
zwlr_export_dmabuf_frame_v1_evt_ready zwlr_gamma_control_manager_v1_req_get_gamma_control zwlr_gamma_control_v1_evt_gamma_size zwlr_gamma_control_v1_req_set_gamma zwlr_screencopy_frame_v1_evt_buffer zwlr_screencopy_frame_v1_evt_flags zwlr_screencopy_frame_v1_evt_ready zwlr_screencopy_frame_v1_req_copy zwlr_screencopy_manager_v1_req_capture_output zwp_linux_buffer_params_v1_evt_created zwp_linux_buffer_params_v1_req_add zwp_linux_buffer_params_v1_req_create zwp_linux_buffer_params_v1_req_create_immed zwp_linux_buffer_params_v1_req_destroy zwp_linux_dmabuf_v1_evt_modifier zwp_linux_dmabuf_v1_req_create_params zwp_primary_selection_device_manager_v1_req_create_source zwp_primary_selection_device_manager_v1_req_get_device zwp_primary_selection_device_v1_evt_data_offer zwp_primary_selection_device_v1_evt_selection zwp_primary_selection_device_v1_req_set_selection zwp_primary_selection_offer_v1_evt_offer zwp_primary_selection_offer_v1_req_receive zwp_primary_selection_source_v1_evt_send zwp_primary_selection_source_v1_req_offer waypipe-v0.10.5/waypipe-c/test/startup_failure.py000077500000000000000000000171251507071711100221120ustar00rootroot00000000000000#!/usr/bin/env python3 """ Verifying all the ways in which waypipe can fail before even making a connection. """ if __name__ != "__main__": quit(1) import os, subprocess, time, signal, socket def try_unlink(path): try: os.unlink(path) except FileNotFoundError: pass def make_socket(path): folder, filename = os.path.split(path) cwdir = os.open(".", os.O_RDONLY | os.O_DIRECTORY) display_socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) os.chdir(folder) display_socket.bind(filename) display_socket.listen() os.fchdir(cwdir) os.close(cwdir) return display_socket waypipe_path = os.environ["TEST_WAYPIPE_PATH"] sleep_path = os.environ["TEST_SLEEP_PATH"] fake_ssh_path = os.environ["TEST_FAKE_SSH_PATH"] ld_library_path = ( os.environ["LD_LIBRARY_PATH"] if "LD_LIBRARY_PATH" in os.environ else "" ) xdg_runtime_dir = os.path.abspath("./run/") os.makedirs(xdg_runtime_dir, mode=0o700, exist_ok=True) os.chmod(xdg_runtime_dir, 0o700) all_succeeding = True wayland_display = "wayland-display" client_socket_path = xdg_runtime_dir + "/client-socket" server_socket_path = xdg_runtime_dir + "/server-socket" ssh_socket_path = xdg_runtime_dir + "/ssh-socket" wayland_display_path = xdg_runtime_dir + "/" + wayland_display try_unlink(wayland_display_path) display_socket = make_socket(wayland_display_path) USE_SOCKETPAIR = 1 << 1 EXPECT_SUCCESS = 1 << 2 EXPECT_TIMEOUT = 1 << 3 EXPECT_FAILURE = 1 << 4 def run_test(name, command, env, flags): try_unlink(client_socket_path) try_unlink(server_socket_path) try_unlink(server_socket_path + ".disp.sock") if flags & USE_SOCKETPAIR: sockets = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM) conn_socket = 999 os.dup2(sockets[1].fileno(), conn_socket, inheritable=True) env = dict(env, WAYLAND_SOCKET=str(conn_socket)) pfds = [conn_socket] else: pfds = [] timed_out = False log_path = os.path.join(xdg_runtime_dir, "sfail_{}.txt".format(name)) logfile = open(log_path, "wb") print(env, " ".join(command)) proc = subprocess.Popen( command, env=env, stdin=subprocess.DEVNULL, stdout=logfile, stderr=subprocess.STDOUT, pass_fds=pfds, start_new_session=True, ) try: output, none = proc.communicate(timeout=1.0) except subprocess.TimeoutExpired as e: # Program is waiting indefinitely for something. # Kill it, and all children. 
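# (start_new_session=True above gave the child its own process
# group, so this killpg cannot take down the test harness itself.)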
pgrp = os.getpgid(proc.pid) os.killpg(pgrp, signal.SIGKILL) retcode = None timed_out = True else: retcode = proc.returncode logfile.close() output = open(log_path, "rb").read() if flags & USE_SOCKETPAIR: os.close(conn_socket) log_path = os.path.join(xdg_runtime_dir, "weston_out.txt") with open(log_path, "wb") as out: out.write(output) result = ( "timeout" if timed_out else ("fail({})".format(retcode) if retcode != 0 else "pass") ) global all_succeeding if flags & EXPECT_SUCCESS: if timed_out or retcode != 0: print( "Run {} failed when it should have succeeded".format(name), output, retcode, "timeout" if timed_out else "notimeout", ) all_succeeding = False else: print("Run {} passed.".format(name), output) elif flags & EXPECT_FAILURE: if timed_out or retcode == 0: print( "Run {} succeeded when it should have failed".format(name), output, retcode, "timeout" if timed_out else "notimeout", ) all_succeeding = False else: print("Run {} passed.".format(name), output) elif flags & EXPECT_TIMEOUT: if not timed_out: print( "Run {} stopped when it should have continued".format(name), output, retcode, ) all_succeeding = False else: print("Run {} passed.".format(name), output) else: raise NotImplementedError wait_cmd = [sleep_path, "10.0"] invalid_hostname = "@" fake_ssh_dir = os.path.dirname(fake_ssh_path) waypipe_dir = os.path.dirname(waypipe_path) base_env = {"LD_LIBRARY_PATH": ld_library_path, "PATH": ""} standard_env = dict(base_env, XDG_RUNTIME_DIR=xdg_runtime_dir) ssh_only_env = dict(standard_env, PATH=fake_ssh_dir) ssh_env = dict(standard_env, PATH=fake_ssh_dir + ":" + waypipe_dir) # Configurations that should fail run_test( "b_client_long_disp", [waypipe_path, "-s", client_socket_path, "client"], dict(base_env, WAYLAND_DISPLAY=("/" + "x" * 107)), EXPECT_FAILURE, ) run_test( "b_client_disp_dne", [waypipe_path, "-s", client_socket_path, "client"], dict(base_env, WAYLAND_DISPLAY=xdg_runtime_dir + "/dne"), EXPECT_FAILURE, ) run_test( "b_client_no_env", [waypipe_path, "-s", client_socket_path, "client"], base_env, EXPECT_FAILURE, ) run_test( "b_server_oneshot_no_env", [waypipe_path, "-o", "-s", server_socket_path, "server"] + wait_cmd, base_env, EXPECT_TIMEOUT, ) run_test( "b_client_bad_pipe1", [waypipe_path, "-s", client_socket_path, "client"], dict(base_env, WAYLAND_SOCKET="33"), EXPECT_FAILURE, ) run_test( "b_client_bad_pipe2", [waypipe_path, "-s", client_socket_path, "client"], dict(base_env, WAYLAND_SOCKET="777777777777777777777777777"), EXPECT_FAILURE, ) run_test( "b_client_bad_pipe3", [waypipe_path, "-s", client_socket_path, "client"], dict(base_env, WAYLAND_SOCKET="0x33"), EXPECT_FAILURE, ) run_test( "b_client_nxdg_offset", [waypipe_path, "-s", client_socket_path, "client"], dict(base_env, WAYLAND_DISPLAY=wayland_display), EXPECT_FAILURE, ) run_test( "b_server_no_env", [waypipe_path, "-s", server_socket_path, "server"] + wait_cmd, base_env, EXPECT_FAILURE, ) run_test( "g_ssh_test_nossh_env", [waypipe_path, "-o", "-s", ssh_socket_path, "ssh", invalid_hostname] + wait_cmd, dict(standard_env, WAYLAND_DISPLAY=wayland_display), EXPECT_FAILURE, ) # Configurations that should succeed run_test( "g_help", [waypipe_path, "--help"], base_env, EXPECT_SUCCESS, ) run_test( "g_server_std_env", [waypipe_path, "-s", server_socket_path, "server"] + wait_cmd, standard_env, EXPECT_TIMEOUT, ) run_test( "g_client_std_env", [waypipe_path, "-s", client_socket_path, "client"], dict(standard_env, WAYLAND_DISPLAY=wayland_display_path), EXPECT_TIMEOUT, ) run_test( "g_client_offset_sock", [waypipe_path, "-s", 
client_socket_path, "client"], dict(standard_env, WAYLAND_DISPLAY=wayland_display), EXPECT_TIMEOUT, ) run_test( "g_client_pipe_env", [waypipe_path, "-s", client_socket_path, "client"], dict(standard_env), EXPECT_TIMEOUT | USE_SOCKETPAIR, ) run_test( "g_ssh_test_oneshot", [ waypipe_path, "--remote-bin", waypipe_path, "-o", "-s", ssh_socket_path, "ssh", invalid_hostname, ] + wait_cmd, dict(ssh_env, WAYLAND_DISPLAY=wayland_display), EXPECT_TIMEOUT, ) run_test( "g_ssh_test_reg", [ waypipe_path, "--remote-bin", waypipe_path, "-s", ssh_socket_path, "ssh", invalid_hostname, ] + wait_cmd, dict(ssh_env, WAYLAND_DISPLAY=wayland_display), EXPECT_TIMEOUT, ) try_unlink(client_socket_path) try_unlink(wayland_display_path) quit(0 if all_succeeding else 1) waypipe-v0.10.5/waypipe-c/test/test-proto.xml000066400000000000000000000054171507071711100211670ustar00rootroot00000000000000 Copyright © 2019 Manuel Stoeckl Permission to use, copy, modify, distribute, and sell this software and its documentation for any purpose is hereby granted without fee, provided that\n the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of the copyright holders not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. The copyright holders make no representations about the suitability of this software for any purpose. It is provided "as is" without express or implied warranty. THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. waypipe-v0.10.5/waypipe-c/test/test.sh000077500000000000000000000007671507071711100176460ustar00rootroot00000000000000#!/bin/sh root=`pwd` waypipe=`which waypipe` program=`which ${1:-weston-terminal}` debug= debug=-d # Orange=client, purple=server rm -f /tmp/waypipe-server.sock /tmp/waypipe-client.sock ($waypipe -o $debug client 2>&1 | sed 's/.*/&/') & # ssh-to-self; should have a local keypair set up (ssh -R /tmp/waypipe-server.sock:/tmp/waypipe-client.sock localhost $waypipe -o $debug server -- $program) 2>&1 | sed 's/.*/&/' kill %1 rm -f /tmp/waypipe-server.sock /tmp/waypipe-client.sock waypipe-v0.10.5/waypipe-c/test/test_fnlist.txt000066400000000000000000000000021507071711100214050ustar00rootroot00000000000000* waypipe-v0.10.5/waypipe-c/test/test_proto_hook.c000066400000000000000000000312261507071711100217100ustar00rootroot00000000000000/* * Copyright © 2024 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "common.h" #include "main.h" #include #include #include #include #include #include #include #include #include #include #include #include static int parse_video_string(const char *str, struct main_config *config, bool on_display_side) { char tmp[128]; size_t l = strlen(str); if (l >= 127) { return -1; } memcpy(tmp, str, l + 1); config->video_if_possible = true; char *part = strtok(tmp, ","); while (part) { if (!strcmp(part, "none")) { config->video_if_possible = false; } else if (!strcmp(part, "h264")) { config->video_fmt = VIDEO_H264; } else if (!strcmp(part, "vp9")) { config->video_fmt = VIDEO_VP9; } else if (!strcmp(part, "av1")) { config->video_fmt = VIDEO_AV1; } else if (!strcmp(part, "hw")) { config->prefer_hwvideo = true; } else if (!strcmp(part, "hwenc")) { if (!on_display_side) { config->prefer_hwvideo = true; } } else if (!strcmp(part, "hwdec")) { if (on_display_side) { config->prefer_hwvideo = true; } } else if (!strcmp(part, "sw")) { config->prefer_hwvideo = false; } else if (!strcmp(part, "swenc")) { if (!on_display_side) { config->prefer_hwvideo = false; } } else if (!strcmp(part, "swdec")) { if (on_display_side) { config->prefer_hwvideo = false; } } else if (!strncmp(part, "bpf=", 4)) { char *ep; double bpf = strtod(part + 4, &ep); if (*ep == 0 && bpf <= 1e9 && bpf >= 1.0) { config->video_bpf = (int)bpf; } else { return -1; } } else { return -1; } part = strtok(NULL, ","); } return 0; } static uint32_t conntoken_header(const struct main_config *config, bool reconnectable, bool update) { uint32_t header = (WAYPIPE_PROTOCOL_VERSION << 16) | CONN_FIXED_BIT; header |= (update ? CONN_UPDATE_BIT : 0); header |= (reconnectable ? CONN_RECONNECTABLE_BIT : 0); #ifdef HAS_LZ4 header |= (config->compression == COMP_LZ4 ? CONN_LZ4_COMPRESSION : 0); #endif #ifdef HAS_ZSTD header |= (config->compression == COMP_ZSTD ? CONN_ZSTD_COMPRESSION : 0); #endif if (config->compression == COMP_NONE) { header |= CONN_NO_COMPRESSION; } if (config->video_if_possible) { header |= (config->video_fmt == VIDEO_H264 ? CONN_H264_VIDEO : 0); header |= (config->video_fmt == VIDEO_VP9 ? CONN_VP9_VIDEO : 0); header |= (config->video_fmt == VIDEO_AV1 ? CONN_AV1_VIDEO : 0); } else { header |= CONN_NO_VIDEO; } #ifdef HAS_DMABUF header |= (config->no_gpu ? CONN_NO_DMABUF_SUPPORT : 0); #else header |= CONN_NO_DMABUF_SUPPORT; #endif return header; } static int check_conn_header(uint32_t header, const struct main_config *config, char *err, size_t err_size) { if ((header & CONN_FIXED_BIT) == 0 && (header & CONN_UNSET_BIT) != 0) { snprintf(err, err_size, "Waypipe client is rejecting connection header %08" PRIx32 "; it is either garbage or there was a wire protocol endianness mismatch.", header); return -1; } /* Earlier versions strictly required a protocol version match; now * there is a protocol version negotiation where waypipe-server sends * its desired version, and if this is not the minimum, the * waypipe-client's first message in reply will acknowledge that * version. 
To ensure newer clients still work with older Waypipe (that * checked bits 16-31), the version field is now extracted from bits 3-6 * and 16-23. */ uint32_t version = (((header >> 16) & 0xff) << 4) | ((header >> 3) & 0xf); wp_debug("Waypipe server is requesting protocol version %u; using default version 16", version); /* For now, reject mismatches in compression format and video coding * setting, and print an error. Adopting whatever the server asks for * is a minor security issue -- e.g., video handling is a good target * for exploits, and compression can cost CPU time, especially if the * initial connection mechanism were to be expanded to allow setting * compression level. */ if ((header & CONN_COMPRESSION_MASK) == CONN_ZSTD_COMPRESSION) { if (config->compression != COMP_ZSTD) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client is configured for compression=%s, not the compression=ZSTD the Waypipe server expected", compression_mode_to_str( config->compression)); return -1; } } else if ((header & CONN_COMPRESSION_MASK) == CONN_LZ4_COMPRESSION) { if (config->compression != COMP_LZ4) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client is configured for compression=%s, not the compression=LZ4 the Waypipe server expected", compression_mode_to_str( config->compression)); return -1; } } else if ((header & CONN_COMPRESSION_MASK) == CONN_NO_COMPRESSION) { if (config->compression != COMP_NONE) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client is configured for compression=%s, not the compression=NONE the Waypipe server expected", compression_mode_to_str( config->compression)); return -1; } } else if ((header & CONN_COMPRESSION_MASK) != 0) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client is configured for compression=%s, not the unidentified compression type the Waypipe server expected", compression_mode_to_str(config->compression)); return -1; } if ((header & CONN_VIDEO_MASK) == CONN_VP9_VIDEO) { if (!config->video_if_possible) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client was not run with video encoding enabled, unlike Waypipe server"); return -1; } if (config->video_fmt != VIDEO_VP9) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client was not configured for the VP9 video coding format requested by the Waypipe server"); return -1; } } else if ((header & CONN_VIDEO_MASK) == CONN_H264_VIDEO) { if (!config->video_if_possible) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client was not run with video encoding enabled, unlike Waypipe server"); return -1; } if (config->video_fmt != VIDEO_H264) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client was not configured for the H264 video coding format requested by the Waypipe server"); return -1; } } else if ((header & CONN_VIDEO_MASK) == CONN_AV1_VIDEO) { if (!config->video_if_possible) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client was not run with video encoding enabled, unlike Waypipe server"); return -1; } if (config->video_fmt != VIDEO_AV1) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client was not configured for the AV1 video coding format requested by the Waypipe server"); return -1; } } else if ((header & CONN_VIDEO_MASK) == CONN_NO_VIDEO) { if (config->video_if_possible) { snprintf(err, err_size, "Waypipe client is rejecting connection,
Waypipe client has video encoding enabled, but Waypipe server does not"); return -1; } } else if ((header & CONN_VIDEO_MASK) != 0) { snprintf(err, err_size, "Waypipe client is rejecting connection, Waypipe client was not configured for the unidentified video coding format requested by the Waypipe server"); return -1; } return 0; } const char *strip_prefix(const char *input, const char *prefix) { if (!strncmp(input, prefix, strlen(prefix))) { return input + strlen(prefix); } else { return NULL; } } log_handler_func_t log_funcs[2] = {NULL, NULL}; int main(int argc, char **argv) { if (argc == 1 || !strcmp(argv[1], "--help")) { printf("Usage: ./test_proto_hook [options] [client-conn|server-conn]\n"); printf("A program which runs the main proxy loop of Waypipe.\n"); return EXIT_FAILURE; } bool debug = false; struct main_config config = (struct main_config){ .drm_node = NULL, .n_worker_threads = 1, .compression = COMP_NONE, .compression_level = 1, .no_gpu = false, .only_linear_dmabuf = true, .video_if_possible = false, .video_bpf = 3e5, .old_video_mode = false, .vsock = false, .vsock_cid = 0, .vsock_port = 0, .vsock_to_host = false, .title_prefix = NULL, .secctx_app_id = NULL, }; if (strcmp(argv[argc - 1], "client-conn") && strcmp(argv[argc - 1], "server-conn")) { fprintf(stderr, "Last argument should be client-conn/server-conn\n"); return EXIT_FAILURE; } bool display_side = !strcmp(argv[argc - 1], "client-conn"); const char *value; for (int i = 1; i < argc - 1; i++) { if (!strcmp(argv[i], "--debug")) { debug = true; } else if (!strcmp(argv[i], "--no-gpu")) { config.no_gpu = true; } else if ((value = strip_prefix(argv[i], "--threads="))) { uint32_t v; if (parse_uint32(value, &v) == -1 || v > (1 << 20)) { fprintf(stderr, "Invalid thread count\n"); return EXIT_FAILURE; } config.n_worker_threads = (int)v; } else if ((value = strip_prefix(argv[i], "--compress="))) { if (!strcmp(value, "none")) { config.compression = COMP_NONE; #ifdef HAS_LZ4 } else if (!strcmp(value, "lz4")) { config.compression = COMP_LZ4; #endif #ifdef HAS_ZSTD } else if (!strcmp(value, "zstd")) { config.compression = COMP_ZSTD; #endif } } else if ((value = strip_prefix(argv[i], "--drm-node="))) { config.drm_node = value; } else if ((value = strip_prefix(argv[i], "--title-prefix="))) { if (!is_utf8(value) || strlen(value) > 128) { fprintf(stderr, "Invalid title prefix of length %d\n", (int)strlen(value)); return EXIT_FAILURE; } config.title_prefix = value; } else if ((value = strip_prefix(argv[i], "--video="))) { if (parse_video_string(value, &config, display_side) == -1) { fprintf(stderr, "Failed to parse video config string '%s'\n", value); return EXIT_FAILURE; } } else if ((value = strip_prefix( argv[i], "--test-wire-version="))) { if (strcmp(value, "16")) { fprintf(stderr, "Version '%s' not implemented\n", value); return EXIT_FAILURE; } } else { fprintf(stderr, "Unexpected argument %s. See source code of this program for details.\n", argv[i]); return EXIT_FAILURE; } } if (debug) { log_funcs[0] = test_atomic_log_handler; } log_funcs[1] = test_atomic_log_handler; const char *upstream = getenv("WAYLAND_SOCKET"); const char *downstream = getenv("WAYPIPE_CONNECTION_FD"); if (!upstream || !downstream) { fprintf(stderr, "Missing environment variable. 
See source code of this program for details.\n"); return EXIT_FAILURE; } uint32_t upstream_u, downstream_u; if (parse_uint32(upstream, &upstream_u) == -1 || parse_uint32(downstream, &downstream_u) == -1) { fprintf(stderr, "Failed to parse sockets.\n"); return EXIT_FAILURE; } int upstream_fd = (int)upstream_u; int downstream_fd = (int)downstream_u; if (display_side) { uint8_t header[16]; if (read(downstream_fd, header, sizeof(header)) != sizeof(header)) { fprintf(stderr, "Failed to read connection header.\n"); return EXIT_FAILURE; } uint32_t h = *(uint32_t *)header; char err[512]; if (check_conn_header(h, &config, err, sizeof(err)) == -1) { fprintf(stderr, "Bad connection header: %s\n", err); return EXIT_FAILURE; } } else { uint32_t header = conntoken_header(&config, false, false); uint8_t padding[12] = {0}; write(upstream_fd, &header, sizeof(header)); write(upstream_fd, padding, sizeof(padding)); } return main_interface_loop(display_side ? downstream_fd : upstream_fd, display_side ? upstream_fd : downstream_fd, -1, &config, display_side); } waypipe-v0.10.5/waypipe-c/test/trace_bcc.sh000077500000000000000000000014311507071711100205610ustar00rootroot00000000000000#!/bin/sh set -e # With bcc 'tplist -l `which waypipe`', can list all probes # With bcc 'trace', can print events, arguments, and timestamps sudo /usr/share/bcc/tools/trace -t \ 'u:/usr/bin/waypipe:construct_diff_exit "diffsize %d", arg1' \ 'u:/usr/bin/waypipe:construct_diff_enter "rects %d", arg1' \ 'u:/usr/bin/waypipe:apply_diff_enter "size %d diffsize %d", arg1, arg2' \ 'u:/usr/bin/waypipe:apply_diff_exit' \ 'u:/usr/bin/waypipe:channel_write_end' \ 'u:/usr/bin/waypipe:channel_write_start "size %d", arg1' \ 'u:/usr/bin/waypipe:worker_comp_enter "index %d", arg1' \ 'u:/usr/bin/waypipe:worker_comp_exit "index %d", arg1' \ 'u:/usr/bin/waypipe:worker_compdiff_enter "index %d", arg1' \ 'u:/usr/bin/waypipe:worker_compdiff_exit "index %d", arg1' waypipe-v0.10.5/waypipe-c/test/trace_perf.sh000077500000000000000000000010641507071711100207700ustar00rootroot00000000000000#!/bin/sh set -x # This probably requires root to set up the probes, and # a low sys/kernel/perf_event_paranoid to record them. # Also, perf record can create huge (>1 GB) files on busy machines, # so it's recommended to run this on a tmpfs prog=$(which waypipe) capture_time=${1:-120} setup="perf buildid-cache -a `which waypipe` ; perf probe -d sdt_waypipe:* ; perf probe sdt_waypipe:* ;" sudo -- sh -c "$setup" sudo perf record -e sdt_waypipe:*,sched:sched_switch -aR sleep $capture_time sudo chmod 644 perf.data perf script --ns | gzip -9 >scriptfile.gz waypipe-v0.10.5/waypipe-c/test/wire_parse.c000066400000000000000000000152351507071711100206300ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "common.h" #include "parsing.h" #include "shadow.h" #include "util.h" #include #include #include #include #include "protocol-test-proto.h" /* from parsing.c */ bool size_check(const struct msg_data *data, const uint32_t *payload, unsigned int true_length, int fd_length); void do_xtype_req_blue(struct context *ctx, const char *interface, uint32_t version, struct wp_object *id, int b, int32_t c, uint32_t d, struct wp_object *e, const char *f, uint32_t g) { char buf[256]; sprintf(buf, "%s %u %u %d %d %u %u %s %u", interface, version, id ? id->obj_id : 0, b, c, d, e ? e->obj_id : 0, f, g); printf("%s\n", buf); ctx->drop_this_msg = strcmp(buf, "babacba 4441 992 7771 3331 4442 991 (null) 4443") != 0; } void do_xtype_evt_yellow(struct context *ctx, uint32_t c) { char buf[256]; sprintf(buf, "%u", c); printf("%s\n", buf); ctx->drop_this_msg = strcmp(buf, "4441") != 0; } void do_ytype_req_green(struct context *ctx, uint32_t a, const char *b, const char *c, int d, const char *e, struct wp_object *f, uint32_t g_count, const uint8_t *g_val) { char buf[256]; sprintf(buf, "%u %s %s %d %s %u %u %x|%x|%x|%x|%x|%x|%x|%x", a, b, c, d, e, f ? f->obj_id : 0, g_count, g_val[0], g_val[1], g_val[2], g_val[3], g_val[4], g_val[5], g_val[6], g_val[7]); printf("%s\n", buf); ctx->drop_this_msg = strcmp(buf, "4441 bea (null) 7771 cbbc 991 8 81|80|81|80|90|99|99|99") != 0; } void do_ytype_evt_red(struct context *ctx, struct wp_object *a, int32_t b, int c, struct wp_object *d, int32_t e, int32_t f, struct wp_object *g, int32_t h, uint32_t i, const char *j, int k, uint32_t l_count, const uint8_t *l_val, uint32_t n, const char *m, struct wp_object *o, int p, struct wp_object *q) { char buf[256]; sprintf(buf, "%u %d %d %u %d %d %u %d %u %s %d %u %x|%x|%x %u %s %u %d %u", a ? a->obj_id : 0, b, c, d ? d->obj_id : 0, e, f, g ? g->obj_id : 0, h, i, j, k, l_count, l_val[0], l_val[1], l_val[2], n, m, o ? o->obj_id : 0, p, q ? 
q->obj_id : 0); printf("%s\n", buf); ctx->drop_this_msg = strcmp(buf, "0 33330 8881 0 33331 33332 0 33333 44440 bcaba 8882 3 80|80|80 99990 (null) 992 8883 991") != 0; } struct wire_test { const struct wp_interface *intf; int msg_offset; int fds[4]; uint32_t words[50]; int nfds; int nwords; }; static inline uint32_t pack_u32(uint8_t a0, uint8_t a1, uint8_t a2, uint8_t a3) { union { uint8_t s[4]; uint32_t v; } u; u.s[0] = a0; u.s[1] = a1; u.s[2] = a2; u.s[3] = a3; return u.v; } log_handler_func_t log_funcs[2] = {test_log_handler, test_log_handler}; int main(int argc, char **argv) { (void)argc; (void)argv; struct message_tracker mt; init_message_tracker(&mt); struct wp_object *old_display = tracker_get(&mt, 1); tracker_remove(&mt, old_display); destroy_wp_object(old_display); struct wp_object xobj; xobj.type = &intf_xtype; xobj.is_zombie = false; xobj.obj_id = 991; tracker_insert(&mt, &xobj); struct wp_object yobj; yobj.type = &intf_ytype; yobj.is_zombie = false; yobj.obj_id = 992; tracker_insert(&mt, &yobj); struct context ctx = {.obj = &xobj, .g = NULL}; struct wire_test tests[] = { {&intf_xtype, 0, {7771}, {8, pack_u32(0x62, 0x61, 0x62, 0x61), pack_u32(0x63, 0x62, 0x61, 0), 4441, yobj.obj_id, 3331, 4442, xobj.obj_id, 0, 4443}, 1, 10}, {&intf_xtype, 1, {0}, {4441}, 0, 1}, {&intf_ytype, 0, {7771}, {4441, 4, pack_u32(0x62, 0x65, 0x61, 0), 0, 5, pack_u32(0x63, 0x62, 0x62, 0x63), pack_u32(0, 0x99, 0x99, 0x99), xobj.obj_id, 8, pack_u32(0x81, 0x80, 0x81, 0x80), pack_u32(0x90, 0x99, 0x99, 0x99)}, 1, 11}, {&intf_ytype, 1, {8881, 8882, 8883}, {7770, 33330, 7771, 33331, 33332, 7773, 33333, 44440, 6, pack_u32(0x62, 0x63, 0x61, 0x62), pack_u32(0x61, 0, 0x99, 0x99), 3, pack_u32(0x80, 0x80, 0x80, 0x11), 99990, 0, yobj.obj_id, xobj.obj_id}, 3, 17}}; bool all_success = true; for (size_t t = 0; t < sizeof(tests) / sizeof(tests[0]); t++) { struct wire_test *wt = &tests[t]; ctx.drop_this_msg = false; wp_callfn_t func = wt->intf->msgs[wt->msg_offset].call; (*func)(&ctx, wt->words, wt->fds, &mt); if (ctx.drop_this_msg) { all_success = false; } printf("Function call %s.%s, %s\n", wt->intf->name, get_nth_packed_string(wt->intf->msg_names, wt->msg_offset), ctx.drop_this_msg ? "FAIL" : "pass"); for (int fdlen = wt->nfds; fdlen >= 0; fdlen--) { for (int length = wt->nwords; length >= 0; length--) { if (fdlen != wt->nfds && length < wt->nwords) { /* the fd check is really trivial */ continue; } bool expect_success = (wt->nwords == length) && (fdlen == wt->nfds); printf("Trying: %d/%d words, %d/%d fds\n", length, wt->nwords, fdlen, wt->nfds); bool sp = size_check( &wt->intf->msgs[wt->msg_offset], wt->words, (unsigned int)length, fdlen); if (sp != expect_success) { wp_error("size check FAIL (%c, expected %c) at %d/%d chars, %d/%d fds", sp ? 'Y' : 'n', expect_success ? 'Y' : 'n', length, wt->nwords, fdlen, wt->nfds); } all_success &= (sp == expect_success); } } } tracker_remove(&mt, &xobj); tracker_remove(&mt, &yobj); cleanup_message_tracker(&mt); printf("Net result: %s\n", all_success ? "pass" : "FAIL"); return all_success ? 
EXIT_SUCCESS : EXIT_FAILURE; } waypipe-v0.10.5/waypipe-c/util.c000066400000000000000000000456321507071711100164720ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "util.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef HAS_VSOCK #include #endif int parse_uint32(const char *str, uint32_t *val) { if (!str[0] || (str[0] == '0' && str[1])) { return -1; } uint64_t v = 0; for (const char *cursor = str; *cursor; cursor++) { if (*cursor < '0' || *cursor > '9') { return -1; } uint64_t s = (uint64_t)(*cursor - '0'); v *= 10; v += s; if (v >= (1uLL << 32)) { return -1; } } *val = (uint32_t)v; return 0; } /* An integer-to-string converter which is async-signal-safe, unlike sprintf */ static char *uint_to_str(uint32_t i, char buf[static 11]) { char *pos = &buf[10]; *pos = '\0'; while (i) { --pos; *pos = (char)((i % 10) + (uint32_t)'0'); i /= 10; } return pos; } size_t multi_strcat(char *dest, size_t dest_space, ...) { size_t net_len = 0; va_list args; va_start(args, dest_space); while (true) { const char *str = va_arg(args, const char *); if (!str) { break; } net_len += strlen(str); if (net_len >= dest_space) { va_end(args); dest[0] = '\0'; return 0; } } va_end(args); va_start(args, dest_space); char *pos = dest; while (true) { const char *str = va_arg(args, const char *); if (!str) { break; } size_t len = strlen(str); memcpy(pos, str, len); pos += len; } va_end(args); *pos = '\0'; return net_len; } bool is_utf8(const char *str) { /* See Unicode Standard 15.0.0, Chapter 3, D92 and Table 3.7. 
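* For quick reference, Table 3-7 permits exactly these byte sequences: 00..7F | C2..DF 80..BF | E0 A0..BF 80..BF | E1..EC 80..BF 80..BF | ED 80..9F 80..BF | EE..EF 80..BF 80..BF | F0 90..BF 80..BF 80..BF | F1..F3 80..BF 80..BF 80..BF | F4 80..8F 80..BF 80..BF. The branches below mirror this table; reading v[1..3] without a length check is safe because a terminating NUL byte always fails the continuation-range test before the cursor advances past it.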
*/ const uint8_t *v = (const uint8_t *)str; while (*v) { if (v[0] <= 0x7f) { v++; } else if (v[0] <= 0xc1) { /* 0x80-0xBF are continuation bytes and 0xC0/0xC1 would be overlong two-byte encodings; none may begin a sequence */ return false; } else if (v[0] <= 0xdf) { if (v[1] < 0x80 || v[1] > 0xbf) { return false; } v += 2; } else if (v[0] <= 0xef) { if (v[0] == 0xe0) { if (v[1] < 0xa0 || v[1] > 0xbf) { return false; } } else if (v[0] <= 0xec) { if (v[1] < 0x80 || v[1] > 0xbf) { return false; } } else if (v[0] == 0xed) { if (v[1] < 0x80 || v[1] > 0x9f) { return false; } } else { if (v[1] < 0x80 || v[1] > 0xbf) { return false; } } if (v[2] < 0x80 || v[2] > 0xbf) { return false; } v += 3; } else if (v[0] <= 0xf4) { if (v[0] == 0xf0) { if (v[1] < 0x90 || v[1] > 0xbf) { return false; } } else if (v[0] <= 0xf3) { if (v[1] < 0x80 || v[1] > 0xbf) { return false; } } else { if (v[1] < 0x80 || v[1] > 0x8f) { return false; } } if (v[2] < 0x80 || v[2] > 0xbf || v[3] < 0x80 || v[3] > 0xbf) { return false; } v += 4; } else { return false; } } return true; } bool shutdown_flag = false; uint64_t inherited_fds[4] = {0, 0, 0, 0}; void handle_sigint(int sig) { (void)sig; char buf[48]; char tmp[11]; const char *pidstr = uint_to_str((uint32_t)getpid(), tmp); size_t len = multi_strcat( buf, sizeof(buf), "SIGINT(", pidstr, ")\n", NULL); (void)write(STDERR_FILENO, buf, len); shutdown_flag = true; } int set_nonblocking(int fd) { int flags = fcntl(fd, F_GETFL, 0); if (flags == -1) { return -1; } return fcntl(fd, F_SETFL, flags | O_NONBLOCK); } int set_cloexec(int fd) { int flags = fcntl(fd, F_GETFD, 0); if (flags == -1) { return -1; } return fcntl(fd, F_SETFD, flags | FD_CLOEXEC); } int setup_nb_socket(int cwd_fd, struct socket_path path, int nmaxclients, int *folder_fd_out, int *socket_fd_out) { if (path.filename->sun_family != AF_UNIX) { wp_error("Address family should be AF_UNIX, was %d", path.filename->sun_family); return -1; } if (strchr(path.filename->sun_path, '/')) { wp_error("Address '%s' should be a pure filename and not contain any forward slashes", path.filename->sun_path); return -1; } int sock = socket(AF_UNIX, SOCK_STREAM, 0); if (sock == -1) { wp_error("Error creating socket: %s", strerror(errno)); return -1; } if (set_nonblocking(sock) == -1) { wp_error("Error making socket nonblocking: %s", strerror(errno)); checked_close(sock); return -1; } int folder_fd = open_folder(path.folder); if (folder_fd == -1) { wp_error("Error opening folder in which to connect to socket: %s", strerror(errno)); checked_close(sock); return -1; } if (fchdir(folder_fd) == -1) { wp_error("Error changing to folder '%s'", path.folder); checked_close(sock); checked_close(folder_fd); return -1; } if (bind(sock, (struct sockaddr *)path.filename, sizeof(*path.filename)) == -1) { wp_error("Error binding socket at %s: %s", path.filename->sun_path, strerror(errno)); checked_close(sock); checked_close(folder_fd); if (fchdir(cwd_fd) == -1) { wp_error("Error returning to current working directory"); } return -1; } if (listen(sock, nmaxclients) == -1) { wp_error("Error listening to socket at %s: %s", path.filename->sun_path, strerror(errno)); checked_close(sock); checked_close(folder_fd); unlink(path.filename->sun_path); if (fchdir(cwd_fd) == -1) { wp_error("Error returning to current working directory"); } return -1; } if (fchdir(cwd_fd) == -1) { wp_error("Error returning to current working directory"); } *folder_fd_out = folder_fd; *socket_fd_out = sock; return 0; } int connect_to_socket_at_folder(int cwd_fd, int folder_fd, const struct sockaddr_un *filename, int *socket_fd) { if (filename->sun_family != AF_UNIX) { wp_error("Address family should be AF_UNIX, was %d", filename->sun_family);
return -1; } if (strchr(filename->sun_path, '/')) { wp_error("Address '%s' should be a pure filename and not contain any forward slashes", filename->sun_path); return -1; } int chanfd = socket(AF_UNIX, SOCK_STREAM, 0); if (chanfd == -1) { wp_error("Error creating socket: %s", strerror(errno)); return -1; } if (fchdir(folder_fd) == -1) { wp_error("Error changing to folder\n"); checked_close(chanfd); return -1; } if (connect(chanfd, (struct sockaddr *)filename, sizeof(*filename)) == -1) { wp_error("Error connecting to socket (%s): %s", filename->sun_path, strerror(errno)); checked_close(chanfd); if (fchdir(cwd_fd) == -1) { wp_error("Error returning to current working directory"); } return -1; } if (fchdir(cwd_fd) == -1) { wp_error("Error returning to current working directory"); } *socket_fd = chanfd; return 0; } int connect_to_socket(int cwd_fd, struct socket_path path, int *folder_fd_out, int *socket_fd_out) { int folder_fd = open_folder(path.folder); if (folder_fd == -1) { wp_error("Error opening folder in which to connect to socket: %s", strerror(errno)); return -1; } int ret = connect_to_socket_at_folder( cwd_fd, folder_fd, path.filename, socket_fd_out); if (folder_fd_out && ret == 0) { *folder_fd_out = folder_fd; } else { checked_close(folder_fd); } return ret; } int split_socket_path(char *src_path, struct sockaddr_un *rel_socket) { size_t l = strlen(src_path); if (l == 0) { wp_error("Socket path to split is empty"); return -1; } size_t s = l; while (src_path[s] != '/' && s > 0) { s--; } if (l - s >= sizeof(rel_socket->sun_path)) { wp_error("Filename part '%s' of socket path is too long: %zu bytes >= sizeof(sun_path) = %zu", src_path + s, l - s, sizeof(rel_socket->sun_path)); return -1; } size_t t = (src_path[s] == '/') ? s + 1 : 0; rel_socket->sun_family = AF_UNIX; memset(rel_socket->sun_path, 0x3f, sizeof(rel_socket->sun_path)); memcpy(rel_socket->sun_path, src_path + t, l - t + 1); src_path[s] = '\0'; return 0; } void unlink_at_folder(int orig_dir_fd, int target_dir_fd, const char *target_dir_name, const char *filename) { if (fchdir(target_dir_fd) == -1) { wp_error("Error switching folder to '%s': %s", target_dir_name ? target_dir_name : "(null)", strerror(errno)); return; } if (unlink(filename) == -1) { wp_error("Unlinking '%s' in '%s' failed: %s", filename, target_dir_name ? target_dir_name : "(null)", strerror(errno)); } if (fchdir(orig_dir_fd) == -1) { wp_error("Error switching folder back to cwd: %s", strerror(errno)); } } bool files_equiv(int fd_a, int fd_b) { struct stat stat_a, stat_b; if (fstat(fd_a, &stat_a) == -1) { wp_error("fstat failed, %s", strerror(errno)); return false; } if (fstat(fd_b, &stat_b) == -1) { wp_error("fstat failed, %s", strerror(errno)); return false; } return (stat_a.st_dev == stat_b.st_dev) && (stat_a.st_ino == stat_b.st_ino); } void set_initial_fds(void) { struct pollfd checklist[256]; for (int i = 0; i < 256; i++) { checklist[i].fd = i; checklist[i].events = 0; checklist[i].revents = 0; } if (poll(checklist, 256, 0) == -1) { wp_error("fd-checking poll failed: %s", strerror(errno)); return; } for (int i = 0; i < 256; i++) { if (!(checklist[i].revents & POLLNVAL)) { inherited_fds[i / 64] |= (1uLL << (i % 64)); } } } void check_unclosed_fds(void) { /* Verify that all file descriptors have been closed. 
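The probe works by calling poll() with events == 0: this never blocks, and poll() reports POLLNVAL in revents for any descriptor that is not open. A minimal standalone sketch of the same trick (hypothetical helper, not part of this file; needs poll.h): static bool fd_is_open(int fd) { struct pollfd p = { .fd = fd, .events = 0, .revents = 0 }; (void)poll(&p, 1, 0); return !(p.revents & POLLNVAL); }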
Since most * instances have <<256 file descriptors open at a given time, it is * safe to only check up to that level */ struct pollfd checklist[256]; for (int i = 0; i < 256; i++) { checklist[i].fd = i; checklist[i].events = 0; checklist[i].revents = 0; } if (poll(checklist, 256, 0) == -1) { wp_error("fd-checking poll failed: %s", strerror(errno)); return; } for (int i = 0; i < 256; i++) { bool initial_fd = (inherited_fds[i / 64] & (1uLL << (i % 64))) != 0; if (initial_fd) { if (checklist[i].revents & POLLNVAL) { wp_error("Unexpected closed fd %d", i); } } else { if (checklist[i].revents & POLLNVAL) { continue; } #ifdef __linux__ char fd_path[64]; char link[256]; sprintf(fd_path, "/proc/self/fd/%d", i); ssize_t len = readlink(fd_path, link, sizeof(link) - 1); if (len == -1) { wp_error("Failed to readlink /proc/self/fd/%d for unexpected open fd %d", i, i); } else { link[len] = '\0'; if (!strcmp(link, "/var/lib/sss/mc/passwd")) { wp_debug("Known issue, leaked fd %d to /var/lib/sss/mc/passwd", i); } else { wp_debug("Unexpected open fd %d: %s", i, link); } } #else wp_debug("Unexpected open fd %d", i); #endif } } } size_t print_display_error(char *dest, size_t dest_space, uint32_t error_code, const char *message) { if (dest_space < 20) { return 0; } size_t msg_len = strlen(message) + 1; size_t net_len = 4 * ((msg_len + 0x3) / 4) + 20; if (net_len > dest_space) { return 0; } uint32_t header[5] = {0x1, (uint32_t)net_len << 16, 0x1, error_code, (uint32_t)msg_len}; memcpy(dest, header, sizeof(header)); memcpy(dest + sizeof(header), message, msg_len); if (msg_len % 4 != 0) { size_t trailing = 4 - msg_len % 4; uint8_t zeros[4] = {0, 0, 0, 0}; memcpy(dest + sizeof(header) + msg_len, zeros, trailing); } return net_len; } size_t print_wrapped_error(char *dest, size_t dest_space, const char *message) { size_t msg_len = print_display_error( dest + 4, dest_space - 4, 3, message); if (msg_len == 0) { return 0; } uint32_t header = transfer_header(msg_len + 4, WMSG_PROTOCOL); memcpy(dest, &header, sizeof(header)); return msg_len + 4; } int send_one_fd(int socket, int fd) { union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } uc; memset(uc.buf, 0, sizeof(uc.buf)); struct cmsghdr *frst = (struct cmsghdr *)(uc.buf); frst->cmsg_level = SOL_SOCKET; frst->cmsg_type = SCM_RIGHTS; *((int *)CMSG_DATA(frst)) = fd; frst->cmsg_len = CMSG_LEN(sizeof(int)); struct iovec the_iovec; the_iovec.iov_len = 1; uint8_t dummy_data = 1; the_iovec.iov_base = &dummy_data; struct msghdr msg; msg.msg_name = NULL; msg.msg_namelen = 0; msg.msg_iov = &the_iovec; msg.msg_iovlen = 1; msg.msg_flags = 0; msg.msg_control = uc.buf; msg.msg_controllen = CMSG_SPACE(sizeof(int)); return (int)sendmsg(socket, &msg, 0); } bool wait_for_pid_and_clean(pid_t *target_pid, int *status, int options, struct conn_map *map) { bool found = false; while (1) { int stat; pid_t r = waitpid((pid_t)-1, &stat, options); if (r == 0 || (r == -1 && (errno == ECHILD || errno == EINTR))) { // Valid exit reasons, not an error errno = 0; return found; } else if (r == -1) { wp_error("waitpid failed: %s", strerror(errno)); return found; } wp_debug("Child process %d has died", r); if (map) { /* Clean out all entries matching that pid */ int iw = 0; for (int ir = 0; ir < map->count; ir++) { map->data[iw] = map->data[ir]; if (map->data[ir].pid != r) { iw++; } else { checked_close(map->data[ir].linkfd); } } map->count = iw; } if (r == *target_pid) { *target_pid = 0; *status = stat; found = true; } } } int buf_ensure_size(int count, size_t obj_size, int *space, void 
**data) { int x = *space; if (count <= x) { return 0; } if (count >= INT32_MAX / 2 || count <= 0) { return -1; } if (x < 1) { x = 1; } while (x < count) { x *= 2; } void *new_data = realloc(*data, (size_t)x * obj_size); if (!new_data) { return -1; } *data = new_data; *space = x; return 0; } static const char *const wmsg_types[] = { "WMSG_PROTOCOL", "WMSG_INJECT_RIDS", "WMSG_OPEN_FILE", "WMSG_EXTEND_FILE", "WMSG_OPEN_DMABUF", "WMSG_BUFFER_FILL", "WMSG_BUFFER_DIFF", "WMSG_OPEN_IR_PIPE", "WMSG_OPEN_IW_PIPE", "WMSG_OPEN_RW_PIPE", "WMSG_PIPE_TRANSFER", "WMSG_PIPE_SHUTDOWN_R", "WMSG_PIPE_SHUTDOWN_W", "WMSG_OPEN_DMAVID_SRC", "WMSG_OPEN_DMAVID_DST", "WMSG_SEND_DMAVID_PACKET", "WMSG_ACK_NBLOCKS", "WMSG_RESTART", "WMSG_CLOSE", "WMSG_OPEN_DMAVID_SRC_V2", "WMSG_OPEN_DMAVID_DST_V2", }; const char *wmsg_type_to_str(enum wmsg_type tp) { if (tp >= sizeof(wmsg_types) / sizeof(wmsg_types[0])) { return "???"; } return wmsg_types[tp]; } bool wmsg_type_is_known(enum wmsg_type tp) { return (size_t)tp < (sizeof(wmsg_types) / sizeof(wmsg_types[0])); } int transfer_ensure_size(struct transfer_queue *transfers, int count) { int sz = transfers->size; if (buf_ensure_size(count, sizeof(*transfers->vecs), &sz, (void **)&transfers->vecs) == -1) { return -1; } sz = transfers->size; if (buf_ensure_size(count, sizeof(*transfers->meta), &sz, (void **)&transfers->meta) == -1) { return -1; } transfers->size = sz; return 0; } int transfer_add(struct transfer_queue *w, size_t size, void *data) { if (size == 0) { return 0; } if (transfer_ensure_size(w, w->end + 1) == -1) { return -1; } w->vecs[w->end].iov_len = size; w->vecs[w->end].iov_base = data; w->meta[w->end].msgno = w->last_msgno; w->meta[w->end].static_alloc = false; w->end++; w->last_msgno++; return 0; } void transfer_async_add(struct thread_msg_recv_buf *q, void *data, size_t sz) { struct iovec vec; vec.iov_len = sz; vec.iov_base = data; pthread_mutex_lock(&q->lock); q->data[q->zone_end++] = vec; pthread_mutex_unlock(&q->lock); } int transfer_load_async(struct transfer_queue *w) { pthread_mutex_lock(&w->async_recv_queue.lock); int zstart = w->async_recv_queue.zone_start; int zend = w->async_recv_queue.zone_end; w->async_recv_queue.zone_start = zend; pthread_mutex_unlock(&w->async_recv_queue.lock); for (int i = zstart; i < zend; i++) { struct iovec v = w->async_recv_queue.data[i]; memset(&w->async_recv_queue.data[i], 0, sizeof(struct iovec)); if (v.iov_len == 0 || v.iov_base == NULL) { wp_error("Unexpected empty message"); continue; } /* Only fill/diff messages are received async, so msgno * is always incremented */ if (transfer_add(w, v.iov_len, v.iov_base) == -1) { wp_error("Failed to add message to transfer queue"); /* the queue lock was already released above, so do not unlock again */ return -1; } } return 0; } void cleanup_transfer_queue(struct transfer_queue *td) { for (int i = td->async_recv_queue.zone_start; i < td->async_recv_queue.zone_end; i++) { free(td->async_recv_queue.data[i].iov_base); } pthread_mutex_destroy(&td->async_recv_queue.lock); free(td->async_recv_queue.data); for (int i = 0; i < td->end; i++) { if (!td->meta[i].static_alloc) { free(td->vecs[i].iov_base); } } free(td->vecs); free(td->meta); } #ifdef HAS_VSOCK int connect_to_vsock(uint32_t port, uint32_t cid, bool to_host, int *socket_fd) { wp_debug("Connecting to vsock on port %d, cid %d, send to host %d", port, cid, to_host); int chanfd = socket(AF_VSOCK, SOCK_STREAM, 0); if (chanfd == -1) { wp_error("Error creating socket: %s", strerror(errno)); return -1; } struct sockaddr_vm addr; memset(&addr, 0, sizeof(struct
sockaddr_vm)); addr.svm_family = AF_VSOCK; addr.svm_port = port; addr.svm_cid = cid; if (to_host) { addr.svm_flags = VMADDR_FLAG_TO_HOST; } if ((connect(chanfd, (struct sockaddr *)&addr, sizeof(struct sockaddr_vm))) == -1) { wp_error("Error connecting to vsock at port %d: %s", port, strerror(errno)); checked_close(chanfd); return -1; } *socket_fd = chanfd; return 0; } int listen_on_vsock(uint32_t port, int nmaxclients, int *socket_fd_out) { wp_debug("Listening on vsock port %d", port); int sock = socket(AF_VSOCK, SOCK_STREAM, 0); if (sock == -1) { wp_error("Error creating socket: %s", strerror(errno)); return -1; } if (set_nonblocking(sock) == -1) { wp_error("Error making socket nonblocking: %s", strerror(errno)); checked_close(sock); return -1; } struct sockaddr_vm addr; memset(&addr, 0, sizeof(struct sockaddr_vm)); addr.svm_family = AF_VSOCK; addr.svm_port = port; addr.svm_cid = VMADDR_CID_ANY; if (bind(sock, (struct sockaddr *)&addr, sizeof(struct sockaddr_vm)) == -1) { wp_error("Error binding vsock at cid %d port %d: %s", addr.svm_cid, port, strerror(errno)); checked_close(sock); return -1; } if (listen(sock, nmaxclients) == -1) { wp_error("Error listening to socket at cid %d port %d: %s", addr.svm_cid, port, strerror(errno)); checked_close(sock); return -1; } *socket_fd_out = sock; return 0; } #endif waypipe-v0.10.5/waypipe-c/util.h000066400000000000000000000471451507071711100165000ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #ifndef WAYPIPE_UTIL_H #define WAYPIPE_UTIL_H #include #include #include #include #include #include #include #include "config-waypipe.h" #ifdef HAS_USDT #include #else #define DTRACE_PROBE(provider, probe) (void)0 #define DTRACE_PROBE1(provider, probe, parm1) (void)0 #define DTRACE_PROBE2(provider, probe, parm1, parm2) (void)0 #define DTRACE_PROBE3(provider, probe, parm1, parm2, parm3) (void)0 #endif // On SIGINT, this is set to true. The main program should then cleanup ASAP extern bool shutdown_flag; extern uint64_t inherited_fds[4]; void handle_sigint(int sig); /** Basic mathematical operations. */ // use macros? static inline int max(int a, int b) { return a > b ? a : b; } static inline int min(int a, int b) { return a < b ? a : b; } static inline uint64_t maxu(uint64_t a, uint64_t b) { return a > b ? a : b; } static inline uint64_t minu(uint64_t a, uint64_t b) { return a < b ? 
a : b; } static inline int clamp(int v, int lower, int upper) { return max(min(v, upper), lower); } static inline int align(int v, int m) { return m * ((v + m - 1) / m); } static inline size_t alignz(size_t v, size_t m) { return m * ((v + m - 1) / m); } /* only valid for nonnegative v and positive u */ static inline int floordiv(int v, int u) { return v / u; } static inline int ceildiv(int v, int u) { return (v + u - 1) / u; } /* valid as long as nparts < 2**15, (hi - lo) < 2**31 */ static inline int split_interval(int lo, int hi, int nparts, int index) { return lo + index * ((hi - lo) / nparts) + (index * ((hi - lo) % nparts)) / nparts; } /** Parse a base-10 integer, forbidding leading whitespace, + sign, decimal * separators, and locale dependent stuff */ int parse_uint32(const char *str, uint32_t *val); /* Multiple string concatenation; returns number of bytes written and * ensures null termination. Is async-signal-safe, unlike sprintf. * Last argument must be NULL. If there is not enough space, returns 0. */ size_t multi_strcat(char *dest, size_t dest_space, ...); /** Is the string a well-formed UTF-8 code point sequence, per Unicode 15.0? */ bool is_utf8(const char *str); /** Make the file underlying this file descriptor nonblocking. * Silently return -1 on failure. */ int set_nonblocking(int fd); /** Set the close-on-exec flag for the file descriptor. * Silently return -1 on failure. */ int set_cloexec(int fd); /* socket path lengths being overly constrained, it is perhaps best to enforce * this constraint as early as possible by using this type */ struct sockaddr_un; struct socket_path { const char *folder; const struct sockaddr_un *filename; }; /** Create a nonblocking AF_UNIX/SOCK_STREAM socket at folder/filename, * and listen with nmaxclients. * * Prints its own error messages; returns -1 on failure. * * If successful, sets the value of folder_fd to the folder, and socket_fd * to the created socket. * * After creating the socket, will fchdir back to cwd_fd. */ int setup_nb_socket(int cwd_fd, struct socket_path socket_path, int nmaxclients, int *folder_fd, int *socket_fd); /** Opens folder, and connects to a (relative) socket in that * folder given by filename. Abstract sockets? * * After opening folder, will fchdir back to cwd_fd. * * If successful, sets the value of folder_fd to the folder, and socket_fd * to the created socket. (If folder_fd is NULL, then nothing is returned *there.) * * If successful, returns 0; otherwise returns -1. **/ int connect_to_socket(int cwd_fd, struct socket_path socket_path, int *folder_fd, int *socket_fd); int connect_to_socket_at_folder(int cwd_fd, int folder_fd, const struct sockaddr_un *socket_filename, int *socket_fd); /** Return true iff fd_a/fd_b correspond to the same filesystem file. * If fstat fails, files are assumed to be unequal. */ bool files_equiv(int fd_a, int fd_b); /** * Reads src_path, trims off the filename part, and places the filename * in rel_socket ; if the file name is too long, returns -1, otherwise * returns 0. Sets the SA_FAMILY of `rel_socket` to AF_UNIX. If src_path * contains no folder separations, then src_path is truncated down to the * empty string. */ int split_socket_path(char *src_path, struct sockaddr_un *rel_socket); /** * Unlink `filename` in `target_dir_fd`, and then fchdir back to `orig_dir_fd`. * The value of `target_dir_name` may be NULL, and is only used for error * messages.
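* A usage sketch (hypothetical shutdown path, assuming the cwd and socket-folder descriptors from setup_nb_socket are still at hand): checked_close(sock_fd); unlink_at_folder(cwd_fd, folder_fd, path.folder, path.filename->sun_path); checked_close(folder_fd); -- the listening socket file is removed from the folder it was bound in, and the process ends up back in its original working directory.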
*/ void unlink_at_folder(int orig_dir_fd, int target_dir_fd, const char *target_dir_name, const char *filename); /** Call close(fd), logging error when fd is invalid */ #define checked_close(fd) \ if (close(fd) == -1) { \ wp_error("close(%d) failed: %s", fd, strerror(errno)); \ } /** Set the list of initially available fds (typically stdin/stdout/stderr) */ void set_initial_fds(void); /** Verify that all file descriptors (except for the initial ones) are closed */ void check_unclosed_fds(void); /** Set the file descriptor to be close-on-exec; return -1 if unsuccessful */ int set_cloexec(int fd); /** Write the Wayland wire representation of a wl_display.error(error_code, * message) event into array `dest`. Return its length in bytes, or 0 if there * is not enough space. */ size_t print_display_error(char *dest, size_t dest_space, uint32_t error_code, const char *message); /** Write the Waypipe wire message of type WMSG_PROTOCOL containing a display * error as from print_display_error(..., 3, message) above. Return wire message * length in bytes, or 0 if there is not enough space. */ size_t print_wrapped_error(char *dest, size_t dest_space, const char *message); #define WAYPIPE_PROTOCOL_VERSION 0x1u /** If the byte order is wrong, the fixed set/unset bits are swapped */ #define CONN_FIXED_BIT (0x1u << 7) #define CONN_UNSET_BIT (0x1u << 31) /** The waypipe-server sends this if it supports reconnections, in which case * the main client process should remember which child to route reconnections * to. */ #define CONN_RECONNECTABLE_BIT (0x1u << 0) /** This is set when reconnecting to an established waypipe-client child process */ #define CONN_UPDATE_BIT (0x1u << 1) /** The waypipe-server sends this to indicate that it does not support DMABUFs, * so the waypipe-client side does not even need to check if it can support * them. If this is not set, the waypipe-client will support (or not) DMABUFs * depending on its flags and local capabilities. */ #define CONN_NO_DMABUF_SUPPORT (0x1u << 2) /** Indicate which compression format the waypipe-server can accept. For * backwards compatibility, if none of these flags is set, assume the server and * client match. */ #define CONN_COMPRESSION_MASK (0x7u << 8) #define CONN_NO_COMPRESSION (0x1u << 8) #define CONN_LZ4_COMPRESSION (0x2u << 8) #define CONN_ZSTD_COMPRESSION (0x3u << 8) /** Indicate which video coding format the waypipe-server can accept. For * backwards compatibility, if none of these flags is set, assume the server and * client match. */ #define CONN_VIDEO_MASK (0x7u << 11) #define CONN_NO_VIDEO (0x1u << 11) #define CONN_VP9_VIDEO (0x2u << 11) #define CONN_H264_VIDEO (0x3u << 11) #define CONN_AV1_VIDEO (0x4u << 11) struct connection_token { /** Indicate protocol version (top 16 bits), endianness, and * reconnection flags. The highest bit must stay clear. */ uint32_t header; uint32_t key[3]; /** Random bits used to identify the connection */ }; /** A type to help keep track of the connection handling processes */ struct conn_addr { struct connection_token token; pid_t pid; int linkfd; }; struct conn_map { struct conn_addr *data; int count, size; }; /** A useful helper routine for lists and stacks. `count` is the number of * objects that will be needed; `obj_size` their size; `space` the number * of objects that the malloc'd data can contain, and `data` the list buffer * itself. If count > space, resize the list and update space.
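* A minimal usage sketch (hypothetical caller growing a list of ints): int *data = NULL; int count = 0, space = 0; if (buf_ensure_size(count + 1, sizeof(int), &space, (void **)&data) == 0) { data[count++] = value; } Since the capacity doubles on each resize (starting from 1), n successive single-element appends cost only O(log n) reallocations.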
Returns -1 on * allocation failure */ int buf_ensure_size(int count, size_t obj_size, int *space, void **data); /** sendmsg a file descriptor over socket */ int send_one_fd(int socket, int fd); enum log_level { WP_DEBUG = 0, WP_ERROR = 1 }; typedef void (*log_handler_func_t)(const char *file, int line, enum log_level level, const char *fmt, ...); /** These log functions should be set by whichever translation units have a * 'main'. The first one is the debug handler, the second the error handler. Set them to * NULL to disable log messages. */ extern log_handler_func_t log_funcs[2]; #ifdef WAYPIPE_REL_SRC_DIR #define WAYPIPE__FILE__ \ ((const char *)__FILE__ + sizeof(WAYPIPE_REL_SRC_DIR) - 1) #else #define WAYPIPE__FILE__ __FILE__ #endif /** No trailing ;, user must supply. The first vararg must be the format string. */ #define wp_error(...) \ if (log_funcs[WP_ERROR]) \ (*log_funcs[WP_ERROR])(WAYPIPE__FILE__, __LINE__, WP_ERROR, __VA_ARGS__) #define wp_debug(...) \ if (log_funcs[WP_DEBUG]) \ (*log_funcs[WP_DEBUG])(WAYPIPE__FILE__, __LINE__, WP_DEBUG, __VA_ARGS__) /** Run waitpid in a loop until there are no more zombies to clean up. If the * target_pid was one of the completed processes, set status, return true. The * `options` flag will be passed to waitpid. If `map` is not NULL, remove * entries in the connection map which were closed. * * The value *target_pid is set to 0 once the corresponding process has died, * as a convenience to check only the first child process with pid == * *target_pid. */ bool wait_for_pid_and_clean(pid_t *target_pid, int *status, int options, struct conn_map *map); /** An unrecoverable error-- say, running out of file descriptors */ #define ERR_FATAL -1 /** A memory allocation failed; might be fatal, might not be */ #define ERR_NOMEM -2 /** For main loop, channel disconnection */ #define ERR_DISCONN -3 /** For main loop, program disconnection */ #define ERR_STOP -4 /** A helper type, since very often buffers and their sizes are passed together * (or returned together) as arguments */ struct bytebuf { size_t size; char *data; }; struct char_window { char *data; int size; int zone_start; int zone_end; }; struct int_window { int *data; int size; int zone_start; int zone_end; }; /** * @brief Wire format message types * * Each message indicates what the receiving side should do. */ enum wmsg_type { /** Send over a set of Wayland protocol messages. Preceding messages * must create or update file descriptors and inject file descriptors * to the queue. */ // TODO: use extra bits to make parsing more consistent between systems; // i.e., to ensure that # of file descriptors consumed is the same WMSG_PROTOCOL, // header uint32_t, then protocol messages /** Inject file descriptors into the receiver's buffer, for use by the * protocol parser. */ WMSG_INJECT_RIDS, // header uint32_t, then fds /** Create a new shared memory file of the given size. * Format: \ref wmsg_open_file */ WMSG_OPEN_FILE, /** Provide a new (larger) size for the file buffer. * Format: \ref wmsg_open_file */ WMSG_EXTEND_FILE, /** Create a new DMABUF with the given size and \ref dmabuf_slice_data. * Format: \ref wmsg_open_dmabuf */ WMSG_OPEN_DMABUF, /** Fill the region of the file with the following data. The data * should be compressed according to the global compression option. * Format: \ref wmsg_buffer_fill */ WMSG_BUFFER_FILL, /** Apply a diff to the file. The diff contents may be compressed.
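* Roughly: where WMSG_BUFFER_FILL re-sends an entire [start, end) byte range, a diff encodes only what changed relative to the receiver's existing copy of the buffer, plus `ntrailing` bytes copied verbatim to the tail, so small damage regions yield proportionally small messages.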
* Format: \ref wmsg_buffer_diff */ WMSG_BUFFER_DIFF, /** Create a new pipe, with the given remote R/W status */ WMSG_OPEN_IR_PIPE, // wmsg_basic WMSG_OPEN_IW_PIPE, // wmsg_basic WMSG_OPEN_RW_PIPE, // wmsg_basic /** Transfer data to the pipe */ WMSG_PIPE_TRANSFER, // wmsg_basic /** Shutdown the read end of the pipe that waypipe uses. */ WMSG_PIPE_SHUTDOWN_R, // wmsg_basic /** Shutdown the write end of the pipe that waypipe uses. */ WMSG_PIPE_SHUTDOWN_W, // wmsg_basic /** Create a DMABUF (with following data parameters) that will be used * to produce/consume video frames. Format: \ref wmsg_open_dmabuf. * Deprecated and may be disabled/removed in the future. */ WMSG_OPEN_DMAVID_SRC, WMSG_OPEN_DMAVID_DST, /** Send a packet of video data to the destination */ WMSG_SEND_DMAVID_PACKET, // wmsg_basic /** Acknowledge that a given number of messages has been received, so * that the sender of those messages no longer needs to store them * for replaying in case of reconnection. Format: \ref wmsg_ack */ WMSG_ACK_NBLOCKS, /** When restarting a connection, indicate the number of the message * which will be sent next. Format: \ref wmsg_restart */ WMSG_RESTART, // wmsg_restart /** When the remote program is closing. Format: only the header */ WMSG_CLOSE, /** Create a DMABUF (with following data parameters) that will be used * to produce/consume video frames. Format: \ref wmsg_open_dmavid */ WMSG_OPEN_DMAVID_SRC_V2, WMSG_OPEN_DMAVID_DST_V2, }; const char *wmsg_type_to_str(enum wmsg_type tp); bool wmsg_type_is_known(enum wmsg_type tp); struct wmsg_open_file { uint32_t size_and_type; int32_t remote_id; uint32_t file_size; }; static_assert(sizeof(struct wmsg_open_file) == 12, "size check"); struct wmsg_open_dmabuf { uint32_t size_and_type; int32_t remote_id; uint32_t file_size; /* following this, provide struct dmabuf_slice_data */ }; static_assert(sizeof(struct wmsg_open_dmabuf) == 12, "size check"); enum video_coding_fmt { VIDEO_H264 = 0, VIDEO_VP9 = 1, VIDEO_AV1 = 2, }; struct wmsg_open_dmavid { uint32_t size_and_type; int32_t remote_id; uint32_t file_size; uint32_t vid_flags; /* lowest 8 bits determine video type */ /* immediately followed by struct dmabuf_slice_data */ }; static_assert(sizeof(struct wmsg_open_dmavid) == 16, "size check"); struct wmsg_buffer_fill { uint32_t size_and_type; int32_t remote_id; uint32_t start; /**< [start, end), in bytes of zone to be written */ uint32_t end; /* following this, the possibly-compressed data */ }; static_assert(sizeof(struct wmsg_buffer_fill) == 16, "size check"); struct wmsg_buffer_diff { uint32_t size_and_type; int32_t remote_id; uint32_t diff_size; /**< in bytes, when uncompressed */ uint32_t ntrailing; /**< number of 'trailing' bytes, copied to tail */ /* following this, the possibly-compressed diff data */ }; static_assert(sizeof(struct wmsg_buffer_diff) == 16, "size check"); struct wmsg_basic { uint32_t size_and_type; int32_t remote_id; }; static_assert(sizeof(struct wmsg_basic) == 8, "size check"); struct wmsg_ack { uint32_t size_and_type; uint32_t messages_received; }; static_assert(sizeof(struct wmsg_ack) == 8, "size check"); struct wmsg_restart { uint32_t size_and_type; uint32_t last_ack_received; }; static_assert(sizeof(struct wmsg_restart) == 8, "size check"); /** size: the number of bytes in the message, /excluding/ trailing padding. 
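* The header packs the size into the top 27 bits and the wmsg_type into the low 5 bits; all current types fit, since the enum ends at WMSG_OPEN_DMAVID_DST_V2 = 20 < 32. Worked example: transfer_header(8, WMSG_ACK_NBLOCKS) gives (8 << 5) | 16 = 0x110, and the receiver recovers transfer_size(0x110) == 8 and transfer_type(0x110) == WMSG_ACK_NBLOCKS.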
*/ static inline uint32_t transfer_header(size_t size, enum wmsg_type type) { return ((uint32_t)size << 5) | (uint32_t)type; } static inline size_t transfer_size(uint32_t header) { return (size_t)header >> 5; } static inline enum wmsg_type transfer_type(uint32_t header) { return (enum wmsg_type)(header & ((1u << 5) - 1)); } /** Worker tasks write their resulting messages to this receive buffer, * and the main thread periodically checks the messages and appends the results * to the main transfer queue. */ struct thread_msg_recv_buf { // TODO: make this lock free, using the fact that valid iovecs have // nonzero fields struct iovec *data; /** [zone_start, zone_end] contains the set of entries which might * contain data */ int zone_start, zone_end, size; pthread_mutex_t lock; }; static inline int msgno_gt(uint32_t a, uint32_t b) { return !((a - b) & (1u << 31)); } struct transfer_block_meta { /** Indicating to which message the corresponding data block belongs. */ uint32_t msgno; /** If true, data is not heap allocated */ bool static_alloc; }; /** A queue of data blocks to be written to the channel. This should only * be used by the main thread; worker tasks should write to a \ref * thread_msg_recv_buf, from which the main thread should in turn collect data */ struct transfer_queue { /** Data to be written */ struct iovec *vecs; /** Vector with metadata for matching entries of `vecs` */ struct transfer_block_meta *meta; /** start: next block to write. end: just after last block to write; * size: number of iovec blocks */ int start, end, size; /** How much of the block at 'start' has been written */ size_t partial_write_amt; /** The most recent message number, to be incremented after almost all * message types */ uint32_t last_msgno; /** Messages added from a worker thread are introduced here, and should * be periodically copied onto the main queue */ struct thread_msg_recv_buf async_recv_queue; }; /** Ensure the queue has space for 'count' elements */ int transfer_ensure_size(struct transfer_queue *transfers, int count); /** Add transfer message to the queue, expanding the queue as necessary. * This increments the last_msgno, and thus should not be used * for WMSG_ACK_NBLOCKS messages. */ int transfer_add(struct transfer_queue *transfers, size_t size, void *data); /** Destroy the transfer queue, deallocating all attached buffers */ void cleanup_transfer_queue(struct transfer_queue *transfers); /** Move any asynchronously loaded messages to the queue */ int transfer_load_async(struct transfer_queue *w); /** Add a message to the async queue */ void transfer_async_add(struct thread_msg_recv_buf *q, void *data, size_t sz); /* Functions that are unusually platform specific */ int create_anon_file(void); int get_hardware_thread_count(void); int get_iov_max(void); /** For large allocations only; functions providing aligned-and-zeroed * allocations. They return NULL on allocation failure.*/ void *zeroed_aligned_alloc(size_t bytes, size_t alignment, void **handle); void *zeroed_aligned_realloc(size_t old_size_bytes, size_t new_size_bytes, size_t alignment, void *data, void **handle); void zeroed_aligned_free(void *data, void **handle); /** Returns a file descriptor for the folder that can be fchdir'd to, or * -1 on failure, setting errno. If `name` is the empty string, opens the * current directory.
*/ int open_folder(const char *name); #ifdef HAS_VSOCK int connect_to_vsock(uint32_t port, uint32_t cid, bool to_host, int *socket_fd); int listen_on_vsock(uint32_t port, int nmaxclients, int *socket_fd_out); #endif #endif // WAYPIPE_UTIL_H waypipe-v0.10.5/waypipe-c/video.c000066400000000000000000001136251507071711100166210ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "shadow.h" #if !defined(HAS_VIDEO) || !defined(HAS_DMABUF) void setup_video_logging(void) {} bool video_supports_dmabuf_format(uint32_t format, uint64_t modifier) { (void)format; (void)modifier; return false; } bool video_supports_shm_format(uint32_t format) { (void)format; return false; } bool video_supports_coding_format(enum video_coding_fmt fmt) { (void)fmt; return false; } void cleanup_hwcontext(struct render_data *rd) { (void)rd; } void destroy_video_data(struct shadow_fd *sfd) { (void)sfd; } int setup_video_encode( struct shadow_fd *sfd, struct render_data *rd, int nthreads) { (void)sfd; (void)rd; (void)nthreads; return -1; } int setup_video_decode(struct shadow_fd *sfd, struct render_data *rd) { (void)sfd; (void)rd; return -1; } void collect_video_from_mirror( struct shadow_fd *sfd, struct transfer_queue *transfers) { (void)sfd; (void)transfers; } void apply_video_packet(struct shadow_fd *sfd, struct render_data *rd, const struct bytebuf *data) { (void)rd; (void)sfd; (void)data; } #else /* HAS_VIDEO */ #include #include #include #include #include #include #include #include #include #ifdef HAS_VAAPI #include #include #include #endif /* these are equivalent to the GBM formats */ #include #define VIDEO_H264_HW_ENCODER "h264_vaapi" #define VIDEO_H264_SW_ENCODER "libx264" #define VIDEO_H264_DECODER "h264" #define VIDEO_VP9_HW_ENCODER "vp9_vaapi" #define VIDEO_VP9_SW_ENCODER "libvpx-vp9" #define VIDEO_VP9_DECODER "vp9" /* librav1e currently is not sufficient as its low-latency mode doesn't * appear to entirely turn off lookahead, and a few frames of latency * are unavoidable; this may be fixed in the future. 
* * libsvtav1 -- might work, if suitable controls for zero latency can be found * * libaom-av1 -- works, but may be slower than the other options */ // #define VIDEO_AV1_SW_ENCODER "libsvtav1" #define VIDEO_AV1_SW_ENCODER "libaom-av1" #define VIDEO_AV1_DECODER "libdav1d" static enum AVPixelFormat drm_to_av(uint32_t format) { /* The avpixel formats are specified with reversed endianness relative * to DRM formats */ switch (format) { case 0: return AV_PIX_FMT_BGR0; case DRM_FORMAT_C8: /* indexed */ return AV_PIX_FMT_NONE; case DRM_FORMAT_R8: return AV_PIX_FMT_GRAY8; case DRM_FORMAT_RGB565: return AV_PIX_FMT_RGB565LE; /* there really isn't a matching format, because no fast video * codec supports alpha. Expect unusual error patterns */ case DRM_FORMAT_GR88: return AV_PIX_FMT_YUYV422; case DRM_FORMAT_RGB888: return AV_PIX_FMT_BGR24; case DRM_FORMAT_BGR888: return AV_PIX_FMT_RGB24; case DRM_FORMAT_XRGB8888: return AV_PIX_FMT_BGR0; case DRM_FORMAT_XBGR8888: return AV_PIX_FMT_RGB0; case DRM_FORMAT_RGBX8888: return AV_PIX_FMT_0BGR; case DRM_FORMAT_BGRX8888: return AV_PIX_FMT_0RGB; #if LIBAVUTIL_VERSION_INT >= AV_VERSION_INT(57, 7, 100) /* While X2RGB10LE was available earlier than X2BGR10LE, conversions to * X2RGB10LE were broken until just before X2BGR10LE was added */ case DRM_FORMAT_XRGB2101010: return AV_PIX_FMT_X2RGB10LE; case DRM_FORMAT_XBGR2101010: return AV_PIX_FMT_X2BGR10LE; #endif case DRM_FORMAT_NV12: return AV_PIX_FMT_NV12; case DRM_FORMAT_NV21: return AV_PIX_FMT_NV21; case DRM_FORMAT_YVU410: case DRM_FORMAT_YUV410: return AV_PIX_FMT_YUV410P; case DRM_FORMAT_YVU411: case DRM_FORMAT_YUV411: return AV_PIX_FMT_YUV411P; case DRM_FORMAT_YVU420: case DRM_FORMAT_YUV420: return AV_PIX_FMT_YUV420P; case DRM_FORMAT_YVU422: case DRM_FORMAT_YUV422: return AV_PIX_FMT_YUV422P; case DRM_FORMAT_YVU444: case DRM_FORMAT_YUV444: return AV_PIX_FMT_YUV444P; case DRM_FORMAT_YUYV: return AV_PIX_FMT_NONE; case DRM_FORMAT_YVYU: return AV_PIX_FMT_UYVY422; case DRM_FORMAT_UYVY: return AV_PIX_FMT_YVYU422; case DRM_FORMAT_VYUY: return AV_PIX_FMT_YUYV422; default: return AV_PIX_FMT_NONE; } } static bool needs_vu_flip(uint32_t drm_format) { switch (drm_format) { case DRM_FORMAT_YVU410: case DRM_FORMAT_YVU411: case DRM_FORMAT_YVU420: case DRM_FORMAT_YVU422: case DRM_FORMAT_YVU444: return true; } return false; } bool video_supports_dmabuf_format(uint32_t format, uint64_t modifier) { /* cannot handle CCS modifiers at the moment due to extra 'plane' issues */ if (modifier == fourcc_mod_code(INTEL, 4) /* Y_TILED_CCS */ || modifier == fourcc_mod_code(INTEL, 5) /* Yf_TILED_CCS */ || modifier == fourcc_mod_code(INTEL, 6) /* Y_TILED_GEN12_RC_CCS */ || modifier == fourcc_mod_code(INTEL, 7) /* Y_TILED_GEN12_MC_CCS */ || modifier == fourcc_mod_code(INTEL, 8) /* Y_TILED_GEN12_RC_CCS_CC */) { return false; } return drm_to_av(format) != AV_PIX_FMT_NONE; } bool video_supports_shm_format(uint32_t format) { if (format == 0) { return true; } return video_supports_dmabuf_format(format, 0); } static const struct AVCodec *get_video_sw_encoder( enum video_coding_fmt fmt, bool print_error) { const struct AVCodec *codec = NULL; switch (fmt) { case VIDEO_H264: codec = avcodec_find_encoder_by_name(VIDEO_H264_SW_ENCODER); if (!codec && print_error) { wp_error("Failed to find encoder \"%s\"", VIDEO_H264_SW_ENCODER); } return codec; case VIDEO_VP9: codec = avcodec_find_encoder_by_name(VIDEO_VP9_SW_ENCODER); if (!codec && print_error) { wp_error("Failed to find encoder \"%s\"", VIDEO_VP9_SW_ENCODER); } return codec; case VIDEO_AV1: codec = 
avcodec_find_encoder_by_name(VIDEO_AV1_SW_ENCODER); if (!codec && print_error) { wp_error("Failed to find encoder \"%s\"", VIDEO_AV1_SW_ENCODER); } return codec; default: return NULL; } } static const struct AVCodec *get_video_hw_encoder( enum video_coding_fmt fmt, bool print_error) { const struct AVCodec *codec = NULL; switch (fmt) { case VIDEO_H264: codec = avcodec_find_encoder_by_name(VIDEO_H264_HW_ENCODER); if (!codec && print_error) { wp_error("Failed to find encoder \"%s\"", VIDEO_H264_HW_ENCODER); } return codec; case VIDEO_VP9: codec = avcodec_find_encoder_by_name(VIDEO_VP9_HW_ENCODER); if (!codec && print_error) { wp_error("Failed to find encoder \"%s\"", VIDEO_VP9_HW_ENCODER); } return codec; case VIDEO_AV1: return NULL; default: return NULL; } } static const struct AVCodec *get_video_decoder( enum video_coding_fmt fmt, bool print_error) { const struct AVCodec *codec = NULL; switch (fmt) { case VIDEO_H264: codec = avcodec_find_decoder_by_name(VIDEO_H264_DECODER); if (!codec && print_error) { wp_error("Failed to find decoder \"%s\"", VIDEO_H264_DECODER); } return codec; case VIDEO_VP9: codec = avcodec_find_decoder_by_name(VIDEO_VP9_DECODER); if (!codec && print_error) { wp_error("Failed to find decoder \"%s\"", VIDEO_VP9_DECODER); } return codec; case VIDEO_AV1: codec = avcodec_find_decoder_by_name(VIDEO_AV1_DECODER); if (!codec && print_error) { wp_error("Failed to find decoder \"%s\"", VIDEO_AV1_DECODER); } return codec; default: return NULL; } } bool video_supports_coding_format(enum video_coding_fmt fmt) { return get_video_sw_encoder(fmt, false) && get_video_decoder(fmt, false); } static void video_log_callback( void *aux, int level, const char *fmt, va_list args) { (void)aux; enum log_level wp_level = (level <= AV_LOG_WARNING) ? WP_ERROR : WP_DEBUG; log_handler_func_t fn = log_funcs[wp_level]; if (!fn) { return; } char buf[1024]; int len = vsnprintf(buf, 1023, fmt, args); while (buf[len - 1] == '\n' && len > 1) { buf[len - 1] = 0; len--; } (*fn)("ffmpeg", 0, wp_level, "%s", buf); } void setup_video_logging(void) { if (log_funcs[WP_DEBUG]) { av_log_set_level(AV_LOG_INFO); } else { av_log_set_level(AV_LOG_WARNING); } av_log_set_callback(video_log_callback); } #ifdef HAS_VAAPI static uint32_t drm_to_va_fourcc(uint32_t drm_fourcc) { switch (drm_fourcc) { /* At the moment, Intel/AMD VAAPI implementations only support * various YUY configurations and RGB32. (No other RGB variants). 
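* Any format not listed below maps to 0, which setup_vaapi_pipeline() treats as an error, so the import is abandoned early instead of handing the driver a surface layout it cannot handle.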
* See also libavutil / hwcontext_vaapi.c / vaapi_drm_format_map[] */ case DRM_FORMAT_XRGB8888: return VA_FOURCC_BGRX; case DRM_FORMAT_XBGR8888: return VA_FOURCC_RGBX; case DRM_FORMAT_RGBX8888: return VA_FOURCC_XBGR; case DRM_FORMAT_BGRX8888: return VA_FOURCC_XRGB; case DRM_FORMAT_NV12: return VA_FOURCC_NV12; } return 0; } static uint32_t va_fourcc_to_rt(uint32_t va_fourcc) { switch (va_fourcc) { case VA_FOURCC_BGRX: case VA_FOURCC_RGBX: return VA_RT_FORMAT_RGB32; case VA_FOURCC_NV12: return VA_RT_FORMAT_YUV420; } return 0; } static int setup_vaapi_pipeline(struct shadow_fd *sfd, struct render_data *rd, uint32_t width, uint32_t height) { VADisplay vadisp = rd->av_vadisplay; uintptr_t buffer_val = (uintptr_t)sfd->fd_local; uint32_t va_fourcc = drm_to_va_fourcc(sfd->dmabuf_info.format); if (va_fourcc == 0) { wp_error("Could not convert DRM format %x to VA fourcc", sfd->dmabuf_info.format); return -1; } uint32_t rt_format = va_fourcc_to_rt(va_fourcc); VASurfaceAttribExternalBuffers buffer_desc; buffer_desc.num_buffers = 1; buffer_desc.buffers = &buffer_val; buffer_desc.pixel_format = va_fourcc; buffer_desc.flags = 0; buffer_desc.width = width; buffer_desc.height = height; buffer_desc.data_size = (uint32_t)sfd->buffer_size; buffer_desc.num_planes = (uint32_t)sfd->dmabuf_info.num_planes; for (int i = 0; i < (int)sfd->dmabuf_info.num_planes; i++) { buffer_desc.offsets[i] = sfd->dmabuf_info.offsets[i]; buffer_desc.pitches[i] = sfd->dmabuf_info.strides[i]; } VASurfaceAttrib attribs[3]; attribs[0].type = VASurfaceAttribPixelFormat; attribs[0].flags = VA_SURFACE_ATTRIB_SETTABLE; attribs[0].value.type = VAGenericValueTypeInteger; attribs[0].value.value.i = 0; attribs[1].type = VASurfaceAttribMemoryType; attribs[1].flags = VA_SURFACE_ATTRIB_SETTABLE; attribs[1].value.type = VAGenericValueTypeInteger; attribs[1].value.value.i = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME; attribs[2].type = VASurfaceAttribExternalBufferDescriptor; attribs[2].flags = VA_SURFACE_ATTRIB_SETTABLE; attribs[2].value.type = VAGenericValueTypePointer; attribs[2].value.value.p = &buffer_desc; sfd->video_va_surface = 0; sfd->video_va_context = 0; sfd->video_va_pipeline = 0; VAStatus stat = vaCreateSurfaces(vadisp, rt_format, buffer_desc.width, buffer_desc.height, &sfd->video_va_surface, 1, attribs, 3); if (stat != VA_STATUS_SUCCESS) { wp_error("Create surface failed: %s", vaErrorStr(stat)); sfd->video_va_surface = 0; return -1; } stat = vaCreateContext(vadisp, rd->av_copy_config, (int)buffer_desc.width, (int)buffer_desc.height, 0, &sfd->video_va_surface, 1, &sfd->video_va_context); if (stat != VA_STATUS_SUCCESS) { wp_error("Create context failed %s", vaErrorStr(stat)); vaDestroySurfaces(vadisp, &sfd->video_va_surface, 1); sfd->video_va_surface = 0; sfd->video_va_context = 0; return -1; } stat = vaCreateBuffer(vadisp, sfd->video_va_context, VAProcPipelineParameterBufferType, sizeof(VAProcPipelineParameterBuffer), 1, NULL, &sfd->video_va_pipeline); if (stat != VA_STATUS_SUCCESS) { wp_error("Failed to create pipeline buffer: %s", vaErrorStr(stat)); vaDestroySurfaces(vadisp, &sfd->video_va_surface, 1); vaDestroyContext(vadisp, sfd->video_va_context); sfd->video_va_surface = 0; sfd->video_va_context = 0; sfd->video_va_pipeline = 0; return -1; } return 0; } static void cleanup_vaapi_pipeline(struct shadow_fd *sfd) { if (!sfd->video_va_surface && !sfd->video_va_context && !sfd->video_va_pipeline) { return; } AVHWDeviceContext *vwdc = (AVHWDeviceContext *) sfd->video_context->hw_device_ctx->data; if (vwdc->type != AV_HWDEVICE_TYPE_VAAPI) { 
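		/* Only a VAAPI hardware device context owns the VA
		 * surface/context/pipeline handles below; for any other
		 * hwdevice type they were never created. */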
return; } AVVAAPIDeviceContext *vdctx = (AVVAAPIDeviceContext *)vwdc->hwctx; VADisplay vadisp = vdctx->display; if (sfd->video_va_surface) { vaDestroySurfaces(vadisp, &sfd->video_va_surface, 1); sfd->video_va_surface = 0; } if (sfd->video_va_context) { vaDestroyContext(vadisp, sfd->video_va_context); sfd->video_va_context = 0; } if (sfd->video_va_pipeline) { vaDestroyBuffer(vadisp, sfd->video_va_pipeline); sfd->video_va_pipeline = 0; } } static void run_vaapi_conversion(struct shadow_fd *sfd, struct render_data *rd, struct AVFrame *va_frame) { VADisplay vadisp = rd->av_vadisplay; if (va_frame->format != AV_PIX_FMT_VAAPI) { wp_error("Non-vaapi pixel format: %s", av_get_pix_fmt_name(va_frame->format)); } VASurfaceID src_surf = (VASurfaceID)(ptrdiff_t)va_frame->data[3]; int stat = vaBeginPicture( vadisp, sfd->video_va_context, sfd->video_va_surface); if (stat != VA_STATUS_SUCCESS) { wp_error("Begin picture config failed: %s", vaErrorStr(stat)); } VAProcPipelineParameterBuffer *pipeline_param; stat = vaMapBuffer(vadisp, sfd->video_va_pipeline, (void **)&pipeline_param); if (stat != VA_STATUS_SUCCESS) { wp_error("Failed to map pipeline buffer: %s", vaErrorStr(stat)); } pipeline_param->surface = src_surf; pipeline_param->surface_region = NULL; pipeline_param->output_region = NULL; pipeline_param->output_background_color = 0; pipeline_param->filter_flags = VA_FILTER_SCALING_FAST; pipeline_param->filters = NULL; pipeline_param->num_filters = 0; stat = vaUnmapBuffer(vadisp, sfd->video_va_pipeline); if (stat != VA_STATUS_SUCCESS) { wp_error("Failed to unmap pipeline buffer: %s", vaErrorStr(stat)); } stat = vaRenderPicture(vadisp, sfd->video_va_context, &sfd->video_va_pipeline, 1); if (stat != VA_STATUS_SUCCESS) { wp_error("Failed to render picture: %s", vaErrorStr(stat)); } stat = vaEndPicture(vadisp, sfd->video_va_context); if (stat != VA_STATUS_SUCCESS) { wp_error("End picture failed: %s", vaErrorStr(stat)); } stat = vaSyncSurface(vadisp, sfd->video_va_surface); if (stat != VA_STATUS_SUCCESS) { wp_error("Sync surface failed: %s", vaErrorStr(stat)); } } #endif void destroy_video_data(struct shadow_fd *sfd) { if (sfd->video_context) { #ifdef HAS_VAAPI cleanup_vaapi_pipeline(sfd); #endif /* free contexts (which, theoretically, could have hooks into * frames/packets) first */ avcodec_free_context(&sfd->video_context); sws_freeContext(sfd->video_color_context); if (sfd->video_yuv_frame_data) { av_freep(sfd->video_yuv_frame_data); } if (sfd->video_local_frame_data) { av_freep(sfd->video_local_frame_data); } av_frame_free(&sfd->video_local_frame); av_frame_free(&sfd->video_tmp_frame); av_frame_free(&sfd->video_yuv_frame); av_packet_free(&sfd->video_packet); } } static void copy_onto_video_mirror(const char *buffer, uint32_t map_stride, AVFrame *frame, const struct dmabuf_slice_data *info) { for (int i = 0; i < info->num_planes; i++) { int j = i; if (needs_vu_flip(info->format) && (i == 1 || i == 2)) { j = 3 - i; } for (size_t r = 0; r < info->height; r++) { uint8_t *dst = frame->data[j] + frame->linesize[j] * (int)r; const char *src = buffer + (size_t)info->offsets[i] + (size_t)map_stride * r; /* todo: handle multiplanar strides properly */ size_t common = (size_t)minu(map_stride, (uint64_t)frame->linesize[j]); memcpy(dst, src, common); } } } static void copy_from_video_mirror(char *buffer, uint32_t map_stride, const AVFrame *frame, const struct dmabuf_slice_data *info) { for (int i = 0; i < info->num_planes; i++) { int j = i; if (needs_vu_flip(info->format) && (i == 1 || i == 2)) { j = 3 - i; } for (size_t
r = 0; r < info->height; r++) { const uint8_t *src = frame->data[j] + frame->linesize[j] * (int)r; char *dst = buffer + (size_t)info->offsets[i] + (size_t)map_stride * r; /* todo: handle multiplanar strides properly */ size_t common = (size_t)minu(map_stride, (uint64_t)frame->linesize[j]); memcpy(dst, src, common); } } } static bool pad_hardware_size( int width, int height, int *new_width, int *new_height) { /* VAAPI drivers often impose additional alignment restrictions; for * example, requiring that width be 16-aligned, or that tiled buffers be * 128-aligned. See also intel-vaapi-driver, i965_drv_video.c, * i965_suface_external_memory() [sic] ; */ *new_width = align(width, 16); *new_height = align(height, 16); if (width % 16 != 0) { /* Something goes wrong with VAAPI/buffer state when the * width (or stride?) is not a multiple of 16, and GEM_MMAP * ioctls start failing */ return false; } return true; } static int init_hwcontext(struct render_data *rd) { if (rd->av_disabled) { return -1; } if (rd->av_hwdevice_ref != NULL) { return 0; } if (init_render_data(rd) == -1) { rd->av_disabled = true; return -1; } rd->av_vadisplay = 0; rd->av_copy_config = 0; rd->av_drmdevice_ref = NULL; // Q: what does this even do? rd->av_drmdevice_ref = av_hwdevice_ctx_alloc(AV_HWDEVICE_TYPE_DRM); if (!rd->av_drmdevice_ref) { wp_error("Failed to allocate AV DRM device context"); rd->av_disabled = true; return -1; } AVHWDeviceContext *hwdc = (AVHWDeviceContext *)rd->av_drmdevice_ref->data; AVDRMDeviceContext *dctx = hwdc->hwctx; dctx->fd = rd->drm_fd; if (av_hwdevice_ctx_init(rd->av_drmdevice_ref)) { wp_error("Failed to initialize AV DRM device context"); rd->av_disabled = true; return -1; } /* We create a derived context here, to ensure that the drm fd matches * that which was used to create the DMABUFs. Also, this ensures that * the VA implementation doesn't look for a connection via e.g. Wayland * or X11 */ if (av_hwdevice_ctx_create_derived(&rd->av_hwdevice_ref, AV_HWDEVICE_TYPE_VAAPI, rd->av_drmdevice_ref, 0) < 0) { wp_error("Failed to create VAAPI hardware device"); rd->av_disabled = true; return -1; } #ifdef HAS_VAAPI AVHWDeviceContext *vwdc = (AVHWDeviceContext *)rd->av_hwdevice_ref->data; AVVAAPIDeviceContext *vdctx = (AVVAAPIDeviceContext *)vwdc->hwctx; if (!vdctx) { wp_error("No vaapi device context"); rd->av_disabled = true; return -1; } rd->av_vadisplay = vdctx->display; int stat = vaCreateConfig(rd->av_vadisplay, VAProfileNone, VAEntrypointVideoProc, NULL, 0, &rd->av_copy_config); if (stat != VA_STATUS_SUCCESS) { wp_error("Create config failed: %s", vaErrorStr(stat)); rd->av_disabled = true; return -1; } #endif return 0; } void cleanup_hwcontext(struct render_data *rd) { rd->av_disabled = true; #if HAS_VAAPI if (rd->av_vadisplay && rd->av_copy_config) { vaDestroyConfig(rd->av_vadisplay, rd->av_copy_config); } #endif if (rd->av_hwdevice_ref) { av_buffer_unref(&rd->av_hwdevice_ref); } if (rd->av_drmdevice_ref) { av_buffer_unref(&rd->av_drmdevice_ref); } } static void configure_low_latency_enc_context(struct AVCodecContext *ctx, bool sw, enum video_coding_fmt fmt, int bpf, int nthreads) { // "time" is only meaningful in terms of the frames provided int nom_fps = 25; ctx->time_base = (AVRational){1, nom_fps}; ctx->framerate = (AVRational){nom_fps, 1}; /* B-frames are directly tied to latency, since each one * is predicted using its preceding and following * frames. The gop size is chosen by the driver. */ ctx->gop_size = -1; ctx->max_b_frames = 0; // Q: how to get this to zero? 
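	/* Worked example (an illustration, not a tuning guide): with the
	 * default software bit budget of 120000 bits per frame (see the
	 * video_bpf default in waypipe.c) and the nominal 25 fps assumed
	 * here, the bit_rate set below is 25 * 120000 = 3000000, i.e.
	 * about 3 Mbit/s of encoded video. */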
// low latency ctx->delay = 0; if (sw) { ctx->bit_rate = bpf * nom_fps; if (fmt == VIDEO_H264) { if (av_opt_set(ctx->priv_data, "preset", "ultrafast", 0) != 0) { wp_error("Failed to set x264 encode ultrafast preset"); } if (av_opt_set(ctx->priv_data, "tune", "zerolatency", 0) != 0) { wp_error("Failed to set x264 encode zerolatency"); } } else if (fmt == VIDEO_VP9) { if (av_opt_set(ctx->priv_data, "lag-in-frames", "0", 0) != 0) { wp_error("Failed to set vp9 encode lag"); } if (av_opt_set(ctx->priv_data, "quality", "realtime", 0) != 0) { wp_error("Failed to set vp9 quality"); } if (av_opt_set(ctx->priv_data, "speed", "8", 0) != 0) { wp_error("Failed to set vp9 speed"); } } else if (fmt == VIDEO_AV1) { // AOM-AV1 if (av_opt_set(ctx->priv_data, "usage", "realtime", 0) != 0) { wp_error("Failed to set av1 usage"); } if (av_opt_set(ctx->priv_data, "lag-in-frames", "0", 0) != 0) { wp_error("Failed to set av1 lag"); } if (av_opt_set(ctx->priv_data, "cpu-used", "8", 0) != 0) { wp_error("Failed to set av1 speed"); } // Use multi-threaded encoding ctx->thread_count = nthreads; } } else { ctx->bit_rate = bpf * nom_fps; if (fmt == VIDEO_H264) { /* with i965/gen8, hardware encoding is faster but has * significantly worse quality per bitrate than x264 */ if (av_opt_set(ctx->priv_data, "profile", "main", 0) != 0) { wp_error("Failed to set h264 encode main profile"); } } } } static int setup_hwvideo_encode( struct shadow_fd *sfd, struct render_data *rd, int nthreads) { /* NV12 is the preferred format for Intel VAAPI; see also * intel-vaapi-driver/src/i965_drv_video.c . Packed formats like * YUV420P typically don't work. */ const enum AVPixelFormat videofmt = AV_PIX_FMT_NV12; const struct AVCodec *codec = get_video_hw_encoder(sfd->video_fmt, true); if (!codec) { return -1; } struct AVCodecContext *ctx = avcodec_alloc_context3(codec); configure_low_latency_enc_context( ctx, false, sfd->video_fmt, rd->av_bpf, nthreads); if (!pad_hardware_size((int)sfd->dmabuf_info.width, (int)sfd->dmabuf_info.height, &ctx->width, &ctx->height)) { wp_error("Video dimensions (WxH = %dx%d) not alignable to use hardware video encoding", sfd->dmabuf_info.width, sfd->dmabuf_info.height); goto fail_alignment; } AVHWFramesConstraints *constraints = av_hwdevice_get_hwframe_constraints( rd->av_hwdevice_ref, NULL); if (!constraints) { wp_error("Failed to get hardware frame constraints"); goto fail_hwframe_constraints; } enum AVPixelFormat hw_format = constraints->valid_hw_formats[0]; av_hwframe_constraints_free(&constraints); AVBufferRef *frame_ref = av_hwframe_ctx_alloc(rd->av_hwdevice_ref); if (!frame_ref) { wp_error("Failed to allocate frame reference"); goto fail_frameref; } AVHWFramesContext *fctx = (AVHWFramesContext *)frame_ref->data; /* hw fmt is e.g. 
"vaapi_vld" */ fctx->format = hw_format; fctx->sw_format = videofmt; fctx->width = ctx->width; fctx->height = ctx->height; int err = av_hwframe_ctx_init(frame_ref); if (err < 0) { wp_error("Failed to init hardware frame context, %s", av_err2str(err)); goto fail_hwframe_init; } ctx->pix_fmt = hw_format; ctx->hw_frames_ctx = av_buffer_ref(frame_ref); if (!ctx->hw_frames_ctx) { wp_error("Failed to reference hardware frame context for codec context"); goto fail_ctx_hwfctx; } int open_err = avcodec_open2(ctx, codec, NULL); if (open_err < 0) { wp_error("Failed to open codec: %s", av_err2str(open_err)); goto fail_codec_open; } /* Create a VAAPI frame linked to the sfd DMABUF */ struct AVDRMFrameDescriptor *framedesc = av_mallocz(sizeof(struct AVDRMFrameDescriptor)); if (!framedesc) { wp_error("Failed to allocate DRM frame descriptor"); goto fail_framedesc_alloc; } /* todo: multiplanar support */ framedesc->nb_objects = 1; framedesc->objects[0].format_modifier = sfd->dmabuf_info.modifier; framedesc->objects[0].fd = sfd->fd_local; framedesc->objects[0].size = sfd->buffer_size; framedesc->nb_layers = 1; framedesc->layers[0].nb_planes = sfd->dmabuf_info.num_planes; framedesc->layers[0].format = sfd->dmabuf_info.format; for (int i = 0; i < (int)sfd->dmabuf_info.num_planes; i++) { framedesc->layers[0].planes[i].object_index = 0; framedesc->layers[0].planes[i].offset = sfd->dmabuf_info.offsets[i]; framedesc->layers[0].planes[i].pitch = sfd->dmabuf_info.strides[i]; } AVFrame *local_frame = av_frame_alloc(); if (!local_frame) { wp_error("Failed to allocate local frame"); goto fail_frame_alloc; } local_frame->width = ctx->width; local_frame->height = ctx->height; local_frame->format = AV_PIX_FMT_DRM_PRIME; local_frame->buf[0] = av_buffer_create((uint8_t *)framedesc, sizeof(struct AVDRMFrameDescriptor), av_buffer_default_free, local_frame, 0); if (!local_frame->buf[0]) { wp_error("Failed to reference count frame DRM description"); goto fail_framedesc_ref; } local_frame->data[0] = (uint8_t *)framedesc; local_frame->hw_frames_ctx = av_buffer_ref(frame_ref); if (!local_frame->hw_frames_ctx) { wp_error("Failed to reference hardware frame context for local frame"); goto fail_frame_hwfctx; } AVFrame *yuv_frame = av_frame_alloc(); if (!yuv_frame) { wp_error("Failed to allocate yuv frame"); goto fail_yuv_frame; } yuv_frame->format = hw_format; yuv_frame->hw_frames_ctx = av_buffer_ref(frame_ref); if (!yuv_frame->hw_frames_ctx) { wp_error("Failed to reference hardware frame context for yuv frame"); goto fail_yuv_hwfctx; } int map_err = av_hwframe_map(yuv_frame, local_frame, 0); if (map_err) { wp_error("Failed to map (DRM) local frame to (hardware) yuv frame: %s", av_err2str(map_err)); goto fail_map; } struct AVPacket *pkt = av_packet_alloc(); if (!pkt) { wp_error("Failed to allocate av packet"); goto fail_pkt_alloc; } av_buffer_unref(&frame_ref); sfd->video_context = ctx; sfd->video_local_frame = local_frame; sfd->video_yuv_frame = yuv_frame; sfd->video_packet = pkt; return 0; fail_pkt_alloc: fail_map: fail_yuv_hwfctx: av_frame_free(&yuv_frame); fail_yuv_frame: fail_framedesc_ref: fail_frame_hwfctx: av_frame_free(&local_frame); fail_frame_alloc: fail_framedesc_alloc: fail_codec_open: fail_ctx_hwfctx: fail_hwframe_init: av_buffer_unref(&frame_ref); fail_frameref: fail_hwframe_constraints: fail_alignment: avcodec_free_context(&ctx); return -1; } int setup_video_encode( struct shadow_fd *sfd, struct render_data *rd, int nthreads) { if (sfd->video_context) { wp_error("Video context already set up for sfd RID=%d", 
sfd->remote_id); return -1; } bool has_hw = init_hwcontext(rd) == 0; /* Attempt hardware encoding, and if it doesn't succeed, fall back * to software encoding */ if (has_hw && setup_hwvideo_encode(sfd, rd, nthreads) == 0) { return 0; } enum AVPixelFormat avpixfmt = drm_to_av(sfd->dmabuf_info.format); if (avpixfmt == AV_PIX_FMT_NONE) { wp_error("Failed to find matching AvPixelFormat for %x", sfd->dmabuf_info.format); return -1; } enum AVPixelFormat videofmt = AV_PIX_FMT_YUV420P; if (sws_isSupportedInput(avpixfmt) == 0) { wp_error("frame format %s not supported", av_get_pix_fmt_name(avpixfmt)); return -1; } if (sws_isSupportedInput(videofmt) == 0) { wp_error("videofmt %s not supported", av_get_pix_fmt_name(videofmt)); return -1; } const struct AVCodec *codec = get_video_sw_encoder(sfd->video_fmt, true); if (!codec) { return -1; } struct AVCodecContext *ctx = avcodec_alloc_context3(codec); ctx->pix_fmt = videofmt; configure_low_latency_enc_context( ctx, true, sfd->video_fmt, rd->av_bpf, nthreads); /* Increase image sizes as needed to ensure codec can run */ ctx->width = (int)sfd->dmabuf_info.width; ctx->height = (int)sfd->dmabuf_info.height; int linesize_align[AV_NUM_DATA_POINTERS]; avcodec_align_dimensions2( ctx, &ctx->width, &ctx->height, linesize_align); struct AVPacket *pkt = av_packet_alloc(); if (avcodec_open2(ctx, codec, NULL) < 0) { wp_error("Failed to open codec"); return -1; } struct AVFrame *local_frame = av_frame_alloc(); if (!local_frame) { wp_error("Could not allocate video frame"); return -1; } local_frame->format = avpixfmt; /* adopt padded sizes */ local_frame->width = ctx->width; local_frame->height = ctx->height; if (av_image_alloc(local_frame->data, local_frame->linesize, local_frame->width, local_frame->height, avpixfmt, 64) < 0) { wp_error("Failed to allocate temp image"); return -1; } struct AVFrame *yuv_frame = av_frame_alloc(); yuv_frame->width = ctx->width; yuv_frame->height = ctx->height; yuv_frame->format = videofmt; if (av_image_alloc(yuv_frame->data, yuv_frame->linesize, yuv_frame->width, yuv_frame->height, videofmt, 64) < 0) { wp_error("Failed to allocate temp image"); return -1; } struct SwsContext *sws = sws_getContext(local_frame->width, local_frame->height, avpixfmt, yuv_frame->width, yuv_frame->height, videofmt, SWS_BILINEAR, NULL, NULL, NULL); if (!sws) { wp_error("Could not create software color conversion context"); return -1; } sfd->video_yuv_frame = yuv_frame; /* recorded pointer to be freed to match av_image_alloc */ sfd->video_yuv_frame_data = &yuv_frame->data[0]; sfd->video_local_frame = local_frame; sfd->video_local_frame_data = &local_frame->data[0]; sfd->video_packet = pkt; sfd->video_context = ctx; sfd->video_color_context = sws; return 0; } static enum AVPixelFormat get_decode_format( AVCodecContext *ctx, const enum AVPixelFormat *pix_fmts) { (void)ctx; for (const enum AVPixelFormat *p = pix_fmts; *p != AV_PIX_FMT_NONE; p++) { /* Prefer VAAPI output, if available. 
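 * Keeping decoded frames in GPU memory lets run_vaapi_conversion() blit
 * them directly into the target DMABUF, instead of downloading them to
 * system memory with av_hwframe_transfer_data().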
*/ if (*p == AV_PIX_FMT_VAAPI) { return AV_PIX_FMT_VAAPI; } } /* YUV420P is the typical software option, but this function is only * called when VAAPI is already available */ return AV_PIX_FMT_NONE; } int setup_video_decode(struct shadow_fd *sfd, struct render_data *rd) { bool has_hw = init_hwcontext(rd) == 0; enum AVPixelFormat avpixfmt = drm_to_av(sfd->dmabuf_info.format); if (avpixfmt == AV_PIX_FMT_NONE) { wp_error("Failed to find matching AvPixelFormat for %x", sfd->dmabuf_info.format); return -1; } enum AVPixelFormat videofmt = AV_PIX_FMT_YUV420P; if (sws_isSupportedInput(avpixfmt) == 0) { wp_error("source pixel format %x not supported", avpixfmt); return -1; } if (sws_isSupportedInput(videofmt) == 0) { wp_error("AV_PIX_FMT_YUV420P not supported"); return -1; } const struct AVCodec *codec = get_video_decoder(sfd->video_fmt, true); if (!codec) { return -1; } struct AVCodecContext *ctx = avcodec_alloc_context3(codec); if (!ctx) { wp_error("Failed to allocate context"); return -1; } ctx->delay = 0; if (has_hw) { /* If alignment permits, use hardware decoding */ has_hw = pad_hardware_size((int)sfd->dmabuf_info.width, (int)sfd->dmabuf_info.height, &ctx->width, &ctx->height); } if (has_hw) { ctx->hw_device_ctx = av_buffer_ref(rd->av_hwdevice_ref); if (!ctx->hw_device_ctx) { wp_error("Failed to reference hardware device context"); } ctx->get_format = get_decode_format; } else { ctx->pix_fmt = videofmt; /* set context dimensions, and allocate buffer to write into */ ctx->width = (int)sfd->dmabuf_info.width; ctx->height = (int)sfd->dmabuf_info.height; int linesize_align[AV_NUM_DATA_POINTERS]; avcodec_align_dimensions2( ctx, &ctx->width, &ctx->height, linesize_align); } if (avcodec_open2(ctx, codec, NULL) < 0) { wp_error("Failed to open codec"); return -1; } struct AVFrame *yuv_frame = av_frame_alloc(); if (!yuv_frame) { wp_error("Could not allocate yuv frame"); return -1; } struct AVPacket *pkt = av_packet_alloc(); if (!pkt) { wp_error("Could not allocate video packet"); return -1; } if (ctx->hw_device_ctx) { #ifdef HAS_VAAPI if (rd->av_vadisplay) { setup_vaapi_pipeline(sfd, rd, (uint32_t)ctx->width, (uint32_t)ctx->height); } #endif } sfd->video_yuv_frame = yuv_frame; sfd->video_packet = pkt; sfd->video_context = ctx; /* yuv_frame not allocated by us */ sfd->video_yuv_frame_data = NULL; /* will be allocated on frame receipt */ sfd->video_local_frame = NULL; sfd->video_color_context = NULL; return 0; } void collect_video_from_mirror( struct shadow_fd *sfd, struct transfer_queue *transfers) { if (sfd->video_color_context) { /* If using software encoding, need to convert to YUV */ void *handle = NULL; uint32_t map_stride = 0; void *data = map_dmabuf( sfd->dmabuf_bo, false, &handle, &map_stride); if (!data) { return; } copy_onto_video_mirror(data, map_stride, sfd->video_local_frame, &sfd->dmabuf_info); unmap_dmabuf(sfd->dmabuf_bo, handle); if (sws_scale(sfd->video_color_context, (const uint8_t *const *)sfd->video_local_frame->data, sfd->video_local_frame->linesize, 0, sfd->video_local_frame->height, sfd->video_yuv_frame->data, sfd->video_yuv_frame->linesize) < 0) { wp_error("Failed to perform color conversion"); } } sfd->video_yuv_frame->pts = sfd->video_frameno++; int sendstat = avcodec_send_frame( sfd->video_context, sfd->video_yuv_frame); if (sendstat < 0) { wp_error("Failed to send frame: %s", av_err2str(sendstat)); return; } // assume 1-1 frames to packets, at the moment int recvstat = avcodec_receive_packet( sfd->video_context, sfd->video_packet); if (recvstat == AVERROR(EINVAL)) { wp_error("Failed
to receive packet for RID=%d", sfd->remote_id); return; } else if (recvstat == AVERROR(EAGAIN)) { wp_error("Packet for RID=%d needs more input", sfd->remote_id); } if (recvstat == 0) { struct AVPacket *pkt = sfd->video_packet; size_t pktsz = (size_t)pkt->buf->size; size_t msgsz = sizeof(struct wmsg_basic) + pktsz; char *buf = malloc(alignz(msgsz, 4)); struct wmsg_basic *header = (struct wmsg_basic *)buf; header->size_and_type = transfer_header(msgsz, WMSG_SEND_DMAVID_PACKET); header->remote_id = sfd->remote_id; memcpy(buf + sizeof(struct wmsg_basic), pkt->buf->data, pktsz); memset(buf + msgsz, 0, alignz(msgsz, 4) - msgsz); transfer_add(transfers, alignz(msgsz, 4), buf); av_packet_unref(pkt); } } static int setup_color_conv(struct shadow_fd *sfd, struct AVFrame *cpu_frame) { struct AVCodecContext *ctx = sfd->video_context; enum AVPixelFormat avpixfmt = drm_to_av(sfd->dmabuf_info.format); struct AVFrame *local_frame = av_frame_alloc(); if (!local_frame) { wp_error("Could not allocate video frame"); return -1; } local_frame->format = avpixfmt; /* adopt padded sizes */ local_frame->width = ctx->width; local_frame->height = ctx->height; if (av_image_alloc(local_frame->data, local_frame->linesize, local_frame->width, local_frame->height, avpixfmt, 64) < 0) { wp_error("Failed to allocate local image"); av_frame_free(&local_frame); return -1; } struct SwsContext *sws = sws_getContext(cpu_frame->width, cpu_frame->height, cpu_frame->format, local_frame->width, local_frame->height, avpixfmt, SWS_BILINEAR, NULL, NULL, NULL); if (!sws) { wp_error("Could not create software color conversion context"); av_freep(&local_frame->data[0]); av_frame_free(&local_frame); return -1; } sfd->video_local_frame = local_frame; sfd->video_local_frame_data = &local_frame->data[0]; sfd->video_color_context = sws; return 0; } void apply_video_packet(struct shadow_fd *sfd, struct render_data *rd, const struct bytebuf *msg) { sfd->video_packet->data = (uint8_t *)msg->data; sfd->video_packet->size = (int)msg->size; int sendstat = avcodec_send_packet( sfd->video_context, sfd->video_packet); if (sendstat < 0) { wp_error("Failed to send packet: %s", av_err2str(sendstat)); } /* Receive all produced frames, ignoring all but the most recent */ while (true) { int recvstat = avcodec_receive_frame( sfd->video_context, sfd->video_yuv_frame); if (recvstat == 0) { struct AVFrame *cpu_frame = sfd->video_yuv_frame; #if HAS_VAAPI if (sfd->video_va_surface && sfd->video_yuv_frame->format == AV_PIX_FMT_VAAPI) { run_vaapi_conversion( sfd, rd, sfd->video_yuv_frame); continue; } #else (void)rd; #endif if (sfd->video_yuv_frame->format == AV_PIX_FMT_VAAPI) { if (!sfd->video_tmp_frame) { sfd->video_tmp_frame = av_frame_alloc(); if (!sfd->video_tmp_frame) { wp_error("Failed to allocate temporary frame"); } } int tferr = av_hwframe_transfer_data( sfd->video_tmp_frame, sfd->video_yuv_frame, 0); if (tferr < 0) { wp_error("Failed to transfer hwframe data: %s", av_err2str(tferr)); } cpu_frame = sfd->video_tmp_frame; } if (!cpu_frame) { return; } if (!sfd->video_color_context) { if (setup_color_conv(sfd, cpu_frame) == -1) { return; } } /* Handle frame immediately, since the next receive run * will clear it again */ if (sws_scale(sfd->video_color_context, (const uint8_t *const *) cpu_frame->data, cpu_frame->linesize, 0, cpu_frame->height, sfd->video_local_frame->data, sfd->video_local_frame->linesize) < 0) { wp_error("Failed to perform color conversion"); } if (!sfd->dmabuf_bo) { // ^ was not previously able to create buffer wp_error("DMABUF was not 
created"); return; } /* Copy data onto DMABUF */ uint32_t map_stride = 0; void *handle = NULL; void *data = map_dmabuf(sfd->dmabuf_bo, true, &handle, &map_stride); if (!data) { return; } copy_from_video_mirror(data, map_stride, sfd->video_local_frame, &sfd->dmabuf_info); unmap_dmabuf(sfd->dmabuf_bo, handle); } else { if (recvstat != AVERROR(EAGAIN)) { wp_error("Failed to receive frame due to error: %s", av_err2str(recvstat)); } break; } } } #endif /* HAS_VIDEO && HAS_DMABUF */ waypipe-v0.10.5/waypipe-c/waypipe.c000066400000000000000000001016351507071711100171670ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "main.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include enum waypipe_mode { MODE_FAIL = 0x0, MODE_SSH = 0x1, MODE_CLIENT = 0x2, MODE_SERVER = 0x4, MODE_RECON = 0x8, MODE_BENCH = 0x10 }; static bool log_to_tty = false; static enum waypipe_mode log_mode = MODE_FAIL; static bool log_anti_staircase = false; log_handler_func_t log_funcs[2] = {NULL, NULL}; /* Usage: Wrapped to 79 characters */ static const char usage_string[] = "Usage: waypipe [options] mode ...\n" "A proxy for Wayland protocol applications.\n" "Example: waypipe ssh user@server weston-terminal\n" "\n" "Modes:\n" " ssh [...] Wrap an ssh invocation to run waypipe on both ends of the\n" " connection, and automatically forward Wayland applications.\n" " server CMD Run remotely to invoke CMD and forward application data through\n" " a socket to a matching 'waypipe client' instance.\n" " client Run locally to create a Unix socket to which 'waypipe server'\n" " instances can connect.\n" " recon C T Reconnect a 'waypipe server' instance. 
Writes the new Unix\n" "                 socket path T to the control pipe C.\n" "  bench B        Given a connection bandwidth B in MB/sec, estimate the best\n" "                 compression level used to send data\n" "\n" "Options:\n" "  -c, --compress C  choose compression method: lz4[=#], zstd[=#], none\n" "  -d, --debug       print debug messages\n" "  -h, --help        display this help and exit\n" "  -n, --no-gpu      disable protocols which would use GPU resources\n" "  -o, --oneshot     only permit one connected application\n" "  -s, --socket S    set the socket path to either create or connect to:\n" "                      server default: /tmp/waypipe-server.sock\n" "                      client default: /tmp/waypipe-client.sock\n" "                      ssh: sets the prefix for the socket path\n" "                      vsock: [[s]CID:]port\n" "      --version     print waypipe version and exit\n" "      --allow-tiled allow gpu buffers (DMABUFs) with format modifiers\n" "      --control C   server,ssh: set control pipe to reconnect server\n" "      --display D   server,ssh: the Wayland display name or path\n" "      --drm-node R  set the local render node. default: /dev/dri/renderD128\n" "      --remote-node R  ssh: set the remote render node path\n" "      --remote-bin R  ssh: set the remote waypipe binary. default: waypipe\n" "      --login-shell server: if server CMD is empty, run a login shell\n" "      --threads T   set thread pool size, default=hardware threads/2\n" "      --title-prefix P  prepend P to all window titles\n" "      --unlink-socket   server: unlink the socket that waypipe connects to\n" "      --video[=V]   compress certain linear dmabufs only with a video codec\n" "                    V is a list of options: sw,hw,bpf=1.2e5,h264,vp9,av1\n" "      --vsock       use vsock instead of unix socket\n" "      --secctx S    client,ssh: enable Wayland security context protocol\n" "                    S is an app id to be attached to the security context\n" "\n"; static int usage(int retcode) { FILE *ostream = retcode == EXIT_SUCCESS ? stdout : stderr; fprintf(ostream, usage_string); return retcode; } static void log_handler(const char *file, int line, enum log_level level, const char *fmt, ...) { struct timespec ts; clock_gettime(CLOCK_REALTIME, &ts); int pid = getpid(); char mode; if (log_mode == MODE_SERVER) { mode = level == WP_DEBUG ? 's' : 'S'; } else { mode = level == WP_DEBUG ? 'c' : 'C'; } char msg[1024]; int nwri = 0; if (log_to_tty) { msg[nwri++] = '\x1b'; msg[nwri++] = '['; msg[nwri++] = '3'; /* blue for waypipe client, green for waypipe server, * (or unformatted for waypipe server if no pty is made */ msg[nwri++] = log_mode == MODE_SERVER ?
'2' : '4'; msg[nwri++] = 'm'; if (level == WP_ERROR) { /* bold errors */ msg[nwri++] = '\x1b'; msg[nwri++] = '['; msg[nwri++] = '1'; msg[nwri++] = 'm'; } } int sec = (int)(ts.tv_sec % 100); int usec = (int)(ts.tv_nsec / 1000); nwri += sprintf(msg + nwri, "%c%d:%3d.%06d [%s:%3d] ", mode, pid, sec, usec, file, line); va_list args; va_start(args, fmt); nwri += vsnprintf(msg + nwri, (size_t)(1000 - nwri), fmt, args); va_end(args); if (log_to_tty) { msg[nwri++] = '\x1b'; msg[nwri++] = '['; msg[nwri++] = '0'; msg[nwri++] = 'm'; /* to avoid 'staircase' rendering when ssh has the '-t' flag * and sets raw mode for the shared terminal output */ if (log_anti_staircase) { msg[nwri++] = '\r'; } } msg[nwri++] = '\n'; msg[nwri] = 0; // single short writes are atomic for pipes, at least (void)write(STDERR_FILENO, msg, (size_t)nwri); } static void handle_noop(int sig) { (void)sig; } /* Configure signal handling policies */ static int setup_sighandlers(void) { struct sigaction ia; // SIGINT: abort operations, and set a flag ia.sa_handler = handle_sigint; sigemptyset(&ia.sa_mask); ia.sa_flags = 0; struct sigaction ca; // SIGCHLD: restart operations, but EINTR on poll ca.sa_handler = handle_noop; sigemptyset(&ca.sa_mask); ca.sa_flags = SA_RESTART | SA_NOCLDSTOP; struct sigaction pa; pa.sa_handler = SIG_IGN; sigemptyset(&pa.sa_mask); pa.sa_flags = 0; if (sigaction(SIGINT, &ia, NULL) == -1) { wp_error("Failed to set signal action for SIGINT"); return -1; } if (sigaction(SIGCHLD, &ca, NULL) == -1) { wp_error("Failed to set signal action for SIGCHLD"); return -1; } if (sigaction(SIGPIPE, &pa, NULL) == -1) { wp_error("Failed to set signal action for SIGPIPE"); return -1; } return 0; } /* produces a random token with a low accidental collision probability */ static void fill_rand_token(char tok[static 8]) { struct timespec tp; clock_gettime(CLOCK_REALTIME, &tp); uint32_t seed = (uint32_t)(getpid() + tp.tv_sec + (tp.tv_nsec << 2)); srand(seed); for (int i = 0; i < 8; i++) { unsigned int r = ((unsigned int)rand()) % 62; if (r < 26) { tok[i] = (char)(r + 'a'); } else if (r < 52) { tok[i] = (char)(r - 26 + 'A'); } else { tok[i] = (char)(r - 52 + '0'); } } } /* Scan a suffix which is either empty or has the form =N, returning true * if it matches */ static bool parse_level_choice(const char *str, int *dest, int defval) { if (str[0] == '\0') { *dest = defval; return true; } if (str[0] != '=') { return false; } str++; int sign = 1; if (str[0] == '-') { sign = -1; str++; } uint32_t val; if (parse_uint32(str, &val) == -1 || (int)val < 0) { return false; } *dest = sign * (int)val; return true; } /* Identifies the index at which the `destination` occurs in an openssh command, * and also sets a boolean if pty allocation was requested by an ssh flag */ static int locate_openssh_cmd_hostname( int argc, char *const *argv, bool *allocates_pty) { /* Based on command line help for openssh 8.0 */ char fixletters[] = "46AaCfGgKkMNnqsTtVvXxYy"; char argletters[] = "BbcDEeFIiJLlmOopQRSWw"; int dstidx = 0; while (dstidx < argc) { if (argv[dstidx][0] == '-' && strchr(argletters, argv[dstidx][1]) != NULL && argv[dstidx][2] == 0) { dstidx += 2; continue; } if (argv[dstidx][0] == '-' && strchr(fixletters, argv[dstidx][1]) != NULL) { for (const char *c = &argv[dstidx][1]; *c; c++) { *allocates_pty |= (*c == 't'); *allocates_pty &= (*c != 'T'); } dstidx++; continue; } if (argv[dstidx][0] == '-' && argv[dstidx][1] == '-' && argv[dstidx][2] == 0) { dstidx++; break; } break; } if (dstidx >= argc || argv[dstidx][0] == '-') { return -1; } return 
dstidx; } /* Send the socket at 'recon_path' to the control socket at 'control_path'. * Because connections are made by address, the waypipe server root process * must be able to connect to the `recon path`. */ static int run_recon(const char *control_path, const char *recon_path) { size_t len = strlen(recon_path); if (len >= 108) { fprintf(stderr, "Reconnection socket path \"%s\" too long, %d>=%d\n", recon_path, (int)len, 108); return EXIT_FAILURE; } int cfd = open(control_path, O_WRONLY | O_NOCTTY); if (cfd == -1) { fprintf(stderr, "Failed to open control pipe at \"%s\"\n", control_path); return EXIT_FAILURE; } ssize_t written = write(cfd, recon_path, len + 1); close(cfd); if ((size_t)written != len + 1) { fprintf(stderr, "Failed to write to control pipe\n"); return EXIT_FAILURE; } return EXIT_SUCCESS; } #ifdef HAS_VIDEO static int parse_video_string(const char *str, struct main_config *config) { char tmp[128]; size_t l = strlen(str); if (l >= 127) { return -1; } memcpy(tmp, str, l + 1); char *part = strtok(tmp, ","); while (part) { if (!strcmp(part, "h264")) { config->video_fmt = VIDEO_H264; } else if (!strcmp(part, "vp9")) { config->video_fmt = VIDEO_VP9; } else if (!strcmp(part, "av1")) { config->video_fmt = VIDEO_AV1; } else if (!strcmp(part, "hw")) { config->prefer_hwvideo = true; } else if (!strcmp(part, "sw")) { config->prefer_hwvideo = false; } else if (!strncmp(part, "bpf=", 4)) { char *ep; double bpf = strtod(part + 4, &ep); if (*ep == 0 && bpf <= 1e9 && bpf >= 1.0) { config->video_bpf = (int)bpf; } else { return -1; } } else { return -1; } part = strtok(NULL, ","); } return 0; } #endif #ifdef HAS_VSOCK static int parse_vsock_addr(const char *str, struct main_config *config) { char tmp[128]; size_t l = strlen(str); if (l >= 127) { return -1; } memcpy(tmp, str, l + 1); char *port = strchr(tmp, ':'); if (port) { char *cid = tmp; port[0] = 0; port = port + 1; size_t cid_len = strlen(cid); if (cid_len > 0) { if (cid[0] == 's') { if (cid_len < 2) { return -1; } config->vsock_to_host = true; if (parse_uint32(cid + 1, &config->vsock_cid) == -1) { return -1; } } else { config->vsock_to_host = false; if (parse_uint32(cid, &config->vsock_cid) == -1) { return -1; } } } } else { port = tmp; } if (parse_uint32(port, &config->vsock_port) == -1) { return -1; } if (config->vsock_port == 0) { return -1; } return 0; } #endif static const char *feature_names[] = { "lz4", "zstd", "dmabuf", "video", "vaapi", }; static const bool feature_flags[] = { #ifdef HAS_LZ4 true, #else false, #endif #ifdef HAS_ZSTD true, #else false, #endif #ifdef HAS_DMABUF true, #else false, #endif #ifdef HAS_VIDEO true, #else false, #endif #ifdef HAS_VAAPI true, #else false, #endif }; #define ARG_VERSION 1000 #define ARG_DISPLAY 1001 #define ARG_DRMNODE 1002 #define ARG_ALLOW_TILED 1003 #define ARG_LOGIN_SHELL 1004 #define ARG_REMOTENODE 1005 #define ARG_THREADS 1006 #define ARG_UNLINK 1007 #define ARG_VIDEO 1008 #define ARG_HWVIDEO 1009 #define ARG_CONTROL 1010 #define ARG_WAYPIPE_BINARY 1011 #define ARG_BENCH_TEST_SIZE 1012 #define ARG_VSOCK 1013 #define ARG_TITLE_PREFIX 1014 #define ARG_SECCTX 1015 static const struct option options[] = { {"compress", required_argument, NULL, 'c'}, {"debug", no_argument, NULL, 'd'}, {"help", no_argument, NULL, 'h'}, {"no-gpu", no_argument, NULL, 'n'}, {"oneshot", no_argument, NULL, 'o'}, {"socket", required_argument, NULL, 's'}, {"version", no_argument, NULL, ARG_VERSION}, {"allow-tiled", no_argument, NULL, ARG_ALLOW_TILED}, {"unlink-socket", no_argument, NULL, ARG_UNLINK},
{"drm-node", required_argument, NULL, ARG_DRMNODE}, {"remote-node", required_argument, NULL, ARG_REMOTENODE}, {"remote-bin", required_argument, NULL, ARG_WAYPIPE_BINARY}, {"login-shell", no_argument, NULL, ARG_LOGIN_SHELL}, {"video", optional_argument, NULL, ARG_VIDEO}, {"hwvideo", no_argument, NULL, ARG_HWVIDEO}, {"threads", required_argument, NULL, ARG_THREADS}, {"display", required_argument, NULL, ARG_DISPLAY}, {"control", required_argument, NULL, ARG_CONTROL}, {"test-size", required_argument, NULL, ARG_BENCH_TEST_SIZE}, {"vsock", no_argument, NULL, ARG_VSOCK}, {"title-prefix", required_argument, NULL, ARG_TITLE_PREFIX}, {"secctx", required_argument, NULL, ARG_SECCTX}, {0, 0, NULL, 0}}; struct arg_permissions { int val; uint32_t mode_mask; }; #define ALL_MODES (uint32_t)-1 static const struct arg_permissions arg_permissions[] = { {'c', MODE_SSH | MODE_CLIENT | MODE_SERVER}, {'d', ALL_MODES}, {'h', MODE_FAIL}, {'n', MODE_SSH | MODE_CLIENT | MODE_SERVER}, {'o', MODE_SSH | MODE_CLIENT | MODE_SERVER}, {'s', MODE_SSH | MODE_CLIENT | MODE_SERVER}, {ARG_VERSION, MODE_FAIL}, {ARG_ALLOW_TILED, MODE_SSH | MODE_CLIENT | MODE_SERVER}, {ARG_UNLINK, MODE_SERVER}, {ARG_DRMNODE, MODE_SSH | MODE_CLIENT | MODE_SERVER}, {ARG_REMOTENODE, MODE_SSH}, {ARG_WAYPIPE_BINARY, MODE_SSH}, {ARG_LOGIN_SHELL, MODE_SERVER}, {ARG_VIDEO, MODE_SSH | MODE_CLIENT | MODE_SERVER}, {ARG_HWVIDEO, MODE_SSH | MODE_CLIENT | MODE_SERVER}, {ARG_THREADS, MODE_SSH | MODE_CLIENT | MODE_SERVER | MODE_BENCH}, {ARG_DISPLAY, MODE_SSH | MODE_SERVER}, {ARG_CONTROL, MODE_SSH | MODE_SERVER}, {ARG_BENCH_TEST_SIZE, MODE_BENCH}, {ARG_VSOCK, MODE_SSH | MODE_CLIENT | MODE_SERVER}, {ARG_TITLE_PREFIX, MODE_SSH | MODE_CLIENT | MODE_SERVER}, {ARG_SECCTX, MODE_SSH | MODE_CLIENT}}; /* envp is nonstandard, so use environ */ extern char **environ; #ifdef HAS_SECURITY_CONTEXT int create_security_context(const char *sock_path, const char *engine, const char *instance_id, const char *app_id); void close_security_context(void); #endif int main(int argc, char **argv) { bool help = false; bool version = false; bool fail = false; bool debug = false; bool oneshot = false; bool unlink_at_end = false; bool login_shell = false; char *remote_drm_node = NULL; char *comp_string = NULL; char *nthread_string = NULL; char *wayland_display = NULL; char *waypipe_binary = "waypipe"; char *control_path = NULL; char *socketpath = NULL; uint32_t bench_test_size = (1u << 22) + 13; struct main_config config = { .n_worker_threads = 0, .drm_node = NULL, #ifdef HAS_LZ4 .compression = COMP_LZ4, #else .compression = COMP_NONE, #endif .compression_level = 0, .no_gpu = false, .only_linear_dmabuf = true, .video_if_possible = false, .video_bpf = 0, .video_fmt = VIDEO_H264, .prefer_hwvideo = false, .vsock = false, .vsock_cid = 2, /* VMADDR_CID_HOST */ .vsock_to_host = false, /* VMADDR_FLAG_TO_HOST */ .vsock_port = 0, .title_prefix = NULL, .secctx_app_id = NULL, }; /* We do not parse any getopt arguments happening after the mode choice * string, so as not to interfere with them. 
*/ enum waypipe_mode mode = MODE_FAIL; int mode_argc = 0; while (mode_argc < argc) { if (!strcmp(argv[mode_argc], "ssh")) { mode = MODE_SSH; break; } if (!strcmp(argv[mode_argc], "client")) { mode = MODE_CLIENT; break; } if (!strcmp(argv[mode_argc], "server")) { mode = MODE_SERVER; break; } if (!strcmp(argv[mode_argc], "recon")) { mode = MODE_RECON; break; } if (!strcmp(argv[mode_argc], "bench")) { mode = MODE_BENCH; break; } mode_argc++; } while (true) { /* todo: set opterr to 0 and use custom error handler */ int opt = getopt_long( mode_argc, argv, "c:dhnos:", options, NULL); if (opt == -1) { break; } const struct arg_permissions *perms = NULL; for (size_t k = 0; k < sizeof(arg_permissions) / sizeof(arg_permissions[0]); k++) { if (arg_permissions[k].val == opt) { perms = &arg_permissions[k]; } } if (!perms) { fail = true; break; } if (!(mode & perms->mode_mask) && mode != MODE_FAIL) { fprintf(stderr, "Option %s is not allowed in mode %s\n", argv[optind - 1], argv[mode_argc]); return EXIT_FAILURE; } switch (opt) { case 'c': if (!strcmp(optarg, "none")) { config.compression = COMP_NONE; config.compression_level = 0; } else if (!strncmp(optarg, "lz4", 3) && parse_level_choice(optarg + 3, &config.compression_level, -1)) { #ifdef HAS_LZ4 config.compression = COMP_LZ4; #else fprintf(stderr, "Compression method lz4 not available: this copy of Waypipe was not built with LZ4 compression support.\n"); return EXIT_FAILURE; #endif } else if (!strncmp(optarg, "zstd", 4) && parse_level_choice(optarg + 4, &config.compression_level, 5)) { #ifdef HAS_ZSTD config.compression = COMP_ZSTD; #else fprintf(stderr, "Compression method zstd not available: this copy of Waypipe was not built with Zstd compression support.\n"); return EXIT_FAILURE; #endif } else { fail = true; } comp_string = optarg; break; case 'd': debug = true; break; case 'h': help = true; break; case 'n': config.no_gpu = true; break; case 'o': oneshot = true; break; case 's': socketpath = optarg; break; case ARG_VERSION: version = true; break; case ARG_DISPLAY: wayland_display = optarg; break; case ARG_CONTROL: control_path = optarg; break; case ARG_UNLINK: unlink_at_end = true; break; case ARG_DRMNODE: config.drm_node = optarg; break; case ARG_REMOTENODE: remote_drm_node = optarg; break; case ARG_LOGIN_SHELL: login_shell = true; break; case ARG_ALLOW_TILED: config.only_linear_dmabuf = false; break; #ifdef HAS_VIDEO case ARG_VIDEO: config.video_if_possible = true; if (optarg) { if (parse_video_string(optarg, &config) == -1) { fail = true; } config.old_video_mode = false; } break; case ARG_HWVIDEO: config.video_if_possible = true; config.prefer_hwvideo = true; break; #else case ARG_VIDEO: case ARG_HWVIDEO: fprintf(stderr, "Option %s not allowed: this copy of Waypipe was not built with video support.\n", argv[optind - 1]); return EXIT_FAILURE; #endif case ARG_THREADS: { uint32_t nthreads; if (parse_uint32(optarg, &nthreads) == -1 || nthreads > (1u << 16)) { fail = true; } config.n_worker_threads = (int)nthreads; nthread_string = optarg; } break; case ARG_WAYPIPE_BINARY: waypipe_binary = optarg; break; case ARG_BENCH_TEST_SIZE: { if (parse_uint32(optarg, &bench_test_size) == -1 || bench_test_size > (1u << 30)) { fail = true; } } break; case ARG_VSOCK: #ifdef HAS_VSOCK config.vsock = true; break; #else fprintf(stderr, "Option --vsock not allowed: this copy of Waypipe was not built with support for Linux VM sockets.\n"); return EXIT_FAILURE; #endif case ARG_TITLE_PREFIX: if (!is_utf8(optarg)) { fprintf(stderr, "Title prefix argument must be valid 
UTF-8.\n"); return EXIT_FAILURE; } if (strlen(optarg) > 128) { fprintf(stderr, "Title prefix is too long (>128 bytes).\n"); return EXIT_FAILURE; } config.title_prefix = optarg; break; case ARG_SECCTX: #ifdef HAS_SECURITY_CONTEXT config.secctx_app_id = optarg; break; #else fprintf(stderr, "Option --secctx not allowed: this copy of Waypipe was not built with support for the Wayland security context protocol.\n"); return EXIT_FAILURE; #endif default: fail = true; break; } } if (optind < mode_argc) { fprintf(stderr, "unexpected argument: %s\n", argv[optind]); /* there is an extra parameter before the mode argument */ fail = true; } argv += mode_argc; argc -= mode_argc; if (fail) { return usage(EXIT_FAILURE); } else if (version) { fprintf(stdout, "waypipe " WAYPIPE_VERSION "\n"); fprintf(stdout, "features:"); for (size_t i = 0; i < sizeof(feature_flags) / sizeof(feature_flags[0]); i++) { if (feature_flags[i]) { fprintf(stdout, " %s", feature_names[i]); } } fprintf(stdout, "\n"); fprintf(stdout, "unavailable:"); for (size_t i = 0; i < sizeof(feature_flags) / sizeof(feature_flags[0]); i++) { if (!feature_flags[i]) { fprintf(stdout, " %s", feature_names[i]); } } fprintf(stdout, "\n"); return EXIT_SUCCESS; } else if (help) { return usage(EXIT_SUCCESS); } else if (mode == MODE_FAIL || argc < 1) { return usage(EXIT_FAILURE); } if (mode == MODE_CLIENT && argc > 1) { // In client mode, we do not start an application return usage(EXIT_FAILURE); } else if (mode == MODE_RECON && argc != 3) { // The reconnection helper takes exactly two trailing arguments return usage(EXIT_FAILURE); } else if (mode == MODE_BENCH && argc != 2) { return usage(EXIT_FAILURE); } argv++; argc--; if (argc > 0 && !strcmp(argv[0], "--")) { argv++; argc--; } if (config.video_bpf == 0) { config.video_bpf = config.prefer_hwvideo ? 360000 : 120000; } #ifdef HAS_VSOCK if (config.vsock) { if (socketpath == NULL) { fprintf(stderr, "Socket option (-s, --socket) is required when vsock is enabled\n"); return EXIT_FAILURE; } if (parse_vsock_addr(socketpath, &config) == -1) { fprintf(stderr, "Invalid vsock address specification: '%s' does not match form [[s]CID:]port\n", socketpath); return EXIT_FAILURE; } } #endif if (debug) { log_funcs[0] = log_handler; } log_funcs[1] = log_handler; log_mode = mode; log_anti_staircase = false; log_to_tty = isatty(STDERR_FILENO); setup_video_logging(); if (setup_sighandlers() == -1) { return EXIT_FAILURE; } set_initial_fds(); /* Waypipe connects/binds/unlinks sockets using relative paths, * to work around a) bad Unix socket API which limits path lengths * b) race conditions when directories are moved and renamed. * Unfortunately, for lack of connectat/bindat, this is done * by changing the current working directory of the process to * the desired folder, performing the operation, and then going * back. 
*/ int cwd_fd = open_folder("."); if (cwd_fd == -1) { wp_error("Error: cannot open current directory.\n"); return EXIT_FAILURE; } if (set_cloexec(cwd_fd) == -1) { wp_error("Error: cannot set cloexec on current directory fd.\n"); return EXIT_FAILURE; } const char *wayland_socket = NULL; if (!config.secctx_app_id) { wayland_socket = getenv("WAYLAND_SOCKET"); if (wayland_socket != NULL) { oneshot = true; } } else { #ifdef HAS_SECURITY_CONTEXT /* Create a new socket, send it to the compositor to attach * a security context and write it to WAYLAND_DISPLAY */ char secctx_sock_path[108]; sprintf(secctx_sock_path, "/tmp/waypipe-secctx-%d", getpid()); unlink(secctx_sock_path); char instance_id[21]; sprintf(instance_id, "%d", getpid()); if (create_security_context(secctx_sock_path, "waypipe", instance_id, config.secctx_app_id) == 0) { unsetenv("WAYLAND_SOCKET"); setenv("WAYLAND_DISPLAY", secctx_sock_path, 1); } else { return EXIT_FAILURE; } #endif } int ret; if (mode == MODE_RECON) { ret = run_recon(argv[0], argv[1]); } else if (mode == MODE_BENCH) { char *endptr = NULL; float bw = strtof(argv[0], &endptr); if (*endptr != 0) { wp_error("Failed to parse bandwidth '%s' in MB/sec\n", argv[0]); return EXIT_FAILURE; } ret = run_bench(bw, bench_test_size, config.n_worker_threads); } else if (mode == MODE_CLIENT) { struct sockaddr_un sockaddr; memset(&sockaddr, 0, sizeof(sockaddr)); if (socketpath && split_socket_path(socketpath, &sockaddr) == -1) { ret = EXIT_FAILURE; } else { struct socket_path client_sock_path; client_sock_path.folder = socketpath ? socketpath : "/tmp/"; client_sock_path.filename = &sockaddr; if (!socketpath) { sockaddr.sun_family = AF_UNIX; strcpy(sockaddr.sun_path, "waypipe-client.sock"); } int nmaxclients = oneshot ? 1 : 128; int client_folder_fd = -1, channelsock = -1; if (!config.vsock) { if (setup_nb_socket(cwd_fd, client_sock_path, nmaxclients, &client_folder_fd, &channelsock) == -1) { return EXIT_FAILURE; } } else { #ifdef HAS_VSOCK if (listen_on_vsock(config.vsock_port, nmaxclients, &channelsock) == -1) { return EXIT_FAILURE; } #endif } ret = run_client(cwd_fd, client_sock_path.folder, client_folder_fd, client_sock_path.filename->sun_path, &config, oneshot, wayland_socket, 0, channelsock); if (!config.vsock) { checked_close(client_folder_fd); } } } else if (mode == MODE_SERVER) { char *const *app_argv = (char *const *)argv; char display_path[20]; if (!wayland_display) { char rbytes[9]; fill_rand_token(rbytes); rbytes[8] = 0; sprintf(display_path, "wayland-%s", rbytes); wayland_display = display_path; } struct sockaddr_un sockaddr; memset(&sockaddr, 0, sizeof(sockaddr)); if (socketpath && split_socket_path(socketpath, &sockaddr) == -1) { ret = EXIT_FAILURE; } else { struct socket_path server_sock_path; server_sock_path.folder = socketpath ? 
socketpath : "/tmp/"; server_sock_path.filename = &sockaddr; if (!socketpath) { sockaddr.sun_family = AF_UNIX; strcpy(sockaddr.sun_path, "waypipe-server.sock"); } ret = run_server(cwd_fd, server_sock_path, wayland_display, control_path, &config, oneshot, unlink_at_end, app_argv, login_shell); } } else { struct sockaddr_un clientsock = {0}; char socket_folder[512] = {0}; if (socketpath) { if (strlen(socketpath) >= sizeof(socket_folder)) { wp_error("Socket path prefix is too long\n"); close(cwd_fd); return EXIT_FAILURE; } strcpy(socket_folder, socketpath); if (split_socket_path(socket_folder, &clientsock) == -1) { close(cwd_fd); return EXIT_FAILURE; } } else { clientsock.sun_family = AF_UNIX; strcpy(clientsock.sun_path, "waypipe"); strcpy(socket_folder, "/tmp/"); socketpath = "/tmp/waypipe"; } if (strlen(clientsock.sun_path) + sizeof("-server-88888888.sock") >= sizeof(clientsock.sun_path)) { wp_error("Socket path prefix filename '%s' is too long (more than %zu bytes).\n", socketpath, sizeof(clientsock.sun_path) - sizeof("-server-88888888.sock")); } bool allocates_pty = false; int dstidx = locate_openssh_cmd_hostname( argc, argv, &allocates_pty); if (dstidx < 0) { fprintf(stderr, "waypipe: Failed to locate destination in ssh command string\n"); close(cwd_fd); return EXIT_FAILURE; } /* If there are no arguments following the destination */ bool needs_login_shell = dstidx + 1 == argc; if (needs_login_shell || allocates_pty) { log_anti_staircase = true; } char rbytes[9]; fill_rand_token(rbytes); rbytes[8] = 0; sprintf(clientsock.sun_path + strlen(clientsock.sun_path), "-client-%s.sock", rbytes); struct socket_path client_sock_path = { .filename = &clientsock, .folder = socket_folder, }; int nmaxclients = oneshot ? 1 : 128; int channel_folder_fd = -1, channelsock = -1; if (!config.vsock) { if (setup_nb_socket(cwd_fd, client_sock_path, nmaxclients, &channel_folder_fd, &channelsock) == -1) { close(cwd_fd); return EXIT_FAILURE; } if (set_cloexec(channelsock) == -1 || set_cloexec(channel_folder_fd) == -1) { wp_error("Failed to make client socket or its folder cloexec"); close(channel_folder_fd); close(channelsock); close(cwd_fd); return EXIT_FAILURE; } } else { #ifdef HAS_VSOCK if (listen_on_vsock(config.vsock_port, nmaxclients, &channelsock) == -1) { return EXIT_FAILURE; } if (set_cloexec(channelsock) == -1) { wp_error("Failed to make client socket or its folder cloexec"); close(channelsock); close(cwd_fd); return EXIT_FAILURE; } #endif } pid_t conn_pid; { char linkage[512]; char serversock[256]; char video_str[140]; char remote_display[20]; if (!config.vsock) { sprintf(serversock, "%s-server-%s.sock", socketpath, rbytes); sprintf(linkage, "%s-server-%s.sock:%s-client-%s.sock", socketpath, rbytes, socketpath, rbytes); } else { sprintf(serversock, "%d", config.vsock_port); } sprintf(remote_display, "wayland-%s", rbytes); if (!wayland_display) { wayland_display = remote_display; } int nextra = 14 + debug + oneshot + 2 * (remote_drm_node != NULL) + 2 * (control_path != NULL) + config.video_if_possible + !config.only_linear_dmabuf + 2 * needs_login_shell + 2 * (config.n_worker_threads != 0); char **arglist = calloc((size_t)(argc + nextra), sizeof(char *)); int offset = 0; arglist[offset++] = "ssh"; if (needs_login_shell) { /* Force tty allocation, if we are attempting a * login shell. The user-override is a -T flag, * and a second -t will ensure a login shell * even if `waypipe ssh` was not run from a pty. * Unfortunately, -t disables newline * translation on the local side; see * `log_handler`. 
*/ arglist[offset++] = "-t"; } if (!config.vsock) { arglist[offset++] = "-R"; arglist[offset++] = linkage; } for (int i = 0; i <= dstidx; i++) { arglist[offset + i] = argv[i]; } arglist[dstidx + 1 + offset++] = waypipe_binary; if (debug) { arglist[dstidx + 1 + offset++] = "-d"; } if (oneshot) { arglist[dstidx + 1 + offset++] = "-o"; } /* Always send the compression flag, because the default * was be changed from NONE to LZ4. */ arglist[dstidx + 1 + offset++] = "-c"; if (!comp_string) { switch (config.compression) { case COMP_LZ4: comp_string = "lz4"; break; case COMP_ZSTD: comp_string = "zstd"; break; default: comp_string = "none"; break; } } arglist[dstidx + 1 + offset++] = comp_string; if (needs_login_shell) { arglist[dstidx + 1 + offset++] = "--login-shell"; } if (config.video_if_possible) { if (!config.old_video_mode) { char *vid_type = NULL; switch (config.video_fmt) { case VIDEO_H264: vid_type = "h264"; break; case VIDEO_VP9: vid_type = "vp9"; break; case VIDEO_AV1: vid_type = "av1"; break; } sprintf(video_str, "--video=%s,%s,bpf=%d", vid_type, config.prefer_hwvideo ? "hw" : "sw", config.video_bpf); arglist[dstidx + 1 + offset++] = video_str; } else { arglist[dstidx + 1 + offset++] = config.prefer_hwvideo ? "--hwvideo" : "--video"; } } if (!config.only_linear_dmabuf) { arglist[dstidx + 1 + offset++] = "--allow-tiled"; } if (remote_drm_node) { arglist[dstidx + 1 + offset++] = "--drm-node"; arglist[dstidx + 1 + offset++] = remote_drm_node; } if (config.n_worker_threads != 0) { arglist[dstidx + 1 + offset++] = "--threads"; arglist[dstidx + 1 + offset++] = nthread_string; } if (control_path) { arglist[dstidx + 1 + offset++] = "--control"; arglist[dstidx + 1 + offset++] = control_path; } arglist[dstidx + 1 + offset++] = "--unlink-socket"; arglist[dstidx + 1 + offset++] = "-s"; arglist[dstidx + 1 + offset++] = serversock; arglist[dstidx + 1 + offset++] = "--display"; arglist[dstidx + 1 + offset++] = wayland_display; if (config.vsock) { arglist[dstidx + 1 + offset++] = "--vsock"; } arglist[dstidx + 1 + offset++] = "server"; for (int i = dstidx + 1; i < argc; i++) { arglist[offset + i] = argv[i]; } arglist[argc + offset] = NULL; int err = posix_spawnp(&conn_pid, arglist[0], NULL, NULL, arglist, environ); if (err) { wp_error("Failed to spawn ssh process: %s", strerror(err)); close(channelsock); free(arglist); return EXIT_FAILURE; } free(arglist); } ret = run_client(cwd_fd, client_sock_path.folder, channel_folder_fd, client_sock_path.filename->sun_path, &config, oneshot, wayland_socket, conn_pid, channelsock); if (!config.vsock) { checked_close(channel_folder_fd); } } #ifdef HAS_SECURITY_CONTEXT close_security_context(); #endif checked_close(cwd_fd); check_unclosed_fds(); return ret; } waypipe-v0.10.5/waypipe.scd000066400000000000000000000324631507071711100156220ustar00rootroot00000000000000waypipe(1) # NAME waypipe - A transparent proxy for Wayland applications # SYNOPSIS *waypipe* [options...] *ssh* [ssh options] _destination_ _command..._ *waypipe* [options...] *client*++ *waypipe* [options...] 
*server* -- _command..._++ *waypipe* *bench*++ *waypipe* [*--version*] [*-h*, *--help*] \[options...\] = [*-c*, *--compress* C] [*-d*, *--debug*] [*-n*, *--no-gpu*] [*-o*, *--oneshot*] [*-s*, *--socket* S] [*--display* D] [*--drm-node* R] [*--remote-node* R] [*--remote-bin* R] [*--remote-socket* S] [*--login-shell*] [*--threads* T] [*--title-prefix* P] [*--unlink-socket*] [*--video* V] [*--vsock*] [*--secctx S*] # DESCRIPTION Waypipe is a proxy for Wayland clients, with the aim of supporting behavior like *ssh -X*. Prefixing an *ssh ...* command to become *waypipe ssh ...* will automatically run *waypipe* both locally and remotely, and modify the ssh command to set up forwarding between the two instances of *waypipe*. The remote instance will act like a Wayland compositor, letting Wayland applications that are run remotely be displayed locally. When run as *waypipe client*, it will open a socket (by default at _/tmp/waypipe-client.sock_), connect to the local Wayland compositor, and forward all Wayland applications which were linked to it over the socket by a matching *waypipe server* instance. When run as *waypipe server*, it will run the command that follows in its command line invocation, set up its own Wayland compositor socket, try to connect to its matching *waypipe client* socket (by default _/tmp/waypipe-server.sock_), and forward all the Wayland clients that connect to the fake compositor socket to the matching *waypipe client*. The *waypipe bench* mode can be used to estimate which compression options produce the lowest latency, for a given connection bandwidth and number of threads. It tests two synthetic images, one made to be roughly as compressible as images containing text, and one made to be roughly as compressible as images containing pictures. # OPTIONS *-c C, --compress C* Select the compression method applied to data transfers. Options are _none_ (for high-bandwidth networks), _lz4_ (intermediate), and _zstd_ (for slow connections). The default compression is _lz4_.† The compression level can be chosen by appending = followed by a number. For example, if *C* is _zstd=7_, waypipe will use level 7 Zstd compression. † Unless *waypipe* is built without LZ4 support, in which case the default compression will be _none_. *-d, --debug* Print debug log messages. *-h, --help* Show help message and quit. *-n, --no-gpu* Block protocols like wayland-drm and linux-dmabuf which require access to e.g. render nodes. *-o, --oneshot* Only permit a single connection, and exit when it is closed. *-s S, --socket S* Use *S* as the path for the Unix socket. The default socket path for server mode is _/tmp/waypipe-server.sock_; for client mode, it is _/tmp/waypipe-client.sock_; and in ssh mode, *S* gives the prefix used by both the client and the server for their socket paths. (The actual socket paths in ssh mode add a randomized suffix to *S*.) The default prefix in ssh mode is _/tmp/waypipe_. When vsock is enabled, use *S* to specify a CID and a port number. *--version* Briefly describe Waypipe's version and the features it was built with, then quit. Possible features: LZ4 compression support, ZSTD compression support, ability to transfer DMABUFs, video compression support. *--display D* For server or ssh mode, provide _WAYLAND_DISPLAY_ and let waypipe configure its Wayland display socket to have a matching path. (If *D* is not an absolute path, the socket will be created in the folder given by the environment variable _XDG_RUNTIME_DIR_.)
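For instance, a hypothetical invocation combining this option with the compression syntax described under *--compress* (the display name and compression level here are illustrative placeholders): ``` waypipe -c zstd=7 --display wayland-waypipe server -- weston-terminal ```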
*--drm-node R* Specify the path *R* to the drm device that this instance of waypipe should use and (in server mode) notify connecting applications about. *--remote-node R* In ssh mode, specify the path *R* to the drm device that the remote instance of waypipe (running in server mode) should use. *--remote-bin R* In ssh mode, specify the path *R* to the waypipe binary on the remote computer, or its name if it is available in _PATH_. It defaults to *waypipe* if this option isn’t passed. *--remote-socket R* In ssh mode, specify the prefix used by the remote *waypipe server* for its socket path. This overrides the *--socket* option, on the server side only. *--login-shell* Only for server mode; if no command is being run, open a login shell. *--threads T* Set the total number of threads (including the main thread) which a *waypipe* instance will create. These threads will be used to parallelize compression operations. This flag is passed on to *waypipe server* when given to *waypipe ssh*. The flag also controls the thread count for *waypipe bench*. The default behavior (choosable by setting *T* to _0_) is to use half as many threads as the computer has hardware threads available. *--title-prefix P* Prepend *P* to any window titles specified using the XDG shell protocol. In ssh mode, the prefix is applied only on the client side. *--unlink-socket* Only for server mode; on shutdown, unlink the Unix socket that waypipe connects to. *--video V* Compress specific DMABUF formats using a lossy video codec. Opaque, 10-bit, and multiplanar formats, among others, are not supported. *V* is a comma-separated list of options to control the video encoding. (For example: *--video av1,hw*.) Later options supersede earlier ones. *none* Do not use video encoding. *h264* Use H.264 encoded video. *vp9* Use VP9 encoded video. *av1* Use AV1 encoded video. *sw*, *swenc*, *swdec* Use software encoding or decoding, when available. *hw*, *hwenc*, *hwdec* Use hardware encoding or decoding, when available. *bpf=B* Set the target bit rate of the video encoder, in units of bits per frame. *B* can be written as an integer or with exponential notation; thus *--video=bpf=7.5e5* is equivalent to *--video=bpf=750000*. *--vsock* Use vsock instead of Unix sockets. This is used when waypipe is running in virtual machines. With this option enabled, specify a CID and a port number in *S*. The CID is only used in server mode and can be omitted when connecting from a guest virtual machine to the host. *--secctx S* Enable the Wayland security context protocol (client or ssh modes). Specify an application ID *S* that will be attached to the security context. *--control C*++ *--hwvideo*++ *--allow-tiled* Options present in older versions of Waypipe that were removed. # EXAMPLE The following *waypipe ssh* subcommand will attempt to run *weston-flower* on the server _exserv_, displaying the result on the local system. ``` waypipe ssh user@exserv weston-flower ``` One can obtain similar behavior by explicitly running waypipe and ssh: ``` waypipe --socket /tmp/socket-client client & ssh -R /tmp/socket-server:/tmp/socket-client user@exserv \\ waypipe --socket /tmp/socket-server server -- weston-flower kill %1 ``` Waypipe may be run locally without an SSH connection by specifying matching socket paths. For example: ``` waypipe --socket /tmp/waypipe.sock client & waypipe --socket /tmp/waypipe.sock server weston-simple-dmabuf-egl kill %1 rm /tmp/waypipe.sock ``` Using transports other than SSH is a bit more complicated.
A recipe with ncat to connect to _remote_ from computer _local_: ``` $ waypipe --socket /tmp/waypipe-remote.sock client & $ ncat --ssl -lk 12345 --sh-exec 'ncat -U /tmp/waypipe-remote.sock' & $ ssh user@remote > ncat -lkU /tmp/waypipe-local.sock --sh-exec 'ncat --ssl local 12345' & > waypipe --display wayland-local \\ --socket /tmp/waypipe-local.sock server -- sleep inf & > WAYLAND_DISPLAY=wayland-local application ``` Given a certificate file, socat can also provide an encrypted connection (remove 'verify=0' to check certificates): ``` $ waypipe --socket /tmp/waypipe-remote.sock client & $ socat openssl-listen:12345,reuseaddr,cert=certificate.pem,verify=0,fork \\ unix-connect:/tmp/waypipe-remote.sock $ ssh user@remote > socat unix-listen:/tmp/waypipe-local.sock,reuseaddr,fork \\ openssl-connect:local:12345,verify=0 & > waypipe --socket /tmp/waypipe-local.sock server -- application ``` Many applications require specific environment variables to use Wayland instead of X11. If ssh isn't configured to support loading _~/.ssh/environment_, or to allow specific variables to be set with _AcceptEnv_/_SetEnv_, one can run *waypipe ssh* without a command (and thereby open a login shell), or use *env* to set the needed variables each time: ``` waypipe ssh user@host env XDG_SESSION_TYPE=wayland dolphin ``` In some cases, one may wish to set environment variables for the *waypipe server* process itself; the above trick with *env* will not do this, because the *env* process will be a child of *waypipe server*, not the other way around. Instead, one can use _~/.ssh/environment_, or use the *--remote-bin* option to change the remote Waypipe instance to a shell script that sets the environment before running the actual *waypipe* program. ## Running waypipe in virtual machines When running waypipe in virtual machines on the same host, it is possible to use vsock for efficient inter-VM communication. The following scenarios are supported: - Running applications on the host from a guest. ``` host> waypipe --vsock -s 1234 client guest> waypipe --vsock -s 1234 server weston-terminal ``` - Running applications in a guest virtual machine from the host. ``` guest> waypipe --vsock -s 1234 client host> waypipe --vsock -s 3:1234 server weston-terminal ``` In this example, waypipe server connects to a virtual machine with CID 3 on port 1234. - Running applications in a guest virtual machine from other guest virtual machines. When running both client and server in virtual machines, it is possible to enable the VMADDR_FLAG_TO_HOST flag for sibling communication by prefixing the CID with an s: ``` guest> waypipe --vsock -s 1234 client guest> waypipe --vsock -s s3:1234 server weston-terminal ``` In this case all packets will be routed to the host, where they can be forwarded to another virtual machine with a vhost-device-vsock device or some other utility. # ENVIRONMENT When running as a server, by default _WAYLAND_DISPLAY_ will be set for the invoked process. If the *--oneshot* flag is set, waypipe will instead set _WAYLAND_SOCKET_ and pass an already connected socketpair file descriptor to the invoked (child) process. Some programs open and close a Wayland connection repeatedly as part of their initialization, and will not work correctly with this flag. # EXIT STATUS *waypipe ssh* will exit with the exit status code from the remote command, or with return code 1 if there has been an error. # SECURITY Waypipe does not provide any strong security guarantees, and connecting to untrusted servers is not recommended.
It does not filter which Wayland protocols the compositor makes available to the client (with a few exceptions for protocols that require file descriptors which Waypipe cannot yet handle). For example, if a Wayland compositor gives all its clients access to a screenshot or lock-screen protocol, then proxied clients run under Waypipe can also make screenshots or lock the screen. In general, applications are not well tested against malicious compositors, and compositors are not well tested against malicious clients. Waypipe can connect the two, and may blindly forward denial-of-service and other attacks. Waypipe itself is written in C and links to compression, graphics, and video libraries; both it and these libraries may have security bugs. Some risk can be avoided by building Waypipe with DMABUF support turned off, or running Waypipe with the *--no-gpu* flag so that it does not expose graphics libraries. *waypipe ssh* has no explicit protections against timing attacks; an observer of the resulting network traffic may, by studying the size and timing of packets, learn information about the user's interaction with a Wayland client proxied through *waypipe ssh*. For example: a lack of activity suggests the user is not currently using the application, while an intermittent stream of messages from the compositor to the client may indicate mouse movement (or maybe something else: the contents of the messages are protected by *ssh*.) The memory used by Waypipe processes may, at a given time, include Wayland messages encoding user input, and the contents of current and recent frames drawn for application windows. Swap should be encrypted to prevent this data from being leaked to disk. # BUGS File bug reports at: https://gitlab.freedesktop.org/mstoeckl/waypipe/ ## Common issues Some programs (gnome-terminal, firefox, kate, among others) have special mechanisms to ensure that only one process is running at a time. Starting those programs under Waypipe while they are running under a different Wayland compositor may silently open a window or tab in the original instance of the program. Such programs may have a command line argument to create a new instance. The *waypipe ssh* command requires that the *ssh* executable supports Unix socket forwarding; this is only supported in OpenSSH since version 6.7 (from 2014), and may require specific configuration. Other implementations of SSH may not be able to forward Unix sockets; see the EXAMPLE section above for how to use non-SSH transports. # SEE ALSO *weston*(1), *ssh*(1), *socat*(1), *ncat*(1) waypipe-v0.10.5/wrap-ffmpeg/000077500000000000000000000000001507071711100156545ustar00rootroot00000000000000waypipe-v0.10.5/wrap-ffmpeg/Cargo.lock000066400000000000000000000055151507071711100175670ustar00rootroot00000000000000# This file is automatically @generated by Cargo. # It is not intended for manual editing.
version = 3 [[package]] name = "cfg-if" version = "1.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd" [[package]] name = "libloading" version = "0.8.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4979f22fdb869068da03c9f7528f8297c6fd2606bc3a4affe42e6a823fdb8da4" dependencies = [ "cfg-if", "windows-targets", ] [[package]] name = "pkg-config" version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "953ec861398dccce10c670dfeaf3ec4911ca479e9c02154b3a215178c5f566f2" [[package]] name = "waypipe-ffmpeg-wrapper" version = "0.1.0" dependencies = [ "libloading", "pkg-config", ] [[package]] name = "windows-targets" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9b724f72796e036ab90c1021d4780d4d3d648aca59e491e6b98e725b84e99973" dependencies = [ "windows_aarch64_gnullvm", "windows_aarch64_msvc", "windows_i686_gnu", "windows_i686_gnullvm", "windows_i686_msvc", "windows_x86_64_gnu", "windows_x86_64_gnullvm", "windows_x86_64_msvc", ] [[package]] name = "windows_aarch64_gnullvm" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "32a4622180e7a0ec044bb555404c800bc9fd9ec262ec147edd5989ccd0c02cd3" [[package]] name = "windows_aarch64_msvc" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "09ec2a7bb152e2252b53fa7803150007879548bc709c039df7627cabbd05d469" [[package]] name = "windows_i686_gnu" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8e9b5ad5ab802e97eb8e295ac6720e509ee4c243f69d781394014ebfe8bbfa0b" [[package]] name = "windows_i686_gnullvm" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0eee52d38c090b3caa76c563b86c3a4bd71ef1a819287c19d586d7334ae8ed66" [[package]] name = "windows_i686_msvc" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "240948bc05c5e7c6dabba28bf89d89ffce3e303022809e73deaefe4f6ec56c66" [[package]] name = "windows_x86_64_gnu" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "147a5c80aabfbf0c7d901cb5895d1de30ef2907eb21fbbab29ca94c5b08b1a78" [[package]] name = "windows_x86_64_gnullvm" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "24d5b23dc417412679681396f2b49f3de8c1473deb516bd34410872eff51ed0d" [[package]] name = "windows_x86_64_msvc" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec" waypipe-v0.10.5/wrap-ffmpeg/Cargo.toml000066400000000000000000000002661507071711100176100ustar00rootroot00000000000000[package] name = "waypipe-ffmpeg-wrapper" version = "0.1.0" edition = "2021" rust-version = "1.65.0" [dependencies] libloading = "0.8.5" [build-dependencies] pkg-config = "0.3.31" waypipe-v0.10.5/wrap-ffmpeg/build.rs000066400000000000000000000111611507071711100173210ustar00rootroot00000000000000fn depfile_to_cargo(path: &std::path::Path) { use std::io::Read; let mut depfile = std::fs::File::open(path).unwrap(); let mut data = Vec::::new(); depfile.read_to_end(&mut data).unwrap(); // depfile contains the dependencies, in Make-style. Spaces in paths are escaped with '\ ', // and backslashes with '\\'. 
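// (For instance, bindgen might emit a single rule roughly like // `bindings.rs: wrapper.h /usr/include/libavcodec/avcodec.h`, with illustrative paths; // the first, target chunk is skipped below and the rest become rerun-if-changed entries.)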
Other escaped chars (newlines, control characters) may break. let mut unescaped = Vec::new(); let mut chunks: Vec<String> = Vec::new(); assert!(!data.contains(&0)); let mut scan = data.into_iter(); loop { let Some(c) = scan.next() else { break; }; if c == b'\\' { // TODO: how does Cargo handle escapes in path names? Or invalid utf8? let d = scan.next().unwrap(); match d { b' ' => unescaped.push(b' '), b'\\' => unescaped.push(b'\\'), _ => panic!(), } } else if c == b' ' { chunks.push(std::str::from_utf8(&unescaped[..]).unwrap().into()); unescaped.clear(); } else { unescaped.push(c); } } chunks.push(std::str::from_utf8(&unescaped[..]).unwrap().into()); for file in chunks.iter().skip(1) { println!("cargo:rerun-if-changed={}", file); } } fn main() { use std::ffi::OsStr; let libavutil = pkg_config::probe_library("libavutil").unwrap(); let libavcodec = pkg_config::probe_library("libavcodec").unwrap(); let mut includes = Vec::new(); includes.extend_from_slice(&libavutil.include_paths[..]); includes.extend_from_slice(&libavcodec.include_paths[..]); let functions = &[ "av_buffer_ref", "av_buffer_unref", "av_dict_free", "av_dict_set", "av_frame_alloc", "av_frame_free", "av_free", "av_get_pix_fmt_name", "av_hwdevice_ctx_alloc", "av_hwdevice_ctx_init", "av_hwdevice_get_hwframe_constraints", "av_hwframe_ctx_alloc", "av_hwframe_ctx_init", "av_hwframe_get_buffer", "av_log_default_callback", "av_log_set_callback", "av_log_set_level", "av_malloc", "av_new_packet", "av_packet_alloc", "av_packet_free", "av_strerror", "avcodec_alloc_context3", "avcodec_align_dimensions2", "avcodec_find_decoder_by_name", "avcodec_find_encoder_by_name", "avcodec_free_context", "avcodec_get_hw_frames_parameters", "avcodec_open2", "avcodec_receive_frame", "avcodec_receive_packet", "avcodec_send_frame", "avcodec_send_packet", ]; let types = &[ "AVFrame", "AVHWDeviceContext", "AVHWFramesContext", "AVPacket", "AVRational", "AVVkFrame", "AVVulkanDeviceContext", "AVVulkanFramesContext", ]; let vars = &[ "AV_LOG_TRACE", "AV_LOG_VERBOSE", "AV_LOG_INFO", "AV_LOG_WARNING", "AV_NUM_DATA_POINTERS", "LIBAVUTIL_VERSION_MAJOR", "LIBAVCODEC_VERSION_MAJOR", ]; let bindgen = "bindgen"; let out_path = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap()).join("bindings.rs"); let dep_path = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap()).join("depfile"); let mut args: Vec<&OsStr> = Vec::new(); args.push(OsStr::new("--dynamic-loading")); args.push(OsStr::new("ffmpeg")); args.push(OsStr::new("--dynamic-link-require-all")); args.push(OsStr::new("--rust-target")); args.push(OsStr::new("1.59")); args.push(OsStr::new("--no-doc-comments")); args.push(OsStr::new("--depfile")); args.push(dep_path.as_os_str()); args.push(OsStr::new("--output")); args.push(out_path.as_os_str()); for f in functions.iter() { args.push(OsStr::new("--allowlist-function")); args.push(OsStr::new(*f)); } for f in vars.iter() { args.push(OsStr::new("--allowlist-var")); args.push(OsStr::new(*f)); } for f in types.iter() { args.push(OsStr::new("--allowlist-type")); args.push(OsStr::new(*f)); } args.push(OsStr::new("wrapper.h")); args.push(OsStr::new("--")); let inc_vec: Vec<String> = includes .iter() .map(|x| format!("-I{}", x.to_string_lossy())) .collect(); for x in inc_vec.iter() { args.push(OsStr::new(x)); } let mut child = std::process::Command::new(bindgen) .args(args) .spawn() .unwrap(); let exit_status = child.wait().unwrap(); assert!(exit_status.success()); depfile_to_cargo(&dep_path); }
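// Note: with `--dynamic-loading ffmpeg`, the generated bindings.rs should expose a struct // named `ffmpeg` whose `unsafe fn new(path)` loads the shared library through libloading at // runtime, roughly `let av = unsafe { ffmpeg::new("libavcodec.so") }.unwrap();` // (a sketch based on bindgen's documented behavior, not code from this crate).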
waypipe-v0.10.5/wrap-ffmpeg/src/000077500000000000000000000000001507071711100164435ustar00rootroot00000000000000waypipe-v0.10.5/wrap-ffmpeg/src/lib.rs000066400000000000000000000004341507071711100175600ustar00rootroot00000000000000#![allow(non_upper_case_globals)] #![allow(non_camel_case_types)] #![allow(non_snake_case)] #![allow(improper_ctypes)] // u128 FFI should not be used #![allow(rustdoc::invalid_html_tags)] #![allow(rustdoc::broken_intra_doc_links)] include!(concat!(env!("OUT_DIR"), "/bindings.rs")); waypipe-v0.10.5/wrap-ffmpeg/wrapper.h000066400000000000000000000005261507071711100175100ustar00rootroot00000000000000// this is a hack to avoid issues multiply defining FP_NORMAL, etc. #define _MATH_H #include #include #include #include #include #include #include #include waypipe-v0.10.5/wrap-gbm/000077500000000000000000000000001507071711100151555ustar00rootroot00000000000000waypipe-v0.10.5/wrap-gbm/Cargo.lock000066400000000000000000000055121507071711100170650ustar00rootroot00000000000000# This file is automatically @generated by Cargo. # It is not intended for manual editing. version = 3 [[package]] name = "cfg-if" version = "1.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd" [[package]] name = "libloading" version = "0.8.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4979f22fdb869068da03c9f7528f8297c6fd2606bc3a4affe42e6a823fdb8da4" dependencies = [ "cfg-if", "windows-targets", ] [[package]] name = "pkg-config" version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "953ec861398dccce10c670dfeaf3ec4911ca479e9c02154b3a215178c5f566f2" [[package]] name = "waypipe-gbm-wrapper" version = "0.1.0" dependencies = [ "libloading", "pkg-config", ] [[package]] name = "windows-targets" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9b724f72796e036ab90c1021d4780d4d3d648aca59e491e6b98e725b84e99973" dependencies = [ "windows_aarch64_gnullvm", "windows_aarch64_msvc", "windows_i686_gnu", "windows_i686_gnullvm", "windows_i686_msvc", "windows_x86_64_gnu", "windows_x86_64_gnullvm", "windows_x86_64_msvc", ] [[package]] name = "windows_aarch64_gnullvm" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "32a4622180e7a0ec044bb555404c800bc9fd9ec262ec147edd5989ccd0c02cd3" [[package]] name = "windows_aarch64_msvc" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "09ec2a7bb152e2252b53fa7803150007879548bc709c039df7627cabbd05d469" [[package]] name = "windows_i686_gnu" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8e9b5ad5ab802e97eb8e295ac6720e509ee4c243f69d781394014ebfe8bbfa0b" [[package]] name = "windows_i686_gnullvm" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0eee52d38c090b3caa76c563b86c3a4bd71ef1a819287c19d586d7334ae8ed66" [[package]] name = "windows_i686_msvc" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "240948bc05c5e7c6dabba28bf89d89ffce3e303022809e73deaefe4f6ec56c66" [[package]] name = "windows_x86_64_gnu" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "147a5c80aabfbf0c7d901cb5895d1de30ef2907eb21fbbab29ca94c5b08b1a78" [[package]] name = "windows_x86_64_gnullvm" version = "0.52.6" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "24d5b23dc417412679681396f2b49f3de8c1473deb516bd34410872eff51ed0d" [[package]] name = "windows_x86_64_msvc" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec" waypipe-v0.10.5/wrap-gbm/Cargo.toml000066400000000000000000000002631507071711100171060ustar00rootroot00000000000000[package] name = "waypipe-gbm-wrapper" version = "0.1.0" edition = "2021" rust-version = "1.65.0" [dependencies] libloading = "0.8.5" [build-dependencies] pkg-config = "0.3.31" waypipe-v0.10.5/wrap-gbm/build.rs000066400000000000000000000070271507071711100166300ustar00rootroot00000000000000fn depfile_to_cargo(path: &std::path::Path) { use std::io::Read; let mut depfile = std::fs::File::open(path).unwrap(); let mut data = Vec::::new(); depfile.read_to_end(&mut data).unwrap(); // depfile contains the dependencies, in Make-style. Spaces in paths are escaped with '\ ', // and backslashes with '\\'. Other escaped chars (newlines, control characters) may break. let mut unescaped = Vec::new(); let mut chunks: Vec = Vec::new(); assert!(!data.contains(&0)); let mut scan = data.into_iter(); loop { let Some(c) = scan.next() else { break; }; if c == b'\\' { // TODO: how does Cargo handle escapes in path names? Or invalid utf8? let d = scan.next().unwrap(); match d { b' ' => unescaped.push(b' '), b'\\' => unescaped.push(b'\\'), _ => panic!(), } } else if c == b' ' { chunks.push(std::str::from_utf8(&unescaped[..]).unwrap().into()); unescaped.clear(); } else { unescaped.push(c); } } chunks.push(std::str::from_utf8(&unescaped[..]).unwrap().into()); for file in chunks.iter().skip(1) { println!("cargo:rerun-if-changed={}", file); } } fn main() { use std::ffi::OsStr; let libgbm = pkg_config::probe_library("gbm").unwrap(); let mut includes = Vec::new(); includes.extend_from_slice(&libgbm.include_paths[..]); let functions = &[ "gbm_bo_create", "gbm_bo_destroy", "gbm_bo_get_modifier", "gbm_bo_get_fd", "gbm_bo_get_stride", "gbm_bo_import", "gbm_bo_map", "gbm_bo_unmap", "gbm_create_device", "gbm_device_get_backend_name", "gbm_device_is_format_supported", "gbm_device_destroy", ]; let types = &[ "gbm_import_fd_data", "gbm_bo_transfer_flags", "gbm_bo_flags", ]; let vars = &["GBM_BO_IMPORT_FD"]; let bindgen = "bindgen"; let out_path = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap()).join("bindings.rs"); let dep_path = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap()).join("depfile"); let mut args: Vec<&OsStr> = Vec::new(); args.push(OsStr::new("--dynamic-loading")); args.push(OsStr::new("gbm")); args.push(OsStr::new("--dynamic-link-require-all")); args.push(OsStr::new("--rust-target")); args.push(OsStr::new("1.59")); args.push(OsStr::new("--no-doc-comments")); args.push(OsStr::new("--depfile")); args.push(dep_path.as_os_str()); args.push(OsStr::new("--output")); args.push(out_path.as_os_str()); for f in functions.iter() { args.push(OsStr::new("--allowlist-function")); args.push(OsStr::new(*f)); } for f in vars.iter() { args.push(OsStr::new("--allowlist-var")); args.push(OsStr::new(*f)); } for f in types.iter() { args.push(OsStr::new("--allowlist-type")); args.push(OsStr::new(*f)); } args.push(OsStr::new("wrapper.h")); args.push(OsStr::new("--")); let inc_vec: Vec = includes .iter() .map(|x| format!("-I{}", x.to_string_lossy())) .collect(); for x in inc_vec.iter() { args.push(OsStr::new(x)); } let mut child = std::process::Command::new(bindgen) 
.args(args) .spawn() .unwrap(); let exit_status = child.wait().unwrap(); assert!(exit_status.success()); depfile_to_cargo(&dep_path); } waypipe-v0.10.5/wrap-gbm/src/000077500000000000000000000000001507071711100157445ustar00rootroot00000000000000waypipe-v0.10.5/wrap-gbm/src/lib.rs000066400000000000000000000004341507071711100170610ustar00rootroot00000000000000#![allow(non_upper_case_globals)] #![allow(non_camel_case_types)] #![allow(non_snake_case)] #![allow(improper_ctypes)] // u128 FFI should not be used #![allow(rustdoc::invalid_html_tags)] #![allow(rustdoc::broken_intra_doc_links)] include!(concat!(env!("OUT_DIR"), "/bindings.rs")); waypipe-v0.10.5/wrap-gbm/wrapper.h000066400000000000000000000000211507071711100167990ustar00rootroot00000000000000#include <gbm.h> waypipe-v0.10.5/wrap-lz4/000077500000000000000000000000001507071711100151215ustar00rootroot00000000000000waypipe-v0.10.5/wrap-lz4/Cargo.lock000066400000000000000000000006101507071711100170230ustar00rootroot00000000000000# This file is automatically @generated by Cargo. # It is not intended for manual editing. version = 3 [[package]] name = "pkg-config" version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "953ec861398dccce10c670dfeaf3ec4911ca479e9c02154b3a215178c5f566f2" [[package]] name = "waypipe-lz4-wrapper" version = "0.1.0" dependencies = [ "pkg-config", ] waypipe-v0.10.5/wrap-lz4/Cargo.toml000066400000000000000000000002161507071711100170500ustar00rootroot00000000000000[package] name = "waypipe-lz4-wrapper" version = "0.1.0" edition = "2021" rust-version = "1.65.0" [build-dependencies] pkg-config = "0.3.31" waypipe-v0.10.5/wrap-lz4/build.rs000066400000000000000000000063061507071711100165730ustar00rootroot00000000000000fn depfile_to_cargo(path: &std::path::Path) { use std::io::Read; let mut depfile = std::fs::File::open(path).unwrap(); let mut data = Vec::<u8>::new(); depfile.read_to_end(&mut data).unwrap(); // depfile contains the dependencies, in Make-style. Spaces in paths are escaped with '\ ', // and backslashes with '\\'. Other escaped chars (newlines, control characters) may break. let mut unescaped = Vec::new(); let mut chunks: Vec<String> = Vec::new(); assert!(!data.contains(&0)); let mut scan = data.into_iter(); loop { let Some(c) = scan.next() else { break; }; if c == b'\\' { // TODO: how does Cargo handle escapes in path names? Or invalid utf8?
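// Assumption encoded below: only the '\ ' and '\\' escapes ever occur; a trailing // backslash makes the next() call return None and panic via unwrap(), and any other // escape sequence hits the explicit panic!() arm.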
let d = scan.next().unwrap(); match d { b' ' => unescaped.push(b' '), b'\\' => unescaped.push(b'\\'), _ => panic!(), } } else if c == b' ' { chunks.push(std::str::from_utf8(&unescaped[..]).unwrap().into()); unescaped.clear(); } else { unescaped.push(c); } } chunks.push(std::str::from_utf8(&unescaped[..]).unwrap().into()); for file in chunks.iter().skip(1) { println!("cargo:rerun-if-changed={}", file); } } fn main() { use std::ffi::OsStr; let lib = pkg_config::probe_library("liblz4").unwrap(); println!("cargo:rustc-link-lib=lz4"); let mut includes = Vec::new(); includes.extend_from_slice(&lib.include_paths[..]); let functions: &[&str] = &[ "LZ4_compressBound", "LZ4_sizeofState", "LZ4_sizeofStateHC", "LZ4_compress_fast_extState", "LZ4_compress_HC_extStateHC", "LZ4_decompress_safe", ]; let types: &[&str] = &[]; let vars: &[&str] = &[]; let bindgen = "bindgen"; let out_path = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap()).join("bindings.rs"); let dep_path = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap()).join("depfile"); let mut args: Vec<&OsStr> = Vec::new(); args.push(OsStr::new("--rust-target")); args.push(OsStr::new("1.59")); args.push(OsStr::new("--no-doc-comments")); args.push(OsStr::new("--depfile")); args.push(dep_path.as_os_str()); args.push(OsStr::new("--output")); args.push(out_path.as_os_str()); for f in functions.iter() { args.push(OsStr::new("--allowlist-function")); args.push(OsStr::new(*f)); } for f in vars.iter() { args.push(OsStr::new("--allowlist-var")); args.push(OsStr::new(*f)); } for f in types.iter() { args.push(OsStr::new("--allowlist-type")); args.push(OsStr::new(*f)); } args.push(OsStr::new("wrapper.h")); args.push(OsStr::new("--")); let inc_vec: Vec<String> = includes .iter() .map(|x| format!("-I{}", x.to_string_lossy())) .collect(); for x in inc_vec.iter() { args.push(OsStr::new(x)); } let mut child = std::process::Command::new(bindgen) .args(args) .spawn() .unwrap(); let exit_status = child.wait().unwrap(); assert!(exit_status.success()); depfile_to_cargo(&dep_path); } waypipe-v0.10.5/wrap-lz4/src/000077500000000000000000000000001507071711100157105ustar00rootroot00000000000000waypipe-v0.10.5/wrap-lz4/src/lib.rs000066400000000000000000000004341507071711100170250ustar00rootroot00000000000000#![allow(non_upper_case_globals)] #![allow(non_camel_case_types)] #![allow(non_snake_case)] #![allow(improper_ctypes)] // u128 FFI should not be used #![allow(rustdoc::invalid_html_tags)] #![allow(rustdoc::broken_intra_doc_links)] include!(concat!(env!("OUT_DIR"), "/bindings.rs")); waypipe-v0.10.5/wrap-lz4/wrapper.h000066400000000000000000000000441507071711100167500ustar00rootroot00000000000000#include <lz4.h> #include <lz4hc.h> waypipe-v0.10.5/wrap-zstd/000077500000000000000000000000001507071711100153745ustar00rootroot00000000000000waypipe-v0.10.5/wrap-zstd/Cargo.lock000066400000000000000000000006111507071711100172990ustar00rootroot00000000000000# This file is automatically @generated by Cargo. # It is not intended for manual editing.
version = 3 [[package]] name = "pkg-config" version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "953ec861398dccce10c670dfeaf3ec4911ca479e9c02154b3a215178c5f566f2" [[package]] name = "waypipe-zstd-wrapper" version = "0.1.0" dependencies = [ "pkg-config", ] waypipe-v0.10.5/wrap-zstd/Cargo.toml000066400000000000000000000002171507071711100173240ustar00rootroot00000000000000[package] name = "waypipe-zstd-wrapper" version = "0.1.0" edition = "2021" rust-version = "1.65.0" [build-dependencies] pkg-config = "0.3.31" waypipe-v0.10.5/wrap-zstd/build.rs000066400000000000000000000064541507071711100170500ustar00rootroot00000000000000fn depfile_to_cargo(path: &std::path::Path) { use std::io::Read; let mut depfile = std::fs::File::open(path).unwrap(); let mut data = Vec::<u8>::new(); depfile.read_to_end(&mut data).unwrap(); // depfile contains the dependencies, in Make-style. Spaces in paths are escaped with '\ ', // and backslashes with '\\'. Other escaped chars (newlines, control characters) may break. let mut unescaped = Vec::new(); let mut chunks: Vec<String> = Vec::new(); assert!(!data.contains(&0)); let mut scan = data.into_iter(); loop { let Some(c) = scan.next() else { break; }; if c == b'\\' { // TODO: how does Cargo handle escapes in path names? Or invalid utf8? let d = scan.next().unwrap(); match d { b' ' => unescaped.push(b' '), b'\\' => unescaped.push(b'\\'), _ => panic!(), } } else if c == b' ' { chunks.push(std::str::from_utf8(&unescaped[..]).unwrap().into()); unescaped.clear(); } else { unescaped.push(c); } } chunks.push(std::str::from_utf8(&unescaped[..]).unwrap().into()); for file in chunks.iter().skip(1) { println!("cargo:rerun-if-changed={}", file); } } fn main() { use std::ffi::OsStr; let lib = pkg_config::probe_library("libzstd").unwrap(); println!("cargo:rustc-link-lib=zstd"); let mut includes = Vec::new(); includes.extend_from_slice(&lib.include_paths[..]); let functions: &[&str] = &[ "ZSTD_createCCtx", "ZSTD_createDCtx", "ZSTD_freeCCtx", "ZSTD_freeDCtx", "ZSTD_CCtx_setParameter", "ZSTD_compressBound", "ZSTD_compress2", "ZSTD_isError", "ZSTD_decompressDCtx", ]; let types: &[&str] = &["ZSTD_CCtx", "ZSTD_DCtx", "ZSTD_cParameter"]; let vars: &[&str] = &[]; let bindgen = "bindgen"; let out_path = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap()).join("bindings.rs"); let dep_path = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap()).join("depfile"); let mut args: Vec<&OsStr> = Vec::new(); args.push(OsStr::new("--rust-target")); args.push(OsStr::new("1.59")); args.push(OsStr::new("--no-doc-comments")); args.push(OsStr::new("--depfile")); args.push(dep_path.as_os_str()); args.push(OsStr::new("--output")); args.push(out_path.as_os_str()); for f in functions.iter() { args.push(OsStr::new("--allowlist-function")); args.push(OsStr::new(*f)); } for f in vars.iter() { args.push(OsStr::new("--allowlist-var")); args.push(OsStr::new(*f)); } for f in types.iter() { args.push(OsStr::new("--allowlist-type")); args.push(OsStr::new(*f)); } args.push(OsStr::new("wrapper.h")); args.push(OsStr::new("--")); let inc_vec: Vec<String> = includes .iter() .map(|x| format!("-I{}", x.to_string_lossy())) .collect(); for x in inc_vec.iter() { args.push(OsStr::new(x)); } let mut child = std::process::Command::new(bindgen) .args(args) .spawn() .unwrap(); let exit_status = child.wait().unwrap(); assert!(exit_status.success()); depfile_to_cargo(&dep_path); }
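// Unlike the ffmpeg and gbm wrappers, this crate links libzstd at build time via the // `cargo:rustc-link-lib=zstd` directive above rather than bindgen's --dynamic-loading mode, // so the generated bindings are plain `extern "C"` functions that can be called directly, // e.g. `unsafe { ZSTD_compressBound(len) }` (illustrative usage).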
waypipe-v0.10.5/wrap-zstd/src/000077500000000000000000000000001507071711100161635ustar00rootroot00000000000000waypipe-v0.10.5/wrap-zstd/src/lib.rs000066400000000000000000000004341507071711100173000ustar00rootroot00000000000000#![allow(non_upper_case_globals)] #![allow(non_camel_case_types)] #![allow(non_snake_case)] #![allow(improper_ctypes)] // u128 FFI should not be used #![allow(rustdoc::invalid_html_tags)] #![allow(rustdoc::broken_intra_doc_links)] include!(concat!(env!("OUT_DIR"), "/bindings.rs")); waypipe-v0.10.5/wrap-zstd/wrapper.h000066400000000000000000000000221507071711100172230ustar00rootroot00000000000000#include <zstd.h>