Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/.clang-format:

---
# We'll use defaults from the LLVM style, but with 4 columns indentation.
BasedOnStyle: LLVM
IndentWidth: 4
...

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/.gitignore:

CMakeCache.txt
CMakeLists.txt.user
CMakeFiles/
cmake_install.cmake
Makefile
__pycache__
VKConfig.h
*.so
*.so.*
icd/common/libicd.a
icd/intel/intel_gpa.c
_out64
out32/*
out64/*
demos/Debug/*
demos/tri.dir/Debug/*
demos/tri/Debug/*
demos/Win32/Debug/*
demos/xcb_nvidia.dir/*
demos/smoke/HelpersDispatchTable.cpp
demos/smoke/HelpersDispatchTable.h
libs/Win32/Debug/*
*.pyc
*.vcproj
*.sln
*.suo
*.vcxproj
*.sdf
*.filters
build
build32
dbuild

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/BUILD.md:

# Build Instructions

This document contains the instructions for building this repository on Linux and Windows.

This repository does not contain a Vulkan-capable driver. Before proceeding, it is strongly recommended that you obtain a Vulkan driver from your graphics hardware vendor and install it.

Note: The sample Vulkan Intel driver for Linux (ICD) is being deprecated in favor of other driver options from Intel. This driver has been moved to the [VulkanTools repo](https://github.com/LunarG/VulkanTools). Further instructions regarding this ICD are available there.

## Git the Bits

To create your local git repository:
```
git clone https://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers
```

If you intend to contribute, the preferred work flow is for you to develop your contribution in a fork of this repo in your GitHub account and then submit a pull request.
Please see the CONTRIBUTING.md file in this repository for more details.

## Linux Build

The build process uses CMake to generate makefiles for this project. The build generates the loader, layers, and tests.

This repo has been built and tested on Ubuntu 14.04.3 LTS, 14.10, 15.04 and 15.10. It should be straightforward to use it on other Linux distros.

These packages are needed to build this repository:
```
sudo apt-get install git cmake build-essential bison libxcb1-dev
```

Example debug build:
```
cd Vulkan-LoaderAndValidationLayers  # cd to the root of the cloned git repository
./update_external_sources.sh         # Fetches and builds glslang and spirv-tools
cmake -H. -Bdbuild -DCMAKE_BUILD_TYPE=Debug
cd dbuild
make
```

If you have installed a Vulkan driver obtained from your graphics hardware vendor, the install process should have configured the driver so that the Vulkan loader can find and load it.

If you want to use the loader and layers that you have just built:
```
export LD_LIBRARY_PATH=<path_to_your_repository_root>/dbuild/loader
export VK_LAYER_PATH=<path_to_your_repository_root>/dbuild/layers
```

Note that if you have installed the [LunarG Vulkan SDK](https://vulkan.lunarg.com), you will also have the SDK version of the loader and layers installed in your default system libraries.

You can run the `vulkaninfo` application to see which driver, loader and layers are being used.

The `LoaderAndLayerInterface` document in the `loader` folder in this repository is a specification that describes both how ICDs and layers should be properly packaged, and how developers can point to ICDs and layers within their builds.

## Validation Test

The test executables can be found in the dbuild/tests directory. Some of the tests that are available:

- vk_layer_validation_tests: Test Vulkan layers.

There are also a few shell and Python scripts that run test collections (e.g., `run_all_tests.sh`).
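Because mixing a just-built loader/layer set with an installed SDK copy is easy to get wrong, the two exports above can be wrapped in a small sourceable helper that sanity-checks the build tree first. This is an illustrative sketch, not part of the repository: the `use_local_vulkan` name and the scratch-directory demonstration are made up for this example, and it assumes the `dbuild/loader` and `dbuild/layers` layout produced by the Linux build steps above.

```shell
#!/bin/sh
# Sketch only: helper name and layout checks are illustrative, not part of
# the repository. Assumes the dbuild/loader and dbuild/layers layout
# produced by the Linux build steps above.
use_local_vulkan() {
    build_dir="$1"
    if [ ! -d "$build_dir/loader" ] || [ ! -d "$build_dir/layers" ]; then
        echo "error: $build_dir does not look like a finished build tree" >&2
        return 1
    fi
    # Prepend so the freshly built loader wins over any installed SDK copy.
    LD_LIBRARY_PATH="$build_dir/loader${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    VK_LAYER_PATH="$build_dir/layers"
    export LD_LIBRARY_PATH VK_LAYER_PATH
}

# Demonstrate against a scratch tree so the sketch is self-checking.
tmp=$(mktemp -d)
mkdir -p "$tmp/dbuild/loader" "$tmp/dbuild/layers"
use_local_vulkan "$tmp/dbuild"
echo "layers: $VK_LAYER_PATH"
```

In a real checkout you would source the function, call it with your repository's `dbuild` directory, and then run `vulkaninfo` to confirm which loader and layers were picked up.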
## Linux Demos

Some demos that can be found in the dbuild/demos directory are:

- vulkaninfo: report GPU properties
- tri: a textured triangle (which is animated to demonstrate Z-clipping)
- cube: a textured spinning cube
- smoke/smoke: a "smoke" test using a more complex Vulkan demo

## Windows System Requirements

Windows 7+ with additional required software packages:

- Microsoft Visual Studio 2013 Professional. Note: it is possible that lesser/older versions may work, but that has not been tested.
- CMake (from http://www.cmake.org/download/). Notes:
  - Tell the installer to "Add CMake to the system PATH" environment variable.
- Python 3 (from https://www.python.org/downloads). Notes:
  - Select the option to install the optional sub-package that adds Python to the system PATH environment variable.
  - Ensure the pip module is installed (it should be by default).
  - Python 3.3 or later is needed for the Windows py.exe launcher, which is used to select python3 rather than python2 when both are installed on Windows.
  - 32-bit Python works.
  - The Python lxml package must be installed. Download the lxml package from http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml
    - 32-bit latest for Python 3.5 is: lxml-3.5.0-cp35-none-win32.whl
    - 64-bit latest for Python 3.5 is: lxml-3.5.0-cp35-none-win_amd64.whl
  - The package can be installed with pip as follows: `pip install lxml-3.5.0-cp35-none-win32.whl`. If pip is not in your path, you can find it at $PYTHON_HOME\Scripts\pip.exe, where PYTHON_HOME is the folder where you installed Python.
- Git (from http://git-scm.com/download/win).
  - Note: If you use Cygwin, you can normally use Cygwin's "git.exe". However, in order to use the "update_external_sources.bat" script, you must have this version.
  - Tell the installer to allow it to be used for "Developer Prompt" as well as "Git Bash".
  - Tell the installer to treat line endings "as is" (i.e. both DOS and Unix-style line endings).
  - Install both a 32-bit and a 64-bit version, as the 64-bit installer does not install the 32-bit libraries and tools.
- glslang is required for demos and tests.
  - You can download and configure it (in a peer directory) here: https://github.com/KhronosGroup/glslang/blob/master/README.md
  - A Windows batch file has been included that will pull and build the correct version. Run it from a Developer Command Prompt for VS2013 like so:
    - `update_external_sources.bat --build-glslang`

Optional software packages:

- Cygwin (from https://www.cygwin.com/). Notes:
  - Cygwin provides some Linux-like tools, which are valuable for obtaining the source code and running CMake. Especially valuable are the BASH shell and git packages.
  - If you don't want to use Cygwin, there are other shells and environments that can be used. You can also use a Git package that doesn't come from Cygwin.

## Windows Build

Cygwin is used in order to obtain a local copy of the Git repository, and to run the CMake command that creates Visual Studio files. Visual Studio is used to build the software, and will re-run CMake as appropriate.

To build all Windows targets (e.g. in a "Developer Command Prompt for VS2013" window):
```
cd Vulkan-LoaderAndValidationLayers  # cd to the root of the cloned git repository
update_external_sources.bat --all
build_windows_targets.bat
```

At this point, you can use Windows Explorer to launch Visual Studio by double-clicking on the "VULKAN.sln" file in the \build folder. Once Visual Studio comes up, you can select "Debug" or "Release" from a drop-down list. You can start a build with either the menu (Build->Build Solution) or a keyboard shortcut (Ctrl+Shift+B).

As part of the build process, Python scripts will create additional Visual Studio files and projects, along with additional source files. All of these auto-generated files are under the "build" folder.

Vulkan programs must be able to find and use the vulkan-1.dll library.
Make sure it is either installed in the C:\Windows\System32 folder, or the PATH environment variable includes the folder that it is located in.

To run Vulkan programs you must tell the ICD loader where to find the libraries. This is described in the `LoaderAndLayerInterface` document in the `loader` folder in this repository. This specification describes both how ICDs and layers should be properly packaged, and how developers can point to ICDs and layers within their builds.

## Android Build

Install the required tools for Linux and Windows covered above, then add the following.

### Android Studio

- Install Android Studio 2.1, latest Preview (tested with 4):
  - http://tools.android.com/download/studio/canary
- From the "Welcome to Android Studio" splash screen, add the following components using Configure > SDK Manager:
  - SDK Platforms > Android N Preview
  - SDK Tools > Android NDK

#### Add NDK to path

On Linux:
```
export PATH=$HOME/Android/sdk/ndk-bundle:$PATH
```
On Windows:
```
set PATH=%LOCALAPPDATA%\Android\sdk\ndk-bundle;%PATH%
```
On OSX:
```
export PATH=$HOME/Library/Android/sdk/ndk-bundle:$PATH
```

### Additional OSX System Requirements

Tested on OSX version 10.11.4

Setup Homebrew and components

- Follow the instructions on [brew.sh](http://brew.sh) to get Homebrew installed.
```
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
```
- Ensure Homebrew is at the beginning of your PATH:
```
export PATH=$HOME/homebrew/bin:$PATH
```
- Add packages with the following (may need refinement):
```
brew install cmake
brew install python
brew install python3
pip install --upgrade pip
pip install lxml
pip3.5 install --upgrade pip
pip3.5 install lxml
```

### Build steps for Android

Use the following to ensure the Android build works.
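As a preliminary check before the per-platform steps below, the three per-OS exports from the "Add NDK to path" section can be collapsed into one snippet that picks the default `ndk-bundle` location by host OS. This is a hedged sketch: the paths are the per-OS defaults quoted above, and a non-default SDK install location would need `NDK_HOME` set manually.

```shell
#!/bin/sh
# Sketch: choose the default Android NDK location for this host
# (paths as listed in the "Add NDK to path" section above).
case "$(uname -s)" in
    Darwin) ndk_default="$HOME/Library/Android/sdk/ndk-bundle" ;;
    Linux)  ndk_default="$HOME/Android/sdk/ndk-bundle" ;;
    *)      ndk_default="${LOCALAPPDATA:-}/Android/sdk/ndk-bundle" ;; # e.g. MSYS/Cygwin
esac

NDK_HOME="${NDK_HOME:-$ndk_default}"
PATH="$NDK_HOME:$PATH"
export NDK_HOME PATH

# ndk-build should now resolve, provided the NDK is actually installed there.
echo "NDK expected at: $NDK_HOME"
```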
#### Linux and OSX

Follow the setup steps for Linux or OSX above, then from your terminal:
```
cd buildAndroid
./update_external_sources_android.sh
./android-generate.sh
ndk-build
```

#### Windows

Follow the setup steps for Windows above, then from a Developer Command Prompt for VS2013:
```
cd buildAndroid
update_external_sources_android.bat
android-generate.bat
ndk-build
```

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/CMakeLists.txt:

# The name of our project is "VULKAN". CMakeLists files in this project can
# refer to the root source directory of the project as ${VULKAN_SOURCE_DIR} and
# to the root binary directory of the project as ${VULKAN_BINARY_DIR}.
cmake_minimum_required(VERSION 2.8.11)
project (VULKAN)
# set (CMAKE_VERBOSE_MAKEFILE 1)

# The MAJOR number of the version we're building, used in naming
# vulkan-<MAJOR>.dll (and other files).
set(MAJOR "1")

find_package(PythonInterp 3 REQUIRED)

if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
    add_definitions(-DVK_USE_PLATFORM_WIN32_KHR -DWIN32_LEAN_AND_MEAN)
    set(DisplayServer Win32)
elseif(CMAKE_SYSTEM_NAME STREQUAL "Android")
    add_definitions(-DVK_USE_PLATFORM_ANDROID_KHR)
    set(DisplayServer Android)
elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux")
    # TODO: Basic support is present for Xlib but is untested.
    # Mir support is stubbed in but unimplemented and untested.
    option(BUILD_WSI_XCB_SUPPORT "Build XCB WSI support" ON)
    option(BUILD_WSI_XLIB_SUPPORT "Build Xlib WSI support" OFF)
    option(BUILD_WSI_WAYLAND_SUPPORT "Build Wayland WSI support" OFF)
    option(BUILD_WSI_MIR_SUPPORT "Build Mir WSI support" OFF)

    if (BUILD_WSI_XCB_SUPPORT)
        add_definitions(-DVK_USE_PLATFORM_XCB_KHR)
        set(DisplayServer Xcb)
    endif()
    if (BUILD_WSI_XLIB_SUPPORT)
        add_definitions(-DVK_USE_PLATFORM_XLIB_KHR)
        set(DisplayServer Xlib)
    endif()
    if (BUILD_WSI_WAYLAND_SUPPORT)
        add_definitions(-DVK_USE_PLATFORM_WAYLAND_KHR)
        set(DisplayServer Wayland)
    endif()
    if (BUILD_WSI_MIR_SUPPORT)
        add_definitions(-DVK_USE_PLATFORM_MIR_KHR)
        set(DisplayServer Mir)
    endif()
else()
    message(FATAL_ERROR "Unsupported Platform!")
endif()

set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_SOURCE_DIR}/cmake")

# Header file for CMake settings
include_directories("${PROJECT_SOURCE_DIR}/include")

if(NOT WIN32)
    include(FindPkgConfig)
endif()

set (CMAKE_INSTALL_PREFIX "")

if (CMAKE_COMPILER_IS_GNUCC OR CMAKE_C_COMPILER_ID MATCHES "Clang")
    set(COMMON_COMPILE_FLAGS "-Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers")
    set(COMMON_COMPILE_FLAGS "${COMMON_COMPILE_FLAGS} -fno-strict-aliasing -fno-builtin-memcmp")
    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -std=c99 ${COMMON_COMPILE_FLAGS}")
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${COMMON_COMPILE_FLAGS} -std=c++11")
    if (UNIX)
        set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fvisibility=hidden")
        set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fvisibility=hidden")
    endif()
endif()

if(NOT WIN32)
    find_package(XCB REQUIRED)
    set (BUILDTGT_DIR build)
    set (BINDATA_DIR Bin)
    set (LIBSOURCE_DIR Lib)
else()
    # For Windows, since 32-bit and 64-bit items can co-exist, we build each in its own build directory.
    # 32-bit target data goes in build32, and 64-bit target data goes into build. So, include/link the
    # appropriate data at build time.
    if (CMAKE_CL_64)
        set (BUILDTGT_DIR build)
        set (BINDATA_DIR Bin)
        set (LIBSOURCE_DIR Lib)
    else()
        set (BUILDTGT_DIR build32)
        set (BINDATA_DIR Bin32)
        set (LIBSOURCE_DIR Lib32)
    endif()
endif()

option(BUILD_LOADER "Build loader" ON)
option(BUILD_TESTS "Build tests" ON)
option(BUILD_LAYERS "Build layers" ON)
option(BUILD_DEMOS "Build demos" ON)
option(BUILD_VKJSON "Build vkjson" ON)

find_program(GLSLANG_VALIDATOR NAMES glslangValidator
             HINTS "${CMAKE_SOURCE_DIR}/../glslang/${BUILDTGT_DIR}/install/bin"
                   "${PROJECT_SOURCE_DIR}/../${BINDATA_DIR}")

find_path(GLSLANG_SPIRV_INCLUDE_DIR SPIRV/spirv.hpp
          HINTS "${CMAKE_SOURCE_DIR}/../glslang"
          DOC "Path to SPIRV/spirv.hpp")

find_path(SPIRV_TOOLS_INCLUDE_DIR spirv-tools/libspirv.h
          HINTS "${CMAKE_SOURCE_DIR}/../spirv-tools/include"
                "${CMAKE_SOURCE_DIR}/../source/spirv-tools/include"
                "${CMAKE_SOURCE_DIR}/../spirv-tools/external/include"
                "${CMAKE_SOURCE_DIR}/../source/spirv-tools/external/include"
          DOC "Path to spirv-tools/libspirv.h")

if (WIN32)
    set (GLSLANG_SEARCH_PATH
         "${CMAKE_SOURCE_DIR}/../glslang/${BUILDTGT_DIR}/glslang/Release"
         "${CMAKE_SOURCE_DIR}/../glslang/${BUILDTGT_DIR}/glslang/OSDependent/Windows/Release"
         "${CMAKE_SOURCE_DIR}/../glslang/${BUILDTGT_DIR}/OGLCompilersDLL/Release"
         "${CMAKE_SOURCE_DIR}/../glslang/${BUILDTGT_DIR}/SPIRV/Release")
    set (SPIRV_TOOLS_SEARCH_PATH "${CMAKE_SOURCE_DIR}/../spirv-tools/${BUILDTGT_DIR}/source/Release")
else()
    set (GLSLANG_SEARCH_PATH
         "${CMAKE_SOURCE_DIR}/../glslang/build/install/lib"
         "${CMAKE_SOURCE_DIR}/../x86_64/lib/glslang")
    set (SPIRV_TOOLS_SEARCH_PATH
         "${CMAKE_SOURCE_DIR}/../spirv-tools/build/source"
         "${CMAKE_SOURCE_DIR}/../x86_64/lib/spirv-tools")
endif()

find_library(GLSLANG_LIB NAMES glslang HINTS ${GLSLANG_SEARCH_PATH})
find_library(OGLCompiler_LIB NAMES OGLCompiler HINTS ${GLSLANG_SEARCH_PATH})
find_library(OSDependent_LIB NAMES OSDependent HINTS ${GLSLANG_SEARCH_PATH})
find_library(SPIRV_LIB NAMES SPIRV HINTS ${GLSLANG_SEARCH_PATH})

find_library(SPIRV_TOOLS_LIB NAMES
             SPIRV-Tools
             HINTS ${SPIRV_TOOLS_SEARCH_PATH})

# On Windows, we must pair Debug and Release appropriately
if (WIN32)
    set (GLSLANG_DEBUG_SEARCH_PATH
         "${CMAKE_SOURCE_DIR}/../glslang/${BUILDTGT_DIR}/glslang/Debug"
         "${CMAKE_SOURCE_DIR}/../glslang/${BUILDTGT_DIR}/glslang/OSDependent/Windows/Debug"
         "${CMAKE_SOURCE_DIR}/../glslang/${BUILDTGT_DIR}/OGLCompilersDLL/Debug"
         "${CMAKE_SOURCE_DIR}/../glslang/${BUILDTGT_DIR}/SPIRV/Debug")
    set (SPIRV_TOOLS_DEBUG_SEARCH_PATH "${CMAKE_SOURCE_DIR}/../spirv-tools/${BUILDTGT_DIR}/source/Debug")

    add_library(glslang STATIC IMPORTED)
    add_library(OGLCompiler STATIC IMPORTED)
    add_library(OSDependent STATIC IMPORTED)
    add_library(SPIRV STATIC IMPORTED)
    add_library(Loader STATIC IMPORTED)
    add_library(SPIRV-Tools STATIC IMPORTED)

    find_library(GLSLANG_DLIB NAMES glslang HINTS ${GLSLANG_DEBUG_SEARCH_PATH})
    find_library(OGLCompiler_DLIB NAMES OGLCompiler HINTS ${GLSLANG_DEBUG_SEARCH_PATH})
    find_library(OSDependent_DLIB NAMES OSDependent HINTS ${GLSLANG_DEBUG_SEARCH_PATH})
    find_library(SPIRV_DLIB NAMES SPIRV HINTS ${GLSLANG_DEBUG_SEARCH_PATH})
    find_library(SPIRV_TOOLS_DLIB NAMES SPIRV-Tools HINTS ${SPIRV_TOOLS_DEBUG_SEARCH_PATH})

    set_target_properties(glslang PROPERTIES
                          IMPORTED_LOCATION "${GLSLANG_LIB}"
                          IMPORTED_LOCATION_DEBUG "${GLSLANG_DLIB}")
    set_target_properties(OGLCompiler PROPERTIES
                          IMPORTED_LOCATION "${OGLCompiler_LIB}"
                          IMPORTED_LOCATION_DEBUG "${OGLCompiler_DLIB}")
    set_target_properties(OSDependent PROPERTIES
                          IMPORTED_LOCATION "${OSDependent_LIB}"
                          IMPORTED_LOCATION_DEBUG "${OSDependent_DLIB}")
    set_target_properties(SPIRV PROPERTIES
                          IMPORTED_LOCATION "${SPIRV_LIB}"
                          IMPORTED_LOCATION_DEBUG "${SPIRV_DLIB}")
    set_target_properties(SPIRV-Tools PROPERTIES
                          IMPORTED_LOCATION "${SPIRV_TOOLS_LIB}"
                          IMPORTED_LOCATION_DEBUG "${SPIRV_TOOLS_DLIB}")

    set (GLSLANG_LIBRARIES glslang OGLCompiler OSDependent SPIRV)
    set (SPIRV_TOOLS_LIBRARIES SPIRV-Tools)
else ()
    set (GLSLANG_LIBRARIES ${GLSLANG_LIB} ${OGLCompiler_LIB} ${OSDependent_LIB} ${SPIRV_LIB})
    set
        (SPIRV_TOOLS_LIBRARIES ${SPIRV_TOOLS_LIB})
endif()

set (PYTHON_CMD ${PYTHON_EXECUTABLE})

if(NOT WIN32)
    include(GNUInstallDirs)
    add_definitions(-DSYSCONFDIR="${CMAKE_INSTALL_SYSCONFDIR}")
    add_definitions(-DDATADIR="${CMAKE_INSTALL_DATADIR}")
    if (CMAKE_INSTALL_PREFIX STREQUAL "/usr")
    elseif (CMAKE_INSTALL_PREFIX STREQUAL "")
    else()
        add_definitions(-DLOCALPREFIX="${CMAKE_INSTALL_PREFIX}")
    endif()
endif()

# loader: Generic VULKAN ICD loader
# tests:  VULKAN tests
if(BUILD_LOADER)
    add_subdirectory(loader)
endif()
if(BUILD_TESTS)
    add_subdirectory(tests)
endif()
if(BUILD_LAYERS)
    add_subdirectory(layers)
endif()
if(BUILD_DEMOS)
    add_subdirectory(demos)
endif()
if(BUILD_VKJSON)
    add_subdirectory(libs/vkjson)
endif()

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/CONTRIBUTING.md:

## How to Contribute to Vulkan Source Repositories

### **The Repositories**

The Vulkan source code is distributed across several GitHub repositories. The repositories sponsored by Khronos and LunarG are described here. In general, the canonical Vulkan Loader and Validation Layers sources are in the Khronos repository, while the LunarG repositories host sources for additional tools and sample programs.

* [Khronos Vulkan-LoaderAndValidationLayers](https://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers)
* [LunarG VulkanTools](https://github.com/LunarG/VulkanTools)
* [LunarG VulkanSamples](https://github.com/LunarG/VulkanSamples)

As a convenience, the contents of the Vulkan-LoaderAndValidationLayers repository are downstreamed into the VulkanTools and VulkanSamples repositories via a branch named `trunk`. This makes VulkanTools and VulkanSamples easier to work with and avoids compatibility issues that might arise with Vulkan-LoaderAndValidationLayers components if they were obtained from a separate repository.
### **How to Submit Fixes**

* **Ensure that the bug was not already reported or fixed** by searching on GitHub under Issues and Pull Requests.
* Use the existing GitHub forking and pull request process. This will involve [forking the repository](https://help.github.com/articles/fork-a-repo/), creating a branch with your commits, and then [submitting a pull request](https://help.github.com/articles/using-pull-requests/).
* Please base your fixes on the master branch. SDK branches are generally not updated except for critical fixes needed to repair an SDK release.
* Please include the GitHub Issue number near the beginning of the commit text if applicable.
  * Example: "GitHub 123: Fix missing init"
* If your changes are restricted only to files from the Vulkan-LoaderAndValidationLayers repository, please direct your pull request to that repository, instead of VulkanTools or VulkanSamples.

#### **Coding Conventions and Formatting**

* Try to follow any existing style in the file. "When in Rome..."
* Run clang-format on your changes to maintain formatting.
  * There are `.clang-format` files throughout the repository to define clang-format settings which are found and used automatically by clang-format.
  * A sample git workflow may look like:

>        # Make changes to the source.
>        $ git add .
>        $ clang-format -style=file -i < list of changed code files >
>        # Check to see if clang-format made any changes and if they are OK.
>        $ git add .
>        $ git commit

#### **Testing**

* Run the existing tests in the repository before and after your changes to check for any regressions. There are some tests that appear in all repositories.
These tests can be found in the following folders inside of your target build directory (these instructions are for Linux):

* In the `demos` directory, run:

>        cube
>        cube --validate
>        tri
>        tri --validate
>        smoke
>        smoke --validate
>        vulkaninfo

* In the `tests` directory, run:

>        run_all_tests.sh

* Note that some tests may fail with known issues or driver-specific problems. The idea here is that your changes shouldn't change the test results, unless that was the intent of your changes.
* Run tests that explicitly exercise your changes.
* Feel free to subject your code changes to other tests as well!

### **Contributor License Agreement (CLA)**

#### **Khronos Repository (Vulkan-LoaderAndValidationLayers)**

The Khronos Group is still finalizing the CLA process and documentation, so the details about using or requiring a CLA are not available yet. In the meantime, we suggest that you not submit any contributions unless you are comfortable doing so without a CLA.

#### **LunarG Repositories**

You'll be prompted with a "click-through" CLA as part of submitting your pull request in GitHub.

### **License and Copyrights**

All contributions made to the Vulkan-LoaderAndValidationLayers repository are Khronos branded and as such, any new files need to have the Khronos license (MIT style) and copyright included. Please see an existing file in this repository for an example.

All contributions made to the LunarG repositories are to be made under the MIT license, and any new files need to include this license and any applicable copyrights. You can include your individual copyright after any existing copyrights.

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/LICENSE.txt:

Copyright (c) 2014-2016 The Khronos Group Inc.
Copyright (c) 2014-2016 Valve Corporation
Copyright (c) 2014-2016 LunarG, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and/or associated documentation files (the "Materials"), to deal in the Materials without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Materials, and to permit persons to whom the Materials are furnished to do so, subject to the following conditions:

The above copyright notice(s) and this permission notice shall be included in all copies or substantial portions of the Materials.

THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.

/*
Copyright (c) 2009 Dave Gamble

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/

/*
Copyright Kevlin Henney, 1997, 2003, 2012. All rights reserved.

Permission to use, copy, modify, and distribute this software and its documentation for any purpose is hereby granted without fee, provided that this copyright and permissions notice appear in all copies and derivatives.

This software is supplied "as is" without express or implied warranty. But that said, if there are any problems please get in touch.
*/

Copyright 2008, Google Inc. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

/*
** Copyright (c) 2014-2015 The Khronos Group Inc.
**
** Permission is hereby granted, free of charge, to any person obtaining a
** copy of this software and/or associated documentation files (the
** "Materials"), to deal in the Materials without restriction, including
** without limitation the rights to use, copy, modify, merge, publish,
** distribute, sublicense, and/or sell copies of the Materials, and to
** permit persons to whom the Materials are furnished to do so, subject to
** the following conditions:
**
** The above copyright notice and this permission notice shall be included
** in all copies or substantial portions of the Materials.
**
** THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
** EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
** MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
** IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
** CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
** TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
** MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
*/

/// Copyright (c) 2005 - 2014 G-Truc Creation (www.g-truc.net)
///
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to deal
/// in the Software without restriction, including without limitation the rights
/// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
/// copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in
/// all copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
/// THE SOFTWARE.
///

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/README.md:

# Vulkan Ecosystem Components

*Version 1.0, January 25, 2016*

This project provides the Khronos official ICD loader and validation layers for Vulkan developers on Windows and Linux.

## Introduction

Vulkan is an Explicit API, enabling direct control over how GPUs actually work. No (or very little) validation or error checking is done inside a Vulkan driver. Applications have full control and responsibility. Any errors in how Vulkan is used often result in a crash. This project provides standard validation layers that can be enabled to ease development by helping developers verify their applications correctly use the Vulkan API.

Vulkan supports multiple GPUs and multiple global contexts (VkInstance).
The ICD loader is necessary to support multiple GPUs and the VkInstance-level Vulkan commands. Additionally, the loader manages inserting Vulkan layer libraries, including validation layers, between the application and the ICD.

The following components are available in this repository:

- Vulkan header files
- [*ICD Loader*](loader/)
- [*Validation Layers*](layers/)
- Demos and tests for the loader and validation layers

## How to Build and Run

[BUILD.md](BUILD.md) includes directions for building all the components, running the validation tests and running the demo applications.

Information on how to enable the various Validation layers is in [layers/README.md](layers/README.md).

Architecture and interface information for the loader is in [loader/LoaderAndLayerInterface.md](loader/LoaderAndLayerInterface.md).

## License

This work is released as open source under an MIT-style license from Khronos, including a Khronos copyright. See LICENSE.txt for a full list of licenses used in this repository.

## Acknowledgements

While this project has been developed primarily by LunarG, Inc., there are many other companies and individuals making this possible: Valve Corporation, funding project development; Google, providing significant contributions to the validation layers; and Khronos, providing oversight and hosting of the project.

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/build_windows_targets.bat:

echo off
REM
REM This Windows batch file builds this repository for the following targets:
REM     64-bit Debug
REM     64-bit Release
REM     32-bit Debug
REM     32-bit Release
REM It uses CMake to generate the project files and then invokes msbuild
REM to build them.
REM The update_external_sources.bat batch file must be executed before running
REM this batch file.
REM

REM Determine the appropriate CMake strings for the current version of Visual Studio
echo Determining VS version
python .\determine_vs_version.py > vsversion.tmp
set /p VS_VERSION=< vsversion.tmp
echo Detected Visual Studio Version as %VS_VERSION%
del /Q /F vsversion.tmp

rmdir /Q /S build
rmdir /Q /S build32

REM *******************************************
REM 64-bit build
REM *******************************************
mkdir build
pushd build

echo Generating 64-bit CMake files for Visual Studio %VS_VERSION%
cmake -G "Visual Studio %VS_VERSION% Win64" ..

echo Building 64-bit Debug
msbuild ALL_BUILD.vcxproj /p:Platform=x64 /p:Configuration=Debug /verbosity:quiet
if errorlevel 1 (
    echo.
    echo 64-bit Debug build failed!
    popd
    exit /B 1
)

echo Building 64-bit Release
msbuild ALL_BUILD.vcxproj /p:Platform=x64 /p:Configuration=Release /verbosity:quiet
if errorlevel 1 (
    echo.
    echo 64-bit Release build failed!
    popd
    exit /B 1
)
popd

REM *******************************************
REM 32-bit build
REM *******************************************
mkdir build32
pushd build32

echo Generating 32-bit CMake files for Visual Studio %VS_VERSION%
cmake -G "Visual Studio %VS_VERSION%" ..

echo Building 32-bit Debug
msbuild ALL_BUILD.vcxproj /p:Platform=x86 /p:Configuration=Debug /verbosity:quiet
if errorlevel 1 (
    echo.
    echo 32-bit Debug build failed!
    popd
    exit /B 1
)

echo Building 32-bit Release
msbuild ALL_BUILD.vcxproj /p:Platform=x86 /p:Configuration=Release /verbosity:quiet
if errorlevel 1 (
    echo.
    echo 32-bit Release build failed!
popd exit /B 1 ) popd exit /b 0 Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/cmake/000077500000000000000000000000001270147354000220025ustar00rootroot00000000000000Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/cmake/Copyright_cmake.txt000066400000000000000000000051151270147354000256550ustar00rootroot00000000000000CMake - Cross Platform Makefile Generator Copyright 2000-2009 Kitware, Inc., Insight Software Consortium All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the names of Kitware, Inc., the Insight Software Consortium, nor the names of their contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
------------------------------------------------------------------------------ The above copyright and license notice applies to distributions of CMake in source and binary form. Some source files contain additional notices of original copyright by their contributors; see each source for details. Third-party software packages supplied with CMake under compatible licenses provide their own copyright notices documented in corresponding subdirectories. ------------------------------------------------------------------------------ CMake was initially developed by Kitware with the following sponsorship: * National Library of Medicine at the National Institutes of Health as part of the Insight Segmentation and Registration Toolkit (ITK). * US National Labs (Los Alamos, Livermore, Sandia) ASC Parallel Visualization Initiative. * National Alliance for Medical Image Computing (NAMIC) is funded by the National Institutes of Health through the NIH Roadmap for Medical Research, Grant U54 EB005149. * Kitware, Inc. Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/cmake/FindImageMagick.cmake000066400000000000000000000247651270147354000257550ustar00rootroot00000000000000# - Find the ImageMagick binary suite. # This module will search for a set of ImageMagick tools specified # as components in the FIND_PACKAGE call. Typical components include, # but are not limited to (future versions of ImageMagick might have # additional components not listed here): # # animate # compare # composite # conjure # convert # display # identify # import # mogrify # montage # stream # # If no component is specified in the FIND_PACKAGE call, then it only # searches for the ImageMagick executable directory. This code defines # the following variables: # # ImageMagick_FOUND - TRUE if all components are found. # ImageMagick_EXECUTABLE_DIR - Full path to executables directory. # ImageMagick_<component>_FOUND - TRUE if <component> is found. # ImageMagick_<component>_EXECUTABLE - Full path to <component> executable.
# # There are also components for the following ImageMagick APIs: # # Magick++ # MagickWand # MagickCore # # For these components the following variables are set: # # ImageMagick_FOUND - TRUE if all components are found. # ImageMagick_INCLUDE_DIRS - Full paths to all include dirs. # ImageMagick_LIBRARIES - Full paths to all libraries. # ImageMagick_<component>_FOUND - TRUE if <component> is found. # ImageMagick_<component>_INCLUDE_DIRS - Full path to <component> include dirs. # ImageMagick_<component>_LIBRARIES - Full path to <component> libraries. # # Example Usages: # FIND_PACKAGE(ImageMagick) # FIND_PACKAGE(ImageMagick COMPONENTS convert) # FIND_PACKAGE(ImageMagick COMPONENTS convert mogrify display) # FIND_PACKAGE(ImageMagick COMPONENTS Magick++) # FIND_PACKAGE(ImageMagick COMPONENTS Magick++ convert) # # Note that the standard FIND_PACKAGE features are supported # (i.e., QUIET, REQUIRED, etc.). #============================================================================= # Copyright 2007-2009 Kitware, Inc. # Copyright 2007-2008 Miguel A. Figueroa-Villanueva # # Distributed under the OSI-approved BSD License (the "License"); # see accompanying file Copyright_cmake.txt for details. # # This software is distributed WITHOUT ANY WARRANTY; without even the # implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # See the License for more information. #============================================================================= # (To distribute this file outside of CMake, substitute the full # License text for the above reference.) find_package(PkgConfig QUIET) function(FIND_REGISTRY) if (WIN32) # If a 64-bit compile, it can only appear in "[HKLM]\\software\\ImageMagick" if (CMAKE_CL_64) GET_FILENAME_COMPONENT(IM_BIN_PATH [HKEY_LOCAL_MACHINE\\SOFTWARE\\ImageMagick\\Current;BinPath] ABSOLUTE CACHE) else() # This is dumb, but it's the only way I've been able to get this to work. CMake has no knowledge of the system's architecture.
# So, if we want to detect if we're running a 32-bit compile on a 64-bit OS, we need to manually check for the existence of # ImageMagick in the WOW6432Node of the registry first. If that fails, assume they want the 64-bit version. GET_FILENAME_COMPONENT(TESTING [HKEY_LOCAL_MACHINE\\SOFTWARE\\WOW6432Node\\ImageMagick\\Current;BinPath] PATH) # If the WOW6432Node reg string returns empty, assume 32-bit OS, and look in the standard reg path. if (TESTING STREQUAL "") GET_FILENAME_COMPONENT(IM_BIN_PATH [HKEY_LOCAL_MACHINE\\SOFTWARE\\ImageMagick\\Current;BinPath] ABSOLUTE CACHE) # Otherwise, the WOW6432Node returned a string, assume 32-bit build on 64-bit OS and use that string. else() GET_FILENAME_COMPONENT(IM_BIN_PATH [HKEY_LOCAL_MACHINE\\SOFTWARE\\WOW6432Node\\ImageMagick\\Current;BinPath] ABSOLUTE CACHE) endif() endif() set (IMAGEMAGIC_REG_PATH ${IM_BIN_PATH} PARENT_SCOPE) set (IMAGEMAGIC_REGINCLUDE_PATH ${IM_BIN_PATH}/include PARENT_SCOPE) set (IMAGEMAGIC_REGLIB_PATH ${IM_BIN_PATH}/lib PARENT_SCOPE) else() # No registry exists for Linux. So, just set these to empty strings. set (IMAGEMAGIC_REG_PATH "" PARENT_SCOPE) set (IMAGEMAGIC_REGINCLUDE_PATH "" PARENT_SCOPE) set (IMAGEMAGIC_REGLIB_PATH "" PARENT_SCOPE) endif() endfunction() #--------------------------------------------------------------------- # Helper functions #--------------------------------------------------------------------- FUNCTION(FIND_IMAGEMAGICK_API component header) SET(ImageMagick_${component}_FOUND FALSE PARENT_SCOPE) FIND_PATH(ImageMagick_${component}_INCLUDE_DIR NAMES ${header} PATHS ${ImageMagick_INCLUDE_DIRS} ${IMAGEMAGIC_REGINCLUDE_PATH} PATH_SUFFIXES ImageMagick ImageMagick-6 DOC "Path to the ImageMagick include dir." ) FIND_PATH(ImageMagick_${component}_ARCH_INCLUDE_DIR NAMES magick/magick-baseconfig.h PATHS ${ImageMagick_INCLUDE_DIRS} ${IMAGEMAGIC_REGINCLUDE_PATH} PATH_SUFFIXES ImageMagick ImageMagick-6 DOC "Path to the ImageMagick arch-specific include dir." 
) FIND_LIBRARY(ImageMagick_${component}_LIBRARY NAMES ${ARGN} PATHS ${IMAGEMAGIC_REGLIB_PATH} DOC "Path to the ImageMagick Magick++ library." ) IF(ImageMagick_${component}_INCLUDE_DIR AND ImageMagick_${component}_LIBRARY) SET(ImageMagick_${component}_FOUND TRUE PARENT_SCOPE) LIST(APPEND ImageMagick_INCLUDE_DIRS ${ImageMagick_${component}_INCLUDE_DIR} ) IF(EXISTS ${ImageMagick_${component}_ARCH_INCLUDE_DIR}) LIST(APPEND ImageMagick_INCLUDE_DIRS ${ImageMagick_${component}_ARCH_INCLUDE_DIR} ) ENDIF(EXISTS ${ImageMagick_${component}_ARCH_INCLUDE_DIR}) LIST(REMOVE_DUPLICATES ImageMagick_INCLUDE_DIRS) SET(ImageMagick_INCLUDE_DIRS ${ImageMagick_INCLUDE_DIRS} PARENT_SCOPE) LIST(APPEND ImageMagick_LIBRARIES ${ImageMagick_${component}_LIBRARY} ) SET(ImageMagick_LIBRARIES ${ImageMagick_LIBRARIES} PARENT_SCOPE) ENDIF(ImageMagick_${component}_INCLUDE_DIR AND ImageMagick_${component}_LIBRARY) ENDFUNCTION(FIND_IMAGEMAGICK_API) FUNCTION(FIND_IMAGEMAGICK_EXE component) SET(_IMAGEMAGICK_EXECUTABLE ${ImageMagick_EXECUTABLE_DIR}/${component}${CMAKE_EXECUTABLE_SUFFIX}) IF(EXISTS ${_IMAGEMAGICK_EXECUTABLE}) SET(ImageMagick_${component}_EXECUTABLE ${_IMAGEMAGICK_EXECUTABLE} PARENT_SCOPE ) SET(ImageMagick_${component}_FOUND TRUE PARENT_SCOPE) ELSE(EXISTS ${_IMAGEMAGICK_EXECUTABLE}) SET(ImageMagick_${component}_FOUND FALSE PARENT_SCOPE) ENDIF(EXISTS ${_IMAGEMAGICK_EXECUTABLE}) ENDFUNCTION(FIND_IMAGEMAGICK_EXE) #--------------------------------------------------------------------- # Start Actual Work #--------------------------------------------------------------------- FIND_REGISTRY() # Try to find a ImageMagick installation binary path. FIND_PATH(ImageMagick_EXECUTABLE_DIR NAMES mogrify${CMAKE_EXECUTABLE_SUFFIX} PATHS ${IMAGEMAGIC_REG_PATH} DOC "Path to the ImageMagick binary directory." NO_DEFAULT_PATH ) FIND_PATH(ImageMagick_EXECUTABLE_DIR NAMES mogrify${CMAKE_EXECUTABLE_SUFFIX} ) # Find each component. 
Search for all tools in same dir # <ImageMagick_EXECUTABLE_DIR>; otherwise they should be found # independently and not in a cohesive module such as this one. SET(ImageMagick_FOUND TRUE) FOREACH(component ${ImageMagick_FIND_COMPONENTS} # DEPRECATED: forced components for backward compatibility convert mogrify import montage composite ) IF(component STREQUAL "Magick++") FIND_IMAGEMAGICK_API(Magick++ Magick++.h Magick++ CORE_RL_Magick++_ Magick++-6.Q16 Magick++-Q16 Magick++-6.Q8 Magick++-Q8 Magick++-6.Q16HDRI Magick++-Q16HDRI Magick++-6.Q8HDRI Magick++-Q8HDRI ) ELSEIF(component STREQUAL "MagickWand") FIND_IMAGEMAGICK_API(MagickWand wand/MagickWand.h Wand MagickWand CORE_RL_wand_ MagickWand-6.Q16 MagickWand-Q16 MagickWand-6.Q8 MagickWand-Q8 MagickWand-6.Q16HDRI MagickWand-Q16HDRI MagickWand-6.Q8HDRI MagickWand-Q8HDRI ) ELSEIF(component STREQUAL "MagickCore") FIND_IMAGEMAGICK_API(MagickCore magick/MagickCore.h Magick MagickCore CORE_RL_magick_ MagickCore-6.Q16 MagickCore-Q16 MagickCore-6.Q8 MagickCore-Q8 MagickCore-6.Q16HDRI MagickCore-Q16HDRI MagickCore-6.Q8HDRI MagickCore-Q8HDRI ) ELSE(component STREQUAL "Magick++") IF(ImageMagick_EXECUTABLE_DIR) FIND_IMAGEMAGICK_EXE(${component}) ENDIF(ImageMagick_EXECUTABLE_DIR) ENDIF(component STREQUAL "Magick++") IF(NOT ImageMagick_${component}_FOUND) LIST(FIND ImageMagick_FIND_COMPONENTS ${component} is_requested) IF(is_requested GREATER -1) SET(ImageMagick_FOUND FALSE) ENDIF(is_requested GREATER -1) ENDIF(NOT ImageMagick_${component}_FOUND) ENDFOREACH(component) SET(ImageMagick_INCLUDE_DIRS ${ImageMagick_INCLUDE_DIRS}) SET(ImageMagick_LIBRARIES ${ImageMagick_LIBRARIES}) #--------------------------------------------------------------------- # Standard Package Output #--------------------------------------------------------------------- INCLUDE(FindPackageHandleStandardArgs) FIND_PACKAGE_HANDLE_STANDARD_ARGS( ImageMagick DEFAULT_MSG ImageMagick_FOUND ) # Maintain consistency with all other variables.
SET(ImageMagick_FOUND ${IMAGEMAGICK_FOUND}) #--------------------------------------------------------------------- # DEPRECATED: Setting variables for backward compatibility. #--------------------------------------------------------------------- SET(IMAGEMAGICK_BINARY_PATH ${ImageMagick_EXECUTABLE_DIR} CACHE PATH "Path to the ImageMagick binary directory.") SET(IMAGEMAGICK_CONVERT_EXECUTABLE ${ImageMagick_convert_EXECUTABLE} CACHE FILEPATH "Path to ImageMagick's convert executable.") SET(IMAGEMAGICK_MOGRIFY_EXECUTABLE ${ImageMagick_mogrify_EXECUTABLE} CACHE FILEPATH "Path to ImageMagick's mogrify executable.") SET(IMAGEMAGICK_IMPORT_EXECUTABLE ${ImageMagick_import_EXECUTABLE} CACHE FILEPATH "Path to ImageMagick's import executable.") SET(IMAGEMAGICK_MONTAGE_EXECUTABLE ${ImageMagick_montage_EXECUTABLE} CACHE FILEPATH "Path to ImageMagick's montage executable.") SET(IMAGEMAGICK_COMPOSITE_EXECUTABLE ${ImageMagick_composite_EXECUTABLE} CACHE FILEPATH "Path to ImageMagick's composite executable.") MARK_AS_ADVANCED( IMAGEMAGICK_BINARY_PATH IMAGEMAGICK_CONVERT_EXECUTABLE IMAGEMAGICK_MOGRIFY_EXECUTABLE IMAGEMAGICK_IMPORT_EXECUTABLE IMAGEMAGICK_MONTAGE_EXECUTABLE IMAGEMAGICK_COMPOSITE_EXECUTABLE ) Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/cmake/FindPCIAccess.cmake000066400000000000000000000012541270147354000253440ustar00rootroot00000000000000# - FindPCIAccess # # Copyright 2015 Valve Corporation find_package(PkgConfig) pkg_check_modules(PC_PCIACCESS QUIET pciaccess) find_path(PCIACCESS_INCLUDE_DIR NAMES pciaccess.h HINTS ${PC_PCIACCESS_INCLUDEDIR} ${PC_PCIACCESS_INCLUDE_DIRS} ) find_library(PCIACCESS_LIBRARY NAMES pciaccess HINTS ${PC_PCIACCESS_LIBDIR} ${PC_PCIACCESS_LIBRARY_DIRS} ) include(FindPackageHandleStandardArgs) find_package_handle_standard_args(PCIAccess DEFAULT_MSG PCIACCESS_INCLUDE_DIR PCIACCESS_LIBRARY) mark_as_advanced(PCIACCESS_INCLUDE_DIR PCIACCESS_LIBRARY) set(PCIACCESS_INCLUDE_DIRS ${PCIACCESS_INCLUDE_DIR}) set(PCIACCESS_LIBRARIES ${PCIACCESS_LIBRARY}) 
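# --------------------------------------------------------------------- # Example usage of this find module (an illustrative sketch only, kept # entirely in comments; the target name `pcitool` and source `main.c` # below are hypothetical and not part of this repository): # # find_package(PCIAccess REQUIRED) # add_executable(pcitool main.c) # target_include_directories(pcitool PRIVATE ${PCIACCESS_INCLUDE_DIRS}) # target_link_libraries(pcitool ${PCIACCESS_LIBRARIES}) # # As with the other find modules in this directory, the module relies on # pkg-config hints when available but falls back to a plain path/library # search, so it also works on systems without a pciaccess.pc file. # ---------------------------------------------------------------------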
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/cmake/FindPthreadStubs.cmake000066400000000000000000000005171270147354000262200ustar00rootroot00000000000000# - FindPthreadStubs # # Copyright (C) 2015 Valve Corporation find_package(PkgConfig) pkg_check_modules(PC_PTHREADSTUBS QUIET pthread-stubs) include(FindPackageHandleStandardArgs) find_package_handle_standard_args(PthreadStubs DEFAULT_MSG PC_PTHREADSTUBS_FOUND) set(PTHREADSTUBS_INCLUDE_DIRS "") set(PTHREADSTUBS_LIBRARIES "") Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/cmake/FindUDev.cmake000066400000000000000000000011411270147354000244450ustar00rootroot00000000000000# - FindUDev # # Copyright (C) 2015 Valve Corporation find_package(PkgConfig) pkg_check_modules(PC_LIBUDEV QUIET libudev) find_path(UDEV_INCLUDE_DIR NAMES libudev.h HINTS ${PC_LIBUDEV_INCLUDEDIR} ${PC_LIBUDEV_INCLUDE_DIRS} ) find_library(UDEV_LIBRARY NAMES udev HINTS ${PC_LIBUDEV_LIBDIR} ${PC_LIBUDEV_LIBRARY_DIRS} ) include(FindPackageHandleStandardArgs) find_package_handle_standard_args(UDev DEFAULT_MSG UDEV_INCLUDE_DIR UDEV_LIBRARY) mark_as_advanced(UDEV_INCLUDE_DIR UDEV_LIBRARY) set(UDEV_INCLUDE_DIRS ${UDEV_INCLUDE_DIR}) set(UDEV_LIBRARIES ${UDEV_LIBRARY}) Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/cmake/FindValgrind.cmake000066400000000000000000000007741270147354000253630ustar00rootroot00000000000000# - FindValgrind # # Copyright (C) 2015 Valve Corporation find_package(PkgConfig) pkg_check_modules(PC_VALGRIND QUIET valgrind) find_path(VALGRIND_INCLUDE_DIR NAMES valgrind.h memcheck.h HINTS ${PC_VALGRIND_INCLUDEDIR} ${PC_VALGRIND_INCLUDE_DIRS} ) include(FindPackageHandleStandardArgs) find_package_handle_standard_args(Valgrind DEFAULT_MSG VALGRIND_INCLUDE_DIR) mark_as_advanced(VALGRIND_INCLUDE_DIR) set(VALGRIND_INCLUDE_DIRS ${VALGRIND_INCLUDE_DIR}) set(VALGRIND_LIBRARIES "") Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/cmake/FindX11_XCB.cmake000066400000000000000000000023161270147354000246540ustar00rootroot00000000000000# - Try to find libX11-xcb # 
Once done this will define # # X11_XCB_FOUND - system has libX11-xcb # X11_XCB_LIBRARIES - Link these to use libX11-xcb # X11_XCB_INCLUDE_DIR - the libX11-xcb include dir # X11_XCB_DEFINITIONS - compiler switches required for using libX11-xcb # Copyright (c) 2011 Fredrik Höglund # Copyright (c) 2008 Helio Chissini de Castro, # Copyright (c) 2007 Matthias Kretz, # # Redistribution and use is allowed according to the terms of the BSD license. # For details see the accompanying COPYING-CMAKE-SCRIPTS file. IF (NOT WIN32) # use pkg-config to get the directories and then use these values # in the FIND_PATH() and FIND_LIBRARY() calls FIND_PACKAGE(PkgConfig) PKG_CHECK_MODULES(PKG_X11_XCB QUIET x11-xcb) SET(X11_XCB_DEFINITIONS ${PKG_X11_XCB_CFLAGS}) FIND_PATH(X11_XCB_INCLUDE_DIR NAMES X11/Xlib-xcb.h HINTS ${PKG_X11_XCB_INCLUDE_DIRS}) FIND_LIBRARY(X11_XCB_LIBRARIES NAMES X11-xcb HINTS ${PKG_X11_XCB_LIBRARY_DIRS}) include(FindPackageHandleStandardArgs) FIND_PACKAGE_HANDLE_STANDARD_ARGS(X11_XCB DEFAULT_MSG X11_XCB_LIBRARIES X11_XCB_INCLUDE_DIR) MARK_AS_ADVANCED(X11_XCB_INCLUDE_DIR X11_XCB_LIBRARIES) ENDIF (NOT WIN32) Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/cmake/FindXCB.cmake000066400000000000000000000024361270147354000242260ustar00rootroot00000000000000# - FindXCB # # Copyright (C) 2015 Valve Corporation find_package(PkgConfig) if(NOT XCB_FIND_COMPONENTS) set(XCB_FIND_COMPONENTS xcb) endif() include(FindPackageHandleStandardArgs) set(XCB_FOUND true) set(XCB_INCLUDE_DIRS "") set(XCB_LIBRARIES "") foreach(comp ${XCB_FIND_COMPONENTS}) # component name string(TOUPPER ${comp} compname) string(REPLACE "-" "_" compname ${compname}) # header name string(REPLACE "xcb-" "" headername xcb/${comp}.h) # library name set(libname ${comp}) pkg_check_modules(PC_${comp} QUIET ${comp}) find_path(${compname}_INCLUDE_DIR NAMES ${headername} HINTS ${PC_${comp}_INCLUDEDIR} ${PC_${comp}_INCLUDE_DIRS} ) find_library(${compname}_LIBRARY NAMES ${libname} HINTS ${PC_${comp}_LIBDIR} 
${PC_${comp}_LIBRARY_DIRS} ) find_package_handle_standard_args(${comp} FOUND_VAR ${comp}_FOUND REQUIRED_VARS ${compname}_INCLUDE_DIR ${compname}_LIBRARY) mark_as_advanced(${compname}_INCLUDE_DIR ${compname}_LIBRARY) list(APPEND XCB_INCLUDE_DIRS ${${compname}_INCLUDE_DIR}) list(APPEND XCB_LIBRARIES ${${compname}_LIBRARY}) if(NOT ${comp}_FOUND) set(XCB_FOUND false) endif() endforeach() list(REMOVE_DUPLICATES XCB_INCLUDE_DIRS) Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/cmake/README.txt000066400000000000000000000004001270147354000234720ustar00rootroot00000000000000The following files are copied out of CMake 2.8. - FindImageMagick.cmake I have also copied the Copyright.txt file out of cmake and renamed it: - Copyright_cmake.txt All other files are created and/or maintained by either LunarG Inc or Valve Corporation.Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/000077500000000000000000000000001270147354000220315ustar00rootroot00000000000000Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/CMakeLists.txt000066400000000000000000000102221270147354000245660ustar00rootroot00000000000000if(NOT WIN32) find_package(XCB REQUIRED) endif() file(GLOB TEXTURES "${PROJECT_SOURCE_DIR}/demos/*.ppm" ) file(COPY ${TEXTURES} DESTINATION ${CMAKE_BINARY_DIR}/demos) set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS}") set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}") if(WIN32) set (LIBRARIES "vulkan-${MAJOR}") elseif(UNIX) set (LIBRARIES "vulkan") else() endif() if(WIN32) # For Windows, since 32-bit and 64-bit items can co-exist, we build each in its own build directory. # 32-bit target data goes in build32, and 64-bit target data goes into build. So, include/link the # appropriate data at build time. 
if (CMAKE_CL_64) set (BUILDTGT_DIR build) else () set (BUILDTGT_DIR build32) endif() add_custom_command(OUTPUT ${CMAKE_BINARY_DIR}/demos/tri-vert.spv COMMAND ${GLSLANG_VALIDATOR} -s -V ${PROJECT_SOURCE_DIR}/demos/tri.vert COMMAND move vert.spv ${CMAKE_BINARY_DIR}/demos/tri-vert.spv DEPENDS tri.vert ${GLSLANG_VALIDATOR} ) add_custom_command(OUTPUT ${CMAKE_BINARY_DIR}/demos/tri-frag.spv COMMAND ${GLSLANG_VALIDATOR} -s -V ${PROJECT_SOURCE_DIR}/demos/tri.frag COMMAND move frag.spv ${CMAKE_BINARY_DIR}/demos/tri-frag.spv DEPENDS tri.frag ${GLSLANG_VALIDATOR} ) add_custom_command(OUTPUT ${CMAKE_BINARY_DIR}/demos/cube-vert.spv COMMAND ${GLSLANG_VALIDATOR} -s -V ${PROJECT_SOURCE_DIR}/demos/cube.vert COMMAND move vert.spv ${CMAKE_BINARY_DIR}/demos/cube-vert.spv DEPENDS cube.vert ${GLSLANG_VALIDATOR} ) add_custom_command(OUTPUT ${CMAKE_BINARY_DIR}/demos/cube-frag.spv COMMAND ${GLSLANG_VALIDATOR} -s -V ${PROJECT_SOURCE_DIR}/demos/cube.frag COMMAND move frag.spv ${CMAKE_BINARY_DIR}/demos/cube-frag.spv DEPENDS cube.frag ${GLSLANG_VALIDATOR} ) file(COPY cube.vcxproj.user DESTINATION ${CMAKE_BINARY_DIR}/demos) file(COPY tri.vcxproj.user DESTINATION ${CMAKE_BINARY_DIR}/demos) file(COPY vulkaninfo.vcxproj.user DESTINATION ${CMAKE_BINARY_DIR}/demos) else() add_custom_command(OUTPUT ${CMAKE_BINARY_DIR}/demos/tri-vert.spv COMMAND ${GLSLANG_VALIDATOR} -s -V -o tri-vert.spv ${PROJECT_SOURCE_DIR}/demos/tri.vert DEPENDS tri.vert ${GLSLANG_VALIDATOR} ) add_custom_command(OUTPUT ${CMAKE_BINARY_DIR}/demos/tri-frag.spv COMMAND ${GLSLANG_VALIDATOR} -s -V -o tri-frag.spv ${PROJECT_SOURCE_DIR}/demos/tri.frag DEPENDS tri.frag ${GLSLANG_VALIDATOR} ) add_custom_command(OUTPUT ${CMAKE_BINARY_DIR}/demos/cube-vert.spv COMMAND ${GLSLANG_VALIDATOR} -s -V -o cube-vert.spv ${PROJECT_SOURCE_DIR}/demos/cube.vert DEPENDS cube.vert ${GLSLANG_VALIDATOR} ) add_custom_command(OUTPUT ${CMAKE_BINARY_DIR}/demos/cube-frag.spv COMMAND ${GLSLANG_VALIDATOR} -s -V -o cube-frag.spv ${PROJECT_SOURCE_DIR}/demos/cube.frag 
DEPENDS cube.frag ${GLSLANG_VALIDATOR} ) endif() if(NOT WIN32) include_directories ( ${XCB_INCLUDE_DIRS} "${PROJECT_SOURCE_DIR}/icd/common" ) link_libraries(${XCB_LIBRARIES} vulkan m) endif() if(WIN32) include_directories ( "${PROJECT_SOURCE_DIR}/icd/common" ) set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D_CRT_SECURE_NO_WARNINGS -D_USE_MATH_DEFINES") set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -D_CRT_SECURE_NO_WARNINGS -D_USE_MATH_DEFINES") endif() add_executable(vulkaninfo vulkaninfo.c) target_link_libraries(vulkaninfo ${LIBRARIES}) if(UNIX) add_executable(tri tri.c ${CMAKE_BINARY_DIR}/demos/tri-vert.spv ${CMAKE_BINARY_DIR}/demos/tri-frag.spv) else() add_executable(tri WIN32 tri.c ${CMAKE_BINARY_DIR}/demos/tri-vert.spv ${CMAKE_BINARY_DIR}/demos/tri-frag.spv) endif() target_link_libraries(tri ${LIBRARIES}) if(NOT WIN32) add_executable(cube cube.c ${CMAKE_BINARY_DIR}/demos/cube-vert.spv ${CMAKE_BINARY_DIR}/demos/cube-frag.spv) target_link_libraries(cube ${LIBRARIES}) else() if (CMAKE_CL_64) set (LIB_DIR "Win64") else() set (LIB_DIR "Win32") endif() add_executable(cube WIN32 cube.c ${CMAKE_BINARY_DIR}/demos/cube-vert.spv ${CMAKE_BINARY_DIR}/demos/cube-frag.spv) target_link_libraries(cube ${LIBRARIES} ) endif() add_subdirectory(smoke) Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/cube.c000066400000000000000000003077521270147354000231310ustar00rootroot00000000000000/* * Copyright (c) 2015-2016 The Khronos Group Inc. * Copyright (c) 2015-2016 Valve Corporation * Copyright (c) 2015-2016 LunarG, Inc. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and/or associated documentation files (the "Materials"), to * deal in the Materials without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Materials, and to permit persons to whom the Materials are * furnished to do so, subject to the following conditions: * * The above copyright notice(s) and this permission notice shall be included in * all copies or substantial portions of the Materials. * * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. * * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE * USE OR OTHER DEALINGS IN THE MATERIALS. 
* * Author: Chia-I Wu * Author: Courtney Goeltzenleuchter * Author: Ian Elliott * Author: Jon Ashburn */ #define _GNU_SOURCE #include <stdio.h> #include <stdlib.h> #include <string.h> #include <stdbool.h> #include <assert.h> #include <signal.h> #ifdef _WIN32 #pragma comment(linker, "/subsystem:windows") #define APP_NAME_STR_LEN 80 #endif // _WIN32 #include <vulkan/vulkan.h> #include <vulkan/vk_sdk_platform.h> #include "linmath.h" #define DEMO_TEXTURE_COUNT 1 #define APP_SHORT_NAME "cube" #define APP_LONG_NAME "The Vulkan Cube Demo Program" #define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0])) #if defined(NDEBUG) && defined(__GNUC__) #define U_ASSERT_ONLY __attribute__((unused)) #else #define U_ASSERT_ONLY #endif #ifdef _WIN32 #define ERR_EXIT(err_msg, err_class) \ do { \ MessageBox(NULL, err_msg, err_class, MB_OK); \ exit(1); \ } while (0) #else // _WIN32 #define ERR_EXIT(err_msg, err_class) \ do { \ printf(err_msg); \ fflush(stdout); \ exit(1); \ } while (0) #endif // _WIN32 #define GET_INSTANCE_PROC_ADDR(inst, entrypoint) \ { \ demo->fp##entrypoint = \ (PFN_vk##entrypoint)vkGetInstanceProcAddr(inst, "vk" #entrypoint); \ if (demo->fp##entrypoint == NULL) { \ ERR_EXIT("vkGetInstanceProcAddr failed to find vk" #entrypoint, \ "vkGetInstanceProcAddr Failure"); \ } \ } static PFN_vkGetDeviceProcAddr g_gdpa = NULL; #define GET_DEVICE_PROC_ADDR(dev, entrypoint) \ { \ if (!g_gdpa) \ g_gdpa = (PFN_vkGetDeviceProcAddr)vkGetInstanceProcAddr( \ demo->inst, "vkGetDeviceProcAddr"); \ demo->fp##entrypoint = \ (PFN_vk##entrypoint)g_gdpa(dev, "vk" #entrypoint); \ if (demo->fp##entrypoint == NULL) { \ ERR_EXIT("vkGetDeviceProcAddr failed to find vk" #entrypoint, \ "vkGetDeviceProcAddr Failure"); \ } \ } /* * structure to track all objects related to a texture.
*/ struct texture_object { VkSampler sampler; VkImage image; VkImageLayout imageLayout; VkMemoryAllocateInfo mem_alloc; VkDeviceMemory mem; VkImageView view; int32_t tex_width, tex_height; }; static char *tex_files[] = {"lunarg.ppm"}; static int validation_error = 0; struct vkcube_vs_uniform { // Must start with MVP float mvp[4][4]; float position[12 * 3][4]; float color[12 * 3][4]; }; struct vktexcube_vs_uniform { // Must start with MVP float mvp[4][4]; float position[12 * 3][4]; float attr[12 * 3][4]; }; //-------------------------------------------------------------------------------------- // Mesh and VertexFormat Data //-------------------------------------------------------------------------------------- // clang-format off struct Vertex { float posX, posY, posZ, posW; // Position data float r, g, b, a; // Color }; struct VertexPosTex { float posX, posY, posZ, posW; // Position data float u, v, s, t; // Texcoord }; #define XYZ1(_x_, _y_, _z_) (_x_), (_y_), (_z_), 1.f #define UV(_u_, _v_) (_u_), (_v_), 0.f, 1.f static const float g_vertex_buffer_data[] = { -1.0f,-1.0f,-1.0f, // -X side -1.0f,-1.0f, 1.0f, -1.0f, 1.0f, 1.0f, -1.0f, 1.0f, 1.0f, -1.0f, 1.0f,-1.0f, -1.0f,-1.0f,-1.0f, -1.0f,-1.0f,-1.0f, // -Z side 1.0f, 1.0f,-1.0f, 1.0f,-1.0f,-1.0f, -1.0f,-1.0f,-1.0f, -1.0f, 1.0f,-1.0f, 1.0f, 1.0f,-1.0f, -1.0f,-1.0f,-1.0f, // -Y side 1.0f,-1.0f,-1.0f, 1.0f,-1.0f, 1.0f, -1.0f,-1.0f,-1.0f, 1.0f,-1.0f, 1.0f, -1.0f,-1.0f, 1.0f, -1.0f, 1.0f,-1.0f, // +Y side -1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, -1.0f, 1.0f,-1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f,-1.0f, 1.0f, 1.0f,-1.0f, // +X side 1.0f, 1.0f, 1.0f, 1.0f,-1.0f, 1.0f, 1.0f,-1.0f, 1.0f, 1.0f,-1.0f,-1.0f, 1.0f, 1.0f,-1.0f, -1.0f, 1.0f, 1.0f, // +Z side -1.0f,-1.0f, 1.0f, 1.0f, 1.0f, 1.0f, -1.0f,-1.0f, 1.0f, 1.0f,-1.0f, 1.0f, 1.0f, 1.0f, 1.0f, }; static const float g_uv_buffer_data[] = { 0.0f, 0.0f, // -X side 1.0f, 0.0f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, // -Z side 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 
1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 1.0f, // -Y side 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, // +Y side 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, // +X side 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f, // +Z side 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, }; // clang-format on void dumpMatrix(const char *note, mat4x4 MVP) { int i; printf("%s: \n", note); for (i = 0; i < 4; i++) { printf("%f, %f, %f, %f\n", MVP[i][0], MVP[i][1], MVP[i][2], MVP[i][3]); } printf("\n"); fflush(stdout); } void dumpVec4(const char *note, vec4 vector) { printf("%s: \n", note); printf("%f, %f, %f, %f\n", vector[0], vector[1], vector[2], vector[3]); printf("\n"); fflush(stdout); } VKAPI_ATTR VkBool32 VKAPI_CALL dbgFunc(VkFlags msgFlags, VkDebugReportObjectTypeEXT objType, uint64_t srcObject, size_t location, int32_t msgCode, const char *pLayerPrefix, const char *pMsg, void *pUserData) { char *message = (char *)malloc(strlen(pMsg) + 100); assert(message); if (msgFlags & VK_DEBUG_REPORT_ERROR_BIT_EXT) { sprintf(message, "ERROR: [%s] Code %d : %s", pLayerPrefix, msgCode, pMsg); validation_error = 1; } else if (msgFlags & VK_DEBUG_REPORT_WARNING_BIT_EXT) { // We know that we're submitting queues without fences, ignore this // warning if (strstr(pMsg, "vkQueueSubmit parameter, VkFence fence, is null pointer")) { return false; } sprintf(message, "WARNING: [%s] Code %d : %s", pLayerPrefix, msgCode, pMsg); validation_error = 1; } else { validation_error = 1; return false; } #ifdef _WIN32 MessageBox(NULL, message, "Alert", MB_OK); #else printf("%s\n", message); fflush(stdout); #endif free(message); /* * false indicates that layer should not bail-out of an * API call that had validation failures. This may mean that the * app dies inside the driver due to invalid parameter(s). * That's what would happen without validation layers, so we'll * keep that behavior here. 
*/ return false; } VKAPI_ATTR VkBool32 VKAPI_CALL BreakCallback(VkFlags msgFlags, VkDebugReportObjectTypeEXT objType, uint64_t srcObject, size_t location, int32_t msgCode, const char *pLayerPrefix, const char *pMsg, void *pUserData) { #ifndef WIN32 raise(SIGTRAP); #else DebugBreak(); #endif return false; } typedef struct _SwapchainBuffers { VkImage image; VkCommandBuffer cmd; VkImageView view; } SwapchainBuffers; struct demo { #ifdef _WIN32 #define APP_NAME_STR_LEN 80 HINSTANCE connection; // hInstance - Windows Instance char name[APP_NAME_STR_LEN]; // Name to put on the window/icon HWND window; // hWnd - window handle #else // _WIN32 xcb_connection_t *connection; xcb_screen_t *screen; xcb_window_t window; xcb_intern_atom_reply_t *atom_wm_delete_window; #endif // _WIN32 VkSurfaceKHR surface; bool prepared; bool use_staging_buffer; VkInstance inst; VkPhysicalDevice gpu; VkDevice device; VkQueue queue; uint32_t graphics_queue_node_index; VkPhysicalDeviceProperties gpu_props; VkQueueFamilyProperties *queue_props; VkPhysicalDeviceMemoryProperties memory_properties; uint32_t enabled_extension_count; uint32_t enabled_layer_count; char *extension_names[64]; char *device_validation_layers[64]; int width, height; VkFormat format; VkColorSpaceKHR color_space; PFN_vkGetPhysicalDeviceSurfaceSupportKHR fpGetPhysicalDeviceSurfaceSupportKHR; PFN_vkGetPhysicalDeviceSurfaceCapabilitiesKHR fpGetPhysicalDeviceSurfaceCapabilitiesKHR; PFN_vkGetPhysicalDeviceSurfaceFormatsKHR fpGetPhysicalDeviceSurfaceFormatsKHR; PFN_vkGetPhysicalDeviceSurfacePresentModesKHR fpGetPhysicalDeviceSurfacePresentModesKHR; PFN_vkCreateSwapchainKHR fpCreateSwapchainKHR; PFN_vkDestroySwapchainKHR fpDestroySwapchainKHR; PFN_vkGetSwapchainImagesKHR fpGetSwapchainImagesKHR; PFN_vkAcquireNextImageKHR fpAcquireNextImageKHR; PFN_vkQueuePresentKHR fpQueuePresentKHR; uint32_t swapchainImageCount; VkSwapchainKHR swapchain; SwapchainBuffers *buffers; VkCommandPool cmd_pool; struct { VkFormat format; VkImage image; 
VkMemoryAllocateInfo mem_alloc; VkDeviceMemory mem; VkImageView view; } depth; struct texture_object textures[DEMO_TEXTURE_COUNT]; struct { VkBuffer buf; VkMemoryAllocateInfo mem_alloc; VkDeviceMemory mem; VkDescriptorBufferInfo buffer_info; } uniform_data; VkCommandBuffer cmd; // Buffer for initialization commands VkPipelineLayout pipeline_layout; VkDescriptorSetLayout desc_layout; VkPipelineCache pipelineCache; VkRenderPass render_pass; VkPipeline pipeline; mat4x4 projection_matrix; mat4x4 view_matrix; mat4x4 model_matrix; float spin_angle; float spin_increment; bool pause; VkShaderModule vert_shader_module; VkShaderModule frag_shader_module; VkDescriptorPool desc_pool; VkDescriptorSet desc_set; VkFramebuffer *framebuffers; bool quit; int32_t curFrame; int32_t frameCount; bool validate; bool use_break; PFN_vkCreateDebugReportCallbackEXT CreateDebugReportCallback; PFN_vkDestroyDebugReportCallbackEXT DestroyDebugReportCallback; VkDebugReportCallbackEXT msg_callback; PFN_vkDebugReportMessageEXT DebugReportMessage; uint32_t current_buffer; uint32_t queue_count; }; // Forward declaration: static void demo_resize(struct demo *demo); static bool memory_type_from_properties(struct demo *demo, uint32_t typeBits, VkFlags requirements_mask, uint32_t *typeIndex) { // Search memtypes to find first index with those properties for (uint32_t i = 0; i < 32; i++) { if ((typeBits & 1) == 1) { // Type is available, does it match user properties? 
            if ((demo->memory_properties.memoryTypes[i].propertyFlags &
                 requirements_mask) == requirements_mask) {
                *typeIndex = i;
                return true;
            }
        }
        typeBits >>= 1;
    }
    // No memory types matched, return failure
    return false;
}

static void demo_flush_init_cmd(struct demo *demo) {
    VkResult U_ASSERT_ONLY err;

    if (demo->cmd == VK_NULL_HANDLE)
        return;

    err = vkEndCommandBuffer(demo->cmd);
    assert(!err);

    const VkCommandBuffer cmd_bufs[] = {demo->cmd};
    VkFence nullFence = VK_NULL_HANDLE;
    VkSubmitInfo submit_info = {.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
                                .pNext = NULL,
                                .waitSemaphoreCount = 0,
                                .pWaitSemaphores = NULL,
                                .pWaitDstStageMask = NULL,
                                .commandBufferCount = 1,
                                .pCommandBuffers = cmd_bufs,
                                .signalSemaphoreCount = 0,
                                .pSignalSemaphores = NULL};

    err = vkQueueSubmit(demo->queue, 1, &submit_info, nullFence);
    assert(!err);

    err = vkQueueWaitIdle(demo->queue);
    assert(!err);

    vkFreeCommandBuffers(demo->device, demo->cmd_pool, 1, cmd_bufs);
    demo->cmd = VK_NULL_HANDLE;
}

static void demo_set_image_layout(struct demo *demo, VkImage image,
                                  VkImageAspectFlags aspectMask,
                                  VkImageLayout old_image_layout,
                                  VkImageLayout new_image_layout,
                                  VkAccessFlagBits srcAccessMask) {
    VkResult U_ASSERT_ONLY err;

    if (demo->cmd == VK_NULL_HANDLE) {
        const VkCommandBufferAllocateInfo cmd = {
            .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,
            .pNext = NULL,
            .commandPool = demo->cmd_pool,
            .level = VK_COMMAND_BUFFER_LEVEL_PRIMARY,
            .commandBufferCount = 1,
        };

        err = vkAllocateCommandBuffers(demo->device, &cmd, &demo->cmd);
        assert(!err);

        VkCommandBufferInheritanceInfo cmd_buf_hinfo = {
            .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO,
            .pNext = NULL,
            .renderPass = VK_NULL_HANDLE,
            .subpass = 0,
            .framebuffer = VK_NULL_HANDLE,
            .occlusionQueryEnable = VK_FALSE,
            .queryFlags = 0,
            .pipelineStatistics = 0,
        };
        VkCommandBufferBeginInfo cmd_buf_info = {
            .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
            .pNext = NULL,
            .flags = 0,
            .pInheritanceInfo = &cmd_buf_hinfo,
        };
        err = vkBeginCommandBuffer(demo->cmd,
                                   &cmd_buf_info);
        assert(!err);
    }

    VkImageMemoryBarrier image_memory_barrier = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
        .pNext = NULL,
        .srcAccessMask = srcAccessMask,
        .dstAccessMask = 0,
        .oldLayout = old_image_layout,
        .newLayout = new_image_layout,
        .image = image,
        .subresourceRange = {aspectMask, 0, 1, 0, 1}};

    if (new_image_layout == VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL) {
        /* Make sure any transfer writes to this image wait for the layout
         * transition to complete */
        image_memory_barrier.dstAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
    }

    if (new_image_layout == VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL) {
        image_memory_barrier.dstAccessMask =
            VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
    }

    if (new_image_layout == VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL) {
        image_memory_barrier.dstAccessMask =
            VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT;
    }

    if (new_image_layout == VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL) {
        /* Make sure any Copy or CPU writes to image are flushed */
        image_memory_barrier.dstAccessMask =
            VK_ACCESS_SHADER_READ_BIT | VK_ACCESS_INPUT_ATTACHMENT_READ_BIT;
    }

    VkImageMemoryBarrier *pmemory_barrier = &image_memory_barrier;

    VkPipelineStageFlags src_stages = VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT;
    VkPipelineStageFlags dest_stages = VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT;

    vkCmdPipelineBarrier(demo->cmd, src_stages, dest_stages, 0, 0, NULL, 0,
                         NULL, 1, pmemory_barrier);
}

static void demo_draw_build_cmd(struct demo *demo, VkCommandBuffer cmd_buf) {
    VkCommandBufferInheritanceInfo cmd_buf_hinfo = {
        .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO,
        .pNext = NULL,
        .renderPass = VK_NULL_HANDLE,
        .subpass = 0,
        .framebuffer = VK_NULL_HANDLE,
        .occlusionQueryEnable = VK_FALSE,
        .queryFlags = 0,
        .pipelineStatistics = 0,
    };
    const VkCommandBufferBeginInfo cmd_buf_info = {
        .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
        .pNext = NULL,
        .flags = 0,
        .pInheritanceInfo = &cmd_buf_hinfo,
    };
    const VkClearValue clear_values[2] = {
        [0] = {.color.float32 = {0.2f, 0.2f, 0.2f, 0.2f}},
        [1] = {.depthStencil
               = {1.0f, 0}},
    };
    const VkRenderPassBeginInfo rp_begin = {
        .sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO,
        .pNext = NULL,
        .renderPass = demo->render_pass,
        .framebuffer = demo->framebuffers[demo->current_buffer],
        .renderArea.offset.x = 0,
        .renderArea.offset.y = 0,
        .renderArea.extent.width = demo->width,
        .renderArea.extent.height = demo->height,
        .clearValueCount = 2,
        .pClearValues = clear_values,
    };
    VkResult U_ASSERT_ONLY err;

    err = vkBeginCommandBuffer(cmd_buf, &cmd_buf_info);
    assert(!err);
    vkCmdBeginRenderPass(cmd_buf, &rp_begin, VK_SUBPASS_CONTENTS_INLINE);
    vkCmdBindPipeline(cmd_buf, VK_PIPELINE_BIND_POINT_GRAPHICS, demo->pipeline);
    vkCmdBindDescriptorSets(cmd_buf, VK_PIPELINE_BIND_POINT_GRAPHICS,
                            demo->pipeline_layout, 0, 1, &demo->desc_set, 0,
                            NULL);
    VkViewport viewport;
    memset(&viewport, 0, sizeof(viewport));
    viewport.height = (float)demo->height;
    viewport.width = (float)demo->width;
    viewport.minDepth = (float)0.0f;
    viewport.maxDepth = (float)1.0f;
    vkCmdSetViewport(cmd_buf, 0, 1, &viewport);

    VkRect2D scissor;
    memset(&scissor, 0, sizeof(scissor));
    scissor.extent.width = demo->width;
    scissor.extent.height = demo->height;
    scissor.offset.x = 0;
    scissor.offset.y = 0;
    vkCmdSetScissor(cmd_buf, 0, 1, &scissor);
    vkCmdDraw(cmd_buf, 12 * 3, 1, 0, 0);
    vkCmdEndRenderPass(cmd_buf);

    VkImageMemoryBarrier prePresentBarrier = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
        .pNext = NULL,
        .srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
        .dstAccessMask = VK_ACCESS_MEMORY_READ_BIT,
        .oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
        .newLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .subresourceRange = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1}};

    prePresentBarrier.image = demo->buffers[demo->current_buffer].image;
    VkImageMemoryBarrier *pmemory_barrier = &prePresentBarrier;
    vkCmdPipelineBarrier(cmd_buf, VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,
                         VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT, 0, 0,
                         NULL, 0, NULL, 1, pmemory_barrier);

    err = vkEndCommandBuffer(cmd_buf);
    assert(!err);
}

void demo_update_data_buffer(struct demo *demo) {
    mat4x4 MVP, Model, VP;
    int matrixSize = sizeof(MVP);
    uint8_t *pData;
    VkResult U_ASSERT_ONLY err;

    mat4x4_mul(VP, demo->projection_matrix, demo->view_matrix);

    // Rotate around the Y axis by spin_angle degrees
    mat4x4_dup(Model, demo->model_matrix);
    mat4x4_rotate(demo->model_matrix, Model, 0.0f, 1.0f, 0.0f,
                  (float)degreesToRadians(demo->spin_angle));
    mat4x4_mul(MVP, VP, demo->model_matrix);

    err = vkMapMemory(demo->device, demo->uniform_data.mem, 0,
                      demo->uniform_data.mem_alloc.allocationSize, 0,
                      (void **)&pData);
    assert(!err);

    memcpy(pData, (const void *)&MVP[0][0], matrixSize);

    vkUnmapMemory(demo->device, demo->uniform_data.mem);
}

static void demo_draw(struct demo *demo) {
    VkResult U_ASSERT_ONLY err;
    VkSemaphore presentCompleteSemaphore;
    VkSemaphoreCreateInfo presentCompleteSemaphoreCreateInfo = {
        .sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
        .pNext = NULL,
        .flags = 0,
    };
    VkFence nullFence = VK_NULL_HANDLE;

    err = vkCreateSemaphore(demo->device, &presentCompleteSemaphoreCreateInfo,
                            NULL, &presentCompleteSemaphore);
    assert(!err);

    // Get the index of the next available swapchain image:
    err = demo->fpAcquireNextImageKHR(demo->device, demo->swapchain, UINT64_MAX,
                                      presentCompleteSemaphore,
                                      (VkFence)0, // TODO: Show use of fence
                                      &demo->current_buffer);
    if (err == VK_ERROR_OUT_OF_DATE_KHR) {
        // demo->swapchain is out of date (e.g. the window was resized) and
        // must be recreated:
        demo_resize(demo);
        demo_draw(demo);
        vkDestroySemaphore(demo->device, presentCompleteSemaphore, NULL);
        return;
    } else if (err == VK_SUBOPTIMAL_KHR) {
        // demo->swapchain is not as optimal as it could be, but the platform's
        // presentation engine will still present the image correctly.
    } else {
        assert(!err);
    }

    // Assume the command buffer has been run on current_buffer before so
    // we need to set the image layout back to COLOR_ATTACHMENT_OPTIMAL
    demo_set_image_layout(demo, demo->buffers[demo->current_buffer].image,
                          VK_IMAGE_ASPECT_COLOR_BIT,
                          VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,
                          VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, 0);
    demo_flush_init_cmd(demo);

    // Wait for the present complete semaphore to be signaled to ensure
    // that the image won't be rendered to until the presentation
    // engine has fully released ownership to the application, and it is
    // okay to render to the image.

    // FIXME/TODO: DEAL WITH VK_IMAGE_LAYOUT_PRESENT_SRC_KHR
    VkPipelineStageFlags pipe_stage_flags =
        VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT;
    VkSubmitInfo submit_info = {.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
                                .pNext = NULL,
                                .waitSemaphoreCount = 1,
                                .pWaitSemaphores = &presentCompleteSemaphore,
                                .pWaitDstStageMask = &pipe_stage_flags,
                                .commandBufferCount = 1,
                                .pCommandBuffers =
                                    &demo->buffers[demo->current_buffer].cmd,
                                .signalSemaphoreCount = 0,
                                .pSignalSemaphores = NULL};

    err = vkQueueSubmit(demo->queue, 1, &submit_info, nullFence);
    assert(!err);

    VkPresentInfoKHR present = {
        .sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR,
        .pNext = NULL,
        .swapchainCount = 1,
        .pSwapchains = &demo->swapchain,
        .pImageIndices = &demo->current_buffer,
    };

    // TBD/TODO: SHOULD THE "present" PARAMETER BE "const" IN THE HEADER?
    err = demo->fpQueuePresentKHR(demo->queue, &present);
    if (err == VK_ERROR_OUT_OF_DATE_KHR) {
        // demo->swapchain is out of date (e.g. the window was resized) and
        // must be recreated:
        demo_resize(demo);
    } else if (err == VK_SUBOPTIMAL_KHR) {
        // demo->swapchain is not as optimal as it could be, but the platform's
        // presentation engine will still present the image correctly.
    } else {
        assert(!err);
    }

    err = vkQueueWaitIdle(demo->queue);
    assert(err == VK_SUCCESS);

    vkDestroySemaphore(demo->device, presentCompleteSemaphore, NULL);
}

static void demo_prepare_buffers(struct demo *demo) {
    VkResult U_ASSERT_ONLY err;
    VkSwapchainKHR oldSwapchain = demo->swapchain;

    // Check the surface capabilities and formats
    VkSurfaceCapabilitiesKHR surfCapabilities;
    err = demo->fpGetPhysicalDeviceSurfaceCapabilitiesKHR(
        demo->gpu, demo->surface, &surfCapabilities);
    assert(!err);

    uint32_t presentModeCount;
    err = demo->fpGetPhysicalDeviceSurfacePresentModesKHR(
        demo->gpu, demo->surface, &presentModeCount, NULL);
    assert(!err);
    VkPresentModeKHR *presentModes =
        (VkPresentModeKHR *)malloc(presentModeCount * sizeof(VkPresentModeKHR));
    assert(presentModes);
    err = demo->fpGetPhysicalDeviceSurfacePresentModesKHR(
        demo->gpu, demo->surface, &presentModeCount, presentModes);
    assert(!err);

    VkExtent2D swapchainExtent;
    // width and height are either both -1, or both not -1.
    if (surfCapabilities.currentExtent.width == (uint32_t)-1) {
        // If the surface size is undefined, the size is set to
        // the size of the images requested.
        swapchainExtent.width = demo->width;
        swapchainExtent.height = demo->height;
    } else {
        // If the surface size is defined, the swap chain size must match
        swapchainExtent = surfCapabilities.currentExtent;
        demo->width = surfCapabilities.currentExtent.width;
        demo->height = surfCapabilities.currentExtent.height;
    }

    // If mailbox mode is available, use it, as it is the lowest-latency
    // non-tearing mode. If not, try IMMEDIATE, which will usually be
    // available, and is fastest (though it tears). If not, fall back to
    // FIFO, which is always available.
    VkPresentModeKHR swapchainPresentMode = VK_PRESENT_MODE_FIFO_KHR;
    for (size_t i = 0; i < presentModeCount; i++) {
        if (presentModes[i] == VK_PRESENT_MODE_MAILBOX_KHR) {
            swapchainPresentMode = VK_PRESENT_MODE_MAILBOX_KHR;
            break;
        }
        if ((swapchainPresentMode != VK_PRESENT_MODE_MAILBOX_KHR) &&
            (presentModes[i] == VK_PRESENT_MODE_IMMEDIATE_KHR)) {
            swapchainPresentMode = VK_PRESENT_MODE_IMMEDIATE_KHR;
        }
    }

    // Determine the number of VkImage's to use in the swap chain (we desire to
    // own only 1 image at a time, besides the images being displayed and
    // queued for display):
    uint32_t desiredNumberOfSwapchainImages =
        surfCapabilities.minImageCount + 1;
    if ((surfCapabilities.maxImageCount > 0) &&
        (desiredNumberOfSwapchainImages > surfCapabilities.maxImageCount)) {
        // Application must settle for fewer images than desired:
        desiredNumberOfSwapchainImages = surfCapabilities.maxImageCount;
    }

    VkSurfaceTransformFlagsKHR preTransform;
    if (surfCapabilities.supportedTransforms &
        VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR) {
        preTransform = VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR;
    } else {
        preTransform = surfCapabilities.currentTransform;
    }

    const VkSwapchainCreateInfoKHR swapchain = {
        .sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR,
        .pNext = NULL,
        .surface = demo->surface,
        .minImageCount = desiredNumberOfSwapchainImages,
        .imageFormat = demo->format,
        .imageColorSpace = demo->color_space,
        .imageExtent =
            {
             .width = swapchainExtent.width,
             .height = swapchainExtent.height,
            },
        .imageUsage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
        .preTransform = preTransform,
        .compositeAlpha = VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR,
        .imageArrayLayers = 1,
        .imageSharingMode = VK_SHARING_MODE_EXCLUSIVE,
        .queueFamilyIndexCount = 0,
        .pQueueFamilyIndices = NULL,
        .presentMode = swapchainPresentMode,
        .oldSwapchain = oldSwapchain,
        .clipped = true,
    };
    uint32_t i;

    err = demo->fpCreateSwapchainKHR(demo->device, &swapchain, NULL,
                                     &demo->swapchain);
    assert(!err);

    // If we just re-created an existing swapchain, we should destroy the
    // old swapchain at this point.
    // Note: destroying the swapchain also cleans up all its associated
    // presentable images once the platform is done with them.
    if (oldSwapchain != VK_NULL_HANDLE) {
        demo->fpDestroySwapchainKHR(demo->device, oldSwapchain, NULL);
    }

    err = demo->fpGetSwapchainImagesKHR(demo->device, demo->swapchain,
                                        &demo->swapchainImageCount, NULL);
    assert(!err);

    VkImage *swapchainImages =
        (VkImage *)malloc(demo->swapchainImageCount * sizeof(VkImage));
    assert(swapchainImages);
    err = demo->fpGetSwapchainImagesKHR(demo->device, demo->swapchain,
                                        &demo->swapchainImageCount,
                                        swapchainImages);
    assert(!err);

    demo->buffers = (SwapchainBuffers *)malloc(sizeof(SwapchainBuffers) *
                                               demo->swapchainImageCount);
    assert(demo->buffers);

    for (i = 0; i < demo->swapchainImageCount; i++) {
        VkImageViewCreateInfo color_image_view = {
            .sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
            .pNext = NULL,
            .format = demo->format,
            .components =
                {
                 .r = VK_COMPONENT_SWIZZLE_R,
                 .g = VK_COMPONENT_SWIZZLE_G,
                 .b = VK_COMPONENT_SWIZZLE_B,
                 .a = VK_COMPONENT_SWIZZLE_A,
                },
            .subresourceRange = {.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
                                 .baseMipLevel = 0,
                                 .levelCount = 1,
                                 .baseArrayLayer = 0,
                                 .layerCount = 1},
            .viewType = VK_IMAGE_VIEW_TYPE_2D,
            .flags = 0,
        };

        demo->buffers[i].image = swapchainImages[i];

        // Render loop will expect image to have been used before and in
        // VK_IMAGE_LAYOUT_PRESENT_SRC_KHR layout and will change to
        // COLOR_ATTACHMENT_OPTIMAL, so init the image to that state
        demo_set_image_layout(
            demo, demo->buffers[i].image, VK_IMAGE_ASPECT_COLOR_BIT,
            VK_IMAGE_LAYOUT_UNDEFINED, VK_IMAGE_LAYOUT_PRESENT_SRC_KHR, 0);

        color_image_view.image = demo->buffers[i].image;

        err = vkCreateImageView(demo->device, &color_image_view, NULL,
                                &demo->buffers[i].view);
        assert(!err);
    }

    if (NULL != presentModes) {
        free(presentModes);
    }
}

static void demo_prepare_depth(struct demo *demo) {
    const VkFormat depth_format = VK_FORMAT_D16_UNORM;
    const VkImageCreateInfo image = {
        .sType =
            VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
        .pNext = NULL,
        .imageType = VK_IMAGE_TYPE_2D,
        .format = depth_format,
        .extent = {demo->width, demo->height, 1},
        .mipLevels = 1,
        .arrayLayers = 1,
        .samples = VK_SAMPLE_COUNT_1_BIT,
        .tiling = VK_IMAGE_TILING_OPTIMAL,
        .usage = VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT,
        .flags = 0,
    };
    VkImageViewCreateInfo view = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
        .pNext = NULL,
        .image = VK_NULL_HANDLE,
        .format = depth_format,
        .subresourceRange = {.aspectMask = VK_IMAGE_ASPECT_DEPTH_BIT,
                             .baseMipLevel = 0,
                             .levelCount = 1,
                             .baseArrayLayer = 0,
                             .layerCount = 1},
        .flags = 0,
        .viewType = VK_IMAGE_VIEW_TYPE_2D,
    };

    VkMemoryRequirements mem_reqs;
    VkResult U_ASSERT_ONLY err;
    bool U_ASSERT_ONLY pass;

    demo->depth.format = depth_format;

    /* create image */
    err = vkCreateImage(demo->device, &image, NULL, &demo->depth.image);
    assert(!err);

    /* vkGetImageMemoryRequirements() returns void; there is no result to
     * check */
    vkGetImageMemoryRequirements(demo->device, demo->depth.image, &mem_reqs);

    demo->depth.mem_alloc.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
    demo->depth.mem_alloc.pNext = NULL;
    demo->depth.mem_alloc.allocationSize = mem_reqs.size;
    demo->depth.mem_alloc.memoryTypeIndex = 0;

    pass = memory_type_from_properties(demo, mem_reqs.memoryTypeBits,
                                       0, /* No requirements */
                                       &demo->depth.mem_alloc.memoryTypeIndex);
    assert(pass);

    /* allocate memory */
    err = vkAllocateMemory(demo->device, &demo->depth.mem_alloc, NULL,
                           &demo->depth.mem);
    assert(!err);

    /* bind memory */
    err =
        vkBindImageMemory(demo->device, demo->depth.image, demo->depth.mem, 0);
    assert(!err);

    demo_set_image_layout(demo, demo->depth.image, VK_IMAGE_ASPECT_DEPTH_BIT,
                          VK_IMAGE_LAYOUT_UNDEFINED,
                          VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL, 0);

    /* create image view */
    view.image = demo->depth.image;
    err = vkCreateImageView(demo->device, &view, NULL, &demo->depth.view);
    assert(!err);
}

/* Load a ppm file into memory */
bool loadTexture(const char *filename, uint8_t *rgba_data,
                 VkSubresourceLayout *layout, int32_t *width, int32_t *height) {
    FILE *fPtr = fopen(filename, "rb");
    char header[256], *cPtr, *tmp;

    if (!fPtr)
        return false;

    cPtr = fgets(header, 256, fPtr); // P6
    if (cPtr == NULL || strncmp(header, "P6\n", 3)) {
        fclose(fPtr);
        return false;
    }

    do {
        cPtr = fgets(header, 256, fPtr);
        if (cPtr == NULL) {
            fclose(fPtr);
            return false;
        }
    } while (!strncmp(header, "#", 1));

    // The PPM header stores the width first, then the height
    sscanf(header, "%d %d", width, height);

    if (rgba_data == NULL) {
        fclose(fPtr);
        return true;
    }
    tmp = fgets(header, 256, fPtr); // Format
    if (tmp == NULL || strncmp(header, "255\n", 3)) {
        fclose(fPtr);
        return false;
    }

    for (int y = 0; y < *height; y++) {
        uint8_t *rowPtr = rgba_data;
        for (int x = 0; x < *width; x++) {
            size_t s = fread(rowPtr, 3, 1, fPtr);
            (void)s;
            rowPtr[3] = 255; /* Alpha of 1 */
            rowPtr += 4;
        }
        rgba_data += layout->rowPitch;
    }
    fclose(fPtr);
    return true;
}

static void demo_prepare_texture_image(struct demo *demo, const char *filename,
                                       struct texture_object *tex_obj,
                                       VkImageTiling tiling,
                                       VkImageUsageFlags usage,
                                       VkFlags required_props) {
    const VkFormat tex_format = VK_FORMAT_R8G8B8A8_UNORM;
    int32_t tex_width;
    int32_t tex_height;
    VkResult U_ASSERT_ONLY err;
    bool U_ASSERT_ONLY pass;

    if (!loadTexture(filename, NULL, NULL, &tex_width, &tex_height)) {
        printf("Failed to load textures\n");
        fflush(stdout);
        exit(1);
    }

    tex_obj->tex_width = tex_width;
    tex_obj->tex_height = tex_height;

    const VkImageCreateInfo image_create_info = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
        .pNext = NULL,
        .imageType = VK_IMAGE_TYPE_2D,
        .format = tex_format,
        .extent = {tex_width, tex_height, 1},
        .mipLevels = 1,
        .arrayLayers = 1,
        .samples = VK_SAMPLE_COUNT_1_BIT,
        .tiling = tiling,
        .usage = usage,
        .flags = 0,
        .initialLayout = VK_IMAGE_LAYOUT_PREINITIALIZED,
    };

    VkMemoryRequirements mem_reqs;

    err =
        vkCreateImage(demo->device, &image_create_info, NULL, &tex_obj->image);
    assert(!err);

    vkGetImageMemoryRequirements(demo->device, tex_obj->image, &mem_reqs);

    tex_obj->mem_alloc.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
    tex_obj->mem_alloc.pNext
        = NULL;
    tex_obj->mem_alloc.allocationSize = mem_reqs.size;
    tex_obj->mem_alloc.memoryTypeIndex = 0;

    pass = memory_type_from_properties(demo, mem_reqs.memoryTypeBits,
                                       required_props,
                                       &tex_obj->mem_alloc.memoryTypeIndex);
    assert(pass);

    /* allocate memory */
    err = vkAllocateMemory(demo->device, &tex_obj->mem_alloc, NULL,
                           &(tex_obj->mem));
    assert(!err);

    /* bind memory */
    err = vkBindImageMemory(demo->device, tex_obj->image, tex_obj->mem, 0);
    assert(!err);

    if (required_props & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) {
        const VkImageSubresource subres = {
            .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
            .mipLevel = 0,
            .arrayLayer = 0,
        };
        VkSubresourceLayout layout;
        void *data;

        vkGetImageSubresourceLayout(demo->device, tex_obj->image, &subres,
                                    &layout);

        err = vkMapMemory(demo->device, tex_obj->mem, 0,
                          tex_obj->mem_alloc.allocationSize, 0, &data);
        assert(!err);

        if (!loadTexture(filename, data, &layout, &tex_width, &tex_height)) {
            fprintf(stderr, "Error loading texture: %s\n", filename);
        }

        vkUnmapMemory(demo->device, tex_obj->mem);
    }

    tex_obj->imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
    demo_set_image_layout(demo, tex_obj->image, VK_IMAGE_ASPECT_COLOR_BIT,
                          VK_IMAGE_LAYOUT_PREINITIALIZED, tex_obj->imageLayout,
                          VK_ACCESS_HOST_WRITE_BIT);
    /* setting the image layout does not reference the actual memory so no need
     * to add a mem ref */
}

static void demo_destroy_texture_image(struct demo *demo,
                                       struct texture_object *tex_objs) {
    /* clean up staging resources */
    vkFreeMemory(demo->device, tex_objs->mem, NULL);
    vkDestroyImage(demo->device, tex_objs->image, NULL);
}

static void demo_prepare_textures(struct demo *demo) {
    const VkFormat tex_format = VK_FORMAT_R8G8B8A8_UNORM;
    VkFormatProperties props;
    uint32_t i;

    vkGetPhysicalDeviceFormatProperties(demo->gpu, tex_format, &props);

    for (i = 0; i < DEMO_TEXTURE_COUNT; i++) {
        VkResult U_ASSERT_ONLY err;

        if ((props.linearTilingFeatures &
             VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT) &&
            !demo->use_staging_buffer) {
            /* Device can texture using linear
               textures */
            demo_prepare_texture_image(
                demo, tex_files[i], &demo->textures[i], VK_IMAGE_TILING_LINEAR,
                VK_IMAGE_USAGE_SAMPLED_BIT,
                VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT);
        } else if (props.optimalTilingFeatures &
                   VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT) {
            /* Must use staging buffer to copy linear texture to optimized */
            struct texture_object staging_texture;

            memset(&staging_texture, 0, sizeof(staging_texture));
            demo_prepare_texture_image(
                demo, tex_files[i], &staging_texture, VK_IMAGE_TILING_LINEAR,
                VK_IMAGE_USAGE_TRANSFER_SRC_BIT,
                VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT);

            demo_prepare_texture_image(
                demo, tex_files[i], &demo->textures[i], VK_IMAGE_TILING_OPTIMAL,
                (VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT),
                VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);

            demo_set_image_layout(demo, staging_texture.image,
                                  VK_IMAGE_ASPECT_COLOR_BIT,
                                  staging_texture.imageLayout,
                                  VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL, 0);

            demo_set_image_layout(demo, demo->textures[i].image,
                                  VK_IMAGE_ASPECT_COLOR_BIT,
                                  demo->textures[i].imageLayout,
                                  VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 0);

            VkImageCopy copy_region = {
                .srcSubresource = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1},
                .srcOffset = {0, 0, 0},
                .dstSubresource = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1},
                .dstOffset = {0, 0, 0},
                .extent = {staging_texture.tex_width,
                           staging_texture.tex_height, 1},
            };
            vkCmdCopyImage(
                demo->cmd, staging_texture.image,
                VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL, demo->textures[i].image,
                VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &copy_region);

            demo_set_image_layout(demo, demo->textures[i].image,
                                  VK_IMAGE_ASPECT_COLOR_BIT,
                                  VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
                                  demo->textures[i].imageLayout, 0);

            demo_flush_init_cmd(demo);

            demo_destroy_texture_image(demo, &staging_texture);
        } else {
            /* Can't support VK_FORMAT_R8G8B8A8_UNORM !?
             */
            assert(!"No support for R8G8B8A8_UNORM as texture image format");
        }

        const VkSamplerCreateInfo sampler = {
            .sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO,
            .pNext = NULL,
            .magFilter = VK_FILTER_NEAREST,
            .minFilter = VK_FILTER_NEAREST,
            .mipmapMode = VK_SAMPLER_MIPMAP_MODE_NEAREST,
            .addressModeU = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE,
            .addressModeV = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE,
            .addressModeW = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE,
            .mipLodBias = 0.0f,
            .anisotropyEnable = VK_FALSE,
            .maxAnisotropy = 1,
            .compareOp = VK_COMPARE_OP_NEVER,
            .minLod = 0.0f,
            .maxLod = 0.0f,
            .borderColor = VK_BORDER_COLOR_FLOAT_OPAQUE_WHITE,
            .unnormalizedCoordinates = VK_FALSE,
        };

        VkImageViewCreateInfo view = {
            .sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
            .pNext = NULL,
            .image = VK_NULL_HANDLE,
            .viewType = VK_IMAGE_VIEW_TYPE_2D,
            .format = tex_format,
            .components =
                {
                 VK_COMPONENT_SWIZZLE_R, VK_COMPONENT_SWIZZLE_G,
                 VK_COMPONENT_SWIZZLE_B, VK_COMPONENT_SWIZZLE_A,
                },
            .subresourceRange = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1},
            .flags = 0,
        };

        /* create sampler */
        err = vkCreateSampler(demo->device, &sampler, NULL,
                              &demo->textures[i].sampler);
        assert(!err);

        /* create image view */
        view.image = demo->textures[i].image;
        err = vkCreateImageView(demo->device, &view, NULL,
                                &demo->textures[i].view);
        assert(!err);
    }
}

void demo_prepare_cube_data_buffer(struct demo *demo) {
    VkBufferCreateInfo buf_info;
    VkMemoryRequirements mem_reqs;
    uint8_t *pData;
    int i;
    mat4x4 MVP, VP;
    VkResult U_ASSERT_ONLY err;
    bool U_ASSERT_ONLY pass;
    struct vktexcube_vs_uniform data;

    mat4x4_mul(VP, demo->projection_matrix, demo->view_matrix);
    mat4x4_mul(MVP, VP, demo->model_matrix);
    memcpy(data.mvp, MVP, sizeof(MVP));
    //    dumpMatrix("MVP", MVP);

    for (i = 0; i < 12 * 3; i++) {
        data.position[i][0] = g_vertex_buffer_data[i * 3];
        data.position[i][1] = g_vertex_buffer_data[i * 3 + 1];
        data.position[i][2] = g_vertex_buffer_data[i * 3 + 2];
        data.position[i][3] = 1.0f;
        data.attr[i][0] = g_uv_buffer_data[2 * i];
        data.attr[i][1] = g_uv_buffer_data[2 * i + 1];
        data.attr[i][2] = 0;
        data.attr[i][3] = 0;
    }

    memset(&buf_info, 0, sizeof(buf_info));
    buf_info.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
    buf_info.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT;
    buf_info.size = sizeof(data);
    err = vkCreateBuffer(demo->device, &buf_info, NULL,
                         &demo->uniform_data.buf);
    assert(!err);

    vkGetBufferMemoryRequirements(demo->device, demo->uniform_data.buf,
                                  &mem_reqs);

    demo->uniform_data.mem_alloc.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
    demo->uniform_data.mem_alloc.pNext = NULL;
    demo->uniform_data.mem_alloc.allocationSize = mem_reqs.size;
    demo->uniform_data.mem_alloc.memoryTypeIndex = 0;

    pass = memory_type_from_properties(
        demo, mem_reqs.memoryTypeBits, VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT,
        &demo->uniform_data.mem_alloc.memoryTypeIndex);
    assert(pass);

    err = vkAllocateMemory(demo->device, &demo->uniform_data.mem_alloc, NULL,
                           &(demo->uniform_data.mem));
    assert(!err);

    err = vkMapMemory(demo->device, demo->uniform_data.mem, 0,
                      demo->uniform_data.mem_alloc.allocationSize, 0,
                      (void **)&pData);
    assert(!err);

    memcpy(pData, &data, sizeof data);

    vkUnmapMemory(demo->device, demo->uniform_data.mem);

    err = vkBindBufferMemory(demo->device, demo->uniform_data.buf,
                             demo->uniform_data.mem, 0);
    assert(!err);

    demo->uniform_data.buffer_info.buffer = demo->uniform_data.buf;
    demo->uniform_data.buffer_info.offset = 0;
    demo->uniform_data.buffer_info.range = sizeof(data);
}

static void demo_prepare_descriptor_layout(struct demo *demo) {
    const VkDescriptorSetLayoutBinding layout_bindings[2] = {
        [0] = {
            .binding = 0,
            .descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
            .descriptorCount = 1,
            .stageFlags = VK_SHADER_STAGE_VERTEX_BIT,
            .pImmutableSamplers = NULL,
        },
        [1] = {
            .binding = 1,
            .descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
            .descriptorCount = DEMO_TEXTURE_COUNT,
            .stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT,
            .pImmutableSamplers = NULL,
        },
    };
    const VkDescriptorSetLayoutCreateInfo descriptor_layout = {
        .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO,
        .pNext = NULL,
        .bindingCount = 2,
        .pBindings = layout_bindings,
    };
    VkResult U_ASSERT_ONLY err;

    err = vkCreateDescriptorSetLayout(demo->device, &descriptor_layout, NULL,
                                      &demo->desc_layout);
    assert(!err);

    const VkPipelineLayoutCreateInfo pPipelineLayoutCreateInfo = {
        .sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO,
        .pNext = NULL,
        .setLayoutCount = 1,
        .pSetLayouts = &demo->desc_layout,
    };

    err = vkCreatePipelineLayout(demo->device, &pPipelineLayoutCreateInfo, NULL,
                                 &demo->pipeline_layout);
    assert(!err);
}

static void demo_prepare_render_pass(struct demo *demo) {
    const VkAttachmentDescription attachments[2] = {
        [0] = {
            .format = demo->format,
            .samples = VK_SAMPLE_COUNT_1_BIT,
            .loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR,
            .storeOp = VK_ATTACHMENT_STORE_OP_STORE,
            .stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
            .stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE,
            .initialLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
            .finalLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
        },
        [1] = {
            .format = demo->depth.format,
            .samples = VK_SAMPLE_COUNT_1_BIT,
            .loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR,
            .storeOp = VK_ATTACHMENT_STORE_OP_DONT_CARE,
            .stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
            .stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE,
            .initialLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
            .finalLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
        },
    };
    const VkAttachmentReference color_reference = {
        .attachment = 0,
        .layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
    };
    const VkAttachmentReference depth_reference = {
        .attachment = 1,
        .layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
    };
    const VkSubpassDescription subpass = {
        .pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS,
        .flags = 0,
        .inputAttachmentCount = 0,
        .pInputAttachments = NULL,
        .colorAttachmentCount = 1,
        .pColorAttachments = &color_reference,
        .pResolveAttachments = NULL,
        .pDepthStencilAttachment = &depth_reference,
        .preserveAttachmentCount = 0,
        .pPreserveAttachments = NULL,
    };
    const VkRenderPassCreateInfo rp_info = {
        .sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO,
        .pNext = NULL,
        .attachmentCount = 2,
        .pAttachments = attachments,
        .subpassCount = 1,
        .pSubpasses = &subpass,
        .dependencyCount = 0,
        .pDependencies = NULL,
    };
    VkResult U_ASSERT_ONLY err;

    err = vkCreateRenderPass(demo->device, &rp_info, NULL, &demo->render_pass);
    assert(!err);
}

static VkShaderModule
demo_prepare_shader_module(struct demo *demo, const void *code, size_t size) {
    VkShaderModule module;
    VkShaderModuleCreateInfo moduleCreateInfo;
    VkResult U_ASSERT_ONLY err;

    moduleCreateInfo.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    moduleCreateInfo.pNext = NULL;

    moduleCreateInfo.codeSize = size;
    moduleCreateInfo.pCode = code;
    moduleCreateInfo.flags = 0;
    err = vkCreateShaderModule(demo->device, &moduleCreateInfo, NULL, &module);
    assert(!err);

    return module;
}

char *demo_read_spv(const char *filename, size_t *psize) {
    long int size;
    size_t U_ASSERT_ONLY retval;
    void *shader_code;

    FILE *fp = fopen(filename, "rb");
    if (!fp)
        return NULL;

    fseek(fp, 0L, SEEK_END);
    size = ftell(fp);

    fseek(fp, 0L, SEEK_SET);

    shader_code = malloc(size);
    retval = fread(shader_code, size, 1, fp);
    assert(retval == 1);

    *psize = size;

    fclose(fp);
    return shader_code;
}

static VkShaderModule demo_prepare_vs(struct demo *demo) {
    void *vertShaderCode;
    size_t size;

    vertShaderCode = demo_read_spv("cube-vert.spv", &size);

    demo->vert_shader_module =
        demo_prepare_shader_module(demo, vertShaderCode, size);

    free(vertShaderCode);

    return demo->vert_shader_module;
}

static VkShaderModule demo_prepare_fs(struct demo *demo) {
    void *fragShaderCode;
    size_t size;

    fragShaderCode = demo_read_spv("cube-frag.spv", &size);

    demo->frag_shader_module =
        demo_prepare_shader_module(demo, fragShaderCode, size);

    free(fragShaderCode);

    return demo->frag_shader_module;
}

static void demo_prepare_pipeline(struct demo *demo) {
    VkGraphicsPipelineCreateInfo pipeline;
    VkPipelineCacheCreateInfo pipelineCache;
    VkPipelineVertexInputStateCreateInfo vi;
    VkPipelineInputAssemblyStateCreateInfo ia;
    VkPipelineRasterizationStateCreateInfo rs;
    VkPipelineColorBlendStateCreateInfo cb;
    VkPipelineDepthStencilStateCreateInfo ds;
    VkPipelineViewportStateCreateInfo vp;
    VkPipelineMultisampleStateCreateInfo ms;
    VkDynamicState dynamicStateEnables[VK_DYNAMIC_STATE_RANGE_SIZE];
    VkPipelineDynamicStateCreateInfo dynamicState;
    VkResult U_ASSERT_ONLY err;

    memset(dynamicStateEnables, 0, sizeof dynamicStateEnables);
    memset(&dynamicState, 0, sizeof dynamicState);
    dynamicState.sType = VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO;
    dynamicState.pDynamicStates = dynamicStateEnables;

    memset(&pipeline, 0, sizeof(pipeline));
    pipeline.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
    pipeline.layout = demo->pipeline_layout;

    memset(&vi, 0, sizeof(vi));
    vi.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO;

    memset(&ia, 0, sizeof(ia));
    ia.sType = VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO;
    ia.topology = VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;

    memset(&rs, 0, sizeof(rs));
    rs.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO;
    rs.polygonMode = VK_POLYGON_MODE_FILL;
    rs.cullMode = VK_CULL_MODE_BACK_BIT;
    rs.frontFace = VK_FRONT_FACE_COUNTER_CLOCKWISE;
    rs.depthClampEnable = VK_FALSE;
    rs.rasterizerDiscardEnable = VK_FALSE;
    rs.depthBiasEnable = VK_FALSE;

    memset(&cb, 0, sizeof(cb));
    cb.sType = VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO;
    VkPipelineColorBlendAttachmentState att_state[1];
    memset(att_state, 0, sizeof(att_state));
    att_state[0].colorWriteMask = 0xf;
    att_state[0].blendEnable = VK_FALSE;
    cb.attachmentCount = 1;
    cb.pAttachments = att_state;

    memset(&vp, 0, sizeof(vp));
    vp.sType = VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO;
    vp.viewportCount = 1;
    dynamicStateEnables[dynamicState.dynamicStateCount++] =
        VK_DYNAMIC_STATE_VIEWPORT;
    vp.scissorCount = 1;
    dynamicStateEnables[dynamicState.dynamicStateCount++] =
        VK_DYNAMIC_STATE_SCISSOR;

    memset(&ds, 0, sizeof(ds));
    ds.sType = VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO;
    ds.depthTestEnable = VK_TRUE;
    ds.depthWriteEnable = VK_TRUE;
    ds.depthCompareOp = VK_COMPARE_OP_LESS_OR_EQUAL;
    ds.depthBoundsTestEnable = VK_FALSE;
    ds.back.failOp = VK_STENCIL_OP_KEEP;
    ds.back.passOp = VK_STENCIL_OP_KEEP;
    ds.back.compareOp = VK_COMPARE_OP_ALWAYS;
    ds.stencilTestEnable = VK_FALSE;
    ds.front = ds.back;

    memset(&ms, 0, sizeof(ms));
    ms.sType = VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO;
    ms.pSampleMask = NULL;
    ms.rasterizationSamples = VK_SAMPLE_COUNT_1_BIT;

    // Two stages: vs and fs
    pipeline.stageCount = 2;
    VkPipelineShaderStageCreateInfo shaderStages[2];
    memset(&shaderStages, 0, 2 * sizeof(VkPipelineShaderStageCreateInfo));

    shaderStages[0].sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
    shaderStages[0].stage = VK_SHADER_STAGE_VERTEX_BIT;
    shaderStages[0].module = demo_prepare_vs(demo);
    shaderStages[0].pName = "main";

    shaderStages[1].sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
    shaderStages[1].stage = VK_SHADER_STAGE_FRAGMENT_BIT;
    shaderStages[1].module = demo_prepare_fs(demo);
    shaderStages[1].pName = "main";

    memset(&pipelineCache, 0, sizeof(pipelineCache));
    pipelineCache.sType = VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO;

    err = vkCreatePipelineCache(demo->device, &pipelineCache, NULL,
                                &demo->pipelineCache);
    assert(!err);

    pipeline.pVertexInputState = &vi;
    pipeline.pInputAssemblyState = &ia;
    pipeline.pRasterizationState = &rs;
    pipeline.pColorBlendState = &cb;
    pipeline.pMultisampleState = &ms;
    pipeline.pViewportState = &vp;
    pipeline.pDepthStencilState = &ds;
    pipeline.pStages = shaderStages;
    pipeline.renderPass = demo->render_pass;
    pipeline.pDynamicState = &dynamicState;

    err = vkCreateGraphicsPipelines(demo->device, demo->pipelineCache, 1,
                                    &pipeline, NULL, &demo->pipeline);
    assert(!err);
    vkDestroyShaderModule(demo->device, demo->frag_shader_module, NULL);
    vkDestroyShaderModule(demo->device, demo->vert_shader_module, NULL);
}

static void demo_prepare_descriptor_pool(struct demo *demo) {
    const VkDescriptorPoolSize type_counts[2] = {
            [0] = {
                .type = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
                .descriptorCount = 1,
            },
            [1] = {
                .type = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
                .descriptorCount = DEMO_TEXTURE_COUNT,
            },
    };
    const VkDescriptorPoolCreateInfo descriptor_pool = {
        .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO,
        .pNext = NULL,
        .maxSets = 1,
        .poolSizeCount = 2,
        .pPoolSizes = type_counts,
    };
    VkResult U_ASSERT_ONLY err;

    err = vkCreateDescriptorPool(demo->device, &descriptor_pool, NULL,
                                 &demo->desc_pool);
    assert(!err);
}

static void demo_prepare_descriptor_set(struct demo *demo) {
    VkDescriptorImageInfo tex_descs[DEMO_TEXTURE_COUNT];
    VkWriteDescriptorSet writes[2];
    VkResult U_ASSERT_ONLY err;
    uint32_t i;

    VkDescriptorSetAllocateInfo alloc_info = {
        .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO,
        .pNext = NULL,
        .descriptorPool = demo->desc_pool,
        .descriptorSetCount = 1,
        .pSetLayouts = &demo->desc_layout};
    err = vkAllocateDescriptorSets(demo->device, &alloc_info, &demo->desc_set);
    assert(!err);

    memset(&tex_descs, 0, sizeof(tex_descs));
    for (i = 0; i < DEMO_TEXTURE_COUNT; i++) {
        tex_descs[i].sampler = demo->textures[i].sampler;
        tex_descs[i].imageView = demo->textures[i].view;
        tex_descs[i].imageLayout = VK_IMAGE_LAYOUT_GENERAL;
    }

    memset(&writes, 0, sizeof(writes));

    writes[0].sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
    writes[0].dstSet = demo->desc_set;
    writes[0].descriptorCount = 1;
    writes[0].descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
    writes[0].pBufferInfo = &demo->uniform_data.buffer_info;

    writes[1].sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
    writes[1].dstSet = demo->desc_set;
    writes[1].dstBinding = 1;
    writes[1].descriptorCount = DEMO_TEXTURE_COUNT;
    writes[1].descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
    writes[1].pImageInfo = tex_descs;

    vkUpdateDescriptorSets(demo->device, 2, writes, 0, NULL);
}

static void demo_prepare_framebuffers(struct demo *demo) {
    VkImageView attachments[2];
    attachments[1] = demo->depth.view;

    const VkFramebufferCreateInfo fb_info = {
        .sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
        .pNext = NULL,
        .renderPass = demo->render_pass,
        .attachmentCount = 2,
        .pAttachments = attachments,
        .width = demo->width,
        .height = demo->height,
        .layers = 1,
    };
    VkResult U_ASSERT_ONLY err;
    uint32_t i;

    demo->framebuffers = (VkFramebuffer *)malloc(demo->swapchainImageCount *
                                                 sizeof(VkFramebuffer));
    assert(demo->framebuffers);

    for (i = 0; i < demo->swapchainImageCount; i++) {
        attachments[0] = demo->buffers[i].view;
        err = vkCreateFramebuffer(demo->device, &fb_info, NULL,
                                  &demo->framebuffers[i]);
        assert(!err);
    }
}

static void demo_prepare(struct demo *demo) {
    VkResult U_ASSERT_ONLY err;

    const VkCommandPoolCreateInfo cmd_pool_info = {
        .sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO,
        .pNext = NULL,
        .queueFamilyIndex = demo->graphics_queue_node_index,
        .flags = 0,
    };
    err = vkCreateCommandPool(demo->device, &cmd_pool_info, NULL,
                              &demo->cmd_pool);
    assert(!err);

    const VkCommandBufferAllocateInfo cmd = {
        .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,
        .pNext = NULL,
        .commandPool = demo->cmd_pool,
        .level = VK_COMMAND_BUFFER_LEVEL_PRIMARY,
        .commandBufferCount = 1,
    };

    demo_prepare_buffers(demo);
    demo_prepare_depth(demo);
    demo_prepare_textures(demo);
    demo_prepare_cube_data_buffer(demo);

    demo_prepare_descriptor_layout(demo);
    demo_prepare_render_pass(demo);
    demo_prepare_pipeline(demo);

    for (uint32_t i = 0; i < demo->swapchainImageCount; i++) {
        err =
            vkAllocateCommandBuffers(demo->device, &cmd, &demo->buffers[i].cmd);
        assert(!err);
    }

    demo_prepare_descriptor_pool(demo);
    demo_prepare_descriptor_set(demo);

    demo_prepare_framebuffers(demo);

    for (uint32_t i = 0; i < demo->swapchainImageCount; i++) {
        demo->current_buffer = i;
        demo_draw_build_cmd(demo, demo->buffers[i].cmd);
    }

    /*
     * Prepare functions above may generate pipeline commands
     * that need to be flushed before beginning the render loop.
     */
    demo_flush_init_cmd(demo);

    demo->current_buffer = 0;
    demo->prepared = true;
}

static void demo_cleanup(struct demo *demo) {
    uint32_t i;

    demo->prepared = false;

    for (i = 0; i < demo->swapchainImageCount; i++) {
        vkDestroyFramebuffer(demo->device, demo->framebuffers[i], NULL);
    }
    free(demo->framebuffers);
    vkDestroyDescriptorPool(demo->device, demo->desc_pool, NULL);

    vkDestroyPipeline(demo->device, demo->pipeline, NULL);
    vkDestroyPipelineCache(demo->device, demo->pipelineCache, NULL);
    vkDestroyRenderPass(demo->device, demo->render_pass, NULL);
    vkDestroyPipelineLayout(demo->device, demo->pipeline_layout, NULL);
    vkDestroyDescriptorSetLayout(demo->device, demo->desc_layout, NULL);

    for (i = 0; i < DEMO_TEXTURE_COUNT; i++) {
        vkDestroyImageView(demo->device, demo->textures[i].view, NULL);
        vkDestroyImage(demo->device, demo->textures[i].image, NULL);
        vkFreeMemory(demo->device, demo->textures[i].mem, NULL);
        vkDestroySampler(demo->device, demo->textures[i].sampler, NULL);
    }
    demo->fpDestroySwapchainKHR(demo->device, demo->swapchain, NULL);

    vkDestroyImageView(demo->device, demo->depth.view, NULL);
    vkDestroyImage(demo->device, demo->depth.image, NULL);
    vkFreeMemory(demo->device, demo->depth.mem, NULL);

    vkDestroyBuffer(demo->device, demo->uniform_data.buf, NULL);
    vkFreeMemory(demo->device, demo->uniform_data.mem, NULL);

    for (i = 0; i < demo->swapchainImageCount; i++) {
        vkDestroyImageView(demo->device, demo->buffers[i].view, NULL);
        vkFreeCommandBuffers(demo->device, demo->cmd_pool, 1,
                             &demo->buffers[i].cmd);
    }
    free(demo->buffers);
    free(demo->queue_props);
    vkDestroyCommandPool(demo->device, demo->cmd_pool, NULL);

    vkDestroyDevice(demo->device, NULL);
    if (demo->validate) {
        demo->DestroyDebugReportCallback(demo->inst, demo->msg_callback, NULL);
    }
    vkDestroySurfaceKHR(demo->inst, demo->surface, NULL);
    vkDestroyInstance(demo->inst, NULL);

#ifndef _WIN32
    xcb_destroy_window(demo->connection, demo->window);
    xcb_disconnect(demo->connection);
    free(demo->atom_wm_delete_window);
#endif // _WIN32
}

static void demo_resize(struct demo *demo) {
    uint32_t i;

    // Don't react to resize until after first initialization.
    if (!demo->prepared) {
        return;
    }
    // In order to properly resize the window, we must re-create the swapchain
    // AND redo the command buffers, etc.
    //
    // First, perform part of the demo_cleanup() function:
    demo->prepared = false;

    for (i = 0; i < demo->swapchainImageCount; i++) {
        vkDestroyFramebuffer(demo->device, demo->framebuffers[i], NULL);
    }
    free(demo->framebuffers);
    vkDestroyDescriptorPool(demo->device, demo->desc_pool, NULL);

    vkDestroyPipeline(demo->device, demo->pipeline, NULL);
    vkDestroyPipelineCache(demo->device, demo->pipelineCache, NULL);
    vkDestroyRenderPass(demo->device, demo->render_pass, NULL);
    vkDestroyPipelineLayout(demo->device, demo->pipeline_layout, NULL);
    vkDestroyDescriptorSetLayout(demo->device, demo->desc_layout, NULL);

    for (i = 0; i < DEMO_TEXTURE_COUNT; i++) {
        vkDestroyImageView(demo->device, demo->textures[i].view, NULL);
        vkDestroyImage(demo->device, demo->textures[i].image, NULL);
        vkFreeMemory(demo->device, demo->textures[i].mem, NULL);
        vkDestroySampler(demo->device, demo->textures[i].sampler, NULL);
    }

    vkDestroyImageView(demo->device, demo->depth.view, NULL);
    vkDestroyImage(demo->device, demo->depth.image, NULL);
    vkFreeMemory(demo->device, demo->depth.mem, NULL);

    vkDestroyBuffer(demo->device, demo->uniform_data.buf, NULL);
    vkFreeMemory(demo->device, demo->uniform_data.mem, NULL);

    for (i = 0; i < demo->swapchainImageCount; i++) {
        vkDestroyImageView(demo->device, demo->buffers[i].view, NULL);
        vkFreeCommandBuffers(demo->device, demo->cmd_pool, 1,
                             &demo->buffers[i].cmd);
    }
    vkDestroyCommandPool(demo->device, demo->cmd_pool, NULL);
    free(demo->buffers);

    // Second, re-perform the demo_prepare() function, which will re-create the
    // swapchain:
    demo_prepare(demo);
}

// On MS-Windows, make this a global, so it's available to WndProc()
struct demo demo;

#ifdef _WIN32
static void demo_run(struct demo *demo) {
    if (!demo->prepared)
        return;
    // Wait for work to finish before updating MVP.
    vkDeviceWaitIdle(demo->device);
    demo_update_data_buffer(demo);

    demo_draw(demo);

    // Wait for work to finish before updating MVP.
    vkDeviceWaitIdle(demo->device);

    demo->curFrame++;

    if (demo->frameCount != INT_MAX && demo->curFrame == demo->frameCount) {
        PostQuitMessage(validation_error);
    }
}

// MS-Windows event handling function:
LRESULT CALLBACK WndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam) {
    switch (uMsg) {
    case WM_CLOSE:
        PostQuitMessage(validation_error);
        break;
    case WM_PAINT:
        demo_run(&demo);
        break;
    case WM_SIZE:
        // Resize the application to the new window size, except when
        // it was minimized. Vulkan doesn't support images or swapchains
        // with width=0 and height=0.
        if (wParam != SIZE_MINIMIZED) {
            demo.width = lParam & 0xffff;
            // Parenthesize the mask: without it, ">>" binds tighter than "&"
            // and the height would be computed as the low word (the width).
            demo.height = (lParam & 0xffff0000) >> 16;
            demo_resize(&demo);
        }
        break;
    default:
        break;
    }
    return (DefWindowProc(hWnd, uMsg, wParam, lParam));
}

static void demo_create_window(struct demo *demo) {
    WNDCLASSEX win_class;

    // Initialize the window class structure:
    win_class.cbSize = sizeof(WNDCLASSEX);
    win_class.style = CS_HREDRAW | CS_VREDRAW;
    win_class.lpfnWndProc = WndProc;
    win_class.cbClsExtra = 0;
    win_class.cbWndExtra = 0;
    win_class.hInstance = demo->connection; // hInstance
    win_class.hIcon = LoadIcon(NULL, IDI_APPLICATION);
    win_class.hCursor = LoadCursor(NULL, IDC_ARROW);
    win_class.hbrBackground = (HBRUSH)GetStockObject(WHITE_BRUSH);
    win_class.lpszMenuName = NULL;
    win_class.lpszClassName = demo->name;
    win_class.hIconSm = LoadIcon(NULL, IDI_WINLOGO);
    // Register window class:
    if (!RegisterClassEx(&win_class)) {
        // It didn't work, so try to give a useful error:
        printf("Unexpected error trying to start the application!\n");
        fflush(stdout);
        exit(1);
    }
    // Create window with the registered class:
    RECT wr = {0, 0, demo->width, demo->height};
    AdjustWindowRect(&wr, WS_OVERLAPPEDWINDOW, FALSE);
    demo->window = CreateWindowEx(0,
                                  demo->name,           // class name
                                  demo->name,           // app name
                                  WS_OVERLAPPEDWINDOW | // window style
                                      WS_VISIBLE | WS_SYSMENU,
                                  100, 100,           // x/y coords
                                  wr.right - wr.left, // width
                                  wr.bottom - wr.top, // height
                                  NULL,               // handle to parent
                                  NULL,               // handle to menu
                                  demo->connection,   // hInstance
                                  NULL);              // no extra parameters
    if (!demo->window) {
        // It didn't work, so try to give a useful error:
        printf("Cannot create a window in which to draw!\n");
        fflush(stdout);
        exit(1);
    }
}
#else // _WIN32

static void demo_handle_event(struct demo *demo,
                              const xcb_generic_event_t *event) {
    uint8_t event_code = event->response_type & 0x7f;
    switch (event_code) {
    case XCB_EXPOSE:
        // TODO: Resize window
        break;
    case XCB_CLIENT_MESSAGE:
        if ((*(xcb_client_message_event_t *)event).data.data32[0] ==
            (*demo->atom_wm_delete_window).atom) {
            demo->quit = true;
        }
        break;
    case XCB_KEY_RELEASE: {
        const xcb_key_release_event_t *key =
            (const xcb_key_release_event_t *)event;

        switch (key->detail) {
        case 0x9: // Escape
            demo->quit = true;
            break;
        case 0x71: // left arrow key
            demo->spin_angle += demo->spin_increment;
            break;
        case 0x72: // right arrow key
            demo->spin_angle -= demo->spin_increment;
            break;
        case 0x41:
            demo->pause = !demo->pause;
            break;
        }
    } break;
    case XCB_CONFIGURE_NOTIFY: {
        const xcb_configure_notify_event_t *cfg =
            (const xcb_configure_notify_event_t *)event;
        if ((demo->width != cfg->width) || (demo->height != cfg->height)) {
            demo->width = cfg->width;
            demo->height = cfg->height;
            demo_resize(demo);
        }
    } break;
    default:
        break;
    }
}

static void demo_run(struct demo *demo) {
    xcb_flush(demo->connection);

    while (!demo->quit) {
        xcb_generic_event_t *event;

        if (demo->pause) {
            event = xcb_wait_for_event(demo->connection);
        } else {
            event = xcb_poll_for_event(demo->connection);
        }
        if (event) {
            demo_handle_event(demo, event);
            free(event);
        }

        // Wait for work to finish before updating MVP.
        vkDeviceWaitIdle(demo->device);
        demo_update_data_buffer(demo);

        demo_draw(demo);

        // Wait for work to finish before updating MVP.
        vkDeviceWaitIdle(demo->device);
        demo->curFrame++;
        if (demo->frameCount != INT32_MAX &&
            demo->curFrame == demo->frameCount)
            demo->quit = true;
    }
}

static void demo_create_window(struct demo *demo) {
    uint32_t value_mask, value_list[32];

    demo->window = xcb_generate_id(demo->connection);

    value_mask = XCB_CW_BACK_PIXEL | XCB_CW_EVENT_MASK;
    value_list[0] = demo->screen->black_pixel;
    value_list[1] = XCB_EVENT_MASK_KEY_RELEASE | XCB_EVENT_MASK_EXPOSURE |
                    XCB_EVENT_MASK_STRUCTURE_NOTIFY;

    xcb_create_window(demo->connection, XCB_COPY_FROM_PARENT, demo->window,
                      demo->screen->root, 0, 0, demo->width, demo->height, 0,
                      XCB_WINDOW_CLASS_INPUT_OUTPUT, demo->screen->root_visual,
                      value_mask, value_list);

    /* Magic code that will send notification when window is destroyed */
    xcb_intern_atom_cookie_t cookie =
        xcb_intern_atom(demo->connection, 1, 12, "WM_PROTOCOLS");
    xcb_intern_atom_reply_t *reply =
        xcb_intern_atom_reply(demo->connection, cookie, 0);

    xcb_intern_atom_cookie_t cookie2 =
        xcb_intern_atom(demo->connection, 0, 16, "WM_DELETE_WINDOW");
    demo->atom_wm_delete_window =
        xcb_intern_atom_reply(demo->connection, cookie2, 0);

    xcb_change_property(demo->connection, XCB_PROP_MODE_REPLACE, demo->window,
                        (*reply).atom, 4, 32, 1,
                        &(*demo->atom_wm_delete_window).atom);
    free(reply);

    xcb_map_window(demo->connection, demo->window);

    // Force the x/y coordinates to 100,100 so that results are identical in
    // consecutive runs
    const uint32_t coords[] = {100, 100};
    xcb_configure_window(demo->connection, demo->window,
                         XCB_CONFIG_WINDOW_X | XCB_CONFIG_WINDOW_Y, coords);
}
#endif // _WIN32

/*
 * Return 1 (true) if all layer names specified in check_names
 * can be found in given layer properties.
 */
static VkBool32 demo_check_layers(uint32_t check_count, char **check_names,
                                  uint32_t layer_count,
                                  VkLayerProperties *layers) {
    for (uint32_t i = 0; i < check_count; i++) {
        VkBool32 found = 0;
        for (uint32_t j = 0; j < layer_count; j++) {
            if (!strcmp(check_names[i], layers[j].layerName)) {
                found = 1;
                break;
            }
        }
        if (!found) {
            fprintf(stderr, "Cannot find layer: %s\n", check_names[i]);
            return 0;
        }
    }
    return 1;
}

static void demo_init_vk(struct demo *demo) {
    VkResult err;
    uint32_t instance_extension_count = 0;
    uint32_t instance_layer_count = 0;
    uint32_t device_validation_layer_count = 0;
    char **instance_validation_layers = NULL;
    demo->enabled_extension_count = 0;
    demo->enabled_layer_count = 0;

    char *instance_validation_layers_alt1[] = {
        "VK_LAYER_LUNARG_standard_validation"
    };

    char *instance_validation_layers_alt2[] = {
        "VK_LAYER_GOOGLE_threading",
        "VK_LAYER_LUNARG_parameter_validation",
        "VK_LAYER_LUNARG_device_limits",
        "VK_LAYER_LUNARG_object_tracker",
        "VK_LAYER_LUNARG_image",
        "VK_LAYER_LUNARG_core_validation",
        "VK_LAYER_LUNARG_swapchain",
        "VK_LAYER_GOOGLE_unique_objects"
    };

    /* Look for validation layers */
    VkBool32 validation_found = 0;
    if (demo->validate) {
        err = vkEnumerateInstanceLayerProperties(&instance_layer_count, NULL);
        assert(!err);

        instance_validation_layers = instance_validation_layers_alt1;
        if (instance_layer_count > 0) {
            VkLayerProperties *instance_layers =
                malloc(sizeof(VkLayerProperties) * instance_layer_count);
            err = vkEnumerateInstanceLayerProperties(&instance_layer_count,
                                                     instance_layers);
            assert(!err);

            validation_found = demo_check_layers(
                ARRAY_SIZE(instance_validation_layers_alt1),
                instance_validation_layers, instance_layer_count,
                instance_layers);
            if (validation_found) {
                demo->enabled_layer_count =
                    ARRAY_SIZE(instance_validation_layers_alt1);
                demo->device_validation_layers[0] =
                    "VK_LAYER_LUNARG_standard_validation";
                device_validation_layer_count = 1;
            } else {
                // use alternative set of validation layers
                instance_validation_layers =
                    instance_validation_layers_alt2;
                demo->enabled_layer_count =
                    ARRAY_SIZE(instance_validation_layers_alt2);
                validation_found = demo_check_layers(
                    ARRAY_SIZE(instance_validation_layers_alt2),
                    instance_validation_layers, instance_layer_count,
                    instance_layers);
                device_validation_layer_count =
                    ARRAY_SIZE(instance_validation_layers_alt2);
                for (uint32_t i = 0; i < device_validation_layer_count; i++) {
                    demo->device_validation_layers[i] =
                        instance_validation_layers[i];
                }
            }
            free(instance_layers);
        }

        if (!validation_found) {
            ERR_EXIT("vkEnumerateInstanceLayerProperties failed to find "
                     "required validation layer.\n\n"
                     "Please look at the Getting Started guide for additional "
                     "information.\n",
                     "vkCreateInstance Failure");
        }
    }

    /* Look for instance extensions */
    VkBool32 surfaceExtFound = 0;
    VkBool32 platformSurfaceExtFound = 0;
    memset(demo->extension_names, 0, sizeof(demo->extension_names));

    err = vkEnumerateInstanceExtensionProperties(
        NULL, &instance_extension_count, NULL);
    assert(!err);

    if (instance_extension_count > 0) {
        VkExtensionProperties *instance_extensions =
            malloc(sizeof(VkExtensionProperties) * instance_extension_count);
        err = vkEnumerateInstanceExtensionProperties(
            NULL, &instance_extension_count, instance_extensions);
        assert(!err);
        for (uint32_t i = 0; i < instance_extension_count; i++) {
            if (!strcmp(VK_KHR_SURFACE_EXTENSION_NAME,
                        instance_extensions[i].extensionName)) {
                surfaceExtFound = 1;
                demo->extension_names[demo->enabled_extension_count++] =
                    VK_KHR_SURFACE_EXTENSION_NAME;
            }
#ifdef _WIN32
            if (!strcmp(VK_KHR_WIN32_SURFACE_EXTENSION_NAME,
                        instance_extensions[i].extensionName)) {
                platformSurfaceExtFound = 1;
                demo->extension_names[demo->enabled_extension_count++] =
                    VK_KHR_WIN32_SURFACE_EXTENSION_NAME;
            }
#else  // _WIN32
            if (!strcmp(VK_KHR_XCB_SURFACE_EXTENSION_NAME,
                        instance_extensions[i].extensionName)) {
                platformSurfaceExtFound = 1;
                demo->extension_names[demo->enabled_extension_count++] =
                    VK_KHR_XCB_SURFACE_EXTENSION_NAME;
            }
#endif // _WIN32
            if
                (!strcmp(VK_EXT_DEBUG_REPORT_EXTENSION_NAME,
                         instance_extensions[i].extensionName)) {
                if (demo->validate) {
                    demo->extension_names[demo->enabled_extension_count++] =
                        VK_EXT_DEBUG_REPORT_EXTENSION_NAME;
                }
            }
            assert(demo->enabled_extension_count < 64);
        }

        free(instance_extensions);
    }

    if (!surfaceExtFound) {
        ERR_EXIT("vkEnumerateInstanceExtensionProperties failed to find "
                 "the " VK_KHR_SURFACE_EXTENSION_NAME
                 " extension.\n\nDo you have a compatible "
                 "Vulkan installable client driver (ICD) installed?\nPlease "
                 "look at the Getting Started guide for additional "
                 "information.\n",
                 "vkCreateInstance Failure");
    }
    if (!platformSurfaceExtFound) {
#ifdef _WIN32
        ERR_EXIT("vkEnumerateInstanceExtensionProperties failed to find "
                 "the " VK_KHR_WIN32_SURFACE_EXTENSION_NAME
                 " extension.\n\nDo you have a compatible "
                 "Vulkan installable client driver (ICD) installed?\nPlease "
                 "look at the Getting Started guide for additional "
                 "information.\n",
                 "vkCreateInstance Failure");
#else  // _WIN32
        ERR_EXIT("vkEnumerateInstanceExtensionProperties failed to find "
                 "the " VK_KHR_XCB_SURFACE_EXTENSION_NAME
                 " extension.\n\nDo you have a compatible "
                 "Vulkan installable client driver (ICD) installed?\nPlease "
                 "look at the Getting Started guide for additional "
                 "information.\n",
                 "vkCreateInstance Failure");
#endif // _WIN32
    }
    const VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .pNext = NULL,
        .pApplicationName = APP_SHORT_NAME,
        .applicationVersion = 0,
        .pEngineName = APP_SHORT_NAME,
        .engineVersion = 0,
        .apiVersion = VK_API_VERSION_1_0,
    };
    VkInstanceCreateInfo inst_info = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pNext = NULL,
        .pApplicationInfo = &app,
        .enabledLayerCount = demo->enabled_layer_count,
        .ppEnabledLayerNames = (const char *const *)instance_validation_layers,
        .enabledExtensionCount = demo->enabled_extension_count,
        .ppEnabledExtensionNames = (const char *const *)demo->extension_names,
    };

    /*
     * This is info for a temp callback to use during CreateInstance.
     * After the instance is created, we use the instance-based
     * function to register the final callback.
     */
    VkDebugReportCallbackCreateInfoEXT dbgCreateInfo;
    if (demo->validate) {
        dbgCreateInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
        dbgCreateInfo.pNext = NULL;
        dbgCreateInfo.pfnCallback = demo->use_break ? BreakCallback : dbgFunc;
        dbgCreateInfo.pUserData = NULL;
        dbgCreateInfo.flags =
            VK_DEBUG_REPORT_ERROR_BIT_EXT | VK_DEBUG_REPORT_WARNING_BIT_EXT;
        inst_info.pNext = &dbgCreateInfo;
    }

    uint32_t gpu_count;

    err = vkCreateInstance(&inst_info, NULL, &demo->inst);
    if (err == VK_ERROR_INCOMPATIBLE_DRIVER) {
        ERR_EXIT("Cannot find a compatible Vulkan installable client driver "
                 "(ICD).\n\nPlease look at the Getting Started guide for "
                 "additional information.\n",
                 "vkCreateInstance Failure");
    } else if (err == VK_ERROR_EXTENSION_NOT_PRESENT) {
        ERR_EXIT("Cannot find a specified extension library"
                 ".\nMake sure your layers path is set appropriately.\n",
                 "vkCreateInstance Failure");
    } else if (err) {
        ERR_EXIT("vkCreateInstance failed.\n\nDo you have a compatible Vulkan "
                 "installable client driver (ICD) installed?\nPlease look at "
                 "the Getting Started guide for additional information.\n",
                 "vkCreateInstance Failure");
    }

    /* Make initial call to query gpu_count, then second call for gpu info */
    err = vkEnumeratePhysicalDevices(demo->inst, &gpu_count, NULL);
    assert(!err && gpu_count > 0);

    if (gpu_count > 0) {
        VkPhysicalDevice *physical_devices =
            malloc(sizeof(VkPhysicalDevice) * gpu_count);
        err = vkEnumeratePhysicalDevices(demo->inst, &gpu_count,
                                         physical_devices);
        assert(!err);
        /* For cube demo we just grab the first physical device */
        demo->gpu = physical_devices[0];
        free(physical_devices);
    } else {
        ERR_EXIT("vkEnumeratePhysicalDevices reported zero accessible "
                 "devices.\n\nDo you have a compatible Vulkan installable "
                 "client driver (ICD) installed?\nPlease look at the Getting "
                 "Started guide for additional information.\n",
                 "vkEnumeratePhysicalDevices Failure");
    }
    /* Look for validation layers */
    validation_found = 0;
    demo->enabled_layer_count = 0;
    uint32_t device_layer_count = 0;
    err =
        vkEnumerateDeviceLayerProperties(demo->gpu, &device_layer_count, NULL);
    assert(!err);

    if (device_layer_count > 0) {
        VkLayerProperties *device_layers =
            malloc(sizeof(VkLayerProperties) * device_layer_count);
        err = vkEnumerateDeviceLayerProperties(demo->gpu, &device_layer_count,
                                               device_layers);
        assert(!err);

        if (demo->validate) {
            validation_found = demo_check_layers(
                device_validation_layer_count, demo->device_validation_layers,
                device_layer_count, device_layers);
            demo->enabled_layer_count = device_validation_layer_count;
        }

        free(device_layers);
    }

    if (demo->validate && !validation_found) {
        ERR_EXIT("vkEnumerateDeviceLayerProperties failed to find "
                 "a required validation layer.\n\n"
                 "Please look at the Getting Started guide for additional "
                 "information.\n",
                 "vkCreateDevice Failure");
    }

    /* Look for device extensions */
    uint32_t device_extension_count = 0;
    VkBool32 swapchainExtFound = 0;
    demo->enabled_extension_count = 0;
    memset(demo->extension_names, 0, sizeof(demo->extension_names));

    err = vkEnumerateDeviceExtensionProperties(demo->gpu, NULL,
                                               &device_extension_count, NULL);
    assert(!err);

    if (device_extension_count > 0) {
        VkExtensionProperties *device_extensions =
            malloc(sizeof(VkExtensionProperties) * device_extension_count);
        err = vkEnumerateDeviceExtensionProperties(
            demo->gpu, NULL, &device_extension_count, device_extensions);
        assert(!err);

        for (uint32_t i = 0; i < device_extension_count; i++) {
            if (!strcmp(VK_KHR_SWAPCHAIN_EXTENSION_NAME,
                        device_extensions[i].extensionName)) {
                swapchainExtFound = 1;
                demo->extension_names[demo->enabled_extension_count++] =
                    VK_KHR_SWAPCHAIN_EXTENSION_NAME;
            }
            assert(demo->enabled_extension_count < 64);
        }

        free(device_extensions);
    }

    if (!swapchainExtFound) {
        ERR_EXIT("vkEnumerateDeviceExtensionProperties failed to find "
                 "the " VK_KHR_SWAPCHAIN_EXTENSION_NAME
                 " extension.\n\nDo you have a compatible "
                 "Vulkan installable client driver (ICD) installed?\nPlease "
                 "look at the Getting Started guide for additional "
                 "information.\n",
                 "vkCreateInstance Failure");
    }

    if (demo->validate) {
        demo->CreateDebugReportCallback =
            (PFN_vkCreateDebugReportCallbackEXT)vkGetInstanceProcAddr(
                demo->inst, "vkCreateDebugReportCallbackEXT");
        demo->DestroyDebugReportCallback =
            (PFN_vkDestroyDebugReportCallbackEXT)vkGetInstanceProcAddr(
                demo->inst, "vkDestroyDebugReportCallbackEXT");
        if (!demo->CreateDebugReportCallback) {
            ERR_EXIT(
                "GetProcAddr: Unable to find vkCreateDebugReportCallbackEXT\n",
                "vkGetProcAddr Failure");
        }
        if (!demo->DestroyDebugReportCallback) {
            ERR_EXIT(
                "GetProcAddr: Unable to find vkDestroyDebugReportCallbackEXT\n",
                "vkGetProcAddr Failure");
        }
        demo->DebugReportMessage =
            (PFN_vkDebugReportMessageEXT)vkGetInstanceProcAddr(
                demo->inst, "vkDebugReportMessageEXT");
        if (!demo->DebugReportMessage) {
            ERR_EXIT("GetProcAddr: Unable to find vkDebugReportMessageEXT\n",
                     "vkGetProcAddr Failure");
        }

        VkDebugReportCallbackCreateInfoEXT dbgCreateInfo;
        PFN_vkDebugReportCallbackEXT callback;
        callback = demo->use_break ?
                                       BreakCallback : dbgFunc;
        dbgCreateInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
        dbgCreateInfo.pNext = NULL;
        dbgCreateInfo.pfnCallback = callback;
        dbgCreateInfo.pUserData = NULL;
        dbgCreateInfo.flags =
            VK_DEBUG_REPORT_ERROR_BIT_EXT | VK_DEBUG_REPORT_WARNING_BIT_EXT;
        err = demo->CreateDebugReportCallback(demo->inst, &dbgCreateInfo, NULL,
                                              &demo->msg_callback);
        switch (err) {
        case VK_SUCCESS:
            break;
        case VK_ERROR_OUT_OF_HOST_MEMORY:
            ERR_EXIT("CreateDebugReportCallback: out of host memory\n",
                     "CreateDebugReportCallback Failure");
            break;
        default:
            ERR_EXIT("CreateDebugReportCallback: unknown failure\n",
                     "CreateDebugReportCallback Failure");
            break;
        }
    }
    vkGetPhysicalDeviceProperties(demo->gpu, &demo->gpu_props);

    /* Call with NULL data to get count */
    vkGetPhysicalDeviceQueueFamilyProperties(demo->gpu, &demo->queue_count,
                                             NULL);
    assert(demo->queue_count >= 1);

    demo->queue_props = (VkQueueFamilyProperties *)malloc(
        demo->queue_count * sizeof(VkQueueFamilyProperties));
    vkGetPhysicalDeviceQueueFamilyProperties(demo->gpu, &demo->queue_count,
                                             demo->queue_props);

    // Find a queue that supports gfx
    uint32_t gfx_queue_idx = 0;
    for (gfx_queue_idx = 0; gfx_queue_idx < demo->queue_count;
         gfx_queue_idx++) {
        if (demo->queue_props[gfx_queue_idx].queueFlags & VK_QUEUE_GRAPHICS_BIT)
            break;
    }
    assert(gfx_queue_idx < demo->queue_count);
    // Query fine-grained feature support for this device.
    // If app has specific feature requirements it should check supported
    // features based on this query
    VkPhysicalDeviceFeatures physDevFeatures;
    vkGetPhysicalDeviceFeatures(demo->gpu, &physDevFeatures);

    GET_INSTANCE_PROC_ADDR(demo->inst, GetPhysicalDeviceSurfaceSupportKHR);
    GET_INSTANCE_PROC_ADDR(demo->inst, GetPhysicalDeviceSurfaceCapabilitiesKHR);
    GET_INSTANCE_PROC_ADDR(demo->inst, GetPhysicalDeviceSurfaceFormatsKHR);
    GET_INSTANCE_PROC_ADDR(demo->inst, GetPhysicalDeviceSurfacePresentModesKHR);
    GET_INSTANCE_PROC_ADDR(demo->inst, GetSwapchainImagesKHR);
}

static void demo_create_device(struct demo *demo) {
    VkResult U_ASSERT_ONLY err;
    float queue_priorities[1] = {0.0};
    const VkDeviceQueueCreateInfo queue = {
        .sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
        .pNext = NULL,
        .queueFamilyIndex = demo->graphics_queue_node_index,
        .queueCount = 1,
        .pQueuePriorities = queue_priorities};

    VkDeviceCreateInfo device = {
        .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
        .pNext = NULL,
        .queueCreateInfoCount = 1,
        .pQueueCreateInfos = &queue,
        .enabledLayerCount = demo->enabled_layer_count,
        .ppEnabledLayerNames =
            (const char *const *)((demo->validate) ?
                                      demo->device_validation_layers : NULL),
        .enabledExtensionCount = demo->enabled_extension_count,
        .ppEnabledExtensionNames = (const char *const *)demo->extension_names,
        .pEnabledFeatures =
            NULL, // If specific features are required, pass them in here
    };
    err = vkCreateDevice(demo->gpu, &device, NULL, &demo->device);
    assert(!err);
}

static void demo_init_vk_swapchain(struct demo *demo) {
    VkResult U_ASSERT_ONLY err;
    uint32_t i;

// Create a WSI surface for the window:
#ifdef _WIN32
    VkWin32SurfaceCreateInfoKHR createInfo;
    createInfo.sType = VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR;
    createInfo.pNext = NULL;
    createInfo.flags = 0;
    createInfo.hinstance = demo->connection;
    createInfo.hwnd = demo->window;

    err =
        vkCreateWin32SurfaceKHR(demo->inst, &createInfo, NULL, &demo->surface);
#else  // _WIN32
    VkXcbSurfaceCreateInfoKHR createInfo;
    createInfo.sType = VK_STRUCTURE_TYPE_XCB_SURFACE_CREATE_INFO_KHR;
    createInfo.pNext = NULL;
    createInfo.flags = 0;
    createInfo.connection = demo->connection;
    createInfo.window = demo->window;

    err = vkCreateXcbSurfaceKHR(demo->inst, &createInfo, NULL, &demo->surface);
#endif // _WIN32

    // Iterate over each queue to learn whether it supports presenting:
    VkBool32 *supportsPresent =
        (VkBool32 *)malloc(demo->queue_count * sizeof(VkBool32));
    for (i = 0; i < demo->queue_count; i++) {
        demo->fpGetPhysicalDeviceSurfaceSupportKHR(demo->gpu, i, demo->surface,
                                                   &supportsPresent[i]);
    }

    // Search for a graphics and a present queue in the array of queue
    // families, try to find one that supports both
    uint32_t graphicsQueueNodeIndex = UINT32_MAX;
    uint32_t presentQueueNodeIndex = UINT32_MAX;
    for (i = 0; i < demo->queue_count; i++) {
        if ((demo->queue_props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) != 0) {
            if (graphicsQueueNodeIndex == UINT32_MAX) {
                graphicsQueueNodeIndex = i;
            }

            if (supportsPresent[i] == VK_TRUE) {
                graphicsQueueNodeIndex = i;
                presentQueueNodeIndex = i;
                break;
            }
        }
    }
    if (presentQueueNodeIndex == UINT32_MAX) {
        // If we didn't find a queue that supports both graphics and present,
        // then find a separate present queue.
        for (uint32_t i = 0; i < demo->queue_count; ++i) {
            if (supportsPresent[i] == VK_TRUE) {
                presentQueueNodeIndex = i;
                break;
            }
        }
    }
    free(supportsPresent);

    // Generate error if could not find both a graphics and a present queue
    if (graphicsQueueNodeIndex == UINT32_MAX ||
        presentQueueNodeIndex == UINT32_MAX) {
        ERR_EXIT("Could not find a graphics and a present queue\n",
                 "Swapchain Initialization Failure");
    }

    // TODO: Add support for separate queues, including presentation,
    // synchronization, and appropriate tracking for QueueSubmit.
    // NOTE: While it is possible for an application to use separate graphics
    // and present queues, this demo program assumes it is only using one:
    if (graphicsQueueNodeIndex != presentQueueNodeIndex) {
        ERR_EXIT("Could not find a common graphics and a present queue\n",
                 "Swapchain Initialization Failure");
    }

    demo->graphics_queue_node_index = graphicsQueueNodeIndex;

    demo_create_device(demo);

    GET_DEVICE_PROC_ADDR(demo->device, CreateSwapchainKHR);
    GET_DEVICE_PROC_ADDR(demo->device, DestroySwapchainKHR);
    GET_DEVICE_PROC_ADDR(demo->device, GetSwapchainImagesKHR);
    GET_DEVICE_PROC_ADDR(demo->device, AcquireNextImageKHR);
    GET_DEVICE_PROC_ADDR(demo->device, QueuePresentKHR);

    vkGetDeviceQueue(demo->device, demo->graphics_queue_node_index, 0,
                     &demo->queue);

    // Get the list of VkFormat's that are supported:
    uint32_t formatCount;
    err = demo->fpGetPhysicalDeviceSurfaceFormatsKHR(demo->gpu, demo->surface,
                                                     &formatCount, NULL);
    assert(!err);
    VkSurfaceFormatKHR *surfFormats =
        (VkSurfaceFormatKHR *)malloc(formatCount * sizeof(VkSurfaceFormatKHR));
    err = demo->fpGetPhysicalDeviceSurfaceFormatsKHR(demo->gpu, demo->surface,
                                                     &formatCount, surfFormats);
    assert(!err);
    // If the format list includes just one entry of VK_FORMAT_UNDEFINED,
    // the surface has no preferred format. Otherwise, at least one
    // supported format will be returned.
    if (formatCount == 1 && surfFormats[0].format == VK_FORMAT_UNDEFINED) {
        demo->format = VK_FORMAT_B8G8R8A8_UNORM;
    } else {
        assert(formatCount >= 1);
        demo->format = surfFormats[0].format;
    }
    demo->color_space = surfFormats[0].colorSpace;

    demo->quit = false;
    demo->curFrame = 0;

    // Get Memory information and properties
    vkGetPhysicalDeviceMemoryProperties(demo->gpu, &demo->memory_properties);
}

static void demo_init_connection(struct demo *demo) {
#ifndef _WIN32
    const xcb_setup_t *setup;
    xcb_screen_iterator_t iter;
    int scr;

    demo->connection = xcb_connect(NULL, &scr);
    if (demo->connection == NULL) {
        printf("Cannot find a compatible Vulkan installable client driver "
               "(ICD).\nExiting ...\n");
        fflush(stdout);
        exit(1);
    }

    setup = xcb_get_setup(demo->connection);
    iter = xcb_setup_roots_iterator(setup);
    while (scr-- > 0)
        xcb_screen_next(&iter);

    demo->screen = iter.data;
#endif // _WIN32
}

static void demo_init(struct demo *demo, int argc, char **argv) {
    vec3 eye = {0.0f, 3.0f, 5.0f};
    vec3 origin = {0, 0, 0};
    vec3 up = {0.0f, 1.0f, 0.0};

    memset(demo, 0, sizeof(*demo));
    demo->frameCount = INT32_MAX;

    for (int i = 1; i < argc; i++) {
        if (strcmp(argv[i], "--use_staging") == 0) {
            demo->use_staging_buffer = true;
            continue;
        }
        if (strcmp(argv[i], "--break") == 0) {
            demo->use_break = true;
            continue;
        }
        if (strcmp(argv[i], "--validate") == 0) {
            demo->validate = true;
            continue;
        }
        if (strcmp(argv[i], "--c") == 0 && demo->frameCount == INT32_MAX &&
            i < argc - 1 && sscanf(argv[i + 1], "%d", &demo->frameCount) == 1 &&
            demo->frameCount >= 0) {
            i++;
            continue;
        }

        fprintf(stderr, "Usage:\n  %s [--use_staging] [--validate] [--break] "
                        "[--c <framecount>]\n",
                APP_SHORT_NAME);
        fflush(stderr);
        exit(1);
    }

    demo_init_connection(demo);
    demo_init_vk(demo);

    demo->width = 500;
    demo->height = 500;

    demo->spin_angle = 0.01f;
    demo->spin_increment = 0.01f;
    demo->pause = false;

    mat4x4_perspective(demo->projection_matrix, (float)degreesToRadians(45.0f),
                       1.0f, 0.1f, 100.0f);
    mat4x4_look_at(demo->view_matrix, eye, origin, up);
    mat4x4_identity(demo->model_matrix);
}

#ifdef _WIN32
// Include header required for parsing the command line options.
#include <shellapi.h>

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                   LPSTR pCmdLine, int nCmdShow) {
    MSG msg;   // message
    bool done; // flag saying when app is complete
    int argc;
    char **argv;

    // Use the CommandLine functions to get the command line arguments.
    // Unfortunately, Microsoft outputs this information as wide characters
    // for Unicode, and we simply want the Ascii version to be compatible
    // with the non-Windows side.  So, we have to convert the information to
    // Ascii character strings.
    LPWSTR *commandLineArgs = CommandLineToArgvW(GetCommandLineW(), &argc);
    if (NULL == commandLineArgs) {
        argc = 0;
    }

    if (argc > 0) {
        argv = (char **)malloc(sizeof(char *) * argc);
        if (argv == NULL) {
            argc = 0;
        } else {
            for (int iii = 0; iii < argc; iii++) {
                size_t wideCharLen = wcslen(commandLineArgs[iii]);
                size_t numConverted = 0;

                argv[iii] = (char *)malloc(sizeof(char) * (wideCharLen + 1));
                if (argv[iii] != NULL) {
                    wcstombs_s(&numConverted, argv[iii], wideCharLen + 1,
                               commandLineArgs[iii], wideCharLen + 1);
                }
            }
        }
    } else {
        argv = NULL;
    }

    demo_init(&demo, argc, argv);

    // Free up the items we had to allocate for the command line arguments.
    if (argc > 0 && argv != NULL) {
        for (int iii = 0; iii < argc; iii++) {
            if (argv[iii] != NULL) {
                free(argv[iii]);
            }
        }
        free(argv);
    }

    demo.connection = hInstance;
    strncpy(demo.name, "cube", APP_NAME_STR_LEN);
    demo_create_window(&demo);
    demo_init_vk_swapchain(&demo);

    demo_prepare(&demo);

    done = false; // initialize loop condition variable

    // main message loop
    while (!done) {
        PeekMessage(&msg, NULL, 0, 0, PM_REMOVE);
        if (msg.message == WM_QUIT) // check for a quit message
        {
            done = true; // if found, quit app
        } else {
            /* Translate and dispatch to event queue*/
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        RedrawWindow(demo.window, NULL, NULL, RDW_INTERNALPAINT);
    }

    demo_cleanup(&demo);

    return (int)msg.wParam;
}
#else  // _WIN32
int main(int argc, char **argv) {
    struct demo demo;

    demo_init(&demo, argc, argv);
    demo_create_window(&demo);
    demo_init_vk_swapchain(&demo);

    demo_prepare(&demo);
    demo_run(&demo);

    demo_cleanup(&demo);

    return validation_error;
}
#endif // _WIN32
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/cube.frag000066400000000000000000000030531270147354000236110ustar00rootroot00000000000000/*
 * Copyright (c) 2015-2016 The Khronos Group Inc.
 * Copyright (c) 2015-2016 Valve Corporation
 * Copyright (c) 2015-2016 LunarG, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and/or associated documentation files (the "Materials"), to
 * deal in the Materials without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Materials, and to permit persons to whom the Materials are
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice(s) and this permission notice shall be included in
 * all copies or substantial portions of the Materials.
 *
 * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 *
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
 * USE OR OTHER DEALINGS IN THE MATERIALS.
 */

/*
 * Fragment shader for cube demo
 */
#version 400
#extension GL_ARB_separate_shader_objects : enable
#extension GL_ARB_shading_language_420pack : enable
layout (binding = 1) uniform sampler2D tex;

layout (location = 0) in vec4 texcoord;
layout (location = 0) out vec4 uFragColor;
void main() {
   uFragColor = texture(tex, texcoord.xy);
}
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/cube.vcxproj.user000077500000000000000000000011671270147354000253510ustar00rootroot00000000000000VK_LAYER_PATH=..\layers\Debug
WindowsLocalDebugger
VK_LAYER_PATH=..\layers\Release
WindowsLocalDebugger
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/cube.vert000066400000000000000000000034651270147354000236550ustar00rootroot00000000000000/*
 * Copyright (c) 2015-2016 The Khronos Group Inc.
 * Copyright (c) 2015-2016 Valve Corporation
 * Copyright (c) 2015-2016 LunarG, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and/or associated documentation files (the "Materials"), to
 * deal in the Materials without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Materials, and to permit persons to whom the Materials are
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice(s) and this permission notice shall be included in
 * all copies or substantial portions of the Materials.
 *
 * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 *
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
 * USE OR OTHER DEALINGS IN THE MATERIALS.
 */

/*
 * Vertex shader used by Cube demo.
 */
#version 400
#extension GL_ARB_separate_shader_objects : enable
#extension GL_ARB_shading_language_420pack : enable
layout(std140, binding = 0) uniform buf {
        mat4 MVP;
        vec4 position[12*3];
        vec4 attr[12*3];
} ubuf;

layout (location = 0) out vec4 texcoord;

out gl_PerVertex {
        vec4 gl_Position;
};

void main() {
   texcoord = ubuf.attr[gl_VertexIndex];
   gl_Position = ubuf.MVP * ubuf.position[gl_VertexIndex];

   // GL->VK conventions
   gl_Position.y = -gl_Position.y;
   gl_Position.z = (gl_Position.z + gl_Position.w) / 2.0;
}
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/linmath.h000066400000000000000000000374121270147354000236450ustar00rootroot00000000000000/*
 * Copyright (c) 2015-2016 The Khronos Group Inc.
 * Copyright (c) 2015-2016 Valve Corporation
 * Copyright (c) 2015-2016 LunarG, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and/or associated documentation files (the "Materials"), to
 * deal in the Materials without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Materials, and to permit persons to whom the Materials are
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice(s) and this permission notice shall be included in
 * all copies or substantial portions of the Materials.
 *
 * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 *
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
 * USE OR OTHER DEALINGS IN THE MATERIALS.
 *
 * Relicensed from the WTFPL (http://www.wtfpl.net/faq/).
 */

#ifndef LINMATH_H
#define LINMATH_H

#include <math.h>

// Converts degrees to radians.
#define degreesToRadians(angleDegrees) (angleDegrees * M_PI / 180.0)

// Converts radians to degrees.
#define radiansToDegrees(angleRadians) (angleRadians * 180.0 / M_PI)

typedef float vec3[3];
static inline void vec3_add(vec3 r, vec3 const a, vec3 const b) {
    int i;
    for (i = 0; i < 3; ++i)
        r[i] = a[i] + b[i];
}
static inline void vec3_sub(vec3 r, vec3 const a, vec3 const b) {
    int i;
    for (i = 0; i < 3; ++i)
        r[i] = a[i] - b[i];
}
static inline void vec3_scale(vec3 r, vec3 const v, float const s) {
    int i;
    for (i = 0; i < 3; ++i)
        r[i] = v[i] * s;
}
static inline float vec3_mul_inner(vec3 const a, vec3 const b) {
    float p = 0.f;
    int i;
    for (i = 0; i < 3; ++i)
        p += b[i] * a[i];
    return p;
}
static inline void vec3_mul_cross(vec3 r, vec3 const a, vec3 const b) {
    r[0] = a[1] * b[2] - a[2] * b[1];
    r[1] = a[2] * b[0] - a[0] * b[2];
    r[2] = a[0] * b[1] - a[1] * b[0];
}
static inline float vec3_len(vec3 const v) {
    return sqrtf(vec3_mul_inner(v, v));
}
static inline void vec3_norm(vec3 r, vec3 const v) {
    float k = 1.f / vec3_len(v);
    vec3_scale(r, v, k);
}
static inline void vec3_reflect(vec3 r, vec3 const v, vec3 const n) {
    float p = 2.f * vec3_mul_inner(v, n);
    int i;
    for (i = 0; i < 3; ++i)
        r[i] = v[i] - p * n[i];
}

typedef float vec4[4];
static inline void vec4_add(vec4 r, vec4 const a, vec4 const b) {
    int i;
    for (i = 0; i < 4; ++i)
        r[i] = a[i] + b[i];
}
static inline void vec4_sub(vec4 r, vec4 const a, vec4 const b) {
    int i;
    for (i = 0; i < 4; ++i)
        r[i] = a[i] - b[i];
}
static inline void vec4_scale(vec4 r, vec4 v, float s) {
    int i;
    for (i = 0; i < 4; ++i)
        r[i] = v[i] * s;
}
static inline float vec4_mul_inner(vec4 a, vec4 b) {
    float p = 0.f;
    int i;
    for (i = 0; i < 4; ++i)
        p += b[i] * a[i];
    return p;
}
static inline void vec4_mul_cross(vec4 r, vec4 a, vec4 b) {
    r[0] = a[1] * b[2] - a[2] * b[1];
    r[1] = a[2] * b[0] - a[0] * b[2];
    r[2] = a[0] * b[1] - a[1] * b[0];
    r[3] = 1.f;
}
static inline float vec4_len(vec4 v) {
    return sqrtf(vec4_mul_inner(v, v));
}
static inline void vec4_norm(vec4 r, vec4 v) {
    float k = 1.f / vec4_len(v);
    vec4_scale(r, v, k);
}
static inline void vec4_reflect(vec4 r, vec4 v, vec4 n) {
    float p = 2.f * vec4_mul_inner(v, n);
    int i;
    for (i = 0; i < 4; ++i)
        r[i] = v[i] - p * n[i];
}

typedef vec4 mat4x4[4];
static inline void mat4x4_identity(mat4x4 M) {
    int i, j;
    for (i = 0; i < 4; ++i)
        for (j = 0; j < 4; ++j)
            M[i][j] = i == j ? 1.f : 0.f;
}
static inline void mat4x4_dup(mat4x4 M, mat4x4 N) {
    int i, j;
    for (i = 0; i < 4; ++i)
        for (j = 0; j < 4; ++j)
            M[i][j] = N[i][j];
}
static inline void mat4x4_row(vec4 r, mat4x4 M, int i) {
    int k;
    for (k = 0; k < 4; ++k)
        r[k] = M[k][i];
}
static inline void mat4x4_col(vec4 r, mat4x4 M, int i) {
    int k;
    for (k = 0; k < 4; ++k)
        r[k] = M[i][k];
}
static inline void mat4x4_transpose(mat4x4 M, mat4x4 N) {
    int i, j;
    for (j = 0; j < 4; ++j)
        for (i = 0; i < 4; ++i)
            M[i][j] = N[j][i];
}
static inline void mat4x4_add(mat4x4 M, mat4x4 a, mat4x4 b) {
    int i;
    for (i = 0; i < 4; ++i)
        vec4_add(M[i], a[i], b[i]);
}
static inline void mat4x4_sub(mat4x4 M, mat4x4 a, mat4x4 b) {
    int i;
    for (i = 0; i < 4; ++i)
        vec4_sub(M[i], a[i], b[i]);
}
static inline void mat4x4_scale(mat4x4 M, mat4x4 a, float k) {
    int i;
    for (i = 0; i < 4; ++i)
        vec4_scale(M[i], a[i], k);
}
static inline void mat4x4_scale_aniso(mat4x4 M, mat4x4 a, float x, float y,
                                      float z) {
    int i;
    vec4_scale(M[0], a[0], x);
    vec4_scale(M[1], a[1], y);
    vec4_scale(M[2], a[2], z);
    for (i = 0; i < 4; ++i) {
        M[3][i] = a[3][i];
    }
}
static inline void mat4x4_mul(mat4x4 M, mat4x4 a, mat4x4 b) {
    int k, r, c;
    for (c = 0; c < 4; ++c)
        for (r = 0; r < 4; ++r) {
            M[c][r] = 0.f;
            for (k = 0; k < 4; ++k)
                M[c][r] += a[k][r] * b[c][k];
        }
}
static inline void mat4x4_mul_vec4(vec4 r, mat4x4 M, vec4 v) {
    int i, j;
    for (j = 0; j < 4; ++j) {
        r[j] = 0.f;
        for (i = 0; i < 4; ++i)
            r[j] += M[i][j] * v[i];
    }
}
static inline void mat4x4_translate(mat4x4 T, float x, float y, float z) {
    mat4x4_identity(T);
    T[3][0] = x;
    T[3][1] = y;
    T[3][2] = z;
}
static inline void mat4x4_translate_in_place(mat4x4 M, float x, float y,
                                             float z) {
    vec4 t = {x, y, z, 0};
    vec4 r;
    int i;
    for (i = 0; i < 4; ++i) {
        mat4x4_row(r, M, i);
        M[3][i] += vec4_mul_inner(r, t);
    }
}
static inline void mat4x4_from_vec3_mul_outer(mat4x4 M, vec3 a, vec3 b) {
    int i, j;
    for (i = 0; i < 4; ++i)
        for (j = 0; j < 4; ++j)
            M[i][j] = i < 3 && j < 3 ? a[i] * b[j] : 0.f;
}
static inline void mat4x4_rotate(mat4x4 R, mat4x4 M, float x, float y, float z,
                                 float angle) {
    float s = sinf(angle);
    float c = cosf(angle);
    vec3 u = {x, y, z};

    if (vec3_len(u) > 1e-4) {
        vec3_norm(u, u);
        mat4x4 T;
        mat4x4_from_vec3_mul_outer(T, u, u);

        mat4x4 S = {{0, u[2], -u[1], 0},
                    {-u[2], 0, u[0], 0},
                    {u[1], -u[0], 0, 0},
                    {0, 0, 0, 0}};
        mat4x4_scale(S, S, s);

        mat4x4 C;
        mat4x4_identity(C);
        mat4x4_sub(C, C, T);

        mat4x4_scale(C, C, c);

        mat4x4_add(T, T, C);
        mat4x4_add(T, T, S);

        T[3][3] = 1.;
        mat4x4_mul(R, M, T);
    } else {
        mat4x4_dup(R, M);
    }
}
static inline void mat4x4_rotate_X(mat4x4 Q, mat4x4 M, float angle) {
    float s = sinf(angle);
    float c = cosf(angle);
    mat4x4 R = {{1.f, 0.f, 0.f, 0.f},
                {0.f, c, s, 0.f},
                {0.f, -s, c, 0.f},
                {0.f, 0.f, 0.f, 1.f}};
    mat4x4_mul(Q, M, R);
}
static inline void mat4x4_rotate_Y(mat4x4 Q, mat4x4 M, float angle) {
    float s = sinf(angle);
    float c = cosf(angle);
    mat4x4 R = {{c, 0.f, s, 0.f},
                {0.f, 1.f, 0.f, 0.f},
                {-s, 0.f, c, 0.f},
                {0.f, 0.f, 0.f, 1.f}};
    mat4x4_mul(Q, M, R);
}
static inline void mat4x4_rotate_Z(mat4x4 Q, mat4x4 M, float angle) {
    float s = sinf(angle);
    float c = cosf(angle);
    mat4x4 R = {{c, s, 0.f, 0.f},
                {-s, c, 0.f, 0.f},
                {0.f, 0.f, 1.f, 0.f},
                {0.f, 0.f, 0.f, 1.f}};
    mat4x4_mul(Q, M, R);
}
static inline void mat4x4_invert(mat4x4 T, mat4x4 M) {
    float s[6];
    float c[6];
    s[0] = M[0][0] * M[1][1] - M[1][0] * M[0][1];
    s[1] = M[0][0] * M[1][2] - M[1][0] * M[0][2];
    s[2] = M[0][0] * M[1][3] - M[1][0] * M[0][3];
    s[3] = M[0][1] * M[1][2] - M[1][1] * M[0][2];
    s[4] = M[0][1] * M[1][3] - M[1][1] * M[0][3];
    s[5] = M[0][2] * M[1][3] - M[1][2] * M[0][3];

    c[0] = M[2][0] * M[3][1] - M[3][0] * M[2][1];
    c[1] = M[2][0] * M[3][2] - M[3][0] * M[2][2];
    c[2] = M[2][0] * M[3][3] - M[3][0] * M[2][3];
    c[3] = M[2][1] * M[3][2] - M[3][1] * M[2][2];
    c[4] = M[2][1] * M[3][3] - M[3][1] * M[2][3];
    c[5] = M[2][2] * M[3][3] - M[3][2] * M[2][3];

    /* Assumes it is invertible */
    float idet = 1.0f / (s[0] * c[5] - s[1] * c[4] + s[2] * c[3] +
                         s[3] * c[2] - s[4] * c[1] + s[5] * c[0]);

    T[0][0] = (M[1][1] * c[5] - M[1][2] * c[4] + M[1][3] * c[3]) * idet;
    T[0][1] = (-M[0][1] * c[5] + M[0][2] * c[4] - M[0][3] * c[3]) * idet;
    T[0][2] = (M[3][1] * s[5] - M[3][2] * s[4] + M[3][3] * s[3]) * idet;
    T[0][3] = (-M[2][1] * s[5] + M[2][2] * s[4] - M[2][3] * s[3]) * idet;

    T[1][0] = (-M[1][0] * c[5] + M[1][2] * c[2] - M[1][3] * c[1]) * idet;
    T[1][1] = (M[0][0] * c[5] - M[0][2] * c[2] + M[0][3] * c[1]) * idet;
    T[1][2] = (-M[3][0] * s[5] + M[3][2] * s[2] - M[3][3] * s[1]) * idet;
    T[1][3] = (M[2][0] * s[5] - M[2][2] * s[2] + M[2][3] * s[1]) * idet;

    T[2][0] = (M[1][0] * c[4] - M[1][1] * c[2] + M[1][3] * c[0]) * idet;
    T[2][1] = (-M[0][0] * c[4] + M[0][1] * c[2] - M[0][3] * c[0]) * idet;
    T[2][2] = (M[3][0] * s[4] - M[3][1] * s[2] + M[3][3] * s[0]) * idet;
    T[2][3] = (-M[2][0] * s[4] + M[2][1] * s[2] - M[2][3] * s[0]) * idet;

    T[3][0] = (-M[1][0] * c[3] + M[1][1] * c[1] - M[1][2] * c[0]) * idet;
    T[3][1] = (M[0][0] * c[3] - M[0][1] * c[1] + M[0][2] * c[0]) * idet;
    T[3][2] = (-M[3][0] * s[3] + M[3][1] * s[1] - M[3][2] * s[0]) * idet;
    T[3][3] = (M[2][0] * s[3] - M[2][1] * s[1] + M[2][2] * s[0]) * idet;
}
static inline void mat4x4_orthonormalize(mat4x4 R, mat4x4 M) {
    mat4x4_dup(R, M);
    float s = 1.;
    vec3 h;

    vec3_norm(R[2], R[2]);

    s = vec3_mul_inner(R[1], R[2]);
    vec3_scale(h, R[2], s);
    vec3_sub(R[1], R[1], h);
    vec3_norm(R[2], R[2]);

    s = vec3_mul_inner(R[1], R[2]);
    vec3_scale(h, R[2], s);
    vec3_sub(R[1], R[1], h);
    vec3_norm(R[1], R[1]);

    s = vec3_mul_inner(R[0], R[1]);
    vec3_scale(h, R[1], s);
    vec3_sub(R[0], R[0], h);
    vec3_norm(R[0], R[0]);
}
static inline void mat4x4_frustum(mat4x4 M, float l, float r, float b, float t,
                                  float n, float f) {
    M[0][0] = 2.f * n / (r - l);
    M[0][1] = M[0][2] = M[0][3] = 0.f;

    M[1][1] = 2.f * n / (t - b);
    M[1][0] = M[1][2] = M[1][3] = 0.f;

    M[2][0] = (r + l) / (r - l);
    M[2][1] = (t + b) / (t - b);
    M[2][2] = -(f + n) / (f - n);
    M[2][3] = -1.f;

    M[3][2] = -2.f * (f * n) / (f - n);
    M[3][0] = M[3][1] = M[3][3] = 0.f;
}
static inline void mat4x4_ortho(mat4x4 M, float l, float r, float b, float t,
                                float n, float f) {
    M[0][0] = 2.f / (r - l);
    M[0][1] = M[0][2] = M[0][3] = 0.f;

    M[1][1] = 2.f / (t - b);
    M[1][0] = M[1][2] = M[1][3] = 0.f;

    M[2][2] = -2.f / (f - n);
    M[2][0] = M[2][1] = M[2][3] = 0.f;

    M[3][0] = -(r + l) / (r - l);
    M[3][1] = -(t + b) / (t - b);
    M[3][2] = -(f + n) / (f - n);
    M[3][3] = 1.f;
}
static inline void mat4x4_perspective(mat4x4 m, float y_fov, float aspect,
                                      float n, float f) {
    /* NOTE: Degrees are an unhandy unit to work with.
     * linmath.h uses radians for everything! */
    float const a = (float)(1.f / tan(y_fov / 2.f));

    m[0][0] = a / aspect;
    m[0][1] = 0.f;
    m[0][2] = 0.f;
    m[0][3] = 0.f;

    m[1][0] = 0.f;
    m[1][1] = a;
    m[1][2] = 0.f;
    m[1][3] = 0.f;

    m[2][0] = 0.f;
    m[2][1] = 0.f;
    m[2][2] = -((f + n) / (f - n));
    m[2][3] = -1.f;

    m[3][0] = 0.f;
    m[3][1] = 0.f;
    m[3][2] = -((2.f * f * n) / (f - n));
    m[3][3] = 0.f;
}
static inline void mat4x4_look_at(mat4x4 m, vec3 eye, vec3 center, vec3 up) {
    /* Adapted from Android's OpenGL Matrix.java. */
    /* See the OpenGL GLUT documentation for gluLookAt for a description */
    /* of the algorithm. We implement it in a straightforward way: */

    /* TODO: The negation of f can be spared by swapping the order of
     * operands in the following cross products in the right way. */
    vec3 f;
    vec3_sub(f, center, eye);
    vec3_norm(f, f);

    vec3 s;
    vec3_mul_cross(s, f, up);
    vec3_norm(s, s);

    vec3 t;
    vec3_mul_cross(t, s, f);

    m[0][0] = s[0];
    m[0][1] = t[0];
    m[0][2] = -f[0];
    m[0][3] = 0.f;

    m[1][0] = s[1];
    m[1][1] = t[1];
    m[1][2] = -f[1];
    m[1][3] = 0.f;

    m[2][0] = s[2];
    m[2][1] = t[2];
    m[2][2] = -f[2];
    m[2][3] = 0.f;

    m[3][0] = 0.f;
    m[3][1] = 0.f;
    m[3][2] = 0.f;
    m[3][3] = 1.f;

    mat4x4_translate_in_place(m, -eye[0], -eye[1], -eye[2]);
}

typedef float quat[4];
static inline void quat_identity(quat q) {
    q[0] = q[1] = q[2] = 0.f;
    q[3] = 1.f;
}
static inline void quat_add(quat r, quat a, quat b) {
    int i;
    for (i = 0; i < 4; ++i)
        r[i] = a[i] + b[i];
}
static inline void quat_sub(quat r, quat a, quat b) {
    int i;
    for (i = 0; i < 4; ++i)
        r[i] = a[i] - b[i];
}
static inline void quat_mul(quat r, quat p, quat q) {
    vec3 w;
    vec3_mul_cross(r, p, q);
    vec3_scale(w, p, q[3]);
    vec3_add(r, r, w);
    vec3_scale(w, q, p[3]);
    vec3_add(r, r, w);
    r[3] = p[3] * q[3] - vec3_mul_inner(p, q);
}
static inline void quat_scale(quat r, quat v, float s) {
    int i;
    for (i = 0; i < 4; ++i)
        r[i] = v[i] * s;
}
static inline float quat_inner_product(quat a, quat b) {
    float p = 0.f;
    int i;
    for (i = 0; i < 4; ++i)
        p += b[i] * a[i];
    return p;
}
static inline void quat_conj(quat r, quat q) {
    int i;
    for (i = 0; i < 3; ++i)
        r[i] = -q[i];
    r[3] = q[3];
}
#define quat_norm vec4_norm
static inline void quat_mul_vec3(vec3 r, quat q, vec3 v) {
    quat v_ = {v[0], v[1], v[2], 0.f};
    quat_conj(r, q);
    quat_norm(r, r);
    quat_mul(r, v_, r);
    quat_mul(r, q, r);
}
static inline void mat4x4_from_quat(mat4x4 M, quat q) {
    float a = q[3];
    float b = q[0];
    float c = q[1];
    float d = q[2];
    float a2 = a * a;
    float b2 = b * b;
    float c2 = c * c;
    float d2 = d * d;

    M[0][0] = a2 + b2 - c2 - d2;
    M[0][1] = 2.f * (b * c + a * d);
    M[0][2] = 2.f * (b * d - a * c);
    M[0][3] = 0.f;

    M[1][0] = 2 * (b * c - a * d);
    M[1][1] = a2 - b2 + c2 - d2;
    M[1][2] = 2.f * (c * d + a * b);
    M[1][3] = 0.f;

    M[2][0] = 2.f * (b * d + a * c);
    M[2][1] = 2.f * (c * d - a * b);
    M[2][2] = a2 - b2 - c2 + d2;
    M[2][3] = 0.f;

    M[3][0] = M[3][1] = M[3][2] = 0.f;
    M[3][3] = 1.f;
}
static inline void mat4x4o_mul_quat(mat4x4 R, mat4x4 M, quat q) {
    /* XXX: The way this is written only works for orthogonal matrices. */
    /* TODO: Take care of non-orthogonal case. */
    quat_mul_vec3(R[0], q, M[0]);
    quat_mul_vec3(R[1], q, M[1]);
    quat_mul_vec3(R[2], q, M[2]);

    R[3][0] = R[3][1] = R[3][2] = 0.f;
    R[3][3] = 1.f;
}
static inline void quat_from_mat4x4(quat q, mat4x4 M) {
    float r = 0.f;
    int i;

    int perm[] = {0, 1, 2, 0, 1};
    int *p = perm;

    for (i = 0; i < 3; i++) {
        float m = M[i][i];
        if (m < r)
            continue;
        r = m; /* track the largest diagonal element (original read `m = r`,
                  a no-op) */
        p = &perm[i];
    }

    r = sqrtf(1.f + M[p[0]][p[0]] - M[p[1]][p[1]] - M[p[2]][p[2]]);

    if (r < 1e-6) {
        q[0] = 1.f;
        q[1] = q[2] = q[3] = 0.f;
        return;
    }

    q[0] = r / 2.f;
    q[1] = (M[p[0]][p[1]] - M[p[1]][p[0]]) / (2.f * r);
    q[2] = (M[p[2]][p[0]] - M[p[0]][p[2]]) / (2.f * r);
    q[3] = (M[p[2]][p[1]] - M[p[1]][p[2]]) / (2.f * r);
}

#endif
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/lunarg.ppm000066400000000000000000006000171270147354000240430ustar00rootroot00000000000000P6
256 256
255
uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
Ûáâÿÿÿÿÿÿüýýš¼À´ºïôõÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÇØÛZŒ”S‡O„LŠH~†E{„By‚?v 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#!œ¬®ÿÿÿÿÿÿÿÿÿµÍÑÈÚÝÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÕâäd“›Y‹”U‰Q…ŽM‚‹JˆF|„Czƒ@w€=t}9rz6ox4mu1js.gq+dn)cl'`j%^h"\e!Zd YbW_U_S]R[PYPXNWLUKTJRIQHQFOENÉÔÖÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ¬¼¿ ;C ;B :A 9@ 8? 7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#! Pjnÿÿÿÿÿÿÿÿÿëòóõøùÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿëñòuž¥_—[Œ•V‰‘R†OƒŒL‰H}†D{ƒ@w€>t}:r{7oy4mv2ks/hq,eo*cl'`j&_h$]g"[dXbW`V_S]S[QZPXNWMVKTJRIQHQFOENÉÔÖÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ¬¼¿ ;C ;B :A 9@ 8? 7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#! (ADýþþÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿýþþ•¶»d”›a‘˜\•X‹’T‡P„MŠI~†E{„Bx@v;s{8py5nw2ks0hr-eo*cl)bk&`i#]f"[d YbXaV_S]S[QZPYOWMVKTJRIQHQFOENÉÔÖÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ¬¼¿ ;C ;B :A 9@ 8? 
7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#!$'èìíÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ¿ÓÖj˜Ÿf•œb’™^Ž—ZŒ”UˆQ…M‚ŠJ‡F|„Cy‚@v 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#!ÅÏÑÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿíóô{¥«l™ g–d“š_—ZŒ”W‰’R…ŽNƒ‹KˆG}…Dz‚Aw€=t|9qz6ow4lu1is/gp,en)bk'`j%^g#\e YbXaW`T^S[QZPYOWMVKTJRJQHQFOENÉÔÖÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ¬¼¿ ;C ;B :A 9@ 8? 7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#! ŽŸ¢ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ®ÇËq¤n›¡i—žd”›a˜[•XŠ’T†Oƒ‹L€ˆH}†Dz‚Aw€>u}:r{7ox4mu2js/gp,en*ck'`j&_h$]f"Zd YaW`T^S\QZPYOWMVKTJSJQHQGPENÉÕ×ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ¬¼¿ ;C ;B :A 9@ 8? 
7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#! Yquÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿéðñ~¦­sž¦o›¢j—žf•œb‘™]Ž–YŠ“U‡P„ŒM‰I~†EzƒBx€?v~;r{8py4mu2js0hq-en*ck(aj%^g$]f"ZdXaW`U^S\RZPYOWMVLUJSJQHQGPENÉÕ×ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ¬¼¿ ;C ;B :A 9@ 8? 7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#! 7OSÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ´ÌÐy£ªtŸ¦pœ£l™ g–c’š^—Z‹”VˆQ…M‰J‡G|„Bx€?v~ 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#! #:=ûüüÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿòö÷‹°µz¤ªu §q¤m™ h–d’š_—ZŒ”Wˆ‘R…N‚ŠKˆH|…Dy‚@v~=t|9qy6nv3jt1hr.fo+cl)bk&_h%]f#[d!ZbWaU^S\RZQZOXNVLUJSJQHQGPENÉÕ×ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ¬¼¿ ;C ;B :A 9@ 8? 
7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gk¤†¯µ„®´„®´„®´„®´…¯´~ª°it[gZfZfZfZfZfZfZfZfZfZfZfZfYeYe2v€hš¡ƒ¬²…®´w¥«K†^iXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`rž¥˜¹½'jsS^S^R]R]Q\Q\Q[P[PZV`6qzNYNXMWMWLVLVKUKUJTISHRYa§¾Â§¾ÁW_ENDMCMCLBKAJ@I?H CK–®±³ÄÇ/^d;C:B9A8?7>6=5<3;3:28 9?tŒ|’•:@,2*0)/(.',%*$)bwy†•—~.EI! ),åççÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÑß⨮{¤«w¡¨rž¤nš¡i—že”›_—[Œ•X‰’S†ŽO‚‹L€ˆI}†Ez‚Aw=t|:rz6nv4ku1hr.fo,dm)bk&`i%]f#[d!ZbWaU^T]RZQZOXNVLUJSJQHQGPENÌ×Ùÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ®¾Á ;C ;B :A 9@ 8? 7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gØåçÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ3y‚[gZfZfZfZfZfZfZfZfZfZfZfZfepž¿ÃùûûÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÏßáI…ŽXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`ÏÞàÿÿÿD~†S^S^R]R]Q\Q\Q[P[PZ¦ÀÄŽ¯´NYNXMWMWLVLVKUKUJTISHRJSÅÔ×ÿÿÿv™žENDMCMCLBKAJ@I?HOx~ÿÿÿåëìIP;C:B9A8?7>6=5<3;3:28AGïòòÿÿÿ%IN,2*0)/(.',%*i|ÿÿÿÿÿÿŽ*/!  ÌÑÒÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿªÅÉ©¯}¥¬x¢©sž¥o›¡j—žf”œa‘˜\•X‰’T†ŽPƒ‹L€ˆI}†Dy‚Aw>t}:rz7ow4ku1hr/gp,dm)bk&`i%]f#[d!ZbWaV_T]RZQZOXNVLUJSJQIQGPEN4`g?hn;ek;dj;cj;ci;bh;bh+T[ ;C ;B :A 9@ 8? 7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gÍÞáÿÿÿ—»À{¨¯}©°}©°~ª°w¦¬hs[gZfZfZfZfZfZfZfZfZfZfZfamÁÕØÿÿÿôøø§ÄÉ€ª°{§­’¶»ÔâäÿÿÿøúûK†ŽXcXcXcWcWbWbWbVbVaVaUaU`U`T`Å×ÚÿÿÿB|…S^S^R]R]Q\Q\Q[P[…ª¯ÿÿÿ…©®NYNXMWMWLVLVKUKUJTISHRHQZ†ŒÿÿÿÔßá JSDMCMCLBKAJ@I?H´ÅÈÿÿÿ{™6=5<3;3:28@Fãèéòôõ#GM,2*0)/(.',E^bøùùÿÿÿ› $#! 
¸¿Áÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿôøø“µº©¯}¥¬x¢©sž¥p›¢k˜Ÿf”œa‘˜]Ž–YŠ’U‡Pƒ‹M‰J~†FzƒBx€>t}:rz7ow4lu2ir/gp,dm*ck'`i%]f#[d!Zb XaV_T]RZQZOXNVLUJSJQIQGPENEMCKAI @H ?G ?F >E =E 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gÎßáÿÿÿ.u[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZf‚¬²ÿÿÿÙæç5x‚YeYeYeYe^i“¶¼ÿÿÿßéëbmXcXcWcWbWbWbVbVaVaUaU`U`T`Å×ÚÿÿÿB|…S^S^R]R]Q\Q\Q[tž¤ÿÿÿÿÿÿƒ¨­NYNXMWMWLVLVKUKUJTISHRHQOYßçèÿÿÿN{DMCMCLBKAJ@I1biùúûñôõ"T[6=5<3;3:28@Fãèéòôõ#GM,2*0)/(.,JNæêêÿÿÿ¨³´#( $#! ¦±²ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿáë쇭³‚ª°}¦­y¢©tŸ¦p›¢k˜Ÿf”œb’™]Ž–ZŠ“U‡Q„ŒM‰J~†FzƒBx€>t};rz7ow4lu2ir/gp,dm*ck&`i&^g$\e"Zc XaV_T]S[QZOXNVLUJSJQIQGPENEMCKAI @H ?G ?F >E =E 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gÎßáÿÿÿ9|†[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfbmáëìüýýG…YeYeYeYeYeXdYeÇÚÜÿÿÿ_”›XcXcWcWbWbWbVbVaVaUaU`U`T`Å×ÚÿÿÿB|…S^S^R]R]Q\Q\c“™ÿÿÿÿÿÿÿÿÿƒ¨­NYNXMWMWLVLVKUKUJTISHRHQGQƒ¤©ÿÿÿ°ÄÇDMCMCLBKAJ@I’«¯ÿÿÿ¡·º=E6=5<3;3:28@Fãèéòôõ#GM,2*0)/7=ÏÕÖÿÿÿÁÉÊ ,0!& $#! ŸÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÌÝ߈®´ƒª°~§­z£ªu ¦pœ£l™ g•c’š^Ž—ZŠ“VˆQ„ŒM‰K~‡G{„Bx€?u~;rz8ox4lu2ir/gp-em*ck&`i&^g$\e"Zc XaV_T]S[QZOXNVLUJSJQIQGPENEMCKAI @H ?G ?F >E =E 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gÎßáÿÿÿ9|†[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZf)q|ÿÿÿàêì \gYeYeYeYeYeXdXd{§­ÿÿÿ²·XcXcWcWbWbWbVbVaVaUaU`U`T`Å×ÚÿÿÿB|…S^S^R]R]Q\TˆúüüÿÿÿÛæçÿÿÿƒ¨­NYNXMWMWLVLVKUKUJTISHRHQGQ*bjöøùùûû,aiCMCLBKAJQYéîïýýþ>kq=E6=5<3;3:28@Fãèéòôõ#GM,2*0,2±¼¾ÿÿÿÕÛÜ:?#'!& $#! 
}Œÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ¸ÏÓˆ®´ƒª°~§­z£ªu ¦pœ£l™ g•c’š^Ž—Z‹“VˆQ„ŒM‰K~‡G{„Bx€?u~;rz8ox5lv2ir0hp-em*ck'`i&^g$\e"Zc XaV_U]S[QZOXNVLUJSJQIQGPENEMCKAI @H ?G ?F >E =E 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gÎßáÿÿÿ9|†[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZf/uÿÿÿÕãå \gYeYeYeYeYeXdXdr¡§ÿÿÿ‘µºXcXcWcWbWbWbVbVaVaUaU`U`T`Å×ÚÿÿÿB|…S^S^R]R]G‡õøùÿÿÿ{¢¨x ¦ÿÿÿƒ¨­NYNXMWMWLVLVKUKUJTISHRHQGQGP¯ÄÆÿÿÿˆ¦«CMCLBKAJn‘–ÿÿÿÅÒÔ@G=E6=5<3;3:28@Fãèéòôõ#GM,2*0ŽŸ¢ÿÿÿÛáá&CG$)#'!& $#! n‚ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ©Åɉ®´„«±§®{¤ªv §q£m™ h–c’š^Ž—Z‹“VˆR„ŒM‰K~‡G{„Bx€?u~;rz8ox5lv3js0hp-em*ck'`i%]f$\e"Zc XaV_U]S[QZOXNVLUJSJRIQGPENEMCKAI @H ?G ?F >E =E 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gÎßáÿÿÿ9|†[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZf.uÿÿÿÖäæ \hYeYeYeYeYeXdXdt¢©ÿÿÿ´¹XcXcWcWbWbWbVbVaVaUaU`U`T`Å×ÚÿÿÿB|…S^S^R]:wìòóÿÿÿŸ¼ÀP[{¢¨ÿÿÿƒ¨­NYNXMWMWLVLVKUKUJTISHRHQGQGPO|ƒÿÿÿâéêNWCLBK ENÒÝÞÿÿÿb‡Œ>F=E6=5<3;3:28@Fãèéòôõ"GL,2j„ÿÿÿÿÿÿ¤±³'DH$)#'!& $#! fwzÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ¢Àĉ®´„«±§®{¤ªv §q£m™ h–c’š_—Z‹“VˆR„ŒM‰K~‡G{„Bx€?u~E =E 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gÎßáÿÿÿ9|†[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZf.uÿÿÿÖäæ \hYeYeYeYeYeXdXdt¢©ÿÿÿ´¹XcXcWcWbWbWbVbVaVaUaU`U`T`Å×ÚÿÿÿB|…S^S^/oxäìíÿÿÿ³ÊÍ U_P[~¤ªÿÿÿƒ¨­NYNXMWMWLVLVKUKUJTISHRHQGQGP KTÖàâÿÿÿaˆCLBKOz€ÿÿÿáèéJS>F=E6=5<3;3:28@Fãèéòôõ?ENkoÿÿÿÿÿÿÿÿÿÿÿÿþþþž«­59!& $#! 
asvÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿûüüž½Â‰®´„«±§®{¤ªv §q£m™ h–d“š_—Z‹“VˆR„ŒM‰K~‡G{„Bx€?u~E =E 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gÎßáÿÿÿ9|†[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZf.uÿÿÿÖäæ \hYeYeYeYeYeXdXdt¢©ÿÿÿ´¹XcXcWcWbWbWbVbVaVaUaU`U`T`Å×ÚÿÿÿC}…S^#hrØäæÿÿÿÃÕØYdQ[P[~¤ªÿÿÿƒ¨­NYNXMWMWLVLVKUKUJTISHRHQGQGPFOxšŸÿÿÿÂÑÔDMBK´ÆÉÿÿÿ‰¥©>G>F=E6=5<3;3:28@Fãèéòôõ EJ3UYGdhFbfUnr„–˜ÛàáÿÿÿÍÔÕ49 $#! asvÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿùûûœ¼Á‰®´„«±§®{¤ªv §q£m™ h–c’š_—Z‹“VˆR„ŒM‰K~‡G{„Bx€?u~E =E 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gÎßáÿÿÿ9|†[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZf.uÿÿÿÖäæ \hYeYeYeYeYeXdXdt¢©ÿÿÿ´¹XcXcWcWbWbWbVbVaVaUaU`U`T`Å×Úÿÿÿ8u~[fËÛÞÿÿÿÒàâakQ\Q[P[~¤ªÿÿÿƒ¨­NYNXMWMWLVLVKUKUJTISHRHQGQGPFO!Zbñõõþþþ2fm&\dùûû÷ùù)\c>G>F=E6=5<3;3:28@Fãèéòôõ#GM,2*0)/(.',27¹ÂÃÿÿÿ‰—š $#! duxÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿúüü¼Á‰®´ƒª°~§­z£ªu ¦pœ£l™ h–c’š^Ž—Z‹“VˆR„ŒM‰K~‡G{„Bx€?u~;rz8ox5lv2ir0hp-em*ck'`i&^g$\e"Zc XaV_U]S[QZOXNVLUJSJRIQGPENEMCKAI @H ?G ?F >E =E 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gÎßáÿÿÿ9|†[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZf.uÿÿÿÖäæ \hYeYeYeYeYeXdXdt¢©ÿÿÿ´¹XcXcWcWbWbWbVbVaVaUaU`U`T`Å×ÚÿÿÿKƒ‹´ËÎÿÿÿàéë*luQ\Q\Q[P[~¤ªÿÿÿƒ¨­NYNXMWMWLVLVKUKUJTISHRHQGQGPFOEN£º½ÿÿÿ”¯²‰¦ªÿÿÿ®ÂÄ?H>G>F=E6=5<3;3:28@Fãèéòôõ#GM,2*0)/(.',%*:TXÿÿÿÑר#'#! 
k|ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ ¿Äˆ®´ƒª°~§­z£ªu ¦pœ£l™ g•c’š^Ž—ZŠ“VˆQ„ŒM‰K~‡G{„Bx€?u~;rz8ox4lu2ir/gp-em*ck'`i&^g$\e"Zc XaV_U]S[QZOXNVLUJSJQIQGPENEMCKAI @H ?G ?F >E =E 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gÎßáÿÿÿ9|†[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZf.uÿÿÿÖäæ \hYeYeYeYeYeXdXdt¢©ÿÿÿ´¹XcXcWcWbWbWbVbVaVaUaU`U`T`Ä×ÙÿÿÿÜçèÿÿÿìòó:wR]Q\Q\Q[P[~¤ªÿÿÿƒ¨­NYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDszþþþòõöõøøÿÿÿJu|?H>G>F=E6=5<3;3:28@Fãèéòôõ#GM,2*0)/(.',%*9=þþþåèé '+#! x‡Šÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ¦ÃLJ­³‚ª°}¦­y¢©tŸ¦p›¢l™ g•b’™^Ž—ZŠ“U‡Q„ŒM‰J~†Ez‚Bx€?u~;rz8ox4lu2ir/gp-em*ck&`i&^g$\e"Zc XaV_T]S[QZOXNVLUJSJQIQGPENEMCKAI @H ?G ?F >E =E 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gÎßáÿÿÿ9|†[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZf.uÿÿÿÖäæ \hYeYeYeYeYeXdXdt¢©ÿÿÿ´¹XcXcWcWbWbWbVbVaVaUaU`U`T`Ä×ÙÿÿÿÿÿÿöùùJ‚ŠR]R]Q\Q\Q[P[~¤ªÿÿÿƒ¨­NYNXMWMWLVLVKUKUJTISHRHQGQGPFOENFOËØÚÿÿÿÿÿÿÏÚÜ DM?H>G>F=E6=5<3;3:28@Fãèéòôõ#GM,2*0)/(.',%*t};rz7ow4lu2ir/gp,dm*ck&`i&^g$\e"Zc XaV_T]S[QZOXNVLUJSJQIQGPENEMCKAI @H ?G ?F >E =E 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gÎßáÿÿÿ9|†[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZf.uÿÿÿÖäæ \hYeYeYeYeYeXdXdt¢©ÿÿÿ´¹XcXcWcWbWbWbVbVaVaUaU`U`T`Ä×Ùÿÿÿýþþ[Ž•S^R]R]Q\Q\Q[P[~¤ªÿÿÿƒ¨­NYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMl–ÿÿÿÿÿÿo’—@I?H>G>F=E6=5<3;3:28@Fãèéñôô?E,2*0)/(.', +0³½¾ÿÿÿ‘ž¡ $#!  
«¬ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÈÚ݆¬²©¯}¥¬x¢©sž¥o›¡j—že”›a‘˜\•YŠ’T†ŽPƒ‹M‰J~†FzƒBx€>t}:rz7ow4lu1hr/gp,dm)bk'`i%]f"Zd YaWaV_T]RZQZOXNVLUJSJQIQGPENEMCKAI @H ?G ?F >E =E 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gÎßáÿÿÿ9|†[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZf.uÿÿÿÖäæ \hYeYeYeYeYeXdXdt¢©ÿÿÿ´¹XcXcWcWbWbWbVbVaVaUaU`U`T`ÇÙÛÿÿÿnš¡S^S^R]R]Q\Q\Q[P[~¤ªÿÿÿƒ¨­NYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMS\îòóîòóQZ@I?H>G>F=E6=5<3;3:28@Fáçç÷øùfƒMjnPkoOjnOimtˆŠÌÓÔÿÿÿçêë%?D $#! ³»¼ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÞéê…¬²€©®|¥«w¡¨rž¤nš¡i—že”›`˜\•X‰’T†ŽO‚‹L€ˆI}†Ez‚Aw=t|:rz6nv4ku1hr.fo+cl*ck&`i$]f#[d!ZbWaU^T]RZQZOXNVLUJSJQHQGPENEMCKAI @H ?G ?F >E =E 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g×åçÿÿÿ;~‡[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZf0v€ÿÿÿàêì \hYeYeYeYeYeXdXdy¥¬ÿÿÿ–¸½XcXcWcWbWbWbVbVaVaUaU`U`T`ÐßᎱ¶S^S^S^R]R]Q\Q\Q[P[„©®ÿÿÿЬ²NYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCM¢º½¤»¾AJ@I?H>G>F=E6=5<3;3:28AGçëìÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÑר:SV!& $#! 
ÅÌÍÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿòö÷³¸¨®{¤«v §q¤m™ h–d“›_—[Œ•Wˆ‘S†ŽN‚ŠKˆH|…Cx@v~=t|9qy6nv4ku1hr.fo+cl)bk&_h%]f#[d!ZbWaU^T]RZQZOXNVLUJSJQHQGPENEMCKAI @H ?G ?F >E =E 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gŽ´º´ÍÑ(r|[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZf!lw±ËÏ”·½ [gYeYeYeYeYeXdXdQŠ“»ÑÔd—žXcXcWcWbWbWbVbVaVaUaU`U`T`]— WbS^S^S^R]R]Q\Q\Q[P[D|ƒŒ®³H}…NYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCM6ip0ckAJ@I?H>G>F=E6=5<3;3:28E =E 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#! 47õööÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÍÝà}¦¬x¢©sž¦pœ£k˜Ÿf•œc’š]Ž–Z‹”U‡Q…M‰J‡G|„Cx?v~;r{8py5mv3jt0hq-en*ck(aj&_h$]f"Zd YaWaU^S\RZQZOXNVLUJSJQHQGPENEMCKAI @H ?G ?F >E =E 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#! 
.GKþþþÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿñõöˆ­³w¢©rž¥n›¡j—že”›b‘™\Ž•XŠ’T†P„ŒM‰I~†F{„Bx€>u};r{7ox4mu2js/gp,en*ck(aj%^g$]f"Zd YaW`U^S\QZPYOWMVKTJSJQHQGPENEMCKAI @H ?G ?F >E =E 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#! Jcgÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ²ÊÎu §p¤mš¡h–d”›`˜[•W‰’S†Oƒ‹L€ˆH}†Ezƒ@v=t|:r{7ox4lu1is/gp,en)bk(aj%^g"[d!Zc YaW`T^S\QZPYOWMVKTJRJQHQFOENEMCKAI @H ?G ?F >E =E 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#! yŽ‘ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿéðñz¤ªpœ£k™ f•œc’š_—ZŒ”V‰‘R…ŽNƒ‹KˆG}…Cy‚@v=t|9qz6ow4lu1is.fp,en)bk&`i%^g#\e!ZcXaV_T^S[QZPYOWMVKTJRJQHQFOENDLCKAI @H ?G ?F >E#NV6]d ;C ;B :A 9@ 8? 7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#!µÁÃÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ®ÈÌn›¢i—že”›a‘˜]Ž–Y‹“UˆP„MŠI~†F|„Bx@vE…¡˜¬¯ ;C ;B :A 9@ 8? 
7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#! Ûáâÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿðõö{£«h—žd“š`˜[Œ•WŠ’S‡OƒŒL‰H}†D{ƒAx?u~:r{7oy4mv2ks/hq,eo*cl(aj&_h$]g"[d YbW`V_S]S[QZPYOWMVKTJRIQHQFOENDLCKAI @H ?G ?F2\aîñò¸ÆÈ ;C ;B :A 9@ 8? 7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#! 7:ýþþÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÇØÛf–b’™^—ZŒ”V‰‘R†NƒŒK€ˆG}…Czƒ@w€>t}9rz6ox4mu1js.gq,eo)cl'`j&_h$]g"[d YbW`U_S]R[PYPXNWLUKTJRIQHQFOENDLCKAI @H ?G ?F´ÃÅÿÿÿ®¾Á ;C ;B :A 9@ 8? 7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#! AZ^ÿÿÿÿÿÿÿÿÿñöööùùÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ ½Á`˜\Ž•X‹“TˆP…MŠI‡F|„By‚@v=t}8qy5nw3lu0ir-fp+dn)cl&`i%^h#]f ZcXbW`U_S\R[PYPXNWLUKSJRIQHQFOENDLCKAI @H ?Gu’–ÿÿÿÿÿÿ¬¼¿ ;C ;B :A 9@ 8? 
7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#! €”—ÿÿÿÿÿÿÿÿÿ¾Ó×ÇÚÜÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿøúû†«±Z”W‰’R‡NƒŒK€‰H~†D{ƒAy?v 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#!ËÔÖÿÿÿÿÿÿÿÿÿŸ¾Ã³¹ñööÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿïôõy¢¨Uˆ‘P…M‚‹JˆF}…Czƒ@x=t~:r{6px4mv1ks.hq,fo*cm(bk&_h$^g"\eYbWaW_T^S\R[PYPXNWLUKSIRIQGPFODMDLCKAI:cjçìíÿÿÿÿÿÿÿÿÿ¬¼¿ ;C ;B :A 9@ 8? 7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#! 26ûýýÿÿÿÿÿÿìóôŒ²·‚«±¬ÇËÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿíòóvŸ¦N„ŒLŠH~‡E|…By‚?w€ 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#! 
GaeÿÿÿÿÿÿÿÿÿÐà⃫²©¯{¦­ËÛÞÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿñõö}¥«KŠF}†C{ƒ@x>v;s|8pz4nw2ku/ir,go*dn(bl&`i$^g#]f [dXbW`V_T]R\QZOXOXMVLTKSIRIQGPFODMDLEmsäéêÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ¬¼¿ ;C ;B :A 9@ 8? 7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#!›¬®ÿÿÿÿÿÿÿÿÿ©ÅÊ©¯|§®x¤«~§¯áêìÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿûüü˜·¼D{…Bzƒ?w€ 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#!%(æììÿÿÿÿÿÿðõöб·|§®x¥«u¡©r §†­³òö÷ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÃÕØW‰‘>v;t}8qz5nx2lu/is-hq*en(cm&ak%_i#]f"\fZcXbV`U^S]Q[QYOXNWMVLTJSIRHPGP"S[ µ¸ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ¬¼¿ ;C ;B :A 9@ 8? 7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#!:TYÿÿÿÿÿÿÿÿÿÍÞá|¨®y¥¬v£ªr §n¤l›¢’µºúüüÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿîóô‘±¶Ax6py3mv1lt.ir,gp)en'bl&`j$_h"\f!\eYcXaU_U]S]Q[QYOXNWMULTJSIRIQc†ŒÞåçÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ¬¼¿ ;C ;B :A 9@ 8? 
7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#!‘¤§ÿÿÿÿÿÿÿÿÿŸÀÅy¦­v£ªs¡¨ož¥l›£h˜ d–˜¹¾üýýÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÝç肦¬=s|/js-hr+gp)en&bl%_i$_h"\f [dYcXaU_U]S]Q[QYOXNWMULTMVY…ÅÓÔÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ°À ;C ;B :A 9@ 8? 7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#!(,ëîïÿÿÿÿÿÿæîï}©°v£ªs¡¨ož¦lœ£jš¢f—Ÿb”œ_’š”¶»ûüýÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿâêì™¶ºSƒ‹,gp(dm&ak%_i$_h!\e [dYbWaU_T]R\QZPYOXNW2dkz™žÐÛÝÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿîñò£§ ;C ;B :A 9@ 8? 7> 6= 5< 4; 4: 39 28 17 06 /5 .4 *0?Fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu]i[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[g[gZfZfZfZfZfZfZfZfZfZfZfZfYeYeYeYeYeYeYeXdXdXdXcXcXcXcWcWbWbWbVbVaVaUaU`U`T`T_T_S^S^S^R]R]Q\Q\Q[P[PZOZNYNYNXMWMWLVLVKUKUJTISHRHQGQGPFOENDMCMCLBKAJ@I?H>G>F=E6=5<3;3:2817/6.4-3,2*0)/(.',%*$)#'!& $#!Keiÿÿÿÿÿÿÿÿÿ¶ÏÓv¤«r¡¨ož¦lœ¤i™¡f—Ÿc•_’š\˜X•ˆ®³ó÷øÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿûüüÕáã¹½p—J{ƒ-eo$_gZdYbV`U^V_ Xb7iq\„Љ¥ªÂÑÔõ÷øÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿçìíJnt?G ;C ;B :A 9@ 8? 
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/Game.h

#ifndef GAME_H
#define GAME_H

#include <string>
#include <vector>

class Shell;

class Game {
public:
    Game(const Game &game) = delete;
    Game &operator=(const Game &game) = delete;
    virtual ~Game() {}

    struct Settings {
        std::string name;
        int initial_width;
        int initial_height;
        int queue_count;
        int back_buffer_count;
        int ticks_per_second;
        bool vsync;
        bool animate;

        bool validate;
        bool validate_verbose;

        bool no_tick;
        bool no_render;
        bool no_present;
    };
    const Settings &settings() const { return settings_; }

    virtual void attach_shell(Shell &shell) { shell_ = &shell; }
    virtual void detach_shell() { shell_ = nullptr; }

    virtual void attach_swapchain() {}
    virtual void detach_swapchain() {}

    enum Key {
        // virtual keys
        KEY_SHUTDOWN,
        // physical keys
        KEY_UNKNOWN,
        KEY_ESC,
        KEY_UP,
        KEY_DOWN,
        KEY_SPACE,
    };
    virtual void on_key(Key key) {}
    virtual void on_tick() {}

    virtual void on_frame(float frame_pred) {}

protected:
    Game(const std::string &name, const std::vector<std::string> &args)
        : settings_(), shell_(nullptr)
    {
        settings_.name = name;
        settings_.initial_width = 1280;
        settings_.initial_height = 1024;
        settings_.queue_count = 1;
        settings_.back_buffer_count = 1;
        settings_.ticks_per_second = 30;
        settings_.vsync = true;
        settings_.animate = true;

        settings_.validate = false;
        settings_.validate_verbose = false;

        settings_.no_tick = false;
        settings_.no_render = false;
        settings_.no_present = false;

        parse_args(args);
    }

    Settings settings_;
    Shell *shell_;

private:
    void parse_args(const std::vector<std::string> &args)
    {
        for (auto it = args.begin(); it != args.end(); ++it) {
            if (*it == "-b") {
                settings_.vsync = false;
            } else if (*it == "-w") {
                ++it;
                settings_.initial_width = std::stoi(*it);
            } else if (*it == "-h") {
                ++it;
                settings_.initial_height = std::stoi(*it);
            } else if (*it == "-v") {
                settings_.validate = true;
            } else if (*it == "--validate") {
                settings_.validate = true;
            } else if (*it == "-vv") {
                settings_.validate = true;
                settings_.validate_verbose = true;
            } else if (*it == "-nt") {
                settings_.no_tick = true;
            } else if (*it == "-nr") {
                settings_.no_render = true;
            } else if (*it == "-np") {
                settings_.no_present = true;
            }
        }
    }
};

#endif // GAME_H

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/Helpers.h

/*
 * Copyright (C) 2016 Google, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included
 * in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 * IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#ifndef HELPERS_H
#define HELPERS_H

#include <vector>
#include <sstream>
#include <stdexcept>
#include <vulkan/vulkan.h>

#include "HelpersDispatchTable.h"

namespace vk {

inline VkResult assert_success(VkResult res)
{
    if (res != VK_SUCCESS) {
        std::stringstream ss;
        ss << "VkResult " << res << " returned";
        throw std::runtime_error(ss.str());
    }

    return res;
}

inline VkResult enumerate(const char *layer, std::vector<VkExtensionProperties> &exts)
{
    uint32_t count = 0;
    vk::EnumerateInstanceExtensionProperties(layer, &count, nullptr);

    exts.resize(count);
    return vk::EnumerateInstanceExtensionProperties(layer, &count, exts.data());
}

inline VkResult enumerate(VkPhysicalDevice phy, const char *layer,
                          std::vector<VkExtensionProperties> &exts)
{
    uint32_t count = 0;
    vk::EnumerateDeviceExtensionProperties(phy, layer, &count, nullptr);

    exts.resize(count);
    return vk::EnumerateDeviceExtensionProperties(phy, layer, &count, exts.data());
}

inline VkResult enumerate(VkInstance instance, std::vector<VkPhysicalDevice> &phys)
{
    uint32_t count = 0;
    vk::EnumeratePhysicalDevices(instance, &count, nullptr);

    phys.resize(count);
    return vk::EnumeratePhysicalDevices(instance, &count, phys.data());
}

inline VkResult enumerate(std::vector<VkLayerProperties> &layer_props)
{
    uint32_t count = 0;
    vk::EnumerateInstanceLayerProperties(&count, nullptr);

    layer_props.resize(count);
    return vk::EnumerateInstanceLayerProperties(&count, layer_props.data());
}

inline VkResult enumerate(VkPhysicalDevice phy, std::vector<VkLayerProperties> &layer_props)
{
    uint32_t count = 0;
    vk::EnumerateDeviceLayerProperties(phy, &count, nullptr);

    layer_props.resize(count);
    return vk::EnumerateDeviceLayerProperties(phy, &count, layer_props.data());
}

inline VkResult get(VkPhysicalDevice phy, std::vector<VkQueueFamilyProperties> &queues)
{
    uint32_t count = 0;
    vk::GetPhysicalDeviceQueueFamilyProperties(phy, &count, nullptr);
    queues.resize(count);
    vk::GetPhysicalDeviceQueueFamilyProperties(phy, &count, queues.data());

    return VK_SUCCESS;
}

inline VkResult get(VkPhysicalDevice phy, VkSurfaceKHR surface,
                    std::vector<VkSurfaceFormatKHR> &formats)
{
    uint32_t count = 0;
    vk::GetPhysicalDeviceSurfaceFormatsKHR(phy, surface, &count, nullptr);

    formats.resize(count);
    return vk::GetPhysicalDeviceSurfaceFormatsKHR(phy, surface, &count, formats.data());
}

inline VkResult get(VkPhysicalDevice phy, VkSurfaceKHR surface,
                    std::vector<VkPresentModeKHR> &modes)
{
    uint32_t count = 0;
    vk::GetPhysicalDeviceSurfacePresentModesKHR(phy, surface, &count, nullptr);

    modes.resize(count);
    return vk::GetPhysicalDeviceSurfacePresentModesKHR(phy, surface, &count, modes.data());
}

inline VkResult get(VkDevice dev, VkSwapchainKHR swapchain, std::vector<VkImage> &images)
{
    uint32_t count = 0;
    vk::GetSwapchainImagesKHR(dev, swapchain, &count, nullptr);

    images.resize(count);
    return vk::GetSwapchainImagesKHR(dev, swapchain, &count, images.data());
}

} // namespace vk

#endif // HELPERS_H

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/Main.cpp

/*
 * Copyright (C) 2016 Google, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included
 * in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 * IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#include <string>
#include <vector>

#include "Smoke.h"

namespace {

Game *create_game(int argc, char **argv)
{
    std::vector<std::string> args(argv, argv + argc);
    return new Smoke(args);
}

} // namespace

#if defined(VK_USE_PLATFORM_XCB_KHR)

#include "ShellXcb.h"

int main(int argc, char **argv)
{
    Game *game = create_game(argc, argv);
    {
        ShellXcb shell(*game);
        shell.run();
    }
    delete game;

    return 0;
}

#elif defined(VK_USE_PLATFORM_ANDROID_KHR)

#include <android/log.h>
#include "ShellAndroid.h"

void android_main(android_app *app)
{
    Game *game = create_game(0, nullptr);

    try {
        ShellAndroid shell(*app, *game);
        shell.run();
    } catch (const std::runtime_error &e) {
        __android_log_print(ANDROID_LOG_ERROR, game->settings().name.c_str(),
                            "%s", e.what());
    }

    delete game;
}

#elif defined(VK_USE_PLATFORM_WIN32_KHR)

#include "ShellWin32.h"

int main(int argc, char **argv)
{
    Game *game = create_game(argc, argv);
    {
        ShellWin32 shell(*game);
        shell.run();
    }
    delete game;

    return 0;
}

#endif // VK_USE_PLATFORM_XCB_KHR

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/Meshes.cpp

/*
 * Copyright (C) 2016 Google, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included
 * in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#include <cassert>
#include <cmath>
#include <array>
#include <unordered_map>
#include <vector>

#include "Helpers.h"
#include "Meshes.h"

namespace {

class Mesh {
public:
    struct Position {
        float x;
        float y;
        float z;
    };

    struct Normal {
        float x;
        float y;
        float z;
    };

    struct Face {
        int v0;
        int v1;
        int v2;
    };

    static uint32_t vertex_stride()
    {
        // Position + Normal
        const int comp_count = 6;

        return sizeof(float) * comp_count;
    }

    static VkVertexInputBindingDescription vertex_input_binding()
    {
        VkVertexInputBindingDescription vi_binding = {};
        vi_binding.binding = 0;
        vi_binding.stride = vertex_stride();
        vi_binding.inputRate = VK_VERTEX_INPUT_RATE_VERTEX;

        return vi_binding;
    }

    static std::vector<VkVertexInputAttributeDescription> vertex_input_attributes()
    {
        std::vector<VkVertexInputAttributeDescription> vi_attrs(2);

        // Position
        vi_attrs[0].location = 0;
        vi_attrs[0].binding = 0;
        vi_attrs[0].format = VK_FORMAT_R32G32B32_SFLOAT;
        vi_attrs[0].offset = 0;

        // Normal
        vi_attrs[1].location = 1;
        vi_attrs[1].binding = 0;
        vi_attrs[1].format = VK_FORMAT_R32G32B32_SFLOAT;
        vi_attrs[1].offset = sizeof(float) * 3;

        return vi_attrs;
    }

    static VkIndexType index_type() { return VK_INDEX_TYPE_UINT32; }

    static VkPipelineInputAssemblyStateCreateInfo input_assembly_state()
    {
        VkPipelineInputAssemblyStateCreateInfo ia_info = {};
        ia_info.sType = VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO;
        ia_info.topology = VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
        ia_info.primitiveRestartEnable = false;

        return ia_info;
    }

    void build(const std::vector<std::array<float, 6>> &vertices,
               const std::vector<std::array<int, 3>> &faces)
    {
        positions_.reserve(vertices.size());
        normals_.reserve(vertices.size());
        for (const auto &v : vertices) {
            positions_.emplace_back(Position{ v[0], v[1], v[2] });
            normals_.emplace_back(Normal{ v[3], v[4], v[5] });
        }

        faces_.reserve(faces.size());
        for (const auto &f : faces)
            faces_.emplace_back(Face{ f[0], f[1], f[2] });
    }

    uint32_t vertex_count() const { return positions_.size(); }

    VkDeviceSize vertex_buffer_size() const { return vertex_stride() * vertex_count(); }

    void vertex_buffer_write(void *data) const
    {
        float *dst = reinterpret_cast<float *>(data);
        for (size_t i = 0; i < positions_.size(); i++) {
            const Position &pos = positions_[i];
            const Normal &normal = normals_[i];

            dst[0] = pos.x;
            dst[1] = pos.y;
            dst[2] = pos.z;
            dst[3] = normal.x;
            dst[4] = normal.y;
            dst[5] = normal.z;
            dst += 6;
        }
    }

    uint32_t index_count() const { return faces_.size() * 3; }

    VkDeviceSize index_buffer_size() const { return sizeof(uint32_t) * index_count(); }

    void index_buffer_write(void *data) const
    {
        uint32_t *dst = reinterpret_cast<uint32_t *>(data);
        for (const auto &face : faces_) {
            dst[0] = face.v0;
            dst[1] = face.v1;
            dst[2] = face.v2;
            dst += 3;
        }
    }

    std::vector<Position> positions_;
    std::vector<Normal> normals_;
    std::vector<Face> faces_;
};

class BuildPyramid {
public:
    BuildPyramid(Mesh &mesh)
    {
        const std::vector<std::array<float, 6>> vertices = {
            //     position               normal
            {  0.0f,  0.0f,  1.0f,  0.0f,  0.0f,  1.0f },
            { -1.0f, -1.0f, -1.0f, -1.0f, -1.0f, -1.0f },
            {  1.0f, -1.0f, -1.0f,  1.0f, -1.0f, -1.0f },
            {  1.0f,  1.0f, -1.0f,  1.0f,  1.0f, -1.0f },
            { -1.0f,  1.0f, -1.0f, -1.0f,  1.0f, -1.0f },
        };

        const std::vector<std::array<int, 3>> faces = {
            { 0, 1, 2 },
            { 0, 2, 3 },
            { 0, 3, 4 },
            { 0, 4, 1 },
            { 1, 4, 3 },
            { 1, 3, 2 },
        };

        mesh.build(vertices, faces);
    }
};

class BuildIcosphere {
public:
    BuildIcosphere(Mesh &mesh) : mesh_(mesh), radius_(1.0f)
    {
        const int tessellate_level = 2;

        build_icosahedron();
        for (int i = 0; i < tessellate_level; i++)
            tessellate();
    }

private:
    void build_icosahedron()
    {
        // https://en.wikipedia.org/wiki/Regular_icosahedron
        const float l1 = std::sqrt(2.0f / (5.0f + std::sqrt(5.0f))) * radius_;
        const float l2 = std::sqrt(2.0f / (5.0f -
            std::sqrt(5.0f))) * radius_;

        // vertices are from three golden rectangles
        const std::vector<std::array<float, 6>> icosahedron_vertices = {
            //    position              normal
            {  -l1,  -l2, 0.0f,  -l1,  -l2, 0.0f, },
            {   l1,  -l2, 0.0f,   l1,  -l2, 0.0f, },
            {   l1,   l2, 0.0f,   l1,   l2, 0.0f, },
            {  -l1,   l2, 0.0f,  -l1,   l2, 0.0f, },

            {  -l2, 0.0f,  -l1,  -l2, 0.0f,  -l1, },
            {   l2, 0.0f,  -l1,   l2, 0.0f,  -l1, },
            {   l2, 0.0f,   l1,   l2, 0.0f,   l1, },
            {  -l2, 0.0f,   l1,  -l2, 0.0f,   l1, },

            { 0.0f,  -l1,  -l2, 0.0f,  -l1,  -l2, },
            { 0.0f,   l1,  -l2, 0.0f,   l1,  -l2, },
            { 0.0f,   l1,   l2, 0.0f,   l1,   l2, },
            { 0.0f,  -l1,   l2, 0.0f,  -l1,   l2, },
        };

        const std::vector<std::array<int, 3>> icosahedron_faces = {
            // triangles sharing vertex 0
            { 0, 1, 11 },
            { 0, 11, 7 },
            { 0, 7, 4 },
            { 0, 4, 8 },
            { 0, 8, 1 },
            // adjacent triangles
            { 11, 1, 6 },
            { 7, 11, 10 },
            { 4, 7, 3 },
            { 8, 4, 9 },
            { 1, 8, 5 },
            // triangles sharing vertex 2
            { 2, 3, 10 },
            { 2, 10, 6 },
            { 2, 6, 5 },
            { 2, 5, 9 },
            { 2, 9, 3 },
            // adjacent triangles
            { 10, 3, 7 },
            { 6, 10, 11 },
            { 5, 6, 1 },
            { 9, 5, 8 },
            { 3, 9, 4 },
        };

        mesh_.build(icosahedron_vertices, icosahedron_faces);
    }

    void tessellate()
    {
        size_t middle_point_count = mesh_.faces_.size() * 3 / 2;
        size_t final_face_count = mesh_.faces_.size() * 4;

        std::vector<Mesh::Face> faces;
        faces.reserve(final_face_count);

        middle_points_.clear();
        middle_points_.reserve(middle_point_count);

        mesh_.positions_.reserve(mesh_.vertex_count() + middle_point_count);
        mesh_.normals_.reserve(mesh_.vertex_count() + middle_point_count);

        for (const auto &f : mesh_.faces_) {
            int v0 = f.v0;
            int v1 = f.v1;
            int v2 = f.v2;

            int v01 = add_middle_point(v0, v1);
            int v12 = add_middle_point(v1, v2);
            int v20 = add_middle_point(v2, v0);

            faces.emplace_back(Mesh::Face{ v0, v01, v20 });
            faces.emplace_back(Mesh::Face{ v1, v12, v01 });
            faces.emplace_back(Mesh::Face{ v2, v20, v12 });
            faces.emplace_back(Mesh::Face{ v01, v12, v20 });
        }

        mesh_.faces_.swap(faces);
    }

    int add_middle_point(int a, int b)
    {
        uint64_t key = (a < b) ?
            ((uint64_t) a << 32 | b) : ((uint64_t) b << 32 | a);
        auto it = middle_points_.find(key);
        if (it != middle_points_.end())
            return it->second;

        const Mesh::Position &pos_a = mesh_.positions_[a];
        const Mesh::Position &pos_b = mesh_.positions_[b];
        Mesh::Position pos_mid = {
            (pos_a.x + pos_b.x) / 2.0f,
            (pos_a.y + pos_b.y) / 2.0f,
            (pos_a.z + pos_b.z) / 2.0f,
        };
        float scale = radius_ / std::sqrt(pos_mid.x * pos_mid.x +
                                          pos_mid.y * pos_mid.y +
                                          pos_mid.z * pos_mid.z);
        pos_mid.x *= scale;
        pos_mid.y *= scale;
        pos_mid.z *= scale;

        Mesh::Normal normal_mid = { pos_mid.x, pos_mid.y, pos_mid.z };
        normal_mid.x /= radius_;
        normal_mid.y /= radius_;
        normal_mid.z /= radius_;

        mesh_.positions_.emplace_back(pos_mid);
        mesh_.normals_.emplace_back(normal_mid);

        int mid = mesh_.vertex_count() - 1;
        middle_points_.emplace(std::make_pair(key, mid));

        return mid;
    }

    Mesh &mesh_;
    const float radius_;
    std::unordered_map<uint64_t, int> middle_points_;
};

class BuildTeapot {
public:
    BuildTeapot(Mesh &mesh)
    {
#include "Meshes.teapot.h"
        const int position_count = sizeof(teapot_positions) / sizeof(teapot_positions[0]);
        const int index_count = sizeof(teapot_indices) / sizeof(teapot_indices[0]);
        assert(position_count % 3 == 0 && index_count % 3 == 0);

        Mesh::Position translate;
        float scale;
        get_transform(teapot_positions, position_count, translate, scale);

        for (int i = 0; i < position_count; i += 3) {
            mesh.positions_.emplace_back(Mesh::Position{
                (teapot_positions[i + 0] + translate.x) * scale,
                (teapot_positions[i + 1] + translate.y) * scale,
                (teapot_positions[i + 2] + translate.z) * scale,
            });

            mesh.normals_.emplace_back(Mesh::Normal{
                teapot_normals[i + 0],
                teapot_normals[i + 1],
                teapot_normals[i + 2],
            });
        }

        for (int i = 0; i < index_count; i += 3) {
            mesh.faces_.emplace_back(Mesh::Face{
                teapot_indices[i + 0],
                teapot_indices[i + 1],
                teapot_indices[i + 2]
            });
        }
    }

    void get_transform(const float *positions, int position_count,
                       Mesh::Position &translate, float &scale)
    {
        float min[3] = {
            positions[0],
            positions[1],
            positions[2],
        };
        float max[3] = {
            positions[0],
            positions[1],
            positions[2],
        };
        for (int i = 3; i < position_count; i += 3) {
            for (int j = 0; j < 3; j++) {
                if (min[j] > positions[i + j])
                    min[j] = positions[i + j];
                if (max[j] < positions[i + j])
                    max[j] = positions[i + j];
            }
        }

        translate.x = -(min[0] + max[0]) / 2.0f;
        translate.y = -(min[1] + max[1]) / 2.0f;
        translate.z = -(min[2] + max[2]) / 2.0f;

        float extents[3] = {
            max[0] + translate.x,
            max[1] + translate.y,
            max[2] + translate.z,
        };
        float max_extent = extents[0];
        if (max_extent < extents[1])
            max_extent = extents[1];
        if (max_extent < extents[2])
            max_extent = extents[2];

        scale = 1.0f / max_extent;
    }
};

void build_meshes(std::array<Mesh, Meshes::MESH_COUNT> &meshes)
{
    BuildPyramid build_pyramid(meshes[Meshes::MESH_PYRAMID]);
    BuildIcosphere build_icosphere(meshes[Meshes::MESH_ICOSPHERE]);
    BuildTeapot build_teapot(meshes[Meshes::MESH_TEAPOT]);
}

} // namespace

Meshes::Meshes(VkDevice dev, const std::vector<VkMemoryPropertyFlags> &mem_flags)
    : dev_(dev),
      vertex_input_binding_(Mesh::vertex_input_binding()),
      vertex_input_attrs_(Mesh::vertex_input_attributes()),
      vertex_input_state_(),
      input_assembly_state_(Mesh::input_assembly_state()),
      index_type_(Mesh::index_type())
{
    vertex_input_state_.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO;
    vertex_input_state_.vertexBindingDescriptionCount = 1;
    vertex_input_state_.pVertexBindingDescriptions = &vertex_input_binding_;
    vertex_input_state_.vertexAttributeDescriptionCount =
        static_cast<uint32_t>(vertex_input_attrs_.size());
    vertex_input_state_.pVertexAttributeDescriptions = vertex_input_attrs_.data();

    std::array<Mesh, MESH_COUNT> meshes;
    build_meshes(meshes);

    draw_commands_.reserve(meshes.size());
    uint32_t first_index = 0;
    int32_t vertex_offset = 0;
    VkDeviceSize vb_size = 0;
    VkDeviceSize ib_size = 0;
    for (const auto &mesh : meshes) {
        VkDrawIndexedIndirectCommand draw = {};
        draw.indexCount = mesh.index_count();
        draw.instanceCount = 1;
        draw.firstIndex = first_index;
        draw.vertexOffset = vertex_offset;
        draw.firstInstance = 0;
        draw_commands_.push_back(draw);
        first_index += mesh.index_count();
        vertex_offset += mesh.vertex_count();
        vb_size += mesh.vertex_buffer_size();
        ib_size += mesh.index_buffer_size();
    }

    allocate_resources(vb_size, ib_size, mem_flags);

    uint8_t *vb_data, *ib_data;
    vk::assert_success(vk::MapMemory(dev_, mem_, 0, VK_WHOLE_SIZE, 0,
                                     reinterpret_cast<void **>(&vb_data)));
    ib_data = vb_data + ib_mem_offset_;

    for (const auto &mesh : meshes) {
        mesh.vertex_buffer_write(vb_data);
        mesh.index_buffer_write(ib_data);
        vb_data += mesh.vertex_buffer_size();
        ib_data += mesh.index_buffer_size();
    }

    vk::UnmapMemory(dev_, mem_);
}

Meshes::~Meshes()
{
    vk::FreeMemory(dev_, mem_, nullptr);
    vk::DestroyBuffer(dev_, vb_, nullptr);
    vk::DestroyBuffer(dev_, ib_, nullptr);
}

void Meshes::cmd_bind_buffers(VkCommandBuffer cmd) const
{
    const VkDeviceSize vb_offset = 0;
    vk::CmdBindVertexBuffers(cmd, 0, 1, &vb_, &vb_offset);

    vk::CmdBindIndexBuffer(cmd, ib_, 0, index_type_);
}

void Meshes::cmd_draw(VkCommandBuffer cmd, Type type) const
{
    const auto &draw = draw_commands_[type];
    vk::CmdDrawIndexed(cmd, draw.indexCount, draw.instanceCount,
                       draw.firstIndex, draw.vertexOffset, draw.firstInstance);
}

void Meshes::allocate_resources(VkDeviceSize vb_size, VkDeviceSize ib_size,
                                const std::vector<VkMemoryPropertyFlags> &mem_flags)
{
    VkBufferCreateInfo buf_info = {};
    buf_info.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
    buf_info.size = vb_size;
    buf_info.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT;
    buf_info.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
    vk::CreateBuffer(dev_, &buf_info, nullptr, &vb_);

    buf_info.size = ib_size;
    buf_info.usage = VK_BUFFER_USAGE_INDEX_BUFFER_BIT;
    vk::CreateBuffer(dev_, &buf_info, nullptr, &ib_);

    VkMemoryRequirements vb_mem_reqs, ib_mem_reqs;
    vk::GetBufferMemoryRequirements(dev_, vb_, &vb_mem_reqs);
    vk::GetBufferMemoryRequirements(dev_, ib_, &ib_mem_reqs);

    // indices follow vertices
    ib_mem_offset_ = vb_mem_reqs.size +
        (ib_mem_reqs.alignment - (vb_mem_reqs.size % ib_mem_reqs.alignment));

    VkMemoryAllocateInfo mem_info = {};
    mem_info.sType =
        VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
    mem_info.allocationSize = ib_mem_offset_ + ib_mem_reqs.size;

    // find any supported and mappable memory type
    uint32_t mem_types = (vb_mem_reqs.memoryTypeBits & ib_mem_reqs.memoryTypeBits);
    for (uint32_t idx = 0; idx < mem_flags.size(); idx++) {
        if ((mem_types & (1 << idx)) &&
            (mem_flags[idx] & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) &&
            (mem_flags[idx] & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT)) {
            // TODO this may not be reachable
            mem_info.memoryTypeIndex = idx;
            break;
        }
    }

    vk::AllocateMemory(dev_, &mem_info, nullptr, &mem_);

    vk::BindBufferMemory(dev_, vb_, mem_, 0);
    vk::BindBufferMemory(dev_, ib_, mem_, ib_mem_offset_);
}

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/Meshes.h

/*
 * Copyright (C) 2016 Google, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included
 * in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#ifndef MESHES_H
#define MESHES_H

#include <vector>
#include <vulkan/vulkan.h>

class Meshes {
public:
    Meshes(VkDevice dev, const std::vector<VkMemoryPropertyFlags> &mem_flags);
    ~Meshes();

    const VkPipelineVertexInputStateCreateInfo &vertex_input_state() const {
        return vertex_input_state_;
    }
    const VkPipelineInputAssemblyStateCreateInfo &input_assembly_state() const {
        return input_assembly_state_;
    }

    enum Type {
        MESH_PYRAMID,
        MESH_ICOSPHERE,
        MESH_TEAPOT,

        MESH_COUNT,
    };

    void cmd_bind_buffers(VkCommandBuffer cmd) const;
    void cmd_draw(VkCommandBuffer cmd, Type type) const;

private:
    void allocate_resources(VkDeviceSize vb_size, VkDeviceSize ib_size,
                            const std::vector<VkMemoryPropertyFlags> &mem_flags);

    VkDevice dev_;

    VkVertexInputBindingDescription vertex_input_binding_;
    std::vector<VkVertexInputAttributeDescription> vertex_input_attrs_;
    VkPipelineVertexInputStateCreateInfo vertex_input_state_;
    VkPipelineInputAssemblyStateCreateInfo input_assembly_state_;
    VkIndexType index_type_;

    std::vector<VkDrawIndexedIndirectCommand> draw_commands_;

    VkBuffer vb_;
    VkBuffer ib_;
    VkDeviceMemory mem_;
    VkDeviceSize ib_mem_offset_;
};

#endif // MESHES_H

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/Meshes.teapot.h

/*
 * Copyright (c) 2009 The Chromium Authors. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
* * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ // Modified from // // https://raw.githubusercontent.com/KhronosGroup/WebGL/master/sdk/demos/google/shiny-teapot/teapot-streams.js static const float teapot_positions[] = { 17.83489990234375f, 0.0f, 30.573999404907227f, 16.452699661254883f, -7.000179767608643f, 30.573999404907227f, 16.223100662231445f, -6.902520179748535f, 31.51460075378418f, 17.586000442504883f, 0.0f, 31.51460075378418f, 16.48940086364746f, -7.015810012817383f, 31.828100204467773f, 17.87470054626465f, 0.0f, 31.828100204467773f, 17.031099319458008f, -7.246280193328857f, 31.51460075378418f, 18.46190071105957f, 0.0f, 31.51460075378418f, 17.62779998779297f, -7.500199794769287f, 30.573999404907227f, 19.108800888061523f, 0.0f, 30.573999404907227f, 12.662699699401855f, -12.662699699401855f, 30.573999404907227f, 12.486100196838379f, -12.486100196838379f, 31.51460075378418f, 12.690999984741211f, -12.690999984741211f, 31.828100204467773f, 13.10789966583252f, -13.10789966583252f, 31.51460075378418f, 13.56719970703125f, -13.56719970703125f, 30.573999404907227f, 7.000179767608643f, -16.452699661254883f, 30.573999404907227f, 6.902520179748535f, -16.223100662231445f, 31.51460075378418f, 7.015810012817383f, -16.48940086364746f, 
31.828100204467773f, 7.246280193328857f, -17.031099319458008f, 31.51460075378418f, 7.500199794769287f, -17.62779998779297f, 30.573999404907227f, 0.0f, -17.83489990234375f, 30.573999404907227f, 0.0f, -17.586000442504883f, 31.51460075378418f, 0.0f, -17.87470054626465f, 31.828100204467773f, 0.0f, -18.46190071105957f, 31.51460075378418f, 0.0f, -19.108800888061523f, 30.573999404907227f, 0.0f, -17.83489990234375f, 30.573999404907227f, -7.483870029449463f, -16.452699661254883f, 30.573999404907227f, -7.106579780578613f, -16.223100662231445f, 31.51460075378418f, 0.0f, -17.586000442504883f, 31.51460075378418f, -7.07627010345459f, -16.48940086364746f, 31.828100204467773f, 0.0f, -17.87470054626465f, 31.828100204467773f, -7.25383996963501f, -17.031099319458008f, 31.51460075378418f, 0.0f, -18.46190071105957f, 31.51460075378418f, -7.500199794769287f, -17.62779998779297f, 30.573999404907227f, 0.0f, -19.108800888061523f, 30.573999404907227f, -13.092700004577637f, -12.662699699401855f, 30.573999404907227f, -12.667499542236328f, -12.486100196838379f, 31.51460075378418f, -12.744799613952637f, -12.690999984741211f, 31.828100204467773f, -13.11460018157959f, -13.10789966583252f, 31.51460075378418f, -13.56719970703125f, -13.56719970703125f, 30.573999404907227f, -16.61389923095703f, -7.000179767608643f, 30.573999404907227f, -16.291099548339844f, -6.902520179748535f, 31.51460075378418f, -16.50950050354004f, -7.015810012817383f, 31.828100204467773f, -17.033599853515625f, -7.246280193328857f, 31.51460075378418f, -17.62779998779297f, -7.500199794769287f, 30.573999404907227f, -17.83489990234375f, 0.0f, 30.573999404907227f, -17.586000442504883f, 0.0f, 31.51460075378418f, -17.87470054626465f, 0.0f, 31.828100204467773f, -18.46190071105957f, 0.0f, 31.51460075378418f, -19.108800888061523f, 0.0f, 30.573999404907227f, -17.83489990234375f, 0.0f, 30.573999404907227f, -16.452699661254883f, 7.000179767608643f, 30.573999404907227f, -16.223100662231445f, 6.902520179748535f, 31.51460075378418f, 
-17.586000442504883f, 0.0f, 31.51460075378418f, -16.48940086364746f, 7.015810012817383f, 31.828100204467773f, -17.87470054626465f, 0.0f, 31.828100204467773f, -17.031099319458008f, 7.246280193328857f, 31.51460075378418f, -18.46190071105957f, 0.0f, 31.51460075378418f, -17.62779998779297f, 7.500199794769287f, 30.573999404907227f, -19.108800888061523f, 0.0f, 30.573999404907227f, -12.662699699401855f, 12.662699699401855f, 30.573999404907227f, -12.486100196838379f, 12.486100196838379f, 31.51460075378418f, -12.690999984741211f, 12.690999984741211f, 31.828100204467773f, -13.10789966583252f, 13.10789966583252f, 31.51460075378418f, -13.56719970703125f, 13.56719970703125f, 30.573999404907227f, -7.000179767608643f, 16.452699661254883f, 30.573999404907227f, -6.902520179748535f, 16.223100662231445f, 31.51460075378418f, -7.015810012817383f, 16.48940086364746f, 31.828100204467773f, -7.246280193328857f, 17.031099319458008f, 31.51460075378418f, -7.500199794769287f, 17.62779998779297f, 30.573999404907227f, 0.0f, 17.83489990234375f, 30.573999404907227f, 0.0f, 17.586000442504883f, 31.51460075378418f, 0.0f, 17.87470054626465f, 31.828100204467773f, 0.0f, 18.46190071105957f, 31.51460075378418f, 0.0f, 19.108800888061523f, 30.573999404907227f, 0.0f, 17.83489990234375f, 30.573999404907227f, 7.000179767608643f, 16.452699661254883f, 30.573999404907227f, 6.902520179748535f, 16.223100662231445f, 31.51460075378418f, 0.0f, 17.586000442504883f, 31.51460075378418f, 7.015810012817383f, 16.48940086364746f, 31.828100204467773f, 0.0f, 17.87470054626465f, 31.828100204467773f, 7.246280193328857f, 17.031099319458008f, 31.51460075378418f, 0.0f, 18.46190071105957f, 31.51460075378418f, 7.500199794769287f, 17.62779998779297f, 30.573999404907227f, 0.0f, 19.108800888061523f, 30.573999404907227f, 12.662699699401855f, 12.662699699401855f, 30.573999404907227f, 12.486100196838379f, 12.486100196838379f, 31.51460075378418f, 12.690999984741211f, 12.690999984741211f, 31.828100204467773f, 13.10789966583252f, 
13.10789966583252f, 31.51460075378418f, 13.56719970703125f, 13.56719970703125f, 30.573999404907227f, 16.452699661254883f, 7.000179767608643f, 30.573999404907227f, 16.223100662231445f, 6.902520179748535f, 31.51460075378418f, 16.48940086364746f, 7.015810012817383f, 31.828100204467773f, 17.031099319458008f, 7.246280193328857f, 31.51460075378418f, 17.62779998779297f, 7.500199794769287f, 30.573999404907227f, 17.83489990234375f, 0.0f, 30.573999404907227f, 17.586000442504883f, 0.0f, 31.51460075378418f, 17.87470054626465f, 0.0f, 31.828100204467773f, 18.46190071105957f, 0.0f, 31.51460075378418f, 19.108800888061523f, 0.0f, 30.573999404907227f, 19.108800888061523f, 0.0f, 30.573999404907227f, 17.62779998779297f, -7.500199794769287f, 30.573999404907227f, 19.785400390625f, -8.418190002441406f, 25.572900772094727f, 21.447599411010742f, 0.0f, 25.572900772094727f, 21.667600631713867f, -9.218990325927734f, 20.661399841308594f, 23.487899780273438f, 0.0f, 20.661399841308594f, 22.99880027770996f, -9.785409927368164f, 15.928999900817871f, 24.930999755859375f, 0.0f, 15.928999900817871f, 23.503799438476562f, -10.000300407409668f, 11.465299606323242f, 25.4783992767334f, 0.0f, 11.465299606323242f, 13.56719970703125f, -13.56719970703125f, 30.573999404907227f, 15.227800369262695f, -15.227800369262695f, 25.572900772094727f, 16.67639923095703f, -16.67639923095703f, 20.661399841308594f, 17.701000213623047f, -17.701000213623047f, 15.928999900817871f, 18.089599609375f, -18.089599609375f, 11.465299606323242f, 7.500199794769287f, -17.62779998779297f, 30.573999404907227f, 8.418190002441406f, -19.785400390625f, 25.572900772094727f, 9.218990325927734f, -21.667600631713867f, 20.661399841308594f, 9.785409927368164f, -22.99880027770996f, 15.928999900817871f, 10.000300407409668f, -23.503799438476562f, 11.465299606323242f, 0.0f, -19.108800888061523f, 30.573999404907227f, 0.0f, -21.447599411010742f, 25.572900772094727f, 0.0f, -23.487899780273438f, 20.661399841308594f, 0.0f, -24.930999755859375f, 
15.928999900817871f, 0.0f, -25.4783992767334f, 11.465299606323242f, 0.0f, -19.108800888061523f, 30.573999404907227f, -7.500199794769287f, -17.62779998779297f, 30.573999404907227f, -8.418190002441406f, -19.785400390625f, 25.572900772094727f, 0.0f, -21.447599411010742f, 25.572900772094727f, -9.218990325927734f, -21.667600631713867f, 20.661399841308594f, 0.0f, -23.487899780273438f, 20.661399841308594f, -9.785409927368164f, -22.99880027770996f, 15.928999900817871f, 0.0f, -24.930999755859375f, 15.928999900817871f, -10.000300407409668f, -23.503799438476562f, 11.465299606323242f, 0.0f, -25.4783992767334f, 11.465299606323242f, -13.56719970703125f, -13.56719970703125f, 30.573999404907227f, -15.227800369262695f, -15.227800369262695f, 25.572900772094727f, -16.67639923095703f, -16.67639923095703f, 20.661399841308594f, -17.701000213623047f, -17.701000213623047f, 15.928999900817871f, -18.089599609375f, -18.089599609375f, 11.465299606323242f, -17.62779998779297f, -7.500199794769287f, 30.573999404907227f, -19.785400390625f, -8.418190002441406f, 25.572900772094727f, -21.667600631713867f, -9.218990325927734f, 20.661399841308594f, -22.99880027770996f, -9.785409927368164f, 15.928999900817871f, -23.503799438476562f, -10.000300407409668f, 11.465299606323242f, -19.108800888061523f, 0.0f, 30.573999404907227f, -21.447599411010742f, 0.0f, 25.572900772094727f, -23.487899780273438f, 0.0f, 20.661399841308594f, -24.930999755859375f, 0.0f, 15.928999900817871f, -25.4783992767334f, 0.0f, 11.465299606323242f, -19.108800888061523f, 0.0f, 30.573999404907227f, -17.62779998779297f, 7.500199794769287f, 30.573999404907227f, -19.785400390625f, 8.418190002441406f, 25.572900772094727f, -21.447599411010742f, 0.0f, 25.572900772094727f, -21.667600631713867f, 9.218990325927734f, 20.661399841308594f, -23.487899780273438f, 0.0f, 20.661399841308594f, -22.99880027770996f, 9.785409927368164f, 15.928999900817871f, -24.930999755859375f, 0.0f, 15.928999900817871f, -23.503799438476562f, 10.000300407409668f, 
11.465299606323242f, -25.4783992767334f, 0.0f, 11.465299606323242f, -13.56719970703125f, 13.56719970703125f, 30.573999404907227f, -15.227800369262695f, 15.227800369262695f, 25.572900772094727f, -16.67639923095703f, 16.67639923095703f, 20.661399841308594f, -17.701000213623047f, 17.701000213623047f, 15.928999900817871f, -18.089599609375f, 18.089599609375f, 11.465299606323242f, -7.500199794769287f, 17.62779998779297f, 30.573999404907227f, -8.418190002441406f, 19.785400390625f, 25.572900772094727f, -9.218990325927734f, 21.667600631713867f, 20.661399841308594f, -9.785409927368164f, 22.99880027770996f, 15.928999900817871f, -10.000300407409668f, 23.503799438476562f, 11.465299606323242f, 0.0f, 19.108800888061523f, 30.573999404907227f, 0.0f, 21.447599411010742f, 25.572900772094727f, 0.0f, 23.487899780273438f, 20.661399841308594f, 0.0f, 24.930999755859375f, 15.928999900817871f, 0.0f, 25.4783992767334f, 11.465299606323242f, 0.0f, 19.108800888061523f, 30.573999404907227f, 7.500199794769287f, 17.62779998779297f, 30.573999404907227f, 8.418190002441406f, 19.785400390625f, 25.572900772094727f, 0.0f, 21.447599411010742f, 25.572900772094727f, 9.218990325927734f, 21.667600631713867f, 20.661399841308594f, 0.0f, 23.487899780273438f, 20.661399841308594f, 9.785409927368164f, 22.99880027770996f, 15.928999900817871f, 0.0f, 24.930999755859375f, 15.928999900817871f, 10.000300407409668f, 23.503799438476562f, 11.465299606323242f, 0.0f, 25.4783992767334f, 11.465299606323242f, 13.56719970703125f, 13.56719970703125f, 30.573999404907227f, 15.227800369262695f, 15.227800369262695f, 25.572900772094727f, 16.67639923095703f, 16.67639923095703f, 20.661399841308594f, 17.701000213623047f, 17.701000213623047f, 15.928999900817871f, 18.089599609375f, 18.089599609375f, 11.465299606323242f, 17.62779998779297f, 7.500199794769287f, 30.573999404907227f, 19.785400390625f, 8.418190002441406f, 25.572900772094727f, 21.667600631713867f, 9.218990325927734f, 20.661399841308594f, 22.99880027770996f, 9.785409927368164f, 
15.928999900817871f, 23.503799438476562f, 10.000300407409668f, 11.465299606323242f, 19.108800888061523f, 0.0f, 30.573999404907227f, 21.447599411010742f, 0.0f, 25.572900772094727f, 23.487899780273438f, 0.0f, 20.661399841308594f, 24.930999755859375f, 0.0f, 15.928999900817871f, 25.4783992767334f, 0.0f, 11.465299606323242f, 25.4783992767334f, 0.0f, 11.465299606323242f, 23.503799438476562f, -10.000300407409668f, 11.465299606323242f, 22.5856990814209f, -9.609620094299316f, 7.688300132751465f, 24.48310089111328f, 0.0f, 7.688300132751465f, 20.565799713134766f, -8.750229835510254f, 4.89661979675293f, 22.29360008239746f, 0.0f, 4.89661979675293f, 18.54599952697754f, -7.890830039978027f, 3.0006699562072754f, 20.104000091552734f, 0.0f, 3.0006699562072754f, 17.62779998779297f, -7.500199794769287f, 1.9108799695968628f, 19.108800888061523f, 0.0f, 1.9108799695968628f, 18.089599609375f, -18.089599609375f, 11.465299606323242f, 17.382999420166016f, -17.382999420166016f, 7.688300132751465f, 15.828399658203125f, -15.828399658203125f, 4.89661979675293f, 14.273900032043457f, -14.273900032043457f, 3.0006699562072754f, 13.56719970703125f, -13.56719970703125f, 1.9108799695968628f, 10.000300407409668f, -23.503799438476562f, 11.465299606323242f, 9.609620094299316f, -22.5856990814209f, 7.688300132751465f, 8.750229835510254f, -20.565799713134766f, 4.89661979675293f, 7.890830039978027f, -18.54599952697754f, 3.0006699562072754f, 7.500199794769287f, -17.62779998779297f, 1.9108799695968628f, 0.0f, -25.4783992767334f, 11.465299606323242f, 0.0f, -24.48310089111328f, 7.688300132751465f, 0.0f, -22.29360008239746f, 4.89661979675293f, 0.0f, -20.104000091552734f, 3.0006699562072754f, 0.0f, -19.108800888061523f, 1.9108799695968628f, 0.0f, -25.4783992767334f, 11.465299606323242f, -10.000300407409668f, -23.503799438476562f, 11.465299606323242f, -9.609620094299316f, -22.5856990814209f, 7.688300132751465f, 0.0f, -24.48310089111328f, 7.688300132751465f, -8.750229835510254f, -20.565799713134766f, 
4.89661979675293f, 0.0f, -22.29360008239746f, 4.89661979675293f, -7.890830039978027f, -18.54599952697754f, 3.0006699562072754f, 0.0f, -20.104000091552734f, 3.0006699562072754f, -7.500199794769287f, -17.62779998779297f, 1.9108799695968628f, 0.0f, -19.108800888061523f, 1.9108799695968628f, -18.089599609375f, -18.089599609375f, 11.465299606323242f, -17.382999420166016f, -17.382999420166016f, 7.688300132751465f, -15.828399658203125f, -15.828399658203125f, 4.89661979675293f, -14.273900032043457f, -14.273900032043457f, 3.0006699562072754f, -13.56719970703125f, -13.56719970703125f, 1.9108799695968628f, -23.503799438476562f, -10.000300407409668f, 11.465299606323242f, -22.5856990814209f, -9.609620094299316f, 7.688300132751465f, -20.565799713134766f, -8.750229835510254f, 4.89661979675293f, -18.54599952697754f, -7.890830039978027f, 3.0006699562072754f, -17.62779998779297f, -7.500199794769287f, 1.9108799695968628f, -25.4783992767334f, 0.0f, 11.465299606323242f, -24.48310089111328f, 0.0f, 7.688300132751465f, -22.29360008239746f, 0.0f, 4.89661979675293f, -20.104000091552734f, 0.0f, 3.0006699562072754f, -19.108800888061523f, 0.0f, 1.9108799695968628f, -25.4783992767334f, 0.0f, 11.465299606323242f, -23.503799438476562f, 10.000300407409668f, 11.465299606323242f, -22.5856990814209f, 9.609620094299316f, 7.688300132751465f, -24.48310089111328f, 0.0f, 7.688300132751465f, -20.565799713134766f, 8.750229835510254f, 4.89661979675293f, -22.29360008239746f, 0.0f, 4.89661979675293f, -18.54599952697754f, 7.890830039978027f, 3.0006699562072754f, -20.104000091552734f, 0.0f, 3.0006699562072754f, -17.62779998779297f, 7.500199794769287f, 1.9108799695968628f, -19.108800888061523f, 0.0f, 1.9108799695968628f, -18.089599609375f, 18.089599609375f, 11.465299606323242f, -17.382999420166016f, 17.382999420166016f, 7.688300132751465f, -15.828399658203125f, 15.828399658203125f, 4.89661979675293f, -14.273900032043457f, 14.273900032043457f, 3.0006699562072754f, -13.56719970703125f, 13.56719970703125f, 
1.9108799695968628f, -10.000300407409668f, 23.503799438476562f, 11.465299606323242f, -9.609620094299316f, 22.5856990814209f, 7.688300132751465f, -8.750229835510254f, 20.565799713134766f, 4.89661979675293f, -7.890830039978027f, 18.54599952697754f, 3.0006699562072754f, -7.500199794769287f, 17.62779998779297f, 1.9108799695968628f, 0.0f, 25.4783992767334f, 11.465299606323242f, 0.0f, 24.48310089111328f, 7.688300132751465f, 0.0f, 22.29360008239746f, 4.89661979675293f, 0.0f, 20.104000091552734f, 3.0006699562072754f, 0.0f, 19.108800888061523f, 1.9108799695968628f, 0.0f, 25.4783992767334f, 11.465299606323242f, 10.000300407409668f, 23.503799438476562f, 11.465299606323242f, 9.609620094299316f, 22.5856990814209f, 7.688300132751465f, 0.0f, 24.48310089111328f, 7.688300132751465f, 8.750229835510254f, 20.565799713134766f, 4.89661979675293f, 0.0f, 22.29360008239746f, 4.89661979675293f, 7.890830039978027f, 18.54599952697754f, 3.0006699562072754f, 0.0f, 20.104000091552734f, 3.0006699562072754f, 7.500199794769287f, 17.62779998779297f, 1.9108799695968628f, 0.0f, 19.108800888061523f, 1.9108799695968628f, 18.089599609375f, 18.089599609375f, 11.465299606323242f, 17.382999420166016f, 17.382999420166016f, 7.688300132751465f, 15.828399658203125f, 15.828399658203125f, 4.89661979675293f, 14.273900032043457f, 14.273900032043457f, 3.0006699562072754f, 13.56719970703125f, 13.56719970703125f, 1.9108799695968628f, 23.503799438476562f, 10.000300407409668f, 11.465299606323242f, 22.5856990814209f, 9.609620094299316f, 7.688300132751465f, 20.565799713134766f, 8.750229835510254f, 4.89661979675293f, 18.54599952697754f, 7.890830039978027f, 3.0006699562072754f, 17.62779998779297f, 7.500199794769287f, 1.9108799695968628f, 25.4783992767334f, 0.0f, 11.465299606323242f, 24.48310089111328f, 0.0f, 7.688300132751465f, 22.29360008239746f, 0.0f, 4.89661979675293f, 20.104000091552734f, 0.0f, 3.0006699562072754f, 19.108800888061523f, 0.0f, 1.9108799695968628f, 19.108800888061523f, 0.0f, 1.9108799695968628f, 
17.62779998779297f, -7.500199794769287f, 1.9108799695968628f, 17.228500366210938f, -7.330269813537598f, 1.2092299461364746f, 18.675800323486328f, 0.0f, 1.2092299461364746f, 15.093799591064453f, -6.422039985656738f, 0.5971490144729614f, 16.361900329589844f, 0.0f, 0.5971490144729614f, 9.819259643554688f, -4.177840232849121f, 0.16421599686145782f, 10.644200325012207f, 0.0f, 0.16421599686145782f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 13.56719970703125f, -13.56719970703125f, 1.9108799695968628f, 13.25979995727539f, -13.25979995727539f, 1.2092299461364746f, 11.616900444030762f, -11.616900444030762f, 0.5971490144729614f, 7.557370185852051f, -7.557370185852051f, 0.16421599686145782f, 0.0f, 0.0f, 0.0f, 7.500199794769287f, -17.62779998779297f, 1.9108799695968628f, 7.330269813537598f, -17.228500366210938f, 1.2092299461364746f, 6.422039985656738f, -15.093799591064453f, 0.5971490144729614f, 4.177840232849121f, -9.819259643554688f, 0.16421599686145782f, 0.0f, 0.0f, 0.0f, 0.0f, -19.108800888061523f, 1.9108799695968628f, 0.0f, -18.675800323486328f, 1.2092299461364746f, 0.0f, -16.361900329589844f, 0.5971490144729614f, 0.0f, -10.644200325012207f, 0.16421599686145782f, 0.0f, 0.0f, 0.0f, 0.0f, -19.108800888061523f, 1.9108799695968628f, -7.500199794769287f, -17.62779998779297f, 1.9108799695968628f, -7.330269813537598f, -17.228500366210938f, 1.2092299461364746f, 0.0f, -18.675800323486328f, 1.2092299461364746f, -6.422039985656738f, -15.093799591064453f, 0.5971490144729614f, 0.0f, -16.361900329589844f, 0.5971490144729614f, -4.177840232849121f, -9.819259643554688f, 0.16421599686145782f, 0.0f, -10.644200325012207f, 0.16421599686145782f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, -13.56719970703125f, -13.56719970703125f, 1.9108799695968628f, -13.25979995727539f, -13.25979995727539f, 1.2092299461364746f, -11.616900444030762f, -11.616900444030762f, 0.5971490144729614f, -7.557370185852051f, -7.557370185852051f, 0.16421599686145782f, 0.0f, 0.0f, 0.0f, -17.62779998779297f, -7.500199794769287f, 
1.9108799695968628f, -17.228500366210938f, -7.330269813537598f, 1.2092299461364746f, -15.093799591064453f, -6.422039985656738f, 0.5971490144729614f, -9.819259643554688f, -4.177840232849121f, 0.16421599686145782f, 0.0f, 0.0f, 0.0f, -19.108800888061523f, 0.0f, 1.9108799695968628f, -18.675800323486328f, 0.0f, 1.2092299461364746f, -16.361900329589844f, 0.0f, 0.5971490144729614f, -10.644200325012207f, 0.0f, 0.16421599686145782f, 0.0f, 0.0f, 0.0f, -19.108800888061523f, 0.0f, 1.9108799695968628f, -17.62779998779297f, 7.500199794769287f, 1.9108799695968628f, -17.228500366210938f, 7.330269813537598f, 1.2092299461364746f, -18.675800323486328f, 0.0f, 1.2092299461364746f, -15.093799591064453f, 6.422039985656738f, 0.5971490144729614f, -16.361900329589844f, 0.0f, 0.5971490144729614f, -9.819259643554688f, 4.177840232849121f, 0.16421599686145782f, -10.644200325012207f, 0.0f, 0.16421599686145782f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, -13.56719970703125f, 13.56719970703125f, 1.9108799695968628f, -13.25979995727539f, 13.25979995727539f, 1.2092299461364746f, -11.616900444030762f, 11.616900444030762f, 0.5971490144729614f, -7.557370185852051f, 7.557370185852051f, 0.16421599686145782f, 0.0f, 0.0f, 0.0f, -7.500199794769287f, 17.62779998779297f, 1.9108799695968628f, -7.330269813537598f, 17.228500366210938f, 1.2092299461364746f, -6.422039985656738f, 15.093799591064453f, 0.5971490144729614f, -4.177840232849121f, 9.819259643554688f, 0.16421599686145782f, 0.0f, 0.0f, 0.0f, 0.0f, 19.108800888061523f, 1.9108799695968628f, 0.0f, 18.675800323486328f, 1.2092299461364746f, 0.0f, 16.361900329589844f, 0.5971490144729614f, 0.0f, 10.644200325012207f, 0.16421599686145782f, 0.0f, 0.0f, 0.0f, 0.0f, 19.108800888061523f, 1.9108799695968628f, 7.500199794769287f, 17.62779998779297f, 1.9108799695968628f, 7.330269813537598f, 17.228500366210938f, 1.2092299461364746f, 0.0f, 18.675800323486328f, 1.2092299461364746f, 6.422039985656738f, 15.093799591064453f, 0.5971490144729614f, 0.0f, 16.361900329589844f, 
0.5971490144729614f, 4.177840232849121f, 9.819259643554688f, 0.16421599686145782f, 0.0f, 10.644200325012207f, 0.16421599686145782f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 13.56719970703125f, 13.56719970703125f, 1.9108799695968628f, 13.25979995727539f, 13.25979995727539f, 1.2092299461364746f, 11.616900444030762f, 11.616900444030762f, 0.5971490144729614f, 7.557370185852051f, 7.557370185852051f, 0.16421599686145782f, 0.0f, 0.0f, 0.0f, 17.62779998779297f, 7.500199794769287f, 1.9108799695968628f, 17.228500366210938f, 7.330269813537598f, 1.2092299461364746f, 15.093799591064453f, 6.422039985656738f, 0.5971490144729614f, 9.819259643554688f, 4.177840232849121f, 0.16421599686145782f, 0.0f, 0.0f, 0.0f, 19.108800888061523f, 0.0f, 1.9108799695968628f, 18.675800323486328f, 0.0f, 1.2092299461364746f, 16.361900329589844f, 0.0f, 0.5971490144729614f, 10.644200325012207f, 0.0f, 0.16421599686145782f, 0.0f, 0.0f, 0.0f, -20.382699966430664f, 0.0f, 25.796899795532227f, -20.1835994720459f, -2.149739980697632f, 26.244699478149414f, -26.511600494384766f, -2.149739980697632f, 26.192899703979492f, -26.334299087524414f, 0.0f, 25.752099990844727f, -31.156299591064453f, -2.149739980697632f, 25.830400466918945f, -30.733299255371094f, 0.0f, 25.438600540161133f, -34.016998291015625f, -2.149739980697632f, 24.846500396728516f, -33.46030044555664f, 0.0f, 24.587600708007812f, -34.99290084838867f, -2.149739980697632f, 22.930500030517578f, -34.39580154418945f, 0.0f, 22.930500030517578f, -19.74570083618164f, -2.8663198947906494f, 27.229999542236328f, -26.901599884033203f, -2.8663198947906494f, 27.162799835205078f, -32.08679962158203f, -2.8663198947906494f, 26.69260025024414f, -35.241798400878906f, -2.8663198947906494f, 25.416200637817383f, -36.30670166015625f, -2.8663198947906494f, 22.930500030517578f, -19.30780029296875f, -2.149739980697632f, 28.215299606323242f, -27.29159927368164f, -2.149739980697632f, 28.132699966430664f, -33.017398834228516f, -2.149739980697632f, 27.55470085144043f, -36.46649932861328f, 
-2.149739980697632f, 25.98579978942871f, -37.620399475097656f, -2.149739980697632f, 22.930500030517578f, -19.108800888061523f, 0.0f, 28.66320037841797f, -27.468900680541992f, 0.0f, 28.57360076904297f, -33.440399169921875f, 0.0f, 27.94659996032715f, -37.02330017089844f, 0.0f, 26.244699478149414f, -38.21760177612305f, 0.0f, 22.930500030517578f, -19.108800888061523f, 0.0f, 28.66320037841797f, -19.30780029296875f, 2.149739980697632f, 28.215299606323242f, -27.29159927368164f, 2.149739980697632f, 28.132699966430664f, -27.468900680541992f, 0.0f, 28.57360076904297f, -33.017398834228516f, 2.149739980697632f, 27.55470085144043f, -33.440399169921875f, 0.0f, 27.94659996032715f, -36.46649932861328f, 2.149739980697632f, 25.98579978942871f, -37.02330017089844f, 0.0f, 26.244699478149414f, -37.620399475097656f, 2.149739980697632f, 22.930500030517578f, -38.21760177612305f, 0.0f, 22.930500030517578f, -19.74570083618164f, 2.8663198947906494f, 27.229999542236328f, -26.901599884033203f, 2.8663198947906494f, 27.162799835205078f, -32.08679962158203f, 2.8663198947906494f, 26.69260025024414f, -35.241798400878906f, 2.8663198947906494f, 25.416200637817383f, -36.30670166015625f, 2.8663198947906494f, 22.930500030517578f, -20.1835994720459f, 2.149739980697632f, 26.244699478149414f, -26.511600494384766f, 2.149739980697632f, 26.192899703979492f, -31.156299591064453f, 2.149739980697632f, 25.830400466918945f, -34.016998291015625f, 2.149739980697632f, 24.846500396728516f, -34.99290084838867f, 2.149739980697632f, 22.930500030517578f, -20.382699966430664f, 0.0f, 25.796899795532227f, -26.334299087524414f, 0.0f, 25.752099990844727f, -30.733299255371094f, 0.0f, 25.438600540161133f, -33.46030044555664f, 0.0f, 24.587600708007812f, -34.39580154418945f, 0.0f, 22.930500030517578f, -34.39580154418945f, 0.0f, 22.930500030517578f, -34.99290084838867f, -2.149739980697632f, 22.930500030517578f, -34.44089889526367f, -2.149739980697632f, 20.082199096679688f, -33.89820098876953f, 0.0f, 20.33289909362793f, 
-32.711299896240234f, -2.149739980697632f, 16.81529998779297f, -32.32569885253906f, 0.0f, 17.197900772094727f, -29.69420051574707f, -2.149739980697632f, 13.590499877929688f, -29.558900833129883f, 0.0f, 14.062899589538574f, -25.279300689697266f, -2.149739980697632f, 10.8681001663208f, -25.4783992767334f, 0.0f, 11.465299606323242f, -36.30670166015625f, -2.8663198947906494f, 22.930500030517578f, -35.6348991394043f, -2.8663198947906494f, 19.530500411987305f, -33.55979919433594f, -2.8663198947906494f, 15.973699569702148f, -29.99180030822754f, -2.8663198947906494f, 12.551300048828125f, -24.841400146484375f, -2.8663198947906494f, 9.554389953613281f, -37.620399475097656f, -2.149739980697632f, 22.930500030517578f, -36.82889938354492f, -2.149739980697632f, 18.97879981994629f, -34.408199310302734f, -2.149739980697632f, 15.132100105285645f, -30.289499282836914f, -2.149739980697632f, 11.512200355529785f, -24.403499603271484f, -2.149739980697632f, 8.240659713745117f, -38.21760177612305f, 0.0f, 22.930500030517578f, -37.37160110473633f, 0.0f, 18.728099822998047f, -34.79389953613281f, 0.0f, 14.749600410461426f, -30.424800872802734f, 0.0f, 11.039799690246582f, -24.204500198364258f, 0.0f, 7.643509864807129f, -38.21760177612305f, 0.0f, 22.930500030517578f, -37.620399475097656f, 2.149739980697632f, 22.930500030517578f, -36.82889938354492f, 2.149739980697632f, 18.97879981994629f, -37.37160110473633f, 0.0f, 18.728099822998047f, -34.408199310302734f, 2.149739980697632f, 15.132100105285645f, -34.79389953613281f, 0.0f, 14.749600410461426f, -30.289499282836914f, 2.149739980697632f, 11.512200355529785f, -30.424800872802734f, 0.0f, 11.039799690246582f, -24.403499603271484f, 2.149739980697632f, 8.240659713745117f, -24.204500198364258f, 0.0f, 7.643509864807129f, -36.30670166015625f, 2.8663198947906494f, 22.930500030517578f, -35.6348991394043f, 2.8663198947906494f, 19.530500411987305f, -33.55979919433594f, 2.8663198947906494f, 15.973699569702148f, -29.99180030822754f, 2.8663198947906494f, 
12.551300048828125f, -24.841400146484375f, 2.8663198947906494f, 9.554389953613281f, -34.99290084838867f, 2.149739980697632f, 22.930500030517578f, -34.44089889526367f, 2.149739980697632f, 20.082199096679688f, -32.711299896240234f, 2.149739980697632f, 16.81529998779297f, -29.69420051574707f, 2.149739980697632f, 13.590499877929688f, -25.279300689697266f, 2.149739980697632f, 10.8681001663208f, -34.39580154418945f, 0.0f, 22.930500030517578f, -33.89820098876953f, 0.0f, 20.33289909362793f, -32.32569885253906f, 0.0f, 17.197900772094727f, -29.558900833129883f, 0.0f, 14.062899589538574f, -25.4783992767334f, 0.0f, 11.465299606323242f, 21.656600952148438f, 0.0f, 18.15329933166504f, 21.656600952148438f, -4.729420185089111f, 16.511199951171875f, 28.233999252319336f, -4.270359992980957f, 18.339000701904297f, 27.76740074157715f, 0.0f, 19.55660057067871f, 31.011899948120117f, -3.2604401111602783f, 22.221399307250977f, 30.4148006439209f, 0.0f, 22.930500030517578f, 32.59560012817383f, -2.2505099773406982f, 26.764400482177734f, 31.867900848388672f, 0.0f, 27.020999908447266f, 35.5900993347168f, -1.791450023651123f, 30.573999404907227f, 34.39580154418945f, 0.0f, 30.573999404907227f, 21.656600952148438f, -6.3059000968933105f, 12.89840030670166f, 29.260299682617188f, -5.693819999694824f, 15.660200119018555f, 32.32569885253906f, -4.347249984741211f, 20.661399841308594f, 34.19670104980469f, -3.0006699562072754f, 26.199899673461914f, 38.21760177612305f, -2.3886001110076904f, 30.573999404907227f, 21.656600952148438f, -4.729420185089111f, 9.285670280456543f, 30.286699295043945f, -4.270359992980957f, 12.981499671936035f, 33.639400482177734f, -3.2604401111602783f, 19.101299285888672f, 35.79790115356445f, -2.2505099773406982f, 25.635400772094727f, 40.845001220703125f, -1.791450023651123f, 30.573999404907227f, 21.656600952148438f, 0.0f, 7.643509864807129f, 30.75320053100586f, 0.0f, 11.763799667358398f, 34.23659896850586f, 0.0f, 18.392200469970703f, 36.52560043334961f, 0.0f, 25.378799438476562f, 
42.03929901123047f, 0.0f, 30.573999404907227f, 21.656600952148438f, 0.0f, 7.643509864807129f, 21.656600952148438f, 4.729420185089111f, 9.285670280456543f, 30.286699295043945f, 4.270359992980957f, 12.981499671936035f, 30.75320053100586f, 0.0f, 11.763799667358398f, 33.639400482177734f, 3.2604401111602783f, 19.101299285888672f, 34.23659896850586f, 0.0f, 18.392200469970703f, 35.79790115356445f, 2.2505099773406982f, 25.635400772094727f, 36.52560043334961f, 0.0f, 25.378799438476562f, 40.845001220703125f, 1.791450023651123f, 30.573999404907227f, 42.03929901123047f, 0.0f, 30.573999404907227f, 21.656600952148438f, 6.3059000968933105f, 12.89840030670166f, 29.260299682617188f, 5.693819999694824f, 15.660200119018555f, 32.32569885253906f, 4.347249984741211f, 20.661399841308594f, 34.19670104980469f, 3.0006699562072754f, 26.199899673461914f, 38.21760177612305f, 2.3886001110076904f, 30.573999404907227f, 21.656600952148438f, 4.729420185089111f, 16.511199951171875f, 28.233999252319336f, 4.270359992980957f, 18.339000701904297f, 31.011899948120117f, 3.2604401111602783f, 22.221399307250977f, 32.59560012817383f, 2.2505099773406982f, 26.764400482177734f, 35.5900993347168f, 1.791450023651123f, 30.573999404907227f, 21.656600952148438f, 0.0f, 18.15329933166504f, 27.76740074157715f, 0.0f, 19.55660057067871f, 30.4148006439209f, 0.0f, 22.930500030517578f, 31.867900848388672f, 0.0f, 27.020999908447266f, 34.39580154418945f, 0.0f, 30.573999404907227f, 34.39580154418945f, 0.0f, 30.573999404907227f, 35.5900993347168f, -1.791450023651123f, 30.573999404907227f, 36.59049987792969f, -1.679479956626892f, 31.137699127197266f, 35.3114013671875f, 0.0f, 31.111499786376953f, 37.18870162963867f, -1.4331599473953247f, 31.332599639892578f, 35.98820114135742f, 0.0f, 31.290599822998047f, 37.206600189208984f, -1.1868300437927246f, 31.1481990814209f, 36.187198638916016f, 0.0f, 31.111499786376953f, 36.46590042114258f, -1.074869990348816f, 30.573999404907227f, 35.669700622558594f, 0.0f, 30.573999404907227f, 
38.21760177612305f, -2.3886001110076904f, 30.573999404907227f, 39.40439987182617f, -2.2393100261688232f, 31.195499420166016f, 39.829898834228516f, -1.9108799695968628f, 31.424999237060547f, 39.44919967651367f, -1.582450032234192f, 31.229000091552734f, 38.21760177612305f, -1.4331599473953247f, 30.573999404907227f, 40.845001220703125f, -1.791450023651123f, 30.573999404907227f, 42.218299865722656f, -1.679479956626892f, 31.25320053100586f, 42.47100067138672f, -1.4331599473953247f, 31.51740074157715f, 41.69169998168945f, -1.1868300437927246f, 31.309900283813477f, 39.969200134277344f, -1.074869990348816f, 30.573999404907227f, 42.03929901123047f, 0.0f, 30.573999404907227f, 43.49729919433594f, 0.0f, 31.279399871826172f, 43.67150115966797f, 0.0f, 31.55929946899414f, 42.71110153198242f, 0.0f, 31.346599578857422f, 40.76539993286133f, 0.0f, 30.573999404907227f, 42.03929901123047f, 0.0f, 30.573999404907227f, 40.845001220703125f, 1.791450023651123f, 30.573999404907227f, 42.218299865722656f, 1.679479956626892f, 31.25320053100586f, 43.49729919433594f, 0.0f, 31.279399871826172f, 42.47100067138672f, 1.4331599473953247f, 31.51740074157715f, 43.67150115966797f, 0.0f, 31.55929946899414f, 41.69169998168945f, 1.1868300437927246f, 31.309900283813477f, 42.71110153198242f, 0.0f, 31.346599578857422f, 39.969200134277344f, 1.074869990348816f, 30.573999404907227f, 40.76539993286133f, 0.0f, 30.573999404907227f, 38.21760177612305f, 2.3886001110076904f, 30.573999404907227f, 39.40439987182617f, 2.2393100261688232f, 31.195499420166016f, 39.829898834228516f, 1.9108799695968628f, 31.424999237060547f, 39.44919967651367f, 1.582450032234192f, 31.229000091552734f, 38.21760177612305f, 1.4331599473953247f, 30.573999404907227f, 35.5900993347168f, 1.791450023651123f, 30.573999404907227f, 36.59049987792969f, 1.679479956626892f, 31.137699127197266f, 37.18870162963867f, 1.4331599473953247f, 31.332599639892578f, 37.206600189208984f, 1.1868300437927246f, 31.1481990814209f, 36.46590042114258f, 1.074869990348816f, 
30.573999404907227f, 34.39580154418945f, 0.0f, 30.573999404907227f, 35.3114013671875f, 0.0f, 31.111499786376953f, 35.98820114135742f, 0.0f, 31.290599822998047f, 36.187198638916016f, 0.0f, 31.111499786376953f, 35.669700622558594f, 0.0f, 30.573999404907227f, 0.0f, 0.0f, 40.12839889526367f, 0.0f, 0.0f, 40.12839889526367f, 4.004499912261963f, -1.7077000141143799f, 39.501399993896484f, 4.339280128479004f, 0.0f, 39.501399993896484f, 3.8207099437713623f, -1.6290700435638428f, 37.97869873046875f, 4.140230178833008f, 0.0f, 37.97869873046875f, 2.314160108566284f, -0.985912024974823f, 36.09769821166992f, 2.5080299377441406f, 0.0f, 36.09769821166992f, 2.3503799438476562f, -1.0000300407409668f, 34.39580154418945f, 2.547840118408203f, 0.0f, 34.39580154418945f, 0.0f, 0.0f, 40.12839889526367f, 3.0849199295043945f, -3.0849199295043945f, 39.501399993896484f, 2.943150043487549f, -2.943150043487549f, 37.97869873046875f, 1.782039999961853f, -1.782039999961853f, 36.09769821166992f, 1.8089599609375f, -1.8089599609375f, 34.39580154418945f, 0.0f, 0.0f, 40.12839889526367f, 1.7077000141143799f, -4.004499912261963f, 39.501399993896484f, 1.6290700435638428f, -3.8207099437713623f, 37.97869873046875f, 0.985912024974823f, -2.314160108566284f, 36.09769821166992f, 1.0000300407409668f, -2.3503799438476562f, 34.39580154418945f, 0.0f, 0.0f, 40.12839889526367f, 0.0f, -4.339280128479004f, 39.501399993896484f, 0.0f, -4.140230178833008f, 37.97869873046875f, 0.0f, -2.5080299377441406f, 36.09769821166992f, 0.0f, -2.547840118408203f, 34.39580154418945f, 0.0f, 0.0f, 40.12839889526367f, 0.0f, 0.0f, 40.12839889526367f, -1.7077000141143799f, -4.004499912261963f, 39.501399993896484f, 0.0f, -4.339280128479004f, 39.501399993896484f, -1.6290700435638428f, -3.8207099437713623f, 37.97869873046875f, 0.0f, -4.140230178833008f, 37.97869873046875f, -0.985912024974823f, -2.314160108566284f, 36.09769821166992f, 0.0f, -2.5080299377441406f, 36.09769821166992f, -1.0000300407409668f, -2.3503799438476562f, 34.39580154418945f, 
0.0f, -2.547840118408203f, 34.39580154418945f, 0.0f, 0.0f, 40.12839889526367f, -3.0849199295043945f, -3.0849199295043945f, 39.501399993896484f, -2.943150043487549f, -2.943150043487549f, 37.97869873046875f, -1.782039999961853f, -1.782039999961853f, 36.09769821166992f, -1.8089599609375f, -1.8089599609375f, 34.39580154418945f, 0.0f, 0.0f, 40.12839889526367f, -4.004499912261963f, -1.7077000141143799f, 39.501399993896484f, -3.8207099437713623f, -1.6290700435638428f, 37.97869873046875f, -2.314160108566284f, -0.985912024974823f, 36.09769821166992f, -2.3503799438476562f, -1.0000300407409668f, 34.39580154418945f, 0.0f, 0.0f, 40.12839889526367f, -4.339280128479004f, 0.0f, 39.501399993896484f, -4.140230178833008f, 0.0f, 37.97869873046875f, -2.5080299377441406f, 0.0f, 36.09769821166992f, -2.547840118408203f, 0.0f, 34.39580154418945f, 0.0f, 0.0f, 40.12839889526367f, 0.0f, 0.0f, 40.12839889526367f, -4.004499912261963f, 1.7077000141143799f, 39.501399993896484f, -4.339280128479004f, 0.0f, 39.501399993896484f, -3.8207099437713623f, 1.6290700435638428f, 37.97869873046875f, -4.140230178833008f, 0.0f, 37.97869873046875f, -2.314160108566284f, 0.985912024974823f, 36.09769821166992f, -2.5080299377441406f, 0.0f, 36.09769821166992f, -2.3503799438476562f, 1.0000300407409668f, 34.39580154418945f, -2.547840118408203f, 0.0f, 34.39580154418945f, 0.0f, 0.0f, 40.12839889526367f, -3.0849199295043945f, 3.0849199295043945f, 39.501399993896484f, -2.943150043487549f, 2.943150043487549f, 37.97869873046875f, -1.782039999961853f, 1.782039999961853f, 36.09769821166992f, -1.8089599609375f, 1.8089599609375f, 34.39580154418945f, 0.0f, 0.0f, 40.12839889526367f, -1.7077000141143799f, 4.004499912261963f, 39.501399993896484f, -1.6290700435638428f, 3.8207099437713623f, 37.97869873046875f, -0.985912024974823f, 2.314160108566284f, 36.09769821166992f, -1.0000300407409668f, 2.3503799438476562f, 34.39580154418945f, 0.0f, 0.0f, 40.12839889526367f, 0.0f, 4.339280128479004f, 39.501399993896484f, 0.0f, 4.140230178833008f, 
37.97869873046875f, 0.0f, 2.5080299377441406f, 36.09769821166992f, 0.0f, 2.547840118408203f, 34.39580154418945f, 0.0f, 0.0f, 40.12839889526367f, 0.0f, 0.0f, 40.12839889526367f, 1.7077000141143799f, 4.004499912261963f, 39.501399993896484f, 0.0f, 4.339280128479004f, 39.501399993896484f, 1.6290700435638428f, 3.8207099437713623f, 37.97869873046875f, 0.0f, 4.140230178833008f, 37.97869873046875f, 0.985912024974823f, 2.314160108566284f, 36.09769821166992f, 0.0f, 2.5080299377441406f, 36.09769821166992f, 1.0000300407409668f, 2.3503799438476562f, 34.39580154418945f, 0.0f, 2.547840118408203f, 34.39580154418945f, 0.0f, 0.0f, 40.12839889526367f, 3.0849199295043945f, 3.0849199295043945f, 39.501399993896484f, 2.943150043487549f, 2.943150043487549f, 37.97869873046875f, 1.782039999961853f, 1.782039999961853f, 36.09769821166992f, 1.8089599609375f, 1.8089599609375f, 34.39580154418945f, 0.0f, 0.0f, 40.12839889526367f, 4.004499912261963f, 1.7077000141143799f, 39.501399993896484f, 3.8207099437713623f, 1.6290700435638428f, 37.97869873046875f, 2.314160108566284f, 0.985912024974823f, 36.09769821166992f, 2.3503799438476562f, 1.0000300407409668f, 34.39580154418945f, 0.0f, 0.0f, 40.12839889526367f, 4.339280128479004f, 0.0f, 39.501399993896484f, 4.140230178833008f, 0.0f, 37.97869873046875f, 2.5080299377441406f, 0.0f, 36.09769821166992f, 2.547840118408203f, 0.0f, 34.39580154418945f, 2.547840118408203f, 0.0f, 34.39580154418945f, 2.3503799438476562f, -1.0000300407409668f, 34.39580154418945f, 5.361800193786621f, -2.2813100814819336f, 33.261199951171875f, 5.812250137329102f, 0.0f, 33.261199951171875f, 9.695320129394531f, -4.125110149383545f, 32.484901428222656f, 10.50979995727539f, 0.0f, 32.484901428222656f, 13.58810043334961f, -5.781400203704834f, 31.708599090576172f, 14.729700088500977f, 0.0f, 31.708599090576172f, 15.27750015258789f, -6.5001702308654785f, 30.573999404907227f, 16.56089973449707f, 0.0f, 30.573999404907227f, 1.8089599609375f, -1.8089599609375f, 34.39580154418945f, 
4.126699924468994f, -4.126699924468994f, 33.261199951171875f, 7.461979866027832f, -7.461979866027832f, 32.484901428222656f, 10.458100318908691f, -10.458100318908691f, 31.708599090576172f, 11.758299827575684f, -11.758299827575684f, 30.573999404907227f, 1.0000300407409668f, -2.3503799438476562f, 34.39580154418945f, 2.2813100814819336f, -5.361800193786621f, 33.261199951171875f, 4.125110149383545f, -9.695320129394531f, 32.484901428222656f, 5.781400203704834f, -13.58810043334961f, 31.708599090576172f, 6.5001702308654785f, -15.27750015258789f, 30.573999404907227f, 0.0f, -2.547840118408203f, 34.39580154418945f, 0.0f, -5.812250137329102f, 33.261199951171875f, 0.0f, -10.50979995727539f, 32.484901428222656f, 0.0f, -14.729700088500977f, 31.708599090576172f, 0.0f, -16.56089973449707f, 30.573999404907227f, 0.0f, -2.547840118408203f, 34.39580154418945f, -1.0000300407409668f, -2.3503799438476562f, 34.39580154418945f, -2.2813100814819336f, -5.361800193786621f, 33.261199951171875f, 0.0f, -5.812250137329102f, 33.261199951171875f, -4.125110149383545f, -9.695320129394531f, 32.484901428222656f, 0.0f, -10.50979995727539f, 32.484901428222656f, -5.781400203704834f, -13.58810043334961f, 31.708599090576172f, 0.0f, -14.729700088500977f, 31.708599090576172f, -6.5001702308654785f, -15.27750015258789f, 30.573999404907227f, 0.0f, -16.56089973449707f, 30.573999404907227f, -1.8089599609375f, -1.8089599609375f, 34.39580154418945f, -4.126699924468994f, -4.126699924468994f, 33.261199951171875f, -7.461979866027832f, -7.461979866027832f, 32.484901428222656f, -10.458100318908691f, -10.458100318908691f, 31.708599090576172f, -11.758299827575684f, -11.758299827575684f, 30.573999404907227f, -2.3503799438476562f, -1.0000300407409668f, 34.39580154418945f, -5.361800193786621f, -2.2813100814819336f, 33.261199951171875f, -9.695320129394531f, -4.125110149383545f, 32.484901428222656f, -13.58810043334961f, -5.781400203704834f, 31.708599090576172f, -15.27750015258789f, -6.5001702308654785f, 30.573999404907227f, 
-2.547840118408203f, 0.0f, 34.39580154418945f, -5.812250137329102f, 0.0f, 33.261199951171875f, -10.50979995727539f, 0.0f, 32.484901428222656f, -14.729700088500977f, 0.0f, 31.708599090576172f, -16.56089973449707f, 0.0f, 30.573999404907227f, -2.547840118408203f, 0.0f, 34.39580154418945f, -2.3503799438476562f, 1.0000300407409668f, 34.39580154418945f, -5.361800193786621f, 2.2813100814819336f, 33.261199951171875f, -5.812250137329102f, 0.0f, 33.261199951171875f, -9.695320129394531f, 4.125110149383545f, 32.484901428222656f, -10.50979995727539f, 0.0f, 32.484901428222656f, -13.58810043334961f, 5.781400203704834f, 31.708599090576172f, -14.729700088500977f, 0.0f, 31.708599090576172f, -15.27750015258789f, 6.5001702308654785f, 30.573999404907227f, -16.56089973449707f, 0.0f, 30.573999404907227f, -1.8089599609375f, 1.8089599609375f, 34.39580154418945f, -4.126699924468994f, 4.126699924468994f, 33.261199951171875f, -7.461979866027832f, 7.461979866027832f, 32.484901428222656f, -10.458100318908691f, 10.458100318908691f, 31.708599090576172f, -11.758299827575684f, 11.758299827575684f, 30.573999404907227f, -1.0000300407409668f, 2.3503799438476562f, 34.39580154418945f, -2.2813100814819336f, 5.361800193786621f, 33.261199951171875f, -4.125110149383545f, 9.695320129394531f, 32.484901428222656f, -5.781400203704834f, 13.58810043334961f, 31.708599090576172f, -6.5001702308654785f, 15.27750015258789f, 30.573999404907227f, 0.0f, 2.547840118408203f, 34.39580154418945f, 0.0f, 5.812250137329102f, 33.261199951171875f, 0.0f, 10.50979995727539f, 32.484901428222656f, 0.0f, 14.729700088500977f, 31.708599090576172f, 0.0f, 16.56089973449707f, 30.573999404907227f, 0.0f, 2.547840118408203f, 34.39580154418945f, 1.0000300407409668f, 2.3503799438476562f, 34.39580154418945f, 2.2813100814819336f, 5.361800193786621f, 33.261199951171875f, 0.0f, 5.812250137329102f, 33.261199951171875f, 4.125110149383545f, 9.695320129394531f, 32.484901428222656f, 0.0f, 10.50979995727539f, 32.484901428222656f, 5.781400203704834f, 
13.58810043334961f, 31.708599090576172f, 0.0f, 14.729700088500977f, 31.708599090576172f, 6.5001702308654785f, 15.27750015258789f, 30.573999404907227f, 0.0f, 16.56089973449707f, 30.573999404907227f, 1.8089599609375f, 1.8089599609375f, 34.39580154418945f, 4.126699924468994f, 4.126699924468994f, 33.261199951171875f, 7.461979866027832f, 7.461979866027832f, 32.484901428222656f, 10.458100318908691f, 10.458100318908691f, 31.708599090576172f, 11.758299827575684f, 11.758299827575684f, 30.573999404907227f, 2.3503799438476562f, 1.0000300407409668f, 34.39580154418945f, 5.361800193786621f, 2.2813100814819336f, 33.261199951171875f, 9.695320129394531f, 4.125110149383545f, 32.484901428222656f, 13.58810043334961f, 5.781400203704834f, 31.708599090576172f, 15.27750015258789f, 6.5001702308654785f, 30.573999404907227f, 2.547840118408203f, 0.0f, 34.39580154418945f, 5.812250137329102f, 0.0f, 33.261199951171875f, 10.50979995727539f, 0.0f, 32.484901428222656f, 14.729700088500977f, 0.0f, 31.708599090576172f, 16.56089973449707f, 0.0f, 30.573999404907227f, }; static const float teapot_normals[] = { -0.9667419791221619, 0, -0.25575199723243713, -0.8930140137672424, 0.3698819875717163, -0.2563450038433075, -0.8934370279312134, 0.36910200119018555, 0.2559970021247864, -0.9668239951133728, 0, 0.2554430067539215, -0.0838799998164177, 0.03550700098276138, 0.9958429932594299, -0.09205400198698044, 0, 0.9957540035247803, 0.629721999168396, -0.2604379951953888, 0.7318620085716248, 0.6820489764213562, 0, 0.7313070297241211, 0.803725004196167, -0.3325839936733246, 0.4933690130710602, 0.8703010082244873, 0, 0.4925200045108795, -0.6834070086479187, 0.6834070086479187, -0.2567310035228729, -0.6835309863090515, 0.6835309863090515, 0.25606799125671387, -0.06492599844932556, 0.06492500007152557, 0.9957759976387024, 0.48139700293540955, -0.48139700293540955, 0.7324709892272949, 0.6148040294647217, -0.6148040294647217, 0.4939970076084137, -0.3698819875717163, 0.8930140137672424, -0.2563450038433075, 
-0.36910200119018555, 0.8934370279312134, 0.2559959888458252, -0.03550700098276138, 0.0838790014386177, 0.9958429932594299, 0.26043900847435, -0.6297230124473572, 0.7318609952926636, 0.3325839936733246, -0.803725004196167, 0.4933690130710602, -0.002848000032827258, 0.9661769866943359, -0.25786298513412476, -0.001921999966725707, 0.9670090079307556, 0.2547360062599182, -0.00026500000967644155, 0.09227199852466583, 0.9957339763641357, 0.00002300000051036477, -0.6820600032806396, 0.7312960028648376, 0, -0.8703010082244873, 0.4925200045108795, -0.002848000032827258, 0.9661769866943359, -0.25786298513412476, 0.37905800342559814, 0.852770984172821, -0.35929998755455017, 0.37711000442504883, 0.9140909910202026, 0.14908500015735626, -0.001921999966725707, 0.9670090079307556, 0.2547360062599182, 0.0275030005723238, 0.12255500257015228, 0.9920809864997864, -0.00026500000967644155, 0.09227199852466583, 0.9957339763641357, -0.26100900769233704, -0.6353650093078613, 0.7267630100250244, 0.00002300000051036477, -0.6820600032806396, 0.7312960028648376, -0.33248499035835266, -0.8042709827423096, 0.4925459921360016, 0, -0.8703010082244873, 0.4925200045108795, 0.6635469794273376, 0.6252639889717102, -0.4107919931411743, 0.712664008140564, 0.6976209878921509, 0.07372400164604187, 0.09972699731588364, 0.12198299914598465, 0.98750901222229, -0.4873189926147461, -0.4885669946670532, 0.7237560153007507, -0.6152420043945312, -0.6154839992523193, 0.4926010072231293, 0.8800280094146729, 0.3387089967727661, -0.3329069912433624, 0.9172769784927368, 0.36149299144744873, 0.16711199283599854, 0.11358699947595596, 0.04806999862194061, 0.9923650026321411, -0.6341490149497986, -0.2618879973888397, 0.7275090217590332, -0.8041260242462158, -0.33270499110221863, 0.49263399839401245, 0.9666900038719177, -0.010453999973833561, -0.2557379901409149, 0.967441976070404, -0.00810300000011921, 0.25296199321746826, 0.0934389978647232, -0.0012799999676644802, 0.9956240057945251, -0.6821659803390503, 
0.0003429999924264848, 0.7311969995498657, -0.8703219890594482, 0.00005400000009103678, 0.492482990026474, 0.9666900038719177, -0.010453999973833561, -0.2557379901409149, 0.8930140137672424, -0.3698819875717163, -0.2563450038433075, 0.8934370279312134, -0.36910200119018555, 0.2559970021247864, 0.967441976070404, -0.00810300000011921, 0.25296199321746826, 0.0838799998164177, -0.03550700098276138, 0.9958429932594299, 0.0934389978647232, -0.0012799999676644802, 0.9956240057945251, -0.629721999168396, 0.2604379951953888, 0.7318620085716248, -0.6821659803390503, 0.0003429999924264848, 0.7311969995498657, -0.803725004196167, 0.3325839936733246, 0.4933690130710602, -0.8703219890594482, 0.00005400000009103678, 0.492482990026474, 0.6834070086479187, -0.6834070086479187, -0.2567310035228729, 0.6835309863090515, -0.6835309863090515, 0.25606799125671387, 0.06492599844932556, -0.06492500007152557, 0.9957759976387024, -0.48139700293540955, 0.48139700293540955, 0.7324709892272949, -0.6148040294647217, 0.6148040294647217, 0.4939970076084137, 0.3698819875717163, -0.8930140137672424, -0.2563450038433075, 0.36910200119018555, -0.8934370279312134, 0.2559959888458252, 0.03550700098276138, -0.0838790014386177, 0.9958429932594299, -0.26043900847435, 0.6297230124473572, 0.7318609952926636, -0.3325839936733246, 0.803725004196167, 0.4933690130710602, 0, -0.9667419791221619, -0.25575199723243713, 0, -0.9668239951133728, 0.2554430067539215, 0, -0.09205400198698044, 0.9957540035247803, 0, 0.6820489764213562, 0.7313070297241211, 0, 0.8703010082244873, 0.4925200045108795, 0, -0.9667419791221619, -0.25575199723243713, -0.3698819875717163, -0.8930140137672424, -0.2563450038433075, -0.36910200119018555, -0.8934370279312134, 0.2559970021247864, 0, -0.9668239951133728, 0.2554430067539215, -0.03550700098276138, -0.0838799998164177, 0.9958429932594299, 0, -0.09205400198698044, 0.9957540035247803, 0.2604379951953888, 0.629721999168396, 0.7318620085716248, 0, 0.6820489764213562, 0.7313070297241211, 
0.3325839936733246, 0.803725004196167, 0.4933690130710602, 0, 0.8703010082244873, 0.4925200045108795, -0.6834070086479187, -0.6834070086479187, -0.2567310035228729, -0.6835309863090515, -0.6835309863090515, 0.25606799125671387, -0.06492500007152557, -0.06492599844932556, 0.9957759976387024, 0.48139700293540955, 0.48139700293540955, 0.7324709892272949, 0.6148040294647217, 0.6148040294647217, 0.4939970076084137, -0.8930140137672424, -0.3698819875717163, -0.2563450038433075, -0.8934370279312134, -0.36910200119018555, 0.2559959888458252, -0.0838790014386177, -0.03550700098276138, 0.9958429932594299, 0.6297230124473572, 0.26043900847435, 0.7318609952926636, 0.803725004196167, 0.3325839936733246, 0.4933690130710602, -0.9667419791221619, 0, -0.25575199723243713, -0.9668239951133728, 0, 0.2554430067539215, -0.09205400198698044, 0, 0.9957540035247803, 0.6820489764213562, 0, 0.7313070297241211, 0.8703010082244873, 0, 0.4925200045108795, 0.8703010082244873, 0, 0.4925200045108795, 0.803725004196167, -0.3325839936733246, 0.4933690130710602, 0.8454390168190002, -0.34983500838279724, 0.40354499220848083, 0.9153209924697876, 0, 0.4027250111103058, 0.8699960112571716, -0.36004599928855896, 0.33685898780822754, 0.9418079853057861, 0, 0.33615100383758545, 0.9041929841041565, -0.37428000569343567, 0.20579099655151367, 0.9786900281906128, 0, 0.20534199476242065, 0.9218789935112, -0.38175201416015625, -0.06636899709701538, 0.9978039860725403, 0, -0.06623899936676025, 0.6148040294647217, -0.6148040294647217, 0.4939970076084137, 0.6468020081520081, -0.6468020081520081, 0.40409600734710693, 0.6656550168991089, -0.6656550168991089, 0.3373520076274872, 0.6919230222702026, -0.6919230222702026, 0.20611999928951263, 0.7055429816246033, -0.7055429816246033, -0.06647899746894836, 0.3325839936733246, -0.803725004196167, 0.4933690130710602, 0.34983500838279724, -0.8454390168190002, 0.40354499220848083, 0.36004701256752014, -0.8699960112571716, 0.33685800433158875, 0.37428000569343567, 
-0.9041929841041565, 0.20579099655151367, 0.38175201416015625, -0.9218789935112, -0.06636899709701538, 0, -0.8703010082244873, 0.4925200045108795, 0, -0.9153209924697876, 0.4027250111103058, 0, -0.9418079853057861, 0.33615100383758545, 0, -0.9786900281906128, 0.20534199476242065, 0, -0.9978039860725403, -0.06623899936676025, 0, -0.8703010082244873, 0.4925200045108795, -0.33248499035835266, -0.8042709827423096, 0.4925459921360016, -0.34983500838279724, -0.8454390168190002, 0.40354499220848083, 0, -0.9153209924697876, 0.4027250111103058, -0.36004599928855896, -0.8699960112571716, 0.33685898780822754, 0, -0.9418079853057861, 0.33615100383758545, -0.37428000569343567, -0.9041929841041565, 0.20579099655151367, 0, -0.9786900281906128, 0.20534199476242065, -0.38175201416015625, -0.9218789935112, -0.06636899709701538, 0, -0.9978039860725403, -0.06623899936676025, -0.6152420043945312, -0.6154839992523193, 0.4926010072231293, -0.6468020081520081, -0.6468020081520081, 0.40409600734710693, -0.6656550168991089, -0.6656550168991089, 0.3373520076274872, -0.6919230222702026, -0.6919230222702026, 0.20611999928951263, -0.7055429816246033, -0.7055429816246033, -0.06647899746894836, -0.8041260242462158, -0.33270499110221863, 0.49263399839401245, -0.8454390168190002, -0.34983500838279724, 0.40354499220848083, -0.8699960112571716, -0.36004701256752014, 0.33685800433158875, -0.9041929841041565, -0.37428000569343567, 0.20579099655151367, -0.9218789935112, -0.38175201416015625, -0.06636899709701538, -0.8703219890594482, 0.00005400000009103678, 0.492482990026474, -0.9153209924697876, 0, 0.4027250111103058, -0.9418079853057861, 0, 0.33615100383758545, -0.9786900281906128, 0, 0.20534199476242065, -0.9978039860725403, 0, -0.06623899936676025, -0.8703219890594482, 0.00005400000009103678, 0.492482990026474, -0.803725004196167, 0.3325839936733246, 0.4933690130710602, -0.8454390168190002, 0.34983500838279724, 0.40354499220848083, -0.9153209924697876, 0, 0.4027250111103058, -0.8699960112571716, 
0.36004599928855896, 0.33685898780822754, -0.9418079853057861, 0, 0.33615100383758545, -0.9041929841041565, 0.37428000569343567, 0.20579099655151367, -0.9786900281906128, 0, 0.20534199476242065, -0.9218789935112, 0.38175201416015625, -0.06636899709701538, -0.9978039860725403, 0, -0.06623899936676025, -0.6148040294647217, 0.6148040294647217, 0.4939970076084137, -0.6468020081520081, 0.6468020081520081, 0.40409600734710693, -0.6656550168991089, 0.6656550168991089, 0.3373520076274872, -0.6919230222702026, 0.6919230222702026, 0.20611999928951263, -0.7055429816246033, 0.7055429816246033, -0.06647899746894836, -0.3325839936733246, 0.803725004196167, 0.4933690130710602, -0.34983500838279724, 0.8454390168190002, 0.40354499220848083, -0.36004701256752014, 0.8699960112571716, 0.33685800433158875, -0.37428000569343567, 0.9041929841041565, 0.20579099655151367, -0.38175201416015625, 0.9218789935112, -0.06636899709701538, 0, 0.8703010082244873, 0.4925200045108795, 0, 0.9153209924697876, 0.4027250111103058, 0, 0.9418079853057861, 0.33615100383758545, 0, 0.9786900281906128, 0.20534199476242065, 0, 0.9978039860725403, -0.06623899936676025, 0, 0.8703010082244873, 0.4925200045108795, 0.3325839936733246, 0.803725004196167, 0.4933690130710602, 0.34983500838279724, 0.8454390168190002, 0.40354499220848083, 0, 0.9153209924697876, 0.4027250111103058, 0.36004599928855896, 0.8699960112571716, 0.33685898780822754, 0, 0.9418079853057861, 0.33615100383758545, 0.37428000569343567, 0.9041929841041565, 0.20579099655151367, 0, 0.9786900281906128, 0.20534199476242065, 0.38175201416015625, 0.9218789935112, -0.06636899709701538, 0, 0.9978039860725403, -0.06623899936676025, 0.6148040294647217, 0.6148040294647217, 0.4939970076084137, 0.6468020081520081, 0.6468020081520081, 0.40409600734710693, 0.6656550168991089, 0.6656550168991089, 0.3373520076274872, 0.6919230222702026, 0.6919230222702026, 0.20611999928951263, 0.7055429816246033, 0.7055429816246033, -0.06647899746894836, 0.803725004196167, 
0.3325839936733246, 0.4933690130710602, 0.8454390168190002, 0.34983500838279724, 0.40354499220848083, 0.8699960112571716, 0.36004701256752014, 0.33685800433158875, 0.9041929841041565, 0.37428000569343567, 0.20579099655151367, 0.9218789935112, 0.38175201416015625, -0.06636899709701538, 0.8703010082244873, 0, 0.4925200045108795, 0.9153209924697876, 0, 0.4027250111103058, 0.9418079853057861, 0, 0.33615100383758545, 0.9786900281906128, 0, 0.20534199476242065, 0.9978039860725403, 0, -0.06623899936676025, 0.9978039860725403, 0, -0.06623899936676025, 0.9218789935112, -0.38175201416015625, -0.06636899709701538, 0.8314369916915894, -0.3441790044307709, -0.4361799955368042, 0.9001820087432861, 0, -0.4355129897594452, 0.6735119819641113, -0.2785939872264862, -0.6846650242805481, 0.7296109795570374, 0, -0.6838629841804504, 0.6403989791870117, -0.26487401127815247, -0.7209240198135376, 0.6939510107040405, 0, -0.7200220227241516, 0.7329490184783936, -0.303166002035141, -0.6089959740638733, 0.7939500212669373, 0, -0.6079840064048767, 0.7055429816246033, -0.7055429816246033, -0.06647899746894836, 0.6360920071601868, -0.6360920071601868, -0.4367780089378357, 0.5149649977684021, -0.5149649977684021, -0.6852890253067017, 0.48965099453926086, -0.48965099453926086, -0.7214459776878357, 0.5605549812316895, -0.5605549812316895, -0.6095539927482605, 0.38175201416015625, -0.9218789935112, -0.06636899709701538, 0.3441790044307709, -0.8314369916915894, -0.4361799955368042, 0.2785939872264862, -0.6735119819641113, -0.6846650242805481, 0.26487401127815247, -0.6403989791870117, -0.7209240198135376, 0.303166002035141, -0.7329490184783936, -0.6089959740638733, 0, -0.9978039860725403, -0.06623899936676025, 0, -0.9001820087432861, -0.4355129897594452, 0, -0.7296109795570374, -0.6838629841804504, 0, -0.6939510107040405, -0.7200220227241516, 0, -0.7939500212669373, -0.6079840064048767, 0, -0.9978039860725403, -0.06623899936676025, -0.38175201416015625, -0.9218789935112, -0.06636899709701538, 
-0.3441790044307709, -0.8314369916915894, -0.4361799955368042, 0, -0.9001820087432861, -0.4355129897594452, -0.2785939872264862, -0.6735119819641113, -0.6846650242805481, 0, -0.7296109795570374, -0.6838629841804504, -0.26487401127815247, -0.6403989791870117, -0.7209240198135376, 0, -0.6939510107040405, -0.7200220227241516, -0.303166002035141, -0.7329490184783936, -0.6089959740638733, 0, -0.7939500212669373, -0.6079840064048767, -0.7055429816246033, -0.7055429816246033, -0.06647899746894836, -0.6360920071601868, -0.6360920071601868, -0.4367780089378357, -0.5149649977684021, -0.5149649977684021, -0.6852890253067017, -0.48965099453926086, -0.48965099453926086, -0.7214459776878357, -0.5605549812316895, -0.5605549812316895, -0.6095539927482605, -0.9218789935112, -0.38175201416015625, -0.06636899709701538, -0.8314369916915894, -0.3441790044307709, -0.4361799955368042, -0.6735119819641113, -0.2785939872264862, -0.6846650242805481, -0.6403989791870117, -0.26487401127815247, -0.7209240198135376, -0.7329490184783936, -0.303166002035141, -0.6089959740638733, -0.9978039860725403, 0, -0.06623899936676025, -0.9001820087432861, 0, -0.4355129897594452, -0.7296109795570374, 0, -0.6838629841804504, -0.6939510107040405, 0, -0.7200220227241516, -0.7939500212669373, 0, -0.6079840064048767, -0.9978039860725403, 0, -0.06623899936676025, -0.9218789935112, 0.38175201416015625, -0.06636899709701538, -0.8314369916915894, 0.3441790044307709, -0.4361799955368042, -0.9001820087432861, 0, -0.4355129897594452, -0.6735119819641113, 0.2785939872264862, -0.6846650242805481, -0.7296109795570374, 0, -0.6838629841804504, -0.6403989791870117, 0.26487401127815247, -0.7209240198135376, -0.6939510107040405, 0, -0.7200220227241516, -0.7329490184783936, 0.303166002035141, -0.6089959740638733, -0.7939500212669373, 0, -0.6079840064048767, -0.7055429816246033, 0.7055429816246033, -0.06647899746894836, -0.6360920071601868, 0.6360920071601868, -0.4367780089378357, -0.5149649977684021, 0.5149649977684021, 
-0.6852890253067017, -0.48965099453926086, 0.48965099453926086, -0.7214459776878357, -0.5605549812316895, 0.5605549812316895, -0.6095539927482605, -0.38175201416015625, 0.9218789935112, -0.06636899709701538, -0.3441790044307709, 0.8314369916915894, -0.4361799955368042, -0.2785939872264862, 0.6735119819641113, -0.6846650242805481, -0.26487401127815247, 0.6403989791870117, -0.7209240198135376, -0.303166002035141, 0.7329490184783936, -0.6089959740638733, 0, 0.9978039860725403, -0.06623899936676025, 0, 0.9001820087432861, -0.4355129897594452, 0, 0.7296109795570374, -0.6838629841804504, 0, 0.6939510107040405, -0.7200220227241516, 0, 0.7939500212669373, -0.6079840064048767, 0, 0.9978039860725403, -0.06623899936676025, 0.38175201416015625, 0.9218789935112, -0.06636899709701538, 0.3441790044307709, 0.8314369916915894, -0.4361799955368042, 0, 0.9001820087432861, -0.4355129897594452, 0.2785939872264862, 0.6735119819641113, -0.6846650242805481, 0, 0.7296109795570374, -0.6838629841804504, 0.26487401127815247, 0.6403989791870117, -0.7209240198135376, 0, 0.6939510107040405, -0.7200220227241516, 0.303166002035141, 0.7329490184783936, -0.6089959740638733, 0, 0.7939500212669373, -0.6079840064048767, 0.7055429816246033, 0.7055429816246033, -0.06647899746894836, 0.6360920071601868, 0.6360920071601868, -0.4367780089378357, 0.5149649977684021, 0.5149649977684021, -0.6852890253067017, 0.48965099453926086, 0.48965099453926086, -0.7214459776878357, 0.5605549812316895, 0.5605549812316895, -0.6095539927482605, 0.9218789935112, 0.38175201416015625, -0.06636899709701538, 0.8314369916915894, 0.3441790044307709, -0.4361799955368042, 0.6735119819641113, 0.2785939872264862, -0.6846650242805481, 0.6403989791870117, 0.26487401127815247, -0.7209240198135376, 0.7329490184783936, 0.303166002035141, -0.6089959740638733, 0.9978039860725403, 0, -0.06623899936676025, 0.9001820087432861, 0, -0.4355129897594452, 0.7296109795570374, 0, -0.6838629841804504, 0.6939510107040405, 0, -0.7200220227241516, 
0.7939500212669373, 0, -0.6079840064048767, 0.7939500212669373, 0, -0.6079840064048767, 0.7329490184783936, -0.303166002035141, -0.6089959740638733, 0.576229989528656, -0.23821599781513214, -0.7818009853363037, 0.6238600015640259, 0, -0.7815359830856323, 0.16362899541854858, -0.06752700358629227, -0.9842079877853394, 0.17729100584983826, 0, -0.984158992767334, 0.04542100057005882, -0.018735000863671303, -0.9987919926643372, 0.04920699819922447, 0, -0.9987890124320984, 0, 0, -1, 0, 0, -1, 0.5605549812316895, -0.5605549812316895, -0.6095539927482605, 0.44041600823402405, -0.44041600823402405, -0.7823479771614075, 0.12490200251340866, -0.12490200251340866, -0.9842759966850281, 0.034662000834941864, -0.034662000834941864, -0.9987980127334595, 0, 0, -1, 0.303166002035141, -0.7329490184783936, -0.6089959740638733, 0.23821599781513214, -0.576229989528656, -0.7818009853363037, 0.06752700358629227, -0.16362899541854858, -0.9842079877853394, 0.018735000863671303, -0.04542100057005882, -0.9987919926643372, 0, 0, -1, 0, -0.7939500212669373, -0.6079840064048767, 0, -0.6238600015640259, -0.7815359830856323, 0, -0.17729100584983826, -0.984158992767334, 0, -0.04920699819922447, -0.9987890124320984, 0, 0, -1, 0, -0.7939500212669373, -0.6079840064048767, -0.303166002035141, -0.7329490184783936, -0.6089959740638733, -0.23821599781513214, -0.576229989528656, -0.7818009853363037, 0, -0.6238600015640259, -0.7815359830856323, -0.06752700358629227, -0.16362899541854858, -0.9842079877853394, 0, -0.17729100584983826, -0.984158992767334, -0.018735000863671303, -0.04542100057005882, -0.9987919926643372, 0, -0.04920699819922447, -0.9987890124320984, 0, 0, -1, 0, 0, -1, -0.5605549812316895, -0.5605549812316895, -0.6095539927482605, -0.44041600823402405, -0.44041600823402405, -0.7823479771614075, -0.12490200251340866, -0.12490200251340866, -0.9842759966850281, -0.034662000834941864, -0.034662000834941864, -0.9987980127334595, 0, 0, -1, -0.7329490184783936, -0.303166002035141, 
-0.6089959740638733, -0.576229989528656, -0.23821599781513214, -0.7818009853363037, -0.16362899541854858, -0.06752700358629227, -0.9842079877853394, -0.04542100057005882, -0.018735000863671303, -0.9987919926643372, 0, 0, -1, -0.7939500212669373, 0, -0.6079840064048767, -0.6238600015640259, 0, -0.7815359830856323, -0.17729100584983826, 0, -0.984158992767334, -0.04920699819922447, 0, -0.9987890124320984, 0, 0, -1, -0.7939500212669373, 0, -0.6079840064048767, -0.7329490184783936, 0.303166002035141, -0.6089959740638733, -0.576229989528656, 0.23821599781513214, -0.7818009853363037, -0.6238600015640259, 0, -0.7815359830856323, -0.16362899541854858, 0.06752700358629227, -0.9842079877853394, -0.17729100584983826, 0, -0.984158992767334, -0.04542100057005882, 0.018735000863671303, -0.9987919926643372, -0.04920699819922447, 0, -0.9987890124320984, 0, 0, -1, 0, 0, -1, -0.5605549812316895, 0.5605549812316895, -0.6095539927482605, -0.44041600823402405, 0.44041600823402405, -0.7823479771614075, -0.12490200251340866, 0.12490200251340866, -0.9842759966850281, -0.034662000834941864, 0.034662000834941864, -0.9987980127334595, 0, 0, -1, -0.303166002035141, 0.7329490184783936, -0.6089959740638733, -0.23821599781513214, 0.576229989528656, -0.7818009853363037, -0.06752700358629227, 0.16362899541854858, -0.9842079877853394, -0.018735000863671303, 0.04542100057005882, -0.9987919926643372, 0, 0, -1, 0, 0.7939500212669373, -0.6079840064048767, 0, 0.6238600015640259, -0.7815359830856323, 0, 0.17729100584983826, -0.984158992767334, 0, 0.04920699819922447, -0.9987890124320984, 0, 0, -1, 0, 0.7939500212669373, -0.6079840064048767, 0.303166002035141, 0.7329490184783936, -0.6089959740638733, 0.23821599781513214, 0.576229989528656, -0.7818009853363037, 0, 0.6238600015640259, -0.7815359830856323, 0.06752700358629227, 0.16362899541854858, -0.9842079877853394, 0, 0.17729100584983826, -0.984158992767334, 0.018735000863671303, 0.04542100057005882, -0.9987919926643372, 0, 0.04920699819922447, 
-0.9987890124320984, 0, 0, -1, 0, 0, -1, 0.5605549812316895, 0.5605549812316895, -0.6095539927482605, 0.44041600823402405, 0.44041600823402405, -0.7823479771614075, 0.12490200251340866, 0.12490200251340866, -0.9842759966850281, 0.034662000834941864, 0.034662000834941864, -0.9987980127334595, 0, 0, -1, 0.7329490184783936, 0.303166002035141, -0.6089959740638733, 0.576229989528656, 0.23821599781513214, -0.7818009853363037, 0.16362899541854858, 0.06752700358629227, -0.9842079877853394, 0.04542100057005882, 0.018735000863671303, -0.9987919926643372, 0, 0, -1, 0.7939500212669373, 0, -0.6079840064048767, 0.6238600015640259, 0, -0.7815359830856323, 0.17729100584983826, 0, -0.984158992767334, 0.04920699819922447, 0, -0.9987890124320984, 0, 0, -1, 0.007784999907016754, 0.00021499999274965376, -0.999970018863678, 0.007038000039756298, -0.5829259753227234, -0.8124949932098389, 0.0361270010471344, -0.5456140041351318, -0.837257981300354, 0.03913800045847893, 0.0009879999561235309, -0.9992330074310303, 0.16184599697589874, -0.5630490183830261, -0.8104209899902344, 0.17951199412345886, 0.0043680001981556416, -0.9837459921836853, 0.4823650121688843, -0.6427459716796875, -0.5951480269432068, 0.6122999787330627, 0.010459000244736671, -0.790556013584137, 0.7387199997901917, -0.6641989946365356, -0.11459299921989441, 0.9861519932746887, 0.006668999791145325, -0.16570700705051422, -0.0019079999765381217, -0.9867690205574036, 0.1621209979057312, 0.002761000068858266, -0.9998499751091003, 0.017105000093579292, 0.010532000102102757, -0.9972469806671143, 0.07339800149202347, -0.06604000180959702, -0.9893029928207397, 0.13006900250911713, -0.09442699700593948, -0.9953929781913757, 0.016594000160694122, -0.009201999753713608, -0.4902929961681366, 0.8715090155601501, -0.04860600084066391, -0.5394579768180847, 0.8406090140342712, -0.22329799830913544, -0.5527390241622925, 0.8028810024261475, -0.5963649749755859, -0.5751349925994873, 0.5599709749221802, -0.8033369779586792, -0.5916029810905457, 
0.06823500245809555, -0.01056000031530857, -0.00010299999848939478, 0.9999439716339111, -0.05879800021648407, -0.0007089999853633344, 0.9982699751853943, -0.28071001172065735, -0.0032679999712854624, 0.9597870111465454, -0.7497230172157288, -0.004267000127583742, 0.6617379784584045, -0.9973509907722473, -0.0020580000709742308, 0.07271400094032288, -0.01056000031530857, -0.00010299999848939478, 0.9999439716339111, -0.008791999891400337, 0.49032899737358093, 0.8714929819107056, -0.04649300128221512, 0.5387560129165649, 0.8411779999732971, -0.05879800021648407, -0.0007089999853633344, 0.9982699751853943, -0.21790899336338043, 0.5491610169410706, 0.8068069815635681, -0.28071001172065735, -0.0032679999712854624, 0.9597870111465454, -0.5972909927368164, 0.5741199851036072, 0.560027003288269, -0.7497230172157288, -0.004267000127583742, 0.6617379784584045, -0.8040000200271606, 0.5912910103797913, 0.0629120022058487, -0.9973509907722473, -0.0020580000709742308, 0.07271400094032288, -0.0018050000071525574, 0.986840009689331, 0.16169099509716034, 0.0020310000982135534, 0.999891996383667, 0.014553000219166279, 0.009215000085532665, 0.9981520175933838, 0.060068998485803604, -0.059335000813007355, 0.9917230010032654, 0.11386600136756897, -0.08690100163221359, 0.9961410164833069, 0.01228999998420477, 0.006417000200599432, 0.5830950140953064, -0.812379002571106, 0.03378299996256828, 0.5453730225563049, -0.8375130295753479, 0.1571130007505417, 0.562188982963562, -0.8119469881057739, 0.4844059944152832, 0.6465290188789368, -0.5893650054931641, 0.7388700246810913, 0.6661880016326904, -0.10131999850273132, 0.007784999907016754, 0.00021499999274965376, -0.999970018863678, 0.03913800045847893, 0.0009879999561235309, -0.9992330074310303, 0.17951199412345886, 0.0043680001981556416, -0.9837459921836853, 0.6122999787330627, 0.010459000244736671, -0.790556013584137, 0.9861519932746887, 0.006668999791145325, -0.16570700705051422, 0.9861519932746887, 0.006668999791145325, -0.16570700705051422, 
0.7387199997901917, -0.6641989946365356, -0.11459299921989441, 0.7256090044975281, -0.6373609900474548, 0.25935098528862, 0.94651198387146, 0.0033569999504834414, 0.3226499855518341, 0.6459450125694275, -0.6077200174331665, 0.46198800206184387, 0.8258299827575684, 0.007451999932527542, 0.5638700127601624, 0.5316150188446045, -0.5586140155792236, 0.6366599798202515, 0.6500110030174255, 0.006936000194400549, 0.759893000125885, 0.4249640107154846, -0.5955389738082886, 0.6817179918289185, 0.5324289798736572, 0.005243999883532524, 0.8464580178260803, -0.09442699700593948, -0.9953929781913757, 0.016594000160694122, -0.04956100136041641, -0.9985759854316711, -0.01975500024855137, -0.03781700134277344, -0.998649001121521, -0.035624999552965164, -0.0379129983484745, -0.9986140131950378, -0.03651199862360954, -0.1688539981842041, -0.9395300149917603, -0.2979460060596466, -0.8033369779586792, -0.5916029810905457, 0.06823500245809555, -0.7423409819602966, -0.5995240211486816, -0.2991659939289093, -0.6196020245552063, -0.5795029997825623, -0.5294060111045837, -0.483707994222641, -0.5438370108604431, -0.6857600212097168, -0.44529199600219727, -0.4131770133972168, -0.7943549752235413, -0.9973509907722473, -0.0020580000709742308, 0.07271400094032288, -0.9265130162239075, -0.0019950000569224358, -0.3762570023536682, -0.7539200186729431, -0.004317000042647123, -0.6569520235061646, -0.5662239789962769, -0.003461000043898821, -0.8242440223693848, -0.4818040132522583, -0.0018500000005587935, -0.8762770295143127, -0.9973509907722473, -0.0020580000709742308, 0.07271400094032288, -0.8040000200271606, 0.5912910103797913, 0.0629120022058487, -0.7446749806404114, 0.5989770293235779, -0.29442399740219116, -0.9265130162239075, -0.0019950000569224358, -0.3762570023536682, -0.6219490170478821, 0.5781649947166443, -0.5281140208244324, -0.7539200186729431, -0.004317000042647123, -0.6569520235061646, -0.48117101192474365, 0.5428280234336853, -0.6883400082588196, -0.5662239789962769, 
-0.003461000043898821, -0.8242440223693848, -0.43805500864982605, 0.41574400663375854, -0.7970349788665771, -0.4818040132522583, -0.0018500000005587935, -0.8762770295143127, -0.08690100163221359, 0.9961410164833069, 0.01228999998420477, -0.04433799907565117, 0.9988710284233093, -0.017055999487638474, -0.026177000254392624, 0.9992600083351135, -0.02816700004041195, -0.025293000042438507, 0.9992780089378357, -0.028332000598311424, -0.15748199820518494, 0.9441670179367065, -0.28939300775527954, 0.7388700246810913, 0.6661880016326904, -0.10131999850273132, 0.7282440066337585, 0.63714200258255, 0.25240999460220337, 0.6470540165901184, 0.6082550287246704, 0.4597249925136566, 0.5229939818382263, 0.5621700286865234, 0.6406570076942444, 0.4099780023097992, 0.6046689748764038, 0.6828569769859314, 0.9861519932746887, 0.006668999791145325, -0.16570700705051422, 0.94651198387146, 0.0033569999504834414, 0.3226499855518341, 0.8258299827575684, 0.007451999932527542, 0.5638700127601624, 0.6500110030174255, 0.006936000194400549, 0.759893000125885, 0.5324289798736572, 0.005243999883532524, 0.8464580178260803, -0.230786994099617, 0.006523000076413155, 0.9729819893836975, -0.15287800133228302, -0.7101899981498718, 0.6872109770774841, -0.31672099232673645, -0.7021129727363586, 0.6377500295639038, -0.5489360094070435, 0.0015109999803826213, 0.8358629941940308, -0.6010670065879822, -0.645330011844635, 0.471451997756958, -0.8756710290908813, -0.009891999885439873, 0.4828070104122162, -0.635890007019043, -0.629800021648407, 0.4460900127887726, -0.8775539994239807, -0.01909100078046322, 0.47909700870513916, -0.4357450008392334, -0.670009970664978, 0.6010090112686157, -0.6961889863014221, -0.02449600026011467, 0.7174400091171265, 0.11111299693584442, -0.9901599884033203, -0.08506900072097778, 0.22330999374389648, -0.9747260212898254, 0.006539999973028898, 0.19009700417518616, -0.9694579839706421, 0.15496399998664856, 0.005270000081509352, -0.9818699955940247, 0.18948200345039368, 
-0.011750999838113785, -0.9690240025520325, 0.24668699502944946, 0.3439059853553772, -0.5994120240211487, -0.7227950096130371, 0.5724899768829346, -0.5916270017623901, -0.5676559805870056, 0.7874360084533691, -0.5605109930038452, -0.2564600110054016, 0.6470969915390015, -0.6981409788131714, -0.3063740134239197, 0.4275279939174652, -0.7535750269889832, -0.49934399127960205, 0.4109260141849518, -0.0012839999981224537, -0.9116680026054382, 0.6715199947357178, 0.0008989999769255519, -0.7409859895706177, 0.9220259785652161, 0.00725199980661273, -0.3870599865913391, 0.8469099998474121, 0.01385399978607893, -0.5315560102462769, 0.5359240174293518, 0.010503999888896942, -0.8442010283470154, 0.4109260141849518, -0.0012839999981224537, -0.9116680026054382, 0.3411880135536194, 0.6009309887886047, -0.7228230237960815, 0.5786640048027039, 0.591838002204895, -0.5611389875411987, 0.6715199947357178, 0.0008989999769255519, -0.7409859895706177, 0.7848690152168274, 0.5665420293807983, -0.25102001428604126, 0.9220259785652161, 0.00725199980661273, -0.3870599865913391, 0.6426810026168823, 0.7039899826049805, -0.3022570013999939, 0.8469099998474121, 0.01385399978607893, -0.5315560102462769, 0.4185889959335327, 0.7581170201301575, -0.5000420212745667, 0.5359240174293518, 0.010503999888896942, -0.8442010283470154, 0.11580599844455719, 0.9901139736175537, -0.07913900166749954, 0.23281100392341614, 0.9724410176277161, 0.012564999982714653, 0.20666299760341644, 0.9662799835205078, 0.15360000729560852, 0.02449899911880493, 0.9865779876708984, 0.16144299507141113, 0.0033809999004006386, 0.9774550199508667, 0.2111150026321411, -0.13491199910640717, 0.7135509848594666, 0.6874909996986389, -0.31953999400138855, 0.7050619721412659, 0.6330729722976685, -0.6039019823074341, 0.6499029994010925, 0.4614419937133789, -0.6318150162696838, 0.6400719881057739, 0.43716898560523987, -0.4243049919605255, 0.6667500138282776, 0.6127070188522339, -0.230786994099617, 0.006523000076413155, 0.9729819893836975, 
-0.5489360094070435, 0.0015109999803826213, 0.8358629941940308, -0.8756710290908813, -0.009891999885439873, 0.4828070104122162, -0.8775539994239807, -0.01909100078046322, 0.47909700870513916, -0.6961889863014221, -0.02449600026011467, 0.7174400091171265, -0.6961889863014221, -0.02449600026011467, 0.7174400091171265, -0.4357450008392334, -0.670009970664978, 0.6010090112686157, -0.25985801219940186, -0.5525479912757874, 0.7919380068778992, -0.42579901218414307, -0.010804999619722366, 0.9047530293464661, 0.009537000209093094, 0.021669000387191772, 0.9997199773788452, 0.022041000425815582, -0.001623000018298626, 0.9997559785842896, 0.4101540148258209, 0.8490809798240662, 0.3329179883003235, 0.9995980262756348, -0.01155600044876337, 0.02587899938225746, 0.5415220260620117, 0.6370009779930115, -0.5486199855804443, 0.7095860242843628, -0.009670999832451344, -0.7045519948005676, -0.011750999838113785, -0.9690240025520325, 0.24668699502944946, 0.046310000121593475, -0.8891720175743103, 0.45522499084472656, -0.010688000358641148, -0.14889900386333466, 0.9887949824333191, -0.04437499865889549, 0.7291200160980225, 0.6829460263252258, 0.12282499670982361, 0.9923850297927856, 0.009232000447809696, 0.4275279939174652, -0.7535750269889832, -0.49934399127960205, 0.48183900117874146, -0.857479989528656, -0.18044300377368927, 0.45527198910713196, -0.49992498755455017, 0.7367510199546814, -0.22054199874401093, 0.3582780063152313, 0.9071930050849915, -0.23591899871826172, 0.7157959938049316, 0.6572499871253967, 0.5359240174293518, 0.010503999888896942, -0.8442010283470154, 0.7280910015106201, 0.015584999695420265, -0.6853029727935791, 0.8887389898300171, 0.016679000109434128, 0.4581089913845062, -0.26009801030158997, -0.0007999999797903001, 0.965582013130188, -0.37161099910736084, 0.004416999872773886, 0.9283779859542847, 0.5359240174293518, 0.010503999888896942, -0.8442010283470154, 0.4185889959335327, 0.7581170201301575, -0.5000420212745667, 0.4801650047302246, 0.8588529825210571, 
-0.17836299538612366, 0.7280910015106201, 0.015584999695420265, -0.6853029727935791, 0.4881030023097992, 0.49794700741767883, 0.7168020009994507, 0.8887389898300171, 0.016679000109434128, 0.4581089913845062, -0.2220049947500229, -0.36189401149749756, 0.9053990244865417, -0.26009801030158997, -0.0007999999797903001, 0.965582013130188, -0.23540399968624115, -0.7104769945144653, 0.6631799936294556, -0.37161099910736084, 0.004416999872773886, 0.9283779859542847, 0.0033809999004006386, 0.9774550199508667, 0.2111150026321411, 0.058719001710414886, 0.8971999883651733, 0.437703013420105, 0.0013249999610707164, 0.164000004529953, 0.9864590167999268, -0.04418899863958359, -0.7303190231323242, 0.6816750168800354, 0.13880200684070587, -0.9897300004959106, -0.034189000725746155, -0.4243049919605255, 0.6667500138282776, 0.6127070188522339, -0.25888898968696594, 0.5453789830207825, 0.7972059845924377, 0.012268000282347202, -0.01928500086069107, 0.9997389912605286, 0.3986299932003021, -0.8456630110740662, 0.3548929989337921, 0.5375639796257019, -0.6107370257377625, -0.5813990235328674, -0.6961889863014221, -0.02449600026011467, 0.7174400091171265, -0.42579901218414307, -0.010804999619722366, 0.9047530293464661, 0.022041000425815582, -0.001623000018298626, 0.9997559785842896, 0.9995980262756348, -0.01155600044876337, 0.02587899938225746, 0.7095860242843628, -0.009670999832451344, -0.7045519948005676, 0, 0, 1, 0, 0, 1, 0.7626410126686096, -0.31482499837875366, 0.5650339722633362, 0.8245400190353394, -0.00001700000029813964, 0.5658029913902283, 0.8479819893836975, -0.3500339984893799, -0.39799800515174866, 0.917701005935669, -0.00003300000025774352, -0.397271990776062, 0.8641409873962402, -0.35644200444221497, -0.3552600145339966, 0.9352689981460571, -0.00011200000153621659, -0.3539389967918396, 0.7209920287132263, -0.29793301224708557, 0.6256250143051147, 0.7807120084762573, -0.00007500000356230885, 0.6248909831047058, 0, 0, 1, 0.5833569765090942, -0.5833380222320557, 
0.5651649832725525, 0.648485004901886, -0.6484479904174805, -0.3987259864807129, 0.6608719825744629, -0.6607480049133301, -0.35589399933815, 0.5518630146980286, -0.5517799854278564, 0.6252880096435547, 0, 0, 1, 0.31482499837875366, -0.762628972530365, 0.5650510191917419, 0.35004499554634094, -0.8479880094528198, -0.39797601103782654, 0.35647401213645935, -0.8641520142555237, -0.35519900918006897, 0.29798200726509094, -0.7210670113563538, 0.6255149841308594, 0, 0, 1, -0.00001700000029813964, -0.8245400190353394, 0.5658029913902283, -0.00003300000025774352, -0.917701005935669, -0.397271990776062, -0.00011200000153621659, -0.9352689981460571, -0.3539389967918396, -0.00007500000356230885, -0.7807120084762573, 0.6248900294303894, 0, 0, 1, 0, 0, 1, -0.31482499837875366, -0.7626410126686096, 0.5650339722633362, -0.00001700000029813964, -0.8245400190353394, 0.5658029913902283, -0.3500339984893799, -0.8479819893836975, -0.39799800515174866, -0.00003300000025774352, -0.917701005935669, -0.397271990776062, -0.35644200444221497, -0.8641409873962402, -0.3552600145339966, -0.00011200000153621659, -0.9352689981460571, -0.3539389967918396, -0.29793301224708557, -0.7209920287132263, 0.6256250143051147, -0.00007500000356230885, -0.7807120084762573, 0.6248900294303894, 0, 0, 1, -0.5833380222320557, -0.5833569765090942, 0.5651649832725525, -0.6484479904174805, -0.648485004901886, -0.3987259864807129, -0.6607480049133301, -0.6608719825744629, -0.35589399933815, -0.5517799854278564, -0.5518630146980286, 0.6252880096435547, 0, 0, 1, -0.762628972530365, -0.31482499837875366, 0.5650510191917419, -0.8479880094528198, -0.35004499554634094, -0.39797601103782654, -0.8641520142555237, -0.35647401213645935, -0.35519900918006897, -0.7210670113563538, -0.29798200726509094, 0.6255149841308594, 0, 0, 1, -0.8245400190353394, 0.00001700000029813964, 0.5658029913902283, -0.917701005935669, 0.00003300000025774352, -0.397271990776062, -0.9352689981460571, 0.00011200000153621659, -0.3539389967918396, 
-0.7807120084762573, 0.00007500000356230885, 0.6248900294303894, 0, 0, 1, 0, 0, 1, -0.7626410126686096, 0.31482499837875366, 0.5650339722633362, -0.8245400190353394, 0.00001700000029813964, 0.5658029913902283, -0.8479819893836975, 0.3500339984893799, -0.39799800515174866, -0.917701005935669, 0.00003300000025774352, -0.397271990776062, -0.8641409873962402, 0.35644200444221497, -0.3552600145339966, -0.9352689981460571, 0.00011200000153621659, -0.3539389967918396, -0.7209920287132263, 0.29793301224708557, 0.6256250143051147, -0.7807120084762573, 0.00007500000356230885, 0.6248900294303894, 0, 0, 1, -0.5833569765090942, 0.5833380222320557, 0.5651649832725525, -0.648485004901886, 0.6484479904174805, -0.3987259864807129, -0.6608719825744629, 0.6607480049133301, -0.35589399933815, -0.5518630146980286, 0.5517799854278564, 0.6252880096435547, 0, 0, 1, -0.31482499837875366, 0.762628972530365, 0.5650510191917419, -0.35004499554634094, 0.8479880094528198, -0.39797601103782654, -0.35647401213645935, 0.8641520142555237, -0.35519900918006897, -0.29798200726509094, 0.7210670113563538, 0.6255149841308594, 0, 0, 1, 0.00001700000029813964, 0.8245400190353394, 0.5658029913902283, 0.00003300000025774352, 0.917701005935669, -0.397271990776062, 0.00011200000153621659, 0.9352689981460571, -0.3539389967918396, 0.00007500000356230885, 0.7807120084762573, 0.6248900294303894, 0, 0, 1, 0, 0, 1, 0.31482499837875366, 0.7626410126686096, 0.5650339722633362, 0.00001700000029813964, 0.8245400190353394, 0.5658029913902283, 0.3500339984893799, 0.8479819893836975, -0.39799800515174866, 0.00003300000025774352, 0.917701005935669, -0.397271990776062, 0.35644200444221497, 0.8641409873962402, -0.3552600145339966, 0.00011200000153621659, 0.9352689981460571, -0.3539389967918396, 0.29793301224708557, 0.7209920287132263, 0.6256250143051147, 0.00007500000356230885, 0.7807120084762573, 0.6248900294303894, 0, 0, 1, 0.5833380222320557, 0.5833569765090942, 0.5651649832725525, 0.6484479904174805, 0.648485004901886, 
-0.3987259864807129, 0.6607480049133301, 0.6608719825744629, -0.35589399933815, 0.5517799854278564, 0.5518630146980286, 0.6252880096435547, 0, 0, 1, 0.762628972530365, 0.31482499837875366, 0.5650510191917419, 0.8479880094528198, 0.35004499554634094, -0.39797601103782654, 0.8641520142555237, 0.35647401213645935, -0.35519900918006897, 0.7210670113563538, 0.29798200726509094, 0.6255149841308594, 0, 0, 1, 0.8245400190353394, -0.00001700000029813964, 0.5658029913902283, 0.917701005935669, -0.00003300000025774352, -0.397271990776062, 0.9352689981460571, -0.00011200000153621659, -0.3539389967918396, 0.7807120084762573, -0.00007500000356230885, 0.6248909831047058, 0.7807120084762573, -0.00007500000356230885, 0.6248909831047058, 0.7209920287132263, -0.29793301224708557, 0.6256250143051147, 0.21797800064086914, -0.0902160033583641, 0.9717749953269958, 0.23658299446105957, 0, 0.9716110229492188, 0.1595889925956726, -0.06596100330352783, 0.9849770069122314, 0.17308400571346283, 0, 0.9849069714546204, 0.3504979908466339, -0.1447400003671646, 0.9253119826316833, 0.37970298528671265, 0, 0.925108015537262, 0.48558899760246277, -0.20147399604320526, 0.8506529927253723, 0.5266720056533813, 0, 0.8500679731369019, 0.5518630146980286, -0.5517799854278564, 0.6252880096435547, 0.16663099825382233, -0.16663099825382233, 0.9718379974365234, 0.12190800160169601, -0.12190800160169601, 0.9850260019302368, 0.2676680088043213, -0.2676680088043213, 0.9255849719047546, 0.37131500244140625, -0.37131500244140625, 0.8510289788246155, 0.29798200726509094, -0.7210670113563538, 0.6255149841308594, 0.0902160033583641, -0.21797800064086914, 0.9717749953269958, 0.06596100330352783, -0.1595889925956726, 0.9849770069122314, 0.1447400003671646, -0.3504979908466339, 0.9253119826316833, 0.20147399604320526, -0.48558899760246277, 0.8506529927253723, -0.00007500000356230885, -0.7807120084762573, 0.6248900294303894, 0, -0.23658299446105957, 0.9716110229492188, 0, -0.17308400571346283, 0.9849069714546204, 0, 
-0.37970298528671265, 0.925108015537262, 0, -0.5266720056533813, 0.8500679731369019, -0.00007500000356230885, -0.7807120084762573, 0.6248900294303894, -0.29793301224708557, -0.7209920287132263, 0.6256250143051147, -0.0902160033583641, -0.21797800064086914, 0.9717749953269958, 0, -0.23658299446105957, 0.9716110229492188, -0.06596100330352783, -0.1595889925956726, 0.9849770069122314, 0, -0.17308400571346283, 0.9849069714546204, -0.1447400003671646, -0.3504979908466339, 0.9253119826316833, 0, -0.37970298528671265, 0.925108015537262, -0.20147399604320526, -0.48558899760246277, 0.8506529927253723, 0, -0.5266720056533813, 0.8500679731369019, -0.5517799854278564, -0.5518630146980286, 0.6252880096435547, -0.16663099825382233, -0.16663099825382233, 0.9718379974365234, -0.12190800160169601, -0.12190800160169601, 0.9850260019302368, -0.2676680088043213, -0.2676680088043213, 0.9255849719047546, -0.37131500244140625, -0.37131500244140625, 0.8510289788246155, -0.7210670113563538, -0.29798200726509094, 0.6255149841308594, -0.21797800064086914, -0.0902160033583641, 0.9717749953269958, -0.1595889925956726, -0.06596100330352783, 0.9849770069122314, -0.3504979908466339, -0.1447400003671646, 0.9253119826316833, -0.48558899760246277, -0.20147399604320526, 0.8506529927253723, -0.7807120084762573, 0.00007500000356230885, 0.6248900294303894, -0.23658299446105957, 0, 0.9716110229492188, -0.17308400571346283, 0, 0.9849069714546204, -0.37970298528671265, 0, 0.925108015537262, -0.5266720056533813, 0, 0.8500679731369019, -0.7807120084762573, 0.00007500000356230885, 0.6248900294303894, -0.7209920287132263, 0.29793301224708557, 0.6256250143051147, -0.21797800064086914, 0.0902160033583641, 0.9717749953269958, -0.23658299446105957, 0, 0.9716110229492188, -0.1595889925956726, 0.06596100330352783, 0.9849770069122314, -0.17308400571346283, 0, 0.9849069714546204, -0.3504979908466339, 0.1447400003671646, 0.9253119826316833, -0.37970298528671265, 0, 0.925108015537262, -0.48558899760246277, 
0.20147399604320526, 0.8506529927253723, -0.5266720056533813, 0, 0.8500679731369019, -0.5518630146980286, 0.5517799854278564, 0.6252880096435547, -0.16663099825382233, 0.16663099825382233, 0.9718379974365234, -0.12190800160169601, 0.12190800160169601, 0.9850260019302368, -0.2676680088043213, 0.2676680088043213, 0.9255849719047546, -0.37131500244140625, 0.37131500244140625, 0.8510289788246155, -0.29798200726509094, 0.7210670113563538, 0.6255149841308594, -0.0902160033583641, 0.21797800064086914, 0.9717749953269958, -0.06596100330352783, 0.1595889925956726, 0.9849770069122314, -0.1447400003671646, 0.3504979908466339, 0.9253119826316833, -0.20147399604320526, 0.48558899760246277, 0.8506529927253723, 0.00007500000356230885, 0.7807120084762573, 0.6248900294303894, 0, 0.23658299446105957, 0.9716110229492188, 0, 0.17308400571346283, 0.9849069714546204, 0, 0.37970298528671265, 0.925108015537262, 0, 0.5266720056533813, 0.8500679731369019, 0.00007500000356230885, 0.7807120084762573, 0.6248900294303894, 0.29793301224708557, 0.7209920287132263, 0.6256250143051147, 0.0902160033583641, 0.21797800064086914, 0.9717749953269958, 0, 0.23658299446105957, 0.9716110229492188, 0.06596100330352783, 0.1595889925956726, 0.9849770069122314, 0, 0.17308400571346283, 0.9849069714546204, 0.1447400003671646, 0.3504979908466339, 0.9253119826316833, 0, 0.37970298528671265, 0.925108015537262, 0.20147399604320526, 0.48558899760246277, 0.8506529927253723, 0, 0.5266720056533813, 0.8500679731369019, 0.5517799854278564, 0.5518630146980286, 0.6252880096435547, 0.16663099825382233, 0.16663099825382233, 0.9718379974365234, 0.12190800160169601, 0.12190800160169601, 0.9850260019302368, 0.2676680088043213, 0.2676680088043213, 0.9255849719047546, 0.37131500244140625, 0.37131500244140625, 0.8510289788246155, 0.7210670113563538, 0.29798200726509094, 0.6255149841308594, 0.21797800064086914, 0.0902160033583641, 0.9717749953269958, 0.1595889925956726, 0.06596100330352783, 0.9849770069122314, 0.3504979908466339, 
0.1447400003671646, 0.9253119826316833, 0.48558899760246277, 0.20147399604320526, 0.8506529927253723, 0.7807120084762573, -0.00007500000356230885, 0.6248909831047058, 0.23658299446105957, 0, 0.9716110229492188, 0.17308400571346283, 0, 0.9849069714546204, 0.37970298528671265, 0, 0.925108015537262, 0.5266720056533813, 0, 0.8500679731369019, }; static const int teapot_indices[] = { 0, 1, 2, 2, 3, 0, 3, 2, 4, 4, 5, 3, 5, 4, 6, 6, 7, 5, 7, 6, 8, 8, 9, 7, 1, 10, 11, 11, 2, 1, 2, 11, 12, 12, 4, 2, 4, 12, 13, 13, 6, 4, 6, 13, 14, 14, 8, 6, 10, 15, 16, 16, 11, 10, 11, 16, 17, 17, 12, 11, 12, 17, 18, 18, 13, 12, 13, 18, 19, 19, 14, 13, 15, 20, 21, 21, 16, 15, 16, 21, 22, 22, 17, 16, 17, 22, 23, 23, 18, 17, 18, 23, 24, 24, 19, 18, 25, 26, 27, 27, 28, 25, 28, 27, 29, 29, 30, 28, 30, 29, 31, 31, 32, 30, 32, 31, 33, 33, 34, 32, 26, 35, 36, 36, 27, 26, 27, 36, 37, 37, 29, 27, 29, 37, 38, 38, 31, 29, 31, 38, 39, 39, 33, 31, 35, 40, 41, 41, 36, 35, 36, 41, 42, 42, 37, 36, 37, 42, 43, 43, 38, 37, 38, 43, 44, 44, 39, 38, 40, 45, 46, 46, 41, 40, 41, 46, 47, 47, 42, 41, 42, 47, 48, 48, 43, 42, 43, 48, 49, 49, 44, 43, 50, 51, 52, 52, 53, 50, 53, 52, 54, 54, 55, 53, 55, 54, 56, 56, 57, 55, 57, 56, 58, 58, 59, 57, 51, 60, 61, 61, 52, 51, 52, 61, 62, 62, 54, 52, 54, 62, 63, 63, 56, 54, 56, 63, 64, 64, 58, 56, 60, 65, 66, 66, 61, 60, 61, 66, 67, 67, 62, 61, 62, 67, 68, 68, 63, 62, 63, 68, 69, 69, 64, 63, 65, 70, 71, 71, 66, 65, 66, 71, 72, 72, 67, 66, 67, 72, 73, 73, 68, 67, 68, 73, 74, 74, 69, 68, 75, 76, 77, 77, 78, 75, 78, 77, 79, 79, 80, 78, 80, 79, 81, 81, 82, 80, 82, 81, 83, 83, 84, 82, 76, 85, 86, 86, 77, 76, 77, 86, 87, 87, 79, 77, 79, 87, 88, 88, 81, 79, 81, 88, 89, 89, 83, 81, 85, 90, 91, 91, 86, 85, 86, 91, 92, 92, 87, 86, 87, 92, 93, 93, 88, 87, 88, 93, 94, 94, 89, 88, 90, 95, 96, 96, 91, 90, 91, 96, 97, 97, 92, 91, 92, 97, 98, 98, 93, 92, 93, 98, 99, 99, 94, 93, 100, 101, 102, 102, 103, 100, 103, 102, 104, 104, 105, 103, 105, 104, 106, 106, 107, 105, 107, 106, 108, 108, 109, 
107, 101, 110, 111, 111, 102, 101, 102, 111, 112, 112, 104, 102, 104, 112, 113, 113, 106, 104, 106, 113, 114, 114, 108, 106, 110, 115, 116, 116, 111, 110, 111, 116, 117, 117, 112, 111, 112, 117, 118, 118, 113, 112, 113, 118, 119, 119, 114, 113, 115, 120, 121, 121, 116, 115, 116, 121, 122, 122, 117, 116, 117, 122, 123, 123, 118, 117, 118, 123, 124, 124, 119, 118, 125, 126, 127, 127, 128, 125, 128, 127, 129, 129, 130, 128, 130, 129, 131, 131, 132, 130, 132, 131, 133, 133, 134, 132, 126, 135, 136, 136, 127, 126, 127, 136, 137, 137, 129, 127, 129, 137, 138, 138, 131, 129, 131, 138, 139, 139, 133, 131, 135, 140, 141, 141, 136, 135, 136, 141, 142, 142, 137, 136, 137, 142, 143, 143, 138, 137, 138, 143, 144, 144, 139, 138, 140, 145, 146, 146, 141, 140, 141, 146, 147, 147, 142, 141, 142, 147, 148, 148, 143, 142, 143, 148, 149, 149, 144, 143, 150, 151, 152, 152, 153, 150, 153, 152, 154, 154, 155, 153, 155, 154, 156, 156, 157, 155, 157, 156, 158, 158, 159, 157, 151, 160, 161, 161, 152, 151, 152, 161, 162, 162, 154, 152, 154, 162, 163, 163, 156, 154, 156, 163, 164, 164, 158, 156, 160, 165, 166, 166, 161, 160, 161, 166, 167, 167, 162, 161, 162, 167, 168, 168, 163, 162, 163, 168, 169, 169, 164, 163, 165, 170, 171, 171, 166, 165, 166, 171, 172, 172, 167, 166, 167, 172, 173, 173, 168, 167, 168, 173, 174, 174, 169, 168, 175, 176, 177, 177, 178, 175, 178, 177, 179, 179, 180, 178, 180, 179, 181, 181, 182, 180, 182, 181, 183, 183, 184, 182, 176, 185, 186, 186, 177, 176, 177, 186, 187, 187, 179, 177, 179, 187, 188, 188, 181, 179, 181, 188, 189, 189, 183, 181, 185, 190, 191, 191, 186, 185, 186, 191, 192, 192, 187, 186, 187, 192, 193, 193, 188, 187, 188, 193, 194, 194, 189, 188, 190, 195, 196, 196, 191, 190, 191, 196, 197, 197, 192, 191, 192, 197, 198, 198, 193, 192, 193, 198, 199, 199, 194, 193, 200, 201, 202, 202, 203, 200, 203, 202, 204, 204, 205, 203, 205, 204, 206, 206, 207, 205, 207, 206, 208, 208, 209, 207, 201, 210, 211, 211, 202, 201, 202, 211, 212, 212, 204, 202, 204, 212, 213, 
213, 206, 204, 206, 213, 214, 214, 208, 206, 210, 215, 216, 216, 211, 210, 211, 216, 217, 217, 212, 211, 212, 217, 218, 218, 213, 212, 213, 218, 219, 219, 214, 213, 215, 220, 221, 221, 216, 215, 216, 221, 222, 222, 217, 216, 217, 222, 223, 223, 218, 217, 218, 223, 224, 224, 219, 218, 225, 226, 227, 227, 228, 225, 228, 227, 229, 229, 230, 228, 230, 229, 231, 231, 232, 230, 232, 231, 233, 233, 234, 232, 226, 235, 236, 236, 227, 226, 227, 236, 237, 237, 229, 227, 229, 237, 238, 238, 231, 229, 231, 238, 239, 239, 233, 231, 235, 240, 241, 241, 236, 235, 236, 241, 242, 242, 237, 236, 237, 242, 243, 243, 238, 237, 238, 243, 244, 244, 239, 238, 240, 245, 246, 246, 241, 240, 241, 246, 247, 247, 242, 241, 242, 247, 248, 248, 243, 242, 243, 248, 249, 249, 244, 243, 250, 251, 252, 252, 253, 250, 253, 252, 254, 254, 255, 253, 255, 254, 256, 256, 257, 255, 257, 256, 258, 258, 259, 257, 251, 260, 261, 261, 252, 251, 252, 261, 262, 262, 254, 252, 254, 262, 263, 263, 256, 254, 256, 263, 264, 264, 258, 256, 260, 265, 266, 266, 261, 260, 261, 266, 267, 267, 262, 261, 262, 267, 268, 268, 263, 262, 263, 268, 269, 269, 264, 263, 265, 270, 271, 271, 266, 265, 266, 271, 272, 272, 267, 266, 267, 272, 273, 273, 268, 267, 268, 273, 274, 274, 269, 268, 275, 276, 277, 277, 278, 275, 278, 277, 279, 279, 280, 278, 280, 279, 281, 281, 282, 280, 282, 281, 283, 283, 284, 282, 276, 285, 286, 286, 277, 276, 277, 286, 287, 287, 279, 277, 279, 287, 288, 288, 281, 279, 281, 288, 289, 289, 283, 281, 285, 290, 291, 291, 286, 285, 286, 291, 292, 292, 287, 286, 287, 292, 293, 293, 288, 287, 288, 293, 294, 294, 289, 288, 290, 295, 296, 296, 291, 290, 291, 296, 297, 297, 292, 291, 292, 297, 298, 298, 293, 292, 293, 298, 299, 299, 294, 293, 300, 301, 302, 302, 303, 300, 303, 302, 304, 304, 305, 303, 305, 304, 306, 306, 307, 305, 307, 306, 308, 308, 309, 307, 301, 310, 311, 311, 302, 301, 302, 311, 312, 312, 304, 302, 304, 312, 313, 313, 306, 304, 306, 313, 314, 314, 308, 306, 310, 315, 316, 316, 311, 310, 311, 
316, 317, 317, 312, 311, 312, 317, 318, 318, 313, 312, 313, 318, 319, 319, 314, 313, 315, 320, 321, 321, 316, 315, 316, 321, 322, 322, 317, 316, 317, 322, 323, 323, 318, 317, 318, 323, 324, 324, 319, 318, 325, 326, 327, 327, 328, 325, 328, 327, 329, 329, 330, 328, 330, 329, 331, 331, 332, 330, 332, 331, 333, 333, 334, 332, 326, 335, 336, 336, 327, 326, 327, 336, 337, 337, 329, 327, 329, 337, 338, 338, 331, 329, 331, 338, 339, 339, 333, 331, 335, 340, 341, 341, 336, 335, 336, 341, 342, 342, 337, 336, 337, 342, 343, 343, 338, 337, 338, 343, 344, 344, 339, 338, 340, 345, 346, 346, 341, 340, 341, 346, 347, 347, 342, 341, 342, 347, 348, 348, 343, 342, 343, 348, 349, 349, 344, 343, 350, 351, 352, 352, 353, 350, 353, 352, 354, 354, 355, 353, 355, 354, 356, 356, 357, 355, 357, 356, 358, 358, 359, 357, 351, 360, 361, 361, 352, 351, 352, 361, 362, 362, 354, 352, 354, 362, 363, 363, 356, 354, 356, 363, 364, 364, 358, 356, 360, 365, 366, 366, 361, 360, 361, 366, 367, 367, 362, 361, 362, 367, 368, 368, 363, 362, 363, 368, 369, 369, 364, 363, 365, 370, 371, 371, 366, 365, 366, 371, 372, 372, 367, 366, 367, 372, 373, 373, 368, 367, 368, 373, 374, 374, 369, 368, 375, 376, 377, 377, 378, 375, 378, 377, 379, 379, 380, 378, 380, 379, 381, 381, 382, 380, 382, 381, 383, 383, 384, 382, 376, 385, 386, 386, 377, 376, 377, 386, 387, 387, 379, 377, 379, 387, 388, 388, 381, 379, 381, 388, 389, 389, 383, 381, 385, 390, 391, 391, 386, 385, 386, 391, 392, 392, 387, 386, 387, 392, 393, 393, 388, 387, 388, 393, 394, 394, 389, 388, 390, 395, 396, 396, 391, 390, 391, 396, 397, 397, 392, 391, 392, 397, 398, 398, 393, 392, 393, 398, 399, 399, 394, 393, 400, 401, 402, 402, 403, 400, 403, 402, 404, 404, 405, 403, 405, 404, 406, 406, 407, 405, 407, 406, 408, 408, 409, 407, 401, 410, 411, 411, 402, 401, 402, 411, 412, 412, 404, 402, 404, 412, 413, 413, 406, 404, 406, 413, 414, 414, 408, 406, 410, 415, 416, 416, 411, 410, 411, 416, 417, 417, 412, 411, 412, 417, 418, 418, 413, 412, 413, 418, 419, 419, 414, 
413, 415, 420, 421, 421, 416, 415, 416, 421, 422, 422, 417, 416, 417, 422, 423, 423, 418, 417, 418, 423, 424, 424, 419, 418, 425, 426, 427, 427, 428, 425, 428, 427, 429, 429, 430, 428, 430, 429, 431, 431, 432, 430, 432, 431, 433, 433, 434, 432, 426, 435, 436, 436, 427, 426, 427, 436, 437, 437, 429, 427, 429, 437, 438, 438, 431, 429, 431, 438, 439, 439, 433, 431, 435, 440, 441, 441, 436, 435, 436, 441, 442, 442, 437, 436, 437, 442, 443, 443, 438, 437, 438, 443, 444, 444, 439, 438, 440, 445, 446, 446, 441, 440, 441, 446, 447, 447, 442, 441, 442, 447, 448, 448, 443, 442, 443, 448, 449, 449, 444, 443, 450, 451, 452, 452, 453, 450, 453, 452, 454, 454, 455, 453, 455, 454, 456, 456, 457, 455, 457, 456, 458, 458, 459, 457, 451, 460, 461, 461, 452, 451, 452, 461, 462, 462, 454, 452, 454, 462, 463, 463, 456, 454, 456, 463, 464, 464, 458, 456, 460, 465, 466, 466, 461, 460, 461, 466, 467, 467, 462, 461, 462, 467, 468, 468, 463, 462, 463, 468, 469, 469, 464, 463, 465, 470, 471, 471, 466, 465, 466, 471, 472, 472, 467, 466, 467, 472, 473, 473, 468, 467, 468, 473, 474, 474, 469, 468, 475, 476, 477, 477, 478, 475, 478, 477, 479, 479, 480, 478, 480, 479, 481, 481, 482, 480, 482, 481, 483, 483, 484, 482, 476, 485, 486, 486, 477, 476, 477, 486, 487, 487, 479, 477, 479, 487, 488, 488, 481, 479, 481, 488, 489, 489, 483, 481, 485, 490, 491, 491, 486, 485, 486, 491, 492, 492, 487, 486, 487, 492, 493, 493, 488, 487, 488, 493, 494, 494, 489, 488, 490, 495, 496, 496, 491, 490, 491, 496, 497, 497, 492, 491, 492, 497, 498, 498, 493, 492, 493, 498, 499, 499, 494, 493, 500, 501, 502, 502, 503, 500, 503, 502, 504, 504, 505, 503, 505, 504, 506, 506, 507, 505, 507, 506, 508, 508, 509, 507, 501, 510, 511, 511, 502, 501, 502, 511, 512, 512, 504, 502, 504, 512, 513, 513, 506, 504, 506, 513, 514, 514, 508, 506, 510, 515, 516, 516, 511, 510, 511, 516, 517, 517, 512, 511, 512, 517, 518, 518, 513, 512, 513, 518, 519, 519, 514, 513, 515, 520, 521, 521, 516, 515, 516, 521, 522, 522, 517, 516, 517, 522, 523, 
523, 518, 517, 518, 523, 524, 524, 519, 518, 525, 526, 527, 527, 528, 525, 528, 527, 529, 529, 530, 528, 530, 529, 531, 531, 532, 530, 532, 531, 533, 533, 534, 532, 526, 535, 536, 536, 527, 526, 527, 536, 537, 537, 529, 527, 529, 537, 538, 538, 531, 529, 531, 538, 539, 539, 533, 531, 535, 540, 541, 541, 536, 535, 536, 541, 542, 542, 537, 536, 537, 542, 543, 543, 538, 537, 538, 543, 544, 544, 539, 538, 540, 545, 546, 546, 541, 540, 541, 546, 547, 547, 542, 541, 542, 547, 548, 548, 543, 542, 543, 548, 549, 549, 544, 543, 550, 551, 552, 552, 553, 550, 553, 552, 554, 554, 555, 553, 555, 554, 556, 556, 557, 555, 557, 556, 558, 558, 559, 557, 551, 560, 561, 561, 552, 551, 552, 561, 562, 562, 554, 552, 554, 562, 563, 563, 556, 554, 556, 563, 564, 564, 558, 556, 560, 565, 566, 566, 561, 560, 561, 566, 567, 567, 562, 561, 562, 567, 568, 568, 563, 562, 563, 568, 569, 569, 564, 563, 565, 570, 571, 571, 566, 565, 566, 571, 572, 572, 567, 566, 567, 572, 573, 573, 568, 567, 568, 573, 574, 574, 569, 568, 575, 576, 577, 577, 578, 575, 578, 577, 579, 579, 580, 578, 580, 579, 581, 581, 582, 580, 582, 581, 583, 583, 584, 582, 576, 585, 586, 586, 577, 576, 577, 586, 587, 587, 579, 577, 579, 587, 588, 588, 581, 579, 581, 588, 589, 589, 583, 581, 585, 590, 591, 591, 586, 585, 586, 591, 592, 592, 587, 586, 587, 592, 593, 593, 588, 587, 588, 593, 594, 594, 589, 588, 590, 595, 596, 596, 591, 590, 591, 596, 597, 597, 592, 591, 592, 597, 598, 598, 593, 592, 593, 598, 599, 599, 594, 593, 600, 601, 602, 602, 603, 600, 603, 602, 604, 604, 605, 603, 605, 604, 606, 606, 607, 605, 607, 606, 608, 608, 609, 607, 601, 610, 611, 611, 602, 601, 602, 611, 612, 612, 604, 602, 604, 612, 613, 613, 606, 604, 606, 613, 614, 614, 608, 606, 610, 615, 616, 616, 611, 610, 611, 616, 617, 617, 612, 611, 612, 617, 618, 618, 613, 612, 613, 618, 619, 619, 614, 613, 615, 620, 621, 621, 616, 615, 616, 621, 622, 622, 617, 616, 617, 622, 623, 623, 618, 617, 618, 623, 624, 624, 619, 618, 625, 626, 627, 627, 628, 625, 628, 
627, 629, 629, 630, 628, 630, 629, 631, 631, 632, 630, 632, 631, 633, 633, 634, 632, 626, 635, 636, 636, 627, 626, 627, 636, 637, 637, 629, 627, 629, 637, 638, 638, 631, 629, 631, 638, 639, 639, 633, 631, 635, 640, 641, 641, 636, 635, 636, 641, 642, 642, 637, 636, 637, 642, 643, 643, 638, 637, 638, 643, 644, 644, 639, 638, 640, 645, 646, 646, 641, 640, 641, 646, 647, 647, 642, 641, 642, 647, 648, 648, 643, 642, 643, 648, 649, 649, 644, 643, 650, 651, 652, 652, 653, 650, 653, 652, 654, 654, 655, 653, 655, 654, 656, 656, 657, 655, 657, 656, 658, 658, 659, 657, 651, 660, 661, 661, 652, 651, 652, 661, 662, 662, 654, 652, 654, 662, 663, 663, 656, 654, 656, 663, 664, 664, 658, 656, 660, 665, 666, 666, 661, 660, 661, 666, 667, 667, 662, 661, 662, 667, 668, 668, 663, 662, 663, 668, 669, 669, 664, 663, 665, 670, 671, 671, 666, 665, 666, 671, 672, 672, 667, 666, 667, 672, 673, 673, 668, 667, 668, 673, 674, 674, 669, 668, 675, 676, 677, 677, 678, 675, 678, 677, 679, 679, 680, 678, 680, 679, 681, 681, 682, 680, 682, 681, 683, 683, 684, 682, 676, 685, 686, 686, 677, 676, 677, 686, 687, 687, 679, 677, 679, 687, 688, 688, 681, 679, 681, 688, 689, 689, 683, 681, 685, 690, 691, 691, 686, 685, 686, 691, 692, 692, 687, 686, 687, 692, 693, 693, 688, 687, 688, 693, 694, 694, 689, 688, 690, 695, 696, 696, 691, 690, 691, 696, 697, 697, 692, 691, 692, 697, 698, 698, 693, 692, 693, 698, 699, 699, 694, 693, 700, 701, 702, 702, 703, 700, 703, 702, 704, 704, 705, 703, 705, 704, 706, 706, 707, 705, 707, 706, 708, 708, 709, 707, 701, 710, 711, 711, 702, 701, 702, 711, 712, 712, 704, 702, 704, 712, 713, 713, 706, 704, 706, 713, 714, 714, 708, 706, 710, 715, 716, 716, 711, 710, 711, 716, 717, 717, 712, 711, 712, 717, 718, 718, 713, 712, 713, 718, 719, 719, 714, 713, 715, 720, 721, 721, 716, 715, 716, 721, 722, 722, 717, 716, 717, 722, 723, 723, 718, 717, 718, 723, 724, 724, 719, 718, 725, 726, 727, 727, 728, 725, 728, 727, 729, 729, 730, 728, 730, 729, 731, 731, 732, 730, 732, 731, 733, 733, 734, 
732, 726, 735, 736, 736, 727, 726, 727, 736, 737, 737, 729, 727, 729, 737, 738, 738, 731, 729, 731, 738, 739, 739, 733, 731, 735, 740, 741, 741, 736, 735, 736, 741, 742, 742, 737, 736, 737, 742, 743, 743, 738, 737, 738, 743, 744, 744, 739, 738, 740, 745, 746, 746, 741, 740, 741, 746, 747, 747, 742, 741, 742, 747, 748, 748, 743, 742, 743, 748, 749, 749, 744, 743, 750, 751, 752, 752, 753, 750, 753, 752, 754, 754, 755, 753, 755, 754, 756, 756, 757, 755, 757, 756, 758, 758, 759, 757, 751, 760, 761, 761, 752, 751, 752, 761, 762, 762, 754, 752, 754, 762, 763, 763, 756, 754, 756, 763, 764, 764, 758, 756, 760, 765, 766, 766, 761, 760, 761, 766, 767, 767, 762, 761, 762, 767, 768, 768, 763, 762, 763, 768, 769, 769, 764, 763, 765, 770, 771, 771, 766, 765, 766, 771, 772, 772, 767, 766, 767, 772, 773, 773, 768, 767, 768, 773, 774, 774, 769, 768, 775, 776, 777, 777, 778, 775, 778, 777, 779, 779, 780, 778, 780, 779, 781, 781, 782, 780, 782, 781, 783, 783, 784, 782, 776, 785, 786, 786, 777, 776, 777, 786, 787, 787, 779, 777, 779, 787, 788, 788, 781, 779, 781, 788, 789, 789, 783, 781, 785, 790, 791, 791, 786, 785, 786, 791, 792, 792, 787, 786, 787, 792, 793, 793, 788, 787, 788, 793, 794, 794, 789, 788, 790, 795, 796, 796, 791, 790, 791, 796, 797, 797, 792, 791, 792, 797, 798, 798, 793, 792, 793, 798, 799, 799, 794, 793, }; Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/README.md000066400000000000000000000000761270147354000244310ustar00rootroot00000000000000This demo demonstrates multi-thread command buffer recording. Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/Shell.cpp000066400000000000000000000472641270147354000247370ustar00rootroot00000000000000/* * Copyright (C) 2016 Google, Inc. 
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included
 * in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

// header names below restored from usage (the archive extraction stripped
// the contents of the angle brackets)
#include <cassert>
#include <array>
#include <iostream>
#include <string>
#include <sstream>
#include <set>
#include <stdexcept>

#include "Helpers.h"
#include "Shell.h"
#include "Game.h"

Shell::Shell(Game &game)
    : game_(game), settings_(game.settings()), ctx_(),
      game_tick_(1.0f / settings_.ticks_per_second), game_time_(game_tick_) {
    // require generic WSI extensions
    instance_extensions_.push_back(VK_KHR_SURFACE_EXTENSION_NAME);
    device_extensions_.push_back(VK_KHR_SWAPCHAIN_EXTENSION_NAME);

    // require "standard" validation layers
    if (settings_.validate) {
        device_layers_.push_back("VK_LAYER_LUNARG_standard_validation");
        instance_layers_.push_back("VK_LAYER_LUNARG_standard_validation");
        instance_extensions_.push_back(VK_EXT_DEBUG_REPORT_EXTENSION_NAME);
    }
}

void Shell::log(LogPriority priority, const char *msg) {
    std::ostream &st = (priority >= LOG_ERR) ? std::cerr : std::cout;
    st << msg << "\n";
}

void Shell::init_vk() {
    vk::init_dispatch_table_top(load_vk());

    init_instance();
    vk::init_dispatch_table_middle(ctx_.instance, false);

    init_debug_report();
    init_physical_dev();
}

void Shell::cleanup_vk() {
    if (settings_.validate)
        vk::DestroyDebugReportCallbackEXT(ctx_.instance, ctx_.debug_report, nullptr);

    vk::DestroyInstance(ctx_.instance, nullptr);
}

bool Shell::debug_report_callback(VkDebugReportFlagsEXT flags,
                                  VkDebugReportObjectTypeEXT obj_type,
                                  uint64_t object, size_t location,
                                  int32_t msg_code, const char *layer_prefix,
                                  const char *msg) {
    LogPriority prio = LOG_WARN;
    if (flags & VK_DEBUG_REPORT_ERROR_BIT_EXT)
        prio = LOG_ERR;
    else if (flags & (VK_DEBUG_REPORT_WARNING_BIT_EXT | VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT))
        prio = LOG_WARN;
    else if (flags & VK_DEBUG_REPORT_INFORMATION_BIT_EXT)
        prio = LOG_INFO;
    else if (flags & VK_DEBUG_REPORT_DEBUG_BIT_EXT)
        prio = LOG_DEBUG;

    std::stringstream ss;
    ss << layer_prefix << ": " << msg;

    log(prio, ss.str().c_str());

    return false;
}

void Shell::assert_all_instance_layers() const {
    // enumerate instance layers
    std::vector<VkLayerProperties> layers;
    vk::enumerate(layers);

    std::set<std::string> layer_names;
    for (const auto &layer : layers)
        layer_names.insert(layer.layerName);

    // all listed instance layers are required
    for (const auto &name : instance_layers_) {
        if (layer_names.find(name) == layer_names.end()) {
            std::stringstream ss;
            ss << "instance layer " << name << " is missing";
            throw std::runtime_error(ss.str());
        }
    }
}

void Shell::assert_all_instance_extensions() const {
    // enumerate instance extensions
    std::vector<VkExtensionProperties> exts;
    vk::enumerate(nullptr, exts);

    std::set<std::string> ext_names;
    for (const auto &ext : exts)
        ext_names.insert(ext.extensionName);

    // all listed instance extensions are required
    for (const auto &name : instance_extensions_) {
        if (ext_names.find(name) == ext_names.end()) {
            std::stringstream ss;
            ss << "instance extension " << name << " is missing";
            throw std::runtime_error(ss.str());
        }
    }
}

bool Shell::has_all_device_layers(VkPhysicalDevice phy) const {
    // enumerate device layers
    std::vector<VkLayerProperties> layers;
    vk::enumerate(phy, layers);

    std::set<std::string> layer_names;
    for (const auto &layer : layers)
        layer_names.insert(layer.layerName);

    // all listed device layers are required
    for (const auto &name : device_layers_) {
        if (layer_names.find(name) == layer_names.end())
            return false;
    }

    return true;
}

bool Shell::has_all_device_extensions(VkPhysicalDevice phy) const {
    // enumerate device extensions
    std::vector<VkExtensionProperties> exts;
    vk::enumerate(phy, nullptr, exts);

    std::set<std::string> ext_names;
    for (const auto &ext : exts)
        ext_names.insert(ext.extensionName);

    // all listed device extensions are required
    for (const auto &name : device_extensions_) {
        if (ext_names.find(name) == ext_names.end())
            return false;
    }

    return true;
}

void Shell::init_instance() {
    assert_all_instance_layers();
    assert_all_instance_extensions();

    VkApplicationInfo app_info = {};
    app_info.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app_info.pApplicationName = settings_.name.c_str();
    app_info.applicationVersion = 0;
    app_info.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo instance_info = {};
    instance_info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    instance_info.pApplicationInfo = &app_info;
    instance_info.enabledLayerCount = static_cast<uint32_t>(instance_layers_.size());
    instance_info.ppEnabledLayerNames = instance_layers_.data();
    instance_info.enabledExtensionCount = static_cast<uint32_t>(instance_extensions_.size());
    instance_info.ppEnabledExtensionNames = instance_extensions_.data();

    vk::assert_success(vk::CreateInstance(&instance_info, nullptr, &ctx_.instance));
}

void Shell::init_debug_report() {
    if (!settings_.validate)
        return;

    VkDebugReportCallbackCreateInfoEXT debug_report_info = {};
    debug_report_info.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;

    debug_report_info.flags = VK_DEBUG_REPORT_WARNING_BIT_EXT |
                              VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT |
                              VK_DEBUG_REPORT_ERROR_BIT_EXT;
    if (settings_.validate_verbose) {
        // add (rather than replace) the verbose bits so errors keep reporting
        debug_report_info.flags |= VK_DEBUG_REPORT_INFORMATION_BIT_EXT |
                                   VK_DEBUG_REPORT_DEBUG_BIT_EXT;
    }

    debug_report_info.pfnCallback = debug_report_callback;
    debug_report_info.pUserData = reinterpret_cast<void *>(this);

    vk::assert_success(vk::CreateDebugReportCallbackEXT(ctx_.instance, &debug_report_info, nullptr, &ctx_.debug_report));
}

void Shell::init_physical_dev() {
    // enumerate physical devices
    std::vector<VkPhysicalDevice> phys;
    vk::assert_success(vk::enumerate(ctx_.instance, phys));

    ctx_.physical_dev = VK_NULL_HANDLE;
    for (auto phy : phys) {
        if (!has_all_device_layers(phy) || !has_all_device_extensions(phy))
            continue;

        // get queue properties
        std::vector<VkQueueFamilyProperties> queues;
        vk::get(phy, queues);

        int game_queue_family = -1, present_queue_family = -1;
        for (uint32_t i = 0; i < queues.size(); i++) {
            const VkQueueFamilyProperties &q = queues[i];

            // requires only GRAPHICS for game queues
            const VkFlags game_queue_flags = VK_QUEUE_GRAPHICS_BIT;
            if (game_queue_family < 0 && (q.queueFlags & game_queue_flags) == game_queue_flags)
                game_queue_family = i;

            // present queue must support the surface
            if (present_queue_family < 0 && can_present(phy, i))
                present_queue_family = i;

            if (game_queue_family >= 0 && present_queue_family >= 0)
                break;
        }

        if (game_queue_family >= 0 && present_queue_family >= 0) {
            ctx_.physical_dev = phy;
            ctx_.game_queue_family = game_queue_family;
            ctx_.present_queue_family = present_queue_family;
            break;
        }
    }

    if (ctx_.physical_dev == VK_NULL_HANDLE)
        throw std::runtime_error("failed to find any capable Vulkan physical device");
}

void Shell::create_context() {
    create_dev();
    vk::init_dispatch_table_bottom(ctx_.instance, ctx_.dev);

    vk::GetDeviceQueue(ctx_.dev, ctx_.game_queue_family, 0, &ctx_.game_queue);
    vk::GetDeviceQueue(ctx_.dev, ctx_.present_queue_family, 0, &ctx_.present_queue);

    create_back_buffers();

    // initialize ctx_.{surface,format} before attach_shell
    create_swapchain();

    game_.attach_shell(*this);
}

void Shell::destroy_context() {
    if (ctx_.dev == VK_NULL_HANDLE)
        return;

    vk::DeviceWaitIdle(ctx_.dev);

    destroy_swapchain();

    game_.detach_shell();

    destroy_back_buffers();

    ctx_.game_queue = VK_NULL_HANDLE;
    ctx_.present_queue = VK_NULL_HANDLE;

    vk::DestroyDevice(ctx_.dev, nullptr);
    ctx_.dev = VK_NULL_HANDLE;
}

void Shell::create_dev() {
    VkDeviceCreateInfo dev_info = {};
    dev_info.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;

    const std::vector<float> queue_priorities(settings_.queue_count, 0.0f);
    std::array<VkDeviceQueueCreateInfo, 2> queue_info = {};
    queue_info[0].sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
    queue_info[0].queueFamilyIndex = ctx_.game_queue_family;
    queue_info[0].queueCount = settings_.queue_count;
    queue_info[0].pQueuePriorities = queue_priorities.data();
    if (ctx_.game_queue_family != ctx_.present_queue_family) {
        queue_info[1].sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
        queue_info[1].queueFamilyIndex = ctx_.present_queue_family;
        queue_info[1].queueCount = 1;
        queue_info[1].pQueuePriorities = queue_priorities.data();

        dev_info.queueCreateInfoCount = 2;
    } else {
        dev_info.queueCreateInfoCount = 1;
    }
    dev_info.pQueueCreateInfos = queue_info.data();

    dev_info.enabledLayerCount = static_cast<uint32_t>(device_layers_.size());
    dev_info.ppEnabledLayerNames = device_layers_.data();
    dev_info.enabledExtensionCount = static_cast<uint32_t>(device_extensions_.size());
    dev_info.ppEnabledExtensionNames = device_extensions_.data();

    // disable all features
    VkPhysicalDeviceFeatures features = {};
    dev_info.pEnabledFeatures = &features;

    vk::assert_success(vk::CreateDevice(ctx_.physical_dev, &dev_info, nullptr, &ctx_.dev));
}

void Shell::create_back_buffers() {
    VkSemaphoreCreateInfo sem_info = {};
    sem_info.sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO;

    VkFenceCreateInfo fence_info = {};
    fence_info.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;
    fence_info.flags = VK_FENCE_CREATE_SIGNALED_BIT;

    // BackBuffer is used to track which swapchain image and its associated
    // sync primitives are busy.  Having more BackBuffers than swapchain
    // images may allow us to replace the CPU wait on present_fence with a
    // GPU wait on acquire_semaphore.
    const int count = settings_.back_buffer_count + 1;
    for (int i = 0; i < count; i++) {
        BackBuffer buf = {};
        vk::assert_success(vk::CreateSemaphore(ctx_.dev, &sem_info, nullptr, &buf.acquire_semaphore));
        vk::assert_success(vk::CreateSemaphore(ctx_.dev, &sem_info, nullptr, &buf.render_semaphore));
        vk::assert_success(vk::CreateFence(ctx_.dev, &fence_info, nullptr, &buf.present_fence));

        ctx_.back_buffers.push(buf);
    }
}

void Shell::destroy_back_buffers() {
    while (!ctx_.back_buffers.empty()) {
        const auto &buf = ctx_.back_buffers.front();

        vk::DestroySemaphore(ctx_.dev, buf.acquire_semaphore, nullptr);
        vk::DestroySemaphore(ctx_.dev, buf.render_semaphore, nullptr);
        vk::DestroyFence(ctx_.dev, buf.present_fence, nullptr);

        ctx_.back_buffers.pop();
    }
}

void Shell::create_swapchain() {
    ctx_.surface = create_surface(ctx_.instance);

    VkBool32 supported;
    vk::assert_success(vk::GetPhysicalDeviceSurfaceSupportKHR(ctx_.physical_dev,
                ctx_.present_queue_family, ctx_.surface, &supported));
    // this should be guaranteed by the platform-specific can_present call
    assert(supported);

    std::vector<VkSurfaceFormatKHR> formats;
    vk::get(ctx_.physical_dev, ctx_.surface, formats);
    ctx_.format = formats[0];

    // defer to resize_swapchain()
    ctx_.swapchain = VK_NULL_HANDLE;
    ctx_.extent.width = (uint32_t) -1;
    ctx_.extent.height = (uint32_t) -1;
}

void Shell::destroy_swapchain() {
    if (ctx_.swapchain != VK_NULL_HANDLE) {
        game_.detach_swapchain();

        vk::DestroySwapchainKHR(ctx_.dev, ctx_.swapchain, nullptr);
        ctx_.swapchain = VK_NULL_HANDLE;
    }

    vk::DestroySurfaceKHR(ctx_.instance, ctx_.surface, nullptr);
    ctx_.surface = VK_NULL_HANDLE;
}

void Shell::resize_swapchain(uint32_t width_hint, uint32_t height_hint) {
    VkSurfaceCapabilitiesKHR caps;
    vk::assert_success(vk::GetPhysicalDeviceSurfaceCapabilitiesKHR(ctx_.physical_dev,
                ctx_.surface, &caps));

    VkExtent2D extent = caps.currentExtent;
    // use the hints
    if (extent.width == (uint32_t) -1) {
        extent.width = width_hint;
        extent.height = height_hint;
    }
    // clamp width to protect us from broken hints
    if (extent.width < caps.minImageExtent.width)
        extent.width = caps.minImageExtent.width;
    else if (extent.width > caps.maxImageExtent.width)
        extent.width = caps.maxImageExtent.width;
    // clamp height
    if (extent.height < caps.minImageExtent.height)
        extent.height = caps.minImageExtent.height;
    else if (extent.height > caps.maxImageExtent.height)
        extent.height = caps.maxImageExtent.height;

    if (ctx_.extent.width == extent.width &&
        ctx_.extent.height == extent.height)
        return;

    uint32_t image_count = settings_.back_buffer_count;
    if (image_count < caps.minImageCount)
        image_count = caps.minImageCount;
    else if (image_count > caps.maxImageCount)
        image_count = caps.maxImageCount;

    assert(caps.supportedUsageFlags & VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT);
    assert(caps.supportedTransforms & caps.currentTransform);
    assert(caps.supportedCompositeAlpha & (VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR |
                                           VK_COMPOSITE_ALPHA_INHERIT_BIT_KHR));
    VkCompositeAlphaFlagBitsKHR composite_alpha =
        (caps.supportedCompositeAlpha & VK_COMPOSITE_ALPHA_INHERIT_BIT_KHR) ?
        VK_COMPOSITE_ALPHA_INHERIT_BIT_KHR : VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR;

    std::vector<VkPresentModeKHR> modes;
    vk::get(ctx_.physical_dev, ctx_.surface, modes);

    // FIFO is the only mode universally supported
    VkPresentModeKHR mode = VK_PRESENT_MODE_FIFO_KHR;
    for (auto m : modes) {
        if ((settings_.vsync && m == VK_PRESENT_MODE_MAILBOX_KHR) ||
            (!settings_.vsync && m == VK_PRESENT_MODE_IMMEDIATE_KHR)) {
            mode = m;
            break;
        }
    }

    VkSwapchainCreateInfoKHR swapchain_info = {};
    swapchain_info.sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
    swapchain_info.surface = ctx_.surface;
    swapchain_info.minImageCount = image_count;
    swapchain_info.imageFormat = ctx_.format.format;
    swapchain_info.imageColorSpace = ctx_.format.colorSpace;
    swapchain_info.imageExtent = extent;
    swapchain_info.imageArrayLayers = 1;
    swapchain_info.imageUsage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;

    std::vector<uint32_t> queue_families(1, ctx_.game_queue_family);
    if (ctx_.game_queue_family != ctx_.present_queue_family) {
        queue_families.push_back(ctx_.present_queue_family);

        swapchain_info.imageSharingMode = VK_SHARING_MODE_CONCURRENT;
        swapchain_info.queueFamilyIndexCount = (uint32_t)queue_families.size();
        swapchain_info.pQueueFamilyIndices = queue_families.data();
    } else {
        swapchain_info.imageSharingMode = VK_SHARING_MODE_EXCLUSIVE;
    }

    swapchain_info.preTransform = caps.currentTransform;
    swapchain_info.compositeAlpha = composite_alpha;
    swapchain_info.presentMode = mode;
    swapchain_info.clipped = true;
    swapchain_info.oldSwapchain = ctx_.swapchain;

    vk::assert_success(vk::CreateSwapchainKHR(ctx_.dev, &swapchain_info, nullptr, &ctx_.swapchain));
    ctx_.extent = extent;

    // destroy the old swapchain
    if (swapchain_info.oldSwapchain != VK_NULL_HANDLE) {
        game_.detach_swapchain();

        vk::DeviceWaitIdle(ctx_.dev);
        vk::DestroySwapchainKHR(ctx_.dev, swapchain_info.oldSwapchain, nullptr);
    }

    game_.attach_swapchain();
}

void Shell::add_game_time(float time) {
    int max_ticks = 3;

    if (!settings_.no_tick)
        game_time_ += time;

    while (game_time_ >= game_tick_ && max_ticks--) {
        game_.on_tick();
        game_time_ -= game_tick_;
    }
}

void Shell::acquire_back_buffer() {
    // acquire just once when not presenting
    if (settings_.no_present &&
        ctx_.acquired_back_buffer.acquire_semaphore != VK_NULL_HANDLE)
        return;

    auto &buf = ctx_.back_buffers.front();

    // wait until acquire and render semaphores are waited/unsignaled
    vk::assert_success(vk::WaitForFences(ctx_.dev, 1, &buf.present_fence,
                true, UINT64_MAX));
    // reset the fence
    vk::assert_success(vk::ResetFences(ctx_.dev, 1, &buf.present_fence));

    vk::assert_success(vk::AcquireNextImageKHR(ctx_.dev, ctx_.swapchain,
                UINT64_MAX, buf.acquire_semaphore, VK_NULL_HANDLE,
                &buf.image_index));

    ctx_.acquired_back_buffer = buf;
    ctx_.back_buffers.pop();
}

void Shell::present_back_buffer() {
    const auto &buf = ctx_.acquired_back_buffer;

    if (!settings_.no_render)
        game_.on_frame(game_time_ / game_tick_);

    if (settings_.no_present) {
        fake_present();
        return;
    }

    VkPresentInfoKHR present_info = {};
    present_info.sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
    present_info.waitSemaphoreCount = 1;
    present_info.pWaitSemaphores = (settings_.no_render) ?
        &buf.acquire_semaphore : &buf.render_semaphore;
    present_info.swapchainCount = 1;
    present_info.pSwapchains = &ctx_.swapchain;
    present_info.pImageIndices = &buf.image_index;

    vk::assert_success(vk::QueuePresentKHR(ctx_.present_queue, &present_info));

    vk::assert_success(vk::QueueSubmit(ctx_.present_queue, 0, nullptr, buf.present_fence));
    ctx_.back_buffers.push(buf);
}

void Shell::fake_present() {
    const auto &buf = ctx_.acquired_back_buffer;

    assert(settings_.no_present);

    // wait render semaphore and signal acquire semaphore
    if (!settings_.no_render) {
        VkPipelineStageFlags stage = VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT;
        VkSubmitInfo submit_info = {};
        submit_info.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
        submit_info.waitSemaphoreCount = 1;
        submit_info.pWaitSemaphores = &buf.render_semaphore;
        submit_info.pWaitDstStageMask = &stage;
        submit_info.signalSemaphoreCount = 1;
        submit_info.pSignalSemaphores = &buf.acquire_semaphore;
        vk::assert_success(vk::QueueSubmit(ctx_.game_queue, 1, &submit_info, VK_NULL_HANDLE));
    }

    // push the buffer back just once for Shell::cleanup_vk
    if (buf.acquire_semaphore != ctx_.back_buffers.back().acquire_semaphore)
        ctx_.back_buffers.push(buf);
}
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/Shell.h000066400000000000000000000114061270147354000243710ustar00rootroot00000000000000/*
 * Copyright (C) 2016 Google, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included
 * in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#ifndef SHELL_H
#define SHELL_H

// header names below restored from usage (the archive extraction stripped
// the contents of the angle brackets)
#include <queue>
#include <vector>
#include <vulkan/vulkan.h>

#include "Game.h"

class Game;

class Shell {
public:
    Shell(const Shell &sh) = delete;
    Shell &operator=(const Shell &sh) = delete;
    virtual ~Shell() {}

    struct BackBuffer {
        uint32_t image_index;

        VkSemaphore acquire_semaphore;
        VkSemaphore render_semaphore;

        // signaled when this struct is ready for reuse
        VkFence present_fence;
    };

    struct Context {
        VkInstance instance;
        VkDebugReportCallbackEXT debug_report;

        VkPhysicalDevice physical_dev;
        uint32_t game_queue_family;
        uint32_t present_queue_family;

        VkDevice dev;
        VkQueue game_queue;
        VkQueue present_queue;

        std::queue<BackBuffer> back_buffers;

        VkSurfaceKHR surface;
        VkSurfaceFormatKHR format;

        VkSwapchainKHR swapchain;
        VkExtent2D extent;

        BackBuffer acquired_back_buffer;
    };
    const Context &context() const { return ctx_; }

    enum LogPriority {
        LOG_DEBUG,
        LOG_INFO,
        LOG_WARN,
        LOG_ERR,
    };
    virtual void log(LogPriority priority, const char *msg);

    virtual void run() = 0;
    virtual void quit() = 0;

protected:
    Shell(Game &game);

    void init_vk();
    void cleanup_vk();

    void create_context();
    void destroy_context();

    void resize_swapchain(uint32_t width_hint, uint32_t height_hint);

    void add_game_time(float time);

    void acquire_back_buffer();
    void present_back_buffer();

    Game &game_;
    const Game::Settings &settings_;

    std::vector<const char *> instance_layers_;
    std::vector<const char *> instance_extensions_;
    std::vector<const char *> device_layers_;
    std::vector<const char *> device_extensions_;

private:
    bool debug_report_callback(VkDebugReportFlagsEXT flags,
                               VkDebugReportObjectTypeEXT obj_type,
                               uint64_t object, size_t location,
                               int32_t msg_code, const char *layer_prefix,
                               const char *msg);
    static VKAPI_ATTR VkBool32 VKAPI_CALL debug_report_callback(
            VkDebugReportFlagsEXT flags, VkDebugReportObjectTypeEXT obj_type,
            uint64_t object, size_t location, int32_t msg_code,
            const char *layer_prefix, const char *msg, void *user_data) {
        Shell *shell = reinterpret_cast<Shell *>(user_data);
        return shell->debug_report_callback(flags, obj_type, object, location,
                                            msg_code, layer_prefix, msg);
    }

    void assert_all_instance_layers() const;
    void assert_all_instance_extensions() const;

    bool has_all_device_layers(VkPhysicalDevice phy) const;
    bool has_all_device_extensions(VkPhysicalDevice phy) const;

    // called by init_vk
    virtual PFN_vkGetInstanceProcAddr load_vk() = 0;
    virtual bool can_present(VkPhysicalDevice phy, uint32_t queue_family) = 0;
    void init_instance();
    void init_debug_report();
    void init_physical_dev();

    // called by create_context
    void create_dev();
    void create_back_buffers();
    void destroy_back_buffers();
    virtual VkSurfaceKHR create_surface(VkInstance instance) = 0;
    void create_swapchain();
    void destroy_swapchain();

    void fake_present();

    Context ctx_;

    const float game_tick_;
    float game_time_;
};

#endif // SHELL_H
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/ShellAndroid.cpp000066400000000000000000000127101270147354000262240ustar00rootroot00000000000000/*
 * Copyright (C) 2016 Google, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included
 * in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

// header names below restored from usage (the archive extraction stripped
// the contents of the angle brackets)
#include <cassert>
#include <stdexcept>
#include <dlfcn.h>
#include <time.h>
#include <android/log.h>

#include "Helpers.h"
#include "Game.h"
#include "ShellAndroid.h"

namespace {

// copied from ShellXCB.cpp
class PosixTimer {
public:
    PosixTimer() { reset(); }

    void reset() { clock_gettime(CLOCK_MONOTONIC, &start_); }

    double get() const {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);

        constexpr long one_s_in_ns = 1000 * 1000 * 1000;
        constexpr double one_s_in_ns_d = static_cast<double>(one_s_in_ns);

        time_t s = now.tv_sec - start_.tv_sec;
        long ns;
        if (now.tv_nsec > start_.tv_nsec) {
            ns = now.tv_nsec - start_.tv_nsec;
        } else {
            assert(s > 0);
            s--;
            ns = one_s_in_ns - (start_.tv_nsec - now.tv_nsec);
        }

        return static_cast<double>(s) + static_cast<double>(ns) / one_s_in_ns_d;
    }

private:
    struct timespec start_;
};

} // namespace

ShellAndroid::ShellAndroid(android_app &app, Game &game)
    : Shell(game), app_(app) {
    instance_extensions_.push_back(VK_KHR_ANDROID_SURFACE_EXTENSION_NAME);

    app_dummy();
    app_.userData = this;
    app_.onAppCmd = on_app_cmd;
    app_.onInputEvent = on_input_event;

    init_vk();
}

ShellAndroid::~ShellAndroid() {
    cleanup_vk();
    dlclose(lib_handle_);
}

void ShellAndroid::log(LogPriority priority, const char *msg) {
    int prio;

    switch (priority) {
    case LOG_DEBUG:
        prio = ANDROID_LOG_DEBUG;
        break;
    case LOG_INFO:
        prio = ANDROID_LOG_INFO;
        break;
    case LOG_WARN:
        prio = ANDROID_LOG_WARN;
        break;
    case LOG_ERR:
        prio = ANDROID_LOG_ERROR;
        break;
    default:
        prio = ANDROID_LOG_UNKNOWN;
        break;
    }

    __android_log_write(prio, settings_.name.c_str(), msg);
}

PFN_vkGetInstanceProcAddr ShellAndroid::load_vk() {
    const char filename[] = "libvulkan.so";
    void *handle = nullptr, *symbol = nullptr;

    handle = dlopen(filename, RTLD_LAZY);
    if (handle)
        symbol = dlsym(handle, "vkGetInstanceProcAddr");
    if (!symbol) {
        if (handle)
            dlclose(handle);

        throw std::runtime_error(dlerror());
    }

    lib_handle_ = handle;

    return reinterpret_cast<PFN_vkGetInstanceProcAddr>(symbol);
}

VkSurfaceKHR ShellAndroid::create_surface(VkInstance instance) {
    VkAndroidSurfaceCreateInfoKHR surface_info = {};
    surface_info.sType = VK_STRUCTURE_TYPE_ANDROID_SURFACE_CREATE_INFO_KHR;
    surface_info.window = app_.window;

    VkSurfaceKHR surface;
    vk::assert_success(vk::CreateAndroidSurfaceKHR(instance, &surface_info, nullptr, &surface));

    return surface;
}

void ShellAndroid::on_app_cmd(int32_t cmd) {
    switch (cmd) {
    case APP_CMD_INIT_WINDOW:
        create_context();
        resize_swapchain(0, 0);
        break;
    case APP_CMD_TERM_WINDOW:
        destroy_context();
        break;
    case APP_CMD_WINDOW_RESIZED:
        resize_swapchain(0, 0);
        break;
    case APP_CMD_STOP:
        ANativeActivity_finish(app_.activity);
        break;
    default:
        break;
    }
}

int32_t ShellAndroid::on_input_event(const AInputEvent *event) {
    if (AInputEvent_getType(event) != AINPUT_EVENT_TYPE_MOTION)
        return false;

    bool handled = false;

    switch (AMotionEvent_getAction(event) & AMOTION_EVENT_ACTION_MASK) {
    case AMOTION_EVENT_ACTION_UP:
        game_.on_key(Game::KEY_SPACE);
        handled = true;
        break;
    default:
        break;
    }

    return handled;
}

void ShellAndroid::quit() {
    ANativeActivity_finish(app_.activity);
}

void ShellAndroid::run() {
    PosixTimer timer;

    double current_time = timer.get();

    while (true) {
        struct android_poll_source *source;
        while (true) {
            int timeout = (settings_.animate && app_.window) ? 0 : -1;
            if (ALooper_pollAll(timeout, nullptr, nullptr,
                        reinterpret_cast<void **>(&source)) < 0)
                break;

            if (source)
                source->process(&app_, source);
        }

        if (app_.destroyRequested)
            break;

        if (!app_.window)
            continue;

        acquire_back_buffer();

        double t = timer.get();
        add_game_time(static_cast<float>(t - current_time));

        present_back_buffer();

        current_time = t;
    }
}
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/ShellAndroid.h000066400000000000000000000043541270147354000256760ustar00rootroot00000000000000/*
 * Copyright (C) 2016 Google, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included
 * in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#ifndef SHELL_ANDROID_H
#define SHELL_ANDROID_H

#include <android_native_app_glue.h>

#include "Shell.h"

class ShellAndroid : public Shell {
public:
    ShellAndroid(android_app &app, Game &game);
    ~ShellAndroid();

    void log(LogPriority priority, const char *msg);

    void run();
    void quit();

private:
    PFN_vkGetInstanceProcAddr load_vk();
    bool can_present(VkPhysicalDevice phy, uint32_t queue_family) { return true; }

    VkSurfaceKHR create_surface(VkInstance instance);

    void on_app_cmd(int32_t cmd);
    int32_t on_input_event(const AInputEvent *event);

    static inline void on_app_cmd(android_app *app, int32_t cmd);
    static inline int32_t on_input_event(android_app *app, AInputEvent *event);

    android_app &app_;

    void *lib_handle_;
};

void ShellAndroid::on_app_cmd(android_app *app, int32_t cmd) {
    auto android = reinterpret_cast<ShellAndroid *>(app->userData);
    android->on_app_cmd(cmd);
}

int32_t ShellAndroid::on_input_event(android_app *app, AInputEvent *event) {
    auto android = reinterpret_cast<ShellAndroid *>(app->userData);
    return android->on_input_event(event);
}

#endif // SHELL_ANDROID_H

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/ShellWin32.cpp

/*
 * Copyright (C) 2016 Google, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included
 * in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#include <cassert>
#include <sstream>
#include <string>

#include "Helpers.h"
#include "Game.h"
#include "ShellWin32.h"

namespace {

class Win32Timer {
public:
    Win32Timer() {
        LARGE_INTEGER freq;
        QueryPerformanceFrequency(&freq);
        freq_ = static_cast<double>(freq.QuadPart);

        reset();
    }

    void reset() { QueryPerformanceCounter(&start_); }

    double get() const {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);

        return static_cast<double>(now.QuadPart - start_.QuadPart) / freq_;
    }

private:
    double freq_;
    LARGE_INTEGER start_;
};

} // namespace

ShellWin32::ShellWin32(Game &game) : Shell(game), hwnd_(nullptr) {
    instance_extensions_.push_back(VK_KHR_WIN32_SURFACE_EXTENSION_NAME);

    init_vk();
}

ShellWin32::~ShellWin32() {
    cleanup_vk();
    FreeLibrary(hmodule_);
}

void ShellWin32::create_window() {
    const std::string class_name(settings_.name + "WindowClass");

    hinstance_ = GetModuleHandle(nullptr);

    WNDCLASSEX win_class = {};
    win_class.cbSize = sizeof(WNDCLASSEX);
    win_class.style = CS_HREDRAW | CS_VREDRAW;
    win_class.lpfnWndProc = window_proc;
    win_class.hInstance = hinstance_;
    win_class.hCursor = LoadCursor(nullptr, IDC_ARROW);
    win_class.lpszClassName = class_name.c_str();
    RegisterClassEx(&win_class);

    const DWORD win_style = WS_CLIPSIBLINGS | WS_CLIPCHILDREN | WS_VISIBLE | WS_OVERLAPPEDWINDOW;

    RECT win_rect = {0, 0, settings_.initial_width, settings_.initial_height};
    AdjustWindowRect(&win_rect, win_style, false);

    hwnd_ = CreateWindowEx(WS_EX_APPWINDOW, class_name.c_str(), settings_.name.c_str(), win_style, 0, 0,
                           win_rect.right - win_rect.left, win_rect.bottom - win_rect.top,
                           nullptr, nullptr, hinstance_, nullptr);

    SetForegroundWindow(hwnd_);
    SetWindowLongPtr(hwnd_, GWLP_USERDATA, (LONG_PTR)this);
}

PFN_vkGetInstanceProcAddr ShellWin32::load_vk() {
    const char filename[] = "vulkan-1.dll";
    HMODULE mod;
    PFN_vkGetInstanceProcAddr get_proc;

    mod = LoadLibrary(filename);
    if (mod) {
        get_proc = reinterpret_cast<PFN_vkGetInstanceProcAddr>(GetProcAddress(mod, "vkGetInstanceProcAddr"));
    }

    if (!mod || !get_proc) {
        std::stringstream ss;
        ss << "failed to load " << filename;

        if (mod)
            FreeLibrary(mod);

        throw std::runtime_error(ss.str());
    }

    hmodule_ = mod;

    return get_proc;
}

bool ShellWin32::can_present(VkPhysicalDevice phy, uint32_t queue_family) {
    return vk::GetPhysicalDeviceWin32PresentationSupportKHR(phy, queue_family);
}

VkSurfaceKHR ShellWin32::create_surface(VkInstance instance) {
    VkWin32SurfaceCreateInfoKHR surface_info = {};
    surface_info.sType = VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR;
    surface_info.hinstance = hinstance_;
    surface_info.hwnd = hwnd_;

    VkSurfaceKHR surface;
    vk::assert_success(vk::CreateWin32SurfaceKHR(instance, &surface_info, nullptr, &surface));

    return surface;
}

LRESULT ShellWin32::handle_message(UINT msg, WPARAM wparam, LPARAM lparam) {
    switch (msg) {
    case WM_SIZE:
        {
            UINT w = LOWORD(lparam);
            UINT h = HIWORD(lparam);
            resize_swapchain(w, h);
        }
        break;
    case WM_KEYDOWN:
        {
            Game::Key key;

            switch (wparam) {
            case VK_ESCAPE:
                key = Game::KEY_ESC;
                break;
            case VK_UP:
                key = Game::KEY_UP;
                break;
            case VK_DOWN:
                key = Game::KEY_DOWN;
                break;
            case VK_SPACE:
                key = Game::KEY_SPACE;
                break;
            default:
                key = Game::KEY_UNKNOWN;
                break;
            }

            game_.on_key(key);
        }
        break;
    case WM_CLOSE:
        game_.on_key(Game::KEY_SHUTDOWN);
        break;
    case WM_DESTROY:
        quit();
        break;
    default:
        return DefWindowProc(hwnd_, msg, wparam, lparam);
        break;
    }

    return 0;
}

void ShellWin32::quit() { PostQuitMessage(0); }

void ShellWin32::run() {
    create_window();

    create_context();
    resize_swapchain(settings_.initial_width, settings_.initial_height);

    Win32Timer timer;
    double current_time = timer.get();

    while (true) {
        bool quit = false;

        assert(settings_.animate);

        // process all messages
        MSG msg;
        while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE)) {
            if (msg.message == WM_QUIT) {
                quit = true;
                break;
            }

            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }

        if (quit)
            break;

        acquire_back_buffer();

        double t = timer.get();
        add_game_time(static_cast<float>(t - current_time));

        present_back_buffer();

        current_time = t;
    }

    destroy_context();

    DestroyWindow(hwnd_);
}

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/ShellWin32.h

/*
 * Copyright (C) 2016 Google, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included
 * in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#ifndef SHELL_WIN32_H
#define SHELL_WIN32_H

#include <windows.h>

#include "Shell.h"

class ShellWin32 : public Shell {
public:
    ShellWin32(Game &game);
    ~ShellWin32();

    void run();
    void quit();

private:
    PFN_vkGetInstanceProcAddr load_vk();
    bool can_present(VkPhysicalDevice phy, uint32_t queue_family);

    void create_window();
    VkSurfaceKHR create_surface(VkInstance instance);

    static LRESULT CALLBACK window_proc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam) {
        ShellWin32 *shell =
            reinterpret_cast<ShellWin32 *>(GetWindowLongPtr(hwnd, GWLP_USERDATA));

        // called from constructor, CreateWindowEx specifically.  But why?
        if (!shell)
            return DefWindowProc(hwnd, uMsg, wParam, lParam);

        return shell->handle_message(uMsg, wParam, lParam);
    }

    LRESULT handle_message(UINT msg, WPARAM wparam, LPARAM lparam);

    HINSTANCE hinstance_;
    HWND hwnd_;

    HMODULE hmodule_;
};

#endif // SHELL_WIN32_H

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/ShellXcb.cpp

/*
 * Copyright (C) 2016 Google, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included
 * in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#include <cassert>
#include <sstream>
#include <time.h>
#include <dlfcn.h>

#include "Helpers.h"
#include "Game.h"
#include "ShellXcb.h"

namespace {

class PosixTimer {
public:
    PosixTimer() { reset(); }

    void reset() { clock_gettime(CLOCK_MONOTONIC, &start_); }

    double get() const {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);

        constexpr long one_s_in_ns = 1000 * 1000 * 1000;
        constexpr double one_s_in_ns_d = static_cast<double>(one_s_in_ns);

        time_t s = now.tv_sec - start_.tv_sec;
        long ns;
        if (now.tv_nsec > start_.tv_nsec) {
            ns = now.tv_nsec - start_.tv_nsec;
        } else {
            assert(s > 0);
            s--;
            ns = one_s_in_ns - (start_.tv_nsec - now.tv_nsec);
        }

        return static_cast<double>(s) + static_cast<double>(ns) / one_s_in_ns_d;
    }

private:
    struct timespec start_;
};

xcb_intern_atom_cookie_t intern_atom_cookie(xcb_connection_t *c, const std::string &s) {
    return xcb_intern_atom(c, false, s.size(), s.c_str());
}

xcb_atom_t intern_atom(xcb_connection_t *c, xcb_intern_atom_cookie_t cookie) {
    xcb_atom_t atom = XCB_ATOM_NONE;
    xcb_intern_atom_reply_t *reply = xcb_intern_atom_reply(c, cookie, nullptr);

    if (reply) {
        atom = reply->atom;
        free(reply);
    }

    return atom;
}

} // namespace

ShellXcb::ShellXcb(Game &game) : Shell(game) {
    instance_extensions_.push_back(VK_KHR_XCB_SURFACE_EXTENSION_NAME);

    init_connection();
    init_vk();
}

ShellXcb::~ShellXcb() {
    cleanup_vk();
    dlclose(lib_handle_);

    xcb_disconnect(c_);
}

void ShellXcb::init_connection() {
    int scr;

    c_ = xcb_connect(nullptr, &scr);
    if (!c_ || xcb_connection_has_error(c_)) {
        xcb_disconnect(c_);
        throw std::runtime_error("failed to connect to the display server");
    }

    const xcb_setup_t *setup = xcb_get_setup(c_);
    xcb_screen_iterator_t iter = xcb_setup_roots_iterator(setup);
    while (scr-- > 0)
        xcb_screen_next(&iter);

    scr_ = iter.data;
}

void ShellXcb::create_window() {
    win_ = xcb_generate_id(c_);

    uint32_t value_mask, value_list[32];
    value_mask = XCB_CW_BACK_PIXEL | XCB_CW_EVENT_MASK;
    value_list[0] = scr_->black_pixel;
    value_list[1] = XCB_EVENT_MASK_KEY_PRESS | XCB_EVENT_MASK_STRUCTURE_NOTIFY;

    xcb_create_window(c_, XCB_COPY_FROM_PARENT, win_, scr_->root, 0, 0,
                      settings_.initial_width, settings_.initial_height, 0,
                      XCB_WINDOW_CLASS_INPUT_OUTPUT, scr_->root_visual,
                      value_mask, value_list);

    xcb_intern_atom_cookie_t utf8_string_cookie = intern_atom_cookie(c_, "UTF8_STRING");
    xcb_intern_atom_cookie_t _net_wm_name_cookie = intern_atom_cookie(c_, "_NET_WM_NAME");
    xcb_intern_atom_cookie_t wm_protocols_cookie = intern_atom_cookie(c_, "WM_PROTOCOLS");
    xcb_intern_atom_cookie_t wm_delete_window_cookie = intern_atom_cookie(c_, "WM_DELETE_WINDOW");

    // set title
    xcb_atom_t utf8_string = intern_atom(c_, utf8_string_cookie);
    xcb_atom_t _net_wm_name = intern_atom(c_, _net_wm_name_cookie);
    xcb_change_property(c_, XCB_PROP_MODE_REPLACE, win_, _net_wm_name, utf8_string, 8,
                        settings_.name.size(), settings_.name.c_str());

    // advertise WM_DELETE_WINDOW
    wm_protocols_ = intern_atom(c_, wm_protocols_cookie);
    wm_delete_window_ = intern_atom(c_, wm_delete_window_cookie);
    xcb_change_property(c_, XCB_PROP_MODE_REPLACE, win_, wm_protocols_,
                        XCB_ATOM_ATOM, 32, 1, &wm_delete_window_);
}

PFN_vkGetInstanceProcAddr ShellXcb::load_vk() {
    const char filename[] = "libvulkan.so";
    void *handle, *symbol;

#ifdef UNINSTALLED_LOADER
    handle = dlopen(UNINSTALLED_LOADER, RTLD_LAZY);
    if (!handle)
        handle = dlopen(filename, RTLD_LAZY);
#else
    handle = dlopen(filename, RTLD_LAZY);
#endif

    if (handle)
        symbol = dlsym(handle, "vkGetInstanceProcAddr");

    if (!handle || !symbol) {
        std::stringstream ss;
        ss << "failed to load " << dlerror();

        if (handle)
            dlclose(handle);

        throw std::runtime_error(ss.str());
    }

    lib_handle_ = handle;

    return reinterpret_cast<PFN_vkGetInstanceProcAddr>(symbol);
}

bool ShellXcb::can_present(VkPhysicalDevice phy, uint32_t queue_family) {
    return vk::GetPhysicalDeviceXcbPresentationSupportKHR(phy, queue_family, c_, scr_->root_visual);
}

VkSurfaceKHR ShellXcb::create_surface(VkInstance instance) {
    VkXcbSurfaceCreateInfoKHR surface_info = {};
    surface_info.sType = VK_STRUCTURE_TYPE_XCB_SURFACE_CREATE_INFO_KHR;
    surface_info.connection = c_;
    surface_info.window = win_;

    VkSurfaceKHR surface;
    vk::assert_success(vk::CreateXcbSurfaceKHR(instance, &surface_info, nullptr, &surface));

    return surface;
}

void ShellXcb::handle_event(const xcb_generic_event_t *ev) {
    switch (ev->response_type & 0x7f) {
    case XCB_CONFIGURE_NOTIFY:
        {
            const xcb_configure_notify_event_t *notify =
                reinterpret_cast<const xcb_configure_notify_event_t *>(ev);
            resize_swapchain(notify->width, notify->height);
        }
        break;
    case XCB_KEY_PRESS:
        {
            const xcb_key_press_event_t *press =
                reinterpret_cast<const xcb_key_press_event_t *>(ev);
            Game::Key key;

            // TODO translate xcb_keycode_t
            switch (press->detail) {
            case 9:
                key = Game::KEY_ESC;
                break;
            case 111:
                key = Game::KEY_UP;
                break;
            case 116:
                key = Game::KEY_DOWN;
                break;
            case 65:
                key = Game::KEY_SPACE;
                break;
            default:
                key = Game::KEY_UNKNOWN;
                break;
            }

            game_.on_key(key);
        }
        break;
    case XCB_CLIENT_MESSAGE:
        {
            const xcb_client_message_event_t *msg =
                reinterpret_cast<const xcb_client_message_event_t *>(ev);
            if (msg->type == wm_protocols_ && msg->data.data32[0] == wm_delete_window_)
                game_.on_key(Game::KEY_SHUTDOWN);
        }
        break;
    default:
        break;
    }
}

void ShellXcb::loop_wait() {
    while (true) {
        xcb_generic_event_t *ev = xcb_wait_for_event(c_);
        if (!ev)
            continue;

        handle_event(ev);
        free(ev);

        if (quit_)
            break;

        acquire_back_buffer();
        present_back_buffer();
    }
}

void ShellXcb::loop_poll() {
    PosixTimer timer;

    double current_time = timer.get();
    double profile_start_time = current_time;
    int profile_present_count = 0;

    while (true) {
        // handle pending events
        while (true) {
            xcb_generic_event_t *ev = xcb_poll_for_event(c_);
            if (!ev)
                break;

            handle_event(ev);
            free(ev);
        }

        if (quit_)
            break;

        acquire_back_buffer();

        double t = timer.get();
        add_game_time(static_cast<float>(t - current_time));

        present_back_buffer();

        current_time = t;

        profile_present_count++;
        if (current_time - profile_start_time >= 5.0) {
            const double fps = profile_present_count / (current_time - profile_start_time);

            std::stringstream ss;
            ss << profile_present_count << " presents in "
               << current_time - profile_start_time << " seconds "
               << "(FPS: " << fps << ")";
            log(LOG_INFO, ss.str().c_str());

            profile_start_time = current_time;
            profile_present_count = 0;
        }
    }
}

void ShellXcb::run() {
    create_window();
    xcb_map_window(c_, win_);
    xcb_flush(c_);

    create_context();
    resize_swapchain(settings_.initial_width, settings_.initial_height);

    quit_ = false;
    if (settings_.animate)
        loop_poll();
    else
        loop_wait();

    destroy_context();

    xcb_destroy_window(c_, win_);
    xcb_flush(c_);
}

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/ShellXcb.h

/*
 * Copyright (C) 2016 Google, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included
 * in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#ifndef SHELL_XCB_H
#define SHELL_XCB_H

#include <xcb/xcb.h>

#include "Shell.h"

class ShellXcb : public Shell {
public:
    ShellXcb(Game &game);
    ~ShellXcb();

    void run();
    void quit() { quit_ = true; }

private:
    void init_connection();

    PFN_vkGetInstanceProcAddr load_vk();
    bool can_present(VkPhysicalDevice phy, uint32_t queue_family);

    void create_window();
    VkSurfaceKHR create_surface(VkInstance instance);

    void handle_event(const xcb_generic_event_t *ev);
    void loop_wait();
    void loop_poll();

    xcb_connection_t *c_;
    xcb_screen_t *scr_;
    xcb_window_t win_;

    xcb_atom_t wm_protocols_;
    xcb_atom_t wm_delete_window_;

    void *lib_handle_;

    bool quit_;
};

#endif // SHELL_XCB_H

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/Simulation.cpp

/*
 * Copyright (C) 2016 Google, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included
 * in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
*/ #include #include #include #include #include "Simulation.h" namespace { class MeshPicker { public: MeshPicker() : pattern_({ Meshes::MESH_PYRAMID, Meshes::MESH_ICOSPHERE, Meshes::MESH_TEAPOT, Meshes::MESH_PYRAMID, Meshes::MESH_ICOSPHERE, Meshes::MESH_PYRAMID, Meshes::MESH_PYRAMID, Meshes::MESH_PYRAMID, Meshes::MESH_PYRAMID, Meshes::MESH_PYRAMID, }), cur_(-1) { } Meshes::Type pick() { cur_ = (cur_ + 1) % pattern_.size(); return pattern_[cur_]; } float scale(Meshes::Type type) const { float base = 0.005f; switch (type) { case Meshes::MESH_PYRAMID: default: return base * 1.0f; case Meshes::MESH_ICOSPHERE: return base * 3.0f; case Meshes::MESH_TEAPOT: return base * 10.0f; } } private: const std::array pattern_; int cur_; }; class ColorPicker { public: ColorPicker(unsigned int rng_seed) : rng_(rng_seed), red_(0.0f, 1.0f), green_(0.0f, 1.0f), blue_(0.0f, 1.0f) { } glm::vec3 pick() { return glm::vec3{ red_(rng_), green_(rng_), blue_(rng_) }; } private: std::mt19937 rng_; std::uniform_real_distribution red_; std::uniform_real_distribution green_; std::uniform_real_distribution blue_; }; } // namespace Animation::Animation(unsigned int rng_seed, float scale) : rng_(rng_seed), dir_(-1.0f, 1.0f), speed_(0.1f, 1.0f) { float x = dir_(rng_); float y = dir_(rng_); float z = dir_(rng_); if (std::abs(x) + std::abs(y) + std::abs(z) == 0.0f) x = 1.0f; current_.axis = glm::normalize(glm::vec3(x, y, z)); current_.speed = speed_(rng_); current_.scale = scale; current_.matrix = glm::scale(glm::mat4(1.0f), glm::vec3(current_.scale)); } glm::mat4 Animation::transformation(float t) { current_.matrix = glm::rotate(current_.matrix, current_.speed * t, current_.axis); return current_.matrix; } class Curve { public: virtual ~Curve() {} virtual glm::vec3 evaluate(float t) = 0; }; namespace { enum CurveType { CURVE_RANDOM, CURVE_CIRCLE, CURVE_COUNT, }; class RandomCurve : public Curve { public: RandomCurve(unsigned int rng_seed) : rng_(rng_seed), direction_(-0.3f, 0.3f), duration_(1.0f, 5.0f), 
segment_start_(0.0f), segment_direction_(0.0f), time_start_(0.0f), time_duration_(0.0f) { } glm::vec3 evaluate(float t) { if (t >= time_start_ + time_duration_) new_segment(t); pos_ += unit_dir_ * (t - last_); last_ = t; return pos_; } private: void new_segment(float time_start) { segment_start_ += segment_direction_; segment_direction_ = glm::vec3(direction_(rng_), direction_(rng_), direction_(rng_)); time_start_ = time_start; time_duration_ = duration_(rng_); unit_dir_ = segment_direction_ / time_duration_; pos_ = segment_start_; last_ = time_start_; } std::mt19937 rng_; std::uniform_real_distribution direction_; std::uniform_real_distribution duration_; glm::vec3 segment_start_; glm::vec3 segment_direction_; float time_start_; float time_duration_; glm::vec3 unit_dir_; glm::vec3 pos_; float last_; }; class CircleCurve : public Curve { public: CircleCurve(float radius, glm::vec3 axis) : r_(radius) { glm::vec3 a; if (axis.x != 0.0f) { a.x = -axis.z / axis.x; a.y = 0.0f; a.z = 1.0f; } else if (axis.y != 0.0f) { a.x = 1.0f; a.y = -axis.x / axis.y; a.z = 0.0f; } else { a.x = 1.0f; a.y = 0.0f; a.z = -axis.x / axis.z; } a_ = glm::normalize(a); b_ = glm::normalize(glm::cross(a_, axis)); } glm::vec3 evaluate(float t) { return (a_ * (glm::vec3(std::cos(t)) - glm::vec3(1.0f)) + b_ * glm::vec3(std::sin(t))) * glm::vec3(r_); } private: float r_; glm::vec3 a_; glm::vec3 b_; }; } // namespace Path::Path(unsigned int rng_seed) : rng_(rng_seed), type_(0, CURVE_COUNT - 1), duration_(5.0f, 20.0f) { // trigger a subpath generation current_.end = -1.0f; current_.now = 0.0f; } glm::vec3 Path::position(float t) { current_.now += t; while (current_.now >= current_.end) generate_subpath(); return current_.origin + current_.curve->evaluate(current_.now - current_.start); } void Path::generate_subpath() { float duration = duration_(rng_); CurveType type = static_cast(type_(rng_)); if (current_.curve) { current_.origin += current_.curve->evaluate(current_.end - current_.start); 
current_.start = current_.end; } else { std::uniform_real_distribution origin(0.0f, 2.0f); current_.origin = glm::vec3(origin(rng_), origin(rng_), origin(rng_)); current_.start = current_.now; } current_.end = current_.start + duration; Curve *curve; switch (type) { case CURVE_RANDOM: curve = new RandomCurve(rng_()); break; case CURVE_CIRCLE: { std::uniform_real_distribution dir(-1.0f, 1.0f); glm::vec3 axis(dir(rng_), dir(rng_), dir(rng_)); if (axis.x == 0.0f && axis.y == 0.0f && axis.z == 0.0f) axis.x = 1.0f; std::uniform_real_distribution radius_(0.02f, 0.2f); curve = new CircleCurve(radius_(rng_), axis); } break; default: assert(!"unreachable"); curve = nullptr; break; } current_.curve.reset(curve); } Simulation::Simulation(int object_count) : random_dev_() { MeshPicker mesh; ColorPicker color(random_dev_()); objects_.reserve(object_count); for (int i = 0; i < object_count; i++) { Meshes::Type type = mesh.pick(); float scale = mesh.scale(type); objects_.emplace_back(Object{ type, glm::vec3(0.5 + 0.5 * (float) i / object_count), color.pick(), Animation(random_dev_(), scale), Path(random_dev_()), }); } } void Simulation::set_frame_data_size(uint32_t size) { uint32_t offset = 0; for (auto &obj : objects_) { obj.frame_data_offset = offset; offset += size; } } void Simulation::update(float time, int begin, int end) { for (int i = begin; i < end; i++) { auto &obj = objects_[i]; glm::vec3 pos = obj.path.position(time); glm::mat4 trans = obj.animation.transformation(time); obj.model = glm::translate(glm::mat4(1.0f), pos) * trans; } } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/Simulation.h000066400000000000000000000052141270147354000254460ustar00rootroot00000000000000/* * Copyright (C) 2016 Google, Inc. 
* * Permission is hereby granted, free of charge, to any person obtaining a * copy of this software and associated documentation files (the "Software"), * to deal in the Software without restriction, including without limitation * the rights to use, copy, modify, merge, publish, distribute, sublicense, * and/or sell copies of the Software, and to permit persons to whom the * Software is furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included * in all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. 
 */

#ifndef SIMULATION_H
#define SIMULATION_H

#include <memory>
#include <random>
#include <vector>

#include <glm/glm.hpp>

#include "Meshes.h"

class Animation {
public:
    Animation(unsigned rng_seed, float scale);

    glm::mat4 transformation(float t);

private:
    struct Data {
        glm::vec3 axis;
        float speed;
        float scale;

        glm::mat4 matrix;
    };

    std::mt19937 rng_;
    std::uniform_real_distribution<float> dir_;
    std::uniform_real_distribution<float> speed_;

    Data current_;
};

class Curve;

class Path {
public:
    Path(unsigned rng_seed);

    glm::vec3 position(float t);

private:
    struct Subpath {
        glm::vec3 origin;
        float start;
        float end;
        float now;

        std::shared_ptr<Curve> curve;
    };

    void generate_subpath();

    std::mt19937 rng_;
    std::uniform_int_distribution<> type_;
    std::uniform_real_distribution<float> duration_;

    Subpath current_;
};

class Simulation {
public:
    Simulation(int object_count);

    struct Object {
        Meshes::Type mesh;
        glm::vec3 light_pos;
        glm::vec3 light_color;

        Animation animation;
        Path path;

        uint32_t frame_data_offset;

        glm::mat4 model;
    };

    const std::vector<Object> &objects() const { return objects_; }
    unsigned int rng_seed() { return random_dev_(); }

    void set_frame_data_size(uint32_t size);
    void update(float time, int begin, int end);

private:
    std::random_device random_dev_;
    std::vector<Object> objects_;
};

#endif // SIMULATION_H

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/Smoke.cpp

/*
 * Copyright (C) 2016 Google, Inc.
* * Permission is hereby granted, free of charge, to any person obtaining a * copy of this software and associated documentation files (the "Software"), * to deal in the Software without restriction, including without limitation * the rights to use, copy, modify, merge, publish, distribute, sublicense, * and/or sell copies of the Software, and to permit persons to whom the * Software is furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included * in all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. 
*/ #include #include #include #include "Helpers.h" #include "Smoke.h" #include "Meshes.h" #include "Shell.h" namespace { // TODO do not rely on compiler to use std140 layout // TODO move lower frequency data to another descriptor set struct ShaderParamBlock { float light_pos[4]; float light_color[4]; float model[4 * 4]; float view_projection[4 * 4]; }; } // namespace Smoke::Smoke(const std::vector &args) : Game("Smoke", args), multithread_(true), use_push_constants_(false), sim_paused_(false), sim_(5000), camera_(2.5f), frame_data_(), render_pass_clear_value_({{ 0.0f, 0.1f, 0.2f, 1.0f }}), render_pass_begin_info_(), primary_cmd_begin_info_(), primary_cmd_submit_info_() { for (auto it = args.begin(); it != args.end(); ++it) { if (*it == "-s") multithread_ = false; else if (*it == "-p") use_push_constants_ = true; } init_workers(); } Smoke::~Smoke() { } void Smoke::init_workers() { int worker_count = std::thread::hardware_concurrency(); // not enough cores if (!multithread_ || worker_count < 2) { multithread_ = false; worker_count = 1; } const int object_per_worker = sim_.objects().size() / worker_count; int object_begin = 0, object_end = 0; workers_.reserve(worker_count); for (int i = 0; i < worker_count; i++) { object_begin = object_end; if (i < worker_count - 1) object_end += object_per_worker; else object_end = sim_.objects().size(); Worker *worker = new Worker(*this, i, object_begin, object_end); workers_.emplace_back(std::unique_ptr(worker)); } } void Smoke::attach_shell(Shell &sh) { Game::attach_shell(sh); const Shell::Context &ctx = sh.context(); physical_dev_ = ctx.physical_dev; dev_ = ctx.dev; queue_ = ctx.game_queue; queue_family_ = ctx.game_queue_family; format_ = ctx.format.format; vk::GetPhysicalDeviceProperties(physical_dev_, &physical_dev_props_); if (use_push_constants_ && sizeof(ShaderParamBlock) > physical_dev_props_.limits.maxPushConstantsSize) { shell_->log(Shell::LOG_WARN, "cannot enable push constants"); use_push_constants_ = false; } 
VkPhysicalDeviceMemoryProperties mem_props; vk::GetPhysicalDeviceMemoryProperties(physical_dev_, &mem_props); mem_flags_.reserve(mem_props.memoryTypeCount); for (uint32_t i = 0; i < mem_props.memoryTypeCount; i++) mem_flags_.push_back(mem_props.memoryTypes[i].propertyFlags); meshes_ = new Meshes(dev_, mem_flags_); create_render_pass(); create_shader_modules(); create_descriptor_set_layout(); create_pipeline_layout(); create_pipeline(); create_frame_data(2); render_pass_begin_info_.sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO; render_pass_begin_info_.renderPass = render_pass_; render_pass_begin_info_.clearValueCount = 1; render_pass_begin_info_.pClearValues = &render_pass_clear_value_; primary_cmd_begin_info_.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO; primary_cmd_begin_info_.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT; // we will render to the swapchain images primary_cmd_submit_wait_stages_ = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT; primary_cmd_submit_info_.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO; primary_cmd_submit_info_.waitSemaphoreCount = 1; primary_cmd_submit_info_.pWaitDstStageMask = &primary_cmd_submit_wait_stages_; primary_cmd_submit_info_.commandBufferCount = 1; primary_cmd_submit_info_.signalSemaphoreCount = 1; if (multithread_) { for (auto &worker : workers_) worker->start(); } } void Smoke::detach_shell() { if (multithread_) { for (auto &worker : workers_) worker->stop(); } destroy_frame_data(); vk::DestroyPipeline(dev_, pipeline_, nullptr); vk::DestroyPipelineLayout(dev_, pipeline_layout_, nullptr); if (!use_push_constants_) vk::DestroyDescriptorSetLayout(dev_, desc_set_layout_, nullptr); vk::DestroyShaderModule(dev_, fs_, nullptr); vk::DestroyShaderModule(dev_, vs_, nullptr); vk::DestroyRenderPass(dev_, render_pass_, nullptr); delete meshes_; Game::detach_shell(); } void Smoke::create_render_pass() { VkAttachmentDescription attachment = {}; attachment.format = format_; attachment.samples = VK_SAMPLE_COUNT_1_BIT; 
    attachment.loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR;
    attachment.storeOp = VK_ATTACHMENT_STORE_OP_STORE;
    attachment.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
    attachment.finalLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR;

    VkAttachmentReference attachment_ref = {};
    attachment_ref.attachment = 0;
    attachment_ref.layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;

    VkSubpassDescription subpass = {};
    subpass.pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS;
    subpass.colorAttachmentCount = 1;
    subpass.pColorAttachments = &attachment_ref;

    std::array<VkSubpassDependency, 2> subpass_deps;

    subpass_deps[0].srcSubpass = VK_SUBPASS_EXTERNAL;
    subpass_deps[0].dstSubpass = 0;
    subpass_deps[0].srcStageMask = VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT;
    subpass_deps[0].dstStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
    subpass_deps[0].srcAccessMask = VK_ACCESS_MEMORY_READ_BIT;
    subpass_deps[0].dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_READ_BIT |
                                    VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
    subpass_deps[0].dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;

    subpass_deps[1].srcSubpass = 0;
    subpass_deps[1].dstSubpass = VK_SUBPASS_EXTERNAL;
    subpass_deps[1].srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
    subpass_deps[1].dstStageMask = VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT;
    subpass_deps[1].srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_READ_BIT |
                                    VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
    subpass_deps[1].dstAccessMask = VK_ACCESS_MEMORY_READ_BIT;
    subpass_deps[1].dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;

    VkRenderPassCreateInfo render_pass_info = {};
    render_pass_info.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
    render_pass_info.attachmentCount = 1;
    render_pass_info.pAttachments = &attachment;
    render_pass_info.subpassCount = 1;
    render_pass_info.pSubpasses = &subpass;
    render_pass_info.dependencyCount = (uint32_t)subpass_deps.size();
    render_pass_info.pDependencies = subpass_deps.data();

    vk::assert_success(vk::CreateRenderPass(dev_, &render_pass_info, nullptr, &render_pass_));
}

void Smoke::create_shader_modules() {
    VkShaderModuleCreateInfo sh_info = {};
    sh_info.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    if (use_push_constants_) {
#include "Smoke.push_constant.vert.h"
        sh_info.codeSize = sizeof(Smoke_push_constant_vert);
        sh_info.pCode = Smoke_push_constant_vert;
    } else {
#include "Smoke.vert.h"
        sh_info.codeSize = sizeof(Smoke_vert);
        sh_info.pCode = Smoke_vert;
    }
    vk::assert_success(vk::CreateShaderModule(dev_, &sh_info, nullptr, &vs_));

#include "Smoke.frag.h"
    sh_info.codeSize = sizeof(Smoke_frag);
    sh_info.pCode = Smoke_frag;
    vk::assert_success(vk::CreateShaderModule(dev_, &sh_info, nullptr, &fs_));
}

void Smoke::create_descriptor_set_layout() {
    if (use_push_constants_)
        return;

    VkDescriptorSetLayoutBinding layout_binding = {};
    layout_binding.binding = 0;
    layout_binding.descriptorType = VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC;
    layout_binding.descriptorCount = 1;
    layout_binding.stageFlags = VK_SHADER_STAGE_VERTEX_BIT;

    VkDescriptorSetLayoutCreateInfo layout_info = {};
    layout_info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
    layout_info.bindingCount = 1;
    layout_info.pBindings = &layout_binding;

    vk::assert_success(vk::CreateDescriptorSetLayout(dev_, &layout_info, nullptr, &desc_set_layout_));
}

void Smoke::create_pipeline_layout() {
    VkPushConstantRange push_const_range = {};

    VkPipelineLayoutCreateInfo pipeline_layout_info = {};
    pipeline_layout_info.sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;

    if (use_push_constants_) {
        push_const_range.stageFlags = VK_SHADER_STAGE_VERTEX_BIT;
        push_const_range.offset = 0;
        push_const_range.size = sizeof(ShaderParamBlock);

        pipeline_layout_info.pushConstantRangeCount = 1;
        pipeline_layout_info.pPushConstantRanges = &push_const_range;
    } else {
        pipeline_layout_info.setLayoutCount = 1;
        pipeline_layout_info.pSetLayouts = &desc_set_layout_;
    }

    vk::assert_success(vk::CreatePipelineLayout(dev_, &pipeline_layout_info, nullptr, &pipeline_layout_));
}

void Smoke::create_pipeline() {
    VkPipelineShaderStageCreateInfo stage_info[2]
        = {};
    stage_info[0].sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
    stage_info[0].stage = VK_SHADER_STAGE_VERTEX_BIT;
    stage_info[0].module = vs_;
    stage_info[0].pName = "main";
    stage_info[1].sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
    stage_info[1].stage = VK_SHADER_STAGE_FRAGMENT_BIT;
    stage_info[1].module = fs_;
    stage_info[1].pName = "main";

    VkPipelineViewportStateCreateInfo viewport_info = {};
    viewport_info.sType = VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO;
    // both dynamic
    viewport_info.viewportCount = 1;
    viewport_info.scissorCount = 1;

    VkPipelineRasterizationStateCreateInfo rast_info = {};
    rast_info.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO;
    rast_info.depthClampEnable = false;
    rast_info.rasterizerDiscardEnable = false;
    rast_info.polygonMode = VK_POLYGON_MODE_FILL;
    rast_info.cullMode = VK_CULL_MODE_NONE;
    rast_info.frontFace = VK_FRONT_FACE_COUNTER_CLOCKWISE;
    rast_info.depthBiasEnable = false;
    rast_info.lineWidth = 1.0f;

    VkPipelineMultisampleStateCreateInfo multisample_info = {};
    multisample_info.sType = VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO;
    multisample_info.rasterizationSamples = VK_SAMPLE_COUNT_1_BIT;
    multisample_info.sampleShadingEnable = false;
    multisample_info.pSampleMask = nullptr;
    multisample_info.alphaToCoverageEnable = false;
    multisample_info.alphaToOneEnable = false;

    VkPipelineColorBlendAttachmentState blend_attachment = {};
    blend_attachment.blendEnable = true;
    blend_attachment.srcColorBlendFactor = VK_BLEND_FACTOR_SRC_ALPHA;
    blend_attachment.dstColorBlendFactor = VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA;
    blend_attachment.colorBlendOp = VK_BLEND_OP_ADD;
    blend_attachment.srcAlphaBlendFactor = VK_BLEND_FACTOR_ONE;
    blend_attachment.dstAlphaBlendFactor = VK_BLEND_FACTOR_ZERO;
    blend_attachment.alphaBlendOp = VK_BLEND_OP_ADD;
    blend_attachment.colorWriteMask = VK_COLOR_COMPONENT_R_BIT |
                                      VK_COLOR_COMPONENT_G_BIT |
                                      VK_COLOR_COMPONENT_B_BIT |
                                      VK_COLOR_COMPONENT_A_BIT;
    VkPipelineColorBlendStateCreateInfo blend_info = {};
    blend_info.sType = VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO;
    blend_info.logicOpEnable = false;
    blend_info.attachmentCount = 1;
    blend_info.pAttachments = &blend_attachment;

    std::array<VkDynamicState, 2> dynamic_states = {
        VK_DYNAMIC_STATE_VIEWPORT, VK_DYNAMIC_STATE_SCISSOR
    };
    struct VkPipelineDynamicStateCreateInfo dynamic_info = {};
    dynamic_info.sType = VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO;
    dynamic_info.dynamicStateCount = (uint32_t)dynamic_states.size();
    dynamic_info.pDynamicStates = dynamic_states.data();

    VkGraphicsPipelineCreateInfo pipeline_info = {};
    pipeline_info.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
    pipeline_info.stageCount = 2;
    pipeline_info.pStages = stage_info;
    pipeline_info.pVertexInputState = &meshes_->vertex_input_state();
    pipeline_info.pInputAssemblyState = &meshes_->input_assembly_state();
    pipeline_info.pTessellationState = nullptr;
    pipeline_info.pViewportState = &viewport_info;
    pipeline_info.pRasterizationState = &rast_info;
    pipeline_info.pMultisampleState = &multisample_info;
    pipeline_info.pDepthStencilState = nullptr;
    pipeline_info.pColorBlendState = &blend_info;
    pipeline_info.pDynamicState = &dynamic_info;
    pipeline_info.layout = pipeline_layout_;
    pipeline_info.renderPass = render_pass_;
    pipeline_info.subpass = 0;

    vk::assert_success(vk::CreateGraphicsPipelines(dev_, VK_NULL_HANDLE, 1,
                                                   &pipeline_info, nullptr, &pipeline_));
}

void Smoke::create_frame_data(int count) {
    frame_data_.resize(count);

    create_fences();
    create_command_buffers();

    if (!use_push_constants_) {
        create_buffers();
        create_buffer_memory();
        create_descriptor_sets();
    }

    frame_data_index_ = 0;
}

void Smoke::destroy_frame_data() {
    if (!use_push_constants_) {
        vk::DestroyDescriptorPool(dev_, desc_pool_, nullptr);

        for (auto cmd_pool : worker_cmd_pools_)
            vk::DestroyCommandPool(dev_, cmd_pool, nullptr);
        worker_cmd_pools_.clear();
        vk::DestroyCommandPool(dev_, primary_cmd_pool_, nullptr);

        vk::UnmapMemory(dev_,
                        frame_data_mem_);
        vk::FreeMemory(dev_, frame_data_mem_, nullptr);

        for (auto &data : frame_data_)
            vk::DestroyBuffer(dev_, data.buf, nullptr);
    }

    for (auto &data : frame_data_)
        vk::DestroyFence(dev_, data.fence, nullptr);

    frame_data_.clear();
}

void Smoke::create_fences() {
    VkFenceCreateInfo fence_info = {};
    fence_info.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;
    fence_info.flags = VK_FENCE_CREATE_SIGNALED_BIT;

    for (auto &data : frame_data_)
        vk::assert_success(vk::CreateFence(dev_, &fence_info, nullptr, &data.fence));
}

void Smoke::create_command_buffers() {
    VkCommandPoolCreateInfo cmd_pool_info = {};
    cmd_pool_info.sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
    cmd_pool_info.flags = VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT;
    cmd_pool_info.queueFamilyIndex = queue_family_;

    VkCommandBufferAllocateInfo cmd_info = {};
    cmd_info.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
    cmd_info.commandBufferCount = static_cast<uint32_t>(frame_data_.size());

    // create command pools and buffers
    std::vector<VkCommandPool> cmd_pools(workers_.size() + 1, VK_NULL_HANDLE);
    std::vector<std::vector<VkCommandBuffer>> cmds_vec(
        workers_.size() + 1,
        std::vector<VkCommandBuffer>(frame_data_.size(), VK_NULL_HANDLE));
    for (size_t i = 0; i < cmd_pools.size(); i++) {
        auto &cmd_pool = cmd_pools[i];
        auto &cmds = cmds_vec[i];

        vk::assert_success(vk::CreateCommandPool(dev_, &cmd_pool_info, nullptr, &cmd_pool));

        cmd_info.commandPool = cmd_pool;
        cmd_info.level = (cmd_pool == cmd_pools.back()) ?
            VK_COMMAND_BUFFER_LEVEL_PRIMARY : VK_COMMAND_BUFFER_LEVEL_SECONDARY;

        vk::assert_success(vk::AllocateCommandBuffers(dev_, &cmd_info, cmds.data()));
    }

    // update frame_data_
    for (size_t i = 0; i < frame_data_.size(); i++) {
        for (const auto &cmds : cmds_vec) {
            if (cmds == cmds_vec.back()) {
                frame_data_[i].primary_cmd = cmds[i];
            } else {
                frame_data_[i].worker_cmds.push_back(cmds[i]);
            }
        }
    }

    primary_cmd_pool_ = cmd_pools.back();
    cmd_pools.pop_back();
    worker_cmd_pools_ = cmd_pools;
}

void Smoke::create_buffers() {
    VkDeviceSize object_data_size = sizeof(ShaderParamBlock);

    // align object data to device limit
    const VkDeviceSize &alignment =
        physical_dev_props_.limits.minStorageBufferOffsetAlignment;
    if (object_data_size % alignment)
        object_data_size += alignment - (object_data_size % alignment);

    // update simulation
    sim_.set_frame_data_size(object_data_size);

    VkBufferCreateInfo buf_info = {};
    buf_info.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
    buf_info.size = object_data_size * sim_.objects().size();
    buf_info.usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT;
    buf_info.sharingMode = VK_SHARING_MODE_EXCLUSIVE;

    for (auto &data : frame_data_)
        vk::assert_success(vk::CreateBuffer(dev_, &buf_info, nullptr, &data.buf));
}

void Smoke::create_buffer_memory() {
    VkMemoryRequirements mem_reqs;
    vk::GetBufferMemoryRequirements(dev_, frame_data_[0].buf, &mem_reqs);

    VkDeviceSize aligned_size = mem_reqs.size;
    if (aligned_size % mem_reqs.alignment)
        aligned_size += mem_reqs.alignment - (aligned_size % mem_reqs.alignment);

    // allocate memory
    VkMemoryAllocateInfo mem_info = {};
    mem_info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
    mem_info.allocationSize = aligned_size * (frame_data_.size() - 1) + mem_reqs.size;

    for (uint32_t idx = 0; idx < mem_flags_.size(); idx++) {
        if ((mem_reqs.memoryTypeBits & (1 << idx)) &&
            (mem_flags_[idx] & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) &&
            (mem_flags_[idx] & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT)) {
            // TODO is this guaranteed to exist? (the spec requires at least one
            // HOST_VISIBLE | HOST_COHERENT memory type, but not that it is
            // compatible with every buffer's memoryTypeBits)
            mem_info.memoryTypeIndex = idx;
            break;
        }
    }

    vk::AllocateMemory(dev_, &mem_info, nullptr, &frame_data_mem_);

    void *ptr;
    vk::MapMemory(dev_, frame_data_mem_, 0, VK_WHOLE_SIZE, 0, &ptr);

    VkDeviceSize offset = 0;
    for (auto &data : frame_data_) {
        vk::BindBufferMemory(dev_, data.buf, frame_data_mem_, offset);
        data.base = reinterpret_cast<uint8_t *>(ptr) + offset;
        offset += aligned_size;
    }
}

void Smoke::create_descriptor_sets() {
    VkDescriptorPoolSize desc_pool_size = {};
    desc_pool_size.type = VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC;
    desc_pool_size.descriptorCount = frame_data_.size();

    VkDescriptorPoolCreateInfo desc_pool_info = {};
    desc_pool_info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
    desc_pool_info.maxSets = frame_data_.size();
    desc_pool_info.poolSizeCount = 1;
    desc_pool_info.pPoolSizes = &desc_pool_size;

    // create descriptor pool
    vk::assert_success(vk::CreateDescriptorPool(dev_, &desc_pool_info, nullptr, &desc_pool_));

    std::vector<VkDescriptorSetLayout> set_layouts(frame_data_.size(), desc_set_layout_);
    VkDescriptorSetAllocateInfo set_info = {};
    set_info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO;
    set_info.descriptorPool = desc_pool_;
    set_info.descriptorSetCount = static_cast<uint32_t>(set_layouts.size());
    set_info.pSetLayouts = set_layouts.data();

    // create descriptor sets
    std::vector<VkDescriptorSet> desc_sets(frame_data_.size(), VK_NULL_HANDLE);
    vk::assert_success(vk::AllocateDescriptorSets(dev_, &set_info, desc_sets.data()));

    std::vector<VkDescriptorBufferInfo> desc_bufs(frame_data_.size());
    std::vector<VkWriteDescriptorSet> desc_writes(frame_data_.size());

    for (size_t i = 0; i < frame_data_.size(); i++) {
        auto &data = frame_data_[i];
        data.desc_set = desc_sets[i];

        VkDescriptorBufferInfo desc_buf = {};
        desc_buf.buffer = data.buf;
        desc_buf.offset = 0;
        desc_buf.range = VK_WHOLE_SIZE;
        desc_bufs[i] = desc_buf;

        VkWriteDescriptorSet desc_write = {};
        desc_write.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
        desc_write.dstSet = data.desc_set;
        desc_write.dstBinding = 0;
        desc_write.dstArrayElement = 0;
        desc_write.descriptorCount = 1;
        desc_write.descriptorType = VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC;
        desc_write.pBufferInfo = &desc_bufs[i];
        desc_writes[i] = desc_write;
    }

    vk::UpdateDescriptorSets(dev_, static_cast<uint32_t>(desc_writes.size()),
                             desc_writes.data(), 0, nullptr);
}

void Smoke::attach_swapchain() {
    const Shell::Context &ctx = shell_->context();

    prepare_viewport(ctx.extent);
    prepare_framebuffers(ctx.swapchain);

    update_camera();
}

void Smoke::detach_swapchain() {
    for (auto fb : framebuffers_)
        vk::DestroyFramebuffer(dev_, fb, nullptr);
    for (auto view : image_views_)
        vk::DestroyImageView(dev_, view, nullptr);

    framebuffers_.clear();
    image_views_.clear();
    images_.clear();
}

void Smoke::prepare_viewport(const VkExtent2D &extent) {
    extent_ = extent;

    viewport_.x = 0.0f;
    viewport_.y = 0.0f;
    viewport_.width = static_cast<float>(extent.width);
    viewport_.height = static_cast<float>(extent.height);
    viewport_.minDepth = 0.0f;
    viewport_.maxDepth = 1.0f;

    scissor_.offset = {0, 0};
    scissor_.extent = extent_;
}

void Smoke::prepare_framebuffers(VkSwapchainKHR swapchain) {
    // get swapchain images
    vk::get(dev_, swapchain, images_);

    assert(framebuffers_.empty());
    image_views_.reserve(images_.size());
    framebuffers_.reserve(images_.size());
    for (auto img : images_) {
        VkImageViewCreateInfo view_info = {};
        view_info.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
        view_info.image = img;
        view_info.viewType = VK_IMAGE_VIEW_TYPE_2D;
        view_info.format = format_;
        view_info.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
        view_info.subresourceRange.levelCount = 1;
        view_info.subresourceRange.layerCount = 1;

        VkImageView view;
        vk::assert_success(vk::CreateImageView(dev_, &view_info, nullptr, &view));
        image_views_.push_back(view);

        VkFramebufferCreateInfo fb_info = {};
        fb_info.sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO;
        fb_info.renderPass = render_pass_;
        fb_info.attachmentCount = 1;
        fb_info.pAttachments = &view;
        fb_info.width = extent_.width;
        fb_info.height = extent_.height;
        fb_info.layers = 1;

        VkFramebuffer fb;
        vk::assert_success(vk::CreateFramebuffer(dev_, &fb_info, nullptr, &fb));
        framebuffers_.push_back(fb);
    }
}

void Smoke::update_camera() {
    const glm::vec3 center(0.0f);
    const glm::vec3 up(0.f, 0.0f, 1.0f);
    const glm::mat4 view = glm::lookAt(camera_.eye_pos, center, up);

    float aspect = static_cast<float>(extent_.width) / static_cast<float>(extent_.height);
    const glm::mat4 projection = glm::perspective(0.4f, aspect, 0.1f, 100.0f);

    // Vulkan clip space has inverted Y and half Z.
    const glm::mat4 clip(1.0f,  0.0f, 0.0f, 0.0f,
                         0.0f, -1.0f, 0.0f, 0.0f,
                         0.0f,  0.0f, 0.5f, 0.0f,
                         0.0f,  0.0f, 0.5f, 1.0f);

    camera_.view_projection = clip * projection * view;
}

void Smoke::draw_object(const Simulation::Object &obj, FrameData &data, VkCommandBuffer cmd) const {
    if (use_push_constants_) {
        ShaderParamBlock params;
        memcpy(params.light_pos, glm::value_ptr(obj.light_pos), sizeof(obj.light_pos));
        memcpy(params.light_color, glm::value_ptr(obj.light_color), sizeof(obj.light_color));
        memcpy(params.model, glm::value_ptr(obj.model), sizeof(obj.model));
        memcpy(params.view_projection, glm::value_ptr(camera_.view_projection),
               sizeof(camera_.view_projection));

        vk::CmdPushConstants(cmd, pipeline_layout_, VK_SHADER_STAGE_VERTEX_BIT,
                             0, sizeof(params), &params);
    } else {
        ShaderParamBlock *params =
            reinterpret_cast<ShaderParamBlock *>(data.base + obj.frame_data_offset);
        memcpy(params->light_pos, glm::value_ptr(obj.light_pos), sizeof(obj.light_pos));
        memcpy(params->light_color, glm::value_ptr(obj.light_color), sizeof(obj.light_color));
        memcpy(params->model, glm::value_ptr(obj.model), sizeof(obj.model));
        memcpy(params->view_projection, glm::value_ptr(camera_.view_projection),
               sizeof(camera_.view_projection));

        vk::CmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS,
                                  pipeline_layout_, 0, 1, &data.desc_set,
                                  1, &obj.frame_data_offset);
    }

    meshes_->cmd_draw(cmd, obj.mesh);
}

void Smoke::update_simulation(const Worker &worker) {
    sim_.update(worker.tick_interval_, worker.object_begin_, worker.object_end_);
}

void Smoke::draw_objects(Worker
                        &worker) {
    auto &data = frame_data_[frame_data_index_];
    auto cmd = data.worker_cmds[worker.index_];

    VkCommandBufferInheritanceInfo inherit_info = {};
    inherit_info.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO;
    inherit_info.renderPass = render_pass_;
    inherit_info.framebuffer = worker.fb_;

    VkCommandBufferBeginInfo begin_info = {};
    begin_info.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    begin_info.flags = VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT;
    begin_info.pInheritanceInfo = &inherit_info;

    vk::BeginCommandBuffer(cmd, &begin_info);

    vk::CmdSetViewport(cmd, 0, 1, &viewport_);
    vk::CmdSetScissor(cmd, 0, 1, &scissor_);
    vk::CmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline_);

    meshes_->cmd_bind_buffers(cmd);

    for (int i = worker.object_begin_; i < worker.object_end_; i++) {
        auto &obj = sim_.objects()[i];
        draw_object(obj, data, cmd);
    }

    vk::EndCommandBuffer(cmd);
}

void Smoke::on_key(Key key) {
    switch (key) {
    case KEY_SHUTDOWN:
    case KEY_ESC:
        shell_->quit();
        break;
    case KEY_UP:
        camera_.eye_pos -= glm::vec3(0.05f);
        update_camera();
        break;
    case KEY_DOWN:
        camera_.eye_pos += glm::vec3(0.05f);
        update_camera();
        break;
    case KEY_SPACE:
        sim_paused_ = !sim_paused_;
        break;
    default:
        break;
    }
}

void Smoke::on_tick() {
    if (sim_paused_)
        return;

    for (auto &worker : workers_)
        worker->update_simulation();
}

void Smoke::on_frame(float frame_pred) {
    auto &data = frame_data_[frame_data_index_];

    // wait for the last submission since we reuse frame data
    vk::assert_success(vk::WaitForFences(dev_, 1, &data.fence, true, UINT64_MAX));
    vk::assert_success(vk::ResetFences(dev_, 1, &data.fence));

    const Shell::BackBuffer &back = shell_->context().acquired_back_buffer;

    // ignore frame_pred
    for (auto &worker : workers_)
        worker->draw_objects(framebuffers_[back.image_index]);

    VkResult res = vk::BeginCommandBuffer(data.primary_cmd, &primary_cmd_begin_info_);

    VkBufferMemoryBarrier buf_barrier = {};
    buf_barrier.sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER;
    buf_barrier.srcAccessMask = VK_ACCESS_HOST_WRITE_BIT;
    buf_barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
    buf_barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    buf_barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    buf_barrier.buffer = data.buf;
    buf_barrier.offset = 0;
    buf_barrier.size = VK_WHOLE_SIZE;
    vk::CmdPipelineBarrier(data.primary_cmd, VK_PIPELINE_STAGE_HOST_BIT,
                           VK_PIPELINE_STAGE_VERTEX_SHADER_BIT, 0,
                           0, nullptr, 1, &buf_barrier, 0, nullptr);

    render_pass_begin_info_.framebuffer = framebuffers_[back.image_index];
    render_pass_begin_info_.renderArea.extent = extent_;
    vk::CmdBeginRenderPass(data.primary_cmd, &render_pass_begin_info_,
                           VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS);

    // record render pass commands
    for (auto &worker : workers_)
        worker->wait_idle();
    vk::CmdExecuteCommands(data.primary_cmd,
                           static_cast<uint32_t>(data.worker_cmds.size()),
                           data.worker_cmds.data());

    vk::CmdEndRenderPass(data.primary_cmd);
    vk::EndCommandBuffer(data.primary_cmd);

    // wait for the image to be owned and signal for render completion
    primary_cmd_submit_info_.pWaitSemaphores = &back.acquire_semaphore;
    primary_cmd_submit_info_.pCommandBuffers = &data.primary_cmd;
    primary_cmd_submit_info_.pSignalSemaphores = &back.render_semaphore;
    res = vk::QueueSubmit(queue_, 1, &primary_cmd_submit_info_, data.fence);

    frame_data_index_ = (frame_data_index_ + 1) % frame_data_.size();

    (void)res;
}

Smoke::Worker::Worker(Smoke &smoke, int index, int object_begin, int object_end)
    : smoke_(smoke), index_(index), object_begin_(object_begin),
      object_end_(object_end),
      tick_interval_(1.0f / smoke.settings_.ticks_per_second), state_(INIT) {}

void Smoke::Worker::start() {
    state_ = IDLE;
    thread_ = std::thread(Smoke::Worker::thread_loop, this);
}

void Smoke::Worker::stop() {
    {
        std::lock_guard<std::mutex> lock(mutex_);
        state_ = INIT;
    }
    state_cv_.notify_one();

    thread_.join();
}

void Smoke::Worker::update_simulation() {
    {
        std::lock_guard<std::mutex> lock(mutex_);
        bool started = (state_ != INIT);
        state_ = STEP;

        // step directly
        if
            (!started) {
            smoke_.update_simulation(*this);
            state_ = INIT;
        }
    }
    state_cv_.notify_one();
}

void Smoke::Worker::draw_objects(VkFramebuffer fb) {
    // wait for step_objects first
    wait_idle();

    {
        std::lock_guard<std::mutex> lock(mutex_);
        bool started = (state_ != INIT);
        fb_ = fb;
        state_ = DRAW;

        // render directly
        if (!started) {
            smoke_.draw_objects(*this);
            state_ = INIT;
        }
    }
    state_cv_.notify_one();
}

void Smoke::Worker::wait_idle() {
    std::unique_lock<std::mutex> lock(mutex_);
    bool started = (state_ != INIT);

    if (started)
        state_cv_.wait(lock, [this] { return (state_ == IDLE); });
}

void Smoke::Worker::update_loop() {
    while (true) {
        std::unique_lock<std::mutex> lock(mutex_);

        state_cv_.wait(lock, [this] { return (state_ != IDLE); });
        if (state_ == INIT)
            break;

        assert(state_ == STEP || state_ == DRAW);
        if (state_ == STEP)
            smoke_.update_simulation(*this);
        else
            smoke_.draw_objects(*this);

        state_ = IDLE;
        lock.unlock();
        state_cv_.notify_one();
    }
}

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/Smoke.frag

#version 310 es

precision highp float;

in vec3 color;
layout(location = 0) out vec4 fragcolor;

void main() {
    fragcolor = vec4(color, 0.5);
}

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/Smoke.h

/*
 * Copyright (C) 2016 Google, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included
 * in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#ifndef SMOKE_H
#define SMOKE_H

// Header names below are inferred from usage; the original include targets
// were lost when the source was extracted.
#include <condition_variable>
#include <cstdint>
#include <memory>
#include <mutex>
#include <string>
#include <thread>
#include <vector>
#include <vulkan/vulkan.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

#include "Simulation.h"
#include "Game.h"

class Meshes;

class Smoke : public Game {
public:
    Smoke(const std::vector<std::string> &args);
    ~Smoke();

    void attach_shell(Shell &sh);
    void detach_shell();

    void attach_swapchain();
    void detach_swapchain();

    void on_key(Key key);
    void on_tick();

    void on_frame(float frame_pred);

private:
    class Worker {
    public:
        Worker(Smoke &smoke, int index, int object_begin, int object_end);

        void start();
        void stop();
        void update_simulation();
        void draw_objects(VkFramebuffer fb);
        void wait_idle();

        Smoke &smoke_;

        const int index_;
        const int object_begin_;
        const int object_end_;

        const float tick_interval_;

        VkFramebuffer fb_;

    private:
        enum State {
            INIT,
            IDLE,
            STEP,
            DRAW,
        };

        void update_loop();

        static void thread_loop(Worker *worker) { worker->update_loop(); }

        std::thread thread_;
        std::mutex mutex_;
        std::condition_variable state_cv_;
        State state_;
    };

    struct Camera {
        glm::vec3 eye_pos;
        glm::mat4 view_projection;

        Camera(float eye) : eye_pos(eye) {}
    };

    struct FrameData {
        // signaled when this struct is ready for reuse
        VkFence fence;

        VkCommandBuffer primary_cmd;
        std::vector<VkCommandBuffer> worker_cmds;

        VkBuffer buf;
        uint8_t *base;
        VkDescriptorSet desc_set;
    };

    // called by the constructor
    void init_workers();

    bool multithread_;
    bool use_push_constants_;

    // called mostly by on_key
    void update_camera();

    bool sim_paused_;
    Simulation sim_;
    Camera camera_;

    std::vector<std::unique_ptr<Worker>> workers_;

    // called by attach_shell
    void
        create_render_pass();
    void create_shader_modules();
    void create_descriptor_set_layout();
    void create_pipeline_layout();
    void create_pipeline();

    void create_frame_data(int count);
    void destroy_frame_data();
    void create_fences();
    void create_command_buffers();
    void create_buffers();
    void create_buffer_memory();
    void create_descriptor_sets();

    VkPhysicalDevice physical_dev_;
    VkDevice dev_;
    VkQueue queue_;
    uint32_t queue_family_;
    VkFormat format_;

    VkPhysicalDeviceProperties physical_dev_props_;
    std::vector<VkMemoryPropertyFlags> mem_flags_;

    const Meshes *meshes_;

    VkRenderPass render_pass_;
    VkShaderModule vs_;
    VkShaderModule fs_;
    VkDescriptorSetLayout desc_set_layout_;
    VkPipelineLayout pipeline_layout_;
    VkPipeline pipeline_;

    VkCommandPool primary_cmd_pool_;
    std::vector<VkCommandPool> worker_cmd_pools_;
    VkDescriptorPool desc_pool_;
    VkDeviceMemory frame_data_mem_;
    std::vector<FrameData> frame_data_;
    int frame_data_index_;

    VkClearValue render_pass_clear_value_;
    VkRenderPassBeginInfo render_pass_begin_info_;

    VkCommandBufferBeginInfo primary_cmd_begin_info_;
    VkPipelineStageFlags primary_cmd_submit_wait_stages_;
    VkSubmitInfo primary_cmd_submit_info_;

    // called by attach_swapchain
    void prepare_viewport(const VkExtent2D &extent);
    void prepare_framebuffers(VkSwapchainKHR swapchain);

    VkExtent2D extent_;
    VkViewport viewport_;
    VkRect2D scissor_;

    std::vector<VkImage> images_;
    std::vector<VkImageView> image_views_;
    std::vector<VkFramebuffer> framebuffers_;

    // called by workers
    void update_simulation(const Worker &worker);
    void draw_object(const Simulation::Object &obj, FrameData &data, VkCommandBuffer cmd) const;
    void draw_objects(Worker &worker);
};

#endif // SMOKE_H

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/Smoke.push_constant.vert

#version 310 es

layout(location = 0) in vec3 in_pos;
layout(location = 1) in vec3 in_normal;

layout(std140, push_constant) uniform param_block {
    vec3 light_pos;
    vec3 light_color;
    mat4 model;
    mat4 view_projection;
} params;

out vec3 color;

void
main() {
    vec3 world_light = vec3(params.model * vec4(params.light_pos, 1.0));
    vec3 world_pos = vec3(params.model * vec4(in_pos, 1.0));
    vec3 world_normal = mat3(params.model) * in_normal;

    vec3 light_dir = world_light - world_pos;
    float brightness = dot(light_dir, world_normal) / length(light_dir) / length(world_normal);
    brightness = abs(brightness);

    gl_Position = params.view_projection * vec4(world_pos, 1.0);
    color = params.light_color * brightness;
}

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/Smoke.vert

#version 310 es

layout(location = 0) in vec3 in_pos;
layout(location = 1) in vec3 in_normal;

layout(std140, set = 0, binding = 0) readonly buffer param_block {
    vec3 light_pos;
    vec3 light_color;
    mat4 model;
    mat4 view_projection;
} params;

out vec3 color;

void main() {
    vec3 world_light = vec3(params.model * vec4(params.light_pos, 1.0));
    vec3 world_pos = vec3(params.model * vec4(in_pos, 1.0));
    vec3 world_normal = mat3(params.model) * in_normal;

    vec3 light_dir = world_light - world_pos;
    float brightness = dot(light_dir, world_normal) / length(light_dir) / length(world_normal);
    brightness = abs(brightness);

    gl_Position = params.view_projection * vec4(world_pos, 1.0);
    color = params.light_color * brightness;
}

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/android/build-and-install

#!/bin/sh

set -e

SDK_DIR="$HOME/android/android-sdk-linux"
NDK_DIR="$HOME/android/android-ndk-r10e"

generate_local_properties() {
    : > local.properties
    echo "sdk.dir=${SDK_DIR}" >> local.properties
    echo "ndk.dir=${NDK_DIR}" >> local.properties
}

build() {
    ./gradlew build
}

install() {
    adb uninstall com.example.Smoke
    adb install build/outputs/apk/android-fat-debug.apk
}

run()
{
    adb shell am start com.example.Smoke/android.app.NativeActivity
}

generate_local_properties
build
install
#run

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/android/build.gradle

buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle-experimental:0.6.0-alpha3'
    }
}

apply plugin: 'com.android.model.application'

def demosDir = "../.."
def smokeDir = "${demosDir}/demos/Smoke"
def glmDir = "${demosDir}/../libs"
def vulkanDir = "${demosDir}/../Vulkan-Docs/src"

Properties properties = new Properties()
properties.load(project.rootProject.file('local.properties').newDataInputStream())
def ndkDir = properties.getProperty('ndk.dir')

model {
    android {
        compileSdkVersion = 23
        buildToolsVersion = "23.0.2"

        defaultConfig.with {
            applicationId = "com.example.Smoke"
            minSdkVersion.apiLevel = 22
            targetSdkVersion.apiLevel = 22
            versionCode = 1
            versionName = "0.1"
        }
    }

    android.ndk {
        moduleName = "Smoke"
        toolchain = "clang"
        stl = "c++_static"
        cppFlags.addAll(["-std=c++11", "-fexceptions"])
        cppFlags.addAll(["-Wall", "-Wextra", "-Wno-unused-parameter"])
        cppFlags.addAll([
            "-DVK_NO_PROTOTYPES",
            "-DVK_USE_PLATFORM_ANDROID_KHR",
            "-DGLM_FORCE_RADIANS",
        ])
        cppFlags.addAll([
            "-I${file("${ndkDir}/sources/android/native_app_glue")}".toString(),
            "-I${file("${vulkanDir}")}".toString(),
            "-I${file("${glmDir}")}".toString(),
            "-I${file("src/main/jni")}".toString(),
        ])
        ldLibs.addAll(["android", "log", "dl"])
    }

    android.sources {
        main {
            jni {
                source {
                    srcDir "${ndkDir}/sources/android/native_app_glue"
                    srcDir "${smokeDir}"
                    exclude 'ShellXcb.cpp'
                    exclude 'ShellWin32.cpp'
                }
            }
        }
    }

    android.buildTypes {
        release {
            ndk.with {
                debuggable = true
            }
        }
    }

    android.productFlavors {
        create("fat") {
            ndk.abiFilters.add("armeabi-v7a")
            ndk.abiFilters.add("x86")
        }
    }
}
>ΗÆ'"_š‚4iÆÉ§ 2¯ðñ›Ï(Àel’4“\ÈS¤™à"ž ;¸Xšé 4•gÈü™ ÏòóìÏaMáÓtÏ ð->¹Zák˜‚ kê67oº´¡©uÓÆ¦Õ›75­o¹tCݦµL…Í—ë»ôšˆë¬iµf¬sÓøúx,ié1k‹IAòk™|‘xg§‘`šÑOtÖt&ôpĨéIèÝÝF¢¦ÙÄJ8Þ‹Äõ0Ól·‰M éqáÒ­[]uɤmå¹n 6äÎÁ¢ÓŒÞP$•4wk̈Q Éä:=¦Û²Õ¸Qh<áó­4c¦µŠé’ÒÑëã59¹Øe[˜¼õñ°Á4±ÙŒ-©h»‘ؤcP<é‘-z”ït§×ê2“L3]9‹o"H% Ý2̤Åt¶«['Úav¦ºeÆc+ʧ›ñ±‰¸"”;ÎTvʤ Qìf{J¾7'"Lò1êټ± SŠr§´vé‹–.kME™&GDõ†œA…¯CwóÈn&µ)Â0KNjôy£ˆ@Ei0Y‡L Ú²l3»å¦lŒnY–Tø3ðC©¬#ÿÜÒáTÊÜ‚ÉÓeô2±‰_Ûa“Ï‹"Bf9Ó“F(•0­¾šu€&pÙ`vIKVvв<šbÝ) t =ŠQ†U|í©ŽA~ÞöÕ¾ö>€A·t¦0u$`š>JªÕö*ИÙiXu±ð#avôåÚjc<H-¡ßhõF!©8×Õ F2”0»zÃfêLrV`êãѨC”©XO¬ËzÕLn5cئ µ§´lŒ'[tYY6"†¤Ó„‹¥0QI¦ìíòâËê‚b О²Ý5ðÃB(?»Ò„âs\c)×ùFÌJôe}aÛKk“eD’‘Í8"ësú…µÍÒ6dcP„®)Y³8‡rc,5²I\¹2“øŠGsOCtb(ÞÝ—“ƒ˜ªK]s“»TÖ`-&OD,Ðm‹Î[°`S³ë–ÙîÆœŠòzH³PšEC¬ +÷Í2ÃÊuûËÖ8B‹„ÐʱÊ­2´ŸœÖIsK†Ö¡U5ZÛ†/•ÅžÂâ±æÕ¼ÞÅC\– —cvwn´dH-³¶²¸ÕNçâÕô¼j™¦Òóô‚J¿¡wU:Lý*ýœ~¡Ò =£Òuô•®¡kUz„UéF¾ŒÇTpʢϪ´›nPézú¬Â»U¾oTù&Þ£ðçäãf•oá[Uþ<ߊíùÄJͬIè}Aå/ò— E®’èÖµuU¨Sþ²Ê{ù6vpzAáÛU¾ƒ¿¢òWùN•÷ñ]È.îeƒÂw«ü5þºÊ÷ð~•ïåoHsŸÊ÷‹¨ü ý ÁÐ⩘eFlÂfºì|['-·VÐæ—$çkḑÔbqKCAjéfLÓc}˜–À~—V­5¦7_ÍŠkHäšÑ‹}:Ò§-ÌÎë«F™1Ä}ûåèVù›|@ öÊßâýL;N(Dšw,âZô“I òÃüm•î!˜è GåG`î²+-3d§]-Þ¡¹ÉСÃAáÓKb%±6ÀÔuNTïÓºô]†Ön1Íңݲgk=¦ÕU]³KôDT³ºt ¡Íw-Àçk݉8Z},Ý'¼°U¥ÁV‡ÚšÔ­š™„q¢#–%šž€h鄌ØìÒ˵ŽD<ªéš•H%Å^N U-Ši¹Å¦†²ªV+I–Ä2™Q“ŠP†œî¬ÅCéò®ÖvÈhuv6j@åG©|ˆWø •ŸÇæ~ìÖC™³.‘Ðû$}ª<@?Sù?¥òÓò6ŸÑ›‚ä@»K¡ÀçlÐ'ªq†Á.S~燺¢qÔ!žåK—ªü,ï*GU>ÆÏ)ü]•¿'`,kiŠZ«aa™rNëªç´Žx¢V“è{^áTþ>?”‰ÚQ…SÐe«vÆT~‘_©^–æ•aÊz*¿Ê{PSˆ?$–^ËL^°©ü™6ûãk6¦òúx*¶C=y±[oˆzRp¦É¹XbNGÄ™6{ô˜%јÂÁ_:b67!(ñdu åÊ?ä×þ‘Ê?柨üS~c=N%¨ò› Ÿ¿Å?GêZþ¨ô!ÿBá·Uþ¥ ãWüNÆÒ£Ê4• óo;3G:cý°úmznJÎ?¼+nyO¥?ÑG ÿVåßñõ*¿Ïw"·žx[@ù›é‘GÍ·hìåŒgm÷åø å²”©¥MMòœŠW(qÊNRiÙÈ;‚é#&å³§–º¬µ“iÆtéXîD>ÑBá±íÃCV½¡b4€yÙË…ÝŸ¸ònµSVν$óééãLëÙÄF6öÂP ?fÕÔC8‰Ða×Îæ g|Ì]ÎÌPßÖ/‚þ ]Îæe'¹ ™â’¾Žð &·¯.k‚F©î°nIùƒ¦¾pZoi™\’Lž™ì{ÃqV|íÐ}IQi“+Þ|8±tZ]v/±³0g”ÖžŸ½b©<±®nÓýV<#Ç„á>”2“Ñn GD_‡–ísÌ(êÛG»ÇUŸ"7Éä¦2i^ zà²QÆÈºÛg\œA%ÃNNÕÎaÚ‰þºöd<’² ‰§á›d*øÑƒ-"÷ `¤:ÎNqé†9ºå+"WKF¯i{Øñ8¾ìÔW W3hûæ£lÔË83Ù)Ø ‚†3Ï8ù•XL‚Nõz,C)q”šVêr `çK >a[2×Cw“GuÊÆ¤›¶^ûdsJðÉܱOxr•#w\`e‡bN çd«áWŒÓÝÏ÷¢Õ”¡¡ôM‘ôú%›íë •cBæ¨K%É—Ù;ÉÒ¥*Ó™§d’œ9$©[.©6·fÿç‘3°ÂÎÍñ$NCéJºŠ˜®ÆW}š åx÷B9kÛOœ’í'ŽÌxNÁÜé&´{ðu(xñœW>@åƒh+7™^ õ0/¯8LJyåaò•W¦ 
‡lŸC{ùÐÎÅJ¦Òaͦ¼I7c¤Ü¡I·Ð­Dö›ÈÄö›H•g¿‰\ô~ž¾€‘æHÁsUE?M\‡fRKù“4/“k½Ao?ÖæËcJ­ýヾAšÚVôó++ª¨¨Ÿ¦=Gų :–ƒr%d­¢T ÉØr®u¸eå\E_¤/A¢|:›¾L{A%@Kè6Ø×ƒµåt;Ý Íf@Ï}tfÍ¥™t7Þ|ô5¬öbå×±çUGnB_#7N?BÁuƒtZ[ÅÍhñ,óNžDÏøkó‹¼yûifU0¿Ÿf.óyï9þ—Ê`¾§JÎ:püƒÊ~š] -ç´ V«x–ù‹üAßÑûhÁd:ôù_s„N¯-ôÓ\ï½4.¨LZÐOgl *Ò5oën?8þzPé§’!³¬¡Éh·Bì‹h"µÛàÂí4‡.¦ù´†º„Î"æÑù¦dÐeÔA&uÒhûÐw-Elv@Á˰ú^©€. ¥ô ºü¢tÖ¬7Ðý0"« !ÅDÚEߤ0k!%é!¬ð‚F„¾Eì«ÀõÛxóṊÒwà¦\?B¦ó Fj-å°k~KÕSt&Ã7Ói~[a©÷Y*kóT¶! »âF+‘Jö§‡«F{ž¢ê»¨žzs€Þ’Õ´ÅÛoÁ"yX¹–Ñã ÷Ée§{žDO®N‡©?­Ó³°€pm¬z‘êË+¨¦Ö[%¨_p- n-\¤Ø¢W´¶yѱ¸µ-¿ªµÍôŠ*µùÓÁÓRïZ6³sm§\ †×Ð88b*"1ˆX< ¢Ÿ‡lòo¶cÀ‡>• ~>fž¨¼ÊV³1«f#=eçyüçÙoOÛq’«Ô ä(õ<%Öêi9°|–÷Ùûh1µ-UG—y=Ëò‹ ÷‘VU”¿¨ÖôõÓŠ}¤V}OÑÊ<Úº;H~·rÈ7³m‚·A­Ûá›;¨”¾‚°þ*¾ƒÎ…X™/Úï·ƒt*UÀºG±~:p~ÌFÝ øô9¼å‹pYëÒ~ôƒšx//ë=§çIô|×6í÷ð{ž^pÒ)¿.èÿˆ:gy Rq[Eáª:w· Òym¾g©®ÍS¸®+¬‡ã࿆ÖjtòÛä7¸îü»è $8eÖÂXM…ôÓ§ÄÑxôSs?­;@k©¥mÖ·Éü tám”LÑŠ›j1ßæ¶Zÿ‹TT ·8Pú[ûiëVäIµ iÛmxç¼^ÜØ•å2ß1’¹ ^‚—Cнñürˆ<Ÿ@qÀv £Ýëß ‹Ý‡Mç~{ÆbD}-â~5Ö´#Â{ç×!¾oALïGÄ<ˆ(xö~´Ãû°ôaü ŽÇÄw¼ßãK|Û pö ¿O/Âê×aûy‰^Ç=4‹^ýà– Wé5x¹–vÒè‡ð²d•×éGè;l¿ý؆ö‡YϨŠç§ÐôSÚEèyÆvž\f¥7ÎmøØŸVxéºìqÒû©½¹¢0ÔOá¼(ß{Ðs0z…6£ÀÏ1š´MjDƒi…4o•&Ð[vXåÉ…yšÏƒR°¹¼Âއj<ŒužU³*± œ^nërÖ"àk€¼²`[òÜí‘XÁÔ…CB8ñÿ" ôX½Œ`x&{)ì5$ü—iÔ¡–€] f½M¿„Ø’Ð=Xs&ý ‘—g͵œ~m›+OnûÓ»ÞEà$A°w:d¿ëtÕ%éÙ”ôlâår†!v:pì£bѤR>¢Ð¥Ÿb@éƒĻۆ Д@ôJW2è Kàž:@ãূʱÇo¦»¤Óo¿ýÇA£t9`AøYö3€\xž7ÁJ7Ø"ßÂèÛ4 z¡ù<èX -Ï¡÷1ߥ߃Ö(NÿJ@ný#h}„:ëAé#TÎ&¡yØjߣß¶;°Íþ«%ÅïÍÚq/}Þ8ö‚ªÀnê£AÕö€Ã!{&·¶aüOààäÖ[Ð')™ìEMÑRQ…Z¢m™÷nšPYeW=Žÿ`èE§FqAûoPö¯(eþåÜß êe¢YK Ö[Ä(ö¾½·Í±wl`RLÿdçÓ¬ˆ“üôÏ îˆØˆ9<•ò lhþ¡RÄgœa3Ôœ)ô/i† ýkÚ& D”Í4»·€ø_éßÓ!Rƒ§ÌÊ—ÄäNZu&8¤±ø? 
ëèžS[ü7ÈY\›N>»üp_í¹/«•ÏÑ tþ›þÇŽ÷àéü/ý_ÚÌ`I5~˜õ°ç+%§hñg)ùAÉÙÍý°³³›ûéïÃwsð9Îäêγ¢wMÓÎÚ´YV5#n«–ͬIÞ,_ èmÅöÔ5ª«F:·-ÍHOºÍŒÎi:›tð¨Vš"´Ô0 »!çÍ a¹\~X@ƒ«]T-BŸ4 ›Ú3ºšÍÆL5ÅHpOÌŽ®¯ÆJTüp†P;"Í%±ÍõËÌÞ5)‡%ÛÑYÒŽº:²l/OÜÑ~1y­•n§†¦¹çÙÃ$Û·5ÓÈJà¥ØÑŒTLÍÉ]f9dzByªd.îhNuTËê'[Uz¤Ôi´ãfÎJ²9çT Vá"܃‚ ®)x·t\:J©FÁÒvhØ“ðDŽŒƒA˜ö| KAt'zèEܪEôãÈ=gϱ(Èá€ÐóÔ­Šì)xŠCMçâûÛ{T G >ÃçEd…4Ð…×µmŽâ _â+Ïð5ñrÃÃF†yKÂ7 ¾ÅwTìó«9ÃÖ2ì¬`fgÌœž ¦Ô©ì6 Ú»,èR}ï¬5£(Ni”mm;Ç=²ƒÏñ=‡öƒ€Áÿù´èœ+AäµÍõU*tS(\zÝ&Ë+µP°ŒUYÆÈ²­"×ÅZ·¦™?ÊÚ,Sþ°Š*O8¯’ÙT‹Tïp±#]pŠ7‚’º®íZL¥gw)™³¸£â¾µ,¬+åà³Ìž©ñü;C¯× ÓºðþBÕͧŠ`Z±%Ü®qf«êLøUãÊE:jÚšq`>¡;|«4Œ{ßËÂDáj‘€¾Š6Rƒ¤“ºž\Ïa(\ÕK$"½¢ò\žþ³&ÝT)£3Õ¦‡@Ûj¥öÜ1-2ª X#5žE®é|×èÿÖþó@à-Æë´‹Ò,Ðì:†ð‚uè£Ñç;ÐO£â`¯ÓL!²â‡ÿ&;‰æÍ¡‘cÔ­œÂ“‡˜ðŒæáÍÃ?”Gý1&ÄÈ1×¢_ÉãÒ„7àõ_&ëÆ„§Mñ<š'|Ÿ@Â3Lû–ÍZ§Ç  p´ïÂ0®b ÝôÍÐC½ÌyJ&_…1ä@ß$û‚ìÃCŒÒŠûC„¸‹4÷á}•xÉWnâM²+vuo‘nœ$"IÞ&/ôOPÈÝ1EôѼtŠÖİ¿ííyt,¼œöŒ‹mb÷Ïèiorªy\ñòèü ¾€÷×ç¢ðË¿¢+á¿Z$|‚îcôüqVv ¤Ý(¼AëqÊ=§:Maƒw‚Èxq™ê1‰):×Läß%­‡>€®ã=Z‰ÞYJ–œDð‚ßvâÜùPK ²•GHÖ·$Ó #(org/gradle/wrapper/WrapperExecutor.class•Wy`UÿM²Én6Ó„n’¶)),%mÒÍEËÝ–#ISÈÕlÒš¢ÖÉî4YØì„ÙÙ ((*¢"DE­·A鈀¢Åïû>ñÂoð÷ÍL6³Ûm ì›÷½÷½ïø}Ç{ûäs? 
`½òã-~ÜÀ[ƒ(Á~ÜD¡ànY¹Ç·p¯oâxg¸/ˆJôã]ATã`9ß-Ã{d8$Ã{åôûdx¿°| €Ê÷C²òa™}$€ùÞ/ÃGø˜|íÃ2Ë0+ßex(€‡å;'ÃÇýx$€GÅŠÇ„ýAœŽOŠËìS2ûtŸŽøñ„ð‘ûdöY?>ÄÙ8À“òý|_à‹|)€/ð•¾*ËOùñµ º„íëòý† OUâ›ø–Hù¶ß ¢ß%ß âûø˜öà ~„ñü4€Ÿ)¨ßÒîíéØ=:Ü·{hxp¨gxdLA¨ï2m¯Ö‘ÔRQËL¤&6)XÒm¤Ò––²vhÉŒ®`ež€®ÎhGBþæPçÈ6ÏæŠ¼Íè¶ÎÝÎ$ S±K´Ø‹Œv¥-;€Eߟîãþû­D=¶c˜T”T Ö“ñÐ+HzèÒ;<ôɤwzèSI¿ÈC¯EHnvÎCr¹Û_Þïö—×>¿U¨€†qžˆ‘@9O+砌͡d,*…oec³(o™…Æ–çXÎ3@3tŽËœSØcë™haU`"';¿ÍÕÉ"pVÌ¡b,ô=‚ʱÒHôA¨³Xò˜èUW…RŽ­ÒF:šEÏNRµ­Yl}•`ç!TöÛîlÛ)žµˆS´¯„–fr)±–…~ ÕÃõ³¸ã,o“c†å½—^\MoÅßýëg¢ìÅ>òïeuHðö\ Jðjì·›ÈAùGà¢ñJb!á[5‡Þ1|q í½„¿¾CÈ·y[B£ªPGÆ:Ƶ.„•SºAÕVQaˆÙTÇoáÃ6Ô«rE½ÊÎ?é%µx9®²]w̺š?¾íÜþ²_ eÀ©­þÂN•ñ„;ສЙkÜóîù29_xø*Ïá²Üákq‹I’FK— µ:Ú6úÚZê™öƒ÷dß5ÄòZ–ðu¹”r˜/»^Eô{v=#RbÏ^ƒ×òlfßHP|9$þü7â:°Ù-Å@¤Eaï8¥Ð‡ë=%Èi à v¬EÒM®¤­”TêHj-&éFO Ž–$³7rVb˼ٕy ©2»Q°XÙ6lôÕûŽ ¼Þ7Óze­3«îDE„ù½mFYзš²@»‚”XMY!¼™Ü‹ñ6&ÌíܿÃccΖƜ-®-2{O Ž+)aØFëÖzCîõÔ`7â¡2·ùìêt¯§îö ÚÓÝàêVh£ðßþ?PK ²•G„¤«² B*org/gradle/wrapper/GradleWrapperMain.classXxÇþNÚÓjU8 ¤Ã” Ô΀±ÀY  fm¼œéðéVÜíQâ–b§Ø‰ÓH±“8=Nâœ8'ÙŠMì'NwzïÍé½;ùgwu:Ý-B_¾Íí¼yý½yï O>óð£Ö‹*¶á.ð2¹¼\.¯PðJ\€W)8ãÇ«ýxܽVÅëp·÷øñzoPQ&±ÊðFyx¯Š7áÍ Þ¢¢Ò¿UżMò|»\Þ!—wª¸ïRðnïñã~ z¯ŠåxŸ‚÷«X…¨X†³ PQ#¹Ü‡*øª¨Å‡U¤1!¿&åò\öcʨðã¹TÁ9Åc*ÇÇT|ŸPñI<¡àS >­b3žTñ|ÖÏIšÏûñ_”Ÿ_’‡OÉÃ/KFOùñÉè«~|MÁ×¥-wÉå~Ü.¿éÇ·T|ßQð]?¾'‰¿¯âø¡\~¤âÇø‰ŠŸâg ~®¢¿¨Úy µ½«ãpoÇûzº;÷ìëëìÙ+è:¦ŸÐÃ1=>îµÑøÈ’63ž´ô¸5 ÇR†@(¼½£¯µ³«£=çÂEÙßßÙÑ—.›Í#*ÜG­m ëê|mæ0¥•uEãÆÞÔØ#ѧ‰RI3¢ÇôDTî] Ï&Öv™‰‘ðHBŽá“ }|ÜH„wÚÛAg×­Gã4Ê7Æ_%u‡òM¶¥ë‰ò«ð8P]λõ„@©ƒ5Ã;¢1ƒ§¥ã “‡VÔHJˆ€’0M«=JÜÂq=‘4ø±:[ÏH,n3ÇÆôø°4vŸCFE3~‚Œ$Áe¹½§“–1¶/#*‹AÛ4y(æ¸eü<œë•u³£1{'«:bWä ×çÆr‘«M¶Œòº<4A^Ë`ܰÂý:{OÇ-ýTFaâøcL[ùIUºÒ3`$’6Få´Œœ«xÂApîUII3•ˆ­Iâú˜ÀâŒnññ”å€IªIEcÃŒˆ·2h6ûl-7Ô]8MòÌß>/¢¹rBíµÍpîË’¼ Û,j¸×jÁ-nÆMŽá&V£ ]$ ¿ÄÓtU®;5þ•;¤M)b62)4ü ¿Vð ¿Åïˆs\ àx*jÐgÿ+¨á÷øƒ†?âO þ¬á/ø+“jæBíÒ“£Ýú¸†¿áïþjHâ §ñl îTð/ ÿÆ–z¸¯SÖåXŒÎã°Ý<™ú0k€srRÃóq›ôâ3³Ý6M±O·F[“IcŒ5!ñþË….@­òU¬iI‹Ÿ^K·B,`ªe§Íu(›gÊ¢d»P>MˆBM(Œ¨ðS5Q„ÛXF®a¸5ÅÙ$BÅ’Nc¹˜OWD‰&JE™"Ê5±HQ¡‰J±XK4Q%ªé´9î+ý=“/Rq+:fd¥AE,ÕÄER_ßQš©‰eb¹À¶6=7­Ð°AwŒ1B‘˜žLŽÓ³¡£f"äªbçM˜c¡›Ü=i„j×$k›g¥iÏ‘cFÄÒÄ ±RJ ib• Û7¹)á\åY>Z-.fh÷š!%䢄¦«UHŇ)¥Ò«¤k¢F¬ÑÄZQ+°‰<ÜZŠÛÅ&”7"Ñ£Qc8Ÿ‡³"ê4QkYž=Ê 
#×f¦H7ÛCÓ>qå5KcѨ‰u¢‰×QmX—à6E¬—÷ã:á¼ÅDRoí!3OöÇoˆóôL7šºúù4ÞBÓÕ÷rÏñ`=Óut޲Ôã2/Õ“­‰‘Ô˜·.¤Z†¯œ ŽFGR ^¬5ye7ßÙò ì©c¾ÖxŽYdvÑc+1¬ì®X=ݱòÅc®šG£ÈâÂʸ¥>g/‡¨”Õ*«^ .—Æ®¾ð(Aeys;»S­v{N¶é­ÄËyÓ»›åO,³íÍézù±ðEå)„]7'c¯V1Ý͇ðùyÏV¾Âƒ#‰™²e4Ì£³‘Ûóòj;ƒÃž7Ýÿ<ÝÆ˜lõTýB¼3ríü0ù £ÃÜÿÒðpwÎzÝv™æ ©qÆ¡î`ýVa¶ð!ˆ rD¾š»(â~(k¯r0k_Âý¡¬}))„|Öp½Ž0ä¸ 4L@<`£æZhWàz®šƒGø[„†‰Ebq†âŠ {| º'±°›‹oï †&PØâ úÒPƒ¾‚GàZ(êòÔÞ4ŠÓÐfÀ%\*ÁƒS(#eyKA°@Ò. úÒ´N¢¢E *„N¡’(‹ÓX’FU'Õ-þ ¿iA¹YÚR´.X4‰‹ZÔ Ú0…eCÁ¢),çX1‰•MaÕPÐ?Õ¸x 5äµ&µ3–ïF€ë:z-m­¥ÃêQ…,G#.Å:ìB3öÓmWc=½³¡§° ·b3nG ^‚­8ƒ+ñZmï;"ÞQ;|ò99Já!Dù%Cò –£XH9÷ãĈµÇ1†8½~=½n’K!9\Kh eµq(…ŸÒ6ñÑ–"·3ÔëN2"§l9Ýüæ«Î t„¼ð·–¾œBíPC nõ“ ƒ«\PchÆ!¥T hG:è’¶QKV¸‘JÀþ:f¥Ê²+ð27³ŠÉ®Ñùw6'¿:³ò«Øö Èâæ ‹V¯ÊV8uük 4Âi\2õ¹ìº³ØUeØñõî²»—®“zï®À†46¦qi—¥±iÏ}(éšÂæ¡Æ \qŽ2Z[ÒØzª%0p¥ïlZØÐ;‰íxÖ9W™Ö \uÖ–UNß\îj²‚2€^ÂíC%ú™:ƒL¢«yf¼WÉ݊羜Ió\Íðw€ºÚÒhßÓðJl}:%v0uwv5R“]Î4vw¯›ÆØ#1ÖíåQWÓÙy‡º‚¾s{ì³½¼è=çÈ}%¶ÐÁÒÉmö/°ÉßÍïÝö¯cÏúü*¢æ%,UL¤•LŦs#}¦—¯`Ô·0mw3a÷0Y¥ ”ÐHKo§í²Jµâx!á+ }“X³˜x±£$îpíÞg§.ýØ(Jcßݨt’•›ýi>ŸDïLäì´(!…¥Lÿ™ Xa_ÉÝÂ.÷Í.÷Ò)ô‘i %L ?7nÌâRêr¼jë¥ÿPK ²•GÙÎx ´"org/gradle/wrapper/Install$1.classWkwÕÝ×YŠ÷Þ3çž³ï9û>ôÏÿ|rÀFü}ºqXÄwD|WÄ÷D< ˆ¡cdcˆ"C c2xPzù(Æ¥µD3!âˆ[Äd £p¢pePÁ÷£ðĺƒ’h¦ÚpÇÄภjÃðÃ6ü?ñpˆúÑ~‚Çbx\âŒâ‰(žŒâ)éžñ´Ì&F´ÏFð\'#x^¡}ʰ;m7kØ£VáNË6Vd&Œ)#m¹i÷)´‰MÎ*ú–§pYmäYc%ßrž=7É1ýô}ƒœ´J̲®ó •/y†Ø)te\/ŸÎ{FÎ6ÓG=£P0½ôHÐö×›r~¢1Ìð¸Ñ»iópiR!³ 'Ÿ¦“ç„VÜ*vnPèheÐ)ú†m‹Ý6˱ü '—2\˜„f£ÚZ/zQ‹QwTÐúݳ¾2c9æPirÌôöcR‡xFÊrÐð,W•š¬R ú ã0‚m‹&5×,±˜Î\·Fg,T{²«±¾1²0WýPNŠÜbu*\2ûvzÐ7¹ ×óIÃ;bzùrÇ4sÅ÷¨c»FNA™ï2æ”i“/DÖ^ç& Ïu}…Õ Wx¼P[庆)ÛBÞ!(vËšÉk1‚¶¶®J~ï›0³iÙ°•w ¿äÑëÎÅuqÈÕlÉóLÇO÷3S¤YØa·äeÍ`ñz5¿=bÆîv²¶[da÷˜þ¸Ësà§:~†ulÁV°QG¯ˆ[p«ŽMجã6Ü®£/ÑW}(/c·Â•„ÙU²ìœééø9^Ññ ¼Êzõ¸GtL˸¯éxoèxSÄ[¢{ï(„{ †çëø¥¨…_3Z­btºN<ý&‚wu¼‡S:ÞÇüVÇïð{â:þˆÓ|¤ãOøsë˜?eìça0`Ú¦/NH&ÔõŽÓÝÌFpVÇ'8Á_tü癹ÎCV¡ ¶$À:ß¥á§øLÇçøBÇßÄå€,ºc)“M5TX³d r~Êi®¾p‚Íz¦á›äžÂ–¦EÓ-¾ho…’²Ç—çM¯!°âD82&ÍFÚÎfâÛá~êN695©ª¼àĨï*…ë“ÍN­Æã§Í*ÔJWË Üjœ+‰sçXѵK¾¹×ðÇ™#›5‹_ÉÆ‚ìU·üRÙK\ÐSc»LêÍêqpr¯@Þvék²ï;`þˆææºk“ WÑ‚Œ ®(IG¢íwy#7ðA*]ƒ}Ëÿ†Ý0yñ¡¾b¡†±­ê¥Á“>ÙÕü2‰/Ör{ŒÅ!ó]hN¥©Á¼U`v_ L®ñ¦‹Ü¼Ýêžu·åüÃF¯ÏCÛ$Ðî¿hõ̹˜ Z °Yôýßo–zW·]rå ŽÄ!󨨰ŽÉ>CÒhmo—» `Û[myÃUZ^r•–÷\¥åEÈçtˆÿ}ØÆÙÛÙÛ‹DØÞ˜Z?•ºy-©î„R 
mZ*žA8•hAkê "Ѳ;([9Œ|¥^é߈o¢ò8âMB+zoñ¦%Ô[ç=”ú-e,;‡Ø¡3h FzËã+(ÊXy†jCe¬šÆšõeħ±š %ÊàÃe³¸üt03þ"Ú)h¡íÓ˜XÒÿsþ‡ºËX=">gqeÕ0~?S¥•‘ UWß¿!ºî³è¡;p¶f¤†f× ñÓZ…­ZB; ^ªÓØ.½k>Ctk¸!ÆuŒ‘s”jDxלúï§U¿4è,ã‰ÆÿÖYVe¡››¨-£x2†ýVfW]þ¸À˜¿­Z%´6‹s¼~$¡®°Bª:ŠÕ”½H’C[È¢ ¹s˜½ÈœdÍIòæCVö<­ÿø7vá+ô«8Tßkq—ºƒj÷¨ãȨ“Ø£^Áz<¦ì㯠ìÝI¦†8s=îb/Œ/É™»1M¸RåQRMá~U8¬vá^bia¼a†¢çU¸_5z|»Ÿ~÷Q£Q3ܾ“~÷\ä^Qž†SeÜ|z "‡«æÜP¨ØÄH¥ýU¾âÛLøs‘;•¿¯PK ²•Gj j´V8org/gradle/wrapper/PathAssembler$LocalDistribution.class•R[KAþÎfÝÕt«qµ¶Ú[´yˆkëÚâ[KA!´…|(L’!NºÙ “M‹ÿJÁPð¡?ÀU<3†Ò¨|Ø™sû.sØË¿¼Å«9Ì`­OMôÌÏ}¼ôQ&øm5ÈTŸ0ß芟"VY¼¯ù~ܪ+Mðòc5¨ì6™îÄ-Ú‰ŒiÑïKùñÞ` {ÍDjÆyTªò„½êôñIÑÉlóàÖ²¶$,4T*? {M©¿ ÂFÖÉ¡ÐÊäã¢kŒm¯Îöµjs•¥„à M¥®%‚¥ydwªµÊ-~[Ø‘ùÿ%»žRuóæîŠ<Ç[5'_³¡nÉë$œÙ6¸E<`=Àfø˜õ±AxwìÆZIDÚ‰?7»²• U³Êò46¬óŸáñ?S@h¹ìà‰=WÙ-øíÄüºPK ²•G’cJK!org/gradle/wrapper/Download.class…W‰g~f³»3; 9X(¦ lî" ’´Ô$vÛ4Á, ]Pq²;Ù,ÝÌlgf Tm½ªõnµU¤^[–¶l(ÐâUT„‹æ#bσ*ÂGe|LÆÇU4ã*šðIãSbùÓ Qñ|V4ŸSñy|AÅñ%Ñ<¢íÇ|YÅW𸌯*øšŠÍøzÞÏD»‚o¨xßTñ-œT°SL¾-Œœ’S žTðßUð=ßWðOÉø¡ŠÓø‘Œ§e<#ãÇ A‹™¦afuÇ1 ñŽk,ì·­œa»Ãáèø‰á¼;o˜n&©»–-¡qÿÔäSûâñ#£wNOÜ-AŠIX=j™Ž«›îŒžÍ "J «F¦ÇÆöM‰Çí£h[³„`ÖJ§ ZÙ8nÙé´­§²ÆÀ¢­çèq`Ü[’ÐÄyV8ÌX愾`HÕéYÝLÄ];c¦¹-\µmưvtqkÆÌ¸{$ŒD¯ããZs×Jºf$øG­Ý7gLc"¿0kØôÙ¬•Ô³3ºó’ÐïÎg˜ÇMõïµͬ¥§÷ I˜ˤó¶±2ÁÞ ¢Â³’*)HØ-gîÀôT¬jÆËd /LYO¥láïÆÚ­D!e8nÆ,Ùn¬Õ• î;ž4rbÑ‘ñ ÍËnc¦kئže1^?êõîÙ‹}lÇ8s¹×Ï~}÷¤îž³ðu÷žECwßYø) œñ¬ÝÁv‚l7q÷f„°hÇF´¸“+ÝE;ˆá.ï­ºÞó/y#ω8æÇ®Áçb×ÚpðäËh¼%!-!tjÅ{#w|ro¥×¨çQ+ê•23€wavb#v‘ï&J»¹2Èѹp‘ßÄގy¡“÷‘ ¹ø ÙøSù(a8IfÊUVæ«e˜¯–`öá->CŽx d¬~žíMÌ"ɾš|à”X,H °×™Hÿx¸›T¾'ÜÃv"ÜËvÐî] Ü/º`x@trø–fåelO4P«!žð÷ÄH žF‚ñ„‘ã ¥7žhö±!mâçðŽçÊ©ïðÎÐñ½LA-<Þzu ¹•Çìã!DЦЕàêSÁO=uÓÞÅla潋)®o†#?­Œà(ÿ´5D¨F˜Þ>ž¬—^¼8LB-J‹ÐïšúÚAÀÕÿPK ²•GÀ>ÆPN#gradle-wrapper-classpath.propertiesSÎÍO)ÍIUHIMËÌË,ÉÌÏãRöÍÏSpIMV02T02²24³21Qpv Q0204å*(ÊÏJM.)¶M/JLÉIÕMÎÉä**Í+ÉÌMµåPK  ²•G$Ù–eÎbuild-receipt.properties5½nÃ0 „w¾JA”3hÉV ÈÒ¢kAK´£À’ I6úøµ›„Û}äñÈau³½®~à¤C þœç\È/ZIlP)Tˆ¬ßä^`¢÷®¼[­ºv°Æ²í*96Qo‡º;Ÿû–h¤¦gT5µp‹¹ò¬-o(¦DvfÓ.Zò-=Òœî´Ñ7§ìbÐ(:!¤‚˜¯‡ùÃ…õw¯~%°òÔªÓÄ“3°¤xgSô#’y~kæô@1´„í±äB™µ(_àé8ÐPK  ²•Gorg/gradle/cli/PK  ²•GÈô–<S1org/gradle/cli/AbstractCommandLineConverter.class•T]oA= ‹+¶ˆm­ßÐå£tÛ>™BHÑhBªÒÄÇe×m`— 
C£Âߢ/4ÑÄà2ÞYP–´ôaçÞ9{ï¹sæÞÝ?þpˆ# ìèØ5ÁŽÏ•÷By9yE†xÅõ\Yeˆæò§ ZÝïp†Õ†ëñ“a¯ÍEËjw I7|ÛêžZÂUû ¨ÉÏÔð…c:Âêt¹iw]³ÖHaÙ²î÷z–×Qduß;çBrQfظÌõµÿŸïè&D•V«\%6Ý# ǹƙun™]ËsÌ·¢XË3Ø»ö·e9†H‹%Ò²¶ˆ‚®IÒk.Õ=,È÷-1à‚ak^ÀÌÁß1½è”•¬)…ë9¶(id0^}±y_º¾7бÇp«é:ž%‡‚®ì`i6¢Êt*­ã°ŠjºBL¨ FÓ ›¿vU³Wõm_ÕIÂÀm†ÌuוD ûI˜8HâŠ7˜†Ô¼&†í%D…ËÌDÕ„3ìqON›B ¡üä:ACvs×Ï€úÒbÁÌ0¼\8´ùy’ ³3CEwþ&T*µÔôkYú+$@Bèß!KÍ¡5I»YF6V¸ûANwh`+´&ÇXEŠìÝKHš,£Î­Qž¢ûFV#»ù ‘ˆžJ#h…Rq„ØÞñïÓ +dAy ¬Ó~#¨TçN*)oÞ°À»OO„¢Sx€‡”«Î‘ 86'çÇ?šÆ?¦ø'äk„<ų@z†ž±—ÅV`·ÿPK  ²•G2_e¦è(org/gradle/cli/CommandLineParser$1.class…ŒA Â0EÿhµZ v庈kCÏPEÁÄv¨-iIõp.<€‡S\ºp>ó?fÞëýxȇCD„èln¶àm­˜°ÈMÛJ]îkÍ'iÛu#ï’0ßèBWëêÀÝÕ”!f„¥±•¨¬,‹BÕâçy•@ˆwZ³Í•tŽ!é‘BI]‰ã¥á¢#¤ÿHIê9|gèߌ|{Ÿúü õ-™|PK  ²•GRB ¨÷<org/gradle/cli/CommandLineParser$MissingOptionArgState.class•]OÓ`ÇÿÏ6º­ëd  ¾2”½ cˆ€€&‚/1N4Á`‚W¬™5ÝSÒß/¼ñÆý ^H¢#ñÂà‡2ž§-sKÆ–&í9OÏù×´þþú  „1äUÄQˆ“4%Õ«*˜–gE)ÍD1Å5ÅÚu KD1Ç0ôØ“ŸpÛÑí —»:ƒö@Ý^3¹ãèÃ\Ù²«ÅªÍ+¦^Ü1âšU«qQ)B÷ý&Ž@–)ÊŠ! ÷ÃB¶@n“!²fU(ŸAi¸¾WÛÖí§|Û¤“á²µÃÍMnR#î ƒÒyd8Ž!ª>ò¶] ªZìžEGO*%Uão·uÒl×GQ²å—ü/š\T‹®Mn˹-†0·«2¿#/’–hAlu@t€ö×ý„%Ö-q*Ñ2ßÕN¹wÓ e‰&“»¢Bµgs›Q\gÃó¾†Þݧä-FèÍŒ¼•&Žã nX{öŽ~Ï‹2zÄfZöCà Rò6¯AÅ Ó°ˆÎἆ%ŒGA¬ùþöˆaª—3ÌöÞ@†L÷v0Lc8~£c†s¸Cª7ð]S—af²=/Œêµb¯¦ —!ÝÍiú€Å!WiXNƒ$!9!:"í>i!z&òì¡|á'Âû¤†Èd&Ïw`8Ig£¾9N‘O’XF×iŒÐ¥ªä uD¾5x äRlâ( Ž‚38ëqhKÎCŠO ™ÿp…©:”ïm¸O.í6pÉ'%•¤É0€o Æ%˜rŒ¶C?7åo@ãÓ¸ <=@QŽÏþ·O•îì b쫇Ó|C—Ma"€Ü!˰߰°ý¶d^7UØÜ°Ë^ã¥t“^EYÏ3G5Dh8%hù+aòVÂEÒ3ôD*öPK  ²•G¼¬M2‘ƒ=org/gradle/cli/CommandLineParser$OptionStringComparator.classTÏOAþfºeam¡ò£`µ‚Z¡Pd) HJˆ¤‰ ± ‰Þ¦í¦ YvÉî–ø¯ÀÉ›{ñ‰&Ƴ”ñÍv­Bkj8tÞ›÷¾yïÛoÞôÇÏ/ßñÌÀæ jѱ cÑ€†%µ˜:–UšlQÇŠŽU†-éÈ`›!–Ÿ?`ÐÊnÝb©HÇzÕ<®ZÞQµ)2ZqkÂ>žTû(¨‡ÒgHïÒuöO:²{|"<¸Cb×q,¯l ß··Yq½†ÙðDݶ̚-M §®š½žoy¹Þ•J z-ÜQÓ¹|åHœ ÓNÃl#KÝ‘ù]:ã†ÕŠŠ} “^!üC× ÇÞýÞ¬\k´W=²jA©;¢ñ÷ËW{D9•!¥×–ëûùÅ’º„Üÿ†öeÃAS‰²õ/ªÍ@Úæ%·º…ئRƾÛôjÖ ©n5ÝÕnIJ dO±®cƒáy_Žeá[»Žo9¾ ä©Õ=Ó}ªã£&fH]çͰqÃiºR¬-Ãx/µö¿ šÛrÁaø†hfúÁ2=BƒFâ=Y \ K»aÚ™diZ_¸ÿLÇ­*ÈSH‘ŸhpãPsEòD‡?!Fq`§p˜öºöZl{ñïݺ–=ƒ1¡#Î[Ùsåž!®µ¾B{Ë./z Õi|ŸþCÀG1ÌÇåã˜åÈóIùVy&$´ÖnRÞ$¦B’;¸C'r«Èà.ÑË"‡{äi©"žˆ¿¤LŒlr¡ðú"ý.0Øêùõ3m`§Y’(N‡Í’”›V055QáM²<’“õ–3Ýt Æ©`Žò1<ñ³˜ mcd3B£ä= ‹Ô Ý`OùPK  ²•Gè# 
òGK1org/gradle/cli/CommandLineArgumentException.class•‘ÍJ1…Oú3£µ¶Zm+êÂî´Uו‚ˆ‚0¸°¥ût¦‘™D23êk¹*¸ð|(1IK-‚YÜäžÜûÝòþñúà Í Ø2aÛEÝEƒÀ9ç‚§=‚ú¡O©QzýTqv†…K9fUŸ v›Å#¦ti¥æË€FCª¸Éçb!ð„àÄ—*ôBEÇó‚ˆ{—2Ž©È… ³˜‰ôê9`)—¢KàÆ,Ihh©¿\´—xû¦ &J>™ùÖp1 Y¢Iõ¥¥¾ÌTÀ®¹ñÛúËØ©”Q„cB… óWì~Í¿ËDÊc¶¸D yý få@Ì]õtžÓ»ÓîLA^ìýŠŽ%«6uåVõ©1«Òúš¥8(c]3 «2gÝèy½»íÎñ¹Ÿ°=Ý´oa³²ÌÃÌ©Š kqÓv×>PK  ²•G?hÿLJ=org/gradle/cli/CommandLineParser$KnownOptionParserState.classÝXkxå~¿d“Y&†…V§šds‹Ð´!¢¤†@LŒ´ÚawL73Û™ÙìÅJ[íÝ^Ð6V{±ÚzJB¼Ô^°¢õÖ¢Oÿöyú<}úߟ­=gfv³›Ý°‰?ûç›óïܾsûÎîÅÿž@;Þ•±w® %!£cL!XŒIòòy^l^^\¦OIcBÂ×ਠw1ô…0¾X/áËLt·Lâ¿ÂÛ{$|UÆz|=Œ{elÃ}2¾oòÉ·d|ßaÂï†ñ=ÆÜ_ïã¼üPÂøø8? ã”ðcu°Âø 'yy(ŒŸò÷aàgLûó ü÷Kø%«yT¯$œP¬¤kXæ€k戄“„Ù—…¡m¯iêvOBsÝhí³ì‘Ö[‹'ôÖXÂhí±ÆÆ43Þg˜ú~Ívt»&›‡@¹¯Aàê˰úÃqélÅ€1bjnÊ&Á5 Žwúû„fŽ´÷îbqî¨áÔ´]þæ¾L½Ó0 ·KàB}qòeF¡¸ç‹;zyÎlõXqòÖLÒŸ;¤Û´C ÂDú¬˜–Ôlƒ÷2ÄÞXw“iM˜¾U9ÑÜ^܀¬ä[Ù2»í‘Ô˜nº»êóÃÕ°ÜT ky‘|q«Ø÷šíöëG\R-ÐV¿l«FtwæÌ[^Zßp«wâL&töJ™—²%FÀÚ¬¼ìuu[s-ά ËÕí}A*I/´ém^²åE>È @àßÿéYœºÝKá’#m¼´óÒÁË^:yÙÊ}` b(`VÊŽé7œæëòhZ8h ö WÁ ¸‘¡'Ö̇²Û¶µ£Ügøè× v¡GÁVlWpv(؉)èB·‚A 4.nS:“v‰éAø·ÇÍ :WýV­ÖÕ:ujÜÒÕ´\ÕÕîÔUÍTÓßBýq>ã÷:¬ÇȲßàI6ï)Oãݦª%Ý£6uBsÔ¤mq=®ÞaÙjlÕ-N+8ƒß*¸7+xC ÎbH”À¶¢.ßk8e•Ÿ9tå TijÍÓ$µßú †*˜Á9 ³ ÎcNÁsxFÂó ^À‹ ~ÇÁèÚ›J¸ÕjFº£Nè¶¾dù/á÷þ àìƒ?áBŽïýòQð2»ýÏxEÁE¼*°~!Å®”‘ˆë6õØ£ºªà5ü…—×6¤ÕŽד¶#ŸÄÕf"z¯HxSÁ[,ümüUÂßÔb•ªàÞ¸ööjŽ%¶Ž¦ÕpÍËêAMËi ô¦/}ºâ½2h4ÔÚi-§lÓ¨Óo/ðFì[Xuôlxzª ½o,<¡›#«—^—܆ÐZ<¾€;­†^=F݉„5‘yè±– g7ùeÿ§ÀÁ%8#ÿñ^ª+´XLwœšöÎ6jÙ=KÖ• xrá¥ò^ Éié§ô¶Ž/þ´å‹=ÀT¾Žqéܼ42Žb6Œ´‘ú†ü¡vc&¨é>•ܰŒôhå0g‘|,eŨæðÜC›Þgm}C~RqÈâñùᦊl¹>hAdýf›^]æ2gb“‰÷VŽ÷ö% %ÃjÝO2Ü!›ïǃ·–Lê&¥~sÁÁp‘fºcý¢9Ov­t³YSÀr\’w ÒÈ7ÚV*9dpÅ®Îñÿ€ÎÞ[™ƒ [³Æ3±¯½œGÒ©7•~’vÒOÞrDx¤ (ÂS…÷¥ÁÂûÒlá}iî ùïz‚K°»²’‡³„g:ù$íD)a€¦hãJ¢b¥Ñ9„†gP6…òhÓ¤hóÂÑêÐVD« +Ÿñ$ÞDë•hÄ<Y<Š*ñ6Š“¸JœB­xâ ôM'ÊX>öbàAlw‰±å¥͇<ˆ­/#ëöãS%¾?:Ї°iÊpdeèy¬.ž…4pWÌ òÅÆi¬NGò .oœÅš ‘ µ§3·ØÈOAO“õ§±YœA—x½â¬wƒu¾öÌ úñi mps`ãYOã6jIKxU“¨f}³X'0‰•sX?ÌÎÝp:Wõ*¾¾`ªóPÅsž:Ũ<%J:<¥ høJžÌÈ)g¼x)‹?œá¿%m¤x€INeìÈIÔeYè{­º ×Vðþ6úZ7õùl¢Ñuö€Ï…N¤o~e:WŽÀ,Ý{›fñV[EÀÕ‚f¯kú›§Qsêýùbj/ ƒ ’sØ<‡:JÇúHÃ4¢,m‘&fB‚¦A¼-þ¾umY 6¤´÷óMˆ®ãÔûÿ$PžÂuˆ X#^¦¬½H±·‹×0!^Çqñ&Å›xX¼…“âm\—ðžxÿïR,þî9¼‹Üœrv˜²¢÷`nÅA’ýá3^†ÿƒ*ÅÇ‘óƒÀtÑøYÜQ 
ªG-±M²$b´±©¹:T]6qfA˜ñ´îñé2Y©àÕ·ð XPa âA…)Ѓ Sp‡Wa `Ô놧á0>ê%Y9¶á®…ŒÓ÷!þìe} n§/*ÃXO߯ÿi;†ÇBòÿPK  ²•Gk±ÍÛ7org/gradle/cli/CommandLineParser$OptionComparator.class•UmOÓP~n×ÑQ:oSðD oe€ œb0™!ñcÍ,éZÒvÄŸáá‹_ÀHbøì"žÛ6s°JáC{Ï=ç9Ϲ÷¼´®~ýPÀŽŒ4ædôòWæ%,ȱ(c Ë^JX‘‘âû^IXãðu E ^3tm–ám2$rù}±dê }ÃÒw›Ýù¬˜¤¨Ø5ÍÜ׃ïC¥è}5\†ÌÞ±gØVÉnkŽæÙƒR¶,Ý)™šëê„XªØN]­;Ú¡©«5ÓP ÛЬCæ“æ¸º3y“£È Õüú»… ð,Æ#òeâ´}¹À0ïЂ/Ò]÷¨i'šjjV]­zŽaÕ‹ S¹6ãÞÁ‘^óŠ~ áÛÂuªÐÆ- Tøt±Y+yÕ&ïdè®uKóš<£•ÿµé¦ú¯ ñYÚ$j¹j7š¾cð¶Èv„ŸçÔ 2èWð[¶VïØA–Û›+màüSÞ*(áuåÍB1lņ,i®^¶\Ýr Ï8Ñ;cÅ Œà=Ãx\Ö8n”¡pﹸv± r CQ¥£o *]]÷‚ 4¨ý¹|[TuÞ’·Ü%`anošt¦€.AƒAiÏU¢ìÑ WÌGÅµÑ ç®S“/c¾o½4<}ôu!ð–£ÝíTZi®œ>‡ðƒƒôîâJv…!’•€a<Ÿ@*QèüÈ’´®Ìœ!qñ ;Gò'º.!íÎv¨ÖÅ ¤ø¾{nD<ƒ|ÚŠÖ‹1(ÐÁº0‡Q¹ôý“¬à IÜó9žbŒ|§èbã$‰™m:ݳðtÉB¼HOÏ\¢g–ž3(§‘W€­`iL9ó¥I’„ ø…Äk´ aÎXtβ E˜$ÂÙÈûøiÌøë,Ð:AyLS)²”Ù´"“"J?²QúA©XÅè_PK  ²•GäbÕ'án?org/gradle/cli/CommandLineParser$UnknownOptionParserState.class½UËRA==É$a2@ñ…ˆ(Éá¡"ˆ"¥¥UZ…²`×$Sãèd†šL”_ð_\H*~€ÿàÆ×Â7º°¼=™ B%qáæÞîÛ÷œ¾}îÍäýï·ïäq[A ã :0ÑA«œ0“ÂäÅÁ”‚iÌDq5Šk rÙãžÅCüw˺»& êÛÖÝe‹—Ëz™abÅqœáò¢¥ç –™[vJ%nWL[¯âFêàó !î É•güÏYÜ6rkžkÚÅ ‡X†á£Ì>O±ŽŸ ‘Ó6½E=}œ±9A{ÅgÖÂËN‘Šë)«•Ò¦î>æ›–.^丵Î]Sìƒ`Ø{j’HOìç¶óÒ~¸å™Žý—œ-”p˜žßeèÞ}^^rJI·=R7Ù °cS†ë­êÛÞ’{2i·OŠcÒÞi nÛŒ1^ã#rÊÞ²t1b7Àðê?4°yvÞo²´=)L^˜)a¦FZÓËÖœŠ[Ðý©c9â*zÑ'̼Šn$Tô ©BA—Ša F± â&£¸Å0÷ÏÓÁ0ÕÛÔÓ –mGdúá6Ša4Ýj3†š C'/ïn{._çV…öýƵÕ±+1DÂúX† ¡UR´È÷Ô%òH¢}4±ý´+Ñ.L¾_c´±=„´ìÂÚøä:"›‚L3þ2û…}F7û‚>ö§èL«`g%.dþJ\)ù+QLˆ¢gq.¸:K^dIá×µ{""¾ù¼jõ4àe8Á™ ²¶È›#àïu`¹¾@ÊTÁs䥓À?|pªšP{‘Œ‹þ‹˜ï€f‘rÄ m—ÔÚEhQrò!£"XØOÄÙ¯º’UVêH/.\¤ 鋘6–?Û9RÕ?SÍ©UÕ@çFcHû:‹UÆïŒ„1Ÿ-‹NòÉcjXücÎâ4Uy‰<blfq=¬üPK  ²•G"zÉZ’™ &org/gradle/cli/CommandLineOption.classV[sÓVþ”˱E Äá €cÜ”¤P·@H€¤IC E±UGT‘\YæÒû½ý ¼ô­å¥´S¤ÌtúÄCgú'úCÚ~GVdÙ²K¦“‰ÎžÕžÝow¿=òÿú€—ñm;0%c:Ž6Lu`;ÞèÀ f…ô¦xÌ%pó ÜÄ-ñxKÆÛ ܆*cAF>ŽÎê©‚4!½#¤¢x,Æ ÇpG(ßÁª%qÖKFIÆ{d«äè–Y–Ð5~G½«æ*Žnä¦4gXBÇ”^4U§bköÖ¿©n Õ,æ¦[7‹Ã'yBQíbeI3é%Ú°5Ôr™&›u#q2QÐÊy[wÁHH†ÝSYÐJ¶–W…ɬj›ÔJˆëf¾²@ØH7ºh[•Ò¬î,J8ÜÚ²‹¹¢­ -—7ôܨµ´¤š…qÝÔ&ÝÈnÑÝÔ“RéŒ1G³ÕC˜‘µ š(ONT–4{Z¼À­¼j̨¶.öž2jyiµéý~\P®SDz5â,êìžc¬¯¡KBO}p¶ÀÐ,Ì®ÙVÔ!^ÔœÉUmH„(´¯Q×"lbQ-Ÿö¸#a("ÎÀZêrøú£Šw6 ivλçžË=wþüýý 
à_|d°éÃŦ‡’‡m†ì©TÒœ1dª{m·÷C¾)•¸ô;BßóND™ÅfÜåQ›kiïã¤k¾É„aáöÉÈXÝqÝ2ÜЯܵRB7"ž$‚(ÇÍX‡A¨y/A7’A#î÷¹êÙ>£ºÊ;‘:Ã|¬(ÔæF ͹=”æÞô©U÷&K¿õcEZƒ¾P†á¢Ú|äßyq-£¥ ëS+·Â\ñäU•Ì|HQåS$Œð°Ã†Ýêdíú݄3¬1Tþ‡MZñ@wÅ¥´K)¾ãØ s˜AÖC…áhúU0^]ºí<Š. ùu—Ê“Ga(M"¹%zÂ.ì—…cg"W=º„d1föÁùIƒÙ”d}ÿçFøø¸´6,¤¸ˆO).a9Å¢Í,se,~BèŒÅÙ¿Å‹#ÂHpð+¼o(8„îYÿ:nYê² XÁGðQW¼ƒwe|WÁ÷ð}?À%dîæ73¶5Vü?–ñ6ÄFîeüTÁÏðs ½q8œé©ä3‘»×x[‰½ÀSç/²Ìà”‚ü‚WÏ&SƒwE>£à—ø‡F¯‚)L+˜ùý5VeüFÁ'1§à·øGyÝŒ`§ÕRÍ«£Wæ5‡®O™ÞžJoÆ0íLA»Àè ûEÆ.ȸ¦à:V¼‡§ü«Íä61'!œu‘nžQœÃºwµ±Tš¦îÊK7ïrXrúÌ[­1œEã0w‰$D‚ëÏž¶CîÏŒm  kV%8Óûp¶¼y'ЃØÓZT‰1жO~0ÈúÍ¢–ËšQX|ƒló«%>o¶ª‹èwÄ l𤠼(èØt((ÌTãØãyN,áx¡0^*QpKðWj'DG ƒ\…ð!®Ùèþú ׇ6¬Ùèþú!$ù>Œ!ÊÀ4€0dî¼’»)w¡¹[ß@„b”bìäu1N±…bâ’ýkP$Lç×°MÂUœ¤’ðWlŸ¹Vê´ Gn"½†{$ G»¢ž^FHŽbûp,÷>"]±®Èv„qvåοVî¼w΄1Âç(|v3äü­ßA©“?Íw2ä{1‰û0‹]°±/â~¼„ ^&I£ÝûÊèC9$èE–—g­dÐÃ$}œuvœNrgš5&<g±SвÒF2#ÂSŸÌÓ¤XçÉÓLBˆèÇñi ×,´ÎøŸ¥ÄAì|‚;"Am¹þ›Èœì2úBXõiL8 ÂGÇ:ܾmÌ÷á$œûØx0ì$aOÞöI<ååêM¾}ÛS×!ݵiŸÉ±3¼Šhduåοë~ :H¢¨§nR±‡$쥯½ôX˜Î´› ƒ¢e3~ÝöùîôáS¬÷úº;Ÿöiý %^v^ü¢„…ŽhC:»7DçZ§óÃÎÛè™Kï‹8ÕÜ/ özÿ’_¯Ú´aM¦Q¾@''3.`C.?)¶¬h½åó˜÷˜t˜¢¢I6¶D©®%¢ EOFÝ(õ²¨F‘eõ\ßÀ2KûR]ßAEÇA0ÝC4¼înàeF”]Þ™h}41°îoÊ ZLœ*§Ò’Ÿ-"øvv;ÓHr¤ ÇçZJÄÙ‹ø¬gû ÏöPmvúõ“­«º2°†\ˆÛ)Q·\ô‡Äýçzèn1=Ã!ù,ÝyŽ·Àóœû/øîíâ(}Ø)¦{¿VLC¾ËCŽ£"C%sñPK  ²•G‹A5l| :org/gradle/cli/ProjectPropertiesCommandLineConverter.class’KOÂ@…ÏD|?PâìÀuã#QŒ+¢$÷C;–1m‡  ÿJW&.üþ(ãª1ØDã,îéœ9ßôv¦oï/¯ö±[@yl汕G)‡ívÙ }FHWkw„LSºœ°Ü!¿]®nY×7ÎZK:Ì¿cJDóØÌ螎ZRy¶§˜ësÛñ…ÝVò;ÚHŸ+-ø )ƒ€…n´kS†#cruLXõøgh|Ó×B†„j­õÀFÌöYèÙ­Dè™èÎè%×LøÜ%”ÖŽñŽ…Ž*‡_‰¨å½?õÖˆ:("‡<Ú„bJÕö ®­ØŠtòfë^*K÷¸Õ ßµ¦ XUÞðV½Œ£Üi01Èk ÂÁp8ƒwZ±ß8T0gî?Pôa¦Î›™m”ŒÎí=ƒžÌC S³s ¦§£‹| Ë1\áôZêq-}CÓ_èJšžEˉèjš™E+ ¨ùw'©õPK  ²•G2lW¶JForg/gradle/cli/CommandLineParser$CaseInsensitiveStringComparator.class¥S]oA=3|,àb‘¶øYÅŠ–/»¥úb JbB$jBÓ÷œfÙmv—Æ¿âàÅ—šø`|öGï,L C˜{çιçÎ=wöç¯oß4ð<O2Hé%=½” Tt¸j nà©}†dK9*8bˆ•+' ñ¶;” ]åÈwÓI_zÇ¢oS$ßuÂ>žÒû(>*ŸáA[ø²ãøÒñU Îe/ð”3n»“3á‰ÀõÌŽãH¯m ß—”ðºëzck쉡-­­,ÂN„3ÔU?Ï—^i e“Á„;ºÆ^¹{*Î…e glÍ‘ÍåH¥C9îY \§¡ûY,Ž©³¡Xç ûûþ©ÍåˆfçŸ.Ggú¤aÀb@H·¶ùFSÏ¢ô/@†tOLµ­¿]u(Ûú#_k¹û#¢ÊôÜ©7o”na©Ü¾N2‘Á5h˜8)™»Jeb[ž1¼üß1_bŸ7ݵª#†Ýõj1lFæØíŒדº>ÃöФš^ ŠëèIƒ>7Ê¢0®u¢I;‹, ‰êWð/äpdiMê Oâ:ùæ€ 䡟 
I%'›¯Õ/;ÚùŒÄÎLûñÙ‚)¯<…$O#Ã30¹²V癫ö¶Q+åq“¤óvCü#”Bû7Â>8É^CŽî~Ÿ,r©ßPK  ²•G…¥´gÆ*&org/gradle/cli/CommandLineParser.classY xTÕµ^+ó8““$‘ƒ<"Éc’B€ð2DT”0†‡xHN’ÉL˜™Q¨¢Ô*õQ«¢`}W©­VEIˆQ¬­ïbm«}Ý>lïíõÞ[Û{o{Ûzk[é¿ö™™ÌL&öããì={¯½ÖÚëñïµwÞüø¹D4‹OæÒúùüE>Éç¯ý-—þNëä¡S:í`Ò©‰Yãti]:ìÖÙÃ^™Ö4öiœ›Ç:çéTÀ†Nù³DOµnvŸÃsäS ñ\sµÆótZÊó5^ qNËx¡Nµ¼Hˆk¼D§óÅ KuºPfó¹×êt±0\Æs|\'V9Of–ûh§H9_>è|!¯ÞE_œGëx¥ë5nî*¹ÚÇkDïK|Ü(¿×j|©NWãu\ ½õ⊀ô_æã&Ù×Zy£ô6ùx³Xçr·ð:›¼ÕÇÍ¢ üÑ¢³Å­>n“ÉvƒLÄd¬‡­h]ȌŬÓ˜Uñ`$ÜÃmu‘ŽN3jÆ#Q·1M®3cÖŠpÌ Ç‚ñàN+›ˆ)ß^¾n;Ó¸KÃÛÑ]a{rµYÑÆ¸·4Aâŧ™ë`*p3Õc1ˆ¶gk£mj {©m[Q{4¦q„i´9?Å»¶B¯3ÜˬÖHÔÊ׸3e€Ú]fÔÊ‹<ÈËÀ~tk1±jõÚ«¶4ÔÖ/ß²ºvíÚå—40ùWn3wšU]ñ`¨*jµY»«V›qh^È4*b+»¬;ÉcTu½Ù šÜÆ`[ØŒwE±Ã噳‹ìŸ!3ÜVe3X¸2m«j‹š-!«ª9¬Jìme0lÙÊ.\žf(ÙUÜmµ$ÌÅİ5žá/̶XQ«ÙT¾€(Ú²ƒ‘*5´>Ä0XŸ1MB)ª–žŒ™[C,î] ãK˜\%¥ë˜Üu‘K6 µº:¶ZѵBQ+#Ífh ÊïÄ ;Þ„S>ao¶gD|IJ¯„J"/?{ÉÓ)K˜æ•llÅÒlQŠKš@pÈkø ÁlËw7[ɀĿ—¤Q­€&²Ãá‰3” 1ÍÒÓe¸x­p 2î^Њ$ãW,ï ¾€„fÍÂÚHB¶Ë9ÁiȰ´ˆTûAÖ/L gnq×™½1·:m^W y^9ôÖ37=&3ªº;“‘åä ‡¤’”Y9lÚaysò@‘« äã´’AŒœ^WŒV!½[‡ÌNlÜf4 wºCVMAs6l|rú% %,ƒXÄ2Ñ­ÑŠcÁ9™#‹†‡_z§`Î¥1³ Nô§{¦¶³ng߈d ôèH´ÅŠ&“ȵo­ÊÉ‘ü4µVc¢©k»Õ ØØi†º Ôc…ãQù½Ün‹2‘yš–U‘®¸ £†È@œCê8Èdq~r4#3”†rž¬ÖÕaîfššmAÇ𖵡Ó-tÚ†Ó1ã¸zîÐÇ“ã:÷¶H¶¨$m}]$²š•ÇBÚ¬<8&ÄÉéH¶jë60ÜS“±”+²eÀ1+UÈx·vµ¶ŠOÆ ´L͈¸V)2Ï÷†‹ÊdcKw$“ÃÜÔ§ÎèT*êfs³‹M›;s&Óô’¡A @Îî™+çËÊs†³rPÑ’d±@Xä%~ÌRª,>ÃAÅ…oèíBBoŒtE›Qà ŽÄ¢RtÝmÐAºÓ /ÐmÝAw2Oò^ÕïìŠÃ¦–Ùa‹Ð8nиˠüzP>÷6x¡kÇ&qâ;è( wi¼Ûàn¾Ò ‡é1¦â¡ŽƒAì‘Æ_eðÞ+ ?#Ÿ«™ª‡<öëZY}Æû ¾–¯Óx¿ÁŸåë þߜˎ-ƒoäˆƒŠ ƒ?Ï71U )4½êY7㨨ØxùâÍåß ¾•¿`ðmüEƒoç;h%Kc¥ ŠÅ•eßÉ…è.ƒïæCm5/Ó2K‡Äiy‰±ŠÍ•eHË48MV›Ë£ÑHÔàâôØÁY µHB3öuÉà{ù>ƒïC<Àwü ?dðÃ|Èà/Ó=?Âj|Ä௷lÊ6ƒã¯&"¿%~¾áàµQË‚ƒ™f ³€¸N‰u¿&ŸÇ ~‚¿žÜgÔÃÛ­–TpÎ&óìKœÁOòS[©FÍn924~(6è—ô+ƒŠŸág >ÆOiÜcÐO¹×àãYËS§ˆÁ}qîé±éáŒÀ³1Úàçøƒûùy™®¼ô)V\, >!i8Æ5~Qè¿"%­FÚ¶™!»]8¬S…8j/{ëÅ3¦ÇfcÅfÐÒ]Ü‚" lµTŠ/‰™¿ið·$jêÌp8/6[ZŠ#ikM¬ 'GšmšX܌ƋwãíÅ3*fTt ݪñ˿¯jüšdÄõL‡É±8u7GÂq3†¢ífÛ‰›‹›Ûá±f„X ñ)²T³eFå'¬í|_7ø QdBɦ¥{6u^Uß½S½Š-›ËJ ~“¼S†Œ%\Î’hÈçƒC²8ÝKÂpâÝùiö§¿G ŠOóF1wxp:èÉb¸ùœýpÁø4—¥áà}æ£GùééR/ý¼éŽÅ­ŽA öq¼p ¬Ï:¨UY?zÐ-ƒºæg¡Ê=¯ì’ hãະtp™?"ºÀ@üà4®¤ôt¥£ÿ4…«è4ÖùÆ™íÃ(„†¾J}ž¥¤ÖõT0qe´fl/í*_8x¸ÜnƬÝqõ ƒºÑV?F—”:ßùf÷V¹½GãÉÛdQ‰CY»AÞ'vt™¡XAÒ] X1Èjÿ¼M4œÑÍíâÜÜX×ÖX"¢‹JV8–â#$èÓ¶°Áa ÿÌíÛñfdB˜Y½×ÌÆs@Ö³‡ 
·X»å]À-­¯pÞš® J%+Ó2'×|ZÙy‰3+v±Üš½¦ºô]yŸ¦Z“‚?IÝÝKUÙfÅQõ ¨¯ÛSÈ4ó†¬p[¼]…)  7DÂIæ+Êä6–‹~®6©ÌÎvŠ=§ÀN UÒ¹pž#˜Ú¢‘®ÎõA‘[˜ òä!q7cèÌODh²$ÏJ5ç…°ÚPèàé¢cC©‡ƒ‚Œ·`Ff" mÐó¬Xs4˜|íë”'Œù6¦•¼Û­než‘“õ¦XÇ# ¿øÔkŠ¢ñAü:ûÅ‹®>ö+ÉÀ Az ëé/)œ2{ Ø<­¡®´ídM¤ÃŒÝÄûÀ‚O¥kfP;¢#\®«ê2f‡VZñŸñw…®EÕí}€Q*ÆFà«n¶2ld/¯·-tà\Ÿä7*›ü͘ÒbÅ‚Q«%uè„֌pU·óOHz‡¿ŠÐY´ƒšˆÈKã¥xF/GnïÄôEÕ¿ÿq‘Wýƒª-”{>Qþ8¹+â÷(ÐâÚŽï—ðk1¹ÑÃhY?yšŽ’·—´^ò=­Ü‹¯Ž–h)¤sé>ô ›œî§ˆ«¬6“GÑN,ë¥\°Ó›z)ï¹Ð5šÊݽ4âñãBH&Z†ï°«£ tž0Æf’ ½»è!ˆØGÓ—1gk£¨rËÊÓÈõTŠ­Wé·*UnŠU.=Bbþú²í¯àÿÃô¹„©ë ôÏw?å7õSASÙ³äê£B¦^:£—F×÷SQSY€{iLCyÅÕÇíw÷Ñ8¦C|Rz~¦oÐøO…ßÓCgâ'üžÂ =4ñå÷Ó¤&Y:¹áß‹áâ:ë¥ßâ)=4µÆë÷öÐ4ìÁtEPé÷ä/«½6e(f­æ×üøå¾B­-Á¢ °¨T-:KѺ= R°õ{¼)ºÉG©ì0ºÔ‡)¯ŸðTÅ‹X¢¯lÛïí£ÊDÌáåMHTÚÑ» Þ5š¢Ô”û*ú© ŒfÎê¡ÙB<»‡Î‘vjÏïë¡9‡¨ÐïKnªÐžÛà©Î-Ê“U?Dó±+ü’}ͨÑ3Yê –zŠå܆ý¹|äÔËGh)tÉIW71„¦¥—éÊeóŽ|<³¢‡æ$ž\*žŽRß4Ž6ÑTÄx-]NÑZKW‰Ûi+uS3]M-tYH±VDY=Kíôé»´~ª÷(DR¢0 ¥NžH;¸˜¢\JqžG]ÜH;y#íâmÔÍt%參øíáGi/?ŽßÇh?O×ñ«´Ÿß¦øº‘O7óéVþ+ÝÆÓí9’þû=H°ndÃWékøÞI“éqz½Gl2¦A3Þ….6iSèëô$éCÎ?…¹Ðj«šõAnn¢WJ¯ÓÓ Ó ƒ½Ö‡ýÖz¿kù9z=•ó;°Ä1Iõw2Ñ{úáÖÀ¿EnG–Nå_Q=›÷'1FQ=Ÿ‘­6Ľ€þ z1:s1"¯•¹!¦llàºJJ.-\Yä bq/XÜçÀb’ý%›»¡¹†±M‚½´ ¾¼¡âDµÛUí)ò¹BJyf×xÉܪÉÁvæÐúýÄï/}´ˆ©¡Â[‘À’Å‚0ZräÔûe19Ö& ™¶™»ÌD[ /¬ÖÃÏà7Ñwü5 q`{EG$«\ÀòúhÜT@ÓáŸ'°Ã5 x™^ÁØb@®øÌ¥8$±s½ª°Ó‡È ÞÏIyÊyCñMú¶m×*ðšò[ý´´©¼—Î|ÖV—5ëz鼆оöÑr˜¢ŸÎ‡é.¨qW(kÔÃö8)½„A¼6› …Í °QHˆŒ¿(Éäb0Y‰ÌÖl.¹þÜ—uÒK@´. 
Rí~€¦ø}NÓGõ`v„&LO<}êQ¿¯°á8­‚¨°ZؾJyŠ÷aÒ ×`ÀíÏõë}t‰Kèg" ä h”=‹v^ ¥Ý$é%´ÓŠ<‚OÕÇimµá!K.¼¤3µdX¡Ö«åJø™ÂËÜ/PS“hÔº=´ZHîg¶RÐÛ\x¹½)Oy“;ÁÑ^xê&„á–§S!x%K@& ìG —B»ÙHàÕ iFšîBšJ–Ý­N ôþös„Ñû—ß!„>BÐ|L'Y§·‰ßá|z›çÐwù\ú×Ñ÷¹Þá]ô.ßJ?äWèÇü:ý„O‚KÞ¢b8)`Eט%ds!û1z ¼5H¿¾Co#<ß§i»'1û;„ö÷ÔŠÕÐõ(ÂÝK5dC”² ~ORiq ÐØ¸ÿ>½‹±5à¨Æ$¨“iÞè‡ù\~™~D?FøùYú 4pÑl¾ÚÞFnììú©J3Iš9hm=–ÒóçU[Ö/0fóx#Åã=¤¥./ˉúç—h¡]ÔOWÀ™f½{Iùkä-\³²¬¶¦&4=Q˜Ôx&¦ÜÎüæõ9ìZ¤ç²üžµÀ¥RÒVô&"s$¤Яÿ y@óŸ(ÿ •Ûoh9} ¼±ÚPný+ýöæG?ú8*u¤zïöËè?À#Ü…Ïé ¯èƒÖ†ú²Gé· m‡ed×'Êë'–¸ªÝEî‰Ñ®@‘{vG!«'Q‹øû©µ©°-‘žÆãÔÞKAT8’äÛ’óÛ³çRH,ÓÑCáà šH6Í~7ûÆ~ZÜ$¥g/uÖø¤ã÷¤ŠT*»•^î@Þ /Õ4ß?Ñhú3 ýCšOm>¢Ëèo("þN{0w-Œr 3=Î81ØEÏã¨IBûh”GÕ1›ƒ hÏA¯‚vì|¥‚v7„*®Ý8 §¥ ýD ÚO$ ýøWr"'߇LùïÄéX…V<#v?žUgW§‹úaˆÅÿK¿wXìÊ.ÒÿÁQrÎð$ÿýÑa1C2ìõ§Ô­¨ZÍ?ƒvôPXî>Bn×3TV;Nñg(ô´ª!Š„jäÈ<ôþ¬$}ˆ¬ŒE]¸¯‰b¢Ý;[7Ú+Ñ.C{Ú=h÷‚ÇgÐ^­ü¸ƒö¹uºíuŸ„v?ÚÏ¢½óEh?‡ö´7ºsèÚσþ&´7ƒnÐôoî?PK  ²•G_>Ò£)3org/gradle/cli/CommandLineParser$AfterOptions.class­•mOÓPÇÿ·ëè(ˆÏ ‚@ ”¡¢ˆ1Q|ÉDt‰¼1wkÕ®5wEñ#ø]|!‰J¢‰Àe<·«£²‘‚1MzÏ9÷Üß9çÞsÛŸ¿¾ýPÆŠÓ²˜é'IW1«!y &T”U,2äê~³É=«âx6ÃxÅ ³!¸åÚfÝuÌ'\´lkußg…!sËñœà6Ãät²ûL•!½ê[/HÃúv³f‹§¼æ’e¸â×¹[å‘zdL[N‹!çe`‹ÇoÇ÷¤ºæy¶Xuy«e“jŒ‹&!&âë)ëb“¿¯ÙAÛÈ02]yÅßrÓå^ÃÜ„ã5Vf6R\4dr]“ ¾C¨¸Â0Ô–ÛQi2 6{{ÄJ.¢‹-÷ßòÏùÞºïý©&÷Ww{Õ–/IÅU0¬ᘓÉå°”ù*3Le ƒ¶áo‹ºýÀ‘­1Úå3/‹Ì#‡¼|]Ë£šŠ%†åDú3ïµç¿ózœÞØaS2Æu7òXÆM†¹cõ Ãìq¶ŸañøÝA78yW¦ºNô°ó*%ù1|èÕëÿ¡aâ­ø¯évÅ¥Ì-ëþN x•»Ûö!*JôÅÌB¶ÿ°ì)’ú È>#ËiIShÌélŠn|Aj—TƒôDŠVÖÐÇêИ…ÙFÛî(âJËè9‰‘ºD£ôRR;¬ ä%ÜŠ1”C!Ë©1†³ã­ è(|Gú¹¡FJfØ÷é°Km×°¥tŽ$…äó¸¡ïF5k’j|EFß§jr† dY+–ªÖ!kmrQ>ñŸÑ=°Ý¾ˆe˜ép2¸n›”&0ft9\9E'ètN* ‘×iϤåÐÀK4¢˜¥ 9h¿PK  ²•GGÑfì’œ3org/gradle/cli/CommandLineParser$OptionString.class•SíNA=Ó¶”µ]Ê—(š‚UÚ-e)â'ˆÆ# ÿ íX–lw›ÝÅè£øþ•D ÑÄð¡Œw¦+BÛøsçÞ;sî=çÎÌï??~¨âÙ ® ”&c¦‘AY†s)TÒ˜‡¥a! 
eiªÒ,¦pWž]Òp_Æ8÷› ¹µ=þ‘[w›ÖfèÛns™aÀk‡¶ç’³b»v¸Ê0[ì=×›)m1$j^C0d×lW¬ï·v„ÿ–ï8BvòêÜÙâ¾-ã(™wí€AßP ;e(|åºÂ¯9<íZkžß´š>o8ª;¶UóZ-î6d‹7Ü„_8'™¦_ØAÛáŸ×y‹úŒKý„¦B¯ãkxȆ×Òy>ŸªšEìÓ‚4Ui Á1 ñz]A¡º´@ðZñ’êû Mozû~]¼´åÐÇ{*ÌKˆŽa:r‘摎, b^©hx¬c+ Ý¥ŸïÛNCø:ž`UÇ L1°ŠŽë˜Òð”¡r)ê Æÿú;{¢2Ìœ?3zËE9n£›éBî‡Á;;Üeës¹¥÷ôÊy»-ÜÑíw 'i¦©æÏã†<ý¸ ýÖM”ÆI^NN™Ö¤ÊЛ¥h›¢8­Yóq³|ˆ„9wv  c – ÷9„Ó1Ȩ(Ë`œòù ¸(O¶aÊ“bäÓmDm>¨X1¿#‘Kaà ¦~BÛ>FJ†ƒ”g´!ý“ÿ6†Îl|;á4 k±aÅCïÔx0ù¢ž­r/iAïŸ';`CŠÍGà ’G¡™å¹cÄºàš‚›#'3Ð0Í@ÃŒšôn¡€¸‘ÂmÜéaF⺙ig˜M«Ò1Ì*[Ĥº’Ýî=\E7i…‘ú PK  ²•Gx&âT` ;org/gradle/cli/AbstractPropertiesCommandLineConverter.class­V[wUþ L2™æ‚¡5&­I  ­¶i-4JÓÔ`ÉE©‰i+v€ ™fpfˆDÿƒï®å{_ë m\KWŸýþuŸ d°¥YÂâ\öÙgïoûœ}øã¯_p?H8‡Û"–%œÅÞ¬H¸‹O%¬"ͧŸñÑ= ¬IXÇÆ6ñù¾@–7÷𥶰=„q|Å';¼y â!ßÿHÄ×"r þ¤¦köƒ7Ùb–¢Ê0’Ñtu½VÉ«æ}%_&I c”ò–bj|Þ öžf1\Ïf)^2•bYÊZ<•·lS)Ø›¦QUM[S­e£RQô"7»lè$TÍÃXI=V:ܨښ¡3Œ‡#™}å@‰—½ÏÚ¦¦—HuÂ¥zGµ­¬¦z¬YSkY,ú®Vª™9> ¶Ú¦bZ„‹á¯:c†Ù×ë“¶Ñrõ*í&2Ò M n»Ð8&‹»M2j¶Vޝ)ÕDäÄœ2W5( ,MIz¢n)嚺R¯šªe9½ØôhsÄt‡©4%D± ŒØ ÆêƒTmg™œ“èÎvœÃêñá¹Ö­štÃtK–¸¿•zAm"ñ e7«•tÅv²ûãóÙ§ÛÈ)Á®ö ÈÙ¹‘ßW v"âQÊê—ys¥;£íåâ]Â^W/yªpö³FÍ,¨w5žÂh}Û’ñ6&FOÚ•ñ.¦eÌ`ZÄcïC‘—Q@QÆ,_Q±+¢ÄWöDh2öñDDYFºCFßÊ0Á¯À-6j q€ïdÔqÈ›ïÎtÅ+#‚y:§*[ö0íÖîÐJ™¥ZEÕíö‰f˜~]µ¡r~èÎN¤ŸÂó ãÇÕWÞS¬cXDé\¸/Ãô««ä.†ÿ'ˆ®ëÂìǶk_ÛÃÅþ4©°Ð«âTSâ!î¬ÍâWn¸[Â0 µª(•½® Õ5à–RÉ%þÖÕºí<ÃèaÕI÷SؾꢦÕúÆ.C°iþ$Ôhû°{wŸÅfЪñCMöx(étÏ'9î%fèÿÅ9ðÆo=µïÐ,N=ÝMøæŸƒýB&©õ; ¦Àï­£€ó¸/ N?ÓêgyO:ïa®e´JÚ^ê/F}¿Á³ãoÀ›Ý¨² øÖb ø×b4Øþ×kµ“tоç"ÁtÓZ3 Ñ‘ÞU 0íf¼x÷þ'y÷SŸ‹r¿ 6 aˆaýÒd†Ÿ°Bƒ3 /á¹)LáF}AßÏ‹Ñtôc^l?Å$ͼA_ !è7ðVkéï±gäÁë ¾@”‚Ø ćÄCŠðí²G„‹G°Dˆ¥oŒÐ…p—h.` ÷(—øríørôoó‡õY»Jñ]kçKIrÆInà#‡¿›£)ò• ç`•,r›Ò|4öã ŸõÌm“Y©íYB²åYÂ-BÝéÇÓòô1>qúÔ?PK  ²•G“¼¼ ,org/gradle/cli/ParsedCommandLineOption.classS[OAþ¶·-í"¥\A(¡-ÊŠ¼P«1!iĤƒoC;Y†lw›Ý- Ï&¾ø ‰JD}öGÏì.¥I|™9ûï|ß93¿ÿ|ÿ`+)dPPQL!‚Bú0Ÿ¢å†Š›I,¤€žÆ-,&q[~器+¿÷䲤⾊ {ÌlqWA¦²Ëö˜Þò„©W„ë-+è© Ãb^Ëá fº~—‚³É,C¯zްŒå2¥$JÂ^YA4_ØT[µë”ÜWÑjlsçÛ6)’­Ø5fn2GÈsŒy;‚”ä+¶cè†Ãê&×k¦Ð_2ÇåõU»Ñ`V]Rm4=a[T.ipoSP0˜/œ—DNDœ=tšœ=¼ÀZšÕë+ŽÑjpËS0”?’Ž“¬ ÉþKOr‡¹¡bjÑ©ªÝrjü¹»Àê‚dÒ0„a§jW‡íKÉú‘Õp³ &N«®›&7˜Yõ˜Ç×ÞÖ¸ÏEˆ€4W·¹›³l/·ÃöxŽYû9ÿ2,ÈBS*jx’<ÂIy®Ñ2=Ñ4y€t ZÂcº<ÝN5”ñDÃS)höÿÆy†ec{—ר‡—ÎŽ†îˆ+Þ½[§•†+G±Þ9ú •tGi`]ƒ 
[binary contents of gradle/wrapper/gradle-wrapper.jar omitted: not representable as text]
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/android/gradle/wrapper/gradle-wrapper.properties
#Wed Jan 27 08:20:52 CST 2016
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-2.9-bin.zip
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/android/gradlew
#!/usr/bin/env bash

##############################################################################
##
##  Gradle start up script for UN*X
##
##############################################################################

# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
DEFAULT_JVM_OPTS=""

APP_NAME="Gradle"
APP_BASE_NAME=`basename "$0"`

# Use the maximum available, or set MAX_FD != -1 to use that value.
MAX_FD="maximum"

warn ( ) {
    echo "$*"
}

die ( ) {
    echo
    echo "$*"
    echo
    exit 1
}

# OS specific support (must be 'true' or 'false').
cygwin=false
msys=false
darwin=false
case "`uname`" in
  CYGWIN* )
    cygwin=true
    ;;
  Darwin* )
    darwin=true
    ;;
  MINGW* )
    msys=true
    ;;
esac

# Attempt to set APP_HOME
# Resolve links: $0 may be a link
PRG="$0"
# Need this for relative symlinks.
while [ -h "$PRG" ] ; do
    ls=`ls -ld "$PRG"`
    link=`expr "$ls" : '.*-> \(.*\)$'`
    if expr "$link" : '/.*' > /dev/null; then
        PRG="$link"
    else
        PRG=`dirname "$PRG"`"/$link"
    fi
done
SAVED="`pwd`"
cd "`dirname \"$PRG\"`/" >/dev/null
APP_HOME="`pwd -P`"
cd "$SAVED" >/dev/null

CLASSPATH=$APP_HOME/gradle/wrapper/gradle-wrapper.jar

# Determine the Java command to use to start the JVM.
if [ -n "$JAVA_HOME" ] ; then
    if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
        # IBM's JDK on AIX uses strange locations for the executables
        JAVACMD="$JAVA_HOME/jre/sh/java"
    else
        JAVACMD="$JAVA_HOME/bin/java"
    fi
    if [ ! -x "$JAVACMD" ] ; then
        die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME

Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
    fi
else
    JAVACMD="java"
    which java >/dev/null 2>&1 || die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.

Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi

# Increase the maximum file descriptors if we can.
if [ "$cygwin" = "false" -a "$darwin" = "false" ] ; then
    MAX_FD_LIMIT=`ulimit -H -n`
    if [ $? -eq 0 ] ; then
        if [ "$MAX_FD" = "maximum" -o "$MAX_FD" = "max" ] ; then
            MAX_FD="$MAX_FD_LIMIT"
        fi
        ulimit -n $MAX_FD
        if [ $? -ne 0 ] ; then
            warn "Could not set maximum file descriptor limit: $MAX_FD"
        fi
    else
        warn "Could not query maximum file descriptor limit: $MAX_FD_LIMIT"
    fi
fi

# For Darwin, add options to specify how the application appears in the dock
if $darwin; then
    GRADLE_OPTS="$GRADLE_OPTS \"-Xdock:name=$APP_NAME\" \"-Xdock:icon=$APP_HOME/media/gradle.icns\""
fi

# For Cygwin, switch paths to Windows format before running java
if $cygwin ; then
    APP_HOME=`cygpath --path --mixed "$APP_HOME"`
    CLASSPATH=`cygpath --path --mixed "$CLASSPATH"`
    JAVACMD=`cygpath --unix "$JAVACMD"`

    # We build the pattern for arguments to be converted via cygpath
    ROOTDIRSRAW=`find -L / -maxdepth 1 -mindepth 1 -type d 2>/dev/null`
    SEP=""
    for dir in $ROOTDIRSRAW ; do
        ROOTDIRS="$ROOTDIRS$SEP$dir"
        SEP="|"
    done
    OURCYGPATTERN="(^($ROOTDIRS))"
    # Add a user-defined pattern to the cygpath arguments
    if [ "$GRADLE_CYGPATTERN" != "" ] ; then
        OURCYGPATTERN="$OURCYGPATTERN|($GRADLE_CYGPATTERN)"
    fi
    # Now convert the arguments - kludge to limit ourselves to /bin/sh
    i=0
    for arg in "$@" ; do
        CHECK=`echo "$arg"|egrep -c "$OURCYGPATTERN" -`
        CHECK2=`echo "$arg"|egrep -c "^-"`    ### Determine if an option

        if [ $CHECK -ne 0 ] && [ $CHECK2 -eq 0 ] ; then    ### Added a condition
            eval `echo args$i`=`cygpath --path --ignore --mixed "$arg"`
        else
            eval `echo args$i`="\"$arg\""
        fi
        i=$((i+1))
    done
    case $i in
        (0) set -- ;;
        (1) set -- "$args0" ;;
        (2) set -- "$args0" "$args1" ;;
        (3) set -- "$args0" "$args1" "$args2" ;;
        (4) set -- "$args0" "$args1" "$args2" "$args3" ;;
        (5) set -- "$args0" "$args1" "$args2" "$args3" "$args4" ;;
        (6) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" ;;
        (7) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" ;;
        (8) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" ;;
        (9) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" "$args8" ;;
    esac
fi

# Split up the JVM_OPTS And GRADLE_OPTS values into an array, following the shell quoting and substitution rules
function splitJvmOpts() {
    JVM_OPTS=("$@")
}
eval splitJvmOpts $DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS
JVM_OPTS[${#JVM_OPTS[*]}]="-Dorg.gradle.appname=$APP_BASE_NAME"

exec "$JAVACMD" "${JVM_OPTS[@]}" -classpath "$CLASSPATH" org.gradle.wrapper.GradleWrapperMain "$@"

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/android/gradlew.bat
@if "%DEBUG%" == "" @echo off
@rem ##########################################################################
@rem
@rem  Gradle startup script for Windows
@rem
@rem ##########################################################################

@rem Set local scope for the variables with windows NT shell
if "%OS%"=="Windows_NT" setlocal

@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
set DEFAULT_JVM_OPTS=

set DIRNAME=%~dp0
if "%DIRNAME%" == "" set DIRNAME=.
set APP_BASE_NAME=%~n0
set APP_HOME=%DIRNAME%

@rem Find java.exe
if defined JAVA_HOME goto findJavaFromJavaHome

set JAVA_EXE=java.exe
%JAVA_EXE% -version >NUL 2>&1
if "%ERRORLEVEL%" == "0" goto init

echo.
echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
echo.
echo Please set the JAVA_HOME variable in your environment to match the
echo location of your Java installation.

goto fail

:findJavaFromJavaHome
set JAVA_HOME=%JAVA_HOME:"=%
set JAVA_EXE=%JAVA_HOME%/bin/java.exe

if exist "%JAVA_EXE%" goto init

echo.
echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
echo.
echo Please set the JAVA_HOME variable in your environment to match the
echo location of your Java installation.

goto fail

:init
@rem Get command-line arguments, handling Windowz variants

if not "%OS%" == "Windows_NT" goto win9xME_args
if "%@eval[2+2]" == "4" goto 4NT_args

:win9xME_args
@rem Slurp the command line arguments.
set CMD_LINE_ARGS=
set _SKIP=2

:win9xME_args_slurp
if "x%~1" == "x" goto execute

set CMD_LINE_ARGS=%*
goto execute

:4NT_args
@rem Get arguments from the 4NT Shell from JP Software
set CMD_LINE_ARGS=%$

:execute
@rem Setup the command line

set CLASSPATH=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar

@rem Execute Gradle
"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" org.gradle.wrapper.GradleWrapperMain %CMD_LINE_ARGS%

:end
@rem End local scope for the variables with windows NT shell
if "%ERRORLEVEL%"=="0" goto mainEnd

:fail
rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of
rem the _cmd.exe /c_ return code!
if not "" == "%GRADLE_EXIT_CONSOLE%" exit 1
exit /b 1

:mainEnd
if "%OS%"=="Windows_NT" endlocal

:omega

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/android/src/
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/android/src/main/
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/android/src/main/AndroidManifest.xml
[XML content not preserved in this text extraction]
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/android/src/main/jni/
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/android/src/main/jni/Smoke.frag.h
#include <stdint.h>

#if 0
/usr/local/google/home/olv/khronos/VulkanSamples/Demos/Hologram/Hologram.frag
Warning, version 310 is not yet complete; most version-specific features are present, but some are missing.
Linked fragment stage: // Module Version 10000 // Generated by (magic number): 80001 // Id's are bound by 19 Capability Shader 1: ExtInstImport "GLSL.std.450" MemoryModel Logical GLSL450 EntryPoint Fragment 4 "main" 9 12 ExecutionMode 4 OriginLowerLeft Source ESSL 310 Name 4 "main" Name 9 "fragcolor" Name 12 "color" Decorate 9(fragcolor) Location 0 2: TypeVoid 3: TypeFunction 2 6: TypeFloat 32 7: TypeVector 6(float) 4 8: TypePointer Output 7(fvec4) 9(fragcolor): 8(ptr) Variable Output 10: TypeVector 6(float) 3 11: TypePointer Input 10(fvec3) 12(color): 11(ptr) Variable Input 14: 6(float) Constant 1056964608 4(main): 2 Function None 3 5: Label 13: 10(fvec3) Load 12(color) 15: 6(float) CompositeExtract 13 0 16: 6(float) CompositeExtract 13 1 17: 6(float) CompositeExtract 13 2 18: 7(fvec4) CompositeConstruct 15 16 17 14 Store 9(fragcolor) 18 Return FunctionEnd #endif static const uint32_t Smoke_frag[120] = { 0x07230203, 0x00010000, 0x00080001, 0x00000013, 0x00000000, 0x00020011, 0x00000001, 0x0006000b, 0x00000001, 0x4c534c47, 0x6474732e, 0x3035342e, 0x00000000, 0x0003000e, 0x00000000, 0x00000001, 0x0007000f, 0x00000004, 0x00000004, 0x6e69616d, 0x00000000, 0x00000009, 0x0000000c, 0x00030010, 0x00000004, 0x00000008, 0x00030003, 0x00000001, 0x00000136, 0x00040005, 0x00000004, 0x6e69616d, 0x00000000, 0x00050005, 0x00000009, 0x67617266, 0x6f6c6f63, 0x00000072, 0x00040005, 0x0000000c, 0x6f6c6f63, 0x00000072, 0x00040047, 0x00000009, 0x0000001e, 0x00000000, 0x00020013, 0x00000002, 0x00030021, 0x00000003, 0x00000002, 0x00030016, 0x00000006, 0x00000020, 0x00040017, 0x00000007, 0x00000006, 0x00000004, 0x00040020, 0x00000008, 0x00000003, 0x00000007, 0x0004003b, 0x00000008, 0x00000009, 0x00000003, 0x00040017, 0x0000000a, 0x00000006, 0x00000003, 0x00040020, 0x0000000b, 0x00000001, 0x0000000a, 0x0004003b, 0x0000000b, 0x0000000c, 0x00000001, 0x0004002b, 0x00000006, 0x0000000e, 0x3f000000, 0x00050036, 0x00000002, 0x00000004, 0x00000000, 0x00000003, 0x000200f8, 0x00000005, 0x0004003d, 
0x0000000a, 0x0000000d, 0x0000000c, 0x00050051, 0x00000006, 0x0000000f, 0x0000000d, 0x00000000, 0x00050051, 0x00000006, 0x00000010, 0x0000000d, 0x00000001, 0x00050051, 0x00000006, 0x00000011, 0x0000000d, 0x00000002, 0x00070050, 0x00000007, 0x00000012, 0x0000000f, 0x00000010, 0x00000011, 0x0000000e, 0x0003003e, 0x00000009, 0x00000012, 0x000100fd, 0x00010038,
};

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/android/src/main/jni/Smoke.push_constant.vert.h
#include <stdint.h>

#if 0
/usr/local/google/home/olv/khronos/VulkanSamples/Demos/Hologram/Hologram.push_constant.vert
Warning, version 310 is not yet complete; most version-specific features are present, but some are missing.

Linked vertex stage: // Module Version 10000 // Generated by (magic number): 80001 // Id's are bound by 108 Capability Shader 1: ExtInstImport "GLSL.std.450" MemoryModel Logical GLSL450 EntryPoint Vertex 4 "main" 38 67 89 102 Source ESSL 310 Name 4 "main" Name 9 "world_light" Name 12 "param_block" MemberName 12(param_block) 0 "light_pos" MemberName 12(param_block) 1 "light_color" MemberName 12(param_block) 2 "model" MemberName 12(param_block) 3 "view_projection" Name 14 "params" Name 34 "world_pos" Name 38 "in_pos" Name 49 "world_normal" Name 67 "in_normal" Name 70 "light_dir" Name 75 "brightness" Name 87 "gl_PerVertex" MemberName 87(gl_PerVertex) 0 "gl_Position" MemberName 87(gl_PerVertex) 1 "gl_PointSize" Name 89 "" Name 102 "color" MemberDecorate 12(param_block) 0 Offset 0 MemberDecorate 12(param_block) 1 Offset 16 MemberDecorate 12(param_block) 2 ColMajor MemberDecorate 12(param_block) 2 Offset 32 MemberDecorate 12(param_block) 2 MatrixStride 16 MemberDecorate 12(param_block) 3 ColMajor MemberDecorate 12(param_block) 3 Offset 96 MemberDecorate 12(param_block) 3 MatrixStride 16 Decorate 12(param_block) Block Decorate 14(params) DescriptorSet 0 Decorate 38(in_pos) Location 0 Decorate 67(in_normal) Location 1
MemberDecorate 87(gl_PerVertex) 0 BuiltIn Position MemberDecorate 87(gl_PerVertex) 1 BuiltIn PointSize Decorate 87(gl_PerVertex) Block 2: TypeVoid 3: TypeFunction 2 6: TypeFloat 32 7: TypeVector 6(float) 3 8: TypePointer Function 7(fvec3) 10: TypeVector 6(float) 4 11: TypeMatrix 10(fvec4) 4 12(param_block): TypeStruct 7(fvec3) 7(fvec3) 11 11 13: TypePointer PushConstant 12(param_block) 14(params): 13(ptr) Variable PushConstant 15: TypeInt 32 1 16: 15(int) Constant 2 17: TypePointer PushConstant 11 20: 15(int) Constant 0 21: TypePointer PushConstant 7(fvec3) 24: 6(float) Constant 1065353216 37: TypePointer Input 7(fvec3) 38(in_pos): 37(ptr) Variable Input 52: TypeMatrix 7(fvec3) 3 53: 6(float) Constant 0 67(in_normal): 37(ptr) Variable Input 74: TypePointer Function 6(float) 87(gl_PerVertex): TypeStruct 10(fvec4) 6(float) 88: TypePointer Output 87(gl_PerVertex) 89: 88(ptr) Variable Output 90: 15(int) Constant 3 99: TypePointer Output 10(fvec4) 101: TypePointer Output 7(fvec3) 102(color): 101(ptr) Variable Output 103: 15(int) Constant 1 4(main): 2 Function None 3 5: Label 9(world_light): 8(ptr) Variable Function 34(world_pos): 8(ptr) Variable Function 49(world_normal): 8(ptr) Variable Function 70(light_dir): 8(ptr) Variable Function 75(brightness): 74(ptr) Variable Function 18: 17(ptr) AccessChain 14(params) 16 19: 11 Load 18 22: 21(ptr) AccessChain 14(params) 20 23: 7(fvec3) Load 22 25: 6(float) CompositeExtract 23 0 26: 6(float) CompositeExtract 23 1 27: 6(float) CompositeExtract 23 2 28: 10(fvec4) CompositeConstruct 25 26 27 24 29: 10(fvec4) MatrixTimesVector 19 28 30: 6(float) CompositeExtract 29 0 31: 6(float) CompositeExtract 29 1 32: 6(float) CompositeExtract 29 2 33: 7(fvec3) CompositeConstruct 30 31 32 Store 9(world_light) 33 35: 17(ptr) AccessChain 14(params) 16 36: 11 Load 35 39: 7(fvec3) Load 38(in_pos) 40: 6(float) CompositeExtract 39 0 41: 6(float) CompositeExtract 39 1 42: 6(float) CompositeExtract 39 2 43: 10(fvec4) CompositeConstruct 40 41 42 24 44: 
10(fvec4) MatrixTimesVector 36 43 45: 6(float) CompositeExtract 44 0 46: 6(float) CompositeExtract 44 1 47: 6(float) CompositeExtract 44 2 48: 7(fvec3) CompositeConstruct 45 46 47 Store 34(world_pos) 48 50: 17(ptr) AccessChain 14(params) 16 51: 11 Load 50 54: 6(float) CompositeExtract 51 0 0 55: 6(float) CompositeExtract 51 0 1 56: 6(float) CompositeExtract 51 0 2 57: 6(float) CompositeExtract 51 1 0 58: 6(float) CompositeExtract 51 1 1 59: 6(float) CompositeExtract 51 1 2 60: 6(float) CompositeExtract 51 2 0 61: 6(float) CompositeExtract 51 2 1 62: 6(float) CompositeExtract 51 2 2 63: 7(fvec3) CompositeConstruct 54 55 56 64: 7(fvec3) CompositeConstruct 57 58 59 65: 7(fvec3) CompositeConstruct 60 61 62 66: 52 CompositeConstruct 63 64 65 68: 7(fvec3) Load 67(in_normal) 69: 7(fvec3) MatrixTimesVector 66 68 Store 49(world_normal) 69 71: 7(fvec3) Load 9(world_light) 72: 7(fvec3) Load 34(world_pos) 73: 7(fvec3) FSub 71 72 Store 70(light_dir) 73 76: 7(fvec3) Load 70(light_dir) 77: 7(fvec3) Load 49(world_normal) 78: 6(float) Dot 76 77 79: 7(fvec3) Load 70(light_dir) 80: 6(float) ExtInst 1(GLSL.std.450) 66(Length) 79 81: 6(float) FDiv 78 80 82: 7(fvec3) Load 49(world_normal) 83: 6(float) ExtInst 1(GLSL.std.450) 66(Length) 82 84: 6(float) FDiv 81 83 Store 75(brightness) 84 85: 6(float) Load 75(brightness) 86: 6(float) ExtInst 1(GLSL.std.450) 4(FAbs) 85 Store 75(brightness) 86 91: 17(ptr) AccessChain 14(params) 90 92: 11 Load 91 93: 7(fvec3) Load 34(world_pos) 94: 6(float) CompositeExtract 93 0 95: 6(float) CompositeExtract 93 1 96: 6(float) CompositeExtract 93 2 97: 10(fvec4) CompositeConstruct 94 95 96 24 98: 10(fvec4) MatrixTimesVector 92 97 100: 99(ptr) AccessChain 89 20 Store 100 98 104: 21(ptr) AccessChain 14(params) 103 105: 7(fvec3) Load 104 106: 6(float) Load 75(brightness) 107: 7(fvec3) VectorTimesScalar 105 106 Store 102(color) 107 Return FunctionEnd #endif static const uint32_t Smoke_push_constant_vert[715] = { 0x07230203, 0x00010000, 0x00080001, 0x0000006c, 
0x00000000, 0x00020011, 0x00000001, 0x0006000b, 0x00000001, 0x4c534c47, 0x6474732e, 0x3035342e, 0x00000000, 0x0003000e, 0x00000000, 0x00000001, 0x0009000f, 0x00000000, 0x00000004, 0x6e69616d, 0x00000000, 0x00000026, 0x00000043, 0x00000059, 0x00000066, 0x00030003, 0x00000001, 0x00000136, 0x00040005, 0x00000004, 0x6e69616d, 0x00000000, 0x00050005, 0x00000009, 0x6c726f77, 0x696c5f64, 0x00746867, 0x00050005, 0x0000000c, 0x61726170, 0x6c625f6d, 0x006b636f, 0x00060006, 0x0000000c, 0x00000000, 0x6867696c, 0x6f705f74, 0x00000073, 0x00060006, 0x0000000c, 0x00000001, 0x6867696c, 0x6f635f74, 0x00726f6c, 0x00050006, 0x0000000c, 0x00000002, 0x65646f6d, 0x0000006c, 0x00070006, 0x0000000c, 0x00000003, 0x77656976, 0x6f72705f, 0x7463656a, 0x006e6f69, 0x00040005, 0x0000000e, 0x61726170, 0x0000736d, 0x00050005, 0x00000022, 0x6c726f77, 0x6f705f64, 0x00000073, 0x00040005, 0x00000026, 0x705f6e69, 0x0000736f, 0x00060005, 0x00000031, 0x6c726f77, 0x6f6e5f64, 0x6c616d72, 0x00000000, 0x00050005, 0x00000043, 0x6e5f6e69, 0x616d726f, 0x0000006c, 0x00050005, 0x00000046, 0x6867696c, 0x69645f74, 0x00000072, 0x00050005, 0x0000004b, 0x67697262, 0x656e7468, 0x00007373, 0x00060005, 0x00000057, 0x505f6c67, 0x65567265, 0x78657472, 0x00000000, 0x00060006, 0x00000057, 0x00000000, 0x505f6c67, 0x7469736f, 0x006e6f69, 0x00070006, 0x00000057, 0x00000001, 0x505f6c67, 0x746e696f, 0x657a6953, 0x00000000, 0x00030005, 0x00000059, 0x00000000, 0x00040005, 0x00000066, 0x6f6c6f63, 0x00000072, 0x00050048, 0x0000000c, 0x00000000, 0x00000023, 0x00000000, 0x00050048, 0x0000000c, 0x00000001, 0x00000023, 0x00000010, 0x00040048, 0x0000000c, 0x00000002, 0x00000005, 0x00050048, 0x0000000c, 0x00000002, 0x00000023, 0x00000020, 0x00050048, 0x0000000c, 0x00000002, 0x00000007, 0x00000010, 0x00040048, 0x0000000c, 0x00000003, 0x00000005, 0x00050048, 0x0000000c, 0x00000003, 0x00000023, 0x00000060, 0x00050048, 0x0000000c, 0x00000003, 0x00000007, 0x00000010, 0x00030047, 0x0000000c, 0x00000002, 0x00040047, 0x0000000e, 0x00000022, 
0x00000000, 0x00040047, 0x00000026, 0x0000001e, 0x00000000, 0x00040047, 0x00000043, 0x0000001e, 0x00000001, 0x00050048, 0x00000057, 0x00000000, 0x0000000b, 0x00000000, 0x00050048, 0x00000057, 0x00000001, 0x0000000b, 0x00000001, 0x00030047, 0x00000057, 0x00000002, 0x00020013, 0x00000002, 0x00030021, 0x00000003, 0x00000002, 0x00030016, 0x00000006, 0x00000020, 0x00040017, 0x00000007, 0x00000006, 0x00000003, 0x00040020, 0x00000008, 0x00000007, 0x00000007, 0x00040017, 0x0000000a, 0x00000006, 0x00000004, 0x00040018, 0x0000000b, 0x0000000a, 0x00000004, 0x0006001e, 0x0000000c, 0x00000007, 0x00000007, 0x0000000b, 0x0000000b, 0x00040020, 0x0000000d, 0x00000009, 0x0000000c, 0x0004003b, 0x0000000d, 0x0000000e, 0x00000009, 0x00040015, 0x0000000f, 0x00000020, 0x00000001, 0x0004002b, 0x0000000f, 0x00000010, 0x00000002, 0x00040020, 0x00000011, 0x00000009, 0x0000000b, 0x0004002b, 0x0000000f, 0x00000014, 0x00000000, 0x00040020, 0x00000015, 0x00000009, 0x00000007, 0x0004002b, 0x00000006, 0x00000018, 0x3f800000, 0x00040020, 0x00000025, 0x00000001, 0x00000007, 0x0004003b, 0x00000025, 0x00000026, 0x00000001, 0x00040018, 0x00000034, 0x00000007, 0x00000003, 0x0004002b, 0x00000006, 0x00000035, 0x00000000, 0x0004003b, 0x00000025, 0x00000043, 0x00000001, 0x00040020, 0x0000004a, 0x00000007, 0x00000006, 0x0004001e, 0x00000057, 0x0000000a, 0x00000006, 0x00040020, 0x00000058, 0x00000003, 0x00000057, 0x0004003b, 0x00000058, 0x00000059, 0x00000003, 0x0004002b, 0x0000000f, 0x0000005a, 0x00000003, 0x00040020, 0x00000063, 0x00000003, 0x0000000a, 0x00040020, 0x00000065, 0x00000003, 0x00000007, 0x0004003b, 0x00000065, 0x00000066, 0x00000003, 0x0004002b, 0x0000000f, 0x00000067, 0x00000001, 0x00050036, 0x00000002, 0x00000004, 0x00000000, 0x00000003, 0x000200f8, 0x00000005, 0x0004003b, 0x00000008, 0x00000009, 0x00000007, 0x0004003b, 0x00000008, 0x00000022, 0x00000007, 0x0004003b, 0x00000008, 0x00000031, 0x00000007, 0x0004003b, 0x00000008, 0x00000046, 0x00000007, 0x0004003b, 0x0000004a, 0x0000004b, 
0x00000007, 0x00050041, 0x00000011, 0x00000012, 0x0000000e, 0x00000010, 0x0004003d, 0x0000000b, 0x00000013, 0x00000012, 0x00050041, 0x00000015, 0x00000016, 0x0000000e, 0x00000014, 0x0004003d, 0x00000007, 0x00000017, 0x00000016, 0x00050051, 0x00000006, 0x00000019, 0x00000017, 0x00000000, 0x00050051, 0x00000006, 0x0000001a, 0x00000017, 0x00000001, 0x00050051, 0x00000006, 0x0000001b, 0x00000017, 0x00000002, 0x00070050, 0x0000000a, 0x0000001c, 0x00000019, 0x0000001a, 0x0000001b, 0x00000018, 0x00050091, 0x0000000a, 0x0000001d, 0x00000013, 0x0000001c, 0x00050051, 0x00000006, 0x0000001e, 0x0000001d, 0x00000000, 0x00050051, 0x00000006, 0x0000001f, 0x0000001d, 0x00000001, 0x00050051, 0x00000006, 0x00000020, 0x0000001d, 0x00000002, 0x00060050, 0x00000007, 0x00000021, 0x0000001e, 0x0000001f, 0x00000020, 0x0003003e, 0x00000009, 0x00000021, 0x00050041, 0x00000011, 0x00000023, 0x0000000e, 0x00000010, 0x0004003d, 0x0000000b, 0x00000024, 0x00000023, 0x0004003d, 0x00000007, 0x00000027, 0x00000026, 0x00050051, 0x00000006, 0x00000028, 0x00000027, 0x00000000, 0x00050051, 0x00000006, 0x00000029, 0x00000027, 0x00000001, 0x00050051, 0x00000006, 0x0000002a, 0x00000027, 0x00000002, 0x00070050, 0x0000000a, 0x0000002b, 0x00000028, 0x00000029, 0x0000002a, 0x00000018, 0x00050091, 0x0000000a, 0x0000002c, 0x00000024, 0x0000002b, 0x00050051, 0x00000006, 0x0000002d, 0x0000002c, 0x00000000, 0x00050051, 0x00000006, 0x0000002e, 0x0000002c, 0x00000001, 0x00050051, 0x00000006, 0x0000002f, 0x0000002c, 0x00000002, 0x00060050, 0x00000007, 0x00000030, 0x0000002d, 0x0000002e, 0x0000002f, 0x0003003e, 0x00000022, 0x00000030, 0x00050041, 0x00000011, 0x00000032, 0x0000000e, 0x00000010, 0x0004003d, 0x0000000b, 0x00000033, 0x00000032, 0x00060051, 0x00000006, 0x00000036, 0x00000033, 0x00000000, 0x00000000, 0x00060051, 0x00000006, 0x00000037, 0x00000033, 0x00000000, 0x00000001, 0x00060051, 0x00000006, 0x00000038, 0x00000033, 0x00000000, 0x00000002, 0x00060051, 0x00000006, 0x00000039, 0x00000033, 0x00000001, 
0x00000000, 0x00060051, 0x00000006, 0x0000003a, 0x00000033, 0x00000001, 0x00000001, 0x00060051, 0x00000006, 0x0000003b, 0x00000033, 0x00000001, 0x00000002, 0x00060051, 0x00000006, 0x0000003c, 0x00000033, 0x00000002, 0x00000000, 0x00060051, 0x00000006, 0x0000003d, 0x00000033, 0x00000002, 0x00000001, 0x00060051, 0x00000006, 0x0000003e, 0x00000033, 0x00000002, 0x00000002, 0x00060050, 0x00000007, 0x0000003f, 0x00000036, 0x00000037, 0x00000038, 0x00060050, 0x00000007, 0x00000040, 0x00000039, 0x0000003a, 0x0000003b, 0x00060050, 0x00000007, 0x00000041, 0x0000003c, 0x0000003d, 0x0000003e, 0x00060050, 0x00000034, 0x00000042, 0x0000003f, 0x00000040, 0x00000041, 0x0004003d, 0x00000007, 0x00000044, 0x00000043, 0x00050091, 0x00000007, 0x00000045, 0x00000042, 0x00000044, 0x0003003e, 0x00000031, 0x00000045, 0x0004003d, 0x00000007, 0x00000047, 0x00000009, 0x0004003d, 0x00000007, 0x00000048, 0x00000022, 0x00050083, 0x00000007, 0x00000049, 0x00000047, 0x00000048, 0x0003003e, 0x00000046, 0x00000049, 0x0004003d, 0x00000007, 0x0000004c, 0x00000046, 0x0004003d, 0x00000007, 0x0000004d, 0x00000031, 0x00050094, 0x00000006, 0x0000004e, 0x0000004c, 0x0000004d, 0x0004003d, 0x00000007, 0x0000004f, 0x00000046, 0x0006000c, 0x00000006, 0x00000050, 0x00000001, 0x00000042, 0x0000004f, 0x00050088, 0x00000006, 0x00000051, 0x0000004e, 0x00000050, 0x0004003d, 0x00000007, 0x00000052, 0x00000031, 0x0006000c, 0x00000006, 0x00000053, 0x00000001, 0x00000042, 0x00000052, 0x00050088, 0x00000006, 0x00000054, 0x00000051, 0x00000053, 0x0003003e, 0x0000004b, 0x00000054, 0x0004003d, 0x00000006, 0x00000055, 0x0000004b, 0x0006000c, 0x00000006, 0x00000056, 0x00000001, 0x00000004, 0x00000055, 0x0003003e, 0x0000004b, 0x00000056, 0x00050041, 0x00000011, 0x0000005b, 0x0000000e, 0x0000005a, 0x0004003d, 0x0000000b, 0x0000005c, 0x0000005b, 0x0004003d, 0x00000007, 0x0000005d, 0x00000022, 0x00050051, 0x00000006, 0x0000005e, 0x0000005d, 0x00000000, 0x00050051, 0x00000006, 0x0000005f, 0x0000005d, 0x00000001, 0x00050051, 
0x00000006, 0x00000060, 0x0000005d, 0x00000002, 0x00070050, 0x0000000a, 0x00000061, 0x0000005e, 0x0000005f, 0x00000060, 0x00000018, 0x00050091, 0x0000000a, 0x00000062, 0x0000005c, 0x00000061, 0x00050041, 0x00000063, 0x00000064, 0x00000059, 0x00000014, 0x0003003e, 0x00000064, 0x00000062, 0x00050041, 0x00000015, 0x00000068, 0x0000000e, 0x00000067, 0x0004003d, 0x00000007, 0x00000069, 0x00000068, 0x0004003d, 0x00000006, 0x0000006a, 0x0000004b, 0x0005008e, 0x00000007, 0x0000006b, 0x00000069, 0x0000006a, 0x0003003e, 0x00000066, 0x0000006b, 0x000100fd, 0x00010038,
};

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/android/src/main/jni/Smoke.vert.h
#include <stdint.h>

#if 0
/usr/local/google/home/olv/khronos/VulkanSamples/Demos/Hologram/Hologram.vert
Warning, version 310 is not yet complete; most version-specific features are present, but some are missing.

Linked vertex stage: // Module Version 10000 // Generated by (magic number): 80001 // Id's are bound by 108 Capability Shader 1: ExtInstImport "GLSL.std.450" MemoryModel Logical GLSL450 EntryPoint Vertex 4 "main" 38 67 89 102 Source ESSL 310 Name 4 "main" Name 9 "world_light" Name 12 "param_block" MemberName 12(param_block) 0 "light_pos" MemberName 12(param_block) 1 "light_color" MemberName 12(param_block) 2 "model" MemberName 12(param_block) 3 "view_projection" Name 14 "params" Name 34 "world_pos" Name 38 "in_pos" Name 49 "world_normal" Name 67 "in_normal" Name 70 "light_dir" Name 75 "brightness" Name 87 "gl_PerVertex" MemberName 87(gl_PerVertex) 0 "gl_Position" MemberName 87(gl_PerVertex) 1 "gl_PointSize" Name 89 "" Name 102 "color" MemberDecorate 12(param_block) 0 Offset 0 MemberDecorate 12(param_block) 1 Offset 16 MemberDecorate 12(param_block) 2 ColMajor MemberDecorate 12(param_block) 2 Offset 32 MemberDecorate 12(param_block) 2 MatrixStride 16 MemberDecorate 12(param_block) 3 ColMajor MemberDecorate 12(param_block) 3 Offset 96
MemberDecorate 12(param_block) 3 MatrixStride 16 Decorate 12(param_block) BufferBlock Decorate 14(params) DescriptorSet 0 Decorate 14(params) Binding 0 Decorate 38(in_pos) Location 0 Decorate 67(in_normal) Location 1 MemberDecorate 87(gl_PerVertex) 0 BuiltIn Position MemberDecorate 87(gl_PerVertex) 1 BuiltIn PointSize Decorate 87(gl_PerVertex) Block 2: TypeVoid 3: TypeFunction 2 6: TypeFloat 32 7: TypeVector 6(float) 3 8: TypePointer Function 7(fvec3) 10: TypeVector 6(float) 4 11: TypeMatrix 10(fvec4) 4 12(param_block): TypeStruct 7(fvec3) 7(fvec3) 11 11 13: TypePointer Uniform 12(param_block) 14(params): 13(ptr) Variable Uniform 15: TypeInt 32 1 16: 15(int) Constant 2 17: TypePointer Uniform 11 20: 15(int) Constant 0 21: TypePointer Uniform 7(fvec3) 24: 6(float) Constant 1065353216 37: TypePointer Input 7(fvec3) 38(in_pos): 37(ptr) Variable Input 52: TypeMatrix 7(fvec3) 3 53: 6(float) Constant 0 67(in_normal): 37(ptr) Variable Input 74: TypePointer Function 6(float) 87(gl_PerVertex): TypeStruct 10(fvec4) 6(float) 88: TypePointer Output 87(gl_PerVertex) 89: 88(ptr) Variable Output 90: 15(int) Constant 3 99: TypePointer Output 10(fvec4) 101: TypePointer Output 7(fvec3) 102(color): 101(ptr) Variable Output 103: 15(int) Constant 1 4(main): 2 Function None 3 5: Label 9(world_light): 8(ptr) Variable Function 34(world_pos): 8(ptr) Variable Function 49(world_normal): 8(ptr) Variable Function 70(light_dir): 8(ptr) Variable Function 75(brightness): 74(ptr) Variable Function 18: 17(ptr) AccessChain 14(params) 16 19: 11 Load 18 22: 21(ptr) AccessChain 14(params) 20 23: 7(fvec3) Load 22 25: 6(float) CompositeExtract 23 0 26: 6(float) CompositeExtract 23 1 27: 6(float) CompositeExtract 23 2 28: 10(fvec4) CompositeConstruct 25 26 27 24 29: 10(fvec4) MatrixTimesVector 19 28 30: 6(float) CompositeExtract 29 0 31: 6(float) CompositeExtract 29 1 32: 6(float) CompositeExtract 29 2 33: 7(fvec3) CompositeConstruct 30 31 32 Store 9(world_light) 33 35: 17(ptr) AccessChain 14(params) 16 
36: 11 Load 35 39: 7(fvec3) Load 38(in_pos) 40: 6(float) CompositeExtract 39 0 41: 6(float) CompositeExtract 39 1 42: 6(float) CompositeExtract 39 2 43: 10(fvec4) CompositeConstruct 40 41 42 24 44: 10(fvec4) MatrixTimesVector 36 43 45: 6(float) CompositeExtract 44 0 46: 6(float) CompositeExtract 44 1 47: 6(float) CompositeExtract 44 2 48: 7(fvec3) CompositeConstruct 45 46 47 Store 34(world_pos) 48 50: 17(ptr) AccessChain 14(params) 16 51: 11 Load 50 54: 6(float) CompositeExtract 51 0 0 55: 6(float) CompositeExtract 51 0 1 56: 6(float) CompositeExtract 51 0 2 57: 6(float) CompositeExtract 51 1 0 58: 6(float) CompositeExtract 51 1 1 59: 6(float) CompositeExtract 51 1 2 60: 6(float) CompositeExtract 51 2 0 61: 6(float) CompositeExtract 51 2 1 62: 6(float) CompositeExtract 51 2 2 63: 7(fvec3) CompositeConstruct 54 55 56 64: 7(fvec3) CompositeConstruct 57 58 59 65: 7(fvec3) CompositeConstruct 60 61 62 66: 52 CompositeConstruct 63 64 65 68: 7(fvec3) Load 67(in_normal) 69: 7(fvec3) MatrixTimesVector 66 68 Store 49(world_normal) 69 71: 7(fvec3) Load 9(world_light) 72: 7(fvec3) Load 34(world_pos) 73: 7(fvec3) FSub 71 72 Store 70(light_dir) 73 76: 7(fvec3) Load 70(light_dir) 77: 7(fvec3) Load 49(world_normal) 78: 6(float) Dot 76 77 79: 7(fvec3) Load 70(light_dir) 80: 6(float) ExtInst 1(GLSL.std.450) 66(Length) 79 81: 6(float) FDiv 78 80 82: 7(fvec3) Load 49(world_normal) 83: 6(float) ExtInst 1(GLSL.std.450) 66(Length) 82 84: 6(float) FDiv 81 83 Store 75(brightness) 84 85: 6(float) Load 75(brightness) 86: 6(float) ExtInst 1(GLSL.std.450) 4(FAbs) 85 Store 75(brightness) 86 91: 17(ptr) AccessChain 14(params) 90 92: 11 Load 91 93: 7(fvec3) Load 34(world_pos) 94: 6(float) CompositeExtract 93 0 95: 6(float) CompositeExtract 93 1 96: 6(float) CompositeExtract 93 2 97: 10(fvec4) CompositeConstruct 94 95 96 24 98: 10(fvec4) MatrixTimesVector 92 97 100: 99(ptr) AccessChain 89 20 Store 100 98 104: 21(ptr) AccessChain 14(params) 103 105: 7(fvec3) Load 104 106: 6(float) Load 
75(brightness) 107: 7(fvec3) VectorTimesScalar 105 106 Store 102(color) 107 Return FunctionEnd #endif static const uint32_t Smoke_vert[719] = { 0x07230203, 0x00010000, 0x00080001, 0x0000006c, 0x00000000, 0x00020011, 0x00000001, 0x0006000b, 0x00000001, 0x4c534c47, 0x6474732e, 0x3035342e, 0x00000000, 0x0003000e, 0x00000000, 0x00000001, 0x0009000f, 0x00000000, 0x00000004, 0x6e69616d, 0x00000000, 0x00000026, 0x00000043, 0x00000059, 0x00000066, 0x00030003, 0x00000001, 0x00000136, 0x00040005, 0x00000004, 0x6e69616d, 0x00000000, 0x00050005, 0x00000009, 0x6c726f77, 0x696c5f64, 0x00746867, 0x00050005, 0x0000000c, 0x61726170, 0x6c625f6d, 0x006b636f, 0x00060006, 0x0000000c, 0x00000000, 0x6867696c, 0x6f705f74, 0x00000073, 0x00060006, 0x0000000c, 0x00000001, 0x6867696c, 0x6f635f74, 0x00726f6c, 0x00050006, 0x0000000c, 0x00000002, 0x65646f6d, 0x0000006c, 0x00070006, 0x0000000c, 0x00000003, 0x77656976, 0x6f72705f, 0x7463656a, 0x006e6f69, 0x00040005, 0x0000000e, 0x61726170, 0x0000736d, 0x00050005, 0x00000022, 0x6c726f77, 0x6f705f64, 0x00000073, 0x00040005, 0x00000026, 0x705f6e69, 0x0000736f, 0x00060005, 0x00000031, 0x6c726f77, 0x6f6e5f64, 0x6c616d72, 0x00000000, 0x00050005, 0x00000043, 0x6e5f6e69, 0x616d726f, 0x0000006c, 0x00050005, 0x00000046, 0x6867696c, 0x69645f74, 0x00000072, 0x00050005, 0x0000004b, 0x67697262, 0x656e7468, 0x00007373, 0x00060005, 0x00000057, 0x505f6c67, 0x65567265, 0x78657472, 0x00000000, 0x00060006, 0x00000057, 0x00000000, 0x505f6c67, 0x7469736f, 0x006e6f69, 0x00070006, 0x00000057, 0x00000001, 0x505f6c67, 0x746e696f, 0x657a6953, 0x00000000, 0x00030005, 0x00000059, 0x00000000, 0x00040005, 0x00000066, 0x6f6c6f63, 0x00000072, 0x00050048, 0x0000000c, 0x00000000, 0x00000023, 0x00000000, 0x00050048, 0x0000000c, 0x00000001, 0x00000023, 0x00000010, 0x00040048, 0x0000000c, 0x00000002, 0x00000005, 0x00050048, 0x0000000c, 0x00000002, 0x00000023, 0x00000020, 0x00050048, 0x0000000c, 0x00000002, 0x00000007, 0x00000010, 0x00040048, 0x0000000c, 0x00000003, 0x00000005, 
0x00050048, 0x0000000c, 0x00000003, 0x00000023, 0x00000060, 0x00050048, 0x0000000c, 0x00000003, 0x00000007, 0x00000010, 0x00030047, 0x0000000c, 0x00000003, 0x00040047, 0x0000000e, 0x00000022, 0x00000000, 0x00040047, 0x0000000e, 0x00000021, 0x00000000, 0x00040047, 0x00000026, 0x0000001e, 0x00000000, 0x00040047, 0x00000043, 0x0000001e, 0x00000001, 0x00050048, 0x00000057, 0x00000000, 0x0000000b, 0x00000000, 0x00050048, 0x00000057, 0x00000001, 0x0000000b, 0x00000001, 0x00030047, 0x00000057, 0x00000002, 0x00020013, 0x00000002, 0x00030021, 0x00000003, 0x00000002, 0x00030016, 0x00000006, 0x00000020, 0x00040017, 0x00000007, 0x00000006, 0x00000003, 0x00040020, 0x00000008, 0x00000007, 0x00000007, 0x00040017, 0x0000000a, 0x00000006, 0x00000004, 0x00040018, 0x0000000b, 0x0000000a, 0x00000004, 0x0006001e, 0x0000000c, 0x00000007, 0x00000007, 0x0000000b, 0x0000000b, 0x00040020, 0x0000000d, 0x00000002, 0x0000000c, 0x0004003b, 0x0000000d, 0x0000000e, 0x00000002, 0x00040015, 0x0000000f, 0x00000020, 0x00000001, 0x0004002b, 0x0000000f, 0x00000010, 0x00000002, 0x00040020, 0x00000011, 0x00000002, 0x0000000b, 0x0004002b, 0x0000000f, 0x00000014, 0x00000000, 0x00040020, 0x00000015, 0x00000002, 0x00000007, 0x0004002b, 0x00000006, 0x00000018, 0x3f800000, 0x00040020, 0x00000025, 0x00000001, 0x00000007, 0x0004003b, 0x00000025, 0x00000026, 0x00000001, 0x00040018, 0x00000034, 0x00000007, 0x00000003, 0x0004002b, 0x00000006, 0x00000035, 0x00000000, 0x0004003b, 0x00000025, 0x00000043, 0x00000001, 0x00040020, 0x0000004a, 0x00000007, 0x00000006, 0x0004001e, 0x00000057, 0x0000000a, 0x00000006, 0x00040020, 0x00000058, 0x00000003, 0x00000057, 0x0004003b, 0x00000058, 0x00000059, 0x00000003, 0x0004002b, 0x0000000f, 0x0000005a, 0x00000003, 0x00040020, 0x00000063, 0x00000003, 0x0000000a, 0x00040020, 0x00000065, 0x00000003, 0x00000007, 0x0004003b, 0x00000065, 0x00000066, 0x00000003, 0x0004002b, 0x0000000f, 0x00000067, 0x00000001, 0x00050036, 0x00000002, 0x00000004, 0x00000000, 0x00000003, 0x000200f8, 
0x00000005, 0x0004003b, 0x00000008, 0x00000009, 0x00000007, 0x0004003b, 0x00000008, 0x00000022, 0x00000007, 0x0004003b, 0x00000008, 0x00000031, 0x00000007, 0x0004003b, 0x00000008, 0x00000046, 0x00000007, 0x0004003b, 0x0000004a, 0x0000004b, 0x00000007, 0x00050041, 0x00000011, 0x00000012, 0x0000000e, 0x00000010, 0x0004003d, 0x0000000b, 0x00000013, 0x00000012, 0x00050041, 0x00000015, 0x00000016, 0x0000000e, 0x00000014, 0x0004003d, 0x00000007, 0x00000017, 0x00000016, 0x00050051, 0x00000006, 0x00000019, 0x00000017, 0x00000000, 0x00050051, 0x00000006, 0x0000001a, 0x00000017, 0x00000001, 0x00050051, 0x00000006, 0x0000001b, 0x00000017, 0x00000002, 0x00070050, 0x0000000a, 0x0000001c, 0x00000019, 0x0000001a, 0x0000001b, 0x00000018, 0x00050091, 0x0000000a, 0x0000001d, 0x00000013, 0x0000001c, 0x00050051, 0x00000006, 0x0000001e, 0x0000001d, 0x00000000, 0x00050051, 0x00000006, 0x0000001f, 0x0000001d, 0x00000001, 0x00050051, 0x00000006, 0x00000020, 0x0000001d, 0x00000002, 0x00060050, 0x00000007, 0x00000021, 0x0000001e, 0x0000001f, 0x00000020, 0x0003003e, 0x00000009, 0x00000021, 0x00050041, 0x00000011, 0x00000023, 0x0000000e, 0x00000010, 0x0004003d, 0x0000000b, 0x00000024, 0x00000023, 0x0004003d, 0x00000007, 0x00000027, 0x00000026, 0x00050051, 0x00000006, 0x00000028, 0x00000027, 0x00000000, 0x00050051, 0x00000006, 0x00000029, 0x00000027, 0x00000001, 0x00050051, 0x00000006, 0x0000002a, 0x00000027, 0x00000002, 0x00070050, 0x0000000a, 0x0000002b, 0x00000028, 0x00000029, 0x0000002a, 0x00000018, 0x00050091, 0x0000000a, 0x0000002c, 0x00000024, 0x0000002b, 0x00050051, 0x00000006, 0x0000002d, 0x0000002c, 0x00000000, 0x00050051, 0x00000006, 0x0000002e, 0x0000002c, 0x00000001, 0x00050051, 0x00000006, 0x0000002f, 0x0000002c, 0x00000002, 0x00060050, 0x00000007, 0x00000030, 0x0000002d, 0x0000002e, 0x0000002f, 0x0003003e, 0x00000022, 0x00000030, 0x00050041, 0x00000011, 0x00000032, 0x0000000e, 0x00000010, 0x0004003d, 0x0000000b, 0x00000033, 0x00000032, 0x00060051, 0x00000006, 0x00000036, 
0x00000033, 0x00000000, 0x00000000, 0x00060051, 0x00000006, 0x00000037, 0x00000033, 0x00000000, 0x00000001, 0x00060051, 0x00000006, 0x00000038, 0x00000033, 0x00000000, 0x00000002, 0x00060051, 0x00000006, 0x00000039, 0x00000033, 0x00000001, 0x00000000, 0x00060051, 0x00000006, 0x0000003a, 0x00000033, 0x00000001, 0x00000001, 0x00060051, 0x00000006, 0x0000003b, 0x00000033, 0x00000001, 0x00000002, 0x00060051, 0x00000006, 0x0000003c, 0x00000033, 0x00000002, 0x00000000, 0x00060051, 0x00000006, 0x0000003d, 0x00000033, 0x00000002, 0x00000001, 0x00060051, 0x00000006, 0x0000003e, 0x00000033, 0x00000002, 0x00000002, 0x00060050, 0x00000007, 0x0000003f, 0x00000036, 0x00000037, 0x00000038, 0x00060050, 0x00000007, 0x00000040, 0x00000039, 0x0000003a, 0x0000003b, 0x00060050, 0x00000007, 0x00000041, 0x0000003c, 0x0000003d, 0x0000003e, 0x00060050, 0x00000034, 0x00000042, 0x0000003f, 0x00000040, 0x00000041, 0x0004003d, 0x00000007, 0x00000044, 0x00000043, 0x00050091, 0x00000007, 0x00000045, 0x00000042, 0x00000044, 0x0003003e, 0x00000031, 0x00000045, 0x0004003d, 0x00000007, 0x00000047, 0x00000009, 0x0004003d, 0x00000007, 0x00000048, 0x00000022, 0x00050083, 0x00000007, 0x00000049, 0x00000047, 0x00000048, 0x0003003e, 0x00000046, 0x00000049, 0x0004003d, 0x00000007, 0x0000004c, 0x00000046, 0x0004003d, 0x00000007, 0x0000004d, 0x00000031, 0x00050094, 0x00000006, 0x0000004e, 0x0000004c, 0x0000004d, 0x0004003d, 0x00000007, 0x0000004f, 0x00000046, 0x0006000c, 0x00000006, 0x00000050, 0x00000001, 0x00000042, 0x0000004f, 0x00050088, 0x00000006, 0x00000051, 0x0000004e, 0x00000050, 0x0004003d, 0x00000007, 0x00000052, 0x00000031, 0x0006000c, 0x00000006, 0x00000053, 0x00000001, 0x00000042, 0x00000052, 0x00050088, 0x00000006, 0x00000054, 0x00000051, 0x00000053, 0x0003003e, 0x0000004b, 0x00000054, 0x0004003d, 0x00000006, 0x00000055, 0x0000004b, 0x0006000c, 0x00000006, 0x00000056, 0x00000001, 0x00000004, 0x00000055, 0x0003003e, 0x0000004b, 0x00000056, 0x00050041, 0x00000011, 0x0000005b, 0x0000000e, 
0x0000005a, 0x0004003d, 0x0000000b, 0x0000005c, 0x0000005b, 0x0004003d, 0x00000007, 0x0000005d, 0x00000022, 0x00050051, 0x00000006, 0x0000005e, 0x0000005d, 0x00000000, 0x00050051, 0x00000006, 0x0000005f, 0x0000005d, 0x00000001, 0x00050051, 0x00000006, 0x00000060, 0x0000005d, 0x00000002, 0x00070050, 0x0000000a, 0x00000061, 0x0000005e, 0x0000005f, 0x00000060, 0x00000018, 0x00050091, 0x0000000a, 0x00000062, 0x0000005c, 0x00000061, 0x00050041, 0x00000063, 0x00000064, 0x00000059, 0x00000014, 0x0003003e, 0x00000064, 0x00000062, 0x00050041, 0x00000015, 0x00000068, 0x0000000e, 0x00000067, 0x0004003d, 0x00000007, 0x00000069, 0x00000068, 0x0004003d, 0x00000006, 0x0000006a, 0x0000004b, 0x0005008e, 0x00000007, 0x0000006b, 0x00000069, 0x0000006a, 0x0003003e, 0x00000066, 0x0000006b, 0x000100fd, 0x00010038, }; Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/android/src/main/res/000077500000000000000000000000001270147354000270735ustar00rootroot00000000000000Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/android/src/main/res/values/000077500000000000000000000000001270147354000303725ustar00rootroot00000000000000Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/android/src/main/res/values/strings.xml000066400000000000000000000001501270147354000326010ustar00rootroot00000000000000 Smoke Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/generate-dispatch-table000077500000000000000000000532721270147354000275620ustar00rootroot00000000000000#!/usr/bin/env python3 # # Copyright (C) 2016 Google, Inc. 
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

"""Generate Vulkan dispatch table.
"""

import os
import sys


class Command(object):
    PLATFORM = 0
    LOADER = 1
    INSTANCE = 2
    DEVICE = 3

    def __init__(self, name, dispatch):
        self.name = name
        self.dispatch = dispatch
        self.ty = self._get_type()

    @staticmethod
    def valid_c_typedef(c):
        return (c.startswith("typedef") and
                c.endswith(");") and
                "*PFN_vkVoidFunction" not in c)

    @classmethod
    def from_c_typedef(cls, c):
        name_begin = c.find("*PFN_vk") + 7
        name_end = c.find(")(", name_begin)
        name = c[name_begin:name_end]

        dispatch_begin = name_end + 2
        dispatch_end = c.find(" ", dispatch_begin)
        dispatch = c[dispatch_begin:dispatch_end]
        if not dispatch.startswith("Vk"):
            dispatch = None

        return cls(name, dispatch)

    def _get_type(self):
        if self.dispatch:
            if self.dispatch in ["VkDevice", "VkQueue", "VkCommandBuffer"]:
                return self.DEVICE
            else:
                return self.INSTANCE
        else:
            if self.name in ["GetInstanceProcAddr"]:
                return self.PLATFORM
            else:
                return self.LOADER

    def __repr__(self):
        return "Command(name=%s, dispatch=%s)" % \
            (repr(self.name), repr(self.dispatch))


class Extension(object):
    def __init__(self, name, version, guard=None, commands=[]):
        self.name = name
        self.version = version
        self.guard = guard
        self.commands = commands[:]

    def add_command(self, cmd):
        self.commands.append(cmd)

    def __repr__(self):
        lines = []
        lines.append("Extension(name=%s, version=%s, guard=%s, commands=[" %
                     (repr(self.name), repr(self.version), repr(self.guard)))
        for cmd in self.commands:
            lines.append("    %s," % repr(cmd))
        lines.append("])")

        return "\n".join(lines)


# generated by "generate-dispatch-table parse vulkan.h"
vk_core = Extension(name='VK_core', version=0, guard=None, commands=[
    Command(name='CreateInstance', dispatch=None),
    Command(name='DestroyInstance', dispatch='VkInstance'),
    Command(name='EnumeratePhysicalDevices', dispatch='VkInstance'),
    Command(name='GetPhysicalDeviceFeatures', dispatch='VkPhysicalDevice'),
    Command(name='GetPhysicalDeviceFormatProperties',
            dispatch='VkPhysicalDevice'),
    Command(name='GetPhysicalDeviceImageFormatProperties',
            dispatch='VkPhysicalDevice'),
    Command(name='GetPhysicalDeviceProperties', dispatch='VkPhysicalDevice'),
    Command(name='GetPhysicalDeviceQueueFamilyProperties',
            dispatch='VkPhysicalDevice'),
    Command(name='GetPhysicalDeviceMemoryProperties',
            dispatch='VkPhysicalDevice'),
    Command(name='GetInstanceProcAddr', dispatch='VkInstance'),
    Command(name='GetDeviceProcAddr', dispatch='VkDevice'),
    Command(name='CreateDevice', dispatch='VkPhysicalDevice'),
    Command(name='DestroyDevice', dispatch='VkDevice'),
    Command(name='EnumerateInstanceExtensionProperties', dispatch=None),
    Command(name='EnumerateDeviceExtensionProperties',
            dispatch='VkPhysicalDevice'),
    Command(name='EnumerateInstanceLayerProperties', dispatch=None),
    Command(name='EnumerateDeviceLayerProperties', dispatch='VkPhysicalDevice'),
    Command(name='GetDeviceQueue', dispatch='VkDevice'),
    Command(name='QueueSubmit', dispatch='VkQueue'),
    Command(name='QueueWaitIdle', dispatch='VkQueue'),
    Command(name='DeviceWaitIdle', dispatch='VkDevice'),
    Command(name='AllocateMemory', dispatch='VkDevice'),
    Command(name='FreeMemory', dispatch='VkDevice'),
    Command(name='MapMemory', dispatch='VkDevice'),
    Command(name='UnmapMemory', dispatch='VkDevice'),
    Command(name='FlushMappedMemoryRanges', dispatch='VkDevice'),
    Command(name='InvalidateMappedMemoryRanges', dispatch='VkDevice'),
    Command(name='GetDeviceMemoryCommitment', dispatch='VkDevice'),
    Command(name='BindBufferMemory', dispatch='VkDevice'),
    Command(name='BindImageMemory', dispatch='VkDevice'),
    Command(name='GetBufferMemoryRequirements', dispatch='VkDevice'),
    Command(name='GetImageMemoryRequirements', dispatch='VkDevice'),
    Command(name='GetImageSparseMemoryRequirements', dispatch='VkDevice'),
    Command(name='GetPhysicalDeviceSparseImageFormatProperties',
            dispatch='VkPhysicalDevice'),
    Command(name='QueueBindSparse', dispatch='VkQueue'),
    Command(name='CreateFence', dispatch='VkDevice'),
    Command(name='DestroyFence', dispatch='VkDevice'),
    Command(name='ResetFences', dispatch='VkDevice'),
    Command(name='GetFenceStatus', dispatch='VkDevice'),
    Command(name='WaitForFences', dispatch='VkDevice'),
    Command(name='CreateSemaphore', dispatch='VkDevice'),
    Command(name='DestroySemaphore', dispatch='VkDevice'),
    Command(name='CreateEvent', dispatch='VkDevice'),
    Command(name='DestroyEvent', dispatch='VkDevice'),
    Command(name='GetEventStatus', dispatch='VkDevice'),
    Command(name='SetEvent', dispatch='VkDevice'),
    Command(name='ResetEvent', dispatch='VkDevice'),
    Command(name='CreateQueryPool', dispatch='VkDevice'),
    Command(name='DestroyQueryPool', dispatch='VkDevice'),
    Command(name='GetQueryPoolResults', dispatch='VkDevice'),
    Command(name='CreateBuffer', dispatch='VkDevice'),
    Command(name='DestroyBuffer', dispatch='VkDevice'),
    Command(name='CreateBufferView', dispatch='VkDevice'),
    Command(name='DestroyBufferView', dispatch='VkDevice'),
    Command(name='CreateImage', dispatch='VkDevice'),
    Command(name='DestroyImage', dispatch='VkDevice'),
    Command(name='GetImageSubresourceLayout', dispatch='VkDevice'),
    Command(name='CreateImageView', dispatch='VkDevice'),
    Command(name='DestroyImageView', dispatch='VkDevice'),
    Command(name='CreateShaderModule', dispatch='VkDevice'),
    Command(name='DestroyShaderModule', dispatch='VkDevice'),
    Command(name='CreatePipelineCache', dispatch='VkDevice'),
    Command(name='DestroyPipelineCache', dispatch='VkDevice'),
    Command(name='GetPipelineCacheData', dispatch='VkDevice'),
    Command(name='MergePipelineCaches', dispatch='VkDevice'),
    Command(name='CreateGraphicsPipelines', dispatch='VkDevice'),
    Command(name='CreateComputePipelines', dispatch='VkDevice'),
    Command(name='DestroyPipeline', dispatch='VkDevice'),
    Command(name='CreatePipelineLayout', dispatch='VkDevice'),
    Command(name='DestroyPipelineLayout', dispatch='VkDevice'),
    Command(name='CreateSampler', dispatch='VkDevice'),
    Command(name='DestroySampler', dispatch='VkDevice'),
    Command(name='CreateDescriptorSetLayout', dispatch='VkDevice'),
    Command(name='DestroyDescriptorSetLayout', dispatch='VkDevice'),
    Command(name='CreateDescriptorPool', dispatch='VkDevice'),
    Command(name='DestroyDescriptorPool', dispatch='VkDevice'),
    Command(name='ResetDescriptorPool', dispatch='VkDevice'),
    Command(name='AllocateDescriptorSets', dispatch='VkDevice'),
    Command(name='FreeDescriptorSets', dispatch='VkDevice'),
    Command(name='UpdateDescriptorSets', dispatch='VkDevice'),
    Command(name='CreateFramebuffer', dispatch='VkDevice'),
    Command(name='DestroyFramebuffer', dispatch='VkDevice'),
    Command(name='CreateRenderPass', dispatch='VkDevice'),
    Command(name='DestroyRenderPass', dispatch='VkDevice'),
    Command(name='GetRenderAreaGranularity', dispatch='VkDevice'),
    Command(name='CreateCommandPool', dispatch='VkDevice'),
    Command(name='DestroyCommandPool', dispatch='VkDevice'),
    Command(name='ResetCommandPool', dispatch='VkDevice'),
    Command(name='AllocateCommandBuffers', dispatch='VkDevice'),
    Command(name='FreeCommandBuffers', dispatch='VkDevice'),
    Command(name='BeginCommandBuffer', dispatch='VkCommandBuffer'),
    Command(name='EndCommandBuffer', dispatch='VkCommandBuffer'),
    Command(name='ResetCommandBuffer', dispatch='VkCommandBuffer'),
    Command(name='CmdBindPipeline', dispatch='VkCommandBuffer'),
    Command(name='CmdSetViewport', dispatch='VkCommandBuffer'),
    Command(name='CmdSetScissor', dispatch='VkCommandBuffer'),
    Command(name='CmdSetLineWidth', dispatch='VkCommandBuffer'),
    Command(name='CmdSetDepthBias', dispatch='VkCommandBuffer'),
    Command(name='CmdSetBlendConstants', dispatch='VkCommandBuffer'),
    Command(name='CmdSetDepthBounds', dispatch='VkCommandBuffer'),
    Command(name='CmdSetStencilCompareMask', dispatch='VkCommandBuffer'),
    Command(name='CmdSetStencilWriteMask', dispatch='VkCommandBuffer'),
    Command(name='CmdSetStencilReference', dispatch='VkCommandBuffer'),
    Command(name='CmdBindDescriptorSets', dispatch='VkCommandBuffer'),
    Command(name='CmdBindIndexBuffer', dispatch='VkCommandBuffer'),
    Command(name='CmdBindVertexBuffers', dispatch='VkCommandBuffer'),
    Command(name='CmdDraw',
            dispatch='VkCommandBuffer'),
    Command(name='CmdDrawIndexed', dispatch='VkCommandBuffer'),
    Command(name='CmdDrawIndirect', dispatch='VkCommandBuffer'),
    Command(name='CmdDrawIndexedIndirect', dispatch='VkCommandBuffer'),
    Command(name='CmdDispatch', dispatch='VkCommandBuffer'),
    Command(name='CmdDispatchIndirect', dispatch='VkCommandBuffer'),
    Command(name='CmdCopyBuffer', dispatch='VkCommandBuffer'),
    Command(name='CmdCopyImage', dispatch='VkCommandBuffer'),
    Command(name='CmdBlitImage', dispatch='VkCommandBuffer'),
    Command(name='CmdCopyBufferToImage', dispatch='VkCommandBuffer'),
    Command(name='CmdCopyImageToBuffer', dispatch='VkCommandBuffer'),
    Command(name='CmdUpdateBuffer', dispatch='VkCommandBuffer'),
    Command(name='CmdFillBuffer', dispatch='VkCommandBuffer'),
    Command(name='CmdClearColorImage', dispatch='VkCommandBuffer'),
    Command(name='CmdClearDepthStencilImage', dispatch='VkCommandBuffer'),
    Command(name='CmdClearAttachments', dispatch='VkCommandBuffer'),
    Command(name='CmdResolveImage', dispatch='VkCommandBuffer'),
    Command(name='CmdSetEvent', dispatch='VkCommandBuffer'),
    Command(name='CmdResetEvent', dispatch='VkCommandBuffer'),
    Command(name='CmdWaitEvents', dispatch='VkCommandBuffer'),
    Command(name='CmdPipelineBarrier', dispatch='VkCommandBuffer'),
    Command(name='CmdBeginQuery', dispatch='VkCommandBuffer'),
    Command(name='CmdEndQuery', dispatch='VkCommandBuffer'),
    Command(name='CmdResetQueryPool', dispatch='VkCommandBuffer'),
    Command(name='CmdWriteTimestamp', dispatch='VkCommandBuffer'),
    Command(name='CmdCopyQueryPoolResults', dispatch='VkCommandBuffer'),
    Command(name='CmdPushConstants', dispatch='VkCommandBuffer'),
    Command(name='CmdBeginRenderPass', dispatch='VkCommandBuffer'),
    Command(name='CmdNextSubpass', dispatch='VkCommandBuffer'),
    Command(name='CmdEndRenderPass', dispatch='VkCommandBuffer'),
    Command(name='CmdExecuteCommands', dispatch='VkCommandBuffer'),
])

vk_khr_surface = Extension(name='VK_KHR_surface', version=25, guard=None, commands=[
    Command(name='DestroySurfaceKHR', dispatch='VkInstance'),
    Command(name='GetPhysicalDeviceSurfaceSupportKHR',
            dispatch='VkPhysicalDevice'),
    Command(name='GetPhysicalDeviceSurfaceCapabilitiesKHR',
            dispatch='VkPhysicalDevice'),
    Command(name='GetPhysicalDeviceSurfaceFormatsKHR',
            dispatch='VkPhysicalDevice'),
    Command(name='GetPhysicalDeviceSurfacePresentModesKHR',
            dispatch='VkPhysicalDevice'),
])

vk_khr_swapchain = Extension(name='VK_KHR_swapchain', version=67, guard=None, commands=[
    Command(name='CreateSwapchainKHR', dispatch='VkDevice'),
    Command(name='DestroySwapchainKHR', dispatch='VkDevice'),
    Command(name='GetSwapchainImagesKHR', dispatch='VkDevice'),
    Command(name='AcquireNextImageKHR', dispatch='VkDevice'),
    Command(name='QueuePresentKHR', dispatch='VkQueue'),
])

vk_khr_display = Extension(name='VK_KHR_display', version=21, guard=None, commands=[
    Command(name='GetPhysicalDeviceDisplayPropertiesKHR',
            dispatch='VkPhysicalDevice'),
    Command(name='GetPhysicalDeviceDisplayPlanePropertiesKHR',
            dispatch='VkPhysicalDevice'),
    Command(name='GetDisplayPlaneSupportedDisplaysKHR',
            dispatch='VkPhysicalDevice'),
    Command(name='GetDisplayModePropertiesKHR', dispatch='VkPhysicalDevice'),
    Command(name='CreateDisplayModeKHR', dispatch='VkPhysicalDevice'),
    Command(name='GetDisplayPlaneCapabilitiesKHR', dispatch='VkPhysicalDevice'),
    Command(name='CreateDisplayPlaneSurfaceKHR', dispatch='VkInstance'),
])

vk_khr_display_swapchain = Extension(name='VK_KHR_display_swapchain', version=9, guard=None, commands=[
    Command(name='CreateSharedSwapchainsKHR', dispatch='VkDevice'),
])

vk_khr_xlib_surface = Extension(name='VK_KHR_xlib_surface', version=6,
                                guard='VK_USE_PLATFORM_XLIB_KHR', commands=[
    Command(name='CreateXlibSurfaceKHR', dispatch='VkInstance'),
    Command(name='GetPhysicalDeviceXlibPresentationSupportKHR',
            dispatch='VkPhysicalDevice'),
])

vk_khr_xcb_surface = Extension(name='VK_KHR_xcb_surface', version=6,
                               guard='VK_USE_PLATFORM_XCB_KHR', commands=[
    Command(name='CreateXcbSurfaceKHR',
            dispatch='VkInstance'),
    Command(name='GetPhysicalDeviceXcbPresentationSupportKHR',
            dispatch='VkPhysicalDevice'),
])

vk_khr_wayland_surface = Extension(name='VK_KHR_wayland_surface', version=5,
                                   guard='VK_USE_PLATFORM_WAYLAND_KHR', commands=[
    Command(name='CreateWaylandSurfaceKHR', dispatch='VkInstance'),
    Command(name='GetPhysicalDeviceWaylandPresentationSupportKHR',
            dispatch='VkPhysicalDevice'),
])

vk_khr_mir_surface = Extension(name='VK_KHR_mir_surface', version=4,
                               guard='VK_USE_PLATFORM_MIR_KHR', commands=[
    Command(name='CreateMirSurfaceKHR', dispatch='VkInstance'),
    Command(name='GetPhysicalDeviceMirPresentationSupportKHR',
            dispatch='VkPhysicalDevice'),
])

vk_khr_android_surface = Extension(name='VK_KHR_android_surface', version=6,
                                   guard='VK_USE_PLATFORM_ANDROID_KHR', commands=[
    Command(name='CreateAndroidSurfaceKHR', dispatch='VkInstance'),
])

vk_khr_win32_surface = Extension(name='VK_KHR_win32_surface', version=5,
                                 guard='VK_USE_PLATFORM_WIN32_KHR', commands=[
    Command(name='CreateWin32SurfaceKHR', dispatch='VkInstance'),
    Command(name='GetPhysicalDeviceWin32PresentationSupportKHR',
            dispatch='VkPhysicalDevice'),
])

vk_ext_debug_report = Extension(name='VK_EXT_debug_report', version=1, guard=None, commands=[
    Command(name='CreateDebugReportCallbackEXT', dispatch='VkInstance'),
    Command(name='DestroyDebugReportCallbackEXT', dispatch='VkInstance'),
    Command(name='DebugReportMessageEXT', dispatch='VkInstance'),
])

extensions = [
    vk_core,
    vk_khr_surface,
    vk_khr_swapchain,
    vk_khr_display,
    vk_khr_display_swapchain,
    vk_khr_xlib_surface,
    vk_khr_xcb_surface,
    vk_khr_wayland_surface,
    vk_khr_mir_surface,
    vk_khr_android_surface,
    vk_khr_win32_surface,
    vk_ext_debug_report,
]


def generate_header(guard):
    lines = []
    lines.append("// This file is generated.")
    lines.append("#ifndef %s" % guard)
    lines.append("#define %s" % guard)
    lines.append("")
    lines.append("#include <vulkan/vulkan.h>")
    lines.append("")
    lines.append("namespace vk {")
    lines.append("")

    for ext in extensions:
        if ext.guard:
            lines.append("#ifdef %s" % ext.guard)

        lines.append("// %s" % ext.name)
        for cmd in ext.commands:
            lines.append("extern PFN_vk%s %s;" % (cmd.name, cmd.name))

        if ext.guard:
            lines.append("#endif")
        lines.append("")

    lines.append("void init_dispatch_table_top(PFN_vkGetInstanceProcAddr get_instance_proc_addr);")
    lines.append("void init_dispatch_table_middle(VkInstance instance, bool include_bottom);")
    lines.append("void init_dispatch_table_bottom(VkInstance instance, VkDevice dev);")
    lines.append("")
    lines.append("} // namespace vk")
    lines.append("")
    lines.append("#endif // %s" % guard)

    return "\n".join(lines)


def get_proc_addr(dispatchable, cmd, guard=None):
    if dispatchable == "dev":
        func = "GetDeviceProcAddr"
    else:
        func = "GetInstanceProcAddr"

    c = "    %s = reinterpret_cast<PFN_vk%s>(%s(%s, \"vk%s\"));" % \
        (cmd.name, cmd.name, func, dispatchable, cmd.name)

    if guard:
        c = ("#ifdef %s\n" % guard) + c + "\n#endif"

    return c


def generate_source(header):
    lines = []
    lines.append("// This file is generated.")
    lines.append("#include \"%s\"" % header)
    lines.append("")
    lines.append("namespace vk {")
    lines.append("")

    commands_by_types = {}
    get_instance_proc_addr = None
    get_device_proc_addr = None
    for ext in extensions:
        if ext.guard:
            lines.append("#ifdef %s" % ext.guard)

        for cmd in ext.commands:
            lines.append("PFN_vk%s %s;" % (cmd.name, cmd.name))

            if cmd.ty not in commands_by_types:
                commands_by_types[cmd.ty] = []
            commands_by_types[cmd.ty].append([cmd, ext.guard])

            if cmd.name == "GetInstanceProcAddr":
                get_instance_proc_addr = cmd
            elif cmd.name == "GetDeviceProcAddr":
                get_device_proc_addr = cmd

        if ext.guard:
            lines.append("#endif")
    lines.append("")

    lines.append("void init_dispatch_table_top(PFN_vkGetInstanceProcAddr get_instance_proc_addr)")
    lines.append("{")
    lines.append("    GetInstanceProcAddr = get_instance_proc_addr;")
    lines.append("")
    for cmd, guard in commands_by_types[Command.LOADER]:
        lines.append(get_proc_addr("VK_NULL_HANDLE", cmd, guard))
    lines.append("}")
    lines.append("")

    lines.append("void init_dispatch_table_middle(VkInstance instance, bool include_bottom)")
    lines.append("{")
    lines.append(get_proc_addr("instance", get_instance_proc_addr))
    lines.append("")
    for cmd, guard in commands_by_types[Command.INSTANCE]:
        if cmd == get_instance_proc_addr:
            continue
        lines.append(get_proc_addr("instance", cmd, guard))
    lines.append("")
    lines.append("    if (!include_bottom)")
    lines.append("        return;")
    lines.append("")
    for cmd, guard in commands_by_types[Command.DEVICE]:
        lines.append(get_proc_addr("instance", cmd, guard))
    lines.append("}")
    lines.append("")

    lines.append("void init_dispatch_table_bottom(VkInstance instance, VkDevice dev)")
    lines.append("{")
    lines.append(get_proc_addr("instance", get_device_proc_addr))
    lines.append(get_proc_addr("dev", get_device_proc_addr))
    lines.append("")
    for cmd, guard in commands_by_types[Command.DEVICE]:
        if cmd == get_device_proc_addr:
            continue
        lines.append(get_proc_addr("dev", cmd, guard))
    lines.append("}")
    lines.append("")

    lines.append("} // namespace vk")

    return "\n".join(lines)


def parse_vulkan_h(filename):
    extensions = []

    with open(filename, "r") as f:
        current_ext = None
        ext_guard = None
        spec_version = None

        for line in f:
            line = line.strip()

            if line.startswith("#define VK_API_VERSION"):
                minor_end = line.rfind(",")
                minor_begin = line.rfind(",", 0, minor_end) + 1
                spec_version = int(line[minor_begin:minor_end])
                # add core
                current_ext = Extension("VK_core", spec_version)
                extensions.append(current_ext)
            elif Command.valid_c_typedef(line):
                current_ext.add_command(Command.from_c_typedef(line))
            elif line.startswith("#ifdef VK_USE_PLATFORM"):
                guard_begin = line.find(" ") + 1
                ext_guard = line[guard_begin:]
            elif line.startswith("#define") and "SPEC_VERSION " in line:
                version_begin = line.rfind(" ") + 1
                spec_version = int(line[version_begin:])
            elif line.startswith("#define") and "EXTENSION_NAME " in line:
                name_end = line.rfind("\"")
                name_begin = line.rfind("\"", 0, name_end) + 1
                name = line[name_begin:name_end]
                # add extension
                current_ext = Extension(name, spec_version, ext_guard)
                extensions.append(current_ext)
            elif ext_guard and line.startswith("#endif") and ext_guard in line:
                ext_guard = None

    for ext in extensions:
        print("%s = %s" % (ext.name.lower(), repr(ext)))
        print("")

    print("extensions = [")
    for ext in extensions:
        print("    %s," % ext.name.lower())
    print("]")


if __name__ == "__main__":
    if sys.argv[1] == "parse":
        parse_vulkan_h(sys.argv[2])
    else:
        filename = sys.argv[1]
        base = os.path.basename(filename)

        contents = []
        if base.endswith(".h"):
            contents = generate_header(base.replace(".", "_").upper())
        elif base.endswith(".cpp"):
            contents = generate_source(base.replace(".cpp", ".h"))

        with open(filename, "w") as f:
            print(contents, file=f)
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/smoke/glsl-to-spirv000077500000000000000000000057611270147354000256260ustar00rootroot00000000000000#!/usr/bin/env python3
#
# Copyright (C) 2016 Google, Inc.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
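The generate-dispatch-table script above emits one pointer-initialization statement per `Command` into the generated C++ source, routing device-dispatchable commands through `vkGetDeviceProcAddr` and everything else through `vkGetInstanceProcAddr`. As a standalone sketch of that emission step (the function name `emit_get_proc_addr` is hypothetical; it mirrors the script's `get_proc_addr` helper), the generated line for a device-level command looks like this:

```python
# Hypothetical stand-alone mirror of the generator's get_proc_addr() helper.
def emit_get_proc_addr(dispatchable, name, guard=None):
    # Device-dispatchable handles are resolved through vkGetDeviceProcAddr,
    # everything else through vkGetInstanceProcAddr.
    func = "GetDeviceProcAddr" if dispatchable == "dev" else "GetInstanceProcAddr"
    line = '    %s = reinterpret_cast<PFN_vk%s>(%s(%s, "vk%s"));' % (
        name, name, func, dispatchable, name)
    if guard:
        # Platform-specific commands are wrapped in their #ifdef guard.
        line = "#ifdef %s\n%s\n#endif" % (guard, line)
    return line

print(emit_get_proc_addr("dev", "QueueSubmit"))
# prints:
#     QueueSubmit = reinterpret_cast<PFN_vkQueueSubmit>(GetDeviceProcAddr(dev, "vkQueueSubmit"));
```

Resolving device-level entry points via `vkGetDeviceProcAddr` lets calls bypass the loader's dispatch trampoline, which is why the generator distinguishes the two lookup functions.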
"""Compile GLSL to SPIR-V. Depends on glslangValidator. """ import os import sys import subprocess import struct import re SPIRV_MAGIC = 0x07230203 COLUMNS = 4 INDENT = 4 in_filename = sys.argv[1] out_filename = sys.argv[2] if len(sys.argv) > 2 else None validator = sys.argv[3] if len(sys.argv) > 3 else \ "../../../glslang/build/install/bin/glslangValidator" def identifierize(s): # translate invalid chars s = re.sub("[^0-9a-zA-Z_]", "_", s) # translate leading digits return re.sub("^[^a-zA-Z_]+", "_", s) def compile_glsl(filename, tmpfile): # invoke glslangValidator try: args = [validator, "-V", "-H", "-o", tmpfile, filename] output = subprocess.check_output(args, universal_newlines=True) except subprocess.CalledProcessError as e: print(e.output, file=sys.stderr) exit(1) # read the temp file into a list of SPIR-V words words = [] with open(tmpfile, "rb") as f: data = f.read() assert(len(data) and len(data) % 4 == 0) # determine endianness fmt = ("<" if data[0] == (SPIRV_MAGIC & 0xff) else ">") + "I" for i in range(0, len(data), 4): words.append(struct.unpack(fmt, data[i:(i + 4)])[0]) assert(words[0] == SPIRV_MAGIC) # remove temp file os.remove(tmpfile) return (words, output.rstrip()) base = os.path.basename(in_filename) words, comments = compile_glsl(in_filename, base + ".tmp") literals = [] for i in range(0, len(words), COLUMNS): columns = ["0x%08x" % word for word in words[i:(i + COLUMNS)]] literals.append(" " * INDENT + ", ".join(columns) + ",") header = """#include #if 0 %s #endif static const uint32_t %s[%d] = { %s }; """ % (comments, identifierize(base), len(words), "\n".join(literals)) if out_filename: with open(out_filename, "w") as f: print(header, end="", file=f) else: print(header, end="") Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/tri.c000066400000000000000000002672241270147354000230100ustar00rootroot00000000000000/* * Copyright (c) 2015-2016 The Khronos Group Inc. 
* Copyright (c) 2015-2016 Valve Corporation * Copyright (c) 2015-2016 LunarG, Inc. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and/or associated documentation files (the "Materials"), to * deal in the Materials without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Materials, and to permit persons to whom the Materials are * furnished to do so, subject to the following conditions: * * The above copyright notice(s) and this permission notice shall be included in * all copies or substantial portions of the Materials. * * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. * * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE * USE OR OTHER DEALINGS IN THE MATERIALS. * * Author: Chia-I Wu * Author: Cody Northrop * Author: Courtney Goeltzenleuchter * Author: Ian Elliott * Author: Jon Ashburn * Author: Piers Daniell */ /* * Draw a textured triangle with depth testing. This is written against Intel * ICD. It does not do state transition nor object memory binding like it * should. It also does no error checking. 
 */
#ifndef _MSC_VER
#define _ISOC11_SOURCE /* for aligned_alloc() */
#endif

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>
#include <assert.h>
#include <signal.h>

#ifdef _WIN32
#pragma comment(linker, "/subsystem:windows")
#define APP_NAME_STR_LEN 80
#endif // _WIN32

#include <vulkan/vulkan.h>

#define DEMO_TEXTURE_COUNT 1
#define VERTEX_BUFFER_BIND_ID 0
#define APP_SHORT_NAME "tri"
#define APP_LONG_NAME "The Vulkan Triangle Demo Program"

#define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))

#if defined(NDEBUG) && defined(__GNUC__)
#define U_ASSERT_ONLY __attribute__((unused))
#else
#define U_ASSERT_ONLY
#endif

#ifdef _WIN32
#define ERR_EXIT(err_msg, err_class)                                           \
    do {                                                                       \
        MessageBox(NULL, err_msg, err_class, MB_OK);                           \
        exit(1);                                                               \
    } while (0)
#else // _WIN32
#define ERR_EXIT(err_msg, err_class)                                           \
    do {                                                                       \
        printf(err_msg);                                                       \
        fflush(stdout);                                                        \
        exit(1);                                                               \
    } while (0)
#endif // _WIN32

#define GET_INSTANCE_PROC_ADDR(inst, entrypoint)                               \
    {                                                                          \
        demo->fp##entrypoint =                                                 \
            (PFN_vk##entrypoint)vkGetInstanceProcAddr(inst, "vk" #entrypoint); \
        if (demo->fp##entrypoint == NULL) {                                    \
            ERR_EXIT("vkGetInstanceProcAddr failed to find vk" #entrypoint,    \
                     "vkGetInstanceProcAddr Failure");                         \
        }                                                                      \
    }

#define GET_DEVICE_PROC_ADDR(dev, entrypoint)                                  \
    {                                                                          \
        demo->fp##entrypoint =                                                 \
            (PFN_vk##entrypoint)vkGetDeviceProcAddr(dev, "vk" #entrypoint);    \
        if (demo->fp##entrypoint == NULL) {                                    \
            ERR_EXIT("vkGetDeviceProcAddr failed to find vk" #entrypoint,      \
                     "vkGetDeviceProcAddr Failure");                           \
        }                                                                      \
    }

struct texture_object {
    VkSampler sampler;

    VkImage image;
    VkImageLayout imageLayout;

    VkDeviceMemory mem;
    VkImageView view;
    int32_t tex_width, tex_height;
};

static int validation_error = 0;

VKAPI_ATTR VkBool32 VKAPI_CALL
dbgFunc(VkFlags msgFlags, VkDebugReportObjectTypeEXT objType,
        uint64_t srcObject, size_t location, int32_t msgCode,
        const char *pLayerPrefix, const char *pMsg, void *pUserData) {
    char *message = (char *)malloc(strlen(pMsg) + 100);

    assert(message);

    validation_error = 1;

    if (msgFlags & VK_DEBUG_REPORT_ERROR_BIT_EXT) {
        sprintf(message, "ERROR: [%s] Code %d : %s",
pLayerPrefix, msgCode, pMsg); } else if (msgFlags & VK_DEBUG_REPORT_WARNING_BIT_EXT) { sprintf(message, "WARNING: [%s] Code %d : %s", pLayerPrefix, msgCode, pMsg); } else { return false; } #ifdef _WIN32 MessageBox(NULL, message, "Alert", MB_OK); #else printf("%s\n", message); fflush(stdout); #endif free(message); /* * false indicates that layer should not bail-out of an * API call that had validation failures. This may mean that the * app dies inside the driver due to invalid parameter(s). * That's what would happen without validation layers, so we'll * keep that behavior here. */ return false; } VKAPI_ATTR VkBool32 VKAPI_CALL BreakCallback(VkFlags msgFlags, VkDebugReportObjectTypeEXT objType, uint64_t srcObject, size_t location, int32_t msgCode, const char *pLayerPrefix, const char *pMsg, void *pUserData) { #ifndef WIN32 raise(SIGTRAP); #else DebugBreak(); #endif return false; } typedef struct _SwapchainBuffers { VkImage image; VkCommandBuffer cmd; VkImageView view; } SwapchainBuffers; struct demo { #ifdef _WIN32 #define APP_NAME_STR_LEN 80 HINSTANCE connection; // hInstance - Windows Instance char name[APP_NAME_STR_LEN]; // Name to put on the window/icon HWND window; // hWnd - window handle #else // _WIN32 xcb_connection_t *connection; xcb_screen_t *screen; xcb_window_t window; xcb_intern_atom_reply_t *atom_wm_delete_window; #endif // _WIN32 VkSurfaceKHR surface; bool prepared; bool use_staging_buffer; VkInstance inst; VkPhysicalDevice gpu; VkDevice device; VkQueue queue; VkPhysicalDeviceProperties gpu_props; VkQueueFamilyProperties *queue_props; uint32_t graphics_queue_node_index; uint32_t enabled_extension_count; uint32_t enabled_layer_count; char *extension_names[64]; char *device_validation_layers[64]; int width, height; VkFormat format; VkColorSpaceKHR color_space; PFN_vkGetPhysicalDeviceSurfaceSupportKHR fpGetPhysicalDeviceSurfaceSupportKHR; PFN_vkGetPhysicalDeviceSurfaceCapabilitiesKHR fpGetPhysicalDeviceSurfaceCapabilitiesKHR; 
PFN_vkGetPhysicalDeviceSurfaceFormatsKHR fpGetPhysicalDeviceSurfaceFormatsKHR; PFN_vkGetPhysicalDeviceSurfacePresentModesKHR fpGetPhysicalDeviceSurfacePresentModesKHR; PFN_vkCreateSwapchainKHR fpCreateSwapchainKHR; PFN_vkDestroySwapchainKHR fpDestroySwapchainKHR; PFN_vkGetSwapchainImagesKHR fpGetSwapchainImagesKHR; PFN_vkAcquireNextImageKHR fpAcquireNextImageKHR; PFN_vkQueuePresentKHR fpQueuePresentKHR; uint32_t swapchainImageCount; VkSwapchainKHR swapchain; SwapchainBuffers *buffers; VkCommandPool cmd_pool; struct { VkFormat format; VkImage image; VkDeviceMemory mem; VkImageView view; } depth; struct texture_object textures[DEMO_TEXTURE_COUNT]; struct { VkBuffer buf; VkDeviceMemory mem; VkPipelineVertexInputStateCreateInfo vi; VkVertexInputBindingDescription vi_bindings[1]; VkVertexInputAttributeDescription vi_attrs[2]; } vertices; VkCommandBuffer setup_cmd; // Command Buffer for initialization commands VkCommandBuffer draw_cmd; // Command Buffer for drawing commands VkPipelineLayout pipeline_layout; VkDescriptorSetLayout desc_layout; VkPipelineCache pipelineCache; VkRenderPass render_pass; VkPipeline pipeline; VkShaderModule vert_shader_module; VkShaderModule frag_shader_module; VkDescriptorPool desc_pool; VkDescriptorSet desc_set; VkFramebuffer *framebuffers; VkPhysicalDeviceMemoryProperties memory_properties; int32_t curFrame; int32_t frameCount; bool validate; bool use_break; PFN_vkCreateDebugReportCallbackEXT CreateDebugReportCallback; PFN_vkDestroyDebugReportCallbackEXT DestroyDebugReportCallback; VkDebugReportCallbackEXT msg_callback; PFN_vkDebugReportMessageEXT DebugReportMessage; float depthStencil; float depthIncrement; bool quit; uint32_t current_buffer; uint32_t queue_count; }; // Forward declaration: static void demo_resize(struct demo *demo); static bool memory_type_from_properties(struct demo *demo, uint32_t typeBits, VkFlags requirements_mask, uint32_t *typeIndex) { // Search memtypes to find first index with those properties for (uint32_t i = 0; i 
< 32; i++) { if ((typeBits & 1) == 1) { // Type is available, does it match user properties? if ((demo->memory_properties.memoryTypes[i].propertyFlags & requirements_mask) == requirements_mask) { *typeIndex = i; return true; } } typeBits >>= 1; } // No memory types matched, return failure return false; } static void demo_flush_init_cmd(struct demo *demo) { VkResult U_ASSERT_ONLY err; if (demo->setup_cmd == VK_NULL_HANDLE) return; err = vkEndCommandBuffer(demo->setup_cmd); assert(!err); const VkCommandBuffer cmd_bufs[] = {demo->setup_cmd}; VkFence nullFence = {VK_NULL_HANDLE}; VkSubmitInfo submit_info = {.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO, .pNext = NULL, .waitSemaphoreCount = 0, .pWaitSemaphores = NULL, .pWaitDstStageMask = NULL, .commandBufferCount = 1, .pCommandBuffers = cmd_bufs, .signalSemaphoreCount = 0, .pSignalSemaphores = NULL}; err = vkQueueSubmit(demo->queue, 1, &submit_info, nullFence); assert(!err); err = vkQueueWaitIdle(demo->queue); assert(!err); vkFreeCommandBuffers(demo->device, demo->cmd_pool, 1, cmd_bufs); demo->setup_cmd = VK_NULL_HANDLE; } static void demo_set_image_layout(struct demo *demo, VkImage image, VkImageAspectFlags aspectMask, VkImageLayout old_image_layout, VkImageLayout new_image_layout, VkAccessFlagBits srcAccessMask) { VkResult U_ASSERT_ONLY err; if (demo->setup_cmd == VK_NULL_HANDLE) { const VkCommandBufferAllocateInfo cmd = { .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO, .pNext = NULL, .commandPool = demo->cmd_pool, .level = VK_COMMAND_BUFFER_LEVEL_PRIMARY, .commandBufferCount = 1, }; err = vkAllocateCommandBuffers(demo->device, &cmd, &demo->setup_cmd); assert(!err); VkCommandBufferInheritanceInfo cmd_buf_hinfo = { .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO, .pNext = NULL, .renderPass = VK_NULL_HANDLE, .subpass = 0, .framebuffer = VK_NULL_HANDLE, .occlusionQueryEnable = VK_FALSE, .queryFlags = 0, .pipelineStatistics = 0, }; VkCommandBufferBeginInfo cmd_buf_info = { .sType = 
VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO, .pNext = NULL, .flags = 0, .pInheritanceInfo = &cmd_buf_hinfo, }; err = vkBeginCommandBuffer(demo->setup_cmd, &cmd_buf_info); assert(!err); } VkImageMemoryBarrier image_memory_barrier = { .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER, .pNext = NULL, .srcAccessMask = srcAccessMask, .dstAccessMask = 0, .oldLayout = old_image_layout, .newLayout = new_image_layout, .image = image, .subresourceRange = {aspectMask, 0, 1, 0, 1}}; if (new_image_layout == VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL) { /* Make sure anything that was copying from this image has completed */ image_memory_barrier.dstAccessMask = VK_ACCESS_TRANSFER_READ_BIT; } if (new_image_layout == VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL) { image_memory_barrier.dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT; } if (new_image_layout == VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL) { image_memory_barrier.dstAccessMask = VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT; } if (new_image_layout == VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL) { /* Make sure any Copy or CPU writes to image are flushed */ image_memory_barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT | VK_ACCESS_INPUT_ATTACHMENT_READ_BIT; } VkImageMemoryBarrier *pmemory_barrier = &image_memory_barrier; VkPipelineStageFlags src_stages = VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT; VkPipelineStageFlags dest_stages = VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT; vkCmdPipelineBarrier(demo->setup_cmd, src_stages, dest_stages, 0, 0, NULL, 0, NULL, 1, pmemory_barrier); } static void demo_draw_build_cmd(struct demo *demo) { const VkCommandBufferInheritanceInfo cmd_buf_hinfo = { .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO, .pNext = NULL, .renderPass = VK_NULL_HANDLE, .subpass = 0, .framebuffer = VK_NULL_HANDLE, .occlusionQueryEnable = VK_FALSE, .queryFlags = 0, .pipelineStatistics = 0, }; const VkCommandBufferBeginInfo cmd_buf_info = { .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO, .pNext = NULL, .flags = 0, 
.pInheritanceInfo = &cmd_buf_hinfo, }; const VkClearValue clear_values[2] = { [0] = {.color.float32 = {0.2f, 0.2f, 0.2f, 0.2f}}, [1] = {.depthStencil = {demo->depthStencil, 0}}, }; const VkRenderPassBeginInfo rp_begin = { .sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO, .pNext = NULL, .renderPass = demo->render_pass, .framebuffer = demo->framebuffers[demo->current_buffer], .renderArea.offset.x = 0, .renderArea.offset.y = 0, .renderArea.extent.width = demo->width, .renderArea.extent.height = demo->height, .clearValueCount = 2, .pClearValues = clear_values, }; VkResult U_ASSERT_ONLY err; err = vkBeginCommandBuffer(demo->draw_cmd, &cmd_buf_info); assert(!err); vkCmdBeginRenderPass(demo->draw_cmd, &rp_begin, VK_SUBPASS_CONTENTS_INLINE); vkCmdBindPipeline(demo->draw_cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, demo->pipeline); vkCmdBindDescriptorSets(demo->draw_cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, demo->pipeline_layout, 0, 1, &demo->desc_set, 0, NULL); VkViewport viewport; memset(&viewport, 0, sizeof(viewport)); viewport.height = (float)demo->height; viewport.width = (float)demo->width; viewport.minDepth = (float)0.0f; viewport.maxDepth = (float)1.0f; vkCmdSetViewport(demo->draw_cmd, 0, 1, &viewport); VkRect2D scissor; memset(&scissor, 0, sizeof(scissor)); scissor.extent.width = demo->width; scissor.extent.height = demo->height; scissor.offset.x = 0; scissor.offset.y = 0; vkCmdSetScissor(demo->draw_cmd, 0, 1, &scissor); VkDeviceSize offsets[1] = {0}; vkCmdBindVertexBuffers(demo->draw_cmd, VERTEX_BUFFER_BIND_ID, 1, &demo->vertices.buf, offsets); vkCmdDraw(demo->draw_cmd, 3, 1, 0, 0); vkCmdEndRenderPass(demo->draw_cmd); VkImageMemoryBarrier prePresentBarrier = { .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER, .pNext = NULL, .srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT, .dstAccessMask = VK_ACCESS_MEMORY_READ_BIT, .oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, .newLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR, .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED, 
.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED, .subresourceRange = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1}}; prePresentBarrier.image = demo->buffers[demo->current_buffer].image; VkImageMemoryBarrier *pmemory_barrier = &prePresentBarrier; vkCmdPipelineBarrier(demo->draw_cmd, VK_PIPELINE_STAGE_ALL_COMMANDS_BIT, VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT, 0, 0, NULL, 0, NULL, 1, pmemory_barrier); err = vkEndCommandBuffer(demo->draw_cmd); assert(!err); } static void demo_draw(struct demo *demo) { VkResult U_ASSERT_ONLY err; VkSemaphore presentCompleteSemaphore; VkSemaphoreCreateInfo presentCompleteSemaphoreCreateInfo = { .sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO, .pNext = NULL, .flags = 0, }; err = vkCreateSemaphore(demo->device, &presentCompleteSemaphoreCreateInfo, NULL, &presentCompleteSemaphore); assert(!err); // Get the index of the next available swapchain image: err = demo->fpAcquireNextImageKHR(demo->device, demo->swapchain, UINT64_MAX, presentCompleteSemaphore, (VkFence)0, // TODO: Show use of fence &demo->current_buffer); if (err == VK_ERROR_OUT_OF_DATE_KHR) { // demo->swapchain is out of date (e.g. the window was resized) and // must be recreated: demo_resize(demo); demo_draw(demo); vkDestroySemaphore(demo->device, presentCompleteSemaphore, NULL); return; } else if (err == VK_SUBOPTIMAL_KHR) { // demo->swapchain is not as optimal as it could be, but the platform's // presentation engine will still present the image correctly. 
} else { assert(!err); } // Assume the command buffer has been run on current_buffer before so // we need to set the image layout back to COLOR_ATTACHMENT_OPTIMAL demo_set_image_layout(demo, demo->buffers[demo->current_buffer].image, VK_IMAGE_ASPECT_COLOR_BIT, VK_IMAGE_LAYOUT_PRESENT_SRC_KHR, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, 0); demo_flush_init_cmd(demo); // Wait for the present complete semaphore to be signaled to ensure // that the image won't be rendered to until the presentation // engine has fully released ownership to the application, and it is // okay to render to the image. // FIXME/TODO: DEAL WITH VK_IMAGE_LAYOUT_PRESENT_SRC_KHR demo_draw_build_cmd(demo); VkFence nullFence = VK_NULL_HANDLE; VkPipelineStageFlags pipe_stage_flags = VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT; VkSubmitInfo submit_info = {.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO, .pNext = NULL, .waitSemaphoreCount = 1, .pWaitSemaphores = &presentCompleteSemaphore, .pWaitDstStageMask = &pipe_stage_flags, .commandBufferCount = 1, .pCommandBuffers = &demo->draw_cmd, .signalSemaphoreCount = 0, .pSignalSemaphores = NULL}; err = vkQueueSubmit(demo->queue, 1, &submit_info, nullFence); assert(!err); VkPresentInfoKHR present = { .sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR, .pNext = NULL, .swapchainCount = 1, .pSwapchains = &demo->swapchain, .pImageIndices = &demo->current_buffer, }; // TBD/TODO: SHOULD THE "present" PARAMETER BE "const" IN THE HEADER? err = demo->fpQueuePresentKHR(demo->queue, &present); if (err == VK_ERROR_OUT_OF_DATE_KHR) { // demo->swapchain is out of date (e.g. the window was resized) and // must be recreated: demo_resize(demo); } else if (err == VK_SUBOPTIMAL_KHR) { // demo->swapchain is not as optimal as it could be, but the platform's // presentation engine will still present the image correctly. 
} else { assert(!err); } err = vkQueueWaitIdle(demo->queue); assert(err == VK_SUCCESS); vkDestroySemaphore(demo->device, presentCompleteSemaphore, NULL); } static void demo_prepare_buffers(struct demo *demo) { VkResult U_ASSERT_ONLY err; VkSwapchainKHR oldSwapchain = demo->swapchain; // Check the surface capabilities and formats VkSurfaceCapabilitiesKHR surfCapabilities; err = demo->fpGetPhysicalDeviceSurfaceCapabilitiesKHR( demo->gpu, demo->surface, &surfCapabilities); assert(!err); uint32_t presentModeCount; err = demo->fpGetPhysicalDeviceSurfacePresentModesKHR( demo->gpu, demo->surface, &presentModeCount, NULL); assert(!err); VkPresentModeKHR *presentModes = (VkPresentModeKHR *)malloc(presentModeCount * sizeof(VkPresentModeKHR)); assert(presentModes); err = demo->fpGetPhysicalDeviceSurfacePresentModesKHR( demo->gpu, demo->surface, &presentModeCount, presentModes); assert(!err); VkExtent2D swapchainExtent; // width and height are either both -1, or both not -1. if (surfCapabilities.currentExtent.width == (uint32_t)-1) { // If the surface size is undefined, the size is set to // the size of the images requested. 
swapchainExtent.width = demo->width; swapchainExtent.height = demo->height; } else { // If the surface size is defined, the swap chain size must match swapchainExtent = surfCapabilities.currentExtent; demo->width = surfCapabilities.currentExtent.width; demo->height = surfCapabilities.currentExtent.height; } VkPresentModeKHR swapchainPresentMode = VK_PRESENT_MODE_FIFO_KHR; // Determine the number of VkImage's to use in the swap chain (we desire to // own only 1 image at a time, besides the images being displayed and // queued for display): uint32_t desiredNumberOfSwapchainImages = surfCapabilities.minImageCount + 1; if ((surfCapabilities.maxImageCount > 0) && (desiredNumberOfSwapchainImages > surfCapabilities.maxImageCount)) { // Application must settle for fewer images than desired: desiredNumberOfSwapchainImages = surfCapabilities.maxImageCount; } VkSurfaceTransformFlagsKHR preTransform; if (surfCapabilities.supportedTransforms & VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR) { preTransform = VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR; } else { preTransform = surfCapabilities.currentTransform; } const VkSwapchainCreateInfoKHR swapchain = { .sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR, .pNext = NULL, .surface = demo->surface, .minImageCount = desiredNumberOfSwapchainImages, .imageFormat = demo->format, .imageColorSpace = demo->color_space, .imageExtent = { .width = swapchainExtent.width, .height = swapchainExtent.height, }, .imageUsage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT, .preTransform = preTransform, .compositeAlpha = VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR, .imageArrayLayers = 1, .imageSharingMode = VK_SHARING_MODE_EXCLUSIVE, .queueFamilyIndexCount = 0, .pQueueFamilyIndices = NULL, .presentMode = swapchainPresentMode, .oldSwapchain = oldSwapchain, .clipped = true, }; uint32_t i; err = demo->fpCreateSwapchainKHR(demo->device, &swapchain, NULL, &demo->swapchain); assert(!err); // If we just re-created an existing swapchain, we should destroy the old // swapchain at this 
point. // Note: destroying the swapchain also cleans up all its associated // presentable images once the platform is done with them. if (oldSwapchain != VK_NULL_HANDLE) { demo->fpDestroySwapchainKHR(demo->device, oldSwapchain, NULL); } err = demo->fpGetSwapchainImagesKHR(demo->device, demo->swapchain, &demo->swapchainImageCount, NULL); assert(!err); VkImage *swapchainImages = (VkImage *)malloc(demo->swapchainImageCount * sizeof(VkImage)); assert(swapchainImages); err = demo->fpGetSwapchainImagesKHR(demo->device, demo->swapchain, &demo->swapchainImageCount, swapchainImages); assert(!err); demo->buffers = (SwapchainBuffers *)malloc(sizeof(SwapchainBuffers) * demo->swapchainImageCount); assert(demo->buffers); for (i = 0; i < demo->swapchainImageCount; i++) { VkImageViewCreateInfo color_attachment_view = { .sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO, .pNext = NULL, .format = demo->format, .components = { .r = VK_COMPONENT_SWIZZLE_R, .g = VK_COMPONENT_SWIZZLE_G, .b = VK_COMPONENT_SWIZZLE_B, .a = VK_COMPONENT_SWIZZLE_A, }, .subresourceRange = {.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT, .baseMipLevel = 0, .levelCount = 1, .baseArrayLayer = 0, .layerCount = 1}, .viewType = VK_IMAGE_VIEW_TYPE_2D, .flags = 0, }; demo->buffers[i].image = swapchainImages[i]; // Render loop will expect image to have been used before and in // VK_IMAGE_LAYOUT_PRESENT_SRC_KHR // layout and will change to COLOR_ATTACHMENT_OPTIMAL, so init the image // to that state demo_set_image_layout( demo, demo->buffers[i].image, VK_IMAGE_ASPECT_COLOR_BIT, VK_IMAGE_LAYOUT_UNDEFINED, VK_IMAGE_LAYOUT_PRESENT_SRC_KHR, 0); color_attachment_view.image = demo->buffers[i].image; err = vkCreateImageView(demo->device, &color_attachment_view, NULL, &demo->buffers[i].view); assert(!err); } demo->current_buffer = 0; if (NULL != presentModes) { free(presentModes); } } static void demo_prepare_depth(struct demo *demo) { const VkFormat depth_format = VK_FORMAT_D16_UNORM; const VkImageCreateInfo image = { .sType = 
VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO, .pNext = NULL, .imageType = VK_IMAGE_TYPE_2D, .format = depth_format, .extent = {demo->width, demo->height, 1}, .mipLevels = 1, .arrayLayers = 1, .samples = VK_SAMPLE_COUNT_1_BIT, .tiling = VK_IMAGE_TILING_OPTIMAL, .usage = VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT, .flags = 0, }; VkMemoryAllocateInfo mem_alloc = { .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO, .pNext = NULL, .allocationSize = 0, .memoryTypeIndex = 0, }; VkImageViewCreateInfo view = { .sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO, .pNext = NULL, .image = VK_NULL_HANDLE, .format = depth_format, .subresourceRange = {.aspectMask = VK_IMAGE_ASPECT_DEPTH_BIT, .baseMipLevel = 0, .levelCount = 1, .baseArrayLayer = 0, .layerCount = 1}, .flags = 0, .viewType = VK_IMAGE_VIEW_TYPE_2D, }; VkMemoryRequirements mem_reqs; VkResult U_ASSERT_ONLY err; bool U_ASSERT_ONLY pass; demo->depth.format = depth_format; /* create image */ err = vkCreateImage(demo->device, &image, NULL, &demo->depth.image); assert(!err); /* get memory requirements for this object */ vkGetImageMemoryRequirements(demo->device, demo->depth.image, &mem_reqs); /* select memory size and type */ mem_alloc.allocationSize = mem_reqs.size; pass = memory_type_from_properties(demo, mem_reqs.memoryTypeBits, 0, /* No requirements */ &mem_alloc.memoryTypeIndex); assert(pass); /* allocate memory */ err = vkAllocateMemory(demo->device, &mem_alloc, NULL, &demo->depth.mem); assert(!err); /* bind memory */ err = vkBindImageMemory(demo->device, demo->depth.image, demo->depth.mem, 0); assert(!err); demo_set_image_layout(demo, demo->depth.image, VK_IMAGE_ASPECT_DEPTH_BIT, VK_IMAGE_LAYOUT_UNDEFINED, VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL, 0); /* create image view */ view.image = demo->depth.image; err = vkCreateImageView(demo->device, &view, NULL, &demo->depth.view); assert(!err); } static void demo_prepare_texture_image(struct demo *demo, const uint32_t *tex_colors, struct texture_object *tex_obj, VkImageTiling 
tiling, VkImageUsageFlags usage, VkFlags required_props) { const VkFormat tex_format = VK_FORMAT_B8G8R8A8_UNORM; const int32_t tex_width = 2; const int32_t tex_height = 2; VkResult U_ASSERT_ONLY err; bool U_ASSERT_ONLY pass; tex_obj->tex_width = tex_width; tex_obj->tex_height = tex_height; const VkImageCreateInfo image_create_info = { .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO, .pNext = NULL, .imageType = VK_IMAGE_TYPE_2D, .format = tex_format, .extent = {tex_width, tex_height, 1}, .mipLevels = 1, .arrayLayers = 1, .samples = VK_SAMPLE_COUNT_1_BIT, .tiling = tiling, .usage = usage, .flags = 0, .initialLayout = VK_IMAGE_LAYOUT_PREINITIALIZED }; VkMemoryAllocateInfo mem_alloc = { .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO, .pNext = NULL, .allocationSize = 0, .memoryTypeIndex = 0, }; VkMemoryRequirements mem_reqs; err = vkCreateImage(demo->device, &image_create_info, NULL, &tex_obj->image); assert(!err); vkGetImageMemoryRequirements(demo->device, tex_obj->image, &mem_reqs); mem_alloc.allocationSize = mem_reqs.size; pass = memory_type_from_properties(demo, mem_reqs.memoryTypeBits, required_props, &mem_alloc.memoryTypeIndex); assert(pass); /* allocate memory */ err = vkAllocateMemory(demo->device, &mem_alloc, NULL, &tex_obj->mem); assert(!err); /* bind memory */ err = vkBindImageMemory(demo->device, tex_obj->image, tex_obj->mem, 0); assert(!err); if (required_props & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) { const VkImageSubresource subres = { .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT, .mipLevel = 0, .arrayLayer = 0, }; VkSubresourceLayout layout; void *data; int32_t x, y; vkGetImageSubresourceLayout(demo->device, tex_obj->image, &subres, &layout); err = vkMapMemory(demo->device, tex_obj->mem, 0, mem_alloc.allocationSize, 0, &data); assert(!err); for (y = 0; y < tex_height; y++) { uint32_t *row = (uint32_t *)((char *)data + layout.rowPitch * y); for (x = 0; x < tex_width; x++) row[x] = tex_colors[(x & 1) ^ (y & 1)]; } vkUnmapMemory(demo->device, tex_obj->mem); } 
tex_obj->imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL; demo_set_image_layout(demo, tex_obj->image, VK_IMAGE_ASPECT_COLOR_BIT, VK_IMAGE_LAYOUT_PREINITIALIZED, tex_obj->imageLayout, VK_ACCESS_HOST_WRITE_BIT); /* setting the image layout does not reference the actual memory so no need * to add a mem ref */ } static void demo_destroy_texture_image(struct demo *demo, struct texture_object *tex_obj) { /* clean up staging resources */ vkDestroyImage(demo->device, tex_obj->image, NULL); vkFreeMemory(demo->device, tex_obj->mem, NULL); } static void demo_prepare_textures(struct demo *demo) { const VkFormat tex_format = VK_FORMAT_B8G8R8A8_UNORM; VkFormatProperties props; const uint32_t tex_colors[DEMO_TEXTURE_COUNT][2] = { {0xffff0000, 0xff00ff00}, }; uint32_t i; VkResult U_ASSERT_ONLY err; vkGetPhysicalDeviceFormatProperties(demo->gpu, tex_format, &props); for (i = 0; i < DEMO_TEXTURE_COUNT; i++) { if ((props.linearTilingFeatures & VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT) && !demo->use_staging_buffer) { /* Device can texture using linear textures */ demo_prepare_texture_image(demo, tex_colors[i], &demo->textures[i], VK_IMAGE_TILING_LINEAR, VK_IMAGE_USAGE_SAMPLED_BIT, VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT); } else if (props.optimalTilingFeatures & VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT) { /* Must use staging buffer to copy linear texture to optimized */ struct texture_object staging_texture; memset(&staging_texture, 0, sizeof(staging_texture)); demo_prepare_texture_image(demo, tex_colors[i], &staging_texture, VK_IMAGE_TILING_LINEAR, VK_IMAGE_USAGE_TRANSFER_SRC_BIT, VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT); demo_prepare_texture_image( demo, tex_colors[i], &demo->textures[i], VK_IMAGE_TILING_OPTIMAL, (VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT), VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT); demo_set_image_layout(demo, staging_texture.image, VK_IMAGE_ASPECT_COLOR_BIT, staging_texture.imageLayout, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL, 0); demo_set_image_layout(demo, 
                              demo->textures[i].image,
                              VK_IMAGE_ASPECT_COLOR_BIT,
                              demo->textures[i].imageLayout,
                              VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
                              0);

            VkImageCopy copy_region = {
                .srcSubresource = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1},
                .srcOffset = {0, 0, 0},
                .dstSubresource = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1},
                .dstOffset = {0, 0, 0},
                .extent = {staging_texture.tex_width,
                           staging_texture.tex_height, 1},
            };
            vkCmdCopyImage(
                demo->setup_cmd, staging_texture.image,
                VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL, demo->textures[i].image,
                VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &copy_region);

            demo_set_image_layout(demo, demo->textures[i].image,
                                  VK_IMAGE_ASPECT_COLOR_BIT,
                                  VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
                                  demo->textures[i].imageLayout, 0);

            demo_flush_init_cmd(demo);

            demo_destroy_texture_image(demo, &staging_texture);
        } else {
            /* Can't support VK_FORMAT_B8G8R8A8_UNORM !? */
            assert(!"No support for B8G8R8A8_UNORM as texture image format");
        }

        const VkSamplerCreateInfo sampler = {
            .sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO,
            .pNext = NULL,
            .magFilter = VK_FILTER_NEAREST,
            .minFilter = VK_FILTER_NEAREST,
            .mipmapMode = VK_SAMPLER_MIPMAP_MODE_NEAREST,
            .addressModeU = VK_SAMPLER_ADDRESS_MODE_REPEAT,
            .addressModeV = VK_SAMPLER_ADDRESS_MODE_REPEAT,
            .addressModeW = VK_SAMPLER_ADDRESS_MODE_REPEAT,
            .mipLodBias = 0.0f,
            .anisotropyEnable = VK_FALSE,
            .maxAnisotropy = 1,
            .compareOp = VK_COMPARE_OP_NEVER,
            .minLod = 0.0f,
            .maxLod = 0.0f,
            .borderColor = VK_BORDER_COLOR_FLOAT_OPAQUE_WHITE,
            .unnormalizedCoordinates = VK_FALSE,
        };
        VkImageViewCreateInfo view = {
            .sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
            .pNext = NULL,
            .image = VK_NULL_HANDLE,
            .viewType = VK_IMAGE_VIEW_TYPE_2D,
            .format = tex_format,
            .components =
                {
                 VK_COMPONENT_SWIZZLE_R, VK_COMPONENT_SWIZZLE_G,
                 VK_COMPONENT_SWIZZLE_B, VK_COMPONENT_SWIZZLE_A,
                },
            .subresourceRange = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1},
            .flags = 0,
        };

        /* create sampler */
        err = vkCreateSampler(demo->device, &sampler, NULL,
                              &demo->textures[i].sampler);
        assert(!err);

        /* create image
view */ view.image = demo->textures[i].image; err = vkCreateImageView(demo->device, &view, NULL, &demo->textures[i].view); assert(!err); } } static void demo_prepare_vertices(struct demo *demo) { // clang-format off const float vb[3][5] = { /* position texcoord */ { -1.0f, -1.0f, 0.25f, 0.0f, 0.0f }, { 1.0f, -1.0f, 0.25f, 1.0f, 0.0f }, { 0.0f, 1.0f, 1.0f, 0.5f, 1.0f }, }; // clang-format on const VkBufferCreateInfo buf_info = { .sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO, .pNext = NULL, .size = sizeof(vb), .usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT, .flags = 0, }; VkMemoryAllocateInfo mem_alloc = { .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO, .pNext = NULL, .allocationSize = 0, .memoryTypeIndex = 0, }; VkMemoryRequirements mem_reqs; VkResult U_ASSERT_ONLY err; bool U_ASSERT_ONLY pass; void *data; memset(&demo->vertices, 0, sizeof(demo->vertices)); err = vkCreateBuffer(demo->device, &buf_info, NULL, &demo->vertices.buf); assert(!err); vkGetBufferMemoryRequirements(demo->device, demo->vertices.buf, &mem_reqs); assert(!err); mem_alloc.allocationSize = mem_reqs.size; pass = memory_type_from_properties(demo, mem_reqs.memoryTypeBits, VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT, &mem_alloc.memoryTypeIndex); assert(pass); err = vkAllocateMemory(demo->device, &mem_alloc, NULL, &demo->vertices.mem); assert(!err); err = vkMapMemory(demo->device, demo->vertices.mem, 0, mem_alloc.allocationSize, 0, &data); assert(!err); memcpy(data, vb, sizeof(vb)); vkUnmapMemory(demo->device, demo->vertices.mem); err = vkBindBufferMemory(demo->device, demo->vertices.buf, demo->vertices.mem, 0); assert(!err); demo->vertices.vi.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO; demo->vertices.vi.pNext = NULL; demo->vertices.vi.vertexBindingDescriptionCount = 1; demo->vertices.vi.pVertexBindingDescriptions = demo->vertices.vi_bindings; demo->vertices.vi.vertexAttributeDescriptionCount = 2; demo->vertices.vi.pVertexAttributeDescriptions = demo->vertices.vi_attrs; 
demo->vertices.vi_bindings[0].binding = VERTEX_BUFFER_BIND_ID; demo->vertices.vi_bindings[0].stride = sizeof(vb[0]); demo->vertices.vi_bindings[0].inputRate = VK_VERTEX_INPUT_RATE_VERTEX; demo->vertices.vi_attrs[0].binding = VERTEX_BUFFER_BIND_ID; demo->vertices.vi_attrs[0].location = 0; demo->vertices.vi_attrs[0].format = VK_FORMAT_R32G32B32_SFLOAT; demo->vertices.vi_attrs[0].offset = 0; demo->vertices.vi_attrs[1].binding = VERTEX_BUFFER_BIND_ID; demo->vertices.vi_attrs[1].location = 1; demo->vertices.vi_attrs[1].format = VK_FORMAT_R32G32_SFLOAT; demo->vertices.vi_attrs[1].offset = sizeof(float) * 3; } static void demo_prepare_descriptor_layout(struct demo *demo) { const VkDescriptorSetLayoutBinding layout_binding = { .binding = 0, .descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, .descriptorCount = DEMO_TEXTURE_COUNT, .stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT, .pImmutableSamplers = NULL, }; const VkDescriptorSetLayoutCreateInfo descriptor_layout = { .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO, .pNext = NULL, .bindingCount = 1, .pBindings = &layout_binding, }; VkResult U_ASSERT_ONLY err; err = vkCreateDescriptorSetLayout(demo->device, &descriptor_layout, NULL, &demo->desc_layout); assert(!err); const VkPipelineLayoutCreateInfo pPipelineLayoutCreateInfo = { .sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO, .pNext = NULL, .setLayoutCount = 1, .pSetLayouts = &demo->desc_layout, }; err = vkCreatePipelineLayout(demo->device, &pPipelineLayoutCreateInfo, NULL, &demo->pipeline_layout); assert(!err); } static void demo_prepare_render_pass(struct demo *demo) { const VkAttachmentDescription attachments[2] = { [0] = { .format = demo->format, .samples = VK_SAMPLE_COUNT_1_BIT, .loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR, .storeOp = VK_ATTACHMENT_STORE_OP_STORE, .stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE, .stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE, .initialLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, .finalLayout = 
VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, }, [1] = { .format = demo->depth.format, .samples = VK_SAMPLE_COUNT_1_BIT, .loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR, .storeOp = VK_ATTACHMENT_STORE_OP_DONT_CARE, .stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE, .stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE, .initialLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL, .finalLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL, }, }; const VkAttachmentReference color_reference = { .attachment = 0, .layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, }; const VkAttachmentReference depth_reference = { .attachment = 1, .layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL, }; const VkSubpassDescription subpass = { .pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS, .flags = 0, .inputAttachmentCount = 0, .pInputAttachments = NULL, .colorAttachmentCount = 1, .pColorAttachments = &color_reference, .pResolveAttachments = NULL, .pDepthStencilAttachment = &depth_reference, .preserveAttachmentCount = 0, .pPreserveAttachments = NULL, }; const VkRenderPassCreateInfo rp_info = { .sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO, .pNext = NULL, .attachmentCount = 2, .pAttachments = attachments, .subpassCount = 1, .pSubpasses = &subpass, .dependencyCount = 0, .pDependencies = NULL, }; VkResult U_ASSERT_ONLY err; err = vkCreateRenderPass(demo->device, &rp_info, NULL, &demo->render_pass); assert(!err); } static VkShaderModule demo_prepare_shader_module(struct demo *demo, const void *code, size_t size) { VkShaderModuleCreateInfo moduleCreateInfo; VkShaderModule module; VkResult U_ASSERT_ONLY err; moduleCreateInfo.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO; moduleCreateInfo.pNext = NULL; moduleCreateInfo.codeSize = size; moduleCreateInfo.pCode = code; moduleCreateInfo.flags = 0; err = vkCreateShaderModule(demo->device, &moduleCreateInfo, NULL, &module); assert(!err); return module; } char *demo_read_spv(const char *filename, size_t *psize) { long int size; void 
*shader_code;
    size_t retVal;

    FILE *fp = fopen(filename, "rb");
    if (!fp)
        return NULL;

    fseek(fp, 0L, SEEK_END);
    size = ftell(fp);
    fseek(fp, 0L, SEEK_SET);

    shader_code = malloc(size);
    retVal = fread(shader_code, size, 1, fp);
    if (retVal != 1) {
        // Short read: release the buffer and the stream instead of leaking
        // them before reporting failure.
        free(shader_code);
        fclose(fp);
        return NULL;
    }
    *psize = size;

    fclose(fp);
    return shader_code;
}

static VkShaderModule demo_prepare_vs(struct demo *demo) {
    void *vertShaderCode;
    size_t size;

    vertShaderCode = demo_read_spv("tri-vert.spv", &size);

    demo->vert_shader_module =
        demo_prepare_shader_module(demo, vertShaderCode, size);

    free(vertShaderCode);

    return demo->vert_shader_module;
}

static VkShaderModule demo_prepare_fs(struct demo *demo) {
    void *fragShaderCode;
    size_t size;

    fragShaderCode = demo_read_spv("tri-frag.spv", &size);

    demo->frag_shader_module =
        demo_prepare_shader_module(demo, fragShaderCode, size);

    free(fragShaderCode);

    return demo->frag_shader_module;
}

static void demo_prepare_pipeline(struct demo *demo) {
    VkGraphicsPipelineCreateInfo pipeline;
    VkPipelineCacheCreateInfo pipelineCache;

    VkPipelineVertexInputStateCreateInfo vi;
    VkPipelineInputAssemblyStateCreateInfo ia;
    VkPipelineRasterizationStateCreateInfo rs;
    VkPipelineColorBlendStateCreateInfo cb;
    VkPipelineDepthStencilStateCreateInfo ds;
    VkPipelineViewportStateCreateInfo vp;
    VkPipelineMultisampleStateCreateInfo ms;
    VkDynamicState dynamicStateEnables[VK_DYNAMIC_STATE_RANGE_SIZE];
    VkPipelineDynamicStateCreateInfo dynamicState;

    VkResult U_ASSERT_ONLY err;

    memset(dynamicStateEnables, 0, sizeof dynamicStateEnables);
    memset(&dynamicState, 0, sizeof dynamicState);
    dynamicState.sType = VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO;
    dynamicState.pDynamicStates = dynamicStateEnables;

    memset(&pipeline, 0, sizeof(pipeline));
    pipeline.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
    pipeline.layout = demo->pipeline_layout;

    vi = demo->vertices.vi;

    memset(&ia, 0, sizeof(ia));
    ia.sType = VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO;
    ia.topology = VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
memset(&rs, 0, sizeof(rs)); rs.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO; rs.polygonMode = VK_POLYGON_MODE_FILL; rs.cullMode = VK_CULL_MODE_BACK_BIT; rs.frontFace = VK_FRONT_FACE_CLOCKWISE; rs.depthClampEnable = VK_FALSE; rs.rasterizerDiscardEnable = VK_FALSE; rs.depthBiasEnable = VK_FALSE; memset(&cb, 0, sizeof(cb)); cb.sType = VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO; VkPipelineColorBlendAttachmentState att_state[1]; memset(att_state, 0, sizeof(att_state)); att_state[0].colorWriteMask = 0xf; att_state[0].blendEnable = VK_FALSE; cb.attachmentCount = 1; cb.pAttachments = att_state; memset(&vp, 0, sizeof(vp)); vp.sType = VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO; vp.viewportCount = 1; dynamicStateEnables[dynamicState.dynamicStateCount++] = VK_DYNAMIC_STATE_VIEWPORT; vp.scissorCount = 1; dynamicStateEnables[dynamicState.dynamicStateCount++] = VK_DYNAMIC_STATE_SCISSOR; memset(&ds, 0, sizeof(ds)); ds.sType = VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO; ds.depthTestEnable = VK_TRUE; ds.depthWriteEnable = VK_TRUE; ds.depthCompareOp = VK_COMPARE_OP_LESS_OR_EQUAL; ds.depthBoundsTestEnable = VK_FALSE; ds.back.failOp = VK_STENCIL_OP_KEEP; ds.back.passOp = VK_STENCIL_OP_KEEP; ds.back.compareOp = VK_COMPARE_OP_ALWAYS; ds.stencilTestEnable = VK_FALSE; ds.front = ds.back; memset(&ms, 0, sizeof(ms)); ms.sType = VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO; ms.pSampleMask = NULL; ms.rasterizationSamples = VK_SAMPLE_COUNT_1_BIT; // Two stages: vs and fs pipeline.stageCount = 2; VkPipelineShaderStageCreateInfo shaderStages[2]; memset(&shaderStages, 0, 2 * sizeof(VkPipelineShaderStageCreateInfo)); shaderStages[0].sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO; shaderStages[0].stage = VK_SHADER_STAGE_VERTEX_BIT; shaderStages[0].module = demo_prepare_vs(demo); shaderStages[0].pName = "main"; shaderStages[1].sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO; shaderStages[1].stage = 
VK_SHADER_STAGE_FRAGMENT_BIT; shaderStages[1].module = demo_prepare_fs(demo); shaderStages[1].pName = "main"; pipeline.pVertexInputState = &vi; pipeline.pInputAssemblyState = &ia; pipeline.pRasterizationState = &rs; pipeline.pColorBlendState = &cb; pipeline.pMultisampleState = &ms; pipeline.pViewportState = &vp; pipeline.pDepthStencilState = &ds; pipeline.pStages = shaderStages; pipeline.renderPass = demo->render_pass; pipeline.pDynamicState = &dynamicState; memset(&pipelineCache, 0, sizeof(pipelineCache)); pipelineCache.sType = VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO; err = vkCreatePipelineCache(demo->device, &pipelineCache, NULL, &demo->pipelineCache); assert(!err); err = vkCreateGraphicsPipelines(demo->device, demo->pipelineCache, 1, &pipeline, NULL, &demo->pipeline); assert(!err); vkDestroyPipelineCache(demo->device, demo->pipelineCache, NULL); vkDestroyShaderModule(demo->device, demo->frag_shader_module, NULL); vkDestroyShaderModule(demo->device, demo->vert_shader_module, NULL); } static void demo_prepare_descriptor_pool(struct demo *demo) { const VkDescriptorPoolSize type_count = { .type = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, .descriptorCount = DEMO_TEXTURE_COUNT, }; const VkDescriptorPoolCreateInfo descriptor_pool = { .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO, .pNext = NULL, .maxSets = 1, .poolSizeCount = 1, .pPoolSizes = &type_count, }; VkResult U_ASSERT_ONLY err; err = vkCreateDescriptorPool(demo->device, &descriptor_pool, NULL, &demo->desc_pool); assert(!err); } static void demo_prepare_descriptor_set(struct demo *demo) { VkDescriptorImageInfo tex_descs[DEMO_TEXTURE_COUNT]; VkWriteDescriptorSet write; VkResult U_ASSERT_ONLY err; uint32_t i; VkDescriptorSetAllocateInfo alloc_info = { .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO, .pNext = NULL, .descriptorPool = demo->desc_pool, .descriptorSetCount = 1, .pSetLayouts = &demo->desc_layout}; err = vkAllocateDescriptorSets(demo->device, &alloc_info, &demo->desc_set); 
assert(!err); memset(&tex_descs, 0, sizeof(tex_descs)); for (i = 0; i < DEMO_TEXTURE_COUNT; i++) { tex_descs[i].sampler = demo->textures[i].sampler; tex_descs[i].imageView = demo->textures[i].view; tex_descs[i].imageLayout = VK_IMAGE_LAYOUT_GENERAL; } memset(&write, 0, sizeof(write)); write.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET; write.dstSet = demo->desc_set; write.descriptorCount = DEMO_TEXTURE_COUNT; write.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER; write.pImageInfo = tex_descs; vkUpdateDescriptorSets(demo->device, 1, &write, 0, NULL); } static void demo_prepare_framebuffers(struct demo *demo) { VkImageView attachments[2]; attachments[1] = demo->depth.view; const VkFramebufferCreateInfo fb_info = { .sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO, .pNext = NULL, .renderPass = demo->render_pass, .attachmentCount = 2, .pAttachments = attachments, .width = demo->width, .height = demo->height, .layers = 1, }; VkResult U_ASSERT_ONLY err; uint32_t i; demo->framebuffers = (VkFramebuffer *)malloc(demo->swapchainImageCount * sizeof(VkFramebuffer)); assert(demo->framebuffers); for (i = 0; i < demo->swapchainImageCount; i++) { attachments[0] = demo->buffers[i].view; err = vkCreateFramebuffer(demo->device, &fb_info, NULL, &demo->framebuffers[i]); assert(!err); } } static void demo_prepare(struct demo *demo) { VkResult U_ASSERT_ONLY err; const VkCommandPoolCreateInfo cmd_pool_info = { .sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO, .pNext = NULL, .queueFamilyIndex = demo->graphics_queue_node_index, .flags = VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT, }; err = vkCreateCommandPool(demo->device, &cmd_pool_info, NULL, &demo->cmd_pool); assert(!err); const VkCommandBufferAllocateInfo cmd = { .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO, .pNext = NULL, .commandPool = demo->cmd_pool, .level = VK_COMMAND_BUFFER_LEVEL_PRIMARY, .commandBufferCount = 1, }; err = vkAllocateCommandBuffers(demo->device, &cmd, &demo->draw_cmd); assert(!err); 
demo_prepare_buffers(demo);
    demo_prepare_depth(demo);
    demo_prepare_textures(demo);
    demo_prepare_vertices(demo);
    demo_prepare_descriptor_layout(demo);
    demo_prepare_render_pass(demo);
    demo_prepare_pipeline(demo);

    demo_prepare_descriptor_pool(demo);
    demo_prepare_descriptor_set(demo);

    demo_prepare_framebuffers(demo);

    demo->prepared = true;
}

#ifdef _WIN32
static void demo_run(struct demo *demo) {
    if (!demo->prepared)
        return;

    demo_draw(demo);

    if (demo->depthStencil > 0.99f)
        demo->depthIncrement = -0.001f;
    if (demo->depthStencil < 0.8f)
        demo->depthIncrement = 0.001f;

    demo->depthStencil += demo->depthIncrement;

    demo->curFrame++;
    if (demo->frameCount != INT32_MAX && demo->curFrame == demo->frameCount) {
        PostQuitMessage(validation_error);
    }
}

// On MS-Windows, make this a global, so it's available to WndProc()
struct demo demo;

// MS-Windows event handling function:
LRESULT CALLBACK WndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam) {
    char tmp_str[] = APP_LONG_NAME;

    switch (uMsg) {
    case WM_CREATE:
        return 0;
    case WM_CLOSE:
        PostQuitMessage(validation_error);
        return 0;
    case WM_PAINT:
        if (demo.prepared) {
            demo_run(&demo);
        }
        // Don't fall through to WM_SIZE: a paint message carries no size
        // information in lParam, so falling through would zero the window
        // dimensions before the demo is prepared.
        break;
    case WM_SIZE:
        // Resize the application to the new window size, except when
        // it was minimized. Vulkan doesn't support images or swapchains
        // with width=0 and height=0.
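        // Note on WM_SIZE (standard Win32 message contract): lParam packs
        // the new client size as two 16-bit words, so a sketch of the
        // extraction is:
        //
        //   width  = LOWORD(lParam); //  lParam        & 0xffff
        //   height = HIWORD(lParam); // (lParam >> 16) & 0xffff
        //
        // Each word must be masked/shifted on its own, since `>>` binds
        // tighter than `&` in C.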
if (wParam != SIZE_MINIMIZED) {
            demo.width = lParam & 0xffff;
            // Mask before shifting: `>>` binds tighter than `&`, so the
            // unparenthesized form would yield the low word again instead
            // of the height.
            demo.height = (lParam & 0xffff0000) >> 16;
            demo_resize(&demo);
        }
        break;
    default:
        break;
    }
    return (DefWindowProc(hWnd, uMsg, wParam, lParam));
}

static void demo_create_window(struct demo *demo) {
    WNDCLASSEX win_class;

    // Initialize the window class structure:
    win_class.cbSize = sizeof(WNDCLASSEX);
    win_class.style = CS_HREDRAW | CS_VREDRAW;
    win_class.lpfnWndProc = WndProc;
    win_class.cbClsExtra = 0;
    win_class.cbWndExtra = 0;
    win_class.hInstance = demo->connection; // hInstance
    win_class.hIcon = LoadIcon(NULL, IDI_APPLICATION);
    win_class.hCursor = LoadCursor(NULL, IDC_ARROW);
    win_class.hbrBackground = (HBRUSH)GetStockObject(WHITE_BRUSH);
    win_class.lpszMenuName = NULL;
    win_class.lpszClassName = demo->name;
    win_class.hIconSm = LoadIcon(NULL, IDI_WINLOGO);

    // Register window class:
    if (!RegisterClassEx(&win_class)) {
        // It didn't work, so try to give a useful error:
        printf("Unexpected error trying to start the application!\n");
        fflush(stdout);
        exit(1);
    }

    // Create window with the registered class:
    RECT wr = {0, 0, demo->width, demo->height};
    AdjustWindowRect(&wr, WS_OVERLAPPEDWINDOW, FALSE);
    demo->window = CreateWindowEx(0,
                                  demo->name,           // class name
                                  demo->name,           // app name
                                  WS_OVERLAPPEDWINDOW | // window style
                                      WS_VISIBLE | WS_SYSMENU,
                                  100, 100,           // x/y coords
                                  wr.right - wr.left, // width
                                  wr.bottom - wr.top, // height
                                  NULL,               // handle to parent
                                  NULL,               // handle to menu
                                  demo->connection,   // hInstance
                                  NULL);              // no extra parameters
    if (!demo->window) {
        // It didn't work, so try to give a useful error:
        printf("Cannot create a window in which to draw!\n");
        fflush(stdout);
        exit(1);
    }
}
#else  // _WIN32

static void demo_handle_event(struct demo *demo,
                              const xcb_generic_event_t *event) {
    switch (event->response_type & 0x7f) {
    case XCB_EXPOSE:
        demo_draw(demo);
        break;
    case XCB_CLIENT_MESSAGE:
        if ((*(xcb_client_message_event_t *)event).data.data32[0] ==
            (*demo->atom_wm_delete_window).atom) {
            demo->quit = true;
        }
        break;
    case
XCB_KEY_RELEASE: { const xcb_key_release_event_t *key = (const xcb_key_release_event_t *)event; if (key->detail == 0x9) demo->quit = true; } break; case XCB_DESTROY_NOTIFY: demo->quit = true; break; case XCB_CONFIGURE_NOTIFY: { const xcb_configure_notify_event_t *cfg = (const xcb_configure_notify_event_t *)event; if ((demo->width != cfg->width) || (demo->height != cfg->height)) { demo->width = cfg->width; demo->height = cfg->height; demo_resize(demo); } } break; default: break; } } static void demo_run(struct demo *demo) { xcb_flush(demo->connection); while (!demo->quit) { xcb_generic_event_t *event; event = xcb_poll_for_event(demo->connection); if (event) { demo_handle_event(demo, event); free(event); } demo_draw(demo); if (demo->depthStencil > 0.99f) demo->depthIncrement = -0.001f; if (demo->depthStencil < 0.8f) demo->depthIncrement = 0.001f; demo->depthStencil += demo->depthIncrement; // Wait for work to finish before updating MVP. vkDeviceWaitIdle(demo->device); demo->curFrame++; if (demo->frameCount != INT32_MAX && demo->curFrame == demo->frameCount) demo->quit = true; } } static void demo_create_window(struct demo *demo) { uint32_t value_mask, value_list[32]; demo->window = xcb_generate_id(demo->connection); value_mask = XCB_CW_BACK_PIXEL | XCB_CW_EVENT_MASK; value_list[0] = demo->screen->black_pixel; value_list[1] = XCB_EVENT_MASK_KEY_RELEASE | XCB_EVENT_MASK_EXPOSURE | XCB_EVENT_MASK_STRUCTURE_NOTIFY; xcb_create_window(demo->connection, XCB_COPY_FROM_PARENT, demo->window, demo->screen->root, 0, 0, demo->width, demo->height, 0, XCB_WINDOW_CLASS_INPUT_OUTPUT, demo->screen->root_visual, value_mask, value_list); /* Magic code that will send notification when window is destroyed */ xcb_intern_atom_cookie_t cookie = xcb_intern_atom(demo->connection, 1, 12, "WM_PROTOCOLS"); xcb_intern_atom_reply_t *reply = xcb_intern_atom_reply(demo->connection, cookie, 0); xcb_intern_atom_cookie_t cookie2 = xcb_intern_atom(demo->connection, 0, 16, "WM_DELETE_WINDOW"); 
demo->atom_wm_delete_window = xcb_intern_atom_reply(demo->connection, cookie2, 0); xcb_change_property(demo->connection, XCB_PROP_MODE_REPLACE, demo->window, (*reply).atom, 4, 32, 1, &(*demo->atom_wm_delete_window).atom); free(reply); xcb_map_window(demo->connection, demo->window); } #endif // _WIN32 /* * Return 1 (true) if all layer names specified in check_names * can be found in given layer properties. */ static VkBool32 demo_check_layers(uint32_t check_count, char **check_names, uint32_t layer_count, VkLayerProperties *layers) { for (uint32_t i = 0; i < check_count; i++) { VkBool32 found = 0; for (uint32_t j = 0; j < layer_count; j++) { if (!strcmp(check_names[i], layers[j].layerName)) { found = 1; break; } } if (!found) { fprintf(stderr, "Cannot find layer: %s\n", check_names[i]); return 0; } } return 1; } static void demo_init_vk(struct demo *demo) { VkResult err; uint32_t instance_extension_count = 0; uint32_t instance_layer_count = 0; uint32_t device_validation_layer_count = 0; char **instance_validation_layers = NULL; demo->enabled_extension_count = 0; demo->enabled_layer_count = 0; char *instance_validation_layers_alt1[] = { "VK_LAYER_LUNARG_standard_validation" }; char *instance_validation_layers_alt2[] = { "VK_LAYER_GOOGLE_threading", "VK_LAYER_LUNARG_parameter_validation", "VK_LAYER_LUNARG_device_limits", "VK_LAYER_LUNARG_object_tracker", "VK_LAYER_LUNARG_image", "VK_LAYER_LUNARG_core_validation", "VK_LAYER_LUNARG_swapchain", "VK_LAYER_GOOGLE_unique_objects" }; /* Look for validation layers */ VkBool32 validation_found = 0; if (demo->validate) { err = vkEnumerateInstanceLayerProperties(&instance_layer_count, NULL); assert(!err); instance_validation_layers = instance_validation_layers_alt1; if (instance_layer_count > 0) { VkLayerProperties *instance_layers = malloc(sizeof (VkLayerProperties) * instance_layer_count); err = vkEnumerateInstanceLayerProperties(&instance_layer_count, instance_layers); assert(!err); validation_found = demo_check_layers( 
ARRAY_SIZE(instance_validation_layers_alt1),
                instance_validation_layers, instance_layer_count,
                instance_layers);
            if (validation_found) {
                demo->enabled_layer_count =
                    ARRAY_SIZE(instance_validation_layers_alt1);
                demo->device_validation_layers[0] =
                    "VK_LAYER_LUNARG_standard_validation";
                device_validation_layer_count = 1;
            } else {
                // use alternative set of validation layers
                instance_validation_layers = instance_validation_layers_alt2;
                demo->enabled_layer_count =
                    ARRAY_SIZE(instance_validation_layers_alt2);
                validation_found = demo_check_layers(
                    ARRAY_SIZE(instance_validation_layers_alt2),
                    instance_validation_layers, instance_layer_count,
                    instance_layers);
                device_validation_layer_count =
                    ARRAY_SIZE(instance_validation_layers_alt2);
                for (uint32_t i = 0; i < device_validation_layer_count; i++) {
                    demo->device_validation_layers[i] =
                        instance_validation_layers[i];
                }
            }
            free(instance_layers);
        }

        if (!validation_found) {
            ERR_EXIT("vkEnumerateInstanceLayerProperties failed to find "
                     "required validation layer.\n\n"
                     "Please look at the Getting Started guide for additional "
                     "information.\n",
                     "vkCreateInstance Failure");
        }
    }

    /* Look for instance extensions */
    VkBool32 surfaceExtFound = 0;
    VkBool32 platformSurfaceExtFound = 0;
    memset(demo->extension_names, 0, sizeof(demo->extension_names));

    err = vkEnumerateInstanceExtensionProperties(
        NULL, &instance_extension_count, NULL);
    assert(!err);

    if (instance_extension_count > 0) {
        VkExtensionProperties *instance_extensions =
            malloc(sizeof(VkExtensionProperties) * instance_extension_count);
        err = vkEnumerateInstanceExtensionProperties(
            NULL, &instance_extension_count, instance_extensions);
        assert(!err);
        for (uint32_t i = 0; i < instance_extension_count; i++) {
            if (!strcmp(VK_KHR_SURFACE_EXTENSION_NAME,
                        instance_extensions[i].extensionName)) {
                surfaceExtFound = 1;
                demo->extension_names[demo->enabled_extension_count++] =
                    VK_KHR_SURFACE_EXTENSION_NAME;
            }
#ifdef _WIN32
            if (!strcmp(VK_KHR_WIN32_SURFACE_EXTENSION_NAME,
instance_extensions[i].extensionName)) { platformSurfaceExtFound = 1; demo->extension_names[demo->enabled_extension_count++] = VK_KHR_WIN32_SURFACE_EXTENSION_NAME; } #else // _WIN32 if (!strcmp(VK_KHR_XCB_SURFACE_EXTENSION_NAME, instance_extensions[i].extensionName)) { platformSurfaceExtFound = 1; demo->extension_names[demo->enabled_extension_count++] = VK_KHR_XCB_SURFACE_EXTENSION_NAME; } #endif // _WIN32 if (!strcmp(VK_EXT_DEBUG_REPORT_EXTENSION_NAME, instance_extensions[i].extensionName)) { if (demo->validate) { demo->extension_names[demo->enabled_extension_count++] = VK_EXT_DEBUG_REPORT_EXTENSION_NAME; } } assert(demo->enabled_extension_count < 64); } free(instance_extensions); } if (!surfaceExtFound) { ERR_EXIT("vkEnumerateInstanceExtensionProperties failed to find " "the " VK_KHR_SURFACE_EXTENSION_NAME " extension.\n\nDo you have a compatible " "Vulkan installable client driver (ICD) installed?\nPlease " "look at the Getting Started guide for additional " "information.\n", "vkCreateInstance Failure"); } if (!platformSurfaceExtFound) { #ifdef _WIN32 ERR_EXIT("vkEnumerateInstanceExtensionProperties failed to find " "the " VK_KHR_WIN32_SURFACE_EXTENSION_NAME " extension.\n\nDo you have a compatible " "Vulkan installable client driver (ICD) installed?\nPlease " "look at the Getting Started guide for additional " "information.\n", "vkCreateInstance Failure"); #else // _WIN32 ERR_EXIT("vkEnumerateInstanceExtensionProperties failed to find " "the " VK_KHR_XCB_SURFACE_EXTENSION_NAME " extension.\n\nDo you have a compatible " "Vulkan installable client driver (ICD) installed?\nPlease " "look at the Getting Started guide for additional " "information.\n", "vkCreateInstance Failure"); #endif // _WIN32 } const VkApplicationInfo app = { .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO, .pNext = NULL, .pApplicationName = APP_SHORT_NAME, .applicationVersion = 0, .pEngineName = APP_SHORT_NAME, .engineVersion = 0, .apiVersion = VK_API_VERSION_1_0, }; VkInstanceCreateInfo inst_info 
= { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO, .pNext = NULL, .pApplicationInfo = &app, .enabledLayerCount = demo->enabled_layer_count, .ppEnabledLayerNames = (const char *const *)instance_validation_layers, .enabledExtensionCount = demo->enabled_extension_count, .ppEnabledExtensionNames = (const char *const *)demo->extension_names, }; uint32_t gpu_count; err = vkCreateInstance(&inst_info, NULL, &demo->inst); if (err == VK_ERROR_INCOMPATIBLE_DRIVER) { ERR_EXIT("Cannot find a compatible Vulkan installable client driver " "(ICD).\n\nPlease look at the Getting Started guide for " "additional information.\n", "vkCreateInstance Failure"); } else if (err == VK_ERROR_EXTENSION_NOT_PRESENT) { ERR_EXIT("Cannot find a specified extension library" ".\nMake sure your layers path is set appropriately\n", "vkCreateInstance Failure"); } else if (err) { ERR_EXIT("vkCreateInstance failed.\n\nDo you have a compatible Vulkan " "installable client driver (ICD) installed?\nPlease look at " "the Getting Started guide for additional information.\n", "vkCreateInstance Failure"); } /* Make initial call to query gpu_count, then second call for gpu info*/ err = vkEnumeratePhysicalDevices(demo->inst, &gpu_count, NULL); assert(!err && gpu_count > 0); if (gpu_count > 0) { VkPhysicalDevice *physical_devices = malloc(sizeof(VkPhysicalDevice) * gpu_count); err = vkEnumeratePhysicalDevices(demo->inst, &gpu_count, physical_devices); assert(!err); /* For tri demo we just grab the first physical device */ demo->gpu = physical_devices[0]; free(physical_devices); } else { ERR_EXIT("vkEnumeratePhysicalDevices reported zero accessible devices." 
"\n\nDo you have a compatible Vulkan installable client" " driver (ICD) installed?\nPlease look at the Getting Started" " guide for additional information.\n", "vkEnumeratePhysicalDevices Failure"); } /* Look for validation layers */ if (demo->validate) { validation_found = 0; demo->enabled_layer_count = 0; uint32_t device_layer_count = 0; err = vkEnumerateDeviceLayerProperties(demo->gpu, &device_layer_count, NULL); assert(!err); if (device_layer_count > 0) { VkLayerProperties *device_layers = malloc(sizeof (VkLayerProperties) * device_layer_count); err = vkEnumerateDeviceLayerProperties(demo->gpu, &device_layer_count, device_layers); assert(!err); validation_found = demo_check_layers(device_validation_layer_count, demo->device_validation_layers, device_layer_count, device_layers); demo->enabled_layer_count = device_validation_layer_count; free(device_layers); } if (!validation_found) { ERR_EXIT("vkEnumerateDeviceLayerProperties failed to find " "a required validation layer.\n\n" "Please look at the Getting Started guide for additional " "information.\n", "vkCreateDevice Failure"); } } /* Look for device extensions */ uint32_t device_extension_count = 0; VkBool32 swapchainExtFound = 0; demo->enabled_extension_count = 0; memset(demo->extension_names, 0, sizeof(demo->extension_names)); err = vkEnumerateDeviceExtensionProperties(demo->gpu, NULL, &device_extension_count, NULL); assert(!err); if (device_extension_count > 0) { VkExtensionProperties *device_extensions = malloc(sizeof(VkExtensionProperties) * device_extension_count); err = vkEnumerateDeviceExtensionProperties( demo->gpu, NULL, &device_extension_count, device_extensions); assert(!err); for (uint32_t i = 0; i < device_extension_count; i++) { if (!strcmp(VK_KHR_SWAPCHAIN_EXTENSION_NAME, device_extensions[i].extensionName)) { swapchainExtFound = 1; demo->extension_names[demo->enabled_extension_count++] = VK_KHR_SWAPCHAIN_EXTENSION_NAME; } assert(demo->enabled_extension_count < 64); } free(device_extensions); } 
if (!swapchainExtFound) { ERR_EXIT("vkEnumerateDeviceExtensionProperties failed to find " "the " VK_KHR_SWAPCHAIN_EXTENSION_NAME " extension.\n\nDo you have a compatible " "Vulkan installable client driver (ICD) installed?\nPlease " "look at the Getting Started guide for additional " "information.\n", "vkCreateInstance Failure"); } if (demo->validate) { demo->CreateDebugReportCallback = (PFN_vkCreateDebugReportCallbackEXT)vkGetInstanceProcAddr( demo->inst, "vkCreateDebugReportCallbackEXT"); demo->DestroyDebugReportCallback = (PFN_vkDestroyDebugReportCallbackEXT)vkGetInstanceProcAddr( demo->inst, "vkDestroyDebugReportCallbackEXT"); if (!demo->CreateDebugReportCallback) { ERR_EXIT( "GetProcAddr: Unable to find vkCreateDebugReportCallbackEXT\n", "vkGetProcAddr Failure"); } if (!demo->DestroyDebugReportCallback) { ERR_EXIT( "GetProcAddr: Unable to find vkDestroyDebugReportCallbackEXT\n", "vkGetProcAddr Failure"); } demo->DebugReportMessage = (PFN_vkDebugReportMessageEXT)vkGetInstanceProcAddr( demo->inst, "vkDebugReportMessageEXT"); if (!demo->DebugReportMessage) { ERR_EXIT("GetProcAddr: Unable to find vkDebugReportMessageEXT\n", "vkGetProcAddr Failure"); } VkDebugReportCallbackCreateInfoEXT dbgCreateInfo; dbgCreateInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT; dbgCreateInfo.flags = VK_DEBUG_REPORT_ERROR_BIT_EXT | VK_DEBUG_REPORT_WARNING_BIT_EXT; dbgCreateInfo.pfnCallback = demo->use_break ? 
BreakCallback : dbgFunc; dbgCreateInfo.pUserData = NULL; dbgCreateInfo.pNext = NULL; err = demo->CreateDebugReportCallback(demo->inst, &dbgCreateInfo, NULL, &demo->msg_callback); switch (err) { case VK_SUCCESS: break; case VK_ERROR_OUT_OF_HOST_MEMORY: ERR_EXIT("CreateDebugReportCallback: out of host memory\n", "CreateDebugReportCallback Failure"); break; default: ERR_EXIT("CreateDebugReportCallback: unknown failure\n", "CreateDebugReportCallback Failure"); break; } } // Having these GIPA queries of device extension entry points both // BEFORE and AFTER vkCreateDevice is a good test for the loader GET_INSTANCE_PROC_ADDR(demo->inst, GetPhysicalDeviceSurfaceCapabilitiesKHR); GET_INSTANCE_PROC_ADDR(demo->inst, GetPhysicalDeviceSurfaceFormatsKHR); GET_INSTANCE_PROC_ADDR(demo->inst, GetPhysicalDeviceSurfacePresentModesKHR); GET_INSTANCE_PROC_ADDR(demo->inst, GetPhysicalDeviceSurfaceSupportKHR); GET_INSTANCE_PROC_ADDR(demo->inst, CreateSwapchainKHR); GET_INSTANCE_PROC_ADDR(demo->inst, DestroySwapchainKHR); GET_INSTANCE_PROC_ADDR(demo->inst, GetSwapchainImagesKHR); GET_INSTANCE_PROC_ADDR(demo->inst, AcquireNextImageKHR); GET_INSTANCE_PROC_ADDR(demo->inst, QueuePresentKHR); vkGetPhysicalDeviceProperties(demo->gpu, &demo->gpu_props); // Query with NULL data to get count vkGetPhysicalDeviceQueueFamilyProperties(demo->gpu, &demo->queue_count, NULL); demo->queue_props = (VkQueueFamilyProperties *)malloc( demo->queue_count * sizeof(VkQueueFamilyProperties)); vkGetPhysicalDeviceQueueFamilyProperties(demo->gpu, &demo->queue_count, demo->queue_props); assert(demo->queue_count >= 1); VkPhysicalDeviceFeatures features; vkGetPhysicalDeviceFeatures(demo->gpu, &features); if (!features.shaderClipDistance) { ERR_EXIT("Required device feature `shaderClipDistance` not supported\n", "GetPhysicalDeviceFeatures failure"); } // Graphics queue and MemMgr queue can be separate. 
// TODO: Add support for separate queues, including synchronization, // and appropriate tracking for QueueSubmit } static void demo_init_device(struct demo *demo) { VkResult U_ASSERT_ONLY err; float queue_priorities[1] = {0.0}; const VkDeviceQueueCreateInfo queue = { .sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO, .pNext = NULL, .queueFamilyIndex = demo->graphics_queue_node_index, .queueCount = 1, .pQueuePriorities = queue_priorities}; VkPhysicalDeviceFeatures features = { .shaderClipDistance = VK_TRUE, }; VkDeviceCreateInfo device = { .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO, .pNext = NULL, .queueCreateInfoCount = 1, .pQueueCreateInfos = &queue, .enabledLayerCount = demo->enabled_layer_count, .ppEnabledLayerNames = (const char *const *)((demo->validate) ? demo->device_validation_layers : NULL), .enabledExtensionCount = demo->enabled_extension_count, .ppEnabledExtensionNames = (const char *const *)demo->extension_names, .pEnabledFeatures = &features, }; err = vkCreateDevice(demo->gpu, &device, NULL, &demo->device); assert(!err); } static void demo_init_vk_swapchain(struct demo *demo) { VkResult U_ASSERT_ONLY err; uint32_t i; // Create a WSI surface for the window: #ifdef _WIN32 VkWin32SurfaceCreateInfoKHR createInfo; createInfo.sType = VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR; createInfo.pNext = NULL; createInfo.flags = 0; createInfo.hinstance = demo->connection; createInfo.hwnd = demo->window; err = vkCreateWin32SurfaceKHR(demo->inst, &createInfo, NULL, &demo->surface); #else // _WIN32 VkXcbSurfaceCreateInfoKHR createInfo; createInfo.sType = VK_STRUCTURE_TYPE_XCB_SURFACE_CREATE_INFO_KHR; createInfo.pNext = NULL; createInfo.flags = 0; createInfo.connection = demo->connection; createInfo.window = demo->window; err = vkCreateXcbSurfaceKHR(demo->inst, &createInfo, NULL, &demo->surface); #endif // _WIN32 // Iterate over each queue to learn whether it supports presenting: VkBool32 *supportsPresent = (VkBool32 *)malloc(demo->queue_count * 
sizeof(VkBool32));
    for (i = 0; i < demo->queue_count; i++) {
        demo->fpGetPhysicalDeviceSurfaceSupportKHR(demo->gpu, i, demo->surface,
                                                   &supportsPresent[i]);
    }

    // Search for a graphics and a present queue in the array of queue
    // families, try to find one that supports both
    uint32_t graphicsQueueNodeIndex = UINT32_MAX;
    uint32_t presentQueueNodeIndex = UINT32_MAX;
    for (i = 0; i < demo->queue_count; i++) {
        if ((demo->queue_props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) != 0) {
            if (graphicsQueueNodeIndex == UINT32_MAX) {
                graphicsQueueNodeIndex = i;
            }
            if (supportsPresent[i] == VK_TRUE) {
                graphicsQueueNodeIndex = i;
                presentQueueNodeIndex = i;
                break;
            }
        }
    }
    if (presentQueueNodeIndex == UINT32_MAX) {
        // If we didn't find a queue that supports both graphics and present,
        // then find a separate present queue.
        for (uint32_t i = 0; i < demo->queue_count; ++i) {
            if (supportsPresent[i] == VK_TRUE) {
                presentQueueNodeIndex = i;
                break;
            }
        }
    }
    free(supportsPresent);

    // Generate error if could not find both a graphics and a present queue
    if (graphicsQueueNodeIndex == UINT32_MAX ||
        presentQueueNodeIndex == UINT32_MAX) {
        ERR_EXIT("Could not find a graphics and a present queue\n",
                 "Swapchain Initialization Failure");
    }

    // TODO: Add support for separate queues, including presentation,
    // synchronization, and appropriate tracking for QueueSubmit.
    // NOTE: While it is possible for an application to use separate graphics
    // and present queues, this demo program assumes it is only using
    // one:
    if (graphicsQueueNodeIndex != presentQueueNodeIndex) {
        ERR_EXIT("Could not find a common graphics and a present queue\n",
                 "Swapchain Initialization Failure");
    }

    demo->graphics_queue_node_index = graphicsQueueNodeIndex;

    demo_init_device(demo);

    vkGetDeviceQueue(demo->device, demo->graphics_queue_node_index, 0,
                     &demo->queue);

    // Get the list of VkFormat's that are supported:
    uint32_t formatCount;
    err = demo->fpGetPhysicalDeviceSurfaceFormatsKHR(demo->gpu, demo->surface,
                                                     &formatCount, NULL);
    assert(!err);
    VkSurfaceFormatKHR *surfFormats =
        (VkSurfaceFormatKHR *)malloc(formatCount * sizeof(VkSurfaceFormatKHR));
    err = demo->fpGetPhysicalDeviceSurfaceFormatsKHR(demo->gpu, demo->surface,
                                                     &formatCount, surfFormats);
    assert(!err);
    // If the format list includes just one entry of VK_FORMAT_UNDEFINED,
    // the surface has no preferred format.  Otherwise, at least one
    // supported format will be returned.
    if (formatCount == 1 && surfFormats[0].format == VK_FORMAT_UNDEFINED) {
        demo->format = VK_FORMAT_B8G8R8A8_UNORM;
    } else {
        assert(formatCount >= 1);
        demo->format = surfFormats[0].format;
    }
    demo->color_space = surfFormats[0].colorSpace;

    demo->quit = false;
    demo->curFrame = 0;

    // Get Memory information and properties
    vkGetPhysicalDeviceMemoryProperties(demo->gpu, &demo->memory_properties);
}

static void demo_init_connection(struct demo *demo) {
#ifndef _WIN32
    const xcb_setup_t *setup;
    xcb_screen_iterator_t iter;
    int scr;

    demo->connection = xcb_connect(NULL, &scr);
    if (demo->connection == NULL) {
        printf("Cannot find a compatible Vulkan installable client driver "
               "(ICD).\nExiting ...\n");
        fflush(stdout);
        exit(1);
    }

    setup = xcb_get_setup(demo->connection);
    iter = xcb_setup_roots_iterator(setup);
    while (scr-- > 0)
        xcb_screen_next(&iter);

    demo->screen = iter.data;
#endif // _WIN32
}

static void demo_init(struct demo *demo, const int argc, const char *argv[]) {
    memset(demo, 0, sizeof(*demo));
    demo->frameCount = INT32_MAX;

    for (int i = 1; i < argc; i++) {
        if (strcmp(argv[i], "--use_staging") == 0) {
            demo->use_staging_buffer = true;
            continue;
        }
        if (strcmp(argv[i], "--break") == 0) {
            demo->use_break = true;
            continue;
        }
        if (strcmp(argv[i], "--validate") == 0) {
            demo->validate = true;
            continue;
        }
        if (strcmp(argv[i], "--c") == 0 && demo->frameCount == INT32_MAX &&
            i < argc - 1 && sscanf(argv[i + 1], "%d", &demo->frameCount) == 1 &&
            demo->frameCount >= 0) {
            i++;
            continue;
        }

        fprintf(stderr, "Usage:\n  %s [--use_staging] [--validate] [--break] "
                        "[--c <framecount>]\n",
                APP_SHORT_NAME);
        fflush(stderr);
        exit(1);
    }

    demo_init_connection(demo);
    demo_init_vk(demo);

    demo->width = 300;
    demo->height = 300;
    demo->depthStencil = 1.0;
    demo->depthIncrement = -0.01f;
}

static void demo_cleanup(struct demo *demo) {
    uint32_t i;

    demo->prepared = false;

    for (i = 0; i < demo->swapchainImageCount; i++) {
        vkDestroyFramebuffer(demo->device, demo->framebuffers[i], NULL);
    }
    free(demo->framebuffers);
    vkDestroyDescriptorPool(demo->device, demo->desc_pool, NULL);

    if (demo->setup_cmd) {
        vkFreeCommandBuffers(demo->device, demo->cmd_pool, 1, &demo->setup_cmd);
    }
    vkFreeCommandBuffers(demo->device, demo->cmd_pool, 1, &demo->draw_cmd);
    vkDestroyCommandPool(demo->device, demo->cmd_pool, NULL);

    vkDestroyPipeline(demo->device, demo->pipeline, NULL);
    vkDestroyRenderPass(demo->device, demo->render_pass, NULL);
    vkDestroyPipelineLayout(demo->device, demo->pipeline_layout, NULL);
    vkDestroyDescriptorSetLayout(demo->device, demo->desc_layout, NULL);

    vkDestroyBuffer(demo->device, demo->vertices.buf, NULL);
    vkFreeMemory(demo->device, demo->vertices.mem, NULL);

    for (i = 0; i < DEMO_TEXTURE_COUNT; i++) {
        vkDestroyImageView(demo->device, demo->textures[i].view, NULL);
        vkDestroyImage(demo->device, demo->textures[i].image, NULL);
        vkFreeMemory(demo->device, demo->textures[i].mem, NULL);
        vkDestroySampler(demo->device, demo->textures[i].sampler, NULL);
    }

    for (i = 0; i < demo->swapchainImageCount; i++) {
        vkDestroyImageView(demo->device, demo->buffers[i].view, NULL);
    }

    vkDestroyImageView(demo->device, demo->depth.view, NULL);
    vkDestroyImage(demo->device, demo->depth.image, NULL);
    vkFreeMemory(demo->device, demo->depth.mem, NULL);

    demo->fpDestroySwapchainKHR(demo->device, demo->swapchain, NULL);
    free(demo->buffers);

    vkDestroyDevice(demo->device, NULL);
    if (demo->validate) {
        demo->DestroyDebugReportCallback(demo->inst, demo->msg_callback, NULL);
    }
    vkDestroySurfaceKHR(demo->inst, demo->surface, NULL);
    vkDestroyInstance(demo->inst, NULL);

    free(demo->queue_props);

#ifndef _WIN32
    xcb_destroy_window(demo->connection, demo->window);
    xcb_disconnect(demo->connection);
    free(demo->atom_wm_delete_window);
#endif // _WIN32
}

static void demo_resize(struct demo *demo) {
    uint32_t i;

    // Don't react to resize until after first initialization.
    if (!demo->prepared) {
        return;
    }
    // In order to properly resize the window, we must re-create the swapchain
    // AND redo the command buffers, etc.
    //
    // First, perform part of the demo_cleanup() function:
    demo->prepared = false;
    for (i = 0; i < demo->swapchainImageCount; i++) {
        vkDestroyFramebuffer(demo->device, demo->framebuffers[i], NULL);
    }
    free(demo->framebuffers);
    vkDestroyDescriptorPool(demo->device, demo->desc_pool, NULL);

    if (demo->setup_cmd) {
        vkFreeCommandBuffers(demo->device, demo->cmd_pool, 1, &demo->setup_cmd);
    }
    vkFreeCommandBuffers(demo->device, demo->cmd_pool, 1, &demo->draw_cmd);
    vkDestroyCommandPool(demo->device, demo->cmd_pool, NULL);

    vkDestroyPipeline(demo->device, demo->pipeline, NULL);
    vkDestroyRenderPass(demo->device, demo->render_pass, NULL);
    vkDestroyPipelineLayout(demo->device, demo->pipeline_layout, NULL);
    vkDestroyDescriptorSetLayout(demo->device, demo->desc_layout, NULL);

    vkDestroyBuffer(demo->device, demo->vertices.buf, NULL);
    vkFreeMemory(demo->device, demo->vertices.mem, NULL);

    for (i = 0; i < DEMO_TEXTURE_COUNT; i++) {
        vkDestroyImageView(demo->device, demo->textures[i].view, NULL);
        vkDestroyImage(demo->device, demo->textures[i].image, NULL);
        vkFreeMemory(demo->device, demo->textures[i].mem, NULL);
        vkDestroySampler(demo->device, demo->textures[i].sampler, NULL);
    }

    for (i = 0; i < demo->swapchainImageCount; i++) {
        vkDestroyImageView(demo->device, demo->buffers[i].view, NULL);
    }

    vkDestroyImageView(demo->device, demo->depth.view, NULL);
    vkDestroyImage(demo->device, demo->depth.image, NULL);
    vkFreeMemory(demo->device, demo->depth.mem, NULL);

    free(demo->buffers);

    // Second, re-perform the demo_prepare() function, which will re-create the
    // swapchain:
    demo_prepare(demo);
}

#ifdef _WIN32
// Include header required for parsing the command line options.
#include <shellapi.h>

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                   LPSTR pCmdLine, int nCmdShow) {
    MSG msg;   // message
    bool done; // flag saying when app is complete
    int argc;
    char **argv;

    // Use the CommandLine functions to get the command line arguments.
    // Unfortunately, Microsoft outputs this information as wide characters
    // for Unicode, and we simply want the Ascii version to be compatible
    // with the non-Windows side.  So, we have to convert the information to
    // Ascii character strings.
    LPWSTR *commandLineArgs = CommandLineToArgvW(GetCommandLineW(), &argc);
    if (NULL == commandLineArgs) {
        argc = 0;
    }

    if (argc > 0) {
        argv = (char **)malloc(sizeof(char *) * argc);
        if (argv == NULL) {
            argc = 0;
        } else {
            for (int iii = 0; iii < argc; iii++) {
                size_t wideCharLen = wcslen(commandLineArgs[iii]);
                size_t numConverted = 0;

                argv[iii] = (char *)malloc(sizeof(char) * (wideCharLen + 1));
                if (argv[iii] != NULL) {
                    wcstombs_s(&numConverted, argv[iii], wideCharLen + 1,
                               commandLineArgs[iii], wideCharLen + 1);
                }
            }
        }
    } else {
        argv = NULL;
    }

    demo_init(&demo, argc, argv);

    // Free up the items we had to allocate for the command line arguments.
    if (argc > 0 && argv != NULL) {
        for (int iii = 0; iii < argc; iii++) {
            if (argv[iii] != NULL) {
                free(argv[iii]);
            }
        }
        free(argv);
    }

    demo.connection = hInstance;
    strncpy(demo.name, "tri", APP_NAME_STR_LEN);
    demo_create_window(&demo);
    demo_init_vk_swapchain(&demo);

    demo_prepare(&demo);

    done = false; // initialize loop condition variable
    /* main message loop*/
    while (!done) {
        PeekMessage(&msg, NULL, 0, 0, PM_REMOVE);
        if (msg.message == WM_QUIT) // check for a quit message
        {
            done = true; // if found, quit app
        } else {
            /* Translate and dispatch to event queue*/
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        RedrawWindow(demo.window, NULL, NULL, RDW_INTERNALPAINT);
    }

    demo_cleanup(&demo);

    return (int)msg.wParam;
}
#else  // _WIN32
int main(const int argc, const char *argv[]) {
    struct demo demo;

    demo_init(&demo, argc, argv);
    demo_create_window(&demo);
    demo_init_vk_swapchain(&demo);

    demo_prepare(&demo);
    demo_run(&demo);

    demo_cleanup(&demo);

    return validation_error;
}
#endif // _WIN32
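The WinMain block above converts each wide-character argument from CommandLineToArgvW to a narrow string with the Windows-specific wcstombs_s. A portable sketch of the same per-argument conversion, using standard wcstombs instead — the helper name narrow_arg is ours, not from the demo, and it assumes the argument is representable in the current (default "C") locale, which holds for the plain ASCII options the demo accepts:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <wchar.h>

/* Convert one wide-character argument to a heap-allocated narrow string.
 * Allocates one byte per wide character plus a NUL, which is sufficient
 * for single-byte encodings such as ASCII; returns NULL on allocation
 * failure or if a character cannot be represented. */
static char *narrow_arg(const wchar_t *wide) {
    size_t wide_len = wcslen(wide);
    char *out = malloc(wide_len + 1);
    if (out == NULL)
        return NULL;
    size_t converted = wcstombs(out, wide, wide_len + 1);
    if (converted == (size_t)-1) { /* unrepresentable character */
        free(out);
        return NULL;
    }
    return out;
}
```

Unlike the demo's loop, this sketch reports conversion failure instead of silently leaving the buffer uninitialized, which is the main reason wcstombs_s (with its explicit destination size) exists on Windows.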
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/tri.frag000066400000000000000000000030461270147354000234730ustar00rootroot00000000000000/* * Copyright (c) 2015-2016 The Khronos Group Inc. * Copyright (c) 2015-2016 Valve Corporation * Copyright (c) 2015-2016 LunarG, Inc. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and/or associated documentation files (the "Materials"), to * deal in the Materials without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Materials, and to permit persons to whom the Materials are * furnished to do so, subject to the following conditions: * * The above copyright notice(s) and this permission notice shall be included in * all copies or substantial portions of the Materials. * * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. * * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE * USE OR OTHER DEALINGS IN THE MATERIALS. 
*/ /* * Fragment shader for tri demo */ #version 400 #extension GL_ARB_separate_shader_objects : enable #extension GL_ARB_shading_language_420pack : enable layout (binding = 0) uniform sampler2D tex; layout (location = 0) in vec2 texcoord; layout (location = 0) out vec4 uFragColor; void main() { uFragColor = texture(tex, texcoord); } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/tri.vcxproj.user000077500000000000000000000011671270147354000252310ustar00rootroot00000000000000 VK_LAYER_PATH=..\layers\Debug WindowsLocalDebugger VK_LAYER_PATH=..\layers\Release WindowsLocalDebugger Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/tri.vert000066400000000000000000000030341270147354000235310ustar00rootroot00000000000000/* * Copyright (c) 2015-2016 The Khronos Group Inc. * Copyright (c) 2015-2016 Valve Corporation * Copyright (c) 2015-2016 LunarG, Inc. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and/or associated documentation files (the "Materials"), to * deal in the Materials without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Materials, and to permit persons to whom the Materials are * furnished to do so, subject to the following conditions: * * The above copyright notice(s) and this permission notice shall be included in * all copies or substantial portions of the Materials. * * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. * * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE * USE OR OTHER DEALINGS IN THE MATERIALS. */ /* * Vertex shader used by tri demo. 
*/ #version 400 #extension GL_ARB_separate_shader_objects : enable #extension GL_ARB_shading_language_420pack : enable layout (location = 0) in vec4 pos; layout (location = 1) in vec2 attr; layout (location = 0) out vec2 texcoord; void main() { texcoord = attr; gl_Position = pos; } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/vulkaninfo.c000066400000000000000000001561561270147354000243670ustar00rootroot00000000000000/* * Copyright (c) 2015-2016 The Khronos Group Inc. * Copyright (c) 2015-2016 Valve Corporation * Copyright (c) 2015-2016 LunarG, Inc. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and/or associated documentation files (the "Materials"), to * deal in the Materials without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Materials, and to permit persons to whom the Materials are * furnished to do so, subject to the following conditions: * * The above copyright notice(s) and this permission notice shall be included in * all copies or substantial portions of the Materials. * * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. * * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE * USE OR OTHER DEALINGS IN THE MATERIALS. 
 *
 * Author: Courtney Goeltzenleuchter
 * Author: David Pinedo
 * Author: Mark Lobodzinski
 */

#include <assert.h>
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#ifdef _WIN32
#include <fcntl.h>
#include <io.h>
#endif // _WIN32

#include <vulkan/vulkan.h>

#define ERR(err)                                                               \
    printf("%s:%d: failed with %s\n", __FILE__, __LINE__,                      \
           vk_result_string(err));

#ifdef _WIN32

#define snprintf _snprintf

// Returns nonzero if the console is used only for this process. Will return
// zero if another process (such as cmd.exe) is also attached.
static int ConsoleIsExclusive(void) {
    DWORD pids[2];
    DWORD num_pids = GetConsoleProcessList(pids, ARRAYSIZE(pids));
    return num_pids <= 1;
}

#define WAIT_FOR_CONSOLE_DESTROY                                               \
    do {                                                                       \
        if (ConsoleIsExclusive())                                              \
            Sleep(INFINITE);                                                   \
    } while (0)
#else
#define WAIT_FOR_CONSOLE_DESTROY
#endif

#define ERR_EXIT(err)                                                          \
    do {                                                                       \
        ERR(err);                                                              \
        fflush(stdout);                                                        \
        WAIT_FOR_CONSOLE_DESTROY;                                              \
        exit(-1);                                                              \
    } while (0)

#if defined(NDEBUG) && defined(__GNUC__)
#define U_ASSERT_ONLY __attribute__((unused))
#else
#define U_ASSERT_ONLY
#endif

#define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))

#define MAX_GPUS 8

#define MAX_QUEUE_TYPES 5
#define APP_SHORT_NAME "vulkaninfo"

struct app_gpu;

struct app_dev {
    struct app_gpu *gpu; /* point back to the GPU */

    VkDevice obj;

    VkFormatProperties format_props[VK_FORMAT_RANGE_SIZE];
};

struct layer_extension_list {
    VkLayerProperties layer_properties;
    uint32_t extension_count;
    VkExtensionProperties *extension_properties;
};

struct app_instance {
    VkInstance instance;
    uint32_t global_layer_count;
    struct layer_extension_list *global_layers;
    uint32_t global_extension_count;
    VkExtensionProperties *global_extensions;
};

struct app_gpu {
    uint32_t id;
    VkPhysicalDevice obj;

    VkPhysicalDeviceProperties props;

    uint32_t queue_count;
    VkQueueFamilyProperties *queue_props;
    VkDeviceQueueCreateInfo *queue_reqs;

    VkPhysicalDeviceMemoryProperties memory_props;
    VkPhysicalDeviceFeatures features;
    VkPhysicalDevice limits;

    uint32_t device_layer_count;
    struct layer_extension_list *device_layers;

    uint32_t
device_extension_count; VkExtensionProperties *device_extensions; struct app_dev dev; }; static VKAPI_ATTR VkBool32 VKAPI_CALL dbg_callback(VkFlags msgFlags, VkDebugReportObjectTypeEXT objType, uint64_t srcObject, size_t location, int32_t msgCode, const char *pLayerPrefix, const char *pMsg, void *pUserData) { char *message = (char *)malloc(strlen(pMsg) + 100); assert(message); if (msgFlags & VK_DEBUG_REPORT_ERROR_BIT_EXT) { sprintf(message, "ERROR: [%s] Code %d : %s", pLayerPrefix, msgCode, pMsg); } else if (msgFlags & VK_DEBUG_REPORT_WARNING_BIT_EXT) { sprintf(message, "WARNING: [%s] Code %d : %s", pLayerPrefix, msgCode, pMsg); } else if (msgFlags & VK_DEBUG_REPORT_INFORMATION_BIT_EXT) { sprintf(message, "INFO: [%s] Code %d : %s", pLayerPrefix, msgCode, pMsg); } else if (msgFlags & VK_DEBUG_REPORT_DEBUG_BIT_EXT) { sprintf(message, "DEBUG: [%s] Code %d : %s", pLayerPrefix, msgCode, pMsg); } printf("%s\n", message); fflush(stdout); free(message); /* * false indicates that layer should not bail-out of an * API call that had validation failures. This may mean that the * app dies inside the driver due to invalid parameter(s). * That's what would happen without validation layers, so we'll * keep that behavior here. 
*/ return false; } static const char *vk_result_string(VkResult err) { switch (err) { #define STR(r) \ case r: \ return #r STR(VK_SUCCESS); STR(VK_NOT_READY); STR(VK_TIMEOUT); STR(VK_EVENT_SET); STR(VK_EVENT_RESET); STR(VK_ERROR_INITIALIZATION_FAILED); STR(VK_ERROR_OUT_OF_HOST_MEMORY); STR(VK_ERROR_OUT_OF_DEVICE_MEMORY); STR(VK_ERROR_DEVICE_LOST); STR(VK_ERROR_LAYER_NOT_PRESENT); STR(VK_ERROR_EXTENSION_NOT_PRESENT); STR(VK_ERROR_MEMORY_MAP_FAILED); STR(VK_ERROR_INCOMPATIBLE_DRIVER); #undef STR default: return "UNKNOWN_RESULT"; } } static const char *vk_physical_device_type_string(VkPhysicalDeviceType type) { switch (type) { #define STR(r) \ case VK_PHYSICAL_DEVICE_TYPE_##r: \ return #r STR(OTHER); STR(INTEGRATED_GPU); STR(DISCRETE_GPU); STR(VIRTUAL_GPU); #undef STR default: return "UNKNOWN_DEVICE"; } } static const char *vk_format_string(VkFormat fmt) { switch (fmt) { #define STR(r) \ case VK_FORMAT_##r: \ return #r STR(UNDEFINED); STR(R4G4_UNORM_PACK8); STR(R4G4B4A4_UNORM_PACK16); STR(B4G4R4A4_UNORM_PACK16); STR(R5G6B5_UNORM_PACK16); STR(B5G6R5_UNORM_PACK16); STR(R5G5B5A1_UNORM_PACK16); STR(B5G5R5A1_UNORM_PACK16); STR(A1R5G5B5_UNORM_PACK16); STR(R8_UNORM); STR(R8_SNORM); STR(R8_USCALED); STR(R8_SSCALED); STR(R8_UINT); STR(R8_SINT); STR(R8_SRGB); STR(R8G8_UNORM); STR(R8G8_SNORM); STR(R8G8_USCALED); STR(R8G8_SSCALED); STR(R8G8_UINT); STR(R8G8_SINT); STR(R8G8_SRGB); STR(R8G8B8_UNORM); STR(R8G8B8_SNORM); STR(R8G8B8_USCALED); STR(R8G8B8_SSCALED); STR(R8G8B8_UINT); STR(R8G8B8_SINT); STR(R8G8B8_SRGB); STR(B8G8R8_UNORM); STR(B8G8R8_SNORM); STR(B8G8R8_USCALED); STR(B8G8R8_SSCALED); STR(B8G8R8_UINT); STR(B8G8R8_SINT); STR(B8G8R8_SRGB); STR(R8G8B8A8_UNORM); STR(R8G8B8A8_SNORM); STR(R8G8B8A8_USCALED); STR(R8G8B8A8_SSCALED); STR(R8G8B8A8_UINT); STR(R8G8B8A8_SINT); STR(R8G8B8A8_SRGB); STR(B8G8R8A8_UNORM); STR(B8G8R8A8_SNORM); STR(B8G8R8A8_USCALED); STR(B8G8R8A8_SSCALED); STR(B8G8R8A8_UINT); STR(B8G8R8A8_SINT); STR(B8G8R8A8_SRGB); STR(A8B8G8R8_UNORM_PACK32); 
STR(A8B8G8R8_SNORM_PACK32); STR(A8B8G8R8_USCALED_PACK32); STR(A8B8G8R8_SSCALED_PACK32); STR(A8B8G8R8_UINT_PACK32); STR(A8B8G8R8_SINT_PACK32); STR(A8B8G8R8_SRGB_PACK32); STR(A2R10G10B10_UNORM_PACK32); STR(A2R10G10B10_SNORM_PACK32); STR(A2R10G10B10_USCALED_PACK32); STR(A2R10G10B10_SSCALED_PACK32); STR(A2R10G10B10_UINT_PACK32); STR(A2R10G10B10_SINT_PACK32); STR(A2B10G10R10_UNORM_PACK32); STR(A2B10G10R10_SNORM_PACK32); STR(A2B10G10R10_USCALED_PACK32); STR(A2B10G10R10_SSCALED_PACK32); STR(A2B10G10R10_UINT_PACK32); STR(A2B10G10R10_SINT_PACK32); STR(R16_UNORM); STR(R16_SNORM); STR(R16_USCALED); STR(R16_SSCALED); STR(R16_UINT); STR(R16_SINT); STR(R16_SFLOAT); STR(R16G16_UNORM); STR(R16G16_SNORM); STR(R16G16_USCALED); STR(R16G16_SSCALED); STR(R16G16_UINT); STR(R16G16_SINT); STR(R16G16_SFLOAT); STR(R16G16B16_UNORM); STR(R16G16B16_SNORM); STR(R16G16B16_USCALED); STR(R16G16B16_SSCALED); STR(R16G16B16_UINT); STR(R16G16B16_SINT); STR(R16G16B16_SFLOAT); STR(R16G16B16A16_UNORM); STR(R16G16B16A16_SNORM); STR(R16G16B16A16_USCALED); STR(R16G16B16A16_SSCALED); STR(R16G16B16A16_UINT); STR(R16G16B16A16_SINT); STR(R16G16B16A16_SFLOAT); STR(R32_UINT); STR(R32_SINT); STR(R32_SFLOAT); STR(R32G32_UINT); STR(R32G32_SINT); STR(R32G32_SFLOAT); STR(R32G32B32_UINT); STR(R32G32B32_SINT); STR(R32G32B32_SFLOAT); STR(R32G32B32A32_UINT); STR(R32G32B32A32_SINT); STR(R32G32B32A32_SFLOAT); STR(R64_UINT); STR(R64_SINT); STR(R64_SFLOAT); STR(R64G64_UINT); STR(R64G64_SINT); STR(R64G64_SFLOAT); STR(R64G64B64_UINT); STR(R64G64B64_SINT); STR(R64G64B64_SFLOAT); STR(R64G64B64A64_UINT); STR(R64G64B64A64_SINT); STR(R64G64B64A64_SFLOAT); STR(B10G11R11_UFLOAT_PACK32); STR(E5B9G9R9_UFLOAT_PACK32); STR(D16_UNORM); STR(X8_D24_UNORM_PACK32); STR(D32_SFLOAT); STR(S8_UINT); STR(D16_UNORM_S8_UINT); STR(D24_UNORM_S8_UINT); STR(D32_SFLOAT_S8_UINT); STR(BC1_RGB_UNORM_BLOCK); STR(BC1_RGB_SRGB_BLOCK); STR(BC2_UNORM_BLOCK); STR(BC2_SRGB_BLOCK); STR(BC3_UNORM_BLOCK); STR(BC3_SRGB_BLOCK); STR(BC4_UNORM_BLOCK); 
STR(BC4_SNORM_BLOCK); STR(BC5_UNORM_BLOCK); STR(BC5_SNORM_BLOCK); STR(BC6H_UFLOAT_BLOCK); STR(BC6H_SFLOAT_BLOCK); STR(BC7_UNORM_BLOCK); STR(BC7_SRGB_BLOCK); STR(ETC2_R8G8B8_UNORM_BLOCK); STR(ETC2_R8G8B8A1_UNORM_BLOCK); STR(ETC2_R8G8B8A8_UNORM_BLOCK); STR(EAC_R11_UNORM_BLOCK); STR(EAC_R11_SNORM_BLOCK); STR(EAC_R11G11_UNORM_BLOCK); STR(EAC_R11G11_SNORM_BLOCK); STR(ASTC_4x4_UNORM_BLOCK); STR(ASTC_4x4_SRGB_BLOCK); STR(ASTC_5x4_UNORM_BLOCK); STR(ASTC_5x4_SRGB_BLOCK); STR(ASTC_5x5_UNORM_BLOCK); STR(ASTC_5x5_SRGB_BLOCK); STR(ASTC_6x5_UNORM_BLOCK); STR(ASTC_6x5_SRGB_BLOCK); STR(ASTC_6x6_UNORM_BLOCK); STR(ASTC_6x6_SRGB_BLOCK); STR(ASTC_8x5_UNORM_BLOCK); STR(ASTC_8x5_SRGB_BLOCK); STR(ASTC_8x6_UNORM_BLOCK); STR(ASTC_8x6_SRGB_BLOCK); STR(ASTC_8x8_UNORM_BLOCK); STR(ASTC_8x8_SRGB_BLOCK); STR(ASTC_10x5_UNORM_BLOCK); STR(ASTC_10x5_SRGB_BLOCK); STR(ASTC_10x6_UNORM_BLOCK); STR(ASTC_10x6_SRGB_BLOCK); STR(ASTC_10x8_UNORM_BLOCK); STR(ASTC_10x8_SRGB_BLOCK); STR(ASTC_10x10_UNORM_BLOCK); STR(ASTC_10x10_SRGB_BLOCK); STR(ASTC_12x10_UNORM_BLOCK); STR(ASTC_12x10_SRGB_BLOCK); STR(ASTC_12x12_UNORM_BLOCK); STR(ASTC_12x12_SRGB_BLOCK); #undef STR default: return "UNKNOWN_FORMAT"; } } static void app_dev_init_formats(struct app_dev *dev) { VkFormat f; for (f = 0; f < VK_FORMAT_RANGE_SIZE; f++) { const VkFormat fmt = f; vkGetPhysicalDeviceFormatProperties(dev->gpu->obj, fmt, &dev->format_props[f]); } } static void extract_version(uint32_t version, uint32_t *major, uint32_t *minor, uint32_t *patch) { *major = version >> 22; *minor = (version >> 12) & 0x3ff; *patch = version & 0xfff; } static void app_get_physical_device_layer_extensions( struct app_gpu *gpu, char *layer_name, uint32_t *extension_count, VkExtensionProperties **extension_properties) { VkResult err; uint32_t ext_count = 0; VkExtensionProperties *ext_ptr = NULL; /* repeat get until VK_INCOMPLETE goes away */ do { err = vkEnumerateDeviceExtensionProperties(gpu->obj, layer_name, &ext_count, NULL); assert(!err); if (ext_ptr) { 
free(ext_ptr); } ext_ptr = malloc(ext_count * sizeof(VkExtensionProperties)); err = vkEnumerateDeviceExtensionProperties(gpu->obj, layer_name, &ext_count, ext_ptr); } while (err == VK_INCOMPLETE); assert(!err); *extension_count = ext_count; *extension_properties = ext_ptr; } static void app_dev_init(struct app_dev *dev, struct app_gpu *gpu) { VkDeviceCreateInfo info = { .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO, .pNext = NULL, .queueCreateInfoCount = 0, .pQueueCreateInfos = NULL, .enabledLayerCount = 0, .ppEnabledLayerNames = NULL, .enabledExtensionCount = 0, .ppEnabledExtensionNames = NULL, }; VkResult U_ASSERT_ONLY err; uint32_t count = 0; /* Scan layers */ VkLayerProperties *device_layer_properties = NULL; struct layer_extension_list *device_layers = NULL; do { err = vkEnumerateDeviceLayerProperties(gpu->obj, &count, NULL); assert(!err); if (device_layer_properties) { free(device_layer_properties); } device_layer_properties = malloc(sizeof(VkLayerProperties) * count); assert(device_layer_properties); if (device_layers) { free(device_layers); } device_layers = malloc(sizeof(struct layer_extension_list) * count); assert(device_layers); err = vkEnumerateDeviceLayerProperties(gpu->obj, &count, device_layer_properties); } while (err == VK_INCOMPLETE); assert(!err); gpu->device_layer_count = count; gpu->device_layers = device_layers; for (uint32_t i = 0; i < gpu->device_layer_count; i++) { VkLayerProperties *src_info = &device_layer_properties[i]; struct layer_extension_list *dst_info = &gpu->device_layers[i]; memcpy(&dst_info->layer_properties, src_info, sizeof(VkLayerProperties)); /* Save away layer extension info for report */ app_get_physical_device_layer_extensions( gpu, src_info->layerName, &dst_info->extension_count, &dst_info->extension_properties); } free(device_layer_properties); app_get_physical_device_layer_extensions( gpu, NULL, &gpu->device_extension_count, &gpu->device_extensions); fflush(stdout); /* request all queues */ info.queueCreateInfoCount = 
gpu->queue_count; info.pQueueCreateInfos = gpu->queue_reqs; info.enabledLayerCount = 0; info.ppEnabledLayerNames = NULL; info.enabledExtensionCount = 0; info.ppEnabledExtensionNames = NULL; dev->gpu = gpu; err = vkCreateDevice(gpu->obj, &info, NULL, &dev->obj); if (err) ERR_EXIT(err); } static void app_dev_destroy(struct app_dev *dev) { vkDestroyDevice(dev->obj, NULL); } static void app_get_global_layer_extensions(char *layer_name, uint32_t *extension_count, VkExtensionProperties **extension_properties) { VkResult err; uint32_t ext_count = 0; VkExtensionProperties *ext_ptr = NULL; /* repeat get until VK_INCOMPLETE goes away */ do { err = vkEnumerateInstanceExtensionProperties(layer_name, &ext_count, NULL); assert(!err); if (ext_ptr) { free(ext_ptr); } ext_ptr = malloc(ext_count * sizeof(VkExtensionProperties)); err = vkEnumerateInstanceExtensionProperties(layer_name, &ext_count, ext_ptr); } while (err == VK_INCOMPLETE); assert(!err); *extension_count = ext_count; *extension_properties = ext_ptr; } static void app_create_instance(struct app_instance *inst) { const VkApplicationInfo app_info = { .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO, .pNext = NULL, .pApplicationName = APP_SHORT_NAME, .applicationVersion = 1, .pEngineName = APP_SHORT_NAME, .engineVersion = 1, .apiVersion = VK_API_VERSION_1_0, }; VkInstanceCreateInfo inst_info = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO, .pNext = NULL, .pApplicationInfo = &app_info, .enabledLayerCount = 0, .ppEnabledLayerNames = NULL, .enabledExtensionCount = 0, .ppEnabledExtensionNames = NULL, }; VkResult U_ASSERT_ONLY err; uint32_t count = 0; /* Scan layers */ VkLayerProperties *global_layer_properties = NULL; struct layer_extension_list *global_layers = NULL; do { err = vkEnumerateInstanceLayerProperties(&count, NULL); assert(!err); if (global_layer_properties) { free(global_layer_properties); } global_layer_properties = malloc(sizeof(VkLayerProperties) * count); assert(global_layer_properties); if (global_layers) { 
free(global_layers); } global_layers = malloc(sizeof(struct layer_extension_list) * count); assert(global_layers); err = vkEnumerateInstanceLayerProperties(&count, global_layer_properties); } while (err == VK_INCOMPLETE); assert(!err); inst->global_layer_count = count; inst->global_layers = global_layers; for (uint32_t i = 0; i < inst->global_layer_count; i++) { VkLayerProperties *src_info = &global_layer_properties[i]; struct layer_extension_list *dst_info = &inst->global_layers[i]; memcpy(&dst_info->layer_properties, src_info, sizeof(VkLayerProperties)); /* Save away layer extension info for report */ app_get_global_layer_extensions(src_info->layerName, &dst_info->extension_count, &dst_info->extension_properties); } free(global_layer_properties); /* Collect global extensions */ inst->global_extension_count = 0; app_get_global_layer_extensions(NULL, &inst->global_extension_count, &inst->global_extensions); VkDebugReportCallbackCreateInfoEXT dbg_info; memset(&dbg_info, 0, sizeof(dbg_info)); dbg_info.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT; dbg_info.flags = VK_DEBUG_REPORT_ERROR_BIT_EXT | VK_DEBUG_REPORT_WARNING_BIT_EXT | VK_DEBUG_REPORT_INFORMATION_BIT_EXT; dbg_info.pfnCallback = dbg_callback; inst_info.pNext = &dbg_info; err = vkCreateInstance(&inst_info, NULL, &inst->instance); if (err == VK_ERROR_INCOMPATIBLE_DRIVER) { printf("Cannot create Vulkan instance.\n"); ERR_EXIT(err); } else if (err) { ERR_EXIT(err); } } static void app_destroy_instance(struct app_instance *inst) { free(inst->global_extensions); vkDestroyInstance(inst->instance, NULL); } static void app_gpu_init(struct app_gpu *gpu, uint32_t id, VkPhysicalDevice obj) { uint32_t i; memset(gpu, 0, sizeof(*gpu)); gpu->id = id; gpu->obj = obj; vkGetPhysicalDeviceProperties(gpu->obj, &gpu->props); /* get queue count */ vkGetPhysicalDeviceQueueFamilyProperties(gpu->obj, &gpu->queue_count, NULL); gpu->queue_props = malloc(sizeof(gpu->queue_props[0]) * gpu->queue_count); if (!gpu->queue_props) 
ERR_EXIT(VK_ERROR_OUT_OF_HOST_MEMORY); vkGetPhysicalDeviceQueueFamilyProperties(gpu->obj, &gpu->queue_count, gpu->queue_props); /* set up queue requests */ gpu->queue_reqs = malloc(sizeof(*gpu->queue_reqs) * gpu->queue_count); if (!gpu->queue_reqs) ERR_EXIT(VK_ERROR_OUT_OF_HOST_MEMORY); for (i = 0; i < gpu->queue_count; i++) { float *queue_priorities = malloc(gpu->queue_props[i].queueCount * sizeof(float)); memset(queue_priorities, 0, gpu->queue_props[i].queueCount * sizeof(float)); gpu->queue_reqs[i].sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO; gpu->queue_reqs[i].pNext = NULL; gpu->queue_reqs[i].queueFamilyIndex = i; gpu->queue_reqs[i].queueCount = gpu->queue_props[i].queueCount; gpu->queue_reqs[i].pQueuePriorities = queue_priorities; } vkGetPhysicalDeviceMemoryProperties(gpu->obj, &gpu->memory_props); vkGetPhysicalDeviceFeatures(gpu->obj, &gpu->features); app_dev_init(&gpu->dev, gpu); app_dev_init_formats(&gpu->dev); } static void app_gpu_destroy(struct app_gpu *gpu) { app_dev_destroy(&gpu->dev); free(gpu->device_extensions); for (uint32_t i = 0; i < gpu->queue_count; i++) { free((void *)gpu->queue_reqs[i].pQueuePriorities); } free(gpu->queue_reqs); free(gpu->queue_props); } // clang-format off static void app_dev_dump_format_props(const struct app_dev *dev, VkFormat fmt) { const VkFormatProperties *props = &dev->format_props[fmt]; struct { const char *name; VkFlags flags; } features[3]; uint32_t i; features[0].name = "linearTiling FormatFeatureFlags"; features[0].flags = props->linearTilingFeatures; features[1].name = "optimalTiling FormatFeatureFlags"; features[1].flags = props->optimalTilingFeatures; features[2].name = "bufferFeatures FormatFeatureFlags"; features[2].flags = props->bufferFeatures; printf("\nFORMAT_%s:", vk_format_string(fmt)); for (i = 0; i < ARRAY_SIZE(features); i++) { printf("\n\t%s:", features[i].name); if (features[i].flags == 0) { printf("\n\t\tNone"); } else { printf("%s%s%s%s%s%s%s%s%s%s%s%s%s", ((features[i].flags & 
VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT) ? "\n\t\tVK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT" : ""), ((features[i].flags & VK_FORMAT_FEATURE_STORAGE_IMAGE_BIT) ? "\n\t\tVK_FORMAT_FEATURE_STORAGE_IMAGE_BIT" : ""), ((features[i].flags & VK_FORMAT_FEATURE_STORAGE_IMAGE_ATOMIC_BIT) ? "\n\t\tVK_FORMAT_FEATURE_STORAGE_IMAGE_ATOMIC_BIT" : ""), ((features[i].flags & VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BIT) ? "\n\t\tVK_FORMAT_FEATURE_COLOR_ATTACHMENT_BIT" : ""), ((features[i].flags & VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BLEND_BIT) ? "\n\t\tVK_FORMAT_FEATURE_COLOR_ATTACHMENT_BLEND_BIT" : ""), ((features[i].flags & VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT) ? "\n\t\tVK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT" : ""), ((features[i].flags & VK_FORMAT_FEATURE_BLIT_SRC_BIT) ? "\n\t\tVK_FORMAT_FEATURE_BLIT_SRC_BIT" : ""), ((features[i].flags & VK_FORMAT_FEATURE_BLIT_DST_BIT) ? "\n\t\tVK_FORMAT_FEATURE_BLIT_DST_BIT" : ""), ((features[i].flags & VK_FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_LINEAR_BIT) ? "\n\t\tVK_FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_LINEAR_BIT" : ""), ((features[i].flags & VK_FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT) ? "\n\t\tVK_FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT" : ""), ((features[i].flags & VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_BIT) ? "\n\t\tVK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_BIT" : ""), ((features[i].flags & VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_ATOMIC_BIT) ? "\n\t\tVK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_ATOMIC_BIT" : ""), ((features[i].flags & VK_FORMAT_FEATURE_VERTEX_BUFFER_BIT) ? 
                           "\n\t\tVK_FORMAT_FEATURE_VERTEX_BUFFER_BIT" : ""));
        }
    }
    printf("\n");
}

static void app_dev_dump(const struct app_dev *dev) {
    VkFormat fmt;

    for (fmt = 0; fmt < VK_FORMAT_RANGE_SIZE; fmt++) {
        app_dev_dump_format_props(dev, fmt);
    }
}

#ifdef _WIN32
#define PRINTF_SIZE_T_SPECIFIER "%Iu"
#else
#define PRINTF_SIZE_T_SPECIFIER "%zu"
#endif

static void app_gpu_dump_features(const struct app_gpu *gpu) {
    const VkPhysicalDeviceFeatures *features = &gpu->features;

    printf("VkPhysicalDeviceFeatures:\n");
    printf("=========================\n");
    printf("\trobustBufferAccess = %u\n", features->robustBufferAccess);
    printf("\tfullDrawIndexUint32 = %u\n", features->fullDrawIndexUint32);
    printf("\timageCubeArray = %u\n", features->imageCubeArray);
    printf("\tindependentBlend = %u\n", features->independentBlend);
    printf("\tgeometryShader = %u\n", features->geometryShader);
    printf("\ttessellationShader = %u\n", features->tessellationShader);
    printf("\tsampleRateShading = %u\n", features->sampleRateShading);
    printf("\tdualSrcBlend = %u\n", features->dualSrcBlend);
    printf("\tlogicOp = %u\n", features->logicOp);
    printf("\tmultiDrawIndirect = %u\n", features->multiDrawIndirect);
    printf("\tdrawIndirectFirstInstance = %u\n", features->drawIndirectFirstInstance);
    printf("\tdepthClamp = %u\n", features->depthClamp);
    printf("\tdepthBiasClamp = %u\n", features->depthBiasClamp);
    printf("\tfillModeNonSolid = %u\n", features->fillModeNonSolid);
    printf("\tdepthBounds = %u\n", features->depthBounds);
    printf("\twideLines = %u\n", features->wideLines);
    printf("\tlargePoints = %u\n", features->largePoints);
    printf("\ttextureCompressionETC2 = %u\n", features->textureCompressionETC2);
    printf("\ttextureCompressionASTC_LDR = %u\n", features->textureCompressionASTC_LDR);
    printf("\ttextureCompressionBC = %u\n", features->textureCompressionBC);
    printf("\tocclusionQueryPrecise = %u\n", features->occlusionQueryPrecise);
    printf("\tpipelineStatisticsQuery = %u\n", features->pipelineStatisticsQuery);
    printf("\tvertexPipelineStoresAndAtomics = %u\n", features->vertexPipelineStoresAndAtomics);
    printf("\tfragmentStoresAndAtomics = %u\n", features->fragmentStoresAndAtomics);
    printf("\tshaderTessellationAndGeometryPointSize = %u\n", features->shaderTessellationAndGeometryPointSize);
    printf("\tshaderImageGatherExtended = %u\n", features->shaderImageGatherExtended);
    printf("\tshaderStorageImageExtendedFormats = %u\n", features->shaderStorageImageExtendedFormats);
    printf("\tshaderStorageImageMultisample = %u\n", features->shaderStorageImageMultisample);
    printf("\tshaderStorageImageReadWithoutFormat = %u\n", features->shaderStorageImageReadWithoutFormat);
    printf("\tshaderStorageImageWriteWithoutFormat = %u\n", features->shaderStorageImageWriteWithoutFormat);
    printf("\tshaderUniformBufferArrayDynamicIndexing = %u\n", features->shaderUniformBufferArrayDynamicIndexing);
    printf("\tshaderSampledImageArrayDynamicIndexing = %u\n", features->shaderSampledImageArrayDynamicIndexing);
    printf("\tshaderStorageBufferArrayDynamicIndexing = %u\n", features->shaderStorageBufferArrayDynamicIndexing);
    printf("\tshaderStorageImageArrayDynamicIndexing = %u\n", features->shaderStorageImageArrayDynamicIndexing);
    printf("\tshaderClipDistance = %u\n", features->shaderClipDistance);
    printf("\tshaderCullDistance = %u\n", features->shaderCullDistance);
    printf("\tshaderFloat64 = %u\n", features->shaderFloat64);
    printf("\tshaderInt64 = %u\n", features->shaderInt64);
    printf("\tshaderInt16 = %u\n", features->shaderInt16);
    printf("\tshaderResourceResidency = %u\n", features->shaderResourceResidency);
    printf("\tshaderResourceMinLod = %u\n", features->shaderResourceMinLod);
    printf("\talphaToOne = %u\n", features->alphaToOne);
    printf("\tsparseBinding = %u\n", features->sparseBinding);
    printf("\tsparseResidencyBuffer = %u\n", features->sparseResidencyBuffer);
    printf("\tsparseResidencyImage2D = %u\n", features->sparseResidencyImage2D);
    printf("\tsparseResidencyImage3D = %u\n", features->sparseResidencyImage3D);
    printf("\tsparseResidency2Samples = %u\n", features->sparseResidency2Samples);
    printf("\tsparseResidency4Samples = %u\n", features->sparseResidency4Samples);
    printf("\tsparseResidency8Samples = %u\n", features->sparseResidency8Samples);
    printf("\tsparseResidency16Samples = %u\n", features->sparseResidency16Samples);
    printf("\tsparseResidencyAliased = %u\n", features->sparseResidencyAliased);
    printf("\tvariableMultisampleRate = %u\n", features->variableMultisampleRate);
    printf("\tinheritedQueries = %u\n", features->inheritedQueries);
}

static void app_dump_sparse_props(const VkPhysicalDeviceSparseProperties *sparseProps) {
    printf("\tVkPhysicalDeviceSparseProperties:\n");
    printf("\t---------------------------------\n");
    printf("\t\tresidencyStandard2DBlockShape = %u\n", sparseProps->residencyStandard2DBlockShape);
    printf("\t\tresidencyStandard2DMultisampleBlockShape = %u\n", sparseProps->residencyStandard2DMultisampleBlockShape);
    printf("\t\tresidencyStandard3DBlockShape = %u\n", sparseProps->residencyStandard3DBlockShape);
    printf("\t\tresidencyAlignedMipSize = %u\n", sparseProps->residencyAlignedMipSize);
    printf("\t\tresidencyNonResidentStrict = %u\n", sparseProps->residencyNonResidentStrict);
}

static void app_dump_limits(const VkPhysicalDeviceLimits *limits) {
    printf("\tVkPhysicalDeviceLimits:\n");
    printf("\t-----------------------\n");
    printf("\t\tmaxImageDimension1D = 0x%" PRIxLEAST32 "\n", limits->maxImageDimension1D);
    printf("\t\tmaxImageDimension2D = 0x%" PRIxLEAST32 "\n", limits->maxImageDimension2D);
    printf("\t\tmaxImageDimension3D = 0x%" PRIxLEAST32 "\n", limits->maxImageDimension3D);
    printf("\t\tmaxImageDimensionCube = 0x%" PRIxLEAST32 "\n", limits->maxImageDimensionCube);
    printf("\t\tmaxImageArrayLayers = 0x%" PRIxLEAST32 "\n", limits->maxImageArrayLayers);
    printf("\t\tmaxTexelBufferElements = 0x%" PRIxLEAST32 "\n", limits->maxTexelBufferElements);
    printf("\t\tmaxUniformBufferRange = 0x%" PRIxLEAST32 "\n", limits->maxUniformBufferRange);
    printf("\t\tmaxStorageBufferRange = 0x%" PRIxLEAST32 "\n", limits->maxStorageBufferRange);
    printf("\t\tmaxPushConstantsSize = 0x%" PRIxLEAST32 "\n", limits->maxPushConstantsSize);
    printf("\t\tmaxMemoryAllocationCount = 0x%" PRIxLEAST32 "\n", limits->maxMemoryAllocationCount);
    printf("\t\tmaxSamplerAllocationCount = 0x%" PRIxLEAST32 "\n", limits->maxSamplerAllocationCount);
    printf("\t\tbufferImageGranularity = 0x%" PRIxLEAST64 "\n", limits->bufferImageGranularity);
    printf("\t\tsparseAddressSpaceSize = 0x%" PRIxLEAST64 "\n", limits->sparseAddressSpaceSize);
    printf("\t\tmaxBoundDescriptorSets = 0x%" PRIxLEAST32 "\n", limits->maxBoundDescriptorSets);
    printf("\t\tmaxPerStageDescriptorSamplers = 0x%" PRIxLEAST32 "\n", limits->maxPerStageDescriptorSamplers);
    printf("\t\tmaxPerStageDescriptorUniformBuffers = 0x%" PRIxLEAST32 "\n", limits->maxPerStageDescriptorUniformBuffers);
    printf("\t\tmaxPerStageDescriptorStorageBuffers = 0x%" PRIxLEAST32 "\n", limits->maxPerStageDescriptorStorageBuffers);
    printf("\t\tmaxPerStageDescriptorSampledImages = 0x%" PRIxLEAST32 "\n", limits->maxPerStageDescriptorSampledImages);
    printf("\t\tmaxPerStageDescriptorStorageImages = 0x%" PRIxLEAST32 "\n", limits->maxPerStageDescriptorStorageImages);
    printf("\t\tmaxPerStageDescriptorInputAttachments = 0x%" PRIxLEAST32 "\n", limits->maxPerStageDescriptorInputAttachments);
    printf("\t\tmaxPerStageResources = 0x%" PRIxLEAST32 "\n", limits->maxPerStageResources);
    printf("\t\tmaxDescriptorSetSamplers = 0x%" PRIxLEAST32 "\n", limits->maxDescriptorSetSamplers);
    printf("\t\tmaxDescriptorSetUniformBuffers = 0x%" PRIxLEAST32 "\n", limits->maxDescriptorSetUniformBuffers);
    printf("\t\tmaxDescriptorSetUniformBuffersDynamic = 0x%" PRIxLEAST32 "\n", limits->maxDescriptorSetUniformBuffersDynamic);
    printf("\t\tmaxDescriptorSetStorageBuffers = 0x%" PRIxLEAST32 "\n", limits->maxDescriptorSetStorageBuffers);
    printf("\t\tmaxDescriptorSetStorageBuffersDynamic = 0x%" PRIxLEAST32 "\n", limits->maxDescriptorSetStorageBuffersDynamic);
    printf("\t\tmaxDescriptorSetSampledImages = 0x%" PRIxLEAST32 "\n", limits->maxDescriptorSetSampledImages);
    printf("\t\tmaxDescriptorSetStorageImages = 0x%" PRIxLEAST32 "\n", limits->maxDescriptorSetStorageImages);
    printf("\t\tmaxDescriptorSetInputAttachments = 0x%" PRIxLEAST32 "\n", limits->maxDescriptorSetInputAttachments);
    printf("\t\tmaxVertexInputAttributes = 0x%" PRIxLEAST32 "\n", limits->maxVertexInputAttributes);
    printf("\t\tmaxVertexInputBindings = 0x%" PRIxLEAST32 "\n", limits->maxVertexInputBindings);
    printf("\t\tmaxVertexInputAttributeOffset = 0x%" PRIxLEAST32 "\n", limits->maxVertexInputAttributeOffset);
    printf("\t\tmaxVertexInputBindingStride = 0x%" PRIxLEAST32 "\n", limits->maxVertexInputBindingStride);
    printf("\t\tmaxVertexOutputComponents = 0x%" PRIxLEAST32 "\n", limits->maxVertexOutputComponents);
    printf("\t\tmaxTessellationGenerationLevel = 0x%" PRIxLEAST32 "\n", limits->maxTessellationGenerationLevel);
    printf("\t\tmaxTessellationPatchSize = 0x%" PRIxLEAST32 "\n", limits->maxTessellationPatchSize);
    printf("\t\tmaxTessellationControlPerVertexInputComponents = 0x%" PRIxLEAST32 "\n", limits->maxTessellationControlPerVertexInputComponents);
    printf("\t\tmaxTessellationControlPerVertexOutputComponents = 0x%" PRIxLEAST32 "\n", limits->maxTessellationControlPerVertexOutputComponents);
    printf("\t\tmaxTessellationControlPerPatchOutputComponents = 0x%" PRIxLEAST32 "\n", limits->maxTessellationControlPerPatchOutputComponents);
    printf("\t\tmaxTessellationControlTotalOutputComponents = 0x%" PRIxLEAST32 "\n", limits->maxTessellationControlTotalOutputComponents);
    printf("\t\tmaxTessellationEvaluationInputComponents = 0x%" PRIxLEAST32 "\n", limits->maxTessellationEvaluationInputComponents);
    printf("\t\tmaxTessellationEvaluationOutputComponents = 0x%" PRIxLEAST32 "\n", limits->maxTessellationEvaluationOutputComponents);
    printf("\t\tmaxGeometryShaderInvocations = 0x%" PRIxLEAST32 "\n", limits->maxGeometryShaderInvocations);
    printf("\t\tmaxGeometryInputComponents = 0x%" PRIxLEAST32 "\n", limits->maxGeometryInputComponents);
    printf("\t\tmaxGeometryOutputComponents = 0x%" PRIxLEAST32 "\n", limits->maxGeometryOutputComponents);
    printf("\t\tmaxGeometryOutputVertices = 0x%" PRIxLEAST32 "\n", limits->maxGeometryOutputVertices);
    printf("\t\tmaxGeometryTotalOutputComponents = 0x%" PRIxLEAST32 "\n", limits->maxGeometryTotalOutputComponents);
    printf("\t\tmaxFragmentInputComponents = 0x%" PRIxLEAST32 "\n", limits->maxFragmentInputComponents);
    printf("\t\tmaxFragmentOutputAttachments = 0x%" PRIxLEAST32 "\n", limits->maxFragmentOutputAttachments);
    printf("\t\tmaxFragmentDualSrcAttachments = 0x%" PRIxLEAST32 "\n", limits->maxFragmentDualSrcAttachments);
    printf("\t\tmaxFragmentCombinedOutputResources = 0x%" PRIxLEAST32 "\n", limits->maxFragmentCombinedOutputResources);
    printf("\t\tmaxComputeSharedMemorySize = 0x%" PRIxLEAST32 "\n", limits->maxComputeSharedMemorySize);
    printf("\t\tmaxComputeWorkGroupCount[0] = 0x%" PRIxLEAST32 "\n", limits->maxComputeWorkGroupCount[0]);
    printf("\t\tmaxComputeWorkGroupCount[1] = 0x%" PRIxLEAST32 "\n", limits->maxComputeWorkGroupCount[1]);
    printf("\t\tmaxComputeWorkGroupCount[2] = 0x%" PRIxLEAST32 "\n", limits->maxComputeWorkGroupCount[2]);
    printf("\t\tmaxComputeWorkGroupInvocations = 0x%" PRIxLEAST32 "\n", limits->maxComputeWorkGroupInvocations);
    printf("\t\tmaxComputeWorkGroupSize[0] = 0x%" PRIxLEAST32 "\n", limits->maxComputeWorkGroupSize[0]);
    printf("\t\tmaxComputeWorkGroupSize[1] = 0x%" PRIxLEAST32 "\n", limits->maxComputeWorkGroupSize[1]);
    printf("\t\tmaxComputeWorkGroupSize[2] = 0x%" PRIxLEAST32 "\n", limits->maxComputeWorkGroupSize[2]);
    printf("\t\tsubPixelPrecisionBits = 0x%" PRIxLEAST32 "\n", limits->subPixelPrecisionBits);
    printf("\t\tsubTexelPrecisionBits = 0x%" PRIxLEAST32 "\n", limits->subTexelPrecisionBits);
    printf("\t\tmipmapPrecisionBits = 0x%" PRIxLEAST32 "\n", limits->mipmapPrecisionBits);
    printf("\t\tmaxDrawIndexedIndexValue = 0x%" PRIxLEAST32 "\n", limits->maxDrawIndexedIndexValue);
    printf("\t\tmaxDrawIndirectCount = 0x%" PRIxLEAST32 "\n", limits->maxDrawIndirectCount);
    printf("\t\tmaxSamplerLodBias = %f\n", limits->maxSamplerLodBias);
    printf("\t\tmaxSamplerAnisotropy = %f\n", limits->maxSamplerAnisotropy);
    printf("\t\tmaxViewports = 0x%" PRIxLEAST32 "\n", limits->maxViewports);
    printf("\t\tmaxViewportDimensions[0] = 0x%" PRIxLEAST32 "\n", limits->maxViewportDimensions[0]);
    printf("\t\tmaxViewportDimensions[1] = 0x%" PRIxLEAST32 "\n", limits->maxViewportDimensions[1]);
    printf("\t\tviewportBoundsRange[0] = %f\n", limits->viewportBoundsRange[0]);
    printf("\t\tviewportBoundsRange[1] = %f\n", limits->viewportBoundsRange[1]);
    printf("\t\tviewportSubPixelBits = 0x%" PRIxLEAST32 "\n", limits->viewportSubPixelBits);
    printf("\t\tminMemoryMapAlignment = " PRINTF_SIZE_T_SPECIFIER "\n", limits->minMemoryMapAlignment);
    printf("\t\tminTexelBufferOffsetAlignment = 0x%" PRIxLEAST64 "\n", limits->minTexelBufferOffsetAlignment);
    printf("\t\tminUniformBufferOffsetAlignment = 0x%" PRIxLEAST64 "\n", limits->minUniformBufferOffsetAlignment);
    printf("\t\tminStorageBufferOffsetAlignment = 0x%" PRIxLEAST64 "\n", limits->minStorageBufferOffsetAlignment);
    printf("\t\tminTexelOffset = 0x%" PRIxLEAST32 "\n", limits->minTexelOffset);
    printf("\t\tmaxTexelOffset = 0x%" PRIxLEAST32 "\n", limits->maxTexelOffset);
    printf("\t\tminTexelGatherOffset = 0x%" PRIxLEAST32 "\n", limits->minTexelGatherOffset);
    printf("\t\tmaxTexelGatherOffset = 0x%" PRIxLEAST32 "\n", limits->maxTexelGatherOffset);
    printf("\t\tminInterpolationOffset = %f\n", limits->minInterpolationOffset);
    printf("\t\tmaxInterpolationOffset = %f\n", limits->maxInterpolationOffset);
    printf("\t\tsubPixelInterpolationOffsetBits = 0x%" PRIxLEAST32 "\n", limits->subPixelInterpolationOffsetBits);
    printf("\t\tmaxFramebufferWidth = 0x%" PRIxLEAST32 "\n", limits->maxFramebufferWidth);
    printf("\t\tmaxFramebufferHeight = 0x%" PRIxLEAST32 "\n", limits->maxFramebufferHeight);
    printf("\t\tmaxFramebufferLayers = 0x%" PRIxLEAST32 "\n", limits->maxFramebufferLayers);
    printf("\t\tframebufferColorSampleCounts = 0x%" PRIxLEAST32 "\n", limits->framebufferColorSampleCounts);
    printf("\t\tframebufferDepthSampleCounts = 0x%" PRIxLEAST32 "\n", limits->framebufferDepthSampleCounts);
    printf("\t\tframebufferStencilSampleCounts = 0x%" PRIxLEAST32 "\n", limits->framebufferStencilSampleCounts);
    printf("\t\tmaxColorAttachments = 0x%" PRIxLEAST32 "\n", limits->maxColorAttachments);
    printf("\t\tsampledImageColorSampleCounts = 0x%" PRIxLEAST32 "\n", limits->sampledImageColorSampleCounts);
    printf("\t\tsampledImageDepthSampleCounts = 0x%" PRIxLEAST32 "\n", limits->sampledImageDepthSampleCounts);
    printf("\t\tsampledImageStencilSampleCounts = 0x%" PRIxLEAST32 "\n", limits->sampledImageStencilSampleCounts);
    printf("\t\tsampledImageIntegerSampleCounts = 0x%" PRIxLEAST32 "\n", limits->sampledImageIntegerSampleCounts);
    printf("\t\tstorageImageSampleCounts = 0x%" PRIxLEAST32 "\n", limits->storageImageSampleCounts);
    printf("\t\tmaxSampleMaskWords = 0x%" PRIxLEAST32 "\n", limits->maxSampleMaskWords);
    printf("\t\ttimestampComputeAndGraphics = %u\n", limits->timestampComputeAndGraphics);
    /* timestampPeriod is a float; a "0x" prefix on %f would be misleading. */
    printf("\t\ttimestampPeriod = %f\n", limits->timestampPeriod);
    printf("\t\tmaxClipDistances = 0x%" PRIxLEAST32 "\n", limits->maxClipDistances);
    printf("\t\tmaxCullDistances = 0x%" PRIxLEAST32 "\n", limits->maxCullDistances);
    printf("\t\tmaxCombinedClipAndCullDistances = 0x%" PRIxLEAST32 "\n", limits->maxCombinedClipAndCullDistances);
    printf("\t\tpointSizeRange[0] = %f\n", limits->pointSizeRange[0]);
    printf("\t\tpointSizeRange[1] = %f\n", limits->pointSizeRange[1]);
    printf("\t\tlineWidthRange[0] = %f\n", limits->lineWidthRange[0]);
    printf("\t\tlineWidthRange[1] = %f\n", limits->lineWidthRange[1]);
    printf("\t\tpointSizeGranularity = %f\n", limits->pointSizeGranularity);
    printf("\t\tlineWidthGranularity = %f\n", limits->lineWidthGranularity);
    printf("\t\tstrictLines = %u\n", limits->strictLines);
    printf("\t\tstandardSampleLocations = %u\n", limits->standardSampleLocations);
    printf("\t\toptimalBufferCopyOffsetAlignment = 0x%" PRIxLEAST64 "\n", limits->optimalBufferCopyOffsetAlignment);
    printf("\t\toptimalBufferCopyRowPitchAlignment = 0x%" PRIxLEAST64 "\n", limits->optimalBufferCopyRowPitchAlignment);
    printf("\t\tnonCoherentAtomSize = 0x%" PRIxLEAST64 "\n", limits->nonCoherentAtomSize);
}

static void app_gpu_dump_props(const struct app_gpu *gpu) {
    const VkPhysicalDeviceProperties *props = &gpu->props;

    printf("VkPhysicalDeviceProperties:\n");
    printf("===========================\n");
    printf("\tapiVersion = %u\n", props->apiVersion);
    printf("\tdriverVersion = %u\n", props->driverVersion);
    printf("\tvendorID = 0x%04x\n", props->vendorID);
    printf("\tdeviceID = 0x%04x\n", props->deviceID);
    printf("\tdeviceType = %s\n", vk_physical_device_type_string(props->deviceType));
    printf("\tdeviceName = %s\n", props->deviceName);

    app_dump_limits(&gpu->props.limits);
    app_dump_sparse_props(&gpu->props.sparseProperties);

    fflush(stdout);
}
// clang-format on

static void app_dump_extensions(const char *indent, const char *layer_name,
                                const uint32_t extension_count,
                                const VkExtensionProperties *extension_properties) {
    uint32_t i;

    if (layer_name && (strlen(layer_name) > 0)) {
        printf("%s%s Extensions", indent, layer_name);
    } else {
        printf("Extensions");
    }
    printf("\tcount = %d\n", extension_count);
    for (i = 0; i < extension_count; i++) {
        VkExtensionProperties const *ext_prop = &extension_properties[i];

        printf("%s\t", indent);
        printf("%-32s: extension revision %2d\n", ext_prop->extensionName,
               ext_prop->specVersion);
    }
    printf("\n");
    fflush(stdout);
}

static void app_gpu_dump_queue_props(const struct app_gpu *gpu, uint32_t id) {
    const VkQueueFamilyProperties *props = &gpu->queue_props[id];

    printf("VkQueueFamilyProperties[%d]:\n", id);
    printf("============================\n");
    printf("\tqueueFlags = %c%c%c\n",
           (props->queueFlags & VK_QUEUE_GRAPHICS_BIT) ? 'G' : '.',
           (props->queueFlags & VK_QUEUE_COMPUTE_BIT) ? 'C' : '.',
           (props->queueFlags & VK_QUEUE_TRANSFER_BIT) ? 'D' : '.');
    printf("\tqueueCount = %u\n", props->queueCount);
    printf("\ttimestampValidBits = %u\n", props->timestampValidBits);
    printf("\tminImageTransferGranularity = (%d, %d, %d)\n",
           props->minImageTransferGranularity.width,
           props->minImageTransferGranularity.height,
           props->minImageTransferGranularity.depth);
    fflush(stdout);
}

static void app_gpu_dump_memory_props(const struct app_gpu *gpu) {
    const VkPhysicalDeviceMemoryProperties *props = &gpu->memory_props;

    printf("VkPhysicalDeviceMemoryProperties:\n");
    printf("=================================\n");
    printf("\tmemoryTypeCount = %u\n", props->memoryTypeCount);
    for (uint32_t i = 0; i < props->memoryTypeCount; i++) {
        printf("\tmemoryTypes[%u] : \n", i);
        printf("\t\tpropertyFlags = %u\n", props->memoryTypes[i].propertyFlags);
        printf("\t\theapIndex = %u\n", props->memoryTypes[i].heapIndex);
    }
    printf("\tmemoryHeapCount = %u\n", props->memoryHeapCount);
    for (uint32_t i = 0; i < props->memoryHeapCount; i++) {
        printf("\tmemoryHeaps[%u] : \n", i);
        printf("\t\tsize = " PRINTF_SIZE_T_SPECIFIER "\n",
               (size_t)props->memoryHeaps[i].size);
    }
    fflush(stdout);
}

static void app_gpu_dump(const struct app_gpu *gpu) {
    uint32_t i;

    printf("Device Extensions and layers:\n");
    printf("=============================\n");
    printf("GPU%u\n", gpu->id);
    app_gpu_dump_props(gpu);
    printf("\n");
    app_dump_extensions("", "Device", gpu->device_extension_count,
                        gpu->device_extensions);
    printf("\n");
    printf("Layers\tcount = %d\n", gpu->device_layer_count);
    for (uint32_t i = 0; i < gpu->device_layer_count; i++) {
        uint32_t major, minor, patch;
        char spec_version[64], layer_version[64];
        struct layer_extension_list const *layer_info = &gpu->device_layers[i];

        extract_version(layer_info->layer_properties.specVersion, &major,
                        &minor, &patch);
        snprintf(spec_version, sizeof(spec_version), "%d.%d.%d", major, minor,
                 patch);
        snprintf(layer_version, sizeof(layer_version), "%d",
                 layer_info->layer_properties.implementationVersion);
        printf("\t%s (%s) Vulkan version %s, layer version %s\n",
               layer_info->layer_properties.layerName,
               (char *)layer_info->layer_properties.description, spec_version,
               layer_version);

        app_dump_extensions("\t", layer_info->layer_properties.layerName,
                            layer_info->extension_count,
                            layer_info->extension_properties);
        fflush(stdout);
    }
    printf("\n");
    for (i = 0; i < gpu->queue_count; i++) {
        app_gpu_dump_queue_props(gpu, i);
        printf("\n");
    }
    app_gpu_dump_memory_props(gpu);
    printf("\n");
    app_gpu_dump_features(gpu);
    printf("\n");
    app_dev_dump(&gpu->dev);
}

#ifdef _WIN32
// Enlarges the console window to have a large scrollback size.
static void ConsoleEnlarge() {
    HANDLE consoleHandle = GetStdHandle(STD_OUTPUT_HANDLE);

    // make the console window bigger
    CONSOLE_SCREEN_BUFFER_INFO csbi;
    COORD bufferSize;
    if (GetConsoleScreenBufferInfo(consoleHandle, &csbi)) {
        bufferSize.X = csbi.dwSize.X + 30;
        bufferSize.Y = 20000;
        SetConsoleScreenBufferSize(consoleHandle, bufferSize);
    }
    SMALL_RECT r;
    r.Left = r.Top = 0;
    r.Right = csbi.dwSize.X - 1 + 30;
    r.Bottom = 50;
    SetConsoleWindowInfo(consoleHandle, true, &r);

    // change the console window title
    SetConsoleTitle(TEXT(APP_SHORT_NAME));
}
#endif

int main(int argc, char **argv) {
    unsigned int major, minor, patch;
    struct app_gpu gpus[MAX_GPUS];
    VkPhysicalDevice objs[MAX_GPUS];
    uint32_t gpu_count, i;
    VkResult err;
    struct app_instance inst;

#ifdef _WIN32
    if (ConsoleIsExclusive())
        ConsoleEnlarge();
#endif

    major = VK_API_VERSION_1_0 >> 22;
    minor = (VK_API_VERSION_1_0 >> 12) & 0x3ff;
    patch = VK_HEADER_VERSION & 0xfff;

    printf("===========\n");
    printf("VULKAN INFO\n");
    printf("===========\n\n");
    printf("Vulkan API Version: %d.%d.%d\n\n", major, minor, patch);

    app_create_instance(&inst);

    printf("Instance Extensions and layers:\n");
    printf("===============================\n");
    app_dump_extensions("", "Instance", inst.global_extension_count,
                        inst.global_extensions);
    printf("Instance Layers\tcount = %d\n", inst.global_layer_count);
    for (uint32_t i = 0; i < inst.global_layer_count; i++) {
        uint32_t major, minor, patch;
        char spec_version[64], layer_version[64];
        VkLayerProperties const *layer_prop = &inst.global_layers[i].layer_properties;

        extract_version(layer_prop->specVersion, &major, &minor, &patch);
        snprintf(spec_version, sizeof(spec_version), "%d.%d.%d", major, minor, patch);
        snprintf(layer_version, sizeof(layer_version), "%d",
                 layer_prop->implementationVersion);
        printf("\t%s (%s) Vulkan version %s, layer version %s\n",
               layer_prop->layerName, (char *)layer_prop->description,
               spec_version, layer_version);

        app_dump_extensions("\t", inst.global_layers[i].layer_properties.layerName,
                            inst.global_layers[i].extension_count,
                            inst.global_layers[i].extension_properties);
    }

    err = vkEnumeratePhysicalDevices(inst.instance, &gpu_count, NULL);
    if (err)
        ERR_EXIT(err);
    if (gpu_count > MAX_GPUS) {
        printf("Too many GPUs found\n");
        ERR_EXIT(-1);
    }
    err = vkEnumeratePhysicalDevices(inst.instance, &gpu_count, objs);
    if (err)
        ERR_EXIT(err);

    for (i = 0; i < gpu_count; i++) {
        app_gpu_init(&gpus[i], i, objs[i]);
        app_gpu_dump(&gpus[i]);
        printf("\n\n");
    }

    for (i = 0; i < gpu_count; i++)
        app_gpu_destroy(&gpus[i]);

    app_destroy_instance(&inst);

    fflush(stdout);
#ifdef _WIN32
    if (ConsoleIsExclusive())
        Sleep(INFINITE);
#endif

    return 0;
}
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/demos/vulkaninfo.vcxproj.user
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- The element tags of this file were lost during extraction; only the values
       survived. The structure below is a reconstruction around those values
       (per-configuration debugger environment settings); the Condition attributes
       are assumed. -->
  <PropertyGroup Condition="'$(Configuration)'=='Debug'">
    <LocalDebuggerEnvironment>VK_LAYER_PATH=..\layers\Debug</LocalDebuggerEnvironment>
    <DebuggerFlavor>WindowsLocalDebugger</DebuggerFlavor>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)'=='Release'">
    <LocalDebuggerEnvironment>VK_LAYER_PATH=..\layers\Release</LocalDebuggerEnvironment>
    <DebuggerFlavor>WindowsLocalDebugger</DebuggerFlavor>
  </PropertyGroup>
</Project>
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/determine_vs_version.py
#!/usr/bin/env python3
#
# Copyright (c) 2016 The Khronos Group Inc.
# Copyright (c) 2016 Valve Corporation
# Copyright (c) 2016 LunarG, Inc.
# Copyright (c) 2016 Google Inc.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and/or associated documentation files (the "Materials"), to
# deal in the Materials without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Materials, and to permit persons to whom the Materials
# are furnished to do so, subject to the following conditions:
#
# The above copyright notice(s) and this permission notice shall be included
# in all copies or substantial portions of the Materials.
#
# THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
#
# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
# USE OR OTHER DEALINGS IN THE MATERIALS
#
# Author: Mark Young

import sys
import os

# Following function code snippet was found on StackOverflow (with a change to lower
# camel-case on the variable names):
# http://stackoverflow.com/questions/377017/test-if-executable-exists-in-python
def find_executable(program):
    def is_exe(fPath):
        return os.path.isfile(fPath) and os.access(fPath, os.X_OK)

    fPath, fName = os.path.split(program)
    if fPath:
        if is_exe(program):
            return program
    else:
        for path in os.environ["PATH"].split(os.pathsep):
            path = path.strip('"')
            exe_file = os.path.join(path, program)
            if is_exe(exe_file):
                return exe_file
    return None

def determine_year(version):
    if version == 8:
        return 2005
    elif version == 9:
        return 2008
    elif version == 10:
        return 2010
    elif version == 11:
        return 2012
    elif version == 12:
        return 2013
    elif version == 14:
        return 2015
    else:
        return 0

# Determine if msbuild is in the path, then call it to determine the version and parse
# it into a format we can use, which is "<version> <year>".
if __name__ == '__main__':
    exeName = 'msbuild.exe'
    versionCall = exeName + ' /ver'

    # Determine if the executable exists in the path, this is critical.
    foundExeName = find_executable(exeName)

    # If not found, return an invalid number but in the appropriate format so it will
    # fail if the program above tries to use it.
    if foundExeName == None:
        print('00 0000')
        print('Executable ' + exeName + ' not found in PATH!')
    else:
        sysCallOut = os.popen(versionCall).read()

        version = None

        # Split around any spaces first
        spaceList = sysCallOut.split(' ')
        for spaceString in spaceList:
            # If we've already found it, bail.
            if version != None:
                break

            # Now split around line feeds
            lineList = spaceString.split('\n')
            for curLine in lineList:
                # If we've already found it, bail.
                if version != None:
                    break

                # We only want to continue if there's a period in the list
                if '.' not in curLine:
                    continue

                # Get the first element and determine if it is a number, if so, we've
                # got our number.
                splitAroundPeriod = curLine.split('.')
                if splitAroundPeriod[0].isdigit():
                    version = int(splitAroundPeriod[0])
                    break

        # Failsafe to return a number in the proper format, but one that will fail.
        if version == None:
            version = 0

        # Determine the year associated with that version
        year = determine_year(version)

        # Output the string we need for CMake to properly build for this version
        print(str(version) + ' ' + str(year))
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/generator.py
#!/usr/bin/python3 -i
#
# Copyright (c) 2013-2016 The Khronos Group Inc.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and/or associated documentation files (the
# "Materials"), to deal in the Materials without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Materials, and to
# permit persons to whom the Materials are furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Materials.
#
# THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.

import os,re,sys
from collections import namedtuple
from lxml import etree

def write( *args, **kwargs ):
    file = kwargs.pop('file',sys.stdout)
    end = kwargs.pop( 'end','\n')
    file.write( ' '.join([str(arg) for arg in args]) )
    file.write( end )

# noneStr - returns string argument, or "" if argument is None.
# Used in converting lxml Elements into text.
#   str - string to convert
def noneStr(str):
    if (str):
        return str
    else:
        return ""

# enquote - returns string argument with surrounding quotes,
# for serialization into Python code.
def enquote(str):
    if (str):
        return "'" + str + "'"
    else:
        return None

# Primary sort key for regSortFeatures.
# Sorts by category of the feature name string:
#   Core API features (those defined with a <feature> tag)
#   ARB/KHR/OES (Khronos extensions)
#   other (EXT/vendor extensions)
# This will need changing for Vulkan!
def regSortCategoryKey(feature):
    if (feature.elem.tag == 'feature'):
        return 0
    elif (feature.category == 'ARB' or
          feature.category == 'KHR' or
          feature.category == 'OES'):
        return 1
    else:
        return 2

# Secondary sort key for regSortFeatures.
# Sorts by extension name.
def regSortNameKey(feature):
    return feature.name

# Second sort key for regSortFeatures.
# Sorts by feature version. <extension> elements all have version number "0"
def regSortFeatureVersionKey(feature):
    return float(feature.version)

# Tertiary sort key for regSortFeatures.
# Sorts by extension number. <feature> elements all have extension number 0.
def regSortExtensionNumberKey(feature):
    return int(feature.number)

# regSortFeatures - default sort procedure for features.
# Sorts by primary key of feature category ('feature' or 'extension')
#   then by version number (for features)
#   then by extension number (for extensions)
def regSortFeatures(featureList):
    featureList.sort(key = regSortExtensionNumberKey)
    featureList.sort(key = regSortFeatureVersionKey)
    featureList.sort(key = regSortCategoryKey)

# GeneratorOptions - base class for options used during header production
# These options are target language independent, and used by
# Registry.apiGen() and by base OutputGenerator objects.
#
# Members
#   filename - name of file to generate, or None to write to stdout.
#   apiname - string matching 'apiname' attribute, e.g. 'gl'.
#   profile - string specifying API profile, e.g. 'core', or None.
#   versions - regex matching API versions to process interfaces for.
#     Normally '.*' or '[0-9]\.[0-9]' to match all defined versions.
#   emitversions - regex matching API versions to actually emit
#     interfaces for (though all requested versions are considered
#     when deciding which interfaces to generate). For GL 4.3 glext.h,
#     this might be '1\.[2-5]|[2-4]\.[0-9]'.
#   defaultExtensions - If not None, a string which must in its
#     entirety match the pattern in the "supported" attribute of
#     the <extension>.
#   addExtensions - regex matching names of additional extensions
#     to include. Defaults to None.
#   removeExtensions - regex matching names of extensions to
#     remove (after defaultExtensions and addExtensions). Defaults
#     to None.
#   sortProcedure - takes a list of FeatureInfo objects and sorts
#     them in place to a preferred order in the generated output.
#     Default is core API versions, ARB/KHR/OES extensions, all
#     other extensions, alphabetically within each group.
# The regex patterns can be None or empty, in which case they match
# nothing.
class GeneratorOptions:
    """Represents options during header production from an API registry"""
    def __init__(self,
                 filename = None,
                 apiname = None,
                 profile = None,
                 versions = '.*',
                 emitversions = '.*',
                 defaultExtensions = None,
                 addExtensions = None,
                 removeExtensions = None,
                 sortProcedure = regSortFeatures):
        self.filename          = filename
        self.apiname           = apiname
        self.profile           = profile
        self.versions          = self.emptyRegex(versions)
        self.emitversions      = self.emptyRegex(emitversions)
        self.defaultExtensions = defaultExtensions
        self.addExtensions     = self.emptyRegex(addExtensions)
        self.removeExtensions  = self.emptyRegex(removeExtensions)
        self.sortProcedure     = sortProcedure
    #
    # Substitute a regular expression which matches no version
    # or extension names for None or the empty string.
    def emptyRegex(self,pat):
        if (pat == None or pat == ''):
            return '_nomatch_^'
        else:
            return pat

# CGeneratorOptions - subclass of GeneratorOptions.
#
# Adds options used by COutputGenerator objects during C language header
# generation.
#
# Additional members
#   prefixText - list of strings to prefix generated header with
#     (usually a copyright statement + calling convention macros).
#   protectFile - True if multiple inclusion protection should be
#     generated (based on the filename) around the entire header.
#   protectFeature - True if #ifndef..#endif protection should be
#     generated around a feature interface in the header file.
#   genFuncPointers - True if function pointer typedefs should be
#     generated
#   protectProto - If conditional protection should be generated
#     around prototype declarations, set to either '#ifdef'
#     to require opt-in (#ifdef protectProtoStr) or '#ifndef'
#     to require opt-out (#ifndef protectProtoStr). Otherwise
#     set to None.
#   protectProtoStr - #ifdef/#ifndef symbol to use around prototype
#     declarations, if protectProto is set
#   apicall - string to use for the function declaration prefix,
#     such as APICALL on Windows.
#   apientry - string to use for the calling convention macro,
#     in typedefs, such as APIENTRY.
#   apientryp - string to use for the calling convention macro
#     in function pointer typedefs, such as APIENTRYP.
#   indentFuncProto - True if prototype declarations should put each
#     parameter on a separate line
#   indentFuncPointer - True if typedefed function pointers should put each
#     parameter on a separate line
#   alignFuncParam - if nonzero and parameters are being put on a
#     separate line, align parameter names at the specified column
class CGeneratorOptions(GeneratorOptions):
    """Represents options during C interface generation for headers"""
    def __init__(self,
                 filename = None,
                 apiname = None,
                 profile = None,
                 versions = '.*',
                 emitversions = '.*',
                 defaultExtensions = None,
                 addExtensions = None,
                 removeExtensions = None,
                 sortProcedure = regSortFeatures,
                 prefixText = "",
                 genFuncPointers = True,
                 protectFile = True,
                 protectFeature = True,
                 protectProto = None,
                 protectProtoStr = None,
                 apicall = '',
                 apientry = '',
                 apientryp = '',
                 indentFuncProto = True,
                 indentFuncPointer = False,
                 alignFuncParam = 0):
        GeneratorOptions.__init__(self, filename, apiname, profile,
                                  versions, emitversions, defaultExtensions,
                                  addExtensions, removeExtensions, sortProcedure)
        self.prefixText = prefixText
        self.genFuncPointers = genFuncPointers
        self.protectFile = protectFile
        self.protectFeature = protectFeature
        self.protectProto = protectProto
        self.protectProtoStr = protectProtoStr
        self.apicall = apicall
        self.apientry = apientry
        self.apientryp = apientryp
        self.indentFuncProto = indentFuncProto
        self.indentFuncPointer = indentFuncPointer
        self.alignFuncParam = alignFuncParam

# DocGeneratorOptions - subclass of GeneratorOptions.
#
# Shares many members with CGeneratorOptions, since
# both are writing C-style declarations:
#
#   prefixText - list of strings to prefix generated header with
#     (usually a copyright statement + calling convention macros).
#   apicall - string to use for the function declaration prefix,
#     such as APICALL on Windows.
#   apientry - string to use for the calling convention macro,
#     in typedefs, such as APIENTRY.
#   apientryp - string to use for the calling convention macro
#     in function pointer typedefs, such as APIENTRYP.
#   genDirectory - directory into which to generate include files
#   indentFuncProto - True if prototype declarations should put each
#     parameter on a separate line
#   indentFuncPointer - True if typedefed function pointers should put each
#     parameter on a separate line
#   alignFuncParam - if nonzero and parameters are being put on a
#     separate line, align parameter names at the specified column
#
# Additional members:
#
class DocGeneratorOptions(GeneratorOptions):
    """Represents options during C interface generation for Asciidoc"""
    def __init__(self,
                 filename = None,
                 apiname = None,
                 profile = None,
                 versions = '.*',
                 emitversions = '.*',
                 defaultExtensions = None,
                 addExtensions = None,
                 removeExtensions = None,
                 sortProcedure = regSortFeatures,
                 prefixText = "",
                 apicall = '',
                 apientry = '',
                 apientryp = '',
                 genDirectory = 'gen',
                 indentFuncProto = True,
                 indentFuncPointer = False,
                 alignFuncParam = 0,
                 expandEnumerants = True):
        GeneratorOptions.__init__(self, filename, apiname, profile,
                                  versions, emitversions, defaultExtensions,
                                  addExtensions, removeExtensions, sortProcedure)
        self.prefixText = prefixText
        self.apicall = apicall
        self.apientry = apientry
        self.apientryp = apientryp
        self.genDirectory = genDirectory
        self.indentFuncProto = indentFuncProto
        self.indentFuncPointer = indentFuncPointer
        self.alignFuncParam = alignFuncParam
        self.expandEnumerants = expandEnumerants

# ThreadGeneratorOptions - subclass of GeneratorOptions.
#
# Adds options used by COutputGenerator objects during C language header
# generation.
#
# Additional members
#   prefixText - list of strings to prefix generated header with
#     (usually a copyright statement + calling convention macros).
#   protectFile - True if multiple inclusion protection should be
#     generated (based on the filename) around the entire header.
#   protectFeature - True if #ifndef..#endif protection should be
#     generated around a feature interface in the header file.
#   genFuncPointers - True if function pointer typedefs should be
#     generated
#   protectProto - True if #ifdef..#endif protection should be
#     generated around prototype declarations
#   protectProtoStr - #ifdef symbol to use around prototype
#     declarations, if protected
#   apicall - string to use for the function declaration prefix,
#     such as APICALL on Windows.
#   apientry - string to use for the calling convention macro,
#     in typedefs, such as APIENTRY.
#   apientryp - string to use for the calling convention macro
#     in function pointer typedefs, such as APIENTRYP.
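The protectProto/protectProtoStr options documented above drive conditional guards around emitted prototypes. A minimal standalone sketch of that wrapping logic, assuming an illustrative helper name (the real emission happens inside the output generator's endFeature, and the guard symbol shown is just an example):

```python
def emit_protos(protos, protectProto=None, protectProtoStr=None):
    """Wrap prototype declarations in an optional preprocessor guard,
    mirroring how protectProto ('#ifdef' or '#ifndef') and
    protectProtoStr are combined. With protectProto=None, the
    prototypes are emitted unguarded."""
    lines = []
    if protectProto:
        # e.g. "#ifndef VK_NO_PROTOTYPES" for opt-out protection
        lines.append(protectProto + ' ' + protectProtoStr)
    lines.extend(protos)
    if protectProto:
        lines.append('#endif')
    return '\n'.join(lines)
```

Passing `'#ifdef'` makes prototypes opt-in; `'#ifndef'` makes them opt-out, matching the comment block above.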
# indentFuncProto - True if prototype declarations should put each # parameter on a separate line # indentFuncPointer - True if typedefed function pointers should put each # parameter on a separate line # alignFuncParam - if nonzero and parameters are being put on a # separate line, align parameter names at the specified column class ThreadGeneratorOptions(GeneratorOptions): """Represents options during C interface generation for headers""" def __init__(self, filename = None, apiname = None, profile = None, versions = '.*', emitversions = '.*', defaultExtensions = None, addExtensions = None, removeExtensions = None, sortProcedure = regSortFeatures, prefixText = "", genFuncPointers = True, protectFile = True, protectFeature = True, protectProto = True, protectProtoStr = True, apicall = '', apientry = '', apientryp = '', indentFuncProto = True, indentFuncPointer = False, alignFuncParam = 0): GeneratorOptions.__init__(self, filename, apiname, profile, versions, emitversions, defaultExtensions, addExtensions, removeExtensions, sortProcedure) self.prefixText = prefixText self.genFuncPointers = genFuncPointers self.protectFile = protectFile self.protectFeature = protectFeature self.protectProto = protectProto self.protectProtoStr = protectProtoStr self.apicall = apicall self.apientry = apientry self.apientryp = apientryp self.indentFuncProto = indentFuncProto self.indentFuncPointer = indentFuncPointer self.alignFuncParam = alignFuncParam # ParamCheckerGeneratorOptions - subclass of GeneratorOptions. # # Adds options used by ParamCheckerOutputGenerator objects during parameter validation # generation. # # Additional members # prefixText - list of strings to prefix generated header with # (usually a copyright statement + calling convention macros). # protectFile - True if multiple inclusion protection should be # generated (based on the filename) around the entire header. 
# protectFeature - True if #ifndef..#endif protection should be # generated around a feature interface in the header file. # genFuncPointers - True if function pointer typedefs should be # generated # protectProto - If conditional protection should be generated # around prototype declarations, set to either '#ifdef' # to require opt-in (#ifdef protectProtoStr) or '#ifndef' # to require opt-out (#ifndef protectProtoStr). Otherwise # set to None. # protectProtoStr - #ifdef/#ifndef symbol to use around prototype # declarations, if protectProto is set # apicall - string to use for the function declaration prefix, # such as APICALL on Windows. # apientry - string to use for the calling convention macro, # in typedefs, such as APIENTRY. # apientryp - string to use for the calling convention macro # in function pointer typedefs, such as APIENTRYP. # indentFuncProto - True if prototype declarations should put each # parameter on a separate line # indentFuncPointer - True if typedefed function pointers should put each # parameter on a separate line # alignFuncParam - if nonzero and parameters are being put on a # separate line, align parameter names at the specified column class ParamCheckerGeneratorOptions(GeneratorOptions): """Represents options during C interface generation for headers""" def __init__(self, filename = None, apiname = None, profile = None, versions = '.*', emitversions = '.*', defaultExtensions = None, addExtensions = None, removeExtensions = None, sortProcedure = regSortFeatures, prefixText = "", genFuncPointers = True, protectFile = True, protectFeature = True, protectProto = None, protectProtoStr = None, apicall = '', apientry = '', apientryp = '', indentFuncProto = True, indentFuncPointer = False, alignFuncParam = 0): GeneratorOptions.__init__(self, filename, apiname, profile, versions, emitversions, defaultExtensions, addExtensions, removeExtensions, sortProcedure) self.prefixText = prefixText self.genFuncPointers = genFuncPointers self.protectFile = 
protectFile self.protectFeature = protectFeature self.protectProto = protectProto self.protectProtoStr = protectProtoStr self.apicall = apicall self.apientry = apientry self.apientryp = apientryp self.indentFuncProto = indentFuncProto self.indentFuncPointer = indentFuncPointer self.alignFuncParam = alignFuncParam # OutputGenerator - base class for generating API interfaces. # Manages basic logic, logging, and output file control # Derived classes actually generate formatted output. # # ---- methods ---- # OutputGenerator(errFile, warnFile, diagFile) # errFile, warnFile, diagFile - file handles to write errors, # warnings, diagnostics to. May be None to not write. # logMsg(level, *args) - log messages of different categories # level - 'error', 'warn', or 'diag'. 'error' will also # raise a UserWarning exception # *args - print()-style arguments # setExtMap(map) - specify a dictionary map from extension names to # numbers, used in creating values for extension enumerants. # beginFile(genOpts) - start a new interface file # genOpts - GeneratorOptions controlling what's generated and how # endFile() - finish an interface file, closing it when done # beginFeature(interface, emit) - write interface for a feature # and tag generated features as having been done. # interface - element for the / to generate # emit - actually write to the header only when True # endFeature() - finish an interface. # genType(typeinfo,name) - generate interface for a type # typeinfo - TypeInfo for a type # genStruct(typeinfo,name) - generate interface for a C "struct" type. 
#   typeinfo - TypeInfo for a type interpreted as a struct
# genGroup(groupinfo,name) - generate interface for a group of enums (C "enum")
#   groupinfo - GroupInfo for a group
# genEnum(enuminfo, name) - generate interface for an enum (constant)
#   enuminfo - EnumInfo for an enum
#   name - enum name
# genCmd(cmdinfo) - generate interface for a command
#   cmdinfo - CmdInfo for a command
# makeCDecls(cmd) - return C prototype and function pointer typedef for a
#   <command> Element, as a list of two strings
#   cmd - Element for the <command>
# newline() - print a newline to the output file (utility function)
#
class OutputGenerator:
    """Generate specified API interfaces in a specific style, such as a C header"""
    def __init__(self,
                 errFile = sys.stderr,
                 warnFile = sys.stderr,
                 diagFile = sys.stdout):
        self.outFile = None
        self.errFile = errFile
        self.warnFile = warnFile
        self.diagFile = diagFile
        # Internal state
        self.featureName = None
        self.genOpts = None
        self.registry = None
        # Used for extension enum value generation
        self.extBase = 1000000000
        self.extBlockSize = 1000
    #
    # logMsg - write a message of different categories to different
    # destinations.
    # level -
    #   'diag' (diagnostic, voluminous)
    #   'warn' (warning)
    #   'error' (fatal error - raises exception after logging)
    # *args - print()-style arguments to direct to corresponding log
    def logMsg(self, level, *args):
        """Log a message at the given level. Can be ignored or log to a file"""
        if (level == 'error'):
            strfile = io.StringIO()
            write('ERROR:', *args, file=strfile)
            if (self.errFile != None):
                write(strfile.getvalue(), file=self.errFile)
            raise UserWarning(strfile.getvalue())
        elif (level == 'warn'):
            if (self.warnFile != None):
                write('WARNING:', *args, file=self.warnFile)
        elif (level == 'diag'):
            if (self.diagFile != None):
                write('DIAG:', *args, file=self.diagFile)
        else:
            raise UserWarning(
                '*** FATAL ERROR in Generator.logMsg: unknown level:' + level)
    #
    # enumToValue - parses and converts an <enum> tag into a value.
    # Returns a list
    #   first element - integer representation of the value, or None
    #       if needsNum is False. The value must be a legal number
    #       if needsNum is True.
    #   second element - string representation of the value
    # There are several possible representations of values.
    #   A 'value' attribute simply contains the value.
    #   A 'bitpos' attribute defines a value by specifying the bit
    #       position which is set in that value.
    #   A 'offset','extbase','extends' triplet specifies a value
    #       as an offset to a base value defined by the specified
    #       'extbase' extension name, which is then cast to the
    #       typename specified by 'extends'. This requires probing
    #       the registry database, and embeds knowledge of the
    #       Vulkan extension enum scheme in this function.
    def enumToValue(self, elem, needsNum):
        name = elem.get('name')
        numVal = None
        if ('value' in elem.keys()):
            value = elem.get('value')
            # print('About to translate value =', value, 'type =', type(value))
            if (needsNum):
                numVal = int(value, 0)
            # If there's a non-integer, numeric 'type' attribute (e.g. 'u' or
            # 'ull'), append it to the string value.
            # t = enuminfo.elem.get('type')
            # if (t != None and t != '' and t != 'i' and t != 's'):
            #     value += enuminfo.type
            self.logMsg('diag', 'Enum', name, '-> value [', numVal, ',', value, ']')
            return [numVal, value]
        if ('bitpos' in elem.keys()):
            value = elem.get('bitpos')
            numVal = int(value, 0)
            numVal = 1 << numVal
            value = '0x%08x' % numVal
            self.logMsg('diag', 'Enum', name, '-> bitpos [', numVal, ',', value, ']')
            return [numVal, value]
        if ('offset' in elem.keys()):
            # Obtain values in the mapping from the attributes
            enumNegative = False
            offset = int(elem.get('offset'),0)
            extnumber = int(elem.get('extnumber'),0)
            extends = elem.get('extends')
            if ('dir' in elem.keys()):
                enumNegative = True
            self.logMsg('diag', 'Enum', name, 'offset =', offset,
                'extnumber =', extnumber, 'extends =', extends,
                'enumNegative =', enumNegative)
            # Now determine the actual enumerant value, as defined
            # in the "Layers and Extensions" appendix of the spec.
            numVal = self.extBase + (extnumber - 1) * self.extBlockSize + offset
            if (enumNegative):
                numVal = -numVal
            value = '%d' % numVal
            # More logic needed!
            self.logMsg('diag', 'Enum', name, '-> offset [', numVal, ',', value, ']')
            return [numVal, value]
        return [None, None]
    #
    def beginFile(self, genOpts):
        self.genOpts = genOpts
        #
        # Open specified output file. Not done in constructor since a
        # Generator can be used without writing to a file.
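The 'offset' branch of enumToValue computes extension enumerant values with `extBase + (extnumber - 1) * extBlockSize + offset`, negated when a 'dir' attribute is present. A standalone sketch of the same arithmetic (function and constant names here are illustrative, not part of the generator API):

```python
EXT_BASE = 1000000000   # matches OutputGenerator.extBase
EXT_BLOCK_SIZE = 1000   # matches OutputGenerator.extBlockSize

def ext_enum_value(extnumber, offset, negative=False):
    """Compute an extension enumerant value the way enumToValue's
    'offset' branch does; 'negative' corresponds to the XML 'dir'
    attribute that flips the sign."""
    value = EXT_BASE + (extnumber - 1) * EXT_BLOCK_SIZE + offset
    return -value if negative else value
```

Extension 1, offset 0 therefore yields 1000000000, the first value in the extension enum space.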
        if (self.genOpts.filename != None):
            self.outFile = open(self.genOpts.filename, 'w')
        else:
            self.outFile = sys.stdout
    def endFile(self):
        self.errFile and self.errFile.flush()
        self.warnFile and self.warnFile.flush()
        self.diagFile and self.diagFile.flush()
        self.outFile.flush()
        if (self.outFile != sys.stdout and self.outFile != sys.stderr):
            self.outFile.close()
        self.genOpts = None
    #
    def beginFeature(self, interface, emit):
        self.emit = emit
        self.featureName = interface.get('name')
        # If there's an additional 'protect' attribute in the feature, save it
        self.featureExtraProtect = interface.get('protect')
    def endFeature(self):
        # Derived classes responsible for emitting feature
        self.featureName = None
        self.featureExtraProtect = None
    # Utility method to validate we're generating something only inside a
    # <feature> tag
    def validateFeature(self, featureType, featureName):
        if (self.featureName == None):
            raise UserWarning('Attempt to generate', featureType, featureName,
                    'when not in feature')
    #
    # Type generation
    def genType(self, typeinfo, name):
        self.validateFeature('type', name)
    #
    # Struct (e.g. C "struct" type) generation
    def genStruct(self, typeinfo, name):
        self.validateFeature('struct', name)
    #
    # Group (e.g. C "enum" type) generation
    def genGroup(self, groupinfo, name):
        self.validateFeature('group', name)
    #
    # Enumerant (really, constant) generation
    def genEnum(self, enuminfo, name):
        self.validateFeature('enum', name)
    #
    # Command generation
    def genCmd(self, cmd, name):
        self.validateFeature('command', name)
    #
    # Utility functions - turn a <proto> <name> into C-language prototype
    # and typedef declarations for that name.
    # name - contents of <name> tag
    # tail - whatever text follows that tag in the Element
    def makeProtoName(self, name, tail):
        return self.genOpts.apientry + name + tail
    def makeTypedefName(self, name, tail):
        return '(' + self.genOpts.apientryp + 'PFN_' + name + tail + ')'
    #
    # makeCParamDecl - return a string which is an indented, formatted
    # declaration for a <param> or <member> block (e.g. function parameter
    # or structure/union member).
    # param - Element (<param> or <member>) to format
    # aligncol - if non-zero, attempt to align the nested <name> element
    #   at this column
    def makeCParamDecl(self, param, aligncol):
        paramdecl = '    ' + noneStr(param.text)
        for elem in param:
            text = noneStr(elem.text)
            tail = noneStr(elem.tail)
            if (elem.tag == 'name' and aligncol > 0):
                self.logMsg('diag', 'Aligning parameter', elem.text,
                            'to column', self.genOpts.alignFuncParam)
                # Align at specified column, if possible
                paramdecl = paramdecl.rstrip()
                oldLen = len(paramdecl)
                paramdecl = paramdecl.ljust(aligncol)
                newLen = len(paramdecl)
                self.logMsg('diag', 'Adjust length of parameter decl from',
                            oldLen, 'to', newLen, ':', paramdecl)
            paramdecl += text + tail
        return paramdecl
    #
    # getCParamTypeLength - return the length of the type field in an indented,
    # formatted declaration for a <param> or <member> block (e.g. function
    # parameter or structure/union member).
    # param - Element (<param> or <member>) to identify
    def getCParamTypeLength(self, param):
        paramdecl = '    ' + noneStr(param.text)
        for elem in param:
            text = noneStr(elem.text)
            tail = noneStr(elem.tail)
            if (elem.tag == 'name'):
                # Align at specified column, if possible
                newLen = len(paramdecl.rstrip())
                self.logMsg('diag', 'Identifying length of', elem.text,
                            'as', newLen)
            paramdecl += text + tail
        return newLen
    #
    # makeCDecls - return C prototype and function pointer typedef for a
    #   command, as a two-element list of strings.
    # cmd - Element containing a <command> tag
    def makeCDecls(self, cmd):
        """Generate C function pointer typedef for <command> Element"""
        proto = cmd.find('proto')
        params = cmd.findall('param')
        # Begin accumulating prototype and typedef strings
        pdecl = self.genOpts.apicall
        tdecl = 'typedef '
        #
        # Insert the function return type/name.
        # For prototypes, add APIENTRY macro before the name
        # For typedefs, add (APIENTRY *) around the name and
        #   use the PFN_cmdnameproc naming convention.
        # Done by walking the tree for <proto> element by element.
# lxml.etree has elem.text followed by (elem[i], elem[i].tail) # for each child element and any following text # Leading text pdecl += noneStr(proto.text) tdecl += noneStr(proto.text) # For each child element, if it's a wrap in appropriate # declaration. Otherwise append its contents and tail contents. for elem in proto: text = noneStr(elem.text) tail = noneStr(elem.tail) if (elem.tag == 'name'): pdecl += self.makeProtoName(text, tail) tdecl += self.makeTypedefName(text, tail) else: pdecl += text + tail tdecl += text + tail # Now add the parameter declaration list, which is identical # for prototypes and typedefs. Concatenate all the text from # a node without the tags. No tree walking required # since all tags are ignored. # Uses: self.indentFuncProto # self.indentFuncPointer # self.alignFuncParam # Might be able to doubly-nest the joins, e.g. # ','.join(('_'.join([l[i] for i in range(0,len(l))]) n = len(params) # Indented parameters if n > 0: indentdecl = '(\n' for i in range(0,n): paramdecl = self.makeCParamDecl(params[i], self.genOpts.alignFuncParam) if (i < n - 1): paramdecl += ',\n' else: paramdecl += ');' indentdecl += paramdecl else: indentdecl = '(void);' # Non-indented parameters paramdecl = '(' if n > 0: for i in range(0,n): paramdecl += ''.join([t for t in params[i].itertext()]) if (i < n - 1): paramdecl += ', ' else: paramdecl += 'void' paramdecl += ");"; return [ pdecl + indentdecl, tdecl + paramdecl ] # def newline(self): write('', file=self.outFile) def setRegistry(self, registry): self.registry = registry # # COutputGenerator - subclass of OutputGenerator. # Generates C-language API interfaces. # # ---- methods ---- # COutputGenerator(errFile, warnFile, diagFile) - args as for # OutputGenerator. Defines additional internal state. 
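makeProtoName and makeTypedefName are simple string builders that prepend the configured calling-convention macros. A standalone sketch with a representative default (the real prefix comes from the apientry/apientryp generator options; the default shown here is only an example):

```python
def make_typedef_name(name, apientryp='VKAPI_PTR *', tail=''):
    """Build a function-pointer typedef name the way makeTypedefName
    does: '(' + apientryp + 'PFN_' + name + tail + ')'."""
    return '(' + apientryp + 'PFN_' + name + tail + ')'
```

This produces the parenthesized `PFN_`-prefixed name that appears in generated `typedef ... (VKAPI_PTR *PFN_vkXxx)(...)` declarations.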
# ---- methods overriding base class ---- # beginFile(genOpts) # endFile() # beginFeature(interface, emit) # endFeature() # genType(typeinfo,name) # genStruct(typeinfo,name) # genGroup(groupinfo,name) # genEnum(enuminfo, name) # genCmd(cmdinfo) class COutputGenerator(OutputGenerator): """Generate specified API interfaces in a specific style, such as a C header""" # This is an ordered list of sections in the header file. TYPE_SECTIONS = ['include', 'define', 'basetype', 'handle', 'enum', 'group', 'bitmask', 'funcpointer', 'struct'] ALL_SECTIONS = TYPE_SECTIONS + ['commandPointer', 'command'] def __init__(self, errFile = sys.stderr, warnFile = sys.stderr, diagFile = sys.stdout): OutputGenerator.__init__(self, errFile, warnFile, diagFile) # Internal state - accumulators for different inner block text self.sections = dict([(section, []) for section in self.ALL_SECTIONS]) # def beginFile(self, genOpts): OutputGenerator.beginFile(self, genOpts) # C-specific # # Multiple inclusion protection & C++ wrappers. if (genOpts.protectFile and self.genOpts.filename): headerSym = re.sub('\.h', '_h_', os.path.basename(self.genOpts.filename)).upper() write('#ifndef', headerSym, file=self.outFile) write('#define', headerSym, '1', file=self.outFile) self.newline() write('#ifdef __cplusplus', file=self.outFile) write('extern "C" {', file=self.outFile) write('#endif', file=self.outFile) self.newline() # # User-supplied prefix text, if any (list of strings) if (genOpts.prefixText): for s in genOpts.prefixText: write(s, file=self.outFile) # # Some boilerplate describing what was generated - this # will probably be removed later since the extensions # pattern may be very long. 
# write('/* Generated C header for:', file=self.outFile) # write(' * API:', genOpts.apiname, file=self.outFile) # if (genOpts.profile): # write(' * Profile:', genOpts.profile, file=self.outFile) # write(' * Versions considered:', genOpts.versions, file=self.outFile) # write(' * Versions emitted:', genOpts.emitversions, file=self.outFile) # write(' * Default extensions included:', genOpts.defaultExtensions, file=self.outFile) # write(' * Additional extensions included:', genOpts.addExtensions, file=self.outFile) # write(' * Extensions removed:', genOpts.removeExtensions, file=self.outFile) # write(' */', file=self.outFile) def endFile(self): # C-specific # Finish C++ wrapper and multiple inclusion protection self.newline() write('#ifdef __cplusplus', file=self.outFile) write('}', file=self.outFile) write('#endif', file=self.outFile) if (self.genOpts.protectFile and self.genOpts.filename): self.newline() write('#endif', file=self.outFile) # Finish processing in superclass OutputGenerator.endFile(self) def beginFeature(self, interface, emit): # Start processing in superclass OutputGenerator.beginFeature(self, interface, emit) # C-specific # Accumulate includes, defines, types, enums, function pointer typedefs, # end function prototypes separately for this feature. They're only # printed in endFeature(). self.sections = dict([(section, []) for section in self.ALL_SECTIONS]) def endFeature(self): # C-specific # Actually write the interface to the output file. if (self.emit): self.newline() if (self.genOpts.protectFeature): write('#ifndef', self.featureName, file=self.outFile) # If type declarations are needed by other features based on # this one, it may be necessary to suppress the ExtraProtect, # or move it below the 'for section...' loop. 
if (self.featureExtraProtect != None): write('#ifdef', self.featureExtraProtect, file=self.outFile) write('#define', self.featureName, '1', file=self.outFile) for section in self.TYPE_SECTIONS: contents = self.sections[section] if contents: write('\n'.join(contents), file=self.outFile) self.newline() if (self.genOpts.genFuncPointers and self.sections['commandPointer']): write('\n'.join(self.sections['commandPointer']), file=self.outFile) self.newline() if (self.sections['command']): if (self.genOpts.protectProto): write(self.genOpts.protectProto, self.genOpts.protectProtoStr, file=self.outFile) write('\n'.join(self.sections['command']), end='', file=self.outFile) if (self.genOpts.protectProto): write('#endif', file=self.outFile) else: self.newline() if (self.featureExtraProtect != None): write('#endif /*', self.featureExtraProtect, '*/', file=self.outFile) if (self.genOpts.protectFeature): write('#endif /*', self.featureName, '*/', file=self.outFile) # Finish processing in superclass OutputGenerator.endFeature(self) # # Append a definition to the specified section def appendSection(self, section, text): # self.sections[section].append('SECTION: ' + section + '\n') self.sections[section].append(text) # # Type generation def genType(self, typeinfo, name): OutputGenerator.genType(self, typeinfo, name) typeElem = typeinfo.elem # If the type is a struct type, traverse the imbedded tags # generating a structure. Otherwise, emit the tag text. category = typeElem.get('category') if (category == 'struct' or category == 'union'): self.genStruct(typeinfo, name) else: # Replace tags with an APIENTRY-style string # (from self.genOpts). Copy other text through unchanged. # If the resulting text is an empty string, don't emit it. s = noneStr(typeElem.text) for elem in typeElem: if (elem.tag == 'apientry'): s += self.genOpts.apientry + noneStr(elem.tail) else: s += noneStr(elem.text) + noneStr(elem.tail) if s: # Add extra newline after multi-line entries. 
                if '\n' in s:
                    s += '\n'
                self.appendSection(category, s)
    #
    # Struct (e.g. C "struct" type) generation.
    # This is a special case of the <type> tag where the contents are
    # interpreted as a set of <member> tags instead of freeform C
    # C type declarations. The <member> tags are just like <param>
    # tags - they are a declaration of a struct or union member.
    # Only simple member declarations are supported (no nested
    # structs etc.)
    def genStruct(self, typeinfo, typeName):
        OutputGenerator.genStruct(self, typeinfo, typeName)
        body = 'typedef ' + typeinfo.elem.get('category') + ' ' + typeName + ' {\n'
        # paramdecl = self.makeCParamDecl(typeinfo.elem, self.genOpts.alignFuncParam)
        targetLen = 0
        for member in typeinfo.elem.findall('.//member'):
            targetLen = max(targetLen, self.getCParamTypeLength(member))
        for member in typeinfo.elem.findall('.//member'):
            body += self.makeCParamDecl(member, targetLen + 4)
            body += ';\n'
        body += '} ' + typeName + ';\n'
        self.appendSection('struct', body)
    #
    # Group (e.g. C "enum" type) generation.
    # These are concatenated together with other types.
    def genGroup(self, groupinfo, groupName):
        OutputGenerator.genGroup(self, groupinfo, groupName)
        groupElem = groupinfo.elem
        expandName = re.sub(r'([0-9a-z_])([A-Z0-9][^A-Z0-9]?)',r'\1_\2',groupName).upper()
        expandPrefix = expandName
        expandSuffix = ''
        expandSuffixMatch = re.search(r'[A-Z][A-Z]+$',groupName)
        if expandSuffixMatch:
            expandSuffix = '_' + expandSuffixMatch.group()
            # Strip off the suffix from the prefix
            expandPrefix = expandName.rsplit(expandSuffix, 1)[0]
        # Prefix
        body = "\ntypedef enum " + groupName + " {\n"
        isEnum = ('FLAG_BITS' not in expandPrefix)
        # Loop over the nested 'enum' tags. Keep track of the minimum and
        # maximum numeric values, if they can be determined; but only for
        # core API enumerants, not extension enumerants. This is inferred
        # by looking for 'extends' attributes.
        minName = None
        for elem in groupElem.findall('enum'):
            # Convert the value to an integer and use that to track min/max.
            # Values of form -(number) are accepted but nothing more complex.
            # Should catch exceptions here for more complex constructs. Not yet.
            (numVal,strVal) = self.enumToValue(elem, True)
            name = elem.get('name')
            # Extension enumerants are only included if they are requested
            # in addExtensions or match defaultExtensions.
            if (elem.get('extname') is None or
                re.match(self.genOpts.addExtensions,elem.get('extname')) is not None or
                self.genOpts.defaultExtensions == elem.get('supported')):
                body += "    " + name + " = " + strVal + ",\n"
            if (isEnum and elem.get('extends') is None):
                if (minName == None):
                    minName = maxName = name
                    minValue = maxValue = numVal
                elif (numVal < minValue):
                    minName = name
                    minValue = numVal
                elif (numVal > maxValue):
                    maxName = name
                    maxValue = numVal
        # Generate min/max value tokens and a range-padding enum. Need some
        # additional padding to generate correct names...
        if isEnum:
            body += "    " + expandPrefix + "_BEGIN_RANGE" + expandSuffix + " = " + minName + ",\n"
            body += "    " + expandPrefix + "_END_RANGE" + expandSuffix + " = " + maxName + ",\n"
            body += "    " + expandPrefix + "_RANGE_SIZE" + expandSuffix + " = (" + maxName + " - " + minName + " + 1),\n"
        body += "    " + expandPrefix + "_MAX_ENUM" + expandSuffix + " = 0x7FFFFFFF\n"
        # Postfix
        body += "} " + groupName + ";"
        if groupElem.get('type') == 'bitmask':
            section = 'bitmask'
        else:
            section = 'group'
        self.appendSection(section, body)
    # Enumerant generation
    # <enum> tags may specify their values in several ways, but are usually
    # just integers.
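genGroup derives the prefix for the generated `_BEGIN_RANGE`/`_END_RANGE`/`_MAX_ENUM` tokens by converting the CamelCase group name to upper snake case with a `re.sub`. A standalone check of that expansion (the helper name is illustrative; the regex is the one used in genGroup):

```python
import re

def expand_group_name(group_name):
    """Convert a CamelCase enum group name to the SHOUTY_SNAKE prefix
    used for range-padding tokens, e.g. VkImageType -> VK_IMAGE_TYPE,
    using the same substitution as genGroup."""
    return re.sub(r'([0-9a-z_])([A-Z0-9][^A-Z0-9]?)', r'\1_\2', group_name).upper()
```

For `VkImageType` this yields `VK_IMAGE_TYPE`, from which tokens such as `VK_IMAGE_TYPE_MAX_ENUM` are assembled.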
def genEnum(self, enuminfo, name): OutputGenerator.genEnum(self, enuminfo, name) (numVal,strVal) = self.enumToValue(enuminfo.elem, False) body = '#define ' + name.ljust(33) + ' ' + strVal self.appendSection('enum', body) # # Command generation def genCmd(self, cmdinfo, name): OutputGenerator.genCmd(self, cmdinfo, name) # decls = self.makeCDecls(cmdinfo.elem) self.appendSection('command', decls[0] + '\n') if (self.genOpts.genFuncPointers): self.appendSection('commandPointer', decls[1]) # DocOutputGenerator - subclass of OutputGenerator. # Generates AsciiDoc includes with C-language API interfaces, for reference # pages and the Vulkan specification. Similar to COutputGenerator, but # each interface is written into a different file as determined by the # options, only actual C types are emitted, and none of the boilerplate # preprocessor code is emitted. # # ---- methods ---- # DocOutputGenerator(errFile, warnFile, diagFile) - args as for # OutputGenerator. Defines additional internal state. 
# ---- methods overriding base class ---- # beginFile(genOpts) # endFile() # beginFeature(interface, emit) # endFeature() # genType(typeinfo,name) # genStruct(typeinfo,name) # genGroup(groupinfo,name) # genEnum(enuminfo, name) # genCmd(cmdinfo) class DocOutputGenerator(OutputGenerator): """Generate specified API interfaces in a specific style, such as a C header""" def __init__(self, errFile = sys.stderr, warnFile = sys.stderr, diagFile = sys.stdout): OutputGenerator.__init__(self, errFile, warnFile, diagFile) # def beginFile(self, genOpts): OutputGenerator.beginFile(self, genOpts) def endFile(self): OutputGenerator.endFile(self) def beginFeature(self, interface, emit): # Start processing in superclass OutputGenerator.beginFeature(self, interface, emit) def endFeature(self): # Finish processing in superclass OutputGenerator.endFeature(self) # # Generate an include file # # directory - subdirectory to put file in # basename - base name of the file # contents - contents of the file (Asciidoc boilerplate aside) def writeInclude(self, directory, basename, contents): # Create file filename = self.genOpts.genDirectory + '/' + directory + '/' + basename + '.txt' self.logMsg('diag', '# Generating include file:', filename) fp = open(filename, 'w') # Asciidoc anchor write('// WARNING: DO NOT MODIFY! 
This file is automatically generated from the vk.xml registry', file=fp) write('ifndef::doctype-manpage[]', file=fp) write('[[{0},{0}]]'.format(basename), file=fp) write('["source","{basebackend@docbook:c++:cpp}",title=""]', file=fp) write('endif::doctype-manpage[]', file=fp) write('ifdef::doctype-manpage[]', file=fp) write('["source","{basebackend@docbook:c++:cpp}"]', file=fp) write('endif::doctype-manpage[]', file=fp) write('------------------------------------------------------------------------------', file=fp) write(contents, file=fp) write('------------------------------------------------------------------------------', file=fp) fp.close() # # Type generation def genType(self, typeinfo, name): OutputGenerator.genType(self, typeinfo, name) typeElem = typeinfo.elem # If the type is a struct type, traverse the imbedded tags # generating a structure. Otherwise, emit the tag text. category = typeElem.get('category') if (category == 'struct' or category == 'union'): self.genStruct(typeinfo, name) else: # Replace tags with an APIENTRY-style string # (from self.genOpts). Copy other text through unchanged. # If the resulting text is an empty string, don't emit it. s = noneStr(typeElem.text) for elem in typeElem: if (elem.tag == 'apientry'): s += self.genOpts.apientry + noneStr(elem.tail) else: s += noneStr(elem.text) + noneStr(elem.tail) if (len(s) > 0): if (category == 'bitmask'): self.writeInclude('flags', name, s + '\n') elif (category == 'enum'): self.writeInclude('enums', name, s + '\n') elif (category == 'funcpointer'): self.writeInclude('funcpointers', name, s+ '\n') else: self.logMsg('diag', '# NOT writing include file for type:', name, 'category: ', category) else: self.logMsg('diag', '# NOT writing empty include file for type', name) # # Struct (e.g. C "struct" type) generation. # This is a special case of the tag where the contents are # interpreted as a set of tags instead of freeform C # C type declarations. 
The tags are just like # tags - they are a declaration of a struct or union member. # Only simple member declarations are supported (no nested # structs etc.) def genStruct(self, typeinfo, typeName): OutputGenerator.genStruct(self, typeinfo, typeName) s = 'typedef ' + typeinfo.elem.get('category') + ' ' + typeName + ' {\n' # paramdecl = self.makeCParamDecl(typeinfo.elem, self.genOpts.alignFuncParam) targetLen = 0; for member in typeinfo.elem.findall('.//member'): targetLen = max(targetLen, self.getCParamTypeLength(member)) for member in typeinfo.elem.findall('.//member'): s += self.makeCParamDecl(member, targetLen + 4) s += ';\n' s += '} ' + typeName + ';' self.writeInclude('structs', typeName, s) # # Group (e.g. C "enum" type) generation. # These are concatenated together with other types. def genGroup(self, groupinfo, groupName): OutputGenerator.genGroup(self, groupinfo, groupName) groupElem = groupinfo.elem # See if we need min/max/num/padding at end expand = self.genOpts.expandEnumerants if expand: expandName = re.sub(r'([0-9a-z_])([A-Z0-9][^A-Z0-9]?)',r'\1_\2',groupName).upper() isEnum = ('FLAG_BITS' not in expandName) expandPrefix = expandName expandSuffix = '' # Look for a suffix expandSuffixMatch = re.search(r'[A-Z][A-Z]+$',groupName) if expandSuffixMatch: expandSuffix = '_' + expandSuffixMatch.group() # Strip off the suffix from the prefix expandPrefix = expandName.rsplit(expandSuffix, 1)[0] # Prefix s = "typedef enum " + groupName + " {\n" # Loop over the nested 'enum' tags. Keep track of the minimum and # maximum numeric values, if they can be determined. minName = None for elem in groupElem.findall('enum'): # Convert the value to an integer and use that to track min/max. # Values of form -(number) are accepted but nothing more complex. # Should catch exceptions here for more complex constructs. Not yet. 
            (numVal, strVal) = self.enumToValue(elem, True)
            name = elem.get('name')
            # Extension enumerants are only included if they are requested
            # in addExtensions or match defaultExtensions.
            if (elem.get('extname') is None or
                re.match(self.genOpts.addExtensions, elem.get('extname')) is not None or
                self.genOpts.defaultExtensions == elem.get('supported')):
                s += "    " + name + " = " + strVal + ",\n"
            if (expand and isEnum and elem.get('extends') is None):
                if (minName == None):
                    minName = maxName = name
                    minValue = maxValue = numVal
                elif (numVal < minValue):
                    minName = name
                    minValue = numVal
                elif (numVal > maxValue):
                    maxName = name
                    maxValue = numVal
        # Generate min/max value tokens and a range-padding enum. Need some
        # additional padding to generate correct names...
        if (expand):
            s += "\n"
            if isEnum:
                s += "    " + expandPrefix + "_BEGIN_RANGE" + expandSuffix + " = " + minName + ",\n"
                s += "    " + expandPrefix + "_END_RANGE" + expandSuffix + " = " + maxName + ",\n"
                s += "    " + expandPrefix + "_RANGE_SIZE" + expandSuffix + " = (" + maxName + " - " + minName + " + 1),\n"
            s += "    " + expandPrefix + "_MAX_ENUM" + expandSuffix + " = 0x7FFFFFFF\n"
        # Postfix
        s += "} " + groupName + ";"
        self.writeInclude('enums', groupName, s)
    # Enumerant generation
    # <enum> tags may specify their values in several ways, but are usually
    # just integers.
    def genEnum(self, enuminfo, name):
        OutputGenerator.genEnum(self, enuminfo, name)
        (numVal, strVal) = self.enumToValue(enuminfo.elem, False)
        s = '#define ' + name.ljust(33) + ' ' + strVal
        self.logMsg('diag', '# NOT writing compile-time constant', name)
        # self.writeInclude('consts', name, s)
    #
    # Command generation
    def genCmd(self, cmdinfo, name):
        OutputGenerator.genCmd(self, cmdinfo, name)
        #
        decls = self.makeCDecls(cmdinfo.elem)
        self.writeInclude('protos', name, decls[0])

# PyOutputGenerator - subclass of OutputGenerator.
# Generates Python data structures describing API names.
# Similar to DocOutputGenerator, but writes a single
# file.
#
# ---- methods ----
# PyOutputGenerator(errFile, warnFile, diagFile) - args as for
#   OutputGenerator. Defines additional internal state.
# ---- methods overriding base class ----
# beginFile(genOpts)
# endFile()
# genType(typeinfo, name)
# genStruct(typeinfo, name)
# genGroup(groupinfo, name)
# genEnum(enuminfo, name)
# genCmd(cmdinfo)
class PyOutputGenerator(OutputGenerator):
    """Generate specified API interfaces in a specific style, such as a C header"""
    def __init__(self,
                 errFile = sys.stderr,
                 warnFile = sys.stderr,
                 diagFile = sys.stdout):
        OutputGenerator.__init__(self, errFile, warnFile, diagFile)
    #
    def beginFile(self, genOpts):
        OutputGenerator.beginFile(self, genOpts)
        for dict in [ 'flags', 'enums', 'structs', 'consts',
                      'protos', 'funcpointers' ]:
            write(dict, '= {}', file=self.outFile)
    def endFile(self):
        OutputGenerator.endFile(self)
    #
    # Add a name from the interface
    #
    # dict - type of name (see beginFile above)
    # name - name to add
    # value - A serializable Python value for the name
    def addName(self, dict, name, value=None):
        write(dict + "['" + name + "'] = ", value, file=self.outFile)
    #
    # Type generation
    # For 'struct' or 'union' types, defer to genStruct() to
    #   add to the dictionary.
    # For 'bitmask' types, add the type name to the 'flags' dictionary,
    #   with the value being the corresponding 'enums' name defining
    #   the acceptable flag bits.
    # For 'enum' types, add the type name to the 'enums' dictionary,
    #   with the value being '@STOPHERE@' (because this case seems
    #   never to happen).
    # For 'funcpointer' types, add the type name to the 'funcpointers'
    #   dictionary.
    # For 'handle' and 'define' types, add the handle or #define name
    #   to the 'struct' dictionary, because that's how the spec sources
    #   tag these types even though they aren't structs.
    def genType(self, typeinfo, name):
        OutputGenerator.genType(self, typeinfo, name)
        typeElem = typeinfo.elem
        # If the type is a struct type, traverse the imbedded <member> tags
        # generating a structure. Otherwise, emit the tag text.
        category = typeElem.get('category')
        if (category == 'struct' or category == 'union'):
            self.genStruct(typeinfo, name)
        else:
            # Extract the type name
            # (from self.genOpts). Copy other text through unchanged.
            # If the resulting text is an empty string, don't emit it.
            count = len(noneStr(typeElem.text))
            for elem in typeElem:
                count += len(noneStr(elem.text)) + len(noneStr(elem.tail))
            if (count > 0):
                if (category == 'bitmask'):
                    requiredEnum = typeElem.get('requires')
                    self.addName('flags', name, enquote(requiredEnum))
                elif (category == 'enum'):
                    # This case never seems to come up!
                    # @enums   C 'enum' name           Dictionary of enumerant names
                    self.addName('enums', name, enquote('@STOPHERE@'))
                elif (category == 'funcpointer'):
                    self.addName('funcpointers', name, None)
                elif (category == 'handle' or category == 'define'):
                    self.addName('structs', name, None)
                else:
                    write('# Unprocessed type:', name, 'category:', category, file=self.outFile)
            else:
                write('# Unprocessed type:', name, file=self.outFile)
    #
    # Struct (e.g. C "struct" type) generation.
    #
    # Add the struct name to the 'structs' dictionary, with the
    # value being an ordered list of the struct member names.
    def genStruct(self, typeinfo, typeName):
        OutputGenerator.genStruct(self, typeinfo, typeName)
        members = [member.text for member in typeinfo.elem.findall('.//member/name')]
        self.addName('structs', typeName, members)
    #
    # Group (e.g. C "enum" type) generation.
    # These are concatenated together with other types.
    #
    # Add the enum type name to the 'enums' dictionary, with
    # the value being an ordered list of the enumerant names.
    # Add each enumerant name to the 'consts' dictionary, with
    # the value being the enum type the enumerant is part of.
    def genGroup(self, groupinfo, groupName):
        OutputGenerator.genGroup(self, groupinfo, groupName)
        groupElem = groupinfo.elem
        # @enums   C 'enum' name           Dictionary of enumerant names
        # @consts  C enumerant/const name  Name of corresponding 'enums' key
        # Loop over the nested 'enum' tags. Keep track of the minimum and
        # maximum numeric values, if they can be determined.
        enumerants = [elem.get('name') for elem in groupElem.findall('enum')]
        for name in enumerants:
            self.addName('consts', name, enquote(groupName))
        self.addName('enums', groupName, enumerants)
    # Enumerant generation (compile-time constants)
    #
    # Add the constant name to the 'consts' dictionary, with the
    # value being None to indicate that the constant isn't
    # an enumeration value.
    def genEnum(self, enuminfo, name):
        OutputGenerator.genEnum(self, enuminfo, name)
        # @consts  C enumerant/const name  Name of corresponding 'enums' key
        self.addName('consts', name, None)
    #
    # Command generation
    #
    # Add the command name to the 'protos' dictionary, with the
    # value being an ordered list of the parameter names.
    def genCmd(self, cmdinfo, name):
        OutputGenerator.genCmd(self, cmdinfo, name)
        params = [param.text for param in cmdinfo.elem.findall('param/name')]
        self.addName('protos', name, params)

# ValidityOutputGenerator - subclass of OutputGenerator.
# Generates AsciiDoc includes of valid usage information, for reference
# pages and the Vulkan specification. Similar to DocOutputGenerator.
#
# ---- methods ----
# ValidityOutputGenerator(errFile, warnFile, diagFile) - args as for
#   OutputGenerator. Defines additional internal state.
# ---- methods overriding base class ----
# beginFile(genOpts)
# endFile()
# beginFeature(interface, emit)
# endFeature()
# genCmd(cmdinfo)
class ValidityOutputGenerator(OutputGenerator):
    """Generate specified API interfaces in a specific style, such as a C header"""
    def __init__(self,
                 errFile = sys.stderr,
                 warnFile = sys.stderr,
                 diagFile = sys.stdout):
        OutputGenerator.__init__(self, errFile, warnFile, diagFile)

    def beginFile(self, genOpts):
        OutputGenerator.beginFile(self, genOpts)
    def endFile(self):
        OutputGenerator.endFile(self)
    def beginFeature(self, interface, emit):
        # Start processing in superclass
        OutputGenerator.beginFeature(self, interface, emit)
    def endFeature(self):
        # Finish processing in superclass
        OutputGenerator.endFeature(self)

    def makeParameterName(self, name):
        return 'pname:' + name
    def makeStructName(self, name):
        return 'sname:' + name
    def makeBaseTypeName(self, name):
        return 'basetype:' + name
    def makeEnumerationName(self, name):
        return 'elink:' + name
    def makeEnumerantName(self, name):
        return 'ename:' + name
    def makeFLink(self, name):
        return 'flink:' + name

    #
    # Generate an include file
    #
    # directory - subdirectory to put file in
    # basename - base name of the file
    # contents - contents of the file (Asciidoc boilerplate aside)
    def writeInclude(self, directory, basename, validity, threadsafety,
                     commandpropertiesentry, successcodes, errorcodes):
        # Create file
        filename = self.genOpts.genDirectory + '/' + directory + '/' + basename + '.txt'
        self.logMsg('diag', '# Generating include file:', filename)
        fp = open(filename, 'w')
        # Asciidoc anchor
        write('// WARNING: DO NOT MODIFY! This file is automatically generated from the vk.xml registry', file=fp)

        # Valid Usage
        if validity is not None:
            write('ifndef::doctype-manpage[]', file=fp)
            write('.Valid Usage', file=fp)
            write('*' * 80, file=fp)
            write('endif::doctype-manpage[]', file=fp)
            write('ifdef::doctype-manpage[]', file=fp)
            write('Valid Usage', file=fp)
            write('-----------', file=fp)
            write('endif::doctype-manpage[]', file=fp)
            write(validity, file=fp, end='')
            write('ifndef::doctype-manpage[]', file=fp)
            write('*' * 80, file=fp)
            write('endif::doctype-manpage[]', file=fp)
            write('', file=fp)

        # Host Synchronization
        if threadsafety is not None:
            write('ifndef::doctype-manpage[]', file=fp)
            write('.Host Synchronization', file=fp)
            write('*' * 80, file=fp)
            write('endif::doctype-manpage[]', file=fp)
            write('ifdef::doctype-manpage[]', file=fp)
            write('Host Synchronization', file=fp)
            write('--------------------', file=fp)
            write('endif::doctype-manpage[]', file=fp)
            write(threadsafety, file=fp, end='')
            write('ifndef::doctype-manpage[]', file=fp)
            write('*' * 80, file=fp)
            write('endif::doctype-manpage[]', file=fp)
            write('', file=fp)

        # Command Properties - contained within a block, to avoid table numbering
        if commandpropertiesentry is not None:
            write('ifndef::doctype-manpage[]', file=fp)
            write('.Command Properties', file=fp)
            write('*' * 80, file=fp)
            write('endif::doctype-manpage[]', file=fp)
            write('ifdef::doctype-manpage[]', file=fp)
            write('Command Properties', file=fp)
            write('------------------', file=fp)
            write('endif::doctype-manpage[]', file=fp)
            write('[options="header", width="100%"]', file=fp)
            write('|=====================', file=fp)
            write('|Command Buffer Levels|Render Pass Scope|Supported Queue Types', file=fp)
            write(commandpropertiesentry, file=fp)
            write('|=====================', file=fp)
            write('ifndef::doctype-manpage[]', file=fp)
            write('*' * 80, file=fp)
            write('endif::doctype-manpage[]', file=fp)
            write('', file=fp)

        # Success Codes - contained within a block, to avoid table numbering
        if successcodes is not None or errorcodes is not None:
            write('ifndef::doctype-manpage[]', file=fp)
            write('.Return Codes', file=fp)
            write('*' * 80, file=fp)
            write('endif::doctype-manpage[]', file=fp)
            write('ifdef::doctype-manpage[]', file=fp)
            write('Return Codes', file=fp)
            write('------------', file=fp)
            write('endif::doctype-manpage[]', file=fp)
            if successcodes is not None:
                write('ifndef::doctype-manpage[]', file=fp)
                write('<<fundamentals-successcodes,Success>>::', file=fp)
                write('endif::doctype-manpage[]', file=fp)
                write('ifdef::doctype-manpage[]', file=fp)
                write('On success, this command returns::', file=fp)
                write('endif::doctype-manpage[]', file=fp)
                write(successcodes, file=fp)
            if errorcodes is not None:
                write('ifndef::doctype-manpage[]', file=fp)
                write('<<fundamentals-errorcodes,Failure>>::', file=fp)
                write('endif::doctype-manpage[]', file=fp)
                write('ifdef::doctype-manpage[]', file=fp)
                write('On failure, this command returns::', file=fp)
                write('endif::doctype-manpage[]', file=fp)
                write(errorcodes, file=fp)
            write('ifndef::doctype-manpage[]', file=fp)
            write('*' * 80, file=fp)
            write('endif::doctype-manpage[]', file=fp)
            write('', file=fp)
        fp.close()

    #
    # Check if the parameter passed in is a pointer
    def paramIsPointer(self, param):
        ispointer = False
        paramtype = param.find('type')
        if paramtype.tail is not None and '*' in paramtype.tail:
            ispointer = True
        return ispointer

    #
    # Check if the parameter passed in is a static array
    def paramIsStaticArray(self, param):
        if param.find('name').tail is not None:
            if param.find('name').tail[0] == '[':
                return True

    #
    # Get the length of a parameter that's been identified as a static array
    def staticArrayLength(self, param):
        paramname = param.find('name')
        paramenumsize = param.find('enum')
        if paramenumsize is not None:
            return paramenumsize.text
        else:
            return paramname.tail[1:-1]

    #
    # Check if the parameter passed in is a pointer to an array
    def paramIsArray(self, param):
        return param.attrib.get('len') is not None

    #
    # Get the parent of a handle object
    def getHandleParent(self, typename):
        types = self.registry.findall("types/type")
        for elem in types:
            if (elem.find("name") is not None and elem.find('name').text == typename) or elem.attrib.get('name') == typename:
                return elem.attrib.get('parent')

    #
    # Check if a parent object is dispatchable or not
    def isHandleTypeDispatchable(self, handlename):
        handle = self.registry.find("types/type/[name='" + handlename + "'][@category='handle']")
        if handle is not None and handle.find('type').text == 'VK_DEFINE_HANDLE':
            return True
        else:
            return False

    def isHandleOptional(self, param, params):
        # See if the handle is optional
        isOptional = False
        # Simple, if it's optional, return true
        if param.attrib.get('optional') is not None:
            return True
        # If no validity is being generated, it usually means that validity is complex and not absolute, so let's say yes.
        if param.attrib.get('noautovalidity') is not None:
            return True
        # If the parameter is an array and we haven't already returned, find out if any of the len parameters are optional
        if self.paramIsArray(param):
            lengths = param.attrib.get('len').split(',')
            for length in lengths:
                if (length) != 'null-terminated' and (length) != '1':
                    for otherparam in params:
                        if otherparam.find('name').text == length:
                            if otherparam.attrib.get('optional') is not None:
                                return True
        return False

    #
    # Get the category of a type
    def getTypeCategory(self, typename):
        types = self.registry.findall("types/type")
        for elem in types:
            if (elem.find("name") is not None and elem.find('name').text == typename) or elem.attrib.get('name') == typename:
                return elem.attrib.get('category')

    #
    # Make a chunk of text for the end of a parameter if it is an array
    def makeAsciiDocPreChunk(self, param, params):
        paramname = param.find('name')
        paramtype = param.find('type')
        # General pre-amble. Check optionality and add stuff.
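        # Editor's note - an illustrative sketch of the text this pre-amble
        # can produce (the parameter names are hypothetical examples, not
        # taken from this code): for an array parameter pname:pCreateInfos
        # whose len attribute names an optional count parameter
        # createInfoCount, and which is itself optional, the generated
        # prefix reads roughly
        #     "* If pname:createInfoCount is not `0`, and
        #      pname:pCreateInfos is not `NULL`, "
        # after which createValidationLineForParameterIntroChunk() appends
        # the "must: be ..." clause for the parameter.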
        asciidoc = '* '
        if self.paramIsStaticArray(param):
            asciidoc += 'Any given element of '
        elif self.paramIsArray(param):
            lengths = param.attrib.get('len').split(',')
            # Find all the parameters that are called out as optional, so we can document that they might be zero, and the array may be ignored
            optionallengths = []
            for length in lengths:
                if (length) != 'null-terminated' and (length) != '1':
                    for otherparam in params:
                        if otherparam.find('name').text == length:
                            if otherparam.attrib.get('optional') is not None:
                                if self.paramIsPointer(otherparam):
                                    optionallengths.append('the value referenced by ' + self.makeParameterName(length))
                                else:
                                    optionallengths.append(self.makeParameterName(length))
            # Document that these arrays may be ignored if any of the length values are 0
            if len(optionallengths) != 0 or param.attrib.get('optional') is not None:
                asciidoc += 'If '
            if len(optionallengths) != 0:
                if len(optionallengths) == 1:
                    asciidoc += optionallengths[0]
                    asciidoc += ' is '
                else:
                    asciidoc += ' or '.join(optionallengths)
                    asciidoc += ' are '
                asciidoc += 'not `0`, '
            if len(optionallengths) != 0 and param.attrib.get('optional') is not None:
                asciidoc += 'and '
            if param.attrib.get('optional') is not None:
                asciidoc += self.makeParameterName(paramname.text)
                asciidoc += ' is not `NULL`, '
        elif param.attrib.get('optional') is not None:
            # Don't generate this stub for bitflags
            if self.getTypeCategory(paramtype.text) != 'bitmask':
                if param.attrib.get('optional').split(',')[0] == 'true':
                    asciidoc += 'If '
                    asciidoc += self.makeParameterName(paramname.text)
                    asciidoc += ' is not '
                    if self.paramIsArray(param) or self.paramIsPointer(param) or self.isHandleTypeDispatchable(paramtype.text):
                        asciidoc += '`NULL`'
                    elif self.getTypeCategory(paramtype.text) == 'handle':
                        asciidoc += 'sname:VK_NULL_HANDLE'
                    else:
                        asciidoc += '`0`'
                    asciidoc += ', '
        return asciidoc
    #
    # Make the generic asciidoc line chunk portion used for all parameters.
    # May return an empty string if nothing to validate.
    def createValidationLineForParameterIntroChunk(self, param, params, typetext):
        asciidoc = ''
        paramname = param.find('name')
        paramtype = param.find('type')
        asciidoc += self.makeAsciiDocPreChunk(param, params)
        asciidoc += self.makeParameterName(paramname.text)
        asciidoc += ' must: be '
        if self.paramIsArray(param):
            # Arrays. These are hard to get right, apparently
            lengths = param.attrib.get('len').split(',')
            if (lengths[0]) == 'null-terminated':
                asciidoc += 'a null-terminated '
            elif (lengths[0]) == '1':
                asciidoc += 'a pointer to '
            else:
                asciidoc += 'a pointer to an array of '
                # Handle equations, which are currently denoted with latex
                if 'latexmath:' in lengths[0]:
                    asciidoc += lengths[0]
                else:
                    asciidoc += self.makeParameterName(lengths[0])
                asciidoc += ' '
            for length in lengths[1:]:
                if (length) == 'null-terminated':
                    # This should always be the last thing. If it ever isn't
                    # for some bizarre reason, then this will need some massaging.
                    asciidoc += 'null-terminated '
                elif (length) == '1':
                    asciidoc += 'pointers to '
                else:
                    asciidoc += 'pointers to arrays of '
                    # Handle equations, which are currently denoted with latex
                    if 'latex:' in length:
                        asciidoc += length
                    else:
                        asciidoc += self.makeParameterName(length)
                    asciidoc += ' '
            # Void pointers don't actually point at anything - remove the word "to"
            if paramtype.text == 'void':
                if lengths[-1] == '1':
                    if len(lengths) > 1:
                        asciidoc = asciidoc[:-5]    # Take care of the extra s added by the post array chunk function. #HACK#
                    else:
                        asciidoc = asciidoc[:-4]
                else:
                    # An array of void values is a byte array.
                    asciidoc += 'byte'
            elif paramtype.text == 'char':
                # A null terminated array of chars is a string
                if lengths[-1] == 'null-terminated':
                    asciidoc += 'string'
                else:
                    # Else it's just a bunch of chars
                    asciidoc += 'char value'
            elif param.text is not None:
                # If a value is "const" that means it won't get modified, so it
                # must be valid going into the function.
                if 'const' in param.text:
                    typecategory = self.getTypeCategory(paramtype.text)
                    if (typecategory != 'struct' and typecategory != 'union' and typecategory != 'basetype' and typecategory is not None) or not self.isStructAlwaysValid(paramtype.text):
                        asciidoc += 'valid '
            asciidoc += typetext
            # pluralize
            if len(lengths) > 1 or (lengths[0] != '1' and lengths[0] != 'null-terminated'):
                asciidoc += 's'
        elif self.paramIsPointer(param):
            # Handle pointers - which are really special case arrays (i.e. they don't have a length)
            pointercount = paramtype.tail.count('*')
            # Could be multi-level pointers (e.g. ppData - pointer to a pointer). Handle that.
            for i in range(0, pointercount):
                asciidoc += 'a pointer to '
            if paramtype.text == 'void':
                # If there's only one pointer, it's optional, and it doesn't point at anything in particular - we don't need any language.
                if pointercount == 1 and param.attrib.get('optional') is not None:
                    return ''   # early return
                else:
                    # Pointer to nothing in particular - delete the " to " portion
                    asciidoc = asciidoc[:-4]
            else:
                # Add an article for English semantic win
                asciidoc += 'a '
            # If a value is "const" that means it won't get modified, so it
            # must be valid going into the function.
            if param.text is not None and paramtype.text != 'void':
                if 'const' in param.text:
                    asciidoc += 'valid '
            asciidoc += typetext
        else:
            # Non-pointer, non-optional things must be valid
            asciidoc += 'a valid '
            asciidoc += typetext
        if asciidoc != '':
            asciidoc += '\n'
            # Add additional line for non-optional bitmasks
            if self.getTypeCategory(paramtype.text) == 'bitmask':
                if param.attrib.get('optional') is None:
                    asciidoc += '* '
                    if self.paramIsArray(param):
                        asciidoc += 'Each element of '
                    asciidoc += 'pname:'
                    asciidoc += paramname.text
                    asciidoc += ' mustnot: be `0`'
                    asciidoc += '\n'
        return asciidoc

    def makeAsciiDocLineForParameter(self, param, params, typetext):
        if param.attrib.get('noautovalidity') is not None:
            return ''
        asciidoc = self.createValidationLineForParameterIntroChunk(param, params, typetext)
        return asciidoc

    # Try to check if a structure is always considered valid (i.e. there are no rules to its acceptance)
    def isStructAlwaysValid(self, structname):
        struct = self.registry.find("types/type[@name='" + structname + "']")
        params = struct.findall('member')
        validity = struct.find('validity')
        if validity is not None:
            return False
        for param in params:
            paramname = param.find('name')
            paramtype = param.find('type')
            typecategory = self.getTypeCategory(paramtype.text)
            if paramname.text == 'pNext':
                return False
            if paramname.text == 'sType':
                return False
            if paramtype.text == 'void' or paramtype.text == 'char' or self.paramIsArray(param) or self.paramIsPointer(param):
                if self.makeAsciiDocLineForParameter(param, params, '') != '':
                    return False
            elif typecategory == 'handle' or typecategory == 'enum' or typecategory == 'bitmask' or param.attrib.get('returnedonly') == 'true':
                return False
            elif typecategory == 'struct' or typecategory == 'union':
                if self.isStructAlwaysValid(paramtype.text) is False:
                    return False
        return True

    #
    # Make an entire asciidoc line for a given parameter
    def createValidationLineForParameter(self, param, params, typecategory):
        asciidoc = ''
        paramname = param.find('name')
        paramtype = param.find('type')
        if paramtype.text == 'void' or paramtype.text == 'char':
            # Chars and void are special cases - needs care inside the generator functions
            # A null-terminated char array is a string, else it's chars.
            # An array of void values is a byte array, a void pointer is just a
            # pointer to nothing in particular
            asciidoc += self.makeAsciiDocLineForParameter(param, params, '')
        elif typecategory == 'bitmask':
            bitsname = paramtype.text.replace('Flags', 'FlagBits')
            if self.registry.find("enums[@name='" + bitsname + "']") is None:
                asciidoc += '* '
                asciidoc += self.makeParameterName(paramname.text)
                asciidoc += ' must: be `0`'
                asciidoc += '\n'
            else:
                if self.paramIsArray(param):
                    asciidoc += self.makeAsciiDocLineForParameter(param, params, 'combinations of ' + self.makeEnumerationName(bitsname) + ' value')
                else:
                    asciidoc += self.makeAsciiDocLineForParameter(param, params, 'combination of ' + self.makeEnumerationName(bitsname) + ' values')
        elif typecategory == 'handle':
            asciidoc += self.makeAsciiDocLineForParameter(param, params, self.makeStructName(paramtype.text) + ' handle')
        elif typecategory == 'enum':
            asciidoc += self.makeAsciiDocLineForParameter(param, params, self.makeEnumerationName(paramtype.text) + ' value')
        elif typecategory == 'struct':
            if (self.paramIsArray(param) or self.paramIsPointer(param)) or not self.isStructAlwaysValid(paramtype.text):
                asciidoc += self.makeAsciiDocLineForParameter(param, params, self.makeStructName(paramtype.text) + ' structure')
        elif typecategory == 'union':
            if (self.paramIsArray(param) or self.paramIsPointer(param)) or not self.isStructAlwaysValid(paramtype.text):
                asciidoc += self.makeAsciiDocLineForParameter(param, params, self.makeStructName(paramtype.text) + ' union')
        elif self.paramIsArray(param) or self.paramIsPointer(param):
            asciidoc += self.makeAsciiDocLineForParameter(param, params, self.makeBaseTypeName(paramtype.text) + ' value')
        return asciidoc

    #
    # Make an asciidoc validity entry for a handle's parent object
    def makeAsciiDocHandleParent(self, param, params):
        asciidoc = ''
        paramname = param.find('name')
        paramtype = param.find('type')
        # Deal with handle parents
        handleparent = self.getHandleParent(paramtype.text)
        if handleparent is not None:
            parentreference = None
            for otherparam in params:
                if otherparam.find('type').text == handleparent:
                    parentreference = otherparam.find('name').text
            if parentreference is not None:
                asciidoc += '* '
                if self.isHandleOptional(param, params):
                    if self.paramIsArray(param):
                        asciidoc += 'Each element of '
                        asciidoc += self.makeParameterName(paramname.text)
                        asciidoc += ' that is a valid handle'
                    else:
                        asciidoc += 'If '
                        asciidoc += self.makeParameterName(paramname.text)
                        asciidoc += ' is a valid handle, it'
                else:
                    if self.paramIsArray(param):
                        asciidoc += 'Each element of '
                    asciidoc += self.makeParameterName(paramname.text)
                asciidoc += ' must: have been created, allocated or retrieved from '
                asciidoc += self.makeParameterName(parentreference)
                asciidoc += '\n'
        return asciidoc

    #
    # Generate an asciidoc validity line for the sType value of a struct
    def makeStructureType(self, blockname, param):
        asciidoc = '* '
        paramname = param.find('name')
        paramtype = param.find('type')
        asciidoc += self.makeParameterName(paramname.text)
        asciidoc += ' must: be '
        structuretype = ''
        for elem in re.findall(r'(([A-Z][a-z]+)|([A-Z][A-Z]+))', blockname):
            if elem[0] == 'Vk':
                structuretype += 'VK_STRUCTURE_TYPE_'
            else:
                structuretype += elem[0].upper()
                structuretype += '_'
        asciidoc += self.makeEnumerantName(structuretype[:-1])
        asciidoc += '\n'
        return asciidoc

    #
    # Generate an asciidoc validity line for the pNext value of a struct
    def makeStructureExtensionPointer(self, param):
        asciidoc = '* '
        paramname = param.find('name')
        paramtype = param.find('type')
        asciidoc += self.makeParameterName(paramname.text)
        validextensionstructs = param.attrib.get('validextensionstructs')
        if validextensionstructs is None:
            asciidoc += ' must: be `NULL`'
        else:
            extensionstructs = validextensionstructs.split(',')
            # Join the list of struct names (the original called join() on a
            # list slice, which raises AttributeError) and keep the space
            # before "if".
            asciidoc += ' must: point to one of ' + ', '.join(extensionstructs[:-1]) + ' or ' + extensionstructs[-1] + ' if the extension that introduced them is enabled '
        asciidoc += '\n'
        return asciidoc

    #
    # Generate all the valid usage information for a given struct or command
    def makeValidUsageStatements(self, cmd, blockname, params, usages):
        # Start the asciidoc block for this
        asciidoc = ''
        handles = []
        anyparentedhandlesoptional = False
        parentdictionary = {}
        arraylengths = set()
        for param in params:
            paramname = param.find('name')
            paramtype = param.find('type')
            # Get the type's category
            typecategory = self.getTypeCategory(paramtype.text)
            # Generate language to independently validate a parameter
            if paramtype.text == 'VkStructureType' and paramname.text == 'sType':
                asciidoc += self.makeStructureType(blockname, param)
            elif paramtype.text == 'void' and paramname.text == 'pNext':
                asciidoc += self.makeStructureExtensionPointer(param)
            else:
                asciidoc += self.createValidationLineForParameter(param, params, typecategory)
            # Ensure that any parenting is properly validated, and list that a handle was found
            if typecategory == 'handle':
                # Don't detect a parent for return values!
                if not self.paramIsPointer(param) or (param.text is not None and 'const' in param.text):
                    parent = self.getHandleParent(paramtype.text)
                    if parent is not None:
                        handles.append(param)
                        # If any param is optional, it affects the output
                        if self.isHandleOptional(param, params):
                            anyparentedhandlesoptional = True
                        # Find the first dispatchable parent
                        ancestor = parent
                        while ancestor is not None and not self.isHandleTypeDispatchable(ancestor):
                            ancestor = self.getHandleParent(ancestor)
                        # If one was found, add this parameter to the parent dictionary
                        if ancestor is not None:
                            if ancestor not in parentdictionary:
                                parentdictionary[ancestor] = []
                            if self.paramIsArray(param):
                                parentdictionary[ancestor].append('the elements of ' + self.makeParameterName(paramname.text))
                            else:
                                parentdictionary[ancestor].append(self.makeParameterName(paramname.text))
            # Get the array length for this parameter
            arraylength = param.attrib.get('len')
            if arraylength is not None:
                for onelength in arraylength.split(','):
                    arraylengths.add(onelength)
        # For any vkQueue* functions, there might be queue type data
        if 'vkQueue' in blockname:
            # The queue type must be valid
            queuetypes = cmd.attrib.get('queues')
            if queuetypes is not None:
                queuebits = []
                for queuetype in re.findall(r'([^,]+)', queuetypes):
                    queuebits.append(queuetype.replace('_', ' '))
                asciidoc += '* '
                asciidoc += 'The pname:queue must: support '
                if len(queuebits) == 1:
                    asciidoc += queuebits[0]
                else:
                    asciidoc += (', ').join(queuebits[:-1])
                    asciidoc += ' or '
                    asciidoc += queuebits[-1]
                asciidoc += ' operations'
                asciidoc += '\n'
        if 'vkCmd' in blockname:
            # The commandBuffer parameter must be being recorded
            asciidoc += '* '
            asciidoc += 'pname:commandBuffer must: be in the recording state'
            asciidoc += '\n'
            # The queue type must be valid
            queuetypes = cmd.attrib.get('queues')
            queuebits = []
            for queuetype in re.findall(r'([^,]+)', queuetypes):
                queuebits.append(queuetype.replace('_', ' '))
            asciidoc += '* '
            asciidoc += 'The sname:VkCommandPool that pname:commandBuffer was allocated from must: support '
            if len(queuebits) == 1:
                asciidoc += queuebits[0]
            else:
                asciidoc += (', ').join(queuebits[:-1])
                asciidoc += ' or '
                asciidoc += queuebits[-1]
            asciidoc += ' operations'
            asciidoc += '\n'
            # Must be called inside/outside a renderpass appropriately
            renderpass = cmd.attrib.get('renderpass')
            if renderpass != 'both':
                asciidoc += '* This command must: only be called '
                asciidoc += renderpass
                asciidoc += ' of a render pass instance'
                asciidoc += '\n'
            # Must be in the right level command buffer
            cmdbufferlevel = cmd.attrib.get('cmdbufferlevel')
            if cmdbufferlevel != 'primary,secondary':
                asciidoc += '* pname:commandBuffer must: be a '
                asciidoc += cmdbufferlevel
                asciidoc += ' sname:VkCommandBuffer'
                asciidoc += '\n'
        # Any non-optional arraylengths should specify they must be greater than 0
        for param in params:
            paramname = param.find('name')
            for arraylength in arraylengths:
                if paramname.text == arraylength and param.attrib.get('optional') is None:
                    # Get all the array dependencies
                    arrays = cmd.findall("param/[@len='" + arraylength + "']")
                    # Get all the optional array dependencies, including those not generating validity for some reason
                    optionalarrays = cmd.findall("param/[@len='" + arraylength + "'][@optional='true']")
                    optionalarrays.extend(cmd.findall("param/[@len='" + arraylength + "'][@noautovalidity='true']"))
                    asciidoc += '* '
                    # Allow lengths to be arbitrary if all their dependents are optional
                    if len(optionalarrays) == len(arrays) and len(optionalarrays) != 0:
                        asciidoc += 'If '
                        if len(optionalarrays) > 1:
                            asciidoc += 'any of '
                        for array in optionalarrays[:-1]:
                            asciidoc += self.makeParameterName(array.find('name').text)
                            asciidoc += ', '
                        if len(optionalarrays) > 1:
                            asciidoc += 'and '
                            asciidoc += self.makeParameterName(optionalarrays[-1].find('name').text)
                            asciidoc += ' are '
                        else:
                            asciidoc += self.makeParameterName(optionalarrays[-1].find('name').text)
                            asciidoc += ' is '
                        asciidoc += 'not `NULL`, '
                        if self.paramIsPointer(param):
                            asciidoc += 'the value referenced by '
                    elif self.paramIsPointer(param):
                        asciidoc += 'The value referenced by '
                    asciidoc += self.makeParameterName(arraylength)
                    asciidoc += ' must: be greater than `0`'
                    asciidoc += '\n'
        # Find the parents of all objects referenced in this command
        for param in handles:
            asciidoc += self.makeAsciiDocHandleParent(param, params)
        # Find the common ancestors of objects
        noancestorscount = 0
        while noancestorscount < len(parentdictionary):
            noancestorscount = 0
            oldparentdictionary = parentdictionary.copy()
            for parent in oldparentdictionary.items():
                ancestor = self.getHandleParent(parent[0])
                while ancestor is not None and ancestor not in parentdictionary:
                    ancestor = self.getHandleParent(ancestor)
                if ancestor is not None:
                    parentdictionary[ancestor] += parentdictionary.pop(parent[0])
                else:
                    # No ancestors possible - so count it up
                    noancestorscount += 1
        # Add validation language about common ancestors
        for parent in parentdictionary.items():
            if len(parent[1]) > 1:
                parentlanguage = '* '
                parentlanguage += 'Each of '
                parentlanguage += ", ".join(parent[1][:-1])
                parentlanguage += ' and '
                parentlanguage += parent[1][-1]
                if anyparentedhandlesoptional is True:
                    parentlanguage += ' that are valid handles'
                parentlanguage += ' must: have been created, allocated or retrieved from the same '
                parentlanguage += self.makeStructName(parent[0])
                parentlanguage += '\n'
                # Capitalize and add to the main language
                asciidoc += parentlanguage
        # Add in any plain-text validation language that should be added
        for usage in usages:
            asciidoc += '* '
            asciidoc += usage
            asciidoc += '\n'
        # In case there's nothing to report, return None
        if asciidoc == '':
            return None
        # Delimit the asciidoc block
        return asciidoc

    def makeThreadSafetyBlock(self, cmd, paramtext):
        """Generate the AsciiDoc host-synchronization block for a <command> Element"""
        paramdecl = ''
        # For any vkCmd* functions, the commandBuffer parameter must be being recorded
        if cmd.find('proto/name') is not None and 'vkCmd' in cmd.find('proto/name').text:
            paramdecl += '* '
            paramdecl += 'The sname:VkCommandPool that pname:commandBuffer was created from'
            paramdecl += '\n'
        # Find and add any parameters that are thread unsafe
        explicitexternsyncparams = cmd.findall(paramtext + "[@externsync]")
        if (explicitexternsyncparams is not None):
            for param in explicitexternsyncparams:
                externsyncattribs = param.attrib.get('externsync')
                paramname = param.find('name')
                for externsyncattrib in externsyncattribs.split(','):
                    paramdecl += '* '
                    paramdecl += 'Host access to '
                    if externsyncattrib == 'true':
                        if self.paramIsArray(param):
                            paramdecl += 'each member of ' + self.makeParameterName(paramname.text)
                        elif self.paramIsPointer(param):
                            paramdecl += 'the object referenced by ' + self.makeParameterName(paramname.text)
                        else:
                            paramdecl += self.makeParameterName(paramname.text)
                    else:
                        paramdecl += 'pname:'
                        paramdecl += externsyncattrib
                    paramdecl += ' must: be externally synchronized\n'
        # Find and add any "implicit" parameters that are thread unsafe
        implicitexternsyncparams = cmd.find('implicitexternsyncparams')
        if (implicitexternsyncparams is not None):
            for elem in implicitexternsyncparams:
                paramdecl += '* '
                paramdecl += 'Host access to '
                paramdecl += elem.text
                paramdecl += ' must: be externally synchronized\n'
        if (paramdecl == ''):
            return None
        else:
            return paramdecl

    def makeCommandPropertiesTableEntry(self, cmd, name):
        if 'vkCmd' in name:
            # Must be called inside/outside a renderpass appropriately
            cmdbufferlevel = cmd.attrib.get('cmdbufferlevel')
            cmdbufferlevel = (' + \n').join(cmdbufferlevel.title().split(','))
            renderpass = cmd.attrib.get('renderpass')
            renderpass = renderpass.capitalize()
            queues = cmd.attrib.get('queues')
            queues = (' + \n').join(queues.upper().split(','))
            return '|' + cmdbufferlevel + '|' + renderpass + '|' + queues
        elif 'vkQueue' in name:
            # vkQueue* commands have no command buffer level or render pass scope
            queues = cmd.attrib.get('queues')
            if queues is None:
                queues = 'Any'
            else:
                queues = (' + \n').join(queues.upper().split(','))
            return '|-|-|' + queues
        return None

    def makeSuccessCodes(self, cmd, name):
        successcodes = cmd.attrib.get('successcodes')
        if successcodes is not None:
            successcodes = successcodes.split(',')
            return '* ename:' + '\n* ename:'.join(successcodes)
        return None

    def makeErrorCodes(self, cmd, name):
        errorcodes = cmd.attrib.get('errorcodes')
        if errorcodes is not None:
            errorcodes = errorcodes.split(',')
            return '* ename:' + '\n* ename:'.join(errorcodes)
        return None

    #
    # Command generation
    def genCmd(self, cmdinfo, name):
        OutputGenerator.genCmd(self, cmdinfo, name)
        #
        # Get all the parameters
        params = cmdinfo.elem.findall('param')
        usageelements = cmdinfo.elem.findall('validity/usage')
        usages = []
        for usage in usageelements:
            usages.append(usage.text)
        for usage in cmdinfo.additionalValidity:
            usages.append(usage.text)
        for usage in cmdinfo.removedValidity:
            usages.remove(usage.text)
        validity = self.makeValidUsageStatements(cmdinfo.elem, name, params, usages)
        threadsafety = self.makeThreadSafetyBlock(cmdinfo.elem, 'param')
        commandpropertiesentry = self.makeCommandPropertiesTableEntry(cmdinfo.elem, name)
        successcodes = self.makeSuccessCodes(cmdinfo.elem, name)
        errorcodes = self.makeErrorCodes(cmdinfo.elem, name)
        self.writeInclude('validity/protos', name, validity, threadsafety, commandpropertiesentry, successcodes, errorcodes)

    #
    # Struct Generation
    def genStruct(self, typeinfo, typename):
        OutputGenerator.genStruct(self, typeinfo, typename)
        # Anything that's only ever returned can't be set by the user, so
        # shouldn't have any validity information.
        if typeinfo.elem.attrib.get('returnedonly') is None:
            params = typeinfo.elem.findall('member')

            usageelements = typeinfo.elem.findall('validity/usage')
            usages = []

            for usage in usageelements:
                usages.append(usage.text)
            for usage in typeinfo.additionalValidity:
                usages.append(usage.text)
            for usage in typeinfo.removedValidity:
                usages.remove(usage.text)

            validity = self.makeValidUsageStatements(typeinfo.elem, typename, params, usages)
            threadsafety = self.makeThreadSafetyBlock(typeinfo.elem, 'member')

            self.writeInclude('validity/structs', typename, validity, threadsafety, None, None, None)
        else:
            # Still generate files for return only structs, in case this state changes later
            self.writeInclude('validity/structs', typename, None, None, None, None, None)

    #
    # Type Generation
    def genType(self, typeinfo, typename):
        OutputGenerator.genType(self, typeinfo, typename)

        category = typeinfo.elem.get('category')
        if (category == 'struct' or category == 'union'):
            self.genStruct(typeinfo, typename)

# HostSynchronizationOutputGenerator - subclass of OutputGenerator.
# Generates AsciiDoc includes of the externsync parameter table for the
# fundamentals chapter of the Vulkan specification. Similar to
# DocOutputGenerator.
#
# ---- methods ----
# HostSynchronizationOutputGenerator(errFile, warnFile, diagFile) - args as for
#   OutputGenerator. Defines additional internal state.
# ---- methods overriding base class ----
# genCmd(cmdinfo)
class HostSynchronizationOutputGenerator(OutputGenerator):
    # Generate Host Synchronized Parameters in a table at the top of the spec
    def __init__(self,
                 errFile = sys.stderr,
                 warnFile = sys.stderr,
                 diagFile = sys.stdout):
        OutputGenerator.__init__(self, errFile, warnFile, diagFile)

    threadsafety = {'parameters': '', 'parameterlists': '', 'implicit': ''}

    def makeParameterName(self, name):
        return 'pname:' + name

    def makeFLink(self, name):
        return 'flink:' + name

    #
    # Generate an include file
    #
    # directory - subdirectory to put file in
    # basename - base name of the file
    # contents - contents of the file (Asciidoc boilerplate aside)
    def writeInclude(self):
        if self.threadsafety['parameters'] is not None:
            # Create file
            filename = self.genOpts.genDirectory + '/' + self.genOpts.filename + '/parameters.txt'
            self.logMsg('diag', '# Generating include file:', filename)
            fp = open(filename, 'w')

            # Host Synchronization
            write('// WARNING: DO NOT MODIFY! This file is automatically generated from the vk.xml registry', file=fp)
            write('.Externally Synchronized Parameters', file=fp)
            write('*' * 80, file=fp)
            write(self.threadsafety['parameters'], file=fp, end='')
            write('*' * 80, file=fp)
            write('', file=fp)

        if self.threadsafety['parameterlists'] is not None:
            # Create file
            filename = self.genOpts.genDirectory + '/' + self.genOpts.filename + '/parameterlists.txt'
            self.logMsg('diag', '# Generating include file:', filename)
            fp = open(filename, 'w')

            # Host Synchronization
            write('// WARNING: DO NOT MODIFY! This file is automatically generated from the vk.xml registry', file=fp)
            write('.Externally Synchronized Parameter Lists', file=fp)
            write('*' * 80, file=fp)
            write(self.threadsafety['parameterlists'], file=fp, end='')
            write('*' * 80, file=fp)
            write('', file=fp)

        if self.threadsafety['implicit'] is not None:
            # Create file
            filename = self.genOpts.genDirectory + '/' + self.genOpts.filename + '/implicit.txt'
            self.logMsg('diag', '# Generating include file:', filename)
            fp = open(filename, 'w')

            # Host Synchronization
            write('// WARNING: DO NOT MODIFY! This file is automatically generated from the vk.xml registry', file=fp)
            write('.Implicit Externally Synchronized Parameters', file=fp)
            write('*' * 80, file=fp)
            write(self.threadsafety['implicit'], file=fp, end='')
            write('*' * 80, file=fp)
            write('', file=fp)

        fp.close()

    #
    # Check if the parameter passed in is a pointer to an array
    def paramIsArray(self, param):
        return param.attrib.get('len') is not None

    # Check if the parameter passed in is a pointer
    def paramIsPointer(self, param):
        ispointer = False
        paramtype = param.find('type')
        if paramtype.tail is not None and '*' in paramtype.tail:
            ispointer = True
        return ispointer

    # Turn the "name[].member[]" notation into plain English.
    def makeThreadDereferenceHumanReadable(self, dereference):
        matches = re.findall(r"[\w]+[^\w]*", dereference)
        stringval = ''
        for match in reversed(matches):
            if '->' in match or '.' in match:
                stringval += 'member of '
            if '[]' in match:
                stringval += 'each element of '

            stringval += 'the '
            stringval += self.makeParameterName(re.findall(r"[\w]+", match)[0])
            stringval += ' '

        stringval += 'parameter'

        return stringval[0].upper() + stringval[1:]

    def makeThreadSafetyBlocks(self, cmd, paramtext):
        protoname = cmd.find('proto/name').text

        # Find and add any parameters that are thread unsafe
        explicitexternsyncparams = cmd.findall(paramtext + "[@externsync]")
        if (explicitexternsyncparams is not None):
            for param in explicitexternsyncparams:
                externsyncattribs = param.attrib.get('externsync')
                paramname = param.find('name')
                for externsyncattrib in externsyncattribs.split(','):
                    tempstring = '* '
                    if externsyncattrib == 'true':
                        if self.paramIsArray(param):
                            tempstring += 'Each element of the '
                        elif self.paramIsPointer(param):
                            tempstring += 'The object referenced by the '
                        else:
                            tempstring += 'The '

                        tempstring += self.makeParameterName(paramname.text)
                        tempstring += ' parameter'
                    else:
                        tempstring += self.makeThreadDereferenceHumanReadable(externsyncattrib)

                    tempstring += ' in '
                    tempstring += self.makeFLink(protoname)
                    tempstring += '\n'

                    if ' element of ' in tempstring:
                        self.threadsafety['parameterlists'] += tempstring
                    else:
                        self.threadsafety['parameters'] += tempstring

        # Find and add any "implicit" parameters that are thread unsafe
        implicitexternsyncparams = cmd.find('implicitexternsyncparams')
        if (implicitexternsyncparams is not None):
            for elem in implicitexternsyncparams:
                self.threadsafety['implicit'] += '* '
                self.threadsafety['implicit'] += elem.text[0].upper()
                self.threadsafety['implicit'] += elem.text[1:]
                self.threadsafety['implicit'] += ' in '
                self.threadsafety['implicit'] += self.makeFLink(protoname)
                self.threadsafety['implicit'] += '\n'

        # For any vkCmd* functions, the commandBuffer parameter must be being recorded
        if protoname is not None and 'vkCmd' in protoname:
            self.threadsafety['implicit'] += '* '
            self.threadsafety['implicit'] += 'The sname:VkCommandPool that pname:commandBuffer was allocated from, in '
            self.threadsafety['implicit'] += self.makeFLink(protoname)
            self.threadsafety['implicit'] += '\n'

    #
    # Command generation
    def genCmd(self, cmdinfo, name):
        OutputGenerator.genCmd(self, cmdinfo, name)
        #
        # Get all the parameters
        params = cmdinfo.elem.findall('param')
        usages = cmdinfo.elem.findall('validity/usage')

        self.makeThreadSafetyBlocks(cmdinfo.elem, 'param')

        self.writeInclude()

# ThreadOutputGenerator - subclass of OutputGenerator.
# Generates Thread checking framework
#
# ---- methods ----
# ThreadOutputGenerator(errFile, warnFile, diagFile) - args as for
#   OutputGenerator. Defines additional internal state.
# ---- methods overriding base class ----
# beginFile(genOpts)
# endFile()
# beginFeature(interface, emit)
# endFeature()
# genType(typeinfo,name)
# genStruct(typeinfo,name)
# genGroup(groupinfo,name)
# genEnum(enuminfo, name)
# genCmd(cmdinfo)
class ThreadOutputGenerator(OutputGenerator):
    """Generate specified API interfaces in a specific style, such as a C header"""
    # This is an ordered list of sections in the header file.
    TYPE_SECTIONS = ['include', 'define', 'basetype', 'handle', 'enum',
                     'group', 'bitmask', 'funcpointer', 'struct']
    ALL_SECTIONS = TYPE_SECTIONS + ['command']

    def __init__(self,
                 errFile = sys.stderr,
                 warnFile = sys.stderr,
                 diagFile = sys.stdout):
        OutputGenerator.__init__(self, errFile, warnFile, diagFile)
        # Internal state - accumulators for different inner block text
        self.sections = dict([(section, []) for section in self.ALL_SECTIONS])
        self.intercepts = []

    # Check if the parameter passed in is a pointer to an array
    def paramIsArray(self, param):
        return param.attrib.get('len') is not None

    # Check if the parameter passed in is a pointer
    def paramIsPointer(self, param):
        ispointer = False
        for elem in param:
            #write('paramIsPointer '+elem.text, file=sys.stderr)
            #write('elem.tag '+elem.tag, file=sys.stderr)
            #if (elem.tail is None):
            #    write('elem.tail is None', file=sys.stderr)
            #else:
            #    write('elem.tail '+elem.tail, file=sys.stderr)
            if ((elem.tag != 'type') and (elem.tail is not None)) and '*' in elem.tail:
                ispointer = True
                #write('is pointer', file=sys.stderr)
        return ispointer

    def makeThreadUseBlock(self, cmd, functionprefix):
        """Generate C function pointer typedef for <command> Element"""
        paramdecl = ''
        thread_check_dispatchable_objects = [
            "VkCommandBuffer",
            "VkDevice",
            "VkInstance",
            "VkQueue",
        ]
        thread_check_nondispatchable_objects = [
            "VkBuffer",
            "VkBufferView",
            "VkCommandPool",
            "VkDescriptorPool",
            "VkDescriptorSetLayout",
            "VkDeviceMemory",
            "VkEvent",
            "VkFence",
            "VkFramebuffer",
            "VkImage",
            "VkImageView",
            "VkPipeline",
            "VkPipelineCache",
            "VkPipelineLayout",
            "VkQueryPool",
            "VkRenderPass",
            "VkSampler",
            "VkSemaphore",
            "VkShaderModule",
        ]

        # Find and add any parameters that are thread unsafe
        params = cmd.findall('param')
        for param in params:
            paramname = param.find('name')
            if False: # self.paramIsPointer(param):
                paramdecl += '    // not watching use of pointer ' + paramname.text + '\n'
            else:
                externsync = param.attrib.get('externsync')
                if externsync == 'true':
                    if self.paramIsArray(param):
                        paramdecl += '    for (uint32_t index=0;index<' + param.attrib.get('len') + ';index++) {\n'
                        paramdecl += '        ' + functionprefix + 'WriteObject(my_data, ' + paramname.text + '[index]);\n'
                        paramdecl += '    }\n'
                    else:
                        paramdecl += '    ' + functionprefix + 'WriteObject(my_data, ' + paramname.text + ');\n'
                elif (param.attrib.get('externsync')):
                    if self.paramIsArray(param):
                        # Externsync can list pointers to arrays of members to synchronize
                        paramdecl += '    for (uint32_t index=0;index<' + param.attrib.get('len') + ';index++) {\n'
                        for member in externsync.split(","):
                            # Replace first empty [] in member name with index
                            element = member.replace('[]', '[index]', 1)
                            if '[]' in element:
                                # Replace any second empty [] in element name with
                                # inner array index based on mapping array names like
                                # "pSomeThings[]" to "someThingCount" array size.
                                # This could be more robust by mapping a param member
                                # name to a struct type and "len" attribute.
                                limit = element[0:element.find('s[]')] + 'Count'
                                dotp = limit.rfind('.p')
                                limit = limit[0:dotp+1] + limit[dotp+2:dotp+3].lower() + limit[dotp+3:]
                                paramdecl += '        for(uint32_t index2=0;index2<' + limit + ';index2++)'
                                element = element.replace('[]', '[index2]')
                            paramdecl += '        ' + functionprefix + 'WriteObject(my_data, ' + element + ');\n'
                        paramdecl += '    }\n'
                    else:
                        # externsync can list members to synchronize
                        for member in externsync.split(","):
                            paramdecl += '    ' + functionprefix + 'WriteObject(my_data, ' + member + ');\n'
                else:
                    paramtype = param.find('type')
                    if paramtype is not None:
                        paramtype = paramtype.text
                    else:
                        paramtype = 'None'
                    if paramtype in thread_check_dispatchable_objects or paramtype in thread_check_nondispatchable_objects:
                        if self.paramIsArray(param) and ('pPipelines' != paramname.text):
                            paramdecl += '    for (uint32_t index=0;index<' + param.attrib.get('len') + ';index++) {\n'
                            paramdecl += '        ' + functionprefix + 'ReadObject(my_data, ' + paramname.text + '[index]);\n'
                            paramdecl += '    }\n'
                        elif not self.paramIsPointer(param):
                            # Pointer params are often being created.
                            # They are not being read from.
                            paramdecl += '    ' + functionprefix + 'ReadObject(my_data, ' + paramname.text + ');\n'

        explicitexternsyncparams = cmd.findall("param[@externsync]")
        if (explicitexternsyncparams is not None):
            for param in explicitexternsyncparams:
                externsyncattrib = param.attrib.get('externsync')
                paramname = param.find('name')
                paramdecl += '// Host access to '
                if externsyncattrib == 'true':
                    if self.paramIsArray(param):
                        paramdecl += 'each member of ' + paramname.text
                    elif self.paramIsPointer(param):
                        paramdecl += 'the object referenced by ' + paramname.text
                    else:
                        paramdecl += paramname.text
                else:
                    paramdecl += externsyncattrib
                paramdecl += ' must be externally synchronized\n'

        # Find and add any "implicit" parameters that are thread unsafe
        implicitexternsyncparams = cmd.find('implicitexternsyncparams')
        if (implicitexternsyncparams is not None):
            for elem in implicitexternsyncparams:
                paramdecl += '    // '
                paramdecl += elem.text
                paramdecl += ' must be externally synchronized between host accesses\n'

        if (paramdecl == ''):
            return None
        else:
            return paramdecl

    def beginFile(self, genOpts):
        OutputGenerator.beginFile(self, genOpts)
        # C-specific
        #
        # Multiple inclusion protection & C++ wrappers.
        if (genOpts.protectFile and self.genOpts.filename):
            headerSym = '__' + re.sub(r'\.h', '_h_', os.path.basename(self.genOpts.filename))
            write('#ifndef', headerSym, file=self.outFile)
            write('#define', headerSym, '1', file=self.outFile)
            self.newline()
        write('#ifdef __cplusplus', file=self.outFile)
        write('extern "C" {', file=self.outFile)
        write('#endif', file=self.outFile)
        self.newline()
        #
        # User-supplied prefix text, if any (list of strings)
        if (genOpts.prefixText):
            for s in genOpts.prefixText:
                write(s, file=self.outFile)

    def endFile(self):
        # C-specific
        # Finish C++ wrapper and multiple inclusion protection
        self.newline()
        # record intercepted procedures
        write('// intercepts', file=self.outFile)
        write('struct { const char* name; PFN_vkVoidFunction pFunc;} procmap[] = {', file=self.outFile)
        write('\n'.join(self.intercepts), file=self.outFile)
        write('};\n', file=self.outFile)
        self.newline()
        write('#ifdef __cplusplus', file=self.outFile)
        write('}', file=self.outFile)
        write('#endif', file=self.outFile)
        if (self.genOpts.protectFile and self.genOpts.filename):
            self.newline()
            write('#endif', file=self.outFile)
        # Finish processing in superclass
        OutputGenerator.endFile(self)

    def beginFeature(self, interface, emit):
        #write('// starting beginFeature', file=self.outFile)
        # Start processing in superclass
        OutputGenerator.beginFeature(self, interface, emit)
        # C-specific
        # Accumulate includes, defines, types, enums, function pointer typedefs,
        # end function prototypes separately for this feature. They're only
        # printed in endFeature().
        self.sections = dict([(section, []) for section in self.ALL_SECTIONS])
        #write('// ending beginFeature', file=self.outFile)

    def endFeature(self):
        # C-specific
        # Actually write the interface to the output file.
        #write('// starting endFeature', file=self.outFile)
        if (self.emit):
            self.newline()
            if (self.genOpts.protectFeature):
                write('#ifndef', self.featureName, file=self.outFile)
            # If type declarations are needed by other features based on
            # this one, it may be necessary to suppress the ExtraProtect,
            # or move it below the 'for section...' loop.
            #write('// endFeature looking at self.featureExtraProtect', file=self.outFile)
            if (self.featureExtraProtect != None):
                write('#ifdef', self.featureExtraProtect, file=self.outFile)
            #write('#define', self.featureName, '1', file=self.outFile)
            for section in self.TYPE_SECTIONS:
                #write('// endFeature writing section'+section, file=self.outFile)
                contents = self.sections[section]
                if contents:
                    write('\n'.join(contents), file=self.outFile)
                    self.newline()
            #write('// endFeature looking at self.sections[command]', file=self.outFile)
            if (self.sections['command']):
                write('\n'.join(self.sections['command']), end='', file=self.outFile)
                self.newline()
            if (self.featureExtraProtect != None):
                write('#endif /*', self.featureExtraProtect, '*/', file=self.outFile)
            if (self.genOpts.protectFeature):
                write('#endif /*', self.featureName, '*/', file=self.outFile)
        # Finish processing in superclass
        OutputGenerator.endFeature(self)
        #write('// ending endFeature', file=self.outFile)

    #
    # Append a definition to the specified section
    def appendSection(self, section, text):
        # self.sections[section].append('SECTION: ' + section + '\n')
        self.sections[section].append(text)

    #
    # Type generation
    def genType(self, typeinfo, name):
        pass

    #
    # Struct (e.g. C "struct" type) generation.
    # This is a special case of the <type> tag where the contents are
    # interpreted as a set of <member> tags instead of freeform C
    # C type declarations. The <member> tags are just like <param>
    # tags - they are a declaration of a struct or union member.
    # Only simple member declarations are supported (no nested
    # structs etc.)
    def genStruct(self, typeinfo, typeName):
        OutputGenerator.genStruct(self, typeinfo, typeName)
        body = 'typedef ' + typeinfo.elem.get('category') + ' ' + typeName + ' {\n'
        # paramdecl = self.makeCParamDecl(typeinfo.elem, self.genOpts.alignFuncParam)
        for member in typeinfo.elem.findall('.//member'):
            body += self.makeCParamDecl(member, self.genOpts.alignFuncParam)
            body += ';\n'
        body += '} ' + typeName + ';\n'
        self.appendSection('struct', body)

    #
    # Group (e.g. C "enum" type) generation.
    # These are concatenated together with other types.
    def genGroup(self, groupinfo, groupName):
        pass

    # Enumerant generation
    # <enum> tags may specify their values in several ways, but are usually
    # just integers.
    def genEnum(self, enuminfo, name):
        pass

    #
    # Command generation
    def genCmd(self, cmdinfo, name):
        special_functions = [
            'vkGetDeviceProcAddr',
            'vkGetInstanceProcAddr',
            'vkCreateDevice',
            'vkDestroyDevice',
            'vkCreateInstance',
            'vkDestroyInstance',
            'vkEnumerateInstanceLayerProperties',
            'vkEnumerateInstanceExtensionProperties',
            'vkAllocateCommandBuffers',
            'vkFreeCommandBuffers',
            'vkCreateDebugReportCallbackEXT',
            'vkDestroyDebugReportCallbackEXT',
        ]
        if name in special_functions:
            self.intercepts += [ '    {"%s", reinterpret_cast<PFN_vkVoidFunction>(%s)},' % (name, name) ]
            return
        if "KHR" in name:
            self.appendSection('command', '// TODO - not wrapping KHR function ' + name)
            return
        # Determine first if this function needs to be intercepted
        startthreadsafety = self.makeThreadUseBlock(cmdinfo.elem, 'start')
        if startthreadsafety is None:
            return
        finishthreadsafety = self.makeThreadUseBlock(cmdinfo.elem, 'finish')
        # record that the function will be intercepted
        if (self.featureExtraProtect != None):
            self.intercepts += [ '#ifdef %s' % self.featureExtraProtect ]
        self.intercepts += [ '    {"%s", reinterpret_cast<PFN_vkVoidFunction>(%s)},' % (name, name) ]
        if (self.featureExtraProtect != None):
            self.intercepts += [ '#endif' ]

        OutputGenerator.genCmd(self, cmdinfo, name)
        #
        decls = self.makeCDecls(cmdinfo.elem)
        self.appendSection('command', '')
        self.appendSection('command', decls[0][:-1])
        self.appendSection('command', '{')
        # setup common to call wrappers
        # first parameter is always dispatchable
        dispatchable_type = cmdinfo.elem.find('param/type').text
        dispatchable_name = cmdinfo.elem.find('param/name').text
        self.appendSection('command', '    dispatch_key key = get_dispatch_key(' + dispatchable_name + ');')
        self.appendSection('command', '    layer_data *my_data = get_my_data_ptr(key, layer_data_map);')
        if dispatchable_type in ["VkPhysicalDevice", "VkInstance"]:
            self.appendSection('command', '    VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;')
        else:
            self.appendSection('command', '    VkLayerDispatchTable *pTable = my_data->device_dispatch_table;')
        # Declare result variable, if any.
        resulttype = cmdinfo.elem.find('proto/type')
        if (resulttype != None and resulttype.text == 'void'):
            resulttype = None
        if (resulttype != None):
            self.appendSection('command', '    ' + resulttype.text + ' result;')
            assignresult = 'result = '
        else:
            assignresult = ''

        self.appendSection('command', str(startthreadsafety))
        params = cmdinfo.elem.findall('param/name')
        paramstext = ','.join([str(param.text) for param in params])
        API = cmdinfo.elem.attrib.get('name').replace('vk', 'pTable->', 1)
        self.appendSection('command', '    ' + assignresult + API + '(' + paramstext + ');')
        self.appendSection('command', str(finishthreadsafety))
        # Return result variable, if any.
        if (resulttype != None):
            self.appendSection('command', '    return result;')
        self.appendSection('command', '}')

# ParamCheckerOutputGenerator - subclass of OutputGenerator.
# Generates param checker layer code.
#
# ---- methods ----
# ParamCheckerOutputGenerator(errFile, warnFile, diagFile) - args as for
#   OutputGenerator. Defines additional internal state.
# ---- methods overriding base class ----
# beginFile(genOpts)
# endFile()
# beginFeature(interface, emit)
# endFeature()
# genType(typeinfo,name)
# genStruct(typeinfo,name)
# genGroup(groupinfo,name)
# genEnum(enuminfo, name)
# genCmd(cmdinfo)
class ParamCheckerOutputGenerator(OutputGenerator):
    """Generate ParamChecker code based on XML element attributes"""
    # This is an ordered list of sections in the header file.
    ALL_SECTIONS = ['command']

    def __init__(self,
                 errFile = sys.stderr,
                 warnFile = sys.stderr,
                 diagFile = sys.stdout):
        OutputGenerator.__init__(self, errFile, warnFile, diagFile)
        self.INDENT_SPACES = 4
        # Struct member categories, to be used to avoid validating output values
        # retrieved by queries such as vkGetPhysicalDeviceProperties.  For example,
        # VkPhysicalDeviceProperties will be ignored for vkGetPhysicalDeviceProperties,
        # where it is an output, but will be processed for vkCreateDevice, where it is
        # a member of the VkDeviceCreateInfo input parameter.
        self.STRUCT_MEMBERS_INPUT_ONLY_NONE = 1       # The struct contains no 'input-only' members and will always be processed
        self.STRUCT_MEMBERS_INPUT_ONLY_MIXED = 2      # The struct contains some 'input-only' members; these members will only be processed when the struct is an input parameter
        self.STRUCT_MEMBERS_INPUT_ONLY_EXCLUSIVE = 3  # The struct contains only 'input-only' members; the entire struct will only be processed when it is an input parameter
        # Commands to ignore
        self.blacklist = [
            'vkGetInstanceProcAddr',
            'vkGetDeviceProcAddr',
            'vkEnumerateInstanceLayerProperties',
            'vkEnumerateInstanceExtensionsProperties',
            'vkEnumerateDeviceLayerProperties',
            'vkEnumerateDeviceExtensionsProperties',
            'vkCreateDebugReportCallbackEXT',
            'vkDebugReportMessageEXT']
        # Internal state - accumulators for different inner block text
        self.sections = dict([(section, []) for section in self.ALL_SECTIONS])
        self.structNames = []              # List of Vulkan struct typenames
        self.stypes = []                   # Values from the VkStructureType enumeration
        self.structTypes = dict()          # Map of Vulkan struct typename to required VkStructureType
        self.commands = []                 # List of CommandData records for all Vulkan commands
        self.structMembers = []            # List of StructMemberData records for all Vulkan structs
        self.validatedStructs = dict()     # Map of structs containing members that require validation to a value indicating
                                           # that the struct contains members that are only validated when it is an input parameter
        self.enumRanges = dict()           # Map of enum name to BEGIN/END range values
        # Named tuples to store struct and command data
        self.StructType = namedtuple('StructType', ['name', 'value'])
        self.CommandParam = namedtuple('CommandParam', ['type', 'name', 'ispointer', 'isstaticarray', 'isbool', 'israngedenum',
                                                        'isconst', 'isoptional', 'iscount', 'len', 'extstructs', 'cdecl'])
        self.CommandData = namedtuple('CommandData', ['name', 'params', 'cdecl'])
        self.StructMemberData = namedtuple('StructMemberData', ['name', 'members'])

    #
    def incIndent(self, indent):
        inc = ' ' * self.INDENT_SPACES
        if indent:
            return indent + inc
        return inc

    #
    def decIndent(self, indent):
        if indent and (len(indent) > self.INDENT_SPACES):
            return indent[:-self.INDENT_SPACES]
        return ''

    #
    def beginFile(self, genOpts):
        OutputGenerator.beginFile(self, genOpts)
        # C-specific
        #
        # User-supplied prefix text, if any (list of strings)
        if (genOpts.prefixText):
            for s in genOpts.prefixText:
                write(s, file=self.outFile)
        #
        # Multiple inclusion protection & C++ wrappers.
        if (genOpts.protectFile and self.genOpts.filename):
            headerSym = re.sub(r'\.h', '_H', os.path.basename(self.genOpts.filename)).upper()
            write('#ifndef', headerSym, file=self.outFile)
            write('#define', headerSym, '1', file=self.outFile)
            self.newline()
        #
        # Headers
        write('#include <string>', file=self.outFile)
        self.newline()
        write('#include "vulkan/vulkan.h"', file=self.outFile)
        write('#include "vk_layer_extension_utils.h"', file=self.outFile)
        write('#include "parameter_validation_utils.h"', file=self.outFile)
        #
        # Macros
        self.newline()
        write('#ifndef UNUSED_PARAMETER', file=self.outFile)
        write('#define UNUSED_PARAMETER(x) (void)(x)', file=self.outFile)
        write('#endif // UNUSED_PARAMETER', file=self.outFile)

    def endFile(self):
        # C-specific
        # Finish C++ wrapper and multiple inclusion protection
        self.newline()
        if (self.genOpts.protectFile and self.genOpts.filename):
            self.newline()
            write('#endif', file=self.outFile)
        # Finish processing in superclass
        OutputGenerator.endFile(self)

    def beginFeature(self, interface, emit):
        # Start processing in superclass
        OutputGenerator.beginFeature(self, interface, emit)
        # C-specific
        # Accumulate includes, defines, types, enums, function pointer typedefs,
        # end function prototypes separately for this feature. They're only
        # printed in endFeature().
        self.sections = dict([(section, []) for section in self.ALL_SECTIONS])
        self.structNames = []
        self.stypes = []
        self.structTypes = dict()
        self.commands = []
        self.structMembers = []
        self.validatedStructs = dict()
        self.enumRanges = dict()

    def endFeature(self):
        # C-specific
        # Actually write the interface to the output file.
        if (self.emit):
            self.newline()
            # If type declarations are needed by other features based on
            # this one, it may be necessary to suppress the ExtraProtect,
            # or move it below the 'for section...' loop.
            if (self.featureExtraProtect != None):
                write('#ifdef', self.featureExtraProtect, file=self.outFile)
            # Generate the struct member checking code from the captured data
            self.prepareStructMemberData()
            self.processStructMemberData()
            # Generate the command parameter checking code from the captured data
            self.processCmdData()
            if (self.sections['command']):
                if (self.genOpts.protectProto):
                    write(self.genOpts.protectProto, self.genOpts.protectProtoStr, file=self.outFile)
                write('\n'.join(self.sections['command']), end='', file=self.outFile)
            if (self.featureExtraProtect != None):
                write('#endif /*', self.featureExtraProtect, '*/', file=self.outFile)
        else:
            self.newline()
        # Finish processing in superclass
        OutputGenerator.endFeature(self)

    #
    # Append a definition to the specified section
    def appendSection(self, section, text):
        # self.sections[section].append('SECTION: ' + section + '\n')
        self.sections[section].append(text)

    #
    # Type generation
    def genType(self, typeinfo, name):
        OutputGenerator.genType(self, typeinfo, name)
        typeElem = typeinfo.elem
        # If the type is a struct type, traverse the imbedded <member> tags
        # generating a structure. Otherwise, emit the tag text.
        category = typeElem.get('category')
        if (category == 'struct' or category == 'union'):
            self.structNames.append(name)
            self.genStruct(typeinfo, name)

    #
    # Struct parameter check generation.
    # This is a special case of the <type> tag where the contents are
    # interpreted as a set of <member> tags instead of freeform C
    # C type declarations. The <member> tags are just like <param>
    # tags - they are a declaration of a struct or union member.
    # Only simple member declarations are supported (no nested
    # structs etc.)
    def genStruct(self, typeinfo, typeName):
        OutputGenerator.genStruct(self, typeinfo, typeName)

        members = typeinfo.elem.findall('.//member')
        #
        # Iterate over members once to get length parameters for arrays
        lens = set()
        for member in members:
            len = self.getLen(member)
            if len:
                lens.add(len)
        #
        # Generate member info
        membersInfo = []
        for member in members:
            # Get the member's type and name
            info = self.getTypeNameTuple(member)
            type = info[0]
            name = info[1]
            stypeValue = ''
            cdecl = self.makeCParamDecl(member, 0)
            # Process VkStructureType
            if type == 'VkStructureType':
                # Extract the required struct type value from the comments
                # embedded in the original text defining the 'typeinfo' element
                rawXml = etree.tostring(typeinfo.elem).decode('ascii')
                result = re.search(r'VK_STRUCTURE_TYPE_\w+', rawXml)
                if result:
                    value = result.group(0)
                    # Make sure value is valid
                    #if value not in self.stypes:
                    #    print('WARNING: {} is not part of the VkStructureType enumeration [{}]'.format(value, typeName))
                else:
                    value = ''
                # Store the required type value
                self.structTypes[typeName] = self.StructType(name=name, value=value)
            #
            # Store pointer/array/string info
            # Check for parameter name in lens set
            iscount = False
            if name in lens:
                iscount = True
            # The pNext members are not tagged as optional, but are treated as
            # optional for parameter NULL checks.  Static array members
            # are also treated as optional to skip NULL pointer validation, as
            # they won't be NULL.
            isstaticarray = self.paramIsStaticArray(member)
            isoptional = False
            if self.paramIsOptional(member) or (name == 'pNext') or (isstaticarray):
                isoptional = True
            membersInfo.append(self.CommandParam(type=type, name=name,
                                                 ispointer=self.paramIsPointer(member),
                                                 isstaticarray=isstaticarray,
                                                 isbool=True if type == 'VkBool32' else False,
                                                 israngedenum=True if type in self.enumRanges else False,
                                                 isconst=True if 'const' in cdecl else False,
                                                 isoptional=isoptional,
                                                 iscount=iscount,
                                                 len=self.getLen(member),
                                                 extstructs=member.attrib.get('validextensionstructs') if name == 'pNext' else None,
                                                 cdecl=cdecl))
        self.structMembers.append(self.StructMemberData(name=typeName, members=membersInfo))

    #
    # Capture group (e.g. C "enum" type) info to be used for
    # param check code generation.
    # These are concatenated together with other types.
    def genGroup(self, groupinfo, groupName):
        OutputGenerator.genGroup(self, groupinfo, groupName)
        groupElem = groupinfo.elem
        #
        # Store the sType values
        if groupName == 'VkStructureType':
            for elem in groupElem.findall('enum'):
                self.stypes.append(elem.get('name'))
        else:
            # Determine if begin/end ranges are needed (we don't do this for
            # VkStructureType, which has a more finely grained check)
            expandName = re.sub(r'([0-9a-z_])([A-Z0-9][^A-Z0-9]?)', r'\1_\2', groupName).upper()
            expandPrefix = expandName
            expandSuffix = ''
            expandSuffixMatch = re.search(r'[A-Z][A-Z]+$', groupName)
            if expandSuffixMatch:
                expandSuffix = '_' + expandSuffixMatch.group()
                # Strip off the suffix from the prefix
                expandPrefix = expandName.rsplit(expandSuffix, 1)[0]
            isEnum = ('FLAG_BITS' not in expandPrefix)
            if isEnum:
                self.enumRanges[groupName] = (expandPrefix + '_BEGIN_RANGE' + expandSuffix,
                                              expandPrefix + '_END_RANGE' + expandSuffix)

    #
    # Capture command parameter info to be used for param
    # check code generation.
    def genCmd(self, cmdinfo, name):
        OutputGenerator.genCmd(self, cmdinfo, name)
        if name not in self.blacklist:
            params = cmdinfo.elem.findall('param')
            # Get list of array lengths
            lens = set()
            for param in params:
                len = self.getLen(param)
                if len:
                    lens.add(len)
            # Get param info
            paramsInfo = []
            for param in params:
                paramInfo = self.getTypeNameTuple(param)
                cdecl = self.makeCParamDecl(param, 0)
                # Check for parameter name in lens set
                iscount = False
                if paramInfo[1] in lens:
                    iscount = True
                paramsInfo.append(self.CommandParam(type=paramInfo[0], name=paramInfo[1],
                                                    ispointer=self.paramIsPointer(param),
                                                    isstaticarray=self.paramIsStaticArray(param),
                                                    isbool=True if paramInfo[0] == 'VkBool32' else False,
                                                    israngedenum=True if paramInfo[0] in self.enumRanges else False,
                                                    isconst=True if 'const' in cdecl else False,
                                                    isoptional=self.paramIsOptional(param),
                                                    iscount=iscount,
                                                    len=self.getLen(param),
                                                    extstructs=None,
                                                    cdecl=cdecl))
            self.commands.append(self.CommandData(name=name, params=paramsInfo,
                                                  cdecl=self.makeCDecls(cmdinfo.elem)[0]))
    #
    # Check if the parameter passed in is a pointer
    def paramIsPointer(self, param):
        ispointer = 0
        paramtype = param.find('type')
        if (paramtype.tail is not None) and ('*' in paramtype.tail):
            ispointer = paramtype.tail.count('*')
        elif paramtype.text[:4] == 'PFN_':
            # Treat function pointer typedefs as a pointer to a single value
            ispointer = 1
        return ispointer
    #
    # Check if the parameter passed in is a static array
    def paramIsStaticArray(self, param):
        isstaticarray = 0
        paramname = param.find('name')
        if (paramname.tail is not None) and ('[' in paramname.tail):
            isstaticarray = paramname.tail.count('[')
        return isstaticarray
    #
    # Check if the parameter passed in is optional
    # Returns a list of Boolean values for comma separated len attributes (len='false,true')
    def paramIsOptional(self, param):
        # See if the handle is optional
        isoptional = False
        # Simple, if it's optional, return true
        optString = param.attrib.get('optional')
        if optString:
            if optString == 'true':
                isoptional = True
            elif ',' in optString:
                opts = []
                for opt in optString.split(','):
                    val = opt.strip()
                    if val == 'true':
                        opts.append(True)
                    elif val == 'false':
                        opts.append(False)
                    else:
                        print('Unrecognized optional attribute value', val)
                isoptional = opts
        return isoptional
    #
    # Retrieve the value of the len tag
    def getLen(self, param):
        result = None
        len = param.attrib.get('len')
        if len and len != 'null-terminated':
            # For string arrays, 'len' can look like 'count,null-terminated',
            # indicating that we have a null terminated array of strings.  We
            # strip the null-terminated from the 'len' field and only return
            # the parameter specifying the string count
            if 'null-terminated' in len:
                result = len.split(',')[0]
            else:
                result = len
        return result
    #
    # Retrieve the type and name for a parameter
    def getTypeNameTuple(self, param):
        type = ''
        name = ''
        for elem in param:
            if elem.tag == 'type':
                type = noneStr(elem.text)
            elif elem.tag == 'name':
                name = noneStr(elem.text)
        return (type, name)
    #
    # Find a named parameter in a parameter list
    def getParamByName(self, params, name):
        for param in params:
            if param.name == name:
                return param
        return None
    #
    # Extract length values from latexmath.  Currently an inflexible solution that looks
    # for specific patterns that are found in vk.xml.  Will need to be updated when new
    # patterns are introduced.
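The `len` and `optional` attribute handling above can be sanity-checked against representative vk.xml attribute values. `split_len` and `parse_optional` below are hypothetical standalone re-implementations of the getLen and paramIsOptional logic, kept separate from the generator class:

```python
def split_len(len_attr):
    # Mirror getLen: drop 'null-terminated', keep the count parameter name
    if not len_attr or len_attr == 'null-terminated':
        return None
    if 'null-terminated' in len_attr:
        return len_attr.split(',')[0]
    return len_attr

def parse_optional(opt_attr):
    # Mirror paramIsOptional: 'true' -> True, comma-separated list -> list of bools
    if not opt_attr:
        return False
    if opt_attr == 'true':
        return True
    if ',' in opt_attr:
        return [v.strip() == 'true' for v in opt_attr.split(',')]
    return False

# A string-array parameter like ppEnabledExtensionNames carries both a count and
# a null-terminated marker; an inout count pointer is 'false,true'
print(split_len('enabledExtensionCount,null-terminated'))
print(parse_optional('false,true'))
```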
    def parseLateXMath(self, source):
        name = 'ERROR'
        decoratedName = 'ERROR'
        if 'mathit' in source:
            # Matches expressions similar to 'latexmath:[$\lceil{\mathit{rasterizationSamples} \over 32}\rceil$]'
            match = re.match(r'latexmath\s*\:\s*\[\s*\$\\l(\w+)\s*\{\s*\\mathit\s*\{\s*(\w+)\s*\}\s*\\over\s*(\d+)\s*\}\s*\\r(\w+)\$\s*\]', source)
            if not match or match.group(1) != match.group(4):
                raise RuntimeError('Unrecognized latexmath expression: ' + source)
            name = match.group(2)
            decoratedName = '{}({}/{})'.format(*match.group(1, 2, 3))
        else:
            # Matches expressions similar to 'latexmath : [$dataSize \over 4$]'
            match = re.match(r'latexmath\s*\:\s*\[\s*\$\s*(\w+)\s*\\over\s*(\d+)\s*\$\s*\]', source)
            name = match.group(1)
            decoratedName = '{}/{}'.format(*match.group(1, 2))
        return name, decoratedName
    #
    # Get the length parameter record for the specified parameter name
    def getLenParam(self, params, name):
        lenParam = None
        if name:
            if '->' in name:
                # The count is obtained by dereferencing a member of a struct parameter
                lenParam = self.CommandParam(name=name, iscount=True, ispointer=False,
                                             isbool=False, israngedenum=False, isconst=False,
                                             isstaticarray=None, isoptional=False, type=None,
                                             len=None, extstructs=None, cdecl=None)
            elif 'latexmath' in name:
                lenName, decoratedName = self.parseLateXMath(name)
                lenParam = self.getParamByName(params, lenName)
                # TODO: Zero-check the result produced by the equation?
                # Copy the stored len parameter entry and overwrite the name with the
                # processed latexmath equation
                #param = self.getParamByName(params, lenName)
                #lenParam = self.CommandParam(name=decoratedName, iscount=param.iscount,
                #                             ispointer=param.ispointer, isoptional=param.isoptional,
                #                             type=param.type, len=param.len,
                #                             isstaticarray=param.isstaticarray,
                #                             extstructs=param.extstructs, cdecl=param.cdecl)
            else:
                lenParam = self.getParamByName(params, name)
        return lenParam
    #
    # Convert a vulkan.h command declaration into a parameter_validation.h definition
    def getCmdDef(self, cmd):
        #
        # Strip the trailing ';' and split into individual lines
        lines = cmd.cdecl[:-1].split('\n')
        # Replace the Vulkan prototype
        lines[0] = 'static VkBool32 parameter_validation_' + cmd.name + '('
        # Replace the first argument with debug_report_data, when the first
        # argument is a handle (not vkCreateInstance)
        reportData = '    debug_report_data*'.ljust(self.genOpts.alignFuncParam) + 'report_data,'
        if cmd.name != 'vkCreateInstance':
            lines[1] = reportData
        else:
            lines.insert(1, reportData)
        return '\n'.join(lines)
    #
    # Generate the code to check for a NULL dereference before calling the
    # validation function
    def genCheckedLengthCall(self, indent, name, expr):
        count = name.count('->')
        if count:
            checkedExpr = ''
            localIndent = indent
            elements = name.split('->')
            # Open the if expression blocks
            for i in range(0, count):
                checkedExpr += localIndent + 'if ({} != NULL) {{\n'.format('->'.join(elements[0:i+1]))
                localIndent = self.incIndent(localIndent)
            # Add the validation expression
            checkedExpr += localIndent + expr
            # Close the if blocks
            for i in range(0, count):
                localIndent = self.decIndent(localIndent)
                checkedExpr += localIndent + '}\n'
            return checkedExpr
        # No if statements were required
        return indent + expr
    #
    # Generate the parameter checking code
    def genFuncBody(self, indent, name, values, valuePrefix, variablePrefix, structName, needConditionCheck):
        funcBody = ''
        unused = []
        # Code to conditionally check parameters only when they are inputs.  Primarily avoids
        # checking uninitialized members of output structs used to retrieve bools and enums.
        # Conditional checks are grouped together to be appended to funcBody within a single
        # if check for input parameter direction.
        conditionalExprs = []
        for value in values:
            checkExpr = ''     # Code to check the current parameter
            lenParam = None
            #
            # Generate the full name of the value, which will be printed in the
            # error message, by adding the variable prefix to the value name
            valueDisplayName = '(std::string({}) + std::string("{}")).c_str()'.format(variablePrefix, value.name) if variablePrefix else '"{}"'.format(value.name)
            #
            # Check for NULL pointers, ignore the inout count parameters that
            # will be validated with their associated array
            if (value.ispointer or value.isstaticarray) and not value.iscount:
                #
                # Parameters for function argument generation
                req = 'VK_TRUE'        # Parameter cannot be NULL
                cpReq = 'VK_TRUE'      # Count pointer cannot be NULL
                cvReq = 'VK_TRUE'      # Count value cannot be 0
                lenDisplayName = None  # Name of length parameter to print with validation messages; parameter name with prefix applied
                #
                # Generate required/optional parameter strings for the pointer and count values
                if value.isoptional:
                    req = 'VK_FALSE'
                if value.len:
                    # The parameter is an array with an explicit count parameter
                    lenParam = self.getLenParam(values, value.len)
                    lenDisplayName = '(std::string({}) + std::string("{}")).c_str()'.format(variablePrefix, lenParam.name) if variablePrefix else '"{}"'.format(lenParam.name)
                    if lenParam.ispointer:
                        # Count parameters that are pointers are inout
                        if type(lenParam.isoptional) is list:
                            if lenParam.isoptional[0]:
                                cpReq = 'VK_FALSE'
                            if lenParam.isoptional[1]:
                                cvReq = 'VK_FALSE'
                        else:
                            if lenParam.isoptional:
                                cpReq = 'VK_FALSE'
                    else:
                        if lenParam.isoptional:
                            cvReq = 'VK_FALSE'
                #
                # If this is a pointer to a struct with an sType field, verify the type
                if value.type in self.structTypes:
                    stype = self.structTypes[value.type]
                    if lenParam:
                        # This is an array
                        if lenParam.ispointer:
                            # When the length parameter is a pointer, there is an extra Boolean
                            # parameter in the function call to indicate if it is required
                            checkExpr = 'skipCall |= validate_struct_type_array(report_data, {}, {ldn}, {dn}, "{sv}", {pf}{ln}, {pf}{vn}, {sv}, {}, {}, {});\n'.format(name, cpReq, cvReq, req, ln=lenParam.name, ldn=lenDisplayName, dn=valueDisplayName, vn=value.name, sv=stype.value, pf=valuePrefix)
                        else:
                            checkExpr = 'skipCall |= validate_struct_type_array(report_data, {}, {ldn}, {dn}, "{sv}", {pf}{ln}, {pf}{vn}, {sv}, {}, {});\n'.format(name, cvReq, req, ln=lenParam.name, ldn=lenDisplayName, dn=valueDisplayName, vn=value.name, sv=stype.value, pf=valuePrefix)
                    else:
                        checkExpr = 'skipCall |= validate_struct_type(report_data, {}, {}, "{sv}", {}{vn}, {sv}, {});\n'.format(name, valueDisplayName, valuePrefix, req, vn=value.name, sv=stype.value)
                elif value.name == 'pNext':
                    # We need to ignore VkDeviceCreateInfo and VkInstanceCreateInfo, as the
                    # loader manipulates them in a way that is not documented in vk.xml
                    if not structName in ['VkDeviceCreateInfo', 'VkInstanceCreateInfo']:
                        # Generate an array of acceptable VkStructureType values for pNext
                        extStructCount = 0
                        extStructVar = 'NULL'
                        extStructNames = 'NULL'
                        if value.extstructs:
                            structs = value.extstructs.split(',')
                            checkExpr = 'const VkStructureType allowedStructs[] = {' + ', '.join([self.structTypes[s].value for s in structs]) + '};\n' + indent
                            extStructCount = 'ARRAY_SIZE(allowedStructs)'
                            extStructVar = 'allowedStructs'
                            extStructNames = '"' + ', '.join(structs) + '"'
                        checkExpr += 'skipCall |= validate_struct_pnext(report_data, {}, {}, {}, {}{vn}, {}, {});\n'.format(name, valueDisplayName, extStructNames, valuePrefix, extStructCount, extStructVar, vn=value.name)
                else:
                    if lenParam:
                        # This is an array
                        if lenParam.ispointer:
                            # If count and array parameters are optional, there will be no validation
                            if req == 'VK_TRUE' or cpReq == 'VK_TRUE' or cvReq == 'VK_TRUE':
                                # When the length parameter is a pointer, there is an extra Boolean
                                # parameter in the function call to indicate if it is required
                                checkExpr = 'skipCall |= validate_array(report_data, {}, {ldn}, {dn}, {pf}{ln}, {pf}{vn}, {}, {}, {});\n'.format(name, cpReq, cvReq, req, ln=lenParam.name, ldn=lenDisplayName, dn=valueDisplayName, vn=value.name, pf=valuePrefix)
                        else:
                            # If count and array parameters are optional, there will be no validation
                            if req == 'VK_TRUE' or cvReq == 'VK_TRUE':
                                funcName = 'validate_array' if value.type != 'char' else 'validate_string_array'
                                checkExpr = 'skipCall |= {}(report_data, {}, {ldn}, {dn}, {pf}{ln}, {pf}{vn}, {}, {});\n'.format(funcName, name, cvReq, req, ln=lenParam.name, ldn=lenDisplayName, dn=valueDisplayName, vn=value.name, pf=valuePrefix)
                    elif not value.isoptional:
                        # Function pointers need a reinterpret_cast to void*
                        if value.type[:4] == 'PFN_':
                            checkExpr = 'skipCall |= validate_required_pointer(report_data, {}, {}, reinterpret_cast<const void*>({}{vn}));\n'.format(name, valueDisplayName, valuePrefix, vn=value.name)
                        else:
                            checkExpr = 'skipCall |= validate_required_pointer(report_data, {}, {}, {}{vn});\n'.format(name, valueDisplayName, valuePrefix, vn=value.name)
                #
                # If this is a pointer to a struct (input), see if it contains members that need to be checked
                if value.type in self.validatedStructs and value.isconst:
                    #
                    # The name prefix used when reporting an error with a struct member
                    # (eg. the 'pCreateInfo->' in 'pCreateInfo->sType')
                    if lenParam:
                        prefix = '(std::string({}) + std::string("{}[i]->")).c_str()'.format(variablePrefix, value.name) if variablePrefix else '(std::string("{}[i]->")).c_str()'.format(value.name)
                    else:
                        prefix = '(std::string({}) + std::string("{}->")).c_str()'.format(variablePrefix, value.name) if variablePrefix else '"{}->"'.format(value.name)
                    #
                    membersInputOnly = self.validatedStructs[value.type]
                    #
                    # If the current struct has mixed 'input-only' and 'non-input-only' members,
                    # it needs an isInput flag
                    if membersInputOnly == self.STRUCT_MEMBERS_INPUT_ONLY_MIXED:
                        # If this function is called from another struct validation function
                        # (valuePrefix is not empty), then we forward the 'isInput' parameter
                        isInput = 'isInput'
                        if not valuePrefix:
                            # We are validating function parameters and need to determine if the
                            # current value is an input parameter
                            isInput = 'true' if value.isconst else 'false'
                        if checkExpr:
                            checkExpr += '\n' + indent
                        if lenParam:
                            # Need to process all elements in the array
                            checkExpr += 'if ({}{} != NULL) {{\n'.format(valuePrefix, value.name)
                            indent = self.incIndent(indent)
                            checkExpr += indent + 'for (uint32_t i = 0; i < {}{}; ++i) {{\n'.format(valuePrefix, lenParam.name)
                            indent = self.incIndent(indent)
                            checkExpr += indent + 'skipCall |= parameter_validation_{}(report_data, {}, {}, {}, &({}{}[i]));\n'.format(value.type, name, prefix, isInput, valuePrefix, value.name)
                            indent = self.decIndent(indent)
                            checkExpr += indent + '}\n'
                            indent = self.decIndent(indent)
                            checkExpr += indent + '}\n'
                        else:
                            checkExpr += 'skipCall |= parameter_validation_{}(report_data, {}, {}, {}, {}{});\n'.format(value.type, name, prefix, isInput, valuePrefix, value.name)
                    else:
                        # Validation function does not have an isInput field
                        if lenParam:
                            # Need to process all elements in the array
                            expr = 'if ({}{} != NULL) {{\n'.format(valuePrefix, value.name)
                            indent = self.incIndent(indent)
                            expr += indent + 'for (uint32_t i = 0; i < {}{}; ++i) {{\n'.format(valuePrefix, lenParam.name)
                            indent = self.incIndent(indent)
                            expr += indent + 'skipCall |= parameter_validation_{}(report_data, {}, {}, &({}{}[i]));\n'.format(value.type, name, prefix, valuePrefix, value.name)
                            indent = self.decIndent(indent)
                            expr += indent + '}\n'
                            indent = self.decIndent(indent)
                            expr += indent + '}\n'
                        else:
                            expr = 'skipCall |= parameter_validation_{}(report_data, {}, {}, {}{});\n'.format(value.type, name, prefix, valuePrefix, value.name)
                        #
                        # If the struct only has input-only members and is a member of another
                        # struct, it is conditionally processed based on 'isInput'
                        if valuePrefix and membersInputOnly == self.STRUCT_MEMBERS_INPUT_ONLY_EXCLUSIVE:
                            if needConditionCheck:
                                if expr.count('\n') > 1:
                                    # TODO: Proper fix for this formatting workaround
                                    conditionalExprs.append(expr.replace(' ' * 8, ' ' * 12))
                                else:
                                    conditionalExprs.append(expr)
                            else:
                                if checkExpr:
                                    checkExpr += '\n' + indent
                                checkExpr += expr
                        #
                        # If the struct is a function parameter (valuePrefix is empty) and only
                        # contains input-only parameters, it can be ignored if it is not an input
                        elif (membersInputOnly == self.STRUCT_MEMBERS_INPUT_ONLY_NONE) or (not valuePrefix and membersInputOnly == self.STRUCT_MEMBERS_INPUT_ONLY_EXCLUSIVE and value.isconst):
                            if checkExpr:
                                checkExpr += '\n' + indent
                            checkExpr += expr
                elif value.isbool and value.isconst:
                    expr = 'skipCall |= validate_bool32_array(report_data, {}, {}, {pf}{}, {pf}{});\n'.format(name, valueDisplayName, lenParam.name, value.name, pf=valuePrefix)
                    if checkExpr:
                        checkExpr += '\n' + indent
                    checkExpr += expr
                elif value.israngedenum and value.isconst:
                    enumRange = self.enumRanges[value.type]
                    expr = 'skipCall |= validate_ranged_enum_array(report_data, {}, {}, "{}", {}, {}, {pf}{}, {pf}{});\n'.format(name, valueDisplayName, value.type, enumRange[0], enumRange[1], lenParam.name, value.name, pf=valuePrefix)
                    if checkExpr:
                        checkExpr += '\n' + indent
                    checkExpr += expr
            elif value.type in self.validatedStructs:
                # The name of the value with prefix applied
                prefix = '(std::string({}) + std::string("{}.")).c_str()'.format(variablePrefix, value.name) if variablePrefix else '"{}."'.format(value.name)
                #
                membersInputOnly = self.validatedStructs[value.type]
                #
                # If the current struct has mixed 'input-only' and 'non-input-only' members,
                # it needs an isInput flag
                if membersInputOnly == self.STRUCT_MEMBERS_INPUT_ONLY_MIXED:
                    # If this function is called from another struct validation function
                    # (valuePrefix is not empty), then we forward the 'isInput' parameter
                    isInput = 'isInput'
                    if not valuePrefix:
                        # We are validating function parameters and need to determine if the
                        # current value is an input parameter
                        isInput = 'true' if value.isconst else 'false'
                    if checkExpr:
                        checkExpr += '\n' + indent
                    checkExpr += 'skipCall |= parameter_validation_{}(report_data, {}, {}, {}, &({}{}));\n'.format(value.type, name, prefix, isInput, valuePrefix, value.name)
                else:
                    # Validation function does not have an isInput field
                    expr = 'skipCall |= parameter_validation_{}(report_data, {}, {}, &({}{}));\n'.format(value.type, name, prefix, valuePrefix, value.name)
                    #
                    # If the struct only has input-only members and is a member of another
                    # struct, it is conditionally processed based on 'isInput'
                    if valuePrefix and membersInputOnly == self.STRUCT_MEMBERS_INPUT_ONLY_EXCLUSIVE:
                        if needConditionCheck:
                            conditionalExprs.append(expr)
                        else:
                            if checkExpr:
                                checkExpr += '\n' + indent
                            checkExpr += expr
                    #
                    # If the struct is a function parameter (valuePrefix is empty) and only
                    # contains input-only parameters, it can be ignored if it is not an input
                    elif (membersInputOnly == self.STRUCT_MEMBERS_INPUT_ONLY_NONE) or (not valuePrefix and membersInputOnly == self.STRUCT_MEMBERS_INPUT_ONLY_EXCLUSIVE and value.isconst):
                        if checkExpr:
                            checkExpr += '\n' + indent
                        checkExpr += expr
            elif value.isbool:
                expr = 'skipCall |= validate_bool32(report_data, {}, {}, {}{});\n'.format(name, valueDisplayName, valuePrefix, value.name)
                if needConditionCheck:
                    conditionalExprs.append(expr)
                else:
                    checkExpr = expr
            elif value.israngedenum:
                enumRange = self.enumRanges[value.type]
                expr = 'skipCall |= validate_ranged_enum(report_data, {}, {}, "{}", {}, {}, {}{});\n'.format(name, valueDisplayName, value.type, enumRange[0], enumRange[1], valuePrefix, value.name)
                if needConditionCheck:
                    conditionalExprs.append(expr)
                else:
                    checkExpr = expr
            #
            # Append the parameter check to the function body for the current command
            if checkExpr:
                funcBody += '\n'
                if lenParam and ('->' in lenParam.name):
                    # Add checks to ensure the validation call does not dereference a NULL
                    # pointer to obtain the count
                    funcBody += self.genCheckedLengthCall(indent, lenParam.name, checkExpr)
                else:
                    funcBody += indent + checkExpr
            elif not value.iscount:
                # If no expression was generated for this value, it is unreferenced by the
                # validation function, unless it is an array count, which is indirectly
                # referenced for array validation.
                unused.append(value.name)
        # Add the 'input' only checks
        if conditionalExprs:
            funcBody += '\n'
            funcBody += indent + 'if (isInput) {'
            indent = self.incIndent(indent)
            for conditionalExpr in conditionalExprs:
                funcBody += '\n'
                funcBody += indent + conditionalExpr
            indent = self.decIndent(indent)
            funcBody += indent + '}\n'
        return funcBody, unused
    #
    # Post-process the collected struct member data to create a list of structs
    # with members that need to be validated
    def prepareStructMemberData(self):
        for struct in self.structMembers:
            inputOnly = False
            validated = False
            for member in struct.members:
                if not member.iscount:
                    lenParam = self.getLenParam(struct.members, member.len)
                    # The sType value needs to be validated
                    # The pNext value needs to be validated
                    # A required array/count needs to be validated
                    # A required pointer needs to be validated
                    # A bool needs to be validated, and the struct is an input parameter
                    # An enum needs to be validated, and the struct is an input parameter
                    if member.type in self.structTypes:
                        validated = True
                    elif member.name == 'pNext':
                        validated = True
                    elif member.ispointer and lenParam:
                        # This is an array; make sure len is not optional
                        if lenParam.ispointer:
                            if not lenParam.isoptional[0] or not lenParam.isoptional[1] or not member.isoptional:
                                validated = True
                        else:
                            if not lenParam.isoptional or not member.isoptional:
                                validated = True
                    elif member.ispointer and not member.isoptional:
                        validated = True
                    elif member.isbool or member.israngedenum:
                        inputOnly = True
            #
            if validated or inputOnly:
                if not validated:
                    self.validatedStructs[struct.name] = self.STRUCT_MEMBERS_INPUT_ONLY_EXCLUSIVE
                elif not inputOnly:
                    self.validatedStructs[struct.name] = self.STRUCT_MEMBERS_INPUT_ONLY_NONE
                else:
                    self.validatedStructs[struct.name] = self.STRUCT_MEMBERS_INPUT_ONLY_MIXED
            # Second pass to check for struct members that are structs requiring validation
            # May not be necessary, as structs seem to always be defined before first use
            # in the XML registry
            for member in struct.members:
                if member.type in self.validatedStructs:
                    memberInputOnly = self.validatedStructs[member.type]
                    if not struct.name in self.validatedStructs:
                        self.validatedStructs[struct.name] = memberInputOnly
                    elif self.validatedStructs[struct.name] != memberInputOnly:
                        self.validatedStructs[struct.name] = self.STRUCT_MEMBERS_INPUT_ONLY_MIXED
    #
    # Generate the struct member check code from the captured data
    def processStructMemberData(self):
        indent = self.incIndent(None)
        for struct in self.structMembers:
            needConditionCheck = False
            if struct.name in self.validatedStructs and self.validatedStructs[struct.name] == self.STRUCT_MEMBERS_INPUT_ONLY_MIXED:
                needConditionCheck = True
            #
            # The string returned by genFuncBody will be nested in an if check for a NULL
            # pointer, so needs its indent incremented
            funcBody, unused = self.genFuncBody(self.incIndent(indent), 'pFuncName', struct.members, 'pStruct->', 'pVariableName', struct.name, needConditionCheck)
            if funcBody:
                cmdDef = 'static VkBool32 parameter_validation_{}(\n'.format(struct.name)
                cmdDef += '    debug_report_data*'.ljust(self.genOpts.alignFuncParam) + ' report_data,\n'
                cmdDef += '    const char*'.ljust(self.genOpts.alignFuncParam) + ' pFuncName,\n'
                cmdDef += '    const char*'.ljust(self.genOpts.alignFuncParam) + ' pVariableName,\n'
                # If there is a funcBody, this struct must have an entry in the validatedStructs dictionary
                if self.validatedStructs[struct.name] == self.STRUCT_MEMBERS_INPUT_ONLY_MIXED:
                    # If the struct has mixed input only and non-input only members, it needs
                    # a flag to indicate if it is an input or output
                    cmdDef += '    bool'.ljust(self.genOpts.alignFuncParam) + ' isInput,\n'
                cmdDef += '    const {}*'.format(struct.name).ljust(self.genOpts.alignFuncParam) + ' pStruct)\n'
                cmdDef += '{\n'
                cmdDef += indent + 'VkBool32 skipCall = VK_FALSE;\n'
                cmdDef += '\n'
                cmdDef += indent + 'if (pStruct != NULL) {'
                cmdDef += funcBody
                cmdDef += indent + '}\n'
                cmdDef += '\n'
                cmdDef += indent + 'return skipCall;\n'
                cmdDef += '}\n'
                self.appendSection('command', cmdDef)
    #
    # Generate the command param check code from the captured data
    def processCmdData(self):
        indent = self.incIndent(None)
        for command in self.commands:
            cmdBody, unused = self.genFuncBody(indent, '"{}"'.format(command.name), command.params, '', None, None, False)
            if cmdBody:
                cmdDef = self.getCmdDef(command) + '\n'
                cmdDef += '{\n'
                # Process unused parameters
                # Ignore the first dispatch handle parameter, which is not processed by
                # parameter_validation (except for vkCreateInstance, which does not have
                # a handle as its first parameter)
                startIndex = 1
                if command.name == 'vkCreateInstance':
                    startIndex = 0
                for name in unused[startIndex:]:
                    cmdDef += indent + 'UNUSED_PARAMETER({});\n'.format(name)
                if len(unused) > 1:
                    cmdDef += '\n'
                cmdDef += indent + 'VkBool32 skipCall = VK_FALSE;\n'
                cmdDef += cmdBody
                cmdDef += '\n'
                cmdDef += indent + 'return skipCall;\n'
                cmdDef += '}\n'
                self.appendSection('command', cmdDef)
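The two latexmath patterns handled by parseLateXMath can be verified against the expression shapes that actually occur in vk.xml (a ceiling division and a plain division). `parse_latexmath` below is a standalone sketch of the same regex logic:

```python
import re

def parse_latexmath(source):
    # Ceil-division form: latexmath:[$\lceil{\mathit{name} \over N}\rceil$]
    if 'mathit' in source:
        m = re.match(r'latexmath\s*\:\s*\[\s*\$\\l(\w+)\s*\{\s*\\mathit\s*\{\s*(\w+)\s*\}\s*\\over\s*(\d+)\s*\}\s*\\r(\w+)\$\s*\]', source)
        if not m or m.group(1) != m.group(4):
            raise RuntimeError('Unrecognized latexmath expression: ' + source)
        return m.group(2), '{}({}/{})'.format(*m.group(1, 2, 3))
    # Plain division form: latexmath:[$name \over N$]
    m = re.match(r'latexmath\s*\:\s*\[\s*\$\s*(\w+)\s*\\over\s*(\d+)\s*\$\s*\]', source)
    return m.group(1), '{}/{}'.format(*m.group(1, 2))

print(parse_latexmath(r'latexmath:[$\lceil{\mathit{rasterizationSamples} \over 32}\rceil$]'))
print(parse_latexmath(r'latexmath : [$dataSize \over 4$]'))
```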
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/genvk.py000077500000000000000000000314531270147354000224170ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright (c) 2013-2015 The Khronos Group Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and/or associated documentation files (the # "Materials"), to deal in the Materials without restriction, including # without limitation the rights to use, copy, modify, merge, publish, # distribute, sublicense, and/or sell copies of the Materials, and to # permit persons to whom the Materials are furnished to do so, subject to # the following conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Materials. # # THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, # EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF # MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. # IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY # CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, # TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE # MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS. 
import sys, time, pdb, string, cProfile from reg import * from generator import write, CGeneratorOptions, COutputGenerator, DocGeneratorOptions, DocOutputGenerator, PyOutputGenerator, ValidityOutputGenerator, HostSynchronizationOutputGenerator, ThreadGeneratorOptions, ThreadOutputGenerator from generator import ParamCheckerGeneratorOptions, ParamCheckerOutputGenerator # debug - start header generation in debugger # dump - dump registry after loading # profile - enable Python profiling # protect - whether to use #ifndef protections # registry - use specified XML registry instead of gl.xml # target - string name of target header, or all targets if None # timeit - time length of registry loading & header generation # validate - validate return & parameter group tags against debug = False dump = False profile = False protect = True target = None timeit = False validate= False # Default input / log files errFilename = None diagFilename = 'diag.txt' regFilename = 'vk.xml' if __name__ == '__main__': i = 1 while (i < len(sys.argv)): arg = sys.argv[i] i = i + 1 if (arg == '-debug'): write('Enabling debug (-debug)', file=sys.stderr) debug = True elif (arg == '-dump'): write('Enabling dump (-dump)', file=sys.stderr) dump = True elif (arg == '-noprotect'): write('Disabling inclusion protection in output headers', file=sys.stderr) protect = False elif (arg == '-profile'): write('Enabling profiling (-profile)', file=sys.stderr) profile = True elif (arg == '-registry'): regFilename = sys.argv[i] i = i+1 write('Using registry ', regFilename, file=sys.stderr) elif (arg == '-time'): write('Enabling timing (-time)', file=sys.stderr) timeit = True elif (arg == '-validate'): write('Enabling group validation (-validate)', file=sys.stderr) validate = True elif (arg[0:1] == '-'): write('Unrecognized argument:', arg, file=sys.stderr) exit(1) else: target = arg write('Using target', target, file=sys.stderr) # Simple timer functions startTime = None def startTimer(): global startTime 
startTime = time.clock() def endTimer(msg): global startTime endTime = time.clock() if (timeit): write(msg, endTime - startTime) startTime = None # Load & parse registry reg = Registry() startTimer() tree = etree.parse(regFilename) endTimer('Time to make ElementTree =') startTimer() reg.loadElementTree(tree) endTimer('Time to parse ElementTree =') if (validate): reg.validateGroups() if (dump): write('***************************************') write('Performing Registry dump to regdump.txt') write('***************************************') reg.dumpReg(filehandle = open('regdump.txt','w')) # Turn a list of strings into a regexp string matching exactly those strings def makeREstring(list): return '^(' + '|'.join(list) + ')$' # Descriptive names for various regexp patterns used to select # versions and extensions allVersions = allExtensions = '.*' noVersions = noExtensions = None # Copyright text prefixing all headers (list of strings). prefixStrings = [ '/*', '** Copyright (c) 2015-2016 The Khronos Group Inc.', '**', '** Permission is hereby granted, free of charge, to any person obtaining a', '** copy of this software and/or associated documentation files (the', '** "Materials"), to deal in the Materials without restriction, including', '** without limitation the rights to use, copy, modify, merge, publish,', '** distribute, sublicense, and/or sell copies of the Materials, and to', '** permit persons to whom the Materials are furnished to do so, subject to', '** the following conditions:', '**', '** The above copyright notice and this permission notice shall be included', '** in all copies or substantial portions of the Materials.', '**', '** THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,', '** EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF', '** MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.', '** IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY', '** CLAIM, DAMAGES OR OTHER LIABILITY, 
WHETHER IN AN ACTION OF CONTRACT,', '** TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE', '** MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.', '*/', '' ] # Text specific to Vulkan headers vkPrefixStrings = [ '/*', '** This header is generated from the Khronos Vulkan XML API Registry.', '**', '*/', '' ] # Defaults for generating re-inclusion protection wrappers (or not) protectFile = protect protectFeature = protect protectProto = protect buildList = [ # Vulkan 1.0 - header for core API + extensions. # To generate just the core API, # change to 'defaultExtensions = None' below. [ COutputGenerator, CGeneratorOptions( filename = 'include/vulkan/vulkan.h', apiname = 'vulkan', profile = None, versions = allVersions, emitversions = allVersions, defaultExtensions = 'vulkan', addExtensions = None, removeExtensions = None, prefixText = prefixStrings + vkPrefixStrings, genFuncPointers = True, protectFile = protectFile, protectFeature = False, protectProto = '#ifndef', protectProtoStr = 'VK_NO_PROTOTYPES', apicall = 'VKAPI_ATTR ', apientry = 'VKAPI_CALL ', apientryp = 'VKAPI_PTR *', alignFuncParam = 48) ], # Vulkan 1.0 draft - API include files for spec and ref pages # Overwrites include subdirectories in spec source tree # The generated include files do not include the calling convention # macros (apientry etc.), unlike the header files. # Because the 1.0 core branch includes ref pages for extensions, # all the extension interfaces need to be generated, even though # none are used by the core spec itself. 
[ DocOutputGenerator, DocGeneratorOptions( filename = 'vulkan-docs', apiname = 'vulkan', profile = None, versions = allVersions, emitversions = allVersions, defaultExtensions = None, addExtensions = makeREstring([ 'VK_KHR_sampler_mirror_clamp_to_edge', ]), removeExtensions = makeREstring([ ]), prefixText = prefixStrings + vkPrefixStrings, apicall = '', apientry = '', apientryp = '*', genDirectory = '../../doc/specs/vulkan', alignFuncParam = 48, expandEnumerants = False) ], # Vulkan 1.0 draft - API names to validate man/api spec includes & links [ PyOutputGenerator, DocGeneratorOptions( filename = '../../doc/specs/vulkan/vkapi.py', apiname = 'vulkan', profile = None, versions = allVersions, emitversions = allVersions, defaultExtensions = None, addExtensions = makeREstring([ 'VK_KHR_sampler_mirror_clamp_to_edge', ]), removeExtensions = makeREstring([ ])) ], # Vulkan 1.0 draft - core API validity files for spec # Overwrites validity subdirectories in spec source tree [ ValidityOutputGenerator, DocGeneratorOptions( filename = 'validity', apiname = 'vulkan', profile = None, versions = allVersions, emitversions = allVersions, defaultExtensions = None, addExtensions = makeREstring([ 'VK_KHR_sampler_mirror_clamp_to_edge', ]), removeExtensions = makeREstring([ ]), genDirectory = '../../doc/specs/vulkan') ], # Vulkan 1.0 draft - core API host sync table files for spec # Overwrites subdirectory in spec source tree [ HostSynchronizationOutputGenerator, DocGeneratorOptions( filename = 'hostsynctable', apiname = 'vulkan', profile = None, versions = allVersions, emitversions = allVersions, defaultExtensions = None, addExtensions = makeREstring([ 'VK_KHR_sampler_mirror_clamp_to_edge', ]), removeExtensions = makeREstring([ ]), genDirectory = '../../doc/specs/vulkan') ], # Vulkan 1.0 draft - thread checking layer [ ThreadOutputGenerator, ThreadGeneratorOptions( filename = 'thread_check.h', apiname = 'vulkan', profile = None, versions = allVersions, emitversions = allVersions, 
defaultExtensions = 'vulkan', addExtensions = None, removeExtensions = None, prefixText = prefixStrings + vkPrefixStrings, genFuncPointers = True, protectFile = protectFile, protectFeature = False, protectProto = True, protectProtoStr = 'VK_PROTOTYPES', apicall = '', apientry = 'VKAPI_CALL ', apientryp = 'VKAPI_PTR *', alignFuncParam = 48) ], [ ParamCheckerOutputGenerator, ParamCheckerGeneratorOptions( filename = 'parameter_validation.h', apiname = 'vulkan', profile = None, versions = allVersions, emitversions = allVersions, defaultExtensions = 'vulkan', addExtensions = None, removeExtensions = None, prefixText = prefixStrings + vkPrefixStrings, genFuncPointers = True, protectFile = protectFile, protectFeature = False, protectProto = None, protectProtoStr = 'VK_NO_PROTOTYPES', apicall = 'VKAPI_ATTR ', apientry = 'VKAPI_CALL ', apientryp = 'VKAPI_PTR *', alignFuncParam = 48) ], None ] # create error/warning & diagnostic files if (errFilename): errWarn = open(errFilename,'w') else: errWarn = sys.stderr diag = open(diagFilename, 'w') def genHeaders(): # Loop over targets, building each generated = 0 for item in buildList: if (item == None): break createGenerator = item[0] genOpts = item[1] if (target and target != genOpts.filename): # write('*** Skipping', genOpts.filename) continue write('*** Building', genOpts.filename) generated = generated + 1 startTimer() gen = createGenerator(errFile=errWarn, warnFile=errWarn, diagFile=diag) reg.setGenerator(gen) reg.apiGen(genOpts) write('** Generated', genOpts.filename) endTimer('Time to generate ' + genOpts.filename + ' =') if (target and generated == 0): write('Failed to generate target:', target) if (debug): pdb.run('genHeaders()') elif (profile): import cProfile, pstats cProfile.run('genHeaders()', 'profile.txt') p = pstats.Stats('profile.txt') p.strip_dirs().sort_stats('time').print_stats(50) else: genHeaders() 
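The generator script above drives everything from `buildList`: each entry pairs a generator class with an options object carrying the output filename, a `None` entry terminates the list, and a `target` argument selects a single output. A minimal standalone sketch of that dispatch pattern (the classes and filenames here are simplified stand-ins, not the real Khronos `COutputGenerator`/`DocOutputGenerator` machinery):

```python
# Sketch of the genvk.py-style build list: (generator class, options) pairs,
# terminated by a None sentinel, optionally filtered by a target filename.
class Options:
    def __init__(self, filename):
        self.filename = filename

class HeaderGen:
    def run(self, opts):
        return 'header:' + opts.filename

class DocGen:
    def run(self, opts):
        return 'doc:' + opts.filename

build_list = [
    (HeaderGen, Options('vulkan.h')),
    (DocGen, Options('vulkan-docs')),
    None,  # sentinel terminating the list, as in genvk.py
]

def gen_targets(target=None):
    built = []
    for item in build_list:
        if item is None:
            break
        gen_class, opts = item
        if target and target != opts.filename:
            continue  # a requested target skips all other outputs
        built.append(gen_class().run(opts))
    return built
```

Called with no argument it builds every entry; called with a filename it builds only the matching one, mirroring the `if (target and target != genOpts.filename): continue` check in the loop above.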
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/glslang_revision
c3869fee412a90c4eadea0bf936ab2530d2dff51

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/include/vulkan/vk_icd.h
//
// File: vk_icd.h
//
/*
 * Copyright (c) 2015-2016 The Khronos Group Inc.
 * Copyright (c) 2015-2016 Valve Corporation
 * Copyright (c) 2015-2016 LunarG, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and/or associated documentation files (the "Materials"), to
 * deal in the Materials without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Materials, and to permit persons to whom the Materials are
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice(s) and this permission notice shall be included in
 * all copies or substantial portions of the Materials.
 *
 * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 *
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
 * USE OR OTHER DEALINGS IN THE MATERIALS.
 *
 */
#ifndef VKICD_H
#define VKICD_H

#include "vulkan.h"

/*
 * The ICD must reserve space for a pointer for the loader's dispatch
 * table, at the start of <each object>.
 * The ICD must initialize this variable using the set_loader_magic_value()
 * helper defined below.
 */
#define ICD_LOADER_MAGIC 0x01CDC0DE

#include <stdbool.h> /* for the bool return type below when compiled as C */

typedef union _VK_LOADER_DATA {
    uintptr_t loaderMagic;
    void *loaderData;
} VK_LOADER_DATA;

static inline void set_loader_magic_value(void *pNewObject) {
    VK_LOADER_DATA *loader_info = (VK_LOADER_DATA *)pNewObject;
    loader_info->loaderMagic = ICD_LOADER_MAGIC;
}

static inline bool valid_loader_magic_value(void *pNewObject) {
    const VK_LOADER_DATA *loader_info = (VK_LOADER_DATA *)pNewObject;
    return (loader_info->loaderMagic & 0xffffffff) == ICD_LOADER_MAGIC;
}

/*
 * Windows and Linux ICDs will treat VkSurfaceKHR as a pointer to a struct that
 * contains the platform-specific connection and surface information.
 */
typedef enum _VkIcdWsiPlatform {
    VK_ICD_WSI_PLATFORM_MIR,
    VK_ICD_WSI_PLATFORM_WAYLAND,
    VK_ICD_WSI_PLATFORM_WIN32,
    VK_ICD_WSI_PLATFORM_XCB,
    VK_ICD_WSI_PLATFORM_XLIB,
    VK_ICD_WSI_PLATFORM_DISPLAY
} VkIcdWsiPlatform;

typedef struct _VkIcdSurfaceBase {
    VkIcdWsiPlatform platform;
} VkIcdSurfaceBase;

#ifdef VK_USE_PLATFORM_MIR_KHR
typedef struct _VkIcdSurfaceMir {
    VkIcdSurfaceBase base;
    MirConnection *connection;
    MirSurface *mirSurface;
} VkIcdSurfaceMir;
#endif // VK_USE_PLATFORM_MIR_KHR

#ifdef VK_USE_PLATFORM_WAYLAND_KHR
typedef struct _VkIcdSurfaceWayland {
    VkIcdSurfaceBase base;
    struct wl_display *display;
    struct wl_surface *surface;
} VkIcdSurfaceWayland;
#endif // VK_USE_PLATFORM_WAYLAND_KHR

#ifdef VK_USE_PLATFORM_WIN32_KHR
typedef struct _VkIcdSurfaceWin32 {
    VkIcdSurfaceBase base;
    HINSTANCE hinstance;
    HWND hwnd;
} VkIcdSurfaceWin32;
#endif // VK_USE_PLATFORM_WIN32_KHR

#ifdef VK_USE_PLATFORM_XCB_KHR
typedef struct _VkIcdSurfaceXcb {
    VkIcdSurfaceBase base;
    xcb_connection_t *connection;
    xcb_window_t window;
} VkIcdSurfaceXcb;
#endif // VK_USE_PLATFORM_XCB_KHR

#ifdef VK_USE_PLATFORM_XLIB_KHR
typedef struct _VkIcdSurfaceXlib {
    VkIcdSurfaceBase base;
    Display *dpy;
    Window window;
} VkIcdSurfaceXlib;
#endif // VK_USE_PLATFORM_XLIB_KHR

typedef struct
_VkIcdSurfaceDisplay { VkIcdSurfaceBase base; VkDisplayModeKHR displayMode; uint32_t planeIndex; uint32_t planeStackIndex; VkSurfaceTransformFlagBitsKHR transform; float globalAlpha; VkDisplayPlaneAlphaFlagBitsKHR alphaMode; VkExtent2D imageExtent; } VkIcdSurfaceDisplay; #endif // VKICD_H Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/include/vulkan/vk_layer.h000066400000000000000000000324001270147354000256310ustar00rootroot00000000000000// // File: vk_layer.h // /* * Copyright (c) 2015-2016 The Khronos Group Inc. * Copyright (c) 2015-2016 Valve Corporation * Copyright (c) 2015-2016 LunarG, Inc. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and/or associated documentation files (the "Materials"), to * deal in the Materials without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Materials, and to permit persons to whom the Materials are * furnished to do so, subject to the following conditions: * * The above copyright notice(s) and this permission notice shall be included in * all copies or substantial portions of the Materials. * * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. * * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE * USE OR OTHER DEALINGS IN THE MATERIALS. 
* */ /* Need to define dispatch table * Core struct can then have ptr to dispatch table at the top * Along with object ptrs for current and next OBJ */ #pragma once #include "vulkan.h" #if defined(__GNUC__) && __GNUC__ >= 4 #define VK_LAYER_EXPORT __attribute__((visibility("default"))) #elif defined(__SUNPRO_C) && (__SUNPRO_C >= 0x590) #define VK_LAYER_EXPORT __attribute__((visibility("default"))) #else #define VK_LAYER_EXPORT #endif typedef struct VkLayerDispatchTable_ { PFN_vkGetDeviceProcAddr GetDeviceProcAddr; PFN_vkDestroyDevice DestroyDevice; PFN_vkGetDeviceQueue GetDeviceQueue; PFN_vkQueueSubmit QueueSubmit; PFN_vkQueueWaitIdle QueueWaitIdle; PFN_vkDeviceWaitIdle DeviceWaitIdle; PFN_vkAllocateMemory AllocateMemory; PFN_vkFreeMemory FreeMemory; PFN_vkMapMemory MapMemory; PFN_vkUnmapMemory UnmapMemory; PFN_vkFlushMappedMemoryRanges FlushMappedMemoryRanges; PFN_vkInvalidateMappedMemoryRanges InvalidateMappedMemoryRanges; PFN_vkGetDeviceMemoryCommitment GetDeviceMemoryCommitment; PFN_vkGetImageSparseMemoryRequirements GetImageSparseMemoryRequirements; PFN_vkGetImageMemoryRequirements GetImageMemoryRequirements; PFN_vkGetBufferMemoryRequirements GetBufferMemoryRequirements; PFN_vkBindImageMemory BindImageMemory; PFN_vkBindBufferMemory BindBufferMemory; PFN_vkQueueBindSparse QueueBindSparse; PFN_vkCreateFence CreateFence; PFN_vkDestroyFence DestroyFence; PFN_vkGetFenceStatus GetFenceStatus; PFN_vkResetFences ResetFences; PFN_vkWaitForFences WaitForFences; PFN_vkCreateSemaphore CreateSemaphore; PFN_vkDestroySemaphore DestroySemaphore; PFN_vkCreateEvent CreateEvent; PFN_vkDestroyEvent DestroyEvent; PFN_vkGetEventStatus GetEventStatus; PFN_vkSetEvent SetEvent; PFN_vkResetEvent ResetEvent; PFN_vkCreateQueryPool CreateQueryPool; PFN_vkDestroyQueryPool DestroyQueryPool; PFN_vkGetQueryPoolResults GetQueryPoolResults; PFN_vkCreateBuffer CreateBuffer; PFN_vkDestroyBuffer DestroyBuffer; PFN_vkCreateBufferView CreateBufferView; PFN_vkDestroyBufferView DestroyBufferView; 
PFN_vkCreateImage CreateImage; PFN_vkDestroyImage DestroyImage; PFN_vkGetImageSubresourceLayout GetImageSubresourceLayout; PFN_vkCreateImageView CreateImageView; PFN_vkDestroyImageView DestroyImageView; PFN_vkCreateShaderModule CreateShaderModule; PFN_vkDestroyShaderModule DestroyShaderModule; PFN_vkCreatePipelineCache CreatePipelineCache; PFN_vkDestroyPipelineCache DestroyPipelineCache; PFN_vkGetPipelineCacheData GetPipelineCacheData; PFN_vkMergePipelineCaches MergePipelineCaches; PFN_vkCreateGraphicsPipelines CreateGraphicsPipelines; PFN_vkCreateComputePipelines CreateComputePipelines; PFN_vkDestroyPipeline DestroyPipeline; PFN_vkCreatePipelineLayout CreatePipelineLayout; PFN_vkDestroyPipelineLayout DestroyPipelineLayout; PFN_vkCreateSampler CreateSampler; PFN_vkDestroySampler DestroySampler; PFN_vkCreateDescriptorSetLayout CreateDescriptorSetLayout; PFN_vkDestroyDescriptorSetLayout DestroyDescriptorSetLayout; PFN_vkCreateDescriptorPool CreateDescriptorPool; PFN_vkDestroyDescriptorPool DestroyDescriptorPool; PFN_vkResetDescriptorPool ResetDescriptorPool; PFN_vkAllocateDescriptorSets AllocateDescriptorSets; PFN_vkFreeDescriptorSets FreeDescriptorSets; PFN_vkUpdateDescriptorSets UpdateDescriptorSets; PFN_vkCreateFramebuffer CreateFramebuffer; PFN_vkDestroyFramebuffer DestroyFramebuffer; PFN_vkCreateRenderPass CreateRenderPass; PFN_vkDestroyRenderPass DestroyRenderPass; PFN_vkGetRenderAreaGranularity GetRenderAreaGranularity; PFN_vkCreateCommandPool CreateCommandPool; PFN_vkDestroyCommandPool DestroyCommandPool; PFN_vkResetCommandPool ResetCommandPool; PFN_vkAllocateCommandBuffers AllocateCommandBuffers; PFN_vkFreeCommandBuffers FreeCommandBuffers; PFN_vkBeginCommandBuffer BeginCommandBuffer; PFN_vkEndCommandBuffer EndCommandBuffer; PFN_vkResetCommandBuffer ResetCommandBuffer; PFN_vkCmdBindPipeline CmdBindPipeline; PFN_vkCmdBindDescriptorSets CmdBindDescriptorSets; PFN_vkCmdBindVertexBuffers CmdBindVertexBuffers; PFN_vkCmdBindIndexBuffer CmdBindIndexBuffer; 
PFN_vkCmdSetViewport CmdSetViewport; PFN_vkCmdSetScissor CmdSetScissor; PFN_vkCmdSetLineWidth CmdSetLineWidth; PFN_vkCmdSetDepthBias CmdSetDepthBias; PFN_vkCmdSetBlendConstants CmdSetBlendConstants; PFN_vkCmdSetDepthBounds CmdSetDepthBounds; PFN_vkCmdSetStencilCompareMask CmdSetStencilCompareMask; PFN_vkCmdSetStencilWriteMask CmdSetStencilWriteMask; PFN_vkCmdSetStencilReference CmdSetStencilReference; PFN_vkCmdDraw CmdDraw; PFN_vkCmdDrawIndexed CmdDrawIndexed; PFN_vkCmdDrawIndirect CmdDrawIndirect; PFN_vkCmdDrawIndexedIndirect CmdDrawIndexedIndirect; PFN_vkCmdDispatch CmdDispatch; PFN_vkCmdDispatchIndirect CmdDispatchIndirect; PFN_vkCmdCopyBuffer CmdCopyBuffer; PFN_vkCmdCopyImage CmdCopyImage; PFN_vkCmdBlitImage CmdBlitImage; PFN_vkCmdCopyBufferToImage CmdCopyBufferToImage; PFN_vkCmdCopyImageToBuffer CmdCopyImageToBuffer; PFN_vkCmdUpdateBuffer CmdUpdateBuffer; PFN_vkCmdFillBuffer CmdFillBuffer; PFN_vkCmdClearColorImage CmdClearColorImage; PFN_vkCmdClearDepthStencilImage CmdClearDepthStencilImage; PFN_vkCmdClearAttachments CmdClearAttachments; PFN_vkCmdResolveImage CmdResolveImage; PFN_vkCmdSetEvent CmdSetEvent; PFN_vkCmdResetEvent CmdResetEvent; PFN_vkCmdWaitEvents CmdWaitEvents; PFN_vkCmdPipelineBarrier CmdPipelineBarrier; PFN_vkCmdBeginQuery CmdBeginQuery; PFN_vkCmdEndQuery CmdEndQuery; PFN_vkCmdResetQueryPool CmdResetQueryPool; PFN_vkCmdWriteTimestamp CmdWriteTimestamp; PFN_vkCmdCopyQueryPoolResults CmdCopyQueryPoolResults; PFN_vkCmdPushConstants CmdPushConstants; PFN_vkCmdBeginRenderPass CmdBeginRenderPass; PFN_vkCmdNextSubpass CmdNextSubpass; PFN_vkCmdEndRenderPass CmdEndRenderPass; PFN_vkCmdExecuteCommands CmdExecuteCommands; PFN_vkCreateSwapchainKHR CreateSwapchainKHR; PFN_vkDestroySwapchainKHR DestroySwapchainKHR; PFN_vkGetSwapchainImagesKHR GetSwapchainImagesKHR; PFN_vkAcquireNextImageKHR AcquireNextImageKHR; PFN_vkQueuePresentKHR QueuePresentKHR; } VkLayerDispatchTable; typedef struct VkLayerInstanceDispatchTable_ { PFN_vkGetInstanceProcAddr 
GetInstanceProcAddr; PFN_vkDestroyInstance DestroyInstance; PFN_vkEnumeratePhysicalDevices EnumeratePhysicalDevices; PFN_vkGetPhysicalDeviceFeatures GetPhysicalDeviceFeatures; PFN_vkGetPhysicalDeviceImageFormatProperties GetPhysicalDeviceImageFormatProperties; PFN_vkGetPhysicalDeviceFormatProperties GetPhysicalDeviceFormatProperties; PFN_vkGetPhysicalDeviceSparseImageFormatProperties GetPhysicalDeviceSparseImageFormatProperties; PFN_vkGetPhysicalDeviceProperties GetPhysicalDeviceProperties; PFN_vkGetPhysicalDeviceQueueFamilyProperties GetPhysicalDeviceQueueFamilyProperties; PFN_vkGetPhysicalDeviceMemoryProperties GetPhysicalDeviceMemoryProperties; PFN_vkEnumerateDeviceExtensionProperties EnumerateDeviceExtensionProperties; PFN_vkEnumerateDeviceLayerProperties EnumerateDeviceLayerProperties; PFN_vkDestroySurfaceKHR DestroySurfaceKHR; PFN_vkGetPhysicalDeviceSurfaceSupportKHR GetPhysicalDeviceSurfaceSupportKHR; PFN_vkGetPhysicalDeviceSurfaceCapabilitiesKHR GetPhysicalDeviceSurfaceCapabilitiesKHR; PFN_vkGetPhysicalDeviceSurfaceFormatsKHR GetPhysicalDeviceSurfaceFormatsKHR; PFN_vkGetPhysicalDeviceSurfacePresentModesKHR GetPhysicalDeviceSurfacePresentModesKHR; PFN_vkCreateDebugReportCallbackEXT CreateDebugReportCallbackEXT; PFN_vkDestroyDebugReportCallbackEXT DestroyDebugReportCallbackEXT; PFN_vkDebugReportMessageEXT DebugReportMessageEXT; #ifdef VK_USE_PLATFORM_MIR_KHR PFN_vkCreateMirSurfaceKHR CreateMirSurfaceKHR; PFN_vkGetPhysicalDeviceMirPresentationSupportKHR GetPhysicalDeviceMirPresentationSupportKHR; #endif #ifdef VK_USE_PLATFORM_WAYLAND_KHR PFN_vkCreateWaylandSurfaceKHR CreateWaylandSurfaceKHR; PFN_vkGetPhysicalDeviceWaylandPresentationSupportKHR GetPhysicalDeviceWaylandPresentationSupportKHR; #endif #ifdef VK_USE_PLATFORM_WIN32_KHR PFN_vkCreateWin32SurfaceKHR CreateWin32SurfaceKHR; PFN_vkGetPhysicalDeviceWin32PresentationSupportKHR GetPhysicalDeviceWin32PresentationSupportKHR; #endif #ifdef VK_USE_PLATFORM_XCB_KHR PFN_vkCreateXcbSurfaceKHR CreateXcbSurfaceKHR; 
PFN_vkGetPhysicalDeviceXcbPresentationSupportKHR GetPhysicalDeviceXcbPresentationSupportKHR; #endif #ifdef VK_USE_PLATFORM_XLIB_KHR PFN_vkCreateXlibSurfaceKHR CreateXlibSurfaceKHR; PFN_vkGetPhysicalDeviceXlibPresentationSupportKHR GetPhysicalDeviceXlibPresentationSupportKHR; #endif #ifdef VK_USE_PLATFORM_ANDROID_KHR PFN_vkCreateAndroidSurfaceKHR CreateAndroidSurfaceKHR; #endif PFN_vkGetPhysicalDeviceDisplayPropertiesKHR GetPhysicalDeviceDisplayPropertiesKHR; PFN_vkGetPhysicalDeviceDisplayPlanePropertiesKHR GetPhysicalDeviceDisplayPlanePropertiesKHR; PFN_vkGetDisplayPlaneSupportedDisplaysKHR GetDisplayPlaneSupportedDisplaysKHR; PFN_vkGetDisplayModePropertiesKHR GetDisplayModePropertiesKHR; PFN_vkCreateDisplayModeKHR CreateDisplayModeKHR; PFN_vkGetDisplayPlaneCapabilitiesKHR GetDisplayPlaneCapabilitiesKHR; PFN_vkCreateDisplayPlaneSurfaceKHR CreateDisplayPlaneSurfaceKHR; } VkLayerInstanceDispatchTable; // LL node for tree of dbg callback functions typedef struct VkLayerDbgFunctionNode_ { VkDebugReportCallbackEXT msgCallback; PFN_vkDebugReportCallbackEXT pfnMsgCallback; VkFlags msgFlags; void *pUserData; struct VkLayerDbgFunctionNode_ *pNext; } VkLayerDbgFunctionNode; typedef enum VkLayerDbgAction_ { VK_DBG_LAYER_ACTION_IGNORE = 0x0, VK_DBG_LAYER_ACTION_CALLBACK = 0x1, VK_DBG_LAYER_ACTION_LOG_MSG = 0x2, VK_DBG_LAYER_ACTION_BREAK = 0x4, VK_DBG_LAYER_ACTION_DEBUG_OUTPUT = 0x8, } VkLayerDbgAction; // ------------------------------------------------------------------------------------------------ // CreateInstance and CreateDevice support structures /* Sub type of structure for instance and device loader ext of CreateInfo. 
 * When sType == VK_STRUCTURE_TYPE_LAYER_INSTANCE_CREATE_INFO
 * or sType == VK_STRUCTURE_TYPE_LAYER_DEVICE_CREATE_INFO
 * then VkLayerFunction indicates struct type pointed to by pNext
 */
typedef enum VkLayerFunction_ {
    VK_LAYER_LINK_INFO = 0,
    VK_LOADER_DATA_CALLBACK = 1
} VkLayerFunction;

typedef struct VkLayerInstanceLink_ {
    struct VkLayerInstanceLink_ *pNext;
    PFN_vkGetInstanceProcAddr pfnNextGetInstanceProcAddr;
} VkLayerInstanceLink;

/*
 * When creating the device chain the loader needs to pass
 * down information about its device structure needed at
 * the end of the chain. Passing the data via the
 * VkLayerDeviceInfo avoids issues with finding the
 * exact instance being used.
 */
typedef struct VkLayerDeviceInfo_ {
    void *device_info;
    PFN_vkGetInstanceProcAddr pfnNextGetInstanceProcAddr;
} VkLayerDeviceInfo;

typedef VkResult (VKAPI_PTR *PFN_vkSetInstanceLoaderData)(VkInstance instance,
                                                          void *object);
typedef VkResult (VKAPI_PTR *PFN_vkSetDeviceLoaderData)(VkDevice device,
                                                        void *object);

typedef struct {
    VkStructureType sType; // VK_STRUCTURE_TYPE_LAYER_INSTANCE_CREATE_INFO
    const void *pNext;
    VkLayerFunction function;
    union {
        VkLayerInstanceLink *pLayerInfo;
        PFN_vkSetInstanceLoaderData pfnSetInstanceLoaderData;
    } u;
} VkLayerInstanceCreateInfo;

typedef struct VkLayerDeviceLink_ {
    struct VkLayerDeviceLink_ *pNext;
    PFN_vkGetInstanceProcAddr pfnNextGetInstanceProcAddr;
    PFN_vkGetDeviceProcAddr pfnNextGetDeviceProcAddr;
} VkLayerDeviceLink;

typedef struct {
    VkStructureType sType; // VK_STRUCTURE_TYPE_LAYER_DEVICE_CREATE_INFO
    const void *pNext;
    VkLayerFunction function;
    union {
        VkLayerDeviceLink *pLayerInfo;
        PFN_vkSetDeviceLoaderData pfnSetDeviceLoaderData;
    } u;
} VkLayerDeviceCreateInfo;

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/include/vulkan/vk_platform.h
//
// File: vk_platform.h
//
/*
** Copyright (c) 2014-2015 The Khronos Group Inc.
** ** Permission is hereby granted, free of charge, to any person obtaining a ** copy of this software and/or associated documentation files (the ** "Materials"), to deal in the Materials without restriction, including ** without limitation the rights to use, copy, modify, merge, publish, ** distribute, sublicense, and/or sell copies of the Materials, and to ** permit persons to whom the Materials are furnished to do so, subject to ** the following conditions: ** ** The above copyright notice and this permission notice shall be included ** in all copies or substantial portions of the Materials. ** ** THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, ** EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF ** MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. ** IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY ** CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, ** TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE ** MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS. */ #ifndef VK_PLATFORM_H_ #define VK_PLATFORM_H_ #ifdef __cplusplus extern "C" { #endif // __cplusplus /* *************************************************************************************************** * Platform-specific directives and type declarations *************************************************************************************************** */ /* Platform-specific calling convention macros. * * Platforms should define these so that Vulkan clients call Vulkan commands * with the same calling conventions that the Vulkan implementation expects. * * VKAPI_ATTR - Placed before the return type in function declarations. * Useful for C++11 and GCC/Clang-style function attribute syntax. * VKAPI_CALL - Placed after the return type in function declarations. * Useful for MSVC-style calling convention syntax. * VKAPI_PTR - Placed between the '(' and '*' in function pointer types. 
 *
 * Function declaration:  VKAPI_ATTR void VKAPI_CALL vkCommand(void);
 * Function pointer type: typedef void (VKAPI_PTR *PFN_vkCommand)(void);
 */
#if defined(_WIN32)
    // On Windows, Vulkan commands use the stdcall convention
    #define VKAPI_ATTR
    #define VKAPI_CALL __stdcall
    #define VKAPI_PTR  VKAPI_CALL
#elif defined(__ANDROID__) && defined(__ARM_EABI__) && !defined(__ARM_ARCH_7A__)
    // Android does not support Vulkan in native code using the "armeabi" ABI.
    #error "Vulkan requires the 'armeabi-v7a' or 'armeabi-v7a-hard' ABI on 32-bit ARM CPUs"
#elif defined(__ANDROID__) && defined(__ARM_ARCH_7A__)
    // On Android/ARMv7a, Vulkan functions use the armeabi-v7a-hard calling
    // convention, even if the application's native code is compiled with the
    // armeabi-v7a calling convention.
    #define VKAPI_ATTR __attribute__((pcs("aapcs-vfp")))
    #define VKAPI_CALL
    #define VKAPI_PTR  VKAPI_ATTR
#else
    // On other platforms, use the default calling convention
    #define VKAPI_ATTR
    #define VKAPI_CALL
    #define VKAPI_PTR
#endif

#include <stddef.h>

#if !defined(VK_NO_STDINT_H)
    #if defined(_MSC_VER) && (_MSC_VER < 1600)
        typedef signed   __int8  int8_t;
        typedef unsigned __int8  uint8_t;
        typedef signed   __int16 int16_t;
        typedef unsigned __int16 uint16_t;
        typedef signed   __int32 int32_t;
        typedef unsigned __int32 uint32_t;
        typedef signed   __int64 int64_t;
        typedef unsigned __int64 uint64_t;
    #else
        #include <stdint.h>
    #endif
#endif // !defined(VK_NO_STDINT_H)

#ifdef __cplusplus
} // extern "C"
#endif // __cplusplus

// Platform-specific headers required by platform window system extensions.
// These are enabled prior to #including "vulkan.h". The same enable then
// controls inclusion of the extension interfaces in vulkan.h.
#ifdef VK_USE_PLATFORM_ANDROID_KHR
#include <android/native_window.h>
#endif

#ifdef VK_USE_PLATFORM_MIR_KHR
#include <mir_toolkit/client_types.h>
#endif

#ifdef VK_USE_PLATFORM_WAYLAND_KHR
#include <wayland-client.h>
#endif

#ifdef VK_USE_PLATFORM_WIN32_KHR
#include <windows.h>
#endif

#ifdef VK_USE_PLATFORM_XLIB_KHR
#include <X11/Xlib.h>
#endif

#ifdef VK_USE_PLATFORM_XCB_KHR
#include <xcb/xcb.h>
#endif

#endif

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/include/vulkan/vk_sdk_platform.h
//
// File: vk_sdk_platform.h
//
/*
 * Copyright (c) 2015-2016 The Khronos Group Inc.
 * Copyright (c) 2015-2016 Valve Corporation
 * Copyright (c) 2015-2016 LunarG, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and/or associated documentation files (the "Materials"), to
 * deal in the Materials without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Materials, and to permit persons to whom the Materials are
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice(s) and this permission notice shall be included in
 * all copies or substantial portions of the Materials.
 *
 * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 *
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
 * USE OR OTHER DEALINGS IN THE MATERIALS.
*/ #ifndef VK_SDK_PLATFORM_H #define VK_SDK_PLATFORM_H #if defined(_WIN32) #define NOMINMAX #ifndef __cplusplus #undef inline #define inline __inline #endif // __cplusplus #if (defined(_MSC_VER) && _MSC_VER < 1900 /*vs2015*/) // C99: // Microsoft didn't implement C99 in Visual Studio; but started adding it with // VS2013. However, VS2013 still didn't have snprintf(). The following is a // work-around (Note: The _CRT_SECURE_NO_WARNINGS macro must be set in the // "CMakeLists.txt" file). // NOTE: This is fixed in Visual Studio 2015. #define snprintf _snprintf #endif #define strdup _strdup #endif // _WIN32 #endif // VK_SDK_PLATFORM_H Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/include/vulkan/vulkan.h000066400000000000000000005454201270147354000253300ustar00rootroot00000000000000#ifndef VULKAN_H_ #define VULKAN_H_ 1 #ifdef __cplusplus extern "C" { #endif /* ** Copyright (c) 2015-2016 The Khronos Group Inc. ** ** Permission is hereby granted, free of charge, to any person obtaining a ** copy of this software and/or associated documentation files (the ** "Materials"), to deal in the Materials without restriction, including ** without limitation the rights to use, copy, modify, merge, publish, ** distribute, sublicense, and/or sell copies of the Materials, and to ** permit persons to whom the Materials are furnished to do so, subject to ** the following conditions: ** ** The above copyright notice and this permission notice shall be included ** in all copies or substantial portions of the Materials. ** ** THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, ** EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF ** MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
** IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY ** CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, ** TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE ** MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS. */ /* ** This header is generated from the Khronos Vulkan XML API Registry. ** */ #define VK_VERSION_1_0 1 #include "vk_platform.h" #define VK_MAKE_VERSION(major, minor, patch) \ (((major) << 22) | ((minor) << 12) | (patch)) // DEPRECATED: This define has been removed. Specific version defines (e.g. VK_API_VERSION_1_0), or the VK_MAKE_VERSION macro, should be used instead. //#define VK_API_VERSION VK_MAKE_VERSION(1, 0, 0) // Vulkan 1.0 version number #define VK_API_VERSION_1_0 VK_MAKE_VERSION(1, 0, 0) #define VK_VERSION_MAJOR(version) ((uint32_t)(version) >> 22) #define VK_VERSION_MINOR(version) (((uint32_t)(version) >> 12) & 0x3ff) #define VK_VERSION_PATCH(version) ((uint32_t)(version) & 0xfff) // Version of this file #define VK_HEADER_VERSION 8 #define VK_NULL_HANDLE 0 #define VK_DEFINE_HANDLE(object) typedef struct object##_T* object; #if defined(__LP64__) || defined(_WIN64) || defined(__x86_64__) || defined(_M_X64) || defined(__ia64) || defined (_M_IA64) || defined(__aarch64__) || defined(__powerpc64__) #define VK_DEFINE_NON_DISPATCHABLE_HANDLE(object) typedef struct object##_T *object; #else #define VK_DEFINE_NON_DISPATCHABLE_HANDLE(object) typedef uint64_t object; #endif typedef uint32_t VkFlags; typedef uint32_t VkBool32; typedef uint64_t VkDeviceSize; typedef uint32_t VkSampleMask; VK_DEFINE_HANDLE(VkInstance) VK_DEFINE_HANDLE(VkPhysicalDevice) VK_DEFINE_HANDLE(VkDevice) VK_DEFINE_HANDLE(VkQueue) VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkSemaphore) VK_DEFINE_HANDLE(VkCommandBuffer) VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkFence) VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkDeviceMemory) VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkBuffer) VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkImage) 
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkEvent) VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkQueryPool) VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkBufferView) VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkImageView) VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkShaderModule) VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkPipelineCache) VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkPipelineLayout) VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkRenderPass) VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkPipeline) VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkDescriptorSetLayout) VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkSampler) VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkDescriptorPool) VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkDescriptorSet) VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkFramebuffer) VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkCommandPool) #define VK_LOD_CLAMP_NONE 1000.0f #define VK_REMAINING_MIP_LEVELS (~0U) #define VK_REMAINING_ARRAY_LAYERS (~0U) #define VK_WHOLE_SIZE (~0ULL) #define VK_ATTACHMENT_UNUSED (~0U) #define VK_TRUE 1 #define VK_FALSE 0 #define VK_QUEUE_FAMILY_IGNORED (~0U) #define VK_SUBPASS_EXTERNAL (~0U) #define VK_MAX_PHYSICAL_DEVICE_NAME_SIZE 256 #define VK_UUID_SIZE 16 #define VK_MAX_MEMORY_TYPES 32 #define VK_MAX_MEMORY_HEAPS 16 #define VK_MAX_EXTENSION_NAME_SIZE 256 #define VK_MAX_DESCRIPTION_SIZE 256 typedef enum VkPipelineCacheHeaderVersion { VK_PIPELINE_CACHE_HEADER_VERSION_ONE = 1, VK_PIPELINE_CACHE_HEADER_VERSION_BEGIN_RANGE = VK_PIPELINE_CACHE_HEADER_VERSION_ONE, VK_PIPELINE_CACHE_HEADER_VERSION_END_RANGE = VK_PIPELINE_CACHE_HEADER_VERSION_ONE, VK_PIPELINE_CACHE_HEADER_VERSION_RANGE_SIZE = (VK_PIPELINE_CACHE_HEADER_VERSION_ONE - VK_PIPELINE_CACHE_HEADER_VERSION_ONE + 1), VK_PIPELINE_CACHE_HEADER_VERSION_MAX_ENUM = 0x7FFFFFFF } VkPipelineCacheHeaderVersion; typedef enum VkResult { VK_SUCCESS = 0, VK_NOT_READY = 1, VK_TIMEOUT = 2, VK_EVENT_SET = 3, VK_EVENT_RESET = 4, VK_INCOMPLETE = 5, VK_ERROR_OUT_OF_HOST_MEMORY = -1, VK_ERROR_OUT_OF_DEVICE_MEMORY = -2, VK_ERROR_INITIALIZATION_FAILED = -3, VK_ERROR_DEVICE_LOST = -4, VK_ERROR_MEMORY_MAP_FAILED = -5, 
VK_ERROR_LAYER_NOT_PRESENT = -6, VK_ERROR_EXTENSION_NOT_PRESENT = -7, VK_ERROR_FEATURE_NOT_PRESENT = -8, VK_ERROR_INCOMPATIBLE_DRIVER = -9, VK_ERROR_TOO_MANY_OBJECTS = -10, VK_ERROR_FORMAT_NOT_SUPPORTED = -11, VK_ERROR_SURFACE_LOST_KHR = -1000000000, VK_ERROR_NATIVE_WINDOW_IN_USE_KHR = -1000000001, VK_SUBOPTIMAL_KHR = 1000001003, VK_ERROR_OUT_OF_DATE_KHR = -1000001004, VK_ERROR_INCOMPATIBLE_DISPLAY_KHR = -1000003001, VK_ERROR_VALIDATION_FAILED_EXT = -1000011001, VK_ERROR_INVALID_SHADER_NV = -1000012000, VK_RESULT_BEGIN_RANGE = VK_ERROR_FORMAT_NOT_SUPPORTED, VK_RESULT_END_RANGE = VK_INCOMPLETE, VK_RESULT_RANGE_SIZE = (VK_INCOMPLETE - VK_ERROR_FORMAT_NOT_SUPPORTED + 1), VK_RESULT_MAX_ENUM = 0x7FFFFFFF } VkResult; typedef enum VkStructureType { VK_STRUCTURE_TYPE_APPLICATION_INFO = 0, VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO = 1, VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO = 2, VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO = 3, VK_STRUCTURE_TYPE_SUBMIT_INFO = 4, VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO = 5, VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE = 6, VK_STRUCTURE_TYPE_BIND_SPARSE_INFO = 7, VK_STRUCTURE_TYPE_FENCE_CREATE_INFO = 8, VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO = 9, VK_STRUCTURE_TYPE_EVENT_CREATE_INFO = 10, VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO = 11, VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO = 12, VK_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO = 13, VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO = 14, VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO = 15, VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO = 16, VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO = 17, VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO = 18, VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO = 19, VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO = 20, VK_STRUCTURE_TYPE_PIPELINE_TESSELLATION_STATE_CREATE_INFO = 21, VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO = 22, VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO = 23, VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO = 24, 
    VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO = 25,
    VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO = 26,
    VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO = 27,
    VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO = 28,
    VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO = 29,
    VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO = 30,
    VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO = 31,
    VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO = 32,
    VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO = 33,
    VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO = 34,
    VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET = 35,
    VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET = 36,
    VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO = 37,
    VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO = 38,
    VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO = 39,
    VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO = 40,
    VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO = 41,
    VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO = 42,
    VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO = 43,
    VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER = 44,
    VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER = 45,
    VK_STRUCTURE_TYPE_MEMORY_BARRIER = 46,
    VK_STRUCTURE_TYPE_LOADER_INSTANCE_CREATE_INFO = 47,
    VK_STRUCTURE_TYPE_LOADER_DEVICE_CREATE_INFO = 48,
    VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR = 1000001000,
    VK_STRUCTURE_TYPE_PRESENT_INFO_KHR = 1000001001,
    VK_STRUCTURE_TYPE_DISPLAY_MODE_CREATE_INFO_KHR = 1000002000,
    VK_STRUCTURE_TYPE_DISPLAY_SURFACE_CREATE_INFO_KHR = 1000002001,
    VK_STRUCTURE_TYPE_DISPLAY_PRESENT_INFO_KHR = 1000003000,
    VK_STRUCTURE_TYPE_XLIB_SURFACE_CREATE_INFO_KHR = 1000004000,
    VK_STRUCTURE_TYPE_XCB_SURFACE_CREATE_INFO_KHR = 1000005000,
    VK_STRUCTURE_TYPE_WAYLAND_SURFACE_CREATE_INFO_KHR = 1000006000,
    VK_STRUCTURE_TYPE_MIR_SURFACE_CREATE_INFO_KHR = 1000007000,
    VK_STRUCTURE_TYPE_ANDROID_SURFACE_CREATE_INFO_KHR = 1000008000,
    VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR = 1000009000,
    VK_STRUCTURE_TYPE_DEBUG_REPORT_CALLBACK_CREATE_INFO_EXT = 1000011000,
    VK_STRUCTURE_TYPE_BEGIN_RANGE =
        VK_STRUCTURE_TYPE_APPLICATION_INFO,
    VK_STRUCTURE_TYPE_END_RANGE = VK_STRUCTURE_TYPE_LOADER_DEVICE_CREATE_INFO,
    VK_STRUCTURE_TYPE_RANGE_SIZE = (VK_STRUCTURE_TYPE_LOADER_DEVICE_CREATE_INFO - VK_STRUCTURE_TYPE_APPLICATION_INFO + 1),
    VK_STRUCTURE_TYPE_MAX_ENUM = 0x7FFFFFFF
} VkStructureType;

typedef enum VkSystemAllocationScope {
    VK_SYSTEM_ALLOCATION_SCOPE_COMMAND = 0,
    VK_SYSTEM_ALLOCATION_SCOPE_OBJECT = 1,
    VK_SYSTEM_ALLOCATION_SCOPE_CACHE = 2,
    VK_SYSTEM_ALLOCATION_SCOPE_DEVICE = 3,
    VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE = 4,
    VK_SYSTEM_ALLOCATION_SCOPE_BEGIN_RANGE = VK_SYSTEM_ALLOCATION_SCOPE_COMMAND,
    VK_SYSTEM_ALLOCATION_SCOPE_END_RANGE = VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE,
    VK_SYSTEM_ALLOCATION_SCOPE_RANGE_SIZE = (VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE - VK_SYSTEM_ALLOCATION_SCOPE_COMMAND + 1),
    VK_SYSTEM_ALLOCATION_SCOPE_MAX_ENUM = 0x7FFFFFFF
} VkSystemAllocationScope;

typedef enum VkInternalAllocationType {
    VK_INTERNAL_ALLOCATION_TYPE_EXECUTABLE = 0,
    VK_INTERNAL_ALLOCATION_TYPE_BEGIN_RANGE = VK_INTERNAL_ALLOCATION_TYPE_EXECUTABLE,
    VK_INTERNAL_ALLOCATION_TYPE_END_RANGE = VK_INTERNAL_ALLOCATION_TYPE_EXECUTABLE,
    VK_INTERNAL_ALLOCATION_TYPE_RANGE_SIZE = (VK_INTERNAL_ALLOCATION_TYPE_EXECUTABLE - VK_INTERNAL_ALLOCATION_TYPE_EXECUTABLE + 1),
    VK_INTERNAL_ALLOCATION_TYPE_MAX_ENUM = 0x7FFFFFFF
} VkInternalAllocationType;

typedef enum VkFormat {
    VK_FORMAT_UNDEFINED = 0,
    VK_FORMAT_R4G4_UNORM_PACK8 = 1,
    VK_FORMAT_R4G4B4A4_UNORM_PACK16 = 2,
    VK_FORMAT_B4G4R4A4_UNORM_PACK16 = 3,
    VK_FORMAT_R5G6B5_UNORM_PACK16 = 4,
    VK_FORMAT_B5G6R5_UNORM_PACK16 = 5,
    VK_FORMAT_R5G5B5A1_UNORM_PACK16 = 6,
    VK_FORMAT_B5G5R5A1_UNORM_PACK16 = 7,
    VK_FORMAT_A1R5G5B5_UNORM_PACK16 = 8,
    VK_FORMAT_R8_UNORM = 9,
    VK_FORMAT_R8_SNORM = 10,
    VK_FORMAT_R8_USCALED = 11,
    VK_FORMAT_R8_SSCALED = 12,
    VK_FORMAT_R8_UINT = 13,
    VK_FORMAT_R8_SINT = 14,
    VK_FORMAT_R8_SRGB = 15,
    VK_FORMAT_R8G8_UNORM = 16,
    VK_FORMAT_R8G8_SNORM = 17,
    VK_FORMAT_R8G8_USCALED = 18,
    VK_FORMAT_R8G8_SSCALED = 19,
    VK_FORMAT_R8G8_UINT = 20,
    VK_FORMAT_R8G8_SINT =
        21,
    VK_FORMAT_R8G8_SRGB = 22,
    VK_FORMAT_R8G8B8_UNORM = 23,
    VK_FORMAT_R8G8B8_SNORM = 24,
    VK_FORMAT_R8G8B8_USCALED = 25,
    VK_FORMAT_R8G8B8_SSCALED = 26,
    VK_FORMAT_R8G8B8_UINT = 27,
    VK_FORMAT_R8G8B8_SINT = 28,
    VK_FORMAT_R8G8B8_SRGB = 29,
    VK_FORMAT_B8G8R8_UNORM = 30,
    VK_FORMAT_B8G8R8_SNORM = 31,
    VK_FORMAT_B8G8R8_USCALED = 32,
    VK_FORMAT_B8G8R8_SSCALED = 33,
    VK_FORMAT_B8G8R8_UINT = 34,
    VK_FORMAT_B8G8R8_SINT = 35,
    VK_FORMAT_B8G8R8_SRGB = 36,
    VK_FORMAT_R8G8B8A8_UNORM = 37,
    VK_FORMAT_R8G8B8A8_SNORM = 38,
    VK_FORMAT_R8G8B8A8_USCALED = 39,
    VK_FORMAT_R8G8B8A8_SSCALED = 40,
    VK_FORMAT_R8G8B8A8_UINT = 41,
    VK_FORMAT_R8G8B8A8_SINT = 42,
    VK_FORMAT_R8G8B8A8_SRGB = 43,
    VK_FORMAT_B8G8R8A8_UNORM = 44,
    VK_FORMAT_B8G8R8A8_SNORM = 45,
    VK_FORMAT_B8G8R8A8_USCALED = 46,
    VK_FORMAT_B8G8R8A8_SSCALED = 47,
    VK_FORMAT_B8G8R8A8_UINT = 48,
    VK_FORMAT_B8G8R8A8_SINT = 49,
    VK_FORMAT_B8G8R8A8_SRGB = 50,
    VK_FORMAT_A8B8G8R8_UNORM_PACK32 = 51,
    VK_FORMAT_A8B8G8R8_SNORM_PACK32 = 52,
    VK_FORMAT_A8B8G8R8_USCALED_PACK32 = 53,
    VK_FORMAT_A8B8G8R8_SSCALED_PACK32 = 54,
    VK_FORMAT_A8B8G8R8_UINT_PACK32 = 55,
    VK_FORMAT_A8B8G8R8_SINT_PACK32 = 56,
    VK_FORMAT_A8B8G8R8_SRGB_PACK32 = 57,
    VK_FORMAT_A2R10G10B10_UNORM_PACK32 = 58,
    VK_FORMAT_A2R10G10B10_SNORM_PACK32 = 59,
    VK_FORMAT_A2R10G10B10_USCALED_PACK32 = 60,
    VK_FORMAT_A2R10G10B10_SSCALED_PACK32 = 61,
    VK_FORMAT_A2R10G10B10_UINT_PACK32 = 62,
    VK_FORMAT_A2R10G10B10_SINT_PACK32 = 63,
    VK_FORMAT_A2B10G10R10_UNORM_PACK32 = 64,
    VK_FORMAT_A2B10G10R10_SNORM_PACK32 = 65,
    VK_FORMAT_A2B10G10R10_USCALED_PACK32 = 66,
    VK_FORMAT_A2B10G10R10_SSCALED_PACK32 = 67,
    VK_FORMAT_A2B10G10R10_UINT_PACK32 = 68,
    VK_FORMAT_A2B10G10R10_SINT_PACK32 = 69,
    VK_FORMAT_R16_UNORM = 70,
    VK_FORMAT_R16_SNORM = 71,
    VK_FORMAT_R16_USCALED = 72,
    VK_FORMAT_R16_SSCALED = 73,
    VK_FORMAT_R16_UINT = 74,
    VK_FORMAT_R16_SINT = 75,
    VK_FORMAT_R16_SFLOAT = 76,
    VK_FORMAT_R16G16_UNORM = 77,
    VK_FORMAT_R16G16_SNORM = 78,
    VK_FORMAT_R16G16_USCALED = 79,
    VK_FORMAT_R16G16_SSCALED = 80,
    VK_FORMAT_R16G16_UINT = 81,
    VK_FORMAT_R16G16_SINT = 82,
    VK_FORMAT_R16G16_SFLOAT = 83,
    VK_FORMAT_R16G16B16_UNORM = 84,
    VK_FORMAT_R16G16B16_SNORM = 85,
    VK_FORMAT_R16G16B16_USCALED = 86,
    VK_FORMAT_R16G16B16_SSCALED = 87,
    VK_FORMAT_R16G16B16_UINT = 88,
    VK_FORMAT_R16G16B16_SINT = 89,
    VK_FORMAT_R16G16B16_SFLOAT = 90,
    VK_FORMAT_R16G16B16A16_UNORM = 91,
    VK_FORMAT_R16G16B16A16_SNORM = 92,
    VK_FORMAT_R16G16B16A16_USCALED = 93,
    VK_FORMAT_R16G16B16A16_SSCALED = 94,
    VK_FORMAT_R16G16B16A16_UINT = 95,
    VK_FORMAT_R16G16B16A16_SINT = 96,
    VK_FORMAT_R16G16B16A16_SFLOAT = 97,
    VK_FORMAT_R32_UINT = 98,
    VK_FORMAT_R32_SINT = 99,
    VK_FORMAT_R32_SFLOAT = 100,
    VK_FORMAT_R32G32_UINT = 101,
    VK_FORMAT_R32G32_SINT = 102,
    VK_FORMAT_R32G32_SFLOAT = 103,
    VK_FORMAT_R32G32B32_UINT = 104,
    VK_FORMAT_R32G32B32_SINT = 105,
    VK_FORMAT_R32G32B32_SFLOAT = 106,
    VK_FORMAT_R32G32B32A32_UINT = 107,
    VK_FORMAT_R32G32B32A32_SINT = 108,
    VK_FORMAT_R32G32B32A32_SFLOAT = 109,
    VK_FORMAT_R64_UINT = 110,
    VK_FORMAT_R64_SINT = 111,
    VK_FORMAT_R64_SFLOAT = 112,
    VK_FORMAT_R64G64_UINT = 113,
    VK_FORMAT_R64G64_SINT = 114,
    VK_FORMAT_R64G64_SFLOAT = 115,
    VK_FORMAT_R64G64B64_UINT = 116,
    VK_FORMAT_R64G64B64_SINT = 117,
    VK_FORMAT_R64G64B64_SFLOAT = 118,
    VK_FORMAT_R64G64B64A64_UINT = 119,
    VK_FORMAT_R64G64B64A64_SINT = 120,
    VK_FORMAT_R64G64B64A64_SFLOAT = 121,
    VK_FORMAT_B10G11R11_UFLOAT_PACK32 = 122,
    VK_FORMAT_E5B9G9R9_UFLOAT_PACK32 = 123,
    VK_FORMAT_D16_UNORM = 124,
    VK_FORMAT_X8_D24_UNORM_PACK32 = 125,
    VK_FORMAT_D32_SFLOAT = 126,
    VK_FORMAT_S8_UINT = 127,
    VK_FORMAT_D16_UNORM_S8_UINT = 128,
    VK_FORMAT_D24_UNORM_S8_UINT = 129,
    VK_FORMAT_D32_SFLOAT_S8_UINT = 130,
    VK_FORMAT_BC1_RGB_UNORM_BLOCK = 131,
    VK_FORMAT_BC1_RGB_SRGB_BLOCK = 132,
    VK_FORMAT_BC1_RGBA_UNORM_BLOCK = 133,
    VK_FORMAT_BC1_RGBA_SRGB_BLOCK = 134,
    VK_FORMAT_BC2_UNORM_BLOCK = 135,
    VK_FORMAT_BC2_SRGB_BLOCK = 136,
    VK_FORMAT_BC3_UNORM_BLOCK = 137,
    VK_FORMAT_BC3_SRGB_BLOCK = 138,
    VK_FORMAT_BC4_UNORM_BLOCK = 139,
    VK_FORMAT_BC4_SNORM_BLOCK = 140,
    VK_FORMAT_BC5_UNORM_BLOCK = 141,
    VK_FORMAT_BC5_SNORM_BLOCK = 142,
    VK_FORMAT_BC6H_UFLOAT_BLOCK = 143,
    VK_FORMAT_BC6H_SFLOAT_BLOCK = 144,
    VK_FORMAT_BC7_UNORM_BLOCK = 145,
    VK_FORMAT_BC7_SRGB_BLOCK = 146,
    VK_FORMAT_ETC2_R8G8B8_UNORM_BLOCK = 147,
    VK_FORMAT_ETC2_R8G8B8_SRGB_BLOCK = 148,
    VK_FORMAT_ETC2_R8G8B8A1_UNORM_BLOCK = 149,
    VK_FORMAT_ETC2_R8G8B8A1_SRGB_BLOCK = 150,
    VK_FORMAT_ETC2_R8G8B8A8_UNORM_BLOCK = 151,
    VK_FORMAT_ETC2_R8G8B8A8_SRGB_BLOCK = 152,
    VK_FORMAT_EAC_R11_UNORM_BLOCK = 153,
    VK_FORMAT_EAC_R11_SNORM_BLOCK = 154,
    VK_FORMAT_EAC_R11G11_UNORM_BLOCK = 155,
    VK_FORMAT_EAC_R11G11_SNORM_BLOCK = 156,
    VK_FORMAT_ASTC_4x4_UNORM_BLOCK = 157,
    VK_FORMAT_ASTC_4x4_SRGB_BLOCK = 158,
    VK_FORMAT_ASTC_5x4_UNORM_BLOCK = 159,
    VK_FORMAT_ASTC_5x4_SRGB_BLOCK = 160,
    VK_FORMAT_ASTC_5x5_UNORM_BLOCK = 161,
    VK_FORMAT_ASTC_5x5_SRGB_BLOCK = 162,
    VK_FORMAT_ASTC_6x5_UNORM_BLOCK = 163,
    VK_FORMAT_ASTC_6x5_SRGB_BLOCK = 164,
    VK_FORMAT_ASTC_6x6_UNORM_BLOCK = 165,
    VK_FORMAT_ASTC_6x6_SRGB_BLOCK = 166,
    VK_FORMAT_ASTC_8x5_UNORM_BLOCK = 167,
    VK_FORMAT_ASTC_8x5_SRGB_BLOCK = 168,
    VK_FORMAT_ASTC_8x6_UNORM_BLOCK = 169,
    VK_FORMAT_ASTC_8x6_SRGB_BLOCK = 170,
    VK_FORMAT_ASTC_8x8_UNORM_BLOCK = 171,
    VK_FORMAT_ASTC_8x8_SRGB_BLOCK = 172,
    VK_FORMAT_ASTC_10x5_UNORM_BLOCK = 173,
    VK_FORMAT_ASTC_10x5_SRGB_BLOCK = 174,
    VK_FORMAT_ASTC_10x6_UNORM_BLOCK = 175,
    VK_FORMAT_ASTC_10x6_SRGB_BLOCK = 176,
    VK_FORMAT_ASTC_10x8_UNORM_BLOCK = 177,
    VK_FORMAT_ASTC_10x8_SRGB_BLOCK = 178,
    VK_FORMAT_ASTC_10x10_UNORM_BLOCK = 179,
    VK_FORMAT_ASTC_10x10_SRGB_BLOCK = 180,
    VK_FORMAT_ASTC_12x10_UNORM_BLOCK = 181,
    VK_FORMAT_ASTC_12x10_SRGB_BLOCK = 182,
    VK_FORMAT_ASTC_12x12_UNORM_BLOCK = 183,
    VK_FORMAT_ASTC_12x12_SRGB_BLOCK = 184,
    VK_FORMAT_BEGIN_RANGE = VK_FORMAT_UNDEFINED,
    VK_FORMAT_END_RANGE = VK_FORMAT_ASTC_12x12_SRGB_BLOCK,
    VK_FORMAT_RANGE_SIZE = (VK_FORMAT_ASTC_12x12_SRGB_BLOCK - VK_FORMAT_UNDEFINED + 1),
    VK_FORMAT_MAX_ENUM = 0x7FFFFFFF
} VkFormat;

typedef enum VkImageType {
    VK_IMAGE_TYPE_1D = 0,
    VK_IMAGE_TYPE_2D = 1,
    VK_IMAGE_TYPE_3D = 2,
    VK_IMAGE_TYPE_BEGIN_RANGE = VK_IMAGE_TYPE_1D,
    VK_IMAGE_TYPE_END_RANGE = VK_IMAGE_TYPE_3D,
    VK_IMAGE_TYPE_RANGE_SIZE = (VK_IMAGE_TYPE_3D - VK_IMAGE_TYPE_1D + 1),
    VK_IMAGE_TYPE_MAX_ENUM = 0x7FFFFFFF
} VkImageType;

typedef enum VkImageTiling {
    VK_IMAGE_TILING_OPTIMAL = 0,
    VK_IMAGE_TILING_LINEAR = 1,
    VK_IMAGE_TILING_BEGIN_RANGE = VK_IMAGE_TILING_OPTIMAL,
    VK_IMAGE_TILING_END_RANGE = VK_IMAGE_TILING_LINEAR,
    VK_IMAGE_TILING_RANGE_SIZE = (VK_IMAGE_TILING_LINEAR - VK_IMAGE_TILING_OPTIMAL + 1),
    VK_IMAGE_TILING_MAX_ENUM = 0x7FFFFFFF
} VkImageTiling;

typedef enum VkPhysicalDeviceType {
    VK_PHYSICAL_DEVICE_TYPE_OTHER = 0,
    VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU = 1,
    VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU = 2,
    VK_PHYSICAL_DEVICE_TYPE_VIRTUAL_GPU = 3,
    VK_PHYSICAL_DEVICE_TYPE_CPU = 4,
    VK_PHYSICAL_DEVICE_TYPE_BEGIN_RANGE = VK_PHYSICAL_DEVICE_TYPE_OTHER,
    VK_PHYSICAL_DEVICE_TYPE_END_RANGE = VK_PHYSICAL_DEVICE_TYPE_CPU,
    VK_PHYSICAL_DEVICE_TYPE_RANGE_SIZE = (VK_PHYSICAL_DEVICE_TYPE_CPU - VK_PHYSICAL_DEVICE_TYPE_OTHER + 1),
    VK_PHYSICAL_DEVICE_TYPE_MAX_ENUM = 0x7FFFFFFF
} VkPhysicalDeviceType;

typedef enum VkQueryType {
    VK_QUERY_TYPE_OCCLUSION = 0,
    VK_QUERY_TYPE_PIPELINE_STATISTICS = 1,
    VK_QUERY_TYPE_TIMESTAMP = 2,
    VK_QUERY_TYPE_BEGIN_RANGE = VK_QUERY_TYPE_OCCLUSION,
    VK_QUERY_TYPE_END_RANGE = VK_QUERY_TYPE_TIMESTAMP,
    VK_QUERY_TYPE_RANGE_SIZE = (VK_QUERY_TYPE_TIMESTAMP - VK_QUERY_TYPE_OCCLUSION + 1),
    VK_QUERY_TYPE_MAX_ENUM = 0x7FFFFFFF
} VkQueryType;

typedef enum VkSharingMode {
    VK_SHARING_MODE_EXCLUSIVE = 0,
    VK_SHARING_MODE_CONCURRENT = 1,
    VK_SHARING_MODE_BEGIN_RANGE = VK_SHARING_MODE_EXCLUSIVE,
    VK_SHARING_MODE_END_RANGE = VK_SHARING_MODE_CONCURRENT,
    VK_SHARING_MODE_RANGE_SIZE = (VK_SHARING_MODE_CONCURRENT - VK_SHARING_MODE_EXCLUSIVE + 1),
    VK_SHARING_MODE_MAX_ENUM = 0x7FFFFFFF
} VkSharingMode;

typedef enum VkImageLayout {
    VK_IMAGE_LAYOUT_UNDEFINED = 0,
    VK_IMAGE_LAYOUT_GENERAL = 1,
    VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL = 2,
    VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL = 3,
    VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL = 4,
    VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL = 5,
    VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL = 6,
    VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL = 7,
    VK_IMAGE_LAYOUT_PREINITIALIZED = 8,
    VK_IMAGE_LAYOUT_PRESENT_SRC_KHR = 1000001002,
    VK_IMAGE_LAYOUT_BEGIN_RANGE = VK_IMAGE_LAYOUT_UNDEFINED,
    VK_IMAGE_LAYOUT_END_RANGE = VK_IMAGE_LAYOUT_PREINITIALIZED,
    VK_IMAGE_LAYOUT_RANGE_SIZE = (VK_IMAGE_LAYOUT_PREINITIALIZED - VK_IMAGE_LAYOUT_UNDEFINED + 1),
    VK_IMAGE_LAYOUT_MAX_ENUM = 0x7FFFFFFF
} VkImageLayout;

typedef enum VkImageViewType {
    VK_IMAGE_VIEW_TYPE_1D = 0,
    VK_IMAGE_VIEW_TYPE_2D = 1,
    VK_IMAGE_VIEW_TYPE_3D = 2,
    VK_IMAGE_VIEW_TYPE_CUBE = 3,
    VK_IMAGE_VIEW_TYPE_1D_ARRAY = 4,
    VK_IMAGE_VIEW_TYPE_2D_ARRAY = 5,
    VK_IMAGE_VIEW_TYPE_CUBE_ARRAY = 6,
    VK_IMAGE_VIEW_TYPE_BEGIN_RANGE = VK_IMAGE_VIEW_TYPE_1D,
    VK_IMAGE_VIEW_TYPE_END_RANGE = VK_IMAGE_VIEW_TYPE_CUBE_ARRAY,
    VK_IMAGE_VIEW_TYPE_RANGE_SIZE = (VK_IMAGE_VIEW_TYPE_CUBE_ARRAY - VK_IMAGE_VIEW_TYPE_1D + 1),
    VK_IMAGE_VIEW_TYPE_MAX_ENUM = 0x7FFFFFFF
} VkImageViewType;

typedef enum VkComponentSwizzle {
    VK_COMPONENT_SWIZZLE_IDENTITY = 0,
    VK_COMPONENT_SWIZZLE_ZERO = 1,
    VK_COMPONENT_SWIZZLE_ONE = 2,
    VK_COMPONENT_SWIZZLE_R = 3,
    VK_COMPONENT_SWIZZLE_G = 4,
    VK_COMPONENT_SWIZZLE_B = 5,
    VK_COMPONENT_SWIZZLE_A = 6,
    VK_COMPONENT_SWIZZLE_BEGIN_RANGE = VK_COMPONENT_SWIZZLE_IDENTITY,
    VK_COMPONENT_SWIZZLE_END_RANGE = VK_COMPONENT_SWIZZLE_A,
    VK_COMPONENT_SWIZZLE_RANGE_SIZE = (VK_COMPONENT_SWIZZLE_A - VK_COMPONENT_SWIZZLE_IDENTITY + 1),
    VK_COMPONENT_SWIZZLE_MAX_ENUM = 0x7FFFFFFF
} VkComponentSwizzle;

typedef enum VkVertexInputRate {
    VK_VERTEX_INPUT_RATE_VERTEX = 0,
    VK_VERTEX_INPUT_RATE_INSTANCE = 1,
    VK_VERTEX_INPUT_RATE_BEGIN_RANGE = VK_VERTEX_INPUT_RATE_VERTEX,
    VK_VERTEX_INPUT_RATE_END_RANGE = VK_VERTEX_INPUT_RATE_INSTANCE,
    VK_VERTEX_INPUT_RATE_RANGE_SIZE = (VK_VERTEX_INPUT_RATE_INSTANCE - VK_VERTEX_INPUT_RATE_VERTEX + 1),
    VK_VERTEX_INPUT_RATE_MAX_ENUM = 0x7FFFFFFF
} VkVertexInputRate;

typedef enum VkPrimitiveTopology {
    VK_PRIMITIVE_TOPOLOGY_POINT_LIST = 0,
    VK_PRIMITIVE_TOPOLOGY_LINE_LIST = 1,
    VK_PRIMITIVE_TOPOLOGY_LINE_STRIP = 2,
    VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST = 3,
    VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP = 4,
    VK_PRIMITIVE_TOPOLOGY_TRIANGLE_FAN = 5,
    VK_PRIMITIVE_TOPOLOGY_LINE_LIST_WITH_ADJACENCY = 6,
    VK_PRIMITIVE_TOPOLOGY_LINE_STRIP_WITH_ADJACENCY = 7,
    VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST_WITH_ADJACENCY = 8,
    VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP_WITH_ADJACENCY = 9,
    VK_PRIMITIVE_TOPOLOGY_PATCH_LIST = 10,
    VK_PRIMITIVE_TOPOLOGY_BEGIN_RANGE = VK_PRIMITIVE_TOPOLOGY_POINT_LIST,
    VK_PRIMITIVE_TOPOLOGY_END_RANGE = VK_PRIMITIVE_TOPOLOGY_PATCH_LIST,
    VK_PRIMITIVE_TOPOLOGY_RANGE_SIZE = (VK_PRIMITIVE_TOPOLOGY_PATCH_LIST - VK_PRIMITIVE_TOPOLOGY_POINT_LIST + 1),
    VK_PRIMITIVE_TOPOLOGY_MAX_ENUM = 0x7FFFFFFF
} VkPrimitiveTopology;

typedef enum VkPolygonMode {
    VK_POLYGON_MODE_FILL = 0,
    VK_POLYGON_MODE_LINE = 1,
    VK_POLYGON_MODE_POINT = 2,
    VK_POLYGON_MODE_BEGIN_RANGE = VK_POLYGON_MODE_FILL,
    VK_POLYGON_MODE_END_RANGE = VK_POLYGON_MODE_POINT,
    VK_POLYGON_MODE_RANGE_SIZE = (VK_POLYGON_MODE_POINT - VK_POLYGON_MODE_FILL + 1),
    VK_POLYGON_MODE_MAX_ENUM = 0x7FFFFFFF
} VkPolygonMode;

typedef enum VkFrontFace {
    VK_FRONT_FACE_COUNTER_CLOCKWISE = 0,
    VK_FRONT_FACE_CLOCKWISE = 1,
    VK_FRONT_FACE_BEGIN_RANGE = VK_FRONT_FACE_COUNTER_CLOCKWISE,
    VK_FRONT_FACE_END_RANGE = VK_FRONT_FACE_CLOCKWISE,
    VK_FRONT_FACE_RANGE_SIZE = (VK_FRONT_FACE_CLOCKWISE - VK_FRONT_FACE_COUNTER_CLOCKWISE + 1),
    VK_FRONT_FACE_MAX_ENUM = 0x7FFFFFFF
} VkFrontFace;

typedef enum VkCompareOp {
    VK_COMPARE_OP_NEVER = 0,
    VK_COMPARE_OP_LESS = 1,
    VK_COMPARE_OP_EQUAL = 2,
    VK_COMPARE_OP_LESS_OR_EQUAL = 3,
    VK_COMPARE_OP_GREATER = 4,
    VK_COMPARE_OP_NOT_EQUAL = 5,
    VK_COMPARE_OP_GREATER_OR_EQUAL = 6,
    VK_COMPARE_OP_ALWAYS = 7,
    VK_COMPARE_OP_BEGIN_RANGE = VK_COMPARE_OP_NEVER,
    VK_COMPARE_OP_END_RANGE = VK_COMPARE_OP_ALWAYS,
    VK_COMPARE_OP_RANGE_SIZE = (VK_COMPARE_OP_ALWAYS - VK_COMPARE_OP_NEVER + 1),
    VK_COMPARE_OP_MAX_ENUM = 0x7FFFFFFF
} VkCompareOp;

typedef enum VkStencilOp {
    VK_STENCIL_OP_KEEP = 0,
    VK_STENCIL_OP_ZERO = 1,
    VK_STENCIL_OP_REPLACE = 2,
    VK_STENCIL_OP_INCREMENT_AND_CLAMP = 3,
    VK_STENCIL_OP_DECREMENT_AND_CLAMP = 4,
    VK_STENCIL_OP_INVERT = 5,
    VK_STENCIL_OP_INCREMENT_AND_WRAP = 6,
    VK_STENCIL_OP_DECREMENT_AND_WRAP = 7,
    VK_STENCIL_OP_BEGIN_RANGE = VK_STENCIL_OP_KEEP,
    VK_STENCIL_OP_END_RANGE = VK_STENCIL_OP_DECREMENT_AND_WRAP,
    VK_STENCIL_OP_RANGE_SIZE = (VK_STENCIL_OP_DECREMENT_AND_WRAP - VK_STENCIL_OP_KEEP + 1),
    VK_STENCIL_OP_MAX_ENUM = 0x7FFFFFFF
} VkStencilOp;

typedef enum VkLogicOp {
    VK_LOGIC_OP_CLEAR = 0,
    VK_LOGIC_OP_AND = 1,
    VK_LOGIC_OP_AND_REVERSE = 2,
    VK_LOGIC_OP_COPY = 3,
    VK_LOGIC_OP_AND_INVERTED = 4,
    VK_LOGIC_OP_NO_OP = 5,
    VK_LOGIC_OP_XOR = 6,
    VK_LOGIC_OP_OR = 7,
    VK_LOGIC_OP_NOR = 8,
    VK_LOGIC_OP_EQUIVALENT = 9,
    VK_LOGIC_OP_INVERT = 10,
    VK_LOGIC_OP_OR_REVERSE = 11,
    VK_LOGIC_OP_COPY_INVERTED = 12,
    VK_LOGIC_OP_OR_INVERTED = 13,
    VK_LOGIC_OP_NAND = 14,
    VK_LOGIC_OP_SET = 15,
    VK_LOGIC_OP_BEGIN_RANGE = VK_LOGIC_OP_CLEAR,
    VK_LOGIC_OP_END_RANGE = VK_LOGIC_OP_SET,
    VK_LOGIC_OP_RANGE_SIZE = (VK_LOGIC_OP_SET - VK_LOGIC_OP_CLEAR + 1),
    VK_LOGIC_OP_MAX_ENUM = 0x7FFFFFFF
} VkLogicOp;

typedef enum VkBlendFactor {
    VK_BLEND_FACTOR_ZERO = 0,
    VK_BLEND_FACTOR_ONE = 1,
    VK_BLEND_FACTOR_SRC_COLOR = 2,
    VK_BLEND_FACTOR_ONE_MINUS_SRC_COLOR = 3,
    VK_BLEND_FACTOR_DST_COLOR = 4,
    VK_BLEND_FACTOR_ONE_MINUS_DST_COLOR = 5,
    VK_BLEND_FACTOR_SRC_ALPHA = 6,
    VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA = 7,
    VK_BLEND_FACTOR_DST_ALPHA = 8,
    VK_BLEND_FACTOR_ONE_MINUS_DST_ALPHA = 9,
    VK_BLEND_FACTOR_CONSTANT_COLOR = 10,
    VK_BLEND_FACTOR_ONE_MINUS_CONSTANT_COLOR = 11,
    VK_BLEND_FACTOR_CONSTANT_ALPHA = 12,
    VK_BLEND_FACTOR_ONE_MINUS_CONSTANT_ALPHA = 13,
    VK_BLEND_FACTOR_SRC_ALPHA_SATURATE = 14,
    VK_BLEND_FACTOR_SRC1_COLOR = 15,
    VK_BLEND_FACTOR_ONE_MINUS_SRC1_COLOR = 16,
    VK_BLEND_FACTOR_SRC1_ALPHA = 17,
    VK_BLEND_FACTOR_ONE_MINUS_SRC1_ALPHA = 18,
    VK_BLEND_FACTOR_BEGIN_RANGE = VK_BLEND_FACTOR_ZERO,
    VK_BLEND_FACTOR_END_RANGE = VK_BLEND_FACTOR_ONE_MINUS_SRC1_ALPHA,
    VK_BLEND_FACTOR_RANGE_SIZE = (VK_BLEND_FACTOR_ONE_MINUS_SRC1_ALPHA -
        VK_BLEND_FACTOR_ZERO + 1),
    VK_BLEND_FACTOR_MAX_ENUM = 0x7FFFFFFF
} VkBlendFactor;

typedef enum VkBlendOp {
    VK_BLEND_OP_ADD = 0,
    VK_BLEND_OP_SUBTRACT = 1,
    VK_BLEND_OP_REVERSE_SUBTRACT = 2,
    VK_BLEND_OP_MIN = 3,
    VK_BLEND_OP_MAX = 4,
    VK_BLEND_OP_BEGIN_RANGE = VK_BLEND_OP_ADD,
    VK_BLEND_OP_END_RANGE = VK_BLEND_OP_MAX,
    VK_BLEND_OP_RANGE_SIZE = (VK_BLEND_OP_MAX - VK_BLEND_OP_ADD + 1),
    VK_BLEND_OP_MAX_ENUM = 0x7FFFFFFF
} VkBlendOp;

typedef enum VkDynamicState {
    VK_DYNAMIC_STATE_VIEWPORT = 0,
    VK_DYNAMIC_STATE_SCISSOR = 1,
    VK_DYNAMIC_STATE_LINE_WIDTH = 2,
    VK_DYNAMIC_STATE_DEPTH_BIAS = 3,
    VK_DYNAMIC_STATE_BLEND_CONSTANTS = 4,
    VK_DYNAMIC_STATE_DEPTH_BOUNDS = 5,
    VK_DYNAMIC_STATE_STENCIL_COMPARE_MASK = 6,
    VK_DYNAMIC_STATE_STENCIL_WRITE_MASK = 7,
    VK_DYNAMIC_STATE_STENCIL_REFERENCE = 8,
    VK_DYNAMIC_STATE_BEGIN_RANGE = VK_DYNAMIC_STATE_VIEWPORT,
    VK_DYNAMIC_STATE_END_RANGE = VK_DYNAMIC_STATE_STENCIL_REFERENCE,
    VK_DYNAMIC_STATE_RANGE_SIZE = (VK_DYNAMIC_STATE_STENCIL_REFERENCE - VK_DYNAMIC_STATE_VIEWPORT + 1),
    VK_DYNAMIC_STATE_MAX_ENUM = 0x7FFFFFFF
} VkDynamicState;

typedef enum VkFilter {
    VK_FILTER_NEAREST = 0,
    VK_FILTER_LINEAR = 1,
    VK_FILTER_CUBIC_IMG = 1000015000,
    VK_FILTER_BEGIN_RANGE = VK_FILTER_NEAREST,
    VK_FILTER_END_RANGE = VK_FILTER_LINEAR,
    VK_FILTER_RANGE_SIZE = (VK_FILTER_LINEAR - VK_FILTER_NEAREST + 1),
    VK_FILTER_MAX_ENUM = 0x7FFFFFFF
} VkFilter;

typedef enum VkSamplerMipmapMode {
    VK_SAMPLER_MIPMAP_MODE_NEAREST = 0,
    VK_SAMPLER_MIPMAP_MODE_LINEAR = 1,
    VK_SAMPLER_MIPMAP_MODE_BEGIN_RANGE = VK_SAMPLER_MIPMAP_MODE_NEAREST,
    VK_SAMPLER_MIPMAP_MODE_END_RANGE = VK_SAMPLER_MIPMAP_MODE_LINEAR,
    VK_SAMPLER_MIPMAP_MODE_RANGE_SIZE = (VK_SAMPLER_MIPMAP_MODE_LINEAR - VK_SAMPLER_MIPMAP_MODE_NEAREST + 1),
    VK_SAMPLER_MIPMAP_MODE_MAX_ENUM = 0x7FFFFFFF
} VkSamplerMipmapMode;

typedef enum VkSamplerAddressMode {
    VK_SAMPLER_ADDRESS_MODE_REPEAT = 0,
    VK_SAMPLER_ADDRESS_MODE_MIRRORED_REPEAT = 1,
    VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE = 2,
    VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER = 3,
    VK_SAMPLER_ADDRESS_MODE_MIRROR_CLAMP_TO_EDGE = 4,
    VK_SAMPLER_ADDRESS_MODE_BEGIN_RANGE = VK_SAMPLER_ADDRESS_MODE_REPEAT,
    VK_SAMPLER_ADDRESS_MODE_END_RANGE = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER,
    VK_SAMPLER_ADDRESS_MODE_RANGE_SIZE = (VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER - VK_SAMPLER_ADDRESS_MODE_REPEAT + 1),
    VK_SAMPLER_ADDRESS_MODE_MAX_ENUM = 0x7FFFFFFF
} VkSamplerAddressMode;

typedef enum VkBorderColor {
    VK_BORDER_COLOR_FLOAT_TRANSPARENT_BLACK = 0,
    VK_BORDER_COLOR_INT_TRANSPARENT_BLACK = 1,
    VK_BORDER_COLOR_FLOAT_OPAQUE_BLACK = 2,
    VK_BORDER_COLOR_INT_OPAQUE_BLACK = 3,
    VK_BORDER_COLOR_FLOAT_OPAQUE_WHITE = 4,
    VK_BORDER_COLOR_INT_OPAQUE_WHITE = 5,
    VK_BORDER_COLOR_BEGIN_RANGE = VK_BORDER_COLOR_FLOAT_TRANSPARENT_BLACK,
    VK_BORDER_COLOR_END_RANGE = VK_BORDER_COLOR_INT_OPAQUE_WHITE,
    VK_BORDER_COLOR_RANGE_SIZE = (VK_BORDER_COLOR_INT_OPAQUE_WHITE - VK_BORDER_COLOR_FLOAT_TRANSPARENT_BLACK + 1),
    VK_BORDER_COLOR_MAX_ENUM = 0x7FFFFFFF
} VkBorderColor;

typedef enum VkDescriptorType {
    VK_DESCRIPTOR_TYPE_SAMPLER = 0,
    VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER = 1,
    VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE = 2,
    VK_DESCRIPTOR_TYPE_STORAGE_IMAGE = 3,
    VK_DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER = 4,
    VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER = 5,
    VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER = 6,
    VK_DESCRIPTOR_TYPE_STORAGE_BUFFER = 7,
    VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC = 8,
    VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC = 9,
    VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT = 10,
    VK_DESCRIPTOR_TYPE_BEGIN_RANGE = VK_DESCRIPTOR_TYPE_SAMPLER,
    VK_DESCRIPTOR_TYPE_END_RANGE = VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT,
    VK_DESCRIPTOR_TYPE_RANGE_SIZE = (VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT - VK_DESCRIPTOR_TYPE_SAMPLER + 1),
    VK_DESCRIPTOR_TYPE_MAX_ENUM = 0x7FFFFFFF
} VkDescriptorType;

typedef enum VkAttachmentLoadOp {
    VK_ATTACHMENT_LOAD_OP_LOAD = 0,
    VK_ATTACHMENT_LOAD_OP_CLEAR = 1,
    VK_ATTACHMENT_LOAD_OP_DONT_CARE = 2,
    VK_ATTACHMENT_LOAD_OP_BEGIN_RANGE = VK_ATTACHMENT_LOAD_OP_LOAD,
    VK_ATTACHMENT_LOAD_OP_END_RANGE =
        VK_ATTACHMENT_LOAD_OP_DONT_CARE,
    VK_ATTACHMENT_LOAD_OP_RANGE_SIZE = (VK_ATTACHMENT_LOAD_OP_DONT_CARE - VK_ATTACHMENT_LOAD_OP_LOAD + 1),
    VK_ATTACHMENT_LOAD_OP_MAX_ENUM = 0x7FFFFFFF
} VkAttachmentLoadOp;

typedef enum VkAttachmentStoreOp {
    VK_ATTACHMENT_STORE_OP_STORE = 0,
    VK_ATTACHMENT_STORE_OP_DONT_CARE = 1,
    VK_ATTACHMENT_STORE_OP_BEGIN_RANGE = VK_ATTACHMENT_STORE_OP_STORE,
    VK_ATTACHMENT_STORE_OP_END_RANGE = VK_ATTACHMENT_STORE_OP_DONT_CARE,
    VK_ATTACHMENT_STORE_OP_RANGE_SIZE = (VK_ATTACHMENT_STORE_OP_DONT_CARE - VK_ATTACHMENT_STORE_OP_STORE + 1),
    VK_ATTACHMENT_STORE_OP_MAX_ENUM = 0x7FFFFFFF
} VkAttachmentStoreOp;

typedef enum VkPipelineBindPoint {
    VK_PIPELINE_BIND_POINT_GRAPHICS = 0,
    VK_PIPELINE_BIND_POINT_COMPUTE = 1,
    VK_PIPELINE_BIND_POINT_BEGIN_RANGE = VK_PIPELINE_BIND_POINT_GRAPHICS,
    VK_PIPELINE_BIND_POINT_END_RANGE = VK_PIPELINE_BIND_POINT_COMPUTE,
    VK_PIPELINE_BIND_POINT_RANGE_SIZE = (VK_PIPELINE_BIND_POINT_COMPUTE - VK_PIPELINE_BIND_POINT_GRAPHICS + 1),
    VK_PIPELINE_BIND_POINT_MAX_ENUM = 0x7FFFFFFF
} VkPipelineBindPoint;

typedef enum VkCommandBufferLevel {
    VK_COMMAND_BUFFER_LEVEL_PRIMARY = 0,
    VK_COMMAND_BUFFER_LEVEL_SECONDARY = 1,
    VK_COMMAND_BUFFER_LEVEL_BEGIN_RANGE = VK_COMMAND_BUFFER_LEVEL_PRIMARY,
    VK_COMMAND_BUFFER_LEVEL_END_RANGE = VK_COMMAND_BUFFER_LEVEL_SECONDARY,
    VK_COMMAND_BUFFER_LEVEL_RANGE_SIZE = (VK_COMMAND_BUFFER_LEVEL_SECONDARY - VK_COMMAND_BUFFER_LEVEL_PRIMARY + 1),
    VK_COMMAND_BUFFER_LEVEL_MAX_ENUM = 0x7FFFFFFF
} VkCommandBufferLevel;

typedef enum VkIndexType {
    VK_INDEX_TYPE_UINT16 = 0,
    VK_INDEX_TYPE_UINT32 = 1,
    VK_INDEX_TYPE_BEGIN_RANGE = VK_INDEX_TYPE_UINT16,
    VK_INDEX_TYPE_END_RANGE = VK_INDEX_TYPE_UINT32,
    VK_INDEX_TYPE_RANGE_SIZE = (VK_INDEX_TYPE_UINT32 - VK_INDEX_TYPE_UINT16 + 1),
    VK_INDEX_TYPE_MAX_ENUM = 0x7FFFFFFF
} VkIndexType;

typedef enum VkSubpassContents {
    VK_SUBPASS_CONTENTS_INLINE = 0,
    VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS = 1,
    VK_SUBPASS_CONTENTS_BEGIN_RANGE = VK_SUBPASS_CONTENTS_INLINE,
    VK_SUBPASS_CONTENTS_END_RANGE =
        VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS,
    VK_SUBPASS_CONTENTS_RANGE_SIZE = (VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS - VK_SUBPASS_CONTENTS_INLINE + 1),
    VK_SUBPASS_CONTENTS_MAX_ENUM = 0x7FFFFFFF
} VkSubpassContents;

typedef VkFlags VkInstanceCreateFlags;

typedef enum VkFormatFeatureFlagBits {
    VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT = 0x00000001,
    VK_FORMAT_FEATURE_STORAGE_IMAGE_BIT = 0x00000002,
    VK_FORMAT_FEATURE_STORAGE_IMAGE_ATOMIC_BIT = 0x00000004,
    VK_FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT = 0x00000008,
    VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_BIT = 0x00000010,
    VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_ATOMIC_BIT = 0x00000020,
    VK_FORMAT_FEATURE_VERTEX_BUFFER_BIT = 0x00000040,
    VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BIT = 0x00000080,
    VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BLEND_BIT = 0x00000100,
    VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT = 0x00000200,
    VK_FORMAT_FEATURE_BLIT_SRC_BIT = 0x00000400,
    VK_FORMAT_FEATURE_BLIT_DST_BIT = 0x00000800,
    VK_FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_LINEAR_BIT = 0x00001000,
    VK_FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_CUBIC_BIT_IMG = 0x00002000,
    VK_FORMAT_FEATURE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkFormatFeatureFlagBits;
typedef VkFlags VkFormatFeatureFlags;

typedef enum VkImageUsageFlagBits {
    VK_IMAGE_USAGE_TRANSFER_SRC_BIT = 0x00000001,
    VK_IMAGE_USAGE_TRANSFER_DST_BIT = 0x00000002,
    VK_IMAGE_USAGE_SAMPLED_BIT = 0x00000004,
    VK_IMAGE_USAGE_STORAGE_BIT = 0x00000008,
    VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT = 0x00000010,
    VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT = 0x00000020,
    VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT = 0x00000040,
    VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT = 0x00000080,
    VK_IMAGE_USAGE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkImageUsageFlagBits;
typedef VkFlags VkImageUsageFlags;

typedef enum VkImageCreateFlagBits {
    VK_IMAGE_CREATE_SPARSE_BINDING_BIT = 0x00000001,
    VK_IMAGE_CREATE_SPARSE_RESIDENCY_BIT = 0x00000002,
    VK_IMAGE_CREATE_SPARSE_ALIASED_BIT = 0x00000004,
    VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT = 0x00000008,
    VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT =
        0x00000010,
    VK_IMAGE_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkImageCreateFlagBits;
typedef VkFlags VkImageCreateFlags;

typedef enum VkSampleCountFlagBits {
    VK_SAMPLE_COUNT_1_BIT = 0x00000001,
    VK_SAMPLE_COUNT_2_BIT = 0x00000002,
    VK_SAMPLE_COUNT_4_BIT = 0x00000004,
    VK_SAMPLE_COUNT_8_BIT = 0x00000008,
    VK_SAMPLE_COUNT_16_BIT = 0x00000010,
    VK_SAMPLE_COUNT_32_BIT = 0x00000020,
    VK_SAMPLE_COUNT_64_BIT = 0x00000040,
    VK_SAMPLE_COUNT_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkSampleCountFlagBits;
typedef VkFlags VkSampleCountFlags;

typedef enum VkQueueFlagBits {
    VK_QUEUE_GRAPHICS_BIT = 0x00000001,
    VK_QUEUE_COMPUTE_BIT = 0x00000002,
    VK_QUEUE_TRANSFER_BIT = 0x00000004,
    VK_QUEUE_SPARSE_BINDING_BIT = 0x00000008,
    VK_QUEUE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkQueueFlagBits;
typedef VkFlags VkQueueFlags;

typedef enum VkMemoryPropertyFlagBits {
    VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT = 0x00000001,
    VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT = 0x00000002,
    VK_MEMORY_PROPERTY_HOST_COHERENT_BIT = 0x00000004,
    VK_MEMORY_PROPERTY_HOST_CACHED_BIT = 0x00000008,
    VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT = 0x00000010,
    VK_MEMORY_PROPERTY_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkMemoryPropertyFlagBits;
typedef VkFlags VkMemoryPropertyFlags;

typedef enum VkMemoryHeapFlagBits {
    VK_MEMORY_HEAP_DEVICE_LOCAL_BIT = 0x00000001,
    VK_MEMORY_HEAP_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkMemoryHeapFlagBits;
typedef VkFlags VkMemoryHeapFlags;
typedef VkFlags VkDeviceCreateFlags;
typedef VkFlags VkDeviceQueueCreateFlags;

typedef enum VkPipelineStageFlagBits {
    VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT = 0x00000001,
    VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT = 0x00000002,
    VK_PIPELINE_STAGE_VERTEX_INPUT_BIT = 0x00000004,
    VK_PIPELINE_STAGE_VERTEX_SHADER_BIT = 0x00000008,
    VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT = 0x00000010,
    VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT = 0x00000020,
    VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT = 0x00000040,
    VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT = 0x00000080,
    VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT = 0x00000100,
    VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT = 0x00000200,
    VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT = 0x00000400,
    VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT = 0x00000800,
    VK_PIPELINE_STAGE_TRANSFER_BIT = 0x00001000,
    VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT = 0x00002000,
    VK_PIPELINE_STAGE_HOST_BIT = 0x00004000,
    VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT = 0x00008000,
    VK_PIPELINE_STAGE_ALL_COMMANDS_BIT = 0x00010000,
    VK_PIPELINE_STAGE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkPipelineStageFlagBits;
typedef VkFlags VkPipelineStageFlags;
typedef VkFlags VkMemoryMapFlags;

typedef enum VkImageAspectFlagBits {
    VK_IMAGE_ASPECT_COLOR_BIT = 0x00000001,
    VK_IMAGE_ASPECT_DEPTH_BIT = 0x00000002,
    VK_IMAGE_ASPECT_STENCIL_BIT = 0x00000004,
    VK_IMAGE_ASPECT_METADATA_BIT = 0x00000008,
    VK_IMAGE_ASPECT_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkImageAspectFlagBits;
typedef VkFlags VkImageAspectFlags;

typedef enum VkSparseImageFormatFlagBits {
    VK_SPARSE_IMAGE_FORMAT_SINGLE_MIPTAIL_BIT = 0x00000001,
    VK_SPARSE_IMAGE_FORMAT_ALIGNED_MIP_SIZE_BIT = 0x00000002,
    VK_SPARSE_IMAGE_FORMAT_NONSTANDARD_BLOCK_SIZE_BIT = 0x00000004,
    VK_SPARSE_IMAGE_FORMAT_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkSparseImageFormatFlagBits;
typedef VkFlags VkSparseImageFormatFlags;

typedef enum VkSparseMemoryBindFlagBits {
    VK_SPARSE_MEMORY_BIND_METADATA_BIT = 0x00000001,
    VK_SPARSE_MEMORY_BIND_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkSparseMemoryBindFlagBits;
typedef VkFlags VkSparseMemoryBindFlags;

typedef enum VkFenceCreateFlagBits {
    VK_FENCE_CREATE_SIGNALED_BIT = 0x00000001,
    VK_FENCE_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkFenceCreateFlagBits;
typedef VkFlags VkFenceCreateFlags;
typedef VkFlags VkSemaphoreCreateFlags;
typedef VkFlags VkEventCreateFlags;
typedef VkFlags VkQueryPoolCreateFlags;

typedef enum VkQueryPipelineStatisticFlagBits {
    VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_VERTICES_BIT = 0x00000001,
    VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_PRIMITIVES_BIT = 0x00000002,
    VK_QUERY_PIPELINE_STATISTIC_VERTEX_SHADER_INVOCATIONS_BIT = 0x00000004,
    VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_INVOCATIONS_BIT = 0x00000008,
    VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_PRIMITIVES_BIT = 0x00000010,
    VK_QUERY_PIPELINE_STATISTIC_CLIPPING_INVOCATIONS_BIT = 0x00000020,
    VK_QUERY_PIPELINE_STATISTIC_CLIPPING_PRIMITIVES_BIT = 0x00000040,
    VK_QUERY_PIPELINE_STATISTIC_FRAGMENT_SHADER_INVOCATIONS_BIT = 0x00000080,
    VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_CONTROL_SHADER_PATCHES_BIT = 0x00000100,
    VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_EVALUATION_SHADER_INVOCATIONS_BIT = 0x00000200,
    VK_QUERY_PIPELINE_STATISTIC_COMPUTE_SHADER_INVOCATIONS_BIT = 0x00000400,
    VK_QUERY_PIPELINE_STATISTIC_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkQueryPipelineStatisticFlagBits;
typedef VkFlags VkQueryPipelineStatisticFlags;

typedef enum VkQueryResultFlagBits {
    VK_QUERY_RESULT_64_BIT = 0x00000001,
    VK_QUERY_RESULT_WAIT_BIT = 0x00000002,
    VK_QUERY_RESULT_WITH_AVAILABILITY_BIT = 0x00000004,
    VK_QUERY_RESULT_PARTIAL_BIT = 0x00000008,
    VK_QUERY_RESULT_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkQueryResultFlagBits;
typedef VkFlags VkQueryResultFlags;

typedef enum VkBufferCreateFlagBits {
    VK_BUFFER_CREATE_SPARSE_BINDING_BIT = 0x00000001,
    VK_BUFFER_CREATE_SPARSE_RESIDENCY_BIT = 0x00000002,
    VK_BUFFER_CREATE_SPARSE_ALIASED_BIT = 0x00000004,
    VK_BUFFER_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkBufferCreateFlagBits;
typedef VkFlags VkBufferCreateFlags;

typedef enum VkBufferUsageFlagBits {
    VK_BUFFER_USAGE_TRANSFER_SRC_BIT = 0x00000001,
    VK_BUFFER_USAGE_TRANSFER_DST_BIT = 0x00000002,
    VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT = 0x00000004,
    VK_BUFFER_USAGE_STORAGE_TEXEL_BUFFER_BIT = 0x00000008,
    VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT = 0x00000010,
    VK_BUFFER_USAGE_STORAGE_BUFFER_BIT = 0x00000020,
    VK_BUFFER_USAGE_INDEX_BUFFER_BIT = 0x00000040,
    VK_BUFFER_USAGE_VERTEX_BUFFER_BIT = 0x00000080,
    VK_BUFFER_USAGE_INDIRECT_BUFFER_BIT = 0x00000100,
    VK_BUFFER_USAGE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkBufferUsageFlagBits;
typedef VkFlags VkBufferUsageFlags;
typedef VkFlags VkBufferViewCreateFlags;
typedef VkFlags VkImageViewCreateFlags;
typedef VkFlags VkShaderModuleCreateFlags;
typedef VkFlags VkPipelineCacheCreateFlags;

typedef enum VkPipelineCreateFlagBits {
    VK_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT = 0x00000001,
    VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT = 0x00000002,
    VK_PIPELINE_CREATE_DERIVATIVE_BIT = 0x00000004,
    VK_PIPELINE_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkPipelineCreateFlagBits;
typedef VkFlags VkPipelineCreateFlags;
typedef VkFlags VkPipelineShaderStageCreateFlags;

typedef enum VkShaderStageFlagBits {
    VK_SHADER_STAGE_VERTEX_BIT = 0x00000001,
    VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT = 0x00000002,
    VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT = 0x00000004,
    VK_SHADER_STAGE_GEOMETRY_BIT = 0x00000008,
    VK_SHADER_STAGE_FRAGMENT_BIT = 0x00000010,
    VK_SHADER_STAGE_COMPUTE_BIT = 0x00000020,
    VK_SHADER_STAGE_ALL_GRAPHICS = 0x0000001F,
    VK_SHADER_STAGE_ALL = 0x7FFFFFFF,
    VK_SHADER_STAGE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkShaderStageFlagBits;
typedef VkFlags VkPipelineVertexInputStateCreateFlags;
typedef VkFlags VkPipelineInputAssemblyStateCreateFlags;
typedef VkFlags VkPipelineTessellationStateCreateFlags;
typedef VkFlags VkPipelineViewportStateCreateFlags;
typedef VkFlags VkPipelineRasterizationStateCreateFlags;

typedef enum VkCullModeFlagBits {
    VK_CULL_MODE_NONE = 0,
    VK_CULL_MODE_FRONT_BIT = 0x00000001,
    VK_CULL_MODE_BACK_BIT = 0x00000002,
    VK_CULL_MODE_FRONT_AND_BACK = 0x00000003,
    VK_CULL_MODE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkCullModeFlagBits;
typedef VkFlags VkCullModeFlags;
typedef VkFlags VkPipelineMultisampleStateCreateFlags;
typedef VkFlags VkPipelineDepthStencilStateCreateFlags;
typedef VkFlags VkPipelineColorBlendStateCreateFlags;

typedef enum VkColorComponentFlagBits {
    VK_COLOR_COMPONENT_R_BIT = 0x00000001,
    VK_COLOR_COMPONENT_G_BIT = 0x00000002,
    VK_COLOR_COMPONENT_B_BIT = 0x00000004,
    VK_COLOR_COMPONENT_A_BIT = 0x00000008,
    VK_COLOR_COMPONENT_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VkColorComponentFlagBits;
typedef VkFlags VkColorComponentFlags;
typedef VkFlags VkPipelineDynamicStateCreateFlags; typedef VkFlags VkPipelineLayoutCreateFlags; typedef VkFlags VkShaderStageFlags; typedef VkFlags VkSamplerCreateFlags; typedef VkFlags VkDescriptorSetLayoutCreateFlags; typedef enum VkDescriptorPoolCreateFlagBits { VK_DESCRIPTOR_POOL_CREATE_FREE_DESCRIPTOR_SET_BIT = 0x00000001, VK_DESCRIPTOR_POOL_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF } VkDescriptorPoolCreateFlagBits; typedef VkFlags VkDescriptorPoolCreateFlags; typedef VkFlags VkDescriptorPoolResetFlags; typedef VkFlags VkFramebufferCreateFlags; typedef VkFlags VkRenderPassCreateFlags; typedef enum VkAttachmentDescriptionFlagBits { VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT = 0x00000001, VK_ATTACHMENT_DESCRIPTION_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF } VkAttachmentDescriptionFlagBits; typedef VkFlags VkAttachmentDescriptionFlags; typedef VkFlags VkSubpassDescriptionFlags; typedef enum VkAccessFlagBits { VK_ACCESS_INDIRECT_COMMAND_READ_BIT = 0x00000001, VK_ACCESS_INDEX_READ_BIT = 0x00000002, VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT = 0x00000004, VK_ACCESS_UNIFORM_READ_BIT = 0x00000008, VK_ACCESS_INPUT_ATTACHMENT_READ_BIT = 0x00000010, VK_ACCESS_SHADER_READ_BIT = 0x00000020, VK_ACCESS_SHADER_WRITE_BIT = 0x00000040, VK_ACCESS_COLOR_ATTACHMENT_READ_BIT = 0x00000080, VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT = 0x00000100, VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT = 0x00000200, VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT = 0x00000400, VK_ACCESS_TRANSFER_READ_BIT = 0x00000800, VK_ACCESS_TRANSFER_WRITE_BIT = 0x00001000, VK_ACCESS_HOST_READ_BIT = 0x00002000, VK_ACCESS_HOST_WRITE_BIT = 0x00004000, VK_ACCESS_MEMORY_READ_BIT = 0x00008000, VK_ACCESS_MEMORY_WRITE_BIT = 0x00010000, VK_ACCESS_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF } VkAccessFlagBits; typedef VkFlags VkAccessFlags; typedef enum VkDependencyFlagBits { VK_DEPENDENCY_BY_REGION_BIT = 0x00000001, VK_DEPENDENCY_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF } VkDependencyFlagBits; typedef VkFlags VkDependencyFlags; typedef enum VkCommandPoolCreateFlagBits { 
VK_COMMAND_POOL_CREATE_TRANSIENT_BIT = 0x00000001, VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT = 0x00000002, VK_COMMAND_POOL_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF } VkCommandPoolCreateFlagBits; typedef VkFlags VkCommandPoolCreateFlags; typedef enum VkCommandPoolResetFlagBits { VK_COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT = 0x00000001, VK_COMMAND_POOL_RESET_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF } VkCommandPoolResetFlagBits; typedef VkFlags VkCommandPoolResetFlags; typedef enum VkCommandBufferUsageFlagBits { VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT = 0x00000001, VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT = 0x00000002, VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT = 0x00000004, VK_COMMAND_BUFFER_USAGE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF } VkCommandBufferUsageFlagBits; typedef VkFlags VkCommandBufferUsageFlags; typedef enum VkQueryControlFlagBits { VK_QUERY_CONTROL_PRECISE_BIT = 0x00000001, VK_QUERY_CONTROL_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF } VkQueryControlFlagBits; typedef VkFlags VkQueryControlFlags; typedef enum VkCommandBufferResetFlagBits { VK_COMMAND_BUFFER_RESET_RELEASE_RESOURCES_BIT = 0x00000001, VK_COMMAND_BUFFER_RESET_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF } VkCommandBufferResetFlagBits; typedef VkFlags VkCommandBufferResetFlags; typedef enum VkStencilFaceFlagBits { VK_STENCIL_FACE_FRONT_BIT = 0x00000001, VK_STENCIL_FACE_BACK_BIT = 0x00000002, VK_STENCIL_FRONT_AND_BACK = 0x00000003, VK_STENCIL_FACE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF } VkStencilFaceFlagBits; typedef VkFlags VkStencilFaceFlags; typedef void* (VKAPI_PTR *PFN_vkAllocationFunction)( void* pUserData, size_t size, size_t alignment, VkSystemAllocationScope allocationScope); typedef void* (VKAPI_PTR *PFN_vkReallocationFunction)( void* pUserData, void* pOriginal, size_t size, size_t alignment, VkSystemAllocationScope allocationScope); typedef void (VKAPI_PTR *PFN_vkFreeFunction)( void* pUserData, void* pMemory); typedef void (VKAPI_PTR *PFN_vkInternalAllocationNotification)( void* pUserData, size_t size, 
    VkInternalAllocationType allocationType,
    VkSystemAllocationScope allocationScope);
typedef void (VKAPI_PTR *PFN_vkInternalFreeNotification)(
    void* pUserData,
    size_t size,
    VkInternalAllocationType allocationType,
    VkSystemAllocationScope allocationScope);
typedef void (VKAPI_PTR *PFN_vkVoidFunction)(void);

typedef struct VkApplicationInfo {
    VkStructureType sType;
    const void* pNext;
    const char* pApplicationName;
    uint32_t applicationVersion;
    const char* pEngineName;
    uint32_t engineVersion;
    uint32_t apiVersion;
} VkApplicationInfo;

typedef struct VkInstanceCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkInstanceCreateFlags flags;
    const VkApplicationInfo* pApplicationInfo;
    uint32_t enabledLayerCount;
    const char* const* ppEnabledLayerNames;
    uint32_t enabledExtensionCount;
    const char* const* ppEnabledExtensionNames;
} VkInstanceCreateInfo;

typedef struct VkAllocationCallbacks {
    void* pUserData;
    PFN_vkAllocationFunction pfnAllocation;
    PFN_vkReallocationFunction pfnReallocation;
    PFN_vkFreeFunction pfnFree;
    PFN_vkInternalAllocationNotification pfnInternalAllocation;
    PFN_vkInternalFreeNotification pfnInternalFree;
} VkAllocationCallbacks;

typedef struct VkPhysicalDeviceFeatures {
    VkBool32 robustBufferAccess;
    VkBool32 fullDrawIndexUint32;
    VkBool32 imageCubeArray;
    VkBool32 independentBlend;
    VkBool32 geometryShader;
    VkBool32 tessellationShader;
    VkBool32 sampleRateShading;
    VkBool32 dualSrcBlend;
    VkBool32 logicOp;
    VkBool32 multiDrawIndirect;
    VkBool32 drawIndirectFirstInstance;
    VkBool32 depthClamp;
    VkBool32 depthBiasClamp;
    VkBool32 fillModeNonSolid;
    VkBool32 depthBounds;
    VkBool32 wideLines;
    VkBool32 largePoints;
    VkBool32 alphaToOne;
    VkBool32 multiViewport;
    VkBool32 samplerAnisotropy;
    VkBool32 textureCompressionETC2;
    VkBool32 textureCompressionASTC_LDR;
    VkBool32 textureCompressionBC;
    VkBool32 occlusionQueryPrecise;
    VkBool32 pipelineStatisticsQuery;
    VkBool32 vertexPipelineStoresAndAtomics;
    VkBool32 fragmentStoresAndAtomics;
    VkBool32 shaderTessellationAndGeometryPointSize;
    VkBool32 shaderImageGatherExtended;
    VkBool32 shaderStorageImageExtendedFormats;
    VkBool32 shaderStorageImageMultisample;
    VkBool32 shaderStorageImageReadWithoutFormat;
    VkBool32 shaderStorageImageWriteWithoutFormat;
    VkBool32 shaderUniformBufferArrayDynamicIndexing;
    VkBool32 shaderSampledImageArrayDynamicIndexing;
    VkBool32 shaderStorageBufferArrayDynamicIndexing;
    VkBool32 shaderStorageImageArrayDynamicIndexing;
    VkBool32 shaderClipDistance;
    VkBool32 shaderCullDistance;
    VkBool32 shaderFloat64;
    VkBool32 shaderInt64;
    VkBool32 shaderInt16;
    VkBool32 shaderResourceResidency;
    VkBool32 shaderResourceMinLod;
    VkBool32 sparseBinding;
    VkBool32 sparseResidencyBuffer;
    VkBool32 sparseResidencyImage2D;
    VkBool32 sparseResidencyImage3D;
    VkBool32 sparseResidency2Samples;
    VkBool32 sparseResidency4Samples;
    VkBool32 sparseResidency8Samples;
    VkBool32 sparseResidency16Samples;
    VkBool32 sparseResidencyAliased;
    VkBool32 variableMultisampleRate;
    VkBool32 inheritedQueries;
} VkPhysicalDeviceFeatures;

typedef struct VkFormatProperties {
    VkFormatFeatureFlags linearTilingFeatures;
    VkFormatFeatureFlags optimalTilingFeatures;
    VkFormatFeatureFlags bufferFeatures;
} VkFormatProperties;

typedef struct VkExtent3D {
    uint32_t width;
    uint32_t height;
    uint32_t depth;
} VkExtent3D;

typedef struct VkImageFormatProperties {
    VkExtent3D maxExtent;
    uint32_t maxMipLevels;
    uint32_t maxArrayLayers;
    VkSampleCountFlags sampleCounts;
    VkDeviceSize maxResourceSize;
} VkImageFormatProperties;

typedef struct VkPhysicalDeviceLimits {
    uint32_t maxImageDimension1D;
    uint32_t maxImageDimension2D;
    uint32_t maxImageDimension3D;
    uint32_t maxImageDimensionCube;
    uint32_t maxImageArrayLayers;
    uint32_t maxTexelBufferElements;
    uint32_t maxUniformBufferRange;
    uint32_t maxStorageBufferRange;
    uint32_t maxPushConstantsSize;
    uint32_t maxMemoryAllocationCount;
    uint32_t maxSamplerAllocationCount;
    VkDeviceSize bufferImageGranularity;
    VkDeviceSize sparseAddressSpaceSize;
    uint32_t maxBoundDescriptorSets;
    uint32_t maxPerStageDescriptorSamplers;
    uint32_t maxPerStageDescriptorUniformBuffers;
    uint32_t maxPerStageDescriptorStorageBuffers;
    uint32_t maxPerStageDescriptorSampledImages;
    uint32_t maxPerStageDescriptorStorageImages;
    uint32_t maxPerStageDescriptorInputAttachments;
    uint32_t maxPerStageResources;
    uint32_t maxDescriptorSetSamplers;
    uint32_t maxDescriptorSetUniformBuffers;
    uint32_t maxDescriptorSetUniformBuffersDynamic;
    uint32_t maxDescriptorSetStorageBuffers;
    uint32_t maxDescriptorSetStorageBuffersDynamic;
    uint32_t maxDescriptorSetSampledImages;
    uint32_t maxDescriptorSetStorageImages;
    uint32_t maxDescriptorSetInputAttachments;
    uint32_t maxVertexInputAttributes;
    uint32_t maxVertexInputBindings;
    uint32_t maxVertexInputAttributeOffset;
    uint32_t maxVertexInputBindingStride;
    uint32_t maxVertexOutputComponents;
    uint32_t maxTessellationGenerationLevel;
    uint32_t maxTessellationPatchSize;
    uint32_t maxTessellationControlPerVertexInputComponents;
    uint32_t maxTessellationControlPerVertexOutputComponents;
    uint32_t maxTessellationControlPerPatchOutputComponents;
    uint32_t maxTessellationControlTotalOutputComponents;
    uint32_t maxTessellationEvaluationInputComponents;
    uint32_t maxTessellationEvaluationOutputComponents;
    uint32_t maxGeometryShaderInvocations;
    uint32_t maxGeometryInputComponents;
    uint32_t maxGeometryOutputComponents;
    uint32_t maxGeometryOutputVertices;
    uint32_t maxGeometryTotalOutputComponents;
    uint32_t maxFragmentInputComponents;
    uint32_t maxFragmentOutputAttachments;
    uint32_t maxFragmentDualSrcAttachments;
    uint32_t maxFragmentCombinedOutputResources;
    uint32_t maxComputeSharedMemorySize;
    uint32_t maxComputeWorkGroupCount[3];
    uint32_t maxComputeWorkGroupInvocations;
    uint32_t maxComputeWorkGroupSize[3];
    uint32_t subPixelPrecisionBits;
    uint32_t subTexelPrecisionBits;
    uint32_t mipmapPrecisionBits;
    uint32_t maxDrawIndexedIndexValue;
    uint32_t maxDrawIndirectCount;
    float maxSamplerLodBias;
    float maxSamplerAnisotropy;
    uint32_t maxViewports;
    uint32_t maxViewportDimensions[2];
    float viewportBoundsRange[2];
    uint32_t viewportSubPixelBits;
    size_t minMemoryMapAlignment;
    VkDeviceSize minTexelBufferOffsetAlignment;
    VkDeviceSize minUniformBufferOffsetAlignment;
    VkDeviceSize minStorageBufferOffsetAlignment;
    int32_t minTexelOffset;
    uint32_t maxTexelOffset;
    int32_t minTexelGatherOffset;
    uint32_t maxTexelGatherOffset;
    float minInterpolationOffset;
    float maxInterpolationOffset;
    uint32_t subPixelInterpolationOffsetBits;
    uint32_t maxFramebufferWidth;
    uint32_t maxFramebufferHeight;
    uint32_t maxFramebufferLayers;
    VkSampleCountFlags framebufferColorSampleCounts;
    VkSampleCountFlags framebufferDepthSampleCounts;
    VkSampleCountFlags framebufferStencilSampleCounts;
    VkSampleCountFlags framebufferNoAttachmentsSampleCounts;
    uint32_t maxColorAttachments;
    VkSampleCountFlags sampledImageColorSampleCounts;
    VkSampleCountFlags sampledImageIntegerSampleCounts;
    VkSampleCountFlags sampledImageDepthSampleCounts;
    VkSampleCountFlags sampledImageStencilSampleCounts;
    VkSampleCountFlags storageImageSampleCounts;
    uint32_t maxSampleMaskWords;
    VkBool32 timestampComputeAndGraphics;
    float timestampPeriod;
    uint32_t maxClipDistances;
    uint32_t maxCullDistances;
    uint32_t maxCombinedClipAndCullDistances;
    uint32_t discreteQueuePriorities;
    float pointSizeRange[2];
    float lineWidthRange[2];
    float pointSizeGranularity;
    float lineWidthGranularity;
    VkBool32 strictLines;
    VkBool32 standardSampleLocations;
    VkDeviceSize optimalBufferCopyOffsetAlignment;
    VkDeviceSize optimalBufferCopyRowPitchAlignment;
    VkDeviceSize nonCoherentAtomSize;
} VkPhysicalDeviceLimits;

typedef struct VkPhysicalDeviceSparseProperties {
    VkBool32 residencyStandard2DBlockShape;
    VkBool32 residencyStandard2DMultisampleBlockShape;
    VkBool32 residencyStandard3DBlockShape;
    VkBool32 residencyAlignedMipSize;
    VkBool32 residencyNonResidentStrict;
} VkPhysicalDeviceSparseProperties;

typedef struct VkPhysicalDeviceProperties {
    uint32_t apiVersion;
    uint32_t driverVersion;
    uint32_t vendorID;
    uint32_t deviceID;
    VkPhysicalDeviceType deviceType;
    char deviceName[VK_MAX_PHYSICAL_DEVICE_NAME_SIZE];
    uint8_t pipelineCacheUUID[VK_UUID_SIZE];
    VkPhysicalDeviceLimits limits;
    VkPhysicalDeviceSparseProperties sparseProperties;
} VkPhysicalDeviceProperties;

typedef struct VkQueueFamilyProperties {
    VkQueueFlags queueFlags;
    uint32_t queueCount;
    uint32_t timestampValidBits;
    VkExtent3D minImageTransferGranularity;
} VkQueueFamilyProperties;

typedef struct VkMemoryType {
    VkMemoryPropertyFlags propertyFlags;
    uint32_t heapIndex;
} VkMemoryType;

typedef struct VkMemoryHeap {
    VkDeviceSize size;
    VkMemoryHeapFlags flags;
} VkMemoryHeap;

typedef struct VkPhysicalDeviceMemoryProperties {
    uint32_t memoryTypeCount;
    VkMemoryType memoryTypes[VK_MAX_MEMORY_TYPES];
    uint32_t memoryHeapCount;
    VkMemoryHeap memoryHeaps[VK_MAX_MEMORY_HEAPS];
} VkPhysicalDeviceMemoryProperties;

typedef struct VkDeviceQueueCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkDeviceQueueCreateFlags flags;
    uint32_t queueFamilyIndex;
    uint32_t queueCount;
    const float* pQueuePriorities;
} VkDeviceQueueCreateInfo;

typedef struct VkDeviceCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkDeviceCreateFlags flags;
    uint32_t queueCreateInfoCount;
    const VkDeviceQueueCreateInfo* pQueueCreateInfos;
    uint32_t enabledLayerCount;
    const char* const* ppEnabledLayerNames;
    uint32_t enabledExtensionCount;
    const char* const* ppEnabledExtensionNames;
    const VkPhysicalDeviceFeatures* pEnabledFeatures;
} VkDeviceCreateInfo;

typedef struct VkExtensionProperties {
    char extensionName[VK_MAX_EXTENSION_NAME_SIZE];
    uint32_t specVersion;
} VkExtensionProperties;

typedef struct VkLayerProperties {
    char layerName[VK_MAX_EXTENSION_NAME_SIZE];
    uint32_t specVersion;
    uint32_t implementationVersion;
    char description[VK_MAX_DESCRIPTION_SIZE];
} VkLayerProperties;

typedef struct VkSubmitInfo {
    VkStructureType sType;
    const void* pNext;
    uint32_t waitSemaphoreCount;
    const VkSemaphore* pWaitSemaphores;
    const VkPipelineStageFlags* pWaitDstStageMask;
    uint32_t commandBufferCount;
    const VkCommandBuffer*
    pCommandBuffers;
    uint32_t signalSemaphoreCount;
    const VkSemaphore* pSignalSemaphores;
} VkSubmitInfo;

typedef struct VkMemoryAllocateInfo {
    VkStructureType sType;
    const void* pNext;
    VkDeviceSize allocationSize;
    uint32_t memoryTypeIndex;
} VkMemoryAllocateInfo;

typedef struct VkMappedMemoryRange {
    VkStructureType sType;
    const void* pNext;
    VkDeviceMemory memory;
    VkDeviceSize offset;
    VkDeviceSize size;
} VkMappedMemoryRange;

typedef struct VkMemoryRequirements {
    VkDeviceSize size;
    VkDeviceSize alignment;
    uint32_t memoryTypeBits;
} VkMemoryRequirements;

typedef struct VkSparseImageFormatProperties {
    VkImageAspectFlags aspectMask;
    VkExtent3D imageGranularity;
    VkSparseImageFormatFlags flags;
} VkSparseImageFormatProperties;

typedef struct VkSparseImageMemoryRequirements {
    VkSparseImageFormatProperties formatProperties;
    uint32_t imageMipTailFirstLod;
    VkDeviceSize imageMipTailSize;
    VkDeviceSize imageMipTailOffset;
    VkDeviceSize imageMipTailStride;
} VkSparseImageMemoryRequirements;

typedef struct VkSparseMemoryBind {
    VkDeviceSize resourceOffset;
    VkDeviceSize size;
    VkDeviceMemory memory;
    VkDeviceSize memoryOffset;
    VkSparseMemoryBindFlags flags;
} VkSparseMemoryBind;

typedef struct VkSparseBufferMemoryBindInfo {
    VkBuffer buffer;
    uint32_t bindCount;
    const VkSparseMemoryBind* pBinds;
} VkSparseBufferMemoryBindInfo;

typedef struct VkSparseImageOpaqueMemoryBindInfo {
    VkImage image;
    uint32_t bindCount;
    const VkSparseMemoryBind* pBinds;
} VkSparseImageOpaqueMemoryBindInfo;

typedef struct VkImageSubresource {
    VkImageAspectFlags aspectMask;
    uint32_t mipLevel;
    uint32_t arrayLayer;
} VkImageSubresource;

typedef struct VkOffset3D {
    int32_t x;
    int32_t y;
    int32_t z;
} VkOffset3D;

typedef struct VkSparseImageMemoryBind {
    VkImageSubresource subresource;
    VkOffset3D offset;
    VkExtent3D extent;
    VkDeviceMemory memory;
    VkDeviceSize memoryOffset;
    VkSparseMemoryBindFlags flags;
} VkSparseImageMemoryBind;

typedef struct VkSparseImageMemoryBindInfo {
    VkImage image;
    uint32_t bindCount;
    const VkSparseImageMemoryBind* pBinds;
} VkSparseImageMemoryBindInfo;

typedef struct VkBindSparseInfo {
    VkStructureType sType;
    const void* pNext;
    uint32_t waitSemaphoreCount;
    const VkSemaphore* pWaitSemaphores;
    uint32_t bufferBindCount;
    const VkSparseBufferMemoryBindInfo* pBufferBinds;
    uint32_t imageOpaqueBindCount;
    const VkSparseImageOpaqueMemoryBindInfo* pImageOpaqueBinds;
    uint32_t imageBindCount;
    const VkSparseImageMemoryBindInfo* pImageBinds;
    uint32_t signalSemaphoreCount;
    const VkSemaphore* pSignalSemaphores;
} VkBindSparseInfo;

typedef struct VkFenceCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkFenceCreateFlags flags;
} VkFenceCreateInfo;

typedef struct VkSemaphoreCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkSemaphoreCreateFlags flags;
} VkSemaphoreCreateInfo;

typedef struct VkEventCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkEventCreateFlags flags;
} VkEventCreateInfo;

typedef struct VkQueryPoolCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkQueryPoolCreateFlags flags;
    VkQueryType queryType;
    uint32_t queryCount;
    VkQueryPipelineStatisticFlags pipelineStatistics;
} VkQueryPoolCreateInfo;

typedef struct VkBufferCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkBufferCreateFlags flags;
    VkDeviceSize size;
    VkBufferUsageFlags usage;
    VkSharingMode sharingMode;
    uint32_t queueFamilyIndexCount;
    const uint32_t* pQueueFamilyIndices;
} VkBufferCreateInfo;

typedef struct VkBufferViewCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkBufferViewCreateFlags flags;
    VkBuffer buffer;
    VkFormat format;
    VkDeviceSize offset;
    VkDeviceSize range;
} VkBufferViewCreateInfo;

typedef struct VkImageCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkImageCreateFlags flags;
    VkImageType imageType;
    VkFormat format;
    VkExtent3D extent;
    uint32_t mipLevels;
    uint32_t arrayLayers;
    VkSampleCountFlagBits samples;
    VkImageTiling tiling;
    VkImageUsageFlags usage;
    VkSharingMode sharingMode;
    uint32_t queueFamilyIndexCount;
    const uint32_t* pQueueFamilyIndices;
    VkImageLayout initialLayout;
} VkImageCreateInfo;

typedef struct VkSubresourceLayout {
    VkDeviceSize offset;
    VkDeviceSize size;
    VkDeviceSize rowPitch;
    VkDeviceSize arrayPitch;
    VkDeviceSize depthPitch;
} VkSubresourceLayout;

typedef struct VkComponentMapping {
    VkComponentSwizzle r;
    VkComponentSwizzle g;
    VkComponentSwizzle b;
    VkComponentSwizzle a;
} VkComponentMapping;

typedef struct VkImageSubresourceRange {
    VkImageAspectFlags aspectMask;
    uint32_t baseMipLevel;
    uint32_t levelCount;
    uint32_t baseArrayLayer;
    uint32_t layerCount;
} VkImageSubresourceRange;

typedef struct VkImageViewCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkImageViewCreateFlags flags;
    VkImage image;
    VkImageViewType viewType;
    VkFormat format;
    VkComponentMapping components;
    VkImageSubresourceRange subresourceRange;
} VkImageViewCreateInfo;

typedef struct VkShaderModuleCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkShaderModuleCreateFlags flags;
    size_t codeSize;
    const uint32_t* pCode;
} VkShaderModuleCreateInfo;

typedef struct VkPipelineCacheCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkPipelineCacheCreateFlags flags;
    size_t initialDataSize;
    const void* pInitialData;
} VkPipelineCacheCreateInfo;

typedef struct VkSpecializationMapEntry {
    uint32_t constantID;
    uint32_t offset;
    size_t size;
} VkSpecializationMapEntry;

typedef struct VkSpecializationInfo {
    uint32_t mapEntryCount;
    const VkSpecializationMapEntry* pMapEntries;
    size_t dataSize;
    const void* pData;
} VkSpecializationInfo;

typedef struct VkPipelineShaderStageCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkPipelineShaderStageCreateFlags flags;
    VkShaderStageFlagBits stage;
    VkShaderModule module;
    const char* pName;
    const VkSpecializationInfo* pSpecializationInfo;
} VkPipelineShaderStageCreateInfo;

typedef struct VkVertexInputBindingDescription {
    uint32_t binding;
    uint32_t stride;
    VkVertexInputRate inputRate;
} VkVertexInputBindingDescription;

typedef struct VkVertexInputAttributeDescription {
    uint32_t location;
    uint32_t binding;
    VkFormat format;
    uint32_t offset;
} VkVertexInputAttributeDescription;

typedef struct VkPipelineVertexInputStateCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkPipelineVertexInputStateCreateFlags flags;
    uint32_t vertexBindingDescriptionCount;
    const VkVertexInputBindingDescription* pVertexBindingDescriptions;
    uint32_t vertexAttributeDescriptionCount;
    const VkVertexInputAttributeDescription* pVertexAttributeDescriptions;
} VkPipelineVertexInputStateCreateInfo;

typedef struct VkPipelineInputAssemblyStateCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkPipelineInputAssemblyStateCreateFlags flags;
    VkPrimitiveTopology topology;
    VkBool32 primitiveRestartEnable;
} VkPipelineInputAssemblyStateCreateInfo;

typedef struct VkPipelineTessellationStateCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkPipelineTessellationStateCreateFlags flags;
    uint32_t patchControlPoints;
} VkPipelineTessellationStateCreateInfo;

typedef struct VkViewport {
    float x;
    float y;
    float width;
    float height;
    float minDepth;
    float maxDepth;
} VkViewport;

typedef struct VkOffset2D {
    int32_t x;
    int32_t y;
} VkOffset2D;

typedef struct VkExtent2D {
    uint32_t width;
    uint32_t height;
} VkExtent2D;

typedef struct VkRect2D {
    VkOffset2D offset;
    VkExtent2D extent;
} VkRect2D;

typedef struct VkPipelineViewportStateCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkPipelineViewportStateCreateFlags flags;
    uint32_t viewportCount;
    const VkViewport* pViewports;
    uint32_t scissorCount;
    const VkRect2D* pScissors;
} VkPipelineViewportStateCreateInfo;

typedef struct VkPipelineRasterizationStateCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkPipelineRasterizationStateCreateFlags flags;
    VkBool32 depthClampEnable;
    VkBool32 rasterizerDiscardEnable;
    VkPolygonMode polygonMode;
    VkCullModeFlags cullMode;
    VkFrontFace frontFace;
    VkBool32 depthBiasEnable;
    float depthBiasConstantFactor;
    float depthBiasClamp;
    float depthBiasSlopeFactor;
    float lineWidth;
} VkPipelineRasterizationStateCreateInfo;

typedef struct VkPipelineMultisampleStateCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkPipelineMultisampleStateCreateFlags flags;
    VkSampleCountFlagBits rasterizationSamples;
    VkBool32 sampleShadingEnable;
    float minSampleShading;
    const VkSampleMask* pSampleMask;
    VkBool32 alphaToCoverageEnable;
    VkBool32 alphaToOneEnable;
} VkPipelineMultisampleStateCreateInfo;

typedef struct VkStencilOpState {
    VkStencilOp failOp;
    VkStencilOp passOp;
    VkStencilOp depthFailOp;
    VkCompareOp compareOp;
    uint32_t compareMask;
    uint32_t writeMask;
    uint32_t reference;
} VkStencilOpState;

typedef struct VkPipelineDepthStencilStateCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkPipelineDepthStencilStateCreateFlags flags;
    VkBool32 depthTestEnable;
    VkBool32 depthWriteEnable;
    VkCompareOp depthCompareOp;
    VkBool32 depthBoundsTestEnable;
    VkBool32 stencilTestEnable;
    VkStencilOpState front;
    VkStencilOpState back;
    float minDepthBounds;
    float maxDepthBounds;
} VkPipelineDepthStencilStateCreateInfo;

typedef struct VkPipelineColorBlendAttachmentState {
    VkBool32 blendEnable;
    VkBlendFactor srcColorBlendFactor;
    VkBlendFactor dstColorBlendFactor;
    VkBlendOp colorBlendOp;
    VkBlendFactor srcAlphaBlendFactor;
    VkBlendFactor dstAlphaBlendFactor;
    VkBlendOp alphaBlendOp;
    VkColorComponentFlags colorWriteMask;
} VkPipelineColorBlendAttachmentState;

typedef struct VkPipelineColorBlendStateCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkPipelineColorBlendStateCreateFlags flags;
    VkBool32 logicOpEnable;
    VkLogicOp logicOp;
    uint32_t attachmentCount;
    const VkPipelineColorBlendAttachmentState* pAttachments;
    float blendConstants[4];
} VkPipelineColorBlendStateCreateInfo;

typedef struct VkPipelineDynamicStateCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkPipelineDynamicStateCreateFlags flags;
    uint32_t dynamicStateCount;
    const VkDynamicState* pDynamicStates;
} VkPipelineDynamicStateCreateInfo;

typedef struct VkGraphicsPipelineCreateInfo {
    VkStructureType
    sType;
    const void* pNext;
    VkPipelineCreateFlags flags;
    uint32_t stageCount;
    const VkPipelineShaderStageCreateInfo* pStages;
    const VkPipelineVertexInputStateCreateInfo* pVertexInputState;
    const VkPipelineInputAssemblyStateCreateInfo* pInputAssemblyState;
    const VkPipelineTessellationStateCreateInfo* pTessellationState;
    const VkPipelineViewportStateCreateInfo* pViewportState;
    const VkPipelineRasterizationStateCreateInfo* pRasterizationState;
    const VkPipelineMultisampleStateCreateInfo* pMultisampleState;
    const VkPipelineDepthStencilStateCreateInfo* pDepthStencilState;
    const VkPipelineColorBlendStateCreateInfo* pColorBlendState;
    const VkPipelineDynamicStateCreateInfo* pDynamicState;
    VkPipelineLayout layout;
    VkRenderPass renderPass;
    uint32_t subpass;
    VkPipeline basePipelineHandle;
    int32_t basePipelineIndex;
} VkGraphicsPipelineCreateInfo;

typedef struct VkComputePipelineCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkPipelineCreateFlags flags;
    VkPipelineShaderStageCreateInfo stage;
    VkPipelineLayout layout;
    VkPipeline basePipelineHandle;
    int32_t basePipelineIndex;
} VkComputePipelineCreateInfo;

typedef struct VkPushConstantRange {
    VkShaderStageFlags stageFlags;
    uint32_t offset;
    uint32_t size;
} VkPushConstantRange;

typedef struct VkPipelineLayoutCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkPipelineLayoutCreateFlags flags;
    uint32_t setLayoutCount;
    const VkDescriptorSetLayout* pSetLayouts;
    uint32_t pushConstantRangeCount;
    const VkPushConstantRange* pPushConstantRanges;
} VkPipelineLayoutCreateInfo;

typedef struct VkSamplerCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkSamplerCreateFlags flags;
    VkFilter magFilter;
    VkFilter minFilter;
    VkSamplerMipmapMode mipmapMode;
    VkSamplerAddressMode addressModeU;
    VkSamplerAddressMode addressModeV;
    VkSamplerAddressMode addressModeW;
    float mipLodBias;
    VkBool32 anisotropyEnable;
    float maxAnisotropy;
    VkBool32 compareEnable;
    VkCompareOp compareOp;
    float minLod;
    float maxLod;
    VkBorderColor borderColor;
    VkBool32 unnormalizedCoordinates;
} VkSamplerCreateInfo;

typedef struct VkDescriptorSetLayoutBinding {
    uint32_t binding;
    VkDescriptorType descriptorType;
    uint32_t descriptorCount;
    VkShaderStageFlags stageFlags;
    const VkSampler* pImmutableSamplers;
} VkDescriptorSetLayoutBinding;

typedef struct VkDescriptorSetLayoutCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkDescriptorSetLayoutCreateFlags flags;
    uint32_t bindingCount;
    const VkDescriptorSetLayoutBinding* pBindings;
} VkDescriptorSetLayoutCreateInfo;

typedef struct VkDescriptorPoolSize {
    VkDescriptorType type;
    uint32_t descriptorCount;
} VkDescriptorPoolSize;

typedef struct VkDescriptorPoolCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkDescriptorPoolCreateFlags flags;
    uint32_t maxSets;
    uint32_t poolSizeCount;
    const VkDescriptorPoolSize* pPoolSizes;
} VkDescriptorPoolCreateInfo;

typedef struct VkDescriptorSetAllocateInfo {
    VkStructureType sType;
    const void* pNext;
    VkDescriptorPool descriptorPool;
    uint32_t descriptorSetCount;
    const VkDescriptorSetLayout* pSetLayouts;
} VkDescriptorSetAllocateInfo;

typedef struct VkDescriptorImageInfo {
    VkSampler sampler;
    VkImageView imageView;
    VkImageLayout imageLayout;
} VkDescriptorImageInfo;

typedef struct VkDescriptorBufferInfo {
    VkBuffer buffer;
    VkDeviceSize offset;
    VkDeviceSize range;
} VkDescriptorBufferInfo;

typedef struct VkWriteDescriptorSet {
    VkStructureType sType;
    const void* pNext;
    VkDescriptorSet dstSet;
    uint32_t dstBinding;
    uint32_t dstArrayElement;
    uint32_t descriptorCount;
    VkDescriptorType descriptorType;
    const VkDescriptorImageInfo* pImageInfo;
    const VkDescriptorBufferInfo* pBufferInfo;
    const VkBufferView* pTexelBufferView;
} VkWriteDescriptorSet;

typedef struct VkCopyDescriptorSet {
    VkStructureType sType;
    const void* pNext;
    VkDescriptorSet srcSet;
    uint32_t srcBinding;
    uint32_t srcArrayElement;
    VkDescriptorSet dstSet;
    uint32_t dstBinding;
    uint32_t dstArrayElement;
    uint32_t descriptorCount;
} VkCopyDescriptorSet;

typedef struct VkFramebufferCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkFramebufferCreateFlags flags;
    VkRenderPass renderPass;
    uint32_t attachmentCount;
    const VkImageView* pAttachments;
    uint32_t width;
    uint32_t height;
    uint32_t layers;
} VkFramebufferCreateInfo;

typedef struct VkAttachmentDescription {
    VkAttachmentDescriptionFlags flags;
    VkFormat format;
    VkSampleCountFlagBits samples;
    VkAttachmentLoadOp loadOp;
    VkAttachmentStoreOp storeOp;
    VkAttachmentLoadOp stencilLoadOp;
    VkAttachmentStoreOp stencilStoreOp;
    VkImageLayout initialLayout;
    VkImageLayout finalLayout;
} VkAttachmentDescription;

typedef struct VkAttachmentReference {
    uint32_t attachment;
    VkImageLayout layout;
} VkAttachmentReference;

typedef struct VkSubpassDescription {
    VkSubpassDescriptionFlags flags;
    VkPipelineBindPoint pipelineBindPoint;
    uint32_t inputAttachmentCount;
    const VkAttachmentReference* pInputAttachments;
    uint32_t colorAttachmentCount;
    const VkAttachmentReference* pColorAttachments;
    const VkAttachmentReference* pResolveAttachments;
    const VkAttachmentReference* pDepthStencilAttachment;
    uint32_t preserveAttachmentCount;
    const uint32_t* pPreserveAttachments;
} VkSubpassDescription;

typedef struct VkSubpassDependency {
    uint32_t srcSubpass;
    uint32_t dstSubpass;
    VkPipelineStageFlags srcStageMask;
    VkPipelineStageFlags dstStageMask;
    VkAccessFlags srcAccessMask;
    VkAccessFlags dstAccessMask;
    VkDependencyFlags dependencyFlags;
} VkSubpassDependency;

typedef struct VkRenderPassCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkRenderPassCreateFlags flags;
    uint32_t attachmentCount;
    const VkAttachmentDescription* pAttachments;
    uint32_t subpassCount;
    const VkSubpassDescription* pSubpasses;
    uint32_t dependencyCount;
    const VkSubpassDependency* pDependencies;
} VkRenderPassCreateInfo;

typedef struct VkCommandPoolCreateInfo {
    VkStructureType sType;
    const void* pNext;
    VkCommandPoolCreateFlags flags;
    uint32_t queueFamilyIndex;
} VkCommandPoolCreateInfo;

typedef struct VkCommandBufferAllocateInfo {
    VkStructureType sType;
    const void* pNext;
    VkCommandPool commandPool;
    VkCommandBufferLevel level;
    uint32_t commandBufferCount;
} VkCommandBufferAllocateInfo;

typedef struct VkCommandBufferInheritanceInfo {
    VkStructureType sType;
    const void* pNext;
    VkRenderPass renderPass;
    uint32_t subpass;
    VkFramebuffer framebuffer;
    VkBool32 occlusionQueryEnable;
    VkQueryControlFlags queryFlags;
    VkQueryPipelineStatisticFlags pipelineStatistics;
} VkCommandBufferInheritanceInfo;

typedef struct VkCommandBufferBeginInfo {
    VkStructureType sType;
    const void* pNext;
    VkCommandBufferUsageFlags flags;
    const VkCommandBufferInheritanceInfo* pInheritanceInfo;
} VkCommandBufferBeginInfo;

typedef struct VkBufferCopy {
    VkDeviceSize srcOffset;
    VkDeviceSize dstOffset;
    VkDeviceSize size;
} VkBufferCopy;

typedef struct VkImageSubresourceLayers {
    VkImageAspectFlags aspectMask;
    uint32_t mipLevel;
    uint32_t baseArrayLayer;
    uint32_t layerCount;
} VkImageSubresourceLayers;

typedef struct VkImageCopy {
    VkImageSubresourceLayers srcSubresource;
    VkOffset3D srcOffset;
    VkImageSubresourceLayers dstSubresource;
    VkOffset3D dstOffset;
    VkExtent3D extent;
} VkImageCopy;

typedef struct VkImageBlit {
    VkImageSubresourceLayers srcSubresource;
    VkOffset3D srcOffsets[2];
    VkImageSubresourceLayers dstSubresource;
    VkOffset3D dstOffsets[2];
} VkImageBlit;

typedef struct VkBufferImageCopy {
    VkDeviceSize bufferOffset;
    uint32_t bufferRowLength;
    uint32_t bufferImageHeight;
    VkImageSubresourceLayers imageSubresource;
    VkOffset3D imageOffset;
    VkExtent3D imageExtent;
} VkBufferImageCopy;

typedef union VkClearColorValue {
    float float32[4];
    int32_t int32[4];
    uint32_t uint32[4];
} VkClearColorValue;

typedef struct VkClearDepthStencilValue {
    float depth;
    uint32_t stencil;
} VkClearDepthStencilValue;

typedef union VkClearValue {
    VkClearColorValue color;
    VkClearDepthStencilValue depthStencil;
} VkClearValue;

typedef struct VkClearAttachment {
    VkImageAspectFlags aspectMask;
    uint32_t colorAttachment;
    VkClearValue clearValue;
} VkClearAttachment;

typedef struct VkClearRect {
    VkRect2D rect;
    uint32_t baseArrayLayer;
    uint32_t layerCount;
} VkClearRect;

typedef struct VkImageResolve {
    VkImageSubresourceLayers srcSubresource;
    VkOffset3D srcOffset;
    VkImageSubresourceLayers dstSubresource;
    VkOffset3D dstOffset;
    VkExtent3D extent;
} VkImageResolve;

typedef struct VkMemoryBarrier {
    VkStructureType sType;
    const void* pNext;
    VkAccessFlags srcAccessMask;
    VkAccessFlags dstAccessMask;
} VkMemoryBarrier;

typedef struct VkBufferMemoryBarrier {
    VkStructureType sType;
    const void* pNext;
    VkAccessFlags srcAccessMask;
    VkAccessFlags dstAccessMask;
    uint32_t srcQueueFamilyIndex;
    uint32_t dstQueueFamilyIndex;
    VkBuffer buffer;
    VkDeviceSize offset;
    VkDeviceSize size;
} VkBufferMemoryBarrier;

typedef struct VkImageMemoryBarrier {
    VkStructureType sType;
    const void* pNext;
    VkAccessFlags srcAccessMask;
    VkAccessFlags dstAccessMask;
    VkImageLayout oldLayout;
    VkImageLayout newLayout;
    uint32_t srcQueueFamilyIndex;
    uint32_t dstQueueFamilyIndex;
    VkImage image;
    VkImageSubresourceRange subresourceRange;
} VkImageMemoryBarrier;

typedef struct VkRenderPassBeginInfo {
    VkStructureType sType;
    const void* pNext;
    VkRenderPass renderPass;
    VkFramebuffer framebuffer;
    VkRect2D renderArea;
    uint32_t clearValueCount;
    const VkClearValue* pClearValues;
} VkRenderPassBeginInfo;

typedef struct VkDispatchIndirectCommand {
    uint32_t x;
    uint32_t y;
    uint32_t z;
} VkDispatchIndirectCommand;

typedef struct VkDrawIndexedIndirectCommand {
    uint32_t indexCount;
    uint32_t instanceCount;
    uint32_t firstIndex;
    int32_t vertexOffset;
    uint32_t firstInstance;
} VkDrawIndexedIndirectCommand;

typedef struct VkDrawIndirectCommand {
    uint32_t vertexCount;
    uint32_t instanceCount;
    uint32_t firstVertex;
    uint32_t firstInstance;
} VkDrawIndirectCommand;

typedef VkResult (VKAPI_PTR *PFN_vkCreateInstance)(const VkInstanceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkInstance* pInstance);
typedef void (VKAPI_PTR *PFN_vkDestroyInstance)(VkInstance instance, const VkAllocationCallbacks* pAllocator);
typedef VkResult (VKAPI_PTR *PFN_vkEnumeratePhysicalDevices)(VkInstance instance, uint32_t* pPhysicalDeviceCount, VkPhysicalDevice* pPhysicalDevices); typedef void (VKAPI_PTR *PFN_vkGetPhysicalDeviceFeatures)(VkPhysicalDevice physicalDevice, VkPhysicalDeviceFeatures* pFeatures); typedef void (VKAPI_PTR *PFN_vkGetPhysicalDeviceFormatProperties)(VkPhysicalDevice physicalDevice, VkFormat format, VkFormatProperties* pFormatProperties); typedef VkResult (VKAPI_PTR *PFN_vkGetPhysicalDeviceImageFormatProperties)(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkImageTiling tiling, VkImageUsageFlags usage, VkImageCreateFlags flags, VkImageFormatProperties* pImageFormatProperties); typedef void (VKAPI_PTR *PFN_vkGetPhysicalDeviceProperties)(VkPhysicalDevice physicalDevice, VkPhysicalDeviceProperties* pProperties); typedef void (VKAPI_PTR *PFN_vkGetPhysicalDeviceQueueFamilyProperties)(VkPhysicalDevice physicalDevice, uint32_t* pQueueFamilyPropertyCount, VkQueueFamilyProperties* pQueueFamilyProperties); typedef void (VKAPI_PTR *PFN_vkGetPhysicalDeviceMemoryProperties)(VkPhysicalDevice physicalDevice, VkPhysicalDeviceMemoryProperties* pMemoryProperties); typedef PFN_vkVoidFunction (VKAPI_PTR *PFN_vkGetInstanceProcAddr)(VkInstance instance, const char* pName); typedef PFN_vkVoidFunction (VKAPI_PTR *PFN_vkGetDeviceProcAddr)(VkDevice device, const char* pName); typedef VkResult (VKAPI_PTR *PFN_vkCreateDevice)(VkPhysicalDevice physicalDevice, const VkDeviceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDevice* pDevice); typedef void (VKAPI_PTR *PFN_vkDestroyDevice)(VkDevice device, const VkAllocationCallbacks* pAllocator); typedef VkResult (VKAPI_PTR *PFN_vkEnumerateInstanceExtensionProperties)(const char* pLayerName, uint32_t* pPropertyCount, VkExtensionProperties* pProperties); typedef VkResult (VKAPI_PTR *PFN_vkEnumerateDeviceExtensionProperties)(VkPhysicalDevice physicalDevice, const char* pLayerName, uint32_t* pPropertyCount, 
VkExtensionProperties* pProperties); typedef VkResult (VKAPI_PTR *PFN_vkEnumerateInstanceLayerProperties)(uint32_t* pPropertyCount, VkLayerProperties* pProperties); typedef VkResult (VKAPI_PTR *PFN_vkEnumerateDeviceLayerProperties)(VkPhysicalDevice physicalDevice, uint32_t* pPropertyCount, VkLayerProperties* pProperties); typedef void (VKAPI_PTR *PFN_vkGetDeviceQueue)(VkDevice device, uint32_t queueFamilyIndex, uint32_t queueIndex, VkQueue* pQueue); typedef VkResult (VKAPI_PTR *PFN_vkQueueSubmit)(VkQueue queue, uint32_t submitCount, const VkSubmitInfo* pSubmits, VkFence fence); typedef VkResult (VKAPI_PTR *PFN_vkQueueWaitIdle)(VkQueue queue); typedef VkResult (VKAPI_PTR *PFN_vkDeviceWaitIdle)(VkDevice device); typedef VkResult (VKAPI_PTR *PFN_vkAllocateMemory)(VkDevice device, const VkMemoryAllocateInfo* pAllocateInfo, const VkAllocationCallbacks* pAllocator, VkDeviceMemory* pMemory); typedef void (VKAPI_PTR *PFN_vkFreeMemory)(VkDevice device, VkDeviceMemory memory, const VkAllocationCallbacks* pAllocator); typedef VkResult (VKAPI_PTR *PFN_vkMapMemory)(VkDevice device, VkDeviceMemory memory, VkDeviceSize offset, VkDeviceSize size, VkMemoryMapFlags flags, void** ppData); typedef void (VKAPI_PTR *PFN_vkUnmapMemory)(VkDevice device, VkDeviceMemory memory); typedef VkResult (VKAPI_PTR *PFN_vkFlushMappedMemoryRanges)(VkDevice device, uint32_t memoryRangeCount, const VkMappedMemoryRange* pMemoryRanges); typedef VkResult (VKAPI_PTR *PFN_vkInvalidateMappedMemoryRanges)(VkDevice device, uint32_t memoryRangeCount, const VkMappedMemoryRange* pMemoryRanges); typedef void (VKAPI_PTR *PFN_vkGetDeviceMemoryCommitment)(VkDevice device, VkDeviceMemory memory, VkDeviceSize* pCommittedMemoryInBytes); typedef VkResult (VKAPI_PTR *PFN_vkBindBufferMemory)(VkDevice device, VkBuffer buffer, VkDeviceMemory memory, VkDeviceSize memoryOffset); typedef VkResult (VKAPI_PTR *PFN_vkBindImageMemory)(VkDevice device, VkImage image, VkDeviceMemory memory, VkDeviceSize memoryOffset); typedef void 
(VKAPI_PTR *PFN_vkGetBufferMemoryRequirements)(VkDevice device, VkBuffer buffer, VkMemoryRequirements* pMemoryRequirements); typedef void (VKAPI_PTR *PFN_vkGetImageMemoryRequirements)(VkDevice device, VkImage image, VkMemoryRequirements* pMemoryRequirements); typedef void (VKAPI_PTR *PFN_vkGetImageSparseMemoryRequirements)(VkDevice device, VkImage image, uint32_t* pSparseMemoryRequirementCount, VkSparseImageMemoryRequirements* pSparseMemoryRequirements); typedef void (VKAPI_PTR *PFN_vkGetPhysicalDeviceSparseImageFormatProperties)(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkSampleCountFlagBits samples, VkImageUsageFlags usage, VkImageTiling tiling, uint32_t* pPropertyCount, VkSparseImageFormatProperties* pProperties); typedef VkResult (VKAPI_PTR *PFN_vkQueueBindSparse)(VkQueue queue, uint32_t bindInfoCount, const VkBindSparseInfo* pBindInfo, VkFence fence); typedef VkResult (VKAPI_PTR *PFN_vkCreateFence)(VkDevice device, const VkFenceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkFence* pFence); typedef void (VKAPI_PTR *PFN_vkDestroyFence)(VkDevice device, VkFence fence, const VkAllocationCallbacks* pAllocator); typedef VkResult (VKAPI_PTR *PFN_vkResetFences)(VkDevice device, uint32_t fenceCount, const VkFence* pFences); typedef VkResult (VKAPI_PTR *PFN_vkGetFenceStatus)(VkDevice device, VkFence fence); typedef VkResult (VKAPI_PTR *PFN_vkWaitForFences)(VkDevice device, uint32_t fenceCount, const VkFence* pFences, VkBool32 waitAll, uint64_t timeout); typedef VkResult (VKAPI_PTR *PFN_vkCreateSemaphore)(VkDevice device, const VkSemaphoreCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSemaphore* pSemaphore); typedef void (VKAPI_PTR *PFN_vkDestroySemaphore)(VkDevice device, VkSemaphore semaphore, const VkAllocationCallbacks* pAllocator); typedef VkResult (VKAPI_PTR *PFN_vkCreateEvent)(VkDevice device, const VkEventCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkEvent* pEvent); typedef 
void (VKAPI_PTR *PFN_vkDestroyEvent)(VkDevice device, VkEvent event, const VkAllocationCallbacks* pAllocator); typedef VkResult (VKAPI_PTR *PFN_vkGetEventStatus)(VkDevice device, VkEvent event); typedef VkResult (VKAPI_PTR *PFN_vkSetEvent)(VkDevice device, VkEvent event); typedef VkResult (VKAPI_PTR *PFN_vkResetEvent)(VkDevice device, VkEvent event); typedef VkResult (VKAPI_PTR *PFN_vkCreateQueryPool)(VkDevice device, const VkQueryPoolCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkQueryPool* pQueryPool); typedef void (VKAPI_PTR *PFN_vkDestroyQueryPool)(VkDevice device, VkQueryPool queryPool, const VkAllocationCallbacks* pAllocator); typedef VkResult (VKAPI_PTR *PFN_vkGetQueryPoolResults)(VkDevice device, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount, size_t dataSize, void* pData, VkDeviceSize stride, VkQueryResultFlags flags); typedef VkResult (VKAPI_PTR *PFN_vkCreateBuffer)(VkDevice device, const VkBufferCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkBuffer* pBuffer); typedef void (VKAPI_PTR *PFN_vkDestroyBuffer)(VkDevice device, VkBuffer buffer, const VkAllocationCallbacks* pAllocator); typedef VkResult (VKAPI_PTR *PFN_vkCreateBufferView)(VkDevice device, const VkBufferViewCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkBufferView* pView); typedef void (VKAPI_PTR *PFN_vkDestroyBufferView)(VkDevice device, VkBufferView bufferView, const VkAllocationCallbacks* pAllocator); typedef VkResult (VKAPI_PTR *PFN_vkCreateImage)(VkDevice device, const VkImageCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkImage* pImage); typedef void (VKAPI_PTR *PFN_vkDestroyImage)(VkDevice device, VkImage image, const VkAllocationCallbacks* pAllocator); typedef void (VKAPI_PTR *PFN_vkGetImageSubresourceLayout)(VkDevice device, VkImage image, const VkImageSubresource* pSubresource, VkSubresourceLayout* pLayout); typedef VkResult (VKAPI_PTR *PFN_vkCreateImageView)(VkDevice device, const 
VkImageViewCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkImageView* pView); typedef void (VKAPI_PTR *PFN_vkDestroyImageView)(VkDevice device, VkImageView imageView, const VkAllocationCallbacks* pAllocator); typedef VkResult (VKAPI_PTR *PFN_vkCreateShaderModule)(VkDevice device, const VkShaderModuleCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkShaderModule* pShaderModule); typedef void (VKAPI_PTR *PFN_vkDestroyShaderModule)(VkDevice device, VkShaderModule shaderModule, const VkAllocationCallbacks* pAllocator); typedef VkResult (VKAPI_PTR *PFN_vkCreatePipelineCache)(VkDevice device, const VkPipelineCacheCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkPipelineCache* pPipelineCache); typedef void (VKAPI_PTR *PFN_vkDestroyPipelineCache)(VkDevice device, VkPipelineCache pipelineCache, const VkAllocationCallbacks* pAllocator); typedef VkResult (VKAPI_PTR *PFN_vkGetPipelineCacheData)(VkDevice device, VkPipelineCache pipelineCache, size_t* pDataSize, void* pData); typedef VkResult (VKAPI_PTR *PFN_vkMergePipelineCaches)(VkDevice device, VkPipelineCache dstCache, uint32_t srcCacheCount, const VkPipelineCache* pSrcCaches); typedef VkResult (VKAPI_PTR *PFN_vkCreateGraphicsPipelines)(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount, const VkGraphicsPipelineCreateInfo* pCreateInfos, const VkAllocationCallbacks* pAllocator, VkPipeline* pPipelines); typedef VkResult (VKAPI_PTR *PFN_vkCreateComputePipelines)(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount, const VkComputePipelineCreateInfo* pCreateInfos, const VkAllocationCallbacks* pAllocator, VkPipeline* pPipelines); typedef void (VKAPI_PTR *PFN_vkDestroyPipeline)(VkDevice device, VkPipeline pipeline, const VkAllocationCallbacks* pAllocator); typedef VkResult (VKAPI_PTR *PFN_vkCreatePipelineLayout)(VkDevice device, const VkPipelineLayoutCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkPipelineLayout* 
pPipelineLayout); typedef void (VKAPI_PTR *PFN_vkDestroyPipelineLayout)(VkDevice device, VkPipelineLayout pipelineLayout, const VkAllocationCallbacks* pAllocator); typedef VkResult (VKAPI_PTR *PFN_vkCreateSampler)(VkDevice device, const VkSamplerCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSampler* pSampler); typedef void (VKAPI_PTR *PFN_vkDestroySampler)(VkDevice device, VkSampler sampler, const VkAllocationCallbacks* pAllocator); typedef VkResult (VKAPI_PTR *PFN_vkCreateDescriptorSetLayout)(VkDevice device, const VkDescriptorSetLayoutCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDescriptorSetLayout* pSetLayout); typedef void (VKAPI_PTR *PFN_vkDestroyDescriptorSetLayout)(VkDevice device, VkDescriptorSetLayout descriptorSetLayout, const VkAllocationCallbacks* pAllocator); typedef VkResult (VKAPI_PTR *PFN_vkCreateDescriptorPool)(VkDevice device, const VkDescriptorPoolCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDescriptorPool* pDescriptorPool); typedef void (VKAPI_PTR *PFN_vkDestroyDescriptorPool)(VkDevice device, VkDescriptorPool descriptorPool, const VkAllocationCallbacks* pAllocator); typedef VkResult (VKAPI_PTR *PFN_vkResetDescriptorPool)(VkDevice device, VkDescriptorPool descriptorPool, VkDescriptorPoolResetFlags flags); typedef VkResult (VKAPI_PTR *PFN_vkAllocateDescriptorSets)(VkDevice device, const VkDescriptorSetAllocateInfo* pAllocateInfo, VkDescriptorSet* pDescriptorSets); typedef VkResult (VKAPI_PTR *PFN_vkFreeDescriptorSets)(VkDevice device, VkDescriptorPool descriptorPool, uint32_t descriptorSetCount, const VkDescriptorSet* pDescriptorSets); typedef void (VKAPI_PTR *PFN_vkUpdateDescriptorSets)(VkDevice device, uint32_t descriptorWriteCount, const VkWriteDescriptorSet* pDescriptorWrites, uint32_t descriptorCopyCount, const VkCopyDescriptorSet* pDescriptorCopies); typedef VkResult (VKAPI_PTR *PFN_vkCreateFramebuffer)(VkDevice device, const VkFramebufferCreateInfo* pCreateInfo, const 
VkAllocationCallbacks* pAllocator, VkFramebuffer* pFramebuffer); typedef void (VKAPI_PTR *PFN_vkDestroyFramebuffer)(VkDevice device, VkFramebuffer framebuffer, const VkAllocationCallbacks* pAllocator); typedef VkResult (VKAPI_PTR *PFN_vkCreateRenderPass)(VkDevice device, const VkRenderPassCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkRenderPass* pRenderPass); typedef void (VKAPI_PTR *PFN_vkDestroyRenderPass)(VkDevice device, VkRenderPass renderPass, const VkAllocationCallbacks* pAllocator); typedef void (VKAPI_PTR *PFN_vkGetRenderAreaGranularity)(VkDevice device, VkRenderPass renderPass, VkExtent2D* pGranularity); typedef VkResult (VKAPI_PTR *PFN_vkCreateCommandPool)(VkDevice device, const VkCommandPoolCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkCommandPool* pCommandPool); typedef void (VKAPI_PTR *PFN_vkDestroyCommandPool)(VkDevice device, VkCommandPool commandPool, const VkAllocationCallbacks* pAllocator); typedef VkResult (VKAPI_PTR *PFN_vkResetCommandPool)(VkDevice device, VkCommandPool commandPool, VkCommandPoolResetFlags flags); typedef VkResult (VKAPI_PTR *PFN_vkAllocateCommandBuffers)(VkDevice device, const VkCommandBufferAllocateInfo* pAllocateInfo, VkCommandBuffer* pCommandBuffers); typedef void (VKAPI_PTR *PFN_vkFreeCommandBuffers)(VkDevice device, VkCommandPool commandPool, uint32_t commandBufferCount, const VkCommandBuffer* pCommandBuffers); typedef VkResult (VKAPI_PTR *PFN_vkBeginCommandBuffer)(VkCommandBuffer commandBuffer, const VkCommandBufferBeginInfo* pBeginInfo); typedef VkResult (VKAPI_PTR *PFN_vkEndCommandBuffer)(VkCommandBuffer commandBuffer); typedef VkResult (VKAPI_PTR *PFN_vkResetCommandBuffer)(VkCommandBuffer commandBuffer, VkCommandBufferResetFlags flags); typedef void (VKAPI_PTR *PFN_vkCmdBindPipeline)(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipeline pipeline); typedef void (VKAPI_PTR *PFN_vkCmdSetViewport)(VkCommandBuffer commandBuffer, uint32_t firstViewport, 
uint32_t viewportCount, const VkViewport* pViewports); typedef void (VKAPI_PTR *PFN_vkCmdSetScissor)(VkCommandBuffer commandBuffer, uint32_t firstScissor, uint32_t scissorCount, const VkRect2D* pScissors); typedef void (VKAPI_PTR *PFN_vkCmdSetLineWidth)(VkCommandBuffer commandBuffer, float lineWidth); typedef void (VKAPI_PTR *PFN_vkCmdSetDepthBias)(VkCommandBuffer commandBuffer, float depthBiasConstantFactor, float depthBiasClamp, float depthBiasSlopeFactor); typedef void (VKAPI_PTR *PFN_vkCmdSetBlendConstants)(VkCommandBuffer commandBuffer, const float blendConstants[4]); typedef void (VKAPI_PTR *PFN_vkCmdSetDepthBounds)(VkCommandBuffer commandBuffer, float minDepthBounds, float maxDepthBounds); typedef void (VKAPI_PTR *PFN_vkCmdSetStencilCompareMask)(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t compareMask); typedef void (VKAPI_PTR *PFN_vkCmdSetStencilWriteMask)(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t writeMask); typedef void (VKAPI_PTR *PFN_vkCmdSetStencilReference)(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t reference); typedef void (VKAPI_PTR *PFN_vkCmdBindDescriptorSets)(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipelineLayout layout, uint32_t firstSet, uint32_t descriptorSetCount, const VkDescriptorSet* pDescriptorSets, uint32_t dynamicOffsetCount, const uint32_t* pDynamicOffsets); typedef void (VKAPI_PTR *PFN_vkCmdBindIndexBuffer)(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, VkIndexType indexType); typedef void (VKAPI_PTR *PFN_vkCmdBindVertexBuffers)(VkCommandBuffer commandBuffer, uint32_t firstBinding, uint32_t bindingCount, const VkBuffer* pBuffers, const VkDeviceSize* pOffsets); typedef void (VKAPI_PTR *PFN_vkCmdDraw)(VkCommandBuffer commandBuffer, uint32_t vertexCount, uint32_t instanceCount, uint32_t firstVertex, uint32_t firstInstance); typedef void (VKAPI_PTR *PFN_vkCmdDrawIndexed)(VkCommandBuffer commandBuffer, uint32_t 
indexCount, uint32_t instanceCount, uint32_t firstIndex, int32_t vertexOffset, uint32_t firstInstance); typedef void (VKAPI_PTR *PFN_vkCmdDrawIndirect)(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t drawCount, uint32_t stride); typedef void (VKAPI_PTR *PFN_vkCmdDrawIndexedIndirect)(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t drawCount, uint32_t stride); typedef void (VKAPI_PTR *PFN_vkCmdDispatch)(VkCommandBuffer commandBuffer, uint32_t x, uint32_t y, uint32_t z); typedef void (VKAPI_PTR *PFN_vkCmdDispatchIndirect)(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset); typedef void (VKAPI_PTR *PFN_vkCmdCopyBuffer)(VkCommandBuffer commandBuffer, VkBuffer srcBuffer, VkBuffer dstBuffer, uint32_t regionCount, const VkBufferCopy* pRegions); typedef void (VKAPI_PTR *PFN_vkCmdCopyImage)(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageCopy* pRegions); typedef void (VKAPI_PTR *PFN_vkCmdBlitImage)(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageBlit* pRegions, VkFilter filter); typedef void (VKAPI_PTR *PFN_vkCmdCopyBufferToImage)(VkCommandBuffer commandBuffer, VkBuffer srcBuffer, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkBufferImageCopy* pRegions); typedef void (VKAPI_PTR *PFN_vkCmdCopyImageToBuffer)(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkBuffer dstBuffer, uint32_t regionCount, const VkBufferImageCopy* pRegions); typedef void (VKAPI_PTR *PFN_vkCmdUpdateBuffer)(VkCommandBuffer commandBuffer, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize dataSize, const uint32_t* pData); typedef void (VKAPI_PTR *PFN_vkCmdFillBuffer)(VkCommandBuffer commandBuffer, VkBuffer dstBuffer, VkDeviceSize 
dstOffset, VkDeviceSize size, uint32_t data); typedef void (VKAPI_PTR *PFN_vkCmdClearColorImage)(VkCommandBuffer commandBuffer, VkImage image, VkImageLayout imageLayout, const VkClearColorValue* pColor, uint32_t rangeCount, const VkImageSubresourceRange* pRanges); typedef void (VKAPI_PTR *PFN_vkCmdClearDepthStencilImage)(VkCommandBuffer commandBuffer, VkImage image, VkImageLayout imageLayout, const VkClearDepthStencilValue* pDepthStencil, uint32_t rangeCount, const VkImageSubresourceRange* pRanges); typedef void (VKAPI_PTR *PFN_vkCmdClearAttachments)(VkCommandBuffer commandBuffer, uint32_t attachmentCount, const VkClearAttachment* pAttachments, uint32_t rectCount, const VkClearRect* pRects); typedef void (VKAPI_PTR *PFN_vkCmdResolveImage)(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageResolve* pRegions); typedef void (VKAPI_PTR *PFN_vkCmdSetEvent)(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask); typedef void (VKAPI_PTR *PFN_vkCmdResetEvent)(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask); typedef void (VKAPI_PTR *PFN_vkCmdWaitEvents)(VkCommandBuffer commandBuffer, uint32_t eventCount, const VkEvent* pEvents, VkPipelineStageFlags srcStageMask, VkPipelineStageFlags dstStageMask, uint32_t memoryBarrierCount, const VkMemoryBarrier* pMemoryBarriers, uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier* pBufferMemoryBarriers, uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier* pImageMemoryBarriers); typedef void (VKAPI_PTR *PFN_vkCmdPipelineBarrier)(VkCommandBuffer commandBuffer, VkPipelineStageFlags srcStageMask, VkPipelineStageFlags dstStageMask, VkDependencyFlags dependencyFlags, uint32_t memoryBarrierCount, const VkMemoryBarrier* pMemoryBarriers, uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier* pBufferMemoryBarriers, uint32_t imageMemoryBarrierCount, const 
VkImageMemoryBarrier* pImageMemoryBarriers); typedef void (VKAPI_PTR *PFN_vkCmdBeginQuery)(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t query, VkQueryControlFlags flags); typedef void (VKAPI_PTR *PFN_vkCmdEndQuery)(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t query); typedef void (VKAPI_PTR *PFN_vkCmdResetQueryPool)(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount); typedef void (VKAPI_PTR *PFN_vkCmdWriteTimestamp)(VkCommandBuffer commandBuffer, VkPipelineStageFlagBits pipelineStage, VkQueryPool queryPool, uint32_t query); typedef void (VKAPI_PTR *PFN_vkCmdCopyQueryPoolResults)(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize stride, VkQueryResultFlags flags); typedef void (VKAPI_PTR *PFN_vkCmdPushConstants)(VkCommandBuffer commandBuffer, VkPipelineLayout layout, VkShaderStageFlags stageFlags, uint32_t offset, uint32_t size, const void* pValues); typedef void (VKAPI_PTR *PFN_vkCmdBeginRenderPass)(VkCommandBuffer commandBuffer, const VkRenderPassBeginInfo* pRenderPassBegin, VkSubpassContents contents); typedef void (VKAPI_PTR *PFN_vkCmdNextSubpass)(VkCommandBuffer commandBuffer, VkSubpassContents contents); typedef void (VKAPI_PTR *PFN_vkCmdEndRenderPass)(VkCommandBuffer commandBuffer); typedef void (VKAPI_PTR *PFN_vkCmdExecuteCommands)(VkCommandBuffer commandBuffer, uint32_t commandBufferCount, const VkCommandBuffer* pCommandBuffers); #ifndef VK_NO_PROTOTYPES VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance( const VkInstanceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkInstance* pInstance); VKAPI_ATTR void VKAPI_CALL vkDestroyInstance( VkInstance instance, const VkAllocationCallbacks* pAllocator); VKAPI_ATTR VkResult VKAPI_CALL vkEnumeratePhysicalDevices( VkInstance instance, uint32_t* pPhysicalDeviceCount, VkPhysicalDevice* pPhysicalDevices); VKAPI_ATTR void 
VKAPI_CALL vkGetPhysicalDeviceFeatures( VkPhysicalDevice physicalDevice, VkPhysicalDeviceFeatures* pFeatures); VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceFormatProperties( VkPhysicalDevice physicalDevice, VkFormat format, VkFormatProperties* pFormatProperties); VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceImageFormatProperties( VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkImageTiling tiling, VkImageUsageFlags usage, VkImageCreateFlags flags, VkImageFormatProperties* pImageFormatProperties); VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceProperties( VkPhysicalDevice physicalDevice, VkPhysicalDeviceProperties* pProperties); VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceQueueFamilyProperties( VkPhysicalDevice physicalDevice, uint32_t* pQueueFamilyPropertyCount, VkQueueFamilyProperties* pQueueFamilyProperties); VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceMemoryProperties( VkPhysicalDevice physicalDevice, VkPhysicalDeviceMemoryProperties* pMemoryProperties); VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr( VkInstance instance, const char* pName); VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr( VkDevice device, const char* pName); VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice( VkPhysicalDevice physicalDevice, const VkDeviceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDevice* pDevice); VKAPI_ATTR void VKAPI_CALL vkDestroyDevice( VkDevice device, const VkAllocationCallbacks* pAllocator); VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceExtensionProperties( const char* pLayerName, uint32_t* pPropertyCount, VkExtensionProperties* pProperties); VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceExtensionProperties( VkPhysicalDevice physicalDevice, const char* pLayerName, uint32_t* pPropertyCount, VkExtensionProperties* pProperties); VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceLayerProperties( uint32_t* pPropertyCount, VkLayerProperties* pProperties); VKAPI_ATTR VkResult VKAPI_CALL 
vkEnumerateDeviceLayerProperties( VkPhysicalDevice physicalDevice, uint32_t* pPropertyCount, VkLayerProperties* pProperties); VKAPI_ATTR void VKAPI_CALL vkGetDeviceQueue( VkDevice device, uint32_t queueFamilyIndex, uint32_t queueIndex, VkQueue* pQueue); VKAPI_ATTR VkResult VKAPI_CALL vkQueueSubmit( VkQueue queue, uint32_t submitCount, const VkSubmitInfo* pSubmits, VkFence fence); VKAPI_ATTR VkResult VKAPI_CALL vkQueueWaitIdle( VkQueue queue); VKAPI_ATTR VkResult VKAPI_CALL vkDeviceWaitIdle( VkDevice device); VKAPI_ATTR VkResult VKAPI_CALL vkAllocateMemory( VkDevice device, const VkMemoryAllocateInfo* pAllocateInfo, const VkAllocationCallbacks* pAllocator, VkDeviceMemory* pMemory); VKAPI_ATTR void VKAPI_CALL vkFreeMemory( VkDevice device, VkDeviceMemory memory, const VkAllocationCallbacks* pAllocator); VKAPI_ATTR VkResult VKAPI_CALL vkMapMemory( VkDevice device, VkDeviceMemory memory, VkDeviceSize offset, VkDeviceSize size, VkMemoryMapFlags flags, void** ppData); VKAPI_ATTR void VKAPI_CALL vkUnmapMemory( VkDevice device, VkDeviceMemory memory); VKAPI_ATTR VkResult VKAPI_CALL vkFlushMappedMemoryRanges( VkDevice device, uint32_t memoryRangeCount, const VkMappedMemoryRange* pMemoryRanges); VKAPI_ATTR VkResult VKAPI_CALL vkInvalidateMappedMemoryRanges( VkDevice device, uint32_t memoryRangeCount, const VkMappedMemoryRange* pMemoryRanges); VKAPI_ATTR void VKAPI_CALL vkGetDeviceMemoryCommitment( VkDevice device, VkDeviceMemory memory, VkDeviceSize* pCommittedMemoryInBytes); VKAPI_ATTR VkResult VKAPI_CALL vkBindBufferMemory( VkDevice device, VkBuffer buffer, VkDeviceMemory memory, VkDeviceSize memoryOffset); VKAPI_ATTR VkResult VKAPI_CALL vkBindImageMemory( VkDevice device, VkImage image, VkDeviceMemory memory, VkDeviceSize memoryOffset); VKAPI_ATTR void VKAPI_CALL vkGetBufferMemoryRequirements( VkDevice device, VkBuffer buffer, VkMemoryRequirements* pMemoryRequirements); VKAPI_ATTR void VKAPI_CALL vkGetImageMemoryRequirements( VkDevice device, VkImage image, 
VkMemoryRequirements* pMemoryRequirements); VKAPI_ATTR void VKAPI_CALL vkGetImageSparseMemoryRequirements( VkDevice device, VkImage image, uint32_t* pSparseMemoryRequirementCount, VkSparseImageMemoryRequirements* pSparseMemoryRequirements); VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceSparseImageFormatProperties( VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkSampleCountFlagBits samples, VkImageUsageFlags usage, VkImageTiling tiling, uint32_t* pPropertyCount, VkSparseImageFormatProperties* pProperties); VKAPI_ATTR VkResult VKAPI_CALL vkQueueBindSparse( VkQueue queue, uint32_t bindInfoCount, const VkBindSparseInfo* pBindInfo, VkFence fence); VKAPI_ATTR VkResult VKAPI_CALL vkCreateFence( VkDevice device, const VkFenceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkFence* pFence); VKAPI_ATTR void VKAPI_CALL vkDestroyFence( VkDevice device, VkFence fence, const VkAllocationCallbacks* pAllocator); VKAPI_ATTR VkResult VKAPI_CALL vkResetFences( VkDevice device, uint32_t fenceCount, const VkFence* pFences); VKAPI_ATTR VkResult VKAPI_CALL vkGetFenceStatus( VkDevice device, VkFence fence); VKAPI_ATTR VkResult VKAPI_CALL vkWaitForFences( VkDevice device, uint32_t fenceCount, const VkFence* pFences, VkBool32 waitAll, uint64_t timeout); VKAPI_ATTR VkResult VKAPI_CALL vkCreateSemaphore( VkDevice device, const VkSemaphoreCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSemaphore* pSemaphore); VKAPI_ATTR void VKAPI_CALL vkDestroySemaphore( VkDevice device, VkSemaphore semaphore, const VkAllocationCallbacks* pAllocator); VKAPI_ATTR VkResult VKAPI_CALL vkCreateEvent( VkDevice device, const VkEventCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkEvent* pEvent); VKAPI_ATTR void VKAPI_CALL vkDestroyEvent( VkDevice device, VkEvent event, const VkAllocationCallbacks* pAllocator); VKAPI_ATTR VkResult VKAPI_CALL vkGetEventStatus( VkDevice device, VkEvent event); VKAPI_ATTR VkResult VKAPI_CALL vkSetEvent( 
VkDevice device, VkEvent event); VKAPI_ATTR VkResult VKAPI_CALL vkResetEvent( VkDevice device, VkEvent event); VKAPI_ATTR VkResult VKAPI_CALL vkCreateQueryPool( VkDevice device, const VkQueryPoolCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkQueryPool* pQueryPool); VKAPI_ATTR void VKAPI_CALL vkDestroyQueryPool( VkDevice device, VkQueryPool queryPool, const VkAllocationCallbacks* pAllocator); VKAPI_ATTR VkResult VKAPI_CALL vkGetQueryPoolResults( VkDevice device, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount, size_t dataSize, void* pData, VkDeviceSize stride, VkQueryResultFlags flags); VKAPI_ATTR VkResult VKAPI_CALL vkCreateBuffer( VkDevice device, const VkBufferCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkBuffer* pBuffer); VKAPI_ATTR void VKAPI_CALL vkDestroyBuffer( VkDevice device, VkBuffer buffer, const VkAllocationCallbacks* pAllocator); VKAPI_ATTR VkResult VKAPI_CALL vkCreateBufferView( VkDevice device, const VkBufferViewCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkBufferView* pView); VKAPI_ATTR void VKAPI_CALL vkDestroyBufferView( VkDevice device, VkBufferView bufferView, const VkAllocationCallbacks* pAllocator); VKAPI_ATTR VkResult VKAPI_CALL vkCreateImage( VkDevice device, const VkImageCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkImage* pImage); VKAPI_ATTR void VKAPI_CALL vkDestroyImage( VkDevice device, VkImage image, const VkAllocationCallbacks* pAllocator); VKAPI_ATTR void VKAPI_CALL vkGetImageSubresourceLayout( VkDevice device, VkImage image, const VkImageSubresource* pSubresource, VkSubresourceLayout* pLayout); VKAPI_ATTR VkResult VKAPI_CALL vkCreateImageView( VkDevice device, const VkImageViewCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkImageView* pView); VKAPI_ATTR void VKAPI_CALL vkDestroyImageView( VkDevice device, VkImageView imageView, const VkAllocationCallbacks* pAllocator); VKAPI_ATTR VkResult VKAPI_CALL 
vkCreateShaderModule(VkDevice device, const VkShaderModuleCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkShaderModule* pShaderModule);
VKAPI_ATTR void VKAPI_CALL vkDestroyShaderModule(VkDevice device, VkShaderModule shaderModule, const VkAllocationCallbacks* pAllocator);
VKAPI_ATTR VkResult VKAPI_CALL vkCreatePipelineCache(VkDevice device, const VkPipelineCacheCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkPipelineCache* pPipelineCache);
VKAPI_ATTR void VKAPI_CALL vkDestroyPipelineCache(VkDevice device, VkPipelineCache pipelineCache, const VkAllocationCallbacks* pAllocator);
VKAPI_ATTR VkResult VKAPI_CALL vkGetPipelineCacheData(VkDevice device, VkPipelineCache pipelineCache, size_t* pDataSize, void* pData);
VKAPI_ATTR VkResult VKAPI_CALL vkMergePipelineCaches(VkDevice device, VkPipelineCache dstCache, uint32_t srcCacheCount, const VkPipelineCache* pSrcCaches);
VKAPI_ATTR VkResult VKAPI_CALL vkCreateGraphicsPipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount, const VkGraphicsPipelineCreateInfo* pCreateInfos, const VkAllocationCallbacks* pAllocator, VkPipeline* pPipelines);
VKAPI_ATTR VkResult VKAPI_CALL vkCreateComputePipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount, const VkComputePipelineCreateInfo* pCreateInfos, const VkAllocationCallbacks* pAllocator, VkPipeline* pPipelines);
VKAPI_ATTR void VKAPI_CALL vkDestroyPipeline(VkDevice device, VkPipeline pipeline, const VkAllocationCallbacks* pAllocator);
VKAPI_ATTR VkResult VKAPI_CALL vkCreatePipelineLayout(VkDevice device, const VkPipelineLayoutCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkPipelineLayout* pPipelineLayout);
VKAPI_ATTR void VKAPI_CALL vkDestroyPipelineLayout(VkDevice device, VkPipelineLayout pipelineLayout, const VkAllocationCallbacks* pAllocator);
VKAPI_ATTR VkResult VKAPI_CALL vkCreateSampler(VkDevice device, const VkSamplerCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSampler* pSampler);
VKAPI_ATTR void VKAPI_CALL vkDestroySampler(VkDevice device, VkSampler sampler, const VkAllocationCallbacks* pAllocator);
VKAPI_ATTR VkResult VKAPI_CALL vkCreateDescriptorSetLayout(VkDevice device, const VkDescriptorSetLayoutCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDescriptorSetLayout* pSetLayout);
VKAPI_ATTR void VKAPI_CALL vkDestroyDescriptorSetLayout(VkDevice device, VkDescriptorSetLayout descriptorSetLayout, const VkAllocationCallbacks* pAllocator);
VKAPI_ATTR VkResult VKAPI_CALL vkCreateDescriptorPool(VkDevice device, const VkDescriptorPoolCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDescriptorPool* pDescriptorPool);
VKAPI_ATTR void VKAPI_CALL vkDestroyDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, const VkAllocationCallbacks* pAllocator);
VKAPI_ATTR VkResult VKAPI_CALL vkResetDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, VkDescriptorPoolResetFlags flags);
VKAPI_ATTR VkResult VKAPI_CALL vkAllocateDescriptorSets(VkDevice device, const VkDescriptorSetAllocateInfo* pAllocateInfo, VkDescriptorSet* pDescriptorSets);
VKAPI_ATTR VkResult VKAPI_CALL vkFreeDescriptorSets(VkDevice device, VkDescriptorPool descriptorPool, uint32_t descriptorSetCount, const VkDescriptorSet* pDescriptorSets);
VKAPI_ATTR void VKAPI_CALL vkUpdateDescriptorSets(VkDevice device, uint32_t descriptorWriteCount, const VkWriteDescriptorSet* pDescriptorWrites, uint32_t descriptorCopyCount, const VkCopyDescriptorSet* pDescriptorCopies);
VKAPI_ATTR VkResult VKAPI_CALL vkCreateFramebuffer(VkDevice device, const VkFramebufferCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkFramebuffer* pFramebuffer);
VKAPI_ATTR void VKAPI_CALL vkDestroyFramebuffer(VkDevice device, VkFramebuffer framebuffer, const VkAllocationCallbacks* pAllocator);
VKAPI_ATTR VkResult VKAPI_CALL vkCreateRenderPass(VkDevice device, const VkRenderPassCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkRenderPass* pRenderPass);
VKAPI_ATTR void VKAPI_CALL vkDestroyRenderPass(VkDevice device, VkRenderPass renderPass, const VkAllocationCallbacks* pAllocator);
VKAPI_ATTR void VKAPI_CALL vkGetRenderAreaGranularity(VkDevice device, VkRenderPass renderPass, VkExtent2D* pGranularity);
VKAPI_ATTR VkResult VKAPI_CALL vkCreateCommandPool(VkDevice device, const VkCommandPoolCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkCommandPool* pCommandPool);
VKAPI_ATTR void VKAPI_CALL vkDestroyCommandPool(VkDevice device, VkCommandPool commandPool, const VkAllocationCallbacks* pAllocator);
VKAPI_ATTR VkResult VKAPI_CALL vkResetCommandPool(VkDevice device, VkCommandPool commandPool, VkCommandPoolResetFlags flags);
VKAPI_ATTR VkResult VKAPI_CALL vkAllocateCommandBuffers(VkDevice device, const VkCommandBufferAllocateInfo* pAllocateInfo, VkCommandBuffer* pCommandBuffers);
VKAPI_ATTR void VKAPI_CALL vkFreeCommandBuffers(VkDevice device, VkCommandPool commandPool, uint32_t commandBufferCount, const VkCommandBuffer* pCommandBuffers);
VKAPI_ATTR VkResult VKAPI_CALL vkBeginCommandBuffer(VkCommandBuffer commandBuffer, const VkCommandBufferBeginInfo* pBeginInfo);
VKAPI_ATTR VkResult VKAPI_CALL vkEndCommandBuffer(VkCommandBuffer commandBuffer);
VKAPI_ATTR VkResult VKAPI_CALL vkResetCommandBuffer(VkCommandBuffer commandBuffer, VkCommandBufferResetFlags flags);
VKAPI_ATTR void VKAPI_CALL vkCmdBindPipeline(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipeline pipeline);
VKAPI_ATTR void VKAPI_CALL vkCmdSetViewport(VkCommandBuffer commandBuffer, uint32_t firstViewport, uint32_t viewportCount, const VkViewport* pViewports);
VKAPI_ATTR void VKAPI_CALL vkCmdSetScissor(VkCommandBuffer commandBuffer, uint32_t firstScissor, uint32_t scissorCount, const VkRect2D* pScissors);
VKAPI_ATTR void VKAPI_CALL vkCmdSetLineWidth(VkCommandBuffer commandBuffer, float lineWidth);
VKAPI_ATTR void VKAPI_CALL vkCmdSetDepthBias(VkCommandBuffer commandBuffer, float depthBiasConstantFactor, float depthBiasClamp, float depthBiasSlopeFactor);
VKAPI_ATTR void VKAPI_CALL vkCmdSetBlendConstants(VkCommandBuffer commandBuffer, const float blendConstants[4]);
VKAPI_ATTR void VKAPI_CALL vkCmdSetDepthBounds(VkCommandBuffer commandBuffer, float minDepthBounds, float maxDepthBounds);
VKAPI_ATTR void VKAPI_CALL vkCmdSetStencilCompareMask(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t compareMask);
VKAPI_ATTR void VKAPI_CALL vkCmdSetStencilWriteMask(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t writeMask);
VKAPI_ATTR void VKAPI_CALL vkCmdSetStencilReference(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t reference);
VKAPI_ATTR void VKAPI_CALL vkCmdBindDescriptorSets(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipelineLayout layout, uint32_t firstSet, uint32_t descriptorSetCount, const VkDescriptorSet* pDescriptorSets, uint32_t dynamicOffsetCount, const uint32_t* pDynamicOffsets);
VKAPI_ATTR void VKAPI_CALL vkCmdBindIndexBuffer(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, VkIndexType indexType);
VKAPI_ATTR void VKAPI_CALL vkCmdBindVertexBuffers(VkCommandBuffer commandBuffer, uint32_t firstBinding, uint32_t bindingCount, const VkBuffer* pBuffers, const VkDeviceSize* pOffsets);
VKAPI_ATTR void VKAPI_CALL vkCmdDraw(VkCommandBuffer commandBuffer, uint32_t vertexCount, uint32_t instanceCount, uint32_t firstVertex, uint32_t firstInstance);
VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndexed(VkCommandBuffer commandBuffer, uint32_t indexCount, uint32_t instanceCount, uint32_t firstIndex, int32_t vertexOffset, uint32_t firstInstance);
VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t drawCount, uint32_t stride);
VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndexedIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t drawCount, uint32_t stride);
VKAPI_ATTR void VKAPI_CALL vkCmdDispatch(VkCommandBuffer commandBuffer, uint32_t x, uint32_t y, uint32_t z);
VKAPI_ATTR void VKAPI_CALL vkCmdDispatchIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset);
VKAPI_ATTR void VKAPI_CALL vkCmdCopyBuffer(VkCommandBuffer commandBuffer, VkBuffer srcBuffer, VkBuffer dstBuffer, uint32_t regionCount, const VkBufferCopy* pRegions);
VKAPI_ATTR void VKAPI_CALL vkCmdCopyImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageCopy* pRegions);
VKAPI_ATTR void VKAPI_CALL vkCmdBlitImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageBlit* pRegions, VkFilter filter);
VKAPI_ATTR void VKAPI_CALL vkCmdCopyBufferToImage(VkCommandBuffer commandBuffer, VkBuffer srcBuffer, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkBufferImageCopy* pRegions);
VKAPI_ATTR void VKAPI_CALL vkCmdCopyImageToBuffer(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkBuffer dstBuffer, uint32_t regionCount, const VkBufferImageCopy* pRegions);
VKAPI_ATTR void VKAPI_CALL vkCmdUpdateBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize dataSize, const uint32_t* pData);
VKAPI_ATTR void VKAPI_CALL vkCmdFillBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize size, uint32_t data);
VKAPI_ATTR void VKAPI_CALL vkCmdClearColorImage(VkCommandBuffer commandBuffer, VkImage image, VkImageLayout imageLayout, const VkClearColorValue* pColor, uint32_t rangeCount, const VkImageSubresourceRange* pRanges);
VKAPI_ATTR void VKAPI_CALL vkCmdClearDepthStencilImage(VkCommandBuffer commandBuffer, VkImage image, VkImageLayout imageLayout, const VkClearDepthStencilValue* pDepthStencil, uint32_t rangeCount, const VkImageSubresourceRange* pRanges);
VKAPI_ATTR void VKAPI_CALL vkCmdClearAttachments(VkCommandBuffer commandBuffer, uint32_t attachmentCount, const VkClearAttachment* pAttachments, uint32_t rectCount, const VkClearRect* pRects);
VKAPI_ATTR void VKAPI_CALL vkCmdResolveImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageResolve* pRegions);
VKAPI_ATTR void VKAPI_CALL vkCmdSetEvent(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask);
VKAPI_ATTR void VKAPI_CALL vkCmdResetEvent(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask);
VKAPI_ATTR void VKAPI_CALL vkCmdWaitEvents(VkCommandBuffer commandBuffer, uint32_t eventCount, const VkEvent* pEvents, VkPipelineStageFlags srcStageMask, VkPipelineStageFlags dstStageMask, uint32_t memoryBarrierCount, const VkMemoryBarrier* pMemoryBarriers, uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier* pBufferMemoryBarriers, uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier* pImageMemoryBarriers);
VKAPI_ATTR void VKAPI_CALL vkCmdPipelineBarrier(VkCommandBuffer commandBuffer, VkPipelineStageFlags srcStageMask, VkPipelineStageFlags dstStageMask, VkDependencyFlags dependencyFlags, uint32_t memoryBarrierCount, const VkMemoryBarrier* pMemoryBarriers, uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier* pBufferMemoryBarriers, uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier* pImageMemoryBarriers);
VKAPI_ATTR void VKAPI_CALL vkCmdBeginQuery(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t query, VkQueryControlFlags flags);
VKAPI_ATTR void VKAPI_CALL vkCmdEndQuery(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t query);
VKAPI_ATTR void VKAPI_CALL vkCmdResetQueryPool(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount);
VKAPI_ATTR void VKAPI_CALL vkCmdWriteTimestamp(VkCommandBuffer commandBuffer, VkPipelineStageFlagBits pipelineStage, VkQueryPool queryPool, uint32_t query);
VKAPI_ATTR void VKAPI_CALL vkCmdCopyQueryPoolResults(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize stride, VkQueryResultFlags flags);
VKAPI_ATTR void VKAPI_CALL vkCmdPushConstants(VkCommandBuffer commandBuffer, VkPipelineLayout layout, VkShaderStageFlags stageFlags, uint32_t offset, uint32_t size, const void* pValues);
VKAPI_ATTR void VKAPI_CALL vkCmdBeginRenderPass(VkCommandBuffer commandBuffer, const VkRenderPassBeginInfo* pRenderPassBegin, VkSubpassContents contents);
VKAPI_ATTR void VKAPI_CALL vkCmdNextSubpass(VkCommandBuffer commandBuffer, VkSubpassContents contents);
VKAPI_ATTR void VKAPI_CALL vkCmdEndRenderPass(VkCommandBuffer commandBuffer);
VKAPI_ATTR void VKAPI_CALL vkCmdExecuteCommands(VkCommandBuffer commandBuffer, uint32_t commandBufferCount, const VkCommandBuffer* pCommandBuffers);
#endif

#define VK_KHR_surface 1
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkSurfaceKHR)

#define VK_KHR_SURFACE_SPEC_VERSION 25
#define VK_KHR_SURFACE_EXTENSION_NAME "VK_KHR_surface"

typedef enum VkColorSpaceKHR {
    VK_COLORSPACE_SRGB_NONLINEAR_KHR = 0,
    VK_COLOR_SPACE_BEGIN_RANGE_KHR = VK_COLORSPACE_SRGB_NONLINEAR_KHR,
    VK_COLOR_SPACE_END_RANGE_KHR = VK_COLORSPACE_SRGB_NONLINEAR_KHR,
    VK_COLOR_SPACE_RANGE_SIZE_KHR = (VK_COLORSPACE_SRGB_NONLINEAR_KHR - VK_COLORSPACE_SRGB_NONLINEAR_KHR + 1),
    VK_COLOR_SPACE_MAX_ENUM_KHR = 0x7FFFFFFF
} VkColorSpaceKHR;

typedef enum VkPresentModeKHR {
    VK_PRESENT_MODE_IMMEDIATE_KHR = 0,
    VK_PRESENT_MODE_MAILBOX_KHR = 1,
    VK_PRESENT_MODE_FIFO_KHR = 2,
    VK_PRESENT_MODE_FIFO_RELAXED_KHR = 3,
    VK_PRESENT_MODE_BEGIN_RANGE_KHR = VK_PRESENT_MODE_IMMEDIATE_KHR,
    VK_PRESENT_MODE_END_RANGE_KHR = VK_PRESENT_MODE_FIFO_RELAXED_KHR,
    VK_PRESENT_MODE_RANGE_SIZE_KHR = (VK_PRESENT_MODE_FIFO_RELAXED_KHR - VK_PRESENT_MODE_IMMEDIATE_KHR + 1),
    VK_PRESENT_MODE_MAX_ENUM_KHR = 0x7FFFFFFF
} VkPresentModeKHR;

typedef enum VkSurfaceTransformFlagBitsKHR {
    VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR = 0x00000001,
    VK_SURFACE_TRANSFORM_ROTATE_90_BIT_KHR = 0x00000002,
    VK_SURFACE_TRANSFORM_ROTATE_180_BIT_KHR = 0x00000004,
    VK_SURFACE_TRANSFORM_ROTATE_270_BIT_KHR = 0x00000008,
    VK_SURFACE_TRANSFORM_HORIZONTAL_MIRROR_BIT_KHR = 0x00000010,
    VK_SURFACE_TRANSFORM_HORIZONTAL_MIRROR_ROTATE_90_BIT_KHR = 0x00000020,
    VK_SURFACE_TRANSFORM_HORIZONTAL_MIRROR_ROTATE_180_BIT_KHR = 0x00000040,
    VK_SURFACE_TRANSFORM_HORIZONTAL_MIRROR_ROTATE_270_BIT_KHR = 0x00000080,
    VK_SURFACE_TRANSFORM_INHERIT_BIT_KHR = 0x00000100,
    VK_SURFACE_TRANSFORM_FLAG_BITS_MAX_ENUM_KHR = 0x7FFFFFFF
} VkSurfaceTransformFlagBitsKHR;
typedef VkFlags VkSurfaceTransformFlagsKHR;

typedef enum VkCompositeAlphaFlagBitsKHR {
    VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR = 0x00000001,
    VK_COMPOSITE_ALPHA_PRE_MULTIPLIED_BIT_KHR = 0x00000002,
    VK_COMPOSITE_ALPHA_POST_MULTIPLIED_BIT_KHR = 0x00000004,
    VK_COMPOSITE_ALPHA_INHERIT_BIT_KHR = 0x00000008,
    VK_COMPOSITE_ALPHA_FLAG_BITS_MAX_ENUM_KHR = 0x7FFFFFFF
} VkCompositeAlphaFlagBitsKHR;
typedef VkFlags VkCompositeAlphaFlagsKHR;

typedef struct VkSurfaceCapabilitiesKHR {
    uint32_t minImageCount;
    uint32_t maxImageCount;
    VkExtent2D currentExtent;
    VkExtent2D minImageExtent;
    VkExtent2D maxImageExtent;
    uint32_t maxImageArrayLayers;
    VkSurfaceTransformFlagsKHR supportedTransforms;
    VkSurfaceTransformFlagBitsKHR currentTransform;
    VkCompositeAlphaFlagsKHR supportedCompositeAlpha;
    VkImageUsageFlags supportedUsageFlags;
} VkSurfaceCapabilitiesKHR;

typedef struct VkSurfaceFormatKHR {
    VkFormat format;
    VkColorSpaceKHR colorSpace;
} VkSurfaceFormatKHR;

typedef void (VKAPI_PTR *PFN_vkDestroySurfaceKHR)(VkInstance instance, VkSurfaceKHR surface, const VkAllocationCallbacks* pAllocator);
typedef VkResult (VKAPI_PTR *PFN_vkGetPhysicalDeviceSurfaceSupportKHR)(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, VkSurfaceKHR surface, VkBool32* pSupported);
typedef VkResult (VKAPI_PTR *PFN_vkGetPhysicalDeviceSurfaceCapabilitiesKHR)(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface, VkSurfaceCapabilitiesKHR* pSurfaceCapabilities);
typedef VkResult (VKAPI_PTR *PFN_vkGetPhysicalDeviceSurfaceFormatsKHR)(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface, uint32_t* pSurfaceFormatCount, VkSurfaceFormatKHR* pSurfaceFormats);
typedef VkResult (VKAPI_PTR *PFN_vkGetPhysicalDeviceSurfacePresentModesKHR)(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface, uint32_t* pPresentModeCount, VkPresentModeKHR* pPresentModes);

#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR void VKAPI_CALL vkDestroySurfaceKHR(VkInstance instance, VkSurfaceKHR surface, const VkAllocationCallbacks* pAllocator);
VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceSupportKHR(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, VkSurfaceKHR surface, VkBool32* pSupported);
VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceCapabilitiesKHR(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface, VkSurfaceCapabilitiesKHR* pSurfaceCapabilities);
VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceFormatsKHR(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface, uint32_t* pSurfaceFormatCount, VkSurfaceFormatKHR* pSurfaceFormats);
VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfacePresentModesKHR(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface, uint32_t* pPresentModeCount, VkPresentModeKHR* pPresentModes);
#endif

#define VK_KHR_swapchain 1
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkSwapchainKHR)

#define VK_KHR_SWAPCHAIN_SPEC_VERSION 67
#define VK_KHR_SWAPCHAIN_EXTENSION_NAME "VK_KHR_swapchain"

typedef VkFlags VkSwapchainCreateFlagsKHR;

typedef struct VkSwapchainCreateInfoKHR {
    VkStructureType sType;
    const void* pNext;
    VkSwapchainCreateFlagsKHR flags;
    VkSurfaceKHR surface;
    uint32_t minImageCount;
    VkFormat imageFormat;
    VkColorSpaceKHR imageColorSpace;
    VkExtent2D imageExtent;
    uint32_t imageArrayLayers;
    VkImageUsageFlags imageUsage;
    VkSharingMode imageSharingMode;
    uint32_t queueFamilyIndexCount;
    const uint32_t* pQueueFamilyIndices;
    VkSurfaceTransformFlagBitsKHR preTransform;
    VkCompositeAlphaFlagBitsKHR compositeAlpha;
    VkPresentModeKHR presentMode;
    VkBool32 clipped;
    VkSwapchainKHR oldSwapchain;
} VkSwapchainCreateInfoKHR;

typedef struct VkPresentInfoKHR {
    VkStructureType sType;
    const void* pNext;
    uint32_t waitSemaphoreCount;
    const VkSemaphore* pWaitSemaphores;
    uint32_t swapchainCount;
    const VkSwapchainKHR* pSwapchains;
    const uint32_t* pImageIndices;
    VkResult* pResults;
} VkPresentInfoKHR;

typedef VkResult (VKAPI_PTR *PFN_vkCreateSwapchainKHR)(VkDevice device, const VkSwapchainCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSwapchainKHR* pSwapchain);
typedef void (VKAPI_PTR *PFN_vkDestroySwapchainKHR)(VkDevice device, VkSwapchainKHR swapchain, const VkAllocationCallbacks* pAllocator);
typedef VkResult (VKAPI_PTR *PFN_vkGetSwapchainImagesKHR)(VkDevice device, VkSwapchainKHR swapchain, uint32_t* pSwapchainImageCount, VkImage* pSwapchainImages);
typedef VkResult (VKAPI_PTR *PFN_vkAcquireNextImageKHR)(VkDevice device, VkSwapchainKHR swapchain, uint64_t timeout, VkSemaphore semaphore, VkFence fence, uint32_t* pImageIndex);
typedef VkResult (VKAPI_PTR *PFN_vkQueuePresentKHR)(VkQueue queue, const VkPresentInfoKHR* pPresentInfo);

#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateSwapchainKHR(VkDevice device, const VkSwapchainCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSwapchainKHR* pSwapchain);
VKAPI_ATTR void VKAPI_CALL vkDestroySwapchainKHR(VkDevice device, VkSwapchainKHR swapchain, const VkAllocationCallbacks* pAllocator);
VKAPI_ATTR VkResult VKAPI_CALL vkGetSwapchainImagesKHR(VkDevice device, VkSwapchainKHR swapchain, uint32_t* pSwapchainImageCount, VkImage* pSwapchainImages);
VKAPI_ATTR VkResult VKAPI_CALL vkAcquireNextImageKHR(VkDevice device, VkSwapchainKHR swapchain, uint64_t timeout, VkSemaphore semaphore, VkFence fence, uint32_t* pImageIndex);
VKAPI_ATTR VkResult VKAPI_CALL vkQueuePresentKHR(VkQueue queue, const VkPresentInfoKHR* pPresentInfo);
#endif

#define VK_KHR_display 1
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkDisplayKHR)
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkDisplayModeKHR)

#define VK_KHR_DISPLAY_SPEC_VERSION 21
#define VK_KHR_DISPLAY_EXTENSION_NAME "VK_KHR_display"

typedef enum VkDisplayPlaneAlphaFlagBitsKHR {
    VK_DISPLAY_PLANE_ALPHA_OPAQUE_BIT_KHR = 0x00000001,
    VK_DISPLAY_PLANE_ALPHA_GLOBAL_BIT_KHR = 0x00000002,
    VK_DISPLAY_PLANE_ALPHA_PER_PIXEL_BIT_KHR = 0x00000004,
    VK_DISPLAY_PLANE_ALPHA_PER_PIXEL_PREMULTIPLIED_BIT_KHR = 0x00000008,
    VK_DISPLAY_PLANE_ALPHA_FLAG_BITS_MAX_ENUM_KHR = 0x7FFFFFFF
} VkDisplayPlaneAlphaFlagBitsKHR;
typedef VkFlags VkDisplayPlaneAlphaFlagsKHR;
typedef VkFlags VkDisplayModeCreateFlagsKHR;
typedef VkFlags VkDisplaySurfaceCreateFlagsKHR;

typedef struct VkDisplayPropertiesKHR {
    VkDisplayKHR display;
    const char* displayName;
    VkExtent2D physicalDimensions;
    VkExtent2D physicalResolution;
    VkSurfaceTransformFlagsKHR supportedTransforms;
    VkBool32 planeReorderPossible;
    VkBool32 persistentContent;
} VkDisplayPropertiesKHR;

typedef struct VkDisplayModeParametersKHR {
    VkExtent2D visibleRegion;
    uint32_t refreshRate;
} VkDisplayModeParametersKHR;

typedef struct VkDisplayModePropertiesKHR {
    VkDisplayModeKHR displayMode;
    VkDisplayModeParametersKHR parameters;
} VkDisplayModePropertiesKHR;

typedef struct VkDisplayModeCreateInfoKHR {
    VkStructureType sType;
    const void* pNext;
    VkDisplayModeCreateFlagsKHR flags;
    VkDisplayModeParametersKHR parameters;
} VkDisplayModeCreateInfoKHR;

typedef struct VkDisplayPlaneCapabilitiesKHR {
    VkDisplayPlaneAlphaFlagsKHR supportedAlpha;
    VkOffset2D minSrcPosition;
    VkOffset2D maxSrcPosition;
    VkExtent2D minSrcExtent;
    VkExtent2D maxSrcExtent;
    VkOffset2D minDstPosition;
    VkOffset2D maxDstPosition;
    VkExtent2D minDstExtent;
    VkExtent2D maxDstExtent;
} VkDisplayPlaneCapabilitiesKHR;

typedef struct VkDisplayPlanePropertiesKHR {
    VkDisplayKHR currentDisplay;
    uint32_t currentStackIndex;
} VkDisplayPlanePropertiesKHR;

typedef struct VkDisplaySurfaceCreateInfoKHR {
    VkStructureType sType;
    const void* pNext;
    VkDisplaySurfaceCreateFlagsKHR flags;
    VkDisplayModeKHR displayMode;
    uint32_t planeIndex;
    uint32_t planeStackIndex;
    VkSurfaceTransformFlagBitsKHR transform;
    float globalAlpha;
    VkDisplayPlaneAlphaFlagBitsKHR alphaMode;
    VkExtent2D imageExtent;
} VkDisplaySurfaceCreateInfoKHR;

typedef VkResult (VKAPI_PTR *PFN_vkGetPhysicalDeviceDisplayPropertiesKHR)(VkPhysicalDevice physicalDevice, uint32_t* pPropertyCount, VkDisplayPropertiesKHR* pProperties);
typedef VkResult (VKAPI_PTR *PFN_vkGetPhysicalDeviceDisplayPlanePropertiesKHR)(VkPhysicalDevice physicalDevice, uint32_t* pPropertyCount, VkDisplayPlanePropertiesKHR* pProperties);
typedef VkResult (VKAPI_PTR *PFN_vkGetDisplayPlaneSupportedDisplaysKHR)(VkPhysicalDevice physicalDevice, uint32_t planeIndex, uint32_t* pDisplayCount, VkDisplayKHR* pDisplays);
typedef VkResult (VKAPI_PTR *PFN_vkGetDisplayModePropertiesKHR)(VkPhysicalDevice physicalDevice, VkDisplayKHR display, uint32_t* pPropertyCount, VkDisplayModePropertiesKHR* pProperties);
typedef VkResult (VKAPI_PTR *PFN_vkCreateDisplayModeKHR)(VkPhysicalDevice physicalDevice, VkDisplayKHR display, const VkDisplayModeCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDisplayModeKHR* pMode);
typedef VkResult (VKAPI_PTR *PFN_vkGetDisplayPlaneCapabilitiesKHR)(VkPhysicalDevice physicalDevice, VkDisplayModeKHR mode, uint32_t planeIndex, VkDisplayPlaneCapabilitiesKHR* pCapabilities);
typedef VkResult (VKAPI_PTR *PFN_vkCreateDisplayPlaneSurfaceKHR)(VkInstance instance, const VkDisplaySurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);

#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceDisplayPropertiesKHR(VkPhysicalDevice physicalDevice, uint32_t* pPropertyCount, VkDisplayPropertiesKHR* pProperties);
VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceDisplayPlanePropertiesKHR(VkPhysicalDevice physicalDevice, uint32_t* pPropertyCount, VkDisplayPlanePropertiesKHR* pProperties);
VKAPI_ATTR VkResult VKAPI_CALL vkGetDisplayPlaneSupportedDisplaysKHR(VkPhysicalDevice physicalDevice, uint32_t planeIndex, uint32_t* pDisplayCount, VkDisplayKHR* pDisplays);
VKAPI_ATTR VkResult VKAPI_CALL vkGetDisplayModePropertiesKHR(VkPhysicalDevice physicalDevice, VkDisplayKHR display, uint32_t* pPropertyCount, VkDisplayModePropertiesKHR* pProperties);
VKAPI_ATTR VkResult VKAPI_CALL vkCreateDisplayModeKHR(VkPhysicalDevice physicalDevice, VkDisplayKHR display, const VkDisplayModeCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDisplayModeKHR* pMode);
VKAPI_ATTR VkResult VKAPI_CALL vkGetDisplayPlaneCapabilitiesKHR(VkPhysicalDevice physicalDevice, VkDisplayModeKHR mode, uint32_t planeIndex, VkDisplayPlaneCapabilitiesKHR* pCapabilities);
VKAPI_ATTR VkResult VKAPI_CALL vkCreateDisplayPlaneSurfaceKHR(VkInstance instance, const VkDisplaySurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
#endif

#define VK_KHR_display_swapchain 1
#define VK_KHR_DISPLAY_SWAPCHAIN_SPEC_VERSION 9
#define VK_KHR_DISPLAY_SWAPCHAIN_EXTENSION_NAME "VK_KHR_display_swapchain"

typedef struct VkDisplayPresentInfoKHR {
    VkStructureType sType;
    const void* pNext;
    VkRect2D srcRect;
    VkRect2D dstRect;
    VkBool32 persistent;
} VkDisplayPresentInfoKHR;

typedef VkResult (VKAPI_PTR *PFN_vkCreateSharedSwapchainsKHR)(VkDevice device, uint32_t swapchainCount, const VkSwapchainCreateInfoKHR* pCreateInfos, const VkAllocationCallbacks* pAllocator, VkSwapchainKHR* pSwapchains);

#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateSharedSwapchainsKHR(VkDevice device, uint32_t swapchainCount, const VkSwapchainCreateInfoKHR* pCreateInfos, const VkAllocationCallbacks* pAllocator, VkSwapchainKHR* pSwapchains);
#endif

#ifdef VK_USE_PLATFORM_XLIB_KHR
#define VK_KHR_xlib_surface 1
#include <X11/Xlib.h>

#define VK_KHR_XLIB_SURFACE_SPEC_VERSION 6
#define VK_KHR_XLIB_SURFACE_EXTENSION_NAME "VK_KHR_xlib_surface"

typedef VkFlags VkXlibSurfaceCreateFlagsKHR;

typedef struct VkXlibSurfaceCreateInfoKHR {
    VkStructureType sType;
    const void* pNext;
    VkXlibSurfaceCreateFlagsKHR flags;
    Display* dpy;
    Window window;
} VkXlibSurfaceCreateInfoKHR;

typedef VkResult (VKAPI_PTR *PFN_vkCreateXlibSurfaceKHR)(VkInstance instance, const VkXlibSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
typedef VkBool32 (VKAPI_PTR *PFN_vkGetPhysicalDeviceXlibPresentationSupportKHR)(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, Display* dpy, VisualID visualID);

#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateXlibSurfaceKHR(VkInstance instance, const VkXlibSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceXlibPresentationSupportKHR(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, Display* dpy, VisualID visualID);
#endif
#endif /* VK_USE_PLATFORM_XLIB_KHR */

#ifdef VK_USE_PLATFORM_XCB_KHR
#define VK_KHR_xcb_surface 1
#include <xcb/xcb.h>

#define VK_KHR_XCB_SURFACE_SPEC_VERSION 6
#define VK_KHR_XCB_SURFACE_EXTENSION_NAME "VK_KHR_xcb_surface"

typedef VkFlags VkXcbSurfaceCreateFlagsKHR;

typedef struct VkXcbSurfaceCreateInfoKHR {
    VkStructureType sType;
    const void* pNext;
    VkXcbSurfaceCreateFlagsKHR flags;
    xcb_connection_t* connection;
    xcb_window_t window;
} VkXcbSurfaceCreateInfoKHR;

typedef VkResult (VKAPI_PTR *PFN_vkCreateXcbSurfaceKHR)(VkInstance instance, const VkXcbSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
typedef VkBool32 (VKAPI_PTR *PFN_vkGetPhysicalDeviceXcbPresentationSupportKHR)(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, xcb_connection_t* connection, xcb_visualid_t visual_id);

#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateXcbSurfaceKHR(VkInstance instance, const VkXcbSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceXcbPresentationSupportKHR(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, xcb_connection_t* connection, xcb_visualid_t visual_id);
#endif
#endif /* VK_USE_PLATFORM_XCB_KHR */

#ifdef VK_USE_PLATFORM_WAYLAND_KHR
#define VK_KHR_wayland_surface 1
#include <wayland-client.h>

#define VK_KHR_WAYLAND_SURFACE_SPEC_VERSION 5
#define VK_KHR_WAYLAND_SURFACE_EXTENSION_NAME "VK_KHR_wayland_surface"

typedef VkFlags VkWaylandSurfaceCreateFlagsKHR;

typedef struct VkWaylandSurfaceCreateInfoKHR {
    VkStructureType sType;
    const void* pNext;
    VkWaylandSurfaceCreateFlagsKHR flags;
    struct wl_display* display;
    struct wl_surface* surface;
} VkWaylandSurfaceCreateInfoKHR;

typedef VkResult (VKAPI_PTR *PFN_vkCreateWaylandSurfaceKHR)(VkInstance instance, const VkWaylandSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
typedef VkBool32 (VKAPI_PTR *PFN_vkGetPhysicalDeviceWaylandPresentationSupportKHR)(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, struct wl_display* display);

#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateWaylandSurfaceKHR(VkInstance instance, const VkWaylandSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceWaylandPresentationSupportKHR(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, struct wl_display* display);
#endif
#endif /* VK_USE_PLATFORM_WAYLAND_KHR */

#ifdef VK_USE_PLATFORM_MIR_KHR
#define VK_KHR_mir_surface 1
#include <mir_toolkit/client_types.h>

#define VK_KHR_MIR_SURFACE_SPEC_VERSION 4
#define VK_KHR_MIR_SURFACE_EXTENSION_NAME "VK_KHR_mir_surface"

typedef VkFlags VkMirSurfaceCreateFlagsKHR;

typedef struct VkMirSurfaceCreateInfoKHR {
    VkStructureType sType;
    const void* pNext;
    VkMirSurfaceCreateFlagsKHR flags;
    MirConnection* connection;
    MirSurface* mirSurface;
} VkMirSurfaceCreateInfoKHR;

typedef VkResult (VKAPI_PTR *PFN_vkCreateMirSurfaceKHR)(VkInstance instance, const VkMirSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
typedef VkBool32 (VKAPI_PTR *PFN_vkGetPhysicalDeviceMirPresentationSupportKHR)(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, MirConnection* connection);

#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateMirSurfaceKHR(VkInstance instance, const VkMirSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceMirPresentationSupportKHR(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, MirConnection* connection);
#endif
#endif /* VK_USE_PLATFORM_MIR_KHR */

#ifdef VK_USE_PLATFORM_ANDROID_KHR
#define VK_KHR_android_surface 1
#include <android/native_window.h>

#define VK_KHR_ANDROID_SURFACE_SPEC_VERSION 6
#define VK_KHR_ANDROID_SURFACE_EXTENSION_NAME "VK_KHR_android_surface"

typedef VkFlags VkAndroidSurfaceCreateFlagsKHR;

typedef struct VkAndroidSurfaceCreateInfoKHR {
    VkStructureType sType;
    const void* pNext;
    VkAndroidSurfaceCreateFlagsKHR flags;
    ANativeWindow* window;
} VkAndroidSurfaceCreateInfoKHR;

typedef VkResult (VKAPI_PTR *PFN_vkCreateAndroidSurfaceKHR)(VkInstance instance, const VkAndroidSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);

#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateAndroidSurfaceKHR(VkInstance instance, const VkAndroidSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
#endif
#endif /* VK_USE_PLATFORM_ANDROID_KHR */

#ifdef VK_USE_PLATFORM_WIN32_KHR
#define VK_KHR_win32_surface 1
#include <windows.h>

#define VK_KHR_WIN32_SURFACE_SPEC_VERSION 5
#define VK_KHR_WIN32_SURFACE_EXTENSION_NAME "VK_KHR_win32_surface"

typedef VkFlags VkWin32SurfaceCreateFlagsKHR;

typedef struct VkWin32SurfaceCreateInfoKHR {
    VkStructureType sType;
    const void* pNext;
    VkWin32SurfaceCreateFlagsKHR flags;
    HINSTANCE hinstance;
    HWND hwnd;
} VkWin32SurfaceCreateInfoKHR;

typedef VkResult (VKAPI_PTR *PFN_vkCreateWin32SurfaceKHR)(VkInstance instance, const VkWin32SurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
typedef VkBool32 (VKAPI_PTR *PFN_vkGetPhysicalDeviceWin32PresentationSupportKHR)(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex);

#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateWin32SurfaceKHR(VkInstance instance, const VkWin32SurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceWin32PresentationSupportKHR(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex);
#endif
#endif /* VK_USE_PLATFORM_WIN32_KHR */

#define VK_KHR_sampler_mirror_clamp_to_edge 1
#define VK_KHR_SAMPLER_MIRROR_CLAMP_TO_EDGE_SPEC_VERSION 1
#define VK_KHR_SAMPLER_MIRROR_CLAMP_TO_EDGE_EXTENSION_NAME "VK_KHR_sampler_mirror_clamp_to_edge"

#define VK_EXT_debug_report 1
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkDebugReportCallbackEXT)

#define VK_EXT_DEBUG_REPORT_SPEC_VERSION 2
#define VK_EXT_DEBUG_REPORT_EXTENSION_NAME "VK_EXT_debug_report"
#define VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT VK_STRUCTURE_TYPE_DEBUG_REPORT_CALLBACK_CREATE_INFO_EXT

typedef enum VkDebugReportObjectTypeEXT {
    VK_DEBUG_REPORT_OBJECT_TYPE_UNKNOWN_EXT = 0,
    VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT = 1,
    VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT = 2,
    VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT = 3,
    VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT = 4,
    VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT = 5,
    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT = 6,
    VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT = 7,
    VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT = 8,
    VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT = 9,
    VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT = 10,
    VK_DEBUG_REPORT_OBJECT_TYPE_EVENT_EXT = 11,
    VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT = 12,
    VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_VIEW_EXT = 13,
    VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT = 14,
    VK_DEBUG_REPORT_OBJECT_TYPE_SHADER_MODULE_EXT = 15,
    VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_CACHE_EXT = 16,
    VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_LAYOUT_EXT = 17,
    VK_DEBUG_REPORT_OBJECT_TYPE_RENDER_PASS_EXT = 18,
    VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT = 19,
    VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT = 20,
    VK_DEBUG_REPORT_OBJECT_TYPE_SAMPLER_EXT = 21,
    VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT = 22,
    VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT = 23,
    VK_DEBUG_REPORT_OBJECT_TYPE_FRAMEBUFFER_EXT = 24,
    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_POOL_EXT = 25,
    VK_DEBUG_REPORT_OBJECT_TYPE_SURFACE_KHR_EXT = 26,
    VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT = 27,
    VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT = 28,
    VK_DEBUG_REPORT_OBJECT_TYPE_BEGIN_RANGE_EXT = VK_DEBUG_REPORT_OBJECT_TYPE_UNKNOWN_EXT,
    VK_DEBUG_REPORT_OBJECT_TYPE_END_RANGE_EXT = VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT,
    VK_DEBUG_REPORT_OBJECT_TYPE_RANGE_SIZE_EXT = (VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT - VK_DEBUG_REPORT_OBJECT_TYPE_UNKNOWN_EXT + 1),
    VK_DEBUG_REPORT_OBJECT_TYPE_MAX_ENUM_EXT = 0x7FFFFFFF
} VkDebugReportObjectTypeEXT;

typedef enum VkDebugReportErrorEXT {
    VK_DEBUG_REPORT_ERROR_NONE_EXT = 0,
    VK_DEBUG_REPORT_ERROR_CALLBACK_REF_EXT = 1,
    VK_DEBUG_REPORT_ERROR_BEGIN_RANGE_EXT = VK_DEBUG_REPORT_ERROR_NONE_EXT,
    VK_DEBUG_REPORT_ERROR_END_RANGE_EXT = VK_DEBUG_REPORT_ERROR_CALLBACK_REF_EXT,
    VK_DEBUG_REPORT_ERROR_RANGE_SIZE_EXT = (VK_DEBUG_REPORT_ERROR_CALLBACK_REF_EXT - VK_DEBUG_REPORT_ERROR_NONE_EXT + 1),
    VK_DEBUG_REPORT_ERROR_MAX_ENUM_EXT = 0x7FFFFFFF
} VkDebugReportErrorEXT;

typedef enum VkDebugReportFlagBitsEXT {
    VK_DEBUG_REPORT_INFORMATION_BIT_EXT = 0x00000001,
    VK_DEBUG_REPORT_WARNING_BIT_EXT = 0x00000002,
    VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT = 0x00000004,
    VK_DEBUG_REPORT_ERROR_BIT_EXT = 0x00000008,
    VK_DEBUG_REPORT_DEBUG_BIT_EXT = 0x00000010,
    VK_DEBUG_REPORT_FLAG_BITS_MAX_ENUM_EXT = 0x7FFFFFFF
} VkDebugReportFlagBitsEXT;
typedef VkFlags VkDebugReportFlagsEXT;

typedef VkBool32 (VKAPI_PTR *PFN_vkDebugReportCallbackEXT)(VkDebugReportFlagsEXT flags, VkDebugReportObjectTypeEXT objectType, uint64_t object, size_t location, int32_t messageCode, const char* pLayerPrefix, const char* pMessage, void* pUserData);

typedef struct VkDebugReportCallbackCreateInfoEXT {
    VkStructureType sType;
    const void* pNext;
    VkDebugReportFlagsEXT flags;
    PFN_vkDebugReportCallbackEXT pfnCallback;
    void* pUserData;
} VkDebugReportCallbackCreateInfoEXT;

typedef VkResult (VKAPI_PTR *PFN_vkCreateDebugReportCallbackEXT)(VkInstance instance, const VkDebugReportCallbackCreateInfoEXT* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDebugReportCallbackEXT* pCallback);
typedef void (VKAPI_PTR *PFN_vkDestroyDebugReportCallbackEXT)(VkInstance instance, VkDebugReportCallbackEXT callback, const VkAllocationCallbacks* pAllocator);
typedef void (VKAPI_PTR *PFN_vkDebugReportMessageEXT)(VkInstance instance, VkDebugReportFlagsEXT flags, VkDebugReportObjectTypeEXT objectType, uint64_t object, size_t location, int32_t messageCode, const char* pLayerPrefix, const char* pMessage);

#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateDebugReportCallbackEXT(VkInstance instance, const VkDebugReportCallbackCreateInfoEXT* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDebugReportCallbackEXT* pCallback);
VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(VkInstance instance, VkDebugReportCallbackEXT callback, const VkAllocationCallbacks* pAllocator);
VKAPI_ATTR void VKAPI_CALL vkDebugReportMessageEXT(VkInstance instance,
    VkDebugReportFlagsEXT flags,
    VkDebugReportObjectTypeEXT objectType,
    uint64_t object,
    size_t location,
    int32_t messageCode,
    const char* pLayerPrefix,
    const char* pMessage);
#endif

#define VK_NV_glsl_shader 1
#define VK_NV_GLSL_SHADER_SPEC_VERSION 1
#define VK_NV_GLSL_SHADER_EXTENSION_NAME "VK_NV_glsl_shader"

#define VK_IMG_filter_cubic 1
#define VK_IMG_FILTER_CUBIC_SPEC_VERSION 1
#define VK_IMG_FILTER_CUBIC_EXTENSION_NAME "VK_IMG_filter_cubic"

#ifdef __cplusplus
}
#endif

#endif
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/000077500000000000000000000000001270147354000222215ustar00rootroot00000000000000Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/.clang-format000066400000000000000000000002051270147354000245710ustar00rootroot00000000000000---
# We'll use defaults from the LLVM style, but with 4 columns indentation.
BasedOnStyle: LLVM
IndentWidth: 4
ColumnLimit: 132
...
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/CMakeLists.txt000066400000000000000000000146271270147354000247710ustar00rootroot00000000000000cmake_minimum_required (VERSION 2.8.11)

macro(run_vk_helper subcmd)
    add_custom_command(OUTPUT ${ARGN}
        COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/vk_helper.py --${subcmd} ${PROJECT_SOURCE_DIR}/include/vulkan/vulkan.h --abs_out_dir ${CMAKE_CURRENT_BINARY_DIR}
        DEPENDS ${PROJECT_SOURCE_DIR}/vk_helper.py ${PROJECT_SOURCE_DIR}/include/vulkan/vulkan.h
    )
endmacro()

macro(run_vk_layer_generate subcmd output)
    add_custom_command(OUTPUT ${output}
        COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/vk-layer-generate.py ${DisplayServer} ${subcmd} ${PROJECT_SOURCE_DIR}/include/vulkan/vulkan.h > ${output}
        DEPENDS ${PROJECT_SOURCE_DIR}/vk-layer-generate.py ${PROJECT_SOURCE_DIR}/include/vulkan/vulkan.h ${PROJECT_SOURCE_DIR}/vulkan.py
    )
endmacro()

macro(run_vk_layer_xml_generate subcmd output)
    add_custom_command(OUTPUT ${output}
        COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/genvk.py -registry ${PROJECT_SOURCE_DIR}/vk.xml ${output}
        DEPENDS ${PROJECT_SOURCE_DIR}/vk.xml
${PROJECT_SOURCE_DIR}/generator.py ${PROJECT_SOURCE_DIR}/genvk.py ${PROJECT_SOURCE_DIR}/reg.py
    )
endmacro()

set(LAYER_JSON_FILES
    VkLayer_core_validation
    VkLayer_image
    VkLayer_object_tracker
    VkLayer_unique_objects
    VkLayer_parameter_validation
    VkLayer_swapchain
    VkLayer_threading
    VkLayer_device_limits
    )

set(VK_LAYER_RPATH /usr/lib/x86_64-linux-gnu/vulkan/layer:/usr/lib/i386-linux-gnu/vulkan/layer)
set(CMAKE_INSTALL_RPATH ${VK_LAYER_RPATH})

if (NOT WIN32)
    # extra setup for out-of-tree builds
    if (NOT (CMAKE_CURRENT_SOURCE_DIR STREQUAL CMAKE_CURRENT_BINARY_DIR))
        foreach (config_file ${LAYER_JSON_FILES})
            add_custom_target(${config_file}-json ALL
                COMMAND ln -sf ${CMAKE_CURRENT_SOURCE_DIR}/linux/${config_file}.json
                VERBATIM
                )
        endforeach(config_file)
    endif()
else()
    if (NOT (CMAKE_CURRENT_SOURCE_DIR STREQUAL CMAKE_CURRENT_BINARY_DIR))
        foreach (config_file ${LAYER_JSON_FILES})
            FILE(TO_NATIVE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/windows/${config_file}.json src_json)
            FILE(TO_NATIVE_PATH ${CMAKE_CURRENT_BINARY_DIR}/$<CONFIGURATION>/${config_file}.json dst_json)
            add_custom_target(${config_file}-json ALL
                COMMAND copy ${src_json} ${dst_json}
                VERBATIM
                )
        endforeach(config_file)
    endif()
endif()

if (WIN32)
    macro(add_vk_layer target)
        add_custom_command(OUTPUT VkLayer_${target}.def
            COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/vk-generate.py ${DisplayServer} win-def-file VkLayer_${target} layer > VkLayer_${target}.def
            DEPENDS ${PROJECT_SOURCE_DIR}/vk-generate.py ${PROJECT_SOURCE_DIR}/vk.py
            )
        add_library(VkLayer_${target} SHARED ${ARGN} VkLayer_${target}.def)
        target_link_libraries(VkLayer_${target} layer_utils)
        add_dependencies(VkLayer_${target} layer_utils_static)
        add_dependencies(VkLayer_${target} generate_vk_layer_helpers)
        set_target_properties(VkLayer_${target} PROPERTIES LINK_FLAGS "/DEF:${CMAKE_CURRENT_BINARY_DIR}/VkLayer_${target}.def")
    endmacro()
else()
    macro(add_vk_layer target)
        add_library(VkLayer_${target} SHARED ${ARGN})
        target_link_libraries(VkLayer_${target} layer_utils)
        add_dependencies(VkLayer_${target} generate_vk_layer_helpers)
        set_target_properties(VkLayer_${target} PROPERTIES LINK_FLAGS "-Wl,-Bsymbolic")
        install(TARGETS VkLayer_${target} DESTINATION ${PROJECT_BINARY_DIR}/install_staging)
    endmacro()
endif()

include_directories(
    ${CMAKE_CURRENT_SOURCE_DIR}
    ${CMAKE_CURRENT_SOURCE_DIR}/../loader
    ${CMAKE_CURRENT_SOURCE_DIR}/../include/vulkan
    ${CMAKE_CURRENT_BINARY_DIR}
    ${GLSLANG_SPIRV_INCLUDE_DIR}
    )

if (WIN32)
    set (CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -D_CRT_SECURE_NO_WARNINGS")
    set (CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -D_CRT_SECURE_NO_WARNINGS")
    set (CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -D_CRT_SECURE_NO_WARNINGS /bigobj")
    set (CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG} -D_CRT_SECURE_NO_WARNINGS /bigobj")
endif()
if (NOT WIN32)
    set (CMAKE_CXX_FLAGS "-std=c++11")
    set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wpointer-arith")
    set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wpointer-arith")
endif()

add_custom_command(OUTPUT vk_dispatch_table_helper.h
    COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/vk-generate.py ${DisplayServer} dispatch-table-ops layer > vk_dispatch_table_helper.h
    DEPENDS ${PROJECT_SOURCE_DIR}/vk-generate.py ${PROJECT_SOURCE_DIR}/vulkan.py)

run_vk_helper(gen_enum_string_helper vk_enum_string_helper.h)
run_vk_helper(gen_struct_wrappers
    vk_struct_string_helper.h
    vk_struct_string_helper_cpp.h
    vk_struct_string_helper_no_addr.h
    vk_struct_string_helper_no_addr_cpp.h
    vk_struct_size_helper.h
    vk_struct_size_helper.c
    vk_struct_wrappers.h
    vk_struct_wrappers.cpp
    vk_safe_struct.h
    vk_safe_struct.cpp
    )

add_custom_target(generate_vk_layer_helpers DEPENDS
    vk_dispatch_table_helper.h
    vk_enum_string_helper.h
    vk_struct_string_helper.h
    vk_struct_string_helper_no_addr.h
    vk_struct_string_helper_cpp.h
    vk_struct_string_helper_no_addr_cpp.h
    vk_struct_size_helper.h
    vk_struct_size_helper.c
    vk_struct_wrappers.h
    vk_struct_wrappers.cpp
    vk_safe_struct.h
    vk_safe_struct.cpp
    )

run_vk_layer_generate(object_tracker
object_tracker.cpp)
run_vk_layer_xml_generate(Threading thread_check.h)
run_vk_layer_generate(unique_objects unique_objects.cpp)
run_vk_layer_xml_generate(ParamChecker parameter_validation.h)

add_library(layer_utils SHARED vk_layer_config.cpp vk_layer_extension_utils.cpp vk_layer_utils.cpp)
if (WIN32)
    add_library(layer_utils_static STATIC vk_layer_config.cpp vk_layer_extension_utils.cpp vk_layer_utils.cpp)
    set_target_properties(layer_utils_static PROPERTIES OUTPUT_NAME layer_utils)
    target_link_libraries(layer_utils)
else()
    install(TARGETS layer_utils DESTINATION ${PROJECT_BINARY_DIR}/install_staging)
endif()

add_vk_layer(core_validation core_validation.cpp vk_layer_table.cpp)
add_vk_layer(device_limits device_limits.cpp vk_layer_table.cpp vk_layer_utils.cpp)
add_vk_layer(image image.cpp vk_layer_table.cpp)
add_vk_layer(swapchain swapchain.cpp vk_layer_table.cpp)
# generated
add_vk_layer(object_tracker object_tracker.cpp vk_layer_table.cpp)
add_vk_layer(threading threading.cpp thread_check.h vk_layer_table.cpp)
add_vk_layer(unique_objects unique_objects.cpp vk_layer_table.cpp vk_safe_struct.cpp)
add_vk_layer(parameter_validation parameter_validation.cpp parameter_validation.h vk_layer_table.cpp)
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/README.md000066400000000000000000000201121270147354000234760ustar00rootroot00000000000000# Layer Description and Status

## Overview

Layer libraries can be written to intercept or hook VK entry points for various debug and validation purposes. One or more VK entry points can be defined in your layer library. Undefined entry points in the layer library will be passed on to the next layer, which may be the driver. Multiple layer libraries can be chained (actually a hierarchy) together. vkEnumerateInstanceLayerProperties and vkEnumerateDeviceLayerProperties can be called to list the available layers and their properties. Layers can intercept Vulkan instance level entry points, in which case they are called an Instance Layer.
Layers can intercept device entry points, in which case they are called a Device Layer. Instance level entry points are those with VkInstance or VkPhysicalDevice as the first parameter. Device level entry points are those with VkDevice, VkCommandBuffer, or VkQueue as the first parameter. Layers that want to intercept both instance and device level entry points are called Global Layers. vkGetInstanceProcAddr and vkGetDeviceProcAddr are used internally by the layers and loader to initialize dispatch tables. Device Layers are activated at vkCreateDevice time. Instance Layers are activated at vkCreateInstance time. Layers can also be activated via environment variables (VK_INSTANCE_LAYERS or VK_DEVICE_LAYERS).

All validation layers work with the DEBUG_REPORT extension to provide the application or user with validation feedback. When a validation layer is enabled, it reads the vk_layer_settings.txt file to determine its behavior, such as whether to send output to a file, stdout, or debug output (Windows). An application can also register callback functions via the DEBUG_REPORT extension to receive callbacks when the requested validation events happen. Application callbacks happen regardless of the settings in vk_layer_settings.txt.

### Layer library example code

Note that some layers are code-generated and will therefore exist in the directory (build_dir)/layers.

- include/vkLayer.h - header file for layer code.

### Layer Details

For complete details of the current validation layers, including all of the validation checks that they perform, please refer to the document layers/vk_validation_layer_details.md. Below is a brief overview of each layer.

### Standard Validation

This is a meta-layer managed by the loader.
(name = VK_LAYER_LUNARG_standard_validation) - specifying this layer name will cause the loader to load all of the standard validation layers (listed below) in the following optimal order: VK_LAYER_GOOGLE_threading, VK_LAYER_LUNARG_parameter_validation, VK_LAYER_LUNARG_device_limits, VK_LAYER_LUNARG_object_tracker, VK_LAYER_LUNARG_image, VK_LAYER_LUNARG_core_validation, VK_LAYER_LUNARG_swapchain, and VK_LAYER_GOOGLE_unique_objects. Other layers can be specified and the loader will remove duplicates.

### Print Object Stats

(build dir)/layers/object_tracker.cpp (name=VK_LAYER_LUNARG_object_tracker) - Track object creation, use, and destruction. As objects are created, they're stored in a map. As objects are used, the layer verifies they exist in the map, flagging errors for unknown objects. As objects are destroyed, they're removed from the map. At vkDestroyDevice() and vkDestroyInstance() time, if any objects have not been destroyed, they are reported as leaked objects. If a Dbg callback function is registered, this layer uses callback function(s) for reporting; otherwise it uses stdout.

### Validate API State and Shaders

layers/core\_validation.cpp (name=VK\_LAYER\_LUNARG\_core\_validation) - The core\_validation layer does the bulk of the API validation that requires storing state. Some of the state it tracks includes Descriptor Sets, Pipeline State, Shaders, dynamic state, and memory objects and bindings. It performs some point validation as states are created and used, and further validation at Draw call and QueueSubmit time. Of primary interest is making sure that the resources bound to Descriptor Sets correctly align with the layout specified for the Set. Also, all of the image and buffer layouts are validated to make sure explicit layout transitions are properly managed. Related to memory, core\_validation includes tracking object bindings, memory hazards, and memory object lifetimes.
It also validates several other hazard-related issues related to command buffers, fences, and memory mapping. Additionally, core\_validation includes shader validation (formerly the separate shader\_checker layer) that inspects the SPIR-V shader images and fixed function pipeline stages at PSO creation time. It flags errors when inconsistencies are found across interfaces between shader stages. The exact behavior of the checks depends on the pair of pipeline stages involved. If a Dbg callback function is registered, this layer uses callback function(s) for reporting; otherwise it uses stdout.

### Check parameters

layers/parameter_validation.cpp (name=VK_LAYER_LUNARG_parameter_validation) - Check the input parameters to API calls for validity. If a Dbg callback function is registered, this layer uses callback function(s) for reporting; otherwise it uses stdout.

### Image parameters

layers/image.cpp (name=VK_LAYER_LUNARG_image) - The image layer is intended to validate image parameters, formats, and correct use. Images are a significant enough area that they were given a separate layer. If a Dbg callback function is registered, this layer uses callback function(s) for reporting; otherwise it uses stdout.

### Check threading

layers/threading.cpp (name=VK_LAYER_GOOGLE_threading) - Check multithreading of API calls for validity. Currently this checks that only one thread at a time uses an object in free-threaded API calls. If a Dbg callback function is registered, this layer uses callback function(s) for reporting; otherwise it uses stdout.

### Swapchain

layers/swapchain.cpp (name=VK_LAYER_LUNARG_swapchain) - Check that WSI extensions are being used correctly.

### Device Limitations

layers/device_limits.cpp (name=VK_LAYER_LUNARG_device_limits) - This layer is intended to capture underlying device features and limitations and then flag errors if an app makes requests for unsupported features or exceeds limitations.
This layer is a work in progress and currently only flags some high-level errors without flagging errors on specific features and limitations. If a Dbg callback function is registered, this layer uses callback function(s) for reporting; otherwise it uses stdout.

### Unique Objects

(build dir)/layers/unique_objects.cpp (name=VK_LAYER_GOOGLE_unique_objects) - The Vulkan specification allows objects to have non-unique handles. This makes tracking object lifetimes difficult in that it is unclear which object is being referenced on deletion. The unique_objects layer was created to address this problem. If loaded in the correct position (last, which is closest to the display driver), it will wrap all objects with a unique object representation, allowing proper object lifetime tracking. This layer does no validation on its own and may not be required for the proper operation of all layers or all platforms. One sign that it is needed is the appearance of errors emitted from the object_tracker layer indicating the use of previously destroyed objects.

## Using Layers

1. Build the VK loader using the normal steps (cmake and make).

2. Place the layer library in the same directory as your VK test or app:

       cp build/layer/libVkLayer_threading.so build/tests

   This is required for the loader to be able to scan and enumerate your library. Alternatively, use the VK\_LAYER\_PATH environment variable to specify where the layer libraries reside.

3. Create a vk_layer_settings.txt file in the same directory to specify how your layers should behave. Model it after the following example: [*vk_layer_settings.txt*](vk_layer_settings.txt)

4. Specify which layers to activate by using vkCreateDevice and/or vkCreateInstance or environment variables.
       export VK_INSTANCE_LAYERS=VK_LAYER_LUNARG_parameter_validation:VK_LAYER_LUNARG_core_validation
       export VK_DEVICE_LAYERS=VK_LAYER_LUNARG_parameter_validation:VK_LAYER_LUNARG_core_validation
       cd build/tests; ./vkinfo

## Status

### Current known issues

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/core_validation.cpp000066400000000000000000022406621270147354000261030ustar00rootroot00000000000000/* Copyright (c) 2015-2016 The Khronos Group Inc.
 * Copyright (c) 2015-2016 Valve Corporation
 * Copyright (c) 2015-2016 LunarG, Inc.
 * Copyright (C) 2015-2016 Google Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and/or associated documentation files (the "Materials"), to
 * deal in the Materials without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Materials, and to permit persons to whom the Materials
 * are furnished to do so, subject to the following conditions:
 *
 * The above copyright notice(s) and this permission notice shall be included
 * in all copies or substantial portions of the Materials.
 *
 * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
* * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE * USE OR OTHER DEALINGS IN THE MATERIALS * * Author: Cody Northrop * Author: Michael Lentine * Author: Tobin Ehlis * Author: Chia-I Wu * Author: Chris Forbes * Author: Mark Lobodzinski * Author: Ian Elliott */ // Allow use of STL min and max functions in Windows #define NOMINMAX // Turn on mem_tracker merged code #define MTMERGESOURCE 1 #include #include #include #include #include #include #include #include #include #include #include #include #include #include "vk_loader_platform.h" #include "vk_dispatch_table_helper.h" #include "vk_struct_string_helper_cpp.h" #if defined(__GNUC__) #pragma GCC diagnostic ignored "-Wwrite-strings" #endif #if defined(__GNUC__) #pragma GCC diagnostic warning "-Wwrite-strings" #endif #include "vk_struct_size_helper.h" #include "core_validation.h" #include "vk_layer_config.h" #include "vk_layer_table.h" #include "vk_layer_data.h" #include "vk_layer_logging.h" #include "vk_layer_extension_utils.h" #include "vk_layer_utils.h" #if defined __ANDROID__ #include #define LOGCONSOLE(...) ((void)__android_log_print(ANDROID_LOG_INFO, "DS", __VA_ARGS__)) #else #define LOGCONSOLE(...) printf(__VA_ARGS__) #endif using std::unordered_map; using std::unordered_set; #if MTMERGESOURCE // WSI Image Objects bypass usual Image Object creation methods. A special Memory // Object value will be used to identify them internally. 
static const VkDeviceMemory MEMTRACKER_SWAP_CHAIN_IMAGE_KEY = (VkDeviceMemory)(-1); #endif // Track command pools and their command buffers struct CMD_POOL_INFO { VkCommandPoolCreateFlags createFlags; uint32_t queueFamilyIndex; list commandBuffers; // list container of cmd buffers allocated from this pool }; struct devExts { VkBool32 wsi_enabled; unordered_map swapchainMap; unordered_map imageToSwapchainMap; }; // fwd decls struct shader_module; struct layer_data { debug_report_data *report_data; std::vector logging_callback; VkLayerDispatchTable *device_dispatch_table; VkLayerInstanceDispatchTable *instance_dispatch_table; #if MTMERGESOURCE // MTMERGESOURCE - stuff pulled directly from MT uint64_t currentFenceId; // Maps for tracking key structs related to mem_tracker state // Images and Buffers are 2 objects that can have memory bound to them so they get special treatment unordered_map imageBindingMap; unordered_map bufferBindingMap; // MTMERGESOURCE - End of MT stuff #endif devExts device_extensions; unordered_set queues; // all queues under given device // Global set of all cmdBuffers that are inFlight on this device unordered_set globalInFlightCmdBuffers; // Layer specific data unordered_map> sampleMap; unordered_map imageViewMap; unordered_map imageMap; unordered_map bufferViewMap; unordered_map bufferMap; unordered_map pipelineMap; unordered_map commandPoolMap; unordered_map descriptorPoolMap; unordered_map setMap; unordered_map descriptorSetLayoutMap; unordered_map pipelineLayoutMap; unordered_map memObjMap; unordered_map fenceMap; unordered_map queueMap; unordered_map eventMap; unordered_map queryToStateMap; unordered_map queryPoolMap; unordered_map semaphoreMap; unordered_map commandBufferMap; unordered_map frameBufferMap; unordered_map> imageSubresourceMap; unordered_map imageLayoutMap; unordered_map renderPassMap; unordered_map> shaderModuleMap; // Current render pass VkRenderPassBeginInfo renderPassBeginInfo; uint32_t currentSubpass; VkDevice device; 
    // Device specific data
    PHYS_DEV_PROPERTIES_NODE physDevProperties;
    // MTMERGESOURCE - added a couple of fields to constructor initializer
    layer_data()
        : report_data(nullptr), device_dispatch_table(nullptr), instance_dispatch_table(nullptr),
#if MTMERGESOURCE
          currentFenceId(1),
#endif
          device_extensions(){};
};

static const VkLayerProperties cv_global_layers[] = {{
    "VK_LAYER_LUNARG_core_validation", VK_LAYER_API_VERSION, 1, "LunarG Validation Layer",
}};

template <class TCreateInfo> void ValidateLayerOrdering(const TCreateInfo &createInfo) {
    bool foundLayer = false;
    for (uint32_t i = 0; i < createInfo.enabledLayerCount; ++i) {
        if (!strcmp(createInfo.ppEnabledLayerNames[i], cv_global_layers[0].layerName)) {
            foundLayer = true;
        }
        // This has to be logged to console as we don't have a callback at this point.
        if (!foundLayer && !strcmp(createInfo.ppEnabledLayerNames[0], "VK_LAYER_GOOGLE_unique_objects")) {
            LOGCONSOLE("Cannot activate layer VK_LAYER_GOOGLE_unique_objects prior to activating %s.",
                       cv_global_layers[0].layerName);
        }
    }
}

// Code imported from shader_checker
static void build_def_index(shader_module *);

// A forward iterator over spirv instructions. Provides easy access to len, opcode, and content words
// without the caller needing to care too much about the physical SPIRV module layout.
struct spirv_inst_iter {
    std::vector<uint32_t>::const_iterator zero;
    std::vector<uint32_t>::const_iterator it;

    uint32_t len() { return *it >> 16; }
    uint32_t opcode() { return *it & 0x0ffffu; }
    uint32_t const &word(unsigned n) { return it[n]; }
    uint32_t offset() { return (uint32_t)(it - zero); }

    spirv_inst_iter() {}

    spirv_inst_iter(std::vector<uint32_t>::const_iterator zero, std::vector<uint32_t>::const_iterator it) : zero(zero), it(it) {}

    bool operator==(spirv_inst_iter const &other) { return it == other.it; }

    bool operator!=(spirv_inst_iter const &other) { return it != other.it; }

    spirv_inst_iter operator++(int) { /* x++ */
        spirv_inst_iter ii = *this;
        it += len();
        return ii;
    }

    spirv_inst_iter operator++() { /* ++x; */
        it += len();
        return *this;
    }

    /* The iterator and the value are the same thing. */
    spirv_inst_iter &operator*() { return *this; }
    spirv_inst_iter const &operator*() const { return *this; }
};

struct shader_module {
    /* the spirv image itself */
    vector<uint32_t> words;
    /* a mapping of <id> to the first word of its def. this is useful because walking type
     * trees, constant expressions, etc requires jumping all over the instruction stream.
     */
    unordered_map<unsigned, unsigned> def_index;

    shader_module(VkShaderModuleCreateInfo const *pCreateInfo)
        : words((uint32_t *)pCreateInfo->pCode, (uint32_t *)pCreateInfo->pCode + pCreateInfo->codeSize / sizeof(uint32_t)),
          def_index() {
        build_def_index(this);
    }

    /* expose begin() / end() to enable range-based for */
    spirv_inst_iter begin() const { return spirv_inst_iter(words.begin(), words.begin() + 5); } /* first insn */
    spirv_inst_iter end() const { return spirv_inst_iter(words.begin(), words.end()); }         /* just past last insn */

    /* given an offset into the module, produce an iterator there.
*/ spirv_inst_iter at(unsigned offset) const { return spirv_inst_iter(words.begin(), words.begin() + offset); } /* gets an iterator to the definition of an id */ spirv_inst_iter get_def(unsigned id) const { auto it = def_index.find(id); if (it == def_index.end()) { return end(); } return at(it->second); } }; // TODO : Do we need to guard access to layer_data_map w/ lock? static unordered_map layer_data_map; // TODO : This can be much smarter, using separate locks for separate global data static int globalLockInitialized = 0; static loader_platform_thread_mutex globalLock; #if MTMERGESOURCE // MTMERGESOURCE - start of direct pull static VkPhysicalDeviceMemoryProperties memProps; static void clear_cmd_buf_and_mem_references(layer_data *my_data, const VkCommandBuffer cb); #define MAX_BINDING 0xFFFFFFFF static MT_OBJ_BINDING_INFO *get_object_binding_info(layer_data *my_data, uint64_t handle, VkDebugReportObjectTypeEXT type) { MT_OBJ_BINDING_INFO *retValue = NULL; switch (type) { case VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT: { auto it = my_data->imageBindingMap.find(handle); if (it != my_data->imageBindingMap.end()) return &(*it).second; break; } case VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT: { auto it = my_data->bufferBindingMap.find(handle); if (it != my_data->bufferBindingMap.end()) return &(*it).second; break; } default: break; } return retValue; } // MTMERGESOURCE - end section #endif template layer_data *get_my_data_ptr(void *data_key, std::unordered_map &data_map); // prototype static GLOBAL_CB_NODE *getCBNode(layer_data *, const VkCommandBuffer); #if MTMERGESOURCE static void delete_queue_info_list(layer_data *my_data) { // Process queue list, cleaning up each entry before deleting my_data->queueMap.clear(); } // Delete CBInfo from container and clear mem references to CB static void delete_cmd_buf_info(layer_data *my_data, VkCommandPool commandPool, const VkCommandBuffer cb) { clear_cmd_buf_and_mem_references(my_data, cb); // Delete the CBInfo info 
    my_data->commandPoolMap[commandPool].commandBuffers.remove(cb);
    my_data->commandBufferMap.erase(cb);
}

static void add_object_binding_info(layer_data *my_data, const uint64_t handle, const VkDebugReportObjectTypeEXT type,
                                    const VkDeviceMemory mem) {
    switch (type) {
    // Buffers and images are unique as their CreateInfo is in container struct
    case VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT: {
        auto pCI = &my_data->bufferBindingMap[handle];
        pCI->mem = mem;
        break;
    }
    case VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT: {
        auto pCI = &my_data->imageBindingMap[handle];
        pCI->mem = mem;
        break;
    }
    default:
        break;
    }
}

static void add_object_create_info(layer_data *my_data, const uint64_t handle, const VkDebugReportObjectTypeEXT type,
                                   const void *pCreateInfo) {
    // TODO : For any CreateInfo struct that has ptrs, need to deep copy them and appropriately clean up on Destroy
    switch (type) {
    // Buffers and images are unique as their CreateInfo is in container struct
    case VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT: {
        auto pCI = &my_data->bufferBindingMap[handle];
        memset(pCI, 0, sizeof(MT_OBJ_BINDING_INFO));
        memcpy(&pCI->create_info.buffer, pCreateInfo, sizeof(VkBufferCreateInfo));
        break;
    }
    case VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT: {
        auto pCI = &my_data->imageBindingMap[handle];
        memset(pCI, 0, sizeof(MT_OBJ_BINDING_INFO));
        memcpy(&pCI->create_info.image, pCreateInfo, sizeof(VkImageCreateInfo));
        break;
    }
    // Swap Chain is very unique, use my_data->imageBindingMap, but copy in
    // SwapchainCreateInfo's usage flags and set the mem value to a unique key. This is used by
    // vkCreateImageView and internal mem_tracker routines to distinguish swap chain images
    case VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT: {
        auto pCI = &my_data->imageBindingMap[handle];
        memset(pCI, 0, sizeof(MT_OBJ_BINDING_INFO));
        pCI->mem = MEMTRACKER_SWAP_CHAIN_IMAGE_KEY;
        pCI->valid = false;
        pCI->create_info.image.usage =
            const_cast<VkSwapchainCreateInfoKHR *>(static_cast<const VkSwapchainCreateInfoKHR *>(pCreateInfo))->imageUsage;
        break;
    }
    default:
        break;
    }
}

// Add a fence, creating one if necessary to our list of fences/fenceIds
static VkBool32 add_fence_info(layer_data *my_data, VkFence fence, VkQueue queue, uint64_t *fenceId) {
    VkBool32 skipCall = VK_FALSE;
    *fenceId = my_data->currentFenceId++;
    // If a fence was provided, track it; otherwise an internal fence would be needed to track the submissions
    if (fence != VK_NULL_HANDLE) {
        my_data->fenceMap[fence].fenceId = *fenceId;
        my_data->fenceMap[fence].queue = queue;
        // Validate that fence is in UNSIGNALED state
        VkFenceCreateInfo *pFenceCI = &(my_data->fenceMap[fence].createInfo);
        if (pFenceCI->flags & VK_FENCE_CREATE_SIGNALED_BIT) {
            skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT,
                               (uint64_t)fence, __LINE__, MEMTRACK_INVALID_FENCE_STATE, "MEM",
                               "Fence %#" PRIxLEAST64 " submitted in SIGNALED state. Fences must be reset before being submitted",
                               (uint64_t)fence);
        }
    } else {
        // TODO : Do we need to create an internal fence here for tracking purposes?
} // Update most recently submitted fence and fenceId for Queue my_data->queueMap[queue].lastSubmittedId = *fenceId; return skipCall; } // Remove a fenceInfo from our list of fences/fenceIds static void delete_fence_info(layer_data *my_data, VkFence fence) { my_data->fenceMap.erase(fence); } // Record information when a fence is known to be signalled static void update_fence_tracking(layer_data *my_data, VkFence fence) { auto fence_item = my_data->fenceMap.find(fence); if (fence_item != my_data->fenceMap.end()) { FENCE_NODE *pCurFenceInfo = &(*fence_item).second; VkQueue queue = pCurFenceInfo->queue; auto queue_item = my_data->queueMap.find(queue); if (queue_item != my_data->queueMap.end()) { QUEUE_NODE *pQueueInfo = &(*queue_item).second; if (pQueueInfo->lastRetiredId < pCurFenceInfo->fenceId) { pQueueInfo->lastRetiredId = pCurFenceInfo->fenceId; } } } // Update fence state in fenceCreateInfo structure auto pFCI = &(my_data->fenceMap[fence].createInfo); pFCI->flags = static_cast(pFCI->flags | VK_FENCE_CREATE_SIGNALED_BIT); } // Helper routine that updates the fence list for a specific queue to all-retired static void retire_queue_fences(layer_data *my_data, VkQueue queue) { QUEUE_NODE *pQueueInfo = &my_data->queueMap[queue]; // Set queue's lastRetired to lastSubmitted indicating all fences completed pQueueInfo->lastRetiredId = pQueueInfo->lastSubmittedId; } // Helper routine that updates all queues to all-retired static void retire_device_fences(layer_data *my_data, VkDevice device) { // Process each queue for device // TODO: Add multiple device support for (auto ii = my_data->queueMap.begin(); ii != my_data->queueMap.end(); ++ii) { // Set queue's lastRetired to lastSubmitted indicating all fences completed QUEUE_NODE *pQueueInfo = &(*ii).second; pQueueInfo->lastRetiredId = pQueueInfo->lastSubmittedId; } } // Helper function to validate correct usage bits set for buffers or images // Verify that (actual & desired) flags != 0 or, // if strict is true, verify that 
(actual & desired) flags == desired // In case of error, report it via dbg callbacks static VkBool32 validate_usage_flags(layer_data *my_data, void *disp_obj, VkFlags actual, VkFlags desired, VkBool32 strict, uint64_t obj_handle, VkDebugReportObjectTypeEXT obj_type, char const *ty_str, char const *func_name, char const *usage_str) { VkBool32 correct_usage = VK_FALSE; VkBool32 skipCall = VK_FALSE; if (strict) correct_usage = ((actual & desired) == desired); else correct_usage = ((actual & desired) != 0); if (!correct_usage) { skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, obj_type, obj_handle, __LINE__, MEMTRACK_INVALID_USAGE_FLAG, "MEM", "Invalid usage flag for %s %#" PRIxLEAST64 " used by %s. In this case, %s should have %s set during creation.", ty_str, obj_handle, func_name, ty_str, usage_str); } return skipCall; } // Helper function to validate usage flags for images // Pulls image info and then sends actual vs. desired usage off to helper above where // an error will be flagged if usage is not correct static VkBool32 validate_image_usage_flags(layer_data *my_data, void *disp_obj, VkImage image, VkFlags desired, VkBool32 strict, char const *func_name, char const *usage_string) { VkBool32 skipCall = VK_FALSE; MT_OBJ_BINDING_INFO *pBindInfo = get_object_binding_info(my_data, (uint64_t)image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT); if (pBindInfo) { skipCall = validate_usage_flags(my_data, disp_obj, pBindInfo->create_info.image.usage, desired, strict, (uint64_t)image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, "image", func_name, usage_string); } return skipCall; } // Helper function to validate usage flags for buffers // Pulls buffer info and then sends actual vs. 
desired usage off to helper above where // an error will be flagged if usage is not correct static VkBool32 validate_buffer_usage_flags(layer_data *my_data, void *disp_obj, VkBuffer buffer, VkFlags desired, VkBool32 strict, char const *func_name, char const *usage_string) { VkBool32 skipCall = VK_FALSE; MT_OBJ_BINDING_INFO *pBindInfo = get_object_binding_info(my_data, (uint64_t)buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT); if (pBindInfo) { skipCall = validate_usage_flags(my_data, disp_obj, pBindInfo->create_info.buffer.usage, desired, strict, (uint64_t)buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, "buffer", func_name, usage_string); } return skipCall; } // Return ptr to info in map container containing mem, or NULL if not found // Calls to this function should be wrapped in mutex static DEVICE_MEM_INFO *get_mem_obj_info(layer_data *dev_data, const VkDeviceMemory mem) { auto item = dev_data->memObjMap.find(mem); if (item != dev_data->memObjMap.end()) { return &(*item).second; } else { return NULL; } } static void add_mem_obj_info(layer_data *my_data, void *object, const VkDeviceMemory mem, const VkMemoryAllocateInfo *pAllocateInfo) { assert(object != NULL); memcpy(&my_data->memObjMap[mem].allocInfo, pAllocateInfo, sizeof(VkMemoryAllocateInfo)); // TODO: Update for real hardware, actually process allocation info structures my_data->memObjMap[mem].allocInfo.pNext = NULL; my_data->memObjMap[mem].object = object; my_data->memObjMap[mem].mem = mem; my_data->memObjMap[mem].image = VK_NULL_HANDLE; my_data->memObjMap[mem].memRange.offset = 0; my_data->memObjMap[mem].memRange.size = 0; my_data->memObjMap[mem].pData = 0; my_data->memObjMap[mem].pDriverData = 0; my_data->memObjMap[mem].valid = false; } static VkBool32 validate_memory_is_valid(layer_data *dev_data, VkDeviceMemory mem, const char *functionName, VkImage image = VK_NULL_HANDLE) { if (mem == MEMTRACKER_SWAP_CHAIN_IMAGE_KEY) { MT_OBJ_BINDING_INFO *pBindInfo = get_object_binding_info(dev_data, 
            reinterpret_cast<uint64_t>(image), VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT);
        if (pBindInfo && !pBindInfo->valid) {
            return log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT,
                           (uint64_t)(mem), __LINE__, MEMTRACK_INVALID_USAGE_FLAG, "MEM",
                           "%s: Cannot read invalid swapchain image %" PRIx64 ", please fill the memory before using.",
                           functionName, (uint64_t)(image));
        }
    } else {
        DEVICE_MEM_INFO *pMemObj = get_mem_obj_info(dev_data, mem);
        if (pMemObj && !pMemObj->valid) {
            return log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT,
                           (uint64_t)(mem), __LINE__, MEMTRACK_INVALID_USAGE_FLAG, "MEM",
                           "%s: Cannot read invalid memory %" PRIx64 ", please fill the memory before using.", functionName,
                           (uint64_t)(mem));
        }
    }
    return false;
}

static void set_memory_valid(layer_data *dev_data, VkDeviceMemory mem, bool valid, VkImage image = VK_NULL_HANDLE) {
    if (mem == MEMTRACKER_SWAP_CHAIN_IMAGE_KEY) {
        MT_OBJ_BINDING_INFO *pBindInfo =
            get_object_binding_info(dev_data, reinterpret_cast<uint64_t>(image), VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT);
        if (pBindInfo) {
            pBindInfo->valid = valid;
        }
    } else {
        DEVICE_MEM_INFO *pMemObj = get_mem_obj_info(dev_data, mem);
        if (pMemObj) {
            pMemObj->valid = valid;
        }
    }
}

// Find CB Info and add mem reference to list container
// Find Mem Obj Info and add CB reference to list container
static VkBool32 update_cmd_buf_and_mem_references(layer_data *dev_data, const VkCommandBuffer cb, const VkDeviceMemory mem,
                                                  const char *apiName) {
    VkBool32 skipCall = VK_FALSE;

    // Skip validation if this image was created through WSI
    if (mem != MEMTRACKER_SWAP_CHAIN_IMAGE_KEY) {

        // First update CB binding in MemObj mini CB list
        DEVICE_MEM_INFO *pMemInfo = get_mem_obj_info(dev_data, mem);
        if (pMemInfo) {
            pMemInfo->commandBufferBindings.insert(cb);
            // Now update CBInfo's Mem reference list
            GLOBAL_CB_NODE *pCBNode = getCBNode(dev_data, cb);
            // TODO: keep track of all destroyed CBs so we know if
this is a stale or simply invalid object if (pCBNode) { pCBNode->memObjs.insert(mem); } } } return skipCall; } // Free bindings related to CB static void clear_cmd_buf_and_mem_references(layer_data *dev_data, const VkCommandBuffer cb) { GLOBAL_CB_NODE *pCBNode = getCBNode(dev_data, cb); if (pCBNode) { if (pCBNode->memObjs.size() > 0) { for (auto mem : pCBNode->memObjs) { DEVICE_MEM_INFO *pInfo = get_mem_obj_info(dev_data, mem); if (pInfo) { pInfo->commandBufferBindings.erase(cb); } } pCBNode->memObjs.clear(); } pCBNode->validate_functions.clear(); } } // Delete the entire CB list static void delete_cmd_buf_info_list(layer_data *my_data) { for (auto &cb_node : my_data->commandBufferMap) { clear_cmd_buf_and_mem_references(my_data, cb_node.first); } my_data->commandBufferMap.clear(); } // For given MemObjInfo, report Obj & CB bindings static VkBool32 reportMemReferencesAndCleanUp(layer_data *dev_data, DEVICE_MEM_INFO *pMemObjInfo) { VkBool32 skipCall = VK_FALSE; size_t cmdBufRefCount = pMemObjInfo->commandBufferBindings.size(); size_t objRefCount = pMemObjInfo->objBindings.size(); if ((pMemObjInfo->commandBufferBindings.size()) != 0) { skipCall = log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)pMemObjInfo->mem, __LINE__, MEMTRACK_FREED_MEM_REF, "MEM", "Attempting to free memory object %#" PRIxLEAST64 " which still contains " PRINTF_SIZE_T_SPECIFIER " references", (uint64_t)pMemObjInfo->mem, (cmdBufRefCount + objRefCount)); } if (cmdBufRefCount > 0 && pMemObjInfo->commandBufferBindings.size() > 0) { for (auto cb : pMemObjInfo->commandBufferBindings) { log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)cb, __LINE__, MEMTRACK_FREED_MEM_REF, "MEM", "Command Buffer %p still has a reference to mem obj %#" PRIxLEAST64, cb, (uint64_t)pMemObjInfo->mem); } // Clear the list of hanging references 
pMemObjInfo->commandBufferBindings.clear(); } if (objRefCount > 0 && pMemObjInfo->objBindings.size() > 0) { for (auto obj : pMemObjInfo->objBindings) { log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, obj.type, obj.handle, __LINE__, MEMTRACK_FREED_MEM_REF, "MEM", "VK Object %#" PRIxLEAST64 " still has a reference to mem obj %#" PRIxLEAST64, obj.handle, (uint64_t)pMemObjInfo->mem); } // Clear the list of hanging references pMemObjInfo->objBindings.clear(); } return skipCall; } static VkBool32 deleteMemObjInfo(layer_data *my_data, void *object, VkDeviceMemory mem) { VkBool32 skipCall = VK_FALSE; auto item = my_data->memObjMap.find(mem); if (item != my_data->memObjMap.end()) { my_data->memObjMap.erase(item); } else { skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)mem, __LINE__, MEMTRACK_INVALID_MEM_OBJ, "MEM", "Request to delete memory object %#" PRIxLEAST64 " not present in memory Object Map", (uint64_t)mem); } return skipCall; } // Check if fence for given CB is completed static bool checkCBCompleted(layer_data *my_data, const VkCommandBuffer cb, bool *complete) { GLOBAL_CB_NODE *pCBNode = getCBNode(my_data, cb); VkBool32 skipCall = false; *complete = true; if (pCBNode) { if (pCBNode->lastSubmittedQueue != NULL) { VkQueue queue = pCBNode->lastSubmittedQueue; QUEUE_NODE *pQueueInfo = &my_data->queueMap[queue]; if (pCBNode->fenceId > pQueueInfo->lastRetiredId) { skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)cb, __LINE__, MEMTRACK_NONE, "MEM", "fence %#" PRIxLEAST64 " for CB %p has not been checked for completion", (uint64_t)pCBNode->lastSubmittedFence, cb); *complete = false; } } } return skipCall; } static VkBool32 freeMemObjInfo(layer_data *dev_data, void *object, VkDeviceMemory mem, VkBool32 internal) { VkBool32 skipCall = VK_FALSE; // Parse global list to find info w/ mem 
DEVICE_MEM_INFO *pInfo = get_mem_obj_info(dev_data, mem); if (pInfo) { if (pInfo->allocInfo.allocationSize == 0 && !internal) { // TODO: Verify against Valid Use section skipCall = log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)mem, __LINE__, MEMTRACK_INVALID_MEM_OBJ, "MEM", "Attempting to free memory associated with a Persistent Image, %#" PRIxLEAST64 ", " "this should not be explicitly freed\n", (uint64_t)mem); } else { // Clear any CB bindings for completed CBs // TODO : Is there a better place to do this? assert(pInfo->object != VK_NULL_HANDLE); // clear_cmd_buf_and_mem_references removes elements from // pInfo->commandBufferBindings -- this copy not needed in c++14, // and probably not needed in practice in c++11 auto bindings = pInfo->commandBufferBindings; for (auto cb : bindings) { bool commandBufferComplete = false; skipCall |= checkCBCompleted(dev_data, cb, &commandBufferComplete); if (commandBufferComplete) { clear_cmd_buf_and_mem_references(dev_data, cb); } } // Now verify that no references to this mem obj remain and remove bindings if (pInfo->commandBufferBindings.size() || pInfo->objBindings.size()) { skipCall |= reportMemReferencesAndCleanUp(dev_data, pInfo); } // Delete mem obj info skipCall |= deleteMemObjInfo(dev_data, object, mem); } } return skipCall; } static const char *object_type_to_string(VkDebugReportObjectTypeEXT type) { switch (type) { case VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT: return "image"; case VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT: return "buffer"; case VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT: return "swapchain"; default: return "unknown"; } } // Remove object binding performs 3 tasks: // 1. Remove ObjectInfo from MemObjInfo list container of obj bindings & free it // 2. Clear mem binding for image/buffer by setting its handle to 0 // TODO : This only applied to Buffer, Image, and Swapchain objects now, how should it be updated/customized? 
static VkBool32 clear_object_binding(layer_data *dev_data, void *dispObj, uint64_t handle, VkDebugReportObjectTypeEXT type) { // TODO : Need to customize images/buffers/swapchains to track mem binding and clear it here appropriately VkBool32 skipCall = VK_FALSE; MT_OBJ_BINDING_INFO *pObjBindInfo = get_object_binding_info(dev_data, handle, type); if (pObjBindInfo) { DEVICE_MEM_INFO *pMemObjInfo = get_mem_obj_info(dev_data, pObjBindInfo->mem); // TODO : Make sure this is a reasonable way to reset mem binding pObjBindInfo->mem = VK_NULL_HANDLE; if (pMemObjInfo) { // This obj is bound to a memory object. Remove the reference to this object in that memory object's list, // and set the objects memory binding pointer to NULL. if (!pMemObjInfo->objBindings.erase({handle, type})) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, type, handle, __LINE__, MEMTRACK_INVALID_OBJECT, "MEM", "While trying to clear mem binding for %s obj %#" PRIxLEAST64 ", unable to find that object referenced by mem obj %#" PRIxLEAST64, object_type_to_string(type), handle, (uint64_t)pMemObjInfo->mem); } } } return skipCall; } // For NULL mem case, output warning // Make sure given object is in global object map // IF a previous binding existed, output validation error // Otherwise, add reference from objectInfo to memoryInfo // Add reference off of objInfo // device is required for error logging, need a dispatchable // object for that. static VkBool32 set_mem_binding(layer_data *dev_data, void *dispatch_object, VkDeviceMemory mem, uint64_t handle, VkDebugReportObjectTypeEXT type, const char *apiName) { VkBool32 skipCall = VK_FALSE; // Handle NULL case separately, just clear previous binding & decrement reference if (mem == VK_NULL_HANDLE) { // TODO: Verify against Valid Use section of spec. 
skipCall = log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, type, handle, __LINE__, MEMTRACK_INVALID_MEM_OBJ, "MEM", "In %s, attempting to Bind Obj(%#" PRIxLEAST64 ") to NULL", apiName, handle); } else { MT_OBJ_BINDING_INFO *pObjBindInfo = get_object_binding_info(dev_data, handle, type); if (!pObjBindInfo) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, type, handle, __LINE__, MEMTRACK_MISSING_MEM_BINDINGS, "MEM", "In %s, attempting to update Binding of %s Obj(%#" PRIxLEAST64 ") that's not in global list", object_type_to_string(type), apiName, handle); } else { // non-null case so should have real mem obj DEVICE_MEM_INFO *pMemInfo = get_mem_obj_info(dev_data, mem); if (pMemInfo) { // TODO : Need to track mem binding for obj and report conflict here DEVICE_MEM_INFO *pPrevBinding = get_mem_obj_info(dev_data, pObjBindInfo->mem); if (pPrevBinding != NULL) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)mem, __LINE__, MEMTRACK_REBIND_OBJECT, "MEM", "In %s, attempting to bind memory (%#" PRIxLEAST64 ") to object (%#" PRIxLEAST64 ") which has already been bound to mem object %#" PRIxLEAST64, apiName, (uint64_t)mem, handle, (uint64_t)pPrevBinding->mem); } else { pMemInfo->objBindings.insert({handle, type}); // For image objects, make sure default memory state is correctly set // TODO : What's the best/correct way to handle this? if (VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT == type) { VkImageCreateInfo ici = pObjBindInfo->create_info.image; if (ici.usage & (VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT)) { // TODO:: More memory state transition stuff. } } pObjBindInfo->mem = mem; } } } } return skipCall; } // For NULL mem case, clear any previous binding Else... 
// Make sure given object is in its object map
// IF a previous binding existed, update binding
// Add reference from objectInfo to memoryInfo
// Add reference off of object's binding info
// Return VK_TRUE if addition is successful, VK_FALSE otherwise
static VkBool32 set_sparse_mem_binding(layer_data *dev_data, void *dispObject, VkDeviceMemory mem, uint64_t handle,
                                       VkDebugReportObjectTypeEXT type, const char *apiName) {
    VkBool32 skipCall = VK_FALSE;
    // Handle NULL case separately, just clear previous binding & decrement reference
    if (mem == VK_NULL_HANDLE) {
        skipCall = clear_object_binding(dev_data, dispObject, handle, type);
    } else {
        MT_OBJ_BINDING_INFO *pObjBindInfo = get_object_binding_info(dev_data, handle, type);
        if (!pObjBindInfo) {
            skipCall |= log_msg(
                dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, type, handle, __LINE__, MEMTRACK_MISSING_MEM_BINDINGS, "MEM",
                "In %s, attempting to update Binding of Obj(%#" PRIxLEAST64 ") that's not in global list()", apiName, handle);
        }
        // non-null case so should have real mem obj
        DEVICE_MEM_INFO *pInfo = get_mem_obj_info(dev_data, mem);
        if (pInfo) {
            pInfo->objBindings.insert({handle, type});
            // Need to set mem binding for this object
            pObjBindInfo->mem = mem;
        }
    }
    return skipCall;
}

template <typename T>
void print_object_map_members(layer_data *my_data, void *dispObj, T const &objectName, VkDebugReportObjectTypeEXT objectType,
                              const char *objectStr) {
    for (auto const &element : objectName) {
        log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, objectType, 0, __LINE__, MEMTRACK_NONE, "MEM",
                "    %s Object list contains %s Object %#" PRIxLEAST64 " ", objectStr, objectStr, element.first);
    }
}

// For given Object, get 'mem' obj that it's bound to or NULL if no binding
static VkBool32 get_mem_binding_from_object(layer_data *my_data, void *dispObj, const uint64_t handle,
                                            const VkDebugReportObjectTypeEXT type, VkDeviceMemory *mem) {
    VkBool32 skipCall = VK_FALSE;
    *mem = VK_NULL_HANDLE;
    MT_OBJ_BINDING_INFO *pObjBindInfo =
get_object_binding_info(my_data, handle, type); if (pObjBindInfo) { if (pObjBindInfo->mem) { *mem = pObjBindInfo->mem; } else { skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, type, handle, __LINE__, MEMTRACK_MISSING_MEM_BINDINGS, "MEM", "Trying to get mem binding for object %#" PRIxLEAST64 " but object has no mem binding", handle); } } else { skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, type, handle, __LINE__, MEMTRACK_INVALID_OBJECT, "MEM", "Trying to get mem binding for object %#" PRIxLEAST64 " but no such object in %s list", handle, object_type_to_string(type)); } return skipCall; } // Print details of MemObjInfo list static void print_mem_list(layer_data *dev_data, void *dispObj) { DEVICE_MEM_INFO *pInfo = NULL; // Early out if info is not requested if (!(dev_data->report_data->active_flags & VK_DEBUG_REPORT_INFORMATION_BIT_EXT)) { return; } // Just printing each msg individually for now, may want to package these into single large print log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM", "Details of Memory Object list (of size " PRINTF_SIZE_T_SPECIFIER " elements)", dev_data->memObjMap.size()); log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM", "============================="); if (dev_data->memObjMap.size() <= 0) return; for (auto ii = dev_data->memObjMap.begin(); ii != dev_data->memObjMap.end(); ++ii) { pInfo = &(*ii).second; log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM", " ===MemObjInfo at %p===", (void *)pInfo); log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM", " Mem object: %#" PRIxLEAST64, 
(uint64_t)(pInfo->mem)); log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM", " Ref Count: " PRINTF_SIZE_T_SPECIFIER, pInfo->commandBufferBindings.size() + pInfo->objBindings.size()); if (0 != pInfo->allocInfo.allocationSize) { string pAllocInfoMsg = vk_print_vkmemoryallocateinfo(&pInfo->allocInfo, "MEM(INFO): "); log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM", " Mem Alloc info:\n%s", pAllocInfoMsg.c_str()); } else { log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM", " Mem Alloc info is NULL (alloc done by vkCreateSwapchainKHR())"); } log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM", " VK OBJECT Binding list of size " PRINTF_SIZE_T_SPECIFIER " elements:", pInfo->objBindings.size()); if (pInfo->objBindings.size() > 0) { for (auto obj : pInfo->objBindings) { log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM", " VK OBJECT %" PRIu64, obj.handle); } } log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM", " VK Command Buffer (CB) binding list of size " PRINTF_SIZE_T_SPECIFIER " elements", pInfo->commandBufferBindings.size()); if (pInfo->commandBufferBindings.size() > 0) { for (auto cb : pInfo->commandBufferBindings) { log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM", " VK CB %p", cb); } } } } static void printCBList(layer_data *my_data, void *dispObj) { GLOBAL_CB_NODE *pCBInfo = NULL; // 
Early out if info is not requested if (!(my_data->report_data->active_flags & VK_DEBUG_REPORT_INFORMATION_BIT_EXT)) { return; } log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM", "Details of CB list (of size " PRINTF_SIZE_T_SPECIFIER " elements)", my_data->commandBufferMap.size()); log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM", "=================="); if (my_data->commandBufferMap.size() <= 0) return; for (auto &cb_node : my_data->commandBufferMap) { pCBInfo = cb_node.second; log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM", " CB Info (%p) has CB %p, fenceId %" PRIx64 ", and fence %#" PRIxLEAST64, (void *)pCBInfo, (void *)pCBInfo->commandBuffer, pCBInfo->fenceId, (uint64_t)pCBInfo->lastSubmittedFence); if (pCBInfo->memObjs.size() <= 0) continue; for (auto obj : pCBInfo->memObjs) { log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, 0, __LINE__, MEMTRACK_NONE, "MEM", " Mem obj %" PRIu64, (uint64_t)obj); } } } #endif // Return a string representation of CMD_TYPE enum static string cmdTypeToString(CMD_TYPE cmd) { switch (cmd) { case CMD_BINDPIPELINE: return "CMD_BINDPIPELINE"; case CMD_BINDPIPELINEDELTA: return "CMD_BINDPIPELINEDELTA"; case CMD_SETVIEWPORTSTATE: return "CMD_SETVIEWPORTSTATE"; case CMD_SETLINEWIDTHSTATE: return "CMD_SETLINEWIDTHSTATE"; case CMD_SETDEPTHBIASSTATE: return "CMD_SETDEPTHBIASSTATE"; case CMD_SETBLENDSTATE: return "CMD_SETBLENDSTATE"; case CMD_SETDEPTHBOUNDSSTATE: return "CMD_SETDEPTHBOUNDSSTATE"; case CMD_SETSTENCILREADMASKSTATE: return "CMD_SETSTENCILREADMASKSTATE"; case CMD_SETSTENCILWRITEMASKSTATE: return "CMD_SETSTENCILWRITEMASKSTATE"; case CMD_SETSTENCILREFERENCESTATE: return 
"CMD_SETSTENCILREFERENCESTATE"; case CMD_BINDDESCRIPTORSETS: return "CMD_BINDDESCRIPTORSETS"; case CMD_BINDINDEXBUFFER: return "CMD_BINDINDEXBUFFER"; case CMD_BINDVERTEXBUFFER: return "CMD_BINDVERTEXBUFFER"; case CMD_DRAW: return "CMD_DRAW"; case CMD_DRAWINDEXED: return "CMD_DRAWINDEXED"; case CMD_DRAWINDIRECT: return "CMD_DRAWINDIRECT"; case CMD_DRAWINDEXEDINDIRECT: return "CMD_DRAWINDEXEDINDIRECT"; case CMD_DISPATCH: return "CMD_DISPATCH"; case CMD_DISPATCHINDIRECT: return "CMD_DISPATCHINDIRECT"; case CMD_COPYBUFFER: return "CMD_COPYBUFFER"; case CMD_COPYIMAGE: return "CMD_COPYIMAGE"; case CMD_BLITIMAGE: return "CMD_BLITIMAGE"; case CMD_COPYBUFFERTOIMAGE: return "CMD_COPYBUFFERTOIMAGE"; case CMD_COPYIMAGETOBUFFER: return "CMD_COPYIMAGETOBUFFER"; case CMD_CLONEIMAGEDATA: return "CMD_CLONEIMAGEDATA"; case CMD_UPDATEBUFFER: return "CMD_UPDATEBUFFER"; case CMD_FILLBUFFER: return "CMD_FILLBUFFER"; case CMD_CLEARCOLORIMAGE: return "CMD_CLEARCOLORIMAGE"; case CMD_CLEARATTACHMENTS: return "CMD_CLEARCOLORATTACHMENT"; case CMD_CLEARDEPTHSTENCILIMAGE: return "CMD_CLEARDEPTHSTENCILIMAGE"; case CMD_RESOLVEIMAGE: return "CMD_RESOLVEIMAGE"; case CMD_SETEVENT: return "CMD_SETEVENT"; case CMD_RESETEVENT: return "CMD_RESETEVENT"; case CMD_WAITEVENTS: return "CMD_WAITEVENTS"; case CMD_PIPELINEBARRIER: return "CMD_PIPELINEBARRIER"; case CMD_BEGINQUERY: return "CMD_BEGINQUERY"; case CMD_ENDQUERY: return "CMD_ENDQUERY"; case CMD_RESETQUERYPOOL: return "CMD_RESETQUERYPOOL"; case CMD_COPYQUERYPOOLRESULTS: return "CMD_COPYQUERYPOOLRESULTS"; case CMD_WRITETIMESTAMP: return "CMD_WRITETIMESTAMP"; case CMD_INITATOMICCOUNTERS: return "CMD_INITATOMICCOUNTERS"; case CMD_LOADATOMICCOUNTERS: return "CMD_LOADATOMICCOUNTERS"; case CMD_SAVEATOMICCOUNTERS: return "CMD_SAVEATOMICCOUNTERS"; case CMD_BEGINRENDERPASS: return "CMD_BEGINRENDERPASS"; case CMD_ENDRENDERPASS: return "CMD_ENDRENDERPASS"; default: return "UNKNOWN"; } } // SPIRV utility functions static void build_def_index(shader_module 
*module) { for (auto insn : *module) { switch (insn.opcode()) { /* Types */ case spv::OpTypeVoid: case spv::OpTypeBool: case spv::OpTypeInt: case spv::OpTypeFloat: case spv::OpTypeVector: case spv::OpTypeMatrix: case spv::OpTypeImage: case spv::OpTypeSampler: case spv::OpTypeSampledImage: case spv::OpTypeArray: case spv::OpTypeRuntimeArray: case spv::OpTypeStruct: case spv::OpTypeOpaque: case spv::OpTypePointer: case spv::OpTypeFunction: case spv::OpTypeEvent: case spv::OpTypeDeviceEvent: case spv::OpTypeReserveId: case spv::OpTypeQueue: case spv::OpTypePipe: module->def_index[insn.word(1)] = insn.offset(); break; /* Fixed constants */ case spv::OpConstantTrue: case spv::OpConstantFalse: case spv::OpConstant: case spv::OpConstantComposite: case spv::OpConstantSampler: case spv::OpConstantNull: module->def_index[insn.word(2)] = insn.offset(); break; /* Specialization constants */ case spv::OpSpecConstantTrue: case spv::OpSpecConstantFalse: case spv::OpSpecConstant: case spv::OpSpecConstantComposite: case spv::OpSpecConstantOp: module->def_index[insn.word(2)] = insn.offset(); break; /* Variables */ case spv::OpVariable: module->def_index[insn.word(2)] = insn.offset(); break; /* Functions */ case spv::OpFunction: module->def_index[insn.word(2)] = insn.offset(); break; default: /* We don't care about any other defs for now. */ break; } } } static spirv_inst_iter find_entrypoint(shader_module *src, char const *name, VkShaderStageFlagBits stageBits) { for (auto insn : *src) { if (insn.opcode() == spv::OpEntryPoint) { auto entrypointName = (char const *)&insn.word(3); auto entrypointStageBits = 1u << insn.word(1); if (!strcmp(entrypointName, name) && (entrypointStageBits & stageBits)) { return insn; } } } return src->end(); } bool shader_is_spirv(VkShaderModuleCreateInfo const *pCreateInfo) { uint32_t *words = (uint32_t *)pCreateInfo->pCode; size_t sizeInWords = pCreateInfo->codeSize / sizeof(uint32_t); /* Just validate that the header makes sense. 
*/ return sizeInWords >= 5 && words[0] == spv::MagicNumber && words[1] == spv::Version; } static char const *storage_class_name(unsigned sc) { switch (sc) { case spv::StorageClassInput: return "input"; case spv::StorageClassOutput: return "output"; case spv::StorageClassUniformConstant: return "const uniform"; case spv::StorageClassUniform: return "uniform"; case spv::StorageClassWorkgroup: return "workgroup local"; case spv::StorageClassCrossWorkgroup: return "workgroup global"; case spv::StorageClassPrivate: return "private global"; case spv::StorageClassFunction: return "function"; case spv::StorageClassGeneric: return "generic"; case spv::StorageClassAtomicCounter: return "atomic counter"; case spv::StorageClassImage: return "image"; case spv::StorageClassPushConstant: return "push constant"; default: return "unknown"; } } /* get the value of an integral constant */ unsigned get_constant_value(shader_module const *src, unsigned id) { auto value = src->get_def(id); assert(value != src->end()); if (value.opcode() != spv::OpConstant) { /* TODO: Either ensure that the specialization transform is already performed on a module we're considering here, OR -- specialize on the fly now. */ return 1; } return value.word(3); } static void describe_type_inner(std::ostringstream &ss, shader_module const *src, unsigned type) { auto insn = src->get_def(type); assert(insn != src->end()); switch (insn.opcode()) { case spv::OpTypeBool: ss << "bool"; break; case spv::OpTypeInt: ss << (insn.word(3) ? 
's' : 'u') << "int" << insn.word(2); break; case spv::OpTypeFloat: ss << "float" << insn.word(2); break; case spv::OpTypeVector: ss << "vec" << insn.word(3) << " of "; describe_type_inner(ss, src, insn.word(2)); break; case spv::OpTypeMatrix: ss << "mat" << insn.word(3) << " of "; describe_type_inner(ss, src, insn.word(2)); break; case spv::OpTypeArray: ss << "arr[" << get_constant_value(src, insn.word(3)) << "] of "; describe_type_inner(ss, src, insn.word(2)); break; case spv::OpTypePointer: ss << "ptr to " << storage_class_name(insn.word(2)) << " "; describe_type_inner(ss, src, insn.word(3)); break; case spv::OpTypeStruct: { ss << "struct of ("; for (unsigned i = 2; i < insn.len(); i++) { describe_type_inner(ss, src, insn.word(i)); if (i == insn.len() - 1) { ss << ")"; } else { ss << ", "; } } break; } case spv::OpTypeSampler: ss << "sampler"; break; case spv::OpTypeSampledImage: ss << "sampler+"; describe_type_inner(ss, src, insn.word(2)); break; case spv::OpTypeImage: ss << "image(dim=" << insn.word(3) << ", sampled=" << insn.word(7) << ")"; break; default: ss << "oddtype"; break; } } static std::string describe_type(shader_module const *src, unsigned type) { std::ostringstream ss; describe_type_inner(ss, src, type); return ss.str(); } static bool types_match(shader_module const *a, shader_module const *b, unsigned a_type, unsigned b_type, bool b_arrayed) { /* walk two type trees together, and complain about differences */ auto a_insn = a->get_def(a_type); auto b_insn = b->get_def(b_type); assert(a_insn != a->end()); assert(b_insn != b->end()); if (b_arrayed && b_insn.opcode() == spv::OpTypeArray) { /* we probably just found the extra level of arrayness in b_type: compare the type inside it to a_type */ return types_match(a, b, a_type, b_insn.word(2), false); } if (a_insn.opcode() != b_insn.opcode()) { return false; } switch (a_insn.opcode()) { /* if b_arrayed and we hit a leaf type, then we can't match -- there's nowhere for the extra OpTypeArray to be! 
*/ case spv::OpTypeBool: return true && !b_arrayed; case spv::OpTypeInt: /* match on width, signedness */ return a_insn.word(2) == b_insn.word(2) && a_insn.word(3) == b_insn.word(3) && !b_arrayed; case spv::OpTypeFloat: /* match on width */ return a_insn.word(2) == b_insn.word(2) && !b_arrayed; case spv::OpTypeVector: case spv::OpTypeMatrix: /* match on element type, count. these all have the same layout. we don't get here if * b_arrayed -- that is handled above. */ return !b_arrayed && types_match(a, b, a_insn.word(2), b_insn.word(2), b_arrayed) && a_insn.word(3) == b_insn.word(3); case spv::OpTypeArray: /* match on element type, count. these all have the same layout. we don't get here if * b_arrayed. This differs from vector & matrix types in that the array size is the id of a constant instruction, * not a literal within OpTypeArray */ return !b_arrayed && types_match(a, b, a_insn.word(2), b_insn.word(2), b_arrayed) && get_constant_value(a, a_insn.word(3)) == get_constant_value(b, b_insn.word(3)); case spv::OpTypeStruct: /* match on all element types */ { if (b_arrayed) { /* for the purposes of matching different levels of arrayness, structs are leaves. */ return false; } if (a_insn.len() != b_insn.len()) { return false; /* structs cannot match if member counts differ */ } for (unsigned i = 2; i < a_insn.len(); i++) { if (!types_match(a, b, a_insn.word(i), b_insn.word(i), b_arrayed)) { return false; } } return true; } case spv::OpTypePointer: /* match on pointee type. storage class is expected to differ */ return types_match(a, b, a_insn.word(3), b_insn.word(3), b_arrayed); default: /* remaining types are CLisms, or may not appear in the interfaces we * are interested in. Just claim no match. 
     */
        return false;
    }
}

static int value_or_default(std::unordered_map<unsigned, unsigned> const &map, unsigned id, int def) {
    auto it = map.find(id);
    if (it == map.end())
        return def;
    else
        return it->second;
}

static unsigned get_locations_consumed_by_type(shader_module const *src, unsigned type, bool strip_array_level) {
    auto insn = src->get_def(type);
    assert(insn != src->end());

    switch (insn.opcode()) {
    case spv::OpTypePointer:
        /* see through the ptr -- this is only ever at the toplevel for graphics shaders;
         * we're never actually passing pointers around. */
        return get_locations_consumed_by_type(src, insn.word(3), strip_array_level);
    case spv::OpTypeArray:
        if (strip_array_level) {
            return get_locations_consumed_by_type(src, insn.word(2), false);
        } else {
            return get_constant_value(src, insn.word(3)) * get_locations_consumed_by_type(src, insn.word(2), false);
        }
    case spv::OpTypeMatrix:
        /* num locations is the dimension * element size */
        return insn.word(3) * get_locations_consumed_by_type(src, insn.word(2), false);
    default:
        /* everything else is just 1. */
        return 1;

        /* TODO: extend to handle 64bit scalar types, whose vectors may need
         * multiple locations. */
    }
}

typedef std::pair<unsigned, unsigned> location_t;
typedef std::pair<unsigned, unsigned> descriptor_slot_t;

struct interface_var {
    uint32_t id;
    uint32_t type_id;
    uint32_t offset;
    /* TODO: collect the name, too? Isn't required to be present.
     */
};

static spirv_inst_iter get_struct_type(shader_module const *src, spirv_inst_iter def, bool is_array_of_verts) {
    while (true) {

        if (def.opcode() == spv::OpTypePointer) {
            def = src->get_def(def.word(3));
        } else if (def.opcode() == spv::OpTypeArray && is_array_of_verts) {
            def = src->get_def(def.word(2));
            is_array_of_verts = false;
        } else if (def.opcode() == spv::OpTypeStruct) {
            return def;
        } else {
            return src->end();
        }
    }
}

static void collect_interface_block_members(layer_data *my_data, shader_module const *src,
                                            std::map<location_t, interface_var> &out,
                                            std::unordered_map<unsigned, unsigned> const &blocks, bool is_array_of_verts,
                                            uint32_t id, uint32_t type_id) {
    /* Walk down the type_id presented, trying to determine whether it's actually an interface block. */
    auto type = get_struct_type(src, src->get_def(type_id), is_array_of_verts);
    if (type == src->end() || blocks.find(type.word(1)) == blocks.end()) {
        /* this isn't an interface block. */
        return;
    }

    std::unordered_map<unsigned, unsigned> member_components;

    /* Walk all the OpMemberDecorate for type's result id -- first pass, collect components. */
    for (auto insn : *src) {
        if (insn.opcode() == spv::OpMemberDecorate && insn.word(1) == type.word(1)) {
            unsigned member_index = insn.word(2);

            if (insn.word(3) == spv::DecorationComponent) {
                unsigned component = insn.word(4);
                member_components[member_index] = component;
            }
        }
    }

    /* Second pass -- produce the output, from Location decorations */
    for (auto insn : *src) {
        if (insn.opcode() == spv::OpMemberDecorate && insn.word(1) == type.word(1)) {
            unsigned member_index = insn.word(2);
            unsigned member_type_id = type.word(2 + member_index);

            if (insn.word(3) == spv::DecorationLocation) {
                unsigned location = insn.word(4);
                unsigned num_locations = get_locations_consumed_by_type(src, member_type_id, false);
                auto component_it = member_components.find(member_index);
                unsigned component = component_it == member_components.end() ?
0 : component_it->second;

                for (unsigned int offset = 0; offset < num_locations; offset++) {
                    interface_var v;
                    v.id = id;
                    /* TODO: member index in interface_var too? */
                    v.type_id = member_type_id;
                    v.offset = offset;
                    out[std::make_pair(location + offset, component)] = v;
                }
            }
        }
    }
}

static void collect_interface_by_location(layer_data *my_data, shader_module const *src, spirv_inst_iter entrypoint,
                                          spv::StorageClass sinterface, std::map<location_t, interface_var> &out,
                                          bool is_array_of_verts) {
    std::unordered_map<unsigned, unsigned> var_locations;
    std::unordered_map<unsigned, unsigned> var_builtins;
    std::unordered_map<unsigned, unsigned> var_components;
    std::unordered_map<unsigned, unsigned> blocks;

    for (auto insn : *src) {

        /* We consider two interface models: SSO rendezvous-by-location, and
         * builtins. Complain about anything that fits neither model.
         */
        if (insn.opcode() == spv::OpDecorate) {
            if (insn.word(2) == spv::DecorationLocation) {
                var_locations[insn.word(1)] = insn.word(3);
            }

            if (insn.word(2) == spv::DecorationBuiltIn) {
                var_builtins[insn.word(1)] = insn.word(3);
            }

            if (insn.word(2) == spv::DecorationComponent) {
                var_components[insn.word(1)] = insn.word(3);
            }

            if (insn.word(2) == spv::DecorationBlock) {
                blocks[insn.word(1)] = 1;
            }
        }
    }

    /* TODO: handle grouped decorations */
    /* TODO: handle index=1 dual source outputs from FS -- two vars will
     * have the same location, and we DONT want to clobber. */

    /* find the end of the entrypoint's name string. additional zero bytes follow the actual null
       terminator, to fill out the rest of the word - so we only need to look at the last byte in
       the word to determine which word contains the terminator.
*/ auto word = 3; while (entrypoint.word(word) & 0xff000000u) { ++word; } ++word; for (; word < entrypoint.len(); word++) { auto insn = src->get_def(entrypoint.word(word)); assert(insn != src->end()); assert(insn.opcode() == spv::OpVariable); if (insn.word(3) == sinterface) { unsigned id = insn.word(2); unsigned type = insn.word(1); int location = value_or_default(var_locations, id, -1); int builtin = value_or_default(var_builtins, id, -1); unsigned component = value_or_default(var_components, id, 0); /* unspecified is OK, is 0 */ /* All variables and interface block members in the Input or Output storage classes * must be decorated with either a builtin or an explicit location. * * TODO: integrate the interface block support here. For now, don't complain -- * a valid SPIRV module will only hit this path for the interface block case, as the * individual members of the type are decorated, rather than variable declarations. */ if (location != -1) { /* A user-defined interface variable, with a location. Where a variable * occupied multiple locations, emit one result for each. */ unsigned num_locations = get_locations_consumed_by_type(src, type, is_array_of_verts); for (unsigned int offset = 0; offset < num_locations; offset++) { interface_var v; v.id = id; v.type_id = type; v.offset = offset; out[std::make_pair(location + offset, component)] = v; } } else if (builtin == -1) { /* An interface block instance */ collect_interface_block_members(my_data, src, out, blocks, is_array_of_verts, id, type); } } } } static void collect_interface_by_descriptor_slot(layer_data *my_data, shader_module const *src, std::unordered_set const &accessible_ids, std::map &out) { std::unordered_map var_sets; std::unordered_map var_bindings; for (auto insn : *src) { /* All variables in the Uniform or UniformConstant storage classes are required to be decorated with both * DecorationDescriptorSet and DecorationBinding. 
*/ if (insn.opcode() == spv::OpDecorate) { if (insn.word(2) == spv::DecorationDescriptorSet) { var_sets[insn.word(1)] = insn.word(3); } if (insn.word(2) == spv::DecorationBinding) { var_bindings[insn.word(1)] = insn.word(3); } } } for (auto id : accessible_ids) { auto insn = src->get_def(id); assert(insn != src->end()); if (insn.opcode() == spv::OpVariable && (insn.word(3) == spv::StorageClassUniform || insn.word(3) == spv::StorageClassUniformConstant)) { unsigned set = value_or_default(var_sets, insn.word(2), 0); unsigned binding = value_or_default(var_bindings, insn.word(2), 0); auto existing_it = out.find(std::make_pair(set, binding)); if (existing_it != out.end()) { /* conflict within spv image */ log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VkDebugReportObjectTypeEXT(0), 0, __LINE__, SHADER_CHECKER_INCONSISTENT_SPIRV, "SC", "var %d (type %d) in %s interface in descriptor slot (%u,%u) conflicts with existing definition", insn.word(2), insn.word(1), storage_class_name(insn.word(3)), existing_it->first.first, existing_it->first.second); } interface_var v; v.id = insn.word(2); v.type_id = insn.word(1); out[std::make_pair(set, binding)] = v; } } } static bool validate_interface_between_stages(layer_data *my_data, shader_module const *producer, spirv_inst_iter producer_entrypoint, char const *producer_name, shader_module const *consumer, spirv_inst_iter consumer_entrypoint, char const *consumer_name, bool consumer_arrayed_input) { std::map outputs; std::map inputs; bool pass = true; collect_interface_by_location(my_data, producer, producer_entrypoint, spv::StorageClassOutput, outputs, false); collect_interface_by_location(my_data, consumer, consumer_entrypoint, spv::StorageClassInput, inputs, consumer_arrayed_input); auto a_it = outputs.begin(); auto b_it = inputs.begin(); /* maps sorted by key (location); walk them together to find mismatches */ while ((outputs.size() > 0 && a_it != outputs.end()) || (inputs.size() && b_it != inputs.end())) { bool 
a_at_end = outputs.size() == 0 || a_it == outputs.end(); bool b_at_end = inputs.size() == 0 || b_it == inputs.end(); auto a_first = a_at_end ? std::make_pair(0u, 0u) : a_it->first; auto b_first = b_at_end ? std::make_pair(0u, 0u) : b_it->first; if (b_at_end || ((!a_at_end) && (a_first < b_first))) { if (log_msg(my_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, VkDebugReportObjectTypeEXT(0), 0, __LINE__, SHADER_CHECKER_OUTPUT_NOT_CONSUMED, "SC", "%s writes to output location %u.%u which is not consumed by %s", producer_name, a_first.first, a_first.second, consumer_name)) { pass = false; } a_it++; } else if (a_at_end || a_first > b_first) { if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VkDebugReportObjectTypeEXT(0), 0, __LINE__, SHADER_CHECKER_INPUT_NOT_PRODUCED, "SC", "%s consumes input location %u.%u which is not written by %s", consumer_name, b_first.first, b_first.second, producer_name)) { pass = false; } b_it++; } else { if (types_match(producer, consumer, a_it->second.type_id, b_it->second.type_id, consumer_arrayed_input)) { /* OK! 
*/ } else { if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VkDebugReportObjectTypeEXT(0), 0, __LINE__, SHADER_CHECKER_INTERFACE_TYPE_MISMATCH, "SC", "Type mismatch on location %u.%u: '%s' vs '%s'", a_first.first, a_first.second, describe_type(producer, a_it->second.type_id).c_str(), describe_type(consumer, b_it->second.type_id).c_str())) { pass = false; } } a_it++; b_it++; } } return pass; } enum FORMAT_TYPE { FORMAT_TYPE_UNDEFINED, FORMAT_TYPE_FLOAT, /* UNORM, SNORM, FLOAT, USCALED, SSCALED, SRGB -- anything we consider float in the shader */ FORMAT_TYPE_SINT, FORMAT_TYPE_UINT, }; static unsigned get_format_type(VkFormat fmt) { switch (fmt) { case VK_FORMAT_UNDEFINED: return FORMAT_TYPE_UNDEFINED; case VK_FORMAT_R8_SINT: case VK_FORMAT_R8G8_SINT: case VK_FORMAT_R8G8B8_SINT: case VK_FORMAT_R8G8B8A8_SINT: case VK_FORMAT_R16_SINT: case VK_FORMAT_R16G16_SINT: case VK_FORMAT_R16G16B16_SINT: case VK_FORMAT_R16G16B16A16_SINT: case VK_FORMAT_R32_SINT: case VK_FORMAT_R32G32_SINT: case VK_FORMAT_R32G32B32_SINT: case VK_FORMAT_R32G32B32A32_SINT: case VK_FORMAT_B8G8R8_SINT: case VK_FORMAT_B8G8R8A8_SINT: case VK_FORMAT_A2B10G10R10_SINT_PACK32: case VK_FORMAT_A2R10G10B10_SINT_PACK32: return FORMAT_TYPE_SINT; case VK_FORMAT_R8_UINT: case VK_FORMAT_R8G8_UINT: case VK_FORMAT_R8G8B8_UINT: case VK_FORMAT_R8G8B8A8_UINT: case VK_FORMAT_R16_UINT: case VK_FORMAT_R16G16_UINT: case VK_FORMAT_R16G16B16_UINT: case VK_FORMAT_R16G16B16A16_UINT: case VK_FORMAT_R32_UINT: case VK_FORMAT_R32G32_UINT: case VK_FORMAT_R32G32B32_UINT: case VK_FORMAT_R32G32B32A32_UINT: case VK_FORMAT_B8G8R8_UINT: case VK_FORMAT_B8G8R8A8_UINT: case VK_FORMAT_A2B10G10R10_UINT_PACK32: case VK_FORMAT_A2R10G10B10_UINT_PACK32: return FORMAT_TYPE_UINT; default: return FORMAT_TYPE_FLOAT; } } /* characterizes a SPIR-V type appearing in an interface to a FF stage, * for comparison to a VkFormat's characterization above. 
*/ static unsigned get_fundamental_type(shader_module const *src, unsigned type) { auto insn = src->get_def(type); assert(insn != src->end()); switch (insn.opcode()) { case spv::OpTypeInt: return insn.word(3) ? FORMAT_TYPE_SINT : FORMAT_TYPE_UINT; case spv::OpTypeFloat: return FORMAT_TYPE_FLOAT; case spv::OpTypeVector: return get_fundamental_type(src, insn.word(2)); case spv::OpTypeMatrix: return get_fundamental_type(src, insn.word(2)); case spv::OpTypeArray: return get_fundamental_type(src, insn.word(2)); case spv::OpTypePointer: return get_fundamental_type(src, insn.word(3)); default: return FORMAT_TYPE_UNDEFINED; } } static uint32_t get_shader_stage_id(VkShaderStageFlagBits stage) { uint32_t bit_pos = u_ffs(stage); return bit_pos - 1; } static bool validate_vi_consistency(layer_data *my_data, VkPipelineVertexInputStateCreateInfo const *vi) { /* walk the binding descriptions, which describe the step rate and stride of each vertex buffer. * each binding should be specified only once. */ std::unordered_map bindings; bool pass = true; for (unsigned i = 0; i < vi->vertexBindingDescriptionCount; i++) { auto desc = &vi->pVertexBindingDescriptions[i]; auto &binding = bindings[desc->binding]; if (binding) { if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VkDebugReportObjectTypeEXT(0), 0, __LINE__, SHADER_CHECKER_INCONSISTENT_VI, "SC", "Duplicate vertex input binding descriptions for binding %d", desc->binding)) { pass = false; } } else { binding = desc; } } return pass; } static bool validate_vi_against_vs_inputs(layer_data *my_data, VkPipelineVertexInputStateCreateInfo const *vi, shader_module const *vs, spirv_inst_iter entrypoint) { std::map inputs; bool pass = true; collect_interface_by_location(my_data, vs, entrypoint, spv::StorageClassInput, inputs, false); /* Build index by location */ std::map attribs; if (vi) { for (unsigned i = 0; i < vi->vertexAttributeDescriptionCount; i++) attribs[vi->pVertexAttributeDescriptions[i].location] = 
&vi->pVertexAttributeDescriptions[i]; } auto it_a = attribs.begin(); auto it_b = inputs.begin(); while ((attribs.size() > 0 && it_a != attribs.end()) || (inputs.size() > 0 && it_b != inputs.end())) { bool a_at_end = attribs.size() == 0 || it_a == attribs.end(); bool b_at_end = inputs.size() == 0 || it_b == inputs.end(); auto a_first = a_at_end ? 0 : it_a->first; auto b_first = b_at_end ? 0 : it_b->first.first; if (!a_at_end && (b_at_end || a_first < b_first)) { if (log_msg(my_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, VkDebugReportObjectTypeEXT(0), 0, __LINE__, SHADER_CHECKER_OUTPUT_NOT_CONSUMED, "SC", "Vertex attribute at location %d not consumed by VS", a_first)) { pass = false; } it_a++; } else if (!b_at_end && (a_at_end || b_first < a_first)) { if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, /*dev*/ 0, __LINE__, SHADER_CHECKER_INPUT_NOT_PRODUCED, "SC", "VS consumes input at location %d but not provided", b_first)) { pass = false; } it_b++; } else { unsigned attrib_type = get_format_type(it_a->second->format); unsigned input_type = get_fundamental_type(vs, it_b->second.type_id); /* type checking */ if (attrib_type != FORMAT_TYPE_UNDEFINED && input_type != FORMAT_TYPE_UNDEFINED && attrib_type != input_type) { if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VkDebugReportObjectTypeEXT(0), 0, __LINE__, SHADER_CHECKER_INTERFACE_TYPE_MISMATCH, "SC", "Attribute type of `%s` at location %d does not match VS input type of `%s`", string_VkFormat(it_a->second->format), a_first, describe_type(vs, it_b->second.type_id).c_str())) { pass = false; } } /* OK! 
*/ it_a++; it_b++; } } return pass; } static bool validate_fs_outputs_against_render_pass(layer_data *my_data, shader_module const *fs, spirv_inst_iter entrypoint, RENDER_PASS_NODE const *rp, uint32_t subpass) { const std::vector &color_formats = rp->subpassColorFormats[subpass]; std::map outputs; bool pass = true; /* TODO: dual source blend index (spv::DecIndex, zero if not provided) */ collect_interface_by_location(my_data, fs, entrypoint, spv::StorageClassOutput, outputs, false); auto it = outputs.begin(); uint32_t attachment = 0; /* Walk attachment list and outputs together -- this is a little overpowered since attachments * are currently dense, but the parallel with matching between shader stages is nice. */ while ((outputs.size() > 0 && it != outputs.end()) || attachment < color_formats.size()) { if (attachment == color_formats.size() || (it != outputs.end() && it->first.first < attachment)) { if (log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VkDebugReportObjectTypeEXT(0), 0, __LINE__, SHADER_CHECKER_OUTPUT_NOT_CONSUMED, "SC", "FS writes to output location %d with no matching attachment", it->first.first)) { pass = false; } it++; } else if (it == outputs.end() || it->first.first > attachment) { if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VkDebugReportObjectTypeEXT(0), 0, __LINE__, SHADER_CHECKER_INPUT_NOT_PRODUCED, "SC", "Attachment %d not written by FS", attachment)) { pass = false; } attachment++; } else { unsigned output_type = get_fundamental_type(fs, it->second.type_id); unsigned att_type = get_format_type(color_formats[attachment]); /* type checking */ if (att_type != FORMAT_TYPE_UNDEFINED && output_type != FORMAT_TYPE_UNDEFINED && att_type != output_type) { if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VkDebugReportObjectTypeEXT(0), 0, __LINE__, SHADER_CHECKER_INTERFACE_TYPE_MISMATCH, "SC", "Attachment %d of type `%s` does not match FS output type of `%s`", attachment, 
string_VkFormat(color_formats[attachment]), describe_type(fs, it->second.type_id).c_str())) { pass = false; } } /* OK! */ it++; attachment++; } } return pass; } /* For some analyses, we need to know about all ids referenced by the static call tree of a particular * entrypoint. This is important for identifying the set of shader resources actually used by an entrypoint, * for example. * Note: we only explore parts of the image which might actually contain ids we care about for the above analyses. * - NOT the shader input/output interfaces. * * TODO: The set of interesting opcodes here was determined by eyeballing the SPIRV spec. It might be worth * converting parts of this to be generated from the machine-readable spec instead. */ static void mark_accessible_ids(shader_module const *src, spirv_inst_iter entrypoint, std::unordered_set &ids) { std::unordered_set worklist; worklist.insert(entrypoint.word(2)); while (!worklist.empty()) { auto id_iter = worklist.begin(); auto id = *id_iter; worklist.erase(id_iter); auto insn = src->get_def(id); if (insn == src->end()) { /* id is something we didnt collect in build_def_index. that's OK -- we'll stumble * across all kinds of things here that we may not care about. */ continue; } /* try to add to the output set */ if (!ids.insert(id).second) { continue; /* if we already saw this id, we don't want to walk it again. 
*/ } switch (insn.opcode()) { case spv::OpFunction: /* scan whole body of the function, enlisting anything interesting */ while (++insn, insn.opcode() != spv::OpFunctionEnd) { switch (insn.opcode()) { case spv::OpLoad: case spv::OpAtomicLoad: case spv::OpAtomicExchange: case spv::OpAtomicCompareExchange: case spv::OpAtomicCompareExchangeWeak: case spv::OpAtomicIIncrement: case spv::OpAtomicIDecrement: case spv::OpAtomicIAdd: case spv::OpAtomicISub: case spv::OpAtomicSMin: case spv::OpAtomicUMin: case spv::OpAtomicSMax: case spv::OpAtomicUMax: case spv::OpAtomicAnd: case spv::OpAtomicOr: case spv::OpAtomicXor: worklist.insert(insn.word(3)); /* ptr */ break; case spv::OpStore: case spv::OpAtomicStore: worklist.insert(insn.word(1)); /* ptr */ break; case spv::OpAccessChain: case spv::OpInBoundsAccessChain: worklist.insert(insn.word(3)); /* base ptr */ break; case spv::OpSampledImage: case spv::OpImageSampleImplicitLod: case spv::OpImageSampleExplicitLod: case spv::OpImageSampleDrefImplicitLod: case spv::OpImageSampleDrefExplicitLod: case spv::OpImageSampleProjImplicitLod: case spv::OpImageSampleProjExplicitLod: case spv::OpImageSampleProjDrefImplicitLod: case spv::OpImageSampleProjDrefExplicitLod: case spv::OpImageFetch: case spv::OpImageGather: case spv::OpImageDrefGather: case spv::OpImageRead: case spv::OpImage: case spv::OpImageQueryFormat: case spv::OpImageQueryOrder: case spv::OpImageQuerySizeLod: case spv::OpImageQuerySize: case spv::OpImageQueryLod: case spv::OpImageQueryLevels: case spv::OpImageQuerySamples: case spv::OpImageSparseSampleImplicitLod: case spv::OpImageSparseSampleExplicitLod: case spv::OpImageSparseSampleDrefImplicitLod: case spv::OpImageSparseSampleDrefExplicitLod: case spv::OpImageSparseSampleProjImplicitLod: case spv::OpImageSparseSampleProjExplicitLod: case spv::OpImageSparseSampleProjDrefImplicitLod: case spv::OpImageSparseSampleProjDrefExplicitLod: case spv::OpImageSparseFetch: case spv::OpImageSparseGather: case 
spv::OpImageSparseDrefGather: case spv::OpImageTexelPointer: worklist.insert(insn.word(3)); /* image or sampled image */ break; case spv::OpImageWrite: worklist.insert(insn.word(1)); /* image -- different operand order to above */ break; case spv::OpFunctionCall: for (auto i = 3; i < insn.len(); i++) { worklist.insert(insn.word(i)); /* fn itself, and all args */ } break; case spv::OpExtInst: for (auto i = 5; i < insn.len(); i++) { worklist.insert(insn.word(i)); /* operands to ext inst */ } break; } } break; } } } struct shader_stage_attributes { char const *const name; bool arrayed_input; }; static shader_stage_attributes shader_stage_attribs[] = { {"vertex shader", false}, {"tessellation control shader", true}, {"tessellation evaluation shader", false}, {"geometry shader", true}, {"fragment shader", false}, }; static bool validate_push_constant_block_against_pipeline(layer_data *my_data, std::vector const *pushConstantRanges, shader_module const *src, spirv_inst_iter type, VkShaderStageFlagBits stage) { bool pass = true; /* strip off ptrs etc */ type = get_struct_type(src, type, false); assert(type != src->end()); /* validate directly off the offsets. this isn't quite correct for arrays * and matrices, but is a good first step. 
TODO: arrays, matrices, weird * sizes */ for (auto insn : *src) { if (insn.opcode() == spv::OpMemberDecorate && insn.word(1) == type.word(1)) { if (insn.word(3) == spv::DecorationOffset) { unsigned offset = insn.word(4); auto size = 4; /* bytes; TODO: calculate this based on the type */ bool found_range = false; for (auto const &range : *pushConstantRanges) { if (range.offset <= offset && range.offset + range.size >= offset + size) { found_range = true; if ((range.stageFlags & stage) == 0) { if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VkDebugReportObjectTypeEXT(0), 0, __LINE__, SHADER_CHECKER_PUSH_CONSTANT_NOT_ACCESSIBLE_FROM_STAGE, "SC", "Push constant range covering variable starting at " "offset %u not accessible from stage %s", offset, string_VkShaderStageFlagBits(stage))) { pass = false; } } break; } } if (!found_range) { if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VkDebugReportObjectTypeEXT(0), 0, __LINE__, SHADER_CHECKER_PUSH_CONSTANT_OUT_OF_RANGE, "SC", "Push constant range covering variable starting at " "offset %u not declared in layout", offset)) { pass = false; } } } } } return pass; } static bool validate_push_constant_usage(layer_data *my_data, std::vector const *pushConstantRanges, shader_module const *src, std::unordered_set accessible_ids, VkShaderStageFlagBits stage) { bool pass = true; for (auto id : accessible_ids) { auto def_insn = src->get_def(id); if (def_insn.opcode() == spv::OpVariable && def_insn.word(3) == spv::StorageClassPushConstant) { pass &= validate_push_constant_block_against_pipeline(my_data, pushConstantRanges, src, src->get_def(def_insn.word(1)), stage); } } return pass; } // For given pipelineLayout verify that the setLayout at slot.first // has the requested binding at slot.second static VkDescriptorSetLayoutBinding const * get_descriptor_binding(layer_data *my_data, PIPELINE_LAYOUT_NODE *pipelineLayout, descriptor_slot_t slot) { if (!pipelineLayout) return nullptr; if (slot.first >= 
pipelineLayout->descriptorSetLayouts.size()) return nullptr; auto const layout_node = my_data->descriptorSetLayoutMap[pipelineLayout->descriptorSetLayouts[slot.first]]; auto bindingIt = layout_node->bindingToIndexMap.find(slot.second); if ((bindingIt == layout_node->bindingToIndexMap.end()) || (layout_node->createInfo.pBindings == NULL)) return nullptr; assert(bindingIt->second < layout_node->createInfo.bindingCount); return &layout_node->createInfo.pBindings[bindingIt->second]; } // Block of code at start here for managing/tracking Pipeline state that this layer cares about static uint64_t g_drawCount[NUM_DRAW_TYPES] = {0, 0, 0, 0}; // TODO : Should be tracking lastBound per commandBuffer and when draws occur, report based on that cmd buffer lastBound // Then need to synchronize the accesses based on cmd buffer so that if I'm reading state on one cmd buffer, updates // to that same cmd buffer by separate thread are not changing state from underneath us // Track the last cmd buffer touched by this thread static VkBool32 hasDrawCmd(GLOBAL_CB_NODE *pCB) { for (uint32_t i = 0; i < NUM_DRAW_TYPES; i++) { if (pCB->drawCount[i]) return VK_TRUE; } return VK_FALSE; } // Check object status for selected flag state static VkBool32 validate_status(layer_data *my_data, GLOBAL_CB_NODE *pNode, CBStatusFlags status_mask, VkFlags msg_flags, DRAW_STATE_ERROR error_code, const char *fail_msg) { if (!(pNode->status & status_mask)) { return log_msg(my_data->report_data, msg_flags, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast(pNode->commandBuffer), __LINE__, error_code, "DS", "CB object %#" PRIxLEAST64 ": %s", reinterpret_cast(pNode->commandBuffer), fail_msg); } return VK_FALSE; } // Retrieve pipeline node ptr for given pipeline object static PIPELINE_NODE *getPipeline(layer_data *my_data, const VkPipeline pipeline) { if (my_data->pipelineMap.find(pipeline) == my_data->pipelineMap.end()) { return NULL; } return my_data->pipelineMap[pipeline]; } // Return VK_TRUE if 
for a given PSO, the given state enum is dynamic, else return VK_FALSE static VkBool32 isDynamic(const PIPELINE_NODE *pPipeline, const VkDynamicState state) { if (pPipeline && pPipeline->graphicsPipelineCI.pDynamicState) { for (uint32_t i = 0; i < pPipeline->graphicsPipelineCI.pDynamicState->dynamicStateCount; i++) { if (state == pPipeline->graphicsPipelineCI.pDynamicState->pDynamicStates[i]) return VK_TRUE; } } return VK_FALSE; } // Validate state stored as flags at time of draw call static VkBool32 validate_draw_state_flags(layer_data *dev_data, GLOBAL_CB_NODE *pCB, const PIPELINE_NODE *pPipe, VkBool32 indexedDraw) { VkBool32 result; result = validate_status(dev_data, pCB, CBSTATUS_VIEWPORT_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_VIEWPORT_NOT_BOUND, "Dynamic viewport state not set for this command buffer"); result |= validate_status(dev_data, pCB, CBSTATUS_SCISSOR_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_SCISSOR_NOT_BOUND, "Dynamic scissor state not set for this command buffer"); if ((pPipe->iaStateCI.topology == VK_PRIMITIVE_TOPOLOGY_LINE_LIST) || (pPipe->iaStateCI.topology == VK_PRIMITIVE_TOPOLOGY_LINE_STRIP)) { result |= validate_status(dev_data, pCB, CBSTATUS_LINE_WIDTH_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_LINE_WIDTH_NOT_BOUND, "Dynamic line width state not set for this command buffer"); } if (pPipe->rsStateCI.depthBiasEnable) { result |= validate_status(dev_data, pCB, CBSTATUS_DEPTH_BIAS_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_DEPTH_BIAS_NOT_BOUND, "Dynamic depth bias state not set for this command buffer"); } if (pPipe->blendConstantsEnabled) { result |= validate_status(dev_data, pCB, CBSTATUS_BLEND_CONSTANTS_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_BLEND_NOT_BOUND, "Dynamic blend constants state not set for this command buffer"); } if (pPipe->dsStateCI.depthBoundsTestEnable) { result |= validate_status(dev_data, pCB, CBSTATUS_DEPTH_BOUNDS_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_DEPTH_BOUNDS_NOT_BOUND, "Dynamic depth 
bounds state not set for this command buffer"); } if (pPipe->dsStateCI.stencilTestEnable) { result |= validate_status(dev_data, pCB, CBSTATUS_STENCIL_READ_MASK_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_STENCIL_NOT_BOUND, "Dynamic stencil read mask state not set for this command buffer"); result |= validate_status(dev_data, pCB, CBSTATUS_STENCIL_WRITE_MASK_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_STENCIL_NOT_BOUND, "Dynamic stencil write mask state not set for this command buffer"); result |= validate_status(dev_data, pCB, CBSTATUS_STENCIL_REFERENCE_SET, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_STENCIL_NOT_BOUND, "Dynamic stencil reference state not set for this command buffer"); } if (indexedDraw) { result |= validate_status(dev_data, pCB, CBSTATUS_INDEX_BUFFER_BOUND, VK_DEBUG_REPORT_ERROR_BIT_EXT, DRAWSTATE_INDEX_BUFFER_NOT_BOUND, "Index buffer object not bound to this command buffer when Indexed Draw attempted"); } return result; } // Verify attachment reference compatibility according to spec // If one array is larger, treat missing elements of shorter array as VK_ATTACHMENT_UNUSED & other array much match this // If both AttachmentReference arrays have requested index, check their corresponding AttachementDescriptions // to make sure that format and samples counts match. // If not, they are not compatible. 
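/* Illustrative example of the compatibility rule above (attachment formats and
 * sample counts here are hypothetical, chosen only for illustration): a primary
 * subpass reference 0 pointing at a {VK_FORMAT_R8G8B8A8_UNORM, 4-sample}
 * attachment is compatible with a secondary reference 0 whose attachment also
 * has VK_FORMAT_R8G8B8A8_UNORM and 4 samples; if the secondary subpass has no
 * reference at that index, the primary reference must be VK_ATTACHMENT_UNUSED
 * for the two render passes to remain compatible. */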
static bool attachment_references_compatible(const uint32_t index, const VkAttachmentReference *pPrimary,
                                             const uint32_t primaryCount, const VkAttachmentDescription *pPrimaryAttachments,
                                             const VkAttachmentReference *pSecondary, const uint32_t secondaryCount,
                                             const VkAttachmentDescription *pSecondaryAttachments) {
    if (index >= primaryCount) { // Check secondary as if primary is VK_ATTACHMENT_UNUSED
        if (VK_ATTACHMENT_UNUSED == pSecondary[index].attachment)
            return true;
    } else if (index >= secondaryCount) { // Check primary as if secondary is VK_ATTACHMENT_UNUSED
        if (VK_ATTACHMENT_UNUSED == pPrimary[index].attachment)
            return true;
    } else { // Format and sample count must match
        if ((pPrimaryAttachments[pPrimary[index].attachment].format ==
             pSecondaryAttachments[pSecondary[index].attachment].format) &&
            (pPrimaryAttachments[pPrimary[index].attachment].samples ==
             pSecondaryAttachments[pSecondary[index].attachment].samples))
            return true;
    }
    // Format and sample counts didn't match
    return false;
}

// For given primary and secondary RenderPass objects, verify that they're compatible
static bool verify_renderpass_compatibility(layer_data *my_data, const VkRenderPass primaryRP,
                                            const VkRenderPass secondaryRP, string &errorMsg) {
    stringstream errorStr;
    if (my_data->renderPassMap.find(primaryRP) == my_data->renderPassMap.end()) {
        errorStr << "invalid VkRenderPass (" << primaryRP << ")";
        errorMsg = errorStr.str();
        return false;
    } else if (my_data->renderPassMap.find(secondaryRP) == my_data->renderPassMap.end()) {
        errorStr << "invalid VkRenderPass (" << secondaryRP << ")";
        errorMsg = errorStr.str();
        return false;
    }
    // Trivial pass case is exact same RP
    if (primaryRP == secondaryRP) {
        return true;
    }
    const VkRenderPassCreateInfo *primaryRPCI = my_data->renderPassMap[primaryRP]->pCreateInfo;
    const VkRenderPassCreateInfo *secondaryRPCI = my_data->renderPassMap[secondaryRP]->pCreateInfo;
    if (primaryRPCI->subpassCount != secondaryRPCI->subpassCount) {
        errorStr << "RenderPass for primary cmdBuffer has " << primaryRPCI->subpassCount
                 << " subpasses but renderPass for secondary cmdBuffer has " << secondaryRPCI->subpassCount
                 << " subpasses.";
        errorMsg = errorStr.str();
        return false;
    }
    uint32_t spIndex = 0;
    for (spIndex = 0; spIndex < primaryRPCI->subpassCount; ++spIndex) {
        // For each subpass, verify that corresponding color, input, resolve & depth/stencil attachment references
        // are compatible
        uint32_t primaryColorCount = primaryRPCI->pSubpasses[spIndex].colorAttachmentCount;
        uint32_t secondaryColorCount = secondaryRPCI->pSubpasses[spIndex].colorAttachmentCount;
        uint32_t colorMax = std::max(primaryColorCount, secondaryColorCount);
        for (uint32_t cIdx = 0; cIdx < colorMax; ++cIdx) {
            if (!attachment_references_compatible(cIdx, primaryRPCI->pSubpasses[spIndex].pColorAttachments,
                                                  primaryColorCount, primaryRPCI->pAttachments,
                                                  secondaryRPCI->pSubpasses[spIndex].pColorAttachments,
                                                  secondaryColorCount, secondaryRPCI->pAttachments)) {
                errorStr << "color attachments at index " << cIdx << " of subpass index " << spIndex
                         << " are not compatible.";
                errorMsg = errorStr.str();
                return false;
            } else if (!attachment_references_compatible(cIdx, primaryRPCI->pSubpasses[spIndex].pResolveAttachments,
                                                         primaryColorCount, primaryRPCI->pAttachments,
                                                         secondaryRPCI->pSubpasses[spIndex].pResolveAttachments,
                                                         secondaryColorCount, secondaryRPCI->pAttachments)) {
                errorStr << "resolve attachments at index " << cIdx << " of subpass index " << spIndex
                         << " are not compatible.";
                errorMsg = errorStr.str();
                return false;
            } else if (!attachment_references_compatible(cIdx, primaryRPCI->pSubpasses[spIndex].pDepthStencilAttachment,
                                                         primaryColorCount, primaryRPCI->pAttachments,
                                                         secondaryRPCI->pSubpasses[spIndex].pDepthStencilAttachment,
                                                         secondaryColorCount, secondaryRPCI->pAttachments)) {
                errorStr << "depth/stencil attachments at index " << cIdx << " of subpass index " << spIndex
                         << " are not compatible.";
                errorMsg = errorStr.str();
                return false;
            }
        }
        uint32_t primaryInputCount = primaryRPCI->pSubpasses[spIndex].inputAttachmentCount;
        uint32_t secondaryInputCount = secondaryRPCI->pSubpasses[spIndex].inputAttachmentCount;
        uint32_t inputMax = std::max(primaryInputCount, secondaryInputCount);
        for (uint32_t i = 0; i < inputMax; ++i) {
            if (!attachment_references_compatible(i, primaryRPCI->pSubpasses[spIndex].pInputAttachments,
                                                  primaryInputCount, primaryRPCI->pAttachments,
                                                  secondaryRPCI->pSubpasses[spIndex].pInputAttachments,
                                                  secondaryInputCount, secondaryRPCI->pAttachments)) {
                errorStr << "input attachments at index " << i << " of subpass index " << spIndex
                         << " are not compatible.";
                errorMsg = errorStr.str();
                return false;
            }
        }
    }
    return true;
}

// For given SET_NODE, verify that its Set is compatible w/ the setLayout corresponding to pipelineLayout[layoutIndex]
static bool verify_set_layout_compatibility(layer_data *my_data, const SET_NODE *pSet, const VkPipelineLayout layout,
                                            const uint32_t layoutIndex, string &errorMsg) {
    stringstream errorStr;
    auto pipeline_layout_it = my_data->pipelineLayoutMap.find(layout);
    if (pipeline_layout_it == my_data->pipelineLayoutMap.end()) {
        errorStr << "invalid VkPipelineLayout (" << layout << ")";
        errorMsg = errorStr.str();
        return false;
    }
    if (layoutIndex >= pipeline_layout_it->second.descriptorSetLayouts.size()) {
        errorStr << "VkPipelineLayout (" << layout << ") only contains "
                 << pipeline_layout_it->second.descriptorSetLayouts.size() << " setLayouts corresponding to sets 0-"
                 << pipeline_layout_it->second.descriptorSetLayouts.size() - 1
                 << ", but you're attempting to bind set to index " << layoutIndex;
        errorMsg = errorStr.str();
        return false;
    }
    // Get the specific setLayout from PipelineLayout that overlaps this set
    LAYOUT_NODE *pLayoutNode =
        my_data->descriptorSetLayoutMap[pipeline_layout_it->second.descriptorSetLayouts[layoutIndex]];
    if (pLayoutNode->layout == pSet->pLayout->layout) { // trivial pass case
        return true;
    }
    size_t descriptorCount = pLayoutNode->descriptorTypes.size();
    if (descriptorCount != pSet->pLayout->descriptorTypes.size()) {
        errorStr << "setLayout " << layoutIndex << " from pipelineLayout " << layout << " has " << descriptorCount
                 << " descriptors, but corresponding set being bound has " << pSet->pLayout->descriptorTypes.size()
                 << " descriptors.";
        errorMsg = errorStr.str();
        return false; // trivial fail case
    }
    // Now need to check set against corresponding pipelineLayout to verify compatibility
    for (size_t i = 0; i < descriptorCount; ++i) {
        // Need to verify that layouts are identically defined
        //  TODO : Is below sufficient? Making sure that types & stageFlags match per descriptor
        //    do we also need to check immutable samplers?
        if (pLayoutNode->descriptorTypes[i] != pSet->pLayout->descriptorTypes[i]) {
            errorStr << "descriptor " << i << " for descriptorSet being bound is type '"
                     << string_VkDescriptorType(pSet->pLayout->descriptorTypes[i])
                     << "' but corresponding descriptor from pipelineLayout is type '"
                     << string_VkDescriptorType(pLayoutNode->descriptorTypes[i]) << "'";
            errorMsg = errorStr.str();
            return false;
        }
        if (pLayoutNode->stageFlags[i] != pSet->pLayout->stageFlags[i]) {
            errorStr << "stageFlags " << i << " for descriptorSet being bound is " << pSet->pLayout->stageFlags[i]
                     << " but corresponding descriptor from pipelineLayout has stageFlags "
                     << pLayoutNode->stageFlags[i];
            errorMsg = errorStr.str();
            return false;
        }
    }
    return true;
}

// Validate that data for each specialization entry is fully contained within the buffer.
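/* Example of the out-of-bounds case caught below (hypothetical values): a
 * VkSpecializationMapEntry with offset = 8 and size = 8, checked against a
 * VkSpecializationInfo with dataSize = 12, references bytes 8..15 of the
 * specialization data -- past the 12 bytes actually provided -- so the check
 * `offset + size > dataSize` flags it. */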
static VkBool32 validate_specialization_offsets(layer_data *my_data, VkPipelineShaderStageCreateInfo const *info) {
    VkBool32 pass = VK_TRUE;

    VkSpecializationInfo const *spec = info->pSpecializationInfo;

    if (spec) {
        for (auto i = 0u; i < spec->mapEntryCount; i++) {
            if (spec->pMapEntries[i].offset + spec->pMapEntries[i].size > spec->dataSize) {
                if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
                            /*dev*/ 0, __LINE__, SHADER_CHECKER_BAD_SPECIALIZATION, "SC",
                            "Specialization entry %u (for constant id %u) references memory outside provided "
                            "specialization data (bytes %u.." PRINTF_SIZE_T_SPECIFIER "; " PRINTF_SIZE_T_SPECIFIER
                            " bytes provided)",
                            i, spec->pMapEntries[i].constantID, spec->pMapEntries[i].offset,
                            spec->pMapEntries[i].offset + spec->pMapEntries[i].size - 1, spec->dataSize)) {
                    pass = VK_FALSE;
                }
            }
        }
    }

    return pass;
}

static bool descriptor_type_match(layer_data *my_data, shader_module const *module, uint32_t type_id,
                                  VkDescriptorType descriptor_type, unsigned &descriptor_count) {
    auto type = module->get_def(type_id);

    descriptor_count = 1;

    /* Strip off any array or ptrs. Where we remove array levels, adjust the
     * descriptor count for each dimension. */
    while (type.opcode() == spv::OpTypeArray || type.opcode() == spv::OpTypePointer) {
        if (type.opcode() == spv::OpTypeArray) {
            descriptor_count *= get_constant_value(module, type.word(3));
            type = module->get_def(type.word(2));
        } else {
            type = module->get_def(type.word(3));
        }
    }

    switch (type.opcode()) {
    case spv::OpTypeStruct: {
        for (auto insn : *module) {
            if (insn.opcode() == spv::OpDecorate && insn.word(1) == type.word(1)) {
                if (insn.word(2) == spv::DecorationBlock) {
                    return descriptor_type == VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER ||
                           descriptor_type == VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC;
                } else if (insn.word(2) == spv::DecorationBufferBlock) {
                    return descriptor_type == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER ||
                           descriptor_type == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC;
                }
            }
        }

        /* Invalid */
        return false;
    }

    case spv::OpTypeSampler:
        return descriptor_type == VK_DESCRIPTOR_TYPE_SAMPLER;

    case spv::OpTypeSampledImage:
        if (descriptor_type == VK_DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER) {
            /* Slight relaxation for some GLSL historical madness: samplerBuffer
             * doesn't really have a sampler, and a texel buffer descriptor
             * doesn't really provide one. Allow this slight mismatch. */
            auto image_type = module->get_def(type.word(2));
            auto dim = image_type.word(3);
            auto sampled = image_type.word(7);
            return dim == spv::DimBuffer && sampled == 1;
        }
        return descriptor_type == VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;

    case spv::OpTypeImage: {
        /* Many descriptor types backing image types -- depends on dimension
         * and whether the image will be used with a sampler. SPIRV for
         * Vulkan requires that sampled be 1 or 2 -- leaving the decision to
         * runtime is unacceptable. */
        auto dim = type.word(3);
        auto sampled = type.word(7);

        if (dim == spv::DimSubpassData) {
            return descriptor_type == VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT;
        } else if (dim == spv::DimBuffer) {
            if (sampled == 1) {
                return descriptor_type == VK_DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER;
            } else {
                return descriptor_type == VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER;
            }
        } else if (sampled == 1) {
            return descriptor_type == VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE;
        } else {
            return descriptor_type == VK_DESCRIPTOR_TYPE_STORAGE_IMAGE;
        }
    }

    /* We shouldn't really see any other junk types -- but if we do, they're
     * a mismatch. */
    default:
        return false; /* Mismatch */
    }
}

static VkBool32 require_feature(layer_data *my_data, VkBool32 feature, char const *feature_name) {
    if (!feature) {
        if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VkDebugReportObjectTypeEXT(0), 0, __LINE__,
                    SHADER_CHECKER_FEATURE_NOT_ENABLED, "SC",
                    "Shader requires VkPhysicalDeviceFeatures::%s but is not enabled on the device", feature_name)) {
            return false;
        }
    }

    return true;
}

static VkBool32 validate_shader_capabilities(layer_data *my_data, shader_module const *src) {
    VkBool32 pass = VK_TRUE;

    auto enabledFeatures = &my_data->physDevProperties.features;

    for (auto insn : *src) {
        if (insn.opcode() == spv::OpCapability) {
            switch (insn.word(1)) {
            case spv::CapabilityMatrix:
            case spv::CapabilityShader:
            case spv::CapabilityInputAttachment:
            case spv::CapabilitySampled1D:
            case spv::CapabilityImage1D:
            case spv::CapabilitySampledBuffer:
            case spv::CapabilityImageBuffer:
            case spv::CapabilityImageQuery:
            case spv::CapabilityDerivativeControl:
                // Always supported by a Vulkan 1.0 implementation -- no feature bits.
break; case spv::CapabilityGeometry: pass &= require_feature(my_data, enabledFeatures->geometryShader, "geometryShader"); break; case spv::CapabilityTessellation: pass &= require_feature(my_data, enabledFeatures->tessellationShader, "tessellationShader"); break; case spv::CapabilityFloat64: pass &= require_feature(my_data, enabledFeatures->shaderFloat64, "shaderFloat64"); break; case spv::CapabilityInt64: pass &= require_feature(my_data, enabledFeatures->shaderInt64, "shaderInt64"); break; case spv::CapabilityTessellationPointSize: case spv::CapabilityGeometryPointSize: pass &= require_feature(my_data, enabledFeatures->shaderTessellationAndGeometryPointSize, "shaderTessellationAndGeometryPointSize"); break; case spv::CapabilityImageGatherExtended: pass &= require_feature(my_data, enabledFeatures->shaderImageGatherExtended, "shaderImageGatherExtended"); break; case spv::CapabilityStorageImageMultisample: pass &= require_feature(my_data, enabledFeatures->shaderStorageImageMultisample, "shaderStorageImageMultisample"); break; case spv::CapabilityUniformBufferArrayDynamicIndexing: pass &= require_feature(my_data, enabledFeatures->shaderUniformBufferArrayDynamicIndexing, "shaderUniformBufferArrayDynamicIndexing"); break; case spv::CapabilitySampledImageArrayDynamicIndexing: pass &= require_feature(my_data, enabledFeatures->shaderSampledImageArrayDynamicIndexing, "shaderSampledImageArrayDynamicIndexing"); break; case spv::CapabilityStorageBufferArrayDynamicIndexing: pass &= require_feature(my_data, enabledFeatures->shaderStorageBufferArrayDynamicIndexing, "shaderStorageBufferArrayDynamicIndexing"); break; case spv::CapabilityStorageImageArrayDynamicIndexing: pass &= require_feature(my_data, enabledFeatures->shaderStorageImageArrayDynamicIndexing, "shaderStorageImageArrayDynamicIndexing"); break; case spv::CapabilityClipDistance: pass &= require_feature(my_data, enabledFeatures->shaderClipDistance, "shaderClipDistance"); break; case spv::CapabilityCullDistance: pass &= 
require_feature(my_data, enabledFeatures->shaderCullDistance, "shaderCullDistance"); break; case spv::CapabilityImageCubeArray: pass &= require_feature(my_data, enabledFeatures->imageCubeArray, "imageCubeArray"); break; case spv::CapabilitySampleRateShading: pass &= require_feature(my_data, enabledFeatures->sampleRateShading, "sampleRateShading"); break; case spv::CapabilitySparseResidency: pass &= require_feature(my_data, enabledFeatures->shaderResourceResidency, "shaderResourceResidency"); break; case spv::CapabilityMinLod: pass &= require_feature(my_data, enabledFeatures->shaderResourceMinLod, "shaderResourceMinLod"); break; case spv::CapabilitySampledCubeArray: pass &= require_feature(my_data, enabledFeatures->imageCubeArray, "imageCubeArray"); break; case spv::CapabilityImageMSArray: pass &= require_feature(my_data, enabledFeatures->shaderStorageImageMultisample, "shaderStorageImageMultisample"); break; case spv::CapabilityStorageImageExtendedFormats: pass &= require_feature(my_data, enabledFeatures->shaderStorageImageExtendedFormats, "shaderStorageImageExtendedFormats"); break; case spv::CapabilityInterpolationFunction: pass &= require_feature(my_data, enabledFeatures->sampleRateShading, "sampleRateShading"); break; case spv::CapabilityStorageImageReadWithoutFormat: pass &= require_feature(my_data, enabledFeatures->shaderStorageImageReadWithoutFormat, "shaderStorageImageReadWithoutFormat"); break; case spv::CapabilityStorageImageWriteWithoutFormat: pass &= require_feature(my_data, enabledFeatures->shaderStorageImageWriteWithoutFormat, "shaderStorageImageWriteWithoutFormat"); break; case spv::CapabilityMultiViewport: pass &= require_feature(my_data, enabledFeatures->multiViewport, "multiViewport"); break; default: if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VkDebugReportObjectTypeEXT(0), 0, __LINE__, SHADER_CHECKER_BAD_CAPABILITY, "SC", "Shader declares capability %u, not supported in Vulkan.", insn.word(1))) pass = VK_FALSE; break; } } } 
    return pass;
}

static VkBool32 validate_pipeline_shader_stage(layer_data *dev_data, VkPipelineShaderStageCreateInfo const *pStage,
                                               PIPELINE_NODE *pipeline, PIPELINE_LAYOUT_NODE *pipelineLayout,
                                               shader_module **out_module, spirv_inst_iter *out_entrypoint) {
    VkBool32 pass = VK_TRUE;
    auto module = *out_module = dev_data->shaderModuleMap[pStage->module].get();
    pass &= validate_specialization_offsets(dev_data, pStage);

    /* find the entrypoint */
    auto entrypoint = *out_entrypoint = find_entrypoint(module, pStage->pName, pStage->stage);
    if (entrypoint == module->end()) {
        if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VkDebugReportObjectTypeEXT(0), 0, __LINE__,
                    SHADER_CHECKER_MISSING_ENTRYPOINT, "SC", "No entrypoint found named `%s` for stage %s", pStage->pName,
                    string_VkShaderStageFlagBits(pStage->stage))) {
            pass = VK_FALSE;
        }
    }

    /* validate shader capabilities against enabled device features */
    pass &= validate_shader_capabilities(dev_data, module);

    /* mark accessible ids */
    std::unordered_set<uint32_t> accessible_ids;
    mark_accessible_ids(module, entrypoint, accessible_ids);

    /* validate descriptor set layout against what the entrypoint actually uses */
    std::map<descriptor_slot_t, interface_var> descriptor_uses;
    collect_interface_by_descriptor_slot(dev_data, module, accessible_ids, descriptor_uses);

    /* validate push constant usage */
    pass &= validate_push_constant_usage(dev_data, &pipelineLayout->pushConstantRanges, module, accessible_ids, pStage->stage);

    /* validate descriptor use */
    for (auto use : descriptor_uses) {
        // While validating shaders capture which slots are used by the pipeline
        pipeline->active_slots[use.first.first].insert(use.first.second);

        /* find the matching binding */
        auto binding = get_descriptor_binding(dev_data, pipelineLayout, use.first);
        unsigned required_descriptor_count;

        if (!binding) {
            if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VkDebugReportObjectTypeEXT(0), 0, __LINE__,
                        SHADER_CHECKER_MISSING_DESCRIPTOR, "SC", "Shader uses descriptor slot %u.%u
(used as type `%s`) but not declared in pipeline layout",
                        use.first.first, use.first.second, describe_type(module, use.second.type_id).c_str())) {
                pass = VK_FALSE;
            }
        } else if (~binding->stageFlags & pStage->stage) {
            if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
                        /*dev*/ 0, __LINE__, SHADER_CHECKER_DESCRIPTOR_NOT_ACCESSIBLE_FROM_STAGE, "SC",
                        "Shader uses descriptor slot %u.%u (used "
                        "as type `%s`) but descriptor not "
                        "accessible from stage %s",
                        use.first.first, use.first.second, describe_type(module, use.second.type_id).c_str(),
                        string_VkShaderStageFlagBits(pStage->stage))) {
                pass = VK_FALSE;
            }
        } else if (!descriptor_type_match(dev_data, module, use.second.type_id, binding->descriptorType,
                                          /*out*/ required_descriptor_count)) {
            if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VkDebugReportObjectTypeEXT(0), 0, __LINE__,
                        SHADER_CHECKER_DESCRIPTOR_TYPE_MISMATCH, "SC",
                        "Type mismatch on descriptor slot "
                        "%u.%u (used as type `%s`) but "
                        "descriptor of type %s",
                        use.first.first, use.first.second, describe_type(module, use.second.type_id).c_str(),
                        string_VkDescriptorType(binding->descriptorType))) {
                pass = VK_FALSE;
            }
        } else if (binding->descriptorCount < required_descriptor_count) {
            if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VkDebugReportObjectTypeEXT(0), 0, __LINE__,
                        SHADER_CHECKER_DESCRIPTOR_TYPE_MISMATCH, "SC",
                        "Shader expects at least %u descriptors for binding %u.%u (used as type `%s`) but only %u provided",
                        required_descriptor_count, use.first.first, use.first.second,
                        describe_type(module, use.second.type_id).c_str(), binding->descriptorCount)) {
                pass = VK_FALSE;
            }
        }
    }

    return pass;
}

// Validate the shaders used by the given pipeline and store the active_slots
// that are actually used by the pipeline into pPipeline->active_slots
static VkBool32 validate_and_capture_pipeline_shader_state(layer_data *my_data, PIPELINE_NODE *pPipeline) {
VkGraphicsPipelineCreateInfo const *pCreateInfo = &pPipeline->graphicsPipelineCI; int vertex_stage = get_shader_stage_id(VK_SHADER_STAGE_VERTEX_BIT); int fragment_stage = get_shader_stage_id(VK_SHADER_STAGE_FRAGMENT_BIT); shader_module *shaders[5]; memset(shaders, 0, sizeof(shaders)); spirv_inst_iter entrypoints[5]; memset(entrypoints, 0, sizeof(entrypoints)); VkPipelineVertexInputStateCreateInfo const *vi = 0; VkBool32 pass = VK_TRUE; auto pipelineLayout = pCreateInfo->layout != VK_NULL_HANDLE ? &my_data->pipelineLayoutMap[pCreateInfo->layout] : nullptr; for (uint32_t i = 0; i < pCreateInfo->stageCount; i++) { VkPipelineShaderStageCreateInfo const *pStage = &pCreateInfo->pStages[i]; auto stage_id = get_shader_stage_id(pStage->stage); pass &= validate_pipeline_shader_stage(my_data, pStage, pPipeline, pipelineLayout, &shaders[stage_id], &entrypoints[stage_id]); } vi = pCreateInfo->pVertexInputState; if (vi) { pass &= validate_vi_consistency(my_data, vi); } if (shaders[vertex_stage]) { pass &= validate_vi_against_vs_inputs(my_data, vi, shaders[vertex_stage], entrypoints[vertex_stage]); } int producer = get_shader_stage_id(VK_SHADER_STAGE_VERTEX_BIT); int consumer = get_shader_stage_id(VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT); while (!shaders[producer] && producer != fragment_stage) { producer++; consumer++; } for (; producer != fragment_stage && consumer <= fragment_stage; consumer++) { assert(shaders[producer]); if (shaders[consumer]) { pass &= validate_interface_between_stages(my_data, shaders[producer], entrypoints[producer], shader_stage_attribs[producer].name, shaders[consumer], entrypoints[consumer], shader_stage_attribs[consumer].name, shader_stage_attribs[consumer].arrayed_input); producer = consumer; } } auto rp = pCreateInfo->renderPass != VK_NULL_HANDLE ? 
my_data->renderPassMap[pCreateInfo->renderPass] : nullptr; if (shaders[fragment_stage] && rp) { pass &= validate_fs_outputs_against_render_pass(my_data, shaders[fragment_stage], entrypoints[fragment_stage], rp, pCreateInfo->subpass); } return pass; } static VkBool32 validate_compute_pipeline(layer_data *my_data, PIPELINE_NODE *pPipeline) { VkComputePipelineCreateInfo const *pCreateInfo = &pPipeline->computePipelineCI; auto pipelineLayout = pCreateInfo->layout != VK_NULL_HANDLE ? &my_data->pipelineLayoutMap[pCreateInfo->layout] : nullptr; shader_module *module; spirv_inst_iter entrypoint; return validate_pipeline_shader_stage(my_data, &pCreateInfo->stage, pPipeline, pipelineLayout, &module, &entrypoint); } // Return Set node ptr for specified set or else NULL static SET_NODE *getSetNode(layer_data *my_data, const VkDescriptorSet set) { if (my_data->setMap.find(set) == my_data->setMap.end()) { return NULL; } return my_data->setMap[set]; } // For given Layout Node and binding, return index where that binding begins static uint32_t getBindingStartIndex(const LAYOUT_NODE *pLayout, const uint32_t binding) { uint32_t offsetIndex = 0; for (uint32_t i = 0; i < pLayout->createInfo.bindingCount; i++) { if (pLayout->createInfo.pBindings[i].binding == binding) break; offsetIndex += pLayout->createInfo.pBindings[i].descriptorCount; } return offsetIndex; } // For given layout node and binding, return last index that is updated static uint32_t getBindingEndIndex(const LAYOUT_NODE *pLayout, const uint32_t binding) { uint32_t offsetIndex = 0; for (uint32_t i = 0; i < pLayout->createInfo.bindingCount; i++) { offsetIndex += pLayout->createInfo.pBindings[i].descriptorCount; if (pLayout->createInfo.pBindings[i].binding == binding) break; } return offsetIndex - 1; } // For the given command buffer, verify and update the state for activeSetBindingsPairs // This includes: // 1. Verifying that any dynamic descriptor in that set has a valid dynamic offset bound. 
//    To be valid, the dynamic offset combined with the offset and range from its
//    descriptor update must not overflow the size of its buffer being updated
// 2. Grow updateImages for given pCB to include any bound STORAGE_IMAGE descriptor images
// 3. Grow updateBuffers for pCB to include buffers from STORAGE*_BUFFER descriptor buffers
static VkBool32 validate_and_update_drawtime_descriptor_state(
    layer_data *dev_data, GLOBAL_CB_NODE *pCB,
    const vector<std::pair<SET_NODE *, unordered_set<uint32_t>>> &activeSetBindingsPairs) {
    VkBool32 result = VK_FALSE;

    VkWriteDescriptorSet *pWDS = NULL;
    uint32_t dynOffsetIndex = 0;
    VkDeviceSize bufferSize = 0;
    for (auto set_bindings_pair : activeSetBindingsPairs) {
        SET_NODE *set_node = set_bindings_pair.first;
        LAYOUT_NODE *layout_node = set_node->pLayout;
        for (auto binding : set_bindings_pair.second) {
            uint32_t startIdx = getBindingStartIndex(layout_node, binding);
            uint32_t endIdx = getBindingEndIndex(layout_node, binding);
            for (uint32_t i = startIdx; i <= endIdx; ++i) {
                // TODO : Flag error here if set_node->pDescriptorUpdates[i] is NULL
                switch (set_node->pDescriptorUpdates[i]->sType) {
                case VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET:
                    pWDS = (VkWriteDescriptorSet *)set_node->pDescriptorUpdates[i];
                    if ((pWDS->descriptorType == VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC) ||
                        (pWDS->descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC)) {
                        for (uint32_t j = 0; j < pWDS->descriptorCount; ++j) {
                            bufferSize = dev_data->bufferMap[pWDS->pBufferInfo[j].buffer].create_info->size;
                            uint32_t dynOffset =
                                pCB->lastBound[VK_PIPELINE_BIND_POINT_GRAPHICS].dynamicOffsets[dynOffsetIndex];
                            if (pWDS->pBufferInfo[j].range == VK_WHOLE_SIZE) {
                                if ((dynOffset + pWDS->pBufferInfo[j].offset) > bufferSize) {
                                    result |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                                      VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
                                                      reinterpret_cast<uint64_t &>(set_node->set), __LINE__,
                                                      DRAWSTATE_DYNAMIC_OFFSET_OVERFLOW, "DS",
                                                      "VkDescriptorSet (%#" PRIxLEAST64
                                                      ") bound as set #%u has range of "
                                                      "VK_WHOLE_SIZE but dynamic
offset %#" PRIxLEAST32 ". "
                                                      "combined with offset %#" PRIxLEAST64 " oversteps its buffer (%#" PRIxLEAST64
                                                      ") which has a size of %#" PRIxLEAST64 ".",
                                                      reinterpret_cast<uint64_t &>(set_node->set), i, dynOffset,
                                                      pWDS->pBufferInfo[j].offset,
                                                      reinterpret_cast<uint64_t &>(pWDS->pBufferInfo[j].buffer), bufferSize);
                                }
                            } else if ((dynOffset + pWDS->pBufferInfo[j].offset + pWDS->pBufferInfo[j].range) > bufferSize) {
                                result |= log_msg(
                                    dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, reinterpret_cast<uint64_t &>(set_node->set),
                                    __LINE__, DRAWSTATE_DYNAMIC_OFFSET_OVERFLOW, "DS",
                                    "VkDescriptorSet (%#" PRIxLEAST64 ") bound as set #%u has dynamic offset %#" PRIxLEAST32 ". "
                                    "Combined with offset %#" PRIxLEAST64 " and range %#" PRIxLEAST64
                                    " from its update, this oversteps its buffer "
                                    "(%#" PRIxLEAST64 ") which has a size of %#" PRIxLEAST64 ".",
                                    reinterpret_cast<uint64_t &>(set_node->set), i, dynOffset, pWDS->pBufferInfo[j].offset,
                                    pWDS->pBufferInfo[j].range, reinterpret_cast<uint64_t &>(pWDS->pBufferInfo[j].buffer),
                                    bufferSize);
                            }
                            dynOffsetIndex++;
                        }
                    } else if (pWDS->descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_IMAGE) {
                        for (uint32_t j = 0; j < pWDS->descriptorCount; ++j) {
                            pCB->updateImages.insert(pWDS->pImageInfo[j].imageView);
                        }
                    } else if (pWDS->descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER) {
                        for (uint32_t j = 0; j < pWDS->descriptorCount; ++j) {
                            assert(dev_data->bufferViewMap.find(pWDS->pTexelBufferView[j]) != dev_data->bufferViewMap.end());
                            pCB->updateBuffers.insert(dev_data->bufferViewMap[pWDS->pTexelBufferView[j]].buffer);
                        }
                    } else if (pWDS->descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER ||
                               pWDS->descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC) {
                        for (uint32_t j = 0; j < pWDS->descriptorCount; ++j) {
                            pCB->updateBuffers.insert(pWDS->pBufferInfo[j].buffer);
                        }
                    }
                    i += pWDS->descriptorCount; // Advance i to end of this set of descriptors (++i at end of for loop will move 1
                                                // index past last of these descriptors)
                    break;
                default: // Currently only shadowing Write update nodes so shouldn't get here
                    assert(0);
                    continue;
                }
            }
        }
    }
    return result;
}
// TODO : This is a temp function that naively updates bound storage images and buffers based on which descriptor sets are bound.
// When validate_and_update_draw_state() handles compute shaders so that active_slots is correct for compute pipelines, this
// function can be killed and validate_and_update_draw_state() used instead
static void update_shader_storage_images_and_buffers(layer_data *dev_data, GLOBAL_CB_NODE *pCB) {
    VkWriteDescriptorSet *pWDS = nullptr;
    SET_NODE *pSet = nullptr;
    // For the bound descriptor sets, pull off any storage images and buffers
    // This may be more than are actually updated depending on which are active, but for now this is a stop-gap for compute
    // pipelines
    for (auto set : pCB->lastBound[VK_PIPELINE_BIND_POINT_COMPUTE].uniqueBoundSets) {
        // Get the set node
        pSet = getSetNode(dev_data, set);
        // For each update in the set
        for (auto pUpdate : pSet->pDescriptorUpdates) {
            // If it's a write update to STORAGE type capture image/buffer being updated
            if (pUpdate && (pUpdate->sType == VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET)) {
                pWDS = reinterpret_cast<VkWriteDescriptorSet *>(pUpdate);
                if (pWDS->descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_IMAGE) {
                    for (uint32_t j = 0; j < pWDS->descriptorCount; ++j) {
                        pCB->updateImages.insert(pWDS->pImageInfo[j].imageView);
                    }
                } else if (pWDS->descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER) {
                    for (uint32_t j = 0; j < pWDS->descriptorCount; ++j) {
                        pCB->updateBuffers.insert(dev_data->bufferViewMap[pWDS->pTexelBufferView[j]].buffer);
                    }
                } else if (pWDS->descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER ||
                           pWDS->descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC) {
                    for (uint32_t j = 0; j < pWDS->descriptorCount; ++j) {
                        pCB->updateBuffers.insert(pWDS->pBufferInfo[j].buffer);
                    }
                }
            }
        }
    }
}
// Validate overall state at the time of a draw call
static VkBool32 validate_and_update_draw_state(layer_data *my_data, GLOBAL_CB_NODE *pCB, const VkBool32 indexedDraw,
                                               const VkPipelineBindPoint bindPoint) {
    VkBool32 result = VK_FALSE;
    auto const &state = pCB->lastBound[bindPoint];
    PIPELINE_NODE *pPipe = getPipeline(my_data, state.pipeline);
    // First check
flag states
    if (VK_PIPELINE_BIND_POINT_GRAPHICS == bindPoint)
        result = validate_draw_state_flags(my_data, pCB, pPipe, indexedDraw);

    // Now complete other state checks
    // TODO : Currently only performing next check if *something* was bound (non-zero last bound)
    //  There is probably a better way to gate when this check happens, and to know if something *should* have been bound
    //  We should have that check separately and then gate this check based on that check
    if (pPipe) {
        if (state.pipelineLayout) {
            string errorString;
            // Need a vector (vs. std::set) of active Sets for dynamicOffset validation in case same set bound w/ different offsets
            vector<std::pair<SET_NODE *, unordered_set<uint32_t>>> activeSetBindingsPairs;
            for (auto setBindingPair : pPipe->active_slots) {
                uint32_t setIndex = setBindingPair.first;
                // If valid set is not bound throw an error
                if ((state.boundDescriptorSets.size() <= setIndex) || (!state.boundDescriptorSets[setIndex])) {
                    result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                      __LINE__, DRAWSTATE_DESCRIPTOR_SET_NOT_BOUND, "DS",
                                      "VkPipeline %#" PRIxLEAST64 " uses set #%u but that set is not bound.",
                                      (uint64_t)pPipe->pipeline, setIndex);
                } else if (!verify_set_layout_compatibility(my_data, my_data->setMap[state.boundDescriptorSets[setIndex]],
                                                            pPipe->graphicsPipelineCI.layout, setIndex, errorString)) {
                    // Set is bound but not compatible w/ overlapping pipelineLayout from PSO
                    VkDescriptorSet setHandle = my_data->setMap[state.boundDescriptorSets[setIndex]]->set;
                    result |= log_msg(
                        my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
                        (uint64_t)setHandle, __LINE__, DRAWSTATE_PIPELINE_LAYOUTS_INCOMPATIBLE, "DS",
                        "VkDescriptorSet (%#" PRIxLEAST64
                        ") bound as set #%u is not compatible with overlapping VkPipelineLayout %#" PRIxLEAST64 " due to: %s",
                        (uint64_t)setHandle, setIndex, (uint64_t)pPipe->graphicsPipelineCI.layout, errorString.c_str());
                } else { // Valid set is bound and layout compatible, validate
that it's updated // Pull the set node SET_NODE *pSet = my_data->setMap[state.boundDescriptorSets[setIndex]]; // Save vector of all active sets to verify dynamicOffsets below // activeSetNodes.push_back(pSet); activeSetBindingsPairs.push_back(std::make_pair(pSet, setBindingPair.second)); // Make sure set has been updated if (!pSet->pUpdateStructs) { result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pSet->set, __LINE__, DRAWSTATE_DESCRIPTOR_SET_NOT_UPDATED, "DS", "DS %#" PRIxLEAST64 " bound but it was never updated. It is now being used to draw so " "this will result in undefined behavior.", (uint64_t)pSet->set); } } } // For given active slots, verify any dynamic descriptors and record updated images & buffers result |= validate_and_update_drawtime_descriptor_state(my_data, pCB, activeSetBindingsPairs); } if (VK_PIPELINE_BIND_POINT_GRAPHICS == bindPoint) { // Verify Vtx binding if (pPipe->vertexBindingDescriptions.size() > 0) { for (size_t i = 0; i < pPipe->vertexBindingDescriptions.size(); i++) { if ((pCB->currentDrawData.buffers.size() < (i + 1)) || (pCB->currentDrawData.buffers[i] == VK_NULL_HANDLE)) { result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS, "DS", "The Pipeline State Object (%#" PRIxLEAST64 ") expects that this Command Buffer's vertex binding Index " PRINTF_SIZE_T_SPECIFIER " should be set via vkCmdBindVertexBuffers.", (uint64_t)state.pipeline, i); } } } else { if (!pCB->currentDrawData.buffers.empty()) { result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS, "DS", "Vertex buffers are bound to command buffer (%#" PRIxLEAST64 ") but no vertex buffers are attached to this Pipeline State Object (%#" PRIxLEAST64 ").", (uint64_t)pCB->commandBuffer, (uint64_t)state.pipeline); 
                }
            }
            // If Viewport or scissors are dynamic, verify that dynamic count matches PSO count.
            // Skip check if rasterization is disabled or there is no viewport.
            if ((!pPipe->graphicsPipelineCI.pRasterizationState ||
                 !pPipe->graphicsPipelineCI.pRasterizationState->rasterizerDiscardEnable) &&
                pPipe->graphicsPipelineCI.pViewportState) {
                VkBool32 dynViewport = isDynamic(pPipe, VK_DYNAMIC_STATE_VIEWPORT);
                VkBool32 dynScissor = isDynamic(pPipe, VK_DYNAMIC_STATE_SCISSOR);
                if (dynViewport) {
                    if (pCB->viewports.size() != pPipe->graphicsPipelineCI.pViewportState->viewportCount) {
                        result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                          __LINE__, DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, "DS",
                                          "Dynamic viewportCount from vkCmdSetViewport() is " PRINTF_SIZE_T_SPECIFIER
                                          ", but PSO viewportCount is %u. These counts must match.",
                                          pCB->viewports.size(), pPipe->graphicsPipelineCI.pViewportState->viewportCount);
                    }
                }
                if (dynScissor) {
                    if (pCB->scissors.size() != pPipe->graphicsPipelineCI.pViewportState->scissorCount) {
                        result |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                          __LINE__, DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, "DS",
                                          "Dynamic scissorCount from vkCmdSetScissor() is " PRINTF_SIZE_T_SPECIFIER
                                          ", but PSO scissorCount is %u. These counts must match.",
                                          pCB->scissors.size(), pPipe->graphicsPipelineCI.pViewportState->scissorCount);
                    }
                }
            }
        }
    }
    return result;
}

// Verify that create state for a pipeline is valid
static VkBool32 verifyPipelineCreateState(layer_data *my_data, const VkDevice device, std::vector<PIPELINE_NODE *> pPipelines,
                                          int pipelineIndex) {
    VkBool32 skipCall = VK_FALSE;

    PIPELINE_NODE *pPipeline = pPipelines[pipelineIndex];

    // If create derivative bit is set, check that we've specified a base
    // pipeline correctly, and that the base pipeline was created to allow
    // derivatives.
if (pPipeline->graphicsPipelineCI.flags & VK_PIPELINE_CREATE_DERIVATIVE_BIT) { PIPELINE_NODE *pBasePipeline = nullptr; if (!((pPipeline->graphicsPipelineCI.basePipelineHandle != VK_NULL_HANDLE) ^ (pPipeline->graphicsPipelineCI.basePipelineIndex != -1))) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS", "Invalid Pipeline CreateInfo: exactly one of base pipeline index and handle must be specified"); } else if (pPipeline->graphicsPipelineCI.basePipelineIndex != -1) { if (pPipeline->graphicsPipelineCI.basePipelineIndex >= pipelineIndex) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS", "Invalid Pipeline CreateInfo: base pipeline must occur earlier in array than derivative pipeline."); } else { pBasePipeline = pPipelines[pPipeline->graphicsPipelineCI.basePipelineIndex]; } } else if (pPipeline->graphicsPipelineCI.basePipelineHandle != VK_NULL_HANDLE) { pBasePipeline = getPipeline(my_data, pPipeline->graphicsPipelineCI.basePipelineHandle); } if (pBasePipeline && !(pBasePipeline->graphicsPipelineCI.flags & VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT)) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS", "Invalid Pipeline CreateInfo: base pipeline does not allow derivatives."); } } if (pPipeline->graphicsPipelineCI.pColorBlendState != NULL) { if (!my_data->physDevProperties.features.independentBlend) { if (pPipeline->attachments.size() > 1) { VkPipelineColorBlendAttachmentState *pAttachments = &pPipeline->attachments[0]; for (size_t i = 1; i < pPipeline->attachments.size(); i++) { if ((pAttachments[0].blendEnable != pAttachments[i].blendEnable) || (pAttachments[0].srcColorBlendFactor != pAttachments[i].srcColorBlendFactor) || 
(pAttachments[0].dstColorBlendFactor != pAttachments[i].dstColorBlendFactor) || (pAttachments[0].colorBlendOp != pAttachments[i].colorBlendOp) || (pAttachments[0].srcAlphaBlendFactor != pAttachments[i].srcAlphaBlendFactor) || (pAttachments[0].dstAlphaBlendFactor != pAttachments[i].dstAlphaBlendFactor) || (pAttachments[0].alphaBlendOp != pAttachments[i].alphaBlendOp) || (pAttachments[0].colorWriteMask != pAttachments[i].colorWriteMask)) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INDEPENDENT_BLEND, "DS", "Invalid Pipeline CreateInfo: If independent blend feature not " "enabled, all elements of pAttachments must be identical"); } } } } if (!my_data->physDevProperties.features.logicOp && (pPipeline->graphicsPipelineCI.pColorBlendState->logicOpEnable != VK_FALSE)) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_DISABLED_LOGIC_OP, "DS", "Invalid Pipeline CreateInfo: If logic operations feature not enabled, logicOpEnable must be VK_FALSE"); } if ((pPipeline->graphicsPipelineCI.pColorBlendState->logicOpEnable == VK_TRUE) && ((pPipeline->graphicsPipelineCI.pColorBlendState->logicOp < VK_LOGIC_OP_CLEAR) || (pPipeline->graphicsPipelineCI.pColorBlendState->logicOp > VK_LOGIC_OP_SET))) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_LOGIC_OP, "DS", "Invalid Pipeline CreateInfo: If logicOpEnable is VK_TRUE, logicOp must be a valid VkLogicOp value"); } } // Ensure the subpass index is valid. If not, then validate_and_capture_pipeline_shader_state // produces nonsense errors that confuse users. Other layers should already // emit errors for renderpass being invalid. 
auto rp_data = my_data->renderPassMap.find(pPipeline->graphicsPipelineCI.renderPass); if (rp_data != my_data->renderPassMap.end() && pPipeline->graphicsPipelineCI.subpass >= rp_data->second->pCreateInfo->subpassCount) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS", "Invalid Pipeline CreateInfo State: Subpass index %u " "is out of range for this renderpass (0..%u)", pPipeline->graphicsPipelineCI.subpass, rp_data->second->pCreateInfo->subpassCount - 1); } if (!validate_and_capture_pipeline_shader_state(my_data, pPipeline)) { skipCall = VK_TRUE; } // VS is required if (!(pPipeline->active_shaders & VK_SHADER_STAGE_VERTEX_BIT)) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS", "Invalid Pipeline CreateInfo State: Vtx Shader required"); } // Either both or neither TC/TE shaders should be defined if (((pPipeline->active_shaders & VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT) == 0) != ((pPipeline->active_shaders & VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT) == 0)) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS", "Invalid Pipeline CreateInfo State: TE and TC shaders must be included or excluded as a pair"); } // Compute shaders should be specified independent of Gfx shaders if ((pPipeline->active_shaders & VK_SHADER_STAGE_COMPUTE_BIT) && (pPipeline->active_shaders & (VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT | VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT | VK_SHADER_STAGE_GEOMETRY_BIT | VK_SHADER_STAGE_FRAGMENT_BIT))) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS", "Invalid Pipeline 
CreateInfo State: Do not specify Compute Shader for Gfx Pipeline"); } // VK_PRIMITIVE_TOPOLOGY_PATCH_LIST primitive topology is only valid for tessellation pipelines. // Mismatching primitive topology and tessellation fails graphics pipeline creation. if (pPipeline->active_shaders & (VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT | VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT) && (pPipeline->iaStateCI.topology != VK_PRIMITIVE_TOPOLOGY_PATCH_LIST)) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS", "Invalid Pipeline CreateInfo State: " "VK_PRIMITIVE_TOPOLOGY_PATCH_LIST must be set as IA " "topology for tessellation pipelines"); } if (pPipeline->iaStateCI.topology == VK_PRIMITIVE_TOPOLOGY_PATCH_LIST) { if (~pPipeline->active_shaders & VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS", "Invalid Pipeline CreateInfo State: " "VK_PRIMITIVE_TOPOLOGY_PATCH_LIST primitive " "topology is only valid for tessellation pipelines"); } if (!pPipeline->tessStateCI.patchControlPoints || (pPipeline->tessStateCI.patchControlPoints > 32)) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, "DS", "Invalid Pipeline CreateInfo State: " "VK_PRIMITIVE_TOPOLOGY_PATCH_LIST primitive " "topology used with patchControlPoints value %u." " patchControlPoints should be >0 and <=32.", pPipeline->tessStateCI.patchControlPoints); } } // Viewport state must be included if rasterization is enabled. // If the viewport state is included, the viewport and scissor counts should always match. 
// NOTE : Even if these are flagged as dynamic, counts need to be set correctly for shader compiler if (!pPipeline->graphicsPipelineCI.pRasterizationState || !pPipeline->graphicsPipelineCI.pRasterizationState->rasterizerDiscardEnable) { if (!pPipeline->graphicsPipelineCI.pViewportState) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, "DS", "Gfx Pipeline pViewportState is null. Even if viewport " "and scissors are dynamic PSO must include " "viewportCount and scissorCount in pViewportState."); } else if (pPipeline->graphicsPipelineCI.pViewportState->scissorCount != pPipeline->graphicsPipelineCI.pViewportState->viewportCount) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, "DS", "Gfx Pipeline viewport count (%u) must match scissor count (%u).", pPipeline->vpStateCI.viewportCount, pPipeline->vpStateCI.scissorCount); } else { // If viewport or scissor are not dynamic, then verify that data is appropriate for count VkBool32 dynViewport = isDynamic(pPipeline, VK_DYNAMIC_STATE_VIEWPORT); VkBool32 dynScissor = isDynamic(pPipeline, VK_DYNAMIC_STATE_SCISSOR); if (!dynViewport) { if (pPipeline->graphicsPipelineCI.pViewportState->viewportCount && !pPipeline->graphicsPipelineCI.pViewportState->pViewports) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, "DS", "Gfx Pipeline viewportCount is %u, but pViewports is NULL. 
For non-zero viewportCount, you " "must either include pViewports data, or include viewport in pDynamicState and set it with " "vkCmdSetViewport().", pPipeline->graphicsPipelineCI.pViewportState->viewportCount); } } if (!dynScissor) { if (pPipeline->graphicsPipelineCI.pViewportState->scissorCount && !pPipeline->graphicsPipelineCI.pViewportState->pScissors) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH, "DS", "Gfx Pipeline scissorCount is %u, but pScissors is NULL. For non-zero scissorCount, you " "must either include pScissors data, or include scissor in pDynamicState and set it with " "vkCmdSetScissor().", pPipeline->graphicsPipelineCI.pViewportState->scissorCount); } } } } return skipCall; } // Init the pipeline mapping info based on pipeline create info LL tree // Threading note : Calls to this function should be wrapped in mutex // TODO : this should really just be in the constructor for PIPELINE_NODE static PIPELINE_NODE *initGraphicsPipeline(layer_data *dev_data, const VkGraphicsPipelineCreateInfo *pCreateInfo) { PIPELINE_NODE *pPipeline = new PIPELINE_NODE; // First init create info memcpy(&pPipeline->graphicsPipelineCI, pCreateInfo, sizeof(VkGraphicsPipelineCreateInfo)); size_t bufferSize = 0; const VkPipelineVertexInputStateCreateInfo *pVICI = NULL; const VkPipelineColorBlendStateCreateInfo *pCBCI = NULL; for (uint32_t i = 0; i < pCreateInfo->stageCount; i++) { const VkPipelineShaderStageCreateInfo *pPSSCI = &pCreateInfo->pStages[i]; switch (pPSSCI->stage) { case VK_SHADER_STAGE_VERTEX_BIT: memcpy(&pPipeline->vsCI, pPSSCI, sizeof(VkPipelineShaderStageCreateInfo)); pPipeline->active_shaders |= VK_SHADER_STAGE_VERTEX_BIT; break; case VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT: memcpy(&pPipeline->tcsCI, pPSSCI, sizeof(VkPipelineShaderStageCreateInfo)); pPipeline->active_shaders |= VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT; break; case
VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT: memcpy(&pPipeline->tesCI, pPSSCI, sizeof(VkPipelineShaderStageCreateInfo)); pPipeline->active_shaders |= VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT; break; case VK_SHADER_STAGE_GEOMETRY_BIT: memcpy(&pPipeline->gsCI, pPSSCI, sizeof(VkPipelineShaderStageCreateInfo)); pPipeline->active_shaders |= VK_SHADER_STAGE_GEOMETRY_BIT; break; case VK_SHADER_STAGE_FRAGMENT_BIT: memcpy(&pPipeline->fsCI, pPSSCI, sizeof(VkPipelineShaderStageCreateInfo)); pPipeline->active_shaders |= VK_SHADER_STAGE_FRAGMENT_BIT; break; case VK_SHADER_STAGE_COMPUTE_BIT: // TODO : Flag error, CS is specified through VkComputePipelineCreateInfo pPipeline->active_shaders |= VK_SHADER_STAGE_COMPUTE_BIT; break; default: // TODO : Flag error break; } } // Copy over GraphicsPipelineCreateInfo structure embedded pointers if (pCreateInfo->stageCount != 0) { pPipeline->graphicsPipelineCI.pStages = new VkPipelineShaderStageCreateInfo[pCreateInfo->stageCount]; bufferSize = pCreateInfo->stageCount * sizeof(VkPipelineShaderStageCreateInfo); memcpy((void *)pPipeline->graphicsPipelineCI.pStages, pCreateInfo->pStages, bufferSize); } if (pCreateInfo->pVertexInputState != NULL) { pPipeline->vertexInputCI = *pCreateInfo->pVertexInputState; // Copy embedded ptrs pVICI = pCreateInfo->pVertexInputState; if (pVICI->vertexBindingDescriptionCount) { pPipeline->vertexBindingDescriptions = std::vector<VkVertexInputBindingDescription>( pVICI->pVertexBindingDescriptions, pVICI->pVertexBindingDescriptions + pVICI->vertexBindingDescriptionCount); } if (pVICI->vertexAttributeDescriptionCount) { pPipeline->vertexAttributeDescriptions = std::vector<VkVertexInputAttributeDescription>( pVICI->pVertexAttributeDescriptions, pVICI->pVertexAttributeDescriptions + pVICI->vertexAttributeDescriptionCount); } pPipeline->graphicsPipelineCI.pVertexInputState = &pPipeline->vertexInputCI; } if (pCreateInfo->pInputAssemblyState != NULL) { pPipeline->iaStateCI = *pCreateInfo->pInputAssemblyState; pPipeline->graphicsPipelineCI.pInputAssemblyState = &pPipeline->iaStateCI;
} if (pCreateInfo->pTessellationState != NULL) { pPipeline->tessStateCI = *pCreateInfo->pTessellationState; pPipeline->graphicsPipelineCI.pTessellationState = &pPipeline->tessStateCI; } if (pCreateInfo->pViewportState != NULL) { pPipeline->vpStateCI = *pCreateInfo->pViewportState; pPipeline->graphicsPipelineCI.pViewportState = &pPipeline->vpStateCI; } if (pCreateInfo->pRasterizationState != NULL) { pPipeline->rsStateCI = *pCreateInfo->pRasterizationState; pPipeline->graphicsPipelineCI.pRasterizationState = &pPipeline->rsStateCI; } if (pCreateInfo->pMultisampleState != NULL) { pPipeline->msStateCI = *pCreateInfo->pMultisampleState; pPipeline->graphicsPipelineCI.pMultisampleState = &pPipeline->msStateCI; } if (pCreateInfo->pDepthStencilState != NULL) { pPipeline->dsStateCI = *pCreateInfo->pDepthStencilState; pPipeline->graphicsPipelineCI.pDepthStencilState = &pPipeline->dsStateCI; } if (pCreateInfo->pColorBlendState != NULL) { pPipeline->cbStateCI = *pCreateInfo->pColorBlendState; // Copy embedded ptrs pCBCI = pCreateInfo->pColorBlendState; if (pCBCI->attachmentCount) { pPipeline->attachments = std::vector<VkPipelineColorBlendAttachmentState>( pCBCI->pAttachments, pCBCI->pAttachments + pCBCI->attachmentCount); } pPipeline->graphicsPipelineCI.pColorBlendState = &pPipeline->cbStateCI; } if (pCreateInfo->pDynamicState != NULL) { pPipeline->dynStateCI = *pCreateInfo->pDynamicState; if (pPipeline->dynStateCI.dynamicStateCount) { pPipeline->dynStateCI.pDynamicStates = new VkDynamicState[pPipeline->dynStateCI.dynamicStateCount]; bufferSize = pPipeline->dynStateCI.dynamicStateCount * sizeof(VkDynamicState); memcpy((void *)pPipeline->dynStateCI.pDynamicStates, pCreateInfo->pDynamicState->pDynamicStates, bufferSize); } pPipeline->graphicsPipelineCI.pDynamicState = &pPipeline->dynStateCI; } return pPipeline; } // Free the Pipeline nodes static void deletePipelines(layer_data *my_data) { if (my_data->pipelineMap.size() <= 0) return; for (auto ii = my_data->pipelineMap.begin(); ii != my_data->pipelineMap.end();
++ii) { if ((*ii).second->graphicsPipelineCI.stageCount != 0) { delete[](*ii).second->graphicsPipelineCI.pStages; } if ((*ii).second->dynStateCI.dynamicStateCount != 0) { delete[](*ii).second->dynStateCI.pDynamicStates; } delete (*ii).second; } my_data->pipelineMap.clear(); } // For given pipeline, return number of MSAA samples, or one if MSAA disabled static VkSampleCountFlagBits getNumSamples(layer_data *my_data, const VkPipeline pipeline) { PIPELINE_NODE *pPipe = my_data->pipelineMap[pipeline]; if (VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO == pPipe->msStateCI.sType) { return pPipe->msStateCI.rasterizationSamples; } return VK_SAMPLE_COUNT_1_BIT; } // Validate state related to the PSO static VkBool32 validatePipelineState(layer_data *my_data, const GLOBAL_CB_NODE *pCB, const VkPipelineBindPoint pipelineBindPoint, const VkPipeline pipeline) { VkBool32 skipCall = VK_FALSE; if (VK_PIPELINE_BIND_POINT_GRAPHICS == pipelineBindPoint) { // Verify that any MSAA request in PSO matches sample# in bound FB // Skip the check if rasterization is disabled. 
PIPELINE_NODE *pPipeline = my_data->pipelineMap[pipeline]; if (!pPipeline->graphicsPipelineCI.pRasterizationState || !pPipeline->graphicsPipelineCI.pRasterizationState->rasterizerDiscardEnable) { VkSampleCountFlagBits psoNumSamples = getNumSamples(my_data, pipeline); if (pCB->activeRenderPass) { const VkRenderPassCreateInfo *pRPCI = my_data->renderPassMap[pCB->activeRenderPass]->pCreateInfo; const VkSubpassDescription *pSD = &pRPCI->pSubpasses[pCB->activeSubpass]; VkSampleCountFlagBits subpassNumSamples = (VkSampleCountFlagBits)0; uint32_t i; for (i = 0; i < pSD->colorAttachmentCount; i++) { VkSampleCountFlagBits samples; if (pSD->pColorAttachments[i].attachment == VK_ATTACHMENT_UNUSED) continue; samples = pRPCI->pAttachments[pSD->pColorAttachments[i].attachment].samples; if (subpassNumSamples == (VkSampleCountFlagBits)0) { subpassNumSamples = samples; } else if (subpassNumSamples != samples) { subpassNumSamples = (VkSampleCountFlagBits)-1; break; } } if (pSD->pDepthStencilAttachment && pSD->pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED) { const VkSampleCountFlagBits samples = pRPCI->pAttachments[pSD->pDepthStencilAttachment->attachment].samples; if (subpassNumSamples == (VkSampleCountFlagBits)0) subpassNumSamples = samples; else if (subpassNumSamples != samples) subpassNumSamples = (VkSampleCountFlagBits)-1; } if ((pSD->colorAttachmentCount > 0 || pSD->pDepthStencilAttachment) && psoNumSamples != subpassNumSamples) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT, (uint64_t)pipeline, __LINE__, DRAWSTATE_NUM_SAMPLES_MISMATCH, "DS", "Num samples mismatch! 
Binding PSO (%#" PRIxLEAST64 ") with %u samples while current RenderPass (%#" PRIxLEAST64 ") w/ %u samples!", (uint64_t)pipeline, psoNumSamples, (uint64_t)pCB->activeRenderPass, subpassNumSamples); } } else { // TODO : I believe it's an error if we reach this point and don't have an activeRenderPass // Verify and flag error as appropriate } } // TODO : Add more checks here } else { // TODO : Validate non-gfx pipeline updates } return skipCall; } // Block of code at start here specifically for managing/tracking DSs // Return Pool node ptr for specified pool or else NULL static DESCRIPTOR_POOL_NODE *getPoolNode(layer_data *my_data, const VkDescriptorPool pool) { if (my_data->descriptorPoolMap.find(pool) == my_data->descriptorPoolMap.end()) { return NULL; } return my_data->descriptorPoolMap[pool]; } static LAYOUT_NODE *getLayoutNode(layer_data *my_data, const VkDescriptorSetLayout layout) { if (my_data->descriptorSetLayoutMap.find(layout) == my_data->descriptorSetLayoutMap.end()) { return NULL; } return my_data->descriptorSetLayoutMap[layout]; } // Return VK_FALSE if update struct is of valid type, otherwise flag error and return code from callback static VkBool32 validUpdateStruct(layer_data *my_data, const VkDevice device, const GENERIC_HEADER *pUpdateStruct) { switch (pUpdateStruct->sType) { case VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET: case VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET: return VK_FALSE; default: return log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_UPDATE_STRUCT, "DS", "Unexpected UPDATE struct of type %s (value %u) in vkUpdateDescriptors() struct tree", string_VkStructureType(pUpdateStruct->sType), pUpdateStruct->sType); } } // Set count for given update struct in the last parameter // Return value of skipCall, which is only VK_TRUE if error occurs and callback signals execution to cease static uint32_t getUpdateCount(layer_data *my_data, const VkDevice device, const 
GENERIC_HEADER *pUpdateStruct) { switch (pUpdateStruct->sType) { case VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET: return ((VkWriteDescriptorSet *)pUpdateStruct)->descriptorCount; case VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET: // TODO : Need to understand this case better and make sure code is correct return ((VkCopyDescriptorSet *)pUpdateStruct)->descriptorCount; default: return 0; } return 0; } // For given layout and update, return the first overall index of the layout that is updated static uint32_t getUpdateStartIndex(layer_data *my_data, const VkDevice device, const LAYOUT_NODE *pLayout, const uint32_t binding, const uint32_t arrayIndex, const GENERIC_HEADER *pUpdateStruct) { return getBindingStartIndex(pLayout, binding) + arrayIndex; } // For given layout and update, return the last overall index of the layout that is updated static uint32_t getUpdateEndIndex(layer_data *my_data, const VkDevice device, const LAYOUT_NODE *pLayout, const uint32_t binding, const uint32_t arrayIndex, const GENERIC_HEADER *pUpdateStruct) { uint32_t count = getUpdateCount(my_data, device, pUpdateStruct); return getBindingStartIndex(pLayout, binding) + arrayIndex + count - 1; } // Verify that the descriptor type in the update struct matches what's expected by the layout static VkBool32 validateUpdateConsistency(layer_data *my_data, const VkDevice device, const LAYOUT_NODE *pLayout, const GENERIC_HEADER *pUpdateStruct, uint32_t startIndex, uint32_t endIndex) { // First get actual type of update VkBool32 skipCall = VK_FALSE; VkDescriptorType actualType; uint32_t i = 0; switch (pUpdateStruct->sType) { case VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET: actualType = ((VkWriteDescriptorSet *)pUpdateStruct)->descriptorType; break; case VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET: /* no need to validate */ return VK_FALSE; break; default: skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_UPDATE_STRUCT, "DS", "Unexpected 
UPDATE struct of type %s (value %u) in vkUpdateDescriptors() struct tree", string_VkStructureType(pUpdateStruct->sType), pUpdateStruct->sType); } if (VK_FALSE == skipCall) { // Set first stageFlags as reference and verify that all other updates match it VkShaderStageFlags refStageFlags = pLayout->stageFlags[startIndex]; for (i = startIndex; i <= endIndex; i++) { if (pLayout->descriptorTypes[i] != actualType) { skipCall |= log_msg( my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_DESCRIPTOR_TYPE_MISMATCH, "DS", "Write descriptor update has descriptor type %s that does not match overlapping binding descriptor type of %s!", string_VkDescriptorType(actualType), string_VkDescriptorType(pLayout->descriptorTypes[i])); } if (pLayout->stageFlags[i] != refStageFlags) { skipCall |= log_msg( my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_DESCRIPTOR_STAGEFLAGS_MISMATCH, "DS", "Write descriptor update has stageFlags %x that do not match overlapping binding descriptor stageFlags of %x!", refStageFlags, pLayout->stageFlags[i]); } } } return skipCall; } // Determine the update type, allocate a new struct of that type, shadow the given pUpdate // struct into the pNewNode param. Return VK_TRUE if error condition encountered and callback signals early exit. 
// NOTE : Calls to this function should be wrapped in mutex static VkBool32 shadowUpdateNode(layer_data *my_data, const VkDevice device, GENERIC_HEADER *pUpdate, GENERIC_HEADER **pNewNode) { VkBool32 skipCall = VK_FALSE; VkWriteDescriptorSet *pWDS = NULL; VkCopyDescriptorSet *pCDS = NULL; switch (pUpdate->sType) { case VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET: pWDS = new VkWriteDescriptorSet; *pNewNode = (GENERIC_HEADER *)pWDS; memcpy(pWDS, pUpdate, sizeof(VkWriteDescriptorSet)); switch (pWDS->descriptorType) { case VK_DESCRIPTOR_TYPE_SAMPLER: case VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER: case VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE: case VK_DESCRIPTOR_TYPE_STORAGE_IMAGE: { VkDescriptorImageInfo *info = new VkDescriptorImageInfo[pWDS->descriptorCount]; memcpy(info, pWDS->pImageInfo, pWDS->descriptorCount * sizeof(VkDescriptorImageInfo)); pWDS->pImageInfo = info; } break; case VK_DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER: case VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER: { VkBufferView *info = new VkBufferView[pWDS->descriptorCount]; memcpy(info, pWDS->pTexelBufferView, pWDS->descriptorCount * sizeof(VkBufferView)); pWDS->pTexelBufferView = info; } break; case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER: case VK_DESCRIPTOR_TYPE_STORAGE_BUFFER: case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC: case VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC: { VkDescriptorBufferInfo *info = new VkDescriptorBufferInfo[pWDS->descriptorCount]; memcpy(info, pWDS->pBufferInfo, pWDS->descriptorCount * sizeof(VkDescriptorBufferInfo)); pWDS->pBufferInfo = info; } break; default: return VK_ERROR_VALIDATION_FAILED_EXT; break; } break; case VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET: pCDS = new VkCopyDescriptorSet; *pNewNode = (GENERIC_HEADER *)pCDS; memcpy(pCDS, pUpdate, sizeof(VkCopyDescriptorSet)); break; default: if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_UPDATE_STRUCT, "DS", "Unexpected UPDATE struct of type %s (value %u) in 
vkUpdateDescriptors() struct tree", string_VkStructureType(pUpdate->sType), pUpdate->sType)) return VK_TRUE; } // Make sure that pNext for the end of shadow copy is NULL (*pNewNode)->pNext = NULL; return skipCall; } // Verify that given sampler is valid static VkBool32 validateSampler(const layer_data *my_data, const VkSampler *pSampler, const VkBool32 immutable) { VkBool32 skipCall = VK_FALSE; auto sampIt = my_data->sampleMap.find(*pSampler); if (sampIt == my_data->sampleMap.end()) { if (!immutable) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SAMPLER_EXT, (uint64_t)*pSampler, __LINE__, DRAWSTATE_SAMPLER_DESCRIPTOR_ERROR, "DS", "vkUpdateDescriptorSets: Attempt to update descriptor with invalid sampler %#" PRIxLEAST64, (uint64_t)*pSampler); } else { // immutable skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SAMPLER_EXT, (uint64_t)*pSampler, __LINE__, DRAWSTATE_SAMPLER_DESCRIPTOR_ERROR, "DS", "vkUpdateDescriptorSets: Attempt to update descriptor whose binding has an invalid immutable " "sampler %#" PRIxLEAST64, (uint64_t)*pSampler); } } else { // TODO : Any further checks we want to do on the sampler? 
} return skipCall; } //TODO: Consolidate functions bool FindLayout(const GLOBAL_CB_NODE *pCB, ImageSubresourcePair imgpair, IMAGE_CMD_BUF_LAYOUT_NODE &node, const VkImageAspectFlags aspectMask) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(pCB->commandBuffer), layer_data_map); if (!(imgpair.subresource.aspectMask & aspectMask)) { return false; } VkImageAspectFlags oldAspectMask = imgpair.subresource.aspectMask; imgpair.subresource.aspectMask = aspectMask; auto imgsubIt = pCB->imageLayoutMap.find(imgpair); if (imgsubIt == pCB->imageLayoutMap.end()) { return false; } if (node.layout != VK_IMAGE_LAYOUT_MAX_ENUM && node.layout != imgsubIt->second.layout) { log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, reinterpret_cast<uint64_t &>(imgpair.image), __LINE__, DRAWSTATE_INVALID_LAYOUT, "DS", "Cannot query for VkImage 0x%" PRIx64 " layout when combined aspect mask %d has multiple layout types: %s and %s", reinterpret_cast<uint64_t &>(imgpair.image), oldAspectMask, string_VkImageLayout(node.layout), string_VkImageLayout(imgsubIt->second.layout)); } if (node.initialLayout != VK_IMAGE_LAYOUT_MAX_ENUM && node.initialLayout != imgsubIt->second.initialLayout) { log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, reinterpret_cast<uint64_t &>(imgpair.image), __LINE__, DRAWSTATE_INVALID_LAYOUT, "DS", "Cannot query for VkImage 0x%" PRIx64 " layout when combined aspect mask %d has multiple initial layout types: %s and %s", reinterpret_cast<uint64_t &>(imgpair.image), oldAspectMask, string_VkImageLayout(node.initialLayout), string_VkImageLayout(imgsubIt->second.initialLayout)); } node = imgsubIt->second; return true; } bool FindLayout(const layer_data *my_data, ImageSubresourcePair imgpair, VkImageLayout &layout, const VkImageAspectFlags aspectMask) { if (!(imgpair.subresource.aspectMask & aspectMask)) { return false; } VkImageAspectFlags oldAspectMask = imgpair.subresource.aspectMask; imgpair.subresource.aspectMask =
aspectMask; auto imgsubIt = my_data->imageLayoutMap.find(imgpair); if (imgsubIt == my_data->imageLayoutMap.end()) { return false; } if (layout != VK_IMAGE_LAYOUT_MAX_ENUM && layout != imgsubIt->second.layout) { log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, reinterpret_cast<uint64_t &>(imgpair.image), __LINE__, DRAWSTATE_INVALID_LAYOUT, "DS", "Cannot query for VkImage 0x%" PRIx64 " layout when combined aspect mask %d has multiple layout types: %s and %s", reinterpret_cast<uint64_t &>(imgpair.image), oldAspectMask, string_VkImageLayout(layout), string_VkImageLayout(imgsubIt->second.layout)); } layout = imgsubIt->second.layout; return true; } // find layout(s) on the cmd buf level bool FindLayout(const GLOBAL_CB_NODE *pCB, VkImage image, VkImageSubresource range, IMAGE_CMD_BUF_LAYOUT_NODE &node) { ImageSubresourcePair imgpair = {image, true, range}; node = IMAGE_CMD_BUF_LAYOUT_NODE(VK_IMAGE_LAYOUT_MAX_ENUM, VK_IMAGE_LAYOUT_MAX_ENUM); FindLayout(pCB, imgpair, node, VK_IMAGE_ASPECT_COLOR_BIT); FindLayout(pCB, imgpair, node, VK_IMAGE_ASPECT_DEPTH_BIT); FindLayout(pCB, imgpair, node, VK_IMAGE_ASPECT_STENCIL_BIT); FindLayout(pCB, imgpair, node, VK_IMAGE_ASPECT_METADATA_BIT); if (node.layout == VK_IMAGE_LAYOUT_MAX_ENUM) { imgpair = {image, false, VkImageSubresource()}; auto imgsubIt = pCB->imageLayoutMap.find(imgpair); if (imgsubIt == pCB->imageLayoutMap.end()) return false; node = imgsubIt->second; } return true; } // find layout(s) on the global level bool FindLayout(const layer_data *my_data, ImageSubresourcePair imgpair, VkImageLayout &layout) { layout = VK_IMAGE_LAYOUT_MAX_ENUM; FindLayout(my_data, imgpair, layout, VK_IMAGE_ASPECT_COLOR_BIT); FindLayout(my_data, imgpair, layout, VK_IMAGE_ASPECT_DEPTH_BIT); FindLayout(my_data, imgpair, layout, VK_IMAGE_ASPECT_STENCIL_BIT); FindLayout(my_data, imgpair, layout, VK_IMAGE_ASPECT_METADATA_BIT); if (layout == VK_IMAGE_LAYOUT_MAX_ENUM) { imgpair = {imgpair.image, false, VkImageSubresource()}; auto
imgsubIt = my_data->imageLayoutMap.find(imgpair); if (imgsubIt == my_data->imageLayoutMap.end()) return false; layout = imgsubIt->second.layout; } return true; } bool FindLayout(const layer_data *my_data, VkImage image, VkImageSubresource range, VkImageLayout &layout) { ImageSubresourcePair imgpair = {image, true, range}; return FindLayout(my_data, imgpair, layout); } bool FindLayouts(const layer_data *my_data, VkImage image, std::vector<VkImageLayout> &layouts) { auto sub_data = my_data->imageSubresourceMap.find(image); if (sub_data == my_data->imageSubresourceMap.end()) return false; auto imgIt = my_data->imageMap.find(image); if (imgIt == my_data->imageMap.end()) return false; bool ignoreGlobal = false; // TODO: Make this robust for >1 aspect mask. Now it will just say ignore // potential errors in this case. if (sub_data->second.size() >= (imgIt->second.createInfo.arrayLayers * imgIt->second.createInfo.mipLevels + 1)) { ignoreGlobal = true; } for (auto imgsubpair : sub_data->second) { if (ignoreGlobal && !imgsubpair.hasSubresource) continue; auto img_data = my_data->imageLayoutMap.find(imgsubpair); if (img_data != my_data->imageLayoutMap.end()) { layouts.push_back(img_data->second.layout); } } return true; } // Set the layout on the global level void SetLayout(layer_data *my_data, ImageSubresourcePair imgpair, const VkImageLayout &layout) { VkImage &image = imgpair.image; // TODO (mlentine): Maybe set format if new? Not used atm. my_data->imageLayoutMap[imgpair].layout = layout; // TODO (mlentine): Maybe make vector a set?
auto subresource = std::find(my_data->imageSubresourceMap[image].begin(), my_data->imageSubresourceMap[image].end(), imgpair); if (subresource == my_data->imageSubresourceMap[image].end()) { my_data->imageSubresourceMap[image].push_back(imgpair); } } // Set the layout on the cmdbuf level void SetLayout(GLOBAL_CB_NODE *pCB, ImageSubresourcePair imgpair, const IMAGE_CMD_BUF_LAYOUT_NODE &node) { pCB->imageLayoutMap[imgpair] = node; // TODO (mlentine): Maybe make vector a set? auto subresource = std::find(pCB->imageSubresourceMap[imgpair.image].begin(), pCB->imageSubresourceMap[imgpair.image].end(), imgpair); if (subresource == pCB->imageSubresourceMap[imgpair.image].end()) { pCB->imageSubresourceMap[imgpair.image].push_back(imgpair); } } void SetLayout(GLOBAL_CB_NODE *pCB, ImageSubresourcePair imgpair, const VkImageLayout &layout) { // TODO (mlentine): Maybe make vector a set? if (std::find(pCB->imageSubresourceMap[imgpair.image].begin(), pCB->imageSubresourceMap[imgpair.image].end(), imgpair) != pCB->imageSubresourceMap[imgpair.image].end()) { pCB->imageLayoutMap[imgpair].layout = layout; } else { // TODO (mlentine): Could be expensive and might need to be removed. 
assert(imgpair.hasSubresource); IMAGE_CMD_BUF_LAYOUT_NODE node; if (!FindLayout(pCB, imgpair.image, imgpair.subresource, node)) { node.initialLayout = layout; } SetLayout(pCB, imgpair, {node.initialLayout, layout}); } } template <class OBJECT, class LAYOUT> void SetLayout(OBJECT *pObject, ImageSubresourcePair imgpair, const LAYOUT &layout, VkImageAspectFlags aspectMask) { if (imgpair.subresource.aspectMask & aspectMask) { imgpair.subresource.aspectMask = aspectMask; SetLayout(pObject, imgpair, layout); } } template <class OBJECT, class LAYOUT> void SetLayout(OBJECT *pObject, VkImage image, VkImageSubresource range, const LAYOUT &layout) { ImageSubresourcePair imgpair = {image, true, range}; SetLayout(pObject, imgpair, layout, VK_IMAGE_ASPECT_COLOR_BIT); SetLayout(pObject, imgpair, layout, VK_IMAGE_ASPECT_DEPTH_BIT); SetLayout(pObject, imgpair, layout, VK_IMAGE_ASPECT_STENCIL_BIT); SetLayout(pObject, imgpair, layout, VK_IMAGE_ASPECT_METADATA_BIT); } template <class OBJECT, class LAYOUT> void SetLayout(OBJECT *pObject, VkImage image, const LAYOUT &layout) { ImageSubresourcePair imgpair = {image, false, VkImageSubresource()}; SetLayout(pObject, image, imgpair, layout); } void SetLayout(const layer_data *dev_data, GLOBAL_CB_NODE *pCB, VkImageView imageView, const VkImageLayout &layout) { auto image_view_data = dev_data->imageViewMap.find(imageView); assert(image_view_data != dev_data->imageViewMap.end()); const VkImage &image = image_view_data->second.image; const VkImageSubresourceRange &subRange = image_view_data->second.subresourceRange; // TODO: Do not iterate over every possibility - consolidate where possible for (uint32_t j = 0; j < subRange.levelCount; j++) { uint32_t level = subRange.baseMipLevel + j; for (uint32_t k = 0; k < subRange.layerCount; k++) { uint32_t layer = subRange.baseArrayLayer + k; VkImageSubresource sub = {subRange.aspectMask, level, layer}; SetLayout(pCB, image, sub, layout); } } } // Verify that given imageView is valid static VkBool32 validateImageView(const layer_data *my_data, const VkImageView *pImageView, const
                                  VkImageLayout imageLayout) {
    VkBool32 skipCall = VK_FALSE;
    auto ivIt = my_data->imageViewMap.find(*pImageView);
    if (ivIt == my_data->imageViewMap.end()) {
        skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT,
                            (uint64_t)*pImageView, __LINE__, DRAWSTATE_IMAGEVIEW_DESCRIPTOR_ERROR, "DS",
                            "vkUpdateDescriptorSets: Attempt to update descriptor with invalid imageView %#" PRIxLEAST64,
                            (uint64_t)*pImageView);
    } else {
        // Validate that imageLayout is compatible with aspectMask and image format
        VkImageAspectFlags aspectMask = ivIt->second.subresourceRange.aspectMask;
        VkImage image = ivIt->second.image;
        // TODO : Check here in case we have a bad image
        VkFormat format = VK_FORMAT_MAX_ENUM;
        auto imgIt = my_data->imageMap.find(image);
        if (imgIt != my_data->imageMap.end()) {
            format = (*imgIt).second.createInfo.format;
        } else {
            // Also need to check the swapchains.
            auto swapchainIt = my_data->device_extensions.imageToSwapchainMap.find(image);
            if (swapchainIt != my_data->device_extensions.imageToSwapchainMap.end()) {
                VkSwapchainKHR swapchain = swapchainIt->second;
                auto swapchain_nodeIt = my_data->device_extensions.swapchainMap.find(swapchain);
                if (swapchain_nodeIt != my_data->device_extensions.swapchainMap.end()) {
                    SWAPCHAIN_NODE *pswapchain_node = swapchain_nodeIt->second;
                    format = pswapchain_node->createInfo.imageFormat;
                }
            }
        }
        if (format == VK_FORMAT_MAX_ENUM) {
            skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
                                (uint64_t)image, __LINE__, DRAWSTATE_IMAGEVIEW_DESCRIPTOR_ERROR, "DS",
                                "vkUpdateDescriptorSets: Attempt to update descriptor with invalid image %#" PRIxLEAST64
                                " in imageView %#" PRIxLEAST64,
                                (uint64_t)image, (uint64_t)*pImageView);
        } else {
            VkBool32 ds = vk_format_is_depth_or_stencil(format);
            switch (imageLayout) {
            case VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL:
                // Only Color bit must be set
                if ((aspectMask & VK_IMAGE_ASPECT_COLOR_BIT) != VK_IMAGE_ASPECT_COLOR_BIT) {
                    skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                        VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT, (uint64_t)*pImageView, __LINE__,
                                        DRAWSTATE_INVALID_IMAGE_ASPECT, "DS",
                                        "vkUpdateDescriptorSets: Updating descriptor with layout "
                                        "VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL and imageView %#" PRIxLEAST64
                                        " that does not have VK_IMAGE_ASPECT_COLOR_BIT set.",
                                        (uint64_t)*pImageView);
                }
                // format must NOT be DS
                if (ds) {
                    skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                        VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT, (uint64_t)*pImageView, __LINE__,
                                        DRAWSTATE_IMAGEVIEW_DESCRIPTOR_ERROR, "DS",
                                        "vkUpdateDescriptorSets: Updating descriptor with layout "
                                        "VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL and imageView %#" PRIxLEAST64
                                        " but the image format is %s which is not a color format.",
                                        (uint64_t)*pImageView, string_VkFormat(format));
                }
                break;
            case VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL:
            case VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL:
                // Depth or stencil bit must be set, but both must NOT be set
                if (aspectMask & VK_IMAGE_ASPECT_DEPTH_BIT) {
                    if (aspectMask & VK_IMAGE_ASPECT_STENCIL_BIT) {
                        // both must NOT be set
                        skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                            VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT, (uint64_t)*pImageView, __LINE__,
                                            DRAWSTATE_INVALID_IMAGE_ASPECT, "DS",
                                            "vkUpdateDescriptorSets: Updating descriptor with imageView %#" PRIxLEAST64
                                            " that has both STENCIL and DEPTH aspects set",
                                            (uint64_t)*pImageView);
                    }
                } else if (!(aspectMask & VK_IMAGE_ASPECT_STENCIL_BIT)) {
                    // Neither were set
                    skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                        VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT, (uint64_t)*pImageView, __LINE__,
                                        DRAWSTATE_INVALID_IMAGE_ASPECT, "DS",
                                        "vkUpdateDescriptorSets: Updating descriptor with layout %s and imageView %#" PRIxLEAST64
                                        " that does not have STENCIL or DEPTH aspect set.",
                                        string_VkImageLayout(imageLayout), (uint64_t)*pImageView);
                }
                // format must be DS
                if (!ds) {
                    skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                        VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT, (uint64_t)*pImageView, __LINE__,
                                        DRAWSTATE_IMAGEVIEW_DESCRIPTOR_ERROR, "DS",
                                        "vkUpdateDescriptorSets: Updating descriptor with layout %s and imageView %#" PRIxLEAST64
                                        " but the image format is %s which is not a depth/stencil format.",
                                        string_VkImageLayout(imageLayout), (uint64_t)*pImageView, string_VkFormat(format));
                }
                break;
            default:
                // anything to check for other layouts?
                break;
            }
        }
    }
    return skipCall;
}

// Verify that given bufferView is valid
static VkBool32 validateBufferView(const layer_data *my_data, const VkBufferView *pBufferView) {
    VkBool32 skipCall = VK_FALSE;
    auto sampIt = my_data->bufferViewMap.find(*pBufferView);
    if (sampIt == my_data->bufferViewMap.end()) {
        skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_VIEW_EXT,
                            (uint64_t)*pBufferView, __LINE__, DRAWSTATE_BUFFERVIEW_DESCRIPTOR_ERROR, "DS",
                            "vkUpdateDescriptorSets: Attempt to update descriptor with invalid bufferView %#" PRIxLEAST64,
                            (uint64_t)*pBufferView);
    } else {
        // TODO : Any further checks we want to do on the bufferView?
    }
    return skipCall;
}

// Verify that given bufferInfo is valid
static VkBool32 validateBufferInfo(const layer_data *my_data, const VkDescriptorBufferInfo *pBufferInfo) {
    VkBool32 skipCall = VK_FALSE;
    auto sampIt = my_data->bufferMap.find(pBufferInfo->buffer);
    if (sampIt == my_data->bufferMap.end()) {
        skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT,
                            (uint64_t)pBufferInfo->buffer, __LINE__, DRAWSTATE_BUFFERINFO_DESCRIPTOR_ERROR, "DS",
                            "vkUpdateDescriptorSets: Attempt to update descriptor where bufferInfo has invalid buffer %#" PRIxLEAST64,
                            (uint64_t)pBufferInfo->buffer);
    } else {
        // TODO : Any further checks we want to do on the bufferView?
    }
    return skipCall;
}

static VkBool32 validateUpdateContents(const layer_data *my_data, const VkWriteDescriptorSet *pWDS,
                                       const VkDescriptorSetLayoutBinding *pLayoutBinding) {
    VkBool32 skipCall = VK_FALSE;
    // First verify that for the given Descriptor type, the correct DescriptorInfo data is supplied
    const VkSampler *pSampler = NULL;
    VkBool32 immutable = VK_FALSE;
    uint32_t i = 0;
    // For given update type, verify that update contents are correct
    switch (pWDS->descriptorType) {
    case VK_DESCRIPTOR_TYPE_SAMPLER:
        for (i = 0; i < pWDS->descriptorCount; ++i) {
            skipCall |= validateSampler(my_data, &(pWDS->pImageInfo[i].sampler), immutable);
        }
        break;
    case VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER:
        for (i = 0; i < pWDS->descriptorCount; ++i) {
            if (NULL == pLayoutBinding->pImmutableSamplers) {
                pSampler = &(pWDS->pImageInfo[i].sampler);
                if (immutable) {
                    skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                        VK_DEBUG_REPORT_OBJECT_TYPE_SAMPLER_EXT, (uint64_t)*pSampler, __LINE__,
                                        DRAWSTATE_INCONSISTENT_IMMUTABLE_SAMPLER_UPDATE, "DS",
                                        "vkUpdateDescriptorSets: Update #%u is not an immutable sampler %#" PRIxLEAST64
                                        ", but previous update(s) from this "
                                        "VkWriteDescriptorSet struct used an immutable sampler. All updates from a single struct "
                                        "must either use immutable or non-immutable samplers.",
                                        i, (uint64_t)*pSampler);
                }
            } else {
                if (i > 0 && !immutable) {
                    skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                        VK_DEBUG_REPORT_OBJECT_TYPE_SAMPLER_EXT, (uint64_t)*pSampler, __LINE__,
                                        DRAWSTATE_INCONSISTENT_IMMUTABLE_SAMPLER_UPDATE, "DS",
                                        "vkUpdateDescriptorSets: Update #%u is an immutable sampler, but previous update(s) from "
                                        "this VkWriteDescriptorSet struct used a non-immutable sampler. All updates from a single "
                                        "struct must either use immutable or non-immutable samplers.",
                                        i);
                }
                immutable = VK_TRUE;
                pSampler = &(pLayoutBinding->pImmutableSamplers[i]);
            }
            skipCall |= validateSampler(my_data, pSampler, immutable);
        }
    // Intentionally fall through here to also validate image stuff
    case VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE:
    case VK_DESCRIPTOR_TYPE_STORAGE_IMAGE:
    case VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT:
        for (i = 0; i < pWDS->descriptorCount; ++i) {
            skipCall |= validateImageView(my_data, &(pWDS->pImageInfo[i].imageView), pWDS->pImageInfo[i].imageLayout);
        }
        break;
    case VK_DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER:
    case VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER:
        for (i = 0; i < pWDS->descriptorCount; ++i) {
            skipCall |= validateBufferView(my_data, &(pWDS->pTexelBufferView[i]));
        }
        break;
    case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER:
    case VK_DESCRIPTOR_TYPE_STORAGE_BUFFER:
    case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC:
    case VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC:
        for (i = 0; i < pWDS->descriptorCount; ++i) {
            skipCall |= validateBufferInfo(my_data, &(pWDS->pBufferInfo[i]));
        }
        break;
    default:
        break;
    }
    return skipCall;
}

// Validate that given set is valid and that it's not being used by an in-flight CmdBuffer
// func_str is the name of the calling function
// Return VK_FALSE if no errors occur
// Return VK_TRUE if validation error occurs and callback returns VK_TRUE (to skip upcoming API call down the chain)
VkBool32 validateIdleDescriptorSet(const layer_data *my_data, VkDescriptorSet set, std::string func_str) {
    VkBool32 skip_call = VK_FALSE;
    auto set_node = my_data->setMap.find(set);
    if (set_node == my_data->setMap.end()) {
        skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
                             (uint64_t)(set), __LINE__, DRAWSTATE_DOUBLE_DESTROY, "DS",
                             "Cannot call %s() on descriptor set %" PRIxLEAST64 " that has not been allocated.", func_str.c_str(),
                             (uint64_t)(set));
    } else {
        if (set_node->second->in_use.load()) {
            skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                 VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)(set), __LINE__,
                                 DRAWSTATE_OBJECT_INUSE, "DS",
                                 "Cannot call %s() on descriptor set %" PRIxLEAST64 " that is in use by a command buffer.",
                                 func_str.c_str(), (uint64_t)(set));
        }
    }
    return skip_call;
}

static void invalidateBoundCmdBuffers(layer_data *dev_data, const SET_NODE *pSet) {
    // Flag any CBs this set is bound to as INVALID
    for (auto cb : pSet->boundCmdBuffers) {
        auto cb_node = dev_data->commandBufferMap.find(cb);
        if (cb_node != dev_data->commandBufferMap.end()) {
            cb_node->second->state = CB_INVALID;
        }
    }
}

// update DS mappings based on write and copy update arrays
static VkBool32 dsUpdate(layer_data *my_data, VkDevice device, uint32_t descriptorWriteCount, const VkWriteDescriptorSet *pWDS,
                         uint32_t descriptorCopyCount, const VkCopyDescriptorSet *pCDS) {
    VkBool32 skipCall = VK_FALSE;
    LAYOUT_NODE *pLayout = NULL;
    VkDescriptorSetLayoutCreateInfo *pLayoutCI = NULL;
    // Validate Write updates
    uint32_t i = 0;
    for (i = 0; i < descriptorWriteCount; i++) {
        VkDescriptorSet ds = pWDS[i].dstSet;
        SET_NODE *pSet = my_data->setMap[ds];
        // Set being updated cannot be in-flight
        if ((skipCall = validateIdleDescriptorSet(my_data, ds, "VkUpdateDescriptorSets")) == VK_TRUE)
            return skipCall;
        // If set is bound to any cmdBuffers, mark them invalid
        invalidateBoundCmdBuffers(my_data, pSet);
        GENERIC_HEADER *pUpdate = (GENERIC_HEADER *)&pWDS[i];
        pLayout = pSet->pLayout;
        // First verify valid update struct
        if ((skipCall = validUpdateStruct(my_data, device, pUpdate)) == VK_TRUE) {
            break;
        }
        uint32_t binding = 0, endIndex = 0;
        binding = pWDS[i].dstBinding;
        auto bindingToIndex = pLayout->bindingToIndexMap.find(binding);
        // Make sure that layout being updated has the binding being updated
        if (bindingToIndex == pLayout->bindingToIndexMap.end()) {
            skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)(ds), __LINE__,
                                DRAWSTATE_INVALID_UPDATE_INDEX, "DS",
                                "Descriptor Set %" PRIu64 " does not have binding to match "
                                "update binding %u for update type "
                                "%s!",
                                (uint64_t)(ds), binding, string_VkStructureType(pUpdate->sType));
        } else {
            // Next verify that update falls within size of given binding
            endIndex = getUpdateEndIndex(my_data, device, pLayout, binding, pWDS[i].dstArrayElement, pUpdate);
            if (getBindingEndIndex(pLayout, binding) < endIndex) {
                pLayoutCI = &pLayout->createInfo;
                string DSstr = vk_print_vkdescriptorsetlayoutcreateinfo(pLayoutCI, "{DS} ");
                skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)(ds), __LINE__,
                                    DRAWSTATE_DESCRIPTOR_UPDATE_OUT_OF_BOUNDS, "DS",
                                    "Descriptor update type of %s is out of bounds for matching binding %u in Layout w/ CI:\n%s!",
                                    string_VkStructureType(pUpdate->sType), binding, DSstr.c_str());
            } else {
                // TODO : should we skip update on a type mismatch or force it?
                uint32_t startIndex;
                startIndex = getUpdateStartIndex(my_data, device, pLayout, binding, pWDS[i].dstArrayElement, pUpdate);
                // Layout bindings match w/ update, now verify that update type
                // & stageFlags are the same for entire update
                if ((skipCall = validateUpdateConsistency(my_data, device, pLayout, pUpdate, startIndex, endIndex)) == VK_FALSE) {
                    // The update is within bounds and consistent, but need to
                    // make sure contents make sense as well
                    if ((skipCall = validateUpdateContents(my_data, &pWDS[i],
                                                           &pLayout->createInfo.pBindings[bindingToIndex->second])) == VK_FALSE) {
                        // Update is good. Save the update info
                        // Create new update struct for this set's shadow copy
                        GENERIC_HEADER *pNewNode = NULL;
                        skipCall |= shadowUpdateNode(my_data, device, pUpdate, &pNewNode);
                        if (NULL == pNewNode) {
                            skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                                VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)(ds), __LINE__,
                                                DRAWSTATE_OUT_OF_MEMORY, "DS",
                                                "Out of memory while attempting to allocate UPDATE struct in vkUpdateDescriptors()");
                        } else {
                            // Insert shadow node into LL of updates for this set
                            pNewNode->pNext = pSet->pUpdateStructs;
                            pSet->pUpdateStructs = pNewNode;
                            // Now update appropriate descriptor(s) to point to new Update node
                            for (uint32_t j = startIndex; j <= endIndex; j++) {
                                assert(j < pSet->descriptorCount);
                                pSet->pDescriptorUpdates[j] = pNewNode;
                            }
                        }
                    }
                }
            }
        }
    }
    // Now validate copy updates
    for (i = 0; i < descriptorCopyCount; ++i) {
        SET_NODE *pSrcSet = NULL, *pDstSet = NULL;
        LAYOUT_NODE *pSrcLayout = NULL, *pDstLayout = NULL;
        uint32_t srcStartIndex = 0, srcEndIndex = 0, dstStartIndex = 0, dstEndIndex = 0;
        // For each copy make sure that update falls within given layout and that types match
        pSrcSet = my_data->setMap[pCDS[i].srcSet];
        pDstSet = my_data->setMap[pCDS[i].dstSet];
        // Set being updated cannot be in-flight
        if ((skipCall = validateIdleDescriptorSet(my_data, pDstSet->set, "VkUpdateDescriptorSets")) == VK_TRUE)
            return skipCall;
        invalidateBoundCmdBuffers(my_data, pDstSet);
        pSrcLayout = pSrcSet->pLayout;
        pDstLayout = pDstSet->pLayout;
        // Validate that src binding is valid for src set layout
        if (pSrcLayout->bindingToIndexMap.find(pCDS[i].srcBinding) == pSrcLayout->bindingToIndexMap.end()) {
            skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pSrcSet->set, __LINE__,
                                DRAWSTATE_INVALID_UPDATE_INDEX, "DS",
                                "Copy descriptor update %u has srcBinding %u "
                                "which is out of bounds for underlying SetLayout "
                                "%#" PRIxLEAST64 " which only has bindings 0-%u.",
                                i, pCDS[i].srcBinding, (uint64_t)pSrcLayout->layout, pSrcLayout->createInfo.bindingCount - 1);
        } else if (pDstLayout->bindingToIndexMap.find(pCDS[i].dstBinding) == pDstLayout->bindingToIndexMap.end()) {
            skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pDstSet->set, __LINE__,
                                DRAWSTATE_INVALID_UPDATE_INDEX, "DS",
                                "Copy descriptor update %u has dstBinding %u "
                                "which is out of bounds for underlying SetLayout "
                                "%#" PRIxLEAST64 " which only has bindings 0-%u.",
                                i, pCDS[i].dstBinding, (uint64_t)pDstLayout->layout, pDstLayout->createInfo.bindingCount - 1);
        } else {
            // Proceed with validation. Bindings are ok, but make sure update is within bounds of given layout
            srcEndIndex = getUpdateEndIndex(my_data, device, pSrcLayout, pCDS[i].srcBinding, pCDS[i].srcArrayElement,
                                            (const GENERIC_HEADER *)&(pCDS[i]));
            dstEndIndex = getUpdateEndIndex(my_data, device, pDstLayout, pCDS[i].dstBinding, pCDS[i].dstArrayElement,
                                            (const GENERIC_HEADER *)&(pCDS[i]));
            if (getBindingEndIndex(pSrcLayout, pCDS[i].srcBinding) < srcEndIndex) {
                pLayoutCI = &pSrcLayout->createInfo;
                string DSstr = vk_print_vkdescriptorsetlayoutcreateinfo(pLayoutCI, "{DS} ");
                skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pSrcSet->set, __LINE__,
                                    DRAWSTATE_DESCRIPTOR_UPDATE_OUT_OF_BOUNDS, "DS",
                                    "Copy descriptor src update is out of bounds for matching binding %u in Layout w/ CI:\n%s!",
                                    pCDS[i].srcBinding, DSstr.c_str());
            } else if (getBindingEndIndex(pDstLayout, pCDS[i].dstBinding) < dstEndIndex) {
                pLayoutCI = &pDstLayout->createInfo;
                string DSstr = vk_print_vkdescriptorsetlayoutcreateinfo(pLayoutCI, "{DS} ");
                skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pDstSet->set, __LINE__,
                                    DRAWSTATE_DESCRIPTOR_UPDATE_OUT_OF_BOUNDS, "DS",
                                    "Copy descriptor dest update is out of bounds for matching binding %u in Layout w/ CI:\n%s!",
                                    pCDS[i].dstBinding, DSstr.c_str());
            } else {
                srcStartIndex = getUpdateStartIndex(my_data, device, pSrcLayout, pCDS[i].srcBinding, pCDS[i].srcArrayElement,
                                                    (const GENERIC_HEADER *)&(pCDS[i]));
                dstStartIndex = getUpdateStartIndex(my_data, device, pDstLayout, pCDS[i].dstBinding, pCDS[i].dstArrayElement,
                                                    (const GENERIC_HEADER *)&(pCDS[i]));
                for (uint32_t j = 0; j < pCDS[i].descriptorCount; ++j) {
                    // For copy just make sure that the types match and then perform the update
                    if (pSrcLayout->descriptorTypes[srcStartIndex + j] != pDstLayout->descriptorTypes[dstStartIndex + j]) {
                        skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                            __LINE__, DRAWSTATE_DESCRIPTOR_TYPE_MISMATCH, "DS",
                                            "Copy descriptor update index %u, update count #%u, has src update descriptor type %s "
                                            "that does not match overlapping dest descriptor type of %s!",
                                            i, j + 1, string_VkDescriptorType(pSrcLayout->descriptorTypes[srcStartIndex + j]),
                                            string_VkDescriptorType(pDstLayout->descriptorTypes[dstStartIndex + j]));
                    } else {
                        // point dst descriptor at corresponding src descriptor
                        // TODO : This may be a hole. I believe copy should be its own copy,
                        // otherwise a subsequent write update to src will incorrectly affect the copy
                        pDstSet->pDescriptorUpdates[j + dstStartIndex] = pSrcSet->pDescriptorUpdates[j + srcStartIndex];
                        pDstSet->pUpdateStructs = pSrcSet->pUpdateStructs;
                    }
                }
            }
        }
    }
    return skipCall;
}

// Verify that given pool has descriptors that are being requested for allocation.
// NOTE : Calls to this function should be wrapped in mutex
static VkBool32 validate_descriptor_availability_in_pool(layer_data *dev_data, DESCRIPTOR_POOL_NODE *pPoolNode, uint32_t count,
                                                         const VkDescriptorSetLayout *pSetLayouts) {
    VkBool32 skipCall = VK_FALSE;
    uint32_t i = 0;
    uint32_t j = 0;
    // Track number of descriptorSets allowable in this pool
    if (pPoolNode->availableSets < count) {
        skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT,
                            reinterpret_cast<uint64_t &>(pPoolNode->pool), __LINE__, DRAWSTATE_DESCRIPTOR_POOL_EMPTY, "DS",
                            "Unable to allocate %u descriptorSets from pool %#" PRIxLEAST64
                            ". This pool only has %d descriptorSets remaining.",
                            count, reinterpret_cast<uint64_t &>(pPoolNode->pool), pPoolNode->availableSets);
    } else {
        pPoolNode->availableSets -= count;
    }
    for (i = 0; i < count; ++i) {
        LAYOUT_NODE *pLayout = getLayoutNode(dev_data, pSetLayouts[i]);
        if (NULL == pLayout) {
            skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT, (uint64_t)pSetLayouts[i], __LINE__,
                                DRAWSTATE_INVALID_LAYOUT, "DS",
                                "Unable to find set layout node for layout %#" PRIxLEAST64
                                " specified in vkAllocateDescriptorSets() call",
                                (uint64_t)pSetLayouts[i]);
        } else {
            uint32_t typeIndex = 0, poolSizeCount = 0;
            for (j = 0; j < pLayout->createInfo.bindingCount; ++j) {
                typeIndex = static_cast<uint32_t>(pLayout->createInfo.pBindings[j].descriptorType);
                poolSizeCount = pLayout->createInfo.pBindings[j].descriptorCount;
                if (poolSizeCount > pPoolNode->availableDescriptorTypeCount[typeIndex]) {
                    skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                        VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT, (uint64_t)pLayout->layout, __LINE__,
                                        DRAWSTATE_DESCRIPTOR_POOL_EMPTY, "DS",
                                        "Unable to allocate %u descriptors of type %s from pool %#" PRIxLEAST64
                                        ". This pool only has %d descriptors of this type remaining.",
                                        poolSizeCount, string_VkDescriptorType(pLayout->createInfo.pBindings[j].descriptorType),
                                        (uint64_t)pPoolNode->pool, pPoolNode->availableDescriptorTypeCount[typeIndex]);
                } else {
                    // Decrement available descriptors of this type
                    pPoolNode->availableDescriptorTypeCount[typeIndex] -= poolSizeCount;
                }
            }
        }
    }
    return skipCall;
}

// Free the shadowed update node for this Set
// NOTE : Calls to this function should be wrapped in mutex
static void freeShadowUpdateTree(SET_NODE *pSet) {
    GENERIC_HEADER *pShadowUpdate = pSet->pUpdateStructs;
    pSet->pUpdateStructs = NULL;
    GENERIC_HEADER *pFreeUpdate = pShadowUpdate;
    // Clear the descriptor mappings as they will now be invalid
    pSet->pDescriptorUpdates.clear();
    while (pShadowUpdate) {
        pFreeUpdate = pShadowUpdate;
        pShadowUpdate = (GENERIC_HEADER *)pShadowUpdate->pNext;
        VkWriteDescriptorSet *pWDS = NULL;
        switch (pFreeUpdate->sType) {
        case VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET:
            pWDS = (VkWriteDescriptorSet *)pFreeUpdate;
            switch (pWDS->descriptorType) {
            case VK_DESCRIPTOR_TYPE_SAMPLER:
            case VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER:
            case VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE:
            case VK_DESCRIPTOR_TYPE_STORAGE_IMAGE: {
                delete[] pWDS->pImageInfo;
            } break;
            case VK_DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER:
            case VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER: {
                delete[] pWDS->pTexelBufferView;
            } break;
            case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER:
            case VK_DESCRIPTOR_TYPE_STORAGE_BUFFER:
            case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC:
            case VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC: {
                delete[] pWDS->pBufferInfo;
            } break;
            default:
                break;
            }
            break;
        case VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET:
            break;
        default:
            assert(0);
            break;
        }
        delete pFreeUpdate;
    }
}

// Free all DS Pools including their Sets & related sub-structs
// NOTE : Calls to this function should be wrapped in mutex
static void deletePools(layer_data *my_data) {
    if (my_data->descriptorPoolMap.size() <= 0)
        return;
    for (auto ii = my_data->descriptorPoolMap.begin(); ii != my_data->descriptorPoolMap.end(); ++ii) {
        SET_NODE *pSet = (*ii).second->pSets;
        SET_NODE *pFreeSet = pSet;
        while (pSet) {
            pFreeSet = pSet;
            pSet = pSet->pNext;
            // Freeing layouts handled in deleteLayouts() function
            // Free Update shadow struct tree
            freeShadowUpdateTree(pFreeSet);
            delete pFreeSet;
        }
        delete (*ii).second;
    }
    my_data->descriptorPoolMap.clear();
}

// WARN : Once deleteLayouts() called, any layout ptrs in Pool/Set data structure will be invalid
// NOTE : Calls to this function should be wrapped in mutex
static void deleteLayouts(layer_data *my_data) {
    if (my_data->descriptorSetLayoutMap.size() <= 0)
        return;
    for (auto ii = my_data->descriptorSetLayoutMap.begin(); ii != my_data->descriptorSetLayoutMap.end(); ++ii) {
        LAYOUT_NODE *pLayout = (*ii).second;
        if (pLayout->createInfo.pBindings) {
            for (uint32_t i = 0; i < pLayout->createInfo.bindingCount; i++) {
                delete[] pLayout->createInfo.pBindings[i].pImmutableSamplers;
            }
            delete[] pLayout->createInfo.pBindings;
        }
        delete pLayout;
    }
    my_data->descriptorSetLayoutMap.clear();
}

// Currently clearing a set is removing all previous updates to that set
// TODO : Validate if this is correct clearing behavior
static void clearDescriptorSet(layer_data *my_data, VkDescriptorSet set) {
    SET_NODE *pSet = getSetNode(my_data, set);
    if (!pSet) {
        // TODO : Return error
    } else {
        freeShadowUpdateTree(pSet);
    }
}

static void clearDescriptorPool(layer_data *my_data, const VkDevice device, const VkDescriptorPool pool,
                                VkDescriptorPoolResetFlags flags) {
    DESCRIPTOR_POOL_NODE *pPool = getPoolNode(my_data, pool);
    if (!pPool) {
        log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT,
                (uint64_t)pool, __LINE__, DRAWSTATE_INVALID_POOL, "DS",
                "Unable to find pool node for pool %#" PRIxLEAST64 " specified in vkResetDescriptorPool() call", (uint64_t)pool);
    } else {
        // TODO: validate flags
        // For every set off of this pool, clear it
        SET_NODE *pSet = pPool->pSets;
        while (pSet) {
            clearDescriptorSet(my_data, pSet->set);
            pSet = pSet->pNext;
        }
        // Reset available count for each type and available sets for this pool
        for (uint32_t i = 0; i < pPool->availableDescriptorTypeCount.size(); ++i) {
            pPool->availableDescriptorTypeCount[i] = pPool->maxDescriptorTypeCount[i];
        }
        pPool->availableSets = pPool->maxSets;
    }
}

// For given CB object, fetch associated CB Node from map
static GLOBAL_CB_NODE *getCBNode(layer_data *my_data, const VkCommandBuffer cb) {
    if (my_data->commandBufferMap.count(cb) == 0) {
        log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                reinterpret_cast<uint64_t>(cb), __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
                "Attempt to use CommandBuffer %#" PRIxLEAST64 " that doesn't exist!", (uint64_t)(cb));
        return NULL;
    }
    return my_data->commandBufferMap[cb];
}

// Free all CB Nodes
// NOTE : Calls to this function should be wrapped in mutex
static void deleteCommandBuffers(layer_data *my_data) {
    if (my_data->commandBufferMap.size() <= 0) {
        return;
    }
    for (auto ii = my_data->commandBufferMap.begin(); ii != my_data->commandBufferMap.end(); ++ii) {
        delete (*ii).second;
    }
    my_data->commandBufferMap.clear();
}

static VkBool32 report_error_no_cb_begin(const layer_data *dev_data, const VkCommandBuffer cb, const char *caller_name) {
    return log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                   (uint64_t)cb, __LINE__, DRAWSTATE_NO_BEGIN_COMMAND_BUFFER, "DS",
                   "You must call vkBeginCommandBuffer() before this call to %s", caller_name);
}

VkBool32 validateCmdsInCmdBuffer(const layer_data *dev_data, const GLOBAL_CB_NODE *pCB, const CMD_TYPE cmd_type) {
    if (!pCB->activeRenderPass)
        return VK_FALSE;
    VkBool32 skip_call = VK_FALSE;
    if (pCB->activeSubpassContents == VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS && cmd_type != CMD_EXECUTECOMMANDS) {
        skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                             DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
                             "Commands cannot be called in a subpass using secondary command buffers.");
    } else if (pCB->activeSubpassContents == VK_SUBPASS_CONTENTS_INLINE && cmd_type == CMD_EXECUTECOMMANDS) {
        skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                             DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
                             "vkCmdExecuteCommands() cannot be called in a subpass using inline commands.");
    }
    return skip_call;
}

static bool checkGraphicsBit(const layer_data *my_data, VkQueueFlags flags, const char *name) {
    if (!(flags & VK_QUEUE_GRAPHICS_BIT))
        return log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                       DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
                       "Cannot call %s on a command buffer allocated from a pool without graphics capabilities.", name);
    return false;
}

static bool checkComputeBit(const layer_data *my_data, VkQueueFlags flags, const char *name) {
    if (!(flags & VK_QUEUE_COMPUTE_BIT))
        return log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                       DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
                       "Cannot call %s on a command buffer allocated from a pool without compute capabilities.", name);
    return false;
}

static bool checkGraphicsOrComputeBit(const layer_data *my_data, VkQueueFlags flags, const char *name) {
    if (!((flags & VK_QUEUE_GRAPHICS_BIT) || (flags & VK_QUEUE_COMPUTE_BIT)))
        return log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                       DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
                       "Cannot call %s on a command buffer allocated from a pool without graphics capabilities.", name);
    return false;
}

// Add specified CMD to the CmdBuffer in given pCB, flagging errors if CB is not
// in the recording state or if there's an issue with the Cmd ordering
static VkBool32 addCmd(const layer_data *my_data, GLOBAL_CB_NODE *pCB, const CMD_TYPE cmd, const char *caller_name) {
    VkBool32 skipCall = VK_FALSE;
    auto pool_data = my_data->commandPoolMap.find(pCB->createInfo.commandPool);
    if (pool_data != my_data->commandPoolMap.end()) {
        VkQueueFlags flags = my_data->physDevProperties.queue_family_properties[pool_data->second.queueFamilyIndex].queueFlags;
        switch (cmd) {
        case CMD_BINDPIPELINE:
        case CMD_BINDPIPELINEDELTA:
        case CMD_BINDDESCRIPTORSETS:
        case CMD_FILLBUFFER:
        case CMD_CLEARCOLORIMAGE:
        case CMD_SETEVENT:
        case CMD_RESETEVENT:
        case CMD_WAITEVENTS:
        case CMD_BEGINQUERY:
        case CMD_ENDQUERY:
        case CMD_RESETQUERYPOOL:
        case CMD_COPYQUERYPOOLRESULTS:
        case CMD_WRITETIMESTAMP:
            skipCall |= checkGraphicsOrComputeBit(my_data, flags, cmdTypeToString(cmd).c_str());
            break;
        case CMD_SETVIEWPORTSTATE:
        case CMD_SETSCISSORSTATE:
        case CMD_SETLINEWIDTHSTATE:
        case CMD_SETDEPTHBIASSTATE:
        case CMD_SETBLENDSTATE:
        case CMD_SETDEPTHBOUNDSSTATE:
        case CMD_SETSTENCILREADMASKSTATE:
        case CMD_SETSTENCILWRITEMASKSTATE:
        case CMD_SETSTENCILREFERENCESTATE:
        case CMD_BINDINDEXBUFFER:
        case CMD_BINDVERTEXBUFFER:
        case CMD_DRAW:
        case CMD_DRAWINDEXED:
        case CMD_DRAWINDIRECT:
        case CMD_DRAWINDEXEDINDIRECT:
        case CMD_BLITIMAGE:
        case CMD_CLEARATTACHMENTS:
        case CMD_CLEARDEPTHSTENCILIMAGE:
        case CMD_RESOLVEIMAGE:
        case CMD_BEGINRENDERPASS:
        case CMD_NEXTSUBPASS:
        case CMD_ENDRENDERPASS:
            skipCall |= checkGraphicsBit(my_data, flags, cmdTypeToString(cmd).c_str());
            break;
        case CMD_DISPATCH:
        case CMD_DISPATCHINDIRECT:
            skipCall |= checkComputeBit(my_data, flags, cmdTypeToString(cmd).c_str());
            break;
        case CMD_COPYBUFFER:
        case CMD_COPYIMAGE:
        case CMD_COPYBUFFERTOIMAGE:
        case CMD_COPYIMAGETOBUFFER:
        case CMD_CLONEIMAGEDATA:
        case CMD_UPDATEBUFFER:
        case CMD_PIPELINEBARRIER:
        case CMD_EXECUTECOMMANDS:
            break;
        default:
            break;
        }
    }
    if (pCB->state != CB_RECORDING) {
        skipCall |= report_error_no_cb_begin(my_data, pCB->commandBuffer, caller_name);
    } else {
        skipCall |= validateCmdsInCmdBuffer(my_data, pCB, cmd);
        CMD_NODE cmdNode = {};
        // init cmd node and append to end of cmd LL
        cmdNode.cmdNumber = ++pCB->numCmds;
        cmdNode.type = cmd;
        pCB->cmds.push_back(cmdNode);
    }
    return skipCall;
}

// Reset the command buffer state
// Maintain the createInfo and set state to CB_NEW, but clear all other state
static void resetCB(layer_data *my_data, const VkCommandBuffer cb) {
    GLOBAL_CB_NODE *pCB = my_data->commandBufferMap[cb];
    if (pCB) {
        pCB->cmds.clear();
        // Reset CB state (note that createInfo is not cleared)
        pCB->commandBuffer = cb;
        memset(&pCB->beginInfo, 0, sizeof(VkCommandBufferBeginInfo));
        memset(&pCB->inheritanceInfo, 0, sizeof(VkCommandBufferInheritanceInfo));
        pCB->numCmds = 0;
        memset(pCB->drawCount, 0, NUM_DRAW_TYPES * sizeof(uint64_t));
        pCB->state = CB_NEW;
        pCB->submitCount = 0;
        pCB->status = 0;
        pCB->viewports.clear();
        pCB->scissors.clear();
        for (uint32_t i = 0; i < VK_PIPELINE_BIND_POINT_RANGE_SIZE; ++i) {
            // Before clearing lastBoundState, remove any CB bindings from all uniqueBoundSets
            for (auto set : pCB->lastBound[i].uniqueBoundSets) {
                auto set_node = my_data->setMap.find(set);
                if (set_node != my_data->setMap.end()) {
                    set_node->second->boundCmdBuffers.erase(pCB->commandBuffer);
                }
            }
            pCB->lastBound[i].reset();
        }
        memset(&pCB->activeRenderPassBeginInfo, 0, sizeof(pCB->activeRenderPassBeginInfo));
        pCB->activeRenderPass = 0;
        pCB->activeSubpassContents = VK_SUBPASS_CONTENTS_INLINE;
        pCB->activeSubpass = 0;
        pCB->framebuffer = 0;
        pCB->fenceId = 0;
        pCB->lastSubmittedFence = VK_NULL_HANDLE;
        pCB->lastSubmittedQueue = VK_NULL_HANDLE;
        pCB->destroyedSets.clear();
        pCB->updatedSets.clear();
        pCB->destroyedFramebuffers.clear();
        pCB->waitedEvents.clear();
        pCB->semaphores.clear();
        pCB->events.clear();
        pCB->waitedEventsBeforeQueryReset.clear();
        pCB->queryToStateMap.clear();
        pCB->activeQueries.clear();
        pCB->startedQueries.clear();
        pCB->imageLayoutMap.clear();
        pCB->eventToStageMap.clear();
        pCB->drawData.clear();
        pCB->currentDrawData.buffers.clear();
        pCB->primaryCommandBuffer = VK_NULL_HANDLE;
        pCB->secondaryCommandBuffers.clear();
        pCB->updateImages.clear();
pCB->updateBuffers.clear(); pCB->validate_functions.clear(); pCB->memObjs.clear(); pCB->eventUpdates.clear(); } } // Set PSO-related status bits for CB, including dynamic state set via PSO static void set_cb_pso_status(GLOBAL_CB_NODE *pCB, const PIPELINE_NODE *pPipe) { // Account for any dynamic state not set via this PSO if (!pPipe->dynStateCI.dynamicStateCount) { // All state is static pCB->status = CBSTATUS_ALL; } else { // First consider all state on // Then unset any state that's noted as dynamic in PSO // Finally OR that into CB statemask CBStatusFlags psoDynStateMask = CBSTATUS_ALL; for (uint32_t i = 0; i < pPipe->dynStateCI.dynamicStateCount; i++) { switch (pPipe->dynStateCI.pDynamicStates[i]) { case VK_DYNAMIC_STATE_VIEWPORT: psoDynStateMask &= ~CBSTATUS_VIEWPORT_SET; break; case VK_DYNAMIC_STATE_SCISSOR: psoDynStateMask &= ~CBSTATUS_SCISSOR_SET; break; case VK_DYNAMIC_STATE_LINE_WIDTH: psoDynStateMask &= ~CBSTATUS_LINE_WIDTH_SET; break; case VK_DYNAMIC_STATE_DEPTH_BIAS: psoDynStateMask &= ~CBSTATUS_DEPTH_BIAS_SET; break; case VK_DYNAMIC_STATE_BLEND_CONSTANTS: psoDynStateMask &= ~CBSTATUS_BLEND_CONSTANTS_SET; break; case VK_DYNAMIC_STATE_DEPTH_BOUNDS: psoDynStateMask &= ~CBSTATUS_DEPTH_BOUNDS_SET; break; case VK_DYNAMIC_STATE_STENCIL_COMPARE_MASK: psoDynStateMask &= ~CBSTATUS_STENCIL_READ_MASK_SET; break; case VK_DYNAMIC_STATE_STENCIL_WRITE_MASK: psoDynStateMask &= ~CBSTATUS_STENCIL_WRITE_MASK_SET; break; case VK_DYNAMIC_STATE_STENCIL_REFERENCE: psoDynStateMask &= ~CBSTATUS_STENCIL_REFERENCE_SET; break; default: // TODO : Flag error here break; } } pCB->status |= psoDynStateMask; } } // Print the last bound Gfx Pipeline static VkBool32 printPipeline(layer_data *my_data, const VkCommandBuffer cb) { VkBool32 skipCall = VK_FALSE; GLOBAL_CB_NODE *pCB = getCBNode(my_data, cb); if (pCB) { PIPELINE_NODE *pPipeTrav = getPipeline(my_data, pCB->lastBound[VK_PIPELINE_BIND_POINT_GRAPHICS].pipeline); if (!pPipeTrav) { // nothing to print } else { skipCall |= 
                log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                        DRAWSTATE_NONE, "DS", "%s",
                        vk_print_vkgraphicspipelinecreateinfo(&pPipeTrav->graphicsPipelineCI, "{DS}").c_str());
        }
    }
    return skipCall;
}

static void printCB(layer_data *my_data, const VkCommandBuffer cb) {
    GLOBAL_CB_NODE *pCB = getCBNode(my_data, cb);
    if (pCB && pCB->cmds.size() > 0) {
        log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                DRAWSTATE_NONE, "DS", "Cmds in CB %p", (void *)cb);
        vector<CMD_NODE> cmds = pCB->cmds;
        for (auto ii = cmds.begin(); ii != cmds.end(); ++ii) {
            // TODO : Need to pass cb as srcObj here
            log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0,
                    __LINE__, DRAWSTATE_NONE, "DS", "  CMD#%" PRIu64 ": %s", (*ii).cmdNumber, cmdTypeToString((*ii).type).c_str());
        }
    } else {
        // Nothing to print
    }
}

static VkBool32 synchAndPrintDSConfig(layer_data *my_data, const VkCommandBuffer cb) {
    VkBool32 skipCall = VK_FALSE;
    if (!(my_data->report_data->active_flags & VK_DEBUG_REPORT_INFORMATION_BIT_EXT)) {
        return skipCall;
    }
    skipCall |= printPipeline(my_data, cb);
    return skipCall;
}

// Flags validation error if the associated call is made inside a render pass. The apiName
// routine should ONLY be called outside a render pass.
static VkBool32 insideRenderPass(const layer_data *my_data, GLOBAL_CB_NODE *pCB, const char *apiName) {
    VkBool32 inside = VK_FALSE;
    if (pCB->activeRenderPass) {
        inside = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                         (uint64_t)pCB->commandBuffer, __LINE__, DRAWSTATE_INVALID_RENDERPASS_CMD, "DS",
                         "%s: It is invalid to issue this call inside an active render pass (%#" PRIxLEAST64 ")", apiName,
                         (uint64_t)pCB->activeRenderPass);
    }
    return inside;
}

// Flags validation error if the associated call is made outside a render pass. The apiName
// routine should ONLY be called inside a render pass.
static VkBool32 outsideRenderPass(const layer_data *my_data, GLOBAL_CB_NODE *pCB, const char *apiName) {
    VkBool32 outside = VK_FALSE;
    if (((pCB->createInfo.level == VK_COMMAND_BUFFER_LEVEL_PRIMARY) && (!pCB->activeRenderPass)) ||
        ((pCB->createInfo.level == VK_COMMAND_BUFFER_LEVEL_SECONDARY) && (!pCB->activeRenderPass) &&
         !(pCB->beginInfo.flags & VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT))) {
        outside = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                          (uint64_t)pCB->commandBuffer, __LINE__, DRAWSTATE_NO_ACTIVE_RENDERPASS, "DS",
                          "%s: This call must be issued inside an active render pass.", apiName);
    }
    return outside;
}

static void init_core_validation(layer_data *my_data, const VkAllocationCallbacks *pAllocator) {
    layer_debug_actions(my_data->report_data, my_data->logging_callback, pAllocator, "lunarg_core_validation");
    if (!globalLockInitialized) {
        loader_platform_thread_create_mutex(&globalLock);
        globalLockInitialized = 1;
    }
#if MTMERGESOURCE
    // Zero out memory property data
    memset(&memProps, 0, sizeof(VkPhysicalDeviceMemoryProperties));
#endif
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance(const VkInstanceCreateInfo *pCreateInfo,
                                                               const VkAllocationCallbacks *pAllocator, VkInstance *pInstance) {
    VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);

    assert(chain_info->u.pLayerInfo);
    PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
    PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance)fpGetInstanceProcAddr(NULL, "vkCreateInstance");
    if (fpCreateInstance == NULL)
        return VK_ERROR_INITIALIZATION_FAILED;

    // Advance the link info for the next element on the chain
    chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;

    VkResult result = fpCreateInstance(pCreateInfo, pAllocator, pInstance);
    if (result != VK_SUCCESS)
        return
result;

    layer_data *my_data = get_my_data_ptr(get_dispatch_key(*pInstance), layer_data_map);
    my_data->instance_dispatch_table = new VkLayerInstanceDispatchTable;
    layer_init_instance_dispatch_table(*pInstance, my_data->instance_dispatch_table, fpGetInstanceProcAddr);

    my_data->report_data = debug_report_create_instance(my_data->instance_dispatch_table, *pInstance,
                                                        pCreateInfo->enabledExtensionCount, pCreateInfo->ppEnabledExtensionNames);

    init_core_validation(my_data, pAllocator);

    ValidateLayerOrdering(*pCreateInfo);

    return result;
}

/* hook DestroyInstance to remove tableInstanceMap entry */
VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance, const VkAllocationCallbacks *pAllocator) {
    // TODOSC : Shouldn't need any customization here
    dispatch_key key = get_dispatch_key(instance);
    // TBD: Need any locking this early, in case this function is called at the
    // same time by more than one thread?
    layer_data *my_data = get_my_data_ptr(key, layer_data_map);
    VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
    pTable->DestroyInstance(instance, pAllocator);

    loader_platform_thread_lock_mutex(&globalLock);
    // Clean up logging callback, if any
    while (my_data->logging_callback.size() > 0) {
        VkDebugReportCallbackEXT callback = my_data->logging_callback.back();
        layer_destroy_msg_callback(my_data->report_data, callback, pAllocator);
        my_data->logging_callback.pop_back();
    }

    layer_debug_report_destroy_instance(my_data->report_data);
    delete my_data->instance_dispatch_table;
    layer_data_map.erase(key);
    loader_platform_thread_unlock_mutex(&globalLock);
    if (layer_data_map.empty()) {
        // Release mutex when destroying last instance.
        loader_platform_thread_delete_mutex(&globalLock);
        globalLockInitialized = 0;
    }
}

static void createDeviceRegisterExtensions(const VkDeviceCreateInfo *pCreateInfo, VkDevice device) {
    uint32_t i;
    // TBD: Need any locking, in case this function is called at the same time
    // by more than one thread?
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    dev_data->device_extensions.wsi_enabled = false;

    VkLayerDispatchTable *pDisp = dev_data->device_dispatch_table;
    PFN_vkGetDeviceProcAddr gpa = pDisp->GetDeviceProcAddr;
    pDisp->CreateSwapchainKHR = (PFN_vkCreateSwapchainKHR)gpa(device, "vkCreateSwapchainKHR");
    pDisp->DestroySwapchainKHR = (PFN_vkDestroySwapchainKHR)gpa(device, "vkDestroySwapchainKHR");
    pDisp->GetSwapchainImagesKHR = (PFN_vkGetSwapchainImagesKHR)gpa(device, "vkGetSwapchainImagesKHR");
    pDisp->AcquireNextImageKHR = (PFN_vkAcquireNextImageKHR)gpa(device, "vkAcquireNextImageKHR");
    pDisp->QueuePresentKHR = (PFN_vkQueuePresentKHR)gpa(device, "vkQueuePresentKHR");

    for (i = 0; i < pCreateInfo->enabledExtensionCount; i++) {
        if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_SWAPCHAIN_EXTENSION_NAME) == 0)
            dev_data->device_extensions.wsi_enabled = true;
    }
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice gpu, const VkDeviceCreateInfo *pCreateInfo,
                                                              const VkAllocationCallbacks *pAllocator, VkDevice *pDevice) {
    VkLayerDeviceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);

    assert(chain_info->u.pLayerInfo);
    PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
    PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr;
    PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice)fpGetInstanceProcAddr(NULL, "vkCreateDevice");
    if (fpCreateDevice == NULL) {
        return VK_ERROR_INITIALIZATION_FAILED;
    }

    // Advance the link info for the next element on the chain
    chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;

    VkResult result = fpCreateDevice(gpu, pCreateInfo, pAllocator, pDevice);
    if (result != VK_SUCCESS) {
        return result;
    }

    loader_platform_thread_lock_mutex(&globalLock);
    layer_data *my_instance_data = get_my_data_ptr(get_dispatch_key(gpu), layer_data_map);
    layer_data *my_device_data =
get_my_data_ptr(get_dispatch_key(*pDevice), layer_data_map);

    // Setup device dispatch table
    my_device_data->device_dispatch_table = new VkLayerDispatchTable;
    layer_init_device_dispatch_table(*pDevice, my_device_data->device_dispatch_table, fpGetDeviceProcAddr);
    my_device_data->device = *pDevice;

    my_device_data->report_data = layer_debug_report_create_device(my_instance_data->report_data, *pDevice);
    createDeviceRegisterExtensions(pCreateInfo, *pDevice);
    // Get physical device limits for this device
    my_instance_data->instance_dispatch_table->GetPhysicalDeviceProperties(gpu, &(my_device_data->physDevProperties.properties));
    uint32_t count;
    my_instance_data->instance_dispatch_table->GetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    my_device_data->physDevProperties.queue_family_properties.resize(count);
    my_instance_data->instance_dispatch_table->GetPhysicalDeviceQueueFamilyProperties(
        gpu, &count, &my_device_data->physDevProperties.queue_family_properties[0]);
    // TODO: device limits should make sure these are compatible
    if (pCreateInfo->pEnabledFeatures) {
        my_device_data->physDevProperties.features = *pCreateInfo->pEnabledFeatures;
    } else {
        memset(&my_device_data->physDevProperties.features, 0, sizeof(VkPhysicalDeviceFeatures));
    }
    loader_platform_thread_unlock_mutex(&globalLock);

    ValidateLayerOrdering(*pCreateInfo);

    return result;
}

// prototype
static void deleteRenderPasses(layer_data *);
VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, const VkAllocationCallbacks *pAllocator) {
    // TODOSC : Shouldn't need any customization here
    dispatch_key key = get_dispatch_key(device);
    layer_data *dev_data = get_my_data_ptr(key, layer_data_map);
    // Free all the memory
    loader_platform_thread_lock_mutex(&globalLock);
    deletePipelines(dev_data);
    deleteRenderPasses(dev_data);
    deleteCommandBuffers(dev_data);
    deletePools(dev_data);
    deleteLayouts(dev_data);
    dev_data->imageViewMap.clear();
    dev_data->imageMap.clear();
    dev_data->imageSubresourceMap.clear();
    dev_data->imageLayoutMap.clear();
    dev_data->bufferViewMap.clear();
    dev_data->bufferMap.clear();
    loader_platform_thread_unlock_mutex(&globalLock);
#if MTMERGESOURCE
    VkBool32 skipCall = VK_FALSE;
    loader_platform_thread_lock_mutex(&globalLock);
    log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, (uint64_t)device,
            __LINE__, MEMTRACK_NONE, "MEM", "Printing List details prior to vkDestroyDevice()");
    log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, (uint64_t)device,
            __LINE__, MEMTRACK_NONE, "MEM", "================================================");
    print_mem_list(dev_data, device);
    printCBList(dev_data, device);
    delete_cmd_buf_info_list(dev_data);
    // Report any memory leaks
    DEVICE_MEM_INFO *pInfo = NULL;
    if (dev_data->memObjMap.size() > 0) {
        for (auto ii = dev_data->memObjMap.begin(); ii != dev_data->memObjMap.end(); ++ii) {
            pInfo = &(*ii).second;
            if (pInfo->allocInfo.allocationSize != 0) {
                // Valid Usage: All child objects created on device must have been destroyed prior to destroying device
                skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)pInfo->mem, __LINE__,
                                    MEMTRACK_MEMORY_LEAK, "MEM",
                                    "Mem Object %" PRIu64 " has not been freed. You should clean up this memory by calling "
                                    "vkFreeMemory(%" PRIu64 ") prior to vkDestroyDevice().",
                                    (uint64_t)(pInfo->mem), (uint64_t)(pInfo->mem));
            }
        }
    }
    // Queues persist until device is destroyed
    delete_queue_info_list(dev_data);
    layer_debug_report_destroy_device(device);
    loader_platform_thread_unlock_mutex(&globalLock);

#if DISPATCH_MAP_DEBUG
    fprintf(stderr, "Device: %p, key: %p\n", device, key);
#endif
    VkLayerDispatchTable *pDisp = dev_data->device_dispatch_table;
    if (VK_FALSE == skipCall) {
        pDisp->DestroyDevice(device, pAllocator);
    }
#else
    dev_data->device_dispatch_table->DestroyDevice(device, pAllocator);
#endif
    delete dev_data->device_dispatch_table;
    layer_data_map.erase(key);
}

#if MTMERGESOURCE
VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
vkGetPhysicalDeviceMemoryProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceMemoryProperties *pMemoryProperties) {
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
    VkLayerInstanceDispatchTable *pInstanceTable = my_data->instance_dispatch_table;
    pInstanceTable->GetPhysicalDeviceMemoryProperties(physicalDevice, pMemoryProperties);
    memcpy(&memProps, pMemoryProperties, sizeof(VkPhysicalDeviceMemoryProperties));
}
#endif

static const VkExtensionProperties instance_extensions[] = {{VK_EXT_DEBUG_REPORT_EXTENSION_NAME, VK_EXT_DEBUG_REPORT_SPEC_VERSION}};

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkEnumerateInstanceExtensionProperties(const char *pLayerName, uint32_t *pCount, VkExtensionProperties *pProperties) {
    return util_GetExtensionProperties(1, instance_extensions, pCount, pProperties);
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkEnumerateInstanceLayerProperties(uint32_t *pCount, VkLayerProperties *pProperties) {
    return util_GetLayerProperties(ARRAY_SIZE(cv_global_layers), cv_global_layers, pCount, pProperties);
}

// TODO: Why does this exist - can we just use global?
static const VkLayerProperties cv_device_layers[] = {{
    "VK_LAYER_LUNARG_core_validation", VK_LAYER_API_VERSION, 1, "LunarG Validation Layer",
}};

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice,
                                                                                   const char *pLayerName, uint32_t *pCount,
                                                                                   VkExtensionProperties *pProperties) {
    if (pLayerName == NULL) {
        dispatch_key key = get_dispatch_key(physicalDevice);
        layer_data *my_data = get_my_data_ptr(key, layer_data_map);
        return my_data->instance_dispatch_table->EnumerateDeviceExtensionProperties(physicalDevice, NULL, pCount, pProperties);
    } else {
        return util_GetExtensionProperties(0, NULL, pCount, pProperties);
    }
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkEnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice, uint32_t *pCount, VkLayerProperties *pProperties) {
    /* draw_state physical device layers are the same as global */
    return util_GetLayerProperties(ARRAY_SIZE(cv_device_layers), cv_device_layers, pCount, pProperties);
}

// This validates that the initial layout specified in the command buffer for
// the IMAGE is the same as the global IMAGE layout
VkBool32 ValidateCmdBufImageLayouts(VkCommandBuffer cmdBuffer) {
    VkBool32 skip_call = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, cmdBuffer);
    for (auto cb_image_data : pCB->imageLayoutMap) {
        VkImageLayout imageLayout;
        if (!FindLayout(dev_data, cb_image_data.first, imageLayout)) {
            skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                 VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
                                 "Cannot submit cmd buffer using deleted image %" PRIu64 ".",
                                 reinterpret_cast<uint64_t &>(cb_image_data.first));
        } else {
            if (cb_image_data.second.initialLayout == VK_IMAGE_LAYOUT_UNDEFINED) {
                // TODO: Set memory invalid which is in mem_tracker currently
            } else if (imageLayout != cb_image_data.second.initialLayout) {
                skip_call |=
                    log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                            reinterpret_cast<uint64_t>(cmdBuffer), __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
                            "Cannot submit cmd buffer using image (%" PRIx64 ") with layout %s when "
                            "first use is %s.",
                            reinterpret_cast<uint64_t &>(cb_image_data.first.image), string_VkImageLayout(imageLayout),
                            string_VkImageLayout(cb_image_data.second.initialLayout));
            }
            SetLayout(dev_data, cb_image_data.first, cb_image_data.second.layout);
        }
    }
    return skip_call;
}

// Track which resources are in-flight by atomically incrementing their "in_use" count
VkBool32 validateAndIncrementResources(layer_data *my_data, GLOBAL_CB_NODE *pCB) {
    VkBool32 skip_call = VK_FALSE;
    for (auto drawDataElement : pCB->drawData) {
        for (auto buffer : drawDataElement.buffers) {
            auto buffer_data = my_data->bufferMap.find(buffer);
            if (buffer_data == my_data->bufferMap.end()) {
                skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT,
                                     (uint64_t)(buffer), __LINE__, DRAWSTATE_INVALID_BUFFER, "DS",
                                     "Cannot submit cmd buffer using deleted buffer %" PRIu64 ".", (uint64_t)(buffer));
            } else {
                buffer_data->second.in_use.fetch_add(1);
            }
        }
    }
    for (uint32_t i = 0; i < VK_PIPELINE_BIND_POINT_RANGE_SIZE; ++i) {
        for (auto set : pCB->lastBound[i].uniqueBoundSets) {
            auto setNode = my_data->setMap.find(set);
            if (setNode == my_data->setMap.end()) {
                skip_call |=
                    log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
                            (uint64_t)(set), __LINE__, DRAWSTATE_INVALID_DESCRIPTOR_SET, "DS",
                            "Cannot submit cmd buffer using deleted descriptor set %" PRIu64 ".", (uint64_t)(set));
            } else {
                setNode->second->in_use.fetch_add(1);
            }
        }
    }
    for (auto semaphore : pCB->semaphores) {
        auto semaphoreNode = my_data->semaphoreMap.find(semaphore);
        if (semaphoreNode == my_data->semaphoreMap.end()) {
            skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                 VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, reinterpret_cast<uint64_t &>(semaphore), __LINE__,
                                 DRAWSTATE_INVALID_SEMAPHORE, "DS",
                                 "Cannot submit cmd buffer using deleted semaphore %" PRIu64 ".",
                                 reinterpret_cast<uint64_t &>(semaphore));
        } else {
            semaphoreNode->second.in_use.fetch_add(1);
        }
    }
    for (auto event : pCB->events) {
        auto eventNode = my_data->eventMap.find(event);
        if (eventNode == my_data->eventMap.end()) {
            skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                 VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, reinterpret_cast<uint64_t &>(event), __LINE__,
                                 DRAWSTATE_INVALID_EVENT, "DS",
                                 "Cannot submit cmd buffer using deleted event %" PRIu64 ".", reinterpret_cast<uint64_t &>(event));
        } else {
            eventNode->second.in_use.fetch_add(1);
        }
    }
    return skip_call;
}

void decrementResources(layer_data *my_data, VkCommandBuffer cmdBuffer) {
    GLOBAL_CB_NODE *pCB = getCBNode(my_data, cmdBuffer);
    for (auto drawDataElement : pCB->drawData) {
        for (auto buffer : drawDataElement.buffers) {
            auto buffer_data = my_data->bufferMap.find(buffer);
            if (buffer_data != my_data->bufferMap.end()) {
                buffer_data->second.in_use.fetch_sub(1);
            }
        }
    }
    for (uint32_t i = 0; i < VK_PIPELINE_BIND_POINT_RANGE_SIZE; ++i) {
        for (auto set : pCB->lastBound[i].uniqueBoundSets) {
            auto setNode = my_data->setMap.find(set);
            if (setNode != my_data->setMap.end()) {
                setNode->second->in_use.fetch_sub(1);
            }
        }
    }
    for (auto semaphore : pCB->semaphores) {
        auto semaphoreNode = my_data->semaphoreMap.find(semaphore);
        if (semaphoreNode != my_data->semaphoreMap.end()) {
            semaphoreNode->second.in_use.fetch_sub(1);
        }
    }
    for (auto event : pCB->events) {
        auto eventNode = my_data->eventMap.find(event);
        if (eventNode != my_data->eventMap.end()) {
            eventNode->second.in_use.fetch_sub(1);
        }
    }
    for (auto queryStatePair : pCB->queryToStateMap) {
        my_data->queryToStateMap[queryStatePair.first] = queryStatePair.second;
    }
    for (auto eventStagePair : pCB->eventToStageMap) {
        my_data->eventMap[eventStagePair.first].stageMask = eventStagePair.second;
    }
}

void
decrementResources(layer_data *my_data, uint32_t fenceCount, const VkFence *pFences) {
    for (uint32_t i = 0; i < fenceCount; ++i) {
        auto fence_data = my_data->fenceMap.find(pFences[i]);
        if (fence_data == my_data->fenceMap.end() || !fence_data->second.needsSignaled)
            return;
        fence_data->second.needsSignaled = false;
        fence_data->second.in_use.fetch_sub(1);
        decrementResources(my_data, fence_data->second.priorFences.size(), fence_data->second.priorFences.data());
        for (auto cmdBuffer : fence_data->second.cmdBuffers) {
            decrementResources(my_data, cmdBuffer);
        }
    }
}

void decrementResources(layer_data *my_data, VkQueue queue) {
    auto queue_data = my_data->queueMap.find(queue);
    if (queue_data != my_data->queueMap.end()) {
        for (auto cmdBuffer : queue_data->second.untrackedCmdBuffers) {
            decrementResources(my_data, cmdBuffer);
        }
        queue_data->second.untrackedCmdBuffers.clear();
        decrementResources(my_data, queue_data->second.lastFences.size(), queue_data->second.lastFences.data());
    }
}

void updateTrackedCommandBuffers(layer_data *dev_data, VkQueue queue, VkQueue other_queue, VkFence fence) {
    if (queue == other_queue) {
        return;
    }
    auto queue_data = dev_data->queueMap.find(queue);
    auto other_queue_data = dev_data->queueMap.find(other_queue);
    if (queue_data == dev_data->queueMap.end() || other_queue_data == dev_data->queueMap.end()) {
        return;
    }
    for (auto fence : other_queue_data->second.lastFences) {
        queue_data->second.lastFences.push_back(fence);
    }
    if (fence != VK_NULL_HANDLE) {
        auto fence_data = dev_data->fenceMap.find(fence);
        if (fence_data == dev_data->fenceMap.end()) {
            return;
        }
        for (auto cmdbuffer : other_queue_data->second.untrackedCmdBuffers) {
            fence_data->second.cmdBuffers.push_back(cmdbuffer);
        }
        other_queue_data->second.untrackedCmdBuffers.clear();
    } else {
        for (auto cmdbuffer : other_queue_data->second.untrackedCmdBuffers) {
            queue_data->second.untrackedCmdBuffers.push_back(cmdbuffer);
        }
        other_queue_data->second.untrackedCmdBuffers.clear();
    }
    for (auto eventStagePair :
other_queue_data->second.eventToStageMap) {
        queue_data->second.eventToStageMap[eventStagePair.first] = eventStagePair.second;
    }
}

void trackCommandBuffers(layer_data *my_data, VkQueue queue, uint32_t submitCount, const VkSubmitInfo *pSubmits, VkFence fence) {
    auto queue_data = my_data->queueMap.find(queue);
    if (fence != VK_NULL_HANDLE) {
        vector<VkFence> prior_fences;
        auto fence_data = my_data->fenceMap.find(fence);
        if (fence_data == my_data->fenceMap.end()) {
            return;
        }
        if (queue_data != my_data->queueMap.end()) {
            prior_fences = queue_data->second.lastFences;
            queue_data->second.lastFences.clear();
            queue_data->second.lastFences.push_back(fence);
            for (auto cmdbuffer : queue_data->second.untrackedCmdBuffers) {
                fence_data->second.cmdBuffers.push_back(cmdbuffer);
            }
            queue_data->second.untrackedCmdBuffers.clear();
        }
        fence_data->second.cmdBuffers.clear();
        fence_data->second.priorFences = prior_fences;
        fence_data->second.needsSignaled = true;
        fence_data->second.queue = queue;
        fence_data->second.in_use.fetch_add(1);
        for (uint32_t submit_idx = 0; submit_idx < submitCount; submit_idx++) {
            const VkSubmitInfo *submit = &pSubmits[submit_idx];
            for (uint32_t i = 0; i < submit->commandBufferCount; ++i) {
                for (auto secondaryCmdBuffer : my_data->commandBufferMap[submit->pCommandBuffers[i]]->secondaryCommandBuffers) {
                    fence_data->second.cmdBuffers.push_back(secondaryCmdBuffer);
                }
                fence_data->second.cmdBuffers.push_back(submit->pCommandBuffers[i]);
            }
        }
    } else {
        if (queue_data != my_data->queueMap.end()) {
            for (uint32_t submit_idx = 0; submit_idx < submitCount; submit_idx++) {
                const VkSubmitInfo *submit = &pSubmits[submit_idx];
                for (uint32_t i = 0; i < submit->commandBufferCount; ++i) {
                    for (auto secondaryCmdBuffer : my_data->commandBufferMap[submit->pCommandBuffers[i]]->secondaryCommandBuffers) {
                        queue_data->second.untrackedCmdBuffers.push_back(secondaryCmdBuffer);
                    }
                    queue_data->second.untrackedCmdBuffers.push_back(submit->pCommandBuffers[i]);
                }
            }
        }
    }
    if (queue_data != my_data->queueMap.end()) {
        for (uint32_t submit_idx = 0; submit_idx < submitCount; submit_idx++) {
            const VkSubmitInfo *submit = &pSubmits[submit_idx];
            for (uint32_t i = 0; i < submit->commandBufferCount; ++i) {
                // Add cmdBuffers to both the global set and queue set
                for (auto secondaryCmdBuffer : my_data->commandBufferMap[submit->pCommandBuffers[i]]->secondaryCommandBuffers) {
                    my_data->globalInFlightCmdBuffers.insert(secondaryCmdBuffer);
                    queue_data->second.inFlightCmdBuffers.insert(secondaryCmdBuffer);
                }
                my_data->globalInFlightCmdBuffers.insert(submit->pCommandBuffers[i]);
                queue_data->second.inFlightCmdBuffers.insert(submit->pCommandBuffers[i]);
            }
        }
    }
}

bool validateCommandBufferSimultaneousUse(layer_data *dev_data, GLOBAL_CB_NODE *pCB) {
    bool skip_call = false;
    if (dev_data->globalInFlightCmdBuffers.count(pCB->commandBuffer) &&
        !(pCB->beginInfo.flags & VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT)) {
        skip_call |=
            log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0,
                    __LINE__, DRAWSTATE_INVALID_CB_SIMULTANEOUS_USE, "DS",
                    "Command Buffer %#" PRIx64 " is already in use and is not marked for simultaneous use.",
                    reinterpret_cast<uint64_t>(pCB->commandBuffer));
    }
    return skip_call;
}

static bool validateCommandBufferState(layer_data *dev_data, GLOBAL_CB_NODE *pCB) {
    bool skipCall = false;
    // Validate that cmd buffers have been updated
    if (CB_RECORDED != pCB->state) {
        if (CB_INVALID == pCB->state) {
            // Inform app of reason CB invalid
            bool causeReported = false;
            if (!pCB->destroyedSets.empty()) {
                std::stringstream set_string;
                for (auto set : pCB->destroyedSets)
                    set_string << " " << set;

                skipCall |=
                    log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                            (uint64_t)(pCB->commandBuffer), __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
                            "You are submitting command buffer %#" PRIxLEAST64
                            " that is invalid because it had the following bound descriptor set(s) destroyed: %s",
                            (uint64_t)(pCB->commandBuffer), set_string.str().c_str());
                causeReported = true;
            }
            if (!pCB->updatedSets.empty()) {
                std::stringstream set_string;
                for (auto set : pCB->updatedSets)
                    set_string << " " << set;

                skipCall |=
                    log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                            (uint64_t)(pCB->commandBuffer), __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
                            "You are submitting command buffer %#" PRIxLEAST64
                            " that is invalid because it had the following bound descriptor set(s) updated: %s",
                            (uint64_t)(pCB->commandBuffer), set_string.str().c_str());
                causeReported = true;
            }
            if (!pCB->destroyedFramebuffers.empty()) {
                std::stringstream fb_string;
                for (auto fb : pCB->destroyedFramebuffers)
                    fb_string << " " << fb;

                skipCall |=
                    log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                            reinterpret_cast<uint64_t>(pCB->commandBuffer), __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
                            "You are submitting command buffer %#" PRIxLEAST64 " that is invalid because it had the following "
                            "referenced framebuffers destroyed: %s",
                            reinterpret_cast<uint64_t>(pCB->commandBuffer), fb_string.str().c_str());
                causeReported = true;
            }
            // TODO : This is defensive programming to make sure an error is
            // flagged if we hit this INVALID cmd buffer case and none of the
            // above cases are hit. As the number of INVALID cases grows, this
            // code should be updated to seamlessly handle all the cases.
            if (!causeReported) {
                skipCall |= log_msg(
                    dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                    reinterpret_cast<uint64_t>(pCB->commandBuffer), __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER, "DS",
                    "You are submitting command buffer %#" PRIxLEAST64 " that is invalid due to an unknown cause. Validation "
                    "should "
                    "be improved to report the exact cause.",
                    reinterpret_cast<uint64_t>(pCB->commandBuffer));
            }
        } else { // Flag error for using CB w/o vkEndCommandBuffer() called
            skipCall |=
                log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                        (uint64_t)(pCB->commandBuffer), __LINE__, DRAWSTATE_NO_END_COMMAND_BUFFER, "DS",
                        "You must call vkEndCommandBuffer() on CB %#" PRIxLEAST64 " before this call to vkQueueSubmit()!",
                        (uint64_t)(pCB->commandBuffer));
        }
    }
    return skipCall;
}

static VkBool32 validatePrimaryCommandBufferState(layer_data *dev_data, GLOBAL_CB_NODE *pCB) {
    // Track in-use for resources off of primary and any secondary CBs
    VkBool32 skipCall = validateAndIncrementResources(dev_data, pCB);
    if (!pCB->secondaryCommandBuffers.empty()) {
        for (auto secondaryCmdBuffer : pCB->secondaryCommandBuffers) {
            skipCall |= validateAndIncrementResources(dev_data, dev_data->commandBufferMap[secondaryCmdBuffer]);
            GLOBAL_CB_NODE *pSubCB = getCBNode(dev_data, secondaryCmdBuffer);
            if (pSubCB->primaryCommandBuffer != pCB->commandBuffer) {
                log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0,
                        __LINE__, DRAWSTATE_COMMAND_BUFFER_SINGLE_SUBMIT_VIOLATION, "DS",
                        "CB %#" PRIxLEAST64 " was submitted with secondary buffer %#" PRIxLEAST64
                        " but that buffer has subsequently been bound to "
                        "primary cmd buffer %#" PRIxLEAST64 ".",
                        reinterpret_cast<uint64_t>(pCB->commandBuffer), reinterpret_cast<uint64_t>(secondaryCmdBuffer),
                        reinterpret_cast<uint64_t>(pSubCB->primaryCommandBuffer));
            }
        }
    }
    // TODO : Verify if this also needs to be checked for secondary command
    // buffers. If so, this block of code can move to
    // validateCommandBufferState() function. vulkan GL106 filed to clarify
    if ((pCB->beginInfo.flags & VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT) && (pCB->submitCount > 1)) {
        skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0,
                            __LINE__, DRAWSTATE_COMMAND_BUFFER_SINGLE_SUBMIT_VIOLATION, "DS",
                            "CB %#" PRIxLEAST64 " was begun w/ VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT "
                            "set, but has been submitted %#" PRIxLEAST64 " times.",
                            (uint64_t)(pCB->commandBuffer), pCB->submitCount);
    }
    skipCall |= validateCommandBufferState(dev_data, pCB);
    // If USAGE_SIMULTANEOUS_USE_BIT not set then CB cannot already be executing
    // on device
    skipCall |= validateCommandBufferSimultaneousUse(dev_data, pCB);
    return skipCall;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkQueueSubmit(VkQueue queue, uint32_t submitCount, const VkSubmitInfo *pSubmits, VkFence fence) {
    VkBool32 skipCall = VK_FALSE;
    GLOBAL_CB_NODE *pCBNode = NULL;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);
    VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
    loader_platform_thread_lock_mutex(&globalLock);
#if MTMERGESOURCE
    // TODO : Need to track fence and clear mem references when fence clears
    // MTMTODO : Merge this code with code below to avoid duplicating efforts
    uint64_t fenceId = 0;
    skipCall = add_fence_info(dev_data, fence, queue, &fenceId);

    print_mem_list(dev_data, queue);
    printCBList(dev_data, queue);
    for (uint32_t submit_idx = 0; submit_idx < submitCount; submit_idx++) {
        const VkSubmitInfo *submit = &pSubmits[submit_idx];
        for (uint32_t i = 0; i < submit->commandBufferCount; i++) {
            pCBNode = getCBNode(dev_data, submit->pCommandBuffers[i]);
            if (pCBNode) {
                pCBNode->fenceId = fenceId;
                pCBNode->lastSubmittedFence = fence;
                pCBNode->lastSubmittedQueue = queue;
                for (auto &function : pCBNode->validate_functions) {
                    skipCall |= function();
                }
                for (auto &function : pCBNode->eventUpdates) {
                    skipCall |= static_cast<VkBool32>(function(queue));
                }
            }
        }

        for (uint32_t i = 0; i <
submit->waitSemaphoreCount; i++) {
            VkSemaphore sem = submit->pWaitSemaphores[i];

            if (dev_data->semaphoreMap.find(sem) != dev_data->semaphoreMap.end()) {
                if (dev_data->semaphoreMap[sem].state != MEMTRACK_SEMAPHORE_STATE_SIGNALLED) {
                    skipCall =
                        log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT,
                                (uint64_t)sem, __LINE__, MEMTRACK_NONE, "SEMAPHORE",
                                "vkQueueSubmit: Semaphore must be in signaled state before passing to pWaitSemaphores");
                }
                dev_data->semaphoreMap[sem].state = MEMTRACK_SEMAPHORE_STATE_WAIT;
            }
        }
        for (uint32_t i = 0; i < submit->signalSemaphoreCount; i++) {
            VkSemaphore sem = submit->pSignalSemaphores[i];

            if (dev_data->semaphoreMap.find(sem) != dev_data->semaphoreMap.end()) {
                if (dev_data->semaphoreMap[sem].state != MEMTRACK_SEMAPHORE_STATE_UNSET) {
                    skipCall = log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                       VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT, (uint64_t)sem, __LINE__, MEMTRACK_NONE,
                                       "SEMAPHORE", "vkQueueSubmit: Semaphore must not be currently signaled or in a wait state");
                }
                dev_data->semaphoreMap[sem].state = MEMTRACK_SEMAPHORE_STATE_SIGNALLED;
            }
        }
    }
#endif
    // First verify that fence is not in use
    if ((fence != VK_NULL_HANDLE) && (submitCount != 0) && dev_data->fenceMap[fence].in_use.load()) {
        skipCall |=
            log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT, (uint64_t)(fence),
                    __LINE__, DRAWSTATE_INVALID_FENCE, "DS",
                    "Fence %#" PRIx64 " is already in use by another submission.", (uint64_t)(fence));
    }
    // Now verify each individual submit
    std::unordered_set<VkQueue> processed_other_queues;
    for (uint32_t submit_idx = 0; submit_idx < submitCount; submit_idx++) {
        const VkSubmitInfo *submit = &pSubmits[submit_idx];
        vector<VkSemaphore> semaphoreList;
        for (uint32_t i = 0; i < submit->waitSemaphoreCount; ++i) {
            const VkSemaphore &semaphore = submit->pWaitSemaphores[i];
            if (dev_data->semaphoreMap[semaphore].signaled) {
                dev_data->semaphoreMap[semaphore].signaled = 0;
                dev_data->semaphoreMap[semaphore].in_use.fetch_sub(1);
            } else {
                skipCall |=
                    log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                            0, __LINE__, DRAWSTATE_QUEUE_FORWARD_PROGRESS, "DS",
                            "Queue %#" PRIx64 " is waiting on semaphore %#" PRIx64 " that has no way to be signaled.",
                            reinterpret_cast<uint64_t>(queue), reinterpret_cast<const uint64_t &>(semaphore));
            }
            const VkQueue &other_queue = dev_data->semaphoreMap[semaphore].queue;
            if (other_queue != VK_NULL_HANDLE && !processed_other_queues.count(other_queue)) {
                updateTrackedCommandBuffers(dev_data, queue, other_queue, fence);
                processed_other_queues.insert(other_queue);
            }
        }
        for (uint32_t i = 0; i < submit->signalSemaphoreCount; ++i) {
            const VkSemaphore &semaphore = submit->pSignalSemaphores[i];
            semaphoreList.push_back(semaphore);
            if (dev_data->semaphoreMap[semaphore].signaled) {
                skipCall |=
                    log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                            0, __LINE__, DRAWSTATE_QUEUE_FORWARD_PROGRESS, "DS",
                            "Queue %#" PRIx64 " is signaling semaphore %#" PRIx64
                            " that has already been signaled but not waited on by queue %#" PRIx64 ".",
                            reinterpret_cast<uint64_t>(queue), reinterpret_cast<const uint64_t &>(semaphore),
                            reinterpret_cast<uint64_t>(dev_data->semaphoreMap[semaphore].queue));
            } else {
                dev_data->semaphoreMap[semaphore].signaled = 1;
                dev_data->semaphoreMap[semaphore].queue = queue;
            }
        }
        for (uint32_t i = 0; i < submit->commandBufferCount; i++) {
            skipCall |= ValidateCmdBufImageLayouts(submit->pCommandBuffers[i]);
            pCBNode = getCBNode(dev_data, submit->pCommandBuffers[i]);
            pCBNode->semaphores = semaphoreList;
            pCBNode->submitCount++; // increment submit count
            skipCall |= validatePrimaryCommandBufferState(dev_data, pCBNode);
        }
    }
    // Update cmdBuffer-related data structs and mark fence in-use
    trackCommandBuffers(dev_data, queue, submitCount, pSubmits, fence);
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        result =
dev_data->device_dispatch_table->QueueSubmit(queue, submitCount, pSubmits, fence); #if MTMERGESOURCE loader_platform_thread_lock_mutex(&globalLock); for (uint32_t submit_idx = 0; submit_idx < submitCount; submit_idx++) { const VkSubmitInfo *submit = &pSubmits[submit_idx]; for (uint32_t i = 0; i < submit->waitSemaphoreCount; i++) { VkSemaphore sem = submit->pWaitSemaphores[i]; if (dev_data->semaphoreMap.find(sem) != dev_data->semaphoreMap.end()) { dev_data->semaphoreMap[sem].state = MEMTRACK_SEMAPHORE_STATE_UNSET; } } } loader_platform_thread_unlock_mutex(&globalLock); #endif return result; } #if MTMERGESOURCE VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAllocateMemory(VkDevice device, const VkMemoryAllocateInfo *pAllocateInfo, const VkAllocationCallbacks *pAllocator, VkDeviceMemory *pMemory) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); VkResult result = my_data->device_dispatch_table->AllocateMemory(device, pAllocateInfo, pAllocator, pMemory); // TODO : Track allocations and overall size here loader_platform_thread_lock_mutex(&globalLock); add_mem_obj_info(my_data, device, *pMemory, pAllocateInfo); print_mem_list(my_data, device); loader_platform_thread_unlock_mutex(&globalLock); return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkFreeMemory(VkDevice device, VkDeviceMemory mem, const VkAllocationCallbacks *pAllocator) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); // From spec : A memory object is freed by calling vkFreeMemory() when it is no longer needed. // Before freeing a memory object, an application must ensure the memory object is no longer // in use by the device—for example by command buffers queued for execution. The memory need // not yet be unbound from all images and buffers, but any further use of those images or // buffers (on host or device) for anything other than destroying those objects will result in // undefined behavior. 
loader_platform_thread_lock_mutex(&globalLock); freeMemObjInfo(my_data, device, mem, VK_FALSE); print_mem_list(my_data, device); printCBList(my_data, device); loader_platform_thread_unlock_mutex(&globalLock); my_data->device_dispatch_table->FreeMemory(device, mem, pAllocator); } VkBool32 validateMemRange(layer_data *my_data, VkDeviceMemory mem, VkDeviceSize offset, VkDeviceSize size) { VkBool32 skipCall = VK_FALSE; if (size == 0) { // TODO: a size of 0 is not listed as an invalid use in the spec, should it be? skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)mem, __LINE__, MEMTRACK_INVALID_MAP, "MEM", "VkMapMemory: Attempting to map memory range of size zero"); } auto mem_element = my_data->memObjMap.find(mem); if (mem_element != my_data->memObjMap.end()) { // It is an application error to call VkMapMemory on an object that is already mapped if (mem_element->second.memRange.size != 0) { skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)mem, __LINE__, MEMTRACK_INVALID_MAP, "MEM", "VkMapMemory: Attempting to map memory on an already-mapped object %#" PRIxLEAST64, (uint64_t)mem); } // Validate that offset + size is within object's allocationSize if (size == VK_WHOLE_SIZE) { if (offset >= mem_element->second.allocInfo.allocationSize) { skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)mem, __LINE__, MEMTRACK_INVALID_MAP, "MEM", "Mapping Memory from %" PRIu64 " to %" PRIu64 " with total array size %" PRIu64, offset, mem_element->second.allocInfo.allocationSize, mem_element->second.allocInfo.allocationSize); } } else { if ((offset + size) > mem_element->second.allocInfo.allocationSize) { skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)mem, __LINE__, 
MEMTRACK_INVALID_MAP, "MEM", "Mapping Memory from %" PRIu64 " to %" PRIu64 " with total array size %" PRIu64, offset, size + offset, mem_element->second.allocInfo.allocationSize); } } } return skipCall; } void storeMemRanges(layer_data *my_data, VkDeviceMemory mem, VkDeviceSize offset, VkDeviceSize size) { auto mem_element = my_data->memObjMap.find(mem); if (mem_element != my_data->memObjMap.end()) { MemRange new_range; new_range.offset = offset; new_range.size = size; mem_element->second.memRange = new_range; } } VkBool32 deleteMemRanges(layer_data *my_data, VkDeviceMemory mem) { VkBool32 skipCall = VK_FALSE; auto mem_element = my_data->memObjMap.find(mem); if (mem_element != my_data->memObjMap.end()) { if (!mem_element->second.memRange.size) { // Valid Usage: memory must currently be mapped skipCall = log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)mem, __LINE__, MEMTRACK_INVALID_MAP, "MEM", "Unmapping Memory without memory being mapped: mem obj %#" PRIxLEAST64, (uint64_t)mem); } mem_element->second.memRange.size = 0; if (mem_element->second.pData) { free(mem_element->second.pData); mem_element->second.pData = 0; } } return skipCall; } static char NoncoherentMemoryFillValue = 0xb; void initializeAndTrackMemory(layer_data *my_data, VkDeviceMemory mem, VkDeviceSize size, void **ppData) { auto mem_element = my_data->memObjMap.find(mem); if (mem_element != my_data->memObjMap.end()) { mem_element->second.pDriverData = *ppData; uint32_t index = mem_element->second.allocInfo.memoryTypeIndex; if (memProps.memoryTypes[index].propertyFlags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) { mem_element->second.pData = 0; } else { if (size == VK_WHOLE_SIZE) { size = mem_element->second.allocInfo.allocationSize; } size_t convSize = (size_t)(size); mem_element->second.pData = malloc(2 * convSize); memset(mem_element->second.pData, NoncoherentMemoryFillValue, 2 * convSize); *ppData = 
static_cast<char *>(mem_element->second.pData) + (convSize / 2);
        }
    }
}
#endif
// Note: This function assumes that the global lock is held by the calling
// thread.
VkBool32 cleanInFlightCmdBuffer(layer_data *my_data, VkCommandBuffer cmdBuffer) {
    VkBool32 skip_call = VK_FALSE;
    GLOBAL_CB_NODE *pCB = getCBNode(my_data, cmdBuffer);
    if (pCB) {
        for (auto queryEventsPair : pCB->waitedEventsBeforeQueryReset) {
            for (auto event : queryEventsPair.second) {
                if (my_data->eventMap[event].needsSignaled) {
                    skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                         VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT, 0, 0, DRAWSTATE_INVALID_QUERY, "DS",
                                         "Cannot get query results on queryPool %" PRIu64
                                         " with index %d which was guarded by unsignaled event %" PRIu64 ".",
                                         (uint64_t)(queryEventsPair.first.pool), queryEventsPair.first.index, (uint64_t)(event));
                }
            }
        }
    }
    return skip_call;
}
// Remove given cmd_buffer from the global inFlight set.
// Also, if given queue is valid, then remove the cmd_buffer from that queue's
// inFlightCmdBuffers set. Finally, check all other queues and if given cmd_buffer
// is still in flight on another queue, add it back into the global set.
// Note: This function assumes that the global lock is held by the calling
// thread.
static inline void removeInFlightCmdBuffer(layer_data *dev_data, VkCommandBuffer cmd_buffer, VkQueue queue) {
    // Pull it off of global list initially, but if we find it in any other queue list, add it back in
    dev_data->globalInFlightCmdBuffers.erase(cmd_buffer);
    if (dev_data->queueMap.find(queue) != dev_data->queueMap.end()) {
        dev_data->queueMap[queue].inFlightCmdBuffers.erase(cmd_buffer);
        for (auto q : dev_data->queues) {
            if ((q != queue) &&
                (dev_data->queueMap[q].inFlightCmdBuffers.find(cmd_buffer) != dev_data->queueMap[q].inFlightCmdBuffers.end())) {
                dev_data->globalInFlightCmdBuffers.insert(cmd_buffer);
                break;
            }
        }
    }
}
#if MTMERGESOURCE
static inline bool verifyFenceStatus(VkDevice device, VkFence fence, const char *apiCall) {
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkBool32 skipCall = false;
    auto pFenceInfo = my_data->fenceMap.find(fence);
    if (pFenceInfo != my_data->fenceMap.end()) {
        if (pFenceInfo->second.firstTimeFlag != VK_TRUE) {
            if ((pFenceInfo->second.createInfo.flags & VK_FENCE_CREATE_SIGNALED_BIT) &&
                pFenceInfo->second.firstTimeFlag != VK_TRUE) {
                skipCall |=
                    log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT,
                            (uint64_t)fence, __LINE__, MEMTRACK_INVALID_FENCE_STATE, "MEM",
                            "%s specified fence %#" PRIxLEAST64 " already in SIGNALED state.", apiCall, (uint64_t)fence);
            }
            if (!pFenceInfo->second.queue && !pFenceInfo->second.swapchain) { // Checking status of unsubmitted fence
                skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT,
                                    reinterpret_cast<uint64_t>(fence), __LINE__, MEMTRACK_INVALID_FENCE_STATE, "MEM",
                                    "%s called for fence %#" PRIxLEAST64 " which has not been submitted on a Queue or during "
                                    "acquire next image.",
                                    apiCall, reinterpret_cast<uint64_t>(fence));
            }
        } else {
            pFenceInfo->second.firstTimeFlag = VK_FALSE;
        }
    }
    return skipCall;
}
#endif
VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkWaitForFences(VkDevice
device, uint32_t fenceCount, const VkFence *pFences, VkBool32 waitAll, uint64_t timeout) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); VkBool32 skip_call = VK_FALSE; #if MTMERGESOURCE // Verify fence status of submitted fences loader_platform_thread_lock_mutex(&globalLock); for (uint32_t i = 0; i < fenceCount; i++) { skip_call |= verifyFenceStatus(device, pFences[i], "vkWaitForFences"); } loader_platform_thread_unlock_mutex(&globalLock); if (skip_call) return VK_ERROR_VALIDATION_FAILED_EXT; #endif VkResult result = dev_data->device_dispatch_table->WaitForFences(device, fenceCount, pFences, waitAll, timeout); if (result == VK_SUCCESS) { loader_platform_thread_lock_mutex(&globalLock); // When we know that all fences are complete we can clean/remove their CBs if (waitAll || fenceCount == 1) { for (uint32_t i = 0; i < fenceCount; ++i) { #if MTMERGESOURCE update_fence_tracking(dev_data, pFences[i]); #endif VkQueue fence_queue = dev_data->fenceMap[pFences[i]].queue; for (auto cmdBuffer : dev_data->fenceMap[pFences[i]].cmdBuffers) { skip_call |= cleanInFlightCmdBuffer(dev_data, cmdBuffer); removeInFlightCmdBuffer(dev_data, cmdBuffer, fence_queue); } } decrementResources(dev_data, fenceCount, pFences); } // NOTE : Alternate case not handled here is when some fences have completed. In // this case for app to guarantee which fences completed it will have to call // vkGetFenceStatus() at which point we'll clean/remove their CBs if complete. 
loader_platform_thread_unlock_mutex(&globalLock); } if (VK_FALSE != skip_call) return VK_ERROR_VALIDATION_FAILED_EXT; return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetFenceStatus(VkDevice device, VkFence fence) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); bool skipCall = false; VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; #if MTMERGESOURCE loader_platform_thread_lock_mutex(&globalLock); skipCall = verifyFenceStatus(device, fence, "vkGetFenceStatus"); loader_platform_thread_unlock_mutex(&globalLock); if (skipCall) return result; #endif result = dev_data->device_dispatch_table->GetFenceStatus(device, fence); VkBool32 skip_call = VK_FALSE; loader_platform_thread_lock_mutex(&globalLock); if (result == VK_SUCCESS) { #if MTMERGESOURCE update_fence_tracking(dev_data, fence); #endif auto fence_queue = dev_data->fenceMap[fence].queue; for (auto cmdBuffer : dev_data->fenceMap[fence].cmdBuffers) { skip_call |= cleanInFlightCmdBuffer(dev_data, cmdBuffer); removeInFlightCmdBuffer(dev_data, cmdBuffer, fence_queue); } decrementResources(dev_data, 1, &fence); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE != skip_call) return VK_ERROR_VALIDATION_FAILED_EXT; return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetDeviceQueue(VkDevice device, uint32_t queueFamilyIndex, uint32_t queueIndex, VkQueue *pQueue) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); dev_data->device_dispatch_table->GetDeviceQueue(device, queueFamilyIndex, queueIndex, pQueue); loader_platform_thread_lock_mutex(&globalLock); // Add queue to tracking set only if it is new auto result = dev_data->queues.emplace(*pQueue); if (result.second == true) { QUEUE_NODE *pQNode = &dev_data->queueMap[*pQueue]; pQNode->device = device; #if MTMERGESOURCE pQNode->lastRetiredId = 0; pQNode->lastSubmittedId = 0; #endif } loader_platform_thread_unlock_mutex(&globalLock); } VK_LAYER_EXPORT VKAPI_ATTR VkResult 
VKAPI_CALL vkQueueWaitIdle(VkQueue queue) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map); decrementResources(dev_data, queue); VkBool32 skip_call = VK_FALSE; loader_platform_thread_lock_mutex(&globalLock); // Iterate over local set since we erase set members as we go in for loop auto local_cb_set = dev_data->queueMap[queue].inFlightCmdBuffers; for (auto cmdBuffer : local_cb_set) { skip_call |= cleanInFlightCmdBuffer(dev_data, cmdBuffer); removeInFlightCmdBuffer(dev_data, cmdBuffer, queue); } dev_data->queueMap[queue].inFlightCmdBuffers.clear(); loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE != skip_call) return VK_ERROR_VALIDATION_FAILED_EXT; VkResult result = dev_data->device_dispatch_table->QueueWaitIdle(queue); #if MTMERGESOURCE if (VK_SUCCESS == result) { loader_platform_thread_lock_mutex(&globalLock); retire_queue_fences(dev_data, queue); loader_platform_thread_unlock_mutex(&globalLock); } #endif return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkDeviceWaitIdle(VkDevice device) { VkBool32 skip_call = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); for (auto queue : dev_data->queues) { decrementResources(dev_data, queue); if (dev_data->queueMap.find(queue) != dev_data->queueMap.end()) { // Clear all of the queue inFlightCmdBuffers (global set cleared below) dev_data->queueMap[queue].inFlightCmdBuffers.clear(); } } for (auto cmdBuffer : dev_data->globalInFlightCmdBuffers) { skip_call |= cleanInFlightCmdBuffer(dev_data, cmdBuffer); } dev_data->globalInFlightCmdBuffers.clear(); loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE != skip_call) return VK_ERROR_VALIDATION_FAILED_EXT; VkResult result = dev_data->device_dispatch_table->DeviceWaitIdle(device); #if MTMERGESOURCE if (VK_SUCCESS == result) { loader_platform_thread_lock_mutex(&globalLock); retire_device_fences(dev_data, device); 
        loader_platform_thread_unlock_mutex(&globalLock);
    }
#endif
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
vkDestroyFence(VkDevice device, VkFence fence, const VkAllocationCallbacks *pAllocator) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    bool skipCall = false;
    loader_platform_thread_lock_mutex(&globalLock);
    if (dev_data->fenceMap[fence].in_use.load()) {
        skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT,
                            (uint64_t)(fence), __LINE__, DRAWSTATE_INVALID_FENCE, "DS",
                            "Fence %#" PRIx64 " is in use by a command buffer.", (uint64_t)(fence));
    }
#if MTMERGESOURCE
    delete_fence_info(dev_data, fence);
    auto item = dev_data->fenceMap.find(fence);
    if (item != dev_data->fenceMap.end()) {
        dev_data->fenceMap.erase(item);
    }
#endif
    loader_platform_thread_unlock_mutex(&globalLock);
    if (!skipCall)
        dev_data->device_dispatch_table->DestroyFence(device, fence, pAllocator);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
vkDestroySemaphore(VkDevice device, VkSemaphore semaphore, const VkAllocationCallbacks *pAllocator) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    dev_data->device_dispatch_table->DestroySemaphore(device, semaphore, pAllocator);
    loader_platform_thread_lock_mutex(&globalLock);
    auto item = dev_data->semaphoreMap.find(semaphore);
    if (item != dev_data->semaphoreMap.end()) {
        if (item->second.in_use.load()) {
            log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT,
                    reinterpret_cast<uint64_t>(semaphore), __LINE__, DRAWSTATE_INVALID_SEMAPHORE, "DS",
                    "Cannot delete semaphore %" PRIx64 " which is in use.", reinterpret_cast<uint64_t>(semaphore));
        }
        dev_data->semaphoreMap.erase(semaphore);
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    // TODO : Clean up any internal data structures using this obj.
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
vkDestroyEvent(VkDevice device, VkEvent event, const VkAllocationCallbacks *pAllocator) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    bool skip_call = false;
    loader_platform_thread_lock_mutex(&globalLock);
    auto event_data = dev_data->eventMap.find(event);
    if (event_data != dev_data->eventMap.end()) {
        if (event_data->second.in_use.load()) {
            skip_call |= log_msg(
                dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT,
                reinterpret_cast<uint64_t>(event), __LINE__, DRAWSTATE_INVALID_EVENT, "DS",
                "Cannot delete event %" PRIx64 " which is in use by a command buffer.", reinterpret_cast<uint64_t>(event));
        }
        dev_data->eventMap.erase(event_data);
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (!skip_call)
        dev_data->device_dispatch_table->DestroyEvent(device, event, pAllocator);
    // TODO : Clean up any internal data structures using this obj.
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
vkDestroyQueryPool(VkDevice device, VkQueryPool queryPool, const VkAllocationCallbacks *pAllocator) {
    get_my_data_ptr(get_dispatch_key(device), layer_data_map)
        ->device_dispatch_table->DestroyQueryPool(device, queryPool, pAllocator);
    // TODO : Clean up any internal data structures using this obj.
}

VKAPI_ATTR VkResult VKAPI_CALL vkGetQueryPoolResults(VkDevice device, VkQueryPool queryPool, uint32_t firstQuery,
                                                     uint32_t queryCount, size_t dataSize, void *pData, VkDeviceSize stride,
                                                     VkQueryResultFlags flags) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    unordered_map<QueryObject, vector<VkCommandBuffer>> queriesInFlight;
    GLOBAL_CB_NODE *pCB = nullptr;
    loader_platform_thread_lock_mutex(&globalLock);
    for (auto cmdBuffer : dev_data->globalInFlightCmdBuffers) {
        pCB = getCBNode(dev_data, cmdBuffer);
        for (auto queryStatePair : pCB->queryToStateMap) {
            queriesInFlight[queryStatePair.first].push_back(cmdBuffer);
        }
    }
    VkBool32 skip_call = VK_FALSE;
    for (uint32_t i = 0; i < queryCount; ++i) {
        QueryObject query = {queryPool, firstQuery + i};
        auto queryElement = queriesInFlight.find(query);
        auto queryToStateElement = dev_data->queryToStateMap.find(query);
        if (queryToStateElement != dev_data->queryToStateMap.end()) {
        }
        // Available and in flight
        if (queryElement != queriesInFlight.end() && queryToStateElement != dev_data->queryToStateMap.end() &&
            queryToStateElement->second) {
            for (auto cmdBuffer : queryElement->second) {
                pCB = getCBNode(dev_data, cmdBuffer);
                auto queryEventElement = pCB->waitedEventsBeforeQueryReset.find(query);
                if (queryEventElement == pCB->waitedEventsBeforeQueryReset.end()) {
                    skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                         VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT, 0, __LINE__, DRAWSTATE_INVALID_QUERY, "DS",
                                         "Cannot get query results on queryPool %" PRIu64 " with index %d which is in flight.",
                                         (uint64_t)(queryPool), firstQuery + i);
                } else {
                    for (auto event : queryEventElement->second) {
                        dev_data->eventMap[event].needsSignaled = true;
                    }
                }
            }
            // Unavailable and in flight
        } else if (queryElement != queriesInFlight.end() && queryToStateElement != dev_data->queryToStateMap.end() &&
                   !queryToStateElement->second) {
            // TODO : Can there be the same query in use by multiple command buffers in flight?
            bool make_available = false;
            for (auto cmdBuffer : queryElement->second) {
                pCB = getCBNode(dev_data, cmdBuffer);
                make_available |= pCB->queryToStateMap[query];
            }
            if (!(((flags & VK_QUERY_RESULT_PARTIAL_BIT) || (flags & VK_QUERY_RESULT_WAIT_BIT)) && make_available)) {
                skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                     VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT, 0, __LINE__, DRAWSTATE_INVALID_QUERY, "DS",
                                     "Cannot get query results on queryPool %" PRIu64 " with index %d which is unavailable.",
                                     (uint64_t)(queryPool), firstQuery + i);
            }
            // Unavailable
        } else if (queryToStateElement != dev_data->queryToStateMap.end() && !queryToStateElement->second) {
            skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                 VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT, 0, __LINE__, DRAWSTATE_INVALID_QUERY, "DS",
                                 "Cannot get query results on queryPool %" PRIu64 " with index %d which is unavailable.",
                                 (uint64_t)(queryPool), firstQuery + i);
            // Uninitialized
        } else if (queryToStateElement == dev_data->queryToStateMap.end()) {
            skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                 VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT, 0, __LINE__, DRAWSTATE_INVALID_QUERY, "DS",
                                 "Cannot get query results on queryPool %" PRIu64
                                 " with index %d as data has not been collected for this index.",
                                 (uint64_t)(queryPool), firstQuery + i);
        }
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (skip_call)
        return VK_ERROR_VALIDATION_FAILED_EXT;
    return dev_data->device_dispatch_table->GetQueryPoolResults(device, queryPool, firstQuery, queryCount, dataSize, pData,
                                                                stride, flags);
}

VkBool32 validateIdleBuffer(const layer_data *my_data, VkBuffer buffer) {
    VkBool32 skip_call = VK_FALSE;
    auto buffer_data = my_data->bufferMap.find(buffer);
    if (buffer_data == my_data->bufferMap.end()) {
        skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT,
                             (uint64_t)(buffer), __LINE__, DRAWSTATE_DOUBLE_DESTROY, "DS",
"Cannot free buffer %" PRIxLEAST64 " that has not been allocated.", (uint64_t)(buffer)); } else { if (buffer_data->second.in_use.load()) { skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, (uint64_t)(buffer), __LINE__, DRAWSTATE_OBJECT_INUSE, "DS", "Cannot free buffer %" PRIxLEAST64 " that is in use by a command buffer.", (uint64_t)(buffer)); } } return skip_call; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyBuffer(VkDevice device, VkBuffer buffer, const VkAllocationCallbacks *pAllocator) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); VkBool32 skipCall = VK_FALSE; loader_platform_thread_lock_mutex(&globalLock); #if MTMERGESOURCE auto item = dev_data->bufferBindingMap.find((uint64_t)buffer); if (item != dev_data->bufferBindingMap.end()) { skipCall = clear_object_binding(dev_data, device, (uint64_t)buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT); dev_data->bufferBindingMap.erase(item); } #endif if (!validateIdleBuffer(dev_data, buffer) && (VK_FALSE == skipCall)) { loader_platform_thread_unlock_mutex(&globalLock); dev_data->device_dispatch_table->DestroyBuffer(device, buffer, pAllocator); loader_platform_thread_lock_mutex(&globalLock); } dev_data->bufferMap.erase(buffer); loader_platform_thread_unlock_mutex(&globalLock); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyBufferView(VkDevice device, VkBufferView bufferView, const VkAllocationCallbacks *pAllocator) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); dev_data->device_dispatch_table->DestroyBufferView(device, bufferView, pAllocator); loader_platform_thread_lock_mutex(&globalLock); auto item = dev_data->bufferViewMap.find(bufferView); if (item != dev_data->bufferViewMap.end()) { dev_data->bufferViewMap.erase(item); } loader_platform_thread_unlock_mutex(&globalLock); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyImage(VkDevice device, VkImage image, const 
VkAllocationCallbacks *pAllocator) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkBool32 skipCall = VK_FALSE;
#if MTMERGESOURCE
    loader_platform_thread_lock_mutex(&globalLock);
    auto item = dev_data->imageBindingMap.find((uint64_t)image);
    if (item != dev_data->imageBindingMap.end()) {
        skipCall = clear_object_binding(dev_data, device, (uint64_t)image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT);
        dev_data->imageBindingMap.erase(item);
    }
    loader_platform_thread_unlock_mutex(&globalLock);
#endif
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->DestroyImage(device, image, pAllocator);
    loader_platform_thread_lock_mutex(&globalLock);
    const auto &entry = dev_data->imageMap.find(image);
    if (entry != dev_data->imageMap.end()) {
        // Clear any memory mapping for this image
        const auto &mem_entry = dev_data->memObjMap.find(entry->second.mem);
        if (mem_entry != dev_data->memObjMap.end())
            mem_entry->second.image = VK_NULL_HANDLE;
        // Remove image from imageMap
        dev_data->imageMap.erase(entry);
    }
    const auto &subEntry = dev_data->imageSubresourceMap.find(image);
    if (subEntry != dev_data->imageSubresourceMap.end()) {
        for (const auto &pair : subEntry->second) {
            dev_data->imageLayoutMap.erase(pair);
        }
        dev_data->imageSubresourceMap.erase(subEntry);
    }
    loader_platform_thread_unlock_mutex(&globalLock);
}
#if MTMERGESOURCE
VkBool32 print_memory_range_error(layer_data *dev_data, const uint64_t object_handle, const uint64_t other_handle,
                                  VkDebugReportObjectTypeEXT object_type) {
    if (object_type == VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT) {
        return log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, object_type, object_handle, 0,
                       MEMTRACK_INVALID_ALIASING, "MEM", "Buffer %" PRIx64 " is aliased with image %" PRIx64, object_handle,
                       other_handle);
    } else {
        return log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, object_type, object_handle, 0,
                       MEMTRACK_INVALID_ALIASING, "MEM", "Image %" PRIx64 " is aliased with buffer %" PRIx64, object_handle,
                       other_handle);
    }
}

VkBool32 validate_memory_range(layer_data *dev_data, const vector<MEMORY_RANGE> &ranges, const MEMORY_RANGE &new_range,
                               VkDebugReportObjectTypeEXT object_type) {
    VkBool32 skip_call = false;
    for (auto range : ranges) {
        if ((range.end & ~(dev_data->physDevProperties.properties.limits.bufferImageGranularity - 1)) <
            (new_range.start & ~(dev_data->physDevProperties.properties.limits.bufferImageGranularity - 1)))
            continue;
        if ((range.start & ~(dev_data->physDevProperties.properties.limits.bufferImageGranularity - 1)) >
            (new_range.end & ~(dev_data->physDevProperties.properties.limits.bufferImageGranularity - 1)))
            continue;
        skip_call |= print_memory_range_error(dev_data, new_range.handle, range.handle, object_type);
    }
    return skip_call;
}

VkBool32 validate_buffer_image_aliasing(layer_data *dev_data, uint64_t handle, VkDeviceMemory mem, VkDeviceSize memoryOffset,
                                        VkMemoryRequirements memRequirements, vector<MEMORY_RANGE> &ranges,
                                        const vector<MEMORY_RANGE> &other_ranges, VkDebugReportObjectTypeEXT object_type) {
    MEMORY_RANGE range;
    range.handle = handle;
    range.memory = mem;
    range.start = memoryOffset;
    range.end = memoryOffset + memRequirements.size - 1;
    ranges.push_back(range);
    return validate_memory_range(dev_data, other_ranges, range, object_type);
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkBindBufferMemory(VkDevice device, VkBuffer buffer, VkDeviceMemory mem, VkDeviceSize memoryOffset) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
    loader_platform_thread_lock_mutex(&globalLock);
    // Track objects tied to memory
    uint64_t buffer_handle = (uint64_t)(buffer);
    VkBool32 skipCall =
        set_mem_binding(dev_data, device, mem, buffer_handle, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, "vkBindBufferMemory");
    add_object_binding_info(dev_data, buffer_handle, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, mem);
    {
        VkMemoryRequirements memRequirements;
        // MTMTODO : Shouldn't this call down the chain?
vkGetBufferMemoryRequirements(device, buffer, &memRequirements); skipCall |= validate_buffer_image_aliasing(dev_data, buffer_handle, mem, memoryOffset, memRequirements, dev_data->memObjMap[mem].bufferRanges, dev_data->memObjMap[mem].imageRanges, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT); // Validate memory requirements alignment if (vk_safe_modulo(memoryOffset, memRequirements.alignment) != 0) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DRAWSTATE_INVALID_BUFFER_MEMORY_OFFSET, "DS", "vkBindBufferMemory(): memoryOffset is %#" PRIxLEAST64 " but must be an integer multiple of the " "VkMemoryRequirements::alignment value %#" PRIxLEAST64 ", returned from a call to vkGetBufferMemoryRequirements with buffer", memoryOffset, memRequirements.alignment); } // Validate device limits alignments VkBufferUsageFlags usage = dev_data->bufferMap[buffer].create_info->usage; if (usage & (VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT | VK_BUFFER_USAGE_STORAGE_TEXEL_BUFFER_BIT)) { if (vk_safe_modulo(memoryOffset, dev_data->physDevProperties.properties.limits.minTexelBufferOffsetAlignment) != 0) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DRAWSTATE_INVALID_TEXEL_BUFFER_OFFSET, "DS", "vkBindBufferMemory(): memoryOffset is %#" PRIxLEAST64 " but must be a multiple of " "device limit minTexelBufferOffsetAlignment %#" PRIxLEAST64, memoryOffset, dev_data->physDevProperties.properties.limits.minTexelBufferOffsetAlignment); } } if (usage & VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT) { if (vk_safe_modulo(memoryOffset, dev_data->physDevProperties.properties.limits.minUniformBufferOffsetAlignment) != 0) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DRAWSTATE_INVALID_UNIFORM_BUFFER_OFFSET, "DS", "vkBindBufferMemory(): memoryOffset is %#" 
PRIxLEAST64 " but must be a multiple of " "device limit minUniformBufferOffsetAlignment %#" PRIxLEAST64, memoryOffset, dev_data->physDevProperties.properties.limits.minUniformBufferOffsetAlignment); } } if (usage & VK_BUFFER_USAGE_STORAGE_BUFFER_BIT) { if (vk_safe_modulo(memoryOffset, dev_data->physDevProperties.properties.limits.minStorageBufferOffsetAlignment) != 0) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DRAWSTATE_INVALID_STORAGE_BUFFER_OFFSET, "DS", "vkBindBufferMemory(): memoryOffset is %#" PRIxLEAST64 " but must be a multiple of " "device limit minStorageBufferOffsetAlignment %#" PRIxLEAST64, memoryOffset, dev_data->physDevProperties.properties.limits.minStorageBufferOffsetAlignment); } } } print_mem_list(dev_data, device); loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) { result = dev_data->device_dispatch_table->BindBufferMemory(device, buffer, mem, memoryOffset); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetBufferMemoryRequirements(VkDevice device, VkBuffer buffer, VkMemoryRequirements *pMemoryRequirements) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); // TODO : What to track here? // Could potentially save returned mem requirements and validate values passed into BindBufferMemory my_data->device_dispatch_table->GetBufferMemoryRequirements(device, buffer, pMemoryRequirements); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetImageMemoryRequirements(VkDevice device, VkImage image, VkMemoryRequirements *pMemoryRequirements) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); // TODO : What to track here? 
// Could potentially save returned mem requirements and validate values passed into BindImageMemory my_data->device_dispatch_table->GetImageMemoryRequirements(device, image, pMemoryRequirements); } #endif VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyImageView(VkDevice device, VkImageView imageView, const VkAllocationCallbacks *pAllocator) { get_my_data_ptr(get_dispatch_key(device), layer_data_map) ->device_dispatch_table->DestroyImageView(device, imageView, pAllocator); // TODO : Clean up any internal data structures using this obj. } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyShaderModule(VkDevice device, VkShaderModule shaderModule, const VkAllocationCallbacks *pAllocator) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); my_data->shaderModuleMap.erase(shaderModule); loader_platform_thread_unlock_mutex(&globalLock); my_data->device_dispatch_table->DestroyShaderModule(device, shaderModule, pAllocator); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyPipeline(VkDevice device, VkPipeline pipeline, const VkAllocationCallbacks *pAllocator) { get_my_data_ptr(get_dispatch_key(device), layer_data_map)->device_dispatch_table->DestroyPipeline(device, pipeline, pAllocator); // TODO : Clean up any internal data structures using this obj. } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyPipelineLayout(VkDevice device, VkPipelineLayout pipelineLayout, const VkAllocationCallbacks *pAllocator) { get_my_data_ptr(get_dispatch_key(device), layer_data_map) ->device_dispatch_table->DestroyPipelineLayout(device, pipelineLayout, pAllocator); // TODO : Clean up any internal data structures using this obj. 
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
vkDestroySampler(VkDevice device, VkSampler sampler, const VkAllocationCallbacks *pAllocator) {
    get_my_data_ptr(get_dispatch_key(device), layer_data_map)->device_dispatch_table->DestroySampler(device, sampler, pAllocator);
    // TODO : Clean up any internal data structures using this obj.
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
vkDestroyDescriptorSetLayout(VkDevice device, VkDescriptorSetLayout descriptorSetLayout, const VkAllocationCallbacks *pAllocator) {
    get_my_data_ptr(get_dispatch_key(device), layer_data_map)
        ->device_dispatch_table->DestroyDescriptorSetLayout(device, descriptorSetLayout, pAllocator);
    // TODO : Clean up any internal data structures using this obj.
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
vkDestroyDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, const VkAllocationCallbacks *pAllocator) {
    get_my_data_ptr(get_dispatch_key(device), layer_data_map)
        ->device_dispatch_table->DestroyDescriptorPool(device, descriptorPool, pAllocator);
    // TODO : Clean up any internal data structures using this obj.
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkFreeCommandBuffers(VkDevice device, VkCommandPool commandPool,
                                                                uint32_t commandBufferCount, const VkCommandBuffer *pCommandBuffers) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);

    bool skip_call = false;
    loader_platform_thread_lock_mutex(&globalLock);
    for (uint32_t i = 0; i < commandBufferCount; i++) {
#if MTMERGESOURCE
        clear_cmd_buf_and_mem_references(dev_data, pCommandBuffers[i]);
#endif
        if (dev_data->globalInFlightCmdBuffers.count(pCommandBuffers[i])) {
            skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                                 reinterpret_cast<uint64_t>(pCommandBuffers[i]), __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER_RESET,
                                 "DS", "Attempt to free command buffer (%#" PRIxLEAST64 ") which is in use.",
                                 reinterpret_cast<uint64_t>(pCommandBuffers[i]));
        }
        // Delete CB information structure, and remove from commandBufferMap
        auto cb = dev_data->commandBufferMap.find(pCommandBuffers[i]);
        if (cb != dev_data->commandBufferMap.end()) {
            // reset prior to delete for data clean-up
            resetCB(dev_data, (*cb).second->commandBuffer);
            delete (*cb).second;
            dev_data->commandBufferMap.erase(cb);
        }
        // Remove commandBuffer reference from commandPoolMap
        dev_data->commandPoolMap[commandPool].commandBuffers.remove(pCommandBuffers[i]);
    }
#if MTMERGESOURCE
    printCBList(dev_data, device);
#endif
    loader_platform_thread_unlock_mutex(&globalLock);

    if (!skip_call)
        dev_data->device_dispatch_table->FreeCommandBuffers(device, commandPool, commandBufferCount, pCommandBuffers);
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateCommandPool(VkDevice device, const VkCommandPoolCreateInfo *pCreateInfo,
                                                                   const VkAllocationCallbacks *pAllocator,
                                                                   VkCommandPool *pCommandPool) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);

    VkResult result = dev_data->device_dispatch_table->CreateCommandPool(device, pCreateInfo, pAllocator, pCommandPool);

    if (VK_SUCCESS == result)
    {
        loader_platform_thread_lock_mutex(&globalLock);
        dev_data->commandPoolMap[*pCommandPool].createFlags = pCreateInfo->flags;
        dev_data->commandPoolMap[*pCommandPool].queueFamilyIndex = pCreateInfo->queueFamilyIndex;
        loader_platform_thread_unlock_mutex(&globalLock);
    }
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateQueryPool(VkDevice device, const VkQueryPoolCreateInfo *pCreateInfo,
                                                                 const VkAllocationCallbacks *pAllocator, VkQueryPool *pQueryPool) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkResult result = dev_data->device_dispatch_table->CreateQueryPool(device, pCreateInfo, pAllocator, pQueryPool);
    if (result == VK_SUCCESS) {
        loader_platform_thread_lock_mutex(&globalLock);
        dev_data->queryPoolMap[*pQueryPool].createInfo = *pCreateInfo;
        loader_platform_thread_unlock_mutex(&globalLock);
    }
    return result;
}

VkBool32 validateCommandBuffersNotInUse(const layer_data *dev_data, VkCommandPool commandPool) {
    VkBool32 skipCall = VK_FALSE;
    auto pool_data = dev_data->commandPoolMap.find(commandPool);
    if (pool_data != dev_data->commandPoolMap.end()) {
        for (auto cmdBuffer : pool_data->second.commandBuffers) {
            if (dev_data->globalInFlightCmdBuffers.count(cmdBuffer)) {
                skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_POOL_EXT,
                                    (uint64_t)(commandPool), __LINE__, DRAWSTATE_OBJECT_INUSE, "DS",
                                    "Cannot reset command pool %" PRIx64 " when allocated command buffer %" PRIx64 " is in use.",
                                    (uint64_t)(commandPool), (uint64_t)(cmdBuffer));
            }
        }
    }
    return skipCall;
}

// Destroy commandPool along with all of the commandBuffers allocated from that pool
VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
vkDestroyCommandPool(VkDevice device, VkCommandPool commandPool, const VkAllocationCallbacks *pAllocator) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    bool commandBufferComplete = false;
    bool skipCall = false;
    loader_platform_thread_lock_mutex(&globalLock);
#if MTMERGESOURCE
    // Verify that command buffers in pool are complete (not in-flight)
    // MTMTODO : Merge this with code below (separate *NotInUse() call)
    for (auto it = dev_data->commandPoolMap[commandPool].commandBuffers.begin();
         it != dev_data->commandPoolMap[commandPool].commandBuffers.end(); it++) {
        commandBufferComplete = VK_FALSE;
        skipCall = checkCBCompleted(dev_data, *it, &commandBufferComplete);
        if (VK_FALSE == commandBufferComplete) {
            skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                                (uint64_t)(*it), __LINE__, MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM",
                                "Destroying Command Pool 0x%" PRIxLEAST64 " before "
                                "its command buffer (0x%" PRIxLEAST64 ") has completed.",
                                (uint64_t)(commandPool), reinterpret_cast<uint64_t>(*it));
        }
    }
#endif
    // Must remove cmdpool from cmdpoolmap, after removing all cmdbuffers in its list from the commandPoolMap
    if (dev_data->commandPoolMap.find(commandPool) != dev_data->commandPoolMap.end()) {
        for (auto poolCb = dev_data->commandPoolMap[commandPool].commandBuffers.begin();
             poolCb != dev_data->commandPoolMap[commandPool].commandBuffers.end();) {
            auto del_cb = dev_data->commandBufferMap.find(*poolCb);
            delete (*del_cb).second;                  // delete CB info structure
            dev_data->commandBufferMap.erase(del_cb); // Remove this command buffer
            poolCb = dev_data->commandPoolMap[commandPool].commandBuffers.erase(
                poolCb); // Remove CB reference from commandPoolMap's list
        }
    }
    dev_data->commandPoolMap.erase(commandPool);

    VkBool32 result = validateCommandBuffersNotInUse(dev_data, commandPool);

    loader_platform_thread_unlock_mutex(&globalLock);

    if (result)
        return;

    if (!skipCall)
        dev_data->device_dispatch_table->DestroyCommandPool(device, commandPool, pAllocator);
#if MTMERGESOURCE
    loader_platform_thread_lock_mutex(&globalLock);
    auto item = dev_data->commandPoolMap[commandPool].commandBuffers.begin();
    // Remove command buffers from command buffer map
    while
        (item != dev_data->commandPoolMap[commandPool].commandBuffers.end()) {
        auto del_item = item++;
        delete_cmd_buf_info(dev_data, commandPool, *del_item);
    }
    dev_data->commandPoolMap.erase(commandPool);
    loader_platform_thread_unlock_mutex(&globalLock);
#endif
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkResetCommandPool(VkDevice device, VkCommandPool commandPool, VkCommandPoolResetFlags flags) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    bool commandBufferComplete = false;
    bool skipCall = false;
    VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
#if MTMERGESOURCE
    // MTMTODO : Merge this with *NotInUse() call below
    loader_platform_thread_lock_mutex(&globalLock);
    auto it = dev_data->commandPoolMap[commandPool].commandBuffers.begin();
    // Verify that CB's in pool are complete (not in-flight)
    while (it != dev_data->commandPoolMap[commandPool].commandBuffers.end()) {
        skipCall = checkCBCompleted(dev_data, (*it), &commandBufferComplete);
        if (!commandBufferComplete) {
            skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                                (uint64_t)(*it), __LINE__, MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM",
                                "Resetting CB %p before it has completed. You must check CB "
                                "flag before calling vkResetCommandBuffer().",
                                (*it));
        } else {
            // Clear memory references at this point.
            clear_cmd_buf_and_mem_references(dev_data, (*it));
        }
        ++it;
    }
    loader_platform_thread_unlock_mutex(&globalLock);
#endif
    if (VK_TRUE == validateCommandBuffersNotInUse(dev_data, commandPool))
        return VK_ERROR_VALIDATION_FAILED_EXT;

    if (!skipCall)
        result = dev_data->device_dispatch_table->ResetCommandPool(device, commandPool, flags);

    // Reset all of the CBs allocated from this pool
    if (VK_SUCCESS == result) {
        loader_platform_thread_lock_mutex(&globalLock);
        auto it = dev_data->commandPoolMap[commandPool].commandBuffers.begin();
        while (it != dev_data->commandPoolMap[commandPool].commandBuffers.end()) {
            resetCB(dev_data, (*it));
            ++it;
        }
        loader_platform_thread_unlock_mutex(&globalLock);
    }
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetFences(VkDevice device, uint32_t fenceCount, const VkFence *pFences) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
    bool skipCall = false;
    loader_platform_thread_lock_mutex(&globalLock);
    for (uint32_t i = 0; i < fenceCount; ++i) {
#if MTMERGESOURCE
        // Reset fence state in fenceCreateInfo structure
        // MTMTODO : Merge with code below
        auto fence_item = dev_data->fenceMap.find(pFences[i]);
        if (fence_item != dev_data->fenceMap.end()) {
            // Validate fences in SIGNALED state
            if (!(fence_item->second.createInfo.flags & VK_FENCE_CREATE_SIGNALED_BIT)) {
                // TODO: I don't see a Valid Usage section for ResetFences. This behavior should be documented there.
                skipCall = log_msg(dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT,
                                   (uint64_t)pFences[i], __LINE__, MEMTRACK_INVALID_FENCE_STATE, "MEM",
                                   "Fence %#" PRIxLEAST64 " submitted to VkResetFences in UNSIGNALED STATE", (uint64_t)pFences[i]);
            } else {
                fence_item->second.createInfo.flags =
                    static_cast<VkFenceCreateFlags>(fence_item->second.createInfo.flags & ~VK_FENCE_CREATE_SIGNALED_BIT);
            }
        }
#endif
        if (dev_data->fenceMap[pFences[i]].in_use.load()) {
            skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT,
                                reinterpret_cast<uint64_t>(pFences[i]), __LINE__, DRAWSTATE_INVALID_FENCE, "DS",
                                "Fence %#" PRIx64 " is in use by a command buffer.", reinterpret_cast<uint64_t>(pFences[i]));
        }
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (!skipCall)
        result = dev_data->device_dispatch_table->ResetFences(device, fenceCount, pFences);
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
vkDestroyFramebuffer(VkDevice device, VkFramebuffer framebuffer, const VkAllocationCallbacks *pAllocator) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
    auto fbNode = dev_data->frameBufferMap.find(framebuffer);
    if (fbNode != dev_data->frameBufferMap.end()) {
        for (auto cb : fbNode->second.referencingCmdBuffers) {
            auto cbNode = dev_data->commandBufferMap.find(cb);
            if (cbNode != dev_data->commandBufferMap.end()) {
                // Set CB as invalid and record destroyed framebuffer
                cbNode->second->state = CB_INVALID;
                cbNode->second->destroyedFramebuffers.insert(framebuffer);
            }
        }
        delete[] fbNode->second.createInfo.pAttachments;
        dev_data->frameBufferMap.erase(fbNode);
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    dev_data->device_dispatch_table->DestroyFramebuffer(device, framebuffer, pAllocator);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
vkDestroyRenderPass(VkDevice device, VkRenderPass renderPass, const VkAllocationCallbacks *pAllocator) {
    layer_data
        *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    dev_data->device_dispatch_table->DestroyRenderPass(device, renderPass, pAllocator);
    loader_platform_thread_lock_mutex(&globalLock);
    dev_data->renderPassMap.erase(renderPass);
    loader_platform_thread_unlock_mutex(&globalLock);
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateBuffer(VkDevice device, const VkBufferCreateInfo *pCreateInfo,
                                                              const VkAllocationCallbacks *pAllocator, VkBuffer *pBuffer) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);

    VkResult result = dev_data->device_dispatch_table->CreateBuffer(device, pCreateInfo, pAllocator, pBuffer);
    if (VK_SUCCESS == result) {
        loader_platform_thread_lock_mutex(&globalLock);
#if MTMERGESOURCE
        add_object_create_info(dev_data, (uint64_t)*pBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, pCreateInfo);
#endif
        // TODO : This doesn't create deep copy of pQueueFamilyIndices so need to fix that if/when we want that data to be valid
        dev_data->bufferMap[*pBuffer].create_info = unique_ptr<VkBufferCreateInfo>(new VkBufferCreateInfo(*pCreateInfo));
        dev_data->bufferMap[*pBuffer].in_use.store(0);
        loader_platform_thread_unlock_mutex(&globalLock);
    }
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateBufferView(VkDevice device, const VkBufferViewCreateInfo *pCreateInfo,
                                                                  const VkAllocationCallbacks *pAllocator, VkBufferView *pView) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkResult result = dev_data->device_dispatch_table->CreateBufferView(device, pCreateInfo, pAllocator, pView);
    if (VK_SUCCESS == result) {
        loader_platform_thread_lock_mutex(&globalLock);
        dev_data->bufferViewMap[*pView] = VkBufferViewCreateInfo(*pCreateInfo);
#if MTMERGESOURCE
        // In order to create a valid buffer view, the buffer must have been created with at least one of the
        // following flags: UNIFORM_TEXEL_BUFFER_BIT or STORAGE_TEXEL_BUFFER_BIT
        validate_buffer_usage_flags(dev_data, device, pCreateInfo->buffer,
                                    VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT | VK_BUFFER_USAGE_STORAGE_TEXEL_BUFFER_BIT, VK_FALSE,
                                    "vkCreateBufferView()", "VK_BUFFER_USAGE_[STORAGE|UNIFORM]_TEXEL_BUFFER_BIT");
#endif
        loader_platform_thread_unlock_mutex(&globalLock);
    }
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImage(VkDevice device, const VkImageCreateInfo *pCreateInfo,
                                                             const VkAllocationCallbacks *pAllocator, VkImage *pImage) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkResult result = dev_data->device_dispatch_table->CreateImage(device, pCreateInfo, pAllocator, pImage);
    if (VK_SUCCESS == result) {
        loader_platform_thread_lock_mutex(&globalLock);
#if MTMERGESOURCE
        add_object_create_info(dev_data, (uint64_t)*pImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, pCreateInfo);
#endif
        IMAGE_LAYOUT_NODE image_node;
        image_node.layout = pCreateInfo->initialLayout;
        image_node.format = pCreateInfo->format;
        dev_data->imageMap[*pImage].createInfo = *pCreateInfo;
        ImageSubresourcePair subpair = {*pImage, false, VkImageSubresource()};
        dev_data->imageSubresourceMap[*pImage].push_back(subpair);
        dev_data->imageLayoutMap[subpair] = image_node;
        loader_platform_thread_unlock_mutex(&globalLock);
    }
    return result;
}

static void ResolveRemainingLevelsLayers(layer_data *dev_data, VkImageSubresourceRange *range, VkImage image) {
    /* expects globalLock to be held by caller */

    auto image_node_it = dev_data->imageMap.find(image);
    if (image_node_it != dev_data->imageMap.end()) {
        /* If the caller used the special values VK_REMAINING_MIP_LEVELS and
         * VK_REMAINING_ARRAY_LAYERS, resolve them now in our internal state to
         * the actual values.
         */
        if (range->levelCount == VK_REMAINING_MIP_LEVELS) {
            range->levelCount = image_node_it->second.createInfo.mipLevels - range->baseMipLevel;
        }

        if (range->layerCount == VK_REMAINING_ARRAY_LAYERS) {
            range->layerCount = image_node_it->second.createInfo.arrayLayers - range->baseArrayLayer;
        }
    }
}

// Return the correct layer/level counts if the caller used the special
// values VK_REMAINING_MIP_LEVELS or VK_REMAINING_ARRAY_LAYERS.
static void ResolveRemainingLevelsLayers(layer_data *dev_data, uint32_t *levels, uint32_t *layers, VkImageSubresourceRange range,
                                         VkImage image) {
    /* expects globalLock to be held by caller */

    *levels = range.levelCount;
    *layers = range.layerCount;

    auto image_node_it = dev_data->imageMap.find(image);
    if (image_node_it != dev_data->imageMap.end()) {
        if (range.levelCount == VK_REMAINING_MIP_LEVELS) {
            *levels = image_node_it->second.createInfo.mipLevels - range.baseMipLevel;
        }

        if (range.layerCount == VK_REMAINING_ARRAY_LAYERS) {
            *layers = image_node_it->second.createInfo.arrayLayers - range.baseArrayLayer;
        }
    }
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImageView(VkDevice device, const VkImageViewCreateInfo *pCreateInfo,
                                                                 const VkAllocationCallbacks *pAllocator, VkImageView *pView) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkResult result = dev_data->device_dispatch_table->CreateImageView(device, pCreateInfo, pAllocator, pView);
    if (VK_SUCCESS == result) {
        loader_platform_thread_lock_mutex(&globalLock);
        VkImageViewCreateInfo localCI = VkImageViewCreateInfo(*pCreateInfo);
        ResolveRemainingLevelsLayers(dev_data, &localCI.subresourceRange, pCreateInfo->image);
        dev_data->imageViewMap[*pView] = localCI;
#if MTMERGESOURCE
        // Validate that img has correct usage flags set
        validate_image_usage_flags(dev_data, device, pCreateInfo->image,
                                   VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_STORAGE_BIT |
                                       VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
                                   VK_FALSE,
                                   "vkCreateImageView()", "VK_IMAGE_USAGE_[SAMPLED|STORAGE|COLOR_ATTACHMENT]_BIT");
#endif
        loader_platform_thread_unlock_mutex(&globalLock);
    }
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkCreateFence(VkDevice device, const VkFenceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkFence *pFence) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkResult result = dev_data->device_dispatch_table->CreateFence(device, pCreateInfo, pAllocator, pFence);
    if (VK_SUCCESS == result) {
        loader_platform_thread_lock_mutex(&globalLock);
        FENCE_NODE *pFN = &dev_data->fenceMap[*pFence];
#if MTMERGESOURCE
        memset(pFN, 0, sizeof(MT_FENCE_INFO));
        memcpy(&(pFN->createInfo), pCreateInfo, sizeof(VkFenceCreateInfo));
        if (pCreateInfo->flags & VK_FENCE_CREATE_SIGNALED_BIT) {
            pFN->firstTimeFlag = VK_TRUE;
        }
#endif
        pFN->in_use.store(0);
        loader_platform_thread_unlock_mutex(&globalLock);
    }
    return result;
}

// TODO handle pipeline caches
VKAPI_ATTR VkResult VKAPI_CALL vkCreatePipelineCache(VkDevice device, const VkPipelineCacheCreateInfo *pCreateInfo,
                                                     const VkAllocationCallbacks *pAllocator, VkPipelineCache *pPipelineCache) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkResult result = dev_data->device_dispatch_table->CreatePipelineCache(device, pCreateInfo, pAllocator, pPipelineCache);
    return result;
}

VKAPI_ATTR void VKAPI_CALL
vkDestroyPipelineCache(VkDevice device, VkPipelineCache pipelineCache, const VkAllocationCallbacks *pAllocator) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    dev_data->device_dispatch_table->DestroyPipelineCache(device, pipelineCache, pAllocator);
}

VKAPI_ATTR VkResult VKAPI_CALL
vkGetPipelineCacheData(VkDevice device, VkPipelineCache pipelineCache, size_t *pDataSize, void *pData) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkResult result =
        dev_data->device_dispatch_table->GetPipelineCacheData(device, pipelineCache, pDataSize, pData);
    return result;
}

VKAPI_ATTR VkResult VKAPI_CALL vkMergePipelineCaches(VkDevice device, VkPipelineCache dstCache, uint32_t srcCacheCount,
                                                     const VkPipelineCache *pSrcCaches) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkResult result = dev_data->device_dispatch_table->MergePipelineCaches(device, dstCache, srcCacheCount, pSrcCaches);
    return result;
}

// utility function to set collective state for pipeline
void set_pipeline_state(PIPELINE_NODE *pPipe) {
    // If any attachment used by this pipeline has blendEnable, set top-level blendEnable
    if (pPipe->graphicsPipelineCI.pColorBlendState) {
        for (size_t i = 0; i < pPipe->attachments.size(); ++i) {
            if (VK_TRUE == pPipe->attachments[i].blendEnable) {
                if (((pPipe->attachments[i].dstAlphaBlendFactor >= VK_BLEND_FACTOR_CONSTANT_COLOR) &&
                     (pPipe->attachments[i].dstAlphaBlendFactor <= VK_BLEND_FACTOR_ONE_MINUS_CONSTANT_ALPHA)) ||
                    ((pPipe->attachments[i].dstColorBlendFactor >= VK_BLEND_FACTOR_CONSTANT_COLOR) &&
                     (pPipe->attachments[i].dstColorBlendFactor <= VK_BLEND_FACTOR_ONE_MINUS_CONSTANT_ALPHA)) ||
                    ((pPipe->attachments[i].srcAlphaBlendFactor >= VK_BLEND_FACTOR_CONSTANT_COLOR) &&
                     (pPipe->attachments[i].srcAlphaBlendFactor <= VK_BLEND_FACTOR_ONE_MINUS_CONSTANT_ALPHA)) ||
                    ((pPipe->attachments[i].srcColorBlendFactor >= VK_BLEND_FACTOR_CONSTANT_COLOR) &&
                     (pPipe->attachments[i].srcColorBlendFactor <= VK_BLEND_FACTOR_ONE_MINUS_CONSTANT_ALPHA))) {
                    pPipe->blendConstantsEnabled = true;
                }
            }
        }
    }
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkCreateGraphicsPipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t count,
                          const VkGraphicsPipelineCreateInfo *pCreateInfos, const VkAllocationCallbacks *pAllocator,
                          VkPipeline *pPipelines) {
    VkResult result = VK_SUCCESS;
    // TODO What to do with pipelineCache?
    // The order of operations here is a little convoluted but gets the job done
    // 1.
    //    Pipeline create state is first shadowed into PIPELINE_NODE struct
    // 2. Create state is then validated (which uses flags setup during shadowing)
    // 3. If everything looks good, we'll then create the pipeline and add NODE to pipelineMap
    VkBool32 skipCall = VK_FALSE;
    // TODO : Improve this data struct w/ unique_ptrs so cleanup below is automatic
    vector<PIPELINE_NODE *> pPipeNode(count);
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);

    uint32_t i = 0;
    loader_platform_thread_lock_mutex(&globalLock);

    for (i = 0; i < count; i++) {
        pPipeNode[i] = initGraphicsPipeline(dev_data, &pCreateInfos[i]);
        skipCall |= verifyPipelineCreateState(dev_data, device, pPipeNode, i);
    }

    if (VK_FALSE == skipCall) {
        loader_platform_thread_unlock_mutex(&globalLock);
        result = dev_data->device_dispatch_table->CreateGraphicsPipelines(device, pipelineCache, count, pCreateInfos, pAllocator,
                                                                          pPipelines);
        loader_platform_thread_lock_mutex(&globalLock);
        for (i = 0; i < count; i++) {
            pPipeNode[i]->pipeline = pPipelines[i];
            dev_data->pipelineMap[pPipeNode[i]->pipeline] = pPipeNode[i];
        }
        loader_platform_thread_unlock_mutex(&globalLock);
    } else {
        for (i = 0; i < count; i++) {
            delete pPipeNode[i];
        }
        loader_platform_thread_unlock_mutex(&globalLock);
        return VK_ERROR_VALIDATION_FAILED_EXT;
    }
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkCreateComputePipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t count,
                         const VkComputePipelineCreateInfo *pCreateInfos, const VkAllocationCallbacks *pAllocator,
                         VkPipeline *pPipelines) {
    VkResult result = VK_SUCCESS;
    VkBool32 skipCall = VK_FALSE;

    // TODO : Improve this data struct w/ unique_ptrs so cleanup below is automatic
    vector<PIPELINE_NODE *> pPipeNode(count);
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);

    uint32_t i = 0;
    loader_platform_thread_lock_mutex(&globalLock);
    for (i = 0; i < count; i++) {
        // TODO: Verify compute stage bits

        // Create and initialize internal tracking data structure
        pPipeNode[i]
            = new PIPELINE_NODE;
        memcpy(&pPipeNode[i]->computePipelineCI, (const void *)&pCreateInfos[i], sizeof(VkComputePipelineCreateInfo));

        // TODO: Add Compute Pipeline Verification
        // skipCall |= verifyPipelineCreateState(dev_data, device, pPipeNode[i]);
    }

    if (VK_FALSE == skipCall) {
        loader_platform_thread_unlock_mutex(&globalLock);
        result = dev_data->device_dispatch_table->CreateComputePipelines(device, pipelineCache, count, pCreateInfos, pAllocator,
                                                                         pPipelines);
        loader_platform_thread_lock_mutex(&globalLock);
        for (i = 0; i < count; i++) {
            pPipeNode[i]->pipeline = pPipelines[i];
            dev_data->pipelineMap[pPipeNode[i]->pipeline] = pPipeNode[i];
        }
        loader_platform_thread_unlock_mutex(&globalLock);
    } else {
        for (i = 0; i < count; i++) {
            // Clean up any locally allocated data structures
            delete pPipeNode[i];
        }
        loader_platform_thread_unlock_mutex(&globalLock);
        return VK_ERROR_VALIDATION_FAILED_EXT;
    }
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateSampler(VkDevice device, const VkSamplerCreateInfo *pCreateInfo,
                                                               const VkAllocationCallbacks *pAllocator, VkSampler *pSampler) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkResult result = dev_data->device_dispatch_table->CreateSampler(device, pCreateInfo, pAllocator, pSampler);
    if (VK_SUCCESS == result) {
        loader_platform_thread_lock_mutex(&globalLock);
        dev_data->sampleMap[*pSampler] = unique_ptr<SAMPLER_NODE>(new SAMPLER_NODE(pSampler, pCreateInfo));
        loader_platform_thread_unlock_mutex(&globalLock);
    }
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkCreateDescriptorSetLayout(VkDevice device, const VkDescriptorSetLayoutCreateInfo *pCreateInfo,
                            const VkAllocationCallbacks *pAllocator, VkDescriptorSetLayout *pSetLayout) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkResult result = dev_data->device_dispatch_table->CreateDescriptorSetLayout(device, pCreateInfo, pAllocator, pSetLayout);
    if (VK_SUCCESS == result) {
        // TODOSC : Capture
        // layout bindings set
        LAYOUT_NODE *pNewNode = new LAYOUT_NODE;
        if (NULL == pNewNode) {
            if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT,
                        (uint64_t)*pSetLayout, __LINE__, DRAWSTATE_OUT_OF_MEMORY, "DS",
                        "Out of memory while attempting to allocate LAYOUT_NODE in vkCreateDescriptorSetLayout()"))
                return VK_ERROR_VALIDATION_FAILED_EXT;
        }
        memcpy((void *)&pNewNode->createInfo, pCreateInfo, sizeof(VkDescriptorSetLayoutCreateInfo));
        pNewNode->createInfo.pBindings = new VkDescriptorSetLayoutBinding[pCreateInfo->bindingCount];
        memcpy((void *)pNewNode->createInfo.pBindings, pCreateInfo->pBindings,
               sizeof(VkDescriptorSetLayoutBinding) * pCreateInfo->bindingCount);
        // g++ does not like reserve with size 0
        if (pCreateInfo->bindingCount)
            pNewNode->bindingToIndexMap.reserve(pCreateInfo->bindingCount);
        uint32_t totalCount = 0;
        for (uint32_t i = 0; i < pCreateInfo->bindingCount; i++) {
            if (!pNewNode->bindingToIndexMap.emplace(pCreateInfo->pBindings[i].binding, i).second) {
                if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                            VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT, (uint64_t)*pSetLayout, __LINE__,
                            DRAWSTATE_INVALID_LAYOUT, "DS", "duplicated binding number in "
                                                            "VkDescriptorSetLayoutBinding"))
                    return VK_ERROR_VALIDATION_FAILED_EXT;
            } else {
                pNewNode->bindingToIndexMap[pCreateInfo->pBindings[i].binding] = i;
            }
            totalCount += pCreateInfo->pBindings[i].descriptorCount;
            if (pCreateInfo->pBindings[i].pImmutableSamplers) {
                VkSampler **ppIS = (VkSampler **)&pNewNode->createInfo.pBindings[i].pImmutableSamplers;
                *ppIS = new VkSampler[pCreateInfo->pBindings[i].descriptorCount];
                memcpy(*ppIS, pCreateInfo->pBindings[i].pImmutableSamplers,
                       pCreateInfo->pBindings[i].descriptorCount * sizeof(VkSampler));
            }
        }
        pNewNode->layout = *pSetLayout;
        pNewNode->startIndex = 0;
        if (totalCount > 0) {
            pNewNode->descriptorTypes.resize(totalCount);
            pNewNode->stageFlags.resize(totalCount);
            uint32_t offset = 0;
            uint32_t j =
                0;
            VkDescriptorType dType;
            for (uint32_t i = 0; i < pCreateInfo->bindingCount; i++) {
                dType = pCreateInfo->pBindings[i].descriptorType;
                for (j = 0; j < pCreateInfo->pBindings[i].descriptorCount; j++) {
                    pNewNode->descriptorTypes[offset + j] = dType;
                    pNewNode->stageFlags[offset + j] = pCreateInfo->pBindings[i].stageFlags;
                    if ((dType == VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC) ||
                        (dType == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC)) {
                        pNewNode->dynamicDescriptorCount++;
                    }
                }
                offset += j;
            }
            pNewNode->endIndex = pNewNode->startIndex + totalCount - 1;
        } else { // no descriptors
            pNewNode->endIndex = 0;
        }
        // Put new node at Head of global Layer list
        loader_platform_thread_lock_mutex(&globalLock);
        dev_data->descriptorSetLayoutMap[*pSetLayout] = pNewNode;
        loader_platform_thread_unlock_mutex(&globalLock);
    }
    return result;
}

static bool validatePushConstantSize(const layer_data *dev_data, const uint32_t offset, const uint32_t size,
                                     const char *caller_name) {
    bool skipCall = false;
    if ((offset + size) > dev_data->physDevProperties.properties.limits.maxPushConstantsSize) {
        skipCall = log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                           DRAWSTATE_PUSH_CONSTANTS_ERROR, "DS", "%s call has push constants with offset %u and size %u that "
                                                                 "exceeds this device's maxPushConstantSize of %u.",
                           caller_name, offset, size, dev_data->physDevProperties.properties.limits.maxPushConstantsSize);
    }
    return skipCall;
}

VKAPI_ATTR VkResult VKAPI_CALL vkCreatePipelineLayout(VkDevice device, const VkPipelineLayoutCreateInfo *pCreateInfo,
                                                      const VkAllocationCallbacks *pAllocator, VkPipelineLayout *pPipelineLayout) {
    bool skipCall = false;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    uint32_t i = 0;
    for (i = 0; i < pCreateInfo->pushConstantRangeCount; ++i) {
        skipCall |= validatePushConstantSize(dev_data, pCreateInfo->pPushConstantRanges[i].offset,
                                             pCreateInfo->pPushConstantRanges[i].size,
                                             "vkCreatePipelineLayout()");
        if ((pCreateInfo->pPushConstantRanges[i].size == 0) || ((pCreateInfo->pPushConstantRanges[i].size & 0x3) != 0)) {
            skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                                DRAWSTATE_PUSH_CONSTANTS_ERROR, "DS", "vkCreatePipelineLayout() call has push constant index %u with "
                                                                      "size %u. Size must be greater than zero and a multiple of 4.",
                                i, pCreateInfo->pPushConstantRanges[i].size);
        }
        // TODO : Add warning if ranges overlap
    }
    VkResult result = dev_data->device_dispatch_table->CreatePipelineLayout(device, pCreateInfo, pAllocator, pPipelineLayout);
    if (VK_SUCCESS == result) {
        loader_platform_thread_lock_mutex(&globalLock);
        // TODOSC : Merge capture of the setLayouts per pipeline
        PIPELINE_LAYOUT_NODE &plNode = dev_data->pipelineLayoutMap[*pPipelineLayout];
        plNode.descriptorSetLayouts.resize(pCreateInfo->setLayoutCount);
        for (i = 0; i < pCreateInfo->setLayoutCount; ++i) {
            plNode.descriptorSetLayouts[i] = pCreateInfo->pSetLayouts[i];
        }
        plNode.pushConstantRanges.resize(pCreateInfo->pushConstantRangeCount);
        for (i = 0; i < pCreateInfo->pushConstantRangeCount; ++i) {
            plNode.pushConstantRanges[i] = pCreateInfo->pPushConstantRanges[i];
        }
        loader_platform_thread_unlock_mutex(&globalLock);
    }
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDescriptorPool(VkDevice device, const VkDescriptorPoolCreateInfo *pCreateInfo,
                                                                      const VkAllocationCallbacks *pAllocator,
                                                                      VkDescriptorPool *pDescriptorPool) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkResult result = dev_data->device_dispatch_table->CreateDescriptorPool(device, pCreateInfo, pAllocator, pDescriptorPool);
    if (VK_SUCCESS == result) {
        // Insert this pool into Global Pool LL at head
        if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT,
                    (uint64_t)*pDescriptorPool, __LINE__, DRAWSTATE_OUT_OF_MEMORY, "DS", "Created Descriptor Pool %#" PRIxLEAST64,
                    (uint64_t)*pDescriptorPool))
            return VK_ERROR_VALIDATION_FAILED_EXT;
        DESCRIPTOR_POOL_NODE *pNewNode = new DESCRIPTOR_POOL_NODE(*pDescriptorPool, pCreateInfo);
        if (NULL == pNewNode) {
            if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT,
                        (uint64_t)*pDescriptorPool, __LINE__, DRAWSTATE_OUT_OF_MEMORY, "DS",
                        "Out of memory while attempting to allocate DESCRIPTOR_POOL_NODE in vkCreateDescriptorPool()"))
                return VK_ERROR_VALIDATION_FAILED_EXT;
        } else {
            loader_platform_thread_lock_mutex(&globalLock);
            dev_data->descriptorPoolMap[*pDescriptorPool] = pNewNode;
            loader_platform_thread_unlock_mutex(&globalLock);
        }
    } else {
        // Need to do anything if pool create fails?
    }
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkResetDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, VkDescriptorPoolResetFlags flags) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkResult result = dev_data->device_dispatch_table->ResetDescriptorPool(device, descriptorPool, flags);
    if (VK_SUCCESS == result) {
        loader_platform_thread_lock_mutex(&globalLock);
        clearDescriptorPool(dev_data, device, descriptorPool, flags);
        loader_platform_thread_unlock_mutex(&globalLock);
    }
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkAllocateDescriptorSets(VkDevice device, const VkDescriptorSetAllocateInfo *pAllocateInfo, VkDescriptorSet *pDescriptorSets) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);

    loader_platform_thread_lock_mutex(&globalLock);
    // Verify that requested descriptorSets are available in pool
    DESCRIPTOR_POOL_NODE *pPoolNode = getPoolNode(dev_data, pAllocateInfo->descriptorPool);
    if (!pPoolNode) {
        skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT,
                            (uint64_t)pAllocateInfo->descriptorPool, __LINE__,
DRAWSTATE_INVALID_POOL, "DS", "Unable to find pool node for pool %#" PRIxLEAST64 " specified in vkAllocateDescriptorSets() call", (uint64_t)pAllocateInfo->descriptorPool); } else { // Make sure pool has all the available descriptors before calling down chain skipCall |= validate_descriptor_availability_in_pool(dev_data, pPoolNode, pAllocateInfo->descriptorSetCount, pAllocateInfo->pSetLayouts); } loader_platform_thread_unlock_mutex(&globalLock); if (skipCall) return VK_ERROR_VALIDATION_FAILED_EXT; VkResult result = dev_data->device_dispatch_table->AllocateDescriptorSets(device, pAllocateInfo, pDescriptorSets); if (VK_SUCCESS == result) { loader_platform_thread_lock_mutex(&globalLock); DESCRIPTOR_POOL_NODE *pPoolNode = getPoolNode(dev_data, pAllocateInfo->descriptorPool); if (pPoolNode) { if (pAllocateInfo->descriptorSetCount == 0) { log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, pAllocateInfo->descriptorSetCount, __LINE__, DRAWSTATE_NONE, "DS", "AllocateDescriptorSets called with 0 count"); } for (uint32_t i = 0; i < pAllocateInfo->descriptorSetCount; i++) { log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pDescriptorSets[i], __LINE__, DRAWSTATE_NONE, "DS", "Created Descriptor Set %#" PRIxLEAST64, (uint64_t)pDescriptorSets[i]); // Create new set node and add to head of pool nodes SET_NODE *pNewNode = new SET_NODE; if (NULL == pNewNode) { if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pDescriptorSets[i], __LINE__, DRAWSTATE_OUT_OF_MEMORY, "DS", "Out of memory while attempting to allocate SET_NODE in vkAllocateDescriptorSets()")) { loader_platform_thread_unlock_mutex(&globalLock); return VK_ERROR_VALIDATION_FAILED_EXT; } } else { // TODO : Pool should store a total count of each type of Descriptor available // When descriptors are allocated, 
decrement the count and validate here // that the count doesn't go below 0. On reset/free, the count needs to be bumped back up. // Insert set at head of Set LL for this pool pNewNode->pNext = pPoolNode->pSets; pNewNode->in_use.store(0); pPoolNode->pSets = pNewNode; LAYOUT_NODE *pLayout = getLayoutNode(dev_data, pAllocateInfo->pSetLayouts[i]); if (NULL == pLayout) { if (log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT, (uint64_t)pAllocateInfo->pSetLayouts[i], __LINE__, DRAWSTATE_INVALID_LAYOUT, "DS", "Unable to find set layout node for layout %#" PRIxLEAST64 " specified in vkAllocateDescriptorSets() call", (uint64_t)pAllocateInfo->pSetLayouts[i])) { loader_platform_thread_unlock_mutex(&globalLock); return VK_ERROR_VALIDATION_FAILED_EXT; } } pNewNode->pLayout = pLayout; pNewNode->pool = pAllocateInfo->descriptorPool; pNewNode->set = pDescriptorSets[i]; pNewNode->descriptorCount = (pLayout->createInfo.bindingCount != 0) ? pLayout->endIndex + 1 : 0; if (pNewNode->descriptorCount) { pNewNode->pDescriptorUpdates.resize(pNewNode->descriptorCount); } dev_data->setMap[pDescriptorSets[i]] = pNewNode; } } } loader_platform_thread_unlock_mutex(&globalLock); } return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkFreeDescriptorSets(VkDevice device, VkDescriptorPool descriptorPool, uint32_t count, const VkDescriptorSet *pDescriptorSets) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); // Make sure that no sets being destroyed are in-flight loader_platform_thread_lock_mutex(&globalLock); for (uint32_t i = 0; i < count; ++i) skipCall |= validateIdleDescriptorSet(dev_data, pDescriptorSets[i], "vkFreeDescriptorSets"); DESCRIPTOR_POOL_NODE *pPoolNode = getPoolNode(dev_data, descriptorPool); if (pPoolNode && !(VK_DESCRIPTOR_POOL_CREATE_FREE_DESCRIPTOR_SET_BIT & pPoolNode->createInfo.flags)) { // Can't Free from a NON_FREE pool skipCall |= 
log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, (uint64_t)device, __LINE__, DRAWSTATE_CANT_FREE_FROM_NON_FREE_POOL, "DS", "It is invalid to call vkFreeDescriptorSets() with a pool created without setting " "VK_DESCRIPTOR_POOL_CREATE_FREE_DESCRIPTOR_SET_BIT."); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE != skipCall) return VK_ERROR_VALIDATION_FAILED_EXT; VkResult result = dev_data->device_dispatch_table->FreeDescriptorSets(device, descriptorPool, count, pDescriptorSets); if (VK_SUCCESS == result) { loader_platform_thread_lock_mutex(&globalLock); // Update available descriptor sets in pool pPoolNode->availableSets += count; // For each freed descriptor add it back into the pool as available for (uint32_t i = 0; i < count; ++i) { SET_NODE *pSet = dev_data->setMap[pDescriptorSets[i]]; // getSetNode() without locking invalidateBoundCmdBuffers(dev_data, pSet); LAYOUT_NODE *pLayout = pSet->pLayout; uint32_t typeIndex = 0, poolSizeCount = 0; for (uint32_t j = 0; j < pLayout->createInfo.bindingCount; ++j) { typeIndex = static_cast<uint32_t>(pLayout->createInfo.pBindings[j].descriptorType); poolSizeCount = pLayout->createInfo.pBindings[j].descriptorCount; pPoolNode->availableDescriptorTypeCount[typeIndex] += poolSizeCount; } } loader_platform_thread_unlock_mutex(&globalLock); } // TODO : Any other clean-up or book-keeping to do here? 
return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkUpdateDescriptorSets(VkDevice device, uint32_t descriptorWriteCount, const VkWriteDescriptorSet *pDescriptorWrites, uint32_t descriptorCopyCount, const VkCopyDescriptorSet *pDescriptorCopies) { // dsUpdate will return VK_TRUE only if a bailout error occurs, so we want to call down tree when update returns VK_FALSE layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); VkBool32 rtn = dsUpdate(dev_data, device, descriptorWriteCount, pDescriptorWrites, descriptorCopyCount, pDescriptorCopies); loader_platform_thread_unlock_mutex(&globalLock); if (!rtn) { dev_data->device_dispatch_table->UpdateDescriptorSets(device, descriptorWriteCount, pDescriptorWrites, descriptorCopyCount, pDescriptorCopies); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAllocateCommandBuffers(VkDevice device, const VkCommandBufferAllocateInfo *pCreateInfo, VkCommandBuffer *pCommandBuffer) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); VkResult result = dev_data->device_dispatch_table->AllocateCommandBuffers(device, pCreateInfo, pCommandBuffer); if (VK_SUCCESS == result) { loader_platform_thread_lock_mutex(&globalLock); auto const &cp_it = dev_data->commandPoolMap.find(pCreateInfo->commandPool); if (cp_it != dev_data->commandPoolMap.end()) { for (uint32_t i = 0; i < pCreateInfo->commandBufferCount; i++) { // Add command buffer to its commandPool map cp_it->second.commandBuffers.push_back(pCommandBuffer[i]); GLOBAL_CB_NODE *pCB = new GLOBAL_CB_NODE; // Add command buffer to map dev_data->commandBufferMap[pCommandBuffer[i]] = pCB; resetCB(dev_data, pCommandBuffer[i]); pCB->createInfo = *pCreateInfo; pCB->device = device; } } #if MTMERGESOURCE printCBList(dev_data, device); #endif loader_platform_thread_unlock_mutex(&globalLock); } return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL 
vkBeginCommandBuffer(VkCommandBuffer commandBuffer, const VkCommandBufferBeginInfo *pBeginInfo) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); // Validate command buffer level GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { #if MTMERGESOURCE bool commandBufferComplete = false; // MTMTODO : Merge this with code below // This implicitly resets the Cmd Buffer so make sure any fence is done and then clear memory references skipCall = checkCBCompleted(dev_data, commandBuffer, &commandBufferComplete); if (!commandBufferComplete) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__, MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM", "Calling vkBeginCommandBuffer() on active CB %p before it has completed. " "You must check CB flag before this call.", commandBuffer); } #endif if (pCB->createInfo.level != VK_COMMAND_BUFFER_LEVEL_PRIMARY) { // Secondary Command Buffer const VkCommandBufferInheritanceInfo *pInfo = pBeginInfo->pInheritanceInfo; if (!pInfo) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer), __LINE__, DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS", "vkBeginCommandBuffer(): Secondary Command Buffer (%p) must have inheritance info.", reinterpret_cast<void *>(commandBuffer)); } else { if (pBeginInfo->flags & VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT) { if (!pInfo->renderPass) { // renderpass should NOT be null for a Secondary CB skipCall |= log_msg( dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer), __LINE__, DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS", "vkBeginCommandBuffer(): Secondary Command Buffers (%p) must specify a valid renderpass parameter.", 
reinterpret_cast<void *>(commandBuffer)); } if (!pInfo->framebuffer) { // framebuffer may be null for a Secondary CB, but this affects perf skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer), __LINE__, DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS", "vkBeginCommandBuffer(): Secondary Command Buffers (%p) may perform better if a " "valid framebuffer parameter is specified.", reinterpret_cast<void *>(commandBuffer)); } else { string errorString = ""; auto fbNode = dev_data->frameBufferMap.find(pInfo->framebuffer); if (fbNode != dev_data->frameBufferMap.end()) { VkRenderPass fbRP = fbNode->second.createInfo.renderPass; if (!verify_renderpass_compatibility(dev_data, fbRP, pInfo->renderPass, errorString)) { // renderPass that framebuffer was created with must be compatible with local renderPass skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer), __LINE__, DRAWSTATE_RENDERPASS_INCOMPATIBLE, "DS", "vkBeginCommandBuffer(): Secondary Command " "Buffer (%p) renderPass (%#" PRIxLEAST64 ") is incompatible w/ framebuffer " "(%#" PRIxLEAST64 ") w/ render pass (%#" PRIxLEAST64 ") due to: %s", reinterpret_cast<void *>(commandBuffer), (uint64_t)(pInfo->renderPass), (uint64_t)(pInfo->framebuffer), (uint64_t)(fbRP), errorString.c_str()); } // Connect this framebuffer to this cmdBuffer fbNode->second.referencingCmdBuffers.insert(pCB->commandBuffer); } } } if ((pInfo->occlusionQueryEnable == VK_FALSE || dev_data->physDevProperties.features.occlusionQueryPrecise == VK_FALSE) && (pInfo->queryFlags & VK_QUERY_CONTROL_PRECISE_BIT)) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer), __LINE__, DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS", "vkBeginCommandBuffer(): Secondary Command Buffer (%p) must not have " 
"VK_QUERY_CONTROL_PRECISE_BIT if occlusionQuery is disabled or the device does not " "support precise occlusion queries.", reinterpret_cast<void *>(commandBuffer)); } } if (pInfo && pInfo->renderPass != VK_NULL_HANDLE) { auto rp_data = dev_data->renderPassMap.find(pInfo->renderPass); if (rp_data != dev_data->renderPassMap.end() && rp_data->second && rp_data->second->pCreateInfo) { if (pInfo->subpass >= rp_data->second->pCreateInfo->subpassCount) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__, DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS", "vkBeginCommandBuffer(): Secondary Command Buffers (%p) must have a subpass index (%d) " "that is less than the number of subpasses (%d).", (void *)commandBuffer, pInfo->subpass, rp_data->second->pCreateInfo->subpassCount); } } } } if (CB_RECORDING == pCB->state) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__, DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS", "vkBeginCommandBuffer(): Cannot call Begin on CB (%#" PRIxLEAST64 ") in the RECORDING state. 
Must first call vkEndCommandBuffer().", (uint64_t)commandBuffer); } else if (CB_RECORDED == pCB->state) { VkCommandPool cmdPool = pCB->createInfo.commandPool; if (!(VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT & dev_data->commandPoolMap[cmdPool].createFlags)) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER_RESET, "DS", "Call to vkBeginCommandBuffer() on command buffer (%#" PRIxLEAST64 ") attempts to implicitly reset cmdBuffer created from command pool (%#" PRIxLEAST64 ") that does NOT have the VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT bit set.", (uint64_t)commandBuffer, (uint64_t)cmdPool); } resetCB(dev_data, commandBuffer); } // Set updated state here in case implicit reset occurs above pCB->state = CB_RECORDING; pCB->beginInfo = *pBeginInfo; if (pCB->beginInfo.pInheritanceInfo) { pCB->inheritanceInfo = *(pCB->beginInfo.pInheritanceInfo); pCB->beginInfo.pInheritanceInfo = &pCB->inheritanceInfo; } } else { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER, "DS", "In vkBeginCommandBuffer(): unable to find CommandBuffer Node for CB %p!", (void *)commandBuffer); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE != skipCall) { return VK_ERROR_VALIDATION_FAILED_EXT; } VkResult result = dev_data->device_dispatch_table->BeginCommandBuffer(commandBuffer, pBeginInfo); #if MTMERGESOURCE loader_platform_thread_lock_mutex(&globalLock); clear_cmd_buf_and_mem_references(dev_data, commandBuffer); loader_platform_thread_unlock_mutex(&globalLock); #endif return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEndCommandBuffer(VkCommandBuffer commandBuffer) { VkBool32 skipCall = VK_FALSE; VkResult result = VK_SUCCESS; layer_data *dev_data = 
get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { if (pCB->state != CB_RECORDING) { skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkEndCommandBuffer()"); } for (auto query : pCB->activeQueries) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_QUERY, "DS", "Ending command buffer with in-progress query: queryPool %" PRIu64 ", index %d", (uint64_t)(query.pool), query.index); } } if (VK_FALSE == skipCall) { loader_platform_thread_unlock_mutex(&globalLock); result = dev_data->device_dispatch_table->EndCommandBuffer(commandBuffer); loader_platform_thread_lock_mutex(&globalLock); if (VK_SUCCESS == result) { pCB->state = CB_RECORDED; // Reset CB status flags pCB->status = 0; printCB(dev_data, commandBuffer); } } else { result = VK_ERROR_VALIDATION_FAILED_EXT; } loader_platform_thread_unlock_mutex(&globalLock); return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetCommandBuffer(VkCommandBuffer commandBuffer, VkCommandBufferResetFlags flags) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); #if MTMERGESOURCE bool commandBufferComplete = false; // Verify that CB is complete (not in-flight) skipCall = checkCBCompleted(dev_data, commandBuffer, &commandBufferComplete); if (!commandBufferComplete) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__, MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM", "Resetting CB %p before it has completed. You must check CB " "flag before calling vkResetCommandBuffer().", commandBuffer); } // Clear memory references at this point. 
clear_cmd_buf_and_mem_references(dev_data, commandBuffer); #endif GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); VkCommandPool cmdPool = pCB->createInfo.commandPool; if (!(VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT & dev_data->commandPoolMap[cmdPool].createFlags)) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER_RESET, "DS", "Attempt to reset command buffer (%#" PRIxLEAST64 ") created from command pool (%#" PRIxLEAST64 ") that does NOT have the VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT bit set.", (uint64_t)commandBuffer, (uint64_t)cmdPool); } if (dev_data->globalInFlightCmdBuffers.count(commandBuffer)) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER_RESET, "DS", "Attempt to reset command buffer (%#" PRIxLEAST64 ") which is in use.", reinterpret_cast<uint64_t>(commandBuffer)); } loader_platform_thread_unlock_mutex(&globalLock); if (skipCall != VK_FALSE) return VK_ERROR_VALIDATION_FAILED_EXT; VkResult result = dev_data->device_dispatch_table->ResetCommandBuffer(commandBuffer, flags); if (VK_SUCCESS == result) { loader_platform_thread_lock_mutex(&globalLock); resetCB(dev_data, commandBuffer); loader_platform_thread_unlock_mutex(&globalLock); } return result; } #if MTMERGESOURCE // TODO : For any vkCmdBind* calls that include an object which has mem bound to it, // need to account for that mem now having binding to given commandBuffer #endif VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindPipeline(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipeline pipeline) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); GLOBAL_CB_NODE *pCB 
= getCBNode(dev_data, commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, CMD_BINDPIPELINE, "vkCmdBindPipeline()"); if ((VK_PIPELINE_BIND_POINT_COMPUTE == pipelineBindPoint) && (pCB->activeRenderPass)) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT, (uint64_t)pipeline, __LINE__, DRAWSTATE_INVALID_RENDERPASS_CMD, "DS", "Incorrectly binding compute pipeline (%#" PRIxLEAST64 ") during active RenderPass (%#" PRIxLEAST64 ")", (uint64_t)pipeline, (uint64_t)pCB->activeRenderPass); } PIPELINE_NODE *pPN = getPipeline(dev_data, pipeline); if (pPN) { pCB->lastBound[pipelineBindPoint].pipeline = pipeline; set_cb_pso_status(pCB, pPN); set_pipeline_state(pPN); skipCall |= validatePipelineState(dev_data, pCB, pipelineBindPoint, pipeline); } else { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT, (uint64_t)pipeline, __LINE__, DRAWSTATE_INVALID_PIPELINE, "DS", "Attempt to bind Pipeline %#" PRIxLEAST64 " that doesn't exist!", (uint64_t)(pipeline)); } } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdBindPipeline(commandBuffer, pipelineBindPoint, pipeline); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetViewport(VkCommandBuffer commandBuffer, uint32_t firstViewport, uint32_t viewportCount, const VkViewport *pViewports) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, CMD_SETVIEWPORTSTATE, "vkCmdSetViewport()"); pCB->status |= CBSTATUS_VIEWPORT_SET; pCB->viewports.resize(viewportCount); memcpy(pCB->viewports.data(), pViewports, viewportCount * sizeof(VkViewport)); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) 
dev_data->device_dispatch_table->CmdSetViewport(commandBuffer, firstViewport, viewportCount, pViewports); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetScissor(VkCommandBuffer commandBuffer, uint32_t firstScissor, uint32_t scissorCount, const VkRect2D *pScissors) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, CMD_SETSCISSORSTATE, "vkCmdSetScissor()"); pCB->status |= CBSTATUS_SCISSOR_SET; pCB->scissors.resize(scissorCount); memcpy(pCB->scissors.data(), pScissors, scissorCount * sizeof(VkRect2D)); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdSetScissor(commandBuffer, firstScissor, scissorCount, pScissors); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetLineWidth(VkCommandBuffer commandBuffer, float lineWidth) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, CMD_SETLINEWIDTHSTATE, "vkCmdSetLineWidth()"); pCB->status |= CBSTATUS_LINE_WIDTH_SET; } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdSetLineWidth(commandBuffer, lineWidth); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetDepthBias(VkCommandBuffer commandBuffer, float depthBiasConstantFactor, float depthBiasClamp, float depthBiasSlopeFactor) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, 
CMD_SETDEPTHBIASSTATE, "vkCmdSetDepthBias()"); pCB->status |= CBSTATUS_DEPTH_BIAS_SET; } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdSetDepthBias(commandBuffer, depthBiasConstantFactor, depthBiasClamp, depthBiasSlopeFactor); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetBlendConstants(VkCommandBuffer commandBuffer, const float blendConstants[4]) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, CMD_SETBLENDSTATE, "vkCmdSetBlendConstants()"); pCB->status |= CBSTATUS_BLEND_CONSTANTS_SET; } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdSetBlendConstants(commandBuffer, blendConstants); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetDepthBounds(VkCommandBuffer commandBuffer, float minDepthBounds, float maxDepthBounds) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, CMD_SETDEPTHBOUNDSSTATE, "vkCmdSetDepthBounds()"); pCB->status |= CBSTATUS_DEPTH_BOUNDS_SET; } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdSetDepthBounds(commandBuffer, minDepthBounds, maxDepthBounds); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetStencilCompareMask(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t compareMask) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, 
commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, CMD_SETSTENCILREADMASKSTATE, "vkCmdSetStencilCompareMask()"); pCB->status |= CBSTATUS_STENCIL_READ_MASK_SET; } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdSetStencilCompareMask(commandBuffer, faceMask, compareMask); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetStencilWriteMask(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t writeMask) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, CMD_SETSTENCILWRITEMASKSTATE, "vkCmdSetStencilWriteMask()"); pCB->status |= CBSTATUS_STENCIL_WRITE_MASK_SET; } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdSetStencilWriteMask(commandBuffer, faceMask, writeMask); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetStencilReference(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t reference) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, CMD_SETSTENCILREFERENCESTATE, "vkCmdSetStencilReference()"); pCB->status |= CBSTATUS_STENCIL_REFERENCE_SET; } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdSetStencilReference(commandBuffer, faceMask, reference); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindDescriptorSets(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipelineLayout layout, uint32_t firstSet, uint32_t setCount, const VkDescriptorSet *pDescriptorSets, uint32_t 
dynamicOffsetCount, const uint32_t *pDynamicOffsets) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { if (pCB->state == CB_RECORDING) { // Track total count of dynamic descriptor types to make sure we have an offset for each one uint32_t totalDynamicDescriptors = 0; string errorString = ""; uint32_t lastSetIndex = firstSet + setCount - 1; if (lastSetIndex >= pCB->lastBound[pipelineBindPoint].boundDescriptorSets.size()) pCB->lastBound[pipelineBindPoint].boundDescriptorSets.resize(lastSetIndex + 1); VkDescriptorSet oldFinalBoundSet = pCB->lastBound[pipelineBindPoint].boundDescriptorSets[lastSetIndex]; for (uint32_t i = 0; i < setCount; i++) { SET_NODE *pSet = getSetNode(dev_data, pDescriptorSets[i]); if (pSet) { pCB->lastBound[pipelineBindPoint].uniqueBoundSets.insert(pDescriptorSets[i]); pSet->boundCmdBuffers.insert(commandBuffer); pCB->lastBound[pipelineBindPoint].pipelineLayout = layout; pCB->lastBound[pipelineBindPoint].boundDescriptorSets[i + firstSet] = pDescriptorSets[i]; skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pDescriptorSets[i], __LINE__, DRAWSTATE_NONE, "DS", "DS %#" PRIxLEAST64 " bound on pipeline %s", (uint64_t)pDescriptorSets[i], string_VkPipelineBindPoint(pipelineBindPoint)); if (!pSet->pUpdateStructs && (pSet->descriptorCount != 0)) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pDescriptorSets[i], __LINE__, DRAWSTATE_DESCRIPTOR_SET_NOT_UPDATED, "DS", "DS %#" PRIxLEAST64 " bound but it was never updated. 
You may want to either update it or not bind it.", (uint64_t)pDescriptorSets[i]); } // Verify that set being bound is compatible with overlapping setLayout of pipelineLayout if (!verify_set_layout_compatibility(dev_data, pSet, layout, i + firstSet, errorString)) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pDescriptorSets[i], __LINE__, DRAWSTATE_PIPELINE_LAYOUTS_INCOMPATIBLE, "DS", "descriptorSet #%u being bound is not compatible with overlapping layout in " "pipelineLayout due to: %s", i, errorString.c_str()); } if (pSet->pLayout->dynamicDescriptorCount) { // First make sure we won't overstep bounds of pDynamicOffsets array if ((totalDynamicDescriptors + pSet->pLayout->dynamicDescriptorCount) > dynamicOffsetCount) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pDescriptorSets[i], __LINE__, DRAWSTATE_INVALID_DYNAMIC_OFFSET_COUNT, "DS", "descriptorSet #%u (%#" PRIxLEAST64 ") requires %u dynamicOffsets, but only %u dynamicOffsets are left in pDynamicOffsets " "array. 
There must be one dynamic offset for each dynamic descriptor being bound.", i, (uint64_t)pDescriptorSets[i], pSet->pLayout->dynamicDescriptorCount, (dynamicOffsetCount - totalDynamicDescriptors)); } else { // Validate and store dynamic offsets with the set // Validate Dynamic Offset Minimums uint32_t cur_dyn_offset = totalDynamicDescriptors; for (uint32_t d = 0; d < pSet->descriptorCount; d++) { if (pSet->pLayout->descriptorTypes[d] == VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC) { if (vk_safe_modulo( pDynamicOffsets[cur_dyn_offset], dev_data->physDevProperties.properties.limits.minUniformBufferOffsetAlignment) != 0) { skipCall |= log_msg( dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DRAWSTATE_INVALID_UNIFORM_BUFFER_OFFSET, "DS", "vkCmdBindDescriptorSets(): pDynamicOffsets[%d] is %d but must be a multiple of " "device limit minUniformBufferOffsetAlignment %#" PRIxLEAST64, cur_dyn_offset, pDynamicOffsets[cur_dyn_offset], dev_data->physDevProperties.properties.limits.minUniformBufferOffsetAlignment); } cur_dyn_offset++; } else if (pSet->pLayout->descriptorTypes[d] == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC) { if (vk_safe_modulo( pDynamicOffsets[cur_dyn_offset], dev_data->physDevProperties.properties.limits.minStorageBufferOffsetAlignment) != 0) { skipCall |= log_msg( dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DRAWSTATE_INVALID_STORAGE_BUFFER_OFFSET, "DS", "vkCmdBindDescriptorSets(): pDynamicOffsets[%d] is %d but must be a multiple of " "device limit minStorageBufferOffsetAlignment %#" PRIxLEAST64, cur_dyn_offset, pDynamicOffsets[cur_dyn_offset], dev_data->physDevProperties.properties.limits.minStorageBufferOffsetAlignment); } cur_dyn_offset++; } } // Keep running total of dynamic descriptor count to verify at the end totalDynamicDescriptors += pSet->pLayout->dynamicDescriptorCount; } } } else { skipCall |= 
log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pDescriptorSets[i], __LINE__, DRAWSTATE_INVALID_SET, "DS", "Attempt to bind DS %#" PRIxLEAST64 " that doesn't exist!", (uint64_t)pDescriptorSets[i]); } skipCall |= addCmd(dev_data, pCB, CMD_BINDDESCRIPTORSETS, "vkCmdBindDescriptorSets()"); // For any previously bound sets, need to set them to "invalid" if they were disturbed by this update if (firstSet > 0) { // Check set #s below the first bound set for (uint32_t i = 0; i < firstSet; ++i) { if (pCB->lastBound[pipelineBindPoint].boundDescriptorSets[i] && !verify_set_layout_compatibility( dev_data, dev_data->setMap[pCB->lastBound[pipelineBindPoint].boundDescriptorSets[i]], layout, i, errorString)) { skipCall |= log_msg( dev_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)pCB->lastBound[pipelineBindPoint].boundDescriptorSets[i], __LINE__, DRAWSTATE_NONE, "DS", "DescriptorSet DS %#" PRIxLEAST64 " previously bound as set #%u was disturbed by newly bound pipelineLayout (%#" PRIxLEAST64 ")", (uint64_t)pCB->lastBound[pipelineBindPoint].boundDescriptorSets[i], i, (uint64_t)layout); pCB->lastBound[pipelineBindPoint].boundDescriptorSets[i] = VK_NULL_HANDLE; } } } // Check if newly last bound set invalidates any remaining bound sets if ((pCB->lastBound[pipelineBindPoint].boundDescriptorSets.size() - 1) > (lastSetIndex)) { if (oldFinalBoundSet && !verify_set_layout_compatibility(dev_data, dev_data->setMap[oldFinalBoundSet], layout, lastSetIndex, errorString)) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT, (uint64_t)oldFinalBoundSet, __LINE__, DRAWSTATE_NONE, "DS", "DescriptorSet DS %#" PRIxLEAST64 " previously bound as set #%u is incompatible with set %#" PRIxLEAST64 " newly bound as set #%u so set #%u and any subsequent sets were " "disturbed by 
newly bound pipelineLayout (%#" PRIxLEAST64 ")", (uint64_t)oldFinalBoundSet, lastSetIndex, (uint64_t)pCB->lastBound[pipelineBindPoint].boundDescriptorSets[lastSetIndex], lastSetIndex, lastSetIndex + 1, (uint64_t)layout); pCB->lastBound[pipelineBindPoint].boundDescriptorSets.resize(lastSetIndex + 1); } } // Save dynamicOffsets bound to this CB for (uint32_t i = 0; i < dynamicOffsetCount; i++) { pCB->lastBound[pipelineBindPoint].dynamicOffsets.push_back(pDynamicOffsets[i]); } } // dynamicOffsetCount must equal the total number of dynamic descriptors in the sets being bound if (totalDynamicDescriptors != dynamicOffsetCount) { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__, DRAWSTATE_INVALID_DYNAMIC_OFFSET_COUNT, "DS", "Attempting to bind %u descriptorSets with %u dynamic descriptors, but dynamicOffsetCount " "is %u. It should exactly match the number of dynamic descriptors.", setCount, totalDynamicDescriptors, dynamicOffsetCount); } // Save dynamicOffsets bound to this CB for (uint32_t i = 0; i < dynamicOffsetCount; i++) { pCB->dynamicOffsets.emplace_back(pDynamicOffsets[i]); } } else { skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdBindDescriptorSets()"); } } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdBindDescriptorSets(commandBuffer, pipelineBindPoint, layout, firstSet, setCount, pDescriptorSets, dynamicOffsetCount, pDynamicOffsets); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindIndexBuffer(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, VkIndexType indexType) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); #if MTMERGESOURCE VkDeviceMemory mem; skipCall = get_mem_binding_from_object(dev_data, commandBuffer, 
(uint64_t)(buffer), VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
    auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
    if (cb_data != dev_data->commandBufferMap.end()) {
        std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(dev_data, mem, "vkCmdBindIndexBuffer()"); };
        cb_data->second->validate_functions.push_back(function);
    }
    // TODO : Somewhere need to verify that IBs have correct usage state flagged
#endif
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
    if (pCB) {
        skipCall |= addCmd(dev_data, pCB, CMD_BINDINDEXBUFFER, "vkCmdBindIndexBuffer()");
        VkDeviceSize offset_align = 0;
        switch (indexType) {
        case VK_INDEX_TYPE_UINT16:
            offset_align = 2;
            break;
        case VK_INDEX_TYPE_UINT32:
            offset_align = 4;
            break;
        default:
            // ParamChecker should catch bad enum, we'll also throw alignment error below if offset_align stays 0
            break;
        }
        if (!offset_align || (offset % offset_align)) {
            skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                                DRAWSTATE_VTX_INDEX_ALIGNMENT_ERROR, "DS",
                                "vkCmdBindIndexBuffer() offset (%#" PRIxLEAST64 ") does not fall on alignment (%s) boundary.",
                                offset, string_VkIndexType(indexType));
        }
        pCB->status |= CBSTATUS_INDEX_BUFFER_BOUND;
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdBindIndexBuffer(commandBuffer, buffer, offset, indexType);
}

void updateResourceTracking(GLOBAL_CB_NODE *pCB, uint32_t firstBinding, uint32_t bindingCount, const VkBuffer *pBuffers) {
    uint32_t end = firstBinding + bindingCount;
    if (pCB->currentDrawData.buffers.size() < end) {
        pCB->currentDrawData.buffers.resize(end);
    }
    for (uint32_t i = 0; i < bindingCount; ++i) {
        pCB->currentDrawData.buffers[i + firstBinding] = pBuffers[i];
    }
}

void updateResourceTrackingOnDraw(GLOBAL_CB_NODE *pCB) { pCB->drawData.push_back(pCB->currentDrawData); }

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindVertexBuffers(VkCommandBuffer
commandBuffer, uint32_t firstBinding, uint32_t bindingCount,
                                                                 const VkBuffer *pBuffers, const VkDeviceSize *pOffsets) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
#if MTMERGESOURCE
    for (uint32_t i = 0; i < bindingCount; ++i) {
        VkDeviceMemory mem;
        skipCall |= get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)(pBuffers[i]),
                                                VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
        auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
        if (cb_data != dev_data->commandBufferMap.end()) {
            std::function<VkBool32()> function = [=]() {
                return validate_memory_is_valid(dev_data, mem, "vkCmdBindVertexBuffers()");
            };
            cb_data->second->validate_functions.push_back(function);
        }
    }
    // TODO : Somewhere need to verify that VBs have correct usage state flagged
#endif
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
    if (pCB) {
        addCmd(dev_data, pCB, CMD_BINDVERTEXBUFFER, "vkCmdBindVertexBuffers()");
        updateResourceTracking(pCB, firstBinding, bindingCount, pBuffers);
    } else {
        skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdBindVertexBuffers()");
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdBindVertexBuffers(commandBuffer, firstBinding, bindingCount, pBuffers, pOffsets);
}

/* expects globalLock to be held by caller */
bool markStoreImagesAndBuffersAsWritten(layer_data *dev_data, GLOBAL_CB_NODE *pCB) {
    bool skip_call = false;
    for (auto imageView : pCB->updateImages) {
        auto iv_data = dev_data->imageViewMap.find(imageView);
        if (iv_data == dev_data->imageViewMap.end())
            continue;
        VkImage image = iv_data->second.image;
        VkDeviceMemory mem;
        skip_call |= get_mem_binding_from_object(dev_data, pCB->commandBuffer, (uint64_t)image,
                                                 VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
        std::function<VkBool32()> function = [=]() {
            set_memory_valid(dev_data, mem, true, image);
            return VK_FALSE;
        };
        pCB->validate_functions.push_back(function);
    }
    for (auto buffer : pCB->updateBuffers) {
        VkDeviceMemory mem;
        skip_call |= get_mem_binding_from_object(dev_data, pCB->commandBuffer, (uint64_t)buffer,
                                                 VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
        std::function<VkBool32()> function = [=]() {
            set_memory_valid(dev_data, mem, true);
            return VK_FALSE;
        };
        pCB->validate_functions.push_back(function);
    }
    return skip_call;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDraw(VkCommandBuffer commandBuffer, uint32_t vertexCount, uint32_t instanceCount,
                                                     uint32_t firstVertex, uint32_t firstInstance) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
    if (pCB) {
        skipCall |= addCmd(dev_data, pCB, CMD_DRAW, "vkCmdDraw()");
        pCB->drawCount[DRAW]++;
        skipCall |= validate_and_update_draw_state(dev_data, pCB, VK_FALSE, VK_PIPELINE_BIND_POINT_GRAPHICS);
        skipCall |= markStoreImagesAndBuffersAsWritten(dev_data, pCB);
        // TODO : Need to pass commandBuffer as srcObj here
        skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT,
                            VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_NONE, "DS",
                            "vkCmdDraw() call #%" PRIu64 ", reporting DS state:", g_drawCount[DRAW]++);
        skipCall |= synchAndPrintDSConfig(dev_data, commandBuffer);
        if (VK_FALSE == skipCall) {
            updateResourceTrackingOnDraw(pCB);
        }
        skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdDraw");
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdDraw(commandBuffer, vertexCount, instanceCount, firstVertex, firstInstance);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndexed(VkCommandBuffer commandBuffer, uint32_t indexCount,
                                                            uint32_t instanceCount, uint32_t firstIndex, int32_t vertexOffset,
                                                            uint32_t firstInstance) {
    layer_data *dev_data =
get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); VkBool32 skipCall = VK_FALSE; loader_platform_thread_lock_mutex(&globalLock); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, CMD_DRAWINDEXED, "vkCmdDrawIndexed()"); pCB->drawCount[DRAW_INDEXED]++; skipCall |= validate_and_update_draw_state(dev_data, pCB, VK_TRUE, VK_PIPELINE_BIND_POINT_GRAPHICS); skipCall |= markStoreImagesAndBuffersAsWritten(dev_data, pCB); // TODO : Need to pass commandBuffer as srcObj here skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_NONE, "DS", "vkCmdDrawIndexed() call #%" PRIu64 ", reporting DS state:", g_drawCount[DRAW_INDEXED]++); skipCall |= synchAndPrintDSConfig(dev_data, commandBuffer); if (VK_FALSE == skipCall) { updateResourceTrackingOnDraw(pCB); } skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdDrawIndexed"); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdDrawIndexed(commandBuffer, indexCount, instanceCount, firstIndex, vertexOffset, firstInstance); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t count, uint32_t stride) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); VkBool32 skipCall = VK_FALSE; loader_platform_thread_lock_mutex(&globalLock); #if MTMERGESOURCE VkDeviceMemory mem; // MTMTODO : merge with code below skipCall = get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem); skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdDrawIndirect"); #endif GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, CMD_DRAWINDIRECT, "vkCmdDrawIndirect()"); 
pCB->drawCount[DRAW_INDIRECT]++; skipCall |= validate_and_update_draw_state(dev_data, pCB, VK_FALSE, VK_PIPELINE_BIND_POINT_GRAPHICS); skipCall |= markStoreImagesAndBuffersAsWritten(dev_data, pCB); // TODO : Need to pass commandBuffer as srcObj here skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_NONE, "DS", "vkCmdDrawIndirect() call #%" PRIu64 ", reporting DS state:", g_drawCount[DRAW_INDIRECT]++); skipCall |= synchAndPrintDSConfig(dev_data, commandBuffer); if (VK_FALSE == skipCall) { updateResourceTrackingOnDraw(pCB); } skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdDrawIndirect"); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdDrawIndirect(commandBuffer, buffer, offset, count, stride); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndexedIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t count, uint32_t stride) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); #if MTMERGESOURCE VkDeviceMemory mem; // MTMTODO : merge with code below skipCall = get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem); skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdDrawIndexedIndirect"); #endif GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, CMD_DRAWINDEXEDINDIRECT, "vkCmdDrawIndexedIndirect()"); pCB->drawCount[DRAW_INDEXED_INDIRECT]++; skipCall |= validate_and_update_draw_state(dev_data, pCB, VK_TRUE, VK_PIPELINE_BIND_POINT_GRAPHICS); skipCall |= markStoreImagesAndBuffersAsWritten(dev_data, pCB); // TODO : Need to pass commandBuffer as srcObj here skipCall |= log_msg(dev_data->report_data, 
VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_NONE, "DS", "vkCmdDrawIndexedIndirect() call #%" PRIu64 ", reporting DS state:", g_drawCount[DRAW_INDEXED_INDIRECT]++); skipCall |= synchAndPrintDSConfig(dev_data, commandBuffer); if (VK_FALSE == skipCall) { updateResourceTrackingOnDraw(pCB); } skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdDrawIndexedIndirect"); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdDrawIndexedIndirect(commandBuffer, buffer, offset, count, stride); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDispatch(VkCommandBuffer commandBuffer, uint32_t x, uint32_t y, uint32_t z) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { // TODO : Re-enable validate_and_update_draw_state() when it supports compute shaders // skipCall |= validate_and_update_draw_state(dev_data, pCB, VK_FALSE, VK_PIPELINE_BIND_POINT_COMPUTE); // TODO : Call below is temporary until call above can be re-enabled update_shader_storage_images_and_buffers(dev_data, pCB); skipCall |= markStoreImagesAndBuffersAsWritten(dev_data, pCB); skipCall |= addCmd(dev_data, pCB, CMD_DISPATCH, "vkCmdDispatch()"); skipCall |= insideRenderPass(dev_data, pCB, "vkCmdDispatch"); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdDispatch(commandBuffer, x, y, z); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDispatchIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); #if MTMERGESOURCE VkDeviceMemory mem; skipCall = 
        get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
    skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdDispatchIndirect");
#endif
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
    if (pCB) {
        // TODO : Re-enable validate_and_update_draw_state() when it supports compute shaders
        // skipCall |= validate_and_update_draw_state(dev_data, pCB, VK_FALSE, VK_PIPELINE_BIND_POINT_COMPUTE);
        // TODO : Call below is temporary until call above can be re-enabled
        update_shader_storage_images_and_buffers(dev_data, pCB);
        skipCall |= markStoreImagesAndBuffersAsWritten(dev_data, pCB);
        skipCall |= addCmd(dev_data, pCB, CMD_DISPATCHINDIRECT, "vkCmdDispatchIndirect()");
        skipCall |= insideRenderPass(dev_data, pCB, "vkCmdDispatchIndirect");
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdDispatchIndirect(commandBuffer, buffer, offset);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyBuffer(VkCommandBuffer commandBuffer, VkBuffer srcBuffer, VkBuffer dstBuffer,
                                                           uint32_t regionCount, const VkBufferCopy *pRegions) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
#if MTMERGESOURCE
    VkDeviceMemory mem;
    auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
    skipCall =
        get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)srcBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
    if (cb_data != dev_data->commandBufferMap.end()) {
        std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(dev_data, mem, "vkCmdCopyBuffer()"); };
        cb_data->second->validate_functions.push_back(function);
    }
    skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdCopyBuffer");
    skipCall |= get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)dstBuffer,
                                            VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
    if (cb_data != dev_data->commandBufferMap.end()) {
        std::function<VkBool32()> function = [=]() {
            set_memory_valid(dev_data, mem, true);
            return VK_FALSE;
        };
        cb_data->second->validate_functions.push_back(function);
    }
    skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdCopyBuffer");
    // Validate that SRC & DST buffers have correct usage flags set
    skipCall |= validate_buffer_usage_flags(dev_data, commandBuffer, srcBuffer, VK_BUFFER_USAGE_TRANSFER_SRC_BIT, true,
                                            "vkCmdCopyBuffer()", "VK_BUFFER_USAGE_TRANSFER_SRC_BIT");
    skipCall |= validate_buffer_usage_flags(dev_data, commandBuffer, dstBuffer, VK_BUFFER_USAGE_TRANSFER_DST_BIT, true,
                                            "vkCmdCopyBuffer()", "VK_BUFFER_USAGE_TRANSFER_DST_BIT");
#endif
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
    if (pCB) {
        skipCall |= addCmd(dev_data, pCB, CMD_COPYBUFFER, "vkCmdCopyBuffer()");
        skipCall |= insideRenderPass(dev_data, pCB, "vkCmdCopyBuffer");
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdCopyBuffer(commandBuffer, srcBuffer, dstBuffer, regionCount, pRegions);
}

VkBool32 VerifySourceImageLayout(VkCommandBuffer cmdBuffer, VkImage srcImage, VkImageSubresourceLayers subLayers,
                                 VkImageLayout srcImageLayout) {
    VkBool32 skip_call = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, cmdBuffer);
    for (uint32_t i = 0; i < subLayers.layerCount; ++i) {
        uint32_t layer = i + subLayers.baseArrayLayer;
        VkImageSubresource sub = {subLayers.aspectMask, subLayers.mipLevel, layer};
        IMAGE_CMD_BUF_LAYOUT_NODE node;
        if (!FindLayout(pCB, srcImage, sub, node)) {
            SetLayout(pCB, srcImage, sub, IMAGE_CMD_BUF_LAYOUT_NODE(srcImageLayout, srcImageLayout));
            continue;
        }
        if (node.layout != srcImageLayout) {
            // TODO: Improve log message in the next pass
            skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                 VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT,
                                 "DS", "Cannot copy from an image whose source layout is %s "
                                       "and doesn't match the current layout %s.",
                                 string_VkImageLayout(srcImageLayout), string_VkImageLayout(node.layout));
        }
    }
    if (srcImageLayout != VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL) {
        if (srcImageLayout == VK_IMAGE_LAYOUT_GENERAL) {
            // LAYOUT_GENERAL is allowed, but may not be performance optimal, flag as perf warning.
            skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT,
                                 (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
                                 "Layout for input image should be TRANSFER_SRC_OPTIMAL instead of GENERAL.");
        } else {
            skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                                 DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
                                 "Layout for input image is %s but can only be "
                                 "TRANSFER_SRC_OPTIMAL or GENERAL.",
                                 string_VkImageLayout(srcImageLayout));
        }
    }
    return skip_call;
}

VkBool32 VerifyDestImageLayout(VkCommandBuffer cmdBuffer, VkImage destImage, VkImageSubresourceLayers subLayers,
                               VkImageLayout destImageLayout) {
    VkBool32 skip_call = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, cmdBuffer);
    for (uint32_t i = 0; i < subLayers.layerCount; ++i) {
        uint32_t layer = i + subLayers.baseArrayLayer;
        VkImageSubresource sub = {subLayers.aspectMask, subLayers.mipLevel, layer};
        IMAGE_CMD_BUF_LAYOUT_NODE node;
        if (!FindLayout(pCB, destImage, sub, node)) {
            SetLayout(pCB, destImage, sub, IMAGE_CMD_BUF_LAYOUT_NODE(destImageLayout, destImageLayout));
            continue;
        }
        if (node.layout != destImageLayout) {
            skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                 VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT,
                                 "DS", "Cannot copy to an image whose dest layout is %s and "
                                       "doesn't match the current layout %s.",
                                 string_VkImageLayout(destImageLayout), string_VkImageLayout(node.layout));
        }
    }
    if (destImageLayout != VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL) {
        if (destImageLayout == VK_IMAGE_LAYOUT_GENERAL) {
            // LAYOUT_GENERAL is allowed, but may not be performance optimal, flag as perf warning.
            skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT,
                                 (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
                                 "Layout for output image should be TRANSFER_DST_OPTIMAL instead of GENERAL.");
        } else {
            skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                                 DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
                                 "Layout for output image is %s but can only be "
                                 "TRANSFER_DST_OPTIMAL or GENERAL.",
                                 string_VkImageLayout(destImageLayout));
        }
    }
    return skip_call;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyImage(VkCommandBuffer commandBuffer, VkImage srcImage,
                                                          VkImageLayout srcImageLayout, VkImage dstImage,
                                                          VkImageLayout dstImageLayout, uint32_t regionCount,
                                                          const VkImageCopy *pRegions) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
#if MTMERGESOURCE
    VkDeviceMemory mem;
    auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
    // Validate that src & dst images have correct usage flags set
    skipCall =
        get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)srcImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
    if (cb_data != dev_data->commandBufferMap.end()) {
        std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(dev_data, mem, "vkCmdCopyImage()", srcImage); };
        cb_data->second->validate_functions.push_back(function);
    }
    skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdCopyImage");
    skipCall |= get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)dstImage,
                                            VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
    if (cb_data != dev_data->commandBufferMap.end()) {
        std::function<VkBool32()> function = [=]() {
            set_memory_valid(dev_data, mem, true, dstImage);
            return VK_FALSE;
        };
        cb_data->second->validate_functions.push_back(function);
    }
    skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdCopyImage");
    skipCall |= validate_image_usage_flags(dev_data, commandBuffer, srcImage, VK_IMAGE_USAGE_TRANSFER_SRC_BIT, true,
                                           "vkCmdCopyImage()", "VK_IMAGE_USAGE_TRANSFER_SRC_BIT");
    skipCall |= validate_image_usage_flags(dev_data, commandBuffer, dstImage, VK_IMAGE_USAGE_TRANSFER_DST_BIT, true,
                                           "vkCmdCopyImage()", "VK_IMAGE_USAGE_TRANSFER_DST_BIT");
#endif
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
    if (pCB) {
        skipCall |= addCmd(dev_data, pCB, CMD_COPYIMAGE, "vkCmdCopyImage()");
        skipCall |= insideRenderPass(dev_data, pCB, "vkCmdCopyImage");
        for (uint32_t i = 0; i < regionCount; ++i) {
            skipCall |= VerifySourceImageLayout(commandBuffer, srcImage, pRegions[i].srcSubresource, srcImageLayout);
            skipCall |= VerifyDestImageLayout(commandBuffer, dstImage, pRegions[i].dstSubresource, dstImageLayout);
        }
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdCopyImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout,
                                                      regionCount, pRegions);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBlitImage(VkCommandBuffer commandBuffer, VkImage srcImage,
                                                          VkImageLayout srcImageLayout, VkImage dstImage,
                                                          VkImageLayout dstImageLayout, uint32_t regionCount,
                                                          const VkImageBlit *pRegions, VkFilter filter) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
#if MTMERGESOURCE
    VkDeviceMemory mem;
    auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
    // Validate that src & dst images have correct usage flags set
    skipCall =
        get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)srcImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
    if (cb_data != dev_data->commandBufferMap.end()) {
        std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(dev_data, mem, "vkCmdBlitImage()", srcImage); };
        cb_data->second->validate_functions.push_back(function);
    }
    skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdBlitImage");
    skipCall |= get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)dstImage,
                                            VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
    if (cb_data != dev_data->commandBufferMap.end()) {
        std::function<VkBool32()> function = [=]() {
            set_memory_valid(dev_data, mem, true, dstImage);
            return VK_FALSE;
        };
        cb_data->second->validate_functions.push_back(function);
    }
    skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdBlitImage");
    skipCall |= validate_image_usage_flags(dev_data, commandBuffer, srcImage, VK_IMAGE_USAGE_TRANSFER_SRC_BIT, true,
                                           "vkCmdBlitImage()", "VK_IMAGE_USAGE_TRANSFER_SRC_BIT");
    skipCall |= validate_image_usage_flags(dev_data, commandBuffer, dstImage, VK_IMAGE_USAGE_TRANSFER_DST_BIT, true,
                                           "vkCmdBlitImage()", "VK_IMAGE_USAGE_TRANSFER_DST_BIT");
#endif
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
    if (pCB) {
        skipCall |= addCmd(dev_data, pCB, CMD_BLITIMAGE, "vkCmdBlitImage()");
        skipCall |= insideRenderPass(dev_data, pCB, "vkCmdBlitImage");
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdBlitImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout,
                                                      regionCount, pRegions, filter);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyBufferToImage(VkCommandBuffer commandBuffer, VkBuffer srcBuffer,
                                                                  VkImage dstImage, VkImageLayout dstImageLayout,
                                                                  uint32_t regionCount, const VkBufferImageCopy *pRegions) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
#if MTMERGESOURCE
    VkDeviceMemory mem;
    auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
    skipCall =
        get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)dstImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
    if (cb_data != dev_data->commandBufferMap.end()) {
        std::function<VkBool32()> function = [=]() {
            set_memory_valid(dev_data, mem, true, dstImage);
            return VK_FALSE;
        };
        cb_data->second->validate_functions.push_back(function);
    }
    skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdCopyBufferToImage");
    skipCall |= get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)srcBuffer,
                                            VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
    if (cb_data != dev_data->commandBufferMap.end()) {
        std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(dev_data, mem, "vkCmdCopyBufferToImage()"); };
        cb_data->second->validate_functions.push_back(function);
    }
    skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdCopyBufferToImage");
    // Validate that src buff & dst image have correct usage flags set
    skipCall |= validate_buffer_usage_flags(dev_data, commandBuffer, srcBuffer, VK_BUFFER_USAGE_TRANSFER_SRC_BIT, true,
                                            "vkCmdCopyBufferToImage()", "VK_BUFFER_USAGE_TRANSFER_SRC_BIT");
    skipCall |= validate_image_usage_flags(dev_data, commandBuffer, dstImage, VK_IMAGE_USAGE_TRANSFER_DST_BIT, true,
                                           "vkCmdCopyBufferToImage()", "VK_IMAGE_USAGE_TRANSFER_DST_BIT");
#endif
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
    if (pCB) {
        skipCall |= addCmd(dev_data, pCB, CMD_COPYBUFFERTOIMAGE, "vkCmdCopyBufferToImage()");
        skipCall |= insideRenderPass(dev_data, pCB, "vkCmdCopyBufferToImage");
        for (uint32_t i = 0; i < regionCount; ++i) {
            skipCall |= VerifyDestImageLayout(commandBuffer, dstImage, pRegions[i].imageSubresource, dstImageLayout);
        }
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdCopyBufferToImage(commandBuffer, srcBuffer, dstImage, dstImageLayout, regionCount,
                                                              pRegions);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyImageToBuffer(VkCommandBuffer commandBuffer, VkImage srcImage,
                                                                  VkImageLayout srcImageLayout, VkBuffer dstBuffer,
                                                                  uint32_t regionCount, const VkBufferImageCopy *pRegions) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
#if MTMERGESOURCE
    VkDeviceMemory mem;
    auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
    skipCall =
        get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)srcImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem);
    if (cb_data != dev_data->commandBufferMap.end()) {
        std::function<VkBool32()> function = [=]() {
            return validate_memory_is_valid(dev_data, mem, "vkCmdCopyImageToBuffer()", srcImage);
        };
        cb_data->second->validate_functions.push_back(function);
    }
    skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdCopyImageToBuffer");
    skipCall |= get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)dstBuffer,
                                            VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
    if (cb_data != dev_data->commandBufferMap.end()) {
        std::function<VkBool32()> function = [=]() {
            set_memory_valid(dev_data, mem, true);
            return VK_FALSE;
        };
        cb_data->second->validate_functions.push_back(function);
    }
    skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdCopyImageToBuffer");
    // Validate that dst buff & src image have correct usage flags set
    skipCall |= validate_image_usage_flags(dev_data, commandBuffer, srcImage, VK_IMAGE_USAGE_TRANSFER_SRC_BIT, true,
                                           "vkCmdCopyImageToBuffer()", "VK_IMAGE_USAGE_TRANSFER_SRC_BIT");
    skipCall |= validate_buffer_usage_flags(dev_data, commandBuffer, dstBuffer, VK_BUFFER_USAGE_TRANSFER_DST_BIT, true,
                                            "vkCmdCopyImageToBuffer()", "VK_BUFFER_USAGE_TRANSFER_DST_BIT");
#endif
    GLOBAL_CB_NODE *pCB =
        getCBNode(dev_data, commandBuffer);
    if (pCB) {
        skipCall |= addCmd(dev_data, pCB, CMD_COPYIMAGETOBUFFER, "vkCmdCopyImageToBuffer()");
        skipCall |= insideRenderPass(dev_data, pCB, "vkCmdCopyImageToBuffer");
        for (uint32_t i = 0; i < regionCount; ++i) {
            skipCall |= VerifySourceImageLayout(commandBuffer, srcImage, pRegions[i].imageSubresource, srcImageLayout);
        }
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdCopyImageToBuffer(commandBuffer, srcImage, srcImageLayout, dstBuffer, regionCount,
                                                              pRegions);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdUpdateBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer,
                                                             VkDeviceSize dstOffset, VkDeviceSize dataSize, const uint32_t *pData) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
#if MTMERGESOURCE
    VkDeviceMemory mem;
    auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
    skipCall =
        get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)dstBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
    if (cb_data != dev_data->commandBufferMap.end()) {
        std::function<VkBool32()> function = [=]() {
            set_memory_valid(dev_data, mem, true);
            return VK_FALSE;
        };
        cb_data->second->validate_functions.push_back(function);
    }
    skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdUpdateBuffer");
    // Validate that dst buff has correct usage flags set
    skipCall |= validate_buffer_usage_flags(dev_data, commandBuffer, dstBuffer, VK_BUFFER_USAGE_TRANSFER_DST_BIT, true,
                                            "vkCmdUpdateBuffer()", "VK_BUFFER_USAGE_TRANSFER_DST_BIT");
#endif
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
    if (pCB) {
        skipCall |= addCmd(dev_data, pCB, CMD_UPDATEBUFFER, "vkCmdUpdateBuffer()");
        skipCall |= insideRenderPass(dev_data, pCB, "vkCmdUpdateBuffer");
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdUpdateBuffer(commandBuffer, dstBuffer, dstOffset, dataSize, pData);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdFillBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer,
                                                           VkDeviceSize dstOffset, VkDeviceSize size, uint32_t data) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
#if MTMERGESOURCE
    VkDeviceMemory mem;
    auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
    skipCall =
        get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)dstBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
    if (cb_data != dev_data->commandBufferMap.end()) {
        std::function<VkBool32()> function = [=]() {
            set_memory_valid(dev_data, mem, true);
            return VK_FALSE;
        };
        cb_data->second->validate_functions.push_back(function);
    }
    skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdFillBuffer");
    // Validate that dst buff has correct usage flags set
    skipCall |= validate_buffer_usage_flags(dev_data, commandBuffer, dstBuffer, VK_BUFFER_USAGE_TRANSFER_DST_BIT, true,
                                            "vkCmdFillBuffer()", "VK_BUFFER_USAGE_TRANSFER_DST_BIT");
#endif
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
    if (pCB) {
        skipCall |= addCmd(dev_data, pCB, CMD_FILLBUFFER, "vkCmdFillBuffer()");
        skipCall |= insideRenderPass(dev_data, pCB, "vkCmdFillBuffer");
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdFillBuffer(commandBuffer, dstBuffer, dstOffset, size, data);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearAttachments(VkCommandBuffer commandBuffer, uint32_t attachmentCount,
                                                                 const VkClearAttachment *pAttachments, uint32_t rectCount,
                                                                 const VkClearRect *pRects) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
    GLOBAL_CB_NODE *pCB =
getCBNode(dev_data, commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, CMD_CLEARATTACHMENTS, "vkCmdClearAttachments()"); // Warn if this is issued prior to Draw Cmd and clearing the entire attachment if (!hasDrawCmd(pCB) && (pCB->activeRenderPassBeginInfo.renderArea.extent.width == pRects[0].rect.extent.width) && (pCB->activeRenderPassBeginInfo.renderArea.extent.height == pRects[0].rect.extent.height)) { // TODO : commandBuffer should be srcObj // There are times where app needs to use ClearAttachments (generally when reusing a buffer inside of a render pass) // Can we make this warning more specific? I'd like to avoid triggering this test if we can tell it's a use that must // call CmdClearAttachments // Otherwise this seems more like a performance warning. skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, 0, DRAWSTATE_CLEAR_CMD_BEFORE_DRAW, "DS", "vkCmdClearAttachments() issued on CB object 0x%" PRIxLEAST64 " prior to any Draw Cmds." 
" It is recommended you use RenderPass LOAD_OP_CLEAR on Attachments prior to any Draw.", (uint64_t)(commandBuffer)); } skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdClearAttachments"); } // Validate that attachment is in reference list of active subpass if (pCB->activeRenderPass) { const VkRenderPassCreateInfo *pRPCI = dev_data->renderPassMap[pCB->activeRenderPass]->pCreateInfo; const VkSubpassDescription *pSD = &pRPCI->pSubpasses[pCB->activeSubpass]; for (uint32_t attachment_idx = 0; attachment_idx < attachmentCount; attachment_idx++) { const VkClearAttachment *attachment = &pAttachments[attachment_idx]; if (attachment->aspectMask & VK_IMAGE_ASPECT_COLOR_BIT) { VkBool32 found = VK_FALSE; for (uint32_t i = 0; i < pSD->colorAttachmentCount; i++) { if (attachment->colorAttachment == pSD->pColorAttachments[i].attachment) { found = VK_TRUE; break; } } if (VK_FALSE == found) { skipCall |= log_msg( dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__, DRAWSTATE_MISSING_ATTACHMENT_REFERENCE, "DS", "vkCmdClearAttachments() attachment index %d not found in attachment reference array of active subpass %d", attachment->colorAttachment, pCB->activeSubpass); } } else if (attachment->aspectMask & (VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT)) { if (!pSD->pDepthStencilAttachment || // Says no DS will be used in active subpass (pSD->pDepthStencilAttachment->attachment == VK_ATTACHMENT_UNUSED)) { // Says no DS will be used in active subpass skipCall |= log_msg( dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__, DRAWSTATE_MISSING_ATTACHMENT_REFERENCE, "DS", "vkCmdClearAttachments() attachment index %d does not match depthStencilAttachment.attachment (%d) found " "in active subpass %d", attachment->colorAttachment, (pSD->pDepthStencilAttachment) ? 
pSD->pDepthStencilAttachment->attachment : VK_ATTACHMENT_UNUSED, pCB->activeSubpass); } } } } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdClearAttachments(commandBuffer, attachmentCount, pAttachments, rectCount, pRects); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearColorImage(VkCommandBuffer commandBuffer, VkImage image, VkImageLayout imageLayout, const VkClearColorValue *pColor, uint32_t rangeCount, const VkImageSubresourceRange *pRanges) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock);
#if MTMERGESOURCE
// TODO : Verify memory is in VK_IMAGE_STATE_CLEAR state
VkDeviceMemory mem; auto cb_data = dev_data->commandBufferMap.find(commandBuffer); skipCall = get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem); if (cb_data != dev_data->commandBufferMap.end()) { std::function<VkBool32()> function = [=]() { set_memory_valid(dev_data, mem, true, image); return VK_FALSE; }; cb_data->second->validate_functions.push_back(function); } skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdClearColorImage");
#endif
GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, CMD_CLEARCOLORIMAGE, "vkCmdClearColorImage()"); skipCall |= insideRenderPass(dev_data, pCB, "vkCmdClearColorImage"); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdClearColorImage(commandBuffer, image, imageLayout, pColor, rangeCount, pRanges); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearDepthStencilImage(VkCommandBuffer commandBuffer, VkImage image, VkImageLayout imageLayout, const VkClearDepthStencilValue *pDepthStencil, uint32_t rangeCount, const VkImageSubresourceRange *pRanges) { VkBool32 skipCall = VK_FALSE;
layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock);
#if MTMERGESOURCE
// TODO : Verify memory is in VK_IMAGE_STATE_CLEAR state
VkDeviceMemory mem; auto cb_data = dev_data->commandBufferMap.find(commandBuffer); skipCall = get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem); if (cb_data != dev_data->commandBufferMap.end()) { std::function<VkBool32()> function = [=]() { set_memory_valid(dev_data, mem, true, image); return VK_FALSE; }; cb_data->second->validate_functions.push_back(function); } skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdClearDepthStencilImage");
#endif
GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, CMD_CLEARDEPTHSTENCILIMAGE, "vkCmdClearDepthStencilImage()"); skipCall |= insideRenderPass(dev_data, pCB, "vkCmdClearDepthStencilImage"); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdClearDepthStencilImage(commandBuffer, image, imageLayout, pDepthStencil, rangeCount, pRanges); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdResolveImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageResolve *pRegions) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock);
#if MTMERGESOURCE
auto cb_data = dev_data->commandBufferMap.find(commandBuffer); VkDeviceMemory mem; skipCall = get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)srcImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem); if (cb_data != dev_data->commandBufferMap.end()) { std::function<VkBool32()> function = [=]() { return validate_memory_is_valid(dev_data, mem, "vkCmdResolveImage()",
srcImage); }; cb_data->second->validate_functions.push_back(function); } skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdResolveImage"); skipCall |= get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)dstImage, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem); if (cb_data != dev_data->commandBufferMap.end()) { std::function<VkBool32()> function = [=]() { set_memory_valid(dev_data, mem, true, dstImage); return VK_FALSE; }; cb_data->second->validate_functions.push_back(function); } skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdResolveImage");
#endif
GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, CMD_RESOLVEIMAGE, "vkCmdResolveImage()"); skipCall |= insideRenderPass(dev_data, pCB, "vkCmdResolveImage"); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdResolveImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions); } bool setEventStageMask(VkQueue queue, VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { pCB->eventToStageMap[event] = stageMask; } auto queue_data = dev_data->queueMap.find(queue); if (queue_data != dev_data->queueMap.end()) { queue_data->second.eventToStageMap[event] = stageMask; } return false; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetEvent(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, CMD_SETEVENT, "vkCmdSetEvent()"); skipCall |=
insideRenderPass(dev_data, pCB, "vkCmdSetEvent"); pCB->events.push_back(event); std::function<bool(VkQueue)> eventUpdate = std::bind(setEventStageMask, std::placeholders::_1, commandBuffer, event, stageMask); pCB->eventUpdates.push_back(eventUpdate); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdSetEvent(commandBuffer, event, stageMask); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdResetEvent(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, CMD_RESETEVENT, "vkCmdResetEvent()"); skipCall |= insideRenderPass(dev_data, pCB, "vkCmdResetEvent"); pCB->events.push_back(event); std::function<bool(VkQueue)> eventUpdate = std::bind(setEventStageMask, std::placeholders::_1, commandBuffer, event, VkPipelineStageFlags(0)); pCB->eventUpdates.push_back(eventUpdate); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdResetEvent(commandBuffer, event, stageMask); } VkBool32 TransitionImageLayouts(VkCommandBuffer cmdBuffer, uint32_t memBarrierCount, const VkImageMemoryBarrier *pImgMemBarriers) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, cmdBuffer); VkBool32 skip = VK_FALSE; uint32_t levelCount = 0; uint32_t layerCount = 0; for (uint32_t i = 0; i < memBarrierCount; ++i) { auto mem_barrier = &pImgMemBarriers[i]; if (!mem_barrier) continue;
// TODO: Do not iterate over every possibility - consolidate where possible
ResolveRemainingLevelsLayers(dev_data, &levelCount, &layerCount, mem_barrier->subresourceRange, mem_barrier->image); for (uint32_t j = 0; j < levelCount; j++) { uint32_t level =
mem_barrier->subresourceRange.baseMipLevel + j; for (uint32_t k = 0; k < layerCount; k++) { uint32_t layer = mem_barrier->subresourceRange.baseArrayLayer + k; VkImageSubresource sub = {mem_barrier->subresourceRange.aspectMask, level, layer}; IMAGE_CMD_BUF_LAYOUT_NODE node; if (!FindLayout(pCB, mem_barrier->image, sub, node)) { SetLayout(pCB, mem_barrier->image, sub, IMAGE_CMD_BUF_LAYOUT_NODE(mem_barrier->oldLayout, mem_barrier->newLayout)); continue; } if (mem_barrier->oldLayout == VK_IMAGE_LAYOUT_UNDEFINED) { // TODO: Set memory invalid which is in mem_tracker currently } else if (node.layout != mem_barrier->oldLayout) { skip |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS", "You cannot transition the layout from %s " "when current layout is %s.", string_VkImageLayout(mem_barrier->oldLayout), string_VkImageLayout(node.layout)); } SetLayout(pCB, mem_barrier->image, sub, mem_barrier->newLayout); } } } return skip; } // Print readable FlagBits in FlagMask std::string string_VkAccessFlags(VkAccessFlags accessMask) { std::string result; std::string separator; if (accessMask == 0) { result = "[None]"; } else { result = "["; for (auto i = 0; i < 32; i++) { if (accessMask & (1 << i)) { result = result + separator + string_VkAccessFlagBits((VkAccessFlagBits)(1 << i)); separator = " | "; } } result = result + "]"; } return result; } // AccessFlags MUST have 'required_bit' set, and may have one or more of 'optional_bits' set. 
// If required_bit is zero, accessMask must have at least one of 'optional_bits' set
// TODO: Add tracking to ensure that at least one barrier has been set for these layout transitions
VkBool32 ValidateMaskBits(const layer_data *my_data, VkCommandBuffer cmdBuffer, const VkAccessFlags &accessMask, const VkImageLayout &layout, VkAccessFlags required_bit, VkAccessFlags optional_bits, const char *type) { VkBool32 skip_call = VK_FALSE; if ((accessMask & required_bit) || (!required_bit && (accessMask & optional_bits))) { if (accessMask & ~(required_bit | optional_bits)) {
// TODO: Verify against Valid Use
skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_BARRIER, "DS", "Additional bits in %s accessMask %d %s are specified when layout is %s.", type, accessMask, string_VkAccessFlags(accessMask).c_str(), string_VkImageLayout(layout)); } } else { if (!required_bit) { skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_BARRIER, "DS", "%s AccessMask %d %s must contain at least one of access bits %d " "%s when layout is %s, unless the app has previously added a " "barrier for this transition.", type, accessMask, string_VkAccessFlags(accessMask).c_str(), optional_bits, string_VkAccessFlags(optional_bits).c_str(), string_VkImageLayout(layout)); } else { std::string opt_bits; if (optional_bits != 0) { std::stringstream ss; ss << optional_bits; opt_bits = "and may have optional bits " + ss.str() + ' ' + string_VkAccessFlags(optional_bits); } skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_BARRIER, "DS", "%s AccessMask %d %s must have required access bit %d %s %s when " "layout is %s, unless the app has previously added a barrier for " "this transition.", type, accessMask, string_VkAccessFlags(accessMask).c_str(), required_bit,
string_VkAccessFlags(required_bit).c_str(), opt_bits.c_str(), string_VkImageLayout(layout)); } } return skip_call; } VkBool32 ValidateMaskBitsFromLayouts(const layer_data *my_data, VkCommandBuffer cmdBuffer, const VkAccessFlags &accessMask, const VkImageLayout &layout, const char *type) { VkBool32 skip_call = VK_FALSE; switch (layout) { case VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL: { skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT, VK_ACCESS_COLOR_ATTACHMENT_READ_BIT, type); break; } case VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL: { skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT, VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT, type); break; } case VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL: { skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, VK_ACCESS_TRANSFER_WRITE_BIT, 0, type); break; } case VK_IMAGE_LAYOUT_PREINITIALIZED: { skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, VK_ACCESS_HOST_WRITE_BIT, 0, type); break; } case VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL: { skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, 0, VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT | VK_ACCESS_SHADER_READ_BIT, type); break; } case VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL: { skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, 0, VK_ACCESS_INPUT_ATTACHMENT_READ_BIT | VK_ACCESS_SHADER_READ_BIT, type); break; } case VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL: { skip_call |= ValidateMaskBits(my_data, cmdBuffer, accessMask, layout, VK_ACCESS_TRANSFER_READ_BIT, 0, type); break; } case VK_IMAGE_LAYOUT_UNDEFINED: { if (accessMask != 0) { // TODO: Verify against Valid Use section spec skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_BARRIER, "DS", "Additional bits in %s accessMask %d %s are specified 
when layout is %s.", type, accessMask, string_VkAccessFlags(accessMask).c_str(), string_VkImageLayout(layout)); } break; } case VK_IMAGE_LAYOUT_GENERAL: default: { break; } } return skip_call; } VkBool32 ValidateBarriers(const char *funcName, VkCommandBuffer cmdBuffer, uint32_t memBarrierCount, const VkMemoryBarrier *pMemBarriers, uint32_t bufferBarrierCount, const VkBufferMemoryBarrier *pBufferMemBarriers, uint32_t imageMemBarrierCount, const VkImageMemoryBarrier *pImageMemBarriers) { VkBool32 skip_call = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, cmdBuffer); if (pCB->activeRenderPass && memBarrierCount) { if (!dev_data->renderPassMap[pCB->activeRenderPass]->hasSelfDependency[pCB->activeSubpass]) { skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_BARRIER, "DS", "%s: Barriers cannot be set during subpass %d " "with no self dependency specified.", funcName, pCB->activeSubpass); } } for (uint32_t i = 0; i < imageMemBarrierCount; ++i) { auto mem_barrier = &pImageMemBarriers[i]; auto image_data = dev_data->imageMap.find(mem_barrier->image); if (image_data != dev_data->imageMap.end()) { uint32_t src_q_f_index = mem_barrier->srcQueueFamilyIndex; uint32_t dst_q_f_index = mem_barrier->dstQueueFamilyIndex; if (image_data->second.createInfo.sharingMode == VK_SHARING_MODE_CONCURRENT) { // srcQueueFamilyIndex and dstQueueFamilyIndex must both // be VK_QUEUE_FAMILY_IGNORED if ((src_q_f_index != VK_QUEUE_FAMILY_IGNORED) || (dst_q_f_index != VK_QUEUE_FAMILY_IGNORED)) { skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_QUEUE_INDEX, "DS", "%s: Image Barrier for image 0x%" PRIx64 " was created with sharingMode of " "VK_SHARING_MODE_CONCURRENT. 
Src and dst " " queueFamilyIndices must be VK_QUEUE_FAMILY_IGNORED.", funcName, reinterpret_cast<const uint64_t &>(mem_barrier->image)); } } else {
// Sharing mode is VK_SHARING_MODE_EXCLUSIVE. srcQueueFamilyIndex and
// dstQueueFamilyIndex must either both be VK_QUEUE_FAMILY_IGNORED,
// or both be a valid queue family
if (((src_q_f_index == VK_QUEUE_FAMILY_IGNORED) || (dst_q_f_index == VK_QUEUE_FAMILY_IGNORED)) && (src_q_f_index != dst_q_f_index)) { skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_QUEUE_INDEX, "DS", "%s: Image 0x%" PRIx64 " was created with sharingMode " "of VK_SHARING_MODE_EXCLUSIVE. If one of src- or " "dstQueueFamilyIndex is VK_QUEUE_FAMILY_IGNORED, both " "must be.", funcName, reinterpret_cast<const uint64_t &>(mem_barrier->image)); } else if (((src_q_f_index != VK_QUEUE_FAMILY_IGNORED) && (dst_q_f_index != VK_QUEUE_FAMILY_IGNORED)) && ((src_q_f_index >= dev_data->physDevProperties.queue_family_properties.size()) || (dst_q_f_index >= dev_data->physDevProperties.queue_family_properties.size()))) { skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_QUEUE_INDEX, "DS", "%s: Image 0x%" PRIx64 " was created with sharingMode " "of VK_SHARING_MODE_EXCLUSIVE, but srcQueueFamilyIndex %d" " or dstQueueFamilyIndex %d is greater than " PRINTF_SIZE_T_SPECIFIER " queueFamilies created for this device.", funcName, reinterpret_cast<const uint64_t &>(mem_barrier->image), src_q_f_index, dst_q_f_index, dev_data->physDevProperties.queue_family_properties.size()); } } } if (mem_barrier) { skip_call |= ValidateMaskBitsFromLayouts(dev_data, cmdBuffer, mem_barrier->srcAccessMask, mem_barrier->oldLayout, "Source"); skip_call |= ValidateMaskBitsFromLayouts(dev_data, cmdBuffer, mem_barrier->dstAccessMask, mem_barrier->newLayout, "Dest"); if (mem_barrier->newLayout == VK_IMAGE_LAYOUT_UNDEFINED || mem_barrier->newLayout == VK_IMAGE_LAYOUT_PREINITIALIZED) {
log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_BARRIER, "DS", "%s: Image Layout cannot be transitioned to UNDEFINED or " "PREINITIALIZED.", funcName); } auto image_data = dev_data->imageMap.find(mem_barrier->image); VkFormat format; uint32_t arrayLayers, mipLevels; bool imageFound = false; if (image_data != dev_data->imageMap.end()) { format = image_data->second.createInfo.format; arrayLayers = image_data->second.createInfo.arrayLayers; mipLevels = image_data->second.createInfo.mipLevels; imageFound = true; } else if (dev_data->device_extensions.wsi_enabled) { auto imageswap_data = dev_data->device_extensions.imageToSwapchainMap.find(mem_barrier->image); if (imageswap_data != dev_data->device_extensions.imageToSwapchainMap.end()) { auto swapchain_data = dev_data->device_extensions.swapchainMap.find(imageswap_data->second); if (swapchain_data != dev_data->device_extensions.swapchainMap.end()) { format = swapchain_data->second->createInfo.imageFormat; arrayLayers = swapchain_data->second->createInfo.imageArrayLayers; mipLevels = 1; imageFound = true; } } } if (imageFound) { if (vk_format_is_depth_and_stencil(format) && (!(mem_barrier->subresourceRange.aspectMask & VK_IMAGE_ASPECT_DEPTH_BIT) || !(mem_barrier->subresourceRange.aspectMask & VK_IMAGE_ASPECT_STENCIL_BIT))) { log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_BARRIER, "DS", "%s: Image is a depth and stencil format and thus must " "have both VK_IMAGE_ASPECT_DEPTH_BIT and " "VK_IMAGE_ASPECT_STENCIL_BIT set.", funcName); } int layerCount = (mem_barrier->subresourceRange.layerCount == VK_REMAINING_ARRAY_LAYERS) ? 
1 : mem_barrier->subresourceRange.layerCount; if ((mem_barrier->subresourceRange.baseArrayLayer + layerCount) > arrayLayers) { log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_BARRIER, "DS", "%s: Subresource must have the sum of the " "baseArrayLayer (%d) and layerCount (%d) be less " "than or equal to the total number of layers (%d).", funcName, mem_barrier->subresourceRange.baseArrayLayer, mem_barrier->subresourceRange.layerCount, arrayLayers); } int levelCount = (mem_barrier->subresourceRange.levelCount == VK_REMAINING_MIP_LEVELS) ? 1 : mem_barrier->subresourceRange.levelCount; if ((mem_barrier->subresourceRange.baseMipLevel + levelCount) > mipLevels) { log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_BARRIER, "DS", "%s: Subresource must have the sum of the baseMipLevel " "(%d) and levelCount (%d) be less than or equal to " "the total number of levels (%d).", funcName, mem_barrier->subresourceRange.baseMipLevel, mem_barrier->subresourceRange.levelCount, mipLevels); } } } } for (uint32_t i = 0; i < bufferBarrierCount; ++i) { auto mem_barrier = &pBufferMemBarriers[i]; if (pCB->activeRenderPass) { skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_BARRIER, "DS", "%s: Buffer Barriers cannot be used during a render pass.", funcName); } if (!mem_barrier) continue; // Validate buffer barrier queue family indices if ((mem_barrier->srcQueueFamilyIndex != VK_QUEUE_FAMILY_IGNORED && mem_barrier->srcQueueFamilyIndex >= dev_data->physDevProperties.queue_family_properties.size()) || (mem_barrier->dstQueueFamilyIndex != VK_QUEUE_FAMILY_IGNORED && mem_barrier->dstQueueFamilyIndex >= dev_data->physDevProperties.queue_family_properties.size())) { skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, 
(VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_QUEUE_INDEX, "DS", "%s: Buffer Barrier 0x%" PRIx64 " has QueueFamilyIndex greater " "than the number of QueueFamilies (" PRINTF_SIZE_T_SPECIFIER ") for this device.", funcName, reinterpret_cast<const uint64_t &>(mem_barrier->buffer), dev_data->physDevProperties.queue_family_properties.size()); } auto buffer_data = dev_data->bufferMap.find(mem_barrier->buffer); if (buffer_data != dev_data->bufferMap.end()) { uint64_t buffer_size = buffer_data->second.create_info ? reinterpret_cast<uint64_t &>(buffer_data->second.create_info->size) : 0; if (mem_barrier->offset >= buffer_size) { skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_BARRIER, "DS", "%s: Buffer Barrier 0x%" PRIx64 " has offset %" PRIu64 " whose sum is not less than total size %" PRIu64 ".", funcName, reinterpret_cast<const uint64_t &>(mem_barrier->buffer), reinterpret_cast<const uint64_t &>(mem_barrier->offset), buffer_size); } else if (mem_barrier->size != VK_WHOLE_SIZE && (mem_barrier->offset + mem_barrier->size > buffer_size)) { skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_BARRIER, "DS", "%s: Buffer Barrier 0x%" PRIx64 " has offset %" PRIu64 " and size %" PRIu64 " whose sum is greater than total size %" PRIu64 ".", funcName, reinterpret_cast<const uint64_t &>(mem_barrier->buffer), reinterpret_cast<const uint64_t &>(mem_barrier->offset), reinterpret_cast<const uint64_t &>(mem_barrier->size), buffer_size); } } } return skip_call; } bool validateEventStageMask(VkQueue queue, GLOBAL_CB_NODE *pCB, uint32_t eventCount, size_t firstEventIndex, VkPipelineStageFlags sourceStageMask) { bool skip_call = false; VkPipelineStageFlags stageMask = 0; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map); for (uint32_t i = 0; i < eventCount; ++i) { auto event = pCB->events[firstEventIndex + i]; auto queue_data = dev_data->queueMap.find(queue); if (queue_data ==
dev_data->queueMap.end()) return false; auto event_data = queue_data->second.eventToStageMap.find(event); if (event_data != queue_data->second.eventToStageMap.end()) { stageMask |= event_data->second; } else { auto global_event_data = dev_data->eventMap.find(event); if (global_event_data == dev_data->eventMap.end()) { skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_EVENT_EXT, reinterpret_cast<const uint64_t &>(event), __LINE__, DRAWSTATE_INVALID_EVENT, "DS", "Event 0x%" PRIx64 " cannot be waited on if it has never been set.", reinterpret_cast<const uint64_t &>(event)); } else { stageMask |= global_event_data->second.stageMask; } } } if (sourceStageMask != stageMask) { skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_EVENT, "DS", "Submitting cmdbuffer with call to VkCmdWaitEvents using srcStageMask 0x%x which must be the bitwise OR of the " "stageMask parameters used in calls to vkCmdSetEvent and VK_PIPELINE_STAGE_HOST_BIT if used with vkSetEvent.", sourceStageMask); } return skip_call; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdWaitEvents(VkCommandBuffer commandBuffer, uint32_t eventCount, const VkEvent *pEvents, VkPipelineStageFlags sourceStageMask, VkPipelineStageFlags dstStageMask, uint32_t memoryBarrierCount, const VkMemoryBarrier *pMemoryBarriers, uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier *pBufferMemoryBarriers, uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier *pImageMemoryBarriers) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { auto firstEventIndex = pCB->events.size(); for (uint32_t i = 0; i < eventCount; ++i) { pCB->waitedEvents.push_back(pEvents[i]); pCB->events.push_back(pEvents[i]); } std::function<bool(VkQueue)> eventUpdate =
std::bind(validateEventStageMask, std::placeholders::_1, pCB, eventCount, firstEventIndex, sourceStageMask); pCB->eventUpdates.push_back(eventUpdate); if (pCB->state == CB_RECORDING) { skipCall |= addCmd(dev_data, pCB, CMD_WAITEVENTS, "vkCmdWaitEvents()"); } else { skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdWaitEvents()"); } skipCall |= TransitionImageLayouts(commandBuffer, imageMemoryBarrierCount, pImageMemoryBarriers); skipCall |= ValidateBarriers("vkCmdWaitEvents", commandBuffer, memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount, pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) dev_data->device_dispatch_table->CmdWaitEvents(commandBuffer, eventCount, pEvents, sourceStageMask, dstStageMask, memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount, pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdPipelineBarrier(VkCommandBuffer commandBuffer, VkPipelineStageFlags srcStageMask, VkPipelineStageFlags dstStageMask, VkDependencyFlags dependencyFlags, uint32_t memoryBarrierCount, const VkMemoryBarrier *pMemoryBarriers, uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier *pBufferMemoryBarriers, uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier *pImageMemoryBarriers) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); if (pCB) { skipCall |= addCmd(dev_data, pCB, CMD_PIPELINEBARRIER, "vkCmdPipelineBarrier()"); skipCall |= TransitionImageLayouts(commandBuffer, imageMemoryBarrierCount, pImageMemoryBarriers); skipCall |= ValidateBarriers("vkCmdPipelineBarrier", commandBuffer, memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount, pBufferMemoryBarriers, 
imageMemoryBarrierCount, pImageMemoryBarriers);
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdPipelineBarrier(commandBuffer, srcStageMask, dstStageMask, dependencyFlags,
                                                            memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount,
                                                            pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
vkCmdBeginQuery(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t slot, VkFlags flags) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
    if (pCB) {
        QueryObject query = {queryPool, slot};
        pCB->activeQueries.insert(query);
        if (!pCB->startedQueries.count(query)) {
            pCB->startedQueries.insert(query);
        }
        skipCall |= addCmd(dev_data, pCB, CMD_BEGINQUERY, "vkCmdBeginQuery()");
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdBeginQuery(commandBuffer, queryPool, slot, flags);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdEndQuery(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t slot) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
    if (pCB) {
        QueryObject query = {queryPool, slot};
        if (!pCB->activeQueries.count(query)) {
            skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                                DRAWSTATE_INVALID_QUERY, "DS",
                                "Ending a query before it was started: queryPool %" PRIu64 ", index %d", (uint64_t)(queryPool),
                                slot);
        } else {
            pCB->activeQueries.erase(query);
        }
        pCB->queryToStateMap[query] = 1;
        if (pCB->state == CB_RECORDING) {
            skipCall |= addCmd(dev_data, pCB, CMD_ENDQUERY, "vkCmdEndQuery()");
        } else {
            skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdEndQuery()");
        }
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdEndQuery(commandBuffer, queryPool, slot);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
vkCmdResetQueryPool(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
    if (pCB) {
        for (uint32_t i = 0; i < queryCount; i++) {
            QueryObject query = {queryPool, firstQuery + i};
            pCB->waitedEventsBeforeQueryReset[query] = pCB->waitedEvents;
            pCB->queryToStateMap[query] = 0;
        }
        if (pCB->state == CB_RECORDING) {
            skipCall |= addCmd(dev_data, pCB, CMD_RESETQUERYPOOL, "vkCmdResetQueryPool()");
        } else {
            skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdResetQueryPool()");
        }
        skipCall |= insideRenderPass(dev_data, pCB, "vkCmdResetQueryPool");
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdResetQueryPool(commandBuffer, queryPool, firstQuery, queryCount);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
vkCmdCopyQueryPoolResults(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount,
                          VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize stride, VkQueryResultFlags flags) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
#if MTMERGESOURCE
    VkDeviceMemory mem;
    auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
    skipCall |=
        get_mem_binding_from_object(dev_data, commandBuffer, (uint64_t)dstBuffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, &mem);
    if (cb_data != dev_data->commandBufferMap.end()) {
        std::function<VkBool32()> function = [=]() {
            set_memory_valid(dev_data, mem, true);
            return VK_FALSE;
        };
        cb_data->second->validate_functions.push_back(function);
    }
    skipCall |= update_cmd_buf_and_mem_references(dev_data, commandBuffer, mem, "vkCmdCopyQueryPoolResults");
    // Validate that DST buffer has correct usage flags set
    skipCall |= validate_buffer_usage_flags(dev_data, commandBuffer, dstBuffer, VK_BUFFER_USAGE_TRANSFER_DST_BIT, true,
                                            "vkCmdCopyQueryPoolResults()", "VK_BUFFER_USAGE_TRANSFER_DST_BIT");
#endif
    if (pCB) {
        for (uint32_t i = 0; i < queryCount; i++) {
            QueryObject query = {queryPool, firstQuery + i};
            if (!pCB->queryToStateMap[query]) {
                skipCall |=
                    log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                            DRAWSTATE_INVALID_QUERY, "DS",
                            "Requesting a copy from query to buffer with invalid query: queryPool %" PRIu64 ", index %d",
                            (uint64_t)(queryPool), firstQuery + i);
            }
        }
        if (pCB->state == CB_RECORDING) {
            skipCall |= addCmd(dev_data, pCB, CMD_COPYQUERYPOOLRESULTS, "vkCmdCopyQueryPoolResults()");
        } else {
            skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdCopyQueryPoolResults()");
        }
        skipCall |= insideRenderPass(dev_data, pCB, "vkCmdCopyQueryPoolResults");
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdCopyQueryPoolResults(commandBuffer, queryPool, firstQuery, queryCount, dstBuffer,
                                                                 dstOffset, stride, flags);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdPushConstants(VkCommandBuffer commandBuffer, VkPipelineLayout layout,
                                                              VkShaderStageFlags stageFlags, uint32_t offset, uint32_t size,
                                                              const void *pValues) {
    bool skipCall = false;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
    if (pCB) {
        if (pCB->state == CB_RECORDING) {
            skipCall |= addCmd(dev_data, pCB, CMD_PUSHCONSTANTS, "vkCmdPushConstants()");
        } else {
            skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdPushConstants()");
        }
    }
    if ((offset + size) > dev_data->physDevProperties.properties.limits.maxPushConstantsSize) {
        skipCall |= validatePushConstantSize(dev_data, offset, size, "vkCmdPushConstants()");
    }
    // TODO : Add warning if push constant update doesn't align with range
    loader_platform_thread_unlock_mutex(&globalLock);
    if (!skipCall)
        dev_data->device_dispatch_table->CmdPushConstants(commandBuffer, layout, stageFlags, offset, size, pValues);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdWriteTimestamp(VkCommandBuffer commandBuffer,
                                                               VkPipelineStageFlagBits pipelineStage, VkQueryPool queryPool,
                                                               uint32_t slot) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
    if (pCB) {
        QueryObject query = {queryPool, slot};
        pCB->queryToStateMap[query] = 1;
        if (pCB->state == CB_RECORDING) {
            skipCall |= addCmd(dev_data, pCB, CMD_WRITETIMESTAMP, "vkCmdWriteTimestamp()");
        } else {
            skipCall |= report_error_no_cb_begin(dev_data, commandBuffer, "vkCmdWriteTimestamp()");
        }
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdWriteTimestamp(commandBuffer, pipelineStage, queryPool, slot);
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateFramebuffer(VkDevice device, const VkFramebufferCreateInfo *pCreateInfo,
                                                                   const VkAllocationCallbacks *pAllocator,
                                                                   VkFramebuffer *pFramebuffer) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkResult result = dev_data->device_dispatch_table->CreateFramebuffer(device, pCreateInfo, pAllocator, pFramebuffer);
    if (VK_SUCCESS == result) {
        // Shadow create info and store in map
        loader_platform_thread_lock_mutex(&globalLock);

        auto &fbNode = dev_data->frameBufferMap[*pFramebuffer];
        fbNode.createInfo = *pCreateInfo;
        if (pCreateInfo->pAttachments) {
            auto attachments = new VkImageView[pCreateInfo->attachmentCount];
            memcpy(attachments, pCreateInfo->pAttachments, pCreateInfo->attachmentCount * sizeof(VkImageView));
            fbNode.createInfo.pAttachments = attachments;
        }
        for (uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i) {
            VkImageView view = pCreateInfo->pAttachments[i];
            auto view_data = dev_data->imageViewMap.find(view);
            if (view_data == dev_data->imageViewMap.end()) {
                continue;
            }
            MT_FB_ATTACHMENT_INFO fb_info;
            get_mem_binding_from_object(dev_data, device, (uint64_t)(view_data->second.image),
                                        VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &fb_info.mem);
            fb_info.image = view_data->second.image;
            fbNode.attachments.push_back(fb_info);
        }

        loader_platform_thread_unlock_mutex(&globalLock);
    }
    return result;
}

// Search the subpass DAG for a dependency path from "index" back to "dependent".
VkBool32 FindDependency(const int index, const int dependent, const std::vector<DAGNode> &subpass_to_node,
                        std::unordered_set<uint32_t> &processed_nodes) {
    // If we have already checked this node we have not found a dependency path so return false.
    if (processed_nodes.count(index))
        return VK_FALSE;
    processed_nodes.insert(index);
    const DAGNode &node = subpass_to_node[index];
    // Look for a dependency path. If one exists return true else recurse on the previous nodes.
    if (std::find(node.prev.begin(), node.prev.end(), dependent) == node.prev.end()) {
        for (auto elem : node.prev) {
            if (FindDependency(elem, dependent, subpass_to_node, processed_nodes))
                return VK_TRUE;
        }
    } else {
        return VK_TRUE;
    }
    return VK_FALSE;
}

// Verify that "subpass" has an explicit dependency with each subpass in "dependent_subpasses".
VkBool32 CheckDependencyExists(const layer_data *my_data, const int subpass, const std::vector<uint32_t> &dependent_subpasses,
                               const std::vector<DAGNode> &subpass_to_node, VkBool32 &skip_call) {
    VkBool32 result = VK_TRUE;
    // Loop through all subpasses that share the same attachment and make sure a dependency exists
    for (uint32_t k = 0; k < dependent_subpasses.size(); ++k) {
        if (subpass == dependent_subpasses[k])
            continue;
        const DAGNode &node = subpass_to_node[subpass];
        // Check for a specified dependency between the two nodes. If one exists we are done.
        auto prev_elem = std::find(node.prev.begin(), node.prev.end(), dependent_subpasses[k]);
        auto next_elem = std::find(node.next.begin(), node.next.end(), dependent_subpasses[k]);
        if (prev_elem == node.prev.end() && next_elem == node.next.end()) {
            // If no dependency exists an implicit dependency still might. If so, warn and if not throw an error.
            std::unordered_set<uint32_t> processed_nodes;
            if (FindDependency(subpass, dependent_subpasses[k], subpass_to_node, processed_nodes) ||
                FindDependency(dependent_subpasses[k], subpass, subpass_to_node, processed_nodes)) {
                // TODO: Verify against Valid Use section of spec
                skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                     __LINE__, DRAWSTATE_INVALID_RENDERPASS, "DS",
                                     "A dependency between subpasses %d and %d must exist but only an implicit one is specified.",
                                     subpass, dependent_subpasses[k]);
            } else {
                skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                     __LINE__, DRAWSTATE_INVALID_RENDERPASS, "DS",
                                     "A dependency between subpasses %d and %d must exist but one is not specified.", subpass,
                                     dependent_subpasses[k]);
                result = VK_FALSE;
            }
        }
    }
    return result;
}

// Verify that an attachment written by an earlier subpass is listed in pPreserveAttachments of intervening subpasses.
VkBool32 CheckPreserved(const layer_data *my_data, const VkRenderPassCreateInfo *pCreateInfo, const int index,
                        const uint32_t attachment, const std::vector<DAGNode> &subpass_to_node, int depth, VkBool32 &skip_call) {
    const DAGNode &node = subpass_to_node[index];
    // If this node writes to the attachment return true as next nodes need to preserve the attachment.
    const VkSubpassDescription &subpass = pCreateInfo->pSubpasses[index];
    for (uint32_t j = 0; j < subpass.colorAttachmentCount; ++j) {
        if (attachment == subpass.pColorAttachments[j].attachment)
            return VK_TRUE;
    }
    if (subpass.pDepthStencilAttachment && subpass.pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED) {
        if (attachment == subpass.pDepthStencilAttachment->attachment)
            return VK_TRUE;
    }
    VkBool32 result = VK_FALSE;
    // Loop through previous nodes and see if any of them write to the attachment.
    for (auto elem : node.prev) {
        result |= CheckPreserved(my_data, pCreateInfo, elem, attachment, subpass_to_node, depth + 1, skip_call);
    }
    // If the attachment was written to by a previous node then this node needs to preserve it.
    if (result && depth > 0) {
        const VkSubpassDescription &subpass = pCreateInfo->pSubpasses[index];
        VkBool32 has_preserved = VK_FALSE;
        for (uint32_t j = 0; j < subpass.preserveAttachmentCount; ++j) {
            if (subpass.pPreserveAttachments[j] == attachment) {
                has_preserved = VK_TRUE;
                break;
            }
        }
        if (has_preserved == VK_FALSE) {
            skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                                 DRAWSTATE_INVALID_RENDERPASS, "DS",
                                 "Attachment %d is used by a later subpass and must be preserved in subpass %d.", attachment,
                                 index);
        }
    }
    return result;
}

// Ranges are treated as half-open: [offset, offset + size). Two ranges overlap iff each begins before the other ends.
template <class T> bool isRangeOverlapping(T offset1, T size1, T offset2, T size2) {
    return (offset1 < (offset2 + size2)) && (offset2 < (offset1 + size1));
}

bool isRegionOverlapping(VkImageSubresourceRange range1, VkImageSubresourceRange range2) {
    return (isRangeOverlapping(range1.baseMipLevel, range1.levelCount, range2.baseMipLevel, range2.levelCount) &&
            isRangeOverlapping(range1.baseArrayLayer, range1.layerCount, range2.baseArrayLayer, range2.layerCount));
}

VkBool32 ValidateDependencies(const layer_data *my_data, const VkRenderPassBeginInfo *pRenderPassBegin,
                              const std::vector<DAGNode> &subpass_to_node) {
    VkBool32 skip_call = VK_FALSE;
    const VkFramebufferCreateInfo *pFramebufferInfo = &my_data->frameBufferMap.at(pRenderPassBegin->framebuffer).createInfo;
    const VkRenderPassCreateInfo *pCreateInfo = my_data->renderPassMap.at(pRenderPassBegin->renderPass)->pCreateInfo;
    std::vector<std::vector<uint32_t>> output_attachment_to_subpass(pCreateInfo->attachmentCount);
    std::vector<std::vector<uint32_t>> input_attachment_to_subpass(pCreateInfo->attachmentCount);
    std::vector<std::vector<uint32_t>> overlapping_attachments(pCreateInfo->attachmentCount);
    // Find overlapping attachments
    for (uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i) {
        for (uint32_t j = i + 1; j < pCreateInfo->attachmentCount; ++j) {
            VkImageView viewi = pFramebufferInfo->pAttachments[i];
            VkImageView viewj =
                pFramebufferInfo->pAttachments[j];
            if (viewi == viewj) {
                overlapping_attachments[i].push_back(j);
                overlapping_attachments[j].push_back(i);
                continue;
            }
            auto view_data_i = my_data->imageViewMap.find(viewi);
            auto view_data_j = my_data->imageViewMap.find(viewj);
            if (view_data_i == my_data->imageViewMap.end() || view_data_j == my_data->imageViewMap.end()) {
                continue;
            }
            if (view_data_i->second.image == view_data_j->second.image &&
                isRegionOverlapping(view_data_i->second.subresourceRange, view_data_j->second.subresourceRange)) {
                overlapping_attachments[i].push_back(j);
                overlapping_attachments[j].push_back(i);
                continue;
            }
            auto image_data_i = my_data->imageMap.find(view_data_i->second.image);
            auto image_data_j = my_data->imageMap.find(view_data_j->second.image);
            if (image_data_i == my_data->imageMap.end() || image_data_j == my_data->imageMap.end()) {
                continue;
            }
            if (image_data_i->second.mem == image_data_j->second.mem &&
                isRangeOverlapping(image_data_i->second.memOffset, image_data_i->second.memSize, image_data_j->second.memOffset,
                                   image_data_j->second.memSize)) {
                overlapping_attachments[i].push_back(j);
                overlapping_attachments[j].push_back(i);
            }
        }
    }
    for (uint32_t i = 0; i < overlapping_attachments.size(); ++i) {
        uint32_t attachment = i;
        for (auto other_attachment : overlapping_attachments[i]) {
            if (!(pCreateInfo->pAttachments[attachment].flags & VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT)) {
                skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                     __LINE__, DRAWSTATE_INVALID_RENDERPASS, "DS",
                                     "Attachment %d aliases attachment %d but doesn't "
                                     "set VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT.",
                                     attachment, other_attachment);
            }
            if (!(pCreateInfo->pAttachments[other_attachment].flags & VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT)) {
                skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                     __LINE__, DRAWSTATE_INVALID_RENDERPASS, "DS",
                                     "Attachment %d aliases attachment %d but doesn't "
                                     "set VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT.",
                                     other_attachment, attachment);
            }
        }
    }
    // Find for each attachment the subpasses that use them.
    unordered_set<uint32_t> attachmentIndices;
    for (uint32_t i = 0; i < pCreateInfo->subpassCount; ++i) {
        const VkSubpassDescription &subpass = pCreateInfo->pSubpasses[i];
        attachmentIndices.clear();
        for (uint32_t j = 0; j < subpass.inputAttachmentCount; ++j) {
            uint32_t attachment = subpass.pInputAttachments[j].attachment;
            input_attachment_to_subpass[attachment].push_back(i);
            for (auto overlapping_attachment : overlapping_attachments[attachment]) {
                input_attachment_to_subpass[overlapping_attachment].push_back(i);
            }
        }
        for (uint32_t j = 0; j < subpass.colorAttachmentCount; ++j) {
            uint32_t attachment = subpass.pColorAttachments[j].attachment;
            output_attachment_to_subpass[attachment].push_back(i);
            for (auto overlapping_attachment : overlapping_attachments[attachment]) {
                output_attachment_to_subpass[overlapping_attachment].push_back(i);
            }
            attachmentIndices.insert(attachment);
        }
        if (subpass.pDepthStencilAttachment && subpass.pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED) {
            uint32_t attachment = subpass.pDepthStencilAttachment->attachment;
            output_attachment_to_subpass[attachment].push_back(i);
            for (auto overlapping_attachment : overlapping_attachments[attachment]) {
                output_attachment_to_subpass[overlapping_attachment].push_back(i);
            }
            if (attachmentIndices.count(attachment)) {
                skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                     __LINE__, DRAWSTATE_INVALID_RENDERPASS, "DS",
                                     "Cannot use same attachment (%u) as both color and depth output in same subpass (%u).",
                                     attachment, i);
            }
        }
    }
    // If there is a dependency needed make sure one exists
    for (uint32_t i = 0; i < pCreateInfo->subpassCount; ++i) {
        const VkSubpassDescription &subpass = pCreateInfo->pSubpasses[i];
        // If the attachment is an input then all subpasses that output must have a dependency relationship
        for (uint32_t j = 0; j < subpass.inputAttachmentCount; ++j) {
            const uint32_t &attachment = subpass.pInputAttachments[j].attachment;
            CheckDependencyExists(my_data, i, output_attachment_to_subpass[attachment], subpass_to_node, skip_call);
        }
        // If the attachment is an output then all subpasses that use the attachment must have a dependency relationship
        for (uint32_t j = 0; j < subpass.colorAttachmentCount; ++j) {
            const uint32_t &attachment = subpass.pColorAttachments[j].attachment;
            CheckDependencyExists(my_data, i, output_attachment_to_subpass[attachment], subpass_to_node, skip_call);
            CheckDependencyExists(my_data, i, input_attachment_to_subpass[attachment], subpass_to_node, skip_call);
        }
        if (subpass.pDepthStencilAttachment && subpass.pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED) {
            const uint32_t &attachment = subpass.pDepthStencilAttachment->attachment;
            CheckDependencyExists(my_data, i, output_attachment_to_subpass[attachment], subpass_to_node, skip_call);
            CheckDependencyExists(my_data, i, input_attachment_to_subpass[attachment], subpass_to_node, skip_call);
        }
    }
    // Loop through implicit dependencies, if this pass reads make sure the attachment is preserved for all passes after it was
    // written.
    for (uint32_t i = 0; i < pCreateInfo->subpassCount; ++i) {
        const VkSubpassDescription &subpass = pCreateInfo->pSubpasses[i];
        for (uint32_t j = 0; j < subpass.inputAttachmentCount; ++j) {
            CheckPreserved(my_data, pCreateInfo, i, subpass.pInputAttachments[j].attachment, subpass_to_node, 0, skip_call);
        }
    }
    return skip_call;
}

VkBool32 ValidateLayouts(const layer_data *my_data, VkDevice device, const VkRenderPassCreateInfo *pCreateInfo) {
    VkBool32 skip = VK_FALSE;

    for (uint32_t i = 0; i < pCreateInfo->subpassCount; ++i) {
        const VkSubpassDescription &subpass = pCreateInfo->pSubpasses[i];
        for (uint32_t j = 0; j < subpass.inputAttachmentCount; ++j) {
            if (subpass.pInputAttachments[j].layout != VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL &&
                subpass.pInputAttachments[j].layout != VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL) {
                if (subpass.pInputAttachments[j].layout == VK_IMAGE_LAYOUT_GENERAL) {
                    // TODO: Verify Valid Use in spec. I believe this is allowed (valid) but may not be optimal performance
                    skip |= log_msg(my_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT,
                                    (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
                                    "Layout for input attachment is GENERAL but should be READ_ONLY_OPTIMAL.");
                } else {
                    skip |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                    __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
                                    "Layout for input attachment is %s but can only be READ_ONLY_OPTIMAL or GENERAL.",
                                    string_VkImageLayout(subpass.pInputAttachments[j].layout));
                }
            }
        }
        for (uint32_t j = 0; j < subpass.colorAttachmentCount; ++j) {
            if (subpass.pColorAttachments[j].layout != VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL) {
                if (subpass.pColorAttachments[j].layout == VK_IMAGE_LAYOUT_GENERAL) {
                    // TODO: Verify Valid Use in spec.
                    // I believe this is allowed (valid) but may not be optimal performance
                    skip |= log_msg(my_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT,
                                    (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
                                    "Layout for color attachment is GENERAL but should be COLOR_ATTACHMENT_OPTIMAL.");
                } else {
                    skip |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                    __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
                                    "Layout for color attachment is %s but can only be COLOR_ATTACHMENT_OPTIMAL or GENERAL.",
                                    string_VkImageLayout(subpass.pColorAttachments[j].layout));
                }
            }
        }
        if ((subpass.pDepthStencilAttachment != NULL) && (subpass.pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED)) {
            if (subpass.pDepthStencilAttachment->layout != VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL) {
                if (subpass.pDepthStencilAttachment->layout == VK_IMAGE_LAYOUT_GENERAL) {
                    // TODO: Verify Valid Use in spec. I believe this is allowed (valid) but may not be optimal performance
                    skip |= log_msg(my_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT,
                                    (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
                                    "Layout for depth attachment is GENERAL but should be DEPTH_STENCIL_ATTACHMENT_OPTIMAL.");
                } else {
                    skip |= log_msg(
                        my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                        DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
                        "Layout for depth attachment is %s but can only be DEPTH_STENCIL_ATTACHMENT_OPTIMAL or GENERAL.",
                        string_VkImageLayout(subpass.pDepthStencilAttachment->layout));
                }
            }
        }
    }
    return skip;
}

// Build the subpass dependency DAG and note any self-dependencies.
VkBool32 CreatePassDAG(const layer_data *my_data, VkDevice device, const VkRenderPassCreateInfo *pCreateInfo,
                       std::vector<DAGNode> &subpass_to_node, std::vector<bool> &has_self_dependency) {
    VkBool32 skip_call = VK_FALSE;
    for (uint32_t i = 0; i < pCreateInfo->subpassCount; ++i) {
        DAGNode &subpass_node = subpass_to_node[i];
        subpass_node.pass = i;
    }
    for (uint32_t i = 0; i < pCreateInfo->dependencyCount; ++i) {
        const VkSubpassDependency &dependency = pCreateInfo->pDependencies[i];
        if (dependency.srcSubpass > dependency.dstSubpass && dependency.srcSubpass != VK_SUBPASS_EXTERNAL &&
            dependency.dstSubpass != VK_SUBPASS_EXTERNAL) {
            skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                                 DRAWSTATE_INVALID_RENDERPASS, "DS",
                                 "Dependency graph must be specified such that an earlier pass cannot depend on a later pass.");
        } else if (dependency.srcSubpass == VK_SUBPASS_EXTERNAL && dependency.dstSubpass == VK_SUBPASS_EXTERNAL) {
            skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                                 DRAWSTATE_INVALID_RENDERPASS, "DS", "The src and dest subpasses cannot both be external.");
        } else if (dependency.srcSubpass == dependency.dstSubpass) {
            has_self_dependency[dependency.srcSubpass] = true;
        }
        if (dependency.dstSubpass != VK_SUBPASS_EXTERNAL) {
            subpass_to_node[dependency.dstSubpass].prev.push_back(dependency.srcSubpass);
        }
        if (dependency.srcSubpass != VK_SUBPASS_EXTERNAL) {
            subpass_to_node[dependency.srcSubpass].next.push_back(dependency.dstSubpass);
        }
    }
    return skip_call;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateShaderModule(VkDevice device, const VkShaderModuleCreateInfo *pCreateInfo,
                                                                    const VkAllocationCallbacks *pAllocator,
                                                                    VkShaderModule *pShaderModule) {
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkBool32 skip_call = VK_FALSE;
    if (!shader_is_spirv(pCreateInfo)) {
        skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT,
                             /* dev */ 0, __LINE__, SHADER_CHECKER_NON_SPIRV_SHADER, "SC", "Shader is not SPIR-V");
    }

    if (VK_FALSE != skip_call)
        return VK_ERROR_VALIDATION_FAILED_EXT;

    VkResult res = my_data->device_dispatch_table->CreateShaderModule(device, pCreateInfo, pAllocator, pShaderModule);

    if (res == VK_SUCCESS) {
        loader_platform_thread_lock_mutex(&globalLock);
        my_data->shaderModuleMap[*pShaderModule] = unique_ptr<shader_module>(new shader_module(pCreateInfo));
        loader_platform_thread_unlock_mutex(&globalLock);
    }
    return res;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateRenderPass(VkDevice device, const VkRenderPassCreateInfo *pCreateInfo,
                                                                  const VkAllocationCallbacks *pAllocator,
                                                                  VkRenderPass *pRenderPass) {
    VkBool32 skip_call = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
    // Create DAG
    std::vector<bool> has_self_dependency(pCreateInfo->subpassCount);
    std::vector<DAGNode> subpass_to_node(pCreateInfo->subpassCount);
    skip_call |= CreatePassDAG(dev_data, device, pCreateInfo, subpass_to_node, has_self_dependency);
    // Validate
    skip_call |= ValidateLayouts(dev_data, device, pCreateInfo);
    if (VK_FALSE != skip_call) {
        loader_platform_thread_unlock_mutex(&globalLock);
        return VK_ERROR_VALIDATION_FAILED_EXT;
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    VkResult result = dev_data->device_dispatch_table->CreateRenderPass(device, pCreateInfo, pAllocator, pRenderPass);
    if (VK_SUCCESS == result) {
        loader_platform_thread_lock_mutex(&globalLock);
        // TODOSC : Merge in tracking of renderpass from shader_checker
        // Shadow create info and store in map
        VkRenderPassCreateInfo *localRPCI = new VkRenderPassCreateInfo(*pCreateInfo);
        if (pCreateInfo->pAttachments) {
            localRPCI->pAttachments = new VkAttachmentDescription[localRPCI->attachmentCount];
            memcpy((void *)localRPCI->pAttachments, pCreateInfo->pAttachments,
                   localRPCI->attachmentCount * sizeof(VkAttachmentDescription));
        }
        if (pCreateInfo->pSubpasses) {
            localRPCI->pSubpasses = new VkSubpassDescription[localRPCI->subpassCount];
            memcpy((void *)localRPCI->pSubpasses, pCreateInfo->pSubpasses, localRPCI->subpassCount * sizeof(VkSubpassDescription));

            for (uint32_t i = 0; i < localRPCI->subpassCount; i++) {
                VkSubpassDescription *subpass = (VkSubpassDescription *)&localRPCI->pSubpasses[i];
                const uint32_t attachmentCount = subpass->inputAttachmentCount +
                                                 subpass->colorAttachmentCount * (1 + (subpass->pResolveAttachments ? 1 : 0)) +
                                                 ((subpass->pDepthStencilAttachment) ? 1 : 0) + subpass->preserveAttachmentCount;
                VkAttachmentReference *attachments = new VkAttachmentReference[attachmentCount];

                memcpy(attachments, subpass->pInputAttachments, sizeof(attachments[0]) * subpass->inputAttachmentCount);
                subpass->pInputAttachments = attachments;
                attachments += subpass->inputAttachmentCount;

                memcpy(attachments, subpass->pColorAttachments, sizeof(attachments[0]) * subpass->colorAttachmentCount);
                subpass->pColorAttachments = attachments;
                attachments += subpass->colorAttachmentCount;

                if (subpass->pResolveAttachments) {
                    memcpy(attachments, subpass->pResolveAttachments, sizeof(attachments[0]) * subpass->colorAttachmentCount);
                    subpass->pResolveAttachments = attachments;
                    attachments += subpass->colorAttachmentCount;
                }

                if (subpass->pDepthStencilAttachment) {
                    memcpy(attachments, subpass->pDepthStencilAttachment, sizeof(attachments[0]) * 1);
                    subpass->pDepthStencilAttachment = attachments;
                    attachments += 1;
                }

                memcpy(attachments, subpass->pPreserveAttachments, sizeof(attachments[0]) * subpass->preserveAttachmentCount);
                subpass->pPreserveAttachments = &attachments->attachment;
            }
        }
        if (pCreateInfo->pDependencies) {
            localRPCI->pDependencies = new VkSubpassDependency[localRPCI->dependencyCount];
            memcpy((void *)localRPCI->pDependencies, pCreateInfo->pDependencies,
                   localRPCI->dependencyCount * sizeof(VkSubpassDependency));
        }
        dev_data->renderPassMap[*pRenderPass] = new RENDER_PASS_NODE(localRPCI);
        dev_data->renderPassMap[*pRenderPass]->hasSelfDependency = has_self_dependency;
        dev_data->renderPassMap[*pRenderPass]->subpassToNode = subpass_to_node;
#if MTMERGESOURCE
        // MTMTODO : Merge with code from above to eliminate duplication
        for (uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i) {
            VkAttachmentDescription desc = pCreateInfo->pAttachments[i];
            MT_PASS_ATTACHMENT_INFO pass_info;
            pass_info.load_op = desc.loadOp;
            pass_info.store_op = desc.storeOp;
            pass_info.attachment = i;
            dev_data->renderPassMap[*pRenderPass]->attachments.push_back(pass_info);
        }
        // TODO: Maybe fill list and then copy instead of locking
        std::unordered_map<uint32_t, bool> &attachment_first_read = dev_data->renderPassMap[*pRenderPass]->attachment_first_read;
        std::unordered_map<uint32_t, VkImageLayout> &attachment_first_layout =
            dev_data->renderPassMap[*pRenderPass]->attachment_first_layout;
        for (uint32_t i = 0; i < pCreateInfo->subpassCount; ++i) {
            const VkSubpassDescription &subpass = pCreateInfo->pSubpasses[i];
            for (uint32_t j = 0; j < subpass.colorAttachmentCount; ++j) {
                uint32_t attachment = subpass.pColorAttachments[j].attachment;
                if (attachment >= pCreateInfo->attachmentCount) {
                    skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                         __LINE__, DRAWSTATE_INVALID_RENDERPASS, "DS",
                                         "Color attachment %d cannot be greater than the total number of attachments %d.",
                                         attachment, pCreateInfo->attachmentCount);
                    continue;
                }
                if (attachment_first_read.count(attachment))
                    continue;
                attachment_first_read.insert(std::make_pair(attachment, false));
                attachment_first_layout.insert(std::make_pair(attachment, subpass.pColorAttachments[j].layout));
            }
            if (subpass.pDepthStencilAttachment && subpass.pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED) {
                uint32_t attachment = subpass.pDepthStencilAttachment->attachment;
                if (attachment >= pCreateInfo->attachmentCount) {
                    skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                         __LINE__, DRAWSTATE_INVALID_RENDERPASS, "DS",
                                         "Depth stencil attachment %d cannot be greater than the total number of attachments %d.",
                                         attachment, pCreateInfo->attachmentCount);
                    continue;
                }
                if (attachment_first_read.count(attachment))
                    continue;
                attachment_first_read.insert(std::make_pair(attachment, false));
                attachment_first_layout.insert(std::make_pair(attachment, subpass.pDepthStencilAttachment->layout));
            }
            for (uint32_t j = 0; j < subpass.inputAttachmentCount; ++j) {
                uint32_t attachment = subpass.pInputAttachments[j].attachment;
                if (attachment >= pCreateInfo->attachmentCount) {
                    skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                         __LINE__, DRAWSTATE_INVALID_RENDERPASS, "DS",
                                         "Input attachment %d cannot be greater than the total number of attachments %d.",
                                         attachment, pCreateInfo->attachmentCount);
                    continue;
                }
                if (attachment_first_read.count(attachment))
                    continue;
                attachment_first_read.insert(std::make_pair(attachment, true));
                attachment_first_layout.insert(std::make_pair(attachment, subpass.pInputAttachments[j].layout));
            }
        }
#endif
        loader_platform_thread_unlock_mutex(&globalLock);
    }
    return result;
}

// Free the renderpass shadow
static void deleteRenderPasses(layer_data *my_data) {
    if (my_data->renderPassMap.size() <= 0)
        return;
    for (auto ii = my_data->renderPassMap.begin(); ii != my_data->renderPassMap.end(); ++ii) {
        const VkRenderPassCreateInfo *pRenderPassInfo = (*ii).second->pCreateInfo;
        delete[] pRenderPassInfo->pAttachments;
        if (pRenderPassInfo->pSubpasses) {
            for (uint32_t i = 0; i < pRenderPassInfo->subpassCount; ++i) {
                // Attachments are all allocated in a block, so just need to
                // find the first non-null one to delete
                if (pRenderPassInfo->pSubpasses[i].pInputAttachments) {
                    delete[] pRenderPassInfo->pSubpasses[i].pInputAttachments;
                } else if (pRenderPassInfo->pSubpasses[i].pColorAttachments) {
                    delete[] pRenderPassInfo->pSubpasses[i].pColorAttachments;
                } else if (pRenderPassInfo->pSubpasses[i].pResolveAttachments) {
                    delete[] pRenderPassInfo->pSubpasses[i].pResolveAttachments;
                } else if (pRenderPassInfo->pSubpasses[i].pPreserveAttachments) {
                    delete[] pRenderPassInfo->pSubpasses[i].pPreserveAttachments;
                }
            }
            delete[] pRenderPassInfo->pSubpasses;
        }
        delete[] pRenderPassInfo->pDependencies;
        delete pRenderPassInfo;
        delete (*ii).second;
    }
    my_data->renderPassMap.clear();
}

// Verify that attachment layouts at vkCmdBeginRenderPass match the render pass's declared initial layouts.
VkBool32 VerifyFramebufferAndRenderPassLayouts(VkCommandBuffer cmdBuffer, const VkRenderPassBeginInfo *pRenderPassBegin) {
    VkBool32 skip_call = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, cmdBuffer);
    const VkRenderPassCreateInfo *pRenderPassInfo = dev_data->renderPassMap[pRenderPassBegin->renderPass]->pCreateInfo;
    const VkFramebufferCreateInfo framebufferInfo = dev_data->frameBufferMap[pRenderPassBegin->framebuffer].createInfo;
    if (pRenderPassInfo->attachmentCount != framebufferInfo.attachmentCount) {
        skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                             DRAWSTATE_INVALID_RENDERPASS, "DS", "You cannot start a render pass using a framebuffer "
                                                                 "with a different number of attachments.");
    }
    for (uint32_t i = 0; i < pRenderPassInfo->attachmentCount; ++i) {
        const VkImageView &image_view = framebufferInfo.pAttachments[i];
        auto image_data = dev_data->imageViewMap.find(image_view);
        assert(image_data != dev_data->imageViewMap.end());
        const VkImage &image = image_data->second.image;
        const VkImageSubresourceRange &subRange = image_data->second.subresourceRange;
        IMAGE_CMD_BUF_LAYOUT_NODE newNode = {pRenderPassInfo->pAttachments[i].initialLayout,
                                             pRenderPassInfo->pAttachments[i].initialLayout};
        // TODO: Do not iterate over every possibility - consolidate where possible
        for (uint32_t j = 0; j < subRange.levelCount; j++) {
            uint32_t level = subRange.baseMipLevel + j;
            for (uint32_t k = 0; k < subRange.layerCount; k++) {
                uint32_t layer = subRange.baseArrayLayer + k;
                VkImageSubresource sub = {subRange.aspectMask, level, layer};
                IMAGE_CMD_BUF_LAYOUT_NODE node;
                if (!FindLayout(pCB, image, sub, node)) {
                    SetLayout(pCB, image, sub, newNode);
                    continue;
                }
                if (newNode.layout != node.layout) {
                    skip_call |=
                        log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                                DRAWSTATE_INVALID_RENDERPASS, "DS",
                                "You cannot start a render pass using attachment %i "
                                "where the "
                                "initial layout is %s and the layout of the attachment at the "
                                "start of the render pass is %s. The layouts must match.",
                                i, string_VkImageLayout(newNode.layout), string_VkImageLayout(node.layout));
                }
            }
        }
    }
    return skip_call;
}

// Track the layout transitions implied by beginning the given subpass.
void TransitionSubpassLayouts(VkCommandBuffer cmdBuffer, const VkRenderPassBeginInfo *pRenderPassBegin, const int subpass_index) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, cmdBuffer);
    auto render_pass_data = dev_data->renderPassMap.find(pRenderPassBegin->renderPass);
    if (render_pass_data == dev_data->renderPassMap.end()) {
        return;
    }
    const VkRenderPassCreateInfo *pRenderPassInfo = render_pass_data->second->pCreateInfo;
    auto framebuffer_data = dev_data->frameBufferMap.find(pRenderPassBegin->framebuffer);
    if (framebuffer_data == dev_data->frameBufferMap.end()) {
        return;
    }
    const VkFramebufferCreateInfo framebufferInfo = framebuffer_data->second.createInfo;
    const VkSubpassDescription &subpass = pRenderPassInfo->pSubpasses[subpass_index];
    for (uint32_t j = 0; j < subpass.inputAttachmentCount; ++j) {
        const VkImageView &image_view = framebufferInfo.pAttachments[subpass.pInputAttachments[j].attachment];
        SetLayout(dev_data, pCB, image_view, subpass.pInputAttachments[j].layout);
    }
    for (uint32_t j = 0; j < subpass.colorAttachmentCount; ++j) {
        const VkImageView &image_view = framebufferInfo.pAttachments[subpass.pColorAttachments[j].attachment];
        SetLayout(dev_data, pCB, image_view, subpass.pColorAttachments[j].layout);
    }
    if ((subpass.pDepthStencilAttachment != NULL) && (subpass.pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED)) {
        const VkImageView &image_view = framebufferInfo.pAttachments[subpass.pDepthStencilAttachment->attachment];
        SetLayout(dev_data, pCB, image_view, subpass.pDepthStencilAttachment->layout);
    }
}

VkBool32 validatePrimaryCommandBuffer(const layer_data *my_data, const GLOBAL_CB_NODE *pCB, const std::string &cmd_name) {
    VkBool32 skip_call = VK_FALSE;
    if (pCB->createInfo.level != VK_COMMAND_BUFFER_LEVEL_PRIMARY) {
        skip_call |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                             DRAWSTATE_INVALID_COMMAND_BUFFER, "DS", "Cannot execute command %s on a secondary command buffer.",
                             cmd_name.c_str());
    }
    return skip_call;
}

// Track the transitions to each attachment's finalLayout at vkCmdEndRenderPass.
void TransitionFinalSubpassLayouts(VkCommandBuffer cmdBuffer, const VkRenderPassBeginInfo *pRenderPassBegin) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(cmdBuffer), layer_data_map);
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, cmdBuffer);
    auto render_pass_data = dev_data->renderPassMap.find(pRenderPassBegin->renderPass);
    if (render_pass_data == dev_data->renderPassMap.end()) {
        return;
    }
    const VkRenderPassCreateInfo *pRenderPassInfo = render_pass_data->second->pCreateInfo;
    auto framebuffer_data = dev_data->frameBufferMap.find(pRenderPassBegin->framebuffer);
    if (framebuffer_data == dev_data->frameBufferMap.end()) {
        return;
    }
    const VkFramebufferCreateInfo framebufferInfo = framebuffer_data->second.createInfo;
    for (uint32_t i = 0; i < pRenderPassInfo->attachmentCount; ++i) {
        const VkImageView &image_view = framebufferInfo.pAttachments[i];
        SetLayout(dev_data, pCB, image_view, pRenderPassInfo->pAttachments[i].finalLayout);
    }
}

// Check that the requested renderArea lies entirely within the framebuffer's dimensions.
bool VerifyRenderAreaBounds(const layer_data *my_data, const VkRenderPassBeginInfo *pRenderPassBegin) {
    bool skip_call = false;
    const VkFramebufferCreateInfo *pFramebufferInfo = &my_data->frameBufferMap.at(pRenderPassBegin->framebuffer).createInfo;
    if (pRenderPassBegin->renderArea.offset.x < 0 ||
        (pRenderPassBegin->renderArea.offset.x + pRenderPassBegin->renderArea.extent.width) > pFramebufferInfo->width ||
        pRenderPassBegin->renderArea.offset.y < 0 ||
        (pRenderPassBegin->renderArea.offset.y + pRenderPassBegin->renderArea.extent.height) > pFramebufferInfo->height) {
        skip_call |=
static_cast<bool>(log_msg(
            my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
            DRAWSTATE_INVALID_RENDER_AREA, "CORE",
            "Cannot execute a render pass with renderArea not within the bound of the "
            "framebuffer. RenderArea: x %d, y %d, width %d, height %d. Framebuffer: width %d, "
            "height %d.",
            pRenderPassBegin->renderArea.offset.x, pRenderPassBegin->renderArea.offset.y,
            pRenderPassBegin->renderArea.extent.width, pRenderPassBegin->renderArea.extent.height, pFramebufferInfo->width,
            pFramebufferInfo->height));
    }
    return skip_call;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBeginRenderPass(VkCommandBuffer commandBuffer,
                                                                const VkRenderPassBeginInfo *pRenderPassBegin,
                                                                VkSubpassContents contents) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
    if (pCB) {
        if (pRenderPassBegin && pRenderPassBegin->renderPass) {
#if MTMERGE
            auto pass_data = dev_data->renderPassMap.find(pRenderPassBegin->renderPass);
            if (pass_data != dev_data->renderPassMap.end()) {
                RENDER_PASS_NODE *pRPNode = pass_data->second;
                pRPNode->fb = pRenderPassBegin->framebuffer;
                auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
                for (size_t i = 0; i < pRPNode->attachments.size(); ++i) {
                    MT_FB_ATTACHMENT_INFO &fb_info = dev_data->frameBufferMap[pRPNode->fb].attachments[i];
                    if (pRPNode->attachments[i].load_op == VK_ATTACHMENT_LOAD_OP_CLEAR) {
                        if (cb_data != dev_data->commandBufferMap.end()) {
                            std::function<VkBool32()> function = [=]() {
                                set_memory_valid(dev_data, fb_info.mem, true, fb_info.image);
                                return VK_FALSE;
                            };
                            cb_data->second->validate_functions.push_back(function);
                        }
                        VkImageLayout &attachment_layout = pRPNode->attachment_first_layout[pRPNode->attachments[i].attachment];
                        if (attachment_layout == VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL ||
                            attachment_layout == VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL) {
                            skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                                VK_DEBUG_REPORT_OBJECT_TYPE_RENDER_PASS_EXT,
                                                (uint64_t)(pRenderPassBegin->renderPass), __LINE__, MEMTRACK_INVALID_LAYOUT,
                                                "MEM", "Cannot clear attachment %d with invalid first layout %d.",
                                                pRPNode->attachments[i].attachment, attachment_layout);
                        }
                    } else if (pRPNode->attachments[i].load_op == VK_ATTACHMENT_LOAD_OP_DONT_CARE) {
                        if (cb_data != dev_data->commandBufferMap.end()) {
                            std::function<VkBool32()> function = [=]() {
                                set_memory_valid(dev_data, fb_info.mem, false, fb_info.image);
                                return VK_FALSE;
                            };
                            cb_data->second->validate_functions.push_back(function);
                        }
                    } else if (pRPNode->attachments[i].load_op == VK_ATTACHMENT_LOAD_OP_LOAD) {
                        if (cb_data != dev_data->commandBufferMap.end()) {
                            std::function<VkBool32()> function = [=]() {
                                return validate_memory_is_valid(dev_data, fb_info.mem, "vkCmdBeginRenderPass()", fb_info.image);
                            };
                            cb_data->second->validate_functions.push_back(function);
                        }
                    }
                    if (pRPNode->attachment_first_read[pRPNode->attachments[i].attachment]) {
                        if (cb_data != dev_data->commandBufferMap.end()) {
                            std::function<VkBool32()> function = [=]() {
                                return validate_memory_is_valid(dev_data, fb_info.mem, "vkCmdBeginRenderPass()", fb_info.image);
                            };
                            cb_data->second->validate_functions.push_back(function);
                        }
                    }
                }
            }
#endif
            skipCall |= static_cast<VkBool32>(VerifyRenderAreaBounds(dev_data, pRenderPassBegin));
            skipCall |= VerifyFramebufferAndRenderPassLayouts(commandBuffer, pRenderPassBegin);
            auto render_pass_data = dev_data->renderPassMap.find(pRenderPassBegin->renderPass);
            if (render_pass_data != dev_data->renderPassMap.end()) {
                skipCall |= ValidateDependencies(dev_data, pRenderPassBegin, render_pass_data->second->subpassToNode);
            }
            skipCall |= insideRenderPass(dev_data, pCB, "vkCmdBeginRenderPass");
            skipCall |= validatePrimaryCommandBuffer(dev_data, pCB, "vkCmdBeginRenderPass");
            skipCall |= addCmd(dev_data, pCB, CMD_BEGINRENDERPASS, "vkCmdBeginRenderPass()");
            pCB->activeRenderPass =
pRenderPassBegin->renderPass; // This is a shallow copy as that is all that is needed for now pCB->activeRenderPassBeginInfo = *pRenderPassBegin; pCB->activeSubpass = 0; pCB->activeSubpassContents = contents; pCB->framebuffer = pRenderPassBegin->framebuffer; // Connect this framebuffer to this cmdBuffer dev_data->frameBufferMap[pCB->framebuffer].referencingCmdBuffers.insert(pCB->commandBuffer); } else { skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_RENDERPASS, "DS", "You cannot use a NULL RenderPass object in vkCmdBeginRenderPass()"); } } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) { dev_data->device_dispatch_table->CmdBeginRenderPass(commandBuffer, pRenderPassBegin, contents); loader_platform_thread_lock_mutex(&globalLock); // This is a shallow copy as that is all that is needed for now dev_data->renderPassBeginInfo = *pRenderPassBegin; dev_data->currentSubpass = 0; loader_platform_thread_unlock_mutex(&globalLock); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdNextSubpass(VkCommandBuffer commandBuffer, VkSubpassContents contents) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer); TransitionSubpassLayouts(commandBuffer, &dev_data->renderPassBeginInfo, ++dev_data->currentSubpass); if (pCB) { skipCall |= validatePrimaryCommandBuffer(dev_data, pCB, "vkCmdNextSubpass"); skipCall |= addCmd(dev_data, pCB, CMD_NEXTSUBPASS, "vkCmdNextSubpass()"); pCB->activeSubpass++; pCB->activeSubpassContents = contents; TransitionSubpassLayouts(commandBuffer, &pCB->activeRenderPassBeginInfo, pCB->activeSubpass); if (pCB->lastBound[VK_PIPELINE_BIND_POINT_GRAPHICS].pipeline) { skipCall |= validatePipelineState(dev_data, pCB, VK_PIPELINE_BIND_POINT_GRAPHICS, 
pCB->lastBound[VK_PIPELINE_BIND_POINT_GRAPHICS].pipeline);
        }
        skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdNextSubpass");
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdNextSubpass(commandBuffer, contents);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdEndRenderPass(VkCommandBuffer commandBuffer) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
#if MTMERGESOURCE
    auto cb_data = dev_data->commandBufferMap.find(commandBuffer);
    if (cb_data != dev_data->commandBufferMap.end()) {
        auto pass_data = dev_data->renderPassMap.find(cb_data->second->activeRenderPass);
        if (pass_data != dev_data->renderPassMap.end()) {
            RENDER_PASS_NODE *pRPNode = pass_data->second;
            for (size_t i = 0; i < pRPNode->attachments.size(); ++i) {
                MT_FB_ATTACHMENT_INFO &fb_info = dev_data->frameBufferMap[pRPNode->fb].attachments[i];
                if (pRPNode->attachments[i].store_op == VK_ATTACHMENT_STORE_OP_STORE) {
                    if (cb_data != dev_data->commandBufferMap.end()) {
                        std::function<VkBool32()> function = [=]() {
                            set_memory_valid(dev_data, fb_info.mem, true, fb_info.image);
                            return VK_FALSE;
                        };
                        cb_data->second->validate_functions.push_back(function);
                    }
                } else if (pRPNode->attachments[i].store_op == VK_ATTACHMENT_STORE_OP_DONT_CARE) {
                    if (cb_data != dev_data->commandBufferMap.end()) {
                        std::function<VkBool32()> function = [=]() {
                            set_memory_valid(dev_data, fb_info.mem, false, fb_info.image);
                            return VK_FALSE;
                        };
                        cb_data->second->validate_functions.push_back(function);
                    }
                }
            }
        }
    }
#endif
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
    TransitionFinalSubpassLayouts(commandBuffer, &dev_data->renderPassBeginInfo);
    if (pCB) {
        skipCall |= outsideRenderPass(dev_data, pCB, "vkCmdEndRenderPass");
        skipCall |= validatePrimaryCommandBuffer(dev_data, pCB, "vkCmdEndRenderPass");
        skipCall |= addCmd(dev_data, pCB, CMD_ENDRENDERPASS, "vkCmdEndRenderPass()");
TransitionFinalSubpassLayouts(commandBuffer, &pCB->activeRenderPassBeginInfo);
        pCB->activeRenderPass = 0;
        pCB->activeSubpass = 0;
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdEndRenderPass(commandBuffer);
}

bool logInvalidAttachmentMessage(layer_data *dev_data, VkCommandBuffer secondaryBuffer, VkRenderPass secondaryPass,
                                 VkRenderPass primaryPass, uint32_t primaryAttach, uint32_t secondaryAttach, const char *msg) {
    return log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                   DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
                   "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p which has a render pass %" PRIx64
                   " that is not compatible with the current render pass %" PRIx64 ". "
                   "Attachment %" PRIu32 " is not compatible with %" PRIu32 ". %s",
                   (void *)secondaryBuffer, (uint64_t)(secondaryPass), (uint64_t)(primaryPass), primaryAttach, secondaryAttach,
                   msg);
}

bool validateAttachmentCompatibility(layer_data *dev_data, VkCommandBuffer primaryBuffer, VkRenderPass primaryPass,
                                     uint32_t primaryAttach, VkCommandBuffer secondaryBuffer, VkRenderPass secondaryPass,
                                     uint32_t secondaryAttach, bool is_multi) {
    bool skip_call = false;
    auto primary_data = dev_data->renderPassMap.find(primaryPass);
    auto secondary_data = dev_data->renderPassMap.find(secondaryPass);
    if (primary_data->second->pCreateInfo->attachmentCount <= primaryAttach) {
        primaryAttach = VK_ATTACHMENT_UNUSED;
    }
    if (secondary_data->second->pCreateInfo->attachmentCount <= secondaryAttach) {
        secondaryAttach = VK_ATTACHMENT_UNUSED;
    }
    if (primaryAttach == VK_ATTACHMENT_UNUSED && secondaryAttach == VK_ATTACHMENT_UNUSED) {
        return skip_call;
    }
    if (primaryAttach == VK_ATTACHMENT_UNUSED) {
        skip_call |= logInvalidAttachmentMessage(dev_data, secondaryBuffer, secondaryPass, primaryPass, primaryAttach,
                                                secondaryAttach, "The first is unused while the second is not.");
        return skip_call;
    }
    if (secondaryAttach ==
VK_ATTACHMENT_UNUSED) { skip_call |= logInvalidAttachmentMessage(dev_data, secondaryBuffer, secondaryPass, primaryPass, primaryAttach, secondaryAttach, "The second is unused while the first is not."); return skip_call; } if (primary_data->second->pCreateInfo->pAttachments[primaryAttach].format != secondary_data->second->pCreateInfo->pAttachments[secondaryAttach].format) { skip_call |= logInvalidAttachmentMessage(dev_data, secondaryBuffer, secondaryPass, primaryPass, primaryAttach, secondaryAttach, "They have different formats."); } if (primary_data->second->pCreateInfo->pAttachments[primaryAttach].samples != secondary_data->second->pCreateInfo->pAttachments[secondaryAttach].samples) { skip_call |= logInvalidAttachmentMessage(dev_data, secondaryBuffer, secondaryPass, primaryPass, primaryAttach, secondaryAttach, "They have different samples."); } if (is_multi && primary_data->second->pCreateInfo->pAttachments[primaryAttach].flags != secondary_data->second->pCreateInfo->pAttachments[secondaryAttach].flags) { skip_call |= logInvalidAttachmentMessage(dev_data, secondaryBuffer, secondaryPass, primaryPass, primaryAttach, secondaryAttach, "They have different flags."); } return skip_call; } bool validateSubpassCompatibility(layer_data *dev_data, VkCommandBuffer primaryBuffer, VkRenderPass primaryPass, VkCommandBuffer secondaryBuffer, VkRenderPass secondaryPass, const int subpass, bool is_multi) { bool skip_call = false; auto primary_data = dev_data->renderPassMap.find(primaryPass); auto secondary_data = dev_data->renderPassMap.find(secondaryPass); const VkSubpassDescription &primary_desc = primary_data->second->pCreateInfo->pSubpasses[subpass]; const VkSubpassDescription &secondary_desc = secondary_data->second->pCreateInfo->pSubpasses[subpass]; uint32_t maxInputAttachmentCount = std::max(primary_desc.inputAttachmentCount, secondary_desc.inputAttachmentCount); for (uint32_t i = 0; i < maxInputAttachmentCount; ++i) { uint32_t primary_input_attach = VK_ATTACHMENT_UNUSED, 
secondary_input_attach = VK_ATTACHMENT_UNUSED; if (i < primary_desc.inputAttachmentCount) { primary_input_attach = primary_desc.pInputAttachments[i].attachment; } if (i < secondary_desc.inputAttachmentCount) { secondary_input_attach = secondary_desc.pInputAttachments[i].attachment; } skip_call |= validateAttachmentCompatibility(dev_data, primaryBuffer, primaryPass, primary_input_attach, secondaryBuffer, secondaryPass, secondary_input_attach, is_multi); } uint32_t maxColorAttachmentCount = std::max(primary_desc.colorAttachmentCount, secondary_desc.colorAttachmentCount); for (uint32_t i = 0; i < maxColorAttachmentCount; ++i) { uint32_t primary_color_attach = VK_ATTACHMENT_UNUSED, secondary_color_attach = VK_ATTACHMENT_UNUSED; if (i < primary_desc.colorAttachmentCount) { primary_color_attach = primary_desc.pColorAttachments[i].attachment; } if (i < secondary_desc.colorAttachmentCount) { secondary_color_attach = secondary_desc.pColorAttachments[i].attachment; } skip_call |= validateAttachmentCompatibility(dev_data, primaryBuffer, primaryPass, primary_color_attach, secondaryBuffer, secondaryPass, secondary_color_attach, is_multi); uint32_t primary_resolve_attach = VK_ATTACHMENT_UNUSED, secondary_resolve_attach = VK_ATTACHMENT_UNUSED; if (i < primary_desc.colorAttachmentCount && primary_desc.pResolveAttachments) { primary_resolve_attach = primary_desc.pResolveAttachments[i].attachment; } if (i < secondary_desc.colorAttachmentCount && secondary_desc.pResolveAttachments) { secondary_resolve_attach = secondary_desc.pResolveAttachments[i].attachment; } skip_call |= validateAttachmentCompatibility(dev_data, primaryBuffer, primaryPass, primary_resolve_attach, secondaryBuffer, secondaryPass, secondary_resolve_attach, is_multi); } uint32_t primary_depthstencil_attach = VK_ATTACHMENT_UNUSED, secondary_depthstencil_attach = VK_ATTACHMENT_UNUSED; if (primary_desc.pDepthStencilAttachment) { primary_depthstencil_attach = primary_desc.pDepthStencilAttachment[0].attachment; } if 
(secondary_desc.pDepthStencilAttachment) { secondary_depthstencil_attach = secondary_desc.pDepthStencilAttachment[0].attachment; } skip_call |= validateAttachmentCompatibility(dev_data, primaryBuffer, primaryPass, primary_depthstencil_attach, secondaryBuffer, secondaryPass, secondary_depthstencil_attach, is_multi); return skip_call; } bool validateRenderPassCompatibility(layer_data *dev_data, VkCommandBuffer primaryBuffer, VkRenderPass primaryPass, VkCommandBuffer secondaryBuffer, VkRenderPass secondaryPass) { bool skip_call = false; // Early exit if renderPass objects are identical (and therefore compatible) if (primaryPass == secondaryPass) return skip_call; auto primary_data = dev_data->renderPassMap.find(primaryPass); auto secondary_data = dev_data->renderPassMap.find(secondaryPass); if (primary_data == dev_data->renderPassMap.end() || primary_data->second == nullptr) { skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS", "vkCmdExecuteCommands() called w/ invalid current Cmd Buffer %p which has invalid render pass %" PRIx64 ".", (void *)primaryBuffer, (uint64_t)(primaryPass)); return skip_call; } if (secondary_data == dev_data->renderPassMap.end() || secondary_data->second == nullptr) { skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS", "vkCmdExecuteCommands() called w/ invalid secondary Cmd Buffer %p which has invalid render pass %" PRIx64 ".", (void *)secondaryBuffer, (uint64_t)(secondaryPass)); return skip_call; } if (primary_data->second->pCreateInfo->subpassCount != secondary_data->second->pCreateInfo->subpassCount) { skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS", "vkCmdExecuteCommands() called w/ invalid 
Cmd Buffer %p which has a render pass %" PRIx64 " that is not compatible with the current render pass %" PRIx64 "." "They have a different number of subpasses.", (void *)secondaryBuffer, (uint64_t)(secondaryPass), (uint64_t)(primaryPass)); return skip_call; } bool is_multi = primary_data->second->pCreateInfo->subpassCount > 1; for (uint32_t i = 0; i < primary_data->second->pCreateInfo->subpassCount; ++i) { skip_call |= validateSubpassCompatibility(dev_data, primaryBuffer, primaryPass, secondaryBuffer, secondaryPass, i, is_multi); } return skip_call; } bool validateFramebuffer(layer_data *dev_data, VkCommandBuffer primaryBuffer, const GLOBAL_CB_NODE *pCB, VkCommandBuffer secondaryBuffer, const GLOBAL_CB_NODE *pSubCB) { bool skip_call = false; if (!pSubCB->beginInfo.pInheritanceInfo) { return skip_call; } VkFramebuffer primary_fb = pCB->framebuffer; VkFramebuffer secondary_fb = pSubCB->beginInfo.pInheritanceInfo->framebuffer; if (secondary_fb != VK_NULL_HANDLE) { if (primary_fb != secondary_fb) { skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS", "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p which has a framebuffer %" PRIx64 " that is not compatible with the current framebuffer %" PRIx64 ".", (void *)secondaryBuffer, (uint64_t)(secondary_fb), (uint64_t)(primary_fb)); } auto fb_data = dev_data->frameBufferMap.find(secondary_fb); if (fb_data == dev_data->frameBufferMap.end()) { skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS", "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p " "which has invalid framebuffer %" PRIx64 ".", (void *)secondaryBuffer, (uint64_t)(secondary_fb)); return skip_call; } skip_call |= validateRenderPassCompatibility(dev_data, secondaryBuffer, fb_data->second.createInfo.renderPass, secondaryBuffer, 
pSubCB->beginInfo.pInheritanceInfo->renderPass);
    }
    return skip_call;
}

bool validateSecondaryCommandBufferState(layer_data *dev_data, GLOBAL_CB_NODE *pCB, GLOBAL_CB_NODE *pSubCB) {
    bool skipCall = false;
    unordered_set<VkQueryType> activeTypes;
    for (auto queryObject : pCB->activeQueries) {
        auto queryPoolData = dev_data->queryPoolMap.find(queryObject.pool);
        if (queryPoolData != dev_data->queryPoolMap.end()) {
            if (queryPoolData->second.createInfo.queryType == VK_QUERY_TYPE_PIPELINE_STATISTICS &&
                pSubCB->beginInfo.pInheritanceInfo) {
                VkQueryPipelineStatisticFlags cmdBufStatistics = pSubCB->beginInfo.pInheritanceInfo->pipelineStatistics;
                if ((cmdBufStatistics & queryPoolData->second.createInfo.pipelineStatistics) != cmdBufStatistics) {
                    skipCall |= log_msg(
                        dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                        DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
                        "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p "
                        "which has invalid active query pool %" PRIx64 ". Pipeline statistics is being queried so the command "
                        "buffer must have all bits set on the queryPool.",
                        reinterpret_cast<void *>(pCB->commandBuffer), reinterpret_cast<uint64_t>(queryPoolData->first));
                }
            }
            activeTypes.insert(queryPoolData->second.createInfo.queryType);
        }
    }
    for (auto queryObject : pSubCB->startedQueries) {
        auto queryPoolData = dev_data->queryPoolMap.find(queryObject.pool);
        if (queryPoolData != dev_data->queryPoolMap.end() && activeTypes.count(queryPoolData->second.createInfo.queryType)) {
            skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                                DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
                                "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p "
                                "which has invalid active query pool %" PRIx64
                                " of type %d but a query of that type has been started on "
                                "secondary Cmd Buffer %p.",
                                reinterpret_cast<void *>(pCB->commandBuffer), reinterpret_cast<uint64_t>(queryPoolData->first),
                                queryPoolData->second.createInfo.queryType, reinterpret_cast<void *>(pSubCB->commandBuffer));
        }
    }
    return skipCall;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdExecuteCommands(VkCommandBuffer commandBuffer, uint32_t commandBuffersCount,
                                                                const VkCommandBuffer *pCommandBuffers) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
    GLOBAL_CB_NODE *pCB = getCBNode(dev_data, commandBuffer);
    if (pCB) {
        GLOBAL_CB_NODE *pSubCB = NULL;
        for (uint32_t i = 0; i < commandBuffersCount; i++) {
            pSubCB = getCBNode(dev_data, pCommandBuffers[i]);
            if (!pSubCB) {
                skipCall |=
                    log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__,
                            DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS",
                            "vkCmdExecuteCommands() called w/ invalid Cmd Buffer %p in element %u of pCommandBuffers array.",
                            (void *)pCommandBuffers[i], i);
            } else if (VK_COMMAND_BUFFER_LEVEL_PRIMARY == pSubCB->createInfo.level) {
                skipCall |=
log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, "DS", "vkCmdExecuteCommands() called w/ Primary Cmd Buffer %p in element %u of pCommandBuffers " "array. All cmd buffers in pCommandBuffers array must be secondary.", (void *)pCommandBuffers[i], i); } else if (pCB->activeRenderPass) { // Secondary CB w/i RenderPass must have *CONTINUE_BIT set if (!(pSubCB->beginInfo.flags & VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT)) { skipCall |= log_msg( dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)pCommandBuffers[i], __LINE__, DRAWSTATE_BEGIN_CB_INVALID_STATE, "DS", "vkCmdExecuteCommands(): Secondary Command Buffer (%p) executed within render pass (%#" PRIxLEAST64 ") must have had vkBeginCommandBuffer() called w/ VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT set.", (void *)pCommandBuffers[i], (uint64_t)pCB->activeRenderPass); } else { // Make sure render pass is compatible with parent command buffer pass if has continue skipCall |= validateRenderPassCompatibility(dev_data, commandBuffer, pCB->activeRenderPass, pCommandBuffers[i], pSubCB->beginInfo.pInheritanceInfo->renderPass); skipCall |= validateFramebuffer(dev_data, commandBuffer, pCB, pCommandBuffers[i], pSubCB); } string errorString = ""; if (!verify_renderpass_compatibility(dev_data, pCB->activeRenderPass, pSubCB->beginInfo.pInheritanceInfo->renderPass, errorString)) { skipCall |= log_msg( dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)pCommandBuffers[i], __LINE__, DRAWSTATE_RENDERPASS_INCOMPATIBLE, "DS", "vkCmdExecuteCommands(): Secondary Command Buffer (%p) w/ render pass (%#" PRIxLEAST64 ") is incompatible w/ primary command buffer (%p) w/ render pass (%#" PRIxLEAST64 ") due to: %s", (void *)pCommandBuffers[i], (uint64_t)pSubCB->beginInfo.pInheritanceInfo->renderPass, (void 
*)commandBuffer, (uint64_t)pCB->activeRenderPass, errorString.c_str()); } // If framebuffer for secondary CB is not NULL, then it must match FB from vkCmdBeginRenderPass() // that this CB will be executed in AND framebuffer must have been created w/ RP compatible w/ renderpass if (pSubCB->beginInfo.pInheritanceInfo->framebuffer) { if (pSubCB->beginInfo.pInheritanceInfo->framebuffer != pCB->activeRenderPassBeginInfo.framebuffer) { skipCall |= log_msg( dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)pCommandBuffers[i], __LINE__, DRAWSTATE_FRAMEBUFFER_INCOMPATIBLE, "DS", "vkCmdExecuteCommands(): Secondary Command Buffer (%p) references framebuffer (%#" PRIxLEAST64 ") that does not match framebuffer (%#" PRIxLEAST64 ") in active renderpass (%#" PRIxLEAST64 ").", (void *)pCommandBuffers[i], (uint64_t)pSubCB->beginInfo.pInheritanceInfo->framebuffer, (uint64_t)pCB->activeRenderPassBeginInfo.framebuffer, (uint64_t)pCB->activeRenderPass); } } } // TODO(mlentine): Move more logic into this method skipCall |= validateSecondaryCommandBufferState(dev_data, pCB, pSubCB); skipCall |= validateCommandBufferState(dev_data, pSubCB); // Secondary cmdBuffers are considered pending execution starting w/ // being recorded if (!(pSubCB->beginInfo.flags & VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT)) { if (dev_data->globalInFlightCmdBuffers.find(pSubCB->commandBuffer) != dev_data->globalInFlightCmdBuffers.end()) { skipCall |= log_msg( dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)(pCB->commandBuffer), __LINE__, DRAWSTATE_INVALID_CB_SIMULTANEOUS_USE, "DS", "Attempt to simultaneously execute CB %#" PRIxLEAST64 " w/o VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT " "set!", (uint64_t)(pCB->commandBuffer)); } if (pCB->beginInfo.flags & VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT) { // Warn that non-simultaneous secondary cmd buffer renders primary non-simultaneous 
skipCall |= log_msg(
                        dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                        (uint64_t)(pCommandBuffers[i]), __LINE__, DRAWSTATE_INVALID_CB_SIMULTANEOUS_USE, "DS",
                        "vkCmdExecuteCommands(): Secondary Command Buffer (%#" PRIxLEAST64
                        ") does not have VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT set and will cause primary command buffer "
                        "(%#" PRIxLEAST64 ") to be treated as if it does not have VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT "
                        "set, even though it does.",
                        (uint64_t)(pCommandBuffers[i]), (uint64_t)(pCB->commandBuffer));
                    pCB->beginInfo.flags &= ~VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT;
                }
            }
            if (!pCB->activeQueries.empty() && !dev_data->physDevProperties.features.inheritedQueries) {
                skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                                    reinterpret_cast<uint64_t>(pCommandBuffers[i]), __LINE__, DRAWSTATE_INVALID_COMMAND_BUFFER,
                                    "DS", "vkCmdExecuteCommands(): Secondary Command Buffer "
                                          "(%#" PRIxLEAST64 ") cannot be submitted with a query in "
                                          "flight and inherited queries not "
                                          "supported on this device.",
                                    reinterpret_cast<uint64_t>(pCommandBuffers[i]));
            }
            pSubCB->primaryCommandBuffer = pCB->commandBuffer;
            pCB->secondaryCommandBuffers.insert(pSubCB->commandBuffer);
            dev_data->globalInFlightCmdBuffers.insert(pSubCB->commandBuffer);
        }
        skipCall |= validatePrimaryCommandBuffer(dev_data, pCB, "vkCmdExecuteCommands");
        skipCall |= addCmd(dev_data, pCB, CMD_EXECUTECOMMANDS, "vkCmdExecuteCommands()");
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skipCall)
        dev_data->device_dispatch_table->CmdExecuteCommands(commandBuffer, commandBuffersCount, pCommandBuffers);
}

VkBool32 ValidateMapImageLayouts(VkDevice device, VkDeviceMemory mem) {
    VkBool32 skip_call = VK_FALSE;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    auto mem_data = dev_data->memObjMap.find(mem);
    if ((mem_data != dev_data->memObjMap.end()) && (mem_data->second.image != VK_NULL_HANDLE)) {
        std::vector<VkImageLayout> layouts;
        if (FindLayouts(dev_data, mem_data->second.image, layouts)) {
            for (auto layout : layouts) {
                if (layout != VK_IMAGE_LAYOUT_PREINITIALIZED && layout != VK_IMAGE_LAYOUT_GENERAL) {
                    skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                         __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS",
                                         "Cannot map an image with layout %s. Only "
                                         "GENERAL or PREINITIALIZED are supported.",
                                         string_VkImageLayout(layout));
                }
            }
        }
    }
    return skip_call;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkMapMemory(VkDevice device, VkDeviceMemory mem, VkDeviceSize offset,
                                                           VkDeviceSize size, VkFlags flags, void **ppData) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkBool32 skip_call = VK_FALSE;
    VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
    loader_platform_thread_lock_mutex(&globalLock);
#if MTMERGESOURCE
    DEVICE_MEM_INFO *pMemObj = get_mem_obj_info(dev_data, mem);
    if (pMemObj) {
        pMemObj->valid = true;
        if ((memProps.memoryTypes[pMemObj->allocInfo.memoryTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0) {
            skip_call =
                log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT,
                        (uint64_t)mem, __LINE__, MEMTRACK_INVALID_STATE, "MEM",
                        "Mapping Memory without VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT set: mem obj %#" PRIxLEAST64, (uint64_t)mem);
        }
    }
    skip_call |= validateMemRange(dev_data, mem, offset, size);
    storeMemRanges(dev_data, mem, offset, size);
#endif
    skip_call |= ValidateMapImageLayouts(device, mem);
    loader_platform_thread_unlock_mutex(&globalLock);
    if (VK_FALSE == skip_call) {
        result = dev_data->device_dispatch_table->MapMemory(device, mem, offset, size, flags, ppData);
#if MTMERGESOURCE
        loader_platform_thread_lock_mutex(&globalLock);
        initializeAndTrackMemory(dev_data, mem, size, ppData);
        loader_platform_thread_unlock_mutex(&globalLock);
#endif
    }
    return result;
}
#if
MTMERGESOURCE VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkUnmapMemory(VkDevice device, VkDeviceMemory mem) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); VkBool32 skipCall = VK_FALSE; loader_platform_thread_lock_mutex(&globalLock); skipCall |= deleteMemRanges(my_data, mem); loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) { my_data->device_dispatch_table->UnmapMemory(device, mem); } } VkBool32 validateMemoryIsMapped(layer_data *my_data, const char *funcName, uint32_t memRangeCount, const VkMappedMemoryRange *pMemRanges) { VkBool32 skipCall = VK_FALSE; for (uint32_t i = 0; i < memRangeCount; ++i) { auto mem_element = my_data->memObjMap.find(pMemRanges[i].memory); if (mem_element != my_data->memObjMap.end()) { if (mem_element->second.memRange.offset > pMemRanges[i].offset) { skipCall |= log_msg( my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)pMemRanges[i].memory, __LINE__, MEMTRACK_INVALID_MAP, "MEM", "%s: Flush/Invalidate offset (" PRINTF_SIZE_T_SPECIFIER ") is less than Memory Object's offset " "(" PRINTF_SIZE_T_SPECIFIER ").", funcName, static_cast<size_t>(pMemRanges[i].offset), static_cast<size_t>(mem_element->second.memRange.offset)); } if ((mem_element->second.memRange.size != VK_WHOLE_SIZE) && ((mem_element->second.memRange.offset + mem_element->second.memRange.size) < (pMemRanges[i].offset + pMemRanges[i].size))) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)pMemRanges[i].memory, __LINE__, MEMTRACK_INVALID_MAP, "MEM", "%s: Flush/Invalidate upper-bound (" PRINTF_SIZE_T_SPECIFIER ") exceeds the Memory Object's upper-bound " "(" PRINTF_SIZE_T_SPECIFIER ").", funcName, static_cast<size_t>(pMemRanges[i].offset + pMemRanges[i].size), static_cast<size_t>(mem_element->second.memRange.offset + mem_element->second.memRange.size)); } } } return skipCall; } VkBool32
validateAndCopyNoncoherentMemoryToDriver(layer_data *my_data, uint32_t memRangeCount, const VkMappedMemoryRange *pMemRanges) { VkBool32 skipCall = VK_FALSE; for (uint32_t i = 0; i < memRangeCount; ++i) { auto mem_element = my_data->memObjMap.find(pMemRanges[i].memory); if (mem_element != my_data->memObjMap.end()) { if (mem_element->second.pData) { VkDeviceSize size = mem_element->second.memRange.size; VkDeviceSize half_size = (size / 2); char *data = static_cast<char *>(mem_element->second.pData); for (auto j = 0; j < half_size; ++j) { if (data[j] != NoncoherentMemoryFillValue) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)pMemRanges[i].memory, __LINE__, MEMTRACK_INVALID_MAP, "MEM", "Memory overflow was detected on mem obj %" PRIxLEAST64, (uint64_t)pMemRanges[i].memory); } } for (auto j = size + half_size; j < 2 * size; ++j) { if (data[j] != NoncoherentMemoryFillValue) { skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, (uint64_t)pMemRanges[i].memory, __LINE__, MEMTRACK_INVALID_MAP, "MEM", "Memory overflow was detected on mem obj %" PRIxLEAST64, (uint64_t)pMemRanges[i].memory); } } memcpy(mem_element->second.pDriverData, static_cast<void *>(data + (size_t)(half_size)), (size_t)(size)); } } } return skipCall; } VK_LAYER_EXPORT VkResult VKAPI_CALL vkFlushMappedMemoryRanges(VkDevice device, uint32_t memRangeCount, const VkMappedMemoryRange *pMemRanges) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); skipCall |= validateAndCopyNoncoherentMemoryToDriver(my_data, memRangeCount, pMemRanges); skipCall |= validateMemoryIsMapped(my_data, "vkFlushMappedMemoryRanges", memRangeCount, pMemRanges); loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) { result
= my_data->device_dispatch_table->FlushMappedMemoryRanges(device, memRangeCount, pMemRanges); } return result; } VK_LAYER_EXPORT VkResult VKAPI_CALL vkInvalidateMappedMemoryRanges(VkDevice device, uint32_t memRangeCount, const VkMappedMemoryRange *pMemRanges) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); skipCall |= validateMemoryIsMapped(my_data, "vkInvalidateMappedMemoryRanges", memRangeCount, pMemRanges); loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) { result = my_data->device_dispatch_table->InvalidateMappedMemoryRanges(device, memRangeCount, pMemRanges); } return result; } #endif VKAPI_ATTR VkResult VKAPI_CALL vkBindImageMemory(VkDevice device, VkImage image, VkDeviceMemory mem, VkDeviceSize memoryOffset) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; #if MTMERGESOURCE loader_platform_thread_lock_mutex(&globalLock); // Track objects tied to memory uint64_t image_handle = (uint64_t)(image); skipCall = set_mem_binding(dev_data, device, mem, image_handle, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, "vkBindImageMemory"); add_object_binding_info(dev_data, image_handle, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, mem); { VkMemoryRequirements memRequirements; vkGetImageMemoryRequirements(device, image, &memRequirements); skipCall |= validate_buffer_image_aliasing(dev_data, image_handle, mem, memoryOffset, memRequirements, dev_data->memObjMap[mem].imageRanges, dev_data->memObjMap[mem].bufferRanges, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT); } print_mem_list(dev_data, device); loader_platform_thread_unlock_mutex(&globalLock); #endif if (VK_FALSE == skipCall) { result = dev_data->device_dispatch_table->BindImageMemory(device, image, mem, memoryOffset); VkMemoryRequirements 
memRequirements; dev_data->device_dispatch_table->GetImageMemoryRequirements(device, image, &memRequirements); loader_platform_thread_lock_mutex(&globalLock); dev_data->memObjMap[mem].image = image; dev_data->imageMap[image].mem = mem; dev_data->imageMap[image].memOffset = memoryOffset; dev_data->imageMap[image].memSize = memRequirements.size; loader_platform_thread_unlock_mutex(&globalLock); } return result; } VKAPI_ATTR VkResult VKAPI_CALL vkSetEvent(VkDevice device, VkEvent event) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); dev_data->eventMap[event].needsSignaled = false; dev_data->eventMap[event].stageMask = VK_PIPELINE_STAGE_HOST_BIT; loader_platform_thread_unlock_mutex(&globalLock); VkResult result = dev_data->device_dispatch_table->SetEvent(device, event); return result; } VKAPI_ATTR VkResult VKAPI_CALL vkQueueBindSparse(VkQueue queue, uint32_t bindInfoCount, const VkBindSparseInfo *pBindInfo, VkFence fence) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map); VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skip_call = VK_FALSE; #if MTMERGESOURCE //MTMTODO : Merge this code with the checks below loader_platform_thread_lock_mutex(&globalLock); for (uint32_t i = 0; i < bindInfoCount; i++) { const VkBindSparseInfo *bindInfo = &pBindInfo[i]; // Track objects tied to memory for (uint32_t j = 0; j < bindInfo->bufferBindCount; j++) { for (uint32_t k = 0; k < bindInfo->pBufferBinds[j].bindCount; k++) { if (set_sparse_mem_binding(dev_data, queue, bindInfo->pBufferBinds[j].pBinds[k].memory, (uint64_t)bindInfo->pBufferBinds[j].buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, "vkQueueBindSparse")) skip_call = VK_TRUE; } } for (uint32_t j = 0; j < bindInfo->imageOpaqueBindCount; j++) { for (uint32_t k = 0; k < bindInfo->pImageOpaqueBinds[j].bindCount; k++) { if (set_sparse_mem_binding(dev_data, queue, 
bindInfo->pImageOpaqueBinds[j].pBinds[k].memory, (uint64_t)bindInfo->pImageOpaqueBinds[j].image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, "vkQueueBindSparse")) skip_call = VK_TRUE; } } for (uint32_t j = 0; j < bindInfo->imageBindCount; j++) { for (uint32_t k = 0; k < bindInfo->pImageBinds[j].bindCount; k++) { if (set_sparse_mem_binding(dev_data, queue, bindInfo->pImageBinds[j].pBinds[k].memory, (uint64_t)bindInfo->pImageBinds[j].image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, "vkQueueBindSparse")) skip_call = VK_TRUE; } } // Validate semaphore state for (uint32_t i = 0; i < bindInfo->waitSemaphoreCount; i++) { VkSemaphore sem = bindInfo->pWaitSemaphores[i]; if (dev_data->semaphoreMap.find(sem) != dev_data->semaphoreMap.end()) { if (dev_data->semaphoreMap[sem].state != MEMTRACK_SEMAPHORE_STATE_SIGNALLED) { skip_call = log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT, (uint64_t)sem, __LINE__, MEMTRACK_NONE, "SEMAPHORE", "vkQueueBindSparse: Semaphore must be in signaled state before passing to pWaitSemaphores"); } dev_data->semaphoreMap[sem].state = MEMTRACK_SEMAPHORE_STATE_WAIT; } } for (uint32_t i = 0; i < bindInfo->signalSemaphoreCount; i++) { VkSemaphore sem = bindInfo->pSignalSemaphores[i]; if (dev_data->semaphoreMap.find(sem) != dev_data->semaphoreMap.end()) { if (dev_data->semaphoreMap[sem].state != MEMTRACK_SEMAPHORE_STATE_UNSET) { skip_call = log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT, (uint64_t)sem, __LINE__, MEMTRACK_NONE, "SEMAPHORE", "vkQueueBindSparse: Semaphore must not be currently signaled or in a wait state"); } dev_data->semaphoreMap[sem].state = MEMTRACK_SEMAPHORE_STATE_SIGNALLED; } } } print_mem_list(dev_data, queue); loader_platform_thread_unlock_mutex(&globalLock); #endif loader_platform_thread_lock_mutex(&globalLock); for (uint32_t bindIdx = 0; bindIdx < bindInfoCount; ++bindIdx) { const VkBindSparseInfo &bindInfo = 
pBindInfo[bindIdx]; for (uint32_t i = 0; i < bindInfo.waitSemaphoreCount; ++i) { if (dev_data->semaphoreMap[bindInfo.pWaitSemaphores[i]].signaled) { dev_data->semaphoreMap[bindInfo.pWaitSemaphores[i]].signaled = 0; } else { skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_QUEUE_FORWARD_PROGRESS, "DS", "Queue %#" PRIx64 " is waiting on semaphore %#" PRIx64 " that has no way to be signaled.", (uint64_t)(queue), (uint64_t)(bindInfo.pWaitSemaphores[i])); } } for (uint32_t i = 0; i < bindInfo.signalSemaphoreCount; ++i) { dev_data->semaphoreMap[bindInfo.pSignalSemaphores[i]].signaled = 1; } } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skip_call) return dev_data->device_dispatch_table->QueueBindSparse(queue, bindInfoCount, pBindInfo, fence); #if MTMERGESOURCE // Update semaphore state loader_platform_thread_lock_mutex(&globalLock); for (uint32_t bind_info_idx = 0; bind_info_idx < bindInfoCount; bind_info_idx++) { const VkBindSparseInfo *bindInfo = &pBindInfo[bind_info_idx]; for (uint32_t i = 0; i < bindInfo->waitSemaphoreCount; i++) { VkSemaphore sem = bindInfo->pWaitSemaphores[i]; if (dev_data->semaphoreMap.find(sem) != dev_data->semaphoreMap.end()) { dev_data->semaphoreMap[sem].state = MEMTRACK_SEMAPHORE_STATE_UNSET; } } } loader_platform_thread_unlock_mutex(&globalLock); #endif return result; } VKAPI_ATTR VkResult VKAPI_CALL vkCreateSemaphore(VkDevice device, const VkSemaphoreCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkSemaphore *pSemaphore) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); VkResult result = dev_data->device_dispatch_table->CreateSemaphore(device, pCreateInfo, pAllocator, pSemaphore); if (result == VK_SUCCESS) { loader_platform_thread_lock_mutex(&globalLock); SEMAPHORE_NODE* sNode = &dev_data->semaphoreMap[*pSemaphore]; sNode->signaled = 0; sNode->queue = VK_NULL_HANDLE; 
sNode->in_use.store(0); sNode->state = MEMTRACK_SEMAPHORE_STATE_UNSET; loader_platform_thread_unlock_mutex(&globalLock); } return result; } VKAPI_ATTR VkResult VKAPI_CALL vkCreateEvent(VkDevice device, const VkEventCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkEvent *pEvent) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); VkResult result = dev_data->device_dispatch_table->CreateEvent(device, pCreateInfo, pAllocator, pEvent); if (result == VK_SUCCESS) { loader_platform_thread_lock_mutex(&globalLock); dev_data->eventMap[*pEvent].needsSignaled = false; dev_data->eventMap[*pEvent].in_use.store(0); dev_data->eventMap[*pEvent].stageMask = VkPipelineStageFlags(0); loader_platform_thread_unlock_mutex(&globalLock); } return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateSwapchainKHR(VkDevice device, const VkSwapchainCreateInfoKHR *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkSwapchainKHR *pSwapchain) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); VkResult result = dev_data->device_dispatch_table->CreateSwapchainKHR(device, pCreateInfo, pAllocator, pSwapchain); if (VK_SUCCESS == result) { SWAPCHAIN_NODE *psc_node = new SWAPCHAIN_NODE(pCreateInfo); loader_platform_thread_lock_mutex(&globalLock); dev_data->device_extensions.swapchainMap[*pSwapchain] = psc_node; loader_platform_thread_unlock_mutex(&globalLock); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroySwapchainKHR(VkDevice device, VkSwapchainKHR swapchain, const VkAllocationCallbacks *pAllocator) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); bool skipCall = false; loader_platform_thread_lock_mutex(&globalLock); auto swapchain_data = dev_data->device_extensions.swapchainMap.find(swapchain); if (swapchain_data != dev_data->device_extensions.swapchainMap.end()) { if (swapchain_data->second->images.size() > 0) { for (auto swapchain_image : 
swapchain_data->second->images) { auto image_sub = dev_data->imageSubresourceMap.find(swapchain_image); if (image_sub != dev_data->imageSubresourceMap.end()) { for (auto imgsubpair : image_sub->second) { auto image_item = dev_data->imageLayoutMap.find(imgsubpair); if (image_item != dev_data->imageLayoutMap.end()) { dev_data->imageLayoutMap.erase(image_item); } } dev_data->imageSubresourceMap.erase(image_sub); } #if MTMERGESOURCE skipCall = clear_object_binding(dev_data, device, (uint64_t)swapchain_image, VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT); dev_data->imageBindingMap.erase((uint64_t)swapchain_image); #endif } } delete swapchain_data->second; dev_data->device_extensions.swapchainMap.erase(swapchain); } loader_platform_thread_unlock_mutex(&globalLock); if (!skipCall) dev_data->device_dispatch_table->DestroySwapchainKHR(device, swapchain, pAllocator); } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetSwapchainImagesKHR(VkDevice device, VkSwapchainKHR swapchain, uint32_t *pCount, VkImage *pSwapchainImages) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); VkResult result = dev_data->device_dispatch_table->GetSwapchainImagesKHR(device, swapchain, pCount, pSwapchainImages); if (result == VK_SUCCESS && pSwapchainImages != NULL) { // This should never happen and is checked by param checker. 
if (!pCount) return result; loader_platform_thread_lock_mutex(&globalLock); const size_t count = *pCount; auto swapchain_node = dev_data->device_extensions.swapchainMap[swapchain]; if (!swapchain_node->images.empty()) { // TODO : Not sure I like the memcmp here, but it works const bool mismatch = (swapchain_node->images.size() != count || memcmp(&swapchain_node->images[0], pSwapchainImages, sizeof(swapchain_node->images[0]) * count)); if (mismatch) { // TODO: Verify against Valid Usage section of extension log_msg(dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT, (uint64_t)swapchain, __LINE__, MEMTRACK_NONE, "SWAP_CHAIN", "vkGetSwapchainInfoKHR(%" PRIu64 ", VK_SWAP_CHAIN_INFO_TYPE_PERSISTENT_IMAGES_KHR) returned mismatching data", (uint64_t)(swapchain)); } } for (uint32_t i = 0; i < *pCount; ++i) { IMAGE_LAYOUT_NODE image_layout_node; image_layout_node.layout = VK_IMAGE_LAYOUT_UNDEFINED; image_layout_node.format = swapchain_node->createInfo.imageFormat; dev_data->imageMap[pSwapchainImages[i]].createInfo.mipLevels = 1; dev_data->imageMap[pSwapchainImages[i]].createInfo.arrayLayers = swapchain_node->createInfo.imageArrayLayers; swapchain_node->images.push_back(pSwapchainImages[i]); ImageSubresourcePair subpair = {pSwapchainImages[i], false, VkImageSubresource()}; dev_data->imageSubresourceMap[pSwapchainImages[i]].push_back(subpair); dev_data->imageLayoutMap[subpair] = image_layout_node; dev_data->device_extensions.imageToSwapchainMap[pSwapchainImages[i]] = swapchain; } if (!swapchain_node->images.empty()) { for (auto image : swapchain_node->images) { // Add image object binding, then insert the new Mem Object and then bind it to created image #if MTMERGESOURCE add_object_create_info(dev_data, (uint64_t)image, VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT, &swapchain_node->createInfo); #endif } } loader_platform_thread_unlock_mutex(&globalLock); } return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL 
vkQueuePresentKHR(VkQueue queue, const VkPresentInfoKHR *pPresentInfo) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map); VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; bool skip_call = false; if (pPresentInfo) { loader_platform_thread_lock_mutex(&globalLock); for (uint32_t i = 0; i < pPresentInfo->waitSemaphoreCount; ++i) { if (dev_data->semaphoreMap[pPresentInfo->pWaitSemaphores[i]].signaled) { dev_data->semaphoreMap[pPresentInfo->pWaitSemaphores[i]].signaled = 0; } else { skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, 0, __LINE__, DRAWSTATE_QUEUE_FORWARD_PROGRESS, "DS", "Queue %#" PRIx64 " is waiting on semaphore %#" PRIx64 " that has no way to be signaled.", (uint64_t)(queue), (uint64_t)(pPresentInfo->pWaitSemaphores[i])); } } VkDeviceMemory mem; for (uint32_t i = 0; i < pPresentInfo->swapchainCount; ++i) { auto swapchain_data = dev_data->device_extensions.swapchainMap.find(pPresentInfo->pSwapchains[i]); if (swapchain_data != dev_data->device_extensions.swapchainMap.end() && pPresentInfo->pImageIndices[i] < swapchain_data->second->images.size()) { VkImage image = swapchain_data->second->images[pPresentInfo->pImageIndices[i]]; #if MTMERGESOURCE skip_call |= get_mem_binding_from_object(dev_data, queue, (uint64_t)(image), VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, &mem); skip_call |= validate_memory_is_valid(dev_data, mem, "vkQueuePresentKHR()", image); #endif vector<VkImageLayout> layouts; if (FindLayouts(dev_data, image, layouts)) { for (auto layout : layouts) { if (layout != VK_IMAGE_LAYOUT_PRESENT_SRC_KHR) { skip_call |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT, reinterpret_cast<uint64_t>(queue), __LINE__, DRAWSTATE_INVALID_IMAGE_LAYOUT, "DS", "Images passed to present must be in layout " "PRESENT_SRC_KHR but is in %s", string_VkImageLayout(layout)); } } } } } loader_platform_thread_unlock_mutex(&globalLock); } if
(!skip_call) result = dev_data->device_dispatch_table->QueuePresentKHR(queue, pPresentInfo); #if MTMERGESOURCE loader_platform_thread_lock_mutex(&globalLock); for (uint32_t i = 0; i < pPresentInfo->waitSemaphoreCount; i++) { VkSemaphore sem = pPresentInfo->pWaitSemaphores[i]; if (dev_data->semaphoreMap.find(sem) != dev_data->semaphoreMap.end()) { dev_data->semaphoreMap[sem].state = MEMTRACK_SEMAPHORE_STATE_UNSET; } } loader_platform_thread_unlock_mutex(&globalLock); #endif return result; } VKAPI_ATTR VkResult VKAPI_CALL vkAcquireNextImageKHR(VkDevice device, VkSwapchainKHR swapchain, uint64_t timeout, VkSemaphore semaphore, VkFence fence, uint32_t *pImageIndex) { layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; bool skipCall = false; #if MTMERGESOURCE loader_platform_thread_lock_mutex(&globalLock); if (semaphore != VK_NULL_HANDLE && dev_data->semaphoreMap.find(semaphore) != dev_data->semaphoreMap.end()) { if (dev_data->semaphoreMap[semaphore].state != MEMTRACK_SEMAPHORE_STATE_UNSET) { skipCall = log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT, (uint64_t)semaphore, __LINE__, MEMTRACK_NONE, "SEMAPHORE", "vkAcquireNextImageKHR: Semaphore must not be currently signaled or in a wait state"); } dev_data->semaphoreMap[semaphore].state = MEMTRACK_SEMAPHORE_STATE_SIGNALLED; dev_data->semaphoreMap[semaphore].in_use.fetch_add(1); } auto fence_data = dev_data->fenceMap.find(fence); if (fence_data != dev_data->fenceMap.end()) { fence_data->second.swapchain = swapchain; } loader_platform_thread_unlock_mutex(&globalLock); #endif if (!skipCall) { result = dev_data->device_dispatch_table->AcquireNextImageKHR(device, swapchain, timeout, semaphore, fence, pImageIndex); } loader_platform_thread_lock_mutex(&globalLock); // FIXME/TODO: Need to add some code to track the "fence" parameter dev_data->semaphoreMap[semaphore].signaled = 1;
loader_platform_thread_unlock_mutex(&globalLock); return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDebugReportCallbackEXT(VkInstance instance, const VkDebugReportCallbackCreateInfoEXT *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkDebugReportCallbackEXT *pMsgCallback) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map); VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table; VkResult res = pTable->CreateDebugReportCallbackEXT(instance, pCreateInfo, pAllocator, pMsgCallback); if (VK_SUCCESS == res) { loader_platform_thread_lock_mutex(&globalLock); res = layer_create_msg_callback(my_data->report_data, pCreateInfo, pAllocator, pMsgCallback); loader_platform_thread_unlock_mutex(&globalLock); } return res; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(VkInstance instance, VkDebugReportCallbackEXT msgCallback, const VkAllocationCallbacks *pAllocator) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map); VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table; pTable->DestroyDebugReportCallbackEXT(instance, msgCallback, pAllocator); loader_platform_thread_lock_mutex(&globalLock); layer_destroy_msg_callback(my_data->report_data, msgCallback, pAllocator); loader_platform_thread_unlock_mutex(&globalLock); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDebugReportMessageEXT(VkInstance instance, VkDebugReportFlagsEXT flags, VkDebugReportObjectTypeEXT objType, uint64_t object, size_t location, int32_t msgCode, const char *pLayerPrefix, const char *pMsg) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map); my_data->instance_dispatch_table->DebugReportMessageEXT(instance, flags, objType, object, location, msgCode, pLayerPrefix, pMsg); } VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice dev, const char *funcName) { if (!strcmp(funcName, 
"vkGetDeviceProcAddr")) return (PFN_vkVoidFunction)vkGetDeviceProcAddr; if (!strcmp(funcName, "vkDestroyDevice")) return (PFN_vkVoidFunction)vkDestroyDevice; if (!strcmp(funcName, "vkQueueSubmit")) return (PFN_vkVoidFunction)vkQueueSubmit; if (!strcmp(funcName, "vkWaitForFences")) return (PFN_vkVoidFunction)vkWaitForFences; if (!strcmp(funcName, "vkGetFenceStatus")) return (PFN_vkVoidFunction)vkGetFenceStatus; if (!strcmp(funcName, "vkQueueWaitIdle")) return (PFN_vkVoidFunction)vkQueueWaitIdle; if (!strcmp(funcName, "vkDeviceWaitIdle")) return (PFN_vkVoidFunction)vkDeviceWaitIdle; if (!strcmp(funcName, "vkGetDeviceQueue")) return (PFN_vkVoidFunction)vkGetDeviceQueue; if (!strcmp(funcName, "vkDestroyInstance")) return (PFN_vkVoidFunction)vkDestroyInstance; if (!strcmp(funcName, "vkDestroyDevice")) return (PFN_vkVoidFunction)vkDestroyDevice; if (!strcmp(funcName, "vkDestroyFence")) return (PFN_vkVoidFunction)vkDestroyFence; if (!strcmp(funcName, "vkResetFences")) return (PFN_vkVoidFunction)vkResetFences; if (!strcmp(funcName, "vkDestroySemaphore")) return (PFN_vkVoidFunction)vkDestroySemaphore; if (!strcmp(funcName, "vkDestroyEvent")) return (PFN_vkVoidFunction)vkDestroyEvent; if (!strcmp(funcName, "vkDestroyQueryPool")) return (PFN_vkVoidFunction)vkDestroyQueryPool; if (!strcmp(funcName, "vkDestroyBuffer")) return (PFN_vkVoidFunction)vkDestroyBuffer; if (!strcmp(funcName, "vkDestroyBufferView")) return (PFN_vkVoidFunction)vkDestroyBufferView; if (!strcmp(funcName, "vkDestroyImage")) return (PFN_vkVoidFunction)vkDestroyImage; if (!strcmp(funcName, "vkDestroyImageView")) return (PFN_vkVoidFunction)vkDestroyImageView; if (!strcmp(funcName, "vkDestroyShaderModule")) return (PFN_vkVoidFunction)vkDestroyShaderModule; if (!strcmp(funcName, "vkDestroyPipeline")) return (PFN_vkVoidFunction)vkDestroyPipeline; if (!strcmp(funcName, "vkDestroyPipelineLayout")) return (PFN_vkVoidFunction)vkDestroyPipelineLayout; if (!strcmp(funcName, "vkDestroySampler")) return 
(PFN_vkVoidFunction)vkDestroySampler; if (!strcmp(funcName, "vkDestroyDescriptorSetLayout")) return (PFN_vkVoidFunction)vkDestroyDescriptorSetLayout; if (!strcmp(funcName, "vkDestroyDescriptorPool")) return (PFN_vkVoidFunction)vkDestroyDescriptorPool; if (!strcmp(funcName, "vkDestroyFramebuffer")) return (PFN_vkVoidFunction)vkDestroyFramebuffer; if (!strcmp(funcName, "vkDestroyRenderPass")) return (PFN_vkVoidFunction)vkDestroyRenderPass; if (!strcmp(funcName, "vkCreateBuffer")) return (PFN_vkVoidFunction)vkCreateBuffer; if (!strcmp(funcName, "vkCreateBufferView")) return (PFN_vkVoidFunction)vkCreateBufferView; if (!strcmp(funcName, "vkCreateImage")) return (PFN_vkVoidFunction)vkCreateImage; if (!strcmp(funcName, "vkCreateImageView")) return (PFN_vkVoidFunction)vkCreateImageView; if (!strcmp(funcName, "vkCreateFence")) return (PFN_vkVoidFunction)vkCreateFence; if (!strcmp(funcName, "vkCreatePipelineCache")) return (PFN_vkVoidFunction)vkCreatePipelineCache; if (!strcmp(funcName, "vkDestroyPipelineCache")) return (PFN_vkVoidFunction)vkDestroyPipelineCache; if (!strcmp(funcName, "vkGetPipelineCacheData")) return (PFN_vkVoidFunction)vkGetPipelineCacheData; if (!strcmp(funcName, "vkMergePipelineCaches")) return (PFN_vkVoidFunction)vkMergePipelineCaches; if (!strcmp(funcName, "vkCreateGraphicsPipelines")) return (PFN_vkVoidFunction)vkCreateGraphicsPipelines; if (!strcmp(funcName, "vkCreateComputePipelines")) return (PFN_vkVoidFunction)vkCreateComputePipelines; if (!strcmp(funcName, "vkCreateSampler")) return (PFN_vkVoidFunction)vkCreateSampler; if (!strcmp(funcName, "vkCreateDescriptorSetLayout")) return (PFN_vkVoidFunction)vkCreateDescriptorSetLayout; if (!strcmp(funcName, "vkCreatePipelineLayout")) return (PFN_vkVoidFunction)vkCreatePipelineLayout; if (!strcmp(funcName, "vkCreateDescriptorPool")) return (PFN_vkVoidFunction)vkCreateDescriptorPool; if (!strcmp(funcName, "vkResetDescriptorPool")) return (PFN_vkVoidFunction)vkResetDescriptorPool; if (!strcmp(funcName,
"vkAllocateDescriptorSets")) return (PFN_vkVoidFunction)vkAllocateDescriptorSets; if (!strcmp(funcName, "vkFreeDescriptorSets")) return (PFN_vkVoidFunction)vkFreeDescriptorSets; if (!strcmp(funcName, "vkUpdateDescriptorSets")) return (PFN_vkVoidFunction)vkUpdateDescriptorSets; if (!strcmp(funcName, "vkCreateCommandPool")) return (PFN_vkVoidFunction)vkCreateCommandPool; if (!strcmp(funcName, "vkDestroyCommandPool")) return (PFN_vkVoidFunction)vkDestroyCommandPool; if (!strcmp(funcName, "vkResetCommandPool")) return (PFN_vkVoidFunction)vkResetCommandPool; if (!strcmp(funcName, "vkCreateQueryPool")) return (PFN_vkVoidFunction)vkCreateQueryPool; if (!strcmp(funcName, "vkAllocateCommandBuffers")) return (PFN_vkVoidFunction)vkAllocateCommandBuffers; if (!strcmp(funcName, "vkFreeCommandBuffers")) return (PFN_vkVoidFunction)vkFreeCommandBuffers; if (!strcmp(funcName, "vkBeginCommandBuffer")) return (PFN_vkVoidFunction)vkBeginCommandBuffer; if (!strcmp(funcName, "vkEndCommandBuffer")) return (PFN_vkVoidFunction)vkEndCommandBuffer; if (!strcmp(funcName, "vkResetCommandBuffer")) return (PFN_vkVoidFunction)vkResetCommandBuffer; if (!strcmp(funcName, "vkCmdBindPipeline")) return (PFN_vkVoidFunction)vkCmdBindPipeline; if (!strcmp(funcName, "vkCmdSetViewport")) return (PFN_vkVoidFunction)vkCmdSetViewport; if (!strcmp(funcName, "vkCmdSetScissor")) return (PFN_vkVoidFunction)vkCmdSetScissor; if (!strcmp(funcName, "vkCmdSetLineWidth")) return (PFN_vkVoidFunction)vkCmdSetLineWidth; if (!strcmp(funcName, "vkCmdSetDepthBias")) return (PFN_vkVoidFunction)vkCmdSetDepthBias; if (!strcmp(funcName, "vkCmdSetBlendConstants")) return (PFN_vkVoidFunction)vkCmdSetBlendConstants; if (!strcmp(funcName, "vkCmdSetDepthBounds")) return (PFN_vkVoidFunction)vkCmdSetDepthBounds; if (!strcmp(funcName, "vkCmdSetStencilCompareMask")) return (PFN_vkVoidFunction)vkCmdSetStencilCompareMask; if (!strcmp(funcName, "vkCmdSetStencilWriteMask")) return (PFN_vkVoidFunction)vkCmdSetStencilWriteMask; if 
(!strcmp(funcName, "vkCmdSetStencilReference")) return (PFN_vkVoidFunction)vkCmdSetStencilReference; if (!strcmp(funcName, "vkCmdBindDescriptorSets")) return (PFN_vkVoidFunction)vkCmdBindDescriptorSets; if (!strcmp(funcName, "vkCmdBindVertexBuffers")) return (PFN_vkVoidFunction)vkCmdBindVertexBuffers; if (!strcmp(funcName, "vkCmdBindIndexBuffer")) return (PFN_vkVoidFunction)vkCmdBindIndexBuffer; if (!strcmp(funcName, "vkCmdDraw")) return (PFN_vkVoidFunction)vkCmdDraw; if (!strcmp(funcName, "vkCmdDrawIndexed")) return (PFN_vkVoidFunction)vkCmdDrawIndexed; if (!strcmp(funcName, "vkCmdDrawIndirect")) return (PFN_vkVoidFunction)vkCmdDrawIndirect; if (!strcmp(funcName, "vkCmdDrawIndexedIndirect")) return (PFN_vkVoidFunction)vkCmdDrawIndexedIndirect; if (!strcmp(funcName, "vkCmdDispatch")) return (PFN_vkVoidFunction)vkCmdDispatch; if (!strcmp(funcName, "vkCmdDispatchIndirect")) return (PFN_vkVoidFunction)vkCmdDispatchIndirect; if (!strcmp(funcName, "vkCmdCopyBuffer")) return (PFN_vkVoidFunction)vkCmdCopyBuffer; if (!strcmp(funcName, "vkCmdCopyImage")) return (PFN_vkVoidFunction)vkCmdCopyImage; if (!strcmp(funcName, "vkCmdBlitImage")) return (PFN_vkVoidFunction)vkCmdBlitImage; if (!strcmp(funcName, "vkCmdCopyBufferToImage")) return (PFN_vkVoidFunction)vkCmdCopyBufferToImage; if (!strcmp(funcName, "vkCmdCopyImageToBuffer")) return (PFN_vkVoidFunction)vkCmdCopyImageToBuffer; if (!strcmp(funcName, "vkCmdUpdateBuffer")) return (PFN_vkVoidFunction)vkCmdUpdateBuffer; if (!strcmp(funcName, "vkCmdFillBuffer")) return (PFN_vkVoidFunction)vkCmdFillBuffer; if (!strcmp(funcName, "vkCmdClearColorImage")) return (PFN_vkVoidFunction)vkCmdClearColorImage; if (!strcmp(funcName, "vkCmdClearDepthStencilImage")) return (PFN_vkVoidFunction)vkCmdClearDepthStencilImage; if (!strcmp(funcName, "vkCmdClearAttachments")) return (PFN_vkVoidFunction)vkCmdClearAttachments; if (!strcmp(funcName, "vkCmdResolveImage")) return (PFN_vkVoidFunction)vkCmdResolveImage; if (!strcmp(funcName, "vkCmdSetEvent")) 
        return (PFN_vkVoidFunction)vkCmdSetEvent;
    if (!strcmp(funcName, "vkCmdResetEvent"))
        return (PFN_vkVoidFunction)vkCmdResetEvent;
    if (!strcmp(funcName, "vkCmdWaitEvents"))
        return (PFN_vkVoidFunction)vkCmdWaitEvents;
    if (!strcmp(funcName, "vkCmdPipelineBarrier"))
        return (PFN_vkVoidFunction)vkCmdPipelineBarrier;
    if (!strcmp(funcName, "vkCmdBeginQuery"))
        return (PFN_vkVoidFunction)vkCmdBeginQuery;
    if (!strcmp(funcName, "vkCmdEndQuery"))
        return (PFN_vkVoidFunction)vkCmdEndQuery;
    if (!strcmp(funcName, "vkCmdResetQueryPool"))
        return (PFN_vkVoidFunction)vkCmdResetQueryPool;
    if (!strcmp(funcName, "vkCmdCopyQueryPoolResults"))
        return (PFN_vkVoidFunction)vkCmdCopyQueryPoolResults;
    if (!strcmp(funcName, "vkCmdPushConstants"))
        return (PFN_vkVoidFunction)vkCmdPushConstants;
    if (!strcmp(funcName, "vkCmdWriteTimestamp"))
        return (PFN_vkVoidFunction)vkCmdWriteTimestamp;
    if (!strcmp(funcName, "vkCreateFramebuffer"))
        return (PFN_vkVoidFunction)vkCreateFramebuffer;
    if (!strcmp(funcName, "vkCreateShaderModule"))
        return (PFN_vkVoidFunction)vkCreateShaderModule;
    if (!strcmp(funcName, "vkCreateRenderPass"))
        return (PFN_vkVoidFunction)vkCreateRenderPass;
    if (!strcmp(funcName, "vkCmdBeginRenderPass"))
        return (PFN_vkVoidFunction)vkCmdBeginRenderPass;
    if (!strcmp(funcName, "vkCmdNextSubpass"))
        return (PFN_vkVoidFunction)vkCmdNextSubpass;
    if (!strcmp(funcName, "vkCmdEndRenderPass"))
        return (PFN_vkVoidFunction)vkCmdEndRenderPass;
    if (!strcmp(funcName, "vkCmdExecuteCommands"))
        return (PFN_vkVoidFunction)vkCmdExecuteCommands;
    if (!strcmp(funcName, "vkSetEvent"))
        return (PFN_vkVoidFunction)vkSetEvent;
    if (!strcmp(funcName, "vkMapMemory"))
        return (PFN_vkVoidFunction)vkMapMemory;
#if MTMERGESOURCE
    if (!strcmp(funcName, "vkUnmapMemory"))
        return (PFN_vkVoidFunction)vkUnmapMemory;
    if (!strcmp(funcName, "vkAllocateMemory"))
        return (PFN_vkVoidFunction)vkAllocateMemory;
    if (!strcmp(funcName, "vkFreeMemory"))
        return (PFN_vkVoidFunction)vkFreeMemory;
    if (!strcmp(funcName, "vkFlushMappedMemoryRanges"))
        return (PFN_vkVoidFunction)vkFlushMappedMemoryRanges;
    if (!strcmp(funcName, "vkInvalidateMappedMemoryRanges"))
        return (PFN_vkVoidFunction)vkInvalidateMappedMemoryRanges;
    if (!strcmp(funcName, "vkBindBufferMemory"))
        return (PFN_vkVoidFunction)vkBindBufferMemory;
    if (!strcmp(funcName, "vkGetBufferMemoryRequirements"))
        return (PFN_vkVoidFunction)vkGetBufferMemoryRequirements;
    if (!strcmp(funcName, "vkGetImageMemoryRequirements"))
        return (PFN_vkVoidFunction)vkGetImageMemoryRequirements;
#endif
    if (!strcmp(funcName, "vkGetQueryPoolResults"))
        return (PFN_vkVoidFunction)vkGetQueryPoolResults;
    if (!strcmp(funcName, "vkBindImageMemory"))
        return (PFN_vkVoidFunction)vkBindImageMemory;
    if (!strcmp(funcName, "vkQueueBindSparse"))
        return (PFN_vkVoidFunction)vkQueueBindSparse;
    if (!strcmp(funcName, "vkCreateSemaphore"))
        return (PFN_vkVoidFunction)vkCreateSemaphore;
    if (!strcmp(funcName, "vkCreateEvent"))
        return (PFN_vkVoidFunction)vkCreateEvent;

    if (dev == NULL)
        return NULL;

    layer_data *dev_data;
    dev_data = get_my_data_ptr(get_dispatch_key(dev), layer_data_map);
    if (dev_data->device_extensions.wsi_enabled) {
        if (!strcmp(funcName, "vkCreateSwapchainKHR"))
            return (PFN_vkVoidFunction)vkCreateSwapchainKHR;
        if (!strcmp(funcName, "vkDestroySwapchainKHR"))
            return (PFN_vkVoidFunction)vkDestroySwapchainKHR;
        if (!strcmp(funcName, "vkGetSwapchainImagesKHR"))
            return (PFN_vkVoidFunction)vkGetSwapchainImagesKHR;
        if (!strcmp(funcName, "vkAcquireNextImageKHR"))
            return (PFN_vkVoidFunction)vkAcquireNextImageKHR;
        if (!strcmp(funcName, "vkQueuePresentKHR"))
            return (PFN_vkVoidFunction)vkQueuePresentKHR;
    }

    VkLayerDispatchTable *pTable = dev_data->device_dispatch_table;
    {
        if (pTable->GetDeviceProcAddr == NULL)
            return NULL;
        return pTable->GetDeviceProcAddr(dev, funcName);
    }
}

VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(VkInstance instance, const char *funcName) {
    if (!strcmp(funcName, "vkGetInstanceProcAddr"))
        return (PFN_vkVoidFunction)vkGetInstanceProcAddr;
    if
(!strcmp(funcName, "vkGetDeviceProcAddr")) return (PFN_vkVoidFunction)vkGetDeviceProcAddr; if (!strcmp(funcName, "vkCreateInstance")) return (PFN_vkVoidFunction)vkCreateInstance; if (!strcmp(funcName, "vkCreateDevice")) return (PFN_vkVoidFunction)vkCreateDevice; if (!strcmp(funcName, "vkDestroyInstance")) return (PFN_vkVoidFunction)vkDestroyInstance; #if MTMERGESOURCE if (!strcmp(funcName, "vkGetPhysicalDeviceMemoryProperties")) return (PFN_vkVoidFunction)vkGetPhysicalDeviceMemoryProperties; #endif if (!strcmp(funcName, "vkEnumerateInstanceLayerProperties")) return (PFN_vkVoidFunction)vkEnumerateInstanceLayerProperties; if (!strcmp(funcName, "vkEnumerateInstanceExtensionProperties")) return (PFN_vkVoidFunction)vkEnumerateInstanceExtensionProperties; if (!strcmp(funcName, "vkEnumerateDeviceLayerProperties")) return (PFN_vkVoidFunction)vkEnumerateDeviceLayerProperties; if (!strcmp(funcName, "vkEnumerateDeviceExtensionProperties")) return (PFN_vkVoidFunction)vkEnumerateDeviceExtensionProperties; if (instance == NULL) return NULL; PFN_vkVoidFunction fptr; layer_data *my_data; my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map); fptr = debug_report_get_instance_proc_addr(my_data->report_data, funcName); if (fptr) return fptr; VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table; if (pTable->GetInstanceProcAddr == NULL) return NULL; return pTable->GetInstanceProcAddr(instance, funcName); } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/core_validation.h000066400000000000000000001267071270147354000255510ustar00rootroot00000000000000/* Copyright (c) 2015-2016 The Khronos Group Inc. * Copyright (c) 2015-2016 Valve Corporation * Copyright (c) 2015-2016 LunarG, Inc. * Copyright (C) 2015-2016 Google Inc. 
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and/or associated documentation files (the "Materials"), to
 * deal in the Materials without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Materials, and to permit persons to whom the Materials
 * are furnished to do so, subject to the following conditions:
 *
 * The above copyright notice(s) and this permission notice shall be included
 * in all copies or substantial portions of the Materials.
 *
 * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 *
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
 * USE OR OTHER DEALINGS IN THE MATERIALS
 *
 * Author: Courtney Goeltzenleuchter
 * Author: Tobin Ehlis
 * Author: Chris Forbes
 * Author: Mark Lobodzinski
 */

// Check for noexcept support
#if defined(__clang__)
#if __has_feature(cxx_noexcept)
#define HAS_NOEXCEPT
#endif
#else
#if defined(__GXX_EXPERIMENTAL_CXX0X__) && __GNUC__ * 10 + __GNUC_MINOR__ >= 46 || \
    defined(_MSC_FULL_VER) && _MSC_FULL_VER >= 190023026
#define HAS_NOEXCEPT
#endif
#endif

#ifdef HAS_NOEXCEPT
#define NOEXCEPT noexcept
#else
#define NOEXCEPT
#endif

// Enable mem_tracker merged code
#define MTMERGE 1

#pragma once
#include "vulkan/vk_layer.h"
#include
#include
#include
#include
#include
#include
using std::vector;
using std::unordered_set;

#if MTMERGE
// Mem Tracker ERROR codes
typedef enum _MEM_TRACK_ERROR {
    MEMTRACK_NONE,             // Used for INFO & other non-error messages
    MEMTRACK_INVALID_CB,       // Cmd Buffer invalid
    MEMTRACK_INVALID_MEM_OBJ,  // Invalid Memory Object
    MEMTRACK_INVALID_ALIASING,
                               // Invalid Memory Aliasing
    MEMTRACK_INVALID_LAYOUT,   // Invalid Layout
    MEMTRACK_INTERNAL_ERROR,   // Bug in Mem Track Layer internal data structures
    MEMTRACK_FREED_MEM_REF,    // MEM Obj freed while it still has obj and/or CB refs
    MEMTRACK_MEM_OBJ_CLEAR_EMPTY_BINDINGS, // Clearing bindings on mem obj that doesn't have any bindings
    MEMTRACK_MISSING_MEM_BINDINGS,     // Trying to retrieve mem bindings, but none found (may be internal error)
    MEMTRACK_INVALID_OBJECT,           // Attempting to reference generic VK Object that is invalid
    MEMTRACK_MEMORY_BINDING_ERROR,     // Error during one of many calls that bind memory to object or CB
    MEMTRACK_MEMORY_LEAK,              // Failure to call vkFreeMemory on Mem Obj prior to DestroyDevice
    MEMTRACK_INVALID_STATE,            // Memory not in the correct state
    MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, // vkResetCommandBuffer() called on a CB that hasn't completed
    MEMTRACK_INVALID_FENCE_STATE,      // Invalid Fence State signaled or used
    MEMTRACK_REBIND_OBJECT,            // Non-sparse object bindings are immutable
    MEMTRACK_INVALID_USAGE_FLAG,       // Usage flags specified at image/buffer create conflict w/ use of object
    MEMTRACK_INVALID_MAP,              // Size specified at alloc is too small for mapping range
} MEM_TRACK_ERROR;

// MemTracker Semaphore states
typedef enum SemaphoreState {
    MEMTRACK_SEMAPHORE_STATE_UNSET,     // Semaphore is in an undefined state
    MEMTRACK_SEMAPHORE_STATE_SIGNALLED, // Semaphore is in signalled state
    MEMTRACK_SEMAPHORE_STATE_WAIT,      // Semaphore is in wait state
} SemaphoreState;

struct MemRange {
    VkDeviceSize offset;
    VkDeviceSize size;
};

/*
 * MTMTODO : Update this comment
 * Data Structure overview
 * There are 4 global STL maps
 * cbMap -- map of command Buffer (CB) objects to MT_CB_INFO structures
 *   Each MT_CB_INFO struct has an stl list container with
 *   memory objects that are referenced by this CB
 * memObjMap -- map of Memory Objects to MT_MEM_OBJ_INFO structures
 *   Each MT_MEM_OBJ_INFO has two stl list containers with:
 *   -- all CBs referencing this mem obj
 *   -- all VK Objects that are bound to this memory
 * objectMap -- map of objects to MT_OBJ_INFO structures
 *
 * Algorithm overview
 * These are the primary events that should happen related to different objects
 * 1. Command buffers
 *    CREATION - Add object, structure to map
 *    CMD BIND - If mem associated, add mem reference to list container
 *    DESTROY  - Remove from map, decrement (and report) mem references
 * 2. Mem Objects
 *    CREATION - Add object, structure to map
 *    OBJ BIND - Add obj structure to list container for that mem node
 *    CMD BIND - If mem-related, add CB structure to list container for that mem node
 *    DESTROY  - Flag as errors any remaining refs and remove from map
 * 3. Generic Objects
 *    MEM BIND - DESTROY any previous binding, add obj node w/ ref to map, add obj ref to list container for that mem node
 *    DESTROY  - If mem bound, remove reference from list container for that memInfo, remove object ref from map
 */
// TODO : Is there a way to track when Cmd Buffer finishes & remove mem references at that point?
// TODO : Could potentially store a list of freed mem allocs to flag when they're incorrectly used

// Simple struct to hold handle and type of object so they can be uniquely identified and looked up in the appropriate map
struct MT_OBJ_HANDLE_TYPE {
    uint64_t handle;
    VkDebugReportObjectTypeEXT type;
};

bool operator==(MT_OBJ_HANDLE_TYPE a, MT_OBJ_HANDLE_TYPE b) NOEXCEPT {
    return a.handle == b.handle && a.type == b.type;
}

namespace std {
template <> struct hash<MT_OBJ_HANDLE_TYPE> {
    size_t operator()(MT_OBJ_HANDLE_TYPE obj) const NOEXCEPT {
        return hash<uint64_t>()(obj.handle) ^ hash<uint32_t>()(obj.type);
    }
};
}

struct MEMORY_RANGE {
    uint64_t handle;
    VkDeviceMemory memory;
    VkDeviceSize start;
    VkDeviceSize end;
};

// Data struct for tracking memory object
struct DEVICE_MEM_INFO {
    void *object; // Dispatchable object used to create this memory (device or swapchain)
    bool valid;   // Stores if the memory has valid data or not
    VkDeviceMemory mem;
    VkMemoryAllocateInfo allocInfo;
    unordered_set<MT_OBJ_HANDLE_TYPE> objBindings;        // objects bound to this memory
    unordered_set<VkCommandBuffer> commandBufferBindings; // cmd buffers referencing this memory
    vector<MEMORY_RANGE> bufferRanges;
    vector<MEMORY_RANGE> imageRanges;
    VkImage image; // If memory is bound to image, this will have VkImage handle, else VK_NULL_HANDLE
    MemRange memRange;
    void *pData, *pDriverData;
};

// This only applies to Buffers and Images, which can have memory bound to them
struct MT_OBJ_BINDING_INFO {
    VkDeviceMemory mem;
    bool valid;
    // If this is a swapchain image, backing memory is not a MT_MEM_OBJ_INFO, so store it here.
    union create_info {
        VkImageCreateInfo image;
        VkBufferCreateInfo buffer;
    } create_info;
};

struct MT_FB_ATTACHMENT_INFO {
    VkImage image;
    VkDeviceMemory mem;
};

struct MT_PASS_ATTACHMENT_INFO {
    uint32_t attachment;
    VkAttachmentLoadOp load_op;
    VkAttachmentStoreOp store_op;
};

// Associate fenceId with a fence object
struct MT_FENCE_INFO {
    uint64_t fenceId;          // Sequence number for fence at last submit
    VkQueue queue;             // Queue that this fence is submitted against or NULL
    VkSwapchainKHR swapchain;  // Swapchain that this fence is submitted against or NULL
    VkBool32 firstTimeFlag;    // Fence was created in signaled state, avoid warnings for first use
    VkFenceCreateInfo createInfo;
};

// Track Queue information
struct MT_QUEUE_INFO {
    uint64_t lastRetiredId;
    uint64_t lastSubmittedId;
    list<VkCommandBuffer> pQueueCommandBuffers;
    list<VkDeviceMemory> pMemRefList;
};

struct MT_DESCRIPTOR_SET_INFO {
    std::vector<VkImage> images;
    std::vector<VkBuffer> buffers;
};

// Track Swapchain Information
struct MT_SWAP_CHAIN_INFO {
    VkSwapchainCreateInfoKHR createInfo;
    std::vector<VkImage> images;
};
#endif

// Draw State ERROR codes
typedef enum _DRAW_STATE_ERROR {
    // TODO: Remove the comments here or expand them. In almost all cases the
    // comment adds no information beyond the name itself.
    DRAWSTATE_NONE,                          // Used for INFO & other non-error messages
    DRAWSTATE_INTERNAL_ERROR,                // Error with DrawState internal data structures
    DRAWSTATE_NO_PIPELINE_BOUND,             // Unable to identify a bound pipeline
    DRAWSTATE_INVALID_POOL,                  // Invalid DS pool
    DRAWSTATE_INVALID_SET,                   // Invalid DS
    DRAWSTATE_INVALID_RENDER_AREA,           // Invalid renderArea
    DRAWSTATE_INVALID_LAYOUT,                // Invalid DS layout
    DRAWSTATE_INVALID_IMAGE_LAYOUT,          // Invalid Image layout
    DRAWSTATE_INVALID_PIPELINE,              // Invalid Pipeline handle referenced
    DRAWSTATE_INVALID_PIPELINE_LAYOUT,       // Invalid PipelineLayout
    DRAWSTATE_INVALID_PIPELINE_CREATE_STATE, // Attempt to create a pipeline with invalid state
    DRAWSTATE_INVALID_COMMAND_BUFFER,        // Invalid CommandBuffer referenced
    DRAWSTATE_INVALID_BARRIER,               // Invalid Barrier
    DRAWSTATE_INVALID_BUFFER,                // Invalid Buffer
    DRAWSTATE_INVALID_QUERY,                 // Invalid Query
    DRAWSTATE_INVALID_FENCE,                 // Invalid Fence
    DRAWSTATE_INVALID_SEMAPHORE,             // Invalid Semaphore
    DRAWSTATE_INVALID_EVENT,                 // Invalid Event
    DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS,       // binding in vkCmdBindVertexBuffers() too large for PSO's pVertexBindingDescriptions array
    DRAWSTATE_VTX_INDEX_ALIGNMENT_ERROR,     // binding offset in vkCmdBindIndexBuffer() out of alignment based on indexType
    // DRAWSTATE_MISSING_DOT_PROGRAM,        // No "dot" program in order to generate png image
    DRAWSTATE_OUT_OF_MEMORY,                 // malloc failed
    DRAWSTATE_INVALID_DESCRIPTOR_SET,        // Descriptor Set handle is unknown
    DRAWSTATE_DESCRIPTOR_TYPE_MISMATCH,      // Type in layout vs. update are not the same
    DRAWSTATE_DESCRIPTOR_STAGEFLAGS_MISMATCH,  // StageFlags in layout are not the same throughout a single VkWriteDescriptorSet update
    DRAWSTATE_DESCRIPTOR_UPDATE_OUT_OF_BOUNDS, // Descriptors set for update out of bounds for corresponding layout section
    DRAWSTATE_DESCRIPTOR_POOL_EMPTY,           // Attempt to allocate descriptor from a pool with no more descriptors of that type available
    DRAWSTATE_CANT_FREE_FROM_NON_FREE_POOL,    // Invalid to call vkFreeDescriptorSets on Sets allocated from a NON_FREE Pool
    DRAWSTATE_INVALID_UPDATE_INDEX,            // Index of requested update is invalid for specified descriptors set
    DRAWSTATE_INVALID_UPDATE_STRUCT,           // Struct in DS Update tree is of invalid type
    DRAWSTATE_NUM_SAMPLES_MISMATCH,            // Number of samples in bound PSO does not match number in FB of current RenderPass
    DRAWSTATE_NO_END_COMMAND_BUFFER,           // Must call vkEndCommandBuffer() before QueueSubmit on that commandBuffer
    DRAWSTATE_NO_BEGIN_COMMAND_BUFFER,         // Binding cmds or calling End on CB that never had vkBeginCommandBuffer() called on it
    DRAWSTATE_COMMAND_BUFFER_SINGLE_SUBMIT_VIOLATION, // Cmd Buffer created with VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT flag is submitted multiple times
    DRAWSTATE_INVALID_SECONDARY_COMMAND_BUFFER, // vkCmdExecuteCommands() called with a primary commandBuffer in pCommandBuffers array
    DRAWSTATE_VIEWPORT_NOT_BOUND,     // Draw submitted with no viewport state bound
    DRAWSTATE_SCISSOR_NOT_BOUND,      // Draw submitted with no scissor state bound
    DRAWSTATE_LINE_WIDTH_NOT_BOUND,   // Draw submitted with no line width state bound
    DRAWSTATE_DEPTH_BIAS_NOT_BOUND,   // Draw submitted with no depth bias state bound
    DRAWSTATE_BLEND_NOT_BOUND,        // Draw submitted with no blend state bound when color write enabled
    DRAWSTATE_DEPTH_BOUNDS_NOT_BOUND, // Draw submitted with no depth bounds state bound when depth enabled
    DRAWSTATE_STENCIL_NOT_BOUND,      // Draw submitted with no stencil state bound
                                      // when stencil enabled
    DRAWSTATE_INDEX_BUFFER_NOT_BOUND,        // Indexed draw submitted with no index buffer bound
    DRAWSTATE_PIPELINE_LAYOUTS_INCOMPATIBLE, // Draw submitted with PSO pipeline layout that's not compatible with the layout from BindDescriptorSets
    DRAWSTATE_RENDERPASS_INCOMPATIBLE,    // Incompatible renderpasses between secondary cmdBuffer and primary cmdBuffer or framebuffer
    DRAWSTATE_FRAMEBUFFER_INCOMPATIBLE,   // Incompatible framebuffer between secondary cmdBuffer and active renderPass
    DRAWSTATE_INVALID_RENDERPASS,         // Use of a NULL or otherwise invalid RenderPass object
    DRAWSTATE_INVALID_RENDERPASS_CMD,     // Invalid cmd submitted while a RenderPass is active
    DRAWSTATE_NO_ACTIVE_RENDERPASS,       // Rendering cmd submitted without an active RenderPass
    DRAWSTATE_DESCRIPTOR_SET_NOT_UPDATED, // DescriptorSet bound but it was never updated. This is a warning code.
    DRAWSTATE_DESCRIPTOR_SET_NOT_BOUND,   // DescriptorSet used by pipeline at draw time is not bound, or has been disturbed (which would have flagged previous warning)
    DRAWSTATE_INVALID_DYNAMIC_OFFSET_COUNT, // DescriptorSets bound with a different number of dynamic descriptors than were included in dynamicOffsetCount
    DRAWSTATE_CLEAR_CMD_BEFORE_DRAW,      // Clear cmd issued before any Draw in CommandBuffer, should use RenderPass Ops instead
    DRAWSTATE_BEGIN_CB_INVALID_STATE,     // CB state at Begin call is bad. Can be Primary/Secondary CB created with mismatched FB/RP information, or CB in RECORDING state
    DRAWSTATE_INVALID_CB_SIMULTANEOUS_USE, // CmdBuffer is being used in violation of VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT rules (i.e.
                                           // simultaneous use w/o that bit set)
    DRAWSTATE_INVALID_COMMAND_BUFFER_RESET, // Attempting to call Reset (or Begin on recorded cmdBuffer) that was allocated from Pool w/o VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT bit set
    DRAWSTATE_VIEWPORT_SCISSOR_MISMATCH,    // Count for viewports and scissors mismatch and/or state doesn't match count
    DRAWSTATE_INVALID_IMAGE_ASPECT,         // Image aspect is invalid for the current operation
    DRAWSTATE_MISSING_ATTACHMENT_REFERENCE, // Attachment reference must be present in active subpass
    DRAWSTATE_SAMPLER_DESCRIPTOR_ERROR,     // A Descriptor of *_SAMPLER type is being updated with an invalid or bad Sampler
    DRAWSTATE_INCONSISTENT_IMMUTABLE_SAMPLER_UPDATE, // Descriptors of *COMBINED_IMAGE_SAMPLER type are being updated where some, but not all, of the updates use immutable samplers
    DRAWSTATE_IMAGEVIEW_DESCRIPTOR_ERROR,  // A Descriptor of *_IMAGE or *_ATTACHMENT type is being updated with an invalid or bad ImageView
    DRAWSTATE_BUFFERVIEW_DESCRIPTOR_ERROR, // A Descriptor of *_TEXEL_BUFFER type is being updated with an invalid or bad BufferView
    DRAWSTATE_BUFFERINFO_DESCRIPTOR_ERROR, // A Descriptor of *_[UNIFORM|STORAGE]_BUFFER_[DYNAMIC] type is being updated with an invalid or bad BufferView
    DRAWSTATE_DYNAMIC_OFFSET_OVERFLOW,     // At draw time the dynamic offset combined with buffer offset and range oversteps size of buffer
    DRAWSTATE_DOUBLE_DESTROY,              // Destroying an object twice
    DRAWSTATE_OBJECT_INUSE,                // Destroying or modifying an object in use by a command buffer
    DRAWSTATE_QUEUE_FORWARD_PROGRESS,      // Queue cannot guarantee forward progress
    DRAWSTATE_INVALID_BUFFER_MEMORY_OFFSET,  // Dynamic Buffer Offset violates memory requirements limit
    DRAWSTATE_INVALID_TEXEL_BUFFER_OFFSET,   // Dynamic Texel Buffer Offsets violate device limit
    DRAWSTATE_INVALID_UNIFORM_BUFFER_OFFSET, // Dynamic Uniform Buffer Offsets violate device limit
    DRAWSTATE_INVALID_STORAGE_BUFFER_OFFSET, // Dynamic Storage Buffer Offsets violate device limit
    DRAWSTATE_INDEPENDENT_BLEND,    // If independent blending is not enabled, all elements of pAttachments must be identical
    DRAWSTATE_DISABLED_LOGIC_OP,    // If the logic operations feature is not enabled, logicOpEnable must be VK_FALSE
    DRAWSTATE_INVALID_LOGIC_OP,     // If logicOpEnable is VK_TRUE, logicOp must be a valid VkLogicOp value
    DRAWSTATE_INVALID_QUEUE_INDEX,  // Specified queue index exceeds number of queried queue families
    DRAWSTATE_PUSH_CONSTANTS_ERROR, // Push constants exceed maxPushConstantsSize
} DRAW_STATE_ERROR;

typedef enum _SHADER_CHECKER_ERROR {
    SHADER_CHECKER_NONE,
    SHADER_CHECKER_INTERFACE_TYPE_MISMATCH,    // Type mismatch between shader stages or shader and pipeline
    SHADER_CHECKER_OUTPUT_NOT_CONSUMED,        // Entry appears in output interface, but missing in input
    SHADER_CHECKER_INPUT_NOT_PRODUCED,         // Entry appears in input interface, but missing in output
    SHADER_CHECKER_NON_SPIRV_SHADER,           // Shader image is not SPIR-V
    SHADER_CHECKER_INCONSISTENT_SPIRV,         // General inconsistency within a SPIR-V module
    SHADER_CHECKER_UNKNOWN_STAGE,              // Stage is not supported by analysis
    SHADER_CHECKER_INCONSISTENT_VI,            // VI state contains conflicting binding or attrib descriptions
    SHADER_CHECKER_MISSING_DESCRIPTOR,         // Shader attempts to use a descriptor binding not declared in the layout
    SHADER_CHECKER_BAD_SPECIALIZATION,         // Specialization map entry points outside specialization data block
    SHADER_CHECKER_MISSING_ENTRYPOINT,         // Shader module does not contain the requested entrypoint
    SHADER_CHECKER_PUSH_CONSTANT_OUT_OF_RANGE, // Push constant variable is not in a push constant range
    SHADER_CHECKER_PUSH_CONSTANT_NOT_ACCESSIBLE_FROM_STAGE, // Push constant range exists, but not accessible from stage
    SHADER_CHECKER_DESCRIPTOR_TYPE_MISMATCH,   // Descriptor type does not match shader resource type
    SHADER_CHECKER_DESCRIPTOR_NOT_ACCESSIBLE_FROM_STAGE, // Descriptor used by shader, but not
                                        // accessible from stage
    SHADER_CHECKER_FEATURE_NOT_ENABLED, // Shader uses capability requiring a feature not enabled on device
    SHADER_CHECKER_BAD_CAPABILITY,      // Shader uses capability not supported by Vulkan (OpenCL features)
} SHADER_CHECKER_ERROR;

typedef enum _DRAW_TYPE {
    DRAW = 0,
    DRAW_INDEXED = 1,
    DRAW_INDIRECT = 2,
    DRAW_INDEXED_INDIRECT = 3,
    DRAW_BEGIN_RANGE = DRAW,
    DRAW_END_RANGE = DRAW_INDEXED_INDIRECT,
    NUM_DRAW_TYPES = (DRAW_END_RANGE - DRAW_BEGIN_RANGE + 1),
} DRAW_TYPE;

typedef struct _SHADER_DS_MAPPING {
    uint32_t slotCount;
    VkDescriptorSetLayoutCreateInfo *pShaderMappingSlot;
} SHADER_DS_MAPPING;

typedef struct _GENERIC_HEADER {
    VkStructureType sType;
    const void *pNext;
} GENERIC_HEADER;

typedef struct _PIPELINE_NODE {
    VkPipeline pipeline;
    VkGraphicsPipelineCreateInfo graphicsPipelineCI;
    VkPipelineVertexInputStateCreateInfo vertexInputCI;
    VkPipelineInputAssemblyStateCreateInfo iaStateCI;
    VkPipelineTessellationStateCreateInfo tessStateCI;
    VkPipelineViewportStateCreateInfo vpStateCI;
    VkPipelineRasterizationStateCreateInfo rsStateCI;
    VkPipelineMultisampleStateCreateInfo msStateCI;
    VkPipelineColorBlendStateCreateInfo cbStateCI;
    VkPipelineDepthStencilStateCreateInfo dsStateCI;
    VkPipelineDynamicStateCreateInfo dynStateCI;
    VkPipelineShaderStageCreateInfo vsCI;
    VkPipelineShaderStageCreateInfo tcsCI;
    VkPipelineShaderStageCreateInfo tesCI;
    VkPipelineShaderStageCreateInfo gsCI;
    VkPipelineShaderStageCreateInfo fsCI;
    // Compute shader is included in VkComputePipelineCreateInfo
    VkComputePipelineCreateInfo computePipelineCI;
    // Flag of which shader stages are active for this pipeline
    uint32_t active_shaders;
    // Capture which slots (set#->bindings) are actually used by the shaders of this pipeline
    unordered_map<uint32_t, unordered_set<uint32_t>> active_slots;
    // Vtx input info (if any)
    std::vector<VkVertexInputBindingDescription> vertexBindingDescriptions;
    std::vector<VkVertexInputAttributeDescription> vertexAttributeDescriptions;
    std::vector<VkPipelineColorBlendAttachmentState> attachments;
    bool blendConstantsEnabled; // Blend constants enabled for any attachments
    // Default constructor
    _PIPELINE_NODE() :
pipeline{}, graphicsPipelineCI{}, vertexInputCI{}, iaStateCI{}, tessStateCI{}, vpStateCI{}, rsStateCI{}, msStateCI{}, cbStateCI{}, dsStateCI{}, dynStateCI{}, vsCI{}, tcsCI{}, tesCI{}, gsCI{}, fsCI{}, computePipelineCI{}, active_shaders(0), active_slots(), vertexBindingDescriptions(), vertexAttributeDescriptions(), attachments(), blendConstantsEnabled(false) {} } PIPELINE_NODE; class BASE_NODE { public: std::atomic_int in_use; }; typedef struct _SAMPLER_NODE { VkSampler sampler; VkSamplerCreateInfo createInfo; _SAMPLER_NODE(const VkSampler *ps, const VkSamplerCreateInfo *pci) : sampler(*ps), createInfo(*pci){}; } SAMPLER_NODE; class IMAGE_NODE : public BASE_NODE { public: VkImageCreateInfo createInfo; VkDeviceMemory mem; VkDeviceSize memOffset; VkDeviceSize memSize; }; typedef struct _IMAGE_LAYOUT_NODE { VkImageLayout layout; VkFormat format; } IMAGE_LAYOUT_NODE; class IMAGE_CMD_BUF_LAYOUT_NODE { public: IMAGE_CMD_BUF_LAYOUT_NODE() {} IMAGE_CMD_BUF_LAYOUT_NODE(VkImageLayout initialLayoutInput, VkImageLayout layoutInput) : initialLayout(initialLayoutInput), layout(layoutInput) {} VkImageLayout initialLayout; VkImageLayout layout; }; class BUFFER_NODE : public BASE_NODE { public: using BASE_NODE::in_use; unique_ptr create_info; }; // Store the DAG. 
struct DAGNode { uint32_t pass; std::vector prev; std::vector next; }; struct RENDER_PASS_NODE { VkRenderPassCreateInfo const *pCreateInfo; VkFramebuffer fb; vector hasSelfDependency; vector subpassToNode; vector> subpassColorFormats; vector attachments; unordered_map attachment_first_read; unordered_map attachment_first_layout; RENDER_PASS_NODE(VkRenderPassCreateInfo const *pCreateInfo) : pCreateInfo(pCreateInfo), fb(VK_NULL_HANDLE) { uint32_t i; subpassColorFormats.reserve(pCreateInfo->subpassCount); for (i = 0; i < pCreateInfo->subpassCount; i++) { const VkSubpassDescription *subpass = &pCreateInfo->pSubpasses[i]; vector color_formats; uint32_t j; color_formats.reserve(subpass->colorAttachmentCount); for (j = 0; j < subpass->colorAttachmentCount; j++) { const uint32_t att = subpass->pColorAttachments[j].attachment; const VkFormat format = pCreateInfo->pAttachments[att].format; color_formats.push_back(format); } subpassColorFormats.push_back(color_formats); } } }; class PHYS_DEV_PROPERTIES_NODE { public: VkPhysicalDeviceProperties properties; VkPhysicalDeviceFeatures features; vector queue_family_properties; }; class FENCE_NODE : public BASE_NODE { public: using BASE_NODE::in_use; #if MTMERGE uint64_t fenceId; // Sequence number for fence at last submit VkSwapchainKHR swapchain; // Swapchain that this fence is submitted against or NULL VkBool32 firstTimeFlag; // Fence was created in signaled state, avoid warnings for first use VkFenceCreateInfo createInfo; #endif VkQueue queue; vector cmdBuffers; bool needsSignaled; vector priorFences; // Default constructor FENCE_NODE() : queue(NULL), needsSignaled(VK_FALSE){}; }; class SEMAPHORE_NODE : public BASE_NODE { public: using BASE_NODE::in_use; uint32_t signaled; SemaphoreState state; VkQueue queue; }; class EVENT_NODE : public BASE_NODE { public: using BASE_NODE::in_use; bool needsSignaled; VkPipelineStageFlags stageMask; }; class QUEUE_NODE { public: VkDevice device; vector lastFences; #if MTMERGE uint64_t 
lastRetiredId; uint64_t lastSubmittedId; // MTMTODO : merge cmd_buffer data structs here list pQueueCommandBuffers; list pMemRefList; #endif vector untrackedCmdBuffers; unordered_set inFlightCmdBuffers; unordered_map eventToStageMap; }; class QUERY_POOL_NODE : public BASE_NODE { public: VkQueryPoolCreateInfo createInfo; }; class FRAMEBUFFER_NODE { public: VkFramebufferCreateInfo createInfo; unordered_set referencingCmdBuffers; vector attachments; }; // Descriptor Data structures // Layout Node has the core layout data typedef struct _LAYOUT_NODE { VkDescriptorSetLayout layout; VkDescriptorSetLayoutCreateInfo createInfo; uint32_t startIndex; // 1st index of this layout uint32_t endIndex; // last index of this layout uint32_t dynamicDescriptorCount; // Total count of dynamic descriptors used // by this layout vector descriptorTypes; // Type per descriptor in this // layout to verify correct // updates vector stageFlags; // stageFlags per descriptor in this // layout to verify correct updates unordered_map bindingToIndexMap; // map set binding # to // createInfo.pBindings index // Default constructor _LAYOUT_NODE() : layout{}, createInfo{}, startIndex(0), endIndex(0), dynamicDescriptorCount(0){}; } LAYOUT_NODE; // Store layouts and pushconstants for PipelineLayout struct PIPELINE_LAYOUT_NODE { vector descriptorSetLayouts; vector pushConstantRanges; }; class SET_NODE : public BASE_NODE { public: using BASE_NODE::in_use; VkDescriptorSet set; VkDescriptorPool pool; // Head of LL of all Update structs for this set GENERIC_HEADER *pUpdateStructs; // Total num of descriptors in this set (count of its layout plus all prior layouts) uint32_t descriptorCount; vector pDescriptorUpdates; // Vector where each index points to update node for its slot LAYOUT_NODE *pLayout; // Layout for this set SET_NODE *pNext; unordered_set boundCmdBuffers; // Cmd buffers that this set has been bound to SET_NODE() : set(VK_NULL_HANDLE), pool(VK_NULL_HANDLE), pUpdateStructs(nullptr), 
pLayout(nullptr), pNext(nullptr){}; }; typedef struct _DESCRIPTOR_POOL_NODE { VkDescriptorPool pool; uint32_t maxSets; // Max descriptor sets allowed in this pool uint32_t availableSets; // Available descriptor sets in this pool VkDescriptorPoolCreateInfo createInfo; SET_NODE *pSets; // Head of LL of sets for this Pool vector maxDescriptorTypeCount; // Max # of descriptors of each type in this pool vector availableDescriptorTypeCount; // Available # of descriptors of each type in this pool _DESCRIPTOR_POOL_NODE(const VkDescriptorPool pool, const VkDescriptorPoolCreateInfo *pCreateInfo) : pool(pool), maxSets(pCreateInfo->maxSets), availableSets(pCreateInfo->maxSets), createInfo(*pCreateInfo), pSets(NULL), maxDescriptorTypeCount(VK_DESCRIPTOR_TYPE_RANGE_SIZE), availableDescriptorTypeCount(VK_DESCRIPTOR_TYPE_RANGE_SIZE) { if (createInfo.poolSizeCount) { // Shadow type struct from ptr into local struct size_t poolSizeCountSize = createInfo.poolSizeCount * sizeof(VkDescriptorPoolSize); createInfo.pPoolSizes = new VkDescriptorPoolSize[poolSizeCountSize]; memcpy((void *)createInfo.pPoolSizes, pCreateInfo->pPoolSizes, poolSizeCountSize); // Now set max counts for each descriptor type based on count of that type times maxSets uint32_t i = 0; for (i = 0; i < createInfo.poolSizeCount; ++i) { uint32_t typeIndex = static_cast(createInfo.pPoolSizes[i].type); maxDescriptorTypeCount[typeIndex] = createInfo.pPoolSizes[i].descriptorCount; availableDescriptorTypeCount[typeIndex] = maxDescriptorTypeCount[typeIndex]; } } else { createInfo.pPoolSizes = NULL; // Make sure this is NULL so we don't try to clean it up } } ~_DESCRIPTOR_POOL_NODE() { delete[] createInfo.pPoolSizes; // TODO : pSets are currently freed in deletePools function which uses freeShadowUpdateTree function // need to migrate that struct to smart ptrs for auto-cleanup } } DESCRIPTOR_POOL_NODE; // Cmd Buffer Tracking typedef enum _CMD_TYPE { CMD_BINDPIPELINE, CMD_BINDPIPELINEDELTA, CMD_SETVIEWPORTSTATE, 
CMD_SETSCISSORSTATE, CMD_SETLINEWIDTHSTATE, CMD_SETDEPTHBIASSTATE, CMD_SETBLENDSTATE, CMD_SETDEPTHBOUNDSSTATE, CMD_SETSTENCILREADMASKSTATE, CMD_SETSTENCILWRITEMASKSTATE, CMD_SETSTENCILREFERENCESTATE, CMD_BINDDESCRIPTORSETS, CMD_BINDINDEXBUFFER, CMD_BINDVERTEXBUFFER, CMD_DRAW, CMD_DRAWINDEXED, CMD_DRAWINDIRECT, CMD_DRAWINDEXEDINDIRECT, CMD_DISPATCH, CMD_DISPATCHINDIRECT, CMD_COPYBUFFER, CMD_COPYIMAGE, CMD_BLITIMAGE, CMD_COPYBUFFERTOIMAGE, CMD_COPYIMAGETOBUFFER, CMD_CLONEIMAGEDATA, CMD_UPDATEBUFFER, CMD_FILLBUFFER, CMD_CLEARCOLORIMAGE, CMD_CLEARATTACHMENTS, CMD_CLEARDEPTHSTENCILIMAGE, CMD_RESOLVEIMAGE, CMD_SETEVENT, CMD_RESETEVENT, CMD_WAITEVENTS, CMD_PIPELINEBARRIER, CMD_BEGINQUERY, CMD_ENDQUERY, CMD_RESETQUERYPOOL, CMD_COPYQUERYPOOLRESULTS, CMD_WRITETIMESTAMP, CMD_PUSHCONSTANTS, CMD_INITATOMICCOUNTERS, CMD_LOADATOMICCOUNTERS, CMD_SAVEATOMICCOUNTERS, CMD_BEGINRENDERPASS, CMD_NEXTSUBPASS, CMD_ENDRENDERPASS, CMD_EXECUTECOMMANDS, } CMD_TYPE; // Data structure for holding sequence of cmds in cmd buffer typedef struct _CMD_NODE { CMD_TYPE type; uint64_t cmdNumber; } CMD_NODE; typedef enum _CB_STATE { CB_NEW, // Newly created CB w/o any cmds CB_RECORDING, // BeginCB has been called on this CB CB_RECORDED, // EndCB has been called on this CB CB_INVALID // CB had a bound descriptor set destroyed or updated } CB_STATE; // CB Status -- used to track status of various bindings on cmd buffer objects typedef VkFlags CBStatusFlags; typedef enum _CBStatusFlagBits { // clang-format off CBSTATUS_NONE = 0x00000000, // No status is set CBSTATUS_VIEWPORT_SET = 0x00000001, // Viewport has been set CBSTATUS_LINE_WIDTH_SET = 0x00000002, // Line width has been set CBSTATUS_DEPTH_BIAS_SET = 0x00000004, // Depth bias has been set CBSTATUS_BLEND_CONSTANTS_SET = 0x00000008, // Blend constants state has been set CBSTATUS_DEPTH_BOUNDS_SET = 0x00000010, // Depth bounds state object has been set CBSTATUS_STENCIL_READ_MASK_SET = 0x00000020, // Stencil read mask has been set 
    CBSTATUS_STENCIL_WRITE_MASK_SET = 0x00000040, // Stencil write mask has been set
    CBSTATUS_STENCIL_REFERENCE_SET  = 0x00000080, // Stencil reference has been set
    CBSTATUS_INDEX_BUFFER_BOUND     = 0x00000100, // Index buffer has been set
    CBSTATUS_SCISSOR_SET            = 0x00000200, // Scissor has been set
    CBSTATUS_ALL                    = 0x000003FF, // All dynamic state set
    // clang-format on
} CBStatusFlagBits;

typedef struct stencil_data {
    uint32_t compareMask;
    uint32_t writeMask;
    uint32_t reference;
} CBStencilData;

typedef struct _DRAW_DATA { vector<VkBuffer> buffers; } DRAW_DATA;

struct ImageSubresourcePair {
    VkImage image;
    bool hasSubresource;
    VkImageSubresource subresource;
};

bool operator==(const ImageSubresourcePair &img1, const ImageSubresourcePair &img2) {
    if (img1.image != img2.image || img1.hasSubresource != img2.hasSubresource)
        return false;
    return !img1.hasSubresource ||
           (img1.subresource.aspectMask == img2.subresource.aspectMask && img1.subresource.mipLevel == img2.subresource.mipLevel &&
            img1.subresource.arrayLayer == img2.subresource.arrayLayer);
}

namespace std {
template <> struct hash<ImageSubresourcePair> {
    size_t operator()(ImageSubresourcePair img) const throw() {
        size_t hashVal = hash<uint64_t>()(reinterpret_cast<uint64_t &>(img.image));
        hashVal ^= hash<bool>()(img.hasSubresource);
        if (img.hasSubresource) {
            hashVal ^= hash<uint32_t>()(reinterpret_cast<uint32_t &>(img.subresource.aspectMask));
            hashVal ^= hash<uint32_t>()(img.subresource.mipLevel);
            hashVal ^= hash<uint32_t>()(img.subresource.arrayLayer);
        }
        return hashVal;
    }
};
}

struct QueryObject {
    VkQueryPool pool;
    uint32_t index;
};

bool operator==(const QueryObject &query1, const QueryObject &query2) {
    return (query1.pool == query2.pool && query1.index == query2.index);
}

namespace std {
template <> struct hash<QueryObject> {
    size_t operator()(QueryObject query) const throw() {
        return hash<uint64_t>()((uint64_t)(query.pool)) ^ hash<uint32_t>()(query.index);
    }
};
}

// Track last states that are bound per pipeline bind point (Gfx & Compute)
struct LAST_BOUND_STATE {
    VkPipeline pipeline;
    VkPipelineLayout pipelineLayout;
    // Track each set that has been bound
// TODO : can unique be global per CB? (do we care about Gfx vs. Compute?) unordered_set uniqueBoundSets; // Ordered bound set tracking where index is set# that given set is bound to vector boundDescriptorSets; // one dynamic offset per dynamic descriptor bound to this CB vector dynamicOffsets; void reset() { pipeline = VK_NULL_HANDLE; pipelineLayout = VK_NULL_HANDLE; uniqueBoundSets.clear(); boundDescriptorSets.clear(); dynamicOffsets.clear(); } }; // Cmd Buffer Wrapper Struct struct GLOBAL_CB_NODE { VkCommandBuffer commandBuffer; VkCommandBufferAllocateInfo createInfo; VkCommandBufferBeginInfo beginInfo; VkCommandBufferInheritanceInfo inheritanceInfo; // VkFence fence; // fence tracking this cmd buffer VkDevice device; // device this CB belongs to uint64_t numCmds; // number of cmds in this CB uint64_t drawCount[NUM_DRAW_TYPES]; // Count of each type of draw in this CB CB_STATE state; // Track cmd buffer update state uint64_t submitCount; // Number of times CB has been submitted CBStatusFlags status; // Track status of various bindings on cmd buffer vector cmds; // vector of commands bound to this command buffer // Currently storing "lastBound" objects on per-CB basis // long-term may want to create caches of "lastBound" states and could have // each individual CMD_NODE referencing its own "lastBound" state // VkPipeline lastBoundPipeline; // VkPipelineLayout lastBoundPipelineLayout; // // Capture unique std::set of descriptorSets that are bound to this CB. 
// std::set uniqueBoundSets; // vector boundDescriptorSets; // Index is set# that given set is bound to // Store last bound state for Gfx & Compute pipeline bind points LAST_BOUND_STATE lastBound[VK_PIPELINE_BIND_POINT_RANGE_SIZE]; vector dynamicOffsets; vector viewports; vector scissors; VkRenderPassBeginInfo activeRenderPassBeginInfo; uint64_t fenceId; VkFence lastSubmittedFence; VkQueue lastSubmittedQueue; VkRenderPass activeRenderPass; VkSubpassContents activeSubpassContents; uint32_t activeSubpass; VkFramebuffer framebuffer; // Track descriptor sets that are destroyed or updated while bound to CB // TODO : These data structures relate to tracking resources that invalidate // a cmd buffer that references them. Need to unify how we handle these // cases so we don't have different tracking data for each type. std::set destroyedSets; std::set updatedSets; unordered_set destroyedFramebuffers; vector waitedEvents; vector semaphores; vector events; unordered_map> waitedEventsBeforeQueryReset; unordered_map queryToStateMap; // 0 is unavailable, 1 is available unordered_set activeQueries; unordered_set startedQueries; unordered_map imageLayoutMap; unordered_map> imageSubresourceMap; unordered_map eventToStageMap; vector drawData; DRAW_DATA currentDrawData; VkCommandBuffer primaryCommandBuffer; // Track images and buffers that are updated by this CB at the point of a draw unordered_set updateImages; unordered_set updateBuffers; // If cmd buffer is primary, track secondary command buffers pending // execution std::unordered_set secondaryCommandBuffers; // MTMTODO : Scrub these data fields and merge active sets w/ lastBound as appropriate vector> validate_functions; std::unordered_set memObjs; vector> eventUpdates; }; class SWAPCHAIN_NODE { public: VkSwapchainCreateInfoKHR createInfo; uint32_t *pQueueFamilyIndices; std::vector images; SWAPCHAIN_NODE(const VkSwapchainCreateInfoKHR *pCreateInfo) : createInfo(*pCreateInfo), pQueueFamilyIndices(NULL) { if 
 (pCreateInfo->queueFamilyIndexCount && pCreateInfo->imageSharingMode == VK_SHARING_MODE_CONCURRENT) {
            pQueueFamilyIndices = new uint32_t[pCreateInfo->queueFamilyIndexCount];
            memcpy(pQueueFamilyIndices, pCreateInfo->pQueueFamilyIndices, pCreateInfo->queueFamilyIndexCount * sizeof(uint32_t));
            createInfo.pQueueFamilyIndices = pQueueFamilyIndices;
        }
    }
    ~SWAPCHAIN_NODE() { delete[] pQueueFamilyIndices; }
};

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/device_limits.cpp

/* Copyright (c) 2015-2016 The Khronos Group Inc.
 * Copyright (c) 2015-2016 Valve Corporation
 * Copyright (c) 2015-2016 LunarG, Inc.
 * Copyright (C) 2015-2016 Google Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and/or associated documentation files (the "Materials"), to
 * deal in the Materials without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Materials, and to permit persons to whom the Materials
 * are furnished to do so, subject to the following conditions:
 *
 * The above copyright notice(s) and this permission notice shall be included
 * in all copies or substantial portions of the Materials.
 *
 * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 *
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
 * USE OR OTHER DEALINGS IN THE MATERIALS
 *
 * Author: Mark Lobodzinski
 * Author: Mike Stroyan
 * Author: Tobin Ehlis
 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
#include <unordered_map>

#include "vk_loader_platform.h"
#include "vk_dispatch_table_helper.h"
#if defined(__GNUC__)
#pragma GCC diagnostic ignored "-Wwrite-strings"
#endif
#if defined(__GNUC__)
#pragma GCC diagnostic warning "-Wwrite-strings"
#endif
#include "vk_struct_size_helper.h"
#include "device_limits.h"
#include "vulkan/vk_layer.h"
#include "vk_layer_config.h"
#include "vk_enum_validate_helper.h"
#include "vk_layer_table.h"
#include "vk_layer_data.h"
#include "vk_layer_logging.h"
#include "vk_layer_extension_utils.h"
#include "vk_layer_utils.h"

// This struct will be stored in a map hashed by the dispatchable object
struct layer_data {
    debug_report_data *report_data;
    std::vector<VkDebugReportCallbackEXT> logging_callback;
    VkLayerDispatchTable *device_dispatch_table;
    VkLayerInstanceDispatchTable *instance_dispatch_table;
    // Track state of each instance
    unique_ptr<INSTANCE_STATE> instanceState;
    unique_ptr<PHYSICAL_DEVICE_STATE> physicalDeviceState;
    VkPhysicalDeviceFeatures actualPhysicalDeviceFeatures;
    VkPhysicalDeviceFeatures requestedPhysicalDeviceFeatures;
    // Track physical device per logical device
    VkPhysicalDevice physicalDevice;
    VkPhysicalDeviceProperties physicalDeviceProperties;
    // Vector indices correspond to queueFamilyIndex
    vector<unique_ptr<VkQueueFamilyProperties>> queueFamilyProperties;

    layer_data()
        : report_data(nullptr), device_dispatch_table(nullptr), instance_dispatch_table(nullptr), instanceState(nullptr),
          physicalDeviceState(nullptr), actualPhysicalDeviceFeatures(), requestedPhysicalDeviceFeatures(), physicalDevice(){};
};

static unordered_map<void *, layer_data *> layer_data_map;

// TODO : This can be much smarter, using separate locks for separate global data
static int globalLockInitialized = 0;
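As an aside: `layer_data_map` above keys the layer's per-object state by dispatch key, and `get_my_data_ptr()` returns the bucket for a key, creating it on first use. A minimal standalone sketch of that get-or-create pattern follows; the names `State`, `StateMap`, and `get_state` are hypothetical stand-ins, not part of the layer.

```cpp
#include <cassert>
#include <unordered_map>

// Hypothetical stand-in for the layer's per-object state (layer_data).
struct State {
    int callCount = 0;
};

using StateMap = std::unordered_map<void *, State *>;

// Get-or-create: return the state bucket for a dispatch key,
// allocating it lazily the first time the key is seen.
State *get_state(void *key, StateMap &map) {
    auto it = map.find(key);
    if (it != map.end())
        return it->second;
    State *s = new State();
    map[key] = s; // caller/owner is responsible for deleting on teardown
    return s;
}
```

The same key always returns the same bucket, so every entry point that derives the dispatch key from its first handle argument sees consistent state for that instance or device.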
static loader_platform_thread_mutex globalLock;

template layer_data *get_my_data_ptr<layer_data>(void *data_key, std::unordered_map<void *, layer_data *> &data_map);

static void init_device_limits(layer_data *my_data, const VkAllocationCallbacks *pAllocator) {

    layer_debug_actions(my_data->report_data, my_data->logging_callback, pAllocator, "lunarg_device_limits");

    if (!globalLockInitialized) {
        // TODO/TBD: Need to delete this mutex sometime. How??? One
        // suggestion is to call this during vkCreateInstance(), and then we
        // can clean it up during vkDestroyInstance(). However, that requires
        // that the layer have per-instance locks. We need to come back and
        // address this soon.
        loader_platform_thread_create_mutex(&globalLock);
        globalLockInitialized = 1;
    }
}

static const VkExtensionProperties instance_extensions[] = {{VK_EXT_DEBUG_REPORT_EXTENSION_NAME, VK_EXT_DEBUG_REPORT_SPEC_VERSION}};

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkEnumerateInstanceExtensionProperties(const char *pLayerName, uint32_t *pCount, VkExtensionProperties *pProperties) {
    return util_GetExtensionProperties(1, instance_extensions, pCount, pProperties);
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkEnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice, const char *pLayerName, uint32_t *pCount,
                                     VkExtensionProperties *pProperties) {
    if (pLayerName == NULL) {
        dispatch_key key = get_dispatch_key(physicalDevice);
        layer_data *my_data = get_my_data_ptr(key, layer_data_map);
        return my_data->instance_dispatch_table->EnumerateDeviceExtensionProperties(physicalDevice, NULL, pCount, pProperties);
    } else {
        return util_GetExtensionProperties(0, nullptr, pCount, pProperties);
    }
}

static const VkLayerProperties dl_global_layers[] = {{
    "VK_LAYER_LUNARG_device_limits", VK_LAYER_API_VERSION, 1, "LunarG Validation Layer",
}};

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkEnumerateInstanceLayerProperties(uint32_t *pCount, VkLayerProperties *pProperties) {
    return util_GetLayerProperties(ARRAY_SIZE(dl_global_layers),
                                   dl_global_layers, pCount, pProperties);
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkEnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice, uint32_t *pCount, VkLayerProperties *pProperties) {
    return util_GetLayerProperties(ARRAY_SIZE(dl_global_layers), dl_global_layers, pCount, pProperties);
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance(const VkInstanceCreateInfo *pCreateInfo,
                                                                const VkAllocationCallbacks *pAllocator, VkInstance *pInstance) {
    VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);

    assert(chain_info->u.pLayerInfo);
    PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
    PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance)fpGetInstanceProcAddr(NULL, "vkCreateInstance");
    if (fpCreateInstance == NULL) {
        return VK_ERROR_INITIALIZATION_FAILED;
    }

    // Advance the link info for the next element on the chain
    chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;

    VkResult result = fpCreateInstance(pCreateInfo, pAllocator, pInstance);
    if (result != VK_SUCCESS)
        return result;

    layer_data *my_data = get_my_data_ptr(get_dispatch_key(*pInstance), layer_data_map);
    my_data->instance_dispatch_table = new VkLayerInstanceDispatchTable;
    layer_init_instance_dispatch_table(*pInstance, my_data->instance_dispatch_table, fpGetInstanceProcAddr);

    my_data->report_data = debug_report_create_instance(my_data->instance_dispatch_table, *pInstance,
                                                        pCreateInfo->enabledExtensionCount, pCreateInfo->ppEnabledExtensionNames);

    init_device_limits(my_data, pAllocator);
    my_data->instanceState = unique_ptr<INSTANCE_STATE>(new INSTANCE_STATE());

    return VK_SUCCESS;
}

/* hook DestroyInstance to remove tableInstanceMap entry */
VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance, const VkAllocationCallbacks *pAllocator) {
    dispatch_key key = get_dispatch_key(instance);
    layer_data *my_data = get_my_data_ptr(key, layer_data_map);
    VkLayerInstanceDispatchTable
*pTable = my_data->instance_dispatch_table; pTable->DestroyInstance(instance, pAllocator); // Clean up logging callback, if any while (my_data->logging_callback.size() > 0) { VkDebugReportCallbackEXT callback = my_data->logging_callback.back(); layer_destroy_msg_callback(my_data->report_data, callback, pAllocator); my_data->logging_callback.pop_back(); } layer_debug_report_destroy_instance(my_data->report_data); delete my_data->instance_dispatch_table; layer_data_map.erase(key); if (layer_data_map.empty()) { // Release mutex when destroying last instance. loader_platform_thread_delete_mutex(&globalLock); globalLockInitialized = 0; } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumeratePhysicalDevices(VkInstance instance, uint32_t *pPhysicalDeviceCount, VkPhysicalDevice *pPhysicalDevices) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map); if (my_data->instanceState) { // For this instance, flag when vkEnumeratePhysicalDevices goes to QUERY_COUNT and then QUERY_DETAILS if (NULL == pPhysicalDevices) { my_data->instanceState->vkEnumeratePhysicalDevicesState = QUERY_COUNT; } else { if (UNCALLED == my_data->instanceState->vkEnumeratePhysicalDevicesState) { // Flag error here, shouldn't be calling this without having queried count skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, 0, __LINE__, DEVLIMITS_MUST_QUERY_COUNT, "DL", "Invalid call sequence to vkEnumeratePhysicalDevices() w/ non-NULL pPhysicalDevices. 
You should first "
                                    "call vkEnumeratePhysicalDevices() w/ NULL pPhysicalDevices to query pPhysicalDeviceCount.");
            }
            // TODO : Could also flag a warning if re-calling this function in QUERY_DETAILS state
            else if (my_data->instanceState->physicalDevicesCount != *pPhysicalDeviceCount) {
                // TODO: Having actual count match count from app is not a requirement, so this can be a warning
                skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_COUNT_MISMATCH, "DL",
                                    "Call to vkEnumeratePhysicalDevices() w/ pPhysicalDeviceCount value %u, but actual count "
                                    "supported by this instance is %u.",
                                    *pPhysicalDeviceCount, my_data->instanceState->physicalDevicesCount);
            }
            my_data->instanceState->vkEnumeratePhysicalDevicesState = QUERY_DETAILS;
        }
        if (skipCall)
            return VK_ERROR_VALIDATION_FAILED_EXT;
        VkResult result =
            my_data->instance_dispatch_table->EnumeratePhysicalDevices(instance, pPhysicalDeviceCount, pPhysicalDevices);
        if (NULL == pPhysicalDevices) {
            my_data->instanceState->physicalDevicesCount = *pPhysicalDeviceCount;
        } else { // Save physical devices
            for (uint32_t i = 0; i < *pPhysicalDeviceCount; i++) {
                layer_data *phy_dev_data = get_my_data_ptr(get_dispatch_key(pPhysicalDevices[i]), layer_data_map);
                phy_dev_data->physicalDeviceState = unique_ptr<PHYSICAL_DEVICE_STATE>(new PHYSICAL_DEVICE_STATE());
                // Init actual features for each physical device
                my_data->instance_dispatch_table->GetPhysicalDeviceFeatures(pPhysicalDevices[i],
                                                                            &(phy_dev_data->actualPhysicalDeviceFeatures));
            }
        }
        return result;
    } else {
        log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, 0, __LINE__,
                DEVLIMITS_INVALID_INSTANCE, "DL", "Invalid instance (%#" PRIxLEAST64 ") passed into vkEnumeratePhysicalDevices().",
                (uint64_t)instance);
    }
    return VK_ERROR_VALIDATION_FAILED_EXT;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceFeatures(VkPhysicalDevice physicalDevice,
VkPhysicalDeviceFeatures *pFeatures) { layer_data *phy_dev_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); phy_dev_data->physicalDeviceState->vkGetPhysicalDeviceFeaturesState = QUERY_DETAILS; phy_dev_data->instance_dispatch_table->GetPhysicalDeviceFeatures(physicalDevice, pFeatures); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkFormatProperties *pFormatProperties) { get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map) ->instance_dispatch_table->GetPhysicalDeviceFormatProperties(physicalDevice, format, pFormatProperties); } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceImageFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkImageTiling tiling, VkImageUsageFlags usage, VkImageCreateFlags flags, VkImageFormatProperties *pImageFormatProperties) { return get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map) ->instance_dispatch_table->GetPhysicalDeviceImageFormatProperties(physicalDevice, format, type, tiling, usage, flags, pImageFormatProperties); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceProperties *pProperties) { layer_data *phy_dev_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); phy_dev_data->instance_dispatch_table->GetPhysicalDeviceProperties(physicalDevice, pProperties); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceQueueFamilyProperties(VkPhysicalDevice physicalDevice, uint32_t *pCount, VkQueueFamilyProperties *pQueueFamilyProperties) { VkBool32 skipCall = VK_FALSE; layer_data *phy_dev_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); if (phy_dev_data->physicalDeviceState) { if (NULL == pQueueFamilyProperties) { phy_dev_data->physicalDeviceState->vkGetPhysicalDeviceQueueFamilyPropertiesState = QUERY_COUNT; } else { // Verify 
that for each physical device, this function is called first with NULL pQueueFamilyProperties ptr in order to // get count if (UNCALLED == phy_dev_data->physicalDeviceState->vkGetPhysicalDeviceQueueFamilyPropertiesState) { skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_MUST_QUERY_COUNT, "DL", "Invalid call sequence to vkGetPhysicalDeviceQueueFamilyProperties() w/ non-NULL " "pQueueFamilyProperties. You should first call vkGetPhysicalDeviceQueueFamilyProperties() w/ " "NULL pQueueFamilyProperties to query pCount."); } // Then verify that pCount that is passed in on second call matches what was returned if (phy_dev_data->physicalDeviceState->queueFamilyPropertiesCount != *pCount) { // TODO: this is not a requirement of the Valid Usage section for vkGetPhysicalDeviceQueueFamilyProperties, so // provide as warning skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_COUNT_MISMATCH, "DL", "Call to vkGetPhysicalDeviceQueueFamilyProperties() w/ pCount value %u, but actual count " "supported by this physicalDevice is %u.", *pCount, phy_dev_data->physicalDeviceState->queueFamilyPropertiesCount); } phy_dev_data->physicalDeviceState->vkGetPhysicalDeviceQueueFamilyPropertiesState = QUERY_DETAILS; } if (skipCall) return; phy_dev_data->instance_dispatch_table->GetPhysicalDeviceQueueFamilyProperties(physicalDevice, pCount, pQueueFamilyProperties); if (NULL == pQueueFamilyProperties) { phy_dev_data->physicalDeviceState->queueFamilyPropertiesCount = *pCount; } else { // Save queue family properties phy_dev_data->queueFamilyProperties.reserve(*pCount); for (uint32_t i = 0; i < *pCount; i++) { phy_dev_data->queueFamilyProperties.emplace_back(new VkQueueFamilyProperties(pQueueFamilyProperties[i])); } } return; } else { log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, 
VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_PHYSICAL_DEVICE, "DL", "Invalid physicalDevice (%#" PRIxLEAST64 ") passed into vkGetPhysicalDeviceQueueFamilyProperties().", (uint64_t)physicalDevice); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceMemoryProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceMemoryProperties *pMemoryProperties) { get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map) ->instance_dispatch_table->GetPhysicalDeviceMemoryProperties(physicalDevice, pMemoryProperties); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceSparseImageFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkSampleCountFlagBits samples, VkImageUsageFlags usage, VkImageTiling tiling, uint32_t *pNumProperties, VkSparseImageFormatProperties *pProperties) { get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map) ->instance_dispatch_table->GetPhysicalDeviceSparseImageFormatProperties(physicalDevice, format, type, samples, usage, tiling, pNumProperties, pProperties); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetViewport(VkCommandBuffer commandBuffer, uint32_t firstViewport, uint32_t viewportCount, const VkViewport *pViewports) { VkBool32 skipCall = VK_FALSE; /* TODO: Verify viewportCount < maxViewports from VkPhysicalDeviceLimits */ if (VK_FALSE == skipCall) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); my_data->device_dispatch_table->CmdSetViewport(commandBuffer, firstViewport, viewportCount, pViewports); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetScissor(VkCommandBuffer commandBuffer, uint32_t firstScissor, uint32_t scissorCount, const VkRect2D *pScissors) { VkBool32 skipCall = VK_FALSE; /* TODO: Verify scissorCount < maxViewports from VkPhysicalDeviceLimits */ /* TODO: viewportCount and scissorCount must match at draw time */ if (VK_FALSE == skipCall) { layer_data *my_data = 
get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); my_data->device_dispatch_table->CmdSetScissor(commandBuffer, firstScissor, scissorCount, pScissors); } } // Verify that features have been queried and verify that requested features are available static VkBool32 validate_features_request(layer_data *phy_dev_data) { VkBool32 skipCall = VK_FALSE; // Verify that all of the requested features are available // Get ptrs into actual and requested structs and if requested is 1 but actual is 0, request is invalid VkBool32 *actual = (VkBool32 *)&(phy_dev_data->actualPhysicalDeviceFeatures); VkBool32 *requested = (VkBool32 *)&(phy_dev_data->requestedPhysicalDeviceFeatures); // TODO : This is a nice, compact way to loop through struct, but a bad way to report issues // Need to provide the struct member name with the issue. To do that seems like we'll // have to loop through each struct member which should be done w/ codegen to keep in synch. uint32_t errors = 0; uint32_t totalBools = sizeof(VkPhysicalDeviceFeatures) / sizeof(VkBool32); for (uint32_t i = 0; i < totalBools; i++) { if (requested[i] > actual[i]) { skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_FEATURE_REQUESTED, "DL", "While calling vkCreateDevice(), requesting feature #%u in VkPhysicalDeviceFeatures struct, " "which is not available on this device.", i); errors++; } } if (errors && (UNCALLED == phy_dev_data->physicalDeviceState->vkGetPhysicalDeviceFeaturesState)) { // If user didn't request features, notify them that they should // TODO: Verify this against the spec. I believe this is an invalid use of the API and should return an error skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_FEATURE_REQUESTED, "DL", "You requested features that are unavailable on this device. 
You should first query feature " "availability by calling vkGetPhysicalDeviceFeatures()."); } return skipCall; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice gpu, const VkDeviceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkDevice *pDevice) { VkBool32 skipCall = VK_FALSE; layer_data *phy_dev_data = get_my_data_ptr(get_dispatch_key(gpu), layer_data_map); // First check is app has actually requested queueFamilyProperties if (!phy_dev_data->physicalDeviceState) { skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_MUST_QUERY_COUNT, "DL", "Invalid call to vkCreateDevice() w/o first calling vkEnumeratePhysicalDevices()."); } else if (QUERY_DETAILS != phy_dev_data->physicalDeviceState->vkGetPhysicalDeviceQueueFamilyPropertiesState) { // TODO: This is not called out as an invalid use in the spec so make more informative recommendation. skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_QUEUE_CREATE_REQUEST, "DL", "Call to vkCreateDevice() w/o first calling vkGetPhysicalDeviceQueueFamilyProperties()."); } else { // Check that the requested queue properties are valid for (uint32_t i = 0; i < pCreateInfo->queueCreateInfoCount; i++) { uint32_t requestedIndex = pCreateInfo->pQueueCreateInfos[i].queueFamilyIndex; if (phy_dev_data->queueFamilyProperties.size() <= requestedIndex) { // requested index is out of bounds for this physical device skipCall |= log_msg( phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_QUEUE_CREATE_REQUEST, "DL", "Invalid queue create request in vkCreateDevice(). 
Invalid queueFamilyIndex %u requested.", requestedIndex); } else if (pCreateInfo->pQueueCreateInfos[i].queueCount > phy_dev_data->queueFamilyProperties[requestedIndex]->queueCount) { skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_QUEUE_CREATE_REQUEST, "DL", "Invalid queue create request in vkCreateDevice(). QueueFamilyIndex %u only has %u queues, but " "requested queueCount is %u.", requestedIndex, phy_dev_data->queueFamilyProperties[requestedIndex]->queueCount, pCreateInfo->pQueueCreateInfos[i].queueCount); } } } // Check that any requested features are available if (pCreateInfo->pEnabledFeatures) { phy_dev_data->requestedPhysicalDeviceFeatures = *(pCreateInfo->pEnabledFeatures); skipCall |= validate_features_request(phy_dev_data); } if (skipCall) return VK_ERROR_VALIDATION_FAILED_EXT; VkLayerDeviceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO); assert(chain_info->u.pLayerInfo); PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr; PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr; PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice)fpGetInstanceProcAddr(NULL, "vkCreateDevice"); if (fpCreateDevice == NULL) { return VK_ERROR_INITIALIZATION_FAILED; } // Advance the link info for the next element on the chain chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext; VkResult result = fpCreateDevice(gpu, pCreateInfo, pAllocator, pDevice); if (result != VK_SUCCESS) { return result; } layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(*pDevice), layer_data_map); my_device_data->device_dispatch_table = new VkLayerDispatchTable; layer_init_device_dispatch_table(*pDevice, my_device_data->device_dispatch_table, fpGetDeviceProcAddr); my_device_data->report_data = layer_debug_report_create_device(phy_dev_data->report_data, 
*pDevice); my_device_data->physicalDevice = gpu; // Get physical device properties for this device phy_dev_data->instance_dispatch_table->GetPhysicalDeviceProperties(gpu, &(my_device_data->physicalDeviceProperties)); return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, const VkAllocationCallbacks *pAllocator) { // Free device lifetime allocations dispatch_key key = get_dispatch_key(device); layer_data *my_device_data = get_my_data_ptr(key, layer_data_map); my_device_data->device_dispatch_table->DestroyDevice(device, pAllocator); delete my_device_data->device_dispatch_table; layer_data_map.erase(key); } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateCommandPool(VkDevice device, const VkCommandPoolCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkCommandPool *pCommandPool) { // TODO : Verify that requested QueueFamilyIndex for this pool exists VkResult result = get_my_data_ptr(get_dispatch_key(device), layer_data_map) ->device_dispatch_table->CreateCommandPool(device, pCreateInfo, pAllocator, pCommandPool); return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyCommandPool(VkDevice device, VkCommandPool commandPool, const VkAllocationCallbacks *pAllocator) { get_my_data_ptr(get_dispatch_key(device), layer_data_map) ->device_dispatch_table->DestroyCommandPool(device, commandPool, pAllocator); } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetCommandPool(VkDevice device, VkCommandPool commandPool, VkCommandPoolResetFlags flags) { VkResult result = get_my_data_ptr(get_dispatch_key(device), layer_data_map) ->device_dispatch_table->ResetCommandPool(device, commandPool, flags); return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAllocateCommandBuffers(VkDevice device, const VkCommandBufferAllocateInfo *pCreateInfo, VkCommandBuffer *pCommandBuffer) { VkResult result = get_my_data_ptr(get_dispatch_key(device), layer_data_map) 
                          ->device_dispatch_table->AllocateCommandBuffers(device, pCreateInfo, pCommandBuffer);
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkFreeCommandBuffers(VkDevice device, VkCommandPool commandPool, uint32_t count,
                                                                const VkCommandBuffer *pCommandBuffers) {
    get_my_data_ptr(get_dispatch_key(device), layer_data_map)
        ->device_dispatch_table->FreeCommandBuffers(device, commandPool, count, pCommandBuffers);
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkBeginCommandBuffer(VkCommandBuffer commandBuffer, const VkCommandBufferBeginInfo *pBeginInfo) {
    bool skipCall = false;
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    layer_data *phy_dev_data = get_my_data_ptr(get_dispatch_key(dev_data->physicalDevice), layer_data_map);
    const VkCommandBufferInheritanceInfo *pInfo = pBeginInfo->pInheritanceInfo;
    if (phy_dev_data->actualPhysicalDeviceFeatures.inheritedQueries == VK_FALSE && pInfo &&
        pInfo->occlusionQueryEnable != VK_FALSE) {
        skipCall |= log_msg(
            dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
            reinterpret_cast<uint64_t>(commandBuffer), __LINE__, DEVLIMITS_INVALID_INHERITED_QUERY, "DL",
            "Cannot set inherited occlusionQueryEnable in vkBeginCommandBuffer() when device does not support inheritedQueries.");
    }
    if (phy_dev_data->actualPhysicalDeviceFeatures.inheritedQueries != VK_FALSE && pInfo &&
        pInfo->occlusionQueryEnable != VK_FALSE &&
        !validate_VkQueryControlFlagBits(VkQueryControlFlagBits(pInfo->queryFlags))) {
        skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                            reinterpret_cast<uint64_t>(commandBuffer), __LINE__, DEVLIMITS_INVALID_INHERITED_QUERY, "DL",
                            "Cannot enable occlusion queries in vkBeginCommandBuffer() and set queryFlags to %d which is not a "
                            "valid combination of VkQueryControlFlagBits.",
                            pInfo->queryFlags);
    }
    VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
    if (!skipCall)
        result =
dev_data->device_dispatch_table->BeginCommandBuffer(commandBuffer, pBeginInfo); return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetDeviceQueue(VkDevice device, uint32_t queueFamilyIndex, uint32_t queueIndex, VkQueue *pQueue) { VkBool32 skipCall = VK_FALSE; layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); VkPhysicalDevice gpu = dev_data->physicalDevice; layer_data *phy_dev_data = get_my_data_ptr(get_dispatch_key(gpu), layer_data_map); if (queueFamilyIndex >= phy_dev_data->queueFamilyProperties.size()) { // requested index is out of bounds for this physical device skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_QUEUE_CREATE_REQUEST, "DL", "Invalid queueFamilyIndex %u requested in vkGetDeviceQueue().", queueFamilyIndex); } else if (queueIndex >= phy_dev_data->queueFamilyProperties[queueFamilyIndex]->queueCount) { skipCall |= log_msg( phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__, DEVLIMITS_INVALID_QUEUE_CREATE_REQUEST, "DL", "Invalid queue request in vkGetDeviceQueue(). 
QueueFamilyIndex %u only has %u queues, but requested queueIndex is %u.",
                            queueFamilyIndex, phy_dev_data->queueFamilyProperties[queueFamilyIndex]->queueCount, queueIndex);
    }
    if (skipCall)
        return;
    dev_data->device_dispatch_table->GetDeviceQueue(device, queueFamilyIndex, queueIndex, pQueue);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkUpdateDescriptorSets(VkDevice device, uint32_t descriptorWriteCount,
                                                                  const VkWriteDescriptorSet *pDescriptorWrites,
                                                                  uint32_t descriptorCopyCount,
                                                                  const VkCopyDescriptorSet *pDescriptorCopies) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkBool32 skipCall = VK_FALSE;
    for (uint32_t i = 0; i < descriptorWriteCount; i++) {
        if ((pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER) ||
            (pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC)) {
            VkDeviceSize uniformAlignment = dev_data->physicalDeviceProperties.limits.minUniformBufferOffsetAlignment;
            for (uint32_t j = 0; j < pDescriptorWrites[i].descriptorCount; j++) {
                if (vk_safe_modulo(pDescriptorWrites[i].pBufferInfo[j].offset, uniformAlignment) != 0) {
                    skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                        VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__,
                                        DEVLIMITS_INVALID_UNIFORM_BUFFER_OFFSET, "DL",
                                        "vkUpdateDescriptorSets(): pDescriptorWrites[%d].pBufferInfo[%d].offset (%#" PRIxLEAST64
                                        ") must be a multiple of device limit minUniformBufferOffsetAlignment %#" PRIxLEAST64,
                                        i, j, pDescriptorWrites[i].pBufferInfo[j].offset, uniformAlignment);
                }
            }
        } else if ((pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER) ||
                   (pDescriptorWrites[i].descriptorType == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC)) {
            VkDeviceSize storageAlignment = dev_data->physicalDeviceProperties.limits.minStorageBufferOffsetAlignment;
            for (uint32_t j = 0; j < pDescriptorWrites[i].descriptorCount; j++) {
                if (vk_safe_modulo(pDescriptorWrites[i].pBufferInfo[j].offset, storageAlignment) != 0) {
                    skipCall |= log_msg(dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                        VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, 0, __LINE__,
                                        DEVLIMITS_INVALID_STORAGE_BUFFER_OFFSET, "DL",
                                        "vkUpdateDescriptorSets(): pDescriptorWrites[%d].pBufferInfo[%d].offset (%#" PRIxLEAST64
                                        ") must be a multiple of device limit minStorageBufferOffsetAlignment %#" PRIxLEAST64,
                                        i, j, pDescriptorWrites[i].pBufferInfo[j].offset, storageAlignment);
                }
            }
        }
    }
    if (skipCall == VK_FALSE) {
        dev_data->device_dispatch_table->UpdateDescriptorSets(device, descriptorWriteCount, pDescriptorWrites,
                                                              descriptorCopyCount, pDescriptorCopies);
    }
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdUpdateBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer,
                                                             VkDeviceSize dstOffset, VkDeviceSize dataSize,
                                                             const uint32_t *pData) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    // dstOffset is the byte offset into the buffer to start updating and must be a multiple of 4.
    if (dstOffset & 3) {
        layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
        if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "DL",
                    "vkCmdUpdateBuffer parameter, VkDeviceSize dstOffset, is not a multiple of 4")) {
            return;
        }
    }
    // dataSize is the number of bytes to update, which must be a multiple of 4.
    if (dataSize & 3) {
        layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
        if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "DL",
                    "vkCmdUpdateBuffer parameter, VkDeviceSize dataSize, is not a multiple of 4")) {
            return;
        }
    }
    dev_data->device_dispatch_table->CmdUpdateBuffer(commandBuffer, dstBuffer, dstOffset, dataSize, pData);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdFillBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer,
                                                           VkDeviceSize dstOffset, VkDeviceSize size, uint32_t data) {
    layer_data *dev_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    // dstOffset is the byte offset into the buffer to start filling and must be a multiple of 4.
    if (dstOffset & 3) {
        layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
        if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "DL",
                    "vkCmdFillBuffer parameter, VkDeviceSize dstOffset, is not a multiple of 4")) {
            return;
        }
    }
    // size is the number of bytes to fill, which must be a multiple of 4.
    if (size & 3) {
        layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
        if (log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "DL",
                    "vkCmdFillBuffer parameter, VkDeviceSize size, is not a multiple of 4")) {
            return;
        }
    }
    dev_data->device_dispatch_table->CmdFillBuffer(commandBuffer, dstBuffer, dstOffset, size, data);
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkCreateDebugReportCallbackEXT(VkInstance instance, const VkDebugReportCallbackCreateInfoEXT *pCreateInfo,
                               const VkAllocationCallbacks *pAllocator, VkDebugReportCallbackEXT *pMsgCallback) {
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
    VkResult res = my_data->instance_dispatch_table->CreateDebugReportCallbackEXT(instance, pCreateInfo, pAllocator, pMsgCallback);
    if (VK_SUCCESS == res) {
        res = layer_create_msg_callback(my_data->report_data, pCreateInfo, pAllocator, pMsgCallback);
    }
    return res;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(VkInstance instance,
                                                                           VkDebugReportCallbackEXT msgCallback,
                                                                           const VkAllocationCallbacks *pAllocator) {
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
    my_data->instance_dispatch_table->DestroyDebugReportCallbackEXT(instance, msgCallback, pAllocator);
    layer_destroy_msg_callback(my_data->report_data, msgCallback, pAllocator);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDebugReportMessageEXT(VkInstance instance, VkDebugReportFlagsEXT flags,
                                                                   VkDebugReportObjectTypeEXT objType, uint64_t object,
                                                                   size_t location, int32_t msgCode, const char *pLayerPrefix,
                                                                   const char *pMsg) {
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
    my_data->instance_dispatch_table->DebugReportMessageEXT(instance, flags, objType, object, location, msgCode, pLayerPrefix,
                                                            pMsg);
}

VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice dev, const char
                                                                             *funcName) {
    if (!strcmp(funcName, "vkGetDeviceProcAddr"))
        return (PFN_vkVoidFunction)vkGetDeviceProcAddr;
    if (!strcmp(funcName, "vkDestroyDevice"))
        return (PFN_vkVoidFunction)vkDestroyDevice;
    if (!strcmp(funcName, "vkGetDeviceQueue"))
        return (PFN_vkVoidFunction)vkGetDeviceQueue;
    if (!strcmp(funcName, "vkCreateCommandPool"))
        return (PFN_vkVoidFunction)vkCreateCommandPool;
    if (!strcmp(funcName, "vkDestroyCommandPool"))
        return (PFN_vkVoidFunction)vkDestroyCommandPool;
    if (!strcmp(funcName, "vkResetCommandPool"))
        return (PFN_vkVoidFunction)vkResetCommandPool;
    if (!strcmp(funcName, "vkAllocateCommandBuffers"))
        return (PFN_vkVoidFunction)vkAllocateCommandBuffers;
    if (!strcmp(funcName, "vkFreeCommandBuffers"))
        return (PFN_vkVoidFunction)vkFreeCommandBuffers;
    if (!strcmp(funcName, "vkBeginCommandBuffer"))
        return (PFN_vkVoidFunction)vkBeginCommandBuffer;
    if (!strcmp(funcName, "vkCmdUpdateBuffer"))
        return (PFN_vkVoidFunction)vkCmdUpdateBuffer;
    if (!strcmp(funcName, "vkUpdateDescriptorSets"))
        return (PFN_vkVoidFunction)vkUpdateDescriptorSets;
    if (!strcmp(funcName, "vkCmdFillBuffer"))
        return (PFN_vkVoidFunction)vkCmdFillBuffer;
    if (dev == NULL)
        return NULL;
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(dev), layer_data_map);
    VkLayerDispatchTable *pTable = my_data->device_dispatch_table;
    {
        if (pTable->GetDeviceProcAddr == NULL)
            return NULL;
        return pTable->GetDeviceProcAddr(dev, funcName);
    }
}

VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(VkInstance instance, const char *funcName) {
    PFN_vkVoidFunction fptr;
    layer_data *my_data;
    if (!strcmp(funcName, "vkGetInstanceProcAddr"))
        return (PFN_vkVoidFunction)vkGetInstanceProcAddr;
    if (!strcmp(funcName, "vkGetDeviceProcAddr"))
        return (PFN_vkVoidFunction)vkGetDeviceProcAddr;
    if (!strcmp(funcName, "vkCreateInstance"))
        return (PFN_vkVoidFunction)vkCreateInstance;
    if (!strcmp(funcName, "vkDestroyInstance"))
        return (PFN_vkVoidFunction)vkDestroyInstance;
    if (!strcmp(funcName, "vkCreateDevice"))
        return (PFN_vkVoidFunction)vkCreateDevice;
    if (!strcmp(funcName, "vkEnumeratePhysicalDevices"))
        return (PFN_vkVoidFunction)vkEnumeratePhysicalDevices;
    if (!strcmp(funcName, "vkGetPhysicalDeviceFeatures"))
        return (PFN_vkVoidFunction)vkGetPhysicalDeviceFeatures;
    if (!strcmp(funcName, "vkGetPhysicalDeviceFormatProperties"))
        return (PFN_vkVoidFunction)vkGetPhysicalDeviceFormatProperties;
    if (!strcmp(funcName, "vkGetPhysicalDeviceImageFormatProperties"))
        return (PFN_vkVoidFunction)vkGetPhysicalDeviceImageFormatProperties;
    if (!strcmp(funcName, "vkGetPhysicalDeviceProperties"))
        return (PFN_vkVoidFunction)vkGetPhysicalDeviceProperties;
    if (!strcmp(funcName, "vkGetPhysicalDeviceQueueFamilyProperties"))
        return (PFN_vkVoidFunction)vkGetPhysicalDeviceQueueFamilyProperties;
    if (!strcmp(funcName, "vkGetPhysicalDeviceMemoryProperties"))
        return (PFN_vkVoidFunction)vkGetPhysicalDeviceMemoryProperties;
    if (!strcmp(funcName, "vkGetPhysicalDeviceSparseImageFormatProperties"))
        return (PFN_vkVoidFunction)vkGetPhysicalDeviceSparseImageFormatProperties;
    if (!strcmp(funcName, "vkEnumerateInstanceLayerProperties"))
        return (PFN_vkVoidFunction)vkEnumerateInstanceLayerProperties;
    if (!strcmp(funcName, "vkEnumerateDeviceLayerProperties"))
        return (PFN_vkVoidFunction)vkEnumerateDeviceLayerProperties;
    if (!strcmp(funcName, "vkEnumerateInstanceExtensionProperties"))
        return (PFN_vkVoidFunction)vkEnumerateInstanceExtensionProperties;
    if (!strcmp(funcName, "vkEnumerateDeviceExtensionProperties"))
        return (PFN_vkVoidFunction)vkEnumerateDeviceExtensionProperties;
    if (!instance)
        return NULL;
    my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
    fptr = debug_report_get_instance_proc_addr(my_data->report_data, funcName);
    if (fptr)
        return fptr;
    {
        VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
        if (pTable->GetInstanceProcAddr == NULL)
            return NULL;
        return pTable->GetInstanceProcAddr(instance, funcName);
    }
}
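The descriptor-offset checks above rely on the layer's `vk_safe_modulo` helper, while `vkCmdUpdateBuffer`/`vkCmdFillBuffer` use a bitmask because their required alignment is the constant 4. The arithmetic behind both tests can be sketched as follows; `safe_modulo` and `is_multiple_of_4` are hypothetical stand-in names, and the zero-alignment guard is an assumed reading of what a "safe" modulo should do, not the layer's exact implementation.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of a divide-and-check alignment test (assumed semantics of the
// layer's vk_safe_modulo): returns 0 when `value` is an exact multiple of
// `alignment`, and guards against dividing by a zero alignment.
static uint64_t safe_modulo(uint64_t value, uint64_t alignment) {
    if (alignment == 0)
        return value; // avoid division by zero; only value == 0 counts as aligned
    return value % alignment;
}

// When the alignment is a known power of two -- such as the multiple-of-4
// rules for dstOffset/dataSize/size above -- the same test reduces to a
// mask: (value & 3) != 0 means "not a multiple of 4".
static bool is_multiple_of_4(uint64_t value) { return (value & 3) == 0; }
```

The mask form is what the layer uses inline (`dstOffset & 3`); the modulo form is needed for device limits such as `minUniformBufferOffsetAlignment`, which are powers of two in practice but are reported at runtime rather than fixed constants.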
// ==== File: Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/device_limits.h ====

/* Copyright (c) 2015-2016 The Khronos Group Inc.
 * Copyright (c) 2015-2016 Valve Corporation
 * Copyright (c) 2015-2016 LunarG, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and/or associated documentation files (the "Materials"), to
 * deal in the Materials without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Materials, and to permit persons to whom the Materials
 * are furnished to do so, subject to the following conditions:
 *
 * The above copyright notice(s) and this permission notice shall be included
 * in all copies or substantial portions of the Materials.
 *
 * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 *
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
 * USE OR OTHER DEALINGS IN THE MATERIALS
 *
 * Author: Tobin Ehlis
 * Author: Mark Lobodzinski
 */

#include "vulkan/vk_layer.h"
#include

using namespace std;

// Device Limits ERROR codes
typedef enum _DEV_LIMITS_ERROR {
    DEVLIMITS_NONE,                          // Used for INFO & other non-error messages
    DEVLIMITS_INVALID_INSTANCE,              // Invalid instance used
    DEVLIMITS_INVALID_PHYSICAL_DEVICE,       // Invalid physical device used
    DEVLIMITS_INVALID_INHERITED_QUERY,       // Invalid use of inherited query
    DEVLIMITS_MUST_QUERY_COUNT,              // Failed to make initial call to an API to query the count
    DEVLIMITS_MUST_QUERY_PROPERTIES,         // Failed to make initial call to an API to query properties
    DEVLIMITS_INVALID_CALL_SEQUENCE,         // Flag generic case of an invalid call sequence by the app
    DEVLIMITS_INVALID_FEATURE_REQUESTED,     // App requested a feature not supported by physical device
    DEVLIMITS_COUNT_MISMATCH,                // App requesting a count value different than actual value
    DEVLIMITS_INVALID_QUEUE_CREATE_REQUEST,  // Invalid queue requested based on queue family properties
    DEVLIMITS_LIMITS_VIOLATION,              // Driver-specified limits/properties were exceeded
    DEVLIMITS_INVALID_UNIFORM_BUFFER_OFFSET, // Uniform buffer offset violates device limit granularity
    DEVLIMITS_INVALID_STORAGE_BUFFER_OFFSET, // Storage buffer offset violates device limit granularity
} DEV_LIMITS_ERROR;

typedef enum _CALL_STATE {
    UNCALLED,      // Function has not been called
    QUERY_COUNT,   // Function called once to query a count
    QUERY_DETAILS, // Function called w/ a count to query details
} CALL_STATE;

typedef struct _INSTANCE_STATE {
    // Track the call state and array size for physical devices
    CALL_STATE vkEnumeratePhysicalDevicesState;
    uint32_t physicalDevicesCount;
    _INSTANCE_STATE() : vkEnumeratePhysicalDevicesState(UNCALLED),
                        physicalDevicesCount(0){};
} INSTANCE_STATE;

typedef struct _PHYSICAL_DEVICE_STATE {
    // Track the call state and array sizes for various query functions
    CALL_STATE vkGetPhysicalDeviceQueueFamilyPropertiesState;
    uint32_t queueFamilyPropertiesCount;
    CALL_STATE vkGetPhysicalDeviceLayerPropertiesState;
    uint32_t deviceLayerCount;
    CALL_STATE vkGetPhysicalDeviceExtensionPropertiesState;
    uint32_t deviceExtensionCount;
    CALL_STATE vkGetPhysicalDeviceFeaturesState;
    _PHYSICAL_DEVICE_STATE()
        : vkGetPhysicalDeviceQueueFamilyPropertiesState(UNCALLED), queueFamilyPropertiesCount(0),
          vkGetPhysicalDeviceLayerPropertiesState(UNCALLED), deviceLayerCount(0),
          vkGetPhysicalDeviceExtensionPropertiesState(UNCALLED), deviceExtensionCount(0),
          vkGetPhysicalDeviceFeaturesState(UNCALLED){};
} PHYSICAL_DEVICE_STATE;

// ==== File: Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/image.cpp ====

/* Copyright (c) 2015-2016 The Khronos Group Inc.
 * Copyright (c) 2015-2016 Valve Corporation
 * Copyright (c) 2015-2016 LunarG, Inc.
 * Copyright (C) 2015-2016 Google Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and/or associated documentation files (the "Materials"), to
 * deal in the Materials without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Materials, and to permit persons to whom the Materials
 * are furnished to do so, subject to the following conditions:
 *
 * The above copyright notice(s) and this permission notice shall be included
 * in all copies or substantial portions of the Materials.
 *
 * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 *
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
 * USE OR OTHER DEALINGS IN THE MATERIALS
 *
 * Author: Jeremy Hayes
 * Author: Mark Lobodzinski
 * Author: Mike Stroyan
 * Author: Tobin Ehlis
 */

// Allow use of STL min and max functions in Windows
#define NOMINMAX

#include
#include
#include
#include
#include
#include
#include
#include
#include

using namespace std;

#include "vk_loader_platform.h"
#include "vk_dispatch_table_helper.h"
#include "vk_struct_string_helper_cpp.h"
#include "vk_enum_validate_helper.h"
#include "image.h"
#include "vk_layer_config.h"
#include "vk_layer_extension_utils.h"
#include "vk_layer_table.h"
#include "vk_layer_data.h"
#include "vk_layer_extension_utils.h"
#include "vk_layer_utils.h"
#include "vk_layer_logging.h"

using namespace std;

struct layer_data {
    debug_report_data *report_data;
    vector<VkDebugReportCallbackEXT> logging_callback;
    VkLayerDispatchTable *device_dispatch_table;
    VkLayerInstanceDispatchTable *instance_dispatch_table;
    VkPhysicalDevice physicalDevice;
    VkPhysicalDeviceProperties physicalDeviceProperties;
    unordered_map<VkImage, IMAGE_STATE> imageMap;

    layer_data()
        : report_data(nullptr), device_dispatch_table(nullptr), instance_dispatch_table(nullptr), physicalDevice(0),
          physicalDeviceProperties(){};
};

static unordered_map<void *, layer_data *> layer_data_map;

static int globalLockInitialized = 0;
static loader_platform_thread_mutex globalLock;

static void init_image(layer_data *my_data, const VkAllocationCallbacks *pAllocator) {
    layer_debug_actions(my_data->report_data, my_data->logging_callback, pAllocator, "lunarg_image");
    if (!globalLockInitialized) {
        loader_platform_thread_create_mutex(&globalLock);
        globalLockInitialized = 1;
    }
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkCreateDebugReportCallbackEXT(VkInstance instance, const VkDebugReportCallbackCreateInfoEXT *pCreateInfo,
                               const VkAllocationCallbacks
                               *pAllocator, VkDebugReportCallbackEXT *pMsgCallback) {
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
    VkResult res = my_data->instance_dispatch_table->CreateDebugReportCallbackEXT(instance, pCreateInfo, pAllocator, pMsgCallback);
    if (res == VK_SUCCESS) {
        res = layer_create_msg_callback(my_data->report_data, pCreateInfo, pAllocator, pMsgCallback);
    }
    return res;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(VkInstance instance,
                                                                           VkDebugReportCallbackEXT msgCallback,
                                                                           const VkAllocationCallbacks *pAllocator) {
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
    my_data->instance_dispatch_table->DestroyDebugReportCallbackEXT(instance, msgCallback, pAllocator);
    layer_destroy_msg_callback(my_data->report_data, msgCallback, pAllocator);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDebugReportMessageEXT(VkInstance instance, VkDebugReportFlagsEXT flags,
                                                                   VkDebugReportObjectTypeEXT objType, uint64_t object,
                                                                   size_t location, int32_t msgCode, const char *pLayerPrefix,
                                                                   const char *pMsg) {
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
    my_data->instance_dispatch_table->DebugReportMessageEXT(instance, flags, objType, object, location, msgCode, pLayerPrefix,
                                                            pMsg);
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance(const VkInstanceCreateInfo *pCreateInfo,
                                                                const VkAllocationCallbacks *pAllocator, VkInstance *pInstance) {
    VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
    assert(chain_info->u.pLayerInfo);
    PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
    PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance)fpGetInstanceProcAddr(NULL, "vkCreateInstance");
    if (fpCreateInstance == NULL) {
        return VK_ERROR_INITIALIZATION_FAILED;
    }
    // Advance the link info for the next element on the chain
    chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;
    VkResult result = fpCreateInstance(pCreateInfo, pAllocator, pInstance);
    if (result != VK_SUCCESS)
        return result;
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(*pInstance), layer_data_map);
    my_data->instance_dispatch_table = new VkLayerInstanceDispatchTable;
    layer_init_instance_dispatch_table(*pInstance, my_data->instance_dispatch_table, fpGetInstanceProcAddr);
    my_data->report_data = debug_report_create_instance(my_data->instance_dispatch_table, *pInstance,
                                                        pCreateInfo->enabledExtensionCount, pCreateInfo->ppEnabledExtensionNames);
    init_image(my_data, pAllocator);
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance, const VkAllocationCallbacks *pAllocator) {
    // Grab the key before the instance is destroyed.
    dispatch_key key = get_dispatch_key(instance);
    layer_data *my_data = get_my_data_ptr(key, layer_data_map);
    VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
    pTable->DestroyInstance(instance, pAllocator);
    // Clean up logging callback, if any
    while (my_data->logging_callback.size() > 0) {
        VkDebugReportCallbackEXT callback = my_data->logging_callback.back();
        layer_destroy_msg_callback(my_data->report_data, callback, pAllocator);
        my_data->logging_callback.pop_back();
    }
    layer_debug_report_destroy_instance(my_data->report_data);
    delete my_data->instance_dispatch_table;
    layer_data_map.erase(key);
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice physicalDevice,
                                                              const VkDeviceCreateInfo *pCreateInfo,
                                                              const VkAllocationCallbacks *pAllocator, VkDevice *pDevice) {
    VkLayerDeviceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
    assert(chain_info->u.pLayerInfo);
    PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
    PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr;
    PFN_vkCreateDevice fpCreateDevice =
        (PFN_vkCreateDevice)fpGetInstanceProcAddr(NULL, "vkCreateDevice");
    if (fpCreateDevice == NULL) {
        return VK_ERROR_INITIALIZATION_FAILED;
    }
    // Advance the link info for the next element on the chain
    chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;
    VkResult result = fpCreateDevice(physicalDevice, pCreateInfo, pAllocator, pDevice);
    if (result != VK_SUCCESS) {
        return result;
    }
    layer_data *my_instance_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
    layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(*pDevice), layer_data_map);
    // Setup device dispatch table
    my_device_data->device_dispatch_table = new VkLayerDispatchTable;
    layer_init_device_dispatch_table(*pDevice, my_device_data->device_dispatch_table, fpGetDeviceProcAddr);
    my_device_data->report_data = layer_debug_report_create_device(my_instance_data->report_data, *pDevice);
    my_device_data->physicalDevice = physicalDevice;
    my_instance_data->instance_dispatch_table->GetPhysicalDeviceProperties(physicalDevice,
                                                                           &(my_device_data->physicalDeviceProperties));
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, const VkAllocationCallbacks *pAllocator) {
    dispatch_key key = get_dispatch_key(device);
    layer_data *my_data = get_my_data_ptr(key, layer_data_map);
    my_data->device_dispatch_table->DestroyDevice(device, pAllocator);
    delete my_data->device_dispatch_table;
    layer_data_map.erase(key);
}

static const VkExtensionProperties instance_extensions[] = {
    {VK_EXT_DEBUG_REPORT_EXTENSION_NAME, VK_EXT_DEBUG_REPORT_SPEC_VERSION}};

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceExtensionProperties(const char *pLayerName, uint32_t *pCount,
                                                                                      VkExtensionProperties *pProperties) {
    return util_GetExtensionProperties(1, instance_extensions, pCount, pProperties);
}

static const VkLayerProperties pc_global_layers[] = {{
    "VK_LAYER_LUNARG_image", VK_LAYER_API_VERSION, 1, "LunarG Validation Layer",
}};

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceLayerProperties(uint32_t *pCount,
                                                                                  VkLayerProperties *pProperties) {
    return util_GetLayerProperties(ARRAY_SIZE(pc_global_layers), pc_global_layers, pCount, pProperties);
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice,
                                                                                    const char *pLayerName, uint32_t *pCount,
                                                                                    VkExtensionProperties *pProperties) {
    // The image layer does not have any physical device extensions
    if (pLayerName == NULL) {
        dispatch_key key = get_dispatch_key(physicalDevice);
        layer_data *my_data = get_my_data_ptr(key, layer_data_map);
        VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
        return pTable->EnumerateDeviceExtensionProperties(physicalDevice, NULL, pCount, pProperties);
    } else {
        return util_GetExtensionProperties(0, NULL, pCount, pProperties);
    }
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice, uint32_t *pCount,
                                                                                VkLayerProperties *pProperties) {
    // The image layer's physical device layers are the same as its global layers
    return util_GetLayerProperties(ARRAY_SIZE(pc_global_layers), pc_global_layers, pCount, pProperties);
}

// Start of the Image layer proper

// Returns TRUE if a format is a depth-compatible format
bool is_depth_format(VkFormat format) {
    bool result = VK_FALSE;
    switch (format) {
    case VK_FORMAT_D16_UNORM:
    case VK_FORMAT_X8_D24_UNORM_PACK32:
    case VK_FORMAT_D32_SFLOAT:
    case VK_FORMAT_S8_UINT:
    case VK_FORMAT_D16_UNORM_S8_UINT:
    case VK_FORMAT_D24_UNORM_S8_UINT:
    case VK_FORMAT_D32_SFLOAT_S8_UINT:
        result = VK_TRUE;
        break;
    default:
        break;
    }
    return result;
}

static inline uint32_t validate_VkImageLayoutKHR(VkImageLayout input_value) {
    return ((validate_VkImageLayout(input_value) == 1) || (input_value == VK_IMAGE_LAYOUT_PRESENT_SRC_KHR));
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImage(VkDevice device, const VkImageCreateInfo *pCreateInfo,
                                                             const VkAllocationCallbacks *pAllocator, VkImage *pImage) {
    VkBool32 skipCall =
        VK_FALSE;
    VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
    VkImageFormatProperties ImageFormatProperties;
    layer_data *device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkPhysicalDevice physicalDevice = device_data->physicalDevice;
    layer_data *phy_dev_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);

    if (pCreateInfo->format != VK_FORMAT_UNDEFINED) {
        VkFormatProperties properties;
        phy_dev_data->instance_dispatch_table->GetPhysicalDeviceFormatProperties(device_data->physicalDevice, pCreateInfo->format,
                                                                                 &properties);
        if ((properties.linearTilingFeatures) == 0 && (properties.optimalTilingFeatures == 0)) {
            char const str[] = "vkCreateImage parameter, VkFormat pCreateInfo->format, contains unsupported format";
            // TODO: Verify against Valid Use section of spec. Generally, if something yields an undefined result, it's invalid
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                __LINE__, IMAGE_FORMAT_UNSUPPORTED, "IMAGE", str);
        }
    }

    // Internal call to get format info. Still goes through layers, could potentially go directly to ICD.
    phy_dev_data->instance_dispatch_table->GetPhysicalDeviceImageFormatProperties(
        physicalDevice, pCreateInfo->format, pCreateInfo->imageType, pCreateInfo->tiling, pCreateInfo->usage, pCreateInfo->flags,
        &ImageFormatProperties);

    VkDeviceSize imageGranularity = device_data->physicalDeviceProperties.limits.bufferImageGranularity;
    imageGranularity = imageGranularity == 1 ? 0 : imageGranularity;

    if ((pCreateInfo->extent.depth > ImageFormatProperties.maxExtent.depth) ||
        (pCreateInfo->extent.width > ImageFormatProperties.maxExtent.width) ||
        (pCreateInfo->extent.height > ImageFormatProperties.maxExtent.height)) {
        skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
                            (uint64_t)pImage, __LINE__, IMAGE_INVALID_FORMAT_LIMITS_VIOLATION, "Image",
                            "CreateImage extents exceed allowable limits for format: "
                            "Width = %d Height = %d Depth = %d: Limits for Width = %d Height = %d Depth = %d for format %s.",
                            pCreateInfo->extent.width, pCreateInfo->extent.height, pCreateInfo->extent.depth,
                            ImageFormatProperties.maxExtent.width, ImageFormatProperties.maxExtent.height,
                            ImageFormatProperties.maxExtent.depth, string_VkFormat(pCreateInfo->format));
    }

    uint64_t totalSize = ((uint64_t)pCreateInfo->extent.width * (uint64_t)pCreateInfo->extent.height *
                              (uint64_t)pCreateInfo->extent.depth * (uint64_t)pCreateInfo->arrayLayers *
                              (uint64_t)pCreateInfo->samples * (uint64_t)vk_format_get_size(pCreateInfo->format) +
                          (uint64_t)imageGranularity) &
                         ~(uint64_t)imageGranularity;

    if (totalSize > ImageFormatProperties.maxResourceSize) {
        skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
                            (uint64_t)pImage, __LINE__, IMAGE_INVALID_FORMAT_LIMITS_VIOLATION, "Image",
                            "CreateImage resource size exceeds allowable maximum "
                            "Image resource size = %#" PRIxLEAST64 ", maximum resource size = %#" PRIxLEAST64 " ",
                            totalSize, ImageFormatProperties.maxResourceSize);
    }

    if (pCreateInfo->mipLevels > ImageFormatProperties.maxMipLevels) {
        skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
                            (uint64_t)pImage, __LINE__, IMAGE_INVALID_FORMAT_LIMITS_VIOLATION, "Image",
                            "CreateImage mipLevels=%d exceeds allowable maximum supported by format of %d",
                            pCreateInfo->mipLevels, ImageFormatProperties.maxMipLevels);
    }
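The `totalSize` computation above rounds the estimated image size up toward the device's `bufferImageGranularity` with an add-and-mask, special-casing a granularity of 1 (the common case, where no rounding is needed). The standard form of that round-up for a power-of-two alignment is sketched below; `align_up` is a hypothetical helper name, and the power-of-two assumption is mine (granularities reported by drivers are powers of two in practice).

```cpp
#include <cassert>
#include <cstdint>

// Round `size` up to the next multiple of a power-of-two `granularity`.
// A granularity of 0 or 1 means byte granularity, so no rounding is done.
static uint64_t align_up(uint64_t size, uint64_t granularity) {
    if (granularity <= 1)
        return size;                      // byte-granular: nothing to round
    const uint64_t mask = granularity - 1; // power-of-two assumption
    return (size + mask) & ~mask;          // add-and-mask round-up
}
```

For example, a 100-byte estimate with a 64-byte granularity rounds up to 128 bytes, which is the value that should be compared against `maxResourceSize`-style limits.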
    if (pCreateInfo->arrayLayers > ImageFormatProperties.maxArrayLayers) {
        skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
                            (uint64_t)pImage, __LINE__, IMAGE_INVALID_FORMAT_LIMITS_VIOLATION, "Image",
                            "CreateImage arrayLayers=%d exceeds allowable maximum supported by format of %d",
                            pCreateInfo->arrayLayers, ImageFormatProperties.maxArrayLayers);
    }
    if ((pCreateInfo->samples & ImageFormatProperties.sampleCounts) == 0) {
        skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
                            (uint64_t)pImage, __LINE__, IMAGE_INVALID_FORMAT_LIMITS_VIOLATION, "Image",
                            "CreateImage samples %s is not supported by format 0x%.8X",
                            string_VkSampleCountFlagBits(pCreateInfo->samples), ImageFormatProperties.sampleCounts);
    }
    if (pCreateInfo->initialLayout != VK_IMAGE_LAYOUT_UNDEFINED && pCreateInfo->initialLayout != VK_IMAGE_LAYOUT_PREINITIALIZED) {
        skipCall |= log_msg(phy_dev_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
                            (uint64_t)pImage, __LINE__, IMAGE_INVALID_LAYOUT, "Image",
                            "vkCreateImage parameter, pCreateInfo->initialLayout, must be VK_IMAGE_LAYOUT_UNDEFINED or "
                            "VK_IMAGE_LAYOUT_PREINITIALIZED");
    }
    if (VK_FALSE == skipCall) {
        result = device_data->device_dispatch_table->CreateImage(device, pCreateInfo, pAllocator, pImage);
    }
    if (result == VK_SUCCESS) {
        loader_platform_thread_lock_mutex(&globalLock);
        device_data->imageMap[*pImage] = IMAGE_STATE(pCreateInfo);
        loader_platform_thread_unlock_mutex(&globalLock);
    }
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyImage(VkDevice device, VkImage image,
                                                          const VkAllocationCallbacks *pAllocator) {
    layer_data *device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
    device_data->imageMap.erase(image);
    loader_platform_thread_unlock_mutex(&globalLock);
device_data->device_dispatch_table->DestroyImage(device, image, pAllocator); } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateRenderPass(VkDevice device, const VkRenderPassCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkRenderPass *pRenderPass) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); VkBool32 skipCall = VK_FALSE; for (uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i) { if (pCreateInfo->pAttachments[i].format != VK_FORMAT_UNDEFINED) { VkFormatProperties properties; get_my_data_ptr(get_dispatch_key(my_data->physicalDevice), layer_data_map) ->instance_dispatch_table->GetPhysicalDeviceFormatProperties(my_data->physicalDevice, pCreateInfo->pAttachments[i].format, &properties); if ((properties.linearTilingFeatures) == 0 && (properties.optimalTilingFeatures == 0)) { std::stringstream ss; ss << "vkCreateRenderPass parameter, VkFormat in pCreateInfo->pAttachments[" << i << "], contains unsupported format"; // TODO: Verify against Valid Use section of spec. Generally if something yield an undefined result, it's invalid skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, IMAGE_FORMAT_UNSUPPORTED, "IMAGE", "%s", ss.str().c_str()); } } } for (uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i) { if (!validate_VkImageLayoutKHR(pCreateInfo->pAttachments[i].initialLayout) || !validate_VkImageLayoutKHR(pCreateInfo->pAttachments[i].finalLayout)) { std::stringstream ss; ss << "vkCreateRenderPass parameter, VkImageLayout in pCreateInfo->pAttachments[" << i << "], is unrecognized"; // TODO: Verify against Valid Use section of spec. 
Generally if something yield an undefined result, it's invalid skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, IMAGE_RENDERPASS_INVALID_ATTACHMENT, "IMAGE", "%s", ss.str().c_str()); } } for (uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i) { if (!validate_VkAttachmentLoadOp(pCreateInfo->pAttachments[i].loadOp)) { std::stringstream ss; ss << "vkCreateRenderPass parameter, VkAttachmentLoadOp in pCreateInfo->pAttachments[" << i << "], is unrecognized"; // TODO: Verify against Valid Use section of spec. Generally if something yield an undefined result, it's invalid skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, IMAGE_RENDERPASS_INVALID_ATTACHMENT, "IMAGE", "%s", ss.str().c_str()); } } for (uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i) { if (!validate_VkAttachmentStoreOp(pCreateInfo->pAttachments[i].storeOp)) { std::stringstream ss; ss << "vkCreateRenderPass parameter, VkAttachmentStoreOp in pCreateInfo->pAttachments[" << i << "], is unrecognized"; // TODO: Verify against Valid Use section of spec. Generally if something yield an undefined result, it's invalid skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, IMAGE_RENDERPASS_INVALID_ATTACHMENT, "IMAGE", "%s", ss.str().c_str()); } } // Any depth buffers specified as attachments? 
    bool depthFormatPresent = VK_FALSE;
    for (uint32_t i = 0; i < pCreateInfo->attachmentCount; ++i) {
        depthFormatPresent |= is_depth_format(pCreateInfo->pAttachments[i].format);
    }
    if (depthFormatPresent == VK_FALSE) {
        // No depth attachment is present, validate that subpasses set depthStencilAttachment to VK_ATTACHMENT_UNUSED
        for (uint32_t i = 0; i < pCreateInfo->subpassCount; i++) {
            if (pCreateInfo->pSubpasses[i].pDepthStencilAttachment &&
                pCreateInfo->pSubpasses[i].pDepthStencilAttachment->attachment != VK_ATTACHMENT_UNUSED) {
                std::stringstream ss;
                ss << "vkCreateRenderPass has no depth/stencil attachment, yet subpass[" << i
                   << "] has VkSubpassDescription::depthStencilAttachment value that is not VK_ATTACHMENT_UNUSED";
                skipCall |= log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                    __LINE__, IMAGE_RENDERPASS_INVALID_DS_ATTACHMENT, "IMAGE", "%s", ss.str().c_str());
            }
        }
    }
    if (skipCall)
        return VK_ERROR_VALIDATION_FAILED_EXT;
    VkResult result = my_data->device_dispatch_table->CreateRenderPass(device, pCreateInfo, pAllocator, pRenderPass);
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImageView(VkDevice device, const VkImageViewCreateInfo *pCreateInfo,
                                                                 const VkAllocationCallbacks *pAllocator, VkImageView *pView) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    auto imageEntry = device_data->imageMap.find(pCreateInfo->image);
    if (imageEntry != device_data->imageMap.end()) {
        if (pCreateInfo->subresourceRange.baseMipLevel >= imageEntry->second.mipLevels) {
            std::stringstream ss;
            ss << "vkCreateImageView called with baseMipLevel " << pCreateInfo->subresourceRange.baseMipLevel << " for image "
               << pCreateInfo->image << " that only has " << imageEntry->second.mipLevels << " mip levels.";
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                __LINE__, IMAGE_VIEW_CREATE_ERROR, "IMAGE", "%s",
                                ss.str().c_str());
        }
        if (pCreateInfo->subresourceRange.baseArrayLayer >= imageEntry->second.arraySize) {
            std::stringstream ss;
            ss << "vkCreateImageView called with baseArrayLayer " << pCreateInfo->subresourceRange.baseArrayLayer << " for image "
               << pCreateInfo->image << " that only has " << imageEntry->second.arraySize << " array layers.";
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                __LINE__, IMAGE_VIEW_CREATE_ERROR, "IMAGE", "%s", ss.str().c_str());
        }
        if (!pCreateInfo->subresourceRange.levelCount) {
            std::stringstream ss;
            ss << "vkCreateImageView called with 0 in pCreateInfo->subresourceRange.levelCount.";
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                __LINE__, IMAGE_VIEW_CREATE_ERROR, "IMAGE", "%s", ss.str().c_str());
        }
        if (!pCreateInfo->subresourceRange.layerCount) {
            std::stringstream ss;
            ss << "vkCreateImageView called with 0 in pCreateInfo->subresourceRange.layerCount.";
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                __LINE__, IMAGE_VIEW_CREATE_ERROR, "IMAGE", "%s", ss.str().c_str());
        }
        VkImageCreateFlags imageFlags = imageEntry->second.flags;
        VkFormat imageFormat = imageEntry->second.format;
        VkFormat ivciFormat = pCreateInfo->format;
        VkImageAspectFlags aspectMask = pCreateInfo->subresourceRange.aspectMask;
        // Validate VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT state
        if (imageFlags & VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT) {
            // Format MUST be compatible (in the same format compatibility class) as the format the image was created with
            if (vk_format_get_compatibility_class(imageFormat) != vk_format_get_compatibility_class(ivciFormat)) {
                std::stringstream ss;
                ss << "vkCreateImageView(): ImageView format " << string_VkFormat(ivciFormat)
                   << " is not in the same format compatibility class as image (" << (uint64_t)pCreateInfo->image << ") format "
                   << string_VkFormat(imageFormat) << ". Images created with the VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT "
                   << "can support ImageViews with differing formats but they must be in the same compatibility class.";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                    __LINE__, IMAGE_VIEW_CREATE_ERROR, "IMAGE", "%s", ss.str().c_str());
            }
        } else {
            // Format MUST be IDENTICAL to the format the image was created with
            if (imageFormat != ivciFormat) {
                std::stringstream ss;
                ss << "vkCreateImageView() format " << string_VkFormat(ivciFormat) << " differs from image "
                   << (uint64_t)pCreateInfo->image << " format " << string_VkFormat(imageFormat)
                   << ". Formats MUST be IDENTICAL unless VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT was set on image creation.";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0,
                                    __LINE__, IMAGE_VIEW_CREATE_ERROR, "IMAGE", "%s", ss.str().c_str());
            }
        }
        // Validate correct image aspect bits for desired formats and format consistency
        if (vk_format_is_color(imageFormat)) {
            if ((aspectMask & VK_IMAGE_ASPECT_COLOR_BIT) != VK_IMAGE_ASPECT_COLOR_BIT) {
                std::stringstream ss;
                ss << "vkCreateImageView: Color image formats must have the VK_IMAGE_ASPECT_COLOR_BIT set";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
                                    (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s",
                                    ss.str().c_str());
            }
            if ((aspectMask & VK_IMAGE_ASPECT_COLOR_BIT) != aspectMask) {
                std::stringstream ss;
                ss << "vkCreateImageView: Color image formats must have ONLY the VK_IMAGE_ASPECT_COLOR_BIT set";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
                                    (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s",
                                    ss.str().c_str());
            }
            if (VK_FALSE == vk_format_is_color(ivciFormat)) {
                std::stringstream ss;
                ss << "vkCreateImageView: The image view's format can differ from the parent image's format, but both must be "
                   << "color formats. ImageFormat is " << string_VkFormat(imageFormat) << " ImageViewFormat is "
                   << string_VkFormat(ivciFormat);
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
                                    (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_FORMAT, "IMAGE", "%s",
                                    ss.str().c_str());
            }
            // TODO: Uncompressed formats are compatible if they occupy the same number of bits per pixel.
            // Compressed formats are compatible if the only difference between them is the numerical type of
            // the uncompressed pixels (e.g. signed vs. unsigned, or sRGB vs. UNORM encoding).
        } else if (vk_format_is_depth_and_stencil(imageFormat)) {
            if ((aspectMask & (VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT)) == 0) {
                std::stringstream ss;
                ss << "vkCreateImageView: Depth/stencil image formats must have at least one of VK_IMAGE_ASPECT_DEPTH_BIT and "
                      "VK_IMAGE_ASPECT_STENCIL_BIT set";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
                                    (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s",
                                    ss.str().c_str());
            }
            if ((aspectMask & (VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT)) != aspectMask) {
                std::stringstream ss;
                ss << "vkCreateImageView: Combination depth/stencil image formats can have only the VK_IMAGE_ASPECT_DEPTH_BIT and "
                      "VK_IMAGE_ASPECT_STENCIL_BIT set";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
                                    (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s",
                                    ss.str().c_str());
            }
        } else if (vk_format_is_depth_only(imageFormat)) {
            if ((aspectMask & VK_IMAGE_ASPECT_DEPTH_BIT) != VK_IMAGE_ASPECT_DEPTH_BIT) {
                std::stringstream ss;
                ss << "vkCreateImageView: Depth-only image formats must have the VK_IMAGE_ASPECT_DEPTH_BIT set";
                skipCall |= log_msg(device_data->report_data,
                                    VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
                                    (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s",
                                    ss.str().c_str());
            }
            if ((aspectMask & VK_IMAGE_ASPECT_DEPTH_BIT) != aspectMask) {
                std::stringstream ss;
                ss << "vkCreateImageView: Depth-only image formats can have only the VK_IMAGE_ASPECT_DEPTH_BIT set";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
                                    (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s",
                                    ss.str().c_str());
            }
        } else if (vk_format_is_stencil_only(imageFormat)) {
            if ((aspectMask & VK_IMAGE_ASPECT_STENCIL_BIT) != VK_IMAGE_ASPECT_STENCIL_BIT) {
                std::stringstream ss;
                ss << "vkCreateImageView: Stencil-only image formats must have the VK_IMAGE_ASPECT_STENCIL_BIT set";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
                                    (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s",
                                    ss.str().c_str());
            }
            if ((aspectMask & VK_IMAGE_ASPECT_STENCIL_BIT) != aspectMask) {
                std::stringstream ss;
                ss << "vkCreateImageView: Stencil-only image formats can have only the VK_IMAGE_ASPECT_STENCIL_BIT set";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
                                    (uint64_t)pCreateInfo->image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s",
                                    ss.str().c_str());
            }
        }
    }
    if (skipCall) {
        return VK_ERROR_VALIDATION_FAILED_EXT;
    }
    VkResult result = device_data->device_dispatch_table->CreateImageView(device, pCreateInfo, pAllocator, pView);
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearColorImage(VkCommandBuffer commandBuffer, VkImage image,
                                                                VkImageLayout imageLayout, const VkClearColorValue *pColor,
                                                                uint32_t rangeCount, const VkImageSubresourceRange *pRanges) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *device_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    if (imageLayout != VK_IMAGE_LAYOUT_GENERAL && imageLayout != VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL) {
        skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                            VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                            IMAGE_INVALID_LAYOUT, "IMAGE",
                            "vkCmdClearColorImage parameter, imageLayout, must be VK_IMAGE_LAYOUT_GENERAL or "
                            "VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL");
    }
    // For each range, image aspect must be color only
    for (uint32_t i = 0; i < rangeCount; i++) {
        if (pRanges[i].aspectMask != VK_IMAGE_ASPECT_COLOR_BIT) {
            char const str[] =
                "vkCmdClearColorImage aspectMasks for all subresource ranges must be set to VK_IMAGE_ASPECT_COLOR_BIT";
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str);
        }
    }
    if (VK_FALSE == skipCall) {
        device_data->device_dispatch_table->CmdClearColorImage(commandBuffer, image, imageLayout, pColor, rangeCount, pRanges);
    }
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearDepthStencilImage(VkCommandBuffer commandBuffer, VkImage image,
                                                                       VkImageLayout imageLayout,
                                                                       const VkClearDepthStencilValue *pDepthStencil,
                                                                       uint32_t rangeCount,
                                                                       const VkImageSubresourceRange *pRanges) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *device_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    // For each range, image aspect must be depth or stencil or both
    for (uint32_t i = 0; i < rangeCount; i++) {
        if (((pRanges[i].aspectMask & VK_IMAGE_ASPECT_DEPTH_BIT) != VK_IMAGE_ASPECT_DEPTH_BIT) &&
            ((pRanges[i].aspectMask & VK_IMAGE_ASPECT_STENCIL_BIT) != VK_IMAGE_ASPECT_STENCIL_BIT)) {
            char const str[] = "vkCmdClearDepthStencilImage aspectMasks for all subresource ranges must be "
                               "set to VK_IMAGE_ASPECT_DEPTH_BIT and/or VK_IMAGE_ASPECT_STENCIL_BIT";
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str);
        }
    }
    if (VK_FALSE == skipCall) {
        device_data->device_dispatch_table->CmdClearDepthStencilImage(commandBuffer, image, imageLayout, pDepthStencil,
                                                                      rangeCount, pRanges);
    }
}

// Returns true if the ranges [start, start + start_offset) and [end, end + end_offset) overlap
static bool ranges_intersect(int32_t start, uint32_t start_offset, int32_t end, uint32_t end_offset) {
    bool result = false;
    uint32_t intersection_min = std::max(static_cast<uint32_t>(start), static_cast<uint32_t>(end));
    uint32_t intersection_max = std::min(static_cast<uint32_t>(start) + start_offset, static_cast<uint32_t>(end) + end_offset);
    if (intersection_max > intersection_min) {
        result = true;
    }
    return result;
}

// Returns true if two VkImageCopy structures overlap
static bool region_intersects(const VkImageCopy *src, const VkImageCopy *dst, VkImageType type) {
    bool result = true;
    if ((src->srcSubresource.mipLevel == dst->dstSubresource.mipLevel) &&
        (ranges_intersect(src->srcSubresource.baseArrayLayer, src->srcSubresource.layerCount,
                          dst->dstSubresource.baseArrayLayer, dst->dstSubresource.layerCount))) {
        switch (type) {
        case VK_IMAGE_TYPE_3D:
            result &= ranges_intersect(src->srcOffset.z, src->extent.depth, dst->dstOffset.z, dst->extent.depth);
            // Intentionally fall through to 2D case
        case VK_IMAGE_TYPE_2D:
            result &= ranges_intersect(src->srcOffset.y, src->extent.height, dst->dstOffset.y, dst->extent.height);
            // Intentionally fall through to 1D case
        case VK_IMAGE_TYPE_1D:
            result &= ranges_intersect(src->srcOffset.x, src->extent.width, dst->dstOffset.x, dst->extent.width);
            break;
        default:
            // Unrecognized or new IMAGE_TYPE enums will be caught in parameter_validation
            assert(false);
        }
    }
    return result;
}

// Returns true if offset and extent exceed image extents
static bool exceeds_bounds(const VkOffset3D *offset, const VkExtent3D *extent, IMAGE_STATE *image) {
    bool result = false;
    // Extents/depths cannot be negative, but checks left in for clarity
    return result;
}

VkBool32 cmd_copy_image_valid_usage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImage dstImage, uint32_t regionCount,
                                    const VkImageCopy *pRegions) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *device_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    auto srcImageEntry = device_data->imageMap.find(srcImage);
    auto dstImageEntry = device_data->imageMap.find(dstImage);
    // TODO: This does not cover swapchain-created images. This should fall out when this layer is moved
    // into the core_validation layer
    if ((srcImageEntry != device_data->imageMap.end()) && (dstImageEntry != device_data->imageMap.end())) {
        for (uint32_t i = 0; i < regionCount; i++) {
            if (pRegions[i].srcSubresource.layerCount == 0) {
                std::stringstream ss;
                ss << "vkCmdCopyImage: number of layers in pRegions[" << i << "] srcSubresource is zero";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
                                    __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
            }
            if (pRegions[i].dstSubresource.layerCount == 0) {
                std::stringstream ss;
                ss << "vkCmdCopyImage: number of layers in pRegions[" << i << "] dstSubresource is zero";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
                                    __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
            }
            // For each region the layerCount member of srcSubresource and dstSubresource must match
            if (pRegions[i].srcSubresource.layerCount != pRegions[i].dstSubresource.layerCount) {
                std::stringstream ss;
                ss << "vkCmdCopyImage: number of layers in source and destination subresources for pRegions[" << i
                   << "] do not match";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
                                    __LINE__, IMAGE_INVALID_EXTENTS,
                                    "IMAGE", "%s", ss.str().c_str());
            }
            // For each region, the aspectMask member of srcSubresource and dstSubresource must match
            if (pRegions[i].srcSubresource.aspectMask != pRegions[i].dstSubresource.aspectMask) {
                char const str[] = "vkCmdCopyImage: Src and dest aspectMasks for each region must match";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
                                    __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
            }
            // AspectMask must not contain VK_IMAGE_ASPECT_METADATA_BIT
            if ((pRegions[i].srcSubresource.aspectMask & VK_IMAGE_ASPECT_METADATA_BIT) ||
                (pRegions[i].dstSubresource.aspectMask & VK_IMAGE_ASPECT_METADATA_BIT)) {
                std::stringstream ss;
                ss << "vkCmdCopyImage: pRegions[" << i << "] may not specify aspectMask containing VK_IMAGE_ASPECT_METADATA_BIT";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
                                    __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
            }
            // For each region, if aspectMask contains VK_IMAGE_ASPECT_COLOR_BIT, it must not contain either of
            // VK_IMAGE_ASPECT_DEPTH_BIT or VK_IMAGE_ASPECT_STENCIL_BIT
            if ((pRegions[i].srcSubresource.aspectMask & VK_IMAGE_ASPECT_COLOR_BIT) &&
                (pRegions[i].srcSubresource.aspectMask & (VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT))) {
                char const str[] = "vkCmdCopyImage aspectMask cannot specify both COLOR and DEPTH/STENCIL aspects";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
                                    __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str);
            }
            // If either of the calling command's srcImage or dstImage parameters are of VkImageType VK_IMAGE_TYPE_3D,
            // the baseArrayLayer and layerCount members of both srcSubresource and dstSubresource must be 0 and 1, respectively
            if (((srcImageEntry->second.imageType == VK_IMAGE_TYPE_3D) ||
                 (dstImageEntry->second.imageType == VK_IMAGE_TYPE_3D)) &&
                ((pRegions[i].srcSubresource.baseArrayLayer != 0) || (pRegions[i].srcSubresource.layerCount != 1) ||
                 (pRegions[i].dstSubresource.baseArrayLayer != 0) || (pRegions[i].dstSubresource.layerCount != 1))) {
                std::stringstream ss;
                ss << "vkCmdCopyImage: src or dstImage type was IMAGE_TYPE_3D, but in subRegion[" << i
                   << "] baseArrayLayer was not zero or layerCount was not 1.";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
                                    __LINE__, IMAGE_INVALID_EXTENTS, "IMAGE", "%s", ss.str().c_str());
            }
            // MipLevel must be less than the mipLevels specified in VkImageCreateInfo when the image was created
            if (pRegions[i].srcSubresource.mipLevel >= srcImageEntry->second.mipLevels) {
                std::stringstream ss;
                ss << "vkCmdCopyImage: pRegions[" << i
                   << "] specifies a src mipLevel greater than the number specified when the srcImage was created.";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
                                    __LINE__, IMAGE_INVALID_EXTENTS, "IMAGE", "%s", ss.str().c_str());
            }
            if (pRegions[i].dstSubresource.mipLevel >= dstImageEntry->second.mipLevels) {
                std::stringstream ss;
                ss << "vkCmdCopyImage: pRegions[" << i
                   << "] specifies a dst mipLevel greater than the number specified when the dstImage was created.";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
                                    __LINE__, IMAGE_INVALID_EXTENTS, "IMAGE", "%s", ss.str().c_str());
            }
            // (baseArrayLayer + layerCount) must be less than or equal to the arrayLayers specified in VkImageCreateInfo when
            // the image was created
            if ((pRegions[i].srcSubresource.baseArrayLayer + pRegions[i].srcSubresource.layerCount) >
                srcImageEntry->second.arraySize) {
                std::stringstream ss;
                ss << "vkCmdCopyImage: srcImage arrayLayers was " << srcImageEntry->second.arraySize << " but subRegion[" << i
                   << "] baseArrayLayer + layerCount is "
                   << (pRegions[i].srcSubresource.baseArrayLayer + pRegions[i].srcSubresource.layerCount);
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
                                    __LINE__, IMAGE_INVALID_EXTENTS, "IMAGE", "%s", ss.str().c_str());
            }
            if ((pRegions[i].dstSubresource.baseArrayLayer + pRegions[i].dstSubresource.layerCount) >
                dstImageEntry->second.arraySize) {
                std::stringstream ss;
                ss << "vkCmdCopyImage: dstImage arrayLayers was " << dstImageEntry->second.arraySize << " but subRegion[" << i
                   << "] baseArrayLayer + layerCount is "
                   << (pRegions[i].dstSubresource.baseArrayLayer + pRegions[i].dstSubresource.layerCount);
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
                                    __LINE__, IMAGE_INVALID_EXTENTS, "IMAGE", "%s", ss.str().c_str());
            }
            // The source region specified by a given element of pRegions must be a region that is contained within srcImage
            if (exceeds_bounds(&pRegions[i].srcOffset, &pRegions[i].extent, &srcImageEntry->second)) {
                std::stringstream ss;
                ss << "vkCmdCopyImage: srcSubResource in pRegions[" << i << "] exceeds extents srcImage was created with";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
                                    __LINE__, IMAGE_INVALID_EXTENTS, "IMAGE", "%s", ss.str().c_str());
            }
            // The destination region specified by a given element of pRegions must be a region that is contained within dstImage
            if (exceeds_bounds(&pRegions[i].dstOffset, &pRegions[i].extent, &dstImageEntry->second)) {
                std::stringstream ss;
                ss << "vkCmdCopyImage: dstSubResource in pRegions[" << i << "] exceeds extents dstImage was created with";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
                                    __LINE__, IMAGE_INVALID_EXTENTS, "IMAGE", "%s", ss.str().c_str());
            }
            // The union of all source regions, and the union of all destination regions, specified by the elements of pRegions,
            // must not overlap in memory
            if (srcImage == dstImage) {
                for (uint32_t j = 0; j < regionCount; j++) {
                    if (region_intersects(&pRegions[i], &pRegions[j], srcImageEntry->second.imageType)) {
                        std::stringstream ss;
                        ss << "vkCmdCopyImage: pRegions[" << i << "] src overlaps with pRegions[" << j << "].";
                        skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                            VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                                            reinterpret_cast<uint64_t>(commandBuffer), __LINE__, IMAGE_INVALID_EXTENTS, "IMAGE",
                                            "%s", ss.str().c_str());
                    }
                }
            }
        }
        // The formats of srcImage and dstImage must be compatible. Formats are considered compatible if their texel size in
        // bytes is the same between both formats. For example, VK_FORMAT_R8G8B8A8_UNORM is compatible with VK_FORMAT_R32_UINT
        // because both texels are 4 bytes in size. Depth/stencil formats must match exactly.
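        // Sketch of the compatibility rule below (an editorial illustration; it assumes vk_format_get_size() returns the texel
        // size in bytes): non-depth formats are treated as copy-compatible exactly when
        //     vk_format_get_size(srcFormat) == vk_format_get_size(dstFormat)
        // e.g. VK_FORMAT_R8G8B8A8_UNORM and VK_FORMAT_R32_UINT both occupy 4 bytes per texel, so they are compatible.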
        if (is_depth_format(srcImageEntry->second.format) || is_depth_format(dstImageEntry->second.format)) {
            if (srcImageEntry->second.format != dstImageEntry->second.format) {
                char const str[] = "vkCmdCopyImage called with unmatched source and dest image depth/stencil formats.";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
                                    __LINE__, IMAGE_MISMATCHED_IMAGE_FORMAT, "IMAGE", str);
            }
        } else {
            size_t srcSize = vk_format_get_size(srcImageEntry->second.format);
            size_t destSize = vk_format_get_size(dstImageEntry->second.format);
            if (srcSize != destSize) {
                char const str[] = "vkCmdCopyImage called with unmatched source and dest image format sizes.";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, reinterpret_cast<uint64_t>(commandBuffer),
                                    __LINE__, IMAGE_MISMATCHED_IMAGE_FORMAT, "IMAGE", str);
            }
        }
    }
    return skipCall;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyImage(VkCommandBuffer commandBuffer, VkImage srcImage,
                                                          VkImageLayout srcImageLayout, VkImage dstImage,
                                                          VkImageLayout dstImageLayout, uint32_t regionCount,
                                                          const VkImageCopy *pRegions) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *device_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    skipCall = cmd_copy_image_valid_usage(commandBuffer, srcImage, dstImage, regionCount, pRegions);
    if (VK_FALSE == skipCall) {
        device_data->device_dispatch_table->CmdCopyImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout,
                                                         regionCount, pRegions);
    }
}

VKAPI_ATTR void VKAPI_CALL vkCmdClearAttachments(VkCommandBuffer commandBuffer, uint32_t attachmentCount,
                                                 const VkClearAttachment *pAttachments, uint32_t rectCount,
                                                 const VkClearRect *pRects) {
    VkBool32 skipCall = VK_FALSE;
    VkImageAspectFlags aspectMask;
    layer_data *device_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    for (uint32_t i = 0; i <
attachmentCount; i++) {
        aspectMask = pAttachments[i].aspectMask;
        if (aspectMask & VK_IMAGE_ASPECT_COLOR_BIT) {
            if (aspectMask != VK_IMAGE_ASPECT_COLOR_BIT) {
                // VK_IMAGE_ASPECT_COLOR_BIT is not the only bit set for this attachment
                char const str[] =
                    "vkCmdClearAttachments aspectMask [%d] must set only VK_IMAGE_ASPECT_COLOR_BIT of a color attachment.";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                    IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str, i);
            }
        } else {
            // Image aspect must be depth or stencil or both
            if (((aspectMask & VK_IMAGE_ASPECT_DEPTH_BIT) != VK_IMAGE_ASPECT_DEPTH_BIT) &&
                ((aspectMask & VK_IMAGE_ASPECT_STENCIL_BIT) != VK_IMAGE_ASPECT_STENCIL_BIT)) {
                char const str[] = "vkCmdClearAttachments aspectMask [%d] must be set to VK_IMAGE_ASPECT_DEPTH_BIT and/or "
                                   "VK_IMAGE_ASPECT_STENCIL_BIT";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                    IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str, i);
            }
        }
    }
    if (VK_FALSE == skipCall) {
        device_data->device_dispatch_table->CmdClearAttachments(commandBuffer, attachmentCount, pAttachments, rectCount, pRects);
    }
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyImageToBuffer(VkCommandBuffer commandBuffer, VkImage srcImage,
                                                                  VkImageLayout srcImageLayout, VkBuffer dstBuffer,
                                                                  uint32_t regionCount, const VkBufferImageCopy *pRegions) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *device_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    // For each region, the number of layers in the image subresource should not be zero
    // Image aspect must be ONE OF color, depth, stencil
    for (uint32_t i = 0; i < regionCount; i++) {
        if (pRegions[i].imageSubresource.layerCount == 0) {
            char const str[] = "vkCmdCopyImageToBuffer: number of layers in image subresource is zero";
            // TODO: Verify against Valid Use section of spec, if this case yields undefined results, then it's an error
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
        }
        VkImageAspectFlags aspectMask = pRegions[i].imageSubresource.aspectMask;
        if ((aspectMask != VK_IMAGE_ASPECT_COLOR_BIT) && (aspectMask != VK_IMAGE_ASPECT_DEPTH_BIT) &&
            (aspectMask != VK_IMAGE_ASPECT_STENCIL_BIT)) {
            char const str[] = "vkCmdCopyImageToBuffer: aspectMasks for each region must specify only COLOR or DEPTH or STENCIL";
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str);
        }
    }
    if (VK_FALSE == skipCall) {
        device_data->device_dispatch_table->CmdCopyImageToBuffer(commandBuffer, srcImage, srcImageLayout, dstBuffer, regionCount,
                                                                 pRegions);
    }
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyBufferToImage(VkCommandBuffer commandBuffer, VkBuffer srcBuffer,
                                                                  VkImage dstImage, VkImageLayout dstImageLayout,
                                                                  uint32_t regionCount, const VkBufferImageCopy *pRegions) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *device_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    // For each region, the number of layers in the image subresource should not be zero
    // Image aspect must be ONE OF color, depth, stencil
    for (uint32_t i = 0; i < regionCount; i++) {
        if (pRegions[i].imageSubresource.layerCount == 0) {
            char const str[] = "vkCmdCopyBufferToImage: number of layers in image subresource is zero";
            // TODO: Verify against Valid Use section of spec, if this case yields undefined results, then it's an error
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
        }
        VkImageAspectFlags aspectMask
 = pRegions[i].imageSubresource.aspectMask;
        if ((aspectMask != VK_IMAGE_ASPECT_COLOR_BIT) && (aspectMask != VK_IMAGE_ASPECT_DEPTH_BIT) &&
            (aspectMask != VK_IMAGE_ASPECT_STENCIL_BIT)) {
            char const str[] = "vkCmdCopyBufferToImage: aspectMasks for each region must specify only COLOR or DEPTH or STENCIL";
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str);
        }
    }
    if (VK_FALSE == skipCall) {
        device_data->device_dispatch_table->CmdCopyBufferToImage(commandBuffer, srcBuffer, dstImage, dstImageLayout, regionCount,
                                                                 pRegions);
    }
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBlitImage(VkCommandBuffer commandBuffer, VkImage srcImage,
                                                          VkImageLayout srcImageLayout, VkImage dstImage,
                                                          VkImageLayout dstImageLayout, uint32_t regionCount,
                                                          const VkImageBlit *pRegions, VkFilter filter) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *device_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    auto srcImageEntry = device_data->imageMap.find(srcImage);
    auto dstImageEntry = device_data->imageMap.find(dstImage);
    if ((srcImageEntry != device_data->imageMap.end()) && (dstImageEntry != device_data->imageMap.end())) {
        VkFormat srcFormat = srcImageEntry->second.format;
        VkFormat dstFormat = dstImageEntry->second.format;
        // Validate consistency for signed and unsigned formats
        if ((vk_format_is_sint(srcFormat) && !vk_format_is_sint(dstFormat)) ||
            (vk_format_is_uint(srcFormat) && !vk_format_is_uint(dstFormat))) {
            std::stringstream ss;
            ss << "vkCmdBlitImage: If one of srcImage and dstImage images has signed/unsigned integer format, "
               << "the other one must also have signed/unsigned integer format. "
               << "Source format is " << string_VkFormat(srcFormat) << " Destination format is " << string_VkFormat(dstFormat);
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                IMAGE_INVALID_FORMAT, "IMAGE", "%s", ss.str().c_str());
        }
        // Validate aspect bits and formats for depth/stencil images
        if (vk_format_is_depth_or_stencil(srcFormat) || vk_format_is_depth_or_stencil(dstFormat)) {
            if (srcFormat != dstFormat) {
                std::stringstream ss;
                ss << "vkCmdBlitImage: If one of srcImage and dstImage images has a format of depth, stencil or depth "
                   << "stencil, the other one must have exactly the same format. "
                   << "Source format is " << string_VkFormat(srcFormat) << " Destination format is "
                   << string_VkFormat(dstFormat);
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                    VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                    IMAGE_INVALID_FORMAT, "IMAGE", "%s", ss.str().c_str());
            }
            for (uint32_t i = 0; i < regionCount; i++) {
                if (pRegions[i].srcSubresource.layerCount == 0) {
                    char const str[] = "vkCmdBlitImage: number of layers in source subresource is zero";
                    // TODO: Verify against Valid Use section of spec, if this case yields undefined results, then it's an error
                    skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                        VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                        IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
                }
                if (pRegions[i].dstSubresource.layerCount == 0) {
                    char const str[] = "vkCmdBlitImage: number of layers in destination subresource is zero";
                    // TODO: Verify against Valid Use section of spec, if this case yields undefined results, then it's an error
                    skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                        VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                        IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
                }
                if
(pRegions[i].srcSubresource.layerCount != pRegions[i].dstSubresource.layerCount) { char const str[] = "vkCmdBlitImage: number of layers in source and destination subresources must match"; skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str); } VkImageAspectFlags srcAspect = pRegions[i].srcSubresource.aspectMask; VkImageAspectFlags dstAspect = pRegions[i].dstSubresource.aspectMask; if (srcAspect != dstAspect) { std::stringstream ss; ss << "vkCmdBlitImage: Image aspects of depth/stencil images should match"; // TODO: Verify against Valid Use section of spec, if this case yields undefined results, then it's an error skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str()); } if (vk_format_is_depth_and_stencil(srcFormat)) { if ((srcAspect != VK_IMAGE_ASPECT_DEPTH_BIT) && (srcAspect != VK_IMAGE_ASPECT_STENCIL_BIT)) { std::stringstream ss; ss << "vkCmdBlitImage: Combination depth/stencil image formats must have only one of " "VK_IMAGE_ASPECT_DEPTH_BIT " << "and VK_IMAGE_ASPECT_STENCIL_BIT set in srcImage and dstImage"; skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str()); } } else if (vk_format_is_stencil_only(srcFormat)) { if (srcAspect != VK_IMAGE_ASPECT_STENCIL_BIT) { std::stringstream ss; ss << "vkCmdBlitImage: Stencil-only image formats must have only the VK_IMAGE_ASPECT_STENCIL_BIT " << "set in both the srcImage and dstImage"; skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__, 
IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
                }
            } else if (vk_format_is_depth_only(srcFormat)) {
                if (srcAspect != VK_IMAGE_ASPECT_DEPTH_BIT) {
                    std::stringstream ss;
                    ss << "vkCmdBlitImage: Depth-only image formats must have only the VK_IMAGE_ASPECT_DEPTH_BIT "
                       << "set in both the srcImage and dstImage";
                    skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                        VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                        IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
                }
            }
        }
    }
    // Validate filter
    if (vk_format_is_depth_or_stencil(srcFormat) || vk_format_is_int(srcFormat)) {
        if (filter != VK_FILTER_NEAREST) {
            std::stringstream ss;
            ss << "vkCmdBlitImage: If the format of srcImage is a depth, stencil, depth stencil or integer-based format "
               << "then filter must be VK_FILTER_NEAREST.";
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                IMAGE_INVALID_FILTER, "IMAGE", "%s", ss.str().c_str());
        }
    }
}
    // Only forward the call if validation passed, for consistency with the other commands in this layer
    if (VK_FALSE == skipCall) {
        device_data->device_dispatch_table->CmdBlitImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout,
                                                         regionCount, pRegions, filter);
    }
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdPipelineBarrier(VkCommandBuffer commandBuffer, VkPipelineStageFlags srcStageMask,
                                                               VkPipelineStageFlags dstStageMask, VkDependencyFlags dependencyFlags,
                                                               uint32_t memoryBarrierCount, const VkMemoryBarrier *pMemoryBarriers,
                                                               uint32_t bufferMemoryBarrierCount,
                                                               const VkBufferMemoryBarrier *pBufferMemoryBarriers,
                                                               uint32_t imageMemoryBarrierCount,
                                                               const VkImageMemoryBarrier *pImageMemoryBarriers) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *device_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    for (uint32_t i = 0; i < imageMemoryBarrierCount; ++i) {
        VkImageMemoryBarrier const *const barrier = (VkImageMemoryBarrier const *const)&pImageMemoryBarriers[i];
        if (barrier->sType ==
VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER) { if (barrier->subresourceRange.layerCount == 0) { std::stringstream ss; ss << "vkCmdPipelineBarrier called with 0 in ppMemoryBarriers[" << i << "]->subresourceRange.layerCount."; skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, IMAGE_INVALID_IMAGE_RESOURCE, "IMAGE", "%s", ss.str().c_str()); } } } if (skipCall) { return; } device_data->device_dispatch_table->CmdPipelineBarrier(commandBuffer, srcStageMask, dstStageMask, dependencyFlags, memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount, pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdResolveImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageResolve *pRegions) { VkBool32 skipCall = VK_FALSE; layer_data *device_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); auto srcImageEntry = device_data->imageMap.find(srcImage); auto dstImageEntry = device_data->imageMap.find(dstImage); // For each region, the number of layers in the image subresource should not be zero // For each region, src and dest image aspect must be color only for (uint32_t i = 0; i < regionCount; i++) { if (pRegions[i].srcSubresource.layerCount == 0) { char const str[] = "vkCmdResolveImage: number of layers in source subresource is zero"; // TODO: Verify against Valid Use section of spec. 
// Generally if something yields an undefined result, it's invalid/error
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
        }
        if (pRegions[i].dstSubresource.layerCount == 0) {
            char const str[] = "vkCmdResolveImage: number of layers in destination subresource is zero";
            // TODO: Verify against Valid Use section of spec. Generally if something yields an undefined result,
            // it's invalid/error
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                IMAGE_MISMATCHED_IMAGE_ASPECT, "IMAGE", str);
        }
        if ((pRegions[i].srcSubresource.aspectMask != VK_IMAGE_ASPECT_COLOR_BIT) ||
            (pRegions[i].dstSubresource.aspectMask != VK_IMAGE_ASPECT_COLOR_BIT)) {
            char const str[] =
                "vkCmdResolveImage: src and dest aspectMasks for each region must specify only VK_IMAGE_ASPECT_COLOR_BIT";
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", str);
        }
    }
    if ((srcImageEntry != device_data->imageMap.end()) && (dstImageEntry != device_data->imageMap.end())) {
        if (srcImageEntry->second.format != dstImageEntry->second.format) {
            char const str[] = "vkCmdResolveImage called with unmatched source and dest formats.";
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                IMAGE_MISMATCHED_IMAGE_FORMAT, "IMAGE", str);
        }
        if (srcImageEntry->second.imageType != dstImageEntry->second.imageType) {
            char const str[] = "vkCmdResolveImage called with unmatched source and dest image types.";
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
(uint64_t)commandBuffer, __LINE__, IMAGE_MISMATCHED_IMAGE_TYPE, "IMAGE", str);
        }
        if (srcImageEntry->second.samples == VK_SAMPLE_COUNT_1_BIT) {
            char const str[] = "vkCmdResolveImage called with source sample count less than 2.";
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                IMAGE_INVALID_RESOLVE_SAMPLES, "IMAGE", str);
        }
        if (dstImageEntry->second.samples != VK_SAMPLE_COUNT_1_BIT) {
            char const str[] = "vkCmdResolveImage called with dest sample count greater than 1.";
            skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT, (uint64_t)commandBuffer, __LINE__,
                                IMAGE_INVALID_RESOLVE_SAMPLES, "IMAGE", str);
        }
    }
    if (VK_FALSE == skipCall) {
        device_data->device_dispatch_table->CmdResolveImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout,
                                                            regionCount, pRegions);
    }
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetImageSubresourceLayout(VkDevice device, VkImage image,
                                                                       const VkImageSubresource *pSubresource,
                                                                       VkSubresourceLayout *pLayout) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkFormat format;
    auto imageEntry = device_data->imageMap.find(image);
    // Validate that image aspects match formats
    if (imageEntry != device_data->imageMap.end()) {
        format = imageEntry->second.format;
        if (vk_format_is_color(format)) {
            if (pSubresource->aspectMask != VK_IMAGE_ASPECT_COLOR_BIT) {
                std::stringstream ss;
                ss << "vkGetImageSubresourceLayout: For color formats, the aspectMask field of VkImageSubresource must be "
                      "VK_IMAGE_ASPECT_COLOR_BIT.";
                skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT,
                                    (uint64_t)image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str());
            }
        } else if (vk_format_is_depth_or_stencil(format)) {
            if ((pSubresource->aspectMask
!= VK_IMAGE_ASPECT_DEPTH_BIT) && (pSubresource->aspectMask != VK_IMAGE_ASPECT_STENCIL_BIT)) { std::stringstream ss; ss << "vkGetImageSubresourceLayout: For depth/stencil formats, the aspectMask selects either the depth or stencil " "image aspectMask."; skipCall |= log_msg(device_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, (uint64_t)image, __LINE__, IMAGE_INVALID_IMAGE_ASPECT, "IMAGE", "%s", ss.str().c_str()); } } } if (VK_FALSE == skipCall) { device_data->device_dispatch_table->GetImageSubresourceLayout(device, image, pSubresource, pLayout); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceProperties *pProperties) { layer_data *phy_dev_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); phy_dev_data->instance_dispatch_table->GetPhysicalDeviceProperties(physicalDevice, pProperties); } VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice device, const char *funcName) { if (!strcmp(funcName, "vkGetDeviceProcAddr")) return (PFN_vkVoidFunction)vkGetDeviceProcAddr; if (!strcmp(funcName, "vkDestroyDevice")) return (PFN_vkVoidFunction)vkDestroyDevice; if (!strcmp(funcName, "vkCreateImage")) return (PFN_vkVoidFunction)vkCreateImage; if (!strcmp(funcName, "vkDestroyImage")) return (PFN_vkVoidFunction)vkDestroyImage; if (!strcmp(funcName, "vkCreateImageView")) return (PFN_vkVoidFunction)vkCreateImageView; if (!strcmp(funcName, "vkCreateRenderPass")) return (PFN_vkVoidFunction)vkCreateRenderPass; if (!strcmp(funcName, "vkCmdClearColorImage")) return (PFN_vkVoidFunction)vkCmdClearColorImage; if (!strcmp(funcName, "vkCmdClearDepthStencilImage")) return (PFN_vkVoidFunction)vkCmdClearDepthStencilImage; if (!strcmp(funcName, "vkCmdClearAttachments")) return (PFN_vkVoidFunction)vkCmdClearAttachments; if (!strcmp(funcName, "vkCmdCopyImage")) return (PFN_vkVoidFunction)vkCmdCopyImage; if (!strcmp(funcName, 
"vkCmdCopyImageToBuffer")) return (PFN_vkVoidFunction)vkCmdCopyImageToBuffer; if (!strcmp(funcName, "vkCmdCopyBufferToImage")) return (PFN_vkVoidFunction)vkCmdCopyBufferToImage; if (!strcmp(funcName, "vkCmdBlitImage")) return (PFN_vkVoidFunction)vkCmdBlitImage; if (!strcmp(funcName, "vkCmdPipelineBarrier")) return (PFN_vkVoidFunction)vkCmdPipelineBarrier; if (!strcmp(funcName, "vkCmdResolveImage")) return (PFN_vkVoidFunction)vkCmdResolveImage; if (!strcmp(funcName, "vkGetImageSubresourceLayout")) return (PFN_vkVoidFunction)vkGetImageSubresourceLayout; if (device == NULL) { return NULL; } layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); VkLayerDispatchTable *pTable = my_data->device_dispatch_table; { if (pTable->GetDeviceProcAddr == NULL) return NULL; return pTable->GetDeviceProcAddr(device, funcName); } } VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(VkInstance instance, const char *funcName) { if (!strcmp(funcName, "vkGetInstanceProcAddr")) return (PFN_vkVoidFunction)vkGetInstanceProcAddr; if (!strcmp(funcName, "vkCreateInstance")) return (PFN_vkVoidFunction)vkCreateInstance; if (!strcmp(funcName, "vkDestroyInstance")) return (PFN_vkVoidFunction)vkDestroyInstance; if (!strcmp(funcName, "vkCreateDevice")) return (PFN_vkVoidFunction)vkCreateDevice; if (!strcmp(funcName, "vkEnumerateInstanceLayerProperties")) return (PFN_vkVoidFunction)vkEnumerateInstanceLayerProperties; if (!strcmp(funcName, "vkEnumerateInstanceExtensionProperties")) return (PFN_vkVoidFunction)vkEnumerateInstanceExtensionProperties; if (!strcmp(funcName, "vkEnumerateDeviceLayerProperties")) return (PFN_vkVoidFunction)vkEnumerateDeviceLayerProperties; if (!strcmp(funcName, "vkEnumerateDeviceExtensionProperties")) return (PFN_vkVoidFunction)vkEnumerateDeviceExtensionProperties; if (!strcmp(funcName, "vkGetPhysicalDeviceProperties")) return (PFN_vkVoidFunction)vkGetPhysicalDeviceProperties; if (instance == NULL) { return NULL; } 
layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map); PFN_vkVoidFunction fptr = debug_report_get_instance_proc_addr(my_data->report_data, funcName); if (fptr) return fptr; VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table; if (pTable->GetInstanceProcAddr == NULL) return NULL; return pTable->GetInstanceProcAddr(instance, funcName); } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/image.h000066400000000000000000000076671270147354000234740ustar00rootroot00000000000000/* Copyright (c) 2015-2016 The Khronos Group Inc. * Copyright (c) 2015-2016 Valve Corporation * Copyright (c) 2015-2016 LunarG, Inc. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and/or associated documentation files (the "Materials"), to * deal in the Materials without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Materials, and to permit persons to whom the Materials * are furnished to do so, subject to the following conditions: * * The above copyright notice(s) and this permission notice shall be included * in all copies or substantial portions of the Materials. * * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
* * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE * USE OR OTHER DEALINGS IN THE MATERIALS * * Author: Mark Lobodzinski * Author: Mike Stroyan * Author: Tobin Ehlis */ #ifndef IMAGE_H #define IMAGE_H #include "vulkan/vulkan.h" #include "vk_layer_config.h" #include "vk_layer_logging.h" // Image ERROR codes typedef enum _IMAGE_ERROR { IMAGE_NONE, // Used for INFO & other non-error messages IMAGE_FORMAT_UNSUPPORTED, // Request to create Image or RenderPass with a format that is not supported IMAGE_RENDERPASS_INVALID_ATTACHMENT, // Invalid image layouts and/or load/storeOps for an attachment when creating RenderPass IMAGE_RENDERPASS_INVALID_DS_ATTACHMENT, // If no depth attachment for a RenderPass, verify that subpass DS attachment is set to // UNUSED IMAGE_INVALID_IMAGE_ASPECT, // Image aspect mask bits are invalid for this API call IMAGE_MISMATCHED_IMAGE_ASPECT, // Image aspect masks for source and dest images do not match IMAGE_VIEW_CREATE_ERROR, // Error occurred trying to create Image View IMAGE_MISMATCHED_IMAGE_TYPE, // Image types for source and dest images do not match IMAGE_MISMATCHED_IMAGE_FORMAT, // Image formats for source and dest images do not match IMAGE_INVALID_RESOLVE_SAMPLES, // Image resolve source samples less than two or dest samples greater than one IMAGE_INVALID_FORMAT, // Operation specifies an invalid format, or there is a format mismatch IMAGE_INVALID_FILTER, // Operation specifies an invalid filter setting IMAGE_INVALID_IMAGE_RESOURCE, // Image resource/subresource called with invalid setting IMAGE_INVALID_FORMAT_LIMITS_VIOLATION, // Device limits for this format have been exceeded IMAGE_INVALID_LAYOUT, // Operation specifies an invalid layout IMAGE_INVALID_EXTENTS, // Operation specifies invalid image extents } IMAGE_ERROR; typedef struct _IMAGE_STATE { uint32_t 
mipLevels; uint32_t arraySize; VkFormat format; VkSampleCountFlagBits samples; VkImageType imageType; VkExtent3D extent; VkImageCreateFlags flags; _IMAGE_STATE() : mipLevels(0), arraySize(0), format(VK_FORMAT_UNDEFINED), samples(VK_SAMPLE_COUNT_1_BIT), imageType(VK_IMAGE_TYPE_RANGE_SIZE), extent{}, flags(0){}; _IMAGE_STATE(const VkImageCreateInfo *pCreateInfo) : mipLevels(pCreateInfo->mipLevels), arraySize(pCreateInfo->arrayLayers), format(pCreateInfo->format), samples(pCreateInfo->samples), imageType(pCreateInfo->imageType), extent(pCreateInfo->extent), flags(pCreateInfo->flags){}; } IMAGE_STATE; #endif // IMAGE_H Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/linux/000077500000000000000000000000001270147354000233605ustar00rootroot00000000000000Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/linux/VkLayer_core_validation.json000066400000000000000000000007441270147354000310570ustar00rootroot00000000000000{ "file_format_version" : "1.0.0", "layer" : { "name": "VK_LAYER_LUNARG_core_validation", "type": "GLOBAL", "library_path": "./libVkLayer_core_validation.so", "api_version": "1.0.8", "implementation_version": "1", "description": "LunarG Validation Layer", "instance_extensions": [ { "name": "VK_EXT_debug_report", "spec_version": "2" } ] } } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/linux/VkLayer_device_limits.json000066400000000000000000000007331270147354000305330ustar00rootroot00000000000000{ "file_format_version" : "1.0.0", "layer" : { "name": "VK_LAYER_LUNARG_device_limits", "type": "GLOBAL", "library_path": "./libVkLayer_device_limits.so", "api_version": "1.0.8", "implementation_version": "1", "description": "LunarG Validation Layer", "instance_extensions": [ { "name": "VK_EXT_debug_report", "spec_version": "2" } ] } } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/linux/VkLayer_image.json000066400000000000000000000007131270147354000267730ustar00rootroot00000000000000{ "file_format_version" : "1.0.0", "layer" : { "name": 
"VK_LAYER_LUNARG_image", "type": "GLOBAL", "library_path": "./libVkLayer_image.so", "api_version": "1.0.8", "implementation_version": "1", "description": "LunarG Validation Layer", "instance_extensions": [ { "name": "VK_EXT_debug_report", "spec_version": "2" } ] } } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/linux/VkLayer_object_tracker.json000066400000000000000000000007351270147354000306760ustar00rootroot00000000000000{ "file_format_version" : "1.0.0", "layer" : { "name": "VK_LAYER_LUNARG_object_tracker", "type": "GLOBAL", "library_path": "./libVkLayer_object_tracker.so", "api_version": "1.0.8", "implementation_version": "1", "description": "LunarG Validation Layer", "instance_extensions": [ { "name": "VK_EXT_debug_report", "spec_version": "2" } ] } } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/linux/VkLayer_parameter_validation.json000066400000000000000000000007511270147354000321050ustar00rootroot00000000000000{ "file_format_version" : "1.0.0", "layer" : { "name": "VK_LAYER_LUNARG_parameter_validation", "type": "GLOBAL", "library_path": "./libVkLayer_parameter_validation.so", "api_version": "1.0.8", "implementation_version": "1", "description": "LunarG Validation Layer", "instance_extensions": [ { "name": "VK_EXT_debug_report", "spec_version": "2" } ] } } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/linux/VkLayer_swapchain.json000066400000000000000000000007231270147354000276670ustar00rootroot00000000000000{ "file_format_version" : "1.0.0", "layer" : { "name": "VK_LAYER_LUNARG_swapchain", "type": "GLOBAL", "library_path": "./libVkLayer_swapchain.so", "api_version": "1.0.8", "implementation_version": "1", "description": "LunarG Validation Layer", "instance_extensions": [ { "name": "VK_EXT_debug_report", "spec_version": "2" } ] } } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/linux/VkLayer_threading.json000066400000000000000000000007231270147354000276570ustar00rootroot00000000000000{ "file_format_version" : "1.0.0", "layer" : { "name": 
"VK_LAYER_GOOGLE_threading", "type": "GLOBAL", "library_path": "./libVkLayer_threading.so", "api_version": "1.0.8", "implementation_version": "1", "description": "Google Validation Layer", "instance_extensions": [ { "name": "VK_EXT_debug_report", "spec_version": "2" } ] } } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/linux/VkLayer_unique_objects.json000066400000000000000000000004751270147354000307350ustar00rootroot00000000000000{ "file_format_version" : "1.0.0", "layer" : { "name": "VK_LAYER_GOOGLE_unique_objects", "type": "GLOBAL", "library_path": "./libVkLayer_unique_objects.so", "api_version": "1.0.8", "implementation_version": "1", "description": "Google Validation Layer" } } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/object_tracker.h000066400000000000000000001525021270147354000253600ustar00rootroot00000000000000/* Copyright (c) 2015-2016 The Khronos Group Inc. * Copyright (c) 2015-2016 Valve Corporation * Copyright (c) 2015-2016 LunarG, Inc. * Copyright (C) 2015-2016 Google Inc. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and/or associated documentation files (the "Materials"), to * deal in the Materials without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Materials, and to permit persons to whom the Materials * are furnished to do so, subject to the following conditions: * * The above copyright notice(s) and this permission notice shall be included * in all copies or substantial portions of the Materials. * * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
* * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE * USE OR OTHER DEALINGS IN THE MATERIALS * * Author: Jon Ashburn * Author: Mark Lobodzinski * Author: Tobin Ehlis */ #include "vulkan/vk_layer.h" #include "vk_layer_extension_utils.h" #include "vk_enum_string_helper.h" #include "vk_layer_table.h" #include "vk_layer_utils.h" // Object Tracker ERROR codes typedef enum _OBJECT_TRACK_ERROR { OBJTRACK_NONE, // Used for INFO & other non-error messages OBJTRACK_UNKNOWN_OBJECT, // Updating uses of object that's not in global object list OBJTRACK_INTERNAL_ERROR, // Bug with data tracking within the layer OBJTRACK_DESTROY_OBJECT_FAILED, // Couldn't find object to be destroyed OBJTRACK_OBJECT_LEAK, // OBJECT was not correctly freed/destroyed OBJTRACK_OBJCOUNT_MAX_EXCEEDED, // Request for Object data in excess of max obj count OBJTRACK_INVALID_OBJECT, // Object used that has never been created OBJTRACK_DESCRIPTOR_POOL_MISMATCH, // Descriptor Pools specified incorrectly OBJTRACK_COMMAND_POOL_MISMATCH, // Command Pools specified incorrectly } OBJECT_TRACK_ERROR; // Object Status -- used to track state of individual objects typedef VkFlags ObjectStatusFlags; typedef enum _ObjectStatusFlagBits { OBJSTATUS_NONE = 0x00000000, // No status is set OBJSTATUS_FENCE_IS_SUBMITTED = 0x00000001, // Fence has been submitted OBJSTATUS_VIEWPORT_BOUND = 0x00000002, // Viewport state object has been bound OBJSTATUS_RASTER_BOUND = 0x00000004, // Viewport state object has been bound OBJSTATUS_COLOR_BLEND_BOUND = 0x00000008, // Viewport state object has been bound OBJSTATUS_DEPTH_STENCIL_BOUND = 0x00000010, // Viewport state object has been bound OBJSTATUS_GPU_MEM_MAPPED = 0x00000020, // Memory object is currently mapped OBJSTATUS_COMMAND_BUFFER_SECONDARY = 0x00000040, // Command Buffer is of type SECONDARY } 
ObjectStatusFlagBits;

typedef struct _OBJTRACK_NODE {
    uint64_t vkObj;                     // Object handle
    VkDebugReportObjectTypeEXT objType; // Object type identifier
    ObjectStatusFlags status;           // Object state
    uint64_t parentObj;                 // Parent object
    uint64_t belongsTo;                 // Object Scope -- owning device/instance
} OBJTRACK_NODE;

// prototype for extension functions
uint64_t objTrackGetObjectCount(VkDevice device);
uint64_t objTrackGetObjectsOfTypeCount(VkDevice, VkDebugReportObjectTypeEXT type);

// Func ptr typedefs
typedef uint64_t (*OBJ_TRACK_GET_OBJECT_COUNT)(VkDevice);
typedef uint64_t (*OBJ_TRACK_GET_OBJECTS_OF_TYPE_COUNT)(VkDevice, VkDebugReportObjectTypeEXT);

struct layer_data {
    debug_report_data *report_data; // TODO: put instance data here
    std::vector<VkDebugReportCallbackEXT> logging_callback;
    bool wsi_enabled;
    bool objtrack_extensions_enabled;

    layer_data() : report_data(nullptr), wsi_enabled(false), objtrack_extensions_enabled(false){};
};

struct instExts {
    bool wsi_enabled;
};

static std::unordered_map<void *, struct instExts> instanceExtMap;
static std::unordered_map<void *, layer_data *> layer_data_map;
static device_table_map object_tracker_device_table_map;
static instance_table_map object_tracker_instance_table_map;

// We additionally need to validate image usage using a separate map
// of swapchain-created images
static unordered_map<uint64_t, OBJTRACK_NODE *> swapchainImageMap;

static long long unsigned int object_track_index = 0;
static int objLockInitialized = 0;
static loader_platform_thread_mutex objLock;

// Objects stored in a global map w/ struct containing basic info
// unordered_map<uint64_t, OBJTRACK_NODE *> objMap;

#define NUM_OBJECT_TYPES (VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT + 1)

static uint64_t numObjs[NUM_OBJECT_TYPES] = {0};
static uint64_t numTotalObjs = 0;
static VkQueueFamilyProperties *queueInfo = NULL;
static uint32_t queueCount = 0;

template layer_data *get_my_data_ptr<layer_data>(void *data_key, std::unordered_map<void *, layer_data *> &data_map);

//
// Internal Object Tracker Functions
//

static void createDeviceRegisterExtensions(const VkDeviceCreateInfo *pCreateInfo, VkDevice device) {
    layer_data
*my_device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); VkLayerDispatchTable *pDisp = get_dispatch_table(object_tracker_device_table_map, device); PFN_vkGetDeviceProcAddr gpa = pDisp->GetDeviceProcAddr; pDisp->CreateSwapchainKHR = (PFN_vkCreateSwapchainKHR)gpa(device, "vkCreateSwapchainKHR"); pDisp->DestroySwapchainKHR = (PFN_vkDestroySwapchainKHR)gpa(device, "vkDestroySwapchainKHR"); pDisp->GetSwapchainImagesKHR = (PFN_vkGetSwapchainImagesKHR)gpa(device, "vkGetSwapchainImagesKHR"); pDisp->AcquireNextImageKHR = (PFN_vkAcquireNextImageKHR)gpa(device, "vkAcquireNextImageKHR"); pDisp->QueuePresentKHR = (PFN_vkQueuePresentKHR)gpa(device, "vkQueuePresentKHR"); my_device_data->wsi_enabled = false; for (uint32_t i = 0; i < pCreateInfo->enabledExtensionCount; i++) { if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_SWAPCHAIN_EXTENSION_NAME) == 0) my_device_data->wsi_enabled = true; if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], "OBJTRACK_EXTENSIONS") == 0) my_device_data->objtrack_extensions_enabled = true; } } static void createInstanceRegisterExtensions(const VkInstanceCreateInfo *pCreateInfo, VkInstance instance) { uint32_t i; VkLayerInstanceDispatchTable *pDisp = get_dispatch_table(object_tracker_instance_table_map, instance); PFN_vkGetInstanceProcAddr gpa = pDisp->GetInstanceProcAddr; pDisp->DestroySurfaceKHR = (PFN_vkDestroySurfaceKHR)gpa(instance, "vkDestroySurfaceKHR"); pDisp->GetPhysicalDeviceSurfaceSupportKHR = (PFN_vkGetPhysicalDeviceSurfaceSupportKHR)gpa(instance, "vkGetPhysicalDeviceSurfaceSupportKHR"); pDisp->GetPhysicalDeviceSurfaceCapabilitiesKHR = (PFN_vkGetPhysicalDeviceSurfaceCapabilitiesKHR)gpa(instance, "vkGetPhysicalDeviceSurfaceCapabilitiesKHR"); pDisp->GetPhysicalDeviceSurfaceFormatsKHR = (PFN_vkGetPhysicalDeviceSurfaceFormatsKHR)gpa(instance, "vkGetPhysicalDeviceSurfaceFormatsKHR"); pDisp->GetPhysicalDeviceSurfacePresentModesKHR = (PFN_vkGetPhysicalDeviceSurfacePresentModesKHR)gpa(instance, 
"vkGetPhysicalDeviceSurfacePresentModesKHR"); #if VK_USE_PLATFORM_WIN32_KHR pDisp->CreateWin32SurfaceKHR = (PFN_vkCreateWin32SurfaceKHR)gpa(instance, "vkCreateWin32SurfaceKHR"); pDisp->GetPhysicalDeviceWin32PresentationSupportKHR = (PFN_vkGetPhysicalDeviceWin32PresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceWin32PresentationSupportKHR"); #endif // VK_USE_PLATFORM_WIN32_KHR #ifdef VK_USE_PLATFORM_XCB_KHR pDisp->CreateXcbSurfaceKHR = (PFN_vkCreateXcbSurfaceKHR)gpa(instance, "vkCreateXcbSurfaceKHR"); pDisp->GetPhysicalDeviceXcbPresentationSupportKHR = (PFN_vkGetPhysicalDeviceXcbPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceXcbPresentationSupportKHR"); #endif // VK_USE_PLATFORM_XCB_KHR #ifdef VK_USE_PLATFORM_XLIB_KHR pDisp->CreateXlibSurfaceKHR = (PFN_vkCreateXlibSurfaceKHR)gpa(instance, "vkCreateXlibSurfaceKHR"); pDisp->GetPhysicalDeviceXlibPresentationSupportKHR = (PFN_vkGetPhysicalDeviceXlibPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceXlibPresentationSupportKHR"); #endif // VK_USE_PLATFORM_XLIB_KHR #ifdef VK_USE_PLATFORM_MIR_KHR pDisp->CreateMirSurfaceKHR = (PFN_vkCreateMirSurfaceKHR)gpa(instance, "vkCreateMirSurfaceKHR"); pDisp->GetPhysicalDeviceMirPresentationSupportKHR = (PFN_vkGetPhysicalDeviceMirPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceMirPresentationSupportKHR"); #endif // VK_USE_PLATFORM_MIR_KHR #ifdef VK_USE_PLATFORM_WAYLAND_KHR pDisp->CreateWaylandSurfaceKHR = (PFN_vkCreateWaylandSurfaceKHR)gpa(instance, "vkCreateWaylandSurfaceKHR"); pDisp->GetPhysicalDeviceWaylandPresentationSupportKHR = (PFN_vkGetPhysicalDeviceWaylandPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceWaylandPresentationSupportKHR"); #endif // VK_USE_PLATFORM_WAYLAND_KHR #ifdef VK_USE_PLATFORM_ANDROID_KHR pDisp->CreateAndroidSurfaceKHR = (PFN_vkCreateAndroidSurfaceKHR)gpa(instance, "vkCreateAndroidSurfaceKHR"); #endif // VK_USE_PLATFORM_ANDROID_KHR instanceExtMap[pDisp].wsi_enabled = false; for (i = 0; i < 
pCreateInfo->enabledExtensionCount; i++) {
        if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_SURFACE_EXTENSION_NAME) == 0)
            instanceExtMap[pDisp].wsi_enabled = true;
    }
}

// Indicate device or instance dispatch table type
typedef enum _DispTableType {
    DISP_TBL_TYPE_INSTANCE,
    DISP_TBL_TYPE_DEVICE,
} DispTableType;

debug_report_data *mdd(const void *object) {
    dispatch_key key = get_dispatch_key(object);
    layer_data *my_data = get_my_data_ptr(key, layer_data_map);
    return my_data->report_data;
}

debug_report_data *mid(VkInstance object) {
    dispatch_key key = get_dispatch_key(object);
    layer_data *my_data = get_my_data_ptr(key, layer_data_map);
    return my_data->report_data;
}

// For each Queue's doubly linked-list of mem refs
typedef struct _OT_MEM_INFO {
    VkDeviceMemory mem;
    struct _OT_MEM_INFO *pNextMI;
    struct _OT_MEM_INFO *pPrevMI;
} OT_MEM_INFO;

// Track Queue information
typedef struct _OT_QUEUE_INFO {
    OT_MEM_INFO *pMemRefList;
    struct _OT_QUEUE_INFO *pNextQI;
    uint32_t queueNodeIndex;
    VkQueue queue;
    uint32_t refCount;
} OT_QUEUE_INFO;

// Global list of QueueInfo structures, one per queue
static OT_QUEUE_INFO *g_pQueueInfo = NULL;

// Convert an object type enum to an object type array index
static uint32_t objTypeToIndex(uint32_t objType) {
    uint32_t index = objType;
    return index;
}

// Add new queue to head of global queue list
static void addQueueInfo(uint32_t queueNodeIndex, VkQueue queue) {
    OT_QUEUE_INFO *pQueueInfo = new OT_QUEUE_INFO;
    if (pQueueInfo != NULL) {
        memset(pQueueInfo, 0, sizeof(OT_QUEUE_INFO));
        pQueueInfo->queue = queue;
        pQueueInfo->queueNodeIndex = queueNodeIndex;
        pQueueInfo->pNextQI = g_pQueueInfo;
        g_pQueueInfo = pQueueInfo;
    } else {
        log_msg(mdd(queue), VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT,
                reinterpret_cast<uint64_t>(queue), __LINE__, OBJTRACK_INTERNAL_ERROR, "OBJTRACK",
                "ERROR: VK_ERROR_OUT_OF_HOST_MEMORY -- could not allocate memory for Queue Information");
    }
}

// Destroy memRef lists and free all memory
static void
destroyQueueMemRefLists(void) { OT_QUEUE_INFO *pQueueInfo = g_pQueueInfo; OT_QUEUE_INFO *pDelQueueInfo = NULL; while (pQueueInfo != NULL) { OT_MEM_INFO *pMemInfo = pQueueInfo->pMemRefList; while (pMemInfo != NULL) { OT_MEM_INFO *pDelMemInfo = pMemInfo; pMemInfo = pMemInfo->pNextMI; delete pDelMemInfo; } pDelQueueInfo = pQueueInfo; pQueueInfo = pQueueInfo->pNextQI; delete pDelQueueInfo; } g_pQueueInfo = pQueueInfo; } static void setGpuQueueInfoState(uint32_t count, void *pData) { queueCount = count; queueInfo = (VkQueueFamilyProperties *)realloc((void *)queueInfo, count * sizeof(VkQueueFamilyProperties)); if (queueInfo != NULL) { memcpy(queueInfo, pData, count * sizeof(VkQueueFamilyProperties)); } } // Check Queue type flags for selected queue operations static void validateQueueFlags(VkQueue queue, const char *function) { OT_QUEUE_INFO *pQueueInfo = g_pQueueInfo; while ((pQueueInfo != NULL) && (pQueueInfo->queue != queue)) { pQueueInfo = pQueueInfo->pNextQI; } if (pQueueInfo != NULL) { if ((queueInfo != NULL) && (queueInfo[pQueueInfo->queueNodeIndex].queueFlags & VK_QUEUE_SPARSE_BINDING_BIT) == 0) { log_msg(mdd(queue), VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT, reinterpret_cast(queue), __LINE__, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", "Attempting %s on a non-memory-management capable queue -- VK_QUEUE_SPARSE_BINDING_BIT not set", function); } } } /* TODO: Port to new type safety */ #if 0 // Check object status for selected flag state static VkBool32 validate_status( VkObject dispatchable_object, VkObject vkObj, VkObjectType objType, ObjectStatusFlags status_mask, ObjectStatusFlags status_flag, VkFlags msg_flags, OBJECT_TRACK_ERROR error_code, const char *fail_msg) { if (objMap.find(vkObj) != objMap.end()) { OBJTRACK_NODE* pNode = objMap[vkObj]; if ((pNode->status & status_mask) != status_flag) { char str[1024]; log_msg(mdd(dispatchable_object), msg_flags, pNode->objType, vkObj, __LINE__, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", "OBJECT 
VALIDATION WARNING: %s object 0x%" PRIxLEAST64 ": %s", string_VkObjectType(objType), static_cast(vkObj), fail_msg); return VK_FALSE; } return VK_TRUE; } else { // If we do not find it print an error log_msg(mdd(dispatchable_object), msg_flags, (VkObjectType) 0, vkObj, __LINE__, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", "Unable to obtain status for non-existent object 0x%" PRIxLEAST64 " of %s type", static_cast(vkObj), string_VkObjectType(objType)); return VK_FALSE; } } #endif #include "vk_dispatch_table_helper.h" static void init_object_tracker(layer_data *my_data, const VkAllocationCallbacks *pAllocator) { layer_debug_actions(my_data->report_data, my_data->logging_callback, pAllocator, "lunarg_object_tracker"); if (!objLockInitialized) { // TODO/TBD: Need to delete this mutex sometime. How??? One // suggestion is to call this during vkCreateInstance(), and then we // can clean it up during vkDestroyInstance(). However, that requires // that the layer have per-instance locks. We need to come back and // address this soon. 
        loader_platform_thread_create_mutex(&objLock);
        objLockInitialized = 1;
    }
}

//
// Forward declarations
//

static void create_physical_device(VkInstance dispatchable_object, VkPhysicalDevice vkObj, VkDebugReportObjectTypeEXT objType);
static void create_instance(VkInstance dispatchable_object, VkInstance object, VkDebugReportObjectTypeEXT objType);
static void create_device(VkDevice dispatchable_object, VkDevice object, VkDebugReportObjectTypeEXT objType);
static void create_device(VkPhysicalDevice dispatchable_object, VkDevice object, VkDebugReportObjectTypeEXT objType);
static void create_queue(VkDevice dispatchable_object, VkQueue vkObj, VkDebugReportObjectTypeEXT objType);
static VkBool32 validate_image(VkQueue dispatchable_object, VkImage object, VkDebugReportObjectTypeEXT objType, bool null_allowed);
static VkBool32 validate_instance(VkInstance dispatchable_object, VkInstance object, VkDebugReportObjectTypeEXT objType,
                                  bool null_allowed);
static VkBool32 validate_device(VkDevice dispatchable_object, VkDevice object, VkDebugReportObjectTypeEXT objType,
                                bool null_allowed);
static VkBool32 validate_descriptor_pool(VkDevice dispatchable_object, VkDescriptorPool object, VkDebugReportObjectTypeEXT objType,
                                         bool null_allowed);
static VkBool32 validate_descriptor_set_layout(VkDevice dispatchable_object, VkDescriptorSetLayout object,
                                               VkDebugReportObjectTypeEXT objType, bool null_allowed);
static VkBool32 validate_command_pool(VkDevice dispatchable_object, VkCommandPool object, VkDebugReportObjectTypeEXT objType,
                                      bool null_allowed);
static VkBool32 validate_buffer(VkQueue dispatchable_object, VkBuffer object, VkDebugReportObjectTypeEXT objType,
                                bool null_allowed);
static void create_pipeline(VkDevice dispatchable_object, VkPipeline vkObj, VkDebugReportObjectTypeEXT objType);
static VkBool32 validate_pipeline_cache(VkDevice dispatchable_object, VkPipelineCache object, VkDebugReportObjectTypeEXT objType,
                                        bool null_allowed);
static VkBool32 validate_render_pass(VkDevice dispatchable_object, VkRenderPass object, VkDebugReportObjectTypeEXT objType,
                                     bool null_allowed);
static VkBool32 validate_shader_module(VkDevice dispatchable_object, VkShaderModule object, VkDebugReportObjectTypeEXT objType,
                                       bool null_allowed);
static VkBool32 validate_pipeline_layout(VkDevice dispatchable_object, VkPipelineLayout object, VkDebugReportObjectTypeEXT objType,
                                         bool null_allowed);
static VkBool32 validate_pipeline(VkDevice dispatchable_object, VkPipeline object, VkDebugReportObjectTypeEXT objType,
                                  bool null_allowed);
static void destroy_command_pool(VkDevice dispatchable_object, VkCommandPool object);
static void destroy_command_buffer(VkCommandBuffer dispatchable_object, VkCommandBuffer object);
static void destroy_descriptor_pool(VkDevice dispatchable_object, VkDescriptorPool object);
static void destroy_descriptor_set(VkDevice dispatchable_object, VkDescriptorSet object);
static void destroy_device_memory(VkDevice dispatchable_object, VkDeviceMemory object);
static void destroy_swapchain_khr(VkDevice dispatchable_object, VkSwapchainKHR object);
static VkBool32 set_device_memory_status(VkDevice dispatchable_object, VkDeviceMemory object, VkDebugReportObjectTypeEXT objType,
                                         ObjectStatusFlags status_flag);
static VkBool32 reset_device_memory_status(VkDevice dispatchable_object, VkDeviceMemory object, VkDebugReportObjectTypeEXT objType,
                                           ObjectStatusFlags status_flag);
#if 0
static VkBool32 validate_status(VkDevice dispatchable_object, VkFence object, VkDebugReportObjectTypeEXT objType,
                                ObjectStatusFlags status_mask, ObjectStatusFlags status_flag, VkFlags msg_flags,
                                OBJECT_TRACK_ERROR error_code, const char *fail_msg);
#endif

extern unordered_map<uint64_t, OBJTRACK_NODE *> VkPhysicalDeviceMap;
extern unordered_map<uint64_t, OBJTRACK_NODE *> VkDeviceMap;
extern unordered_map<uint64_t, OBJTRACK_NODE *> VkImageMap;
extern unordered_map<uint64_t, OBJTRACK_NODE *> VkQueueMap;
extern unordered_map<uint64_t, OBJTRACK_NODE *> VkDescriptorSetMap;
extern unordered_map<uint64_t, OBJTRACK_NODE *> VkBufferMap;
extern unordered_map<uint64_t, OBJTRACK_NODE *> VkFenceMap;
extern unordered_map<uint64_t, OBJTRACK_NODE *> VkSemaphoreMap;
extern unordered_map<uint64_t, OBJTRACK_NODE *> VkCommandPoolMap;
extern unordered_map<uint64_t, OBJTRACK_NODE *> VkCommandBufferMap;
extern unordered_map<uint64_t, OBJTRACK_NODE *> VkSwapchainKHRMap;
extern unordered_map<uint64_t, OBJTRACK_NODE *> VkSurfaceKHRMap;

static void create_physical_device(VkInstance dispatchable_object, VkPhysicalDevice vkObj, VkDebugReportObjectTypeEXT objType) {
    log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, objType, reinterpret_cast<uint64_t>(vkObj), __LINE__,
            OBJTRACK_NONE, "OBJTRACK", "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64, object_track_index++,
            string_VkDebugReportObjectTypeEXT(objType), reinterpret_cast<uint64_t>(vkObj));

    OBJTRACK_NODE *pNewObjNode = new OBJTRACK_NODE;
    pNewObjNode->objType = objType;
    pNewObjNode->belongsTo = (uint64_t)dispatchable_object;
    pNewObjNode->status = OBJSTATUS_NONE;
    pNewObjNode->vkObj = reinterpret_cast<uint64_t>(vkObj);
    VkPhysicalDeviceMap[reinterpret_cast<uint64_t>(vkObj)] = pNewObjNode;
    uint32_t objIndex = objTypeToIndex(objType);
    numObjs[objIndex]++;
    numTotalObjs++;
}

static void create_surface_khr(VkInstance dispatchable_object, VkSurfaceKHR vkObj, VkDebugReportObjectTypeEXT objType) {
    // TODO: Add tracking of surface objects
    log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, objType, (uint64_t)(vkObj), __LINE__, OBJTRACK_NONE,
            "OBJTRACK", "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64, object_track_index++,
            string_VkDebugReportObjectTypeEXT(objType), (uint64_t)(vkObj));

    OBJTRACK_NODE *pNewObjNode = new OBJTRACK_NODE;
    pNewObjNode->objType = objType;
    pNewObjNode->belongsTo = (uint64_t)dispatchable_object;
    pNewObjNode->status = OBJSTATUS_NONE;
    pNewObjNode->vkObj = (uint64_t)(vkObj);
    VkSurfaceKHRMap[(uint64_t)vkObj] = pNewObjNode;
    uint32_t objIndex = objTypeToIndex(objType);
    numObjs[objIndex]++;
    numTotalObjs++;
}

static void destroy_surface_khr(VkInstance dispatchable_object, VkSurfaceKHR object) {
    uint64_t object_handle = (uint64_t)(object);
    if (VkSurfaceKHRMap.find(object_handle) != VkSurfaceKHRMap.end()) {
        OBJTRACK_NODE *pNode = VkSurfaceKHRMap[(uint64_t)object];
        uint32_t objIndex = objTypeToIndex(pNode->objType);
        assert(numTotalObjs > 0);
        numTotalObjs--;
        assert(numObjs[objIndex] > 0);
        numObjs[objIndex]--;
        log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, pNode->objType, object_handle, __LINE__,
                OBJTRACK_NONE, "OBJTRACK",
                "OBJ_STAT Destroy %s obj 0x%" PRIxLEAST64 " (%" PRIu64 " total objs remain & %" PRIu64 " %s objs).",
                string_VkDebugReportObjectTypeEXT(pNode->objType), (uint64_t)(object), numTotalObjs, numObjs[objIndex],
                string_VkDebugReportObjectTypeEXT(pNode->objType));
        delete pNode;
        VkSurfaceKHRMap.erase(object_handle);
    } else {
        log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, object_handle, __LINE__,
                OBJTRACK_NONE, "OBJTRACK",
                "Unable to remove obj 0x%" PRIxLEAST64 ". Was it created? Has it already been destroyed?", object_handle);
    }
}

static void alloc_command_buffer(VkDevice device, VkCommandPool commandPool, VkCommandBuffer vkObj,
                                 VkDebugReportObjectTypeEXT objType, VkCommandBufferLevel level) {
    log_msg(mdd(device), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, objType, reinterpret_cast<uint64_t>(vkObj), __LINE__, OBJTRACK_NONE,
            "OBJTRACK", "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64, object_track_index++,
            string_VkDebugReportObjectTypeEXT(objType), reinterpret_cast<uint64_t>(vkObj));

    OBJTRACK_NODE *pNewObjNode = new OBJTRACK_NODE;
    pNewObjNode->objType = objType;
    pNewObjNode->belongsTo = (uint64_t)device;
    pNewObjNode->vkObj = reinterpret_cast<uint64_t>(vkObj);
    pNewObjNode->parentObj = (uint64_t)commandPool;
    if (level == VK_COMMAND_BUFFER_LEVEL_SECONDARY) {
        pNewObjNode->status = OBJSTATUS_COMMAND_BUFFER_SECONDARY;
    } else {
        pNewObjNode->status = OBJSTATUS_NONE;
    }
    VkCommandBufferMap[reinterpret_cast<uint64_t>(vkObj)] = pNewObjNode;
    uint32_t objIndex = objTypeToIndex(objType);
    numObjs[objIndex]++;
    numTotalObjs++;
}

static void free_command_buffer(VkDevice device, VkCommandPool commandPool, VkCommandBuffer commandBuffer) {
    uint64_t object_handle = reinterpret_cast<uint64_t>(commandBuffer);
    if (VkCommandBufferMap.find(object_handle) != VkCommandBufferMap.end()) {
        OBJTRACK_NODE *pNode = VkCommandBufferMap[(uint64_t)commandBuffer];

        if (pNode->parentObj != (uint64_t)(commandPool)) {
            log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, pNode->objType, object_handle, __LINE__,
                    OBJTRACK_COMMAND_POOL_MISMATCH, "OBJTRACK",
                    "FreeCommandBuffers is attempting to free Command Buffer 0x%" PRIxLEAST64
                    " belonging to Command Pool 0x%" PRIxLEAST64 " from pool 0x%" PRIxLEAST64 ").",
                    reinterpret_cast<uint64_t>(commandBuffer), pNode->parentObj, (uint64_t)(commandPool));
        } else {
            uint32_t objIndex = objTypeToIndex(pNode->objType);
            assert(numTotalObjs > 0);
            numTotalObjs--;
            assert(numObjs[objIndex] > 0);
            numObjs[objIndex]--;
            log_msg(mdd(device), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, pNode->objType, object_handle, __LINE__, OBJTRACK_NONE,
                    "OBJTRACK", "OBJ_STAT Destroy %s obj 0x%" PRIxLEAST64 " (%" PRIu64 " total objs remain & %" PRIu64 " %s objs).",
                    string_VkDebugReportObjectTypeEXT(pNode->objType), reinterpret_cast<uint64_t>(commandBuffer), numTotalObjs,
                    numObjs[objIndex], string_VkDebugReportObjectTypeEXT(pNode->objType));
            delete pNode;
            VkCommandBufferMap.erase(object_handle);
        }
    } else {
        log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, object_handle, __LINE__, OBJTRACK_NONE,
                "OBJTRACK", "Unable to remove obj 0x%" PRIxLEAST64 ". Was it created? Has it already been destroyed?",
                object_handle);
    }
}

static void alloc_descriptor_set(VkDevice device, VkDescriptorPool descriptorPool, VkDescriptorSet vkObj,
                                 VkDebugReportObjectTypeEXT objType) {
    log_msg(mdd(device), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, objType, (uint64_t)(vkObj), __LINE__, OBJTRACK_NONE, "OBJTRACK",
            "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64, object_track_index++, string_VkDebugReportObjectTypeEXT(objType),
            (uint64_t)(vkObj));

    OBJTRACK_NODE *pNewObjNode = new OBJTRACK_NODE;
    pNewObjNode->objType = objType;
    pNewObjNode->belongsTo = (uint64_t)device;
    pNewObjNode->status = OBJSTATUS_NONE;
    pNewObjNode->vkObj = (uint64_t)(vkObj);
    pNewObjNode->parentObj = (uint64_t)descriptorPool;
    VkDescriptorSetMap[(uint64_t)vkObj] = pNewObjNode;
    uint32_t objIndex = objTypeToIndex(objType);
    numObjs[objIndex]++;
    numTotalObjs++;
}

static void free_descriptor_set(VkDevice device, VkDescriptorPool descriptorPool, VkDescriptorSet descriptorSet) {
    uint64_t object_handle = (uint64_t)(descriptorSet);
    if (VkDescriptorSetMap.find(object_handle) != VkDescriptorSetMap.end()) {
        OBJTRACK_NODE *pNode = VkDescriptorSetMap[(uint64_t)descriptorSet];

        if (pNode->parentObj != (uint64_t)(descriptorPool)) {
            log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, pNode->objType, object_handle, __LINE__,
                    OBJTRACK_DESCRIPTOR_POOL_MISMATCH, "OBJTRACK",
                    "FreeDescriptorSets is attempting to free descriptorSet 0x%" PRIxLEAST64
                    " belonging to Descriptor Pool 0x%" PRIxLEAST64 " from pool 0x%" PRIxLEAST64 ").",
                    (uint64_t)(descriptorSet), pNode->parentObj, (uint64_t)(descriptorPool));
        } else {
            uint32_t objIndex = objTypeToIndex(pNode->objType);
            assert(numTotalObjs > 0);
            numTotalObjs--;
            assert(numObjs[objIndex] > 0);
            numObjs[objIndex]--;
            log_msg(mdd(device), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, pNode->objType, object_handle, __LINE__, OBJTRACK_NONE,
                    "OBJTRACK", "OBJ_STAT Destroy %s obj 0x%" PRIxLEAST64 " (%" PRIu64 " total objs remain & %" PRIu64 " %s objs).",
                    string_VkDebugReportObjectTypeEXT(pNode->objType), (uint64_t)(descriptorSet), numTotalObjs, numObjs[objIndex],
                    string_VkDebugReportObjectTypeEXT(pNode->objType));
            delete pNode;
            VkDescriptorSetMap.erase(object_handle);
        }
    } else {
        log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, object_handle, __LINE__, OBJTRACK_NONE,
                "OBJTRACK", "Unable to remove obj 0x%" PRIxLEAST64 ". Was it created? Has it already been destroyed?",
                object_handle);
    }
}

static void create_queue(VkDevice dispatchable_object, VkQueue vkObj, VkDebugReportObjectTypeEXT objType) {
    log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, objType, reinterpret_cast<uint64_t>(vkObj), __LINE__,
            OBJTRACK_NONE, "OBJTRACK", "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64, object_track_index++,
            string_VkDebugReportObjectTypeEXT(objType), reinterpret_cast<uint64_t>(vkObj));

    OBJTRACK_NODE *pNewObjNode = new OBJTRACK_NODE;
    pNewObjNode->objType = objType;
    pNewObjNode->belongsTo = (uint64_t)dispatchable_object;
    pNewObjNode->status = OBJSTATUS_NONE;
    pNewObjNode->vkObj = reinterpret_cast<uint64_t>(vkObj);
    VkQueueMap[reinterpret_cast<uint64_t>(vkObj)] = pNewObjNode;
    uint32_t objIndex = objTypeToIndex(objType);
    numObjs[objIndex]++;
    numTotalObjs++;
}

static void create_swapchain_image_obj(VkDevice dispatchable_object, VkImage vkObj, VkSwapchainKHR swapchain) {
    log_msg(mdd(dispatchable_object), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, (uint64_t)vkObj,
            __LINE__, OBJTRACK_NONE, "OBJTRACK", "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64, object_track_index++,
            "SwapchainImage", (uint64_t)(vkObj));

    OBJTRACK_NODE *pNewObjNode = new OBJTRACK_NODE;
    pNewObjNode->belongsTo = (uint64_t)dispatchable_object;
    pNewObjNode->objType = VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT;
    pNewObjNode->status = OBJSTATUS_NONE;
    pNewObjNode->vkObj = (uint64_t)vkObj;
    pNewObjNode->parentObj = (uint64_t)swapchain;
    swapchainImageMap[(uint64_t)(vkObj)] = pNewObjNode;
}

static void create_device(VkInstance dispatchable_object, VkDevice vkObj, VkDebugReportObjectTypeEXT objType) {
    log_msg(mid(dispatchable_object), VK_DEBUG_REPORT_INFORMATION_BIT_EXT, objType, (uint64_t)(vkObj), __LINE__, OBJTRACK_NONE,
            "OBJTRACK", "OBJ[%llu] : CREATE %s object 0x%" PRIxLEAST64, object_track_index++,
            string_VkDebugReportObjectTypeEXT(objType), (uint64_t)(vkObj));

    OBJTRACK_NODE *pNewObjNode = new OBJTRACK_NODE;
    pNewObjNode->belongsTo = (uint64_t)dispatchable_object;
    pNewObjNode->objType = objType;
    pNewObjNode->status = OBJSTATUS_NONE;
    pNewObjNode->vkObj = (uint64_t)(vkObj);
    VkDeviceMap[(uint64_t)vkObj] = pNewObjNode;
    uint32_t objIndex = objTypeToIndex(objType);
    numObjs[objIndex]++;
    numTotalObjs++;
}

//
// Non-auto-generated API functions called by generated code
//

VkResult explicit_CreateInstance(const VkInstanceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator,
                                 VkInstance *pInstance) {
    VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);

    assert(chain_info->u.pLayerInfo);
    PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
    PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance)fpGetInstanceProcAddr(NULL, "vkCreateInstance");
    if (fpCreateInstance == NULL) {
        return VK_ERROR_INITIALIZATION_FAILED;
    }

    // Advance the link info for the next element on the chain
    chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;

    VkResult result = fpCreateInstance(pCreateInfo, pAllocator, pInstance);
    if (result != VK_SUCCESS) {
        return result;
    }

    layer_data *my_data = get_my_data_ptr(get_dispatch_key(*pInstance), layer_data_map);
    initInstanceTable(*pInstance, fpGetInstanceProcAddr, object_tracker_instance_table_map);
    VkLayerInstanceDispatchTable *pInstanceTable = get_dispatch_table(object_tracker_instance_table_map, *pInstance);

    my_data->report_data = debug_report_create_instance(pInstanceTable, *pInstance, pCreateInfo->enabledExtensionCount,
                                                        pCreateInfo->ppEnabledExtensionNames);
    init_object_tracker(my_data, pAllocator);
    createInstanceRegisterExtensions(pCreateInfo, *pInstance);

    create_instance(*pInstance, *pInstance, VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT);

    return result;
}

void explicit_GetPhysicalDeviceQueueFamilyProperties(VkPhysicalDevice gpu, uint32_t *pCount,
                                                     VkQueueFamilyProperties *pProperties) {
    get_dispatch_table(object_tracker_instance_table_map, gpu)->GetPhysicalDeviceQueueFamilyProperties(gpu, pCount, pProperties);

    loader_platform_thread_lock_mutex(&objLock);
    if (pProperties != NULL)
        setGpuQueueInfoState(*pCount, pProperties);
    loader_platform_thread_unlock_mutex(&objLock);
}

VkResult explicit_CreateDevice(VkPhysicalDevice gpu, const VkDeviceCreateInfo *pCreateInfo,
                               const VkAllocationCallbacks *pAllocator, VkDevice *pDevice) {
    loader_platform_thread_lock_mutex(&objLock);
    VkLayerDeviceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);

    assert(chain_info->u.pLayerInfo);
    PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
    PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr;
    PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice)fpGetInstanceProcAddr(NULL, "vkCreateDevice");
    if (fpCreateDevice == NULL) {
        loader_platform_thread_unlock_mutex(&objLock);
        return VK_ERROR_INITIALIZATION_FAILED;
    }

    // Advance the link info for the next element on the chain
    chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;

    VkResult result = fpCreateDevice(gpu, pCreateInfo, pAllocator, pDevice);
    if (result != VK_SUCCESS) {
        loader_platform_thread_unlock_mutex(&objLock);
        return result;
    }

    layer_data *my_instance_data = get_my_data_ptr(get_dispatch_key(gpu), layer_data_map);
    layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(*pDevice), layer_data_map);
    my_device_data->report_data = layer_debug_report_create_device(my_instance_data->report_data, *pDevice);

    initDeviceTable(*pDevice, fpGetDeviceProcAddr, object_tracker_device_table_map);

    createDeviceRegisterExtensions(pCreateInfo, *pDevice);

    if (VkPhysicalDeviceMap.find((uint64_t)gpu) != VkPhysicalDeviceMap.end()) {
        OBJTRACK_NODE *pNewObjNode = VkPhysicalDeviceMap[(uint64_t)gpu];
        create_device((VkInstance)pNewObjNode->belongsTo, *pDevice, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT);
    }

    loader_platform_thread_unlock_mutex(&objLock);
    return result;
}

VkResult explicit_EnumeratePhysicalDevices(VkInstance instance, uint32_t *pPhysicalDeviceCount,
                                           VkPhysicalDevice *pPhysicalDevices) {
    VkBool32 skipCall = VK_FALSE;
    loader_platform_thread_lock_mutex(&objLock);
    skipCall |= validate_instance(instance, instance, VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, false);
    loader_platform_thread_unlock_mutex(&objLock);
    if (skipCall)
        return VK_ERROR_VALIDATION_FAILED_EXT;
    VkResult result = get_dispatch_table(object_tracker_instance_table_map, instance)
                          ->EnumeratePhysicalDevices(instance, pPhysicalDeviceCount, pPhysicalDevices);
    loader_platform_thread_lock_mutex(&objLock);
    if (result == VK_SUCCESS) {
        if (pPhysicalDevices) {
            for (uint32_t i = 0; i < *pPhysicalDeviceCount; i++) {
                create_physical_device(instance, pPhysicalDevices[i], VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT);
            }
        }
    }
    loader_platform_thread_unlock_mutex(&objLock);
    return result;
}

void explicit_GetDeviceQueue(VkDevice device, uint32_t queueNodeIndex, uint32_t queueIndex, VkQueue *pQueue) {
    loader_platform_thread_lock_mutex(&objLock);
    validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
    loader_platform_thread_unlock_mutex(&objLock);

    get_dispatch_table(object_tracker_device_table_map, device)->GetDeviceQueue(device, queueNodeIndex, queueIndex, pQueue);

    loader_platform_thread_lock_mutex(&objLock);
    addQueueInfo(queueNodeIndex, *pQueue);
    create_queue(device, *pQueue, VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT);
    loader_platform_thread_unlock_mutex(&objLock);
}

VkResult explicit_MapMemory(VkDevice device, VkDeviceMemory mem, VkDeviceSize offset, VkDeviceSize size, VkFlags flags,
                            void **ppData) {
    VkBool32 skipCall = VK_FALSE;
    loader_platform_thread_lock_mutex(&objLock);
    skipCall |= set_device_memory_status(device, mem, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, OBJSTATUS_GPU_MEM_MAPPED);
    skipCall |= validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
    loader_platform_thread_unlock_mutex(&objLock);
    if (skipCall == VK_TRUE)
        return VK_ERROR_VALIDATION_FAILED_EXT;

    VkResult result =
        get_dispatch_table(object_tracker_device_table_map, device)->MapMemory(device, mem, offset, size, flags, ppData);

    return result;
}

void explicit_UnmapMemory(VkDevice device, VkDeviceMemory mem) {
    VkBool32 skipCall = VK_FALSE;
    loader_platform_thread_lock_mutex(&objLock);
    skipCall |= reset_device_memory_status(device, mem, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT, OBJSTATUS_GPU_MEM_MAPPED);
    skipCall |= validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
    loader_platform_thread_unlock_mutex(&objLock);
    if (skipCall == VK_TRUE)
        return;

    get_dispatch_table(object_tracker_device_table_map, device)->UnmapMemory(device, mem);
}

VkResult explicit_QueueBindSparse(VkQueue queue, uint32_t bindInfoCount, const VkBindSparseInfo *pBindInfo, VkFence fence) {
    loader_platform_thread_lock_mutex(&objLock);
    validateQueueFlags(queue, "QueueBindSparse");

    for (uint32_t i = 0; i < bindInfoCount; i++) {
        for (uint32_t j = 0; j < pBindInfo[i].bufferBindCount; j++)
            validate_buffer(queue, pBindInfo[i].pBufferBinds[j].buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, false);
        for (uint32_t j = 0; j < pBindInfo[i].imageOpaqueBindCount; j++)
            validate_image(queue, pBindInfo[i].pImageOpaqueBinds[j].image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, false);
        for (uint32_t j = 0; j < pBindInfo[i].imageBindCount; j++)
            validate_image(queue, pBindInfo[i].pImageBinds[j].image, VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT, false);
    }
    loader_platform_thread_unlock_mutex(&objLock);

    VkResult result =
        get_dispatch_table(object_tracker_device_table_map, queue)->QueueBindSparse(queue, bindInfoCount, pBindInfo, fence);
    return result;
}

VkResult explicit_AllocateCommandBuffers(VkDevice device, const VkCommandBufferAllocateInfo *pAllocateInfo,
                                         VkCommandBuffer *pCommandBuffers) {
    VkBool32 skipCall = VK_FALSE;
    loader_platform_thread_lock_mutex(&objLock);
    skipCall |= validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
    skipCall |= validate_command_pool(device, pAllocateInfo->commandPool, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_POOL_EXT, false);
    loader_platform_thread_unlock_mutex(&objLock);

    if (skipCall) {
        return VK_ERROR_VALIDATION_FAILED_EXT;
    }

    VkResult result =
        get_dispatch_table(object_tracker_device_table_map, device)->AllocateCommandBuffers(device, pAllocateInfo, pCommandBuffers);

    loader_platform_thread_lock_mutex(&objLock);
    for (uint32_t i = 0; i < pAllocateInfo->commandBufferCount; i++) {
        alloc_command_buffer(device, pAllocateInfo->commandPool, pCommandBuffers[i], VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT,
                             pAllocateInfo->level);
    }
    loader_platform_thread_unlock_mutex(&objLock);

    return result;
}

VkResult explicit_AllocateDescriptorSets(VkDevice device, const VkDescriptorSetAllocateInfo *pAllocateInfo,
                                         VkDescriptorSet *pDescriptorSets) {
    VkBool32 skipCall = VK_FALSE;
    loader_platform_thread_lock_mutex(&objLock);
    skipCall |= validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
    skipCall |=
        validate_descriptor_pool(device, pAllocateInfo->descriptorPool, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT, false);
    for (uint32_t i = 0; i < pAllocateInfo->descriptorSetCount; i++) {
        skipCall |= validate_descriptor_set_layout(device, pAllocateInfo->pSetLayouts[i],
                                                   VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT, false);
    }
    loader_platform_thread_unlock_mutex(&objLock);
    if (skipCall)
        return VK_ERROR_VALIDATION_FAILED_EXT;

    VkResult result =
        get_dispatch_table(object_tracker_device_table_map, device)->AllocateDescriptorSets(device, pAllocateInfo, pDescriptorSets);

    if (VK_SUCCESS == result) {
        loader_platform_thread_lock_mutex(&objLock);
        for (uint32_t i = 0; i < pAllocateInfo->descriptorSetCount; i++) {
            alloc_descriptor_set(device, pAllocateInfo->descriptorPool, pDescriptorSets[i],
                                 VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT);
        }
        loader_platform_thread_unlock_mutex(&objLock);
    }

    return result;
}

void explicit_FreeCommandBuffers(VkDevice device, VkCommandPool commandPool, uint32_t commandBufferCount,
                                 const VkCommandBuffer *pCommandBuffers) {
    loader_platform_thread_lock_mutex(&objLock);
    validate_command_pool(device, commandPool, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_POOL_EXT, false);
    validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
    loader_platform_thread_unlock_mutex(&objLock);

    get_dispatch_table(object_tracker_device_table_map, device)
        ->FreeCommandBuffers(device, commandPool, commandBufferCount, pCommandBuffers);

    loader_platform_thread_lock_mutex(&objLock);
    for (uint32_t i = 0; i < commandBufferCount; i++) {
        free_command_buffer(device, commandPool, *pCommandBuffers);
        pCommandBuffers++;
    }
    loader_platform_thread_unlock_mutex(&objLock);
}

void explicit_DestroySwapchainKHR(VkDevice device, VkSwapchainKHR swapchain, const VkAllocationCallbacks *pAllocator) {
    loader_platform_thread_lock_mutex(&objLock);
    // A swapchain's images are implicitly deleted when the swapchain is deleted.
    // Remove this swapchain's images from our map of such images.
    unordered_map<uint64_t, OBJTRACK_NODE *>::iterator itr = swapchainImageMap.begin();
    while (itr != swapchainImageMap.end()) {
        OBJTRACK_NODE *pNode = (*itr).second;
        if (pNode->parentObj == (uint64_t)(swapchain)) {
            swapchainImageMap.erase(itr++);
        } else {
            ++itr;
        }
    }
    destroy_swapchain_khr(device, swapchain);
    loader_platform_thread_unlock_mutex(&objLock);

    get_dispatch_table(object_tracker_device_table_map, device)->DestroySwapchainKHR(device, swapchain, pAllocator);
}

void explicit_FreeMemory(VkDevice device, VkDeviceMemory mem, const VkAllocationCallbacks *pAllocator) {
    loader_platform_thread_lock_mutex(&objLock);
    validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
    loader_platform_thread_unlock_mutex(&objLock);

    get_dispatch_table(object_tracker_device_table_map, device)->FreeMemory(device, mem, pAllocator);

    loader_platform_thread_lock_mutex(&objLock);
    destroy_device_memory(device, mem);
    loader_platform_thread_unlock_mutex(&objLock);
}

VkResult explicit_FreeDescriptorSets(VkDevice device, VkDescriptorPool descriptorPool, uint32_t count,
                                     const VkDescriptorSet *pDescriptorSets) {
    loader_platform_thread_lock_mutex(&objLock);
    validate_descriptor_pool(device, descriptorPool, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT, false);
    validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
    loader_platform_thread_unlock_mutex(&objLock);
    VkResult result = get_dispatch_table(object_tracker_device_table_map, device)
                          ->FreeDescriptorSets(device, descriptorPool, count, pDescriptorSets);

    loader_platform_thread_lock_mutex(&objLock);
    for (uint32_t i = 0; i < count; i++) {
        free_descriptor_set(device, descriptorPool, *pDescriptorSets++);
    }
    loader_platform_thread_unlock_mutex(&objLock);
    return result;
}

void explicit_DestroyDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, const VkAllocationCallbacks *pAllocator) {
    VkBool32 skipCall = VK_FALSE;
    loader_platform_thread_lock_mutex(&objLock);
    skipCall |= validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
    skipCall |= validate_descriptor_pool(device, descriptorPool, VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT, false);
    loader_platform_thread_unlock_mutex(&objLock);
    if (skipCall) {
        return;
    }
    // A DescriptorPool's descriptor sets are implicitly deleted when the pool is deleted.
    // Remove this pool's descriptor sets from our descriptorSet map.
    loader_platform_thread_lock_mutex(&objLock);
    unordered_map<uint64_t, OBJTRACK_NODE *>::iterator itr = VkDescriptorSetMap.begin();
    while (itr != VkDescriptorSetMap.end()) {
        OBJTRACK_NODE *pNode = (*itr).second;
        auto del_itr = itr++;
        if (pNode->parentObj == (uint64_t)(descriptorPool)) {
            destroy_descriptor_set(device, (VkDescriptorSet)((*del_itr).first));
        }
    }
    destroy_descriptor_pool(device, descriptorPool);
    loader_platform_thread_unlock_mutex(&objLock);
    get_dispatch_table(object_tracker_device_table_map, device)->DestroyDescriptorPool(device, descriptorPool, pAllocator);
}

void explicit_DestroyCommandPool(VkDevice device, VkCommandPool commandPool, const VkAllocationCallbacks *pAllocator) {
    VkBool32 skipCall = VK_FALSE;
    loader_platform_thread_lock_mutex(&objLock);
    skipCall |= validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
    skipCall |= validate_command_pool(device, commandPool, VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_POOL_EXT, false);
    loader_platform_thread_unlock_mutex(&objLock);
    if (skipCall) {
        return;
    }
    loader_platform_thread_lock_mutex(&objLock);
    // A CommandPool's command buffers are implicitly deleted when the pool is deleted.
    // Remove this pool's cmdBuffers from our cmd buffer map.
    unordered_map<uint64_t, OBJTRACK_NODE *>::iterator itr = VkCommandBufferMap.begin();
    unordered_map<uint64_t, OBJTRACK_NODE *>::iterator del_itr;
    while (itr != VkCommandBufferMap.end()) {
        OBJTRACK_NODE *pNode = (*itr).second;
        del_itr = itr++;
        if (pNode->parentObj == (uint64_t)(commandPool)) {
            destroy_command_buffer(reinterpret_cast<VkCommandBuffer>((*del_itr).first),
                                   reinterpret_cast<VkCommandBuffer>((*del_itr).first));
        }
    }
    destroy_command_pool(device, commandPool);
    loader_platform_thread_unlock_mutex(&objLock);
    get_dispatch_table(object_tracker_device_table_map, device)->DestroyCommandPool(device, commandPool, pAllocator);
}

VkResult explicit_GetSwapchainImagesKHR(VkDevice device, VkSwapchainKHR swapchain, uint32_t *pCount, VkImage *pSwapchainImages) {
    VkBool32 skipCall = VK_FALSE;
    loader_platform_thread_lock_mutex(&objLock);
    skipCall |= validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
    loader_platform_thread_unlock_mutex(&objLock);
    if (skipCall)
        return VK_ERROR_VALIDATION_FAILED_EXT;

    VkResult result = get_dispatch_table(object_tracker_device_table_map, device)
                          ->GetSwapchainImagesKHR(device, swapchain, pCount, pSwapchainImages);

    if (pSwapchainImages != NULL) {
        loader_platform_thread_lock_mutex(&objLock);
        for (uint32_t i = 0; i < *pCount; i++) {
            create_swapchain_image_obj(device, pSwapchainImages[i], swapchain);
        }
        loader_platform_thread_unlock_mutex(&objLock);
    }
    return result;
}

// TODO: Add special case to codegen to cover validating all the pipelines instead of just the first
VkResult explicit_CreateGraphicsPipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount,
                                          const VkGraphicsPipelineCreateInfo *pCreateInfos,
                                          const VkAllocationCallbacks *pAllocator, VkPipeline *pPipelines) {
    VkBool32 skipCall = VK_FALSE;
    loader_platform_thread_lock_mutex(&objLock);
    skipCall |= validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
    if (pCreateInfos) {
        for (uint32_t idx0 = 0; idx0 < createInfoCount; ++idx0) {
            if (pCreateInfos[idx0].basePipelineHandle) {
                skipCall |= validate_pipeline(device, pCreateInfos[idx0].basePipelineHandle,
                                              VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT, true);
            }
            if (pCreateInfos[idx0].layout) {
                skipCall |= validate_pipeline_layout(device, pCreateInfos[idx0].layout,
                                                     VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_LAYOUT_EXT, false);
            }
            if (pCreateInfos[idx0].pStages) {
                for (uint32_t idx1 = 0; idx1 < pCreateInfos[idx0].stageCount; ++idx1) {
                    if (pCreateInfos[idx0].pStages[idx1].module) {
                        skipCall |= validate_shader_module(device, pCreateInfos[idx0].pStages[idx1].module,
                                                           VK_DEBUG_REPORT_OBJECT_TYPE_SHADER_MODULE_EXT, false);
                    }
                }
            }
            if (pCreateInfos[idx0].renderPass) {
                skipCall |= validate_render_pass(device, pCreateInfos[idx0].renderPass,
                                                 VK_DEBUG_REPORT_OBJECT_TYPE_RENDER_PASS_EXT, false);
            }
        }
    }
    if (pipelineCache) {
        skipCall |= validate_pipeline_cache(device, pipelineCache, VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_CACHE_EXT, false);
    }
    loader_platform_thread_unlock_mutex(&objLock);
    if (skipCall)
        return VK_ERROR_VALIDATION_FAILED_EXT;
    VkResult result = get_dispatch_table(object_tracker_device_table_map, device)
                          ->CreateGraphicsPipelines(device, pipelineCache, createInfoCount, pCreateInfos, pAllocator, pPipelines);
    loader_platform_thread_lock_mutex(&objLock);
    if (result == VK_SUCCESS) {
        for (uint32_t idx2 = 0; idx2 < createInfoCount; ++idx2) {
            create_pipeline(device, pPipelines[idx2], VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT);
        }
    }
    loader_platform_thread_unlock_mutex(&objLock);
    return result;
}

// TODO: Add special case to codegen to cover validating all the pipelines instead of just the first
VkResult explicit_CreateComputePipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount,
                                         const VkComputePipelineCreateInfo *pCreateInfos,
                                         const VkAllocationCallbacks *pAllocator, VkPipeline *pPipelines) {
    VkBool32 skipCall = VK_FALSE;
    loader_platform_thread_lock_mutex(&objLock);
    skipCall |= validate_device(device, device, VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, false);
    if (pCreateInfos) {
        for (uint32_t idx0 = 0; idx0
< createInfoCount; ++idx0) { if (pCreateInfos[idx0].basePipelineHandle) { skipCall |= validate_pipeline(device, pCreateInfos[idx0].basePipelineHandle, VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT, true); } if (pCreateInfos[idx0].layout) { skipCall |= validate_pipeline_layout(device, pCreateInfos[idx0].layout, VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_LAYOUT_EXT, false); } if (pCreateInfos[idx0].stage.module) { skipCall |= validate_shader_module(device, pCreateInfos[idx0].stage.module, VK_DEBUG_REPORT_OBJECT_TYPE_SHADER_MODULE_EXT, false); } } } if (pipelineCache) { skipCall |= validate_pipeline_cache(device, pipelineCache, VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_CACHE_EXT, false); } loader_platform_thread_unlock_mutex(&objLock); if (skipCall) return VK_ERROR_VALIDATION_FAILED_EXT; VkResult result = get_dispatch_table(object_tracker_device_table_map, device) ->CreateComputePipelines(device, pipelineCache, createInfoCount, pCreateInfos, pAllocator, pPipelines); loader_platform_thread_lock_mutex(&objLock); if (result == VK_SUCCESS) { for (uint32_t idx1 = 0; idx1 < createInfoCount; ++idx1) { create_pipeline(device, pPipelines[idx1], VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT); } } loader_platform_thread_unlock_mutex(&objLock); return result; } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/parameter_validation.cpp000066400000000000000000005230201270147354000271210ustar00rootroot00000000000000/* Copyright (c) 2015-2016 The Khronos Group Inc. * Copyright (c) 2015-2016 Valve Corporation * Copyright (c) 2015-2016 LunarG, Inc. * Copyright (C) 2015-2016 Google Inc. 
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and/or associated documentation files (the "Materials"), to
 * deal in the Materials without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Materials, and to permit persons to whom the Materials
 * are furnished to do so, subject to the following conditions:
 *
 * The above copyright notice(s) and this permission notice shall be included
 * in all copies or substantial portions of the Materials.
 *
 * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 *
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
 * USE OR OTHER DEALINGS IN THE MATERIALS
 *
 * Author: Jeremy Hayes
 * Author: Tony Barbour
 * Author: Mark Lobodzinski
 * Author: Dustin Graves
 */

/* NOTE: the header names inside the angle brackets were lost in extraction; the
 * standard headers below are reconstructed from what this file demonstrably uses
 * (assert, fprintf, strings, streams, unordered_map, vector). */
#include <assert.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <iostream>
#include <sstream>
#include <string>
#include <unordered_map>
#include <vector>

#include "vk_loader_platform.h"
#include "vulkan/vk_layer.h"
#include "vk_layer_config.h"
#include "vk_enum_validate_helper.h"
#include "vk_struct_validate_helper.h"
#include "vk_layer_table.h"
#include "vk_layer_data.h"
#include "vk_layer_logging.h"
#include "vk_layer_extension_utils.h"
#include "vk_layer_utils.h"

#include "parameter_validation.h"

struct layer_data {
    debug_report_data *report_data;
    std::vector<VkDebugReportCallbackEXT> logging_callback;

    // TODO: Split instance/device structs
    // Device Data
    // Map for queue family index to queue count
    std::unordered_map<uint32_t, uint32_t> queueFamilyIndexMap;

    layer_data() : report_data(nullptr){};
};

static std::unordered_map<void *, layer_data *> layer_data_map;
static device_table_map pc_device_table_map;
static instance_table_map
    pc_instance_table_map;

// "my instance data"
debug_report_data *mid(VkInstance object) {
    dispatch_key key = get_dispatch_key(object);
    layer_data *data = get_my_data_ptr(key, layer_data_map);
#if DISPATCH_MAP_DEBUG
    fprintf(stderr, "MID: map: %p, object: %p, key: %p, data: %p\n", &layer_data_map, object, key, data);
#endif
    assert(data != NULL);
    return data->report_data;
}

// "my device data"
debug_report_data *mdd(void *object) {
    dispatch_key key = get_dispatch_key(object);
    layer_data *data = get_my_data_ptr(key, layer_data_map);
#if DISPATCH_MAP_DEBUG
    fprintf(stderr, "MDD: map: %p, object: %p, key: %p, data: %p\n", &layer_data_map, object, key, data);
#endif
    assert(data != NULL);
    return data->report_data;
}

static void init_parameter_validation(layer_data *my_data, const VkAllocationCallbacks *pAllocator) {
    layer_debug_actions(my_data->report_data, my_data->logging_callback, pAllocator, "lunarg_parameter_validation");
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkCreateDebugReportCallbackEXT(VkInstance instance, const VkDebugReportCallbackCreateInfoEXT *pCreateInfo,
                               const VkAllocationCallbacks *pAllocator, VkDebugReportCallbackEXT *pMsgCallback) {
    VkLayerInstanceDispatchTable *pTable = get_dispatch_table(pc_instance_table_map, instance);
    VkResult result = pTable->CreateDebugReportCallbackEXT(instance, pCreateInfo, pAllocator, pMsgCallback);

    if (result == VK_SUCCESS) {
        layer_data *data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
        result = layer_create_msg_callback(data->report_data, pCreateInfo, pAllocator, pMsgCallback);
    }

    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
vkDestroyDebugReportCallbackEXT(VkInstance instance, VkDebugReportCallbackEXT msgCallback, const VkAllocationCallbacks *pAllocator) {
    VkLayerInstanceDispatchTable *pTable = get_dispatch_table(pc_instance_table_map, instance);
    pTable->DestroyDebugReportCallbackEXT(instance, msgCallback, pAllocator);

    layer_data *data = get_my_data_ptr(get_dispatch_key(instance),
                                        layer_data_map);
    layer_destroy_msg_callback(data->report_data, msgCallback, pAllocator);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL
vkDebugReportMessageEXT(VkInstance instance, VkDebugReportFlagsEXT flags, VkDebugReportObjectTypeEXT objType, uint64_t object,
                        size_t location, int32_t msgCode, const char *pLayerPrefix, const char *pMsg) {
    VkLayerInstanceDispatchTable *pTable = get_dispatch_table(pc_instance_table_map, instance);
    pTable->DebugReportMessageEXT(instance, flags, objType, object, location, msgCode, pLayerPrefix, pMsg);
}

static const VkExtensionProperties instance_extensions[] = {{VK_EXT_DEBUG_REPORT_EXTENSION_NAME, VK_EXT_DEBUG_REPORT_SPEC_VERSION}};

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkEnumerateInstanceExtensionProperties(const char *pLayerName, uint32_t *pCount, VkExtensionProperties *pProperties) {
    return util_GetExtensionProperties(1, instance_extensions, pCount, pProperties);
}

static const VkLayerProperties pc_global_layers[] = {{
    "VK_LAYER_LUNARG_parameter_validation", VK_LAYER_API_VERSION, 1, "LunarG Validation Layer",
}};

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkEnumerateInstanceLayerProperties(uint32_t *pCount, VkLayerProperties *pProperties) {
    return util_GetLayerProperties(ARRAY_SIZE(pc_global_layers), pc_global_layers, pCount, pProperties);
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice,
                                                                                    const char *pLayerName, uint32_t *pCount,
                                                                                    VkExtensionProperties *pProperties) {
    /* parameter_validation does not have any physical device extensions */
    if (pLayerName == NULL) {
        return get_dispatch_table(pc_instance_table_map, physicalDevice)
            ->EnumerateDeviceExtensionProperties(physicalDevice, NULL, pCount, pProperties);
    } else {
        return util_GetExtensionProperties(0, NULL, pCount, pProperties);
    }
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice, uint32_t *pCount,
                                                                                VkLayerProperties *pProperties) {
    /*
 parameter_validation's physical device layers are the same as global */
    return util_GetLayerProperties(ARRAY_SIZE(pc_global_layers), pc_global_layers, pCount, pProperties);
}

static bool ValidateEnumerator(VkFormatFeatureFlagBits const &enumerator) {
    VkFormatFeatureFlagBits allFlags = (VkFormatFeatureFlagBits)(
        VK_FORMAT_FEATURE_STORAGE_IMAGE_ATOMIC_BIT | VK_FORMAT_FEATURE_STORAGE_IMAGE_BIT |
        VK_FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT | VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_ATOMIC_BIT |
        VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT | VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BLEND_BIT |
        VK_FORMAT_FEATURE_VERTEX_BUFFER_BIT | VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_BIT |
        VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BIT | VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT |
        VK_FORMAT_FEATURE_BLIT_SRC_BIT | VK_FORMAT_FEATURE_BLIT_DST_BIT | VK_FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_LINEAR_BIT);
    if (enumerator & (~allFlags)) { return false; }
    return true;
}

static std::string EnumeratorString(VkFormatFeatureFlagBits const &enumerator) {
    if (!ValidateEnumerator(enumerator)) {
        return "unrecognized enumerator";
    }
    std::vector<std::string> strings;
    if (enumerator & VK_FORMAT_FEATURE_STORAGE_IMAGE_ATOMIC_BIT) { strings.push_back("VK_FORMAT_FEATURE_STORAGE_IMAGE_ATOMIC_BIT"); }
    if (enumerator & VK_FORMAT_FEATURE_STORAGE_IMAGE_BIT) { strings.push_back("VK_FORMAT_FEATURE_STORAGE_IMAGE_BIT"); }
    if (enumerator & VK_FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT) { strings.push_back("VK_FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT"); }
    if (enumerator & VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_ATOMIC_BIT) { strings.push_back("VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_ATOMIC_BIT"); }
    if (enumerator & VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT) { strings.push_back("VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT"); }
    if (enumerator & VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BLEND_BIT) { strings.push_back("VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BLEND_BIT"); }
    if (enumerator & VK_FORMAT_FEATURE_VERTEX_BUFFER_BIT) { strings.push_back("VK_FORMAT_FEATURE_VERTEX_BUFFER_BIT"); }
    if (enumerator & VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_BIT) { strings.push_back("VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_BIT"); }
    if (enumerator & VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BIT) { strings.push_back("VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BIT"); }
    if (enumerator & VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT) { strings.push_back("VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT"); }
    if (enumerator & VK_FORMAT_FEATURE_BLIT_SRC_BIT) { strings.push_back("VK_FORMAT_FEATURE_BLIT_SRC_BIT"); }
    if (enumerator & VK_FORMAT_FEATURE_BLIT_DST_BIT) { strings.push_back("VK_FORMAT_FEATURE_BLIT_DST_BIT"); }
    if (enumerator & VK_FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_LINEAR_BIT) { strings.push_back("VK_FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_LINEAR_BIT"); }
    std::string enumeratorString;
    for (auto const &string : strings) {
        enumeratorString += string;
        if (string != strings.back()) { enumeratorString += '|'; }
    }
    return enumeratorString;
}

static bool ValidateEnumerator(VkImageUsageFlagBits const &enumerator) {
    VkImageUsageFlagBits allFlags = (VkImageUsageFlagBits)(
        VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT | VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT |
        VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_STORAGE_BIT | VK_IMAGE_USAGE_SAMPLED_BIT |
        VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT | VK_IMAGE_USAGE_TRANSFER_SRC_BIT);
    if (enumerator & (~allFlags)) { return false; }
    return true;
}

static std::string EnumeratorString(VkImageUsageFlagBits const &enumerator) {
    if (!ValidateEnumerator(enumerator)) {
        return "unrecognized enumerator";
    }
    std::vector<std::string> strings;
    if (enumerator & VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT) { strings.push_back("VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT"); }
    if (enumerator & VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT) { strings.push_back("VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT"); }
    if (enumerator & VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT) { strings.push_back("VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT"); }
    if (enumerator & VK_IMAGE_USAGE_STORAGE_BIT) { strings.push_back("VK_IMAGE_USAGE_STORAGE_BIT"); }
    if (enumerator & VK_IMAGE_USAGE_SAMPLED_BIT) { strings.push_back("VK_IMAGE_USAGE_SAMPLED_BIT"); }
    if (enumerator & VK_IMAGE_USAGE_TRANSFER_DST_BIT) { strings.push_back("VK_IMAGE_USAGE_TRANSFER_DST_BIT"); }
    if (enumerator & VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT) { strings.push_back("VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT"); }
    if (enumerator & VK_IMAGE_USAGE_TRANSFER_SRC_BIT) { strings.push_back("VK_IMAGE_USAGE_TRANSFER_SRC_BIT"); }
    std::string enumeratorString;
    for (auto const &string : strings) {
        enumeratorString += string;
        if (string != strings.back()) { enumeratorString += '|'; }
    }
    return enumeratorString;
}

static bool ValidateEnumerator(VkQueueFlagBits const &enumerator) {
    VkQueueFlagBits allFlags =
        (VkQueueFlagBits)(VK_QUEUE_TRANSFER_BIT | VK_QUEUE_COMPUTE_BIT | VK_QUEUE_SPARSE_BINDING_BIT | VK_QUEUE_GRAPHICS_BIT);
    if (enumerator & (~allFlags)) { return false; }
    return true;
}

static std::string EnumeratorString(VkQueueFlagBits const &enumerator) {
    if (!ValidateEnumerator(enumerator)) {
        return "unrecognized enumerator";
    }
    std::vector<std::string> strings;
    if (enumerator & VK_QUEUE_TRANSFER_BIT) { strings.push_back("VK_QUEUE_TRANSFER_BIT"); }
    if (enumerator & VK_QUEUE_COMPUTE_BIT) { strings.push_back("VK_QUEUE_COMPUTE_BIT"); }
    if (enumerator & VK_QUEUE_SPARSE_BINDING_BIT) { strings.push_back("VK_QUEUE_SPARSE_BINDING_BIT"); }
    if (enumerator & VK_QUEUE_GRAPHICS_BIT) { strings.push_back("VK_QUEUE_GRAPHICS_BIT"); }
    std::string enumeratorString;
    for (auto const &string : strings) {
        enumeratorString += string;
        if (string != strings.back()) { enumeratorString += '|'; }
    }
    return enumeratorString;
}

static bool ValidateEnumerator(VkMemoryPropertyFlagBits const &enumerator) {
    VkMemoryPropertyFlagBits allFlags = (VkMemoryPropertyFlagBits)(
        VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT |
        VK_MEMORY_PROPERTY_HOST_CACHED_BIT | VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);
    if (enumerator & (~allFlags)) { return false; }
    return true;
}

static std::string EnumeratorString(VkMemoryPropertyFlagBits const &enumerator) {
    if (!ValidateEnumerator(enumerator)) {
        return "unrecognized enumerator";
    }
    std::vector<std::string> strings;
    if (enumerator & VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT) { strings.push_back("VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT"); }
    if (enumerator & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) { strings.push_back("VK_MEMORY_PROPERTY_HOST_COHERENT_BIT"); }
    if (enumerator & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) { strings.push_back("VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT"); }
    if (enumerator & VK_MEMORY_PROPERTY_HOST_CACHED_BIT) { strings.push_back("VK_MEMORY_PROPERTY_HOST_CACHED_BIT"); }
    if (enumerator & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) { strings.push_back("VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT"); }
    std::string enumeratorString;
    for (auto const &string : strings) {
        enumeratorString += string;
        if (string != strings.back()) { enumeratorString += '|'; }
    }
    return enumeratorString;
}

static bool ValidateEnumerator(VkMemoryHeapFlagBits const &enumerator) {
    VkMemoryHeapFlagBits allFlags = (VkMemoryHeapFlagBits)(VK_MEMORY_HEAP_DEVICE_LOCAL_BIT);
    if (enumerator & (~allFlags)) { return false; }
    return true;
}

static std::string EnumeratorString(VkMemoryHeapFlagBits const &enumerator) {
    if (!ValidateEnumerator(enumerator)) {
        return "unrecognized enumerator";
    }
    std::vector<std::string> strings;
    if (enumerator & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) { strings.push_back("VK_MEMORY_HEAP_DEVICE_LOCAL_BIT"); }
    std::string enumeratorString;
    for (auto const &string : strings) {
        enumeratorString += string;
        if (string != strings.back()) { enumeratorString += '|'; }
    }
    return enumeratorString;
}

static bool ValidateEnumerator(VkSparseImageFormatFlagBits const &enumerator) {
    VkSparseImageFormatFlagBits allFlags =
        (VkSparseImageFormatFlagBits)(VK_SPARSE_IMAGE_FORMAT_NONSTANDARD_BLOCK_SIZE_BIT |
                                      VK_SPARSE_IMAGE_FORMAT_ALIGNED_MIP_SIZE_BIT | VK_SPARSE_IMAGE_FORMAT_SINGLE_MIPTAIL_BIT);
    if (enumerator & (~allFlags)) { return false; }
    return true;
}

static std::string EnumeratorString(VkSparseImageFormatFlagBits const &enumerator) {
    if (!ValidateEnumerator(enumerator)) {
        return "unrecognized enumerator";
    }
    std::vector<std::string> strings;
    if (enumerator & VK_SPARSE_IMAGE_FORMAT_NONSTANDARD_BLOCK_SIZE_BIT) { strings.push_back("VK_SPARSE_IMAGE_FORMAT_NONSTANDARD_BLOCK_SIZE_BIT"); }
    if (enumerator & VK_SPARSE_IMAGE_FORMAT_ALIGNED_MIP_SIZE_BIT) { strings.push_back("VK_SPARSE_IMAGE_FORMAT_ALIGNED_MIP_SIZE_BIT"); }
    if (enumerator & VK_SPARSE_IMAGE_FORMAT_SINGLE_MIPTAIL_BIT) { strings.push_back("VK_SPARSE_IMAGE_FORMAT_SINGLE_MIPTAIL_BIT"); }
    std::string enumeratorString;
    for (auto const &string : strings) {
        enumeratorString += string;
        if (string != strings.back()) { enumeratorString += '|'; }
    }
    return enumeratorString;
}

static bool ValidateEnumerator(VkFenceCreateFlagBits const &enumerator) {
    VkFenceCreateFlagBits allFlags = (VkFenceCreateFlagBits)(VK_FENCE_CREATE_SIGNALED_BIT);
    if (enumerator & (~allFlags)) { return false; }
    return true;
}

static std::string EnumeratorString(VkFenceCreateFlagBits const &enumerator) {
    if (!ValidateEnumerator(enumerator)) {
        return "unrecognized enumerator";
    }
    std::vector<std::string> strings;
    if (enumerator & VK_FENCE_CREATE_SIGNALED_BIT) { strings.push_back("VK_FENCE_CREATE_SIGNALED_BIT"); }
    std::string enumeratorString;
    for (auto const &string : strings) {
        enumeratorString += string;
        if (string != strings.back()) { enumeratorString += '|'; }
    }
    return enumeratorString;
}

static bool ValidateEnumerator(VkQueryPipelineStatisticFlagBits const &enumerator) {
    VkQueryPipelineStatisticFlagBits allFlags = (VkQueryPipelineStatisticFlagBits)(
        VK_QUERY_PIPELINE_STATISTIC_COMPUTE_SHADER_INVOCATIONS_BIT | VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_VERTICES_BIT |
        VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_PRIMITIVES_BIT | VK_QUERY_PIPELINE_STATISTIC_CLIPPING_INVOCATIONS_BIT |
        VK_QUERY_PIPELINE_STATISTIC_VERTEX_SHADER_INVOCATIONS_BIT |
        VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_PRIMITIVES_BIT | VK_QUERY_PIPELINE_STATISTIC_FRAGMENT_SHADER_INVOCATIONS_BIT |
        VK_QUERY_PIPELINE_STATISTIC_CLIPPING_PRIMITIVES_BIT | VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_CONTROL_SHADER_PATCHES_BIT |
        VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_INVOCATIONS_BIT |
        VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_EVALUATION_SHADER_INVOCATIONS_BIT);
    if (enumerator & (~allFlags)) { return false; }
    return true;
}

static std::string EnumeratorString(VkQueryPipelineStatisticFlagBits const &enumerator) {
    if (!ValidateEnumerator(enumerator)) {
        return "unrecognized enumerator";
    }
    std::vector<std::string> strings;
    if (enumerator & VK_QUERY_PIPELINE_STATISTIC_COMPUTE_SHADER_INVOCATIONS_BIT) { strings.push_back("VK_QUERY_PIPELINE_STATISTIC_COMPUTE_SHADER_INVOCATIONS_BIT"); }
    if (enumerator & VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_VERTICES_BIT) { strings.push_back("VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_VERTICES_BIT"); }
    if (enumerator & VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_PRIMITIVES_BIT) { strings.push_back("VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_PRIMITIVES_BIT"); }
    if (enumerator & VK_QUERY_PIPELINE_STATISTIC_CLIPPING_INVOCATIONS_BIT) { strings.push_back("VK_QUERY_PIPELINE_STATISTIC_CLIPPING_INVOCATIONS_BIT"); }
    if (enumerator & VK_QUERY_PIPELINE_STATISTIC_VERTEX_SHADER_INVOCATIONS_BIT) { strings.push_back("VK_QUERY_PIPELINE_STATISTIC_VERTEX_SHADER_INVOCATIONS_BIT"); }
    if (enumerator & VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_PRIMITIVES_BIT) { strings.push_back("VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_PRIMITIVES_BIT"); }
    if (enumerator & VK_QUERY_PIPELINE_STATISTIC_FRAGMENT_SHADER_INVOCATIONS_BIT) { strings.push_back("VK_QUERY_PIPELINE_STATISTIC_FRAGMENT_SHADER_INVOCATIONS_BIT"); }
    if (enumerator & VK_QUERY_PIPELINE_STATISTIC_CLIPPING_PRIMITIVES_BIT) { strings.push_back("VK_QUERY_PIPELINE_STATISTIC_CLIPPING_PRIMITIVES_BIT"); }
    if (enumerator & VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_CONTROL_SHADER_PATCHES_BIT) { strings.push_back("VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_CONTROL_SHADER_PATCHES_BIT"); }
    if (enumerator & VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_INVOCATIONS_BIT) { strings.push_back("VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_INVOCATIONS_BIT"); }
    if (enumerator & VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_EVALUATION_SHADER_INVOCATIONS_BIT) { strings.push_back("VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_EVALUATION_SHADER_INVOCATIONS_BIT"); }
    std::string enumeratorString;
    for (auto const &string : strings) {
        enumeratorString += string;
        if (string != strings.back()) { enumeratorString += '|'; }
    }
    return enumeratorString;
}

static bool ValidateEnumerator(VkQueryResultFlagBits const &enumerator) {
    VkQueryResultFlagBits allFlags = (VkQueryResultFlagBits)(VK_QUERY_RESULT_PARTIAL_BIT | VK_QUERY_RESULT_WITH_AVAILABILITY_BIT |
                                                             VK_QUERY_RESULT_WAIT_BIT | VK_QUERY_RESULT_64_BIT);
    if (enumerator & (~allFlags)) { return false; }
    return true;
}

static std::string EnumeratorString(VkQueryResultFlagBits const &enumerator) {
    if (!ValidateEnumerator(enumerator)) {
        return "unrecognized enumerator";
    }
    std::vector<std::string> strings;
    if (enumerator & VK_QUERY_RESULT_PARTIAL_BIT) { strings.push_back("VK_QUERY_RESULT_PARTIAL_BIT"); }
    if (enumerator & VK_QUERY_RESULT_WITH_AVAILABILITY_BIT) { strings.push_back("VK_QUERY_RESULT_WITH_AVAILABILITY_BIT"); }
    if (enumerator & VK_QUERY_RESULT_WAIT_BIT) { strings.push_back("VK_QUERY_RESULT_WAIT_BIT"); }
    if (enumerator & VK_QUERY_RESULT_64_BIT) { strings.push_back("VK_QUERY_RESULT_64_BIT"); }
    std::string enumeratorString;
    for (auto const &string : strings) {
        enumeratorString += string;
        if (string != strings.back()) { enumeratorString += '|'; }
    }
    return enumeratorString;
}

static bool ValidateEnumerator(VkBufferUsageFlagBits const &enumerator) {
    VkBufferUsageFlagBits allFlags = (VkBufferUsageFlagBits)(
        VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_INDEX_BUFFER_BIT | VK_BUFFER_USAGE_INDIRECT_BUFFER_BIT |
        VK_BUFFER_USAGE_STORAGE_TEXEL_BUFFER_BIT | VK_BUFFER_USAGE_STORAGE_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT |
        VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_SRC_BIT | VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT);
    if (enumerator & (~allFlags)) { return false; }
    return true;
}

static std::string EnumeratorString(VkBufferUsageFlagBits const &enumerator) {
    if (!ValidateEnumerator(enumerator)) {
        return "unrecognized enumerator";
    }
    std::vector<std::string> strings;
    if (enumerator & VK_BUFFER_USAGE_VERTEX_BUFFER_BIT) { strings.push_back("VK_BUFFER_USAGE_VERTEX_BUFFER_BIT"); }
    if (enumerator & VK_BUFFER_USAGE_INDEX_BUFFER_BIT) { strings.push_back("VK_BUFFER_USAGE_INDEX_BUFFER_BIT"); }
    if (enumerator & VK_BUFFER_USAGE_INDIRECT_BUFFER_BIT) { strings.push_back("VK_BUFFER_USAGE_INDIRECT_BUFFER_BIT"); }
    if (enumerator & VK_BUFFER_USAGE_STORAGE_TEXEL_BUFFER_BIT) { strings.push_back("VK_BUFFER_USAGE_STORAGE_TEXEL_BUFFER_BIT"); }
    if (enumerator & VK_BUFFER_USAGE_STORAGE_BUFFER_BIT) { strings.push_back("VK_BUFFER_USAGE_STORAGE_BUFFER_BIT"); }
    if (enumerator & VK_BUFFER_USAGE_TRANSFER_DST_BIT) { strings.push_back("VK_BUFFER_USAGE_TRANSFER_DST_BIT"); }
    if (enumerator & VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT) { strings.push_back("VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT"); }
    if (enumerator & VK_BUFFER_USAGE_TRANSFER_SRC_BIT) { strings.push_back("VK_BUFFER_USAGE_TRANSFER_SRC_BIT"); }
    if (enumerator & VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT) { strings.push_back("VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT"); }
    std::string enumeratorString;
    for (auto const &string : strings) {
        enumeratorString += string;
        if (string != strings.back()) { enumeratorString += '|'; }
    }
    return enumeratorString;
}

static bool ValidateEnumerator(VkBufferCreateFlagBits const &enumerator) {
    VkBufferCreateFlagBits allFlags = (VkBufferCreateFlagBits)(
        VK_BUFFER_CREATE_SPARSE_ALIASED_BIT | VK_BUFFER_CREATE_SPARSE_RESIDENCY_BIT | VK_BUFFER_CREATE_SPARSE_BINDING_BIT);
    if (enumerator & (~allFlags)) { return false; }
    return true;
}

static std::string EnumeratorString(VkBufferCreateFlagBits const &enumerator) {
    if (!ValidateEnumerator(enumerator)) {
        return "unrecognized enumerator";
    }
    std::vector<std::string> strings;
    if (enumerator & VK_BUFFER_CREATE_SPARSE_ALIASED_BIT) { strings.push_back("VK_BUFFER_CREATE_SPARSE_ALIASED_BIT"); }
    if (enumerator & VK_BUFFER_CREATE_SPARSE_RESIDENCY_BIT) { strings.push_back("VK_BUFFER_CREATE_SPARSE_RESIDENCY_BIT"); }
    if (enumerator & VK_BUFFER_CREATE_SPARSE_BINDING_BIT) { strings.push_back("VK_BUFFER_CREATE_SPARSE_BINDING_BIT"); }
    std::string enumeratorString;
    for (auto const &string : strings) {
        enumeratorString += string;
        if (string != strings.back()) { enumeratorString += '|'; }
    }
    return enumeratorString;
}

static bool ValidateEnumerator(VkImageCreateFlagBits const &enumerator) {
    VkImageCreateFlagBits allFlags = (VkImageCreateFlagBits)(
        VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT | VK_IMAGE_CREATE_SPARSE_ALIASED_BIT | VK_IMAGE_CREATE_SPARSE_RESIDENCY_BIT |
        VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT | VK_IMAGE_CREATE_SPARSE_BINDING_BIT);
    if (enumerator & (~allFlags)) { return false; }
    return true;
}

static std::string EnumeratorString(VkImageCreateFlagBits const &enumerator) {
    if (!ValidateEnumerator(enumerator)) {
        return "unrecognized enumerator";
    }
    std::vector<std::string> strings;
    if (enumerator & VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT) { strings.push_back("VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT"); }
    if (enumerator & VK_IMAGE_CREATE_SPARSE_ALIASED_BIT) { strings.push_back("VK_IMAGE_CREATE_SPARSE_ALIASED_BIT"); }
    if (enumerator & VK_IMAGE_CREATE_SPARSE_RESIDENCY_BIT) { strings.push_back("VK_IMAGE_CREATE_SPARSE_RESIDENCY_BIT"); }
    if (enumerator & VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT) { strings.push_back("VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT"); }
    if (enumerator & VK_IMAGE_CREATE_SPARSE_BINDING_BIT) { strings.push_back("VK_IMAGE_CREATE_SPARSE_BINDING_BIT"); }
    std::string enumeratorString;
    for (auto const &string : strings) {
        enumeratorString += string;
        if (string != strings.back()) { enumeratorString += '|'; }
    }
    return
enumeratorString; } static bool ValidateEnumerator(VkColorComponentFlagBits const &enumerator) { VkColorComponentFlagBits allFlags = (VkColorComponentFlagBits)(VK_COLOR_COMPONENT_A_BIT | VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_G_BIT | VK_COLOR_COMPONENT_R_BIT); if (enumerator & (~allFlags)) { return false; } return true; } static std::string EnumeratorString(VkColorComponentFlagBits const &enumerator) { if (!ValidateEnumerator(enumerator)) { return "unrecognized enumerator"; } std::vector strings; if (enumerator & VK_COLOR_COMPONENT_A_BIT) { strings.push_back("VK_COLOR_COMPONENT_A_BIT"); } if (enumerator & VK_COLOR_COMPONENT_B_BIT) { strings.push_back("VK_COLOR_COMPONENT_B_BIT"); } if (enumerator & VK_COLOR_COMPONENT_G_BIT) { strings.push_back("VK_COLOR_COMPONENT_G_BIT"); } if (enumerator & VK_COLOR_COMPONENT_R_BIT) { strings.push_back("VK_COLOR_COMPONENT_R_BIT"); } std::string enumeratorString; for (auto const &string : strings) { enumeratorString += string; if (string != strings.back()) { enumeratorString += '|'; } } return enumeratorString; } static bool ValidateEnumerator(VkPipelineCreateFlagBits const &enumerator) { VkPipelineCreateFlagBits allFlags = (VkPipelineCreateFlagBits)( VK_PIPELINE_CREATE_DERIVATIVE_BIT | VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT | VK_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT); if (enumerator & (~allFlags)) { return false; } return true; } static std::string EnumeratorString(VkPipelineCreateFlagBits const &enumerator) { if (!ValidateEnumerator(enumerator)) { return "unrecognized enumerator"; } std::vector strings; if (enumerator & VK_PIPELINE_CREATE_DERIVATIVE_BIT) { strings.push_back("VK_PIPELINE_CREATE_DERIVATIVE_BIT"); } if (enumerator & VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT) { strings.push_back("VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT"); } if (enumerator & VK_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT) { strings.push_back("VK_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT"); } std::string enumeratorString; for (auto const &string 
: strings) { enumeratorString += string; if (string != strings.back()) { enumeratorString += '|'; } } return enumeratorString; } static bool ValidateEnumerator(VkShaderStageFlagBits const &enumerator) { VkShaderStageFlagBits allFlags = (VkShaderStageFlagBits)( VK_SHADER_STAGE_ALL | VK_SHADER_STAGE_FRAGMENT_BIT | VK_SHADER_STAGE_GEOMETRY_BIT | VK_SHADER_STAGE_COMPUTE_BIT | VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT | VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT | VK_SHADER_STAGE_VERTEX_BIT); if (enumerator & (~allFlags)) { return false; } return true; } static std::string EnumeratorString(VkShaderStageFlagBits const &enumerator) { if (!ValidateEnumerator(enumerator)) { return "unrecognized enumerator"; } std::vector strings; if (enumerator & VK_SHADER_STAGE_ALL) { strings.push_back("VK_SHADER_STAGE_ALL"); } if (enumerator & VK_SHADER_STAGE_FRAGMENT_BIT) { strings.push_back("VK_SHADER_STAGE_FRAGMENT_BIT"); } if (enumerator & VK_SHADER_STAGE_GEOMETRY_BIT) { strings.push_back("VK_SHADER_STAGE_GEOMETRY_BIT"); } if (enumerator & VK_SHADER_STAGE_COMPUTE_BIT) { strings.push_back("VK_SHADER_STAGE_COMPUTE_BIT"); } if (enumerator & VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT) { strings.push_back("VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT"); } if (enumerator & VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT) { strings.push_back("VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT"); } if (enumerator & VK_SHADER_STAGE_VERTEX_BIT) { strings.push_back("VK_SHADER_STAGE_VERTEX_BIT"); } std::string enumeratorString; for (auto const &string : strings) { enumeratorString += string; if (string != strings.back()) { enumeratorString += '|'; } } return enumeratorString; } static bool ValidateEnumerator(VkPipelineStageFlagBits const &enumerator) { VkPipelineStageFlagBits allFlags = (VkPipelineStageFlagBits)( VK_PIPELINE_STAGE_ALL_COMMANDS_BIT | VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT | VK_PIPELINE_STAGE_HOST_BIT | VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT | VK_PIPELINE_STAGE_TRANSFER_BIT | 
VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT | VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT | VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT | VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT | VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT | VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT | VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT | VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT | VK_PIPELINE_STAGE_VERTEX_SHADER_BIT | VK_PIPELINE_STAGE_VERTEX_INPUT_BIT | VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT | VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT); if (enumerator & (~allFlags)) { return false; } return true; } static std::string EnumeratorString(VkPipelineStageFlagBits const &enumerator) { if (!ValidateEnumerator(enumerator)) { return "unrecognized enumerator"; } std::vector strings; if (enumerator & VK_PIPELINE_STAGE_ALL_COMMANDS_BIT) { strings.push_back("VK_PIPELINE_STAGE_ALL_COMMANDS_BIT"); } if (enumerator & VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT) { strings.push_back("VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT"); } if (enumerator & VK_PIPELINE_STAGE_HOST_BIT) { strings.push_back("VK_PIPELINE_STAGE_HOST_BIT"); } if (enumerator & VK_PIPELINE_STAGE_TRANSFER_BIT) { strings.push_back("VK_PIPELINE_STAGE_TRANSFER_BIT"); } if (enumerator & VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT) { strings.push_back("VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT"); } if (enumerator & VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT) { strings.push_back("VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT"); } if (enumerator & VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT) { strings.push_back("VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT"); } if (enumerator & VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT) { strings.push_back("VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT"); } if (enumerator & VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT) { strings.push_back("VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT"); } if (enumerator & VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT) { strings.push_back("VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT"); } if (enumerator & VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT) { 
strings.push_back("VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT"); } if (enumerator & VK_PIPELINE_STAGE_VERTEX_SHADER_BIT) { strings.push_back("VK_PIPELINE_STAGE_VERTEX_SHADER_BIT"); } if (enumerator & VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT) { strings.push_back("VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT"); } if (enumerator & VK_PIPELINE_STAGE_VERTEX_INPUT_BIT) { strings.push_back("VK_PIPELINE_STAGE_VERTEX_INPUT_BIT"); } if (enumerator & VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT) { strings.push_back("VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT"); } if (enumerator & VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT) { strings.push_back("VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT"); } if (enumerator & VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT) { strings.push_back("VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT"); } std::string enumeratorString; for (auto const &string : strings) { enumeratorString += string; if (string != strings.back()) { enumeratorString += '|'; } } return enumeratorString; } static bool ValidateEnumerator(VkAccessFlagBits const &enumerator) { VkAccessFlagBits allFlags = (VkAccessFlagBits)( VK_ACCESS_INDIRECT_COMMAND_READ_BIT | VK_ACCESS_INDEX_READ_BIT | VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT | VK_ACCESS_UNIFORM_READ_BIT | VK_ACCESS_INPUT_ATTACHMENT_READ_BIT | VK_ACCESS_SHADER_READ_BIT | VK_ACCESS_SHADER_WRITE_BIT | VK_ACCESS_COLOR_ATTACHMENT_READ_BIT | VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT | VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT | VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT | VK_ACCESS_TRANSFER_READ_BIT | VK_ACCESS_TRANSFER_WRITE_BIT | VK_ACCESS_HOST_READ_BIT | VK_ACCESS_HOST_WRITE_BIT | VK_ACCESS_MEMORY_READ_BIT | VK_ACCESS_MEMORY_WRITE_BIT); if (enumerator & (~allFlags)) { return false; } return true; } static std::string EnumeratorString(VkAccessFlagBits const &enumerator) { if (!ValidateEnumerator(enumerator)) { return "unrecognized enumerator"; } std::vector<std::string> strings; if (enumerator & VK_ACCESS_INDIRECT_COMMAND_READ_BIT) {
strings.push_back("VK_ACCESS_INDIRECT_COMMAND_READ_BIT"); } if (enumerator & VK_ACCESS_INDEX_READ_BIT) { strings.push_back("VK_ACCESS_INDEX_READ_BIT"); } if (enumerator & VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT) { strings.push_back("VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT"); } if (enumerator & VK_ACCESS_UNIFORM_READ_BIT) { strings.push_back("VK_ACCESS_UNIFORM_READ_BIT"); } if (enumerator & VK_ACCESS_INPUT_ATTACHMENT_READ_BIT) { strings.push_back("VK_ACCESS_INPUT_ATTACHMENT_READ_BIT"); } if (enumerator & VK_ACCESS_SHADER_READ_BIT) { strings.push_back("VK_ACCESS_SHADER_READ_BIT"); } if (enumerator & VK_ACCESS_SHADER_WRITE_BIT) { strings.push_back("VK_ACCESS_SHADER_WRITE_BIT"); } if (enumerator & VK_ACCESS_COLOR_ATTACHMENT_READ_BIT) { strings.push_back("VK_ACCESS_COLOR_ATTACHMENT_READ_BIT"); } if (enumerator & VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT) { strings.push_back("VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT"); } if (enumerator & VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT) { strings.push_back("VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT"); } if (enumerator & VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT) { strings.push_back("VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT"); } if (enumerator & VK_ACCESS_TRANSFER_READ_BIT) { strings.push_back("VK_ACCESS_TRANSFER_READ_BIT"); } if (enumerator & VK_ACCESS_TRANSFER_WRITE_BIT) { strings.push_back("VK_ACCESS_TRANSFER_WRITE_BIT"); } if (enumerator & VK_ACCESS_HOST_READ_BIT) { strings.push_back("VK_ACCESS_HOST_READ_BIT"); } if (enumerator & VK_ACCESS_HOST_WRITE_BIT) { strings.push_back("VK_ACCESS_HOST_WRITE_BIT"); } if (enumerator & VK_ACCESS_MEMORY_READ_BIT) { strings.push_back("VK_ACCESS_MEMORY_READ_BIT"); } if (enumerator & VK_ACCESS_MEMORY_WRITE_BIT) { strings.push_back("VK_ACCESS_MEMORY_WRITE_BIT"); } std::string enumeratorString; for (auto const &string : strings) { enumeratorString += string; if (string != strings.back()) { enumeratorString += '|'; } } return enumeratorString; } static bool 
ValidateEnumerator(VkCommandPoolCreateFlagBits const &enumerator) { VkCommandPoolCreateFlagBits allFlags = (VkCommandPoolCreateFlagBits)(VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT | VK_COMMAND_POOL_CREATE_TRANSIENT_BIT); if (enumerator & (~allFlags)) { return false; } return true; } static std::string EnumeratorString(VkCommandPoolCreateFlagBits const &enumerator) { if (!ValidateEnumerator(enumerator)) { return "unrecognized enumerator"; } std::vector<std::string> strings; if (enumerator & VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT) { strings.push_back("VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT"); } if (enumerator & VK_COMMAND_POOL_CREATE_TRANSIENT_BIT) { strings.push_back("VK_COMMAND_POOL_CREATE_TRANSIENT_BIT"); } std::string enumeratorString; for (auto const &string : strings) { enumeratorString += string; if (string != strings.back()) { enumeratorString += '|'; } } return enumeratorString; } static bool ValidateEnumerator(VkCommandPoolResetFlagBits const &enumerator) { VkCommandPoolResetFlagBits allFlags = (VkCommandPoolResetFlagBits)(VK_COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT); if (enumerator & (~allFlags)) { return false; } return true; } static std::string EnumeratorString(VkCommandPoolResetFlagBits const &enumerator) { if (!ValidateEnumerator(enumerator)) { return "unrecognized enumerator"; } std::vector<std::string> strings; if (enumerator & VK_COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT) { strings.push_back("VK_COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT"); } std::string enumeratorString; for (auto const &string : strings) { enumeratorString += string; if (string != strings.back()) { enumeratorString += '|'; } } return enumeratorString; } static bool ValidateEnumerator(VkCommandBufferUsageFlags const &enumerator) { VkCommandBufferUsageFlags allFlags = (VkCommandBufferUsageFlags)(VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT | VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT | VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT); if (enumerator & (~allFlags)) { return false; }
return true; } static std::string EnumeratorString(VkCommandBufferUsageFlags const &enumerator) { if (!ValidateEnumerator(enumerator)) { return "unrecognized enumerator"; } std::vector<std::string> strings; if (enumerator & VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT) { strings.push_back("VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT"); } if (enumerator & VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT) { strings.push_back("VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT"); } if (enumerator & VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT) { strings.push_back("VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT"); } std::string enumeratorString; for (auto const &string : strings) { enumeratorString += string; if (string != strings.back()) { enumeratorString += '|'; } } return enumeratorString; } static bool ValidateEnumerator(VkCommandBufferResetFlagBits const &enumerator) { VkCommandBufferResetFlagBits allFlags = (VkCommandBufferResetFlagBits)(VK_COMMAND_BUFFER_RESET_RELEASE_RESOURCES_BIT); if (enumerator & (~allFlags)) { return false; } return true; } static std::string EnumeratorString(VkCommandBufferResetFlagBits const &enumerator) { if (!ValidateEnumerator(enumerator)) { return "unrecognized enumerator"; } std::vector<std::string> strings; if (enumerator & VK_COMMAND_BUFFER_RESET_RELEASE_RESOURCES_BIT) { strings.push_back("VK_COMMAND_BUFFER_RESET_RELEASE_RESOURCES_BIT"); } std::string enumeratorString; for (auto const &string : strings) { enumeratorString += string; if (string != strings.back()) { enumeratorString += '|'; } } return enumeratorString; } static bool ValidateEnumerator(VkImageAspectFlagBits const &enumerator) { VkImageAspectFlagBits allFlags = (VkImageAspectFlagBits)(VK_IMAGE_ASPECT_METADATA_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_COLOR_BIT); if (enumerator & (~allFlags)) { return false; } return true; } static std::string EnumeratorString(VkImageAspectFlagBits const &enumerator) { if (!ValidateEnumerator(enumerator)) { return "unrecognized
enumerator"; } std::vector<std::string> strings; if (enumerator & VK_IMAGE_ASPECT_METADATA_BIT) { strings.push_back("VK_IMAGE_ASPECT_METADATA_BIT"); } if (enumerator & VK_IMAGE_ASPECT_STENCIL_BIT) { strings.push_back("VK_IMAGE_ASPECT_STENCIL_BIT"); } if (enumerator & VK_IMAGE_ASPECT_DEPTH_BIT) { strings.push_back("VK_IMAGE_ASPECT_DEPTH_BIT"); } if (enumerator & VK_IMAGE_ASPECT_COLOR_BIT) { strings.push_back("VK_IMAGE_ASPECT_COLOR_BIT"); } std::string enumeratorString; for (auto const &string : strings) { enumeratorString += string; if (string != strings.back()) { enumeratorString += '|'; } } return enumeratorString; } static bool validate_queue_family_indices(VkDevice device, const char *function_name, const uint32_t count, const uint32_t *indices) { bool skipCall = false; layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); for (auto i = 0u; i < count; i++) { if (indices[i] == VK_QUEUE_FAMILY_IGNORED) { skipCall |= log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s: the specified queueFamilyIndex cannot be VK_QUEUE_FAMILY_IGNORED.", function_name); } else { const auto &queue_data = my_device_data->queueFamilyIndexMap.find(indices[i]); if (queue_data == my_device_data->queueFamilyIndexMap.end()) { skipCall |= log_msg( mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s parameter, uint32_t queueFamilyIndex %d, must have been given when the device was created.", function_name, indices[i]); return false; } } } return skipCall; } static bool ValidateEnumerator(VkQueryControlFlagBits const &enumerator) { VkQueryControlFlagBits allFlags = (VkQueryControlFlagBits)(VK_QUERY_CONTROL_PRECISE_BIT); if (enumerator & (~allFlags)) { return false; } return true; } static std::string EnumeratorString(VkQueryControlFlagBits const &enumerator) { if (!ValidateEnumerator(enumerator)) { return "unrecognized enumerator"; } std::vector<std::string> strings;
if (enumerator & VK_QUERY_CONTROL_PRECISE_BIT) { strings.push_back("VK_QUERY_CONTROL_PRECISE_BIT"); } std::string enumeratorString; for (auto const &string : strings) { enumeratorString += string; if (string != strings.back()) { enumeratorString += '|'; } } return enumeratorString; } static const int MaxParamCheckerStringLength = 256; static VkBool32 validate_string(debug_report_data *report_data, const char *apiName, const char *stringName, const char *validateString) { assert(apiName != nullptr); assert(stringName != nullptr); assert(validateString != nullptr); VkBool32 skipCall = VK_FALSE; VkStringErrorFlags result = vk_string_validate(MaxParamCheckerStringLength, validateString); if (result == VK_STRING_ERROR_NONE) { return skipCall; } else if (result & VK_STRING_ERROR_LENGTH) { skipCall = log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s: string %s exceeds max length %d", apiName, stringName, MaxParamCheckerStringLength); } else if (result & VK_STRING_ERROR_BAD_DATA) { skipCall = log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "%s: string %s contains invalid characters or is badly formed", apiName, stringName); } return skipCall; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance(const VkInstanceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkInstance *pInstance) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO); assert(chain_info != nullptr); assert(chain_info->u.pLayerInfo != nullptr); PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr; PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance)fpGetInstanceProcAddr(NULL, "vkCreateInstance"); if (fpCreateInstance == NULL) { return VK_ERROR_INITIALIZATION_FAILED; } // Advance the link info for the next 
element on the chain chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext; result = fpCreateInstance(pCreateInfo, pAllocator, pInstance); if (result == VK_SUCCESS) { layer_data *my_instance_data = get_my_data_ptr(get_dispatch_key(*pInstance), layer_data_map); assert(my_instance_data != nullptr); VkLayerInstanceDispatchTable *pTable = initInstanceTable(*pInstance, fpGetInstanceProcAddr, pc_instance_table_map); my_instance_data->report_data = debug_report_create_instance(pTable, *pInstance, pCreateInfo->enabledExtensionCount, pCreateInfo->ppEnabledExtensionNames); init_parameter_validation(my_instance_data, pAllocator); // Ordinarily we'd check these before calling down the chain, but none of the layer // support is in place until now, if we survive we can report the issue now. parameter_validation_vkCreateInstance(my_instance_data->report_data, pCreateInfo, pAllocator, pInstance); if (pCreateInfo->pApplicationInfo) { if (pCreateInfo->pApplicationInfo->pApplicationName) { validate_string(my_instance_data->report_data, "vkCreateInstance", "pCreateInfo->VkApplicationInfo->pApplicationName", pCreateInfo->pApplicationInfo->pApplicationName); } if (pCreateInfo->pApplicationInfo->pEngineName) { validate_string(my_instance_data->report_data, "vkCreateInstance", "pCreateInfo->VkApplicationInfo->pEngineName", pCreateInfo->pApplicationInfo->pEngineName); } } } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance, const VkAllocationCallbacks *pAllocator) { // Grab the key before the instance is destroyed. 
dispatch_key key = get_dispatch_key(instance); VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(key, layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkDestroyInstance(my_data->report_data, pAllocator); if (skipCall == VK_FALSE) { VkLayerInstanceDispatchTable *pTable = get_dispatch_table(pc_instance_table_map, instance); pTable->DestroyInstance(instance, pAllocator); // Clean up logging callback, if any while (my_data->logging_callback.size() > 0) { VkDebugReportCallbackEXT callback = my_data->logging_callback.back(); layer_destroy_msg_callback(my_data->report_data, callback, pAllocator); my_data->logging_callback.pop_back(); } layer_debug_report_destroy_instance(mid(instance)); pc_instance_table_map.erase(key); layer_data_map.erase(key); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumeratePhysicalDevices(VkInstance instance, uint32_t *pPhysicalDeviceCount, VkPhysicalDevice *pPhysicalDevices) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkEnumeratePhysicalDevices(my_data->report_data, pPhysicalDeviceCount, pPhysicalDevices); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_instance_table_map, instance) ->EnumeratePhysicalDevices(instance, pPhysicalDeviceCount, pPhysicalDevices); validate_result(my_data->report_data, "vkEnumeratePhysicalDevices", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceFeatures(VkPhysicalDevice physicalDevice, VkPhysicalDeviceFeatures *pFeatures) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkGetPhysicalDeviceFeatures(my_data->report_data, pFeatures); if (skipCall == VK_FALSE) {
get_dispatch_table(pc_instance_table_map, physicalDevice)->GetPhysicalDeviceFeatures(physicalDevice, pFeatures); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkFormatProperties *pFormatProperties) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkGetPhysicalDeviceFormatProperties(my_data->report_data, format, pFormatProperties); if (skipCall == VK_FALSE) { get_dispatch_table(pc_instance_table_map, physicalDevice) ->GetPhysicalDeviceFormatProperties(physicalDevice, format, pFormatProperties); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceImageFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkImageTiling tiling, VkImageUsageFlags usage, VkImageCreateFlags flags, VkImageFormatProperties *pImageFormatProperties) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkGetPhysicalDeviceImageFormatProperties(my_data->report_data, format, type, tiling, usage, flags, pImageFormatProperties); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_instance_table_map, physicalDevice) ->GetPhysicalDeviceImageFormatProperties(physicalDevice, format, type, tiling, usage, flags, pImageFormatProperties); validate_result(my_data->report_data, "vkGetPhysicalDeviceImageFormatProperties", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceProperties *pProperties) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); assert(my_data != NULL); skipCall |= 
parameter_validation_vkGetPhysicalDeviceProperties(my_data->report_data, pProperties); if (skipCall == VK_FALSE) { get_dispatch_table(pc_instance_table_map, physicalDevice)->GetPhysicalDeviceProperties(physicalDevice, pProperties); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceQueueFamilyProperties(VkPhysicalDevice physicalDevice, uint32_t *pQueueFamilyPropertyCount, VkQueueFamilyProperties *pQueueFamilyProperties) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkGetPhysicalDeviceQueueFamilyProperties(my_data->report_data, pQueueFamilyPropertyCount, pQueueFamilyProperties); if (skipCall == VK_FALSE) { get_dispatch_table(pc_instance_table_map, physicalDevice) ->GetPhysicalDeviceQueueFamilyProperties(physicalDevice, pQueueFamilyPropertyCount, pQueueFamilyProperties); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceMemoryProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceMemoryProperties *pMemoryProperties) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkGetPhysicalDeviceMemoryProperties(my_data->report_data, pMemoryProperties); if (skipCall == VK_FALSE) { get_dispatch_table(pc_instance_table_map, physicalDevice) ->GetPhysicalDeviceMemoryProperties(physicalDevice, pMemoryProperties); } } void validateDeviceCreateInfo(VkPhysicalDevice physicalDevice, const VkDeviceCreateInfo *pCreateInfo, const std::vector<VkQueueFamilyProperties> &properties) { std::unordered_set<uint32_t> set; if ((pCreateInfo != nullptr) && (pCreateInfo->pQueueCreateInfos != nullptr)) { for (uint32_t i = 0; i < pCreateInfo->queueCreateInfoCount; ++i) { if (set.count(pCreateInfo->pQueueCreateInfos[i].queueFamilyIndex)) { log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
"PARAMCHECK", "VkDeviceCreateInfo parameter, uint32_t pQueueCreateInfos[%d]->queueFamilyIndex, is not unique within this " "structure.", i); } else { set.insert(pCreateInfo->pQueueCreateInfos[i].queueFamilyIndex); } if (pCreateInfo->pQueueCreateInfos[i].queueCount == 0) { log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "VkDeviceCreateInfo parameter, uint32_t pQueueCreateInfos[%d]->queueCount, cannot be zero.", i); } if (pCreateInfo->pQueueCreateInfos[i].pQueuePriorities != nullptr) { for (uint32_t j = 0; j < pCreateInfo->pQueueCreateInfos[i].queueCount; ++j) { if ((pCreateInfo->pQueueCreateInfos[i].pQueuePriorities[j] < 0.f) || (pCreateInfo->pQueueCreateInfos[i].pQueuePriorities[j] > 1.f)) { log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "VkDeviceCreateInfo parameter, uint32_t pQueueCreateInfos[%d]->pQueuePriorities[%d], must be " "between 0 and 1. 
Actual value is %f", i, j, pCreateInfo->pQueueCreateInfos[i].pQueuePriorities[j]); } } } if (pCreateInfo->pQueueCreateInfos[i].queueFamilyIndex >= properties.size()) { log_msg( mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "VkDeviceCreateInfo parameter, uint32_t pQueueCreateInfos[%d]->queueFamilyIndex cannot be more than the number " "of queue families.", i); } else if (pCreateInfo->pQueueCreateInfos[i].queueCount > properties[pCreateInfo->pQueueCreateInfos[i].queueFamilyIndex].queueCount) { log_msg( mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "VkDeviceCreateInfo parameter, uint32_t pQueueCreateInfos[%d]->queueCount cannot be more than the number of " "queues for the given family index.", i); } } } } void storeCreateDeviceData(VkDevice device, const VkDeviceCreateInfo *pCreateInfo) { layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); if ((pCreateInfo != nullptr) && (pCreateInfo->pQueueCreateInfos != nullptr)) { for (uint32_t i = 0; i < pCreateInfo->queueCreateInfoCount; ++i) { my_device_data->queueFamilyIndexMap.insert( std::make_pair(pCreateInfo->pQueueCreateInfos[i].queueFamilyIndex, pCreateInfo->pQueueCreateInfos[i].queueCount)); } } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice physicalDevice, const VkDeviceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkDevice *pDevice) { /* * NOTE: We do not validate physicalDevice or any dispatchable * object as the first parameter. We couldn't get here if it was wrong! 
*/ VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_instance_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); assert(my_instance_data != nullptr); skipCall |= parameter_validation_vkCreateDevice(my_instance_data->report_data, pCreateInfo, pAllocator, pDevice); if (pCreateInfo != NULL) { if ((pCreateInfo->enabledLayerCount > 0) && (pCreateInfo->ppEnabledLayerNames != NULL)) { for (size_t i = 0; i < pCreateInfo->enabledLayerCount; i++) { skipCall |= validate_string(my_instance_data->report_data, "vkCreateDevice", "pCreateInfo->ppEnabledLayerNames", pCreateInfo->ppEnabledLayerNames[i]); } } if ((pCreateInfo->enabledExtensionCount > 0) && (pCreateInfo->ppEnabledExtensionNames != NULL)) { for (size_t i = 0; i < pCreateInfo->enabledExtensionCount; i++) { skipCall |= validate_string(my_instance_data->report_data, "vkCreateDevice", "pCreateInfo->ppEnabledExtensionNames", pCreateInfo->ppEnabledExtensionNames[i]); } } } if (skipCall == VK_FALSE) { VkLayerDeviceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO); assert(chain_info != nullptr); assert(chain_info->u.pLayerInfo != nullptr); PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr; PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr; PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice)fpGetInstanceProcAddr(NULL, "vkCreateDevice"); if (fpCreateDevice == NULL) { return VK_ERROR_INITIALIZATION_FAILED; } // Advance the link info for the next element on the chain chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext; result = fpCreateDevice(physicalDevice, pCreateInfo, pAllocator, pDevice); validate_result(my_instance_data->report_data, "vkCreateDevice", result); if (result == VK_SUCCESS) { layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(*pDevice), layer_data_map); assert(my_device_data != nullptr); 
my_device_data->report_data = layer_debug_report_create_device(my_instance_data->report_data, *pDevice); initDeviceTable(*pDevice, fpGetDeviceProcAddr, pc_device_table_map); uint32_t count; get_dispatch_table(pc_instance_table_map, physicalDevice) ->GetPhysicalDeviceQueueFamilyProperties(physicalDevice, &count, nullptr); std::vector<VkQueueFamilyProperties> properties(count); get_dispatch_table(pc_instance_table_map, physicalDevice) ->GetPhysicalDeviceQueueFamilyProperties(physicalDevice, &count, properties.data()); validateDeviceCreateInfo(physicalDevice, pCreateInfo, properties); storeCreateDeviceData(*pDevice, pCreateInfo); } } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, const VkAllocationCallbacks *pAllocator) { dispatch_key key = get_dispatch_key(device); VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(key, layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkDestroyDevice(my_data->report_data, pAllocator); if (skipCall == VK_FALSE) { layer_debug_report_destroy_device(device); #if DISPATCH_MAP_DEBUG fprintf(stderr, "Device: %p, key: %p\n", device, key); #endif get_dispatch_table(pc_device_table_map, device)->DestroyDevice(device, pAllocator); pc_device_table_map.erase(key); layer_data_map.erase(key); } } bool PreGetDeviceQueue(VkDevice device, uint32_t queueFamilyIndex, uint32_t queueIndex) { layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_device_data != nullptr); validate_queue_family_indices(device, "vkGetDeviceQueue", 1, &queueFamilyIndex); const auto &queue_data = my_device_data->queueFamilyIndexMap.find(queueFamilyIndex); if ((queue_data != my_device_data->queueFamilyIndexMap.end()) && (queue_data->second <= queueIndex)) { log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "VkGetDeviceQueue parameter, uint32_t queueIndex %d, must be less than the number of queues given when the device " "was created.", queueIndex); return false; } return true;
} VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetDeviceQueue(VkDevice device, uint32_t queueFamilyIndex, uint32_t queueIndex, VkQueue *pQueue) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkGetDeviceQueue(my_data->report_data, queueFamilyIndex, queueIndex, pQueue); if (skipCall == VK_FALSE) { PreGetDeviceQueue(device, queueFamilyIndex, queueIndex); get_dispatch_table(pc_device_table_map, device)->GetDeviceQueue(device, queueFamilyIndex, queueIndex, pQueue); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueueSubmit(VkQueue queue, uint32_t submitCount, const VkSubmitInfo *pSubmits, VkFence fence) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkQueueSubmit(my_data->report_data, submitCount, pSubmits, fence); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, queue)->QueueSubmit(queue, submitCount, pSubmits, fence); validate_result(my_data->report_data, "vkQueueSubmit", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueueWaitIdle(VkQueue queue) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map); assert(my_data != NULL); VkResult result = get_dispatch_table(pc_device_table_map, queue)->QueueWaitIdle(queue); validate_result(my_data->report_data, "vkQueueWaitIdle", result); return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkDeviceWaitIdle(VkDevice device) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); VkResult result = get_dispatch_table(pc_device_table_map, device)->DeviceWaitIdle(device); validate_result(my_data->report_data, "vkDeviceWaitIdle", result); return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult 
VKAPI_CALL vkAllocateMemory(VkDevice device, const VkMemoryAllocateInfo *pAllocateInfo, const VkAllocationCallbacks *pAllocator, VkDeviceMemory *pMemory) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkAllocateMemory(my_data->report_data, pAllocateInfo, pAllocator, pMemory); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->AllocateMemory(device, pAllocateInfo, pAllocator, pMemory); validate_result(my_data->report_data, "vkAllocateMemory", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkFreeMemory(VkDevice device, VkDeviceMemory memory, const VkAllocationCallbacks *pAllocator) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkFreeMemory(my_data->report_data, memory, pAllocator); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->FreeMemory(device, memory, pAllocator); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkMapMemory(VkDevice device, VkDeviceMemory memory, VkDeviceSize offset, VkDeviceSize size, VkMemoryMapFlags flags, void **ppData) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkMapMemory(my_data->report_data, memory, offset, size, flags, ppData); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->MapMemory(device, memory, offset, size, flags, ppData); validate_result(my_data->report_data, "vkMapMemory", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkFlushMappedMemoryRanges(VkDevice device, uint32_t memoryRangeCount, const 
VkMappedMemoryRange *pMemoryRanges) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkFlushMappedMemoryRanges(my_data->report_data, memoryRangeCount, pMemoryRanges); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->FlushMappedMemoryRanges(device, memoryRangeCount, pMemoryRanges); validate_result(my_data->report_data, "vkFlushMappedMemoryRanges", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkInvalidateMappedMemoryRanges(VkDevice device, uint32_t memoryRangeCount, const VkMappedMemoryRange *pMemoryRanges) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkInvalidateMappedMemoryRanges(my_data->report_data, memoryRangeCount, pMemoryRanges); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->InvalidateMappedMemoryRanges(device, memoryRangeCount, pMemoryRanges); validate_result(my_data->report_data, "vkInvalidateMappedMemoryRanges", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetDeviceMemoryCommitment(VkDevice device, VkDeviceMemory memory, VkDeviceSize *pCommittedMemoryInBytes) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkGetDeviceMemoryCommitment(my_data->report_data, memory, pCommittedMemoryInBytes); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->GetDeviceMemoryCommitment(device, memory, pCommittedMemoryInBytes); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkBindBufferMemory(VkDevice device, VkBuffer buffer, VkDeviceMemory mem, 
VkDeviceSize memoryOffset) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); VkResult result = get_dispatch_table(pc_device_table_map, device)->BindBufferMemory(device, buffer, mem, memoryOffset); validate_result(my_data->report_data, "vkBindBufferMemory", result); return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkBindImageMemory(VkDevice device, VkImage image, VkDeviceMemory mem, VkDeviceSize memoryOffset) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); VkResult result = get_dispatch_table(pc_device_table_map, device)->BindImageMemory(device, image, mem, memoryOffset); validate_result(my_data->report_data, "vkBindImageMemory", result); return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetBufferMemoryRequirements(VkDevice device, VkBuffer buffer, VkMemoryRequirements *pMemoryRequirements) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkGetBufferMemoryRequirements(my_data->report_data, buffer, pMemoryRequirements); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->GetBufferMemoryRequirements(device, buffer, pMemoryRequirements); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetImageMemoryRequirements(VkDevice device, VkImage image, VkMemoryRequirements *pMemoryRequirements) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkGetImageMemoryRequirements(my_data->report_data, image, pMemoryRequirements); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->GetImageMemoryRequirements(device, image, pMemoryRequirements); } } bool PostGetImageSparseMemoryRequirements(VkDevice device, VkImage image, uint32_t *pNumRequirements, 
VkSparseImageMemoryRequirements *pSparseMemoryRequirements) { if (pSparseMemoryRequirements != nullptr) { if ((pSparseMemoryRequirements->formatProperties.aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0) { log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "vkGetImageSparseMemoryRequirements parameter, VkImageAspect " "pSparseMemoryRequirements->formatProperties.aspectMask, is an unrecognized enumerator"); return false; } } return true; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetImageSparseMemoryRequirements(VkDevice device, VkImage image, uint32_t *pSparseMemoryRequirementCount, VkSparseImageMemoryRequirements *pSparseMemoryRequirements) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkGetImageSparseMemoryRequirements(my_data->report_data, image, pSparseMemoryRequirementCount, pSparseMemoryRequirements); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device) ->GetImageSparseMemoryRequirements(device, image, pSparseMemoryRequirementCount, pSparseMemoryRequirements); PostGetImageSparseMemoryRequirements(device, image, pSparseMemoryRequirementCount, pSparseMemoryRequirements); } } bool PostGetPhysicalDeviceSparseImageFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkSampleCountFlagBits samples, VkImageUsageFlags usage, VkImageTiling tiling, uint32_t *pNumProperties, VkSparseImageFormatProperties *pProperties) { if (pProperties != nullptr) { if ((pProperties->aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0) { log_msg(mdd(physicalDevice), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", 
"vkGetPhysicalDeviceSparseImageFormatProperties parameter, VkImageAspect pProperties->aspectMask, is an " "unrecognized enumerator"); return false; } } return true; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceSparseImageFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkSampleCountFlagBits samples, VkImageUsageFlags usage, VkImageTiling tiling, uint32_t *pPropertyCount, VkSparseImageFormatProperties *pProperties) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkGetPhysicalDeviceSparseImageFormatProperties(my_data->report_data, format, type, samples, usage, tiling, pPropertyCount, pProperties); if (skipCall == VK_FALSE) { get_dispatch_table(pc_instance_table_map, physicalDevice) ->GetPhysicalDeviceSparseImageFormatProperties(physicalDevice, format, type, samples, usage, tiling, pPropertyCount, pProperties); PostGetPhysicalDeviceSparseImageFormatProperties(physicalDevice, format, type, samples, usage, tiling, pPropertyCount, pProperties); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueueBindSparse(VkQueue queue, uint32_t bindInfoCount, const VkBindSparseInfo *pBindInfo, VkFence fence) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkQueueBindSparse(my_data->report_data, bindInfoCount, pBindInfo, fence); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, queue)->QueueBindSparse(queue, bindInfoCount, pBindInfo, fence); validate_result(my_data->report_data, "vkQueueBindSparse", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateFence(VkDevice device, const VkFenceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkFence *pFence) { 
VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCreateFence(my_data->report_data, pCreateInfo, pAllocator, pFence); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->CreateFence(device, pCreateInfo, pAllocator, pFence); validate_result(my_data->report_data, "vkCreateFence", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyFence(VkDevice device, VkFence fence, const VkAllocationCallbacks *pAllocator) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkDestroyFence(my_data->report_data, fence, pAllocator); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->DestroyFence(device, fence, pAllocator); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetFences(VkDevice device, uint32_t fenceCount, const VkFence *pFences) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkResetFences(my_data->report_data, fenceCount, pFences); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->ResetFences(device, fenceCount, pFences); validate_result(my_data->report_data, "vkResetFences", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetFenceStatus(VkDevice device, VkFence fence) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); VkResult result = get_dispatch_table(pc_device_table_map, device)->GetFenceStatus(device, fence); validate_result(my_data->report_data, "vkGetFenceStatus", result); return result; } 
VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkWaitForFences(VkDevice device, uint32_t fenceCount, const VkFence *pFences, VkBool32 waitAll, uint64_t timeout) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkWaitForFences(my_data->report_data, fenceCount, pFences, waitAll, timeout); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->WaitForFences(device, fenceCount, pFences, waitAll, timeout); validate_result(my_data->report_data, "vkWaitForFences", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateSemaphore(VkDevice device, const VkSemaphoreCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkSemaphore *pSemaphore) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCreateSemaphore(my_data->report_data, pCreateInfo, pAllocator, pSemaphore); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->CreateSemaphore(device, pCreateInfo, pAllocator, pSemaphore); validate_result(my_data->report_data, "vkCreateSemaphore", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroySemaphore(VkDevice device, VkSemaphore semaphore, const VkAllocationCallbacks *pAllocator) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkDestroySemaphore(my_data->report_data, semaphore, pAllocator); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->DestroySemaphore(device, semaphore, pAllocator); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateEvent(VkDevice 
device, const VkEventCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkEvent *pEvent) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCreateEvent(my_data->report_data, pCreateInfo, pAllocator, pEvent); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->CreateEvent(device, pCreateInfo, pAllocator, pEvent); validate_result(my_data->report_data, "vkCreateEvent", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyEvent(VkDevice device, VkEvent event, const VkAllocationCallbacks *pAllocator) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkDestroyEvent(my_data->report_data, event, pAllocator); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->DestroyEvent(device, event, pAllocator); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetEventStatus(VkDevice device, VkEvent event) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); VkResult result = get_dispatch_table(pc_device_table_map, device)->GetEventStatus(device, event); validate_result(my_data->report_data, "vkGetEventStatus", result); return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkSetEvent(VkDevice device, VkEvent event) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); VkResult result = get_dispatch_table(pc_device_table_map, device)->SetEvent(device, event); validate_result(my_data->report_data, "vkSetEvent", result); return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetEvent(VkDevice device, VkEvent event) { layer_data *my_data = 
get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); VkResult result = get_dispatch_table(pc_device_table_map, device)->ResetEvent(device, event); validate_result(my_data->report_data, "vkResetEvent", result); return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateQueryPool(VkDevice device, const VkQueryPoolCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkQueryPool *pQueryPool) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCreateQueryPool(my_data->report_data, pCreateInfo, pAllocator, pQueryPool); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->CreateQueryPool(device, pCreateInfo, pAllocator, pQueryPool); validate_result(my_data->report_data, "vkCreateQueryPool", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyQueryPool(VkDevice device, VkQueryPool queryPool, const VkAllocationCallbacks *pAllocator) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkDestroyQueryPool(my_data->report_data, queryPool, pAllocator); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->DestroyQueryPool(device, queryPool, pAllocator); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetQueryPoolResults(VkDevice device, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount, size_t dataSize, void *pData, VkDeviceSize stride, VkQueryResultFlags flags) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkGetQueryPoolResults(my_data->report_data, queryPool, 
firstQuery, queryCount, dataSize, pData, stride, flags); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device) ->GetQueryPoolResults(device, queryPool, firstQuery, queryCount, dataSize, pData, stride, flags); validate_result(my_data->report_data, "vkGetQueryPoolResults", result); } return result; } bool PreCreateBuffer(VkDevice device, const VkBufferCreateInfo *pCreateInfo) { if (pCreateInfo != nullptr) { if (pCreateInfo->sharingMode == VK_SHARING_MODE_CONCURRENT) { validate_queue_family_indices(device, "vkCreateBuffer", pCreateInfo->queueFamilyIndexCount, pCreateInfo->pQueueFamilyIndices); } } return true; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateBuffer(VkDevice device, const VkBufferCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkBuffer *pBuffer) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCreateBuffer(my_data->report_data, pCreateInfo, pAllocator, pBuffer); if (skipCall == VK_FALSE) { PreCreateBuffer(device, pCreateInfo); result = get_dispatch_table(pc_device_table_map, device)->CreateBuffer(device, pCreateInfo, pAllocator, pBuffer); validate_result(my_data->report_data, "vkCreateBuffer", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyBuffer(VkDevice device, VkBuffer buffer, const VkAllocationCallbacks *pAllocator) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkDestroyBuffer(my_data->report_data, buffer, pAllocator); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->DestroyBuffer(device, buffer, pAllocator); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateBufferView(VkDevice device, const VkBufferViewCreateInfo *pCreateInfo, 
const VkAllocationCallbacks *pAllocator, VkBufferView *pView) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCreateBufferView(my_data->report_data, pCreateInfo, pAllocator, pView); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->CreateBufferView(device, pCreateInfo, pAllocator, pView); validate_result(my_data->report_data, "vkCreateBufferView", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyBufferView(VkDevice device, VkBufferView bufferView, const VkAllocationCallbacks *pAllocator) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkDestroyBufferView(my_data->report_data, bufferView, pAllocator); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->DestroyBufferView(device, bufferView, pAllocator); } } bool PreCreateImage(VkDevice device, const VkImageCreateInfo *pCreateInfo) { if (pCreateInfo != nullptr) { if (pCreateInfo->sharingMode == VK_SHARING_MODE_CONCURRENT) { validate_queue_family_indices(device, "vkCreateImage", pCreateInfo->queueFamilyIndexCount, pCreateInfo->pQueueFamilyIndices); } } return true; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImage(VkDevice device, const VkImageCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkImage *pImage) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCreateImage(my_data->report_data, pCreateInfo, pAllocator, pImage); if (skipCall == VK_FALSE) { PreCreateImage(device, pCreateInfo); result = get_dispatch_table(pc_device_table_map, 
device)->CreateImage(device, pCreateInfo, pAllocator, pImage); validate_result(my_data->report_data, "vkCreateImage", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyImage(VkDevice device, VkImage image, const VkAllocationCallbacks *pAllocator) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkDestroyImage(my_data->report_data, image, pAllocator); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->DestroyImage(device, image, pAllocator); } } bool PreGetImageSubresourceLayout(VkDevice device, const VkImageSubresource *pSubresource) { if (pSubresource != nullptr) { if ((pSubresource->aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0) { log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "vkGetImageSubresourceLayout parameter, VkImageAspect pSubresource->aspectMask, is an unrecognized enumerator"); return false; } } return true; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetImageSubresourceLayout(VkDevice device, VkImage image, const VkImageSubresource *pSubresource, VkSubresourceLayout *pLayout) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkGetImageSubresourceLayout(my_data->report_data, image, pSubresource, pLayout); if (skipCall == VK_FALSE) { PreGetImageSubresourceLayout(device, pSubresource); get_dispatch_table(pc_device_table_map, device)->GetImageSubresourceLayout(device, image, pSubresource, pLayout); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateImageView(VkDevice device, const VkImageViewCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkImageView *pView) { VkResult result = 
VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCreateImageView(my_data->report_data, pCreateInfo, pAllocator, pView); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->CreateImageView(device, pCreateInfo, pAllocator, pView); validate_result(my_data->report_data, "vkCreateImageView", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyImageView(VkDevice device, VkImageView imageView, const VkAllocationCallbacks *pAllocator) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkDestroyImageView(my_data->report_data, imageView, pAllocator); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->DestroyImageView(device, imageView, pAllocator); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateShaderModule(VkDevice device, const VkShaderModuleCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkShaderModule *pShaderModule) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCreateShaderModule(my_data->report_data, pCreateInfo, pAllocator, pShaderModule); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->CreateShaderModule(device, pCreateInfo, pAllocator, pShaderModule); validate_result(my_data->report_data, "vkCreateShaderModule", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyShaderModule(VkDevice device, VkShaderModule shaderModule, const VkAllocationCallbacks *pAllocator) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = 
get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkDestroyShaderModule(my_data->report_data, shaderModule, pAllocator); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->DestroyShaderModule(device, shaderModule, pAllocator); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreatePipelineCache(VkDevice device, const VkPipelineCacheCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkPipelineCache *pPipelineCache) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCreatePipelineCache(my_data->report_data, pCreateInfo, pAllocator, pPipelineCache); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->CreatePipelineCache(device, pCreateInfo, pAllocator, pPipelineCache); validate_result(my_data->report_data, "vkCreatePipelineCache", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyPipelineCache(VkDevice device, VkPipelineCache pipelineCache, const VkAllocationCallbacks *pAllocator) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkDestroyPipelineCache(my_data->report_data, pipelineCache, pAllocator); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->DestroyPipelineCache(device, pipelineCache, pAllocator); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPipelineCacheData(VkDevice device, VkPipelineCache pipelineCache, size_t *pDataSize, void *pData) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= 
parameter_validation_vkGetPipelineCacheData(my_data->report_data, pipelineCache, pDataSize, pData); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->GetPipelineCacheData(device, pipelineCache, pDataSize, pData); validate_result(my_data->report_data, "vkGetPipelineCacheData", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkMergePipelineCaches(VkDevice device, VkPipelineCache dstCache, uint32_t srcCacheCount, const VkPipelineCache *pSrcCaches) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkMergePipelineCaches(my_data->report_data, dstCache, srcCacheCount, pSrcCaches); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->MergePipelineCaches(device, dstCache, srcCacheCount, pSrcCaches); validate_result(my_data->report_data, "vkMergePipelineCaches", result); } return result; } bool PreCreateGraphicsPipelines(VkDevice device, const VkGraphicsPipelineCreateInfo *pCreateInfos) { layer_data *data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); /* TODO: Handle count */ if (pCreateInfos != nullptr) { if (pCreateInfos->flags & VK_PIPELINE_CREATE_DERIVATIVE_BIT) { if (pCreateInfos->basePipelineIndex != -1) { if (pCreateInfos->basePipelineHandle != VK_NULL_HANDLE) { log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "vkCreateGraphicsPipelines parameter, pCreateInfos->basePipelineHandle, must be VK_NULL_HANDLE if " "pCreateInfos->flags " "contains the VK_PIPELINE_CREATE_DERIVATIVE_BIT flag and pCreateInfos->basePipelineIndex is not -1"); return false; } } if (pCreateInfos->basePipelineHandle != VK_NULL_HANDLE) { if (pCreateInfos->basePipelineIndex != -1) { log_msg( mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 
0, __LINE__, 1, "PARAMCHECK", "vkCreateGraphicsPipelines parameter, pCreateInfos->basePipelineIndex, must be -1 if pCreateInfos->flags " "contains the VK_PIPELINE_CREATE_DERIVATIVE_BIT flag and pCreateInfos->basePipelineHandle is not " "VK_NULL_HANDLE"); return false; } } } if (pCreateInfos->pRasterizationState != nullptr) { if (pCreateInfos->pRasterizationState->cullMode & ~VK_CULL_MODE_FRONT_AND_BACK) { log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "vkCreateGraphicsPipelines parameter, VkCullMode pCreateInfos->pRasterizationState->cullMode, is an " "unrecognized enumerator"); return false; } } if (pCreateInfos->pColorBlendState != nullptr) { if (pCreateInfos->pColorBlendState->logicOpEnable == VK_TRUE && (pCreateInfos->pColorBlendState->logicOp < VK_LOGIC_OP_BEGIN_RANGE || pCreateInfos->pColorBlendState->logicOp > VK_LOGIC_OP_END_RANGE)) { log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "vkCreateGraphicsPipelines parameter, VkLogicOp pCreateInfos->pColorBlendState->logicOp, is an " "unrecognized enumerator"); return false; } if (pCreateInfos->pColorBlendState->pAttachments != nullptr && pCreateInfos->pColorBlendState->pAttachments->blendEnable == VK_TRUE) { if (pCreateInfos->pColorBlendState->pAttachments->srcColorBlendFactor < VK_BLEND_FACTOR_BEGIN_RANGE || pCreateInfos->pColorBlendState->pAttachments->srcColorBlendFactor > VK_BLEND_FACTOR_END_RANGE) { log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "vkCreateGraphicsPipelines parameter, VkBlendFactor " "pCreateInfos->pColorBlendState->pAttachments->srcColorBlendFactor, is an unrecognized enumerator"); return false; } if (pCreateInfos->pColorBlendState->pAttachments->dstColorBlendFactor < VK_BLEND_FACTOR_BEGIN_RANGE || pCreateInfos->pColorBlendState->pAttachments->dstColorBlendFactor > VK_BLEND_FACTOR_END_RANGE) { 
log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "vkCreateGraphicsPipelines parameter, VkBlendFactor " "pCreateInfos->pColorBlendState->pAttachments->dstColorBlendFactor, is an unrecognized enumerator"); return false; } if (pCreateInfos->pColorBlendState->pAttachments->colorBlendOp < VK_BLEND_OP_BEGIN_RANGE || pCreateInfos->pColorBlendState->pAttachments->colorBlendOp > VK_BLEND_OP_END_RANGE) { log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "vkCreateGraphicsPipelines parameter, VkBlendOp " "pCreateInfos->pColorBlendState->pAttachments->colorBlendOp, is an unrecognized enumerator"); return false; } if (pCreateInfos->pColorBlendState->pAttachments->srcAlphaBlendFactor < VK_BLEND_FACTOR_BEGIN_RANGE || pCreateInfos->pColorBlendState->pAttachments->srcAlphaBlendFactor > VK_BLEND_FACTOR_END_RANGE) { log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "vkCreateGraphicsPipelines parameter, VkBlendFactor " "pCreateInfos->pColorBlendState->pAttachments->srcAlphaBlendFactor, is an unrecognized enumerator"); return false; } if (pCreateInfos->pColorBlendState->pAttachments->dstAlphaBlendFactor < VK_BLEND_FACTOR_BEGIN_RANGE || pCreateInfos->pColorBlendState->pAttachments->dstAlphaBlendFactor > VK_BLEND_FACTOR_END_RANGE) { log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "vkCreateGraphicsPipelines parameter, VkBlendFactor " "pCreateInfos->pColorBlendState->pAttachments->dstAlphaBlendFactor, is an unrecognized enumerator"); return false; } if (pCreateInfos->pColorBlendState->pAttachments->alphaBlendOp < VK_BLEND_OP_BEGIN_RANGE || pCreateInfos->pColorBlendState->pAttachments->alphaBlendOp > VK_BLEND_OP_END_RANGE) { log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", 
"vkCreateGraphicsPipelines parameter, VkBlendOp " "pCreateInfos->pColorBlendState->pAttachments->alphaBlendOp, is an unrecognized enumerator"); return false; } } } if (pCreateInfos->renderPass == VK_NULL_HANDLE) { log_msg(mdd(device), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "vkCreateGraphicsPipelines parameter, VkRenderPass pCreateInfos->renderPass, must not be VK_NULL_HANDLE"); } int i = 0; for (size_t j = 0; j < pCreateInfos[i].stageCount; j++) { validate_string(data->report_data, "vkCreateGraphicsPipelines", "pCreateInfos[i].pStages[j].pName", pCreateInfos[i].pStages[j].pName); } } return true; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateGraphicsPipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount, const VkGraphicsPipelineCreateInfo *pCreateInfos, const VkAllocationCallbacks *pAllocator, VkPipeline *pPipelines) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCreateGraphicsPipelines(my_data->report_data, pipelineCache, createInfoCount, pCreateInfos, pAllocator, pPipelines); if (skipCall == VK_FALSE) { PreCreateGraphicsPipelines(device, pCreateInfos); result = get_dispatch_table(pc_device_table_map, device) ->CreateGraphicsPipelines(device, pipelineCache, createInfoCount, pCreateInfos, pAllocator, pPipelines); validate_result(my_data->report_data, "vkCreateGraphicsPipelines", result); } return result; } bool PreCreateComputePipelines(VkDevice device, const VkComputePipelineCreateInfo *pCreateInfos) { layer_data *data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); if (pCreateInfos != nullptr) { /* TODO: Handle count! */ 
int i = 0; validate_string(data->report_data, "vkCreateComputePipelines", "pCreateInfos[i].stage.pName", pCreateInfos[i].stage.pName); } return true; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateComputePipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount, const VkComputePipelineCreateInfo *pCreateInfos, const VkAllocationCallbacks *pAllocator, VkPipeline *pPipelines) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCreateComputePipelines(my_data->report_data, pipelineCache, createInfoCount, pCreateInfos, pAllocator, pPipelines); if (skipCall == VK_FALSE) { PreCreateComputePipelines(device, pCreateInfos); result = get_dispatch_table(pc_device_table_map, device) ->CreateComputePipelines(device, pipelineCache, createInfoCount, pCreateInfos, pAllocator, pPipelines); validate_result(my_data->report_data, "vkCreateComputePipelines", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyPipeline(VkDevice device, VkPipeline pipeline, const VkAllocationCallbacks *pAllocator) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkDestroyPipeline(my_data->report_data, pipeline, pAllocator); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->DestroyPipeline(device, pipeline, pAllocator); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreatePipelineLayout(VkDevice device, const VkPipelineLayoutCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkPipelineLayout *pPipelineLayout) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= 
parameter_validation_vkCreatePipelineLayout(my_data->report_data, pCreateInfo, pAllocator, pPipelineLayout); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->CreatePipelineLayout(device, pCreateInfo, pAllocator, pPipelineLayout); validate_result(my_data->report_data, "vkCreatePipelineLayout", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyPipelineLayout(VkDevice device, VkPipelineLayout pipelineLayout, const VkAllocationCallbacks *pAllocator) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkDestroyPipelineLayout(my_data->report_data, pipelineLayout, pAllocator); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->DestroyPipelineLayout(device, pipelineLayout, pAllocator); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateSampler(VkDevice device, const VkSamplerCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkSampler *pSampler) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCreateSampler(my_data->report_data, pCreateInfo, pAllocator, pSampler); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->CreateSampler(device, pCreateInfo, pAllocator, pSampler); validate_result(my_data->report_data, "vkCreateSampler", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroySampler(VkDevice device, VkSampler sampler, const VkAllocationCallbacks *pAllocator) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkDestroySampler(my_data->report_data, sampler, pAllocator); if (skipCall == 
VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->DestroySampler(device, sampler, pAllocator); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDescriptorSetLayout(VkDevice device, const VkDescriptorSetLayoutCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkDescriptorSetLayout *pSetLayout) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCreateDescriptorSetLayout(my_data->report_data, pCreateInfo, pAllocator, pSetLayout); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->CreateDescriptorSetLayout(device, pCreateInfo, pAllocator, pSetLayout); validate_result(my_data->report_data, "vkCreateDescriptorSetLayout", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDescriptorSetLayout(VkDevice device, VkDescriptorSetLayout descriptorSetLayout, const VkAllocationCallbacks *pAllocator) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkDestroyDescriptorSetLayout(my_data->report_data, descriptorSetLayout, pAllocator); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->DestroyDescriptorSetLayout(device, descriptorSetLayout, pAllocator); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDescriptorPool(VkDevice device, const VkDescriptorPoolCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkDescriptorPool *pDescriptorPool) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCreateDescriptorPool(my_data->report_data, pCreateInfo, pAllocator, pDescriptorPool); /* 
TODOVV: How do we validate maxSets? Probably belongs in the limits layer? */ if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->CreateDescriptorPool(device, pCreateInfo, pAllocator, pDescriptorPool); validate_result(my_data->report_data, "vkCreateDescriptorPool", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, const VkAllocationCallbacks *pAllocator) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkDestroyDescriptorPool(my_data->report_data, descriptorPool, pAllocator); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->DestroyDescriptorPool(device, descriptorPool, pAllocator); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, VkDescriptorPoolResetFlags flags) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); VkResult result = get_dispatch_table(pc_device_table_map, device)->ResetDescriptorPool(device, descriptorPool, flags); validate_result(my_data->report_data, "vkResetDescriptorPool", result); return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAllocateDescriptorSets(VkDevice device, const VkDescriptorSetAllocateInfo *pAllocateInfo, VkDescriptorSet *pDescriptorSets) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkAllocateDescriptorSets(my_data->report_data, pAllocateInfo, pDescriptorSets); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->AllocateDescriptorSets(device, pAllocateInfo, pDescriptorSets); 
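Every entry point in this file follows the same validate-then-dispatch shape: OR the results of the generated parameter checks into `skipCall`, and only forward into the driver's dispatch table when nothing was flagged. A minimal standalone sketch of that control flow, with illustrative stub result codes and a flag standing in for the real dispatch table (names are invented, not the layer's):

```cpp
// Stand-ins for VK_SUCCESS / VK_ERROR_VALIDATION_FAILED_EXT (assumption for the sketch).
enum ResultStub { SUCCESS_STUB = 0, VALIDATION_FAILED_STUB = -1 };

bool g_forwarded = false; // records whether the "driver" was reached

// paramError models the outcome of a generated parameter_validation_* check.
ResultStub InterceptSketch(bool paramError) {
    ResultStub result = VALIDATION_FAILED_STUB; // pessimistic default, as in the layer
    bool skipCall = false;
    skipCall |= paramError;                     // accumulate validation failures
    if (!skipCall) {
        g_forwarded = true;                     // stands in for get_dispatch_table(...)->Call(...)
        result = SUCCESS_STUB;
    }
    return result;                              // failed validation never reaches the driver
}
```

This is why `result` is initialized to the failure code before dispatch: if validation trips, the call is skipped entirely and the error result is returned to the application.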
validate_result(my_data->report_data, "vkAllocateDescriptorSets", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkFreeDescriptorSets(VkDevice device, VkDescriptorPool descriptorPool, uint32_t descriptorSetCount, const VkDescriptorSet *pDescriptorSets) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkFreeDescriptorSets(my_data->report_data, descriptorPool, descriptorSetCount, pDescriptorSets); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device) ->FreeDescriptorSets(device, descriptorPool, descriptorSetCount, pDescriptorSets); validate_result(my_data->report_data, "vkFreeDescriptorSets", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkUpdateDescriptorSets(VkDevice device, uint32_t descriptorWriteCount, const VkWriteDescriptorSet *pDescriptorWrites, uint32_t descriptorCopyCount, const VkCopyDescriptorSet *pDescriptorCopies) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkUpdateDescriptorSets(my_data->report_data, descriptorWriteCount, pDescriptorWrites, descriptorCopyCount, pDescriptorCopies); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device) ->UpdateDescriptorSets(device, descriptorWriteCount, pDescriptorWrites, descriptorCopyCount, pDescriptorCopies); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateFramebuffer(VkDevice device, const VkFramebufferCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkFramebuffer *pFramebuffer) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= 
parameter_validation_vkCreateFramebuffer(my_data->report_data, pCreateInfo, pAllocator, pFramebuffer); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->CreateFramebuffer(device, pCreateInfo, pAllocator, pFramebuffer); validate_result(my_data->report_data, "vkCreateFramebuffer", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyFramebuffer(VkDevice device, VkFramebuffer framebuffer, const VkAllocationCallbacks *pAllocator) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkDestroyFramebuffer(my_data->report_data, framebuffer, pAllocator); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device)->DestroyFramebuffer(device, framebuffer, pAllocator); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateRenderPass(VkDevice device, const VkRenderPassCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkRenderPass *pRenderPass) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCreateRenderPass(my_data->report_data, pCreateInfo, pAllocator, pRenderPass); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->CreateRenderPass(device, pCreateInfo, pAllocator, pRenderPass); validate_result(my_data->report_data, "vkCreateRenderPass", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyRenderPass(VkDevice device, VkRenderPass renderPass, const VkAllocationCallbacks *pAllocator) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkDestroyRenderPass(my_data->report_data, renderPass, pAllocator); if (skipCall == 
VK_FALSE) {
        get_dispatch_table(pc_device_table_map, device)->DestroyRenderPass(device, renderPass, pAllocator);
    }
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetRenderAreaGranularity(VkDevice device, VkRenderPass renderPass,
                                                                      VkExtent2D *pGranularity) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    assert(my_data != NULL);

    skipCall |= parameter_validation_vkGetRenderAreaGranularity(my_data->report_data, renderPass, pGranularity);

    if (skipCall == VK_FALSE) {
        get_dispatch_table(pc_device_table_map, device)->GetRenderAreaGranularity(device, renderPass, pGranularity);
    }
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateCommandPool(VkDevice device, const VkCommandPoolCreateInfo *pCreateInfo,
                                                                   const VkAllocationCallbacks *pAllocator,
                                                                   VkCommandPool *pCommandPool) {
    VkResult result = VK_ERROR_VALIDATION_FAILED_EXT;
    VkBool32 skipCall = VK_FALSE;
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    assert(my_data != NULL);

    skipCall |= parameter_validation_vkCreateCommandPool(my_data->report_data, pCreateInfo, pAllocator, pCommandPool);

    // Only dereference pCreateInfo after the generated null check has had a chance to flag it.
    if (pCreateInfo != NULL) {
        skipCall |= validate_queue_family_indices(device, "vkCreateCommandPool", 1, &(pCreateInfo->queueFamilyIndex));
    }

    if (skipCall == VK_FALSE) {
        result = get_dispatch_table(pc_device_table_map, device)->CreateCommandPool(device, pCreateInfo, pAllocator, pCommandPool);

        validate_result(my_data->report_data, "vkCreateCommandPool", result);
    }

    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyCommandPool(VkDevice device, VkCommandPool commandPool,
                                                                const VkAllocationCallbacks *pAllocator) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    assert(my_data != NULL);

    skipCall |= parameter_validation_vkDestroyCommandPool(my_data->report_data, commandPool, pAllocator);

    if (skipCall == VK_FALSE) {
        get_dispatch_table(pc_device_table_map, device)->DestroyCommandPool(device, commandPool,
pAllocator); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetCommandPool(VkDevice device, VkCommandPool commandPool, VkCommandPoolResetFlags flags) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); VkResult result = get_dispatch_table(pc_device_table_map, device)->ResetCommandPool(device, commandPool, flags); validate_result(my_data->report_data, "vkResetCommandPool", result); return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAllocateCommandBuffers(VkDevice device, const VkCommandBufferAllocateInfo *pAllocateInfo, VkCommandBuffer *pCommandBuffers) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkAllocateCommandBuffers(my_data->report_data, pAllocateInfo, pCommandBuffers); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, device)->AllocateCommandBuffers(device, pAllocateInfo, pCommandBuffers); validate_result(my_data->report_data, "vkAllocateCommandBuffers", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkFreeCommandBuffers(VkDevice device, VkCommandPool commandPool, uint32_t commandBufferCount, const VkCommandBuffer *pCommandBuffers) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkFreeCommandBuffers(my_data->report_data, commandPool, commandBufferCount, pCommandBuffers); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, device) ->FreeCommandBuffers(device, commandPool, commandBufferCount, pCommandBuffers); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkBeginCommandBuffer(VkCommandBuffer commandBuffer, const VkCommandBufferBeginInfo *pBeginInfo) { VkResult result = VK_ERROR_VALIDATION_FAILED_EXT; VkBool32 skipCall = 
VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkBeginCommandBuffer(my_data->report_data, pBeginInfo); if (skipCall == VK_FALSE) { result = get_dispatch_table(pc_device_table_map, commandBuffer)->BeginCommandBuffer(commandBuffer, pBeginInfo); validate_result(my_data->report_data, "vkBeginCommandBuffer", result); } return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEndCommandBuffer(VkCommandBuffer commandBuffer) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); VkResult result = get_dispatch_table(pc_device_table_map, commandBuffer)->EndCommandBuffer(commandBuffer); validate_result(my_data->report_data, "vkEndCommandBuffer", result); return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkResetCommandBuffer(VkCommandBuffer commandBuffer, VkCommandBufferResetFlags flags) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); VkResult result = get_dispatch_table(pc_device_table_map, commandBuffer)->ResetCommandBuffer(commandBuffer, flags); validate_result(my_data->report_data, "vkResetCommandBuffer", result); return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindPipeline(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipeline pipeline) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdBindPipeline(my_data->report_data, pipelineBindPoint, pipeline); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdBindPipeline(commandBuffer, pipelineBindPoint, pipeline); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetViewport(VkCommandBuffer commandBuffer, uint32_t firstViewport, uint32_t viewportCount, const 
VkViewport *pViewports) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdSetViewport(my_data->report_data, firstViewport, viewportCount, pViewports); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, commandBuffer) ->CmdSetViewport(commandBuffer, firstViewport, viewportCount, pViewports); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetScissor(VkCommandBuffer commandBuffer, uint32_t firstScissor, uint32_t scissorCount, const VkRect2D *pScissors) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdSetScissor(my_data->report_data, firstScissor, scissorCount, pScissors); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetScissor(commandBuffer, firstScissor, scissorCount, pScissors); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetLineWidth(VkCommandBuffer commandBuffer, float lineWidth) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetLineWidth(commandBuffer, lineWidth); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetDepthBias(VkCommandBuffer commandBuffer, float depthBiasConstantFactor, float depthBiasClamp, float depthBiasSlopeFactor) { get_dispatch_table(pc_device_table_map, commandBuffer) ->CmdSetDepthBias(commandBuffer, depthBiasConstantFactor, depthBiasClamp, depthBiasSlopeFactor); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetBlendConstants(VkCommandBuffer commandBuffer, const float blendConstants[4]) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdSetBlendConstants(my_data->report_data, blendConstants); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, 
commandBuffer)->CmdSetBlendConstants(commandBuffer, blendConstants); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetDepthBounds(VkCommandBuffer commandBuffer, float minDepthBounds, float maxDepthBounds) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetDepthBounds(commandBuffer, minDepthBounds, maxDepthBounds); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetStencilCompareMask(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t compareMask) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetStencilCompareMask(commandBuffer, faceMask, compareMask); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetStencilWriteMask(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t writeMask) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetStencilWriteMask(commandBuffer, faceMask, writeMask); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetStencilReference(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t reference) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetStencilReference(commandBuffer, faceMask, reference); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindDescriptorSets(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipelineLayout layout, uint32_t firstSet, uint32_t descriptorSetCount, const VkDescriptorSet *pDescriptorSets, uint32_t dynamicOffsetCount, const uint32_t *pDynamicOffsets) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdBindDescriptorSets(my_data->report_data, pipelineBindPoint, layout, firstSet, descriptorSetCount, pDescriptorSets, dynamicOffsetCount, pDynamicOffsets); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, commandBuffer) ->CmdBindDescriptorSets(commandBuffer, pipelineBindPoint, layout, firstSet, descriptorSetCount, pDescriptorSets, 
dynamicOffsetCount, pDynamicOffsets); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindIndexBuffer(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, VkIndexType indexType) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdBindIndexBuffer(my_data->report_data, buffer, offset, indexType); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdBindIndexBuffer(commandBuffer, buffer, offset, indexType); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBindVertexBuffers(VkCommandBuffer commandBuffer, uint32_t firstBinding, uint32_t bindingCount, const VkBuffer *pBuffers, const VkDeviceSize *pOffsets) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdBindVertexBuffers(my_data->report_data, firstBinding, bindingCount, pBuffers, pOffsets); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, commandBuffer) ->CmdBindVertexBuffers(commandBuffer, firstBinding, bindingCount, pBuffers, pOffsets); } } bool PreCmdDraw(VkCommandBuffer commandBuffer, uint32_t vertexCount, uint32_t instanceCount, uint32_t firstVertex, uint32_t firstInstance) { if (vertexCount == 0) { // TODO: Verify against Valid Usage section. I don't see a non-zero vertexCount listed, may need to add that and make // this an error or leave as is. log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "vkCmdDraw parameter, uint32_t vertexCount, is 0"); return false; } if (instanceCount == 0) { // TODO: Verify against Valid Usage section. I don't see a non-zero instanceCount listed, may need to add that and make // this an error or leave as is. 
log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "vkCmdDraw parameter, uint32_t instanceCount, is 0"); return false; } return true; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDraw(VkCommandBuffer commandBuffer, uint32_t vertexCount, uint32_t instanceCount, uint32_t firstVertex, uint32_t firstInstance) { PreCmdDraw(commandBuffer, vertexCount, instanceCount, firstVertex, firstInstance); get_dispatch_table(pc_device_table_map, commandBuffer) ->CmdDraw(commandBuffer, vertexCount, instanceCount, firstVertex, firstInstance); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndexed(VkCommandBuffer commandBuffer, uint32_t indexCount, uint32_t instanceCount, uint32_t firstIndex, int32_t vertexOffset, uint32_t firstInstance) { get_dispatch_table(pc_device_table_map, commandBuffer) ->CmdDrawIndexed(commandBuffer, indexCount, instanceCount, firstIndex, vertexOffset, firstInstance); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t count, uint32_t stride) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdDrawIndirect(commandBuffer, buffer, offset, count, stride); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndexedIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t count, uint32_t stride) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdDrawIndexedIndirect(commandBuffer, buffer, offset, count, stride); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDispatch(VkCommandBuffer commandBuffer, uint32_t x, uint32_t y, uint32_t z) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdDispatch(commandBuffer, x, y, z); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdDispatchIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset) { get_dispatch_table(pc_device_table_map, 
commandBuffer)->CmdDispatchIndirect(commandBuffer, buffer, offset); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyBuffer(VkCommandBuffer commandBuffer, VkBuffer srcBuffer, VkBuffer dstBuffer, uint32_t regionCount, const VkBufferCopy *pRegions) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdCopyBuffer(my_data->report_data, srcBuffer, dstBuffer, regionCount, pRegions); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, commandBuffer) ->CmdCopyBuffer(commandBuffer, srcBuffer, dstBuffer, regionCount, pRegions); } } bool PreCmdCopyImage(VkCommandBuffer commandBuffer, const VkImageCopy *pRegions) { if (pRegions != nullptr) { if ((pRegions->srcSubresource.aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0) { log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "vkCmdCopyImage parameter, VkImageAspect pRegions->srcSubresource.aspectMask, is an unrecognized enumerator"); return false; } if ((pRegions->dstSubresource.aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0) { log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "vkCmdCopyImage parameter, VkImageAspect pRegions->dstSubresource.aspectMask, is an unrecognized enumerator"); return false; } } return true; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageCopy *pRegions) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); 
assert(my_data != NULL);

    skipCall |= parameter_validation_vkCmdCopyImage(my_data->report_data, srcImage, srcImageLayout, dstImage, dstImageLayout,
                                                    regionCount, pRegions);

    if (skipCall == VK_FALSE) {
        PreCmdCopyImage(commandBuffer, pRegions);

        get_dispatch_table(pc_device_table_map, commandBuffer)
            ->CmdCopyImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions);
    }
}

bool PreCmdBlitImage(VkCommandBuffer commandBuffer, const VkImageBlit *pRegions) {
    if (pRegions != nullptr) {
        if ((pRegions->srcSubresource.aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT |
                                                    VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0) {
            log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
                    "vkCmdBlitImage parameter, VkImageAspect pRegions->srcSubresource.aspectMask, is an unrecognized enumerator");
            return false;
        }
        if ((pRegions->dstSubresource.aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT |
                                                    VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0) {
            log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK",
                    "vkCmdBlitImage parameter, VkImageAspect pRegions->dstSubresource.aspectMask, is an unrecognized enumerator");
            return false;
        }
    }
    return true;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBlitImage(VkCommandBuffer commandBuffer, VkImage srcImage,
                                                          VkImageLayout srcImageLayout, VkImage dstImage,
                                                          VkImageLayout dstImageLayout, uint32_t regionCount,
                                                          const VkImageBlit *pRegions, VkFilter filter) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map);
    assert(my_data != NULL);

    skipCall |= parameter_validation_vkCmdBlitImage(my_data->report_data, srcImage, srcImageLayout, dstImage, dstImageLayout,
                                                    regionCount, pRegions, filter);

    if (skipCall == VK_FALSE) {
        PreCmdBlitImage(commandBuffer, pRegions);
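PreCmdCopyImage, PreCmdBlitImage, and the buffer-copy checks that follow all repeat the same aspectMask test. A hedged sketch of factoring it into one helper; the bit values mirror Vulkan's `VK_IMAGE_ASPECT_*` flags and the helper name is invented, not part of the layer:

```cpp
#include <cstdint>

// Local copies of the VK_IMAGE_ASPECT_* bit values used by the checks above.
constexpr uint32_t ASPECT_COLOR    = 0x1;
constexpr uint32_t ASPECT_DEPTH    = 0x2;
constexpr uint32_t ASPECT_STENCIL  = 0x4;
constexpr uint32_t ASPECT_METADATA = 0x8;

// True when the mask contains at least one recognized aspect bit, mirroring
// the `(aspectMask & (...)) == 0` failure condition repeated in the Pre* functions.
bool HasRecognizedAspect(uint32_t aspectMask) {
    return (aspectMask & (ASPECT_COLOR | ASPECT_DEPTH | ASPECT_STENCIL | ASPECT_METADATA)) != 0;
}
```

Each Pre* function could then report its own command name while sharing this single predicate, which would also have avoided the copy-pasted message text.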
get_dispatch_table(pc_device_table_map, commandBuffer) ->CmdBlitImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions, filter); } } bool PreCmdCopyBufferToImage(VkCommandBuffer commandBuffer, const VkBufferImageCopy *pRegions) { if (pRegions != nullptr) { if ((pRegions->imageSubresource.aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0) { log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "vkCmdCopyBufferToImage parameter, VkImageAspect pRegions->imageSubresource.aspectMask, is an unrecognized " "enumerator"); return false; } } return true; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyBufferToImage(VkCommandBuffer commandBuffer, VkBuffer srcBuffer, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkBufferImageCopy *pRegions) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdCopyBufferToImage(my_data->report_data, srcBuffer, dstImage, dstImageLayout, regionCount, pRegions); if (skipCall == VK_FALSE) { PreCmdCopyBufferToImage(commandBuffer, pRegions); get_dispatch_table(pc_device_table_map, commandBuffer) ->CmdCopyBufferToImage(commandBuffer, srcBuffer, dstImage, dstImageLayout, regionCount, pRegions); } } bool PreCmdCopyImageToBuffer(VkCommandBuffer commandBuffer, const VkBufferImageCopy *pRegions) { if (pRegions != nullptr) { if ((pRegions->imageSubresource.aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0) { log_msg(mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "vkCmdCopyImageToBuffer parameter, VkImageAspect pRegions->imageSubresource.aspectMask, is an 
unrecognized " "enumerator"); return false; } } return true; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyImageToBuffer(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkBuffer dstBuffer, uint32_t regionCount, const VkBufferImageCopy *pRegions) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdCopyImageToBuffer(my_data->report_data, srcImage, srcImageLayout, dstBuffer, regionCount, pRegions); if (skipCall == VK_FALSE) { PreCmdCopyImageToBuffer(commandBuffer, pRegions); get_dispatch_table(pc_device_table_map, commandBuffer) ->CmdCopyImageToBuffer(commandBuffer, srcImage, srcImageLayout, dstBuffer, regionCount, pRegions); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdUpdateBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize dataSize, const uint32_t *pData) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdUpdateBuffer(my_data->report_data, dstBuffer, dstOffset, dataSize, pData); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, commandBuffer) ->CmdUpdateBuffer(commandBuffer, dstBuffer, dstOffset, dataSize, pData); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdFillBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize size, uint32_t data) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdFillBuffer(commandBuffer, dstBuffer, dstOffset, size, data); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearColorImage(VkCommandBuffer commandBuffer, VkImage image, VkImageLayout imageLayout, const VkClearColorValue *pColor, uint32_t rangeCount, const VkImageSubresourceRange *pRanges) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = 
get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdClearColorImage(my_data->report_data, image, imageLayout, pColor, rangeCount, pRanges); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, commandBuffer) ->CmdClearColorImage(commandBuffer, image, imageLayout, pColor, rangeCount, pRanges); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearDepthStencilImage(VkCommandBuffer commandBuffer, VkImage image, VkImageLayout imageLayout, const VkClearDepthStencilValue *pDepthStencil, uint32_t rangeCount, const VkImageSubresourceRange *pRanges) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdClearDepthStencilImage(my_data->report_data, image, imageLayout, pDepthStencil, rangeCount, pRanges); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, commandBuffer) ->CmdClearDepthStencilImage(commandBuffer, image, imageLayout, pDepthStencil, rangeCount, pRanges); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdClearAttachments(VkCommandBuffer commandBuffer, uint32_t attachmentCount, const VkClearAttachment *pAttachments, uint32_t rectCount, const VkClearRect *pRects) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdClearAttachments(my_data->report_data, attachmentCount, pAttachments, rectCount, pRects); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, commandBuffer) ->CmdClearAttachments(commandBuffer, attachmentCount, pAttachments, rectCount, pRects); } } bool PreCmdResolveImage(VkCommandBuffer commandBuffer, const VkImageResolve *pRegions) { if (pRegions != nullptr) { if ((pRegions->srcSubresource.aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | 
VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0) { log_msg( mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "vkCmdResolveImage parameter, VkImageAspect pRegions->srcSubresource.aspectMask, is an unrecognized enumerator"); return false; } if ((pRegions->dstSubresource.aspectMask & (VK_IMAGE_ASPECT_COLOR_BIT | VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT | VK_IMAGE_ASPECT_METADATA_BIT)) == 0) { log_msg( mdd(commandBuffer), VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1, "PARAMCHECK", "vkCmdResolveImage parameter, VkImageAspect pRegions->dstSubresource.aspectMask, is an unrecognized enumerator"); return false; } } return true; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdResolveImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageResolve *pRegions) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdResolveImage(my_data->report_data, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions); if (skipCall == VK_FALSE) { PreCmdResolveImage(commandBuffer, pRegions); get_dispatch_table(pc_device_table_map, commandBuffer) ->CmdResolveImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdSetEvent(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdSetEvent(commandBuffer, event, stageMask); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdResetEvent(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdResetEvent(commandBuffer, 
event, stageMask); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdWaitEvents(VkCommandBuffer commandBuffer, uint32_t eventCount, const VkEvent *pEvents, VkPipelineStageFlags srcStageMask, VkPipelineStageFlags dstStageMask, uint32_t memoryBarrierCount, const VkMemoryBarrier *pMemoryBarriers, uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier *pBufferMemoryBarriers, uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier *pImageMemoryBarriers) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdWaitEvents(my_data->report_data, eventCount, pEvents, srcStageMask, dstStageMask, memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount, pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, commandBuffer) ->CmdWaitEvents(commandBuffer, eventCount, pEvents, srcStageMask, dstStageMask, memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount, pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdPipelineBarrier(VkCommandBuffer commandBuffer, VkPipelineStageFlags srcStageMask, VkPipelineStageFlags dstStageMask, VkDependencyFlags dependencyFlags, uint32_t memoryBarrierCount, const VkMemoryBarrier *pMemoryBarriers, uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier *pBufferMemoryBarriers, uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier *pImageMemoryBarriers) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdPipelineBarrier(my_data->report_data, srcStageMask, dstStageMask, dependencyFlags, memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount, pBufferMemoryBarriers, imageMemoryBarrierCount, 
pImageMemoryBarriers); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, commandBuffer) ->CmdPipelineBarrier(commandBuffer, srcStageMask, dstStageMask, dependencyFlags, memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount, pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBeginQuery(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t slot, VkQueryControlFlags flags) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdBeginQuery(commandBuffer, queryPool, slot, flags); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdEndQuery(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t slot) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdEndQuery(commandBuffer, queryPool, slot); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdResetQueryPool(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdResetQueryPool(commandBuffer, queryPool, firstQuery, queryCount); } bool PostCmdWriteTimestamp(VkCommandBuffer commandBuffer, VkPipelineStageFlagBits pipelineStage, VkQueryPool queryPool, uint32_t slot) { ValidateEnumerator(pipelineStage); return true; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdWriteTimestamp(VkCommandBuffer commandBuffer, VkPipelineStageFlagBits pipelineStage, VkQueryPool queryPool, uint32_t slot) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdWriteTimestamp(commandBuffer, pipelineStage, queryPool, slot); PostCmdWriteTimestamp(commandBuffer, pipelineStage, queryPool, slot); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdCopyQueryPoolResults(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize stride, VkQueryResultFlags flags) { get_dispatch_table(pc_device_table_map, commandBuffer) 
->CmdCopyQueryPoolResults(commandBuffer, queryPool, firstQuery, queryCount, dstBuffer, dstOffset, stride, flags); } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdPushConstants(VkCommandBuffer commandBuffer, VkPipelineLayout layout, VkShaderStageFlags stageFlags, uint32_t offset, uint32_t size, const void *pValues) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdPushConstants(my_data->report_data, layout, stageFlags, offset, size, pValues); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, commandBuffer) ->CmdPushConstants(commandBuffer, layout, stageFlags, offset, size, pValues); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdBeginRenderPass(VkCommandBuffer commandBuffer, const VkRenderPassBeginInfo *pRenderPassBegin, VkSubpassContents contents) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdBeginRenderPass(my_data->report_data, pRenderPassBegin, contents); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdBeginRenderPass(commandBuffer, pRenderPassBegin, contents); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdNextSubpass(VkCommandBuffer commandBuffer, VkSubpassContents contents) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdNextSubpass(my_data->report_data, contents); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdNextSubpass(commandBuffer, contents); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkCmdEndRenderPass(VkCommandBuffer commandBuffer) { get_dispatch_table(pc_device_table_map, commandBuffer)->CmdEndRenderPass(commandBuffer); } VK_LAYER_EXPORT 
VKAPI_ATTR void VKAPI_CALL vkCmdExecuteCommands(VkCommandBuffer commandBuffer, uint32_t commandBufferCount, const VkCommandBuffer *pCommandBuffers) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(commandBuffer), layer_data_map); assert(my_data != NULL); skipCall |= parameter_validation_vkCmdExecuteCommands(my_data->report_data, commandBufferCount, pCommandBuffers); if (skipCall == VK_FALSE) { get_dispatch_table(pc_device_table_map, commandBuffer) ->CmdExecuteCommands(commandBuffer, commandBufferCount, pCommandBuffers); } } VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice device, const char *funcName) { layer_data *data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); if (validate_string(data->report_data, "vkGetDeviceProcAddr", "funcName", funcName) == VK_TRUE) { return NULL; } if (!strcmp(funcName, "vkGetDeviceProcAddr")) return (PFN_vkVoidFunction)vkGetDeviceProcAddr; if (!strcmp(funcName, "vkDestroyDevice")) return (PFN_vkVoidFunction)vkDestroyDevice; if (!strcmp(funcName, "vkGetDeviceQueue")) return (PFN_vkVoidFunction)vkGetDeviceQueue; if (!strcmp(funcName, "vkQueueSubmit")) return (PFN_vkVoidFunction)vkQueueSubmit; if (!strcmp(funcName, "vkQueueWaitIdle")) return (PFN_vkVoidFunction)vkQueueWaitIdle; if (!strcmp(funcName, "vkDeviceWaitIdle")) return (PFN_vkVoidFunction)vkDeviceWaitIdle; if (!strcmp(funcName, "vkAllocateMemory")) return (PFN_vkVoidFunction)vkAllocateMemory; if (!strcmp(funcName, "vkFreeMemory")) return (PFN_vkVoidFunction)vkFreeMemory; if (!strcmp(funcName, "vkMapMemory")) return (PFN_vkVoidFunction)vkMapMemory; if (!strcmp(funcName, "vkFlushMappedMemoryRanges")) return (PFN_vkVoidFunction)vkFlushMappedMemoryRanges; if (!strcmp(funcName, "vkInvalidateMappedMemoryRanges")) return (PFN_vkVoidFunction)vkInvalidateMappedMemoryRanges; if (!strcmp(funcName, "vkCreateFence")) return (PFN_vkVoidFunction)vkCreateFence; if (!strcmp(funcName, "vkDestroyFence")) return 
(PFN_vkVoidFunction)vkDestroyFence; if (!strcmp(funcName, "vkResetFences")) return (PFN_vkVoidFunction)vkResetFences; if (!strcmp(funcName, "vkGetFenceStatus")) return (PFN_vkVoidFunction)vkGetFenceStatus; if (!strcmp(funcName, "vkWaitForFences")) return (PFN_vkVoidFunction)vkWaitForFences; if (!strcmp(funcName, "vkCreateSemaphore")) return (PFN_vkVoidFunction)vkCreateSemaphore; if (!strcmp(funcName, "vkDestroySemaphore")) return (PFN_vkVoidFunction)vkDestroySemaphore; if (!strcmp(funcName, "vkCreateEvent")) return (PFN_vkVoidFunction)vkCreateEvent; if (!strcmp(funcName, "vkDestroyEvent")) return (PFN_vkVoidFunction)vkDestroyEvent; if (!strcmp(funcName, "vkGetEventStatus")) return (PFN_vkVoidFunction)vkGetEventStatus; if (!strcmp(funcName, "vkSetEvent")) return (PFN_vkVoidFunction)vkSetEvent; if (!strcmp(funcName, "vkResetEvent")) return (PFN_vkVoidFunction)vkResetEvent; if (!strcmp(funcName, "vkCreateQueryPool")) return (PFN_vkVoidFunction)vkCreateQueryPool; if (!strcmp(funcName, "vkDestroyQueryPool")) return (PFN_vkVoidFunction)vkDestroyQueryPool; if (!strcmp(funcName, "vkGetQueryPoolResults")) return (PFN_vkVoidFunction)vkGetQueryPoolResults; if (!strcmp(funcName, "vkCreateBuffer")) return (PFN_vkVoidFunction)vkCreateBuffer; if (!strcmp(funcName, "vkDestroyBuffer")) return (PFN_vkVoidFunction)vkDestroyBuffer; if (!strcmp(funcName, "vkCreateBufferView")) return (PFN_vkVoidFunction)vkCreateBufferView; if (!strcmp(funcName, "vkDestroyBufferView")) return (PFN_vkVoidFunction)vkDestroyBufferView; if (!strcmp(funcName, "vkCreateImage")) return (PFN_vkVoidFunction)vkCreateImage; if (!strcmp(funcName, "vkDestroyImage")) return (PFN_vkVoidFunction)vkDestroyImage; if (!strcmp(funcName, "vkGetImageSubresourceLayout")) return (PFN_vkVoidFunction)vkGetImageSubresourceLayout; if (!strcmp(funcName, "vkCreateImageView")) return (PFN_vkVoidFunction)vkCreateImageView; if (!strcmp(funcName, "vkDestroyImageView")) return (PFN_vkVoidFunction)vkDestroyImageView; if (!strcmp(funcName, 
"vkCreateShaderModule")) return (PFN_vkVoidFunction)vkCreateShaderModule; if (!strcmp(funcName, "vkDestroyShaderModule")) return (PFN_vkVoidFunction)vkDestroyShaderModule; if (!strcmp(funcName, "vkCreatePipelineCache")) return (PFN_vkVoidFunction)vkCreatePipelineCache; if (!strcmp(funcName, "vkDestroyPipelineCache")) return (PFN_vkVoidFunction)vkDestroyPipelineCache; if (!strcmp(funcName, "vkGetPipelineCacheData")) return (PFN_vkVoidFunction)vkGetPipelineCacheData; if (!strcmp(funcName, "vkMergePipelineCaches")) return (PFN_vkVoidFunction)vkMergePipelineCaches; if (!strcmp(funcName, "vkCreateGraphicsPipelines")) return (PFN_vkVoidFunction)vkCreateGraphicsPipelines; if (!strcmp(funcName, "vkCreateComputePipelines")) return (PFN_vkVoidFunction)vkCreateComputePipelines; if (!strcmp(funcName, "vkDestroyPipeline")) return (PFN_vkVoidFunction)vkDestroyPipeline; if (!strcmp(funcName, "vkCreatePipelineLayout")) return (PFN_vkVoidFunction)vkCreatePipelineLayout; if (!strcmp(funcName, "vkDestroyPipelineLayout")) return (PFN_vkVoidFunction)vkDestroyPipelineLayout; if (!strcmp(funcName, "vkCreateSampler")) return (PFN_vkVoidFunction)vkCreateSampler; if (!strcmp(funcName, "vkDestroySampler")) return (PFN_vkVoidFunction)vkDestroySampler; if (!strcmp(funcName, "vkCreateDescriptorSetLayout")) return (PFN_vkVoidFunction)vkCreateDescriptorSetLayout; if (!strcmp(funcName, "vkDestroyDescriptorSetLayout")) return (PFN_vkVoidFunction)vkDestroyDescriptorSetLayout; if (!strcmp(funcName, "vkCreateDescriptorPool")) return (PFN_vkVoidFunction)vkCreateDescriptorPool; if (!strcmp(funcName, "vkDestroyDescriptorPool")) return (PFN_vkVoidFunction)vkDestroyDescriptorPool; if (!strcmp(funcName, "vkResetDescriptorPool")) return (PFN_vkVoidFunction)vkResetDescriptorPool; if (!strcmp(funcName, "vkAllocateDescriptorSets")) return (PFN_vkVoidFunction)vkAllocateDescriptorSets; if (!strcmp(funcName, "vkCmdSetViewport")) return (PFN_vkVoidFunction)vkCmdSetViewport; if (!strcmp(funcName, "vkCmdSetScissor")) 
return (PFN_vkVoidFunction)vkCmdSetScissor; if (!strcmp(funcName, "vkCmdSetLineWidth")) return (PFN_vkVoidFunction)vkCmdSetLineWidth; if (!strcmp(funcName, "vkCmdSetDepthBias")) return (PFN_vkVoidFunction)vkCmdSetDepthBias; if (!strcmp(funcName, "vkCmdSetBlendConstants")) return (PFN_vkVoidFunction)vkCmdSetBlendConstants; if (!strcmp(funcName, "vkCmdSetDepthBounds")) return (PFN_vkVoidFunction)vkCmdSetDepthBounds; if (!strcmp(funcName, "vkCmdSetStencilCompareMask")) return (PFN_vkVoidFunction)vkCmdSetStencilCompareMask; if (!strcmp(funcName, "vkCmdSetStencilWriteMask")) return (PFN_vkVoidFunction)vkCmdSetStencilWriteMask; if (!strcmp(funcName, "vkCmdSetStencilReference")) return (PFN_vkVoidFunction)vkCmdSetStencilReference; if (!strcmp(funcName, "vkAllocateCommandBuffers")) return (PFN_vkVoidFunction)vkAllocateCommandBuffers; if (!strcmp(funcName, "vkFreeCommandBuffers")) return (PFN_vkVoidFunction)vkFreeCommandBuffers; if (!strcmp(funcName, "vkBeginCommandBuffer")) return (PFN_vkVoidFunction)vkBeginCommandBuffer; if (!strcmp(funcName, "vkEndCommandBuffer")) return (PFN_vkVoidFunction)vkEndCommandBuffer; if (!strcmp(funcName, "vkResetCommandBuffer")) return (PFN_vkVoidFunction)vkResetCommandBuffer; if (!strcmp(funcName, "vkCmdBindPipeline")) return (PFN_vkVoidFunction)vkCmdBindPipeline; if (!strcmp(funcName, "vkCmdBindDescriptorSets")) return (PFN_vkVoidFunction)vkCmdBindDescriptorSets; if (!strcmp(funcName, "vkCmdBindVertexBuffers")) return (PFN_vkVoidFunction)vkCmdBindVertexBuffers; if (!strcmp(funcName, "vkCmdBindIndexBuffer")) return (PFN_vkVoidFunction)vkCmdBindIndexBuffer; if (!strcmp(funcName, "vkCmdDraw")) return (PFN_vkVoidFunction)vkCmdDraw; if (!strcmp(funcName, "vkCmdDrawIndexed")) return (PFN_vkVoidFunction)vkCmdDrawIndexed; if (!strcmp(funcName, "vkCmdDrawIndirect")) return (PFN_vkVoidFunction)vkCmdDrawIndirect; if (!strcmp(funcName, "vkCmdDrawIndexedIndirect")) return (PFN_vkVoidFunction)vkCmdDrawIndexedIndirect; if (!strcmp(funcName, 
"vkCmdDispatch")) return (PFN_vkVoidFunction)vkCmdDispatch; if (!strcmp(funcName, "vkCmdDispatchIndirect")) return (PFN_vkVoidFunction)vkCmdDispatchIndirect; if (!strcmp(funcName, "vkCmdCopyBuffer")) return (PFN_vkVoidFunction)vkCmdCopyBuffer; if (!strcmp(funcName, "vkCmdCopyImage")) return (PFN_vkVoidFunction)vkCmdCopyImage; if (!strcmp(funcName, "vkCmdBlitImage")) return (PFN_vkVoidFunction)vkCmdBlitImage; if (!strcmp(funcName, "vkCmdCopyBufferToImage")) return (PFN_vkVoidFunction)vkCmdCopyBufferToImage; if (!strcmp(funcName, "vkCmdCopyImageToBuffer")) return (PFN_vkVoidFunction)vkCmdCopyImageToBuffer; if (!strcmp(funcName, "vkCmdUpdateBuffer")) return (PFN_vkVoidFunction)vkCmdUpdateBuffer; if (!strcmp(funcName, "vkCmdFillBuffer")) return (PFN_vkVoidFunction)vkCmdFillBuffer; if (!strcmp(funcName, "vkCmdClearColorImage")) return (PFN_vkVoidFunction)vkCmdClearColorImage; if (!strcmp(funcName, "vkCmdResolveImage")) return (PFN_vkVoidFunction)vkCmdResolveImage; if (!strcmp(funcName, "vkCmdSetEvent")) return (PFN_vkVoidFunction)vkCmdSetEvent; if (!strcmp(funcName, "vkCmdResetEvent")) return (PFN_vkVoidFunction)vkCmdResetEvent; if (!strcmp(funcName, "vkCmdWaitEvents")) return (PFN_vkVoidFunction)vkCmdWaitEvents; if (!strcmp(funcName, "vkCmdPipelineBarrier")) return (PFN_vkVoidFunction)vkCmdPipelineBarrier; if (!strcmp(funcName, "vkCmdBeginQuery")) return (PFN_vkVoidFunction)vkCmdBeginQuery; if (!strcmp(funcName, "vkCmdEndQuery")) return (PFN_vkVoidFunction)vkCmdEndQuery; if (!strcmp(funcName, "vkCmdResetQueryPool")) return (PFN_vkVoidFunction)vkCmdResetQueryPool; if (!strcmp(funcName, "vkCmdWriteTimestamp")) return (PFN_vkVoidFunction)vkCmdWriteTimestamp; if (!strcmp(funcName, "vkCmdCopyQueryPoolResults")) return (PFN_vkVoidFunction)vkCmdCopyQueryPoolResults; if (!strcmp(funcName, "vkCreateFramebuffer")) return (PFN_vkVoidFunction)vkCreateFramebuffer; if (!strcmp(funcName, "vkDestroyFramebuffer")) return (PFN_vkVoidFunction)vkDestroyFramebuffer; if (!strcmp(funcName, 
"vkCreateRenderPass")) return (PFN_vkVoidFunction)vkCreateRenderPass; if (!strcmp(funcName, "vkDestroyRenderPass")) return (PFN_vkVoidFunction)vkDestroyRenderPass; if (!strcmp(funcName, "vkGetRenderAreaGranularity")) return (PFN_vkVoidFunction)vkGetRenderAreaGranularity; if (!strcmp(funcName, "vkCreateCommandPool")) return (PFN_vkVoidFunction)vkCreateCommandPool; if (!strcmp(funcName, "vkDestroyCommandPool")) return (PFN_vkVoidFunction)vkDestroyCommandPool; if (!strcmp(funcName, "vkCmdBeginRenderPass")) return (PFN_vkVoidFunction)vkCmdBeginRenderPass; if (!strcmp(funcName, "vkCmdNextSubpass")) return (PFN_vkVoidFunction)vkCmdNextSubpass; if (device == NULL) { return NULL; } if (get_dispatch_table(pc_device_table_map, device)->GetDeviceProcAddr == NULL) return NULL; return get_dispatch_table(pc_device_table_map, device)->GetDeviceProcAddr(device, funcName); } VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(VkInstance instance, const char *funcName) { if (!strcmp(funcName, "vkGetInstanceProcAddr")) return (PFN_vkVoidFunction)vkGetInstanceProcAddr; if (!strcmp(funcName, "vkCreateInstance")) return (PFN_vkVoidFunction)vkCreateInstance; if (!strcmp(funcName, "vkDestroyInstance")) return (PFN_vkVoidFunction)vkDestroyInstance; if (!strcmp(funcName, "vkCreateDevice")) return (PFN_vkVoidFunction)vkCreateDevice; if (!strcmp(funcName, "vkEnumeratePhysicalDevices")) return (PFN_vkVoidFunction)vkEnumeratePhysicalDevices; if (!strcmp(funcName, "vkGetPhysicalDeviceProperties")) return (PFN_vkVoidFunction)vkGetPhysicalDeviceProperties; if (!strcmp(funcName, "vkGetPhysicalDeviceFeatures")) return (PFN_vkVoidFunction)vkGetPhysicalDeviceFeatures; if (!strcmp(funcName, "vkGetPhysicalDeviceFormatProperties")) return (PFN_vkVoidFunction)vkGetPhysicalDeviceFormatProperties; if (!strcmp(funcName, "vkEnumerateInstanceLayerProperties")) return (PFN_vkVoidFunction)vkEnumerateInstanceLayerProperties; if (!strcmp(funcName, 
"vkEnumerateInstanceExtensionProperties")) return (PFN_vkVoidFunction)vkEnumerateInstanceExtensionProperties; if (!strcmp(funcName, "vkEnumerateDeviceLayerProperties")) return (PFN_vkVoidFunction)vkEnumerateDeviceLayerProperties; if (!strcmp(funcName, "vkEnumerateDeviceExtensionProperties")) return (PFN_vkVoidFunction)vkEnumerateDeviceExtensionProperties; if (instance == NULL) { return NULL; } layer_data *data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map); PFN_vkVoidFunction fptr = debug_report_get_instance_proc_addr(data->report_data, funcName); if (fptr) return fptr; if (get_dispatch_table(pc_instance_table_map, instance)->GetInstanceProcAddr == NULL) return NULL; return get_dispatch_table(pc_instance_table_map, instance)->GetInstanceProcAddr(instance, funcName); }

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/parameter_validation_utils.h

/* Copyright (c) 2015-2016 The Khronos Group Inc.
 * Copyright (c) 2015-2016 Valve Corporation
 * Copyright (c) 2015-2016 LunarG, Inc.
 * Copyright (C) 2015-2016 Google Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and/or associated documentation files (the "Materials"), to
 * deal in the Materials without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Materials, and to permit persons to whom the Materials
 * are furnished to do so, subject to the following conditions:
 *
 * The above copyright notice(s) and this permission notice shall be included
 * in all copies or substantial portions of the Materials.
 *
 * The Materials are Confidential Information as defined by the Khronos
 * Membership Agreement until designated non-confidential by Khronos, at which
 * point this condition clause shall be removed.
 *
 * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 *
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
 * USE OR OTHER DEALINGS IN THE MATERIALS
 *
 * Author: Dustin Graves
 */

#ifndef PARAMETER_VALIDATION_UTILS_H
#define PARAMETER_VALIDATION_UTILS_H

#include <algorithm>
#include <cstdlib>
#include <string>

#include "vulkan/vulkan.h"
#include "vk_enum_string_helper.h"
#include "vk_layer_logging.h"

namespace {
struct GenericHeader {
    VkStructureType sType;
    const void *pNext;
};
}

// Layer name string to be logged with validation messages.
const char ParameterValidationName[] = "ParameterValidation";

// String returned by string_VkStructureType for an unrecognized type.
const std::string UnsupportedStructureTypeString = "Unhandled VkStructureType";

// String returned by string_VkResult for an unrecognized type.
const std::string UnsupportedResultString = "Unhandled VkResult";

// The base value used when computing the offset for an enumeration token value that is added by an extension.
// When validating enumeration tokens, any value >= this value is considered to be provided by an extension.
// See Appendix C.10 "Assigning Extension Token Values" from the Vulkan specification
const uint32_t ExtEnumBaseValue = 1000000000;

template <typename T> bool is_extension_added_token(T value) {
    return (std::abs(static_cast<int32_t>(value)) >= ExtEnumBaseValue);
}

// VK_SAMPLER_ADDRESS_MODE_MIRROR_CLAMP_TO_EDGE token is a special case that was converted from a core token to an
// extension added token. Its original value was intentionally preserved after the conversion, so it does not use
// the base value that other extension added tokens use, and it does not fall within the enum's begin/end range.
template <> bool is_extension_added_token(VkSamplerAddressMode value) {
    bool result = (std::abs(static_cast<int32_t>(value)) >= ExtEnumBaseValue);
    return (result || (value == VK_SAMPLER_ADDRESS_MODE_MIRROR_CLAMP_TO_EDGE));
}

/**
 * Validate a required pointer.
 *
 * Verify that a required pointer is not NULL.
 *
 * @param report_data debug_report_data object for routing validation messages.
 * @param apiName Name of API call being validated.
 * @param parameterName Name of parameter being validated.
 * @param value Pointer to validate.
 * @return Boolean value indicating that the call should be skipped.
 */
static VkBool32 validate_required_pointer(debug_report_data *report_data, const char *apiName, const char *parameterName,
                                          const void *value) {
    VkBool32 skipCall = VK_FALSE;

    if (value == NULL) {
        skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                            ParameterValidationName, "%s: required parameter %s specified as NULL", apiName, parameterName);
    }

    return skipCall;
}

/**
 * Validate pointer to array count and pointer to array.
 *
 * Verify that required count and array parameters are not NULL. If count
 * is not NULL and its value is not optional, verify that it is not 0. If the
 * array parameter is NULL, and it is not optional, verify that count is 0.
 * The array parameter will typically be optional for this case (where count is
 * a pointer), allowing the caller to retrieve the available count.
 *
 * @param report_data debug_report_data object for routing validation messages.
 * @param apiName Name of API call being validated.
 * @param countName Name of count parameter.
 * @param arrayName Name of array parameter.
 * @param count Pointer to the number of elements in the array.
 * @param array Array to validate.
 * @param countPtrRequired The 'count' parameter may not be NULL when true.
 * @param countValueRequired The '*count' value may not be 0 when true.
 * @param arrayRequired The 'array' parameter may not be NULL when true.
 * @return Boolean value indicating that the call should be skipped.
 */
template <typename T>
VkBool32 validate_array(debug_report_data *report_data, const char *apiName, const char *countName, const char *arrayName,
                        const T *count, const void *array, VkBool32 countPtrRequired, VkBool32 countValueRequired,
                        VkBool32 arrayRequired) {
    VkBool32 skipCall = VK_FALSE;

    if (count == NULL) {
        if (countPtrRequired == VK_TRUE) {
            skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                                ParameterValidationName, "%s: required parameter %s specified as NULL", apiName, countName);
        }
    } else {
        skipCall |= validate_array(report_data, apiName, countName, arrayName, (*count), array, countValueRequired, arrayRequired);
    }

    return skipCall;
}

/**
 * Validate array count and pointer to array.
 *
 * Verify that required count and array parameters are not 0 or NULL. If the
 * count parameter is not optional, verify that it is not 0. If the array
 * parameter is NULL, and it is not optional, verify that count is 0.
 *
 * @param report_data debug_report_data object for routing validation messages.
 * @param apiName Name of API call being validated.
 * @param countName Name of count parameter.
 * @param arrayName Name of array parameter.
 * @param count Number of elements in the array.
 * @param array Array to validate.
 * @param countRequired The 'count' parameter may not be 0 when true.
 * @param arrayRequired The 'array' parameter may not be NULL when true.
 * @return Boolean value indicating that the call should be skipped.
 */
template <typename T>
VkBool32 validate_array(debug_report_data *report_data, const char *apiName, const char *countName, const char *arrayName,
                        T count, const void *array, VkBool32 countRequired, VkBool32 arrayRequired) {
    VkBool32 skipCall = VK_FALSE;

    // Count parameters not tagged as optional cannot be 0
    if ((count == 0) && (countRequired == VK_TRUE)) {
        skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                            ParameterValidationName, "%s: value of %s must be greater than 0", apiName, countName);
    }

    // Array parameters not tagged as optional cannot be NULL,
    // unless the count is 0
    if ((array == NULL) && (arrayRequired == VK_TRUE) && (count != 0)) {
        skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                            ParameterValidationName, "%s: required parameter %s specified as NULL", apiName, arrayName);
    }

    return skipCall;
}

/**
 * Validate a Vulkan structure type.
 *
 * @param report_data debug_report_data object for routing validation messages.
 * @param apiName Name of API call being validated.
 * @param parameterName Name of struct parameter being validated.
 * @param sTypeName Name of expected VkStructureType value.
 * @param value Pointer to the struct to validate.
 * @param sType VkStructureType for structure validation.
 * @param required The parameter may not be NULL when true.
 * @return Boolean value indicating that the call should be skipped.
 */
template <typename T>
VkBool32 validate_struct_type(debug_report_data *report_data, const char *apiName, const char *parameterName,
                              const char *sTypeName, const T *value, VkStructureType sType, VkBool32 required) {
    VkBool32 skipCall = VK_FALSE;

    if (value == NULL) {
        if (required == VK_TRUE) {
            skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                                ParameterValidationName, "%s: required parameter %s specified as NULL", apiName, parameterName);
        }
    } else if (value->sType != sType) {
        skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                            ParameterValidationName, "%s: parameter %s->sType must be %s", apiName, parameterName, sTypeName);
    }

    return skipCall;
}

/**
 * Validate an array of Vulkan structures.
 *
 * Verify that required count and array parameters are not NULL. If count
 * is not NULL and its value is not optional, verify that it is not 0.
 * If the array contains 1 or more structures, verify that each structure's
 * sType field is set to the correct VkStructureType value.
 *
 * @param report_data debug_report_data object for routing validation messages.
 * @param apiName Name of API call being validated.
 * @param countName Name of count parameter.
 * @param arrayName Name of array parameter.
 * @param sTypeName Name of expected VkStructureType value.
 * @param count Pointer to the number of elements in the array.
 * @param array Array to validate.
 * @param sType VkStructureType for structure validation.
 * @param countPtrRequired The 'count' parameter may not be NULL when true.
 * @param countValueRequired The '*count' value may not be 0 when true.
 * @param arrayRequired The 'array' parameter may not be NULL when true.
 * @return Boolean value indicating that the call should be skipped.
 */
template <typename T>
VkBool32 validate_struct_type_array(debug_report_data *report_data, const char *apiName, const char *countName,
                                    const char *arrayName, const char *sTypeName, const uint32_t *count, const T *array,
                                    VkStructureType sType, VkBool32 countPtrRequired, VkBool32 countValueRequired,
                                    VkBool32 arrayRequired) {
    VkBool32 skipCall = VK_FALSE;

    if (count == NULL) {
        if (countPtrRequired == VK_TRUE) {
            skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                                ParameterValidationName, "%s: required parameter %s specified as NULL", apiName, countName);
        }
    } else {
        skipCall |= validate_struct_type_array(report_data, apiName, countName, arrayName, sTypeName, (*count), array, sType,
                                               countValueRequired, arrayRequired);
    }

    return skipCall;
}

/**
 * Validate an array of Vulkan structures.
 *
 * Verify that required count and array parameters are not 0 or NULL. If
 * the array contains 1 or more structures, verify that each structure's
 * sType field is set to the correct VkStructureType value.
 *
 * @param report_data debug_report_data object for routing validation messages.
 * @param apiName Name of API call being validated.
 * @param countName Name of count parameter.
 * @param arrayName Name of array parameter.
 * @param sTypeName Name of expected VkStructureType value.
 * @param count Number of elements in the array.
 * @param array Array to validate.
 * @param sType VkStructureType for structure validation.
 * @param countRequired The 'count' parameter may not be 0 when true.
 * @param arrayRequired The 'array' parameter may not be NULL when true.
 * @return Boolean value indicating that the call should be skipped.
 */
template <typename T>
VkBool32 validate_struct_type_array(debug_report_data *report_data, const char *apiName, const char *countName,
                                    const char *arrayName, const char *sTypeName, uint32_t count, const T *array,
                                    VkStructureType sType, VkBool32 countRequired, VkBool32 arrayRequired) {
    VkBool32 skipCall = VK_FALSE;

    if ((count == 0) || (array == NULL)) {
        // Count parameters not tagged as optional cannot be 0
        if ((count == 0) && (countRequired == VK_TRUE)) {
            skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                                ParameterValidationName, "%s: parameter %s must be greater than 0", apiName, countName);
        }

        // Array parameters not tagged as optional cannot be NULL,
        // unless the count is 0
        if ((array == NULL) && (arrayRequired == VK_TRUE) && (count != 0)) {
            skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                                ParameterValidationName, "%s: required parameter %s specified as NULL", apiName, arrayName);
        }
    } else {
        // Verify that all structs in the array have the correct type
        for (uint32_t i = 0; i < count; ++i) {
            if (array[i].sType != sType) {
                skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                                    ParameterValidationName, "%s: parameter %s[%d].sType must be %s", apiName, arrayName, i,
                                    sTypeName);
            }
        }
    }

    return skipCall;
}

/**
 * Validate string array count and content.
 *
 * Verify that required count and array parameters are not 0 or NULL. If the
 * count parameter is not optional, verify that it is not 0. If the array
 * parameter is NULL, and it is not optional, verify that count is 0. If the
 * array parameter is not NULL, verify that none of the strings are NULL.
 *
 * @param report_data debug_report_data object for routing validation messages.
 * @param apiName Name of API call being validated.
 * @param countName Name of count parameter.
 * @param arrayName Name of array parameter.
 * @param count Number of strings in the array.
 * @param array Array of strings to validate.
 * @param countRequired The 'count' parameter may not be 0 when true.
 * @param arrayRequired The 'array' parameter may not be NULL when true.
 * @return Boolean value indicating that the call should be skipped.
 */
static VkBool32 validate_string_array(debug_report_data *report_data, const char *apiName, const char *countName,
                                      const char *arrayName, uint32_t count, const char *const *array, VkBool32 countRequired,
                                      VkBool32 arrayRequired) {
    VkBool32 skipCall = VK_FALSE;

    if ((count == 0) || (array == NULL)) {
        // Count parameters not tagged as optional cannot be 0
        if ((count == 0) && (countRequired == VK_TRUE)) {
            skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                                ParameterValidationName, "%s: parameter %s must be greater than 0", apiName, countName);
        }

        // Array parameters not tagged as optional cannot be NULL,
        // unless the count is 0
        if ((array == NULL) && (arrayRequired == VK_TRUE) && (count != 0)) {
            skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                                ParameterValidationName, "%s: required parameter %s specified as NULL", apiName, arrayName);
        }
    } else {
        // Verify that the strings in the array are not NULL
        for (uint32_t i = 0; i < count; ++i) {
            if (array[i] == NULL) {
                skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                                    ParameterValidationName, "%s: required parameter %s[%d] specified as NULL", apiName,
                                    arrayName, i);
            }
        }
    }

    return skipCall;
}

/**
 * Validate a structure's pNext member.
 *
 * Verify that the specified pNext value points to the head of a list of
 * allowed extension structures. If no extension structures are allowed,
 * verify that pNext is null.
 *
 * @param report_data debug_report_data object for routing validation messages.
 * @param apiName Name of API call being validated.
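The content half of `validate_string_array` can be shown standalone: once the count/NULL bookkeeping passes, a non-empty string array may not contain NULL entries. A minimal sketch (`string_array_has_null` is a hypothetical name, and it returns the skip decision instead of logging):

```cpp
#include <cstddef>

// Sketch of the per-element check in validate_string_array: any NULL entry
// in a non-empty string array is a validation failure.
bool string_array_has_null(const char *const *array, std::size_t count) {
    for (std::size_t i = 0; i < count; ++i) {
        if (array[i] == nullptr) {
            return true;  // the layer would log "required parameter %s[%d] specified as NULL"
        }
    }
    return false;
}

// Sample arrays for exercising the check.
static const char *const kAllValid[] = {"VK_LAYER_LUNARG_swapchain", "VK_LAYER_LUNARG_parameter_validation"};
static const char *const kWithNull[] = {"VK_LAYER_LUNARG_swapchain", nullptr};
```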
 * @param parameterName Name of parameter being validated.
 * @param allowedStructNames Names of allowed structs.
 * @param next Pointer to validate.
 * @param allowedTypeCount Total number of allowed structure types.
 * @param allowedTypes Array of structure types allowed for pNext.
 * @return Boolean value indicating that the call should be skipped.
 */
static VkBool32 validate_struct_pnext(debug_report_data *report_data, const char *apiName, const char *parameterName,
                                      const char *allowedStructNames, const void *next, size_t allowedTypeCount,
                                      const VkStructureType *allowedTypes) {
    VkBool32 skipCall = VK_FALSE;

    if (next != NULL) {
        if (allowedTypeCount == 0) {
            skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                                ParameterValidationName, "%s: value of %s must be NULL", apiName, parameterName);
        } else {
            const VkStructureType *start = allowedTypes;
            const VkStructureType *end = allowedTypes + allowedTypeCount;
            const GenericHeader *current = reinterpret_cast<const GenericHeader *>(next);

            while (current != NULL) {
                if (std::find(start, end, current->sType) == end) {
                    std::string typeName = string_VkStructureType(current->sType);

                    if (typeName == UnsupportedStructureTypeString) {
                        skipCall |= log_msg(
                            report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                            ParameterValidationName,
                            "%s: %s chain includes a structure with unexpected VkStructureType (%d); Allowed structures are [%s]",
                            apiName, parameterName, current->sType, allowedStructNames);
                    } else {
                        skipCall |= log_msg(
                            report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                            ParameterValidationName,
                            "%s: %s chain includes a structure with unexpected VkStructureType %s; Allowed structures are [%s]",
                            apiName, parameterName, typeName.c_str(), allowedStructNames);
                    }
                }

                current = reinterpret_cast<const GenericHeader *>(current->pNext);
            }
        }
    }

    return skipCall;
}

/**
 * Validate a VkBool32 value.
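The pNext walk above relies on every extension struct beginning with an sType and a pNext pointer, which is what lets the layer cast each node to `GenericHeader`. A minimal standalone model of that walk, with hypothetical names (`Header`, `chain_is_allowed`) and plain `int` standing in for `VkStructureType`:

```cpp
#include <algorithm>
#include <cstddef>

// Minimal model of a pNext chain node: sType first, then the pNext link,
// mirroring the GenericHeader layout the layer casts to.
struct Header {
    int sType;
    const void *pNext;
};

// Walk the chain and test every node's sType against the allowed list with
// std::find, as validate_struct_pnext does. With an empty allowed list any
// non-NULL chain fails, matching the rule that pNext must be NULL when no
// extension structures are permitted.
bool chain_is_allowed(const void *next, const int *allowed, std::size_t count) {
    for (const Header *cur = static_cast<const Header *>(next); cur != nullptr;
         cur = static_cast<const Header *>(cur->pNext)) {
        if (std::find(allowed, allowed + count, cur->sType) == allowed + count) {
            return false;  // unexpected structure type in the chain
        }
    }
    return true;
}

// A two-node chain plus a rogue node for exercising the walk.
static const int kAllowed[] = {100, 200};
static const Header kTail = {200, nullptr};
static const Header kHead = {100, &kTail};
static const Header kRogue = {999, nullptr};
```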
 *
 * Generate a warning if a VkBool32 value is neither VK_TRUE nor VK_FALSE.
 *
 * @param report_data debug_report_data object for routing validation messages.
 * @param apiName Name of API call being validated.
 * @param parameterName Name of parameter being validated.
 * @param value Boolean value to validate.
 * @return Boolean value indicating that the call should be skipped.
 */
static VkBool32 validate_bool32(debug_report_data *report_data, const char *apiName, const char *parameterName, VkBool32 value) {
    VkBool32 skipCall = VK_FALSE;

    if ((value != VK_TRUE) && (value != VK_FALSE)) {
        skipCall |= log_msg(report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                            ParameterValidationName, "%s: value of %s (%d) is neither VK_TRUE nor VK_FALSE", apiName,
                            parameterName, value);
    }

    return skipCall;
}

/**
 * Validate a Vulkan enumeration value.
 *
 * Generate a warning if an enumeration token value does not fall within the core enumeration
 * begin and end token values, and was not added to the enumeration by an extension. Extension
 * provided enumerations use the equation specified in Appendix C.10 of the Vulkan specification,
 * with 1,000,000,000 as the base token value.
 *
 * @note This function does not expect to process enumerations defining bitmask flag bits.
 *
 * @param report_data debug_report_data object for routing validation messages.
 * @param apiName Name of API call being validated.
 * @param parameterName Name of parameter being validated.
 * @param enumName Name of the enumeration being validated.
 * @param begin The begin range value for the enumeration.
 * @param end The end range value for the enumeration.
 * @param value Enumeration value to validate.
 * @return Boolean value indicating that the call should be skipped.
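The `validate_bool32` check exists because `VkBool32` is a 32-bit integer: any value other than `VK_FALSE` (0) and `VK_TRUE` (1) is representable but suspect. A one-function sketch of that condition (`bool32_is_suspect` is a hypothetical name):

```cpp
#include <cstdint>

// Sketch of the validate_bool32 warning condition: only 0 (VK_FALSE) and
// 1 (VK_TRUE) are considered well-formed VkBool32 values.
bool bool32_is_suspect(std::uint32_t value) {
    return (value != 0u) && (value != 1u);
}
```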
 */
template <typename T>
VkBool32 validate_ranged_enum(debug_report_data *report_data, const char *apiName, const char *parameterName,
                              const char *enumName, T begin, T end, T value) {
    VkBool32 skipCall = VK_FALSE;

    if (((value < begin) || (value > end)) && !is_extension_added_token(value)) {
        skipCall |= log_msg(report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                            ParameterValidationName,
                            "%s: value of %s (%d) does not fall within the begin..end range of the core %s "
                            "enumeration tokens and is not an extension added token",
                            apiName, parameterName, value, enumName);
    }

    return skipCall;
}

/**
 * Validate an array of Vulkan enumeration values.
 *
 * Process all enumeration token values in the specified array and generate a warning if a value
 * does not fall within the core enumeration begin and end token values, and was not added to
 * the enumeration by an extension. Extension provided enumerations use the equation specified
 * in Appendix C.10 of the Vulkan specification, with 1,000,000,000 as the base token value.
 *
 * @note This function does not expect to process enumerations defining bitmask flag bits.
 *
 * @param report_data debug_report_data object for routing validation messages.
 * @param apiName Name of API call being validated.
 * @param parameterName Name of parameter being validated.
 * @param enumName Name of the enumeration being validated.
 * @param begin The begin range value for the enumeration.
 * @param end The end range value for the enumeration.
 * @param count Number of enumeration values in the array.
 * @param pValues Array of enumeration values to validate.
 * @return Boolean value indicating that the call should be skipped.
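The range test in `validate_ranged_enum` can be reproduced without the Vulkan headers. The sketch below assumes `is_extension_added_token` simply compares against the 1,000,000,000 base value named in the comments; the helper's real definition is outside this excerpt, so that reading is an assumption, and all names here are hypothetical:

```cpp
#include <cstdint>

// Extension-provided enum tokens start at this base value, per the equation
// the doc comments cite (Appendix C.10 of the Vulkan specification).
constexpr std::int32_t kExtensionTokenBase = 1000000000;

// Assumed reading of is_extension_added_token: anything at or past the base
// value cannot be a core token.
bool is_extension_token_sketch(std::int32_t value) {
    return value >= kExtensionTokenBase;
}

// Mirror of the condition in validate_ranged_enum: warn when the value is
// outside the core begin..end range and is not extension-provided.
bool ranged_enum_warns(std::int32_t begin, std::int32_t end, std::int32_t value) {
    return ((value < begin) || (value > end)) && !is_extension_token_sketch(value);
}
```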
 */
template <typename T>
static VkBool32 validate_ranged_enum_array(debug_report_data *report_data, const char *apiName, const char *parameterName,
                                           const char *enumName, T begin, T end, uint32_t count, const T *pValues) {
    VkBool32 skipCall = VK_FALSE;

    for (uint32_t i = 0; i < count; ++i) {
        if (((pValues[i] < begin) || (pValues[i] > end)) && !is_extension_added_token(pValues[i])) {
            skipCall |= log_msg(report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                                ParameterValidationName,
                                "%s: value of %s[%d] (%d) does not fall within the begin..end range of the core %s "
                                "enumeration tokens and is not an extension added token",
                                apiName, parameterName, i, pValues[i], enumName);
        }
    }

    return skipCall;
}

/**
 * Get VkResult code description.
 *
 * Returns a string describing the specified VkResult code. The description is based on the language in the Vulkan API
 * specification.
 *
 * @param result VkResult code to process.
 * @return String describing the specified VkResult code.
 */
static std::string get_result_description(VkResult result) {
    // clang-format off
    switch (result) {
        case VK_SUCCESS:                        return "a command completed successfully";
        case VK_NOT_READY:                      return "a fence or query has not yet completed";
        case VK_TIMEOUT:                        return "a wait operation has not completed in the specified time";
        case VK_EVENT_SET:                      return "an event is signaled";
        case VK_EVENT_RESET:                    return "an event is unsignaled";
        case VK_INCOMPLETE:                     return "a return array was too small for the result";
        case VK_ERROR_OUT_OF_HOST_MEMORY:       return "a host memory allocation has failed";
        case VK_ERROR_OUT_OF_DEVICE_MEMORY:     return "a device memory allocation has failed";
        case VK_ERROR_INITIALIZATION_FAILED:    return "initialization of an object has failed";
        case VK_ERROR_DEVICE_LOST:              return "the logical device has been lost";
        case VK_ERROR_MEMORY_MAP_FAILED:        return "mapping of a memory object has failed";
        case VK_ERROR_LAYER_NOT_PRESENT:        return "the specified layer does not exist";
        case VK_ERROR_EXTENSION_NOT_PRESENT:    return "the specified extension does not exist";
        case VK_ERROR_FEATURE_NOT_PRESENT:      return "the requested feature is not available on this device";
        case VK_ERROR_INCOMPATIBLE_DRIVER:      return "a Vulkan driver could not be found";
        case VK_ERROR_TOO_MANY_OBJECTS:         return "too many objects of the type have already been created";
        case VK_ERROR_FORMAT_NOT_SUPPORTED:     return "the requested format is not supported on this device";
        case VK_ERROR_SURFACE_LOST_KHR:         return "a surface is no longer available";
        case VK_ERROR_NATIVE_WINDOW_IN_USE_KHR:
            return "the requested window is already connected to another "
                   "VkSurfaceKHR object, or some other non-Vulkan surface object";
        case VK_SUBOPTIMAL_KHR:
            return "an image became available, and the swapchain no longer "
                   "matches the surface properties exactly, but can still be used to "
                   "present to the surface successfully.";
        case VK_ERROR_OUT_OF_DATE_KHR:
            return "a surface has changed in such a way that it is no "
                   "longer compatible with the swapchain";
        case VK_ERROR_INCOMPATIBLE_DISPLAY_KHR:
            return "the display used by a swapchain does not use the same "
                   "presentable image layout, or is incompatible in a way that prevents "
                   "sharing an image";
        case VK_ERROR_VALIDATION_FAILED_EXT:    return "API validation has detected an invalid use of the API";
        case VK_ERROR_INVALID_SHADER_NV:        return "one or more shaders failed to compile or link";
        default:                                return "an error has occurred";
    }
    // clang-format on
}

/**
 * Validate return code.
 *
 * Print a message describing the reason for failure when an error code is returned.
 *
 * @param report_data debug_report_data object for routing validation messages.
 * @param apiName Name of API call being validated.
 * @param result VkResult value to validate.
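`validate_result` below can key off `result < 0` because of the VkResult numbering convention: `VK_SUCCESS` is 0, status codes are positive, and all `VK_ERROR_*` codes are negative. A standalone sketch (`result_is_error` is a hypothetical name; the literal values are taken from vulkan.h):

```cpp
// VkResult sign convention: success/status codes are >= 0, error codes
// are < 0, so one sign test identifies failures.
bool result_is_error(int result) {
    return result < 0;
}

// A few literal VkResult values from vulkan.h, for illustration.
constexpr int kVkSuccess = 0;                // VK_SUCCESS
constexpr int kVkNotReady = 1;               // VK_NOT_READY
constexpr int kVkErrorOutOfHostMemory = -1;  // VK_ERROR_OUT_OF_HOST_MEMORY
constexpr int kVkErrorDeviceLost = -4;       // VK_ERROR_DEVICE_LOST
```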
 */
static void validate_result(debug_report_data *report_data, const char *apiName, VkResult result) {
    if (result < 0) {
        std::string resultName = string_VkResult(result);

        if (resultName == UnsupportedResultString) {
            // Unrecognized result code
            log_msg(report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                    ParameterValidationName, "%s: returned a result code indicating that an error has occurred", apiName);
        } else {
            std::string resultDesc = get_result_description(result);
            log_msg(report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (VkDebugReportObjectTypeEXT)0, 0, __LINE__, 1,
                    ParameterValidationName, "%s: returned %s, indicating that %s", apiName, resultName.c_str(),
                    resultDesc.c_str());
        }
    }
}

#endif // PARAMETER_VALIDATION_UTILS_H
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/swapchain.cpp000066400000000000000000003422461270147354000247130ustar00rootroot00000000000000/* Copyright (c) 2015-2016 The Khronos Group Inc.
 * Copyright (c) 2015-2016 Valve Corporation
 * Copyright (c) 2015-2016 LunarG, Inc.
 * Copyright (C) 2015-2016 Google Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and/or associated documentation files (the "Materials"), to
 * deal in the Materials without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Materials, and to permit persons to whom the Materials
 * are furnished to do so, subject to the following conditions:
 *
 * The above copyright notice(s) and this permission notice shall be included
 * in all copies or substantial portions of the Materials.
 *
 * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 *
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
 * USE OR OTHER DEALINGS IN THE MATERIALS
 *
 * Author: Ian Elliott
 * Author: Ian Elliott
 */

#include <assert.h>
#include <string.h>
#include <unordered_map>
#include <vector>

#include "swapchain.h"
#include "vk_layer_extension_utils.h"
#include "vk_enum_string_helper.h"
#include "vk_layer_utils.h"

static int globalLockInitialized = 0;
static loader_platform_thread_mutex globalLock;

// The following is for logging error messages:
static std::unordered_map<void *, layer_data *> layer_data_map;
template layer_data *get_my_data_ptr<layer_data>(void *data_key, std::unordered_map<void *, layer_data *> &data_map);

static const VkExtensionProperties instance_extensions[] = {
    {VK_EXT_DEBUG_REPORT_EXTENSION_NAME, VK_EXT_DEBUG_REPORT_SPEC_VERSION}};

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkEnumerateInstanceExtensionProperties(const char *pLayerName, uint32_t *pCount, VkExtensionProperties *pProperties) {
    return util_GetExtensionProperties(ARRAY_SIZE(instance_extensions), instance_extensions, pCount, pProperties);
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice,
                                                                                   const char *pLayerName, uint32_t *pCount,
                                                                                   VkExtensionProperties *pProperties) {
    if (pLayerName == NULL) {
        dispatch_key key = get_dispatch_key(physicalDevice);
        layer_data *my_data = get_my_data_ptr(key, layer_data_map);
        return my_data->instance_dispatch_table->EnumerateDeviceExtensionProperties(physicalDevice, NULL, pCount, pProperties);
    } else {
        return util_GetExtensionProperties(0, nullptr, pCount, pProperties);
    }
}

static const VkLayerProperties swapchain_layers[] = {{
    "VK_LAYER_LUNARG_swapchain", VK_LAYER_API_VERSION, 1, "LunarG Validation Layer",
}};

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceLayerProperties(uint32_t *pCount,
                                                                                 VkLayerProperties *pProperties) {
    return
util_GetLayerProperties(ARRAY_SIZE(swapchain_layers), swapchain_layers, pCount, pProperties); } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice, uint32_t *pCount, VkLayerProperties *pProperties) { return util_GetLayerProperties(ARRAY_SIZE(swapchain_layers), swapchain_layers, pCount, pProperties); } static void createDeviceRegisterExtensions(VkPhysicalDevice physicalDevice, const VkDeviceCreateInfo *pCreateInfo, VkDevice device) { uint32_t i; layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map); layer_data *my_instance_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); VkLayerDispatchTable *pDisp = my_device_data->device_dispatch_table; PFN_vkGetDeviceProcAddr gpa = pDisp->GetDeviceProcAddr; pDisp->CreateSwapchainKHR = (PFN_vkCreateSwapchainKHR)gpa(device, "vkCreateSwapchainKHR"); pDisp->DestroySwapchainKHR = (PFN_vkDestroySwapchainKHR)gpa(device, "vkDestroySwapchainKHR"); pDisp->GetSwapchainImagesKHR = (PFN_vkGetSwapchainImagesKHR)gpa(device, "vkGetSwapchainImagesKHR"); pDisp->AcquireNextImageKHR = (PFN_vkAcquireNextImageKHR)gpa(device, "vkAcquireNextImageKHR"); pDisp->QueuePresentKHR = (PFN_vkQueuePresentKHR)gpa(device, "vkQueuePresentKHR"); pDisp->GetDeviceQueue = (PFN_vkGetDeviceQueue)gpa(device, "vkGetDeviceQueue"); SwpPhysicalDevice *pPhysicalDevice = &my_instance_data->physicalDeviceMap[physicalDevice]; if (pPhysicalDevice) { my_device_data->deviceMap[device].pPhysicalDevice = pPhysicalDevice; pPhysicalDevice->pDevice = &my_device_data->deviceMap[device]; } else { // TBD: Should we leave error in (since Swapchain really needs this // link)? 
log_msg(my_instance_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, (uint64_t)physicalDevice, __LINE__, SWAPCHAIN_INVALID_HANDLE, "Swapchain", "vkCreateDevice() called with a non-valid VkPhysicalDevice."); } my_device_data->deviceMap[device].device = device; my_device_data->deviceMap[device].swapchainExtensionEnabled = false; // Record whether the WSI device extension was enabled for this VkDevice. // No need to check if the extension was advertised by // vkEnumerateDeviceExtensionProperties(), since the loader handles that. for (i = 0; i < pCreateInfo->enabledExtensionCount; i++) { if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_SWAPCHAIN_EXTENSION_NAME) == 0) { my_device_data->deviceMap[device].swapchainExtensionEnabled = true; } } } static void createInstanceRegisterExtensions(const VkInstanceCreateInfo *pCreateInfo, VkInstance instance) { uint32_t i; layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map); VkLayerInstanceDispatchTable *pDisp = my_data->instance_dispatch_table; PFN_vkGetInstanceProcAddr gpa = pDisp->GetInstanceProcAddr; #ifdef VK_USE_PLATFORM_ANDROID_KHR pDisp->CreateAndroidSurfaceKHR = (PFN_vkCreateAndroidSurfaceKHR)gpa(instance, "vkCreateAndroidSurfaceKHR"); #endif // VK_USE_PLATFORM_ANDROID_KHR #ifdef VK_USE_PLATFORM_MIR_KHR pDisp->CreateMirSurfaceKHR = (PFN_vkCreateMirSurfaceKHR)gpa(instance, "vkCreateMirSurfaceKHR"); pDisp->GetPhysicalDeviceMirPresentationSupportKHR = (PFN_vkGetPhysicalDeviceMirPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceMirPresentationSupportKHR"); #endif // VK_USE_PLATFORM_MIR_KHR #ifdef VK_USE_PLATFORM_WAYLAND_KHR pDisp->CreateWaylandSurfaceKHR = (PFN_vkCreateWaylandSurfaceKHR)gpa(instance, "vkCreateWaylandSurfaceKHR"); pDisp->GetPhysicalDeviceWaylandPresentationSupportKHR = (PFN_vkGetPhysicalDeviceWaylandPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceWaylandPresentationSupportKHR"); #endif // 
VK_USE_PLATFORM_WAYLAND_KHR #ifdef VK_USE_PLATFORM_WIN32_KHR pDisp->CreateWin32SurfaceKHR = (PFN_vkCreateWin32SurfaceKHR)gpa(instance, "vkCreateWin32SurfaceKHR"); pDisp->GetPhysicalDeviceWin32PresentationSupportKHR = (PFN_vkGetPhysicalDeviceWin32PresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceWin32PresentationSupportKHR"); #endif // VK_USE_PLATFORM_WIN32_KHR #ifdef VK_USE_PLATFORM_XCB_KHR pDisp->CreateXcbSurfaceKHR = (PFN_vkCreateXcbSurfaceKHR)gpa(instance, "vkCreateXcbSurfaceKHR"); pDisp->GetPhysicalDeviceXcbPresentationSupportKHR = (PFN_vkGetPhysicalDeviceXcbPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceXcbPresentationSupportKHR"); #endif // VK_USE_PLATFORM_XCB_KHR #ifdef VK_USE_PLATFORM_XLIB_KHR pDisp->CreateXlibSurfaceKHR = (PFN_vkCreateXlibSurfaceKHR)gpa(instance, "vkCreateXlibSurfaceKHR"); pDisp->GetPhysicalDeviceXlibPresentationSupportKHR = (PFN_vkGetPhysicalDeviceXlibPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceXlibPresentationSupportKHR"); #endif // VK_USE_PLATFORM_XLIB_KHR pDisp->DestroySurfaceKHR = (PFN_vkDestroySurfaceKHR)gpa(instance, "vkDestroySurfaceKHR"); pDisp->GetPhysicalDeviceSurfaceSupportKHR = (PFN_vkGetPhysicalDeviceSurfaceSupportKHR)gpa(instance, "vkGetPhysicalDeviceSurfaceSupportKHR"); pDisp->GetPhysicalDeviceSurfaceCapabilitiesKHR = (PFN_vkGetPhysicalDeviceSurfaceCapabilitiesKHR)gpa(instance, "vkGetPhysicalDeviceSurfaceCapabilitiesKHR"); pDisp->GetPhysicalDeviceSurfaceFormatsKHR = (PFN_vkGetPhysicalDeviceSurfaceFormatsKHR)gpa(instance, "vkGetPhysicalDeviceSurfaceFormatsKHR"); pDisp->GetPhysicalDeviceSurfacePresentModesKHR = (PFN_vkGetPhysicalDeviceSurfacePresentModesKHR)gpa(instance, "vkGetPhysicalDeviceSurfacePresentModesKHR"); // Remember this instance, and whether the VK_KHR_surface extension // was enabled for it: my_data->instanceMap[instance].instance = instance; my_data->instanceMap[instance].surfaceExtensionEnabled = false; #ifdef VK_USE_PLATFORM_ANDROID_KHR 
my_data->instanceMap[instance].androidSurfaceExtensionEnabled = false; #endif // VK_USE_PLATFORM_ANDROID_KHR #ifdef VK_USE_PLATFORM_MIR_KHR my_data->instanceMap[instance].mirSurfaceExtensionEnabled = false; #endif // VK_USE_PLATFORM_MIR_KHR #ifdef VK_USE_PLATFORM_WAYLAND_KHR my_data->instanceMap[instance].waylandSurfaceExtensionEnabled = false; #endif // VK_USE_PLATFORM_WAYLAND_KHR #ifdef VK_USE_PLATFORM_WIN32_KHR my_data->instanceMap[instance].win32SurfaceExtensionEnabled = false; #endif // VK_USE_PLATFORM_WIN32_KHR #ifdef VK_USE_PLATFORM_XCB_KHR my_data->instanceMap[instance].xcbSurfaceExtensionEnabled = false; #endif // VK_USE_PLATFORM_XCB_KHR #ifdef VK_USE_PLATFORM_XLIB_KHR my_data->instanceMap[instance].xlibSurfaceExtensionEnabled = false; #endif // VK_USE_PLATFORM_XLIB_KHR // Record whether the WSI instance extension was enabled for this // VkInstance. No need to check if the extension was advertised by // vkEnumerateInstanceExtensionProperties(), since the loader handles that. for (i = 0; i < pCreateInfo->enabledExtensionCount; i++) { if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_SURFACE_EXTENSION_NAME) == 0) { my_data->instanceMap[instance].surfaceExtensionEnabled = true; } #ifdef VK_USE_PLATFORM_ANDROID_KHR if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_ANDROID_SURFACE_EXTENSION_NAME) == 0) { my_data->instanceMap[instance].androidSurfaceExtensionEnabled = true; } #endif // VK_USE_PLATFORM_ANDROID_KHR #ifdef VK_USE_PLATFORM_MIR_KHR if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_MIR_SURFACE_EXTENSION_NAME) == 0) { my_data->instanceMap[instance].mirSurfaceExtensionEnabled = true; } #endif // VK_USE_PLATFORM_MIR_KHR #ifdef VK_USE_PLATFORM_WAYLAND_KHR if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_WAYLAND_SURFACE_EXTENSION_NAME) == 0) { my_data->instanceMap[instance].waylandSurfaceExtensionEnabled = true; } #endif // VK_USE_PLATFORM_WAYLAND_KHR #ifdef VK_USE_PLATFORM_WIN32_KHR if 
(strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_WIN32_SURFACE_EXTENSION_NAME) == 0) { my_data->instanceMap[instance].win32SurfaceExtensionEnabled = true; } #endif // VK_USE_PLATFORM_WIN32_KHR #ifdef VK_USE_PLATFORM_XCB_KHR if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_XCB_SURFACE_EXTENSION_NAME) == 0) { my_data->instanceMap[instance].xcbSurfaceExtensionEnabled = true; } #endif // VK_USE_PLATFORM_XCB_KHR #ifdef VK_USE_PLATFORM_XLIB_KHR if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_XLIB_SURFACE_EXTENSION_NAME) == 0) { my_data->instanceMap[instance].xlibSurfaceExtensionEnabled = true; } #endif // VK_USE_PLATFORM_XLIB_KHR } } #include "vk_dispatch_table_helper.h" static void init_swapchain(layer_data *my_data, const VkAllocationCallbacks *pAllocator) { layer_debug_actions(my_data->report_data, my_data->logging_callback, pAllocator, "lunarg_swapchain"); if (!globalLockInitialized) { loader_platform_thread_create_mutex(&globalLock); globalLockInitialized = 1; } } static const char *surfaceTransformStr(VkSurfaceTransformFlagBitsKHR value) { // Return a string corresponding to the value: return string_VkSurfaceTransformFlagBitsKHR(value); } static const char *surfaceCompositeAlphaStr(VkCompositeAlphaFlagBitsKHR value) { // Return a string corresponding to the value: return string_VkCompositeAlphaFlagBitsKHR(value); } static const char *presentModeStr(VkPresentModeKHR value) { // Return a string corresponding to the value: return string_VkPresentModeKHR(value); } static const char *sharingModeStr(VkSharingMode value) { // Return a string corresponding to the value: return string_VkSharingMode(value); } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance(const VkInstanceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkInstance *pInstance) { VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO); assert(chain_info->u.pLayerInfo); PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = 
chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr; PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance)fpGetInstanceProcAddr(NULL, "vkCreateInstance"); if (fpCreateInstance == NULL) { return VK_ERROR_INITIALIZATION_FAILED; } // Advance the link info for the next element on the chain chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext; VkResult result = fpCreateInstance(pCreateInfo, pAllocator, pInstance); if (result != VK_SUCCESS) { return result; } layer_data *my_data = get_my_data_ptr(get_dispatch_key(*pInstance), layer_data_map); my_data->instance_dispatch_table = new VkLayerInstanceDispatchTable; layer_init_instance_dispatch_table(*pInstance, my_data->instance_dispatch_table, fpGetInstanceProcAddr); my_data->report_data = debug_report_create_instance(my_data->instance_dispatch_table, *pInstance, pCreateInfo->enabledExtensionCount, pCreateInfo->ppEnabledExtensionNames); // Call the following function after my_data is initialized: createInstanceRegisterExtensions(pCreateInfo, *pInstance); init_swapchain(my_data, pAllocator); return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance, const VkAllocationCallbacks *pAllocator) { dispatch_key key = get_dispatch_key(instance); layer_data *my_data = get_my_data_ptr(key, layer_data_map); SwpInstance *pInstance = &(my_data->instanceMap[instance]); // Call down the call chain: my_data->instance_dispatch_table->DestroyInstance(instance, pAllocator); loader_platform_thread_lock_mutex(&globalLock); // Do additional internal cleanup: if (pInstance) { // Delete all of the SwpPhysicalDevice's, SwpSurface's, and the // SwpInstance associated with this instance: for (auto it = pInstance->physicalDevices.begin(); it != pInstance->physicalDevices.end(); it++) { // Free memory that was allocated for/by this SwpPhysicalDevice: SwpPhysicalDevice *pPhysicalDevice = it->second; if (pPhysicalDevice) { if (pPhysicalDevice->pDevice) { 
LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, instance, "VkInstance", SWAPCHAIN_DEL_OBJECT_BEFORE_CHILDREN, "%s() called before all of its associated " "VkDevices were destroyed.", __FUNCTION__); } free(pPhysicalDevice->pSurfaceFormats); free(pPhysicalDevice->pPresentModes); } // Erase the SwpPhysicalDevice's from the my_data->physicalDeviceMap (which // are simply pointed to by the SwpInstance): my_data->physicalDeviceMap.erase(it->second->physicalDevice); } for (auto it = pInstance->surfaces.begin(); it != pInstance->surfaces.end(); it++) { // Free memory that was allocated for/by this SwpPhysicalDevice: SwpSurface *pSurface = it->second; if (pSurface) { LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, instance, "VkInstance", SWAPCHAIN_DEL_OBJECT_BEFORE_CHILDREN, "%s() called before all of its associated " "VkSurfaceKHRs were destroyed.", __FUNCTION__); } } my_data->instanceMap.erase(instance); } // Clean up logging callback, if any while (my_data->logging_callback.size() > 0) { VkDebugReportCallbackEXT callback = my_data->logging_callback.back(); layer_destroy_msg_callback(my_data->report_data, callback, pAllocator); my_data->logging_callback.pop_back(); } layer_debug_report_destroy_instance(my_data->report_data); delete my_data->instance_dispatch_table; layer_data_map.erase(key); if (layer_data_map.empty()) { // Release mutex when destroying last instance loader_platform_thread_unlock_mutex(&globalLock); loader_platform_thread_delete_mutex(&globalLock); globalLockInitialized = 0; } else { loader_platform_thread_unlock_mutex(&globalLock); } } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceQueueFamilyProperties(VkPhysicalDevice physicalDevice, uint32_t *pQueueFamilyPropertyCount, VkQueueFamilyProperties *pQueueFamilyProperties) { layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); // Call down the call chain: my_data->instance_dispatch_table->GetPhysicalDeviceQueueFamilyProperties(physicalDevice, 
pQueueFamilyPropertyCount, pQueueFamilyProperties); // Record the result of this query: loader_platform_thread_lock_mutex(&globalLock); SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice]; if (pPhysicalDevice && pQueueFamilyPropertyCount && !pQueueFamilyProperties) { pPhysicalDevice->gotQueueFamilyPropertyCount = true; pPhysicalDevice->numOfQueueFamilies = *pQueueFamilyPropertyCount; } loader_platform_thread_unlock_mutex(&globalLock); } #ifdef VK_USE_PLATFORM_ANDROID_KHR VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateAndroidSurfaceKHR(VkInstance instance, const VkAndroidSurfaceCreateInfoKHR *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkSurfaceKHR *pSurface) { VkResult result = VK_SUCCESS; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); SwpInstance *pInstance = &(my_data->instanceMap[instance]); // Validate that the platform extension was enabled: if (pInstance && !pInstance->androidSurfaceExtensionEnabled) { skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pInstance, "VkInstance", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED, "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__, VK_KHR_ANDROID_SURFACE_EXTENSION_NAME); } if (!pCreateInfo) { skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo"); } else { if (pCreateInfo->sType != VK_STRUCTURE_TYPE_ANDROID_SURFACE_CREATE_INFO_KHR) { skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo", "VK_STRUCTURE_TYPE_ANDROID_SURFACE_CREATE_INFO_KHR"); } if (pCreateInfo->pNext != NULL) { skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo"); } } if (VK_FALSE == skipCall) { // Call down the call chain: loader_platform_thread_unlock_mutex(&globalLock); result = 
my_data->instance_dispatch_table->CreateAndroidSurfaceKHR(instance, pCreateInfo, pAllocator, pSurface); loader_platform_thread_lock_mutex(&globalLock); // Obtain this pointer again after locking: pInstance = &(my_data->instanceMap[instance]); if ((result == VK_SUCCESS) && pInstance && pSurface) { // Record the VkSurfaceKHR returned by the ICD: my_data->surfaceMap[*pSurface].surface = *pSurface; my_data->surfaceMap[*pSurface].pInstance = pInstance; my_data->surfaceMap[*pSurface].usedAllocatorToCreate = (pAllocator != NULL); my_data->surfaceMap[*pSurface].numQueueFamilyIndexSupport = 0; my_data->surfaceMap[*pSurface].pQueueFamilyIndexSupport = NULL; // Point to the associated SwpInstance: pInstance->surfaces[*pSurface] = &my_data->surfaceMap[*pSurface]; } loader_platform_thread_unlock_mutex(&globalLock); return result; } loader_platform_thread_unlock_mutex(&globalLock); return VK_ERROR_VALIDATION_FAILED_EXT; } #endif // VK_USE_PLATFORM_ANDROID_KHR #ifdef VK_USE_PLATFORM_MIR_KHR VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateMirSurfaceKHR(VkInstance instance, const VkMirSurfaceCreateInfoKHR *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkSurfaceKHR *pSurface) { VkResult result = VK_SUCCESS; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); SwpInstance *pInstance = &(my_data->instanceMap[instance]); // Validate that the platform extension was enabled: if (pInstance && !pInstance->mirSurfaceExtensionEnabled) { skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pInstance, "VkInstance", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED, "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__, VK_KHR_MIR_SURFACE_EXTENSION_NAME); } if (!pCreateInfo) { skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo"); } else { if (pCreateInfo->sType != 
VK_STRUCTURE_TYPE_MIR_SURFACE_CREATE_INFO_KHR) { skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo", "VK_STRUCTURE_TYPE_MIR_SURFACE_CREATE_INFO_KHR"); } if (pCreateInfo->pNext != NULL) { skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo"); } } if (VK_FALSE == skipCall) { // Call down the call chain: loader_platform_thread_unlock_mutex(&globalLock); result = my_data->instance_dispatch_table->CreateMirSurfaceKHR(instance, pCreateInfo, pAllocator, pSurface); loader_platform_thread_lock_mutex(&globalLock); // Obtain this pointer again after locking: pInstance = &(my_data->instanceMap[instance]); if ((result == VK_SUCCESS) && pInstance && pSurface) { // Record the VkSurfaceKHR returned by the ICD: my_data->surfaceMap[*pSurface].surface = *pSurface; my_data->surfaceMap[*pSurface].pInstance = pInstance; my_data->surfaceMap[*pSurface].usedAllocatorToCreate = (pAllocator != NULL); my_data->surfaceMap[*pSurface].numQueueFamilyIndexSupport = 0; my_data->surfaceMap[*pSurface].pQueueFamilyIndexSupport = NULL; // Point to the associated SwpInstance: pInstance->surfaces[*pSurface] = &my_data->surfaceMap[*pSurface]; } loader_platform_thread_unlock_mutex(&globalLock); return result; } loader_platform_thread_unlock_mutex(&globalLock); return VK_ERROR_VALIDATION_FAILED_EXT; } VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceMirPresentationSupportKHR(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, MirConnection *connection) { VkBool32 result = VK_FALSE; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice]; // Validate that the platform extension was enabled: if (pPhysicalDevice && pPhysicalDevice->pInstance && !pPhysicalDevice->pInstance->mirSurfaceExtensionEnabled) { skipCall 
|= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pPhysicalDevice->pInstance, "VkInstance", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED, "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__, VK_KHR_MIR_SURFACE_EXTENSION_NAME); } if (pPhysicalDevice->gotQueueFamilyPropertyCount && (queueFamilyIndex >= pPhysicalDevice->numOfQueueFamilies)) { skipCall |= LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, pPhysicalDevice, "VkPhysicalDevice", queueFamilyIndex, pPhysicalDevice->numOfQueueFamilies); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) { // Call down the call chain: result = my_data->instance_dispatch_table->GetPhysicalDeviceMirPresentationSupportKHR(physicalDevice, queueFamilyIndex, connection); } return result; } #endif // VK_USE_PLATFORM_MIR_KHR #ifdef VK_USE_PLATFORM_WAYLAND_KHR VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateWaylandSurfaceKHR(VkInstance instance, const VkWaylandSurfaceCreateInfoKHR *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkSurfaceKHR *pSurface) { VkResult result = VK_SUCCESS; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); SwpInstance *pInstance = &(my_data->instanceMap[instance]); // Validate that the platform extension was enabled: if (pInstance && !pInstance->waylandSurfaceExtensionEnabled) { skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pInstance, "VkInstance", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED, "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__, VK_KHR_WAYLAND_SURFACE_EXTENSION_NAME); } if (!pCreateInfo) { skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo"); } else { if (pCreateInfo->sType != VK_STRUCTURE_TYPE_WAYLAND_SURFACE_CREATE_INFO_KHR) { skipCall |= 
LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo", "VK_STRUCTURE_TYPE_WAYLAND_SURFACE_CREATE_INFO_KHR"); } if (pCreateInfo->pNext != NULL) { skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo"); } } if (VK_FALSE == skipCall) { // Call down the call chain: loader_platform_thread_unlock_mutex(&globalLock); result = my_data->instance_dispatch_table->CreateWaylandSurfaceKHR(instance, pCreateInfo, pAllocator, pSurface); loader_platform_thread_lock_mutex(&globalLock); // Obtain this pointer again after locking: pInstance = &(my_data->instanceMap[instance]); if ((result == VK_SUCCESS) && pInstance && pSurface) { // Record the VkSurfaceKHR returned by the ICD: my_data->surfaceMap[*pSurface].surface = *pSurface; my_data->surfaceMap[*pSurface].pInstance = pInstance; my_data->surfaceMap[*pSurface].usedAllocatorToCreate = (pAllocator != NULL); my_data->surfaceMap[*pSurface].numQueueFamilyIndexSupport = 0; my_data->surfaceMap[*pSurface].pQueueFamilyIndexSupport = NULL; // Point to the associated SwpInstance: pInstance->surfaces[*pSurface] = &my_data->surfaceMap[*pSurface]; } loader_platform_thread_unlock_mutex(&globalLock); return result; } loader_platform_thread_unlock_mutex(&globalLock); return VK_ERROR_VALIDATION_FAILED_EXT; } VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceWaylandPresentationSupportKHR(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, struct wl_display *display) { VkBool32 result = VK_FALSE; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice]; // Validate that the platform extension was enabled: if (pPhysicalDevice && pPhysicalDevice->pInstance && !pPhysicalDevice->pInstance->waylandSurfaceExtensionEnabled) { skipCall |= 
LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pPhysicalDevice->pInstance, "VkInstance", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED, "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__, VK_KHR_WAYLAND_SURFACE_EXTENSION_NAME); } if (pPhysicalDevice->gotQueueFamilyPropertyCount && (queueFamilyIndex >= pPhysicalDevice->numOfQueueFamilies)) { skipCall |= LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, pPhysicalDevice, "VkPhysicalDevice", queueFamilyIndex, pPhysicalDevice->numOfQueueFamilies); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) { // Call down the call chain: result = my_data->instance_dispatch_table->GetPhysicalDeviceWaylandPresentationSupportKHR(physicalDevice, queueFamilyIndex, display); } return result; } #endif // VK_USE_PLATFORM_WAYLAND_KHR #ifdef VK_USE_PLATFORM_WIN32_KHR VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateWin32SurfaceKHR(VkInstance instance, const VkWin32SurfaceCreateInfoKHR *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkSurfaceKHR *pSurface) { VkResult result = VK_SUCCESS; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); SwpInstance *pInstance = &(my_data->instanceMap[instance]); // Validate that the platform extension was enabled: if (pInstance && !pInstance->win32SurfaceExtensionEnabled) { skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pInstance, "VkInstance", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED, "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__, VK_KHR_WIN32_SURFACE_EXTENSION_NAME); } if (!pCreateInfo) { skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo"); } else { if (pCreateInfo->sType != VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR) { skipCall |= 
LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo", "VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR"); } if (pCreateInfo->pNext != NULL) { skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo"); } } if (VK_FALSE == skipCall) { // Call down the call chain: loader_platform_thread_unlock_mutex(&globalLock); result = my_data->instance_dispatch_table->CreateWin32SurfaceKHR(instance, pCreateInfo, pAllocator, pSurface); loader_platform_thread_lock_mutex(&globalLock); // Obtain this pointer again after locking: pInstance = &(my_data->instanceMap[instance]); if ((result == VK_SUCCESS) && pInstance && pSurface) { // Record the VkSurfaceKHR returned by the ICD: my_data->surfaceMap[*pSurface].surface = *pSurface; my_data->surfaceMap[*pSurface].pInstance = pInstance; my_data->surfaceMap[*pSurface].usedAllocatorToCreate = (pAllocator != NULL); my_data->surfaceMap[*pSurface].numQueueFamilyIndexSupport = 0; my_data->surfaceMap[*pSurface].pQueueFamilyIndexSupport = NULL; // Point to the associated SwpInstance: pInstance->surfaces[*pSurface] = &my_data->surfaceMap[*pSurface]; } loader_platform_thread_unlock_mutex(&globalLock); return result; } loader_platform_thread_unlock_mutex(&globalLock); return VK_ERROR_VALIDATION_FAILED_EXT; } VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceWin32PresentationSupportKHR(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex) { VkBool32 result = VK_FALSE; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice]; // Validate that the platform extension was enabled: if (pPhysicalDevice && pPhysicalDevice->pInstance && !pPhysicalDevice->pInstance->win32SurfaceExtensionEnabled) { skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, 
pPhysicalDevice->pInstance, "VkInstance", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED, "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__, VK_KHR_WIN32_SURFACE_EXTENSION_NAME); } if (pPhysicalDevice->gotQueueFamilyPropertyCount && (queueFamilyIndex >= pPhysicalDevice->numOfQueueFamilies)) { skipCall |= LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, pPhysicalDevice, "VkPhysicalDevice", queueFamilyIndex, pPhysicalDevice->numOfQueueFamilies); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) { // Call down the call chain: result = my_data->instance_dispatch_table->GetPhysicalDeviceWin32PresentationSupportKHR(physicalDevice, queueFamilyIndex); } return result; } #endif // VK_USE_PLATFORM_WIN32_KHR #ifdef VK_USE_PLATFORM_XCB_KHR VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateXcbSurfaceKHR(VkInstance instance, const VkXcbSurfaceCreateInfoKHR *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkSurfaceKHR *pSurface) { VkResult result = VK_SUCCESS; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); SwpInstance *pInstance = &(my_data->instanceMap[instance]); // Validate that the platform extension was enabled: if (pInstance && !pInstance->xcbSurfaceExtensionEnabled) { skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pInstance, "VkInstance", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED, "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__, VK_KHR_XCB_SURFACE_EXTENSION_NAME); } if (!pCreateInfo) { skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo"); } else { if (pCreateInfo->sType != VK_STRUCTURE_TYPE_XCB_SURFACE_CREATE_INFO_KHR) { skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo", 
"VK_STRUCTURE_TYPE_XCB_SURFACE_CREATE_INFO_KHR"); } if (pCreateInfo->pNext != NULL) { skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo"); } } if (VK_FALSE == skipCall) { // Call down the call chain: loader_platform_thread_unlock_mutex(&globalLock); result = my_data->instance_dispatch_table->CreateXcbSurfaceKHR(instance, pCreateInfo, pAllocator, pSurface); loader_platform_thread_lock_mutex(&globalLock); // Obtain this pointer again after locking: pInstance = &(my_data->instanceMap[instance]); if ((result == VK_SUCCESS) && pInstance && pSurface) { // Record the VkSurfaceKHR returned by the ICD: my_data->surfaceMap[*pSurface].surface = *pSurface; my_data->surfaceMap[*pSurface].pInstance = pInstance; my_data->surfaceMap[*pSurface].usedAllocatorToCreate = (pAllocator != NULL); my_data->surfaceMap[*pSurface].numQueueFamilyIndexSupport = 0; my_data->surfaceMap[*pSurface].pQueueFamilyIndexSupport = NULL; // Point to the associated SwpInstance: pInstance->surfaces[*pSurface] = &my_data->surfaceMap[*pSurface]; } loader_platform_thread_unlock_mutex(&globalLock); return result; } loader_platform_thread_unlock_mutex(&globalLock); return VK_ERROR_VALIDATION_FAILED_EXT; } VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceXcbPresentationSupportKHR(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, xcb_connection_t *connection, xcb_visualid_t visual_id) { VkBool32 result = VK_FALSE; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice]; // Validate that the platform extension was enabled: if (pPhysicalDevice && pPhysicalDevice->pInstance && !pPhysicalDevice->pInstance->xcbSurfaceExtensionEnabled) { skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pPhysicalDevice->pInstance, "VkInstance", 
SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED, "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__, VK_KHR_XCB_SURFACE_EXTENSION_NAME); } if (pPhysicalDevice->gotQueueFamilyPropertyCount && (queueFamilyIndex >= pPhysicalDevice->numOfQueueFamilies)) { skipCall |= LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, pPhysicalDevice, "VkPhysicalDevice", queueFamilyIndex, pPhysicalDevice->numOfQueueFamilies); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) { // Call down the call chain: result = my_data->instance_dispatch_table->GetPhysicalDeviceXcbPresentationSupportKHR(physicalDevice, queueFamilyIndex, connection, visual_id); } return result; } #endif // VK_USE_PLATFORM_XCB_KHR #ifdef VK_USE_PLATFORM_XLIB_KHR VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateXlibSurfaceKHR(VkInstance instance, const VkXlibSurfaceCreateInfoKHR *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkSurfaceKHR *pSurface) { VkResult result = VK_SUCCESS; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); SwpInstance *pInstance = &(my_data->instanceMap[instance]); // Validate that the platform extension was enabled: if (pInstance && !pInstance->xlibSurfaceExtensionEnabled) { skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pInstance, "VkInstance", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED, "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__, VK_KHR_XLIB_SURFACE_EXTENSION_NAME); } if (!pCreateInfo) { skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo"); } else { if (pCreateInfo->sType != VK_STRUCTURE_TYPE_XLIB_SURFACE_CREATE_INFO_KHR) { skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo", "VK_STRUCTURE_TYPE_XLIB_SURFACE_CREATE_INFO_KHR"); } if 
(pCreateInfo->pNext != NULL) { skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo"); } } if (VK_FALSE == skipCall) { // Call down the call chain: loader_platform_thread_unlock_mutex(&globalLock); result = my_data->instance_dispatch_table->CreateXlibSurfaceKHR(instance, pCreateInfo, pAllocator, pSurface); loader_platform_thread_lock_mutex(&globalLock); // Obtain this pointer again after locking: pInstance = &(my_data->instanceMap[instance]); if ((result == VK_SUCCESS) && pInstance && pSurface) { // Record the VkSurfaceKHR returned by the ICD: my_data->surfaceMap[*pSurface].surface = *pSurface; my_data->surfaceMap[*pSurface].pInstance = pInstance; my_data->surfaceMap[*pSurface].usedAllocatorToCreate = (pAllocator != NULL); my_data->surfaceMap[*pSurface].numQueueFamilyIndexSupport = 0; my_data->surfaceMap[*pSurface].pQueueFamilyIndexSupport = NULL; // Point to the associated SwpInstance: pInstance->surfaces[*pSurface] = &my_data->surfaceMap[*pSurface]; } loader_platform_thread_unlock_mutex(&globalLock); return result; } loader_platform_thread_unlock_mutex(&globalLock); return VK_ERROR_VALIDATION_FAILED_EXT; } VK_LAYER_EXPORT VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceXlibPresentationSupportKHR(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, Display *dpy, VisualID visualID) { VkBool32 result = VK_FALSE; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice]; // Validate that the platform extension was enabled: if (pPhysicalDevice && pPhysicalDevice->pInstance && !pPhysicalDevice->pInstance->xlibSurfaceExtensionEnabled) { skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pPhysicalDevice->pInstance, "VkInstance", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED, "%s() called even though the %s extension was not enabled 
for this VkInstance.", __FUNCTION__, VK_KHR_XLIB_SURFACE_EXTENSION_NAME); } if (pPhysicalDevice->gotQueueFamilyPropertyCount && (queueFamilyIndex >= pPhysicalDevice->numOfQueueFamilies)) { skipCall |= LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, pPhysicalDevice, "VkPhysicalDevice", queueFamilyIndex, pPhysicalDevice->numOfQueueFamilies); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) { // Call down the call chain: result = my_data->instance_dispatch_table->GetPhysicalDeviceXlibPresentationSupportKHR(physicalDevice, queueFamilyIndex, dpy, visualID); } return result; } #endif // VK_USE_PLATFORM_XLIB_KHR VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroySurfaceKHR(VkInstance instance, VkSurfaceKHR surface, const VkAllocationCallbacks *pAllocator) { VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); SwpSurface *pSurface = &my_data->surfaceMap[surface]; // Regardless of skipCall value, do some internal cleanup: if (pSurface) { // Delete the SwpSurface associated with this surface: if (pSurface->pInstance) { pSurface->pInstance->surfaces.erase(surface); } if (!pSurface->swapchains.empty()) { LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, instance, "VkInstance", SWAPCHAIN_DEL_OBJECT_BEFORE_CHILDREN, "%s() called before all of its associated " "VkSwapchainKHRs were destroyed.", __FUNCTION__); // Empty and then delete all SwpSwapchain's for (auto it = pSurface->swapchains.begin(); it != pSurface->swapchains.end(); it++) { // Delete all SwpImage's it->second->images.clear(); // In case the swapchain's device hasn't been destroyed yet // (which isn't likely, but is possible), delete its // association with this swapchain (i.e. 
so we can't point to // this swpchain from that device, later on): if (it->second->pDevice) { it->second->pDevice->swapchains.clear(); } } pSurface->swapchains.clear(); } if ((pAllocator != NULL) != pSurface->usedAllocatorToCreate) { LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, instance, "VkInstance", SWAPCHAIN_INCOMPATIBLE_ALLOCATOR, "%s() called with incompatible pAllocator from when " "the object was created.", __FUNCTION__); } my_data->surfaceMap.erase(surface); } loader_platform_thread_unlock_mutex(&globalLock); if (VK_FALSE == skipCall) { // Call down the call chain: my_data->instance_dispatch_table->DestroySurfaceKHR(instance, surface, pAllocator); } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumeratePhysicalDevices(VkInstance instance, uint32_t *pPhysicalDeviceCount, VkPhysicalDevice *pPhysicalDevices) { VkResult result = VK_SUCCESS; layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map); // Call down the call chain: result = my_data->instance_dispatch_table->EnumeratePhysicalDevices(instance, pPhysicalDeviceCount, pPhysicalDevices); loader_platform_thread_lock_mutex(&globalLock); SwpInstance *pInstance = &(my_data->instanceMap[instance]); if ((result == VK_SUCCESS) && pInstance && pPhysicalDevices && (*pPhysicalDeviceCount > 0)) { // Record the VkPhysicalDevices returned by the ICD: for (uint32_t i = 0; i < *pPhysicalDeviceCount; i++) { my_data->physicalDeviceMap[pPhysicalDevices[i]].physicalDevice = pPhysicalDevices[i]; my_data->physicalDeviceMap[pPhysicalDevices[i]].pInstance = pInstance; my_data->physicalDeviceMap[pPhysicalDevices[i]].pDevice = NULL; my_data->physicalDeviceMap[pPhysicalDevices[i]].gotQueueFamilyPropertyCount = false; my_data->physicalDeviceMap[pPhysicalDevices[i]].gotSurfaceCapabilities = false; my_data->physicalDeviceMap[pPhysicalDevices[i]].surfaceFormatCount = 0; my_data->physicalDeviceMap[pPhysicalDevices[i]].pSurfaceFormats = NULL; 
my_data->physicalDeviceMap[pPhysicalDevices[i]].presentModeCount = 0; my_data->physicalDeviceMap[pPhysicalDevices[i]].pPresentModes = NULL; // Point to the associated SwpInstance: if (pInstance) { pInstance->physicalDevices[pPhysicalDevices[i]] = &my_data->physicalDeviceMap[pPhysicalDevices[i]]; } } } loader_platform_thread_unlock_mutex(&globalLock); return result; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice physicalDevice, const VkDeviceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkDevice *pDevice) { VkLayerDeviceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO); assert(chain_info->u.pLayerInfo); PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr; PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr; PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice)fpGetInstanceProcAddr(NULL, "vkCreateDevice"); if (fpCreateDevice == NULL) { return VK_ERROR_INITIALIZATION_FAILED; } // Advance the link info for the next element on the chain chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext; VkResult result = fpCreateDevice(physicalDevice, pCreateInfo, pAllocator, pDevice); if (result != VK_SUCCESS) { return result; } loader_platform_thread_lock_mutex(&globalLock); layer_data *my_instance_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(*pDevice), layer_data_map); // Setup device dispatch table my_device_data->device_dispatch_table = new VkLayerDispatchTable; layer_init_device_dispatch_table(*pDevice, my_device_data->device_dispatch_table, fpGetDeviceProcAddr); my_device_data->report_data = layer_debug_report_create_device(my_instance_data->report_data, *pDevice); createDeviceRegisterExtensions(physicalDevice, pCreateInfo, *pDevice); loader_platform_thread_unlock_mutex(&globalLock); return result; } 
VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, const VkAllocationCallbacks *pAllocator) { dispatch_key key = get_dispatch_key(device); layer_data *my_data = get_my_data_ptr(key, layer_data_map); // Call down the call chain: my_data->device_dispatch_table->DestroyDevice(device, pAllocator); // Do some internal cleanup: loader_platform_thread_lock_mutex(&globalLock); SwpDevice *pDevice = &my_data->deviceMap[device]; if (pDevice) { // Delete the SwpDevice associated with this device: if (pDevice->pPhysicalDevice) { pDevice->pPhysicalDevice->pDevice = NULL; } if (!pDevice->swapchains.empty()) { LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_DEL_OBJECT_BEFORE_CHILDREN, "%s() called before all of its associated " "VkSwapchainKHRs were destroyed.", __FUNCTION__); // Empty and then delete all SwpSwapchain's for (auto it = pDevice->swapchains.begin(); it != pDevice->swapchains.end(); it++) { // Delete all SwpImage's it->second->images.clear(); // In case the swapchain's surface hasn't been destroyed yet // (which is likely) delete its association with this swapchain // (i.e. 
so we can't point to this swpchain from that surface, // later on): if (it->second->pSurface) { it->second->pSurface->swapchains.clear(); } } pDevice->swapchains.clear(); } my_data->deviceMap.erase(device); } delete my_data->device_dispatch_table; layer_data_map.erase(key); loader_platform_thread_unlock_mutex(&globalLock); } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceSupportKHR(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, VkSurfaceKHR surface, VkBool32 *pSupported) { VkResult result = VK_SUCCESS; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice]; // Validate that the surface extension was enabled: if (pPhysicalDevice && pPhysicalDevice->pInstance && !pPhysicalDevice->pInstance->surfaceExtensionEnabled) { skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pPhysicalDevice->pInstance, "VkInstance", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED, "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__, VK_KHR_SURFACE_EXTENSION_NAME); } if (!pPhysicalDevice->gotQueueFamilyPropertyCount) { skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, pPhysicalDevice, "VkPhysicalDevice", SWAPCHAIN_DID_NOT_QUERY_QUEUE_FAMILIES, "%s() called before calling the " "vkGetPhysicalDeviceQueueFamilyProperties " "function.", __FUNCTION__); } else if (pPhysicalDevice->gotQueueFamilyPropertyCount && (queueFamilyIndex >= pPhysicalDevice->numOfQueueFamilies)) { skipCall |= LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, pPhysicalDevice, "VkPhysicalDevice", queueFamilyIndex, pPhysicalDevice->numOfQueueFamilies); } if (!pSupported) { skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, physicalDevice, "pSupported"); } if (VK_FALSE 
== skipCall) { // Call down the call chain: loader_platform_thread_unlock_mutex(&globalLock); result = my_data->instance_dispatch_table->GetPhysicalDeviceSurfaceSupportKHR(physicalDevice, queueFamilyIndex, surface, pSupported); loader_platform_thread_lock_mutex(&globalLock); // Obtain this pointer again after locking: pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice]; if ((result == VK_SUCCESS) && pSupported && pPhysicalDevice) { // Record the result of this query: SwpInstance *pInstance = pPhysicalDevice->pInstance; SwpSurface *pSurface = (pInstance) ? pInstance->surfaces[surface] : NULL; if (pSurface) { pPhysicalDevice->supportedSurfaces[surface] = pSurface; if (!pSurface->numQueueFamilyIndexSupport) { if (pPhysicalDevice->gotQueueFamilyPropertyCount) { pSurface->pQueueFamilyIndexSupport = (VkBool32 *)malloc(pPhysicalDevice->numOfQueueFamilies * sizeof(VkBool32)); if (pSurface->pQueueFamilyIndexSupport != NULL) { pSurface->numQueueFamilyIndexSupport = pPhysicalDevice->numOfQueueFamilies; } } } if (pSurface->numQueueFamilyIndexSupport) { pSurface->pQueueFamilyIndexSupport[queueFamilyIndex] = *pSupported; } } } loader_platform_thread_unlock_mutex(&globalLock); return result; } loader_platform_thread_unlock_mutex(&globalLock); return VK_ERROR_VALIDATION_FAILED_EXT; } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceCapabilitiesKHR(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface, VkSurfaceCapabilitiesKHR *pSurfaceCapabilities) { VkResult result = VK_SUCCESS; VkBool32 skipCall = VK_FALSE; layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map); loader_platform_thread_lock_mutex(&globalLock); SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice]; // Validate that the surface extension was enabled: if (pPhysicalDevice && pPhysicalDevice->pInstance && !pPhysicalDevice->pInstance->surfaceExtensionEnabled) { skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, 
pPhysicalDevice->pInstance, "VkInstance", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
                              "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__,
                              VK_KHR_SURFACE_EXTENSION_NAME);
    }
    if (!pSurfaceCapabilities) {
        skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, physicalDevice, "pSurfaceCapabilities");
    }
    if (VK_FALSE == skipCall) {
        // Call down the call chain:
        loader_platform_thread_unlock_mutex(&globalLock);
        result = my_data->instance_dispatch_table->GetPhysicalDeviceSurfaceCapabilitiesKHR(physicalDevice, surface,
                                                                                           pSurfaceCapabilities);
        loader_platform_thread_lock_mutex(&globalLock);
        // Obtain this pointer again after locking:
        pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice];
        if ((result == VK_SUCCESS) && pPhysicalDevice) {
            // Record the result of this query.  Note: struct assignment makes
            // a copy of the app-allocated data (VkSurfaceCapabilitiesKHR
            // contains no pointers):
            pPhysicalDevice->gotSurfaceCapabilities = true;
            pPhysicalDevice->surfaceCapabilities = *pSurfaceCapabilities;
        }
        loader_platform_thread_unlock_mutex(&globalLock);
        return result;
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    return VK_ERROR_VALIDATION_FAILED_EXT;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceFormatsKHR(VkPhysicalDevice physicalDevice,
                                                                                    VkSurfaceKHR surface,
                                                                                    uint32_t *pSurfaceFormatCount,
                                                                                    VkSurfaceFormatKHR *pSurfaceFormats) {
    VkResult result = VK_SUCCESS;
    VkBool32 skipCall = VK_FALSE;
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
    SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice];

    // Validate that the surface extension was enabled:
    if (pPhysicalDevice && pPhysicalDevice->pInstance && !pPhysicalDevice->pInstance->surfaceExtensionEnabled) {
        skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pPhysicalDevice->pInstance, "VkInstance",
                              SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
                              "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__,
                              VK_KHR_SURFACE_EXTENSION_NAME);
    }
    if (!pSurfaceFormatCount) {
        skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, physicalDevice, "pSurfaceFormatCount");
    }
    if (VK_FALSE == skipCall) {
        // Call down the call chain:
        loader_platform_thread_unlock_mutex(&globalLock);
        result = my_data->instance_dispatch_table->GetPhysicalDeviceSurfaceFormatsKHR(physicalDevice, surface, pSurfaceFormatCount,
                                                                                      pSurfaceFormats);
        loader_platform_thread_lock_mutex(&globalLock);
        // Obtain this pointer again after locking:
        pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice];
        if ((result == VK_SUCCESS) && pPhysicalDevice && !pSurfaceFormats && pSurfaceFormatCount) {
            // Record the result of this preliminary query:
            pPhysicalDevice->surfaceFormatCount = *pSurfaceFormatCount;
        } else if ((result == VK_SUCCESS) && pPhysicalDevice && pSurfaceFormats && pSurfaceFormatCount) {
            // Compare the preliminary value of *pSurfaceFormatCount with the
            // value this time:
            if (*pSurfaceFormatCount > pPhysicalDevice->surfaceFormatCount) {
                LOG_ERROR_INVALID_COUNT(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, physicalDevice, "pSurfaceFormatCount",
                                        "pSurfaceFormats", *pSurfaceFormatCount, pPhysicalDevice->surfaceFormatCount);
            } else if (*pSurfaceFormatCount > 0) {
                // Record the result of this query, freeing any formats
                // recorded by a previous query (free(NULL) is a no-op, and the
                // map value-initializes this pointer to NULL):
                pPhysicalDevice->surfaceFormatCount = *pSurfaceFormatCount;
                free(pPhysicalDevice->pSurfaceFormats);
                pPhysicalDevice->pSurfaceFormats = (VkSurfaceFormatKHR *)malloc(*pSurfaceFormatCount * sizeof(VkSurfaceFormatKHR));
                if (pPhysicalDevice->pSurfaceFormats) {
                    for (uint32_t i = 0; i < *pSurfaceFormatCount; i++) {
                        pPhysicalDevice->pSurfaceFormats[i] = pSurfaceFormats[i];
                    }
                } else {
                    pPhysicalDevice->surfaceFormatCount = 0;
                }
            }
        }
        loader_platform_thread_unlock_mutex(&globalLock);
        return result;
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    return VK_ERROR_VALIDATION_FAILED_EXT;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkGetPhysicalDeviceSurfacePresentModesKHR(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface, uint32_t *pPresentModeCount,
                                          VkPresentModeKHR *pPresentModes) {
    VkResult result = VK_SUCCESS;
    VkBool32 skipCall = VK_FALSE;
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(physicalDevice), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
    SwpPhysicalDevice *pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice];

    // Validate that the surface extension was enabled:
    if (pPhysicalDevice && pPhysicalDevice->pInstance && !pPhysicalDevice->pInstance->surfaceExtensionEnabled) {
        skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT, pPhysicalDevice->pInstance, "VkInstance",
                              SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
                              "%s() called even though the %s extension was not enabled for this VkInstance.", __FUNCTION__,
                              VK_KHR_SURFACE_EXTENSION_NAME);
    }
    if (!pPresentModeCount) {
        skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, physicalDevice, "pPresentModeCount");
    }
    if (VK_FALSE == skipCall) {
        // Call down the call chain:
        loader_platform_thread_unlock_mutex(&globalLock);
        result = my_data->instance_dispatch_table->GetPhysicalDeviceSurfacePresentModesKHR(physicalDevice, surface,
                                                                                           pPresentModeCount, pPresentModes);
        loader_platform_thread_lock_mutex(&globalLock);
        // Obtain this pointer again after locking:
        pPhysicalDevice = &my_data->physicalDeviceMap[physicalDevice];
        if ((result == VK_SUCCESS) && pPhysicalDevice && !pPresentModes && pPresentModeCount) {
            // Record the result of this preliminary query:
            pPhysicalDevice->presentModeCount = *pPresentModeCount;
        } else if ((result == VK_SUCCESS) && pPhysicalDevice && pPresentModes && pPresentModeCount) {
            // Compare the preliminary value of *pPresentModeCount with the
            // value this time:
            if (*pPresentModeCount > pPhysicalDevice->presentModeCount) {
                LOG_ERROR_INVALID_COUNT(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT, physicalDevice, "pPresentModeCount",
                                        "pPresentModes", *pPresentModeCount, pPhysicalDevice->presentModeCount);
            } else if (*pPresentModeCount > 0) {
                // Record the result of this query, freeing any modes recorded
                // by a previous query (free(NULL) is a no-op, and the map
                // value-initializes this pointer to NULL):
                pPhysicalDevice->presentModeCount = *pPresentModeCount;
                free(pPhysicalDevice->pPresentModes);
                pPhysicalDevice->pPresentModes = (VkPresentModeKHR *)malloc(*pPresentModeCount * sizeof(VkPresentModeKHR));
                if (pPhysicalDevice->pPresentModes) {
                    for (uint32_t i = 0; i < *pPresentModeCount; i++) {
                        pPhysicalDevice->pPresentModes[i] = pPresentModes[i];
                    }
                } else {
                    pPhysicalDevice->presentModeCount = 0;
                }
            }
        }
        loader_platform_thread_unlock_mutex(&globalLock);
        return result;
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    return VK_ERROR_VALIDATION_FAILED_EXT;
}

// This function does the up-front validation work for vkCreateSwapchainKHR(),
// and returns VK_TRUE if a logging callback indicates that the call down the
// chain should be skipped:
static VkBool32 validateCreateSwapchainKHR(VkDevice device, const VkSwapchainCreateInfoKHR *pCreateInfo,
                                           VkSwapchainKHR *pSwapchain) {
    // TODO: Validate cases of re-creating a swapchain (the current code
    // assumes a new swapchain is being created).
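    // For reference, this function checks pCreateInfo against results the
    // application is expected to have queried beforehand.  A sketch of that
    // query sequence (error handling and the second array-retrieval calls are
    // elided; "physDev" and "surface" are placeholder names):
    //
    //     VkSurfaceCapabilitiesKHR caps;
    //     vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physDev, surface, &caps);
    //     uint32_t formatCount = 0;
    //     vkGetPhysicalDeviceSurfaceFormatsKHR(physDev, surface, &formatCount, NULL);
    //     uint32_t presentModeCount = 0;
    //     vkGetPhysicalDeviceSurfacePresentModesKHR(physDev, surface, &presentModeCount, NULL);
    //     // ...fill VkSwapchainCreateInfoKHR from the query results, then
    //     // call vkCreateSwapchainKHR().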
    VkBool32 skipCall = VK_FALSE;
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    char fn[] = "vkCreateSwapchainKHR";
    SwpDevice *pDevice = &my_data->deviceMap[device];

    // Validate that the swapchain extension was enabled:
    if (pDevice && !pDevice->swapchainExtensionEnabled) {
        return LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
                         "%s() called even though the %s extension was not enabled for this VkDevice.", fn,
                         VK_KHR_SWAPCHAIN_EXTENSION_NAME);
    }
    if (!pCreateInfo) {
        skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo");
    } else {
        if (pCreateInfo->sType != VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR) {
            skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo",
                                              "VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR");
        }
        if (pCreateInfo->pNext != NULL) {
            skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pCreateInfo");
        }
    }
    if (!pSwapchain) {
        skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pSwapchain");
    }

    // Keep around a useful pointer to pPhysicalDevice:
    SwpPhysicalDevice *pPhysicalDevice = pDevice->pPhysicalDevice;

    // Validate pCreateInfo values with result of
    // vkGetPhysicalDeviceQueueFamilyProperties (guard against a NULL
    // pCreateInfo, which was only logged above):
    if (pCreateInfo && pPhysicalDevice && pPhysicalDevice->gotQueueFamilyPropertyCount) {
        for (uint32_t i = 0; i < pCreateInfo->queueFamilyIndexCount; i++) {
            if (pCreateInfo->pQueueFamilyIndices[i] >= pPhysicalDevice->numOfQueueFamilies) {
                skipCall |= LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT,
                                                                   pPhysicalDevice, "VkPhysicalDevice",
                                                                   pCreateInfo->pQueueFamilyIndices[i],
                                                                   pPhysicalDevice->numOfQueueFamilies);
            }
        }
    }

    // Validate pCreateInfo values with the results of
    // vkGetPhysicalDeviceSurfaceCapabilitiesKHR():
    if (!pPhysicalDevice || !pPhysicalDevice->gotSurfaceCapabilities) {
        skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_CREATE_SWAP_WITHOUT_QUERY,
                              "%s() called before calling "
                              "vkGetPhysicalDeviceSurfaceCapabilitiesKHR().",
                              fn);
    } else if (pCreateInfo) {
        // Validate pCreateInfo->surface to make sure that
        // vkGetPhysicalDeviceSurfaceSupportKHR() reported this as a supported
        // surface:
        SwpSurface *pSurface = ((pPhysicalDevice) ? pPhysicalDevice->supportedSurfaces[pCreateInfo->surface] : NULL);
        if (!pSurface) {
            skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_CREATE_UNSUPPORTED_SURFACE,
                                  "%s() called with pCreateInfo->surface that "
                                  "was not returned by "
                                  "vkGetPhysicalDeviceSurfaceSupportKHR() "
                                  "for the device.",
                                  fn);
        }

        // Validate pCreateInfo->minImageCount against
        // VkSurfaceCapabilitiesKHR::{min|max}ImageCount:
        VkSurfaceCapabilitiesKHR *pCapabilities = &pPhysicalDevice->surfaceCapabilities;
        if ((pCreateInfo->minImageCount < pCapabilities->minImageCount) ||
            ((pCapabilities->maxImageCount > 0) && (pCreateInfo->minImageCount > pCapabilities->maxImageCount))) {
            skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
                                  SWAPCHAIN_CREATE_SWAP_BAD_MIN_IMG_COUNT,
                                  "%s() called with pCreateInfo->minImageCount "
                                  "= %d, which is outside the bounds returned "
                                  "by vkGetPhysicalDeviceSurfaceCapabilitiesKHR() (i.e. "
                                  "minImageCount = %d, maxImageCount = %d).",
                                  fn, pCreateInfo->minImageCount, pCapabilities->minImageCount, pCapabilities->maxImageCount);
        }
        // Validate pCreateInfo->imageExtent against
        // VkSurfaceCapabilitiesKHR::{current|min|max}ImageExtent.  Note:
        // currentExtent.width is a uint32_t, so compare against the special
        // value 0xFFFFFFFF (not -1):
        if ((pCapabilities->currentExtent.width == 0xFFFFFFFF) &&
            ((pCreateInfo->imageExtent.width < pCapabilities->minImageExtent.width) ||
             (pCreateInfo->imageExtent.width > pCapabilities->maxImageExtent.width) ||
             (pCreateInfo->imageExtent.height < pCapabilities->minImageExtent.height) ||
             (pCreateInfo->imageExtent.height > pCapabilities->maxImageExtent.height))) {
            skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
                                  SWAPCHAIN_CREATE_SWAP_OUT_OF_BOUNDS_EXTENTS,
                                  "%s() called with pCreateInfo->imageExtent = "
                                  "(%d,%d), which is outside the bounds "
                                  "returned by vkGetPhysicalDeviceSurfaceCapabilitiesKHR(): "
                                  "currentExtent = (%d,%d), minImageExtent = "
                                  "(%d,%d), maxImageExtent = (%d,%d).",
                                  fn, pCreateInfo->imageExtent.width, pCreateInfo->imageExtent.height,
                                  pCapabilities->currentExtent.width, pCapabilities->currentExtent.height,
                                  pCapabilities->minImageExtent.width, pCapabilities->minImageExtent.height,
                                  pCapabilities->maxImageExtent.width, pCapabilities->maxImageExtent.height);
        }
        if ((pCapabilities->currentExtent.width != 0xFFFFFFFF) &&
            ((pCreateInfo->imageExtent.width != pCapabilities->currentExtent.width) ||
             (pCreateInfo->imageExtent.height != pCapabilities->currentExtent.height))) {
            skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
                                  SWAPCHAIN_CREATE_SWAP_EXTENTS_NO_MATCH_WIN,
                                  "%s() called with pCreateInfo->imageExtent = "
                                  "(%d,%d), which is not equal to the "
                                  "currentExtent = (%d,%d) returned by "
                                  "vkGetPhysicalDeviceSurfaceCapabilitiesKHR().",
                                  fn, pCreateInfo->imageExtent.width, pCreateInfo->imageExtent.height,
                                  pCapabilities->currentExtent.width, pCapabilities->currentExtent.height);
        }
        // Validate that pCreateInfo->preTransform has exactly one bit set (1st
        // two lines of the if-statement), and that the bit is also set in
        // VkSurfaceCapabilitiesKHR::supportedTransforms (3rd line of the
        // if-statement):
        if (!pCreateInfo->preTransform || (pCreateInfo->preTransform & (pCreateInfo->preTransform - 1)) ||
            !(pCreateInfo->preTransform & pCapabilities->supportedTransforms)) {
            // This is an error situation; one for which we'd like to give
            // the developer a helpful, multi-line error message.  Build it
            // up a little at a time, and then log it:
            std::string errorString = "";
            char str[1024];
            // Here's the first part of the message:
            sprintf(str, "%s() called with a non-supported "
                         "pCreateInfo->preTransform (i.e. %s).  "
                         "Supported values are:\n",
                    fn, surfaceTransformStr(pCreateInfo->preTransform));
            errorString += str;
            for (int i = 0; i < 32; i++) {
                // Build up the rest of the message:
                if ((1 << i) & pCapabilities->supportedTransforms) {
                    const char *newStr = surfaceTransformStr((VkSurfaceTransformFlagBitsKHR)(1 << i));
                    sprintf(str, "    %s\n", newStr);
                    errorString += str;
                }
            }
            // Log the message that we've built up:
            skipCall |= debug_report_log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                             VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, (uint64_t)device, __LINE__,
                                             SWAPCHAIN_CREATE_SWAP_BAD_PRE_TRANSFORM, LAYER_NAME, errorString.c_str());
        }
        // Validate that pCreateInfo->compositeAlpha has exactly one bit set
        // (1st two lines of the if-statement), and that the bit is also set in
        // VkSurfaceCapabilitiesKHR::supportedCompositeAlpha (3rd line of the
        // if-statement):
        if (!pCreateInfo->compositeAlpha || (pCreateInfo->compositeAlpha & (pCreateInfo->compositeAlpha - 1)) ||
            !((pCreateInfo->compositeAlpha) & pCapabilities->supportedCompositeAlpha)) {
            // This is an error situation; one for which we'd like to give
            // the developer a helpful, multi-line error message.  Build it
            // up a little at a time, and then log it:
            std::string errorString = "";
            char str[1024];
            // Here's the first part of the message:
            sprintf(str, "%s() called with a non-supported "
                         "pCreateInfo->compositeAlpha (i.e. %s).  "
                         "Supported values are:\n",
                    fn, surfaceCompositeAlphaStr(pCreateInfo->compositeAlpha));
            errorString += str;
            for (int i = 0; i < 32; i++) {
                // Build up the rest of the message:
                if ((1 << i) & pCapabilities->supportedCompositeAlpha) {
                    const char *newStr = surfaceCompositeAlphaStr((VkCompositeAlphaFlagBitsKHR)(1 << i));
                    sprintf(str, "    %s\n", newStr);
                    errorString += str;
                }
            }
            // Log the message that we've built up (use __LINE__ here for
            // consistency with the preTransform check above):
            skipCall |= debug_report_log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT,
                                             VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, (uint64_t)device, __LINE__,
                                             SWAPCHAIN_CREATE_SWAP_BAD_COMPOSITE_ALPHA, LAYER_NAME, errorString.c_str());
        }
        // Validate pCreateInfo->imageArrayLayers against
        // VkSurfaceCapabilitiesKHR::maxImageArrayLayers:
        if ((pCreateInfo->imageArrayLayers < 1) || (pCreateInfo->imageArrayLayers > pCapabilities->maxImageArrayLayers)) {
            skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
                                  SWAPCHAIN_CREATE_SWAP_BAD_IMG_ARRAY_SIZE,
                                  "%s() called with a non-supported "
                                  "pCreateInfo->imageArrayLayers (i.e. %d).  "
                                  "Minimum value is 1, maximum value is %d.",
                                  fn, pCreateInfo->imageArrayLayers, pCapabilities->maxImageArrayLayers);
        }
        // Validate pCreateInfo->imageUsage against
        // VkSurfaceCapabilitiesKHR::supportedUsageFlags:
        if (pCreateInfo->imageUsage != (pCreateInfo->imageUsage & pCapabilities->supportedUsageFlags)) {
            skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
                                  SWAPCHAIN_CREATE_SWAP_BAD_IMG_USAGE_FLAGS,
                                  "%s() called with a non-supported "
                                  "pCreateInfo->imageUsage (i.e. 0x%08x).  "
                                  "Supported flag bits are 0x%08x.",
                                  fn, pCreateInfo->imageUsage, pCapabilities->supportedUsageFlags);
        }
    }

    // Validate pCreateInfo values with the results of
    // vkGetPhysicalDeviceSurfaceFormatsKHR():
    if (!pPhysicalDevice || !pPhysicalDevice->surfaceFormatCount) {
        skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_CREATE_SWAP_WITHOUT_QUERY,
                              "%s() called before calling "
                              "vkGetPhysicalDeviceSurfaceFormatsKHR().",
                              fn);
    } else if (pCreateInfo) {
        // Validate pCreateInfo->imageFormat against
        // VkSurfaceFormatKHR::format:
        bool foundFormat = false;
        bool foundColorSpace = false;
        bool foundMatch = false;
        for (uint32_t i = 0; i < pPhysicalDevice->surfaceFormatCount; i++) {
            if (pCreateInfo->imageFormat == pPhysicalDevice->pSurfaceFormats[i].format) {
                // Validate pCreateInfo->imageColorSpace against
                // VkSurfaceFormatKHR::colorSpace:
                foundFormat = true;
                if (pCreateInfo->imageColorSpace == pPhysicalDevice->pSurfaceFormats[i].colorSpace) {
                    foundMatch = true;
                    break;
                }
            } else {
                if (pCreateInfo->imageColorSpace == pPhysicalDevice->pSurfaceFormats[i].colorSpace) {
                    foundColorSpace = true;
                }
            }
        }
        if (!foundMatch) {
            if (!foundFormat) {
                if (!foundColorSpace) {
                    skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
                                          SWAPCHAIN_CREATE_SWAP_BAD_IMG_FMT_CLR_SP,
                                          "%s() called with neither a "
                                          "supported pCreateInfo->imageFormat "
                                          "(i.e. %d) nor a supported "
                                          "pCreateInfo->imageColorSpace "
                                          "(i.e. %d).",
                                          fn, pCreateInfo->imageFormat, pCreateInfo->imageColorSpace);
                } else {
                    skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
                                          SWAPCHAIN_CREATE_SWAP_BAD_IMG_FORMAT,
                                          "%s() called with a non-supported "
                                          "pCreateInfo->imageFormat (i.e. %d).",
                                          fn, pCreateInfo->imageFormat);
                }
            } else if (!foundColorSpace) {
                skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
                                      SWAPCHAIN_CREATE_SWAP_BAD_IMG_COLOR_SPACE,
                                      "%s() called with a non-supported "
                                      "pCreateInfo->imageColorSpace (i.e. %d).",
                                      fn, pCreateInfo->imageColorSpace);
            }
        }
    }

    // Validate pCreateInfo values with the results of
    // vkGetPhysicalDeviceSurfacePresentModesKHR():
    if (!pPhysicalDevice || !pPhysicalDevice->presentModeCount) {
        if (!pCreateInfo || (pCreateInfo->presentMode != VK_PRESENT_MODE_FIFO_KHR)) {
            skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_CREATE_SWAP_WITHOUT_QUERY,
                                  "%s() called before calling "
                                  "vkGetPhysicalDeviceSurfacePresentModesKHR().",
                                  fn);
        }
    } else if (pCreateInfo) {
        // Validate pCreateInfo->presentMode against
        // vkGetPhysicalDeviceSurfacePresentModesKHR():
        bool foundMatch = false;
        for (uint32_t i = 0; i < pPhysicalDevice->presentModeCount; i++) {
            if (pPhysicalDevice->pPresentModes[i] == pCreateInfo->presentMode) {
                foundMatch = true;
                break;
            }
        }
        if (!foundMatch) {
            skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
                                  SWAPCHAIN_CREATE_SWAP_BAD_PRESENT_MODE,
                                  "%s() called with a non-supported "
                                  "pCreateInfo->presentMode (i.e. %s).",
                                  fn, presentModeStr(pCreateInfo->presentMode));
        }
    }

    // Validate pCreateInfo->imageSharingMode and related values (guard against
    // a NULL pCreateInfo, which was only logged above):
    if (pCreateInfo && (pCreateInfo->imageSharingMode == VK_SHARING_MODE_CONCURRENT)) {
        if ((pCreateInfo->queueFamilyIndexCount <= 1) || !pCreateInfo->pQueueFamilyIndices) {
            skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
                                  SWAPCHAIN_CREATE_SWAP_BAD_SHARING_VALUES,
                                  "%s() called with a supported "
                                  "pCreateInfo->imageSharingMode (i.e. %s), "
                                  "but with bad value(s) for "
                                  "pCreateInfo->queueFamilyIndexCount or "
                                  "pCreateInfo->pQueueFamilyIndices.",
                                  fn, sharingModeStr(pCreateInfo->imageSharingMode));
        }
    } else if (pCreateInfo && (pCreateInfo->imageSharingMode != VK_SHARING_MODE_EXCLUSIVE)) {
        skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_CREATE_SWAP_BAD_SHARING_MODE,
                              "%s() called with a non-supported "
                              "pCreateInfo->imageSharingMode (i.e. %s).",
                              fn, sharingModeStr(pCreateInfo->imageSharingMode));
    }
    // Validate pCreateInfo->clipped:
    if (pCreateInfo && (pCreateInfo->clipped != VK_FALSE) && (pCreateInfo->clipped != VK_TRUE)) {
        skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_BAD_BOOL,
                              "%s() called with a VkBool32 value that is "
                              "neither VK_TRUE nor VK_FALSE, but has the "
                              "numeric value of %d.",
                              fn, pCreateInfo->clipped);
    }
    // Validate pCreateInfo->oldSwapchain.  Use find() rather than operator[]
    // so that an unknown handle is reported instead of silently inserted:
    if (pCreateInfo && pCreateInfo->oldSwapchain) {
        auto it = my_data->swapchainMap.find(pCreateInfo->oldSwapchain);
        SwpSwapchain *pOldSwapchain = (it == my_data->swapchainMap.end()) ? NULL : &it->second;
        if (pOldSwapchain) {
            if (pOldSwapchain->pDevice && (device != pOldSwapchain->pDevice->device)) {
                skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
                                      SWAPCHAIN_DESTROY_SWAP_DIFF_DEVICE,
                                      "%s() called with a different VkDevice "
                                      "than the VkSwapchainKHR was created with.",
                                      __FUNCTION__);
            }
            if (pOldSwapchain->pSurface && (pCreateInfo->surface != pOldSwapchain->pSurface->surface)) {
                skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice",
                                      SWAPCHAIN_CREATE_SWAP_DIFF_SURFACE,
                                      "%s() called with pCreateInfo->oldSwapchain "
                                      "that has a different VkSurfaceKHR than "
                                      "pCreateInfo->surface.",
                                      fn);
            }
        } else {
            // TBD: Leave this in (not sure object_track will check this)?
            skipCall |= LOG_ERROR_NON_VALID_OBJ(VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT, pCreateInfo->oldSwapchain,
                                                "VkSwapchainKHR");
        }
    }

    return skipCall;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateSwapchainKHR(VkDevice device, const VkSwapchainCreateInfoKHR *pCreateInfo,
                                                                    const VkAllocationCallbacks *pAllocator,
                                                                    VkSwapchainKHR *pSwapchain) {
    VkResult result = VK_SUCCESS;
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
    VkBool32 skipCall = validateCreateSwapchainKHR(device, pCreateInfo, pSwapchain);

    if (VK_FALSE == skipCall) {
        // Call down the call chain:
        loader_platform_thread_unlock_mutex(&globalLock);
        result = my_data->device_dispatch_table->CreateSwapchainKHR(device, pCreateInfo, pAllocator, pSwapchain);
        loader_platform_thread_lock_mutex(&globalLock);
        if (result == VK_SUCCESS) {
            // Remember the swapchain's handle, and link it to the device:
            SwpDevice *pDevice = &my_data->deviceMap[device];
            my_data->swapchainMap[*pSwapchain].swapchain = *pSwapchain;
            if (pDevice) {
                pDevice->swapchains[*pSwapchain] = &my_data->swapchainMap[*pSwapchain];
            }
            my_data->swapchainMap[*pSwapchain].pDevice = pDevice;
            my_data->swapchainMap[*pSwapchain].imageCount = 0;
            my_data->swapchainMap[*pSwapchain].usedAllocatorToCreate = (pAllocator != NULL);
            // Store a pointer to the surface.  Note: guard on
            // my_instance_data, which is what is dereferenced below:
            SwpPhysicalDevice *pPhysicalDevice = pDevice->pPhysicalDevice;
            SwpInstance *pInstance = (pPhysicalDevice) ? pPhysicalDevice->pInstance : NULL;
            layer_data *my_instance_data =
                ((pInstance) ? get_my_data_ptr(get_dispatch_key(pInstance->instance), layer_data_map) : NULL);
            SwpSurface *pSurface = ((my_instance_data && pCreateInfo) ? &my_instance_data->surfaceMap[pCreateInfo->surface] : NULL);
            my_data->swapchainMap[*pSwapchain].pSurface = pSurface;
            if (pSurface) {
                pSurface->swapchains[*pSwapchain] = &my_data->swapchainMap[*pSwapchain];
            }
        }
        loader_platform_thread_unlock_mutex(&globalLock);
        return result;
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    return VK_ERROR_VALIDATION_FAILED_EXT;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroySwapchainKHR(VkDevice device, VkSwapchainKHR swapchain,
                                                                 const VkAllocationCallbacks *pAllocator) {
    // TODOs:
    //
    // - Implement a check for validity language that reads: All uses of
    //   presentable images acquired from pname:swapchain and owned by the
    //   application must: have completed execution
    VkBool32 skipCall = VK_FALSE;
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
    SwpDevice *pDevice = &my_data->deviceMap[device];

    // Validate that the swapchain extension was enabled:
    if (pDevice && !pDevice->swapchainExtensionEnabled) {
        skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
                              "%s() called even though the %s extension was not enabled for this VkDevice.", __FUNCTION__,
                              VK_KHR_SWAPCHAIN_EXTENSION_NAME);
    }

    // Regardless of skipCall value, do some internal cleanup:
    SwpSwapchain *pSwapchain = &my_data->swapchainMap[swapchain];
    if (pSwapchain) {
        // Delete the SwpSwapchain associated with this swapchain:
        if (pSwapchain->pDevice) {
            pSwapchain->pDevice->swapchains.erase(swapchain);
            if (device != pSwapchain->pDevice->device) {
                LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_DESTROY_SWAP_DIFF_DEVICE,
                          "%s() called with a different VkDevice than the "
                          "VkSwapchainKHR was created with.",
                          __FUNCTION__);
            }
        }
        if (pSwapchain->pSurface) {
            pSwapchain->pSurface->swapchains.erase(swapchain);
        }
        if (pSwapchain->imageCount) {
            pSwapchain->images.clear();
        }
        if ((pAllocator != NULL) != pSwapchain->usedAllocatorToCreate) {
            // Note: log against the VkDevice; no VkInstance is in scope here:
            LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_INCOMPATIBLE_ALLOCATOR,
                      "%s() called with incompatible pAllocator from when "
                      "the object was created.",
                      __FUNCTION__);
        }
        my_data->swapchainMap.erase(swapchain);
    }
    loader_platform_thread_unlock_mutex(&globalLock);

    if (VK_FALSE == skipCall) {
        // Call down the call chain:
        my_data->device_dispatch_table->DestroySwapchainKHR(device, swapchain, pAllocator);
    }
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkGetSwapchainImagesKHR(VkDevice device, VkSwapchainKHR swapchain,
                                                                       uint32_t *pSwapchainImageCount, VkImage *pSwapchainImages) {
    VkResult result = VK_SUCCESS;
    VkBool32 skipCall = VK_FALSE;
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
    SwpDevice *pDevice = &my_data->deviceMap[device];

    // Validate that the swapchain extension was enabled:
    if (pDevice && !pDevice->swapchainExtensionEnabled) {
        skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
                              "%s() called even though the %s extension was not enabled for this VkDevice.", __FUNCTION__,
                              VK_KHR_SWAPCHAIN_EXTENSION_NAME);
    }
    SwpSwapchain *pSwapchain = &my_data->swapchainMap[swapchain];
    if (!pSwapchainImageCount) {
        skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pSwapchainImageCount");
    }
    if (VK_FALSE == skipCall) {
        // Call down the call chain:
        loader_platform_thread_unlock_mutex(&globalLock);
        result = my_data->device_dispatch_table->GetSwapchainImagesKHR(device, swapchain, pSwapchainImageCount, pSwapchainImages);
        loader_platform_thread_lock_mutex(&globalLock);
        // Obtain this pointer again after locking:
        pSwapchain = &my_data->swapchainMap[swapchain];
        if ((result == VK_SUCCESS) && pSwapchain && !pSwapchainImages && pSwapchainImageCount) {
            // Record the result of this preliminary query:
            pSwapchain->imageCount = *pSwapchainImageCount;
        } else if ((result == VK_SUCCESS) && pSwapchain && pSwapchainImages && pSwapchainImageCount) {
            // Compare the preliminary value of *pSwapchainImageCount with the
            // value this time:
            if (*pSwapchainImageCount > pSwapchain->imageCount) {
                LOG_ERROR_INVALID_COUNT(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pSwapchainImageCount",
                                        "pSwapchainImages", *pSwapchainImageCount, pSwapchain->imageCount);
            } else if (*pSwapchainImageCount > 0) {
                // Record the images and their state:
                pSwapchain->imageCount = *pSwapchainImageCount;
                for (uint32_t i = 0; i < *pSwapchainImageCount; i++) {
                    pSwapchain->images[i].image = pSwapchainImages[i];
                    pSwapchain->images[i].pSwapchain = pSwapchain;
                    pSwapchain->images[i].ownedByApp = false;
                }
            }
        }
        loader_platform_thread_unlock_mutex(&globalLock);
        return result;
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    return VK_ERROR_VALIDATION_FAILED_EXT;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkAcquireNextImageKHR(VkDevice device, VkSwapchainKHR swapchain, uint64_t timeout,
                                                                     VkSemaphore semaphore, VkFence fence, uint32_t *pImageIndex) {
    // TODOs:
    //
    // - Address the timeout.  Possibilities include looking at the state of the
    //   swapchain's images, depending on the timeout value.
    // - Implement a check for validity language that reads: If pname:semaphore is
    //   not sname:VK_NULL_HANDLE it must: be unsignalled
    // - Implement a check for validity language that reads: If pname:fence is not
    //   sname:VK_NULL_HANDLE it must: be unsignalled and mustnot: be associated
    //   with any other queue command that has not yet completed execution on that
    //   queue
    // - Record/update the state of the swapchain, in case an error occurs
    //   (e.g. VK_ERROR_OUT_OF_DATE_KHR).
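    // For reference, the typical usage this function validates (sketch only;
    // "acquireSemaphore" is a placeholder name):
    //
    //     uint32_t imageIndex;
    //     vkAcquireNextImageKHR(device, swapchain, UINT64_MAX, acquireSemaphore,
    //                           VK_NULL_HANDLE, &imageIndex);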
    VkResult result = VK_SUCCESS;
    VkBool32 skipCall = VK_FALSE;
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    loader_platform_thread_lock_mutex(&globalLock);
    SwpDevice *pDevice = &my_data->deviceMap[device];

    // Validate that the swapchain extension was enabled:
    if (pDevice && !pDevice->swapchainExtensionEnabled) {
        skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
                              "%s() called even though the %s extension was not enabled for this VkDevice.", __FUNCTION__,
                              VK_KHR_SWAPCHAIN_EXTENSION_NAME);
    }
    if ((semaphore == VK_NULL_HANDLE) && (fence == VK_NULL_HANDLE)) {
        skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "VkDevice", SWAPCHAIN_NO_SYNC_FOR_ACQUIRE,
                              "%s() called with both the semaphore and fence parameters set to "
                              "VK_NULL_HANDLE (at least one should be used).",
                              __FUNCTION__);
    }
    SwpSwapchain *pSwapchain = &my_data->swapchainMap[swapchain];
    if (pSwapchain) {
        // Look to see if the application is trying to own too many images at
        // the same time (i.e. not leave any to display):
        uint32_t imagesOwnedByApp = 0;
        for (uint32_t i = 0; i < pSwapchain->imageCount; i++) {
            if (pSwapchain->images[i].ownedByApp) {
                imagesOwnedByApp++;
            }
        }
        if (imagesOwnedByApp >= (pSwapchain->imageCount - 1)) {
            skipCall |= LOG_PERF_WARNING(VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT, swapchain, "VkSwapchainKHR",
                                         SWAPCHAIN_APP_OWNS_TOO_MANY_IMAGES,
                                         "%s() called when the application "
                                         "already owns all presentable images "
                                         "in this swapchain except for the "
                                         "image currently being displayed.  "
                                         "This call to %s() cannot succeed "
                                         "unless another thread calls the "
                                         "vkQueuePresentKHR() function in "
                                         "order to release ownership of one of "
                                         "the presentable images of this "
                                         "swapchain.",
                                         __FUNCTION__, __FUNCTION__);
        }
    }
    if (!pImageIndex) {
        skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, device, "pImageIndex");
    }
    if (VK_FALSE == skipCall) {
        // Call down the call chain:
        loader_platform_thread_unlock_mutex(&globalLock);
        result = my_data->device_dispatch_table->AcquireNextImageKHR(device, swapchain, timeout, semaphore, fence, pImageIndex);
        loader_platform_thread_lock_mutex(&globalLock);
        // Obtain this pointer again after locking:
        pSwapchain = &my_data->swapchainMap[swapchain];
        if (((result == VK_SUCCESS) || (result == VK_SUBOPTIMAL_KHR)) && pSwapchain) {
            // Change the state of the image (now owned by the application):
            pSwapchain->images[*pImageIndex].ownedByApp = true;
        }
        loader_platform_thread_unlock_mutex(&globalLock);
        return result;
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    return VK_ERROR_VALIDATION_FAILED_EXT;
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkQueuePresentKHR(VkQueue queue, const VkPresentInfoKHR *pPresentInfo) {
    // TODOs:
    //
    // - Implement a check for validity language that reads: Any given element of
    //   sname:VkSemaphore in pname:pWaitSemaphores must: refer to a prior signal
    //   of that sname:VkSemaphore that won't be consumed by any other wait on that
    //   semaphore
    // - Record/update the state of the swapchain, in case an error occurs
    //   (e.g. VK_ERROR_OUT_OF_DATE_KHR).
    VkResult result = VK_SUCCESS;
    VkBool32 skipCall = VK_FALSE;
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(queue), layer_data_map);

    // Note: there is no VkDevice in scope here, so log against the VkQueue:
    if (!pPresentInfo) {
        skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT, queue, "pPresentInfo");
    } else {
        if (pPresentInfo->sType != VK_STRUCTURE_TYPE_PRESENT_INFO_KHR) {
            skipCall |= LOG_ERROR_WRONG_STYPE(VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT, queue, "pPresentInfo",
                                              "VK_STRUCTURE_TYPE_PRESENT_INFO_KHR");
        }
        if (pPresentInfo->pNext != NULL) {
            skipCall |= LOG_INFO_WRONG_NEXT(VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT, queue, "pPresentInfo");
        }
        if (!pPresentInfo->swapchainCount) {
            skipCall |= LOG_ERROR_ZERO_VALUE(VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT, queue, "pPresentInfo->swapchainCount");
        }
        if (!pPresentInfo->pSwapchains) {
            skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT, queue, "pPresentInfo->pSwapchains");
        }
        if (!pPresentInfo->pImageIndices) {
            skipCall |= LOG_ERROR_NULL_POINTER(VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT, queue, "pPresentInfo->pImageIndices");
        }
        // Note: pPresentInfo->pResults is allowed to be NULL
    }

    loader_platform_thread_lock_mutex(&globalLock);
    for (uint32_t i = 0; pPresentInfo && (i < pPresentInfo->swapchainCount); i++) {
        uint32_t index = pPresentInfo->pImageIndices[i];
        SwpSwapchain *pSwapchain = &my_data->swapchainMap[pPresentInfo->pSwapchains[i]];
        if (pSwapchain) {
            if (pSwapchain->pDevice && !pSwapchain->pDevice->swapchainExtensionEnabled) {
                skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT, pSwapchain->pDevice->device, "VkDevice",
                                      SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED,
                                      "%s() called even though the %s extension was not enabled for this VkDevice.", __FUNCTION__,
                                      VK_KHR_SWAPCHAIN_EXTENSION_NAME);
            }
            if (index >= pSwapchain->imageCount) {
                skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT, pPresentInfo->pSwapchains[i],
                                      "VkSwapchainKHR", SWAPCHAIN_INDEX_TOO_LARGE,
                                      "%s() called for an index that is too "
                                      "large (i.e. %d).  There are only %d "
                                      "images in this VkSwapchainKHR.\n",
                                      __FUNCTION__, index, pSwapchain->imageCount);
            } else {
                if (!pSwapchain->images[index].ownedByApp) {
                    skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT, pPresentInfo->pSwapchains[i],
                                          "VkSwapchainKHR", SWAPCHAIN_INDEX_NOT_IN_USE,
                                          "%s() called with an index (i.e. %d) "
                                          "for an image that is not owned by "
                                          "the application.",
                                          __FUNCTION__, index);
                }
            }
            SwpQueue *pQueue = &my_data->queueMap[queue];
            SwpSurface *pSurface = pSwapchain->pSurface;
            if (pQueue && pSurface && pSurface->numQueueFamilyIndexSupport) {
                uint32_t queueFamilyIndex = pQueue->queueFamilyIndex;
                // Note: the 1st test is to ensure queueFamilyIndex is in range,
                // and the 2nd test is the validation check:
                if ((pSurface->numQueueFamilyIndexSupport > queueFamilyIndex) &&
                    (!pSurface->pQueueFamilyIndexSupport[queueFamilyIndex])) {
                    skipCall |= LOG_ERROR(VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT, pPresentInfo->pSwapchains[i],
                                          "VkSwapchainKHR", SWAPCHAIN_SURFACE_NOT_SUPPORTED_WITH_QUEUE,
                                          "%s() called with a swapchain whose "
                                          "surface is not supported for "
                                          "presentation on this device with the "
                                          "queueFamilyIndex (i.e. %d) of the "
                                          "given queue.",
                                          __FUNCTION__, queueFamilyIndex);
                }
            }
        }
    }
    if (VK_FALSE == skipCall) {
        // Call down the call chain:
        loader_platform_thread_unlock_mutex(&globalLock);
        result = my_data->device_dispatch_table->QueuePresentKHR(queue, pPresentInfo);
        loader_platform_thread_lock_mutex(&globalLock);
        if (pPresentInfo && ((result == VK_SUCCESS) || (result == VK_SUBOPTIMAL_KHR))) {
            for (uint32_t i = 0; i < pPresentInfo->swapchainCount; i++) {
                uint32_t index = pPresentInfo->pImageIndices[i];
                SwpSwapchain *pSwapchain = &my_data->swapchainMap[pPresentInfo->pSwapchains[i]];
                if (pSwapchain) {
                    // Change the state of the image (no longer owned by the
                    // application):
                    pSwapchain->images[index].ownedByApp = false;
                }
            }
        }
        loader_platform_thread_unlock_mutex(&globalLock);
        return result;
    }
    loader_platform_thread_unlock_mutex(&globalLock);
    return VK_ERROR_VALIDATION_FAILED_EXT;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkGetDeviceQueue(VkDevice device, uint32_t queueFamilyIndex, uint32_t queueIndex,
                                                            VkQueue *pQueue) {
    VkBool32 skipCall = VK_FALSE;
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);

    if (VK_FALSE == skipCall) {
        // Call down the call chain:
        my_data->device_dispatch_table->GetDeviceQueue(device, queueFamilyIndex, queueIndex, pQueue);

        // Remember the queue's handle, and link it to the device.  Note: key
        // the map with the queue handle (*pQueue), not the address of the
        // output parameter:
        loader_platform_thread_lock_mutex(&globalLock);
        SwpDevice *pDevice = &my_data->deviceMap[device];
        my_data->queueMap[*pQueue].queue = *pQueue;
        if (pDevice) {
            pDevice->queues[*pQueue] = &my_data->queueMap[*pQueue];
        }
        my_data->queueMap[*pQueue].pDevice = pDevice;
        my_data->queueMap[*pQueue].queueFamilyIndex = queueFamilyIndex;
        loader_platform_thread_unlock_mutex(&globalLock);
    }
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDebugReportCallbackEXT(VkInstance instance,
                                                                              const VkDebugReportCallbackCreateInfoEXT *pCreateInfo,
                                                                              const VkAllocationCallbacks *pAllocator,
                                                                              VkDebugReportCallbackEXT *pMsgCallback) {
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
    VkResult result = my_data->instance_dispatch_table->CreateDebugReportCallbackEXT(instance, pCreateInfo, pAllocator,
                                                                                     pMsgCallback);
    if (VK_SUCCESS == result) {
        loader_platform_thread_lock_mutex(&globalLock);
        result = layer_create_msg_callback(my_data->report_data, pCreateInfo, pAllocator, pMsgCallback);
        loader_platform_thread_unlock_mutex(&globalLock);
    }
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(VkInstance instance,
                                                                           VkDebugReportCallbackEXT msgCallback,
                                                                           const VkAllocationCallbacks *pAllocator) {
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
    my_data->instance_dispatch_table->DestroyDebugReportCallbackEXT(instance, msgCallback, pAllocator);
    loader_platform_thread_lock_mutex(&globalLock);
    layer_destroy_msg_callback(my_data->report_data, msgCallback, pAllocator);
    loader_platform_thread_unlock_mutex(&globalLock);
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDebugReportMessageEXT(VkInstance instance, VkDebugReportFlagsEXT flags,
                                                                   VkDebugReportObjectTypeEXT objType, uint64_t object,
                                                                   size_t location, int32_t msgCode, const char *pLayerPrefix,
                                                                   const char *pMsg) {
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
    my_data->instance_dispatch_table->DebugReportMessageEXT(instance, flags, objType, object, location, msgCode, pLayerPrefix,
                                                            pMsg);
}

VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice device, const char *funcName) {
    if (!strcmp("vkGetDeviceProcAddr", funcName))
        return (PFN_vkVoidFunction)vkGetDeviceProcAddr;
    if (!strcmp(funcName, "vkDestroyDevice"))
        return (PFN_vkVoidFunction)vkDestroyDevice;

    if (device == VK_NULL_HANDLE) {
        return NULL;
    }

    layer_data *my_data;
    my_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkLayerDispatchTable *pDisp = my_data->device_dispatch_table;
    if (!strcmp("vkCreateSwapchainKHR", funcName))
        return
reinterpret_cast<PFN_vkVoidFunction>(vkCreateSwapchainKHR); if (!strcmp("vkDestroySwapchainKHR", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkDestroySwapchainKHR); if (!strcmp("vkGetSwapchainImagesKHR", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkGetSwapchainImagesKHR); if (!strcmp("vkAcquireNextImageKHR", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkAcquireNextImageKHR); if (!strcmp("vkQueuePresentKHR", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkQueuePresentKHR); if (!strcmp("vkGetDeviceQueue", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkGetDeviceQueue); if (pDisp->GetDeviceProcAddr == NULL) return NULL; return pDisp->GetDeviceProcAddr(device, funcName); } VK_LAYER_EXPORT VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(VkInstance instance, const char *funcName) { if (!strcmp("vkGetInstanceProcAddr", funcName)) return (PFN_vkVoidFunction)vkGetInstanceProcAddr; if (!strcmp(funcName, "vkCreateInstance")) return (PFN_vkVoidFunction)vkCreateInstance; if (!strcmp(funcName, "vkDestroyInstance")) return (PFN_vkVoidFunction)vkDestroyInstance; if (!strcmp(funcName, "vkCreateDevice")) return (PFN_vkVoidFunction)vkCreateDevice; if (!strcmp(funcName, "vkEnumeratePhysicalDevices")) return (PFN_vkVoidFunction)vkEnumeratePhysicalDevices; if (!strcmp(funcName, "vkEnumerateInstanceLayerProperties")) return (PFN_vkVoidFunction)vkEnumerateInstanceLayerProperties; if (!strcmp(funcName, "vkEnumerateDeviceLayerProperties")) return (PFN_vkVoidFunction)vkEnumerateDeviceLayerProperties; if (!strcmp(funcName, "vkEnumerateInstanceExtensionProperties")) return (PFN_vkVoidFunction)vkEnumerateInstanceExtensionProperties; if (!strcmp(funcName, "vkEnumerateDeviceExtensionProperties")) return (PFN_vkVoidFunction)vkEnumerateDeviceExtensionProperties; if (!strcmp(funcName, "vkGetPhysicalDeviceQueueFamilyProperties")) return (PFN_vkVoidFunction)vkGetPhysicalDeviceQueueFamilyProperties; if (instance == VK_NULL_HANDLE) { return NULL; } PFN_vkVoidFunction addr; layer_data *my_data; my_data = get_my_data_ptr(get_dispatch_key(instance), 
layer_data_map); VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table; addr = debug_report_get_instance_proc_addr(my_data->report_data, funcName); if (addr) { return addr; } #ifdef VK_USE_PLATFORM_ANDROID_KHR if (!strcmp("vkCreateAndroidSurfaceKHR", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkCreateAndroidSurfaceKHR); #endif // VK_USE_PLATFORM_ANDROID_KHR #ifdef VK_USE_PLATFORM_MIR_KHR if (!strcmp("vkCreateMirSurfaceKHR", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkCreateMirSurfaceKHR); if (!strcmp("vkGetPhysicalDeviceMirPresentationSupportKHR", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceMirPresentationSupportKHR); #endif // VK_USE_PLATFORM_MIR_KHR #ifdef VK_USE_PLATFORM_WAYLAND_KHR if (!strcmp("vkCreateWaylandSurfaceKHR", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkCreateWaylandSurfaceKHR); if (!strcmp("vkGetPhysicalDeviceWaylandPresentationSupportKHR", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceWaylandPresentationSupportKHR); #endif // VK_USE_PLATFORM_WAYLAND_KHR #ifdef VK_USE_PLATFORM_WIN32_KHR if (!strcmp("vkCreateWin32SurfaceKHR", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkCreateWin32SurfaceKHR); if (!strcmp("vkGetPhysicalDeviceWin32PresentationSupportKHR", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceWin32PresentationSupportKHR); #endif // VK_USE_PLATFORM_WIN32_KHR #ifdef VK_USE_PLATFORM_XCB_KHR if (!strcmp("vkCreateXcbSurfaceKHR", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkCreateXcbSurfaceKHR); if (!strcmp("vkGetPhysicalDeviceXcbPresentationSupportKHR", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceXcbPresentationSupportKHR); #endif // VK_USE_PLATFORM_XCB_KHR #ifdef VK_USE_PLATFORM_XLIB_KHR if (!strcmp("vkCreateXlibSurfaceKHR", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkCreateXlibSurfaceKHR); if (!strcmp("vkGetPhysicalDeviceXlibPresentationSupportKHR", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceXlibPresentationSupportKHR); #endif // VK_USE_PLATFORM_XLIB_KHR if (!strcmp("vkDestroySurfaceKHR", funcName)) return 
reinterpret_cast<PFN_vkVoidFunction>(vkDestroySurfaceKHR); if (!strcmp("vkGetPhysicalDeviceSurfaceSupportKHR", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceSurfaceSupportKHR); if (!strcmp("vkGetPhysicalDeviceSurfaceCapabilitiesKHR", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceSurfaceCapabilitiesKHR); if (!strcmp("vkGetPhysicalDeviceSurfaceFormatsKHR", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceSurfaceFormatsKHR); if (!strcmp("vkGetPhysicalDeviceSurfacePresentModesKHR", funcName)) return reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceSurfacePresentModesKHR); if (pTable->GetInstanceProcAddr == NULL) return NULL; return pTable->GetInstanceProcAddr(instance, funcName); } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/swapchain.h /* Copyright (c) 2015-2016 The Khronos Group Inc. * Copyright (c) 2015-2016 Valve Corporation * Copyright (c) 2015-2016 LunarG, Inc. * Copyright (C) 2015-2016 Google Inc. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and/or associated documentation files (the "Materials"), to * deal in the Materials without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Materials, and to permit persons to whom the Materials * are furnished to do so, subject to the following conditions: * * The above copyright notice(s) and this permission notice shall be included * in all copies or substantial portions of the Materials. * * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
* * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE * USE OR OTHER DEALINGS IN THE MATERIALS * * Author: Ian Elliott * Author: Ian Elliott */ #ifndef SWAPCHAIN_H #define SWAPCHAIN_H #include "vulkan/vk_layer.h" #include "vk_layer_config.h" #include "vk_layer_logging.h" #include <vector> #include <unordered_map> using namespace std; // Swapchain ERROR codes typedef enum _SWAPCHAIN_ERROR { SWAPCHAIN_INVALID_HANDLE, // Handle used that isn't currently valid SWAPCHAIN_NULL_POINTER, // Pointer set to NULL, instead of being a valid pointer SWAPCHAIN_EXT_NOT_ENABLED_BUT_USED, // Did not enable WSI extension, but called WSI function SWAPCHAIN_DEL_OBJECT_BEFORE_CHILDREN, // Called vkDestroyDevice() before vkDestroySwapchainKHR() SWAPCHAIN_CREATE_UNSUPPORTED_SURFACE, // Called vkCreateSwapchainKHR() with a pCreateInfo->surface that wasn't seen as supported // by vkGetPhysicalDeviceSurfaceSupportKHR for the device SWAPCHAIN_CREATE_SWAP_WITHOUT_QUERY, // Called vkCreateSwapchainKHR() without calling a query (e.g. 
// vkGetPhysicalDeviceSurfaceCapabilitiesKHR()) SWAPCHAIN_CREATE_SWAP_BAD_MIN_IMG_COUNT, // Called vkCreateSwapchainKHR() with out-of-bounds minImageCount SWAPCHAIN_CREATE_SWAP_OUT_OF_BOUNDS_EXTENTS, // Called vkCreateSwapchainKHR() with out-of-bounds imageExtent SWAPCHAIN_CREATE_SWAP_EXTENTS_NO_MATCH_WIN, // Called vkCreateSwapchainKHR() with imageExtent that doesn't match window's extent SWAPCHAIN_CREATE_SWAP_BAD_PRE_TRANSFORM, // Called vkCreateSwapchainKHR() with a non-supported preTransform SWAPCHAIN_CREATE_SWAP_BAD_COMPOSITE_ALPHA, // Called vkCreateSwapchainKHR() with a non-supported compositeAlpha SWAPCHAIN_CREATE_SWAP_BAD_IMG_ARRAY_SIZE, // Called vkCreateSwapchainKHR() with a non-supported imageArraySize SWAPCHAIN_CREATE_SWAP_BAD_IMG_USAGE_FLAGS, // Called vkCreateSwapchainKHR() with a non-supported imageUsageFlags SWAPCHAIN_CREATE_SWAP_BAD_IMG_COLOR_SPACE, // Called vkCreateSwapchainKHR() with a non-supported imageColorSpace SWAPCHAIN_CREATE_SWAP_BAD_IMG_FORMAT, // Called vkCreateSwapchainKHR() with a non-supported imageFormat SWAPCHAIN_CREATE_SWAP_BAD_IMG_FMT_CLR_SP, // Called vkCreateSwapchainKHR() with a non-supported imageColorSpace SWAPCHAIN_CREATE_SWAP_BAD_PRESENT_MODE, // Called vkCreateSwapchainKHR() with a non-supported presentMode SWAPCHAIN_CREATE_SWAP_BAD_SHARING_MODE, // Called vkCreateSwapchainKHR() with a non-supported imageSharingMode SWAPCHAIN_CREATE_SWAP_BAD_SHARING_VALUES, // Called vkCreateSwapchainKHR() with bad values when imageSharingMode is // VK_SHARING_MODE_CONCURRENT SWAPCHAIN_CREATE_SWAP_DIFF_SURFACE, // Called vkCreateSwapchainKHR() with pCreateInfo->oldSwapchain that has a different surface // than pCreateInfo->surface SWAPCHAIN_DESTROY_SWAP_DIFF_DEVICE, // Called vkDestroySwapchainKHR() with a different VkDevice than vkCreateSwapchainKHR() SWAPCHAIN_APP_OWNS_TOO_MANY_IMAGES, // vkAcquireNextImageKHR() asked for more images than are available SWAPCHAIN_INDEX_TOO_LARGE, // Index is too large for swapchain 
SWAPCHAIN_INDEX_NOT_IN_USE, // vkQueuePresentKHR() given index that is not owned by app SWAPCHAIN_BAD_BOOL, // VkBool32 that doesn't have value of VK_TRUE or VK_FALSE (e.g. is a non-zero form of true) SWAPCHAIN_INVALID_COUNT, // Second time a query called, the pCount value didn't match first time SWAPCHAIN_WRONG_STYPE, // The sType for a struct has the wrong value SWAPCHAIN_WRONG_NEXT, // The pNext for a struct is not NULL SWAPCHAIN_ZERO_VALUE, // A value should be non-zero SWAPCHAIN_INCOMPATIBLE_ALLOCATOR, // pAllocator must be compatible (i.e. NULL or not) when object is created and destroyed SWAPCHAIN_DID_NOT_QUERY_QUEUE_FAMILIES, // A function using a queueFamilyIndex was called before // vkGetPhysicalDeviceQueueFamilyProperties() was called SWAPCHAIN_QUEUE_FAMILY_INDEX_TOO_LARGE, // A queueFamilyIndex value is not less than pQueueFamilyPropertyCount returned by // vkGetPhysicalDeviceQueueFamilyProperties() SWAPCHAIN_SURFACE_NOT_SUPPORTED_WITH_QUEUE, // A surface is not supported by a given queueFamilyIndex, as seen by // vkGetPhysicalDeviceSurfaceSupportKHR() SWAPCHAIN_NO_SYNC_FOR_ACQUIRE, // vkAcquireNextImageKHR should be called with a valid semaphore and/or fence } SWAPCHAIN_ERROR; // The following is for logging error messages: #define LAYER_NAME (char *) "Swapchain" #define LOG_ERROR_NON_VALID_OBJ(objType, type, obj) \ (my_data) ? log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), (uint64_t)(obj), __LINE__, \ SWAPCHAIN_INVALID_HANDLE, LAYER_NAME, "%s() called with a non-valid %s.", __FUNCTION__, (obj)) \ : VK_FALSE #define LOG_ERROR_NULL_POINTER(objType, type, obj) \ (my_data) ? log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), (uint64_t)(obj), 0, \ SWAPCHAIN_NULL_POINTER, LAYER_NAME, "%s() called with NULL pointer %s.", __FUNCTION__, (obj)) \ : VK_FALSE #define LOG_ERROR_INVALID_COUNT(objType, type, obj, obj2, val, val2) \ (my_data) ? 
log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), (uint64_t)(obj), 0, \ SWAPCHAIN_INVALID_COUNT, LAYER_NAME, "%s() called with non-NULL %s, and with %s set to a " \ "value (%d) that is greater than the value (%d) that " \ "was returned when %s was NULL.", \ __FUNCTION__, (obj2), (obj), (val), (val2), (obj2)) \ : VK_FALSE #define LOG_ERROR_WRONG_STYPE(objType, type, obj, val) \ (my_data) ? log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), (uint64_t)(obj), 0, SWAPCHAIN_WRONG_STYPE, \ LAYER_NAME, "%s() called with the wrong value for %s->sType " \ "(expected %s).", \ __FUNCTION__, (obj), (val)) \ : VK_FALSE #define LOG_ERROR_ZERO_VALUE(objType, type, obj) \ (my_data) ? log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), (uint64_t)(obj), 0, SWAPCHAIN_ZERO_VALUE, \ LAYER_NAME, "%s() called with a zero value for %s.", __FUNCTION__, (obj)) \ : VK_FALSE #define LOG_ERROR(objType, type, obj, enm, fmt, ...) \ (my_data) ? log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), (uint64_t)(obj), __LINE__, (enm), \ LAYER_NAME, (fmt), __VA_ARGS__) \ : VK_FALSE #define LOG_ERROR_QUEUE_FAMILY_INDEX_TOO_LARGE(objType, type, obj, val1, val2) \ (my_data) ? log_msg(my_data->report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, (objType), (uint64_t)(obj), 0, \ SWAPCHAIN_QUEUE_FAMILY_INDEX_TOO_LARGE, LAYER_NAME, "%s() called with a queueFamilyIndex that is too " \ "large (i.e. %d). The maximum value (returned " \ "by vkGetPhysicalDeviceQueueFamilyProperties) is " \ "only %d.\n", \ __FUNCTION__, (val1), (val2)) \ : VK_FALSE #define LOG_PERF_WARNING(objType, type, obj, enm, fmt, ...) \ (my_data) ? log_msg(my_data->report_data, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, (objType), (uint64_t)(obj), __LINE__, \ (enm), LAYER_NAME, (fmt), __VA_ARGS__) \ : VK_FALSE #define LOG_WARNING(objType, type, obj, enm, fmt, ...) \ (my_data) ? 
log_msg(my_data->report_data, VK_DEBUG_REPORT_WARNING_BIT_EXT, (objType), (uint64_t)(obj), __LINE__, (enm), \ LAYER_NAME, (fmt), __VA_ARGS__) \ : VK_FALSE #define LOG_INFO_WRONG_NEXT(objType, type, obj) \ (my_data) ? log_msg(my_data->report_data, VK_DEBUG_REPORT_INFORMATION_BIT_EXT, (objType), (uint64_t)(obj), 0, \ SWAPCHAIN_WRONG_NEXT, LAYER_NAME, "%s() called with non-NULL value for %s->pNext.", __FUNCTION__, (obj)) \ : VK_FALSE // NOTE: The following struct's/typedef's are for keeping track of // info that is used for validating the WSI extensions. // Forward declarations: struct _SwpInstance; struct _SwpSurface; struct _SwpPhysicalDevice; struct _SwpDevice; struct _SwpSwapchain; struct _SwpImage; struct _SwpQueue; typedef _SwpInstance SwpInstance; typedef _SwpSurface SwpSurface; typedef _SwpPhysicalDevice SwpPhysicalDevice; typedef _SwpDevice SwpDevice; typedef _SwpSwapchain SwpSwapchain; typedef _SwpImage SwpImage; typedef _SwpQueue SwpQueue; // Create one of these for each VkInstance: struct _SwpInstance { // The actual handle for this VkInstance: VkInstance instance; // Remember the VkSurfaceKHR's that are created for this VkInstance: unordered_map<VkSurfaceKHR, SwpSurface *> surfaces; // When vkEnumeratePhysicalDevices is called, the VkPhysicalDevice's are // remembered: unordered_map<VkPhysicalDevice, SwpPhysicalDevice *> physicalDevices; // Set to true if VK_KHR_SURFACE_EXTENSION_NAME was enabled for this VkInstance: bool surfaceExtensionEnabled; // TODO: Add additional booleans for platform-specific extensions: #ifdef VK_USE_PLATFORM_ANDROID_KHR // Set to true if VK_KHR_ANDROID_SURFACE_EXTENSION_NAME was enabled for this VkInstance: bool androidSurfaceExtensionEnabled; #endif // VK_USE_PLATFORM_ANDROID_KHR #ifdef VK_USE_PLATFORM_MIR_KHR // Set to true if VK_KHR_MIR_SURFACE_EXTENSION_NAME was enabled for this VkInstance: bool mirSurfaceExtensionEnabled; #endif // VK_USE_PLATFORM_MIR_KHR #ifdef VK_USE_PLATFORM_WAYLAND_KHR // Set to true if VK_KHR_WAYLAND_SURFACE_EXTENSION_NAME was enabled for this VkInstance: bool 
waylandSurfaceExtensionEnabled; #endif // VK_USE_PLATFORM_WAYLAND_KHR #ifdef VK_USE_PLATFORM_WIN32_KHR // Set to true if VK_KHR_WIN32_SURFACE_EXTENSION_NAME was enabled for this VkInstance: bool win32SurfaceExtensionEnabled; #endif // VK_USE_PLATFORM_WIN32_KHR #ifdef VK_USE_PLATFORM_XCB_KHR // Set to true if VK_KHR_XCB_SURFACE_EXTENSION_NAME was enabled for this VkInstance: bool xcbSurfaceExtensionEnabled; #endif // VK_USE_PLATFORM_XCB_KHR #ifdef VK_USE_PLATFORM_XLIB_KHR // Set to true if VK_KHR_XLIB_SURFACE_EXTENSION_NAME was enabled for this VkInstance: bool xlibSurfaceExtensionEnabled; #endif // VK_USE_PLATFORM_XLIB_KHR }; // Create one of these for each VkSurfaceKHR: struct _SwpSurface { // The actual handle for this VkSurfaceKHR: VkSurfaceKHR surface; // VkInstance that this VkSurfaceKHR is associated with: SwpInstance *pInstance; // When vkCreateSwapchainKHR is called, the VkSwapchainKHR's are // remembered: unordered_map<VkSwapchainKHR, SwpSwapchain *> swapchains; // 'true' if pAllocator was non-NULL when vkCreate*SurfaceKHR was called: bool usedAllocatorToCreate; // Value of pQueueFamilyPropertyCount that was returned by the // vkGetPhysicalDeviceQueueFamilyProperties() function: uint32_t numQueueFamilyIndexSupport; // Array of VkBool32's that is initialized by the // vkGetPhysicalDeviceSurfaceSupportKHR() function. First call for a given // surface allocates and initializes this array to false for all // queueFamilyIndex's (and sets numQueueFamilyIndexSupport to non-zero). 
// All calls set the entry for a given queueFamilyIndex: VkBool32 *pQueueFamilyIndexSupport; }; // Create one of these for each VkPhysicalDevice within a VkInstance: struct _SwpPhysicalDevice { // The actual handle for this VkPhysicalDevice: VkPhysicalDevice physicalDevice; // Corresponding VkDevice (and info) to this VkPhysicalDevice: SwpDevice *pDevice; // VkInstance that this VkPhysicalDevice is associated with: SwpInstance *pInstance; // Records results of vkGetPhysicalDeviceQueueFamilyProperties()'s // numOfQueueFamilies parameter when pQueueFamilyProperties is NULL: bool gotQueueFamilyPropertyCount; uint32_t numOfQueueFamilies; // Record all surfaces that vkGetPhysicalDeviceSurfaceSupportKHR() was // called for: unordered_map<VkSurfaceKHR, VkBool32> supportedSurfaces; // TODO: Record/use this info per-surface, not per-device, once a // non-dispatchable surface object is added to WSI: // Results of vkGetPhysicalDeviceSurfaceCapabilitiesKHR(): bool gotSurfaceCapabilities; VkSurfaceCapabilitiesKHR surfaceCapabilities; // TODO: Record/use this info per-surface, not per-device, once a // non-dispatchable surface object is added to WSI: // Count and VkSurfaceFormatKHR's returned by vkGetPhysicalDeviceSurfaceFormatsKHR(): uint32_t surfaceFormatCount; VkSurfaceFormatKHR *pSurfaceFormats; // TODO: Record/use this info per-surface, not per-device, once a // non-dispatchable surface object is added to WSI: // Count and VkPresentModeKHR's returned by vkGetPhysicalDeviceSurfacePresentModesKHR(): uint32_t presentModeCount; VkPresentModeKHR *pPresentModes; }; // Create one of these for each VkDevice within a VkInstance: struct _SwpDevice { // The actual handle for this VkDevice: VkDevice device; // Corresponding VkPhysicalDevice (and info) to this VkDevice: SwpPhysicalDevice *pPhysicalDevice; // Set to true if VK_KHR_SWAPCHAIN_EXTENSION_NAME was enabled: bool swapchainExtensionEnabled; // When vkCreateSwapchainKHR is called, the VkSwapchainKHR's are // remembered: unordered_map<VkSwapchainKHR, SwpSwapchain *> swapchains; // When 
vkGetDeviceQueue is called, the VkQueue's are remembered: unordered_map<VkQueue, SwpQueue *> queues; }; // Create one of these for each VkImage within a VkSwapchainKHR: struct _SwpImage { // The actual handle for this VkImage: VkImage image; // Corresponding VkSwapchainKHR (and info) to this VkImage: SwpSwapchain *pSwapchain; // true if application got this image from vkAcquireNextImageKHR(), and // hasn't yet called vkQueuePresentKHR() for it; otherwise false: bool ownedByApp; }; // Create one of these for each VkSwapchainKHR within a VkDevice: struct _SwpSwapchain { // The actual handle for this VkSwapchainKHR: VkSwapchainKHR swapchain; // Corresponding VkDevice (and info) to this VkSwapchainKHR: SwpDevice *pDevice; // Corresponding VkSurfaceKHR to this VkSwapchainKHR: SwpSurface *pSurface; // When vkGetSwapchainImagesKHR is called, the VkImage's are // remembered: uint32_t imageCount; unordered_map<int, SwpImage> images; // 'true' if pAllocator was non-NULL when vkCreateSwapchainKHR was called: bool usedAllocatorToCreate; }; // Create one of these for each VkQueue within a VkDevice: struct _SwpQueue { // The actual handle for this VkQueue: VkQueue queue; // Corresponding VkDevice (and info) to this VkQueue: SwpDevice *pDevice; // Which queueFamilyIndex this VkQueue is associated with: uint32_t queueFamilyIndex; }; struct layer_data { debug_report_data *report_data; std::vector<VkDebugReportCallbackEXT> logging_callback; VkLayerDispatchTable *device_dispatch_table; VkLayerInstanceDispatchTable *instance_dispatch_table; // NOTE: The following are for keeping track of info that is used for // validating the WSI extensions. 
std::unordered_map<VkInstance, SwpInstance> instanceMap; std::unordered_map<VkSurfaceKHR, SwpSurface> surfaceMap; std::unordered_map<VkPhysicalDevice, SwpPhysicalDevice> physicalDeviceMap; std::unordered_map<VkDevice, SwpDevice> deviceMap; std::unordered_map<VkSwapchainKHR, SwpSwapchain> swapchainMap; std::unordered_map<VkQueue, SwpQueue> queueMap; layer_data() : report_data(nullptr), device_dispatch_table(nullptr), instance_dispatch_table(nullptr){}; }; #endif // SWAPCHAIN_H Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/threading.cpp /* * Vulkan * * Copyright (C) 2015 Valve, Inc. * Copyright (C) 2016 Google, Inc. * * Permission is hereby granted, free of charge, to any person obtaining a * copy of this software and associated documentation files (the "Software"), * to deal in the Software without restriction, including without limitation * the rights to use, copy, modify, merge, publish, distribute, sublicense, * and/or sell copies of the Software, and to permit persons to whom the * Software is furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included * in all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. 
*/ #include #include #include #include #include #include "vk_loader_platform.h" #include "vulkan/vk_layer.h" #include "vk_layer_config.h" #include "vk_layer_extension_utils.h" #include "vk_layer_utils.h" #include "vk_enum_validate_helper.h" #include "vk_struct_validate_helper.h" #include "vk_layer_table.h" #include "vk_layer_logging.h" #include "threading.h" #include "vk_dispatch_table_helper.h" #include "vk_struct_string_helper_cpp.h" #include "vk_layer_data.h" #include "vk_layer_utils.h" #include "thread_check.h" static void initThreading(layer_data *my_data, const VkAllocationCallbacks *pAllocator) { layer_debug_actions(my_data->report_data, my_data->logging_callback, pAllocator, "google_threading"); if (!threadingLockInitialized) { loader_platform_thread_create_mutex(&threadingLock); loader_platform_thread_init_cond(&threadingCond); threadingLockInitialized = 1; } } VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance(const VkInstanceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator, VkInstance *pInstance) { VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO); assert(chain_info->u.pLayerInfo); PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr; PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance)fpGetInstanceProcAddr(NULL, "vkCreateInstance"); if (fpCreateInstance == NULL) { return VK_ERROR_INITIALIZATION_FAILED; } // Advance the link info for the next element on the chain chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext; VkResult result = fpCreateInstance(pCreateInfo, pAllocator, pInstance); if (result != VK_SUCCESS) return result; layer_data *my_data = get_my_data_ptr(get_dispatch_key(*pInstance), layer_data_map); my_data->instance_dispatch_table = new VkLayerInstanceDispatchTable; layer_init_instance_dispatch_table(*pInstance, my_data->instance_dispatch_table, fpGetInstanceProcAddr); my_data->report_data = 
debug_report_create_instance(my_data->instance_dispatch_table, *pInstance, pCreateInfo->enabledExtensionCount, pCreateInfo->ppEnabledExtensionNames); initThreading(my_data, pAllocator); return result; } VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(VkInstance instance, const VkAllocationCallbacks *pAllocator) { dispatch_key key = get_dispatch_key(instance); layer_data *my_data = get_my_data_ptr(key, layer_data_map); VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table; startWriteObject(my_data, instance); pTable->DestroyInstance(instance, pAllocator); finishWriteObject(my_data, instance); // Clean up logging callback, if any while (my_data->logging_callback.size() > 0) { VkDebugReportCallbackEXT callback = my_data->logging_callback.back(); layer_destroy_msg_callback(my_data->report_data, callback, pAllocator); my_data->logging_callback.pop_back(); } layer_debug_report_destroy_instance(my_data->report_data); delete my_data->instance_dispatch_table; layer_data_map.erase(key); if (layer_data_map.empty()) { // Release mutex when destroying last instance. 
        loader_platform_thread_delete_mutex(&threadingLock);
        threadingLockInitialized = 0;
    }
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(VkPhysicalDevice gpu, const VkDeviceCreateInfo *pCreateInfo,
                                                              const VkAllocationCallbacks *pAllocator, VkDevice *pDevice) {
    VkLayerDeviceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
    assert(chain_info->u.pLayerInfo);
    PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
    PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr;
    PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice)fpGetInstanceProcAddr(NULL, "vkCreateDevice");
    if (fpCreateDevice == NULL) {
        return VK_ERROR_INITIALIZATION_FAILED;
    }
    // Advance the link info for the next element on the chain
    chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;
    VkResult result = fpCreateDevice(gpu, pCreateInfo, pAllocator, pDevice);
    if (result != VK_SUCCESS) {
        return result;
    }
    layer_data *my_instance_data = get_my_data_ptr(get_dispatch_key(gpu), layer_data_map);
    layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(*pDevice), layer_data_map);
    // Setup device dispatch table
    my_device_data->device_dispatch_table = new VkLayerDispatchTable;
    layer_init_device_dispatch_table(*pDevice, my_device_data->device_dispatch_table, fpGetDeviceProcAddr);
    my_device_data->report_data = layer_debug_report_create_device(my_instance_data->report_data, *pDevice);
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(VkDevice device, const VkAllocationCallbacks *pAllocator) {
    dispatch_key key = get_dispatch_key(device);
    layer_data *dev_data = get_my_data_ptr(key, layer_data_map);
    startWriteObject(dev_data, device);
    dev_data->device_dispatch_table->DestroyDevice(device, pAllocator);
    finishWriteObject(dev_data, device);
    layer_data_map.erase(key);
}

static const VkExtensionProperties threading_extensions[] = {
    {VK_EXT_DEBUG_REPORT_EXTENSION_NAME, VK_EXT_DEBUG_REPORT_SPEC_VERSION}};

VK_LAYER_EXPORT VkResult VKAPI_CALL vkEnumerateInstanceExtensionProperties(const char *pLayerName, uint32_t *pCount,
                                                                           VkExtensionProperties *pProperties) {
    return util_GetExtensionProperties(ARRAY_SIZE(threading_extensions), threading_extensions, pCount, pProperties);
}

static const VkLayerProperties globalLayerProps[] = {{
    "VK_LAYER_GOOGLE_threading",
    VK_LAYER_API_VERSION, // specVersion
    1,
    "Google Validation Layer",
}};

VK_LAYER_EXPORT VkResult VKAPI_CALL vkEnumerateInstanceLayerProperties(uint32_t *pCount, VkLayerProperties *pProperties) {
    return util_GetLayerProperties(ARRAY_SIZE(globalLayerProps), globalLayerProps, pCount, pProperties);
}

static const VkLayerProperties deviceLayerProps[] = {{
    "VK_LAYER_GOOGLE_threading",
    VK_LAYER_API_VERSION, // specVersion
    1,
    "Google Validation Layer",
}};

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice,
                                                                                    const char *pLayerName, uint32_t *pCount,
                                                                                    VkExtensionProperties *pProperties) {
    if (pLayerName == NULL) {
        dispatch_key key = get_dispatch_key(physicalDevice);
        layer_data *my_data = get_my_data_ptr(key, layer_data_map);
        return my_data->instance_dispatch_table->EnumerateDeviceExtensionProperties(physicalDevice, NULL, pCount, pProperties);
    } else {
        // Threading layer does not have any device extensions
        return util_GetExtensionProperties(0, nullptr, pCount, pProperties);
    }
}

VK_LAYER_EXPORT VkResult VKAPI_CALL vkEnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice, uint32_t *pCount,
                                                                     VkLayerProperties *pProperties) {
    return util_GetLayerProperties(ARRAY_SIZE(deviceLayerProps), deviceLayerProps, pCount, pProperties);
}

static inline PFN_vkVoidFunction layer_intercept_proc(const char *name) {
    for (size_t i = 0; i < sizeof(procmap) / sizeof(procmap[0]); i++) {
        if (!strcmp(name, procmap[i].name))
            return procmap[i].pFunc;
    }
    return NULL;
}

static inline PFN_vkVoidFunction layer_intercept_instance_proc(const char *name) {
    if (!name || name[0] != 'v' || name[1] != 'k')
        return NULL;
    name += 2;
    if (!strcmp(name, "CreateInstance"))
        return (PFN_vkVoidFunction)vkCreateInstance;
    if (!strcmp(name, "DestroyInstance"))
        return (PFN_vkVoidFunction)vkDestroyInstance;
    if (!strcmp(name, "EnumerateInstanceExtensionProperties"))
        return (PFN_vkVoidFunction)vkEnumerateInstanceExtensionProperties;
    if (!strcmp(name, "EnumerateInstanceLayerProperties"))
        return (PFN_vkVoidFunction)vkEnumerateInstanceLayerProperties;
    if (!strcmp(name, "EnumerateDeviceExtensionProperties"))
        return (PFN_vkVoidFunction)vkEnumerateDeviceExtensionProperties;
    if (!strcmp(name, "EnumerateDeviceLayerProperties"))
        return (PFN_vkVoidFunction)vkEnumerateDeviceLayerProperties;
    if (!strcmp(name, "CreateDevice"))
        return (PFN_vkVoidFunction)vkCreateDevice;
    if (!strcmp(name, "GetInstanceProcAddr"))
        return (PFN_vkVoidFunction)vkGetInstanceProcAddr;
    return NULL;
}

VK_LAYER_EXPORT PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(VkDevice device, const char *funcName) {
    PFN_vkVoidFunction addr;
    layer_data *dev_data;
    if (device == VK_NULL_HANDLE) {
        return NULL;
    }
    addr = layer_intercept_proc(funcName);
    if (addr)
        return addr;
    dev_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkLayerDispatchTable *pTable = dev_data->device_dispatch_table;
    if (pTable->GetDeviceProcAddr == NULL)
        return NULL;
    return pTable->GetDeviceProcAddr(device, funcName);
}

VK_LAYER_EXPORT PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(VkInstance instance, const char *funcName) {
    PFN_vkVoidFunction addr;
    layer_data *my_data;
    addr = layer_intercept_instance_proc(funcName);
    if (addr) {
        return addr;
    }
    if (instance == VK_NULL_HANDLE) {
        return NULL;
    }
    my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
    addr = debug_report_get_instance_proc_addr(my_data->report_data, funcName);
    if (addr) {
        return addr;
    }
    VkLayerInstanceDispatchTable *pTable = my_data->instance_dispatch_table;
    if (pTable->GetInstanceProcAddr == NULL) {
        return NULL;
    }
    return pTable->GetInstanceProcAddr(instance, funcName);
}

VK_LAYER_EXPORT VKAPI_ATTR VkResult VKAPI_CALL
vkCreateDebugReportCallbackEXT(VkInstance instance, const VkDebugReportCallbackCreateInfoEXT *pCreateInfo,
                               const VkAllocationCallbacks *pAllocator, VkDebugReportCallbackEXT *pMsgCallback) {
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
    startReadObject(my_data, instance);
    VkResult result =
        my_data->instance_dispatch_table->CreateDebugReportCallbackEXT(instance, pCreateInfo, pAllocator, pMsgCallback);
    if (VK_SUCCESS == result) {
        result = layer_create_msg_callback(my_data->report_data, pCreateInfo, pAllocator, pMsgCallback);
    }
    finishReadObject(my_data, instance);
    return result;
}

VK_LAYER_EXPORT VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(VkInstance instance, VkDebugReportCallbackEXT callback,
                                                                           const VkAllocationCallbacks *pAllocator) {
    layer_data *my_data = get_my_data_ptr(get_dispatch_key(instance), layer_data_map);
    startReadObject(my_data, instance);
    startWriteObject(my_data, callback);
    my_data->instance_dispatch_table->DestroyDebugReportCallbackEXT(instance, callback, pAllocator);
    layer_destroy_msg_callback(my_data->report_data, callback, pAllocator);
    finishReadObject(my_data, instance);
    finishWriteObject(my_data, callback);
}

VkResult VKAPI_CALL vkAllocateCommandBuffers(VkDevice device, const VkCommandBufferAllocateInfo *pAllocateInfo,
                                             VkCommandBuffer *pCommandBuffers) {
    dispatch_key key = get_dispatch_key(device);
    layer_data *my_data = get_my_data_ptr(key, layer_data_map);
    VkLayerDispatchTable *pTable = my_data->device_dispatch_table;
    VkResult result;
    startReadObject(my_data, device);
    startWriteObject(my_data, pAllocateInfo->commandPool);
    result = pTable->AllocateCommandBuffers(device, pAllocateInfo, pCommandBuffers);
    finishReadObject(my_data, device);
    finishWriteObject(my_data, pAllocateInfo->commandPool);
    // Record mapping from command buffer to command pool
    if (VK_SUCCESS == result) {
        for (uint32_t index = 0; index < pAllocateInfo->commandBufferCount; index++) {
            loader_platform_thread_lock_mutex(&threadingLock);
            command_pool_map[pCommandBuffers[index]] = pAllocateInfo->commandPool;
            loader_platform_thread_unlock_mutex(&threadingLock);
        }
    }
    return result;
}

void VKAPI_CALL vkFreeCommandBuffers(VkDevice device, VkCommandPool commandPool, uint32_t commandBufferCount,
                                     const VkCommandBuffer *pCommandBuffers) {
    dispatch_key key = get_dispatch_key(device);
    layer_data *my_data = get_my_data_ptr(key, layer_data_map);
    VkLayerDispatchTable *pTable = my_data->device_dispatch_table;
    const bool lockCommandPool = false; // pool is already directly locked
    startReadObject(my_data, device);
    startWriteObject(my_data, commandPool);
    for (uint32_t index = 0; index < commandBufferCount; index++) {
        startWriteObject(my_data, pCommandBuffers[index], lockCommandPool);
    }
    pTable->FreeCommandBuffers(device, commandPool, commandBufferCount, pCommandBuffers);
    finishReadObject(my_data, device);
    finishWriteObject(my_data, commandPool);
    for (uint32_t index = 0; index < commandBufferCount; index++) {
        finishWriteObject(my_data, pCommandBuffers[index], lockCommandPool);
        loader_platform_thread_lock_mutex(&threadingLock);
        command_pool_map.erase(pCommandBuffers[index]);
        loader_platform_thread_unlock_mutex(&threadingLock);
    }
}

// File: Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/threading.h

/* Copyright (c) 2015-2016 The Khronos Group Inc.
 * Copyright (c) 2015-2016 Valve Corporation
 * Copyright (c) 2015-2016 LunarG, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and/or associated documentation files (the "Materials"), to
 * deal in the Materials without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Materials, and to permit persons to whom the Materials
 * are furnished to do so, subject to the following conditions:
 *
 * The above copyright notice(s) and this permission notice shall be included
 * in all copies or substantial portions of the Materials.
 *
 * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 *
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
 * USE OR OTHER DEALINGS IN THE MATERIALS
 *
 * Author: Cody Northrop
 * Author: Mike Stroyan
 */

#ifndef THREADING_H
#define THREADING_H

#include <unordered_map>
#include <vector>
#include "vk_layer_config.h"
#include "vk_layer_logging.h"

#if defined(__LP64__) || defined(_WIN64) || defined(__x86_64__) || defined(_M_X64) || defined(__ia64) || defined(_M_IA64) || \
    defined(__aarch64__) || defined(__powerpc64__)
// If pointers are 64-bit, then there can be separate counters for each
// NONDISPATCHABLE_HANDLE type. Otherwise they are all typedefs of uint64_t.
#define DISTINCT_NONDISPATCHABLE_HANDLES
#endif

// Threading checker error codes
typedef enum _THREADING_CHECKER_ERROR {
    THREADING_CHECKER_NONE,                // Used for INFO & other non-error messages
    THREADING_CHECKER_MULTIPLE_THREADS,    // Object used simultaneously by multiple threads
    THREADING_CHECKER_SINGLE_THREAD_REUSE, // Object used simultaneously by recursion in single thread
} THREADING_CHECKER_ERROR;

struct object_use_data {
    loader_platform_thread_id thread;
    int reader_count;
    int writer_count;
};

struct layer_data;

static int threadingLockInitialized = 0;
static loader_platform_thread_mutex threadingLock;
static loader_platform_thread_cond threadingCond;

template <typename T> class counter {
  public:
    const char *typeName;
    VkDebugReportObjectTypeEXT objectType;
    std::unordered_map<T, object_use_data> uses;
    void startWrite(debug_report_data *report_data, T object) {
        VkBool32 skipCall = VK_FALSE;
        loader_platform_thread_id tid = loader_platform_get_thread_id();
        loader_platform_thread_lock_mutex(&threadingLock);
        if (uses.find(object) == uses.end()) {
            // There is no current use of the object. Record writer thread.
            struct object_use_data *use_data = &uses[object];
            use_data->reader_count = 0;
            use_data->writer_count = 1;
            use_data->thread = tid;
        } else {
            struct object_use_data *use_data = &uses[object];
            if (use_data->reader_count == 0) {
                // There are no readers. Two writers just collided.
                if (use_data->thread != tid) {
                    skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, objectType, (uint64_t)(object),
                                        /*location*/ 0, THREADING_CHECKER_MULTIPLE_THREADS, "THREADING",
                                        "THREADING ERROR : object of type %s is simultaneously used in thread %ld and thread %ld",
                                        typeName, use_data->thread, tid);
                    if (skipCall) {
                        // Wait for thread-safe access to object instead of skipping call.
                        while (uses.find(object) != uses.end()) {
                            loader_platform_thread_cond_wait(&threadingCond, &threadingLock);
                        }
                        // There is now no current use of the object. Record writer thread.
                        struct object_use_data *use_data = &uses[object];
                        use_data->thread = tid;
                        use_data->reader_count = 0;
                        use_data->writer_count = 1;
                    } else {
                        // Continue with an unsafe use of the object.
                        use_data->thread = tid;
                        use_data->writer_count += 1;
                    }
                } else {
                    // This is either safe multiple use in one call, or recursive use.
                    // There is no way to make recursion safe. Just forge ahead.
                    use_data->writer_count += 1;
                }
            } else {
                // There are readers. This writer collided with them.
                if (use_data->thread != tid) {
                    skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, objectType, (uint64_t)(object),
                                        /*location*/ 0, THREADING_CHECKER_MULTIPLE_THREADS, "THREADING",
                                        "THREADING ERROR : object of type %s is simultaneously used in thread %ld and thread %ld",
                                        typeName, use_data->thread, tid);
                    if (skipCall) {
                        // Wait for thread-safe access to object instead of skipping call.
                        while (uses.find(object) != uses.end()) {
                            loader_platform_thread_cond_wait(&threadingCond, &threadingLock);
                        }
                        // There is now no current use of the object. Record writer thread.
                        struct object_use_data *use_data = &uses[object];
                        use_data->thread = tid;
                        use_data->reader_count = 0;
                        use_data->writer_count = 1;
                    } else {
                        // Continue with an unsafe use of the object.
                        use_data->thread = tid;
                        use_data->writer_count += 1;
                    }
                } else {
                    // This is either safe multiple use in one call, or recursive use.
                    // There is no way to make recursion safe. Just forge ahead.
                    use_data->writer_count += 1;
                }
            }
        }
        loader_platform_thread_unlock_mutex(&threadingLock);
    }
    void finishWrite(T object) {
        // Object is no longer in use
        loader_platform_thread_lock_mutex(&threadingLock);
        uses[object].writer_count -= 1;
        if ((uses[object].reader_count == 0) && (uses[object].writer_count == 0)) {
            uses.erase(object);
        }
        // Notify any waiting threads that this object may be safe to use
        loader_platform_thread_cond_broadcast(&threadingCond);
        loader_platform_thread_unlock_mutex(&threadingLock);
    }
    void startRead(debug_report_data *report_data, T object) {
        VkBool32 skipCall = VK_FALSE;
        loader_platform_thread_id tid = loader_platform_get_thread_id();
        loader_platform_thread_lock_mutex(&threadingLock);
        if (uses.find(object) == uses.end()) {
            // There is no current use of the object. Record reader count
            struct object_use_data *use_data = &uses[object];
            use_data->reader_count = 1;
            use_data->writer_count = 0;
            use_data->thread = tid;
        } else if (uses[object].writer_count > 0 && uses[object].thread != tid) {
            // There is a writer of the object.
            skipCall |= log_msg(report_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, objectType, (uint64_t)(object), /*location*/ 0,
                                THREADING_CHECKER_MULTIPLE_THREADS, "THREADING",
                                "THREADING ERROR : object of type %s is simultaneously used in thread %ld and thread %ld",
                                typeName, uses[object].thread, tid);
            if (skipCall) {
                // Wait for thread-safe access to object instead of skipping call.
                while (uses.find(object) != uses.end()) {
                    loader_platform_thread_cond_wait(&threadingCond, &threadingLock);
                }
                // There is no current use of the object. Record reader count
                struct object_use_data *use_data = &uses[object];
                use_data->reader_count = 1;
                use_data->writer_count = 0;
                use_data->thread = tid;
            } else {
                uses[object].reader_count += 1;
            }
        } else {
            // There are other readers of the object. Increase reader count.
            uses[object].reader_count += 1;
        }
        loader_platform_thread_unlock_mutex(&threadingLock);
    }
    void finishRead(T object) {
        loader_platform_thread_lock_mutex(&threadingLock);
        uses[object].reader_count -= 1;
        if ((uses[object].reader_count == 0) && (uses[object].writer_count == 0)) {
            uses.erase(object);
        }
        // Notify any waiting threads that this object may be safe to use
        loader_platform_thread_cond_broadcast(&threadingCond);
        loader_platform_thread_unlock_mutex(&threadingLock);
    }
    counter(const char *name = "", VkDebugReportObjectTypeEXT type = VK_DEBUG_REPORT_OBJECT_TYPE_UNKNOWN_EXT) {
        typeName = name;
        objectType = type;
    }
};

struct layer_data {
    debug_report_data *report_data;
    std::vector<VkDebugReportCallbackEXT> logging_callback;
    VkLayerDispatchTable *device_dispatch_table;
    VkLayerInstanceDispatchTable *instance_dispatch_table;
    counter<VkCommandBuffer> c_VkCommandBuffer;
    counter<VkDevice> c_VkDevice;
    counter<VkInstance> c_VkInstance;
    counter<VkQueue> c_VkQueue;
#ifdef DISTINCT_NONDISPATCHABLE_HANDLES
    counter<VkBuffer> c_VkBuffer;
    counter<VkBufferView> c_VkBufferView;
    counter<VkCommandPool> c_VkCommandPool;
    counter<VkDescriptorPool> c_VkDescriptorPool;
    counter<VkDescriptorSet> c_VkDescriptorSet;
    counter<VkDescriptorSetLayout> c_VkDescriptorSetLayout;
    counter<VkDeviceMemory> c_VkDeviceMemory;
    counter<VkEvent> c_VkEvent;
    counter<VkFence> c_VkFence;
    counter<VkFramebuffer> c_VkFramebuffer;
    counter<VkImage> c_VkImage;
    counter<VkImageView> c_VkImageView;
    counter<VkPipeline> c_VkPipeline;
    counter<VkPipelineCache> c_VkPipelineCache;
    counter<VkPipelineLayout> c_VkPipelineLayout;
    counter<VkQueryPool> c_VkQueryPool;
    counter<VkRenderPass> c_VkRenderPass;
    counter<VkSampler> c_VkSampler;
    counter<VkSemaphore> c_VkSemaphore;
    counter<VkShaderModule> c_VkShaderModule;
    counter<VkDebugReportCallbackEXT> c_VkDebugReportCallbackEXT;
#else  // DISTINCT_NONDISPATCHABLE_HANDLES
    counter<uint64_t> c_uint64_t;
#endif // DISTINCT_NONDISPATCHABLE_HANDLES
    layer_data()
        : report_data(nullptr), c_VkCommandBuffer("VkCommandBuffer", VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT),
          c_VkDevice("VkDevice", VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT),
          c_VkInstance("VkInstance", VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT),
          c_VkQueue("VkQueue", VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT),
#ifdef DISTINCT_NONDISPATCHABLE_HANDLES
          c_VkBuffer("VkBuffer", VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT),
          c_VkBufferView("VkBufferView", VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_VIEW_EXT),
          c_VkCommandPool("VkCommandPool", VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_POOL_EXT),
          c_VkDescriptorPool("VkDescriptorPool", VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT),
          c_VkDescriptorSet("VkDescriptorSet", VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT),
          c_VkDescriptorSetLayout("VkDescriptorSetLayout", VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT),
          c_VkDeviceMemory("VkDeviceMemory", VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT),
          c_VkEvent("VkEvent", VK_DEBUG_REPORT_OBJECT_TYPE_EVENT_EXT),
          c_VkFence("VkFence", VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT),
          c_VkFramebuffer("VkFramebuffer", VK_DEBUG_REPORT_OBJECT_TYPE_FRAMEBUFFER_EXT),
          c_VkImage("VkImage", VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT),
          c_VkImageView("VkImageView", VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT),
          c_VkPipeline("VkPipeline", VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT),
          c_VkPipelineCache("VkPipelineCache", VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_CACHE_EXT),
          c_VkPipelineLayout("VkPipelineLayout", VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_LAYOUT_EXT),
          c_VkQueryPool("VkQueryPool", VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT),
          c_VkRenderPass("VkRenderPass", VK_DEBUG_REPORT_OBJECT_TYPE_RENDER_PASS_EXT),
          c_VkSampler("VkSampler", VK_DEBUG_REPORT_OBJECT_TYPE_SAMPLER_EXT),
          c_VkSemaphore("VkSemaphore", VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT),
          c_VkShaderModule("VkShaderModule", VK_DEBUG_REPORT_OBJECT_TYPE_SHADER_MODULE_EXT),
          c_VkDebugReportCallbackEXT("VkDebugReportCallbackEXT", VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT)
#else  // DISTINCT_NONDISPATCHABLE_HANDLES
          c_uint64_t("NON_DISPATCHABLE_HANDLE", VK_DEBUG_REPORT_OBJECT_TYPE_UNKNOWN_EXT)
#endif // DISTINCT_NONDISPATCHABLE_HANDLES
              {};
};

#define WRAPPER(type)                                                                                                 \
    static void startWriteObject(struct layer_data *my_data, type object) {                                           \
        my_data->c_##type.startWrite(my_data->report_data, object);                                                   \
    }                                                                                                                 \
    static void finishWriteObject(struct layer_data *my_data, type object) { my_data->c_##type.finishWrite(object); } \
    static void startReadObject(struct layer_data *my_data, type object) {                                            \
        my_data->c_##type.startRead(my_data->report_data, object);                                                    \
    }                                                                                                                 \
    static void finishReadObject(struct layer_data *my_data, type object) { my_data->c_##type.finishRead(object); }

WRAPPER(VkDevice)
WRAPPER(VkInstance)
WRAPPER(VkQueue)
#ifdef DISTINCT_NONDISPATCHABLE_HANDLES
WRAPPER(VkBuffer)
WRAPPER(VkBufferView)
WRAPPER(VkCommandPool)
WRAPPER(VkDescriptorPool)
WRAPPER(VkDescriptorSet)
WRAPPER(VkDescriptorSetLayout)
WRAPPER(VkDeviceMemory)
WRAPPER(VkEvent)
WRAPPER(VkFence)
WRAPPER(VkFramebuffer)
WRAPPER(VkImage)
WRAPPER(VkImageView)
WRAPPER(VkPipeline)
WRAPPER(VkPipelineCache)
WRAPPER(VkPipelineLayout)
WRAPPER(VkQueryPool)
WRAPPER(VkRenderPass)
WRAPPER(VkSampler)
WRAPPER(VkSemaphore)
WRAPPER(VkShaderModule)
WRAPPER(VkDebugReportCallbackEXT)
#else  // DISTINCT_NONDISPATCHABLE_HANDLES
WRAPPER(uint64_t)
#endif // DISTINCT_NONDISPATCHABLE_HANDLES

static std::unordered_map<void *, layer_data *> layer_data_map;
static std::unordered_map<VkCommandBuffer, VkCommandPool> command_pool_map;

// VkCommandBuffer needs check for implicit use of command pool
static void startWriteObject(struct layer_data *my_data, VkCommandBuffer object, bool lockPool = true) {
    if (lockPool) {
        loader_platform_thread_lock_mutex(&threadingLock);
        VkCommandPool pool = command_pool_map[object];
        loader_platform_thread_unlock_mutex(&threadingLock);
        startWriteObject(my_data, pool);
    }
    my_data->c_VkCommandBuffer.startWrite(my_data->report_data, object);
}
static void finishWriteObject(struct layer_data *my_data, VkCommandBuffer object, bool lockPool = true) {
    my_data->c_VkCommandBuffer.finishWrite(object);
    if (lockPool) {
        loader_platform_thread_lock_mutex(&threadingLock);
        VkCommandPool pool = command_pool_map[object];
        loader_platform_thread_unlock_mutex(&threadingLock);
        finishWriteObject(my_data, pool);
    }
}
static void startReadObject(struct layer_data *my_data, VkCommandBuffer object) {
    loader_platform_thread_lock_mutex(&threadingLock);
    VkCommandPool pool = command_pool_map[object];
    loader_platform_thread_unlock_mutex(&threadingLock);
    startReadObject(my_data, pool);
    my_data->c_VkCommandBuffer.startRead(my_data->report_data, object);
}
static void finishReadObject(struct layer_data *my_data, VkCommandBuffer object) {
    my_data->c_VkCommandBuffer.finishRead(object);
    loader_platform_thread_lock_mutex(&threadingLock);
    VkCommandPool pool = command_pool_map[object];
    loader_platform_thread_unlock_mutex(&threadingLock);
    finishReadObject(my_data, pool);
}
#endif // THREADING_H

// File: Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/unique_objects.h

/* Copyright (c) 2015-2016 The Khronos Group Inc.
 * Copyright (c) 2015-2016 Valve Corporation
 * Copyright (c) 2015-2016 LunarG, Inc.
 * Copyright (C) 2015-2016 Google Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and/or associated documentation files (the "Materials"), to
 * deal in the Materials without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Materials, and to permit persons to whom the Materials
 * are furnished to do so, subject to the following conditions:
 *
 * The above copyright notice(s) and this permission notice shall be included
 * in all copies or substantial portions of the Materials.
 *
 * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 *
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
 * USE OR OTHER DEALINGS IN THE MATERIALS
 *
 * Author: Tobin Ehlis
 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include "vulkan/vulkan.h"
#include "vk_loader_platform.h"
#include <vector>
#include <unordered_map>
#include "vulkan/vk_layer.h"
#include "vk_layer_config.h"
#include "vk_layer_table.h"
#include "vk_layer_data.h"
#include "vk_layer_logging.h"
#include "vk_layer_extension_utils.h"
#include "vk_safe_struct.h"
#include "vk_layer_utils.h"

struct layer_data {
    bool wsi_enabled;
    layer_data() : wsi_enabled(false){};
};

struct instExts {
    bool wsi_enabled;
    bool xlib_enabled;
    bool xcb_enabled;
    bool wayland_enabled;
    bool mir_enabled;
    bool android_enabled;
    bool win32_enabled;
};

static std::unordered_map<void *, struct instExts> instanceExtMap;
static std::unordered_map<void *, layer_data *> layer_data_map;
static device_table_map unique_objects_device_table_map;
static instance_table_map unique_objects_instance_table_map;

// Structure to wrap returned non-dispatchable objects to guarantee they have unique handles;
// the address of the struct will be used as the unique handle
struct VkUniqueObject {
    uint64_t actualObject;
};

// Handle CreateInstance
static void createInstanceRegisterExtensions(const VkInstanceCreateInfo *pCreateInfo, VkInstance instance) {
    uint32_t i;
    VkLayerInstanceDispatchTable *pDisp = get_dispatch_table(unique_objects_instance_table_map, instance);
    PFN_vkGetInstanceProcAddr gpa = pDisp->GetInstanceProcAddr;
    pDisp->DestroySurfaceKHR = (PFN_vkDestroySurfaceKHR)gpa(instance, "vkDestroySurfaceKHR");
    pDisp->GetPhysicalDeviceSurfaceSupportKHR =
        (PFN_vkGetPhysicalDeviceSurfaceSupportKHR)gpa(instance, "vkGetPhysicalDeviceSurfaceSupportKHR");
    pDisp->GetPhysicalDeviceSurfaceCapabilitiesKHR =
        (PFN_vkGetPhysicalDeviceSurfaceCapabilitiesKHR)gpa(instance, "vkGetPhysicalDeviceSurfaceCapabilitiesKHR");
    pDisp->GetPhysicalDeviceSurfaceFormatsKHR =
        (PFN_vkGetPhysicalDeviceSurfaceFormatsKHR)gpa(instance, "vkGetPhysicalDeviceSurfaceFormatsKHR");
    pDisp->GetPhysicalDeviceSurfacePresentModesKHR =
        (PFN_vkGetPhysicalDeviceSurfacePresentModesKHR)gpa(instance, "vkGetPhysicalDeviceSurfacePresentModesKHR");
#ifdef VK_USE_PLATFORM_WIN32_KHR
    pDisp->CreateWin32SurfaceKHR = (PFN_vkCreateWin32SurfaceKHR)gpa(instance, "vkCreateWin32SurfaceKHR");
    pDisp->GetPhysicalDeviceWin32PresentationSupportKHR =
        (PFN_vkGetPhysicalDeviceWin32PresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceWin32PresentationSupportKHR");
#endif // VK_USE_PLATFORM_WIN32_KHR
#ifdef VK_USE_PLATFORM_XCB_KHR
    pDisp->CreateXcbSurfaceKHR = (PFN_vkCreateXcbSurfaceKHR)gpa(instance, "vkCreateXcbSurfaceKHR");
    pDisp->GetPhysicalDeviceXcbPresentationSupportKHR =
        (PFN_vkGetPhysicalDeviceXcbPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceXcbPresentationSupportKHR");
#endif // VK_USE_PLATFORM_XCB_KHR
#ifdef VK_USE_PLATFORM_XLIB_KHR
    pDisp->CreateXlibSurfaceKHR = (PFN_vkCreateXlibSurfaceKHR)gpa(instance, "vkCreateXlibSurfaceKHR");
    pDisp->GetPhysicalDeviceXlibPresentationSupportKHR =
        (PFN_vkGetPhysicalDeviceXlibPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceXlibPresentationSupportKHR");
#endif // VK_USE_PLATFORM_XLIB_KHR
#ifdef VK_USE_PLATFORM_MIR_KHR
    pDisp->CreateMirSurfaceKHR = (PFN_vkCreateMirSurfaceKHR)gpa(instance, "vkCreateMirSurfaceKHR");
    pDisp->GetPhysicalDeviceMirPresentationSupportKHR =
        (PFN_vkGetPhysicalDeviceMirPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceMirPresentationSupportKHR");
#endif // VK_USE_PLATFORM_MIR_KHR
#ifdef VK_USE_PLATFORM_WAYLAND_KHR
    pDisp->CreateWaylandSurfaceKHR = (PFN_vkCreateWaylandSurfaceKHR)gpa(instance, "vkCreateWaylandSurfaceKHR");
    pDisp->GetPhysicalDeviceWaylandPresentationSupportKHR =
        (PFN_vkGetPhysicalDeviceWaylandPresentationSupportKHR)gpa(instance, "vkGetPhysicalDeviceWaylandPresentationSupportKHR");
#endif // VK_USE_PLATFORM_WAYLAND_KHR
#ifdef VK_USE_PLATFORM_ANDROID_KHR
    pDisp->CreateAndroidSurfaceKHR = (PFN_vkCreateAndroidSurfaceKHR)gpa(instance, "vkCreateAndroidSurfaceKHR");
#endif // VK_USE_PLATFORM_ANDROID_KHR
    instanceExtMap[pDisp] = {};
    for (i = 0; i < pCreateInfo->enabledExtensionCount; i++) {
        if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_SURFACE_EXTENSION_NAME) == 0)
            instanceExtMap[pDisp].wsi_enabled = true;
#ifdef VK_USE_PLATFORM_XLIB_KHR
        if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_XLIB_SURFACE_EXTENSION_NAME) == 0)
            instanceExtMap[pDisp].xlib_enabled = true;
#endif
#ifdef VK_USE_PLATFORM_XCB_KHR
        if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_XCB_SURFACE_EXTENSION_NAME) == 0)
            instanceExtMap[pDisp].xcb_enabled = true;
#endif
#ifdef VK_USE_PLATFORM_WAYLAND_KHR
        if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_WAYLAND_SURFACE_EXTENSION_NAME) == 0)
            instanceExtMap[pDisp].wayland_enabled = true;
#endif
#ifdef VK_USE_PLATFORM_MIR_KHR
        if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_MIR_SURFACE_EXTENSION_NAME) == 0)
            instanceExtMap[pDisp].mir_enabled = true;
#endif
#ifdef VK_USE_PLATFORM_ANDROID_KHR
        if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_ANDROID_SURFACE_EXTENSION_NAME) == 0)
            instanceExtMap[pDisp].android_enabled = true;
#endif
#ifdef VK_USE_PLATFORM_WIN32_KHR
        if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_WIN32_SURFACE_EXTENSION_NAME) == 0)
            instanceExtMap[pDisp].win32_enabled = true;
#endif
    }
}

VkResult explicit_CreateInstance(const VkInstanceCreateInfo *pCreateInfo, const VkAllocationCallbacks *pAllocator,
                                 VkInstance *pInstance) {
    VkLayerInstanceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
    assert(chain_info->u.pLayerInfo);
    PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
    PFN_vkCreateInstance fpCreateInstance = (PFN_vkCreateInstance)fpGetInstanceProcAddr(NULL, "vkCreateInstance");
    if (fpCreateInstance == NULL) {
        return VK_ERROR_INITIALIZATION_FAILED;
    }
    // Advance the link info for the next element on the chain
    chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;
    VkResult result = fpCreateInstance(pCreateInfo, pAllocator, pInstance);
    if (result != VK_SUCCESS) {
        return result;
    }
    initInstanceTable(*pInstance, fpGetInstanceProcAddr, unique_objects_instance_table_map);
    createInstanceRegisterExtensions(pCreateInfo, *pInstance);
    return result;
}

// Handle CreateDevice
static void createDeviceRegisterExtensions(const VkDeviceCreateInfo *pCreateInfo, VkDevice device) {
    layer_data *my_device_data = get_my_data_ptr(get_dispatch_key(device), layer_data_map);
    VkLayerDispatchTable *pDisp = get_dispatch_table(unique_objects_device_table_map, device);
    PFN_vkGetDeviceProcAddr gpa = pDisp->GetDeviceProcAddr;
    pDisp->CreateSwapchainKHR = (PFN_vkCreateSwapchainKHR)gpa(device, "vkCreateSwapchainKHR");
    pDisp->DestroySwapchainKHR = (PFN_vkDestroySwapchainKHR)gpa(device, "vkDestroySwapchainKHR");
    pDisp->GetSwapchainImagesKHR = (PFN_vkGetSwapchainImagesKHR)gpa(device, "vkGetSwapchainImagesKHR");
    pDisp->AcquireNextImageKHR = (PFN_vkAcquireNextImageKHR)gpa(device, "vkAcquireNextImageKHR");
    pDisp->QueuePresentKHR = (PFN_vkQueuePresentKHR)gpa(device, "vkQueuePresentKHR");
    my_device_data->wsi_enabled = false;
    for (uint32_t i = 0; i < pCreateInfo->enabledExtensionCount; i++) {
        if (strcmp(pCreateInfo->ppEnabledExtensionNames[i], VK_KHR_SWAPCHAIN_EXTENSION_NAME) == 0)
            my_device_data->wsi_enabled = true;
    }
}

VkResult explicit_CreateDevice(VkPhysicalDevice gpu, const VkDeviceCreateInfo *pCreateInfo,
                               const VkAllocationCallbacks *pAllocator, VkDevice *pDevice) {
    VkLayerDeviceCreateInfo *chain_info = get_chain_info(pCreateInfo, VK_LAYER_LINK_INFO);
    assert(chain_info->u.pLayerInfo);
    PFN_vkGetInstanceProcAddr fpGetInstanceProcAddr = chain_info->u.pLayerInfo->pfnNextGetInstanceProcAddr;
    PFN_vkGetDeviceProcAddr fpGetDeviceProcAddr = chain_info->u.pLayerInfo->pfnNextGetDeviceProcAddr;
    PFN_vkCreateDevice fpCreateDevice = (PFN_vkCreateDevice)fpGetInstanceProcAddr(NULL, "vkCreateDevice");
    if (fpCreateDevice == NULL) {
        return VK_ERROR_INITIALIZATION_FAILED;
    }
    // Advance the link info for the next element on the chain
    chain_info->u.pLayerInfo = chain_info->u.pLayerInfo->pNext;
    VkResult result = fpCreateDevice(gpu, pCreateInfo, pAllocator, pDevice);
    if (result != VK_SUCCESS) {
        return result;
    }
    // Setup layer's device dispatch table
    initDeviceTable(*pDevice, fpGetDeviceProcAddr, unique_objects_device_table_map);
    createDeviceRegisterExtensions(pCreateInfo, *pDevice);
    return result;
}

VkResult explicit_QueueSubmit(VkQueue queue, uint32_t submitCount, const VkSubmitInfo *pSubmits, VkFence fence) {
    // UNWRAP USES:
    //  0 : fence,VkFence
    if (VK_NULL_HANDLE != fence) {
        fence = (VkFence)((VkUniqueObject *)fence)->actualObject;
    }
    //  waitSemaphoreCount : pSubmits[submitCount]->pWaitSemaphores,VkSemaphore
    std::vector<VkSemaphore> original_pWaitSemaphores = {};
    //  signalSemaphoreCount : pSubmits[submitCount]->pSignalSemaphores,VkSemaphore
    std::vector<VkSemaphore> original_pSignalSemaphores = {};
    if (pSubmits) {
        for (uint32_t index0 = 0; index0 < submitCount; ++index0) {
            if (pSubmits[index0].pWaitSemaphores) {
                for (uint32_t index1 = 0; index1 < pSubmits[index0].waitSemaphoreCount; ++index1) {
                    VkSemaphore **ppSemaphore = (VkSemaphore **)&(pSubmits[index0].pWaitSemaphores);
                    original_pWaitSemaphores.push_back(pSubmits[index0].pWaitSemaphores[index1]);
                    *(ppSemaphore[index1]) =
                        (VkSemaphore)((VkUniqueObject *)pSubmits[index0].pWaitSemaphores[index1])->actualObject;
                }
            }
            if (pSubmits[index0].pSignalSemaphores) {
                for (uint32_t index1 = 0; index1 < pSubmits[index0].signalSemaphoreCount; ++index1) {
                    VkSemaphore **ppSemaphore = (VkSemaphore **)&(pSubmits[index0].pSignalSemaphores);
                    original_pSignalSemaphores.push_back(pSubmits[index0].pSignalSemaphores[index1]);
                    *(ppSemaphore[index1]) =
                        (VkSemaphore)((VkUniqueObject *)pSubmits[index0].pSignalSemaphores[index1])->actualObject;
                }
            }
        }
    }
    VkResult result =
        get_dispatch_table(unique_objects_device_table_map, queue)->QueueSubmit(queue, submitCount, pSubmits, fence);
    if (pSubmits) {
        for (uint32_t index0 = 0; index0 < submitCount; ++index0) {
            if (pSubmits[index0].pWaitSemaphores) {
                for (uint32_t index1 = 0; index1 < pSubmits[index0].waitSemaphoreCount; ++index1) {
                    VkSemaphore **ppSemaphore = (VkSemaphore **)&(pSubmits[index0].pWaitSemaphores);
                    *(ppSemaphore[index1]) = original_pWaitSemaphores[index1];
                }
            }
            if (pSubmits[index0].pSignalSemaphores) {
                for (uint32_t index1 = 0; index1 < pSubmits[index0].signalSemaphoreCount; ++index1) {
                    VkSemaphore **ppSemaphore = (VkSemaphore **)&(pSubmits[index0].pSignalSemaphores);
                    *(ppSemaphore[index1]) = original_pSignalSemaphores[index1];
                }
            }
        }
    }
    return result;
}

VkResult explicit_QueueBindSparse(VkQueue queue, uint32_t bindInfoCount, const VkBindSparseInfo *pBindInfo, VkFence fence) {
    // UNWRAP USES:
    //  0 : pBindInfo[bindInfoCount]->pBufferBinds[bufferBindCount]->buffer,VkBuffer,
    //      pBindInfo[bindInfoCount]->pBufferBinds[bufferBindCount]->pBinds[bindCount]->memory,VkDeviceMemory,
    //      pBindInfo[bindInfoCount]->pImageOpaqueBinds[imageOpaqueBindCount]->image,VkImage,
    //      pBindInfo[bindInfoCount]->pImageOpaqueBinds[imageOpaqueBindCount]->pBinds[bindCount]->memory,VkDeviceMemory,
    //      pBindInfo[bindInfoCount]->pImageBinds[imageBindCount]->image,VkImage,
    //      pBindInfo[bindInfoCount]->pImageBinds[imageBindCount]->pBinds[bindCount]->memory,VkDeviceMemory
    std::vector<VkBuffer> original_buffer = {};
    std::vector<VkDeviceMemory> original_memory1 = {};
    std::vector<VkImage> original_image1 = {};
    std::vector<VkDeviceMemory> original_memory2 = {};
    std::vector<VkImage> original_image2 = {};
    std::vector<VkDeviceMemory> original_memory3 = {};
    std::vector<VkSemaphore> original_pWaitSemaphores = {};
    std::vector<VkSemaphore> original_pSignalSemaphores = {};
    if (pBindInfo) {
        for (uint32_t index0 = 0; index0 < bindInfoCount; ++index0) {
            if (pBindInfo[index0].pBufferBinds) {
                for (uint32_t index1 = 0; index1 < pBindInfo[index0].bufferBindCount; ++index1) {
                    if (pBindInfo[index0].pBufferBinds[index1].buffer) {
                        VkBuffer
*pBuffer = (VkBuffer *)&(pBindInfo[index0].pBufferBinds[index1].buffer);
                        original_buffer.push_back(pBindInfo[index0].pBufferBinds[index1].buffer);
                        *(pBuffer) = (VkBuffer)((VkUniqueObject *)pBindInfo[index0].pBufferBinds[index1].buffer)->actualObject;
                    }
                    if (pBindInfo[index0].pBufferBinds[index1].pBinds) {
                        for (uint32_t index2 = 0; index2 < pBindInfo[index0].pBufferBinds[index1].bindCount; ++index2) {
                            if (pBindInfo[index0].pBufferBinds[index1].pBinds[index2].memory) {
                                VkDeviceMemory *pDeviceMemory =
                                    (VkDeviceMemory *)&(pBindInfo[index0].pBufferBinds[index1].pBinds[index2].memory);
                                original_memory1.push_back(pBindInfo[index0].pBufferBinds[index1].pBinds[index2].memory);
                                *(pDeviceMemory) =
                                    (VkDeviceMemory)((VkUniqueObject *)pBindInfo[index0].pBufferBinds[index1].pBinds[index2].memory)
                                        ->actualObject;
                            }
                        }
                    }
                }
            }
            if (pBindInfo[index0].pImageOpaqueBinds) {
                for (uint32_t index1 = 0; index1 < pBindInfo[index0].imageOpaqueBindCount; ++index1) {
                    if (pBindInfo[index0].pImageOpaqueBinds[index1].image) {
                        VkImage *pImage = (VkImage *)&(pBindInfo[index0].pImageOpaqueBinds[index1].image);
                        original_image1.push_back(pBindInfo[index0].pImageOpaqueBinds[index1].image);
                        *(pImage) = (VkImage)((VkUniqueObject *)pBindInfo[index0].pImageOpaqueBinds[index1].image)->actualObject;
                    }
                    if (pBindInfo[index0].pImageOpaqueBinds[index1].pBinds) {
                        for (uint32_t index2 = 0; index2 < pBindInfo[index0].pImageOpaqueBinds[index1].bindCount; ++index2) {
                            if (pBindInfo[index0].pImageOpaqueBinds[index1].pBinds[index2].memory) {
                                VkDeviceMemory *pDeviceMemory =
                                    (VkDeviceMemory *)&(pBindInfo[index0].pImageOpaqueBinds[index1].pBinds[index2].memory);
                                original_memory2.push_back(pBindInfo[index0].pImageOpaqueBinds[index1].pBinds[index2].memory);
                                *(pDeviceMemory) = (VkDeviceMemory)(
                                                       (VkUniqueObject *)pBindInfo[index0].pImageOpaqueBinds[index1].pBinds[index2].memory)
                                                       ->actualObject;
                            }
                        }
                    }
                }
            }
            if (pBindInfo[index0].pImageBinds) {
                for (uint32_t index1 = 0; index1 < pBindInfo[index0].imageBindCount; ++index1) {
                    if (pBindInfo[index0].pImageBinds[index1].image) {
                        VkImage *pImage = (VkImage *)&(pBindInfo[index0].pImageBinds[index1].image);
                        original_image2.push_back(pBindInfo[index0].pImageBinds[index1].image);
                        *(pImage) = (VkImage)((VkUniqueObject *)pBindInfo[index0].pImageBinds[index1].image)->actualObject;
                    }
                    if (pBindInfo[index0].pImageBinds[index1].pBinds) {
                        for (uint32_t index2 = 0; index2 < pBindInfo[index0].pImageBinds[index1].bindCount; ++index2) {
                            if (pBindInfo[index0].pImageBinds[index1].pBinds[index2].memory) {
                                VkDeviceMemory *pDeviceMemory =
                                    (VkDeviceMemory *)&(pBindInfo[index0].pImageBinds[index1].pBinds[index2].memory);
                                original_memory3.push_back(pBindInfo[index0].pImageBinds[index1].pBinds[index2].memory);
                                *(pDeviceMemory) =
                                    (VkDeviceMemory)((VkUniqueObject *)pBindInfo[index0].pImageBinds[index1].pBinds[index2].memory)
                                        ->actualObject;
                            }
                        }
                    }
                }
            }
            if (pBindInfo[index0].pWaitSemaphores) {
                for (uint32_t index1 = 0; index1 < pBindInfo[index0].waitSemaphoreCount; ++index1) {
                    VkSemaphore **ppSemaphore = (VkSemaphore **)&(pBindInfo[index0].pWaitSemaphores);
                    original_pWaitSemaphores.push_back(pBindInfo[index0].pWaitSemaphores[index1]);
                    *(ppSemaphore[index1]) =
                        (VkSemaphore)((VkUniqueObject *)pBindInfo[index0].pWaitSemaphores[index1])->actualObject;
                }
            }
            if (pBindInfo[index0].pSignalSemaphores) {
                for (uint32_t index1 = 0; index1 < pBindInfo[index0].signalSemaphoreCount; ++index1) {
                    VkSemaphore **ppSemaphore = (VkSemaphore **)&(pBindInfo[index0].pSignalSemaphores);
                    original_pSignalSemaphores.push_back(pBindInfo[index0].pSignalSemaphores[index1]);
                    *(ppSemaphore[index1]) =
                        (VkSemaphore)((VkUniqueObject *)pBindInfo[index0].pSignalSemaphores[index1])->actualObject;
                }
            }
        }
    }
    if (VK_NULL_HANDLE != fence) {
        fence = (VkFence)((VkUniqueObject *)fence)->actualObject;
    }
    VkResult result =
        get_dispatch_table(unique_objects_device_table_map, queue)->QueueBindSparse(queue, bindInfoCount, pBindInfo, fence);
    if (pBindInfo) {
        for (uint32_t index0 = 0; index0 < bindInfoCount; ++index0) {
            if (pBindInfo[index0].pBufferBinds) {
                for (uint32_t index1 = 0; index1 < pBindInfo[index0].bufferBindCount; ++index1) {
                    if (pBindInfo[index0].pBufferBinds[index1].buffer) {
                        VkBuffer *pBuffer = (VkBuffer *)&(pBindInfo[index0].pBufferBinds[index1].buffer);
                        *(pBuffer) = original_buffer[index1];
                    }
                    if (pBindInfo[index0].pBufferBinds[index1].pBinds) {
                        for (uint32_t index2 = 0; index2 < pBindInfo[index0].pBufferBinds[index1].bindCount; ++index2) {
                            if (pBindInfo[index0].pBufferBinds[index1].pBinds[index2].memory) {
                                VkDeviceMemory *pDeviceMemory =
                                    (VkDeviceMemory *)&(pBindInfo[index0].pBufferBinds[index1].pBinds[index2].memory);
                                *(pDeviceMemory) = original_memory1[index2];
                            }
                        }
                    }
                }
            }
            if (pBindInfo[index0].pImageOpaqueBinds) {
                for (uint32_t index1 = 0; index1 < pBindInfo[index0].imageOpaqueBindCount; ++index1) {
                    if (pBindInfo[index0].pImageOpaqueBinds[index1].image) {
                        VkImage *pImage = (VkImage *)&(pBindInfo[index0].pImageOpaqueBinds[index1].image);
                        *(pImage) = original_image1[index1];
                    }
                    if (pBindInfo[index0].pImageOpaqueBinds[index1].pBinds) {
                        for (uint32_t index2 = 0; index2 < pBindInfo[index0].pImageOpaqueBinds[index1].bindCount; ++index2) {
                            if (pBindInfo[index0].pImageOpaqueBinds[index1].pBinds[index2].memory) {
                                VkDeviceMemory *pDeviceMemory =
                                    (VkDeviceMemory *)&(pBindInfo[index0].pImageOpaqueBinds[index1].pBinds[index2].memory);
                                *(pDeviceMemory) = original_memory2[index2];
                            }
                        }
                    }
                }
            }
            if (pBindInfo[index0].pImageBinds) {
                for (uint32_t index1 = 0; index1 < pBindInfo[index0].imageBindCount; ++index1) {
                    if (pBindInfo[index0].pImageBinds[index1].image) {
                        VkImage *pImage = (VkImage *)&(pBindInfo[index0].pImageBinds[index1].image);
                        *(pImage) = original_image2[index1];
                    }
                    if (pBindInfo[index0].pImageBinds[index1].pBinds) {
                        for (uint32_t index2 = 0; index2 < pBindInfo[index0].pImageBinds[index1].bindCount; ++index2) {
                            if (pBindInfo[index0].pImageBinds[index1].pBinds[index2].memory) {
                                VkDeviceMemory *pDeviceMemory = (VkDeviceMemory
*)&(pBindInfo[index0].pImageBinds[index1].pBinds[index2].memory); *(pDeviceMemory) = original_memory3[index2]; } } } } } if (pBindInfo[index0].pWaitSemaphores) { for (uint32_t index1 = 0; index1 < pBindInfo[index0].waitSemaphoreCount; ++index1) { VkSemaphore **ppSemaphore = (VkSemaphore **)&(pBindInfo[index0].pWaitSemaphores); *(ppSemaphore[index1]) = original_pWaitSemaphores[index1]; } } if (pBindInfo[index0].pSignalSemaphores) { for (uint32_t index1 = 0; index1 < pBindInfo[index0].signalSemaphoreCount; ++index1) { VkSemaphore **ppSemaphore = (VkSemaphore **)&(pBindInfo[index0].pSignalSemaphores); *(ppSemaphore[index1]) = original_pSignalSemaphores[index1]; } } } } return result; } VkResult explicit_CreateComputePipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount, const VkComputePipelineCreateInfo *pCreateInfos, const VkAllocationCallbacks *pAllocator, VkPipeline *pPipelines) { // STRUCT USES:{'pipelineCache': 'VkPipelineCache', 'pCreateInfos[createInfoCount]': {'stage': {'module': 'VkShaderModule'}, // 'layout': 'VkPipelineLayout', 'basePipelineHandle': 'VkPipeline'}} // LOCAL DECLS:{'pCreateInfos': 'VkComputePipelineCreateInfo*'} safe_VkComputePipelineCreateInfo *local_pCreateInfos = NULL; if (pCreateInfos) { local_pCreateInfos = new safe_VkComputePipelineCreateInfo[createInfoCount]; for (uint32_t idx0 = 0; idx0 < createInfoCount; ++idx0) { local_pCreateInfos[idx0].initialize(&pCreateInfos[idx0]); if (pCreateInfos[idx0].basePipelineHandle) { local_pCreateInfos[idx0].basePipelineHandle = (VkPipeline)((VkUniqueObject *)pCreateInfos[idx0].basePipelineHandle)->actualObject; } if (pCreateInfos[idx0].layout) { local_pCreateInfos[idx0].layout = (VkPipelineLayout)((VkUniqueObject *)pCreateInfos[idx0].layout)->actualObject; } if (pCreateInfos[idx0].stage.module) { local_pCreateInfos[idx0].stage.module = (VkShaderModule)((VkUniqueObject *)pCreateInfos[idx0].stage.module)->actualObject; } } } if (pipelineCache) { pipelineCache = 
(VkPipelineCache)((VkUniqueObject *)pipelineCache)->actualObject; } // CODEGEN : file /usr/local/google/home/tobine/vulkan_work/LoaderAndTools/vk-layer-generate.py line #1671 VkResult result = get_dispatch_table(unique_objects_device_table_map, device) ->CreateComputePipelines(device, pipelineCache, createInfoCount, (const VkComputePipelineCreateInfo *)local_pCreateInfos, pAllocator, pPipelines); delete[] local_pCreateInfos; if (VK_SUCCESS == result) { VkUniqueObject *pUO = NULL; for (uint32_t i = 0; i < createInfoCount; ++i) { pUO = new VkUniqueObject(); pUO->actualObject = (uint64_t)pPipelines[i]; pPipelines[i] = (VkPipeline)pUO; } } return result; } VkResult explicit_CreateGraphicsPipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount, const VkGraphicsPipelineCreateInfo *pCreateInfos, const VkAllocationCallbacks *pAllocator, VkPipeline *pPipelines) { // STRUCT USES:{'pipelineCache': 'VkPipelineCache', 'pCreateInfos[createInfoCount]': {'layout': 'VkPipelineLayout', // 'pStages[stageCount]': {'module': 'VkShaderModule'}, 'renderPass': 'VkRenderPass', 'basePipelineHandle': 'VkPipeline'}} // LOCAL DECLS:{'pCreateInfos': 'VkGraphicsPipelineCreateInfo*'} safe_VkGraphicsPipelineCreateInfo *local_pCreateInfos = NULL; if (pCreateInfos) { local_pCreateInfos = new safe_VkGraphicsPipelineCreateInfo[createInfoCount]; for (uint32_t idx0 = 0; idx0 < createInfoCount; ++idx0) { local_pCreateInfos[idx0].initialize(&pCreateInfos[idx0]); if (pCreateInfos[idx0].basePipelineHandle) { local_pCreateInfos[idx0].basePipelineHandle = (VkPipeline)((VkUniqueObject *)pCreateInfos[idx0].basePipelineHandle)->actualObject; } if (pCreateInfos[idx0].layout) { local_pCreateInfos[idx0].layout = (VkPipelineLayout)((VkUniqueObject *)pCreateInfos[idx0].layout)->actualObject; } if (pCreateInfos[idx0].pStages) { for (uint32_t idx1 = 0; idx1 < pCreateInfos[idx0].stageCount; ++idx1) { if (pCreateInfos[idx0].pStages[idx1].module) { local_pCreateInfos[idx0].pStages[idx1].module = 
(VkShaderModule)((VkUniqueObject *)pCreateInfos[idx0].pStages[idx1].module)->actualObject; } } } if (pCreateInfos[idx0].renderPass) { local_pCreateInfos[idx0].renderPass = (VkRenderPass)((VkUniqueObject *)pCreateInfos[idx0].renderPass)->actualObject; } } } if (pipelineCache) { pipelineCache = (VkPipelineCache)((VkUniqueObject *)pipelineCache)->actualObject; } // CODEGEN : file /usr/local/google/home/tobine/vulkan_work/LoaderAndTools/vk-layer-generate.py line #1671 VkResult result = get_dispatch_table(unique_objects_device_table_map, device) ->CreateGraphicsPipelines(device, pipelineCache, createInfoCount, (const VkGraphicsPipelineCreateInfo *)local_pCreateInfos, pAllocator, pPipelines); delete[] local_pCreateInfos; if (VK_SUCCESS == result) { VkUniqueObject *pUO = NULL; for (uint32_t i = 0; i < createInfoCount; ++i) { pUO = new VkUniqueObject(); pUO->actualObject = (uint64_t)pPipelines[i]; pPipelines[i] = (VkPipeline)pUO; } } return result; } VkResult explicit_GetSwapchainImagesKHR(VkDevice device, VkSwapchainKHR swapchain, uint32_t *pSwapchainImageCount, VkImage *pSwapchainImages) { // UNWRAP USES: // 0 : swapchain,VkSwapchainKHR, pSwapchainImages,VkImage if (VK_NULL_HANDLE != swapchain) { swapchain = (VkSwapchainKHR)((VkUniqueObject *)swapchain)->actualObject; } VkResult result = get_dispatch_table(unique_objects_device_table_map, device) ->GetSwapchainImagesKHR(device, swapchain, pSwapchainImageCount, pSwapchainImages); // TODO : Need to add corresponding code to delete these images if (VK_SUCCESS == result) { if ((*pSwapchainImageCount > 0) && pSwapchainImages) { std::vector<VkUniqueObject *> uniqueImages = {}; for (uint32_t i = 0; i < *pSwapchainImageCount; ++i) { uniqueImages.push_back(new VkUniqueObject()); uniqueImages[i]->actualObject = (uint64_t)pSwapchainImages[i]; pSwapchainImages[i] = (VkImage)uniqueImages[i]; } } } return result; }
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/vk_layer_config.cpp000066400000000000000000000205041270147354000260670ustar00rootroot00000000000000/************************************************************************** * * Copyright 2014 Valve Software * Copyright 2015 Google Inc. * All Rights Reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN * THE SOFTWARE. 
* * Author: Jon Ashburn * Author: Courtney Goeltzenleuchter * Author: Tobin Ehlis **************************************************************************/ #include <string.h> #include <stdio.h> #include <fstream> #include <iostream> #include <map> #include <string> #include "vk_layer_config.h" #include "vulkan/vk_sdk_platform.h" #define MAX_CHARS_PER_LINE 4096 class ConfigFile { public: ConfigFile(); ~ConfigFile(); const char *getOption(const std::string &_option); void setOption(const std::string &_option, const std::string &_val); private: bool m_fileIsParsed; std::map<std::string, std::string> m_valueMap; void parseFile(const char *filename); }; static ConfigFile g_configFileObj; static VkLayerDbgAction stringToDbgAction(const char *_enum) { // only handles single enum values if (!strcmp(_enum, "VK_DBG_LAYER_ACTION_IGNORE")) return VK_DBG_LAYER_ACTION_IGNORE; else if (!strcmp(_enum, "VK_DBG_LAYER_ACTION_LOG_MSG")) return VK_DBG_LAYER_ACTION_LOG_MSG; #ifdef WIN32 else if (!strcmp(_enum, "VK_DBG_LAYER_ACTION_DEBUG_OUTPUT")) return VK_DBG_LAYER_ACTION_DEBUG_OUTPUT; #endif else if (!strcmp(_enum, "VK_DBG_LAYER_ACTION_BREAK")) return VK_DBG_LAYER_ACTION_BREAK; return (VkLayerDbgAction)0; } static VkFlags stringToDbgReportFlags(const char *_enum) { // only handles single enum values if (!strcmp(_enum, "VK_DEBUG_REPORT_INFO")) return VK_DEBUG_REPORT_INFORMATION_BIT_EXT; else if (!strcmp(_enum, "VK_DEBUG_REPORT_WARN")) return VK_DEBUG_REPORT_WARNING_BIT_EXT; else if (!strcmp(_enum, "VK_DEBUG_REPORT_PERF_WARN")) return VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT; else if (!strcmp(_enum, "VK_DEBUG_REPORT_ERROR")) return VK_DEBUG_REPORT_ERROR_BIT_EXT; else if (!strcmp(_enum, "VK_DEBUG_REPORT_DEBUG")) return VK_DEBUG_REPORT_DEBUG_BIT_EXT; return (VkFlags)0; } static unsigned int convertStringEnumVal(const char *_enum) { unsigned int ret; ret = stringToDbgAction(_enum); if (ret) return ret; return stringToDbgReportFlags(_enum); } const char *getLayerOption(const char *_option) { return g_configFileObj.getOption(_option); } // If option is NULL or
stdout, return stdout, otherwise try to open option // as a filename. If successful, return file handle, otherwise stdout FILE *getLayerLogOutput(const char *_option, const char *layerName) { FILE *log_output = NULL; if (!_option || !strcmp("stdout", _option)) log_output = stdout; else { log_output = fopen(_option, "w"); if (log_output == NULL) { if (_option) std::cout << std::endl << layerName << " ERROR: Bad output filename specified: " << _option << ". Writing to STDOUT instead" << std::endl << std::endl; log_output = stdout; } } return log_output; } VkDebugReportFlagsEXT getLayerOptionFlags(const char *_option, uint32_t optionDefault) { VkDebugReportFlagsEXT flags = optionDefault; const char *option = (g_configFileObj.getOption(_option)); /* parse comma-separated options */ while (option) { const char *p = strchr(option, ','); size_t len; if (p) len = p - option; else len = strlen(option); if (len > 0) { if (strncmp(option, "warn", len) == 0) { flags |= VK_DEBUG_REPORT_WARNING_BIT_EXT; } else if (strncmp(option, "info", len) == 0) { flags |= VK_DEBUG_REPORT_INFORMATION_BIT_EXT; } else if (strncmp(option, "perf", len) == 0) { flags |= VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT; } else if (strncmp(option, "error", len) == 0) { flags |= VK_DEBUG_REPORT_ERROR_BIT_EXT; } else if (strncmp(option, "debug", len) == 0) { flags |= VK_DEBUG_REPORT_DEBUG_BIT_EXT; } } if (!p) break; option = p + 1; } return flags; } bool getLayerOptionEnum(const char *_option, uint32_t *optionDefault) { bool res; const char *option = (g_configFileObj.getOption(_option)); if (option != NULL) { *optionDefault = convertStringEnumVal(option); res = false; } else { res = true; } return res; } void setLayerOptionEnum(const char *_option, const char *_valEnum) { unsigned int val = convertStringEnumVal(_valEnum); char strVal[24]; snprintf(strVal, 24, "%u", val); g_configFileObj.setOption(_option, strVal); } void setLayerOption(const char *_option, const char *_val) { 
g_configFileObj.setOption(_option, _val); } ConfigFile::ConfigFile() : m_fileIsParsed(false) {} ConfigFile::~ConfigFile() {} const char *ConfigFile::getOption(const std::string &_option) { std::map<std::string, std::string>::const_iterator it; if (!m_fileIsParsed) { parseFile("vk_layer_settings.txt"); } if ((it = m_valueMap.find(_option)) == m_valueMap.end()) return NULL; else return it->second.c_str(); } void ConfigFile::setOption(const std::string &_option, const std::string &_val) { if (!m_fileIsParsed) { parseFile("vk_layer_settings.txt"); } m_valueMap[_option] = _val; } void ConfigFile::parseFile(const char *filename) { std::ifstream file; char buf[MAX_CHARS_PER_LINE]; m_fileIsParsed = true; m_valueMap.clear(); file.open(filename); if (!file.good()) return; // read tokens from the file and form option, value pairs file.getline(buf, MAX_CHARS_PER_LINE); while (!file.eof()) { char option[512]; char value[512]; char *pComment; // discard any comments delimited by '#' in the line pComment = strchr(buf, '#'); if (pComment) *pComment = '\0'; if (sscanf(buf, " %511[^\n\t =] = %511[^\n \t]", option, value) == 2) { std::string optStr(option); std::string valStr(value); m_valueMap[optStr] = valStr; } file.getline(buf, MAX_CHARS_PER_LINE); } } void print_msg_flags(VkFlags msgFlags, char *msg_flags) { bool separator = false; msg_flags[0] = 0; if (msgFlags & VK_DEBUG_REPORT_DEBUG_BIT_EXT) { strcat(msg_flags, "DEBUG"); separator = true; } if (msgFlags & VK_DEBUG_REPORT_INFORMATION_BIT_EXT) { if (separator) strcat(msg_flags, ","); strcat(msg_flags, "INFO"); separator = true; } if (msgFlags & VK_DEBUG_REPORT_WARNING_BIT_EXT) { if (separator) strcat(msg_flags, ","); strcat(msg_flags, "WARN"); separator = true; } if (msgFlags & VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT) { if (separator) strcat(msg_flags, ","); strcat(msg_flags, "PERF"); separator = true; } if (msgFlags & VK_DEBUG_REPORT_ERROR_BIT_EXT) { if (separator) strcat(msg_flags, ","); strcat(msg_flags, "ERROR"); } }
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/vk_layer_config.h000066400000000000000000000035721270147354000255420ustar00rootroot00000000000000/* Copyright (c) 2015-2016 The Khronos Group Inc. * Copyright (c) 2015-2016 Valve Corporation * Copyright (c) 2015-2016 LunarG, Inc. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and/or associated documentation files (the "Materials"), to * deal in the Materials without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Materials, and to permit persons to whom the Materials * are furnished to do so, subject to the following conditions: * * The above copyright notice(s) and this permission notice shall be included * in all copies or substantial portions of the Materials. * * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
* * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE * USE OR OTHER DEALINGS IN THE MATERIALS * * Author: Jon Ashburn **************************************************************************/ #pragma once #include <stdio.h> #ifdef __cplusplus extern "C" { #endif const char *getLayerOption(const char *_option); FILE *getLayerLogOutput(const char *_option, const char *layerName); VkDebugReportFlagsEXT getLayerOptionFlags(const char *_option, uint32_t optionDefault); bool getLayerOptionEnum(const char *_option, uint32_t *optionDefault); void setLayerOption(const char *_option, const char *_val); void setLayerOptionEnum(const char *_option, const char *_valEnum); void print_msg_flags(VkFlags msgFlags, char *msg_flags); #ifdef __cplusplus } #endif Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/vk_layer_data.h000066400000000000000000000036141270147354000252030ustar00rootroot00000000000000/* Copyright (c) 2015-2016 The Khronos Group Inc. * Copyright (c) 2015-2016 Valve Corporation * Copyright (c) 2015-2016 LunarG, Inc. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and/or associated documentation files (the "Materials"), to * deal in the Materials without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Materials, and to permit persons to whom the Materials * are furnished to do so, subject to the following conditions: * * The above copyright notice(s) and this permission notice shall be included * in all copies or substantial portions of the Materials.
* * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. * * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE * USE OR OTHER DEALINGS IN THE MATERIALS * * Author: Tobin Ehlis */ #ifndef LAYER_DATA_H #define LAYER_DATA_H #include <unordered_map> #include "vk_layer_table.h" template <typename DATA_T> DATA_T *get_my_data_ptr(void *data_key, std::unordered_map<void *, DATA_T *> &layer_data_map) { DATA_T *debug_data; typename std::unordered_map<void *, DATA_T *>::const_iterator got; /* TODO: We probably should lock here, or have caller lock */ got = layer_data_map.find(data_key); if (got == layer_data_map.end()) { debug_data = new DATA_T; layer_data_map[(void *)data_key] = debug_data; } else { debug_data = got->second; } return debug_data; } #endif // LAYER_DATA_H Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/vk_layer_extension_utils.cpp000066400000000000000000000051151270147354000300570ustar00rootroot00000000000000/* Copyright (c) 2015-2016 The Khronos Group Inc. * Copyright (c) 2015-2016 Valve Corporation * Copyright (c) 2015-2016 LunarG, Inc. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and/or associated documentation files (the "Materials"), to * deal in the Materials without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Materials, and to permit persons to whom the Materials * are furnished to do so, subject to the following conditions: * * The above copyright notice(s) and this permission notice shall be included * in all copies or substantial portions of the Materials.
* * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. * * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE * USE OR OTHER DEALINGS IN THE MATERIALS * * Author: Courtney Goeltzenleuchter * */ #include "string.h" #include "vk_layer_extension_utils.h" #define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0])) /* * This file contains utility functions for layers */ VkResult util_GetExtensionProperties(const uint32_t count, const VkExtensionProperties *layer_extensions, uint32_t *pCount, VkExtensionProperties *pProperties) { uint32_t copy_size; if (pProperties == NULL || layer_extensions == NULL) { *pCount = count; return VK_SUCCESS; } copy_size = *pCount < count ? *pCount : count; memcpy(pProperties, layer_extensions, copy_size * sizeof(VkExtensionProperties)); *pCount = copy_size; if (copy_size < count) { return VK_INCOMPLETE; } return VK_SUCCESS; } VkResult util_GetLayerProperties(const uint32_t count, const VkLayerProperties *layer_properties, uint32_t *pCount, VkLayerProperties *pProperties) { uint32_t copy_size; if (pProperties == NULL || layer_properties == NULL) { *pCount = count; return VK_SUCCESS; } copy_size = *pCount < count ? *pCount : count; memcpy(pProperties, layer_properties, copy_size * sizeof(VkLayerProperties)); *pCount = copy_size; if (copy_size < count) { return VK_INCOMPLETE; } return VK_SUCCESS; } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/vk_layer_extension_utils.h000066400000000000000000000036501270147354000275260ustar00rootroot00000000000000/* Copyright (c) 2015-2016 The Khronos Group Inc. * Copyright (c) 2015-2016 Valve Corporation * Copyright (c) 2015-2016 LunarG, Inc. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and/or associated documentation files (the "Materials"), to * deal in the Materials without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Materials, and to permit persons to whom the Materials * are furnished to do so, subject to the following conditions: * * The above copyright notice(s) and this permission notice shall be included * in all copies or substantial portions of the Materials. * * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. * * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE * USE OR OTHER DEALINGS IN THE MATERIALS * * Author: Courtney Goeltzenleuchter * */ #include "vulkan/vk_layer.h" #ifndef LAYER_EXTENSION_UTILS_H #define LAYER_EXTENSION_UTILS_H #define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0])) /* * This file contains static functions for the generated layers */ extern "C" { VkResult util_GetExtensionProperties(const uint32_t count, const VkExtensionProperties *layer_extensions, uint32_t *pCount, VkExtensionProperties *pProperties); VkResult util_GetLayerProperties(const uint32_t count, const VkLayerProperties *layer_properties, uint32_t *pCount, VkLayerProperties *pProperties); } // extern "C" #endif // LAYER_EXTENSION_UTILS_H Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/vk_layer_logging.h000066400000000000000000000266221270147354000257240ustar00rootroot00000000000000/* Copyright (c) 2015-2016 The Khronos Group Inc. * Copyright (c) 2015-2016 Valve Corporation * Copyright (c) 2015-2016 LunarG, Inc. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and/or associated documentation files (the "Materials"), to * deal in the Materials without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Materials, and to permit persons to whom the Materials * are furnished to do so, subject to the following conditions: * * The above copyright notice(s) and this permission notice shall be included * in all copies or substantial portions of the Materials. * * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. * * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE * USE OR OTHER DEALINGS IN THE MATERIALS * * Author: Courtney Goeltzenleuchter * Author: Tobin Ehlis * */ #ifndef LAYER_LOGGING_H #define LAYER_LOGGING_H #include <stdio.h> #include <stdarg.h> #include <string.h> #include <inttypes.h> #include <unordered_map> #include "vk_loader_platform.h" #include "vulkan/vk_layer.h" #include "vk_layer_data.h" #include "vk_layer_table.h" typedef struct _debug_report_data { VkLayerDbgFunctionNode *g_pDbgFunctionHead; VkFlags active_flags; bool g_DEBUG_REPORT; } debug_report_data; template <typename DATA_T> debug_report_data *get_my_data_ptr(void *data_key, std::unordered_map<void *, DATA_T *> &data_map); // Utility function to handle reporting static inline VkBool32 debug_report_log_msg(debug_report_data *debug_data, VkFlags msgFlags, VkDebugReportObjectTypeEXT objectType, uint64_t srcObject, size_t location, int32_t msgCode, const char *pLayerPrefix, const char *pMsg) { VkBool32 bail = false; VkLayerDbgFunctionNode *pTrav = debug_data->g_pDbgFunctionHead; while (pTrav) { if (pTrav->msgFlags & msgFlags) { if
(pTrav->pfnMsgCallback(msgFlags, objectType, srcObject, location, msgCode, pLayerPrefix, pMsg, pTrav->pUserData)) { bail = true; } } pTrav = pTrav->pNext; } return bail; } static inline debug_report_data * debug_report_create_instance(VkLayerInstanceDispatchTable *table, VkInstance inst, uint32_t extension_count, const char *const *ppEnabledExtensions) // layer or extension name to be enabled { debug_report_data *debug_data; PFN_vkGetInstanceProcAddr gpa = table->GetInstanceProcAddr; table->CreateDebugReportCallbackEXT = (PFN_vkCreateDebugReportCallbackEXT)gpa(inst, "vkCreateDebugReportCallbackEXT"); table->DestroyDebugReportCallbackEXT = (PFN_vkDestroyDebugReportCallbackEXT)gpa(inst, "vkDestroyDebugReportCallbackEXT"); table->DebugReportMessageEXT = (PFN_vkDebugReportMessageEXT)gpa(inst, "vkDebugReportMessageEXT"); debug_data = (debug_report_data *)malloc(sizeof(debug_report_data)); if (!debug_data) return NULL; memset(debug_data, 0, sizeof(debug_report_data)); for (uint32_t i = 0; i < extension_count; i++) { /* TODO: Check other property fields */ if (strcmp(ppEnabledExtensions[i], VK_EXT_DEBUG_REPORT_EXTENSION_NAME) == 0) { debug_data->g_DEBUG_REPORT = true; } } return debug_data; } static inline void layer_debug_report_destroy_instance(debug_report_data *debug_data) { VkLayerDbgFunctionNode *pTrav; VkLayerDbgFunctionNode *pTravNext; if (!debug_data) { return; } pTrav = debug_data->g_pDbgFunctionHead; /* Clear out any leftover callbacks */ while (pTrav) { pTravNext = pTrav->pNext; debug_report_log_msg(debug_data, VK_DEBUG_REPORT_ERROR_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT, (uint64_t)pTrav->msgCallback, 0, VK_DEBUG_REPORT_ERROR_CALLBACK_REF_EXT, "DebugReport", "Debug Report callbacks not removed before DestroyInstance"); free(pTrav); pTrav = pTravNext; } debug_data->g_pDbgFunctionHead = NULL; free(debug_data); } static inline debug_report_data *layer_debug_report_create_device(debug_report_data *instance_debug_data, VkDevice device) { /* 
   DEBUG_REPORT shares data between Instance and Device,
 * so just return instance's data pointer */
    return instance_debug_data;
}

static inline void layer_debug_report_destroy_device(VkDevice device) {
    /* Nothing to do since we're using instance data record */
}

static inline VkResult layer_create_msg_callback(debug_report_data *debug_data,
                                                 const VkDebugReportCallbackCreateInfoEXT *pCreateInfo,
                                                 const VkAllocationCallbacks *pAllocator, VkDebugReportCallbackEXT *pCallback) {
    /* TODO: Use app allocator */
    VkLayerDbgFunctionNode *pNewDbgFuncNode = (VkLayerDbgFunctionNode *)malloc(sizeof(VkLayerDbgFunctionNode));
    if (!pNewDbgFuncNode)
        return VK_ERROR_OUT_OF_HOST_MEMORY;

    // Handle of 0 is logging_callback so use allocated Node address as unique handle
    if (!(*pCallback))
        *pCallback = (VkDebugReportCallbackEXT)pNewDbgFuncNode;
    pNewDbgFuncNode->msgCallback = *pCallback;
    pNewDbgFuncNode->pfnMsgCallback = pCreateInfo->pfnCallback;
    pNewDbgFuncNode->msgFlags = pCreateInfo->flags;
    pNewDbgFuncNode->pUserData = pCreateInfo->pUserData;
    pNewDbgFuncNode->pNext = debug_data->g_pDbgFunctionHead;
    debug_data->g_pDbgFunctionHead = pNewDbgFuncNode;
    debug_data->active_flags |= pCreateInfo->flags;
    debug_report_log_msg(debug_data, VK_DEBUG_REPORT_DEBUG_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT,
                         (uint64_t)*pCallback, 0, VK_DEBUG_REPORT_ERROR_CALLBACK_REF_EXT, "DebugReport", "Added callback");
    return VK_SUCCESS;
}

static inline void layer_destroy_msg_callback(debug_report_data *debug_data, VkDebugReportCallbackEXT callback,
                                              const VkAllocationCallbacks *pAllocator) {
    VkLayerDbgFunctionNode *pTrav = debug_data->g_pDbgFunctionHead;
    VkLayerDbgFunctionNode *pPrev = pTrav;
    bool matched = false;

    debug_data->active_flags = 0;
    while (pTrav) {
        if (pTrav->msgCallback == callback) {
            matched = true;
            pPrev->pNext = pTrav->pNext;
            if (debug_data->g_pDbgFunctionHead == pTrav) {
                debug_data->g_pDbgFunctionHead = pTrav->pNext;
            }
            debug_report_log_msg(debug_data, VK_DEBUG_REPORT_DEBUG_BIT_EXT, VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT,
                                 (uint64_t)pTrav->msgCallback, 0, VK_DEBUG_REPORT_ERROR_CALLBACK_REF_EXT, "DebugReport",
                                 "Destroyed callback");
        } else {
            matched = false;
            debug_data->active_flags |= pTrav->msgFlags;
        }
        pPrev = pTrav;
        pTrav = pTrav->pNext;
        if (matched) {
            /* TODO: Use pAllocator */
            free(pPrev);
        }
    }
}

static inline PFN_vkVoidFunction debug_report_get_instance_proc_addr(debug_report_data *debug_data, const char *funcName) {
    if (!debug_data || !debug_data->g_DEBUG_REPORT) {
        return NULL;
    }

    if (!strcmp(funcName, "vkCreateDebugReportCallbackEXT")) {
        return (PFN_vkVoidFunction)vkCreateDebugReportCallbackEXT;
    }
    if (!strcmp(funcName, "vkDestroyDebugReportCallbackEXT")) {
        return (PFN_vkVoidFunction)vkDestroyDebugReportCallbackEXT;
    }
    if (!strcmp(funcName, "vkDebugReportMessageEXT")) {
        return (PFN_vkVoidFunction)vkDebugReportMessageEXT;
    }
    return NULL;
}

/*
 * Checks if the message will get logged.
 * Allows a layer to defer collecting & formatting data if the
 * message will be discarded.
 */
static inline VkBool32 will_log_msg(debug_report_data *debug_data, VkFlags msgFlags) {
    if (!debug_data || !(debug_data->active_flags & msgFlags)) {
        /* message is not wanted */
        return false;
    }
    return true;
}

#ifdef WIN32
static inline int vasprintf(char **strp, char const *fmt, va_list ap) {
    *strp = nullptr;
    int size = _vscprintf(fmt, ap);
    if (size >= 0) {
        *strp = (char *)malloc(size + 1);
        if (!*strp) {
            return -1;
        }
        _vsnprintf(*strp, size + 1, fmt, ap);
    }
    return size;
}
#endif

/*
 * Output log message via DEBUG_REPORT.
 * Takes format and variable arg list so that the output string
 * is only computed if a message needs to be logged.
 */
#ifndef WIN32
static inline VkBool32 log_msg(debug_report_data *debug_data, VkFlags msgFlags, VkDebugReportObjectTypeEXT objectType,
                               uint64_t srcObject, size_t location, int32_t msgCode, const char *pLayerPrefix,
                               const char *format, ...) __attribute__((format(printf, 8, 9)));
#endif
static inline VkBool32 log_msg(debug_report_data *debug_data, VkFlags msgFlags, VkDebugReportObjectTypeEXT objectType,
                               uint64_t srcObject, size_t location, int32_t msgCode, const char *pLayerPrefix,
                               const char *format, ...) {
    if (!debug_data || !(debug_data->active_flags & msgFlags)) {
        /* message is not wanted */
        return false;
    }

    va_list argptr;
    va_start(argptr, format);
    char *str;
    vasprintf(&str, format, argptr);
    va_end(argptr);
    VkBool32 result = debug_report_log_msg(debug_data, msgFlags, objectType, srcObject, location, msgCode, pLayerPrefix, str);
    free(str);
    return result;
}

static inline VKAPI_ATTR VkBool32 VKAPI_CALL log_callback(VkFlags msgFlags, VkDebugReportObjectTypeEXT objType, uint64_t srcObject,
                                                          size_t location, int32_t msgCode, const char *pLayerPrefix,
                                                          const char *pMsg, void *pUserData) {
    char msg_flags[30];

    print_msg_flags(msgFlags, msg_flags);

    fprintf((FILE *)pUserData, "%s(%s): object: %#" PRIx64 " type: %d location: %lu msgCode: %d: %s\n", pLayerPrefix, msg_flags,
            srcObject, objType, (unsigned long)location, msgCode, pMsg);
    fflush((FILE *)pUserData);

    return false;
}

static inline VKAPI_ATTR VkBool32 VKAPI_CALL win32_debug_output_msg(VkFlags msgFlags, VkDebugReportObjectTypeEXT objType,
                                                                    uint64_t srcObject, size_t location, int32_t msgCode,
                                                                    const char *pLayerPrefix, const char *pMsg, void *pUserData) {
#ifdef WIN32
    char msg_flags[30];
    char buf[2048];

    print_msg_flags(msgFlags, msg_flags);
    _snprintf(buf, sizeof(buf) - 1,
              "%s (%s): object: 0x%" PRIxPTR " type: %d location: " PRINTF_SIZE_T_SPECIFIER " msgCode: %d: %s\n", pLayerPrefix,
              msg_flags, (size_t)srcObject, objType, location, msgCode, pMsg);
    OutputDebugString(buf);
#endif
    return false;
}

#endif // LAYER_LOGGING_H
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/vk_layer_settings.txt000066400000000000000000000075271270147354000265270ustar00rootroot00000000000000
# This is an example vk_layer_settings.txt file.
# This file allows for per-layer settings which can dynamically affect layer
# behavior. Comments in this file are denoted with the "#" char.
# Settings lines are of the form "<LayerIdentifier>.<SettingName> = <SettingValue>"
#
# <LayerIdentifier> is typically the official layer name, minus the VK_LAYER prefix
# and all lower case -- i.e., for VK_LAYER_LUNARG_core_validation, the layer
# identifier is 'lunarg_core_validation', and for VK_LAYER_GOOGLE_threading the layer
# identifier is 'google_threading'.
#
# There are some common settings that are used by each layer.
# Below is a general description of three common settings, followed by
# actual template settings for each layer in the SDK.
#
# Common settings descriptions:
# =============================
#
# DEBUG_ACTION:
# =============
# <LayerIdentifier>.debug_action : This is an enum value indicating what action is to
# be taken when a layer wants to report information. Possible settings values
# are defined in the vk_layer.h header file. These settings are:
#    VK_DBG_LAYER_ACTION_IGNORE - Take no action
#    VK_DBG_LAYER_ACTION_LOG_MSG - Log a txt message to stdout or to a log file
#       specified via the <LayerIdentifier>.log_filename setting (see below)
#    VK_DBG_LAYER_ACTION_CALLBACK - Call user-defined callback function(s) that
#       have been registered via the VK_EXT_debug_report extension. Since the
#       app must register the callback, this is a NOOP for the settings file.
#    VK_DBG_LAYER_ACTION_BREAK - Trigger a breakpoint.
#
# REPORT_FLAGS:
# =============
# <LayerIdentifier>.report_flags : This is a comma-delimited list of options telling
# the layer what types of messages it should report back. Options are:
#    info - Report informational messages
#    warn - Report warnings of using the API in an unrecommended manner which may
#       also lead to undefined behavior
#    perf - Report using the API in a way that may cause suboptimal performance
#    error - Report errors in API usage
#    debug - For layer development. Report messages for debugging layer behavior
#
# LOG_FILENAME:
# =============
# <LayerIdentifier>.log_filename : output filename. Can be relative to the location
# of the vk_layer_settings.txt file, or an absolute path. If no filename is
# specified or if the filename has an invalid path, then stdout is used by default.
#
#
# Example of actual settings for each layer:
# ==========================================
#
# VK_LAYER_LUNARG_device_limits Settings
lunarg_device_limits.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
lunarg_device_limits.report_flags = error,warn,perf
lunarg_device_limits.log_filename = stdout

# VK_LAYER_LUNARG_core_validation Settings
lunarg_core_validation.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
lunarg_core_validation.report_flags = error,warn,perf
lunarg_core_validation.log_filename = stdout

# VK_LAYER_LUNARG_image Settings
lunarg_image.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
lunarg_image.report_flags = error,warn,perf
lunarg_image.log_filename = stdout

# VK_LAYER_LUNARG_object_tracker Settings
lunarg_object_tracker.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
lunarg_object_tracker.report_flags = error,warn,perf
lunarg_object_tracker.log_filename = stdout

# VK_LAYER_LUNARG_parameter_validation Settings
lunarg_parameter_validation.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
lunarg_parameter_validation.report_flags = error,warn,perf
lunarg_parameter_validation.log_filename = stdout

# VK_LAYER_LUNARG_swapchain Settings
lunarg_swapchain.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
lunarg_swapchain.report_flags = error,warn,perf
lunarg_swapchain.log_filename = stdout

# VK_LAYER_GOOGLE_threading Settings
google_threading.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
google_threading.report_flags = error,warn,perf
google_threading.log_filename = stdout
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/vk_layer_table.cpp000066400000000000000000000202071270147354000257110ustar00rootroot00000000000000
/* Copyright (c) 2015-2016 The Khronos Group Inc.
 * Copyright (c) 2015-2016 Valve Corporation
 * Copyright (c) 2015-2016 LunarG, Inc.
 * Copyright (c) 2015-2016 Google, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and/or associated documentation files (the "Materials"), to
 * deal in the Materials without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Materials, and to permit persons to whom the Materials
 * are furnished to do so, subject to the following conditions:
 *
 * The above copyright notice(s) and this permission notice shall be included
 * in all copies or substantial portions of the Materials.
 *
 * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 *
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
 * USE OR OTHER DEALINGS IN THE MATERIALS
 *
 * Author: Tobin Ehlis
 */

#include <assert.h>
#include <unordered_map>
#include "vk_dispatch_table_helper.h"
#include "vulkan/vk_layer.h"
#include "vk_layer_table.h"

static device_table_map tableMap;
static instance_table_map tableInstanceMap;

#define DISPATCH_MAP_DEBUG 0

// Map lookup must be thread safe
VkLayerDispatchTable *device_dispatch_table(void *object) {
    dispatch_key key = get_dispatch_key(object);
    device_table_map::const_iterator it = tableMap.find((void *)key);
    assert(it != tableMap.end() && "Not able to find device dispatch entry");
    return it->second;
}

VkLayerInstanceDispatchTable *instance_dispatch_table(void *object) {
    dispatch_key key = get_dispatch_key(object);
    instance_table_map::const_iterator it = tableInstanceMap.find((void *)key);
#if DISPATCH_MAP_DEBUG
    if (it != tableInstanceMap.end()) {
        fprintf(stderr, "instance_dispatch_table: map: %p, object: %p, key: %p, table: %p\n", &tableInstanceMap, object, key,
                it->second);
    } else {
        fprintf(stderr, "instance_dispatch_table: map: %p, object: %p, key: %p, table: UNKNOWN\n", &tableInstanceMap, object, key);
    }
#endif
    assert(it != tableInstanceMap.end() && "Not able to find instance dispatch entry");
    return it->second;
}

void destroy_dispatch_table(device_table_map &map, dispatch_key key) {
#if DISPATCH_MAP_DEBUG
    device_table_map::const_iterator it = map.find((void *)key);
    if (it != map.end()) {
        fprintf(stderr, "destroy device dispatch_table: map: %p, key: %p, table: %p\n", &map, key, it->second);
    } else {
        fprintf(stderr, "destroy device dispatch table: map: %p, key: %p, table: UNKNOWN\n", &map, key);
        assert(it != map.end());
    }
#endif
    map.erase(key);
}

void destroy_dispatch_table(instance_table_map &map, dispatch_key key) {
#if DISPATCH_MAP_DEBUG
    instance_table_map::const_iterator it = map.find((void *)key);
    if (it != map.end()) {
        fprintf(stderr, "destroy instance dispatch_table: map: %p, key: %p, table: %p\n", &map, key, it->second);
    } else {
        fprintf(stderr, "destroy instance dispatch table: map: %p, key: %p, table: UNKNOWN\n", &map, key);
        assert(it != map.end());
    }
#endif
    map.erase(key);
}

void destroy_device_dispatch_table(dispatch_key key) { destroy_dispatch_table(tableMap, key); }

void destroy_instance_dispatch_table(dispatch_key key) { destroy_dispatch_table(tableInstanceMap, key); }

VkLayerDispatchTable *get_dispatch_table(device_table_map &map, void *object) {
    dispatch_key key = get_dispatch_key(object);
    device_table_map::const_iterator it = map.find((void *)key);
#if DISPATCH_MAP_DEBUG
    if (it != map.end()) {
        fprintf(stderr, "device_dispatch_table: map: %p, object: %p, key: %p, table: %p\n", &tableInstanceMap, object, key,
                it->second);
    } else {
        fprintf(stderr, "device_dispatch_table: map: %p, object: %p, key: %p, table: UNKNOWN\n", &tableInstanceMap, object, key);
    }
#endif
    assert(it != map.end() && "Not able to find device dispatch entry");
    return it->second;
}

VkLayerInstanceDispatchTable *get_dispatch_table(instance_table_map &map, void *object) {
    // VkLayerInstanceDispatchTable *pDisp = *(VkLayerInstanceDispatchTable **) object;
    dispatch_key key = get_dispatch_key(object);
    instance_table_map::const_iterator it = map.find((void *)key);
#if DISPATCH_MAP_DEBUG
    if (it != map.end()) {
        fprintf(stderr, "instance_dispatch_table: map: %p, object: %p, key: %p, table: %p\n", &tableInstanceMap, object, key,
                it->second);
    } else {
        fprintf(stderr, "instance_dispatch_table: map: %p, object: %p, key: %p, table: UNKNOWN\n", &tableInstanceMap, object, key);
    }
#endif
    assert(it != map.end() && "Not able to find instance dispatch entry");
    return it->second;
}

VkLayerInstanceCreateInfo *get_chain_info(const VkInstanceCreateInfo *pCreateInfo, VkLayerFunction func) {
    VkLayerInstanceCreateInfo *chain_info = (VkLayerInstanceCreateInfo *)pCreateInfo->pNext;
    while (chain_info && !(chain_info->sType == VK_STRUCTURE_TYPE_LOADER_INSTANCE_CREATE_INFO && chain_info->function == func)) {
        chain_info = (VkLayerInstanceCreateInfo *)chain_info->pNext;
    }
    assert(chain_info != NULL);
    return chain_info;
}

VkLayerDeviceCreateInfo *get_chain_info(const VkDeviceCreateInfo *pCreateInfo, VkLayerFunction func) {
    VkLayerDeviceCreateInfo *chain_info = (VkLayerDeviceCreateInfo *)pCreateInfo->pNext;
    while (chain_info && !(chain_info->sType == VK_STRUCTURE_TYPE_LOADER_DEVICE_CREATE_INFO && chain_info->function == func)) {
        chain_info = (VkLayerDeviceCreateInfo *)chain_info->pNext;
    }
    assert(chain_info != NULL);
    return chain_info;
}

/* Various dispatchable objects will use the same underlying dispatch table if they
 * are created from the same "parent" object. Thus we use the pointer to the dispatch
 * table as the key to these table maps.
 *    Instance -> PhysicalDevice
 *    Device -> CommandBuffer or Queue
 * If the objects themselves were used as map keys, every Create entrypoint would have
 * to be intercepted and a new key inserted into the map. */
VkLayerInstanceDispatchTable *initInstanceTable(VkInstance instance, const PFN_vkGetInstanceProcAddr gpa,
                                                instance_table_map &map) {
    VkLayerInstanceDispatchTable *pTable;
    dispatch_key key = get_dispatch_key(instance);
    instance_table_map::const_iterator it = map.find((void *)key);

    if (it == map.end()) {
        pTable = new VkLayerInstanceDispatchTable;
        map[(void *)key] = pTable;
#if DISPATCH_MAP_DEBUG
        fprintf(stderr, "New, Instance: map: %p, key: %p, table: %p\n", &map, key, pTable);
#endif
    } else {
#if DISPATCH_MAP_DEBUG
        fprintf(stderr, "Instance: map: %p, key: %p, table: %p\n", &map, key, it->second);
#endif
        return it->second;
    }

    layer_init_instance_dispatch_table(instance, pTable, gpa);

    return pTable;
}

VkLayerInstanceDispatchTable *initInstanceTable(VkInstance instance, const PFN_vkGetInstanceProcAddr gpa) {
    return initInstanceTable(instance, gpa, tableInstanceMap);
}

VkLayerDispatchTable *initDeviceTable(VkDevice device, const PFN_vkGetDeviceProcAddr gpa, device_table_map &map) {
    VkLayerDispatchTable *pTable;
    dispatch_key key = get_dispatch_key(device);
    device_table_map::const_iterator it = map.find((void *)key);

    if (it == map.end()) {
        pTable = new VkLayerDispatchTable;
        map[(void *)key] = pTable;
#if DISPATCH_MAP_DEBUG
        fprintf(stderr, "New, Device: map: %p, key: %p, table: %p\n", &map, key, pTable);
#endif
    } else {
#if DISPATCH_MAP_DEBUG
        fprintf(stderr, "Device: map: %p, key: %p, table: %p\n", &map, key, it->second);
#endif
        return it->second;
    }

    layer_init_device_dispatch_table(device, pTable, gpa);

    return pTable;
}

VkLayerDispatchTable *initDeviceTable(VkDevice device, const PFN_vkGetDeviceProcAddr gpa) {
    return initDeviceTable(device, gpa, tableMap);
}
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/vk_layer_table.h000066400000000000000000000054661270147354000253660ustar00rootroot00000000000000
/* Copyright (c) 2015-2016 The Khronos Group Inc.
 * Copyright (c) 2015-2016 Valve Corporation
 * Copyright (c) 2015-2016 LunarG, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and/or associated documentation files (the "Materials"), to
 * deal in the Materials without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Materials, and to permit persons to whom the Materials
 * are furnished to do so, subject to the following conditions:
 *
 * The above copyright notice(s) and this permission notice shall be included
 * in all copies or substantial portions of the Materials.
 *
 * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 *
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
 * USE OR OTHER DEALINGS IN THE MATERIALS
 *
 * Author: Tobin Ehlis
 */

#pragma once

#include "vulkan/vulkan.h"
#include <unordered_map>

typedef std::unordered_map<void *, VkLayerDispatchTable *> device_table_map;
typedef std::unordered_map<void *, VkLayerInstanceDispatchTable *> instance_table_map;

VkLayerDispatchTable *initDeviceTable(VkDevice device, const PFN_vkGetDeviceProcAddr gpa, device_table_map &map);
VkLayerDispatchTable *initDeviceTable(VkDevice device, const PFN_vkGetDeviceProcAddr gpa);
VkLayerInstanceDispatchTable *initInstanceTable(VkInstance instance, const PFN_vkGetInstanceProcAddr gpa, instance_table_map &map);
VkLayerInstanceDispatchTable *initInstanceTable(VkInstance instance, const PFN_vkGetInstanceProcAddr gpa);

typedef void *dispatch_key;

static inline dispatch_key get_dispatch_key(const void *object) { return (dispatch_key) * (VkLayerDispatchTable **)object; }

VkLayerDispatchTable *device_dispatch_table(void *object);

VkLayerInstanceDispatchTable *instance_dispatch_table(void *object);

VkLayerDispatchTable *get_dispatch_table(device_table_map &map, void *object);

VkLayerInstanceDispatchTable *get_dispatch_table(instance_table_map &map, void *object);

VkLayerInstanceCreateInfo *get_chain_info(const VkInstanceCreateInfo *pCreateInfo, VkLayerFunction func);
VkLayerDeviceCreateInfo *get_chain_info(const VkDeviceCreateInfo *pCreateInfo, VkLayerFunction func);

void destroy_device_dispatch_table(dispatch_key key);
void destroy_instance_dispatch_table(dispatch_key key);
void destroy_dispatch_table(device_table_map &map, dispatch_key key);
void destroy_dispatch_table(instance_table_map &map, dispatch_key key);
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/vk_layer_utils.cpp000066400000000000000000001023551270147354000257670ustar00rootroot00000000000000
/* Copyright (c) 2015-2016 The Khronos Group Inc.
 * Copyright (c) 2015-2016 Valve Corporation
 * Copyright (c) 2015-2016 LunarG, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and/or associated documentation files (the "Materials"), to
 * deal in the Materials without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Materials, and to permit persons to whom the Materials
 * are furnished to do so, subject to the following conditions:
 *
 * The above copyright notice(s) and this permission notice shall be included
 * in all copies or substantial portions of the Materials.
 *
 * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 *
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE
 * USE OR OTHER DEALINGS IN THE MATERIALS
 *
 * Author: Mark Lobodzinski
 *
 */

#include <string.h>
#include <stdbool.h>
#include <stdlib.h>
#include "vulkan/vulkan.h"
#include "vk_layer_config.h"
#include "vk_layer_utils.h"

typedef struct _VULKAN_FORMAT_INFO {
    size_t size;
    uint32_t channel_count;
    VkFormatCompatibilityClass format_class;
} VULKAN_FORMAT_INFO;

// Set up data structure with number of bytes and number of channels
// for each Vulkan format.
static const VULKAN_FORMAT_INFO vk_format_table[VK_FORMAT_RANGE_SIZE] = {
    {0, 0, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT},            // [VK_FORMAT_UNDEFINED]
    {1, 2, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT},               // [VK_FORMAT_R4G4_UNORM_PACK8]
    {2, 4, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_R4G4B4A4_UNORM_PACK16]
    {2, 4, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_B4G4R4A4_UNORM_PACK16]
    {2, 3, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_R5G6B5_UNORM_PACK16]
    {2, 3, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_B5G6R5_UNORM_PACK16]
    {2, 4, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_R5G5B5A1_UNORM_PACK16]
    {2, 4, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_B5G5R5A1_UNORM_PACK16]
    {2, 4, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_A1R5G5B5_UNORM_PACK16]
    {1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT},               // [VK_FORMAT_R8_UNORM]
    {1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT},               // [VK_FORMAT_R8_SNORM]
    {1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT},               // [VK_FORMAT_R8_USCALED]
    {1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT},               // [VK_FORMAT_R8_SSCALED]
    {1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT},               // [VK_FORMAT_R8_UINT]
    {1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT},               // [VK_FORMAT_R8_SINT]
    {1, 1, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT},               // [VK_FORMAT_R8_SRGB]
    {2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_R8G8_UNORM]
    {2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_R8G8_SNORM]
    {2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_R8G8_USCALED]
    {2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_R8G8_SSCALED]
    {2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_R8G8_UINT]
    {2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_R8G8_SINT]
    {2, 2, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_R8G8_SRGB]
    {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT},              // [VK_FORMAT_R8G8B8_UNORM]
    {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT},              // [VK_FORMAT_R8G8B8_SNORM]
    {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT},              // [VK_FORMAT_R8G8B8_USCALED]
    {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT},              // [VK_FORMAT_R8G8B8_SSCALED]
    {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT},              // [VK_FORMAT_R8G8B8_UINT]
    {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT},              // [VK_FORMAT_R8G8B8_SINT]
    {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT},              // [VK_FORMAT_R8G8B8_SRGB]
    {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT},              // [VK_FORMAT_B8G8R8_UNORM]
    {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT},              // [VK_FORMAT_B8G8R8_SNORM]
    {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT},              // [VK_FORMAT_B8G8R8_USCALED]
    {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT},              // [VK_FORMAT_B8G8R8_SSCALED]
    {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT},              // [VK_FORMAT_B8G8R8_UINT]
    {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT},              // [VK_FORMAT_B8G8R8_SINT]
    {3, 3, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT},              // [VK_FORMAT_B8G8R8_SRGB]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_R8G8B8A8_UNORM]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_R8G8B8A8_SNORM]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_R8G8B8A8_USCALED]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_R8G8B8A8_SSCALED]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_R8G8B8A8_UINT]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_R8G8B8A8_SINT]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_R8G8B8A8_SRGB]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_B8G8R8A8_UNORM]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_B8G8R8A8_SNORM]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_B8G8R8A8_USCALED]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_B8G8R8A8_SSCALED]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_B8G8R8A8_UINT]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_B8G8R8A8_SINT]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_B8G8R8A8_SRGB]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_A8B8G8R8_UNORM_PACK32]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_A8B8G8R8_SNORM_PACK32]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_A8B8G8R8_USCALED_PACK32]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_A8B8G8R8_SSCALED_PACK32]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_A8B8G8R8_UINT_PACK32]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_A8B8G8R8_SINT_PACK32]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_A8B8G8R8_SRGB_PACK32]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_A2R10G10B10_UNORM_PACK32]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_A2R10G10B10_SNORM_PACK32]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_A2R10G10B10_USCALED_PACK32]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_A2R10G10B10_SSCALED_PACK32]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_A2R10G10B10_UINT_PACK32]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_A2R10G10B10_SINT_PACK32]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_A2B10G10R10_UNORM_PACK32]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_A2B10G10R10_SNORM_PACK32]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_A2B10G10R10_USCALED_PACK32]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_A2B10G10R10_SSCALED_PACK32]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_A2B10G10R10_UINT_PACK32]
    {4, 4, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_A2B10G10R10_SINT_PACK32]
    {2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_R16_UNORM]
    {2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_R16_SNORM]
    {2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_R16_USCALED]
    {2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_R16_SSCALED]
    {2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_R16_UINT]
    {2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_R16_SINT]
    {2, 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT},              // [VK_FORMAT_R16_SFLOAT]
    {4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_R16G16_UNORM]
    {4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_R16G16_SNORM]
    {4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_R16G16_USCALED]
    {4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_R16G16_SSCALED]
    {4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_R16G16_UINT]
    {4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_R16G16_SINT]
    {4, 2, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_R16G16_SFLOAT]
    {6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT},              // [VK_FORMAT_R16G16B16_UNORM]
    {6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT},              // [VK_FORMAT_R16G16B16_SNORM]
    {6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT},              // [VK_FORMAT_R16G16B16_USCALED]
    {6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT},              // [VK_FORMAT_R16G16B16_SSCALED]
    {6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT},              // [VK_FORMAT_R16G16B16_UINT]
    {6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT},              // [VK_FORMAT_R16G16B16_SINT]
    {6, 3, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT},              // [VK_FORMAT_R16G16B16_SFLOAT]
    {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT},              // [VK_FORMAT_R16G16B16A16_UNORM]
    {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT},              // [VK_FORMAT_R16G16B16A16_SNORM]
    {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT},              // [VK_FORMAT_R16G16B16A16_USCALED]
    {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT},              // [VK_FORMAT_R16G16B16A16_SSCALED]
    {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT},              // [VK_FORMAT_R16G16B16A16_UINT]
    {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT},              // [VK_FORMAT_R16G16B16A16_SINT]
    {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT},              // [VK_FORMAT_R16G16B16A16_SFLOAT]
    {4, 1, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_R32_UINT]
    {4, 1, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_R32_SINT]
    {4, 1, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_R32_SFLOAT]
    {8, 2, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT},              // [VK_FORMAT_R32G32_UINT]
    {8, 2, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT},              // [VK_FORMAT_R32G32_SINT]
    {8, 2, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT},              // [VK_FORMAT_R32G32_SFLOAT]
    {12, 3, VK_FORMAT_COMPATIBILITY_CLASS_96_BIT},             // [VK_FORMAT_R32G32B32_UINT]
    {12, 3, VK_FORMAT_COMPATIBILITY_CLASS_96_BIT},             // [VK_FORMAT_R32G32B32_SINT]
    {12, 3, VK_FORMAT_COMPATIBILITY_CLASS_96_BIT},             // [VK_FORMAT_R32G32B32_SFLOAT]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_128_BIT},            // [VK_FORMAT_R32G32B32A32_UINT]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_128_BIT},            // [VK_FORMAT_R32G32B32A32_SINT]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_128_BIT},            // [VK_FORMAT_R32G32B32A32_SFLOAT]
    {8, 1, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT},              // [VK_FORMAT_R64_UINT]
    {8, 1, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT},              // [VK_FORMAT_R64_SINT]
    {8, 1, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT},              // [VK_FORMAT_R64_SFLOAT]
    {16, 2, VK_FORMAT_COMPATIBILITY_CLASS_128_BIT},            // [VK_FORMAT_R64G64_UINT]
    {16, 2, VK_FORMAT_COMPATIBILITY_CLASS_128_BIT},            // [VK_FORMAT_R64G64_SINT]
    {16, 2, VK_FORMAT_COMPATIBILITY_CLASS_128_BIT},            // [VK_FORMAT_R64G64_SFLOAT]
    {24, 3, VK_FORMAT_COMPATIBILITY_CLASS_192_BIT},            // [VK_FORMAT_R64G64B64_UINT]
    {24, 3, VK_FORMAT_COMPATIBILITY_CLASS_192_BIT},            // [VK_FORMAT_R64G64B64_SINT]
    {24, 3, VK_FORMAT_COMPATIBILITY_CLASS_192_BIT},            // [VK_FORMAT_R64G64B64_SFLOAT]
    {32, 4, VK_FORMAT_COMPATIBILITY_CLASS_256_BIT},            // [VK_FORMAT_R64G64B64A64_UINT]
    {32, 4, VK_FORMAT_COMPATIBILITY_CLASS_256_BIT},            // [VK_FORMAT_R64G64B64A64_SINT]
    {32, 4, VK_FORMAT_COMPATIBILITY_CLASS_256_BIT},            // [VK_FORMAT_R64G64B64A64_SFLOAT]
    {4, 3, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_B10G11R11_UFLOAT_PACK32]
    {4, 3, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT},              // [VK_FORMAT_E5B9G9R9_UFLOAT_PACK32]
    {2, 1, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT},            // [VK_FORMAT_D16_UNORM]
    {3, 1, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT},            // [VK_FORMAT_X8_D24_UNORM_PACK32]
    {4, 1, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT},            // [VK_FORMAT_D32_SFLOAT]
    {1, 1, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT},            // [VK_FORMAT_S8_UINT]
    {3, 2, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT},            // [VK_FORMAT_D16_UNORM_S8_UINT]
    {4, 2, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT},            // [VK_FORMAT_D24_UNORM_S8_UINT]
    {4, 2, VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT},            // [VK_FORMAT_D32_SFLOAT_S8_UINT]
    {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC1_RGB_BIT},         // [VK_FORMAT_BC1_RGB_UNORM_BLOCK]
    {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC1_RGB_BIT},         // [VK_FORMAT_BC1_RGB_SRGB_BLOCK]
    {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC1_RGBA_BIT},        // [VK_FORMAT_BC1_RGBA_UNORM_BLOCK]
    {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC1_RGBA_BIT},        // [VK_FORMAT_BC1_RGBA_SRGB_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC2_BIT},            // [VK_FORMAT_BC2_UNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC2_BIT},            // [VK_FORMAT_BC2_SRGB_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC3_BIT},            // [VK_FORMAT_BC3_UNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC3_BIT},            // [VK_FORMAT_BC3_SRGB_BLOCK]
    {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC4_BIT},             // [VK_FORMAT_BC4_UNORM_BLOCK]
    {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC4_BIT},             // [VK_FORMAT_BC4_SNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC5_BIT},            // [VK_FORMAT_BC5_UNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC5_BIT},            // [VK_FORMAT_BC5_SNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC6H_BIT},           // [VK_FORMAT_BC6H_UFLOAT_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC6H_BIT},           // [VK_FORMAT_BC6H_SFLOAT_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC7_BIT},            // [VK_FORMAT_BC7_UNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_BC7_BIT},            // [VK_FORMAT_BC7_SRGB_BLOCK]
    {8, 3, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_RGB_BIT},        // [VK_FORMAT_ETC2_R8G8B8_UNORM_BLOCK]
    {8, 3, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_RGB_BIT},        // [VK_FORMAT_ETC2_R8G8B8_SRGB_BLOCK]
    {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_RGBA_BIT},       // [VK_FORMAT_ETC2_R8G8B8A1_UNORM_BLOCK]
    {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_RGBA_BIT},       // [VK_FORMAT_ETC2_R8G8B8A1_SRGB_BLOCK]
    {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_EAC_RGBA_BIT},   // [VK_FORMAT_ETC2_R8G8B8A8_UNORM_BLOCK]
    {8, 4, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_EAC_RGBA_BIT},   // [VK_FORMAT_ETC2_R8G8B8A8_SRGB_BLOCK]
    {8, 1, VK_FORMAT_COMPATIBILITY_CLASS_EAC_R_BIT},           // [VK_FORMAT_EAC_R11_UNORM_BLOCK]
    {8, 1, VK_FORMAT_COMPATIBILITY_CLASS_EAC_R_BIT},           // [VK_FORMAT_EAC_R11_SNORM_BLOCK]
    {16, 2, VK_FORMAT_COMPATIBILITY_CLASS_EAC_RG_BIT},         // [VK_FORMAT_EAC_R11G11_UNORM_BLOCK]
    {16, 2, VK_FORMAT_COMPATIBILITY_CLASS_EAC_RG_BIT},         // [VK_FORMAT_EAC_R11G11_SNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_4X4_BIT},       // [VK_FORMAT_ASTC_4x4_UNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_4X4_BIT},       // [VK_FORMAT_ASTC_4x4_SRGB_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_5X4_BIT},       // [VK_FORMAT_ASTC_5x4_UNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_5X4_BIT},       // [VK_FORMAT_ASTC_5x4_SRGB_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_5X5_BIT},       // [VK_FORMAT_ASTC_5x5_UNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_5X5_BIT},       // [VK_FORMAT_ASTC_5x5_SRGB_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_6X5_BIT},       // [VK_FORMAT_ASTC_6x5_UNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_6X5_BIT},       // [VK_FORMAT_ASTC_6x5_SRGB_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_6X6_BIT},       // [VK_FORMAT_ASTC_6x6_UNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_6X6_BIT},       // [VK_FORMAT_ASTC_6x6_SRGB_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X5_BIT},       // [VK_FORMAT_ASTC_8x5_UNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X5_BIT},       // [VK_FORMAT_ASTC_8x5_SRGB_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X6_BIT},       // [VK_FORMAT_ASTC_8x6_UNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X6_BIT},       // [VK_FORMAT_ASTC_8x6_SRGB_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X8_BIT},       // [VK_FORMAT_ASTC_8x8_UNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X8_BIT},       // [VK_FORMAT_ASTC_8x8_SRGB_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X5_BIT},      // [VK_FORMAT_ASTC_10x5_UNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X5_BIT},      // [VK_FORMAT_ASTC_10x5_SRGB_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X6_BIT},      // [VK_FORMAT_ASTC_10x6_UNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X6_BIT},      // [VK_FORMAT_ASTC_10x6_SRGB_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X8_BIT},      // [VK_FORMAT_ASTC_10x8_UNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X8_BIT},      // [VK_FORMAT_ASTC_10x8_SRGB_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X10_BIT},     // [VK_FORMAT_ASTC_10x10_UNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X10_BIT},     // [VK_FORMAT_ASTC_10x10_SRGB_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_12X10_BIT},     // [VK_FORMAT_ASTC_12x10_UNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_12X10_BIT},     // [VK_FORMAT_ASTC_12x10_SRGB_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_12X12_BIT},     // [VK_FORMAT_ASTC_12x12_UNORM_BLOCK]
    {16, 4, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_12X12_BIT},     // [VK_FORMAT_ASTC_12x12_SRGB_BLOCK]
};

// Return true if format is a depth or stencil format
bool vk_format_is_depth_or_stencil(VkFormat format) {
    return (vk_format_is_depth_and_stencil(format) || vk_format_is_depth_only(format) || vk_format_is_stencil_only(format));
}

// Return true if format contains depth and stencil information
bool vk_format_is_depth_and_stencil(VkFormat format) {
    bool is_ds = false;

    switch (format) {
    case VK_FORMAT_D16_UNORM_S8_UINT:
    case VK_FORMAT_D24_UNORM_S8_UINT:
    case VK_FORMAT_D32_SFLOAT_S8_UINT:
        is_ds = true;
        break;
    default:
        break;
    }
    return is_ds;
}

// Return true if format is a stencil-only format
bool vk_format_is_stencil_only(VkFormat format) { return (format == VK_FORMAT_S8_UINT); }

// Return true if format is a depth-only format
bool vk_format_is_depth_only(VkFormat format) {
    bool is_depth = false;

    switch (format) {
    case VK_FORMAT_D16_UNORM:
    case VK_FORMAT_X8_D24_UNORM_PACK32:
    case VK_FORMAT_D32_SFLOAT:
        is_depth = true;
        break;
    default:
        break;
    }
    return is_depth;
}

// Return true if format
is of type NORM bool vk_format_is_norm(VkFormat format) { bool is_norm = false; switch (format) { case VK_FORMAT_R4G4_UNORM_PACK8: case VK_FORMAT_R4G4B4A4_UNORM_PACK16: case VK_FORMAT_R5G6B5_UNORM_PACK16: case VK_FORMAT_R5G5B5A1_UNORM_PACK16: case VK_FORMAT_A1R5G5B5_UNORM_PACK16: case VK_FORMAT_R8_UNORM: case VK_FORMAT_R8_SNORM: case VK_FORMAT_R8G8_UNORM: case VK_FORMAT_R8G8_SNORM: case VK_FORMAT_R8G8B8_UNORM: case VK_FORMAT_R8G8B8_SNORM: case VK_FORMAT_R8G8B8A8_UNORM: case VK_FORMAT_R8G8B8A8_SNORM: case VK_FORMAT_A8B8G8R8_UNORM_PACK32: case VK_FORMAT_A8B8G8R8_SNORM_PACK32: case VK_FORMAT_A2B10G10R10_UNORM_PACK32: case VK_FORMAT_A2B10G10R10_SNORM_PACK32: case VK_FORMAT_R16_UNORM: case VK_FORMAT_R16_SNORM: case VK_FORMAT_R16G16_UNORM: case VK_FORMAT_R16G16_SNORM: case VK_FORMAT_R16G16B16_UNORM: case VK_FORMAT_R16G16B16_SNORM: case VK_FORMAT_R16G16B16A16_UNORM: case VK_FORMAT_R16G16B16A16_SNORM: case VK_FORMAT_BC1_RGB_UNORM_BLOCK: case VK_FORMAT_BC2_UNORM_BLOCK: case VK_FORMAT_BC3_UNORM_BLOCK: case VK_FORMAT_BC4_UNORM_BLOCK: case VK_FORMAT_BC4_SNORM_BLOCK: case VK_FORMAT_BC5_UNORM_BLOCK: case VK_FORMAT_BC5_SNORM_BLOCK: case VK_FORMAT_BC7_UNORM_BLOCK: case VK_FORMAT_ETC2_R8G8B8_UNORM_BLOCK: case VK_FORMAT_ETC2_R8G8B8A1_UNORM_BLOCK: case VK_FORMAT_ETC2_R8G8B8A8_UNORM_BLOCK: case VK_FORMAT_EAC_R11_UNORM_BLOCK: case VK_FORMAT_EAC_R11_SNORM_BLOCK: case VK_FORMAT_EAC_R11G11_UNORM_BLOCK: case VK_FORMAT_EAC_R11G11_SNORM_BLOCK: case VK_FORMAT_ASTC_4x4_UNORM_BLOCK: case VK_FORMAT_ASTC_5x4_UNORM_BLOCK: case VK_FORMAT_ASTC_5x5_UNORM_BLOCK: case VK_FORMAT_ASTC_6x5_UNORM_BLOCK: case VK_FORMAT_ASTC_6x6_UNORM_BLOCK: case VK_FORMAT_ASTC_8x5_UNORM_BLOCK: case VK_FORMAT_ASTC_8x6_UNORM_BLOCK: case VK_FORMAT_ASTC_8x8_UNORM_BLOCK: case VK_FORMAT_ASTC_10x5_UNORM_BLOCK: case VK_FORMAT_ASTC_10x6_UNORM_BLOCK: case VK_FORMAT_ASTC_10x8_UNORM_BLOCK: case VK_FORMAT_ASTC_10x10_UNORM_BLOCK: case VK_FORMAT_ASTC_12x10_UNORM_BLOCK: case VK_FORMAT_ASTC_12x12_UNORM_BLOCK: case
VK_FORMAT_B5G6R5_UNORM_PACK16: case VK_FORMAT_B8G8R8_UNORM: case VK_FORMAT_B8G8R8_SNORM: case VK_FORMAT_B8G8R8A8_UNORM: case VK_FORMAT_B8G8R8A8_SNORM: case VK_FORMAT_A2R10G10B10_UNORM_PACK32: case VK_FORMAT_A2R10G10B10_SNORM_PACK32: is_norm = true; break; default: break; } return is_norm; }; // Return true if format is an integer format bool vk_format_is_int(VkFormat format) { return (vk_format_is_sint(format) || vk_format_is_uint(format)); } // Return true if format is an unsigned integer format bool vk_format_is_uint(VkFormat format) { bool is_uint = false; switch (format) { case VK_FORMAT_R8_UINT: case VK_FORMAT_R8G8_UINT: case VK_FORMAT_R8G8B8_UINT: case VK_FORMAT_R8G8B8A8_UINT: case VK_FORMAT_A8B8G8R8_UINT_PACK32: case VK_FORMAT_A2B10G10R10_UINT_PACK32: case VK_FORMAT_R16_UINT: case VK_FORMAT_R16G16_UINT: case VK_FORMAT_R16G16B16_UINT: case VK_FORMAT_R16G16B16A16_UINT: case VK_FORMAT_R32_UINT: case VK_FORMAT_R32G32_UINT: case VK_FORMAT_R32G32B32_UINT: case VK_FORMAT_R32G32B32A32_UINT: case VK_FORMAT_R64_UINT: case VK_FORMAT_R64G64_UINT: case VK_FORMAT_R64G64B64_UINT: case VK_FORMAT_R64G64B64A64_UINT: case VK_FORMAT_B8G8R8_UINT: case VK_FORMAT_B8G8R8A8_UINT: case VK_FORMAT_A2R10G10B10_UINT_PACK32: is_uint = true; break; default: break; } return is_uint; } // Return true if format is a signed integer format bool vk_format_is_sint(VkFormat format) { bool is_sint = false; switch (format) { case VK_FORMAT_R8_SINT: case VK_FORMAT_R8G8_SINT: case VK_FORMAT_R8G8B8_SINT: case VK_FORMAT_R8G8B8A8_SINT: case VK_FORMAT_A8B8G8R8_SINT_PACK32: case VK_FORMAT_A2B10G10R10_SINT_PACK32: case VK_FORMAT_R16_SINT: case VK_FORMAT_R16G16_SINT: case VK_FORMAT_R16G16B16_SINT: case VK_FORMAT_R16G16B16A16_SINT: case VK_FORMAT_R32_SINT: case VK_FORMAT_R32G32_SINT: case VK_FORMAT_R32G32B32_SINT: case VK_FORMAT_R32G32B32A32_SINT: case VK_FORMAT_R64_SINT: case VK_FORMAT_R64G64_SINT: case VK_FORMAT_R64G64B64_SINT: case VK_FORMAT_R64G64B64A64_SINT: case VK_FORMAT_B8G8R8_SINT: case 
VK_FORMAT_B8G8R8A8_SINT: case VK_FORMAT_A2R10G10B10_SINT_PACK32: is_sint = true; break; default: break; } return is_sint; } // Return true if format is a floating-point format bool vk_format_is_float(VkFormat format) { bool is_float = false; switch (format) { case VK_FORMAT_R16_SFLOAT: case VK_FORMAT_R16G16_SFLOAT: case VK_FORMAT_R16G16B16_SFLOAT: case VK_FORMAT_R16G16B16A16_SFLOAT: case VK_FORMAT_R32_SFLOAT: case VK_FORMAT_R32G32_SFLOAT: case VK_FORMAT_R32G32B32_SFLOAT: case VK_FORMAT_R32G32B32A32_SFLOAT: case VK_FORMAT_R64_SFLOAT: case VK_FORMAT_R64G64_SFLOAT: case VK_FORMAT_R64G64B64_SFLOAT: case VK_FORMAT_R64G64B64A64_SFLOAT: case VK_FORMAT_B10G11R11_UFLOAT_PACK32: case VK_FORMAT_E5B9G9R9_UFLOAT_PACK32: case VK_FORMAT_BC6H_UFLOAT_BLOCK: case VK_FORMAT_BC6H_SFLOAT_BLOCK: is_float = true; break; default: break; } return is_float; } // Return true if format is in the SRGB colorspace bool vk_format_is_srgb(VkFormat format) { bool is_srgb = false; switch (format) { case VK_FORMAT_R8_SRGB: case VK_FORMAT_R8G8_SRGB: case VK_FORMAT_R8G8B8_SRGB: case VK_FORMAT_R8G8B8A8_SRGB: case VK_FORMAT_A8B8G8R8_SRGB_PACK32: case VK_FORMAT_BC1_RGB_SRGB_BLOCK: case VK_FORMAT_BC2_SRGB_BLOCK: case VK_FORMAT_BC3_SRGB_BLOCK: case VK_FORMAT_BC7_SRGB_BLOCK: case VK_FORMAT_ETC2_R8G8B8_SRGB_BLOCK: case VK_FORMAT_ETC2_R8G8B8A1_SRGB_BLOCK: case VK_FORMAT_ETC2_R8G8B8A8_SRGB_BLOCK: case VK_FORMAT_ASTC_4x4_SRGB_BLOCK: case VK_FORMAT_ASTC_5x4_SRGB_BLOCK: case VK_FORMAT_ASTC_5x5_SRGB_BLOCK: case VK_FORMAT_ASTC_6x5_SRGB_BLOCK: case VK_FORMAT_ASTC_6x6_SRGB_BLOCK: case VK_FORMAT_ASTC_8x5_SRGB_BLOCK: case VK_FORMAT_ASTC_8x6_SRGB_BLOCK: case VK_FORMAT_ASTC_8x8_SRGB_BLOCK: case VK_FORMAT_ASTC_10x5_SRGB_BLOCK: case VK_FORMAT_ASTC_10x6_SRGB_BLOCK: case VK_FORMAT_ASTC_10x8_SRGB_BLOCK: case VK_FORMAT_ASTC_10x10_SRGB_BLOCK: case VK_FORMAT_ASTC_12x10_SRGB_BLOCK: case VK_FORMAT_ASTC_12x12_SRGB_BLOCK: case VK_FORMAT_B8G8R8_SRGB: case VK_FORMAT_B8G8R8A8_SRGB: is_srgb = true; break; default: break; } return 
is_srgb; } // Return true if format is compressed bool vk_format_is_compressed(VkFormat format) { switch (format) { case VK_FORMAT_BC1_RGB_UNORM_BLOCK: case VK_FORMAT_BC1_RGB_SRGB_BLOCK: case VK_FORMAT_BC2_UNORM_BLOCK: case VK_FORMAT_BC2_SRGB_BLOCK: case VK_FORMAT_BC3_UNORM_BLOCK: case VK_FORMAT_BC3_SRGB_BLOCK: case VK_FORMAT_BC4_UNORM_BLOCK: case VK_FORMAT_BC4_SNORM_BLOCK: case VK_FORMAT_BC5_UNORM_BLOCK: case VK_FORMAT_BC5_SNORM_BLOCK: case VK_FORMAT_BC6H_UFLOAT_BLOCK: case VK_FORMAT_BC6H_SFLOAT_BLOCK: case VK_FORMAT_BC7_UNORM_BLOCK: case VK_FORMAT_BC7_SRGB_BLOCK: case VK_FORMAT_ETC2_R8G8B8_UNORM_BLOCK: case VK_FORMAT_ETC2_R8G8B8_SRGB_BLOCK: case VK_FORMAT_ETC2_R8G8B8A1_UNORM_BLOCK: case VK_FORMAT_ETC2_R8G8B8A1_SRGB_BLOCK: case VK_FORMAT_ETC2_R8G8B8A8_UNORM_BLOCK: case VK_FORMAT_ETC2_R8G8B8A8_SRGB_BLOCK: case VK_FORMAT_EAC_R11_UNORM_BLOCK: case VK_FORMAT_EAC_R11_SNORM_BLOCK: case VK_FORMAT_EAC_R11G11_UNORM_BLOCK: case VK_FORMAT_EAC_R11G11_SNORM_BLOCK: case VK_FORMAT_ASTC_4x4_UNORM_BLOCK: case VK_FORMAT_ASTC_4x4_SRGB_BLOCK: case VK_FORMAT_ASTC_5x4_UNORM_BLOCK: case VK_FORMAT_ASTC_5x4_SRGB_BLOCK: case VK_FORMAT_ASTC_5x5_UNORM_BLOCK: case VK_FORMAT_ASTC_5x5_SRGB_BLOCK: case VK_FORMAT_ASTC_6x5_UNORM_BLOCK: case VK_FORMAT_ASTC_6x5_SRGB_BLOCK: case VK_FORMAT_ASTC_6x6_UNORM_BLOCK: case VK_FORMAT_ASTC_6x6_SRGB_BLOCK: case VK_FORMAT_ASTC_8x5_UNORM_BLOCK: case VK_FORMAT_ASTC_8x5_SRGB_BLOCK: case VK_FORMAT_ASTC_8x6_UNORM_BLOCK: case VK_FORMAT_ASTC_8x6_SRGB_BLOCK: case VK_FORMAT_ASTC_8x8_UNORM_BLOCK: case VK_FORMAT_ASTC_8x8_SRGB_BLOCK: case VK_FORMAT_ASTC_10x5_UNORM_BLOCK: case VK_FORMAT_ASTC_10x5_SRGB_BLOCK: case VK_FORMAT_ASTC_10x6_UNORM_BLOCK: case VK_FORMAT_ASTC_10x6_SRGB_BLOCK: case VK_FORMAT_ASTC_10x8_UNORM_BLOCK: case VK_FORMAT_ASTC_10x8_SRGB_BLOCK: case VK_FORMAT_ASTC_10x10_UNORM_BLOCK: case VK_FORMAT_ASTC_10x10_SRGB_BLOCK: case VK_FORMAT_ASTC_12x10_UNORM_BLOCK: case VK_FORMAT_ASTC_12x10_SRGB_BLOCK: case VK_FORMAT_ASTC_12x12_UNORM_BLOCK: case 
VK_FORMAT_ASTC_12x12_SRGB_BLOCK: return true; default: return false; } } // Return format class of the specified format VkFormatCompatibilityClass vk_format_get_compatibility_class(VkFormat format) { return vk_format_table[format].format_class; } // Return size, in bytes, of a pixel of the specified format size_t vk_format_get_size(VkFormat format) { return vk_format_table[format].size; } // Return the number of channels for a given format unsigned int vk_format_get_channel_count(VkFormat format) { return vk_format_table[format].channel_count; } // Perform a zero-tolerant modulo operation VkDeviceSize vk_safe_modulo(VkDeviceSize dividend, VkDeviceSize divisor) { VkDeviceSize result = 0; if (divisor != 0) { result = dividend % divisor; } return result; } static const char UTF8_ONE_BYTE_CODE = 0xC0; static const char UTF8_ONE_BYTE_MASK = 0xE0; static const char UTF8_TWO_BYTE_CODE = 0xE0; static const char UTF8_TWO_BYTE_MASK = 0xF0; static const char UTF8_THREE_BYTE_CODE = 0xF0; static const char UTF8_THREE_BYTE_MASK = 0xF8; static const char UTF8_DATA_BYTE_CODE = 0x80; static const char UTF8_DATA_BYTE_MASK = 0xC0; VkStringErrorFlags vk_string_validate(const int max_length, const char *utf8) { VkStringErrorFlags result = VK_STRING_ERROR_NONE; int num_char_bytes = 0; int i, j; for (i = 0; i < max_length; i++) { if (utf8[i] == 0) { break; } else if ((utf8[i] >= 0xa) && (utf8[i] < 0x7f)) { num_char_bytes = 0; } else if ((utf8[i] & UTF8_ONE_BYTE_MASK) == UTF8_ONE_BYTE_CODE) { num_char_bytes = 1; } else if ((utf8[i] & UTF8_TWO_BYTE_MASK) == UTF8_TWO_BYTE_CODE) { num_char_bytes = 2; } else if ((utf8[i] & UTF8_THREE_BYTE_MASK) == UTF8_THREE_BYTE_CODE) { num_char_bytes = 3; } else { result = VK_STRING_ERROR_BAD_DATA; } // Validate the following num_char_bytes of data for (j = 0; (j < num_char_bytes) && (i < max_length); j++) { if (++i == max_length) { result |= VK_STRING_ERROR_LENGTH; break; } if ((utf8[i] & UTF8_DATA_BYTE_MASK) != UTF8_DATA_BYTE_CODE) { result |= 
VK_STRING_ERROR_BAD_DATA; } } return result; } void layer_debug_actions(debug_report_data *report_data, std::vector<VkDebugReportCallbackEXT> &logging_callback, const VkAllocationCallbacks *pAllocator, const char *layer_identifier) { uint32_t report_flags = 0; uint32_t debug_action = 0; VkDebugReportCallbackEXT callback = VK_NULL_HANDLE; std::string report_flags_key = layer_identifier; std::string debug_action_key = layer_identifier; std::string log_filename_key = layer_identifier; report_flags_key.append(".report_flags"); debug_action_key.append(".debug_action"); log_filename_key.append(".log_filename"); // initialize layer options report_flags = getLayerOptionFlags(report_flags_key.c_str(), 0); getLayerOptionEnum(debug_action_key.c_str(), (uint32_t *)&debug_action); if (debug_action & VK_DBG_LAYER_ACTION_LOG_MSG) { const char *log_filename = getLayerOption(log_filename_key.c_str()); FILE *log_output = getLayerLogOutput(log_filename, layer_identifier); VkDebugReportCallbackCreateInfoEXT dbgCreateInfo; memset(&dbgCreateInfo, 0, sizeof(dbgCreateInfo)); dbgCreateInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT; dbgCreateInfo.flags = report_flags; dbgCreateInfo.pfnCallback = log_callback; dbgCreateInfo.pUserData = (void *)log_output; layer_create_msg_callback(report_data, &dbgCreateInfo, pAllocator, &callback); logging_callback.push_back(callback); } if (debug_action & VK_DBG_LAYER_ACTION_DEBUG_OUTPUT) { VkDebugReportCallbackCreateInfoEXT dbgCreateInfo; memset(&dbgCreateInfo, 0, sizeof(dbgCreateInfo)); dbgCreateInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT; dbgCreateInfo.flags = report_flags; dbgCreateInfo.pfnCallback = win32_debug_output_msg; dbgCreateInfo.pUserData = NULL; layer_create_msg_callback(report_data, &dbgCreateInfo, pAllocator, &callback); logging_callback.push_back(callback); } } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/vk_layer_utils.h000066400000000000000000000131751270147354000254350ustar00rootroot00000000000000/* Copyright (c) 2015-2016
The Khronos Group Inc. * Copyright (c) 2015-2016 Valve Corporation * Copyright (c) 2015-2016 LunarG, Inc. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and/or associated documentation files (the "Materials"), to * deal in the Materials without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Materials, and to permit persons to whom the Materials * are furnished to do so, subject to the following conditions: * * The above copyright notice(s) and this permission notice shall be included * in all copies or substantial portions of the Materials. * * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. * * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE * USE OR OTHER DEALINGS IN THE MATERIALS * * Author: Mark Lobodzinski * Author: Courtney Goeltzenleuchter */ #pragma once #include <stdbool.h> #include <vector> #include "vk_layer_logging.h" #ifndef WIN32 #include <strings.h> /* for ffs() */ #else #include <intrin.h> /* for __lzcnt() */ #endif #ifdef __cplusplus extern "C" { #endif #define VK_LAYER_API_VERSION VK_MAKE_VERSION(1, 0, VK_HEADER_VERSION) typedef enum VkFormatCompatibilityClass { VK_FORMAT_COMPATIBILITY_CLASS_NONE_BIT = 0, VK_FORMAT_COMPATIBILITY_CLASS_8_BIT = 1, VK_FORMAT_COMPATIBILITY_CLASS_16_BIT = 2, VK_FORMAT_COMPATIBILITY_CLASS_24_BIT = 3, VK_FORMAT_COMPATIBILITY_CLASS_32_BIT = 4, VK_FORMAT_COMPATIBILITY_CLASS_48_BIT = 5, VK_FORMAT_COMPATIBILITY_CLASS_64_BIT = 6, VK_FORMAT_COMPATIBILITY_CLASS_96_BIT = 7, VK_FORMAT_COMPATIBILITY_CLASS_128_BIT = 8, VK_FORMAT_COMPATIBILITY_CLASS_192_BIT = 9,
VK_FORMAT_COMPATIBILITY_CLASS_256_BIT = 10, VK_FORMAT_COMPATIBILITY_CLASS_BC1_RGB_BIT = 11, VK_FORMAT_COMPATIBILITY_CLASS_BC1_RGBA_BIT = 12, VK_FORMAT_COMPATIBILITY_CLASS_BC2_BIT = 13, VK_FORMAT_COMPATIBILITY_CLASS_BC3_BIT = 14, VK_FORMAT_COMPATIBILITY_CLASS_BC4_BIT = 15, VK_FORMAT_COMPATIBILITY_CLASS_BC5_BIT = 16, VK_FORMAT_COMPATIBILITY_CLASS_BC6H_BIT = 17, VK_FORMAT_COMPATIBILITY_CLASS_BC7_BIT = 18, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_RGB_BIT = 19, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_RGBA_BIT = 20, VK_FORMAT_COMPATIBILITY_CLASS_ETC2_EAC_RGBA_BIT = 21, VK_FORMAT_COMPATIBILITY_CLASS_EAC_R_BIT = 22, VK_FORMAT_COMPATIBILITY_CLASS_EAC_RG_BIT = 23, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_4X4_BIT = 24, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_5X4_BIT = 25, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_5X5_BIT = 26, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_6X5_BIT = 27, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_6X6_BIT = 28, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X5_BIT = 29, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X6_BIT = 30, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_8X8_BIT = 31, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X5_BIT = 32, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X6_BIT = 33, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X8_BIT = 34, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_10X10_BIT = 35, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_12X10_BIT = 36, VK_FORMAT_COMPATIBILITY_CLASS_ASTC_12X12_BIT = 37, VK_FORMAT_COMPATIBILITY_CLASS_D16_BIT = 38, VK_FORMAT_COMPATIBILITY_CLASS_D24_BIT = 39, VK_FORMAT_COMPATIBILITY_CLASS_D32_BIT = 40, VK_FORMAT_COMPATIBILITY_CLASS_S8_BIT = 41, VK_FORMAT_COMPATIBILITY_CLASS_D16S8_BIT = 42, VK_FORMAT_COMPATIBILITY_CLASS_D24S8_BIT = 43, VK_FORMAT_COMPATIBILITY_CLASS_D32S8_BIT = 44, VK_FORMAT_COMPATIBILITY_CLASS_MAX_ENUM = 45 } VkFormatCompatibilityClass; typedef enum VkStringErrorFlagBits { VK_STRING_ERROR_NONE = 0x00000000, VK_STRING_ERROR_LENGTH = 0x00000001, VK_STRING_ERROR_BAD_DATA = 0x00000002, } VkStringErrorFlagBits; typedef VkFlags VkStringErrorFlags; void layer_debug_actions(debug_report_data* report_data,
std::vector<VkDebugReportCallbackEXT> &logging_callback, const VkAllocationCallbacks *pAllocator, const char* layer_identifier); static inline bool vk_format_is_undef(VkFormat format) { return (format == VK_FORMAT_UNDEFINED); } bool vk_format_is_depth_or_stencil(VkFormat format); bool vk_format_is_depth_and_stencil(VkFormat format); bool vk_format_is_depth_only(VkFormat format); bool vk_format_is_stencil_only(VkFormat format); static inline bool vk_format_is_color(VkFormat format) { return !(vk_format_is_undef(format) || vk_format_is_depth_or_stencil(format)); } bool vk_format_is_norm(VkFormat format); bool vk_format_is_int(VkFormat format); bool vk_format_is_sint(VkFormat format); bool vk_format_is_uint(VkFormat format); bool vk_format_is_float(VkFormat format); bool vk_format_is_srgb(VkFormat format); bool vk_format_is_compressed(VkFormat format); size_t vk_format_get_size(VkFormat format); unsigned int vk_format_get_channel_count(VkFormat format); VkFormatCompatibilityClass vk_format_get_compatibility_class(VkFormat format); VkDeviceSize vk_safe_modulo(VkDeviceSize dividend, VkDeviceSize divisor); VkStringErrorFlags vk_string_validate(const int max_length, const char *char_array); static inline int u_ffs(int val) { #ifdef WIN32 unsigned long bit_pos = 0; if (_BitScanForward(&bit_pos, val) != 0) { bit_pos += 1; } return bit_pos; #else return ffs(val); #endif } #ifdef __cplusplus } #endif Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/vk_validation_layer_details.md000066400000000000000000002065341270147354000303100ustar00rootroot00000000000000[TOC]

# Validation Layer Details

## VK_LAYER_LUNARG_standard_validation

### VK_LAYER_LUNARG_standard_validation Overview

This is a meta-layer managed by the loader.
Specifying this layer name will cause the loader to load all of the standard validation layers in the following optimal order:

- VK_LAYER_GOOGLE_threading
- VK_LAYER_LUNARG_parameter_validation
- VK_LAYER_LUNARG_device_limits
- VK_LAYER_LUNARG_object_tracker
- VK_LAYER_LUNARG_image
- VK_LAYER_LUNARG_core_validation
- VK_LAYER_LUNARG_swapchain
- VK_LAYER_GOOGLE_unique_objects

Other layers can be specified and the loader will remove duplicates. See the following individual layer descriptions for layer details.

## VK_LAYER_LUNARG_core_validation

### VK_LAYER_LUNARG_core_validation Overview

The VK_LAYER_LUNARG_core_validation layer is the main layer performing state tracking, object and state lifetime validation, and consistency and coherency checking between these states and the requirements, limits, and capabilities. Currently, it is divided into three main areas of validation: Draw State, Memory Tracking, and Shader Checking.

### VK_LAYER_LUNARG_core_validation Draw State Details Table

The Draw State portion of the core validation layer tracks state leading into Draw commands. This includes the Pipeline state, dynamic state, shaders, and descriptor set state. This functionality validates the consistency and correctness between and within these states.
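To see output from the checks described in this document, an application enables the layers at instance creation. The following is a minimal sketch using the standard Vulkan 1.0 API (it assumes a Vulkan loader and the LunarG layers are installed, as described in BUILD.md; error handling is abbreviated):

```c
/* Sketch: enable the VK_LAYER_LUNARG_standard_validation meta-layer
 * at instance creation. The loader expands it into the individual
 * layers listed above, in the recommended order. */
#include <vulkan/vulkan.h>
#include <stdio.h>

int main(void) {
    const char *layers[] = {"VK_LAYER_LUNARG_standard_validation"};

    VkApplicationInfo app_info = {0};
    app_info.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app_info.pApplicationName = "layer-demo";
    app_info.apiVersion = VK_MAKE_VERSION(1, 0, 0);

    VkInstanceCreateInfo create_info = {0};
    create_info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    create_info.pApplicationInfo = &app_info;
    create_info.enabledLayerCount = 1;
    create_info.ppEnabledLayerNames = layers;

    VkInstance instance;
    VkResult res = vkCreateInstance(&create_info, NULL, &instance);
    if (res != VK_SUCCESS) { /* e.g. VK_ERROR_LAYER_NOT_PRESENT */
        fprintf(stderr, "vkCreateInstance failed: %d\n", (int)res);
        return 1;
    }
    vkDestroyInstance(instance, NULL);
    return 0;
}
```

The same layers can also be enabled without code changes by setting the VK_INSTANCE_LAYERS environment variable, with VK_LAYER_PATH pointing the loader at a locally built layer directory (see BUILD.md).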
| Check | Overview | ENUM DRAWSTATE_* | Relevant API | Testname | Notes/TODO |
| ----- | -------- | ---------------- | ------------ | -------- | ---------- |
| Valid Pipeline Layouts | Verify that sets being bound are compatible with their PipelineLayout and that the last-bound PSO PipelineLayout at Draw time is compatible with all bound sets used by that PSO | PIPELINE_LAYOUTS_INCOMPATIBLE | vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect | TBD | None |
| Valid BeginCommandBuffer state | Must not call Begin on command buffers that are being recorded, and primary command buffers must specify VK_NULL_HANDLE for RenderPass or Framebuffer parameters, while secondary command buffers must provide non-null parameters. | BEGIN_CB_INVALID_STATE | vkBeginCommandBuffer | PrimaryCommandBufferFramebufferAndRenderpass SecondaryCommandBufferFramebufferAndRenderpass | None |
| Command Buffer Simultaneous Use | Violation of VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT rules. Most likely attempting to simultaneously use a CmdBuffer w/o having that bit set. This also warns if you add a secondary command buffer w/o that bit set to a primary command buffer that does have that bit set. | INVALID_CB_SIMULTANEOUS_USE | vkQueueSubmit vkCmdExecuteCommands | TODO | Write test |
| Valid Command Buffer Reset | Can only reset individual command buffer that was allocated from a pool with VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT set | INVALID_COMMAND_BUFFER_RESET | vkBeginCommandBuffer vkResetCommandBuffer | CommandBufferResetErrors | None |
| PSO Bound | Verify that a properly created and valid pipeline object is bound to the CommandBuffer specified in these calls | NO_PIPELINE_BOUND | vkCmdBindDescriptorSets vkCmdBindVertexBuffers | PipelineNotBound | This check is currently more related to VK_LAYER_LUNARG_core_validation internal data structures and less about verifying that PSO is bound at all appropriate points in API. For API purposes, need to make sure this is checked at Draw time and any other relevant calls. |
| Valid DescriptorPool | Verifies that the descriptor set pool object was properly created and is valid | INVALID_POOL | vkResetDescriptorPool vkAllocateDescriptorSets | None | This is just an internal layer data structure check. VK_LAYER_LUNARG_parameter_validation or VK_LAYER_LUNARG_object_tracker should really catch bad DSPool |
| Valid DescriptorSet | Validate that descriptor set was properly created and is currently valid | INVALID_SET | vkCmdBindDescriptorSets | None | Is this needed other places (like Update/Clear descriptors) |
| Valid DescriptorSetLayout | Flag DescriptorSetLayout object that was not properly created | INVALID_LAYOUT | vkAllocateDescriptorSets | None | Anywhere else to check this? |
| Valid RenderArea | Flag renderArea field that is outside of the framebuffer | INVALID_RENDER_AREA | vkCmdBeginRenderPass | None | Anywhere else to check this? |
| Valid Pipeline | Flag VkPipeline object that was not properly created | INVALID_PIPELINE | vkCmdBindPipeline | InvalidPipeline | NA |
| Valid PipelineLayout | Flag VkPipelineLayout object that was not properly created | INVALID_PIPELINE_LAYOUT | vkCmdBindPipeline | TODO | Write test for this case |
| Valid Pipeline Create Info | Tests for the following: That compute shaders are not specified for the graphics pipeline, tess evaluation and tess control shaders are included or excluded as a pair, that VK_PRIMITIVE_TOPOLOGY_PATCH_LIST is set as IA topology for tessellation pipelines, that VK_PRIMITIVE_TOPOLOGY_PATCH_LIST primitive topology is only set for tessellation pipelines, and that a Vtx Shader is specified | INVALID_PIPELINE_CREATE_STATE | vkCreateGraphicsPipelines | InvalidPipelineCreateState | NA |
| Valid CommandBuffer | Validates that the command buffer object was properly created and is currently valid | INVALID_COMMAND_BUFFER | vkQueueSubmit vkBeginCommandBuffer vkEndCommandBuffer vkCmdBindPipeline vkCmdBindDescriptorSets vkCmdBindIndexBuffer vkCmdBindVertexBuffers vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatch vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearAttachments vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdResolveImage vkCmdSetEvent vkCmdResetEvent vkCmdWaitEvents vkCmdPipelineBarrier vkCmdBeginQuery vkCmdEndQuery vkCmdResetQueryPool vkCmdWriteTimestamp vkCmdBeginRenderPass vkCmdNextSubpass vkCmdEndRenderPass vkCmdExecuteCommands vkAllocateCommandBuffers | None | NA |
| Vtx Buffer Bounds | Check if VBO index too large for PSO Vtx binding count, and that at least one vertex buffer is attached to pipeline object | VTX_INDEX_OUT_OF_BOUNDS | vkCmdBindDescriptorSets vkCmdBindVertexBuffers | VtxBufferBadIndex | NA |
| Idx Buffer Alignment | Verify that offset of Index buffer falls on an alignment boundary as defined by IdxBufferAlignmentError param | VTX_INDEX_ALIGNMENT_ERROR | vkCmdBindIndexBuffer | IdxBufferAlignmentError | NA |
| Cmd Buffer End | Verifies that EndCommandBuffer was called for this commandBuffer at QueueSubmit time | NO_END_COMMAND_BUFFER | vkQueueSubmit | NoEndCommandBuffer | NA |
| Cmd Buffer Begin | Check that BeginCommandBuffer was called for this command buffer when binding commands or calling end | NO_BEGIN_COMMAND_BUFFER | vkEndCommandBuffer vkCmdBindPipeline vkCmdSetViewport vkCmdSetLineWidth vkCmdSetDepthBias vkCmdSetBlendConstants vkCmdSetDepthBounds vkCmdSetStencilCompareMask vkCmdSetStencilWriteMask vkCmdSetStencilReference vkCmdBindDescriptorSets vkCmdBindIndexBuffer vkCmdBindVertexBuffers vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatch vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearAttachments vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdResolveImage vkCmdSetEvent vkCmdResetEvent vkCmdWaitEvents vkCmdPipelineBarrier vkCmdBeginQuery vkCmdEndQuery vkCmdResetQueryPool vkCmdWriteTimestamp | NoBeginCommandBuffer | NA |
| Cmd Buffer Submit Count | Verify that ONE_TIME submit cmdbuffer is not submitted multiple times | COMMAND_BUFFER_SINGLE_SUBMIT_VIOLATION | vkBeginCommandBuffer, vkQueueSubmit | CommandBufferTwoSubmits | NA |
| Valid Secondary CommandBuffer | Validates that no primary command buffers are sent to vkCmdExecuteCommands() | INVALID_SECONDARY_COMMAND_BUFFER | vkCmdExecuteCommands | ExecuteCommandsPrimaryCB | NA |
| Invalid Descriptor Set | Invalid Descriptor Set used. Either never created or already destroyed. | INVALID_DESCRIPTOR_SET | vkQueueSubmit | TODO | Create Test |
| Descriptor Type | Verify Descriptor type in bound descriptor set layout matches descriptor type specified in update. This also includes mismatches in the TYPES of copied descriptors. | DESCRIPTOR_TYPE_MISMATCH | vkUpdateDescriptorSets | DSTypeMismatch CopyDescriptorUpdateErrors | NA |
| Descriptor StageFlags | Verify all descriptors within a single write update have the same stageFlags | DESCRIPTOR_STAGEFLAGS_MISMATCH | vkUpdateDescriptorSets | NONE | Test this case |
| DS Update Size | DS update out of bounds for given layout section. | DESCRIPTOR_UPDATE_OUT_OF_BOUNDS | vkUpdateDescriptorSets | DSUpdateOutOfBounds CopyDescriptorUpdateErrors | NA |
| Descriptor Pool empty | Attempt to allocate descriptor type from descriptor pool when no more of that type are available to be allocated. | DESCRIPTOR_POOL_EMPTY | vkAllocateDescriptorSets | AllocDescriptorFromEmptyPool | NA |
| Free from NON_FREE Pool | It's invalid to call vkFreeDescriptorSets() on Sets that were allocated from a Pool created with NON_FREE usage. | CANT_FREE_FROM_NON_FREE_POOL | vkFreeDescriptorSets | None | NA |
| DS Update Index | DS update binding too large for layout binding count. | INVALID_UPDATE_INDEX | vkUpdateDescriptorSets | InvalidDSUpdateIndex CopyDescriptorUpdateErrors | NA |
| DS Update Type | Verifies that structs in DS Update tree are properly created, currently valid, and of the right type | INVALID_UPDATE_STRUCT | vkUpdateDescriptorSets | InvalidDSUpdateStruct | NA |
| MSAA Sample Count | Verifies that Pipeline, RenderPass, and Subpass sample counts are consistent | NUM_SAMPLES_MISMATCH | vkCmdBindPipeline vkCmdBeginRenderPass vkCmdNextSubpass | NumSamplesMismatch | NA |
| Dynamic Viewport State Binding | Verify that viewport dynamic state bound to Cmd Buffer at Draw time | VIEWPORT_NOT_BOUND | vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect | ViewportStateNotBound | NA |
| Dynamic Scissor State Binding | Verify that scissor dynamic state bound to Cmd Buffer at Draw time | SCISSOR_NOT_BOUND | vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect | ScissorStateNotBound | NA |
| Dynamic Line Width State Binding | Verify that line width dynamic state bound to Cmd Buffer when required (TODO : Verify when this is) | LINE_WIDTH_NOT_BOUND | vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect | TODO | Verify this check and Write targeted test |
| Dynamic Depth Bias State Binding | Verify that depth bias dynamic state bound when depth enabled | DEPTH_BIAS_NOT_BOUND | vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect | TODO | Verify this check and Write targeted test |
| Dynamic Blend State Binding | Verify that blend dynamic state bound when color blend enabled | BLEND_NOT_BOUND | vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect | TODO | Verify this check and Write targeted test |
| Dynamic Depth Bounds State Binding | Verify that depth bounds dynamic state bound when depth enabled | DEPTH_BOUNDS_NOT_BOUND | vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect | TODO | Verify this check and Write targeted test |
| Dynamic Stencil State Binding | Verify that stencil dynamic state bound when depth enabled | STENCIL_NOT_BOUND | vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect | TODO | Verify this check and Write targeted test |
| RenderPass misuse | Tests for the following: that vkCmdDispatch, vkCmdDispatchIndirect, vkCmdCopyBuffer, vkCmdCopyImage, vkCmdBlitImage, vkCmdCopyBufferToImage, vkCmdCopyImageToBuffer, vkCmdUpdateBuffer, vkCmdFillBuffer, vkCmdClearColorImage, vkCmdClearDepthStencilImage, vkCmdResolveImage, vkCmdSetEvent, vkCmdResetEvent, vkCmdResetQueryPool, vkCmdCopyQueryPoolResults, vkCmdBeginRenderPass are not called during an active Renderpass, and that binding compute descriptor sets or pipelines does not take place during an active Renderpass | INVALID_RENDERPASS_CMD | vkCmdBindPipeline vkCmdBindDescriptorSets vkCmdDispatch vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdResolveImage vkCmdSetEvent vkCmdResetEvent vkCmdResetQueryPool vkCmdCopyQueryPoolResults vkCmdBeginRenderPass | RenderPassWithinRenderPass UpdateBufferWithinRenderPass ClearColorImageWithinRenderPass ClearDepthStencilImageWithinRenderPass FillBufferWithinRenderPass | NA |
| Correct use of RenderPass | Validates that the following rendering commands are issued inside an active RenderPass: vkCmdDraw, vkCmdDrawIndexed, vkCmdDrawIndirect, vkCmdDrawIndexedIndirect, vkCmdClearAttachments, vkCmdNextSubpass, vkCmdEndRenderPass | NO_ACTIVE_RENDERPASS | vkCmdBindPipeline vkCmdBindDescriptorSets vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdClearAttachments vkCmdNextSubpass vkCmdEndRenderPass | BindPipelineNoRenderPass ClearAttachmentsOutsideRenderPass | NA |
| Valid RenderPass | Flag error if attempt made to Begin/End/Continue a NULL or otherwise invalid RenderPass object | INVALID_RENDERPASS | vkCmdBeginRenderPass vkCmdEndRenderPass vkBeginCommandBuffer | NullRenderPass | NA |
| RenderPass Compatibility | Verify that active renderpass is compatible with renderpass specified in secondary command buffer, and that renderpass specified for a framebuffer is compatible with renderpass specified in secondary command buffer | RENDERPASS_INCOMPATIBLE | vkCmdExecuteCommands vkBeginCommandBuffer | TBD | None |
| Framebuffer Compatibility | If a framebuffer is passed to secondary command buffer in vkBeginCommandBuffer, then it must match active renderpass (if any) at time of vkCmdExecuteCommands | FRAMEBUFFER_INCOMPATIBLE | vkCmdExecuteCommands | TBD | None |
| DescriptorSet Updated | Warn user if DescriptorSet bound that was never updated and is not empty. Trigger error at draw time if a set being used was never updated. | DESCRIPTOR_SET_NOT_UPDATED | vkCmdBindDescriptorSets vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect | DescriptorSetCompatibility | NA |
| DescriptorSet Bound | Error if DescriptorSet not bound that is used by currently bound VkPipeline at draw time | DESCRIPTOR_SET_NOT_BOUND | vkCmdBindDescriptorSets | DescriptorSetNotUpdated | NA |
| Dynamic Offset Count | Error if dynamicOffsetCount at CmdBindDescriptorSets time is not equal to the actual number of dynamic descriptors in all sets being bound. | INVALID_DYNAMIC_OFFSET_COUNT | vkCmdBindDescriptorSets | InvalidDynamicOffsetCases | None |
| Dynamic Offsets | At draw time, for a *_DYNAMIC type descriptor, the combination of dynamicOffset along with offset and range from its descriptor update must be less than the size of its buffer. | DYNAMIC_OFFSET_OVERFLOW | vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect | InvalidDynamicOffsetCases | None |
| Correct Clear Use | Warn user if CmdClear for Color or DepthStencil issued to Cmd Buffer prior to a Draw Cmd. RenderPass LOAD_OP_CLEAR is preferred in this case. | CLEAR_CMD_BEFORE_DRAW | vkCmdClearColorImage vkCmdClearDepthStencilImage | ClearCmdNoDraw | NA |
| Index Buffer Binding | Verify that an index buffer is bound at the point when an indexed draw is attempted. | INDEX_BUFFER_NOT_BOUND | vkCmdDrawIndexed vkCmdDrawIndexedIndirect | TODO | Implement validation test |
| Viewport and Scissors match | In PSO viewportCount and scissorCount must match. Also for each count that is non-zero, their corresponding data array ptr should be non-NULL. | VIEWPORT_SCISSOR_MISMATCH | vkCreateGraphicsPipelines vkCmdSetViewport vkCmdSetScissor | TODO | Implement validation test |
| Valid Image Aspects for descriptor Updates | When updating ImageView for Descriptor Sets with layout of DEPTH_STENCIL type, the Image Aspect must not have both the DEPTH and STENCIL aspects set, but must have one of the two set. For COLOR_ATTACHMENT, aspect must have COLOR_BIT set. | INVALID_IMAGE_ASPECT | vkUpdateDescriptorSets | DepthStencilImageViewWithColorAspectBitError | This test hits Image layer error, but tough to create case that skips that error and gets to VK_LAYER_LUNARG_core_validation draw state error. |
| Valid sampler descriptor Updates | An invalid sampler is used when updating SAMPLER descriptor. | SAMPLER_DESCRIPTOR_ERROR | vkUpdateDescriptorSets | SampleDescriptorUpdateError | Currently only making sure sampler handle is known, can add further validation for sampler parameters |
| Immutable sampler update consistency | Within a single write update, all sampler updates must use either immutable samplers or non-immutable samplers, but not a combination of both. | INCONSISTENT_IMMUTABLE_SAMPLER_UPDATE | vkUpdateDescriptorSets | None | Write a test for this case |
| Valid imageView descriptor Updates | An invalid imageView is used when updating *_IMAGE or *_ATTACHMENT descriptor.
| IMAGEVIEW_DESCRIPTOR_ERROR | vkUpdateDescriptorSets | ImageViewDescriptorUpdateError | Currently only making sure imageView handle is known, can add further validation for imageView and underlying image parameters | | Valid bufferView descriptor Updates | An invalid bufferView is used when updating *_TEXEL_BUFFER descriptor. | BUFFERVIEW_DESCRIPTOR_ERROR | vkUpdateDescriptorSets | BufferViewDescriptorUpdateError | Currently only making sure bufferView handle is known, can add further validation for bufferView parameters | | Valid bufferInfo descriptor Updates | An invalid bufferInfo is used when updating *_UNIFORM_BUFFER* or *_STORAGE_BUFFER* descriptor. | BUFFERINFO_DESCRIPTOR_ERROR | vkUpdateDescriptorSets | TODO | Implement validation test | | Attachment References in Subpass | Attachment reference must be present in active subpass | MISSING_ATTACHMENT_REFERENCE | vkCmdClearAttachments | BufferInfoDescriptorUpdateError | Currently only making sure bufferInfo has buffer whose handle is known, can add further validation for bufferInfo parameters | | Verify Image Layouts | Validate correct image layouts for presents, image transitions, command buffers and renderpasses | INVALID_IMAGE_LAYOUT | vkCreateRenderPass vkMapMemory vkQueuePresentKHR vkQueueSubmit vkCmdCopyImage vkCmdCopyImageToBuffer vkCmdWaitEvents VkCmdPipelineBarrier | TBD | None | | Verify Memory Access Flags/Memory Barriers | Validate correct access flags for memory barriers | INVALID_BARRIER | vkCmdWaitEvents vkCmdPipelineBarrier | TBD | None | | Verify Memory Buffer Not Deleted | Validate Command Buffer not submitted with deleted memory buffer | INVALID_BUFFER | vkQueueSubmit | TBD | None | | Verify Memory Buffer Destroy | Validate memory buffers are not destroyed more than once | DOUBLE_DESTROY | vkDestroyBuffer | TBD | None | | Verify Object Not In Use | Validate that object being freed or modified is not in use | OBJECT_INUSE | vkDestroyBuffer vkFreeDescriptorSets vkUpdateDescriptorSets | TBD | 
None | | Verify Get Queries| Validate that that queries are properly setup, initialized and synchronized | INVALID_QUERY | vkGetFenceStatus vkQueueWaitIdle vkWaitForFences vkDeviceWaitIdle vkCmdBeginQuery vkCmdEndQuery | TBD | None | | Verify Fences Not In Use | Validate that that fences are not used in multiple submit calls at the same time | INVALID_FENCE | vkQueueSubmit | TBD | None | | Verify Semaphores Not In Use | Validate that the semaphores are not used in multiple submit calls at the same time | INVALID_SEMAPHORE | vkQueueSubmit | TBD | None | | Verify Events Not In Use | Validate that that events are not used at the time they are destroyed | INVALID_EVENT | vkDestroyEvent | TBD | None | | Live Semaphore | When waiting on a semaphore, need to make sure that the semaphore is live and therefore can be signalled, otherwise queue is stalled and cannot make forward progress. | QUEUE_FORWARD_PROGRESS | vkQueueSubmit vkQueueBindSparse vkQueuePresentKHR vkAcquireNextImageKHR | TODO | Create test | | Buffer Alignment | Buffer memory offset in BindBufferMemory must agree with VkMemoryRequirements::alignment returned from a call to vkGetBufferMemoryRequirements with buffer | INVALID_BUFFER_MEMORY_OFFSET | vkBindBufferMemory | TODO | Create test | | Texel Buffer Alignment | Storage/Uniform Texel Buffer memory offset in BindBufferMemory must agree with offset alignment device limit | INVALID_TEXEL_BUFFER_OFFSET | vkBindBufferMemory | TODO | Create test | | Storage Buffer Alignment | Storage Buffer offsets in BindBufferMemory, BindDescriptorSets must agree with offset alignment device limit | INVALID_STORAGE_BUFFER_OFFSET | vkBindBufferMemory vkCmdBindDescriptorSets | TODO | Create test | | Uniform Buffer Alignment | Uniform Buffer offsets in BindBufferMemory, BindDescriptorSets must agree with offset alignment device limit | INVALID_UNIFORM_BUFFER_OFFSET | vkBindBufferMemory vkCmdBindDescriptorSets | TODO | Create test | | Independent Blending | If independent blending 
is not enabled, all elements of pAttachments must be identical | INDEPENDENT_BLEND | vkCreateGraphicsPipelines | TODO | Create test | | Enabled Logic Operations | If logic operations is not enabled, logicOpEnable must be VK_FALSE | DISABLED_LOGIC_OP | vkCreateGraphicsPipelines | TODO | Create test | | Valid Logic Operations | If logicOpEnable is VK_TRUE, logicOp must be a valid VkLogicOp value | INVALID_LOGIC_OP | vkCreateGraphicsPipelines | TODO | Create test | | QueueFamilyIndex is Valid | Validates that QueueFamilyIndices are less an the number of QueueFamilies | INVALID_QUEUE_INDEX | vkCmdWaitEvents vkCmdPipelineBarrier vkCreateBuffer vkCreateImage | TODO | Create test | | Push Constants | Validate that the size of push constant ranges and updates does not exceed maxPushConstantSize | PUSH_CONSTANTS_ERROR | vkCreatePipelineLayout vkCmdPushConstants | TODO | Create test | | NA | Enum used for informational messages | NONE | | NA | None | | NA | Enum used for errors in the layer itself. This does not indicate an app issue, but instead a bug in the layer. | INTERNAL_ERROR | | NA | None | | NA | Enum used when VK_LAYER_LUNARG_core_validation attempts to allocate memory for its own internal use and is unable to. | OUT_OF_MEMORY | | NA | None | | NA | Enum used when VK_LAYER_LUNARG_core_validation attempts to allocate memory for its own internal use and is unable to. | OUT_OF_MEMORY | | NA | None | ### VK_LAYER_LUNARG_core_validation Draw State Pending Work Additional Draw State-related checks to be added: 1. Lifetime validation (See [bug 13383](https://cvs.khronos.org/bugzilla/show_bug.cgi?id=13383)) 2. GetRenderAreaGranularity - The pname:renderPass parameter must be the same as the one given in the sname:VkRenderPassBeginInfo structure for which the render area is relevant. 3. Update Gfx Pipe Create Info shadowing to remove new/delete and instead use unique_ptrs for auto clean-up 4. 
Add validation for Pipeline Derivatives (see Pipeline Derivatives) section of the spec See the Khronos github repository for Vulkan-LoaderAndValidationLayers for additional pending issues, or to submit new validation requests ### VK_LAYER_LUNARG_core_validation Shader Checker Details Table The Shader Checker portion of the VK_LAYER_LUNARG_core_validation layer inspects the SPIR-V shader images and fixed function pipeline stages at PSO creation time. It flags errors when inconsistencies are found across interfaces between shader stages. The exact behavior of the checksdepends on the pair of pipeline stages involved. | Check | Overview | ENUM SHADER_CHECKER_* | Relevant API | Testname | Notes/TODO | | ----- | -------- | ---------------- | ------------ | -------- | ---------- | | Not consumed | Flag warning if a location is not consumed (useless work) | OUTPUT_NOT_CONSUMED | vkCreateGraphicsPipelines | CreatePipeline*NotConsumed | NA | | Not produced | Flag error if a location is not produced (consumer reads garbage) | INPUT_NOT_PRODUCED | vkCreateGraphicsPipelines | CreatePipeline*NotProvided | NA | | Type mismatch | Flag error if a location has inconsistent types | INTERFACE_TYPE_MISMATCH | vkCreateGraphicsPipelines | CreatePipeline*TypeMismatch | Between shader stages, an exact structural type match is required. Between VI and VS, or between FS and CB, only the basic component type must match (float for UNORM/SNORM/FLOAT, int for SINT, uint for UINT) as the VI and CB stages perform conversions to the exact format. | | Inconsistent shader | Flag error if an inconsistent SPIR-V image is detected. Possible cases include broken type definitions which the layer fails to walk. | INCONSISTENT_SPIRV | vkCreateGraphicsPipelines | TODO | All current tests use the reference compiler to produce valid SPIRV images from GLSL. | | Non-SPIRV shader | Flag warning if a non-SPIR-V shader image is detected. This can occur if early drivers are ingesting GLSL. 
VK_LAYER_LUNARG_ShaderChecker cannot analyze non-SPIRV shaders, so this suppresses most other checks. | NON_SPIRV_SHADER | vkCreateGraphicsPipelines | TODO | NA | | VI Binding Descriptions | Validate that there is a single vertex input binding description for each binding | INCONSISTENT_VI | vkCreateGraphicsPipelines | CreatePipelineAttribBindingConflict | NA | | Shader Stage Check | Warns if shader stage is unsupported | UNKNOWN_STAGE | vkCreateGraphicsPipelines | TBD | NA | | Shader Specialization | Error if specialization entry data is not fully contained within the specialization data block. | BAD_SPECIALIZATION | vkCreateGraphicsPipelines vkCreateComputePipelines | TBD | NA | | Missing Descriptor | Flags error if shader attempts to use a descriptor binding not declared in the layout | MISSING_DESCRIPTOR | vkCreateGraphicsPipelines | CreatePipelineUniformBlockNotProvided | NA | | Missing Entrypoint | Flags error if specified entrypoint is not present in the shader module | MISSING_ENTRYPOINT | vkCreateGraphicsPipelines | TBD | NA | | Push constant out of range | Flags error if a member of a push constant block is not contained within a push constant range specified in the pipeline layout | PUSH_CONSTANT_OUT_OF_RANGE | vkCreateGraphicsPipelines | CreatePipelinePushContantsNotInLayout | NA | | Push constant not accessible from stage | Flags error if the push constant range containing a push constant block member is not accessible from the current shader stage. | PUSH_CONSTANT_NOT_ACCESSIBLE_FROM_STAGE | vkCreateGraphicsPipelines | TBD | NA | | Descriptor not accessible from stage | Flags error if a descriptor used by a shader stage does not include that stage in its stageFlags | DESCRIPTOR_NOT_ACCESSIBLE_FROM_STAGE | vkCreateGraphicsPipelines | TBD | NA | | Descriptor type mismatch | Flags error if a descriptor type does not match the shader resource type. 
| DESCRIPTOR_TYPE_MISMATCH | vkCreateGraphicsPipelines | TBD | NA | | Feature not enabled | Flags error if a capability declared by the shader requires a feature not enabled on the device | FEATURE_NOT_ENABLED | vkCreateGraphicsPipelines | TBD | NA | | Bad capability | Flags error if a capability declared by the shader is not supported by Vulkan shaders | BAD_CAPABILITY | vkCreateGraphicsPipelines | TBD | NA | | NA | Enum used for informational messages | NONE | | NA | None | ### VK_LAYER_LUNARG_core_validation Shader Checker Pending Work - Additional test cases for variously broken SPIRV images - Validation of a single SPIRV image in isolation (the spec describes many constraints) See the Khronos github repository for Vulkan-LoaderAndValidationLayers for additional pending issues, or to submit new validation requests ### VK_LAYER_LUNARG_core_validation Memory Tracker Details Table The Mem Tracker portion of the VK_LAYER_LUNARG_core_validation layer tracks memory objects and references and validates that they are managed correctly by the application. This includes tracking object bindings, memory hazards, and memory object lifetimes. Several other hazard-related issues related to command buffers, fences, and memory mapping are also validated in this layer segment. 
| Check | Overview | ENUM MEMTRACK_* | Relevant API | Testname | Notes/TODO |
| ----- | -------- | ---------------- | ------------ | -------- | ---------- |
| Valid Command Buffer | Verifies that the command buffer was properly created and is currently valid | INVALID_CB | vkCmdBindPipeline vkCmdSetViewport vkCmdSetLineWidth vkCmdSetDepthBias vkCmdSetBlendConstants vkCmdSetDepthBounds vkCmdSetStencilCompareMask vkCmdSetStencilWriteMask vkCmdSetStencilReference vkBeginCommandBuffer vkResetCommandBuffer vkDestroyDevice vkFreeMemory | NA | NA |
| Valid Memory Object | Verifies that the memory object was properly created and is currently valid | INVALID_MEM_OBJ | vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdResolveImage vkFreeMemory vkBindBufferMemory vkBindImageMemory vkQueueBindSparse | NA | NA |
| Memory Aliasing | Flag error if image and/or buffer memory binding ranges overlap | INVALID_ALIASING | vkBindBufferMemory vkBindImageMemory | TODO | Implement test |
| Memory Layout | Flag error if attachment is cleared with invalid first layout | INVALID_LAYOUT | vkCmdBeginRenderPass | TODO | Implement test |
| Free Referenced Memory | Checks to see if memory being freed still has current references | FREED_MEM_REF | vkFreeMemory | FreeBoundMemory | NA |
| Memory Properly Bound | Validate that the memory object referenced in the call was properly created, is currently valid, and is properly bound to the object | MISSING_MEM_BINDINGS | vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyQueryPoolResults vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdResolveImage | NA | NA |
| Valid Object | Verifies that the specified Vulkan object was created properly and is currently valid | INVALID_OBJECT | vkCmdBindPipeline vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdResolveImage | NA | NA |
| Bind Invalid Memory | Validate that memory object was correctly created, that the command buffer object was correctly created, and that both are currently valid objects. | MEMORY_BINDING_ERROR | vkQueueBindSparse vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdResolveImage | NA | The valid Object checks are primarily the responsibility of the VK_LAYER_LUNARG_object_tracker layer, so these checks are more of a backup in case VK_LAYER_LUNARG_object_tracker is not enabled |
| Objects Not Destroyed | Verify all objects destroyed at DestroyDevice time | MEMORY_LEAK | vkDestroyDevice | NA | NA |
| Memory Mapping State | Verifies that mapped memory is CPU-visible | INVALID_STATE | vkMapMemory | MapMemWithoutHostVisibleBit | NA |
| Command Buffer Synchronization | Command Buffer must be complete before BeginCommandBuffer or ResetCommandBuffer can be called | RESET_CB_WHILE_IN_FLIGHT | vkBeginCommandBuffer vkResetCommandBuffer | CallBeginCommandBufferBeforeCompletion | NA |
| Submitted Fence Status | Verifies that: the fence is not submitted in an already signaled state, that ResetFences is not called with a fence in an unsignaled state, and that fences being checked have been submitted | INVALID_FENCE_STATE | vkResetFences vkWaitForFences vkQueueSubmit vkGetFenceStatus | SubmitSignaledFence ResetUnsignaledFence | Create test(s) for case where an unsubmitted fence is having its status checked |
| Immutable Memory Binding | Validates that non-sparse memory bindings are immutable, so objects are not re-bound | REBIND_OBJECT | vkBindBufferMemory vkBindImageMemory | RebindMemory | NA |
| Image/Buffer Usage bits | Verify correct USAGE bits set based on how Images and Buffers are used | INVALID_USAGE_FLAG | vkCreateImage vkCreateBuffer vkCreateBufferView vkCmdCopyBuffer vkCmdCopyQueryPoolResults vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer | InvalidUsageBits | NA |
| Objects Not Destroyed Warning | Warns if any memory objects have not been freed before their objects are destroyed | MEM_OBJ_CLEAR_EMPTY_BINDINGS | vkDestroyDevice | TBD | NA |
| Memory Map Range Checks | Validates that Memory Mapping Requests are valid for the Memory Object (in-range, not currently mapped on Map, currently mapped on UnMap, size is non-zero) | INVALID_MAP | vkMapMemory | TBD | NA |
| NA | Enum used for informational messages | NONE | | NA | None |
| NA | Enum used for errors in the layer itself. This does not indicate an app issue, but instead a bug in the layer. | INTERNAL_ERROR | | NA | None |

### VK_LAYER_LUNARG_core_validation Memory Tracker Pending Work and Enhancements

See the Khronos github repository for Vulkan-LoaderAndValidationLayers for additional pending issues, or to submit new validation requests

1. Consolidate error messages and make them consistent
2. Add validation for maximum memory references, maximum object counts, and object leaks
3. Warn on image/buffer deletion if USAGE bits were set that were not needed
4. Modify INVALID_FENCE_STATE to be WARNINGs instead of ERRORs

## VK_LAYER_LUNARG_parameter_validation

### VK_LAYER_LUNARG_parameter_validation Overview

The VK_LAYER_LUNARG_parameter_validation layer validates parameter values and flags errors for any values that are outside of the acceptable values for the given parameter.
### VK_LAYER_LUNARG_parameter_validation Details Table | Check | Overview | ENUM | Relevant API | Testname | Notes/TODO | | ----- | -------- | ---------------- | ------------ | -------- | ---------- | | Input Parameters | Pointers in structures are recursively validated to be non-null. Enumerated types are validated against min and max enum values. Structure Types are verified to be correct. | NA | vkQueueSubmit vkAllocateMemory vkFlushMappedMemoryRanges vkInvalidateMappedMemoryRanges vkQueueBindSparse vkCreateFence vkResetFences vkWaitForFences vkCreateSemaphore vkCreateEvent vkCreateQueryPool vkCreateBuffer vkCreateBufferView vkCreateImage vkGetImageSubresourceLayout vkCreateImageView vkCreatePipelineCache vkMergePipelineCaches vkCreateGraphicsPipelines vkCreateComputePipelines vkCreatePipelineLayout vkCreateSampler vkCreateDescriptorSetLayout( vkCreateDescriptorPool vkAllocateDescriptorSets vkFreeDescriptorSets vkUpdateDescriptorSets vkCreateFramebuffer vkCreateRenderPass vkCreateCommandPool vkAllocateCommandBuffers vkBeginCommandBuffer vkCmdBindDescriptorSets vkCmdBindVertexBuffers vkCmdCopyBuffer vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdClearAttachments vkCmdResolveImage vkCmdWaitEvents vkCmdPipelineBarrier vkCmdPushConstants vkCmdBeginRenderPass vkCmdExecuteCommands | TBD | NA | | Call results, Output Parameters | Return values are checked for VK_SUCCESS, returned pointers are checked to be NON-NULL, enumerated types of return values are checked to be within the defined range. 
| NA | vkEnumeratePhysicalDevices vkGetPhysicalDeviceFeatures vkGetPhysicalDeviceFormatProperties vkGetPhysicalDeviceImageFormatProperties vkGetPhysicalDeviceLimits vkGetPhysicalDeviceProperties vkGetPhysicalDeviceQueueFamilyProperties vkGetPhysicalDeviceMemoryProperties vkGetDeviceQueue vkQueueSubmit vkQueueWaitIdle vkDeviceWaitIdle vkAllocateMemory vkFreeMemory vkMapMemory vkUnmapMemory vkFlushMappedMemoryRanges vkInvalidateMappedMemoryRanges vkGetDeviceMemoryCommitment vkBindBufferMemory vkBindImageMemory vkGetBufferMemoryRequirements vkGetImageMemoryRequirements vkGetImageSparseMemoryRequirements vkGetPhysicalDeviceSparseImageFormatProperties vkQueueBindSparse vkCreateFence vkDestroyFence vkResetFences vkGetFenceStatus vkWaitForFences vkCreateSemaphore vkDestroySemaphore vkCreateEvent vkDestroyEvent vkGetEventStatus vkSetEvent vkResetEvent vkCreateQueryPool vkDestroyQueryPool vkGetQueryPoolResults vkCreateBuffer vkDestroyBuffer vkCreateBufferView vkDestroyBufferView vkCreateImage vkDestroyImage vkGetImageSubresourceLayout vkCreateImageView vkDestroyImageView vkDestroyShaderModule vkCreatePipelineCache vkDestroyPipelineCache vkGetPipelineCacheData vkMergePipelineCaches vkCreateGraphicsPipelines vkCreateComputePipelines vkDestroyPipeline vkCreatePipelineLayout vkDestroyPipelineLayout vkCreateSampler vkDestroySampler vkCreateDescriptorSetLayout vkDestroyDescriptorSetLayout vkCreateDescriptorPool vkDestroyDescriptorPool vkResetDescriptorPool vkAllocateDescriptorSets vkFreeDescriptorSets vkUpdateDescriptorSets vkCreateFramebuffer vkDestroyFramebuffer vkCreateRenderPass vkDestroyRenderPass vkGetRenderAreaGranularity vkCreateCommandPool vkDestroyCommandPool vkResetCommandPool vkAllocateCommandBuffers vkFreeCommandBuffers vkBeginCommandBuffer vkEndCommandBuffer vkResetCommandBuffer vkCmdBindPipeline vkCmdBindDescriptorSets vkCmdBindIndexBuffer vkCmdBindVertexBuffers vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatch 
vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdClearAttachments vkCmdResolveImage vkCmdSetEvent vkCmdResetEvent vkCmdWaitEvents vkCmdPipelineBarrier vkCmdBeginQuery vkCmdEndQuery vkCmdResetQueryPool vkCmdWriteTimestamp vkCmdCopyQueryPoolResults vkCmdPushConstants vkCmdBeginRenderPass vkCmdNextSubpass vkCmdEndRenderPass vkCmdExecuteCommands | TBD | NA | | NA | Enum used for informational messages | NONE | | NA | None | ### VK_LAYER_LUNARG_parameter_validation Pending Work Additional work to be done 1. Source2 was creating a VK_FORMAT_R8_SRGB texture (and image view) which was not supported by the underlying implementation (rendersystemtest imageformat test). Checking that formats are supported by the implementation is something the validation layer could do using the VK_FORMAT_INFO_TYPE_PROPERTIES query. There are probably a bunch of checks here you could be doing around vkCreateImage formats along with whether image/color/depth attachment views are valid. I’m not sure how much of this is already there. 2. From AMD: we were using an image view with a swizzle of VK_COLOR_COMPONENT_FORMAT_A with a BC1_RGB texture, which is not valid because the texture does not have an alpha channel. In general, should validate that the swizzles do not reference components not in the texture format. 3. When querying VK_PHYSICAL_DEVICE_INFO_TYPE_QUEUE_PROPERTIES must provide enough memory for all the queues on the device (not just 1 when device has multiple queues). 4. INT & FLOAT bordercolors. Border color int/float selection must match associated texture format. 5. Flag error on VkBufferCreateInfo if buffer size is 0 6. VkImageViewCreateInfo.format must be set 7. For vkCreateGraphicsPipelines, correctly handle array of pCreateInfos and array of pStages within each element of pCreatInfos 8. 
Check for valid VkIndexType in vkCmdBindIndexBuffer() should be in PreCmdBindIndexBuffer() call 9. Check for valid VkPipelineBindPoint in vkCmdBindPipeline() & vkCmdBindDescriptorSets() should be in PreCmdBindPipeline() & PreCmdBindDescriptorSets() calls respectively. See the Khronos github repository for Vulkan-LoaderAndValidationLayers for additional pending issues, or to submit new validation requests ## VK_LAYER_LUNARG_image ### VK_LAYER_LUNARG_image Layer Overview The VK_LAYER_LUNARG_image layer is responsible for validating format-related information and enforcing format restrictions. ### VK_LAYER_LUNARG_image Layer Details Table DETAILS TABLE PENDING | Check | Overview | ENUM IMAGE_* | Relevant API | Testname | Notes/TODO | | ----- | -------- | ---------------- | ------------ | -------- | ---------- | | Image Format | Verifies that requested format is a supported Vulkan format on this device | FORMAT_UNSUPPORTED | vkCreateImage vkCreateRenderPass | TBD | NA | | RenderPass Attachments | Validates that attachment image layouts, loadOps, and storeOps are valid Vulkan values | RENDERPASS_INVALID_ATTACHMENT | vkCreateRenderPass | TBD | NA | | Subpass DS Settings | Verifies that if there is no depth attachment then the subpass attachment is set to VK_ATTACHMENT_UNUSED | RENDERPASS_INVALID_DS_ATTACHMENT | vkCreateRenderPass | TBD | NA | | View Creation | Verify that requested Image View Creation parameters are reasonable for the image that the view is being created for | VIEW_CREATE_ERROR | vkCreateImageView | TBD | NA | | Image Aspects | Verify that Image commands are using valid Image Aspect flags | INVALID_IMAGE_ASPECT | vkCreateImageView vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdClearAttachments vkCmdCopyImage vkCmdCopyImageToBuffer vkCmdCopyBufferToImage vkCmdResolveImage vkCmdBlitImage | InvalidImageViewAspect | NA | | Image Aspect Mismatch | Verify that Image commands with source and dest images use matching aspect flags | MISMATCHED_IMAGE_ASPECT 
| vkCmdCopyImage | TBD | NA | | Image Type Mismatch | Verify that Image commands with source and dest images use matching types | MISMATCHED_IMAGE_TYPE | vkCmdCopyImage vkCmdResolveImage | ResolveImageTypeMismatch | NA | | Image Format Mismatch | Verify that Image commands with source and dest images use matching formats | MISMATCHED_IMAGE_FORMAT | vkCmdCopyImage vkCmdResolveImage | CopyImageDepthStencilFormatMismatch ResolveImageFormatMismatch | NA | | Resolve Sample Count | Verifies that source and dest images sample counts are valid | INVALID_RESOLVE_SAMPLES | vkCmdResolveImage | ResolveImageHighSampleCount ResolveImageLowSampleCount | NA | | Verify Format | Verifies the formats are valid for this image operation | INVALID_FORMAT | vkCreateImageView vkCmdBlitImage | TBD | NA | | Verify Correct Image Filter| Verifies that specified filter is valid | INVALID_FILTER | vkCmdBlitImage | TBD | NA | | Verify Correct Image Settings | Verifies that values are valid for a given resource or subresource | INVALID_IMAGE_RESOURCE | vkCmdPipelineBarrier | TBD | NA | | Verify Image Format Limits | Verifies that image creation parameters are with the device format limits | INVALID_FORMAT_LIMITS_VIOLATION | vkCreateImage | TBD | NA | | Verify Layout | Verifies the layouts are valid for this image operation | INVALID_LAYOUT | vkCreateImage vkCmdClearColorImage | TBD | NA | | Verify Image Extents | Validates that image extent limits are not invalid | INVALID_EXTENTS | vkCmdCopyImage | CopyImageLayerCountMismatch | NA | | NA | Enum used for informational messages | NONE | | NA | None | ### VK_LAYER_LUNARG_image Pending Work See the Khronos github repository for Vulkan-LoaderAndValidationLayers for additional pending issues, or to submit new validation requests ## VK_LAYER_LUNARG_object_tracker ### VK_LAYER_LUNARG_object_tracker Overview The VK_LAYER_LUNARG_object_tracker layer maintains a record of all Vulkan objects. 
It flags errors when invalid objects are used and at DestroyInstance time it flags any objects that were not properly destroyed. ### VK_LAYER_LUNARG_object_tracker Details Table | Check | Overview | ENUM OBJTRACK_* | Relevant API | Testname | Notes/TODO | | ----- | -------- | ---------------- | ------------ | -------- | ---------- | | Valid Object | Validates that referenced object was properly created and is currently valid. | INVALID_OBJECT | vkAcquireNextImageKHR vkAllocateDescriptorSets vkAllocateMemory vkBeginCommandBuffer vkBindBufferMemory vkBindImageMemory vkCmdBeginQuery vkCmdBeginRenderPass vkCmdBindDescriptorSets vkCmdBindIndexBuffer vkCmdBindPipeline vkCmdBindVertexBuffers vkCmdBlitImage vkCmdClearAttachments vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdCopyBuffer vkCmdCopyBufferToImage vkCmdCopyImage vkCmdCopyImageToBuffer vkCmdCopyQueryPoolResults vkCmdDispatch vkCmdDispatchIndirect vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndexedIndirect vkCmdDrawIndirect vkCmdEndQuery vkCmdEndRenderPass vkCmdExecuteCommands vkCmdFillBuffer vkCmdNextSubpass vkCmdPipelineBarrier vkCmdPushConstants vkCmdResetEvent vkCmdResetQueryPool vkCmdResolveImage vkCmdSetEvent vkCmdUpdateBuffer vkCmdWaitEvents vkCmdWriteTimestamp vkCreateBuffer vkCreateBufferView vkAllocateCommandBuffers vkCreateCommandPool vkCreateComputePipelines vkCreateDescriptorPool vkCreateDescriptorSetLayout vkCreateEvent vkCreateFence vkCreateFramebuffer vkCreateGraphicsPipelines vkCreateImage vkCreateImageView vkCreatePipelineCache vkCreatePipelineLayout vkCreateQueryPool vkCreateRenderPass vkCreateSampler vkCreateSemaphore vkCreateShaderModule vkCreateSwapchainKHR vkDestroyBuffer vkDestroyBufferView vkFreeCommandBuffers vkDestroyCommandPool vkDestroyDescriptorPool vkDestroyDescriptorSetLayout vkDestroyEvent vkDestroyFence vkDestroyFramebuffer vkDestroyImage vkDestroyImageView vkDestroyPipeline vkDestroyPipelineCache vkDestroyPipelineLayout vkDestroyQueryPool vkDestroyRenderPass vkDestroySampler 
vkDestroySemaphore vkDestroyShaderModule vkDestroySwapchainKHR vkDeviceWaitIdle vkEndCommandBuffer vkEnumeratePhysicalDevices vkFreeDescriptorSets vkFreeMemory vkGetBufferMemoryRequirements vkGetDeviceMemoryCommitment vkGetDeviceQueue vkGetEventStatus vkGetFenceStatus vkGetImageMemoryRequirements vkGetImageSparseMemoryRequirements vkGetImageSubresourceLayout vkGetPhysicalDeviceSurfaceSupportKHR vkGetPipelineCacheData vkGetQueryPoolResults vkGetRenderAreaGranularity vkInvalidateMappedMemoryRanges vkMapMemory vkMergePipelineCaches vkQueueBindSparse vkResetCommandBuffer vkResetCommandPool vkResetDescriptorPool vkResetEvent vkResetFences vkSetEvent vkUnmapMemory vkUpdateDescriptorSets vkWaitForFences | BindInvalidMemory BindMemoryToDestroyedObject | Every VkObject class of parameter will be run through this check. This check may ultimately supersede UNKNOWN_OBJECT |
| Object Cleanup | Verify that the object was properly destroyed | DESTROY_OBJECT_FAILED | vkDestroyInstance, vkDestroyDevice, vkFreeMemory | ? | NA |
| Objects Leak | When an Instance or Device object is destroyed, validates that all objects belonging to that device/instance have previously been destroyed | OBJECT_LEAK | vkDestroyDevice vkDestroyInstance | ? | NA |
| Object Count | Flag an error if the number of objects requested from extension functions exceeds the max number of actual objects | OBJCOUNT_MAX_EXCEEDED | objTrackGetObjects objTrackGetObjectsOfType | ?
| NA |
| Valid Destroy Object | Validates that an object passed into a destroy function was properly created and is currently valid | NONE | vkDestroyInstance vkDestroyDevice vkDestroyFence vkDestroySemaphore vkDestroyEvent vkDestroyQueryPool vkDestroyBuffer vkDestroyBufferView vkDestroyImage vkDestroyImageView vkDestroyShaderModule vkDestroyPipelineCache vkDestroyPipeline vkDestroyPipelineLayout vkDestroySampler vkDestroyDescriptorSetLayout vkDestroyDescriptorPool vkDestroyCommandPool vkFreeCommandBuffers vkDestroyFramebuffer vkDestroyRenderPass vkDestroySwapchainKHR | TBD | These cases need to be moved to a more appropriate error enum |
| Unknown object | Internal layer error flagged when the layer attempts to update the use count for an object that is not in its internal tracking data structures. | UNKNOWN_OBJECT | | NA | This may be irrelevant due to the INVALID_OBJECT error; need to look closely and merge this with that error as appropriate. |
| Correct Command Pool | Validates that command buffers in a FreeCommandBuffers call were all created in the specified commandPool | COMMAND_POOL_MISMATCH | vkFreeCommandBuffers | TBD | NA |
| Correct Descriptor Pool | Validates that descriptor sets in a FreeDescriptorSets call were all created in the specified descriptorPool | DESCRIPTOR_POOL_MISMATCH | vkFreeDescriptorSets | TBD | NA |
| NA | Enum used for informational messages | NONE | | NA | None |
| NA | Enum used for errors in the layer itself. This does not indicate an app issue, but instead a bug in the layer. | INTERNAL_ERROR | | NA | None |

### VK_LAYER_LUNARG_object_tracker Pending Work

1. Verify images have CmdPipelineBarrier layouts matching new layout parameters to Cmd*Image* functions
2. For specific object instances that are allowed to be NULL, update object validation to verify that such objects are either NULL or valid
3.
Verify cube array VkImageView objects use a subresourceRange.arraySize (or the effective arraySize when VK_REMAINING_ARRAY_SLICES is specified) that is a multiple of 6.
4. Make object maps specific to instance and device. Objects may only be used with the matching instance or device.

See the Khronos github repository for Vulkan-LoaderAndValidationLayers for additional pending issues, or to submit new validation requests

## VK_LAYER_GOOGLE_threading

### VK_LAYER_GOOGLE_threading Overview

The VK_LAYER_GOOGLE_threading layer checks for simultaneous use of objects by calls from multiple threads. Application code is responsible for preventing simultaneous use of the same objects by certain calls that modify the objects. See [bug 13433](https://cvs.khronos.org/bugzilla/show_bug.cgi?id=13433) for the threading rules. Objects that may need a mutex include VkQueue, VkDeviceMemory, VkObject, VkBuffer, VkImage, VkDescriptorSet, VkDescriptorPool, VkCommandBuffer, and VkSemaphore. The most common case is that a VkCommandBuffer passed to vkCmd* calls must be used by only one thread at a time. In addition to reporting threading rule violations, the layer will enforce a mutex for those calls, which can allow an application to continue running without actually crashing due to the reported threading problem. The layer can only observe when a mutual exclusion rule is actually violated; it cannot ensure that there is no latent race condition needing mutual exclusion. The layer can also catch reentrant use of the same object by calls from a single thread, which might happen if Vulkan calls are made from a callback function or a signal handler, but the layer cannot prevent such reentrant use of an object.
### VK_LAYER_GOOGLE_threading Details Table

| Check | Overview | ENUM THREADING_CHECKER_* | Relevant API | Testname | Notes/TODO |
| ----- | -------- | ---------------- | ---------------- | -------- | ---------- |
| Thread Collision | Detects and notifies the user if multiple threads are modifying the same object | MULTIPLE_THREADS | vkQueueSubmit vkFreeMemory vkMapMemory vkUnmapMemory vkFlushMappedMemoryRanges vkInvalidateMappedMemoryRanges vkBindBufferMemory vkBindImageMemory vkQueueBindSparse vkDestroySemaphore vkDestroyBuffer vkDestroyImage vkDestroyDescriptorPool vkResetDescriptorPool vkAllocateDescriptorSets vkFreeDescriptorSets vkFreeCommandBuffers vkBeginCommandBuffer vkEndCommandBuffer vkResetCommandBuffer vkCmdBindPipeline vkCmdBindDescriptorSets vkCmdBindIndexBuffer vkCmdBindVertexBuffers vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatch vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdClearAttachments vkCmdResolveImage vkCmdSetEvent vkCmdResetEvent vkCmdWaitEvents vkCmdPipelineBarrier vkCmdBeginQuery vkCmdEndQuery vkCmdResetQueryPool vkCmdWriteTimestamp vkCmdCopyQueryPoolResults vkCmdBeginRenderPass vkCmdNextSubpass vkCmdPushConstants vkCmdEndRenderPass vkCmdExecuteCommands | ???
| NA |
| Thread Reentrancy | Detects cases of a single thread calling Vulkan reentrantly | SINGLE_THREAD_REUSE | vkQueueSubmit vkFreeMemory vkMapMemory vkUnmapMemory vkFlushMappedMemoryRanges vkInvalidateMappedMemoryRanges vkBindBufferMemory vkBindImageMemory vkQueueBindSparse vkDestroySemaphore vkDestroyBuffer vkDestroyImage vkDestroyDescriptorPool vkResetDescriptorPool vkAllocateDescriptorSets vkFreeDescriptorSets vkFreeCommandBuffers vkBeginCommandBuffer vkEndCommandBuffer vkResetCommandBuffer vkCmdBindPipeline vkCmdSetViewport vkCmdSetBlendConstants vkCmdSetLineWidth vkCmdSetDepthBias vkCmdSetDepthBounds vkCmdSetStencilCompareMask vkCmdSetStencilWriteMask vkCmdSetStencilReference vkCmdBindDescriptorSets vkCmdBindIndexBuffer vkCmdBindVertexBuffers vkCmdDraw vkCmdDrawIndexed vkCmdDrawIndirect vkCmdDrawIndexedIndirect vkCmdDispatch vkCmdDispatchIndirect vkCmdCopyBuffer vkCmdCopyImage vkCmdBlitImage vkCmdCopyBufferToImage vkCmdCopyImageToBuffer vkCmdUpdateBuffer vkCmdFillBuffer vkCmdClearColorImage vkCmdClearDepthStencilImage vkCmdClearAttachments vkCmdResolveImage vkCmdSetEvent vkCmdResetEvent vkCmdWaitEvents vkCmdPipelineBarrier vkCmdBeginQuery vkCmdEndQuery vkCmdResetQueryPool vkCmdWriteTimestamp vkCmdCopyQueryPoolResults vkCmdBeginRenderPass vkCmdNextSubpass vkCmdPushConstants vkCmdEndRenderPass vkCmdExecuteCommands | ??? | NA |
| NA | Enum used for informational messages | NONE | | NA | None |

### VK_LAYER_GOOGLE_threading Pending Work

See the Khronos github repository for Vulkan-LoaderAndValidationLayers for additional pending issues, or to submit new validation requests

## VK_LAYER_LUNARG_device_limits

### VK_LAYER_LUNARG_device_limits Overview

This layer is a work in progress. The VK_LAYER_LUNARG_device_limits layer is intended to capture two broad categories of errors:
1. Incorrect use of APIs to query device capabilities
2.
Attempt to use API functionality beyond the capability of the underlying device

For the first category, the layer tracks which calls are made and flags errors if required calls are omitted or if call sequencing is incorrect. One example is an app that attempts to query and use queues without ever having called vkGetPhysicalDeviceQueueFamilyProperties(). Another is an app that calls vkGetPhysicalDeviceQueueFamilyProperties() to retrieve properties using some assumed count for the array size, instead of first calling vkGetPhysicalDeviceQueueFamilyProperties() with a NULL pQueueFamilyProperties parameter to query the actual count. For the second category of errors, VK_LAYER_LUNARG_device_limits stores its own internal record of underlying device capabilities and flags errors if requests are made beyond those limits. Most (all?) of the limits are queried via vkGetPhysicalDevice* calls.

### VK_LAYER_LUNARG_device_limits Details Table

| Check | Overview | ENUM DEVLIMITS_* | Relevant API | Testname | Notes/TODO |
| ----- | -------- | ---------------- | ---------------- | -------- | ---------- |
| Valid instance | If an invalid instance is used, this error will be flagged | INVALID_INSTANCE | vkEnumeratePhysicalDevices | NA | VK_LAYER_LUNARG_object_tracker should also catch this so if we made sure VK_LAYER_LUNARG_object_tracker was always on top, we could avoid this check |
| Valid physical device | If an invalid physical device is used, this error will be flagged | INVALID_PHYSICAL_DEVICE | vkEnumeratePhysicalDevices | NA | VK_LAYER_LUNARG_object_tracker should also catch this so if we made sure VK_LAYER_LUNARG_object_tracker was always on top, we could avoid this check |
| Valid inherited query | If an invalid inherited query is used, this error will be flagged | INVALID_INHERITED_QUERY | vkBeginCommandBuffer | NA | None |
| Querying array counts | For API calls where an array count should be queried with an initial call and a NULL array pointer, verify that such a call was made
before making a call with non-null array pointer. | MUST_QUERY_COUNT | vkEnumeratePhysicalDevices vkGetPhysicalDeviceQueueFamilyProperties | NA | Create focused test | | Array count value | For API calls where an array of details is queried, verify that the size of the requested array matches the size of the array supported by the device. | COUNT_MISMATCH | vkEnumeratePhysicalDevices vkGetPhysicalDeviceQueueFamilyProperties | NA | Create focused test | | Queue Creation | When creating/requesting queues, make sure that QueueFamilyPropertiesIndex and index/count within that queue family are valid. | INVALID_QUEUE_CREATE_REQUEST | vkGetDeviceQueue vkCreateDevice | NA | Create focused test | | Query Properties | Before creating an Image, warn if physical device properties have not been queried | MUST_QUERY_PROPERTIES | vkCreateImage | NA | Add validation test | | API Call Sequencing | This is a general error indicating that an app did not use vkGetPhysicalDevice* and other such query calls, but rather made an assumption about device capabilities. | INVALID_CALL_SEQUENCE | vkCreateDevice | NA | Add validation test | | Feature Request | Attempting to vkCreateDevice with a feature that is not supported by the underlying physical device. 
| INVALID_FEATURE_REQUESTED | vkCreateDevice | NA | Add validation test |
| Valid Image Extents | When creating an Image, ensure that image extents are within device limits for the specified format | LIMITS_VIOLATION | vkCreateImage | CreateImageLimitsViolationWidth | NA |
| Valid Image Resource Size | When creating an image, ensure that the total image resource size is less than the queried device maximum resource size | LIMITS_VIOLATION | vkCreateImage | CreateImageResourceSizeViolation | NA |
| Alignment | When updating a buffer, data should be aligned on 4 byte boundaries | LIMITS_VIOLATION | vkCmdUpdateBuffer | UpdateBufferAlignment | NA |
| Alignment | When filling a buffer, data should be aligned on 4 byte boundaries | LIMITS_VIOLATION | vkCmdFillBuffer | UpdateBufferAlignment | NA |
| Storage Buffer Alignment | Storage Buffer offsets must agree with the offset alignment device limit | INVALID_STORAGE_BUFFER_OFFSET | vkBindBufferMemory vkUpdateDescriptorSets | TODO | Create test |
| Uniform Buffer Alignment | Uniform Buffer offsets must agree with the offset alignment device limit | INVALID_UNIFORM_BUFFER_OFFSET | vkBindBufferMemory vkUpdateDescriptorSets | TODO | Create test |
| NA | Enum used for informational messages | NONE | | NA | None |

### VK_LAYER_LUNARG_device_limits Pending Work

1. For all Formats, call vkGetPhysicalDeviceFormatProperties to pull their properties for the underlying device. After that point, if the app attempts to use any formats in violation of those properties, flag errors (this is done for Images).

See the Khronos github repository for Vulkan-LoaderAndValidationLayers for additional pending issues, or to submit new validation requests

## VK_LAYER_LUNARG_swapchain

### Swapchain Overview

This layer is a work in progress. The VK_LAYER_LUNARG_swapchain layer is intended to ...
### VK_LAYER_LUNARG_swapchain Details Table | Check | Overview | ENUM SWAPCHAIN_* | Relevant API | Testname | Notes/TODO | | ----- | -------- | ---------------- | ------------ | -------- | ---------- | | Valid handle | If an invalid handle is used, this error will be flagged | INVALID_HANDLE | vkCreateDevice vkCreateSwapchainKHR | NA | None | | Valid pointer | If a NULL pointer is used, this error will be flagged | NULL_POINTER | vkGetPhysicalDeviceSurfaceSupportKHR vkGetPhysicalDeviceSurfaceCapabilitiesKHR vkGetPhysicalDeviceSurfaceFormatsKHR vkGetPhysicalDeviceSurfacePresentModesKHR vkCreateSwapchainKHR vkGetSwapchainImagesKHR vkAcquireNextImageKHR vkQueuePresentKHR | NA | None | | Extension enabled before use | Validates that a WSI extension is enabled before its functions are used | EXT_NOT_ENABLED_BUT_USED | vkGetPhysicalDeviceSurfaceSupportKHR vkGetPhysicalDeviceSurfaceCapabilitiesKHR vkGetPhysicalDeviceSurfaceFormatsKHR vkGetPhysicalDeviceSurfacePresentModesKHR vkCreateSwapchainKHR vkDestroySwapchainKHR vkGetSwapchainImagesKHR vkAcquireNextImageKHR vkQueuePresentKHR | NA | None | | Swapchains destroyed before devices | Validates that vkDestroySwapchainKHR() is called for all swapchains associated with a device before vkDestroyDevice() is called | DEL_OBJECT_BEFORE_CHILDREN | vkDestroyDevice vkDestroySurfaceKHR | NA | None | | Surface seen to support presentation | Validates that pCreateInfo->surface was seen by vkGetPhysicalDeviceSurfaceSupportKHR() to support presentation | CREATE_UNSUPPORTED_SURFACE | vkCreateSwapchainKHR | NA | None | | Queries occur before swapchain creation | Validates that vkGetPhysicalDeviceSurfaceCapabilitiesKHR(), vkGetPhysicalDeviceSurfaceFormatsKHR() and vkGetPhysicalDeviceSurfacePresentModesKHR() are called before vkCreateSwapchainKHR() | CREATE_SWAP_WITHOUT_QUERY | vkCreateSwapchainKHR | NA | None | | vkCreateSwapchainKHR(pCreateInfo->minImageCount) | Validates vkCreateSwapchainKHR(pCreateInfo->minImageCount) | 
CREATE_SWAP_BAD_MIN_IMG_COUNT | vkCreateSwapchainKHR | NA | None | | vkCreateSwapchainKHR(pCreateInfo->imageExtent) | Validates vkCreateSwapchainKHR(pCreateInfo->imageExtent) when window has no fixed size | CREATE_SWAP_OUT_OF_BOUNDS_EXTENTS | vkCreateSwapchainKHR | NA | None | | vkCreateSwapchainKHR(pCreateInfo->imageExtent) | Validates vkCreateSwapchainKHR(pCreateInfo->imageExtent) when window has a fixed size | CREATE_SWAP_EXTENTS_NO_MATCH_WIN | vkCreateSwapchainKHR | NA | None | | vkCreateSwapchainKHR(pCreateInfo->preTransform) | Validates vkCreateSwapchainKHR(pCreateInfo->preTransform) | CREATE_SWAP_BAD_PRE_TRANSFORM | vkCreateSwapchainKHR | NA | None | | vkCreateSwapchainKHR(pCreateInfo->compositeAlpha) | Validates vkCreateSwapchainKHR(pCreateInfo->compositeAlpha) | CREATE_SWAP_BAD_COMPOSITE_ALPHA | vkCreateSwapchainKHR | NA | None | | vkCreateSwapchainKHR(pCreateInfo->imageArraySize) | Validates vkCreateSwapchainKHR(pCreateInfo->imageArraySize) | CREATE_SWAP_BAD_IMG_ARRAY_SIZE | vkCreateSwapchainKHR | NA | None | | vkCreateSwapchainKHR(pCreateInfo->imageUsageFlags) | Validates vkCreateSwapchainKHR(pCreateInfo->imageUsageFlags) | CREATE_SWAP_BAD_IMG_USAGE_FLAGS | vkCreateSwapchainKHR | NA | None | | vkCreateSwapchainKHR(pCreateInfo->imageColorSpace) | Validates vkCreateSwapchainKHR(pCreateInfo->imageColorSpace) | CREATE_SWAP_BAD_IMG_COLOR_SPACE | vkCreateSwapchainKHR | NA | None | | vkCreateSwapchainKHR(pCreateInfo->imageFormat) | Validates vkCreateSwapchainKHR(pCreateInfo->imageFormat) | CREATE_SWAP_BAD_IMG_FORMAT | vkCreateSwapchainKHR | NA | None | | vkCreateSwapchainKHR(pCreateInfo->imageFormat and pCreateInfo->imageColorSpace) | Validates vkCreateSwapchainKHR(pCreateInfo->imageFormat and pCreateInfo->imageColorSpace) | CREATE_SWAP_BAD_IMG_FMT_CLR_SP | vkCreateSwapchainKHR | NA | None | | vkCreateSwapchainKHR(pCreateInfo->presentMode) | Validates vkCreateSwapchainKHR(pCreateInfo->presentMode) | CREATE_SWAP_BAD_PRESENT_MODE | vkCreateSwapchainKHR | NA | 
None |
| vkCreateSwapchainKHR(pCreateInfo->imageSharingMode) | Validates vkCreateSwapchainKHR(pCreateInfo->imageSharingMode) | CREATE_SWAP_BAD_SHARING_MODE | vkCreateSwapchainKHR | NA | None |
| vkCreateSwapchainKHR(pCreateInfo->imageSharingMode) | Validates vkCreateSwapchainKHR(pCreateInfo->imageSharingMode) | CREATE_SWAP_BAD_SHARING_VALUES | vkCreateSwapchainKHR | NA | None |
| vkCreateSwapchainKHR(pCreateInfo->oldSwapchain and pCreateInfo->surface) | pCreateInfo->surface must match pCreateInfo->oldSwapchain's surface | CREATE_SWAP_DIFF_SURFACE | vkCreateSwapchainKHR | NA | None |
| Use same device for swapchain | Validates that vkDestroySwapchainKHR() is called with the same VkDevice as vkCreateSwapchainKHR() | DESTROY_SWAP_DIFF_DEVICE | vkCreateSwapchainKHR vkDestroySwapchainKHR | NA | None |
| Don't use too many images | Validates that the app never tries to own too many swapchain images at a time | APP_OWNS_TOO_MANY_IMAGES | vkAcquireNextImageKHR | NA | None |
| Index too large | Validates that an image index is within the number of images in a swapchain | INDEX_TOO_LARGE | vkQueuePresentKHR | NA | None |
| Can't present a non-owned image | Validates that the application only presents images that it owns | INDEX_NOT_IN_USE | vkQueuePresentKHR | NA | None |
| A VkBool32 must have values of VK_TRUE or VK_FALSE | Validates that a VkBool32 has a value of either VK_TRUE or VK_FALSE | BAD_BOOL | vkCreateSwapchainKHR | NA | None |
| pCount must point to same value regardless of whether other pointer is NULL | Validates that the app doesn't change the value of pCount returned by a query | INVALID_COUNT | vkGetPhysicalDeviceSurfaceFormatsKHR vkGetPhysicalDeviceSurfacePresentModesKHR vkGetSwapchainImagesKHR | NA | None |
| Valid sType | Validates that a struct has the correct value for sType | WRONG_STYPE | vkCreateSwapchainKHR vkQueuePresentKHR | NA | None |
| Valid pNext | Validates that a struct has NULL for the value of pNext | WRONG_NEXT | vkCreateSwapchainKHR vkQueuePresentKHR | NA |
None |
| Non-zero value | Validates that a required value is non-zero | ZERO_VALUE | vkQueuePresentKHR | NA | None |
| Compatible Allocator | Validates that pAllocator is compatible (i.e. NULL or not) when an object is created and destroyed | INCOMPATIBLE_ALLOCATOR | vkDestroySurfaceKHR | NA | None |
| Valid use of queueFamilyIndex | Validates that a queueFamilyIndex is not used before vkGetPhysicalDeviceQueueFamilyProperties() was called | DID_NOT_QUERY_QUEUE_FAMILIES | vkGetPhysicalDeviceSurfaceSupportKHR | NA | None |
| Valid queueFamilyIndex value | Validates that a queueFamilyIndex value is less than the pQueueFamilyPropertyCount returned by vkGetPhysicalDeviceQueueFamilyProperties | QUEUE_FAMILY_INDEX_TOO_LARGE | vkGetPhysicalDeviceSurfaceSupportKHR | NA | None |
| Supported combination of queue and surface | Validates that the surface associated with a swapchain was seen to support the queueFamilyIndex of a given queue | SURFACE_NOT_SUPPORTED_WITH_QUEUE | vkQueuePresentKHR | NA | None |
| Proper synchronization of acquired images | vkAcquireNextImageKHR should be called with a valid semaphore and/or fence | NO_SYNC_FOR_ACQUIRE | vkAcquireNextImageKHR | NA | None |

Note: The following platform-specific functions are not mentioned above, because they are protected by ifdefs, which cause test failures:

- vkCreateAndroidSurfaceKHR
- vkCreateMirSurfaceKHR
- vkGetPhysicalDeviceMirPresentationSupportKHR
- vkCreateWaylandSurfaceKHR
- vkGetPhysicalDeviceWaylandPresentationSupportKHR
- vkCreateWin32SurfaceKHR
- vkGetPhysicalDeviceWin32PresentationSupportKHR
- vkCreateXcbSurfaceKHR
- vkGetPhysicalDeviceXcbPresentationSupportKHR
- vkCreateXlibSurfaceKHR
- vkGetPhysicalDeviceXlibPresentationSupportKHR

### VK_LAYER_LUNARG_Swapchain Pending Work

Additional checks to be added to VK_LAYER_LUNARG_swapchain:

1. Check that the queue used for presenting was checked/valid during vkGetPhysicalDeviceSurfaceSupportKHR.
2.
One issue that has already come up is correct UsageFlags for WSI SwapChains and SurfaceProperties.
3. Tons of other stuff, including semaphore and synchronization validation.

See the Khronos github repository for Vulkan-LoaderAndValidationLayers for additional pending issues, or to submit new validation requests

## VK_LAYER_GOOGLE_unique_objects

### VK_LAYER_GOOGLE_unique_objects Overview

The unique_objects layer is not a validation layer but a helper layer that assists with validation. The Vulkan specification allows objects to have non-unique handles, which makes tracking object lifetimes difficult in that it is unclear which object is being referenced upon deletion. The unique_objects layer addresses this by wrapping all objects with a unique object representation, allowing proper object lifetime tracking. This layer does no validation on its own and may not be required for the proper operation of all layers or all platforms. One sign that it is needed is the appearance of many errors from the object_tracker layer indicating the use of previously destroyed objects. For optimal effectiveness this layer should be loaded last (to reside in the layer chain closest to the display driver and farthest from the application).

## General Pending Work

A place to capture general validation work to be done. This includes new checks that don't clearly fit into the above layers.

1. For the upcoming Dynamic State overhaul (if approved): If a dynamic state value that is consumed is never set prior to consumption, flag an error
2. For the upcoming Dynamic State overhaul (if approved): If dynamic state that was bound as "static" in the current PSO is attempted to be set with vkCmdSet*, flag an error
3.
Need to check VkShaderCreateInfo.stage is being set properly (Issue reported by Dan G) Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/windows/000077500000000000000000000000001270147354000237135ustar00rootroot00000000000000Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/windows/VkLayer_core_validation.json000066400000000000000000000007361270147354000314130ustar00rootroot00000000000000{ "file_format_version" : "1.0.0", "layer" : { "name": "VK_LAYER_LUNARG_core_validation", "type": "GLOBAL", "library_path": ".\\VkLayer_core_validation.dll", "api_version": "1.0.8", "implementation_version": "1", "description": "LunarG Validation Layer", "instance_extensions": [ { "name": "VK_EXT_debug_report", "spec_version": "2" } ] } } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/windows/VkLayer_device_limits.json000066400000000000000000000007321270147354000310650ustar00rootroot00000000000000{ "file_format_version" : "1.0.0", "layer" : { "name": "VK_LAYER_LUNARG_device_limits", "type": "GLOBAL", "library_path": ".\\VkLayer_device_limits.dll", "api_version": "1.0.8", "implementation_version": "1", "description": "LunarG Validation Layer", "instance_extensions": [ { "name": "VK_EXT_debug_report", "spec_version": "2" } ] } } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/windows/VkLayer_image.json000066400000000000000000000007121270147354000273250ustar00rootroot00000000000000{ "file_format_version" : "1.0.0", "layer" : { "name": "VK_LAYER_LUNARG_image", "type": "GLOBAL", "library_path": ".\\VkLayer_image.dll", "api_version": "1.0.8", "implementation_version": "1", "description": "LunarG Validation Layer", "instance_extensions": [ { "name": "VK_EXT_debug_report", "spec_version": "2" } ] } } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/windows/VkLayer_object_tracker.json000066400000000000000000000007341270147354000312300ustar00rootroot00000000000000{ "file_format_version" : "1.0.0", "layer" : { "name": "VK_LAYER_LUNARG_object_tracker", "type": "GLOBAL", 
"library_path": ".\\VkLayer_object_tracker.dll", "api_version": "1.0.8", "implementation_version": "1", "description": "LunarG Validation Layer", "instance_extensions": [ { "name": "VK_EXT_debug_report", "spec_version": "2" } ] } } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/windows/VkLayer_parameter_validation.json000066400000000000000000000007501270147354000324370ustar00rootroot00000000000000{ "file_format_version" : "1.0.0", "layer" : { "name": "VK_LAYER_LUNARG_parameter_validation", "type": "GLOBAL", "library_path": ".\\VkLayer_parameter_validation.dll", "api_version": "1.0.8", "implementation_version": "1", "description": "LunarG Validation Layer", "instance_extensions": [ { "name": "VK_EXT_debug_report", "spec_version": "2" } ] } } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/windows/VkLayer_swapchain.json000066400000000000000000000007221270147354000302210ustar00rootroot00000000000000{ "file_format_version" : "1.0.0", "layer" : { "name": "VK_LAYER_LUNARG_swapchain", "type": "GLOBAL", "library_path": ".\\VkLayer_swapchain.dll", "api_version": "1.0.8", "implementation_version": "1", "description": "LunarG Validation Layer", "instance_extensions": [ { "name": "VK_EXT_debug_report", "spec_version": "2" } ] } } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/windows/VkLayer_threading.json000066400000000000000000000007221270147354000302110ustar00rootroot00000000000000{ "file_format_version" : "1.0.0", "layer" : { "name": "VK_LAYER_GOOGLE_threading", "type": "GLOBAL", "library_path": ".\\VkLayer_threading.dll", "api_version": "1.0.8", "implementation_version": "1", "description": "Google Validation Layer", "instance_extensions": [ { "name": "VK_EXT_debug_report", "spec_version": "2" } ] } } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/layers/windows/VkLayer_unique_objects.json000066400000000000000000000004741270147354000312670ustar00rootroot00000000000000{ "file_format_version" : "1.0.0", "layer" : { "name": "VK_LAYER_GOOGLE_unique_objects", 
"type": "GLOBAL", "library_path": ".\\VkLayer_unique_objects.dll", "api_version": "1.0.8", "implementation_version": "1", "description": "Google Validation Layer" } } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/libs/000077500000000000000000000000001270147354000216535ustar00rootroot00000000000000Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/libs/glm/000077500000000000000000000000001270147354000224325ustar00rootroot00000000000000Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/libs/glm/common.hpp000066400000000000000000000030721270147354000244350ustar00rootroot00000000000000/////////////////////////////////////////////////////////////////////////////////// /// OpenGL Mathematics (glm.g-truc.net) /// /// Copyright (c) 2005 - 2014 G-Truc Creation (www.g-truc.net) /// Permission is hereby granted, free of charge, to any person obtaining a copy /// of this software and associated documentation files (the "Software"), to deal /// in the Software without restriction, including without limitation the rights /// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell /// copies of the Software, and to permit persons to whom the Software is /// furnished to do so, subject to the following conditions: /// /// The above copyright notice and this permission notice shall be included in /// all copies or substantial portions of the Software. /// /// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR /// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, /// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE /// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER /// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, /// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN /// THE SOFTWARE. 
/// /// @ref core /// @file glm/common.hpp /// @date 2013-12-24 / 2013-12-24 /// @author Christophe Riccio /////////////////////////////////////////////////////////////////////////////////// #ifndef GLM_COMMON_INCLUDED #define GLM_COMMON_INCLUDED #include "detail/func_common.hpp" #endif//GLM_COMMON_INCLUDED Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/libs/glm/detail/000077500000000000000000000000001270147354000236745ustar00rootroot00000000000000Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/libs/glm/detail/_features.hpp000066400000000000000000000313211270147354000263620ustar00rootroot00000000000000/////////////////////////////////////////////////////////////////////////////////// /// OpenGL Mathematics (glm.g-truc.net) /// /// Copyright (c) 2005 - 2014 G-Truc Creation (www.g-truc.net) /// Permission is hereby granted, free of charge, to any person obtaining a copy /// of this software and associated documentation files (the "Software"), to deal /// in the Software without restriction, including without limitation the rights /// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell /// copies of the Software, and to permit persons to whom the Software is /// furnished to do so, subject to the following conditions: /// /// The above copyright notice and this permission notice shall be included in /// all copies or substantial portions of the Software. /// /// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR /// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, /// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE /// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER /// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, /// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN /// THE SOFTWARE. 
///
/// @ref core
/// @file glm/core/_features.hpp
/// @date 2013-02-20 / 2013-02-20
/// @author Christophe Riccio
///////////////////////////////////////////////////////////////////////////////////

#ifndef glm_core_features
#define glm_core_features

// #define GLM_CXX98_EXCEPTIONS
// #define GLM_CXX98_RTTI

// #define GLM_CXX11_RVALUE_REFERENCES
// Rvalue references - GCC 4.3
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2118.html

// GLM_CXX11_TRAILING_RETURN
// Rvalue references for *this - GCC not supported
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2439.htm

// GLM_CXX11_NONSTATIC_MEMBER_INIT
// Initialization of class objects by rvalues - GCC any
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n1610.html

// GLM_CXX11_NONSTATIC_MEMBER_INIT
// Non-static data member initializers - GCC 4.7
// http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2008/n2756.htm

// #define GLM_CXX11_VARIADIC_TEMPLATE
// Variadic templates - GCC 4.3
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2242.pdf

//
// Extending variadic template template parameters - GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2555.pdf

// #define GLM_CXX11_GENERALIZED_INITIALIZERS
// Initializer lists - GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2672.htm

// #define GLM_CXX11_STATIC_ASSERT
// Static assertions - GCC 4.3
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n1720.html

// #define GLM_CXX11_AUTO_TYPE
// auto-typed variables - GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1984.pdf

// #define GLM_CXX11_AUTO_TYPE
// Multi-declarator auto - GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n1737.pdf

// #define GLM_CXX11_AUTO_TYPE
// Removal of auto as a storage-class specifier - GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2546.htm

// #define GLM_CXX11_AUTO_TYPE
// New function declarator syntax - GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2541.htm

// #define GLM_CXX11_LAMBDAS
// New wording for C++0x lambdas - GCC 4.5
// http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2009/n2927.pdf

// #define GLM_CXX11_DECLTYPE
// Declared type of an expression - GCC 4.3
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2343.pdf

//
// Right angle brackets - GCC 4.3
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1757.html

//
// Default template arguments for function templates DR226 GCC 4.3
// http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_defects.html#226

//
// Solving the SFINAE problem for expressions DR339 GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2634.html

// #define GLM_CXX11_ALIAS_TEMPLATE
// Template aliases N2258 GCC 4.7
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2258.pdf

//
// Extern templates N1987 Yes
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1987.htm

// #define GLM_CXX11_NULLPTR
// Null pointer constant N2431 GCC 4.6
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2431.pdf

// #define GLM_CXX11_STRONG_ENUMS
// Strongly-typed enums N2347 GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2347.pdf

//
// Forward declarations for enums N2764 GCC 4.6
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2764.pdf

//
// Generalized attributes N2761 GCC 4.8
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2761.pdf

//
// Generalized constant expressions N2235 GCC 4.6
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2235.pdf

//
// Alignment support N2341 GCC 4.8
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2341.pdf

// #define GLM_CXX11_DELEGATING_CONSTRUCTORS
// Delegating constructors N1986 GCC 4.7
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1986.pdf

//
// Inheriting constructors N2540 GCC 4.8
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2540.htm

// #define GLM_CXX11_EXPLICIT_CONVERSIONS
// Explicit conversion operators N2437 GCC 4.5
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2437.pdf

//
// New character types N2249 GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2249.html

//
// Unicode string literals N2442 GCC 4.5
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2442.htm

//
// Raw string literals N2442 GCC 4.5
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2442.htm

//
// Universal character name literals N2170 GCC 4.5
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2170.html

// #define GLM_CXX11_USER_LITERALS
// User-defined literals N2765 GCC 4.7
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2765.pdf

//
// Standard Layout Types N2342 GCC 4.5
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2342.htm

// #define GLM_CXX11_DEFAULTED_FUNCTIONS
// #define GLM_CXX11_DELETED_FUNCTIONS
// Defaulted and deleted functions N2346 GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2346.htm

//
// Extended friend declarations N1791 GCC 4.7
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1791.pdf

//
// Extending sizeof N2253 GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2253.html

// #define GLM_CXX11_INLINE_NAMESPACES
// Inline namespaces N2535 GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2535.htm

// #define GLM_CXX11_UNRESTRICTED_UNIONS
// Unrestricted unions N2544 GCC 4.6
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2544.pdf

// #define GLM_CXX11_LOCAL_TYPE_TEMPLATE_ARGS
// Local and unnamed types as template arguments N2657 GCC 4.5
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2657.htm

// #define GLM_CXX11_RANGE_FOR
// Range-based for N2930 GCC 4.6
// http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2009/n2930.html

// #define GLM_CXX11_OVERRIDE_CONTROL
// Explicit virtual overrides N2928 N3206 N3272 GCC 4.7
// http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2009/n2928.htm
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3206.htm
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2011/n3272.htm

//
// Minimal support for garbage collection and reachability-based leak detection N2670 No
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2670.htm

// #define GLM_CXX11_NOEXCEPT
// Allowing move constructors to throw [noexcept] N3050 GCC 4.6 (core language only)
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3050.html

//
// Defining move special member functions N3053 GCC 4.6
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3053.html

//
// Sequence points N2239 Yes
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2239.html

//
// Atomic operations N2427 GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2239.html

//
// Strong Compare and Exchange N2748 GCC 4.5
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2427.html

//
// Bidirectional Fences N2752 GCC 4.8
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2752.htm

//
// Memory model N2429 GCC 4.8
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2429.htm

//
// Data-dependency ordering: atomics and memory model N2664 GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2664.htm

//
// Propagating exceptions N2179 GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2179.html

//
// Abandoning a process and at_quick_exit N2440 GCC 4.8
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2440.htm

//
// Allow atomics use in signal handlers N2547 Yes
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2547.htm

//
// Thread-local storage N2659 GCC 4.8
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2659.htm

//
// Dynamic initialization and destruction with concurrency N2660 GCC 4.3
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2660.htm

//
// __func__ predefined identifier N2340 GCC 4.3
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2340.htm

//
// C99 preprocessor N1653 GCC 4.3
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n1653.htm

//
// long long N1811 GCC 4.3
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1811.pdf

//
// Extended integral types N1988 Yes
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1988.pdf

#if(GLM_COMPILER & GLM_COMPILER_GCC)
#   if(GLM_COMPILER >= GLM_COMPILER_GCC43)
#       define GLM_CXX11_STATIC_ASSERT
#   endif
#elif(GLM_COMPILER & GLM_COMPILER_CLANG)
#   if(__has_feature(cxx_exceptions))
#       define GLM_CXX98_EXCEPTIONS
#   endif
#   if(__has_feature(cxx_rtti))
#       define GLM_CXX98_RTTI
#   endif
#   if(__has_feature(cxx_access_control_sfinae))
#       define GLM_CXX11_ACCESS_CONTROL_SFINAE
#   endif
#   if(__has_feature(cxx_alias_templates))
#       define GLM_CXX11_ALIAS_TEMPLATE
#   endif
#   if(__has_feature(cxx_alignas))
#       define GLM_CXX11_ALIGNAS
#   endif
#   if(__has_feature(cxx_attributes))
#       define GLM_CXX11_ATTRIBUTES
#   endif
#   if(__has_feature(cxx_constexpr))
#       define GLM_CXX11_CONSTEXPR
#   endif
#   if(__has_feature(cxx_decltype))
#       define GLM_CXX11_DECLTYPE
#   endif
#   if(__has_feature(cxx_default_function_template_args))
#       define GLM_CXX11_DEFAULT_FUNCTION_TEMPLATE_ARGS
#   endif
#   if(__has_feature(cxx_defaulted_functions))
#       define GLM_CXX11_DEFAULTED_FUNCTIONS
#   endif
#   if(__has_feature(cxx_delegating_constructors))
#       define GLM_CXX11_DELEGATING_CONSTRUCTORS
#   endif
#   if(__has_feature(cxx_deleted_functions))
#       define GLM_CXX11_DELETED_FUNCTIONS
#   endif
#   if(__has_feature(cxx_explicit_conversions))
#       define GLM_CXX11_EXPLICIT_CONVERSIONS
#   endif
#   if(__has_feature(cxx_generalized_initializers))
#       define GLM_CXX11_GENERALIZED_INITIALIZERS
#   endif
#   if(__has_feature(cxx_implicit_moves))
#       define GLM_CXX11_IMPLICIT_MOVES
#   endif
#   if(__has_feature(cxx_inheriting_constructors))
#       define GLM_CXX11_INHERITING_CONSTRUCTORS
#   endif
#   if(__has_feature(cxx_inline_namespaces))
#       define GLM_CXX11_INLINE_NAMESPACES
#   endif
#   if(__has_feature(cxx_lambdas))
#       define GLM_CXX11_LAMBDAS
#   endif
#   if(__has_feature(cxx_local_type_template_args))
#       define GLM_CXX11_LOCAL_TYPE_TEMPLATE_ARGS
#   endif
#   if(__has_feature(cxx_noexcept))
#       define GLM_CXX11_NOEXCEPT
#   endif
#   if(__has_feature(cxx_nonstatic_member_init))
#       define GLM_CXX11_NONSTATIC_MEMBER_INIT
#   endif
#   if(__has_feature(cxx_nullptr))
#       define GLM_CXX11_NULLPTR
#   endif
#   if(__has_feature(cxx_override_control))
#       define GLM_CXX11_OVERRIDE_CONTROL
#   endif
#   if(__has_feature(cxx_reference_qualified_functions))
#       define GLM_CXX11_REFERENCE_QUALIFIED_FUNCTIONS
#   endif
#   if(__has_feature(cxx_range_for))
#       define GLM_CXX11_RANGE_FOR
#   endif
#   if(__has_feature(cxx_raw_string_literals))
#       define GLM_CXX11_RAW_STRING_LITERALS
#   endif
#   if(__has_feature(cxx_rvalue_references))
#       define GLM_CXX11_RVALUE_REFERENCES
#   endif
#   if(__has_feature(cxx_static_assert))
#       define GLM_CXX11_STATIC_ASSERT
#   endif
#   if(__has_feature(cxx_auto_type))
#       define GLM_CXX11_AUTO_TYPE
#   endif
#   if(__has_feature(cxx_strong_enums))
#       define GLM_CXX11_STRONG_ENUMS
#   endif
#   if(__has_feature(cxx_trailing_return))
#       define GLM_CXX11_TRAILING_RETURN
#   endif
#   if(__has_feature(cxx_unicode_literals))
#       define GLM_CXX11_UNICODE_LITERALS
#   endif
#   if(__has_feature(cxx_unrestricted_unions))
#       define GLM_CXX11_UNRESTRICTED_UNIONS
#   endif
#   if(__has_feature(cxx_user_literals))
#       define GLM_CXX11_USER_LITERALS
#   endif
#   if(__has_feature(cxx_variadic_templates))
#       define GLM_CXX11_VARIADIC_TEMPLATES
#   endif
#endif//(GLM_COMPILER & GLM_COMPILER_CLANG)

#endif//glm_core_features

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/libs/glm/detail/_fixes.hpp

///////////////////////////////////////////////////////////////////////////////////
/// OpenGL Mathematics (glm.g-truc.net)
///
/// Copyright (c) 2005 - 2014 G-Truc Creation (www.g-truc.net)
/// Permission is hereby
/// granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to deal
/// in the Software without restriction, including without limitation the rights
/// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
/// copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in
/// all copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
/// THE SOFTWARE.
///
/// @ref core
/// @file glm/core/_fixes.hpp
/// @date 2011-02-21 / 2011-11-22
/// @author Christophe Riccio
///////////////////////////////////////////////////////////////////////////////////

#include <cmath>

//! Workaround for compatibility with other libraries
#ifdef max
#undef max
#endif

//! Workaround for compatibility with other libraries
#ifdef min
#undef min
#endif

//! Workaround for Android
#ifdef isnan
#undef isnan
#endif

//! Workaround for Android
#ifdef isinf
#undef isinf
#endif

//! Workaround for Chrome Native Client
#ifdef log2
#undef log2
#endif

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/libs/glm/detail/_literals.hpp

///////////////////////////////////////////////////////////////////////////////////
/// OpenGL Mathematics (glm.g-truc.net)
///
/// Copyright (c) 2005 - 2014 G-Truc Creation (www.g-truc.net)
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to deal
/// in the Software without restriction, including without limitation the rights
/// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
/// copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in
/// all copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
/// THE SOFTWARE.
///
/// @ref core
/// @file glm/core/_literals.hpp
/// @date 2013-05-06 / 2013-05-06
/// @author Christophe Riccio
///////////////////////////////////////////////////////////////////////////////////

#ifndef glm_core_literals
#define glm_core_literals

namespace glm
{
#define GLM_CXX11_USER_LITERALS
#ifdef GLM_CXX11_USER_LITERALS
    /*
    GLM_FUNC_QUALIFIER detail::half operator "" _h(long double const s)
    {
        return detail::half(s);
    }

    GLM_FUNC_QUALIFIER float operator "" _f(long double const s)
    {
        return static_cast<float>(s);
    }
    */
#endif//GLM_CXX11_USER_LITERALS

}//namespace glm

#endif//glm_core_literals

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/libs/glm/detail/_noise.hpp

///////////////////////////////////////////////////////////////////////////////////
/// OpenGL Mathematics (glm.g-truc.net)
///
/// Copyright (c) 2005 - 2014 G-Truc Creation (www.g-truc.net)
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to deal
/// in the Software without restriction, including without limitation the rights
/// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
/// copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in
/// all copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
/// IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
/// THE SOFTWARE.
///
/// @ref core
/// @file glm/detail/_noise.hpp
/// @date 2013-12-24 / 2013-12-24
/// @author Christophe Riccio
///////////////////////////////////////////////////////////////////////////////////

#ifndef GLM_DETAIL_NOISE_INCLUDED
#define GLM_DETAIL_NOISE_INCLUDED

namespace glm{
namespace detail
{
    template <typename T>
    GLM_FUNC_QUALIFIER T mod289(T const & x)
    {
        return x - floor(x * static_cast<T>(1.0) / static_cast<T>(289.0)) * static_cast<T>(289.0);
    }

    template <typename T>
    GLM_FUNC_QUALIFIER T permute(T const & x)
    {
        return mod289(((x * static_cast<T>(34)) + static_cast<T>(1)) * x);
    }

    template <typename T, precision P>
    GLM_FUNC_QUALIFIER tvec2<T, P> permute(tvec2<T, P> const & x)
    {
        return mod289(((x * static_cast<T>(34)) + static_cast<T>(1)) * x);
    }

    template <typename T, precision P>
    GLM_FUNC_QUALIFIER tvec3<T, P> permute(tvec3<T, P> const & x)
    {
        return mod289(((x * static_cast<T>(34)) + static_cast<T>(1)) * x);
    }

    template <typename T, precision P>
    GLM_FUNC_QUALIFIER tvec4<T, P> permute(tvec4<T, P> const & x)
    {
        return mod289(((x * static_cast<T>(34)) + static_cast<T>(1)) * x);
    }

    /*
    template <typename T, precision P, template <typename, precision> class vecType>
    GLM_FUNC_QUALIFIER vecType<T, P> permute(vecType<T, P> const & x)
    {
        return mod289(((x * T(34)) + T(1)) * x);
    }
    */

    template <typename T>
    GLM_FUNC_QUALIFIER T taylorInvSqrt(T const & r)
    {
        return T(1.79284291400159) - T(0.85373472095314) * r;
    }

    template <typename T, precision P>
    GLM_FUNC_QUALIFIER detail::tvec2<T, P> taylorInvSqrt(detail::tvec2<T, P> const & r)
    {
        return T(1.79284291400159) - T(0.85373472095314) * r;
    }

    template <typename T, precision P>
    GLM_FUNC_QUALIFIER detail::tvec3<T, P> taylorInvSqrt(detail::tvec3<T, P> const & r)
    {
        return T(1.79284291400159) - T(0.85373472095314) * r;
    }

    template <typename T, precision P>
    GLM_FUNC_QUALIFIER detail::tvec4<T, P> taylorInvSqrt(detail::tvec4<T, P> const & r)
    {
        return T(1.79284291400159) - T(0.85373472095314) * r;
    }

    /*
    template <typename T, precision P, template <typename, precision> class vecType>
    GLM_FUNC_QUALIFIER vecType<T, P> taylorInvSqrt(vecType<T, P> const & r)
    {
        return T(1.79284291400159) - T(0.85373472095314) * r;
    }
    */

    template <typename T, precision P>
    GLM_FUNC_QUALIFIER detail::tvec2<T, P> fade(detail::tvec2<T, P> const & t)
    {
        return (t * t * t) * (t * (t * T(6) - T(15)) + T(10));
    }

    template <typename T, precision P>
    GLM_FUNC_QUALIFIER detail::tvec3<T, P> fade(detail::tvec3<T, P> const & t)
    {
        return (t * t * t) * (t * (t * T(6) - T(15)) + T(10));
    }

    template <typename T, precision P>
    GLM_FUNC_QUALIFIER detail::tvec4<T, P> fade(detail::tvec4<T, P> const & t)
    {
        return (t * t * t) * (t * (t * T(6) - T(15)) + T(10));
    }

    /*
    template <typename T, precision P, template <typename, precision> class vecType>
    GLM_FUNC_QUALIFIER vecType<T, P> fade(vecType<T, P> const & t)
    {
        return (t * t * t) * (t * (t * T(6) - T(15)) + T(10));
    }
    */
}//namespace detail
}//namespace glm

#endif//GLM_DETAIL_NOISE_INCLUDED

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/libs/glm/detail/_swizzle.hpp

///////////////////////////////////////////////////////////////////////////////////
/// OpenGL Mathematics (glm.g-truc.net)
///
/// Copyright (c) 2005 - 2014 G-Truc Creation (www.g-truc.net)
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to deal
/// in the Software without restriction, including without limitation the rights
/// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
/// copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in
/// all copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE /// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER /// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, /// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN /// THE SOFTWARE. /// /// @ref core /// @file glm/core/_swizzle.hpp /// @date 2006-04-20 / 2011-02-16 /// @author Christophe Riccio /////////////////////////////////////////////////////////////////////////////////// #ifndef glm_core_swizzle #define glm_core_swizzle namespace glm{ namespace detail { // Internal class for implementing swizzle operators template struct _swizzle_base0 { typedef T value_type; protected: GLM_FUNC_QUALIFIER value_type& elem (size_t i) { return (reinterpret_cast(_buffer))[i]; } GLM_FUNC_QUALIFIER const value_type& elem (size_t i) const { return (reinterpret_cast(_buffer))[i]; } // Use an opaque buffer to *ensure* the compiler doesn't call a constructor. // The size 1 buffer is assumed to aligned to the actual members so that the // elem() char _buffer[1]; }; template struct _swizzle_base1 : public _swizzle_base0 { }; template struct _swizzle_base1 : public _swizzle_base0 { GLM_FUNC_QUALIFIER V operator ()() const { return V(this->elem(E0), this->elem(E1)); } }; template struct _swizzle_base1 : public _swizzle_base0 { GLM_FUNC_QUALIFIER V operator ()() const { return V(this->elem(E0), this->elem(E1), this->elem(E2)); } }; template struct _swizzle_base1 : public _swizzle_base0 { GLM_FUNC_QUALIFIER V operator ()() const { return V(this->elem(E0), this->elem(E1), this->elem(E2), this->elem(E3)); } }; // Internal class for implementing swizzle operators /* Template parameters: ValueType = type of scalar values (e.g. float, double) VecType = class the swizzle is applies to (e.g. tvec3) N = number of components in the vector (e.g. 
3) E0...3 = what index the n-th element of this swizzle refers to in the unswizzled vec DUPLICATE_ELEMENTS = 1 if there is a repeated element, 0 otherwise (used to specialize swizzles containing duplicate elements so that they cannot be used as r-values). */ template struct _swizzle_base2 : public _swizzle_base1 { typedef VecType vec_type; typedef ValueType value_type; GLM_FUNC_QUALIFIER _swizzle_base2& operator= (const ValueType& t) { for (int i = 0; i < N; ++i) (*this)[i] = t; return *this; } GLM_FUNC_QUALIFIER _swizzle_base2& operator= (const VecType& that) { struct op { GLM_FUNC_QUALIFIER void operator() (value_type& e, value_type& t) { e = t; } }; _apply_op(that, op()); return *this; } GLM_FUNC_QUALIFIER void operator -= (const VecType& that) { struct op { GLM_FUNC_QUALIFIER void operator() (value_type& e, value_type& t) { e -= t; } }; _apply_op(that, op()); } GLM_FUNC_QUALIFIER void operator += (const VecType& that) { struct op { GLM_FUNC_QUALIFIER void operator() (value_type& e, value_type& t) { e += t; } }; _apply_op(that, op()); } GLM_FUNC_QUALIFIER void operator *= (const VecType& that) { struct op { GLM_FUNC_QUALIFIER void operator() (value_type& e, value_type& t) { e *= t; } }; _apply_op(that, op()); } GLM_FUNC_QUALIFIER void operator /= (const VecType& that) { struct op { GLM_FUNC_QUALIFIER void operator() (value_type& e, value_type& t) { e /= t; } }; _apply_op(that, op()); } GLM_FUNC_QUALIFIER value_type& operator[] (size_t i) { #ifndef __CUDA_ARCH__ static #endif const int offset_dst[4] = { E0, E1, E2, E3 }; return this->elem(offset_dst[i]); } GLM_FUNC_QUALIFIER value_type operator[] (size_t i) const { #ifndef __CUDA_ARCH__ static #endif const int offset_dst[4] = { E0, E1, E2, E3 }; return this->elem(offset_dst[i]); } protected: template GLM_FUNC_QUALIFIER void _apply_op(const VecType& that, T op) { // Make a copy of the data in this == &that. 
// The copier should optimize out the copy in cases where the function is // properly inlined and the copy is not necessary. ValueType t[N]; for (int i = 0; i < N; ++i) t[i] = that[i]; for (int i = 0; i < N; ++i) op( (*this)[i], t[i] ); } }; // Specialization for swizzles containing duplicate elements. These cannot be modified. template struct _swizzle_base2 : public _swizzle_base1 { typedef VecType vec_type; typedef ValueType value_type; struct Stub {}; GLM_FUNC_QUALIFIER _swizzle_base2& operator= (Stub const &) { return *this; } GLM_FUNC_QUALIFIER value_type operator[] (size_t i) const { #ifndef __CUDA_ARCH__ static #endif const int offset_dst[4] = { E0, E1, E2, E3 }; return this->elem(offset_dst[i]); } }; template struct _swizzle : public _swizzle_base2 { typedef _swizzle_base2 base_type; using base_type::operator=; GLM_FUNC_QUALIFIER operator VecType () const { return (*this)(); } }; // // To prevent the C++ syntax from getting entirely overwhelming, define some alias macros // #define _GLM_SWIZZLE_TEMPLATE1 template #define _GLM_SWIZZLE_TEMPLATE2 template #define _GLM_SWIZZLE_TYPE1 _swizzle #define _GLM_SWIZZLE_TYPE2 _swizzle // // Wrapper for a binary operator (e.g. u.yy + v.zy) // #define _GLM_SWIZZLE_VECTOR_BINARY_OPERATOR_IMPLEMENTATION(OPERAND) \ _GLM_SWIZZLE_TEMPLATE2 \ GLM_FUNC_QUALIFIER V operator OPERAND ( const _GLM_SWIZZLE_TYPE1& a, const _GLM_SWIZZLE_TYPE2& b) \ { \ return a() OPERAND b(); \ } \ _GLM_SWIZZLE_TEMPLATE1 \ GLM_FUNC_QUALIFIER V operator OPERAND ( const _GLM_SWIZZLE_TYPE1& a, const V& b) \ { \ return a() OPERAND b; \ } \ _GLM_SWIZZLE_TEMPLATE1 \ GLM_FUNC_QUALIFIER V operator OPERAND ( const V& a, const _GLM_SWIZZLE_TYPE1& b) \ { \ return a OPERAND b(); \ } // // Wrapper for a operand between a swizzle and a binary (e.g. 
1.0f - u.xyz) // #define _GLM_SWIZZLE_SCALAR_BINARY_OPERATOR_IMPLEMENTATION(OPERAND) \ _GLM_SWIZZLE_TEMPLATE1 \ GLM_FUNC_QUALIFIER V operator OPERAND ( const _GLM_SWIZZLE_TYPE1& a, const T& b) \ { \ return a() OPERAND b; \ } \ _GLM_SWIZZLE_TEMPLATE1 \ GLM_FUNC_QUALIFIER V operator OPERAND ( const T& a, const _GLM_SWIZZLE_TYPE1& b) \ { \ return a OPERAND b(); \ } // // Macro for wrapping a function taking one argument (e.g. abs()) // #define _GLM_SWIZZLE_FUNCTION_1_ARGS(RETURN_TYPE,FUNCTION) \ _GLM_SWIZZLE_TEMPLATE1 \ GLM_FUNC_QUALIFIER typename _GLM_SWIZZLE_TYPE1::RETURN_TYPE FUNCTION(const _GLM_SWIZZLE_TYPE1& a) \ { \ return FUNCTION(a()); \ } // // Macro for wrapping a function taking two vector arguments (e.g. dot()). // #define _GLM_SWIZZLE_FUNCTION_2_ARGS(RETURN_TYPE,FUNCTION) \ _GLM_SWIZZLE_TEMPLATE2 \ GLM_FUNC_QUALIFIER typename _GLM_SWIZZLE_TYPE1::RETURN_TYPE FUNCTION(const _GLM_SWIZZLE_TYPE1& a, const _GLM_SWIZZLE_TYPE2& b) \ { \ return FUNCTION(a(), b()); \ } \ _GLM_SWIZZLE_TEMPLATE1 \ GLM_FUNC_QUALIFIER typename _GLM_SWIZZLE_TYPE1::RETURN_TYPE FUNCTION(const _GLM_SWIZZLE_TYPE1& a, const _GLM_SWIZZLE_TYPE1& b) \ { \ return FUNCTION(a(), b()); \ } \ _GLM_SWIZZLE_TEMPLATE1 \ GLM_FUNC_QUALIFIER typename _GLM_SWIZZLE_TYPE1::RETURN_TYPE FUNCTION(const _GLM_SWIZZLE_TYPE1& a, const typename V& b) \ { \ return FUNCTION(a(), b); \ } \ _GLM_SWIZZLE_TEMPLATE1 \ GLM_FUNC_QUALIFIER typename _GLM_SWIZZLE_TYPE1::RETURN_TYPE FUNCTION(const V& a, const _GLM_SWIZZLE_TYPE1& b) \ { \ return FUNCTION(a, b()); \ } // // Macro for wrapping a function take 2 vec arguments followed by a scalar (e.g. mix()). 
// #define _GLM_SWIZZLE_FUNCTION_2_ARGS_SCALAR(RETURN_TYPE,FUNCTION) \ _GLM_SWIZZLE_TEMPLATE2 \ GLM_FUNC_QUALIFIER typename _GLM_SWIZZLE_TYPE1::RETURN_TYPE FUNCTION(const _GLM_SWIZZLE_TYPE1& a, const _GLM_SWIZZLE_TYPE2& b, const T& c) \ { \ return FUNCTION(a(), b(), c); \ } \ _GLM_SWIZZLE_TEMPLATE1 \ GLM_FUNC_QUALIFIER typename _GLM_SWIZZLE_TYPE1::RETURN_TYPE FUNCTION(const _GLM_SWIZZLE_TYPE1& a, const _GLM_SWIZZLE_TYPE1& b, const T& c) \ { \ return FUNCTION(a(), b(), c); \ } \ _GLM_SWIZZLE_TEMPLATE1 \ GLM_FUNC_QUALIFIER typename _GLM_SWIZZLE_TYPE1::RETURN_TYPE FUNCTION(const _GLM_SWIZZLE_TYPE1& a, const typename S0::vec_type& b, const T& c)\ { \ return FUNCTION(a(), b, c); \ } \ _GLM_SWIZZLE_TEMPLATE1 \ GLM_FUNC_QUALIFIER typename _GLM_SWIZZLE_TYPE1::RETURN_TYPE FUNCTION(const typename V& a, const _GLM_SWIZZLE_TYPE1& b, const T& c) \ { \ return FUNCTION(a, b(), c); \ } }//namespace detail }//namespace glm namespace glm { namespace detail { _GLM_SWIZZLE_SCALAR_BINARY_OPERATOR_IMPLEMENTATION(-) _GLM_SWIZZLE_SCALAR_BINARY_OPERATOR_IMPLEMENTATION(*) _GLM_SWIZZLE_VECTOR_BINARY_OPERATOR_IMPLEMENTATION(+) _GLM_SWIZZLE_VECTOR_BINARY_OPERATOR_IMPLEMENTATION(-) _GLM_SWIZZLE_VECTOR_BINARY_OPERATOR_IMPLEMENTATION(*) _GLM_SWIZZLE_VECTOR_BINARY_OPERATOR_IMPLEMENTATION(/) } // // Swizzles are distinct types from the unswizzled type. The below macros will // provide template specializations for the swizzle types for the given functions // so that the compiler does not have any ambiguity to choosing how to handle // the function. // // The alternative is to use the operator()() when calling the function in order // to explicitly convert the swizzled type to the unswizzled type. 
// //_GLM_SWIZZLE_FUNCTION_1_ARGS(vec_type, abs); //_GLM_SWIZZLE_FUNCTION_1_ARGS(vec_type, acos); //_GLM_SWIZZLE_FUNCTION_1_ARGS(vec_type, acosh); //_GLM_SWIZZLE_FUNCTION_1_ARGS(vec_type, all); //_GLM_SWIZZLE_FUNCTION_1_ARGS(vec_type, any); //_GLM_SWIZZLE_FUNCTION_2_ARGS(value_type, dot); //_GLM_SWIZZLE_FUNCTION_2_ARGS(vec_type, cross); //_GLM_SWIZZLE_FUNCTION_2_ARGS(vec_type, step); //_GLM_SWIZZLE_FUNCTION_2_ARGS_SCALAR(vec_type, mix); } #define _GLM_SWIZZLE2_2_MEMBERS(T, P, V, E0,E1) \ struct { _swizzle<2, T, P, V, 0,0,-1,-2> E0 ## E0; }; \ struct { _swizzle<2, T, P, V, 0,1,-1,-2> E0 ## E1; }; \ struct { _swizzle<2, T, P, V, 1,0,-1,-2> E1 ## E0; }; \ struct { _swizzle<2, T, P, V, 1,1,-1,-2> E1 ## E1; }; #define _GLM_SWIZZLE2_3_MEMBERS(T, P, V, E0,E1) \ struct { _swizzle<3,T, P, V, 0,0,0,-1> E0 ## E0 ## E0; }; \ struct { _swizzle<3,T, P, V, 0,0,1,-1> E0 ## E0 ## E1; }; \ struct { _swizzle<3,T, P, V, 0,1,0,-1> E0 ## E1 ## E0; }; \ struct { _swizzle<3,T, P, V, 0,1,1,-1> E0 ## E1 ## E1; }; \ struct { _swizzle<3,T, P, V, 1,0,0,-1> E1 ## E0 ## E0; }; \ struct { _swizzle<3,T, P, V, 1,0,1,-1> E1 ## E0 ## E1; }; \ struct { _swizzle<3,T, P, V, 1,1,0,-1> E1 ## E1 ## E0; }; \ struct { _swizzle<3,T, P, V, 1,1,1,-1> E1 ## E1 ## E1; }; #define _GLM_SWIZZLE2_4_MEMBERS(T, P, V, E0,E1) \ struct { _swizzle<4,T, P, V, 0,0,0,0> E0 ## E0 ## E0 ## E0; }; \ struct { _swizzle<4,T, P, V, 0,0,0,1> E0 ## E0 ## E0 ## E1; }; \ struct { _swizzle<4,T, P, V, 0,0,1,0> E0 ## E0 ## E1 ## E0; }; \ struct { _swizzle<4,T, P, V, 0,0,1,1> E0 ## E0 ## E1 ## E1; }; \ struct { _swizzle<4,T, P, V, 0,1,0,0> E0 ## E1 ## E0 ## E0; }; \ struct { _swizzle<4,T, P, V, 0,1,0,1> E0 ## E1 ## E0 ## E1; }; \ struct { _swizzle<4,T, P, V, 0,1,1,0> E0 ## E1 ## E1 ## E0; }; \ struct { _swizzle<4,T, P, V, 0,1,1,1> E0 ## E1 ## E1 ## E1; }; \ struct { _swizzle<4,T, P, V, 1,0,0,0> E1 ## E0 ## E0 ## E0; }; \ struct { _swizzle<4,T, P, V, 1,0,0,1> E1 ## E0 ## E0 ## E1; }; \ struct { _swizzle<4,T, P, V, 1,0,1,0> E1 ## E0 ## E1 ## 
E0; }; \ struct { _swizzle<4,T, P, V, 1,0,1,1> E1 ## E0 ## E1 ## E1; }; \ struct { _swizzle<4,T, P, V, 1,1,0,0> E1 ## E1 ## E0 ## E0; }; \ struct { _swizzle<4,T, P, V, 1,1,0,1> E1 ## E1 ## E0 ## E1; }; \ struct { _swizzle<4,T, P, V, 1,1,1,0> E1 ## E1 ## E1 ## E0; }; \ struct { _swizzle<4,T, P, V, 1,1,1,1> E1 ## E1 ## E1 ## E1; }; #define _GLM_SWIZZLE3_2_MEMBERS(T, P, V, E0,E1,E2) \ struct { _swizzle<2,T, P, V, 0,0,-1,-2> E0 ## E0; }; \ struct { _swizzle<2,T, P, V, 0,1,-1,-2> E0 ## E1; }; \ struct { _swizzle<2,T, P, V, 0,2,-1,-2> E0 ## E2; }; \ struct { _swizzle<2,T, P, V, 1,0,-1,-2> E1 ## E0; }; \ struct { _swizzle<2,T, P, V, 1,1,-1,-2> E1 ## E1; }; \ struct { _swizzle<2,T, P, V, 1,2,-1,-2> E1 ## E2; }; \ struct { _swizzle<2,T, P, V, 2,0,-1,-2> E2 ## E0; }; \ struct { _swizzle<2,T, P, V, 2,1,-1,-2> E2 ## E1; }; \ struct { _swizzle<2,T, P, V, 2,2,-1,-2> E2 ## E2; }; #define _GLM_SWIZZLE3_3_MEMBERS(T, P, V ,E0,E1,E2) \ struct { _swizzle<3,T,P, V, 0,0,0,-1> E0 ## E0 ## E0; }; \ struct { _swizzle<3,T,P, V, 0,0,1,-1> E0 ## E0 ## E1; }; \ struct { _swizzle<3,T,P, V, 0,0,2,-1> E0 ## E0 ## E2; }; \ struct { _swizzle<3,T,P, V, 0,1,0,-1> E0 ## E1 ## E0; }; \ struct { _swizzle<3,T,P, V, 0,1,1,-1> E0 ## E1 ## E1; }; \ struct { _swizzle<3,T,P, V, 0,1,2,-1> E0 ## E1 ## E2; }; \ struct { _swizzle<3,T,P, V, 0,2,0,-1> E0 ## E2 ## E0; }; \ struct { _swizzle<3,T,P, V, 0,2,1,-1> E0 ## E2 ## E1; }; \ struct { _swizzle<3,T,P, V, 0,2,2,-1> E0 ## E2 ## E2; }; \ struct { _swizzle<3,T,P, V, 1,0,0,-1> E1 ## E0 ## E0; }; \ struct { _swizzle<3,T,P, V, 1,0,1,-1> E1 ## E0 ## E1; }; \ struct { _swizzle<3,T,P, V, 1,0,2,-1> E1 ## E0 ## E2; }; \ struct { _swizzle<3,T,P, V, 1,1,0,-1> E1 ## E1 ## E0; }; \ struct { _swizzle<3,T,P, V, 1,1,1,-1> E1 ## E1 ## E1; }; \ struct { _swizzle<3,T,P, V, 1,1,2,-1> E1 ## E1 ## E2; }; \ struct { _swizzle<3,T,P, V, 1,2,0,-1> E1 ## E2 ## E0; }; \ struct { _swizzle<3,T,P, V, 1,2,1,-1> E1 ## E2 ## E1; }; \ struct { _swizzle<3,T,P, V, 1,2,2,-1> E1 ## E2 ## E2; }; \ struct 
{ _swizzle<3,T,P, V, 2,0,0,-1> E2 ## E0 ## E0; }; \ struct { _swizzle<3,T,P, V, 2,0,1,-1> E2 ## E0 ## E1; }; \ struct { _swizzle<3,T,P, V, 2,0,2,-1> E2 ## E0 ## E2; }; \ struct { _swizzle<3,T,P, V, 2,1,0,-1> E2 ## E1 ## E0; }; \ struct { _swizzle<3,T,P, V, 2,1,1,-1> E2 ## E1 ## E1; }; \ struct { _swizzle<3,T,P, V, 2,1,2,-1> E2 ## E1 ## E2; }; \ struct { _swizzle<3,T,P, V, 2,2,0,-1> E2 ## E2 ## E0; }; \ struct { _swizzle<3,T,P, V, 2,2,1,-1> E2 ## E2 ## E1; }; \ struct { _swizzle<3,T,P, V, 2,2,2,-1> E2 ## E2 ## E2; }; #define _GLM_SWIZZLE3_4_MEMBERS(T, P, V, E0,E1,E2) \ struct { _swizzle<4,T, P, V, 0,0,0,0> E0 ## E0 ## E0 ## E0; }; \ struct { _swizzle<4,T, P, V, 0,0,0,1> E0 ## E0 ## E0 ## E1; }; \ struct { _swizzle<4,T, P, V, 0,0,0,2> E0 ## E0 ## E0 ## E2; }; \ struct { _swizzle<4,T, P, V, 0,0,1,0> E0 ## E0 ## E1 ## E0; }; \ struct { _swizzle<4,T, P, V, 0,0,1,1> E0 ## E0 ## E1 ## E1; }; \ struct { _swizzle<4,T, P, V, 0,0,1,2> E0 ## E0 ## E1 ## E2; }; \ struct { _swizzle<4,T, P, V, 0,0,2,0> E0 ## E0 ## E2 ## E0; }; \ struct { _swizzle<4,T, P, V, 0,0,2,1> E0 ## E0 ## E2 ## E1; }; \ struct { _swizzle<4,T, P, V, 0,0,2,2> E0 ## E0 ## E2 ## E2; }; \ struct { _swizzle<4,T, P, V, 0,1,0,0> E0 ## E1 ## E0 ## E0; }; \ struct { _swizzle<4,T, P, V, 0,1,0,1> E0 ## E1 ## E0 ## E1; }; \ struct { _swizzle<4,T, P, V, 0,1,0,2> E0 ## E1 ## E0 ## E2; }; \ struct { _swizzle<4,T, P, V, 0,1,1,0> E0 ## E1 ## E1 ## E0; }; \ struct { _swizzle<4,T, P, V, 0,1,1,1> E0 ## E1 ## E1 ## E1; }; \ struct { _swizzle<4,T, P, V, 0,1,1,2> E0 ## E1 ## E1 ## E2; }; \ struct { _swizzle<4,T, P, V, 0,1,2,0> E0 ## E1 ## E2 ## E0; }; \ struct { _swizzle<4,T, P, V, 0,1,2,1> E0 ## E1 ## E2 ## E1; }; \ struct { _swizzle<4,T, P, V, 0,1,2,2> E0 ## E1 ## E2 ## E2; }; \ struct { _swizzle<4,T, P, V, 0,2,0,0> E0 ## E2 ## E0 ## E0; }; \ struct { _swizzle<4,T, P, V, 0,2,0,1> E0 ## E2 ## E0 ## E1; }; \ struct { _swizzle<4,T, P, V, 0,2,0,2> E0 ## E2 ## E0 ## E2; }; \ struct { _swizzle<4,T, P, V, 0,2,1,0> E0 ## E2 ## E1 ## E0; 
}; \ struct { _swizzle<4,T, P, V, 0,2,1,1> E0 ## E2 ## E1 ## E1; }; \ struct { _swizzle<4,T, P, V, 0,2,1,2> E0 ## E2 ## E1 ## E2; }; \ struct { _swizzle<4,T, P, V, 0,2,2,0> E0 ## E2 ## E2 ## E0; }; \ struct { _swizzle<4,T, P, V, 0,2,2,1> E0 ## E2 ## E2 ## E1; }; \ struct { _swizzle<4,T, P, V, 0,2,2,2> E0 ## E2 ## E2 ## E2; }; \ struct { _swizzle<4,T, P, V, 1,0,0,0> E1 ## E0 ## E0 ## E0; }; \ struct { _swizzle<4,T, P, V, 1,0,0,1> E1 ## E0 ## E0 ## E1; }; \ struct { _swizzle<4,T, P, V, 1,0,0,2> E1 ## E0 ## E0 ## E2; }; \ struct { _swizzle<4,T, P, V, 1,0,1,0> E1 ## E0 ## E1 ## E0; }; \ struct { _swizzle<4,T, P, V, 1,0,1,1> E1 ## E0 ## E1 ## E1; }; \ struct { _swizzle<4,T, P, V, 1,0,1,2> E1 ## E0 ## E1 ## E2; }; \ struct { _swizzle<4,T, P, V, 1,0,2,0> E1 ## E0 ## E2 ## E0; }; \ struct { _swizzle<4,T, P, V, 1,0,2,1> E1 ## E0 ## E2 ## E1; }; \ struct { _swizzle<4,T, P, V, 1,0,2,2> E1 ## E0 ## E2 ## E2; }; \ struct { _swizzle<4,T, P, V, 1,1,0,0> E1 ## E1 ## E0 ## E0; }; \ struct { _swizzle<4,T, P, V, 1,1,0,1> E1 ## E1 ## E0 ## E1; }; \ struct { _swizzle<4,T, P, V, 1,1,0,2> E1 ## E1 ## E0 ## E2; }; \ struct { _swizzle<4,T, P, V, 1,1,1,0> E1 ## E1 ## E1 ## E0; }; \ struct { _swizzle<4,T, P, V, 1,1,1,1> E1 ## E1 ## E1 ## E1; }; \ struct { _swizzle<4,T, P, V, 1,1,1,2> E1 ## E1 ## E1 ## E2; }; \ struct { _swizzle<4,T, P, V, 1,1,2,0> E1 ## E1 ## E2 ## E0; }; \ struct { _swizzle<4,T, P, V, 1,1,2,1> E1 ## E1 ## E2 ## E1; }; \ struct { _swizzle<4,T, P, V, 1,1,2,2> E1 ## E1 ## E2 ## E2; }; \ struct { _swizzle<4,T, P, V, 1,2,0,0> E1 ## E2 ## E0 ## E0; }; \ struct { _swizzle<4,T, P, V, 1,2,0,1> E1 ## E2 ## E0 ## E1; }; \ struct { _swizzle<4,T, P, V, 1,2,0,2> E1 ## E2 ## E0 ## E2; }; \ struct { _swizzle<4,T, P, V, 1,2,1,0> E1 ## E2 ## E1 ## E0; }; \ struct { _swizzle<4,T, P, V, 1,2,1,1> E1 ## E2 ## E1 ## E1; }; \ struct { _swizzle<4,T, P, V, 1,2,1,2> E1 ## E2 ## E1 ## E2; }; \ struct { _swizzle<4,T, P, V, 1,2,2,0> E1 ## E2 ## E2 ## E0; }; \ struct { _swizzle<4,T, P, V, 1,2,2,1> E1 ## 
E2 ## E2 ## E1; }; \ struct { _swizzle<4,T, P, V, 1,2,2,2> E1 ## E2 ## E2 ## E2; }; \ struct { _swizzle<4,T, P, V, 2,0,0,0> E2 ## E0 ## E0 ## E0; }; \ struct { _swizzle<4,T, P, V, 2,0,0,1> E2 ## E0 ## E0 ## E1; }; \ struct { _swizzle<4,T, P, V, 2,0,0,2> E2 ## E0 ## E0 ## E2; }; \ struct { _swizzle<4,T, P, V, 2,0,1,0> E2 ## E0 ## E1 ## E0; }; \ struct { _swizzle<4,T, P, V, 2,0,1,1> E2 ## E0 ## E1 ## E1; }; \ struct { _swizzle<4,T, P, V, 2,0,1,2> E2 ## E0 ## E1 ## E2; }; \ struct { _swizzle<4,T, P, V, 2,0,2,0> E2 ## E0 ## E2 ## E0; }; \ struct { _swizzle<4,T, P, V, 2,0,2,1> E2 ## E0 ## E2 ## E1; }; \ struct { _swizzle<4,T, P, V, 2,0,2,2> E2 ## E0 ## E2 ## E2; }; \ struct { _swizzle<4,T, P, V, 2,1,0,0> E2 ## E1 ## E0 ## E0; }; \ struct { _swizzle<4,T, P, V, 2,1,0,1> E2 ## E1 ## E0 ## E1; }; \ struct { _swizzle<4,T, P, V, 2,1,0,2> E2 ## E1 ## E0 ## E2; }; \ struct { _swizzle<4,T, P, V, 2,1,1,0> E2 ## E1 ## E1 ## E0; }; \ struct { _swizzle<4,T, P, V, 2,1,1,1> E2 ## E1 ## E1 ## E1; }; \ struct { _swizzle<4,T, P, V, 2,1,1,2> E2 ## E1 ## E1 ## E2; }; \ struct { _swizzle<4,T, P, V, 2,1,2,0> E2 ## E1 ## E2 ## E0; }; \ struct { _swizzle<4,T, P, V, 2,1,2,1> E2 ## E1 ## E2 ## E1; }; \ struct { _swizzle<4,T, P, V, 2,1,2,2> E2 ## E1 ## E2 ## E2; }; \ struct { _swizzle<4,T, P, V, 2,2,0,0> E2 ## E2 ## E0 ## E0; }; \ struct { _swizzle<4,T, P, V, 2,2,0,1> E2 ## E2 ## E0 ## E1; }; \ struct { _swizzle<4,T, P, V, 2,2,0,2> E2 ## E2 ## E0 ## E2; }; \ struct { _swizzle<4,T, P, V, 2,2,1,0> E2 ## E2 ## E1 ## E0; }; \ struct { _swizzle<4,T, P, V, 2,2,1,1> E2 ## E2 ## E1 ## E1; }; \ struct { _swizzle<4,T, P, V, 2,2,1,2> E2 ## E2 ## E1 ## E2; }; \ struct { _swizzle<4,T, P, V, 2,2,2,0> E2 ## E2 ## E2 ## E0; }; \ struct { _swizzle<4,T, P, V, 2,2,2,1> E2 ## E2 ## E2 ## E1; }; \ struct { _swizzle<4,T, P, V, 2,2,2,2> E2 ## E2 ## E2 ## E2; }; #define _GLM_SWIZZLE4_2_MEMBERS(T, P, V, E0,E1,E2,E3) \ struct { _swizzle<2,T, P, V, 0,0,-1,-2> E0 ## E0; }; \ struct { _swizzle<2,T, P, V, 0,1,-1,-2> E0 ## E1; 
}; \ struct { _swizzle<2,T, P, V, 0,2,-1,-2> E0 ## E2; }; \ struct { _swizzle<2,T, P, V, 0,3,-1,-2> E0 ## E3; }; \ struct { _swizzle<2,T, P, V, 1,0,-1,-2> E1 ## E0; }; \ struct { _swizzle<2,T, P, V, 1,1,-1,-2> E1 ## E1; }; \ struct { _swizzle<2,T, P, V, 1,2,-1,-2> E1 ## E2; }; \ struct { _swizzle<2,T, P, V, 1,3,-1,-2> E1 ## E3; }; \ struct { _swizzle<2,T, P, V, 2,0,-1,-2> E2 ## E0; }; \ struct { _swizzle<2,T, P, V, 2,1,-1,-2> E2 ## E1; }; \ struct { _swizzle<2,T, P, V, 2,2,-1,-2> E2 ## E2; }; \ struct { _swizzle<2,T, P, V, 2,3,-1,-2> E2 ## E3; }; \ struct { _swizzle<2,T, P, V, 3,0,-1,-2> E3 ## E0; }; \ struct { _swizzle<2,T, P, V, 3,1,-1,-2> E3 ## E1; }; \ struct { _swizzle<2,T, P, V, 3,2,-1,-2> E3 ## E2; }; \ struct { _swizzle<2,T, P, V, 3,3,-1,-2> E3 ## E3; }; #define _GLM_SWIZZLE4_3_MEMBERS(T,P, V, E0,E1,E2,E3) \ struct { _swizzle<3,T,P, V, 0,0,0,-1> E0 ## E0 ## E0; }; \ struct { _swizzle<3,T,P, V, 0,0,1,-1> E0 ## E0 ## E1; }; \ struct { _swizzle<3,T,P, V, 0,0,2,-1> E0 ## E0 ## E2; }; \ struct { _swizzle<3,T,P, V, 0,0,3,-1> E0 ## E0 ## E3; }; \ struct { _swizzle<3,T,P, V, 0,1,0,-1> E0 ## E1 ## E0; }; \ struct { _swizzle<3,T,P, V, 0,1,1,-1> E0 ## E1 ## E1; }; \ struct { _swizzle<3,T,P, V, 0,1,2,-1> E0 ## E1 ## E2; }; \ struct { _swizzle<3,T,P, V, 0,1,3,-1> E0 ## E1 ## E3; }; \ struct { _swizzle<3,T,P, V, 0,2,0,-1> E0 ## E2 ## E0; }; \ struct { _swizzle<3,T,P, V, 0,2,1,-1> E0 ## E2 ## E1; }; \ struct { _swizzle<3,T,P, V, 0,2,2,-1> E0 ## E2 ## E2; }; \ struct { _swizzle<3,T,P, V, 0,2,3,-1> E0 ## E2 ## E3; }; \ struct { _swizzle<3,T,P, V, 0,3,0,-1> E0 ## E3 ## E0; }; \ struct { _swizzle<3,T,P, V, 0,3,1,-1> E0 ## E3 ## E1; }; \ struct { _swizzle<3,T,P, V, 0,3,2,-1> E0 ## E3 ## E2; }; \ struct { _swizzle<3,T,P, V, 0,3,3,-1> E0 ## E3 ## E3; }; \ struct { _swizzle<3,T,P, V, 1,0,0,-1> E1 ## E0 ## E0; }; \ struct { _swizzle<3,T,P, V, 1,0,1,-1> E1 ## E0 ## E1; }; \ struct { _swizzle<3,T,P, V, 1,0,2,-1> E1 ## E0 ## E2; }; \ struct { _swizzle<3,T,P, V, 1,0,3,-1> E1 ## E0 ## 
E3; }; \ struct { _swizzle<3,T,P, V, 1,1,0,-1> E1 ## E1 ## E0; }; \ struct { _swizzle<3,T,P, V, 1,1,1,-1> E1 ## E1 ## E1; }; \ struct { _swizzle<3,T,P, V, 1,1,2,-1> E1 ## E1 ## E2; }; \ struct { _swizzle<3,T,P, V, 1,1,3,-1> E1 ## E1 ## E3; }; \ struct { _swizzle<3,T,P, V, 1,2,0,-1> E1 ## E2 ## E0; }; \ struct { _swizzle<3,T,P, V, 1,2,1,-1> E1 ## E2 ## E1; }; \ struct { _swizzle<3,T,P, V, 1,2,2,-1> E1 ## E2 ## E2; }; \ struct { _swizzle<3,T,P, V, 1,2,3,-1> E1 ## E2 ## E3; }; \ struct { _swizzle<3,T,P, V, 1,3,0,-1> E1 ## E3 ## E0; }; \ struct { _swizzle<3,T,P, V, 1,3,1,-1> E1 ## E3 ## E1; }; \ struct { _swizzle<3,T,P, V, 1,3,2,-1> E1 ## E3 ## E2; }; \ struct { _swizzle<3,T,P, V, 1,3,3,-1> E1 ## E3 ## E3; }; \ struct { _swizzle<3,T,P, V, 2,0,0,-1> E2 ## E0 ## E0; }; \ struct { _swizzle<3,T,P, V, 2,0,1,-1> E2 ## E0 ## E1; }; \ struct { _swizzle<3,T,P, V, 2,0,2,-1> E2 ## E0 ## E2; }; \ struct { _swizzle<3,T,P, V, 2,0,3,-1> E2 ## E0 ## E3; }; \ struct { _swizzle<3,T,P, V, 2,1,0,-1> E2 ## E1 ## E0; }; \ struct { _swizzle<3,T,P, V, 2,1,1,-1> E2 ## E1 ## E1; }; \ struct { _swizzle<3,T,P, V, 2,1,2,-1> E2 ## E1 ## E2; }; \ struct { _swizzle<3,T,P, V, 2,1,3,-1> E2 ## E1 ## E3; }; \ struct { _swizzle<3,T,P, V, 2,2,0,-1> E2 ## E2 ## E0; }; \ struct { _swizzle<3,T,P, V, 2,2,1,-1> E2 ## E2 ## E1; }; \ struct { _swizzle<3,T,P, V, 2,2,2,-1> E2 ## E2 ## E2; }; \ struct { _swizzle<3,T,P, V, 2,2,3,-1> E2 ## E2 ## E3; }; \ struct { _swizzle<3,T,P, V, 2,3,0,-1> E2 ## E3 ## E0; }; \ struct { _swizzle<3,T,P, V, 2,3,1,-1> E2 ## E3 ## E1; }; \ struct { _swizzle<3,T,P, V, 2,3,2,-1> E2 ## E3 ## E2; }; \ struct { _swizzle<3,T,P, V, 2,3,3,-1> E2 ## E3 ## E3; }; \ struct { _swizzle<3,T,P, V, 3,0,0,-1> E3 ## E0 ## E0; }; \ struct { _swizzle<3,T,P, V, 3,0,1,-1> E3 ## E0 ## E1; }; \ struct { _swizzle<3,T,P, V, 3,0,2,-1> E3 ## E0 ## E2; }; \ struct { _swizzle<3,T,P, V, 3,0,3,-1> E3 ## E0 ## E3; }; \ struct { _swizzle<3,T,P, V, 3,1,0,-1> E3 ## E1 ## E0; }; \ struct { _swizzle<3,T,P, V, 3,1,1,-1> E3 ## 
E1 ## E1; }; \ struct { _swizzle<3,T,P, V, 3,1,2,-1> E3 ## E1 ## E2; }; \ struct { _swizzle<3,T,P, V, 3,1,3,-1> E3 ## E1 ## E3; }; \ struct { _swizzle<3,T,P, V, 3,2,0,-1> E3 ## E2 ## E0; }; \ struct { _swizzle<3,T,P, V, 3,2,1,-1> E3 ## E2 ## E1; }; \ struct { _swizzle<3,T,P, V, 3,2,2,-1> E3 ## E2 ## E2; }; \ struct { _swizzle<3,T,P, V, 3,2,3,-1> E3 ## E2 ## E3; }; \ struct { _swizzle<3,T,P, V, 3,3,0,-1> E3 ## E3 ## E0; }; \ struct { _swizzle<3,T,P, V, 3,3,1,-1> E3 ## E3 ## E1; }; \ struct { _swizzle<3,T,P, V, 3,3,2,-1> E3 ## E3 ## E2; }; \ struct { _swizzle<3,T,P, V, 3,3,3,-1> E3 ## E3 ## E3; }; #define _GLM_SWIZZLE4_4_MEMBERS(T, P, V, E0,E1,E2,E3) \ struct { _swizzle<4, T, P, V, 0,0,0,0> E0 ## E0 ## E0 ## E0; }; \ struct { _swizzle<4, T, P, V, 0,0,0,1> E0 ## E0 ## E0 ## E1; }; \ struct { _swizzle<4, T, P, V, 0,0,0,2> E0 ## E0 ## E0 ## E2; }; \ struct { _swizzle<4, T, P, V, 0,0,0,3> E0 ## E0 ## E0 ## E3; }; \ struct { _swizzle<4, T, P, V, 0,0,1,0> E0 ## E0 ## E1 ## E0; }; \ struct { _swizzle<4, T, P, V, 0,0,1,1> E0 ## E0 ## E1 ## E1; }; \ struct { _swizzle<4, T, P, V, 0,0,1,2> E0 ## E0 ## E1 ## E2; }; \ struct { _swizzle<4, T, P, V, 0,0,1,3> E0 ## E0 ## E1 ## E3; }; \ struct { _swizzle<4, T, P, V, 0,0,2,0> E0 ## E0 ## E2 ## E0; }; \ struct { _swizzle<4, T, P, V, 0,0,2,1> E0 ## E0 ## E2 ## E1; }; \ struct { _swizzle<4, T, P, V, 0,0,2,2> E0 ## E0 ## E2 ## E2; }; \ struct { _swizzle<4, T, P, V, 0,0,2,3> E0 ## E0 ## E2 ## E3; }; \ struct { _swizzle<4, T, P, V, 0,0,3,0> E0 ## E0 ## E3 ## E0; }; \ struct { _swizzle<4, T, P, V, 0,0,3,1> E0 ## E0 ## E3 ## E1; }; \ struct { _swizzle<4, T, P, V, 0,0,3,2> E0 ## E0 ## E3 ## E2; }; \ struct { _swizzle<4, T, P, V, 0,0,3,3> E0 ## E0 ## E3 ## E3; }; \ struct { _swizzle<4, T, P, V, 0,1,0,0> E0 ## E1 ## E0 ## E0; }; \ struct { _swizzle<4, T, P, V, 0,1,0,1> E0 ## E1 ## E0 ## E1; }; \ struct { _swizzle<4, T, P, V, 0,1,0,2> E0 ## E1 ## E0 ## E2; }; \ struct { _swizzle<4, T, P, V, 0,1,0,3> E0 ## E1 ## E0 ## E3; }; \ struct { _swizzle<4, 
T, P, V, 0,1,1,0> E0 ## E1 ## E1 ## E0; }; \ struct { _swizzle<4, T, P, V, 0,1,1,1> E0 ## E1 ## E1 ## E1; }; \ struct { _swizzle<4, T, P, V, 0,1,1,2> E0 ## E1 ## E1 ## E2; }; \ struct { _swizzle<4, T, P, V, 0,1,1,3> E0 ## E1 ## E1 ## E3; }; \ struct { _swizzle<4, T, P, V, 0,1,2,0> E0 ## E1 ## E2 ## E0; }; \ struct { _swizzle<4, T, P, V, 0,1,2,1> E0 ## E1 ## E2 ## E1; }; \ struct { _swizzle<4, T, P, V, 0,1,2,2> E0 ## E1 ## E2 ## E2; }; \ struct { _swizzle<4, T, P, V, 0,1,2,3> E0 ## E1 ## E2 ## E3; }; \ struct { _swizzle<4, T, P, V, 0,1,3,0> E0 ## E1 ## E3 ## E0; }; \ struct { _swizzle<4, T, P, V, 0,1,3,1> E0 ## E1 ## E3 ## E1; }; \ struct { _swizzle<4, T, P, V, 0,1,3,2> E0 ## E1 ## E3 ## E2; }; \ struct { _swizzle<4, T, P, V, 0,1,3,3> E0 ## E1 ## E3 ## E3; }; \ struct { _swizzle<4, T, P, V, 0,2,0,0> E0 ## E2 ## E0 ## E0; }; \ struct { _swizzle<4, T, P, V, 0,2,0,1> E0 ## E2 ## E0 ## E1; }; \ struct { _swizzle<4, T, P, V, 0,2,0,2> E0 ## E2 ## E0 ## E2; }; \ struct { _swizzle<4, T, P, V, 0,2,0,3> E0 ## E2 ## E0 ## E3; }; \ struct { _swizzle<4, T, P, V, 0,2,1,0> E0 ## E2 ## E1 ## E0; }; \ struct { _swizzle<4, T, P, V, 0,2,1,1> E0 ## E2 ## E1 ## E1; }; \ struct { _swizzle<4, T, P, V, 0,2,1,2> E0 ## E2 ## E1 ## E2; }; \ struct { _swizzle<4, T, P, V, 0,2,1,3> E0 ## E2 ## E1 ## E3; }; \ struct { _swizzle<4, T, P, V, 0,2,2,0> E0 ## E2 ## E2 ## E0; }; \ struct { _swizzle<4, T, P, V, 0,2,2,1> E0 ## E2 ## E2 ## E1; }; \ struct { _swizzle<4, T, P, V, 0,2,2,2> E0 ## E2 ## E2 ## E2; }; \ struct { _swizzle<4, T, P, V, 0,2,2,3> E0 ## E2 ## E2 ## E3; }; \ struct { _swizzle<4, T, P, V, 0,2,3,0> E0 ## E2 ## E3 ## E0; }; \ struct { _swizzle<4, T, P, V, 0,2,3,1> E0 ## E2 ## E3 ## E1; }; \ struct { _swizzle<4, T, P, V, 0,2,3,2> E0 ## E2 ## E3 ## E2; }; \ struct { _swizzle<4, T, P, V, 0,2,3,3> E0 ## E2 ## E3 ## E3; }; \ struct { _swizzle<4, T, P, V, 0,3,0,0> E0 ## E3 ## E0 ## E0; }; \ struct { _swizzle<4, T, P, V, 0,3,0,1> E0 ## E3 ## E0 ## E1; }; \ struct { _swizzle<4, T, P, V, 0,3,0,2> 
E0 ## E3 ## E0 ## E2; }; \ struct { _swizzle<4, T, P, V, 0,3,0,3> E0 ## E3 ## E0 ## E3; }; \ struct { _swizzle<4, T, P, V, 0,3,1,0> E0 ## E3 ## E1 ## E0; }; \ struct { _swizzle<4, T, P, V, 0,3,1,1> E0 ## E3 ## E1 ## E1; }; \ struct { _swizzle<4, T, P, V, 0,3,1,2> E0 ## E3 ## E1 ## E2; }; \ struct { _swizzle<4, T, P, V, 0,3,1,3> E0 ## E3 ## E1 ## E3; }; \ struct { _swizzle<4, T, P, V, 0,3,2,0> E0 ## E3 ## E2 ## E0; }; \ struct { _swizzle<4, T, P, V, 0,3,2,1> E0 ## E3 ## E2 ## E1; }; \ struct { _swizzle<4, T, P, V, 0,3,2,2> E0 ## E3 ## E2 ## E2; }; \ struct { _swizzle<4, T, P, V, 0,3,2,3> E0 ## E3 ## E2 ## E3; }; \ struct { _swizzle<4, T, P, V, 0,3,3,0> E0 ## E3 ## E3 ## E0; }; \ struct { _swizzle<4, T, P, V, 0,3,3,1> E0 ## E3 ## E3 ## E1; }; \ struct { _swizzle<4, T, P, V, 0,3,3,2> E0 ## E3 ## E3 ## E2; }; \ struct { _swizzle<4, T, P, V, 0,3,3,3> E0 ## E3 ## E3 ## E3; }; \ struct { _swizzle<4, T, P, V, 1,0,0,0> E1 ## E0 ## E0 ## E0; }; \ struct { _swizzle<4, T, P, V, 1,0,0,1> E1 ## E0 ## E0 ## E1; }; \ struct { _swizzle<4, T, P, V, 1,0,0,2> E1 ## E0 ## E0 ## E2; }; \ struct { _swizzle<4, T, P, V, 1,0,0,3> E1 ## E0 ## E0 ## E3; }; \ struct { _swizzle<4, T, P, V, 1,0,1,0> E1 ## E0 ## E1 ## E0; }; \ struct { _swizzle<4, T, P, V, 1,0,1,1> E1 ## E0 ## E1 ## E1; }; \ struct { _swizzle<4, T, P, V, 1,0,1,2> E1 ## E0 ## E1 ## E2; }; \ struct { _swizzle<4, T, P, V, 1,0,1,3> E1 ## E0 ## E1 ## E3; }; \ struct { _swizzle<4, T, P, V, 1,0,2,0> E1 ## E0 ## E2 ## E0; }; \ struct { _swizzle<4, T, P, V, 1,0,2,1> E1 ## E0 ## E2 ## E1; }; \ struct { _swizzle<4, T, P, V, 1,0,2,2> E1 ## E0 ## E2 ## E2; }; \ struct { _swizzle<4, T, P, V, 1,0,2,3> E1 ## E0 ## E2 ## E3; }; \ struct { _swizzle<4, T, P, V, 1,0,3,0> E1 ## E0 ## E3 ## E0; }; \ struct { _swizzle<4, T, P, V, 1,0,3,1> E1 ## E0 ## E3 ## E1; }; \ struct { _swizzle<4, T, P, V, 1,0,3,2> E1 ## E0 ## E3 ## E2; }; \ struct { _swizzle<4, T, P, V, 1,0,3,3> E1 ## E0 ## E3 ## E3; }; \ struct { _swizzle<4, T, P, V, 1,1,0,0> E1 ## E1 ## E0 ## 
E0; }; \ struct { _swizzle<4, T, P, V, 1,1,0,1> E1 ## E1 ## E0 ## E1; }; \ struct { _swizzle<4, T, P, V, 1,1,0,2> E1 ## E1 ## E0 ## E2; }; \ struct { _swizzle<4, T, P, V, 1,1,0,3> E1 ## E1 ## E0 ## E3; }; \ struct { _swizzle<4, T, P, V, 1,1,1,0> E1 ## E1 ## E1 ## E0; }; \ struct { _swizzle<4, T, P, V, 1,1,1,1> E1 ## E1 ## E1 ## E1; }; \ struct { _swizzle<4, T, P, V, 1,1,1,2> E1 ## E1 ## E1 ## E2; }; \ struct { _swizzle<4, T, P, V, 1,1,1,3> E1 ## E1 ## E1 ## E3; }; \ struct { _swizzle<4, T, P, V, 1,1,2,0> E1 ## E1 ## E2 ## E0; }; \ struct { _swizzle<4, T, P, V, 1,1,2,1> E1 ## E1 ## E2 ## E1; }; \ struct { _swizzle<4, T, P, V, 1,1,2,2> E1 ## E1 ## E2 ## E2; }; \ struct { _swizzle<4, T, P, V, 1,1,2,3> E1 ## E1 ## E2 ## E3; }; \ struct { _swizzle<4, T, P, V, 1,1,3,0> E1 ## E1 ## E3 ## E0; }; \ struct { _swizzle<4, T, P, V, 1,1,3,1> E1 ## E1 ## E3 ## E1; }; \ struct { _swizzle<4, T, P, V, 1,1,3,2> E1 ## E1 ## E3 ## E2; }; \ struct { _swizzle<4, T, P, V, 1,1,3,3> E1 ## E1 ## E3 ## E3; }; \ struct { _swizzle<4, T, P, V, 1,2,0,0> E1 ## E2 ## E0 ## E0; }; \ struct { _swizzle<4, T, P, V, 1,2,0,1> E1 ## E2 ## E0 ## E1; }; \ struct { _swizzle<4, T, P, V, 1,2,0,2> E1 ## E2 ## E0 ## E2; }; \ struct { _swizzle<4, T, P, V, 1,2,0,3> E1 ## E2 ## E0 ## E3; }; \ struct { _swizzle<4, T, P, V, 1,2,1,0> E1 ## E2 ## E1 ## E0; }; \ struct { _swizzle<4, T, P, V, 1,2,1,1> E1 ## E2 ## E1 ## E1; }; \ struct { _swizzle<4, T, P, V, 1,2,1,2> E1 ## E2 ## E1 ## E2; }; \ struct { _swizzle<4, T, P, V, 1,2,1,3> E1 ## E2 ## E1 ## E3; }; \ struct { _swizzle<4, T, P, V, 1,2,2,0> E1 ## E2 ## E2 ## E0; }; \ struct { _swizzle<4, T, P, V, 1,2,2,1> E1 ## E2 ## E2 ## E1; }; \ struct { _swizzle<4, T, P, V, 1,2,2,2> E1 ## E2 ## E2 ## E2; }; \ struct { _swizzle<4, T, P, V, 1,2,2,3> E1 ## E2 ## E2 ## E3; }; \ struct { _swizzle<4, T, P, V, 1,2,3,0> E1 ## E2 ## E3 ## E0; }; \ struct { _swizzle<4, T, P, V, 1,2,3,1> E1 ## E2 ## E3 ## E1; }; \ struct { _swizzle<4, T, P, V, 1,2,3,2> E1 ## E2 ## E3 ## E2; }; \ struct { 
_swizzle<4, T, P, V, 1,2,3,3> E1 ## E2 ## E3 ## E3; }; \ struct { _swizzle<4, T, P, V, 1,3,0,0> E1 ## E3 ## E0 ## E0; }; \ struct { _swizzle<4, T, P, V, 1,3,0,1> E1 ## E3 ## E0 ## E1; }; \ struct { _swizzle<4, T, P, V, 1,3,0,2> E1 ## E3 ## E0 ## E2; }; \ struct { _swizzle<4, T, P, V, 1,3,0,3> E1 ## E3 ## E0 ## E3; }; \ struct { _swizzle<4, T, P, V, 1,3,1,0> E1 ## E3 ## E1 ## E0; }; \ struct { _swizzle<4, T, P, V, 1,3,1,1> E1 ## E3 ## E1 ## E1; }; \ struct { _swizzle<4, T, P, V, 1,3,1,2> E1 ## E3 ## E1 ## E2; }; \ struct { _swizzle<4, T, P, V, 1,3,1,3> E1 ## E3 ## E1 ## E3; }; \ struct { _swizzle<4, T, P, V, 1,3,2,0> E1 ## E3 ## E2 ## E0; }; \ struct { _swizzle<4, T, P, V, 1,3,2,1> E1 ## E3 ## E2 ## E1; }; \ struct { _swizzle<4, T, P, V, 1,3,2,2> E1 ## E3 ## E2 ## E2; }; \ struct { _swizzle<4, T, P, V, 1,3,2,3> E1 ## E3 ## E2 ## E3; }; \ struct { _swizzle<4, T, P, V, 1,3,3,0> E1 ## E3 ## E3 ## E0; }; \ struct { _swizzle<4, T, P, V, 1,3,3,1> E1 ## E3 ## E3 ## E1; }; \ struct { _swizzle<4, T, P, V, 1,3,3,2> E1 ## E3 ## E3 ## E2; }; \ struct { _swizzle<4, T, P, V, 1,3,3,3> E1 ## E3 ## E3 ## E3; }; \ struct { _swizzle<4, T, P, V, 2,0,0,0> E2 ## E0 ## E0 ## E0; }; \ struct { _swizzle<4, T, P, V, 2,0,0,1> E2 ## E0 ## E0 ## E1; }; \ struct { _swizzle<4, T, P, V, 2,0,0,2> E2 ## E0 ## E0 ## E2; }; \ struct { _swizzle<4, T, P, V, 2,0,0,3> E2 ## E0 ## E0 ## E3; }; \ struct { _swizzle<4, T, P, V, 2,0,1,0> E2 ## E0 ## E1 ## E0; }; \ struct { _swizzle<4, T, P, V, 2,0,1,1> E2 ## E0 ## E1 ## E1; }; \ struct { _swizzle<4, T, P, V, 2,0,1,2> E2 ## E0 ## E1 ## E2; }; \ struct { _swizzle<4, T, P, V, 2,0,1,3> E2 ## E0 ## E1 ## E3; }; \ struct { _swizzle<4, T, P, V, 2,0,2,0> E2 ## E0 ## E2 ## E0; }; \ struct { _swizzle<4, T, P, V, 2,0,2,1> E2 ## E0 ## E2 ## E1; }; \ struct { _swizzle<4, T, P, V, 2,0,2,2> E2 ## E0 ## E2 ## E2; }; \ struct { _swizzle<4, T, P, V, 2,0,2,3> E2 ## E0 ## E2 ## E3; }; \ struct { _swizzle<4, T, P, V, 2,0,3,0> E2 ## E0 ## E3 ## E0; }; \ struct { _swizzle<4, T, P, 
V, 2,0,3,1> E2 ## E0 ## E3 ## E1; }; \ struct { _swizzle<4, T, P, V, 2,0,3,2> E2 ## E0 ## E3 ## E2; }; \ struct { _swizzle<4, T, P, V, 2,0,3,3> E2 ## E0 ## E3 ## E3; }; \ struct { _swizzle<4, T, P, V, 2,1,0,0> E2 ## E1 ## E0 ## E0; }; \ struct { _swizzle<4, T, P, V, 2,1,0,1> E2 ## E1 ## E0 ## E1; }; \ struct { _swizzle<4, T, P, V, 2,1,0,2> E2 ## E1 ## E0 ## E2; }; \ struct { _swizzle<4, T, P, V, 2,1,0,3> E2 ## E1 ## E0 ## E3; }; \ struct { _swizzle<4, T, P, V, 2,1,1,0> E2 ## E1 ## E1 ## E0; }; \ struct { _swizzle<4, T, P, V, 2,1,1,1> E2 ## E1 ## E1 ## E1; }; \ struct { _swizzle<4, T, P, V, 2,1,1,2> E2 ## E1 ## E1 ## E2; }; \ struct { _swizzle<4, T, P, V, 2,1,1,3> E2 ## E1 ## E1 ## E3; }; \ struct { _swizzle<4, T, P, V, 2,1,2,0> E2 ## E1 ## E2 ## E0; }; \ struct { _swizzle<4, T, P, V, 2,1,2,1> E2 ## E1 ## E2 ## E1; }; \ struct { _swizzle<4, T, P, V, 2,1,2,2> E2 ## E1 ## E2 ## E2; }; \ struct { _swizzle<4, T, P, V, 2,1,2,3> E2 ## E1 ## E2 ## E3; }; \ struct { _swizzle<4, T, P, V, 2,1,3,0> E2 ## E1 ## E3 ## E0; }; \ struct { _swizzle<4, T, P, V, 2,1,3,1> E2 ## E1 ## E3 ## E1; }; \ struct { _swizzle<4, T, P, V, 2,1,3,2> E2 ## E1 ## E3 ## E2; }; \ struct { _swizzle<4, T, P, V, 2,1,3,3> E2 ## E1 ## E3 ## E3; }; \ struct { _swizzle<4, T, P, V, 2,2,0,0> E2 ## E2 ## E0 ## E0; }; \ struct { _swizzle<4, T, P, V, 2,2,0,1> E2 ## E2 ## E0 ## E1; }; \ struct { _swizzle<4, T, P, V, 2,2,0,2> E2 ## E2 ## E0 ## E2; }; \ struct { _swizzle<4, T, P, V, 2,2,0,3> E2 ## E2 ## E0 ## E3; }; \ struct { _swizzle<4, T, P, V, 2,2,1,0> E2 ## E2 ## E1 ## E0; }; \ struct { _swizzle<4, T, P, V, 2,2,1,1> E2 ## E2 ## E1 ## E1; }; \ struct { _swizzle<4, T, P, V, 2,2,1,2> E2 ## E2 ## E1 ## E2; }; \ struct { _swizzle<4, T, P, V, 2,2,1,3> E2 ## E2 ## E1 ## E3; }; \ struct { _swizzle<4, T, P, V, 2,2,2,0> E2 ## E2 ## E2 ## E0; }; \ struct { _swizzle<4, T, P, V, 2,2,2,1> E2 ## E2 ## E2 ## E1; }; \ struct { _swizzle<4, T, P, V, 2,2,2,2> E2 ## E2 ## E2 ## E2; }; \ struct { _swizzle<4, T, P, V, 2,2,2,3> E2 ## 
E2 ## E2 ## E3; }; \ struct { _swizzle<4, T, P, V, 2,2,3,0> E2 ## E2 ## E3 ## E0; }; \ struct { _swizzle<4, T, P, V, 2,2,3,1> E2 ## E2 ## E3 ## E1; }; \ struct { _swizzle<4, T, P, V, 2,2,3,2> E2 ## E2 ## E3 ## E2; }; \ struct { _swizzle<4, T, P, V, 2,2,3,3> E2 ## E2 ## E3 ## E3; }; \ struct { _swizzle<4, T, P, V, 2,3,0,0> E2 ## E3 ## E0 ## E0; }; \ struct { _swizzle<4, T, P, V, 2,3,0,1> E2 ## E3 ## E0 ## E1; }; \ struct { _swizzle<4, T, P, V, 2,3,0,2> E2 ## E3 ## E0 ## E2; }; \ struct { _swizzle<4, T, P, V, 2,3,0,3> E2 ## E3 ## E0 ## E3; }; \ struct { _swizzle<4, T, P, V, 2,3,1,0> E2 ## E3 ## E1 ## E0; }; \ struct { _swizzle<4, T, P, V, 2,3,1,1> E2 ## E3 ## E1 ## E1; }; \ struct { _swizzle<4, T, P, V, 2,3,1,2> E2 ## E3 ## E1 ## E2; }; \ struct { _swizzle<4, T, P, V, 2,3,1,3> E2 ## E3 ## E1 ## E3; }; \ struct { _swizzle<4, T, P, V, 2,3,2,0> E2 ## E3 ## E2 ## E0; }; \ struct { _swizzle<4, T, P, V, 2,3,2,1> E2 ## E3 ## E2 ## E1; }; \ struct { _swizzle<4, T, P, V, 2,3,2,2> E2 ## E3 ## E2 ## E2; }; \ struct { _swizzle<4, T, P, V, 2,3,2,3> E2 ## E3 ## E2 ## E3; }; \ struct { _swizzle<4, T, P, V, 2,3,3,0> E2 ## E3 ## E3 ## E0; }; \ struct { _swizzle<4, T, P, V, 2,3,3,1> E2 ## E3 ## E3 ## E1; }; \ struct { _swizzle<4, T, P, V, 2,3,3,2> E2 ## E3 ## E3 ## E2; }; \ struct { _swizzle<4, T, P, V, 2,3,3,3> E2 ## E3 ## E3 ## E3; }; \ struct { _swizzle<4, T, P, V, 3,0,0,0> E3 ## E0 ## E0 ## E0; }; \ struct { _swizzle<4, T, P, V, 3,0,0,1> E3 ## E0 ## E0 ## E1; }; \ struct { _swizzle<4, T, P, V, 3,0,0,2> E3 ## E0 ## E0 ## E2; }; \ struct { _swizzle<4, T, P, V, 3,0,0,3> E3 ## E0 ## E0 ## E3; }; \ struct { _swizzle<4, T, P, V, 3,0,1,0> E3 ## E0 ## E1 ## E0; }; \ struct { _swizzle<4, T, P, V, 3,0,1,1> E3 ## E0 ## E1 ## E1; }; \ struct { _swizzle<4, T, P, V, 3,0,1,2> E3 ## E0 ## E1 ## E2; }; \ struct { _swizzle<4, T, P, V, 3,0,1,3> E3 ## E0 ## E1 ## E3; }; \ struct { _swizzle<4, T, P, V, 3,0,2,0> E3 ## E0 ## E2 ## E0; }; \ struct { _swizzle<4, T, P, V, 3,0,2,1> E3 ## E0 ## E2 ## E1; }; 
\ struct { _swizzle<4, T, P, V, 3,0,2,2> E3 ## E0 ## E2 ## E2; }; \ struct { _swizzle<4, T, P, V, 3,0,2,3> E3 ## E0 ## E2 ## E3; }; \ struct { _swizzle<4, T, P, V, 3,0,3,0> E3 ## E0 ## E3 ## E0; }; \ struct { _swizzle<4, T, P, V, 3,0,3,1> E3 ## E0 ## E3 ## E1; }; \ struct { _swizzle<4, T, P, V, 3,0,3,2> E3 ## E0 ## E3 ## E2; }; \ struct { _swizzle<4, T, P, V, 3,0,3,3> E3 ## E0 ## E3 ## E3; }; \ struct { _swizzle<4, T, P, V, 3,1,0,0> E3 ## E1 ## E0 ## E0; }; \ struct { _swizzle<4, T, P, V, 3,1,0,1> E3 ## E1 ## E0 ## E1; }; \ struct { _swizzle<4, T, P, V, 3,1,0,2> E3 ## E1 ## E0 ## E2; }; \ struct { _swizzle<4, T, P, V, 3,1,0,3> E3 ## E1 ## E0 ## E3; }; \ struct { _swizzle<4, T, P, V, 3,1,1,0> E3 ## E1 ## E1 ## E0; }; \ struct { _swizzle<4, T, P, V, 3,1,1,1> E3 ## E1 ## E1 ## E1; }; \ struct { _swizzle<4, T, P, V, 3,1,1,2> E3 ## E1 ## E1 ## E2; }; \ struct { _swizzle<4, T, P, V, 3,1,1,3> E3 ## E1 ## E1 ## E3; }; \ struct { _swizzle<4, T, P, V, 3,1,2,0> E3 ## E1 ## E2 ## E0; }; \ struct { _swizzle<4, T, P, V, 3,1,2,1> E3 ## E1 ## E2 ## E1; }; \ struct { _swizzle<4, T, P, V, 3,1,2,2> E3 ## E1 ## E2 ## E2; }; \ struct { _swizzle<4, T, P, V, 3,1,2,3> E3 ## E1 ## E2 ## E3; }; \ struct { _swizzle<4, T, P, V, 3,1,3,0> E3 ## E1 ## E3 ## E0; }; \ struct { _swizzle<4, T, P, V, 3,1,3,1> E3 ## E1 ## E3 ## E1; }; \ struct { _swizzle<4, T, P, V, 3,1,3,2> E3 ## E1 ## E3 ## E2; }; \ struct { _swizzle<4, T, P, V, 3,1,3,3> E3 ## E1 ## E3 ## E3; }; \ struct { _swizzle<4, T, P, V, 3,2,0,0> E3 ## E2 ## E0 ## E0; }; \ struct { _swizzle<4, T, P, V, 3,2,0,1> E3 ## E2 ## E0 ## E1; }; \ struct { _swizzle<4, T, P, V, 3,2,0,2> E3 ## E2 ## E0 ## E2; }; \ struct { _swizzle<4, T, P, V, 3,2,0,3> E3 ## E2 ## E0 ## E3; }; \ struct { _swizzle<4, T, P, V, 3,2,1,0> E3 ## E2 ## E1 ## E0; }; \ struct { _swizzle<4, T, P, V, 3,2,1,1> E3 ## E2 ## E1 ## E1; }; \ struct { _swizzle<4, T, P, V, 3,2,1,2> E3 ## E2 ## E1 ## E2; }; \ struct { _swizzle<4, T, P, V, 3,2,1,3> E3 ## E2 ## E1 ## E3; }; \ struct { 
_swizzle<4, T, P, V, 3,2,2,0> E3 ## E2 ## E2 ## E0; }; \ struct { _swizzle<4, T, P, V, 3,2,2,1> E3 ## E2 ## E2 ## E1; }; \ struct { _swizzle<4, T, P, V, 3,2,2,2> E3 ## E2 ## E2 ## E2; }; \ struct { _swizzle<4, T, P, V, 3,2,2,3> E3 ## E2 ## E2 ## E3; }; \ struct { _swizzle<4, T, P, V, 3,2,3,0> E3 ## E2 ## E3 ## E0; }; \ struct { _swizzle<4, T, P, V, 3,2,3,1> E3 ## E2 ## E3 ## E1; }; \ struct { _swizzle<4, T, P, V, 3,2,3,2> E3 ## E2 ## E3 ## E2; }; \ struct { _swizzle<4, T, P, V, 3,2,3,3> E3 ## E2 ## E3 ## E3; }; \ struct { _swizzle<4, T, P, V, 3,3,0,0> E3 ## E3 ## E0 ## E0; }; \ struct { _swizzle<4, T, P, V, 3,3,0,1> E3 ## E3 ## E0 ## E1; }; \ struct { _swizzle<4, T, P, V, 3,3,0,2> E3 ## E3 ## E0 ## E2; }; \ struct { _swizzle<4, T, P, V, 3,3,0,3> E3 ## E3 ## E0 ## E3; }; \ struct { _swizzle<4, T, P, V, 3,3,1,0> E3 ## E3 ## E1 ## E0; }; \ struct { _swizzle<4, T, P, V, 3,3,1,1> E3 ## E3 ## E1 ## E1; }; \ struct { _swizzle<4, T, P, V, 3,3,1,2> E3 ## E3 ## E1 ## E2; }; \ struct { _swizzle<4, T, P, V, 3,3,1,3> E3 ## E3 ## E1 ## E3; }; \ struct { _swizzle<4, T, P, V, 3,3,2,0> E3 ## E3 ## E2 ## E0; }; \ struct { _swizzle<4, T, P, V, 3,3,2,1> E3 ## E3 ## E2 ## E1; }; \ struct { _swizzle<4, T, P, V, 3,3,2,2> E3 ## E3 ## E2 ## E2; }; \ struct { _swizzle<4, T, P, V, 3,3,2,3> E3 ## E3 ## E2 ## E3; }; \ struct { _swizzle<4, T, P, V, 3,3,3,0> E3 ## E3 ## E3 ## E0; }; \ struct { _swizzle<4, T, P, V, 3,3,3,1> E3 ## E3 ## E3 ## E1; }; \ struct { _swizzle<4, T, P, V, 3,3,3,2> E3 ## E3 ## E3 ## E2; }; \ struct { _swizzle<4, T, P, V, 3,3,3,3> E3 ## E3 ## E3 ## E3; };
#endif//glm_core_swizzle
Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/libs/glm/detail/_swizzle_func.hpp
///////////////////////////////////////////////////////////////////////////////////
/// OpenGL Mathematics (glm.g-truc.net)
///
/// Copyright (c) 2005 - 2014 G-Truc Creation (www.g-truc.net)
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to deal
/// in the Software without restriction, including without limitation the rights
/// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
/// copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in
/// all copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
/// THE SOFTWARE.
///
/// @ref core
/// @file glm/core/_swizzle_func.hpp
/// @date 2011-10-16 / 2011-10-16
/// @author Christophe Riccio
///////////////////////////////////////////////////////////////////////////////////

#ifndef glm_core_swizzle_func
#define glm_core_swizzle_func

#define GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, CONST, A, B) \
	SWIZZLED_TYPE A ## B() CONST \
	{ \
		return SWIZZLED_TYPE(this->A, this->B); \
	}

#define GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, CONST, A, B, C) \
	SWIZZLED_TYPE A ## B ## C() CONST \
	{ \
		return SWIZZLED_TYPE(this->A, this->B, this->C); \
	}

#define GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, CONST, A, B, C, D) \
	SWIZZLED_TYPE A ## B ## C ## D() CONST \
	{ \
		return SWIZZLED_TYPE(this->A, this->B, this->C, this->D); \
	}

#define GLM_SWIZZLE_GEN_VEC2_ENTRY_DEF(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, CONST, A, B) \
	template <typename TMPL_TYPE, precision PRECISION> \
	SWIZZLED_TYPE CLASS_TYPE<TMPL_TYPE, PRECISION>::A ## B() CONST \
	{ \
		return SWIZZLED_TYPE(this->A, this->B); \
	}

#define GLM_SWIZZLE_GEN_VEC3_ENTRY_DEF(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, CONST, A, B, C) \
	template <typename TMPL_TYPE, precision PRECISION> \
	SWIZZLED_TYPE CLASS_TYPE<TMPL_TYPE, PRECISION>::A ## B ## C() CONST \
	{ \
		return SWIZZLED_TYPE(this->A, this->B, this->C); \
	}

#define GLM_SWIZZLE_GEN_VEC4_ENTRY_DEF(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, CONST, A, B, C, D) \
	template <typename TMPL_TYPE, precision PRECISION> \
	SWIZZLED_TYPE CLASS_TYPE<TMPL_TYPE, PRECISION>::A ## B ## C ## D() CONST \
	{ \
		return SWIZZLED_TYPE(this->A, this->B, this->C, this->D); \
	}

#define GLM_MUTABLE

#define GLM_SWIZZLE_GEN_REF2_FROM_VEC2_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, A, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, A, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, B, A)

#define GLM_SWIZZLE_GEN_REF_FROM_VEC2(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE) \
	GLM_SWIZZLE_GEN_REF2_FROM_VEC2_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE,
SWIZZLED_VEC2_TYPE, x, y) \ GLM_SWIZZLE_GEN_REF2_FROM_VEC2_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, r, g) \ GLM_SWIZZLE_GEN_REF2_FROM_VEC2_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, s, t) //GLM_SWIZZLE_GEN_REF_FROM_VEC2(valType, detail::vec2, detail::ref2) #define GLM_SWIZZLE_GEN_REF2_FROM_VEC3_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, A, B, C) \ GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, A, B) \ GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, A, C) \ GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, B, A) \ GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, B, C) \ GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, C, A) \ GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, C, B) #define GLM_SWIZZLE_GEN_REF3_FROM_VEC3_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, A, B, C) \ GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, A, B, C) \ GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, A, C, B) \ GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, B, A, C) \ GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, B, C, A) \ GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, C, A, B) \ GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, C, B, A) #define GLM_SWIZZLE_GEN_REF_FROM_VEC3_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, A, B, C) \ GLM_SWIZZLE_GEN_REF3_FROM_VEC3_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC3_TYPE, A, B, C) \ GLM_SWIZZLE_GEN_REF2_FROM_VEC3_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, 
SWIZZLED_VEC2_TYPE, A, B, C) #define GLM_SWIZZLE_GEN_REF_FROM_VEC3(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE) \ GLM_SWIZZLE_GEN_REF_FROM_VEC3_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, x, y, z) \ GLM_SWIZZLE_GEN_REF_FROM_VEC3_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, r, g, b) \ GLM_SWIZZLE_GEN_REF_FROM_VEC3_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, s, t, p) //GLM_SWIZZLE_GEN_REF_FROM_VEC3(valType, detail::vec3, detail::ref2, detail::ref3) #define GLM_SWIZZLE_GEN_REF2_FROM_VEC4_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, A, B, C, D) \ GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, A, B) \ GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, A, C) \ GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, A, D) \ GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, B, A) \ GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, B, C) \ GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, B, D) \ GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, C, A) \ GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, C, B) \ GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, C, D) \ GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, D, A) \ GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, D, B) \ GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, GLM_MUTABLE, D, C) #define GLM_SWIZZLE_GEN_REF3_FROM_VEC4_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, A, B, C, D) \ 
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , A, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , A, B, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , A, C, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , A, C, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , A, D, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , A, D, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , B, A, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , B, A, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , B, C, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , B, C, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , B, D, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , B, D, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , C, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , C, A, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , C, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , C, B, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , C, D, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , C, D, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , D, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , D, A, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , D, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , D, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , D, C, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , D, C, B)

#define GLM_SWIZZLE_GEN_REF4_FROM_VEC4_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, A, B, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , A, C, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , A, C, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , A, D, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , A, D, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , A, B, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , A, B, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , B, C, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , B, C, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , B, D, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , B, D, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , B, A, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , B, A, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , C, B, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , C, B, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , C, D, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , C, D, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , C, A, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , C, A, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , D, C, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , D, C, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , D, A, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , D, A, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , D, B, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, , D, B, C, A)

#define GLM_SWIZZLE_GEN_REF_FROM_VEC4_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE, A, B, C, D) \
	GLM_SWIZZLE_GEN_REF2_FROM_VEC4_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, A, B, C, D) \
	GLM_SWIZZLE_GEN_REF3_FROM_VEC4_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC3_TYPE, A, B, C, D) \
	GLM_SWIZZLE_GEN_REF4_FROM_VEC4_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC4_TYPE, A, B, C, D)

#define GLM_SWIZZLE_GEN_REF_FROM_VEC4(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE) \
	GLM_SWIZZLE_GEN_REF_FROM_VEC4_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE, x, y, z, w) \
	GLM_SWIZZLE_GEN_REF_FROM_VEC4_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE, r, g, b, a) \
	GLM_SWIZZLE_GEN_REF_FROM_VEC4_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE, s, t, p, q)

//GLM_SWIZZLE_GEN_REF_FROM_VEC4(valType, detail::vec4, detail::ref2, detail::ref3, detail::ref4)

#define GLM_SWIZZLE_GEN_VEC2_FROM_VEC2_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, A, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B)

#define GLM_SWIZZLE_GEN_VEC3_FROM_VEC2_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, B)

#define GLM_SWIZZLE_GEN_VEC4_FROM_VEC2_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, B, B)

#define GLM_SWIZZLE_GEN_VEC_FROM_VEC2_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE, A, B) \
	GLM_SWIZZLE_GEN_VEC2_FROM_VEC2_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, A, B) \
	GLM_SWIZZLE_GEN_VEC3_FROM_VEC2_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC3_TYPE, A, B) \
	GLM_SWIZZLE_GEN_VEC4_FROM_VEC2_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC4_TYPE, A, B)

#define GLM_SWIZZLE_GEN_VEC_FROM_VEC2(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE) \
	GLM_SWIZZLE_GEN_VEC_FROM_VEC2_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE, x, y) \
	GLM_SWIZZLE_GEN_VEC_FROM_VEC2_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE, r, g) \
	GLM_SWIZZLE_GEN_VEC_FROM_VEC2_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE, s, t)

//GLM_SWIZZLE_GEN_VEC_FROM_VEC2(valType, detail::vec2, detail::vec2, detail::vec3, detail::vec4)

#define GLM_SWIZZLE_GEN_VEC2_FROM_VEC3_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, A, B, C) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C)

#define GLM_SWIZZLE_GEN_VEC3_FROM_VEC3_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, A, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, C)

#define GLM_SWIZZLE_GEN_VEC4_FROM_VEC3_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, A, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, C, C)

#define GLM_SWIZZLE_GEN_VEC_FROM_VEC3_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE, A, B, C) \
	GLM_SWIZZLE_GEN_VEC2_FROM_VEC3_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, A, B, C) \
	GLM_SWIZZLE_GEN_VEC3_FROM_VEC3_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC3_TYPE, A, B, C) \
	GLM_SWIZZLE_GEN_VEC4_FROM_VEC3_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC4_TYPE, A, B, C)

#define GLM_SWIZZLE_GEN_VEC_FROM_VEC3(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE) \
	GLM_SWIZZLE_GEN_VEC_FROM_VEC3_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE, x, y, z) \
	GLM_SWIZZLE_GEN_VEC_FROM_VEC3_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE, r, g, b) \
	GLM_SWIZZLE_GEN_VEC_FROM_VEC3_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE, s, t, p)

//GLM_SWIZZLE_GEN_VEC_FROM_VEC3(valType, detail::vec3, detail::vec2, detail::vec3, detail::vec4)

#define GLM_SWIZZLE_GEN_VEC2_FROM_VEC4_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, A, B, C, D) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D)

#define GLM_SWIZZLE_GEN_VEC3_FROM_VEC4_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, A, B, C, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, D)

#define GLM_SWIZZLE_GEN_VEC4_FROM_VEC4_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, A, B, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, A, D, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, B, D, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, B, B) \
GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, B, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, B, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, C, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, C, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, C, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, C, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, D, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, D, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, D, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, C, D, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D, A, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D, A, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D, A, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D, A, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D, B, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D, B, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D, B, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D, B, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D, C, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D, C, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, 
CLASS_TYPE, SWIZZLED_TYPE, const, A, D, C, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D, C, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D, D, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D, D, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D, D, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, A, D, D, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, A, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, A, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, A, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, A, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, B, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, B, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, B, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, B, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, C, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, C, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, C, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, C, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, D, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, D, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, D, C) \ 
GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, A, D, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, A, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, A, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, A, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, A, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, B, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, B, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, B, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, B, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, C, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, C, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, C, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, C, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, D, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, D, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, D, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, B, D, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, A, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, A, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, A, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, 
CLASS_TYPE, SWIZZLED_TYPE, const, B, C, A, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, B, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, B, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, B, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, B, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, C, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, C, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, C, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, C, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, D, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, D, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, D, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, C, D, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, A, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, A, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, A, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, A, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, B, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, B, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, B, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, B, D) \ 
GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, C, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, C, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, C, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, C, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, D, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, D, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, D, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, B, D, D, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, A, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, A, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, A, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, A, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, B, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, B, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, B, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, B, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, C, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, C, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, C, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, C, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, 
CLASS_TYPE, SWIZZLED_TYPE, const, C, A, D, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, D, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, D, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, A, D, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, A, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, A, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, A, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, A, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, B, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, B, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, B, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, B, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, C, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, C, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, C, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, C, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, D, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, D, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, D, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, B, D, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, A, A) \ 
GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, A, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, A, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, A, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, B, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, B, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, B, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, B, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, C, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, C, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, C, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, C, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, D, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, D, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, D, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, C, D, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D, A, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D, A, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D, A, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D, A, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D, B, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, 
CLASS_TYPE, SWIZZLED_TYPE, const, C, D, B, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D, B, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D, B, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D, C, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D, C, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D, C, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D, C, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D, D, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D, D, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D, D, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, C, D, D, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, A, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, A, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, A, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, A, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, B, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, B, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, B, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, B, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, C, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, C, B) \ 
GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, C, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, C, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, D, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, D, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, D, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, A, D, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B, A, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B, A, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B, A, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B, A, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B, B, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B, B, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B, B, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B, B, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B, C, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B, C, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B, C, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B, C, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B, D, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B, D, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, 
CLASS_TYPE, SWIZZLED_TYPE, const, D, B, D, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, B, D, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, A, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, A, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, A, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, A, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, B, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, B, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, B, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, B, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, C, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, C, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, C, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, C, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, D, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, D, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, D, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, C, D, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, A, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, A, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, A, C) \ 
GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, A, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, B, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, B, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, B, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, B, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, C, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, C, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, C, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, C, D) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, D, A) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, D, B) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, D, C) \ GLM_SWIZZLE_GEN_VEC4_ENTRY(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_TYPE, const, D, D, D, D) #define GLM_SWIZZLE_GEN_VEC_FROM_VEC4_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE, A, B, C, D) \ GLM_SWIZZLE_GEN_VEC2_FROM_VEC4_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, A, B, C, D) \ GLM_SWIZZLE_GEN_VEC3_FROM_VEC4_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC3_TYPE, A, B, C, D) \ GLM_SWIZZLE_GEN_VEC4_FROM_VEC4_SWIZZLE(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC4_TYPE, A, B, C, D) #define GLM_SWIZZLE_GEN_VEC_FROM_VEC4(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE) \ GLM_SWIZZLE_GEN_VEC_FROM_VEC4_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE, x, y, 
z, w) \ GLM_SWIZZLE_GEN_VEC_FROM_VEC4_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE, r, g, b, a) \ GLM_SWIZZLE_GEN_VEC_FROM_VEC4_COMP(TMPL_TYPE, PRECISION, CLASS_TYPE, SWIZZLED_VEC2_TYPE, SWIZZLED_VEC3_TYPE, SWIZZLED_VEC4_TYPE, s, t, p, q) //GLM_SWIZZLE_GEN_VEC_FROM_VEC4(valType, detail::vec4, detail::vec2, detail::vec3, detail::vec4) #endif//glm_core_swizzle_func Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/libs/glm/detail/_vectorize.hpp000066400000000000000000000137341270147354000265660ustar00rootroot00000000000000/////////////////////////////////////////////////////////////////////////////////// /// OpenGL Mathematics (glm.g-truc.net) /// /// Copyright (c) 2005 - 2014 G-Truc Creation (www.g-truc.net) /// Permission is hereby granted, free of charge, to any person obtaining a copy /// of this software and associated documentation files (the "Software"), to deal /// in the Software without restriction, including without limitation the rights /// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell /// copies of the Software, and to permit persons to whom the Software is /// furnished to do so, subject to the following conditions: /// /// The above copyright notice and this permission notice shall be included in /// all copies or substantial portions of the Software. /// /// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR /// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, /// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE /// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER /// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, /// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN /// THE SOFTWARE. 
///
/// @ref core
/// @file glm/core/_vectorize.hpp
/// @date 2011-10-14 / 2011-10-14
/// @author Christophe Riccio
///////////////////////////////////////////////////////////////////////////////////

#ifndef GLM_CORE_DETAIL_INCLUDED
#define GLM_CORE_DETAIL_INCLUDED

#include "type_vec1.hpp"
#include "type_vec2.hpp"
#include "type_vec3.hpp"
#include "type_vec4.hpp"

#define VECTORIZE1_VEC(func) \
    template <typename T, precision P> \
    GLM_FUNC_QUALIFIER detail::tvec1<T, P> func( \
        detail::tvec1<T, P> const & v) \
    { \
        return detail::tvec1<T, P>( \
            func(v.x)); \
    }

#define VECTORIZE2_VEC(func) \
    template <typename T, precision P> \
    GLM_FUNC_QUALIFIER detail::tvec2<T, P> func( \
        detail::tvec2<T, P> const & v) \
    { \
        return detail::tvec2<T, P>( \
            func(v.x), \
            func(v.y)); \
    }

#define VECTORIZE3_VEC(func) \
    template <typename T, precision P> \
    GLM_FUNC_QUALIFIER detail::tvec3<T, P> func( \
        detail::tvec3<T, P> const & v) \
    { \
        return detail::tvec3<T, P>( \
            func(v.x), \
            func(v.y), \
            func(v.z)); \
    }

#define VECTORIZE4_VEC(func) \
    template <typename T, precision P> \
    GLM_FUNC_QUALIFIER detail::tvec4<T, P> func( \
        detail::tvec4<T, P> const & v) \
    { \
        return detail::tvec4<T, P>( \
            func(v.x), \
            func(v.y), \
            func(v.z), \
            func(v.w)); \
    }

#define VECTORIZE_VEC(func) \
    VECTORIZE1_VEC(func) \
    VECTORIZE2_VEC(func) \
    VECTORIZE3_VEC(func) \
    VECTORIZE4_VEC(func)

#define VECTORIZE1_VEC_SCA(func) \
    template <typename T, precision P> \
    GLM_FUNC_QUALIFIER detail::tvec1<T, P> func \
    ( \
        detail::tvec1<T, P> const & x, \
        T const & y \
    ) \
    { \
        return detail::tvec1<T, P>( \
            func(x.x, y)); \
    }

#define VECTORIZE2_VEC_SCA(func) \
    template <typename T, precision P> \
    GLM_FUNC_QUALIFIER detail::tvec2<T, P> func \
    ( \
        detail::tvec2<T, P> const & x, \
        T const & y \
    ) \
    { \
        return detail::tvec2<T, P>( \
            func(x.x, y), \
            func(x.y, y)); \
    }

#define VECTORIZE3_VEC_SCA(func) \
    template <typename T, precision P> \
    GLM_FUNC_QUALIFIER detail::tvec3<T, P> func \
    ( \
        detail::tvec3<T, P> const & x, \
        T const & y \
    ) \
    { \
        return detail::tvec3<T, P>( \
            func(x.x, y), \
            func(x.y, y), \
            func(x.z, y)); \
    }

#define VECTORIZE4_VEC_SCA(func) \
    template <typename T, precision P> \
    GLM_FUNC_QUALIFIER detail::tvec4<T, P> func \
    ( \
        detail::tvec4<T, P> const & x, \
        T const & y \
    ) \
    { \
        return detail::tvec4<T, P>( \
            func(x.x, y), \
            func(x.y, y), \
            func(x.z, y), \
            func(x.w, y)); \
    }

#define VECTORIZE_VEC_SCA(func) \
    VECTORIZE1_VEC_SCA(func) \
    VECTORIZE2_VEC_SCA(func) \
    VECTORIZE3_VEC_SCA(func) \
    VECTORIZE4_VEC_SCA(func)

#define VECTORIZE2_VEC_VEC(func) \
    template <typename T, precision P> \
    GLM_FUNC_QUALIFIER detail::tvec2<T, P> func \
    ( \
        detail::tvec2<T, P> const & x, \
        detail::tvec2<T, P> const & y \
    ) \
    { \
        return detail::tvec2<T, P>( \
            func(x.x, y.x), \
            func(x.y, y.y)); \
    }

#define VECTORIZE3_VEC_VEC(func) \
    template <typename T, precision P> \
    GLM_FUNC_QUALIFIER detail::tvec3<T, P> func \
    ( \
        detail::tvec3<T, P> const & x, \
        detail::tvec3<T, P> const & y \
    ) \
    { \
        return detail::tvec3<T, P>( \
            func(x.x, y.x), \
            func(x.y, y.y), \
            func(x.z, y.z)); \
    }

#define VECTORIZE4_VEC_VEC(func) \
    template <typename T, precision P> \
    GLM_FUNC_QUALIFIER detail::tvec4<T, P> func \
    ( \
        detail::tvec4<T, P> const & x, \
        detail::tvec4<T, P> const & y \
    ) \
    { \
        return detail::tvec4<T, P>( \
            func(x.x, y.x), \
            func(x.y, y.y), \
            func(x.z, y.z), \
            func(x.w, y.w)); \
    }

#define VECTORIZE_VEC_VEC(func) \
    VECTORIZE2_VEC_VEC(func) \
    VECTORIZE3_VEC_VEC(func) \
    VECTORIZE4_VEC_VEC(func)

namespace glm{
namespace detail
{
    template <bool C>
    struct If
    {
        template <typename F, typename T>
        static GLM_FUNC_QUALIFIER T apply(F functor, const T& val)
        {
            return functor(val);
        }
    };

    template <>
    struct If<false>
    {
        template <typename F, typename T>
        static GLM_FUNC_QUALIFIER T apply(F, const T& val)
        {
            return val;
        }
    };
}//namespace detail
}//namespace glm

#endif//GLM_CORE_DETAIL_INCLUDED

Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/libs/glm/detail/dummy.cpp

///////////////////////////////////////////////////////////////////////////////////
/// OpenGL Mathematics (glm.g-truc.net)
///
/// Copyright (c) 2005 - 2014 G-Truc Creation (www.g-truc.net)
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to deal
/// in the Software without restriction, including without limitation the rights
/// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
/// copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in
/// all copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
/// THE SOFTWARE.
///
/// @ref core
/// @file glm/core/dummy.cpp
/// @date 2011-01-19 / 2011-06-15
/// @author Christophe Riccio
///
/// GLM is a header only library. There is nothing to compile.
/// dummy.cpp exists only as a workaround for the CMake build.
///////////////////////////////////////////////////////////////////////////////////

#define GLM_FORCE_RADIANS
#define GLM_MESSAGES
#include "../glm.hpp"
#include <limits>

struct material
{
    glm::vec4 emission; // Ecm
    glm::vec4 ambient; // Acm
    glm::vec4 diffuse; // Dcm
    glm::vec4 specular; // Scm
    float shininess; // Srm
};

struct light
{
    glm::vec4 ambient; // Acli
    glm::vec4 diffuse; // Dcli
    glm::vec4 specular; // Scli
    glm::vec4 position; // Ppli
    glm::vec4 halfVector; // Derived: Hi
    glm::vec3 spotDirection; // Sdli
    float spotExponent; // Srli
    float spotCutoff; // Crli
    // (range: [0.0,90.0], 180.0)
    float spotCosCutoff; // Derived: cos(Crli)
    // (range: [1.0,0.0],-1.0)
    float constantAttenuation; // K0
    float linearAttenuation; // K1
    float quadraticAttenuation;// K2
};

// Sample 1
#include <glm/vec3.hpp> // glm::vec3
#include <glm/geometric.hpp> // glm::cross, glm::normalize

glm::vec3 computeNormal
(
    glm::vec3 const & a,
    glm::vec3 const & b,
    glm::vec3 const & c
)
{
    return glm::normalize(glm::cross(c - a, b - a));
}

typedef unsigned int GLuint;
#define GL_FALSE 0
void glUniformMatrix4fv(GLuint, int, int, float*){}

// Sample 2
#include <glm/vec3.hpp> // glm::vec3
#include <glm/vec4.hpp> // glm::vec4, glm::ivec4
#include <glm/mat4x4.hpp> // glm::mat4
#include <glm/gtc/matrix_transform.hpp> // glm::translate, glm::rotate, glm::scale, glm::perspective
#include <glm/gtc/type_ptr.hpp> // glm::value_ptr

void func(GLuint LocationMVP, float Translate, glm::vec2 const & Rotate)
{
    glm::mat4 Projection = glm::perspective(45.0f, 4.0f / 3.0f, 0.1f, 100.f);
    glm::mat4 ViewTranslate = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -Translate));
    glm::mat4 ViewRotateX = glm::rotate(ViewTranslate, Rotate.y, glm::vec3(-1.0f, 0.0f, 0.0f));
    glm::mat4 View = glm::rotate(ViewRotateX, Rotate.x, glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 Model = glm::scale(glm::mat4(1.0f), glm::vec3(0.5f));
    glm::mat4 MVP = Projection * View * Model;
    glUniformMatrix4fv(LocationMVP, 1, GL_FALSE, glm::value_ptr(MVP));
}

// Sample 3
#include <glm/vec2.hpp> // glm::vec2
#include <glm/packing.hpp> // glm::packUnorm2x16
#include <glm/integer.hpp> // glm::uint
#include <glm/gtc/type_precision.hpp> // glm::i8vec2, glm::i32vec2

std::size_t const VertexCount = 4;

// Float quad geometry
std::size_t const PositionSizeF32 = VertexCount * sizeof(glm::vec2);
glm::vec2 const PositionDataF32[VertexCount] =
{
    glm::vec2(-1.0f,-1.0f),
    glm::vec2( 1.0f,-1.0f),
    glm::vec2( 1.0f, 1.0f),
    glm::vec2(-1.0f, 1.0f)
};

// Half-float quad geometry
std::size_t const PositionSizeF16 = VertexCount * sizeof(glm::uint);
glm::uint const PositionDataF16[VertexCount] =
{
    glm::uint(glm::packUnorm2x16(glm::vec2(-1.0f, -1.0f))),
    glm::uint(glm::packUnorm2x16(glm::vec2( 1.0f, -1.0f))),
    glm::uint(glm::packUnorm2x16(glm::vec2( 1.0f, 1.0f))),
    glm::uint(glm::packUnorm2x16(glm::vec2(-1.0f, 1.0f)))
};

// 8 bits signed integer quad geometry
std::size_t const PositionSizeI8 = VertexCount * sizeof(glm::i8vec2);
glm::i8vec2 const PositionDataI8[VertexCount] =
{
    glm::i8vec2(-1,-1),
    glm::i8vec2( 1,-1),
    glm::i8vec2( 1, 1),
    glm::i8vec2(-1, 1)
};

// 32 bits signed integer quad geometry
std::size_t const PositionSizeI32 = VertexCount * sizeof(glm::i32vec2);
glm::i32vec2 const PositionDataI32[VertexCount] =
{
    glm::i32vec2 (-1,-1),
glm::i32vec2 ( 1,-1), glm::i32vec2 ( 1, 1), glm::i32vec2 (-1, 1) }; struct intersection { glm::vec4 position; glm::vec3 normal; }; /* // Sample 4 #include // glm::vec3 #include // glm::normalize, glm::dot, glm::reflect #include // glm::pow #include // glm::vecRand3 glm::vec3 lighting ( intersection const & Intersection, material const & Material, light const & Light, glm::vec3 const & View ) { glm::vec3 Color(0.0f); glm::vec3 LightVertor(glm::normalize( Light.position - Intersection.position + glm::vecRand3(0.0f, Light.inaccuracy)); if(!shadow(Intersection.position, Light.position, LightVertor)) { float Diffuse = glm::dot(Intersection.normal, LightVector); if(Diffuse <= 0.0f) return Color; if(Material.isDiffuse()) Color += Light.color() * Material.diffuse * Diffuse; if(Material.isSpecular()) { glm::vec3 Reflect(glm::reflect( glm::normalize(-LightVector), glm::normalize(Intersection.normal))); float Dot = glm::dot(Reflect, View); float Base = Dot > 0.0f ? Dot : 0.0f; float Specular = glm::pow(Base, Material.exponent); Color += Material.specular * Specular; } } return Color; } */ int main() { return 0; } Vulkan-LoaderAndValidationLayers-sdk-1.0.8.0/libs/glm/detail/func_common.hpp000066400000000000000000000536601270147354000267220ustar00rootroot00000000000000/////////////////////////////////////////////////////////////////////////////////// /// OpenGL Mathematics (glm.g-truc.net) /// /// Copyright (c) 2005 - 2014 G-Truc Creation (www.g-truc.net) /// Permission is hereby granted, free of charge, to any person obtaining a copy /// of this software and associated documentation files (the "Software"), to deal /// in the Software without restriction, including without limitation the rights /// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell /// copies of the Software, and to permit persons to whom the Software is /// furnished to do so, subject to the following conditions: /// /// The above copyright notice and this permission notice shall be 
/// included in
/// all copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
/// THE SOFTWARE.
///
/// @ref core
/// @file glm/core/func_common.hpp
/// @date 2008-03-08 / 2010-01-26
/// @author Christophe Riccio
///
/// @see GLSL 4.20.8 specification, section 8.3 Common Functions
///
/// @defgroup core_func_common Common functions
/// @ingroup core
///
/// These all operate component-wise. The description is per component.
///////////////////////////////////////////////////////////////////////////////////

#ifndef GLM_FUNC_COMMON_INCLUDED
#define GLM_FUNC_COMMON_INCLUDED

#include "setup.hpp"
#include "precision.hpp"
#include "type_int.hpp"
#include "_fixes.hpp"

namespace glm
{
	/// @addtogroup core_func_common
	/// @{

	/// Returns x if x >= 0; otherwise, it returns -x.
	///
	/// @tparam genType floating-point or signed integer; scalar or vector types.
	///
	/// @see GLSL abs man page
	/// @see GLSL 4.20.8 specification, section 8.3 Common Functions
	template <typename genType>
	GLM_FUNC_DECL genType abs(genType const & x);

	/// Returns 1.0 if x > 0, 0.0 if x == 0, or -1.0 if x < 0.
	///
	/// @tparam genType Floating-point or signed integer; scalar or vector types.
	///
	/// @see GLSL sign man page
	/// @see GLSL 4.20.8 specification, section 8.3 Common Functions
	template <typename genType>
	GLM_FUNC_DECL genType sign(genType const & x);

	/// Returns a value equal to the nearest integer that is less than or equal to x.
	///
	/// @tparam genType Floating-point scalar or vector types.
	///
	/// @see GLSL floor man page
	/// @see GLSL 4.20.8 specification, section 8.3 Common Functions
	template <typename genType>
	GLM_FUNC_DECL genType floor(genType const & x);

	/// Returns a value equal to the nearest integer to x
	/// whose absolute value is not larger than the absolute value of x.
	///
	/// @tparam genType Floating-point scalar or vector types.
	///
	/// @see GLSL trunc man page
	/// @see GLSL 4.20.8 specification, section 8.3 Common Functions
	template <typename genType>
	GLM_FUNC_DECL genType trunc(genType const & x);

	/// Returns a value equal to the nearest integer to x.
	/// The fraction 0.5 will round in a direction chosen by the
	/// implementation, presumably the direction that is fastest.
	/// This includes the possibility that round(x) returns the
	/// same value as roundEven(x) for all values of x.
	///
	/// @tparam genType Floating-point scalar or vector types.
	///
	/// @see GLSL round man page
	/// @see GLSL 4.20.8 specification, section 8.3 Common Functions
	template <typename genType>
	GLM_FUNC_DECL genType round(genType const & x);

	/// Returns a value equal to the nearest integer to x.
	/// A fractional part of 0.5 will round toward the nearest even
	/// integer. (Both 3.5 and 4.5 for x will return 4.0.)
	///
	/// @tparam genType Floating-point scalar or vector types.
	///
	/// @see GLSL roundEven man page
	/// @see GLSL 4.20.8 specification, section 8.3 Common Functions
	/// @see New round to even technique
	template <typename genType>
	GLM_FUNC_DECL genType roundEven(genType const & x);

	/// Returns a value equal to the nearest integer
	/// that is greater than or equal to x.
	///
	/// @tparam genType Floating-point scalar or vector types.
	///
	/// @see GLSL ceil man page
	/// @see GLSL 4.20.8 specification, section 8.3 Common Functions
	template <typename genType>
	GLM_FUNC_DECL genType ceil(genType const & x);

	/// Return x - floor(x).
	///
	/// @tparam genType Floating-point scalar or vector types.
	///
	/// @see GLSL fract man page
	/// @see GLSL 4.20.8 specification, section 8.3 Common Functions
	template <typename genType>
	GLM_FUNC_DECL genType fract(genType const & x);

	/// Modulus. Returns x - y * floor(x / y)
	/// for each component in x using the floating point value y.
	///
	/// @tparam genType Floating-point scalar or vector types.
	///
	/// @see GLSL mod man page
	/// @see GLSL 4.20.8 specification, section 8.3 Common Functions
	template <typename genType>
	GLM_FUNC_DECL genType mod(
		genType const & x,
		genType const & y);

	/// Modulus. Returns x - y * floor(x / y)
	/// for each component in x using the floating point value y.
	///
	/// @tparam genType Floating-point scalar or vector types.
	///
	/// @see GLSL mod man page
	/// @see GLSL 4.20.8 specification, section 8.3 Common Functions
	template <typename genType>
	GLM_FUNC_DECL genType mod(
		genType const & x,
		typename genType::value_type const & y);

	/// Returns the fractional part of x and sets i to the integer
	/// part (as a whole number floating point value). Both the
	/// return value and the output parameter will have the same
	/// sign as x.
	///
	/// @tparam genType Floating-point scalar or vector types.
	///
	/// @see GLSL modf man page
	/// @see GLSL 4.20.8 specification, section 8.3 Common Functions
	template <typename genType>
	GLM_FUNC_DECL genType modf(
		genType const & x,
		genType & i);

	/// Returns y if y < x; otherwise, it returns x.
	///
	/// @tparam genType Floating-point or integer; scalar or vector types.
	///
	/// @see GLSL min man page
	/// @see GLSL 4.20.8 specification, section 8.3 Common Functions
	template <typename genType>
	GLM_FUNC_DECL genType min(
		genType const & x,
		genType const & y);

	template <typename genType>
	GLM_FUNC_DECL genType min(
		genType const & x,
		typename genType::value_type const & y);

	/// Returns y if x < y; otherwise, it returns x.
	///
	/// @tparam genType Floating-point or integer; scalar or vector types.
	///
	/// @see GLSL max man page
	/// @see GLSL 4.20.8 specification, section 8.3 Common Functions
	template <typename genType>
	GLM_FUNC_DECL genType max(
		genType const & x,
		genType const & y);

	template <typename genType>
	GLM_FUNC_DECL genType max(
		genType const & x,
		typename genType::value_type const & y);

	/// Returns min(max(x, minVal), maxVal) for each component in x
	/// using the floating-point values minVal and maxVal.
	///
	/// @tparam genType Floating-point or integer; scalar or vector types.
	///
	/// @see GLSL clamp man page
	/// @see GLSL 4.20.8 specification, section 8.3 Common Functions
	template <typename genType>
	GLM_FUNC_DECL genType clamp(
		genType const & x,
		genType const & minVal,
		genType const & maxVal);

	template <typename genType>
	GLM_FUNC_DECL genType clamp(
		genType const & x,
		typename genType::value_type const & minVal,
		typename genType::value_type const & maxVal);

	/// If genTypeU is a floating scalar or vector:
	/// Returns x * (1.0 - a) + y * a, i.e., the linear blend of
	/// x and y using the floating-point value a.
	/// The value for a is not restricted to the range [0, 1].
	///
	/// If genTypeU is a boolean scalar or vector:
	/// Selects which vector each returned component comes
	/// from. For a component of a that is false, the
	/// corresponding component of x is returned. For a
	/// component of a that is true, the corresponding
	/// component of y is returned. Components of x and y that
	/// are not selected are allowed to be invalid floating point
	/// values and will have no effect on the results. Thus, this
	/// provides different functionality than
	/// genType mix(genType x, genType y, genType(a))
	/// where a is a Boolean vector.
	///
	/// @see GLSL mix man page
	/// @see GLSL 4.20.8 specification, section 8.3 Common Functions
	///
	/// @param[in] x Value to interpolate.
	/// @param[in] y Value to interpolate.
	/// @param[in] a Interpolant.
	///
	/// @tparam genTypeT Floating point scalar or vector.
	/// @tparam genTypeU Floating point or boolean scalar or vector. It can't be a vector if it is the length of genTypeT.
	///
	/// @code
	/// #include <glm/glm.hpp>
	/// ...
	/// float a;
	/// bool b;
	/// glm::dvec3 e;
	/// glm::dvec3 f;
	/// glm::vec4 g;
	/// glm::vec4 h;
	/// ...
	/// glm::vec4 r = glm::mix(g, h, a); // Interpolate two vectors with a floating-point scalar.
	/// glm::vec4 s = glm::mix(g, h, b); // Returns g or h.
	/// glm::dvec3 t = glm::mix(e, f, a); // The type of the third parameter is not required to match the first and the second.
	/// glm::vec4 u = glm::mix(g, h, r); // Interpolations can be performed per component with a vector for the last parameter.
	/// @endcode
	template <typename T, typename U, precision P, template <typename, precision> class vecType>
	GLM_FUNC_DECL vecType<T, P> mix(
		vecType<T, P> const & x,
		vecType<T, P> const & y,
		vecType<U, P> const & a);

	template <typename T, typename U, precision P, template <typename, precision> class vecType>
	GLM_FUNC_DECL vecType<T, P> mix(
		vecType<T, P> const & x,
		vecType<T, P> const & y,
		U const & a);

	template <typename genTypeT, typename genTypeU>
	GLM_FUNC_DECL genTypeT mix(
		genTypeT const & x,
		genTypeT const & y,
		genTypeU const & a);

	/// Returns 0.0 if x < edge, otherwise it returns 1.0 for each component of a genType.
	///
	/// @see GLSL step man page
	/// @see GLSL 4.20.8 specification, section 8.3 Common Functions
	template <typename genType>
	GLM_FUNC_DECL genType step(
		genType const & edge,
		genType const & x);

	/// Returns 0.0 if x < edge, otherwise it returns 1.0.
	///
	/// @see GLSL step man page
	/// @see GLSL 4.20.8 specification, section 8.3 Common Functions
	template