pax_global_header00006660000000000000000000000064140527066250014521gustar00rootroot0000000000000052 comment=f3c11b2d810159e7063daddeaa0764f4006e5a73 xtensor-0.23.10/000077500000000000000000000000001405270662500133665ustar00rootroot00000000000000xtensor-0.23.10/.appveyor.yml000066400000000000000000000033171405270662500160400ustar00rootroot00000000000000build: false branches: only: - master platform: - x64 image: - Visual Studio 2017 - Visual Studio 2015 environment: matrix: - MINICONDA: C:\xtensor-conda init: - "ECHO %MINICONDA%" - if "%APPVEYOR_BUILD_WORKER_IMAGE%" == "Visual Studio 2015" set VCVARPATH="C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" - if "%APPVEYOR_BUILD_WORKER_IMAGE%" == "Visual Studio 2015" set VCARGUMENT=%PLATFORM% - if "%APPVEYOR_BUILD_WORKER_IMAGE%" == "Visual Studio 2017" set VCVARPATH="C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvars64.bat" - echo "%VCVARPATH% %VCARGUMENT%" - "%VCVARPATH% %VCARGUMENT%" - ps: if($env:Platform -eq "x64"){Start-FileDownload 'http://repo.continuum.io/miniconda/Miniconda3-latest-Windows-x86_64.exe' C:\Miniconda.exe; echo "Done"} - ps: if($env:Platform -eq "x86"){Start-FileDownload 'http://repo.continuum.io/miniconda/Miniconda3-latest-Windows-x86.exe' C:\Miniconda.exe; echo "Done"} - cmd: C:\Miniconda.exe /S /D=C:\xtensor-conda - "set PATH=%MINICONDA%;%MINICONDA%\\Scripts;%MINICONDA%\\Library\\bin;%PATH%" install: - conda config --set always_yes yes --set changeps1 no - conda update -q conda - conda info -a - conda env create --file environment-dev.yml - CALL conda.bat activate xtensor - if "%APPVEYOR_BUILD_WORKER_IMAGE%" == "Visual Studio 2017" set CMAKE_ARGS="-DDISABLE_VS2017=ON" - if "%APPVEYOR_BUILD_WORKER_IMAGE%" == "Visual Studio 2015" set CMAKE_ARGS="" - cmake -G "NMake Makefiles" -DCMAKE_INSTALL_PREFIX=%MINICONDA%\\LIBRARY -DDOWNLOAD_GTEST=ON -DXTENSOR_USE_XSIMD=ON -DCMAKE_BUILD_TYPE=RELEASE %CMAKE_ARGS% . 
- nmake test_xtensor_lib - cd test build_script: - .\test_xtensor_lib xtensor-0.23.10/.azure-pipelines/000077500000000000000000000000001405270662500165605ustar00rootroot00000000000000xtensor-0.23.10/.azure-pipelines/azure-pipelines-linux-clang.yml000066400000000000000000000026271405270662500246450ustar00rootroot00000000000000jobs: - job: 'Linux_0' strategy: matrix: clang_4: llvm_version: '4.0' clang_5: llvm_version: '5.0' clang_6: llvm_version: '6.0' clang_7: llvm_version: '7' clang_8: llvm_version: '8' clang_9: llvm_version: '9' clang_10: llvm_version: '10' disable_xsimd: 1 pool: vmImage: ubuntu-16.04 variables: CC: clang-$(llvm_version) CXX: clang++-$(llvm_version) timeoutInMinutes: 360 steps: - script: | sudo add-apt-repository ppa:ubuntu-toolchain-r/test if [[ $(llvm_version) == '4.0' || $(llvm_version) == '5.0' ]]; then sudo apt-get update sudo apt-get --no-install-suggests --no-install-recommends install gcc-4.9 clang-$(llvm_version) else LLVM_VERSION=$(llvm_version) get -O - http://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add - sudo add-apt-repository "deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-$LLVM_VERSION main" sudo apt-get update sudo apt-get --no-install-suggests --no-install-recommends install clang-$(llvm_version) fi displayName: Install build toolchain - bash: echo "##vso[task.prependpath]$CONDA/bin" displayName: Add conda to PATH - template: unix-build.yml xtensor-0.23.10/.azure-pipelines/azure-pipelines-linux-gcc.yml000066400000000000000000000027571405270662500243210ustar00rootroot00000000000000jobs: - job: 'Linux_1' strategy: matrix: gcc_4: gcc_version: '4.9' check_cyclic_includes: 1 gcc_5_disable_xsimd: gcc_version: '5' disable_xsimd: 1 gcc_6_disable_exception: gcc_version: '6' disable_exception: 1 gcc_6_column_major: gcc_version: '6' column_major_layout: 1 gcc_7: gcc_version: '7' gcc_7_tbb: gcc_version: '7' enable_tbb: 1 gcc_7_openmp: gcc_version: '7' enable_openmp: 1 gcc_8_bound_checks: gcc_version: '8' bound_checks: 1 build_benchmark: 1 disable_xsimd: 1 gcc_8_cpp17: gcc_version: '8' enable_cpp17: 1 gcc_9: gcc_version: '9' pool: vmImage: ubuntu-16.04 variables: CC: gcc-$(gcc_version) CXX: g++-$(gcc_version) timeoutInMinutes: 360 steps: - script: | if [[ $(gcc_version) == '4.9' || $(gcc_version) == '6' || $(gcc_version) == '7' || $(gcc_version) == '8' ]]; then sudo add-apt-repository ppa:ubuntu-toolchain-r/test sudo apt-get update sudo apt-get --no-install-suggests --no-install-recommends install g++-$(gcc_version) fi displayName: Install build toolchain - bash: echo "##vso[task.prependpath]$CONDA/bin" displayName: Add conda to PATH - template: unix-build.yml xtensor-0.23.10/.azure-pipelines/azure-pipelines-osx.yml000066400000000000000000000014261405270662500232310ustar00rootroot00000000000000jobs: - job: 'OSX' strategy: matrix: macOS_10_14: image_name: 'macOS-10.14' macOS_10_15: image_name: 'macOS-10.15' pool: vmImage: $(image_name) variables: CC: clang CXX: clang++ timeoutInMinutes: 360 steps: - script: | echo "Removing homebrew for Azure to avoid conflicts with conda" curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/uninstall > ~/uninstall_homebrew chmod +x ~/uninstall_homebrew ~/uninstall_homebrew -f -q displayName: Remove homebrew - bash: | echo "##vso[task.prependpath]$CONDA/bin" sudo chown -R $USER $CONDA displayName: Add conda to PATH - template: unix-build.yml xtensor-0.23.10/.azure-pipelines/azure-pipelines-win.yml000066400000000000000000000057571405270662500232300ustar00rootroot00000000000000 jobs: - job: 
'Windows_clangcl' pool: vmImage: 'vs2017-win2016' timeoutInMinutes: 360 steps: # Install Chocolatey (https://chocolatey.org/install#install-with-powershellexe) - powershell: | Set-ExecutionPolicy Bypass -Scope Process -Force iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1')) Write-Host "##vso[task.setvariable variable=PATH]$env:PATH" choco --version displayName: "Install Chocolatey" # Install Miniconda - script: | choco install miniconda3 --yes set PATH=C:\tools\miniconda3\Scripts;C:\tools\miniconda3;C:\tools\miniconda3\Library\bin;%PATH% echo '##vso[task.setvariable variable=PATH]%PATH%' set LIB=C:\tools\miniconda3\Library\lib;%LIB% echo '##vso[task.setvariable variable=LIB]%LIB%' conda --version displayName: "Install Miniconda" # Configure Miniconda - script: | conda config --set always_yes yes conda config --append channels conda-forge conda info displayName: "Configure Miniconda" # Create conda enviroment # Note: conda activate doesn't work here, because it creates a new shell! - script: | conda install cmake==3.14.0 ^ ninja ^ nlohmann_json ^ xtl==0.7.0 ^ xsimd==7.4.8 ^ python=3.6 conda list displayName: "Install conda packages" # Install LLVM # Note: LLVM distributed by conda is too old - script: | choco install llvm --yes set PATH=C:\Program Files\LLVM\bin;%PATH% echo '##vso[task.setvariable variable=PATH]%PATH%' clang-cl --version displayName: "Install LLVM" # Configure - script: | setlocal EnableDelayedExpansion call "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" x86_amd64 mkdir build & cd build cmake -G Ninja ^ -DCMAKE_BUILD_TYPE=Release ^ -DCMAKE_C_COMPILER=clang-cl ^ -DCMAKE_CXX_COMPILER=clang-cl ^ -DDOWNLOAD_GTEST=ON ^ -DXTENSOR_USE_XSIMD=ON ^ $(Build.SourcesDirectory) displayName: "Configure xtensor" workingDirectory: $(Build.BinariesDirectory) # Build - script: | call "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" x86_amd64 cmake --build . ^ --config Release ^ --target test_xtensor_lib ^ -- -v displayName: "Build xtensor" workingDirectory: $(Build.BinariesDirectory)/build # Test - script: | setlocal EnableDelayedExpansion cd test .\test_xtensor_lib displayName: "Test xtensor" workingDirectory: $(Build.BinariesDirectory)/build/test xtensor-0.23.10/.azure-pipelines/unix-build.yml000066400000000000000000000045361405270662500213730ustar00rootroot00000000000000steps: - script: | conda config --set always_yes yes --set changeps1 no conda update -q conda conda env create --file environment-dev.yml source activate xtensor if [[ $(enable_tbb) == 1 ]]; then conda install tbb-devel -c conda-forge fi displayName: Install dependencies - script: | source activate xtensor if [[ $(check_cyclic_includes) == 1 ]]; then set -e conda install networkx -c conda-forge cd tools chmod +x check_circular.py ./check_circular.py cd .. 
set +e fi displayName: Check circular includes - script: | source activate xtensor mkdir build cd build if [[ $(bound_checks) == 1 ]]; then CMAKE_EXTRA_ARGS="$CMAKE_EXTRA_ARGS -DXTENSOR_ENABLE_ASSERT=ON"; fi if [[ $(column_major_layout) == 1 ]]; then CMAKE_EXTRA_ARGS="$CMAKE_EXTRA_ARGS -DDEFAULT_COLUMN_MAJOR=ON"; fi if [[ $(disable_xsimd) == 1 ]]; then CMAKE_EXTRA_ARGS="$CMAKE_EXTRA_ARGS -DXTENSOR_USE_XSIMD=OFF"; else CMAKE_EXTRA_ARGS="$CMAKE_EXTRA_ARGS -DXTENSOR_USE_XSIMD=ON"; fi if [[ $(enable_tbb) == 1 ]]; then CMAKE_EXTRA_ARGS="$CMAKE_EXTRA_ARGS -DXTENSOR_USE_TBB=ON -DTBB_INCLUDE_DIR=$CONDA_PREFIX/include -DTBB_LIBRARY=$CONDA_PREFIX/lib .."; fi if [[ $(enable_openmp) == 1 ]]; then CMAKE_EXTRA_ARGS="$CMAKE_EXTRA_ARGS -DXTENSOR_USE_OPENMP=ON"; fi if [[ $(disable_exception) == 1 ]]; then CMAKE_EXTRA_ARGS="$CMAKE_EXTRA_ARGS -DXTENSOR_DISABLE_EXCEPTION=ON"; fi if [[ $(enable_cpp17) == 1 ]]; then CMAKE_EXTRA_ARGS="$CMAKE_EXTRA_ARGS -DCPP17=ON"; fi if [[ $(build_benchmark) == 1 ]]; then CMAKE_EXTA_ARGS="$CMAKE_EXTRA_ARGS -DBUILD_BENCHMARK=ON"; fi cmake -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX $CMAKE_EXTRA_ARGS -DDOWNLOAD_GTEST=ON $(Build.SourcesDirectory) displayName: Configure xtensor workingDirectory: $(Build.BinariesDirectory) - script: | source activate xtensor make -j2 test_xtensor_lib displayName: Build xtensor workingDirectory: $(Build.BinariesDirectory)/build - script: | source activate xtensor cd test ./test_xtensor_lib displayName: Test xtensor workingDirectory: $(Build.BinariesDirectory)/build/test xtensor-0.23.10/.github/000077500000000000000000000000001405270662500147265ustar00rootroot00000000000000xtensor-0.23.10/.github/PULL_REQUEST_TEMPLATE.md000066400000000000000000000006531405270662500205330ustar00rootroot00000000000000# Checklist - [ ] The title and commit message(s) are descriptive. - [ ] Small commits made to fix your PR have been squashed to avoid history pollution. - [ ] Tests have been added for new features or bug fixes. - [ ] API of new functions and classes are documented. 
# Description xtensor-0.23.10/.github/workflows/000077500000000000000000000000001405270662500167635ustar00rootroot00000000000000xtensor-0.23.10/.github/workflows/gh-pages.yml000066400000000000000000000016241405270662500212040ustar00rootroot00000000000000name: gh-pages on: push: branches: - master jobs: publish: runs-on: ubuntu-latest defaults: run: shell: bash -l {0} steps: - name: Basic GitHub action setup uses: actions/checkout@v2 - name: Set conda environment "test" uses: conda-incubator/setup-miniconda@v2 with: mamba-version: "*" channels: conda-forge,defaults channel-priority: true environment-file: docs/environment.yaml activate-environment: test auto-activate-base: false - name: Run doxygen working-directory: docs run: doxygen - name: Deploy to GitHub Pages if: success() uses: crazy-max/ghaction-github-pages@v2 with: target_branch: gh-pages build_dir: docs/html jekyll: false keep_history: false env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} xtensor-0.23.10/.gitignore000066400000000000000000000012241405270662500153550ustar00rootroot00000000000000# Doxygen output docs/html/ docs/xml/ # Prerequisites *.d # Compiled Object files *.slo *.lo *.o *.obj # Precompiled Headers *.gch *.pch # Compiled Dynamic libraries *.so *.dylib *.dll # Compiled Static libraries *.lai *.la *.a *.lib # Executables *.exe *.out *.app # Vim tmp files *.swp *~ # Generated directory include/tmp/ # Build directory build/ # Test build artefacts test/test_xtensor test/CMakeCache.txt test/Makefile test/CMakeFiles/ test/cmake_install.cmake # Documentation build artefacts docs/CMakeCache.txt docs/xml/ docs/build/ docs/*.tmp # Jupyter artefacts .ipynb_checkpoints/ # Pytho artefacts __pycache__ # Generated files *.pc xtensor-0.23.10/CMakeLists.txt000066400000000000000000000307601405270662500161340ustar00rootroot00000000000000############################################################################ # Copyright (c) Johan Mabille, Sylvain Corlay and Wolf Vollprecht # # Copyright (c) QuantStack # # # # Distributed under the terms of the BSD 3-Clause License. # # # # The full license is in the file LICENSE, distributed with this software. # ############################################################################ cmake_minimum_required(VERSION 3.1) project(xtensor) set(XTENSOR_INCLUDE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/include) # Versionning # =========== file(STRINGS "${XTENSOR_INCLUDE_DIR}/xtensor/xtensor_config.hpp" xtensor_version_defines REGEX "#define XTENSOR_VERSION_(MAJOR|MINOR|PATCH)") foreach(ver ${xtensor_version_defines}) if(ver MATCHES "#define XTENSOR_VERSION_(MAJOR|MINOR|PATCH) +([^ ]+)$") set(XTENSOR_VERSION_${CMAKE_MATCH_1} "${CMAKE_MATCH_2}" CACHE INTERNAL "") endif() endforeach() set(${PROJECT_NAME}_VERSION ${XTENSOR_VERSION_MAJOR}.${XTENSOR_VERSION_MINOR}.${XTENSOR_VERSION_PATCH}) message(STATUS "Building xtensor v${${PROJECT_NAME}_VERSION}") # Dependencies # ============ set(xtl_REQUIRED_VERSION 0.7.0) if(TARGET xtl) set(xtl_VERSION ${XTL_VERSION_MAJOR}.${XTL_VERSION_MINOR}.${XTL_VERSION_PATCH}) # Note: This is not SEMVER compatible comparison if( NOT ${xtl_VERSION} VERSION_GREATER_EQUAL ${xtl_REQUIRED_VERSION}) message(ERROR "Mismatch xtl versions. 
Found '${xtl_VERSION}' but requires: '${xtl_REQUIRED_VERSION}'") else() message(STATUS "Found xtl v${xtl_VERSION}") endif() else() find_package(xtl ${xtl_REQUIRED_VERSION} REQUIRED) message(STATUS "Found xtl: ${xtl_INCLUDE_DIRS}/xtl") endif() find_package(nlohmann_json 3.1.1 QUIET) # Optional dependencies # ===================== OPTION(XTENSOR_USE_XSIMD "simd acceleration for xtensor" OFF) OPTION(XTENSOR_USE_TBB "enable parallelization using intel TBB" OFF) OPTION(XTENSOR_USE_OPENMP "enable parallelization using OpenMP" OFF) if(XTENSOR_USE_TBB AND XTENSOR_USE_OPENMP) message( FATAL "XTENSOR_USE_TBB and XTENSOR_USE_OPENMP cannot both be active at once" ) endif() if(XTENSOR_USE_XSIMD) set(xsimd_REQUIRED_VERSION 7.4.4) if(TARGET xsimd) set(xsimd_VERSION ${XSIMD_VERSION_MAJOR}.${XSIMD_VERSION_MINOR}.${XSIMD_VERSION_PATCH}) # Note: This is not SEMVER compatible comparison if( NOT ${xsimd_VERSION} VERSION_GREATER_EQUAL ${xsimd_REQUIRED_VERSION}) message(ERROR "Mismatch xsimd versions. Found '${xsimd_VERSION}' but requires: '${xsimd_REQUIRED_VERSION}'") else() message(STATUS "Found xsimd v${xsimd_VERSION}") endif() else() find_package(xsimd ${xsimd_REQUIRED_VERSION} REQUIRED) message(STATUS "Found xsimd: ${xsimd_INCLUDE_DIRS}/xsimd") endif() endif() if(XTENSOR_USE_TBB) set(CMAKE_MODULE_PATH "${CMAKE_MODULE_PATH}" "${CMAKE_CURRENT_SOURCE_DIR}/cmake/") find_package(TBB REQUIRED) message(STATUS "Found intel TBB: ${TBB_INCLUDE_DIRS}") endif() if(XTENSOR_USE_OPENMP) find_package(OpenMP REQUIRED) if (OPENMP_FOUND) # Set openmp variables now # Create private target just for this lib # https://cliutils.gitlab.io/modern-cmake/chapters/packages/OpenMP.html # Probably not safe for cmake < 3.4 .. find_package(Threads REQUIRED) add_library(OpenMP::OpenMP_CXX_xtensor IMPORTED INTERFACE) set_property( TARGET OpenMP::OpenMP_CXX_xtensor PROPERTY INTERFACE_COMPILE_OPTIONS ${OpenMP_CXX_FLAGS} ) # Only works if the same flag is passed to the linker; use CMake 3.9+ otherwise (Intel, AppleClang) set_property( TARGET OpenMP::OpenMP_CXX_xtensor PROPERTY INTERFACE_LINK_LIBRARIES ${OpenMP_CXX_FLAGS} Threads::Threads) message(STATUS "OpenMP Found") else() message(FATAL "Failed to locate OpenMP") endif() endif() # Build # ===== set(XTENSOR_HEADERS ${XTENSOR_INCLUDE_DIR}/xtensor/xaccessible.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xaccumulator.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xadapt.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xarray.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xassign.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xaxis_iterator.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xaxis_slice_iterator.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xbroadcast.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xbuffer_adaptor.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xbuilder.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xchunked_array.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xchunked_assign.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xchunked_view.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xcomplex.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xcontainer.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xcsv.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xdynamic_view.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xeval.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xexception.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xexpression.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xexpression_holder.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xexpression_traits.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xfixed.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xfunction.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xfunctor_view.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xgenerator.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xhistogram.hpp 
${XTENSOR_INCLUDE_DIR}/xtensor/xindex_view.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xinfo.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xio.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xiterable.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xiterator.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xjson.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xlayout.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xmanipulation.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xmasked_view.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xmath.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xmime.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xnoalias.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xnorm.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xnpy.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xoffset_view.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xoperation.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xoptional.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xoptional_assembly.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xoptional_assembly_base.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xoptional_assembly_storage.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xpad.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xrandom.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xreducer.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xrepeat.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xscalar.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xsemantic.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xset_operation.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xshape.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xslice.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xsort.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xstorage.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xstrided_view.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xstrided_view_base.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xstrides.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xtensor.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xtensor_config.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xtensor_forward.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xtensor_simd.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xutils.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xvectorize.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xview.hpp ${XTENSOR_INCLUDE_DIR}/xtensor/xview_utils.hpp ) add_library(xtensor INTERFACE) target_include_directories(xtensor INTERFACE $ $) target_compile_features(xtensor INTERFACE cxx_std_14) target_link_libraries(xtensor INTERFACE xtl) OPTION(XTENSOR_ENABLE_ASSERT "xtensor bound check" OFF) OPTION(XTENSOR_CHECK_DIMENSION "xtensor dimension check" OFF) OPTION(BUILD_TESTS "xtensor test suite" OFF) OPTION(BUILD_BENCHMARK "xtensor benchmark" OFF) OPTION(DOWNLOAD_GTEST "build gtest from downloaded sources" OFF) OPTION(DOWNLOAD_GBENCHMARK "download google benchmark and build from source" ON) OPTION(DEFAULT_COLUMN_MAJOR "set default layout to column major" OFF) OPTION(DISABLE_VS2017 "disables the compilation of some test with Visual Studio 2017" OFF) OPTION(CPP17 "enables C++17" OFF) OPTION(CPP20 "enables C++20 (experimental)" OFF) OPTION(XTENSOR_DISABLE_EXCEPTIONS "Disable C++ exceptions" OFF) OPTION(DISABLE_MSVC_ITERATOR_CHECK "Disable the MVSC iterator check" ON) if(DOWNLOAD_GTEST OR GTEST_SRC_DIR) set(BUILD_TESTS ON) endif() if(XTENSOR_ENABLE_ASSERT OR XTENSOR_CHECK_DIMENSION) add_definitions(-DXTENSOR_ENABLE_ASSERT) endif() if(XTENSOR_CHECK_DIMENSION) add_definitions(-DXTENSOR_ENABLE_CHECK_DIMENSION) endif() if(DEFAULT_COLUMN_MAJOR) add_definitions(-DXTENSOR_DEFAULT_LAYOUT=layout_type::column_major) endif() if(DISABLE_VS2017) add_definitions(-DDISABLE_VS2017) endif() if(MSVC AND DISABLE_MSVC_ITERATOR_CHECK) add_compile_definitions($<$:_ITERATOR_DEBUG_LEVEL=0>) endif() if(BUILD_TESTS) enable_testing() add_subdirectory(test) endif() if(BUILD_BENCHMARK) add_subdirectory(benchmark) endif() if(XTENSOR_USE_OPENMP) # Link xtensor itself to OpenMP to propagate to user projects 
target_link_libraries(xtensor INTERFACE OpenMP::OpenMP_CXX_xtensor) endif() # Installation # ============ include(GNUInstallDirs) include(CMakePackageConfigHelpers) install(TARGETS xtensor EXPORT ${PROJECT_NAME}-targets) # Makes the project importable from the build directory export(EXPORT ${PROJECT_NAME}-targets FILE "${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}Targets.cmake") install(FILES ${XTENSOR_HEADERS} DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/xtensor) set(XTENSOR_CMAKECONFIG_INSTALL_DIR "${CMAKE_INSTALL_LIBDIR}/cmake/${PROJECT_NAME}" CACHE STRING "install path for xtensorConfig.cmake") configure_package_config_file(${PROJECT_NAME}Config.cmake.in "${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}Config.cmake" INSTALL_DESTINATION ${XTENSOR_CMAKECONFIG_INSTALL_DIR}) # xtensor is header-only and does not depend on the architecture. # Remove CMAKE_SIZEOF_VOID_P from xtensorConfigVersion.cmake so that an xtensorConfig.cmake # generated for a 64 bit target can be used for 32 bit targets and vice versa. set(_XTENSOR_CMAKE_SIZEOF_VOID_P ${CMAKE_SIZEOF_VOID_P}) unset(CMAKE_SIZEOF_VOID_P) write_basic_package_version_file(${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}ConfigVersion.cmake VERSION ${${PROJECT_NAME}_VERSION} COMPATIBILITY AnyNewerVersion) set(CMAKE_SIZEOF_VOID_P ${_XTENSOR_CMAKE_SIZEOF_VOID_P}) install(FILES ${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}Config.cmake ${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}ConfigVersion.cmake DESTINATION ${XTENSOR_CMAKECONFIG_INSTALL_DIR}) install(EXPORT ${PROJECT_NAME}-targets FILE ${PROJECT_NAME}Targets.cmake DESTINATION ${XTENSOR_CMAKECONFIG_INSTALL_DIR}) configure_file(${PROJECT_NAME}.pc.in "${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}.pc" @ONLY) install(FILES "${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}.pc" DESTINATION "${CMAKE_INSTALL_LIBDIR}/pkgconfig/") # Write single include # ==================== function(PREPEND var prefix) set(listVar "") foreach(f ${ARGN}) list(APPEND listVar "${prefix}${f}") endforeach(f) set(${var} "${listVar}" PARENT_SCOPE) endfunction() function(POSTFIX var postfix) set(listVar "") foreach(f ${ARGN}) list(APPEND listVar "${f}${postfix}") endforeach(f) set(${var} "${listVar}" PARENT_SCOPE) endfunction() set(XTENSOR_SINGLE_INCLUDE ${XTENSOR_HEADERS}) string(REPLACE "${XTENSOR_INCLUDE_DIR}/" "" XTENSOR_SINGLE_INCLUDE "${XTENSOR_SINGLE_INCLUDE}") list(REMOVE_ITEM XTENSOR_SINGLE_INCLUDE xtensor/xexpression_holder.hpp xtensor/xjson.hpp xtensor/xmime.hpp xtensor/xnpy.hpp) PREPEND(XTENSOR_SINGLE_INCLUDE "#include <" ${XTENSOR_SINGLE_INCLUDE}) POSTFIX(XTENSOR_SINGLE_INCLUDE ">" ${XTENSOR_SINGLE_INCLUDE}) string(REPLACE ";" "\n" XTENSOR_SINGLE_INCLUDE "${XTENSOR_SINGLE_INCLUDE}") string(CONCAT XTENSOR_SINGLE_INCLUDE "#ifndef XTENSOR\n" "#define XTENSOR\n\n" "${XTENSOR_SINGLE_INCLUDE}" "\n\n#endif\n") file(WRITE "${CMAKE_BINARY_DIR}/xtensor.hpp" "${XTENSOR_SINGLE_INCLUDE}") install(FILES "${CMAKE_BINARY_DIR}/xtensor.hpp" DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}) xtensor-0.23.10/LICENSE000066400000000000000000000030261405270662500143740ustar00rootroot00000000000000Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Copyright (c) 2016, QuantStack All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. xtensor-0.23.10/README.md000066400000000000000000000345311405270662500146530ustar00rootroot00000000000000# ![xtensor](docs/source/xtensor.svg) [![Appveyor](https://ci.appveyor.com/api/projects/status/dljjg79povwgncuf?svg=true)](https://ci.appveyor.com/project/xtensor-stack/xtensor) [![Azure](https://dev.azure.com/xtensor-stack/xtensor-stack/_apis/build/status/xtensor-stack.xtensor?branchName=master)](https://dev.azure.com/xtensor-stack/xtensor-stack/_build/latest?definitionId=4&branchName=master) [![Coverity](https://scan.coverity.com/projects/18335/badge.svg)](https://scan.coverity.com/projects/xtensor) [![Documentation](http://readthedocs.org/projects/xtensor/badge/?version=latest)](https://xtensor.readthedocs.io/en/latest/?badge=latest) [![Doxygen -> gh-pages](https://github.com/xtensor-stack/xtensor/workflows/gh-pages/badge.svg)](https://xtensor-stack.github.io/xtensor) [![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/xtensor-stack/xtensor/stable?filepath=notebooks%2Fxtensor.ipynb) [![Join the Gitter Chat](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/QuantStack/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) Multi-dimensional arrays with broadcasting and lazy computing. ## Introduction `xtensor` is a C++ library meant for numerical analysis with multi-dimensional array expressions. `xtensor` provides - an extensible expression system enabling **lazy broadcasting**. - an API following the idioms of the **C++ standard library**. - tools to manipulate array expressions and build upon `xtensor`. Containers of `xtensor` are inspired by [NumPy](http://www.numpy.org), the Python array programming library. **Adaptors** for existing data structures to be plugged into our expression system can easily be written. In fact, `xtensor` can be used to **process NumPy data structures inplace** using Python's [buffer protocol](https://docs.python.org/3/c-api/buffer.html). Similarly, we can operate on Julia and R arrays. For more details on the NumPy, Julia and R bindings, check out the [xtensor-python](https://github.com/xtensor-stack/xtensor-python), [xtensor-julia](https://github.com/xtensor-stack/Xtensor.jl) and [xtensor-r](https://github.com/xtensor-stack/xtensor-r) projects respectively. `xtensor` requires a modern C++ compiler supporting C++14. 
The following C++ compilers are supported: - On Windows platforms, Visual C++ 2015 Update 2, or more recent - On Unix platforms, gcc 4.9 or a recent version of Clang ## Installation ### Package managers We provide a package for the mamba (or conda) package manager: ```bash mamba install -c conda-forge xtensor ``` ### Install from sources `xtensor` is a header-only library. You can directly install it from the sources: ```bash cmake -D CMAKE_INSTALL_PREFIX=your_install_prefix make install ``` ## Trying it online You can play with `xtensor` interactively in a Jupyter notebook right now! Just click on the binder link below: [![Binder](docs/source/binder-logo.svg)](https://mybinder.org/v2/gh/xtensor-stack/xtensor/stable?filepath=notebooks/xtensor.ipynb) The C++ support in Jupyter is powered by the [xeus-cling](https://github.com/jupyter-xeus/xeus-cling) C++ kernel. Together with xeus-cling, xtensor enables a similar workflow to that of NumPy with the IPython Jupyter kernel. ![xeus-cling](docs/source/xeus-cling-screenshot.png) ## Documentation For more information on using `xtensor`, check out the reference documentation http://xtensor.readthedocs.io/ ## Dependencies `xtensor` depends on the [xtl](https://github.com/xtensor-stack/xtl) library and has an optional dependency on the [xsimd](https://github.com/xtensor-stack/xsimd) library: | `xtensor` | `xtl` |`xsimd` (optional) | |-----------|---------|-------------------| | master | ^0.7.0 | ^7.4.8 | | 0.23.10 | ^0.7.0 | ^7.4.8 | | 0.23.9 | ^0.7.0 | ^7.4.8 | | 0.23.8 | ^0.7.0 | ^7.4.8 | | 0.23.7 | ^0.7.0 | ^7.4.8 | | 0.23.6 | ^0.7.0 | ^7.4.8 | | 0.23.5 | ^0.7.0 | ^7.4.8 | | 0.23.4 | ^0.7.0 | ^7.4.8 | | 0.23.3 | ^0.7.0 | ^7.4.8 | | 0.23.2 | ^0.7.0 | ^7.4.8 | | 0.23.1 | ^0.7.0 | ^7.4.8 | | 0.23.0 | ^0.7.0 | ^7.4.8 | | 0.22.0 | ^0.6.23 | ^7.4.8 | The dependency on `xsimd` is required if you want to enable SIMD acceleration in `xtensor`. This can be done by defining the macro `XTENSOR_USE_XSIMD` *before* including any header of `xtensor`. ## Usage ### Basic usage **Initialize a 2-D array and compute the sum of one of its rows and a 1-D array.** ```cpp #include #include "xtensor/xarray.hpp" #include "xtensor/xio.hpp" #include "xtensor/xview.hpp" xt::xarray arr1 {{1.0, 2.0, 3.0}, {2.0, 5.0, 7.0}, {2.0, 5.0, 7.0}}; xt::xarray arr2 {5.0, 6.0, 7.0}; xt::xarray res = xt::view(arr1, 1) + arr2; std::cout << res; ``` Outputs: ``` {7, 11, 14} ``` **Initialize a 1-D array and reshape it inplace.** ```cpp #include #include "xtensor/xarray.hpp" #include "xtensor/xio.hpp" xt::xarray arr {1, 2, 3, 4, 5, 6, 7, 8, 9}; arr.reshape({3, 3}); std::cout << arr; ``` Outputs: ``` {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}} ``` **Index Access** ```cpp #include #include "xtensor/xarray.hpp" #include "xtensor/xio.hpp" xt::xarray arr1 {{1.0, 2.0, 3.0}, {2.0, 5.0, 7.0}, {2.0, 5.0, 7.0}}; std::cout << arr1(0, 0) << std::endl; xt::xarray arr2 {1, 2, 3, 4, 5, 6, 7, 8, 9}; std::cout << arr2(0); ``` Outputs: ``` 1.0 1 ``` ### The NumPy to xtensor cheat sheet If you are familiar with NumPy APIs, and you are interested in xtensor, you can check out the [NumPy to xtensor cheat sheet](https://xtensor.readthedocs.io/en/latest/numpy.html) provided in the documentation. ### Lazy broadcasting with `xtensor` Xtensor can operate on arrays of different shapes of dimensions in an element-wise fashion. Broadcasting rules of xtensor are similar to those of [NumPy](http://www.numpy.org) and [libdynd](http://libdynd.org). 
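As a quick illustration, here is a minimal sketch of a broadcasted, element-wise operation (the shapes and values are chosen arbitrarily); the precise rules are spelled out below.

```cpp
#include <iostream>

#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"

xt::xarray<double> a = {{1.0, 2.0, 3.0},
                        {4.0, 5.0, 6.0}};          // shape (2, 3)

xt::xarray<double> b = {{{0.0, 0.0, 0.0},
                         {0.0, 0.0, 0.0}},
                        {{1.0, 1.0, 1.0},
                         {1.0, 1.0, 1.0}}};        // shape (2, 2, 3)

// `a` is broadcast across the leading dimension of `b`; the expression
// `a + b` is lazy and is only evaluated when assigned to `res`.
xt::xarray<double> res = a + b;                    // shape (2, 2, 3)
std::cout << res;
```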
### Broadcasting rules In an operation involving two arrays of different dimensions, the array with the lesser dimensions is broadcast across the leading dimensions of the other. For example, if `A` has shape `(2, 3)`, and `B` has shape `(4, 2, 3)`, the result of a broadcasted operation with `A` and `B` has shape `(4, 2, 3)`. ``` (2, 3) # A (4, 2, 3) # B --------- (4, 2, 3) # Result ``` The same rule holds for scalars, which are handled as 0-D expressions. If `A` is a scalar, the equation becomes: ``` () # A (4, 2, 3) # B --------- (4, 2, 3) # Result ``` If matched up dimensions of two input arrays are different, and one of them has size `1`, it is broadcast to match the size of the other. Let's say B has the shape `(4, 2, 1)` in the previous example, so the broadcasting happens as follows: ``` (2, 3) # A (4, 2, 1) # B --------- (4, 2, 3) # Result ``` ### Universal functions, laziness and vectorization With `xtensor`, if `x`, `y` and `z` are arrays of *broadcastable shapes*, the return type of an expression such as `x + y * sin(z)` is **not an array**. It is an `xexpression` object offering the same interface as an N-dimensional array, which does not hold the result. **Values are only computed upon access or when the expression is assigned to an xarray object**. This allows to operate symbolically on very large arrays and only compute the result for the indices of interest. We provide utilities to **vectorize any scalar function** (taking multiple scalar arguments) into a function that will perform on `xexpression`s, applying the lazy broadcasting rules which we just described. These functions are called *xfunction*s. They are `xtensor`'s counterpart to NumPy's universal functions. In `xtensor`, arithmetic operations (`+`, `-`, `*`, `/`) and all special functions are *xfunction*s. ### Iterating over `xexpression`s and broadcasting Iterators All `xexpression`s offer two sets of functions to retrieve iterator pairs (and their `const` counterpart). - `begin()` and `end()` provide instances of `xiterator`s which can be used to iterate over all the elements of the expression. The order in which elements are listed is `row-major` in that the index of last dimension is incremented first. - `begin(shape)` and `end(shape)` are similar but take a *broadcasting shape* as an argument. Elements are iterated upon in a row-major way, but certain dimensions are repeated to match the provided shape as per the rules described above. For an expression `e`, `e.begin(e.shape())` and `e.begin()` are equivalent. ### Runtime vs compile-time dimensionality Two container classes implementing multi-dimensional arrays are provided: `xarray` and `xtensor`. - `xarray` can be reshaped dynamically to any number of dimensions. It is the container that is the most similar to NumPy arrays. - `xtensor` has a dimension set at compilation time, which enables many optimizations. For example, shapes and strides of `xtensor` instances are allocated on the stack instead of the heap. `xarray` and `xtensor` container are both `xexpression`s and can be involved and mixed in universal functions, assigned to each other etc... Besides, two access operators are provided: - The variadic template `operator()` which can take multiple integral arguments or none. - And the `operator[]` which takes a single multi-index argument, which can be of size determined at runtime. `operator[]` also supports access with braced initializers. 
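The snippet below is a minimal sketch of these two access operators on both containers (the values are arbitrary).

```cpp
#include <vector>

#include "xtensor/xarray.hpp"
#include "xtensor/xtensor.hpp"

xt::xarray<double> a = {{1.0, 2.0, 3.0},
                        {4.0, 5.0, 6.0}};            // dimension set at runtime
xt::xtensor<double, 2> t = {{1.0, 2.0},
                            {3.0, 4.0}};             // dimension fixed at compile time

double x = a(1, 2);                     // variadic operator(): 6.0
double y = a[{1, 2}];                   // operator[] with a braced initializer
std::vector<std::size_t> idx = {0, 1};
double z = t[idx];                      // operator[] with a runtime-sized multi-index
```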
## Performances

Xtensor operations make use of SIMD acceleration depending on what instruction sets are available on the platform at hand (SSE, AVX, AVX512, Neon).

### [![xsimd](docs/source/xsimd-small.svg)](https://github.com/xtensor-stack/xsimd)

The [xsimd](https://github.com/xtensor-stack/xsimd) project underlies the detection of the available instruction sets, and provides generic high-level wrappers and memory allocators for client libraries such as xtensor.

### Continuous benchmarking

Xtensor operations are continuously benchmarked, and are significantly improved at each new version. Current performance on statically dimensioned tensors matches that of the Eigen library. Dynamically dimensioned tensors, for which the shape is heap allocated, come at a small additional cost.

### Stack allocation for shapes and strides

More generally, the library implements a `promote_shape` mechanism at build time to determine the optimal sequence type to hold the shape of an expression. The shape type of a broadcasting expression whose members have a dimensionality determined at compile time will have a stack-allocated sequence type. If at least one node of a broadcasting expression has a dynamic dimension (for example an `xarray`), it bubbles up to the entire broadcasting expression, which will have a heap-allocated shape. The same holds for views, broadcast expressions, etc.

Therefore, when building an application with xtensor, we recommend using statically-dimensioned containers whenever possible to improve the overall performance of the application.

## Language bindings

### [![xtensor-python](docs/source/xtensor-python-small.svg)](https://github.com/xtensor-stack/xtensor-python)

The [xtensor-python](https://github.com/xtensor-stack/xtensor-python) project provides the implementation of two `xtensor` containers, `pyarray` and `pytensor`, which effectively wrap NumPy arrays, allowing inplace modification, including reshapes. Utilities to automatically generate NumPy-style universal functions, exposed to Python from scalar functions, are also provided.

### [![xtensor-julia](docs/source/xtensor-julia-small.svg)](https://github.com/xtensor-stack/xtensor-julia)

The [xtensor-julia](https://github.com/xtensor-stack/xtensor-julia) project provides the implementation of two `xtensor` containers, `jlarray` and `jltensor`, which effectively wrap Julia arrays, allowing inplace modification, including reshapes. Like in the Python case, utilities to generate NumPy-style universal functions are provided.

### [![xtensor-r](docs/source/xtensor-r-small.svg)](https://github.com/xtensor-stack/xtensor-r)

The [xtensor-r](https://github.com/xtensor-stack/xtensor-r) project provides the implementation of two `xtensor` containers, `rarray` and `rtensor`, which effectively wrap R arrays, allowing inplace modification, including reshapes. Like for the Python and Julia bindings, utilities to generate NumPy-style universal functions are provided.

## Library bindings

### [![xtensor-blas](docs/source/xtensor-blas-small.svg)](https://github.com/xtensor-stack/xtensor-blas)

The [xtensor-blas](https://github.com/xtensor-stack/xtensor-blas) project provides bindings to BLAS libraries, enabling linear-algebra operations on xtensor expressions.
### [![xtensor-io](docs/source/xtensor-io-small.svg)](https://github.com/xtensor-stack/xtensor-io) The [xtensor-io](https://github.com/xtensor-stack/xtensor-io) project enables the loading of a variety of file formats into xtensor expressions, such as image files, sound files, HDF5 files, as well as NumPy npy and npz files. ## Building and running the tests Building the tests requires the [GTest](https://github.com/google/googletest) testing framework and [cmake](https://cmake.org). gtest and cmake are available as packages for most Linux distributions. Besides, they can also be installed with the `conda` package manager (even on windows): ```bash conda install -c conda-forge gtest cmake ``` Once `gtest` and `cmake` are installed, you can build and run the tests: ```bash mkdir build cd build cmake -DBUILD_TESTS=ON ../ make xtest ``` You can also use CMake to download the source of `gtest`, build it, and use the generated libraries: ```bash mkdir build cd build cmake -DBUILD_TESTS=ON -DDOWNLOAD_GTEST=ON ../ make xtest ``` ## Building the HTML documentation xtensor's documentation is built with three tools - [doxygen](http://www.doxygen.org) - [sphinx](http://www.sphinx-doc.org) - [breathe](https://breathe.readthedocs.io) While doxygen must be installed separately, you can install breathe by typing ```bash pip install breathe sphinx_rtd_theme ``` Breathe can also be installed with `conda` ```bash conda install -c conda-forge breathe ``` Finally, go to `docs` subdirectory and build the documentation with the following command: ```bash make html ``` ## License We use a shared copyright model that enables all contributors to maintain the copyright on their contributions. This software is licensed under the BSD-3-Clause license. See the [LICENSE](LICENSE) file for details. xtensor-0.23.10/azure-pipelines.yml000066400000000000000000000004161405270662500172260ustar00rootroot00000000000000trigger: - master jobs: - template: ./.azure-pipelines/azure-pipelines-win.yml - template: ./.azure-pipelines/azure-pipelines-linux-clang.yml - template: ./.azure-pipelines/azure-pipelines-linux-gcc.yml - template: ./.azure-pipelines/azure-pipelines-osx.yml xtensor-0.23.10/benchmark/000077500000000000000000000000001405270662500153205ustar00rootroot00000000000000xtensor-0.23.10/benchmark/CMakeLists.txt000066400000000000000000000131431405270662500200620ustar00rootroot00000000000000############################################################################ # Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht # # # # Distributed under the terms of the BSD 3-Clause License. # # # # The full license is in the file LICENSE, distributed with this software. # ############################################################################ cmake_minimum_required(VERSION 3.1) if (CMAKE_CURRENT_SOURCE_DIR STREQUAL CMAKE_SOURCE_DIR) project(xtensor-benchmark) find_package(xtensor REQUIRED CONFIG) set(XTENSOR_INCLUDE_DIR ${xtensor_INCLUDE_DIRS}) endif () message(STATUS "Forcing tests build type to Release") set(CMAKE_BUILD_TYPE Release CACHE STRING "Choose the type of build." 
FORCE) include(CheckCXXCompilerFlag) string(TOUPPER "${CMAKE_BUILD_TYPE}" U_CMAKE_BUILD_TYPE) if (CMAKE_CXX_COMPILER_ID MATCHES "Clang" OR CMAKE_CXX_COMPILER_ID MATCHES "GNU" OR CMAKE_CXX_COMPILER_ID MATCHES "Intel") CHECK_CXX_COMPILER_FLAG(-march=native arch_native_supported) if(arch_native_supported AND NOT CMAKE_CXX_FLAGS MATCHES "-march") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -march=native") endif() set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O3 -g -Wunused-parameter -Wextra -Wreorder") CHECK_CXX_COMPILER_FLAG("-std=c++14" HAS_CPP14_FLAG) if (HAS_CPP14_FLAG) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++14") else() message(FATAL_ERROR "Unsupported compiler -- xtensor requires C++14 support!") endif() # Enable link time optimization and set the default symbol # visibility to hidden (very important to obtain small binaries) if (NOT ${U_CMAKE_BUILD_TYPE} MATCHES DEBUG) # Default symbol visibility set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fvisibility=hidden") # Check for Link Time Optimization support # (GCC/Clang) # LTO had to be removed as google benchmark doesn't build with it # CHECK_CXX_COMPILER_FLAG("-flto" HAS_LTO_FLAG) # if (HAS_LTO_FLAG) # set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -flto") # endif() # Intel equivalent to LTO is called IPO if (CMAKE_CXX_COMPILER_ID MATCHES "Intel") CHECK_CXX_COMPILER_FLAG("-ipo" HAS_IPO_FLAG) if (HAS_IPO_FLAG) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -ipo") endif() endif() endif() endif() if(MSVC) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /EHsc /MP /bigobj") set(CMAKE_EXE_LINKER_FLAGS /MANIFEST:NO) foreach(flag_var CMAKE_CXX_FLAGS CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE CMAKE_CXX_FLAGS_MINSIZEREL CMAKE_CXX_FLAGS_RELWITHDEBINFO) string(REPLACE "/MD" "-MT" ${flag_var} "${${flag_var}}") endforeach() endif() if(DOWNLOAD_GBENCHMARK OR GBENCHMARK_SRC_DIR) if(DOWNLOAD_GBENCHMARK) # Download and unpack googlebenchmark at configure time configure_file(downloadGBenchmark.cmake.in googlebenchmark-download/CMakeLists.txt) else() # Copy local source of googlebenchmark at configure time configure_file(copyGBenchmark.cmake.in googlebenchmark-download/CMakeLists.txt) endif() execute_process(COMMAND ${CMAKE_COMMAND} -G "${CMAKE_GENERATOR}" . RESULT_VARIABLE result WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/googlebenchmark-download ) if(result) message(FATAL_ERROR "CMake step for googlebenchmark failed: ${result}") endif() execute_process(COMMAND ${CMAKE_COMMAND} --build . RESULT_VARIABLE result WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/googlebenchmark-download ) if(result) message(FATAL_ERROR "Build step for googlebenchmark failed: ${result}") endif() # Add googlebenchmark directly to our build. This defines # the gtest and gtest_main targets. 
add_subdirectory(${CMAKE_CURRENT_BINARY_DIR}/googlebenchmark-src ${CMAKE_CURRENT_BINARY_DIR}/googlebenchmark-build) set(GBENCHMARK_INCLUDE_DIRS "${googlebenchmark_SOURCE_DIR}/include") set(GBENCHMARK_LIBRARIES benchmark) else() find_package(benchmark REQUIRED) endif() find_package(xsimd) if (xsimd_FOUND) include_directories(${xsimd_INCLUDE_DIRS}) add_definitions("-DXTENSOR_USE_XSIMD=1") endif() include_directories(${XTENSOR_INCLUDE_DIR}) include_directories(${GBENCHMARK_INCLUDE_DIRS}) set(XTENSOR_BENCHMARK benchmark_assign.cpp benchmark_builder.cpp benchmark_container.cpp benchmark_creation.cpp benchmark_increment_stepper.cpp benchmark_lambda_expressions.cpp benchmark_math.cpp benchmark_random.cpp benchmark_reducer.cpp benchmark_views.cpp benchmark_xshape.cpp benchmark_view_access.cpp benchmark_view_assignment.cpp benchmark_view_adapt.cpp main.cpp ) set(XTENSOR_BENCHMARK_TARGET benchmark_xtensor) add_executable(${XTENSOR_BENCHMARK_TARGET} EXCLUDE_FROM_ALL ${XTENSOR_BENCHMARK} ${XTENSOR_HEADERS}) target_link_libraries(${XTENSOR_BENCHMARK_TARGET} xtensor ${GBENCHMARK_LIBRARIES}) add_custom_target(xbenchmark COMMAND benchmark_xtensor DEPENDS ${XTENSOR_BENCHMARK_TARGET}) add_custom_target(xpowerbench COMMAND echo "sudo needed to set cpu power governor to performance" COMMAND sudo cpupower frequency-set --governor performance COMMAND benchmark_xtensor --benchmark_out=results.csv --benchmark_out_format=csv COMMAND sudo cpupower frequency-set --governor powersave DEPENDS ${XTENSOR_BENCHMARK_TARGET}) xtensor-0.23.10/benchmark/benchmark_adapter.cpp000066400000000000000000000123501405270662500214570ustar00rootroot00000000000000/*************************************************************************** * Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht * * * * Distributed under the terms of the BSD 3-Clause License. * * * * The full license is in the file LICENSE, distributed with this software. 
* ****************************************************************************/ #include // #include "xtensor/xshape.hpp" #include "xtensor/xstorage.hpp" #include "xtensor/xutils.hpp" #include "xtensor/xadapt.hpp" #include "xtensor/xnoalias.hpp" namespace xt { template void shape_array_adapter(benchmark::State& state) { const V a({1,2,3,4}); const V b({1,2,3,4}); using value_type = typename V::value_type; for (auto _ : state) { xtensor result(std::array({4})); auto aa = xt::adapt(a); auto ab = xt::adapt(b); xt::noalias(result) = aa + ab; benchmark::DoNotOptimize(result.data()); } } template void shape_array_adapter_result(benchmark::State& state) { const V a({1, 2, 3, 4}); const V b({1, 2, 3, 4}); for (auto _ : state) { V res({0, 0, 0, 0}); auto aa = xt::adapt(a); auto ab = xt::adapt(b); auto ar = xt::adapt(res); xt::noalias(ar) = aa + ab; benchmark::DoNotOptimize(ar.data()); } } template void shape_array_adapter_result_copy(benchmark::State& state) { const V a({1, 2, 3, 4}); const V b({1, 2, 3, 4}); for (auto _ : state) { V res({0, 0, 0, 0}); auto aa = xt::adapt(a); auto ab = xt::adapt(b); auto ar = xt::adapt(res); auto fun = aa + ab; std::copy(fun.storage_cbegin(), fun.storage_cend(), ar.storage_begin()); benchmark::DoNotOptimize(ar.data()); } } template void shape_array_adapter_result_transform(benchmark::State& state) { const V a({1, 2, 3, 4}); const V b({1, 2, 3, 4}); for (auto _ : state) { V res({0, 0, 0, 0}); auto aa = xt::adapt(a); auto ab = xt::adapt(b); auto ar = xt::adapt(res); auto fun = aa + ab; std::transform(fun.storage_cbegin(), fun.storage_cend(), ar.storage_begin(), [](typename decltype(fun)::value_type x) { return static_cast(x); }); benchmark::DoNotOptimize(ar.data()); } } template void shape_no_adapter(benchmark::State& state) { V a({1, 2, 3, 4}); V b({1, 2, 3, 4}); for (auto _ : state) { V result({0, 0, 0, 0}); auto n = std::distance(a.begin(), a.end()); for (std::size_t i = 0; i < n; ++i) { result[i] = a[i] + b[i]; } benchmark::DoNotOptimize(result.data()); } } using array_type = std::array; // using array_type_ll = std::array; using array_type_ll = std::array; using uvector_type = xt::uvector>; using uvector_type_i64 = xt::uvector>; using uvector_type_i64_16 = xt::uvector>; using uvector_type_i64_ra = xt::uvector>; using small_type = xt::svector>; using small_type_d = xt::svector>; // BENCHMARK_TEMPLATE(shape_array_adapter, array_type); // BENCHMARK_TEMPLATE(shape_array_adapter, uvector_type); // BENCHMARK_TEMPLATE(shape_array_adapter, uvector_type_i64); // BENCHMARK_TEMPLATE(shape_array_adapter, uvector_type_i64_ra); // BENCHMARK_TEMPLATE(shape_array_adapter, std::vector); // BENCHMARK_TEMPLATE(shape_array_adapter, small_type); // BENCHMARK_TEMPLATE(shape_array_adapter, small_type_d); // BENCHMARK_TEMPLATE(shape_array_adapter_result, small_type); // BENCHMARK_TEMPLATE(shape_array_adapter_result, small_type_d); BENCHMARK_TEMPLATE(shape_array_adapter_result, array_type); // BENCHMARK_TEMPLATE(shape_array_adapter_result, array_type_ll); // BENCHMARK_TEMPLATE(shape_array_adapter_result_2, array_type); BENCHMARK_TEMPLATE(shape_array_adapter_result_copy, array_type); BENCHMARK_TEMPLATE(shape_array_adapter_result_transform, array_type); // // BENCHMARK_TEMPLATE(shape_array_adapter_result_2, array_type_ll); BENCHMARK_TEMPLATE(shape_no_adapter, array_type); // BENCHMARK_TEMPLATE(shape_no_adapter, std::vector); // BENCHMARK_TEMPLATE(shape_no_adapter, uvector_type_i64); // BENCHMARK_TEMPLATE(shape_no_adapter, uvector_type_i64_ra); // BENCHMARK_TEMPLATE(shape_no_adapter, 
uvector_type_i64_16); } xtensor-0.23.10/benchmark/benchmark_assign.cpp000066400000000000000000000200721405270662500213230ustar00rootroot00000000000000/*************************************************************************** * Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht * * * * Distributed under the terms of the BSD 3-Clause License. * * * * The full license is in the file LICENSE, distributed with this software. * ****************************************************************************/ #ifndef BENCHMARK_ASSIGN_HPP #define BENCHMARK_ASSIGN_HPP #include #include "xtensor/xnoalias.hpp" #include "xtensor/xtensor.hpp" #include "xtensor/xarray.hpp" namespace xt { namespace assign { /**************************** * Benchmark initialization * ****************************/ template inline void init_benchmark_data(V& lhs, V& rhs, std::size_t size0, std::size_t size1) { using T = typename V::value_type; for (std::size_t i = 0; i < size0; ++i) { for (std::size_t j = 0; j < size1; ++j) { lhs(i, j) = T(0.5) * T(j) / T(j + 1) + std::sqrt(T(i)) * T(9.) / T(size1); rhs(i, j) = T(10.2) / T(i + 2) + T(0.25) * T(j); } } } template inline void init_xtensor_benchmark(V& lhs, V& rhs, V& res, std::size_t size0, size_t size1) { lhs.resize({ size0, size1 }); rhs.resize({ size0, size1 }); res.resize({ size0, size1 }); init_benchmark_data(lhs, rhs, size0, size1); } template inline void init_dl_xtensor_benchmark(V& lhs, V& rhs, V& res, std::size_t size0, size_t size1) { using strides_type = typename V::strides_type; strides_type str = { size1, 1 }; lhs.resize({ size0, size1 }, str); rhs.resize({ size0, size1 }, str); res.resize({ size0, size1 }, str); init_benchmark_data(lhs, rhs, size0, size1); } template inline auto assign_c_assign(benchmark::State& state) { using size_type = typename E::size_type; E x, y, res; init_xtensor_benchmark(x, y, res, state.range(0), state.range(0)); for (auto _ : state) { size_type csize = x.size(); for (size_type i = 0; i < csize; ++i) { res.data()[i] = 3.0 * x.data()[i] - 2.0 * y.data()[i]; } benchmark::DoNotOptimize(res.data()); } } template inline auto assign_x_assign(benchmark::State& state) { E x, y, res; init_xtensor_benchmark(x, y, res, state.range(0), state.range(0)); for (auto _ : state) { xt::noalias(res) = 3.0 * x - 2.0 * y; benchmark::DoNotOptimize(res.data()); } } template inline auto assign_c_assign_ii(benchmark::State& state) { using size_type = typename E::size_type; E x, y, res; init_xtensor_benchmark(x, y, res, state.range(0), state.range(0)); for (auto _ : state) { size_type csize = x.size(); for (size_type i = 0; i < csize; ++i) { res.data()[i] = 3.0 * x.data()[i]; } benchmark::DoNotOptimize(res.data()); } } template inline auto assign_x_assign_ii(benchmark::State& state) { E x, y, res; init_xtensor_benchmark(x, y, res, state.range(0), state.range(0)); for (auto _ : state) { xt::noalias(res) = 3.0 * x; benchmark::DoNotOptimize(res.data()); } } template inline auto assign_x_assign_iii(benchmark::State& state) { E x, y, res; init_xtensor_benchmark(x, y, res, state.range(0), state.range(0)); for (auto _ : state) { xt::noalias(res) = y * x; benchmark::DoNotOptimize(res.data()); } } template inline auto assign_c_assign_iii(benchmark::State& state) { using size_type = typename E::size_type; E x, y, res; init_xtensor_benchmark(x, y, res, state.range(0), state.range(0)); for (auto _ : state) { size_type csize = x.size(); for (size_type i = 0; i < csize; ++i) { res.data()[i] = x.data()[i] * y.data()[i]; } benchmark::DoNotOptimize(res.data()); } 
} template inline auto assign_xstorageiter_copy(benchmark::State& state) { E x, y, res; init_xtensor_benchmark(x, y, res, state.range(0), state.range(0)); for (auto _ : state) { auto fun = 3.0 * x - 2.0 * y; std::copy(fun.storage_cbegin(), fun.storage_cend(), res.storage_begin()); benchmark::DoNotOptimize(res.data()); } } template inline auto assign_xiter_copy(benchmark::State& state) { E x, y, res; init_xtensor_benchmark(x, y, res, state.range(0), state.range(0)); for (auto _ : state) { auto fun = 3.0 * x - 2.0 * y; std::copy(fun.cbegin(), fun.cend(), res.begin()); benchmark::DoNotOptimize(res.data()); } } template inline auto assign_c_scalar_computed(benchmark::State& state) { using size_type = typename E::size_type; E x, y, res; init_xtensor_benchmark(x, y, res, state.range(0), state.range(0)); for (auto _ : state) { size_type csize = res.size(); for (size_type i = 0; i < csize; ++i) { res.storage()[i] += 3.123; } benchmark::DoNotOptimize(res.data()); } } template inline auto assign_x_scalar_computed(benchmark::State& state) { E x, y, res; init_xtensor_benchmark(x, y, res, state.range(0), state.range(0)); for (auto _ : state) { res += 3.123; benchmark::DoNotOptimize(res.data()); } } BENCHMARK_TEMPLATE(assign_c_assign, xt::xtensor)->Range(32, 32<<3); BENCHMARK_TEMPLATE(assign_x_assign, xt::xtensor)->Range(32, 32<<3); BENCHMARK_TEMPLATE(assign_xiter_copy, xt::xtensor)->Range(32, 32<<3); BENCHMARK_TEMPLATE(assign_xstorageiter_copy, xt::xtensor)->Range(32, 32<<3); BENCHMARK_TEMPLATE(assign_c_assign_ii, xt::xtensor)->Range(32, 32<<3); BENCHMARK_TEMPLATE(assign_x_assign_ii, xt::xtensor)->Range(32, 32<<3); BENCHMARK_TEMPLATE(assign_x_assign_iii, xt::xtensor)->Range(32, 32<<3); BENCHMARK_TEMPLATE(assign_c_assign_iii, xt::xtensor)->Range(32, 32<<3); BENCHMARK_TEMPLATE(assign_x_assign, xt::xarray)->Range(32, 32<<3); BENCHMARK_TEMPLATE(assign_x_assign, xt::xarray)->Range(32, 32<<3); BENCHMARK_TEMPLATE(assign_x_assign, xt::xtensor)->Range(32, 32<<3); BENCHMARK_TEMPLATE(assign_c_scalar_computed, xt::xtensor)->Range(32, 32<<3); BENCHMARK_TEMPLATE(assign_x_scalar_computed, xt::xtensor)->Range(32, 32<<3); } } #endif xtensor-0.23.10/benchmark/benchmark_builder.cpp000066400000000000000000000156761405270662500215030ustar00rootroot00000000000000/*************************************************************************** * Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht * * * * Distributed under the terms of the BSD 3-Clause License. * * * * The full license is in the file LICENSE, distributed with this software. 
* ****************************************************************************/ #include #include "xtensor/xnoalias.hpp" #include "xtensor/xtensor.hpp" #include "xtensor/xarray.hpp" namespace xt { template inline auto builder_xarange(benchmark::State& state) { for (auto _ : state) { T res = xt::arange(0, 10000); benchmark::DoNotOptimize(res.storage().data()); } } template inline auto builder_xarange_manual(benchmark::State& state) { for (auto _ : state) { T res = T::from_shape({10000}); for (std::size_t i = 0; i < 10000; ++i) { res.storage()[i] = i; } benchmark::DoNotOptimize(res.data()); } } inline auto builder_iota_vector(benchmark::State& state) { for (auto _ : state) { xt::uvector a {}; a.resize(10000); std::iota(a.begin(), a.end(), 0); benchmark::DoNotOptimize(a.data()); } } template inline auto builder_arange_for_loop_assign(benchmark::State& state) { for (auto _ : state) { auto expr = xt::arange(0, 10000); T res = T::from_shape({10000}); for (std::size_t i = 0; i < 10000; ++i) { res(i) = expr(i); } benchmark::DoNotOptimize(res.data()); } } template inline auto builder_arange_for_loop_iter_assign(benchmark::State& state) { for (auto _ : state) { auto expr = xt::arange(0, 10000); T res = T::from_shape({10000}); auto xend = expr.cend(); auto reit = res.begin(); for (auto it = expr.cbegin(); it != xend; ++it) { *reit++ = *it; } benchmark::DoNotOptimize(res.data()); } } template inline auto builder_arange_for_loop_iter_assign_backward(benchmark::State& state) { for (auto _ : state) { auto expr = xt::arange(0, 10000); T res = T::from_shape({10000}); auto xend = expr.cend(); auto reit = res.begin(); auto it = expr.cbegin(); for(ptrdiff_t n = 10000; n > 0; --n) { *reit = *it; ++it; ++reit; } benchmark::DoNotOptimize(res.data()); } } template inline auto builder_arange_assign_iterator(benchmark::State& state) { for (auto _ : state) { auto xa = xt::arange(0, 10000); T res = T::from_shape({10000}); std::copy(xa.cbegin(), xa.cend(), res.begin()); benchmark::DoNotOptimize(res.data()); } } template inline auto builder_std_iota(benchmark::State& state) { for (auto _ : state) { T res = T::from_shape({10000}); std::iota(res.begin(), res.end(), 0); benchmark::DoNotOptimize(res.data()); } } inline auto builder_ones(benchmark::State& state) { for (auto _ : state) { xt::xarray res = xt::ones({200, 200}); benchmark::DoNotOptimize(res.data()); } } inline auto builder_ones_assign_iterator(benchmark::State& state) { auto xo = xt::ones({200, 200}); for (auto _ : state) { xt::xarray res(xt::dynamic_shape{200, 200}); auto xo = xt::ones({200, 200}); std::copy(xo.begin(), xo.end(), res.begin()); benchmark::DoNotOptimize(res.storage().data()); } } inline auto builder_ones_expr_for(benchmark::State& state) { auto xo = xt::ones({200, 200}); for (auto _ : state) { xt::xtensor res(xt::static_shape({200, 200})); auto xo = xt::ones({200, 200}) * 0.15; for (std::size_t i = 0; i < xo.shape()[0]; ++i) for (std::size_t j = 0; j < xo.shape()[1]; ++j) res(i, j) = xo(i, j); benchmark::DoNotOptimize(res.storage().data()); } } inline auto builder_ones_expr(benchmark::State& state) { auto xo = xt::ones({200, 200}); for (auto _ : state) { xt::xtensor res = xt::ones({200, 200}) * 0.15; benchmark::DoNotOptimize(res.storage().data()); } } inline auto builder_ones_expr_fill(benchmark::State& state) { auto xo = xt::ones({200, 200}); for (auto _ : state) { xt::xtensor res = xt::xtensor::from_shape({200, 200}); std::fill(res.begin(), res.end(), 0.15); benchmark::DoNotOptimize(res.storage().data()); } } inline auto 
builder_std_fill(benchmark::State& state) { for (auto _ : state) { xt::xarray res(xt::dynamic_shape{200, 200}); std::fill(res.begin(), res.end(), 1); benchmark::DoNotOptimize(res.storage().data()); } } BENCHMARK_TEMPLATE(builder_xarange, xarray); BENCHMARK_TEMPLATE(builder_xarange, xtensor); BENCHMARK_TEMPLATE(builder_xarange_manual, xarray); BENCHMARK_TEMPLATE(builder_xarange_manual, xtensor); BENCHMARK_TEMPLATE(builder_arange_for_loop_assign, xarray); BENCHMARK_TEMPLATE(builder_arange_for_loop_assign, xtensor); BENCHMARK_TEMPLATE(builder_arange_assign_iterator, xarray); BENCHMARK_TEMPLATE(builder_arange_assign_iterator, xtensor); BENCHMARK_TEMPLATE(builder_arange_for_loop_iter_assign, xarray); BENCHMARK_TEMPLATE(builder_arange_for_loop_iter_assign_backward, xarray); BENCHMARK_TEMPLATE(builder_arange_for_loop_iter_assign, xtensor); BENCHMARK_TEMPLATE(builder_arange_for_loop_iter_assign_backward, xtensor); BENCHMARK_TEMPLATE(builder_std_iota, xarray); BENCHMARK(builder_iota_vector); BENCHMARK(builder_ones); BENCHMARK(builder_ones_assign_iterator); BENCHMARK(builder_ones_expr); BENCHMARK(builder_ones_expr_fill); BENCHMARK(builder_ones_expr_for); BENCHMARK(builder_std_fill); } xtensor-0.23.10/benchmark/benchmark_container.cpp000066400000000000000000000103221405270662500220160ustar00rootroot00000000000000/*************************************************************************** * Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht * * * * Distributed under the terms of the BSD 3-Clause License. * * * * The full license is in the file LICENSE, distributed with this software. * ****************************************************************************/ #include #include #include #include #include "xtensor/xarray.hpp" #include "xtensor/xtensor.hpp" namespace xt { namespace axpy_1d { // BENCHMARK Initialization template inline void init_benchmark(E& x, E& y, E& res, typename E::size_type size) { x.resize({ size }); y.resize({ size }); res.resize({ size }); using value_type = typename E::value_type; using size_type = typename E::size_type; for (size_type i = 0; i < size; ++i) { x(i) = 0.5 + value_type(i); y(i) = 0.25 * value_type(i); } } template inline auto container_iteration(benchmark::State& state) { using value_type = typename E::value_type; E x, y, res; init_benchmark(x, y, res, state.range(0)); value_type a = value_type(2.7); for (auto _ : state) { auto iterx = x.begin(); auto itery = y.begin(); for (auto iter = res.begin(); iter != res.end(); ++iter, ++iterx, ++itery) { *iter = a * (*iterx) + (*itery); } } } BENCHMARK_TEMPLATE(container_iteration, xarray_container>)->Arg(1000); BENCHMARK_TEMPLATE(container_iteration, xarray_container>)->Arg(1000); BENCHMARK_TEMPLATE(container_iteration, xtensor_container, 1>)->Arg(1000); BENCHMARK_TEMPLATE(container_iteration, xtensor_container, 1>)->Arg(1000); template inline auto container_xiteration(benchmark::State& state) { using value_type = typename E::value_type; E x, y, res; init_benchmark(x, y, res, state.range(0)); value_type a = value_type(2.7); for (auto _ : state) { auto iterx = x.begin(); auto itery = y.begin(); for (auto iter = res.begin(); iter != res.end(); ++iter, ++iterx, ++itery) { *iter = a * (*iterx) + (*itery); } } } BENCHMARK_TEMPLATE(container_xiteration, xarray_container>)->Arg(1000); BENCHMARK_TEMPLATE(container_xiteration, xarray_container>)->Arg(1000); BENCHMARK_TEMPLATE(container_xiteration, xtensor_container, 1>)->Arg(1000); BENCHMARK_TEMPLATE(container_xiteration, xtensor_container, 1>)->Arg(1000); 
        template <class E>
        inline auto container_indexing(benchmark::State& state)
        {
            using size_type = typename E::size_type;
            using value_type = typename E::value_type;
            E x, y, res;
            init_benchmark(x, y, res, state.range(0));
            value_type a = value_type(2.7);
            for (auto _ : state)
            {
                size_type n = x.size();
                for (size_type i = 0; i < n; ++i)
                {
                    res(i) = a * x(i) + y(i);
                }
            }
        }

        BENCHMARK_TEMPLATE(container_indexing, xarray_container<uvector<double>>)->Arg(1000);
        BENCHMARK_TEMPLATE(container_indexing, xarray_container<std::vector<double>>)->Arg(1000);
        BENCHMARK_TEMPLATE(container_indexing, xtensor_container<uvector<double>, 1>)->Arg(1000);
        BENCHMARK_TEMPLATE(container_indexing, xtensor_container<std::vector<double>, 1>)->Arg(1000);
    }
}
xtensor-0.23.10/benchmark/benchmark_creation.cpp000066400000000000000000000042411405270662500216430ustar00rootroot00000000000000/***************************************************************************
* Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht    *
*                                                                          *
* Distributed under the terms of the BSD 3-Clause License.                 *
*                                                                          *
* The full license is in the file LICENSE, distributed with this software. *
****************************************************************************/

#include <benchmark/benchmark.h>

#include "xtensor/xbuilder.hpp"
#include "xtensor/xarray.hpp"
#include "xtensor/xtensor.hpp"
#include "xtensor/xfixed.hpp"

namespace xt
{
    void benchmark_empty(benchmark::State& state)
    {
        for (auto _ : state)
        {
            auto e = xt::empty<double>({5, 5});
        }
    }

    template <class T>
    void benchmark_from_shape(benchmark::State& state)
    {
        for (auto _ : state)
        {
            T e = T::from_shape({5, 5});
        }
    }

    template <class T>
    void benchmark_creation(benchmark::State& state)
    {
        for (auto _ : state)
        {
            T e(typename T::shape_type({5, 5}));
        }
    }

    void benchmark_empty_to_xtensor(benchmark::State& state)
    {
        for (auto _ : state)
        {
            xtensor<double, 2> e = xt::empty<double>({5, 5});
        }
    }

    void benchmark_empty_to_xarray(benchmark::State& state)
    {
        for (auto _ : state)
        {
            xarray<double> e = xt::empty<double>({5, 5});
        }
    }

    BENCHMARK(benchmark_empty);
    BENCHMARK(benchmark_empty_to_xtensor);
    BENCHMARK(benchmark_empty_to_xarray);
    BENCHMARK_TEMPLATE(benchmark_from_shape, xarray<double>);
    BENCHMARK_TEMPLATE(benchmark_from_shape, xtensor<double, 2>);
    BENCHMARK_TEMPLATE(benchmark_creation, xarray<double>);
    BENCHMARK_TEMPLATE(benchmark_creation, xtensor<double, 2>);
}
xtensor-0.23.10/benchmark/benchmark_increment_stepper.cpp000066400000000000000000000050661405270662500235710ustar00rootroot00000000000000/***************************************************************************
* Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht    *
*                                                                          *
* Distributed under the terms of the BSD 3-Clause License.                 *
*                                                                          *
* The full license is in the file LICENSE, distributed with this software. *
****************************************************************************/

#include <benchmark/benchmark.h>

#include "xtensor/xarray.hpp"
#include "xtensor/xrandom.hpp"

#define SHAPE 30, 30
#define RANGE 3, 100

namespace xt
{
    namespace benchmark_xstepper
    {
        void stepper_stepper(benchmark::State& state)
        {
            std::vector<std::size_t> shape = {SHAPE, std::size_t(state.range(0))};
            xt::xarray<double> a = xt::random::rand<double>(shape);
            xt::xarray<double> b = xt::random::rand<double>(shape);
            volatile double c = 0;
            for (auto _ : state)
            {
                auto end = compute_size(shape);
                auto it = a.stepper_begin(shape);
                auto bit = b.stepper_begin(shape);
                xindex index(shape.size());
                xindex bindex(shape.size());
                for (std::size_t i = 0; i < end; ++i)
                {
                    c += *it + *bit;
                    stepper_tools<layout_type::row_major>::increment_stepper(bit, bindex, shape);
                    stepper_tools<layout_type::row_major>::increment_stepper(it, index, shape);
                }
                benchmark::DoNotOptimize(c);
            }
        }
        BENCHMARK(stepper_stepper)->Range(RANGE);

        void stepper_stepper_ref(benchmark::State& state)
        {
            std::vector<std::size_t> shape = {SHAPE, std::size_t(state.range(0))};
            xt::xarray<double> a = xt::random::rand<double>(shape);
            xt::xarray<double> b = xt::random::rand<double>(shape);
            xindex index;
            xindex bindex;
            volatile double c = 0;
            for (auto _ : state)
            {
                auto it = a.storage().begin();
                auto bit = b.storage().begin();
                auto end = a.storage().end();
                for (; it != end; ++it)
                {
                    c += *it + *bit;
                    ++bit;
                }
                benchmark::DoNotOptimize(c);
            }
        }
        BENCHMARK(stepper_stepper_ref)->Range(RANGE);
    }
}
xtensor-0.23.10/benchmark/benchmark_lambda_expressions.cpp000066400000000000000000000047341405270662500237270ustar00rootroot00000000000000/***************************************************************************
* Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht    *
*                                                                          *
* Distributed under the terms of the BSD 3-Clause License.                 *
*                                                                          *
* The full license is in the file LICENSE, distributed with this software. *
****************************************************************************/

#include <benchmark/benchmark.h>

#include "xtensor/xnoalias.hpp"
#include "xtensor/xbuilder.hpp"
#include "xtensor/xmath.hpp"
#include "xtensor/xtensor.hpp"
#include "xtensor/xarray.hpp"

namespace xt
{
    void lambda_cube(benchmark::State& state)
    {
        xtensor<double, 2> x = empty<double>({state.range(0), state.range(0)});
        for (auto _ : state)
        {
            xtensor<double, 2> res = xt::cube(x);
            benchmark::DoNotOptimize(res.data());
        }
    }

    void xexpression_cube(benchmark::State& state)
    {
        xtensor<double, 2> x = empty<double>({state.range(0), state.range(0)});
        for (auto _ : state)
        {
            xtensor<double, 2> res = x * x * x;
            benchmark::DoNotOptimize(res.data());
        }
    }

    void lambda_higher_pow(benchmark::State& state)
    {
        xtensor<double, 2> x = empty<double>({state.range(0), state.range(0)});
        for (auto _ : state)
        {
            xtensor<double, 2> res = xt::pow<16>(x);
            benchmark::DoNotOptimize(res.data());
        }
    }

    void xsimd_higher_pow(benchmark::State& state)
    {
        xtensor<double, 2> x = empty<double>({state.range(0), state.range(0)});
        for (auto _ : state)
        {
            xtensor<double, 2> res = xt::pow(x, 16);
            benchmark::DoNotOptimize(res.data());
        }
    }

    void xexpression_higher_pow(benchmark::State& state)
    {
        xtensor<double, 2> x = empty<double>({state.range(0), state.range(0)});
        for (auto _ : state)
        {
            xtensor<double, 2> res = x * x * x * x * x * x * x * x * x * x * x * x * x * x * x * x;
            benchmark::DoNotOptimize(res.data());
        }
    }

    BENCHMARK(lambda_cube)->Range(32, 32<<3);
    BENCHMARK(xexpression_cube)->Range(32, 32<<3);
    BENCHMARK(lambda_higher_pow)->Range(32, 32<<3);
    BENCHMARK(xsimd_higher_pow)->Range(32, 32<<3);
    BENCHMARK(xexpression_higher_pow)->Range(32, 32<<3);
}
xtensor-0.23.10/benchmark/benchmark_math.cpp000066400000000000000000000346741405270662500210030ustar00rootroot00000000000000/***************************************************************************
* Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht    *
*                                                                          *
* Distributed under the terms of the BSD 3-Clause License.                 *
*                                                                          *
* The full license is in the file LICENSE, distributed with this software. *
****************************************************************************/

#include <benchmark/benchmark.h>

#include <cmath>
#include <cstddef>
#include <string>

#include "xtensor/xarray.hpp"
#include "xtensor/xnoalias.hpp"
#include "xtensor/xtensor.hpp"

// For how many sizes should math functions be tested?
#define MATH_RANGE 64, 64

namespace xt
{
    namespace math
    {
        // TODO use a fixture here to avoid initializing arrays every time anew ...

        /****************************
         * Benchmark initialization *
         ****************************/

        template <class V>
        inline void init_benchmark_data(V& lhs, V& rhs, std::size_t size0, std::size_t size1)
        {
            using T = typename V::value_type;
            for (std::size_t i = 0; i < size0; ++i)
            {
                for (std::size_t j = 0; j < size1; ++j)
                {
                    lhs(i, j) = T(0.5) * T(j) / T(j + 1) + std::sqrt(T(i)) * T(9.)
/ T(size1); rhs(i, j) = T(10.2) / T(i + 2) + T(0.25) * T(j); } } } template inline void init_xtensor_benchmark(V& lhs, V& rhs, V& res, std::size_t size0, size_t size1) { lhs.resize({ size0, size1 }); rhs.resize({ size0, size1 }); res.resize({ size0, size1 }); init_benchmark_data(lhs, rhs, size0, size1); } template inline void init_ext_benchmark(V& lhs, V& rhs, V& res, std::size_t size0, size_t size1) { lhs.resize(size0, size1); rhs.resize(size0, size1); res.resize(size0, size1); init_benchmark_data(lhs, rhs, size0, size1); } /*********************** * Benchmark functions * ***********************/ template inline void math_xtensor_2(benchmark::State& state) { xtensor lhs, rhs, res; init_xtensor_benchmark(lhs, rhs, res, state.range(0), state.range(0)); auto f = F(); for (auto _ : state) { xt::noalias(res) = f(lhs, rhs); benchmark::DoNotOptimize(res.data()); } } template inline void math_xtensor_cpy_2(benchmark::State& state) { xtensor lhs, rhs, res; init_xtensor_benchmark(lhs, rhs, res, state.range(0), state.range(0)); auto f = F(); for (auto _ : state) { auto fct = f(lhs, rhs); std::copy(fct.storage_begin(), fct.storage_end(), res.storage_begin()); benchmark::DoNotOptimize(res.data()); } } template inline void math_xtensor_1(benchmark::State& state) { xtensor lhs, rhs, res; init_xtensor_benchmark(lhs, rhs, res, state.range(0), state.range(0)); auto f = F(); for (auto _ : state) { xt::noalias(res) = f(lhs); benchmark::DoNotOptimize(res.data()); } } template inline auto math_ref_2(benchmark::State& state) { auto f = F(); xtensor lhs, rhs, res; init_xtensor_benchmark(lhs, rhs, res, state.range(0), state.range(0)); size_t size = lhs.shape()[0] * lhs.shape()[1]; for (auto _ : state) { for (std::size_t i = 0; i < size; ++i) { res.data()[i] = f(lhs.data()[i], res.data()[i]); } benchmark::DoNotOptimize(res.data()); } } template inline void math_ref_1(benchmark::State& state) { auto f = F(); xtensor lhs, rhs, res; init_xtensor_benchmark(lhs, rhs, res, state.range(0), state.range(0)); size_t size = lhs.shape()[0] * lhs.shape()[1]; for (auto _ : state) { for (std::size_t i = 0; i < size; ++i) { res.data()[i] = f(lhs.data()[i]); } benchmark::DoNotOptimize(res.data()); } } /********************** * Benchmark functors * **********************/ #define DEFINE_OP_FUNCTOR_2OP(OP, NAME)\ struct NAME##_fn {\ template \ inline auto operator()(const T& lhs, const T& rhs) const { return lhs OP rhs; }\ inline static std::string name() { return #NAME; }\ } #define DEFINE_FUNCTOR_1OP(FN)\ struct FN##_fn {\ template \ inline auto operator()(const T& x) const { using std::FN; using xt::FN; return FN(x); }\ inline static std::string name() { return #FN; }\ } #define DEFINE_FUNCTOR_2OP(FN)\ struct FN##_fn{\ template \ inline auto operator()(const T&lhs, const T& rhs) const { using std::FN; using xt::FN; return FN(lhs, rhs); }\ inline static std::string name() { return #FN; }\ } DEFINE_OP_FUNCTOR_2OP(+, add); DEFINE_OP_FUNCTOR_2OP(-, sub); DEFINE_OP_FUNCTOR_2OP(*, mul); DEFINE_OP_FUNCTOR_2OP(/ , div); DEFINE_FUNCTOR_1OP(exp); DEFINE_FUNCTOR_1OP(exp2); DEFINE_FUNCTOR_1OP(expm1); DEFINE_FUNCTOR_1OP(log); DEFINE_FUNCTOR_1OP(log10); DEFINE_FUNCTOR_1OP(log2); DEFINE_FUNCTOR_1OP(log1p); DEFINE_FUNCTOR_1OP(sin); DEFINE_FUNCTOR_1OP(cos); DEFINE_FUNCTOR_1OP(tan); DEFINE_FUNCTOR_1OP(asin); DEFINE_FUNCTOR_1OP(acos); DEFINE_FUNCTOR_1OP(atan); DEFINE_FUNCTOR_1OP(sinh); DEFINE_FUNCTOR_1OP(cosh); DEFINE_FUNCTOR_1OP(tanh); DEFINE_FUNCTOR_1OP(asinh); DEFINE_FUNCTOR_1OP(acosh); DEFINE_FUNCTOR_1OP(atanh); DEFINE_FUNCTOR_2OP(pow); 
DEFINE_FUNCTOR_1OP(sqrt); DEFINE_FUNCTOR_1OP(cbrt); DEFINE_FUNCTOR_2OP(hypot); DEFINE_FUNCTOR_1OP(ceil); DEFINE_FUNCTOR_1OP(floor); DEFINE_FUNCTOR_1OP(trunc); DEFINE_FUNCTOR_1OP(round); DEFINE_FUNCTOR_1OP(nearbyint); DEFINE_FUNCTOR_1OP(rint); /******************** * benchmark groups * ********************/ BENCHMARK_TEMPLATE(math_ref_2, add_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_2, add_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_cpy_2, add_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_2, sub_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_2, sub_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_cpy_2, sub_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_2, mul_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_2, mul_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_cpy_2, mul_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_2, div_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_2, div_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_cpy_2, div_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, exp_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, exp_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, exp2_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, exp2_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, expm1_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, expm1_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, log_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, log_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, log2_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, log2_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, log10_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, log10_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, log1p_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, log1p_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, sin_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, sin_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, cos_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, cos_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, tan_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, tan_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, asin_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, asin_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, acos_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, acos_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, atan_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, atan_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, sinh_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, sinh_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, cosh_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, cosh_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, tanh_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, tanh_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, asinh_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, asinh_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, acosh_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, acosh_fn, xtensor)->Range(MATH_RANGE); 
BENCHMARK_TEMPLATE(math_ref_1, atanh_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, atanh_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_2, pow_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_2, pow_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, sqrt_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, sqrt_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, cbrt_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, cbrt_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_2, hypot_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_2, hypot_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, ceil_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, ceil_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, floor_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, floor_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, trunc_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, trunc_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, round_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, round_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, nearbyint_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, nearbyint_fn, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_ref_1, rint_fn)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(math_xtensor_1, rint_fn, xtensor)->Range(MATH_RANGE); } template void scalar_assign(benchmark::State& state) { T res; std::size_t sz = static_cast(state.range(0)); res.resize({sz, sz}); for (auto _ : state) { res += typename T::value_type(1); benchmark::DoNotOptimize(res.data()); } } template void scalar_assign_ref(benchmark::State& state) { T res; std::size_t sz = static_cast(state.range(0)); res.resize({sz, sz}); for (auto _ : state) { auto szt = res.size(); for (std::size_t i = 0; i < szt; ++i) { res.data()[i] += typename T::value_type(1); } benchmark::DoNotOptimize(res.data()); } } template void boolean_func(benchmark::State& state) { T a, b; std::size_t sz = static_cast(state.range(0)); a.resize({sz, sz}); b.resize({sz, sz}); xtensor res; res.resize({sz, sz}); for (auto _ : state) { res = equal(a, b); benchmark::DoNotOptimize(res.data()); } } template void boolean_func_ref(benchmark::State& state) { T a, b; std::size_t sz = static_cast(state.range(0)); a.resize({sz, sz}); b.resize({sz, sz}); xtensor res; res.resize({sz, sz}); for (auto _ : state) { auto szt = res.size(); for (std::size_t i = 0; i < szt; ++i) { res.data()[i] = (a.data()[i] == b.data()[i]); } benchmark::DoNotOptimize(res.data()); } } BENCHMARK_TEMPLATE(scalar_assign, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(scalar_assign_ref, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(boolean_func, xtensor)->Range(MATH_RANGE); BENCHMARK_TEMPLATE(boolean_func_ref, xtensor)->Range(MATH_RANGE); } xtensor-0.23.10/benchmark/benchmark_random.cpp000066400000000000000000000040011405270662500213110ustar00rootroot00000000000000/*************************************************************************** * Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht * * * * Distributed under the terms of the BSD 3-Clause License. * * * * The full license is in the file LICENSE, distributed with this software. 
****************************************************************************/

#ifndef BENCHMARK_RANDOM_HPP
#define BENCHMARK_RANDOM_HPP

#include <benchmark/benchmark.h>

#include "xtensor/xnoalias.hpp"
#include "xtensor/xtensor.hpp"
#include "xtensor/xarray.hpp"
#include "xtensor/xrandom.hpp"

namespace xt
{
    namespace random_bench
    {
        void random_assign_xtensor(benchmark::State& state)
        {
            for (auto _ : state)
            {
                xtensor<double, 2> result = xt::random::rand<double>({20, 20});
                benchmark::DoNotOptimize(result.data());
            }
        }

        void random_assign_forloop(benchmark::State& state)
        {
            for (auto _ : state)
            {
                xtensor<double, 2> result;
                result.resize({20, 20});
                std::uniform_real_distribution<double> dist(0, 1);
                auto& engine = xt::random::get_default_random_engine();
                for (auto& el : result.storage())
                {
                    el = dist(engine);
                }
                benchmark::DoNotOptimize(result.data());
            }
        }

        void random_assign_xarray(benchmark::State& state)
        {
            for (auto _ : state)
            {
                xarray<double> result = xt::random::rand<double>({20, 20});
                benchmark::DoNotOptimize(result.data());
            }
        }

        BENCHMARK(random_assign_xarray);
        BENCHMARK(random_assign_xtensor);
        BENCHMARK(random_assign_forloop);
    }
}

#endif
xtensor-0.23.10/benchmark/benchmark_reducer.cpp000066400000000000000000000101421405270662500214650ustar00rootroot00000000000000/***************************************************************************
* Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht    *
*                                                                          *
* Distributed under the terms of the BSD 3-Clause License.                 *
*                                                                          *
* The full license is in the file LICENSE, distributed with this software. *
****************************************************************************/

#include <benchmark/benchmark.h>

#include "xtensor/xarray.hpp"
#include "xtensor/xreducer.hpp"

namespace xt
{
    namespace reducer
    {
        template <class E, class X>
        void reducer_reducer(benchmark::State& state, const E& x, E& res, const X& axes)
        {
            for (auto _ : state)
            {
                res = sum(x, axes);
                benchmark::DoNotOptimize(res.data());
            }
        }

        template <class E, class X>
        void reducer_immediate_reducer(benchmark::State& state, const E& x, E& res, const X& axes)
        {
            for (auto _ : state)
            {
                res = sum(x, axes, evaluation_strategy::immediate);
                benchmark::DoNotOptimize(res.data());
            }
        }

        xarray<double> u = ones<double>({ 10, 100000 });
        xarray<double> v = ones<double>({ 100000, 10 });
        xarray<double> res2 = ones<double>({ 1 });
        std::vector<std::size_t> axis0 = { 0 };
        std::vector<std::size_t> axis1 = { 1 };
        std::vector<std::size_t> axis_both = { 0, 1 };
        static auto res0 = xarray<double>::from_shape({ 100000 });
        static auto res1 = xarray<double>::from_shape({ 10 });

        BENCHMARK_CAPTURE(reducer_reducer, 10x100000/axis 0, u, res0, axis0);
        BENCHMARK_CAPTURE(reducer_reducer, 10x100000/axis 1, u, res1, axis1);
        BENCHMARK_CAPTURE(reducer_reducer, 100000x10/axis 1, v, res1, axis0);
        BENCHMARK_CAPTURE(reducer_reducer, 100000x10/axis 0, v, res0, axis1);
        BENCHMARK_CAPTURE(reducer_reducer, 100000x10/axis both, v, res2, axis_both);
        BENCHMARK_CAPTURE(reducer_immediate_reducer, 10x100000/axis 0, u, res0, axis0);
        BENCHMARK_CAPTURE(reducer_immediate_reducer, 10x100000/axis 1, u, res1, axis1);
        BENCHMARK_CAPTURE(reducer_immediate_reducer, 100000x10/axis 1, v, res1, axis0);
        BENCHMARK_CAPTURE(reducer_immediate_reducer, 100000x10/axis 0, v, res0, axis1);
        BENCHMARK_CAPTURE(reducer_immediate_reducer, 100000x10/axis both, v, res2, axis_both);

        template <class E, class X>
        inline auto reducer_manual_strided_reducer(benchmark::State& state, const E& x, E& res, const X& axes)
        {
            using value_type = typename E::value_type;
            std::size_t stride = x.strides()[axes[0]];
            std::size_t offset_end = x.strides()[axes[0]] * x.shape()[axes[0]];
            std::size_t offset_iter = 0;
            if (axes[0] == 1)
            {
                offset_iter = x.strides()[0];
            }
            else if (axes[0] == 0)
            {
                offset_iter = x.strides()[1];
            }
            for (auto _ : state)
            {
                for
(std::size_t j = 0; j < res.shape()[0]; ++j) { auto begin = x.data() + (offset_iter * j); auto end = begin + offset_end; value_type temp = *begin; begin += stride; for (; begin < end; begin += stride) { temp += *begin; } res(j) = temp; } benchmark::DoNotOptimize(res.data()); } } BENCHMARK_CAPTURE(reducer_manual_strided_reducer, 10x100000/axis 0, u, res0, axis0); BENCHMARK_CAPTURE(reducer_manual_strided_reducer, 10x100000/axis 1, u, res1, axis1); BENCHMARK_CAPTURE(reducer_manual_strided_reducer, 100000x10/axis 1, v, res1, axis0); BENCHMARK_CAPTURE(reducer_manual_strided_reducer, 100000x10/axis 0, v, res0, axis1); } } xtensor-0.23.10/benchmark/benchmark_view_access.cpp000066400000000000000000000341131405270662500223330ustar00rootroot00000000000000/*************************************************************************** * Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht * * * * Distributed under the terms of the BSD 3-Clause License. * * * * The full license is in the file LICENSE, distributed with this software. * ****************************************************************************/ #include // #include "xtensor/xshape.hpp" #include "xtensor/xadapt.hpp" #include "xtensor/xnoalias.hpp" #include "xtensor/xrandom.hpp" #include "xtensor/xbuilder.hpp" #include "xtensor/xstorage.hpp" #include "xtensor/xutils.hpp" #include "xtensor/xview.hpp" namespace xt { template class simple_array { public: using self_type = simple_array; using shape_type = std::array; simple_array() = default; explicit simple_array(const std::array& shape) : m_shape(shape) { ptrdiff_t data_size = 1; m_strides[N - 1] = 1; for (std::ptrdiff_t i = N - 1; i > 0; --i) { data_size *= static_cast(shape[i]); m_strides[i - 1] = data_size; } data_size *= shape[0]; memory.resize(data_size); } template self_type& operator=(const xexpression& e) { const E& de = e.derived_cast(); std::copy(de.cbegin(), de.cend(), memory.begin()); return *this; } void fill(T val) { std::fill(memory.begin(), memory.end(), val); } template T& operator()(Args... args) { std::array idx({static_cast(args)...}); static_assert(sizeof...(Args) == N, "too few or too many indices!"); ptrdiff_t offset = 0; for (std::size_t i = 0; i < N; ++i) { offset += m_strides[i] * idx[i]; } return memory[offset]; } xt::uvector memory; std::array m_shape, m_strides; }; void xview_access_calc(benchmark::State &state) { xt::xtensor A = xt::random::rand({100, 100, 4, 4}); xt::xtensor elemvec = xt::random::rand({100, 4, 4}); xt::xtensor eps = xt::empty({2, 2}); for (auto _ : state) { for (size_t e = 0; e < 100; ++e) { // alias element vector (e.g. 
nodal displacements) auto u = xt::view(elemvec, e, xt::all(), xt::all()); for (size_t k = 0; k < 100; ++k) { auto dNx = xt::view(A, e, k, xt::all(), xt::all()); // - evaluate symmetrized dyadic product (loops unrolled for efficiency) // grad(i,j) += dNx(m,i) * u(m,j) // eps (j,i) = 0.5 * ( grad(i,j) + grad(j,i) ) eps(0, 0) = dNx(0, 0) * u(0, 0) + dNx(1, 0) * u(1, 0) + dNx(2, 0) * u(2, 0) + dNx(3, 0) * u(3, 0); eps(1, 1) = dNx(0, 1) * u(0, 1) + dNx(1, 1) * u(1, 1) + dNx(2, 1) * u(2, 1) + dNx(3, 1) * u(3, 1); eps(0, 1) = (dNx(0, 1) * u(0, 0) + dNx(1, 1) * u(1, 0) + dNx(2, 1) * u(2, 0) + dNx(3, 1) * u(3, 0) + dNx(0, 0) * u(0, 1) + dNx(1, 0) * u(1, 1) + dNx(2, 0) * u(2, 1) + dNx(3, 0) * u(3, 1)) / 2.; eps(1, 0) = eps(0, 1); benchmark::DoNotOptimize(eps.storage()); } } } } void raw_access_calc(benchmark::State &state) { xt::xtensor A = xt::random::rand({100, 100, 4, 4}); xt::xtensor elemvec = xt::random::rand({100, 4, 4}); xt::xtensor eps = xt::empty({2, 2}); for (auto _ : state) { for (size_t e = 0; e < 100; ++e) { for (size_t k = 0; k < 100; ++k) { // - evaluate symmetrized dyadic product (loops unrolled for efficiency) // grad(i,j) += dNx(m,i) * u(m,j) // eps (j,i) = 0.5 * ( grad(i,j) + grad(j,i) ) eps(0, 0) = A(e, k, 0, 0) * elemvec(e, 0, 0) + A(e, k, 1, 0) * elemvec(e, 1, 0) + A(e, k, 2, 0) * elemvec(e, 2, 0) + A(e, k, 3, 0) * elemvec(e, 3, 0); eps(1, 1) = A(e, k, 0, 1) * elemvec(e, 0, 1) + A(e, k, 1, 1) * elemvec(e, 1, 1) + A(e, k, 2, 1) * elemvec(e, 2, 1) + A(e, k, 3, 1) * elemvec(e, 3, 1); eps(0, 1) = (A(e, k, 0, 1) * elemvec(e, 0, 0) + A(e, k, 1, 1) * elemvec(e, 1, 0) + A(e, k, 2, 1) * elemvec(e, 2, 0) + A(e, k, 3, 1) * elemvec(e, 3, 0) + A(e, k, 0, 0) * elemvec(e, 0, 1) + A(e, k, 1, 0) * elemvec(e, 1, 1) + A(e, k, 2, 0) * elemvec(e, 2, 1) + A(e, k, 3, 0) * elemvec(e, 3, 1)) / 2.; eps(1, 0) = eps(0, 1); benchmark::DoNotOptimize(eps.storage()); } } } } void unchecked_access_calc(benchmark::State &state) { xt::xtensor A = xt::random::rand({100, 100, 4, 4}); xt::xtensor elemvec = xt::random::rand({100, 4, 4}); xt::xtensor eps = xt::empty({2, 2}); for (auto _ : state) { for (size_t e = 0; e < 100; ++e) { for (size_t k = 0; k < 100; ++k) { // - evaluate symmetrized dyadic product (loops unrolled for efficiency) // grad(i,j) += dNx(m,i) * u(m,j) // eps (j,i) = 0.5 * ( grad(i,j) + grad(j,i) ) eps.unchecked(0, 0) = A.unchecked(e, k, 0, 0) * elemvec.unchecked(e, 0, 0) + A.unchecked(e, k, 1, 0) * elemvec.unchecked(e, 1, 0) + A.unchecked(e, k, 2, 0) * elemvec.unchecked(e, 2, 0) + A.unchecked(e, k, 3, 0) * elemvec.unchecked(e, 3, 0); eps.unchecked(1, 1) = A.unchecked(e, k, 0, 1) * elemvec.unchecked(e, 0, 1) + A.unchecked(e, k, 1, 1) * elemvec.unchecked(e, 1, 1) + A.unchecked(e, k, 2, 1) * elemvec.unchecked(e, 2, 1) + A.unchecked(e, k, 3, 1) * elemvec.unchecked(e, 3, 1); eps.unchecked(0, 1) = (A.unchecked(e, k, 0, 1) * elemvec.unchecked(e, 0, 0) + A.unchecked(e, k, 1, 1) * elemvec.unchecked(e, 1, 0) + A.unchecked(e, k, 2, 1) * elemvec.unchecked(e, 2, 0) + A.unchecked(e, k, 3, 1) * elemvec.unchecked(e, 3, 0) + A.unchecked(e, k, 0, 0) * elemvec.unchecked(e, 0, 1) + A.unchecked(e, k, 1, 0) * elemvec.unchecked(e, 1, 1) + A.unchecked(e, k, 2, 0) * elemvec.unchecked(e, 2, 1) + A.unchecked(e, k, 3, 0) * elemvec.unchecked(e, 3, 1)) / 2.; eps.unchecked(1, 0) = eps.unchecked(0, 1); benchmark::DoNotOptimize(eps.storage()); } } } } void simplearray_access_calc(benchmark::State &state) { simple_array A(std::array{100, 100, 4, 2}); simple_array elemvec(std::array{100, 4, 2}); simple_array eps(std::array{2, 2}); 
for (auto _ : state) { for (size_t e = 0; e < 100; ++e) { for (size_t k = 0; k < 100; ++k) { // - evaluate sy mmetrized dyadic product (loops unrolled for efficiency) // grad(i,j) += dNx(m,i) * u(m,j) // eps (j,i) = 0.5 * ( grad(i,j) + grad(j,i) ) eps(0, 0) = A(e, k, 0, 0) * elemvec(e, 0, 0) + A(e, k, 1, 0) * elemvec(e, 1, 0) + A(e, k, 2, 0) * elemvec(e, 2, 0) + A(e, k, 3, 0) * elemvec(e, 3, 0); eps(1, 1) = A(e, k, 0, 1) * elemvec(e, 0, 1) + A(e, k, 1, 1) * elemvec(e, 1, 1) + A(e, k, 2, 1) * elemvec(e, 2, 1) + A(e, k, 3, 1) * elemvec(e, 3, 1); eps(0, 1) = (A(e, k, 0, 1) * elemvec(e, 0, 0) + A(e, k, 1, 1) * elemvec(e, 1, 0) + A(e, k, 2, 1) * elemvec(e, 2, 0) + A(e, k, 3, 1) * elemvec(e, 3, 0) + A(e, k, 0, 0) * elemvec(e, 0, 1) + A(e, k, 1, 0) * elemvec(e, 1, 1) + A(e, k, 2, 0) * elemvec(e, 2, 1) + A(e, k, 3, 0) * elemvec(e, 3, 1)) / 2.; eps(1, 0) = eps(0, 1); benchmark::DoNotOptimize(eps.memory); } } } } #define M_NELEM 3600 #define M_NNE 4 #define M_NDIM 2 template class jumping_random { public: using shape_type = typename X::shape_type; jumping_random() : m_dofs(shape_type{3721, 2}), m_conn(shape_type{3600, 4}) { m_dofs = xt::clip(xt::reshape_view(xt::arange(2 * 3721), {3721, 2}), 0, 7199); for (std::size_t i = 0; i < 3600; ++i) { m_conn(i, 0) = i; m_conn(i, 1) = i + 1; m_conn(i, 2) = i + 62; m_conn(i, 3) = i + 61; } } auto calc_dofval(xt::xtensor& elemvec, xt::xtensor& dofval) { dofval.fill(0.0); for (size_t e = 0 ; e < M_NELEM ; ++e) for (size_t m = 0 ; m < M_NNE ; ++m) for (size_t i = 0 ; i < M_NDIM; ++i) dofval(m_dofs(m_conn(e, m), i)) += elemvec(e, m, i); } auto calc_dofval_simple(simple_array& elemvec, simple_array& dofval) { dofval.fill(0.0); for (size_t e = 0 ; e < M_NELEM ; ++e) for (size_t m = 0 ; m < M_NNE ; ++m) for (size_t i = 0 ; i < M_NDIM; ++i) dofval(m_dofs(m_conn(e, m), i)) += elemvec(e, m, i); } auto calc_unchecked_dofval(xt::xtensor& elemvec, xt::xtensor& dofval) { dofval.fill(0.0); for (size_t e = 0 ; e < M_NELEM ; ++e) for (size_t m = 0 ; m < M_NNE ; ++m) for (size_t i = 0 ; i < M_NDIM; ++i) dofval.unchecked(m_dofs.unchecked(m_conn.unchecked(e, m), i)) += elemvec.unchecked(e, m, i); } X m_dofs, m_conn; }; template void jumping_access(benchmark::State& state) { auto rx = jumping_random, L>(); xt::xtensor elemvec = xt::random::rand({M_NELEM, M_NNE, M_NDIM}); xt::xtensor dofval = xt::empty({7200}); for (auto _ : state) { rx.calc_dofval(elemvec, dofval); benchmark::DoNotOptimize(dofval.data()); } } template void jumping_access_unchecked(benchmark::State& state) { auto rx = jumping_random, L>(); xt::xtensor elemvec = xt::random::rand({M_NELEM, M_NNE, M_NDIM}); xt::xtensor dofval = xt::empty({7200}); for (auto _ : state) { rx.calc_unchecked_dofval(elemvec, dofval); benchmark::DoNotOptimize(dofval.data()); } } void jumping_access_simplearray(benchmark::State& state) { auto rx = jumping_random, layout_type::row_major>(); simple_array elemvec({M_NELEM, M_NNE, M_NDIM}); elemvec = xt::random::rand({M_NELEM, M_NNE, M_NDIM}); simple_array dofval({7200}); for (auto _ : state) { rx.calc_dofval_simple(elemvec, dofval); benchmark::DoNotOptimize(dofval.memory); } } BENCHMARK(raw_access_calc); BENCHMARK(unchecked_access_calc); BENCHMARK(simplearray_access_calc); BENCHMARK(xview_access_calc); BENCHMARK_TEMPLATE(jumping_access, layout_type::row_major); BENCHMARK_TEMPLATE(jumping_access, layout_type::column_major); BENCHMARK_TEMPLATE(jumping_access_unchecked, layout_type::row_major); BENCHMARK_TEMPLATE(jumping_access_unchecked, layout_type::column_major); 
BENCHMARK(jumping_access_simplearray); }xtensor-0.23.10/benchmark/benchmark_view_adapt.cpp000066400000000000000000000047541405270662500221730ustar00rootroot00000000000000/*************************************************************************** * Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht * * * * Distributed under the terms of the BSD 3-Clause License. * * * * The full license is in the file LICENSE, distributed with this software. * ****************************************************************************/ #ifndef BENCHMARK_VIEW_ADAPT_HPP #define BENCHMARK_VIEW_ADAPT_HPP #include #include "xtensor/xnoalias.hpp" #include "xtensor/xtensor.hpp" #include "xtensor/xview.hpp" #include "xtensor/xfixed.hpp" #include "xtensor/xrandom.hpp" #include "xtensor/xadapt.hpp" namespace xt { namespace benchmark_view_adapt { using T2 = xt::xtensor_fixed>; T2 foo(const T2 &A) { return 2. * A; } void random_view(benchmark::State& state) { xt::xtensor A = xt::random::randn({2000,8,2,2}); xt::xtensor B = xt::empty(A.shape()); for (auto _ : state) { for ( size_t i = 0 ; i < A.shape()[0] ; ++i ) { for ( size_t j = 0 ; j < A.shape()[1] ; ++j ) { auto a = xt::view(A, i, j); auto b = xt::view(B, i, j); xt::noalias(b) = foo(a); } } benchmark::DoNotOptimize(B.data()); } } void random_adapt(benchmark::State& state) { xt::xtensor A = xt::random::randn({2000,8,2,2}); xt::xtensor B = xt::empty(A.shape()); for (auto _ : state) { for ( size_t i = 0 ; i < A.shape()[0] ; ++i ) { for ( size_t j = 0 ; j < A.shape()[1] ; ++j ) { auto a = xt::adapt(&A(i,j,0,0), xt::xshape<2,2>()); auto b = xt::adapt(&B(i,j,0,0), xt::xshape<2,2>()); xt::noalias(b) = foo(a); } } benchmark::DoNotOptimize(B.data()); } } BENCHMARK(random_view); BENCHMARK(random_adapt); } } #endif xtensor-0.23.10/benchmark/benchmark_view_assignment.cpp000066400000000000000000000127731405270662500232520ustar00rootroot00000000000000/*************************************************************************** * Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht * * * * Distributed under the terms of the BSD 3-Clause License. * * * * The full license is in the file LICENSE, distributed with this software. 
* ****************************************************************************/ #include #include "xtensor/xnoalias.hpp" #include "xtensor/xtensor.hpp" #include "xtensor/xarray.hpp" #include "xtensor/xfixed.hpp" #include "xtensor/xrandom.hpp" namespace xt { inline void create_xview(benchmark::State& state) { xt::xtensor tens = xt::random::rand({100, 100, 3, 3}); for (auto _ : state) { auto v = xt::view(tens, 1, 2, all(), all()); } } inline void create_strided_view_outofplace(benchmark::State& state) { xt::xtensor tens = xt::random::rand({100, 100, 3, 3}); xstrided_slice_vector sv = {1, 2, all(), all()}; for (auto _ : state) { auto v = xt::strided_view(tens, sv); } } inline void create_strided_view_inplace(benchmark::State& state) { xt::xtensor tens = xt::random::rand({100, 100, 3, 3}); for (auto _ : state) { auto v = xt::strided_view(tens, {1, 2, all(), all()}); } } inline void assign_create_view(benchmark::State& state) { xt::xtensor tens = xt::random::rand({100, 100, 3, 3}); for (auto _ : state) { for (std::size_t i = 0; i < tens.shape()[0]; ++i) { for (std::size_t j = 0; j < tens.shape()[1]; ++j) { auto v = xt::view(tens, i, j, all(), all()); xt::xtensor vas = v; benchmark::ClobberMemory(); } } } } inline void assign_create_strided_view(benchmark::State& state) { xt::xtensor tens = xt::random::rand({100, 100, 3, 3}); for (auto _ : state) { for (std::size_t i = 0; i < tens.shape()[0]; ++i) { for (std::size_t j = 0; j < tens.shape()[1]; ++j) { auto v = xt::strided_view(tens, {i, j, all(), all()}); xt::xtensor vas = v; benchmark::ClobberMemory(); } } } } inline void assign_create_manual_view(benchmark::State& state) { xt::xtensor tens = xt::random::rand({100, 100, 3, 3}); for (auto _ : state) { for (std::size_t i = 0; i < tens.shape()[0]; ++i) { for (std::size_t j = 0; j < tens.shape()[1]; ++j) { auto v = xt::view(tens, i, j, all(), all()); xt::xtensor vas(std::array({3, 3})); std::copy(v.data() + v.data_offset(), v.data() + v.data_offset() + vas.size(), vas.begin()); benchmark::ClobberMemory(); } } } } inline void assign_create_manual_noview(benchmark::State& state) { xt::xtensor tens = xt::random::rand({100, 100, 3, 3}); for (auto _ : state) { for (std::size_t i = 0; i < tens.shape()[0]; ++i) { for (std::size_t j = 0; j < tens.shape()[1]; ++j) { ptrdiff_t offset = i * tens.strides()[0] + j * tens.strides()[1]; xt::xtensor vas(std::array({3, 3})); std::copy(tens.data() + offset, tens.data() + offset + vas.size(), vas.begin()); benchmark::ClobberMemory(); } } } } inline void data_offset(benchmark::State& state) { xt::xtensor tens = xt::random::rand({100, 100, 3, 3}); for (auto _ : state) { for (std::size_t i = 0; i < tens.shape()[0]; ++i) { for (std::size_t j = 0; j < tens.shape()[1]; ++j) { volatile ptrdiff_t offset = i * tens.strides()[0] + j * tens.strides()[1]; static_cast(offset); } } } } inline void data_offset_view(benchmark::State& state) { xt::xtensor tens = xt::random::rand({100, 100, 3, 3}); for (auto _ : state) { for (std::size_t i = 0; i < tens.shape()[0]; ++i) { for (std::size_t j = 0; j < tens.shape()[1]; ++j) { auto v = xt::view(tens, i, j, all(), all()); volatile ptrdiff_t offset = v.data_offset(); static_cast(offset); } } } } BENCHMARK(create_xview); BENCHMARK(create_strided_view_outofplace); BENCHMARK(create_strided_view_inplace); BENCHMARK(assign_create_manual_noview); BENCHMARK(assign_create_strided_view); BENCHMARK(assign_create_view); BENCHMARK(assign_create_manual_view); BENCHMARK(data_offset); BENCHMARK(data_offset_view); 
}xtensor-0.23.10/benchmark/benchmark_views.cpp000066400000000000000000000173741405270662500212070ustar00rootroot00000000000000/*************************************************************************** * Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht * * * * Distributed under the terms of the BSD 3-Clause License. * * * * The full license is in the file LICENSE, distributed with this software. * ****************************************************************************/ #include #include #include #include #include "xtensor/xarray.hpp" #include "xtensor/xnoalias.hpp" #include "xtensor/xstrided_view.hpp" #include "xtensor/xmanipulation.hpp" #include "xtensor/xstrides.hpp" #include "xtensor/xtensor.hpp" #include "xtensor/xview.hpp" namespace xt { // Thanks to Ullrich Koethe for these benchmarks // https://github.com/xtensor-stack/xtensor/issues/695 namespace view_benchmarks { constexpr int SIZE = 1000; template void view_dynamic_iterator(benchmark::State& state) { xt::xtensor data = xt::ones({SIZE,SIZE}); xt::xtensor res = xt::ones({SIZE}); auto v = xt::strided_view(data, xt::xstrided_slice_vector{xt::all(), SIZE/2}); for (auto _ : state) { std::copy(v.begin(), v.end(), res.begin()); benchmark::DoNotOptimize(res.data()); } } template void view_iterator(benchmark::State& state) { xt::xtensor data = xt::ones({SIZE,SIZE}); xt::xtensor res = xt::ones({SIZE}); auto v = xt::view(data, xt::all(), SIZE/2); for (auto _ : state) { std::copy(v.begin(), v.end(), res.begin()); benchmark::DoNotOptimize(res.data()); } } template void view_loop(benchmark::State& state) { xt::xtensor data = xt::ones({SIZE,SIZE}); xt::xtensor res = xt::ones({SIZE}); auto v = xt::strided_view(data, xt::xstrided_slice_vector{xt::all(), SIZE/2}); for (auto _ : state) { for(std::size_t k = 0; k < v.shape()[0]; ++k) { res(k) = v(k); } benchmark::DoNotOptimize(res.data()); } } template void view_loop_view(benchmark::State& state) { xt::xtensor data = xt::ones({SIZE,SIZE}); xt::xtensor res = xt::ones({SIZE}); auto v = xt::view(data, xt::all(), SIZE / 2); for (auto _ : state) { for(std::size_t k = 0; k < v.shape()[0]; ++k) { res(k) = v(k); } benchmark::DoNotOptimize(res.data()); } } template void view_loop_raw(benchmark::State& state) { xt::xtensor data = xt::ones({SIZE,SIZE}); xt::xtensor res = xt::ones({SIZE}); for (auto _ : state) { std::size_t j = SIZE / 2; for(std::size_t k = 0; k < SIZE; ++k) { res(k) = data(k, j); } benchmark::DoNotOptimize(res.data()); } } template void view_assign(benchmark::State& state) { xt::xtensor data = xt::ones({SIZE,SIZE}); xt::xtensor res = xt::ones({SIZE}); auto v = xt::strided_view(data, xt::xstrided_slice_vector{xt::all(), SIZE/2}); for (auto _ : state) { xt::noalias(res) = v; benchmark::DoNotOptimize(res.data()); } } template void view_assign_view(benchmark::State& state) { xt::xtensor data = xt::ones({SIZE,SIZE}); xt::xtensor res = xt::ones({SIZE}); auto v = xt::view(data, xt::all(), SIZE/2); auto r = xt::view(res, xt::all()); for (auto _ : state) { r = v; benchmark::DoNotOptimize(r.data()); } } template void view_assign_strided_view(benchmark::State& state) { xt::xtensor data = xt::ones({SIZE,SIZE}); xt::xtensor res = xt::ones({SIZE}); auto v = xt::strided_view(data, xt::xstrided_slice_vector{xt::all(), SIZE/2}); auto r = xt::strided_view(res, xt::xstrided_slice_vector{xt::all()}); for (auto _ : state) { r = v; benchmark::DoNotOptimize(r.data()); } } template void view_assign_view_noalias(benchmark::State& state) { xt::xtensor data = xt::ones({SIZE,SIZE}); xt::xtensor 
res = xt::ones({SIZE}); auto v = xt::view(data, xt::all(), SIZE/2); auto r = xt::view(res, xt::all()); for (auto _ : state) { xt::noalias(r) = v; benchmark::DoNotOptimize(r.data()); } } template void view_assign_strided_view_noalias(benchmark::State& state) { xt::xtensor data = xt::ones({SIZE,SIZE}); xt::xtensor res = xt::ones({SIZE}); auto v = xt::strided_view(data, xt::xstrided_slice_vector{xt::all(), SIZE/2}); auto r = xt::strided_view(res, xt::xstrided_slice_vector{xt::all()}); for (auto _ : state) { xt::noalias(r) = v; benchmark::DoNotOptimize(r.data()); } } BENCHMARK_TEMPLATE(view_dynamic_iterator, float); BENCHMARK_TEMPLATE(view_iterator, float); BENCHMARK_TEMPLATE(view_loop, float); BENCHMARK_TEMPLATE(view_loop_view, float); BENCHMARK_TEMPLATE(view_loop_raw, float); BENCHMARK_TEMPLATE(view_assign, float); BENCHMARK_TEMPLATE(view_assign_view, float); BENCHMARK_TEMPLATE(view_assign_strided_view, float); BENCHMARK_TEMPLATE(view_assign_view_noalias, float); BENCHMARK_TEMPLATE(view_assign_strided_view_noalias, float); } namespace stridedview { template inline auto transpose_assign(benchmark::State& state, std::vector shape) { xarray x = xt::arange(compute_size(shape)); x.resize(shape); xarray res; res.resize(std::vector(shape.rbegin(), shape.rend())); for (auto _ : state) { res = transpose(x); } } auto transpose_assign_rm_rm = transpose_assign; auto transpose_assign_cm_cm = transpose_assign; auto transpose_assign_rm_cm = transpose_assign; auto transpose_assign_cm_rm = transpose_assign; BENCHMARK_CAPTURE(transpose_assign_rm_rm, 10x20x500, {10, 20, 500}); BENCHMARK_CAPTURE(transpose_assign_cm_cm, 10x20x500, {10, 20, 500}); BENCHMARK_CAPTURE(transpose_assign_rm_cm, 10x20x500, {10, 20, 500}); BENCHMARK_CAPTURE(transpose_assign_cm_rm, 10x20x500, {10, 20, 500}); } } xtensor-0.23.10/benchmark/benchmark_xshape.cpp000066400000000000000000000045771405270662500213430ustar00rootroot00000000000000/*************************************************************************** * Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht * * * * Distributed under the terms of the BSD 3-Clause License. * * * * The full license is in the file LICENSE, distributed with this software. 
* ****************************************************************************/ #ifndef BENCHMARK_SHAPE_HPP #define BENCHMARK_SHAPE_HPP #include #include "xtensor/xshape.hpp" #include "xtensor/xstorage.hpp" namespace xt { namespace benchmark_xshape { template void xshape_initializer(benchmark::State& state) { for (auto _ : state) { T sv({2, 3, 1}); benchmark::DoNotOptimize(sv.data()); } } template void xshape_initializer_long(benchmark::State& state) { for (auto _ : state) { T sv({2, 3, 1, 2, 6, 1, 2, 3, 45, 6, 12, 3, 5, 45, 5, 6}); benchmark::DoNotOptimize(sv.data()); } } template void xshape_access(benchmark::State& state) { T a({3,2,1,3}); for (auto _ : state) { a[0] = a[1] * a[2] + a[3]; a[3] = a[1]; a[1] = a[2] + a[3]; a[2] = a[3]; benchmark::DoNotOptimize(a.data()); } } BENCHMARK_TEMPLATE(xshape_initializer, std::vector); BENCHMARK_TEMPLATE(xshape_initializer, xt::svector); BENCHMARK_TEMPLATE(xshape_initializer, std::array); BENCHMARK_TEMPLATE(xshape_initializer_long, xt::svector); BENCHMARK_TEMPLATE(xshape_initializer_long, xt::uvector); BENCHMARK_TEMPLATE(xshape_initializer_long, std::vector); BENCHMARK_TEMPLATE(xshape_access, xt::uvector); BENCHMARK_TEMPLATE(xshape_access, std::vector); BENCHMARK_TEMPLATE(xshape_access, xt::svector); BENCHMARK_TEMPLATE(xshape_access, std::array); } } #endifxtensor-0.23.10/benchmark/copyGBenchmark.cmake.in000066400000000000000000000017141405270662500216260ustar00rootroot00000000000000############################################################################ # Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht # # # # Distributed under the terms of the BSD 3-Clause License. # # # # The full license is in the file LICENSE, distributed with this software. # ############################################################################ cmake_minimum_required(VERSION 2.8.2) project(googlebenchmark-download NONE) include(ExternalProject) ExternalProject_Add(benchmark URL "${googlebenchmark_SRC_DIR}" SOURCE_DIR "${CMAKE_CURRENT_BINARY_DIR}/googlebenchmark-src" BINARY_DIR "${CMAKE_CURRENT_BINARY_DIR}/googlebenchmark-build" CONFIGURE_COMMAND "" BUILD_COMMAND "" INSTALL_COMMAND "" TEST_COMMAND "" )xtensor-0.23.10/benchmark/downloadGBenchmark.cmake.in000066400000000000000000000017721405270662500224670ustar00rootroot00000000000000############################################################################ # Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht # # # # Distributed under the terms of the BSD 3-Clause License. # # # # The full license is in the file LICENSE, distributed with this software. # ############################################################################ cmake_minimum_required(VERSION 2.8.2) project(googlebenchmark-download NONE) include(ExternalProject) ExternalProject_Add(googlebenchmark GIT_REPOSITORY https://github.com/google/benchmark.git GIT_TAG v1.3.0 SOURCE_DIR "${CMAKE_CURRENT_BINARY_DIR}/googlebenchmark-src" BINARY_DIR "${CMAKE_CURRENT_BINARY_DIR}/googlebenchmark-build" CONFIGURE_COMMAND "" BUILD_COMMAND "" INSTALL_COMMAND "" TEST_COMMAND "" )xtensor-0.23.10/benchmark/main.cpp000066400000000000000000000025121405270662500167500ustar00rootroot00000000000000/*************************************************************************** * Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht * * * * Distributed under the terms of the BSD 3-Clause License. * * * * The full license is in the file LICENSE, distributed with this software. 
****************************************************************************/

#include <benchmark/benchmark.h>

#include <iostream>

#include "xtensor/xtensor.hpp"
#include "xtensor/xarray.hpp"

#ifdef XTENSOR_USE_XSIMD
#ifdef __GNUC__
template <class T>
void print_type(T&& /*t*/)
{
    std::cout << __PRETTY_FUNCTION__ << std::endl;
}
#endif

void print_stats()
{
    std::cout << "USING XSIMD\nSIMD SIZE: " << xsimd::simd_traits<double>::size << "\n\n";
#ifdef __GNUC__
    print_type(xt::xarray<double>());
    print_type(xt::xtensor<double, 2>());
#endif
}
#else
void print_stats()
{
    std::cout << "NOT USING XSIMD\n\n";
};
#endif

// Custom main function to print SIMD config
int main(int argc, char** argv)
{
    print_stats();
    benchmark::Initialize(&argc, argv);
    if (benchmark::ReportUnrecognizedArguments(argc, argv)) return 1;
    benchmark::RunSpecifiedBenchmarks();
}
xtensor-0.23.10/cmake/000077500000000000000000000000001405270662500144465ustar00rootroot00000000000000xtensor-0.23.10/cmake/FindTBB.cmake000066400000000000000000000273721405270662500166610ustar00rootroot00000000000000# The MIT License (MIT)
#
# Copyright (c) 2015 Justus Calvin
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
# FindTBB
# -------
#
# Find TBB include directories and libraries.
#
# Usage:
#
#  find_package(TBB [major[.minor]] [EXACT]
#               [QUIET] [REQUIRED]
#               [[COMPONENTS] [components...]]
#               [OPTIONAL_COMPONENTS components...])
#
# where the allowed components are tbbmalloc and tbb_preview. Users may modify
# the behavior of this module with the following variables:
#
# * TBB_ROOT_DIR          - The base directory of the TBB installation.
# * TBB_INCLUDE_DIR       - The directory that contains the TBB header files.
# * TBB_LIBRARY           - The directory that contains the TBB library files.
# * TBB_<library>_LIBRARY - The path of the corresponding TBB library. These
#                           libraries, if specified, override the corresponding
#                           library search results, where <library> may be tbb,
#                           tbb_debug, tbbmalloc, tbbmalloc_debug, tbb_preview,
#                           or tbb_preview_debug.
# * TBB_USE_DEBUG_BUILD   - The debug version of tbb libraries, if present, will
#                           be used instead of the release version.
#
# Users may modify the behavior of this module with the following environment
# variables:
#
# * TBB_INSTALL_DIR
# * TBBROOT
# * LIBRARY_PATH
#
# This module will set the following variables:
#
# * TBB_FOUND             - Set to false, or undefined, if we haven’t found, or
#                           don’t want to use TBB.
# * TBB_<component>_FOUND - If False, the optional <component> part of the TBB
#                           system is not available.
# * TBB_VERSION - The full version string # * TBB_VERSION_MAJOR - The major version # * TBB_VERSION_MINOR - The minor version # * TBB_INTERFACE_VERSION - The interface version number defined in # tbb/tbb_stddef.h. # * TBB__LIBRARY_RELEASE - The path of the TBB release version of # , where may be tbb, tbb_debug, # tbbmalloc, tbbmalloc_debug, tbb_preview, or # tbb_preview_debug. # * TBB__LIBRARY_DEGUG - The path of the TBB release version of # , where may be tbb, tbb_debug, # tbbmalloc, tbbmalloc_debug, tbb_preview, or # tbb_preview_debug. # # The following varibles should be used to build and link with TBB: # # * TBB_INCLUDE_DIRS - The include directory for TBB. # * TBB_LIBRARIES - The libraries to link against to use TBB. # * TBB_LIBRARIES_RELEASE - The release libraries to link against to use TBB. # * TBB_LIBRARIES_DEBUG - The debug libraries to link against to use TBB. # * TBB_DEFINITIONS - Definitions to use when compiling code that uses # TBB. # * TBB_DEFINITIONS_RELEASE - Definitions to use when compiling release code that # uses TBB. # * TBB_DEFINITIONS_DEBUG - Definitions to use when compiling debug code that # uses TBB. # # This module will also create the "tbb" target that may be used when building # executables and libraries. include(FindPackageHandleStandardArgs) if(NOT TBB_FOUND) ################################## # Check the build type ################################## if(NOT DEFINED TBB_USE_DEBUG_BUILD) if(CMAKE_BUILD_TYPE MATCHES "(Debug|DEBUG|debug|RelWithDebInfo|RELWITHDEBINFO|relwithdebinfo)") set(TBB_BUILD_TYPE DEBUG) else() set(TBB_BUILD_TYPE RELEASE) endif() elseif(TBB_USE_DEBUG_BUILD) set(TBB_BUILD_TYPE DEBUG) else() set(TBB_BUILD_TYPE RELEASE) endif() ################################## # Set the TBB search directories ################################## # Define search paths based on user input and environment variables set(TBB_SEARCH_DIR ${TBB_ROOT_DIR} $ENV{TBB_INSTALL_DIR} $ENV{TBBROOT}) # Define the search directories based on the current platform if(CMAKE_SYSTEM_NAME STREQUAL "Windows") set(TBB_DEFAULT_SEARCH_DIR "C:/Program Files/Intel/TBB" "C:/Program Files (x86)/Intel/TBB") # Set the target architecture if(CMAKE_SIZEOF_VOID_P EQUAL 8) set(TBB_ARCHITECTURE "intel64") else() set(TBB_ARCHITECTURE "ia32") endif() # Set the TBB search library path search suffix based on the version of VC if(WINDOWS_STORE) set(TBB_LIB_PATH_SUFFIX "lib/${TBB_ARCHITECTURE}/vc11_ui") elseif(MSVC14) set(TBB_LIB_PATH_SUFFIX "lib/${TBB_ARCHITECTURE}/vc14") elseif(MSVC12) set(TBB_LIB_PATH_SUFFIX "lib/${TBB_ARCHITECTURE}/vc12") elseif(MSVC11) set(TBB_LIB_PATH_SUFFIX "lib/${TBB_ARCHITECTURE}/vc11") elseif(MSVC10) set(TBB_LIB_PATH_SUFFIX "lib/${TBB_ARCHITECTURE}/vc10") endif() # Add the library path search suffix for the VC independent version of TBB list(APPEND TBB_LIB_PATH_SUFFIX "lib/${TBB_ARCHITECTURE}/vc_mt") elseif(CMAKE_SYSTEM_NAME STREQUAL "Darwin") # OS X set(TBB_DEFAULT_SEARCH_DIR "/opt/intel/tbb") # TODO: Check to see which C++ library is being used by the compiler. if(NOT ${CMAKE_SYSTEM_VERSION} VERSION_LESS 13.0) # The default C++ library on OS X 10.9 and later is libc++ set(TBB_LIB_PATH_SUFFIX "lib/libc++" "lib") else() set(TBB_LIB_PATH_SUFFIX "lib") endif() elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux") # Linux set(TBB_DEFAULT_SEARCH_DIR "/opt/intel/tbb") # TODO: Check compiler version to see the suffix should be /gcc4.1 or # /gcc4.1. For now, assume that the compiler is more recent than # gcc 4.4.x or later. 
if(CMAKE_SYSTEM_PROCESSOR STREQUAL "x86_64") set(TBB_LIB_PATH_SUFFIX "lib/intel64/gcc4.4") elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^i.86$") set(TBB_LIB_PATH_SUFFIX "lib/ia32/gcc4.4") endif() endif() ################################## # Find the TBB include dir ################################## find_path(TBB_INCLUDE_DIRS tbb/tbb.h HINTS ${TBB_INCLUDE_DIR} ${TBB_SEARCH_DIR} PATHS ${TBB_DEFAULT_SEARCH_DIR} PATH_SUFFIXES include) ################################## # Set version strings ################################## if(TBB_INCLUDE_DIRS) if(EXISTS "${TBB_INCLUDE_DIRS}/tbb/version.h") # since version 2021.1 file(READ "${TBB_INCLUDE_DIRS}/oneapi/tbb/version.h" _tbb_version_file) else() file(READ "${TBB_INCLUDE_DIRS}/tbb/tbb_stddef.h" _tbb_version_file) endif() string(REGEX REPLACE ".*#define TBB_VERSION_MAJOR ([0-9]+).*" "\\1" TBB_VERSION_MAJOR "${_tbb_version_file}") string(REGEX REPLACE ".*#define TBB_VERSION_MINOR ([0-9]+).*" "\\1" TBB_VERSION_MINOR "${_tbb_version_file}") string(REGEX REPLACE ".*#define TBB_INTERFACE_VERSION ([0-9]+).*" "\\1" TBB_INTERFACE_VERSION "${_tbb_version_file}") set(TBB_VERSION "${TBB_VERSION_MAJOR}.${TBB_VERSION_MINOR}") endif() ################################## # Find TBB components ################################## if(TBB_VERSION VERSION_LESS 4.3) set(TBB_SEARCH_COMPOMPONENTS tbb_preview tbbmalloc tbb) else() set(TBB_SEARCH_COMPOMPONENTS tbb_preview tbbmalloc_proxy tbbmalloc tbb) endif() # Find each component foreach(_comp ${TBB_SEARCH_COMPOMPONENTS}) if(";${TBB_FIND_COMPONENTS};tbb;" MATCHES ";${_comp};") # Search for the libraries find_library(TBB_${_comp}_LIBRARY_RELEASE ${_comp} HINTS ${TBB_LIBRARY} ${TBB_SEARCH_DIR} PATHS ${TBB_DEFAULT_SEARCH_DIR} ENV LIBRARY_PATH PATH_SUFFIXES ${TBB_LIB_PATH_SUFFIX}) find_library(TBB_${_comp}_LIBRARY_DEBUG ${_comp}_debug HINTS ${TBB_LIBRARY} ${TBB_SEARCH_DIR} PATHS ${TBB_DEFAULT_SEARCH_DIR} ENV LIBRARY_PATH PATH_SUFFIXES ${TBB_LIB_PATH_SUFFIX}) if(TBB_${_comp}_LIBRARY_DEBUG) list(APPEND TBB_LIBRARIES_DEBUG "${TBB_${_comp}_LIBRARY_DEBUG}") endif() if(TBB_${_comp}_LIBRARY_RELEASE) list(APPEND TBB_LIBRARIES_RELEASE "${TBB_${_comp}_LIBRARY_RELEASE}") endif() if(TBB_${_comp}_LIBRARY_${TBB_BUILD_TYPE} AND NOT TBB_${_comp}_LIBRARY) set(TBB_${_comp}_LIBRARY "${TBB_${_comp}_LIBRARY_${TBB_BUILD_TYPE}}") endif() if(TBB_${_comp}_LIBRARY AND EXISTS "${TBB_${_comp}_LIBRARY}") set(TBB_${_comp}_FOUND TRUE) else() set(TBB_${_comp}_FOUND FALSE) endif() # Mark internal variables as advanced mark_as_advanced(TBB_${_comp}_LIBRARY_RELEASE) mark_as_advanced(TBB_${_comp}_LIBRARY_DEBUG) mark_as_advanced(TBB_${_comp}_LIBRARY) endif() endforeach() ################################## # Set compile flags and libraries ################################## set(TBB_DEFINITIONS_RELEASE "") set(TBB_DEFINITIONS_DEBUG "-DTBB_USE_DEBUG=1") if(TBB_LIBRARIES_${TBB_BUILD_TYPE}) set(TBB_DEFINITIONS "${TBB_DEFINITIONS_${TBB_BUILD_TYPE}}") set(TBB_LIBRARIES "${TBB_LIBRARIES_${TBB_BUILD_TYPE}}") elseif(TBB_LIBRARIES_RELEASE) set(TBB_DEFINITIONS "${TBB_DEFINITIONS_RELEASE}") set(TBB_LIBRARIES "${TBB_LIBRARIES_RELEASE}") elseif(TBB_LIBRARIES_DEBUG) set(TBB_DEFINITIONS "${TBB_DEFINITIONS_DEBUG}") set(TBB_LIBRARIES "${TBB_LIBRARIES_DEBUG}") endif() find_package_handle_standard_args(TBB REQUIRED_VARS TBB_INCLUDE_DIRS TBB_LIBRARIES HANDLE_COMPONENTS VERSION_VAR TBB_VERSION) ################################## # Create targets ################################## if(NOT CMAKE_VERSION VERSION_LESS 3.0 AND TBB_FOUND) add_library(tbb SHARED IMPORTED) set_target_properties(tbb 
      PROPERTIES INTERFACE_INCLUDE_DIRECTORIES ${TBB_INCLUDE_DIRS}
                 IMPORTED_LOCATION             ${TBB_LIBRARIES})
    if(TBB_LIBRARIES_RELEASE AND TBB_LIBRARIES_DEBUG)
      set_target_properties(tbb PROPERTIES
          INTERFACE_COMPILE_DEFINITIONS "$<$<OR:$<CONFIG:Debug>,$<CONFIG:RelWithDebInfo>>:TBB_USE_DEBUG=1>"
          IMPORTED_LOCATION_DEBUG          ${TBB_LIBRARIES_DEBUG}
          IMPORTED_LOCATION_RELWITHDEBINFO ${TBB_LIBRARIES_DEBUG}
          IMPORTED_LOCATION_RELEASE        ${TBB_LIBRARIES_RELEASE}
          IMPORTED_LOCATION_MINSIZEREL     ${TBB_LIBRARIES_RELEASE}
          )
    elseif(TBB_LIBRARIES_RELEASE)
      set_target_properties(tbb PROPERTIES IMPORTED_LOCATION ${TBB_LIBRARIES_RELEASE})
    else()
      set_target_properties(tbb PROPERTIES
          INTERFACE_COMPILE_DEFINITIONS "${TBB_DEFINITIONS_DEBUG}"
          IMPORTED_LOCATION             ${TBB_LIBRARIES_DEBUG}
          )
    endif()
  endif()

  mark_as_advanced(TBB_INCLUDE_DIRS TBB_LIBRARIES)

  unset(TBB_ARCHITECTURE)
  unset(TBB_BUILD_TYPE)
  unset(TBB_LIB_PATH_SUFFIX)
  unset(TBB_DEFAULT_SEARCH_DIR)

endif()
xtensor-0.23.10/docs/000077500000000000000000000000001405270662500143165ustar00rootroot00000000000000
xtensor-0.23.10/docs/Doxyfile000066400000000000000000000010761405270662500160300ustar00rootroot00000000000000
PROJECT_NAME = "xtensor"
XML_OUTPUT = xml
INPUT = missing_macro.hpp ../include
GENERATE_LATEX = NO
GENERATE_MAN = NO
GENERATE_RTF = NO
CASE_SENSE_NAMES = NO
GENERATE_HTML = YES
GENERATE_XML = YES
RECURSIVE = YES
QUIET = YES
JAVADOC_AUTOBRIEF = YES
WARN_IF_UNDOCUMENTED = NO
MACRO_EXPANSION = YES
PREDEFINED = IN_DOXYGEN
EXCLUDE_SYMBOLS = detail
GENERATE_TREEVIEW = YES
SOURCE_BROWSER = YES
# WARN_IF_UNDOCUMENTED = YES

# Allow for rst directives and advanced functions e.g. grid tables
ALIASES = "rst=\verbatim embed:rst:leading-asterisk"
ALIASES += "endrst=\endverbatim"
xtensor-0.23.10/docs/Makefile000066400000000000000000000147421405270662500157660ustar00rootroot00000000000000
# You can set these variables from the command line.
SPHINXOPTS  =
SPHINXBUILD = sphinx-build
PAPER       =
BUILDDIR    = build

# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif

# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext api default: html help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " applehelp to make an Apple Help Book" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " xml to make Docutils-native XML files" @echo " pseudoxml to make pseudoxml-XML files for display purposes" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" @echo " coverage to run coverage check of the documentation (if enabled)" clean: rm -rf $(BUILDDIR)/* rm -rf xml html: doxygen $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: doxygen $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: doxygen $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: doxygen $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: doxygen $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: doxygen $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." epub: doxygen $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: doxygen $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: doxygen $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 
latexpdfja: doxygen $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through platex and dvipdfmx..." $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: doxygen $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: doxygen $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: doxygen $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: doxygen $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: doxygen $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: doxygen $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: doxygen $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: doxygen $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." coverage: doxygen $(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage @echo "Testing of coverage in the sources finished, look at the " \ "results in $(BUILDDIR)/coverage/python.txt." xml: doxygen $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml @echo @echo "Build finished. The XML files are in $(BUILDDIR)/xml." pseudoxml: doxygen $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml @echo @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." xtensor-0.23.10/docs/environment.yaml000066400000000000000000000000641405270662500175460ustar00rootroot00000000000000channels: - conda-forge dependencies: - doxygen xtensor-0.23.10/docs/environment.yml000066400000000000000000000002731405270662500174070ustar00rootroot00000000000000name: xtensor-docs channels: - conda-forge dependencies: # More recent version of breathe has an # annoying bug regarding resolution of # template overloads - breathe==4.16.0 xtensor-0.23.10/docs/make.bat000066400000000000000000000161651405270662500157340ustar00rootroot00000000000000@ECHO OFF REM Command file for Sphinx documentation if "%SPHINXBUILD%" == "" ( set SPHINXBUILD=sphinx-build ) set BUILDDIR=build set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% source set I18NSPHINXOPTS=%SPHINXOPTS% source if NOT "%PAPER%" == "" ( set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% ) if "%1" == "" goto help if "%1" == "help" ( :help echo.Please use `make ^` where ^ is one of echo. html to make standalone HTML files echo. dirhtml to make HTML files named index.html in directories echo. singlehtml to make a single large HTML file echo. pickle to make pickle files echo. json to make JSON files echo. htmlhelp to make HTML files and a HTML help project echo. 
qthelp to make HTML files and a qthelp project echo. devhelp to make HTML files and a Devhelp project echo. epub to make an epub echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter echo. text to make text files echo. man to make manual pages echo. texinfo to make Texinfo files echo. gettext to make PO message catalogs echo. changes to make an overview over all changed/added/deprecated items echo. xml to make Docutils-native XML files echo. pseudoxml to make pseudoxml-XML files for display purposes echo. linkcheck to check all external links for integrity echo. doctest to run all doctests embedded in the documentation if enabled echo. coverage to run coverage check of the documentation if enabled goto end ) if "%1" == "clean" ( for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i del /q /s %BUILDDIR%\* goto end ) REM Check if sphinx-build is available and fallback to Python version if any %SPHINXBUILD% 1>NUL 2>NUL if errorlevel 9009 goto sphinx_python goto sphinx_ok :sphinx_python set SPHINXBUILD=python -m sphinx.__init__ %SPHINXBUILD% 2> nul if errorlevel 9009 ( echo. echo.The 'sphinx-build' command was not found. Make sure you have Sphinx echo.installed, then set the SPHINXBUILD environment variable to point echo.to the full path of the 'sphinx-build' executable. Alternatively you echo.may add the Sphinx directory to PATH. echo. echo.If you don't have Sphinx installed, grab it from echo.http://sphinx-doc.org/ exit /b 1 ) :sphinx_ok if "%1" == "html" ( doxygen %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/html. goto end ) if "%1" == "dirhtml" ( %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. goto end ) if "%1" == "singlehtml" ( %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. goto end ) if "%1" == "pickle" ( %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the pickle files. goto end ) if "%1" == "json" ( %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the JSON files. goto end ) if "%1" == "htmlhelp" ( %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run HTML Help Workshop with the ^ .hhp project file in %BUILDDIR%/htmlhelp. goto end ) if "%1" == "qthelp" ( %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run "qcollectiongenerator" with the ^ .qhcp project file in %BUILDDIR%/qthelp, like this: echo.^> qcollectiongenerator %BUILDDIR%\qthelp\packagename.qhcp echo.To view the help file: echo.^> assistant -collectionFile %BUILDDIR%\qthelp\packagename.ghc goto end ) if "%1" == "devhelp" ( %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp if errorlevel 1 exit /b 1 echo. echo.Build finished. goto end ) if "%1" == "epub" ( %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub if errorlevel 1 exit /b 1 echo. echo.Build finished. The epub file is in %BUILDDIR%/epub. goto end ) if "%1" == "latex" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex if errorlevel 1 exit /b 1 echo. echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. 
goto end ) if "%1" == "latexpdf" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex cd %BUILDDIR%/latex make all-pdf cd %~dp0 echo. echo.Build finished; the PDF files are in %BUILDDIR%/latex. goto end ) if "%1" == "latexpdfja" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex cd %BUILDDIR%/latex make all-pdf-ja cd %~dp0 echo. echo.Build finished; the PDF files are in %BUILDDIR%/latex. goto end ) if "%1" == "text" ( %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text if errorlevel 1 exit /b 1 echo. echo.Build finished. The text files are in %BUILDDIR%/text. goto end ) if "%1" == "man" ( %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man if errorlevel 1 exit /b 1 echo. echo.Build finished. The manual pages are in %BUILDDIR%/man. goto end ) if "%1" == "texinfo" ( %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo if errorlevel 1 exit /b 1 echo. echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. goto end ) if "%1" == "gettext" ( %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale if errorlevel 1 exit /b 1 echo. echo.Build finished. The message catalogs are in %BUILDDIR%/locale. goto end ) if "%1" == "changes" ( %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes if errorlevel 1 exit /b 1 echo. echo.The overview file is in %BUILDDIR%/changes. goto end ) if "%1" == "linkcheck" ( %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck if errorlevel 1 exit /b 1 echo. echo.Link check complete; look for any errors in the above output ^ or in %BUILDDIR%/linkcheck/output.txt. goto end ) if "%1" == "doctest" ( %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest if errorlevel 1 exit /b 1 echo. echo.Testing of doctests in the sources finished, look at the ^ results in %BUILDDIR%/doctest/output.txt. goto end ) if "%1" == "coverage" ( %SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage if errorlevel 1 exit /b 1 echo. echo.Testing of coverage in the sources finished, look at the ^ results in %BUILDDIR%/coverage/python.txt. goto end ) if "%1" == "xml" ( %SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml if errorlevel 1 exit /b 1 echo. echo.Build finished. The XML files are in %BUILDDIR%/xml. goto end ) if "%1" == "pseudoxml" ( %SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml if errorlevel 1 exit /b 1 echo. echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml. goto end ) :end xtensor-0.23.10/docs/source/000077500000000000000000000000001405270662500156165ustar00rootroot00000000000000xtensor-0.23.10/docs/source/_static/000077500000000000000000000000001405270662500172445ustar00rootroot00000000000000xtensor-0.23.10/docs/source/_static/goatcounter.js000066400000000000000000000004561405270662500221410ustar00rootroot00000000000000(function() { window.counter = 'https://xtensor_readthedocs_io.goatcounter.com/count' var script = document.createElement('script'); script.async = 1; script.src = '//gc.zgo.at/count.js'; var ins = document.getElementsByTagName('script')[0]; ins.parentNode.insertBefore(script, ins) })(); xtensor-0.23.10/docs/source/_static/main_stylesheet.css000066400000000000000000000000741405270662500231540ustar00rootroot00000000000000.wy-nav-content{ max-width: 1000px; margin: auto; } xtensor-0.23.10/docs/source/adaptor.rst000066400000000000000000000155721405270662500200140ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. 
The full license is in the file LICENSE, distributed with this software.

Adapting 1-D containers
=======================

`xtensor` can adapt one-dimensional containers in place, and provide them with a tensor
interface. Only random access containers can be adapted.

Adapting std::vector
--------------------

The following example shows how to bring an ``std::vector`` into the expression system of
`xtensor`:

.. code::

    #include <cstddef>
    #include <vector>
    #include <xtensor/xarray.hpp>
    #include <xtensor/xadapt.hpp>

    std::vector<double> v = {1., 2., 3., 4., 5., 6. };
    std::vector<std::size_t> shape = { 2, 3 };
    auto a1 = xt::adapt(v, shape);

    xt::xarray<double> a2 = {{ 1., 2., 3.},
                             { 4., 5., 6.}};

    xt::xarray<double> res = a1 + a2;
    // res = {{ 2., 4., 6. }, { 8., 10., 12. }};

``v`` is not copied into ``a1``, so if you change a value in ``a1``, you're actually changing
the corresponding value in ``v``:

.. code::

    a1(0, 0) = 20.;
    // now v is { 20., 2., 3., 4., 5., 6. }

Adapting C-style arrays
-----------------------

`xtensor` provides two ways for adapting a C-style array; the first one does not take
ownership of the array:

.. code::

    #include <cstddef>
    #include <iostream>
    #include <vector>
    #include <xtensor/xadapt.hpp>

    void compute(double* data, std::size_t size)
    {
        std::vector<std::size_t> shape = { size };
        auto a = xt::adapt(data, size, xt::no_ownership(), shape);
        a = a + a; // does not modify the size
    }

    int main()
    {
        std::size_t size = 2;
        double* data = new double[size];
        for (int i = 0; i < size; i++)
            data[i] = i;

        std::cout << data << std::endl;
        // prints e.g. 0x557a363b7c20

        compute(data, size);

        std::cout << data << std::endl;
        // prints e.g. 0x557a363b7c20 (same pointer)

        for (int i = 0; i < size; i++)
            std::cout << data[i] << " ";
        std::cout << std::endl;
        // prints 0 2 (data is still available here)
    }

However, if you replace ``xt::no_ownership`` with ``xt::acquire_ownership``, the adaptor takes
ownership of the array, meaning the array will be deleted when the adaptor is destroyed:

.. code::

    #include <cstddef>
    #include <iostream>
    #include <vector>
    #include <xtensor/xarray.hpp>
    #include <xtensor/xadapt.hpp>

    void compute(double*& data, std::size_t size)
    {
        // data pointer can be changed, hence double*&
        std::vector<std::size_t> shape = { size };
        auto a = xt::adapt(data, size, xt::acquire_ownership(), shape);
        xt::xarray<double> b {1., 2.};
        b.reshape({2, 1});
        a = a * b; // size has changed, shape is now { 2, 2 }
    }

    int main()
    {
        std::size_t size = 2;
        double* data = new double[size];
        for (int i = 0; i < size; i++)
            data[i] = i;

        std::cout << data << std::endl;
        // prints e.g. 0x557a363b7c20

        compute(data, size);

        std::cout << data << std::endl;
        // prints e.g. 0x557a363b8220 (pointer has changed)

        for (int i = 0; i < size * size; i++)
            std::cout << data[i] << " ";
        std::cout << std::endl;
        // prints e.g. 4.65504e-310 1 0 2 (data has been deleted and is now corrupted)
    }

To safely get the computed data out of the function, you could pass an additional output
parameter to ``compute`` and copy the result into it before returning. Or you can create the
adaptor before calling ``compute`` and pass it to the function:

.. code::

    #include <cstddef>
    #include <iostream>
    #include <vector>
    #include <xtensor/xarray.hpp>
    #include <xtensor/xadapt.hpp>

    template <class A>
    void compute(A& a)
    {
        xt::xarray<double> b {1., 2.};
        b.reshape({2, 1});
        a = a * b; // size has changed, shape is now { 2, 2 }
    }

    int main()
    {
        std::size_t size = 2;
        double* data = new double[size];
        for (int i = 0; i < size; i++)
            data[i] = i;

        std::vector<std::size_t> shape = { size };
        auto a = xt::adapt(data, size, xt::acquire_ownership(), shape);

        compute(a);

        for (int i = 0; i < size * size; i++)
            std::cout << data[i] << " ";
        std::cout << std::endl;
        // prints 0 1 0 2
    }

Adapting stack-allocated arrays
-------------------------------

Adapting C arrays allocated on the stack is as simple as adapting ``std::vector``:

.. code::

    #include <cstddef>
    #include <vector>
    #include <xtensor/xarray.hpp>
    #include <xtensor/xadapt.hpp>

    double v[6] = {1., 2., 3., 4., 5., 6. };
    std::vector<std::size_t> shape = { 2, 3 };
    auto a1 = xt::adapt(v, shape);

    xt::xarray<double> a2 = {{ 1., 2., 3.},
                             { 4., 5., 6.}};

    xt::xarray<double> res = a1 + a2;
    // res = {{ 2., 4., 6. }, { 8., 10., 12. }};

``v`` is not copied into ``a1``, so if you change a value in ``a1``, you're actually changing
the corresponding value in ``v``:

.. code::

    a1(0, 0) = 20.;
    // now v is { 20., 2., 3., 4., 5., 6. }

Adapting C++ smart pointers
---------------------------

If you want to manage your data with shared or unique pointers, you can use the
``adapt_smart_ptr`` function of `xtensor`. It will automatically increment the reference count
of shared pointers upon creation, and decrement it upon deletion.

.. code::

    #include <memory>
    #include <xtensor/xadapt.hpp>
    #include <xtensor/xio.hpp>

    std::shared_ptr<double> sptr(new double[8], std::default_delete<double[]>());
    sptr.get()[2] = 321.;
    auto xptr = xt::adapt_smart_ptr(sptr, {4, 2});
    xptr(1, 3) = 123.;
    std::cout << xptr;

Or, if you operate on shared pointers that do not directly point to the underlying buffer, you
can pass the data pointer and the smart pointer (to manage the underlying memory) as follows:

.. code::

    #include <iostream>
    #include <memory>
    #include <vector>
    #include <xtensor/xadapt.hpp>
    #include <xtensor/xio.hpp>

    struct Buffer
    {
        Buffer(std::vector<double>& buf) : m_buf(buf) {}
        ~Buffer() { std::cout << "deleted" << std::endl; }
        std::vector<double> m_buf;
    };

    auto data = std::vector<double>{1, 2, 3, 4, 5, 6, 7, 8};
    auto shared_buf = std::make_shared<Buffer>(data);
    auto unique_buf = std::make_unique<Buffer>(data);

    std::cout << shared_buf.use_count() << std::endl;
    {
        auto obj = xt::adapt_smart_ptr(shared_buf.get()->m_buf.data(),
                                       {2, 4}, shared_buf);
        // Use count increased to 2
        std::cout << shared_buf.use_count() << std::endl;
        std::cout << obj << std::endl;
    }
    // Use count reset to 1
    std::cout << shared_buf.use_count() << std::endl;

    {
        auto obj = xt::adapt_smart_ptr(unique_buf.get()->m_buf.data(),
                                       {2, 4}, std::move(unique_buf));
        std::cout << obj << std::endl;
    }
xtensor-0.23.10/docs/source/api/000077500000000000000000000000001405270662500163675ustar00rootroot00000000000000
xtensor-0.23.10/docs/source/api/accumulating_functions.rst000066400000000000000000000012551405270662500236700ustar00rootroot00000000000000
.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht

   Distributed under the terms of the BSD 3-Clause License.

   The full license is in the file LICENSE, distributed with this software.

Accumulating functions
======================

**xtensor** provides the following accumulating functions for xexpressions:

Defined in ``xtensor/xmath.hpp``

.. _cumsum-function-reference:
.. doxygenfunction:: cumsum(E&&)
   :project: xtensor

.. doxygenfunction:: cumsum(E&&, std::ptrdiff_t)
   :project: xtensor

.. _cumprod-function-reference:
.. doxygenfunction:: cumprod(E&&)
   :project: xtensor

.. doxygenfunction:: cumprod(E&&, std::ptrdiff_t)
   :project: xtensor
xtensor-0.23.10/docs/source/api/basic_functions.rst000066400000000000000000000026001405270662500222700ustar00rootroot00000000000000
.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht

   Distributed under the terms of the BSD 3-Clause License.

   The full license is in the file LICENSE, distributed with this software.

Basic functions
===============

**xtensor** provides the following basic functions for xexpressions and scalars:

Defined in ``xtensor/xmath.hpp``

.. _abs-function-reference:
.. doxygenfunction:: abs(E&&)
   :project: xtensor

.. _fabs-function-reference:
.. doxygenfunction:: fabs(E&&)
   :project: xtensor

.. _fmod-function-reference:
.. doxygenfunction:: fmod(E1&&, E2&&)
   :project: xtensor

.. _remainder-func-ref:
..
doxygenfunction:: remainder(E1&&, E2&&) :project: xtensor .. _fma-function-reference: .. doxygenfunction:: fma(E1&&, E2&&, E3&&) :project: xtensor .. _maximum-func-ref: .. doxygenfunction:: maximum(E1&&, E2&&) :project: xtensor .. _minimum-func-ref: .. doxygenfunction:: minimum(E1&&, E2&&) :project: xtensor .. _fmax-function-reference: .. doxygenfunction:: fmax(E1&&, E2&&) :project: xtensor .. _fmin-function-reference: .. doxygenfunction:: fmin(E1&&, E2&&) :project: xtensor .. _fdim-function-reference: .. doxygenfunction:: fdim(E1&&, E2&&) :project: xtensor .. _clip-function-reference: .. doxygenfunction:: clip(E1&&, E2&&, E3&&) :project: xtensor .. _sign-function-reference: .. doxygenfunction:: sign(E&&) :project: xtensor xtensor-0.23.10/docs/source/api/chunked_array.rst000066400000000000000000000005301405270662500217360ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. chunked_array ============= Defined in ``xtensor/xchunked_array.hpp`` .. doxygenfunction:: xt::chunked_array :project: xtensor xtensor-0.23.10/docs/source/api/classif_functions.rst000066400000000000000000000014711405270662500226400ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. Classification functions ======================== **xtensor** provides the following classification functions for xexpressions and scalars: Defined in ``xtensor/xmath.hpp`` .. _isfinite-func-ref: .. doxygenfunction:: isfinite(E&&) :project: xtensor .. _isinf-func-ref: .. doxygenfunction:: isinf(E&&) :project: xtensor .. _isnan-func-ref: .. doxygenfunction:: isnan(E&&) :project: xtensor .. _isclose-func-ref: .. doxygenfunction:: isclose(E1&&, E2&&, double, double, bool) :project: xtensor .. _allclose-func-ref: .. doxygenfunction:: allclose(E1&&, E2&, double, double) :project: xtensor xtensor-0.23.10/docs/source/api/container_index.rst000066400000000000000000000016001405270662500222670ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. Containers and views ==================== Containers are in-memory expressions that share a common implementation of most of the methods of the xexpression API. The final container classes (``xarray``, ``xtensor``) mainly implement constructors and value semantic, most of the xexpression API is actually implemented in ``xstrided_container`` and ``xcontainer``. .. toctree:: xcontainer xaccessible xiterable xarray xarray_adaptor chunked_array xtensor xtensor_adaptor xfixed xoptional_assembly_base xoptional_assembly xoptional_assembly_adaptor xmasked_view xview xstrided_view xbroadcast xindex_view xfunctor_view xrepeat xtensor-0.23.10/docs/source/api/error_functions.rst000066400000000000000000000012651405270662500223460ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. 
Error and gamma functions ========================= **xtensor** provides the following error and gamma functions for xexpressions: Defined in ``xtensor/xmath.hpp`` .. _erf-function-reference: .. doxygenfunction:: erf(E&&) :project: xtensor .. _erfc-function-reference: .. doxygenfunction:: erfc(E&&) :project: xtensor .. _tgamma-func-ref: .. doxygenfunction:: tgamma(E&&) :project: xtensor .. _lgamma-func-ref: .. doxygenfunction:: lgamma(E&&) :project: xtensor xtensor-0.23.10/docs/source/api/exponential_functions.rst000066400000000000000000000016211405270662500235370ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. Exponential functions ===================== **xtensor** provides the following exponential functions for xexpressions: Defined in ``xtensor/xmath.hpp`` .. _exp-function-reference: .. doxygenfunction:: exp(E&&) :project: xtensor .. _exp2-function-reference: .. doxygenfunction:: exp2(E&&) :project: xtensor .. _expm1-func-ref: .. doxygenfunction:: expm1(E&&) :project: xtensor .. _log-function-reference: .. doxygenfunction:: log(E&&) :project: xtensor .. _log2-function-reference: .. doxygenfunction:: log2(E&&) :project: xtensor .. _log10-func-ref: .. doxygenfunction:: log10(E&&) :project: xtensor .. _log1p-func-ref: .. doxygenfunction:: log1p(E&&) :project: xtensor xtensor-0.23.10/docs/source/api/expression_index.rst000066400000000000000000000011301405270662500225020ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. Expressions and semantic ======================== ``xexpression`` and the semantic classes contain all the methods required to perform evaluation and assignment of expressions. They define the computed assignment operators, the assignment methods for ``noalias`` and the downcast methods. .. toctree:: xexpression xsemantic_base xcontainer_semantic xview_semantic xeval xtensor-0.23.10/docs/source/api/function_index.rst000066400000000000000000000006451405270662500221420ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. Functions and generators ======================== .. toctree:: xfunction xreducer xaccumulator xgenerator xbuilder xmanipulation xsort xset_operation xrandom xhistogram xpad xtensor-0.23.10/docs/source/api/hyperbolic_functions.rst000066400000000000000000000015001405270662500233450ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. Hyperbolic functions ==================== **xtensor** provides the following hyperbolic functions for xexpressions: Defined in ``xtensor/xmath.hpp`` .. _sinh-function-reference: .. doxygenfunction:: sinh(E&&) :project: xtensor .. _cosh-function-reference: .. doxygenfunction:: cosh(E&&) :project: xtensor .. _tanh-function-reference: .. doxygenfunction:: tanh(E&&) :project: xtensor .. _asinh-func-ref: .. doxygenfunction:: asinh(E&&) :project: xtensor .. _acosh-func-ref: .. doxygenfunction:: acosh(E&&) :project: xtensor .. 
_atanh-func-ref: .. doxygenfunction:: atanh(E&&) :project: xtensor xtensor-0.23.10/docs/source/api/index_related.rst000066400000000000000000000011411405270662500217250ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. Index related functions ======================= Defined in ``xtensor/xoperation.hpp`` .. _wherec-op-ref: .. doxygenfunction:: where(const T&) :project: xtensor .. _nonzero-op-ref: .. doxygenfunction:: nonzero(const T&) :project: xtensor .. _argwhere-op-ref: .. doxygenfunction:: argwhere :project: xtensor .. _frindices-op-ref: .. doxygenfunction:: from_indices :project: xtensor xtensor-0.23.10/docs/source/api/io_index.rst000066400000000000000000000004371405270662500207230ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. IO Operations ============= .. toctree:: xio xnpy xcsv xjson xtensor-0.23.10/docs/source/api/iterator_index.rst000066400000000000000000000007261405270662500221460ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. Iterators ========= In addition to the iterators defined in the different types of expressions, ``xtensor`` provides classes that allow to iterate over slices of an expression along a specified axis. .. toctree:: xaxis_iterator xaxis_slice_iterator xtensor-0.23.10/docs/source/api/nan_functions.rst000066400000000000000000000026661405270662500217770ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. NaN functions ============= **xtensor** provides the following functions that deal with NaNs in xexpressions: Defined in ``xtensor/xmath.hpp`` .. _nan-to-num-function-reference: .. doxygenfunction:: nan_to_num(E&&) :project: xtensor .. _nanmin-function-reference: .. doxygenfunction:: nanmin(E&&, X&&, EVS) :project: xtensor .. _nanmax-function-reference: .. doxygenfunction:: nanmax(E&&, X&&, EVS) :project: xtensor .. _nansum-function-reference: .. doxygenfunction:: nansum(E&&, X&&, EVS) :project: xtensor .. _nanmean-function-reference: .. doxygenfunction:: nanmean(E&&, X&&, EVS) :project: xtensor .. _nanvar-function-reference: .. doxygenfunction:: nanvar(E&&, X&&, EVS) :project: xtensor .. _nanstd-function-reference: .. doxygenfunction:: nanstd(E&&, X&&, EVS) :project: xtensor .. _nanprod-function-reference: .. doxygenfunction:: nanprod(E&&, X&&, EVS) :project: xtensor .. _nancumsum-function-reference: .. doxygenfunction:: nancumsum(E&&) :project: xtensor .. doxygenfunction:: nancumsum(E&&, std::ptrdiff_t) :project: xtensor .. _nancumprod-function-reference: .. doxygenfunction:: nancumprod(E&&) :project: xtensor .. doxygenfunction:: nancumprod(E&&, std::ptrdiff_t) :project: xtensor xtensor-0.23.10/docs/source/api/nearint_operations.rst000066400000000000000000000015511405270662500230260ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. 
The full license is in the file LICENSE, distributed with this software. Nearest integer floating point operations ========================================= **xtensor** provides the following rounding operations for xexpressions: Defined in ``xtensor/xmath.hpp`` .. _ceil-function-reference: .. doxygenfunction:: ceil(E&&) :project: xtensor .. _floor-func-ref: .. doxygenfunction:: floor(E&&) :project: xtensor .. _trunc-func-ref: .. doxygenfunction:: trunc(E&&) :project: xtensor .. _round-func-ref: .. doxygenfunction:: round(E&&) :project: xtensor .. _nearbyint-func-ref: .. doxygenfunction:: nearbyint(E&&) :project: xtensor .. _rint-function-reference: .. doxygenfunction:: rint(E&&) :project: xtensor xtensor-0.23.10/docs/source/api/operators.rst000066400000000000000000000061221405270662500211400ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. Operators and related functions =============================== Defined in ``xtensor/xmath.hpp`` and ``xtensor/xoperation.hpp`` .. _identity-op-ref: .. doxygenfunction:: operator+(E&&) :project: xtensor .. _neg-op-ref: .. doxygenfunction:: operator-(E&&) :project: xtensor .. _plus-op-ref: .. doxygenfunction:: operator+(E1&&, E2&&) :project: xtensor .. _minus-op-ref: .. doxygenfunction:: operator-(E1&&, E2&&) :project: xtensor .. _mul-op-ref: .. doxygenfunction:: operator*(E1&&, E2&&) :project: xtensor .. _div-op-ref: .. doxygenfunction:: operator/(E1&&, E2&&) :project: xtensor .. _or-op-ref: .. doxygenfunction:: operator||(E1&&, E2&&) :project: xtensor .. _and-op-ref: .. doxygenfunction:: operator&&(E1&&, E2&&) :project: xtensor .. _not-op-ref: .. doxygenfunction:: operator!(E&&) :project: xtensor .. _where-op-ref: .. doxygenfunction:: where(E1&&, E2&&, E3&&) :project: xtensor .. _any-op-ref: .. doxygenfunction:: any(E&&) :project: xtensor .. _all-op-ref: .. doxygenfunction:: all(E&&) :project: xtensor .. _less-op-ref: .. doxygenfunction:: operator<(E1&&, E2&&) :project: xtensor .. _less-eq-op-ref: .. doxygenfunction:: operator<=(E1&&, E2&&) :project: xtensor .. _greater-op-ref: .. doxygenfunction:: operator>(E1&&, E2&&) :project: xtensor .. _greater-eq-op-ref: .. doxygenfunction:: operator>=(E1&&, E2&&) :project: xtensor .. _equal-op-ref: .. doxygenfunction:: operator==(const xexpression&, const xexpression&) :project: xtensor .. _nequal-op-ref: .. doxygenfunction:: operator!=(const xexpression&, const xexpression&) :project: xtensor .. _equal-fn-ref: .. doxygenfunction:: equal(E1&&, E2&&) :project: xtensor .. _nequal-fn-ref: .. doxygenfunction:: not_equal(E1&&, E2&&) :project: xtensor .. _less-fn-ref: .. doxygenfunction:: less(E1&& e1, E2&& e2) :project: xtensor .. _less-eq-fn-ref: .. doxygenfunction:: less_equal(E1&& e1, E2&& e2) :project: xtensor .. _greater-fn-ref: .. doxygenfunction:: greater(E1&& e1, E2&& e2) :project: xtensor .. _greate-eq-fn-ref: .. doxygenfunction:: greater_equal(E1&& e1, E2&& e2) :project: xtensor .. _bitwise-and-op-ref: .. doxygenfunction:: operator&(E1&&, E2&&) :project: xtensor .. _bitwise-or-op-ref: .. doxygenfunction:: operator|(E1&&, E2&&) :project: xtensor .. _bitwise-xor-op-ref: .. doxygenfunction:: operator^(E1&&, E2&&) :project: xtensor .. _bitwise-not-op-ref: .. doxygenfunction:: operator~(E&&) :project: xtensor .. _left-shift-fn-ref: .. doxygenfunction:: left_shift(E1&&, E2&&) :project: xtensor .. _right-shift-fn-ref: .. 
doxygenfunction:: right_shift(E1&&, E2&&) :project: xtensor .. _left-sh-op-ref: .. doxygenfunction:: operator<<(E1&&, E2&&) :project: xtensor .. _right-sh-op-ref: .. doxygenfunction:: operator>>(E1&&, E2&&) :project: xtensor .. _cast-ref: .. doxygenfunction:: cast(E&&) :project: xtensor xtensor-0.23.10/docs/source/api/power_functions.rst000066400000000000000000000015301405270662500223440ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. Power functions =============== **xtensor** provides the following power functions for xexpressions and scalars: Defined in ``xtensor/xmath.hpp`` .. _pow-function-reference: .. doxygenfunction:: pow(E1&&, E2&&) :project: xtensor .. doxygenfunction:: pow(E&&) :project: xtensor .. doxygenfunction:: square(E1&&) :project: xtensor .. doxygenfunction:: cube(E1&&) :project: xtensor .. _sqrt-function-reference: .. doxygenfunction:: sqrt(E&&) :project: xtensor .. _cbrt-function-reference: .. doxygenfunction:: cbrt(E&&) :project: xtensor .. _hypot-func-ref: .. doxygenfunction:: hypot(E1&&, E2&&) :project: xtensor xtensor-0.23.10/docs/source/api/reducing_functions.rst000066400000000000000000000052701405270662500230150ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. Reducing functions ================== **xtensor** provides the following reducing functions for xexpressions: Defined in ``xtensor/xmath.hpp`` .. doxygenfunction:: sum(E&&, EVS) :project: xtensor .. _sum-function-reference: .. doxygenfunction:: sum(E&&, X&&, EVS) :project: xtensor .. doxygenfunction:: prod(E&&, EVS) :project: xtensor .. _prod-function-reference: .. doxygenfunction:: prod(E&&, X&&, EVS) :project: xtensor .. doxygenfunction:: mean(E&&, EVS) :project: xtensor .. _mean-function-reference: .. doxygenfunction:: mean(E&&, X&&, EVS) :project: xtensor .. doxygenfunction:: variance(E&&, EVS) :project: xtensor .. doxygenfunction:: variance(E&&, X&&, EVS) :project: xtensor .. _variance-function-reference: .. doxygenfunction:: variance(E&&, X&&, const D&, EVS) :project: xtensor .. doxygenfunction:: stddev(E&&, EVS) :project: xtensor .. _stddev-function-reference: .. doxygenfunction:: stddev(E&&, X&&, EVS) :project: xtensor .. _diff-function-reference: .. doxygenfunction:: diff(const xexpression&, unsigned int, std::ptrdiff_t) :project: xtensor .. doxygenfunction:: amax(E&&, EVS) :project: xtensor .. _amax-function-reference: .. doxygenfunction:: amax(E&&, X&&, EVS) :project: xtensor .. doxygenfunction:: amin(E&&, EVS) :project: xtensor .. _amin-function-reference: .. doxygenfunction:: amin(E&&, X&&, EVS) :project: xtensor .. _trapz-function-reference: .. doxygenfunction:: trapz(const xexpression&, double, std::ptrdiff_t) :project: xtensor .. _trapz-function-reference2: .. doxygenfunction:: trapz(const xexpression&, const xexpression&, std::ptrdiff_t) :project: xtensor Defined in ``xtensor/xnorm.hpp`` .. _norm-l0-func-ref: .. doxygenfunction:: norm_l0(E&&, X&&, EVS) :project: xtensor .. _norm-l1-func-ref: .. doxygenfunction:: norm_l1(E&&, X&&, EVS) :project: xtensor .. _norm-sq-func-ref: .. doxygenfunction:: norm_sq(E&&, X&&, EVS) :project: xtensor .. _norm-l2-func-ref: .. doxygenfunction:: norm_l2(E&&, X&&, EVS) :project: xtensor .. 
_norm-linf-func-ref: .. doxygenfunction:: norm_linf(E&&, X&&, EVS) :project: xtensor .. _nlptop-func-ref: .. doxygenfunction:: norm_lp_to_p(E&&, double, X&&, EVS) :project: xtensor .. _norm-lp-func-ref: .. doxygenfunction:: norm_lp(E&&, double, X&&, EVS) :project: xtensor .. _nind-l1-ref: .. doxygenfunction:: norm_induced_l1(E&&, EVS) :project: xtensor .. _nilinf-ref: .. doxygenfunction:: norm_induced_linf(E&&, EVS) :project: xtensor xtensor-0.23.10/docs/source/api/trigonometric_functions.rst000066400000000000000000000016701405270662500241020ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. Trigonometric functions ======================= **xtensor** provides the following trigonometric functions for xexpressions and scalars: Defined in ``xtensor/xmath.hpp`` .. _sin-function-reference: .. doxygenfunction:: sin(E&&) :project: xtensor .. _cos-function-reference: .. doxygenfunction:: cos(E&&) :project: xtensor .. _tan-function-reference: .. doxygenfunction:: tan(E&&) :project: xtensor .. _asin-function-reference: .. doxygenfunction:: asin(E&&) :project: xtensor .. _acos-function-reference: .. doxygenfunction:: acos(E&&) :project: xtensor .. _atan-function-reference: .. doxygenfunction:: atan(E&&) :project: xtensor .. _atan2-func-ref: .. doxygenfunction:: atan2(E1&&, E2&&) :project: xtensor xtensor-0.23.10/docs/source/api/xaccumulator.rst000066400000000000000000000006711405270662500216340ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xaccumulator ============ Defined in ``xtensor/xaccumulator.hpp`` .. doxygenfunction:: xt::accumulate(F&&, E&&, EVS) :project: xtensor .. doxygenfunction:: xt::accumulate(F&&, E&&, std::ptrdiff_t, EVS) :project: xtensor xtensor-0.23.10/docs/source/api/xarray.rst000066400000000000000000000007021405270662500204260ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xarray ====== Defined in ``xtensor/xarray.hpp`` .. doxygenclass:: xt::xarray_container :project: xtensor :members: .. doxygentypedef:: xt::xarray :project: xtensor .. doxygentypedef:: xt::xarray_optional :project: xtensor xtensor-0.23.10/docs/source/api/xarray_adaptor.rst000066400000000000000000000022071405270662500221420ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xarray_adaptor ============== Defined in ``xtensor/xarray.hpp`` .. doxygenclass:: xt::xarray_adaptor :project: xtensor :members: adapt (xarray_adaptor) ======================= Defined in ``xtensor/xadapt.hpp`` .. doxygenfunction:: xt::adapt(C&&, const SC&, layout_type) :project: xtensor .. doxygenfunction:: xt::adapt(C&&, SC&&, SS&&) :project: xtensor .. doxygenfunction:: xt::adapt(P&&, typename A::size_type, O, const SC&, layout_type, const A&) :project: xtensor .. doxygenfunction:: xt::adapt(P&&, typename A::size_type, O, SC&&, SS&&, const A&) :project: xtensor .. 
doxygenfunction:: xt::adapt(T (&)[N], const SC&, layout_type) :project: xtensor .. doxygenfunction:: xt::adapt(T (&)[N], SC&&, SS&&) :project: xtensor .. doxygenfunction:: xt::adapt_smart_ptr(P&&, const SC&, layout_type) :project: xtensor .. doxygenfunction:: xt::adapt_smart_ptr(P&&, const SC&, D&&, layout_type) :project: xtensor xtensor-0.23.10/docs/source/api/xaxis_iterator.rst000066400000000000000000000015601405270662500221700ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xaxis_iterator ============== Defined in ``xtensor/xaxis_iterator.hpp`` .. doxygenclass:: xt::xaxis_iterator :project: xtensor :members: .. doxygenfunction:: operator==(const xaxis_iterator&, const xaxis_iterator&) :project: xtensor .. doxygenfunction:: operator!=(const xaxis_iterator&, const xaxis_iterator&) :project: xtensor .. doxygenfunction:: axis_begin(E&&) :project: xtensor .. doxygenfunction:: axis_begin(E&&, typename std::decay_t::size_type) :project: xtensor .. doxygenfunction:: axis_end(E&&) :project: xtensor .. doxygenfunction:: axis_end(E&&, typename std::decay_t::size_type) :project: xtensor xtensor-0.23.10/docs/source/api/xaxis_slice_iterator.rst000066400000000000000000000016711405270662500233520ustar00rootroot00000000000000 .. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xaxis_slice_iterator ==================== Defined in ``xtensor/xaxis_slice_iterator.hpp`` .. doxygenclass:: xt::xaxis_slice_iterator :project: xtensor :members: .. doxygenfunction:: operator==(const xaxis_slice_iterator&, const xaxis_slice_iterator&) :project: xtensor .. doxygenfunction:: operator!=(const xaxis_slice_iterator&, const xaxis_slice_iterator&) :project: xtensor .. doxygenfunction:: axis_slice_begin(E&&) :project: xtensor .. doxygenfunction:: axis_slice_begin(E&&, typename std::decay_t::size_type) :project: xtensor .. doxygenfunction:: axis_slice_end(E&&) :project: xtensor .. doxygenfunction:: axis_slice_end(E&&, typename std::decay_t::size_type) :project: xtensor xtensor-0.23.10/docs/source/api/xbroadcast.rst000066400000000000000000000006351405270662500212570ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xbroadcast ========== Defined in ``xtensor/xbroadcast.hpp`` .. doxygenclass:: xt::xbroadcast :project: xtensor :members: .. doxygenfunction:: xt::broadcast(E&&, const S&) :project: xtensor xtensor-0.23.10/docs/source/api/xbuilder.rst000066400000000000000000000034241405270662500207420ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xbuilder ======== Defined in ``xtensor/xbuilder.hpp`` .. doxygenfunction:: xt::ones(S) :project: xtensor .. doxygenfunction:: xt::ones(const I (&)[L]) :project: xtensor .. doxygenfunction:: xt::zeros(S) :project: xtensor .. doxygenfunction:: xt::zeros(const I (&)[L]) :project: xtensor .. doxygenfunction:: xt::empty(const S&) :project: xtensor .. 
doxygenfunction:: xt::full_like(const xexpression&) :project: xtensor .. doxygenfunction:: xt::empty_like(const xexpression&) :project: xtensor .. doxygenfunction:: xt::zeros_like(const xexpression&) :project: xtensor .. doxygenfunction:: xt::ones_like(const xexpression&) :project: xtensor .. doxygenfunction:: xt::eye(const std::vector&, int) :project: xtensor .. doxygenfunction:: xt::eye(std::size_t, int) :project: xtensor .. doxygenfunction:: xt::arange(T, T, S) :project: xtensor .. doxygenfunction:: xt::arange(T) :project: xtensor .. doxygenfunction:: xt::linspace :project: xtensor .. doxygenfunction:: xt::logspace :project: xtensor .. doxygenfunction:: xt::concatenate(std::tuple&&, std::size_t) :project: xtensor .. doxygenfunction:: xt::stack :project: xtensor .. doxygenfunction:: xt::hstack :project: xtensor .. doxygenfunction:: xt::vstack :project: xtensor .. doxygenfunction:: xt::meshgrid :project: xtensor .. doxygenfunction:: xt::diag :project: xtensor .. doxygenfunction:: xt::diagonal :project: xtensor .. doxygenfunction:: xt::tril :project: xtensor .. doxygenfunction:: xt::triu :project: xtensor xtensor-0.23.10/docs/source/api/xcontainer.rst000066400000000000000000000012461405270662500212760ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. layout ====== Defined in ``xtensor/xlayout.hpp`` .. doxygenenum:: xt::layout_type :project: xtensor .. doxygenfunction:: xt::compute_layout(Args... args) :project: xtensor xcontainer ========== Defined in ``xtensor/xcontainer.hpp`` .. doxygenclass:: xt::xcontainer :project: xtensor :members: xstrided_container ================== Defined in ``xtensor/xcontainer.hpp`` .. doxygenclass:: xt::xstrided_container :project: xtensor :members: xtensor-0.23.10/docs/source/api/xcontainer_semantic.rst000066400000000000000000000005571405270662500231650ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xcontainer_semantic =================== Defined in ``xtensor/xsemantic.hpp`` .. doxygenclass:: xt::xcontainer_semantic :project: xtensor :members: xtensor-0.23.10/docs/source/api/xcsv.rst000066400000000000000000000006331405270662500201060ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xcsv: read/write CSV files ========================== Defined in ``xtensor/xcsv.hpp`` .. doxygenfunction:: xt::load_csv :project: xtensor .. doxygenfunction:: xt::dump_csv :project: xtensor xtensor-0.23.10/docs/source/api/xeval.rst000066400000000000000000000004751405270662500202460ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xeval ===== Defined in ``xtensor/xeval.hpp`` .. doxygenfunction:: xt::eval(E&& e) :project: xtensor xtensor-0.23.10/docs/source/api/xexpression.rst000066400000000000000000000011421405270662500215060ustar00rootroot00000000000000.. 
Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xexpression =========== Defined in ``xtensor/xexpression.hpp`` .. doxygenclass:: xt::xexpression :project: xtensor :members: .. doxygenclass:: xt::xshared_expression :project: xtensor :members: .. doxygenfunction:: make_xshared :project: xtensor .. doxygenfunction:: share(xexpression&) :project: xtensor .. doxygenfunction:: share(xexpression&&) :project: xtensor xtensor-0.23.10/docs/source/api/xfixed.rst000066400000000000000000000006321405270662500204110ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xtensor_fixed ============= Defined in ``xtensor/xfixed.hpp`` .. doxygenclass:: xt::xfixed_container :project: xtensor :members: .. doxygentypedef:: xt::xtensor_fixed :project: xtensor xtensor-0.23.10/docs/source/api/xfunction.rst000066400000000000000000000006641405270662500211440ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xfunction ========= Defined in ``xtensor/xfunction.hpp`` .. doxygenclass:: xt::xfunction :project: xtensor :members: Defined in ``xtensor/xmath.hpp`` .. doxygenfunction:: make_lambda_xfunction :project: xtensor xtensor-0.23.10/docs/source/api/xfunctor_view.rst000066400000000000000000000007701405270662500220270ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xfunctor_view ============= Defined in ``xtensor/xfunctor_view.hpp`` .. doxygenclass:: xt::xfunctor_view :project: xtensor :members: Defined in ``xtensor/xcomplex.hpp`` .. doxygenfunction:: xt::real(E&&) :project: xtensor .. doxygenfunction:: xt::imag(E&&) :project: xtensor xtensor-0.23.10/docs/source/api/xgenerator.rst000066400000000000000000000005261405270662500213020ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xgenerator ========== Defined in ``xtensor/xgenerator.hpp`` .. doxygenclass:: xt::xgenerator :project: xtensor :members: xtensor-0.23.10/docs/source/api/xhistogram.rst000066400000000000000000000031631405270662500213110ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xhistogram ========== Defined in ``xtensor/xhistogram.hpp`` .. doxygenenum:: xt::histogram_algorithm :project: xtensor .. doxygenfunction:: xt::histogram(E1&&, E2&&, E3&&, bool) :project: xtensor .. doxygenfunction:: xt::bincount(E1&&, E2&&, std::size_t) :project: xtensor .. doxygenfunction:: xt::histogram_bin_edges(E1&&, E2&&, E3, E3, std::size_t, histogram_algorithm) :project: xtensor .. doxygenfunction:: xt::digitize(E1&&, E2&&, E3&&, bool, bool) :project: xtensor .. 
doxygenfunction:: xt::bin_items(size_t, E&&) :project: xtensor Further overloads ----------------- .. doxygenfunction:: xt::histogram(E1&&, E2&&, bool) :project: xtensor .. doxygenfunction:: xt::histogram(E1&&, std::size_t, bool) :project: xtensor .. doxygenfunction:: xt::histogram(E1&&, std::size_t, E2, E2, bool) :project: xtensor .. doxygenfunction:: xt::histogram(E1&&, std::size_t, E2&&, bool) :project: xtensor .. doxygenfunction:: xt::histogram(E1&&, std::size_t, E2&&, E3, E3, bool) :project: xtensor .. doxygenfunction:: xt::histogram_bin_edges(E1&&, E2, E2, std::size_t, histogram_algorithm) :project: xtensor .. doxygenfunction:: xt::histogram_bin_edges(E1&&, E2&&, std::size_t, histogram_algorithm) :project: xtensor .. doxygenfunction:: xt::histogram_bin_edges(E1&&, std::size_t, histogram_algorithm) :project: xtensor .. doxygenfunction:: xt::bin_items(size_t, size_t) :project: xtensor xtensor-0.23.10/docs/source/api/xindex_view.rst000066400000000000000000000011221405270662500214460ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xindex_view =========== Defined in ``xtensor/xindex_view.hpp`` .. doxygenclass:: xt::xindex_view :project: xtensor :members: .. doxygenclass:: xt::xfiltration :project: xtensor :members: .. doxygenfunction:: xt::index_view(E&&, I&&) :project: xtensor .. doxygenfunction:: xt::filter :project: xtensor .. doxygenfunction:: xt::filtration :project: xtensor xtensor-0.23.10/docs/source/api/xio.rst000066400000000000000000000026411405270662500177230ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xio: pretty printing ==================== Defined in ``xtensor/xio.hpp`` This file defines functions for pretty printing xexpressions. It defines appropriate overloads for the ``<<`` operator for std::ostreams and xexpressions. .. code:: #include #include int main() { xt::xarray a = {{1,2,3}, {4,5,6}}; std::cout << a << std::endl; return 0; } Will print .. code:: {{ 1., 2., 3.}, { 4., 5., 6.}} With the following functions, the global print options can be set: .. doxygenfunction:: xt::print_options::set_line_width :project: xtensor .. doxygenfunction:: xt::print_options::set_threshold :project: xtensor .. doxygenfunction:: xt::print_options::set_edge_items :project: xtensor .. doxygenfunction:: xt::print_options::set_precision :project: xtensor On can also locally overwrite the print options with io manipulators: .. doxygenclass:: xt::print_options::line_width :project: xtensor .. doxygenclass:: xt::print_options::threshold :project: xtensor .. doxygenclass:: xt::print_options::edge_items :project: xtensor .. doxygenclass:: xt::print_options::precision :project: xtensor xtensor-0.23.10/docs/source/api/xiterable.rst000066400000000000000000000007511405270662500211030ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xiterable ========= Defined in ``xtensor/xiterable.hpp`` .. doxygenclass:: xt::xconst_iterable :project: xtensor :members: .. doxygenclass:: xt::xiterable :project: xtensor :members: .. 
doxygenclass:: xt::xcontiguous_iterable :project: xtensor :members: xtensor-0.23.10/docs/source/api/xjson.rst000066400000000000000000000007321405270662500202640ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xjson: serialize to/from JSON ============================= Defined in ``xtensor/xjson.hpp`` .. doxygenfunction:: xt::to_json(nlohmann::json&, const E&); :project: xtensor .. doxygenfunction:: xt::from_json(const nlohmann::json&, E&); :project: xtensor xtensor-0.23.10/docs/source/api/xmanipulation.rst000066400000000000000000000033701405270662500220140ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay, Wolf Vollprecht and Martin Renou Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xmanipulation ============= Defined in ``xtensor/xmanipulation.hpp`` .. doxygenfunction:: xt::atleast_Nd :project: xtensor .. doxygenfunction:: xt::atleast_1d :project: xtensor .. doxygenfunction:: xt::atleast_2d :project: xtensor .. doxygenfunction:: xt::atleast_3d :project: xtensor .. doxygenfunction:: xt::expand_dims :project: xtensor .. doxygenfunction:: xt::flatten :project: xtensor .. doxygenfunction:: xt::flatnonzero :project: xtensor .. doxygenfunction:: xt::flip :project: xtensor .. doxygenfunction:: xt::ravel :project: xtensor .. doxygenfunction:: xt::repeat(E&&, std::size_t, std::size_t) :project: xtensor .. doxygenfunction:: xt::repeat(E&&, const std::vector&, std::size_t) :project: xtensor .. doxygenfunction:: xt::repeat(E&&, std::vector&&, std::size_t) :project: xtensor .. doxygenfunction:: xt::roll(E&&, std::ptrdiff_t) :project: xtensor .. doxygenfunction:: xt::roll(E&&, std::ptrdiff_t, std::ptrdiff_t) :project: xtensor .. doxygenfunction:: xt::rot90 :project: xtensor .. doxygenfunction:: xt::split :project: xtensor .. doxygenfunction:: xt::hsplit :project: xtensor .. doxygenfunction:: xt::vsplit :project: xtensor .. doxygenfunction:: xt::squeeze(E&&) :project: xtensor .. doxygenfunction:: xt::squeeze(E&&, S&&, Tag) :project: xtensor .. doxygenfunction:: xt::transpose(E&&) :project: xtensor .. doxygenfunction:: xt::transpose(E&&, S&&, Tag) :project: xtensor .. doxygenfunction:: xt::trim_zeros :project: xtensor xtensor-0.23.10/docs/source/api/xmasked_view.rst000066400000000000000000000005351405270662500216120ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xmasked_view ============ Defined in ``xtensor/xmasked_view.hpp`` .. doxygenclass:: xt::xmasked_view :project: xtensor :members: xtensor-0.23.10/docs/source/api/xmath.rst000066400000000000000000000577601405270662500202610ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. .. raw:: html Mathematical functions ====================== .. 
toctree:: operators +-----------------------------------------+------------------------------------------+ | :ref:`operator+ ` | identity | +-----------------------------------------+------------------------------------------+ | :ref:`operator- ` | opposite | +-----------------------------------------+------------------------------------------+ | :ref:`operator+ ` | addition | +-----------------------------------------+------------------------------------------+ | :ref:`operator- ` | substraction | +-----------------------------------------+------------------------------------------+ | :ref:`operator* ` | multiplication | +-----------------------------------------+------------------------------------------+ | :ref:`operator/ ` | division | +-----------------------------------------+------------------------------------------+ | :ref:`operator|| ` | logical or | +-----------------------------------------+------------------------------------------+ | :ref:`operator&& ` | logical and | +-----------------------------------------+------------------------------------------+ | :ref:`operator! ` | logical not | +-----------------------------------------+------------------------------------------+ | :ref:`where ` | ternary selection | +-----------------------------------------+------------------------------------------+ | :ref:`any ` | return true if any value is truthy | +-----------------------------------------+------------------------------------------+ | :ref:`all ` | return true if all the values are truthy | +-----------------------------------------+------------------------------------------+ | :ref:`operator\< ` | element-wise lesser than | +-----------------------------------------+------------------------------------------+ | :ref:`operator\<= ` | element-wise less or equal | +-----------------------------------------+------------------------------------------+ | :ref:`operator> ` | element-wise greater than | +-----------------------------------------+------------------------------------------+ | :ref:`operator>= ` | element-wise greater or equal | +-----------------------------------------+------------------------------------------+ | :ref:`operator== ` | expression equality | +-----------------------------------------+------------------------------------------+ | :ref:`operator!= ` | expression inequality | +-----------------------------------------+------------------------------------------+ | :ref:`equal ` | element-wise equality | +-----------------------------------------+------------------------------------------+ | :ref:`not_equal ` | element-wise inequality | +-----------------------------------------+------------------------------------------+ | :ref:`less ` | element-wise lesser than | +-----------------------------------------+------------------------------------------+ | :ref:`less_equal ` | element-wise less or equal | +-----------------------------------------+------------------------------------------+ | :ref:`greater ` | element-wise greater than | +-----------------------------------------+------------------------------------------+ | :ref:`greater_equal ` | element-wise greater or equal | +-----------------------------------------+------------------------------------------+ | :ref:`cast ` | element-wise `static_cast` | +-----------------------------------------+------------------------------------------+ | :ref:`operator& ` | bitwise and | +-----------------------------------------+------------------------------------------+ | :ref:`operator| ` | bitwise or | 
+-----------------------------------------+------------------------------------------+ | :ref:`operator^ ` | bitwise xor | +-----------------------------------------+------------------------------------------+ | :ref:`operator~ ` | bitwise not | +-----------------------------------------+------------------------------------------+ | :ref:`left_shift ` | bitwise shift left | +-----------------------------------------+------------------------------------------+ | :ref:`right_shift ` | bitwise shift right | +-----------------------------------------+------------------------------------------+ | :ref:`operator\<\< ` | bitwise shift left | +-----------------------------------------+------------------------------------------+ | :ref:`operator\>\> ` | bitwise shift right | +-----------------------------------------+------------------------------------------+ .. toctree:: index_related +-----------------------------------------+------------------------------------------+ | :ref:`where ` | indices selection | +-----------------------------------------+------------------------------------------+ | :ref:`nonzero ` | indices selection | +-----------------------------------------+------------------------------------------+ | :ref:`argwhere ` | indices selection | +-----------------------------------------+------------------------------------------+ | :ref:`from_indices ` | biulder from indices | +-----------------------------------------+------------------------------------------+ .. toctree:: basic_functions +---------------------------------------+----------------------------------------------------+ | :ref:`abs ` | absolute value | +---------------------------------------+----------------------------------------------------+ | :ref:`fabs ` | absolute value | +---------------------------------------+----------------------------------------------------+ | :ref:`fmod ` | remainder of the floating point division operation | +---------------------------------------+----------------------------------------------------+ | :ref:`remainder ` | signed remainder of the division operation | +---------------------------------------+----------------------------------------------------+ | :ref:`fma ` | fused multiply-add operation | +---------------------------------------+----------------------------------------------------+ | :ref:`minimum ` | element-wise minimum | +---------------------------------------+----------------------------------------------------+ | :ref:`maximum ` | element-wise maximum | +---------------------------------------+----------------------------------------------------+ | :ref:`fmin ` | element-wise minimum for floating point values | +---------------------------------------+----------------------------------------------------+ | :ref:`fmax ` | element-wise maximum for floating point values | +---------------------------------------+----------------------------------------------------+ | :ref:`fdim ` | element-wise positive difference | +---------------------------------------+----------------------------------------------------+ | :ref:`clip ` | element-wise clipping operation | +---------------------------------------+----------------------------------------------------+ | :ref:`sign ` | element-wise indication of the sign | +---------------------------------------+----------------------------------------------------+ .. 
toctree:: exponential_functions +---------------------------------------+----------------------------------------------------+ | :ref:`exp ` | natural exponential function | +---------------------------------------+----------------------------------------------------+ | :ref:`exp2 ` | base 2 exponential function | +---------------------------------------+----------------------------------------------------+ | :ref:`expm1 ` | natural exponential function, minus one | +---------------------------------------+----------------------------------------------------+ | :ref:`log ` | natural logarithm function | +---------------------------------------+----------------------------------------------------+ | :ref:`log2 ` | base 2 logarithm function | +---------------------------------------+----------------------------------------------------+ | :ref:`log10 ` | base 10 logarithm function | +---------------------------------------+----------------------------------------------------+ | :ref:`log1p ` | natural logarithm of one plus function | +---------------------------------------+----------------------------------------------------+ .. toctree:: power_functions +---------------------------------------+----------------------------------------------------+ | :ref:`pow ` | power function | +---------------------------------------+----------------------------------------------------+ | :ref:`sqrt ` | square root function | +---------------------------------------+----------------------------------------------------+ | :ref:`cbrt ` | cubic root function | +---------------------------------------+----------------------------------------------------+ | :ref:`hypot ` | hypotenuse function | +---------------------------------------+----------------------------------------------------+ .. toctree:: trigonometric_functions +---------------------------------------+----------------------------------------------------+ | :ref:`sin ` | sine function | +---------------------------------------+----------------------------------------------------+ | :ref:`cos ` | cosine function | +---------------------------------------+----------------------------------------------------+ | :ref:`tan ` | tangent function | +---------------------------------------+----------------------------------------------------+ | :ref:`asin ` | arc sine function | +---------------------------------------+----------------------------------------------------+ | :ref:`acos ` | arc cosine function | +---------------------------------------+----------------------------------------------------+ | :ref:`atan ` | arc tangent function | +---------------------------------------+----------------------------------------------------+ | :ref:`atan2 ` | arc tangent function, determining quadrants | +---------------------------------------+----------------------------------------------------+ .. 
toctree:: hyperbolic_functions +---------------------------------------+----------------------------------------------------+ | :ref:`sinh ` | hyperbolic sine function | +---------------------------------------+----------------------------------------------------+ | :ref:`cosh ` | hyperbolic cosine function | +---------------------------------------+----------------------------------------------------+ | :ref:`tanh ` | hyperbolic tangent function | +---------------------------------------+----------------------------------------------------+ | :ref:`asinh ` | inverse hyperbolic sine function | +---------------------------------------+----------------------------------------------------+ | :ref:`acosh ` | inverse hyperbolic cosine function | +---------------------------------------+----------------------------------------------------+ | :ref:`atanh ` | inverse hyperbolic tangent function | +---------------------------------------+----------------------------------------------------+ .. toctree:: error_functions +---------------------------------------+----------------------------------------------------+ | :ref:`erf ` | error function | +---------------------------------------+----------------------------------------------------+ | :ref:`erfc ` | complementary error function | +---------------------------------------+----------------------------------------------------+ | :ref:`tgamma ` | gamma function | +---------------------------------------+----------------------------------------------------+ | :ref:`lgamma ` | natural logarithm of the gamma function | +---------------------------------------+----------------------------------------------------+ .. toctree:: nearint_operations +---------------------------------------+----------------------------------------------------+ | :ref:`ceil ` | nearest integers not less | +---------------------------------------+----------------------------------------------------+ | :ref:`floor ` | nearest integers not greater | +---------------------------------------+----------------------------------------------------+ | :ref:`trunc ` | nearest integers not greater in magnitude | +---------------------------------------+----------------------------------------------------+ | :ref:`round ` | nearest integers, rounding away from zero | +---------------------------------------+----------------------------------------------------+ | :ref:`nearbyint ` | nearest integers using current rounding mode | +---------------------------------------+----------------------------------------------------+ | :ref:`rint ` | nearest integers using current rounding mode | +---------------------------------------+----------------------------------------------------+ .. 
toctree:: classif_functions +---------------------------------------+----------------------------------------------------+ | :ref:`isfinite ` | checks for finite values | +---------------------------------------+----------------------------------------------------+ | :ref:`isinf ` | checks for infinite values | +---------------------------------------+----------------------------------------------------+ | :ref:`isnan ` | checks for NaN values | +---------------------------------------+----------------------------------------------------+ | :ref:`isclose ` | element-wise closeness detection | +---------------------------------------+----------------------------------------------------+ | :ref:`allclose ` | closeness reduction | +---------------------------------------+----------------------------------------------------+ .. toctree:: reducing_functions +-----------------------------------------------+---------------------------------------------------------------------+ | :ref:`sum ` | sum of elements over given axes | +-----------------------------------------------+---------------------------------------------------------------------+ | :ref:`prod ` | product of elements over given axes | +-----------------------------------------------+---------------------------------------------------------------------+ | :ref:`mean ` | mean of elements over given axes | +-----------------------------------------------+---------------------------------------------------------------------+ | :ref:`average ` | weighted average along the specified axis | +-----------------------------------------------+---------------------------------------------------------------------+ | :ref:`variance ` | variance of elements over given axes | +-----------------------------------------------+---------------------------------------------------------------------+ | :ref:`stddev ` | standard deviation of elements over given axes | +-----------------------------------------------+---------------------------------------------------------------------+ | :ref:`diff ` | Calculate the n-th discrete difference along the given axis | +-----------------------------------------------+---------------------------------------------------------------------+ | :ref:`amax ` | amax of elements over given axes | +-----------------------------------------------+---------------------------------------------------------------------+ | :ref:`amin ` | amin of elements over given axes | +-----------------------------------------------+---------------------------------------------------------------------+ | :ref:`trapz ` | Integrate along the given axis using the composite trapezoidal rule | +-----------------------------------------------+---------------------------------------------------------------------+ | :ref:`norm_l0 ` | L0 pseudo-norm over given axes | +-----------------------------------------------+---------------------------------------------------------------------+ | :ref:`norm_l1 ` | L1 norm over given axes | +-----------------------------------------------+---------------------------------------------------------------------+ | :ref:`norm_sq ` | Squared L2 norm over given axes | +-----------------------------------------------+---------------------------------------------------------------------+ | :ref:`norm_l2 ` | L2 norm over given axes | +-----------------------------------------------+---------------------------------------------------------------------+ | :ref:`norm_linf ` | Infinity norm over given axes | 
+-----------------------------------------------+---------------------------------------------------------------------+ | :ref:`norm_lp_to_p ` | p_th power of Lp norm over given axes | +-----------------------------------------------+---------------------------------------------------------------------+ | :ref:`norm_lp ` | Lp norm over given axes | +-----------------------------------------------+---------------------------------------------------------------------+ | :ref:`norm_induced_l1 ` | Induced L1 norm of a matrix | +-----------------------------------------------+---------------------------------------------------------------------+ | :ref:`norm_induced_linf ` | Induced L-infinity norm of a matrix | +-----------------------------------------------+---------------------------------------------------------------------+ .. toctree:: accumulating_functions +---------------------------------------------+-------------------------------------------------+ | :ref:`cumsum ` | cumulative sum of elements over a given axis | +---------------------------------------------+-------------------------------------------------+ | :ref:`cumprod ` | cumulative product of elements over given axes | +---------------------------------------------+-------------------------------------------------+ .. toctree:: nan_functions +---------------------------------------------------+------------------------------------------------------------+ | :ref:`nan_to_num ` | convert NaN and +/- inf to finite numbers | +---------------------------------------------------+------------------------------------------------------------+ | :ref:`nansum ` | sum of elements over a given axis, replacing NaN with 0 | +---------------------------------------------------+------------------------------------------------------------+ | :ref:`nanprod ` | product of elements over given axes, replacing NaN with 1 | +---------------------------------------------------+------------------------------------------------------------+ | :ref:`nancumsum ` | cumsum of elements over a given axis, replacing NaN with 0 | +---------------------------------------------------+------------------------------------------------------------+ | :ref:`nancumprod ` | cumprod of elements over given axes, replacing NaN with 1 | +---------------------------------------------------+------------------------------------------------------------+ xtensor-0.23.10/docs/source/api/xnpy.rst000066400000000000000000000011601405270662500201150ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xnpy: read/write NPY files ========================== Defined in ``xtensor/xnpy.hpp`` .. doxygenfunction:: xt::load_npy(std::istream&) :project: xtensor .. doxygenfunction:: xt::load_npy(const std::string&) :project: xtensor .. doxygenfunction:: xt::dump_npy(const std::string&, const xexpression&) :project: xtensor .. doxygenfunction:: xt::dump_npy(const xexpression&) :project: xtensor xtensor-0.23.10/docs/source/api/xoptional_assembly.rst000066400000000000000000000005651405270662500230430ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xoptional_assembly ================== Defined in ``xtensor/xoptional_assembly.hpp`` .. 
doxygenclass:: xt::xoptional_assembly :project: xtensor :members: xtensor-0.23.10/docs/source/api/xoptional_assembly_adaptor.rst000066400000000000000000000006151405270662500245510ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xoptional_assembly_adaptor ========================== Defined in ``xtensor/xoptional_assembly.hpp`` .. doxygenclass:: xt::xoptional_assembly_adaptor :project: xtensor :members: xtensor-0.23.10/docs/source/api/xoptional_assembly_base.rst000066400000000000000000000006111405270662500240250ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xoptional_assembly_base ======================= Defined in ``xtensor/xoptional_assembly_base.hpp`` .. doxygenclass:: xt::xoptional_assembly_base :project: xtensor :members: xtensor-0.23.10/docs/source/api/xpad.rst000066400000000000000000000013241405270662500200550ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xpad ==== Defined in ``xtensor/xpad.hpp`` .. doxygenenum:: xt::pad_mode :project: xtensor .. doxygenfunction:: xt::pad(E&& , const std::vector>&, pad_mode, V) :project: xtensor .. doxygenfunction:: xt::pad(E&& , const std::vector&, pad_mode, V) :project: xtensor .. doxygenfunction:: xt::pad(E&& , S, pad_mode, V) :project: xtensor .. doxygenfunction:: xt::tile(E&& , std::initializer_list) :project: xtensor .. doxygenfunction:: xt::tile(E&& , S) :project: xtensor xtensor-0.23.10/docs/source/api/xrandom.rst000066400000000000000000000056641405270662500206040ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xrandom ======= Defined in ``xtensor/xrandom.hpp`` .. warning:: xtensor uses a lazy generator for random numbers. You need to assign them or use ``eval`` to keep the generated values consistent. .. _random-get_default_random_engine-function-reference: .. doxygenfunction:: xt::random::get_default_random_engine :project: xtensor .. _random-seed-function-reference: .. doxygenfunction:: xt::random::seed :project: xtensor .. _random-rand-function-reference: .. doxygenfunction:: xt::random::rand(const S&, T, T, E&) :project: xtensor .. _random-randint-function-reference: .. doxygenfunction:: xt::random::randint(const S&, T, T, E&) :project: xtensor .. _random-randn-function-reference: .. doxygenfunction:: xt::random::randn(const S&, T, T, E&) :project: xtensor .. _random-binomial-function-reference: .. doxygenfunction:: xt::random::binomial(const S&, T, D, E&) :project: xtensor .. _random-geometric-function-reference: .. doxygenfunction:: xt::random::geometric(const S&, D, E&) :project: xtensor .. _random-negative_binomial-function-reference: .. doxygenfunction:: xt::random::negative_binomial(const S&, T, D, E&) :project: xtensor .. _random-poisson-function-reference: .. doxygenfunction:: xt::random::poisson(const S&, D, E&) :project: xtensor .. _random-exponential-function-reference: .. 
doxygenfunction:: xt::random::exponential(const S&, T, E&) :project: xtensor .. _random-gamma-function-reference: .. doxygenfunction:: xt::random::gamma(const S&, T, T, E&) :project: xtensor .. _random-weibull-function-reference: .. doxygenfunction:: xt::random::weibull(const S&, T, T, E&) :project: xtensor .. _random-extreme_value-function-reference: .. doxygenfunction:: xt::random::extreme_value(const S&, T, T, E&) :project: xtensor .. _random-lognormal-function-reference: .. doxygenfunction:: xt::random::lognormal(const S&, T, T, E&) :project: xtensor .. _random-cauchy-function-reference: .. doxygenfunction:: xt::random::cauchy(const S&, T, T, E&) :project: xtensor .. _random-fisher_f-function-reference: .. doxygenfunction:: xt::random::fisher_f(const S&, T, T, E&) :project: xtensor .. _random-student_t-function-reference: .. doxygenfunction:: xt::random::student_t(const S&, T, E&) :project: xtensor .. _random-choice-function-reference: .. doxygenfunction:: xt::random::choice(const xexpression&, std::size_t, bool, E&) :project: xtensor .. doxygenfunction:: xt::random::choice(const xexpression&, std::size_t, const xexpression&, bool, E&) :project: xtensor .. _random-shuffle-function-reference: .. doxygenfunction:: xt::random::shuffle :project: xtensor .. _random-permutation-function-reference: .. doxygenfunction:: xt::random::permutation(T, E&) :project: xtensor xtensor-0.23.10/docs/source/api/xreducer.rst000066400000000000000000000006311405270662500207420ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xreducer ======== Defined in ``xtensor/xreducer.hpp`` .. doxygenclass:: xt::xreducer :project: xtensor :members: .. doxygenfunction:: xt::reduce(F&&, E&&, X&&, EVS&&) :project: xtensor xtensor-0.23.10/docs/source/api/xrepeat.rst000066400000000000000000000005111405270662500205660ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xrepeat ======= Defined in ``xtensor/xrepeat.hpp`` .. doxygenclass:: xt::xrepeat :project: xtensor :members: xtensor-0.23.10/docs/source/api/xsemantic_base.rst000066400000000000000000000005401405270662500221050ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xsemantic_base ============== Defined in ``xtensor/xsemantic.hpp`` .. doxygenclass:: xt::xsemantic_base :project: xtensor :members: xtensor-0.23.10/docs/source/api/xset_operation.rst000066400000000000000000000014341405270662500221660ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xset_operation ============== Defined in ``xtensor/xset_operation.hpp`` .. doxygenenum:: xt::isin(E&&, F&&) :project: xtensor .. doxygenenum:: xt::in1d(E&&, F&&) :project: xtensor .. doxygenenum:: xt::searchsorted(E1&&, E2&&, bool) :project: xtensor Further overloads ----------------- .. doxygenenum:: xt::isin(E&&, std::initializer_list) :project: xtensor .. doxygenenum:: xt::isin(E&&, I&&, I&&) :project: xtensor .. 
doxygenenum:: xt::in1d(E&&, std::initializer_list) :project: xtensor .. doxygenenum:: xt::in1d(E&&, I&&, I&&) :project: xtensor xtensor-0.23.10/docs/source/api/xshape.rst000066400000000000000000000010301405270662500204030ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xshape ====== Defined in ``xtensor/xshape.hpp`` .. doxygenfunction:: bool same_shape(const S1& s1, const S2& s2) :project: xtensor .. doxygenfunction:: bool has_shape(const E& e, std::initializer_list shape) :project: xtensor .. doxygenfunction:: bool has_shape(const E& e, const S& shape) :project: xtensor xtensor-0.23.10/docs/source/api/xsort.rst000066400000000000000000000025151405270662500203030ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xsort ===== Defined in ``xtensor/xsort.hpp`` .. doxygenfunction:: xt::sort(const xexpression&, placeholders::xtuph) :project: xtensor .. doxygenfunction:: xt::sort(const xexpression&, std::ptrdiff_t) :project: xtensor .. doxygenfunction:: xt::argsort(const xexpression&, placeholders::xtuph) :project: xtensor .. doxygenfunction:: xt::argsort(const xexpression&, std::ptrdiff_t) :project: xtensor .. doxygenfunction:: xt::argmin(const xexpression&) :project: xtensor .. doxygenfunction:: xt::argmin(const xexpression&, std::ptrdiff_t) :project: xtensor .. doxygenfunction:: xt::argmax(const xexpression&) :project: xtensor .. doxygenfunction:: xt::argmax(const xexpression&, std::ptrdiff_t) :project: xtensor .. doxygenfunction:: xt::unique(const xexpression&) :project: xtensor .. doxygenfunction:: xt::partition(const xexpression&, const C&, placeholders::xtuph) :project: xtensor .. doxygenfunction:: xt::argpartition(const xexpression&, const C&, placeholders::xtuph) :project: xtensor .. doxygenfunction:: xt::median(E&&, std::ptrdiff_t) :project: xtensor xtensor-0.23.10/docs/source/api/xstrided_view.rst000066400000000000000000000012741405270662500220050ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xstrided_view ============= Defined in ``xtensor/xstrided_view.hpp`` .. doxygenclass:: xt::xstrided_view :project: xtensor :members: .. doxygentypedef:: xt::xstrided_slice_vector :project: xtensor .. doxygenfunction:: xt::strided_view(E&&, S&&, X&&, std::size_t, layout_type) :project: xtensor .. doxygenfunction:: xt::strided_view(E&&, const xstrided_slice_vector&) :project: xtensor .. doxygenfunction:: xt::reshape_view(E&&, S&&, layout_type) :project: xtensor xtensor-0.23.10/docs/source/api/xtensor.rst000066400000000000000000000012001405270662500206140ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xtensor ======= Defined in ``xtensor/xtensor.hpp`` .. doxygenclass:: xt::xtensor_container :project: xtensor :members: .. doxygentypedef:: xt::xtensor :project: xtensor .. doxygentypedef:: xt::xtensor_optional :project: xtensor .. doxygenfunction:: xt::from_indices :project: xtensor .. 
doxygenfunction:: xt::flatten_indices :project: xtensor .. doxygenfunction:: xt::ravel_indices :project: xtensor xtensor-0.23.10/docs/source/api/xtensor_adaptor.rst000066400000000000000000000025111405270662500223340ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xtensor_adaptor =============== Defined in ``xtensor/xtensor.hpp`` .. doxygenclass:: xt::xtensor_adaptor :project: xtensor :members: adapt (xtensor_adaptor) ======================== Defined in ``xtensor/xadapt.hpp`` .. doxygenfunction:: xt::adapt(C&&, layout_type) :project: xtensor .. doxygenfunction:: xt::adapt(C&&, const SC&, layout_type) :project: xtensor .. doxygenfunction:: xt::adapt(C&&, SC&&, SS&&) :project: xtensor .. doxygenfunction:: xt::adapt(P&&, typename A::size_type, O, layout_type, const A&) :project: xtensor .. doxygenfunction:: xt::adapt(P&&, typename A::size_type, O, const SC&, layout_type, const A&) :project: xtensor .. doxygenfunction:: xt::adapt(P&&, typename A::size_type, O, SC&&, SS&&, const A&) :project: xtensor .. doxygenfunction:: xt::adapt(T (&)[N], const SC&, layout_type) :project: xtensor .. doxygenfunction:: xt::adapt(T (&)[N], SC&&, SS&&) :project: xtensor .. doxygenfunction:: xt::adapt_smart_ptr(P&&, const I (&)[N], layout_type) :project: xtensor .. doxygenfunction:: xt::adapt_smart_ptr(P&&, const I (&)[N], D&&, layout_type) :project: xtensor xtensor-0.23.10/docs/source/api/xview.rst000066400000000000000000000016131405270662500202640ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xview ===== Defined in ``xtensor/xview.hpp`` .. doxygenclass:: xt::xview :project: xtensor :members: .. doxygenfunction:: xt::view :project: xtensor .. doxygenfunction:: xt::row :project: xtensor .. doxygenfunction:: xt::col :project: xtensor Defined in ``xtensor/xslice.hpp`` .. doxygenfunction:: xt::range(A, B) :project: xtensor .. doxygenfunction:: xt::range(A, B, C) :project: xtensor .. doxygenfunction:: xt::all :project: xtensor .. doxygenfunction:: xt::newaxis :project: xtensor .. doxygenfunction:: xt::ellipsis :project: xtensor .. doxygenfunction:: xt::keep(T&&) :project: xtensor .. doxygenfunction:: xt::drop(T&&) :project: xtensor xtensor-0.23.10/docs/source/api/xview_semantic.rst000066400000000000000000000005401405270662500221450ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. xview_semantic ============== Defined in ``xtensor/xsemantic.hpp`` .. doxygenclass:: xt::xview_semantic :project: xtensor :members: xtensor-0.23.10/docs/source/binder-logo.svg000066400000000000000000000057001405270662500205420ustar00rootroot00000000000000 xtensor-0.23.10/docs/source/bindings.rst000066400000000000000000000230261405270662500201500ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. 
Designing language bindings with xtensor ======================================== xtensor and its :ref:`related-projects` make it easy to implement a feature once in C++ and expose it to the main languages of data science, such as Python, Julia and R, with little extra work. Although that sounds simple in principle, difficulties may appear when it comes to defining the API of the C++ library. The following illustrates the different options we have with the case of a single function ``compute`` that must be callable from all the languages. Generic API ----------- Since the xtensor bindings provide different container types for holding tensors (pytensor, rtensor and jltensor), if we want our function to be callable from all the languages, it must accept a generic argument: .. code:: template <class E> void compute(E&& e); However, this is a bit too generic and we may want to enforce that this function only accepts xtensor arguments. Since all xtensor containers inherit from the "xexpression" CRTP base class, we can easily express that constraint with the following signature: .. code:: template <class E> void compute(const xexpression<E>& e) { // Now the implementation must use e() instead of e } Notice that with this change, we lose the ability to call the function with non-constant references or rvalue references. If we want them back, we need to add the following overloads: .. code:: template <class E> void compute(xexpression<E>& e); template <class E> void compute(xexpression<E>&& e); In the following we assume that the constant reference overload is enough. We can now expose the compute function to the other languages; let's illustrate this with Python bindings: .. code:: PYBIND11_MODULE(pymod, m) { xt::import_numpy(); m.def("compute", &compute<xt::pytensor<double, 2>>); } Full qualified API ------------------ Accepting any kind of expression can still be too permissive; assume we want to restrict this function to 2-dimensional tensor containers only. In that case, a solution is to provide an API function that forwards the call to a common generic implementation: .. code:: namespace detail { template <class E> void compute_impl(E&&); } template <class T> void compute(const xtensor<T, 2>& t) { detail::compute_impl(t); } Exposing it to Python is just as simple: .. code:: template <class T> void compute(const pytensor<T, 2>& t) { detail::compute_impl(t); } PYBIND11_MODULE(pymod, m) { xt::import_numpy(); m.def("compute", &compute<double>); } Although this solution is really simple, it requires writing four additional functions for the API. Besides, if you later decide to support array containers, you need to add four more functions. Therefore this solution should be considered for libraries with a small number of functions to expose, and whose APIs are unlikely to change in the future. Container selection ------------------- A way to keep the restriction on the parameter type while limiting the required amount of typing in the bindings is to rely on additional structures that will "select" the right type for us. The idea is to define a structure for selecting the type of containers (tensor, array) and a structure to select the library implementation of that container (xtensor, pytensor in the case of a tensor container): .. 
code:: // library container selector struct xtensor_c { }; // container selector, must be specialized for each // library container selector template <class T, class C> struct tensor_container; // Specialization for xtensor library (or C++) template <class T> struct tensor_container<T, xtensor_c> { using type = xt::xtensor<T, 2>; }; template <class T, class C> using tensor_container_t = typename tensor_container<T, C>::type; The function signature then becomes .. code:: template <class T, class C> void compute(const tensor_container_t<T, C>& t); The Python bindings only require that we specialize the ``tensor_container`` structure: .. code:: struct pytensor_c { }; template <class T> struct tensor_container<T, pytensor_c> { using type = pytensor<T, 2>; }; PYBIND11_MODULE(pymod, m) { xt::import_numpy(); m.def("compute", &compute<double, pytensor_c>); } Even if we need to specialize the "tensor_container" structure for each language, the specialization can be reused for other functions and thus reduce the amount of typing required. This comes at a cost though: we've lost type inference on the C++ side. .. code:: xt::xtensor<double, 2> t {{1., 2., 3.}, {4., 5., 6.}}; compute<double, xtensor_c>(t); // works compute(t); // error (couldn't infer template argument 'T') Besides, if we later want to support arrays, we need to add an "array_container" structure and its specializations, and an overload of the compute function: .. code:: template <class T, class C> struct array_container; template <class T> struct array_container<T, xtensor_c> { using type = xt::xarray<T>; }; template <class T, class C> using array_container_t = typename array_container<T, C>::type; template <class T, class C> void compute(const array_container_t<T, C>& t); Type restriction with SFINAE ---------------------------- The major drawback of the previous option is the loss of type inference in C++. The only means to get it back is to reintroduce a generic parameter type. However, we can make the compiler generate an invalid type so that the function is removed from the overload resolution set when the actual type of the argument does not satisfy some constraint. This principle is known as SFINAE (Substitution Failure Is Not An Error). Modern C++ provides metafunctions to help us make use of SFINAE: .. code:: template <class T> struct is_tensor : std::false_type { }; template <class T, std::size_t N, xt::layout_type L> struct is_tensor<xt::xtensor<T, N, L>> : std::true_type { }; template <class T, template <class> class C = is_tensor, std::enable_if_t<C<T>::value, bool> = true> void compute(const T& t); Here, when ``C<T>::value`` is true, the ``enable_if_t`` invocation generates the bool type. Otherwise, it does not generate anything, leading to an invalid function declaration. The compiler removes this declaration from the overload resolution set and no error happens if another "compute" overload is a good match for the call. Otherwise, the compiler emits an error. The default value is here to avoid the need to pass a boolean value when invoking the ``compute`` function; this value is of no use, we only rely on the SFINAE trick. This declaration has a slight problem: adding ``enable_if_t`` to the signature of each function we want to expose is cumbersome. Let's make this part more expressive: .. code:: template <template <class> class C, class T> using check_constraints = std::enable_if_t<C<T>::value, bool>; template <class T, template <class> class C = is_tensor, check_constraints<C, T> = true> void compute(const T& t); All good, we have type inference and an expressive syntax for declaring our function. Besides, if we want to relax the constraint so the function can accept both tensors and arrays, all we have to do is to replace the default value for C: .. 
code:: // Equivalent to is_tensor<T>::value || is_array<T>::value template <class T> struct is_container : xtl::disjunction<is_tensor<T>, is_array<T>> { }; template <class T, template <class> class C = is_container, check_constraints<C, T> = true> void compute(const T& t); This is far more flexible than the previous option. This flexibility comes at a minor cost: exposing the function to Python is slightly more verbose: .. code:: template <class T, std::size_t N, xt::layout_type L> struct is_tensor<xt::pytensor<T, N, L>> : std::true_type { }; PYBIND11_MODULE(pymod, m) { xt::import_numpy(); m.def("compute", &compute<xt::pytensor<double, 2>>); } Conclusion ---------- Each solution has its pros and cons, and choosing one of them should be done according to the flexibility you want your API to offer and the constraints imposed by the implementation. For instance, a method that requires a lot of typing in the bindings might not suit libraries with a huge number of functions to expose, while a full generic API might be problematic if the implementation expects containers only. Below is a summary of the advantages and drawbacks of the different options: - Generic API: full genericity, no additional typing required in the bindings, but maybe too permissive. - Full qualified API: simple, accepts only the specified parameter type, but requires a lot of typing for the bindings. - Container selection: quite simple, requires less typing than the previous method, but loses type inference on the C++ side and lacks some flexibility. - Type restriction with SFINAE: more flexible than the previous option, gets type inference back, but slightly more complex to implement. xtensor-0.23.10/docs/source/build-options.rst000066400000000000000000000115311405270662500211410ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. .. _build-configuration: Build and configuration ======================= Configuration ------------- ``xtensor`` can be configured via macros which must be defined *before* including any of its headers. This can be achieved in the following ways: - either define them in the CMakeLists of your project, with the ``target_compile_definitions`` cmake command. - or create a header where you define all the macros you want and then include the headers you need (see the example below). Then include this header whenever you need ``xtensor`` in your project. The following macros are already defined in ``xtensor`` but can be overwritten: - ``XTENSOR_DEFAULT_DATA_CONTAINER(T, A)``: defines the type used as the default data container for tensors and arrays. ``T`` is the ``value_type`` of the container and ``A`` its ``allocator_type``. - ``XTENSOR_DEFAULT_SHAPE_CONTAINER(T, EA, SA)``: defines the type used as the default shape container for tensors and arrays. ``T`` is the ``value_type`` of the data container, ``EA`` its ``allocator_type``, and ``SA`` is the ``allocator_type`` of the shape container. - ``XTENSOR_DEFAULT_LAYOUT``: defines the default layout (row_major, column_major, dynamic) for tensors and arrays. We *strongly* discourage using this macro, which is provided for testing purposes. Prefer defining alias types on tensor and array containers instead. - ``XTENSOR_DEFAULT_TRAVERSAL``: defines the default traversal order (row_major, column_major) for algorithms and iterators on tensors and arrays. We *strongly* discourage using this macro, which is provided for testing purposes.
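To make the configuration-header approach concrete, here is a minimal sketch of what such a header could look like. The file name ``xtensor_config.hpp`` and the choice of ``std::vector`` as data container are purely illustrative assumptions, not requirements of xtensor:

.. code:: cpp

    // xtensor_config.hpp -- hypothetical project-wide configuration header.
    // Include this header instead of including xtensor headers directly,
    // so that every translation unit sees the same configuration.
    #ifndef XTENSOR_CONFIG_HPP
    #define XTENSOR_CONFIG_HPP

    #include <vector>

    // Illustrative override: store tensor data in std::vector
    // instead of the default data container.
    #define XTENSOR_DEFAULT_DATA_CONTAINER(T, A) std::vector<T, A>

    // The macros above must be defined before any xtensor header is pulled in.
    #include <xtensor/xarray.hpp>
    #include <xtensor/xtensor.hpp>

    #endif

Centralizing the macros in one header like this helps avoid inconsistent configurations (and the ODR issues they cause) across translation units.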
The following macros are helpers for debugging, they are not defined by default: - ``XTENSOR_ENABLE_ASSERT``: enables assertions in xtensor, such as bound check. - ``XTENSOR_ENABLE_CHECK_DIMENSION``: enables the dimensions check in ``xtensor``. Note that this option should not be turned on if you expect ``operator()`` to perform broadcasting. .. _external-dependencies: External dependencies --------------------- The last group of macros is for using external libraries to achieve maximum performance (see next section for additional requirements): - ``XTENSOR_USE_XSIMD``: enables SIMD acceleration in ``xtensor``. This requires that you have xsimd_ installed on your system. - ``XTENSOR_USE_TBB``: enables parallel assignment loop. This requires that you have tbb_ installed on your system. - ``XTENSOR_DISABLE_EXCEPTIONS``: disables c++ exceptions. - ``XTENSOR_USE_OPENMP``: enables parallel assignment loop using OpenMP. This requires that OpenMP is available on your system. Defining these macros in the CMakeLists of your project before searching for ``xtensor`` will trigger automatic finding of dependencies, so you don't have to include the ``find_package(xsimd)`` and ``find_package(TBB)`` commands in your CMakeLists: .. code:: cmake set(XTENSOR_USE_XSIMD 1) set(XTENSOR_USE_TBB 1) # xsimd and TBB dependencies are automatically # searched when the following is executed find_package(xtensor REQUIRED) # the target now sets the proper defines (e.g. "XTENSOR_USE_XSIMD") target_link_libraries(... xtensor) Build and optimization ---------------------- Windows ~~~~~~~ Windows users must activate the ``/bigobj`` flag, otherwise it's almost certain that the compilation fails. More generally, the following options are recommended: .. code:: cmake target_link_libraries(... xtensor xtensor::optimize) set(CMAKE_EXE_LINKER_FLAGS /MANIFEST:NO) # OR target_compile_options(target_name PRIVATE /EHsc /MP /bigobj) set(CMAKE_EXE_LINKER_FLAGS /MANIFEST:NO) If you defined ``XTENSOR_USE_XSIMD``, you must also specify which instruction set you target: .. code:: cmake target_compile_options(target_name PRIVATE /arch:AVX2) # OR target_compile_options(target_name PRIVATE /arch:AVX) # OR target_compile_options(target_name PRIVATE /arch:ARMv7VE) If you build on an old system that does not support any of these instruction sets, you don't have to specify anything, the system will do its best to enable the most recent supported instruction set. Linux/OSX ~~~~~~~~~ Whether you enabled ``XTENSOR_USE_XSIMD`` or not, it is highly recommended to build with ``-march=native`` option, if your compiler supports it: .. code:: cmake target_link_libraries(... xtensor xtensor::optimize) # OR target_compile_options(target_name PRIVATE -march=native) Notice that this option prevents building on a machine and distributing the resulting binary on another machine with a different architecture (i.e. not supporting the same instruction set). .. _xsimd: https://github.com/xtensor-stack/xsimd .. _tbb: https://www.threadingbuildingblocks.org xtensor-0.23.10/docs/source/builder.rst000066400000000000000000000111471405270662500200020ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. Expression builders =================== `xtensor` provides functions to ease the build of common N-dimensional expressions. 
The expressions returned by these functions implement the laziness of `xtensor`, that is, they don't hold any value. Values are computed upon request. Ones and zeros -------------- - ``zeros(shape)``: generates an expression containing zeros of the specified shape. - ``ones(shape)``: generates an expression containing ones of the specified shape. - ``eye(shape, k=0)``: generates an expression of the specified shape, with ones on the k-th diagonal. - ``eye(n, k = 0)``: generates an expression of shape ``(n, n)`` with ones on the k-th diagonal. Numerical ranges ---------------- - ``arange(start=0, stop, step=1)``: generates numbers evenly spaced within given half-open interval. - ``linspace(start, stop, num_samples)``: generates num_samples evenly spaced numbers over given interval. - ``logspace(start, stop, num_samples)``: generates num_samples evenly spaced on a log scale over given interval Joining expressions ------------------- - ``concatenate(tuple, axis=0)``: concatenates a list of expressions along the given axis. - ``stack(tuple, axis=0)``: stacks a list of expressions along the given axis. - ``hstack(tuple)``: stacks expressions in sequence horizontally (i.e. column-wise). - ``vstack(tuple)``: stacks expressions in sequence vertically (i.e. row wise). Random distributions -------------------- .. warning:: xtensor uses a lazy generator for random numbers. You need to assign them or use ``eval`` to keep the generated values consistent. - ``rand(shape, lower, upper)``: generates an expression of the specified shape, containing uniformly distributed random numbers in the half-open interval [lower, upper). - ``randint(shape, lower, upper)``: generates an expression of the specified shape, containing uniformly distributed random integers in the half-open interval [lower, upper). - ``randn(shape, mean, std_dev)``: generates an expression of the specified shape, containing numbers sampled from the Normal random number distribution. - ``binomial(shape, trials, prob)``: generates an expression of the specified shape, containing numbers sampled from the binomial random number distribution. - ``geometric(shape, prob)``: generates an expression of the specified shape, containing numbers sampled from the geometric random number distribution. - ``negative_binomial(shape, k, prob)``: generates an expression of the specified shape, containing numbers sampled from the negative binomial random number distribution. - ``poisson(shape, rate)``: generates an expression of the specified shape, containing numbers sampled from the Poisson random number distribution. - ``exponential(shape, rate)``: generates an expression of the specified shape, containing numbers sampled from the exponential random number distribution. - ``gamma(shape, alpha, beta)``: generates an expression of the specified shape, containing numbers sampled from the gamma random number distribution. - ``weibull(shape, a, b)``: generates an expression of the specified shape, containing numbers sampled from the Weibull random number distribution. - ``extreme_value(shape, a, b)``: generates an expression of the specified shape, containing numbers sampled from the extreme value random number distribution. - ``lognormal(shape, a, b)``: generates an expression of the specified shape, containing numbers sampled from the Log-Normal random number distribution. - ``chi_squared(shape, a, b)``: generates an expression of the specified shape, containing numbers sampled from the chi-squared random number distribution. 
- ``cauchy(shape, a, b)``: generates an expression of the specified shape, containing numbers sampled from the Cauchy random number distribution. - ``fisher_f(shape, m, n)``: generates an expression of the specified shape, containing numbers sampled from the Fisher-f random number distribution. - ``student_t(shape, n)``: generates an expression of the specified shape, containing numbers sampled from the Student-t random number distribution. Meshes ------ - ``meshgrid(x1, x2,...)```: generates N-D coordinate expressions given one-dimensional coordinate arrays ``x1``, ``x2``... If specified vectors have lengths ``Ni = len(xi)``, meshgrid returns ``(N1, N2, N3,..., Nn)``-shaped arrays, with the elements of xi repeated to fill the matrix along the first dimension for x1, the second for x2 and so on. xtensor-0.23.10/docs/source/changelog.rst000066400000000000000000002617621405270662500203150ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. Changelog ========= 0.23.10 ------- - Performance fix: set m_strides_computed = true after computing `#2377 https://github.com/xtensor-stack/xtensor/pull/2377` - argsort: catching zeros stride leading axis (bugfix) `#2238 https://github.com/xtensor-stack/xtensor/pull/2238` - Adding ``.flat(i)`` `#2356 https://github.com/xtensor-stack/xtensor/pull/2356` - Fixed ``check_index`` function `#2378 https://github.com/xtensor-stack/xtensor/pull/2378` - Fixing & -> && in histogram `#2386 https://github.com/xtensor-stack/xtensor/pull/2386` - Adding ``front()`` and ``back()`` convenience methods `#2385 https://github.com/xtensor-stack/xtensor/pull/2385` - Adding description of index operators `#2387 https://github.com/xtensor-stack/xtensor/pull/2387` - flip: adding overload without axis (mimics NumPy) `#2373 https://github.com/xtensor-stack/xtensor/pull/2373` - average: fixing overload issue for axis argument `#2374 https://github.com/xtensor-stack/xtensor/pull/2374` 0.23.9 ------ - Fix data_offset method in xview to compute the strides only once `#2371 https://github.com/xtensor-stack/xtensor/pull/2371` 0.23.8 ------ - Specialize operator= when RHS is chunked `#2367 https://github.com/xtensor-stack/xtensor/pull/2367` 0.23.7 ------ - Fixed chunked_iterator `#2365 https://github.com/xtensor-stack/xtensor/pull/2365` 0.23.6 ------ - Update installation instructions to mention mamba `#2357 https://github.com/xtensor-stack/xtensor/pull/2357` - Fixed grid_shape return type `#2360 https://github.com/xtensor-stack/xtensor/pull/2360` - Added assertion in resize method `#2361 https://github.com/xtensor-stack/xtensor/pull/2361` - Added const chunk iterators `#2362 https://github.com/xtensor-stack/xtensor/pull/2362` - Fixed chunk assignment `#2363 https://github.com/xtensor-stack/xtensor/pull/2363` 0.23.5 ------ - No need to explicitly install blas anymore with latest xtensor-blas `#2343 https://github.com/xtensor-stack/xtensor/pull/2343` - FIX for xtensor-stack/xtl/issues/245 `#2344 https://github.com/xtensor-stack/xtensor/pull/2344` - Implement grid view `#2346 https://github.com/xtensor-stack/xtensor/pull/2346` - Refactoring of xchunked_view `#2353 https://github.com/xtensor-stack/xtensor/pull/2353` 0.23.4 ------ - Fix edge chunk assignment `#2342 https://github.com/xtensor-stack/xtensor/pull/2342` 0.23.3 ------ - Use the correct version file for TBB since 2021.1 `#2334 
https://github.com/xtensor-stack/xtensor/pull/2334` - Add missing API RTD for nan functions `#2333 https://github.com/xtensor-stack/xtensor/pull/2333` - Fixed layout issue in container classes `#2335 https://github.com/xtensor-stack/xtensor/pull/2335` - Fixed assignment of a tensor_view on a pseudo-container `#2336 https://github.com/xtensor-stack/xtensor/pull/2336` - Fixed return type of data method `#2338 https://github.com/xtensor-stack/xtensor/pull/2338` - Fixed assignment to flatten view `#2339 https://github.com/xtensor-stack/xtensor/pull/2339` 0.23.2 ------ - MSVC Build: Wrapped linker flags in quotes `#2299 https://github.com/xtensor-stack/xtensor/pull/2299` - Added can_assign and enable_assignable_expression `#2323 https://github.com/xtensor-stack/xtensor/pull/2323` - Fix automatically generated tests `#2313 https://github.com/xtensor-stack/xtensor/pull/2313` - Fix linspace endpoint bug `#2306 https://github.com/xtensor-stack/xtensor/pull/2306` - Added fallback to old behavior in FindTBB.cmake `#2325 https://github.com/xtensor-stack/xtensor/pull/2325` - Implement nanmin and nanmax `#2314 https://github.com/xtensor-stack/xtensor/pull/2314` - Clean up and add more tests for nanmin and nanmax `#2326 https://github.com/xtensor-stack/xtensor/pull/2326` - Fix linespace with only one point `#2327 https://github.com/xtensor-stack/xtensor/pull/2327` - Fixed ambiguous call of tile `#2329 https://github.com/xtensor-stack/xtensor/pull/2329` 0.23.1 ------ - Fix compilation warnings on unused local typedefs `#2295 https://github.com/xtensor-stack/xtensor/pull/2295` - Disable a failing shuffle test for clang `#2294 https://github.com/xtensor-stack/xtensor/pull/2294` - Fix simd assign_data `#2292 https://github.com/xtensor-stack/xtensor/pull/2292` - Fix -Wshadow and -Wunused-local-typedef warning `#2293 https://github.com/xtensor-stack/xtensor/pull/2293` - Documentation improvement Part #B `#2287 https://github.com/xtensor-stack/xtensor/pull/2287` 0.23.0 ------ Breaking changes ~~~~~~~~~~~~~~~~ - Remove chunked array extension mechanism `#2283 `_ - Upgraded to xtl 0.7.0 `#2284 `_ Other changes ~~~~~~~~~~~~~ - Harmonize #include statements in doc `#2280 `_ - Added missing shape_type in xfunctor_stepper `#2285 `_ 0.22.0 ------ Breaking changes ~~~~~~~~~~~~~~~~ - Drop support of 3.* Clang versions `#2251 `_ - Fix reducers assignment `#2254 `_ - Removed reducer ``big_promote_type`` `#2277 `_ Other changes ~~~~~~~~~~~~~ - Improve histogram performance with equal bin sizes `#2088 `_ - Added missing header in xfixed `#2225 `_ - Implement xt::random::choice with weights vector `#2241 `_ - Testing alignment `#2246 `_ - Add reducers tests `#2252 `_ - Fix binary operators on complex `#2253 `_ - Removed not implemented assign method from xchunked_array `#2256 `_ - Support initialized list for chunked_array shapes `#2258 `_ - Add as_strided free function `#2261 `_ - Fix histogram compatibility with containers beyond xtensor `#2263 `_ - Fixed broadcasting with keep_slice that holds a single element `#2270 `_ - Make xt::cast and xtl::optional compatible `#2271 `_ - Fix minor warnings detected by clang `#2272 `_ - Extra assert in mean computation wrt. 
ddof `#2273 `_ - Provide a -Werror mode and ensure xtensor passes with it `#2274 `_ - Moved layout_remove_any to xlayout.hpp `#2275 `_ - Provide a -Werror mode and ensure xtensor passes with it `#2274 `_ - Slight reorganization of the documentation `#2276 `_ - Updated reducer docs according to recent changes `#2278 `_ - Added template parameter for initial value type in accumulators `#2279 `_ 0.21.10 ------- - Document chunked arrays `#2102 `_ - Removed ``zarray`` files `#2221 `_ - Improved ``xeval`` `#2223 `_ - Fixed various warnings `#2224 `_ 0.21.9 ------ - Adding macro ``XTENSOR_SELECT_ALIGN`` `#2152 `_ - xcontainer.hpp: Renamed a shadowing type name inside a function `#2208 `_ - Add chunk_memory_layout to chunked_array factory `#2211 `_ - CMake: Modernized GTest-integration `#2212 `_ - ``xnpy.hpp``: fix multiple definition of 'host_endian_char' variable when included in different linked objects `#2214 `_ - Made global variable const to force internal linkage `#2216 `_ - Use xtl::endianness instead of bundling it `#2218 `_ - Fix call to resize of chunk container `#2219 `_ 0.21.8 ------ - Fix undefined behavior while testing shifts `#2175 `_ - Fix ``zarray`` initialization from ``zarray`` `#2180 `_ - Portable and generic implementation of endianess detection `#2182 `_ - Fix xnpy save padding computation `#2183 `_ - Only use ``-march=native`` if it's available `#2184 `_ - Fix ``xchunked_array`` assignment `#2177 `_ - Add specific ``xchunked_array`` constructor for ``xchunk_store_manager`` `#2188 `_ - Make xnpy tests aware of both little and big endian targets `#2189 `_ - Fixed constructors of ``xchunked_array`` `#2190 `_ - First implementation of ``zchunked_wrapper`` `#2193 `_ - Don't mark dirty a resized or reshaped ``xfile_array`` `#2194 `_ - Replaced catch-all constructor of ``zarray`` with more restrictive ones `#2195 `_ - Fixed SFINAE based on ``xchunked_store_manager`` `#2197 `_ - Fix generated cmake config to include missing required lib `#2200 `_ - Add ``set_chunk_shape`` to the first chunk of the pool `#2198 `_ - Chunked array refactoring `#2201 `_ - Refactored ``xchunked_array`` semantic `#2202 `_ - Added missing header to CMakeLists.txt `#2203 `_ - Fixed ``load_simd`` for ``xcomplex`` `#2204 `_ - Upgraded to xtl 0.6.20 `#2206 `_ - changed std traits to new ``xtl::xtraits`` `#2205 `_ - ``xstorage.hpp``: Renamed a shadowing variable inside a function `#2207 `_ 0.21.7 ------ - Removed zheaders from single header `#2157 `_ - Implemented insertion of range and intializer list in svector `#2165 `_ - Adding has_shape `#2163 `_ - Adding get_rank and has_fixed_rank `#2162 `_ - Zrefactoring `#2140 `_ - Added missing header `#2169 `_ - Extending docs random `#2173 `_ 0.21.6 ------ - Added implementation of ``isin`` and ``in1d`` `#2021 `_ - Wrote single include header `#2031 `_ - Added details for ``xt::random`` to docs `#2043 `_ - Added ``digitize``, ``searchsorted``, and ``bin_items`` `#2037 `_ - Fixed error with zero tensor size in ``xt::mean`` `#2047 `_ - Fixed initialization order in ``xfunction`` `#2050 `_ - ``adapt_smart_ptr`` overloads now accept STL-like container as shape `#2052 `_ - Added ``xchunked_array`` `#2076 `_ - ``xchunked_array`` inherits from ``xiterable`` `#2082 `_ - ``xchunked_array`` inherits from ``xcontainer_semantic`` `#2083 `_ - Fixed assignment operator of ``xchunked_array`` `#2084 `_ - Added constructors from ``xexpression`` and ``chunk_shape`` to ``xchunked_array`` `#2087 `_ - Fixed chunk layout `#2091 `_ - Copy constructor gets expression's chunk_shape if it 
is chunked `#2092 `_ - Replaced template parameter chunk_type with chunk_storage `#2095 `_ - Implemented on-disk chunked array `#2096 `_ - Implemented chunk pool in xchunk_store_manager `#2099 `_ - ``xfile_array`` is now an expression `#2107 `_ - ``xchunked_array`` code cleanup `#2109 `_ - ``xchunked_store_manager`` code cleanup `#2110 `_ - Refactored ``xfile_array`` `#2117 `_ - Added simd accessors to ``xfil_array_container`` `#2118 `_ - Abstracted file format through a formal class `#2115 `_ - Added ``xchunked_array`` extension template `#2122 `_ - Refactored ``xdisk_io_handler`` `#2123 `_ - Fixed exception for file write operation `#2125 `_ - Implemented ``zarray`` `#2127 `_ - Implemented the skeleton of the dynamic expression system `#2129 `_ - Implemented zfunctions, equivalent of xfunction for dynamic expression system `#2130 `_ - Implemented ``allocate_result`` in ``zfunction`` `#2132 `_ - Implemented assign mechanism for ``zarray`` `#2133 `_ - Added xindex_path to transform indexes into path `#2131 `_ - Fixing various compiler warnings `#2145 `_ - Removed conversion and initialization warnings `#2141 `_ 0.21.5 ------ - Fix segfault when using ``xt::drop`` on an empty list of indices `#1990 `_ - Implemented missing methods in ``xrepeat`` class `#1993 `_ - Added extension base to ``xrepeat`` and clean up ``xbroadcast`` `#1994 `_ - Fix return type of ``nanmean`` and add unittest `#1996 `_ - Add result type template argument for ``stddev``, ``variance``, ``nanstd`` and ``nanvar`` `#1999 `_ - Fix variance overload `#2002 `_ - Added missing ``xaxis_slice_iterator`` header to CMakeLists.txt `#2009 `_ - Fixed xview on const keep and const drop slices `#2010 `_ - Added ``static_assert`` to ``adapt`` methods `#2015 `_ - Removed allocator deprecated calls `#2018 `_ - Added missing overload of ``push_back`` to ``svector`` `#2024 `_ - Initialized all members of ``xfunciton_cache_impl`` `#2026 `_ 0.21.4 ------ - Fix warning -Wsign-conversion in ``xview`` `#1902 `_ - Fixed issue due to thread_local storage on some architectures `#1905 `_ - benchmark/CMakeLists.txt: fixed a tiny spelling mistake `#1904 `_ - nd-iterator implementation `#1891 `_ - Add GoatCounter analytics for the documentation `#1908 `_ - Added ``noexcept`` in ``svector`` `#1919 `_ - Add implementation of repeat (similar to numpy) `#1896 `_ - Fix initialization of out shape in ``xt::tile`` `#1923 `_ - ``xaxis_slice_iterator`` – Iterates over 1D slices oriented along the specified axis `#1916 `_ - Fixed cxx11 lib guard `#1925 `_ - Fixed CXX11 ABI when _GLIBCXX_USE_DUAL_ABI is set to 0 `#1927 `_ - Enabling array-bounds warning `#1933 `_ - Fixed warnings `#1934 `_ - Compile with g++ instead of gcc, clarify include directories `#1938 `_ - broadcast function now accepts fixed shapes `#1939 `_ - Don't print decimal point after ``inf`` or ``nan`` `#1940 `_ - Improved performance of ``xt::tile`` `#1943 `_ - Refactoring CI `#1942 `_ - Documentation build: Switched to channel QuantStack `#1948 `_ - Removed warnings due to gtest upgrade `#1949 `_ - Fixed flatten view of view `#1950 `_ - Improved narrative documentation of reducers `#1958 `_ - Add test for printing xarray of type ``size_t`` `#1947 `_ - Added documentation for iterators `#1961 `_ - Fixed ``check_element_index`` behavior for 0-D expressions `#1965 `_ - Fixed ``element`` method of xreducer `#1966 `_ - Fixed ``cast`` for third-party types `#1967 `_ - fix ``xoperation`` `#1790 `_ - Added installation instruction with MinGW `#1969 `_ - ``xrepeat`` now stores ``const_xclosure_t`` 
instead of ``E`` `#1968 `_ - Fixed ``argpartition`` leading axis test `#1971 `_ - Added tests with C++20 enabled `#1974 `_ - Added documentation for ``repeat`` `#1975 `_ - Fixed sort and partition `#1976 `_ - xt::view now supports negative indices `#1979 `_ 0.21.3 ------ - Allow use of cmake add_subdirectory(xtensor) by checking for xtl target `#1865 `_ - Simplifying CMake config `#1856 `_ - Fixed ``reshape`` with signed integers `#1867 `_ - Disabled MSVC iterator checks `#1874 `_ - Added covariance function `#1847 `_ - Fix for older cmake `#1880 `_ - Added row and col facade for 2-D contianers `#1876 `_ - Implementation of ``xt::tile`` `#1888 `_ - Fixed ``reshape`` return `#1886 `_ - Enabled ``add_subdirectory`` for ``xsimd`` `#1889 `_ - Support ``ddof`` argument for ``xt::variance`` `#1893 `_ - Set -march=native only if the user did not set another -march already `#1899 `_ - Assemble new container in ``xpad`` `#1808 `_ 0.21.2 ------ - Upgraded to gtest 1.10.0 `#1859 `_ - Upgraded to xsimd 7.4.4 `#1864 `_ - Removed allocator deprecated calls `#1862 `_ 0.21.1 ------ - Added circular includes check `#1853 `_ - Removed cricular dependencies `#1854 `_ 0.21.0 ------ Breaking changes ~~~~~~~~~~~~~~~~ - Dynamic SIMD assign `#1762 `_ Other changes ~~~~~~~~~~~~~ - Updated links to other projects `#1773 `_ - Updated license `#1774 `_ - Updated related projects `#1775 `_ - Fixed ``has_simd_interface`` for non existing ``simd_return_type`` `#1779 `_ - Added average overload for default equal weights `#1789 `_ - Implemented concatenation of ``fixed_shape`` tensors `#1793 `_ - Replaced ``new`` with ``unique_ptr`` in headers `#1800 `_ - Fixed reallocation when an ``xbuffer`` is copied over `#1799 `_ - Added hte ability to use the library with ``-fnoexception`` `#1801 `_ - Minor efficiency improvement `#1807 `_ - Unified ``xt::concatenate`` and ``xt::concatenate_fixed`` `#1805 `_ - Have ``reshape`` method return a reference to self `#1813 `_ - Enabling tests of ``xtensor_fixed`` on Windows with clang. 
`#1815 `_ - Disabled SIMD assignment when bool conversion occurs `#1818 `_ - Speed up views, added SIMD interface to strided views `#1627 `_ - Fixed assignment of scalar to complex `#1828 `_ - Fixed concurrency issue in ``flat_expression_adaptor`` `#1831 `_ - Implemented an equivalent to ``numpy.roll`` `#1823 `_ - Upgraded to ``xtl 0.6.9`` `#1839 `_ - Fixed type of OpenMP's index variable on Windows `#1838 `_ - Implemented ``hstack`` and ``vstack`` `#1841 `_ - Implemented ``hsplit`` and ``vsplit`` `#1842 `_ - Fixed behavior of ``diff`` when ``n`` is greater thant the number of elements `#1843 `_ - Added treshold to OpenMP parallelization `#1849 `_ - Added missing assign operator in ``xmasked_view`` `#1850 `_ - Updated CMake target `#1851 `_ 0.20.10 ------- - Simplified functors definition `#1756 `_ - Fixed ``container_simd_return_type`` `#1759 `_ - Fixed reducer init for ``xtensor_fixed`` value type `#1761 `_ 0.20.9 ------ - Added alias to check if type is ``xsemantic_base`` `#1673 `_ - Added missing include ``xoperation.hpp`` `#1674 `_ - Moved XSIMD and TBB dependencies to tests only `#1676 `_ - Added missing coma `#1680 `_ - Added Numpy-like parameter in ``load_csv`` `#1682 `_ - Added ``shape()`` method to ``xshape.hpp`` `#1592 `_ - Added shape print tip to docs `#1693 `_ - Fix lvalue npy_file heap corruption in MSVC `#1697 `_ - Fix UB when parsing 1-dimension npy `#1696 `_ - Fixed compiler error (missing ``shape`` method in ``xbroadcast`` and ``xscalar``) `#1699 `_ - Added: deg2rad, rad2deg, degrees, radians `#1700 `_ - Despecialized xt::to_json and xt::from_json `#1691 `_ - Added coverity `#1577 `_ - Additional configuration for future coverity branch `#1712 `_ - More tests for coverity `#1714 `_ - Update README.md for Conan installation instructions `#1717 `_ - Reset stream's flags after output operation `#1718 `_ - Added missing include in ``xview.hpp`` `#1719 `_ - Removed usage of allocator's members that are deprecated in C++17 `#1720 `_ - Added tests for mixed assignment `#1721 `_ - Fixed ``step_simd`` when underlying iterator holds an ``xscalar_stepper`` `#1724 `_ - Fixed accumulator for empty arrays `#1725 `_ - Use ``temporary_type`` in implementation of ``xt::diff`` `#1727 `_ - CMakeLists.txt: bumped up xsimd required version to 7.2.6 `#1728 `_ - Fixed reducers on empty arrays `#1729 `_ - Implemented additional random distributions `#1708 `_ - Fixed reducers: passing the same axis many times now throws `#1730 `_ - Made ``xfixed_container`` optionally sharable `#1733 `_ - ``step_simd`` template parameter is now the value type instead of the simd type `#1736 `_ - Implemented OpenMP Parallelization. `#1739 `_ - Readme improvements `#1741 `_ - Vectorized ``xt::where`` `#1738 `_ - Fix typos and wording in documentation `#1745 `_ - Upgraded to xtl 0.6.6. 
and xsimd 7.4.0 `#1747 `_ - Improve return value type for ``nanmean`` `#1749 `_ - Allows (de)serialization of xexpressions in NumPy formatted strings and streams `#1751 `_ - Enabled vectorization of boolean operations `#1748 `_ - Added the list of contributors `#1755 `_ 0.20.8 ------ - Added traversal order to ``argwhere`` and ``filter`` `#1672 `_ - ``flatten`` now returns the new type ``xtensor_view`` `#1671 `_ - Error case handling in ``concatenate`` `#1669 `_ - Added assign operator from ``temporary_type`` in ``xiterator_adaptor`` `#1668 `_ - Improved ``index_view`` examples `#1667 `_ - Updated build option section of the documentation `#1666 `_ - Made ``xsequence_view`` convertible to arbitrary sequence type providing iterators `#1657 `_ - Added overload of ``is_linear`` for expressions without ``strides`` method `#1655 `_ - Fixed reverse ``arange`` `#1653 `_ - Add warnings for random number generation `#1652 `_ - Added common pitfalls section in the documentation `#1649 `_ - Added missing ``shape`` overload in ``xfunction`` `#1650 `_ - Made ``xconst_accessible::shape(std::size_t)`` visible in ``xview`` `#1645 `_ - Diff: added bounds-check on maximal recursion `#1640 `_ - Add ``xframe`` to related projects `#1635 `_ - Update ``indice.rst`` `#1626 `_ - Remove unecessary arguments `#1624 `_ - Replace ``auto`` with explicit return type in ``make_xshared`` `#1621 `_ - Add `z5` to related projects `#1620 `_ - Fixed long double complex offset views `#1614 `_ - Fixed ``xpad`` bugs `#1607 `_ - Workaround for annoying bug in VS2017 `#1602 `_ 0.20.7 ------ - Fix reshape view assignment and allow setting traversal order `#1598 `_ 0.20.6 ------ - Added XTENSOR_DEFAULT_ALIGNMENT macro `#1597 `_ - Added missing comparison operators for const_array `#1596 `_ - Fixed reducer for expression with shape containing 0 `#1595 `_ - Very minor spelling checks in comments `#1591 `_ - tests can be built in debug mode `#1589 `_ - strided views constructors forward shape argument `#1587 `_ - Remove unused type alias `#1585 `_ - Fixed reducers with empty list of axes `#1582 `_ - Fix typo in builder docs `#1581 `_ - Fixed return type of data in xstrided_view `#1580 `_ - Fixed reducers on expression with shape containing 1 as first elements `#1579 `_ - Fixed xview::element for range with more elements than view's dimension `#1578 `_ - Fixed broadcasting of shape containing 0-sized dimensions `#1575 `_ - Fixed norm return type for complex `#1574 `_ - Fixed iterator incremented or decremented by 0 `#1572 `_ - Added complex exponential test `#1571 `_ - Strided views refactoring `#1569 `_ - Add clang-cl support `#1559 `_ 0.20.5 ------ - Fixed ``conj`` `#1556 `_ - Fixed ``real``, ``imag``, and ``functor_view`` `#1554 `_ - Allows to include ``xsimd`` without defining ``XTENSOR_USE_XSIMD`` `#1548 `_ - Fixed ``argsort`` in column major `#1547 `_ - Fixed ``assign_to`` for ``arange`` on ``double`` `#1541 `_ - Fix example code in container.rst `#1544 `_ - Removed return value from ``step_leading`` `#1536 `_ - Bugfix: amax `#1533 `_ - Removed extra ; `#1527 `_ 0.20.4 ------ - Buffer adaptor default constructor `#1524 `_ 0.20.3 ------ - Fix xbuffer adaptor `#1523 `_ 0.20.2 ------ - Fixed broadcast linear assign `#1493 `_ - Fixed ``do_stirdes_match`` `#1497 `_ - Removed unused capture `#1499 `_ - Upgraded to ``xtl`` 0.6.2 `#1502 `_ - Added missing methods in ``xshared_expression`` `#1503 `_ - Fixed iterator types of ``xcontainer`` `#1504 `_ - Typo correction in external-structure.rst `#1505 `_ - Added extension base to adaptors 
`#1507 `_ - Fixed shared expression iterator methods `#1509 `_ - Strided view fixes `#1512 `_ - Improved range documentation `#1515 `_ - Fixed ``ravel`` and ``flatten`` implementation `#1511 `_ - Fixed ``xfixed_adaptor`` temporary assign `#1516 `_ - Changed struct -> class in ``xiterator_adaptor`` `#1513 `_ - Fxed ``argmax`` for expressions with strides 0 `#1519 `_ - Add ``has_linear_assign`` to ``sdynamic_view`` `#1520 `_ 0.20.1 ------ - Add a test for mimetype rendering and fix forward declaration `#1490 `_ - Fix special case of view iteration `#1491 `_ 0.20.0 ------ Breaking changes ~~~~~~~~~~~~~~~~ - Removed ``xmasked_value`` and ``promote_type_t`` `#1389 `_ - Removed deprecated type ``slice_vector`` `#1459 `_ - Upgraded to ``xtl`` 0.6.1 `#1468 `_ - Added ``keep_dims`` option to reducers `#1474 `_ - ``do_strides_match`` now accept an addition base stride value `#1479 `_ Other changes ~~~~~~~~~~~~~ - Add ``partition``, ``argpartition`` and ``median`` `#991 `_ - Fix tets on avx512 `#1410 `_ - Implemented ``xcommon_tensor_t`` with tests `#1412 `_ - Code reorganization `#1416 `_ - ``reshape`` now accepts ``initializer_list`` parameter `#1417 `_ - Improved documentation `#1419 `_ - Fixed ``noexcept`` specifier `#1418 `_ - ``view`` now accepts lvalue slices `#1420 `_ - Removed warnings `#1422 `_ - Added ``reshape`` member to ``xgenerator`` to make ``arange`` more flexible `#1421 `_ - Add ``std::decay_t`` to ``shape_type`` in strided view `#1425 `_ - Generic reshape for ``xgenerator`` `#1426 `_ - Fix out of bounds accessing in ``xview::compute_strides`` `#1437 `_ - Added quick reference section to documentation `#1438 `_ - Improved getting started CMakeLists.txt `#1440 `_ - Added periodic indices `#1430 `_ - Added build section to narrative documentation `#1442 `_ - Fixed ``linspace`` corner case `#1443 `_ - Fixed type-o in documentation `#1446 `_ - Added ``xt::xpad`` `#1441 `_ - Added warning in ``resize`` documentation `#1447 `_ - Added ``in_bounds`` method `#1444 `_ - ``xstrided_view_base`` is now a CRTP base class `#1453 `_ - Turned ``xfunctor_applier_base`` into a CRTP base class `#1455 `_ - Removed out of bound access in ``data_offset`` `#1456 `_ - Added ``xaccessible`` base class `#1451 `_ - Refactored ``operator[]`` `#1460 `_ - Splitted ``xaccessible`` `#1461 `_ - Refactored ``size`` `#1462 `_ - Implemented ``nanvar`` and ``nanstd`` with tests `#1424 `_ - Removed warnings `#1463 `_ - Added ``periodic`` and ``in_bounds`` method to ``xoptional_assembly_base`` `#1464 `_ - Updated documentation according to last changes `#1465 `_ - Fixed ``flatten_sort_result_type`` `#1470 `_ - Fixed ``unique`` with expressions not defining ``temporary_type`` `#1472 `_ - Fixed ``xstrided_view_base`` constructor `#1473 `_ - Avoid signed integer overflow in integer printer `#1475 `_ - Fixed ``xview::inner_backstrides_type`` `#1480 `_ - Fixed compiler warnings `#1481 `_ - ``slice_implementation_getter`` now forwards its lice argument `#1486 `_ - ``linspace`` can now be reshaped `#1488 `_ 0.19.4 ------ - Add missing include `#1391 `_ - Fixes in xfunctor_view `#1393 `_ - Add tests for xfunctor_view `#1395 `_ - Add `empty` method to fixed_shape `#1396 `_ - Add accessors to slice members `#1401 `_ - Allow adaptors on shared pointers `#1218 `_ - Fix `eye` with negative index `#1406 `_ - Add documentation for shared pointer adaptor `#1407 `_ - Add `nanmean` function `#1408 `_ 0.19.3 ------ - Fix arange `#1361 `_. - Adaptors for C stack-allocated arrays `#1363 `_. 
- Add support for optionals in ``conditional_ternary`` `#1365 `_. - Add tests for ternary operator on xoptionals `#1368 `_. - Enable ternary operation for a mix of ``xoptional`` and ``value`` `#1370 `_. - ``reduce`` now accepts a single reduction function `#1371 `_. - Implemented share method `#1372 `_. - Documentation of shared improved `#1373 `_. - ``make_lambda_xfunction`` more generic `#1374 `_. - minimum/maximum for ``xoptional`` `#1378 `_. - Added missing methods in ``uvector`` and ``svector`` `#1379 `_. - Clip ``xoptional_assembly`` `#1380 `_. - Improve gtest cmake `#1382 `_. - Implement ternary operator for scalars `#1385 `_. - Added missing ``at`` method in ``uvector`` and ``svector`` `#1386 `_. - Fixup binder environment `#1387 `_. - Fixed ``resize`` and ``swap`` of ``svector`` `#1388 `_. 0.19.2 ------ - Enable CI for C++17 `#1324 `_. - Fix assignment of masked views `#1328 `_. - Set CMAKE_CXX_STANDARD instead of CMAKE_CXX_FLAGS `#1330 `_. - Allow specifying traversal order to argmin and argmax `#1331 `_. - Update section on differences with NumPy `#1336 `_. - Fix accumulators for shapes containing 1 `#1337 `_. - Decouple XTENSOR_DEFAULT_LAYOUT and XTENSOR_DEFAULT_TRAVERSAL `#1339 `_. - Prevent embiguity with `xsimd::reduce` `#1343 `_. - Require `xtl` 0.5.3 `#1346 `_. - Use concepts instead of SFINAE `#1347 `_. - Document good practice for xtensor-based API design `#1348 `_. - Fix rich display of tensor expressions `#1353 `_. - Fix xview on fixed tensor `#1354 `_. - Fix issue with `keep_slice` in case of `dynamic_view` on `view` `#1355 `_. - Prevent installation of gtest artifacts `#1357 `_. 0.19.1 ------ - Add string specialization to ``lexical_cast`` `#1281 `_. - Added HDF5 reference for ``xtensor-io`` `#1284 `_. - Fixed view index remap issue `#1288 `_. - Fixed gcc 8.2 deleted functions `#1289 `_. - Fixed reducer for 0d input `#1292 `_. - Fixed ``check_element_index`` `#1295 `_. - Added comparison functions `#1297 `_. - Add some tests to ensure chrono works with xexpressions `#1272 `_. - Refactor ``functor_view`` `#1276 `_. - Documentation improved `#1302 `_. - Implementation of shift operators `#1304 `_. - Make functor adaptor stepper work for proxy specializations `#1305 `_. - Replaced ``auto&`` with ``auto&&`` in ``assign_to`` `#1306 `_. - Fix namespace in ``xview_utils.hpp`` `#1308 `_. - Introducing ``flatten_indices`` and ``unravel_indices`` `#1300 `_. - Default layout parameter for ``ravel`` `#1311 `_. - Fixed ``xvie_stepper`` `#1317 `_. - Fixed assignment of view on view `#1314 `_. - Documented indices `#1318 `_. - Fixed shift operators return type `#1319 `_. 0.19.0 ------ Breaking changes ~~~~~~~~~~~~~~~~ - Upgraded to ``xtl 0.5`` `#1275 `_. Other changes ~~~~~~~~~~~~~ - Removed type-o in docs, minor code style consistency update `#1255 `_. - Removed most of the warnings `#1261 `_. - Optional bitwise fixed `#1263 `_. - Prevent macro expansion in ``std::max`` `#1265 `_. - Update numpy.rst `#1267 `_. - Update getting_started.rst `#1268 `_. - keep and drop ``step_size`` fixed `#1270 `_. - Fixed typo in ``xadapt`` `#1277 `_. - Fixed typo `#1278 `_. 0.18.3 ------ - Exporting optional dependencies `#1253 `_. - 0-D HTML rendering `#1252 `_. - Include nlohmann_json in xio for mime bundle repr `#1251 `_. - Fixup xview scalar assignment `#1250 `_. - Implemented `from_indices` `#1240 `_. - xtensor_forward.hpp cleanup `#1243 `_. - default layout-type for `unravel_from_strides` and `unravel_index` `#1239 `_. - xfunction iterator fix `#1241 `_. - xstepper fixes `#1237 `_. 
- print_options io manipulators `#1231 `_. - Add syntactic sugar for reducer on single axis `#1228 `_. - Added view vs. adapt benchmark `#1229 `_. - added precisions to the installation instructions `#1226 `_. - removed data interface from dynamic view `#1225 `_. - add xio docs `#1223 `_. - Fixup xview assignment `#1216 `_. - documentation updated to be consistent with last changes `#1214 `_. - prevents macro expansion of std::max `#1213 `_. - Fix minor typos `#1212 `_. - Added missing assign operator in xstrided_view `#1210 `_. - argmax on axis with single element fixed `#1209 `_. 0.18.2 ------ - expression tag system fixed `#1207 `_. - optional extension for generator `#1206 `_. - optional extension for ``xview`` `#1205 `_. - optional extension for ``xstrided_view`` `#1204 `_. - optional extension for reducer `#1203 `_. - optional extension for ``xindex_view`` `#1202 `_. - optional extension for ``xfunctor_view`` `#1201 `_. - optional extension for broadcast `#1198 `_. - extension API and code cleanup `#1197 `_. - ``xscalar`` optional refactoring `#1196 `_. - Extension mechanism `#1192 `_. - Many small fixes `#1191 `_. - Slight refactoring in ``step_size`` logic `#1188 `_. - Fixup call of const overload in assembly storage `#1187 `_. 0.18.1 ------ - Fixup xio forward declaration `#1185 `_. 0.18.0 ------ Breaking changes ~~~~~~~~~~~~~~~~ - Assign and trivial_broadcast refactoring `#1150 `_. - Moved array manipulation functions (``transpose``, ``ravel``, ``flatten``, ``trim_zeros``, ``squeeze``, ``expand_dims``, ``split``, ``atleast_Nd``, ``atleast_1d``, ``atleast_2d``, ``atleast_3d``, ``flip``) from ``xstrided_view.hpp`` to ``xmanipulation.hpp`` `#1153 `_. - iterator API improved `#1155 `_. - Fixed ``where`` and ``nonzero`` function behavior to mimic the behavior from NumPy `#1157 `_. - xsimd and functor refactoring `#1173 `_. New features ~~~~~~~~~~~~ - Implement ``rot90`` `#1153 `_. - Implement ``argwhere`` and ``flatnonzero`` `#1157 `_. - Implemented ``xexpression_holder`` `#1164 `_. Other changes ~~~~~~~~~~~~~ - Warnings removed `#1159 `_. - Added missing include `#1162 `_. - Removed unused type alias in ``xmath/average`` `#1163 `_. - Slices improved `#1168 `_. - Fixed ``xdrop_slice`` `#1181 `_. 0.17.4 ------ - perfect forwarding in ``xoptional_function`` constructor `#1101 `_. - fix issue with ``base_simd`` `#1103 `_. - ``XTENSOR_ASSERT`` fixed on Windows `#1104 `_. - Implement ``xmasked_value`` `#1032 `_. - Added ``setdiff1d`` using stl interface `#1109 `_. - Added test case for ``setdiff1d`` `#1110 `_. - Added missing reference to ``diff`` in ``From numpy to xtensor`` section `#1116 `_. - Add ``amax`` and ``amin`` to the documentation `#1121 `_. - ``histogram`` and ``histogram_bin_edges`` implementation `#1108 `_. - Added numpy comparison for interp `#1111 `_. - Allow multiple return type reducer functions `#1113 `_. - Fixes ``average`` bug + adds Numpy based tests `#1118 `_. - Static ``xfunction`` cache for fixed sizes `#1105 `_. - Add negative reshaping axis `#1120 `_. - Updated ``xmasked_view`` using ``xmasked_value`` `#1074 `_. - Clean documentation for views `#1131 `_. - Build with ``xsimd`` on Windows fixed `#1127 `_. - Implement ``mime_bundle_repr`` for ``xmasked_view`` `#1132 `_. - Modify shuffle to use identical algorithms for any number of dimensions `#1135 `_. - Warnings removal on windows `#1139 `_. - Add permutation function to random `#1141 `_. - ``xfunction_iterator`` permutation `#933 `_. - Add ``bincount`` to ``xhistogram`` `#1140 `_. 
- Add contiguous iterable base class and remove layout param from storage iterator `#1057 `_. - Add ``storage_iterator`` to view and strided view `#1045 `_. - Removes ``data_element`` from ``xoptional`` `#1137 `_. - ``xtensor`` default constructor and scalar assign fixed `#1148 `_. - Add ``resize / reshape`` to ``xfixed_container`` `#1147 `_. - Iterable refactoring `#1149 `_. - ``inner_strides_type`` imported in ``xstrided_view`` `#1151 `_. 0.17.3 ------ - ``xslice`` fix `#1099 `_. - added missing ``static_layout`` in ``xmasked_view`` `#1100 `_. 0.17.2 ------ - Add experimental TBB support for parallelized multicore assign `#948 `_. - Add inline statement to all functions in xnpy `#1097 `_. - Fix strided assign for certain assignments `#1095 `_. - CMake, remove gtest warnings `#1085 `_. - Add conversion operators to slices `#1093 `_. - Add optimization to unchecked accessors when contiguous layout is known `#1060 `_. - Speedup assign by computing ``any`` layout on vectors `#1063 `_. - Skip resizing for fixed shapes `#1072 `_. - Add xsimd apply to xcomplex functors (conj, norm, arg) `#1086 `_. - Propagate contiguous layout through views `#1039 `_. - Fix C++17 ambiguity for GCC 7 `#1081 `_. - Correct shape type in argmin, fix svector growth `#1079 `_. - Add ``interp`` function to xmath `#1071 `_. - Fix valgrind warnings + memory leak in xadapt `#1078 `_. - Remove more clang warnings & errors on OS X `#1077 `_. - Add move constructor from xtensor <-> xarray `#1051 `_. - Add global support for negative axes in reducers/accumulators allow multiple axes in average `#1010 `_. - Fix reference usage in xio `#1076 `_. - Remove occurences of std::size_t and double `#1073 `_. - Add missing parantheses around min/max for MSVC `#1061 `_. 0.17.1 ------ - Add std namespace to size_t everywhere, remove std::copysign for MSVC `#1053 `_. - Fix (wrong) bracket warnings for older clang versions (e.g. clang 5 on OS X) `#1050 `_. - Fix strided view on view by using std::addressof `#1049 `_. - Add more adapt functions and shorthands `#1043 `_. - Improve CRTP base class detection `#1041 `_. - Fix rebind container ambiguous template for C++17 / GCC 8 regression `#1038 `_. - Fix functor return value `#1035 `_. 0.17.0 ------ Breaking changes ~~~~~~~~~~~~~~~~ - Changed strides to ``std::ptrdiff_t`` `#925 `_. - Renamed ``count_nonzeros`` in ``count_nonzero`` `#974 `_. - homogenize ``xfixed`` constructors `#970 `_. - Improve ``random::choice`` `#1011 `_. New features ~~~~~~~~~~~~ - add ``signed char`` to npy deserialization format `#1017 `_. - simd assignment now requires convertible types instead of same type `#1000 `_. - shared expression and automatic xclosure detection `#992 `_. - average function `#987 `_. - added simd support for complex `#985 `_. - argsort function `#977 `_. - propagate fixed shape `#922 `_. - added xdrop_slice `#972 `_. - added doc for ``xmasked_view`` `#971 `_. - added ``xmasked_view`` `#969 `_. - added ``dynamic_view`` `#966 `_. - added ability to use negative indices in keep slice `#964 `_. - added an easy way to create lambda expressions, square and cube `#961 `_. - noalias on rvalue `#965 `_. Other changes ~~~~~~~~~~~~~ - ``xshared_expression`` fixed `#1025 `_. - fix ``make_xshared`` `#1024 `_. - add tests to evaluate shared expressions `#1019 `_. - fix ``where`` on ``xview`` `#1012 `_. - basic usage replaced with getting started `#1004 `_. - avoided installation failure in absence of ``nlohmann_json`` `#1001 `_. - code and documentation clean up `#998 `_. 
- removed g++ "pedantic" compiler warnings `#997 `_. - added missing header in basic_usage.rst `#996 `_. - warning pass `#990 `_. - added missing include in ``xview`` `#989 `_. - added missing ```` include `#983 `_. - xislice refactoring `#962 `_. - added missing operators to noalias `#932 `_. - cmake fix for Intel compiler on Windows `#951 `_. - fixed xsimd abs deduction `#946 `_. - added islice example to view doc `#940 `_. 0.16.4 ------ - removed usage of ``std::transfomr`` in assign `#868 `_. - add strided assignment `#901 `_. - simd activated for conditional ternary functor `#903 `_. - ``xstrided_view`` split `#905 `_. - assigning an expression to a view throws if it has more dimensions `#910 `_. - faster random `#913 `_. - ``xoptional_assembly_base`` storage type `#915 `_. - new tests and warning pass `#916 `_. - norm immediate reducer `#924 `_. - add ``reshape_view`` `#927 `_. - fix immediate reducers with 0 strides `#935 `_. 0.16.3 ------ - simd on mathematical functions fixed `#886 `_. - ``fill`` method added to containers `#887 `_. - access with more arguments than dimensions `#889 `_. - unchecked method implemented `#890 `_. - ``fill`` method implemented in view `#893 `_. - documentation fixed and warnings removed `#894 `_. - negative slices and new range syntax `#895 `_. - ``xview_stepper`` with implicit ``xt::all`` bug fix `#899 `_. 0.16.2 ------ - Add include of ``xview.hpp`` in example `#884 `_. - Remove ``FS`` identifier `#885 `_. 0.16.1 ------ - Workaround for Visual Studio Bug `#858 `_. - Fixup example notebook `#861 `_. - Prevent expansion of min and max macros on Windows `#863 `_. - Renamed ``m_data`` to ``m_storage`` `#864 `_. - Fix regression with respect to random access stepping with views `#865 `_. - Remove use of CS, DS and ES qualifiers for Solaris builds `#866 `_. - Removal of precision type `#870 `_. - Make json tests optional, bump xtl/xsimd versions `#871 `_. - Add more benchmarks `#876 `_. - Forbid simd fixed `#877 `_. - Add more asserts `#879 `_. - Add missing ``batch_bool`` typedef `#881 `_. - ``simd_return_type`` hack removed `#882 `_. - Removed test guard and fixed dimension check in ``xscalar`` `#883 `_. 0.16.0 ------ Breaking changes ~~~~~~~~~~~~~~~~ - ``data`` renamed in ``storage``, ``raw_data`` renamed in ``data`` `#792 `_. - Added layout template parameter to ``xstrided_view`` `#796 `_. - Remove equality operator from stepper `#824 `_. - ``dynamic_view`` renamed in ``strided_view`` `#832 `_. - ``xtensorf`` renamed in ``xtensor_fixed`` `#846 `_. New features ~~~~~~~~~~~~ - Added strided view selector `#765 `_. - Added ``count_nonzeros`` `#781 `_. - Added implicit conversion to scalar in ``xview`` `#788 `_. - Added tracking allocators to ``xutils.hpp`` `#789 `_. - ``xindexslice`` and ``shuffle`` function `#804 `_. - Allow ``xadapt`` with dynamic layout `#816 `_. - Added ``xtensorf`` initialization from C array `#819 `_. - Added policy to allocation tracking for throw option `#820 `_. - Free function ``empty`` for construction from shape `#827 `_. - Support for JSON serialization and deserialization of xtensor expressions `#830 `_. - Add ``trapz`` function `#837 `_. - Add ``diff`` and ``trapz(y, x)`` functions `#841 `_. Other changes ~~~~~~~~~~~~~ - Added fast path for specific assigns `#767 `_. - Renamed internal macros to prevent collisions `#772 `_. - ``dynamic_view`` unwrapping `#775 `_. - ``xreducer_stepper`` copy semantic fixed `#785 `_. - ``xfunction`` copy constructor fixed `#787 `_. - warnings removed `#791 `_. 
- ``xscalar_stepper`` fixed `#802 `_. - Fixup ``xadapt`` on const pointers `#809 `_. - Fix in owning buffer adaptors `#810 `_. - Macros fixup `#812 `_. - More fixes in ``xadapt`` `#813 `_. - Mute unused variable warning `#815 `_. - Remove comparison of steppers in assign loop `#823 `_. - Fix reverse iterators `#825 `_. - gcc-8 fix for template method calls `#833 `_. - refactor benchmarks for upcoming release `#842 `_. - ``flip`` now returns a view `#843 `_. - initial warning pass `#850 `_. - Fix warning on diff function `#851 `_. - xsimd assignment fixed `#852 `_. 0.15.9 ------ - missing layout method in xfixed `#777 `_. - fixed uninitialized backstrides `#774 `_. - update xtensor-blas in binder `#773 `_. 0.15.8 ------ - comparison operators for slices `#770 `_. - use default-assignable layout for strided views. `#769 `_. 0.15.7 ------ - nan related functions `#718 `_. - return types fixed in dynamic view helper `#722 `_. - xview on constant expressions `#723 `_. - added decays to make const ``value_type`` compile `#727 `_. - iterator for constant ``strided_view`` fixed `#729 `_. - ``strided_view`` on ``xfunction`` fixed `#732 `_. - Fixes in ``xstrided_view`` `#736 `_. - View semantic (broadcast on assign) fixed `#742 `_. - Compilation prevented when using ellipsis with ``xview`` `#743 `_. - Index of ``xiterator`` set to shape when reaching the end `#744 `_. - ``xscalar`` fixed `#748 `_. - Updated README and related projects `#749 `_. - Perfect forwarding in ``xfunction`` and views `#750 `_. - Missing include in ``xassign.hpp`` `#752 `_. - More related projects in the README `#754 `_. - Fixed stride computation for ``xtensorf`` `#755 `_. - Added tests for backstrides `#758 `_. - Clean up ``has_raw_data`` ins strided view `#759 `_. - Switch to ``ptrdiff_t`` for slices `#760 `_. - Fixed ``xview`` strides computation `#762 `_. - Additional methods in slices, required for ``xframe`` `#764 `_. 0.15.6 ------ - zeros, ones, full and empty_like functions `#686 `_. - squeeze view `#687 `_. - bitwise shift left and shift right `#688 `_. - ellipsis, unique and trim functions `#689 `_. - xview iterator benchmark `#696 `_. - optimize stepper increment `#697 `_. - minmax reducers `#698 `_. - where fix with SIMD `#704 `_. - additional doc for scalars and views `#705 `_. - mixed arithmetic with SIMD `#713 `_. - broadcast fixed `#717 `_. 0.15.5 ------ - assign functions optimized `#650 `_. - transposed view fixed `#652 `_. - exceptions refactoring `#654 `_. - performances improved `#655 `_. - view data accessor fixed `#660 `_. - new dynamic view using variant `#656 `_. - alignment added to fixed xtensor `#659 `_. - code cleanup `#664 `_. - xtensorf and new dynamic view documentation `#667 `_. - qualify namespace for compute_size `#665 `_. - make xio use ``dynamic_view`` instead of ``view`` `#662 `_. - transposed view on any expression `#671 `_. - docs typos and grammar plus formatting `#676 `_. - index view test assertion fixed `#680 `_. - flatten view `#678 `_. - handle the case of pointers to const element in ``xadapt`` `#679 `_. - use quotes in #include statements for xtl `#681 `_. - additional constructors for ``svector`` `#682 `_. - removed ``test_xsemantics.hpp`` from test CMakeLists `#684 `_. 0.15.4 ------ - fix gcc-7 error w.r.t. the use of ``assert`` `#648 `_. 0.15.3 ------ - add missing headers to cmake installation and tests `#647 `_. 0.15.2 ------ - ``xshape`` implementation `#572 `_. - xfixed container `#586 `_. - protected ``xcontainer::derived_cast`` `#627 `_. - const reference fix `#632 `_. 
- ``xgenerator`` access operators fixed `#643 `_. - contiguous layout optiimzation `#645 `_. 0.15.1 ------ - ``xarray_adaptor`` fixed `#618 `_. - ``xtensor_adaptor`` fixed `#620 `_. - fix in ``xreducer`` steppers `#622 `_. - documentation improved `#621 `_. `#623 `_. `#625 `_. - warnings removed `#624 `_. 0.15.0 ------ Breaking changes ~~~~~~~~~~~~~~~~ - change ``reshape`` to ``resize``, and add throwing ``reshape`` `#598 `_. - moved to modern cmake `#611 `_. New features ~~~~~~~~~~~~ - unravel function `#589 `_. - random access iterators `#596 `_. Other changes ~~~~~~~~~~~~~ - upgraded to google/benchmark version 1.3.0 `#583 `_. - ``XTENSOR_ASSERT`` renamed into ``XTENSOR_TRY``, new ``XTENSOR_ASSERT`` `#603 `_. - ``adapt`` fixed `#604 `_. - VC14 warnings removed `#608 `_. - ``xfunctor_iterator`` is now a random access iterator `#609 `_. - removed ``old-style-cast`` warnings `#610 `_. 0.14.1 ------ New features ~~~~~~~~~~~~ - sort, argmin and argmax `#549 `_. - ``xscalar_expression_tag`` `#582 `_. Other changes ~~~~~~~~~~~~~ - accumulator improvements `#570 `_. - benchmark cmake fixed `#571 `_. - allocator_type added to container interface `#573 `_. - allow conda-forge as fallback channel `#575 `_. - arithmetic mixing optional assemblies and scalars fixed `#578 `_. - arithmetic mixing optional assemblies and optionals fixed `#579 `_. - ``operator==`` restricted to xtensor and xoptional expressions `#580 `_. 0.14.0 ------ Breaking changes ~~~~~~~~~~~~~~~~ - ``xadapt`` renamed into ``adapt`` `#563 `_. - Naming consistency `#565 `_. New features ~~~~~~~~~~~~ - add ``random::choice`` `#547 `_. - evaluation strategy and accumulators. `#550 `_. - modulus operator `#556 `_. - ``adapt``: default overload for 1D arrays `#560 `_. - Move semantic on ``adapt`` `#564 `_. Other changes ~~~~~~~~~~~~~ - optional fixes to avoid ambiguous calls `#541 `_. - narrative documentation about ``xt::adapt`` `#544 `_. - ``xfunction`` refactoring `#545 `_. - SIMD acceleration for AVX fixed `#557 `_. - allocator fixes `#558 `_. `#559 `_. - return type of ``view::strides()`` fixed `#568 `_. 0.13.2 ------ - Support for complex version of ``isclose`` `#512 `_. - Fixup static layout in ``xstrided_view`` `#536 `_. - ``xexpression::operator[]`` now take support any type of sequence `#537 `_. - Fixing ``xinfo`` issues for Visual Studio. `#529 `_. - Fix const-correctness in ``xstrided_view``. `#526 `_. 0.13.1 ------ - More general floating point type `#518 `_. - Do not require functor to be passed via rvalue reference `#519 `_. - Documentation improved `#520 `_. - Fix in xreducer `#521 `_. 0.13.0 ------ Breaking changes ~~~~~~~~~~~~~~~~ - The API for ``xbuffer_adaptor`` has changed. The template parameter is the type of the buffer, not just the value type `#482 `_. - Change ``edge_items`` print option to ``edgeitems`` for better numpy consistency `#489 `_. - xtensor now depends on ``xtl`` version `~0.3.3` `#508 `_. New features ~~~~~~~~~~~~ - Support for parsing the ``npy`` file format `#465 `_. - Creation of optional expressions from value and boolean expressions (optional assembly) `#496 `_. - Support for the explicit cast of expressions with different value types `#491 `_. Other changes ~~~~~~~~~~~~~ - Addition of broadcasting bitwise operators `#459 `_. - More efficient optional expression system `#467 `_. - Migration of benchmarks to the Google benchmark framework `#473 `_. - Container semantic and adaptor semantic merged `#475 `_. - Various fixes and improvements of the strided views `#480 `_. `#481 `_. 
- Assignment now performs basic type conversion `#486 `_. - Workaround for a compiler bug in Visual Studio 2017 `#490 `_. - MSVC 2017 workaround `#492 `_. - The ``size()`` method for containers now returns the total number of elements instead of the buffer size, which may differ when the smallest stride is greater than ``1`` `#502 `_. - The behavior of ``linspace`` with integral types has been made consistent with numpy `#510 `_. 0.12.1 ------ - Fix issue with slicing when using heterogeneous integral types `#451 `_. 0.12.0 ------ Breaking changes ~~~~~~~~~~~~~~~~ - ``xtensor`` now depends on ``xtl`` version `0.2.x` `#421 `_. New features ~~~~~~~~~~~~ - ``xtensor`` has an optional dependency on ``xsimd`` for enabling simd acceleration `#426 `_. - All expressions have an additional safe access function (``at``) `#420 `_. - norm functions `#440 `_. - ``closure_pointer`` used in iterators returning temporaries so their ``operator->`` can be correctly defined `#446 `_. - expressions tags added so ``xtensor`` expression system can be extended `#447 `_. Other changes ~~~~~~~~~~~~~ - Preconditions and exceptions `#409 `_. - ``isclose`` is now symmetric `#411 `_. - concepts added `#414 `_. - narrowing cast for mixed arithmetic `#432 `_. - ``is_xexpression`` concept fixed `#439 `_. - ``void_t`` implementation fixed for compilers affected by C++14 defect CWG 1558 `#448 `_. 0.11.3 ------ - Fixed bug in length-1 statically dimensioned tensor construction `#431 `_. 0.11.2 ------ - Fixup compilation issue with latest clang compiler. (missing `constexpr` keyword) `#407 `_. 0.11.1 ------ - Fixes some warnings in julia and python bindings 0.11.0 ------ Breaking changes ~~~~~~~~~~~~~~~~ - ``xbegin`` / ``xend``, ``xcbegin`` / ``xcend``, ``xrbegin`` / ``xrend`` and ``xcrbegin`` / ``xcrend`` methods replaced with classical ``begin`` / ``end``, ``cbegin`` / ``cend``, ``rbegin`` / ``rend`` and ``crbegin`` / ``crend`` methods. Old ``begin`` / ``end`` methods and their variants have been removed. `#370 `_. - ``xview`` now uses a const stepper when its underlying expression is const. `#385 `_. Other changes ~~~~~~~~~~~~~ - ``xview`` copy semantic and move semantic fixed. `#377 `_. - ``xoptional`` can be implicitly constructed from a scalar. `#382 `_. - build with Emscripten fixed. `#388 `_. - STL version detection improved. `#396 `_. - Implicit conversion between signed and unsigned integers fixed. `#397 `_. xtensor-0.23.10/docs/source/closure-semantics.rst000066400000000000000000000266071405270662500220230ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. .. _closure-semantics-label: Closure semantics ================= The ``xtensor`` library is a tensor expression library implementing numpy-style broadcasting and universal functions but in a lazy fashion. If ``x`` and ``y`` are two tensor expressions with compatible shapes, the result of ``x + y`` is not a tensor but an expression that does not hold any value. Values of ``x + y`` are computed upon access or when the result is assigned to a container such as ``xt::xtensor`` or ``xt::xarray``. The same holds for most functions in xtensor, views, broadcasting views, etc. 
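As a minimal sketch of this behavior (the function and variable names below are illustrative, not part of the library):

.. code:: cpp

    #include <xtensor/xarray.hpp>
    #include <xtensor/xmath.hpp>

    void laziness_example()
    {
        xt::xarray<double> x = {1., 2., 3.};
        xt::xarray<double> y = {4., 5., 6.};

        auto expr = x + y;              // no computation happens here: expr only holds closures on x and y
        double first = expr(0);         // the first element is computed on access
        xt::xarray<double> res = expr;  // the whole result is computed upon assignment to a container
    }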
In order to be able to perform the deferred computation of ``x + y``, the returned expression must hold
references, const references or copies of the members ``x`` and ``y``, depending on how arguments were
passed to ``operator+``. The actual types held by the expressions are the **closure types**. The concept
of closure type is key in the implementation of ``xtensor``: it appears in all the expressions defined in
xtensor, and the associated utility functions and metafunctions complement the tools of the standard
library for move semantics.

Basic rules for determining closure types
-----------------------------------------

The two main requirements are the following:

- when an argument passed to the function returning an expression (here, ``operator+``) is an *rvalue*, the closure type is always a value and the ``rvalue`` is *moved*.
- when an argument passed to the function returning an expression is an *lvalue reference*, the closure type is a reference of the same type.

It is important for the closure type not to be a reference when the passed argument is an rvalue; holding
a reference to a temporary would result in dangling references.

Following the conventions of the C++ standard library for naming type traits, we provide two type traits
classes implementing these rules in the ``xutils.hpp`` header: ``closure_type`` and ``const_closure_type``.
The latter adds the ``const`` qualifier to the reference even when the provided argument is not const.

.. code:: cpp

    template <class S>
    struct closure_type
    {
        using underlying_type = std::conditional_t<std::is_const<std::remove_reference_t<S>>::value,
                                                   const std::decay_t<S>,
                                                   std::decay_t<S>>;

        using type = typename std::conditional<std::is_lvalue_reference<S>::value,
                                               underlying_type&,
                                               underlying_type>::type;
    };

    template <class S>
    using closure_type_t = typename closure_type<S>::type;

The implementation for ``const_closure_type`` is slightly shorter.

.. code:: cpp

    template <class S>
    struct const_closure_type
    {
        using underlying_type = std::decay_t<S>;

        using type = typename std::conditional<std::is_lvalue_reference<S>::value,
                                               std::add_const_t<underlying_type>&,
                                               underlying_type>::type;
    };

    template <class S>
    using const_closure_type_t = typename const_closure_type<S>::type;

Using this mechanism, we were able to

- avoid dangling references in nested expressions,
- hold references whenever possible,
- take advantage of the move semantics when holding references is not possible.

Closure types and scalar wrappers
---------------------------------

A requirement for ``xtensor`` is the ability to mix scalars and tensors in tensor expressions. In order to
do so, scalar values are wrapped into the ``xscalar`` wrapper, which is a cheap 0-D tensor expression
holding a single scalar value. For the xscalar to be a proper proxy on the scalar value, it actually holds
a closure type on the scalar value. The logic for this is encoded into xtensor's ``xclosure`` type trait.

.. code:: cpp

    template <class E, class EN = void>
    struct xclosure
    {
        using type = closure_t<E>;
    };

    template <class E>
    struct xclosure<E, disable_xexpression<std::decay_t<E>>>
    {
        using type = xscalar<closure_t<E>>;
    };

    template <class E>
    using xclosure_t = typename xclosure<E>::type;

In doing so, we ensure const-correctness, we avoid dangling references, and we ensure that lvalues remain
lvalues. The ``const_xclosure`` follows the same scheme:
.. code:: cpp

    template <class E, class EN = void>
    struct const_xclosure
    {
        using type = const_closure_type_t<E>;
    };

    template <class E>
    struct const_xclosure<E, disable_xexpression<std::decay_t<E>>>
    {
        using type = xscalar<closure_t<E>>;
    };

    template <class E>
    using const_xclosure_t = typename const_xclosure<E>::type;

Writing functions that return expressions
-----------------------------------------

*xtensor closure semantics are not meant to prevent users from making mistakes, since that would also
prevent them from doing something clever*. This section covers cases where understanding C++ move
semantics and xtensor closure semantics helps write better code with xtensor.

Returning evaluated or unevaluated expressions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A key feature of xtensor is that a function returning e.g. ``x + y / z``, where ``x``, ``y`` and ``z`` are
xtensor expressions, does not actually perform any computation. It is only evaluated upon access or
assignment. The returned expression holds values or references for ``x``, ``y`` and ``z`` depending on the
lvalue-ness of the variables passed to the expression, using the *closure semantics* described earlier.
This may result in dangling references when using local variables of a function in an unevaluated
expression, unless one properly forwards / moves the variables.

.. note::

   The following rules of thumb prevent dangling references with the xtensor closure semantics:

   - If the laziness is not important for your use case, returning ``xt::eval(x + y / z)`` will return an
     evaluated container and avoid these complications.
   - Otherwise, the key is to *move* lvalues that become invalid when leaving the current scope.
   - If you would need to *move* more than once, take a look at the `Reusing expressions / sharing expressions`_.

**Example: moving local variables and forwarding universal references**

Let us first consider the following implementation of the ``mean`` function in xtensor:

.. code:: cpp

    template <class E>
    inline auto mean(E&& e) noexcept
    {
        using value_type = typename std::decay_t<E>::value_type;
        auto size = e.size();
        auto s = sum(std::forward<E>(e));
        return std::move(s) / value_type(size);
    }

The first thing to take into account is that the result of the final division is an expression, which
performs the actual computation upon access or assignment.

- In order to perform the division, the expression must hold the values or references on the numerator and denominator.
- Since ``s`` is a local variable, it will be destroyed upon leaving the scope of the function, and more importantly, it is an *lvalue*.
- A consequence of ``s`` being an lvalue and a local variable is that ``s / value_type(size)`` would end up holding a dangling ``const`` reference on ``s``.
- Hence we must return ``std::move(s) / value_type(size)``.

The other place in this example where the C++ move semantics is used is the line
``auto s = sum(std::forward<E>(e))``. The goal is to have the unevaluated ``s`` expression hold a const
reference or a value for ``e`` depending on the lvalue-ness of the parameter passed to the function.

Reusing expressions / sharing expressions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sometimes it is necessary to use an xexpression in two separate places in another xexpression. For
example, when computing something like ``sin(A) + cos(A)``, we can see ``A`` being referenced twice. This
works fine if we can guarantee that ``A`` has a long enough lifetime. However, when writing generic
interfaces that accept rvalues, we cannot always guarantee that ``A`` will live long enough.
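To make the pitfall concrete, here is a sketch of a naive helper (hypothetical code, not part of xtensor) that uses its argument twice without any precaution:

.. code:: cpp

    #include <xtensor/xmath.hpp>

    template <class E>
    inline auto sin_plus_cos_naive(E&& e) noexcept
    {
        // e is a named (lvalue) variable here, so the returned expression stores
        // references to it. If the caller passed a temporary, that temporary is
        // destroyed at the end of the caller's statement and the references dangle.
        return xt::sin(e) + xt::cos(e);
    }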
Another scenario is the creation of a temporary, which needs to be used at more than one place in the
resulting expression. We can only ``std::move(...)`` the temporary once into the expression to hand
lifetime management to the expression. In order to solve this problem, xtensor offers two solutions: the
first involves ad-hoc lambda construction and the second utilizes shared pointers wrapped in an
``xshared_expression``.

We can rewrite the ``sin(A) + cos(A)`` function as a lambda that we use to create a vectorized xfunction,
and xtensor has a simple utility to achieve this:

.. code:: cpp

    template <class E>
    inline auto sin_plus_cos(E&& e) noexcept
    {
        auto func = [](auto x) -> decltype(sin(x) + cos(x))
        {
            return sin(x) + cos(x);
        };
        return detail::make_lambda_function(std::move(func), std::forward<E>(e));
    }

Note: writing a lambda is just sugar for writing a functor. Also, using ``auto x`` as the function
argument enables automatic ``xsimd`` acceleration. As the data flow through the lambda is entirely
transparent to the compiler, using this construct is generally faster than using ``xshared_expressions``.
The usage of ``xshared_expression`` also requires the creation of a ``shared_ptr``, which dynamically
allocates some memory and is therefore slow(ish). But under certain circumstances it might be required,
e.g. to implement a fully lazy average:

.. code:: cpp

    template <class E, class W>
    inline auto average(E&& e, W&& weights, std::ptrdiff_t axis) noexcept
    {
        auto shared_weights = xt::make_xshared(std::move(weights));
        auto expr = xt::sum(e * shared_weights, {axis}) / xt::sum(shared_weights);
        // the following line prints how often shared_weights is used
        std::cout << shared_weights.use_count() << std::endl; // ==> 4
        return expr;
    }

We can see that, before returning from the function, four copies of ``shared_weights`` exist: two in the
two ``xt::sum`` functions, and one in the temporary. The last one lies in ``weights`` itself; it is a
technical requirement for the ``share`` syntax. After returning from the function, only two copies of the
``xshared_expression`` will exist. As discussed before, ``xt::make_xshared`` has the same overhead as
creating a ``std::shared_ptr``, which is used internally by the shared expression.

Another syntax can be used if you don't want to have a temporary variable for the shared expression:

.. code:: cpp

    template <class E, class W>
    inline auto average(E&& e, W&& weights, std::ptrdiff_t axis) noexcept
    {
        auto expr = xt::sum(e * xt::share(weights), {axis}) / xt::sum(xt::share(weights));
        // at this point, only three copies of the shared expression exist
        return expr;
    }

In that case only three copies of the shared weights exist. Notice that contrary to ``make_xshared``,
``share`` also accepts lvalues; this is to avoid the otherwise required ``std::move``. However, ``share``
will turn its argument into an rvalue and will move it into the shared expression. Thus ``share``
invalidates its argument, and the only thing that can be done with an expression upon which ``share`` has
been called is another call to ``share``. Therefore ``share`` should be called on rvalue references or
temporary expressions only.

xtensor-0.23.10/docs/source/cmake.svg000066400000000000000000000425311405270662500174240ustar00rootroot00000000000000 image/svg+xml xtensor-0.23.10/docs/source/compilers.rst000066400000000000000000000141301405270662500203450ustar00rootroot00000000000000
.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht

   Distributed under the terms of the BSD 3-Clause License.
   The full license is in the file LICENSE, distributed with this software.

Compiler workarounds
====================

This page tracks the workarounds for the various compiler issues that we encountered during
development. It is mostly of interest for developers who want to contribute to xtensor.

Visual Studio 2015 and ``std::enable_if``
-----------------------------------------

With Visual Studio, ``std::enable_if`` evaluates its second argument, even if the condition is false.
This is the reason for the presence of the indirection in the implementation of the
``xfunction_type_t`` meta-function.

Visual Studio 2017 and alias templates with non-class template parameters and multiple aliasing levels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Alias templates with non-class parameters only, and multiple levels of aliasing, are not properly
considered as types by Visual Studio 2017. The base ``xcontainer`` template class underlying xtensor
container types has such alias templates defined. We avoid the multiple levels of aliasing in the
case of Visual Studio.

Visual Studio and ``min`` and ``max`` macros
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Visual Studio defines ``min`` and ``max`` macros, causing calls to e.g. ``std::min`` and ``std::max``
to be interpreted as syntax errors. The ``NOMINMAX`` definition may be used to disable these macros.
In xtensor, to prevent macro replacement of the ``min`` and ``max`` functions, we wrap them with
parentheses, so that client code does not need the ``NOMINMAX`` definition.

Visual Studio 2017 (15.7.1) seeing declarations as extra overloads
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In ``xvectorize.hpp``, Visual Studio 15.7.1 sees the forward declaration of ``vectorize(E&&)`` as a
separate overload.

Visual Studio 2017 double non-class parameter pack expansion
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In ``xfixed.hpp`` we add a level of indirection to expand one parameter pack before the other. Not
doing this results in VS2017 complaining about a parameter pack that needs to be expanded in this
context, while it actually is.

GCC-4.9 and Clang < 3.8 and constexpr ``std::min`` and ``std::max``
-------------------------------------------------------------------

``std::min`` and ``std::max`` are not constexpr in these compilers. In ``xio.hpp``, we locally define
an ``XTENSOR_MIN`` macro used instead of ``std::min``. The macro is undefined right after it is used.

Clang < 3.8 not matching ``initializer_list`` with static arrays
----------------------------------------------------------------

Old versions of Clang don't handle overload resolution with braced initializer lists correctly:
braced initializer lists are not properly matched to static arrays. This prevents compile-time
detection of the length of a braced initializer list. A consequence is that we need to use
stack-allocated shape types in these cases. Workarounds for this compiler bug arise in various files
of the code base. Everywhere, the handling of `Clang < 3.8` is wrapped with checks for the
``X_OLD_CLANG`` macro.

**The support of `Clang < 4.0` is dropped in xtensor 0.22.**

Clang-cl and ``std::get``
-------------------------

`Clang-cl` does not allow calling ``std::get`` with ``*this`` as parameter from a class inheriting
from ``std::tuple``. In that case, we explicitly upcast to ``std::tuple``.
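To illustrate this last workaround, here is a minimal, self-contained sketch; the ``wrapper`` class
and its members are made up for the example and are not xtensor code:

.. code:: cpp

    #include <string>
    #include <tuple>

    // A class deriving from std::tuple, similar in spirit to some of the
    // multi-member wrappers used internally.
    class wrapper : private std::tuple<int, std::string>
    {
    public:

        using base_type = std::tuple<int, std::string>;

        wrapper(int i, std::string s)
            : base_type(i, std::move(s))
        {
        }

        int id() const
        {
            // std::get<0>(*this) can be rejected by Clang-cl here; explicitly
            // upcasting to the std::tuple base type side-steps the issue.
            return std::get<0>(static_cast<const base_type&>(*this));
        }
    };

    int main()
    {
        wrapper w(42, "answer");
        return w.id() == 42 ? 0 : 1;
    }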
GCC < 5.1 and ``std::is_trivially_default_constructible``
----------------------------------------------------------

The versions of the STL shipped with versions of GCC older than 5.1 are missing a number of type
traits, such as ``std::is_trivially_default_constructible``. However, for some of them, equivalent
type traits with different names are provided, such as ``std::has_trivial_default_constructor``. In
this case, we polyfill the proper standard names using the deprecated
``std::has_trivial_default_constructor``. This must also be done when the compiler is clang and it
makes use of the GCC implementation of the STL, which is the default behavior on Linux. Properly
detecting the version of the GCC STL used by clang cannot be done with the ``__GNUC__`` macro, which
is overridden by clang. Instead, we check for the definition of the macro
``_GLIBCXX_USE_CXX11_ABI``, which is only defined with GCC versions greater than ``5``.

GCC-6 and the signature of ``std::isnan`` and ``std::isinf``
------------------------------------------------------------

We are not directly using ``std::isnan`` or ``std::isinf`` for the implementation of ``xt::isnan``
and ``xt::isinf``, as a workaround to the following bug in GCC-6:

- C++11 requires that the ``<cmath>`` header declares ``bool std::isnan(double)`` and
  ``bool std::isinf(double)``.
- C99 requires that the ``<math.h>`` header declares ``int ::isnan(double)`` and
  ``int ::isinf(double)``.

These two definitions would clash when importing both headers and using namespace std. As of version
6, GCC detects whether the obsolete functions are present in the C header ``<math.h>`` and uses them
if they are, avoiding the clash. However, this means that the functions might return ``int`` instead
of ``bool``, as C++11 requires, which is a bug.

GCC-8 and deleted functions
---------------------------

GCC-8 (8.2 specifically) doesn't seem to SFINAE deleted functions correctly. A strided view on a
dynamic_view fails with a "use of deleted function" error. It should pick the *other* implementation
by SFINAE on the function signature, because our ``has_strides`` meta-function should return false.
Instantiating ``has_strides`` in the inner_types fixes the issue.
Original issue here: https://github.com/xtensor-stack/xtensor/issues/1273

Apple LLVM version >= 8.0.0
---------------------------

``tuple_cat`` is bugged and propagates the constness of its tuple arguments to the types inside the
tuple. When checking if the resulting tuple contains a given type, the const qualified type also
needs to be checked.
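A minimal sketch of the kind of type-trait polyfill described for GCC < 5.1 above is shown below; the
namespace, the alias name and the exact preprocessor guard are hypothetical, and the guard actually
used by xtensor may differ:

.. code:: cpp

    #include <type_traits>

    namespace compat
    {
    #if defined(__GLIBCXX__) && !defined(_GLIBCXX_USE_CXX11_ABI)
        // Old libstdc++ (GCC < 5): fall back to the deprecated trait name.
        template <class T>
        using is_trivially_default_constructible = std::has_trivial_default_constructor<T>;
    #else
        template <class T>
        using is_trivially_default_constructible = std::is_trivially_default_constructible<T>;
    #endif
    }

    static_assert(compat::is_trivially_default_constructible<int>::value,
                  "int should be trivially default constructible");

    int main()
    {
        return 0;
    }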
xtensor-0.23.10/docs/source/conda.svg000066400000000000000000000034151405270662500174260ustar00rootroot00000000000000xtensor-0.23.10/docs/source/conf.py000066400000000000000000000021631405270662500171170ustar00rootroot00000000000000#!/usr/bin/env python3 # -*- coding: utf-8 -*- import os import subprocess on_rtd = os.environ.get('READTHEDOCS', None) == 'True' if on_rtd: subprocess.call('cd ..; doxygen', shell=True) import sphinx_rtd_theme html_theme = "sphinx_rtd_theme" html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] def setup(app): app.add_stylesheet("main_stylesheet.css") extensions = ['breathe'] breathe_projects = { 'xtensor': '../xml' } templates_path = ['_templates'] html_static_path = ['_static'] source_suffix = '.rst' master_doc = 'index' project = 'xtensor' copyright = '2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht' author = 'Johan Mabille, Sylvain Corlay and Wolf Vollprecht' html_logo = 'quantstack-white.svg' exclude_patterns = [] highlight_language = 'c++' pygments_style = 'sphinx' todo_include_todos = False htmlhelp_basename = 'xtensordoc' html_js_files = [ 'goatcounter.js' ] # Automatically link to numpy doc extensions += ['sphinx.ext.intersphinx'] intersphinx_mapping = { "numpy": ("https://numpy.org/doc/stable/", None), "scipy": ("https://docs.scipy.org/doc/scipy/reference", None), } xtensor-0.23.10/docs/source/container.rst000066400000000000000000000225331405270662500203370ustar00rootroot00000000000000.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht Distributed under the terms of the BSD 3-Clause License. The full license is in the file LICENSE, distributed with this software. Arrays and tensors ================== Internal memory layout ---------------------- A multi-dimensional array of `xtensor` consists of a contiguous one-dimensional buffer combined with an indexing scheme that maps unsigned integers to the location of an element in the buffer. The range in which the indices can vary is specified by the `shape` of the array. The scheme used to map indices into a location in the buffer is a strided indexing scheme. In such a scheme, the index ``(i0, ..., in)`` corresponds to the offset ``sum(ik * sk)`` from the beginning of the one-dimensional buffer, where ``(s0, ..., sn)`` are the `strides` of the array. Some particular cases of strided schemes implement well-known memory layouts: - the row-major layout (or C layout) is a strided index scheme where the strides grow from right to left - the column-major layout (or Fortran layout) is a strided index scheme where the strides grow from left to right ``xtensor`` provides a ``layout_type`` enum that helps to specify the layout used by multidimensional arrays. This enum can be used in two ways: - at compile time, as a template argument. The value ``layout_type::dynamic`` allows specifying any strided index scheme at runtime (including row-major and column-major schemes), while ``layout_type::row_major`` and ``layout_type::column_major`` fixes the strided index scheme and disable ``resize`` and constructor overloads taking a set of strides or a layout value as parameter. The default value of the template parameter is ``XTENSOR_DEFAULT_LAYOUT``. - at runtime if the previous template parameter was set to ``layout_type::dynamic``. In that case, ``resize`` and constructor overloads allow specifying a set of strides or a layout value to avoid strides computation. 
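To make the strided indexing scheme described above concrete, here is a small worked example; the
shape and index values are chosen purely for illustration and match the shape used in the snippets
below:

.. code:: cpp

    #include <array>
    #include <cstddef>
    #include <iostream>

    int main()
    {
        // Row-major strides for shape {3, 2, 4}: the last stride is 1 and each
        // stride is the product of all dimensions to its right.
        std::array<std::size_t, 3> shape = {3, 2, 4};
        std::array<std::size_t, 3> strides = {shape[1] * shape[2], shape[2], 1};  // {8, 4, 1}

        // Offset of element (i0, i1, i2) in the 1-D buffer: sum(ik * sk).
        std::size_t i0 = 1, i1 = 0, i2 = 2;
        std::size_t offset = i0 * strides[0] + i1 * strides[1] + i2 * strides[2];

        std::cout << offset << std::endl;  // prints 10
        return 0;
    }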
If neither strides nor layout is specified when instantiating or resizing a multi-dimensional array,
strides corresponding to ``XTENSOR_DEFAULT_LAYOUT`` are used.

The following example shows how to initialize a multi-dimensional array of dynamic layout with
specified strides:

.. code::

    #include <vector>
    #include <xtensor/xarray.hpp>

    std::vector<std::size_t> shape = { 3, 2, 4 };
    std::vector<std::size_t> strides = { 8, 4, 1 };
    xt::xarray<double> a(shape, strides);

However, this requires carefully computing the strides to avoid buffer overflow when accessing
elements of the array. We can use the following shortcut to specify the strides instead of computing
them:

.. code::

    #include <vector>
    #include <xtensor/xarray.hpp>

    std::vector<std::size_t> shape = { 3, 2, 4 };
    xt::xarray<double> a(shape, xt::layout_type::row_major);

If the layout of the array can be fixed at compile time, we can make it even simpler:

.. code::

    #include <vector>
    #include <xtensor/xarray.hpp>

    std::vector<std::size_t> shape = { 3, 2, 4 };
    xt::xarray<double, xt::layout_type::row_major> a(shape);

    // this shortcut is equivalent:
    // xt::xarray<double> a(shape);

However, in the latter case, the layout of the array is forced to ``row_major`` at compile time, and
therefore cannot be changed at runtime.

Runtime vs Compile-time dimensionality
--------------------------------------

Three container classes implementing multidimensional arrays are provided: ``xarray``, ``xtensor``
and ``xtensor_fixed``.

- ``xarray`` can be reshaped dynamically to any number of dimensions. It is the container that is the
  most similar to numpy arrays.
- ``xtensor`` has a dimension set at compilation time, which enables many optimizations. For example,
  shapes and strides of ``xtensor`` instances are allocated on the stack instead of the heap.
- ``xtensor_fixed`` has a shape fixed at compile time. This allows even more optimizations, such as
  allocating the storage for the container on the stack, as well as computing strides and backstrides
  at compile time, making the allocation of this container extremely cheap.

Let's use ``xtensor`` instead of ``xarray`` in the previous example:

.. code::

    #include <array>
    #include <xtensor/xtensor.hpp>

    std::array<std::size_t, 3> shape = { 3, 2, 4 };
    xt::xtensor<double, 3> a(shape);

    // this is equivalent to
    // xt::xtensor<double, 3, xt::layout_type::row_major> a(shape);

Or when using ``xtensor_fixed``:

.. code::

    #include <xtensor/xfixed.hpp>

    xt::xtensor_fixed<double, xt::xshape<3, 2, 4>> a;

    // or
    // xt::xtensor_fixed<double, xt::xshape<3, 2, 4>, xt::layout_type::row_major> a;

``xarray``, ``xtensor`` and ``xtensor_fixed`` containers are all ``xexpression`` s and can be
involved and mixed in mathematical expressions, assigned to each other, etc. They provide an
augmented interface compared to other ``xexpression`` types:

- Each method exposed in the ``xexpression`` interface has its non-const counterpart exposed by
  ``xarray``, ``xtensor`` and ``xtensor_fixed``.
- ``reshape()`` reshapes the container in place, and the global size of the container has to stay
  the same.
- ``resize()`` resizes the container in place, that is, if the global size of the container doesn't
  change, no memory allocation occurs.
- ``strides()`` returns the strides of the container, used to compute the position of an element in
  the underlying buffer.

Reshape
-------

The ``reshape`` method accepts any kind of 1-D container; you don't have to pass an instance of
``shape_type``. It only requires the new shape to be compatible with the old one, that is, the number
of elements in the container must remain the same:

.. code::

    #include <xtensor/xarray.hpp>

    xt::xarray<int> a = { 1, 2, 3, 4, 5, 6, 7, 8 };

    // The following two lines ...
    std::array<std::size_t, 2> sh1 = {2, 4};
    a.reshape(sh1);

    // ... are equivalent to the following two lines ...
    xt::xarray<int>::shape_type sh2({2, 4});
    a.reshape(sh2);

    // ... which are equivalent to the following
    a.reshape({2, 4});
One of the values in the ``shape`` argument can be -1. In this case, the value is inferred from the
number of elements in the container and the remaining values in the ``shape``:

.. code::

    #include <xtensor/xarray.hpp>

    xt::xarray<int> a = { 1, 2, 3, 4, 5, 6, 7, 8 };
    a.reshape({2, -1});
    // a.shape() returns {2, 4}

Performance
-----------

The dynamic dimensionality of ``xarray`` comes at a cost. Since the dimension is unknown at build
time, the sequences holding the shape and strides of ``xarray`` instances are heap-allocated, which
makes ``xarray`` significantly more expensive than ``xtensor``. The shape and strides of ``xtensor``
are stack-allocated, which makes them more efficient.

More generally, the library implements a ``promote_shape`` mechanism at build time to determine the
optimal sequence type to hold the shape of an expression. A broadcasting expression whose members all
have a dimensionality determined at compile time will have a stack-allocated shape. If a single
member of a broadcasting expression has a dynamic dimension (for example an ``xarray``), it bubbles
up to the entire broadcasting expression, which will have a heap-allocated shape. The same holds for
views, broadcast expressions, etc.

Aliasing and temporaries
------------------------

In some cases, an expression should not be directly assigned to a container. Instead, it has to be
assigned to a temporary variable before being copied into the destination container. A typical case
where this happens is when the destination container is involved in the expression and has to be
resized. This phenomenon is known as *aliasing*.

To prevent this, `xtensor` assigns the expression to a temporary variable before copying it. In the
case of ``xarray``, this results in an extra dynamic memory allocation and copy.

However, if the left-hand side is not involved in the expression being assigned, no temporary
variable should be required. `xtensor` cannot detect such cases automatically and applies the
"temporary variable rule" by default. A mechanism is provided to forcibly prevent usage of a
temporary variable:

.. code::

    #include <xtensor/xarray.hpp>
    #include <xtensor/xnoalias.hpp>

    // a, b, and c are xt::xarrays previously initialized
    xt::noalias(b) = a + c;
    // Even if b has to be resized, a + c will be assigned directly to it
    // No temporary variable will be involved

Example of aliasing
~~~~~~~~~~~~~~~~~~~

The aliasing phenomenon is illustrated in the following example:

.. code::

    #include <vector>
    #include <xtensor/xarray.hpp>

    std::vector<std::size_t> a_shape = {3, 2, 4};
    xt::xarray<double> a(a_shape);

    std::vector<std::size_t> b_shape = {2, 4};
    xt::xarray<double> b(b_shape);

    b = a + b;
    // b appears on both the left-hand and right-hand sides of the statement

In the above example, the shape of ``a + b`` is ``{ 3, 2, 4 }``. Therefore, ``b`` must first be
resized, which impacts how the right-hand side is computed.

If the values of ``b`` were copied into the new buffer directly, without an intermediary variable,
then we would have ``new_b(0, i, j) == old_b(i, j) for (i, j) in [0, 1] x [0, 3]``. After the resize
of ``b``, ``a(0, i, j) + b(0, i, j)`` is assigned to ``b(0, i, j)``; then, due to broadcasting rules,
``a(1, i, j) + b(0, i, j)`` is assigned to ``b(1, i, j)``. The issue is that ``b(0, i, j)`` has been
changed by the previous assignment.
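The following sketch contrasts the default behaviour with ``xt::noalias``; the variable names and
values are illustrative only, and the commented-out line shows the kind of assignment for which
skipping the temporary would be unsafe:

.. code::

    #include <iostream>
    #include <xtensor/xarray.hpp>
    #include <xtensor/xio.hpp>
    #include <xtensor/xnoalias.hpp>

    int main()
    {
        xt::xarray<double> a = {{1., 2., 3., 4.}, {5., 6., 7., 8.}};
        xt::xarray<double> b = {1., 2., 3., 4.};
        xt::xarray<double> c = {10., 20., 30., 40.};

        // Safe: b does not appear on the right-hand side, so no temporary is needed.
        xt::noalias(b) = a + c;
        std::cout << b << std::endl;

        // Aliasing case: d appears on both sides and must be resized, so xtensor
        // evaluates the right-hand side into a temporary before assigning it.
        xt::xarray<double> d = {1., 2., 3., 4.};
        d = a + d;
        std::cout << d << std::endl;

        // Forcing noalias here would skip that temporary and could give wrong
        // results, so it should only be used when the left-hand side is not
        // involved in the expression:
        // xt::noalias(d) = a + d;  // don't do this
        return 0;
    }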
xtensor-0.23.10/docs/source/debian.svg

xtensor-0.23.10/docs/source/dev-build-options.rst

.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht
   Distributed under the terms of the BSD 3-Clause License.
   The full license is in the file LICENSE, distributed with this software.

Build and configuration
=======================

Build
-----

``xtensor`` build supports the following options:

- ``BUILD_TESTS``: enables the ``xtest`` and ``xbenchmark`` targets (see below).
- ``DOWNLOAD_GTEST``: downloads ``gtest`` and builds it locally instead of using a binary installation.
- ``GTEST_SRC_DIR``: indicates where to find the ``gtest`` sources instead of downloading them.
- ``XTENSOR_ENABLE_ASSERT``: activates the assertions in ``xtensor``.
- ``XTENSOR_CHECK_DIMENSION``: turns on ``XTENSOR_ENABLE_ASSERT`` and activates dimension checks in ``xtensor``.
  Note that the dimension checks should not be activated if you expect ``operator()`` to perform broadcasting.
- ``XTENSOR_USE_XSIMD``: enables SIMD acceleration in ``xtensor``. This requires that you have xsimd_ installed on your system.
- ``XTENSOR_USE_TBB``: enables the parallel assignment loop. This requires that you have tbb_ installed on your system.
- ``XTENSOR_USE_OPENMP``: enables the parallel assignment loop using OpenMP. This requires that OpenMP is available on your system.

All these options are disabled by default. Enabling ``DOWNLOAD_GTEST`` or setting ``GTEST_SRC_DIR`` enables ``BUILD_TESTS``.

If the ``BUILD_TESTS`` option is enabled, the following targets are available:

- xtest: builds and runs the test suite.
- xbenchmark: builds and runs the benchmarks.

For instance, building the test suite of ``xtensor`` with assertions enabled:

.. code::

    mkdir build
    cd build
    cmake -DBUILD_TESTS=ON -DXTENSOR_ENABLE_ASSERT=ON ../
    make xtest

Building the test suite of ``xtensor`` where the sources of ``gtest`` are located in e.g. ``/usr/share/gtest``:

.. code::

    mkdir build
    cd build
    cmake -DGTEST_SRC_DIR=/usr/share/gtest ../
    make xtest

.. _configuration-label:

Configuration
-------------

``xtensor`` can be configured via macros, which must be defined *before* including any of its headers. Here is a list of available macros:

- ``XTENSOR_ENABLE_ASSERT``: enables assertions in xtensor, such as bound checks.
- ``XTENSOR_ENABLE_CHECK_DIMENSION``: enables the dimension checks in ``xtensor``. Note that this option should not be turned on if you expect ``operator()`` to perform broadcasting.
- ``XTENSOR_USE_XSIMD``: enables SIMD acceleration in ``xtensor``. This requires that you have xsimd_ installed on your system.
- ``XTENSOR_USE_TBB``: enables the parallel assignment loop. This requires that you have tbb_ installed on your system.
- ``XTENSOR_USE_OPENMP``: enables the parallel assignment loop using OpenMP. This requires that OpenMP is available on your system.
- ``XTENSOR_DEFAULT_DATA_CONTAINER(T, A)``: defines the type used as the default data container for tensors and arrays. ``T`` is the ``value_type`` of the container and ``A`` its ``allocator_type``.
- ``XTENSOR_DEFAULT_SHAPE_CONTAINER(T, EA, SA)``: defines the type used as the default shape container for tensors and arrays. ``T`` is the ``value_type`` of the data container, ``EA`` its ``allocator_type``, and ``SA`` is the ``allocator_type`` of the shape container.
- ``XTENSOR_DEFAULT_LAYOUT``: defines the default layout (row_major, column_major, dynamic) for tensors and arrays. We *strongly* discourage using this macro, which is provided for testing purposes. Prefer defining alias types on tensor and array containers instead.
- ``XTENSOR_DEFAULT_TRAVERSAL``: defines the default traversal order (row_major, column_major) for algorithms and iterators on tensors and arrays. We *strongly* discourage using this macro, which is provided for testing purpose. Build the documentation ----------------------- First install the tools required to build the documentation: .. code:: conda install breathe doxygen sphinx_rtd_theme -c conda-forge You can then build the documentation: .. code:: cd docs make html Type ``make help`` to see the list of available documentation targets. .. _xsimd: https://github.com/xtensor-stack/xsimd .. _tbb: https://www.threadingbuildingblocks.org xtensor-0.23.10/docs/source/developer/000077500000000000000000000000001405270662500176035ustar00rootroot00000000000000xtensor-0.23.10/docs/source/developer/assign_xexpression.svg000066400000000000000000000347051405270662500242700ustar00rootroot00000000000000
[assign_xexpression.svg — flowchart: assign_xexpression(lhs, rhs) first calls resize(lhs, rhs) (1), then assign_data(lhs, rhs, trivial) (2). If the assignment is trivial and xsimd is available, a vectorized index-based loop is used; if trivial without xsimd, an iterator-based loop; otherwise, a stepper-based loop.]
xtensor-0.23.10/docs/source/developer/assignment.rst

.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht
   Distributed under the terms of the BSD 3-Clause License.
   The full license is in the file LICENSE, distributed with this software.

.. _xtensor-assign-label:

Assignment
==========

In this section, we consider the class ``xarray`` and its semantic bases (``xcontainer_semantic`` and
``xsemantic_base``) to illustrate how the assignment works. `xtensor` provides different mechanics of
assignment depending on the type of expression.

Extended copy semantic
~~~~~~~~~~~~~~~~~~~~~~

``xarray`` provides an extended copy constructor and an extended assignment operator:

.. code::

    template <class E>
    xarray(const xexpression<E>&);

    template <class E>
    self_type& operator=(const xexpression<E>& e);

The assignment operator forwards to ``xsemantic_base::operator=`` whose implementation is given
below:

.. code::

    template <class E>
    derived_type& operator=(const xexpression<E>& e)
    {
        temporary_type tmp(e);
        return this->derived_cast().assign_temporary(std::move(tmp));
    }

Here ``temporary_type`` is ``xarray``; the assignment operator computes the result of the expression
in a temporary variable and then assigns it to the ``xarray`` instance. This temporary variable
avoids aliasing when the array is involved in the rhs expression where broadcasting happens:

.. code::

    xarray<double> a = {1, 2, 3, 4};
    xarray<double> b = {{1, 2, 3, 4}, {5, 6, 7, 8}};
    a = a + b;

The extended copy constructor calls ``xsemantic_base::assign``, which calls
``xcontainer::assign_xexpression``. This two-step invocation allows us to provide a uniform API
(assign, plus_assign, minus_assign, etc.) in the top base class while specializing the
implementations in inheriting classes (``xcontainer_semantic`` and ``xview_semantic``).
``xcontainer::assign_xexpression`` eventually calls the free function ``xt::assign_xexpression``,
which will be discussed in detail later.

The behavior of the extended copy semantic can be summarized with the following diagram:

.. image:: extended_copy_semantic.svg

Computed assignment
~~~~~~~~~~~~~~~~~~~

Computed assignment can be achieved either with traditional operators (``operator+=``,
``operator-=``) or with the corresponding assign functions (``plus_assign``, ``minus_assign``, etc.).
The computed assignment operators forward to the extended assignment operator as illustrated below:

.. code::

    template <class D>
    template <class E>
    inline auto xsemantic_base<D>::operator+=(const xexpression<E>& e) -> derived_type&
    {
        return operator=(this->derived_cast() + e.derived_cast());
    }

The computed assign functions, like ``assign`` itself, avoid the instantiation of a temporary
variable. They call the overload of ``computed_assign`` which, in the case of
``xcontainer_semantic``, simply forwards to the free function ``xt::computed_assign``:

.. code::

    template <class D>
    template <class E>
    inline auto xsemantic_base<D>::plus_assign(const xexpression<E>& e) -> derived_type&
    {
        return this->derived_cast().computed_assign(this->derived_cast() + e.derived_cast());
    }

    template <class D>
    template <class E>
    inline auto xcontainer_semantic<D>::computed_assign(const xexpression<E>& e) -> derived_type&
    {
        xt::computed_assign(*this, e);
        return this->derived_cast();
    }

Again, this two-step invocation allows us to provide a uniform API in ``xsemantic_base`` and
specializations in the inheriting semantic classes. Besides, this allows some code factorization,
since the assignment logic is implemented only once, in ``xt::computed_assign``.
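From a user's perspective, the difference boils down to whether a temporary is materialized; the
following sketch (array values are arbitrary, and the exact internals are described above) shows the
two spellings side by side:

.. code::

    #include <xtensor/xarray.hpp>

    int main()
    {
        xt::xarray<double> a = {1., 2., 3., 4.};
        xt::xarray<double> b = {10., 20., 30., 40.};

        // Operator form: forwards to operator=(a + b), so the result of a + b
        // is first evaluated into a temporary and then assigned to a.
        a += b;

        // Function form: calls computed_assign(a + b) and evaluates the
        // expression directly into a, without the intermediate temporary.
        a.plus_assign(b);

        return 0;
    }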
Scalar computed assignment
~~~~~~~~~~~~~~~~~~~~~~~~~~

Computed assignment operators involving a scalar are similar to the computed assign methods:

.. code::

    template <class D>
    template <class E>
    inline auto xsemantic_base<D>::operator+=(const E& e) -> disable_xexpression<E, derived_type&>
    {
        return this->derived_cast().scalar_computed_assign(e, std::plus<>());
    }

    template <class D>
    template <class E, class F>
    inline auto xcontainer_semantic<D>::scalar_computed_assign(const E& e, F&& f) -> derived_type&
    {
        xt::scalar_computed_assign(*this, e, std::forward<F>(f));
        return this->derived_cast();
    }

The free function ``xt::scalar_computed_assign`` contains optimizations specific to scalars.

Expression assigners
~~~~~~~~~~~~~~~~~~~~

The three main functions for assigning expressions (``assign_xexpression``, ``computed_assign`` and
``scalar_computed_assign``) have a similar implementation: they forward the call to the
``xexpression_assigner``, a template class that can be specialized according to the expression tag:

.. code::

    template <class E1, class E2>
    inline void assign_xexpression(xexpression<E1>& e1, const xexpression<E2>& e2)
    {
        using tag = xexpression_tag_t<E1, E2>;
        xexpression_assigner<tag>::assign_xexpression(e1, e2);
    }

    template <class Tag>
    class xexpression_assigner : public xexpression_assigner_base<Tag>
    {
    public:

        using base_type = xexpression_assigner_base<Tag>;

        template <class E1, class E2>
        static void assign_xexpression(xexpression<E1>& e1, const xexpression<E2>& e2);

        template <class E1, class E2>
        static void computed_assign(xexpression<E1>& e1, const xexpression<E2>& e2);

        template <class E1, class E2, class F>
        static void scalar_computed_assign(xexpression<E1>& e1, const E2& e2, F&& f);

        // ...
    };

`xtensor` provides specializations for ``xtensor_expression_tag`` and ``xoptional_expression_tag``.
When implementing a new function type whose API is unrelated to the one of ``xfunction_base``, the
``xexpression_assigner`` should be specialized so that the assignment relies on this specific API.

assign_xexpression
~~~~~~~~~~~~~~~~~~

The ``assign_xexpression`` method first resizes the lhs expression, then chooses an assignment method
depending on many properties of both the lhs and rhs expressions. One of these properties, computed
during the resize phase, is the nature of the assignment: trivial or not. The assignment is said to
be trivial when the memory layouts of the lhs and rhs are such that assignment can be done by
iterating over a 1-D sequence on both sides. In that case, two options are possible:

- if ``xtensor`` is compiled with the optional ``xsimd`` dependency, and if the layout and the
  ``value_type`` of each expression allow it, the assignment is a vectorized index-based loop
  operating on the expression buffers.
- if the ``xsimd`` assignment is not possible (for any reason), an iterator-based loop operating on
  the expression buffers is used instead.

These methods are implemented in specializations of the ``trivial_assigner`` class.

When the assignment is not trivial, :ref:`stepper-label` are used to perform the assignment. Instead
of using the ``xiterator`` of each expression, an instance of ``data_assigner`` holds both steppers
and makes them step together.

.. image:: assign_xexpression.svg

computed_assign
~~~~~~~~~~~~~~~

The ``computed_assign`` method is slightly different from the ``assign_xexpression`` method. After
resizing the lhs member, it checks if some broadcasting is involved. If so, the rhs expression is
evaluated into a temporary and the temporary is assigned to the lhs expression; otherwise, the rhs is
directly evaluated into the lhs. This is because a computed assignment always implies aliasing
(meaning that the lhs is also involved in the rhs): ``a += b;`` is equivalent to ``a = a + b;``.

.. image:: computed_assign.svg
scalar_computed_assign
~~~~~~~~~~~~~~~~~~~~~~

The ``scalar_computed_assign`` method simply iterates over the expression and applies the scalar
operation on each value:

.. code::

    template <class Tag>
    template <class E1, class E2, class F>
    inline void xexpression_assigner<Tag>::scalar_computed_assign(xexpression<E1>& e1, const E2& e2, F&& f)
    {
        E1& d = e1.derived_cast();
        std::transform(d.cbegin(), d.cend(), d.begin(),
                       [e2, &f](const auto& v) { return f(v, e2); });
    }

xtensor-0.23.10/docs/source/developer/computed_assign.svg
[computed_assign.svg — flowchart: computed_assign(lhs, rhs) first calls resize(lhs, rhs) (1). If no broadcasting is involved, it calls assign_data(lhs, rhs, trivial) (2) directly; otherwise, it calls assign_data(tmp, rhs, trivial) (1) followed by lhs.assign_temporary(tmp) (2).]
xtensor-0.23.10/docs/source/developer/concepts.rst

.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht
   Distributed under the terms of the BSD 3-Clause License.
   The full license is in the file LICENSE, distributed with this software.

.. _concepts-label:

Concepts
========

`xtensor`'s core is built upon key concepts captured in interfaces that are put together in derived
classes through CRTP (Curiously Recurring Template Pattern) and multiple inheritance. Interfaces and
classes that model expressions implement *value semantics*. CRTP and value semantics achieve static
polymorphism and avoid the performance overhead of virtual methods and dynamic dispatching.

xexpression
~~~~~~~~~~~

``xexpression`` is the base class for all expression classes. It is a CRTP base whose template
parameter must be the most derived class in the hierarchy. For instance, if ``A`` inherits from
``B``, which in turn inherits from ``xexpression``, then ``B`` should be a template class whose
template parameter is ``A`` and should forward this parameter to ``xexpression``:

.. code::

    #include <xtensor/xexpression.hpp>

    template <class D>
    class B : public xexpression<D>
    {
        // ...
    };

    class A : public B<A>
    {
        // ...
    };

``xexpression`` only provides three overloads of the same function, which cast an ``xexpression``
object to the most derived type, depending on the nature of the object (*lvalue*, *const lvalue* or
*rvalue*):

.. code::

    derived_type& derived_cast() & noexcept;
    const derived_type& derived_cast() const & noexcept;
    derived_type derived_cast() && noexcept;

.. _xiterable-concept-label:

xiterable
~~~~~~~~~

The iterable concept is modeled by two classes, ``xconst_iterable`` and ``xiterable``, defined in
``xtensor/xiterable.hpp``. ``xconst_iterable`` provides types and methods for iterating on constant
expressions, similar to the ones provided by the STL containers. Unlike the STL, the methods of
``xconst_iterable`` and ``xiterable`` are templated by a layout parameter that allows you to iterate
over an N-dimensional expression in row-major order or column-major order.

.. note::

    Row-major layout means that elements that only differ by their last index are contiguous in
    memory. Column-major layout means that elements that only differ by their first index are
    contiguous in memory.

.. image:: iteration.svg

.. code::

    template <layout_type L>
    const_iterator begin() const noexcept;

    template <layout_type L>
    const_iterator end() const noexcept;

    template <layout_type L>
    const_iterator cbegin() const noexcept;

    template <layout_type L>
    const_iterator cend() const noexcept;

    template <layout_type L>
    const_reverse_iterator rbegin() const noexcept;

    template <layout_type L>
    const_reverse_iterator rend() const noexcept;

    template <layout_type L>
    const_reverse_iterator crbegin() const noexcept;

    template <layout_type L>
    const_reverse_iterator crend() const noexcept;

This template parameter is defaulted to ``XTENSOR_DEFAULT_TRAVERSAL`` (see
:ref:`configuration-label`), so that `xtensor` expressions can be used in generic code such as:

.. code::

    std::copy(a.cbegin(), a.cend(), b.begin());

where ``a`` and ``b`` can be arbitrary types (from `xtensor`, the STL or any external library)
supporting standard iteration.

``xiterable`` inherits from ``xconst_iterable`` and provides the non-const counterparts of the
methods defined in ``xconst_iterable``. Like ``xexpression``, both are CRTP classes whose template
parameter must be the most derived type.

Besides traditional methods for iterating, ``xconst_iterable`` and ``xiterable`` provide overloads
taking a shape parameter.
This allows iterating over an expression as if it were broadcast to the given shape:

.. code::

    #include <algorithm>
    #include <iostream>
    #include <iterator>
    #include <vector>
    #include <xtensor/xarray.hpp>

    int main(int argc, char* argv[])
    {
        xt::xarray<int> a = { 1, 2, 3 };
        std::vector<std::size_t> shape = { 2, 3 };
        std::copy(a.cbegin(shape), a.cend(shape),
                  std::ostream_iterator<int>(std::cout, " "));
        // output: 1 2 3 1 2 3
    }

Iterators returned by methods defined in ``xconst_iterable`` and ``xiterable`` are random access
iterators.

.. _xsemantic-concept-label:

xsemantic
~~~~~~~~~

The ``xsemantic_base`` interface provides methods for assigning an expression:

.. code::

    template <class E>
    disable_xexpression<E, derived_type&> operator+=(const E&);

    template <class E>
    derived_type& operator+=(const xexpression<E>&);

and similar methods for ``operator-=``, ``operator*=``, ``operator/=``, ``operator%=``,
``operator&=``, ``operator|=`` and ``operator^=``.

The first overload is meant for computed assignment involving a scalar; it allows writing code like

.. code::

    #include <iostream>
    #include <xtensor/xarray.hpp>
    #include <xtensor/xio.hpp>

    int main(int argc, char* argv[])
    {
        xt::xarray<int> a = { 1, 2, 3 };
        a += 4;
        std::cout << a << std::endl;  // outputs { 5, 6, 7 }
    }

We rely on SFINAE to remove this overload from the overload resolution set when the parameter that we
want to assign is not a scalar, avoiding ambiguity.

Operator-based methods taking a general ``xexpression`` parameter don't perform a direct assignment.
Instead, the result is assigned to a temporary variable first, in order to prevent issues with
aliasing. Thus, if ``a`` and ``b`` are expressions, the following

.. code::

    a += b

is equivalent to

.. code::

    temporary_type tmp = a + b;
    a.assign(tmp);

Temporaries can be avoided with the assign-based methods:

.. code::

    template <class E>
    derived_type& plus_assign(const xexpression<E>&);

    template <class E>
    derived_type& minus_assign(const xexpression<E>&);

    template <class E>
    derived_type& multiplies_assign(const xexpression<E>&);

    template <class E>
    derived_type& divides_assign(const xexpression<E>&);

    template <class E>
    derived_type& modulus_assign(const xexpression<E>&);

``xsemantic_base`` is a CRTP class whose parameter must be the most derived type in the hierarchy. It
inherits from ``xexpression`` and forwards its template parameter to this latter one.

``xsemantic_base`` also provides an assignment operator that takes an ``xexpression``, in its
protected section:

.. code::

    template <class E>
    derived_type& operator=(const xexpression<E>&);

Like the computed assignment operators, it evaluates the expression inside a temporary before calling
the ``assign`` method. Classes inheriting from ``xsemantic_base`` must redeclare this method, either
in their protected section (if they are not final classes) or in their public section. In both cases,
they should forward the call to their base class.

Two refinements of this concept are provided: ``xcontainer_semantic`` and ``xview_semantic``. Refer
to the :ref:`xtensor-assign-label` section for more details about semantic classes and how they're
involved in expression assignment.

xsemantic classes hierarchy:

.. image:: xsemantic_classes.svg

.. _xcontainer-concept-label:

xcontainer
~~~~~~~~~~

The ``xcontainer`` class provides methods for container-based expressions. It does not hold any data;
this is delegated to inheriting classes. It assumes the data are stored using a strided-index scheme.
``xcontainer`` defines the following methods:

**Shape, strides and size**

.. code::

    size_type size() const noexcept;
    size_type dimension() const noexcept;

    const inner_shape_type& shape() const noexcept;
    const inner_strides_type& strides() const noexcept;
    const inner_backstrides_type& backstrides() const noexcept;

**Data access methods**
.. code::

    template <class... Args>
    const_reference operator()(Args... args) const;

    template <class... Args>
    const_reference at(Args... args) const;

    template <class S>
    disable_integral_t<S, const_reference> operator[](const S& index) const;

    template <class I>
    const_reference operator[](std::initializer_list<I> index) const;

    template <class It>
    const_reference element(It first, It last) const;

    const storage_type& storage() const;

(and their non-const counterparts)

**Broadcasting methods**

.. code::

    template <class S>
    bool broadcast_shape(const S& shape) const;

Lower-level methods are also provided, meant for optimized assignment and BLAS bindings. They are
covered in the :ref:`xtensor-assign-label` section.

If you read the entire code of ``xcontainer``, you'll notice that two types are defined for shape,
strides and backstrides: ``shape_type`` and ``inner_shape_type``, ``strides_type`` and
``inner_strides_type``, and ``backstrides_type`` and ``inner_backstrides_type``. The distinction
between ``inner_shape_type`` and ``shape_type`` was motivated by the xtensor-python wrapper around
numpy data structures, where the inner shape type is a proxy on the shape section of the numpy
array object. It cannot have value semantics on its own as it is bound to the entire numpy array.

``xstrided_container`` inherits from ``xcontainer``; it represents a container that holds its shape
and strides. It provides methods for reshaping the container:

.. code::

    template <class D>
    void resize(D&& shape, bool force = false);

    template <class S>
    void resize(S&& shape, layout_type l);

    template <class S>
    void resize(S&& shape, const strides_type& strides);

    template <class S>
    void reshape(S&& shape, layout_type l);

Both ``xstrided_container`` and ``xcontainer`` are CRTP classes whose template parameter must be the
most derived type in the hierarchy. Besides, ``xcontainer`` inherits from ``xiterable``, thus
providing iteration methods.

.. image:: xcontainer_classes.svg

xfunction
~~~~~~~~~

The ``xfunction`` class is used to model mathematical operations and functions. It provides methods
similar to the ones defined in ``xcontainer``, and embeds the functor describing the operation and
its operands. It inherits from ``xconst_iterable``, thus providing iteration methods.

xtensor-0.23.10/docs/source/developer/expression_tree.rst

.. Copyright (c) 2016, Johan Mabille, Sylvain Corlay and Wolf Vollprecht
   Distributed under the terms of the BSD 3-Clause License.
   The full license is in the file LICENSE, distributed with this software.

Expression tree
===============

Most of the expressions in `xtensor` are lazy-evaluated: they do not hold any values. The values are
computed upon access, or when the expression is assigned to a container. This means that `xtensor`
needs somehow to keep track of the expression tree.

xfunction
~~~~~~~~~

A node in the expression tree may be represented by different classes in `xtensor`; here we focus on
basic arithmetic operations and mathematical functions, which are represented by an instance of
``xfunction``. This is a template class whose parameters are:

- a functor describing the operation of the mathematical function
- the closures of the child expressions, i.e. the most optimal way to store each child expression

Consider the following code:

.. code::

    xarray<double> a = xt::ones<double>({2, 2});
    xarray<double> b = xt::ones<double>({2, 2});
    auto f = (a + b);

Here the type of ``f`` is ``xfunction<xt::detail::plus, const xarray<double>&, const
xarray<double>&>``, and ``f`` stores constant references on the arrays involved in the operation.
This can be illustrated by the figure below:
.. image:: xfunction_tree.svg

The implementation of the ``xfunction`` methods is quite easy: they forward the call to the child
nodes and apply the operation when this makes sense. For instance, assuming that the operands are
stored as ``m_first`` and ``m_second``, and the functor describing the operation as ``m_functor``,
the implementation of ``operator()`` and ``broadcast_shape`` looks like:

.. code::

    template <class F, class... CT>
    template <class... Args>
    inline auto xfunction<F, CT...>::operator()(Args... args) const -> const_reference
    {
        return m_functor(m_first(args...), m_second(args...));
    }

    template <class F, class... CT>
    template <class S>
    inline bool xfunction<F, CT...>::broadcast_shape(S& shape) const
    {
        return m_first.broadcast_shape(shape) && m_second.broadcast_shape(shape);
    }

In fact, ``xfunction`` can handle an arbitrary number of arguments. The practical implementation is
slightly more complicated than the code snippet above; however, the principle remains the same.

Holding expressions
~~~~~~~~~~~~~~~~~~~

Each node of an expression tree holds const references to its child nodes, or the child nodes
themselves, depending on their nature. When building a complex expression, if a part of this
expression is an rvalue, it is moved inside its parent; otherwise, a constant reference is used:

.. code::

    xarray<double> some_function();

    xarray<double> a = xt::ones<double>({2, 2});
    auto f = a + some_function();

Here ``f`` holds a constant reference on ``a``, while the array returned by ``some_function`` is
moved into ``f``. The actual types held by the expression are the **closure types**; more details can
be found in :ref:`closure-semantics-label`.

Building the expression tree
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As previously stated, each mathematical function in xtensor returns an instance of ``xfunction``.
This section explains in detail how the template parameters of ``xfunction`` are computed according
to the type of the function, and the number and types of its arguments. Let's consider the definition
of ``operator+``:

.. code::

    template <class E1, class E2>
    inline auto operator+(E1&& e1, E2&& e2) -> detail::xfunction_type<detail::plus, E1, E2>
    {
        return detail::make_xfunction<detail::plus>(std::forward<E1>(e1), std::forward<E2>(e2));
    }

This top-level function selects the appropriate functor and forwards its arguments to the
``make_xfunction`` generator. This latter is responsible for setting the remaining template
parameters of ``xfunction``:

.. code:: template