pax_global_header00006660000000000000000000000064141631243750014520gustar00rootroot0000000000000052 comment=f4a9df04dc08d28d1198af7b5550ad1e37b99aa5 PyAV-8.1.0/000077500000000000000000000000001416312437500123455ustar00rootroot00000000000000PyAV-8.1.0/.editorconfig000066400000000000000000000003371416312437500150250ustar00rootroot00000000000000root = true [*] charset = utf-8 end_of_line = lf indent_size = 4 indent_style = space insert_final_newline = true trim_trailing_whitespace = true [*.yml] indent_size = 2 [Makefile] indent_size = unset indent_style = tab PyAV-8.1.0/.github/000077500000000000000000000000001416312437500137055ustar00rootroot00000000000000PyAV-8.1.0/.github/ISSUE_TEMPLATE/000077500000000000000000000000001416312437500160705ustar00rootroot00000000000000PyAV-8.1.0/.github/ISSUE_TEMPLATE/build-bug-report.md000066400000000000000000000031271416312437500216000ustar00rootroot00000000000000--- name: Build bug report about: Report on an issue while building or installing PyAV. title: "FOO does not build." labels: build assignees: '' --- **IMPORTANT:** Be sure to replace all template sections {{ like this }} or your issue may be discarded. ## Overview {{ A clear and concise description of what the bug is. }} ## Expected behavior {{ A clear and concise description of what you expected to happen. }} ## Actual behavior {{ A clear and concise description of what actually happened. }} Build report: ``` {{ Complete output of `python setup.py build`. Reports that do not show compiler commands will not be accepted (e.g. results from `pip install av`). }} ``` ## Investigation {{ What you did to isolate the problem. }} ## Reproduction {{ Steps to reproduce the behavior. }} ## Versions - OS: {{ e.g. macOS 10.13.6 }} - PyAV runtime: ``` {{ Complete output of `python -m av --version` if you can run it. }} ``` - PyAV build: ``` {{ Complete output of `python setup.py config --verbose`. 
}} ``` - FFmpeg: ``` {{ Complete output of `ffmpeg -version` }} ``` ## Research I have done the following: - [ ] Checked the [PyAV documentation](https://pyav.org/docs) - [ ] Searched on [Google](https://www.google.com/search?q=pyav+how+do+I+foo) - [ ] Searched on [Stack Overflow](https://stackoverflow.com/search?q=pyav) - [ ] Looked through [old GitHub issues](https://github.com/PyAV-Org/PyAV/issues?&q=is%3Aissue) - [ ] Asked on [PyAV Gitter](https://gitter.im/PyAV-Org) - [ ] ... and waited 72 hours for a response. ## Additional context {{ Add any other context about the problem here. }} PyAV-8.1.0/.github/ISSUE_TEMPLATE/ffmpeg-feature-request.md000066400000000000000000000024461416312437500230030ustar00rootroot00000000000000--- name: FFmpeg feature request about: Request a feature of FFmpeg be exposed or supported by PyAV. title: "Allow FOO to BAR" labels: enhancement assignees: '' --- **IMPORTANT:** Be sure to replace all template sections {{ like this }} or your issue may be discarded. ## Overview {{ A clear and concise description of what the feature is. }} ## Existing FFmpeg API {{ Link to appropriate FFmpeg documentation, ideally the API doxygen files at https://ffmpeg.org/doxygen/trunk/ }} ## Expected PyAV API {{ A description of how you think PyAV should behave. }} Example: ``` {{ An example of how you think PyAV should behave. }} ``` ## Investigation {{ What you did to isolate the problem. }} ## Reproduction {{ Steps to reproduce the behavior. If the problem is media specific, include a link to it. Only send media that you have the rights to. }} ## Versions - OS: {{ e.g. macOS 10.13.6 }} - PyAV runtime: ``` {{ Complete output of `python -m av --version`. If this command won't run, you are likely dealing with the build issue and should use the appropriate template. }} ``` - PyAV build: ``` {{ Complete output of `python setup.py config --verbose`. 
}} ``` - FFmpeg: ``` {{ Complete output of `ffmpeg -version` }} ``` ## Additional context {{ Add any other context about the problem here. }} PyAV-8.1.0/.github/ISSUE_TEMPLATE/pyav-feature-request.md000066400000000000000000000011041416312437500225040ustar00rootroot00000000000000--- name: PyAV feature request about: Request a feature of PyAV that is not provided by FFmpeg. title: "Allow FOO to BAR" labels: enhancement assignees: '' --- **IMPORTANT:** Be sure to replace all template sections {{ like this }} or your issue may be discarded. ## Overview {{ A clear and concise description of what the feature is. }} ## Desired Behavior {{ A description of how you think PyAV should behave. }} ## Example API ``` {{ An example of how you think PyAV should behave. }} ``` ## Additional context {{ Add any other context about the problem here. }} PyAV-8.1.0/.github/ISSUE_TEMPLATE/runtime-bug-report.md000066400000000000000000000032611416312437500221630ustar00rootroot00000000000000--- name: Runtime bug report about: Report on an issue while running PyAV. title: "The FOO does not BAR." labels: bug assignees: '' --- **IMPORTANT:** Be sure to replace all template sections {{ like this }} or your issue may be discarded. ## Overview {{ A clear and concise description of what the bug is. }} ## Expected behavior {{ A clear and concise description of what you expected to happen. }} ## Actual behavior {{ A clear and concise description of what actually happened. }} Traceback: ``` {{ Include complete tracebacks if there are any exceptions. }} ``` ## Investigation {{ What you did to isolate the problem. }} ## Reproduction {{ Steps to reproduce the behavior. If the problem is media specific, include a link to it. Only send media that you have the rights to. }} ## Versions - OS: {{ e.g. macOS 10.13.6 }} - PyAV runtime: ``` {{ Complete output of `python -m av --version`. If this command won't run, you are likely dealing with the build issue and should use the appropriate template. 
}} ``` - PyAV build: ``` {{ Complete output of `python setup.py config --verbose`. }} ``` - FFmpeg: ``` {{ Complete output of `ffmpeg -version` }} ``` ## Research I have done the following: - [ ] Checked the [PyAV documentation](https://pyav.org/docs) - [ ] Searched on [Google](https://www.google.com/search?q=pyav+how+do+I+foo) - [ ] Searched on [Stack Overflow](https://stackoverflow.com/search?q=pyav) - [ ] Looked through [old GitHub issues](https://github.com/PyAV-Org/PyAV/issues?&q=is%3Aissue) - [ ] Asked on [PyAV Gitter](https://gitter.im/PyAV-Org) - [ ] ... and waited 72 hours for a response. ## Additional context {{ Add any other context about the problem here. }} PyAV-8.1.0/.github/ISSUE_TEMPLATE/user-help.md000066400000000000000000000022121416312437500203130ustar00rootroot00000000000000--- name: User help about: Request help with using PyAV. title: "How do I FOO?" labels: 'user help' assignees: '' --- **IMPORTANT:** Be sure to replace all template sections {{ like this }} or your issue may be discarded. ## Overview {{ A clear and concise description of your problem. }} ## Expected behavior {{ A clear and concise description of what you expected to happen. }} ## Actual behavior {{ A clear and concise description of what actually happened. }} Traceback: ``` {{ Include complete tracebacks if there are any exceptions. }} ``` ## Investigation {{ What you tried so far to fix your problem. }} ## Research I have done the following: - [ ] Checked the [PyAV documentation](https://pyav.org/docs) - [ ] Searched on [Google](https://www.google.com/search?q=pyav+how+do+I+foo) - [ ] Searched on [Stack Overflow](https://stackoverflow.com/search?q=pyav) - [ ] Looked through [old GitHub issues](https://github.com/PyAV-Org/PyAV/issues?&q=is%3Aissue) - [ ] Asked on [PyAV Gitter](https://gitter.im/PyAV-Org) - [ ] ... and waited 72 hours for a response. ## Additional context {{ Add any other context about the problem here. 
}} PyAV-8.1.0/.github/workflows/000077500000000000000000000000001416312437500157425ustar00rootroot00000000000000PyAV-8.1.0/.github/workflows/tests.yml000066400000000000000000000165231416312437500176360ustar00rootroot00000000000000name: tests on: [push, pull_request] jobs: style: name: "${{ matrix.config.suite }}" runs-on: ubuntu-latest strategy: matrix: config: - {suite: isort} - {suite: flake8} env: PYAV_PYTHON: python3 PYAV_LIBRARY: ffmpeg-4.2 # doesn't matter steps: - uses: actions/checkout@v2 name: Checkout - name: Python uses: actions/setup-python@v1 with: python-version: 3.7 - name: Environment run: env | sort - name: Packages run: | . scripts/activate.sh # A bit of a hack that we can get away with this. python -m pip install ${{ matrix.config.suite }} - name: "${{ matrix.config.suite }}" run: | . scripts/activate.sh ./scripts/test ${{ matrix.config.suite }} nix: name: "py-${{ matrix.config.python }} lib-${{ matrix.config.ffmpeg }} ${{matrix.config.os}}" runs-on: ${{ matrix.config.os }} strategy: matrix: config: - {os: ubuntu-latest, python: 3.7, ffmpeg: "4.2", extras: true} - {os: ubuntu-latest, python: 3.7, ffmpeg: "4.1"} - {os: ubuntu-latest, python: 3.7, ffmpeg: "4.0"} - {os: ubuntu-latest, python: pypy3, ffmpeg: "4.2"} #- {os: macos-latest, python: 3.7, ffmpeg: "4.2"} env: PYAV_PYTHON: python${{ matrix.config.python }} PYAV_LIBRARY: ffmpeg-${{ matrix.config.ffmpeg }} steps: - uses: actions/checkout@v2 name: Checkout - name: Python ${{ matrix.config.python }} uses: actions/setup-python@v1 with: python-version: ${{ matrix.config.python }} - name: OS Packages run: | case ${{ matrix.config.os }} in ubuntu-latest) sudo apt-get update sudo apt-get install autoconf automake build-essential cmake \ libtool mercurial pkg-config texinfo wget yasm zlib1g-dev sudo apt-get install libass-dev libfreetype6-dev libjpeg-dev \ libtheora-dev libvorbis-dev libx264-dev if [[ "${{ matrix.config.extras }}" ]]; then sudo apt-get install doxygen fi ;; macos-latest) brew update 
brew install automake libtool nasm pkg-config shtool texi2html wget brew install libass libjpeg libpng libvorbis libvpx opus theora x264 ;; esac - name: Pip and FFmpeg run: | . scripts/activate.sh ffmpeg-${{ matrix.config.ffmpeg }} scripts/build-deps - name: Build run: | . scripts/activate.sh ffmpeg-${{ matrix.config.ffmpeg }} scripts/build - name: Test run: | . scripts/activate.sh ffmpeg-${{ matrix.config.ffmpeg }} python -m av --version # Assert it can import. scripts/test - name: Docs if: matrix.config.extras run: | . scripts/activate.sh ffmpeg-${{ matrix.config.ffmpeg }} make -C docs html - name: Doctest if: matrix.config.extras run: | . scripts/activate.sh ffmpeg-${{ matrix.config.ffmpeg }} scripts/test doctest - name: Examples if: matrix.config.extras run: | . scripts/activate.sh ffmpeg-${{ matrix.config.ffmpeg }} scripts/test examples - name: Source Distribution if: matrix.config.extras run: | . scripts/activate.sh ffmpeg-${{ matrix.config.ffmpeg }} scripts/test sdist windows: name: "py-${{ matrix.config.python }} lib-${{ matrix.config.ffmpeg }} ${{matrix.config.os}}" runs-on: ${{ matrix.config.os }} strategy: matrix: config: - {os: windows-latest, python: 3.7, ffmpeg: "4.2"} - {os: windows-latest, python: 3.7, ffmpeg: "4.1"} - {os: windows-latest, python: 3.7, ffmpeg: "4.0"} steps: - name: Checkout uses: actions/checkout@v2 - name: Set up Conda shell: bash run: | . $CONDA/etc/profile.d/conda.sh conda config --set always_yes true conda config --add channels conda-forge conda create -q -n pyav \ cython \ ffmpeg=${{ matrix.config.ffmpeg }} \ numpy \ pillow \ python=${{ matrix.config.python }} \ setuptools - name: Build shell: bash run: | . $CONDA/etc/profile.d/conda.sh conda activate pyav python setup.py build_ext --inplace --ffmpeg-dir=$CONDA_PREFIX/Library - name: Test shell: bash run: | . 
$CONDA/etc/profile.d/conda.sh conda activate pyav python setup.py test package-source: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - uses: actions/setup-python@v1 with: python-version: 3.7 - name: Build source package run: | pip install cython python scripts/fetch-vendor /tmp/vendor PKG_CONFIG_PATH=/tmp/vendor/lib/pkgconfig make build python setup.py sdist - name: Upload source package uses: actions/upload-artifact@v1 with: name: dist path: dist/ package-wheel: runs-on: ${{ matrix.os }} strategy: fail-fast: false matrix: os: [ubuntu-latest, macos-latest, windows-latest] steps: - uses: actions/checkout@v2 - uses: actions/setup-python@v1 with: python-version: 3.7 - name: Install packages if: matrix.os == 'macos-latest' run: | brew update brew install pkg-config - name: Build wheels env: CIBW_ARCHS_WINDOWS: AMD64 CIBW_BEFORE_BUILD: pip install cython && python scripts/fetch-vendor /tmp/vendor CIBW_BEFORE_BUILD_WINDOWS: pip install cython && python scripts\fetch-vendor C:\cibw\vendor CIBW_ENVIRONMENT_LINUX: LD_LIBRARY_PATH=/tmp/vendor/lib:$LD_LIBRARY_PATH PKG_CONFIG_PATH=/tmp/vendor/lib/pkgconfig CIBW_ENVIRONMENT_MACOS: PKG_CONFIG_PATH=/tmp/vendor/lib/pkgconfig LDFLAGS=-headerpad_max_install_names CIBW_ENVIRONMENT_WINDOWS: INCLUDE=C:\\cibw\\vendor\\include LIB=C:\\cibw\\vendor\\lib PYAV_SKIP_TESTS=unicode_filename CIBW_REPAIR_WHEEL_COMMAND_WINDOWS: python scripts/inject-dll {wheel} {dest_dir} C:\cibw\vendor\bin CIBW_SKIP: cp36-* pp36-* pp38-win* *-musllinux* CIBW_TEST_COMMAND: mv {project}/av {project}/av.disabled && python -m unittest discover -t {project} -s tests && mv {project}/av.disabled {project}/av # disable test suite on OS X, the SSL config seems broken CIBW_TEST_COMMAND_MACOS: true CIBW_TEST_REQUIRES: numpy run: | pip install cibuildwheel cibuildwheel --output-dir dist shell: bash - name: Upload wheels uses: actions/upload-artifact@v1 with: name: dist path: dist/ publish: runs-on: ubuntu-latest needs: [package-source, package-wheel] steps: - 
uses: actions/checkout@v2 - uses: actions/download-artifact@v1 with: name: dist path: dist/ - name: Publish to PyPI if: github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/') uses: pypa/gh-action-pypi-publish@master with: user: __token__ password: ${{ secrets.PYPI_TOKEN }} PyAV-8.1.0/.gitignore000066400000000000000000000005251416312437500143370ustar00rootroot00000000000000# General *~ .DS_Store .nfs.* ._* # Environment /.eggs /tmp /vendor /venv /venvs # Build products *.dll *.egg-info *.lib *.pyc *.so /*.sdf /*.sln /*.suo /av/**/*.exp /av/**/*.lib /av/**/*.pdb /av/**/*.pyd /build /dist /docs/_build /ipch /msvc-projects /src # Testing. *.spyderproject .idea /.vagrant /sandbox /tests/assets /tests/samples PyAV-8.1.0/AUTHORS.py000066400000000000000000000054101416312437500140440ustar00rootroot00000000000000import math import subprocess print('''Contributors ============ All contributors (by number of commits): ''') email_map = { # Maintainers. 'git@mikeboers.com': 'github@mikeboers.com', 'mboers@keypics.com': 'github@mikeboers.com', 'mikeb@loftysky.com': 'github@mikeboers.com', 'mikeb@markmedia.co': 'github@mikeboers.com', 'westernx@mikeboers.com': 'github@mikeboers.com', # Junk. 'mark@mark-VirtualBox.(none)': None, # Aliases. 
'a.davoudi@aut.ac.ir': 'davoudialireza@gmail.com', 'tcaswell@bnl.gov': 'tcaswell@gmail.com', 'xxr3376@gmail.com': 'xxr@megvii.com', 'dallan@pha.jhu.edu': 'daniel.b.allan@gmail.com', } name_map = { 'caspervdw@gmail.com': 'Casper van der Wel', 'daniel.b.allan@gmail.com': 'Dan Allan', 'mgoacolou@cls.fr': 'Manuel Goacolou', 'mindmark@gmail.com': 'Mark Reid', 'moritzkassner@gmail.com': 'Moritz Kassner', 'vidartf@gmail.com': 'Vidar Tonaas Fauske', 'xxr@megvii.com': 'Xinran Xu', } github_map = { 'billy.shambrook@gmail.com': 'billyshambrook', 'daniel.b.allan@gmail.com': 'danielballan', 'davoudialireza@gmail.com': 'adavoudi', 'github@mikeboers.com': 'mikeboers', 'jeremy.laine@m4x.org': 'jlaine', 'kalle.litterfeldt@gmail.com': 'litterfeldt', 'mindmark@gmail.com': 'markreidvfx', 'moritzkassner@gmail.com': 'mkassner', 'rush@logic.cz': 'radek-senfeld', 'self@brendanlong.com': 'brendanlong', 'tcaswell@gmail.com': 'tacaswell', 'ulrik.mikaelsson@magine.com': 'rawler', 'vidartf@gmail.com': 'vidartf', 'willpatera@gmail.com': 'willpatera', 'xxr@megvii.com': 'xxr3376', } email_count = {} for line in subprocess.check_output(['git', 'log', '--format=%aN,%aE']).decode().splitlines(): name, email = line.strip().rsplit(',', 1) email = email_map.get(email, email) if not email: continue names = name_map.setdefault(email, set()) if isinstance(names, set): names.add(name) email_count[email] = email_count.get(email, 0) + 1 last = None block_i = 0 for email, count in sorted(email_count.items(), key=lambda x: (-x[1], x[0])): # This is the natural log, because of course it should be. ;) order = int(math.log(count)) if last and last != order: block_i += 1 print() last = order names = name_map[email] if isinstance(names, set): name = ', '.join(sorted(names)) else: name = names github = github_map.get(email) # The '-' vs '*' is so that Sphinx treats them as different lists, and # introduces a gap bettween them. 
if github: print('%s %s <%s>; `@%s `_' % ('-*'[block_i % 2], name, email, github, github)) else: print('%s %s <%s>' % ('-*'[block_i % 2], name, email, )) PyAV-8.1.0/AUTHORS.rst000066400000000000000000000042701416312437500142270ustar00rootroot00000000000000Contributors ============ All contributors (by number of commits): - Mike Boers ; `@mikeboers `_ * Jeremy Lainé ; `@jlaine `_ * Mark Reid ; `@markreidvfx `_ - Vidar Tonaas Fauske ; `@vidartf `_ - Billy Shambrook ; `@billyshambrook `_ - Casper van der Wel - Tadas Dailyda * Xinran Xu ; `@xxr3376 `_ * Dan Allan ; `@danielballan `_ * Alireza Davoudi ; `@adavoudi `_ * Moritz Kassner ; `@mkassner `_ * Thomas A Caswell ; `@tacaswell `_ * Ulrik Mikaelsson ; `@rawler `_ * Wel C. van der * Will Patera ; `@willpatera `_ - rutsh - Christoph Rackwitz - Johannes Erdfelt - Karl Litterfeldt ; `@litterfeldt `_ - Martin Larralde - Miles Kaufmann - Radek Senfeld ; `@radek-senfeld `_ - Ian Lee - Arthur Barros - Gemfield - mephi42 - Manuel Goacolou - Ömer Sezgin Uğurlu - Orivej Desh - Brendan Long ; `@brendanlong `_ - Tom Flanagan - Tim O'Shea - Tim Ahpee - Jonas Tingeborn - Vasiliy Kotov - Koichi Akabe - David Joy PyAV-8.1.0/CHANGELOG.rst000066400000000000000000000356321416312437500143770ustar00rootroot00000000000000Changelog ========= We are operating with `semantic versioning `_. .. Please try to update this file in the commits that make the changes. To make merging/rebasing easier, we don't manually break lines in here when they are too long, so any particular change is just one line. To make tracking easier, please add either ``closes #123`` or ``fixes #123`` to the first line of the commit message. There are more syntaxes at: . Note that they these tags will not actually close the issue/PR until they are merged into the "default" branch, currently "develop"). v8.1.0 ------ Minor: - Update FFmpeg to 4.3.2 for the binary wheels. - Provide binary wheels for Python 3.10 (:issue:`820`). 
- Stop providing binary wheels for end-of-life Python 3.6.
- Fix args order in Frame.__repr__ (:issue:`749`).
- Fix documentation to remove unavailable QUIET log level (:issue:`719`).
- Expose codec_context.codec_tag (:issue:`741`).
- Add example for encoding with a custom PTS (:issue:`725`).
- Use av_packet_rescale_ts in Packet._rebase_time() (:issue:`737`).
- Do not hardcode errno values in test suite (:issue:`729`).
- Use av_guess_format for output container format (:issue:`691`).
- Fix setting CodecContext.extradata (:issue:`658`, :issue:`740`).
- Fix documentation code block indentation (:issue:`783`).
- Fix link to Conda installation instructions (:issue:`782`).
- Export AudioStream from av.audio (:issue:`775`).
- Fix setting CodecContext.extradata (:issue:`801`).

v8.0.3
------

Minor:

- Update FFmpeg to 4.3.1 for the binary wheels.

v8.0.2
------

Minor:

- Enable GnuTLS support in the FFmpeg build used for binary wheels (:issue:`675`).
- Make binary wheels compatible with Mac OS X 10.9+ (:issue:`662`).
- Drop Python 2.x buffer protocol code.
- Remove references to previous repository location.

v8.0.1
------

Minor:

- Enable additional FFmpeg features in the binary wheels.
- Use os.fsencode for both input and output file names (:issue:`600`).

v8.0.0
------

Major:

- Drop support for Python 2 and Python 3.4.
- Provide binary wheels for Linux, Mac and Windows.

Minor:

- Remove shims for obsolete FFmpeg versions (:issue:`588`).
- Add yuvj420p format for :meth:`VideoFrame.from_ndarray` and :meth:`VideoFrame.to_ndarray` (:issue:`583`).
- Add support for palette formats in :meth:`VideoFrame.from_ndarray` and :meth:`VideoFrame.to_ndarray` (:issue:`601`).
- Fix Python 3.8 deprecation warning related to abstract base classes (:issue:`616`).
- Remove ICC profiles from logos (:issue:`622`).

Fixes:

- Avoid infinite timeout in :func:`av.open` (:issue:`589`).

v7.0.1
------

Fixes:

- Removed deprecated ``AV_FRAME_DATA_QP_TABLE_*`` enums.
(:issue:`607`)

v7.0.0
------

Major:

- Drop support for FFmpeg < 4.0. (:issue:`559`)
- Introduce per-error exceptions, and mirror the builtin exception hierarchy. It is recommended to examine your error handling code, as common FFmpeg errors will result in `ValueError` baseclasses now. (:issue:`563`)
- Data stream's `encode` and `decode` return empty lists instead of ``None``, allowing common API use patterns with data streams.
- Remove ``whence`` parameter from :meth:`InputContainer.seek` as non-time seeking doesn't seem to actually be supported by any FFmpeg formats.

Minor:

- Users can disable the logging system to avoid lockups in sub-interpreters. (:issue:`545`)
- Filters support audio in general, and a new :meth:`.Graph.add_abuffer`. (:issue:`562`)
- :func:`av.open` supports `timeout` parameters. (:issue:`480` and :issue:`316`)
- Expose :attr:`Stream.base_rate` and :attr:`Stream.guessed_rate`. (:issue:`564`)
- :meth:`.VideoFrame.reformat` can specify interpolation.
- Expose many sets of flags.

Fixes:

- Fix typing in :meth:`.CodecContext.parse` and make it more robust.
- Fix wrong attribute in ByteSource. (:issue:`340`)
- Remove exception that would break audio remuxing. (:issue:`537`)
- Log messages include the last FFmpeg error log in a more helpful way.
- Use AVCodecParameters so FFmpeg doesn't complain. (:issue:`222`)

v6.2.0
------

Major:

- Allow :meth:`av.open` to be used as a context manager.
- Fix compatibility with PyPy, the full test suite now passes. (:issue:`130`)

Minor:

- Add :meth:`.InputContainer.close` method. (:issue:`317`, :issue:`456`)
- Ensure audio output gets flushed when using a FIFO. (:issue:`511`)
- Make Python I/O buffer size configurable. (:issue:`512`)
- Make :class:`.AudioFrame` and :class:`VideoFrame` more garbage-collector friendly by breaking a reference cycle. (:issue:`517`)

Build:

- Do not install the `scratchpad` package.

v6.1.2
------

Micro:

- Fix a numpy deprecation warning in :meth:`.AudioFrame.to_ndarray`.
v6.1.1
------

Micro:

- Fix alignment in :meth:`.VideoFrame.from_ndarray`. (:issue:`478`)
- Fix error message in :meth:`.Buffer.update`.

Build:

- Fix more compiler warnings.

v6.1.0
------

Minor:

- ``av.datasets`` for sample data that is pulled from either FFmpeg's FATE suite, or our documentation server.
- :meth:`.InputContainer.seek` gets a ``stream`` argument to specify the ``time_base`` the requested ``offset`` is in.

Micro:

- Avoid infinite loop in ``Stream.__getattr__``. (:issue:`450`)
- Correctly handle Python I/O with no ``seek`` method.
- Remove ``Datastream.seek`` override (:issue:`299`)

Build:

- Assert building against compatible FFmpeg. (:issue:`401`)
- Lock down Cython language level to avoid build warnings. (:issue:`443`)

Other:

- Incremental improvements to docs and tests.
- Examples directory will now always be runnable as-is, and embedded in the docs (in a copy-pastable form).

v6.0.0
------

Major:

- Drop support for FFmpeg < 3.2.
- Remove ``VideoFrame.to_qimage`` method, as it is too tied to PyQt4. (:issue:`424`)

Minor:

- Add support for all known sample formats in :meth:`.AudioFrame.to_ndarray` and add :meth:`.AudioFrame.from_ndarray`. (:issue:`422`)
- Add support for more image formats in :meth:`.VideoFrame.to_ndarray` and :meth:`.VideoFrame.from_ndarray`. (:issue:`415`)

Micro:

- Fix a memory leak in :meth:`.OutputContainer.mux_one`. (:issue:`431`)
- Ensure :meth:`.OutputContainer.close` is called at destruction. (:issue:`427`)
- Fix a memory leak in :class:`.OutputContainer` initialisation. (:issue:`427`)
- Make all video frames created by PyAV use 8-byte alignment. (:issue:`425`)
- Behave properly in :meth:`.VideoFrame.to_image` and :meth:`.VideoFrame.from_image` when ``width != line_width``. (:issue:`425`)
- Fix manipulations on video frames whose width does not match the line stride. (:issue:`423`)
- Fix several :attr:`.Plane.line_size` misunderstandings. (:issue:`421`)
- Consistently decode dictionary contents.
(:issue:`414`)
- Always use send/recv en/decoding mechanism. This removes the ``count`` parameter, which was not used in the send/recv pipeline. (:issue:`413`)
- Remove various deprecated iterators. (:issue:`412`)
- Fix a memory leak when using Python I/O. (:issue:`317`)
- Make :meth:`.OutputContainer.mux_one` call `av_interleaved_write_frame` with the GIL released.

Build:

- Remove the "reflection" mechanism, and rely on FFmpeg version we build against to decide which methods to call. (:issue:`416`)
- Fix many more ``const`` warnings.

v0.x.y
------

.. note:: Below here we used ``v0.x.y``. We incremented ``x`` to signal a major change (i.e. backwards incompatibilities) and incremented ``y`` as a minor change (i.e. backwards compatible features). Once we wanted more subtlety and felt we had matured enough, we jumped past the implications of ``v1.0.0`` straight to ``v6.0.0`` (as if we had not been stuck in ``v0.x.y`` all along).

v0.5.3
------

Minor:

- Expose :attr:`.VideoFrame.pict_type` as :class:`.PictureType` enum. (:pr:`402`)
- Expose :attr:`.Codec.video_rates` and :attr:`.Codec.audio_rates`. (:pr:`381`)

Patch:

- Fix :attr:`.Packet.time_base` handling during flush. (:pr:`398`)
- :meth:`.VideoFrame.reformat` can throw exceptions when requested colorspace transforms aren't possible.
- Wrapping the stream object used to overwrite the ``pix_fmt`` attribute. (:pr:`390`)

Runtime:

- Deprecate ``VideoFrame.ptr`` in favour of :attr:`VideoFrame.buffer_ptr`.
- Deprecate ``Plane.update_buffer()`` and ``Packet.update_buffer`` in favour of :meth:`.Plane.update`. (:pr:`407`)
- Deprecate ``Plane.update_from_string()`` in favour of :meth:`.Plane.update`. (:pr:`407`)
- Deprecate ``AudioFrame.to_nd_array()`` and ``VideoFrame.to_nd_array()`` in favour of :meth:`.AudioFrame.to_ndarray` and :meth:`.VideoFrame.to_ndarray`. (:pr:`404`)

Build:

- CI covers more cases, including macOS. (:pr:`373` and :pr:`399`)
- Fix many compilation warnings.
(:issue:`379`, :pr:`380`, :pr:`387`, and :pr:`388`)

Docs:

- Docstrings for many commonly used attributes. (:pr:`372` and :pr:`409`)

v0.5.2
------

Build:

- Fixed Windows build, which broke in v0.5.1.
- Compiler checks are not cached by default. This behaviour is retained if you ``source scripts/activate.sh`` to develop PyAV. (:issue:`256`)
- Changed to ``PYAV_SETUP_REFLECT_DEBUG=1`` from ``PYAV_DEBUG_BUILD=1``.

v0.5.1
------

Build:

- Set ``PYAV_DEBUG_BUILD=1`` to force a verbose reflection (mainly for being installed via ``pip``, which is why this is worth a release).

v0.5.0
------

Major:

- Dropped support for Libav in general. (:issue:`110`)
- No longer uses libavresample.

Minor:

- ``av.open`` has ``container_options`` and ``stream_options``.
- ``Frame`` includes ``pts`` in ``repr``.

Patch:

- EnumItem's hash calculation no longer overflows. (:issue:`339`, :issue:`341` and :issue:`342`.)
- Frame.time_base was not being set in most cases during decoding. (:issue:`364`)
- CodecContext.options no longer needs to be manually initialized.
- CodecContext.thread_type accepts its enums.

v0.4.1
------

Minor:

- Add `Frame.interlaced_frame` to indicate if the frame is interlaced. (:issue:`327` by :gh-user:`MPGek`)
- Add FLTP support to ``Frame.to_nd_array()``. (:issue:`288` by :gh-user:`rawler`)
- Expose ``CodecContext.extradata`` for codecs that have extra data, e.g. Huffman tables. (:issue:`287` by :gh-user:`adavoudi`)

Patch:

- Packets retain their refcount after muxing. (:issue:`334`)
- `Codec` construction is more robust to find more codecs. (:issue:`332` by :gh-user:`adavoudi`)
- Refined frame corruption detection. (:issue:`291` by :gh-user:`Litterfeldt`)
- Unicode filenames are okay. (:issue:`82`)

v0.4.0
------

Major:

- ``CodecContext`` has taken over encoding/decoding, and can work in isolation of streams/containers.
- ``Stream.encode`` returns a list of packets, instead of a single packet.
- ``AudioFifo`` and ``AudioResampler`` will raise ``ValueError`` if input frames have inconsistent ``pts``.
- ``time_base`` use has been revisited across the codebase, and may not be converted between ``Stream.time_base`` and ``CodecContext.time_base`` at the same times in the transcoding pipeline.
- ``CodecContext.rate`` has been removed, but proxied to ``VideoCodecContext.framerate`` and ``AudioCodecContext.sample_rate``. The definition is effectively inverted from the old one (i.e. for 24fps it used to be ``1/24`` and is now ``24/1``).
- Fractions (e.g. ``time_base``, ``rate``) will be ``None`` if they are invalid.
- ``InputContainer.seek`` and ``Stream.seek`` will raise TypeError if given a float, when previously they converted it from seconds.

Minor:

- Added ``Packet.is_keyframe`` and ``Packet.is_corrupt``. (:issue:`226`)
- Many more ``time_base``, ``pts`` and other attributes are writeable.
- ``Option`` exposes much more of the API (but not get/set). (:issue:`243`)
- Expose metadata encoding controls. (:issue:`250`)
- Expose ``CodecContext.skip_frame``. (:issue:`259`)

Patch:

- Build doesn't fail if you don't have git installed. (:issue:`184`)
- Developer environment works better with Python 3. (:issue:`248`)
- Fix Container deallocation resulting in segfaults. (:issue:`253`)

v0.3.3
------

Patch:

- Fix segfault due to buffer overflow in handling of stream options. (:issue:`163` and :issue:`169`)
- Fix segfault due to seek not properly checking if codecs were open before using avcodec_flush_buffers. (:issue:`201`)

v0.3.2
------

Minor:

- Expose basics of avfilter via ``Filter``.
- Add ``Packet.time_base``.
- Add ``AudioFrame.to_nd_array`` to match same on ``VideoFrame``.
- Update Windows build process.

Patch:

- Further improvements to the logging system.
(:issue:`128`)

v0.3.1
------

Minor:

- ``av.logging.set_log_after_shutdown`` renamed to ``set_print_after_shutdown``
- Repeating log messages will be skipped, much like ffmpeg's does by default

Patch:

- Fix memory leak in logging system when under heavy logging loads while threading. (:issue:`128` with help from :gh-user:`mkassner` and :gh-user:`ksze`)

v0.3.0
------

Major:

- Python IO can write
- Improve build system to use Python's C compiler for function detection; build system is much more robust
- MSVC support. (:issue:`115` by :gh-user:`vidartf`)
- Continuous integration on Windows via AppVeyor. (by :gh-user:`vidartf`)

Minor:

- Add ``Packet.decode_one()`` to skip packet flushing for codecs that would otherwise error
- ``StreamContainer`` for easier selection of streams
- Add buffer protocol support to Packet

Patch:

- Fix bug when using Python IO on files larger than 2GB. (:issue:`109` by :gh-user:`xxr3376`)
- Fix usage of changed Pillow API

Known Issues:

- VideoFrame is suspected to leak memory in narrow cases on Linux. (:issue:`128`)

v0.2.4
------

- fix library search path for current Libav/Ubuntu 14.04. (:issue:`97`)
- explicitly include all sources to combat 0.2.3 release problem. (:issue:`100`)

v0.2.3
------

.. warning:: There was an issue with the PyPI distribution in which it required Cython to be installed.

Major:

- Python IO.
- Aggressively releases GIL
- Add experimental Windows build. (:issue:`84`)

Minor:

- Several new Stream/Packet/Frame attributes

Patch:

- Fix segfault in audio handling. (:issue:`86` and :issue:`93`)
- Fix use of PIL/Pillow API. (:issue:`85`)
- Fix bad assumptions about plane counts. (:issue:`76`)

v0.2.2
------

- Cythonization in setup.py; mostly a development issue.
- Fix for av.InputContainer.size over 2**31.

v0.2.1
------

- Python 3 compatibility!
- Build process fails if missing libraries.
- Fix linking of libavdevices.

v0.2.0
------

..
.. warning:: This version has an issue linking in libavdevices, and very
   likely will not work for you.

It sure has been a long time since this was released, and there were a lot of
arbitrary changes that came with us wrapping an API as we were discovering it.
Changes include, but are not limited to:

- Audio encoding.
- Exposing planes and buffers.
- Descriptors for channel layouts, video and audio formats, etc.
- Seeking.
- Many, many more properties on all of the objects.
- Device support (e.g. webcams).


v0.1.0
------

- FIRST PUBLIC RELEASE!
- Container/video/audio formats.
- Audio layouts.
- Decoding video/audio/subtitles.
- Encoding video.
- Audio FIFOs and resampling.

PyAV-8.1.0/HACKING.rst

Hacking on PyAV
===============

The Goal
--------

The goal of PyAV is to not only wrap FFmpeg in Python and provide complete
access to the library for power users, but to make FFmpeg approachable without
the need to understand all of the underlying mechanics.

Names and Structure
-------------------

As much as reasonable, PyAV mirrors FFmpeg's structure and naming. Ideally,
searching for documentation for ``CodecContext.bit_rate`` leads to
``AVCodecContext.bit_rate`` as well.

We allow ourselves to depart from FFmpeg to make everything feel more
consistent, e.g.:

- we change a few names to make them more readable, by adding underscores,
  etc.;
- all of the audio classes are prefixed with ``Audio``, while some of the
  FFmpeg structs are prefixed with ``Sample`` (e.g. ``AudioFormat`` vs
  ``AVSampleFormat``).

We will also sometimes duplicate APIs in order to provide both a low-level and
high-level experience, e.g.:

- Object flags are usually exposed as a :class:`av.enum.EnumFlag` (with FFmpeg
  names) under a ``flags`` attribute, **and** each flag is also a boolean
  attribute (with more Pythonic names).
Version Compatibility
---------------------

We currently support FFmpeg 4.0 through 4.2, on Python 3.5 through 3.8, on
Linux, macOS, and Windows. We `continually test
<https://github.com/PyAV-Org/PyAV/actions>`_ these configurations.

Differences are handled at compile time, in C, by checking against
``LIBAV*_VERSION_INT`` macros. We have not been able to perform this sort of
checking in Cython as we have not been able to have it fully remove the
code-paths, and so there are missing functions in newer FFmpeg versions, and
deprecated ones that emit compiler warnings in older ones.

Unfortunately, this means that PyAV is built for the existing FFmpeg, and must
be rebuilt when FFmpeg is updated.

We used to do this detection in small ``*.pyav.h`` headers in the ``include``
directory (and there are still some there as of writing), but the preferred
method is to create ``*-shims.c`` files that are cimport-ed by the one module
that uses them.

You can use the same build system as continuous integration for local
development::

    # Prep the environment.
    source scripts/activate.sh

    # Build FFmpeg.
    ./scripts/build-deps

    # Build PyAV.
    make

    # Run the tests.
    make test

Code Formatting and Linting
---------------------------

``isort`` and ``flake8`` are integrated into the continuous integration, and
are required to pass for code to be merged into develop. You can run these via
``scripts/test``::

    ./scripts/test isort
    ./scripts/test flake8

PyAV-8.1.0/LICENSE.txt

Copyright retained by original committers. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the project nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. PyAV-8.1.0/MANIFEST.in000066400000000000000000000003241416312437500141020ustar00rootroot00000000000000include *.txt *.md recursive-include av *.pyx *.pxd recursive-include docs *.rst *.py recursive-include examples *.py recursive-include include *.pxd *.h recursive-include src/av *.c recursive-include tests *.py PyAV-8.1.0/Makefile000066400000000000000000000034331416312437500140100ustar00rootroot00000000000000LDFLAGS ?= "" CFLAGS ?= "-O0" PYAV_PYTHON ?= python PYTHON := $(PYAV_PYTHON) .PHONY: default build cythonize clean clean-all info lint test fate-suite test-assets docs default: build build: CFLAGS=$(CFLAGS) LDFLAGS=$(LDFLAGS) $(PYTHON) setup.py build_ext --inplace --debug cythonize: $(PYTHON) setup.py cythonize wheel: build-mingw32 $(PYTHON) setup.py bdist_wheel build-mingw32: # before running, set PKG_CONFIG_PATH to the pkgconfig dir of the ffmpeg build. 
# set PKG_CONFIG_PATH=D:\dev\3rd\media-autobuild_suite\local32\bin-video\ffmpegSHARED\lib\pkgconfig CFLAGS=$(CFLAGS) LDFLAGS=$(LDFLAGS) $(PYTHON) setup.py build_ext --inplace -c mingw32 mv *.pyd av fate-suite: # Grab ALL of the samples from the ffmpeg site. rsync -vrltLW rsync://fate-suite.ffmpeg.org/fate-suite/ tests/assets/fate-suite/ lint: TESTSUITE=flake8 scripts/test TESTSUITE=isort scripts/test test: $(PYTHON) setup.py test vagrant: vagrant box list | grep -q precise32 || vagrant box add precise32 http://files.vagrantup.com/precise32.box vtest: vagrant ssh -c /vagrant/scripts/vagrant-test tmp/ffmpeg-git: @ mkdir -p tmp/ffmpeg-git git clone --depth=1 git://source.ffmpeg.org/ffmpeg.git tmp/ffmpeg-git tmp/Doxyfile: tmp/ffmpeg-git cp tmp/ffmpeg-git/doc/Doxyfile $@ echo "GENERATE_TAGFILE = ../tagfile.xml" >> $@ tmp/tagfile.xml: tmp/Doxyfile cd tmp/ffmpeg-git; doxygen ../Doxyfile docs: tmp/tagfile.xml PYTHONPATH=.. make -C docs html deploy-docs: docs ./docs/upload docs clean-build: - rm -rf build - find av -name '*.so' -delete clean-sandbox: - rm -rf sandbox/201* - rm sandbox/last clean-src: - rm -rf src clean-docs: - rm tmp/Doxyfile - rm tmp/tagfile.xml - make -C docs clean clean: clean-build clean-sandbox clean-src clean-all: clean-build clean-sandbox clean-src clean-docs PyAV-8.1.0/README.md000066400000000000000000000062331416312437500136300ustar00rootroot00000000000000PyAV ==== [![GitHub Test Status][github-tests-badge]][github-tests] \ [![Gitter Chat][gitter-badge]][gitter] [![Documentation][docs-badge]][docs] \ [![GitHub][github-badge]][github] [![Python Package Index][pypi-badge]][pypi] [![Conda Forge][conda-badge]][conda] PyAV is a Pythonic binding for the [FFmpeg][ffmpeg] libraries. We aim to provide all of the power and control of the underlying library, but manage the gritty details as much as possible. PyAV is for direct and precise access to your media via containers, streams, packets, codecs, and frames. 
It exposes a few transformations of that data, and helps you get your data to/from other packages (e.g. Numpy and Pillow). This power does come with some responsibility as working with media is horrendously complicated and PyAV can't abstract it away or make all the best decisions for you. If the `ffmpeg` command does the job without you bending over backwards, PyAV is likely going to be more of a hindrance than a help. But where you can't work without it, PyAV is a critical tool. Installation ------------ Due to the complexity of the dependencies, PyAV is not always the easiest Python package to install from source. Since release 8.0.0 binary wheels are provided on [PyPI][pypi] for Linux, Mac and Windows linked against a modern FFmpeg. You can install these wheels by running: ``` pip install av ``` If you want to use your existing FFmpeg/Libav, the C-source version of PyAV is on [PyPI][pypi] too: ``` pip install av --no-binary av ``` Alternative installation methods -------------------------------- Another way of installing PyAV is via [conda-forge][conda-forge]: ``` conda install av -c conda-forge ``` See the [Conda install][conda-install] docs to get started with (mini)Conda. And if you want to build from the absolute source (for development or testing): ``` git clone git@github.com:PyAV-Org/PyAV cd PyAV source scripts/activate.sh # Either install the testing dependencies: pip install --upgrade -r tests/requirements.txt # or have it all, including FFmpeg, built/installed for you: ./scripts/build-deps # Build PyAV. make ``` --- Have fun, [read the docs][docs], [come chat with us][gitter], and good luck! 
[conda-badge]: https://img.shields.io/conda/vn/conda-forge/av.svg?colorB=CCB39A
[conda]: https://anaconda.org/conda-forge/av
[docs-badge]: https://img.shields.io/badge/docs-on%20pyav.org-blue.svg
[docs]: http://pyav.org/docs
[gitter-badge]: https://img.shields.io/gitter/room/nwjs/nw.js.svg?logo=gitter&colorB=cc2b5e
[gitter]: https://gitter.im/PyAV-Org
[pypi-badge]: https://img.shields.io/pypi/v/av.svg?colorB=CCB39A
[pypi]: https://pypi.org/project/av
[github-tests-badge]: https://github.com/PyAV-Org/PyAV/workflows/tests/badge.svg
[github-tests]: https://github.com/PyAV-Org/PyAV/actions?workflow=tests
[github-badge]: https://img.shields.io/badge/dynamic/xml.svg?label=github&url=https%3A%2F%2Fraw.githubusercontent.com%2FPyAV-Org%2FPyAV%2Fdevelop%2FVERSION.txt&query=.&colorB=CCB39A&prefix=v
[github]: https://github.com/PyAV-Org/PyAV
[ffmpeg]: http://ffmpeg.org/
[conda-forge]: https://conda-forge.github.io/
[conda-install]: https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html

PyAV-8.1.0/VERSION.txt

8.1.0

PyAV-8.1.0/av/__init__.py

# Add the native FFMPEG and MinGW libraries to executable path, so that the
# AV pyd files can find them.
import os
if os.name == 'nt':
    os.environ['PATH'] = os.path.abspath(os.path.dirname(__file__)) + os.pathsep + os.environ['PATH']

# MUST import the core before anything else in order to initialize the underlying
# library that is being wrapped.
from av._core import time_base, pyav_version as __version__, library_versions

# Capture logging (by importing it).
from av import logging

# For convenience, IMPORT ALL OF THE THINGS (that are constructable by the user).
from av.audio.fifo import AudioFifo from av.audio.format import AudioFormat from av.audio.frame import AudioFrame from av.audio.layout import AudioLayout from av.audio.resampler import AudioResampler from av.codec.codec import Codec, codecs_available from av.codec.context import CodecContext from av.container import open from av.format import ContainerFormat, formats_available from av.packet import Packet from av.error import * # noqa: F403; This is limited to exception types. from av.video.format import VideoFormat from av.video.frame import VideoFrame # Backwards compatibility AVError = FFmpegError # noqa: F405 PyAV-8.1.0/av/__main__.py000066400000000000000000000023301416312437500150430ustar00rootroot00000000000000import argparse def main(): parser = argparse.ArgumentParser() parser.add_argument('--codecs', action='store_true') parser.add_argument('--version', action='store_true') args = parser.parse_args() # --- if args.version: import av._core print('PyAV v' + av._core.pyav_version) print('git origin: git@github.com:PyAV-Org/PyAV') print('git commit:', av._core.pyav_commit) by_config = {} for libname, config in sorted(av._core.library_meta.items()): version = config['version'] if version[0] >= 0: by_config.setdefault( (config['configuration'], config['license']), [] ).append((libname, config)) for (config, license), libs in sorted(by_config.items()): print('library configuration:', config) print('library license:', license) for libname, config in libs: version = config['version'] print('%-13s %3d.%3d.%3d' % (libname, version[0], version[1], version[2])) if args.codecs: from av.codec.codec import dump_codecs dump_codecs() if __name__ == '__main__': main() PyAV-8.1.0/av/_core.pyx000066400000000000000000000033531416312437500146100ustar00rootroot00000000000000cimport libav as lib # Initialise libraries. lib.avformat_network_init() lib.avdevice_register_all() # Exports. 
time_base = lib.AV_TIME_BASE pyav_version = lib.PYAV_VERSION_STR pyav_commit = lib.PYAV_COMMIT_STR cdef decode_version(v): if v < 0: return (-1, -1, -1) cdef int major = (v >> 16) & 0xff cdef int minor = (v >> 8) & 0xff cdef int micro = (v) & 0xff return (major, minor, micro) library_meta = { 'libavutil': dict( version=decode_version(lib.avutil_version()), configuration=lib.avutil_configuration(), license=lib.avutil_license() ), 'libavcodec': dict( version=decode_version(lib.avcodec_version()), configuration=lib.avcodec_configuration(), license=lib.avcodec_license() ), 'libavformat': dict( version=decode_version(lib.avformat_version()), configuration=lib.avformat_configuration(), license=lib.avformat_license() ), 'libavdevice': dict( version=decode_version(lib.avdevice_version()), configuration=lib.avdevice_configuration(), license=lib.avdevice_license() ), 'libavfilter': dict( version=decode_version(lib.avfilter_version()), configuration=lib.avfilter_configuration(), license=lib.avfilter_license() ), 'libswscale': dict( version=decode_version(lib.swscale_version()), configuration=lib.swscale_configuration(), license=lib.swscale_license() ), 'libswresample': dict( version=decode_version(lib.swresample_version()), configuration=lib.swresample_configuration(), license=lib.swresample_license() ), } library_versions = {name: meta['version'] for name, meta in library_meta.items()} PyAV-8.1.0/av/audio/000077500000000000000000000000001416312437500140545ustar00rootroot00000000000000PyAV-8.1.0/av/audio/__init__.py000066400000000000000000000000761416312437500161700ustar00rootroot00000000000000from .frame import AudioFrame from .stream import AudioStream PyAV-8.1.0/av/audio/codeccontext.pxd000066400000000000000000000006151416312437500172550ustar00rootroot00000000000000 from av.audio.fifo cimport AudioFifo from av.audio.frame cimport AudioFrame from av.audio.resampler cimport AudioResampler from av.codec.context cimport CodecContext cdef class AudioCodecContext(CodecContext): 
# Hold onto the frames that we will decode until we have a full one. cdef AudioFrame next_frame # For encoding. cdef AudioResampler resampler cdef AudioFifo fifo PyAV-8.1.0/av/audio/codeccontext.pyx000066400000000000000000000100671416312437500173040ustar00rootroot00000000000000cimport libav as lib from av.audio.format cimport AudioFormat, get_audio_format from av.audio.frame cimport AudioFrame, alloc_audio_frame from av.audio.layout cimport AudioLayout, get_audio_layout from av.error cimport err_check from av.frame cimport Frame from av.packet cimport Packet cdef class AudioCodecContext(CodecContext): cdef _init(self, lib.AVCodecContext *ptr, const lib.AVCodec *codec): CodecContext._init(self, ptr, codec) # Sometimes there isn't a layout set, but there are a number of # channels. Assume it is the default layout. # TODO: Put this behind `not bare_metal`. # TODO: Do this more efficiently. if self.ptr.channels and not self.ptr.channel_layout: self.ptr.channel_layout = get_audio_layout(self.ptr.channels, 0).layout cdef _set_default_time_base(self): self.ptr.time_base.num = 1 self.ptr.time_base.den = self.ptr.sample_rate cdef _prepare_frames_for_encode(self, Frame input_frame): cdef AudioFrame frame = input_frame # Resample. A None frame will flush the resampler, and then the fifo (if used). # Note that the resampler will simply return an input frame if there is # no resampling to be done. The control flow was just a little easier this way. 
if not self.resampler: self.resampler = AudioResampler( self.format, self.layout, self.ptr.sample_rate ) frame = self.resampler.resample(frame) cdef bint is_flushing = input_frame is None cdef bint use_fifo = not (self.ptr.codec.capabilities & lib.AV_CODEC_CAP_VARIABLE_FRAME_SIZE) if use_fifo: if not self.fifo: self.fifo = AudioFifo() if frame is not None: self.fifo.write(frame) frames = self.fifo.read_many(self.ptr.frame_size, partial=is_flushing) if is_flushing: frames.append(None) else: frames = [frame] return frames cdef Frame _alloc_next_frame(self): return alloc_audio_frame() cdef _setup_decoded_frame(self, Frame frame, Packet packet): CodecContext._setup_decoded_frame(self, frame, packet) cdef AudioFrame aframe = frame aframe._init_user_attributes() property frame_size: """ Number of samples per channel in an audio frame. :type: int """ def __get__(self): return self.ptr.frame_size property sample_rate: """ Sample rate of the audio data, in samples per second. :type: int """ def __get__(self): return self.ptr.sample_rate def __set__(self, int value): self.ptr.sample_rate = value property rate: """Another name for :attr:`sample_rate`.""" def __get__(self): return self.sample_rate def __set__(self, value): self.sample_rate = value # TODO: Integrate into AudioLayout. property channels: def __get__(self): return self.ptr.channels def __set__(self, value): self.ptr.channels = value self.ptr.channel_layout = lib.av_get_default_channel_layout(value) property channel_layout: def __get__(self): return self.ptr.channel_layout property layout: """ The audio channel layout. :type: AudioLayout """ def __get__(self): return get_audio_layout(self.ptr.channels, self.ptr.channel_layout) def __set__(self, value): cdef AudioLayout layout = AudioLayout(value) self.ptr.channel_layout = layout.layout self.ptr.channels = layout.nb_channels property format: """ The audio sample format. 
:type: AudioFormat """ def __get__(self): return get_audio_format(self.ptr.sample_fmt) def __set__(self, value): cdef AudioFormat format = AudioFormat(value) self.ptr.sample_fmt = format.sample_fmt PyAV-8.1.0/av/audio/fifo.pxd000066400000000000000000000007151416312437500155170ustar00rootroot00000000000000from libc.stdint cimport int64_t, uint64_t cimport libav as lib from av.audio.frame cimport AudioFrame cdef class AudioFifo: cdef lib.AVAudioFifo *ptr cdef AudioFrame template cdef readonly uint64_t samples_written cdef readonly uint64_t samples_read cdef readonly double pts_per_sample cpdef write(self, AudioFrame frame) cpdef read(self, int samples=*, bint partial=*) cpdef read_many(self, int samples, bint partial=*) PyAV-8.1.0/av/audio/fifo.pyx000066400000000000000000000141341416312437500155440ustar00rootroot00000000000000from av.audio.format cimport get_audio_format from av.audio.frame cimport alloc_audio_frame from av.audio.layout cimport get_audio_layout from av.error cimport err_check cdef class AudioFifo: """A simple audio sample FIFO (First In First Out) buffer.""" def __repr__(self): return '' % ( self.__class__.__name__, self.samples, self.sample_rate, self.layout, self.format, id(self), ) def __dealloc__(self): if self.ptr: lib.av_audio_fifo_free(self.ptr) cpdef write(self, AudioFrame frame): """write(frame) Push a frame of samples into the queue. :param AudioFrame frame: The frame of samples to push. The FIFO will remember the attributes from the first frame, and use those to populate all output frames. If there is a :attr:`~.Frame.pts` and :attr:`~.Frame.time_base` and :attr:`~.AudioFrame.sample_rate`, then the FIFO will assert that the incoming timestamps are continuous. """ if frame is None: raise TypeError('AudioFifo must be given an AudioFrame.') if not frame.ptr.nb_samples: return if not self.ptr: # Hold onto a copy of the attributes of the first frame to populate # output frames with. 
self.template = alloc_audio_frame() self.template._copy_internal_attributes(frame) self.template._init_user_attributes() # Figure out our "time_base". if frame._time_base.num and frame.ptr.sample_rate: self.pts_per_sample = frame._time_base.den / float(frame._time_base.num) self.pts_per_sample /= frame.ptr.sample_rate else: self.pts_per_sample = 0 self.ptr = lib.av_audio_fifo_alloc( frame.ptr.format, len(frame.layout.channels), # TODO: Can we safely use frame.ptr.nb_channels? frame.ptr.nb_samples * 2, # Just a default number of samples; it will adjust. ) if not self.ptr: raise RuntimeError('Could not allocate AVAudioFifo.') # Make sure nothing changed. elif ( frame.ptr.format != self.template.ptr.format or frame.ptr.channel_layout != self.template.ptr.channel_layout or frame.ptr.sample_rate != self.template.ptr.sample_rate or (frame._time_base.num and self.template._time_base.num and ( frame._time_base.num != self.template._time_base.num or frame._time_base.den != self.template._time_base.den )) ): raise ValueError('Frame does not match AudioFifo parameters.') # Assert that the PTS are what we expect. cdef int64_t expected_pts if self.pts_per_sample and frame.ptr.pts != lib.AV_NOPTS_VALUE: expected_pts = (self.pts_per_sample * self.samples_written) if frame.ptr.pts != expected_pts: raise ValueError('Frame.pts (%d) != expected (%d); fix or set to None.' % (frame.ptr.pts, expected_pts)) err_check(lib.av_audio_fifo_write( self.ptr, frame.ptr.extended_data, frame.ptr.nb_samples, )) self.samples_written += frame.ptr.nb_samples cpdef read(self, int samples=0, bint partial=False): """read(samples=0, partial=False) Read samples from the queue. :param int samples: The number of samples to pull; 0 gets all. :param bool partial: Allow returning less than requested. :returns: New :class:`AudioFrame` or ``None`` (if empty). 
        If the incoming frames had a valid :attr:`~.Frame.time_base`,
        :attr:`~.AudioFrame.sample_rate` and :attr:`~.Frame.pts`, the returned
        frames will have accurate timing.

        """

        if not self.ptr:
            return

        cdef int buffered_samples = lib.av_audio_fifo_size(self.ptr)
        if buffered_samples < 1:
            return

        samples = samples or buffered_samples
        if buffered_samples < samples:
            if partial:
                samples = buffered_samples
            else:
                return

        cdef AudioFrame frame = alloc_audio_frame()
        frame._copy_internal_attributes(self.template)
        frame._init(
            <lib.AVSampleFormat>self.template.ptr.format,
            self.template.ptr.channel_layout,
            samples,
            1,  # Align?
        )

        err_check(lib.av_audio_fifo_read(
            self.ptr,
            <void **>frame.ptr.extended_data,
            samples,
        ))

        if self.pts_per_sample:
            frame.ptr.pts = <int64_t>(self.pts_per_sample * self.samples_read)
        else:
            frame.ptr.pts = lib.AV_NOPTS_VALUE

        self.samples_read += samples

        return frame

    cpdef read_many(self, int samples, bint partial=False):
        """read_many(samples, partial=False)

        Read as many frames as we can.

        :param int samples: How large for the frames to be.
        :param bool partial: If we should return a partial frame.
        :returns: A ``list`` of :class:`AudioFrame`.
""" cdef AudioFrame frame frames = [] while True: frame = self.read(samples, partial=partial) if frame is not None: frames.append(frame) else: break return frames property format: """The :class:`.AudioFormat` of this FIFO.""" def __get__(self): return self.template.format property layout: """The :class:`.AudioLayout` of this FIFO.""" def __get__(self): return self.template.layout property sample_rate: def __get__(self): return self.template.sample_rate property samples: """Number of audio samples (per channel) in the buffer.""" def __get__(self): return lib.av_audio_fifo_size(self.ptr) if self.ptr else 0 PyAV-8.1.0/av/audio/format.pxd000066400000000000000000000003231416312437500160570ustar00rootroot00000000000000cimport libav as lib cdef class AudioFormat(object): cdef lib.AVSampleFormat sample_fmt cdef _init(self, lib.AVSampleFormat sample_fmt) cdef AudioFormat get_audio_format(lib.AVSampleFormat format) PyAV-8.1.0/av/audio/format.pyx000066400000000000000000000073211416312437500161110ustar00rootroot00000000000000import sys cdef str container_format_postfix = 'le' if sys.byteorder == 'little' else 'be' cdef object _cinit_bypass_sentinel cdef AudioFormat get_audio_format(lib.AVSampleFormat c_format): """Get an AudioFormat without going through a string.""" cdef AudioFormat format = AudioFormat.__new__(AudioFormat, _cinit_bypass_sentinel) format._init(c_format) return format cdef class AudioFormat(object): """Descriptor of audio formats.""" def __cinit__(self, name): if name is _cinit_bypass_sentinel: return cdef lib.AVSampleFormat sample_fmt if isinstance(name, AudioFormat): sample_fmt = (name).sample_fmt else: sample_fmt = lib.av_get_sample_fmt(name) if sample_fmt < 0: raise ValueError('Not a sample format: %r' % name) self._init(sample_fmt) cdef _init(self, lib.AVSampleFormat sample_fmt): self.sample_fmt = sample_fmt def __repr__(self): return '' % (self.name) property name: """Canonical name of the sample format. 
        >>> SampleFormat('s16p').name
        's16p'

        """
        def __get__(self):
            return lib.av_get_sample_fmt_name(self.sample_fmt)

    property bytes:
        """Number of bytes per sample.

        >>> SampleFormat('s16p').bytes
        2

        """
        def __get__(self):
            return lib.av_get_bytes_per_sample(self.sample_fmt)

    property bits:
        """Number of bits per sample.

        >>> SampleFormat('s16p').bits
        16

        """
        def __get__(self):
            return lib.av_get_bytes_per_sample(self.sample_fmt) << 3

    property is_planar:
        """Is this a planar format?

        Strictly opposite of :attr:`is_packed`.

        """
        def __get__(self):
            return bool(lib.av_sample_fmt_is_planar(self.sample_fmt))

    property is_packed:
        """Is this a packed format?

        Strictly opposite of :attr:`is_planar`.

        """
        def __get__(self):
            return not lib.av_sample_fmt_is_planar(self.sample_fmt)

    property planar:
        """The planar variant of this format.

        Is itself when planar:

        >>> fmt = Format('s16p')
        >>> fmt.planar is fmt
        True

        """
        def __get__(self):
            if self.is_planar:
                return self
            return get_audio_format(lib.av_get_planar_sample_fmt(self.sample_fmt))

    property packed:
        """The packed variant of this format.

        Is itself when packed:

        >>> fmt = Format('s16')
        >>> fmt.packed is fmt
        True

        """
        def __get__(self):
            if self.is_packed:
                return self
            return get_audio_format(lib.av_get_packed_sample_fmt(self.sample_fmt))

    property container_name:
        """The name of a :class:`ContainerFormat` which directly accepts this data.

        :raises ValueError: when planar, since there are no such containers.
""" def __get__(self): if self.is_planar: raise ValueError('no planar container formats') if self.sample_fmt == lib.AV_SAMPLE_FMT_U8: return 'u8' elif self.sample_fmt == lib.AV_SAMPLE_FMT_S16: return 's16' + container_format_postfix elif self.sample_fmt == lib.AV_SAMPLE_FMT_S32: return 's32' + container_format_postfix elif self.sample_fmt == lib.AV_SAMPLE_FMT_FLT: return 'f32' + container_format_postfix elif self.sample_fmt == lib.AV_SAMPLE_FMT_DBL: return 'f64' + container_format_postfix raise ValueError('unknown layout') PyAV-8.1.0/av/audio/frame.pxd000066400000000000000000000013311416312437500156610ustar00rootroot00000000000000from libc.stdint cimport uint8_t, uint64_t cimport libav as lib from av.audio.format cimport AudioFormat from av.audio.layout cimport AudioLayout from av.frame cimport Frame cdef class AudioFrame(Frame): # For raw storage of the frame's data; don't ever touch this. cdef uint8_t *_buffer cdef size_t _buffer_size cdef readonly AudioLayout layout """ The audio channel layout. :type: AudioLayout """ cdef readonly AudioFormat format """ The audio sample format. :type: AudioFormat """ cdef _init(self, lib.AVSampleFormat format, uint64_t layout, unsigned int nb_samples, unsigned int align) cdef _init_user_attributes(self) cdef AudioFrame alloc_audio_frame() PyAV-8.1.0/av/audio/frame.pyx000066400000000000000000000131751416312437500157170ustar00rootroot00000000000000from av.audio.format cimport get_audio_format from av.audio.layout cimport get_audio_layout from av.audio.plane cimport AudioPlane from av.deprecation import renamed_attr from av.error cimport err_check cdef object _cinit_bypass_sentinel format_dtypes = { 'dbl': 'format self.ptr.channel_layout = layout # Sometimes this is called twice. Oh well. self._init_user_attributes() # Audio filters need AVFrame.channels to match number of channels from layout. self.ptr.channels = self.layout.nb_channels cdef size_t buffer_size if self.layout.channels and nb_samples: # Cleanup the old buffer. 
lib.av_freep(&self._buffer) # Get a new one. self._buffer_size = err_check(lib.av_samples_get_buffer_size( NULL, len(self.layout.channels), nb_samples, format, align )) self._buffer = lib.av_malloc(self._buffer_size) if not self._buffer: raise MemoryError("cannot allocate AudioFrame buffer") # Connect the data pointers to the buffer. err_check(lib.avcodec_fill_audio_frame( self.ptr, len(self.layout.channels), self.ptr.format, self._buffer, self._buffer_size, align )) def __dealloc__(self): lib.av_freep(&self._buffer) cdef _init_user_attributes(self): self.layout = get_audio_layout(0, self.ptr.channel_layout) self.format = get_audio_format(self.ptr.format) def __repr__(self): return '' % ( self.__class__.__name__, self.index, self.pts, self.samples, self.rate, self.layout.name, self.format.name, id(self), ) @staticmethod def from_ndarray(array, format='s16', layout='stereo'): """ Construct a frame from a numpy array. """ import numpy as np # map avcodec type to numpy type try: dtype = np.dtype(format_dtypes[format]) except KeyError: raise ValueError('Conversion from numpy array with format `%s` is not yet supported' % format) nb_channels = len(AudioLayout(layout).channels) assert array.dtype == dtype assert array.ndim == 2 if AudioFormat(format).is_planar: assert array.shape[0] == nb_channels samples = array.shape[1] else: assert array.shape[0] == 1 samples = array.shape[1] // nb_channels frame = AudioFrame(format=format, layout=layout, samples=samples) for i, plane in enumerate(frame.planes): plane.update(array[i, :]) return frame @property def planes(self): """ A tuple of :class:`~av.audio.plane.AudioPlane`. :type: tuple """ cdef int plane_count = 0 while self.ptr.extended_data[plane_count]: plane_count += 1 return tuple([AudioPlane(self, i) for i in range(plane_count)]) property samples: """ Number of audio samples (per channel). 
:type: int """ def __get__(self): return self.ptr.nb_samples property sample_rate: """ Sample rate of the audio data, in samples per second. :type: int """ def __get__(self): return self.ptr.sample_rate def __set__(self, value): self.ptr.sample_rate = value property rate: """Another name for :attr:`sample_rate`.""" def __get__(self): return self.ptr.sample_rate def __set__(self, value): self.ptr.sample_rate = value def to_ndarray(self, **kwargs): """Get a numpy array of this frame. .. note:: Numpy must be installed. """ import numpy as np # map avcodec type to numpy type try: dtype = np.dtype(format_dtypes[self.format.name]) except KeyError: raise ValueError("Conversion from {!r} format to numpy array is not supported.".format(self.format.name)) if self.format.is_planar: count = self.samples else: count = self.samples * len(self.layout.channels) # convert and return data return np.vstack([np.frombuffer(x, dtype=dtype, count=count) for x in self.planes]) to_nd_array = renamed_attr('to_ndarray') PyAV-8.1.0/av/audio/layout.pxd000066400000000000000000000007711416312437500161130ustar00rootroot00000000000000from libc.stdint cimport uint64_t cdef class AudioLayout(object): # The layout for FFMpeg; this is essentially a bitmask of channels. cdef uint64_t layout cdef int nb_channels cdef readonly tuple channels """ A tuple of :class:`AudioChannel` objects. :type: tuple """ cdef _init(self, uint64_t layout) cdef class AudioChannel(object): # The channel for FFmpeg. 
    cdef uint64_t channel

cdef AudioLayout get_audio_layout(int channels, uint64_t c_layout)

PyAV-8.1.0/av/audio/layout.pyx

cimport libav as lib

cdef object _cinit_bypass_sentinel


cdef AudioLayout get_audio_layout(int channels, uint64_t c_layout):
    """Get an AudioLayout from Cython land."""
    cdef AudioLayout layout = AudioLayout.__new__(AudioLayout, _cinit_bypass_sentinel)
    if channels and not c_layout:
        c_layout = default_layouts[channels]
    layout._init(c_layout)
    return layout


# These are the defaults given by FFmpeg; Libav is different.
# TODO: What about av_get_default_channel_layout(...)?
cdef uint64_t default_layouts[17]
default_layouts[0] = 0
default_layouts[1] = lib.AV_CH_LAYOUT_MONO
default_layouts[2] = lib.AV_CH_LAYOUT_STEREO
default_layouts[3] = lib.AV_CH_LAYOUT_2POINT1
default_layouts[4] = lib.AV_CH_LAYOUT_4POINT0
default_layouts[5] = lib.AV_CH_LAYOUT_5POINT0_BACK
default_layouts[6] = lib.AV_CH_LAYOUT_5POINT1_BACK
default_layouts[7] = lib.AV_CH_LAYOUT_6POINT1
default_layouts[8] = lib.AV_CH_LAYOUT_7POINT1
default_layouts[9] = 0x01FF
default_layouts[10] = 0x03FF
default_layouts[11] = 0x07FF
default_layouts[12] = 0x0FFF
default_layouts[13] = 0x1FFF
default_layouts[14] = 0x3FFF
default_layouts[15] = 0x7FFF
default_layouts[16] = 0xFFFF  # FFmpeg has one here.

# These are the descriptions as given by FFmpeg; Libav does not have them.
cdef dict channel_descriptions = { 'FL': 'front left', 'FR': 'front right', 'FC': 'front center', 'LFE': 'low frequency', 'BL': 'back left', 'BR': 'back right', 'FLC': 'front left-of-center', 'FRC': 'front right-of-center', 'BC': 'back center', 'SL': 'side left', 'SR': 'side right', 'TC': 'top center', 'TFL': 'top front left', 'TFC': 'top front center', 'TFR': 'top front right', 'TBL': 'top back left', 'TBC': 'top back center', 'TBR': 'top back right', 'DL': 'downmix left', 'DR': 'downmix right', 'WL': 'wide left', 'WR': 'wide right', 'SDL': 'surround direct left', 'SDR': 'surround direct right', 'LFE2': 'low frequency 2', } cdef class AudioLayout(object): def __init__(self, layout): if layout is _cinit_bypass_sentinel: return cdef uint64_t c_layout if isinstance(layout, int): if layout < 0 or layout > 8: raise ValueError('no layout with %d channels' % layout) c_layout = default_layouts[layout] elif isinstance(layout, str): c_layout = lib.av_get_channel_layout(layout) elif isinstance(layout, AudioLayout): c_layout = layout.layout else: raise TypeError('layout must be str or int') if not c_layout: raise ValueError('invalid channel layout %r' % layout) self._init(c_layout) cdef _init(self, uint64_t layout): self.layout = layout self.nb_channels = lib.av_get_channel_layout_nb_channels(layout) # This just counts bits. self.channels = tuple(AudioChannel(self, i) for i in range(self.nb_channels)) def __repr__(self): return '' % (self.__class__.__name__, self.name) property name: """The canonical name of the audio layout.""" def __get__(self): cdef char out[32] # Passing 0 as number of channels... fix this later? 
            lib.av_get_channel_layout_string(out, 32, 0, self.layout)
            return <str>out


cdef class AudioChannel(object):

    def __cinit__(self, AudioLayout layout, int index):
        self.channel = lib.av_channel_layout_extract_channel(layout.layout, index)

    def __repr__(self):
        return '<av.%s %r (%s)>' % (self.__class__.__name__, self.name, self.description)

    property name:
        """The canonical name of the audio channel."""
        def __get__(self):
            return lib.av_get_channel_name(self.channel)

    property description:
        """A human description of the audio channel."""
        def __get__(self):
            return channel_descriptions.get(self.name)

PyAV-8.1.0/av/audio/plane.pxd

from av.plane cimport Plane


cdef class AudioPlane(Plane):

    cdef readonly size_t buffer_size

    cdef size_t _buffer_size(self)

PyAV-8.1.0/av/audio/plane.pyx

cimport libav as lib

from av.audio.frame cimport AudioFrame


cdef class AudioPlane(Plane):

    def __cinit__(self, AudioFrame frame, int index):
        # Only the first linesize is ever populated, but it applies to every plane.
        self.buffer_size = self.frame.ptr.linesize[0]

    cdef size_t _buffer_size(self):
        return self.buffer_size

PyAV-8.1.0/av/audio/resampler.pxd

from libc.stdint cimport uint64_t

cimport libav as lib

from av.audio.format cimport AudioFormat
from av.audio.frame cimport AudioFrame
from av.audio.layout cimport AudioLayout


cdef class AudioResampler(object):

    cdef readonly bint is_passthrough
    cdef lib.SwrContext *ptr
    cdef AudioFrame template

    # Source descriptors; not for public consumption.
    cdef unsigned int template_rate

    # Destination descriptors
    cdef readonly AudioFormat format
    cdef readonly AudioLayout layout
    cdef readonly int rate

    # Retiming.
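As the comment in `AudioLayout._init` above notes, `av_get_channel_layout_nb_channels` "just counts bits": an FFmpeg channel layout is a bitmask with one bit per speaker position. A pure-Python sketch of that counting (the mask values are FFmpeg's documented constants; the helper name is made up for illustration):

```python
# FFmpeg channel masks, as defined in libavutil/channel_layout.h.
AV_CH_LAYOUT_MONO = 0x4            # FC
AV_CH_LAYOUT_STEREO = 0x3          # FL | FR
AV_CH_LAYOUT_5POINT1_BACK = 0x3F   # FL | FR | FC | LFE | BL | BR

def layout_nb_channels(layout_mask):
    """Population count of the mask, mirroring av_get_channel_layout_nb_channels()."""
    return bin(layout_mask).count("1")

assert layout_nb_channels(AV_CH_LAYOUT_MONO) == 1
assert layout_nb_channels(AV_CH_LAYOUT_STEREO) == 2
assert layout_nb_channels(AV_CH_LAYOUT_5POINT1_BACK) == 6
```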
    cdef readonly uint64_t samples_in
    cdef readonly double pts_per_sample_in
    cdef readonly uint64_t samples_out
    cdef readonly bint simple_pts_out

    cpdef resample(self, AudioFrame)

PyAV-8.1.0/av/audio/resampler.pyx

from libc.stdint cimport int64_t, uint8_t

cimport libav as lib

from av.audio.fifo cimport AudioFifo
from av.audio.format cimport get_audio_format
from av.audio.frame cimport alloc_audio_frame
from av.audio.layout cimport get_audio_layout
from av.error cimport err_check

from av.error import FFmpegError


cdef class AudioResampler(object):
    """AudioResampler(format=None, layout=None, rate=None)

    :param AudioFormat format: The target format, or string that parses to
        one (e.g. ``"s16"``).
    :param AudioLayout layout: The target layout, or an int/string that
        parses to one (e.g. ``"stereo"``).
    :param int rate: The target sample rate.
    """

    def __cinit__(self, format=None, layout=None, rate=None):

        if format is not None:
            self.format = format if isinstance(format, AudioFormat) else AudioFormat(format)
        if layout is not None:
            self.layout = layout if isinstance(layout, AudioLayout) else AudioLayout(layout)
        self.rate = int(rate) if rate else 0

    def __dealloc__(self):
        if self.ptr:
            lib.swr_close(self.ptr)
        lib.swr_free(&self.ptr)

    cpdef resample(self, AudioFrame frame):
        """resample(frame)

        Convert the ``sample_rate``, ``channel_layout`` and/or ``format`` of
        a :class:`~.AudioFrame`.

        :param AudioFrame frame: The frame to convert.

        :returns: A new :class:`AudioFrame` in new parameters, or the same
            frame if there is nothing to be done.
        :raises: ``ValueError`` if ``Frame.pts`` is set and non-simple.

        """

        if self.is_passthrough:
            return frame

        # Take source settings from the first frame.
        if not self.ptr:

            # We don't have any input, so don't bother even setting up.
            if not frame:
                return

            # Hold onto a copy of the attributes of the first frame to populate
            # output frames with.
self.template = alloc_audio_frame() self.template._copy_internal_attributes(frame) self.template._init_user_attributes() # Set some default descriptors. self.format = self.format or self.template.format self.layout = self.layout or self.template.layout self.rate = self.rate or self.template.ptr.sample_rate # Check if there is actually work to do. if ( self.template.format.sample_fmt == self.format.sample_fmt and self.template.layout.layout == self.layout.layout and self.template.ptr.sample_rate == self.rate ): self.is_passthrough = True return frame # Figure out our time bases. if frame._time_base.num and frame.ptr.sample_rate: self.pts_per_sample_in = frame._time_base.den / float(frame._time_base.num) self.pts_per_sample_in /= self.template.ptr.sample_rate # We will only provide outgoing PTS if the time_base is trivial. if frame._time_base.num == 1 and frame._time_base.den == frame.ptr.sample_rate: self.simple_pts_out = True self.ptr = lib.swr_alloc() if not self.ptr: raise RuntimeError('Could not allocate SwrContext.') # Configure it! try: err_check(lib.av_opt_set_int(self.ptr, 'in_sample_fmt', self.template.format.sample_fmt, 0)) err_check(lib.av_opt_set_int(self.ptr, 'out_sample_fmt', self.format.sample_fmt, 0)) err_check(lib.av_opt_set_int(self.ptr, 'in_channel_layout', self.template.layout.layout, 0)) err_check(lib.av_opt_set_int(self.ptr, 'out_channel_layout', self.layout.layout, 0)) err_check(lib.av_opt_set_int(self.ptr, 'in_sample_rate', self.template.ptr.sample_rate, 0)) err_check(lib.av_opt_set_int(self.ptr, 'out_sample_rate', self.rate, 0)) err_check(lib.swr_init(self.ptr)) except FFmpegError: self.ptr = NULL raise elif frame: # Assert the settings are the same on consecutive frames. 
if ( frame.ptr.format != self.template.format.sample_fmt or frame.ptr.channel_layout != self.template.layout.layout or frame.ptr.sample_rate != self.template.ptr.sample_rate ): raise ValueError('Frame does not match AudioResampler setup.') # Assert that the PTS are what we expect. cdef int64_t expected_pts if frame is not None and frame.ptr.pts != lib.AV_NOPTS_VALUE: expected_pts = (self.pts_per_sample_in * self.samples_in) if frame.ptr.pts != expected_pts: raise ValueError('Input frame pts %d != expected %d; fix or set to None.' % (frame.ptr.pts, expected_pts)) self.samples_in += frame.ptr.nb_samples # The example "loop" as given in the FFmpeg documentation looks like: # uint8_t **input; # int in_samples; # while (get_input(&input, &in_samples)) { # uint8_t *output; # int out_samples = av_rescale_rnd(swr_get_delay(swr, 48000) + # in_samples, 44100, 48000, AV_ROUND_UP); # av_samples_alloc(&output, NULL, 2, out_samples, # AV_SAMPLE_FMT_S16, 0); # out_samples = swr_convert(swr, &output, out_samples, # input, in_samples); # handle_output(output, out_samples); # av_freep(&output); # } # Estimate out how many samples this will create; it will be high. # My investigations say that this swr_get_delay is not required, but # it is in the example loop, and avresample (as opposed to swresample) # may require it. cdef int output_nb_samples = lib.av_rescale_rnd( lib.swr_get_delay(self.ptr, self.rate) + frame.ptr.nb_samples, self.rate, self.template.ptr.sample_rate, lib.AV_ROUND_UP, ) if frame else lib.swr_get_delay(self.ptr, self.rate) # There aren't any frames coming, so no new frame pops out. if not output_nb_samples: return cdef AudioFrame output = alloc_audio_frame() output._copy_internal_attributes(self.template) output.ptr.sample_rate = self.rate output._init( self.format.sample_fmt, self.layout.layout, output_nb_samples, 1, # Align? 
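The output-size estimate above follows FFmpeg's documented loop: `av_rescale_rnd(swr_get_delay(swr, out_rate) + in_samples, out_rate, in_rate, AV_ROUND_UP)`, i.e. rescale the pending input by the rate ratio and round up so the allocation is never too small. A pure-Python sketch of that arithmetic (helper names are illustrative, not FFmpeg API):

```python
def rescale_rnd_up(a, b, c):
    """Sketch of av_rescale_rnd(a, b, c, AV_ROUND_UP): a * b / c, rounded up."""
    return -((-a * b) // c)  # ceiling division for non-negative a * b

def estimate_out_samples(in_samples, in_rate, out_rate, delay=0):
    """Upper bound on samples produced by one swr_convert() call."""
    return rescale_rnd_up(delay + in_samples, out_rate, in_rate)

# 1024 samples at 48 kHz resampled to 44.1 kHz: at most 941 output samples.
assert estimate_out_samples(1024, 48000, 44100) == 941
```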
        )

        output.ptr.nb_samples = err_check(lib.swr_convert(
            self.ptr,
            output.ptr.extended_data,
            output_nb_samples,
            # Cast for const-ness, because Cython isn't expressive enough.
            <const uint8_t**>(frame.ptr.extended_data if frame else NULL),
            frame.ptr.nb_samples if frame else 0
        ))

        # Empty frame.
        if output.ptr.nb_samples <= 0:
            return

        # Create new PTSes in simple cases.
        if self.simple_pts_out:
            output._time_base.num = 1
            output._time_base.den = self.rate
            output.ptr.pts = self.samples_out
        else:
            output._time_base.num = 0
            output._time_base.den = 1
            output.ptr.pts = lib.AV_NOPTS_VALUE
        self.samples_out += output.ptr.nb_samples

        return output

PyAV-8.1.0/av/audio/stream.pxd

from av.stream cimport Stream


cdef class AudioStream(Stream):
    pass

PyAV-8.1.0/av/audio/stream.pyx


cdef class AudioStream(Stream):

    def __repr__(self):
        return '<av.%s #%d %s at %dHz, %s, %s at 0x%x>' % (
            self.__class__.__name__,
            self.index,
            self.name,
            self.rate,
            self.layout.name,
            self.format.name,
            id(self),
        )

PyAV-8.1.0/av/buffer.pxd


cdef class Buffer(object):

    cdef size_t _buffer_size(self)
    cdef void* _buffer_ptr(self)
    cdef bint _buffer_writable(self)

PyAV-8.1.0/av/buffer.pyx

from cpython cimport PyBUF_WRITABLE, PyBuffer_FillInfo
from libc.string cimport memcpy

from av.bytesource cimport ByteSource, bytesource


cdef class Buffer(object):
    """A base class for PyAV objects which support the buffer protocol, such
    as :class:`.Packet` and :class:`.Plane`.
""" cdef size_t _buffer_size(self): return 0 cdef void* _buffer_ptr(self): return NULL cdef bint _buffer_writable(self): return True def __getbuffer__(self, Py_buffer *view, int flags): if flags & PyBUF_WRITABLE and not self._buffer_writable(): raise ValueError('buffer is not writable') PyBuffer_FillInfo(view, self, self._buffer_ptr(), self._buffer_size(), 0, flags) @property def buffer_size(self): """The size of the buffer in bytes.""" return self._buffer_size() @property def buffer_ptr(self): """The memory address of the buffer.""" return self._buffer_ptr() def to_bytes(self): """Return the contents of this buffer as ``bytes``. This copies the entire contents; consider using something that uses the `buffer protocol `_ as that will be more efficient. This is largely for Python2, as Python 3 can do the same via ``bytes(the_buffer)``. """ return (self._buffer_ptr())[:self._buffer_size()] def update(self, input): """Replace the data in this object with the given buffer. Accepts anything that supports the `buffer protocol `_, e.g. bytes, Numpy arrays, other :class:`Buffer` objects, etc.. 
""" if not self._buffer_writable(): raise ValueError('buffer is not writable') cdef ByteSource source = bytesource(input) cdef size_t size = self._buffer_size() if source.length != size: raise ValueError('got %d bytes; need %d bytes' % (source.length, size)) memcpy(self._buffer_ptr(), source.ptr, size) PyAV-8.1.0/av/bytesource.pxd000066400000000000000000000003711416312437500156550ustar00rootroot00000000000000from cpython.buffer cimport Py_buffer cdef class ByteSource(object): cdef object owner cdef bint has_view cdef Py_buffer view cdef unsigned char *ptr cdef size_t length cdef ByteSource bytesource(object, bint allow_none=*) PyAV-8.1.0/av/bytesource.pyx000066400000000000000000000021171416312437500157020ustar00rootroot00000000000000from cpython.buffer cimport ( PyBUF_SIMPLE, PyBuffer_Release, PyObject_CheckBuffer, PyObject_GetBuffer ) cdef class ByteSource(object): def __cinit__(self, owner): self.owner = owner try: self.ptr = owner except TypeError: pass else: self.length = len(owner) return if PyObject_CheckBuffer(owner): # Can very likely use PyBUF_ND instead of PyBUF_SIMPLE res = PyObject_GetBuffer(owner, &self.view, PyBUF_SIMPLE) if not res: self.has_view = True self.ptr = self.view.buf self.length = self.view.len return raise TypeError('expected bytes, bytearray or memoryview') def __dealloc__(self): if self.has_view: PyBuffer_Release(&self.view) cdef ByteSource bytesource(obj, bint allow_none=False): if allow_none and obj is None: return elif isinstance(obj, ByteSource): return obj else: return ByteSource(obj) PyAV-8.1.0/av/codec/000077500000000000000000000000001416312437500140305ustar00rootroot00000000000000PyAV-8.1.0/av/codec/__init__.py000066400000000000000000000002211416312437500161340ustar00rootroot00000000000000from .codec import ( Capabilities, Codec, Properties, codec_descriptor, codecs_available ) from .context import CodecContext PyAV-8.1.0/av/codec/codec.pxd000066400000000000000000000003551416312437500156250ustar00rootroot00000000000000cimport 
libav as lib cdef class Codec(object): cdef const lib.AVCodec *ptr cdef const lib.AVCodecDescriptor *desc cdef readonly bint is_encoder cdef _init(self, name=?) cdef Codec wrap_codec(const lib.AVCodec *ptr) PyAV-8.1.0/av/codec/codec.pyx000066400000000000000000000346441416312437500156620ustar00rootroot00000000000000from av.audio.format cimport get_audio_format from av.descriptor cimport wrap_avclass from av.enum cimport define_enum from av.utils cimport avrational_to_fraction, flag_in_bitfield from av.video.format cimport get_video_format cdef object _cinit_sentinel = object() cdef Codec wrap_codec(const lib.AVCodec *ptr): cdef Codec codec = Codec(_cinit_sentinel) codec.ptr = ptr codec.is_encoder = lib.av_codec_is_encoder(ptr) codec._init() return codec Properties = define_enum('Properties', 'av.codec', ( ('NONE', 0), ('INTRA_ONLY', lib.AV_CODEC_PROP_INTRA_ONLY, """Codec uses only intra compression. Video and audio codecs only."""), ('LOSSY', lib.AV_CODEC_PROP_LOSSY, """Codec supports lossy compression. Audio and video codecs only. Note: A codec may support both lossy and lossless compression modes."""), ('LOSSLESS', lib.AV_CODEC_PROP_LOSSLESS, """Codec supports lossless compression. Audio and video codecs only."""), ('REORDER', lib.AV_CODEC_PROP_REORDER, """Codec supports frame reordering. That is, the coded order (the order in which the encoded packets are output by the encoders / stored / input to the decoders) may be different from the presentation order of the corresponding frames. For codecs that do not have this property set, PTS and DTS should always be equal."""), ('BITMAP_SUB', lib.AV_CODEC_PROP_BITMAP_SUB, """Subtitle codec is bitmap based Decoded AVSubtitle data can be read from the AVSubtitleRect->pict field."""), ('TEXT_SUB', lib.AV_CODEC_PROP_TEXT_SUB, """Subtitle codec is text based. 
Decoded AVSubtitle data can be read from the AVSubtitleRect->ass field."""), ), is_flags=True) Capabilities = define_enum('Capabilities', 'av.codec', ( ('NONE', 0), ('DRAW_HORIZ_BAND', lib.AV_CODEC_CAP_DRAW_HORIZ_BAND, """Decoder can use draw_horiz_band callback."""), ('DR1', lib.AV_CODEC_CAP_DR1, """Codec uses get_buffer() for allocating buffers and supports custom allocators. If not set, it might not use get_buffer() at all or use operations that assume the buffer was allocated by avcodec_default_get_buffer."""), ('TRUNCATED', lib.AV_CODEC_CAP_TRUNCATED), ('HWACCEL', 1 << 4), ('DELAY', lib.AV_CODEC_CAP_DELAY, """Encoder or decoder requires flushing with NULL input at the end in order to give the complete and correct output. NOTE: If this flag is not set, the codec is guaranteed to never be fed with with NULL data. The user can still send NULL data to the public encode or decode function, but libavcodec will not pass it along to the codec unless this flag is set. Decoders: The decoder has a non-zero delay and needs to be fed with avpkt->data=NULL, avpkt->size=0 at the end to get the delayed data until the decoder no longer returns frames. Encoders: The encoder needs to be fed with NULL data at the end of encoding until the encoder no longer returns data. NOTE: For encoders implementing the AVCodec.encode2() function, setting this flag also means that the encoder must set the pts and duration for each output packet. If this flag is not set, the pts and duration will be determined by libavcodec from the input frame."""), ('SMALL_LAST_FRAME', lib.AV_CODEC_CAP_SMALL_LAST_FRAME, """Codec can be fed a final frame with a smaller size. This can be used to prevent truncation of the last audio samples."""), ('HWACCEL_VDPAU', 1 << 7), ('SUBFRAMES', lib.AV_CODEC_CAP_SUBFRAMES, """Codec can output multiple frames per AVPacket Normally demuxers return one frame at a time, demuxers which do not do are connected to a parser to split what they return into proper frames. 
This flag is reserved to the very rare category of codecs which have a bitstream that cannot be split into frames without timeconsuming operations like full decoding. Demuxers carrying such bitstreams thus may return multiple frames in a packet. This has many disadvantages like prohibiting stream copy in many cases thus it should only be considered as a last resort."""), ('EXPERIMENTAL', lib.AV_CODEC_CAP_EXPERIMENTAL, """Codec is experimental and is thus avoided in favor of non experimental encoders"""), ('CHANNEL_CONF', lib.AV_CODEC_CAP_CHANNEL_CONF, """Codec should fill in channel configuration and samplerate instead of container"""), ('NEG_LINESIZES', 1 << 11), ('FRAME_THREADS', lib.AV_CODEC_CAP_FRAME_THREADS, """Codec supports frame-level multithreading""",), ('SLICE_THREADS', lib.AV_CODEC_CAP_SLICE_THREADS, """Codec supports slice-based (or partition-based) multithreading."""), ('PARAM_CHANGE', lib.AV_CODEC_CAP_PARAM_CHANGE, """Codec supports changed parameters at any point."""), ('AUTO_THREADS', lib.AV_CODEC_CAP_AUTO_THREADS, """Codec supports avctx->thread_count == 0 (auto)."""), ('VARIABLE_FRAME_SIZE', lib.AV_CODEC_CAP_VARIABLE_FRAME_SIZE, """Audio encoder supports receiving a different number of samples in each call."""), ('AVOID_PROBING', lib.AV_CODEC_CAP_AVOID_PROBING, """Decoder is not a preferred choice for probing. This indicates that the decoder is not a good choice for probing. It could for example be an expensive to spin up hardware decoder, or it could simply not provide a lot of useful information about the stream. A decoder marked with this flag should only be used as last resort choice for probing."""), ('INTRA_ONLY', lib.AV_CODEC_CAP_INTRA_ONLY, """Codec is intra only."""), ('LOSSLESS', lib.AV_CODEC_CAP_LOSSLESS, """Codec is lossless."""), ('HARDWARE', lib.AV_CODEC_CAP_HARDWARE, """Codec is backed by a hardware implementation. Typically used to identify a non-hwaccel hardware decoder. 
For information about hwaccels, use avcodec_get_hw_config() instead."""), ('HYBRID', lib.AV_CODEC_CAP_HYBRID, """Codec is potentially backed by a hardware implementation, but not necessarily. This is used instead of AV_CODEC_CAP_HARDWARE, if the implementation provides some sort of internal fallback."""), ('ENCODER_REORDERED_OPAQUE', 1 << 20, # lib.AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE, # FFmpeg 4.2 """This codec takes the reordered_opaque field from input AVFrames and returns it in the corresponding field in AVCodecContext after encoding."""), ), is_flags=True) class UnknownCodecError(ValueError): pass cdef class Codec(object): """Codec(name, mode='r') :param str name: The codec name. :param str mode: ``'r'`` for decoding or ``'w'`` for encoding. This object exposes information about an available codec, and an avenue to create a :class:`.CodecContext` to encode/decode directly. :: >>> codec = Codec('mpeg4', 'r') >>> codec.name 'mpeg4' >>> codec.type 'video' >>> codec.is_encoder False """ def __cinit__(self, name, mode='r'): if name is _cinit_sentinel: return if mode == 'w': self.ptr = lib.avcodec_find_encoder_by_name(name) if not self.ptr: self.desc = lib.avcodec_descriptor_get_by_name(name) if self.desc: self.ptr = lib.avcodec_find_encoder(self.desc.id) elif mode == 'r': self.ptr = lib.avcodec_find_decoder_by_name(name) if not self.ptr: self.desc = lib.avcodec_descriptor_get_by_name(name) if self.desc: self.ptr = lib.avcodec_find_decoder(self.desc.id) else: raise ValueError('Invalid mode; must be "r" or "w".', mode) self._init(name) # Sanity check. if (mode == 'w') != self.is_encoder: raise RuntimeError("Found codec does not match mode.", name, mode) cdef _init(self, name=None): if not self.ptr: raise UnknownCodecError(name) if not self.desc: self.desc = lib.avcodec_descriptor_get(self.ptr.id) if not self.desc: raise RuntimeError('No codec descriptor for %r.' % name) self.is_encoder = lib.av_codec_is_encoder(self.ptr) # Sanity check. 
if self.is_encoder and lib.av_codec_is_decoder(self.ptr): raise RuntimeError('%s is both encoder and decoder.') def create(self): """Create a :class:`.CodecContext` for this codec.""" from .context import CodecContext return CodecContext.create(self) property is_decoder: def __get__(self): return not self.is_encoder property descriptor: def __get__(self): return wrap_avclass(self.ptr.priv_class) property name: def __get__(self): return self.ptr.name or '' property long_name: def __get__(self): return self.ptr.long_name or '' @property def type(self): """ The media type of this codec. E.g: ``'audio'``, ``'video'``, ``'subtitle'``. """ return lib.av_get_media_type_string(self.ptr.type) property id: def __get__(self): return self.ptr.id @property def frame_rates(self): """A list of supported frame rates (:class:`fractions.Fraction`), or ``None``.""" if not self.ptr.supported_framerates: return ret = [] cdef int i = 0 while self.ptr.supported_framerates[i].denum: ret.append(avrational_to_fraction(&self.ptr.supported_framerates[i])) i += 1 return ret @property def audio_rates(self): """A list of supported audio sample rates (``int``), or ``None``.""" if not self.ptr.supported_samplerates: return ret = [] cdef int i = 0 while self.ptr.supported_samplerates[i]: ret.append(self.ptr.supported_samplerates[i]) i += 1 return ret @property def video_formats(self): """A list of supported :class:`.VideoFormat`, or ``None``.""" if not self.ptr.pix_fmts: return ret = [] cdef int i = 0 while self.ptr.pix_fmts[i] != -1: ret.append(get_video_format(self.ptr.pix_fmts[i], 0, 0)) i += 1 return ret @property def audio_formats(self): """A list of supported :class:`.AudioFormat`, or ``None``.""" if not self.ptr.sample_fmts: return ret = [] cdef int i = 0 while self.ptr.sample_fmts[i] != -1: ret.append(get_audio_format(self.ptr.sample_fmts[i])) i += 1 return ret # NOTE: there are some overlaps, which we defer to how `ffmpeg -codecs` # handles them (by prefering the capablity to the 
property). # Also, LOSSLESS and LOSSY don't have to agree. @Properties.property def properties(self): """Flag property of :class:`.Properties`""" return self.desc.props intra_only = properties.flag_property('INTRA_ONLY') lossy = properties.flag_property('LOSSY') # Defer to capability. lossless = properties.flag_property('LOSSLESS') # Defer to capability. reorder = properties.flag_property('REORDER') bitmap_sub = properties.flag_property('BITMAP_SUB') text_sub = properties.flag_property('TEXT_SUB') @Capabilities.property def capabilities(self): """Flag property of :class:`.Capabilities`""" return self.ptr.capabilities draw_horiz_band = capabilities.flag_property('DRAW_HORIZ_BAND') dr1 = capabilities.flag_property('DR1') truncated = capabilities.flag_property('TRUNCATED') hwaccel = capabilities.flag_property('HWACCEL') delay = capabilities.flag_property('DELAY') small_last_frame = capabilities.flag_property('SMALL_LAST_FRAME') hwaccel_vdpau = capabilities.flag_property('HWACCEL_VDPAU') subframes = capabilities.flag_property('SUBFRAMES') experimental = capabilities.flag_property('EXPERIMENTAL') channel_conf = capabilities.flag_property('CHANNEL_CONF') neg_linesizes = capabilities.flag_property('NEG_LINESIZES') frame_threads = capabilities.flag_property('FRAME_THREADS') slice_threads = capabilities.flag_property('SLICE_THREADS') param_change = capabilities.flag_property('PARAM_CHANGE') auto_threads = capabilities.flag_property('AUTO_THREADS') variable_frame_size = capabilities.flag_property('VARIABLE_FRAME_SIZE') avoid_probing = capabilities.flag_property('AVOID_PROBING') # intra_only = capabilities.flag_property('INTRA_ONLY') # Dupes. # lossless = capabilities.flag_property('LOSSLESS') # Dupes. 
hardware = capabilities.flag_property('HARDWARE') hybrid = capabilities.flag_property('HYBRID') encoder_reordered_opaque = capabilities.flag_property('ENCODER_REORDERED_OPAQUE') cdef get_codec_names(): names = set() cdef const lib.AVCodec *ptr cdef void *opaque = NULL while True: ptr = lib.av_codec_iterate(&opaque) if ptr: names.add(ptr.name) else: break return names codecs_available = get_codec_names() codec_descriptor = wrap_avclass(lib.avcodec_get_class()) def dump_codecs(): """Print information about available codecs.""" print '''Codecs: D..... = Decoding supported .E.... = Encoding supported ..V... = Video codec ..A... = Audio codec ..S... = Subtitle codec ...I.. = Intra frame-only codec ....L. = Lossy compression .....S = Lossless compression ------''' for name in sorted(codecs_available): try: e_codec = Codec(name, 'w') except ValueError: e_codec = None try: d_codec = Codec(name, 'r') except ValueError: d_codec = None # TODO: Assert these always have the same properties. codec = e_codec or d_codec try: print ' %s%s%s%s%s%s %-18s %s' % ( '.D'[bool(d_codec)], '.E'[bool(e_codec)], codec.type[0].upper(), '.I'[codec.intra_only], '.L'[codec.lossy], '.S'[codec.lossless], codec.name, codec.long_name ) except Exception as e: print '...... %-18s ERROR: %s' % (codec.name, e) PyAV-8.1.0/av/codec/context.pxd000066400000000000000000000044301416312437500162320ustar00rootroot00000000000000from libc.stdint cimport int64_t cimport libav as lib from av.bytesource cimport ByteSource from av.codec.codec cimport Codec from av.frame cimport Frame from av.packet cimport Packet cdef class CodecContext(object): cdef lib.AVCodecContext *ptr # Whether the AVCodecContext should be de-allocated upon destruction. cdef bint allocated # Whether AVCodecContext.extradata should be de-allocated upon destruction. cdef bint extradata_set # Used as a signal that this is within a stream, and also for us to access # that stream. This is set "manually" by the stream after constructing # this object. 
cdef int stream_index cdef lib.AVCodecParserContext *parser cdef _init(self, lib.AVCodecContext *ptr, const lib.AVCodec *codec) cdef readonly Codec codec cdef public dict options # Public API. cpdef open(self, bint strict=?) cpdef close(self, bint strict=?) cdef _set_default_time_base(self) # Wraps both versions of the transcode API, returning lists. cpdef encode(self, Frame frame=?) cpdef decode(self, Packet packet=?) # Used by both transcode APIs to setup user-land objects. # TODO: Remove the `Packet` from `_setup_decoded_frame` (because flushing # packets are bogus). It should take all info it needs from the context and/or stream. cdef _prepare_frames_for_encode(self, Frame frame) cdef _setup_encoded_packet(self, Packet) cdef _setup_decoded_frame(self, Frame, Packet) # Implemented by base for the generic send/recv API. # Note that the user cannot send without recieving. This is because # _prepare_frames_for_encode may expand a frame into multiple (e.g. when # resampling audio to a higher rate but with fixed size frames), and the # send/recv buffer may be limited to a single frame. Ergo, we need to flush # the buffer as often as possible. cdef _send_frame_and_recv(self, Frame frame) cdef _recv_packet(self) cdef _send_packet_and_recv(self, Packet packet) cdef _recv_frame(self) # Implemented by children for the generic send/recv API, so we have the # correct subclass of Frame. 
cdef Frame _next_frame cdef Frame _alloc_next_frame(self) cdef CodecContext wrap_codec_context(lib.AVCodecContext*, const lib.AVCodec*, bint allocated) PyAV-8.1.0/av/codec/context.pyx000066400000000000000000000525221416312437500162640ustar00rootroot00000000000000from libc.errno cimport EAGAIN from libc.stdint cimport int64_t, uint8_t from libc.string cimport memcpy cimport libav as lib from av.bytesource cimport ByteSource, bytesource from av.codec.codec cimport Codec, wrap_codec from av.dictionary cimport _Dictionary from av.enum cimport define_enum from av.error cimport err_check from av.packet cimport Packet from av.utils cimport avrational_to_fraction, to_avrational from av.dictionary import Dictionary cdef object _cinit_sentinel = object() cdef CodecContext wrap_codec_context(lib.AVCodecContext *c_ctx, const lib.AVCodec *c_codec, bint allocated): """Build an av.CodecContext for an existing AVCodecContext.""" cdef CodecContext py_ctx # TODO: This. if c_ctx.codec_type == lib.AVMEDIA_TYPE_VIDEO: from av.video.codeccontext import VideoCodecContext py_ctx = VideoCodecContext(_cinit_sentinel) elif c_ctx.codec_type == lib.AVMEDIA_TYPE_AUDIO: from av.audio.codeccontext import AudioCodecContext py_ctx = AudioCodecContext(_cinit_sentinel) elif c_ctx.codec_type == lib.AVMEDIA_TYPE_SUBTITLE: from av.subtitles.codeccontext import SubtitleCodecContext py_ctx = SubtitleCodecContext(_cinit_sentinel) else: py_ctx = CodecContext(_cinit_sentinel) py_ctx.allocated = allocated py_ctx._init(c_ctx, c_codec) return py_ctx ThreadType = define_enum('ThreadType', __name__, ( ('NONE', 0), ('FRAME', lib.FF_THREAD_FRAME, """Decode more than one frame at once"""), ('SLICE', lib.FF_THREAD_SLICE, """Decode more than one part of a single frame at once"""), ('AUTO', lib.FF_THREAD_SLICE | lib.FF_THREAD_FRAME, """Either method."""), ), is_flags=True) SkipType = define_enum('SkipType', __name__, ( ('NONE', lib.AVDISCARD_NONE, """Discard nothing"""), ('DEFAULT', lib.AVDISCARD_DEFAULT, """Discard 
useless packets like 0 size packets in AVI"""), ('NONREF', lib.AVDISCARD_NONREF, """Discard all non reference"""), ('BIDIR', lib.AVDISCARD_BIDIR, """Discard all bidirectional frames"""), ('NONINTRA', lib.AVDISCARD_NONINTRA, """Discard all non intra frames"""), ('NONKEY', lib.AVDISCARD_NONKEY, """Discard all frames except keyframes"""), ('ALL', lib.AVDISCARD_ALL, """Discard all"""), )) Flags = define_enum('Flags', __name__, ( ('NONE', 0), ('UNALIGNED', lib.AV_CODEC_FLAG_UNALIGNED, """Allow decoders to produce frames with data planes that are not aligned to CPU requirements (e.g. due to cropping)."""), ('QSCALE', lib.AV_CODEC_FLAG_QSCALE, """Use fixed qscale."""), ('4MV', lib.AV_CODEC_FLAG_4MV, """4 MV per MB allowed / advanced prediction for H.263."""), ('OUTPUT_CORRUPT', lib.AV_CODEC_FLAG_OUTPUT_CORRUPT, """Output even those frames that might be corrupted."""), ('QPEL', lib.AV_CODEC_FLAG_QPEL, """Use qpel MC."""), ('DROPCHANGED', 1 << 5, """Don't output frames whose parameters differ from first decoded frame in stream."""), ('PASS1', lib.AV_CODEC_FLAG_PASS1, """Use internal 2pass ratecontrol in first pass mode."""), ('PASS2', lib.AV_CODEC_FLAG_PASS2, """Use internal 2pass ratecontrol in second pass mode."""), ('LOOP_FILTER', lib.AV_CODEC_FLAG_LOOP_FILTER, """loop filter."""), ('GRAY', lib.AV_CODEC_FLAG_GRAY, """Only decode/encode grayscale."""), ('PSNR', lib.AV_CODEC_FLAG_PSNR, """error[?] 
variables will be set during encoding."""), ('TRUNCATED', lib.AV_CODEC_FLAG_TRUNCATED, """Input bitstream might be truncated at a random location instead of only at frame boundaries."""), ('INTERLACED_DCT', lib.AV_CODEC_FLAG_INTERLACED_DCT, """Use interlaced DCT."""), ('LOW_DELAY', lib.AV_CODEC_FLAG_LOW_DELAY, """Force low delay."""), ('GLOBAL_HEADER', lib.AV_CODEC_FLAG_GLOBAL_HEADER, """Place global headers in extradata instead of every keyframe."""), ('BITEXACT', lib.AV_CODEC_FLAG_BITEXACT, """Use only bitexact stuff (except (I)DCT)."""), ('AC_PRED', lib.AV_CODEC_FLAG_AC_PRED, """H.263 advanced intra coding / MPEG-4 AC prediction"""), ('INTERLACED_ME', lib.AV_CODEC_FLAG_INTERLACED_ME, """Interlaced motion estimation"""), ('CLOSED_GOP', lib.AV_CODEC_FLAG_CLOSED_GOP), ), is_flags=True) Flags2 = define_enum('Flags2', __name__, ( ('NONE', 0), ('FAST', lib.AV_CODEC_FLAG2_FAST, """Allow non spec compliant speedup tricks."""), ('NO_OUTPUT', lib.AV_CODEC_FLAG2_NO_OUTPUT, """Skip bitstream encoding."""), ('LOCAL_HEADER', lib.AV_CODEC_FLAG2_LOCAL_HEADER, """Place global headers at every keyframe instead of in extradata."""), ('DROP_FRAME_TIMECODE', lib.AV_CODEC_FLAG2_DROP_FRAME_TIMECODE, """Timecode is in drop frame format. 
DEPRECATED!!!!"""), ('CHUNKS', lib.AV_CODEC_FLAG2_CHUNKS, """Input bitstream might be truncated at a packet boundaries instead of only at frame boundaries."""), ('IGNORE_CROP', lib.AV_CODEC_FLAG2_IGNORE_CROP, """Discard cropping information from SPS."""), ('SHOW_ALL', lib.AV_CODEC_FLAG2_SHOW_ALL, """Show all frames before the first keyframe"""), ('EXPORT_MVS', lib.AV_CODEC_FLAG2_EXPORT_MVS, """Export motion vectors through frame side data"""), ('SKIP_MANUAL', lib.AV_CODEC_FLAG2_SKIP_MANUAL, """Do not skip samples and export skip information as frame side data"""), ('RO_FLUSH_NOOP', lib.AV_CODEC_FLAG2_RO_FLUSH_NOOP, """Do not reset ASS ReadOrder field on flush (subtitles decoding)"""), ), is_flags=True) cdef class CodecContext(object): @staticmethod def create(codec, mode=None): cdef Codec cy_codec = codec if isinstance(codec, Codec) else Codec(codec, mode) cdef lib.AVCodecContext *c_ctx = lib.avcodec_alloc_context3(cy_codec.ptr) return wrap_codec_context(c_ctx, cy_codec.ptr, True) def __cinit__(self, sentinel=None, *args, **kwargs): if sentinel is not _cinit_sentinel: raise RuntimeError('Cannot instantiate CodecContext') self.options = {} self.stream_index = -1 # This is set by the container immediately. cdef _init(self, lib.AVCodecContext *ptr, const lib.AVCodec *codec): self.ptr = ptr if self.ptr.codec and codec and self.ptr.codec != codec: raise RuntimeError('Wrapping CodecContext with mismatched codec.') self.codec = wrap_codec(codec if codec != NULL else self.ptr.codec) # Set reasonable threading defaults. # count == 0 -> use as many threads as there are CPUs. # type == 2 -> thread within a frame. This does not change the API. 
self.ptr.thread_count = 0 self.ptr.thread_type = 2 def _get_flags(self): return self.ptr.flags def _set_flags(self, value): self.ptr.flags = value flags = Flags.property( _get_flags, _set_flags, """Flag property of :class:`.Flags`.""" ) unaligned = flags.flag_property('UNALIGNED') qscale = flags.flag_property('QSCALE') four_mv = flags.flag_property('4MV') output_corrupt = flags.flag_property('OUTPUT_CORRUPT') qpel = flags.flag_property('QPEL') drop_changed = flags.flag_property('DROPCHANGED') pass1 = flags.flag_property('PASS1') pass2 = flags.flag_property('PASS2') loop_filter = flags.flag_property('LOOP_FILTER') gray = flags.flag_property('GRAY') psnr = flags.flag_property('PSNR') truncated = flags.flag_property('TRUNCATED') interlaced_dct = flags.flag_property('INTERLACED_DCT') low_delay = flags.flag_property('LOW_DELAY') global_header = flags.flag_property('GLOBAL_HEADER') bitexact = flags.flag_property('BITEXACT') ac_pred = flags.flag_property('AC_PRED') interlaced_me = flags.flag_property('INTERLACED_ME') closed_gop = flags.flag_property('CLOSED_GOP') def _get_flags2(self): return self.ptr.flags2 def _set_flags2(self, value): self.ptr.flags2 = value flags2 = Flags2.property( _get_flags2, _set_flags2, """Flag property of :class:`.Flags2`.""" ) fast = flags2.flag_property('FAST') no_output = flags2.flag_property('NO_OUTPUT') local_header = flags2.flag_property('LOCAL_HEADER') drop_frame_timecode = flags2.flag_property('DROP_FRAME_TIMECODE') chunks = flags2.flag_property('CHUNKS') ignore_crop = flags2.flag_property('IGNORE_CROP') show_all = flags2.flag_property('SHOW_ALL') export_mvs = flags2.flag_property('EXPORT_MVS') skip_manual = flags2.flag_property('SKIP_MANUAL') ro_flush_noop = flags2.flag_property('RO_FLUSH_NOOP') property extradata: def __get__(self): if self.ptr.extradata_size > 0: return (self.ptr.extradata)[:self.ptr.extradata_size] else: return None def __set__(self, data): if not self.is_decoder: raise ValueError("Can only set extradata for 
decoders.") if data is None: lib.av_freep(&self.ptr.extradata) self.ptr.extradata_size = 0 else: source = bytesource(data) self.ptr.extradata = lib.av_realloc(self.ptr.extradata, source.length + lib.AV_INPUT_BUFFER_PADDING_SIZE) if not self.ptr.extradata: raise MemoryError("Cannot allocate extradata") memcpy(self.ptr.extradata, source.ptr, source.length) self.ptr.extradata_size = source.length self.extradata_set = True property extradata_size: def __get__(self): return self.ptr.extradata_size property is_open: def __get__(self): return lib.avcodec_is_open(self.ptr) property is_encoder: def __get__(self): return lib.av_codec_is_encoder(self.ptr.codec) property is_decoder: def __get__(self): return lib.av_codec_is_decoder(self.ptr.codec) cpdef open(self, bint strict=True): if lib.avcodec_is_open(self.ptr): if strict: raise ValueError('CodecContext is already open.') return # We might pass partial frames. # TODO: What is this for?! This is causing problems with raw decoding # as the internal parser doesn't seem to see a frame until it sees # the next one. # if self.codec.ptr.capabilities & lib.CODEC_CAP_TRUNCATED: # self.ptr.flags |= lib.CODEC_FLAG_TRUNCATED # TODO: Do this better. cdef _Dictionary options = Dictionary() options.update(self.options or {}) # Assert we have a time_base. 
        if not self.ptr.time_base.num:
            self._set_default_time_base()

        err_check(lib.avcodec_open2(self.ptr, self.codec.ptr, &options.ptr))
        self.options = dict(options)

    cdef _set_default_time_base(self):
        self.ptr.time_base.num = 1
        self.ptr.time_base.den = lib.AV_TIME_BASE

    cpdef close(self, bint strict=True):
        if not lib.avcodec_is_open(self.ptr):
            if strict:
                raise ValueError('CodecContext is already closed.')
            return
        err_check(lib.avcodec_close(self.ptr))

    def __dealloc__(self):
        if self.ptr and self.extradata_set:
            lib.av_freep(&self.ptr.extradata)
        if self.ptr and self.allocated:
            lib.avcodec_close(self.ptr)
            lib.avcodec_free_context(&self.ptr)
        if self.parser:
            lib.av_parser_close(self.parser)

    def __repr__(self):
        return '<av.%s %s/%s at 0x%x>' % (
            self.__class__.__name__,
            self.type or '<notype>',
            self.name or '<nocodec>',
            id(self),
        )

    def parse(self, raw_input=None):
        """Split up a byte stream into a list of :class:`.Packet`.

        This is only effectively splitting up a byte stream, and does no
        actual interpretation of the data.

        It will return all packets that are fully contained within the given
        input, and will buffer partial packets until they are complete.

        :param ByteSource raw_input: A chunk of a byte-stream to process.
            Anything that can be turned into a :class:`.ByteSource` is fine.
            ``None`` or empty inputs will flush the parser's buffers.

        :return: ``list`` of :class:`.Packet` newly available.
""" if not self.parser: self.parser = lib.av_parser_init(self.codec.ptr.id) if not self.parser: raise ValueError('No parser for %s' % self.codec.name) cdef ByteSource source = bytesource(raw_input, allow_none=True) cdef unsigned char *in_data = source.ptr if source is not None else NULL cdef int in_size = source.length if source is not None else 0 cdef unsigned char *out_data cdef int out_size cdef int consumed cdef Packet packet = None packets = [] while True: with nogil: consumed = lib.av_parser_parse2( self.parser, self.ptr, &out_data, &out_size, in_data, in_size, lib.AV_NOPTS_VALUE, lib.AV_NOPTS_VALUE, 0 ) err_check(consumed) if out_size: # We copy the data immediately, as we have yet to figure out # the expected lifetime of the buffer we get back. All of the # examples decode it immediately. # # We've also tried: # packet = Packet() # packet.data = out_data # packet.size = out_size # packet.source = source # # ... but this results in corruption. packet = Packet(out_size) memcpy(packet.struct.data, out_data, out_size) packets.append(packet) if not in_size: # This was a flush. Only one packet should ever be returned. break in_data += consumed in_size -= consumed if not in_size: # Aaaand now we're done. 
break return packets cdef _send_frame_and_recv(self, Frame frame): cdef Packet packet cdef int res with nogil: res = lib.avcodec_send_frame(self.ptr, frame.ptr if frame is not None else NULL) err_check(res) out = [] while True: packet = self._recv_packet() if packet: out.append(packet) else: break return out cdef _send_packet_and_recv(self, Packet packet): cdef Frame frame cdef int res with nogil: res = lib.avcodec_send_packet(self.ptr, &packet.struct if packet is not None else NULL) err_check(res) out = [] while True: frame = self._recv_frame() if frame: out.append(frame) else: break return out cdef _prepare_frames_for_encode(self, Frame frame): return [frame] cdef Frame _alloc_next_frame(self): raise NotImplementedError('Base CodecContext cannot decode.') cdef _recv_frame(self): if not self._next_frame: self._next_frame = self._alloc_next_frame() cdef Frame frame = self._next_frame cdef int res with nogil: res = lib.avcodec_receive_frame(self.ptr, frame.ptr) if res == -EAGAIN or res == lib.AVERROR_EOF: return err_check(res) if not res: self._next_frame = None return frame cdef _recv_packet(self): cdef Packet packet = Packet() cdef int res with nogil: res = lib.avcodec_receive_packet(self.ptr, &packet.struct) if res == -EAGAIN or res == lib.AVERROR_EOF: return err_check(res) if not res: return packet cpdef encode(self, Frame frame=None): """Encode a list of :class:`.Packet` from the given :class:`.Frame`.""" if self.ptr.codec_type not in [lib.AVMEDIA_TYPE_VIDEO, lib.AVMEDIA_TYPE_AUDIO]: raise NotImplementedError('Encoding is only supported for audio and video.') self.open(strict=False) frames = self._prepare_frames_for_encode(frame) # Assert the frames are in our time base. # TODO: Don't mutate time. 
for frame in frames: if frame is not None: frame._rebase_time(self.ptr.time_base) res = [] for frame in frames: for packet in self._send_frame_and_recv(frame): self._setup_encoded_packet(packet) res.append(packet) return res cdef _setup_encoded_packet(self, Packet packet): # We coerced the frame's time_base into the CodecContext's during encoding, # and FFmpeg copied the frame's pts/dts to the packet, so keep track of # this time_base in case the frame needs to be muxed to a container with # a different time_base. # # NOTE: if the CodecContext's time_base is altered during encoding, all bets # are off! packet._time_base = self.ptr.time_base cpdef decode(self, Packet packet=None): """Decode a list of :class:`.Frame` from the given :class:`.Packet`. If the packet is None, the buffers will be flushed. This is useful if you do not want the library to automatically re-order frames for you (if they are encoded with a codec that has B-frames). """ if not self.codec.ptr: raise ValueError('cannot decode unknown codec') self.open(strict=False) res = [] for frame in self._send_packet_and_recv(packet): if isinstance(frame, Frame): self._setup_decoded_frame(frame, packet) res.append(frame) return res cdef _setup_decoded_frame(self, Frame frame, Packet packet): # Propagate our manual times. # While decoding, frame times are in stream time_base, which PyAV # is carrying around. # TODO: Somehow get this from the stream so we can not pass the # packet here (because flushing packets are bogus). 
        frame._time_base = packet._time_base
        frame.index = self.ptr.frame_number - 1

    property name:
        def __get__(self):
            return self.codec.name

    property type:
        def __get__(self):
            return self.codec.type

    property profile:
        def __get__(self):
            if self.ptr.codec and lib.av_get_profile_name(self.ptr.codec, self.ptr.profile):
                return lib.av_get_profile_name(self.ptr.codec, self.ptr.profile)

    property time_base:
        def __get__(self):
            return avrational_to_fraction(&self.ptr.time_base)

        def __set__(self, value):
            to_avrational(value, &self.ptr.time_base)

    property codec_tag:
        def __get__(self):
            return self.ptr.codec_tag.to_bytes(4, byteorder="little", signed=False).decode(
                encoding="ascii")

        def __set__(self, value):
            if isinstance(value, str) and len(value) == 4:
                self.ptr.codec_tag = int.from_bytes(value.encode(encoding="ascii"),
                                                    byteorder="little", signed=False)
            else:
                raise ValueError("Codec tag should be a 4 character string.")

    property ticks_per_frame:
        def __get__(self):
            return self.ptr.ticks_per_frame

    property bit_rate:
        def __get__(self):
            return self.ptr.bit_rate if self.ptr.bit_rate > 0 else None

        def __set__(self, int value):
            self.ptr.bit_rate = value

    property max_bit_rate:
        def __get__(self):
            if self.ptr.rc_max_rate > 0:
                return self.ptr.rc_max_rate
            else:
                return None

    property bit_rate_tolerance:
        def __get__(self):
            return self.ptr.bit_rate_tolerance

        def __set__(self, int value):
            self.ptr.bit_rate_tolerance = value

    property thread_count:
        """How many threads to use; 0 means auto.

        Wraps :ffmpeg:`AVCodecContext.thread_count`.

        """
        def __get__(self):
            return self.ptr.thread_count

        def __set__(self, int value):
            if lib.avcodec_is_open(self.ptr):
                raise RuntimeError("Cannot change thread_count after codec is open.")
            self.ptr.thread_count = value

    property thread_type:
        """One of :class:`.ThreadType`.

        Wraps :ffmpeg:`AVCodecContext.thread_type`.
""" def __get__(self): return ThreadType.get(self.ptr.thread_type, create=True) def __set__(self, value): if lib.avcodec_is_open(self.ptr): raise RuntimeError("Cannot change thread_type after codec is open.") self.ptr.thread_type = ThreadType[value].value property skip_frame: """One of :class:`.SkipType`. Wraps ffmpeg:`AVCodecContext.skip_frame`. """ def __get__(self): return SkipType._get(self.ptr.skip_frame, create=True) def __set__(self, value): self.ptr.skip_frame = SkipType[value].value PyAV-8.1.0/av/container/000077500000000000000000000000001416312437500147355ustar00rootroot00000000000000PyAV-8.1.0/av/container/__init__.py000066400000000000000000000001571416312437500170510ustar00rootroot00000000000000from .core import Container, Flags, open from .input import InputContainer from .output import OutputContainer PyAV-8.1.0/av/container/core.pxd000066400000000000000000000024421416312437500164040ustar00rootroot00000000000000cimport libav as lib from av.container.streams cimport StreamContainer from av.dictionary cimport _Dictionary from av.format cimport ContainerFormat from av.stream cimport Stream # Interrupt callback information, times are in seconds. ctypedef struct timeout_info: double start_time double timeout cdef class Container(object): cdef readonly bint writeable cdef lib.AVFormatContext *ptr cdef readonly object name cdef readonly str metadata_encoding cdef readonly str metadata_errors # File-like source. cdef readonly object file cdef object fread cdef object fwrite cdef object fseek cdef object ftell # Custom IO for above. 
cdef lib.AVIOContext *iocontext cdef unsigned char *buffer cdef long pos cdef bint pos_is_valid cdef bint input_was_opened cdef readonly ContainerFormat format cdef readonly dict options cdef readonly dict container_options cdef readonly list stream_options cdef readonly StreamContainer streams cdef readonly dict metadata cdef int err_check(self, int value) except -1 # Timeouts cdef readonly object open_timeout cdef readonly object read_timeout cdef timeout_info interrupt_callback_info cdef set_timeout(self, object) cdef start_timeout(self) PyAV-8.1.0/av/container/core.pyx000077500000000000000000000326261416312437500164430ustar00rootroot00000000000000from cython.operator cimport dereference from libc.stdint cimport int64_t from libc.stdlib cimport free, malloc import os import time cimport libav as lib from av.container.core cimport timeout_info from av.container.input cimport InputContainer from av.container.output cimport OutputContainer from av.container.pyio cimport pyio_read, pyio_seek, pyio_write from av.enum cimport define_enum from av.error cimport err_check, stash_exception from av.format cimport build_container_format from av.dictionary import Dictionary from av.logging import Capture as LogCapture ctypedef int64_t (*seek_func_t)(void *opaque, int64_t offset, int whence) nogil cdef object _cinit_sentinel = object() # We want to use the monotonic clock if it is available. cdef object clock = getattr(time, 'monotonic', time.time) cdef int interrupt_cb (void *p) nogil: cdef timeout_info info = dereference( p) if info.timeout < 0: # timeout < 0 means no timeout return 0 cdef double current_time with gil: current_time = clock() # Check if the clock has been changed. if current_time < info.start_time: # Raise this when we get back to Python. 
stash_exception((RuntimeError, RuntimeError("Clock has been changed to before timeout start"), None)) return 1 if current_time > info.start_time + info.timeout: return 1 return 0 Flags = define_enum('Flags', __name__, ( ('GENPTS', lib.AVFMT_FLAG_GENPTS, "Generate missing pts even if it requires parsing future frames."), ('IGNIDX', lib.AVFMT_FLAG_IGNIDX, "Ignore index."), ('NONBLOCK', lib.AVFMT_FLAG_NONBLOCK, "Do not block when reading packets from input."), ('IGNDTS', lib.AVFMT_FLAG_IGNDTS, "Ignore DTS on frames that contain both DTS & PTS."), ('NOFILLIN', lib.AVFMT_FLAG_NOFILLIN, "Do not infer any values from other values, just return what is stored in the container."), ('NOPARSE', lib.AVFMT_FLAG_NOPARSE, """Do not use AVParsers, you also must set AVFMT_FLAG_NOFILLIN as the fillin code works on frames and no parsing -> no frames. Also seeking to frames can not work if parsing to find frame boundaries has been disabled."""), ('NOBUFFER', lib.AVFMT_FLAG_NOBUFFER, "Do not buffer frames when possible."), ('CUSTOM_IO', lib.AVFMT_FLAG_CUSTOM_IO, "The caller has supplied a custom AVIOContext, don't avio_close() it."), ('DISCARD_CORRUPT', lib.AVFMT_FLAG_DISCARD_CORRUPT, "Discard frames marked corrupted."), ('FLUSH_PACKETS', lib.AVFMT_FLAG_FLUSH_PACKETS, "Flush the AVIOContext every packet."), ('BITEXACT', lib.AVFMT_FLAG_BITEXACT, """When muxing, try to avoid writing any random/volatile data to the output. This includes any random IDs, real-time timestamps/dates, muxer version, etc. 
This flag is mainly intended for testing."""), ('MP4A_LATM', lib.AVFMT_FLAG_MP4A_LATM, "Enable RTP MP4A-LATM payload"), ('SORT_DTS', lib.AVFMT_FLAG_SORT_DTS, "Try to interleave outputted packets by dts (using this flag can slow demuxing down)."), ('PRIV_OPT', lib.AVFMT_FLAG_PRIV_OPT, "Enable use of private options by delaying codec open (this could be made default once all code is converted)."), ('KEEP_SIDE_DATA', lib.AVFMT_FLAG_KEEP_SIDE_DATA, "Deprecated, does nothing."), ('FAST_SEEK', lib.AVFMT_FLAG_FAST_SEEK, "Enable fast, but inaccurate seeks for some formats."), ('SHORTEST', lib.AVFMT_FLAG_SHORTEST, "Stop muxing when the shortest stream stops."), ('AUTO_BSF', lib.AVFMT_FLAG_AUTO_BSF, "Add bitstream filters as requested by the muxer."), ), is_flags=True) cdef class Container(object): def __cinit__(self, sentinel, file_, format_name, options, container_options, stream_options, metadata_encoding, metadata_errors, buffer_size, open_timeout, read_timeout): if sentinel is not _cinit_sentinel: raise RuntimeError('cannot construct base Container') self.writeable = isinstance(self, OutputContainer) if not self.writeable and not isinstance(self, InputContainer): raise RuntimeError('Container cannot be directly extended.') if isinstance(file_, str): self.name = file_ else: self.name = getattr(file_, 'name', '') if not isinstance(self.name, str): raise TypeError("File's name attribute must be string-like.") self.file = file_ self.options = dict(options or ()) self.container_options = dict(container_options or ()) self.stream_options = [dict(x) for x in stream_options or ()] self.metadata_encoding = metadata_encoding self.metadata_errors = metadata_errors self.open_timeout = open_timeout self.read_timeout = read_timeout if format_name is not None: self.format = ContainerFormat(format_name) self.input_was_opened = False cdef int res cdef bytes name_obj = os.fsencode(self.name) cdef char *name = name_obj cdef seek_func_t seek_func = NULL cdef lib.AVOutputFormat *ofmt if 
self.writeable: ofmt = self.format.optr if self.format else lib.av_guess_format(NULL, name, NULL) if ofmt == NULL: raise ValueError("Could not determine output format") with nogil: # This does not actually open the file. res = lib.avformat_alloc_output_context2( &self.ptr, ofmt, NULL, name, ) self.err_check(res) else: # We need the context before we open the input AND setup Python IO. self.ptr = lib.avformat_alloc_context() # Setup interrupt callback if self.open_timeout is not None or self.read_timeout is not None: self.ptr.interrupt_callback.callback = interrupt_cb self.ptr.interrupt_callback.opaque = &self.interrupt_callback_info self.ptr.flags |= lib.AVFMT_FLAG_GENPTS # Setup Python IO. if self.file is not None: self.fread = getattr(self.file, 'read', None) self.fwrite = getattr(self.file, 'write', None) self.fseek = getattr(self.file, 'seek', None) self.ftell = getattr(self.file, 'tell', None) if self.writeable: if self.fwrite is None: raise ValueError("File object has no write method.") else: if self.fread is None: raise ValueError("File object has no read method.") if self.fseek is not None and self.ftell is not None: seek_func = pyio_seek self.pos = 0 self.pos_is_valid = True # This is effectively the maximum size of reads. self.buffer = lib.av_malloc(buffer_size) self.iocontext = lib.avio_alloc_context( self.buffer, buffer_size, self.writeable, # Writeable. self, # User data. 
pyio_read, pyio_write, seek_func ) if seek_func: self.iocontext.seekable = lib.AVIO_SEEKABLE_NORMAL self.iocontext.max_packet_size = buffer_size self.ptr.pb = self.iocontext cdef lib.AVInputFormat *ifmt cdef _Dictionary c_options if not self.writeable: ifmt = self.format.iptr if self.format else NULL c_options = Dictionary(self.options, self.container_options) self.set_timeout(self.open_timeout) self.start_timeout() with nogil: res = lib.avformat_open_input( &self.ptr, name, ifmt, &c_options.ptr ) self.set_timeout(None) self.err_check(res) self.input_was_opened = True if format_name is None: self.format = build_container_format(self.ptr.iformat, self.ptr.oformat) def __dealloc__(self): with nogil: # FFmpeg will not release custom input, so it's up to us to free it. # Do not touch our original buffer as it may have been freed and replaced. if self.iocontext: lib.av_freep(&self.iocontext.buffer) lib.av_freep(&self.iocontext) # We likely errored badly if we got here, and so are still # responsible for our buffer. else: lib.av_freep(&self.buffer) # Finish releasing the whole structure. 
            lib.avformat_free_context(self.ptr)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()

    def __repr__(self):
        return '<av.%s %r>' % (self.__class__.__name__, self.file or self.name)

    cdef int err_check(self, int value) except -1:
        return err_check(value, filename=self.name)

    def dumps_format(self):
        with LogCapture() as logs:
            lib.av_dump_format(self.ptr, 0, "", isinstance(self, OutputContainer))
        return ''.join(log[2] for log in logs)

    cdef set_timeout(self, timeout):
        if timeout is None:
            self.interrupt_callback_info.timeout = -1.0
        else:
            self.interrupt_callback_info.timeout = timeout

    cdef start_timeout(self):
        self.interrupt_callback_info.start_time = clock()

    def _get_flags(self):
        return self.ptr.flags

    def _set_flags(self, value):
        self.ptr.flags = value

    flags = Flags.property(
        _get_flags,
        _set_flags,
        """Flags property of :class:`.Flags`"""
    )

    gen_pts = flags.flag_property('GENPTS')
    ign_idx = flags.flag_property('IGNIDX')
    non_block = flags.flag_property('NONBLOCK')
    ign_dts = flags.flag_property('IGNDTS')
    no_fill_in = flags.flag_property('NOFILLIN')
    no_parse = flags.flag_property('NOPARSE')
    no_buffer = flags.flag_property('NOBUFFER')
    custom_io = flags.flag_property('CUSTOM_IO')
    discard_corrupt = flags.flag_property('DISCARD_CORRUPT')
    flush_packets = flags.flag_property('FLUSH_PACKETS')
    bit_exact = flags.flag_property('BITEXACT')
    mp4a_latm = flags.flag_property('MP4A_LATM')
    sort_dts = flags.flag_property('SORT_DTS')
    priv_opt = flags.flag_property('PRIV_OPT')
    keep_side_data = flags.flag_property('KEEP_SIDE_DATA')
    fast_seek = flags.flag_property('FAST_SEEK')
    shortest = flags.flag_property('SHORTEST')
    auto_bsf = flags.flag_property('AUTO_BSF')


def open(file, mode=None, format=None, options=None,
         container_options=None, stream_options=None,
         metadata_encoding='utf-8', metadata_errors='strict',
         buffer_size=32768, timeout=None):
    """open(file, mode='r', **kwargs)

    Main entrypoint to opening files/streams.
    :param str file: The file to open, which can be either a string or a
        file-like object.
    :param str mode: ``"r"`` for reading and ``"w"`` for writing.
    :param str format: Specific format to use. Defaults to autodetect.
    :param dict options: Options to pass to the container and all streams.
    :param dict container_options: Options to pass to the container.
    :param list stream_options: Options to pass to each stream.
    :param str metadata_encoding: Encoding to use when reading or writing file
        metadata. Defaults to ``"utf-8"``.
    :param str metadata_errors: Specifies how to handle encoding errors; behaves
        like the ``errors`` parameter of ``str.encode``. Defaults to ``"strict"``.
    :param int buffer_size: Size of buffer for Python input/output operations
        in bytes. Honored only when ``file`` is a file-like object. Defaults
        to 32768 (32k).
    :param timeout: How many seconds to wait for data before giving up, as a
        float, or an ``(open timeout, read timeout)`` tuple.
    :type timeout: float or tuple

    For devices (via ``libavdevice``), pass the name of the device to
    ``format``, e.g.::

        >>> # Open webcam on OS X.
        >>> av.open(format='avfoundation', file='0')  # doctest: +SKIP

    .. seealso:: :ref:`garbage_collection`

    More information on using input and output devices is available on the
    `FFmpeg website <https://ffmpeg.org/ffmpeg-devices.html>`_.
""" if mode is None: mode = getattr(file, 'mode', None) if mode is None: mode = 'r' if isinstance(timeout, tuple): open_timeout = timeout[0] read_timeout = timeout[1] else: open_timeout = timeout read_timeout = timeout if mode.startswith('r'): return InputContainer( _cinit_sentinel, file, format, options, container_options, stream_options, metadata_encoding, metadata_errors, buffer_size, open_timeout, read_timeout ) if mode.startswith('w'): if stream_options: raise ValueError("Provide stream options via Container.add_stream(..., options={}).") return OutputContainer( _cinit_sentinel, file, format, options, container_options, stream_options, metadata_encoding, metadata_errors, buffer_size, open_timeout, read_timeout ) raise ValueError("mode must be 'r' or 'w'; got %r" % mode) PyAV-8.1.0/av/container/input.pxd000066400000000000000000000002431416312437500166100ustar00rootroot00000000000000cimport libav as lib from av.container.core cimport Container from av.stream cimport Stream cdef class InputContainer(Container): cdef flush_buffers(self) PyAV-8.1.0/av/container/input.pyx000066400000000000000000000232701416312437500166420ustar00rootroot00000000000000from libc.stdint cimport int64_t from libc.stdlib cimport free, malloc from av.container.streams cimport StreamContainer from av.dictionary cimport _Dictionary from av.error cimport err_check from av.packet cimport Packet from av.stream cimport Stream, wrap_stream from av.utils cimport avdict_to_dict from av.dictionary import Dictionary cdef close_input(InputContainer self): if self.input_was_opened: with nogil: lib.avformat_close_input(&self.ptr) self.input_was_opened = False cdef class InputContainer(Container): def __cinit__(self, *args, **kwargs): cdef unsigned int i # If we have either the global `options`, or a `stream_options`, prepare # a mashup of those options for each stream. 
        cdef lib.AVDictionary **c_options = NULL
        cdef _Dictionary base_dict, stream_dict
        if self.options or self.stream_options:
            base_dict = Dictionary(self.options)
            c_options = <lib.AVDictionary**>malloc(self.ptr.nb_streams * sizeof(void*))
            for i in range(self.ptr.nb_streams):
                c_options[i] = NULL
                if i < len(self.stream_options) and self.stream_options:
                    stream_dict = base_dict.copy()
                    stream_dict.update(self.stream_options[i])
                    lib.av_dict_copy(&c_options[i], stream_dict.ptr, 0)
                else:
                    lib.av_dict_copy(&c_options[i], base_dict.ptr, 0)

        self.set_timeout(self.open_timeout)
        self.start_timeout()
        with nogil:
            # This peeks at the first few frames to:
            # - set stream.disposition from codec.audio_service_type (not exposed);
            # - set stream.codec.bits_per_coded_sample;
            # - set stream.duration;
            # - set stream.start_time;
            # - set stream.r_frame_rate to average value;
            # - open and close codecs with the options provided.
            ret = lib.avformat_find_stream_info(
                self.ptr,
                c_options
            )
        self.set_timeout(None)
        self.err_check(ret)

        # Cleanup all of our options.
if c_options: for i in range(self.ptr.nb_streams): lib.av_dict_free(&c_options[i]) free(c_options) self.streams = StreamContainer() for i in range(self.ptr.nb_streams): self.streams.add_stream(wrap_stream(self, self.ptr.streams[i])) self.metadata = avdict_to_dict(self.ptr.metadata, self.metadata_encoding, self.metadata_errors) def __dealloc__(self): close_input(self) property start_time: def __get__(self): return self.ptr.start_time property duration: def __get__(self): return self.ptr.duration property bit_rate: def __get__(self): return self.ptr.bit_rate property size: def __get__(self): return lib.avio_size(self.ptr.pb) def close(self): close_input(self) def demux(self, *args, **kwargs): """demux(streams=None, video=None, audio=None, subtitles=None, data=None) Yields a series of :class:`.Packet` from the given set of :class:`.Stream`:: for packet in container.demux(): # Do something with `packet`, often: for frame in packet.decode(): # Do something with `frame`. .. seealso:: :meth:`.StreamContainer.get` for the interpretation of the arguments. .. note:: The last packets are dummy packets that when decoded will flush the buffers. """ # For whatever reason, Cython does not like us directly passing kwargs # from one method to another. Without kwargs, it ends up passing a # NULL reference, which segfaults. So we force it to do something with it. # This is likely a bug in Cython; see https://github.com/cython/cython/issues/2166 # (and others). 
id(kwargs) streams = self.streams.get(*args, **kwargs) cdef bint *include_stream = malloc(self.ptr.nb_streams * sizeof(bint)) if include_stream == NULL: raise MemoryError() cdef unsigned int i cdef Packet packet cdef int ret self.set_timeout(self.read_timeout) try: for i in range(self.ptr.nb_streams): include_stream[i] = False for stream in streams: i = stream.index if i >= self.ptr.nb_streams: raise ValueError('stream index %d out of range' % i) include_stream[i] = True while True: packet = Packet() try: self.start_timeout() with nogil: ret = lib.av_read_frame(self.ptr, &packet.struct) self.err_check(ret) except EOFError: break if include_stream[packet.struct.stream_index]: # If AVFMTCTX_NOHEADER is set in ctx_flags, then new streams # may also appear in av_read_frame(). # http://ffmpeg.org/doxygen/trunk/structAVFormatContext.html # TODO: find better way to handle this if packet.struct.stream_index < len(self.streams): packet._stream = self.streams[packet.struct.stream_index] # Keep track of this so that remuxing is easier. packet._time_base = packet._stream._stream.time_base yield packet # Flush! for i in range(self.ptr.nb_streams): if include_stream[i]: packet = Packet() packet._stream = self.streams[i] packet._time_base = packet._stream._stream.time_base yield packet finally: self.set_timeout(None) free(include_stream) def decode(self, *args, **kwargs): """decode(streams=None, video=None, audio=None, subtitles=None, data=None) Yields a series of :class:`.Frame` from the given set of streams:: for frame in container.decode(): # Do something with `frame`. .. seealso:: :meth:`.StreamContainer.get` for the interpretation of the arguments. """ id(kwargs) # Avoid Cython bug; see demux(). 
for packet in self.demux(*args, **kwargs): for frame in packet.decode(): yield frame def seek(self, offset, *, str whence='time', bint backward=True, bint any_frame=False, Stream stream=None, bint unsupported_frame_offset=False, bint unsupported_byte_offset=False): """seek(offset, *, backward=True, any_frame=False, stream=None) Seek to a (key)frame nearest to the given timestamp. :param int offset: Time to seek to, expressed in ``stream.time_base`` if ``stream`` is given, otherwise in :data:`av.time_base`. :param bool backward: If there is not a (key)frame at the given offset, look backwards for it. :param bool any_frame: Seek to any frame, not just a keyframe. :param Stream stream: The stream whose ``time_base`` the ``offset`` is in. :param bool unsupported_frame_offset: ``offset`` is a frame index instead of a time; not supported by any known format. :param bool unsupported_byte_offset: ``offset`` is a byte location in the file; not supported by any known format. After seeking, packets that you demux should correspond (roughly) to the position you requested. In most cases, the defaults of ``backward = True`` and ``any_frame = False`` are the best course of action, followed by you demuxing/decoding to the position that you want. This is because to properly decode video frames you need to start from the previous keyframe. .. seealso:: :ffmpeg:`avformat_seek_file` for discussion of the flags. """ # We used to take floats here and assume they were in seconds. This # was super confusing, so let's go in the complete opposite direction # and reject non-ints. if not isinstance(offset, (int, long)): raise TypeError('Container.seek only accepts integer offset.', type(offset)) cdef int64_t c_offset = offset cdef int flags = 0 cdef int ret # We used to support whence in 'time', 'frame', and 'byte', but later # realized that FFmpeg doesn't implement the frame or byte ones. # We don't even document this anymore, but do allow 'time' to pass through. 
if whence != 'time': raise ValueError("whence != 'time' is no longer supported") if backward: flags |= lib.AVSEEK_FLAG_BACKWARD if any_frame: flags |= lib.AVSEEK_FLAG_ANY # If someone really wants (and to experiment), expose these. if unsupported_frame_offset: flags |= lib.AVSEEK_FLAG_FRAME if unsupported_byte_offset: flags |= lib.AVSEEK_FLAG_BYTE cdef int stream_index = stream.index if stream else -1 with nogil: ret = lib.av_seek_frame(self.ptr, stream_index, c_offset, flags) err_check(ret) self.flush_buffers() cdef flush_buffers(self): cdef unsigned int i cdef lib.AVStream *stream with nogil: for i in range(self.ptr.nb_streams): stream = self.ptr.streams[i] if stream.codec and stream.codec.codec and stream.codec.codec_id != lib.AV_CODEC_ID_NONE: lib.avcodec_flush_buffers(stream.codec) PyAV-8.1.0/av/container/output.pxd000066400000000000000000000003221416312437500170070ustar00rootroot00000000000000cimport libav as lib from av.container.core cimport Container from av.stream cimport Stream cdef class OutputContainer(Container): cdef bint _started cdef bint _done cpdef start_encoding(self) PyAV-8.1.0/av/container/output.pyx000066400000000000000000000175271416312437500170530ustar00rootroot00000000000000from fractions import Fraction import logging import os from av.codec.codec cimport Codec from av.container.streams cimport StreamContainer from av.dictionary cimport _Dictionary from av.error cimport err_check from av.packet cimport Packet from av.stream cimport Stream, wrap_stream from av.utils cimport dict_to_avdict from av.dictionary import Dictionary log = logging.getLogger(__name__) cdef close_output(OutputContainer self): cdef Stream stream if self._started and not self._done: self.err_check(lib.av_write_trailer(self.ptr)) for stream in self.streams: stream.codec_context.close() if self.file is None and not self.ptr.oformat.flags & lib.AVFMT_NOFILE: lib.avio_closep(&self.ptr.pb) self._done = True cdef class OutputContainer(Container): def __cinit__(self, *args, 
**kwargs): self.streams = StreamContainer() self.metadata = {} def __dealloc__(self): close_output(self) def add_stream(self, codec_name=None, object rate=None, Stream template=None, options=None, **kwargs): """add_stream(codec_name, rate=None) Create a new stream, and return it. :param str codec_name: The name of a codec. :param rate: The frame rate for video, and sample rate for audio. Examples for video include ``24``, ``23.976``, and ``Fraction(30000,1001)``. Examples for audio include ``48000`` and ``44100``. :returns: The new :class:`~av.stream.Stream`. """ if (codec_name is None and template is None) or (codec_name is not None and template is not None): raise ValueError('needs one of codec_name or template') cdef const lib.AVCodec *codec cdef Codec codec_obj if codec_name is not None: codec_obj = codec_name if isinstance(codec_name, Codec) else Codec(codec_name, 'w') codec = codec_obj.ptr else: if not template._codec: raise ValueError("template has no codec") if not template._codec_context: raise ValueError("template has no codec context") codec = template._codec # Assert that this format supports the requested codec. if not lib.avformat_query_codec( self.ptr.oformat, codec.id, lib.FF_COMPLIANCE_NORMAL, ): raise ValueError("%r format does not support %r codec" % (self.format.name, codec_name)) # Create new stream in the AVFormatContext, set AVCodecContext values. # As of last check, avformat_new_stream only calls avcodec_alloc_context3 to create # the context, but doesn't modify it in any other way. Ergo, we can allow CodecContext # to finish initializing it. lib.avformat_new_stream(self.ptr, codec) cdef lib.AVStream *stream = self.ptr.streams[self.ptr.nb_streams - 1] cdef lib.AVCodecContext *codec_context = stream.codec # For readability. # Copy from the template. if template is not None: lib.avcodec_copy_context(codec_context, template._codec_context) # Reset the codec tag assuming we are remuxing. 
codec_context.codec_tag = 0 # Now lets set some more sane video defaults elif codec.type == lib.AVMEDIA_TYPE_VIDEO: codec_context.pix_fmt = lib.AV_PIX_FMT_YUV420P codec_context.width = 640 codec_context.height = 480 codec_context.bit_rate = 1024000 codec_context.bit_rate_tolerance = 128000 codec_context.ticks_per_frame = 1 rate = Fraction(rate or 24) codec_context.framerate.num = rate.numerator codec_context.framerate.den = rate.denominator stream.time_base = codec_context.time_base # Some sane audio defaults elif codec.type == lib.AVMEDIA_TYPE_AUDIO: codec_context.sample_fmt = codec.sample_fmts[0] codec_context.bit_rate = 128000 codec_context.bit_rate_tolerance = 32000 codec_context.sample_rate = rate or 48000 codec_context.channels = 2 codec_context.channel_layout = lib.AV_CH_LAYOUT_STEREO # Some formats want stream headers to be separate if self.ptr.oformat.flags & lib.AVFMT_GLOBALHEADER: codec_context.flags |= lib.AV_CODEC_FLAG_GLOBAL_HEADER # Construct the user-land stream cdef Stream py_stream = wrap_stream(self, stream) self.streams.add_stream(py_stream) if options: py_stream.options.update(options) for k, v in kwargs.items(): setattr(py_stream, k, v) return py_stream cpdef start_encoding(self): """Write the file header! Called automatically.""" if self._started: return # TODO: This does NOT handle options coming from 3 sources. # This is only a rough approximation of what would be cool to do. used_options = set() # Finalize and open all streams. cdef Stream stream for stream in self.streams: ctx = stream.codec_context if not ctx.is_open: for k, v in self.options.items(): ctx.options.setdefault(k, v) ctx.open() # Track option consumption. for k in self.options: if k not in ctx.options: used_options.add(k) stream._finalize_for_output() # Open the output file, if needed. 
cdef bytes name_obj = os.fsencode(self.name if self.file is None else "") cdef char *name = name_obj if self.ptr.pb == NULL and not self.ptr.oformat.flags & lib.AVFMT_NOFILE: err_check(lib.avio_open(&self.ptr.pb, name, lib.AVIO_FLAG_WRITE)) # Copy the metadata dict. dict_to_avdict( &self.ptr.metadata, self.metadata, encoding=self.metadata_encoding, errors=self.metadata_errors ) cdef _Dictionary all_options = Dictionary(self.options, self.container_options) cdef _Dictionary options = all_options.copy() self.err_check(lib.avformat_write_header( self.ptr, &options.ptr )) # Track option usage... for k in all_options: if k not in options: used_options.add(k) # ... and warn if any weren't used. unused_options = {k: v for k, v in self.options.items() if k not in used_options} if unused_options: log.warning('Some options were not used: %s' % unused_options) self._started = True def close(self): close_output(self) def mux(self, packets): # We accept either a Packet, or a sequence of packets. This should # smooth out the transition to the new encode API which returns a # sequence of packets. if isinstance(packets, Packet): self.mux_one(packets) else: for packet in packets: self.mux_one(packet) def mux_one(self, Packet packet not None): self.start_encoding() # Assert the packet is in stream time. if packet.struct.stream_index < 0 or packet.struct.stream_index >= self.ptr.nb_streams: raise ValueError('Bad Packet stream_index.') cdef lib.AVStream *stream = self.ptr.streams[packet.struct.stream_index] packet._rebase_time(stream.time_base) # Make another reference to the packet, as av_interleaved_write_frame # takes ownership of it. 
cdef lib.AVPacket packet_ref lib.av_init_packet(&packet_ref) self.err_check(lib.av_packet_ref(&packet_ref, &packet.struct)) cdef int ret with nogil: ret = lib.av_interleaved_write_frame(self.ptr, &packet_ref) self.err_check(ret) PyAV-8.1.0/av/container/pyio.pxd000066400000000000000000000003741416312437500164360ustar00rootroot00000000000000from libc.stdint cimport int64_t, uint8_t cdef int pyio_read(void *opaque, uint8_t *buf, int buf_size) nogil cdef int pyio_write(void *opaque, uint8_t *buf, int buf_size) nogil cdef int64_t pyio_seek(void *opaque, int64_t offset, int whence) nogil PyAV-8.1.0/av/container/pyio.pyx000066400000000000000000000043351416312437500164640ustar00rootroot00000000000000from libc.string cimport memcpy cimport libav as lib from av.container.core cimport Container from av.error cimport stash_exception cdef int pyio_read(void *opaque, uint8_t *buf, int buf_size) nogil: with gil: return pyio_read_gil(opaque, buf, buf_size) cdef int pyio_read_gil(void *opaque, uint8_t *buf, int buf_size): cdef Container self cdef bytes res try: self = opaque res = self.fread(buf_size) memcpy(buf, res, len(res)) self.pos += len(res) if not res: return lib.AVERROR_EOF return len(res) except Exception as e: return stash_exception() cdef int pyio_write(void *opaque, uint8_t *buf, int buf_size) nogil: with gil: return pyio_write_gil(opaque, buf, buf_size) cdef int pyio_write_gil(void *opaque, uint8_t *buf, int buf_size): cdef Container self cdef bytes bytes_to_write cdef int bytes_written try: self = opaque bytes_to_write = buf[:buf_size] ret_value = self.fwrite(bytes_to_write) bytes_written = ret_value if isinstance(ret_value, int) else buf_size self.pos += bytes_written return bytes_written except Exception as e: return stash_exception() cdef int64_t pyio_seek(void *opaque, int64_t offset, int whence) nogil: # Seek takes the standard flags, but also a ad-hoc one which means that # the library wants to know how large the file is. 
We are generally # allowed to ignore this. if whence == lib.AVSEEK_SIZE: return -1 with gil: return pyio_seek_gil(opaque, offset, whence) cdef int64_t pyio_seek_gil(void *opaque, int64_t offset, int whence): cdef Container self try: self = opaque res = self.fseek(offset, whence) # Track the position for the user. if whence == 0: self.pos = offset elif whence == 1: self.pos += offset else: self.pos_is_valid = False if res is None: if self.pos_is_valid: res = self.pos else: res = self.ftell() return res except Exception as e: return stash_exception() PyAV-8.1.0/av/container/streams.pxd000066400000000000000000000004771416312437500171400ustar00rootroot00000000000000from av.stream cimport Stream cdef class StreamContainer(object): cdef list _streams # For the different types. cdef readonly tuple video cdef readonly tuple audio cdef readonly tuple subtitles cdef readonly tuple data cdef readonly tuple other cdef add_stream(self, Stream stream) PyAV-8.1.0/av/container/streams.pyx000066400000000000000000000066151416312437500171650ustar00rootroot00000000000000 cimport libav as lib def _flatten(input_): for x in input_: if isinstance(x, (tuple, list)): for y in _flatten(x): yield y else: yield x cdef class StreamContainer(object): """ A tuple-like container of :class:`Stream`. :: # There are a few ways to pulling out streams. 
first = container.streams[0] video = container.streams.video[0] audio = container.streams.get(audio=(0, 1)) """ def __cinit__(self): self._streams = [] self.video = () self.audio = () self.subtitles = () self.data = () self.other = () cdef add_stream(self, Stream stream): assert stream._stream.index == len(self._streams) self._streams.append(stream) if stream._codec_context.codec_type == lib.AVMEDIA_TYPE_VIDEO: self.video = self.video + (stream, ) elif stream._codec_context.codec_type == lib.AVMEDIA_TYPE_AUDIO: self.audio = self.audio + (stream, ) elif stream._codec_context.codec_type == lib.AVMEDIA_TYPE_SUBTITLE: self.subtitles = self.subtitles + (stream, ) elif stream._codec_context.codec_type == lib.AVMEDIA_TYPE_DATA: self.data = self.data + (stream, ) else: self.other = self.other + (stream, ) # Basic tuple interface. def __len__(self): return len(self._streams) def __iter__(self): return iter(self._streams) def __getitem__(self, index): if isinstance(index, int): return self.get(index)[0] else: return self.get(index) def get(self, *args, **kwargs): """get(streams=None, video=None, audio=None, subtitles=None, data=None) Get a selection of :class:`.Stream` as a ``list``. Positional arguments may be ``int`` (which is an index into the streams), or ``list`` or ``tuple`` of those:: # Get the first stream. streams.get(0) # Get the first two audio streams. streams.get(audio=(0, 1)) Keyword arguments (or dicts as positional arguments) are interpreted as ``(stream_type, index_value_or_set)`` pairs:: # Get the first video stream. streams.get(video=0) # or streams.get({'video': 0}) :class:`.Stream` objects are passed through untouched. If nothing is selected, then all streams are returned. 
""" selection = [] for x in _flatten((args, kwargs)): if x is None: pass elif isinstance(x, Stream): selection.append(x) elif isinstance(x, int): selection.append(self._streams[x]) elif isinstance(x, dict): for type_, indices in x.items(): if type_ == 'streams': # For compatibility with the pseudo signature streams = self._streams else: streams = getattr(self, type_) if not isinstance(indices, (tuple, list)): indices = [indices] for i in indices: selection.append(streams[i]) else: raise TypeError('Argument must be Stream or int.', type(x)) return selection or self._streams[:] PyAV-8.1.0/av/data/000077500000000000000000000000001416312437500136645ustar00rootroot00000000000000PyAV-8.1.0/av/data/__init__.py000066400000000000000000000000001416312437500157630ustar00rootroot00000000000000PyAV-8.1.0/av/data/stream.pxd000066400000000000000000000001101416312437500156640ustar00rootroot00000000000000from av.stream cimport Stream cdef class DataStream(Stream): pass PyAV-8.1.0/av/data/stream.pyx000066400000000000000000000012151416312437500157200ustar00rootroot00000000000000cimport libav as lib cdef class DataStream(Stream): def __repr__(self): return '' % ( self.__class__.__name__, self.index, self.type or '', self.name or '', id(self), ) def encode(self, frame=None): return [] def decode(self, packet=None, count=0): return [] property name: def __get__(self): cdef const lib.AVCodecDescriptor *desc = lib.avcodec_descriptor_get(self._codec_context.codec_id) if desc == NULL: return None return desc.name PyAV-8.1.0/av/datasets.py000066400000000000000000000061041416312437500151360ustar00rootroot00000000000000from __future__ import absolute_import import errno import logging import os import sys try: from urllib.request import urlopen except ImportError: from urllib2 import urlopen log = logging.getLogger(__name__) def iter_data_dirs(check_writable=False): try: yield os.environ['PYAV_TESTDATA_DIR'] except KeyError: pass if os.name == 'nt': yield os.path.join(sys.prefix, 'pyav', 
'datasets') return bases = [ '/usr/local/share', '/usr/local/lib', '/usr/share', '/usr/lib', ] # Prefer the local virtualenv. if hasattr(sys, 'real_prefix'): bases.insert(0, sys.prefix) for base in bases: dir_ = os.path.join(base, 'pyav', 'datasets') if check_writable: if os.path.exists(dir_): if not os.access(dir_, os.W_OK): continue else: if not os.access(base, os.W_OK): continue yield dir_ yield os.path.join(os.path.expanduser('~'), '.pyav', 'datasets') def cached_download(url, name): """Download the data at a URL, and cache it under the given name. The file is stored under `pyav/test` with the given name in the directory :envvar:`PYAV_TESTDATA_DIR`, or the first that is writeable of: - the current virtualenv - ``/usr/local/share`` - ``/usr/local/lib`` - ``/usr/share`` - ``/usr/lib`` - the user's home """ clean_name = os.path.normpath(name) if clean_name != name: raise ValueError("{} is not normalized.".format(name)) for dir_ in iter_data_dirs(): path = os.path.join(dir_, name) if os.path.exists(path): return path dir_ = next(iter_data_dirs(True)) path = os.path.join(dir_, name) log.info("Downloading {} to {}".format(url, path)) response = urlopen(url) if response.getcode() != 200: raise ValueError("HTTP {}".format(response.getcode())) dir_ = os.path.dirname(path) try: os.makedirs(dir_) except OSError as e: if e.errno != errno.EEXIST: raise tmp_path = path + '.tmp' with open(tmp_path, 'wb') as fh: while True: chunk = response.read(8196) if chunk: fh.write(chunk) else: break os.rename(tmp_path, path) return path def fate(name): """Download and return a path to a sample from the FFmpeg test suite. Data is handled by :func:`cached_download`. See the `FFmpeg Automated Test Environment `_ """ return cached_download('http://fate.ffmpeg.org/fate-suite/' + name, os.path.join('fate-suite', name.replace('/', os.path.sep))) def curated(name): """Download and return a path to a sample that is curated by the PyAV developers. Data is handled by :func:`cached_download`. 
""" return cached_download('https://pyav.org/datasets/' + name, os.path.join('pyav-curated', name.replace('/', os.path.sep))) PyAV-8.1.0/av/deprecation.py000066400000000000000000000041561416312437500156300ustar00rootroot00000000000000import functools import warnings class AVDeprecationWarning(DeprecationWarning): pass class AttributeRenamedWarning(AVDeprecationWarning): pass class MethodDeprecationWarning(AVDeprecationWarning): pass # DeprecationWarning is not printed by default (unless in __main__). We # really want these to be seen, but also to use the "correct" base classes. # So we're putting a filter in place to show our warnings. The users can # turn them back off if they want. warnings.filterwarnings('default', '', AVDeprecationWarning) class renamed_attr(object): """Proxy for renamed attributes (or methods) on classes. Getting and setting values will be redirected to the provided name, and warnings will be issues every time. """ def __init__(self, new_name): self.new_name = new_name self._old_name = None def old_name(self, cls): if self._old_name is None: for k, v in vars(cls).items(): if v is self: self._old_name = k break return self._old_name def __get__(self, instance, cls): old_name = self.old_name(cls) warnings.warn('{0}.{1} is deprecated; please use {0}.{2}.'.format( cls.__name__, old_name, self.new_name, ), AttributeRenamedWarning, stacklevel=2) return getattr(instance if instance is not None else cls, self.new_name) def __set__(self, instance, value): old_name = self.old_name(instance.__class__) warnings.warn('{0}.{1} is deprecated; please use {0}.{2}.'.format( instance.__class__.__name__, old_name, self.new_name, ), AttributeRenamedWarning, stacklevel=2) setattr(instance, self.new_name, value) class method(object): def __init__(self, func): functools.update_wrapper(self, func, ('__name__', '__doc__')) self.func = func def __get__(self, instance, cls): warning = MethodDeprecationWarning('{}.{} is deprecated.'.format( cls.__name__, 
self.func.__name__)) warnings.warn(warning, stacklevel=2) return self.func.__get__(instance, cls) PyAV-8.1.0/av/descriptor.pxd000066400000000000000000000010171416312437500156450ustar00rootroot00000000000000cimport libav as lib cdef class Descriptor(object): # These are present as: # - AVCodecContext.av_class (same as avcodec_get_class()) # - AVFormatContext.av_class (same as avformat_get_class()) # - AVFilterContext.av_class (same as avfilter_get_class()) # - AVCodec.priv_class # - AVOutputFormat.priv_class # - AVInputFormat.priv_class # - AVFilter.priv_class cdef const lib.AVClass *ptr cdef object _options # Option list cache. cdef Descriptor wrap_avclass(const lib.AVClass*) PyAV-8.1.0/av/descriptor.pyx000066400000000000000000000045541416312437500157030ustar00rootroot00000000000000cimport libav as lib from .option cimport Option, OptionChoice, wrap_option, wrap_option_choice cdef object _cinit_sentinel = object() cdef Descriptor wrap_avclass(const lib.AVClass *ptr): if ptr == NULL: return None cdef Descriptor obj = Descriptor(_cinit_sentinel) obj.ptr = ptr return obj cdef class Descriptor(object): def __cinit__(self, sentinel): if sentinel is not _cinit_sentinel: raise RuntimeError('Cannot construct av.Descriptor') property name: def __get__(self): return self.ptr.class_name if self.ptr.class_name else None property options: def __get__(self): cdef const lib.AVOption *ptr = self.ptr.option cdef const lib.AVOption *choice_ptr cdef Option option cdef OptionChoice option_choice cdef bint choice_is_default if self._options is None: options = [] ptr = self.ptr.option while ptr != NULL and ptr.name != NULL: if ptr.type == lib.AV_OPT_TYPE_CONST: ptr += 1 continue choices = [] if ptr.unit != NULL: # option has choices (matching const options) choice_ptr = self.ptr.option while choice_ptr != NULL and choice_ptr.name != NULL: if choice_ptr.type != lib.AV_OPT_TYPE_CONST or choice_ptr.unit != ptr.unit: choice_ptr += 1 continue choice_is_default = (choice_ptr.default_val.i64 
== ptr.default_val.i64 or ptr.type == lib.AV_OPT_TYPE_FLAGS and choice_ptr.default_val.i64 & ptr.default_val.i64) option_choice = wrap_option_choice(choice_ptr, choice_is_default) choices.append(option_choice) choice_ptr += 1 option = wrap_option(tuple(choices), ptr) options.append(option) ptr += 1 self._options = tuple(options) return self._options def __repr__(self): return '<%s %s at 0x%x>' % (self.__class__.__name__, self.name, id(self)) PyAV-8.1.0/av/dictionary.pxd000066400000000000000000000002661416312437500156410ustar00rootroot00000000000000cimport libav as lib cdef class _Dictionary(object): cdef lib.AVDictionary *ptr cpdef _Dictionary copy(self) cdef _Dictionary wrap_dictionary(lib.AVDictionary *input_) PyAV-8.1.0/av/dictionary.pyx000066400000000000000000000031471416312437500156670ustar00rootroot00000000000000try: from collections.abc import MutableMapping except ImportError: from collections import MutableMapping from av.error cimport err_check cdef class _Dictionary(object): def __cinit__(self, *args, **kwargs): for arg in args: self.update(arg) if kwargs: self.update(kwargs) def __dealloc__(self): if self.ptr != NULL: lib.av_dict_free(&self.ptr) def __getitem__(self, str key): cdef lib.AVDictionaryEntry *element = lib.av_dict_get(self.ptr, key, NULL, 0) if element != NULL: return element.value else: raise KeyError(key) def __setitem__(self, str key, str value): err_check(lib.av_dict_set(&self.ptr, key, value, 0)) def __delitem__(self, str key): err_check(lib.av_dict_set(&self.ptr, key, NULL, 0)) def __len__(self): return err_check(lib.av_dict_count(self.ptr)) def __iter__(self): cdef lib.AVDictionaryEntry *element = NULL while True: element = lib.av_dict_get(self.ptr, "", element, lib.AV_DICT_IGNORE_SUFFIX) if element == NULL: break yield element.key def __repr__(self): return 'av.Dictionary(%r)' % dict(self) cpdef _Dictionary copy(self): cdef _Dictionary other = Dictionary() lib.av_dict_copy(&other.ptr, self.ptr, 0) return other class 
Dictionary(_Dictionary, MutableMapping): pass cdef _Dictionary wrap_dictionary(lib.AVDictionary *input_): cdef _Dictionary output = Dictionary() output.ptr = input_ return output PyAV-8.1.0/av/enum.pxd000066400000000000000000000001021416312437500144250ustar00rootroot00000000000000cpdef define_enum( name, module, items, bint is_flags=* ) PyAV-8.1.0/av/enum.pyx000066400000000000000000000247211416312437500144670ustar00rootroot00000000000000""" PyAV provides enumeration and flag classes that are similar to the stdlib ``enum`` module that shipped with Python 3.4. PyAV's enums are a little more forgiving to preserve backwards compatibility with earlier PyAV patterns. e.g., they can be freely compared to strings or integers for names and values respectively. """ from collections import OrderedDict import sys try: import copyreg except ImportError: import copy_reg as copyreg cdef sentinel = object() class EnumType(type): def __new__(mcl, name, bases, attrs, *args): # Just adapting the method signature. return super().__new__(mcl, name, bases, attrs) def __init__(self, name, bases, attrs, items): self._by_name = {} self._by_value = {} self._all = [] for spec in items: self._create(*spec) def _create(self, name, value, doc=None, by_value_only=False): # We only have one instance per value. 
try: item = self._by_value[value] except KeyError: item = self(sentinel, name, value, doc) self._by_value[value] = item if not by_value_only: setattr(self, name, item) self._all.append(item) self._by_name[name] = item return item def __len__(self): return len(self._all) def __iter__(self): return iter(self._all) def __getitem__(self, key): if isinstance(key, str): return self._by_name[key] if isinstance(key, int): try: return self._by_value[key] except KeyError: pass if issubclass(self, EnumFlag): return self._get_multi_flags(key) raise KeyError(key) if isinstance(key, self): return key raise TypeError("{0} indices must be str, int, or {0}".format( self.__name__, )) def _get(self, long value, bint create=False): try: return self._by_value[value] except KeyError: pass if not create: return return self._create('{}_{}'.format(self.__name__.upper(), value), value, by_value_only=True) def _get_multi_flags(self, long value): try: return self._by_value[value] except KeyError: pass flags = [] cdef long to_find = value for item in self: if item.value & to_find: flags.append(item) to_find = to_find ^ item.value if not to_find: break if to_find: raise KeyError(value) name = '|'.join(f.name for f in flags) cdef EnumFlag combo = self._create(name, value, by_value_only=True) combo.flags = tuple(flags) return combo def get(self, key, default=None, create=False): try: return self[key] except KeyError: if create: return self._get(key, create=True) return default def property(self, *args, **kwargs): return EnumProperty(self, *args, **kwargs) def _unpickle(mod_name, cls_name, item_name): mod = __import__(mod_name, fromlist=['.']) cls = getattr(mod, cls_name) return cls[item_name] copyreg.constructor(_unpickle) cdef class EnumItem(object): """ Enumerations are when an attribute may only take on a single value at once, and they are represented as integers in the FFmpeg API. We associate names with each value that are easier to operate with. 
 Consider :data:`av.codec.context.SkipType`, which is the type of the :attr:`CodecContext.skip_frame` attribute:: >>> fh = av.open(video_path) >>> cc = fh.streams.video[0].codec_context >>> # The skip_frame attribute has a name and value: >>> cc.skip_frame.name 'DEFAULT' >>> cc.skip_frame.value 0 >>> # You can compare it to strings and ints: >>> cc.skip_frame == 'DEFAULT' True >>> cc.skip_frame == 0 True >>> # You can assign strings and ints: >>> cc.skip_frame = 'NONKEY' >>> cc.skip_frame == 'NONKEY' True >>> cc.skip_frame == 32 True """ cdef readonly str name cdef readonly int value cdef Py_hash_t _hash def __cinit__(self, sentinel_, str name, int value, doc=None): if sentinel_ is not sentinel: raise RuntimeError("Cannot instantiate {}.".format(self.__class__.__name__)) self.name = name self.value = value self.__doc__ = doc # This is not cdef because it doesn't work if it is. # We need to establish a hash that doesn't collide with anything that # would return true from `__eq__`. This is because these enums (vs # the stdlib ones) are weakly typed (they will compare against string # names and int values), and if we have the same hash AND are equal, # then they will be equivalent as keys in a dictionary, which is weird. cdef Py_hash_t hash_ = value + 1 if hash_ == hash(name): hash_ += 1 self._hash = hash_ def __repr__(self): return '<{}.{}:{}(0x{:x})>'.format( self.__class__.__module__, self.__class__.__name__, self.name, self.value, ) def __str__(self): return self.name def __int__(self): return self.value def __hash__(self): return self._hash def __reduce__(self): return (_unpickle, (self.__class__.__module__, self.__class__.__name__, self.name)) def __eq__(self, other): if isinstance(other, str): if self.name == other: # The quick method. 
return True try: other_inst = self.__class__._by_name[other] except KeyError: raise ValueError("{} does not have item named {!r}".format( self.__class__.__name__, other, )) else: return self is other_inst if isinstance(other, int): if self.value == other: return True if other in self.__class__._by_value: return False raise ValueError("{} does not have item valued {}".format( self.__class__.__name__, other, )) if isinstance(other, self.__class__): return self is other raise TypeError("'==' not supported between {} and {}".format( self.__class__.__name__, type(other).__name__, )) def __ne__(self, other): return not (self == other) cdef class EnumFlag(EnumItem): """ Flags are sets of boolean attributes, which the FFmpeg API represents as individual bits in a larger integer which you manipulate with the bitwise operators. We associate names with each flag that are easier to operate with. Consider :data:`CodecContextFlags`, which is the type of the :attr:`CodecContext.flags` attribute, and the set of boolean properties:: >>> fh = av.open(video_path) >>> cc = fh.streams.video[0].codec_context >>> cc.flags >>> # You can set flags via bitwise operations with the objects, names, or values: >>> cc.flags |= cc.flags.OUTPUT_CORRUPT >>> cc.flags |= 'GLOBAL_HEADER' >>> cc.flags >>> # You can test flags via bitwise operations with objects, names, or values: >>> bool(cc.flags & cc.flags.OUTPUT_CORRUPT) True >>> bool(cc.flags & 'QSCALE') False >>> # There are boolean properties for each flag: >>> cc.output_corrupt True >>> cc.qscale False >>> # You can set them: >>> cc.qscale = True >>> cc.flags """ cdef readonly tuple flags def __cinit__(self, sentinel, name, value, doc=None): self.flags = (self, ) def __and__(self, other): if not isinstance(other, int): other = self.__class__[other].value value = self.value & other return self.__class__._get_multi_flags(value) def __or__(self, other): if not isinstance(other, int): other = self.__class__[other].value value = self.value | other 
return self.__class__._get_multi_flags(value) def __xor__(self, other): if not isinstance(other, int): other = self.__class__[other].value value = self.value ^ other return self.__class__._get_multi_flags(value) def __invert__(self): # This can't result in a flag, but is helpful. return ~self.value def __nonzero__(self): return bool(self.value) cdef class EnumProperty(object): cdef object enum cdef object fget cdef object fset cdef public __doc__ def __init__(self, enum, fget, fset=None, doc=None): self.enum = enum self.fget = fget self.fset = fset self.__doc__ = doc or fget.__doc__ def setter(self, fset): self.fset = fset return self def __get__(self, inst, owner): if inst is not None: value = self.fget(inst) return self.enum.get(value, create=True) else: return self def __set__(self, inst, value): item = self.enum.get(value) self.fset(inst, item.value) def flag_property(self, name, doc=None): item = self.enum[name] cdef int item_value = item.value class Property(property): pass @Property def _property(inst): return bool(self.fget(inst) & item_value) if self.fset: @_property.setter def _property(inst, value): if value: flags = self.fget(inst) | item_value else: flags = self.fget(inst) & ~item_value self.fset(inst, flags) _property.__doc__ = doc or item.__doc__ _property._enum_item = item return _property cpdef define_enum(name, module, items, bint is_flags=False): if is_flags: base_cls = EnumFlag else: base_cls = EnumItem cls = EnumType(name, (base_cls, ), {'__module__': module}, items) return cls PyAV-8.1.0/av/error.pxd000066400000000000000000000002071416312437500146200ustar00rootroot00000000000000 cdef int stash_exception(exc_info=*) cpdef int err_check(int res, filename=*) except -1 cpdef make_error(int res, filename=*, log=*) PyAV-8.1.0/av/error.pyx000066400000000000000000000243211416312437500146500ustar00rootroot00000000000000cimport libav as lib from av.logging cimport get_last_error from threading import local import errno import os import sys import 
traceback from av.enum import define_enum # Will get extended with all of the exceptions. __all__ = [ 'ErrorType', 'FFmpegError', 'LookupError', 'HTTPError', 'HTTPClientError', 'UndefinedError', ] cpdef code_to_tag(int code): """Convert an integer error code into 4-byte tag. >>> code_to_tag(1953719668) b'test' """ return bytes(( code & 0xff, (code >> 8) & 0xff, (code >> 16) & 0xff, (code >> 24) & 0xff, )) cpdef tag_to_code(bytes tag): """Convert a 4-byte error tag into an integer code. >>> tag_to_code(b'test') 1953719668 """ if len(tag) != 4: raise ValueError("Error tags are 4 bytes.") return ( (tag[0]) + (tag[1] << 8) + (tag[2] << 16) + (tag[3] << 24) ) class FFmpegError(Exception): """Exception class for errors from within FFmpeg. .. attribute:: errno FFmpeg's integer error code. .. attribute:: strerror FFmpeg's error message. .. attribute:: filename The filename that was being operated on (if available). .. attribute:: type The :class:`av.error.ErrorType` enum value for the error type. .. attribute:: log The tuple from :func:`av.logging.get_last_log`, or ``None``. """ def __init__(self, code, message, filename=None, log=None): args = [code, message] if filename or log: args.append(filename) if log: args.append(log) super(FFmpegError, self).__init__(*args) self.args = tuple(args) # FileNotFoundError/etc. only pulls 2 args. self.type = ErrorType.get(code, create=True) @property def errno(self): return self.args[0] @property def strerror(self): return self.args[1] @property def filename(self): try: return self.args[2] except IndexError: pass @property def log(self): try: return self.args[3] except IndexError: pass def __str__(self): msg = f'[Errno {self.errno}] {self.strerror}' if self.filename: msg = f'{msg}: {self.filename!r}' if self.log: msg = f'{msg}; last error log: [{self.log[1].strip()}] {self.log[2].strip()}' return msg # Our custom error, used in callbacks. 
cdef int c_PYAV_STASHED_ERROR = tag_to_code(b'PyAV') cdef str PYAV_STASHED_ERROR_message = 'Error in PyAV callback' # Bases for the FFmpeg-based exceptions. class LookupError(FFmpegError, LookupError): pass class HTTPError(FFmpegError): pass class HTTPClientError(FFmpegError): pass # Tuples of (enum_name, enum_value, exc_name, exc_base). _ffmpeg_specs = ( ('BSF_NOT_FOUND', -lib.AVERROR_BSF_NOT_FOUND, 'BSFNotFoundError', LookupError), ('BUG', -lib.AVERROR_BUG, None, RuntimeError), ('BUFFER_TOO_SMALL', -lib.AVERROR_BUFFER_TOO_SMALL, None, ValueError), ('DECODER_NOT_FOUND', -lib.AVERROR_DECODER_NOT_FOUND, None, LookupError), ('DEMUXER_NOT_FOUND', -lib.AVERROR_DEMUXER_NOT_FOUND, None, LookupError), ('ENCODER_NOT_FOUND', -lib.AVERROR_ENCODER_NOT_FOUND, None, LookupError), ('EOF', -lib.AVERROR_EOF, 'EOFError', EOFError), ('EXIT', -lib.AVERROR_EXIT, None, None), ('EXTERNAL', -lib.AVERROR_EXTERNAL, None, None), ('FILTER_NOT_FOUND', -lib.AVERROR_FILTER_NOT_FOUND, None, LookupError), ('INVALIDDATA', -lib.AVERROR_INVALIDDATA, 'InvalidDataError', ValueError), ('MUXER_NOT_FOUND', -lib.AVERROR_MUXER_NOT_FOUND, None, LookupError), ('OPTION_NOT_FOUND', -lib.AVERROR_OPTION_NOT_FOUND, None, LookupError), ('PATCHWELCOME', -lib.AVERROR_PATCHWELCOME, 'PatchWelcomeError', None), ('PROTOCOL_NOT_FOUND', -lib.AVERROR_PROTOCOL_NOT_FOUND, None, LookupError), ('UNKNOWN', -lib.AVERROR_UNKNOWN, None, None), ('EXPERIMENTAL', -lib.AVERROR_EXPERIMENTAL, None, None), ('INPUT_CHANGED', -lib.AVERROR_INPUT_CHANGED, None, None), ('OUTPUT_CHANGED', -lib.AVERROR_OUTPUT_CHANGED, None, None), ('HTTP_BAD_REQUEST', -lib.AVERROR_HTTP_BAD_REQUEST, 'HTTPBadRequestError', HTTPClientError), ('HTTP_UNAUTHORIZED', -lib.AVERROR_HTTP_UNAUTHORIZED, 'HTTPUnauthorizedError', HTTPClientError), ('HTTP_FORBIDDEN', -lib.AVERROR_HTTP_FORBIDDEN, 'HTTPForbiddenError', HTTPClientError), ('HTTP_NOT_FOUND', -lib.AVERROR_HTTP_NOT_FOUND, 'HTTPNotFoundError', HTTPClientError), ('HTTP_OTHER_4XX', -lib.AVERROR_HTTP_OTHER_4XX, 
'HTTPOtherClientError', HTTPClientError), ('HTTP_SERVER_ERROR', -lib.AVERROR_HTTP_SERVER_ERROR, 'HTTPServerError', HTTPError), ('PYAV_CALLBACK', c_PYAV_STASHED_ERROR, 'PyAVCallbackError', RuntimeError), ) # The actual enum. ErrorType = define_enum("ErrorType", __name__, [x[:2] for x in _ffmpeg_specs]) # It has to be monkey-patched. ErrorType.__doc__ = """An enumeration of FFmpeg's error types. .. attribute:: tag The FFmpeg byte tag for the error. .. attribute:: strerror The error message that would be returned. """ ErrorType.tag = property(lambda self: code_to_tag(self.value)) for enum in ErrorType: # Mimic the errno module. globals()[enum.name] = enum if enum.value == c_PYAV_STASHED_ERROR: enum.strerror = PYAV_STASHED_ERROR_message else: enum.strerror = lib.av_err2str(-enum.value) # Mimic the builtin exception types. # See https://www.python.org/dev/peps/pep-3151/#new-exception-classes # Use the named ones we have, otherwise default to OSError for anything in errno. r''' See this command for the count of POSIX codes used: egrep -IR 'AVERROR\(E[A-Z]+\)' vendor/ffmpeg-4.2 |\ sed -E 's/.*AVERROR\((E[A-Z]+)\).*/\1/' | \ sort | uniq -c The biggest ones that don't map to PEP 3151 builtins: 2106 EINVAL -> ValueError 649 EIO -> IOError (if it is distinct from OSError) 4080 ENOMEM -> MemoryError 340 ENOSYS -> NotImplementedError 35 ERANGE -> OverflowError ''' classes = {} def _extend_builtin(name, codes): base = getattr(__builtins__, name, OSError) cls = type(name, (FFmpegError, base), dict(__module__=__name__)) # Register in builder. for code in codes: classes[code] = cls # Register in module. globals()[name] = cls __all__.append(name) return cls # PEP 3151 builtins.
_extend_builtin('PermissionError', (errno.EACCES, errno.EPERM)) _extend_builtin('BlockingIOError', (errno.EAGAIN, errno.EALREADY, errno.EINPROGRESS, errno.EWOULDBLOCK)) _extend_builtin('ChildProcessError', (errno.ECHILD, )) _extend_builtin('ConnectionAbortedError', (errno.ECONNABORTED, )) _extend_builtin('ConnectionRefusedError', (errno.ECONNREFUSED, )) _extend_builtin('ConnectionResetError', (errno.ECONNRESET, )) _extend_builtin('FileExistsError', (errno.EEXIST, )) _extend_builtin('InterruptedError', (errno.EINTR, )) _extend_builtin('IsADirectoryError', (errno.EISDIR, )) _extend_builtin('FileNotFoundError', (errno.ENOENT, )) _extend_builtin('NotADirectoryError', (errno.ENOTDIR, )) _extend_builtin('BrokenPipeError', (errno.EPIPE, errno.ESHUTDOWN)) _extend_builtin('ProcessLookupError', (errno.ESRCH, )) _extend_builtin('TimeoutError', (errno.ETIMEDOUT, )) # Other obvious ones. _extend_builtin('ValueError', (errno.EINVAL, )) _extend_builtin('MemoryError', (errno.ENOMEM, )) _extend_builtin('NotImplementedError', (errno.ENOSYS, )) _extend_builtin('OverflowError', (errno.ERANGE, )) if IOError is not OSError: _extend_builtin('IOError', (errno.EIO, )) # The rest of them (for now) _extend_builtin('OSError', [code for code in errno.errorcode if code not in classes]) # Classes for the FFmpeg errors. for enum_name, code, name, base in _ffmpeg_specs: name = name or enum_name.title().replace('_', '') + 'Error' if base is None: bases = (FFmpegError, ) elif issubclass(base, FFmpegError): bases = (base, ) else: bases = (FFmpegError, base) cls = type(name, bases, dict(__module__=__name__)) # Register in builder. classes[code] = cls # Register in module. globals()[name] = cls __all__.append(name) # Storage for stashing. 
cdef object _local = local() cdef int _err_count = 0 cdef int stash_exception(exc_info=None): global _err_count existing = getattr(_local, 'exc_info', None) if existing is not None: print >> sys.stderr, 'PyAV library exception being dropped:' traceback.print_exception(*existing) _err_count -= 1 # Balance out the +=1 that is coming. exc_info = exc_info or sys.exc_info() _local.exc_info = exc_info if exc_info: _err_count += 1 return -c_PYAV_STASHED_ERROR cdef int _last_log_count = 0 cpdef int err_check(int res, filename=None) except -1: """Raise appropriate exceptions from library return code.""" global _err_count global _last_log_count # Check for stashed exceptions. if _err_count: exc_info = getattr(_local, 'exc_info', None) if exc_info is not None: _err_count -= 1 _local.exc_info = None raise exc_info[0], exc_info[1], exc_info[2] if res >= 0: return res # Grab details from the last log. log_count, last_log = get_last_error() if log_count > _last_log_count: _last_log_count = log_count log = last_log else: log = None raise make_error(res, filename, log) class UndefinedError(FFmpegError): """Fallback exception type in case FFmpeg returns an error we don't know about.""" pass cpdef make_error(int res, filename=None, log=None): cdef int code = -res cdef bytes py_buffer cdef char *c_buffer if code == c_PYAV_STASHED_ERROR: message = PYAV_STASHED_ERROR_message else: # Jump through some hoops due to Python 2 in same codebase. py_buffer = b"\0" * lib.AV_ERROR_MAX_STRING_SIZE c_buffer = py_buffer lib.av_strerror(res, c_buffer, lib.AV_ERROR_MAX_STRING_SIZE) py_buffer = c_buffer message = py_buffer.decode('latin1') # Default to the OS if we have no message; this should not get called. 
message = message or os.strerror(code) cls = classes.get(code, UndefinedError) return cls(code, message, filename, log) PyAV-8.1.0/av/filter/000077500000000000000000000000001416312437500142405ustar00rootroot00000000000000PyAV-8.1.0/av/filter/__init__.py000066400000000000000000000001471416312437500163530ustar00rootroot00000000000000from .filter import Filter, FilterFlags, filter_descriptor, filters_available from .graph import Graph PyAV-8.1.0/av/filter/context.pxd000066400000000000000000000006121416312437500164400ustar00rootroot00000000000000cimport libav as lib from av.filter.filter cimport Filter from av.filter.graph cimport Graph cdef class FilterContext(object): cdef lib.AVFilterContext *ptr cdef readonly Graph graph cdef readonly Filter filter cdef object _inputs cdef object _outputs cdef bint inited cdef FilterContext wrap_filter_context(Graph graph, Filter filter, lib.AVFilterContext *ptr) PyAV-8.1.0/av/filter/context.pyx000066400000000000000000000072511416312437500164730ustar00rootroot00000000000000from libc.string cimport memcpy from av.audio.frame cimport AudioFrame, alloc_audio_frame from av.dictionary cimport _Dictionary from av.dictionary import Dictionary from av.error cimport err_check from av.filter.pad cimport alloc_filter_pads from av.frame cimport Frame from av.video.frame cimport VideoFrame, alloc_video_frame cdef object _cinit_sentinel = object() cdef FilterContext wrap_filter_context(Graph graph, Filter filter, lib.AVFilterContext *ptr): cdef FilterContext self = FilterContext(_cinit_sentinel) self.graph = graph self.filter = filter self.ptr = ptr return self cdef class FilterContext(object): def __cinit__(self, sentinel): if sentinel is not _cinit_sentinel: raise RuntimeError('cannot construct FilterContext') def __repr__(self): return '<av.FilterContext %s of %r at 0x%x>' % ( (repr(self.ptr.name) if self.ptr.name != NULL else '<NULL>') if self.ptr != NULL else 'None', self.filter.ptr.name if self.filter and self.filter.ptr != NULL else None, id(self), ) property name: def
__get__(self): if self.ptr.name != NULL: return self.ptr.name property inputs: def __get__(self): if self._inputs is None: self._inputs = alloc_filter_pads(self.filter, self.ptr.input_pads, True, self) return self._inputs property outputs: def __get__(self): if self._outputs is None: self._outputs = alloc_filter_pads(self.filter, self.ptr.output_pads, False, self) return self._outputs def init(self, args=None, **kwargs): if self.inited: raise ValueError('already inited') if args and kwargs: raise ValueError('cannot init from args and kwargs') cdef _Dictionary dict_ = None cdef char *c_args = NULL if args or not kwargs: if args: c_args = args err_check(lib.avfilter_init_str(self.ptr, c_args)) else: dict_ = Dictionary(kwargs) err_check(lib.avfilter_init_dict(self.ptr, &dict_.ptr)) self.inited = True if dict_: raise ValueError('unused config: %s' % ', '.join(sorted(dict_))) def link_to(self, FilterContext input_, int output_idx=0, int input_idx=0): err_check(lib.avfilter_link(self.ptr, output_idx, input_.ptr, input_idx)) def push(self, Frame frame): if self.filter.name in ('abuffer', 'buffer'): err_check(lib.av_buffersrc_write_frame(self.ptr, frame.ptr)) return # Delegate to the input. if len(self.inputs) != 1: raise ValueError('cannot delegate push without single input; found %d' % len(self.inputs)) if not self.inputs[0].link: raise ValueError('cannot delegate push without linked input') self.inputs[0].linked.context.push(frame) def pull(self): cdef Frame frame if self.filter.name == 'buffersink': frame = alloc_video_frame() elif self.filter.name == 'abuffersink': frame = alloc_audio_frame() else: # Delegate to the output. 
if len(self.outputs) != 1: raise ValueError('cannot delegate pull without single output; found %d' % len(self.outputs)) if not self.outputs[0].link: raise ValueError('cannot delegate pull without linked output') return self.outputs[0].linked.context.pull() self.graph.configure() err_check(lib.av_buffersink_get_frame(self.ptr, frame.ptr)) frame._init_user_attributes() return frame PyAV-8.1.0/av/filter/filter.pxd000066400000000000000000000004001416312437500162340ustar00rootroot00000000000000cimport libav as lib from av.descriptor cimport Descriptor cdef class Filter(object): cdef const lib.AVFilter *ptr cdef object _inputs cdef object _outputs cdef Descriptor _descriptor cdef Filter wrap_filter(const lib.AVFilter *ptr) PyAV-8.1.0/av/filter/filter.pyx000066400000000000000000000056661416312437500163040ustar00rootroot00000000000000cimport libav as lib from av.descriptor cimport wrap_avclass from av.filter.pad cimport alloc_filter_pads cdef object _cinit_sentinel = object() cdef Filter wrap_filter(const lib.AVFilter *ptr): cdef Filter filter_ = Filter(_cinit_sentinel) filter_.ptr = ptr return filter_ cpdef enum FilterFlags: DYNAMIC_INPUTS = lib.AVFILTER_FLAG_DYNAMIC_INPUTS DYNAMIC_OUTPUTS = lib.AVFILTER_FLAG_DYNAMIC_OUTPUTS SLICE_THREADS = lib.AVFILTER_FLAG_SLICE_THREADS SUPPORT_TIMELINE_GENERIC = lib.AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC SUPPORT_TIMELINE_INTERNAL = lib.AVFILTER_FLAG_SUPPORT_TIMELINE_INTERNAL cdef class Filter(object): def __cinit__(self, name): if name is _cinit_sentinel: return if not isinstance(name, str): raise TypeError('takes a filter name as a string') self.ptr = lib.avfilter_get_by_name(name) if not self.ptr: raise ValueError('no filter %s' % name) property descriptor: def __get__(self): if self._descriptor is None: self._descriptor = wrap_avclass(self.ptr.priv_class) return self._descriptor property options: def __get__(self): if self.descriptor is None: return return self.descriptor.options property name: def __get__(self): return 
self.ptr.name property description: def __get__(self): return self.ptr.description property flags: def __get__(self): return self.ptr.flags property dynamic_inputs: def __get__(self): return bool(self.ptr.flags & lib.AVFILTER_FLAG_DYNAMIC_INPUTS) property dynamic_outputs: def __get__(self): return bool(self.ptr.flags & lib.AVFILTER_FLAG_DYNAMIC_OUTPUTS) property timeline_support: def __get__(self): return bool(self.ptr.flags & lib.AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC) property slice_threads: def __get__(self): return bool(self.ptr.flags & lib.AVFILTER_FLAG_SLICE_THREADS) property command_support: def __get__(self): return self.ptr.process_command != NULL property inputs: def __get__(self): if self._inputs is None: self._inputs = alloc_filter_pads(self, self.ptr.inputs, True) return self._inputs property outputs: def __get__(self): if self._outputs is None: self._outputs = alloc_filter_pads(self, self.ptr.outputs, False) return self._outputs cdef get_filter_names(): names = set() cdef const lib.AVFilter *ptr cdef void *opaque = NULL while True: ptr = lib.av_filter_iterate(&opaque) if ptr: names.add(ptr.name) else: break return names filters_available = get_filter_names() filter_descriptor = wrap_avclass(lib.avfilter_get_class()) PyAV-8.1.0/av/filter/graph.pxd000066400000000000000000000007621416312437500160630ustar00rootroot00000000000000cimport libav as lib from av.filter.context cimport FilterContext cdef class Graph(object): cdef lib.AVFilterGraph *ptr cdef readonly bint configured cpdef configure(self, bint auto_buffer=*, bint force=*) cdef dict _name_counts cdef str _get_unique_name(self, str name) cdef _register_context(self, FilterContext) cdef _auto_register(self) cdef int _nb_filters_seen cdef dict _context_by_ptr cdef dict _context_by_name cdef dict _context_by_type PyAV-8.1.0/av/filter/graph.pyx000066400000000000000000000165431416312437500161140ustar00rootroot00000000000000from fractions import Fraction from av.audio.format cimport AudioFormat from 
av.audio.frame cimport AudioFrame from av.audio.layout cimport AudioLayout from av.error cimport err_check from av.filter.context cimport FilterContext, wrap_filter_context from av.filter.filter cimport Filter, wrap_filter from av.video.format cimport VideoFormat from av.video.frame cimport VideoFrame cdef class Graph(object): def __cinit__(self): self.ptr = lib.avfilter_graph_alloc() self.configured = False self._name_counts = {} self._nb_filters_seen = 0 self._context_by_ptr = {} self._context_by_name = {} self._context_by_type = {} def __dealloc__(self): if self.ptr: # This frees the graph, filter contexts, links, etc.. lib.avfilter_graph_free(&self.ptr) cdef str _get_unique_name(self, str name): count = self._name_counts.get(name, 0) self._name_counts[name] = count + 1 if count: return '%s_%s' % (name, count) else: return name cpdef configure(self, bint auto_buffer=True, bint force=False): if self.configured and not force: return # if auto_buffer: # for ctx in self._context_by_ptr.itervalues(): # for in_ in ctx.inputs: # if not in_.link: # if in_.type == 'video': # pass err_check(lib.avfilter_graph_config(self.ptr, NULL)) self.configured = True # We get auto-inserted stuff here. self._auto_register() # def parse_string(self, str filter_str): # err_check(lib.avfilter_graph_parse2(self.ptr, filter_str, &self.inputs, &self.outputs)) # # cdef lib.AVFilterInOut *input_ # while input_ != NULL: # print 'in ', input_.pad_idx, (input_.name if input_.name != NULL else ''), input_.filter_ctx.name, input_.filter_ctx.filter.name # input_ = input_.next # # cdef lib.AVFilterInOut *output # while output != NULL: # print 'out', output.pad_idx, (output.name if output.name != NULL else ''), output.filter_ctx.name, output.filter_ctx.filter.name # output = output.next # NOTE: Only FFmpeg supports this. 
# def dump(self): # cdef char *buf = lib.avfilter_graph_dump(self.ptr, "") # cdef str ret = buf # lib.av_free(buf) # return ret def add(self, filter, args=None, **kwargs): cdef Filter cy_filter if isinstance(filter, str): cy_filter = Filter(filter) elif isinstance(filter, Filter): cy_filter = filter else: raise TypeError("filter must be a string or Filter") cdef str name = self._get_unique_name(kwargs.pop('name', None) or cy_filter.name) cdef lib.AVFilterContext *ptr = lib.avfilter_graph_alloc_filter(self.ptr, cy_filter.ptr, name) if not ptr: raise RuntimeError("Could not allocate AVFilterContext") # Manually construct this context (so we can return it). cdef FilterContext ctx = wrap_filter_context(self, cy_filter, ptr) ctx.init(args, **kwargs) self._register_context(ctx) # There might have been automatic contexts added (e.g. resamplers, # fifos, and scalers). It is more likely to see them after the graph # is configured, but we want to be safe. self._auto_register() return ctx cdef _register_context(self, FilterContext ctx): self._context_by_ptr[ctx.ptr] = ctx self._context_by_name[ctx.ptr.name] = ctx self._context_by_type.setdefault(ctx.filter.ptr.name, []).append(ctx) cdef _auto_register(self): cdef int i cdef lib.AVFilterContext *c_ctx cdef Filter filter_ cdef FilterContext py_ctx # We assume that filters are never removed from the graph. At this # point we don't expose that in the API, so we should be okay...
for i in range(self._nb_filters_seen, self.ptr.nb_filters): c_ctx = self.ptr.filters[i] if c_ctx in self._context_by_ptr: continue filter_ = wrap_filter(c_ctx.filter) py_ctx = wrap_filter_context(self, filter_, c_ctx) self._register_context(py_ctx) self._nb_filters_seen = self.ptr.nb_filters def add_buffer(self, template=None, width=None, height=None, format=None, name=None): if template is not None: if width is None: width = template.width if height is None: height = template.height if format is None: format = template.format if width is None: raise ValueError('missing width') if height is None: raise ValueError('missing height') if format is None: raise ValueError('missing format') return self.add( 'buffer', name=name, video_size=f'{width}x{height}', pix_fmt=str(int(VideoFormat(format))), time_base='1/1000', pixel_aspect='1/1', ) def add_abuffer(self, template=None, sample_rate=None, format=None, layout=None, channels=None, name=None, time_base=None): """ Convenience method for adding `abuffer `_. 
""" if template is not None: if sample_rate is None: sample_rate = template.sample_rate if format is None: format = template.format if layout is None: layout = template.layout.name if channels is None: channels = template.channels if time_base is None: time_base = template.time_base if sample_rate is None: raise ValueError('missing sample_rate') if format is None: raise ValueError('missing format') if layout is None and channels is None: raise ValueError('missing layout or channels') if time_base is None: time_base = Fraction(1, sample_rate) kwargs = dict( sample_rate=str(sample_rate), sample_fmt=AudioFormat(format).name, time_base=str(time_base), ) if layout: kwargs['channel_layout'] = AudioLayout(layout).name if channels: kwargs['channels'] = str(channels) return self.add('abuffer', name=name, **kwargs) def push(self, frame): if isinstance(frame, VideoFrame): contexts = self._context_by_type.get('buffer', []) elif isinstance(frame, AudioFrame): contexts = self._context_by_type.get('abuffer', []) else: raise ValueError('can only push VideoFrame or AudioFrame', type(frame)) if len(contexts) != 1: raise ValueError('can only auto-push with single buffer; found %s' % len(contexts)) contexts[0].push(frame) def pull(self): vsinks = self._context_by_type.get('buffersink', []) asinks = self._context_by_type.get('abuffersink', []) nsinks = len(vsinks) + len(asinks) if nsinks != 1: raise ValueError('can only auto-pull with single sink; found %s' % nsinks) return (vsinks or asinks)[0].pull() PyAV-8.1.0/av/filter/link.pxd000066400000000000000000000005171416312437500157150ustar00rootroot00000000000000cimport libav as lib from av.filter.graph cimport Graph from av.filter.pad cimport FilterContextPad cdef class FilterLink(object): cdef readonly Graph graph cdef lib.AVFilterLink *ptr cdef FilterContextPad _input cdef FilterContextPad _output cdef FilterLink wrap_filter_link(Graph graph, lib.AVFilterLink *ptr) 
PyAV-8.1.0/av/filter/link.pyx000066400000000000000000000032161416312437500157410ustar00rootroot00000000000000cimport libav as lib from av.filter.graph cimport Graph cdef _cinit_sentinel = object() cdef class FilterLink(object): def __cinit__(self, sentinel): if sentinel is not _cinit_sentinel: raise RuntimeError('cannot instantiate FilterLink') property input: def __get__(self): if self._input: return self._input cdef lib.AVFilterContext *cctx = self.ptr.src cdef unsigned int i for i in range(cctx.nb_outputs): if self.ptr == cctx.outputs[i]: break else: raise RuntimeError('could not find link in context') ctx = self.graph._context_by_ptr[cctx] self._input = ctx.outputs[i] return self._input property output: def __get__(self): if self._output: return self._output cdef lib.AVFilterContext *cctx = self.ptr.dst cdef unsigned int i for i in range(cctx.nb_inputs): if self.ptr == cctx.inputs[i]: break else: raise RuntimeError('could not find link in context') try: ctx = self.graph._context_by_ptr[cctx] except KeyError: raise RuntimeError('could not find context in graph', (cctx.name, cctx.filter.name)) self._output = ctx.inputs[i] return self._output cdef FilterLink wrap_filter_link(Graph graph, lib.AVFilterLink *ptr): cdef FilterLink link = FilterLink(_cinit_sentinel) link.graph = graph link.ptr = ptr return link PyAV-8.1.0/av/filter/pad.pxd000066400000000000000000000010161416312437500155170ustar00rootroot00000000000000cimport libav as lib from av.filter.context cimport FilterContext from av.filter.filter cimport Filter from av.filter.link cimport FilterLink cdef class FilterPad(object): cdef readonly Filter filter cdef readonly FilterContext context cdef readonly bint is_input cdef readonly int index cdef const lib.AVFilterPad *base_ptr cdef class FilterContextPad(FilterPad): cdef FilterLink _link cdef tuple alloc_filter_pads(Filter, const lib.AVFilterPad *ptr, bint is_input, FilterContext context=?) 
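The boolean properties generated by `EnumProperty.flag_property` in av/enum.pyx above reduce to ordinary bit manipulation: OR the flag's bit in to set it, AND with the bit's complement to clear it. A minimal standalone sketch follows; the two flag constants are illustrative values chosen for this example, not taken from FFmpeg's headers:

```python
# Hypothetical flag bits, for illustration only.
GLOBAL_HEADER = 1 << 22
QSCALE = 1 << 1


def set_flag(flags: int, bit: int, value: bool) -> int:
    """Mirror of the setter that flag_property() generates:
    OR the bit in to set the flag, AND with ~bit to clear it."""
    return flags | bit if value else flags & ~bit


flags = set_flag(0, GLOBAL_HEADER, True)
assert bool(flags & GLOBAL_HEADER)   # the named bit is now set
assert not bool(flags & QSCALE)      # other bits are untouched
assert set_flag(flags, GLOBAL_HEADER, False) == 0
```

This is why the generated getter in av/enum.pyx is simply `bool(self.fget(inst) & item_value)`: testing a flag is a single AND against the flag's bit value.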
PyAV-8.1.0/av/filter/pad.pyx000066400000000000000000000053311416312437500155500ustar00rootroot00000000000000from av.filter.link cimport wrap_filter_link cdef object _cinit_sentinel = object() cdef class FilterPad(object): def __cinit__(self, sentinel): if sentinel is not _cinit_sentinel: raise RuntimeError('cannot construct FilterPad') def __repr__(self): return '' % ( self.filter.name, 'inputs' if self.is_input else 'outputs', self.index, self.name, self.type, ) property is_output: def __get__(self): return not self.is_input property name: def __get__(self): return lib.avfilter_pad_get_name(self.base_ptr, self.index) @property def type(self): """ The media type of this filter pad. Examples: `'audio'`, `'video'`, `'subtitle'`. :type: str """ return lib.av_get_media_type_string(lib.avfilter_pad_get_type(self.base_ptr, self.index)) cdef class FilterContextPad(FilterPad): def __repr__(self): return '' % ( self.filter.name, 'inputs' if self.is_input else 'outputs', self.index, self.context.name, self.name, self.type, ) property link: def __get__(self): if self._link: return self._link cdef lib.AVFilterLink **links = self.context.ptr.inputs if self.is_input else self.context.ptr.outputs cdef lib.AVFilterLink *link = links[self.index] if not link: return self._link = wrap_filter_link(self.context.graph, link) return self._link property linked: def __get__(self): cdef FilterLink link = self.link if link: return link.input if self.is_input else link.output cdef tuple alloc_filter_pads(Filter filter, const lib.AVFilterPad *ptr, bint is_input, FilterContext context=None): if not ptr: return () pads = [] # We need to be careful and check our bounds if we know what they are, # since the arrays on a AVFilterContext are not NULL terminated. 
cdef int i = 0 cdef int count = (context.ptr.nb_inputs if is_input else context.ptr.nb_outputs) if context is not None else -1 cdef FilterPad pad while (i < count or count < 0) and lib.avfilter_pad_get_name(ptr, i): pad = FilterPad(_cinit_sentinel) if context is None else FilterContextPad(_cinit_sentinel) pads.append(pad) pad.filter = filter pad.context = context pad.is_input = is_input pad.base_ptr = ptr pad.index = i i += 1 return tuple(pads) PyAV-8.1.0/av/format.pxd000066400000000000000000000003621416312437500147610ustar00rootroot00000000000000cimport libav as lib cdef class ContainerFormat(object): cdef readonly str name cdef lib.AVInputFormat *iptr cdef lib.AVOutputFormat *optr cdef ContainerFormat build_container_format(lib.AVInputFormat*, lib.AVOutputFormat*) PyAV-8.1.0/av/format.pyx000066400000000000000000000156171416312437500150170ustar00rootroot00000000000000cimport libav as lib from av.descriptor cimport wrap_avclass from av.enum cimport define_enum cdef object _cinit_bypass_sentinel = object() cdef ContainerFormat build_container_format(lib.AVInputFormat* iptr, lib.AVOutputFormat* optr): if not iptr and not optr: raise ValueError('needs input format or output format') cdef ContainerFormat format = ContainerFormat.__new__(ContainerFormat, _cinit_bypass_sentinel) format.iptr = iptr format.optr = optr format.name = optr.name if optr else iptr.name return format Flags = define_enum('Flags', __name__, ( ('NOFILE', lib.AVFMT_NOFILE), ('NEEDNUMBER', lib.AVFMT_NEEDNUMBER, """Needs '%d' in filename."""), ('SHOW_IDS', lib.AVFMT_SHOW_IDS, """Show format stream IDs numbers."""), ('GLOBALHEADER', lib.AVFMT_GLOBALHEADER, """Format wants global header."""), ('NOTIMESTAMPS', lib.AVFMT_NOTIMESTAMPS, """Format does not need / have any timestamps."""), ('GENERIC_INDEX', lib.AVFMT_GENERIC_INDEX, """Use generic index building code."""), ('TS_DISCONT', lib.AVFMT_TS_DISCONT, """Format allows timestamp discontinuities. 
Note, muxers always require valid (monotone) timestamps"""), ('VARIABLE_FPS', lib.AVFMT_VARIABLE_FPS, """Format allows variable fps."""), ('NODIMENSIONS', lib.AVFMT_NODIMENSIONS, """Format does not need width/height"""), ('NOSTREAMS', lib.AVFMT_NOSTREAMS, """Format does not require any streams"""), ('NOBINSEARCH', lib.AVFMT_NOBINSEARCH, """Format does not allow to fall back on binary search via read_timestamp"""), ('NOGENSEARCH', lib.AVFMT_NOGENSEARCH, """Format does not allow to fall back on generic search"""), ('NO_BYTE_SEEK', lib.AVFMT_NO_BYTE_SEEK, """Format does not allow seeking by bytes"""), ('ALLOW_FLUSH', lib.AVFMT_ALLOW_FLUSH, """Format allows flushing. If not set, the muxer will not receive a NULL packet in the write_packet function."""), ('TS_NONSTRICT', lib.AVFMT_TS_NONSTRICT, """Format does not require strictly increasing timestamps, but they must still be monotonic."""), ('TS_NEGATIVE', lib.AVFMT_TS_NEGATIVE, """Format allows muxing negative timestamps. If not set the timestamp will be shifted in av_write_frame and av_interleaved_write_frame so they start from 0. The user or muxer can override this through AVFormatContext.avoid_negative_ts"""), ('SEEK_TO_PTS', lib.AVFMT_SEEK_TO_PTS, """Seeking is based on PTS"""), ), is_flags=True) cdef class ContainerFormat(object): """Descriptor of a container format. :param str name: The name of the format. :param str mode: ``'r'`` or ``'w'`` for input and output formats; defaults to None which will grab either. """ def __cinit__(self, name, mode=None): if name is _cinit_bypass_sentinel: return # We need to hold onto the original name because AVInputFormat.name # is actually comma-separated, and so we need to remember which one # this was. self.name = name # Searches comma-separated names.
if mode is None or mode == 'r': self.iptr = lib.av_find_input_format(name) if mode is None or mode == 'w': self.optr = lib.av_guess_format(name, NULL, NULL) if not self.iptr and not self.optr: raise ValueError('no container format %r' % name) def __repr__(self): return '<av.%s %r>' % (self.__class__.__name__, self.name) property descriptor: def __get__(self): if self.iptr: return wrap_avclass(self.iptr.priv_class) else: return wrap_avclass(self.optr.priv_class) property options: def __get__(self): return self.descriptor.options property input: """An input-only view of this format.""" def __get__(self): if self.iptr == NULL: return None elif self.optr == NULL: return self else: return build_container_format(self.iptr, NULL) property output: """An output-only view of this format.""" def __get__(self): if self.optr == NULL: return None elif self.iptr == NULL: return self else: return build_container_format(NULL, self.optr) property is_input: def __get__(self): return self.iptr != NULL property is_output: def __get__(self): return self.optr != NULL property long_name: def __get__(self): # We prefer the output names since the inputs may represent # multiple formats. 
return self.optr.long_name if self.optr else self.iptr.long_name property extensions: def __get__(self): cdef set exts = set() if self.iptr and self.iptr.extensions: exts.update(self.iptr.extensions.split(',')) if self.optr and self.optr.extensions: exts.update(self.optr.extensions.split(',')) return exts @Flags.property def flags(self): return ( (self.iptr.flags if self.iptr else 0) | (self.optr.flags if self.optr else 0) ) no_file = flags.flag_property('NOFILE') need_number = flags.flag_property('NEEDNUMBER') show_ids = flags.flag_property('SHOW_IDS') global_header = flags.flag_property('GLOBALHEADER') no_timestamps = flags.flag_property('NOTIMESTAMPS') generic_index = flags.flag_property('GENERIC_INDEX') ts_discont = flags.flag_property('TS_DISCONT') variable_fps = flags.flag_property('VARIABLE_FPS') no_dimensions = flags.flag_property('NODIMENSIONS') no_streams = flags.flag_property('NOSTREAMS') no_bin_search = flags.flag_property('NOBINSEARCH') no_gen_search = flags.flag_property('NOGENSEARCH') no_byte_seek = flags.flag_property('NO_BYTE_SEEK') allow_flush = flags.flag_property('ALLOW_FLUSH') ts_nonstrict = flags.flag_property('TS_NONSTRICT') ts_negative = flags.flag_property('TS_NEGATIVE') seek_to_pts = flags.flag_property('SEEK_TO_PTS') cdef get_output_format_names(): names = set() cdef const lib.AVOutputFormat *ptr cdef void *opaque = NULL while True: ptr = lib.av_muxer_iterate(&opaque) if ptr: names.add(ptr.name) else: break return names cdef get_input_format_names(): names = set() cdef const lib.AVInputFormat *ptr cdef void *opaque = NULL while True: ptr = lib.av_demuxer_iterate(&opaque) if ptr: names.add(ptr.name) else: break return names formats_available = get_output_format_names() formats_available.update(get_input_format_names()) format_descriptor = wrap_avclass(lib.avformat_get_class()) PyAV-8.1.0/av/frame.pxd000066400000000000000000000007051416312437500145640ustar00rootroot00000000000000cimport libav as lib from av.packet cimport Packet from 
av.sidedata.sidedata cimport _SideDataContainer cdef class Frame(object): cdef lib.AVFrame *ptr # We define our own time. cdef lib.AVRational _time_base cdef _rebase_time(self, lib.AVRational) cdef _SideDataContainer _side_data cdef readonly int index cdef _copy_internal_attributes(self, Frame source, bint data_layout=?) cdef _init_user_attributes(self) PyAV-8.1.0/av/frame.pyx000066400000000000000000000076141416312437500146170ustar00rootroot00000000000000from av.utils cimport avrational_to_fraction, to_avrational from fractions import Fraction from av.sidedata.sidedata import SideDataContainer cdef class Frame(object): """ Base class for audio and video frames. See also :class:`~av.audio.frame.AudioFrame` and :class:`~av.video.frame.VideoFrame`. """ def __cinit__(self, *args, **kwargs): with nogil: self.ptr = lib.av_frame_alloc() def __dealloc__(self): with nogil: # This calls av_frame_unref, and then frees the pointer. # That's it. lib.av_frame_free(&self.ptr) def __repr__(self): return '<av.%s #%d pts=%s at 0x%x>' % ( self.__class__.__name__, self.index, self.pts, id(self), ) cdef _copy_internal_attributes(self, Frame source, bint data_layout=True): """Mimic another frame.""" self.index = source.index self._time_base = source._time_base lib.av_frame_copy_props(self.ptr, source.ptr) if data_layout: # TODO: Assert we don't have any data yet. self.ptr.format = source.ptr.format self.ptr.width = source.ptr.width self.ptr.height = source.ptr.height self.ptr.channel_layout = source.ptr.channel_layout self.ptr.channels = source.ptr.channels cdef _init_user_attributes(self): pass # Dummy to match the API of the others. 
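Frame timestamps such as ``pts`` above are plain integer counts of a stream's ``time_base``, and rebasing a timestamp between two time bases (what ``_rebase_time`` does via ``av_rescale_q``) is just exact fraction arithmetic. A minimal pure-Python sketch of that relationship; the 1/1000 and 1/90000 bases here are illustrative values, not taken from this file:

```python
from fractions import Fraction

def rescale_pts(pts, src_base, dst_base):
    # pts * src_base gives the instant in seconds; dividing by dst_base
    # re-expresses the same instant in destination ticks.
    seconds = pts * src_base
    return round(seconds / dst_base)

src = Fraction(1, 1000)    # millisecond ticks
dst = Fraction(1, 90000)   # 90 kHz clock, common in MPEG streams
print(rescale_pts(2000, src, dst))  # 180000 ticks == 2.0 seconds
```

This mirrors how ``Frame.time`` is derived: ``float(pts * time_base)`` gives seconds regardless of which time base the stream uses.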
cdef _rebase_time(self, lib.AVRational dst): if not dst.num: raise ValueError('Cannot rebase to zero time.') if not self._time_base.num: self._time_base = dst return if self._time_base.num == dst.num and self._time_base.den == dst.den: return if self.ptr.pts != lib.AV_NOPTS_VALUE: self.ptr.pts = lib.av_rescale_q( self.ptr.pts, self._time_base, dst ) self._time_base = dst property dts: """ The decoding timestamp in :attr:`time_base` units for this frame. :type: int """ def __get__(self): if self.ptr.pkt_dts == lib.AV_NOPTS_VALUE: return None return self.ptr.pkt_dts property pts: """ The presentation timestamp in :attr:`time_base` units for this frame. This is the time at which the frame should be shown to the user. :type: int """ def __get__(self): if self.ptr.pts == lib.AV_NOPTS_VALUE: return None return self.ptr.pts def __set__(self, value): if value is None: self.ptr.pts = lib.AV_NOPTS_VALUE else: self.ptr.pts = value property time: """ The presentation time in seconds for this frame. This is the time at which the frame should be shown to the user. :type: float """ def __get__(self): if self.ptr.pts == lib.AV_NOPTS_VALUE: return None else: return float(self.ptr.pts) * self._time_base.num / self._time_base.den property time_base: """ The unit of time (in fractional seconds) in which timestamps are expressed. :type: fractions.Fraction """ def __get__(self): if self._time_base.num: return avrational_to_fraction(&self._time_base) def __set__(self, value): to_avrational(value, &self._time_base) property is_corrupt: """ Is this frame corrupt? 
:type: bool """ def __get__(self): return self.ptr.decode_error_flags != 0 or bool(self.ptr.flags & lib.AV_FRAME_FLAG_CORRUPT) @property def side_data(self): if self._side_data is None: self._side_data = SideDataContainer(self) return self._side_data PyAV-8.1.0/av/logging.pxd000066400000000000000000000000301416312437500151070ustar00rootroot00000000000000 cpdef get_last_error() PyAV-8.1.0/av/logging.pyx000066400000000000000000000226111416312437500151450ustar00rootroot00000000000000""" FFmpeg has a logging system that it uses extensively. PyAV hooks into that system to translate FFmpeg logs into Python's `logging system `_. If you are not already using Python's logging system, you can initialize it quickly with:: import logging logging.basicConfig() .. _disable_logging: Disabling Logging ~~~~~~~~~~~~~~~~~ You can disable hooking the logging system with an environment variable:: export PYAV_LOGGING=off or at runtime with :func:`restore_default_callback`. This will leave (or restore) the FFmpeg logging system, which prints to the terminal. This may also result in raised errors having less detailed messages. API Reference ~~~~~~~~~~~~~ """ from __future__ import absolute_import from libc.stdio cimport fprintf, printf, stderr from libc.stdlib cimport free, malloc cimport libav as lib from threading import Lock import logging import os import sys try: from threading import get_ident except ImportError: from thread import get_ident cdef bint is_py35 = sys.version_info[:2] >= (3, 5) cdef str decode_error_handler = 'backslashreplace' if is_py35 else 'replace' # Library levels. # QUIET = lib.AV_LOG_QUIET # -8; not really a level. PANIC = lib.AV_LOG_PANIC # 0 FATAL = lib.AV_LOG_FATAL # 8 ERROR = lib.AV_LOG_ERROR WARNING = lib.AV_LOG_WARNING INFO = lib.AV_LOG_INFO VERBOSE = lib.AV_LOG_VERBOSE DEBUG = lib.AV_LOG_DEBUG TRACE = lib.AV_LOG_TRACE # Mimicking stdlib. 
CRITICAL = FATAL cpdef adapt_level(int level): """Convert a library log level to a Python log level.""" if level <= lib.AV_LOG_FATAL: # Includes PANIC return 50 # logging.CRITICAL elif level <= lib.AV_LOG_ERROR: return 40 # logging.ERROR elif level <= lib.AV_LOG_WARNING: return 30 # logging.WARNING elif level <= lib.AV_LOG_INFO: return 20 # logging.INFO elif level <= lib.AV_LOG_VERBOSE: return 10 # logging.DEBUG elif level <= lib.AV_LOG_DEBUG: return 5 # Lower than any logging constant. else: # lib.AV_LOG_TRACE return 1 # ... yeah. # While we start with the level quite low, Python defaults to INFO, and so # they will not show. The logging system can add significant overhead, so # be wary of dropping this lower. cdef int level_threshold = lib.AV_LOG_VERBOSE # ... but let's limit ourselves to WARNING (assuming nobody already did this). if 'libav' not in logging.Logger.manager.loggerDict: logging.getLogger('libav').setLevel(logging.WARNING) def get_level(): """Return current FFmpeg logging threshold. See :func:`set_level`.""" return level_threshold def set_level(int level): """set_level(level) Sets logging threshold when converting from FFmpeg's logging system to Python's. It is recommended to use the constants available in this module to set the level: ``PANIC``, ``FATAL``, ``ERROR``, ``WARNING``, ``INFO``, ``VERBOSE``, and ``DEBUG``. While less efficient, it is generally preferable to modify logging with Python's :mod:`logging`, e.g.:: logging.getLogger('libav').setLevel(logging.ERROR) PyAV defaults to translating everything except ``AV_LOG_DEBUG``, so this function is only necessary to use if you want to see those messages as well. 
``AV_LOG_DEBUG`` will be translated to a level 5 message, which is lower than any builtin Python logging level, so you must lower that as well:: logging.getLogger().setLevel(5) """ global level_threshold level_threshold = level def restore_default_callback(): """Revert back to FFmpeg's log callback, which prints to the terminal.""" lib.av_log_set_callback(lib.av_log_default_callback) cdef bint print_after_shutdown = False def get_print_after_shutdown(): """Will logging continue to ``stderr`` after Python shutdown?""" return print_after_shutdown def set_print_after_shutdown(v): """Set if logging should continue to ``stderr`` after Python shutdown.""" global print_after_shutdown print_after_shutdown = bool(v) cdef bint skip_repeated = True cdef skip_lock = Lock() cdef object last_log = None cdef int skip_count = 0 def get_skip_repeated(): """Will identical logs be emitted?""" return skip_repeated def set_skip_repeated(v): """Set if identical logs will be emitted""" global skip_repeated skip_repeated = bool(v) # For error reporting. cdef object last_error = None cdef int error_count = 0 cpdef get_last_error(): """Get the last log that was at least ``ERROR``.""" if error_count: with skip_lock: return error_count, last_error else: return 0, None cdef global_captures = [] cdef thread_captures = {} cdef class Capture(object): """A context manager for capturing logs. :param bool local: Should logs from all threads be captured, or just one this object is constructed in? e.g.:: with Capture() as logs: # Do something. 
for log in logs: print(log.message) """ cdef readonly list logs cdef list captures def __init__(self, bint local=True): self.logs = [] if local: self.captures = thread_captures.setdefault(get_ident(), []) else: self.captures = global_captures def __enter__(self): self.captures.append(self.logs) return self.logs def __exit__(self, type_, value, traceback): self.captures.pop(-1) cdef struct log_context: lib.AVClass *class_ const char *name cdef const char *log_context_name(void *ptr) nogil: cdef log_context *obj = <log_context*>ptr return obj.name cdef lib.AVClass log_class log_class.item_name = log_context_name cpdef log(int level, str name, str message): """Send a log through the library logging system. This is mostly for testing. """ cdef log_context *obj = <log_context*>malloc(sizeof(log_context)) obj.class_ = &log_class obj.name = name lib.av_log(obj, level, "%s", message) free(obj) cdef void log_callback(void *ptr, int level, const char *format, lib.va_list args) nogil: cdef bint inited = lib.Py_IsInitialized() if not inited and not print_after_shutdown: return # Format the message. cdef char message[1024] lib.vsnprintf(message, 1023, format, args) # Get the name. cdef const char *name = NULL cdef lib.AVClass *cls = (<lib.AVClass**>ptr)[0] if ptr else NULL if cls and cls.item_name: # I'm not 100% on this, but this should be static, and so # it doesn't matter if the AVClass that returned it vanishes or not. name = cls.item_name(ptr) if not inited: fprintf(stderr, "av.logging (after shutdown): %s[%d]: %s\n", name, level, message) return with gil: try: log_callback_gil(level, name, message) except Exception as e: fprintf(stderr, "av.logging: exception while handling %s[%d]: %s\n", name, level, message) # For some reason lib.PyErr_PrintEx(0) won't work. 
exc, type_, tb = sys.exc_info() lib.PyErr_Display(exc, type_, tb) cdef log_callback_gil(int level, const char *c_name, const char *c_message): global error_count global skip_count global last_log global last_error name = <str>c_name if c_name is not NULL else '' message = (<bytes>c_message).decode('utf8', decode_error_handler) log = (level, name, message) # We have to filter it ourselves, but we will still process it in general so # it is available to our error handling. # Note that FFmpeg's levels are backwards from Python's. cdef bint is_interesting = level <= level_threshold # Skip messages which are identical to the previous. # TODO: Be smarter about threads. cdef bint is_repeated = False cdef object repeat_log = None with skip_lock: if is_interesting: is_repeated = skip_repeated and last_log == log if is_repeated: skip_count += 1 elif skip_count: # Now that we have hit the end of the repeat cycle, tally up how many. if skip_count == 1: repeat_log = last_log else: repeat_log = ( last_log[0], last_log[1], "%s (repeated %d more times)" % (last_log[2], skip_count) ) skip_count = 0 last_log = log # Hold onto errors for err_check. if level == lib.AV_LOG_ERROR: error_count += 1 last_error = log if repeat_log is not None: log_callback_emit(repeat_log) if is_interesting and not is_repeated: log_callback_emit(log) cdef log_callback_emit(log): lib_level, name, message = log captures = thread_captures.get(get_ident()) or global_captures if captures: captures[-1].append(log) return py_level = adapt_level(lib_level) logger_name = 'libav.' + name if name else 'libav.generic' logger = logging.getLogger(logger_name) logger.log(py_level, message.strip()) # Start the magic! # We allow the user to fully disable the logging system as it will not play # nicely with subinterpreters due to FFmpeg-created threads. 
if os.environ.get('PYAV_LOGGING') != 'off': lib.av_log_set_callback(log_callback) PyAV-8.1.0/av/option.pxd000066400000000000000000000005661416312437500150070ustar00rootroot00000000000000cimport libav as lib cdef class BaseOption(object): cdef const lib.AVOption *ptr cdef class Option(BaseOption): cdef readonly tuple choices cdef class OptionChoice(BaseOption): cdef readonly bint is_default cdef Option wrap_option(tuple choices, const lib.AVOption *ptr) cdef OptionChoice wrap_option_choice(const lib.AVOption *ptr, bint is_default) PyAV-8.1.0/av/option.pyx000066400000000000000000000132001416312437500150210ustar00rootroot00000000000000cimport libav as lib from av.enum cimport define_enum from av.utils cimport flag_in_bitfield cdef object _cinit_sentinel = object() cdef Option wrap_option(tuple choices, const lib.AVOption *ptr): if ptr == NULL: return None cdef Option obj = Option(_cinit_sentinel) obj.ptr = ptr obj.choices = choices return obj OptionType = define_enum('OptionType', __name__, ( ('FLAGS', lib.AV_OPT_TYPE_FLAGS), ('INT', lib.AV_OPT_TYPE_INT), ('INT64', lib.AV_OPT_TYPE_INT64), ('DOUBLE', lib.AV_OPT_TYPE_DOUBLE), ('FLOAT', lib.AV_OPT_TYPE_FLOAT), ('STRING', lib.AV_OPT_TYPE_STRING), ('RATIONAL', lib.AV_OPT_TYPE_RATIONAL), ('BINARY', lib.AV_OPT_TYPE_BINARY), ('DICT', lib.AV_OPT_TYPE_DICT), # ('UINT64', lib.AV_OPT_TYPE_UINT64), # Added recently, and not yet used AFAICT. 
('CONST', lib.AV_OPT_TYPE_CONST), ('IMAGE_SIZE', lib.AV_OPT_TYPE_IMAGE_SIZE), ('PIXEL_FMT', lib.AV_OPT_TYPE_PIXEL_FMT), ('SAMPLE_FMT', lib.AV_OPT_TYPE_SAMPLE_FMT), ('VIDEO_RATE', lib.AV_OPT_TYPE_VIDEO_RATE), ('DURATION', lib.AV_OPT_TYPE_DURATION), ('COLOR', lib.AV_OPT_TYPE_COLOR), ('CHANNEL_LAYOUT', lib.AV_OPT_TYPE_CHANNEL_LAYOUT), ('BOOL', lib.AV_OPT_TYPE_BOOL), )) cdef tuple _INT_TYPES = ( lib.AV_OPT_TYPE_FLAGS, lib.AV_OPT_TYPE_INT, lib.AV_OPT_TYPE_INT64, lib.AV_OPT_TYPE_PIXEL_FMT, lib.AV_OPT_TYPE_SAMPLE_FMT, lib.AV_OPT_TYPE_DURATION, lib.AV_OPT_TYPE_CHANNEL_LAYOUT, lib.AV_OPT_TYPE_BOOL, ) OptionFlags = define_enum('OptionFlags', __name__, ( ('ENCODING_PARAM', lib.AV_OPT_FLAG_ENCODING_PARAM), ('DECODING_PARAM', lib.AV_OPT_FLAG_DECODING_PARAM), ('AUDIO_PARAM', lib.AV_OPT_FLAG_AUDIO_PARAM), ('VIDEO_PARAM', lib.AV_OPT_FLAG_VIDEO_PARAM), ('SUBTITLE_PARAM', lib.AV_OPT_FLAG_SUBTITLE_PARAM), ('EXPORT', lib.AV_OPT_FLAG_EXPORT), ('READONLY', lib.AV_OPT_FLAG_READONLY), ('FILTERING_PARAM', lib.AV_OPT_FLAG_FILTERING_PARAM), ), is_flags=True) cdef class BaseOption(object): def __cinit__(self, sentinel): if sentinel is not _cinit_sentinel: raise RuntimeError('Cannot construct av.%s' % self.__class__.__name__) property name: def __get__(self): return self.ptr.name property help: def __get__(self): return self.ptr.help if self.ptr.help != NULL else '' property flags: def __get__(self): return self.ptr.flags # Option flags property is_encoding_param: def __get__(self): return flag_in_bitfield(self.ptr.flags, lib.AV_OPT_FLAG_ENCODING_PARAM) property is_decoding_param: def __get__(self): return flag_in_bitfield(self.ptr.flags, lib.AV_OPT_FLAG_DECODING_PARAM) property is_audio_param: def __get__(self): return flag_in_bitfield(self.ptr.flags, lib.AV_OPT_FLAG_AUDIO_PARAM) property is_video_param: def __get__(self): return flag_in_bitfield(self.ptr.flags, lib.AV_OPT_FLAG_VIDEO_PARAM) property is_subtitle_param: def __get__(self): return flag_in_bitfield(self.ptr.flags, 
lib.AV_OPT_FLAG_SUBTITLE_PARAM) property is_export: def __get__(self): return flag_in_bitfield(self.ptr.flags, lib.AV_OPT_FLAG_EXPORT) property is_readonly: def __get__(self): return flag_in_bitfield(self.ptr.flags, lib.AV_OPT_FLAG_READONLY) property is_filtering_param: def __get__(self): return flag_in_bitfield(self.ptr.flags, lib.AV_OPT_FLAG_FILTERING_PARAM) cdef class Option(BaseOption): property type: def __get__(self): return OptionType._get(self.ptr.type, create=True) property offset: """ This can be used to find aliases of an option. Options in a particular descriptor with the same offset are aliases. """ def __get__(self): return self.ptr.offset property default: def __get__(self): if self.ptr.type in _INT_TYPES: return self.ptr.default_val.i64 if self.ptr.type in (lib.AV_OPT_TYPE_DOUBLE, lib.AV_OPT_TYPE_FLOAT, lib.AV_OPT_TYPE_RATIONAL): return self.ptr.default_val.dbl if self.ptr.type in (lib.AV_OPT_TYPE_STRING, lib.AV_OPT_TYPE_BINARY, lib.AV_OPT_TYPE_IMAGE_SIZE, lib.AV_OPT_TYPE_VIDEO_RATE, lib.AV_OPT_TYPE_COLOR): return self.ptr.default_val.str if self.ptr.default_val.str != NULL else '' def _norm_range(self, value): if self.ptr.type in _INT_TYPES: return int(value) return value property min: def __get__(self): return self._norm_range(self.ptr.min) property max: def __get__(self): return self._norm_range(self.ptr.max) def __repr__(self): return '<av.%s %s (%s at *0x%x) at 0x%x>' % ( self.__class__.__name__, self.name, self.type, self.offset, id(self), ) cdef OptionChoice wrap_option_choice(const lib.AVOption *ptr, bint is_default): if ptr == NULL: return None cdef OptionChoice obj = OptionChoice(_cinit_sentinel) obj.ptr = ptr obj.is_default = is_default return obj cdef class OptionChoice(BaseOption): """ Represents AV_OPT_TYPE_CONST options which are essentially choices of a non-const option with the same unit. 
""" property value: def __get__(self): return self.ptr.default_val.i64 def __repr__(self): return '' % (self.__class__.__name__, self.name, id(self)) PyAV-8.1.0/av/packet.pxd000066400000000000000000000007011416312437500147350ustar00rootroot00000000000000cimport libav as lib from av.buffer cimport Buffer from av.bytesource cimport ByteSource from av.stream cimport Stream cdef class Packet(Buffer): cdef lib.AVPacket struct cdef Stream _stream # We track our own time. cdef lib.AVRational _time_base cdef _rebase_time(self, lib.AVRational) # Hold onto the original reference. cdef ByteSource source cdef size_t _buffer_size(self) cdef void* _buffer_ptr(self) PyAV-8.1.0/av/packet.pyx000066400000000000000000000123611416312437500147670ustar00rootroot00000000000000cimport libav as lib from av.bytesource cimport bytesource from av.error cimport err_check from av.utils cimport avrational_to_fraction, to_avrational from av import deprecation cdef class Packet(Buffer): """A packet of encoded data within a :class:`~av.format.Stream`. This may, or may not include a complete object within a stream. :meth:`decode` must be called to extract encoded data. """ def __cinit__(self, input=None): with nogil: lib.av_init_packet(&self.struct) def __init__(self, input=None): cdef size_t size = 0 cdef ByteSource source = None if input is None: return if isinstance(input, (int, long)): size = input else: source = bytesource(input) size = source.length if size: err_check(lib.av_new_packet(&self.struct, size)) if source is not None: self.update(source) # TODO: Hold onto the source, and copy its pointer # instead of its data. # self.source = source def __dealloc__(self): with nogil: lib.av_packet_unref(&self.struct) def __repr__(self): return '' % ( self.__class__.__name__, self._stream.index if self._stream else 0, self.dts, self.pts, self.struct.size, id(self), ) # Buffer protocol. 
cdef size_t _buffer_size(self): return self.struct.size cdef void* _buffer_ptr(self): return self.struct.data cdef _rebase_time(self, lib.AVRational dst): if not dst.num: raise ValueError('Cannot rebase to zero time.') if not self._time_base.num: self._time_base = dst return if self._time_base.num == dst.num and self._time_base.den == dst.den: return lib.av_packet_rescale_ts(&self.struct, self._time_base, dst) self._time_base = dst def decode(self): """ Send the packet's data to the decoder and return a list of :class:`.AudioFrame`, :class:`.VideoFrame` or :class:`.SubtitleSet`. """ return self._stream.decode(self) @deprecation.method def decode_one(self): """ Send the packet's data to the decoder and return the first decoded frame. Returns ``None`` if there is no frame. .. warning:: This method is deprecated, as it silently discards any other frames which were decoded. """ res = self._stream.decode(self) return res[0] if res else None property stream_index: def __get__(self): return self.struct.stream_index property stream: """ The :class:`Stream` this packet was demuxed from. """ def __get__(self): return self._stream def __set__(self, Stream stream): self._stream = stream self.struct.stream_index = stream._stream.index property time_base: """ The unit of time (in fractional seconds) in which timestamps are expressed. :type: fractions.Fraction """ def __get__(self): return avrational_to_fraction(&self._time_base) def __set__(self, value): to_avrational(value, &self._time_base) property pts: """ The presentation timestamp in :attr:`time_base` units for this packet. This is the time at which the packet should be shown to the user. :type: int """ def __get__(self): if self.struct.pts != lib.AV_NOPTS_VALUE: return self.struct.pts def __set__(self, v): if v is None: self.struct.pts = lib.AV_NOPTS_VALUE else: self.struct.pts = v property dts: """ The decoding timestamp in :attr:`time_base` units for this packet. 
:type: int """ def __get__(self): if self.struct.dts != lib.AV_NOPTS_VALUE: return self.struct.dts def __set__(self, v): if v is None: self.struct.dts = lib.AV_NOPTS_VALUE else: self.struct.dts = v property pos: """ The byte position of this packet within the :class:`.Stream`. Returns `None` if it is not known. :type: int """ def __get__(self): if self.struct.pos != -1: return self.struct.pos property size: """ The size in bytes of this packet's data. :type: int """ def __get__(self): return self.struct.size property duration: """ The duration in :attr:`time_base` units for this packet. Returns `None` if it is not known. :type: int """ def __get__(self): if self.struct.duration != lib.AV_NOPTS_VALUE: return self.struct.duration property is_keyframe: def __get__(self): return bool(self.struct.flags & lib.AV_PKT_FLAG_KEY) property is_corrupt: def __get__(self): return bool(self.struct.flags & lib.AV_PKT_FLAG_CORRUPT) PyAV-8.1.0/av/plane.pxd000066400000000000000000000003041416312437500145640ustar00rootroot00000000000000from av.buffer cimport Buffer from av.frame cimport Frame cdef class Plane(Buffer): cdef Frame frame cdef int index cdef size_t _buffer_size(self) cdef void* _buffer_ptr(self) PyAV-8.1.0/av/plane.pyx000066400000000000000000000011151416312437500146120ustar00rootroot00000000000000 cdef class Plane(Buffer): """ Base class for audio and video planes. See also :class:`~av.audio.plane.AudioPlane` and :class:`~av.video.plane.VideoPlane`. 
""" def __cinit__(self, Frame frame, int index): self.frame = frame self.index = index def __repr__(self): return '' % ( self.__class__.__name__, self.buffer_size, self.buffer_ptr, id(self), ) cdef void* _buffer_ptr(self): return self.frame.ptr.extended_data[self.index] PyAV-8.1.0/av/sidedata/000077500000000000000000000000001416312437500145315ustar00rootroot00000000000000PyAV-8.1.0/av/sidedata/__init__.py000066400000000000000000000000001416312437500166300ustar00rootroot00000000000000PyAV-8.1.0/av/sidedata/motionvectors.pxd000066400000000000000000000004221416312437500201570ustar00rootroot00000000000000cimport libav as lib from av.frame cimport Frame from av.sidedata.sidedata cimport SideData cdef class _MotionVectors(SideData): cdef dict _vectors cdef int _len cdef class MotionVector(object): cdef _MotionVectors parent cdef lib.AVMotionVector *ptr PyAV-8.1.0/av/sidedata/motionvectors.pyx000066400000000000000000000052261416312437500202130ustar00rootroot00000000000000try: from collections.abc import Sequence except ImportError: from collections import Sequence cdef object _cinit_bypass_sentinel = object() # Cython doesn't let us inherit from the abstract Sequence, so we will subclass # it later. 
cdef class _MotionVectors(SideData): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self._vectors = {} self._len = self.ptr.size // sizeof(lib.AVMotionVector) def __repr__(self): return f'<av.sidedata.MotionVectors {self._len} vectors at 0x{<size_t>self.ptr.data:0x}>' def __getitem__(self, int index): try: return self._vectors[index] except KeyError: pass if index >= self._len: raise IndexError(index) vector = self._vectors[index] = MotionVector(_cinit_bypass_sentinel, self, index) return vector def __len__(self): return self._len def to_ndarray(self): import numpy as np return np.frombuffer(self, dtype=np.dtype([ ('source', 'int32'), ('w', 'uint8'), ('h', 'uint8'), ('src_x', 'int16'), ('src_y', 'int16'), ('dst_x', 'int16'), ('dst_y', 'int16'), ('flags', 'uint64'), ('motion_x', 'int32'), ('motion_y', 'int32'), ('motion_scale', 'uint16'), ], align=True)) class MotionVectors(_MotionVectors, Sequence): pass cdef class MotionVector(object): def __init__(self, sentinel, _MotionVectors parent, int index): if sentinel is not _cinit_bypass_sentinel: raise RuntimeError('cannot manually instantiate MotionVector') self.parent = parent cdef lib.AVMotionVector *base = <lib.AVMotionVector*>parent.ptr.data self.ptr = base + index def __repr__(self): return f'<av.sidedata.MotionVector {self.w}x{self.h} from {self.src_x},{self.src_y} to {self.dst_x},{self.dst_y}>' @property def source(self): return self.ptr.source @property def w(self): return self.ptr.w @property def h(self): return self.ptr.h @property def src_x(self): return self.ptr.src_x @property def src_y(self): return self.ptr.src_y @property def dst_x(self): return self.ptr.dst_x @property def dst_y(self): return self.ptr.dst_y @property def motion_x(self): return self.ptr.motion_x @property def motion_y(self): return self.ptr.motion_y @property def motion_scale(self): return self.ptr.motion_scale PyAV-8.1.0/av/sidedata/sidedata.pxd000066400000000000000000000006431416312437500170270ustar00rootroot00000000000000 cimport libav as lib from av.buffer cimport Buffer from av.dictionary cimport _Dictionary, wrap_dictionary from av.frame cimport Frame cdef class SideData(Buffer): cdef Frame 
frame cdef lib.AVFrameSideData *ptr cdef _Dictionary metadata cdef SideData wrap_side_data(Frame frame, int index) cdef class _SideDataContainer(object): cdef Frame frame cdef list _by_index cdef dict _by_type PyAV-8.1.0/av/sidedata/sidedata.pyx000066400000000000000000000062041416312437500170530ustar00rootroot00000000000000from av.enum cimport define_enum from av.sidedata.motionvectors import MotionVectors try: from collections.abc import Mapping except ImportError: from collections import Mapping cdef object _cinit_bypass_sentinel = object() Type = define_enum('Type', __name__, ( ('PANSCAN', lib.AV_FRAME_DATA_PANSCAN), ('A53_CC', lib.AV_FRAME_DATA_A53_CC), ('STEREO3D', lib.AV_FRAME_DATA_STEREO3D), ('MATRIXENCODING', lib.AV_FRAME_DATA_MATRIXENCODING), ('DOWNMIX_INFO', lib.AV_FRAME_DATA_DOWNMIX_INFO), ('REPLAYGAIN', lib.AV_FRAME_DATA_REPLAYGAIN), ('DISPLAYMATRIX', lib.AV_FRAME_DATA_DISPLAYMATRIX), ('AFD', lib.AV_FRAME_DATA_AFD), ('MOTION_VECTORS', lib.AV_FRAME_DATA_MOTION_VECTORS), ('SKIP_SAMPLES', lib.AV_FRAME_DATA_SKIP_SAMPLES), ('AUDIO_SERVICE_TYPE', lib.AV_FRAME_DATA_AUDIO_SERVICE_TYPE), ('MASTERING_DISPLAY_METADATA', lib.AV_FRAME_DATA_MASTERING_DISPLAY_METADATA), ('GOP_TIMECODE', lib.AV_FRAME_DATA_GOP_TIMECODE), ('SPHERICAL', lib.AV_FRAME_DATA_SPHERICAL), ('CONTENT_LIGHT_LEVEL', lib.AV_FRAME_DATA_CONTENT_LIGHT_LEVEL), ('ICC_PROFILE', lib.AV_FRAME_DATA_ICC_PROFILE), # These are deprecated. 
See https://github.com/PyAV-Org/PyAV/issues/607 # ('QP_TABLE_PROPERTIES', lib.AV_FRAME_DATA_QP_TABLE_PROPERTIES), # ('QP_TABLE_DATA', lib.AV_FRAME_DATA_QP_TABLE_DATA), )) cdef SideData wrap_side_data(Frame frame, int index): cdef lib.AVFrameSideDataType type_ = frame.ptr.side_data[index].type if type_ == lib.AV_FRAME_DATA_MOTION_VECTORS: return MotionVectors(_cinit_bypass_sentinel, frame, index) else: return SideData(_cinit_bypass_sentinel, frame, index) cdef class SideData(Buffer): def __init__(self, sentinel, Frame frame, int index): if sentinel is not _cinit_bypass_sentinel: raise RuntimeError('cannot manually instantiate SideData') self.frame = frame self.ptr = frame.ptr.side_data[index] self.metadata = wrap_dictionary(self.ptr.metadata) cdef size_t _buffer_size(self): return self.ptr.size cdef void* _buffer_ptr(self): return self.ptr.data cdef bint _buffer_writable(self): return False def __repr__(self): return f'<av.sidedata.SideData {self.type} {self.ptr.size} bytes at 0x{<size_t>self.ptr.data:0x}>' @property def type(self): return Type.get(self.ptr.type) or self.ptr.type cdef class _SideDataContainer(object): def __init__(self, Frame frame): self.frame = frame self._by_index = [] self._by_type = {} cdef int i cdef SideData data for i in range(self.frame.ptr.nb_side_data): data = wrap_side_data(frame, i) self._by_index.append(data) self._by_type[data.type] = data def __len__(self): return len(self._by_index) def __iter__(self): return iter(self._by_index) def __getitem__(self, key): if isinstance(key, int): return self._by_index[key] type_ = Type.get(key) return self._by_type[type_] class SideDataContainer(_SideDataContainer, Mapping): pass PyAV-8.1.0/av/stream.pxd000066400000000000000000000012321416312437500147610ustar00rootroot00000000000000from libc.stdint cimport int64_t cimport libav as lib from av.codec.context cimport CodecContext from av.container.core cimport Container from av.frame cimport Frame from av.packet cimport Packet cdef class Stream(object): # Stream attributes. 
cdef readonly Container container cdef lib.AVStream *_stream cdef readonly dict metadata # CodecContext attributes. cdef lib.AVCodecContext *_codec_context cdef const lib.AVCodec *_codec cdef readonly CodecContext codec_context # Private API. cdef _init(self, Container, lib.AVStream*) cdef _finalize_for_output(self) cdef Stream wrap_stream(Container, lib.AVStream*) PyAV-8.1.0/av/stream.pyx000066400000000000000000000231731416312437500150160ustar00rootroot00000000000000from cpython cimport PyWeakref_NewRef from libc.stdint cimport int64_t, uint8_t from libc.string cimport memcpy cimport libav as lib from av.codec.context cimport wrap_codec_context from av.error cimport err_check from av.packet cimport Packet from av.utils cimport ( avdict_to_dict, avrational_to_fraction, dict_to_avdict, to_avrational ) from av import deprecation cdef object _cinit_bypass_sentinel = object() cdef Stream wrap_stream(Container container, lib.AVStream *c_stream): """Build an av.Stream for an existing AVStream. The AVStream MUST be fully constructed and ready for use before this is called. """ # This better be the right one... 
assert container.ptr.streams[c_stream.index] == c_stream cdef Stream py_stream if c_stream.codec.codec_type == lib.AVMEDIA_TYPE_VIDEO: from av.video.stream import VideoStream py_stream = VideoStream.__new__(VideoStream, _cinit_bypass_sentinel) elif c_stream.codec.codec_type == lib.AVMEDIA_TYPE_AUDIO: from av.audio.stream import AudioStream py_stream = AudioStream.__new__(AudioStream, _cinit_bypass_sentinel) elif c_stream.codec.codec_type == lib.AVMEDIA_TYPE_SUBTITLE: from av.subtitles.stream import SubtitleStream py_stream = SubtitleStream.__new__(SubtitleStream, _cinit_bypass_sentinel) elif c_stream.codec.codec_type == lib.AVMEDIA_TYPE_DATA: from av.data.stream import DataStream py_stream = DataStream.__new__(DataStream, _cinit_bypass_sentinel) else: py_stream = Stream.__new__(Stream, _cinit_bypass_sentinel) py_stream._init(container, c_stream) return py_stream cdef class Stream(object): """ A single stream of audio, video or subtitles within a :class:`.Container`. :: >>> fh = av.open(video_path) >>> stream = fh.streams.video[0] >>> stream This encapsulates a :class:`.CodecContext`, located at :attr:`Stream.codec_context`. Attribute access is passed through to that context when attributes are missing on the stream itself. E.g. ``stream.options`` will be the options on the context. """ def __cinit__(self, name): if name is _cinit_bypass_sentinel: return raise RuntimeError('cannot manually instantiate Stream') cdef _init(self, Container container, lib.AVStream *stream): self.container = container self._stream = stream self._codec_context = stream.codec self.metadata = avdict_to_dict( stream.metadata, encoding=self.container.metadata_encoding, errors=self.container.metadata_errors, ) # This is an input container! if self.container.ptr.iformat: # Find the codec. self._codec = lib.avcodec_find_decoder(self._codec_context.codec_id) if not self._codec: # TODO: Setup a dummy CodecContext. self.codec_context = None return # This is an output container!
else: self._codec = self._codec_context.codec self.codec_context = wrap_codec_context(self._codec_context, self._codec, False) self.codec_context.stream_index = stream.index def __repr__(self): return '<av.%s #%d %s/%s at 0x%x>' % ( self.__class__.__name__, self.index, self.type or '<notype>', self.name or '<nocodec>', id(self), ) def __getattr__(self, name): # avoid an infinite loop for unsupported codecs if self.codec_context is None: return try: return getattr(self.codec_context, name) except AttributeError: try: return getattr(self.codec_context.codec, name) except AttributeError: raise AttributeError(name) def __setattr__(self, name, value): setattr(self.codec_context, name, value) cdef _finalize_for_output(self): dict_to_avdict( &self._stream.metadata, self.metadata, encoding=self.container.metadata_encoding, errors=self.container.metadata_errors, ) if not self._stream.time_base.num: self._stream.time_base = self._codec_context.time_base # It prefers if we pass it parameters via this other object. # Let's just copy what we want. err_check(lib.avcodec_parameters_from_context(self._stream.codecpar, self._stream.codec)) def encode(self, frame=None): """ Encode an :class:`.AudioFrame` or :class:`.VideoFrame` and return a list of :class:`.Packet`. :return: :class:`list` of :class:`.Packet`. .. seealso:: This is mostly a passthrough to :meth:`.CodecContext.encode`. """ packets = self.codec_context.encode(frame) cdef Packet packet for packet in packets: packet._stream = self packet.struct.stream_index = self._stream.index return packets def decode(self, packet=None): """ Decode a :class:`.Packet` and return a list of :class:`.AudioFrame` or :class:`.VideoFrame`. :return: :class:`list` of :class:`.Frame` subclasses. .. seealso:: This is mostly a passthrough to :meth:`.CodecContext.decode`. """ return self.codec_context.decode(packet) @deprecation.method def seek(self, offset, **kwargs): """ .. seealso:: :meth:`.InputContainer.seek` for documentation on parameters.
The only difference is that ``offset`` will be interpreted in :attr:`.Stream.time_base` when ``whence == 'time'``. .. deprecated:: 6.1.0 Use :meth:`.InputContainer.seek` with ``stream`` argument instead. """ self.container.seek(offset, stream=self, **kwargs) property id: """ The format-specific ID of this stream. :type: int """ def __get__(self): return self._stream.id def __set__(self, v): if v is None: self._stream.id = 0 else: self._stream.id = v property profile: """ The profile of this stream. :type: str """ def __get__(self): if self._codec and lib.av_get_profile_name(self._codec, self._codec_context.profile): return lib.av_get_profile_name(self._codec, self._codec_context.profile) else: return None property index: """ The index of this stream in its :class:`.Container`. :type: int """ def __get__(self): return self._stream.index property time_base: """ The unit of time (in fractional seconds) in which timestamps are expressed. :type: :class:`~fractions.Fraction` or ``None`` """ def __get__(self): return avrational_to_fraction(&self._stream.time_base) def __set__(self, value): to_avrational(value, &self._stream.time_base) property average_rate: """ The average frame rate of this video stream. This is calculated when the file is opened by looking at the first few frames and averaging their rate. :type: :class:`~fractions.Fraction` or ``None`` """ def __get__(self): return avrational_to_fraction(&self._stream.avg_frame_rate) property base_rate: """ The base frame rate of this stream. This is calculated as the lowest framerate at which the timestamps of frames can be represented accurately. See :ffmpeg:`AVStream.r_frame_rate` for more. :type: :class:`~fractions.Fraction` or ``None`` """ def __get__(self): return avrational_to_fraction(&self._stream.r_frame_rate) property guessed_rate: """The guessed frame rate of this stream. This is a wrapper around :ffmpeg:`av_guess_frame_rate`, and uses multiple heuristics to decide what is "the" frame rate.
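``average_rate``, ``base_rate`` and ``guessed_rate`` above are all returned as :class:`~fractions.Fraction` rather than floats. A small pure-Python sketch of why exact rationals matter for NTSC-style rates (the numbers below are illustrative, not read from a real stream):

```python
from fractions import Fraction

# Hypothetical NTSC-style stream: timestamps count in a 1/30000-second
# time base and each frame lasts 1001 ticks. Neither value comes from a
# real file; they are only chosen to show the exact-rational arithmetic.
time_base = Fraction(1, 30000)
ticks_per_frame = 1001

rate = 1 / (time_base * ticks_per_frame)  # exact frames per second
print(rate)  # 30000/1001
# float(rate) is only ~29.97003; the Fraction round-trips exactly.
```
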
:type: :class:`~fractions.Fraction` or ``None`` """ def __get__(self): # The two NULL arguments aren't used in FFmpeg >= 4.0 cdef lib.AVRational val = lib.av_guess_frame_rate(NULL, self._stream, NULL) return avrational_to_fraction(&val) property start_time: """ The presentation timestamp in :attr:`time_base` units of the first frame in this stream. :type: :class:`int` or ``None`` """ def __get__(self): if self._stream.start_time != lib.AV_NOPTS_VALUE: return self._stream.start_time property duration: """ The duration of this stream in :attr:`time_base` units. :type: :class:`int` or ``None`` """ def __get__(self): if self._stream.duration != lib.AV_NOPTS_VALUE: return self._stream.duration property frames: """ The number of frames this stream contains. Returns ``0`` if it is not known. :type: int """ def __get__(self): return self._stream.nb_frames property language: """ The language of the stream. :type: :class:``str`` or ``None`` """ def __get__(self): return self.metadata.get('language') @property def type(self): """ The type of the stream. Examples: ``'audio'``, ``'video'``, ``'subtitle'``. 
:type: str """ return lib.av_get_media_type_string(self._codec_context.codec_type) PyAV-8.1.0/av/subtitles/000077500000000000000000000000001416312437500147715ustar00rootroot00000000000000PyAV-8.1.0/av/subtitles/__init__.py000066400000000000000000000000001416312437500170700ustar00rootroot00000000000000PyAV-8.1.0/av/subtitles/codeccontext.pxd000066400000000000000000000001451416312437500201700ustar00rootroot00000000000000from av.codec.context cimport CodecContext cdef class SubtitleCodecContext(CodecContext): pass PyAV-8.1.0/av/subtitles/codeccontext.pyx000066400000000000000000000011641416312437500202170ustar00rootroot00000000000000cimport libav as lib from av.error cimport err_check from av.frame cimport Frame from av.packet cimport Packet from av.subtitles.subtitle cimport SubtitleProxy, SubtitleSet cdef class SubtitleCodecContext(CodecContext): cdef _send_packet_and_recv(self, Packet packet): cdef SubtitleProxy proxy = SubtitleProxy() cdef int got_frame = 0 err_check(lib.avcodec_decode_subtitle2( self.ptr, &proxy.struct, &got_frame, &packet.struct if packet else NULL)) if got_frame: return [SubtitleSet(proxy)] else: return [] PyAV-8.1.0/av/subtitles/stream.pxd000066400000000000000000000001141416312437500167750ustar00rootroot00000000000000from av.stream cimport Stream cdef class SubtitleStream(Stream): pass PyAV-8.1.0/av/subtitles/stream.pyx000066400000000000000000000000551416312437500170260ustar00rootroot00000000000000 cdef class SubtitleStream(Stream): pass PyAV-8.1.0/av/subtitles/subtitle.pxd000066400000000000000000000012641416312437500173440ustar00rootroot00000000000000cimport libav as lib from av.packet cimport Packet cdef class SubtitleProxy(object): cdef lib.AVSubtitle struct cdef class SubtitleSet(object): cdef readonly Packet packet cdef SubtitleProxy proxy cdef readonly tuple rects cdef class Subtitle(object): cdef SubtitleProxy proxy cdef lib.AVSubtitleRect *ptr cdef readonly bytes type cdef class TextSubtitle(Subtitle): pass cdef class 
AssSubtitle(Subtitle): pass cdef class BitmapSubtitle(Subtitle): cdef readonly planes cdef class BitmapSubtitlePlane(object): cdef readonly BitmapSubtitle subtitle cdef readonly int index cdef readonly long buffer_size cdef void *_buffer PyAV-8.1.0/av/subtitles/subtitle.pyx000066400000000000000000000133601416312437500173710ustar00rootroot00000000000000from cpython cimport PyBuffer_FillInfo cdef class SubtitleProxy(object): def __dealloc__(self): lib.avsubtitle_free(&self.struct) cdef class SubtitleSet(object): def __cinit__(self, SubtitleProxy proxy): self.proxy = proxy cdef int i self.rects = tuple(build_subtitle(self, i) for i in range(self.proxy.struct.num_rects)) def __repr__(self): return '<%s.%s at 0x%x>' % ( self.__class__.__module__, self.__class__.__name__, id(self), ) property format: def __get__(self): return self.proxy.struct.format property start_display_time: def __get__(self): return self.proxy.struct.start_display_time property end_display_time: def __get__(self): return self.proxy.struct.end_display_time property pts: def __get__(self): return self.proxy.struct.pts def __len__(self): return len(self.rects) def __iter__(self): return iter(self.rects) def __getitem__(self, i): return self.rects[i] cdef Subtitle build_subtitle(SubtitleSet subtitle, int index): """Build a Subtitle for one rect of an existing SubtitleSet. The SubtitleSet MUST be fully constructed and ready for use before this is called.
""" if index < 0 or index >= subtitle.proxy.struct.num_rects: raise ValueError('subtitle rect index out of range') cdef lib.AVSubtitleRect *ptr = subtitle.proxy.struct.rects[index] if ptr.type == lib.SUBTITLE_NONE: return Subtitle(subtitle, index) elif ptr.type == lib.SUBTITLE_BITMAP: return BitmapSubtitle(subtitle, index) elif ptr.type == lib.SUBTITLE_TEXT: return TextSubtitle(subtitle, index) elif ptr.type == lib.SUBTITLE_ASS: return AssSubtitle(subtitle, index) else: raise ValueError('unknown subtitle type %r' % ptr.type) cdef class Subtitle(object): def __cinit__(self, SubtitleSet subtitle, int index): if index < 0 or index >= subtitle.proxy.struct.num_rects: raise ValueError('subtitle rect index out of range') self.proxy = subtitle.proxy self.ptr = self.proxy.struct.rects[index] if self.ptr.type == lib.SUBTITLE_NONE: self.type = b'none' elif self.ptr.type == lib.SUBTITLE_BITMAP: self.type = b'bitmap' elif self.ptr.type == lib.SUBTITLE_TEXT: self.type = b'text' elif self.ptr.type == lib.SUBTITLE_ASS: self.type = b'ass' else: raise ValueError('unknown subtitle type %r' % self.ptr.type) def __repr__(self): return '<%s.%s at 0x%x>' % ( self.__class__.__module__, self.__class__.__name__, id(self), ) cdef class BitmapSubtitle(Subtitle): def __cinit__(self, SubtitleSet subtitle, int index): self.planes = tuple( BitmapSubtitlePlane(self, i) for i in range(4) if self.ptr.linesize[i] ) def __repr__(self): return '<%s.%s %dx%d at %d,%d; at 0x%x>' % ( self.__class__.__module__, self.__class__.__name__, self.width, self.height, self.x, self.y, id(self), ) property x: def __get__(self): return self.ptr.x property y: def __get__(self): return self.ptr.y property width: def __get__(self): return self.ptr.w property height: def __get__(self): return self.ptr.h property nb_colors: def __get__(self): return self.ptr.nb_colors def __len__(self): return len(self.planes) def __iter__(self): return iter(self.planes) def __getitem__(self, i): return self.planes[i] cdef class 
BitmapSubtitlePlane(object): def __cinit__(self, BitmapSubtitle subtitle, int index): if index >= 4: raise ValueError('BitmapSubtitles have only 4 planes') if not subtitle.ptr.linesize[index]: raise ValueError('plane does not exist') self.subtitle = subtitle self.index = index self.buffer_size = subtitle.ptr.w * subtitle.ptr.h self._buffer = subtitle.ptr.data[index] # PyBuffer_FromMemory(self.ptr.data[i], self.width * self.height) # Legacy buffer support. For `buffer` and PIL. # See: http://docs.python.org/2/c-api/typeobj.html#PyBufferProcs def __getsegcount__(self, Py_ssize_t *len_out): if len_out != NULL: len_out[0] = self.buffer_size return 1 def __getreadbuffer__(self, Py_ssize_t index, void **data): if index: raise RuntimeError("accessing non-existent buffer segment") data[0] = self._buffer return self.buffer_size def __getwritebuffer__(self, Py_ssize_t index, void **data): if index: raise RuntimeError("accessing non-existent buffer segment") data[0] = self._buffer return self.buffer_size # New-style buffer support. 
def __getbuffer__(self, Py_buffer *view, int flags): PyBuffer_FillInfo(view, self, self._buffer, self.buffer_size, 0, flags) cdef class TextSubtitle(Subtitle): def __repr__(self): return '<%s.%s %r at 0x%x>' % ( self.__class__.__module__, self.__class__.__name__, self.text, id(self), ) property text: def __get__(self): return self.ptr.text cdef class AssSubtitle(Subtitle): def __repr__(self): return '<%s.%s %r at 0x%x>' % ( self.__class__.__module__, self.__class__.__name__, self.ass, id(self), ) property ass: def __get__(self): return self.ptr.ass PyAV-8.1.0/av/utils.pxd000066400000000000000000000006421416312437500146320ustar00rootroot00000000000000from libc.stdint cimport int64_t, uint8_t, uint64_t cimport libav as lib cdef dict avdict_to_dict(lib.AVDictionary *input, str encoding, str errors) cdef dict_to_avdict(lib.AVDictionary **dst, dict src, str encoding, str errors) cdef object avrational_to_fraction(const lib.AVRational *input) cdef object to_avrational(object value, lib.AVRational *input) cdef flag_in_bitfield(uint64_t bitfield, uint64_t flag) PyAV-8.1.0/av/utils.pyx000066400000000000000000000035461416312437500146650ustar00rootroot00000000000000from libc.stdint cimport int64_t, uint8_t, uint64_t from fractions import Fraction cimport libav as lib from av.error cimport err_check # === DICTIONARIES === # ==================== cdef _decode(char *s, encoding, errors): return (s).decode(encoding, errors) cdef bytes _encode(s, encoding, errors): return s.encode(encoding, errors) cdef dict avdict_to_dict(lib.AVDictionary *input, str encoding, str errors): cdef lib.AVDictionaryEntry *element = NULL cdef dict output = {} while True: element = lib.av_dict_get(input, "", element, lib.AV_DICT_IGNORE_SUFFIX) if element == NULL: break output[_decode(element.key, encoding, errors)] = _decode(element.value, encoding, errors) return output cdef dict_to_avdict(lib.AVDictionary **dst, dict src, str encoding, str errors): lib.av_dict_free(dst) for key, value in src.items(): 
err_check(lib.av_dict_set(dst, _encode(key, encoding, errors), _encode(value, encoding, errors), 0)) # === FRACTIONS === # ================= cdef object avrational_to_fraction(const lib.AVRational *input): if input.num and input.den: return Fraction(input.num, input.den) cdef object to_avrational(object value, lib.AVRational *input): if value is None: input.num = 0 input.den = 1 return if isinstance(value, Fraction): frac = value else: frac = Fraction(value) input.num = frac.numerator input.den = frac.denominator # === OTHER === # ============= cdef flag_in_bitfield(uint64_t bitfield, uint64_t flag): # Not every flag exists in every version of FFMpeg, so we define them to 0. if not flag: return None return bool(bitfield & flag) # === BACKWARDS COMPAT === from .error import FFmpegError as AVError from .error import err_check PyAV-8.1.0/av/video/000077500000000000000000000000001416312437500140615ustar00rootroot00000000000000PyAV-8.1.0/av/video/__init__.py000066400000000000000000000000761416312437500161750ustar00rootroot00000000000000from .frame import VideoFrame from .stream import VideoStream PyAV-8.1.0/av/video/codeccontext.pxd000066400000000000000000000007431416312437500172640ustar00rootroot00000000000000 from av.codec.context cimport CodecContext from av.video.format cimport VideoFormat from av.video.frame cimport VideoFrame from av.video.reformatter cimport VideoReformatter cdef class VideoCodecContext(CodecContext): cdef VideoFormat _format cdef _build_format(self) cdef int last_w cdef int last_h cdef readonly VideoReformatter reformatter # For encoding. cdef readonly int encoded_frame_count # For decoding. 
cdef VideoFrame next_frame PyAV-8.1.0/av/video/codeccontext.pyx000066400000000000000000000112051416312437500173040ustar00rootroot00000000000000from libc.stdint cimport int64_t cimport libav as lib from av.codec.context cimport CodecContext from av.error cimport err_check from av.frame cimport Frame from av.packet cimport Packet from av.utils cimport avrational_to_fraction, to_avrational from av.video.format cimport VideoFormat, get_video_format from av.video.frame cimport VideoFrame, alloc_video_frame from av.video.reformatter cimport VideoReformatter cdef class VideoCodecContext(CodecContext): def __cinit__(self, *args, **kwargs): self.last_w = 0 self.last_h = 0 cdef _init(self, lib.AVCodecContext *ptr, const lib.AVCodec *codec): CodecContext._init(self, ptr, codec) # TODO: Can this be `super`? self._build_format() self.encoded_frame_count = 0 cdef _set_default_time_base(self): self.ptr.time_base.num = self.ptr.framerate.den or 1 self.ptr.time_base.den = self.ptr.framerate.num or lib.AV_TIME_BASE cdef _prepare_frames_for_encode(self, Frame input): if not input: return [None] cdef VideoFrame vframe = input # Reformat if it doesn't match. if ( vframe.format.pix_fmt != self._format.pix_fmt or vframe.width != self.ptr.width or vframe.height != self.ptr.height ): if not self.reformatter: self.reformatter = VideoReformatter() vframe = self.reformatter.reformat( vframe, self.ptr.width, self.ptr.height, self._format, ) # There is no pts, so create one. 
if vframe.ptr.pts == lib.AV_NOPTS_VALUE: vframe.ptr.pts = self.encoded_frame_count self.encoded_frame_count += 1 return [vframe] cdef Frame _alloc_next_frame(self): return alloc_video_frame() cdef _setup_decoded_frame(self, Frame frame, Packet packet): CodecContext._setup_decoded_frame(self, frame, packet) cdef VideoFrame vframe = frame vframe._init_user_attributes() cdef _build_format(self): self._format = get_video_format(self.ptr.pix_fmt, self.ptr.width, self.ptr.height) property format: def __get__(self): return self._format def __set__(self, VideoFormat format): self.ptr.pix_fmt = format.pix_fmt self.ptr.width = format.width self.ptr.height = format.height self._build_format() # Kinda wasteful. property width: def __get__(self): return self.ptr.width def __set__(self, unsigned int value): self.ptr.width = value self._build_format() property height: def __get__(self): return self.ptr.height def __set__(self, unsigned int value): self.ptr.height = value self._build_format() # TODO: Replace with `format`. property pix_fmt: def __get__(self): return self._format.name def __set__(self, value): self.ptr.pix_fmt = lib.av_get_pix_fmt(value) self._build_format() property framerate: """ The frame rate, in frames per second. 
:type: fractions.Fraction """ def __get__(self): return avrational_to_fraction(&self.ptr.framerate) def __set__(self, value): to_avrational(value, &self.ptr.framerate) property rate: """Another name for :attr:`framerate`.""" def __get__(self): return self.framerate def __set__(self, value): self.framerate = value property gop_size: def __get__(self): return self.ptr.gop_size def __set__(self, int value): self.ptr.gop_size = value property sample_aspect_ratio: def __get__(self): return avrational_to_fraction(&self.ptr.sample_aspect_ratio) def __set__(self, value): to_avrational(value, &self.ptr.sample_aspect_ratio) property display_aspect_ratio: def __get__(self): cdef lib.AVRational dar lib.av_reduce( &dar.num, &dar.den, self.ptr.width * self.ptr.sample_aspect_ratio.num, self.ptr.height * self.ptr.sample_aspect_ratio.den, 1024*1024) return avrational_to_fraction(&dar) property has_b_frames: def __get__(self): return bool(self.ptr.has_b_frames) property coded_width: def __get__(self): return self.ptr.coded_width property coded_height: def __get__(self): return self.ptr.coded_height PyAV-8.1.0/av/video/format.pxd000066400000000000000000000012261416312437500160670ustar00rootroot00000000000000cimport libav as lib cdef class VideoFormat(object): cdef lib.AVPixelFormat pix_fmt cdef const lib.AVPixFmtDescriptor *ptr cdef readonly unsigned int width, height cdef readonly tuple components cdef _init(self, lib.AVPixelFormat pix_fmt, unsigned int width, unsigned int height) cpdef chroma_width(self, int luma_width=?) cpdef chroma_height(self, int luma_height=?) 
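The ``chroma_width``/``chroma_height`` declarations above round a subsampled plane size *up* using a negated right shift. A minimal pure-Python sketch of that arithmetic (the ``chroma_size`` helper is illustrative, not part of PyAV):

```python
def chroma_size(luma: int, log2_chroma: int) -> int:
    # -((-x) >> s) == ceil(x / 2**s) for x >= 0, because Python's right
    # shift rounds toward negative infinity.
    return -((-luma) >> log2_chroma)

# yuv420p halves chroma in both directions (log2_chroma_w == log2_chroma_h == 1).
print(chroma_size(1920, 1))  # 960: an even width divides exactly
print(chroma_size(1921, 1))  # 961: an odd width rounds up, never down
```
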
cdef class VideoFormatComponent(object): cdef VideoFormat format cdef readonly unsigned int index cdef const lib.AVComponentDescriptor *ptr cdef VideoFormat get_video_format(lib.AVPixelFormat c_format, unsigned int width, unsigned int height) PyAV-8.1.0/av/video/format.pyx000066400000000000000000000131741416312437500161210ustar00rootroot00000000000000 cdef object _cinit_bypass_sentinel = object() cdef VideoFormat get_video_format(lib.AVPixelFormat c_format, unsigned int width, unsigned int height): if c_format == lib.AV_PIX_FMT_NONE: return None cdef VideoFormat format = VideoFormat.__new__(VideoFormat, _cinit_bypass_sentinel) format._init(c_format, width, height) return format cdef class VideoFormat(object): """ >>> format = VideoFormat('rgb24') >>> format.name 'rgb24' """ def __cinit__(self, name, width=0, height=0): if name is _cinit_bypass_sentinel: return cdef VideoFormat other if isinstance(name, VideoFormat): other = name self._init(other.pix_fmt, width or other.width, height or other.height) return cdef lib.AVPixelFormat pix_fmt = lib.av_get_pix_fmt(name) if pix_fmt < 0: raise ValueError('not a pixel format: %r' % name) self._init(pix_fmt, width, height) cdef _init(self, lib.AVPixelFormat pix_fmt, unsigned int width, unsigned int height): self.pix_fmt = pix_fmt self.ptr = lib.av_pix_fmt_desc_get(pix_fmt) self.width = width self.height = height self.components = tuple( VideoFormatComponent(self, i) for i in range(self.ptr.nb_components) ) def __repr__(self): if self.width or self.height: return '<av.%s %s, %dx%d>' % (self.__class__.__name__, self.name, self.width, self.height) else: return '<av.%s %s>' % (self.__class__.__name__, self.name) def __int__(self): return int(self.pix_fmt) property name: """Canonical name of the pixel format.""" def __get__(self): return self.ptr.name property bits_per_pixel: def __get__(self): return lib.av_get_bits_per_pixel(self.ptr) property padded_bits_per_pixel: def __get__(self): return lib.av_get_padded_bits_per_pixel(self.ptr) property
is_big_endian: """Pixel format is big-endian.""" def __get__(self): return bool(self.ptr.flags & lib.AV_PIX_FMT_FLAG_BE) property has_palette: """Pixel format has a palette in data[1], values are indexes in this palette.""" def __get__(self): return bool(self.ptr.flags & lib.AV_PIX_FMT_FLAG_PAL) property is_bit_stream: """All values of a component are bit-wise packed end to end.""" def __get__(self): return bool(self.ptr.flags & lib.AV_PIX_FMT_FLAG_BITSTREAM) # Skipping PIX_FMT_HWACCEL # """Pixel format is an HW accelerated format.""" property is_planar: """At least one pixel component is not in the first data plane.""" def __get__(self): return bool(self.ptr.flags & lib.AV_PIX_FMT_FLAG_PLANAR) property is_rgb: """The pixel format contains RGB-like data (as opposed to YUV/grayscale).""" def __get__(self): return bool(self.ptr.flags & lib.AV_PIX_FMT_FLAG_RGB) cpdef chroma_width(self, int luma_width=0): """chroma_width(luma_width=0) Width of a chroma plane relative to a luma plane. :param int luma_width: Width of the luma plane; defaults to ``self.width``. """ luma_width = luma_width or self.width return -((-luma_width) >> self.ptr.log2_chroma_w) if luma_width else 0 cpdef chroma_height(self, int luma_height=0): """chroma_height(luma_height=0) Height of a chroma plane relative to a luma plane. :param int luma_height: Height of the luma plane; defaults to ``self.height``. 
""" luma_height = luma_height or self.height return -((-luma_height) >> self.ptr.log2_chroma_h) if luma_height else 0 cdef class VideoFormatComponent(object): def __cinit__(self, VideoFormat format, size_t index): self.format = format self.index = index self.ptr = &format.ptr.comp[index] property plane: """The index of the plane which contains this component.""" def __get__(self): return self.ptr.plane property bits: """Number of bits in the component.""" def __get__(self): return self.ptr.depth property is_alpha: """Is this component an alpha channel?""" def __get__(self): return ((self.index == 1 and self.format.ptr.nb_components == 2) or (self.index == 3 and self.format.ptr.nb_components == 4)) property is_luma: """Is this compoment a luma channel?""" def __get__(self): return self.index == 0 and ( self.format.ptr.nb_components == 1 or self.format.ptr.nb_components == 2 or not self.format.is_rgb ) property is_chroma: """Is this component a chroma channel?""" def __get__(self): return (self.index == 1 or self.index == 2) and (self.format.ptr.log2_chroma_w or self.format.ptr.log2_chroma_h) property width: """The width of this component's plane. Requires the parent :class:`VideoFormat` to have a width. """ def __get__(self): return self.format.chroma_width() if self.is_chroma else self.format.width property height: """The height of this component's plane. Requires the parent :class:`VideoFormat` to have a height. 
""" def __get__(self): return self.format.chroma_height() if self.is_chroma else self.format.height names = set() cdef const lib.AVPixFmtDescriptor *desc = NULL while True: desc = lib.av_pix_fmt_desc_next(desc) if not desc: break names.add(desc.name) PyAV-8.1.0/av/video/frame.pxd000066400000000000000000000011721416312437500156710ustar00rootroot00000000000000from libc.stdint cimport int16_t, int32_t, uint8_t, uint16_t, uint64_t cimport libav as lib from av.frame cimport Frame from av.video.format cimport VideoFormat from av.video.reformatter cimport VideoReformatter cdef class VideoFrame(Frame): # This is the buffer that is used to back everything in the AVFrame. # We don't ever actually access it directly. cdef uint8_t *_buffer cdef VideoReformatter reformatter cdef readonly VideoFormat format cdef _init(self, lib.AVPixelFormat format, unsigned int width, unsigned int height) cdef _init_user_attributes(self) cdef VideoFrame alloc_video_frame() PyAV-8.1.0/av/video/frame.pyx000066400000000000000000000277411416312437500157300ustar00rootroot00000000000000from libc.stdint cimport uint8_t from av.enum cimport define_enum from av.error cimport err_check from av.video.format cimport VideoFormat, get_video_format from av.video.plane cimport VideoPlane from av.deprecation import renamed_attr cdef object _cinit_bypass_sentinel cdef VideoFrame alloc_video_frame(): """Get a mostly uninitialized VideoFrame. You MUST call VideoFrame._init(...) or VideoFrame._init_user_attributes() before exposing to the user. 
""" return VideoFrame.__new__(VideoFrame, _cinit_bypass_sentinel) PictureType = define_enum('PictureType', __name__, ( ('NONE', lib.AV_PICTURE_TYPE_NONE, "Undefined"), ('I', lib.AV_PICTURE_TYPE_I, "Intra"), ('P', lib.AV_PICTURE_TYPE_P, "Predicted"), ('B', lib.AV_PICTURE_TYPE_B, "Bi-directional predicted"), ('S', lib.AV_PICTURE_TYPE_S, "S(GMC)-VOP MPEG-4"), ('SI', lib.AV_PICTURE_TYPE_SI, "Switching intra"), ('SP', lib.AV_PICTURE_TYPE_SP, "Switching predicted"), ('BI', lib.AV_PICTURE_TYPE_BI, "BI type"), )) cdef copy_array_to_plane(array, VideoPlane plane, unsigned int bytes_per_pixel): cdef bytes imgbytes = array.tobytes() cdef const uint8_t[:] i_buf = imgbytes cdef size_t i_pos = 0 cdef size_t i_stride = plane.width * bytes_per_pixel cdef size_t i_size = plane.height * i_stride cdef uint8_t[:] o_buf = plane cdef size_t o_pos = 0 cdef size_t o_stride = abs(plane.line_size) while i_pos < i_size: o_buf[o_pos:o_pos + i_stride] = i_buf[i_pos:i_pos + i_stride] i_pos += i_stride o_pos += o_stride cdef useful_array(VideoPlane plane, unsigned int bytes_per_pixel=1): """ Return the useful part of the VideoPlane as a single dimensional array. We are simply discarding any padding which was added for alignment. 
""" import numpy as np cdef size_t total_line_size = abs(plane.line_size) cdef size_t useful_line_size = plane.width * bytes_per_pixel arr = np.frombuffer(plane, np.uint8) if total_line_size != useful_line_size: arr = arr.reshape(-1, total_line_size)[:, 0:useful_line_size].reshape(-1) return arr cdef class VideoFrame(Frame): def __cinit__(self, width=0, height=0, format='yuv420p'): if width is _cinit_bypass_sentinel: return cdef lib.AVPixelFormat c_format = lib.av_get_pix_fmt(format) if c_format < 0: raise ValueError('invalid format %r' % format) self._init(c_format, width, height) cdef _init(self, lib.AVPixelFormat format, unsigned int width, unsigned int height): cdef int res = 0 with nogil: self.ptr.width = width self.ptr.height = height self.ptr.format = format # Allocate the buffer for the video frame. # # We enforce aligned buffers, otherwise `sws_scale` can perform # poorly or even cause out-of-bounds reads and writes. if width and height: res = lib.av_image_alloc( self.ptr.data, self.ptr.linesize, width, height, format, 16) self._buffer = self.ptr.data[0] if res: err_check(res) self._init_user_attributes() cdef _init_user_attributes(self): self.format = get_video_format(self.ptr.format, self.ptr.width, self.ptr.height) def __dealloc__(self): # The `self._buffer` member is only set if *we* allocated the buffer in `_init`, # as opposed to a buffer allocated by a decoder. lib.av_freep(&self._buffer) def __repr__(self): return '' % ( self.__class__.__name__, self.index, self.pts, self.format.name, self.width, self.height, id(self), ) @property def planes(self): """ A tuple of :class:`.VideoPlane` objects. """ # We need to detect which planes actually exist, but also contrain # ourselves to the maximum plane count (as determined only by VideoFrames # so far), in case the library implementation does not set the last # plane to NULL. 
cdef int max_plane_count = 0 for i in range(self.format.ptr.nb_components): count = self.format.ptr.comp[i].plane + 1 if max_plane_count < count: max_plane_count = count if self.format.name == 'pal8': max_plane_count = 2 cdef int plane_count = 0 while plane_count < max_plane_count and self.ptr.extended_data[plane_count]: plane_count += 1 return tuple([VideoPlane(self, i) for i in range(plane_count)]) property width: """Width of the image, in pixels.""" def __get__(self): return self.ptr.width property height: """Height of the image, in pixels.""" def __get__(self): return self.ptr.height property key_frame: """Is this frame a key frame? Wraps :ffmpeg:`AVFrame.key_frame`. """ def __get__(self): return self.ptr.key_frame property interlaced_frame: """Is this frame an interlaced or progressive? Wraps :ffmpeg:`AVFrame.interlaced_frame`. """ def __get__(self): return self.ptr.interlaced_frame @property def pict_type(self): """One of :class:`.PictureType`. Wraps :ffmpeg:`AVFrame.pict_type`. """ return PictureType.get(self.ptr.pict_type, create=True) @pict_type.setter def pict_type(self, value): self.ptr.pict_type = PictureType[value].value def reformat(self, *args, **kwargs): """reformat(width=None, height=None, format=None, src_colorspace=None, dst_colorspace=None, interpolation=None) Create a new :class:`VideoFrame` with the given width/height/format/colorspace. .. seealso:: :meth:`.VideoReformatter.reformat` for arguments. """ if not self.reformatter: self.reformatter = VideoReformatter() return self.reformatter.reformat(self, *args, **kwargs) def to_rgb(self, **kwargs): """Get an RGB version of this frame. Any ``**kwargs`` are passed to :meth:`.VideoReformatter.reformat`. >>> frame = VideoFrame(1920, 1080) >>> frame.format.name 'yuv420p' >>> frame.to_rgb().format.name 'rgb24' """ return self.reformat(format="rgb24", **kwargs) def to_image(self, **kwargs): """Get an RGB ``PIL.Image`` of this frame. Any ``**kwargs`` are passed to :meth:`.VideoReformatter.reformat`. .. 
note:: PIL or Pillow must be installed. """ from PIL import Image cdef VideoPlane plane = self.reformat(format="rgb24", **kwargs).planes[0] cdef const uint8_t[:] i_buf = plane cdef size_t i_pos = 0 cdef size_t i_stride = plane.line_size cdef size_t o_pos = 0 cdef size_t o_stride = plane.width * 3 cdef size_t o_size = plane.height * o_stride cdef bytearray o_buf = bytearray(o_size) while o_pos < o_size: o_buf[o_pos:o_pos + o_stride] = i_buf[i_pos:i_pos + o_stride] i_pos += i_stride o_pos += o_stride return Image.frombytes("RGB", (self.width, self.height), bytes(o_buf), "raw", "RGB", 0, 1) def to_ndarray(self, **kwargs): """Get a numpy array of this frame. Any ``**kwargs`` are passed to :meth:`.VideoReformatter.reformat`. .. note:: Numpy must be installed. .. note:: For ``pal8``, an ``(image, palette)`` tuple will be returned, with the palette being in ARGB (PyAV will swap bytes if needed). """ cdef VideoFrame frame = self.reformat(**kwargs) import numpy as np if frame.format.name in ('yuv420p', 'yuvj420p'): assert frame.width % 2 == 0 assert frame.height % 2 == 0 return np.hstack(( useful_array(frame.planes[0]), useful_array(frame.planes[1]), useful_array(frame.planes[2]) )).reshape(-1, frame.width) elif frame.format.name == 'yuyv422': assert frame.width % 2 == 0 assert frame.height % 2 == 0 return useful_array(frame.planes[0], 2).reshape(frame.height, frame.width, -1) elif frame.format.name in ('rgb24', 'bgr24'): return useful_array(frame.planes[0], 3).reshape(frame.height, frame.width, -1) elif frame.format.name in ('argb', 'rgba', 'abgr', 'bgra'): return useful_array(frame.planes[0], 4).reshape(frame.height, frame.width, -1) elif frame.format.name in ('gray', 'gray8', 'rgb8', 'bgr8'): return useful_array(frame.planes[0]).reshape(frame.height, frame.width) elif frame.format.name == 'pal8': image = useful_array(frame.planes[0]).reshape(frame.height, frame.width) palette = np.frombuffer(frame.planes[1], 'i4').astype('>i4').reshape(-1, 1).view(np.uint8) return image, 
palette else: raise ValueError('Conversion to numpy array with format `%s` is not yet supported' % frame.format.name) to_nd_array = renamed_attr('to_ndarray') @staticmethod def from_image(img): """ Construct a frame from a ``PIL.Image``. """ if img.mode != 'RGB': img = img.convert('RGB') cdef VideoFrame frame = VideoFrame(img.size[0], img.size[1], 'rgb24') copy_array_to_plane(img, frame.planes[0], 3) return frame @staticmethod def from_ndarray(array, format='rgb24'): """ Construct a frame from a numpy array. .. note:: for ``pal8``, an ``(image, palette)`` pair must be passed. `palette` must have shape (256, 4) and is given in ARGB format (PyAV will swap bytes if needed). """ if format == 'pal8': array, palette = array assert array.dtype == 'uint8' assert array.ndim == 2 assert palette.dtype == 'uint8' assert palette.shape == (256, 4) frame = VideoFrame(array.shape[1], array.shape[0], format) copy_array_to_plane(array, frame.planes[0], 1) frame.planes[1].update(palette.view('>i4').astype('i4').tobytes()) return frame if format in ('yuv420p', 'yuvj420p'): assert array.dtype == 'uint8' assert array.ndim == 2 assert array.shape[0] % 3 == 0 assert array.shape[1] % 2 == 0 frame = VideoFrame(array.shape[1], (array.shape[0] * 2) // 3, format) u_start = frame.width * frame.height v_start = 5 * u_start // 4 flat = array.reshape(-1) copy_array_to_plane(flat[0:u_start], frame.planes[0], 1) copy_array_to_plane(flat[u_start:v_start], frame.planes[1], 1) copy_array_to_plane(flat[v_start:], frame.planes[2], 1) return frame elif format == 'yuyv422': assert array.dtype == 'uint8' assert array.ndim == 3 assert array.shape[0] % 2 == 0 assert array.shape[1] % 2 == 0 assert array.shape[2] == 2 elif format in ('rgb24', 'bgr24'): assert array.dtype == 'uint8' assert array.ndim == 3 assert array.shape[2] == 3 elif format in ('argb', 'rgba', 'abgr', 'bgra'): assert array.dtype == 'uint8' assert array.ndim == 3 assert array.shape[2] == 4 elif format in ('gray', 'gray8', 'rgb8', 'bgr8'): 
assert array.dtype == 'uint8' assert array.ndim == 2 else: raise ValueError('Conversion from numpy array with format `%s` is not yet supported' % format) frame = VideoFrame(array.shape[1], array.shape[0], format) copy_array_to_plane(array, frame.planes[0], 1 if array.ndim == 2 else array.shape[2]) return frame PyAV-8.1.0/av/video/plane.pxd000066400000000000000000000003011416312437500156700ustar00rootroot00000000000000from av.plane cimport Plane from av.video.format cimport VideoFormatComponent cdef class VideoPlane(Plane): cdef readonly size_t buffer_size cdef readonly unsigned int width, height PyAV-8.1.0/av/video/plane.pyx000066400000000000000000000024441416312437500157260ustar00rootroot00000000000000from av.video.frame cimport VideoFrame cdef class VideoPlane(Plane): def __cinit__(self, VideoFrame frame, int index): # The palette plane has no associated component or linesize; set fields manually if frame.format.name == 'pal8' and index == 1: self.width = 256 self.height = 1 self.buffer_size = 256 * 4 return for i in range(frame.format.ptr.nb_components): if frame.format.ptr.comp[i].plane == index: component = frame.format.components[i] self.width = component.width self.height = component.height break else: raise RuntimeError('could not find plane %d of %r' % (index, frame.format)) # Sometimes, linesize is negative (and that is meaningful). We are only # insisting that the buffer size be based on the extent of linesize, and # ignore its direction. self.buffer_size = abs(self.frame.ptr.linesize[self.index]) * self.height cdef size_t _buffer_size(self): return self.buffer_size property line_size: """ Bytes per horizontal line in this plane. 
:type: int """ def __get__(self): return self.frame.ptr.linesize[self.index] PyAV-8.1.0/av/video/reformatter.pxd000066400000000000000000000005001416312437500171230ustar00rootroot00000000000000cimport libav as lib from av.video.frame cimport VideoFrame cdef class VideoReformatter(object): cdef lib.SwsContext *ptr cdef _reformat(self, VideoFrame frame, int width, int height, lib.AVPixelFormat format, int src_colorspace, int dst_colorspace, int interpolation) PyAV-8.1.0/av/video/reformatter.pyx000066400000000000000000000156151416312437500171650ustar00rootroot00000000000000from libc.stdint cimport uint8_t cimport libav as lib from av.enum cimport define_enum from av.error cimport err_check from av.video.format cimport VideoFormat from av.video.frame cimport alloc_video_frame Interpolation = define_enum('Interpolation', __name__, ( ('FAST_BILINEAR', lib.SWS_FAST_BILINEAR, "Fast bilinear"), ('BILINEAR', lib.SWS_BILINEAR, "Bilinear"), ('BICUBIC', lib.SWS_BICUBIC, "Bicubic"), ('X', lib.SWS_X, "Experimental"), ('POINT', lib.SWS_POINT, "Nearest neighbor / point"), ('AREA', lib.SWS_AREA, "Area averaging"), ('BICUBLIN', lib.SWS_BICUBLIN, "Luma bicubic / chroma bilinear"), ('GAUSS', lib.SWS_GAUSS, "Gaussian"), ('SINC', lib.SWS_SINC, "Sinc"), ('LANCZOS', lib.SWS_LANCZOS, "Lanczos"), ('SPLINE', lib.SWS_SPLINE, "Bicubic spline"), )) Colorspace = define_enum('Colorspace', __name__, ( ('ITU709', lib.SWS_CS_ITU709), ('FCC', lib.SWS_CS_FCC), ('ITU601', lib.SWS_CS_ITU601), ('ITU624', lib.SWS_CS_ITU624), ('SMPTE170M', lib.SWS_CS_SMPTE170M), ('SMPTE240M', lib.SWS_CS_SMPTE240M), ('DEFAULT', lib.SWS_CS_DEFAULT), # Lowercase for b/c. ('itu709', lib.SWS_CS_ITU709), ('fcc', lib.SWS_CS_FCC), ('itu601', lib.SWS_CS_ITU601), ('itu624', lib.SWS_CS_SMPTE170M), ('smpte240', lib.SWS_CS_SMPTE240M), ('default', lib.SWS_CS_DEFAULT), )) cdef class VideoReformatter(object): """An object for reformatting size and pixel format of :class:`.VideoFrame`. 
It is most efficient to have a reformatter object for each set of parameters you will use as calling :meth:`reformat` will reconfigure the internal object. """ def __dealloc__(self): with nogil: lib.sws_freeContext(self.ptr) def reformat(self, VideoFrame frame not None, width=None, height=None, format=None, src_colorspace=None, dst_colorspace=None, interpolation=None): """Create a new :class:`VideoFrame` with the given width/height/format/colorspace. Returns the same frame untouched if nothing needs to be done to it. :param int width: New width, or ``None`` for the same width. :param int height: New height, or ``None`` for the same height. :param format: New format, or ``None`` for the same format. :type format: :class:`.VideoFormat` or ``str`` :param src_colorspace: Current colorspace, or ``None`` for ``DEFAULT``. :type src_colorspace: :class:`Colorspace` or ``str`` :param dst_colorspace: Desired colorspace, or ``None`` for ``DEFAULT``. :type dst_colorspace: :class:`Colorspace` or ``str`` :param interpolation: The interpolation method to use, or ``None`` for ``BILINEAR``. 
:type interpolation: :class:`Interpolation` or ``str`` """ cdef VideoFormat video_format = VideoFormat(format if format is not None else frame.format) cdef int c_src_colorspace = (Colorspace[src_colorspace] if src_colorspace is not None else Colorspace.DEFAULT).value cdef int c_dst_colorspace = (Colorspace[dst_colorspace] if dst_colorspace is not None else Colorspace.DEFAULT).value cdef int c_interpolation = (Interpolation[interpolation] if interpolation is not None else Interpolation.BILINEAR).value return self._reformat( frame, width or frame.ptr.width, height or frame.ptr.height, video_format.pix_fmt, c_src_colorspace, c_dst_colorspace, c_interpolation, ) cdef _reformat(self, VideoFrame frame, int width, int height, lib.AVPixelFormat dst_format, int src_colorspace, int dst_colorspace, int interpolation): if frame.ptr.format < 0: raise ValueError("Frame does not have format set.") cdef lib.AVPixelFormat src_format = <lib.AVPixelFormat>frame.ptr.format # Shortcut! if ( dst_format == src_format and width == frame.ptr.width and height == frame.ptr.height and dst_colorspace == src_colorspace ): return frame # Try and reuse existing SwsContextProxy # VideoStream.decode will copy its SwsContextProxy to VideoFrame # So all Video frames from the same VideoStream should have the same one with nogil: self.ptr = lib.sws_getCachedContext( self.ptr, frame.ptr.width, frame.ptr.height, src_format, width, height, dst_format, interpolation, NULL, NULL, NULL ) # We want to change the colorspace transforms. We do that by grabbing # all of the current settings, changing a couple, and setting them all. # We need a lot of state here. cdef const int *inv_tbl cdef const int *tbl cdef int src_range, dst_range, brightness, contrast, saturation cdef int ret if src_colorspace != dst_colorspace: with nogil: # Casts for const-ness, because Cython isn't expressive enough. 
ret = lib.sws_getColorspaceDetails( self.ptr, <int**>&inv_tbl, &src_range, <int**>&tbl, &dst_range, &brightness, &contrast, &saturation ) err_check(ret) with nogil: # Grab the coefficients for the requested transforms. # The inv_table brings us to linear, and `tbl` to the new space. if src_colorspace != lib.SWS_CS_DEFAULT: inv_tbl = lib.sws_getCoefficients(src_colorspace) if dst_colorspace != lib.SWS_CS_DEFAULT: tbl = lib.sws_getCoefficients(dst_colorspace) # Apply! ret = lib.sws_setColorspaceDetails( self.ptr, inv_tbl, src_range, tbl, dst_range, brightness, contrast, saturation ) err_check(ret) # Create a new VideoFrame. cdef VideoFrame new_frame = alloc_video_frame() new_frame._copy_internal_attributes(frame) new_frame._init(dst_format, width, height) # Finally, scale the image. with nogil: lib.sws_scale( self.ptr, # Cast for const-ness, because Cython isn't expressive enough. <const uint8_t**>frame.ptr.data, frame.ptr.linesize, 0, # slice Y frame.ptr.height, new_frame.ptr.data, new_frame.ptr.linesize, ) return new_frame PyAV-8.1.0/av/video/stream.pxd000066400000000000000000000001121416312437500160630ustar00rootroot00000000000000 from av.stream cimport Stream cdef class VideoStream(Stream): pass PyAV-8.1.0/av/video/stream.pyx000066400000000000000000000011671416312437500161230ustar00rootroot00000000000000from libc.stdint cimport int64_t cimport libav as lib from av.container.core cimport Container from av.utils cimport avrational_to_fraction cdef class VideoStream(Stream): def __repr__(self): return '<av.%s #%d %s, %s %dx%d at 0x%x>' % ( self.__class__.__name__, self.index, self.name, self.format.name if self.format else None, self._codec_context.width, self._codec_context.height, id(self), ) property average_rate: def __get__(self): return avrational_to_fraction(&self._stream.avg_frame_rate) PyAV-8.1.0/docs/000077500000000000000000000000001416312437500132755ustar00rootroot00000000000000PyAV-8.1.0/docs/Makefile000066400000000000000000000015371416312437500147430ustar00rootroot00000000000000 SPHINXOPTS = SPHINXBUILD = 
sphinx-build BUILDDIR = _build ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(SPHINXOPTS) . .PHONY: clean html open upload default default: html TAGFILE := _build/doxygen/tagfile.xml $(TAGFILE) : ./generate-tagfile -o $(TAGFILE) TEMPLATES := $(wildcard api/*.py development/*.py) RENDERED := $(TEMPLATES:%.py=_build/rst/%.rst) _build/rst/%.rst: %.py $(TAGFILE) $(shell find ../include ../av -name '*.pyx' -or -name '*.pxd') @ mkdir -p $(@D) python $< > $@.tmp mv $@.tmp $@ clean: - rm -rf $(BUILDDIR)/* html: $(RENDERED) $(TAGFILE) $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html test: PYAV_SKIP_DOXYLINK=1 $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest open: open _build/html/index.html upload: rsync -avxP --delete _build/html/ pyav.org:/srv/pyav.org/www/httpdocs/docs/develop/ PyAV-8.1.0/docs/_static/000077500000000000000000000000001416312437500147235ustar00rootroot00000000000000PyAV-8.1.0/docs/_static/custom.css000066400000000000000000000002511416312437500167450ustar00rootroot00000000000000 .ffmpeg-quicklink { float: right; clear: right; margin: 0; } .ffmpeg-quicklink:before { content: "["; } .ffmpeg-quicklink:after { content: "]"; } PyAV-8.1.0/docs/_static/examples/000077500000000000000000000000001416312437500165415ustar00rootroot00000000000000PyAV-8.1.0/docs/_static/examples/numpy/000077500000000000000000000000001416312437500177115ustar00rootroot00000000000000PyAV-8.1.0/docs/_static/examples/numpy/barcode.jpg000066400000000000000000000351631416312437500220220ustar00rootroot00000000000000JFIFC  !"$"$C " }!1AQa"q2#BR$3br %&'()*456789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz w!1AQaq"2B #3Rbr $4%&'()*56789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz ?Q0?xokIׯU"ĝ=#Y.K;};{{W[E2S+M3}jMҬE3+/@h3h"(K0*=!ҥ+HVGIR)E|zS_bJEH6QҸGjB8ڔFM)_g!U /)sJO/گ@XW!@"^dO(p+|oҭl!*'=*يEJ*KNXU)\=)OڟӱT5U)/ENvD>_y~¥})vJF: Nni6W(=4=[GUCeO,z _*HlJ #ҭe5nTPkm=؛DcҐ8UԆ>VT=4]1RyBGg=(1 "1RaIqүEP1{QU]SJO(zUS G=[*=)6,_(zRǥY+F=Xghm=(ǥYKS~Xxh'ivQaCҐW|O.Q\a!U:9B+Q{ 
y~rS%O*z{B5qtRB5}/s^\wBʜgA*c+򩦎jRaIڋXuog&Þs=H-R>,q%)\i !SVfNbIy>ԾO thNjԑ@~n?rj>4\|=`毈 8A+Qڜ!!+CjӀQڥQ-R%J})F>re4~c?ʮ lTVE/{Qp*V(DBJQ LIY}^=*D\t 5zvWBoCTgzz'ZDfFgl/CBpM @!-H">Jޑ;jAfG;ˤۃM4KLJ3Ij1V۰f-E)3s4?jpDQjUAO =*Ql>kv;qUdK*>YRn_QTlF}( NXSw|d6Gl7qFM2B4nAbȣJb!?jҋ)BagANHu"G w1HeA~th@?ИN?A84q֘gB%`1ҘM0!q&MFj.=xHI?Eއ#[ӢTN?gWȾwOFy}*9oES?Hw#EƟBMUiP!^*%e1 (ڭ(ܔVbo7;rz:y ?ʕV(aHmǵY,e=)4+*JRy^jp:ʥpK?)V#*ڞ(\4/jaj(OZA9cFOinަ{Pm|kv [׊oE.i&[ץ$p6O"]Wbb[ӄMks/E/xa?n4gJ-!&0_COX;_>4L,p?ʴCMǏ?Ù8=4iiK=(cF^&g=j=(Qf>eH=_iA9cӿ4AO٦#z}(ToO֩) &Qҵ^;Z1^#Zs#4[I/ޚm"}i0R1 il[$SOMSz>.:"_jiY*I/{Ԇ]1RsUJLaf9 MUj=֤&OZ(;iKIƦj 37b jC$4sLɉzO0ni Ia߻͒yt4_cD>>!vdiCOPm?vv%O_S?HVOw;ݩGUނʚ7'_VxjO*oCERQG'r{QƽS5= /خcG0jNn4B4˃'?J4)KҋXKin(8O߆zЍkU$`?CB53R9)ǟ?BޚŶ<.zcJxSUCoJMϞ' 46x@Y&3s<>q{1ANmC1Nik) ιbuVK(W}?*>y }I≾qA_ϼBgbM4"zS=Q:O>;#O_AJteL4:wCVfqGU &d qRD6l ܟJt[_f֩?;횿dŵij},i-E'-cx枷xLW. y'_ʡKO GiBkR }}-KEgYS6U8)'Kˏ/EBaЗ^[ߗZre&\vFqv ??*7Of\fdRM9H4h+E'eU?-bqrcK̍Rd~ T#j~!:ʘtuᮄ>R!{6sͣ.>ᦝvJ 1}??3O!{6s'G=4d75}}4}4{D͞$M f g&OKO}koyG)#B; ҽ+6s~ʘ*qg4gQ**zpOz~SRZKjOOҏhffy*l3u?ʮX?Joiٛ{{RugSjSp[3~TLRoWCt0GQGsYR~TCh[i̝=)"ܿ]lnGOJlJX~U>خCo~T_QR0ݿ瑣͏U{>|0O[Gk)?/h;cEرc~S>Wq }S=SQSߥ/hS+5=S 1=c[)q~c(AwuG}:~9_o{j-v!<c~Ch.{ߐ?4=O?6o.=]on?*_SOռGz>@a?4N'֠iOҟb~pOV4Rfhrk[{h]I<>S}mvag?򟘤&Oԍ?ÙT?Hl|6?D1Y92_У2C} şEÀGf?>2L)G?BCo.Q{}RV9H]"GdԀh#]}mvռs˟?J"{DN{C(Dh/ga$7^c~_T+'+]n,|pf?A>?.@Sn%V݇}h>\gXZC5ZAvQSqNVvU:z'iDe}i[8/ҫ~ݏy1∗gpz.2;{ EΡ=Uuzp)D>±Ρ=J.5* C{G۵ oL">$1tW-u ~翨9ΔCNH~rٿZzj7ٿZ9<PΜ7O]J?4{?1C #ι^w3K~|MhG?soC}{=@QoOf~4 zzalX{}{r>Ύ:_ι_~t}~.F[UX }prtb}W󧭲u ߝ8j^ߝ'N}Sc2Ow~k:?:-FqԺSW<;xy)оB{Χ?iS'Xsߙjw+Cĩ}}$X}k{ƣ|~}?kOf1-țߙMoZ$ ޞ=oKq}t>K>v~|_g_ޞZK=}>iq3VKXncS y?~}PO_z!Z3*F_WWjB:cqʕ_oc\o=/j㇐xuydkl:ƚd'VֿJ>4ۿGգ݇gv|FӿO汾ց >>/F?ӨcySLUv\a?~&?=?~&z0*O#}V}r_ʍږo׬7*5[T߯ڶ?_׬oLAG(;쿕Uߑ4_O׬}__{?'g=_e oe _/QXwaF͟TZ߯ʲL('(;g#[߼*i=$Gi)d?B>>:?I~Ii=$ghqBBSϲ4νkߣQk=$_f3' >L;#O~zI~*p _d _!G)Nf6}6|k]_,oЍ|kUkg\uzxݭ`Pj h)GS/˧kTTzʏhgx5jN_?Ushպ^RuFs<C`k]dҕr:=+`*@UEdf:1?yҵCSUb6,tjTӢn0iPVegǟi`.irlf4`/.AG0fh6 kM]I?*~'1c38VS@َu Q0NޔشJ`ԐycKڕa7ޓ4稭((9ӎ:os`]$x)6Gڇ!4. 
icǥ?hk{SOmӢH[΃?ٽ:RfE寥>tdSxLcҝ tH,̈я:{dh>b\ti$D?֭2Yt>x]U&b%xW)7v>eקR%'K@~T1p?v@:šڕ?փU"L*VF>XmN3SUա6gg.>:zdX񦶫g7V =(6 ?:Huk0HͲ`ݦV?:CZ~_ִFMqJ4܎ڵ:oֈLgjM]57?GLݿ̛i(: 魭ȿQ E#X︴!삨>}(܈cZeZdyJOz=q)Jb$M0s!))_Ntݨӌl(S(ӗq;S[NܨT7c R`iC T#@w.Lt -sjxeן?(n~V~.e//z_EP>&OR??/(-2>Z_+0LSOۍʀ5WML}Ki~^J?(6*.#XiOEbG_J?)k޿,w֟#px*ܧ\8߭ ?j.GO ~tߵ&s&R}'~j,#L~(\j}d>V.c[ֹEԇBi(,2G^ZQqZV > `)EFH=vs~>vGk>Vh;_G֐FqgV_a՗+2;gX~tèG/\;꫎gT\}Otu4M\dHHzo;Sf^kmDMMIT;F+:L:4;^>ܸ^[Q=UR>wn/#3ޏ:ָu'Qͨ\~U!3kSs}qS>qUZ;U/8OާO_zz*8Z&KGzo^{ϐ\1O_j6jh-FopO̽+km{>ֿZw {Ux=WO~fo#6ޏs.9#j>7ΆթvO}iy}/od2\N^!hbkktMVwk_6^5}7Vߞ=pmmhժ:gyu Inpm?~CYߚq ~X:Yߚ٦\0$h=>팭:kH޿q2jCȟddnykӓGoΟ;rA29ȮJuF??k{6u!֚drڬ:?"͝PA\ڐmO#:B)N"6M?xhBgLd'\ԏ ԛ?+:'^i<:5ݛj/̃O0zH5|z /Z翴O-G2 xή|j-G,)2V@0XNJ$8OƗ,m4ao_֐Z.6CXm{pO0G?ڥ~\,zX;st|ξӕzU;gO_k9=RV;z U@q- J.zRzSRȹ4i?aJ^=)sB])1SG)\Q%7Gސ7{,RU:CM0`ݷZV?"[ҪLG}*OʛLSMf,3ҘXgZfmԟjګnZ7-UY['=2ԊYɚEJ5))XIB5Si2Edje}Z2#f\tݩf:S~_JW*w҇){1EaJsӱN Pg?ʕb2ғq &cFyUhǥC?G)_7S{r<ޟM5piòd֭̓2ZdOo;F8!3c4N:UKD QZڣ`s׭S2hɦS0>ßz#C!Cf\M1ԤJBjd4|)Oi5Wd45'ڪ솑_a>c#ҐKEb~Gӷ=*hHIJc! O!cV0W֫i ֐)V FM1JwB!IJC zSoQ俥'wFKi 2g.+M֗ɓȓo>ϭ;zQ? }iwZ_!ͼѶf.QϯF)i6Q֍L+Fz9~4yLǹ,NyWn/߿5j*簛'1ց0p1ҕG+Ha6K{ v=z{RJ^=E 5,r~=XG*]e!ep4 sIIzQz w -,3ڑ\|J4Fq0 qci'4 Z#&KFj-zRԩ/D"Y\bQWYxR$ֹtBM-3O4#XSwz -XV;R~T3))7/!R1ښtSHڕw~R)P*E ,()er1 =4Q[ԁQ߽*6fOXSRXن* ؜FhÚiR>FlL{>3QU: = }h.}E0UI2[֚]iِ}GHQ}E7zSKj2mO_֐b{R_Z SҘU3Қd zvdKqҐLCO6/ !fɹcR|X޵\s+&OւrˢH8֟(^3)>Xrw+) SҐOW.SMyn-URn-y(ܧ7-e}URo¹gOAUѾp/V((/W((l сPyLT^ee!jIދ& ѿޝTF8Zi5Lxj͡o(0҉h1{4SMR҉)r1pHɥOTĔe.y$r ,oR5"S'V-Iޟ [֍poW'HLCI=4Q 4j!NifC4hdh7}*M FzԨU{ժ$.+.:ԨTոOW%.n\M,l6S/ZHwҩ\dui7qUcwir14}ꇞh:=:43sBUggcKٲ42(+8i>hL=4KZO0z7&?d$Q) .O2I'^-AEƛZa{U36inj2]Bԗ QWg1qZ("FY7 qޘA1Vsd9RMF*2yQ%i֚}j"I>i>zUby,i {) ;Ɛ-TK`Ihɦi Cɤ9%O^M=iX8M$ѷސz}i dsM2SJiǥ1 M擏J? 
]n) ;vMƓbqh!4%&M&M)ME/q@ E/P!QPqF)dPqF)xȠތS)3L&hf4+Z+|ߟ󧫌 Vz:Ɣ1CRV\5.zT&E; 7 z(b[ PvzEK)q=J (h# GBE;_h9QW"?:(zpaE.%&.T~EGpܞz9Gp2y~tQOR`MpE&K,lR{co`'@n.ێXAf+*3sf=[9ŕ^-pbn@ ~FZnw >De54#GonF >Io;_.]4`#RкN9Kܬա&d'ܹw6'fP]gI3#rߤԀ~2SLn(+/Mbt >ywP@q#n`,4{ۡ[7u\1鳗]JlǨO[ꘂv/u 炻UCdV-4*mI_1*~3ؐa`ҍY'Ʒ X|P~\"ʜ#F}Q3- - 5ȯzbP㙘aUm#}+:-6GqRzv/A̭S֫ޤ.0,8X[m9 8#*g:-'<$z%,ٖQcT]1/95{45Ű;G\ 9FF.S^7-y1,DJ{7N47C} > >\ѿ~Of<\ '˥v5{q~/ʆ՟4i;>NF}ξ4wԪ>ޑ6ITT6CRtĹiiٰpV_ɠ/R^ŬEc کlPQaQ}얤>]@YT[>=as.t%E\<R{r2FOc?+!,?.N<A/?CPAw_.|r! p+8ۑ3Q׆ԑF+yҌ'.=J^yHAoLe鄃x,Ģtr8䢜Э .iRRjJ2z۴a; y}cG-=a~|fzdA'O,j: Hwx#D;%5'U2sȿ{*UwE>+]»3, ̐HRWLeFH3Q}ИK=dјc & E}ʿN *_ځKU |gd~!W#kG У=/Vhmcz$g&"[ҜvF:ze_u}>=5Ms/?À }Xi{ H7Pv4:Oy ]E},mm'c98b_rhQ!i3ӇZ+_@y; f"I՝&@WQ3Z]5F׃Ti Fbo8;k@1u Sn`JZ!wgJ4#=-j+V1]$';0kȔ Ղg}M8]:_V5\`S_mKF$)ƭSnGhT؍sQWָOW-F$ G ܟ`*0ܰ]9COjwx[+g@~h߹x3e#  hܻ|HIENDB`PyAV-8.1.0/docs/_static/logo-250.png000066400000000000000000000304101416312437500166730ustar00rootroot00000000000000PNG  IHDR^ a cHRMz%u0`:o_F0IDATxw|SU&i-S"@P"2DBܢSTd AEAPAd+]J6mB>W|<|#t}| L:JBi{~` Vԅ*#F7&u{b? sbȚHz>T1+~kı qxg$zPDzI0wsDž) ukuF1!n\g0a[_|v{#/F&knWo<:׻ξj1: ?\6+݈29=\7o< 7_3dixrHm~T. Mt$!pԍ S4&17EdjUk5ZZ-Ţ W/++A$HGnvgI7nlqk0dq_Ns_ >΁B @u$Nq"/uyҋz)NbϧubgypY9PxH.p.=66k:->寏DrADkDSgƨקwTU"yaQ17ܛ<̃p+KMJuƼ7ۼ}zf=!:m[,V2е;Le*%U(@p&(j_--/s  ~>H^7RXۻŊbApgX  ';t3Hp z)<#ʃ;{ X,˜};6]KWm=2Eð [}$e0[={}>u]\Yم12HV'G{n1u/ƀ 'B1gDy!zs\)*5p,9pK_L_Sh(n[p![CǦq?~4}>ߺ?PWh9?M\'-{E{}Nav$l)zAErrg(Qm9<Ńk؍TͽyEZKg5䩡]~z?;t6z2 iި3 Rc *u?usWegnn&H~+b{}(- u\Q0r{u': Mw0o>tLDJT_kv_B=v;F=x+Gr0BL>~'d[&|-Qͺ(>+ʆSA]H% j i{YÀH!nA! ꌜIgҐ#[3_sńTT_\lBRFEƷ&tx{뭽Xaz1NCC2sϻRӒF۹D[lZn1M6覯YI>WJ˻He^;j=v>PP29cp#%l ! HaU!)}] ,6p ts[,7[N0 ;OA]k{v\ݐ o* LE*I3O~8Ӄs׀q. 
Z7ɸpR(V*(ah_m{oe[覄z$(qLEӇ5J> vuz>os`3]Kk73$VuG"[>w[DXx}盼v(VX( w~1MEfFAz,>In5x탚9yݧS<R)jᆖvhoaw/=q#)lUfw7?K&WdGT .pkEt(7??ӛuk%\M1~~!gOئAC$ trsh>i-FsJ呫0ڹF|Av85pmd|)Nu+Я2 }v>j fY튭:=:s@cwwnާY&vC$ړ+C\ό=;s8tD]u&gaR:DhY1+TpxV@k[]VHJq!/і }qV:r Jѭ`R$Ա;LC?k{Ûy>z) +@jV!o|N\$B(= +ᵀ{]/+p7rry> $טB9OXpH#M~pb`r~rL54X/5ϯ( ;:JV:{{L m #;B5empVKA[^NCuz QBv!aAUD/So3C]tb_;_an`Je;jc|6cms:3smPT bP);l> BF{FȘ4\Mjoh*]! dxڃ8|r5·u+O=̂_=yBR*sʁ$4aCY'w<'+- He~MWdT,qiDWݞy;2Ҝ.'շ6|GauNtQ((**eϝ4!kw@ʆ7@w ޱ0`NI=Nld0JQT{=XKPH^٬OÀ-yf`_ƒrp)}UzQxN`^Wb v$71QAbXP)\]TE;Z!l⸜.*д}s/R ,.[/+ѕ RN wmms9{EړP/z l~^?7 Be)C40֟'!8˭BwFu r ;/܈$8*S@;6Uu5 a+~وT|0v[M?)v?m7;q};_*߭ !:&I4m}d1]+;)c]wx PYBwmP㢤w" r w$1G^BM6V:TEX- ۃBm#MxEVAw Z z]Vr_oGf#yBKf&&\'^>AII}Yj)Ҥc5RW\xg8v‰hŝNLV< ]|h ĸ7elOt()b0=[yHs >qV\/=~'O6.2](dso JdJ"$ל$DhB}K>'SJ \$8QA WHi $Ք|G Vu9[oJ PTD/#c%4gGm&g{/|+V1w>}ڢC91A @Aq)*3sFwj>'KF"ܱۊI 4!X ײ 2r}6\t-ϙh{xwk;#y)\C@g0P2[#u6.߲J륕T !;IX㕯4#~oB#&4zٶɠ6[C')Ż{$`c 7C3a}.0538|V 6d%6N#\abBJuw2]cLU])8ZL}~ [o v*@{vbI֛Y݆Qa b̂ɛOC$LAxEi]+ۈv /p85Hϻn#}dZvs}޻eXҐb]ߎg9tnI9>@~]+Bkyq8 oCN]xۯ(@w]lkޒRWB9{1ࠔk0s Egg/i2M[K]Sɯ@cP7V@#*$!2Ir2ugx HjEN=IF2^q+/3øox+,EgSC!iP$_q]' 4xjSLv Q #̏"!"N9&h7 o 0> b&A`|c%•W#}m&g5[d$ØU]CzWGcvvg6VM$yk.QM\S'OI=bƭ mp|9h6rEgO6oios(_:8cbǗ99JbDirc4~w4~&~ m.R1C@dp%r4̊ɅQŹQ6_u%WWNly20D lˉYmy2 d%:=&v[bv:i%֌Vr}2ş &/ޭ6ZYi-fEhsuJ䄙sldxsOn), _N耕d,O'Q'T.W]'9[O뙕tX,WhǟW$^uvWe9 r$x*i#d"왫9HU L=G-]/tXqox)+XEbReɘ:z#EGZ'Ts%Aw=AhI~u6)oL+UMu{1O!LJj'\7I RWuumgxqWN܀{Uu ErreY=pLG1"rrQ:iݤUك?SN-_k{ JɅKVѸ;_|/4@M)(!К7Oؙw/Jn+}[Jq?Js9d~ 6(OrXh埃W#Ͷ8]r&\튰0ɩߨI ˉ}qA2ȉ:\T@CяoupO>9J.u$'yI͊Ll7룕 ޔ "Jy744_UN; rsS4"F˨l%ϦbAtj B Xjeͨq{Vuh oyN2ޓGX&@@9A.{2c4B?t/>4x~ߓBNo*ƞ8JNC| ykc/6‚ I*D"JEԬ!e_d댺>nЬО1-.~8硊ƛ6 eJحj OtMeӵjRj#ȡJ|LE Geod͝D L~/U`08*NtӜܕG,AZD)ۿ/[Zٻ}~;0-VOĻQek!A1kb_^KoOFm_-V JמƦ* gN4=s!j-.|8eIHn1;Q1]b_|~|>J:8 ȱT@յNU2mݐ*ߺ!w v@-.mv7AffPHR_ºwyu?w1KPfۈ9kͿzAvpRFˤa r$IwHIWfRX׬'uMI^b)VZC';^עu o]rXuMTD^0Ycv\QTNPg5!:zhXn+7l~V"'%c;1UZ-l{t?p<5[06z > @wKO>__2f/,Va)7j8vɉ& :]N#¿/5hK5D)|c$SҼGΙSkb/ *ХZ[@U!?;{Ҕ@*E)"/t_nX]Bsw%Jo-S-\+ԱUϽ9 
RKlyC,t`PՅ~3RZjEˆyjג*_L!l7U. fV0YDܺ5vH }~ x#JCxt1^NQ kE%˞7hsqznG^i"ޢǯ vXr sFA wׇy}I*-i8sv( BJ;*]NЏ&mn8|mg3h oaKu gQ.~eFI `]((Ң3`>?N _on oSǠOmO*PT'/[vkkf80RV<׮{~Zq,T>R3BJZAa@R-Д_"yxEe --6Zbڥ" M4ťhJaޤq7 Q;I7sa0I.HmОUͺ7rx~@^$L:nCDRݎ}g^h)G%W isYȮEҿUaj*)oW((#XmZZVۉ (6ۯcz$/r'9,; ݃fWT!Qv.# aD~HCH9ܷ8wW1Vg(Ã2ڲF6f"PrPy?‘\fpѝIs> /KB@` Vnog[m &J$O7ѯM}l-|m5Y#.H!286,VPK+龴hg+';\vEi%m:+X.zz`v͉o;%8\~=vu\8Qǹ_ťL'WlY7KI~Zڀ+b/H?RF#9"3u(W.:z߇^g!FJjems$j+laR/!cPϭ᎞wu39v>&Q)yb&[#. T9ZeXg&K=}ϻfni+Hu؟pMZ؞ Iuȫv_j&\x&bmEMWl+.Rm˿;S\! MP"\QeR|4;̢MO$^X )vdvȋm2j~Aj甾H9**~ |#ӂHm$DPLɎO ?:y&K zޕV QEuO@kGrAh~yޏ\"}/"CUFnsKk&*Y`T6CEZ(+&)(i ]$}Đ+l^ =)U=:*3dI U]|"󺇫8NZ"%WRDr-u%iρ@HXiޕn CG5n5D \gK{:ܴLoO}hRńuy>Dmք+Cr٠bE|[ѭ}EL cTfw;\+(prޕlV-uxxuS+ˌ[2~>eւFӴrU_U t+%?Ti*(1i{3 q{n) z1~<&l v% i.I-%N ۤm:(̟j#U|)PIv؉U48T^x$RE]lU`;4l*Gm H%vyoUYL*Ԣ.$ڞ-V }Zd2nFV=WV#ߑkyp(G.u"p={pm!UG2>oqVpWͻJ{|IENDB`PyAV-8.1.0/docs/_themes/000077500000000000000000000000001416312437500147215ustar00rootroot00000000000000PyAV-8.1.0/docs/_themes/pyav/000077500000000000000000000000001416312437500157005ustar00rootroot00000000000000PyAV-8.1.0/docs/_themes/pyav/layout.html000066400000000000000000000004541416312437500201060ustar00rootroot00000000000000 {%- extends "basic/layout.html" %} {% block extrahead %} {% endblock %} {% block relbaritems %}
  • {% endblock %} PyAV-8.1.0/docs/_themes/pyav/theme.conf000066400000000000000000000000311416312437500176430ustar00rootroot00000000000000[theme] inherit = nature PyAV-8.1.0/docs/api/000077500000000000000000000000001416312437500140465ustar00rootroot00000000000000PyAV-8.1.0/docs/api/_globals.rst000066400000000000000000000000541416312437500163610ustar00rootroot00000000000000 Globals ======= .. autofunction:: av.open PyAV-8.1.0/docs/api/audio.rst000066400000000000000000000022111416312437500156750ustar00rootroot00000000000000 Audio ===== Audio Streams ------------- .. automodule:: av.audio.stream .. autoclass:: AudioStream :members: Audio Context ------------- .. automodule:: av.audio.codeccontext .. autoclass:: AudioCodecContext :members: :exclude-members: channel_layout, channels Audio Formats ------------- .. automodule:: av.audio.format .. autoclass:: AudioFormat :members: Audio Layouts ------------- .. automodule:: av.audio.layout .. autoclass:: AudioLayout :members: .. autoclass:: AudioChannel :members: Audio Frames ------------ .. automodule:: av.audio.frame .. autoclass:: AudioFrame :members: :exclude-members: to_nd_array Audio FIFOs ----------- .. automodule:: av.audio.fifo .. autoclass:: AudioFifo :members: :exclude-members: write, read, read_many .. automethod:: write .. automethod:: read .. automethod:: read_many Audio Resamplers ---------------- .. automodule:: av.audio.resampler .. autoclass:: AudioResampler :members: :exclude-members: resample .. automethod:: resample PyAV-8.1.0/docs/api/buffer.rst000066400000000000000000000001311416312437500160440ustar00rootroot00000000000000 Buffers ======= .. automodule:: av.buffer .. autoclass:: Buffer :members: PyAV-8.1.0/docs/api/codec.rst000066400000000000000000000056461416312437500156700ustar00rootroot00000000000000 Codecs ====== Descriptors ----------- .. currentmodule:: av.codec .. automodule:: av.codec .. autoclass:: Codec .. automethod:: Codec.create .. autoattribute:: Codec.is_encoder .. 
autoattribute:: Codec.is_decoder .. .. autoattribute:: Codec.descriptor .. autoattribute:: Codec.name .. autoattribute:: Codec.long_name .. autoattribute:: Codec.type .. autoattribute:: Codec.id .. autoattribute:: Codec.frame_rates .. autoattribute:: Codec.audio_rates .. autoattribute:: Codec.video_formats .. autoattribute:: Codec.audio_formats Flags ~~~~~ .. autoattribute:: Codec.properties .. autoclass:: Properties Wraps :ffmpeg:`AVCodecDescriptor.props` (``AV_CODEC_PROP_*``). .. enumtable:: av.codec.codec.Properties :class: av.codec.codec.Codec .. autoattribute:: Codec.capabilities .. autoclass:: Capabilities Wraps :ffmpeg:`AVCodec.capabilities` (``AV_CODEC_CAP_*``). Note that ``ffmpeg -codecs`` prefers the properties versions of ``INTRA_ONLY`` and ``LOSSLESS``. .. enumtable:: av.codec.codec.Capabilities :class: av.codec.codec.Codec Contexts -------- .. currentmodule:: av.codec.context .. automodule:: av.codec.context .. autoclass:: CodecContext .. autoattribute:: CodecContext.codec .. autoattribute:: CodecContext.options .. automethod:: CodecContext.create .. automethod:: CodecContext.open .. automethod:: CodecContext.close Attributes ~~~~~~~~~~ .. autoattribute:: CodecContext.is_open .. autoattribute:: CodecContext.is_encoder .. autoattribute:: CodecContext.is_decoder .. autoattribute:: CodecContext.name .. autoattribute:: CodecContext.type .. autoattribute:: CodecContext.profile .. autoattribute:: CodecContext.time_base .. autoattribute:: CodecContext.ticks_per_frame .. autoattribute:: CodecContext.bit_rate .. autoattribute:: CodecContext.bit_rate_tolerance .. autoattribute:: CodecContext.max_bit_rate .. autoattribute:: CodecContext.thread_count .. autoattribute:: CodecContext.thread_type .. autoattribute:: CodecContext.skip_frame .. autoattribute:: CodecContext.extradata .. autoattribute:: CodecContext.extradata_size Transcoding ~~~~~~~~~~~ .. automethod:: CodecContext.parse .. automethod:: CodecContext.encode .. 
automethod:: CodecContext.decode Flags ~~~~~ .. autoattribute:: CodecContext.flags .. autoclass:: av.codec.context.Flags .. enumtable:: av.codec.context:Flags :class: av.codec.context:CodecContext .. autoattribute:: CodecContext.flags2 .. autoclass:: av.codec.context.Flags2 .. enumtable:: av.codec.context:Flags2 :class: av.codec.context:CodecContext Enums ~~~~~ .. autoclass:: av.codec.context.ThreadType Which multithreading methods to use. Use of FF_THREAD_FRAME will increase decoding delay by one frame per thread, so clients which cannot provide future frames should not use it. .. enumtable:: av.codec.context.ThreadType .. autoclass:: av.codec.context.SkipType .. enumtable:: av.codec.context.SkipType PyAV-8.1.0/docs/api/container.rst000066400000000000000000000025351416312437500165670ustar00rootroot00000000000000 Containers ========== Generic ------- .. currentmodule:: av.container .. automodule:: av.container .. autoclass:: Container .. attribute:: options .. attribute:: container_options .. attribute:: stream_options .. attribute:: metadata_encoding .. attribute:: metadata_errors .. attribute:: open_timeout .. attribute:: read_timeout Flags ~~~~~ .. attribute:: av.container.Container.flags .. class:: av.container.Flags Wraps :ffmpeg:`AVFormatContext.flags`. .. enumtable:: av.container.core:Flags :class: av.container.core:Container Input Containers ---------------- .. autoclass:: InputContainer :members: Output Containers ----------------- .. autoclass:: OutputContainer :members: Formats ------- .. currentmodule:: av.format .. automodule:: av.format .. autoclass:: ContainerFormat .. autoattribute:: ContainerFormat.name .. autoattribute:: ContainerFormat.long_name .. autoattribute:: ContainerFormat.options .. autoattribute:: ContainerFormat.input .. autoattribute:: ContainerFormat.output .. autoattribute:: ContainerFormat.is_input .. autoattribute:: ContainerFormat.is_output .. autoattribute:: ContainerFormat.extensions Flags ~~~~~ .. 
autoattribute:: ContainerFormat.flags .. autoclass:: av.format.Flags .. enumtable:: av.format.Flags :class: av.format.ContainerFormat PyAV-8.1.0/docs/api/enum.rst000066400000000000000000000003311416312437500155410ustar00rootroot00000000000000 Enumerations and Flags ====================== .. currentmodule:: av.enum .. automodule:: av.enum .. _enums: Enumerations ------------ .. autoclass:: EnumItem .. _flags: Flags ----- .. autoclass:: EnumFlag PyAV-8.1.0/docs/api/error.rst000066400000000000000000000045571416312437500157360ustar00rootroot00000000000000Errors ====== .. currentmodule:: av.error .. _error_behaviour: General Behaviour ----------------- When PyAV encounters an FFmpeg error, it raises an appropriate exception. FFmpeg has a couple dozen of its own error types which we represent via :ref:`error_classes` and at a lower level via :ref:`error_types`. FFmpeg will also return more typical errors such as ``ENOENT`` or ``EAGAIN``, which we do our best to translate to extensions of the builtin exceptions as defined by `PEP 3151 <https://www.python.org/dev/peps/pep-3151/>`_ (and fall back onto ``OSError`` if using Python < 3.3). .. _error_types: Error Type Enumerations ----------------------- We provide :class:`av.error.ErrorType` as an enumeration of the various FFmpeg errors. To mimic the stdlib ``errno`` module, all enumeration values are available in the ``av.error`` module, e.g.:: try: do_something() except OSError as e: if e.errno != av.error.FILTER_NOT_FOUND: raise handle_error() .. autoclass:: av.error.ErrorType .. _error_classes: Error Exception Classes ----------------------- PyAV raises the typical builtin exceptions within its own codebase, but things get a little more complex when it comes to translating FFmpeg errors. There are two competing ideas that have influenced the final design: 1. We want every exception that originates within FFmpeg to inherit from a common :class:`.FFmpegError` exception; 2. We want to use the builtin exceptions whenever possible. 
As such, PyAV effectively shadows as much of the builtin exception hierarchy as it requires, extending from both the builtins and from :class:`FFmpegError`. Therefore, an argument error within FFmpeg will raise an ``av.error.ValueError``, which can be caught via either :class:`FFmpegError` or ``ValueError``. All of these exceptions expose the typical ``errno`` and ``strerror`` attributes (even ``ValueError`` which doesn't typically), as well as some PyAV extensions such as :attr:`FFmpegError.log`. All of these exceptions are available on the top-level ``av`` package, e.g.:: try: do_something() except av.FilterNotFoundError: handle_error() .. autoclass:: av.FFmpegError Mapping Codes and Classes ------------------------- Here is how the classes line up with the error codes/enumerations: .. include:: ../_build/rst/api/error_table.rst PyAV-8.1.0/docs/api/error_table.py000066400000000000000000000013551416312437500167240ustar00rootroot00000000000000 import av rows = [( #'Tag (Code)', 'Exception Class', 'Code/Enum Name', 'FFmpeg Error Message', )] for code, cls in av.error.classes.items(): enum = av.error.ErrorType.get(code) if not enum: continue if enum.tag == b'PyAV': continue rows.append(( #'{} ({})'.format(enum.tag, code), '``av.{}``'.format(cls.__name__), '``av.error.{}``'.format(enum.name), enum.strerror, )) lens = [max(len(row[i]) for row in rows) for i in range(len(rows[0]))] header = tuple('=' * x for x in lens) rows.insert(0, header) rows.insert(2, header) rows.append(header) for row in rows: print(' '.join('{:{}s}'.format(cell, len_) for cell, len_ in zip(row, lens))) PyAV-8.1.0/docs/api/filter.rst000066400000000000000000000007361416312437500160710ustar00rootroot00000000000000Filters ======= .. automodule:: av.filter.filter .. autoclass:: Filter :members: .. automodule:: av.filter.graph .. autoclass:: Graph :members: .. automodule:: av.filter.context .. autoclass:: FilterContext :members: .. automodule:: av.filter.link .. autoclass:: FilterLink :members: .. 
automodule:: av.filter.pad .. autoclass:: FilterPad :members: .. autoclass:: FilterContextPad :members: PyAV-8.1.0/docs/api/frame.rst000066400000000000000000000001251416312437500156700ustar00rootroot00000000000000 Frames ====== .. automodule:: av.frame .. autoclass:: Frame :members: PyAV-8.1.0/docs/api/packet.rst000066400000000000000000000001311416312437500160420ustar00rootroot00000000000000 Packets ======= .. automodule:: av.packet .. autoclass:: Packet :members: PyAV-8.1.0/docs/api/plane.rst000066400000000000000000000001251416312437500156750ustar00rootroot00000000000000 Planes ====== .. automodule:: av.plane .. autoclass:: Plane :members: PyAV-8.1.0/docs/api/sidedata.rst000066400000000000000000000004741416312437500163630ustar00rootroot00000000000000 Side Data ========= .. automodule:: av.sidedata.sidedata .. autoclass:: SideData :members: .. autoclass:: av.sidedata.sidedata.Type .. enumtable:: av.sidedata.sidedata.Type Motion Vectors -------------- .. automodule:: av.sidedata.motionvectors .. autoclass:: MotionVectors :members: PyAV-8.1.0/docs/api/stream.rst000066400000000000000000000036171416312437500161020ustar00rootroot00000000000000 Streams ======= Stream collections ------------------ .. currentmodule:: av.container.streams .. autoclass:: StreamContainer Dynamic Slicing ~~~~~~~~~~~~~~~ .. automethod:: StreamContainer.get Typed Collections ~~~~~~~~~~~~~~~~~ These attributes are preferred for readability if you don't need the dynamic capabilities of :meth:`.get`: .. attribute:: StreamContainer.video A tuple of :class:`VideoStream`. .. attribute:: StreamContainer.audio A tuple of :class:`AudioStream`. .. attribute:: StreamContainer.subtitles A tuple of :class:`SubtitleStream`. .. attribute:: StreamContainer.data A tuple of :class:`DataStream`. .. attribute:: StreamContainer.other A tuple of :class:`Stream` Streams ------- .. currentmodule:: av.stream .. autoclass:: Stream Basics ~~~~~~ .. autoattribute:: Stream.type .. autoattribute:: Stream.codec_context .. 
autoattribute:: Stream.id .. autoattribute:: Stream.index Transcoding ~~~~~~~~~~~ .. automethod:: Stream.encode .. automethod:: Stream.decode Timing ~~~~~~ .. seealso:: :ref:`time` for a discussion of time in general. .. autoattribute:: Stream.time_base .. autoattribute:: Stream.start_time .. autoattribute:: Stream.duration .. autoattribute:: Stream.frames .. _frame_rates: Frame Rates ........... These attributes are different ways of calculating frame rates. Since containers don't need to explicitly identify a frame rate, nor even have a static frame rate, these attributes are not guaranteed to be accurate. You must experiment with them on your media to see which ones work for your purposes. Whenever possible, we advise that you use raw timing instead of frame rates. .. autoattribute:: Stream.average_rate .. autoattribute:: Stream.base_rate .. autoattribute:: Stream.guessed_rate Others ~~~~~~ .. automethod:: Stream.seek .. autoattribute:: Stream.profile .. autoattribute:: Stream.language PyAV-8.1.0/docs/api/subtitles.rst000066400000000000000000000007151416312437500166210ustar00rootroot00000000000000 Subtitles =========== .. automodule:: av.subtitles.stream .. autoclass:: SubtitleStream :members: .. automodule:: av.subtitles.subtitle .. autoclass:: SubtitleSet :members: .. autoclass:: Subtitle :members: .. autoclass:: BitmapSubtitle :members: .. autoclass:: BitmapSubtitlePlane :members: .. autoclass:: TextSubtitle :members: .. autoclass:: AssSubtitle :members: PyAV-8.1.0/docs/api/time.rst000066400000000000000000000076771416312437500155430ustar00rootroot00000000000000 .. _time: Time ==== Overview -------- Time is expressed as integer multiples of arbitrary units of time called a ``time_base``. There are different contexts that have different time bases: :class:`.Stream` has :attr:`.Stream.time_base`, :class:`.CodecContext` has :attr:`.CodecContext.time_base`, and :class:`.Container` has :data:`av.TIME_BASE`. ..
testsetup:: import av path = av.datasets.curated('pexels/time-lapse-video-of-night-sky-857195.mp4') def get_nth_packet_and_frame(fh, skip): for p in fh.demux(): for f in p.decode(): if not skip: return p, f skip -= 1 .. doctest:: >>> fh = av.open(path) >>> video = fh.streams.video[0] >>> video.time_base Fraction(1, 25) >>> video.codec_context.time_base Fraction(1, 50) Attributes that represent time on those objects will be in that object's ``time_base``: .. doctest:: >>> video.duration 168 >>> float(video.duration * video.time_base) 6.72 :class:`.Packet` has a :attr:`.Packet.pts` ("presentation" time stamp), and :class:`.Frame` has a :attr:`.Frame.pts` and :attr:`.Frame.dts` ("presentation" and "decode" time stamps). Both have a ``time_base`` attribute, but it defaults to the time base of the object that handles them. For packets that is streams. For frames it is streams when decoding, and codec contexts when encoding (which is strange, but it is what it is). In many cases a stream has a time base of ``1 / frame_rate``, and then its frames have incrementing integers for times (0, 1, 2, etc.). Those frames take place at ``pts * time_base`` or ``0 / frame_rate``, ``1 / frame_rate``, ``2 / frame_rate``, etc. .. doctest:: >>> p, f = get_nth_packet_and_frame(fh, skip=1) >>> p.time_base Fraction(1, 25) >>> p.dts 1 >>> f.time_base Fraction(1, 25) >>> f.pts 1 For convenience, :attr:`.Frame.time` is a ``float`` in seconds: .. doctest:: >>> f.time 0.04 FFmpeg Internals ---------------- .. note:: Time in FFmpeg is not 100% clear to us (see :ref:`authority_of_docs`). At times the FFmpeg documentation and canonical-seeming posts in the forums appear contradictory. We've experimented with it, and what follows is the picture that we are operating under. Both :ffmpeg:`AVStream` and :ffmpeg:`AVCodecContext` have a ``time_base`` member. However, they are used for different purposes, and (this author finds) it is too easy to abstract the concept too far.
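The arithmetic described above can be sketched with plain ``Fraction`` objects, with no PyAV calls involved (the time bases below are made up for illustration):

```python
from fractions import Fraction

# pts -> seconds: multiply the tick count by the time base.
time_base = Fraction(1, 25)    # e.g. a 25 fps stream
pts = 42
seconds = float(pts * time_base)
print(seconds)  # 1.68

# Rescaling a time stamp from one time base to another, which is what
# has to happen to a frame's pts before its packet is muxed into a
# stream with a different time base.
codec_tb = Fraction(1, 50)
stream_tb = Fraction(1, 12800)
frame_pts = 25                               # 0.5 s in the codec time base
packet_pts = int(frame_pts * codec_tb / stream_tb)
print(packet_pts)  # 6400
```

Because both values are exact rationals, the rescale is just a change of units: the moment in seconds (``0.5`` here) stays the same, only the tick size changes.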
When there is no ``time_base`` (such as on :ffmpeg:`AVFormatContext`), there is an implicit ``time_base`` of ``1/AV_TIME_BASE``. Encoding ........ For encoding, you (the PyAV developer / FFmpeg "user") must set :ffmpeg:`AVCodecContext.time_base`, ideally to the inverse of the frame rate (or so the library docs say to do if your frame rate is fixed; we're not sure what to do if it is not fixed), and you may set :ffmpeg:`AVStream.time_base` as a hint to the muxer. After you open all the codecs and call :ffmpeg:`avformat_write_header`, the stream time base may change, and you must respect it. We don't know if the codec time base may change, so we will make the safer assumption that it may and respect it as well. You then prepare :ffmpeg:`AVFrame.pts` in :ffmpeg:`AVCodecContext.time_base`. The encoded :ffmpeg:`AVPacket.pts` is simply copied from the frame by the library, and so is still in the codec's time base. You must rescale it to :ffmpeg:`AVStream.time_base` before muxing (as all stream operations assume the packet time is in stream time base). For fixed-fps content your frames' ``pts`` would be the frame or sample index (for video and audio, respectively). PyAV should attempt to do this. Decoding ........ Everything is in :ffmpeg:`AVStream.time_base` because we don't have to rebase it into codec time base (as it generally seems to be the case that :ffmpeg:`AVCodecContext` doesn't really care about your timing; I wish there was a way to assert this without reading every codec). PyAV-8.1.0/docs/api/utils.rst000066400000000000000000000002421416312437500157360ustar00rootroot00000000000000 Utilities ========= Logging ------- .. automodule:: av.logging :members: Other ----- .. automodule:: av.utils :members: .. autoclass:: AVError PyAV-8.1.0/docs/api/video.rst000066400000000000000000000041641416312437500157130ustar00rootroot00000000000000Video ===== Video Streams ------------- .. automodule:: av.video.stream .. 
autoclass:: VideoStream :members: Video Codecs ------------- .. automodule:: av.video.codeccontext .. autoclass:: VideoCodecContext :members: Video Formats ------------- .. automodule:: av.video.format .. autoclass:: VideoFormat :members: .. autoclass:: VideoFormatComponent :members: Video Frames ------------ .. automodule:: av.video.frame .. autoclass:: VideoFrame A single video frame. :param int width: The width of the frame. :param int height: The height of the frame. :param format: The format of the frame. :type format: :class:`VideoFormat` or ``str``. >>> frame = VideoFrame(1920, 1080, 'rgb24') Structural ~~~~~~~~~~ .. autoattribute:: VideoFrame.width .. autoattribute:: VideoFrame.height .. attribute:: VideoFrame.format The :class:`.VideoFormat` of the frame. .. autoattribute:: VideoFrame.planes Types ~~~~~ .. autoattribute:: VideoFrame.key_frame .. autoattribute:: VideoFrame.interlaced_frame .. autoattribute:: VideoFrame.pict_type .. autoclass:: av.video.frame.PictureType Wraps ``AVPictureType`` (``AV_PICTURE_TYPE_*``). .. enumtable:: av.video.frame.PictureType Conversions ~~~~~~~~~~~ .. automethod:: VideoFrame.reformat .. automethod:: VideoFrame.to_rgb .. automethod:: VideoFrame.to_image .. automethod:: VideoFrame.to_ndarray .. automethod:: VideoFrame.from_image .. automethod:: VideoFrame.from_ndarray Video Planes ------------- .. automodule:: av.video.plane .. autoclass:: VideoPlane :members: Video Reformatters ------------------ .. automodule:: av.video.reformatter .. autoclass:: VideoReformatter .. automethod:: reformat Enums ~~~~~ .. autoclass:: av.video.reformatter.Interpolation Wraps the ``SWS_*`` flags. .. enumtable:: av.video.reformatter.Interpolation .. autoclass:: av.video.reformatter.Colorspace Wraps the ``SWS_CS_*`` flags. There is a bit of overlap in these names which comes from FFmpeg and backwards compatibility. ..
enumtable:: av.video.reformatter.Colorspace PyAV-8.1.0/docs/conf.py000066400000000000000000000323041416312437500145760ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # PyAV documentation build configuration file, created by # sphinx-quickstart on Fri Dec 7 22:13:16 2012. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. from docutils import nodes import logging import math import os import re import sys import sys import xml.etree.ElementTree as etree import sphinx from sphinx import addnodes from sphinx.util.docutils import SphinxDirective logging.basicConfig() if sphinx.version_info < (1, 8): print("Sphinx {} is too old; we require >= 1.8.".format(sphinx.__version__), file=sys.stderr) exit(1) # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath('..')) # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = [ 'sphinx.ext.autodoc', 'sphinx.ext.intersphinx', 'sphinx.ext.todo', 'sphinx.ext.coverage', 'sphinx.ext.viewcode', 'sphinx.ext.extlinks', 'sphinx.ext.doctest', # We used to use doxylink, but we found its caching behaviour annoying, and # so made a minimally viable version of our own. ] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. 
source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'PyAV' copyright = u'2017, Mike Boers' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The full version, including alpha/beta/rc tags. release = open('../VERSION.txt').read().strip() # The short X.Y version. version = release.split('-')[0] # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. 
html_theme = 'pyav' html_theme_path = [os.path.abspath(os.path.join(__file__, '..', '_themes'))] # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. html_logo = '_static/logo-250.png' # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. html_favicon = '_static/favicon.png' # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. 
#html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None doctest_global_setup = ''' import errno import os import av from av.datasets import fate, fate as fate_suite, curated from tests import common from tests.common import sandboxed as _sandboxed def sandboxed(*args, **kwargs): kwargs['timed'] = True return _sandboxed('docs', *args, **kwargs) _cwd = os.getcwd() here = sandboxed('__cwd__') try: os.makedirs(here) except OSError as e: if e.errno != errno.EEXIST: raise os.chdir(here) video_path = curated('pexels/time-lapse-video-of-night-sky-857195.mp4') ''' doctest_global_cleanup = ''' os.chdir(_cwd) ''' doctest_test_doctest_blocks = '' extlinks = { 'ffstruct': ('http://ffmpeg.org/doxygen/trunk/struct%s.html', 'struct '), 'issue': ('https://github.com/PyAV-Org/PyAV/issues/%s', '#'), 'pr': ('https://github.com/PyAV-Org/PyAV/pull/%s', '#'), 'gh-user': ('https://github.com/%s', '@'), } intersphinx_mapping = { 'https://docs.python.org/3': None, } autodoc_member_order = 'bysource' autodoc_default_options = { 'undoc-members': True, 'show-inheritance': True, } todo_include_todos = True class PyInclude(SphinxDirective): has_content = True def run(self): source = '\n'.join(self.content) output = [] def write(*content, sep=' ', end='\n'): output.append(sep.join(map(str, content)) + end) namespace = dict(write=write) exec(compile(source, '', 'exec'), namespace, namespace) output = ''.join(output).splitlines() self.state_machine.insert_input(output, 'blah') return [] #[nodes.literal('hello', 
repr(content))] def load_entrypoint(name): parts = name.split(':') if len(parts) == 1: parts = name.rsplit('.', 1) mod_name, attrs = parts attrs = attrs.split('.') try: obj = __import__(mod_name, fromlist=['.']) except ImportError as e: print('Error while importing.', (name, mod_name, attrs, e)) raise for attr in attrs: obj = getattr(obj, attr) return obj class EnumTable(SphinxDirective): required_arguments = 1 option_spec = { 'class': lambda x: x, } def run(self): cls_ep = self.options.get('class') cls = load_entrypoint(cls_ep) if cls_ep else None enum = load_entrypoint(self.arguments[0]) properties = {} if cls is not None: for name, value in vars(cls).items(): if isinstance(value, property): try: item = value._enum_item except AttributeError: pass else: if isinstance(item, enum): properties[item] = name colwidths = [15, 15, 5, 65] if cls else [15, 5, 75] ncols = len(colwidths) table = nodes.table() tgroup = nodes.tgroup(cols=ncols) table += tgroup for width in colwidths: tgroup += nodes.colspec(colwidth=width) thead = nodes.thead() tgroup += thead tbody = nodes.tbody() tgroup += tbody def makerow(*texts): row = nodes.row() for text in texts: if text is None: continue row += nodes.entry('', nodes.paragraph('', str(text))) return row thead += makerow( '{} Attribute'.format(cls.__name__) if cls else None, '{} Name'.format(enum.__name__), 'Flag Value', 'Meaning in FFmpeg', ) seen = set() for name, item in enum._by_name.items(): if name.lower() in seen: continue seen.add(name.lower()) try: attr = properties[item] except KeyError: if cls: continue attr = None value = '0x{:X}'.format(item.value) doc = item.__doc__ or '-' tbody += makerow( attr, name, value, doc, ) return [table] doxylink = {} ffmpeg_tagfile = os.path.abspath(os.path.join(__file__, '..', '_build', 'doxygen', 'tagfile.xml')) if not os.path.exists(ffmpeg_tagfile): print("ERROR: Missing FFmpeg tagfile.") exit(1) doxylink['ffmpeg'] = (ffmpeg_tagfile, 'https://ffmpeg.org/doxygen/trunk/') def 
doxylink_create_handler(app, file_name, url_base): print("Finding all names in Doxygen tagfile", file_name) doc = etree.parse(file_name) root = doc.getroot() parent_map = {} # ElementTree doesn't give us access to parents. urls = {} for node in root.findall('.//name/..'): for child in node: parent_map[child] = node kind = node.attrib['kind'] if kind not in ('function', 'struct', 'variable'): continue name = node.find('name').text if kind not in ('function', ): parent = parent_map.get(node) parent_name = parent.find('name') if parent else None if parent_name is not None: name = '{}.{}'.format(parent_name.text, name) filenode = node.find('filename') if filenode is not None: url = filenode.text else: url = '{}#{}'.format( node.find('anchorfile').text, node.find('anchor').text, ) urls.setdefault(kind, {})[name] = url def get_url(name): # These are all the kinds that seem to exist. for kind in ( 'function', 'struct', 'variable', # These are struct members. # 'class', # 'define', # 'enumeration', # 'enumvalue', # 'file', # 'group', # 'page', # 'typedef', # 'union', ): try: return urls[kind][name] except KeyError: pass def _doxylink_handler(name, rawtext, text, lineno, inliner, options={}, content=[]): m = re.match(r'^(.+?)(?:<(.+?)>)?$', text) title, name = m.groups() name = name or title url = get_url(name) if not url: print("ERROR: Could not find", name) exit(1) node = addnodes.literal_strong(title, title) if url: url = url_base + url node = nodes.reference( '', '', node, refuri=url ) return [node], [] return _doxylink_handler def setup(app): app.add_stylesheet('custom.css') app.add_directive('flagtable', EnumTable) app.add_directive('enumtable', EnumTable) app.add_directive('pyinclude', PyInclude) skip = os.environ.get('PYAV_SKIP_DOXYLINK') for role, (filename, url_base) in doxylink.items(): if skip: app.add_role(role, lambda *args: ([], [])) else: app.add_role(role, doxylink_create_handler(app, filename, url_base))
PyAV-8.1.0/docs/cookbook/000077500000000000000000000000001416312437500151035ustar00rootroot00000000000000PyAV-8.1.0/docs/cookbook/basics.rst000066400000000000000000000030741416312437500171050ustar00rootroot00000000000000Basics ====== Here are some common things to do without digging too deep into the mechanics. Saving Keyframes ---------------- If you just want to look at keyframes, you can set :attr:`.CodecContext.skip_frame` to speed up the process: .. literalinclude:: ../../examples/basics/save_keyframes.py Remuxing -------- Remuxing is copying audio/video data from one container to another without transcoding it. By doing so, the data does not suffer any generational loss, and is the full quality that it was in the source container. .. literalinclude:: ../../examples/basics/remux.py Parsing ------- Sometimes we have a raw stream of data, and we need to split it into packets before working with it. We can use :meth:`.CodecContext.parse` to do this. .. literalinclude:: ../../examples/basics/parse.py Threading --------- By default, codec contexts will decode with :data:`~av.codec.context.ThreadType.SLICE` threading. This allows multiple threads to cooperate to decode any given frame. This is faster than no threading, but is not as fast as we can go. Also enabling :data:`~av.codec.context.ThreadType.FRAME` (or :data:`~av.codec.context.ThreadType.AUTO`) threading allows multiple threads to decode independent frames. This is not enabled by default because it does change the API a bit: you will get a much larger "delay" between starting the decode of a packet and getting its results. Take a look at the output of this sample to see what we mean: .. literalinclude:: ../../examples/basics/thread_type.py On the author's machine, the second pass decodes ~5 times faster.
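The "delay" described in the threading section can be pictured with a loose, pure-Python analogy (no PyAV involved; ``decode`` below is a made-up stand-in for per-packet decoding work): a pool has several independent items in flight before the first result comes back, much as FRAME threading buffers several packets before the first frame emerges.

```python
from concurrent.futures import ThreadPoolExecutor

def decode(packet):
    # Stand-in for decoding one packet into one frame.
    return packet * 2

packets = range(8)

# Several "packets" are submitted before the first "frame" is collected,
# so results arrive with a larger initial delay -- but overall throughput
# is higher because the items are processed concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    frames = list(pool.map(decode, packets))

print(frames)  # [0, 2, 4, 6, 8, 10, 12, 14]
```

Note ``pool.map`` still yields results in submission order, which mirrors how decoded frames come back in order even when FRAME threading decodes them concurrently.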
PyAV-8.1.0/docs/cookbook/numpy.rst000066400000000000000000000010651416312437500170070ustar00rootroot00000000000000Numpy ===== Video Barcode ------------- A video barcode shows the change in colour and tone over time. Time is represented on the horizontal axis, while the vertical remains the vertical direction in the image. See http://moviebarcode.tumblr.com/ for examples from Hollywood movies, and here is an example from a sunset timelapse: .. image:: ../_static/examples/numpy/barcode.jpg The code that created this: .. literalinclude:: ../../examples/numpy/barcode.py Generating Video ---------------- .. literalinclude:: ../../examples/numpy/generate_video.py PyAV-8.1.0/docs/development/000077500000000000000000000000001416312437500156175ustar00rootroot00000000000000PyAV-8.1.0/docs/development/changelog.rst000066400000000000000000000001771416312437500203050ustar00rootroot00000000000000 .. It is all in the other file (that we want at the top-level of the repo). .. _changelog: .. include:: ../../CHANGELOG.rst PyAV-8.1.0/docs/development/contributors.rst000066400000000000000000000000411416312437500211010ustar00rootroot00000000000000 .. include:: ../../AUTHORS.rst PyAV-8.1.0/docs/development/hacking.rst000066400000000000000000000001731416312437500177560ustar00rootroot00000000000000 .. It is all in the other file (that we want at the top-level of the repo). .. _hacking: .. 
include:: ../../HACKING.rst PyAV-8.1.0/docs/development/includes.py000066400000000000000000000255021416312437500200030ustar00rootroot00000000000000import json import os import re import sys import xml.etree.ElementTree as etree from Cython.Compiler.Main import compile_single, CompilationOptions from Cython.Compiler.TreeFragment import parse_from_strings from Cython.Compiler.Visitor import TreeVisitor from Cython.Compiler import Nodes os.chdir(os.path.abspath(os.path.join(__file__, '..', '..', '..'))) class Visitor(TreeVisitor): def __init__(self, state=None): super(Visitor, self).__init__() self.state = dict(state or {}) self.events = [] def record_event(self, node, **kw): state = self.state.copy() state.update(**kw) state['node'] = node state['pos'] = node.pos state['end_pos'] = node.end_pos() self.events.append(state) def visit_Node(self, node): self.visitchildren(node) def visit_ModuleNode(self, node): self.state['module'] = node.full_module_name self.visitchildren(node) self.state.pop('module') def visit_CDefExternNode(self, node): self.state['extern_from'] = node.include_file self.visitchildren(node) self.state.pop('extern_from') def visit_CStructOrUnionDefNode(self, node): self.record_event(node, type='struct', name=node.name) self.state['struct'] = node.name self.visitchildren(node) self.state.pop('struct') def visit_CFuncDeclaratorNode(self, node): if isinstance(node.base, Nodes.CNameDeclaratorNode): self.record_event(node, type='function', name=node.base.name) else: self.visitchildren(node) def visit_CVarDefNode(self, node): if isinstance(node.declarators[0], Nodes.CNameDeclaratorNode): # Grab the type name. # TODO: Do a better job. 
type_ = node.base_type if hasattr(type_, 'name'): type_name = type_.name elif hasattr(type_, 'base_type'): type_name = type_.base_type.name else: type_name = str(type_) self.record_event(node, type='variable', name=node.declarators[0].name, vartype=type_name) else: self.visitchildren(node) def visit_CClassDefNode(self, node): self.state['class'] = node.class_name self.visitchildren(node) self.state.pop('class') def visit_PropertyNode(self, node): self.state['property'] = node.name self.visitchildren(node) self.state.pop('property') def visit_DefNode(self, node): self.state['function'] = node.name self.visitchildren(node) self.state.pop('function') def visit_AttributeNode(self, node): if getattr(node.obj, 'name', None) == 'lib': self.record_event(node, type='use', name=node.attribute) else: self.visitchildren(node) def extract(path, **kwargs): name = os.path.splitext(os.path.relpath(path))[0].replace('/', '.') options = CompilationOptions() options.include_path.append('include') options.language_level = 2 options.compiler_directives = dict( c_string_type='str', c_string_encoding='ascii', ) context = options.create_context() tree = parse_from_strings(name, open(path).read(), context, level='module_pxd' if path.endswith('.pxd') else None, **kwargs) extractor = Visitor({'file': path}) extractor.visit(tree) return extractor.events def iter_cython(path): '''Yield all ``.pyx`` and ``.pxd`` files in the given root.''' for dir_path, dir_names, file_names in os.walk(path): for file_name in file_names: if file_name.startswith('.'): continue if os.path.splitext(file_name)[1] not in ('.pyx', '.pxd'): continue yield os.path.join(dir_path, file_name) doxygen = {} doxygen_base = 'https://ffmpeg.org/doxygen/trunk' tagfile_path = 'docs/_build/doxygen/tagfile.xml' tagfile_json = tagfile_path + '.json' if os.path.exists(tagfile_json): print('Loading pre-parsed Doxygen tagfile:', tagfile_json, file=sys.stderr) doxygen = json.load(open(tagfile_json)) if not doxygen: print('Parsing 
Doxygen tagfile:', tagfile_path, file=sys.stderr) if not os.path.exists(tagfile_path): print(' MISSING!', file=sys.stderr) else: root = etree.parse(tagfile_path) def inspect_member(node, name_prefix=''): name = name_prefix + node.find('name').text anchorfile = node.find('anchorfile').text anchor = node.find('anchor').text url = '%s/%s#%s' % (doxygen_base, anchorfile, anchor) doxygen[name] = {'url': url} if node.attrib['kind'] == 'function': ret_type = node.find('type').text arglist = node.find('arglist').text sig = '%s %s%s' % (ret_type, name, arglist) doxygen[name]['sig'] = sig for struct in root.iter('compound'): if struct.attrib['kind'] != 'struct': continue name_prefix = struct.find('name').text + '.' for node in struct.iter('member'): inspect_member(node, name_prefix) for node in root.iter('member'): inspect_member(node) json.dump(doxygen, open(tagfile_json, 'w'), sort_keys=True, indent=4) print('Parsing Cython source for references...', file=sys.stderr) lib_references = {} for path in iter_cython('av'): try: events = extract(path) except Exception as e: print(" %s in %s" % (e.__class__.__name__, path), file=sys.stderr) print(" %s" % e, file=sys.stderr) continue for event in events: if event['type'] == 'use': lib_references.setdefault(event['name'], []).append(event) defs_by_extern = {} for path in iter_cython('include'): # This one has "include" directives, which is not supported when # parsing from a string. if path == 'include/libav.pxd': continue # Extract all #: comments from the source files. comments_by_line = {} for i, line in enumerate(open(path)): m = re.match(r'^\s*#: ?', line) if m: comment = line[m.end():].rstrip() comments_by_line[i + 1] = line[m.end():] # Extract Cython definitions from the source files. 
for event in extract(path): extern = event.get('extern_from') or path.replace('include/', '') defs_by_extern.setdefault(extern, []).append(event) # Collect comments above and below comments = event['_comments'] = [] line = event['pos'][1] - 1 while line in comments_by_line: comments.insert(0, comments_by_line.pop(line)) line -= 1 line = event['end_pos'][1] + 1 while line in comments_by_line: comments.append(comments_by_line.pop(line)) line += 1 # Figure out the Sphinx headline. if event['type'] == 'function': event['_sort_key'] = 2 sig = doxygen.get(event['name'], {}).get('sig') if sig: sig = re.sub(r'\).+', ')', sig) # strip trailer event['_headline'] = '.. c:function:: %s' % sig else: event['_headline'] = '.. c:function:: %s()' % event['name'] elif event['type'] == 'variable': struct = event.get('struct') if struct: event['_headline'] = '.. c:member:: %s %s' % (event['vartype'], event['name']) event['_sort_key'] = 1.1 else: event['_headline'] = '.. c:var:: %s' % event['name'] event['_sort_key'] = 3 elif event['type'] == 'struct': event['_headline'] = '.. c:type:: struct %s' % event['name'] event['_sort_key'] = 1 event['_doxygen_url'] = '%s/struct%s.html' % (doxygen_base, event['name']) else: print('Unknown event type %s' % event['type'], file=sys.stderr) name = event['name'] if event.get('struct'): name = '%s.%s' % (event['struct'], name) # Doxygen URLs event.setdefault('_doxygen_url', doxygen.get(name, {}).get('url')) # Find use references. ref_events = lib_references.get(name, []) if ref_events: ref_pairs = [] for ref in sorted(ref_events, key=lambda e: e['name']): chunks = [ ref.get('module'), ref.get('class'), ] chunks = filter(None, chunks) prefix = '.'.join(chunks) + '.' 
if chunks else '' if ref.get('property'): ref_pairs.append((ref['property'], ':attr:`%s%s`' % (prefix, ref['property']))) elif ref.get('function'): name = ref['function'] if name in ('__init__', '__cinit__', '__dealloc__'): ref_pairs.append((name, ':class:`%s%s <%s>`' % (prefix, name, prefix.rstrip('.')))) else: ref_pairs.append((name, ':func:`%s%s`' % (prefix, name))) else: continue unique_refs = event['_references'] = [] seen = set() for name, ref in sorted(ref_pairs): if name in seen: continue seen.add(name) unique_refs.append(ref) print(''' .. This file is generated by includes.py; any modifications will be destroyed! Wrapped C Types and Functions ============================= ''') for extern, events in sorted(defs_by_extern.items()): did_header = False for event in events: headline = event.get('_headline') comments = event.get('_comments') refs = event.get('_references', []) url = event.get('_doxygen_url') indent = ' ' if event.get('struct') else '' if not headline: continue if ( not filter(None, (x.strip() for x in comments if x.strip())) and not refs and event['type'] not in ('struct', ) ): pass if not did_header: print('``%s``' % extern) print('-' * (len(extern) + 4)) print() did_header = True if url: print() print(indent + '.. rst-class:: ffmpeg-quicklink') print() print(indent + ' `FFmpeg Docs <%s>`__' % url) print(indent + headline) print() if comments: for line in comments: print(indent + ' ' + line) print() if refs: print(indent + ' Referenced by: ', end='') for i, ref in enumerate(refs): print((', ' if i else '') + ref, end='') print('.') print() PyAV-8.1.0/docs/development/includes.rst000066400000000000000000000000651416312437500201600ustar00rootroot00000000000000 .. include:: ../_build/rst/development/includes.rst PyAV-8.1.0/docs/development/license.rst000066400000000000000000000003701416312437500177730ustar00rootroot00000000000000 .. It is all in the other file (that we want at the top-level of the repo). .. 
_license: License ======= From `LICENSE.txt `_: .. literalinclude:: ../../LICENSE.txt :language: text PyAV-8.1.0/docs/generate-tagfile000077500000000000000000000021501416312437500164240ustar00rootroot00000000000000#!/usr/bin/env python import os import subprocess import argparse parser = argparse.ArgumentParser() parser.add_argument('-l', '--library', default=os.environ.get('PYAV_LIBRARY')) parser.add_argument('-o', '--output', default=os.path.abspath(os.path.join( __file__, '..', '_build', 'doxygen', 'tagfile.xml', ))) args = parser.parse_args() if not args.library: print("Please provide --library or set $PYAV_LIBRARY") exit(1) library = os.path.abspath(os.path.join( __file__, '..', '..', 'vendor', args.library, )) if not os.path.exists(library): print("Library does not exist:", library) exit(2) output = os.path.abspath(args.output) outdir = os.path.dirname(output) if not os.path.exists(outdir): os.makedirs(outdir) proc = subprocess.Popen(['doxygen', '-'], stdin=subprocess.PIPE, cwd=library) proc.communicate(''' #@INCLUDE = doc/Doxyfile GENERATE_TAGFILE = {} GENERATE_HTML = no GENERATE_LATEX = no CASE_SENSE_NAMES = yes INPUT = libavcodec libavdevice libavfilter libavformat libavresample libavutil libswresample libswscale '''.format(output).encode()) PyAV-8.1.0/docs/index.rst000066400000000000000000000041121416312437500151340ustar00rootroot00000000000000**PyAV** Documentation ====================== **PyAV** is a Pythonic binding for FFmpeg_. We aim to provide all of the power and control of the underlying library, but manage the gritty details as much as possible. PyAV is for direct and precise access to your media via containers, streams, packets, codecs, and frames. It exposes a few transformations of that data, and helps you get your data to/from other packages (e.g. Numpy and Pillow). This power does come with some responsibility as working with media is horrendously complicated and PyAV can't abstract it away or make all the best decisions for you. 
If the ``ffmpeg`` command does the job without you bending over backwards, PyAV is likely going to be more of a hindrance than a help. But where you can't work without it, PyAV is a critical tool. Currently we provide: - ``libavformat``: :class:`containers <.Container>`, audio/video/subtitle :class:`streams <.Stream>`, :class:`packets <.Packet>`; - ``libavdevice`` (by specifying a format to containers); - ``libavcodec``: :class:`.Codec`, :class:`.CodecContext`, audio/video :class:`frames <.Frame>`, :class:`data planes <.Plane>`, :class:`subtitles <.Subtitle>`; - ``libavfilter``: :class:`.Filter`, :class:`.Graph`; - ``libswscale``: :class:`.VideoReformatter`; - ``libswresample``: :class:`.AudioResampler`; - and a few more utilities. .. _FFmpeg: https://ffmpeg.org/ Basic Demo ---------- .. testsetup:: path_to_video = common.fate_png() # We don't need a full QT here. .. testcode:: import av container = av.open(path_to_video) for frame in container.decode(video=0): frame.to_image().save('frame-%04d.jpg' % frame.index) Overview -------- .. toctree:: :glob: :maxdepth: 2 overview/* Cookbook -------- .. toctree:: :glob: :maxdepth: 2 cookbook/* Reference --------- .. toctree:: :glob: :maxdepth: 2 api/* Development ----------- .. toctree:: :glob: :maxdepth: 1 development/* Indices and Tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` PyAV-8.1.0/docs/overview/000077500000000000000000000000001416312437500151435ustar00rootroot00000000000000PyAV-8.1.0/docs/overview/about.rst000066400000000000000000000061141416312437500170110ustar00rootroot00000000000000More About PyAV =============== Binary wheels ------------- Since release 8.0.0 binary wheels are provided on PyPI for Linux, Mac and Windows linked against FFmpeg. 
Currently FFmpeg 4.2.2 is used with the following features enabled for all platforms: - fontconfig - libaom - libass - libbluray - libdav1d - libfreetype - libmp3lame - libopencore-amrnb - libopencore-amrwb - libopenjpeg - libopus - libspeex - libtheora - libtwolame - libvorbis - libwavpack - libx264 - libx265 - libxml2 - libxvid - lzma - zlib Bring your own FFmpeg --------------------- PyAV can also be compiled against your own build of FFmpeg. While it must be compiled against the specific build of FFmpeg that is installed, PyAV does not require one particular FFmpeg version. You can force installing PyAV from source by running: .. code-block:: bash pip install av --no-binary av We automatically detect the differences that we depended on at build time. This is a fairly trial-and-error process, so please let us know if something won't compile due to missing functions or members. Additionally, we are far from wrapping the full extent of the libraries. There are many functions and C struct members which are currently unexposed. Dropping Libav -------------- Until mid-2018, PyAV supported either FFmpeg_ or Libav_. The split support in the community essentially required we do so. That split has largely been resolved as distributions have returned to shipping FFmpeg instead of Libav. While we could have theoretically continued to support both, it has been years since automated testing of PyAV with Libav passed, and we received zero complaints. Supporting both also restricted us to using the subset of features common to both, which was starting to erode the cleanliness of PyAV. Many Libav-isms remain in PyAV, and we will slowly scrub them out to clean up PyAV as we come across them again. Unsupported Features -------------------- Our goal is to provide all of the features that make sense for the contexts that PyAV would be used in. If there is something missing, please reach out on Gitter_ or open a feature request on GitHub_ (or even better a pull request).
Your request will be more likely to be addressed if you can point to the relevant `FFmpeg API documentation `__. There are some features we may elect to not implement because we don't believe they fit the PyAV ethos. The only one that we've encountered so far is hardware decoding. The `FFmpeg man page `__ discusses the drawback of ``-hwaccel``: Note that most acceleration methods are intended for playback and will not be faster than software decoding on modern CPUs. Additionally, ``ffmpeg`` will usually need to copy the decoded frames from the GPU memory into the system memory, resulting in further performance loss. Since PyAV is not expected to be used in a high performance playback loop, we do not find the added code complexity worth the benefits of supporting this feature. .. _FFmpeg: https://ffmpeg.org/ .. _Libav: https://libav.org/ .. _Gitter: https://gitter.im/PyAV-Org .. _GitHub: https://github.com/PyAV-Org/pyav PyAV-8.1.0/docs/overview/caveats.rst000066400000000000000000000044661416312437500173310ustar00rootroot00000000000000Caveats ======= .. _authority_of_docs: Authority of Documentation -------------------------- FFmpeg is extremely complex, and the PyAV developers have not been successful in making it 100% clear to themselves in all aspects. Our understanding of how it works and how to work with it is via reading the docs, digging through the source, performing experiments, and hearing from users where PyAV isn't doing the right thing. Only where this documentation is about the mechanics of PyAV can it be considered authoritative. Anywhere that we discuss something that is actually about the underlying FFmpeg libraries comes with the caveat that we cannot always be 100% certain about it. It is, unfortunately, often on the user to understand and deal with the edge cases. We encourage you to bring them to our attention via GitHub_ so that we can try to make PyAV deal with them, but we can't always make it work. ..
_GitHub: https://github.com/PyAv-Org/PyAV/issues Sub-Interpreters ---------------- Since we rely upon C callbacks in a few locations, PyAV is not fully compatible with sub-interpreters. Users have experienced lockups in WSGI web applications, for example. This is due to the ``PyGILState_Ensure`` calls made by Cython in a C callback from FFmpeg. If this is called in a thread that was not started by Python, it is very likely to break. There is no current instrumentation to detect such events. The two main features that are able to cause lockups are: 1. Python IO (passing a file-like object to ``av.open``). While this is in theory possible, so far it seems like the callbacks are made in the calling thread, and so are safe. 2. Logging. As soon as you en/decode with threads you are highly likely to get log messages issued from threads started by FFmpeg, and you will get lockups. See :ref:`disable_logging`. .. _garbage_collection: Garbage Collection ------------------ PyAV currently has a number of reference cycles that make it more difficult for the garbage collector than we would like. In some circumstances (usually tight loops involving opening many containers), a :class:`.Container` will not auto-close until a few thousand have built up. Until we resolve this issue, you should explicitly call :meth:`.Container.close` or use the container as a context manager:: with av.open(path) as fh: # Do stuff with it. PyAV-8.1.0/docs/overview/installation.rst000066400000000000000000000062171416312437500204040ustar00rootroot00000000000000Installation ============ Conda ----- Due to the complexity of the dependencies, PyAV is not always the easiest Python package to install. The most straight-forward install is via `conda-forge `_:: conda install av -c conda-forge See the `Conda quick install `_ docs to get started with (mini)Conda.
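Whichever install route you take, it is worth sanity-checking that the package is importable before moving on. The sketch below is ours, not part of PyAV: the ``pyav_installed`` helper is a hypothetical name, and it assumes your installed PyAV exposes an ``av.__version__`` attribute.

```python
import importlib.util

def pyav_installed():
    # True when the `av` package can be found on this interpreter's path.
    return importlib.util.find_spec("av") is not None

if pyav_installed():
    import av
    print("PyAV", av.__version__)
else:
    print("PyAV is not installed in this environment")
```

If the import fails even though installation appeared to succeed, the usual culprit is a mismatch between the environment you installed into and the interpreter you are running.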
Dependencies ------------ PyAV depends upon several libraries from FFmpeg (version ``4.0`` or higher): - ``libavcodec`` - ``libavdevice`` - ``libavfilter`` - ``libavformat`` - ``libavutil`` - ``libswresample`` - ``libswscale`` and a few other tools in general: - ``pkg-config`` - Python's development headers Mac OS X ^^^^^^^^ On **Mac OS X**, Homebrew_ saves the day:: brew install ffmpeg pkg-config .. _homebrew: http://brew.sh/ Ubuntu >= 18.04 LTS ^^^^^^^^^^^^^^^^^^^ On **Ubuntu 18.04 LTS** everything can come from the default sources:: # General dependencies sudo apt-get install -y python-dev pkg-config # Library components sudo apt-get install -y \ libavformat-dev libavcodec-dev libavdevice-dev \ libavutil-dev libswscale-dev libswresample-dev libavfilter-dev Ubuntu < 18.04 LTS ^^^^^^^^^^^^^^^^^^ On older Ubuntu releases you will be unable to satisfy these requirements with the default package sources. We recommend compiling and installing FFmpeg from source. For FFmpeg:: sudo apt install \ autoconf \ automake \ build-essential \ cmake \ libass-dev \ libfreetype6-dev \ libjpeg-dev \ libtheora-dev \ libtool \ libvorbis-dev \ libx264-dev \ pkg-config \ wget \ yasm \ zlib1g-dev wget http://ffmpeg.org/releases/ffmpeg-3.2.tar.bz2 tar -xjf ffmpeg-3.2.tar.bz2 cd ffmpeg-3.2 ./configure --disable-static --enable-shared --disable-doc make sudo make install `See this script `_ for a very detailed installation of all dependencies. Windows ^^^^^^^ It is possible to build PyAV on Windows without Conda by installing FFmpeg yourself, e.g. from the `shared and dev packages `_. Unpack them somewhere (like ``C:\ffmpeg``), and then :ref:`tell PyAV where they are located `. PyAV ---- Via PyPI/CheeseShop ^^^^^^^^^^^^^^^^^^^ :: pip install av Via Source ^^^^^^^^^^ :: # Get PyAV from GitHub. git clone git@github.com:PyAV-Org/PyAV.git cd PyAV # Prep a virtualenv. source scripts/activate.sh # Install basic requirements. pip install -r tests/requirements.txt # Optionally build FFmpeg. 
./scripts/build-deps # Build PyAV. make # or python setup.py build_ext --inplace On **Mac OS X** you may have issues with regard to Python expecting gcc but finding clang. Try to export the following before installation:: export ARCHFLAGS=-Wno-error=unused-command-line-argument-hard-error-in-future .. _build_on_windows: On **Windows** you must indicate the location of your FFmpeg, e.g.:: python setup.py build --ffmpeg-dir=C:\ffmpeg PyAV-8.1.0/examples/000077500000000000000000000000001416312437500141635ustar00rootroot00000000000000PyAV-8.1.0/examples/basics/000077500000000000000000000000001416312437500154275ustar00rootroot00000000000000PyAV-8.1.0/examples/basics/parse.py000066400000000000000000000017731416312437500171220ustar00rootroot00000000000000import os import subprocess import av import av.datasets # We want an H.264 stream in the Annex B byte-stream format. # We haven't exposed bitstream filters yet, so we're gonna use the `ffmpeg` CLI. h264_path = 'night-sky.h264' if not os.path.exists(h264_path): subprocess.check_call([ 'ffmpeg', '-i', av.datasets.curated('pexels/time-lapse-video-of-night-sky-857195.mp4'), '-vcodec', 'copy', '-an', '-bsf:v', 'h264_mp4toannexb', h264_path, ]) fh = open(h264_path, 'rb') codec = av.CodecContext.create('h264', 'r') while True: chunk = fh.read(1 << 16) packets = codec.parse(chunk) print("Parsed {} packets from {} bytes:".format(len(packets), len(chunk))) for packet in packets: print(' ', packet) frames = codec.decode(packet) for frame in frames: print(' ', frame) # We wait until the end to bail so that the last empty `chunk` flushes # the parser. if not chunk: break PyAV-8.1.0/examples/basics/remux.py000066400000000000000000000012341416312437500171410ustar00rootroot00000000000000import av import av.datasets input_ = av.open(av.datasets.curated('pexels/time-lapse-video-of-night-sky-857195.mp4')) output = av.open('remuxed.mkv', 'w') # Make an output stream using the input as a template.
This copies the stream # setup from one to the other. in_stream = input_.streams.video[0] out_stream = output.add_stream(template=in_stream) for packet in input_.demux(in_stream): print(packet) # We need to skip the "flushing" packets that `demux` generates. if packet.dts is None: continue # We need to assign the packet to the new stream. packet.stream = out_stream output.mux(packet) input_.close() output.close() PyAV-8.1.0/examples/basics/save_keyframes.py000066400000000000000000000010651416312437500210070ustar00rootroot00000000000000import av import av.datasets content = av.datasets.curated('pexels/time-lapse-video-of-night-sky-857195.mp4') with av.open(content) as container: # Signal that we only want to look at keyframes. stream = container.streams.video[0] stream.codec_context.skip_frame = 'NONKEY' for frame in container.decode(stream): print(frame) # We use `frame.pts` as `frame.index` won't make much sense with the `skip_frame`. frame.to_image().save( 'night-sky.{:04d}.jpg'.format(frame.pts), quality=80, ) PyAV-8.1.0/examples/basics/thread_type.py000066400000000000000000000016421416312437500203140ustar00rootroot00000000000000import time import av import av.datasets print("Decoding with default (slice) threading...") container = av.open(av.datasets.curated('pexels/time-lapse-video-of-night-sky-857195.mp4')) start_time = time.time() for packet in container.demux(): print(packet) for frame in packet.decode(): print(frame) default_time = time.time() - start_time container.close() print("Decoding with auto threading...") container = av.open(av.datasets.curated('pexels/time-lapse-video-of-night-sky-857195.mp4')) # !!! This is the only difference.
container.streams.video[0].thread_type = 'AUTO' start_time = time.time() for packet in container.demux(): print(packet) for frame in packet.decode(): print(frame) auto_time = time.time() - start_time container.close() print("Decoded with default threading in {:.2f}s.".format(default_time)) print("Decoded with auto threading in {:.2f}s.".format(auto_time)) PyAV-8.1.0/examples/numpy/000077500000000000000000000000001416312437500153335ustar00rootroot00000000000000PyAV-8.1.0/examples/numpy/barcode.py000066400000000000000000000015431416312437500173070ustar00rootroot00000000000000from PIL import Image import numpy as np import av import av.datasets container = av.open(av.datasets.curated('pexels/time-lapse-video-of-sunset-by-the-sea-854400.mp4')) container.streams.video[0].thread_type = 'AUTO' # Go faster! columns = [] for frame in container.decode(video=0): print(frame) array = frame.to_ndarray(format='rgb24') # Collapse down to a column. column = array.mean(axis=1) # Convert to bytes, as the `mean` turned our array into floats. column = column.clip(0, 255).astype('uint8') # Get us in the right shape for the `hstack` below. 
column = column.reshape(-1, 1, 3) columns.append(column) # Close the file, free memory container.close() full_array = np.hstack(columns) full_img = Image.fromarray(full_array, 'RGB') full_img = full_img.resize((800, 200)) full_img.save('barcode.jpg', quality=85) PyAV-8.1.0/examples/numpy/generate_video.py000066400000000000000000000016301416312437500206650ustar00rootroot00000000000000 from __future__ import division import numpy as np import av duration = 4 fps = 24 total_frames = duration * fps container = av.open('test.mp4', mode='w') stream = container.add_stream('mpeg4', rate=fps) stream.width = 480 stream.height = 320 stream.pix_fmt = 'yuv420p' for frame_i in range(total_frames): img = np.empty((480, 320, 3)) img[:, :, 0] = 0.5 + 0.5 * np.sin(2 * np.pi * (0 / 3 + frame_i / total_frames)) img[:, :, 1] = 0.5 + 0.5 * np.sin(2 * np.pi * (1 / 3 + frame_i / total_frames)) img[:, :, 2] = 0.5 + 0.5 * np.sin(2 * np.pi * (2 / 3 + frame_i / total_frames)) img = np.round(255 * img).astype(np.uint8) img = np.clip(img, 0, 255) frame = av.VideoFrame.from_ndarray(img, format='rgb24') for packet in stream.encode(frame): container.mux(packet) # Flush stream for packet in stream.encode(): container.mux(packet) # Close the file container.close() PyAV-8.1.0/examples/numpy/generate_video_with_pts.py000066400000000000000000000055651416312437500226210ustar00rootroot00000000000000#!/usr/bin/env python3 from fractions import Fraction import colorsys import numpy as np import av (width, height) = (640, 360) total_frames = 20 fps = 30 container = av.open('generate_video_with_pts.mp4', mode='w') stream = container.add_stream('mpeg4', rate=fps) # alibi frame rate stream.width = width stream.height = height stream.pix_fmt = 'yuv420p' # ffmpeg time is complicated # more at https://github.com/PyAV-Org/PyAV/blob/main/docs/api/time.rst # our situation is the "encoding" one # this is independent of the "fps" you give above # 1/1000 means milliseconds (and you can use that, no problem) # 1/2 means 
half a second (would be okay for the delays we use below) # 1/30 means ~33 milliseconds # you should use the least fraction that makes sense for you stream.codec_context.time_base = Fraction(1, fps) # this says when to show the next frame # (increment by how long the current frame will be shown) my_pts = 0 # [seconds] # below we'll calculate that into our chosen time base # we'll keep this frame around to draw on this persistently # you can also redraw into a new object every time but you needn't the_canvas = np.zeros((height, width, 3), dtype=np.uint8) the_canvas[:, :] = (32, 32, 32) # some dark gray background because why not block_w2 = int(0.5 * width / total_frames * 0.75) block_h2 = int(0.5 * height / 4) for frame_i in range(total_frames): # move around the color wheel (hue) nice_color = colorsys.hsv_to_rgb(frame_i / total_frames, 1.0, 1.0) nice_color = (np.array(nice_color) * 255).astype(np.uint8) # draw blocks of a progress bar cx = int(width / total_frames * (frame_i + 0.5)) cy = int(height / 2) the_canvas[cy-block_h2: cy+block_h2, cx-block_w2: cx+block_w2] = nice_color frame = av.VideoFrame.from_ndarray(the_canvas, format='rgb24') # seconds -> counts of time_base frame.pts = int(round(my_pts / stream.codec_context.time_base)) # increment by display time to pre-determine next frame's PTS my_pts += 1.0 if ((frame_i // 3) % 2 == 0) else 0.5 # yes, the last frame has no "duration" because nothing follows it # frames don't have duration, only a PTS for packet in stream.encode(frame): container.mux(packet) # finish it with a blank frame, so the "last" frame actually gets shown for some time # this black frame will probably be shown for 1/fps time # at least, that is the analysis of ffprobe the_canvas[:] = 0 frame = av.VideoFrame.from_ndarray(the_canvas, format='rgb24') frame.pts = int(round(my_pts / stream.codec_context.time_base)) for packet in stream.encode(frame): container.mux(packet) # the time should now be 15.5 + 1/30 = 15.533 # without that last black 
frame, the real last frame gets shown for 1/30 # so that video would have been 14.5 + 1/30 = 14.533 seconds long # Flush stream for packet in stream.encode(): container.mux(packet) # Close the file container.close() PyAV-8.1.0/flags.txt000066400000000000000000000021451416312437500142040ustar00rootroot00000000000000 Objects with flags === √ AVCodec.capabilities √ AVCodecDescriptor.props √ AVCodecContext.flags and flags2 AVOutputFormat.flags Thoughts === - Having both individual properties AND the flags objects is kinda nice. - I want lowercase flag/enum names, but to also work with the upper ones for b/c. Option: av.enum flags. - context.flags2 & 'EXPORT_MVS' - context.flags2 |= 'EXPORT_MVS' - new APIs: - 'export_mvs' in context.flags2 - context.flags2.export_mvs = True - context.flags2['export_mvs'] = True Option: object which represents all flags, but can't work with integer values - context.flags merges flags and flags2 - this is really only handy on AVCodecContext, so... fuckit? Option: all exposed as individual properties - context.export_mvs - This polutes the attribute space a lot. - This feels the most "pythonic". - If you can set multiple in constructors, then NBD if you want to do many. - I don't like how I have to pick names. How to name === If a prefix is required, one of: - is - has - can - use - do PyAV-8.1.0/include/000077500000000000000000000000001416312437500137705ustar00rootroot00000000000000PyAV-8.1.0/include/libav.pxd000066400000000000000000000017441416312437500156100ustar00rootroot00000000000000 # This file is built by setup.py and contains macros telling us which libraries # and functions we have (of those which are different between FFMpeg and LibAV). 
cdef extern from "pyav/config.h" nogil: char* PYAV_VERSION_STR char* PYAV_COMMIT_STR include "libavutil/avutil.pxd" include "libavutil/channel_layout.pxd" include "libavutil/dict.pxd" include "libavutil/error.pxd" include "libavutil/frame.pxd" include "libavutil/samplefmt.pxd" include "libavutil/motion_vector.pxd" include "libavcodec/avcodec.pxd" include "libavdevice/avdevice.pxd" include "libavformat/avformat.pxd" include "libswresample/swresample.pxd" include "libswscale/swscale.pxd" include "libavfilter/avfilter.pxd" include "libavfilter/avfiltergraph.pxd" include "libavfilter/buffersink.pxd" include "libavfilter/buffersrc.pxd" cdef extern from "stdio.h" nogil: cdef int snprintf(char *output, int n, const char *format, ...) cdef int vsnprintf(char *output, int n, const char *format, va_list args) PyAV-8.1.0/include/libav.pyav.h000066400000000000000000000000001416312437500162020ustar00rootroot00000000000000PyAV-8.1.0/include/libavcodec/000077500000000000000000000000001416312437500160635ustar00rootroot00000000000000PyAV-8.1.0/include/libavcodec/avcodec.pxd000066400000000000000000000262261416312437500202140ustar00rootroot00000000000000from libc.stdint cimport ( uint8_t, int8_t, uint16_t, int16_t, uint32_t, int32_t, uint64_t, int64_t ) cdef extern from "libavcodec/avcodec.h" nogil: # custom cdef set pyav_get_available_codecs() cdef int avcodec_version() cdef char* avcodec_configuration() cdef char* avcodec_license() cdef size_t AV_INPUT_BUFFER_PADDING_SIZE cdef int64_t AV_NOPTS_VALUE # AVCodecDescriptor.props cdef enum: AV_CODEC_PROP_INTRA_ONLY AV_CODEC_PROP_LOSSY AV_CODEC_PROP_LOSSLESS AV_CODEC_PROP_REORDER AV_CODEC_PROP_BITMAP_SUB AV_CODEC_PROP_TEXT_SUB #AVCodec.capabilities cdef enum: AV_CODEC_CAP_DRAW_HORIZ_BAND AV_CODEC_CAP_DR1 AV_CODEC_CAP_TRUNCATED # AV_CODEC_CAP_HWACCEL AV_CODEC_CAP_DELAY AV_CODEC_CAP_SMALL_LAST_FRAME # AV_CODEC_CAP_HWACCEL_VDPAU AV_CODEC_CAP_SUBFRAMES AV_CODEC_CAP_EXPERIMENTAL AV_CODEC_CAP_CHANNEL_CONF # AV_CODEC_CAP_NEG_LINESIZES 
AV_CODEC_CAP_FRAME_THREADS AV_CODEC_CAP_SLICE_THREADS AV_CODEC_CAP_PARAM_CHANGE AV_CODEC_CAP_AUTO_THREADS AV_CODEC_CAP_VARIABLE_FRAME_SIZE AV_CODEC_CAP_AVOID_PROBING AV_CODEC_CAP_INTRA_ONLY AV_CODEC_CAP_LOSSLESS AV_CODEC_CAP_HARDWARE AV_CODEC_CAP_HYBRID AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE cdef enum: FF_THREAD_FRAME FF_THREAD_SLICE cdef enum: AV_CODEC_FLAG_UNALIGNED AV_CODEC_FLAG_QSCALE AV_CODEC_FLAG_4MV AV_CODEC_FLAG_OUTPUT_CORRUPT AV_CODEC_FLAG_QPEL AV_CODEC_FLAG_DROPCHANGED AV_CODEC_FLAG_PASS1 AV_CODEC_FLAG_PASS2 AV_CODEC_FLAG_LOOP_FILTER AV_CODEC_FLAG_GRAY AV_CODEC_FLAG_PSNR AV_CODEC_FLAG_TRUNCATED AV_CODEC_FLAG_INTERLACED_DCT AV_CODEC_FLAG_LOW_DELAY AV_CODEC_FLAG_GLOBAL_HEADER AV_CODEC_FLAG_BITEXACT AV_CODEC_FLAG_AC_PRED AV_CODEC_FLAG_INTERLACED_ME AV_CODEC_FLAG_CLOSED_GOP cdef enum: AV_CODEC_FLAG2_FAST AV_CODEC_FLAG2_NO_OUTPUT AV_CODEC_FLAG2_LOCAL_HEADER AV_CODEC_FLAG2_DROP_FRAME_TIMECODE AV_CODEC_FLAG2_CHUNKS AV_CODEC_FLAG2_IGNORE_CROP AV_CODEC_FLAG2_SHOW_ALL AV_CODEC_FLAG2_EXPORT_MVS AV_CODEC_FLAG2_SKIP_MANUAL AV_CODEC_FLAG2_RO_FLUSH_NOOP cdef enum: AV_PKT_FLAG_KEY AV_PKT_FLAG_CORRUPT cdef enum: AV_FRAME_FLAG_CORRUPT cdef enum: FF_COMPLIANCE_VERY_STRICT FF_COMPLIANCE_STRICT FF_COMPLIANCE_NORMAL FF_COMPLIANCE_UNOFFICIAL FF_COMPLIANCE_EXPERIMENTAL cdef enum AVCodecID: AV_CODEC_ID_NONE AV_CODEC_ID_MPEG2VIDEO AV_CODEC_ID_MPEG1VIDEO cdef enum AVDiscard: AVDISCARD_NONE AVDISCARD_DEFAULT AVDISCARD_NONREF AVDISCARD_BIDIR AVDISCARD_NONINTRA AVDISCARD_NONKEY AVDISCARD_ALL cdef struct AVCodec: char *name char *long_name AVMediaType type AVCodecID id int capabilities AVRational* supported_framerates AVSampleFormat* sample_fmts AVPixelFormat* pix_fmts int* supported_samplerates AVClass *priv_class cdef int av_codec_is_encoder(AVCodec*) cdef int av_codec_is_decoder(AVCodec*) cdef struct AVCodecDescriptor: AVCodecID id char *name char *long_name int props char **mime_types AVCodecDescriptor* avcodec_descriptor_get(AVCodecID) cdef struct AVCodecContext: AVClass *av_class 
AVMediaType codec_type char codec_name[32] unsigned int codec_tag AVCodecID codec_id int flags int flags2 int thread_count int thread_type int profile AVDiscard skip_frame AVFrame* coded_frame int bit_rate int bit_rate_tolerance int mb_decision int global_quality int compression_level int frame_number int qmin int qmax int rc_max_rate int rc_min_rate int rc_buffer_size float rc_max_available_vbv_use float rc_min_vbv_overflow_use AVRational framerate AVRational time_base int ticks_per_frame int extradata_size uint8_t *extradata int delay AVCodec *codec # Video. int width int height int coded_width int coded_height AVPixelFormat pix_fmt AVRational sample_aspect_ratio int gop_size # The number of pictures in a group of pictures, or 0 for intra_only. int max_b_frames int has_b_frames # Audio. AVSampleFormat sample_fmt int sample_rate int channels int frame_size int channel_layout #: .. todo:: ``get_buffer`` is deprecated for get_buffer2 in newer versions of FFmpeg. int get_buffer(AVCodecContext *ctx, AVFrame *frame) void release_buffer(AVCodecContext *ctx, AVFrame *frame) # User Data void *opaque cdef AVCodecContext* avcodec_alloc_context3(AVCodec *codec) cdef void avcodec_free_context(AVCodecContext **ctx) cdef AVClass* avcodec_get_class() cdef int avcodec_copy_context(AVCodecContext *dst, const AVCodecContext *src) cdef struct AVCodecDescriptor: AVCodecID id AVMediaType type char *name char *long_name int props cdef AVCodec* avcodec_find_decoder(AVCodecID id) cdef AVCodec* avcodec_find_encoder(AVCodecID id) cdef AVCodec* avcodec_find_decoder_by_name(char *name) cdef AVCodec* avcodec_find_encoder_by_name(char *name) cdef const AVCodec* av_codec_iterate(void **opaque) cdef AVCodecDescriptor* avcodec_descriptor_get (AVCodecID id) cdef AVCodecDescriptor* avcodec_descriptor_get_by_name (char *name) cdef char* avcodec_get_name(AVCodecID id) cdef char* av_get_profile_name(AVCodec *codec, int profile) cdef int avcodec_open2( AVCodecContext *ctx, AVCodec *codec, AVDictionary 
**options, ) cdef int avcodec_is_open(AVCodecContext *ctx ) cdef int avcodec_close(AVCodecContext *ctx) cdef int AV_NUM_DATA_POINTERS cdef enum AVFrameSideDataType: AV_FRAME_DATA_PANSCAN AV_FRAME_DATA_A53_CC AV_FRAME_DATA_STEREO3D AV_FRAME_DATA_MATRIXENCODING AV_FRAME_DATA_DOWNMIX_INFO AV_FRAME_DATA_REPLAYGAIN AV_FRAME_DATA_DISPLAYMATRIX AV_FRAME_DATA_AFD AV_FRAME_DATA_MOTION_VECTORS AV_FRAME_DATA_SKIP_SAMPLES AV_FRAME_DATA_AUDIO_SERVICE_TYPE AV_FRAME_DATA_MASTERING_DISPLAY_METADATA AV_FRAME_DATA_GOP_TIMECODE AV_FRAME_DATA_SPHERICAL AV_FRAME_DATA_CONTENT_LIGHT_LEVEL AV_FRAME_DATA_ICC_PROFILE AV_FRAME_DATA_QP_TABLE_PROPERTIES AV_FRAME_DATA_QP_TABLE_DATA cdef struct AVFrameSideData: AVFrameSideDataType type uint8_t *data int size AVDictionary *metadata # See: http://ffmpeg.org/doxygen/trunk/structAVFrame.html cdef struct AVFrame: uint8_t *data[4]; int linesize[4]; uint8_t **extended_data int format # Should be AVPixelFormat or AVSampleFormat int key_frame # 0 or 1. AVPictureType pict_type int interlaced_frame # 0 or 1. 
int width int height int nb_side_data AVFrameSideData **side_data int nb_samples # Audio samples int sample_rate # Audio Sample rate int channels # Number of audio channels int channel_layout # Audio channel_layout int64_t pts int64_t pkt_dts int pkt_size uint8_t **base void *opaque AVDictionary *metadata int flags int decode_error_flags cdef AVFrame* avcodec_alloc_frame() cdef struct AVPacket: int64_t pts int64_t dts uint8_t *data int size int stream_index int flags int duration int64_t pos void (*destruct)(AVPacket*) cdef int avcodec_fill_audio_frame( AVFrame *frame, int nb_channels, AVSampleFormat sample_fmt, uint8_t *buf, int buf_size, int align ) cdef void avcodec_free_frame(AVFrame **frame) cdef void av_init_packet(AVPacket*) cdef int av_new_packet(AVPacket*, int) cdef int av_packet_ref(AVPacket *dst, const AVPacket *src) cdef void av_packet_unref(AVPacket *pkt) cdef void av_packet_rescale_ts(AVPacket *pkt, AVRational src_tb, AVRational dst_tb) cdef enum AVSubtitleType: SUBTITLE_NONE SUBTITLE_BITMAP SUBTITLE_TEXT SUBTITLE_ASS cdef struct AVSubtitleRect: int x int y int w int h int nb_colors uint8_t *data[4]; int linesize[4]; AVSubtitleType type char *text char *ass int flags cdef struct AVSubtitle: uint16_t format uint32_t start_display_time uint32_t end_display_time unsigned int num_rects AVSubtitleRect **rects int64_t pts cdef int avcodec_decode_subtitle2( AVCodecContext *ctx, AVSubtitle *sub, int *done, AVPacket *pkt, ) cdef int avcodec_encode_subtitle( AVCodecContext *avctx, uint8_t *buf, int buf_size, AVSubtitle *sub ) cdef void avsubtitle_free(AVSubtitle*) cdef void avcodec_get_frame_defaults(AVFrame* frame) cdef void avcodec_flush_buffers(AVCodecContext *ctx) # TODO: avcodec_default_get_buffer is deprecated for avcodec_default_get_buffer2 in newer versions of FFmpeg cdef int avcodec_default_get_buffer(AVCodecContext *ctx, AVFrame *frame) cdef void avcodec_default_release_buffer(AVCodecContext *ctx, AVFrame *frame) # === New-style Transcoding cdef int 
avcodec_send_packet(AVCodecContext *avctx, AVPacket *packet) cdef int avcodec_receive_frame(AVCodecContext *avctx, AVFrame *frame) cdef int avcodec_send_frame(AVCodecContext *avctx, AVFrame *frame) cdef int avcodec_receive_packet(AVCodecContext *avctx, AVPacket *avpkt) # === Parsers cdef struct AVCodecParser: int codec_ids[5] cdef AVCodecParser* av_parser_next(AVCodecParser *c) cdef struct AVCodecParserContext: pass cdef AVCodecParserContext *av_parser_init(int codec_id) cdef int av_parser_parse2( AVCodecParserContext *s, AVCodecContext *avctx, uint8_t **poutbuf, int *poutbuf_size, const uint8_t *buf, int buf_size, int64_t pts, int64_t dts, int64_t pos ) cdef int av_parser_change( AVCodecParserContext *s, AVCodecContext *avctx, uint8_t **poutbuf, int *poutbuf_size, const uint8_t *buf, int buf_size, int keyframe ) cdef void av_parser_close(AVCodecParserContext *s) cdef struct AVCodecParameters: pass cdef int avcodec_parameters_from_context( AVCodecParameters *par, const AVCodecContext *codec, ) PyAV-8.1.0/include/libavdevice/000077500000000000000000000000001416312437500162455ustar00rootroot00000000000000PyAV-8.1.0/include/libavdevice/avdevice.pxd000066400000000000000000000007211416312437500205500ustar00rootroot00000000000000 cdef extern from "libavdevice/avdevice.h" nogil: cdef int avdevice_version() cdef char* avdevice_configuration() cdef char* avdevice_license() void avdevice_register_all() AVInputFormat * av_input_audio_device_next(AVInputFormat *d) AVInputFormat * av_input_video_device_next(AVInputFormat *d) AVOutputFormat * av_output_audio_device_next(AVOutputFormat *d) AVOutputFormat * av_output_video_device_next(AVOutputFormat *d) PyAV-8.1.0/include/libavfilter/000077500000000000000000000000001416312437500162735ustar00rootroot00000000000000PyAV-8.1.0/include/libavfilter/avfilter.pxd000066400000000000000000000037551416312437500206360ustar00rootroot00000000000000 cdef extern from "libavfilter/avfilter.h" nogil: cdef int avfilter_version() cdef char* 
avfilter_configuration() cdef char* avfilter_license() cdef struct AVFilterPad: # This struct is opaque. pass const char* avfilter_pad_get_name(const AVFilterPad *pads, int index) AVMediaType avfilter_pad_get_type(const AVFilterPad *pads, int index) cdef struct AVFilter: AVClass *priv_class const char *name const char *description const int flags const AVFilterPad *inputs const AVFilterPad *outputs int (*process_command)(AVFilterContext *, const char *cmd, const char *arg, char *res, int res_len, int flags) cdef enum: AVFILTER_FLAG_DYNAMIC_INPUTS AVFILTER_FLAG_DYNAMIC_OUTPUTS AVFILTER_FLAG_SLICE_THREADS AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC AVFILTER_FLAG_SUPPORT_TIMELINE_INTERNAL cdef AVFilter* avfilter_get_by_name(const char *name) cdef const AVFilter* av_filter_iterate(void **opaque) cdef struct AVFilterLink # Defined later. cdef struct AVFilterContext: AVClass *av_class AVFilter *filter char *name unsigned int nb_inputs AVFilterPad *input_pads AVFilterLink **inputs unsigned int nb_outputs AVFilterPad *output_pads AVFilterLink **outputs cdef int avfilter_init_str(AVFilterContext *ctx, const char *args) cdef int avfilter_init_dict(AVFilterContext *ctx, AVDictionary **options) cdef void avfilter_free(AVFilterContext*) cdef AVClass* avfilter_get_class() cdef struct AVFilterLink: AVFilterContext *src AVFilterPad *srcpad AVFilterContext *dst AVFilterPad *dstpad AVMediaType Type int w int h AVRational sample_aspect_ratio uint64_t channel_layout int sample_rate int format AVRational time_base # custom cdef set pyav_get_available_filters() PyAV-8.1.0/include/libavfilter/avfiltergraph.pxd000066400000000000000000000023771416312437500216570ustar00rootroot00000000000000 cdef extern from "libavfilter/avfilter.h" nogil: cdef struct AVFilterGraph: int nb_filters AVFilterContext **filters cdef struct AVFilterInOut: char *name AVFilterContext *filter_ctx int pad_idx AVFilterInOut *next cdef AVFilterGraph* avfilter_graph_alloc() cdef void avfilter_graph_free(AVFilterGraph **ptr) 
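avfilter_link() above joins output pad `srcpad` of one context to input pad `dstpad` of another, and fails when a pad index is out of range for the context's `nb_outputs`/`nb_inputs`. A toy pure-Python model of just that bookkeeping (`FilterNode` and `link` are invented names for illustration, not PyAV or FFmpeg API):

```python
class FilterNode:
    """Toy stand-in for AVFilterContext: counted pads plus link slots."""

    def __init__(self, name, nb_inputs, nb_outputs):
        self.name = name
        self.nb_inputs = nb_inputs
        self.nb_outputs = nb_outputs
        self.outputs = [None] * nb_outputs  # one link slot per output pad


def link(src, srcpad, dst, dstpad):
    """Mimic avfilter_link()'s pad-index validation."""
    if srcpad >= src.nb_outputs or dstpad >= dst.nb_inputs:
        return -1  # FFmpeg would return a negative AVERROR code here
    src.outputs[srcpad] = (dst, dstpad)
    return 0


buffer_src = FilterNode('buffer', nb_inputs=0, nb_outputs=1)
sink = FilterNode('buffersink', nb_inputs=1, nb_outputs=0)
assert link(buffer_src, 0, sink, 0) == 0
assert link(buffer_src, 1, sink, 0) == -1  # no such output pad
```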
cdef int avfilter_graph_parse2( AVFilterGraph *graph, const char *filter_str, AVFilterInOut **inputs, AVFilterInOut **outputs ) cdef AVFilterContext* avfilter_graph_alloc_filter( AVFilterGraph *graph, const AVFilter *filter, const char *name ) cdef int avfilter_graph_create_filter( AVFilterContext **filt_ctx, AVFilter *filt, const char *name, const char *args, void *opaque, AVFilterGraph *graph_ctx ) cdef int avfilter_link( AVFilterContext *src, unsigned int srcpad, AVFilterContext *dst, unsigned int dstpad ) cdef int avfilter_graph_config(AVFilterGraph *graph, void *logctx) cdef char* avfilter_graph_dump(AVFilterGraph *graph, const char *options) cdef void avfilter_inout_free(AVFilterInOut **inout_list) PyAV-8.1.0/include/libavfilter/buffersink.pxd000066400000000000000000000002201416312437500211400ustar00rootroot00000000000000cdef extern from "libavfilter/buffersink.h" nogil: int av_buffersink_get_frame( AVFilterContext *ctx, AVFrame *frame ) PyAV-8.1.0/include/libavfilter/buffersrc.pxd000066400000000000000000000002261416312437500207710ustar00rootroot00000000000000cdef extern from "libavfilter/buffersrc.h" nogil: int av_buffersrc_write_frame( AVFilterContext *ctx, const AVFrame *frame ) PyAV-8.1.0/include/libavformat/000077500000000000000000000000001416312437500162765ustar00rootroot00000000000000PyAV-8.1.0/include/libavformat/avformat.pxd000066400000000000000000000167521416312437500206450ustar00rootroot00000000000000from libc.stdint cimport int64_t, uint64_t cdef extern from "libavformat/avformat.h" nogil: cdef int avformat_version() cdef char* avformat_configuration() cdef char* avformat_license() cdef void avformat_network_init() cdef int64_t INT64_MIN cdef int AV_TIME_BASE cdef int AVSEEK_FLAG_BACKWARD cdef int AVSEEK_FLAG_BYTE cdef int AVSEEK_FLAG_ANY cdef int AVSEEK_FLAG_FRAME cdef int AVIO_FLAG_WRITE cdef enum AVMediaType: AVMEDIA_TYPE_UNKNOWN AVMEDIA_TYPE_VIDEO AVMEDIA_TYPE_AUDIO AVMEDIA_TYPE_DATA AVMEDIA_TYPE_SUBTITLE AVMEDIA_TYPE_ATTACHMENT 
AVMEDIA_TYPE_NB cdef struct AVStream: int index int id AVCodecContext *codec AVCodecParameters *codecpar AVRational time_base int64_t start_time int64_t duration int64_t nb_frames int64_t cur_dts AVDictionary *metadata AVRational avg_frame_rate AVRational r_frame_rate AVRational sample_aspect_ratio # http://ffmpeg.org/doxygen/trunk/structAVIOContext.html cdef struct AVIOContext: unsigned char* buffer int buffer_size int write_flag int direct int seekable int max_packet_size # http://ffmpeg.org/doxygen/trunk/structAVIOInterruptCB.html cdef struct AVIOInterruptCB: int (*callback)(void*) void *opaque cdef int AVIO_FLAG_DIRECT cdef int AVIO_SEEKABLE_NORMAL cdef int SEEK_SET cdef int SEEK_CUR cdef int SEEK_END cdef int AVSEEK_SIZE cdef AVIOContext* avio_alloc_context( unsigned char *buffer, int buffer_size, int write_flag, void *opaque, int(*read_packet)(void *opaque, uint8_t *buf, int buf_size), int(*write_packet)(void *opaque, uint8_t *buf, int buf_size), int64_t(*seek)(void *opaque, int64_t offset, int whence) ) # http://ffmpeg.org/doxygen/trunk/structAVInputFormat.html cdef struct AVInputFormat: const char *name const char *long_name const char *extensions int flags # const AVCodecTag* const *codec_tag const AVClass *priv_class cdef struct AVProbeData: unsigned char *buf int buf_size const char *filename cdef AVInputFormat* av_probe_input_format( AVProbeData *pd, int is_opened ) # http://ffmpeg.org/doxygen/trunk/structAVOutputFormat.html cdef struct AVOutputFormat: const char *name const char *long_name const char *extensions AVCodecID video_codec AVCodecID audio_codec AVCodecID subtitle_codec int flags # const AVCodecTag* const *codec_tag const AVClass *priv_class # AVInputFormat.flags and AVOutputFormat.flags cdef enum: AVFMT_NOFILE AVFMT_NEEDNUMBER AVFMT_SHOW_IDS AVFMT_GLOBALHEADER AVFMT_NOTIMESTAMPS AVFMT_GENERIC_INDEX AVFMT_TS_DISCONT AVFMT_VARIABLE_FPS AVFMT_NODIMENSIONS AVFMT_NOSTREAMS AVFMT_NOBINSEARCH AVFMT_NOGENSEARCH AVFMT_NO_BYTE_SEEK AVFMT_ALLOW_FLUSH 
AVFMT_TS_NONSTRICT AVFMT_TS_NEGATIVE AVFMT_SEEK_TO_PTS # AVFormatContext.flags cdef enum: AVFMT_FLAG_GENPTS AVFMT_FLAG_IGNIDX AVFMT_FLAG_NONBLOCK AVFMT_FLAG_IGNDTS AVFMT_FLAG_NOFILLIN AVFMT_FLAG_NOPARSE AVFMT_FLAG_NOBUFFER AVFMT_FLAG_CUSTOM_IO AVFMT_FLAG_DISCARD_CORRUPT AVFMT_FLAG_FLUSH_PACKETS AVFMT_FLAG_BITEXACT AVFMT_FLAG_MP4A_LATM AVFMT_FLAG_SORT_DTS AVFMT_FLAG_PRIV_OPT AVFMT_FLAG_KEEP_SIDE_DATA # deprecated; does nothing AVFMT_FLAG_FAST_SEEK AVFMT_FLAG_SHORTEST AVFMT_FLAG_AUTO_BSF cdef int av_probe_input_buffer( AVIOContext *pb, AVInputFormat **fmt, const char *filename, void *logctx, unsigned int offset, unsigned int max_probe_size ) cdef AVInputFormat* av_find_input_format(const char *name) # http://ffmpeg.org/doxygen/trunk/structAVFormatContext.html cdef struct AVFormatContext: # Streams. unsigned int nb_streams AVStream **streams AVInputFormat *iformat AVOutputFormat *oformat AVIOContext *pb AVIOInterruptCB interrupt_callback AVDictionary *metadata char filename int64_t start_time int64_t duration int bit_rate int flags int64_t max_analyze_duration cdef AVFormatContext* avformat_alloc_context() # .. c:function:: avformat_open_input(...) # # Options are passed via :func:`av.open`. # # .. seealso:: FFmpeg's docs: :ffmpeg:`avformat_open_input` # cdef int avformat_open_input( AVFormatContext **ctx, # NULL will allocate for you. char *filename, AVInputFormat *format, # Can be NULL. AVDictionary **options # Can be NULL. ) cdef int avformat_close_input(AVFormatContext **ctx) # .. c:function:: avformat_write_header(...) # # Options are passed via :func:`av.open`; called in # :meth:`av.container.OutputContainer.start_encoding`. # # .. 
seealso:: FFmpeg's docs: :ffmpeg:`avformat_write_header` # cdef int avformat_write_header( AVFormatContext *ctx, AVDictionary **options # Can be NULL ) cdef int av_write_trailer(AVFormatContext *ctx) cdef int av_interleaved_write_frame( AVFormatContext *ctx, AVPacket *pkt ) cdef int av_write_frame( AVFormatContext *ctx, AVPacket *pkt ) cdef int avio_open( AVIOContext **s, char *url, int flags ) cdef int64_t avio_size( AVIOContext *s ) cdef AVOutputFormat* av_guess_format( char *short_name, char *filename, char *mime_type ) cdef int avformat_query_codec( AVOutputFormat *ofmt, AVCodecID codec_id, int std_compliance ) cdef int avio_close(AVIOContext *s) cdef int avio_closep(AVIOContext **s) cdef int avformat_find_stream_info( AVFormatContext *ctx, AVDictionary **options, # Can be NULL. ) cdef AVStream* avformat_new_stream( AVFormatContext *ctx, AVCodec *c ) cdef int avformat_alloc_output_context2( AVFormatContext **ctx, AVOutputFormat *oformat, char *format_name, char *filename ) cdef int avformat_free_context(AVFormatContext *ctx) cdef AVClass* avformat_get_class() cdef void av_dump_format( AVFormatContext *ctx, int index, char *url, int is_output, ) cdef int av_read_frame( AVFormatContext *ctx, AVPacket *packet, ) cdef int av_seek_frame( AVFormatContext *ctx, int stream_index, int64_t timestamp, int flags ) cdef int avformat_seek_file( AVFormatContext *ctx, int stream_index, int64_t min_ts, int64_t ts, int64_t max_ts, int flags ) cdef AVRational av_guess_frame_rate( AVFormatContext *ctx, AVStream *stream, AVFrame *frame ) cdef const AVInputFormat* av_demuxer_iterate(void **opaque) cdef const AVOutputFormat* av_muxer_iterate(void **opaque) # custom cdef set pyav_get_available_formats() PyAV-8.1.0/include/libavutil/000077500000000000000000000000001416312437500157635ustar00rootroot00000000000000PyAV-8.1.0/include/libavutil/avutil.pxd000066400000000000000000000200451416312437500200050ustar00rootroot00000000000000from libc.stdint cimport int64_t, uint8_t, uint64_t cdef 
extern from "libavutil/mathematics.h" nogil: pass cdef extern from "libavutil/rational.h" nogil: cdef int av_reduce(int *dst_num, int *dst_den, int64_t num, int64_t den, int64_t max) cdef extern from "libavutil/avutil.h" nogil: cdef int avutil_version() cdef char* avutil_configuration() cdef char* avutil_license() cdef enum AVPictureType: AV_PICTURE_TYPE_NONE AV_PICTURE_TYPE_I AV_PICTURE_TYPE_P AV_PICTURE_TYPE_B AV_PICTURE_TYPE_S AV_PICTURE_TYPE_SI AV_PICTURE_TYPE_SP AV_PICTURE_TYPE_BI cdef enum AVPixelFormat: AV_PIX_FMT_NONE AV_PIX_FMT_YUV420P AV_PIX_FMT_RGB24 PIX_FMT_RGB24 PIX_FMT_RGBA cdef enum AVRounding: AV_ROUND_ZERO AV_ROUND_INF AV_ROUND_DOWN AV_ROUND_UP AV_ROUND_NEAR_INF # This is nice, but only in FFMpeg: # AV_ROUND_PASS_MINMAX cdef double M_PI cdef void* av_malloc(size_t size) cdef void *av_calloc(size_t nmemb, size_t size) cdef void *av_realloc(void *ptr, size_t size) cdef void av_freep(void *ptr) cdef int av_get_bytes_per_sample(AVSampleFormat sample_fmt) cdef int av_samples_get_buffer_size( int *linesize, int nb_channels, int nb_samples, AVSampleFormat sample_fmt, int align ) # See: http://ffmpeg.org/doxygen/trunk/structAVRational.html ctypedef struct AVRational: int num int den cdef AVRational AV_TIME_BASE_Q # Rescales from one time base to another cdef int64_t av_rescale_q( int64_t a, # time stamp AVRational bq, # source time base AVRational cq # target time base ) # Rescale a 64-bit integer with specified rounding. # A simple a*b/c isn't possible as it can overflow cdef int64_t av_rescale_rnd( int64_t a, int64_t b, int64_t c, int r # should be AVRounding, but then we can't use bitwise logic. ) cdef int64_t av_rescale_q_rnd( int64_t a, AVRational bq, AVRational cq, int r # should be AVRounding, but then we can't use bitwise logic. 
) cdef int64_t av_rescale( int64_t a, int64_t b, int64_t c ) cdef char* av_strdup(char *s) cdef int av_opt_set_int( void *obj, char *name, int64_t value, int search_flags ) cdef const char* av_get_media_type_string(AVMediaType media_type) cdef extern from "libavutil/pixdesc.h" nogil: # See: http://ffmpeg.org/doxygen/trunk/structAVComponentDescriptor.html cdef struct AVComponentDescriptor: unsigned int plane unsigned int step unsigned int offset unsigned int shift unsigned int depth cdef enum AVPixFmtFlags: AV_PIX_FMT_FLAG_BE AV_PIX_FMT_FLAG_PAL AV_PIX_FMT_FLAG_BITSTREAM AV_PIX_FMT_FLAG_HWACCEL AV_PIX_FMT_FLAG_PLANAR AV_PIX_FMT_FLAG_RGB AV_PIX_FMT_FLAG_PSEUDOPAL AV_PIX_FMT_FLAG_ALPHA AV_PIX_FMT_FLAG_BAYER AV_PIX_FMT_FLAG_FLOAT # See: http://ffmpeg.org/doxygen/trunk/structAVPixFmtDescriptor.html cdef struct AVPixFmtDescriptor: const char *name uint8_t nb_components uint8_t log2_chroma_w uint8_t log2_chroma_h uint8_t flags AVComponentDescriptor comp[4] cdef AVPixFmtDescriptor* av_pix_fmt_desc_get(AVPixelFormat pix_fmt) cdef AVPixFmtDescriptor* av_pix_fmt_desc_next(AVPixFmtDescriptor *prev) cdef char * av_get_pix_fmt_name(AVPixelFormat pix_fmt) cdef AVPixelFormat av_get_pix_fmt(char* name) int av_get_bits_per_pixel(AVPixFmtDescriptor *pixdesc) int av_get_padded_bits_per_pixel(AVPixFmtDescriptor *pixdesc) cdef extern from "libavutil/channel_layout.h" nogil: # Layouts. cdef uint64_t av_get_channel_layout(char* name) cdef int av_get_channel_layout_nb_channels(uint64_t channel_layout) cdef int64_t av_get_default_channel_layout(int nb_channels) cdef void av_get_channel_layout_string( char* buff, int buf_size, int nb_channels, uint64_t channel_layout ) # Channels. 
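av_rescale_q() declared above converts a timestamp between time bases; the dedicated helper exists because a naive `a * b / c` can overflow 64-bit integers in C. In Python, where integers are unbounded, the conversion can be sketched directly with `fractions.Fraction` (`rescale_q` is our name, not FFmpeg's; note FFmpeg's default AV_ROUND_NEAR_INF rounds exact halves away from zero, while Python's `round` uses banker's rounding):

```python
from fractions import Fraction

def rescale_q(ts, src_tb, dst_tb):
    # Mirror av_rescale_q(): ts * src_tb / dst_tb, rounded to nearest.
    return round(ts * src_tb / dst_tb)

# 40 ticks in a 1/25 time base is 1.6 s; in a 1/90000 base that is 144000 ticks.
assert rescale_q(40, Fraction(1, 25), Fraction(1, 90000)) == 144000
```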
cdef uint64_t av_channel_layout_extract_channel(uint64_t layout, int index) cdef char* av_get_channel_name(uint64_t channel) cdef char* av_get_channel_description(uint64_t channel) cdef extern from "libavutil/audio_fifo.h" nogil: cdef struct AVAudioFifo: pass cdef void av_audio_fifo_free(AVAudioFifo *af) cdef AVAudioFifo* av_audio_fifo_alloc( AVSampleFormat sample_fmt, int channels, int nb_samples ) cdef int av_audio_fifo_write( AVAudioFifo *af, void **data, int nb_samples ) cdef int av_audio_fifo_read( AVAudioFifo *af, void **data, int nb_samples ) cdef int av_audio_fifo_size(AVAudioFifo *af) cdef int av_audio_fifo_space (AVAudioFifo *af) cdef extern from "stdarg.h" nogil: # For logging. Should really be in another PXD. ctypedef struct va_list: pass cdef extern from "Python.h" nogil: # For logging. See av/logging.pyx for an explanation. cdef int Py_AddPendingCall(void *, void *) void PyErr_PrintEx(int set_sys_last_vars) int Py_IsInitialized() void PyErr_Display(object, object, object) cdef extern from "libavutil/opt.h" nogil: cdef enum AVOptionType: AV_OPT_TYPE_FLAGS AV_OPT_TYPE_INT AV_OPT_TYPE_INT64 AV_OPT_TYPE_DOUBLE AV_OPT_TYPE_FLOAT AV_OPT_TYPE_STRING AV_OPT_TYPE_RATIONAL AV_OPT_TYPE_BINARY AV_OPT_TYPE_DICT #AV_OPT_TYPE_UINT64 # since FFmpeg 3.3 AV_OPT_TYPE_CONST AV_OPT_TYPE_IMAGE_SIZE AV_OPT_TYPE_PIXEL_FMT AV_OPT_TYPE_SAMPLE_FMT AV_OPT_TYPE_VIDEO_RATE AV_OPT_TYPE_DURATION AV_OPT_TYPE_COLOR AV_OPT_TYPE_CHANNEL_LAYOUT AV_OPT_TYPE_BOOL cdef struct AVOption_default_val: int64_t i64 double dbl const char *str AVRational q cdef enum: AV_OPT_FLAG_ENCODING_PARAM AV_OPT_FLAG_DECODING_PARAM AV_OPT_FLAG_AUDIO_PARAM AV_OPT_FLAG_VIDEO_PARAM AV_OPT_FLAG_SUBTITLE_PARAM AV_OPT_FLAG_EXPORT AV_OPT_FLAG_READONLY AV_OPT_FLAG_FILTERING_PARAM cdef struct AVOption: const char *name const char *help AVOptionType type int offset AVOption_default_val default_val double min double max int flags const char *unit cdef extern from "libavutil/imgutils.h" nogil: cdef int av_image_alloc( 
uint8_t *pointers[4], int linesizes[4], int width, int height, AVPixelFormat pix_fmt, int align ) cdef extern from "libavutil/log.h" nogil: cdef enum AVClassCategory: AV_CLASS_CATEGORY_NA AV_CLASS_CATEGORY_INPUT AV_CLASS_CATEGORY_OUTPUT AV_CLASS_CATEGORY_MUXER AV_CLASS_CATEGORY_DEMUXER AV_CLASS_CATEGORY_ENCODER AV_CLASS_CATEGORY_DECODER AV_CLASS_CATEGORY_FILTER AV_CLASS_CATEGORY_BITSTREAM_FILTER AV_CLASS_CATEGORY_SWSCALER AV_CLASS_CATEGORY_SWRESAMPLER AV_CLASS_CATEGORY_NB cdef struct AVClass: const char *class_name const char *(*item_name)(void*) nogil AVClassCategory category int parent_log_context_offset const AVOption *option cdef enum: AV_LOG_QUIET AV_LOG_PANIC AV_LOG_FATAL AV_LOG_ERROR AV_LOG_WARNING AV_LOG_INFO AV_LOG_VERBOSE AV_LOG_DEBUG AV_LOG_TRACE AV_LOG_MAX_OFFSET # Send a log. void av_log(void *ptr, int level, const char *fmt, ...) # Get the logs. ctypedef void(*av_log_callback)(void *, int, const char *, va_list) void av_log_default_callback(void *, int, const char *, va_list) void av_log_set_callback (av_log_callback callback) PyAV-8.1.0/include/libavutil/channel_layout.pxd000066400000000000000000000006341416312437500215100ustar00rootroot00000000000000cdef extern from "libavutil/channel_layout.h" nogil: # This is not a comprehensive list. 
cdef uint64_t AV_CH_LAYOUT_MONO cdef uint64_t AV_CH_LAYOUT_STEREO cdef uint64_t AV_CH_LAYOUT_2POINT1 cdef uint64_t AV_CH_LAYOUT_4POINT0 cdef uint64_t AV_CH_LAYOUT_5POINT0_BACK cdef uint64_t AV_CH_LAYOUT_5POINT1_BACK cdef uint64_t AV_CH_LAYOUT_6POINT1 cdef uint64_t AV_CH_LAYOUT_7POINT1 PyAV-8.1.0/include/libavutil/dict.pxd000066400000000000000000000015001416312437500174170ustar00rootroot00000000000000cdef extern from "libavutil/dict.h" nogil: # See: http://ffmpeg.org/doxygen/trunk/structAVDictionary.html ctypedef struct AVDictionary: pass cdef void av_dict_free(AVDictionary **) # See: http://ffmpeg.org/doxygen/trunk/structAVDictionaryEntry.html ctypedef struct AVDictionaryEntry: char *key char *value cdef int AV_DICT_IGNORE_SUFFIX cdef AVDictionaryEntry* av_dict_get( AVDictionary *dict, char *key, AVDictionaryEntry *prev, int flags, ) cdef int av_dict_set( AVDictionary **pm, const char *key, const char *value, int flags ) cdef int av_dict_count( AVDictionary *m ) cdef int av_dict_copy( AVDictionary **dst, AVDictionary *src, int flags ) PyAV-8.1.0/include/libavutil/error.pxd000066400000000000000000000023761416312437500176410ustar00rootroot00000000000000cdef extern from "libavutil/error.h" nogil: # Not actually from here, but whatever. 
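These AV_CH_LAYOUT_* values are bitmasks with one bit per speaker position, which is why av_get_channel_layout_nb_channels() is effectively a population count and av_channel_layout_extract_channel() returns the index-th set bit. A pure-Python sketch of that bit arithmetic (the two channel constants below match FFmpeg's channel_layout.h; the helper names are ours):

```python
AV_CH_FRONT_LEFT = 0x1
AV_CH_FRONT_RIGHT = 0x2
AV_CH_LAYOUT_STEREO = AV_CH_FRONT_LEFT | AV_CH_FRONT_RIGHT

def nb_channels(layout):
    # av_get_channel_layout_nb_channels() is a population count.
    return bin(layout).count('1')

def extract_channel(layout, index):
    # av_channel_layout_extract_channel(): the index-th lowest set bit.
    for bit in range(64):
        mask = 1 << bit
        if layout & mask:
            if index == 0:
                return mask
            index -= 1
    return 0

assert nb_channels(AV_CH_LAYOUT_STEREO) == 2
assert extract_channel(AV_CH_LAYOUT_STEREO, 1) == AV_CH_FRONT_RIGHT
```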
cdef int ENOMEM cdef int EAGAIN cdef int AVERROR_BSF_NOT_FOUND cdef int AVERROR_BUG cdef int AVERROR_BUFFER_TOO_SMALL cdef int AVERROR_DECODER_NOT_FOUND cdef int AVERROR_DEMUXER_NOT_FOUND cdef int AVERROR_ENCODER_NOT_FOUND cdef int AVERROR_EOF cdef int AVERROR_EXIT cdef int AVERROR_EXTERNAL cdef int AVERROR_FILTER_NOT_FOUND cdef int AVERROR_INVALIDDATA cdef int AVERROR_MUXER_NOT_FOUND cdef int AVERROR_OPTION_NOT_FOUND cdef int AVERROR_PATCHWELCOME cdef int AVERROR_PROTOCOL_NOT_FOUND cdef int AVERROR_UNKNOWN cdef int AVERROR_EXPERIMENTAL cdef int AVERROR_INPUT_CHANGED cdef int AVERROR_OUTPUT_CHANGED cdef int AVERROR_HTTP_BAD_REQUEST cdef int AVERROR_HTTP_UNAUTHORIZED cdef int AVERROR_HTTP_FORBIDDEN cdef int AVERROR_HTTP_NOT_FOUND cdef int AVERROR_HTTP_OTHER_4XX cdef int AVERROR_HTTP_SERVER_ERROR cdef int AVERROR_NOMEM "AVERROR(ENOMEM)" # cdef int FFERRTAG(int, int, int, int) cdef int AVERROR(int error) cdef int AV_ERROR_MAX_STRING_SIZE cdef int av_strerror(int errno, char *output, size_t output_size) cdef char* av_err2str(int errnum) PyAV-8.1.0/include/libavutil/frame.pxd000066400000000000000000000013161416312437500175730ustar00rootroot00000000000000cdef extern from "libavutil/frame.h" nogil: cdef AVFrame* av_frame_alloc() cdef void av_frame_free(AVFrame**) cdef int av_frame_ref(AVFrame *dst, const AVFrame *src) cdef AVFrame* av_frame_clone(const AVFrame *src) cdef void av_frame_unref(AVFrame *frame) cdef void av_frame_move_ref(AVFrame *dst, AVFrame *src) cdef int av_frame_get_buffer(AVFrame *frame, int align) cdef int av_frame_is_writable(AVFrame *frame) cdef int av_frame_make_writable(AVFrame *frame) cdef int av_frame_copy(AVFrame *dst, const AVFrame *src) cdef int av_frame_copy_props(AVFrame *dst, const AVFrame *src) cdef AVFrameSideData* av_frame_get_side_data(AVFrame *frame, AVFrameSideDataType type) PyAV-8.1.0/include/libavutil/motion_vector.pxd000066400000000000000000000007611416312437500213730ustar00rootroot00000000000000from libc.stdint cimport ( uint8_t, 
int8_t, uint16_t, int16_t, uint32_t, int32_t, uint64_t, int64_t ) cdef extern from "libavutil/motion_vector.h" nogil: cdef struct AVMotionVector: int32_t source uint8_t w uint8_t h int16_t src_x int16_t src_y int16_t dst_x int16_t dst_y uint64_t flags int32_t motion_x int32_t motion_y uint16_t motion_scale PyAV-8.1.0/include/libavutil/samplefmt.pxd000066400000000000000000000031571416312437500204760ustar00rootroot00000000000000cdef extern from "libavutil/samplefmt.h" nogil: cdef enum AVSampleFormat: AV_SAMPLE_FMT_NONE AV_SAMPLE_FMT_U8 AV_SAMPLE_FMT_S16 AV_SAMPLE_FMT_S32 AV_SAMPLE_FMT_FLT AV_SAMPLE_FMT_DBL AV_SAMPLE_FMT_U8P AV_SAMPLE_FMT_S16P AV_SAMPLE_FMT_S32P AV_SAMPLE_FMT_FLTP AV_SAMPLE_FMT_DBLP AV_SAMPLE_FMT_NB # Number. # Find by name. cdef AVSampleFormat av_get_sample_fmt(char* name) # Inspection. cdef char * av_get_sample_fmt_name(AVSampleFormat sample_fmt) cdef int av_get_bytes_per_sample(AVSampleFormat sample_fmt) cdef int av_sample_fmt_is_planar(AVSampleFormat sample_fmt) # Alternative forms. 
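For packed sample formats, av_samples_get_buffer_size() with `align=1` reduces to channels × samples × bytes-per-sample. A pure-Python sketch of that tightly packed case (`samples_buffer_size` is our name; alignment padding is ignored here, and planar formats spread the same total across per-channel planes):

```python
# Bytes per sample for the common packed formats (per FFmpeg's samplefmt.h).
BYTES_PER_SAMPLE = {'u8': 1, 's16': 2, 's32': 4, 'flt': 4, 'dbl': 8}

def samples_buffer_size(nb_channels, nb_samples, fmt):
    # Tightly packed (align=1) case of av_samples_get_buffer_size().
    return nb_channels * nb_samples * BYTES_PER_SAMPLE[fmt]

# 1024 stereo s16 samples occupy 4 KiB.
assert samples_buffer_size(2, 1024, 's16') == 4096
```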
cdef AVSampleFormat av_get_packed_sample_fmt(AVSampleFormat sample_fmt) cdef AVSampleFormat av_get_planar_sample_fmt(AVSampleFormat sample_fmt) cdef int av_samples_alloc( uint8_t** audio_data, int* linesize, int nb_channels, int nb_samples, AVSampleFormat sample_fmt, int align ) cdef int av_samples_get_buffer_size( int *linesize, int nb_channels, int nb_samples, AVSampleFormat sample_fmt, int align ) cdef int av_samples_fill_arrays( uint8_t **audio_data, int *linesize, const uint8_t *buf, int nb_channels, int nb_samples, AVSampleFormat sample_fmt, int align ) cdef int av_samples_set_silence( uint8_t **audio_data, int offset, int nb_samples, int nb_channels, AVSampleFormat sample_fmt ) PyAV-8.1.0/include/libswresample/000077500000000000000000000000001416312437500166415ustar00rootroot00000000000000PyAV-8.1.0/include/libswresample/swresample.pxd000066400000000000000000000020641416312437500215420ustar00rootroot00000000000000from libc.stdint cimport int64_t, uint8_t cdef extern from "libswresample/swresample.h" nogil: cdef int swresample_version() cdef char* swresample_configuration() cdef char* swresample_license() cdef struct SwrContext: pass cdef SwrContext* swr_alloc_set_opts( SwrContext *ctx, int64_t out_ch_layout, AVSampleFormat out_sample_fmt, int out_sample_rate, int64_t in_ch_layout, AVSampleFormat in_sample_fmt, int in_sample_rate, int log_offset, void *log_ctx #logging context, can be NULL ) cdef int swr_convert( SwrContext *ctx, uint8_t ** out_buffer, int out_count, uint8_t **in_buffer, int in_count ) # Gets the delay the next input sample will # experience relative to the next output sample. 
cdef int64_t swr_get_delay(SwrContext *s, int64_t base) cdef SwrContext* swr_alloc() cdef int swr_init(SwrContext* ctx) cdef void swr_free(SwrContext **ctx) cdef void swr_close(SwrContext *ctx) PyAV-8.1.0/include/libswscale/000077500000000000000000000000001416312437500161205ustar00rootroot00000000000000PyAV-8.1.0/include/libswscale/swscale.pxd000066400000000000000000000044031416312437500202770ustar00rootroot00000000000000 cdef extern from "libswscale/swscale.h" nogil: cdef int swscale_version() cdef char* swscale_configuration() cdef char* swscale_license() # See: http://ffmpeg.org/doxygen/trunk/structSwsContext.html cdef struct SwsContext: pass # See: http://ffmpeg.org/doxygen/trunk/structSwsFilter.html cdef struct SwsFilter: pass # Flags. cdef int SWS_FAST_BILINEAR cdef int SWS_BILINEAR cdef int SWS_BICUBIC cdef int SWS_X cdef int SWS_POINT cdef int SWS_AREA cdef int SWS_BICUBLIN cdef int SWS_GAUSS cdef int SWS_SINC cdef int SWS_LANCZOS cdef int SWS_SPLINE cdef int SWS_CS_ITU709 cdef int SWS_CS_FCC cdef int SWS_CS_ITU601 cdef int SWS_CS_ITU624 cdef int SWS_CS_SMPTE170M cdef int SWS_CS_SMPTE240M cdef int SWS_CS_DEFAULT cdef SwsContext* sws_getContext( int src_width, int src_height, AVPixelFormat src_format, int dst_width, int dst_height, AVPixelFormat dst_format, int flags, SwsFilter *src_filter, SwsFilter *dst_filter, double *param, ) cdef int sws_scale( SwsContext *ctx, unsigned char **src_slice, int *src_stride, int src_slice_y, int src_slice_h, unsigned char **dst_slice, int *dst_stride, ) cdef void sws_freeContext(SwsContext *ctx) cdef SwsContext *sws_getCachedContext( SwsContext *context, int src_width, int src_height, AVPixelFormat src_format, int dst_width, int dst_height, AVPixelFormat dst_format, int flags, SwsFilter *src_filter, SwsFilter *dst_filter, double *param, ) cdef int* sws_getCoefficients(int colorspace) cdef int sws_getColorspaceDetails( SwsContext *context, int **inv_table, int *srcRange, int **table, int *dstRange, int *brightness, int 
*contrast, int *saturation
    )

    cdef int sws_setColorspaceDetails(
        SwsContext *context,
        const int inv_table[4],
        int srcRange,
        const int table[4],
        int dstRange,
        int brightness,
        int contrast,
        int saturation
    )

PyAV-8.1.0/scratchpad/
PyAV-8.1.0/scratchpad/README

This directory is for the PyAV developers to dump partial or experimental tests. The contents of this directory are not guaranteed to work, or make sense in any way.

Have fun!

PyAV-8.1.0/scratchpad/__init__.py

import logging
logging.basicConfig(level=logging.WARNING)

PyAV-8.1.0/scratchpad/audio.py

from __future__ import print_function

import array
import argparse
import sys
import pprint
import subprocess

from PIL import Image

import av


def print_data(frame):
    for i, plane in enumerate(frame.planes or ()):
        data = plane.to_bytes()
        print('\tPLANE %d, %d bytes' % (i, len(data)))
        data = data.encode('hex')
        for i in xrange(0, len(data), 128):
            print('\t\t\t%s' % data[i:i + 128])


arg_parser = argparse.ArgumentParser()
arg_parser.add_argument('path')
arg_parser.add_argument('-p', '--play', action='store_true')
arg_parser.add_argument('-d', '--data', action='store_true')
arg_parser.add_argument('-f', '--format')
arg_parser.add_argument('-l', '--layout')
arg_parser.add_argument('-r', '--rate', type=int)
arg_parser.add_argument('-s', '--size', type=int, default=1024)
arg_parser.add_argument('-c', '--count', type=int, default=5)
args = arg_parser.parse_args()

ffplay = None

container = av.open(args.path)
stream = next(s for s in container.streams if s.type == 'audio')

fifo = av.AudioFifo() if args.size else None

resampler = av.AudioResampler(
    format=av.AudioFormat(args.format or stream.format.name).packed if args.format

else None, layout=int(args.layout) if args.layout and args.layout.isdigit() else args.layout, rate=args.rate, ) if (args.format or args.layout or args.rate) else None read_count = 0 fifo_count = 0 sample_count = 0 for i, packet in enumerate(container.demux(stream)): for frame in packet.decode(): read_count += 1 print('>>>> %04d' % read_count, frame) if args.data: print_data(frame) frames = [frame] if resampler: for i, frame in enumerate(frames): frame = resampler.resample(frame) print('RESAMPLED', frame) if args.data: print_data(frame) frames[i] = frame if fifo: to_process = frames frames = [] for frame in to_process: fifo.write(frame) while frame: frame = fifo.read(args.size) if frame: fifo_count += 1 print('|||| %04d' % fifo_count, frame) if args.data: print_data(frame) frames.append(frame) if frames and args.play: if not ffplay: cmd = ['ffplay', '-f', frames[0].format.packed.container_name, '-ar', str(args.rate or stream.rate), '-ac', str(len(resampler.layout.channels if resampler else stream.layout.channels)), '-vn', '-', ] print('PLAY', ' '.join(cmd)) ffplay = subprocess.Popen(cmd, stdin=subprocess.PIPE) try: for frame in frames: ffplay.stdin.write(frame.planes[0].to_bytes()) except IOError as e: print(e) exit() if args.count and read_count >= args.count: exit() PyAV-8.1.0/scratchpad/audio_player.py000066400000000000000000000036621416312437500175170ustar00rootroot00000000000000from __future__ import print_function import array import argparse import sys import pprint import subprocess import time from qtproxy import Q import av parser = argparse.ArgumentParser() parser.add_argument('path') args = parser.parse_args() container = av.open(args.path) stream = next(s for s in container.streams if s.type == 'audio') fifo = av.AudioFifo() resampler = av.AudioResampler( format=av.AudioFormat('s16').packed, layout='stereo', rate=48000, ) qformat = Q.AudioFormat() qformat.setByteOrder(Q.AudioFormat.LittleEndian) qformat.setChannelCount(2) qformat.setCodec('audio/pcm') 
qformat.setSampleRate(48000) qformat.setSampleSize(16) qformat.setSampleType(Q.AudioFormat.SignedInt) output = Q.AudioOutput(qformat) output.setBufferSize(2 * 2 * 48000) device = output.start() print(qformat, output, device) def decode_iter(): try: for pi, packet in enumerate(container.demux(stream)): for fi, frame in enumerate(packet.decode()): yield pi, fi, frame except: return for pi, fi, frame in decode_iter(): frame = resampler.resample(frame) print(pi, fi, frame, output.state()) bytes_buffered = output.bufferSize() - output.bytesFree() us_processed = output.processedUSecs() us_buffered = 1000000 * bytes_buffered / (2 * 16 / 8) / 48000 print('pts: %.3f, played: %.3f, buffered: %.3f' % (frame.time or 0, us_processed / 1000000.0, us_buffered / 1000000.0)) data = frame.planes[0].to_bytes() while data: written = device.write(data) if written: # print 'wrote', written data = data[written:] else: # print 'did not accept data; sleeping' time.sleep(0.033) if False and pi % 100 == 0: output.reset() print(output.state(), output.error()) device = output.start() # time.sleep(0.05) while output.state() == Q.Audio.ActiveState: time.sleep(0.1) PyAV-8.1.0/scratchpad/average.py000066400000000000000000000026331416312437500164510ustar00rootroot00000000000000from __future__ import print_function import argparse import os import sys import pprint import itertools import cv2 from av import open parser = argparse.ArgumentParser() parser.add_argument('-f', '--format') parser.add_argument('-n', '--frames', type=int, default=0) parser.add_argument('path', nargs='+') args = parser.parse_args() max_size = 24 * 60 # One minute's worth. 
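The averaging loop that follows accumulates a float sum of every frame and divides once at the end; the same mean can also be maintained incrementally, so no large sum builds up. A scalar pure-Python sketch of that alternative (the script applies the idea per-pixel on ndarrays):

```python
def running_mean(values):
    # Incremental mean: m_k = m_{k-1} + (x_k - m_{k-1}) / k,
    # algebraically equal to sum(values) / len(values).
    mean = 0.0
    for k, x in enumerate(values, start=1):
        mean += (x - mean) / k
    return mean

assert running_mean([1.0, 2.0, 3.0, 4.0]) == 2.5
```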
def frame_iter(video):
    count = 0
    streams = [s for s in video.streams if s.type == b'video']
    streams = [streams[0]]
    for packet in video.demux(streams):
        for frame in packet.decode():
            yield frame
            count += 1
            if args.frames and count > args.frames:
                return


for src_path in args.path:
    print('reading', src_path)
    basename = os.path.splitext(os.path.basename(src_path))[0]
    dir_name = os.path.join('sandbox', basename)
    if not os.path.exists(dir_name):
        os.makedirs(dir_name)

    video = open(src_path, format=args.format)
    frames = frame_iter(video)

    sum_ = None
    for fi, frame in enumerate(frame_iter(video)):
        if sum_ is None:
            sum_ = frame.to_ndarray().astype(float)
        else:
            sum_ += frame.to_ndarray().astype(float)
    sum_ /= (fi + 1)

    dst_path = os.path.join('sandbox', os.path.basename(src_path) + '-avg.jpeg')
    print('writing', (fi + 1), 'frames to', dst_path)
    cv2.imwrite(dst_path, sum_)

PyAV-8.1.0/scratchpad/cctx_decode.py

from __future__ import print_function

import logging
logging.basicConfig()

import av
from av.codec import CodecContext, CodecParser
from av.video import VideoFrame
from av.packet import Packet

cc = CodecContext.create('mpeg4', 'r')
print(cc)

fh = open('test.mp4', 'rb')
frame_count = 0

while True:
    chunk = fh.read(819200)
    for packet in cc.parse(chunk or None, allow_stream=True):
        print(packet)
        for frame in cc.decode(packet) or ():
            print(frame)
            img = frame.to_image()
            img.save('sandbox/test.%04d.jpg' % frame_count)
            frame_count += 1
    if not chunk:
        break  # EOF!
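cctx_decode.py above feeds fixed-size chunks to CodecContext.parse() and passes None once the file is exhausted, so the parser flushes its final packet. The same chunk-then-sentinel loop shape, independent of PyAV (`chunked_with_flush` is our name for illustration):

```python
import io

def chunked_with_flush(fh, chunk_size):
    """Yield fixed-size chunks, then a final None sentinel (the flush)."""
    while True:
        chunk = fh.read(chunk_size)
        yield chunk or None
        if not chunk:
            return

fh = io.BytesIO(b'abcdefg')
chunks = list(chunked_with_flush(fh, 3))
assert chunks == [b'abc', b'def', b'g', None]
```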
PyAV-8.1.0/scratchpad/cctx_encode.py

from __future__ import print_function

import logging

from PIL import Image, ImageFont, ImageDraw

logging.basicConfig()

import av
from av.codec import CodecContext
from av.video import VideoFrame
from tests.common import fate_suite

cc = CodecContext.create('flv', 'w')
print(cc)

base_img = Image.open(fate_suite('png1/lena-rgb24.png'))
font = ImageFont.truetype("/System/Library/Fonts/Menlo.ttc", 15)

fh = open('test.flv', 'w')

for i in range(30):
    print(i)
    img = base_img.copy()
    draw = ImageDraw.Draw(img)
    draw.text((10, 10), "FRAME %02d" % i, font=font)
    frame = VideoFrame.from_image(img)
    frame = frame.reformat(format='yuv420p')
    print(' ', frame)
    packet = cc.encode(frame)
    print(' ', packet)
    fh.write(str(buffer(packet)))

print('Flushing...')
while True:
    packet = cc.encode()
    if not packet:
        break
    print(' ', packet)
    fh.write(str(buffer(packet)))

print('Done!')

PyAV-8.1.0/scratchpad/container-gc.py

import resource
import gc

import av
import av.datasets

path = av.datasets.curated('pexels/time-lapse-video-of-night-sky-857195.mp4')


def format_bytes(n):
    order = 0
    while n > 1024:
        order += 1
        n //= 1024
    return '%d%sB' % (n, ('', 'k', 'M', 'G', 'T', 'P')[order])


after = resource.getrusage(resource.RUSAGE_SELF)
count = 0
streams = []

while True:
    container = av.open(path)
    # streams.append(container.streams.video[0])
    del container
    gc.collect()
    count += 1
    if not count % 100:
        pass
        # streams.clear()
        # gc.collect()
    before = after
    after = resource.getrusage(resource.RUSAGE_SELF)
    print('{:6d} {} ({})'.format(
        count,
        format_bytes(after.ru_maxrss),
        format_bytes(after.ru_maxrss - before.ru_maxrss),
    ))

PyAV-8.1.0/scratchpad/decode.py

from __future__ import print_function import array import argparse import logging import sys
import pprint
import subprocess

from PIL import Image

from av import open, time_base


logging.basicConfig(level=logging.DEBUG)


def format_time(time, time_base):
    if time is None:
        return 'None'
    return '%.3fs (%s or %s/%s)' % (time_base * time, time_base * time, time_base.numerator * time, time_base.denominator)


arg_parser = argparse.ArgumentParser()
arg_parser.add_argument('path')
arg_parser.add_argument('-f', '--format')
arg_parser.add_argument('-a', '--audio', action='store_true')
arg_parser.add_argument('-v', '--video', action='store_true')
arg_parser.add_argument('-s', '--subs', action='store_true')
arg_parser.add_argument('-d', '--data', action='store_true')
arg_parser.add_argument('--dump-packets', action='store_true')
arg_parser.add_argument('--dump-planes', action='store_true')
arg_parser.add_argument('-p', '--play', action='store_true')
arg_parser.add_argument('-t', '--thread-type')
arg_parser.add_argument('-o', '--option', action='append', default=[])
arg_parser.add_argument('-c', '--count', type=int, default=5)
args = arg_parser.parse_args()

proc = None

options = dict(x.split('=') for x in args.option)
container = open(args.path, format=args.format, options=options)

print('container:', container)
print('\tformat:', container.format)
print('\tduration:', float(container.duration) / time_base)
print('\tmetadata:')
for k, v in sorted(container.metadata.items()):
    print('\t\t%s: %r' % (k, v))
print()

print(len(container.streams), 'stream(s):')
for i, stream in enumerate(container.streams):

    if args.thread_type:
        stream.codec_context.thread_type = args.thread_type

    print('\t%r' % stream)
    print('\t\ttime_base: %r' % stream.time_base)
    print('\t\trate: %r' % stream.rate)
    print('\t\tstart_time: %r' % stream.start_time)
    print('\t\tduration: %s' % format_time(stream.duration, stream.time_base))
    print('\t\tbit_rate: %r' % stream.bit_rate)
    print('\t\tbit_rate_tolerance: %r' % stream.bit_rate_tolerance)

    codec_context = stream.codec_context
    if codec_context:
        print('\t\tcodec_context:', codec_context)
        print('\t\t\ttime_base:', codec_context.time_base)

    if stream.type == 'audio':  # was b'audio'; stream.type is a str in PyAV 8
        print('\t\taudio:')
        print('\t\t\tformat:', stream.format)
        print('\t\t\tchannels: %s' % stream.channels)

    elif stream.type == 'video':
        print('\t\tvideo:')
        print('\t\t\tformat:', stream.format)
        print('\t\t\taverage_rate: %r' % stream.average_rate)

    print('\t\tmetadata:')
    for k, v in sorted(stream.metadata.items()):
        print('\t\t\t%s: %r' % (k, v))

    print()


streams = [s for s in container.streams if
    (s.type == 'audio' and args.audio) or
    (s.type == 'video' and args.video) or
    (s.type == 'subtitle' and args.subs) or
    (s.type == 'data' and args.data)
]

frame_count = 0

for i, packet in enumerate(container.demux(streams)):

    print('%02d %r' % (i, packet))
    print('\ttime_base: %s' % packet.time_base)
    print('\tduration: %s' % format_time(packet.duration, packet.stream.time_base))
    print('\tpts: %s' % format_time(packet.pts, packet.stream.time_base))
    print('\tdts: %s' % format_time(packet.dts, packet.stream.time_base))
    print('\tkey: %s' % packet.is_keyframe)

    if args.dump_packets:
        print(bytes(packet))

    for frame in packet.decode():

        frame_count += 1

        print('\tdecoded:', frame)
        print('\t\ttime_base: %s' % frame.time_base)
        print('\t\tpts:', format_time(frame.pts, packet.stream.time_base))

        if packet.stream.type == 'video':
            pass

        elif packet.stream.type == 'audio':
            print('\t\tsamples:', frame.samples)
            print('\t\tformat:', frame.format.name)
            print('\t\tlayout:', frame.layout.name)

        elif packet.stream.type == 'subtitle':

            sub = frame

            print('\t\tformat:', sub.format)
            print('\t\tstart_display_time:', format_time(sub.start_display_time, packet.stream.time_base))
            print('\t\tend_display_time:', format_time(sub.end_display_time, packet.stream.time_base))
            print('\t\trects: %d' % len(sub.rects))
            for rect in sub.rects:
                print('\t\t\t%r' % rect)
                if rect.type == 'ass':
                    print('\t\t\t\tass: %r' % rect.ass)

        if args.play and packet.stream.type == 'audio':
            if not proc:
                cmd = ['ffplay',
                    '-f', 's16le',
                    '-ar', str(packet.stream.rate),  # was str(packet.stream.time_base); ffplay expects a sample rate here
                    '-vn', '-',
                ]
                proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
            try:
                proc.stdin.write(frame.planes[0].to_bytes())
            except IOError as e:
                print(e)
                exit()

        if args.dump_planes:
            print('\t\tplanes')
            for i, plane in enumerate(frame.planes or ()):
                data = plane.to_bytes()
                print('\t\t\tPLANE %d, %d bytes' % (i, len(data)))
                data = data.hex()  # was data.encode('hex'), which is Python-2-only
                for i in range(0, len(data), 128):
                    print('\t\t\t%s' % data[i:i + 128])

        if args.count and frame_count >= args.count:
            exit()

    print()


PyAV-8.1.0/scratchpad/dump_format.py

import sys
import logging

logging.basicConfig(level=logging.DEBUG)
logging.getLogger('libav').setLevel(logging.DEBUG)

import av

fh = av.open(sys.argv[1])
print(fh.dumps_format())


PyAV-8.1.0/scratchpad/encode.py

from __future__ import print_function

import argparse
import logging
import os
import sys

import av

from tests.common import asset, sandboxed


arg_parser = argparse.ArgumentParser()
arg_parser.add_argument('-v', '--verbose', action='store_true')
arg_parser.add_argument('input', nargs=1)
args = arg_parser.parse_args()


input_file = av.open(args.input[0])
input_video_stream = None  # next((s for s in input_file.streams if s.type == 'video'), None)
input_audio_stream = next((s for s in input_file.streams if s.type == 'audio'), None)

# open output file
output_file_path = sandboxed('encoded-' + os.path.basename(args.input[0]))
output_file = av.open(output_file_path, 'w')
output_video_stream = output_file.add_stream("mpeg4", 24) if input_video_stream else None
output_audio_stream = output_file.add_stream("mp3") if input_audio_stream else None


frame_count = 0

for packet in input_file.demux([s for s in (input_video_stream, input_audio_stream) if s]):

    if args.verbose:
        print('in ', packet)

    for frame in packet.decode():

        if args.verbose:
            print('\t%s' % frame)

        if packet.stream.type == 'video':
            if frame_count % 10 == 0:
                if frame_count:
                    print()
                print(('%03d:' % frame_count), end=' ')
            sys.stdout.write('.')
            sys.stdout.flush()
            frame_count += 1

        # Signal to generate its own timestamps.
        frame.pts = None

        stream = output_audio_stream if packet.stream.type == 'audio' else output_video_stream

        output_packets = [stream.encode(frame)]  # was output_audio_stream.encode(...), which is wrong for video packets
        while output_packets[-1]:
            output_packets.append(stream.encode(None))

        for p in output_packets:
            if p:
                if args.verbose:
                    print('OUT', p)
                output_file.mux(p)

    if frame_count >= 100:
        break

print('-' * 78)

# Finally we need to flush out the frames that are buffered in the encoder.
# To do that we simply call encode with no args until we get a None returned
if output_audio_stream:
    while True:
        output_packet = output_audio_stream.encode(None)
        if output_packet:
            if args.verbose:
                print('<<<', output_packet)
            output_file.mux(output_packet)
        else:
            break

if output_video_stream:
    while True:
        output_packet = output_video_stream.encode(None)
        if output_packet:
            if args.verbose:
                print('<<<', output_packet)
            output_file.mux(output_packet)
        else:
            break

output_file.close()


PyAV-8.1.0/scratchpad/encode_frames.py

from __future__ import print_function

import argparse
import os
import sys

import av
import cv2


arg_parser = argparse.ArgumentParser()
arg_parser.add_argument('-r', '--rate', default='23.976')
arg_parser.add_argument('-f', '--format', default='yuv420p')
arg_parser.add_argument('-w', '--width', type=int)
arg_parser.add_argument('--height', type=int)
arg_parser.add_argument('-b', '--bitrate', type=int, default=8000000)
arg_parser.add_argument('-c', '--codec', default='mpeg4')
arg_parser.add_argument('inputs', nargs='+')
arg_parser.add_argument('output', nargs=1)
args = arg_parser.parse_args()


output = av.open(args.output[0], 'w')
stream = output.add_stream(args.codec, args.rate)
stream.bit_rate = args.bitrate
stream.pix_fmt = args.format

for i, path in enumerate(args.inputs):

    print(os.path.basename(path))

    img = cv2.imread(path)

    if not i:
        # integer division; stream dimensions must be ints
        stream.height = args.height or (args.width * img.shape[0] // img.shape[1]) or img.shape[0]
        stream.width = args.width or img.shape[1]

    frame = av.VideoFrame.from_ndarray(img, format='bgr24')
    packet = stream.encode(frame)
    output.mux(packet)

output.close()


PyAV-8.1.0/scratchpad/filter_audio.py

"""
Simple audio filtering example ported from C code:
https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/filter_audio.c
"""

from __future__ import division, print_function

from fractions import Fraction
import hashlib
import sys

import numpy as np

import av
import av.audio.frame as af
import av.filter


FRAME_SIZE = 1024
INPUT_SAMPLE_RATE = 48000
INPUT_FORMAT = 'fltp'
INPUT_CHANNEL_LAYOUT = '5.0(side)'  # -> AV_CH_LAYOUT_5POINT0

OUTPUT_SAMPLE_RATE = 44100
OUTPUT_FORMAT = 's16'  # notice, packed audio format, expect only one plane in output
OUTPUT_CHANNEL_LAYOUT = 'stereo'  # -> AV_CH_LAYOUT_STEREO

VOLUME_VAL = 0.90


def init_filter_graph():

    graph = av.filter.Graph()

    output_format = 'sample_fmts={}:sample_rates={}:channel_layouts={}'.format(
        OUTPUT_FORMAT, OUTPUT_SAMPLE_RATE, OUTPUT_CHANNEL_LAYOUT
    )
    print('Output format: {}'.format(output_format))

    # initialize filters
    filter_chain = [
        graph.add_abuffer(format=INPUT_FORMAT,
                          sample_rate=INPUT_SAMPLE_RATE,
                          layout=INPUT_CHANNEL_LAYOUT,
                          time_base=Fraction(1, INPUT_SAMPLE_RATE)),
        # initialize filter with keyword parameters
        graph.add('volume', volume=str(VOLUME_VAL)),
        # or compound string configuration
        graph.add('aformat', output_format),
        graph.add('abuffersink')
    ]

    # link up the filters into a chain
    print('Filter graph:')
    for c, n in zip(filter_chain, filter_chain[1:]):
        print('\t{} -> {}'.format(c, n))
        c.link_to(n)

    # initialize the filter graph
    graph.configure()

    return graph


def get_input(frame_num):
    """
    Manually construct and update AudioFrame.
    Consider using AudioFrame.from_ndarray for most real-life numpy -> AudioFrame conversions.

    :param frame_num:
    :return:
    """

    frame = av.AudioFrame(format=INPUT_FORMAT, layout=INPUT_CHANNEL_LAYOUT, samples=FRAME_SIZE)
    frame.sample_rate = INPUT_SAMPLE_RATE
    frame.pts = frame_num * FRAME_SIZE

    for i in range(len(frame.layout.channels)):
        data = np.zeros(FRAME_SIZE, dtype=af.format_dtypes[INPUT_FORMAT])
        for j in range(FRAME_SIZE):
            data[j] = np.sin(2 * np.pi * (frame_num + j) * (i + 1) / float(FRAME_SIZE))
        frame.planes[i].update(data)

    return frame


def process_output(frame):
    data = frame.to_ndarray()
    for i in range(data.shape[0]):
        m = hashlib.md5(data[i, :].tobytes())
        print('Plane: {:0d} checksum: {}'.format(i, m.hexdigest()))


def main(duration):
    frames_count = int(duration * INPUT_SAMPLE_RATE / FRAME_SIZE)

    graph = init_filter_graph()

    for f in range(frames_count):
        frame = get_input(f)

        # submit the frame for processing
        graph.push(frame)

        # pull frames from graph until graph has done processing or is waiting for a new input
        while True:
            try:
                out_frame = graph.pull()
                process_output(out_frame)
            except (BlockingIOError, av.EOFError):
                break

    # process any remaining buffered frames
    while True:
        try:
            out_frame = graph.pull()
            process_output(out_frame)
        except (BlockingIOError, av.EOFError):
            break


if __name__ == '__main__':
    duration = 1.0 if len(sys.argv) < 2 else float(sys.argv[1])
    main(duration)


PyAV-8.1.0/scratchpad/frame_seek_example.py

from __future__ import print_function

"""
Note this example only really works accurately on constant frame rate media.
""" from PyQt4 import QtGui from PyQt4 import QtCore from PyQt4.QtCore import Qt import sys import av AV_TIME_BASE = 1000000 def pts_to_frame(pts, time_base, frame_rate, start_time): return int(pts * time_base * frame_rate) - int(start_time * time_base * frame_rate) def get_frame_rate(stream): if stream.average_rate.denominator and stream.average_rate.numerator: return float(stream.average_rate) if stream.time_base.denominator and stream.time_base.numerator: return 1.0 / float(stream.time_base) else: raise ValueError("Unable to determine FPS") def get_frame_count(f, stream): if stream.frames: return stream.frames elif stream.duration: return pts_to_frame(stream.duration, float(stream.time_base), get_frame_rate(stream), 0) elif f.duration: return pts_to_frame(f.duration, 1 / float(AV_TIME_BASE), get_frame_rate(stream), 0) else: raise ValueError("Unable to determine number for frames") class FrameGrabber(QtCore.QObject): frame_ready = QtCore.pyqtSignal(object, object) update_frame_range = QtCore.pyqtSignal(object) def __init__(self, parent=None): super(FrameGrabber, self).__init__(parent) self.file = None self.stream = None self.frame = None self.active_frame = None self.start_time = 0 self.pts_seen = False self.nb_frames = None self.rate = None self.time_base = None def next_frame(self): frame_index = None rate = self.rate time_base = self.time_base self.pts_seen = False for packet in self.file.demux(self.stream): #print " pkt", packet.pts, packet.dts, packet if packet.pts: self.pts_seen = True for frame in packet.decode(): if frame_index is None: if self.pts_seen: pts = frame.pts else: pts = frame.dts if not pts is None: frame_index = pts_to_frame(pts, time_base, rate, self.start_time) elif not frame_index is None: frame_index += 1 yield frame_index, frame @QtCore.pyqtSlot(object) def request_frame(self, target_frame): frame = self.get_frame(target_frame) if not frame: return rgba = frame.reformat(frame.width, frame.height, "rgb24", 'itu709') #print 
rgba.to_image().save("test.png") # could use the buffer interface here instead, some versions of PyQt don't support it for some reason # need to track down which version they added support for it self.frame = bytearray(rgba.planes[0]) bytesPerPixel = 3 img = QtGui.QImage(self.frame, rgba.width, rgba.height, rgba.width * bytesPerPixel, QtGui.QImage.Format_RGB888) #img = QtGui.QImage(rgba.planes[0], rgba.width, rgba.height, QtGui.QImage.Format_RGB888) #pixmap = QtGui.QPixmap.fromImage(img) self.frame_ready.emit(img, target_frame) def get_frame(self, target_frame): if target_frame != self.active_frame: return print('seeking to', target_frame) seek_frame = target_frame rate = self.rate time_base = self.time_base frame = None reseek = 250 original_target_frame_pts = None while reseek >= 0: # convert seek_frame to pts target_sec = seek_frame * 1 / rate target_pts = int(target_sec / time_base) + self.start_time if original_target_frame_pts is None: original_target_frame_pts = target_pts self.stream.seek(int(target_pts)) frame_index = None frame_cache = [] for i, (frame_index, frame) in enumerate(self.next_frame()): # optimization if the time slider has changed, the requested frame no longer valid if target_frame != self.active_frame: return print(" ", i, "at frame", frame_index, "at ts:", frame.pts, frame.dts, "target:", target_pts, 'orig', original_target_frame_pts) if frame_index is None: pass elif frame_index >= target_frame: break frame_cache.append(frame) # Check if we over seeked, if we over seekd we need to seek to a earlier time # but still looking for the target frame if frame_index != target_frame: if frame_index is None: over_seek = '?' else: over_seek = frame_index - target_frame if frame_index > target_frame: print(over_seek, frame_cache) if over_seek <= len(frame_cache): print("over seeked by %i, using cache" % over_seek) frame = frame_cache[-over_seek] break seek_frame -= 1 reseek -= 1 print("over seeked by %s, backtracking.. 
seeking: %i target: %i retry: %i" % (str(over_seek), seek_frame, target_frame, reseek)) else: break if reseek < 0: raise ValueError("seeking failed %i" % frame_index) # frame at this point should be the correct frame if frame: return frame else: raise ValueError("seeking failed %i" % target_frame) def get_frame_count(self): frame_count = None if self.stream.frames: frame_count = self.stream.frames elif self.stream.duration: frame_count = pts_to_frame(self.stream.duration, float(self.stream.time_base), get_frame_rate(self.stream), 0) elif self.file.duration: frame_count = pts_to_frame(self.file.duration, 1 / float(AV_TIME_BASE), get_frame_rate(self.stream), 0) else: raise ValueError("Unable to determine number for frames") seek_frame = frame_count retry = 100 while retry: target_sec = seek_frame * 1 / self.rate target_pts = int(target_sec / self.time_base) + self.start_time self.stream.seek(int(target_pts)) frame_index = None for frame_index, frame in self.next_frame(): print(frame_index, frame) continue if not frame_index is None: break else: seek_frame -= 1 retry -= 1 print("frame count seeked", frame_index, "container frame count", frame_count) return frame_index or frame_count @QtCore.pyqtSlot(object) def set_file(self, path): self.file = av.open(path) self.stream = next(s for s in self.file.streams if s.type == b'video') self.rate = get_frame_rate(self.stream) self.time_base = float(self.stream.time_base) index, first_frame = next(self.next_frame()) self.stream.seek(self.stream.start_time) # find the pts of the first frame index, first_frame = next(self.next_frame()) if self.pts_seen: pts = first_frame.pts else: pts = first_frame.dts self.start_time = pts or first_frame.dts print("First pts", pts, self.stream.start_time, first_frame) #self.nb_frames = get_frame_count(self.file, self.stream) self.nb_frames = self.get_frame_count() self.update_frame_range.emit(self.nb_frames) class DisplayWidget(QtGui.QLabel): def __init__(self, parent=None): super(DisplayWidget, 
self).__init__(parent) #self.setScaledContents(True) self.setMinimumSize(1920 / 10, 1080 / 10) size_policy = QtGui.QSizePolicy(QtGui.QSizePolicy.Preferred, QtGui.QSizePolicy.Preferred) size_policy.setHeightForWidth(True) self.setSizePolicy(size_policy) self.setAlignment(Qt.AlignHCenter | Qt.AlignBottom) self.pixmap = None self.setMargin(10) def heightForWidth(self, width): return width * 9 / 16.0 @QtCore.pyqtSlot(object, object) def setPixmap(self, img, index): #if index == self.current_index: self.pixmap = QtGui.QPixmap.fromImage(img) #super(DisplayWidget, self).setPixmap(self.pixmap) super(DisplayWidget, self).setPixmap(self.pixmap.scaled(self.size(), Qt.KeepAspectRatio, Qt.SmoothTransformation)) def sizeHint(self): width = self.width() return QtCore.QSize(width, self.heightForWidth(width)) def resizeEvent(self, event): if self.pixmap: super(DisplayWidget, self).setPixmap(self.pixmap.scaled(self.size(), Qt.KeepAspectRatio, Qt.SmoothTransformation)) def sizeHint(self): return QtCore.QSize(1920 / 2.5, 1080 / 2.5) class VideoPlayerWidget(QtGui.QWidget): request_frame = QtCore.pyqtSignal(object) load_file = QtCore.pyqtSignal(object) def __init__(self, parent=None): super(VideoPlayerWidget, self).__init__(parent) self.display = DisplayWidget() self.timeline = QtGui.QScrollBar(Qt.Horizontal) self.frame_grabber = FrameGrabber() self.frame_control = QtGui.QSpinBox() self.frame_control.setFixedWidth(100) self.timeline.valueChanged.connect(self.frame_changed) self.frame_control.valueChanged.connect(self.frame_changed) self.request_frame.connect(self.frame_grabber.request_frame) self.load_file.connect(self.frame_grabber.set_file) self.frame_grabber.frame_ready.connect(self.display.setPixmap) self.frame_grabber.update_frame_range.connect(self.set_frame_range) self.frame_grabber_thread = QtCore.QThread() self.frame_grabber.moveToThread(self.frame_grabber_thread) self.frame_grabber_thread.start() control_layout = QtGui.QHBoxLayout() control_layout.addWidget(self.frame_control) 
        control_layout.addWidget(self.timeline)

        layout = QtGui.QVBoxLayout()
        layout.addWidget(self.display)
        layout.addLayout(control_layout)

        self.setLayout(layout)
        self.setAcceptDrops(True)

    def set_file(self, path):
        #self.frame_grabber.set_file(path)
        self.load_file.emit(path)
        self.frame_changed(0)

    @QtCore.pyqtSlot(object)
    def set_frame_range(self, maximum):
        print("frame range =", maximum)
        self.timeline.setMaximum(maximum)
        self.frame_control.setMaximum(maximum)

    def frame_changed(self, value):
        self.timeline.blockSignals(True)
        self.frame_control.blockSignals(True)

        self.timeline.setValue(value)
        self.frame_control.setValue(value)

        self.timeline.blockSignals(False)
        self.frame_control.blockSignals(False)

        #self.display.current_index = value
        self.frame_grabber.active_frame = value
        self.request_frame.emit(value)

    def keyPressEvent(self, event):
        if event.key() in (Qt.Key_Right, Qt.Key_Left):
            direction = 1
            if event.key() == Qt.Key_Left:
                direction = -1

            if event.modifiers() == Qt.ShiftModifier:
                print('shift')
                direction *= 10

            self.timeline.setValue(self.timeline.value() + direction)
        else:
            super(VideoPlayerWidget, self).keyPressEvent(event)

    def mousePressEvent(self, event):
        # clear focus of spinbox
        focused_widget = QtGui.QApplication.focusWidget()
        if focused_widget:
            focused_widget.clearFocus()

        super(VideoPlayerWidget, self).mousePressEvent(event)

    def dragEnterEvent(self, event):
        event.accept()

    def dropEvent(self, event):
        mime = event.mimeData()
        event.accept()

        if mime.hasUrls():
            path = str(mime.urls()[0].path())
            self.set_file(path)

    def closeEvent(self, event):
        self.frame_grabber.active_frame = -1
        self.frame_grabber_thread.quit()
        self.frame_grabber_thread.wait()
        event.accept()


if __name__ == "__main__":
    app = QtGui.QApplication(sys.argv)

    window = VideoPlayerWidget()

    test_file = sys.argv[1]
    window.set_file(test_file)
    window.show()

    sys.exit(app.exec_())


PyAV-8.1.0/scratchpad/glproxy.py

'''Mikes wrapper for the visualizer???'''

from contextlib import contextmanager

from OpenGL.GLUT import *
from OpenGL.GLU import *
from OpenGL.GL import *
import OpenGL


__all__ = '''
    gl
    glu
    glut
'''.strip().split()


class ModuleProxy(object):

    def __init__(self, name, module):
        self.name = name
        self.module = module

    def __getattr__(self, name):
        if name.isupper():
            return getattr(self.module, self.name.upper() + '_' + name)
        else:
            # convert to camel case
            name = name.split('_')
            name = [x[0].upper() + x[1:] for x in name]
            name = ''.join(name)
            return getattr(self.module, self.name + name)


class GLProxy(ModuleProxy):

    @contextmanager
    def matrix(self):
        self.module.glPushMatrix()
        try:
            yield
        finally:
            self.module.glPopMatrix()

    @contextmanager
    def attrib(self, *args):
        mask = 0
        for arg in args:
            if isinstance(arg, str):
                arg = getattr(self.module, 'GL_%s_BIT' % arg.upper())
            mask |= arg
        self.module.glPushAttrib(mask)
        try:
            yield
        finally:
            self.module.glPopAttrib()

    def enable(self, *args, **kwargs):
        self._enable(True, args, kwargs)
        return self._apply_on_exit(self._enable, False, args, kwargs)

    def disable(self, *args, **kwargs):
        self._enable(False, args, kwargs)
        return self._apply_on_exit(self._enable, True, args, kwargs)

    def _enable(self, enable, args, kwargs):
        todo = []
        for arg in args:
            if isinstance(arg, str):
                arg = getattr(self.module, 'GL_%s' % arg.upper())
            todo.append((arg, enable))
        for key, value in kwargs.items():  # was iteritems(), which is Python-2-only
            flag = getattr(self.module, 'GL_%s' % key.upper())
            value = value if enable else not value
            todo.append((flag, value))
        for flag, value in todo:
            if value:
                self.module.glEnable(flag)
            else:
                self.module.glDisable(flag)

    def begin(self, arg):
        if isinstance(arg, str):
            arg = getattr(self.module, 'GL_%s' % arg.upper())
        self.module.glBegin(arg)
        return self._apply_on_exit(self.module.glEnd)

    @contextmanager
    def _apply_on_exit(self, func, *args, **kwargs):
        try:
            yield
        finally:
            func(*args, **kwargs)


gl = GLProxy('gl', OpenGL.GL)
glu = ModuleProxy('glu', OpenGL.GLU)
glut = ModuleProxy('glut', OpenGL.GLUT)
PyAV-8.1.0/scratchpad/graph.py

from __future__ import print_function

from av.filter.graph import Graph


g = Graph()
print(g.dump())

f = g.pull()
print(f)

f = f.reformat(format='rgb24')
print(f)

img = f.to_image()
print(img)

img.save('graph.png')


PyAV-8.1.0/scratchpad/player.py

from __future__ import print_function

import argparse
import ctypes
import os
import sys
import pprint
import time

from qtproxy import Q
from glproxy import gl

import av


WIDTH = 960
HEIGHT = 540


class PlayerGLWidget(Q.GLWidget):

    def initializeGL(self):
        print('initialize GL')
        gl.clearColor(0, 0, 0, 0)
        gl.enable(gl.TEXTURE_2D)
        # gl.texEnv(gl.TEXTURE_ENV, gl.TEXTURE_ENV_MODE, gl.DECAL)

        self.tex_id = gl.genTextures(1)
        gl.bindTexture(gl.TEXTURE_2D, self.tex_id)
        gl.texParameter(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST)
        gl.texParameter(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST)

        print('texture id', self.tex_id)

    def setImage(self, w, h, img):
        gl.texImage2D(gl.TEXTURE_2D, 0, 3, w, h, 0, gl.RGB, gl.UNSIGNED_BYTE, img)

    def resizeGL(self, w, h):
        print('resize to', w, h)
        gl.viewport(0, 0, w, h)
        # gl.matrixMode(gl.PROJECTION)
        # gl.loadIdentity()
        # gl.ortho(0, w, 0, h, -10, 10)
        # gl.matrixMode(gl.MODELVIEW)

    def paintGL(self):
        # print 'paint!'
        gl.clear(gl.COLOR_BUFFER_BIT)
        with gl.begin('polygon'):
            gl.texCoord(0, 0); gl.vertex(-1, 1)
            gl.texCoord(1, 0); gl.vertex(1, 1)
            gl.texCoord(1, 1); gl.vertex(1, -1)
            gl.texCoord(0, 1); gl.vertex(-1, -1)


parser = argparse.ArgumentParser()
parser.add_argument('-f', '--format')
parser.add_argument('path')
args = parser.parse_args()


def _iter_images():
    video = av.open(args.path, format=args.format)
    stream = next(s for s in video.streams if s.type == 'video')
    for packet in video.demux(stream):
        for frame in packet.decode():
            yield frame.reformat(frame.width, frame.height, 'rgb24')


image_iter = _iter_images()

app = Q.Application([])

glwidget = PlayerGLWidget()
glwidget.setFixedWidth(WIDTH)
glwidget.setFixedHeight(HEIGHT)
glwidget.show()
glwidget.raise_()

start_time = 0
count = 0

timer = Q.Timer()
timer.setInterval(1000 / 30)

@timer.timeout.connect
def on_timeout(*args):
    global start_time, count
    start_time = start_time or time.time()

    frame = next(image_iter)
    ptr = ctypes.c_void_p(frame.planes[0].ptr)
    glwidget.setImage(frame.width, frame.height, ptr)
    glwidget.updateGL()

    count += 1
    elapsed = time.time() - start_time
    print(frame.pts, frame.dts, '%.2ffps' % (count / elapsed))

timer.start()

app.exec_()


PyAV-8.1.0/scratchpad/qtproxy.py

import sys
sys.path.append('/usr/local/lib/python2.7/site-packages')

from PyQt4 import QtCore, QtGui, QtOpenGL, QtMultimedia


class QtProxy(object):

    def __init__(self, *modules):
        self._modules = modules

    def __getattr__(self, base_name):
        for mod in self._modules:
            for prefix in ('Q', '', 'Qt'):
                name = prefix + base_name
                obj = getattr(mod, name, None)
                if obj is not None:
                    setattr(self, base_name, obj)
                    return obj
        raise AttributeError(base_name)


Q = QtProxy(QtGui, QtCore, QtCore.Qt, QtOpenGL, QtMultimedia)


PyAV-8.1.0/scratchpad/remux.py

from __future__ import print_function

import array
import argparse
import logging
import sys
import pprint
import subprocess

from PIL import Image

from av import open, time_base


logging.basicConfig(level=logging.DEBUG)


def format_time(time, time_base):
    if time is None:
        return 'None'
    return '%.3fs (%s or %s/%s)' % (time_base * time, time_base * time, time_base.numerator * time, time_base.denominator)


arg_parser = argparse.ArgumentParser()
arg_parser.add_argument('input')
arg_parser.add_argument('output')
arg_parser.add_argument('-F', '--iformat')
arg_parser.add_argument('-O', '--ioption', action='append', default=[])
arg_parser.add_argument('-f', '--oformat')
arg_parser.add_argument('-o', '--ooption', action='append', default=[])
arg_parser.add_argument('-a', '--noaudio', action='store_true')
arg_parser.add_argument('-v', '--novideo', action='store_true')
arg_parser.add_argument('-s', '--nosubs', action='store_true')
arg_parser.add_argument('-d', '--nodata', action='store_true')
arg_parser.add_argument('-c', '--count', type=int, default=0)
args = arg_parser.parse_args()


input_ = open(args.input,
    format=args.iformat,
    options=dict(x.split('=') for x in args.ioption),
)
output = open(args.output, 'w',
    format=args.oformat,
    options=dict(x.split('=') for x in args.ooption),
)

in_to_out = {}

for i, stream in enumerate(input_.streams):
    if (
        (stream.type == 'audio' and not args.noaudio) or
        (stream.type == 'video' and not args.novideo) or
        (stream.type == 'subtitle' and not args.nosubs) or  # was args.nosubtitle, which is never defined
        (stream.type == 'data' and not args.nodata)
    ):
        in_to_out[stream] = ostream = output.add_stream(template=stream)

for i, packet in enumerate(input_.demux(in_to_out.keys())):

    if args.count and i >= args.count:
        break

    print('%02d %r' % (i, packet))
    print('\tin: ', packet.stream)

    if packet.dts is None:
        continue

    packet.stream = in_to_out[packet.stream]

    print('\tout:', packet.stream)
    output.mux(packet)

output.close()


PyAV-8.1.0/scratchpad/resource_use.py

from __future__ import division, print_function

import argparse
import resource
import gc

import av


parser = argparse.ArgumentParser()
parser.add_argument('-c', '--count', type=int, default=5)
parser.add_argument('-f', '--frames', type=int, default=100)
parser.add_argument('--print', dest='print_', action='store_true')
parser.add_argument('--to-rgb', action='store_true')
parser.add_argument('--to-image', action='store_true')
parser.add_argument('--gc', '-g', action='store_true')
parser.add_argument('input')
args = parser.parse_args()


def format_bytes(n):
    order = 0
    while n > 1024:
        order += 1
        n //= 1024
    return '%d%sB' % (n, ('', 'k', 'M', 'G', 'T', 'P')[order])


usage = []

for round_ in range(args.count):  # was xrange, which is Python-2-only

    print('Round %d/%d:' % (round_ + 1, args.count))

    if args.gc:
        gc.collect()
    usage.append(resource.getrusage(resource.RUSAGE_SELF))

    fh = av.open(args.input)
    vs = next(s for s in fh.streams if s.type == 'video')

    fi = 0
    for packet in fh.demux([vs]):
        for frame in packet.decode():
            if args.print_:
                print(frame)
            if args.to_rgb:
                print(frame.to_rgb())
            if args.to_image:
                print(frame.to_image())
            fi += 1
            if fi > args.frames:
                break

    frame = packet = fh = vs = None

usage.append(resource.getrusage(resource.RUSAGE_SELF))

for i in range(len(usage) - 1):
    before = usage[i]
    after = usage[i + 1]
    print('%s (%s)' % (format_bytes(after.ru_maxrss), format_bytes(after.ru_maxrss - before.ru_maxrss)))


PyAV-8.1.0/scratchpad/save_subtitles.py

from __future__ import print_function

"""
As you can see, the subtitle API needs some work.
""" import os import sys import pprint from PIL import Image from av import open if not os.path.exists('subtitles'): os.makedirs('subtitles') video = open(sys.argv[1]) streams = [s for s in video.streams if s.type == b'subtitle'] if not streams: print('no subtitles') exit(1) print(streams) count = 0 for pi, packet in enumerate(video.demux([streams[0]])): print('packet', pi) for si, subtitle in enumerate(packet.decode()): print('\tsubtitle', si, subtitle) for ri, rect in enumerate(subtitle.rects): if rect.type == 'ass': print('\t\tass: ', rect, rect.ass.rstrip('\n')) if rect.type == 'text': print('\t\ttext: ', rect, rect.text.rstrip('\n')) if rect.type == 'bitmap': print('\t\tbitmap: ', rect, rect.width, rect.height, rect.pict_buffers) buffers = [b for b in rect.pict_buffers if b is not None] if buffers: imgs = [ Image.frombuffer('L', (rect.width, rect.height), buffer, "raw", "L", 0, 1) for buffer in buffers ] if len(imgs) == 1: img = imgs[0] elif len(imgs) == 2: img = Image.merge('LA', imgs) else: img = Image.merge('RGBA', imgs) img.save('subtitles/%04d.png' % count) count += 1 if count > 10: pass # exit() PyAV-8.1.0/scratchpad/second_seek_example.py000066400000000000000000000343501416312437500210350ustar00rootroot00000000000000from __future__ import print_function """ Note this example only really works accurately on constant frame rate media. 
""" from PyQt4 import QtGui from PyQt4 import QtCore from PyQt4.QtCore import Qt import sys import av AV_TIME_BASE = 1000000 def pts_to_frame(pts, time_base, frame_rate, start_time): return int(pts * time_base * frame_rate) - int(start_time * time_base * frame_rate) def get_frame_rate(stream): if stream.average_rate.denominator and stream.average_rate.numerator: return float(stream.average_rate) if stream.time_base.denominator and stream.time_base.numerator: return 1.0 / float(stream.time_base) else: raise ValueError("Unable to determine FPS") def get_frame_count(f, stream): if stream.frames: return stream.frames elif stream.duration: return pts_to_frame(stream.duration, float(stream.time_base), get_frame_rate(stream), 0) elif f.duration: return pts_to_frame(f.duration, 1 / float(AV_TIME_BASE), get_frame_rate(stream), 0) else: raise ValueError("Unable to determine number for frames") class FrameGrabber(QtCore.QObject): frame_ready = QtCore.pyqtSignal(object, object) update_frame_range = QtCore.pyqtSignal(object, object) def __init__(self, parent=None): super(FrameGrabber, self).__init__(parent) self.file = None self.stream = None self.frame = None self.active_time = None self.start_time = 0 self.pts_seen = False self.nb_frames = None self.rate = None self.time_base = None self.pts_map = {} def next_frame(self): frame_index = None rate = self.rate time_base = self.time_base self.pts_seen = False for packet in self.file.demux(self.stream): #print " pkt", packet.pts, packet.dts, packet if packet.pts: self.pts_seen = True for frame in packet.decode(): if frame_index is None: if self.pts_seen: pts = frame.pts else: pts = frame.dts if not pts is None: frame_index = pts_to_frame(pts, time_base, rate, self.start_time) elif not frame_index is None: frame_index += 1 if not frame.dts in self.pts_map: secs = None if not pts is None: secs = pts * time_base self.pts_map[frame.dts] = secs #if frame.pts == None: yield frame_index, frame @QtCore.pyqtSlot(object) def 
request_time(self, second): frame = self.get_frame(second) if not frame: return rgba = frame.reformat(frame.width, frame.height, "rgb24", 'itu709') #print rgba.to_image().save("test.png") # could use the buffer interface here instead, some versions of PyQt don't support it for some reason # need to track down which version they added support for it self.frame = bytearray(rgba.planes[0]) bytesPerPixel = 3 img = QtGui.QImage(self.frame, rgba.width, rgba.height, rgba.width * bytesPerPixel, QtGui.QImage.Format_RGB888) #img = QtGui.QImage(rgba.planes[0], rgba.width, rgba.height, QtGui.QImage.Format_RGB888) #pixmap = QtGui.QPixmap.fromImage(img) self.frame_ready.emit(img, second) def get_frame(self, target_sec): if target_sec != self.active_time: return print('seeking to', target_sec) rate = self.rate time_base = self.time_base target_pts = int(target_sec / time_base) + self.start_time seek_pts = target_pts self.stream.seek(seek_pts) #frame_cache = [] last_frame = None for i, (frame_index, frame) in enumerate(self.next_frame()): if target_sec != self.active_time: return pts = frame.dts if self.pts_seen: pts = frame.pts if pts > target_pts: break print(frame.pts, seek_pts) last_frame = frame if last_frame: return last_frame def get_frame_old(self, target_frame): if target_frame != self.active_frame: return print('seeking to', target_frame) seek_frame = target_frame rate = self.rate time_base = self.time_base frame = None reseek = 250 original_target_frame_pts = None while reseek >= 0: # convert seek_frame to pts target_sec = seek_frame * 1 / rate target_pts = int(target_sec / time_base) + self.start_time if original_target_frame_pts is None: original_target_frame_pts = target_pts self.stream.seek(int(target_pts)) frame_index = None frame_cache = [] for i, (frame_index, frame) in enumerate(self.next_frame()): # optimization if the time slider has changed, the requested frame no longer valid if target_frame != self.active_frame: return print(" ", i, "at frame", frame_index, 
"at ts:", frame.pts, frame.dts, "target:", target_pts, 'orig', original_target_frame_pts) if frame_index is None: pass elif frame_index >= target_frame: break frame_cache.append(frame) # Check if we over seeked, if we over seekd we need to seek to a earlier time # but still looking for the target frame if frame_index != target_frame: if frame_index is None: over_seek = '?' else: over_seek = frame_index - target_frame if frame_index > target_frame: print(over_seek, frame_cache) if over_seek <= len(frame_cache): print("over seeked by %i, using cache" % over_seek) frame = frame_cache[-over_seek] break seek_frame -= 1 reseek -= 1 print("over seeked by %s, backtracking.. seeking: %i target: %i retry: %i" % (str(over_seek), seek_frame, target_frame, reseek)) else: break if reseek < 0: raise ValueError("seeking failed %i" % frame_index) # frame at this point should be the correct frame if frame: return frame else: raise ValueError("seeking failed %i" % target_frame) def get_frame_count(self): frame_count = None if self.stream.frames: frame_count = self.stream.frames elif self.stream.duration: frame_count = pts_to_frame(self.stream.duration, float(self.stream.time_base), get_frame_rate(self.stream), 0) elif self.file.duration: frame_count = pts_to_frame(self.file.duration, 1 / float(AV_TIME_BASE), get_frame_rate(self.stream), 0) else: raise ValueError("Unable to determine number for frames") seek_frame = frame_count retry = 100 while retry: target_sec = seek_frame * 1 / self.rate target_pts = int(target_sec / self.time_base) + self.start_time self.stream.seek(int(target_pts)) frame_index = None for frame_index, frame in self.next_frame(): print(frame_index, frame) continue if not frame_index is None: break else: seek_frame -= 1 retry -= 1 print("frame count seeked", frame_index, "container frame count", frame_count) return frame_index or frame_count @QtCore.pyqtSlot(object) def set_file(self, path): self.file = av.open(path) self.stream = next(s for s in self.file.streams 
if s.type == b'video') self.rate = get_frame_rate(self.stream) self.time_base = float(self.stream.time_base) index, first_frame = next(self.next_frame()) self.stream.seek(self.stream.start_time) # find the pts of the first frame index, first_frame = next(self.next_frame()) if self.pts_seen: pts = first_frame.pts else: pts = first_frame.dts self.start_time = pts or first_frame.dts print("First pts", pts, self.stream.start_time, first_frame) #self.nb_frames = get_frame_count(self.file, self.stream) self.nb_frames = self.get_frame_count() dur = None if self.stream.duration: dur = self.stream.duration * self.time_base else: dur = self.file.duration * 1.0 / float(AV_TIME_BASE) self.update_frame_range.emit(dur, self.rate) class DisplayWidget(QtGui.QLabel): def __init__(self, parent=None): super(DisplayWidget, self).__init__(parent) #self.setScaledContents(True) self.setMinimumSize(1920 / 10, 1080 / 10) size_policy = QtGui.QSizePolicy(QtGui.QSizePolicy.Preferred, QtGui.QSizePolicy.Preferred) size_policy.setHeightForWidth(True) self.setSizePolicy(size_policy) self.setAlignment(Qt.AlignHCenter | Qt.AlignBottom) self.pixmap = None self.setMargin(10) def heightForWidth(self, width): return width * 9 / 16.0 @QtCore.pyqtSlot(object, object) def setPixmap(self, img, index): #if index == self.current_index: self.pixmap = QtGui.QPixmap.fromImage(img) #super(DisplayWidget, self).setPixmap(self.pixmap) super(DisplayWidget, self).setPixmap(self.pixmap.scaled(self.size(), Qt.KeepAspectRatio, Qt.SmoothTransformation)) def sizeHint(self): width = self.width() return QtCore.QSize(width, self.heightForWidth(width)) def resizeEvent(self, event): if self.pixmap: super(DisplayWidget, self).setPixmap(self.pixmap.scaled(self.size(), Qt.KeepAspectRatio, Qt.SmoothTransformation)) def sizeHint(self): return QtCore.QSize(1920 / 2.5, 1080 / 2.5) class VideoPlayerWidget(QtGui.QWidget): request_time = QtCore.pyqtSignal(object) load_file = QtCore.pyqtSignal(object) def __init__(self, parent=None): 
super(VideoPlayerWidget, self).__init__(parent) self.rate = None self.display = DisplayWidget() self.timeline = QtGui.QScrollBar(Qt.Horizontal) self.timeline_base = 100000 self.frame_grabber = FrameGrabber() self.frame_control = QtGui.QDoubleSpinBox() self.frame_control.setFixedWidth(100) self.timeline.valueChanged.connect(self.slider_changed) self.frame_control.valueChanged.connect(self.frame_changed) self.request_time.connect(self.frame_grabber.request_time) self.load_file.connect(self.frame_grabber.set_file) self.frame_grabber.frame_ready.connect(self.display.setPixmap) self.frame_grabber.update_frame_range.connect(self.set_frame_range) self.frame_grabber_thread = QtCore.QThread() self.frame_grabber.moveToThread(self.frame_grabber_thread) self.frame_grabber_thread.start() control_layout = QtGui.QHBoxLayout() control_layout.addWidget(self.frame_control) control_layout.addWidget(self.timeline) layout = QtGui.QVBoxLayout() layout.addWidget(self.display) layout.addLayout(control_layout) self.setLayout(layout) self.setAcceptDrops(True) def set_file(self, path): #self.frame_grabber.set_file(path) self.load_file.emit(path) self.frame_changed(0) @QtCore.pyqtSlot(object, object) def set_frame_range(self, maximum, rate): print("frame range =", maximum, rate, int(maximum * self.timeline_base)) self.timeline.setMaximum(int(maximum * self.timeline_base)) self.frame_control.setMaximum(maximum) self.frame_control.setSingleStep(1 / rate) #self.timeline.setSingleStep( int(AV_TIME_BASE * 1/rate)) self.rate = rate def slider_changed(self, value): print('..', value) self.frame_changed(value * 1.0 / float(self.timeline_base)) def frame_changed(self, value): self.timeline.blockSignals(True) self.frame_control.blockSignals(True) self.timeline.setValue(int(value * self.timeline_base)) self.frame_control.setValue(value) self.timeline.blockSignals(False) self.frame_control.blockSignals(False) #self.display.current_index = value self.frame_grabber.active_time = value 
self.request_time.emit(value) def keyPressEvent(self, event): if event.key() in (Qt.Key_Right, Qt.Key_Left): direction = 1 if event.key() == Qt.Key_Left: direction = -1 if event.modifiers() == Qt.ShiftModifier: print('shift') direction *= 10 direction = direction * 1 / self.rate self.frame_changed(self.frame_control.value() + direction) else: super(VideoPlayerWidget, self).keyPressEvent(event) def mousePressEvent(self, event): # clear focus of spinbox focused_widget = QtGui.QApplication.focusWidget() if focused_widget: focused_widget.clearFocus() super(VideoPlayerWidget, self).mousePressEvent(event) def dragEnterEvent(self, event): event.accept() def dropEvent(self, event): mime = event.mimeData() event.accept() if mime.hasUrls(): path = str(mime.urls()[0].path()) self.set_file(path) def closeEvent(self, event): self.frame_grabber.active_time = -1 self.frame_grabber_thread.quit() self.frame_grabber_thread.wait() for key, value in sorted(self.frame_grabber.pts_map.items()): print(key, '=', value) event.accept() if __name__ == "__main__": app = QtGui.QApplication(sys.argv) window = VideoPlayerWidget() test_file = sys.argv[1] window.set_file(test_file) window.show() sys.exit(app.exec_()) PyAV-8.1.0/scratchpad/seekmany.py000066400000000000000000000024271416312437500166540ustar00rootroot00000000000000from __future__ import print_function import sys import av container = av.open(sys.argv[1]) duration = container.duration stream = container.streams.video[0] print('container.duration', duration, float(duration) / av.time_base) print('container.time_base', av.time_base) print('stream.duration', stream.duration) print('stream.time_base', stream.time_base) print('codec.time_base', stream.codec_context.time_base) print('scale', float(stream.codec_context.time_base / stream.time_base)) print() exit() real_duration = float(duration) / av.time_base steps = 120 tolerance = real_duration / (steps * 4) print('real_duration', real_duration) print() def iter_frames(): for packet in 
container.demux(stream):
        for frame in packet.decode():
            yield frame

for i in xrange(steps):
    time = real_duration * i / steps
    min_time = time - tolerance
    pts = time / stream.time_base
    print('seeking', time, pts)
    stream.seek(int(pts))
    skipped = 0
    for frame in iter_frames():
        ftime = float(frame.pts * stream.time_base)
        if ftime >= min_time:
            break
        skipped += 1
    else:
        print('    WARNING: iterated to the end')
    print('    ', skipped, frame.pts, float(frame.pts * stream.time_base))

# WTF is this stream.time_base?

PyAV-8.1.0/scratchpad/show_frames_opencv.py

import os
import sys

import cv2

from av import open

video = open(sys.argv[1])
stream = next(s for s in video.streams if s.type == 'video')

for packet in video.demux(stream):
    for frame in packet.decode():
        # some other formats gray16be, bgr24, rgb24
        img = frame.to_ndarray(format='bgr24')
        cv2.imshow("Test", img)
        if cv2.waitKey(1) == 27:
            break

cv2.destroyAllWindows()

PyAV-8.1.0/scratchpad/sidedata.py

import sys

import av

fh = av.open(sys.argv[1])
fh.streams.video[0].export_mvs = True
# fh.streams.video[0].flags2 |= 'EXPORT_MVS'

for pi, packet in enumerate(fh.demux()):
    for fi, frame in enumerate(packet.decode()):
        for di, data in enumerate(frame.side_data):
            print(pi, fi, di, data)
            print(data.to_ndarray())
            for mi, vec in enumerate(data):
                print(mi, vec)
                if mi > 10:
                    exit()

PyAV-8.1.0/scripts/activate.sh

#!/bin/bash

# Make sure this is sourced.
if [[ "$0" == "${BASH_SOURCE[0]}" ]]; then
    echo This must be sourced.
    exit 1
fi

export PYAV_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.."; pwd)"

if [[ "$TRAVIS" ]]; then
    PYAV_LIBRARY=$LIBRARY
fi

if [[ ! "$PYAV_LIBRARY" ]]; then

    # Pull from command line argument.
if [[ "$1" ]]; then PYAV_LIBRARY="$1" else PYAV_LIBRARY=ffmpeg-4.2 echo "No \$PYAV_LIBRARY set; defaulting to $PYAV_LIBRARY" fi fi export PYAV_LIBRARY if [[ ! "$PYAV_PYTHON" ]]; then PYAV_PYTHON="${PYAV_PYTHON-python3}" echo 'No $PYAV_PYTHON set; defaulting to python3.' fi # Hack for PyPy on GitHub Actions. # This is because PYAV_PYTHON is constructed from "python${{ matrix.config.python }}" # resulting in "pythonpypy3", which won't work. # It would be nice to clean this up, but I want it to work ASAP. if [[ "$PYAV_PYTHON" == *pypy* ]]; then PYAV_PYTHON=python fi export PYAV_PYTHON export PYAV_PIP="${PYAV_PIP-$PYAV_PYTHON -m pip}" if [[ "$GITHUB_ACTION" || "$TRAVIS" ]]; then # GitHub/Travis as a very self-contained environment. Lets just work in that. echo "We're on CI, so not setting up another virtualenv." if [[ "$TRAVIS_PYTHON_VERSION" = "2.7" || "$TRAVIS_PYTHON_VERSION" = "pypy" ]]; then PYAV_PYTHON=python PYAV_PIP=pip fi else export PYAV_VENV_NAME="$(uname -s).$(uname -r).$("$PYAV_PYTHON" -c ' import sys import platform print("{}{}.{}".format(platform.python_implementation().lower(), *sys.version_info[:2])) ')" export PYAV_VENV="$PYAV_ROOT/venvs/$PYAV_VENV_NAME" if [[ ! -e "$PYAV_VENV/bin/python" ]]; then mkdir -p "$PYAV_VENV" virtualenv -p "$PYAV_PYTHON" "$PYAV_VENV" "$PYAV_VENV/bin/pip" install --upgrade pip setuptools fi if [[ -e "$PYAV_VENV/bin/activate" ]]; then source "$PYAV_VENV/bin/activate" else # Not a virtualenv (perhaps a debug Python); lets manually "activate" it. PATH="$PYAV_VENV/bin:$PATH" fi fi # Just a flag so that we know this was supposedly run. export _PYAV_ACTIVATED=1 if [[ ! "$PYAV_LIBRARY_BUILD_ROOT" && -d /vagrant ]]; then # On Vagrant, building the library in the shared directory causes some # problems, so we move it to the user's home. 
PYAV_LIBRARY_ROOT="/home/vagrant/vendor" fi export PYAV_LIBRARY_ROOT="${PYAV_LIBRARY_ROOT-$PYAV_ROOT/vendor}" export PYAV_LIBRARY_BUILD="${PYAV_LIBRARY_BUILD-$PYAV_LIBRARY_ROOT/build}" export PYAV_LIBRARY_PREFIX="$PYAV_LIBRARY_BUILD/$PYAV_LIBRARY" export PATH="$PYAV_LIBRARY_PREFIX/bin:$PATH" export PYTHONPATH="$PYAV_ROOT:$PYTHONPATH" export PKG_CONFIG_PATH="$PYAV_LIBRARY_PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH" export LD_LIBRARY_PATH="$PYAV_LIBRARY_PREFIX/lib:$LD_LIBRARY_PATH" export DYLD_LIBRARY_PATH="$PYAV_LIBRARY_PREFIX/lib:$DYLD_LIBRARY_PATH" PyAV-8.1.0/scripts/autolint000077500000000000000000000125501416312437500156240ustar00rootroot00000000000000#!/usr/bin/env python import argparse import fnmatch import os import re import sys import autopep8 import editorconfig import lib2to3.refactor SOURCE_ROOTS = ['av', 'docs', 'examples', 'include', 'scratchpad', 'scripts', 'tests'] SOURCE_EXTS = set('.py .pyx .pxd .rst'.split()) def iter_source_paths(): for root in SOURCE_ROOTS: for dir_path, _, file_names in os.walk(root): for file_name in file_names: base, ext = os.path.splitext(file_name) if base.startswith('.'): continue if ext not in SOURCE_EXTS: continue yield os.path.abspath(os.path.join(dir_path, file_name)) def apply_editorconfig(source, path): config = editorconfig.get_properties(path) soft_indent = config.get('indent_style', 'space') == 'space' indent_size = int(config.get('indent_size', 4)) do_trim = config['trim_trailing_whitespace'] == 'true' do_final_newline = config['insert_final_newline'] == 'true' spaced_indent = ' ' * indent_size output = [] for line in source.splitlines(): # Apply trim_trailing_whitespace. if do_trim: line = line.rstrip() # Adapt tabs to/from spaces. 
        m = re.match(r'(\s+)(.*)', line)
        if m:
            indent, content = m.groups()
            # str.replace returns a new string; the result must be assigned
            # back or the conversion is silently discarded.
            if soft_indent:
                indent = indent.replace('\t', spaced_indent)
            else:
                indent = indent.replace(spaced_indent, '\t')
            line = indent + content

        output.append(line)

    while output and not output[-1]:
        output.pop()
    if do_final_newline:
        output.append('')

    return '\n'.join(output)


pep8_fixes_by_ext = {
    pattern: tuple(filter(None, (x.split('#')[0].split('-')[0].strip() for x in value.splitlines())))
    for pattern, value in {

        '*': '''
            E121 - Fix indentation to be a multiple of four.
            E122 - Add absent indentation for hanging indentation.
        ''',

        '.py': '''
            E226 - Fix missing whitespace around arithmetic operator.
            E227 - Fix missing whitespace around bitwise/shift operator.
            E228 - Fix missing whitespace around modulo operator.
            E231 - Add missing whitespace.
            E242 - Remove extraneous whitespace around operator.
            W603 - Use "!=" instead of "<>"
            W604 - Use "repr()" instead of backticks.
            E22 - Fix extraneous whitespace around keywords.  # Messes with Cython.
            E241 - Fix extraneous whitespace around keywords.  # Messes with Cython.
        ''',

        '.py*': '''
            E224 - Remove extraneous whitespace around operator.
            # E301 - Add missing blank line.
            # E302 - Add missing 2 blank lines.
            # E303 - Remove extra blank lines.
            E251 - Remove whitespace around parameter '=' sign.
            E304 - Remove blank line following function decorator.
            E401 - Put imports on separate lines.
            E20 - Remove extraneous whitespace.
            E211 - Remove extraneous whitespace.
''', }.items() } def apply_autopep8(source, path): fixes = set() ext = os.path.splitext(path)[1] for pattern in pep8_fixes_by_ext: if fnmatch.fnmatch(ext, pattern): fixes.update(pep8_fixes_by_ext[pattern]) source = autopep8.fix_code(source, options=dict( select=filter(None, fixes), )) return source def apply_future(source, path): if os.path.splitext(path)[1] not in ('.py', ): return source m = re.search(r'^from __future__ import ([\w, \t]+)', source, flags=re.MULTILINE) if m: features = set(x.strip() for x in m.group(1).split(',')) else: features = set() fixes = [] if 'print_function' not in features and re.search(r'^\s*print\s+', source, flags=re.MULTILINE): fixes.append('lib2to3.fixes.fix_print') if not fixes: # Nothing to do. return source # The parser chokes if the last line is not empty. if not source.endswith('\n'): source += '\n' tool = lib2to3.refactor.RefactoringTool(fixes) tree = tool.refactor_string(source, path) source = str(tree) if 'print' in source: features.add('print_function') source = 'from __future__ import {}\n{}'.format(', '.join(sorted(features)), source) return source def main(): parser = argparse.ArgumentParser() parser.add_argument('-a', '--all', action='store_true') parser.add_argument('-e', '--editorconfig', action='store_true') parser.add_argument('-p', '--pep8', action='store_true') parser.add_argument('-f', '--future', action='store_true') args = parser.parse_args() if not (args.all or args.editorconfig or args.pep8 or args.future): print("Nothing to do.", file=sys.stderr) parser.print_usage() exit(1) for path in iter_source_paths(): before = after = open(path).read() print(path) if args.all or args.pep8: after = apply_autopep8(after, path) if args.all or args.future: after = apply_future(after, path) if args.all or args.editorconfig: after = apply_editorconfig(after, path) if before == after: continue with open(path, 'w') as fh: fh.write(after) if __name__ == '__main__': main() 
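The editorconfig normalization that `autolint` performs (trim trailing whitespace, enforce a final newline, convert leading tabs to spaces or back) can be sketched as a standalone function. This is a hypothetical minimal version for illustration, not the script's exact behavior — `normalize` and its parameter names are inventions here:

```python
import re


def normalize(source, indent_size=4, soft_indent=True,
              trim=True, final_newline=True):
    # Apply the three editorconfig rules the autolint script enforces:
    # trailing-whitespace trimming, indent-style conversion, and a
    # single guaranteed final newline.
    spaced = ' ' * indent_size
    out = []
    for line in source.splitlines():
        if trim:
            line = line.rstrip()
        m = re.match(r'(\s+)(.*)', line)
        if m:
            indent, content = m.groups()
            if soft_indent:
                indent = indent.replace('\t', spaced)   # tabs -> spaces
            else:
                indent = indent.replace(spaced, '\t')   # spaces -> tabs
            line = indent + content
        out.append(line)
    # Drop trailing blank lines, then add exactly one final newline.
    while out and not out[-1]:
        out.pop()
    if final_newline:
        out.append('')
    return '\n'.join(out)
```

For example, `normalize('def f():\n\treturn 1 \n\n')` yields `'def f():\n    return 1\n'` — the tab indent becomes four spaces, the trailing space is trimmed, and the extra blank line collapses into a single final newline.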
PyAV-8.1.0/scripts/build000077500000000000000000000011441416312437500150610ustar00rootroot00000000000000#!/bin/bash if [[ "$TRAVIS" && ("$TESTSUITE" == "isort" || "$TESTSUITE" == "flake8") ]]; then echo "We don't need to build PyAV for source linting." exit 0 fi if [[ ! "$_PYAV_ACTIVATED" ]]; then export here="$(cd "$(dirname "${BASH_SOURCE[0]}")"; pwd)" source "$here/activate.sh" fi cd "$PYAV_ROOT" export PATH="$PYAV_VENV/vendor/$PYAV_LIBRARY_SLUG/bin:$PATH" env | grep PYAV | sort echo echo PKG_CONFIG_PATH: $PKG_CONFIG_PATH echo LD_LIBRARY_PATH: $LD_LIBRARY_PATH echo which ffmpeg || exit 2 ffmpeg -version || exit 3 echo "$PYAV_PYTHON" setup.py config build_ext --inplace || exit 1 PyAV-8.1.0/scripts/build-debug-python000077500000000000000000000016271416312437500174720ustar00rootroot00000000000000#!/bin/bash export PYAV_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.."; pwd)" export PYAV_PYTHON_VERSION="2.7.13" export PYAV_PLATFORM_SLUG="$(uname -s).$(uname -r)" export PYAV_VENV_NAME="$PYAV_PLATFORM_SLUG.cpython-$PYAV_PYTHON_VERSION-debug" export PYAV_VENV="$PYAV_ROOT/venvs/$PYAV_VENV_NAME" export PYAV_PYTHON_SRC="$PYAV_ROOT/vendor/Python-$PYAV_PYTHON_VERSION" if [[ ! -d "$PYAV_PYTHON_SRC" ]]; then url="https://www.python.org/ftp/python/$PYAV_PYTHON_VERSION/Python-$PYAV_PYTHON_VERSION.tgz" echo "Downloading $url" wget -O "$PYAV_PYTHON_SRC.tgz" "$url" || exit 2 tar -C "$PYAV_ROOT/vendor" -xvzf "$PYAV_PYTHON_SRC.tgz" || exit 3 fi cd "$PYAV_PYTHON_SRC" || exit 4 # TODO: Make generic. export CPPFLAGS="-I$(brew --prefix openssl)/include" export LDFLAGS="-L$(brew --prefix openssl)/lib" # --with-pymalloc \ ./configure \ --with-pydebug \ --prefix "$PYAV_VENV" \ || exit 5 make -j 12 || exit 6 PyAV-8.1.0/scripts/build-deps000077500000000000000000000026721416312437500160210ustar00rootroot00000000000000#!/bin/bash if [[ ! 
"$_PYAV_ACTIVATED" ]]; then export here="$(cd "$(dirname "${BASH_SOURCE[0]}")"; pwd)" source "$here/activate.sh" fi cd "$PYAV_ROOT" # Always try to install the Python dependencies they are cheap. $PYAV_PIP install --upgrade -r tests/requirements.txt if [[ "$TRAVIS" && ("$TESTSUITE" == "isort" || "$TESTSUITE" == "flake8") ]]; then echo "We don't need to build dependencies for source linting." exit 0 fi # Skip the rest of the build if it already exists. if [[ -e "$PYAV_LIBRARY_PREFIX/bin/ffmpeg" ]]; then echo "We have a cached build of $PYAV_LIBRARY; skipping re-build." exit 0 fi mkdir -p "$PYAV_LIBRARY_ROOT" mkdir -p "$PYAV_LIBRARY_PREFIX" cd "$PYAV_LIBRARY_ROOT" # Download and expand the source. if [[ ! -d $PYAV_LIBRARY ]]; then url="https://ffmpeg.org/releases/$PYAV_LIBRARY.tar.gz" echo Downloading $url wget --no-check-certificate "$url" || exit 1 tar -xzf $PYAV_LIBRARY.tar.gz rm $PYAV_LIBRARY.tar.gz echo fi cd $PYAV_LIBRARY echo ./configure ./configure \ --disable-doc \ --disable-mmx \ --disable-optimizations \ --disable-static \ --disable-stripping \ --enable-debug=3 \ --enable-gpl \ --enable-libx264 \ --enable-shared \ --prefix="$PYAV_LIBRARY_PREFIX" \ || exit 2 echo echo make make -j4 || exit 3 echo echo make install make install || exit 4 echo echo Build products: cd ~ find "$PYAV_LIBRARY_PREFIX" -name '*libav*' PyAV-8.1.0/scripts/clean-branches000077500000000000000000000072041416312437500166320ustar00rootroot00000000000000#!/usr/bin/env python import subprocess # These are remote branches that got rebased or something, but I don't have # control over. 
ignored_hashes = set(''' 078daa1148a84849ea0890b388353acd7333fcea 0832d2bdaa048fca1393f3d3810b8b8810535f9f 08e86996736d77df7d694d0a67a126fc7eac7e94 097631535fa4cdb87204980541d3f9bb9c7a9ffb 16ce34ba1d1f1ed4327326a10565f1a9f07107b0 183cf1447571aa48baf0665d17d41be1e48f2cc6 245a4dca69cf0fff674c6ad5bcfb7929c7347bf6 28b4b3988981471ff173679db3643418ff3f5aaa 3977b5d5be22922f2eb4288e2682f1de8fad8e12 5b9f192165855942918f9bd957c30e918b97cbeb 6091e89de0ae4aff2ba76d21b1110409ef174b78 636afe3f0b5b07233edae8e333db35c044c36b30 74f79ef74ec281f5e0da51bcfd0b1051aa53edbf 7737ef6e9e7307c40f326e61cc9291047540bc49 8618940d333f44ff960d561dda34167d4dbb81d4 a2fb55e97788809b5f33b1b0c9241fc77312f606 aa044b3f62a6d7bf4dde18cba91b1d0dd8a0816a aa7d01ba458025ede1757e56b638002375bb864a aafe064e209b667f565c4f57a94b098474d0b184 afac2d8f89673c012d1f4b845b006911f55d1d86 b115786b950c87ef9c422752e014297903bca393 b737c6ceb6750d00f62dfdaa40fee3e757c680a3 b7bf427a485736e6e1c71605bdce101214bae09f ba02afa7ea160328b5a3be111c7e276fb9d3c961 bc5ffe456345286a64ce33ffe5ce6a2ee8b63f40 c45a337fe49875b1cc28a0501a704890be444765 c6b1a5ac03e775ea46bffac7bbfea9d73cd03b87 c9c0d63b09c450d494fba1c4073fbe18851dfaff cc270d6790c02e6c5e93313d1e6499ce534350b9 cdd8e4c085a55e258bd551f7bcf4fee60474aa05 eac71881c24d42f801e9c18e448855a402333960 efd12926b1f446c32f5a239c0b2d351fa2d78101 f04dce0e80b4f290482eba4fb3c3ec68f353bf01 f0d1e82dee788085cf4afad7656a90966e40f7a0 f518f6e7bf47e00fe0c73a5098ae40813920400f f779c4371fdace76ee572053b4acb3999ffd4107 '''.strip().split()) def get_branches(*args): cmd = ['git', 'branch', '-v', '--abbrev=40'] cmd.extend(args) res = {} for line in subprocess.check_output(cmd).decode().splitlines(): parts = line[2:].strip().split() name = parts[0] hash_ = parts[1] res[name] = hash_ return res def rm(*args): subprocess.check_call(('git', 'branch', '-D') + args) # Clean up everything that was merged for line in subprocess.check_output(['git', 'branch', '--merged']).decode().splitlines(): line = line.strip() if 
not line: continue parts = line.split() if parts[0] == '*': continue if parts[-1] in ('develop', 'master'): continue rm(parts[-1]) for line in subprocess.check_output(['git', 'branch', '-r', '--merged']).decode().splitlines(): name = line.strip().split()[-1] if not name: continue if name.split('/', 1)[-1] in ('develop', 'master'): continue rm('-r', name) our_branches = get_branches() for name, hash_ in get_branches('-r').items(): if hash_ in ignored_hashes: print("Removing ignored", name) rm('-r', name) continue if name.startswith('origin/'): our_branches[name] = hash_ for name in get_branches('-r', '--merged'): if name.startswith('origin/'): continue print("Removing merged", name) rm('-r', name) for name, hash_ in get_branches('-r', '--no-merged').items(): remote, branch = name.split('/', 1) if remote == 'origin': continue for prefix in '', 'origin/': our_name = prefix + branch if our_branches.get(our_name) == hash_: print("Removing identical", name) rm('-r', name) break # Anything that doesn't root at the same place as us. 
for name in get_branches('-r', '--no-contains', 'e105c0b4e64a0471f3f5375a86342c33cb942e23'): rm('-r', name) PyAV-8.1.0/scripts/fetch-vendor000077500000000000000000000033661416312437500163560ustar00rootroot00000000000000#!/usr/bin/env python import argparse import logging import json import os import shutil import struct import subprocess import sys def get_platform(): if sys.platform == "linux": return "manylinux_%s" % os.uname().machine elif sys.platform == "darwin": return "macosx_%s" % os.uname().machine elif sys.platform == "win32": return "win%s" % (struct.calcsize("P") * 8) else: raise Exception("Unsupported platfom %s" % sys.platform) parser = argparse.ArgumentParser(description="Fetch and extract tarballs") parser.add_argument("destination_dir") parser.add_argument("--cache-dir", default="tarballs") parser.add_argument("--config-file", default=__file__ + ".json") args = parser.parse_args() logging.basicConfig(level=logging.INFO) # read config file with open(args.config_file, "r") as fp: config = json.load(fp) # create fresh destination directory logging.info("Creating directory %s" % args.destination_dir) if os.path.exists(args.destination_dir): shutil.rmtree(args.destination_dir) os.mkdir(args.destination_dir) for url_template in config["urls"]: tarball_url = url_template.replace("{platform}", get_platform()) # download tarball tarball_name = tarball_url.split("/")[-1] tarball_file = os.path.join(args.cache_dir, tarball_name) if not os.path.exists(tarball_file): logging.info("Downloading %s" % tarball_url) if not os.path.exists(args.cache_dir): os.mkdir(args.cache_dir) subprocess.check_call( ["curl", "--location", "--output", tarball_file, "--silent", tarball_url] ) # extract tarball logging.info("Extracting %s" % tarball_name) subprocess.check_call(["tar", "-C", args.destination_dir, "-xf", tarball_file]) PyAV-8.1.0/scripts/fetch-vendor.json000066400000000000000000000001571416312437500173160ustar00rootroot00000000000000{ "urls": 
["https://github.com/PyAV-Org/pyav-ffmpeg/releases/download/4.3.2-1/ffmpeg-{platform}.tar.gz"] } PyAV-8.1.0/scripts/inject-dll000077500000000000000000000025211416312437500160070ustar00rootroot00000000000000#!/usr/bin/env python import argparse import logging import os import shutil import zipfile parser = argparse.ArgumentParser(description="Inject DLLs into a Windows binary wheel") parser.add_argument( "wheel", type=str, help="the source wheel to which DLLs should be added", ) parser.add_argument( "dest_dir", type=str, help="the directory where to create the repaired wheel", ) parser.add_argument( "dll_dir", type=str, help="the directory containing the DLLs", ) args = parser.parse_args() wheel_name = os.path.basename(args.wheel) package_name = wheel_name.split("-")[0] repaired_wheel = os.path.join(args.dest_dir, wheel_name) logging.basicConfig(level=logging.INFO) logging.info("Copying '%s' to '%s'", args.wheel, repaired_wheel) shutil.copy(args.wheel, repaired_wheel) logging.info("Adding DLLs from '%s' to package '%s'", args.dll_dir, package_name) with zipfile.ZipFile(repaired_wheel, mode="a", compression=zipfile.ZIP_DEFLATED) as wheel: for name in sorted(os.listdir(args.dll_dir)): if name.lower().endswith(".dll"): local_path = os.path.join(args.dll_dir, name) archive_path = os.path.join(package_name, name) if archive_path not in wheel.namelist(): logging.info("Adding '%s' as '%s'", local_path, archive_path) wheel.write(local_path, archive_path) PyAV-8.1.0/scripts/test000077500000000000000000000022501416312437500147400ustar00rootroot00000000000000#!/bin/bash # Exit as soon as something errors. set -e if [[ ! "$_PYAV_ACTIVATED" ]]; then export here="$(cd "$(dirname "${BASH_SOURCE[0]}")"; pwd)" source "$here/activate.sh" fi cd "$PYAV_ROOT" TESTSUITE="${1-main}" istest() { [[ "$TESTSUITE" == all || "$TESTSUITE" == "$1" ]] return $? 
} if istest flake8; then # Settings are in setup.cfg $PYAV_PYTHON -m flake8 av examples tests fi if istest isort; then # More settings in setup.cfg $PYAV_PYTHON -m isort --check-only --diff av examples tests fi if istest main; then $PYAV_PYTHON setup.py test fi if istest sdist; then $PYAV_PYTHON setup.py build_ext $PYAV_PYTHON setup.py sdist if [[ "$TRAVIS_TAG" ]]; then $PYAV_PIP install twine $PYAV_PYTHON -m twine upload --skip-existing dist/* fi fi if istest doctest; then make -C docs test fi if istest examples; then for name in $(find examples -name '*.py'); do echo echo === $name cd "$PYAV_ROOT" mkdir -p "sandbox/$1" cd "sandbox/$1" if ! python "$PYAV_ROOT/$name"; then echo FAILED $name with code $? exit $? fi done fi PyAV-8.1.0/scripts/vagrant-test000077500000000000000000000001161416312437500163770ustar00rootroot00000000000000#!/bin/bash cd /vagrant ./scripts/build-deps ./scripts/build ./scripts/test PyAV-8.1.0/setup.cfg000066400000000000000000000012331416312437500141650ustar00rootroot00000000000000[flake8] filename = *.py,*.pyx,*.pxd max-line-length = 142 per-file-ignores = __init__.py: E402,F401 *.pyx,*.pxd: E211,E225,E227,E402,E999 [isort] sections = FUTURE,STDLIB,THIRDPARTY,FIRSTPARTY,LOCALFOLDER lines_after_imports = 2 skip = av/__init__.py known_first_party = av default_section = THIRDPARTY from_first = 1 multi_line_output = 3 [metadata] license = BSD long_description = file: README.md long_description_content_type = text/markdown project_urls = Bug Reports = https://github.com/PyAV-Org/PyAV/issues Documentation = https://pyav.org/docs Feedstock = https://github.com/conda-forge/av-feedstock Download = https://pypi.org/project/av PyAV-8.1.0/setup.py000066400000000000000000000430621416312437500140640ustar00rootroot00000000000000from distutils.ccompiler import new_compiler as _new_compiler from distutils.command.clean import clean, log from distutils.core import Command from distutils.dir_util import remove_tree from distutils.errors import 
DistutilsExecError from distutils.msvccompiler import MSVCCompiler from setuptools import setup, find_packages, Extension, Distribution from setuptools.command.build_ext import build_ext from shlex import quote from subprocess import Popen, PIPE import argparse import errno import os import platform import re import shlex import sys try: # This depends on _winreg, which is not available on not-Windows. from distutils.msvc9compiler import MSVCCompiler as MSVC9Compiler except ImportError: MSVC9Compiler = None try: from distutils._msvccompiler import MSVCCompiler as MSVC14Compiler except ImportError: MSVC14Compiler = None try: from Cython import __version__ as cython_version from Cython.Build import cythonize except ImportError: cythonize = None else: # We depend upon some features in Cython 0.27; reject older ones. if tuple(map(int, cython_version.split('.'))) < (0, 27): print("Cython {} is too old for PyAV; ignoring it.".format(cython_version)) cythonize = None # We will embed this metadata into the package so it can be recalled for debugging. 
version = open('VERSION.txt').read().strip() try: git_commit, _ = Popen(['git', 'describe', '--tags'], stdout=PIPE, stderr=PIPE).communicate() except OSError: git_commit = None else: git_commit = git_commit.decode().strip() _cflag_parser = argparse.ArgumentParser(add_help=False) _cflag_parser.add_argument('-I', dest='include_dirs', action='append') _cflag_parser.add_argument('-L', dest='library_dirs', action='append') _cflag_parser.add_argument('-l', dest='libraries', action='append') _cflag_parser.add_argument('-D', dest='define_macros', action='append') _cflag_parser.add_argument('-R', dest='runtime_library_dirs', action='append') def parse_cflags(raw_cflags): raw_args = shlex.split(raw_cflags.strip()) args, unknown = _cflag_parser.parse_known_args(raw_args) config = {k: v or [] for k, v in args.__dict__.items()} for i, x in enumerate(config['define_macros']): parts = x.split('=', 1) value = parts[1] or None if len(parts) == 2 else None config['define_macros'][i] = (parts[0], value) return config, ' '.join(quote(x) for x in unknown) def get_library_config(name): """Get distutils-compatible extension extras for the given library. This requires ``pkg-config``. """ try: proc = Popen(['pkg-config', '--cflags', '--libs', name], stdout=PIPE, stderr=PIPE) except OSError: print('pkg-config is required for building PyAV') exit(1) raw_cflags, err = proc.communicate() if proc.wait(): return known, unknown = parse_cflags(raw_cflags.decode('utf8')) if unknown: print("pkg-config returned flags we don't understand: {}".format(unknown)) exit(1) return known def update_extend(dst, src): """Update the `dst` with the `src`, extending values where lists. Primarily useful for integrating results from `get_library_config`.
""" for k, v in src.items(): existing = dst.setdefault(k, []) for x in v: if x not in existing: existing.append(x) def unique_extend(a, *args): a[:] = list(set().union(a, *args)) # Obtain the ffmpeg dir from the "--ffmpeg-dir=" argument FFMPEG_DIR = None for i, arg in enumerate(sys.argv): if arg.startswith('--ffmpeg-dir='): FFMPEG_DIR = arg.split('=')[1] break if FFMPEG_DIR is not None: # delete the --ffmpeg-dir arg so that distutils does not see it del sys.argv[i] if not os.path.isdir(FFMPEG_DIR): print('The specified ffmpeg directory does not exist') exit(1) else: # Check the environment variable FFMPEG_DIR FFMPEG_DIR = os.environ.get('FFMPEG_DIR') if FFMPEG_DIR is not None: if not os.path.isdir(FFMPEG_DIR): FFMPEG_DIR = None if FFMPEG_DIR is not None: ffmpeg_lib = os.path.join(FFMPEG_DIR, 'lib') ffmpeg_include = os.path.join(FFMPEG_DIR, 'include') if os.path.exists(ffmpeg_lib): ffmpeg_lib = [ffmpeg_lib] else: ffmpeg_lib = [FFMPEG_DIR] if os.path.exists(ffmpeg_include): ffmpeg_include = [ffmpeg_include] else: ffmpeg_include = [FFMPEG_DIR] else: ffmpeg_lib = [] ffmpeg_include = [] # The "extras" to be supplied to every one of our modules. # This is expanded heavily by the `config` command. extension_extra = { 'include_dirs': ['include'] + ffmpeg_include, # The first are PyAV's includes. 'libraries' : [], 'library_dirs': ffmpeg_lib, } # The macros which describe the current PyAV version. 
config_macros = { "PYAV_VERSION": version, "PYAV_VERSION_STR": '"%s"' % version, "PYAV_COMMIT_STR": '"%s"' % (git_commit or 'unknown-commit'), } def dump_config(): """Print out all the config information we have so far (for debugging).""" print('PyAV:', version, git_commit or '(unknown commit)') print('Python:', sys.version.encode('unicode_escape').decode()) print('platform:', platform.platform()) print('extension_extra:') for k, vs in extension_extra.items(): print('\t%s: %s' % (k, [x.encode('utf8') for x in vs])) print('config_macros:') for x in sorted(config_macros.items()): print('\t%s=%s' % x) # Monkey-patch for CCompiler to be silent. def _CCompiler_spawn_silent(cmd, dry_run=None): """Spawn a process, and eat the stdio.""" proc = Popen(cmd, stdout=PIPE, stderr=PIPE) out, err = proc.communicate() if proc.returncode: raise DistutilsExecError(err) def new_compiler(*args, **kwargs): """Create a C compiler. :param bool silent: Eat all stdio? Defaults to ``True``. All other arguments passed to ``distutils.ccompiler.new_compiler``. """ make_silent = kwargs.pop('silent', True) cc = _new_compiler(*args, **kwargs) # If MSVC10, initialize the compiler here and add /MANIFEST to linker flags. # See Python issue 4431 (https://bugs.python.org/issue4431) if is_msvc(cc): from distutils.msvc9compiler import get_build_version if get_build_version() == 10: cc.initialize() for ldflags in [cc.ldflags_shared, cc.ldflags_shared_debug]: unique_extend(ldflags, ['/MANIFEST']) # If MSVC14, do not silence. As msvc14 requires some custom # steps before the process is spawned, we can't monkey-patch this. elif get_build_version() == 14: make_silent = False # monkey-patch compiler to suppress stdout and stderr. 
if make_silent: cc.spawn = _CCompiler_spawn_silent return cc _msvc_classes = tuple(filter(None, (MSVCCompiler, MSVC9Compiler, MSVC14Compiler))) def is_msvc(cc=None): cc = _new_compiler() if cc is None else cc return isinstance(cc, _msvc_classes) if os.name == 'nt': if is_msvc(): config_macros['inline'] = '__inline' # Since we're shipping a self-contained unit on Windows, we need to mark # the package as such. On other systems, let it be universal. class BinaryDistribution(Distribution): def is_pure(self): return False distclass = BinaryDistribution else: # Nothing to see here. distclass = Distribution # Monkey-patch Cython to not overwrite embedded signatures. if cythonize: from Cython.Compiler.AutoDocTransforms import EmbedSignature old_embed_signature = EmbedSignature._embed_signature def new_embed_signature(self, sig, doc): # Strip any `self` parameters from the front. sig = re.sub(r'\(self(,\s+)?', '(', sig) # If they both start with the same signature, skip it. if sig and doc: new_name = sig.split('(')[0].strip() old_name = doc.split('(')[0].strip() if new_name == old_name: return doc if new_name.endswith('.' + old_name): return doc return old_embed_signature(self, sig, doc) EmbedSignature._embed_signature = new_embed_signature # Construct the modules that we find in the "av" directory. ext_modules = [] for dirname, dirnames, filenames in os.walk('av'): for filename in filenames: # We are looking for Cython sources. if filename.startswith('.') or os.path.splitext(filename)[1] != '.pyx': continue pyx_path = os.path.join(dirname, filename) base = os.path.splitext(pyx_path)[0] # Need to be a little careful because Windows will accept / or \ # (where os.sep will be \ on Windows). mod_name = base.replace('/', '.').replace(os.sep, '.') c_path = os.path.join('src', base + '.c') # We go with the C sources if Cython is not installed, and fail if # those also don't exist.
We can't `cythonize` here though, since the # `pyav/include.h` must be generated (by `build_ext`) first. if not cythonize and not os.path.exists(c_path): print('Cython is required to build PyAV from raw sources.') print('Please `pip install Cython`.') exit(3) ext_modules.append(Extension( mod_name, sources=[c_path if not cythonize else pyx_path], )) class ConfigCommand(Command): user_options = [ ('no-pkg-config', None, "do not use pkg-config to configure dependencies"), ('verbose', None, "dump out configuration"), ('compiler=', 'c', "specify the compiler type"), ] boolean_options = ['no-pkg-config'] def initialize_options(self): self.compiler = None self.no_pkg_config = None def finalize_options(self): self.set_undefined_options('build', ('compiler', 'compiler'),) self.set_undefined_options('build_ext', ('no_pkg_config', 'no_pkg_config'),) def run(self): # For some reason we get the feeling that CFLAGS is not respected, so we parse # it here. TODO: Leave any arguments that we can't figure out. for name in 'CFLAGS', 'LDFLAGS': known, unknown = parse_cflags(os.environ.pop(name, '')) if unknown: print("Warning: We don't understand some of {} (and will leave it in the envvar): {}".format(name, unknown)) os.environ[name] = unknown update_extend(extension_extra, known) if is_msvc(new_compiler(compiler=self.compiler)): # Assume we have to disable /OPT:REF for MSVC with ffmpeg config = { 'extra_link_args': ['/OPT:NOREF'], } update_extend(extension_extra, config) # Check if we're using pkg-config or not if self.no_pkg_config: # Simply assume we have everything we need! config = { 'libraries': ['avformat', 'avcodec', 'avdevice', 'avutil', 'avfilter', 'swscale', 'swresample'], 'library_dirs': [], 'include_dirs': [] } update_extend(extension_extra, config) for ext in self.distribution.ext_modules: for key, value in extension_extra.items(): setattr(ext, key, value) return # We're using pkg-config: errors = [] # Get the config for the libraries that we require. 
for name in 'libavformat', 'libavcodec', 'libavdevice', 'libavutil', 'libavfilter', 'libswscale', 'libswresample': config = get_library_config(name) if config: update_extend(extension_extra, config) # We don't need macros for these, since they all must exist. else: errors.append('Could not find ' + name + ' with pkg-config.') if self.verbose: dump_config() # Don't continue if we have errors. # TODO: Warn Ubuntu 12 users that they can't satisfy requirements with the # default package sources. if errors: print('\n'.join(errors)) exit(1) # Normalize the extras. extension_extra.update( dict((k, sorted(set(v))) for k, v in extension_extra.items()) ) # Apply them. for ext in self.distribution.ext_modules: for key, value in extension_extra.items(): setattr(ext, key, value) class CleanCommand(clean): user_options = clean.user_options + [ ('sources', None, "remove Cython build output (C sources)")] boolean_options = clean.boolean_options + ['sources'] def initialize_options(self): clean.initialize_options(self) self.sources = None def run(self): clean.run(self) if self.sources: if os.path.exists('src'): remove_tree('src', dry_run=self.dry_run) else: log.info("'%s' does not exist -- can't clean it", 'src') class CythonizeCommand(Command): user_options = [] def initialize_options(self): pass def finalize_options(self): pass def run(self): # Cythonize, if required. We do it individually since we must update # the existing extension instead of replacing them all. 
for i, ext in enumerate(self.distribution.ext_modules): if any(s.endswith('.pyx') for s in ext.sources): if is_msvc(): ext.define_macros.append(('inline', '__inline')) new_ext = cythonize( ext, compiler_directives=dict( c_string_type='str', c_string_encoding='ascii', embedsignature=True, language_level=2, ), build_dir='src', include_path=ext.include_dirs, )[0] ext.sources = new_ext.sources class BuildExtCommand(build_ext): if os.name != 'nt': user_options = build_ext.user_options + [ ('no-pkg-config', None, "do not use pkg-config to configure dependencies")] boolean_options = build_ext.boolean_options + ['no-pkg-config'] def initialize_options(self): build_ext.initialize_options(self) self.no_pkg_config = None else: no_pkg_config = 1 def run(self): # Propagate build options to config obj = self.distribution.get_command_obj('config') obj.compiler = self.compiler obj.no_pkg_config = self.no_pkg_config obj.include_dirs = self.include_dirs obj.libraries = self.libraries obj.library_dirs = self.library_dirs self.run_command('config') # We write a header file containing everything we have discovered by # inspecting the libraries which exist. This is the main mechanism we # use to detect differences between FFmpeg and Libav. include_dir = os.path.join(self.build_temp, 'include') pyav_dir = os.path.join(include_dir, 'pyav') try: os.makedirs(pyav_dir) except OSError as e: if e.errno != errno.EEXIST: raise header_path = os.path.join(pyav_dir, 'config.h') print('writing', header_path) with open(header_path, 'w') as fh: fh.write('#ifndef PYAV_COMPAT_H\n') fh.write('#define PYAV_COMPAT_H\n') for k, v in sorted(config_macros.items()): fh.write('#define %s %s\n' % (k, v)) fh.write('#endif\n') self.include_dirs = self.include_dirs or [] self.include_dirs.append(include_dir) # Propagate config to cythonize.
for i, ext in enumerate(self.distribution.ext_modules): unique_extend(ext.include_dirs, self.include_dirs) unique_extend(ext.library_dirs, self.library_dirs) unique_extend(ext.libraries, self.libraries) self.run_command('cythonize') build_ext.run(self) setup( name='av', version=version, description="Pythonic bindings for FFmpeg's libraries.", author="Mike Boers", author_email="pyav@mikeboers.com", url="https://github.com/PyAV-Org/PyAV", packages=find_packages(exclude=['build*', 'examples*', 'scratchpad*', 'tests*']), zip_safe=False, ext_modules=ext_modules, cmdclass={ 'build_ext': BuildExtCommand, 'clean': CleanCommand, 'config': ConfigCommand, 'cythonize': CythonizeCommand, }, test_suite='tests', entry_points={ 'console_scripts': [ 'pyav = av.__main__:main', ], }, classifiers=[ 'Development Status :: 5 - Production/Stable', 'Intended Audience :: Developers', 'License :: OSI Approved :: BSD License', 'Natural Language :: English', 'Operating System :: MacOS :: MacOS X', 'Operating System :: POSIX', 'Operating System :: Unix', 'Operating System :: Microsoft :: Windows', 'Programming Language :: Cython', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: 3.9', 'Programming Language :: Python :: 3.10', 'Topic :: Software Development :: Libraries :: Python Modules', 'Topic :: Multimedia :: Sound/Audio', 'Topic :: Multimedia :: Sound/Audio :: Conversion', 'Topic :: Multimedia :: Video', 'Topic :: Multimedia :: Video :: Conversion', ], distclass=distclass, ) PyAV-8.1.0/tests/000077500000000000000000000000001416312437500135075ustar00rootroot00000000000000PyAV-8.1.0/tests/__init__.py000066400000000000000000000000001416312437500156060ustar00rootroot00000000000000PyAV-8.1.0/tests/common.py000066400000000000000000000107261416312437500153570ustar00rootroot00000000000000from __future__ import division from unittest import TestCase as _Base import datetime import errno 
import functools import os import sys import types from av.datasets import fate as fate_suite try: import PIL.Image as Image import PIL.ImageFilter as ImageFilter except ImportError: Image = ImageFilter = None is_windows = os.name == 'nt' skip_tests = frozenset(os.environ.get("PYAV_SKIP_TESTS", "").split(",")) def makedirs(path): try: os.makedirs(path) except OSError as e: if e.errno != errno.EEXIST: raise _start_time = datetime.datetime.now() def _sandbox(timed=False): root = os.path.abspath(os.path.join( __file__, '..', '..', 'sandbox' )) sandbox = os.path.join( root, _start_time.strftime('%Y%m%d-%H%M%S'), ) if timed else root if not os.path.exists(sandbox): os.makedirs(sandbox) return sandbox def asset(*args): adir = os.path.dirname(__file__) return os.path.abspath(os.path.join(adir, 'assets', *args)) # Store all of the sample data here. os.environ['PYAV_TESTDATA_DIR'] = asset() def fate_png(): return fate_suite('png1/55c99e750a5fd6_50314226.png') def sandboxed(*args, **kwargs): do_makedirs = kwargs.pop('makedirs', True) base = kwargs.pop('sandbox', None) timed = kwargs.pop('timed', False) if kwargs: raise TypeError('extra kwargs: %s' % ', '.join(sorted(kwargs))) path = os.path.join(_sandbox(timed=timed) if base is None else base, *args) if do_makedirs: makedirs(os.path.dirname(path)) return path class MethodLogger(object): def __init__(self, obj): self._obj = obj self._log = [] def __getattr__(self, name): value = getattr(self._obj, name) if isinstance(value, (types.MethodType, types.FunctionType, types.BuiltinFunctionType, types.BuiltinMethodType)): return functools.partial(self._method, name, value) else: self._log.append(('__getattr__', (name, ), {})) return value def _method(self, name, meth, *args, **kwargs): self._log.append((name, args, kwargs)) return meth(*args, **kwargs) def _filter(self, type_): return [log for log in self._log if log[0] == type_] class TestCase(_Base): @classmethod def _sandbox(cls, timed=True): path = 
os.path.join(_sandbox(timed=timed), cls.__name__) makedirs(path) return path @property def sandbox(self): return self._sandbox(timed=True) def sandboxed(self, *args, **kwargs): kwargs.setdefault('sandbox', self.sandbox) kwargs.setdefault('timed', True) return sandboxed(*args, **kwargs) def assertImagesAlmostEqual(self, a, b, epsilon=0.1, *args): self.assertEqual(a.size, b.size, 'sizes dont match') a = a.filter(ImageFilter.BLUR).getdata() b = b.filter(ImageFilter.BLUR).getdata() for i, ax, bx in zip(range(len(a)), a, b): diff = sum(abs(ac / 256 - bc / 256) for ac, bc in zip(ax, bx)) / 3 if diff > epsilon: self.fail('images differed by %s at index %d; %s %s' % (diff, i, ax, bx)) # Add some of the unittest methods that we love from 2.7. if sys.version_info < (2, 7): def assertIs(self, a, b, msg=None): if a is not b: self.fail(msg or '%r at 0x%x is not %r at 0x%x; %r is not %r' % (type(a), id(a), type(b), id(b), a, b)) def assertIsNot(self, a, b, msg=None): if a is b: self.fail(msg or 'both are %r at 0x%x; %r' % (type(a), id(a), a)) def assertIsNone(self, x, msg=None): if x is not None: self.fail(msg or 'is not None; %r' % x) def assertIsNotNone(self, x, msg=None): if x is None: self.fail(msg or 'is None; %r' % x) def assertIn(self, a, b, msg=None): if a not in b: self.fail(msg or '%r not in %r' % (a, b)) def assertNotIn(self, a, b, msg=None): if a in b: self.fail(msg or '%r in %r' % (a, b)) def assertIsInstance(self, instance, types, msg=None): if not isinstance(instance, types): self.fail(msg or 'not an instance of %r; %r' % (types, instance)) def assertNotIsInstance(self, instance, types, msg=None): if isinstance(instance, types): self.fail(msg or 'is an instance of %r; %r' % (types, instance)) PyAV-8.1.0/tests/requirements.txt000066400000000000000000000000761416312437500167760ustar00rootroot00000000000000autopep8 Cython editorconfig flake8 isort numpy Pillow sphinx 
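The `MethodLogger` helper in `tests/common.py` above records every attribute access and method call made on a wrapped object, which lets tests assert *how* an object was used. A rough, self-contained sketch of that behavior (this simplified `MiniMethodLogger` is an illustration re-declared here, since `tests.common` is only importable from inside the test suite):

```python
import functools
import types


class MiniMethodLogger:
    """Simplified sketch of tests/common.py's MethodLogger."""

    def __init__(self, obj):
        self._obj = obj
        self._log = []

    def __getattr__(self, name):
        # Only called for names not found on this wrapper itself.
        value = getattr(self._obj, name)
        if isinstance(value, (types.MethodType, types.BuiltinMethodType)):
            # Return a callable that logs the call before forwarding it.
            return functools.partial(self._method, name, value)
        self._log.append(('__getattr__', (name,), {}))
        return value

    def _method(self, name, meth, *args, **kwargs):
        self._log.append((name, args, kwargs))
        return meth(*args, **kwargs)

    def _filter(self, type_):
        return [entry for entry in self._log if entry[0] == type_]


log = MiniMethodLogger([])
log.append(1)
log.append(2)
assert log._obj == [1, 2]
assert log._filter('append') == [('append', (1,), {}), ('append', (2,), {})]
```

The calls pass through to the wrapped object unchanged; the wrapper only observes, which is why the tests can use `_filter` afterwards to count, say, `read` or `seek` calls.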
PyAV-8.1.0/tests/test_audiofifo.py000066400000000000000000000060261416312437500170710ustar00rootroot00000000000000import av from .common import TestCase, fate_suite class TestAudioFifo(TestCase): def test_data(self): container = av.open(fate_suite('audio-reference/chorusnoise_2ch_44kHz_s16.wav')) stream = container.streams.audio[0] fifo = av.AudioFifo() input_ = [] output = [] for i, packet in enumerate(container.demux(stream)): for frame in packet.decode(): input_.append(frame.planes[0].to_bytes()) fifo.write(frame) for frame in fifo.read_many(512, partial=i == 10): output.append(frame.planes[0].to_bytes()) if i == 10: break input_ = b''.join(input_) output = b''.join(output) min_len = min(len(input_), len(output)) self.assertTrue(min_len > 10 * 512 * 2 * 2) self.assertTrue(input_[:min_len] == output[:min_len]) def test_pts_simple(self): fifo = av.AudioFifo() iframe = av.AudioFrame(samples=1024) iframe.pts = 0 iframe.sample_rate = 48000 iframe.time_base = '1/48000' fifo.write(iframe) oframe = fifo.read(512) self.assertTrue(oframe is not None) self.assertEqual(oframe.pts, 0) self.assertEqual(oframe.time_base, iframe.time_base) self.assertEqual(fifo.samples_written, 1024) self.assertEqual(fifo.samples_read, 512) self.assertEqual(fifo.pts_per_sample, 1.0) iframe.pts = 1024 fifo.write(iframe) oframe = fifo.read(512) self.assertTrue(oframe is not None) self.assertEqual(oframe.pts, 512) self.assertEqual(oframe.time_base, iframe.time_base) iframe.pts = 9999 # Wrong! 
self.assertRaises(ValueError, fifo.write, iframe) def test_pts_complex(self): fifo = av.AudioFifo() iframe = av.AudioFrame(samples=1024) iframe.pts = 0 iframe.sample_rate = 48000 iframe.time_base = '1/96000' fifo.write(iframe) iframe.pts = 2048 fifo.write(iframe) oframe = fifo.read_many(1024)[-1] self.assertEqual(oframe.pts, 2048) self.assertEqual(fifo.pts_per_sample, 2.0) def test_missing_sample_rate(self): fifo = av.AudioFifo() iframe = av.AudioFrame(samples=1024) iframe.pts = 0 iframe.time_base = '1/48000' fifo.write(iframe) oframe = fifo.read(512) self.assertTrue(oframe is not None) self.assertIsNone(oframe.pts) self.assertEqual(oframe.sample_rate, 0) self.assertEqual(oframe.time_base, iframe.time_base) def test_missing_time_base(self): fifo = av.AudioFifo() iframe = av.AudioFrame(samples=1024) iframe.pts = 0 iframe.sample_rate = 48000 fifo.write(iframe) oframe = fifo.read(512) self.assertTrue(oframe is not None) self.assertIsNone(oframe.pts) self.assertIsNone(oframe.time_base) self.assertEqual(oframe.sample_rate, iframe.sample_rate) PyAV-8.1.0/tests/test_audioformat.py000066400000000000000000000015061416312437500174340ustar00rootroot00000000000000import sys from av import AudioFormat from .common import TestCase postfix = 'le' if sys.byteorder == 'little' else 'be' class TestAudioFormats(TestCase): def test_s16_inspection(self): fmt = AudioFormat('s16') self.assertEqual(fmt.name, 's16') self.assertFalse(fmt.is_planar) self.assertEqual(fmt.bits, 16) self.assertEqual(fmt.bytes, 2) self.assertEqual(fmt.container_name, 's16' + postfix) self.assertEqual(fmt.planar.name, 's16p') self.assertIs(fmt.packed, fmt) def test_s32p_inspection(self): fmt = AudioFormat('s32p') self.assertEqual(fmt.name, 's32p') self.assertTrue(fmt.is_planar) self.assertEqual(fmt.bits, 32) self.assertEqual(fmt.bytes, 4) self.assertRaises(ValueError, lambda: fmt.container_name) 
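The pts bookkeeping that `TestAudioFifo.test_pts_simple` and `test_pts_complex` above assert reduces to one ratio: a sample lasts `1/sample_rate` seconds, so it spans `1 / (sample_rate * time_base)` ticks of the stream's time base. A standalone sketch of that arithmetic using only `fractions` (the helper name is illustrative, not part of PyAV's API):

```python
from fractions import Fraction


def pts_per_sample(sample_rate, time_base):
    # One sample lasts 1/sample_rate seconds, i.e. this many time_base ticks.
    return 1 / (Fraction(sample_rate) * time_base)


# Same-rate case from test_pts_simple: one pts tick per sample,
# so the frame starting at sample 512 carries pts 512.
assert pts_per_sample(48000, Fraction(1, 48000)) == 1
assert 512 * pts_per_sample(48000, Fraction(1, 48000)) == 512

# test_pts_complex: a 1/96000 time_base at 48 kHz gives 2 ticks per sample,
# so the frame starting at sample 1024 carries pts 2048.
assert pts_per_sample(48000, Fraction(1, 96000)) == 2
assert 1024 * pts_per_sample(48000, Fraction(1, 96000)) == 2048
```

This is also why the fifo can reject an out-of-sequence input pts (the `iframe.pts = 9999` cases above): the expected write position is fully determined by samples written so far times this ratio.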
PyAV-8.1.0/tests/test_audioframe.py000066400000000000000000000173241416312437500172430ustar00rootroot00000000000000import warnings import numpy from av import AudioFrame from av.deprecation import AttributeRenamedWarning from .common import TestCase class TestAudioFrameConstructors(TestCase): def test_null_constructor(self): frame = AudioFrame() self.assertEqual(frame.format.name, 's16') self.assertEqual(frame.layout.name, 'stereo') self.assertEqual(len(frame.planes), 0) self.assertEqual(frame.samples, 0) def test_manual_flt_mono_constructor(self): frame = AudioFrame(format='flt', layout='mono', samples=160) self.assertEqual(frame.format.name, 'flt') self.assertEqual(frame.layout.name, 'mono') self.assertEqual(len(frame.planes), 1) self.assertEqual(frame.planes[0].buffer_size, 640) self.assertEqual(frame.samples, 160) def test_manual_flt_stereo_constructor(self): frame = AudioFrame(format='flt', layout='stereo', samples=160) self.assertEqual(frame.format.name, 'flt') self.assertEqual(frame.layout.name, 'stereo') self.assertEqual(len(frame.planes), 1) self.assertEqual(frame.planes[0].buffer_size, 1280) self.assertEqual(frame.samples, 160) def test_manual_fltp_stereo_constructor(self): frame = AudioFrame(format='fltp', layout='stereo', samples=160) self.assertEqual(frame.format.name, 'fltp') self.assertEqual(frame.layout.name, 'stereo') self.assertEqual(len(frame.planes), 2) self.assertEqual(frame.planes[0].buffer_size, 640) self.assertEqual(frame.planes[1].buffer_size, 640) self.assertEqual(frame.samples, 160) def test_manual_s16_mono_constructor(self): frame = AudioFrame(format='s16', layout='mono', samples=160) self.assertEqual(frame.format.name, 's16') self.assertEqual(frame.layout.name, 'mono') self.assertEqual(len(frame.planes), 1) self.assertEqual(frame.planes[0].buffer_size, 320) self.assertEqual(frame.samples, 160) def test_manual_s16_mono_constructor_align_8(self): frame = AudioFrame(format='s16', layout='mono', samples=159, align=8) 
self.assertEqual(frame.format.name, 's16') self.assertEqual(frame.layout.name, 'mono') self.assertEqual(len(frame.planes), 1) self.assertEqual(frame.planes[0].buffer_size, 320) self.assertEqual(frame.samples, 159) def test_manual_s16_stereo_constructor(self): frame = AudioFrame(format='s16', layout='stereo', samples=160) self.assertEqual(frame.format.name, 's16') self.assertEqual(frame.layout.name, 'stereo') self.assertEqual(len(frame.planes), 1) self.assertEqual(frame.planes[0].buffer_size, 640) self.assertEqual(frame.samples, 160) def test_manual_s16p_stereo_constructor(self): frame = AudioFrame(format='s16p', layout='stereo', samples=160) self.assertEqual(frame.format.name, 's16p') self.assertEqual(frame.layout.name, 'stereo') self.assertEqual(len(frame.planes), 2) self.assertEqual(frame.planes[0].buffer_size, 320) self.assertEqual(frame.planes[1].buffer_size, 320) self.assertEqual(frame.samples, 160) class TestAudioFrameConveniences(TestCase): def test_basic_to_ndarray(self): frame = AudioFrame(format='s16p', layout='stereo', samples=160) array = frame.to_ndarray() self.assertEqual(array.dtype, '") self.assertEqual(layout.channels[0].name, "FL") self.assertEqual(layout.channels[0].description, "front left") self.assertEqual(repr(layout.channels[0]), "") self.assertEqual(layout.channels[1].name, "FR") self.assertEqual(layout.channels[1].description, "front right") self.assertEqual(repr(layout.channels[1]), "") def test_defaults(self): for i, name in enumerate(''' mono stereo 2.1 4.0 5.0 5.1 6.1 7.1 '''.strip().split()): layout = AudioLayout(i + 1) self.assertEqual(layout.name, name) self.assertEqual(len(layout.channels), i + 1) PyAV-8.1.0/tests/test_audioresampler.py000066400000000000000000000057421416312437500201440ustar00rootroot00000000000000from av import AudioFrame, AudioResampler from .common import TestCase class TestAudioResampler(TestCase): def test_identity_passthrough(self): # If we don't ask it to do anything, it won't. 
resampler = AudioResampler() iframe = AudioFrame('s16', 'stereo', 1024) oframe = resampler.resample(iframe) self.assertIs(iframe, oframe) def test_matching_passthrough(self): # If the frames match, it won't do anything. resampler = AudioResampler('s16', 'stereo') iframe = AudioFrame('s16', 'stereo', 1024) oframe = resampler.resample(iframe) self.assertIs(iframe, oframe) def test_pts_assertion_same_rate(self): resampler = AudioResampler('s16', 'mono') iframe = AudioFrame('s16', 'stereo', 1024) iframe.sample_rate = 48000 iframe.time_base = '1/48000' iframe.pts = 0 oframe = resampler.resample(iframe) self.assertEqual(oframe.pts, 0) self.assertEqual(oframe.time_base, iframe.time_base) self.assertEqual(oframe.sample_rate, iframe.sample_rate) iframe.pts = 1024 oframe = resampler.resample(iframe) self.assertEqual(oframe.pts, 1024) self.assertEqual(oframe.time_base, iframe.time_base) self.assertEqual(oframe.sample_rate, iframe.sample_rate) iframe.pts = 9999 self.assertRaises(ValueError, resampler.resample, iframe) def test_pts_assertion_new_rate(self): resampler = AudioResampler('s16', 'mono', 44100) iframe = AudioFrame('s16', 'stereo', 1024) iframe.sample_rate = 48000 iframe.time_base = '1/48000' iframe.pts = 0 oframe = resampler.resample(iframe) self.assertEqual(oframe.pts, 0) self.assertEqual(str(oframe.time_base), '1/44100') self.assertEqual(oframe.sample_rate, 44100) samples_out = resampler.samples_out self.assertTrue(samples_out > 0) iframe.pts = 1024 oframe = resampler.resample(iframe) self.assertEqual(oframe.pts, samples_out) self.assertEqual(str(oframe.time_base), '1/44100') self.assertEqual(oframe.sample_rate, 44100) def test_pts_missing_time_base(self): resampler = AudioResampler('s16', 'mono', 44100) iframe = AudioFrame('s16', 'stereo', 1024) iframe.sample_rate = 48000 iframe.pts = 0 oframe = resampler.resample(iframe) self.assertIs(oframe.pts, None) self.assertIs(oframe.time_base, None) self.assertEqual(oframe.sample_rate, 44100) def 
test_pts_complex_time_base(self): resampler = AudioResampler('s16', 'mono', 44100) iframe = AudioFrame('s16', 'stereo', 1024) iframe.sample_rate = 48000 iframe.time_base = '1/96000' iframe.pts = 0 oframe = resampler.resample(iframe) self.assertIs(oframe.pts, None) self.assertIs(oframe.time_base, None) self.assertEqual(oframe.sample_rate, 44100) PyAV-8.1.0/tests/test_codec.py000066400000000000000000000062701416312437500162020ustar00rootroot00000000000000import unittest from av import AudioFormat, Codec, VideoFormat, codecs_available from av.codec.codec import UnknownCodecError from .common import TestCase # some older ffmpeg versions have no native opus encoder try: opus_c = Codec('opus', 'w') opus_encoder_missing = False except UnknownCodecError: opus_encoder_missing = True class TestCodecs(TestCase): def test_codec_bogus(self): with self.assertRaises(UnknownCodecError): Codec('bogus123') with self.assertRaises(UnknownCodecError): Codec('bogus123', 'w') def test_codec_mpeg4_decoder(self): c = Codec('mpeg4') self.assertEqual(c.name, 'mpeg4') self.assertEqual(c.long_name, 'MPEG-4 part 2') self.assertEqual(c.type, 'video') self.assertIn(c.id, (12, 13)) self.assertTrue(c.is_decoder) self.assertFalse(c.is_encoder) # audio self.assertIsNone(c.audio_formats) self.assertIsNone(c.audio_rates) # video formats = c.video_formats self.assertTrue(formats) self.assertIsInstance(formats[0], VideoFormat) self.assertTrue(any(f.name == 'yuv420p' for f in formats)) self.assertIsNone(c.frame_rates) def test_codec_mpeg4_encoder(self): c = Codec('mpeg4', 'w') self.assertEqual(c.name, 'mpeg4') self.assertEqual(c.long_name, 'MPEG-4 part 2') self.assertEqual(c.type, 'video') self.assertIn(c.id, (12, 13)) self.assertTrue(c.is_encoder) self.assertFalse(c.is_decoder) # audio self.assertIsNone(c.audio_formats) self.assertIsNone(c.audio_rates) # video formats = c.video_formats self.assertTrue(formats) self.assertIsInstance(formats[0], VideoFormat) self.assertTrue(any(f.name == 'yuv420p' for f in 
formats)) self.assertIsNone(c.frame_rates) def test_codec_opus_decoder(self): c = Codec('opus') self.assertEqual(c.name, 'opus') self.assertEqual(c.long_name, 'Opus') self.assertEqual(c.type, 'audio') self.assertTrue(c.is_decoder) self.assertFalse(c.is_encoder) # audio self.assertIsNone(c.audio_formats) self.assertIsNone(c.audio_rates) # video self.assertIsNone(c.video_formats) self.assertIsNone(c.frame_rates) @unittest.skipIf(opus_encoder_missing, 'Opus encoder is not available') def test_codec_opus_encoder(self): c = Codec('opus', 'w') self.assertIn(c.name, ('opus', 'libopus')) self.assertIn(c.long_name, ('Opus', 'libopus Opus')) self.assertEqual(c.type, 'audio') self.assertTrue(c.is_encoder) self.assertFalse(c.is_decoder) # audio formats = c.audio_formats self.assertTrue(formats) self.assertIsInstance(formats[0], AudioFormat) self.assertTrue(any(f.name in ['flt', 'fltp'] for f in formats)) self.assertIsNotNone(c.audio_rates) self.assertIn(48000, c.audio_rates) # video self.assertIsNone(c.video_formats) self.assertIsNone(c.frame_rates) def test_codecs_available(self): self.assertTrue(codecs_available) PyAV-8.1.0/tests/test_codec_context.py000066400000000000000000000305331416312437500177450ustar00rootroot00000000000000from fractions import Fraction from unittest import SkipTest import os from av import AudioResampler, Codec, Packet from av.codec.codec import UnknownCodecError import av from .common import TestCase, fate_suite def iter_frames(container, stream): for packet in container.demux(stream): for frame in packet.decode(): yield frame def iter_raw_frames(path, packet_sizes, ctx): with open(path, 'rb') as f: for i, size in enumerate(packet_sizes): packet = Packet(size) read_size = f.readinto(packet) assert size assert read_size == size if not read_size: break for frame in ctx.decode(packet): yield frame while True: try: frames = ctx.decode(None) except EOFError: break for frame in frames: yield frame if not frames: break class TestCodecContext(TestCase): def 
test_skip_frame_default(self): ctx = Codec('png', 'w').create() self.assertEqual(ctx.skip_frame.name, 'DEFAULT') def test_codec_tag(self): ctx = Codec('mpeg4', 'w').create() self.assertEqual(ctx.codec_tag, '\x00\x00\x00\x00') ctx.codec_tag = 'xvid' self.assertEqual(ctx.codec_tag, 'xvid') # wrong length with self.assertRaises(ValueError) as cm: ctx.codec_tag = 'bob' self.assertEqual(str(cm.exception), 'Codec tag should be a 4 character string.') # wrong type with self.assertRaises(ValueError) as cm: ctx.codec_tag = 123 self.assertEqual(str(cm.exception), 'Codec tag should be a 4 character string.') with av.open(fate_suite('h264/interlaced_crop.mp4')) as container: self.assertEqual(container.streams[0].codec_tag, 'avc1') def test_decoder_extradata(self): ctx = av.codec.Codec('h264', 'r').create() self.assertEqual(ctx.extradata, None) self.assertEqual(ctx.extradata_size, 0) ctx.extradata = b"123" self.assertEqual(ctx.extradata, b"123") self.assertEqual(ctx.extradata_size, 3) ctx.extradata = b"54321" self.assertEqual(ctx.extradata, b"54321") self.assertEqual(ctx.extradata_size, 5) ctx.extradata = None self.assertEqual(ctx.extradata, None) self.assertEqual(ctx.extradata_size, 0) def test_encoder_extradata(self): ctx = av.codec.Codec('h264', 'w').create() self.assertEqual(ctx.extradata, None) self.assertEqual(ctx.extradata_size, 0) with self.assertRaises(ValueError) as cm: ctx.extradata = b"123" self.assertEqual(str(cm.exception), "Can only set extradata for decoders.") def test_parse(self): # This one parses into a single packet. self._assert_parse('mpeg4', fate_suite('h264/interlaced_crop.mp4')) # This one parses into many small packets. 
        self._assert_parse('mpeg2video', fate_suite('mpeg2/mpeg2_field_encoding.ts'))

    def _assert_parse(self, codec_name, path):

        fh = av.open(path)
        packets = []
        for packet in fh.demux(video=0):
            packets.append(packet)

        full_source = b''.join(p.to_bytes() for p in packets)

        for size in 1024, 8192, 65535:

            ctx = Codec(codec_name).create()
            packets = []

            for i in range(0, len(full_source), size):
                block = full_source[i:i + size]
                packets.extend(ctx.parse(block))
            packets.extend(ctx.parse())

            parsed_source = b''.join(p.to_bytes() for p in packets)
            self.assertEqual(len(parsed_source), len(full_source))
            self.assertEqual(full_source, parsed_source)


class TestEncoding(TestCase):

    def test_encoding_png(self):
        self.image_sequence_encode('png')

    def test_encoding_mjpeg(self):
        self.image_sequence_encode('mjpeg')

    def test_encoding_tiff(self):
        self.image_sequence_encode('tiff')

    def image_sequence_encode(self, codec_name):

        try:
            codec = Codec(codec_name, 'w')
        except UnknownCodecError:
            raise SkipTest()

        container = av.open(fate_suite('h264/interlaced_crop.mp4'))
        video_stream = container.streams.video[0]

        width = 640
        height = 480

        ctx = codec.create()

        pix_fmt = ctx.codec.video_formats[0].name

        ctx.width = width
        ctx.height = height
        ctx.time_base = video_stream.codec_context.time_base
        ctx.pix_fmt = pix_fmt
        ctx.open()

        frame_count = 1
        path_list = []
        for frame in iter_frames(container, video_stream):

            new_frame = frame.reformat(width, height, pix_fmt)
            new_packets = ctx.encode(new_frame)

            self.assertEqual(len(new_packets), 1)
            new_packet = new_packets[0]

            path = self.sandboxed('%s/encoder.%04d.%s' % (
                codec_name,
                frame_count,
                codec_name if codec_name != 'mjpeg' else 'jpg',
            ))
            path_list.append(path)
            with open(path, 'wb') as f:
                f.write(new_packet)

            frame_count += 1
            if frame_count > 5:
                break

        ctx = av.Codec(codec_name, 'r').create()

        for path in path_list:
            with open(path, 'rb') as f:
                size = os.fstat(f.fileno()).st_size
                packet = Packet(size)
                size = f.readinto(packet)
                frame = ctx.decode(packet)[0]
                self.assertEqual(frame.width, width)
                self.assertEqual(frame.height, height)
                self.assertEqual(frame.format.name, pix_fmt)

    def test_encoding_h264(self):
        self.video_encoding('libx264', {'crf': '19'})

    def test_encoding_mpeg4(self):
        self.video_encoding('mpeg4')

    def test_encoding_xvid(self):
        self.video_encoding('mpeg4', codec_tag='xvid')

    def test_encoding_mpeg1video(self):
        self.video_encoding('mpeg1video')

    def test_encoding_dvvideo(self):
        options = {'pix_fmt': 'yuv411p',
                   'width': 720,
                   'height': 480}
        self.video_encoding('dvvideo', options)

    def test_encoding_dnxhd(self):
        options = {'b': '90M',  # bitrate
                   'pix_fmt': 'yuv422p',
                   'width': 1920,
                   'height': 1080,
                   'time_base': '1001/30000',
                   'max_frames': 5}
        self.video_encoding('dnxhd', options)

    def video_encoding(self, codec_name, options={}, codec_tag=None):

        try:
            codec = Codec(codec_name, 'w')
        except UnknownCodecError:
            raise SkipTest()

        container = av.open(fate_suite('h264/interlaced_crop.mp4'))
        video_stream = container.streams.video[0]

        pix_fmt = options.pop('pix_fmt', 'yuv420p')
        width = options.pop('width', 640)
        height = options.pop('height', 480)
        max_frames = options.pop('max_frames', 50)
        time_base = options.pop('time_base', video_stream.codec_context.time_base)

        ctx = codec.create()
        ctx.width = width
        ctx.height = height
        ctx.time_base = time_base
        ctx.framerate = 1 / ctx.time_base
        ctx.pix_fmt = pix_fmt
        ctx.options = options  # TODO
        if codec_tag:
            ctx.codec_tag = codec_tag
        ctx.open()

        path = self.sandboxed('encoder.%s' % codec_name)

        packet_sizes = []
        frame_count = 0

        with open(path, 'wb') as f:

            for frame in iter_frames(container, video_stream):

                """
                bad_frame = frame.reformat(width, 100, pix_fmt)
                with self.assertRaises(ValueError):
                    ctx.encode(bad_frame)

                bad_frame = frame.reformat(100, height, pix_fmt)
                with self.assertRaises(ValueError):
                    ctx.encode(bad_frame)

                bad_frame = frame.reformat(width, height, "rgb24")
                with self.assertRaises(ValueError):
                    ctx.encode(bad_frame)
                """

                if frame:
                    frame_count += 1

                new_frame = frame.reformat(width, height,
                                           pix_fmt) if frame else None
                for packet in ctx.encode(new_frame):
                    packet_sizes.append(packet.size)
                    f.write(packet)

                if frame_count >= max_frames:
                    break

            for packet in ctx.encode(None):
                packet_sizes.append(packet.size)
                f.write(packet)

        dec_codec_name = codec_name
        if codec_name == 'libx264':
            dec_codec_name = 'h264'

        ctx = av.Codec(dec_codec_name, 'r').create()
        ctx.open()

        decoded_frame_count = 0
        for frame in iter_raw_frames(path, packet_sizes, ctx):
            decoded_frame_count += 1
            self.assertEqual(frame.width, width)
            self.assertEqual(frame.height, height)
            self.assertEqual(frame.format.name, pix_fmt)

        self.assertEqual(frame_count, decoded_frame_count)

    def test_encoding_pcm_s24le(self):
        self.audio_encoding('pcm_s24le')

    def test_encoding_aac(self):
        self.audio_encoding('aac')

    def test_encoding_mp2(self):
        self.audio_encoding('mp2')

    def audio_encoding(self, codec_name):

        try:
            codec = Codec(codec_name, 'w')
        except UnknownCodecError:
            raise SkipTest()

        ctx = codec.create()
        if ctx.codec.experimental:
            raise SkipTest()

        sample_fmt = ctx.codec.audio_formats[-1].name
        sample_rate = 48000
        channel_layout = "stereo"
        channels = 2

        ctx.time_base = Fraction(1) / sample_rate
        ctx.sample_rate = sample_rate
        ctx.format = sample_fmt
        ctx.layout = channel_layout
        ctx.channels = channels

        ctx.open()

        resampler = AudioResampler(sample_fmt, channel_layout, sample_rate)

        container = av.open(fate_suite('audio-reference/chorusnoise_2ch_44kHz_s16.wav'))
        audio_stream = container.streams.audio[0]

        path = self.sandboxed('encoder.%s' % codec_name)

        samples = 0
        packet_sizes = []

        with open(path, 'wb') as f:

            for frame in iter_frames(container, audio_stream):

                # We need to let the encoder retime.
                frame.pts = None

                """
                bad_resampler = AudioResampler(sample_fmt, "mono", sample_rate)
                bad_frame = bad_resampler.resample(frame)
                with self.assertRaises(ValueError):
                    next(encoder.encode(bad_frame))

                bad_resampler = AudioResampler(sample_fmt, channel_layout, 3000)
                bad_frame = bad_resampler.resample(frame)
                with self.assertRaises(ValueError):
                    next(encoder.encode(bad_frame))

                bad_resampler = AudioResampler('u8', channel_layout, 3000)
                bad_frame = bad_resampler.resample(frame)
                with self.assertRaises(ValueError):
                    next(encoder.encode(bad_frame))
                """

                resampled_frame = resampler.resample(frame)
                samples += resampled_frame.samples

                for packet in ctx.encode(resampled_frame):
                    # bytearray because python can freak out
                    # if the first byte is NULL
                    f.write(bytearray(packet))
                    packet_sizes.append(packet.size)

            for packet in ctx.encode(None):
                packet_sizes.append(packet.size)
                f.write(bytearray(packet))

        ctx = Codec(codec_name, 'r').create()
        ctx.time_base = Fraction(1) / sample_rate
        ctx.sample_rate = sample_rate
        ctx.format = sample_fmt
        ctx.layout = channel_layout
        ctx.channels = channels
        ctx.open()

        result_samples = 0

        # should have more asserts but not sure what to check
        # libav and ffmpeg give different results
        # so can't really use checksums
        for frame in iter_raw_frames(path, packet_sizes, ctx):
            result_samples += frame.samples
            self.assertEqual(frame.rate, sample_rate)
            self.assertEqual(len(frame.layout.channels), channels)


PyAV-8.1.0/tests/test_container.py

import sys
import unittest

import av

from .common import TestCase, fate_suite, is_windows, skip_tests

# On Windows, Python 3.0 - 3.5 have issues handling unicode filenames.
# Starting with Python 3.6 the situation is saner thanks to PEP 529:
#
#   https://www.python.org/dev/peps/pep-0529/
broken_unicode = is_windows and sys.version_info < (3, 6)


class TestContainers(TestCase):

    def test_context_manager(self):
        with av.open(fate_suite('h264/interlaced_crop.mp4')) as container:
            self.assertEqual(container.format.long_name, 'QuickTime / MOV')
            self.assertEqual(len(container.streams), 1)

    @unittest.skipIf(broken_unicode or 'unicode_filename' in skip_tests,
                     'Unicode filename handling is broken')
    def test_unicode_filename(self):
        av.open(self.sandboxed(u'¢∞§¶•ªº.mov'), 'w')


PyAV-8.1.0/tests/test_containerformat.py

from av import ContainerFormat, formats_available

from .common import TestCase


class TestContainerFormats(TestCase):

    def test_matroska(self):
        fmt = ContainerFormat('matroska')
        self.assertTrue(fmt.is_input)
        self.assertTrue(fmt.is_output)
        self.assertEqual(fmt.name, 'matroska')
        self.assertEqual(fmt.long_name, 'Matroska')
        self.assertIn('mkv', fmt.extensions)
        self.assertFalse(fmt.no_file)

    def test_mov(self):
        fmt = ContainerFormat('mov')
        self.assertTrue(fmt.is_input)
        self.assertTrue(fmt.is_output)
        self.assertEqual(fmt.name, 'mov')
        self.assertEqual(fmt.long_name, 'QuickTime / MOV')
        self.assertIn('mov', fmt.extensions)
        self.assertFalse(fmt.no_file)

    def test_stream_segment(self):
        # This format goes by two names, check both.
        fmt = ContainerFormat('stream_segment')
        self.assertFalse(fmt.is_input)
        self.assertTrue(fmt.is_output)
        self.assertEqual(fmt.name, 'stream_segment')
        self.assertEqual(fmt.long_name, 'streaming segment muxer')
        self.assertEqual(fmt.extensions, set())
        self.assertTrue(fmt.no_file)

        fmt = ContainerFormat('ssegment')
        self.assertFalse(fmt.is_input)
        self.assertTrue(fmt.is_output)
        self.assertEqual(fmt.name, 'ssegment')
        self.assertEqual(fmt.long_name, 'streaming segment muxer')
        self.assertEqual(fmt.extensions, set())
        self.assertTrue(fmt.no_file)

    def test_formats_available(self):
        self.assertTrue(formats_available)


PyAV-8.1.0/tests/test_decode.py

import av

from .common import TestCase, fate_suite


class TestDecode(TestCase):

    def test_decoded_video_frame_count(self):
        container = av.open(fate_suite('h264/interlaced_crop.mp4'))
        video_stream = next(s for s in container.streams if s.type == 'video')

        self.assertIs(video_stream, container.streams.video[0])

        frame_count = 0
        for packet in container.demux(video_stream):
            for frame in packet.decode():
                frame_count += 1

        self.assertEqual(frame_count, video_stream.frames)

    def test_decode_audio_sample_count(self):
        container = av.open(fate_suite('audio-reference/chorusnoise_2ch_44kHz_s16.wav'))
        audio_stream = next(s for s in container.streams if s.type == 'audio')

        self.assertIs(audio_stream, container.streams.audio[0])

        sample_count = 0
        for packet in container.demux(audio_stream):
            for frame in packet.decode():
                sample_count += frame.samples

        total_samples = (audio_stream.duration * audio_stream.rate.numerator) / audio_stream.time_base.denominator
        self.assertEqual(sample_count, total_samples)

    def test_decoded_time_base(self):
        container = av.open(fate_suite('h264/interlaced_crop.mp4'))
        stream = container.streams.video[0]
        codec_context = stream.codec_context

        self.assertNotEqual(stream.time_base, codec_context.time_base)

        for packet in container.demux(stream):
            for frame in packet.decode():
                self.assertEqual(packet.time_base, frame.time_base)
                self.assertEqual(stream.time_base, frame.time_base)
                return

    def test_decoded_motion_vectors(self):
        container = av.open(fate_suite('h264/interlaced_crop.mp4'))
        stream = container.streams.video[0]
        codec_context = stream.codec_context
        codec_context.options = {"flags2": "+export_mvs"}

        for packet in container.demux(stream):
            for frame in packet.decode():
                vectors = frame.side_data.get('MOTION_VECTORS')
                if frame.key_frame:
                    # Key frames don't have motion vectors
                    assert vectors is None
                else:
                    assert len(vectors) > 0
                    return

    def test_decoded_motion_vectors_no_flag(self):
        container = av.open(fate_suite('h264/interlaced_crop.mp4'))
        stream = container.streams.video[0]

        for packet in container.demux(stream):
            for frame in packet.decode():
                vectors = frame.side_data.get('MOTION_VECTORS')
                if not frame.key_frame:
                    assert vectors is None
                    return


PyAV-8.1.0/tests/test_deprecation.py

import warnings

from av import deprecation

from .common import TestCase


class TestDeprecations(TestCase):

    def test_method(self):

        class Example(object):

            def __init__(self, x=100):
                self.x = x

            @deprecation.method
            def foo(self, a, b):
                return self.x + a + b

        obj = Example()

        with warnings.catch_warnings(record=True) as captured:
            self.assertEqual(obj.foo(20, b=3), 123)
        self.assertIn('Example.foo is deprecated', captured[0].message.args[0])

    def test_renamed_attr(self):

        class Example(object):

            new_value = 'foo'
            old_value = deprecation.renamed_attr('new_value')

            def new_func(self, a, b):
                return a + b

            old_func = deprecation.renamed_attr('new_func')

        obj = Example()

        with warnings.catch_warnings(record=True) as captured:

            self.assertEqual(obj.old_value, 'foo')
            self.assertIn('Example.old_value is deprecated', captured[0].message.args[0])

            obj.old_value = 'bar'
            self.assertIn('Example.old_value is deprecated', captured[1].message.args[0])

        with warnings.catch_warnings(record=True) as captured:
            self.assertEqual(obj.old_func(1, 2), 3)
            self.assertIn('Example.old_func is deprecated', captured[0].message.args[0])


PyAV-8.1.0/tests/test_dictionary.py

from av.dictionary import Dictionary

from .common import TestCase


class TestDictionary(TestCase):

    def test_basics(self):

        d = Dictionary()
        d['key'] = 'value'

        self.assertEqual(d['key'], 'value')
        self.assertIn('key', d)

        self.assertEqual(len(d), 1)
        self.assertEqual(list(d), ['key'])

        self.assertEqual(d.pop('key'), 'value')
        self.assertRaises(KeyError, d.pop, 'key')
        self.assertEqual(len(d), 0)


PyAV-8.1.0/tests/test_doctests.py

from unittest import TestCase
import doctest
import pkgutil
import re

import av


def fix_doctests(suite):
    for case in suite._tests:

        # Add some more flags.
        case._dt_optionflags = (
            (case._dt_optionflags or 0)
            | doctest.IGNORE_EXCEPTION_DETAIL
            | doctest.ELLIPSIS
            | doctest.NORMALIZE_WHITESPACE
        )

        case._dt_test.globs['av'] = av
        case._dt_test.globs['video_path'] = av.datasets.curated('pexels/time-lapse-video-of-night-sky-857195.mp4')

        for example in case._dt_test.examples:

            # Remove b prefix from strings.
            if example.want.startswith("b'"):
                example.want = example.want[1:]


def register_doctests(mod):

    if isinstance(mod, str):
        mod = __import__(mod, fromlist=[''])

    try:
        suite = doctest.DocTestSuite(mod)
    except ValueError:
        return

    fix_doctests(suite)

    cls_name = 'Test' + ''.join(x.title() for x in mod.__name__.split('.'))
    cls = type(cls_name, (TestCase, ), {})

    for test in suite._tests:

        def func(self):
            return test.runTest()

        name = str('test_' + re.sub('[^a-zA-Z0-9]+', '_', test.id()).strip('_'))
        func.__name__ = name

        setattr(cls, name, func)

    globals()[cls_name] = cls


for importer, mod_name, ispkg in pkgutil.walk_packages(
    path=av.__path__,
    prefix=av.__name__ + '.',
    onerror=lambda x: None
):
    register_doctests(mod_name)


PyAV-8.1.0/tests/test_encode.py

from __future__ import division

from fractions import Fraction
from unittest import SkipTest
import math

from av import AudioFrame, VideoFrame
from av.audio.stream import AudioStream
from av.video.stream import VideoStream
import av

from .common import Image, TestCase, fate_suite


WIDTH = 320
HEIGHT = 240
DURATION = 48


def write_rgb_rotate(output):

    if not Image:
        raise SkipTest()

    output.metadata['title'] = 'container'
    output.metadata['key'] = 'value'

    stream = output.add_stream("mpeg4", 24)
    stream.width = WIDTH
    stream.height = HEIGHT
    stream.pix_fmt = "yuv420p"

    for frame_i in range(DURATION):

        frame = VideoFrame(WIDTH, HEIGHT, 'rgb24')
        image = Image.new('RGB', (WIDTH, HEIGHT), (
            int(255 * (0.5 + 0.5 * math.sin(frame_i / DURATION * 2 * math.pi))),
            int(255 * (0.5 + 0.5 * math.sin(frame_i / DURATION * 2 * math.pi + 2 / 3 * math.pi))),
            int(255 * (0.5 + 0.5 * math.sin(frame_i / DURATION * 2 * math.pi + 4 / 3 * math.pi))),
        ))
        frame.planes[0].update(image.tobytes())

        for packet in stream.encode(frame):
            output.mux(packet)

    for packet in stream.encode(None):
        output.mux(packet)

    # Done!
    output.close()


def assert_rgb_rotate(self, input_):

    # Now inspect it a little.
    self.assertEqual(len(input_.streams), 1)
    self.assertEqual(input_.metadata.get('title'), 'container', input_.metadata)
    self.assertEqual(input_.metadata.get('key'), None)

    stream = input_.streams[0]
    self.assertIsInstance(stream, VideoStream)
    self.assertEqual(stream.type, 'video')
    self.assertEqual(stream.name, 'mpeg4')
    self.assertEqual(stream.average_rate, 24)  # Only because we constructed it precisely.
    self.assertEqual(stream.rate, Fraction(24, 1))
    self.assertEqual(stream.time_base * stream.duration, 2)
    self.assertEqual(stream.format.name, 'yuv420p')
    self.assertEqual(stream.format.width, WIDTH)
    self.assertEqual(stream.format.height, HEIGHT)


class TestBasicVideoEncoding(TestCase):

    def test_rgb_rotate(self):
        path = self.sandboxed('rgb_rotate.mov')
        output = av.open(path, 'w')
        write_rgb_rotate(output)
        assert_rgb_rotate(self, av.open(path))

    def test_encoding_with_pts(self):
        path = self.sandboxed('video_with_pts.mov')
        output = av.open(path, 'w')

        stream = output.add_stream('libx264', 24)
        stream.width = WIDTH
        stream.height = HEIGHT
        stream.pix_fmt = "yuv420p"

        for i in range(DURATION):
            frame = VideoFrame(WIDTH, HEIGHT, 'rgb24')
            frame.pts = i * 2000
            frame.time_base = Fraction(1, 48000)

            for packet in stream.encode(frame):
                self.assertEqual(packet.time_base, Fraction(1, 24))
                output.mux(packet)

        for packet in stream.encode(None):
            self.assertEqual(packet.time_base, Fraction(1, 24))
            output.mux(packet)

        output.close()


class TestBasicAudioEncoding(TestCase):

    def test_audio_transcode(self):
        path = self.sandboxed('audio_transcode.mov')
        output = av.open(path, 'w')
        output.metadata['title'] = 'container'
        output.metadata['key'] = 'value'

        sample_rate = 48000
        channel_layout = 'stereo'
        channels = 2
        sample_fmt = 's16'

        stream = output.add_stream('mp2', sample_rate)

        ctx = stream.codec_context
        ctx.time_base = sample_rate
        ctx.sample_rate = sample_rate
        ctx.format = sample_fmt
        ctx.layout = channel_layout
        ctx.channels = channels

        src = av.open(fate_suite('audio-reference/chorusnoise_2ch_44kHz_s16.wav'))
        for frame in src.decode(audio=0):
            frame.pts = None
            for packet in stream.encode(frame):
                output.mux(packet)

        for packet in stream.encode(None):
            output.mux(packet)

        output.close()

        container = av.open(path)
        self.assertEqual(len(container.streams), 1)
        self.assertEqual(container.metadata.get('title'), 'container', container.metadata)
        self.assertEqual(container.metadata.get('key'), None)

        stream = container.streams[0]
        self.assertIsInstance(stream, AudioStream)
        self.assertEqual(stream.codec_context.sample_rate, sample_rate)
        self.assertEqual(stream.codec_context.format.name, 's16p')
        self.assertEqual(stream.codec_context.channels, channels)


class TestEncodeStreamSemantics(TestCase):

    def test_audio_default_options(self):
        output = av.open(self.sandboxed('output.mov'), 'w')
        stream = output.add_stream('mp2')
        self.assertEqual(stream.bit_rate, 128000)
        self.assertEqual(stream.format.name, 's16')
        self.assertEqual(stream.rate, 48000)
        self.assertEqual(stream.ticks_per_frame, 1)
        self.assertEqual(stream.time_base, None)

    def test_video_default_options(self):
        output = av.open(self.sandboxed('output.mov'), 'w')
        stream = output.add_stream('mpeg4')
        self.assertEqual(stream.bit_rate, 1024000)
        self.assertEqual(stream.format.height, 480)
        self.assertEqual(stream.format.name, 'yuv420p')
        self.assertEqual(stream.format.width, 640)
        self.assertEqual(stream.height, 480)
        self.assertEqual(stream.pix_fmt, 'yuv420p')
        self.assertEqual(stream.rate, Fraction(24, 1))
        self.assertEqual(stream.ticks_per_frame, 1)
        self.assertEqual(stream.time_base, None)
        self.assertEqual(stream.width, 640)

    def test_stream_index(self):
        output = av.open(self.sandboxed('output.mov'), 'w')

        vstream = output.add_stream('mpeg4', 24)
        vstream.pix_fmt = 'yuv420p'
        vstream.width = 320
        vstream.height = 240

        astream = output.add_stream('mp2', 48000)
        astream.channels = 2
        astream.format = 's16'

        self.assertEqual(vstream.index, 0)
        self.assertEqual(astream.index, 1)

        vframe = VideoFrame(320, 240, 'yuv420p')
        vpacket = vstream.encode(vframe)[0]
        self.assertIs(vpacket.stream, vstream)
        self.assertEqual(vpacket.stream_index, 0)

        for i in range(10):
            aframe = AudioFrame('s16', 'stereo', samples=astream.frame_size)
            aframe.rate = 48000
            apackets = astream.encode(aframe)
            if apackets:
                apacket = apackets[0]
                break

        self.assertIs(apacket.stream, astream)
        self.assertEqual(apacket.stream_index, 1)


PyAV-8.1.0/tests/test_enums.py

import pickle

from av.enum import EnumType, define_enum

from .common import TestCase


# This must be at the top-level.
PickleableFooBar = define_enum('PickleableFooBar', __name__, [('FOO', 1)])


class TestEnums(TestCase):

    def define_foobar(self, **kwargs):
        return define_enum('Foobar', __name__, (
            ('FOO', 1),
            ('BAR', 2),
        ), **kwargs)

    def test_basics(self):

        cls = self.define_foobar()

        self.assertIsInstance(cls, EnumType)

        foo = cls.FOO

        self.assertIsInstance(foo, cls)
        self.assertEqual(foo.name, 'FOO')
        self.assertEqual(foo.value, 1)

        self.assertNotIsInstance(foo, PickleableFooBar)

    def test_access(self):

        cls = self.define_foobar()

        foo1 = cls.FOO
        foo2 = cls['FOO']
        foo3 = cls[1]
        foo4 = cls[foo1]

        self.assertIs(foo1, foo2)
        self.assertIs(foo1, foo3)
        self.assertIs(foo1, foo4)

        self.assertIn(foo1, cls)
        self.assertIn('FOO', cls)
        self.assertIn(1, cls)

        self.assertRaises(KeyError, lambda: cls['not a foo'])
        self.assertRaises(KeyError, lambda: cls[10])
        self.assertRaises(TypeError, lambda: cls[()])

        self.assertEqual(cls.get('FOO'), foo1)
        self.assertIs(cls.get('not a foo'), None)

    def test_casting(self):

        cls = self.define_foobar()

        foo = cls.FOO

        self.assertEqual(repr(foo), '')

        str_foo = str(foo)
        self.assertIsInstance(str_foo, str)
        self.assertEqual(str_foo, 'FOO')

        int_foo = int(foo)
        self.assertIsInstance(int_foo, int)
        self.assertEqual(int_foo, 1)

    def test_iteration(self):
        cls = self.define_foobar()
        self.assertEqual(list(cls), [cls.FOO, cls.BAR])

    def test_equality(self):

        cls = self.define_foobar()

        foo = cls.FOO
        bar = cls.BAR

        self.assertEqual(foo,
                         'FOO')
        self.assertEqual(foo, 1)
        self.assertEqual(foo, foo)

        self.assertNotEqual(foo, 'BAR')
        self.assertNotEqual(foo, 2)
        self.assertNotEqual(foo, bar)

        self.assertRaises(ValueError, lambda: foo == 'not a foo')
        self.assertRaises(ValueError, lambda: foo == 10)
        self.assertRaises(TypeError, lambda: foo == ())

    def test_as_key(self):

        cls = self.define_foobar()

        foo = cls.FOO
        d = {foo: 'value'}

        self.assertEqual(d[foo], 'value')
        self.assertIs(d.get('FOO'), None)
        self.assertIs(d.get(1), None)

    def test_pickleable(self):

        cls = PickleableFooBar

        foo = cls.FOO

        enc = pickle.dumps(foo)
        foo2 = pickle.loads(enc)

        self.assertIs(foo, foo2)

    def test_create_unknown(self):
        cls = self.define_foobar()
        baz = cls.get(3, create=True)
        self.assertEqual(baz.name, 'FOOBAR_3')
        self.assertEqual(baz.value, 3)

    def test_multiple_names(self):

        cls = define_enum('FFooBBar', __name__, (
            ('FOO', 1),
            ('F', 1),
            ('BAR', 2),
            ('B', 2),
        ))

        self.assertIs(cls.F, cls.FOO)
        self.assertEqual(cls.F.name, 'FOO')
        self.assertNotEqual(cls.F.name, 'F')

        # This is actually the string.
        self.assertEqual(cls.F, 'FOO')
        self.assertEqual(cls.F, 'F')
        self.assertNotEqual(cls.F, 'BAR')
        self.assertNotEqual(cls.F, 'B')
        self.assertRaises(ValueError, lambda: cls.F == 'x')

    def test_flag_basics(self):

        cls = define_enum('FoobarAllFlags', __name__,
                          dict(FOO=1, BAR=2, FOOBAR=3).items(), is_flags=True)
        foo = cls.FOO
        bar = cls.BAR

        foobar = foo | bar
        self.assertIs(foobar, cls.FOOBAR)

        foo2 = foobar & foo
        self.assertIs(foo2, foo)

        bar2 = foobar ^ foo
        self.assertIs(bar2, bar)

        bar3 = foobar & ~foo
        self.assertIs(bar3, bar)

        x = cls.FOO
        x |= cls.BAR
        self.assertIs(x, cls.FOOBAR)

        x = cls.FOOBAR
        x &= cls.FOO
        self.assertIs(x, cls.FOO)

    def test_multi_flags_basics(self):

        cls = self.define_foobar(is_flags=True)

        foo = cls.FOO
        bar = cls.BAR
        foobar = foo | bar

        self.assertEqual(foobar.name, 'FOO|BAR')
        self.assertEqual(foobar.value, 3)
        self.assertEqual(foobar.flags, (foo, bar))

        foobar2 = foo | bar
        foobar3 = cls[3]
        foobar4 = cls[foobar]

        self.assertIs(foobar, foobar2)
        self.assertIs(foobar, foobar3)
        self.assertIs(foobar, foobar4)

        self.assertRaises(KeyError, lambda: cls['FOO|BAR'])

        self.assertEqual(len(cls), 2)  # It didn't get bigger
        self.assertEqual(list(cls), [foo, bar])

    def test_multi_flags_create_missing(self):

        cls = self.define_foobar(is_flags=True)

        foobar = cls[3]
        self.assertIs(foobar, cls.FOO | cls.BAR)

        self.assertRaises(KeyError, lambda: cls[4])  # Not FOO or BAR
        self.assertRaises(KeyError, lambda: cls[7])  # FOO and BAR and missing flag.
    def test_properties(self):

        Flags = self.define_foobar(is_flags=True)
        foobar = Flags.FOO | Flags.BAR

        class Class(object):

            def __init__(self, value):
                self.value = Flags[value].value

            @Flags.property
            def flags(self):
                return self.value

            @flags.setter
            def flags(self, value):
                self.value = value

            foo = flags.flag_property('FOO')
            bar = flags.flag_property('BAR')

        obj = Class('FOO')

        self.assertIs(obj.flags, Flags.FOO)
        self.assertTrue(obj.foo)
        self.assertFalse(obj.bar)

        obj.bar = True
        self.assertIs(obj.flags, foobar)
        self.assertTrue(obj.foo)
        self.assertTrue(obj.bar)

        obj.foo = False
        self.assertIs(obj.flags, Flags.BAR)
        self.assertFalse(obj.foo)
        self.assertTrue(obj.bar)


PyAV-8.1.0/tests/test_errors.py

import errno
import traceback

import av

from .common import TestCase, is_windows


class TestErrorBasics(TestCase):

    def test_stringify(self):

        for cls in (av.ValueError, av.FileNotFoundError, av.DecoderNotFoundError):
            e = cls(1, 'foo')
            self.assertEqual(str(e), '[Errno 1] foo')
            self.assertEqual(repr(e), "{}(1, 'foo')".format(cls.__name__))
            self.assertEqual(
                traceback.format_exception_only(cls, e)[-1],
                '{}{}: [Errno 1] foo\n'.format(
                    'av.error.',
                    cls.__name__,
                ),
            )

        for cls in (av.ValueError, av.FileNotFoundError, av.DecoderNotFoundError):
            e = cls(1, 'foo', 'bar.txt')
            self.assertEqual(str(e), "[Errno 1] foo: 'bar.txt'")
            self.assertEqual(repr(e), "{}(1, 'foo', 'bar.txt')".format(cls.__name__))
            self.assertEqual(
                traceback.format_exception_only(cls, e)[-1],
                "{}{}: [Errno 1] foo: 'bar.txt'\n".format(
                    'av.error.',
                    cls.__name__,
                ),
            )

    def test_bases(self):

        self.assertTrue(issubclass(av.ValueError, ValueError))
        self.assertTrue(issubclass(av.ValueError, av.FFmpegError))

        self.assertTrue(issubclass(av.FileNotFoundError, FileNotFoundError))
        self.assertTrue(issubclass(av.FileNotFoundError, OSError))
        self.assertTrue(issubclass(av.FileNotFoundError, av.FFmpegError))

    def test_filenotfound(self):
        """Catch using builtin class on
        Python 3.3"""
        try:
            av.open('does not exist')
        except FileNotFoundError as e:
            self.assertEqual(e.errno, errno.ENOENT)
            if is_windows:
                self.assertTrue(e.strerror in ['Error number -2 occurred',
                                               'No such file or directory'])
            else:
                self.assertEqual(e.strerror, 'No such file or directory')
            self.assertEqual(e.filename, 'does not exist')
        else:
            self.fail('no exception raised')

    def test_buffertoosmall(self):
        """Throw an exception from an enum."""
        try:
            av.error.err_check(-av.error.BUFFER_TOO_SMALL.value)
        except av.BufferTooSmallError as e:
            self.assertEqual(e.errno, av.error.BUFFER_TOO_SMALL.value)
        else:
            self.fail('no exception raised')


PyAV-8.1.0/tests/test_file_probing.py

from __future__ import division

from fractions import Fraction

import av

from .common import TestCase, fate_suite

try:
    long
except NameError:
    long = int


class TestAudioProbe(TestCase):

    def setUp(self):
        self.file = av.open(fate_suite('aac/latm_stereo_to_51.ts'))

    def test_container_probing(self):
        self.assertEqual(str(self.file.format), "")
        self.assertEqual(self.file.format.name, 'mpegts')
        self.assertEqual(self.file.format.long_name, "MPEG-TS (MPEG-2 Transport Stream)")
        self.assertEqual(self.file.size, 207740)

        # This is a little odd, but on OS X with FFmpeg we get a different value.
        self.assertIn(self.file.bit_rate, (269558, 270494))
        self.assertEqual(len(self.file.streams), 1)
        self.assertEqual(self.file.start_time, long(1400000))
        self.assertEqual(self.file.metadata, {})

    def test_stream_probing(self):
        stream = self.file.streams[0]

        # actual stream properties
        self.assertEqual(stream.average_rate, None)
        self.assertEqual(stream.base_rate, None)
        self.assertEqual(stream.guessed_rate, None)
        self.assertEqual(stream.duration, 554880)
        self.assertEqual(stream.frames, 0)
        self.assertEqual(stream.id, 256)
        self.assertEqual(stream.index, 0)
        self.assertEqual(stream.language, 'eng')
        self.assertEqual(stream.metadata, {
            'language': 'eng',
        })
        self.assertEqual(stream.profile, 'LC')
        self.assertEqual(stream.start_time, 126000)
        self.assertEqual(stream.time_base, Fraction(1, 90000))
        self.assertEqual(stream.type, 'audio')

        # codec properties
        self.assertEqual(stream.name, 'aac_latm')
        self.assertEqual(stream.long_name, 'AAC LATM (Advanced Audio Coding LATM syntax)')

        # codec context properties
        self.assertEqual(stream.bit_rate, None)
        self.assertEqual(stream.channels, 2)
        self.assertEqual(stream.format.bits, 32)
        self.assertEqual(stream.format.name, 'fltp')
        self.assertEqual(stream.layout.name, 'stereo')
        self.assertEqual(stream.max_bit_rate, None)
        self.assertEqual(stream.rate, 48000)


class TestDataProbe(TestCase):

    def setUp(self):
        self.file = av.open(fate_suite('mxf/track_01_v02.mxf'))

    def test_container_probing(self):
        self.assertEqual(str(self.file.format), "")
        self.assertEqual(self.file.format.name, 'mxf')
        self.assertEqual(self.file.format.long_name, 'MXF (Material eXchange Format)')
        self.assertEqual(self.file.size, 1453153)
        self.assertEqual(self.file.bit_rate, 8 * self.file.size * av.time_base // self.file.duration)
        self.assertEqual(self.file.duration, 417083)
        self.assertEqual(len(self.file.streams), 4)

        for key, value, min_version in (
            ('application_platform', 'AAFSDK (MacOS X)', None),
            ('comment_Comments', 'example comment', None),
            ('comment_UNC Path',
             '/Users/mark/Desktop/dnxhr_tracknames_export.aaf', None),
            ('company_name', 'Avid Technology, Inc.', None),
            ('generation_uid', 'b6bcfcab-70ff-7331-c592-233869de11d2', None),
            ('material_package_name', 'Example.new.04', None),
            ('material_package_umid', '0x060A2B340101010101010F001300000057E19D16BA8202DB060E2B347F7F2A80', None),
            ('modification_date', '2016-09-20T20:33:26.000000Z', None),
            # Next one is FFmpeg >= 4.2.
            ('operational_pattern_ul', '060e2b34.04010102.0d010201.10030000', {'libavformat': (58, 29)}),
            ('product_name', 'Avid Media Composer 8.6.3.43955', None),
            ('product_uid', 'acfbf03a-4f42-a231-d0b7-c06ecd3d4ad7', None),
            ('product_version', 'Unknown version', None),
            ('project_name', 'UHD', None),
            ('uid', '4482d537-4203-ea40-9e4e-08a22900dd39', None),
        ):
            if min_version and any(
                av.library_versions[name] < version
                for name, version in min_version.items()
            ):
                continue
            self.assertEqual(self.file.metadata.get(key), value)

    def test_stream_probing(self):
        stream = self.file.streams[0]

        # actual stream properties
        self.assertEqual(stream.average_rate, None)
        self.assertEqual(stream.base_rate, None)
        self.assertEqual(stream.guessed_rate, None)
        self.assertEqual(stream.duration, 37537)
        self.assertEqual(stream.frames, 0)
        self.assertEqual(stream.id, 1)
        self.assertEqual(stream.index, 0)
        self.assertEqual(stream.language, None)
        self.assertEqual(stream.metadata, {
            'data_type': 'video',
            'file_package_umid': '0x060A2B340101010101010F001300000057E19D16BA8302DB060E2B347F7F2A80',
            'track_name': 'Base',
        })
        self.assertEqual(stream.profile, None)
        self.assertEqual(stream.start_time, 0)
        self.assertEqual(stream.time_base, Fraction(1, 90000))
        self.assertEqual(stream.type, 'data')

        # codec properties
        self.assertEqual(stream.name, None)
        self.assertEqual(stream.long_name, None)


class TestSubtitleProbe(TestCase):

    def setUp(self):
        self.file = av.open(fate_suite('sub/MovText_capability_tester.mp4'))

    def test_container_probing(self):
        self.assertEqual(str(self.file.format), "")
        self.assertEqual(self.file.format.name, 'mov,mp4,m4a,3gp,3g2,mj2')
        self.assertEqual(self.file.format.long_name, 'QuickTime / MOV')
        self.assertEqual(self.file.size, 825)
        self.assertEqual(self.file.bit_rate, 8 * self.file.size * av.time_base // self.file.duration)
        self.assertEqual(self.file.duration, 8140000)
        self.assertEqual(len(self.file.streams), 1)
        self.assertEqual(self.file.metadata, {
            'compatible_brands': 'isom',
            'creation_time': '2012-07-04T05:10:41.000000Z',
            'major_brand': 'isom',
            'minor_version': '1',
        })

    def test_stream_probing(self):
        stream = self.file.streams[0]

        # actual stream properties
        self.assertEqual(stream.average_rate, None)
        self.assertEqual(stream.duration, 8140)
        self.assertEqual(stream.frames, 6)
        self.assertEqual(stream.id, 1)
        self.assertEqual(stream.index, 0)
        self.assertEqual(stream.language, 'und')
        self.assertEqual(stream.metadata, {
            'creation_time': '2012-07-04T05:10:41.000000Z',
            'handler_name': 'reference.srt - Imported with GPAC 0.4.6-DEV-rev4019',
            'language': 'und'
        })
        self.assertEqual(stream.profile, None)
        self.assertEqual(stream.start_time, None)
        self.assertEqual(stream.time_base, Fraction(1, 1000))
        self.assertEqual(stream.type, 'subtitle')

        # codec properties
        self.assertEqual(stream.name, 'mov_text')
        self.assertEqual(stream.long_name, '3GPP Timed Text subtitle')


class TestVideoProbe(TestCase):

    def setUp(self):
        self.file = av.open(fate_suite('mpeg2/mpeg2_field_encoding.ts'))

    def test_container_probing(self):
        self.assertEqual(str(self.file.format), "")
        self.assertEqual(self.file.format.name, 'mpegts')
        self.assertEqual(self.file.format.long_name, "MPEG-TS (MPEG-2 Transport Stream)")
        self.assertEqual(self.file.size, 800000)

        # This is a little odd, but on OS X with FFmpeg we get a different value.
        self.assertIn(self.file.duration, (1620000, 1580000))
        self.assertEqual(self.file.bit_rate, 8 * self.file.size * av.time_base // self.file.duration)
        self.assertEqual(len(self.file.streams), 1)
        self.assertEqual(self.file.start_time, 22953408322)
        self.assertEqual(self.file.metadata, {})

    def test_stream_probing(self):
        stream = self.file.streams[0]

        # actual stream properties
        self.assertEqual(stream.average_rate, Fraction(25, 1))
        self.assertEqual(stream.duration, 145800)
        self.assertEqual(stream.frames, 0)
        self.assertEqual(stream.id, 4131)
        self.assertEqual(stream.index, 0)
        self.assertEqual(stream.language, None)
        self.assertEqual(stream.metadata, {})
        self.assertEqual(stream.profile, 'Simple')
        self.assertEqual(stream.start_time, 2065806749)
        self.assertEqual(stream.time_base, Fraction(1, 90000))
        self.assertEqual(stream.type, 'video')

        # codec properties
        self.assertEqual(stream.long_name, 'MPEG-2 video')
        self.assertEqual(stream.name, 'mpeg2video')

        # codec context properties
        self.assertEqual(stream.bit_rate, 3364800)
        self.assertEqual(stream.display_aspect_ratio, Fraction(4, 3))
        self.assertEqual(stream.format.name, 'yuv420p')
        self.assertFalse(stream.has_b_frames)
        self.assertEqual(stream.gop_size, 12)
        self.assertEqual(stream.height, 576)
        self.assertEqual(stream.max_bit_rate, None)
        self.assertEqual(stream.sample_aspect_ratio, Fraction(16, 15))
        self.assertEqual(stream.width, 720)

        # For some reason, these behave differently on OS X (@mikeboers) and
        # Ubuntu (Travis). We think it is FFmpeg, but haven't been able to
        # confirm.
        self.assertIn(stream.coded_width, (720, 0))
        self.assertIn(stream.coded_height, (576, 0))

PyAV-8.1.0/tests/test_filters.py

from fractions import Fraction
from unittest import SkipTest
import errno

import numpy as np

from av import AudioFrame, VideoFrame
from av.audio.frame import format_dtypes
from av.filter import Filter, Graph

from .common import Image, TestCase, fate_suite


def generate_audio_frame(frame_num, input_format='s16', layout='stereo', sample_rate=44100, frame_size=1024):
    """
    Generate audio frame representing part of the sinusoidal wave
    :param input_format: default: s16
    :param layout: default: stereo
    :param sample_rate: default: 44100
    :param frame_size: default: 1024
    :param frame_num: frame number
    :return: audio frame for sinusoidal wave audio signal slice
    """
    frame = AudioFrame(format=input_format, layout=layout, samples=frame_size)
    frame.sample_rate = sample_rate
    frame.pts = frame_num * frame_size

    for i in range(len(frame.layout.channels)):
        data = np.zeros(frame_size, dtype=format_dtypes[input_format])
        for j in range(frame_size):
            data[j] = np.sin(2 * np.pi * (frame_num + j) * (i + 1) / float(frame_size))
        frame.planes[i].update(data)

    return frame


class TestFilters(TestCase):

    def test_filter_descriptor(self):
        f = Filter('testsrc')
        self.assertEqual(f.name, 'testsrc')
        self.assertEqual(f.description, 'Generate test pattern.')
        self.assertFalse(f.dynamic_inputs)
        self.assertEqual(len(f.inputs), 0)
        self.assertFalse(f.dynamic_outputs)
        self.assertEqual(len(f.outputs), 1)
        self.assertEqual(f.outputs[0].name, 'default')
        self.assertEqual(f.outputs[0].type, 'video')

    def test_dynamic_filter_descriptor(self):
        f = Filter('split')
        self.assertFalse(f.dynamic_inputs)
        self.assertEqual(len(f.inputs), 1)
        self.assertTrue(f.dynamic_outputs)
        self.assertEqual(len(f.outputs), 0)

    def test_generator_graph(self):
        graph = Graph()
        src = graph.add('testsrc')
        lutrgb = graph.add('lutrgb',
"r=maxval+minval-val:g=maxval+minval-val:b=maxval+minval-val", name='invert') sink = graph.add('buffersink') src.link_to(lutrgb) lutrgb.link_to(sink) # pads and links self.assertIs(src.outputs[0].link.output, lutrgb.inputs[0]) self.assertIs(lutrgb.inputs[0].link.input, src.outputs[0]) frame = sink.pull() self.assertIsInstance(frame, VideoFrame) if Image: frame.to_image().save(self.sandboxed('mandelbrot2.png')) def test_auto_find_sink(self): graph = Graph() src = graph.add('testsrc') src.link_to(graph.add('buffersink')) graph.configure() frame = graph.pull() if Image: frame.to_image().save(self.sandboxed('mandelbrot3.png')) def test_delegate_sink(self): graph = Graph() src = graph.add('testsrc') src.link_to(graph.add('buffersink')) graph.configure() frame = src.pull() if Image: frame.to_image().save(self.sandboxed('mandelbrot4.png')) def test_haldclut_graph(self): raise SkipTest() graph = Graph() img = Image.open(fate_suite('png1/lena-rgb24.png')) frame = VideoFrame.from_image(img) img_source = graph.add_buffer(frame) hald_img = Image.open('hald_7.png') hald_frame = VideoFrame.from_image(hald_img) hald_source = graph.add_buffer(hald_frame) hald_filter = graph.add('haldclut') sink = graph.add('buffersink') img_source.link(0, hald_filter, 0) hald_source.link(0, hald_filter, 1) hald_filter.link(0, sink, 0) graph.config() self.assertIs(img_source.outputs[0].linked_to, hald_filter.inputs[0]) self.assertIs(hald_source.outputs[0].linked_to, hald_filter.inputs[1]) self.assertIs(hald_filter.outputs[0].linked_to, sink.inputs[0]) hald_source.push(hald_frame) img_source.push(frame) frame = sink.pull() self.assertIsInstance(frame, VideoFrame) frame.to_image().save(self.sandboxed('filtered.png')) def test_audio_buffer_sink(self): graph = Graph() audio_buffer = graph.add_abuffer( format='fltp', sample_rate=48000, layout='stereo', time_base=Fraction(1, 48000) ) audio_buffer.link_to(graph.add('abuffersink')) graph.configure() try: graph.pull() except OSError as e: # we haven't 
pushed any input so expect no frames / EAGAIN if e.errno != errno.EAGAIN: raise @staticmethod def link_nodes(*nodes): for c, n in zip(nodes, nodes[1:]): c.link_to(n) def test_audio_buffer_resample(self): graph = Graph() self.link_nodes( graph.add_abuffer( format='fltp', sample_rate=48000, layout='stereo', time_base=Fraction(1, 48000) ), graph.add( 'aformat', 'sample_fmts=s16:sample_rates=44100:channel_layouts=stereo' ), graph.add('abuffersink') ) graph.configure() graph.push( generate_audio_frame( 0, input_format='fltp', layout='stereo', sample_rate=48000 ) ) out_frame = graph.pull() self.assertEqual(out_frame.format.name, 's16') self.assertEqual(out_frame.layout.name, 'stereo') self.assertEqual(out_frame.sample_rate, 44100) def test_audio_buffer_volume_filter(self): graph = Graph() self.link_nodes( graph.add_abuffer( format='fltp', sample_rate=48000, layout='stereo', time_base=Fraction(1, 48000) ), graph.add('volume', volume='0.5'), graph.add('abuffersink') ) graph.configure() input_frame = generate_audio_frame(0, input_format='fltp', layout='stereo', sample_rate=48000) graph.push(input_frame) out_frame = graph.pull() self.assertEqual(out_frame.format.name, 'fltp') self.assertEqual(out_frame.layout.name, 'stereo') self.assertEqual(out_frame.sample_rate, 48000) input_data = input_frame.to_ndarray() output_data = out_frame.to_ndarray() self.assertTrue(np.allclose(input_data * 0.5, output_data), "Check that volume is reduced") PyAV-8.1.0/tests/test_logging.py000066400000000000000000000042511416312437500165500ustar00rootroot00000000000000from __future__ import division import errno import logging import threading import av.error import av.logging from .common import TestCase def do_log(message): av.logging.log(av.logging.INFO, 'test', message) class TestLogging(TestCase): def test_adapt_level(self): self.assertEqual( av.logging.adapt_level(av.logging.ERROR), logging.ERROR ) self.assertEqual( av.logging.adapt_level(av.logging.WARNING), logging.WARNING ) 
self.assertEqual( av.logging.adapt_level((av.logging.WARNING + av.logging.ERROR) // 2), logging.WARNING ) def test_threaded_captures(self): with av.logging.Capture(local=True) as logs: do_log('main') thread = threading.Thread(target=do_log, args=('thread', )) thread.start() thread.join() self.assertIn((av.logging.INFO, 'test', 'main'), logs) def test_global_captures(self): with av.logging.Capture(local=False) as logs: do_log('main') thread = threading.Thread(target=do_log, args=('thread', )) thread.start() thread.join() self.assertIn((av.logging.INFO, 'test', 'main'), logs) self.assertIn((av.logging.INFO, 'test', 'thread'), logs) def test_repeats(self): with av.logging.Capture() as logs: do_log('foo') do_log('foo') do_log('bar') do_log('bar') do_log('bar') do_log('baz') logs = [log for log in logs if log[1] == 'test'] self.assertEqual(logs, [ (av.logging.INFO, 'test', 'foo'), (av.logging.INFO, 'test', 'foo'), (av.logging.INFO, 'test', 'bar'), (av.logging.INFO, 'test', 'bar (repeated 2 more times)'), (av.logging.INFO, 'test', 'baz'), ]) def test_error(self): log = (av.logging.ERROR, 'test', 'This is a test.') av.logging.log(*log) try: av.error.err_check(-errno.EPERM) except OSError as e: self.assertEqual(e.log, log) else: self.fail() PyAV-8.1.0/tests/test_options.py000066400000000000000000000010711416312437500166120ustar00rootroot00000000000000from av import ContainerFormat from av.option import Option, OptionType from .common import TestCase class TestOptions(TestCase): def test_mov_options(self): mov = ContainerFormat('mov') options = mov.descriptor.options by_name = {opt.name: opt for opt in options} opt = by_name.get('use_absolute_path') self.assertIsInstance(opt, Option) self.assertEqual(opt.name, 'use_absolute_path') # This was not a good option to actually test. 
self.assertIn(opt.type, (OptionType.BOOL, OptionType.INT)) PyAV-8.1.0/tests/test_python_io.py000066400000000000000000000054121416312437500171320ustar00rootroot00000000000000from __future__ import division import av from .common import MethodLogger, TestCase, fate_suite from .test_encode import assert_rgb_rotate, write_rgb_rotate try: from cStringIO import StringIO except ImportError: from io import BytesIO as StringIO class NonSeekableBuffer: def __init__(self, data): self.data = data def read(self, n): data = self.data[0:n] self.data = self.data[n:] return data class TestPythonIO(TestCase): def test_reading(self): with open(fate_suite('mpeg2/mpeg2_field_encoding.ts'), 'rb') as fh: wrapped = MethodLogger(fh) container = av.open(wrapped) self.assertEqual(container.format.name, 'mpegts') self.assertEqual(container.format.long_name, "MPEG-TS (MPEG-2 Transport Stream)") self.assertEqual(len(container.streams), 1) self.assertEqual(container.size, 800000) self.assertEqual(container.metadata, {}) # Make sure it did actually call "read". reads = wrapped._filter('read') self.assertTrue(reads) def test_reading_no_seek(self): with open(fate_suite('mpeg2/mpeg2_field_encoding.ts'), 'rb') as fh: data = fh.read() buf = NonSeekableBuffer(data) wrapped = MethodLogger(buf) container = av.open(wrapped) self.assertEqual(container.format.name, 'mpegts') self.assertEqual(container.format.long_name, "MPEG-TS (MPEG-2 Transport Stream)") self.assertEqual(len(container.streams), 1) self.assertEqual(container.metadata, {}) # Make sure it did actually call "read". reads = wrapped._filter('read') self.assertTrue(reads) def test_basic_errors(self): self.assertRaises(Exception, av.open, None) self.assertRaises(Exception, av.open, None, 'w') def test_writing(self): path = self.sandboxed('writing.mov') with open(path, 'wb') as fh: wrapped = MethodLogger(fh) output = av.open(wrapped, 'w', 'mov') write_rgb_rotate(output) output.close() fh.close() # Make sure it did actually write. 
writes = wrapped._filter('write') self.assertTrue(writes) # Standard assertions. assert_rgb_rotate(self, av.open(path)) def test_buffer_read_write(self): buffer_ = StringIO() wrapped = MethodLogger(buffer_) write_rgb_rotate(av.open(wrapped, 'w', 'mp4')) # Make sure it did actually write. writes = wrapped._filter('write') self.assertTrue(writes) self.assertTrue(buffer_.tell()) # Standard assertions. buffer_.seek(0) assert_rgb_rotate(self, av.open(buffer_)) PyAV-8.1.0/tests/test_seek.py000066400000000000000000000132471416312437500160560ustar00rootroot00000000000000from __future__ import division import unittest import warnings import av from .common import TestCase, fate_suite def timestamp_to_frame(timestamp, stream): fps = stream.rate time_base = stream.time_base start_time = stream.start_time frame = (timestamp - start_time) * float(time_base) * float(fps) return frame def step_forward(container, stream): for packet in container.demux(stream): for frame in packet.decode(): if frame: return frame class TestSeek(TestCase): def test_seek_float(self): container = av.open(fate_suite('h264/interlaced_crop.mp4')) self.assertRaises(TypeError, container.seek, 1.0) self.assertRaises(TypeError, container.streams.video[0].seek, 1.0) def test_seek_int64(self): # Assert that it accepts large values. # Issue 251 pointed this out. 
container = av.open(fate_suite('h264/interlaced_crop.mp4')) container.seek(2**32) def test_seek_start(self): container = av.open(fate_suite('h264/interlaced_crop.mp4')) # count all the packets total_packet_count = 0 for packet in container.demux(): total_packet_count += 1 # seek to beginning container.seek(-1) # count packets again seek_packet_count = 0 for packet in container.demux(): seek_packet_count += 1 self.assertEqual(total_packet_count, seek_packet_count) def test_seek_middle(self): container = av.open(fate_suite('h264/interlaced_crop.mp4')) # count all the packets total_packet_count = 0 for packet in container.demux(): total_packet_count += 1 # seek to middle container.seek(container.duration // 2) seek_packet_count = 0 for packet in container.demux(): seek_packet_count += 1 self.assertTrue(seek_packet_count < total_packet_count) def test_seek_end(self): container = av.open(fate_suite('h264/interlaced_crop.mp4')) # seek to middle container.seek(container.duration // 2) middle_packet_count = 0 for packet in container.demux(): middle_packet_count += 1 # you can't really seek to to end but you can to the last keyframe container.seek(container.duration) seek_packet_count = 0 for packet in container.demux(): seek_packet_count += 1 # there should be some packet because we're seeking to the last keyframe self.assertTrue(seek_packet_count > 0) self.assertTrue(seek_packet_count < middle_packet_count) def test_decode_half(self): container = av.open(fate_suite('h264/interlaced_crop.mp4')) video_stream = next(s for s in container.streams if s.type == 'video') total_frame_count = 0 # Count number of frames in video for packet in container.demux(video_stream): for frame in packet.decode(): total_frame_count += 1 self.assertEqual(video_stream.frames, total_frame_count) # set target frame to middle frame target_frame = int(total_frame_count / 2.0) target_timestamp = int((target_frame * av.time_base) / video_stream.rate) # should seek to nearest keyframe before 
target_timestamp container.seek(target_timestamp) current_frame = None frame_count = 0 for packet in container.demux(video_stream): for frame in packet.decode(): if current_frame is None: current_frame = timestamp_to_frame(frame.pts, video_stream) else: current_frame += 1 # start counting once we reach the target frame if current_frame is not None and current_frame >= target_frame: frame_count += 1 self.assertEqual(frame_count, total_frame_count - target_frame) def test_stream_seek(self, use_deprecated_api=False): container = av.open(fate_suite('h264/interlaced_crop.mp4')) video_stream = next(s for s in container.streams if s.type == 'video') total_frame_count = 0 # Count number of frames in video for packet in container.demux(video_stream): for frame in packet.decode(): total_frame_count += 1 target_frame = int(total_frame_count / 2.0) time_base = float(video_stream.time_base) rate = float(video_stream.average_rate) target_sec = target_frame * 1 / rate target_timestamp = int(target_sec / time_base) + video_stream.start_time if use_deprecated_api: with warnings.catch_warnings(record=True) as captured: video_stream.seek(target_timestamp) self.assertEqual(len(captured), 1) self.assertIn('Stream.seek is deprecated.', captured[0].message.args[0]) else: container.seek(target_timestamp, stream=video_stream) current_frame = None frame_count = 0 for packet in container.demux(video_stream): for frame in packet.decode(): if current_frame is None: current_frame = timestamp_to_frame(frame.pts, video_stream) else: current_frame += 1 # start counting once we reach the target frame if current_frame is not None and current_frame >= target_frame: frame_count += 1 self.assertEqual(frame_count, total_frame_count - target_frame) def test_deprecated_stream_seek(self): self.test_stream_seek(use_deprecated_api=True) if __name__ == "__main__": unittest.main() PyAV-8.1.0/tests/test_streams.py000066400000000000000000000017771416312437500166120ustar00rootroot00000000000000import av from 
.common import TestCase, fate_suite class TestStreams(TestCase): def test_stream_tuples(self): for fate_name in ('h264/interlaced_crop.mp4', ): container = av.open(fate_suite(fate_name)) video_streams = tuple([s for s in container.streams if s.type == 'video']) self.assertEqual(video_streams, container.streams.video) audio_streams = tuple([s for s in container.streams if s.type == 'audio']) self.assertEqual(audio_streams, container.streams.audio) def test_selection(self): container = av.open(fate_suite('h264/interlaced_crop.mp4')) video = container.streams.video[0] # audio_stream = container.streams.audio[0] # audio_streams = list(container.streams.audio[0:2]) self.assertEqual([video], container.streams.get(video=0)) self.assertEqual([video], container.streams.get(video=(0, ))) # TODO: Find something in the fate suite with video, audio, and subtitles. PyAV-8.1.0/tests/test_subtitles.py000066400000000000000000000027341416312437500171440ustar00rootroot00000000000000from av.subtitles.subtitle import AssSubtitle, BitmapSubtitle import av from .common import TestCase, fate_suite class TestSubtitle(TestCase): def test_movtext(self): path = fate_suite('sub/MovText_capability_tester.mp4') fh = av.open(path) subs = [] for packet in fh.demux(): subs.extend(packet.decode()) self.assertEqual(len(subs), 3) self.assertIsInstance(subs[0][0], AssSubtitle) # The format FFmpeg gives us changed at one point. 
self.assertIn(subs[0][0].ass, ('Dialogue: 0,0:00:00.97,0:00:02.54,Default,- Test 1.\\N- Test 2.\r\n', 'Dialogue: 0,0:00:00.97,0:00:02.54,Default,,0,0,0,,- Test 1.\\N- Test 2.\r\n')) def test_vobsub(self): path = fate_suite('sub/vobsub.sub') fh = av.open(path) subs = [] for packet in fh.demux(): subs.extend(packet.decode()) self.assertEqual(len(subs), 43) sub = subs[0][0] self.assertIsInstance(sub, BitmapSubtitle) self.assertEqual(sub.x, 259) self.assertEqual(sub.y, 379) self.assertEqual(sub.width, 200) self.assertEqual(sub.height, 24) bms = sub.planes self.assertEqual(len(bms), 1) if hasattr(__builtins__, 'buffer'): self.assertEqual(len(buffer(bms[0])), 4800) # noqa if hasattr(__builtins__, 'memoryview'): self.assertEqual(len(memoryview(bms[0])), 4800) # noqa PyAV-8.1.0/tests/test_timeout.py000066400000000000000000000037131416312437500166120ustar00rootroot00000000000000from http.server import BaseHTTPRequestHandler from socketserver import TCPServer import threading import time import av from .common import TestCase, fate_suite PORT = 8002 CONTENT = open(fate_suite('mpeg2/mpeg2_field_encoding.ts'), 'rb').read()\ # Needs to be long enough for all host OSes to deal. TIMEOUT = 0.25 DELAY = 4 * TIMEOUT class HttpServer(TCPServer): allow_reuse_address = True def handle_error(self, request, client_address): pass class SlowRequestHandler(BaseHTTPRequestHandler): def do_GET(self): time.sleep(DELAY) self.send_response(200) self.send_header('Content-Length', str(len(CONTENT))) self.end_headers() self.wfile.write(CONTENT) def log_message(self, format, *args): pass class TestTimeout(TestCase): def setUp(cls): cls._server = HttpServer(('', PORT), SlowRequestHandler) cls._thread = threading.Thread(target=cls._server.handle_request) cls._thread.daemon = True # Make sure the tests will exit. cls._thread.start() def tearDown(cls): cls._thread.join(1) # Can't wait forever or the tests will never exit. 
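The timeout tests below repeatedly time a block with `time.time()` and compare the duration against `DELAY`. That pattern can be wrapped in a small context manager (a convenience sketch, not part of PyAV):

```python
import time
from contextlib import contextmanager


@contextmanager
def elapsed():
    """Time a block of code; read result['seconds'] after the block exits."""
    result = {}
    start = time.monotonic()  # monotonic is immune to wall-clock adjustments
    try:
        yield result
    finally:
        result['seconds'] = time.monotonic() - start


with elapsed() as t:
    time.sleep(0.05)
# t['seconds'] is now slightly more than 0.05
```

Using `time.monotonic()` instead of `time.time()` avoids spurious failures if the system clock is adjusted mid-test.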
cls._server.server_close() def test_no_timeout(self): start = time.time() av.open('http://localhost:%d/mpeg2_field_encoding.ts' % PORT) duration = time.time() - start self.assertGreater(duration, DELAY) def test_open_timeout(self): with self.assertRaises(av.ExitError): start = time.time() av.open('http://localhost:%d/mpeg2_field_encoding.ts' % PORT, timeout=TIMEOUT) duration = time.time() - start self.assertLess(duration, DELAY) def test_open_timeout_2(self): with self.assertRaises(av.ExitError): start = time.time() av.open('http://localhost:%d/mpeg2_field_encoding.ts' % PORT, timeout=(TIMEOUT, None)) duration = time.time() - start self.assertLess(duration, DELAY) PyAV-8.1.0/tests/test_videoformat.py000066400000000000000000000070751416312437500174500ustar00rootroot00000000000000from av import VideoFormat from .common import TestCase class TestVideoFormats(TestCase): def test_rgb24_inspection(self): fmt = VideoFormat('rgb24', 640, 480) self.assertEqual(fmt.name, 'rgb24') self.assertEqual(len(fmt.components), 3) self.assertFalse(fmt.is_planar) self.assertFalse(fmt.has_palette) self.assertTrue(fmt.is_rgb) self.assertEqual(fmt.chroma_width(), 640) self.assertEqual(fmt.chroma_height(), 480) self.assertEqual(fmt.chroma_width(1024), 1024) self.assertEqual(fmt.chroma_height(1024), 1024) for i in range(3): comp = fmt.components[i] self.assertEqual(comp.plane, 0) self.assertEqual(comp.bits, 8) self.assertFalse(comp.is_luma) self.assertFalse(comp.is_chroma) self.assertFalse(comp.is_alpha) self.assertEqual(comp.width, 640) self.assertEqual(comp.height, 480) def test_yuv420p_inspection(self): fmt = VideoFormat('yuv420p', 640, 480) self.assertEqual(fmt.name, 'yuv420p') self.assertEqual(len(fmt.components), 3) self._test_yuv420(fmt) def _test_yuv420(self, fmt): self.assertTrue(fmt.is_planar) self.assertFalse(fmt.has_palette) self.assertFalse(fmt.is_rgb) self.assertEqual(fmt.chroma_width(), 320) self.assertEqual(fmt.chroma_height(), 240) self.assertEqual(fmt.chroma_width(1024), 
512) self.assertEqual(fmt.chroma_height(1024), 512) for i, comp in enumerate(fmt.components): comp = fmt.components[i] self.assertEqual(comp.plane, i) self.assertEqual(comp.bits, 8) self.assertFalse(fmt.components[0].is_chroma) self.assertTrue(fmt.components[1].is_chroma) self.assertTrue(fmt.components[2].is_chroma) self.assertTrue(fmt.components[0].is_luma) self.assertFalse(fmt.components[1].is_luma) self.assertFalse(fmt.components[2].is_luma) self.assertFalse(fmt.components[0].is_alpha) self.assertFalse(fmt.components[1].is_alpha) self.assertFalse(fmt.components[2].is_alpha) self.assertEqual(fmt.components[0].width, 640) self.assertEqual(fmt.components[1].width, 320) self.assertEqual(fmt.components[2].width, 320) def test_yuva420p_inspection(self): fmt = VideoFormat('yuva420p', 640, 480) self.assertEqual(len(fmt.components), 4) self._test_yuv420(fmt) self.assertFalse(fmt.components[3].is_chroma) self.assertEqual(fmt.components[3].width, 640) def test_gray16be_inspection(self): fmt = VideoFormat('gray16be', 640, 480) self.assertEqual(fmt.name, 'gray16be') self.assertEqual(len(fmt.components), 1) self.assertFalse(fmt.is_planar) self.assertFalse(fmt.has_palette) self.assertFalse(fmt.is_rgb) self.assertEqual(fmt.chroma_width(), 640) self.assertEqual(fmt.chroma_height(), 480) self.assertEqual(fmt.chroma_width(1024), 1024) self.assertEqual(fmt.chroma_height(1024), 1024) comp = fmt.components[0] self.assertEqual(comp.plane, 0) self.assertEqual(comp.bits, 16) self.assertTrue(comp.is_luma) self.assertFalse(comp.is_chroma) self.assertEqual(comp.width, 640) self.assertEqual(comp.height, 480) self.assertFalse(comp.is_alpha) def test_pal8_inspection(self): fmt = VideoFormat('pal8', 640, 480) self.assertEqual(len(fmt.components), 1) self.assertTrue(fmt.has_palette) PyAV-8.1.0/tests/test_videoframe.py000066400000000000000000000324171416312437500172500ustar00rootroot00000000000000from unittest import SkipTest import warnings import numpy from av import VideoFrame from 
av.deprecation import AttributeRenamedWarning from .common import Image, TestCase, fate_png class TestVideoFrameConstructors(TestCase): def test_null_constructor(self): frame = VideoFrame() self.assertEqual(frame.width, 0) self.assertEqual(frame.height, 0) self.assertEqual(frame.format.name, 'yuv420p') def test_manual_yuv_constructor(self): frame = VideoFrame(640, 480, 'yuv420p') self.assertEqual(frame.width, 640) self.assertEqual(frame.height, 480) self.assertEqual(frame.format.name, 'yuv420p') def test_manual_rgb_constructor(self): frame = VideoFrame(640, 480, 'rgb24') self.assertEqual(frame.width, 640) self.assertEqual(frame.height, 480) self.assertEqual(frame.format.name, 'rgb24') class TestVideoFramePlanes(TestCase): def test_null_planes(self): frame = VideoFrame() # yuv420p self.assertEqual(len(frame.planes), 0) def test_yuv420p_planes(self): frame = VideoFrame(640, 480, 'yuv420p') self.assertEqual(len(frame.planes), 3) self.assertEqual(frame.planes[0].width, 640) self.assertEqual(frame.planes[0].height, 480) self.assertEqual(frame.planes[0].line_size, 640) self.assertEqual(frame.planes[0].buffer_size, 640 * 480) for i in range(1, 3): self.assertEqual(frame.planes[i].width, 320) self.assertEqual(frame.planes[i].height, 240) self.assertEqual(frame.planes[i].line_size, 320) self.assertEqual(frame.planes[i].buffer_size, 320 * 240) def test_yuv420p_planes_align(self): # If we request 8-byte alignment for a width which is not a multiple of 8, # the line sizes are larger than the plane width. 
frame = VideoFrame(318, 238, 'yuv420p') self.assertEqual(len(frame.planes), 3) self.assertEqual(frame.planes[0].width, 318) self.assertEqual(frame.planes[0].height, 238) self.assertEqual(frame.planes[0].line_size, 320) self.assertEqual(frame.planes[0].buffer_size, 320 * 238) for i in range(1, 3): self.assertEqual(frame.planes[i].width, 159) self.assertEqual(frame.planes[i].height, 119) self.assertEqual(frame.planes[i].line_size, 160) self.assertEqual(frame.planes[i].buffer_size, 160 * 119) def test_rgb24_planes(self): frame = VideoFrame(640, 480, 'rgb24') self.assertEqual(len(frame.planes), 1) self.assertEqual(frame.planes[0].width, 640) self.assertEqual(frame.planes[0].height, 480) self.assertEqual(frame.planes[0].line_size, 640 * 3) self.assertEqual(frame.planes[0].buffer_size, 640 * 480 * 3) class TestVideoFrameBuffers(TestCase): def test_buffer(self): if not hasattr(__builtins__, 'buffer'): raise SkipTest() frame = VideoFrame(640, 480, 'rgb24') frame.planes[0].update(b'01234' + (b'x' * (640 * 480 * 3 - 5))) buf = buffer(frame.planes[0]) # noqa self.assertEqual(buf[1], b'1') self.assertEqual(buf[:7], b'01234xx') def test_memoryview_read(self): if not hasattr(__builtins__, 'memoryview'): raise SkipTest() frame = VideoFrame(640, 480, 'rgb24') frame.planes[0].update(b'01234' + (b'x' * (640 * 480 * 3 - 5))) mem = memoryview(frame.planes[0]) # noqa self.assertEqual(mem.ndim, 1) self.assertEqual(mem.shape, (640 * 480 * 3, )) self.assertFalse(mem.readonly) self.assertEqual(mem[1], 49) self.assertEqual(mem[:7], b'01234xx') mem[1] = 46 self.assertEqual(mem[:7], b'0.234xx') class TestVideoFrameImage(TestCase): def setUp(self): if not Image: raise SkipTest() def test_roundtrip(self): image = Image.open(fate_png()) frame = VideoFrame.from_image(image) img = frame.to_image() img.save(self.sandboxed('roundtrip-high.jpg')) self.assertImagesAlmostEqual(image, img) def test_to_image_rgb24(self): sizes = [ (318, 238), (320, 240), (500, 500), ] for width, height in sizes: frame = 
VideoFrame(width, height, format='rgb24') # fill video frame data for plane in frame.planes: ba = bytearray(plane.buffer_size) pos = 0 for row in range(height): for i in range(plane.line_size): ba[pos] = i % 256 pos += 1 plane.update(ba) # construct expected image data expected = bytearray(height * width * 3) pos = 0 for row in range(height): for i in range(width * 3): expected[pos] = i % 256 pos += 1 img = frame.to_image() self.assertEqual(img.size, (width, height)) self.assertEqual(img.tobytes(), expected) class TestVideoFrameNdarray(TestCase): def test_basic_to_ndarray(self): frame = VideoFrame(640, 480, 'rgb24') array = frame.to_ndarray() self.assertEqual(array.shape, (480, 640, 3)) def test_basic_to_nd_array(self): frame = VideoFrame(640, 480, 'rgb24') with warnings.catch_warnings(record=True) as recorded: array = frame.to_nd_array() self.assertEqual(array.shape, (480, 640, 3)) # check deprecation warning self.assertEqual(len(recorded), 1) self.assertEqual(recorded[0].category, AttributeRenamedWarning) self.assertEqual( str(recorded[0].message), 'VideoFrame.to_nd_array is deprecated; please use VideoFrame.to_ndarray.') def test_ndarray_gray(self): array = numpy.random.randint(0, 256, size=(480, 640), dtype=numpy.uint8) for format in ['gray', 'gray8']: frame = VideoFrame.from_ndarray(array, format=format) self.assertEqual(frame.width, 640) self.assertEqual(frame.height, 480) self.assertEqual(frame.format.name, 'gray') self.assertTrue((frame.to_ndarray() == array).all()) def test_ndarray_gray_align(self): array = numpy.random.randint(0, 256, size=(238, 318), dtype=numpy.uint8) for format in ['gray', 'gray8']: frame = VideoFrame.from_ndarray(array, format=format) self.assertEqual(frame.width, 318) self.assertEqual(frame.height, 238) self.assertEqual(frame.format.name, 'gray') self.assertTrue((frame.to_ndarray() == array).all()) def test_ndarray_rgb(self): array = numpy.random.randint(0, 256, size=(480, 640, 3), dtype=numpy.uint8) for format in ['rgb24', 'bgr24']: 
frame = VideoFrame.from_ndarray(array, format=format) self.assertEqual(frame.width, 640) self.assertEqual(frame.height, 480) self.assertEqual(frame.format.name, format) self.assertTrue((frame.to_ndarray() == array).all()) def test_ndarray_rgb_align(self): array = numpy.random.randint(0, 256, size=(238, 318, 3), dtype=numpy.uint8) for format in ['rgb24', 'bgr24']: frame = VideoFrame.from_ndarray(array, format=format) self.assertEqual(frame.width, 318) self.assertEqual(frame.height, 238) self.assertEqual(frame.format.name, format) self.assertTrue((frame.to_ndarray() == array).all()) def test_ndarray_rgba(self): array = numpy.random.randint(0, 256, size=(480, 640, 4), dtype=numpy.uint8) for format in ['argb', 'rgba', 'abgr', 'bgra']: frame = VideoFrame.from_ndarray(array, format=format) self.assertEqual(frame.width, 640) self.assertEqual(frame.height, 480) self.assertEqual(frame.format.name, format) self.assertTrue((frame.to_ndarray() == array).all()) def test_ndarray_rgba_align(self): array = numpy.random.randint(0, 256, size=(238, 318, 4), dtype=numpy.uint8) for format in ['argb', 'rgba', 'abgr', 'bgra']: frame = VideoFrame.from_ndarray(array, format=format) self.assertEqual(frame.width, 318) self.assertEqual(frame.height, 238) self.assertEqual(frame.format.name, format) self.assertTrue((frame.to_ndarray() == array).all()) def test_ndarray_yuv420p(self): array = numpy.random.randint(0, 256, size=(720, 640), dtype=numpy.uint8) frame = VideoFrame.from_ndarray(array, format='yuv420p') self.assertEqual(frame.width, 640) self.assertEqual(frame.height, 480) self.assertEqual(frame.format.name, 'yuv420p') self.assertTrue((frame.to_ndarray() == array).all()) def test_ndarray_yuv420p_align(self): array = numpy.random.randint(0, 256, size=(357, 318), dtype=numpy.uint8) frame = VideoFrame.from_ndarray(array, format='yuv420p') self.assertEqual(frame.width, 318) self.assertEqual(frame.height, 238) self.assertEqual(frame.format.name, 'yuv420p') self.assertTrue((frame.to_ndarray() 
== array).all()) def test_ndarray_yuvj420p(self): array = numpy.random.randint(0, 256, size=(720, 640), dtype=numpy.uint8) frame = VideoFrame.from_ndarray(array, format='yuvj420p') self.assertEqual(frame.width, 640) self.assertEqual(frame.height, 480) self.assertEqual(frame.format.name, 'yuvj420p') self.assertTrue((frame.to_ndarray() == array).all()) def test_ndarray_yuyv422(self): array = numpy.random.randint(0, 256, size=(480, 640, 2), dtype=numpy.uint8) frame = VideoFrame.from_ndarray(array, format='yuyv422') self.assertEqual(frame.width, 640) self.assertEqual(frame.height, 480) self.assertEqual(frame.format.name, 'yuyv422') self.assertTrue((frame.to_ndarray() == array).all()) def test_ndarray_yuyv422_align(self): array = numpy.random.randint(0, 256, size=(238, 318, 2), dtype=numpy.uint8) frame = VideoFrame.from_ndarray(array, format='yuyv422') self.assertEqual(frame.width, 318) self.assertEqual(frame.height, 238) self.assertEqual(frame.format.name, 'yuyv422') self.assertTrue((frame.to_ndarray() == array).all()) def test_ndarray_rgb8(self): array = numpy.random.randint(0, 256, size=(480, 640), dtype=numpy.uint8) frame = VideoFrame.from_ndarray(array, format='rgb8') self.assertEqual(frame.width, 640) self.assertEqual(frame.height, 480) self.assertEqual(frame.format.name, 'rgb8') self.assertTrue((frame.to_ndarray() == array).all()) def test_ndarray_bgr8(self): array = numpy.random.randint(0, 256, size=(480, 640), dtype=numpy.uint8) frame = VideoFrame.from_ndarray(array, format='bgr8') self.assertEqual(frame.width, 640) self.assertEqual(frame.height, 480) self.assertEqual(frame.format.name, 'bgr8') self.assertTrue((frame.to_ndarray() == array).all()) def test_ndarray_pal8(self): array = numpy.random.randint(0, 256, size=(480, 640), dtype=numpy.uint8) palette = numpy.random.randint(0, 256, size=(256, 4), dtype=numpy.uint8) frame = VideoFrame.from_ndarray((array, palette), format='pal8') self.assertEqual(frame.width, 640) self.assertEqual(frame.height, 480) 
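A 'pal8' frame carries an index plane plus a 256-entry palette, which is why `from_ndarray` takes an `(array, palette)` pair in this test. A stdlib-only sketch of what the palette indirection means (a hypothetical helper, not a PyAV API):

```python
def apply_palette(index_rows, palette):
    """Expand pal8 index rows into rows of palette entries (e.g. RGBA tuples)."""
    return [[palette[i] for i in row] for row in index_rows]


# A 256-entry grayscale palette with opaque alpha.
palette = [(i, i, i, 255) for i in range(256)]
pixels = apply_palette([[0, 255], [128, 1]], palette)
```

Each pixel in the index plane is a single byte used to look up the final color, which is why the round-trip in the test returns both the indices and the palette unchanged.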
self.assertEqual(frame.format.name, 'pal8') returned = frame.to_ndarray() self.assertTrue((type(returned) is tuple) and len(returned) == 2) self.assertTrue((returned[0] == array).all()) self.assertTrue((returned[1] == palette).all()) class TestVideoFrameTiming(TestCase): def test_reformat_pts(self): frame = VideoFrame(640, 480, 'rgb24') frame.pts = 123 frame.time_base = '456/1' # Just to be different. frame = frame.reformat(320, 240) self.assertEqual(frame.pts, 123) self.assertEqual(frame.time_base, 456) class TestVideoFrameReformat(TestCase): def test_reformat_identity(self): frame1 = VideoFrame(640, 480, 'rgb24') frame2 = frame1.reformat(640, 480, 'rgb24') self.assertIs(frame1, frame2) def test_reformat_colourspace(self): # This is allowed. frame = VideoFrame(640, 480, 'rgb24') frame.reformat(src_colorspace=None, dst_colorspace='smpte240') # I thought this was not allowed, but it seems to be. frame = VideoFrame(640, 480, 'yuv420p') frame.reformat(src_colorspace=None, dst_colorspace='smpte240') def test_reformat_pixel_format_align(self): height = 480 for width in range(2, 258, 2): frame_yuv = VideoFrame(width, height, 'yuv420p') for plane in frame_yuv.planes: plane.update(b'\xff' * plane.buffer_size) expected_rgb = numpy.zeros(shape=(height, width, 3), dtype=numpy.uint8) expected_rgb[:, :, 0] = 255 expected_rgb[:, :, 1] = 124 expected_rgb[:, :, 2] = 255 frame_rgb = frame_yuv.reformat(format='rgb24') array_rgb = frame_rgb.to_ndarray() self.assertEqual(array_rgb.shape, (height, width, 3)) self.assertTrue((array_rgb == expected_rgb).all())
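The 318→320 line sizes asserted in the plane tests above, and the half-size chroma planes of yuv420p, come down to two pieces of integer arithmetic. A sketch assuming 8-byte alignment to match the values in these tests; the padding FFmpeg actually applies depends on the build and CPU:

```python
def aligned_line_size(width, bytes_per_pixel=1, align=8):
    """Round a row's byte size up to the next multiple of `align`."""
    size = width * bytes_per_pixel
    return (size + align - 1) // align * align


def ceil_rshift(a, shift):
    """FFmpeg's AV_CEIL_RSHIFT: ceil(a / 2**shift), used for chroma plane sizes."""
    return -((-a) >> shift)


# yuv420p at 318x238: luma rows pad to 320; chroma planes are 159x119
# and their rows pad to 160, matching the plane tests above.
print(aligned_line_size(318), ceil_rshift(318, 1), aligned_line_size(ceil_rshift(318, 1)))
# → 320 159 160
```

The same `aligned_line_size` arithmetic explains the packed rgb24 case: 640 pixels at 3 bytes each is already a multiple of 8, so the line size stays 1920.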