dvgrab-3.5+git20160707.1.e46042e/0000755000175000017500000000000012716434257013176 5ustar esesdvgrab-3.5+git20160707.1.e46042e/INSTALL0000644000175000017500000001722712716434257014240 0ustar esesBasic Installation ================== These are generic installation instructions. The `configure' shell script attempts to guess correct values for various system-dependent variables used during compilation. It uses those values to create a `Makefile' in each directory of the package. It may also create one or more `.h' files containing system-dependent definitions. Finally, it creates a shell script `config.status' that you can run in the future to recreate the current configuration, a file `config.cache' that saves the results of its tests to speed up reconfiguring, and a file `config.log' containing compiler output (useful mainly for debugging `configure'). If you need to do unusual things to compile the package, please try to figure out how `configure' could check whether to do them, and mail diffs or instructions to the address given in the `README' so they can be considered for the next release. If at some point `config.cache' contains results you don't want to keep, you may remove or edit it. The file `configure.in' is used to create `configure' by a program called `autoconf'. You only need `configure.in' if you want to change it or regenerate `configure' using a newer version of `autoconf'. The simplest way to compile this package is: 1. `cd' to the directory containing the package's source code and type `./configure' to configure the package for your system. If you're using `csh' on an old version of System V, you might need to type `sh ./configure' instead to prevent `csh' from trying to execute `configure' itself. Running `configure' takes awhile. While running, it prints some messages telling which features it is checking for. 2. Type `make' to compile the package. 3. Optionally, type `make check' to run any self-tests that come with the package. 4. Type `make install' to install the programs and any data files and documentation. 5. You can remove the program binaries and object files from the source code directory by typing `make clean'. To also remove the files that `configure' created (so you can compile the package for a different kind of computer), type `make distclean'. There is also a `make maintainer-clean' target, but that is intended mainly for the package's developers. If you use it, you may have to get all sorts of other programs in order to regenerate files that came with the distribution. Compilers and Options ===================== Some systems require unusual options for compilation or linking that the `configure' script does not know about. You can give `configure' initial values for variables by setting them in the environment. Using a Bourne-compatible shell, you can do that on the command line like this: CC=c89 CFLAGS=-O2 LIBS=-lposix ./configure Or on systems that have the `env' program, you can do it like this: env CPPFLAGS=-I/usr/local/include LDFLAGS=-s ./configure Compiling For Multiple Architectures ==================================== You can compile the package for more than one kind of computer at the same time, by placing the object files for each architecture in their own directory. To do this, you must use a version of `make' that supports the `VPATH' variable, such as GNU `make'. `cd' to the directory where you want the object files and executables to go and run the `configure' script. 
`configure' automatically checks for the source code in the directory that `configure' is in and in `..'. If you have to use a `make' that does not supports the `VPATH' variable, you have to compile the package for one architecture at a time in the source code directory. After you have installed the package for one architecture, use `make distclean' before reconfiguring for another architecture. Installation Names ================== By default, `make install' will install the package's files in `/usr/local/bin', `/usr/local/man', etc. You can specify an installation prefix other than `/usr/local' by giving `configure' the option `--prefix=PATH'. You can specify separate installation prefixes for architecture-specific files and architecture-independent files. If you give `configure' the option `--exec-prefix=PATH', the package will use PATH as the prefix for installing programs and libraries. Documentation and other data files will still use the regular prefix. In addition, if you use an unusual directory layout you can give options like `--bindir=PATH' to specify different values for particular kinds of files. Run `configure --help' for a list of the directories you can set and what kinds of files go in them. If the package supports it, you can cause programs to be installed with an extra prefix or suffix on their names by giving `configure' the option `--program-prefix=PREFIX' or `--program-suffix=SUFFIX'. Optional Features ================= Some packages pay attention to `--enable-FEATURE' options to `configure', where FEATURE indicates an optional part of the package. They may also pay attention to `--with-PACKAGE' options, where PACKAGE is something like `gnu-as' or `x' (for the X Window System). The `README' should mention any `--enable-' and `--with-' options that the package recognizes. For packages that use the X Window System, `configure' can usually find the X include and library files automatically, but if it doesn't, you can use the `configure' options `--x-includes=DIR' and `--x-libraries=DIR' to specify their locations. Specifying the System Type ========================== There may be some features `configure' can not figure out automatically, but needs to determine by the type of host the package will run on. Usually `configure' can figure that out, but if it prints a message saying it can not guess the host type, give it the `--host=TYPE' option. TYPE can either be a short name for the system type, such as `sun4', or a canonical name with three fields: CPU-COMPANY-SYSTEM See the file `config.sub' for the possible values of each field. If `config.sub' isn't included in this package, then this package doesn't need to know the host type. If you are building compiler tools for cross-compiling, you can also use the `--target=TYPE' option to select the type of system they will produce code for and the `--build=TYPE' option to select the type of system on which you are compiling the package. Sharing Defaults ================ If you want to set default values for `configure' scripts to share, you can create a site shell script called `config.site' that gives default values for variables like `CC', `cache_file', and `prefix'. `configure' looks for `PREFIX/share/config.site' if it exists, then `PREFIX/etc/config.site' if it exists. Or, you can set the `CONFIG_SITE' environment variable to the location of the site script. A warning: not all `configure' scripts look for a site script. 
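For instance, since `config.site' is just a shell fragment that presets those variables, a minimal site defaults file might look like the following (the compiler, flags, and prefix shown are placeholders for illustration, not values this package requires):

    CC=gcc
    CFLAGS=-O2
    prefix=/usr/local

and you could point `configure' at it explicitly with:

    CONFIG_SITE=/usr/local/etc/config.site ./configure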
Operation Controls
==================

   `configure' recognizes the following options to control how it operates.

`--cache-file=FILE'
     Use and save the results of the tests in FILE instead of `./config.cache'. Set FILE to `/dev/null' to disable caching, for debugging `configure'.

`--help'
     Print a summary of the options to `configure', and exit.

`--quiet'
`--silent'
`-q'
     Do not print messages saying which checks are being made. To suppress all normal output, redirect it to `/dev/null' (any error messages will still be shown).

`--srcdir=DIR'
     Look for the package's source code in directory DIR. Usually `configure' can determine that directory automatically.

`--version'
     Print the version of Autoconf used to generate the `configure' script, and exit.

`configure' also accepts some other, not widely useful, options.

dvgrab-3.5+git20160707.1.e46042e/rawdump.c

/* The header names were lost in this copy of the file; these are the headers the code below requires. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int gfd, i, result, len;
	unsigned char buffer[4096];

	if (argc != 2) {
		fprintf(stderr, "Usage: rawdump raw-data-file\n");
		exit(1);
	}
	gfd = open(argv[1], O_RDONLY);
	result = 1;
	while (result > 0) {
		/* each record is an int length followed by that many data bytes */
		result = read(gfd, &len, sizeof(len));
		if (result > 0) {
			result = read(gfd, buffer, len);
			if (result > 0) {
				printf("%3d:", len);
				for (i = 0; i < len; ++i)
					printf(" %2.2x", buffer[i]);
				printf("\n");
			}
		}
	}
	close(gfd);
	return 0;
}

dvgrab-3.5+git20160707.1.e46042e/README

dvgrab
(C) 2000-2004 Arne Schirmacher
(C) 2002-2008 Dan Dennedy
(C) 2007 Dan Streetman

dvgrab website: http://www.kinodv.org/
dvgrab cvs: http://sourceforge.net/projects/kino/

To use the program, turn on the camera, press play, and run the program:

dvgrab --frames 200 myfilm-

If you have properly configured your kernel and your ieee1394 device drivers, you should end up with about 30 MByte of DV image data nicely packed in a file myfilm-001.avi (digits and extension automatically appended), which can be displayed with Microsoft Media Player or a similar program (remember to install the required DV codecs; software should be included with your hardware).

Read the man page for more documentation, or run 'dvgrab --help'.

Please submit bug reports, fixes, or suggestions to http://sourceforge.net/tracker/?atid=114103&group_id=14103 or access the discussion forum at http://www.kinodv.org/

Be sure to read the NEWS file for very helpful release notes.

dvgrab-3.5+git20160707.1.e46042e/NEWS

dvgrab R E L E A S E   N O T E S

------- v3.5 -------

* Automatically detect DV vs. HDV when not using -noavc, -input, or -stdin.
* Now waits indefinitely for DV or HDV instead of giving up after 10 seconds.
* Bugfixes

------- v3.4 -------

* Fix a showstopping bug in the v3.3 release

------- v3.3 -------

* Give different warnings for dropped versus damaged frames
* Bugfixes

------- v3.2 -------

* New -srt option to write subtitle files containing recording date/time
* New -recordonly (-r) option for use with camcorder record (camera) mode
* New -rewind option to rewind tape to beginning before starting capture
* Added automatic stop capture at end of tape
* Added optional seconds argument to autosplit
* Improved autosplit
* Bugfixes and updated documentation

------- v3.1 -------

* New -jvc-p25 option to post-process JVC P25 streams for interoperability
* Fix regression on broken pipe output
* Improved HDV handling
* New -24p and -24pa options for Quicktime DV for 24 fps modes

------- v3.0 -------

* Add HDV (MPEG2-TS) support (-f hdv)!
* Add USB DV support (-v4l2)
* Simpler and more flexible option parsing
* Bugfixes and updated documentation

------- v2.1 -------

* Add a feature to not capture/write to file when the device is paused while in recording mode. This depends upon the device sending timecode as all zeroes while in that state.
* Changed --jpeg-overwrite so it does not generate a sequentially numbered filename or a filename containing the recording date or timecode. This makes dvgrab work nicely with StopMotion (http://stopmotion.bjoernen.com/).
* Bugfixes with dropped frame counting and quicktime support.

------- v2.0 -------

This release exclusively uses the new third-generation Linux IEEE 1394 capture API available in raw1394 and libiec61883. libiec61883 provides a protocol-specific layer over the raw foundation. This new approach is both more robust and simpler than using protocol-specific kernel modules such as dv1394. As a result, dvgrab 2.0 requires Linux kernel 2.4.20 or newer.

Usage of the connection management procedures in libiec61883 also makes it rather easy to capture from multiple DV devices on the same bus by using the --guid option. This feature is not entirely foolproof due to the inconsistency of device implementations, but you can try. Most devices run only at 100 Mbps, which limits a bus to two DV streams. However, through DIP switches or the output master plug register, some devices can run at 200 or 400 Mbps, and therefore more streams can exist on the bus. For example, I have tested four Canopus ADVC-500 devices on a single bus using the connection management procedures. Also, multiple host adapters are transparently supported when using the --guid option.

Like dvconnect, dvgrab now schedules with the realtime scheduling policy and locks all of its memory resident to prevent paging when run with superuser permission.

New options "--lockstep" and "--timecode" can be used together to provide a redundant capture system where multiple Linux hosts can capture from the same bus without knowledge of each other and still produce output files with the same name for the same segment of video.

------- v1.7 -------

This release fixes a few bugs related to deadlocks when stopping a dv1394 capture and to file permissions. Also, --duration should be slightly more accurate, especially when specifying 1 frame: --duration smpte=:::1. P.S. You can capture a specific number of frames using --duration smpte=:::X, whereas the --frames option is used for file splitting.

------- v1.6 -------

This is an enhancement release with some bug and compilation fixes.
New options are --timesys, for timestamping filenames using the system date and time, and --csize and --cmincutsize, for fine-grained control over the sizes of collections of files, which is useful for creating archives on CD or DVD.

------- v1.5 -------

This is a bugfix release. There were bugs in the AVI file creation, especially for files near or greater than 1GB. Also, the --every option was broken in the past two releases. There are also changes to make dvgrab behave more nicely when a camera is in camera mode rather than VTR mode. Finally, --autosplit is enhanced to check for timecode discontinuity as well as the special flag in the DV stream.

---- v1.4 ----

This is mainly a bugfix release. To prevent issues with dvgrab running in a non-interactive shell script, dvgrab now requires the --stdin option to read DV from a pipe. Also, the option --noavc was added to prevent any attempt to control the camera, which is especially useful if you are capturing from your device in camera mode as opposed to VTR mode. Another new option, --guid, lets you select from more than one device by its GUID (see /proc/bus/ieee1394/devices to look up a GUID). Finally, some changes were made in the AVI file handling to reduce memory consumption on very large file captures and to prevent occasional segfaults.

---- v1.3 ----

This is a major release prepared by Dan Dennedy that adds AV/C control and an interactive mode. The default mode of operation is non-interactive to prevent breaking existing users' scripts. However, dvgrab 1.3 will issue an AV/C command to tell the camera VTR to start playing, so dvcont may no longer be required. Be careful, though! If your camera is not in VTR mode but in camera mode, the play command tells the camera to start recording!

One enters interactive mode by supplying the -i or --interactive option. From there, the command prompt describes the single key presses available to control the camera and start/stop capture.

All dvgrab 1.3 messages are written to stderr (normally connected to your terminal so you can see them) and use a consistent, easy-to-parse format for status, event, and error messages.

dvgrab 1.3 can also write raw DV to stdout while simultaneously capturing to a file. This behavior is automatic when you redirect stdout or pipe the command. This is especially nice with recent versions of Xine and mplayer, as they can display DV with deinterlacing and adjust for the non-square pixel aspect ratio of DV. For example,

dvgrab -i | xine -D stdin://#demux:rawdv

The last part of the Xine MRL in the line above is important; otherwise, xine does not know the data format of the pipe! All of the above means one can build a user interface frontend (non-GPL even) to dvgrab by fork()ing and exec()ing dvgrab with pipes established between the frontend and dvgrab.

Building upon the pipe capabilities, in non-interactive mode dvgrab 1.3 accepts raw DV on stdin. If detected, dvgrab forgoes any 1394-based capture method. This allows one to leverage the file format writing and splitting options of dvgrab in conjunction with raw DV producers such as smilutils and ffmpeg.

A grand new option is --duration, which specifies a maximum duration for an entire capture session regardless of splitting. Furthermore, the argument to this option is specified in SMIL2 time value formats, which is quite expressive. Note that in interactive mode, starting capture starts a new session. Thus, in interactive mode, you can have multiple capture sessions by stopping and re-starting.
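To illustrate how --duration combines with the splitting options described above (the exact clock-value syntax is documented in the man page; the numbers and filename stem here are only placeholders), a session capped at ten minutes that autosplits into timestamped files of at most 1000 megabytes might be started with:

    dvgrab --duration smpte=00:10:00:00 --autosplit --size 1000 --timestamp myfilm-

Here --duration limits the entire session, while --size and --autosplit only control how that session is divided into files.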
Additional changes include some friendlier defaults and automatic options. dvgrab 1.3 will look on all your 1394 buses/host adapters for your camera without an explicit --card option; it selects the first camera discovered. Type 2 AVI is now the default file format for increased compatibility with other applications. Now, supplying --frames and --size options that would cause dvgrab to generate an AVI larger than 1GB automatically enables the --opendml option. Finally, the base filename argument is optional now too and defaults to "dvgrab-".

---- v1.2 ----

This is a major new release prepared by Dan Dennedy that pulls in much of what was improved upon in Kino, plus more.

AVI file correctness and compatibility have been improved. The 1.1 beta releases allowed one to specify small and large indexes. This is no longer required; it is automatic now, based upon general practice observed in AVIs created by other applications. Instead, there is now a new --opendml option for use with type 2 DV AVI (--format dv2). OpenDML is a specification of AVI extensions created by Matrox in collaboration with others and approved by Microsoft. OpenDML introduced a new index structure, which dvgrab 1.1 called the "large" index. Type 1 DV AVI (dv1) is inherently OpenDML, and it contains a large index only. dvgrab 1.2 type 2 files always contain a small index, but OpenDML adds a large index to permit type 2 AVIs greater than 1GB. The main difference between a type 1 and a type 2 file is that a type 2 contains a separate audio stream extracted from the DV. dvgrab 1.2 can use libdv (must be detected when compiled) to create this audio stream instead of using internal routines. There are several advantages to using libdv, the most obvious being audio block error handling. Without error handling, one can hear "bleeps" in the audio stream.

Raw DV capture has been improved to prevent corrupt files in the event of a dropped firewire packet. Also, one can send raw DV to stdout instead of to a file, so it can be used in pipes. This is especially useful with smilutils. For example, one can preview video with libdv's playdv utility:

dvgrab - | playdv --no-mmap

The '-' file designation forces the format to raw, overriding whatever you specify; the other formats are not streaming formats.

Support for Quicktime has been added if libquicktime is detected when compiled. Use "--format qt". It creates a separate audio track in two's complement format.

Support for JPEG has been added if jpeglib is detected when compiled. Use "--format jpeg" to save a series of sequentially numbered JPEG still images. There are two number parts in the filename: the first number increments with each scene change, while the second number increments with each successive save. Use --jpeg-overwrite to continually update the same file and not add any numbers to the file name stem you provide. The JPEG option works nicely with the --every option. There are several JPEG options you will find handy. First is --jpeg-quality, followed by a number between 0 and 100; the default is 75. --jpeg-deinterlace extracts one of the fields and doubles it for a cheap form of deinterlacing. However, when used with the scaling options --jpeg-width and --jpeg-height to reduce the resolution by 50%, you can get perfect deinterlaced output. The scaling options also allow one to compensate for the non-square pixels of DV.
For example, on NTSC, the following produces nice output for a webcam:

dvgrab --format jpeg --jpeg-deinterlace --jpeg-width 320 --jpeg-height 240

Due to code refactoring, all capture options work with all formats. For example, previous versions of dvgrab supported hardly any options, such as --autosplit, --every, --timestamp, and --frames, when used with --format raw. A new capture option, --size, followed by a number in megabytes, sets the maximum file size of any captured file.

If all this were not enough, dvgrab 1.2 also supports capture using the new dv1394 driver! See http://www.linux1394.org/dv1394.html for more information on this driver. By default and traditionally, dvgrab uses raw1394; however, raw1394 requires an interrupt for each packet, which can be too intense for some systems. dv1394, in kernel releases 2.4.21 or recent linux1394.org Subversion snapshots, drastically reduces the interrupt rate by using kernel background threads and internal buffering. You must use the --dv1394 option followed by the device file name to use dv1394 instead of raw1394.

dvgrab-3.5+git20160707.1.e46042e/avi.h

/* * avi.h library for AVI file format i/o * Copyright (C) 2000 - 2002 Arne Schirmacher * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ /** Common AVI declarations Some of this comes from the public domain AVI specification, which explains the microsoft-style definitions. \file avi.h */ #ifndef _AVI_H #define _AVI_H 1 #include "riff.h" #include "dvframe.h" #define PACKED(x) __attribute__((packed)) x #define AVI_SMALL_INDEX (0x01) #define AVI_LARGE_INDEX (0x02) #define KINO_AVI_INDEX_OF_INDEXES (0x00) #define KINO_AVI_INDEX_OF_CHUNKS (0x01) #define AVI_INDEX_2FIELD (0x01) enum { AVI_PAL, AVI_NTSC, AVI_AUDIO_48KHZ, AVI_AUDIO_44KHZ, AVI_AUDIO_32KHZ }; /** Declarations of the main AVI file header The contents of this struct go into the 'avih' chunk. */ typedef struct { /// frame display rate (or 0L) DWORD dwMicroSecPerFrame; /// max. transfer rate DWORD dwMaxBytesPerSec; /// pad to multiples of this size, normally 2K DWORD dwPaddingGranularity; /// the ever-present flags DWORD dwFlags; /// # frames in file DWORD dwTotalFrames; DWORD dwInitialFrames; DWORD dwStreams; DWORD dwSuggestedBufferSize; DWORD dwWidth; DWORD dwHeight; DWORD dwReserved[ 4 ]; } PACKED(MainAVIHeader); typedef struct { WORD top, bottom, left, right; } PACKED(RECT); /** Declaration of a stream header The contents of this struct go into the 'strh' header. */ typedef struct { FOURCC fccType; FOURCC fccHandler; DWORD dwFlags; /* Contains AVITF_* flags */ WORD wPriority; WORD wLanguage; DWORD dwInitialFrames; DWORD dwScale; DWORD dwRate; /* dwRate / dwScale == samples/second */ DWORD dwStart; DWORD dwLength; /* In units above...
*/ DWORD dwSuggestedBufferSize; DWORD dwQuality; DWORD dwSampleSize; RECT rcFrame; } PACKED(AVIStreamHeader); typedef struct { DWORD dwDVAAuxSrc; DWORD dwDVAAuxCtl; DWORD dwDVAAuxSrc1; DWORD dwDVAAuxCtl1; DWORD dwDVVAuxSrc; DWORD dwDVVAuxCtl; DWORD dwDVReserved[ 2 ]; } PACKED(DVINFO); typedef struct { DWORD biSize; LONG biWidth; LONG biHeight; WORD biPlanes; WORD biBitCount; DWORD biCompression; DWORD biSizeImage; LONG biXPelsPerMeter; LONG biYPelsPerMeter; DWORD biClrUsed; DWORD biClrImportant; } PACKED(BITMAPINFOHEADER); typedef struct { WORD wFormatTag; WORD nChannels; DWORD nSamplesPerSec; DWORD nAvgBytesPerSec; WORD nBlockAlign; WORD wBitsPerSample; WORD cbSize; WORD dummy; } PACKED(WAVEFORMATEX); typedef struct { WORD wLongsPerEntry; BYTE bIndexSubType; BYTE bIndexType; DWORD nEntriesInUse; FOURCC dwChunkId; DWORD dwReserved[ 3 ]; struct avisuperindex_entry { QUADWORD qwOffset; DWORD dwSize; DWORD dwDuration; } aIndex[ 2014 ]; } PACKED(AVISuperIndex); typedef struct { WORD wLongsPerEntry; BYTE bIndexSubType; BYTE bIndexType; DWORD nEntriesInUse; FOURCC dwChunkId; QUADWORD qwBaseOffset; DWORD dwReserved; struct avifieldindex_entry { DWORD dwOffset; DWORD dwSize; } aIndex[ 4028 ]; } PACKED(AVIStdIndex); typedef struct { struct avisimpleindex_entry { FOURCC dwChunkId; DWORD dwFlags; DWORD dwOffset; DWORD dwSize; } aIndex[ 20000 ]; DWORD nEntriesInUse; } PACKED(AVISimpleIndex); typedef struct { DWORD dirEntryType; DWORD dirEntryName; DWORD dirEntryLength; size_t dirEntryOffset; int dirEntryWrittenFlag; int dirEntryParentList; } AviDirEntry; /** base class for all AVI type files It contains methods and members which are the same in all AVI type files regardless of the particular compression, number of streams etc. The AVIFile class also contains methods for handling several indexes to the video frame content. 
*/ class AVIFile : public RIFFFile { public: AVIFile(); AVIFile( const AVIFile& ); virtual ~AVIFile(); virtual AVIFile& operator=( const AVIFile& ); virtual void Init( int format, int sampleFrequency, int indexType ); virtual int GetFrameInfo( off_t &offset, int &size, int frameNum ); virtual int GetFrame( Frame *frame, int frameNum ); virtual int GetTotalFrames(); virtual void PrintDirectoryEntryData( const RIFFDirEntry &entry ); virtual bool WriteFrame( Frame *frame ) { return false; } virtual void ParseList( int parent ); virtual void ParseRIFF( void ); virtual void ReadIndex( void ); virtual void WriteRIFF( void ) { } virtual void FlushIndx( int stream ); virtual void UpdateIndx( int stream, int chunk, int duration ); virtual void UpdateIdx1( int chunk, int flags ); virtual bool verifyStreamFormat( FOURCC type ); virtual bool verifyStream( FOURCC type ); virtual bool isOpenDML( void ); virtual void setDVINFO( DVINFO& ) { } virtual void setFccHandler( FOURCC type, FOURCC handler ); protected: MainAVIHeader mainHdr; AVISimpleIndex *idx1; int file_list; int riff_list; int hdrl_list; int avih_chunk; int movi_list; int junk_chunk; int idx1_chunk; AVIStreamHeader streamHdr[ 2 ]; AVISuperIndex *indx[ 2 ]; AVIStdIndex *ix[ 2 ]; int indx_chunk[ 2 ]; int ix_chunk[ 2 ]; int strl_list[ 2 ]; int strh_chunk[ 2 ]; int strf_chunk[ 2 ]; int index_type; int current_ix00; DWORD dmlh[ 62 ]; int odml_list; int dmlh_chunk; bool isUpdateIdx1; }; /** writing Type 1 DV AVIs */ class AVI1File : public AVIFile { public: AVI1File(); virtual ~AVI1File(); virtual void Init( int format, int sampleFrequency, int indexType ); virtual bool WriteFrame( Frame *frame ); virtual void WriteRIFF( void ); virtual void setDVINFO( DVINFO& ); private: DVINFO dvinfo; AVI1File( const AVI1File& ); AVI1File& operator=( const AVI1File& ); }; /** writing Type 2 (separate audio data) DV AVIs This file type contains both audio and video tracks. It is therefore more compatible to certain Windows programs, which expect any AVI having both audio and video tracks. The video tracks contain the raw DV data (as in type 1) and the extracted audio tracks. Note that because the DV data contains audio information anyway, this means duplication of data and a slight increase of file size. */ class AVI2File : public AVIFile { public: AVI2File(); virtual ~AVI2File(); virtual void Init( int format, int sampleFrequency, int indexType ); virtual bool WriteFrame( Frame *frame ); virtual void WriteRIFF( void ); virtual void setDVINFO( DVINFO& ); private: BITMAPINFOHEADER bitmapinfo; WAVEFORMATEX waveformatex; AVI2File( const AVI2File& ); AVI2File& operator=( const AVI2File& ); }; #endif dvgrab-3.5+git20160707.1.e46042e/filehandler.cc0000644000175000017500000010033412716434257015763 0ustar eses/* * filehandler.cc -- saving DV data into different file formats * Copyright (C) 2000 Arne Schirmacher * Raw DV, JPEG, and Quicktime portions Copyright (C) 2003-2008 Dan Dennedy * Portions of Quicktime code borrowed from Arthur Peters' dv_utils. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. 
* * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ #ifdef HAVE_CONFIG_H #include #endif #include #include #include #include using std::cout; using std::endl; using std::ostringstream; using std::setw; using std::setfill; using std::ends; #include #include #include #include #include #include #include #include #include #include #include "filehandler.h" #include "error.h" #include "riff.h" #include "avi.h" #include "frame.h" #include "dvframe.h" #include "affine.h" #include "stringutils.h" FileTracker *FileTracker::instance = NULL; FileTracker::FileTracker( ) : mode( CAPTURE_MOVIE_APPEND ) { return ; sendEvent( ">> Constructing File Capture tracker" ); } FileTracker::~FileTracker( ) { return ; sendEvent( ">> Destroying File Capture tracker" ); } FileTracker &FileTracker::GetInstance( ) { if ( instance == NULL ) instance = new FileTracker(); return *instance; } void FileTracker::SetMode( FileCaptureMode mode ) { this->mode = mode; } FileCaptureMode FileTracker::GetMode( ) { return this->mode; } char *FileTracker::Get( int index ) { return list[ index ]; } void FileTracker::Add( const char *file ) { return ; if ( this->mode != CAPTURE_IGNORE ) { sendEvent( ">>>> Registering %s with the tracker", file ); list.push_back( strdup( file ) ); } } unsigned int FileTracker::Size( ) { return list.size(); } void FileTracker::Clear( ) { while ( Size() > 0 ) { free( list[ Size() - 1 ] ); list.pop_back( ); } this->mode = CAPTURE_MOVIE_APPEND; } FileHandler::FileHandler() : done( false ), autoSplit( false ), timeSplit(0), maxFrameCount( 0 ), isNewFile( false ), isFirstFile( -1 ), lastCollectionFreeSpace( 0 ), currentCollectionSize( 0 ), framesWritten( 0 ), filename( "" ) { prevTimeCode.sec = -1; } FileHandler::~FileHandler() { } void FileHandler::CollectionCounterUpdate() { if ( GetSizeSplitMode() == 1) { currentCollectionSize += GetFileSize(); // Shove this cut into the last collection if there's room if (currentCollectionSize <= lastCollectionFreeSpace) { currentCollectionSize = 0; lastCollectionFreeSpace = lastCollectionFreeSpace - currentCollectionSize; } else lastCollectionFreeSpace = 0; // Start a new collection if we've gone over the Minimum Collection Size if (currentCollectionSize >= GetMinColSize()) { if (currentCollectionSize < GetMaxColSize()) lastCollectionFreeSpace = GetMaxColSize() - currentCollectionSize; else lastCollectionFreeSpace = 0; currentCollectionSize = 0; } } } bool FileHandler::GetAutoSplit() { return autoSplit; } int FileHandler::GetTimeSplit() { return timeSplit; } bool FileHandler::GetTimeStamp() { return timeStamp; } bool FileHandler::GetTimeSys() { return timeSys; } bool FileHandler::GetTimeCode() { return timeCode; } string FileHandler::GetBaseName() { return base; } string FileHandler::GetExtension() { return extension; } int FileHandler::GetMaxFrameCount() { return maxFrameCount; } off_t FileHandler::GetMaxFileSize() { return maxFileSize; } int FileHandler::GetSizeSplitMode() { return sizeSplitMode; } off_t FileHandler::GetMaxColSize() { return maxColSize; } off_t FileHandler::GetMinColSize() { return minColSize; } void FileHandler::SetAutoSplit( bool flag ) { autoSplit = flag; } void FileHandler::SetTimeSplit( int secs ) { timeSplit = secs; } void FileHandler::SetTimeStamp( bool flag ) { timeStamp = flag; } void FileHandler::SetTimeSys( bool flag ) { timeSys = flag; } void FileHandler::SetTimeCode( bool flag ) { 
timeCode = flag; } void FileHandler::SetBaseName( const string& s ) { base = s; } void FileHandler::SetMaxFrameCount( int count ) { assert( count >= 0 ); maxFrameCount = count; } void FileHandler::SetEveryNthFrame( int every ) { assert ( every > 0 ); everyNthFrame = every; } void FileHandler::SetMaxFileSize( off_t size ) { assert ( size >= 0 ); maxFileSize = size; } void FileHandler::SetSizeSplitMode( int mode ) { sizeSplitMode = mode; } void FileHandler::SetMaxColSize( off_t size ) { maxColSize = size; } void FileHandler::SetMinColSize( off_t size ) { minColSize = size; } void FileHandler::SetFilmRate( bool flag) { filmRate = flag; } void FileHandler::SetRemove2332( bool flag) { remove2332 = flag; } bool FileHandler::Done() { return done; } bool FileHandler::WriteFrame( Frame *frame ) { /* If the file size, collection size, or max frame count would be exceeded * by this frame, and we can start a new file on this frame, close the current file. * If the autosplit flag is set, a new file will be created. */ if ( FileIsOpen() && frame->CanStartNewStream() ) { bool startNewFile = false; off_t newFileSize = GetFileSize() + frame->GetDataLen(); bool maxFileSizeExceeded = newFileSize >= GetMaxFileSize(); bool maxColSizeExceeded = GetCurrentCollectionSize() + newFileSize >= GetMaxColSize(); if ( GetFileSize() > 0 ) { if ( GetMaxFileSize() > 0 && maxFileSizeExceeded ) startNewFile = true; if ( GetMaxColSize() > 0 && GetSizeSplitMode() == 1 && maxColSizeExceeded ) startNewFile = true; } if ( GetMaxFrameCount() > 0 && framesWritten >= GetMaxFrameCount() ) startNewFile = true; if ( startNewFile ) { CollectionCounterUpdate(); Close(); done = !GetAutoSplit(); } } TimeCode tc; int time_diff = 0; if ( frame->GetTimeCode( tc ) ) time_diff = TIMECODE_TO_SEC(tc) - TIMECODE_TO_SEC(prevTimeCode); bool discontinuity = prevTimeCode.sec != -1 && ( time_diff > 1 || time_diff < 0 ); bool isTimeSplit = false; if ( timeSplit != 0 ) { struct tm rd; if ( frame->GetRecordingDate( rd ) ) { time_t now = mktime( &rd ); if ( now >= prevTime + timeSplit ) isTimeSplit = true; prevTime = now; } } // If the user wants autosplit, start a new file if a new recording is detected // either by explicit frame flag or a timecode discontinuity if ( FileIsOpen() && ( ( GetAutoSplit() && ( frame->IsNewRecording() || discontinuity ) ) || isTimeSplit ) ) { CollectionCounterUpdate(); Close(); } isNewFile = false; if ( ! FileIsOpen() ) { static int counter = 0; ostringstream stimestamp, stimecode; prevTimeCode.sec = -1; if ( GetTimeStamp() || GetTimeSys() ) { struct tm date; if ( ( GetTimeStamp() && ! frame->GetRecordingDate( date ) ) || GetTimeSys() ) { time_t timesys; time( ×ys ); localtime_r( ×ys, &date ); } stimestamp << setfill( '0' ) << setw( 4 ) << date.tm_year + 1900 << '.' << setw( 2 ) << date.tm_mon + 1 << '.' 
<< setw( 2 ) << date.tm_mday << '_' << setw( 2 ) << date.tm_hour << '-' << setw( 2 ) << date.tm_min << '-' << setw( 2 ) << date.tm_sec; } if ( GetTimeCode() ) { TimeCode tc; if ( frame->GetTimeCode( tc ) ) { stimecode << setfill( '0' ) << setw( 2 ) << tc.hour << ':' << setw( 2 ) << tc.min << ':' << setw( 2 ) << tc.sec << ':' << setw( 2 ) << tc.frame; } else { stimecode << "EE:EE:EE:EE"; } } if ( GetTimeStamp() || GetTimeSys() || GetTimeCode() ) { ostringstream sb; sb << GetBaseName(); if ( ( GetTimeStamp() || GetTimeSys() ) && GetTimeCode() ) { sb << stimestamp.str() << "--" << stimecode.str(); } else if ( GetTimeCode() ) { sb << stimecode.str(); } else { sb << stimestamp.str(); } sb << GetExtension() << ends; filename = sb.str(); } else { struct stat stats; do { ostringstream sb; sb << GetBaseName() << setfill( '0' ) << setw( 3 ) << ++ counter << GetExtension() << ends; filename = sb.str(); } while ( stat( filename.c_str(), &stats ) == 0 ); } if ( ! Create( filename ) ) { sendEvent( ">>> Error creating file!" ); return false; } isNewFile = true; if ( isFirstFile == -1 ) isFirstFile = 1; else if ( isFirstFile == 1 ) isFirstFile = 0; framesWritten = 0; framesToSkip = 0; } /* write frame */ if ( framesToSkip == 0 ) { if ( 0 > Write( frame ) ) { sendEvent( ">>> Error writing frame!" ); return false; } framesToSkip = everyNthFrame; ++framesWritten; } framesToSkip--; if ( !frame->GetTimeCode( prevTimeCode ) ) prevTimeCode.sec = -1; return true; } static ssize_t writen( int fd, unsigned char *vptr, size_t n ) { size_t nleft = n; ssize_t nwritten; unsigned char *ptr = vptr; while ( nleft > 0 ) { if ( ( nwritten = write( fd, ptr, nleft ) ) <= 0 ) { if ( errno == EINTR ) { nwritten = 0; } else { return -1; } } nleft -= nwritten; ptr += nwritten; } return n; } /***************************************************************************/ RawHandler::RawHandler( const string& ext ) : fd( -1 ) { extension = ext; } RawHandler::~RawHandler() { Close(); } bool RawHandler::FileIsOpen() { return fd != -1; } bool RawHandler::Create( const string& filename ) { if ( GetBaseName() == "-" ) fd = fileno( stdout ); else fd = open( filename.c_str(), O_CREAT | O_TRUNC | O_RDWR | O_NONBLOCK, 0644 ); if ( fd != -1 ) { FileTracker::GetInstance().Add( filename.c_str() ); this->filename = filename; } return ( fd != -1 ); } int RawHandler::Write( Frame *frame ) { int result = writen( fd, frame->data, frame->GetDataLen() ); return result; } int RawHandler::Close() { if ( fd != -1 && fd != fileno( stdin ) && fd != fileno( stdout ) ) { close( fd ); fd = -1; } return 0; } off_t RawHandler::GetFileSize() { struct stat file_status; if ( fstat( fd, &file_status ) < 0 ) return 0; return file_status.st_size; } int RawHandler::GetTotalFrames() { return GetFileSize() / ( 480 * numBlocks ); } bool RawHandler::Open( const char *s ) { unsigned char data[ 4 ]; assert( fd == -1 ); if ( strcmp( s, "-" ) == 0 ) { fd = fileno( stdin ); filename = "stdin"; } else { fd = open( s, O_RDWR | O_NONBLOCK ); if ( fd < 0 ) return false; filename = s; } if ( read( fd, data, 4 ) < 0 ) return false; lseek( fd, 0, SEEK_SET ); numBlocks = ( ( data[ 3 ] & 0x80 ) == 0 ) ? 
250 : 300; return true; } int RawHandler::GetFrame( Frame *frame, int frameNum ) { assert( fd != -1 ); int size = 480 * numBlocks; if ( frameNum < 0 ) return -1; off_t offset = ( ( off_t ) frameNum * ( off_t ) size ); fail_if( lseek( fd, offset, SEEK_SET ) == ( off_t ) - 1 ); if ( read( fd, frame->data, size ) > 0 ) return 0; else return -1; } /***************************************************************************/ AVIHandler::AVIHandler( int format ) : avi( NULL ), filen( NULL ), aviFormat( format ), isOpenDML( false ), fccHandler( make_fourcc( "dvsd" ) ), infoSet( false ) { extension = ".avi"; } AVIHandler::~AVIHandler() { Close(); } void AVIHandler::SetSampleFrame( DVFrame *sample ) { Pack pack; sample->GetAudioInfo( audioInfo ); sample->GetVideoInfo( videoInfo ); sample->GetAAUXPack( 0x50, pack ); dvinfo.dwDVAAuxSrc = *( DWORD* ) ( pack.data + 1 ); sample->GetAAUXPack( 0x51, pack ); dvinfo.dwDVAAuxCtl = *( DWORD* ) ( pack.data + 1 ); sample->GetAAUXPack( 0x52, pack ); dvinfo.dwDVAAuxSrc1 = *( DWORD* ) ( pack.data + 1 ); sample->GetAAUXPack( 0x53, pack ); dvinfo.dwDVAAuxCtl1 = *( DWORD* ) ( pack.data + 1 ); sample->GetVAUXPack( 0x60, pack ); dvinfo.dwDVVAuxSrc = *( DWORD* ) ( pack.data + 1 ); sample->GetVAUXPack( 0x61, pack ); dvinfo.dwDVVAuxCtl = *( DWORD* ) ( pack.data + 1 ); #ifdef WITH_LIBDV if ( sample->decoder->std == e_dv_std_smpte_314m ) fccHandler = make_fourcc( "dv25" ); #endif infoSet = true; } bool AVIHandler::FileIsOpen() { return avi != NULL; } bool AVIHandler::Create( const string& filename ) { assert( avi == NULL ); if ( !infoSet ) { filen = &filename; return true; } switch ( aviFormat ) { case AVI_DV1_FORMAT: fail_null( avi = new AVI1File ); if ( avi->Create( filename.c_str() ) == false ) return false; avi->Init( videoInfo.isPAL ? AVI_PAL : AVI_NTSC, audioInfo.frequency, ( AVI_SMALL_INDEX | AVI_LARGE_INDEX ) ); break; case AVI_DV2_FORMAT: fail_null( avi = new AVI2File ); if ( avi->Create( filename.c_str() ) == false ) return false; if ( GetOpenDML() ) avi->Init( videoInfo.isPAL ? AVI_PAL : AVI_NTSC, audioInfo.frequency, ( AVI_SMALL_INDEX | AVI_LARGE_INDEX ) ); else avi->Init( videoInfo.isPAL ? AVI_PAL : AVI_NTSC, audioInfo.frequency, ( AVI_SMALL_INDEX ) ); break; default: assert( aviFormat == AVI_DV1_FORMAT || aviFormat == AVI_DV2_FORMAT ); } avi->setDVINFO( dvinfo ); avi->setFccHandler( make_fourcc( "iavs" ), fccHandler ); avi->setFccHandler( make_fourcc( "vids" ), fccHandler ); this->filename = filename; FileTracker::GetInstance().Add( filename.c_str() ); return ( avi != NULL ); } int AVIHandler::Write( Frame *frame ) { if ( !infoSet ) { assert( !frame->IsHDV() ); SetSampleFrame( (DVFrame*)frame ); if ( ! Create( *filen ) ) { sendEvent( ">>> Error creating file!" ); return false; } } assert( avi != NULL ); return ( avi->WriteFrame( frame ) ? 
0 : -1 ); } int AVIHandler::Close() { if ( avi != NULL ) { avi->WriteRIFF(); delete avi; avi = NULL; } return 0; } off_t AVIHandler::GetFileSize() { if ( avi ) return avi->GetFileSize(); else return 0; } int AVIHandler::GetTotalFrames() { return avi->GetTotalFrames(); } bool AVIHandler::Open( const char *s ) { assert( avi == NULL ); fail_null( avi = new AVI1File ); if ( avi->Open( s ) ) { avi->ParseRIFF(); if ( !( avi->verifyStreamFormat( make_fourcc( "dvsd" ) ) || avi->verifyStreamFormat( make_fourcc( "dv25" ) ) ) ) return false; avi->ReadIndex(); if ( avi->verifyStream( make_fourcc( "auds" ) ) ) aviFormat = AVI_DV2_FORMAT; else aviFormat = AVI_DV1_FORMAT; isOpenDML = avi->isOpenDML(); filename = s; return true; } else return false; } int AVIHandler::GetFrame( Frame *frame, int frameNum ) { int result = avi->GetFrame( frame, frameNum ); return result; } void AVIHandler::SetOpenDML( bool flag ) { isOpenDML = flag; } bool AVIHandler::GetOpenDML() { return isOpenDML; } /***************************************************************************/ #ifdef HAVE_LIBQUICKTIME #ifndef HAVE_LIBDV_DV_H #define DV_AUDIO_MAX_SAMPLES 1944 #endif QtHandler::QtHandler() : fd( NULL ), audioBufferSize( 0 ) { extension = ".mov"; Init(); } QtHandler::~QtHandler() { Close(); } void QtHandler::Init() { if ( fd != NULL ) Close(); fd = NULL; samplingRate = 0; samplesPerBuffer = 0; channels = 2; audioBuffer = NULL; audioChannelBuffer = NULL; isFullyInitialized = false; } bool QtHandler::FileIsOpen() { return fd != NULL; } bool QtHandler::Create( const string& filename ) { Init(); fd = quicktime_open( const_cast( filename.c_str() ), 0, 1 ); this->filename = filename; return ( fd != NULL ); } inline void QtHandler::DeinterlaceStereo16( void* pInput, int iBytes, void* pLOutput, void* pROutput ) { short int * piSampleInput = ( short int* ) pInput; short int* piSampleLOutput = ( short int* ) pLOutput; short int* piSampleROutput = ( short int* ) pROutput; while ( ( char* ) piSampleInput < ( ( char* ) pInput + iBytes ) ) { *piSampleLOutput++ = *piSampleInput++; *piSampleROutput++ = *piSampleInput++; } } int QtHandler::Write( Frame *f ) { assert( !f->IsHDV() ); int result = 0; DVFrame *frame = (DVFrame*)f; if ( ! isFullyInitialized ) { AudioInfo audio; if ( frame->GetAudioInfo( audio ) ) { /* TODO: handle 12-bit, non-linear audio */ char compressor[] = QUICKTIME_TWOS; channels = 2; quicktime_set_audio( fd, channels, audio.frequency, 16, compressor ); } else { channels = 0; } if ( filmRate || remove2332 ) { char compressor[] = QUICKTIME_DV; quicktime_set_video( fd, 1, 720, 480, 24, compressor ); } else { char compressor[] = QUICKTIME_DV; quicktime_set_video( fd, 1, 720, frame->IsPAL() ? 
576 : 480, frame->GetFrameRate(), compressor ); } if ( channels > 0 ) { audioBuffer = new int16_t[ DV_AUDIO_MAX_SAMPLES * channels ]; audioBufferSize = DV_AUDIO_MAX_SAMPLES; audioChannelBuffer = new short int * [ channels ]; for ( int c = 0; c < channels; c++ ) audioChannelBuffer[ c ] = new short int[ 3000 ]; assert( channels <= 4 ); for ( int c = 0; c < channels; c++ ) audioChannelBuffers[ c ] = audioChannelBuffer[ c ]; } else { audioChannelBuffer = NULL; for ( int c = 0; c < 4; c++ ) audioChannelBuffers[ c ] = NULL; } isFullyInitialized = true; } if ( remove2332 ) { const int frameOffset = 3; TimeCode tc; frame->GetTimeCode( tc ); if ( ( tc.frame + frameOffset ) % 5 ) { result = quicktime_write_frame( fd, const_cast( frame->data ), frame->GetFrameSize(), 0 ); } else { result = 1; } } else { result = quicktime_write_frame( fd, const_cast( frame->data ), frame->GetFrameSize(), 0 ); } if ( channels > 0 ) { AudioInfo audio; frame->ExtractHeader(); if ( frame->GetAudioInfo( audio ) && ( unsigned int ) audio.samples < audioBufferSize ) { long bytesRead = frame->ExtractAudio( audioBuffer ); DeinterlaceStereo16( audioBuffer, bytesRead, audioChannelBuffer[ 0 ], audioChannelBuffer[ 1 ] ); quicktime_encode_audio( fd, audioChannelBuffers, NULL, bytesRead / 4 ); } } return result; } int QtHandler::Close() { if ( fd != NULL ) { quicktime_close( fd ); fd = NULL; } if ( audioBuffer != NULL ) { delete audioBuffer; audioBuffer = NULL; } if ( audioChannelBuffer != NULL ) { for ( int c = 0; c < channels; c++ ) delete audioChannelBuffer[ c ]; delete audioChannelBuffer; audioChannelBuffer = NULL; } return 0; } off_t QtHandler::GetFileSize() { if ( fd ) { struct stat file_status; if ( stat( filename.c_str(), &file_status ) == 0 ) return file_status.st_size; } return 0; } int QtHandler::GetTotalFrames() { return ( int ) quicktime_video_length( fd, 0 ); } bool QtHandler::Open( const char *s ) { Init(); fd = quicktime_open( ( char * ) s, 1, 1 ); if ( fd == NULL ) { fprintf( stderr, "Error opening: %s\n", s ); return false; } if ( quicktime_has_video( fd ) <= 0 ) { fprintf( stderr, "There must be at least one video track in the input file (%s).\n", s ); Close(); return false; } if ( strncmp( quicktime_video_compressor( fd, 0 ), QUICKTIME_DV, 4 ) != 0 ) { fprintf( stderr, "Video in input file (%s) must be in DV format.\n", s ); Close(); return false; } filename = s; return true; } int QtHandler::GetFrame( Frame *frame, int frameNum ) { assert( fd != NULL ); quicktime_set_video_position( fd, frameNum, 0 ); frame->SetDataLen( quicktime_read_frame( fd, frame->data, 0 ) ); return 0; } #endif /********************************************************************************/ #if defined(HAVE_LIBJPEG) && defined(HAVE_LIBDV) JPEGHandler::JPEGHandler( int quality, bool deinterlace, int width, int height, bool overwrite, string temp, bool usetemp ) : isOpen( false ), count( 0 ), deinterlace( deinterlace ), overwrite( overwrite ) { extension = ".jpg"; cinfo.err = jpeg_std_error( &jerr ); jpeg_create_compress( &cinfo ); cinfo.input_components = 3; /* # of color components per pixel */ cinfo.in_color_space = JCS_RGB; /* colorspace of input image */ jpeg_set_defaults( &cinfo ); jpeg_set_quality( &cinfo, quality, TRUE /* limit to baseline-JPEG values */ ); new_height = CLAMP( height, -1, 2048 ); new_width = CLAMP( width, -1, 2048 ); this->temp=temp; this->usetemp=usetemp; } JPEGHandler::~JPEGHandler() { Close(); jpeg_destroy_compress( &cinfo ); } bool JPEGHandler::Create( const string& filename ) { this->filename = filename; 
isOpen = true; count = 0; return true; } /* this must be called before scaling */ /* height is fixed, returns new width */ int JPEGHandler::fixAspect( Frame *frame ) { assert( !frame->IsHDV() ); DVFrame *dvframe = (DVFrame*)frame; int width = frame->GetWidth( ); int height = frame->GetHeight( ); static JSAMPLE image[ 2048 * 2048 * 3 ]; register JSAMPLE *dest = image_buffer, *src = image; int new_width = dvframe->IsPAL() ? 337 : 320; int n = width / 2 - new_width; int d = width / 2; int a = n; memcpy( src, dest, width * height * 3 ); for ( register int j = 0; j < height; j += 2 ) { src = image + j * ( width * 3 ); for ( register int i = 0; i < new_width ; i++ ) { if ( a > d ) { a -= d; src += 3; } else a += n; *dest++ = *src++; *dest++ = *src++; *dest++ = *src++; src += 3; } } return new_width; } bool JPEGHandler::scale( Frame *frame ) { int width = frame->GetWidth( ); int height = frame->GetHeight( ); static JSAMPLE image[ 2048 * 2048 * 3 ]; register JSAMPLE *dest = image_buffer, *src = image; AffineTransform affine; double scale_x = ( double ) new_width / ( double ) width; double scale_y = ( double ) new_height / ( double ) height; memcpy( src, dest, width * height * 3 ); register int i, j, x, y; if ( scale_x <= 1.0 && scale_y <= 1.0 ) { affine.Scale( scale_x, scale_y ); for ( j = 0; j < height; j++ ) for ( i = 0; i < width; i++ ) { x = ( int ) ( affine.MapX( i - width / 2, j - height / 2 ) ); y = ( int ) ( affine.MapY( i - width / 2, j - height / 2 ) ); x += new_width / 2; x = CLAMP( x, 0, new_width ); y += new_height / 2; y = CLAMP( y, 0, new_height ); //cout << "i = " << i << " j = " << j << " x = " << x << " y = " << y << endl; src = image + ( j * width * 3 ) + i * 3; dest = image_buffer + y * ( int ) ( new_width ) * 3 + ( int ) ( x * 3 ); *dest++ = *src++; *dest++ = *src++; *dest++ = *src++; } } else if ( scale_x >= 1.0 && scale_y >= 1.0 ) { affine.Scale( 1.0 / scale_x, 1.0 / scale_y ); for ( y = 0; y < new_height; y++ ) for ( x = 0; x < new_width; x++ ) { i = ( int ) ( affine.MapX( x - new_width / 2, y - new_height / 2 ) ); j = ( int ) ( affine.MapY( x - new_width / 2, y - new_height / 2 ) ); i += width / 2; i = CLAMP( i, 0, new_width ); j += height / 2; j = CLAMP( j, 0, new_height ); //cout << "i = " << i << " j = " << j << " x = " << x << " y = " << y << endl; src = image + ( j * width * 3 ) + i * 3; dest = image_buffer + y * ( int ) ( new_width ) * 3 + ( int ) ( x * 3 ); *dest++ = *src++; *dest++ = *src++; *dest++ = *src++; } } else return false; return true; } int JPEGHandler::Write( Frame *frame ) { assert( !frame->IsHDV() ); DVFrame *dvframe = (DVFrame*)frame; JSAMPROW row_pointer[ 1 ]; /* pointer to JSAMPLE row[s] */ int row_stride; /* physical row width in image buffer */ JDIMENSION width = frame->GetWidth(); JDIMENSION height = frame->GetHeight(); dvframe->ExtractHeader(); if ( frame->IsNewRecording() && GetAutoSplit() ) count = 0; dvframe->ExtractRGB( image_buffer ); if ( deinterlace ) dvframe->Deinterlace( image_buffer, 3 ); if ( new_width != -1 || new_height != -1 ) { if ( new_width == -1 ) new_width = width; if ( new_height == -1 ) new_height = height; if ( !scale( frame ) ) { new_width = width; new_height = height; } } else { if ( new_width == -1 ) new_width = width; if ( new_height == -1 ) new_height = height; } if ( overwrite ) { if ( StringUtils::ends( GetBaseName(), ".jpg" ) || StringUtils::ends( GetBaseName(), ".jpeg" ) ) { file = GetBaseName(); } else { ostringstream sb; sb << GetBaseName() << GetExtension() << ends; file = sb.str(); } } else { ostringstream sb; 
sb << filename.substr( 0, filename.find_last_of( '.' ) ) << "-" << setfill( '0' ) << setw( 8 ) << ++count << GetExtension() << ends; file = sb.str(); } FILE *outfile; if ( this->usetemp ) { outfile = fopen( const_cast( this->temp.c_str() ), "wb" ); } else { outfile = fopen( const_cast( file.c_str() ), "wb" ); } if ( outfile != NULL ) jpeg_stdio_dest( &cinfo, outfile ); cinfo.image_width = new_width; cinfo.image_height = new_height; row_stride = cinfo.image_width * cinfo.input_components; jpeg_start_compress( &cinfo, TRUE ); while ( cinfo.next_scanline < cinfo.image_height ) { row_pointer[ 0 ] = &image_buffer[ cinfo.next_scanline * row_stride ]; jpeg_write_scanlines( &cinfo, row_pointer, 1 ); } jpeg_finish_compress( &cinfo ); fclose( outfile ); if ( this->usetemp ) { rename(const_cast( this->temp.c_str() ), const_cast( file.c_str() )); } return 0; } int JPEGHandler::Close( void ) { isOpen = false; return 0; } #endif /***************************************************************************/ Mpeg2Handler::Mpeg2Handler( unsigned char flags, const string& ext ) : fd( -1 ), waitingForRecordingDate( true ), bufferLen( 0 ), totalFrames( 0 ), writerFlags( flags ), firstPayloadEntry( NULL ) { extension = ext; } Mpeg2Handler::~Mpeg2Handler() { PayloadList *next; while ( firstPayloadEntry != NULL ) { next = firstPayloadEntry->next; free( firstPayloadEntry ); firstPayloadEntry = next; } Close(); } bool Mpeg2Handler::FileIsOpen() { return fd != -1; } bool Mpeg2Handler::Create( const string& filename ) { if ( GetBaseName() == "-" ) fd = fileno( stdout ); else fd = open( filename.c_str(), O_CREAT | O_TRUNC | O_RDWR | O_NONBLOCK, 0644 ); if ( fd != -1 ) { FileTracker::GetInstance().Add( filename.c_str() ); this->filename = filename; } return ( fd != -1 ); } bool Mpeg2Handler::WriteFrame( Frame *frame ) { if ( waitingForRecordingDate ) { struct tm rd; if ( !frame->GetRecordingDate( rd ) ) { if ( bufferLen + frame->GetDataLen() < MPEG2_BUFFER_SIZE ) { // Buffer up the first several frames until we get the recording date memcpy( &buffer[bufferLen], frame->data, frame->GetDataLen() ); bufferLen += frame->GetDataLen(); totalFrames++; return true; } } waitingForRecordingDate = false; } return FileHandler::WriteFrame( frame ); } int Mpeg2Handler::Write( Frame *frame ) { int result; // Write any buffered data first. if ( bufferLen > 0 ) { if ( frame->CouldBeJVCP25() && ( writerFlags & MPEG2_JVC_P25 ) ) result = writeJVCP25( buffer, bufferLen ); else result = writen( fd, buffer, bufferLen ); if ( 0 > result ) return result; bufferLen = 0; } if ( frame->CouldBeJVCP25() && ( writerFlags & MPEG2_JVC_P25 ) ) result = writeJVCP25( frame->data, frame->GetDataLen() ); else result = writen( fd, frame->data, frame->GetDataLen() ); if ( 0 <= result ) totalFrames++; return result; } int Mpeg2Handler::Close() { if ( fd != -1 && fd != fileno( stdin ) && fd != fileno( stdout ) ) { close( fd ); fd = -1; } return 0; } off_t Mpeg2Handler::GetFileSize() { struct stat file_status; if ( fstat( fd, &file_status ) < 0 ) return 0; return file_status.st_size; } int Mpeg2Handler::GetTotalFrames() { return totalFrames; } bool Mpeg2Handler::Open( const char *s ) { return false; } int Mpeg2Handler::GetFrame( Frame *frame, int frameNum ) { return -1; } static inline int CorrectP25( unsigned char *data, unsigned char len, unsigned char *state ) { int i; for (i=0; i ignore next 3 bytes */ *state = 4; else if ( EXTENSION_START_CODE_VALUE == data[i] ) /* Extension Header? 
*/ *state = 14; else *state = 0; break; /* read over three more bytes */ case 4: case 5: case 6: case 15: case 16: (*state)++; break; case 14: if ( PICTURE_CODING_EXTENSION_ID_VALUE == (data[i] >> 4) ) /* Picture Coding extension ?*/ *state = 15; else *state = 0; break; /* the eights bit has to be changed for both parameters */ case 7: /* change 50 fps to 25 fps */ /* least significant 4 bits */ /* 0110 = 50 fps */ /* 0011 = 25 fps */ /* works only with value of 50fps */ /* data[i] ^= 0x05; */ data[i] = ( data[i] & 0xF0 ) | 0x03; *state = 0; break; case 17: /* unset repeat_first_field flag */ /* value |= 0x02 flag set */ /* value &= ~0x02 flag unset */ data[i] &= ~0x02; *state = 0; break; default: /* undefined state */ return -1; } /* switch */ } /* for */ return 0; } void Mpeg2Handler::ProcessPayload( unsigned char *packet, unsigned int pid, unsigned char len ) { static PayloadList *current = NULL; if ( NULL != firstPayloadEntry ) /* look for the struct for the current pid */ for ( current=firstPayloadEntry; ( ( NULL!=current ) && ( pid!=current->pid ) ); current=current->next ); if (NULL == current) { current = ( PayloadList* ) malloc( sizeof(PayloadList) ); if ( NULL == current ) { fprintf( stderr, "Error allocating memory!\n" ); exit( 1 ); } if ( NULL == firstPayloadEntry ) firstPayloadEntry = current; current->pid = pid; current->state = 0; current->next = NULL; } /* if */ CorrectP25( packet, len, &(current->state) ); } void Mpeg2Handler::ProcessTSPacket( unsigned char *packet ) { unsigned int pid; pid = ((packet[1] & 0x1F) << 8 ) | packet[2]; if (0x1FFF == pid) return; /* throw NULL packet away */ switch (packet[3] & 0x30) { case 0x00: /* reserved */ case 0x20: /* adaptation field without payload */ break; case 0x10: /* payload only */ ProcessPayload( &packet[4], pid, 188-4 ); break; case 0x30: /* adaptation field before payload */ if (183 > packet[4]) /* max_length of ts_packet - 4 bytes header - 1 byte adaptation-length info - adaptation-length */ ProcessPayload( &packet[188 - 4 - 1 - packet[4]], pid, 188 - 4 - 1 - packet[4] ); break; } /* switch */ } int Mpeg2Handler::writeJVCP25( unsigned char *data, int len ) { static unsigned char state = 0, ts_packet[188], rest_length; int i; unsigned char next_possible_start_position = 0; for ( i = 0; i < len; i++ ) { switch (state) { /* seek for HDV_PACKET_MARKER of transport packet */ case 0: if ( HDV_PACKET_MARKER == data[i] ) { ts_packet[0] = data[i]; rest_length = 187; state = 1; } break; case 1: if (0 < rest_length) { ts_packet[ 188 - rest_length ] = data[i]; rest_length--; if ( ! 
next_possible_start_position ) if ( HDV_PACKET_MARKER == data[i] ) next_possible_start_position = i; } else { /* 188 byte seen */ if ( HDV_PACKET_MARKER == data[i] ) { /* last 188 bytes were ts packet */ ProcessTSPacket( ts_packet ); writen( fd, ts_packet, 188 ); ts_packet[0] = data[i]; rest_length = 187; } else { /* last 188 bytes are not a ts packet */ /* scan again beginning with the next possible start of ts_packet */ state = 0; /* write unchanged first bytes up to next_possible_start_position */ writen( fd, ts_packet, next_possible_start_position + 1 ); writeJVCP25( &ts_packet[ next_possible_start_position ], 188 - next_possible_start_position); ts_packet[ 188 - rest_length ] = data[i]; } /* if */ } /* if */ break; default: return -1; /* undefined state */ } /* switch */ } /* for */ return 0; } dvgrab-3.5+git20160707.1.e46042e/iec13818-1.h0000644000175000017500000001530012716434257014651 0ustar eses/* * iec13818-1.h * Copyright (C) 2007 Dan Streetman * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ #ifndef _IEC13818_1_H #define _IEC13818_1_H 1 #include "error.h" #define HDV_PACKET_SIZE 188 #define HDV_PACKET_MARKER 0x47 #define STREAM_VIDEO (0x02) #define STREAM_AUDIO (0x03) #define STREAM_SONY_PRIVATE_A0 (0xa0) #define STREAM_SONY_PRIVATE_A1 (0xa1) #define PID_NULL_PACKET (0x1fff) #define DUMP_RAW_DATA( type, bytes, start, end ) do { \ int i, j; \ for ( j = end; ( j > start+1 ) && ( bytes[j-1] == 0xff ) && ( bytes[j-2] == 0xff ); j-- ); \ for ( i = start; i < j; i++ ) DEBUG_RAW( type, "%02x ", bytes[ i ] ); \ if ( j < end ) DEBUG_RAW( type, "\b*%d ", end - j + 1 ); \ } while(0) #define BCD(c) ( ((((c) >> 4) & 0x0f) * 10) + ((c) & 0x0f) ) #define TOBYTES( n ) ( ( n + 7 ) / 8 ) static unsigned char bitmask[8] = { 0x01, 0x03, 0x07, 0x0f, 0x1f, 0x3f, 0x7f, 0xff }; #define GETBITS( offset, len ) do { \ unsigned long value = 0; \ while ( len > 0 ) \ { \ int bits = 8 - ( offset % 8 ); \ unsigned char data = GetData( offset / 8 ) & bitmask[bits-1]; \ if ( len > bits ) \ value += data << ( len - bits ); \ else if ( len < bits ) \ value += data >> ( bits - len ); \ else \ value += data; \ offset += bits; \ len -= bits; \ } \ return value; \ } while (0) class PAT { public: PAT(); ~PAT(); void SetData( unsigned char *d, int len ); int GetLength(); unsigned char GetData( int pos ); unsigned long GetBits( int offset, int len ) { GETBITS( offset, len ); } void Dump(); unsigned char table_id(); bool section_syntax_indicator(); unsigned short section_length(); unsigned short transport_stream_id(); unsigned char version_number(); bool current_next_indicator(); unsigned char section_number(); unsigned char last_section_number(); int program_number( unsigned int n ); int pid( unsigned int n ); int network_PID(); int program_map_PID(); protected: unsigned char *data; int length; }; class PMT_element { public: PMT_element(); ~PMT_element(); void SetData( 
unsigned char *d, int len ); int GetLength(); unsigned char GetData( int pos ); unsigned long GetBits( int offset, int len ) { GETBITS( offset, len ); } void Dump(); unsigned char stream_type(); unsigned short elementary_PID(); unsigned short ES_info_length(); unsigned char *descriptor( unsigned int n ); protected: unsigned char *data; int length; }; class PMT { public: PMT(); ~PMT(); void SetData( unsigned char *d, int len ); int GetLength(); unsigned char GetData( int pos ); unsigned long GetBits( int offset, int len ) { GETBITS( offset, len ); } void Dump(); unsigned char table_id(); bool section_syntax_indicator(); unsigned short section_length(); unsigned short program_number(); unsigned char version_number(); bool current_next_indicator(); unsigned char section_number(); unsigned char last_section_number(); unsigned short PCR_PID(); unsigned short program_info_length(); unsigned char *descriptor( unsigned int n ); PMT_element *GetPmtElement( unsigned int n ); protected: void CreatePmtElements(); protected: unsigned char *data; int length; #define MAX_PMT_ELEMENTS 32 PMT_element pmtElement[MAX_PMT_ELEMENTS]; unsigned int nPmtElements; }; class PES { public: PES(); ~PES(); void Clear(); void AddData( unsigned char *d, int l ); unsigned char *GetBuffer(); unsigned char GetData( int pos ); unsigned long GetBits( int offset, int len ) { GETBITS( offset, len ); } int GetLength(); void Dump(); unsigned int packet_start_code_prefix(); unsigned char stream_id(); unsigned short PES_packet_length(); unsigned char PES_scrambling_control(); bool PES_priority(); bool data_alignment_indicator(); bool copyright(); bool original_or_copy(); unsigned char PTS_DTS_flags(); bool ESCR_flag(); bool ES_rate_flag(); bool DSM_trick_mode_flag(); bool additional_copy_info_flag(); bool PES_CRC_flag(); bool PES_extension_flag(); unsigned char PES_header_data_length(); bool PES_private_data_flag(); bool pack_header_field_flag(); bool program_packet_sequence_counter_flag(); bool P_STD_buffer_flag(); bool PES_extension_flag_2(); int GetPacketDataOffset(); int GetPacketDataLength(); unsigned char PES_packet_data_byte( int n ); protected: bool IsHeaderPresent(); private: #define MAX_PES_SIZE 1024*1024 unsigned char data[MAX_PES_SIZE]; int length; int packetDataOffset; }; class SonyA1 { public: SonyA1(); ~SonyA1(); void SetData( unsigned char *d, int len ); int GetLength(); unsigned char GetData( int pos ); unsigned long GetBits( int offset, int len ) { GETBITS( offset, len ); } void Dump(); unsigned char year(); unsigned char month(); unsigned char day(); unsigned char hour(); unsigned char minute(); unsigned char second(); unsigned char timecode_hour(); unsigned char timecode_minute(); unsigned char timecode_second(); unsigned char timecode_frame(); bool scene_start(); protected: unsigned char *data; int length; }; class HDVFrame; class HDVStreamParams; class HDVPacket { public: HDVPacket( HDVFrame *f, HDVStreamParams *p ); ~HDVPacket(); void SetData( unsigned char *d ); int GetLength(); unsigned char GetData( int pos ); unsigned long GetBits( int offset, int len ) { GETBITS( offset, len ); } void Dump(); unsigned char sync_byte(); bool transport_error_indicator(); bool payload_unit_start_indicator(); bool transport_priority(); unsigned short pid(); unsigned char transport_scrambling_control(); unsigned char adaptation_field_control(); unsigned char continuity_counter(); unsigned char *pointer_field(); unsigned char *adaptation_field(); unsigned char *payload(); int PayloadOffset(); int PayloadLength(); bool 
is_program_association_packet(); PAT *program_association_table(); bool is_program_map_packet(); PMT *program_map_table(); bool is_video_packet(); bool is_audio_packet(); bool is_sony_private_a0_packet(); bool is_sony_private_a1_packet(); bool is_null_packet(); SonyA1 *GetSonyA1(); protected: unsigned char *data; HDVFrame *frame; HDVStreamParams *params; PAT pat; PMT pmt; SonyA1 sonyA1; }; #endif dvgrab-3.5+git20160707.1.e46042e/TODO0000644000175000017500000000021112716434257013660 0ustar esesmpeg2-ts support Some form of positioning, such that dvgrab waits for a given timecode or date/time stamp before grabbing batch capture dvgrab-3.5+git20160707.1.e46042e/dvgrab.h0000644000175000017500000000743112716434257014621 0ustar eses/* * dvgrab.h -- DVGrab control class * Copyright (C) 2003-2008 Dan Dennedy * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. * */ #ifndef _DVGRAB_H #define _DVGRAB_H 1 #include #include #include #include #include "ieee1394io.h" #include "filehandler.h" #include "dvframe.h" #include "hdvframe.h" #include "smiltime.h" #include #define DEFAULT_FORMAT (RAW_FORMAT) #define DEFAULT_FORMAT_STR "raw" #define DEFAULT_FRAMES 0 #define DEFAULT_LOCKSTEP_MAXDROPS -1 #define DEFAULT_LOCKSTEP_TOTALDROPS -1 #define DEFAULT_SIZE 1000 #define DEFAULT_CSIZE 0 #define DEFAULT_CMINCUTSIZE 0 #define DEFAULT_EVERY 1 #define DEFAULT_CHANNEL 63 #define DEFAULT_BUFFERS 100 #define DEFAULT_V4L2_DEVICE "/dev/video" extern int g_debug; class DVgrab { private: /// the interface card to use (typically == 0) int m_port; int m_node; char *m_program_name; int m_showstatus; bool m_hdv; bool m_reader_active; const char *m_input_file_name; char *m_dst_file_name; int m_autosplit; int m_timestamp; int m_channel; int m_frame_count; int m_max_file_size; int m_collection_size; int m_collection_min_cut_file_size; int m_sizesplitmode; int m_file_format; int m_open_dml; int m_frame_every; int m_jpeg_quality; int m_jpeg_deinterlace; int m_jpeg_width; int m_jpeg_height; std::string m_jpeg_temp; int m_jpeg_usetemp; int m_jpeg_overwrite; int m_dropped_frames; int m_bad_frames; bool m_interactive; int m_buffers; int m_total_frames; std::string m_duration; SMIL::MediaClippingTime* m_timeDuration; int m_noavc; uint64_t m_guid; int m_timesys; iec61883Connection* m_connection; bool m_raw_pipe; int m_no_stop; int m_timecode; int m_lockstep; int m_lockstep_maxdrops; int m_lockstep_totaldrops; bool m_lockPending; TimeCode m_lastTimeCode; bool m_isLastTimeCodeSet; struct tm m_lastRecDate; bool m_isLastRecDateSet; bool m_v4l2; int m_jvc_p25; int m_24p; int m_24pa; int m_timeSplit; int m_srt; bool m_isNewFile; bool m_isRecordMode; int m_isRewindFirst; static FileHandler *m_writer; bool m_captureActive; static pthread_mutex_t capture_mutex; static pthread_t capture_thread; static pthread_t watchdog_thread; AVC *m_avc; IEEE1394Reader *m_reader; static Frame *m_frame; unsigned int 
m_transportStatus; static void *captureThread( void* ); static void *watchdogThreadProxy( void* ); public: DVgrab( int argc, char *argv[] ); ~DVgrab(); void getargs( int argc, char *argv[] ); void startCapture(); void stopCapture(); void status(); void watchdogThread(); void captureThreadRun(); bool execute( const char ); bool isPlaying(); bool isInteractive() { return m_interactive; } bool done(); void testCapture( void ); static void testCaptureProxy( BusResetHandlerData ); private: void sendCaptureStatus( const char *name, float size, int frames, TimeCode *tc, struct tm *rd, bool newline ); void sendFrameDroppedStatus( const char *reason, const char *meaning ); void writeFrame(); void cleanup(); void print_usage(); void print_help(); void print_version(); void set_file_format( char *format ); void set_format_from_name( void ); }; #endif dvgrab-3.5+git20160707.1.e46042e/ChangeLog0000644000175000017500000006324312716434257014760 0ustar esesdvgrab ChangeLog (c) 2000-2003 Arne Schirmacher dvgrab@schirmacher.de (c) 2002-2009 Dan Dennedy SEE the NEWS file for RELEASE NOTES! ----- v3.5 released ----- 2009-09-07 Dan Dennedy - iec13818-1.{h,cc}, iec13818-2.{h,cc}: apply patch from Arren to remove compiler warnings (2804114). - filehandler.cc: apply fix to undefined return value in QtHandler::GetFileSize() from Arren (2813664). - dvgrab.cc: remove (H)DV timeout and replace with "Waiting" message per Carl Karsten's suggestion. - filehandler.cc: apply patch from Marcel Mol to allow both -timecode and -timestamp or -timesys (2706278). - ieee1394io.cc: fix deadlock on SIGINT/SIGTERM with -stdin or -input(2460025). 2009-04-10 Dan Dennedy - main.cc: apply patch from Jarod Wilson to set retval to 1 if we get an error, to make life easier for folks who wrap dvgrab to tell if something went wrong (rhbz #486061). - error.cc, smiltime.cc: apply patch from Jarod Wilson to fix build on gcc 4.4. - ieee1394io.{h,cc}: added AVC:isHDV(). - dvgrab.cc: try to automatically detect HDV devices and set file format. ----- v3.4 released ----- 2009-02-14 Dan Dennedy - main.cc: bugfix failure to start due to regression in 3.3. ----- v3.3 released ----- 2009-01-14 Dan Dennedy - ieee1394io.{h,cc}, dvgrab.{h,cc}: separate the warnings for dropped versus damaged frames. - main.cc: silence warnings about inability to change scheduling class and lock memory. - ieee1394io.cc: return false on invalid timecode in AVC::Timecode(). - io.c: read terminal every 125ms instead of 1s. - dvgrab.cc: in interactive mode display invalid timecode as "--:--:--:--" instead of "ff:ff:ff:ff" and suppress filename when no capture is in progress instead of displaying "stdout". - dvgrab.cc: reduce the frequency of AVC::TransportStatus calls making it consistently every ~125ms. - filehandler.cc: remove "deprecated conversion from string constant" warnings. - dvgrab.{cc,1}: add 'avi' as alias for dv2 and 'mov' as alias for qt. 2009-01-04 Dan Dennedy - dvframe.cc, filehandler.cc: bugfix compilation errors when not using libdv due to missing includes. 2008-11-25 Dan Dennedy - v4l2reader.cc: bugfix (2337246) hang on interrupting capture with no valid data received. Also, bugfix double free on exit. - most files: updated copyrights. - dvgrab.1: fix (2313429) missing hyphens. 2008-11-19 Dan Dennedy - dvgrab.cc: bugfix (2313380) premature exit/stop capture when using live/record mode - this time to cover more use cases. 
2008-11-18 Dan Dennedy - dvgrab.cc: bugfix (2313380) premature exit/stop capture when using live/record mode on device instead of VCR and not using -record option. 2008-11-09 Dan Dennedy - filehandler.cc: bugfix -timestamp option not putting file extension on filename. ----- v3.2 released ----- 2008-08-04 Dan Dennedy - filehandler.{h,cc}, dvgrab.{h,cc}, dvgrab.1, srt.{h,cc}: apply patch from Pelayo Bernedo to add -srt option and optional arg to -autosplit to perform splits on gaps in recording date/time. 2008-07-24 Dan Dennedy - filehandler.cc: bugfix autosplit not detecting timecode discontinuity when timecode reverses a small amount - any backwards timecode event should be considered discontinuous. - configure.in, Makefile.am, config.h.in: switch to pkg-config for libavc1394-related build stuff. 2008-07-22 Dan Dennedy - filehandler.cc: bugfix autosplit not detecting timecode discontinuity when timecode resets or jumps way back in time. - dvgrab.cc, ieee1394io.cc, raw1393util.c: bugfix memory leaks reported by valgrind. - filehandler.cc, smiltime.cc, stringutils.cc: set std::ends on stringstreams in case it helps. 2008-07-02 Dan Dennedy - dvframe.cc, dvgrab.cc: apply patches from Patrick Mansfield to prevent stack corruption on invalid DV timecodes. 2008-02-26 Dan Denendy - dvgrab.cc: bugfix short option '-a' not working. - dvgrab.{h,cc,1}: added -recordonly (short -r ) option. - dvgrab.{h,cc}: added ability to stop capture at end of tape. - dvgrab.{h,cc,1}: added -rewind option to fully rewind tape before capture. - filehandler.cc: fix reporting bogus file size. - apply patch from Jarod Wilson/Red Hat to fix compilation with gcc 4.3. 2007-12-11 Dan Denendy - filehandler.cc: bugfix frame dropping with -24pa option. ----- v3.1 released ----- 2007-12-04 Dan Denendy - filehandler.{h,cc}, dvgrab.{h,cc}, dvgrab.1: Apply 24p patch from Joe Stewart/CinCVS which adds -24p and -24pa options for Quicktime DV output: https://init.linpro.no/pipermail/skolelinux.no/cinelerra/2006-October/008095.html 2007-11-21 Dan Dennedy - filehandler.cc: apply patch to fix writen returning -1 on failure. - riff.{h,cc}, dvgrab.{h,cc}, iec13818-2.cc, ieee1394io.h, v4l2reader.{h,cc}: add const to char* to reduce compiler warnings. 2007-11-12 Dan Dennedy - iec13818-1.cc: bugfix (1828106) transport_scrambling_control() not returning value. 2007-11-06 Dan Dennedy - hdvframe.{h,cc}: apply patch from Lars Täuber to fix P25 detection. 2007-11-01 Dan Dennedy - hdvframe.h: apply patch 1823641 to address bug 1773826 and prevent message "ERROR: too much carryover data." - filehandler.{h,cc}, dvgrab.{h,cc}, frame.h, hdvframe.{h,cc}, dvgrab.1: apply patch from Lars Täuber to process JVC P25 mode HDV (-jvc-p25). - error.h, iec13818-2.{h,cc}: apply patch 1823069 from Dan Streetman to improve HDV debug logging. The [S] slice debugging is compacted into [S*n]; the default DEBUG does a newline now instead of a \r (i.e. it preserves the previous debug output on the screen); the GOP Dump() method has a fix in the format string. - dvgrab.1, dvgrab.cc: address bug 1810616 by using mebibyte and MiB. - dvgrab.cc: bugfix 1810641 by not using lockstep unless a value greater than zero is supplied for -frames. - dvgrab.h: bugfix default format to raw. - ieee1394io.{h,cc}: bugfix segfault on StopThread() on reported by Jarod Wilson. - ieee1394io.{h,cc}: bugfix assertion in libiec61883 - removed union in iec61883Reader to keep it simple. 
2007-09-11 Dan Dennedy - filehandler.cc: apply patch from David Arendt to fix potential data loss due to short write on raw DV and MPEG2-TS files. - ieee1394io.cc: bugfix hang at end of reading from stdin. 2007-09-04 Dan Dennedy - dvgrab.cc: bugfix pipe output in conjunction with file capture. 2007-08-30 Dan Dennedy - configure.in: added explicit --with options to suppress dependencies on libdv, libquicktime, and libjpeg. - dvframe.h: fix broken compilation without libdv. ----- v3.0 released ----- 2007-08-06 Dan Dennedy - configure.in, v4lreader.{h,cc}, dvgrab.cc: fix compiling without linux/videodev2.h. - iec13818-2.cc: bugfix virtual infinite loop while parsing mpeg2 visual sequence end packet. 2007-07-28 Dan Dennedy - dvframe.h: define CLAMP if not building with libdv. - dvgrab.1, dvgrab.h: change default DV file format to raw. - dvgrab.1, v4l2reader.{h,c}, dvgrab.cc: add support for DV-over-USB (linux uvcvideo) with new -v4l2 option. 2007-07-08 Dan Dennedy - stringutils.{h,cc}: added toUpper and toLower methods. - dvgrab.{h,cc}, dvgrab.1: infer format from filename extension. - dvgrab.cc, dvgrab.1: sort the options, document more options, add more short options. 2007-07-06 Dan Dennedy - avi.{h,cc}, dvframe.{h,cc}, dvgrab.{h,cc}, error.{h,cc}, filehandler.{h,cc}, frame.{h,cc}, hdvframe.{h,cc}, iec13818-1.{h,cc}, iec13818-2.{h,cc}, ieee1394io.{h,cc}, main.cc: apply patch 1703694 from Dan Streetman adding HDV support. - dvgrab.{h,cc}, filehandler.cc, frame.h, hdvframe.{h,cc}, iec13818-1.{h,cc}, iec13818-2.{h,cc}: apply patch 1705146 from Dan Streetman improving mpeg2ts timecode parsing. - dvgrab.cc, error.{h,cc}, hdvframe.cc, iec13818-1.{h,cc}, iec13818-2.cc, main.cc: apply patch 1705292 from Dan Streetman to enhance mpeg2ts parser debug logging. - dvgrab.cc: remove --dv and --hdv, making hdv usage automatic via --format mpeg2 with mpeg2ts or hdv as aliases. - dvgrab.cc, dvgrab.1: use getopt_long_only to allow single hyphen in addition to double hyphen. - dvgrab.cc, filehandler.cc: regression bugfixes - raw1394util.c: add discovery of Motorola DCT setop boxes (mpeg2-ts). - dvgrab.1, dvgrab.cc, raw1394util.{h,c}: improve support for settop boxes with -guid=1, which discovers AND sets up p2p connection. Also, reset bus prior to exit and suggest retry upon timeout on mpeg2ts. - dvgrab.cc: bugfix off-by-one in -duration handling. 2007-01-14 Dan Dennedy - filehandler.{h,cc}, dvgrab.{h,cc}, dvgrab.1: apply patch (bug 1629632) from Koen Martens to accept a temporary file in jpeg-overwrite mode and then rename to target file. This prevents other apps (web server, stopmotion) from receiving partial images. - dvgrab.cc: ignore timecode for detecting pause in jpeg-overwrite mode. ----- v2.1 released ----- 2006-12-19 Dan Dennedy - dvgrab.cc: do not capture frames with zero timecode. My camera sends zero timecode when paused in recording mode. Maybe yours does too. - filehandler.{h,cc}: do not change filename in --jpeg-overwrite mode. Only appends extension to base name if extension not in base. Does not put numbers or timecode or other into filename. 2006-03-03 Dan Dennedy - dvgrab.1: remove GNU FDL license from man page upon request by Daniel Kobras. - configure.in, Makefile.am, filehandler.h: improve building against libquicktime. - filehandler.cc: bugfix usage of private member of quicktime4linux API. - avi.h, avi.cc: prevent macro name clash on AVI_INDEX_OF_... with quicktime4linux qtprivate.h. 2005-09-25 Dan Dennedy - ieee1394io.h, ieee1394io.cc: drop the GetIncompleteFrames method. 
- ieee1394io.cc: set the number of bytes in Frame() based upon info from libiec61883 to properly detect incomplete state in Frame object. - dvgrab.cc: bugfix incomplete frame handling so as to not drop a complete frame due to an asynchronous IEEE1394Reader::GetIncompleteFrames event. Instead, rely soley upon the fixed Frame::IsComplete. ----- v2.0 released ----- 2005-04-08 Dan Dennedy - main.cc: apply realtime and priority scheduling and memory locking as used in dvconnect. - riff.cc, riff.h, avi.cc, avi.h, endian_types.h (added): make RIFF and AVI classes endian and architecture safe (Daniel Kobras). 2005-03-29 Dan Dennedy - dvgrab.cc: added lockstep_maxdrops and lockstep_totaldrops options. 2005-03-25 Dan Dennedy - ieee1394io.cc: separate dropped from incomplete frames; remove mutex lock in iec61883Reader::StopThread to prevent deadlock on exit; update iec61883Connection to libiec61883 API changes. - dvgrab.cc: reorganize capture thread method; make lockstep repeat frames when frames are dropped rather than truncate. 2005-03-21 Dan Dennedy - dvgrab.cc: ensure cleanup and error reporting on failure after a bus reset. 2005-02-07 Dan Dennedy - smiltime.h, smiltime.cc: added parseSmpteNtscDropValue() - dvgrab.cc: use proper timecode parser for NTSC drop-frame to fixup lockstep. 2005-02-04 Dan Dennedy - added --lockstep option 2005-01-31 Dan Dennedy - dvgrab.cc: disable some erroneous usage of some AV/C commands that interferes with bus reset handling and --nostop. - dvgrab.cc: reduce timeout to 1 second in bus reset handler. 2005-01-27 Dan Dennedy - ieee1394io.h, ieee1394io.cc: add an optional seconds parameter to IEEE1394Reader::WaitForAction() and add iec61883Connection::Reconnect() that uses new libiec61883 function. - dvgrab.cc: do not issue AV/C pause when not interactive and use WaitForAction in bus reset watchdog thread. - ieee1394io.h, ieee1394io.cc, dvgrab.cc: added iec61883Connection::CheckConsistency() 2005-01-26 Dan Dennedy - ieee1394io.cc, ieeee1394io.h, dvgrab.cc, dvgrab.h: check for DV in a watchdog thread upon bus reset. 2005-01-25 Dan Dennedy - dvgrab.cc, dvgrab.h: disable sending raw dv to stdout when stdout is not a tty unless '-' is on command line. - dvgrab.cc, dvgrab.h: added --nostop to prevent sending AV/C stop command on exit. - filehandler.cc, dvgrab.cc, dvgrab.1: added --timecode option. 2004-12-11 Dan Dennedy - ieee1394io.cc, dvgrab.cc: drop legacy raw1394 and dv1394 capture and add libiec61883 support with connection management. 2004-12-07 Dan Dennedy - dvgrab.cc: fix display of --dv1394 option in usage report. ----- v1.7 released ----- 2004-11-29 Dan Dennedy - configure.in: remove unnecessary check for dv1394.h - ieee1394io.cc: remove libraw1394 0.8 compatibility cruft - dvgrab.cc: make duration option more accurate 2004-10-26 Dan Dennedy - dvgrab.1: correct the --frames example. 2004-08-21 Dan Dennedy - ieee1394io.cc: cleanup reception thread cancellation issues by removing blocking 1394 I/O calls 2004-08-16 Dan Dennedy - remove debian directory - ieee1394io.cc: bugfix deadlock on stop capture for dv1394 2004-08-12 Dan Dennedy - smiltime.cc: bugfix indefinite state ----- v1.6 released ----- 2004-07-21 Dan Dennedy - dvgrab.{h,cc}, filehandler.{h,cc}, dvgrab.1: added --timesys option -- similar to --timestamp but using system time instead. 
2004-07-13 Dan Dennedy - dv1394.h: added since no longer in libdv - ieee1394io.cc: prevent potential deadlock on stopping threads (patch from Jean-Francois Panisset ) - ieee1394io.{h,cc}: cleanup usage of statics and threads - dvgrab.{1,cc,h}, filehandler.{cc,h}: added --csize and --cmincutsize opts. (patch from Pierre Marc Dumuid ) ----- v1.5 released ----- 2004-01-14 Dan Dennedy - dvgrab.cc: bugfix --every option 2004-01-06 Dan Dennedy - riff.h: 64bit OS fix. 2003-12-29 Dan Dennedy - filehandler.cc, dvgrab.cc: bugfix autosplit first frame of new recording appearing as last frame of previous file (DVGRAB-20). - dvgrab.cc: bugfix type 2 AVI audio stream frequency setting when starting capture from non-playing state. - dvgrab.cc: enhanced autosplit by checking timecode continuity (DVGRAB-2). 2003-12-27 Arne Schirmacher - bugfix broken file when file size was between 1008 and 1024 MByte and OpenDML was not turned on (DVGRAB-17, patch from Andrew Church) - bugfix file size displayed was always xxx.00 MB (DVGRAB-19, patch from Andrew Church) 2003-12-09 Dan Dennedy - ieee1394io.cc: revise AV/C Record mode to never issue a record command on AVC::Play or AVC::Pause. - filehandler.cc: bugfix notification of capture file on autosplit. - dvgrab.1: bugfix to remove erroneous short option -i on --noavc. 2003-11-24 Dan Dennedy - ieee1394io.cc, dvgrab.cc: make camera (AV/C Record) mode work correctly (toggle record / pause). ----- v1.4 released ----- 2003-11-16 Dan Dennedy - dvgrab.cc:: in startCapture() check g_done when waiting for initial frame to allow interruption. - all: remove use of deprecated strstream, using sstream and ostringstream. - all: fixup pretty make target and run against all code to reformat. - configure.in: bump version 2003-11-09 Dan Dennedy - frame.cc: bugfix speed detection on DVCPRO (SMPTE 314M) - avi.cc: remove addition of JUNK chunks in the MOVI list to reduce memory overhead of RIFF Directory and reduce file sizes. 2003-10-13 Dan Dennedy - bugfix parsing new boolean options --stdin and --noavc if last on command line. - bugfix sigint handling - added --guid option to select device. - make --noavc not do bus probes if not necessary. - make arg to --dv1394 optional for use with devfs. 2003-10-09 Dan Dennedy - dvgrab.cc, dvgrab.1: make default --frames = 0, document 0 is unlimited, fix documentation of SMIL time (.ms optional - dvgrab.cc: make opendml default off, ring bell on frame drop 2003-10-08 Dan Dennedy - added option --noavc to disable use of any AV/C. 2003-10-07 Dan Dennedy - make --duration option work with piped output too. 2003-09-26 Dan Dennedy - require option --stdin to read from a pipe. ----- v1.3 released ----- 2003-09-26 Dan Dennedy - added support for saving raw DV with .dif extension for MainActor5. - added support for reading raw DV on stdin instead of 1394 in non- interactive mode. - added --duration option with SMIL time value parser. - updated man page. 2003-09-17 Dan Dennedy - added interactive mode with AV/C control - send AV/C play prior to capture on non-interactive capture - make dv2 default format - simultaneous capture and rawdv on stdout - automatically discover camera on any port (overridden via --card) - bump version to 1.3 2003-04-21 Dan Dennedy - Applied patch from Daniel Kobras with some changes. Return success/failure on AVI and FileHandler WriteFrame methods. Automatically enable OpenDML feature if --size >1GB or --size = 0 (unlimited). 
2003-02-22 Dan Dennedy - added JPEG overwriting mode (--jpeg-overwrite) (update the same jpeg instead of seqentially numbering them). - added raw DV stdout (base filename = '-') mode - updated AVI code from Kino 0.6.4. - bugfix segfault on dv1394 buffer underrun - bump version to 1.2 2003-02-06 Dan Dennedy - disable processing of jpeg options when not compile with libdv and jpeglib. Changes in version 1.1b4 by Dan Dennedy - added JPEG output with deinterlace and scaling - bugfix an AVI write bug - bugfix dv1394 capture on PAL Changes in version 1.1b3 by Dan Dennedy - pickup improved AVI classes from Kino. - pickup FileHandler class from Kino to unify operation and features across formats. - raw format always creates consistent bytes-per-frame files even if there is a dropped packet. - detect libdv and use it for audio extraction and metadata - detect libdv/dv1394.h to enable support for dv1394 (--dv1394) - split on a max file size (--size) - do not overwrite existing files. - detect libquicktime and add quicktime file support (--format qt). - change --index option to --opendml for dv2 format only. dv1 always only contains large index. dv2 contains only small index. dv2+opendml contains both small and large index. - verified compilation on gcc 3.2. Changes in version 1.1b2 - more work on writing large files Changes in version 1.1b1 - writes files larger than 1 GByte. Currently only with --format dv1 files. - lots of code refactoring Changes in version 1.01 - PAL/NTSC detection improved - minor changes to compile with g++ 3.0 (contributed by Daniel Kobras) - buffer space for frames doubled: now 100 frames instead of 50 (about 3-4 secs, 16 MByte RAM total) - bugfix: dvgrab will now reliably close the current AVI file when aborted by ctrl-c (contributed by Glen Nakamura) Changes in version 1.0 - adapted to libraw 0.9 - packet assemble code modified - some camcorders send only 492 bytes instead of 496 Changes in version 0.99 - fixed a bug that caused dvgrab to crash when grabbing very big files in AVI Type 2 format and NTSC mode - fixed the incorrect size of files in raw format (they always had an extra 480 bytes) - fixed exception thrown when writing a 0 frames file - automake and debian files provided by Daniel Kobras Changes in version 0.89: - added the raw1394_stop_iso_rcv call in the close routines Changes in version 0.88: - support for 12 bit audio in AVI Type 2 files (courtesy Mike Pieper) Changes in version 0.87: - new feature (courtesy Stephane Gourichon): there is now a parameter that allows to save every n-th frame only. Great for monitoring purposes or when you need a rough overview of the contents of a tape. - Type 2 audio has been slightly improved. Still no 12-bit audio, but I'm getting close. - refactoring and cleanup of the code. - a few bug fixes: the --format raw mode now works again. Changes in version 0.86: - some improvements in the Type 2 AVI audio code. Many camcorders send audio data which is not defined in my 314m specification, which caused clicks and distortions, because the audio data was left out. If you have extensive audio error messages, your camcorder is using an unsupported audio mode, such as 12 bit audio data. - the --testmode parameter reads DV data from a testdata.raw file created previously with the --format test parameter. No camcorder is required. Changes in version 0.85: - handling of additional file formats: --format raw --format test Changes in version 0.84: - major rewrite. dvgrab now uses a separate thread for handling the ieee1394 interface. 
This improves the behavior under load conditions very much. Changes in version 0.83: - there is now audio support for --format dv2 AVIs. Currently only 16 bit 48 kHz sound is supported. Changes in version 0.82: - incorrect audio speed value fixed, overflow error for long Type 2 AVIs fixed. - Added support for the --format raw data format. This is just the raw DV data (120000 bytes for each NTSC frame or 144000 bytes for each PAL frame) written to a file. Can be used with the playdv program from the libdv project. Changes in version 0.81: - fixed a bug that prevented Type 1 AVIs from playing with some players. Changes in version 0.8: - the program can now write Type 1 and Type 2 DV AVI files. The difference is that Type 1 contains only one data stream which has audio and video data interleaved, whereas Type 2 AVIs contain an additional audio stream. Since the video stream still has the audio data interleaved, this is some waste of disk space, but MainActor can't read Type 1 files so I had to add support for this other format too. Note that this version currently writes empty audio tracks. This means: no sound, but at least it loads nicely into MainActor. A free demo version of MainActor can be downloaded from: http://www.mainconcept.com - Because of the increasing number of supported file formats I have changed the option for selecting them. To write MainActor compatible AVI files use this command: dvgrab --format dv2 outfile Changes in version 0.7: - the previous algorithm for assembling the data packets did not work for some camcorders. I rewrote it and the new algorithm explicitly calculates the address in the frame buffer for each data packet. - the program attempts to auto-detect NTSC and PAL format. The --pal or --ntsc command line parameter is no longer mandantory, but it will override the auto-detect feature if applied. Use them if the auto-detect feature doesnt work for you. - The --timestamp parameter will put the date and time of the recording into the AVI file name. If it doesnt do that, dvgrab cant find any date and time related packages in the DV data. - I included the little utilities riffdump (prints out the structure of an arbitrary AVI file) and rawdump (prints out an ASCII dump of a file created with the --raw option) Changes in version 0.6: - switched to C++ code. The reason is that C++ provides the exception mechanism, which I use for a more reasonable error reporting. Also the whole concept of the RIFF/AVI file structure leads to an object oriented approach. - The program has now an auto-split function (see dvgrab --help): It will start a new file whenever (1) the file size approaches 2 GByte (exceeds 0x70000000 bytes, to be specific), (2) the frame count is exceeded, or (3) the recording of a new scene is detected. The program will loop forever until you press CTRL-C. The current AVI file will be properly closed, so you dont lose any data (at least this is the plan). - Closing the file is now reasonably fast. - there is some visual feedback if the device driver drops data. Changes in version 0.5: - implemented the extended AVI file format which allows for files > 1 GByte. Note that Linux is currently still limited to a maximum file size of 2 GByte, which corresponds to approx. 14500 frames for PAL format or 17500 frames for NTSC format. - more comments in the source code Changes in version 0.4: - implemented the NTSC file format. - there is now an error handler which intercepts the signals 1, 2, 3 and 15 and attempts to properly close the AVI file. 
So you can safely press CTRL-C to stop capturing without losing the data saved so far. Changes in version 0.3: - implemented the raw capturing mode, wich simply saves the raw DV data to a file. Useful for analyzing the data. - tested with files up to 2000 frames (288 MByte) Changes in version 0.2: - a basic set of command line options. Run 'dvgrab --help' for a list. - more error checking - more comments - tested with the latest ohci driver (no changes were necessary though) Changes in version 0.1: - first version dvgrab-3.5+git20160707.1.e46042e/frame.cc0000644000175000017500000000213212716434257014575 0ustar eses/* * frame.cc -- utilities for processing digital video frames * Copyright (C) 2000 Arne Schirmacher * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ #include "frame.h" Frame::Frame() { Clear(); } Frame::~Frame() { } int Frame::GetDataLen() { return dataLen; } void Frame::SetDataLen( int len ) { dataLen = len; } void Frame::AddDataLen( int len ) { SetDataLen( GetDataLen() + len ); } void Frame::Clear() { dataLen = 0; } dvgrab-3.5+git20160707.1.e46042e/v4l2reader.cc0000644000175000017500000001504112716434257015460 0ustar eses/* * ieee1394io.cc -- grab DV/MPEG-2 from V4L2 * Copyright (C) 2007-2008 Dan Dennedy * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
*/ #include "v4l2reader.h" #ifdef HAVE_LINUX_VIDEODEV2_H #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "error.h" #include "dvframe.h" v4l2Reader::v4l2Reader( const char *filename, int frames, bool hdv ) : IEEE1394Reader( 0, frames, hdv ) , m_device( filename ) , m_fd( -1 ) , m_bufferCount( 0 ) , m_buffers( 0 ) { } v4l2Reader::~v4l2Reader() { Close(); } bool v4l2Reader::Open( void ) { bool success = true; try { // Open device file fail_neg( m_fd = open( m_device, O_RDWR | O_NONBLOCK, 0 ) ); // Validate capabilities struct v4l2_capability cap; fail_neg( ioctl( VIDIOC_QUERYCAP, &cap ) ); fail_if( !( cap.capabilities & V4L2_CAP_VIDEO_CAPTURE ) ); fail_if( !( cap.capabilities & V4L2_CAP_STREAMING ) ); // Signal MMAP capture struct v4l2_requestbuffers req; memset( &req, 0, sizeof( req ) ); req.count = 4; req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE; req.memory = V4L2_MEMORY_MMAP; fail_neg( ioctl( VIDIOC_REQBUFS, &req ) ); fail_if( req.count < 2 ); // Allocate mmap buffers tracking list m_buffers = static_cast< struct buffer* >( calloc( req.count, sizeof( struct buffer ) ) ); fail_null( m_buffers ); for ( m_bufferCount = 0; m_bufferCount < req.count; m_bufferCount++ ) { struct v4l2_buffer buf; memset( &buf, 0, sizeof( buf ) ); buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE; buf.memory = V4L2_MEMORY_MMAP; buf.index = m_bufferCount; fail_neg( ioctl( VIDIOC_QUERYBUF, &buf ) ); m_buffers[ m_bufferCount ].length = buf.length; // MMAP m_buffers[ m_bufferCount ].start = mmap( NULL, buf.length, PROT_READ | PROT_WRITE, MAP_SHARED, m_fd, buf.m.offset ); fail_if( m_buffers[ m_bufferCount ].start == MAP_FAILED ); } } catch ( std::string exc ) { Close(); sendEvent( exc.c_str() ); success = false; } return success; } void v4l2Reader::Close( void ) { if ( m_buffers ) { // Release mmaped buffers for ( unsigned int i = 0; i < m_bufferCount; i++ ) munmap( m_buffers[i].start, m_buffers[i].length ); // Release mmap buffers tracking list free( m_buffers ); m_buffers = NULL; } // Close device file if ( m_fd > -1 ) { fail_neg( close( m_fd ) ); m_fd = -1; } } bool v4l2Reader::StartReceive( void ) { bool success = true; try { // Enqueue all V4L2 buffers for ( unsigned int i = 0; i < m_bufferCount; i++ ) { struct v4l2_buffer buf; memset( &buf, 0, sizeof( buf ) ); buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE; buf.memory = V4L2_MEMORY_MMAP; buf.index = i; fail_neg( ioctl( VIDIOC_QBUF, &buf ) ); } // Tell V4L2 to start enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE; fail_neg( ioctl( VIDIOC_STREAMON, &type ) ); } catch ( std::string exc ) { sendEvent( exc.c_str() ); success = false; } return success; } void v4l2Reader::StopReceive( void ) { enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE; if ( ioctl( VIDIOC_STREAMOFF, &type ) < 0 ) sendEvent( "VIDIOC_STREAMOFF failed" ); } bool v4l2Reader::StartThread( void ) { if ( isRunning ) return true; if ( Open() && StartReceive() ) { isRunning = true; pthread_create( &thread, NULL, ThreadProxy, this ); pthread_mutex_unlock( &mutex ); return true; } else { Close(); pthread_mutex_unlock( &mutex ); TriggerAction( ); return false; } } void v4l2Reader::StopThread( void ) { if ( isRunning ) { isRunning = false; pthread_join( thread, NULL ); StopReceive(); Close(); Flush(); } TriggerAction( ); } void* v4l2Reader::ThreadProxy( void *arg ) { v4l2Reader* self = static_cast< v4l2Reader* >( arg ); return self->Thread(); } void* v4l2Reader::Thread( void ) { struct pollfd v4l2_poll; int result; v4l2_poll.fd = m_fd; 
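	// Note (added comment, not in the original source): the fd is polled below with a 200 ms timeout,
	// so the outer loop can notice isRunning being cleared; POLLPRI is requested alongside POLLIN so
	// urgent/exceptional data also wakes the reader.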
v4l2_poll.events = POLLIN | POLLERR | POLLHUP | POLLPRI; while ( isRunning ) { while ( ( result = poll( &v4l2_poll, 1, 200 ) ) < 0 ) { if ( !( errno == EAGAIN || errno == EINTR ) ) { perror( "error: v4l2 poll" ); break; } } if ( result > 0 && ( ( v4l2_poll.revents & POLLIN ) || ( v4l2_poll.revents & POLLPRI ) ) ) if ( ! Handler( ) ) isRunning = false; } return NULL; } bool v4l2Reader::Handler( void ) { bool success = true; try { // Get the current V4L2 buffer struct v4l2_buffer buf; memset( &buf, 0, sizeof( buf ) ); buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE; buf.memory = V4L2_MEMORY_MMAP; int result = ioctl( VIDIOC_DQBUF, &buf ); if ( result < 0 && errno != EAGAIN && errno != EIO ) fail_neg( result ); assert( buf.index < m_bufferCount ); // Get a new dvgrab buffer (frame) if ( currentFrame == NULL ) { if ( inFrames.size() > 0 ) { pthread_mutex_lock( &mutex ); currentFrame = inFrames.front(); currentFrame->Clear(); inFrames.pop_front(); pthread_mutex_unlock( &mutex ); } else { droppedFrames++; return 0; } } // Copy the data from V4L2 to dvgrab size_t length = CLAMP( m_buffers[buf.index].length, 0, sizeof( currentFrame->data ) ); memcpy( currentFrame->data, m_buffers[buf.index].start, length ); currentFrame->AddDataLen( length ); fail_neg( ioctl( VIDIOC_QBUF, &buf ) ); // Signal dvgrab buffer ready if ( currentFrame->IsComplete( ) ) { pthread_mutex_lock( &mutex ); outFrames.push_back( currentFrame ); currentFrame = NULL; TriggerAction( ); pthread_mutex_unlock( &mutex ); } } catch ( std::string exc ) { sendEvent( exc.c_str() ); success = false; } return success; } int v4l2Reader::ioctl( int request, void *arg ) { int r; do r = ::ioctl( m_fd, request, arg ); while ( -1 == r && EINTR == errno ); return r; } #endif dvgrab-3.5+git20160707.1.e46042e/hdvframe.cc0000644000175000017500000001721412716434257015306 0ustar eses/* * hdvframe.cc -- utilities for processing HDV frames * Copyright (C) 2000 Arne Schirmacher * Copyright (C) 2007 Dan Streetman * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
*/ #include #include "hdvframe.h" HDVFrame::HDVFrame( HDVStreamParams *p ) { Clear(); params = p; packet = new HDVPacket( this, params ); } HDVFrame::~HDVFrame() { delete packet; } bool HDVFrame::IsHDV() { return true; } bool HDVFrame::CouldBeJVCP25() { return repeatFirstField && ( 50 == frameRate ); } bool HDVFrame::CanStartNewStream() { return isGOP; } void HDVFrame::Clear() { isComplete = false; isRecordingDateSet = false; isTimeCodeSet = false; isNewRecording = false; isGOP = false; width = 0; height = 0; frameRate = 0; repeatFirstField = false; lastVideoDataLen = 0; lastAudioDataLen = 0; Frame::Clear(); } void HDVFrame::SetDataLen( int len ) { int old_len = GetDataLen(); if ( !old_len ) { if ( params->carryover_length > 0 ) { memmove( &data[ params->carryover_length ], data, len ); memcpy( data, params->carryover_data, params->carryover_length ); len += params->carryover_length; params->carryover_length = 0; } DEBUG_RAW( d_hdv_video, "->\n<- New HDVFrame:" ); } Frame::SetDataLen( len ); ProcessFrame( old_len ); } bool HDVFrame::GetRecordingDate( struct tm &rd ) { if ( !isRecordingDateSet ) return false; memcpy(&rd, &recordingDate, sizeof(rd)); return true; } bool HDVFrame::GetTimeCode( TimeCode &tc ) { if ( !isTimeCodeSet ) return false; memcpy(&tc, &timeCode, sizeof(tc)); return true; } float HDVFrame::GetFrameRate() { return frameRate; } bool HDVFrame::IsNewRecording() { return isNewRecording; } bool HDVFrame::IsGOP() { return isGOP; } bool HDVFrame::IsComplete( void ) { return isComplete; } int HDVFrame::GetWidth() { return width; } int HDVFrame::GetHeight() { return height; } void HDVFrame::ProcessFrame( unsigned int start ) { for ( int i = start; i+HDV_PACKET_SIZE-1 < GetDataLen() && !IsComplete(); i += HDV_PACKET_SIZE ) { if ( i+HDV_PACKET_SIZE > DATA_BUFFER_LEN ) { sendEvent( "\aERROR:HDV Frame out of buffer space, completing packet early" ); isComplete = true; return; } if ( HDV_PACKET_MARKER == data[i] ) { packet->SetData( &data[i] ); ProcessPacket(); } else { // The stream has to be synced on packet boundries. // This could be changed to do in-code packet marker searching/syncing, // but it doesn't do that right now. sendEvent( "Invalid packet sync_byte 0x%02x!", data[i] ); } } } void HDVFrame::ProcessPacket() { DEBUG( d_hdv_pids, "PID %04x", packet->pid() ); packet->Dump(); if ( packet->is_program_association_packet() ) ProcessPAT(); else if ( packet->is_program_map_packet() ) ProcessPMT(); else if ( packet->is_sony_private_a1_packet() ) ProcessSonyA1(); else if ( packet->is_video_packet() ) ProcessVideo(); else if ( packet->is_audio_packet() ) ProcessAudio(); else if ( packet->is_null_packet() ) return; } void HDVFrame::ProcessPAT() { PAT *pat = packet->program_association_table(); pat->Dump(); // NOTE - this assumes a non-changing PAT; once we process the first one, skip the rest. if ( params->program_map_PID ) return; // Set the PMT PID if this is a PAT. if ( pat && pat->program_map_PID() > 0 && !params->program_map_PID ) params->program_map_PID = pat->program_map_PID(); } void HDVFrame::ProcessPMT() { PMT *pmt = packet->program_map_table(); pmt->Dump(); // NOTE - this assumes a non-changing PMT; once we get the video/audio PIDs, skip the rest. 
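	// Note (added comment, not in the original source): the elementary PIDs default to 0 in
	// HDVStreamParams, so the early return below only fires once both the video and audio PIDs
	// have already been learned from an earlier PMT.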
if ( params->video_stream_PID && params->audio_stream_PID ) return; PMT_element *elem; for ( int i = 0; ( elem = pmt->GetPmtElement( i ) ); i++ ) { switch ( elem->stream_type() ) { case STREAM_VIDEO: params->video_stream_PID = elem->elementary_PID(); break; case STREAM_AUDIO: params->audio_stream_PID = elem->elementary_PID(); break; case STREAM_SONY_PRIVATE_A0: params->sony_private_a0_PID = elem->elementary_PID(); break; case STREAM_SONY_PRIVATE_A1: params->sony_private_a1_PID = elem->elementary_PID(); break; } } } void HDVFrame::ProcessVideo() { params->video.AddPacket( packet ); if ( params->video.IsComplete() ) { // This carryover (probably) will not be needed if iec13818-2 slice macroblock parsing is added // until then, once the iec13818-2 parser detects a new PES packet and completes this HDVFrame, // all data since the last video or audio packet is carried over to the next HDVFrame. int lastDataLen = lastAudioDataLen > lastVideoDataLen ? lastAudioDataLen : lastVideoDataLen; params->carryover_length = GetDataLen() - lastDataLen; if ( params->carryover_length > CARRYOVER_DATA_MAX_SIZE ) { sendEvent( "\aERROR: too much carryover data (%d bytes), DROPPING DATA!\n", params->carryover_length ); params->carryover_length = CARRYOVER_DATA_MAX_SIZE; } memcpy( params->carryover_data, &data[lastDataLen], params->carryover_length ); Frame::SetDataLen( lastDataLen ); SetComplete(); } else { lastVideoDataLen = GetDataLen(); } } void HDVFrame::ProcessAudio() { lastAudioDataLen = GetDataLen(); } void HDVFrame::ProcessSonyA1() { SonyA1 *a1 = packet->GetSonyA1(); a1->Dump(); struct tm *rd = ¶ms->recordingDate; TimeCode *tc = ¶ms->timeCode; rd->tm_year = 100 + a1->year(); rd->tm_mon = a1->month() - 1; rd->tm_mday = a1->day(); rd->tm_hour = a1->hour(); rd->tm_min = a1->minute(); rd->tm_sec = a1->second(); tc->hour = a1->timecode_hour(); tc->min = a1->timecode_minute(); tc->sec = a1->timecode_second(); tc->frame = a1->timecode_frame(); params->isRecordingDateSet = true; params->isTimeCodeSet = true; isNewRecording = a1->scene_start(); } void HDVFrame::SetComplete() { Video *v = ¶ms->video; if ( v->width ) params->width = v->width; if ( v->height ) params->height = v->height; if ( v->frameRate ) params->frameRate = v->frameRate; if ( v->hasGOP ) isGOP = true; if ( v->isTimeCodeSet ) { memcpy( ¶ms->gopTimeCode, &v->timeCode, sizeof( params->gopTimeCode ) ); params->isGOPTimeCodeSet = true; } if ( params->isRecordingDateSet ) { memcpy( &recordingDate, ¶ms->recordingDate, sizeof( recordingDate ) ); isRecordingDateSet = true; } if ( params->isTimeCodeSet ) { memcpy( &timeCode, ¶ms->timeCode, sizeof( timeCode ) ); isTimeCodeSet = true; } else if ( params->isGOPTimeCodeSet ) { memcpy( &timeCode, ¶ms->gopTimeCode, sizeof( timeCode ) ); isTimeCodeSet = true; } width = params->width; height = params->height; frameRate = params->frameRate; repeatFirstField = v->repeat_first_field; isComplete = true; v->Clear(); } /////////////////// /// HDVStreamParams HDVStreamParams::HDVStreamParams() : program_map_PID( 0 ), video_stream_PID( 0 ), audio_stream_PID( 0 ), sony_private_a0_PID( 0 ), sony_private_a1_PID( 0 ), width( 0 ), height( 0 ), frameRate( 0 ), carryover_length( 0 ), isRecordingDateSet( false ), isTimeCodeSet( false ), isGOPTimeCodeSet( false ) { } HDVStreamParams::~HDVStreamParams() { } dvgrab-3.5+git20160707.1.e46042e/dvgrab.dox0000644000175000017500000006326412716434257015172 0ustar eses# Doxyfile 1.2.0 # This file describes the settings to be used by doxygen for a project # # All text after a hash (#) is 
considered a comment and will be ignored # The format is: # TAG = value [value, ...] # Values that contain spaces should be placed between quotes (" ") #--------------------------------------------------------------------------- # General configuration options #--------------------------------------------------------------------------- # The PROJECT_NAME tag is a single word (or a sequence of words surrounded # by quotes) that should identify the project. PROJECT_NAME = dvgrab # The PROJECT_NUMBER tag can be used to enter a project or revision number. # This could be handy for archiving the generated documentation or # if some version control system is used. PROJECT_NUMBER = 2.0 # The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) # base path where the generated documentation will be put. # If a relative path is entered, it will be relative to the location # where doxygen was started. If left blank the current directory will be used. OUTPUT_DIRECTORY = doc # The OUTPUT_LANGUAGE tag is used to specify the language in which all # documentation generated by doxygen is written. Doxygen will use this # information to generate all constant output in the proper language. # The default language is English, other supported languages are: # Dutch, French, Italian, Czech, Swedish, German, Finnish, Japanese, # Spanish, Russian, Croatian and Polish. OUTPUT_LANGUAGE = English # The DISABLE_INDEX tag can be used to turn on/off the condensed index at # top of each HTML page. The value NO (the default) enables the index and # the value YES disables it. DISABLE_INDEX = NO # If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in # documentation are documented, even if no documentation was available. # Private class members and static file members will be hidden unless # the EXTRACT_PRIVATE and EXTRACT_STATIC tags are set to YES EXTRACT_ALL = YES # If the EXTRACT_PRIVATE tag is set to YES all private members of a class # will be included in the documentation. EXTRACT_PRIVATE = YES # If the EXTRACT_STATIC tag is set to YES all static members of a file # will be included in the documentation. EXTRACT_STATIC = YES # If the HIDE_UNDOC_MEMBERS tag is set to YES, Doxygen will hide all # undocumented members of documented classes, files or namespaces. # If set to NO (the default) these members will be included in the # various overviews, but no documentation section is generated. # This option has no effect if EXTRACT_ALL is enabled. HIDE_UNDOC_MEMBERS = NO # If the HIDE_UNDOC_CLASSES tag is set to YES, Doxygen will hide all # undocumented classes that are normally visible in the class hierarchy. # If set to NO (the default) these class will be included in the various # overviews. This option has no effect if EXTRACT_ALL is enabled. HIDE_UNDOC_CLASSES = NO # If the BRIEF_MEMBER_DESC tag is set to YES (the default) Doxygen will # include brief member descriptions after the members that are listed in # the file and class documentation (similar to JavaDoc). # Set to NO to disable this. BRIEF_MEMBER_DESC = YES # If the REPEAT_BRIEF tag is set to YES (the default) Doxygen will prepend # the brief description of a member or function before the detailed description. # Note: if both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the # brief descriptions will be completely suppressed. REPEAT_BRIEF = YES # If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then # Doxygen will generate a detailed section even if there is only a brief # description. 
ALWAYS_DETAILED_SEC = YES # If the FULL_PATH_NAMES tag is set to YES then Doxygen will prepend the full # path before files name in the file list and in the header files. If set # to NO the shortest path that makes the file name unique will be used. FULL_PATH_NAMES = NO # If the FULL_PATH_NAMES tag is set to YES then the STRIP_FROM_PATH tag # can be used to strip a user defined part of the path. Stripping is # only done if one of the specified strings matches the left-hand part of # the path. It is allowed to use relative paths in the argument list. STRIP_FROM_PATH = # The INTERNAL_DOCS tag determines if documentation # that is typed after a \internal command is included. If the tag is set # to NO (the default) then the documentation will be excluded. # Set it to YES to include the internal documentation. INTERNAL_DOCS = NO # If the CLASS_DIAGRAMS tag is set to YES (the default) Doxygen will # generate a class diagram (in Html and LaTeX) for classes with base or # super classes. Setting the tag to NO turns the diagrams off. CLASS_DIAGRAMS = YES # If the SOURCE_BROWSER tag is set to YES then a list of source files will # be generated. Documented entities will be cross-referenced with these sources. SOURCE_BROWSER = YES # Setting the INLINE_SOURCES tag to YES will include the body # of functions and classes directly in the documentation. INLINE_SOURCES = YES # Setting the STRIP_CODE_COMMENTS tag to YES (the default) will instruct # doxygen to hide any special comment blocks from generated source code # fragments. Normal C and C++ comments will always remain visible. STRIP_CODE_COMMENTS = YES # If the CASE_SENSE_NAMES tag is set to NO (the default) then Doxygen # will only generate file names in lower case letters. If set to # YES upper case letters are also allowed. This is useful if you have # classes or files whose names only differ in case and if your file system # supports case sensitive file names. CASE_SENSE_NAMES = YES # If the HIDE_SCOPE_NAMES tag is set to NO (the default) then Doxygen # will show members with their full class and namespace scopes in the # documentation. If set to YES the scope will be hidden. HIDE_SCOPE_NAMES = NO # If the VERBATIM_HEADERS tag is set to YES (the default) then Doxygen # will generate a verbatim copy of the header file for each class for # which an include is specified. Set to NO to disable this. VERBATIM_HEADERS = YES # If the SHOW_INCLUDE_FILES tag is set to YES (the default) then Doxygen # will put list of the files that are included by a file in the documentation # of that file. SHOW_INCLUDE_FILES = YES # If the JAVADOC_AUTOBRIEF tag is set to YES (the default) then Doxygen # will interpret the first line (until the first dot) of a JavaDoc-style # comment as the brief description. If set to NO, the Javadoc-style will # behave just like the Qt-style comments. JAVADOC_AUTOBRIEF = YES # If the INHERIT_DOCS tag is set to YES (the default) then an undocumented # member inherits the documentation from any documented member that it # reimplements. INHERIT_DOCS = YES # If the INLINE_INFO tag is set to YES (the default) then a tag [inline] # is inserted in the documentation for inline members. INLINE_INFO = YES # If the SORT_MEMBER_DOCS tag is set to YES (the default) then doxygen # will sort the (detailed) documentation of file and class members # alphabetically by member name. If set to NO the members will appear in # declaration order. SORT_MEMBER_DOCS = YES # The TAB_SIZE tag can be used to set the number of spaces in a tab. 
# Doxygen uses this value to replace tabs by spaces in code fragments. TAB_SIZE = 4 # The ENABLED_SECTIONS tag can be used to enable conditional # documentation sections, marked by \if sectionname ... \endif. ENABLED_SECTIONS = #--------------------------------------------------------------------------- # configuration options related to warning and progress messages #--------------------------------------------------------------------------- # The QUIET tag can be used to turn on/off the messages that are generated # by doxygen. Possible values are YES and NO. If left blank NO is used. QUIET = YES # The WARNINGS tag can be used to turn on/off the warning messages that are # generated by doxygen. Possible values are YES and NO. If left blank # NO is used. WARNINGS = YES # If WARN_IF_UNDOCUMENTED is set to YES, then doxygen will generate warnings # for undocumented members. If EXTRACT_ALL is set to YES then this flag will # automatically be disabled. WARN_IF_UNDOCUMENTED = YES # The WARN_FORMAT tag determines the format of the warning messages that # doxygen can produce. The string should contain the $file, $line, and $text # tags, which will be replaced by the file and line number from which the # warning originated and the warning text. WARN_FORMAT = "$file:$line: $text" #--------------------------------------------------------------------------- # configuration options related to the input files #--------------------------------------------------------------------------- # The INPUT tag can be used to specify the files and/or directories that contain # documented source files. You may enter file names like "myfile.cpp" or # directories like "/usr/src/myproject". Separate the files or directories # with spaces. INPUT = error.cc error.h avi.cc avi.h riff.h riff.cc ieee1394io.cc ieee1394io.h \ frame.cc frame.h main.cc filehandler.cc filehandler.h affine.h \ raw1394util.c raw1394util.h dvgrab.cc dvgrab.h io.c io.h \ stringutils.h stringutils.cc smiltime.h smiltime.cc endian_types.h # If the value of the INPUT tag contains directories, you can use the # FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp # and *.h) to filter out the source-files in the directories. If left # blank all files are included. FILE_PATTERNS = # The RECURSIVE tag can be used to specify whether or not subdirectories # should be searched for input files as well. Possible values are YES and NO. # If left blank NO is used. RECURSIVE = NO # The EXCLUDE tag can be used to specify files and/or directories that should be # excluded from the INPUT source files. This way you can easily exclude a # subdirectory from a directory tree whose root is specified with the INPUT tag. EXCLUDE = # If the value of the INPUT tag contains directories, you can use the # EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude # certain files from those directories. EXCLUDE_PATTERNS = # The EXAMPLE_PATH tag can be used to specify one or more files or # directories that contain example code fragments that are included (see # the \include command). EXAMPLE_PATH = # If the value of the EXAMPLE_PATH tag contains directories, you can use the # EXAMPLE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp # and *.h) to filter out the source-files in the directories. If left # blank all files are included. EXAMPLE_PATTERNS = # The IMAGE_PATH tag can be used to specify one or more files or # directories that contain images that are included in the documentation (see # the \image command).
IMAGE_PATH = # The INPUT_FILTER tag can be used to specify a program that doxygen should # invoke to filter for each input file. Doxygen will invoke the filter program # by executing (via popen()) the command <filter> <input-file>, where <filter> # is the value of the INPUT_FILTER tag, and <input-file> is the name of an # input file. Doxygen will then use the output that the filter program writes # to standard output. INPUT_FILTER = #--------------------------------------------------------------------------- # configuration options related to the alphabetical class index #--------------------------------------------------------------------------- # If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index # of all compounds will be generated. Enable this if the project # contains a lot of classes, structs, unions or interfaces. ALPHABETICAL_INDEX = YES # If the alphabetical index is enabled (see ALPHABETICAL_INDEX) then # the COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns # in which this list will be split (can be a number in the range [1..20]) COLS_IN_ALPHA_INDEX = 5 # In case all classes in a project start with a common prefix, all # classes will be put under the same header in the alphabetical index. # The IGNORE_PREFIX tag can be used to specify one or more prefixes that # should be ignored while generating the index headers. IGNORE_PREFIX = #--------------------------------------------------------------------------- # configuration options related to the HTML output #--------------------------------------------------------------------------- # If the GENERATE_HTML tag is set to YES (the default) Doxygen will # generate HTML output. GENERATE_HTML = YES # The HTML_OUTPUT tag is used to specify where the HTML docs will be put. # If a relative path is entered the value of OUTPUT_DIRECTORY will be # put in front of it. If left blank `html' will be used as the default path. HTML_OUTPUT = html # The HTML_HEADER tag can be used to specify a personal HTML header for # each generated HTML page. If it is left blank doxygen will generate a # standard header. HTML_HEADER = # The HTML_FOOTER tag can be used to specify a personal HTML footer for # each generated HTML page. If it is left blank doxygen will generate a # standard footer. HTML_FOOTER = # The HTML_STYLESHEET tag can be used to specify a user defined cascading # style sheet that is used by each HTML page. It can be used to # fine-tune the look of the HTML output. If the tag is left blank doxygen # will generate a default style sheet. HTML_STYLESHEET = # If the HTML_ALIGN_MEMBERS tag is set to YES, the members of classes, # files or namespaces will be aligned in HTML using tables. If set to # NO a bullet list will be used. HTML_ALIGN_MEMBERS = YES # If the GENERATE_HTMLHELP tag is set to YES, additional index files # will be generated that can be used as input for tools like the # Microsoft HTML help workshop to generate a compressed HTML help file (.chm) # of the generated HTML documentation. GENERATE_HTMLHELP = NO #--------------------------------------------------------------------------- # configuration options related to the LaTeX output #--------------------------------------------------------------------------- # If the GENERATE_LATEX tag is set to YES (the default) Doxygen will # generate LaTeX output. GENERATE_LATEX = NO # The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. # If a relative path is entered the value of OUTPUT_DIRECTORY will be # put in front of it. If left blank `latex' will be used as the default path.
LATEX_OUTPUT = latex # If the COMPACT_LATEX tag is set to YES Doxygen generates more compact # LaTeX documents. This may be useful for small projects and may help to # save some trees in general. COMPACT_LATEX = NO # The PAPER_TYPE tag can be used to set the paper type that is used # by the printer. Possible values are: a4, a4wide, letter, legal and # executive. If left blank a4wide will be used. PAPER_TYPE = a4wide # The EXTRA_PACKAGES tag can be used to specify one or more names of LaTeX # packages that should be included in the LaTeX output. EXTRA_PACKAGES = # The LATEX_HEADER tag can be used to specify a personal LaTeX header for # the generated latex document. The header should contain everything until # the first chapter. If it is left blank doxygen will generate a # standard header. Notice: only use this tag if you know what you are doing! LATEX_HEADER = # If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated # is prepared for conversion to pdf (using ps2pdf). The pdf file will # contain links (just like the HTML output) instead of page references. # This makes the output suitable for online browsing using a pdf viewer. PDF_HYPERLINKS = NO # If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \\batchmode # command to the generated LaTeX files. This will instruct LaTeX to keep # running if errors occur, instead of asking the user for help. # This option is also used when generating formulas in HTML. LATEX_BATCHMODE = NO #--------------------------------------------------------------------------- # configuration options related to the RTF output #--------------------------------------------------------------------------- # If the GENERATE_RTF tag is set to YES Doxygen will generate RTF output. # For now this is experimental and is disabled by default. The RTF output # is optimised for Word 97 and may not look too pretty with other readers # or editors. GENERATE_RTF = YES # The RTF_OUTPUT tag is used to specify where the RTF docs will be put. # If a relative path is entered the value of OUTPUT_DIRECTORY will be # put in front of it. If left blank `rtf' will be used as the default path. RTF_OUTPUT = rtf # If the COMPACT_RTF tag is set to YES Doxygen generates more compact # RTF documents. This may be useful for small projects and may help to # save some trees in general. COMPACT_RTF = NO # If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated # will contain hyperlink fields. The RTF file will # contain links (just like the HTML output) instead of page references. # This makes the output suitable for online browsing using WORD or other # programs which support those fields. # Note: wordpad (write) and others do not support links. RTF_HYPERLINKS = YES # Load stylesheet definitions from file. Syntax is similar to doxygen's # config file, i.e. a series of assignments. You only have to provide # replacements, missing definitions are set to their default value. RTF_STYLESHEET_FILE = #--------------------------------------------------------------------------- # configuration options related to the man page output #--------------------------------------------------------------------------- # If the GENERATE_MAN tag is set to YES (the default) Doxygen will # generate man pages. GENERATE_MAN = NO # The MAN_OUTPUT tag is used to specify where the man pages will be put. # If a relative path is entered the value of OUTPUT_DIRECTORY will be # put in front of it. If left blank `man' will be used as the default path.
MAN_OUTPUT = man # The MAN_EXTENSION tag determines the extension that is added to # the generated man pages (default is the subroutine's section .3) MAN_EXTENSION = .3 #--------------------------------------------------------------------------- # Configuration options related to the preprocessor #--------------------------------------------------------------------------- # If the ENABLE_PREPROCESSING tag is set to YES (the default) Doxygen will # evaluate all C-preprocessor directives found in the sources and include # files. ENABLE_PREPROCESSING = YES # If the MACRO_EXPANSION tag is set to YES Doxygen will expand all macro # names in the source code. If set to NO (the default) only conditional # compilation will be performed. Macro expansion can be done in a controlled # way by setting EXPAND_ONLY_PREDEF to YES. MACRO_EXPANSION = NO # If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES # then the macro expansion is limited to the macros specified with the # PREDEFINED and EXPAND_AS_PREDEFINED tags. EXPAND_ONLY_PREDEF = NO # If the SEARCH_INCLUDES tag is set to YES (the default) the include files # in the INCLUDE_PATH (see below) will be searched if a #include is found. SEARCH_INCLUDES = YES # The INCLUDE_PATH tag can be used to specify one or more directories that # contain include files that are not input files but should be processed by # the preprocessor. INCLUDE_PATH = # You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard # patterns (like *.h and *.hpp) to filter out the header-files in the # directories. If left blank, the patterns specified with FILE_PATTERNS will # be used. INCLUDE_FILE_PATTERNS = # The PREDEFINED tag can be used to specify one or more macro names that # are defined before the preprocessor is started (similar to the -D option of # gcc). The argument of the tag is a list of macros of the form: name # or name=definition (no spaces). If the definition and the = are # omitted =1 is assumed. PREDEFINED = # If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then # this tag can be used to specify a list of macro names that should be expanded. # The macro definition that is found in the sources will be used. # Use the PREDEFINED tag if you want to use a different macro definition. EXPAND_AS_DEFINED = #--------------------------------------------------------------------------- # Configuration::additions related to external references #--------------------------------------------------------------------------- # The TAGFILES tag can be used to specify one or more tagfiles. TAGFILES = # When a file name is specified after GENERATE_TAGFILE, doxygen will create # a tag file that is based on the input files it reads. GENERATE_TAGFILE = # If the ALLEXTERNALS tag is set to YES all external classes will be listed # in the class index. If set to NO only the inherited external classes # will be listed. ALLEXTERNALS = NO # The PERL_PATH should be the absolute path and name of the perl script # interpreter (i.e. the result of `which perl'). PERL_PATH = /usr/bin/perl #--------------------------------------------------------------------------- # Configuration options related to the dot tool #--------------------------------------------------------------------------- # If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is # available from the path. This tool is part of Graphviz, a graph visualization # toolkit from AT&T and Lucent Bell Labs.
The other options in this section # have no effect if this option is set to NO (the default). HAVE_DOT = NO # If the CLASS_GRAPH and HAVE_DOT tags are set to YES then doxygen # will generate a graph for each documented class showing the direct and # indirect inheritance relations. Setting this tag to YES will force the # CLASS_DIAGRAMS tag to NO. CLASS_GRAPH = YES # If the COLLABORATION_GRAPH and HAVE_DOT tags are set to YES then doxygen # will generate a graph for each documented class showing the direct and # indirect implementation dependencies (inheritance, containment, and # class references variables) of the class with other documented classes. COLLABORATION_GRAPH = YES # If the ENABLE_PREPROCESSING, INCLUDE_GRAPH, and HAVE_DOT tags are set to # YES then doxygen will generate a graph for each documented file showing # the direct and indirect include dependencies of the file with other # documented files. INCLUDE_GRAPH = YES # If the ENABLE_PREPROCESSING, INCLUDED_BY_GRAPH, and HAVE_DOT tags are set to # YES then doxygen will generate a graph for each documented header file showing # the documented files that directly or indirectly include this file. INCLUDED_BY_GRAPH = YES # If the GRAPHICAL_HIERARCHY and HAVE_DOT tags are set to YES then doxygen # will generate a graphical hierarchy of all classes instead of a textual one. GRAPHICAL_HIERARCHY = YES # The tag DOT_PATH can be used to specify the path where the dot tool can be # found. If left blank, it is assumed the dot tool can be found on the path. DOT_PATH = # The MAX_DOT_GRAPH_WIDTH tag can be used to set the maximum allowed width # (in pixels) of the graphs generated by dot. If a graph becomes larger than # this value, doxygen will try to truncate the graph, so that it fits within # the specified constraint. Beware that most browsers cannot cope with very # large images. MAX_DOT_GRAPH_WIDTH = 1024 # The MAX_DOT_GRAPH_HEIGHT tag can be used to set the maximum allowed height # (in pixels) of the graphs generated by dot. If a graph becomes larger than # this value, doxygen will try to truncate the graph, so that it fits within # the specified constraint. Beware that most browsers cannot cope with very # large images. MAX_DOT_GRAPH_HEIGHT = 1024 #--------------------------------------------------------------------------- # Configuration::additions related to the search engine #--------------------------------------------------------------------------- # The SEARCHENGINE tag specifies whether or not a search engine should be # used. If set to NO the values of all tags below this one will be ignored. SEARCHENGINE = NO # The CGI_NAME tag should be the name of the CGI script that # starts the search engine (doxysearch) with the correct parameters. # A script with this name will be generated by doxygen. CGI_NAME = search.cgi # The CGI_URL tag should be the absolute URL to the directory where the # cgi binaries are located. See the documentation of your http daemon for # details. CGI_URL = # The DOC_URL tag should be the absolute URL to the directory where the # documentation is located. If left blank the absolute path to the # documentation, with file:// prepended to it, will be used. DOC_URL = # The DOC_ABSPATH tag should be the absolute path to the directory where the # documentation is located. If left blank the directory on the local machine # will be used. DOC_ABSPATH = # The BIN_ABSPATH tag must point to the directory where the doxysearch binary # is installed.
BIN_ABSPATH = /usr/local/bin/ # The EXT_DOC_PATHS tag can be used to specify one or more paths to # documentation generated for other projects. This allows doxysearch to search # the documentation for these projects as well. EXT_DOC_PATHS = dvgrab-3.5+git20160707.1.e46042e/iec13818-1.cc0000644000175000017500000003557212716434257015024 0ustar eses/* * iec13818-1.cc * Copyright (C) 2007 Dan Streetman * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ #include using std::string; #include #include "hdvframe.h" #include "iec13818-1.h" HDVPacket::HDVPacket( HDVFrame *f, HDVStreamParams *p ) : frame( f ), params( p ) { } HDVPacket::~HDVPacket() { } unsigned char HDVPacket::sync_byte() { return GetBits( 0, 8 ); } bool HDVPacket::transport_error_indicator() { return GetBits( 8, 1 ); } bool HDVPacket::payload_unit_start_indicator() { return GetBits( 9, 1 ); } bool HDVPacket::transport_priority() { return GetBits( 10, 1 ); } unsigned short HDVPacket::pid() { return GetBits( 11, 13 ); } unsigned char HDVPacket::transport_scrambling_control() { return GetBits( 24, 2 ); } unsigned char HDVPacket::adaptation_field_control() { return GetBits( 26, 2 ); } unsigned char HDVPacket::continuity_counter() { return GetBits( 28, 4 ); } unsigned char *HDVPacket::pointer_field() { // If this is a PSI packet, the payload is _after_ a "pointer_field"; if this is a // PES packet, there is no pointer_field. Only way to tell if it's PSI or PES // is keep track of what PIDs correspond to. Ick. Wish a damn bit had been used // instead. Or dump the "pointer_field" completely. // For simplicity, assume only the video and audio streams are PES. if ( is_video_packet() || is_audio_packet() ) return 0; else return payload_unit_start_indicator() ? 
&data[4] : 0; } bool HDVPacket::is_program_association_packet() { return !pid(); } PAT *HDVPacket::program_association_table() { if ( is_program_association_packet() ) return &pat; else return 0; } bool HDVPacket::is_program_map_packet() { return pid() && ( pid() == params->program_map_PID ); } PMT *HDVPacket::program_map_table() { if ( is_program_map_packet() ) return &pmt; else return 0; } bool HDVPacket::is_video_packet() { return pid() && ( pid() == params->video_stream_PID ); } bool HDVPacket::is_audio_packet() { return pid() && ( pid() == params->audio_stream_PID ); } bool HDVPacket::is_sony_private_a0_packet() { return pid() && ( pid() == params->sony_private_a0_PID ); } bool HDVPacket::is_sony_private_a1_packet() { return pid() && ( pid() == params->sony_private_a1_PID ); } bool HDVPacket::is_null_packet() { return pid() && ( pid() == PID_NULL_PACKET ); } SonyA1 *HDVPacket::GetSonyA1() { if ( is_sony_private_a1_packet() ) return &sonyA1; else return 0; } unsigned char *HDVPacket::adaptation_field() { int pos = 4; if ( pointer_field() ) pos += (*pointer_field()) + 1; switch ( adaptation_field_control() ) { case 0: return 0; /* reserved value; ignore this packet */ case 1: return 0; /* payload only */ case 2: return &data[pos]; /* adaptation field only */ case 3: return &data[pos]; /* adaptation field and payload */ } return 0; } unsigned char *HDVPacket::payload() { int pos = PayloadOffset(); if ( pos < 0 ) return 0; else return &data[pos]; } int HDVPacket::PayloadOffset() { int pos = 4; if ( pointer_field() ) pos += (*pointer_field()) + 1; switch ( adaptation_field_control() ) { case 0: return -1; /* reserved value; ignore this packet */ case 1: return pos; /* payload only */ case 2: return -1; /* adaptation field only */ case 3: return pos+(*adaptation_field()); /* adaptation field and payload */ } return -1; } int HDVPacket::PayloadLength() { if ( payload() ) return GetLength() - PayloadOffset(); else return 0; } void HDVPacket::SetData( unsigned char *d ) { data = d; if ( is_program_association_packet() ) pat.SetData( payload(), PayloadLength() ); if ( is_program_map_packet() ) pmt.SetData( payload(), PayloadLength() ); if ( is_sony_private_a1_packet() ) sonyA1.SetData( payload(), PayloadLength() ); } unsigned char HDVPacket::GetData( int pos ) { if ( pos < GetLength() ) return data[pos]; else return 0; } int HDVPacket::GetLength() { return HDV_PACKET_SIZE; } void HDVPacket::Dump() { bool b = d_hdv_packet || d_hdv_pid_check( pid() ); DEBUG_PARAMS( b, 1, 0, "HDVPacket FLAG=%02x ERR=%s,P_IND=%s,PRI=%s PID=%04x SCR=%d,ADAPT=%d,CONT=%02d : ", sync_byte(), transport_error_indicator() ? "T" : "F", payload_unit_start_indicator() ? "T" : "F", transport_priority() ? 
"T" : "F", pid(), transport_scrambling_control(), adaptation_field_control(), continuity_counter() ); DUMP_RAW_DATA( b, data, 4, HDV_PACKET_SIZE ); DEBUG_PARAMS( b, 0, 1, "" ); } ////////////// /// PAT packet PAT::PAT() : length( 0 ) { } PAT::~PAT() { } unsigned char PAT::table_id() { return GetBits( 0, 8 ); } bool PAT::section_syntax_indicator() { return GetBits( 8, 1 ); } unsigned short PAT::section_length() { return GetBits( 12, 12 ); } unsigned short PAT::transport_stream_id() { return GetBits( 24, 16 ); } unsigned char PAT::version_number() { return GetBits( 42, 5 ); } bool PAT::current_next_indicator() { return GetBits( 47, 1 ); } unsigned char PAT::section_number() { return GetBits( 48, 8 ); } unsigned char PAT::last_section_number() { return GetBits( 56, 8 ); } #define NUM_PROGRAMS ( ( section_length() - 9U ) / 4U ) int PAT::program_number( unsigned int n ) { if ( n < NUM_PROGRAMS ) return GetBits( 64 + ( n * 32 ), 16 ); else return 0; } int PAT::pid( unsigned int n ) { if ( n < NUM_PROGRAMS ) return GetBits( 83 + ( n * 32 ), 13 ); else return 0; } int PAT::network_PID() { for ( unsigned int n = 0; n < NUM_PROGRAMS; n++ ) { if ( program_number( n ) == 0 ) return pid( n ); } return -1; } int PAT::program_map_PID() { for ( unsigned int n = 0; n < NUM_PROGRAMS; n++ ) { if ( program_number( n ) != 0 ) return pid( n ); } return -1; } void PAT::SetData( unsigned char *d, int len ) { data = d; length = len; } unsigned char PAT::GetData( int pos ) { if ( pos < GetLength() ) return data[pos]; else return 0; } int PAT::GetLength() { return length; } void PAT::Dump() { DEBUG_PARAMS( d_hdv_pat, 1, 0, "PAT TID=%02x SYN=%s LEN=%04x TSID=%04x VER=%02x IND=%s NUM=%02x LAST=%02x :", table_id(), section_syntax_indicator() ? "T" : "F", section_length(), transport_stream_id(), version_number(), current_next_indicator() ? 
"T" : "F", section_number(), last_section_number() ); for ( unsigned int n = 0; n < NUM_PROGRAMS; n++ ) DEBUG_RAW( d_hdv_pat, " PROG_NUM=%04x,PID=%04x", program_number( n ), pid( n ) ); DEBUG_PARAMS( d_hdv_pat, 0, 1, "" ); } /////////////// /// PMT element PMT_element::PMT_element() : length( 0 ) { } PMT_element::~PMT_element() { } unsigned char PMT_element::stream_type() { return GetBits( 0, 8 ); } unsigned short PMT_element::elementary_PID() { return GetBits( 11, 13 ); } unsigned short PMT_element::ES_info_length() { return GetBits( 28, 12 ); } unsigned char *PMT_element::descriptor( unsigned int n ) { int start = 5; for ( unsigned int i = 0, j = 0; i < ES_info_length(); i += data[start+i+1]+2, j++ ) if ( j == n ) return &data[start+i]; return 0; } void PMT_element::SetData( unsigned char *d, int len ) { data = d; length = len; } unsigned char PMT_element::GetData( int pos ) { if ( pos < GetLength() ) return data[pos]; else return 0; } int PMT_element::GetLength() { return length; } void PMT_element::Dump() { DEBUG_RAW( d_hdv_pmt, "{TYPE=%02x PID=%04x LEN=%04x ", stream_type(), elementary_PID(), ES_info_length() ); unsigned char *desc; // For "Registration" descriptors, the registered format identifiers // are listed here: // http://www.smpte-ra.org/mpegreg/mpegreg.html for ( int i = 0; ( desc = descriptor( i ) ); i++ ) { DEBUG_RAW( d_hdv_pmt, "DESC=[" ); DUMP_RAW_DATA( d_hdv_pmt, desc, 0, desc[1]+2 ); DEBUG_RAW( d_hdv_pmt, "\b] " ); } DEBUG_RAW( d_hdv_pmt, "\b} " ); } ////////////// /// PMT packet PMT::PMT() : length( 0 ), nPmtElements( 0 ) { } PMT::~PMT() { } unsigned char PMT::table_id() { return GetBits( 0, 8 ); } bool PMT::section_syntax_indicator() { return GetBits( 8, 1 ); } unsigned short PMT::section_length() { return GetBits( 12, 12 ); } unsigned short PMT::program_number() { return GetBits( 24, 16 ); } unsigned char PMT::version_number() { return GetBits( 42, 5 ); } bool PMT::current_next_indicator() { return GetBits( 47, 1 ); } unsigned char PMT::section_number() { return GetBits( 48, 8 ); } unsigned char PMT::last_section_number() { return GetBits( 56, 8 ); } unsigned short PMT::PCR_PID() { return GetBits( 67, 13 ); } unsigned short PMT::program_info_length() { return GetBits( 84, 12 ); } unsigned char *PMT::descriptor( unsigned int n ) { int start = 12; for ( unsigned int i = 0, j = 0; i < program_info_length(); i += data[start+i+1]+2, j++ ) if ( j == n ) return &data[start+i]; return 0; } PMT_element *PMT::GetPmtElement( unsigned int n ) { if ( n < nPmtElements ) return &pmtElement[n]; else return 0; } void PMT::SetData( unsigned char *d, int len ) { data = d; length = len; CreatePmtElements(); } unsigned char PMT::GetData( int pos ) { if ( pos < GetLength() ) return data[pos]; else return 0; } int PMT::GetLength() { return length; } void PMT::CreatePmtElements() { int start = 12 + program_info_length(); int end = 3 + section_length() - 4; int i; nPmtElements = 0; for ( i = start; i < end && nPmtElements < MAX_PMT_ELEMENTS; nPmtElements++ ) { // Set the data with min len first, then reset with the actual len pmtElement[nPmtElements].SetData( &data[i], 5 ); int len = 5 + pmtElement[nPmtElements].ES_info_length(); pmtElement[nPmtElements].SetData( &data[i], len ); i += len; } } void PMT::Dump() { DEBUG_PARAMS( d_hdv_pmt, 1, 0, "PMT TID=%02x SYN=%s SECLEN=%04x PROG#=%04x VER#=%02x IND=%s SEC#=%02x LAST#=%02x PCRPID=%02x PROGLEN=%02x ", table_id(), section_syntax_indicator() ? "T" : "F", section_length(), program_number(), version_number(), current_next_indicator() ? 
"T" : "F", section_number(), last_section_number(), PCR_PID(), program_info_length() ); unsigned char *desc; for ( int i = 0; ( desc = descriptor( i ) ); i++ ) { DEBUG_RAW( d_hdv_pmt, "DESC=[" ); DUMP_RAW_DATA( d_hdv_pmt, desc, 0, desc[1]+2 ); DEBUG_RAW( d_hdv_pmt, "\b] " ); } PMT_element *elem; for ( int i = 0; ( elem = GetPmtElement( i ) ); i++ ) elem->Dump(); DEBUG_PARAMS( d_hdv_pmt, 0, 1, "" ); } ///////////// // PES Packet PES::PES() { Clear(); } PES::~PES() { } void PES::AddData( unsigned char *d, int l ) { if ( length + l < MAX_PES_SIZE ) { memcpy( &data[length], d, l ); length += l; } else { printf("PES data overflow!\n"); } } unsigned char *PES::GetBuffer() { return data; } unsigned char PES::GetData( int pos ) { if ( pos < GetLength() ) return data[pos]; else return 0; } int PES::GetLength() { return length; } void PES::Clear() { length = 0; packetDataOffset = -1; } unsigned int PES::packet_start_code_prefix() { return GetBits( 0, 24 ); } unsigned char PES::stream_id() { return GetBits( 24, 8 ); } unsigned short PES::PES_packet_length() { return GetBits( 32, 16 ); } unsigned char PES::PES_scrambling_control() { return IsHeaderPresent() ? GetBits( 50, 2 ) : 0; } bool PES::PES_priority() { return IsHeaderPresent() ? GetBits( 52, 1 ) : 0; } bool PES::data_alignment_indicator() { return IsHeaderPresent() ? GetBits( 53, 1 ) : 0; } bool PES::copyright() { return IsHeaderPresent() ? GetBits( 54, 1 ) : 0; } bool PES::original_or_copy() { return IsHeaderPresent() ? GetBits( 55, 1 ) : 0; } unsigned char PES::PTS_DTS_flags() { return IsHeaderPresent() ? GetBits( 56, 2 ) : 0; } bool PES::ESCR_flag() { return IsHeaderPresent() ? GetBits( 58, 1 ) : 0; } bool PES::ES_rate_flag() { return IsHeaderPresent() ? GetBits( 59, 1 ) : 0; } bool PES::DSM_trick_mode_flag() { return IsHeaderPresent() ? GetBits( 60, 1 ) : 0; } bool PES::additional_copy_info_flag() { return IsHeaderPresent() ? GetBits( 61, 1 ) : 0; } bool PES::PES_CRC_flag() { return IsHeaderPresent() ? GetBits( 62, 1 ) : 0; } bool PES::PES_extension_flag() { return IsHeaderPresent() ? GetBits( 63, 1 ) : 0; } unsigned char PES::PES_header_data_length() { return IsHeaderPresent() ? GetBits( 64, 8 ) : 0; } bool PES::IsHeaderPresent() { // Check the spec for the actual IDs, I'm too lazy to put them here. 
switch ( stream_id() ) { case 0xbc: case 0xbf: case 0xf0: case 0xf1: case 0xff: case 0xf2: case 0xf8: return false; default: return true; } } int PES::GetPacketDataOffset() { if ( packetDataOffset < 0 && GetLength() > 9 ) { if ( IsHeaderPresent() ) packetDataOffset = 9 + PES_header_data_length(); else packetDataOffset = 6; } return packetDataOffset; } unsigned char PES::PES_packet_data_byte( int n ) { return GetData( GetPacketDataOffset() + n ); } int PES::GetPacketDataLength() { return GetLength() - GetPacketDataOffset() - 1; } void PES::Dump() { DEBUG_PARAMS( d_hdv_pes, 1, 0, "PES START=%06x SID=%02x PES_LEN=%d : GetLength %d : data bytes", packet_start_code_prefix(), stream_id(), PES_packet_length(), GetLength() ); for ( int i = 0; i < 16; i++ ) DEBUG_RAW( d_hdv_pes, " 0x%02x", PES_packet_data_byte( i ) ); DEBUG_PARAMS( d_hdv_pes, 0, 1, "" ); } /////////////////// /// Sony Private A1 /// (not part of IEC13818-1 spec) SonyA1::SonyA1() : length( 0 ) { } SonyA1::~SonyA1() { } unsigned char SonyA1::year() { return BCD( GetBits(352, 8) ); } unsigned char SonyA1::month() { return BCD( GetBits(347, 5) ); } unsigned char SonyA1::day() { return BCD( GetBits(338, 6) ); } unsigned char SonyA1::hour() { return BCD( GetBits(386, 6) ); } unsigned char SonyA1::minute() { return BCD( GetBits(377, 7) ); } unsigned char SonyA1::second() { return BCD( GetBits(369, 7) ); } unsigned char SonyA1::timecode_hour() { return BCD( GetBits(322, 6) ); } unsigned char SonyA1::timecode_minute() { return BCD( GetBits(313, 7) ); } unsigned char SonyA1::timecode_second() { return BCD( GetBits(305, 7) ); } unsigned char SonyA1::timecode_frame() { return BCD( GetBits(298, 6) ); } bool SonyA1::scene_start() { return !GetBits(394, 1); } void SonyA1::SetData( unsigned char *d, int len ) { data = d; length = len; } unsigned char SonyA1::GetData( int pos ) { if ( pos < GetLength() ) return data[pos]; else return 0; } int SonyA1::GetLength() { return length; } void SonyA1::Dump() { DEBUG( d_hdv_sonya1, "Record date : %04d/%02d/%02d %02d:%02d:%02d Timecode : %02d:%02d:%02d.%02d Scene Start : %s", 2000 + year(), month(), day(), hour(), minute(), second(), timecode_hour(), timecode_minute(), timecode_second(), timecode_frame(), scene_start() ? "T" : "F" ); } dvgrab-3.5+git20160707.1.e46042e/smiltime.h0000644000175000017500000000621212716434257015173 0ustar eses/* * smiltime.h -- W3C SMIL2 Time value parser * Copyright (C) 2003-2008 Dan Dennedy * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
* */ #ifndef _SMILTIME_H #define _SMILTIME_H #include using namespace std; namespace SMIL { class Time { public: typedef enum { SMIL_TIME_INDEFINITE = 0, SMIL_TIME_OFFSET, SMIL_TIME_SYNC_BASED, // not implemented SMIL_TIME_EVENT_BASED, // not implemented SMIL_TIME_WALLCLOCK, // not implemented SMIL_TIME_MEDIA_MARKER, // not implemented SMIL_TIME_REPEAT, // not implemented SMIL_TIME_ACCESSKEY, //not implemented } TimeType; typedef enum { TIME_FORMAT_NONE, TIME_FORMAT_FRAMES, TIME_FORMAT_SMPTE, TIME_FORMAT_CLOCK, TIME_FORMAT_MS, TIME_FORMAT_S, TIME_FORMAT_MIN, TIME_FORMAT_H } TimeFormat; protected: // internally stored in milliseconds long timeValue; long offset; bool indefinite; bool resolved; bool syncbaseBegin; TimeType timeType; public: Time(); Time( long time ); Time( string time ); virtual ~Time() {} virtual void parseTimeValue( string time ); long getTimeValue() { return timeValue; } TimeType getTimeType() { return timeType; } long getOffset() { return offset; } bool operator< ( Time& time ); bool operator==( Time& time ); bool operator> ( Time& time ); bool isResolved() { return resolved; } long getResolvedOffset(); bool isNegative(); bool isIndefinite() { return indefinite; } virtual string toString( TimeFormat format = TIME_FORMAT_CLOCK ); virtual string serialise(); protected: long parseClockValue( string time ); private: void setTimeValue( Time& source ); }; class MediaClippingTime : public Time { public: typedef enum { SMIL_SUBFRAME_NONE, SMIL_SUBFRAME_0, SMIL_SUBFRAME_1 } SubframeType; MediaClippingTime(); MediaClippingTime( float framerate ); MediaClippingTime( string time, float framerate ); virtual ~MediaClippingTime() {} void setFramerate( float framerate ); virtual void parseValue( string time ); void parseSmpteValue( string time ); void parseSmpteNtscDropValue( string time ); virtual string toString( TimeFormat format = TIME_FORMAT_SMPTE ); virtual string serialise(); string parseValueToString( string time, TimeFormat format ); string parseFramesToString( int frames, TimeFormat format ); int getFrames(); private: float m_framerate; bool m_isSmpteValue; SubframeType m_subframe; }; string framesToSmpte( int frames, int fps ); } // namespace #endif dvgrab-3.5+git20160707.1.e46042e/ieee1394io.cc0000644000175000017500000006243112716434257015273 0ustar eses/* * ieee1394io.cc -- asynchronously grabbing DV data * Copyright (C) 2000 Arne Schirmacher * Copyright (C) 2003-2009 Dan Dennedy * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ /** \page The IEEE 1394 Reader Class This text explains how the IEEE 1394 Reader class works. The IEEE1394Reader object maintains a connection to a DV camcorder. It reads DV frames from the camcorder and stores them in a queue. The frames can then be retrieved from the buffer and displayed, stored, or processed in other ways. 
The IEEE1394Reader class supports asynchronous operation: it starts a separate thread, which reads as fast as possible from the ieee1394 interface card to make sure that no frames are lost. Since the buffer can be configured to hold many frames, no frames will be lost even if the disk access is temporarily slow. There are two queues available in an IEEE1394Reader object. One queue holds empty frames, the other holds frames filled with DV content just read from the interface. During operation the reader thread takes unused frames from the inFrames queue, fills them and places them in the outFrame queue. The program can then take frames from the outFrames queue, process them and finally put them back in the inFrames queue. */ #ifdef HAVE_CONFIG_H #include #endif #include #include #include using std::endl; #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "ieee1394io.h" #include "dvframe.h" #include "hdvframe.h" #include "error.h" /** Initializes the IEEE1394Reader object. The object is initialized with port and channel number. These parameters define the interface card and the iso channel on which the camcorder sends its data. The object contains a list of empty frames, which are allocated here. 50 frames (2 seconds) should be enough in most cases. \param c the iso channel number to use \param bufSize the number of frames to allocate for the frames buffer */ IEEE1394Reader::IEEE1394Reader( int c, int bufSize, bool hdv ) : droppedFrames( 0 ), badFrames( 0 ), currentFrame( NULL ), channel( c ), isRunning( false ), isHDV( hdv ) { Frame * frame; /* Create empty frames and put them in our inFrames queue */ for ( int i = 0; i < bufSize; ++i ) { if ( isHDV ) frame = new HDVFrame( &hdvStreamParams ); else frame = new DVFrame(); inFrames.push_back( frame ); } /* Initialize mutexes */ pthread_mutex_init( &mutex, NULL ); /* Initialise mutex and condition for action triggerring */ pthread_mutex_init( &condition_mutex, NULL ); pthread_cond_init( &condition, NULL ); } /** Destroys the IEEE1394Reader object. In particular, it deletes all frames in the inFrames and outFrames queues, as well as the one currently in use. Note that one or more frames may have been taken out of the queues by a user of the IEEE1394Reader class. */ IEEE1394Reader::~IEEE1394Reader() { Frame * frame; for ( int i = inFrames.size(); i > 0; --i ) { frame = inFrames[ 0 ]; inFrames.pop_front(); delete frame; } for ( int i = outFrames.size(); i > 0; --i ) { frame = outFrames[ 0 ]; outFrames.pop_front(); delete frame; } if ( currentFrame != NULL ) { delete currentFrame; currentFrame = NULL; } pthread_mutex_destroy( &condition_mutex ); pthread_cond_destroy( &condition ); } /** Fetches the next frame from the output queue The outFrames contains a list of frames to be processed (saved, displayed) by the user of this class. Copy the first frame (actually only a pointer to it) and remove it from the queue. \note If this returns NULL, wait some time (1/25 sec.) before calling it again. 
\return a pointer to the current frame, or NULL if no frames are in the queue */ Frame* IEEE1394Reader::GetFrame() { Frame * frame = NULL; pthread_mutex_lock( &mutex ); if ( outFrames.size() > 0 ) { frame = outFrames[ 0 ]; outFrames.pop_front(); } pthread_mutex_unlock( &mutex ); return frame; } /** Put back a frame to the queue of available frames */ void IEEE1394Reader::DoneWithFrame( Frame* frame ) { pthread_mutex_lock( &mutex ); inFrames.push_back( frame ); pthread_mutex_unlock( &mutex ); } /** Return the number of dropped frames since last call */ int IEEE1394Reader::GetDroppedFrames( void ) { pthread_mutex_lock( &mutex ); int n = droppedFrames; droppedFrames = 0; pthread_mutex_unlock( &mutex ); return n; } /** Return the number of incomplete frames since last call */ int IEEE1394Reader::GetBadFrames( void ) { pthread_mutex_lock( &mutex ); int n = badFrames; badFrames = 0; pthread_mutex_unlock( &mutex ); return n; } /** Throw away all currently available frames. All frames in the outFrames queue are put back to the inFrames queue. Also the currentFrame is put back too. */ void IEEE1394Reader::Flush() { Frame * frame = NULL; for ( int i = outFrames.size(); i > 0; --i ) { frame = outFrames[ 0 ]; outFrames.pop_front(); inFrames.push_back( frame ); } if ( currentFrame != NULL ) { inFrames.push_back( currentFrame ); currentFrame = NULL; } } bool IEEE1394Reader::WaitForAction( int seconds ) { pthread_mutex_lock( &mutex ); int size = outFrames.size( ); pthread_mutex_unlock( &mutex ); if ( size == 0 ) { pthread_mutex_lock( &condition_mutex ); if ( seconds == 0 ) { pthread_cond_wait( &condition, &condition_mutex ); pthread_mutex_unlock( &condition_mutex ); pthread_mutex_lock( &mutex ); size = outFrames.size( ); } else { struct timeval tp; struct timespec ts; int result; gettimeofday( &tp, NULL ); ts.tv_sec = tp.tv_sec + seconds; ts.tv_nsec = tp.tv_usec * 1000; result = pthread_cond_timedwait( &condition, &condition_mutex, &ts ); pthread_mutex_unlock( &condition_mutex ); pthread_mutex_lock( &mutex ); if ( result == ETIMEDOUT ) size = 0; else size = outFrames.size(); } pthread_mutex_unlock( &mutex ); } return size != 0; } void IEEE1394Reader::TriggerAction( ) { pthread_mutex_lock( &condition_mutex ); pthread_cond_signal( &condition ); pthread_mutex_unlock( &condition_mutex ); } /** Initializes the raw1394Reader object. The object is initialized with port and channel number. These parameters define the interface card and the iso channel on which the camcorder sends its data. \param p the number of the interface card to use \param c the iso channel number to use \param bufSize the number of frames to allocate for the frames buffer */ iec61883Reader::iec61883Reader( int p, int c, int bufSize, BusResetHandler resetHandler, BusResetHandlerData data, bool hdv ) : IEEE1394Reader( c, bufSize, hdv ), m_port( p ), m_resetHandler( resetHandler), m_resetHandlerData( data ) { m_handle = NULL; m_iec61883_mpeg2 = NULL; m_iec61883_dv = NULL; } iec61883Reader::~iec61883Reader() { Close(); } /** Start receiving DV frames The ieee1394 subsystem is initialized with the parameters provided to the constructor (port and channel). The received frames can be retrieved from the outFrames queue. 
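As a rough sketch of the intended usage (following the two-queue design described at the
top of this file), a caller typically drives a reader as below; the capture flag and the
frame-handling body are placeholders, not part of this class:

\code
iec61883Reader *reader = ...;                // constructed with port, channel, etc.
if ( reader->StartThread() )
{
	while ( capturing )                      // hypothetical loop condition
	{
		if ( !reader->WaitForAction( 1 ) )   // block up to one second for a frame
			continue;
		Frame *frame = reader->GetFrame();   // NULL if nothing is queued
		if ( frame )
		{
			// ... write, display, or otherwise process the frame ...
			reader->DoneWithFrame( frame );  // return it to the inFrames pool
		}
	}
	reader->StopThread();
}
\endcode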
*/ bool iec61883Reader::StartThread() { if ( isRunning ) return true; pthread_mutex_lock( &mutex ); currentFrame = NULL; if ( Open() && StartReceive() ) { isRunning = true; pthread_create( &thread, NULL, ThreadProxy, this ); pthread_mutex_unlock( &mutex ); return true; } else { Close(); pthread_mutex_unlock( &mutex ); return false; } } /** Stop the receiver thread. The receiver thread is being canceled. It will finish the next time it calls the pthread_testcancel() function. After it is canceled, we turn off iso receive and close the ieee1394 subsystem. We also remove all frames in the outFrames queue that have not been processed until now. */ void iec61883Reader::StopThread() { if ( isRunning ) { isRunning = false; pthread_join( thread, NULL ); StopReceive(); Close(); Flush(); TriggerAction( ); } } void iec61883Reader::ResetHandler( void ) { if ( m_resetHandler ) m_resetHandler( const_cast< void* >( m_resetHandlerData ) ); } int iec61883Reader::ResetHandlerProxy( raw1394handle_t handle, unsigned int generation ) { iec61883Reader *self = NULL; void *userdata = raw1394_get_userdata( handle ); if ( typeid( iec61883_mpeg2_t ) == typeid( userdata ) ) { iec61883_mpeg2_t mpeg2 = static_cast< iec61883_mpeg2_t >( userdata ); self = static_cast< iec61883Reader* >( iec61883_mpeg2_get_callback_data( mpeg2 ) ); } else if ( typeid( iec61883_dv_t ) == typeid( userdata ) ) { iec61883_dv_t dv = static_cast< iec61883_dv_t >( userdata ); iec61883_dv_fb_t dvfb = static_cast< iec61883_dv_fb_t >( iec61883_dv_get_callback_data( dv ) ); self = static_cast< iec61883Reader* >( iec61883_dv_fb_get_callback_data( dvfb ) ); } if ( self ) self->ResetHandler(); return 0; } /** Open the raw1394 interface \return success/failure */ bool iec61883Reader::Open() { bool success; assert( m_handle == 0 ); try { m_handle = raw1394_new_handle_on_port( m_port ); if ( m_handle == NULL ) return false; raw1394_set_bus_reset_handler( m_handle, this->ResetHandlerProxy ); if ( isHDV ) { m_iec61883_mpeg2 = iec61883_mpeg2_recv_init( m_handle, Mpeg2HandlerProxy, this ); success = ( m_iec61883_mpeg2 != NULL ); } else { m_iec61883_dv = iec61883_dv_fb_init( m_handle, DvHandlerProxy, this ); success = ( m_iec61883_dv != NULL ); } } catch ( string exc ) { Close(); sendEvent( exc.c_str() ); success = false; } return success; } /** Close the raw1394 interface */ void iec61883Reader::Close() { if ( m_iec61883_dv != NULL ) { iec61883_dv_fb_close( m_iec61883_dv ); m_iec61883_dv = NULL; } else if ( m_iec61883_mpeg2 != NULL ) { iec61883_mpeg2_close( m_iec61883_mpeg2 ); m_iec61883_mpeg2 = NULL; } if ( m_handle ) { raw1394_destroy_handle( m_handle ); m_handle = NULL; } } bool iec61883Reader::StartReceive() { bool success; /* Starting iso receive */ try { if ( isHDV ) fail_neg( iec61883_mpeg2_recv_start( m_iec61883_mpeg2, channel ) ); else fail_neg( iec61883_dv_fb_start( m_iec61883_dv, channel ) ); success = true; } catch ( string exc ) { sendEvent( exc.c_str() ); success = false; } return success; } void iec61883Reader::StopReceive() { if ( m_iec61883_dv != NULL ) { iec61883_dv_fb_stop( m_iec61883_dv ); } else if ( m_iec61883_mpeg2 != NULL ) { iec61883_mpeg2_recv_stop( m_iec61883_mpeg2 ); } } int iec61883Reader::Mpeg2HandlerProxy( unsigned char *data, int length, unsigned int dropped, void *callback_data ) { iec61883Reader *self = static_cast< iec61883Reader* >( callback_data ); return self->Handler( data, length, dropped ); } int iec61883Reader::DvHandlerProxy( unsigned char *data, int length, int complete, void *callback_data ) { iec61883Reader *self 
= static_cast< iec61883Reader* >( callback_data ); return self->Handler( data, length, !complete ); } int iec61883Reader::Handler( unsigned char *data, int length, int dropped ) { badFrames += dropped; if ( currentFrame == NULL ) { if ( inFrames.size() > 0 ) { pthread_mutex_lock( &mutex ); currentFrame = inFrames.front(); currentFrame->Clear(); inFrames.pop_front(); pthread_mutex_unlock( &mutex ); } else { droppedFrames++; return 0; } } memcpy( ¤tFrame->data[currentFrame->GetDataLen()], data, length ); currentFrame->AddDataLen( length ); if ( currentFrame->IsComplete( ) ) { pthread_mutex_lock( &mutex ); outFrames.push_back( currentFrame ); currentFrame = NULL; TriggerAction( ); pthread_mutex_unlock( &mutex ); } return 0; } /** The thread responsible for polling the raw1394 interface. Though this is an infinite loop, it can be canceled by StopThread, but only in the pthread_testcancel() function. */ void* iec61883Reader::ThreadProxy( void* arg ) { iec61883Reader* self = static_cast< iec61883Reader* >( arg ); return self->Thread(); } void* iec61883Reader::Thread() { struct pollfd raw1394_poll; int result; raw1394_poll.fd = raw1394_get_fd( m_handle ); raw1394_poll.events = POLLIN | POLLERR | POLLHUP | POLLPRI; while ( isRunning ) { while ( ( result = poll( &raw1394_poll, 1, 200 ) ) < 0 ) { if ( !( errno == EAGAIN || errno == EINTR ) ) { perror( "error: raw1394 poll" ); break; } } if ( result > 0 && ( ( raw1394_poll.revents & POLLIN ) || ( raw1394_poll.revents & POLLPRI ) ) ) result = raw1394_loop_iterate( m_handle ); } return NULL; } iec61883Connection::iec61883Connection( int port, int node ) : m_node( node | 0xffc0 ), m_channel( -1 ), m_bandwidth( 0 ), m_outputPort( -1 ), m_inputPort( -1 ) { m_handle = raw1394_new_handle_on_port( port ); if ( m_handle ) { m_channel = iec61883_cmp_connect( m_handle, m_node, &m_outputPort, raw1394_get_local_id( m_handle ), &m_inputPort, &m_bandwidth ); if ( m_channel < 0 ) m_channel = 63; } } iec61883Connection::~iec61883Connection( ) { if ( m_handle ) { iec61883_cmp_disconnect( m_handle, m_node, m_outputPort, raw1394_get_local_id (m_handle), m_inputPort, m_channel, m_bandwidth ); raw1394_destroy_handle( m_handle ); } } void iec61883Connection::CheckConsistency( int port, int node ) { raw1394handle_t handle = raw1394_new_handle_on_port( port ); if ( handle ) { iec61883_cmp_normalize_output( handle, 0xffc0 | node ); raw1394_destroy_handle( handle ); } } int iec61883Connection::Reconnect( void ) { return iec61883_cmp_reconnect( m_handle, m_node, &m_outputPort, raw1394_get_local_id( m_handle ), &m_inputPort, &m_bandwidth, m_channel ); } /** Initializes the AVC object. \param p the number of the interface card to use (port) */ AVC::AVC( int p ) : port( p ) { pthread_mutex_init( &avc_mutex, NULL ); avc_handle = NULL; int numcards; struct raw1394_portinfo pinf[ 16 ]; try { avc_handle = raw1394_new_handle(); if ( avc_handle == 0 ) return ; fail_neg( numcards = raw1394_get_port_info( avc_handle, pinf, 16 ) ); fail_neg( raw1394_set_port( avc_handle, port ) ); } catch ( string exc ) { if ( avc_handle != NULL ) raw1394_destroy_handle( avc_handle ); avc_handle = NULL; sendEvent( exc.c_str() ); } return ; } /** Destroys the AVC object. */ AVC::~AVC() { if ( avc_handle != NULL ) { pthread_mutex_lock( &avc_mutex ); raw1394_destroy_handle( avc_handle ); avc_handle = NULL; pthread_mutex_unlock( &avc_mutex ); } } /** See if a node_id is still valid and pointing to an AV/C Recorder. 
If the node_id is not valid, then look for the first AV/C device on the bus; \param phyID The node_id to check. \return The same node_id if valid, a new node_id if not valid and a another AV/C recorder exists, or -1 if not valid and no AV/C recorders exist. */ int AVC::isPhyIDValid( int phyID ) { int value = -1; pthread_mutex_lock( &avc_mutex ); if ( avc_handle != NULL ) { int currentNode, nodeCount; rom1394_directory rom1394_dir; nodeCount = raw1394_get_nodecount( avc_handle ); if ( phyID >= 0 && phyID < nodeCount ) { rom1394_get_directory( avc_handle, phyID, &rom1394_dir ); if ( rom1394_get_node_type( &rom1394_dir ) == ROM1394_NODE_TYPE_AVC ) { if ( avc1394_check_subunit_type( avc_handle, phyID, AVC1394_SUBUNIT_TYPE_VCR ) ) value = phyID; } rom1394_free_directory( &rom1394_dir ); } // look for a new AVC recorder for ( currentNode = 0; value == -1 && currentNode < nodeCount; currentNode++ ) { rom1394_get_directory( avc_handle, currentNode, &rom1394_dir ); if ( rom1394_get_node_type( &rom1394_dir ) == ROM1394_NODE_TYPE_AVC ) { if ( avc1394_check_subunit_type( avc_handle, currentNode, AVC1394_SUBUNIT_TYPE_VCR ) ) { // set Preferences to the newly found AVC node and return //octlet_t guid = rom1394_get_guid( avc_handle, currentNode ); //snprintf( Preferences::getInstance().avcGUID, 64, "%08x%08x", (quadlet_t) (guid>>32), //(quadlet_t) (guid & 0xffffffff) ); value = currentNode; } } rom1394_free_directory( &rom1394_dir ); } } pthread_mutex_unlock( &avc_mutex ); return value; } /** Do not do anything but let raw1394 make necessary callbacks (bus reset) */ void AVC::Noop( void ) { struct pollfd raw1394_poll; raw1394_poll.fd = raw1394_get_fd( avc_handle ); raw1394_poll.events = POLLIN | POLLPRI; raw1394_poll.revents = 0; if ( poll( &raw1394_poll, 1, 100 ) > 0 ) { if ( ( raw1394_poll.revents & POLLIN ) || ( raw1394_poll.revents & POLLPRI ) ) raw1394_loop_iterate( avc_handle ); } } int AVC::Play( int phyID ) { pthread_mutex_lock( &avc_mutex ); if ( avc_handle != NULL ) { if ( phyID >= 0 ) { if ( !avc1394_vcr_is_recording( avc_handle, phyID ) && avc1394_vcr_is_playing( avc_handle, phyID ) != AVC1394_VCR_OPERAND_PLAY_FORWARD ) avc1394_vcr_play( avc_handle, phyID ); } } pthread_mutex_unlock( &avc_mutex ); return 0; } int AVC::Pause( int phyID ) { pthread_mutex_lock( &avc_mutex ); if ( avc_handle != NULL ) { if ( phyID >= 0 ) { if ( !avc1394_vcr_is_recording( avc_handle, phyID ) && ( avc1394_vcr_is_playing( avc_handle, phyID ) != AVC1394_VCR_OPERAND_PLAY_FORWARD_PAUSE ) ) avc1394_vcr_pause( avc_handle, phyID ); } } struct timespec t = { 0, 250000000L }; nanosleep( &t, NULL ); pthread_mutex_unlock( &avc_mutex ); return 0; } int AVC::Stop( int phyID ) { pthread_mutex_lock( &avc_mutex ); if ( avc_handle != NULL ) { if ( phyID >= 0 ) avc1394_vcr_stop( avc_handle, phyID ); } struct timespec t = { 0, 250000000L }; nanosleep( &t, NULL ); pthread_mutex_unlock( &avc_mutex ); return 0; } int AVC::Rewind( int phyID ) { pthread_mutex_lock( &avc_mutex ); if ( avc_handle != NULL ) { if ( phyID >= 0 ) avc1394_vcr_rewind( avc_handle, phyID ); } pthread_mutex_unlock( &avc_mutex ); return 0; } int AVC::FastForward( int phyID ) { pthread_mutex_lock( &avc_mutex ); if ( avc_handle != NULL ) { if ( phyID >= 0 ) avc1394_vcr_forward( avc_handle, phyID ); } pthread_mutex_unlock( &avc_mutex ); return 0; } int AVC::Forward( int phyID ) { pthread_mutex_lock( &avc_mutex ); if ( avc_handle != NULL ) { if ( phyID >= 0 ) avc1394_vcr_next( avc_handle, phyID ); } pthread_mutex_unlock( &avc_mutex ); return 0; } int AVC::Back( int phyID 
) { pthread_mutex_lock( &avc_mutex ); if ( avc_handle != NULL ) { if ( phyID >= 0 ) avc1394_vcr_previous( avc_handle, phyID ); } pthread_mutex_unlock( &avc_mutex ); return 0; } int AVC::NextScene( int phyID ) { pthread_mutex_lock( &avc_mutex ); if ( avc_handle != NULL ) { if ( phyID >= 0 ) avc1394_vcr_next_index( avc_handle, phyID ); } pthread_mutex_unlock( &avc_mutex ); return 0; } int AVC::PreviousScene( int phyID ) { pthread_mutex_lock( &avc_mutex ); if ( avc_handle != NULL ) { if ( phyID >= 0 ) avc1394_vcr_previous_index( avc_handle, phyID ); } pthread_mutex_unlock( &avc_mutex ); return 0; } int AVC::Record( int phyID ) { pthread_mutex_lock( &avc_mutex ); if ( avc_handle != NULL ) { if ( phyID >= 0 ) avc1394_vcr_record( avc_handle, phyID ); } pthread_mutex_unlock( &avc_mutex ); return 0; } int AVC::Shuttle( int phyID, int speed ) { pthread_mutex_lock( &avc_mutex ); if ( avc_handle != NULL ) { if ( phyID >= 0 ) avc1394_vcr_trick_play( avc_handle, phyID, speed ); } pthread_mutex_unlock( &avc_mutex ); return 0; } unsigned int AVC::TransportStatus( int phyID ) { quadlet_t val = 0; pthread_mutex_lock( &avc_mutex ); if ( avc_handle != NULL ) { if ( phyID >= 0 ) val = avc1394_vcr_status( avc_handle, phyID ); } pthread_mutex_unlock( &avc_mutex ); return val; } bool AVC::Timecode( int phyID, char* timecode ) { bool result = false; pthread_mutex_lock( &avc_mutex ); if ( avc_handle != NULL ) { if ( phyID >= 0 ) { quadlet_t request[ 2 ]; quadlet_t *response; request[ 0 ] = AVC1394_CTYPE_STATUS | AVC1394_SUBUNIT_TYPE_TAPE_RECORDER | AVC1394_SUBUNIT_ID_0 | AVC1394_VCR_COMMAND_TIME_CODE | AVC1394_VCR_OPERAND_TIME_CODE_STATUS; request[ 1 ] = 0xFFFFFFFF; response = avc1394_transaction_block( avc_handle, phyID, request, 2, 1 ); if ( response && response[1] != 0xffffffff ) { sprintf( timecode, "%2.2x:%2.2x:%2.2x:%2.2x", response[ 1 ] & 0x000000ff, ( response[ 1 ] >> 8 ) & 0x000000ff, ( response[ 1 ] >> 16 ) & 0x000000ff, ( response[ 1 ] >> 24 ) & 0x000000ff ); result = true; } } } pthread_mutex_unlock( &avc_mutex ); return result; } int AVC::getNodeId( const char *guid ) { pthread_mutex_lock( &avc_mutex ); if ( avc_handle != NULL ) { for ( int currentNode = 0; currentNode < raw1394_get_nodecount( avc_handle ); currentNode++ ) { octlet_t currentGUID = rom1394_get_guid( avc_handle, currentNode ); char currentGUIDStr[ 65 ]; snprintf( currentGUIDStr, 64, "%08x%08x", ( quadlet_t ) ( currentGUID >> 32 ), ( quadlet_t ) ( currentGUID & 0xffffffff ) ); if ( strncmp( currentGUIDStr, guid, 64 ) == 0 ) { pthread_mutex_unlock( &avc_mutex ); return currentNode; } } pthread_mutex_unlock( &avc_mutex ); } return -1; } int AVC::Reverse( int phyID ) { pthread_mutex_lock( &avc_mutex ); if ( avc_handle != NULL ) { if ( phyID >= 0 ) avc1394_vcr_reverse( avc_handle, phyID ); } pthread_mutex_unlock( &avc_mutex ); return 0; } bool AVC::isHDV( int phyID ) const { int retry = 2; quadlet_t response = avc1394_transaction( avc_handle, phyID, AVC1394_CTYPE_STATUS | AVC1394_SUBUNIT_TYPE_TAPE_RECORDER | AVC1394_SUBUNIT_ID_0 | AVC1394_VCR_COMMAND_OUTPUT_SIGNAL_MODE | 0xFF, retry ); response = AVC1394_GET_OPERAND0( response ); // fprintf(stderr, "%s: 0x%x\n", __PRETTY_FUNCTION__, response); return ( response == 0x10 || response == 0x90 || response == 0x1A || response == 0x9A ); } /** Start receiving DV frames The received frames can be retrieved from the outFrames queue. 
*/ bool pipeReader::StartThread() { pthread_mutex_lock( &mutex ); currentFrame = NULL; pthread_create( &thread, NULL, ThreadProxy, this ); pthread_mutex_unlock( &mutex ); return true; } /** Stop the receiver thread. The receiver thread is being canceled. It will finish the next time it calls the pthread_testcancel() function. We also remove all frames in the outFrames queue that have not been processed yet. */ void pipeReader::StopThread() { pthread_cancel( thread ); pthread_join( thread, NULL ); Flush(); } bool pipeReader::Handler() { bool ret = true; pthread_mutex_lock( &mutex ); if ( currentFrame == NULL && inFrames.size() > 0 ) { currentFrame = inFrames.front(); currentFrame->Clear(); inFrames.pop_front(); //printf("reader < buf: buffer %d, output %d\n", inFrames.size(), outFrames.size()); //fflush(stdout); } pthread_mutex_unlock( &mutex ); if ( currentFrame != NULL ) { if ( isHDV ) { void *buf = &currentFrame->data[currentFrame->GetDataLen()]; if ( ret = ( fread( buf, IEC61883_MPEG2_TSP_SIZE, 1, file ) == 1 ) ) currentFrame->AddDataLen( IEC61883_MPEG2_TSP_SIZE ); else ((HDVFrame*)currentFrame)->SetComplete(); } else { if ( ret = ( fread( currentFrame->data, 120000, 1, file ) == 1 ) ) { currentFrame->SetDataLen( 120000 ); if ( currentFrame->data[ 3 ] & 0x80 ) if ( ret = ( fread( currentFrame->data + 120000, 24000, 1, file ) == 1 ) ) currentFrame->AddDataLen( 24000 ); } } if ( ( ret && currentFrame->IsComplete() ) || ( !ret && currentFrame->GetDataLen() > 0 ) ) { pthread_mutex_lock( &mutex ); outFrames.push_back( currentFrame ); currentFrame = NULL; TriggerAction( ); pthread_mutex_unlock( &mutex ); } } return ret; } /** The thread responsible for reading the input pipe. Though this is an infinite loop, it can be canceled by StopThread, but only in the pthread_testcancel() function. */ void* pipeReader::ThreadProxy( void* arg ) { pipeReader* self = static_cast< pipeReader* >( arg ); return self->Thread(); } void* pipeReader::Thread() { if ( strcmp( input_file, "-" ) == 0 ) file = stdin; else file = fopen( input_file, "rb" ); if ( ! file ) { sendEvent( "No input file" ); return NULL; } while ( true ) { if ( ! Handler() ) break; pthread_testcancel(); } if ( strcmp( input_file, "-" ) != 0 ) fclose( file ); sendEvent( "End of pipe" ); pthread_mutex_lock( &mutex ); if ( currentFrame ) outFrames.push_back( currentFrame ); currentFrame = NULL; outFrames.push_back( currentFrame ); TriggerAction( ); pthread_mutex_unlock( &mutex ); return NULL; } dvgrab-3.5+git20160707.1.e46042e/main.cc0000644000175000017500000000604412716434257014435 0ustar eses/* * main.cc -- A DV/1394 capture utility * Copyright (C) 2000-2002 Arne Schirmacher * Copyright (C) 2003-2008 Dan Dennedy * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/ /** the dvgrab main program contains the main logic */ #ifdef HAVE_CONFIG_H #include #endif // C++ includes #include #include using std::cout; using std::endl; // C includes #include #include #include #include #include #include #include #include #include #include #include // local includes #include "io.h" #include "dvgrab.h" #include "error.h" bool g_done = false; void signal_handler( int sig ) { g_done = true; } int rt_raisepri (int pri) { #ifdef _SC_PRIORITY_SCHEDULING struct sched_param scp; /* * Verify that scheduling is available */ if ( sysconf( _SC_PRIORITY_SCHEDULING ) == -1) { // sendEvent( "Warning: RR-scheduler not available, disabling." ); return -1; } else { memset( &scp, '\0', sizeof( scp ) ); scp.sched_priority = sched_get_priority_max( SCHED_RR ) - pri; if ( sched_setscheduler( 0, SCHED_RR, &scp ) < 0 ) { // sendEvent( "Warning: Cannot set RR-scheduler" ); return -1; } } return 0; #else return -1; #endif } int main( int argc, char *argv[] ) { int ret = 0; fcntl( fileno( stderr ), F_SETFL, O_NONBLOCK ); try { char c; DVgrab dvgrab( argc, argv ); signal( SIGINT, signal_handler ); signal( SIGTERM, signal_handler ); signal( SIGHUP, signal_handler ); signal( SIGPIPE, signal_handler ); if ( rt_raisepri( 1 ) != 0 ) setpriority( PRIO_PROCESS, 0, -20 ); #if _POSIX_MEMLOCK > 0 mlockall( MCL_CURRENT | MCL_FUTURE ); #endif if ( dvgrab.isInteractive() ) { term_init(); fprintf( stderr, "Going interactive. Press '?' for help.\n" ); while ( !g_done ) { dvgrab.status( ); if ( ( c = term_read() ) != -1 ) if ( !dvgrab.execute( c ) ) break; } term_exit(); } else { dvgrab.startCapture(); while ( !g_done ) if ( dvgrab.done() ) break; dvgrab.stopCapture(); } } catch ( std::string s ) { fprintf( stderr, "Error: %s\n", s.c_str() ); fflush( stderr ); ret = 1; } catch ( ... ) { fprintf( stderr, "Error: unknown\n" ); fflush( stderr ); ret = 1; } fprintf( stderr, "\n" ); return ret; } dvgrab-3.5+git20160707.1.e46042e/dvframe.h0000644000175000017500000000625212716434257015000 0ustar eses/* * dvframe.h -- utilities for process DV-format frames * Copyright (C) 2000 Arne Schirmacher * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ #ifndef _DVFRAME_H #define _DVFRAME_H 1 #include "frame.h" #ifdef HAVE_LIBDV #include #include #else #define DV_AUDIO_MAX_SAMPLES 1944 #ifndef CLAMP #define CLAMP(x, low, high) (((x) > (high)) ? (high) : (((x) < (low)) ? 
(low) : (x))) #endif #endif #define FRAME_MAX_WIDTH 720 #define FRAME_MAX_HEIGHT 576 typedef struct Pack { /// the five bytes of a packet unsigned char data[ 5 ]; } Pack; typedef struct AudioInfo { int frames; int frequency; int samples; int channels; int quantization; } AudioInfo; class VideoInfo { public: int width; int height; bool isPAL; TimeCode timeCode; struct tm recDate; VideoInfo(); // string GetTimeCodeString(); // string GetRecDateString(); } ; class DVFrame : public Frame { public: #ifdef HAVE_LIBDV dv_decoder_t *decoder; #endif int16_t *audio_buffers[ 4 ]; public: DVFrame(); ~DVFrame(); void SetDataLen( int len ); // Meta-data bool GetTimeCode( TimeCode &timeCode ); bool GetRecordingDate( struct tm &recDate ); bool IsNewRecording( void ); bool IsNormalSpeed( void ); bool IsComplete( void ); // Video info #ifdef HAVE_LIBDV int GetWidth(); int GetHeight(); #endif float GetFrameRate(); // HDV cd DV bool IsHDV(); bool GetSSYBPack( int packNum, Pack &pack ); bool GetVAUXPack( int packNum, Pack &pack ); bool GetAAUXPack( int packNum, Pack &pack ); bool GetAudioInfo( AudioInfo &info ); bool GetVideoInfo( VideoInfo &info ); int GetFrameSize( void ); bool IsPAL( void ); int ExtractAudio( void *sound ); void ExtractHeader( void ); #ifdef HAVE_LIBDV int ExtractAudio( int16_t **channels ); int ExtractRGB( void *rgb ); int ExtractPreviewRGB( void *rgb ); int ExtractYUV( void *yuv ); int ExtractPreviewYUV( void *yuv ); bool IsWide( void ); void SetRecordingDate( time_t *datetime, int frame ); void SetTimeCode( int frame ); void Deinterlace( void *image, int bpp ); #endif private: #ifndef HAVE_LIBDV /// flag for initializing the lookup maps once at startup static bool maps_initialized; /// lookup tables for collecting the shuffled audio data static int palmap_ch1[ 2000 ]; static int palmap_ch2[ 2000 ]; static int palmap_2ch1[ 2000 ]; static int palmap_2ch2[ 2000 ]; static int ntscmap_ch1[ 2000 ]; static int ntscmap_ch2[ 2000 ]; static int ntscmap_2ch1[ 2000 ]; static int ntscmap_2ch2[ 2000 ]; static short compmap[ 4096 ]; #endif }; #endif dvgrab-3.5+git20160707.1.e46042e/raw1394util.c0000644000175000017500000001055412716434257015357 0ustar eses/* * raw1394util.c -- libraw1394 utilities * Copyright (C) 2003-2008 Dan Dennedy * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. * */ #include #include #include #include #include #include #include #include "raw1394util.h" #define MOTDCT_SPEC_ID 0x00005068 /** Open the raw1394 device and get a handle. * * \param port A 0-based number indicating which host adapter to use. * \return a raw1394 handle. 
*/ int raw1394_get_num_ports( void ) { int n_ports; struct raw1394_portinfo pinf[ 16 ]; raw1394handle_t handle; /* get a raw1394 handle */ if ( !( handle = raw1394_new_handle() ) ) { fprintf( stderr, "raw1394 - failed to get handle: %s.\n", strerror( errno ) ); exit( EXIT_FAILURE ); } if ( ( n_ports = raw1394_get_port_info( handle, pinf, 16 ) ) < 0 ) { fprintf( stderr, "raw1394 - failed to get port info: %s.\n", strerror( errno ) ); raw1394_destroy_handle( handle ); exit( EXIT_FAILURE ); } raw1394_destroy_handle( handle ); return n_ports; } /** Open the raw1394 device and get a handle. * * \param port A 0-based number indicating which host adapter to use. * \return a raw1394 handle. */ raw1394handle_t raw1394_open( int port ) { int n_ports; struct raw1394_portinfo pinf[ 16 ]; raw1394handle_t handle; /* get a raw1394 handle */ #ifdef RAW1394_V_0_8 handle = raw1394_get_handle(); #else handle = raw1394_new_handle(); #endif if ( !handle ) { fprintf( stderr, "raw1394 - failed to get handle: %s.\n", strerror( errno ) ); exit( EXIT_FAILURE ); } if ( ( n_ports = raw1394_get_port_info( handle, pinf, 16 ) ) < 0 ) { fprintf( stderr, "raw1394 - failed to get port info: %s.\n", strerror( errno ) ); raw1394_destroy_handle( handle ); exit( EXIT_FAILURE ); } /* tell raw1394 which host adapter to use */ if ( raw1394_set_port( handle, port ) < 0 ) { fprintf( stderr, "raw1394 - failed to set set port: %s.\n", strerror( errno ) ); exit( EXIT_FAILURE ); } return handle; } void raw1394_close( raw1394handle_t handle ) { raw1394_destroy_handle( handle ); } int discoverAVC( int* port, octlet_t* guid ) { rom1394_directory rom_dir; raw1394handle_t handle; int device = -1; int i, j = 0; int m = raw1394_get_num_ports(); if ( *port >= 0 ) { /* search on explicit port */ j = *port; m = *port + 1; } for ( ; j < m && device == -1; j++ ) { handle = raw1394_open( j ); for ( i = 0; i < raw1394_get_nodecount( handle ); ++i ) { if ( *guid > 1 ) { /* select explicitly by GUID */ if ( *guid == rom1394_get_guid( handle, i ) ) { device = i; *port = j; break; } } else { /* select first AV/C Tape Reccorder Player node */ if ( rom1394_get_directory( handle, i, &rom_dir ) < 0 ) { rom1394_free_directory( &rom_dir ); fprintf( stderr, "error reading config rom directory for node %d\n", i ); continue; } if ( ( ( rom1394_get_node_type( &rom_dir ) == ROM1394_NODE_TYPE_AVC ) && avc1394_check_subunit_type( handle, i, AVC1394_SUBUNIT_TYPE_VCR ) ) || ( rom_dir.unit_spec_id == MOTDCT_SPEC_ID ) ) { rom1394_free_directory( &rom_dir ); octlet_t my_guid, *pguid = ( *guid == 1 )? guid : &my_guid; *pguid = rom1394_get_guid( handle, i ); fprintf( stderr, "Found AV/C device with GUID 0x%08x%08x\n", (quadlet_t) (*pguid>>32), (quadlet_t) (*pguid & 0xffffffff)); device = i; *port = j; break; } rom1394_free_directory( &rom_dir ); } } raw1394_close( handle ); } return device; } void reset_bus( int port ) { raw1394handle_t handle; handle = raw1394_open( port ); raw1394_reset_bus( handle ); raw1394_close( handle ); } dvgrab-3.5+git20160707.1.e46042e/io.h0000644000175000017500000000245012716434257013757 0ustar eses/* * io.h -- dv1394d client demo input/output * Copyright (C) 2002-2003 Ushodaya Enterprises Limited * Author: Charles Yates * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. 
* * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ #ifndef IO_H #define IO_H #ifdef __cplusplus extern "C" { #endif extern char *chomp( char * ); extern char *trim( char * ); extern char *strip_quotes( char * ); extern char *get_string( char *, int, char * ); extern int *get_int( int *, int ); extern void term_init( ); extern int term_read( ); extern void term_exit( ); extern char get_keypress( ); extern void wait_for_any_key( char * ); extern void beep( ); #ifdef __cplusplus } #endif #endif dvgrab-3.5+git20160707.1.e46042e/avi.cc0000644000175000017500000014243312716434257014273 0ustar eses/* * avi.cc library for AVI file format i/o * Copyright (C) 2000 - 2002 Arne Schirmacher * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ #ifdef HAVE_CONFIG_H #include #endif // C++ includes #include #include #include using std::cout; using std::hex; using std::dec; using std::setw; using std::setfill; using std::endl; // C includes #include #include #include #include #include // local includes #include "error.h" #include "riff.h" #include "avi.h" #define PADDING_SIZE (512) #define PADDING_1GB (0x40000000) #define IX00_INDEX_SIZE (4028) #define AVIF_HASINDEX 0x00000010 #define AVIF_MUSTUSEINDEX 0x00000020 #define AVIF_TRUSTCKTYPE 0x00000800 #define AVIF_ISINTERLEAVED 0x00000100 #define AVIF_WASCAPTUREFILE 0x00010000 #define AVIF_COPYRIGHTED 0x00020000 static char g_zeroes[ PADDING_SIZE ]; /** The constructor \todo mainHdr not initialized \todo add checking for NULL pointers */ AVIFile::AVIFile() : RIFFFile(), idx1( NULL ), file_list( -1 ), riff_list( -1 ), hdrl_list( -1 ), avih_chunk( -1 ), movi_list( -1 ), junk_chunk( -1 ), idx1_chunk( -1 ), index_type( -1 ), current_ix00( -1 ), odml_list( -1 ), dmlh_chunk( -1 ), isUpdateIdx1( true ) { // cerr << "0x" << hex << (long)this << dec << " AVIFile::AVIFile() : RIFFFile(), ..." 
<< endl; for ( int i = 0; i < 2; ++i ) { indx[ i ] = new AVISuperIndex; memset( indx[ i ], 0, sizeof( AVISuperIndex ) ); ix[ i ] = new AVIStdIndex; memset( ix[ i ], 0, sizeof( AVIStdIndex ) ); indx_chunk[ i ] = -1; ix_chunk[ i ] = -1; strl_list[ i ] = -1; strh_chunk[ i ] = -1; strf_chunk[ i ] = -1; } idx1 = new AVISimpleIndex; memset( idx1, 0, sizeof( AVISimpleIndex ) ); } /** The copy constructor \todo add checking for NULL pointers */ AVIFile::AVIFile( const AVIFile& avi ) : RIFFFile( avi ) { // cerr << "0x" << hex << (long)this << dec << " 0x" << hex << (long)&avi << dec << " AVIFile::AVIFile(const AVIFile& avi) : RIFFFile(avi)" << endl; mainHdr = avi.mainHdr; idx1 = new AVISimpleIndex; *idx1 = *avi.idx1; file_list = avi.file_list; riff_list = avi.riff_list; hdrl_list = avi.hdrl_list; avih_chunk = avi.avih_chunk; movi_list = avi.movi_list; junk_chunk = avi.junk_chunk; idx1_chunk = avi.idx1_chunk; for ( int i = 0; i < 2; ++i ) { indx[ i ] = new AVISuperIndex; *indx[ i ] = *avi.indx[ i ]; ix[ i ] = new AVIStdIndex; *ix[ i ] = *avi.ix[ i ]; indx_chunk[ i ] = avi.indx_chunk[ i ]; ix_chunk[ i ] = avi.ix_chunk[ i ]; strl_list[ i ] = avi.strl_list[ i ]; strh_chunk[ i ] = avi.strh_chunk[ i ]; strf_chunk[ i ] = avi.strf_chunk[ i ]; } index_type = avi.index_type; current_ix00 = avi.current_ix00; for ( int i = 0; i < 62; ++i ) dmlh[ i ] = avi.dmlh[ i ]; isUpdateIdx1 = avi.isUpdateIdx1; } /** The assignment operator */ AVIFile& AVIFile::operator=( const AVIFile& avi ) { // cerr << "0x" << hex << (long)this << dec << " 0x" << hex << (long)&avi << dec << " AVIFile& AVIFile::operator=(const AVIFile& avi)" << endl; if ( this != &avi ) { RIFFFile::operator=( avi ); mainHdr = avi.mainHdr; *idx1 = *avi.idx1; file_list = avi.file_list; riff_list = avi.riff_list; hdrl_list = avi.hdrl_list; avih_chunk = avi.avih_chunk; movi_list = avi.movi_list; junk_chunk = avi.junk_chunk; idx1_chunk = avi.idx1_chunk; for ( int i = 0; i < 2; ++i ) { *indx[ i ] = *avi.indx[ i ]; *ix[ i ] = *avi.ix[ i ]; indx_chunk[ i ] = avi.indx_chunk[ i ]; ix_chunk[ i ] = avi.ix_chunk[ i ]; strl_list[ i ] = avi.strl_list[ i ]; strh_chunk[ i ] = avi.strh_chunk[ i ]; strf_chunk[ i ] = avi.strf_chunk[ i ]; } index_type = avi.index_type; current_ix00 = avi.current_ix00; for ( int i = 0; i < 62; ++i ) dmlh[ i ] = avi.dmlh[ i ]; isUpdateIdx1 = avi.isUpdateIdx1; } return *this; } /** The destructor */ AVIFile::~AVIFile() { // cerr << "0x" << hex << (long)this << dec << " AVIFile::~AVIFile()" << endl; for ( int i = 0; i < 2; ++i ) { delete ix[ i ]; delete indx[ i ]; } delete idx1; } /** Initialize the AVI structure to its initial state, either for PAL or NTSC format Initialize the AVIFile attributes: mainHdr, indx, ix00, idx1 \todo consolidate AVIFile::Init, AVI1File::Init, AVI2File::Init. They are somewhat redundant. 
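A hypothetical call site, shown only to illustrate the parameters (the object name and argument values are invented; the AVI_PAL/AVI_NTSC format constants and the AVI_SMALL_INDEX/AVI_LARGE_INDEX flags are the ones used throughout this file):

    AVI2File avi;
    // PAL timing, 48 kHz audio, and both the old idx1 and the OpenDML index
    avi.Init( AVI_PAL, 48000, AVI_SMALL_INDEX | AVI_LARGE_INDEX );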
\param format pass AVI_PAL or AVI_NTSC \param sampleFrequency the sample frequency of the audio content \param indexType pass AVI_SMALL_INDEX or AVI_LARGE_INDEX */ void AVIFile::Init( int format, int sampleFrequency, int indexType ) { int i, j; assert( ( format == AVI_PAL ) || ( format == AVI_NTSC ) ); index_type = indexType; switch ( format ) { case AVI_PAL: mainHdr.dwMicroSecPerFrame = 40000; mainHdr.dwSuggestedBufferSize = 144008; break; case AVI_NTSC: mainHdr.dwMicroSecPerFrame = 33366; mainHdr.dwSuggestedBufferSize = 120008; break; default: /* no default allowed */ assert( 0 ); break; } /* Initialize the 'avih' chunk */ mainHdr.dwMaxBytesPerSec = 3600000 + sampleFrequency * 4; mainHdr.dwPaddingGranularity = PADDING_SIZE; mainHdr.dwFlags = AVIF_TRUSTCKTYPE; if ( indexType & AVI_SMALL_INDEX ) mainHdr.dwFlags |= AVIF_HASINDEX; mainHdr.dwTotalFrames = 0; mainHdr.dwInitialFrames = 0; mainHdr.dwStreams = 1; mainHdr.dwWidth = 0; mainHdr.dwHeight = 0; mainHdr.dwReserved[ 0 ] = 0; mainHdr.dwReserved[ 1 ] = 0; mainHdr.dwReserved[ 2 ] = 0; mainHdr.dwReserved[ 3 ] = 0; /* Initialize the 'idx1' chunk */ for ( int i = 0; i < 8000; ++i ) { idx1->aIndex[ i ].dwChunkId = 0; idx1->aIndex[ i ].dwFlags = 0; idx1->aIndex[ i ].dwOffset = 0; idx1->aIndex[ i ].dwSize = 0; } idx1->nEntriesInUse = 0; /* Initialize the 'indx' chunk */ for ( i = 0; i < 2; ++i ) { indx[ i ] ->wLongsPerEntry = 4; indx[ i ] ->bIndexSubType = 0; indx[ i ] ->bIndexType = KINO_AVI_INDEX_OF_INDEXES; indx[ i ] ->nEntriesInUse = 0; indx[ i ] ->dwReserved[ 0 ] = 0; indx[ i ] ->dwReserved[ 1 ] = 0; indx[ i ] ->dwReserved[ 2 ] = 0; for ( j = 0; j < 2014; ++j ) { indx[ i ] ->aIndex[ j ].qwOffset = 0; indx[ i ] ->aIndex[ j ].dwSize = 0; indx[ i ] ->aIndex[ j ].dwDuration = 0; } } /* The ix00 and ix01 chunk will be added dynamically in avi_write_frame as needed */ /* Initialize the 'dmlh' chunk. I have no clue what this means though */ for ( i = 0; i < 62; ++i ) dmlh[ i ] = 0; //dmlh[0] = -1; /* frame count + 1? */ } /** Find position and size of a given frame in the file Depending on which index is available, search one of them to find position and frame size \todo the size parameter is redundant. All frames have the same size, which is also in the mainHdr. 
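With the OpenDML large index the frame number is first mapped onto the right ix00 chunk by walking the super index and subtracting each entry's dwDuration; whatever remains then indexes into that standard index. A worked example with invented numbers: given super-index durations { 4028, 4028, 1000 }, frame 5000 falls into the second ix00 chunk at local entry 5000 - 4028 = 972.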
\todo all index related operations should be isolated \param offset the file offset to the start of the frame \param size the size of the frame \param the number of the frame we wish to find \return 0 if the frame could be found, -1 otherwise */ int AVIFile::GetFrameInfo( off_t &offset, int &size, int frameNum ) { switch ( index_type ) { case AVI_LARGE_INDEX: /* find relevant index in indx0 */ int i; for ( i = 0; frameNum >= indx[ 0 ] ->aIndex[ i ].dwDuration; frameNum -= indx[ 0 ] ->aIndex[ i ].dwDuration, ++i ) ; if ( i != current_ix00 ) { fail_if( lseek( fd, indx[ 0 ] ->aIndex[ i ].qwOffset + RIFF_HEADERSIZE, SEEK_SET ) == ( off_t ) - 1 ); fail_neg( read( fd, ix[ 0 ], indx[ 0 ] ->aIndex[ i ].dwSize - RIFF_HEADERSIZE ) ); current_ix00 = i; } if ( frameNum < ix[ 0 ] ->nEntriesInUse ) { offset = ix[ 0 ] ->qwBaseOffset + ix[ 0 ] ->aIndex[ frameNum ].dwOffset; size = ix[ 0 ] ->aIndex[ frameNum ].dwSize; return 0; } else return -1; break; case AVI_SMALL_INDEX: int index = -1; int frameNumIndex = 0; for ( int i = 0; i < idx1->nEntriesInUse; ++i ) { FOURCC chunkID1 = make_fourcc( "00dc" ); FOURCC chunkID2 = make_fourcc( "00db" ); if ( idx1->aIndex[ i ].dwChunkId == chunkID1 || idx1->aIndex[ i ].dwChunkId == chunkID2 ) { if ( frameNumIndex == frameNum ) { index = i; break; } ++frameNumIndex; } } if ( index != -1 ) { // compatibility check for broken dvgrab dv2 format if ( idx1->aIndex[ 0 ].dwOffset > GetDirectoryEntry( movi_list ).offset ) { offset = idx1->aIndex[ index ].dwOffset + RIFF_HEADERSIZE; } else { // new, correct dv2 format offset = idx1->aIndex[ index ].dwOffset + RIFF_HEADERSIZE + GetDirectoryEntry( movi_list ).offset; } size = idx1->aIndex[ index ].dwSize; return 0; } else return -1; break; } return -1; } /** Read in a frame \todo use 64 bit seek \todo we actually don't need the frame here, we could use just a void pointer \param frame a reference to the frame object that will receive the frame data \param frameNum the frame number to read \return 0 if the frame could be read, -1 otherwise */ int AVIFile::GetFrame( Frame *frame, int frameNum ) { off_t offset; int size; if ( GetFrameInfo( offset, size, frameNum ) != 0 ) return -1; fail_if( lseek( fd, offset, SEEK_SET ) == ( off_t ) - 1 ); fail_neg( read( fd, frame->data, size ) ); return 0; } int AVIFile::GetTotalFrames() { return mainHdr.dwTotalFrames; } /** prints out a directory entry in text form Every subclass of RIFFFile is supposed to override this function and to implement it for the entry types it knows about. For all other entry types it should call its parent::PrintDirectoryData. 
\todo use 64 bit routines \param entry the entry to print */ void AVIFile::PrintDirectoryEntryData( const RIFFDirEntry &entry ) { static FOURCC lastStreamType = make_fourcc( " " ); if ( entry.type == make_fourcc( "avih" ) ) { int i; MainAVIHeader main_avi_header; fail_if( lseek( fd, entry.offset, SEEK_SET ) == ( off_t ) - 1 ); fail_neg( read( fd, &main_avi_header, sizeof( MainAVIHeader ) ) ); cout << " dwMicroSecPerFrame: " << ( int ) main_avi_header.dwMicroSecPerFrame << endl << " dwMaxBytesPerSec: " << ( int ) main_avi_header.dwMaxBytesPerSec << endl << " dwPaddingGranularity: " << ( int ) main_avi_header.dwPaddingGranularity << endl << " dwFlags: " << ( int ) main_avi_header.dwFlags << endl << " dwTotalFrames: " << ( int ) main_avi_header.dwTotalFrames << endl << " dwInitialFrames: " << ( int ) main_avi_header.dwInitialFrames << endl << " dwStreams: " << ( int ) main_avi_header.dwStreams << endl << " dwSuggestedBufferSize: " << ( int ) main_avi_header.dwSuggestedBufferSize << endl << " dwWidth: " << ( int ) main_avi_header.dwWidth << endl << " dwHeight: " << ( int ) main_avi_header.dwHeight << endl; for ( i = 0; i < 4; ++i ) cout << " dwReserved[" << i << "]: " << ( int ) main_avi_header.dwReserved[ i ] << endl; } else if ( entry.type == make_fourcc( "strh" ) ) { AVIStreamHeader avi_stream_header; fail_if( lseek( fd, entry.offset, SEEK_SET ) == ( off_t ) - 1 ); fail_neg( read( fd, &avi_stream_header, sizeof( AVIStreamHeader ) ) ); lastStreamType = avi_stream_header.fccType; cout << " fccType: '" << ((char *)&avi_stream_header.fccType)[0] << ((char *)&avi_stream_header.fccType)[1] << ((char *)&avi_stream_header.fccType)[2] << ((char *)&avi_stream_header.fccType)[3] << '\'' << endl << " fccHandler: '" << ((char *)&avi_stream_header.fccHandler)[0] << ((char *)&avi_stream_header.fccHandler)[1] << ((char *)&avi_stream_header.fccHandler)[2] << ((char *)&avi_stream_header.fccHandler)[3] << '\'' << endl << " dwFlags: " << ( int ) avi_stream_header.dwFlags << endl << " wPriority: " << ( int ) avi_stream_header.wPriority << endl << " wLanguage: " << ( int ) avi_stream_header.wLanguage << endl << " dwInitialFrames: " << ( int ) avi_stream_header.dwInitialFrames << endl << " dwScale: " << ( int ) avi_stream_header.dwScale << endl << " dwRate: " << ( int ) avi_stream_header.dwRate << endl << " dwLength: " << ( int ) avi_stream_header.dwLength << endl << " dwQuality: " << ( int ) avi_stream_header.dwQuality << endl << " dwSampleSize: " << ( int ) avi_stream_header.dwSampleSize << endl; } else if ( entry.type == make_fourcc( "indx" ) ) { int i; AVISuperIndex avi_super_index; fail_if( lseek( fd, entry.offset, SEEK_SET ) == ( off_t ) - 1 ); fail_neg( read( fd, &avi_super_index, sizeof( AVISuperIndex ) ) ); cout << " wLongsPerEntry: " << ( int ) avi_super_index.wLongsPerEntry << endl << " bIndexSubType: " << ( int ) avi_super_index.bIndexSubType << endl << " bIndexType: " << ( int ) avi_super_index.bIndexType << endl << " nEntriesInUse: " << ( int ) avi_super_index.nEntriesInUse << endl << " dwChunkId: '" << ((char *)&avi_super_index.dwChunkId)[0] << ((char *)&avi_super_index.dwChunkId)[1] << ((char *)&avi_super_index.dwChunkId)[2] << ((char *)&avi_super_index.dwChunkId)[3] << '\'' << endl << " dwReserved[0]: " << ( int ) avi_super_index.dwReserved[ 0 ] << endl << " dwReserved[1]: " << ( int ) avi_super_index.dwReserved[ 1 ] << endl << " dwReserved[2]: " << ( int ) avi_super_index.dwReserved[ 2 ] << endl; for ( i = 0; i < avi_super_index.nEntriesInUse; ++i ) { cout << ' ' << setw( 4 ) << setfill( ' ' ) 
<< i << ": qwOffset : 0x" << setw( 12 ) << setfill( '0' ) << hex << avi_super_index.aIndex[ i ].qwOffset << endl << " dwSize : 0x" << setw( 8 ) << avi_super_index.aIndex[ i ].dwSize << endl << " dwDuration : " << dec << avi_super_index.aIndex[ i ].dwDuration << endl; } } else if ( entry.type == make_fourcc( "strf" ) ) { if ( lastStreamType == make_fourcc( "auds" ) ) { WAVEFORMATEX waveformatex; fail_if( lseek( fd, entry.offset, SEEK_SET ) == ( off_t ) - 1 ); fail_neg( read( fd, &waveformatex, sizeof( WAVEFORMATEX ) ) ); cout << " waveformatex.wFormatTag : " << waveformatex.wFormatTag << endl; cout << " waveformatex.nChannels : " << waveformatex.nChannels << endl; cout << " waveformatex.nSamplesPerSec : " << waveformatex.nSamplesPerSec << endl; cout << " waveformatex.nAvgBytesPerSec: " << waveformatex.nAvgBytesPerSec << endl; cout << " waveformatex.nBlockAlign : " << waveformatex.nBlockAlign << endl; cout << " waveformatex.wBitsPerSample : " << waveformatex.wBitsPerSample << endl; cout << " waveformatex.cbSize : " << waveformatex.cbSize << endl; } else if ( lastStreamType == make_fourcc( "vids" ) ) { BITMAPINFOHEADER bitmapinfo; fail_if( lseek( fd, entry.offset, SEEK_SET ) == ( off_t ) - 1 ); fail_neg( read( fd, &bitmapinfo, sizeof( BITMAPINFOHEADER ) ) ); cout << " bitmapinfo.biSize : " << bitmapinfo.biSize << endl; cout << " bitmapinfo.biWidth : " << bitmapinfo.biWidth << endl; cout << " bitmapinfo.biHeight : " << bitmapinfo.biHeight << endl; cout << " bitmapinfo.biPlanes : " << bitmapinfo.biPlanes << endl; cout << " bitmapinfo.biBitCount : " << bitmapinfo.biBitCount << endl; cout << " bitmapinfo.biCompression : " << bitmapinfo.biCompression << endl; cout << " bitmapinfo.biSizeImage : " << bitmapinfo.biSizeImage << endl; cout << " bitmapinfo.biXPelsPerMeter: " << bitmapinfo.biXPelsPerMeter << endl; cout << " bitmapinfo.biYPelsPerMeter: " << bitmapinfo.biYPelsPerMeter << endl; cout << " bitmapinfo.biClrUsed : " << bitmapinfo.biClrUsed << endl; cout << " bitmapinfo.biClrImportant : " << bitmapinfo.biClrImportant << endl; } else if ( lastStreamType == make_fourcc( "iavs" ) ) { DVINFO dvinfo; fail_if( lseek( fd, entry.offset, SEEK_SET ) == ( off_t ) - 1 ); fail_neg( read( fd, &dvinfo, sizeof( DVINFO ) ) ); cout << " dvinfo.dwDVAAuxSrc : 0x" << setw( 8 ) << setfill( '0' ) << hex << dvinfo.dwDVAAuxSrc << endl; cout << " dvinfo.dwDVAAuxCtl : 0x" << setw( 8 ) << setfill( '0' ) << hex << dvinfo.dwDVAAuxCtl << endl; cout << " dvinfo.dwDVAAuxSrc1: 0x" << setw( 8 ) << setfill( '0' ) << hex << dvinfo.dwDVAAuxSrc1 << endl; cout << " dvinfo.dwDVAAuxCtl1: 0x" << setw( 8 ) << setfill( '0' ) << hex << dvinfo.dwDVAAuxCtl1 << endl; cout << " dvinfo.dwDVVAuxSrc : 0x" << setw( 8 ) << setfill( '0' ) << hex << dvinfo.dwDVVAuxSrc << endl; cout << " dvinfo.dwDVVAuxCtl : 0x" << setw( 8 ) << setfill( '0' ) << hex << dvinfo.dwDVVAuxCtl << endl; } } /* This is the Standard Index. It is an array of offsets and sizes relative to some start offset. 
*/ else if ( ( entry.type == make_fourcc( "ix00" ) ) || ( entry.type == make_fourcc( "ix01" ) ) ) { int i; AVIStdIndex avi_std_index; fail_if( lseek( fd, entry.offset, SEEK_SET ) == ( off_t ) - 1 ); fail_neg( read( fd, &avi_std_index, sizeof( AVIStdIndex ) ) ); cout << " wLongsPerEntry: " << ( int ) avi_std_index.wLongsPerEntry << endl << " bIndexSubType: " << ( int ) avi_std_index.bIndexSubType << endl << " bIndexType: " << ( int ) avi_std_index.bIndexType << endl << " nEntriesInUse: " << ( int ) avi_std_index.nEntriesInUse << endl << " dwChunkId: '" << ((char *)&avi_std_index.dwChunkId)[0] << ((char *)&avi_std_index.dwChunkId)[1] << ((char *)&avi_std_index.dwChunkId)[2] << ((char *)&avi_std_index.dwChunkId)[3] << '\'' << endl << " qwBaseOffset: 0x" << setw( 12 ) << hex << avi_std_index.qwBaseOffset << endl << " dwReserved: " << dec << ( int ) avi_std_index.dwReserved << endl; for ( i = 0; i < avi_std_index.nEntriesInUse; ++i ) { cout << ' ' << setw( 4 ) << setfill( ' ' ) << i << ": dwOffset : 0x" << setw( 8 ) << setfill( '0' ) << hex << avi_std_index.aIndex[ i ].dwOffset << " (0x" << setw( 12 ) << avi_std_index.qwBaseOffset + avi_std_index.aIndex[ i ].dwOffset << ')' << endl << " dwSize : 0x" << setw( 8 ) << avi_std_index.aIndex[ i ].dwSize << dec << endl; } } else if ( entry.type == make_fourcc( "idx1" ) ) { int i; int numEntries = entry.length / sizeof( int ) / 4; DWORD *idx1 = new DWORD[ numEntries * 4 ]; // FOURCC movi_list = FindDirectoryEntry(make_fourcc("movi")); fail_if( lseek( fd, entry.offset, SEEK_SET ) == ( off_t ) - 1 ); fail_neg( read( fd, idx1, entry.length ) ); for ( i = 0; i < numEntries; ++i ) { cout << ' ' << setw( 4 ) << setfill( ' ' ) << i << setfill( '0' ) << ": dwChunkId : '" << ((char *)&idx1[ i * 4 + 0 ])[0] << ((char *)&idx1[ i * 4 + 0 ])[1] << ((char *)&idx1[ i * 4 + 0 ])[2] << ((char *)&idx1[ i * 4 + 0 ])[3] << '\'' << endl << " dwType : 0x" << setw( 8 ) << hex << idx1[ i * 4 + 1 ] << endl << " dwOffset : 0x" << setw( 8 ) << idx1[ i * 4 + 2 ] << endl // << " (0x" << setw(8) << idx1[i * 4 + 2] + GetDirectoryEntry(movi_list).offset << ')' << endl << " dwSize : 0x" << setw( 8 ) << idx1[ i * 4 + 3 ] << dec << endl; } delete[] idx1; } else if ( entry.type == make_fourcc( "dmlh" ) ) { int i; int numEntries = entry.length / sizeof( int ); DWORD *dmlh = new DWORD[ numEntries ]; fail_if( lseek( fd, entry.offset, SEEK_SET ) == ( off_t ) - 1 ); fail_neg( read( fd, dmlh, entry.length ) ); for ( i = 0; i < numEntries; ++i ) { cout << ' ' << setw( 4 ) << setfill( ' ' ) << i << setfill( '0' ) << ": " << " dwTotalFrames: 0x" << setw( 8 ) << hex << dmlh[ i ] << " (" << dec << dmlh[ i ] << ")" << endl; } delete[] dmlh; } } /** If this is not a movi list, read its contents */ void AVIFile::ParseList( int parent ) { FOURCC type; FOURCC name; DWORD length; int list; off_t pos; off_t listEnd; /* Read in the chunk header (type and length). */ fail_neg( read( fd, &type, sizeof( type ) ) ); fail_neg( read( fd, &length, sizeof( length ) ) ); if ( length & 1 ) length++; /* The contents of the list starts here. Obtain its offset. The list name (4 bytes) is already part of the contents). */ pos = lseek( fd, 0, SEEK_CUR ); fail_if( pos == ( off_t ) - 1 ); fail_neg( read( fd, &name, sizeof( name ) ) ); /* if we encounter a movi list, do not read it. It takes too much time and we don't need it anyway. */ if ( name != make_fourcc( "movi" ) ) { // if (1) { /* Add an entry for this list. 
*/ list = AddDirectoryEntry( type, name, sizeof( name ), parent ); /* Read in any chunks contained in this list. This list is the parent for all chunks it contains. */ listEnd = pos + length; while ( pos < listEnd ) { ParseChunk( list ); pos = lseek( fd, 0, SEEK_CUR ); fail_if( pos == ( off_t ) - 1 ); } } else { /* Add an entry for this list. */ movi_list = AddDirectoryEntry( type, name, length, parent ); pos = lseek( fd, length - 4, SEEK_CUR ); fail_if( pos == ( off_t ) - 1 ); } } void AVIFile::ParseRIFF() { RIFFFile::ParseRIFF(); avih_chunk = FindDirectoryEntry( make_fourcc( "avih" ) ); if ( avih_chunk != -1 ) ReadChunk( avih_chunk, ( void* ) & mainHdr ); } void AVIFile::ReadIndex() { indx_chunk[ 0 ] = FindDirectoryEntry( make_fourcc( "indx" ) ); if ( indx_chunk[ 0 ] != -1 ) { ReadChunk( indx_chunk[ 0 ], ( void* ) indx[ 0 ] ); index_type = AVI_LARGE_INDEX; /* recalc number of frames from each index */ mainHdr.dwTotalFrames = 0; for ( int i = 0; i < indx[ 0 ] ->nEntriesInUse; mainHdr.dwTotalFrames += indx[ 0 ] ->aIndex[ i++ ].dwDuration ) ; return ; } idx1_chunk = FindDirectoryEntry( make_fourcc( "idx1" ) ); if ( idx1_chunk != -1 ) { ReadChunk( idx1_chunk, ( void* ) idx1 ); idx1->nEntriesInUse = GetDirectoryEntry( idx1_chunk ).length / 16; index_type = AVI_SMALL_INDEX; /* recalc number of frames from the simple index */ int frameNumIndex = 0; FOURCC chunkID1 = make_fourcc( "00dc" ); FOURCC chunkID2 = make_fourcc( "00db" ); for ( int i = 0; i < idx1->nEntriesInUse; ++i ) { if ( idx1->aIndex[ i ].dwChunkId == chunkID1 || idx1->aIndex[ i ].dwChunkId == chunkID2 ) { ++frameNumIndex; } } mainHdr.dwTotalFrames = frameNumIndex; return ; } } void AVIFile::FlushIndx( int stream ) { FOURCC type; FOURCC name; off_t length; off_t offset; int parent; int i; /* Write out the previous index. When this function is entered for the first time, there is no index to write. Note: this may be an expensive operation because of a time consuming seek to the former file position. */ if ( ix_chunk[ stream ] != -1 ) WriteChunk( ix_chunk[ stream ], ix[ stream ] ); /* make a new ix chunk. */ if ( stream == 0 ) type = make_fourcc( "ix00" ); else type = make_fourcc( "ix01" ); ix_chunk[ stream ] = AddDirectoryEntry( type, 0, sizeof( AVIStdIndex ), movi_list ); GetDirectoryEntry( ix_chunk[ stream ], type, name, length, offset, parent ); /* fill out all required fields. The offsets in the array are relative to qwBaseOffset, so fill in the offset to the next free location in the file there. */ ix[ stream ] ->wLongsPerEntry = 2; ix[ stream ] ->bIndexSubType = 0; ix[ stream ] ->bIndexType = KINO_AVI_INDEX_OF_CHUNKS; ix[ stream ] ->nEntriesInUse = 0; ix[ stream ] ->dwChunkId = indx[ stream ] ->dwChunkId; ix[ stream ] ->qwBaseOffset = offset + length; ix[ stream ] ->dwReserved = 0; for ( i = 0; i < IX00_INDEX_SIZE; ++i ) { ix[ stream ] ->aIndex[ i ].dwOffset = 0; ix[ stream ] ->aIndex[ i ].dwSize = 0; } /* add a reference to this new index in our super index. */ i = indx[ stream ] ->nEntriesInUse++; indx[ stream ] ->aIndex[ i ].qwOffset = offset - RIFF_HEADERSIZE; indx[ stream ] ->aIndex[ i ].dwSize = length + RIFF_HEADERSIZE; indx[ stream ] ->aIndex[ i ].dwDuration = 0; } void AVIFile::UpdateIndx( int stream, int chunk, int duration ) { FOURCC type; FOURCC name; off_t length; off_t offset; int parent; int i; /* update the appropiate entry in the super index. It reflects the number of frames in the referenced index. 
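For the video stream this is called once per frame with a duration of one, so the newest super-index entry always counts the frames written into the ix00 chunk it points at. As a concrete example, the third frame written after a FlushIndx() becomes aIndex[ 2 ] = { offset - qwBaseOffset, size } in the standard index while that super-index entry's dwDuration climbs to 3.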
*/ i = indx[ stream ] ->nEntriesInUse - 1; indx[ stream ] ->aIndex[ i ].dwDuration += duration; /* update the standard index. Calculate the file position of the new frame. */ GetDirectoryEntry( chunk, type, name, length, offset, parent ); indx[ stream ] ->dwChunkId = type; i = ix[ stream ] ->nEntriesInUse++; ix[ stream ] ->aIndex[ i ].dwOffset = offset - ix[ stream ] ->qwBaseOffset; ix[ stream ] ->aIndex[ i ].dwSize = length; } void AVIFile::UpdateIdx1( int chunk, int flags ) { if ( idx1->nEntriesInUse < 20000 ) { FOURCC type; FOURCC name; off_t length; off_t offset; int parent; GetDirectoryEntry( chunk, type, name, length, offset, parent ); idx1->aIndex[ idx1->nEntriesInUse ].dwChunkId = type; idx1->aIndex[ idx1->nEntriesInUse ].dwFlags = flags; idx1->aIndex[ idx1->nEntriesInUse ].dwOffset = offset - GetDirectoryEntry( movi_list ).offset - RIFF_HEADERSIZE; idx1->aIndex[ idx1->nEntriesInUse ].dwSize = length; idx1->nEntriesInUse++; } } bool AVIFile::verifyStreamFormat( FOURCC type ) { int i, j = 0; AVIStreamHeader avi_stream_header; BITMAPINFOHEADER bih; FOURCC strh = make_fourcc( "strh" ); FOURCC strf = make_fourcc( "strf" ); while ( ( i = FindDirectoryEntry( strh, j++ ) ) != -1 ) { ReadChunk( i, ( void* ) & avi_stream_header ); if ( avi_stream_header.fccHandler == type ) return true; } j = 0; while ( ( i = FindDirectoryEntry( strf, j++ ) ) != -1 ) { ReadChunk( i, ( void* ) & bih ); if ( ( FOURCC ) bih.biCompression == type ) return true; } return false; } bool AVIFile::verifyStream( FOURCC type ) { int i, j = 0; AVIStreamHeader avi_stream_header; FOURCC strh = make_fourcc( "strh" ); while ( ( i = FindDirectoryEntry( strh, j++ ) ) != -1 ) { ReadChunk( i, ( void* ) & avi_stream_header ); if ( avi_stream_header.fccType == type ) return true; } return false; } bool AVIFile::isOpenDML( void ) { int i, j = 0; FOURCC dmlh = make_fourcc( "dmlh" ); while ( ( i = FindDirectoryEntry( dmlh, j++ ) ) != -1 ) { return true; } return false; } AVI1File::AVI1File() : AVIFile() {} AVI1File::~AVI1File() {} /* Initialize the AVI structure to its initial state, either for PAL or NTSC format */ void AVI1File::Init( int format, int sampleFrequency, int indexType ) { int num_blocks; FOURCC type; FOURCC name; off_t length; off_t offset; int parent; assert( ( format == AVI_PAL ) || ( format == AVI_NTSC ) ); AVIFile::Init( format, sampleFrequency, indexType ); switch ( format ) { case AVI_PAL: mainHdr.dwWidth = 720; mainHdr.dwHeight = 576; streamHdr[ 0 ].dwScale = 1; streamHdr[ 0 ].dwRate = 25; streamHdr[ 0 ].dwSuggestedBufferSize = 144008; /* initialize the 'strf' chunk */ /* Meaning of the DV stream format chunk per Microsoft dwDVAAuxSrc Specifies the Audio Auxiliary Data Source Pack for the first audio block (first 5 DV DIF sequences for 525-60 systems or 6 DV DIF sequences for 625-50 systems) of a frame. A DIF sequence is a data block that contains 150 DIF blocks. A DIF block consists of 80 bytes. The Audio Auxiliary Data Source Pack is defined in section D.7.1 of Part 2, Annex D, "The Pack Header Table and Contents of Packs" of the Specification of Consumer-use Digital VCRs. dwDVAAuxCtl Specifies the Audio Auxiliary Data Source Control Pack for the first audio block of a frame. The Audio Auxiliary Data Control Pack is defined in section D.7.2 of Part 2, Annex D, "The Pack Header Table and Contents of Packs" of the Specification of Consumer-use Digital VCRs. 
dwDVAAuxSrc1 Specifies the Audio Auxiliary Data Source Pack for the second audio block (second 5 DV DIF sequences for 525-60 systems or 6 DV DIF sequences for 625-50 systems) of a frame. dwDVAAuxCtl1 Specifies the Audio Auxiliary Data Source Control Pack for the second audio block of a frame. dwDVVAuxSrc Specifies the Video Auxiliary Data Source Pack as defined in section D.8.1 of Part 2, Annex D, "The Pack Header Table and Contents of Packs" of the Specification of Consumer-use Digital VCRs. dwDVVAuxCtl Specifies the Video Auxiliary Data Source Control Pack as defined in section D.8.2 of Part 2, Annex D, "The Pack Header Table and Contents of Packs" of the Specification of Consumer-use Digital VCRs. dwDVReserved[2] Reserved. Set this array to zero. */ dvinfo.dwDVAAuxSrc = 0xd1e030d0; dvinfo.dwDVAAuxCtl = 0xffa0cf3f; dvinfo.dwDVAAuxSrc1 = 0xd1e03fd0; dvinfo.dwDVAAuxCtl1 = 0xffa0cf3f; dvinfo.dwDVVAuxSrc = 0xff20ffff; dvinfo.dwDVVAuxCtl = 0xfffdc83f; dvinfo.dwDVReserved[ 0 ] = 0; dvinfo.dwDVReserved[ 1 ] = 0; break; case AVI_NTSC: mainHdr.dwWidth = 720; mainHdr.dwHeight = 480; streamHdr[ 0 ].dwScale = 1001; streamHdr[ 0 ].dwRate = 30000; streamHdr[ 0 ].dwSuggestedBufferSize = 120008; /* initialize the 'strf' chunk */ dvinfo.dwDVAAuxSrc = 0xc0c000c0; dvinfo.dwDVAAuxCtl = 0xffa0cf3f; dvinfo.dwDVAAuxSrc1 = 0xc0c001c0; dvinfo.dwDVAAuxCtl1 = 0xffa0cf3f; dvinfo.dwDVVAuxSrc = 0xff80ffff; dvinfo.dwDVVAuxCtl = 0xfffcc83f; dvinfo.dwDVReserved[ 0 ] = 0; dvinfo.dwDVReserved[ 1 ] = 0; break; default: /* no default allowed */ assert( 0 ); break; } indx[ 0 ] ->dwChunkId = make_fourcc( "00__" ); /* Initialize the 'strh' chunk */ streamHdr[ 0 ].fccType = make_fourcc( "iavs" ); streamHdr[ 0 ].fccHandler = make_fourcc( "dvsd" ); streamHdr[ 0 ].dwFlags = 0; streamHdr[ 0 ].wPriority = 0; streamHdr[ 0 ].wLanguage = 0; streamHdr[ 0 ].dwInitialFrames = 0; streamHdr[ 0 ].dwStart = 0; streamHdr[ 0 ].dwLength = 0; streamHdr[ 0 ].dwQuality = 0; streamHdr[ 0 ].dwSampleSize = 0; streamHdr[ 0 ].rcFrame.top = 0; streamHdr[ 0 ].rcFrame.bottom = 0; streamHdr[ 0 ].rcFrame.left = 0; streamHdr[ 0 ].rcFrame.right = 0; /* This is a simple directory structure setup. For details see the "OpenDML AVI File Format Extensions" document. An AVI file contains basically two types of objects, a "chunk" and a "list" object. The list object contains any number of chunks. Since a list is also a chunk, it is possible to create a hierarchical "list of lists" structure. Every AVI file starts with a "RIFF" object, which is a list of several other required objects. The actual DV data is contained in a "movi" list, each frame is in its own chunk. Old AVI files (pre OpenDML V. 1.02) contain only one RIFF chunk of less than 1 GByte size per file. The current format which allow for almost arbitrary sizes can contain several RIFF chunks of less than 1 GByte size. Old software however would only deal with the first RIFF chunk. Note that the first entry (FILE) isnt actually part of the AVI file. I use this (pseudo-) directory entry to keep track of the RIFF chunks and their positions in the AVI file. */ /* Create the container directory entry */ file_list = AddDirectoryEntry( make_fourcc( "FILE" ), make_fourcc( "FILE" ), 0, RIFF_NO_PARENT ); /* Create a basic directory structure. Only chunks defined from here on will be written to the AVI file. 
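The AddDirectoryEntry() calls below build roughly this layout (a sketch; the movi contents are only added later, frame by frame, in WriteFrame):

    RIFF 'AVI '
      LIST 'hdrl'
        'avih'
        LIST 'strl'   -- 'strh', 'strf', plus 'indx' when the large index is enabled
        LIST 'odml'   -- 'dmlh'
      'JUNK'          -- padding up to the next 512-byte boundary
      LIST 'movi'     -- '00__' frame chunks and 'ix00' indexes as they are written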
*/ riff_list = AddDirectoryEntry( make_fourcc( "RIFF" ), make_fourcc( "AVI " ), RIFF_LISTSIZE, file_list ); hdrl_list = AddDirectoryEntry( make_fourcc( "LIST" ), make_fourcc( "hdrl" ), RIFF_LISTSIZE, riff_list ); avih_chunk = AddDirectoryEntry( make_fourcc( "avih" ), 0, sizeof( MainAVIHeader ), hdrl_list ); strl_list[ 0 ] = AddDirectoryEntry( make_fourcc( "LIST" ), make_fourcc( "strl" ), RIFF_LISTSIZE, hdrl_list ); strh_chunk[ 0 ] = AddDirectoryEntry( make_fourcc( "strh" ), 0, sizeof( AVIStreamHeader ), strl_list[ 0 ] ); strf_chunk[ 0 ] = AddDirectoryEntry( make_fourcc( "strf" ), 0, sizeof( dvinfo ), strl_list[ 0 ] ); if ( index_type & AVI_LARGE_INDEX ) indx_chunk[ 0 ] = AddDirectoryEntry( make_fourcc( "indx" ), 0, sizeof( AVISuperIndex ), strl_list[ 0 ] ); odml_list = AddDirectoryEntry( make_fourcc( "LIST" ), make_fourcc( "odml" ), RIFF_LISTSIZE, hdrl_list ); dmlh_chunk = AddDirectoryEntry( make_fourcc( "dmlh" ), 0, 0x00f8, odml_list ); /* align movi list to block */ GetDirectoryEntry( hdrl_list, type, name, length, offset, parent ); num_blocks = length / PADDING_SIZE + 1; length = num_blocks * PADDING_SIZE - length - 5 * RIFF_HEADERSIZE; // why 5? junk_chunk = AddDirectoryEntry( make_fourcc( "JUNK" ), 0, length, riff_list ); movi_list = AddDirectoryEntry( make_fourcc( "LIST" ), make_fourcc( "movi" ), RIFF_LISTSIZE, riff_list ); /* The ix00 chunk will be added dynamically to the movi_list in avi_write_frame as needed */ ix_chunk[ 0 ] = -1; } /* Write a DV video frame. This is somewhat complex... */ bool AVI1File::WriteFrame( Frame *frame ) { int frame_chunk; // int junk_chunk; int num_blocks; FOURCC type; FOURCC name; off_t length; off_t offset; int parent; /* exit if no large index and 1GB reached */ if ( !( index_type & AVI_LARGE_INDEX ) && isUpdateIdx1 == false ) return false; /* Check if we need a new ix00 Standard Index. It has a capacity of IX00_INDEX_SIZE frames. Whenever we exceed that number, we need a new index. The new ix00 chunk is also part of the movi list. */ if ( ( index_type & AVI_LARGE_INDEX ) && ( ( ( streamHdr[ 0 ].dwLength - 0 ) % IX00_INDEX_SIZE ) == 0 ) ) FlushIndx( 0 ); /* Write the DV frame data. Make a new 00__ chunk for the new frame, write out the frame, then add a JUNK chunk which is sized such that we end up on a 512 bytes boundary. */ frame_chunk = AddDirectoryEntry( make_fourcc( "00__" ), 0, frame->GetDataLen(), movi_list ); if ( ( index_type & AVI_LARGE_INDEX ) && ( streamHdr[ 0 ].dwLength % IX00_INDEX_SIZE ) == 0 ) { GetDirectoryEntry( frame_chunk, type, name, length, offset, parent ); ix[ 0 ] ->qwBaseOffset = offset - RIFF_HEADERSIZE; } WriteChunk( frame_chunk, frame->data ); // num_blocks = (frame->GetDataLen() + RIFF_HEADERSIZE) / PADDING_SIZE + 1; // length = num_blocks * PADDING_SIZE - frame->GetDataLen() - 2 * RIFF_HEADERSIZE; // junk_chunk = AddDirectoryEntry(make_fourcc("JUNK"), 0, length, movi_list); // WriteChunk(junk_chunk, g_zeroes); if ( index_type & AVI_LARGE_INDEX ) UpdateIndx( 0, frame_chunk, 1 ); if ( ( index_type & AVI_SMALL_INDEX ) && isUpdateIdx1 ) UpdateIdx1( frame_chunk, 0x10 ); /* update some variables with the new frame count. */ if ( isUpdateIdx1 ) ++mainHdr.dwTotalFrames; ++streamHdr[ 0 ].dwLength; ++dmlh[ 0 ]; /* Find out if the current riff list is close to 1 GByte in size. If so, start a new (extended) RIFF. The only allowed item in the new RIFF chunk is a movi list (with video frames and indexes as usual). 
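The threshold tested below, 0x3f000000 (1,056,964,608 bytes), sits a little under the full 1 GiB mark of PADDING_1GB (0x40000000); the slack leaves room for the closing idx1 chunk, the alignment JUNK and the list headers before the next 'RIFF AVIX' chunk is started.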
*/ GetDirectoryEntry( riff_list, type, name, length, offset, parent ); if ( length > 0x3f000000 ) { /* write idx1 only once and before end of first GB */ if ( ( index_type & AVI_SMALL_INDEX ) && isUpdateIdx1 ) { int idx1_chunk = AddDirectoryEntry( make_fourcc( "idx1" ), 0, idx1->nEntriesInUse * 16, riff_list ); WriteChunk( idx1_chunk, ( void* ) idx1 ); } isUpdateIdx1 = false; if ( index_type & AVI_LARGE_INDEX ) { /* pad out to 1GB */ //GetDirectoryEntry(riff_list, type, name, length, offset, parent); //junk_chunk = AddDirectoryEntry(make_fourcc("JUNK"), 0, PADDING_1GB - length - 5 * RIFF_HEADERSIZE, riff_list); //WriteChunk(junk_chunk, g_zeroes); /* padding for alignment */ GetDirectoryEntry( riff_list, type, name, length, offset, parent ); num_blocks = ( length + 4 * RIFF_HEADERSIZE ) / PADDING_SIZE + 1; length = ( num_blocks * PADDING_SIZE ) - length - 4 * RIFF_HEADERSIZE - 2 * RIFF_LISTSIZE; if ( length > 0 ) { junk_chunk = AddDirectoryEntry( make_fourcc( "JUNK" ), 0, length, riff_list ); WriteChunk( junk_chunk, g_zeroes ); } riff_list = AddDirectoryEntry( make_fourcc( "RIFF" ), make_fourcc( "AVIX" ), RIFF_LISTSIZE, file_list ); movi_list = AddDirectoryEntry( make_fourcc( "LIST" ), make_fourcc( "movi" ), RIFF_LISTSIZE, riff_list ); } } return true; } void AVI1File::WriteRIFF() { WriteChunk( avih_chunk, ( void* ) & mainHdr ); WriteChunk( strh_chunk[ 0 ], ( void* ) & streamHdr[ 0 ] ); WriteChunk( strf_chunk[ 0 ], ( void* ) & dvinfo ); WriteChunk( dmlh_chunk, ( void* ) & dmlh ); if ( index_type & AVI_LARGE_INDEX ) { WriteChunk( indx_chunk[ 0 ], ( void* ) indx[ 0 ] ); WriteChunk( ix_chunk[ 0 ], ( void* ) ix[ 0 ] ); } if ( ( index_type & AVI_SMALL_INDEX ) && isUpdateIdx1 ) { int idx1_chunk = AddDirectoryEntry( make_fourcc( "idx1" ), 0, idx1->nEntriesInUse * 16, riff_list ); WriteChunk( idx1_chunk, ( void* ) idx1 ); } RIFFFile::WriteRIFF(); } AVI2File::AVI2File() : AVIFile() {} AVI2File::~AVI2File() {} /* Initialize the AVI structure to its initial state, either for PAL or NTSC format */ void AVI2File::Init( int format, int sampleFrequency, int indexType ) { int num_blocks; FOURCC type; FOURCC name; off_t length; off_t offset; int parent; assert( ( format == AVI_PAL ) || ( format == AVI_NTSC ) ); AVIFile::Init( format, sampleFrequency, indexType ); switch ( format ) { case AVI_PAL: mainHdr.dwStreams = 2; mainHdr.dwWidth = 720; mainHdr.dwHeight = 576; /* Initialize the 'strh' chunk */ streamHdr[ 0 ].fccType = make_fourcc( "vids" ); streamHdr[ 0 ].fccHandler = make_fourcc( "dvsd" ); streamHdr[ 0 ].dwFlags = 0; streamHdr[ 0 ].wPriority = 0; streamHdr[ 0 ].wLanguage = 0; streamHdr[ 0 ].dwInitialFrames = 0; streamHdr[ 0 ].dwScale = 1; streamHdr[ 0 ].dwRate = 25; streamHdr[ 0 ].dwStart = 0; streamHdr[ 0 ].dwLength = 0; streamHdr[ 0 ].dwSuggestedBufferSize = 144008; streamHdr[ 0 ].dwQuality = -1; streamHdr[ 0 ].dwSampleSize = 0; streamHdr[ 0 ].rcFrame.top = 0; streamHdr[ 0 ].rcFrame.bottom = 0; streamHdr[ 0 ].rcFrame.left = 0; streamHdr[ 0 ].rcFrame.right = 0; bitmapinfo.biSize = sizeof( bitmapinfo ); bitmapinfo.biWidth = 720; bitmapinfo.biHeight = 576; bitmapinfo.biPlanes = 1; bitmapinfo.biBitCount = 24; bitmapinfo.biCompression = make_fourcc( "dvsd" ); bitmapinfo.biSizeImage = 144000; bitmapinfo.biXPelsPerMeter = 0; bitmapinfo.biYPelsPerMeter = 0; bitmapinfo.biClrUsed = 0; bitmapinfo.biClrImportant = 0; streamHdr[ 1 ].fccType = make_fourcc( "auds" ); streamHdr[ 1 ].fccHandler = 0; streamHdr[ 1 ].dwFlags = 0; streamHdr[ 1 ].wPriority = 0; streamHdr[ 1 ].wLanguage = 0; streamHdr[ 1 ].dwInitialFrames 
= 0; streamHdr[ 1 ].dwScale = 2 * 2; streamHdr[ 1 ].dwRate = sampleFrequency * 2 * 2; streamHdr[ 1 ].dwStart = 0; streamHdr[ 1 ].dwLength = 0; streamHdr[ 1 ].dwSuggestedBufferSize = 8192; streamHdr[ 1 ].dwQuality = -1; streamHdr[ 1 ].dwSampleSize = 2 * 2; streamHdr[ 1 ].rcFrame.top = 0; streamHdr[ 1 ].rcFrame.bottom = 0; streamHdr[ 1 ].rcFrame.left = 0; streamHdr[ 1 ].rcFrame.right = 0; break; case AVI_NTSC: mainHdr.dwTotalFrames = 0; mainHdr.dwStreams = 2; mainHdr.dwWidth = 720; mainHdr.dwHeight = 480; /* Initialize the 'strh' chunk */ streamHdr[ 0 ].fccType = make_fourcc( "vids" ); streamHdr[ 0 ].fccHandler = make_fourcc( "dvsd" ); streamHdr[ 0 ].dwFlags = 0; streamHdr[ 0 ].wPriority = 0; streamHdr[ 0 ].wLanguage = 0; streamHdr[ 0 ].dwInitialFrames = 0; streamHdr[ 0 ].dwScale = 1001; streamHdr[ 0 ].dwRate = 30000; streamHdr[ 0 ].dwStart = 0; streamHdr[ 0 ].dwLength = 0; streamHdr[ 0 ].dwSuggestedBufferSize = 120008; streamHdr[ 0 ].dwQuality = -1; streamHdr[ 0 ].dwSampleSize = 0; streamHdr[ 0 ].rcFrame.top = 0; streamHdr[ 0 ].rcFrame.bottom = 0; streamHdr[ 0 ].rcFrame.left = 0; streamHdr[ 0 ].rcFrame.right = 0; bitmapinfo.biSize = sizeof( bitmapinfo ); bitmapinfo.biWidth = 720; bitmapinfo.biHeight = 480; bitmapinfo.biPlanes = 1; bitmapinfo.biBitCount = 24; bitmapinfo.biCompression = make_fourcc( "dvsd" ); bitmapinfo.biSizeImage = 120000; bitmapinfo.biXPelsPerMeter = 0; bitmapinfo.biYPelsPerMeter = 0; bitmapinfo.biClrUsed = 0; bitmapinfo.biClrImportant = 0; streamHdr[ 1 ].fccType = make_fourcc( "auds" ); streamHdr[ 1 ].fccHandler = 0; streamHdr[ 1 ].dwFlags = 0; streamHdr[ 1 ].wPriority = 0; streamHdr[ 1 ].wLanguage = 0; streamHdr[ 1 ].dwInitialFrames = 1; streamHdr[ 1 ].dwScale = 2 * 2; streamHdr[ 1 ].dwRate = sampleFrequency * 2 * 2; streamHdr[ 1 ].dwStart = 0; streamHdr[ 1 ].dwLength = 0; streamHdr[ 1 ].dwSuggestedBufferSize = 8192; streamHdr[ 1 ].dwQuality = 0; streamHdr[ 1 ].dwSampleSize = 2 * 2; streamHdr[ 1 ].rcFrame.top = 0; streamHdr[ 1 ].rcFrame.bottom = 0; streamHdr[ 1 ].rcFrame.left = 0; streamHdr[ 1 ].rcFrame.right = 0; break; } waveformatex.wFormatTag = 1; waveformatex.nChannels = 2; waveformatex.nSamplesPerSec = sampleFrequency; waveformatex.nAvgBytesPerSec = sampleFrequency * 2 * 2; waveformatex.nBlockAlign = 4; waveformatex.wBitsPerSample = 16; waveformatex.cbSize = 0; file_list = AddDirectoryEntry( make_fourcc( "FILE" ), make_fourcc( "FILE" ), 0, RIFF_NO_PARENT ); /* Create a basic directory structure. Only chunks defined from here on will be written to the AVI file. 
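For the type 2 file the same call pattern yields two stream lists, one for video and one for audio (again a sketch; the movi contents are appended per frame in WriteFrame):

    RIFF 'AVI '
      LIST 'hdrl'
        'avih'
        LIST 'strl'   -- video: 'strh', 'strf' (BITMAPINFOHEADER), 'indx' for '00dc' with the large index
        LIST 'strl'   -- audio: 'strh', 'strf' (WAVEFORMATEX), 'JUNK', 'indx' for '01wb' with the large index
        LIST 'odml'   -- 'dmlh' (only created here when the large index is enabled)
      'JUNK'          -- padding up to the next 512-byte boundary
      LIST 'movi'     -- interleaved '01wb' audio and '00dc' video chunks plus 'ix00'/'ix01' indexes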
*/ riff_list = AddDirectoryEntry( make_fourcc( "RIFF" ), make_fourcc( "AVI " ), RIFF_LISTSIZE, file_list ); hdrl_list = AddDirectoryEntry( make_fourcc( "LIST" ), make_fourcc( "hdrl" ), RIFF_LISTSIZE, riff_list ); avih_chunk = AddDirectoryEntry( make_fourcc( "avih" ), 0, sizeof( MainAVIHeader ), hdrl_list ); strl_list[ 0 ] = AddDirectoryEntry( make_fourcc( "LIST" ), make_fourcc( "strl" ), RIFF_LISTSIZE, hdrl_list ); strh_chunk[ 0 ] = AddDirectoryEntry( make_fourcc( "strh" ), 0, sizeof( AVIStreamHeader ), strl_list[ 0 ] ); strf_chunk[ 0 ] = AddDirectoryEntry( make_fourcc( "strf" ), 0, sizeof( BITMAPINFOHEADER ), strl_list[ 0 ] ); if ( index_type & AVI_LARGE_INDEX ) { indx_chunk[ 0 ] = AddDirectoryEntry( make_fourcc( "indx" ), 0, sizeof( AVISuperIndex ), strl_list[ 0 ] ); ix_chunk[ 0 ] = -1; indx[ 0 ] ->dwChunkId = make_fourcc( "00dc" ); } strl_list[ 1 ] = AddDirectoryEntry( make_fourcc( "LIST" ), make_fourcc( "strl" ), RIFF_LISTSIZE, hdrl_list ); strh_chunk[ 1 ] = AddDirectoryEntry( make_fourcc( "strh" ), 0, sizeof( AVIStreamHeader ), strl_list[ 1 ] ); strf_chunk[ 1 ] = AddDirectoryEntry( make_fourcc( "strf" ), 0, sizeof( WAVEFORMATEX ) - 2, strl_list[ 1 ] ); junk_chunk = AddDirectoryEntry( make_fourcc( "JUNK" ), 0, 2, strl_list[ 1 ] ); if ( index_type & AVI_LARGE_INDEX ) { indx_chunk[ 1 ] = AddDirectoryEntry( make_fourcc( "indx" ), 0, sizeof( AVISuperIndex ), strl_list[ 1 ] ); ix_chunk[ 1 ] = -1; indx[ 1 ] ->dwChunkId = make_fourcc( "01wb" ); odml_list = AddDirectoryEntry( make_fourcc( "LIST" ), make_fourcc( "odml" ), RIFF_LISTSIZE, hdrl_list ); dmlh_chunk = AddDirectoryEntry( make_fourcc( "dmlh" ), 0, 0x00f8, odml_list ); } /* align movi list to block */ GetDirectoryEntry( hdrl_list, type, name, length, offset, parent ); num_blocks = length / PADDING_SIZE + 1; length = num_blocks * PADDING_SIZE - length - 5 * RIFF_HEADERSIZE; // why 5 headers? 
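/* A worked example of the alignment above, using an invented header length and assuming the usual 8-byte RIFF_HEADERSIZE: if the hdrl list is 1700 bytes, num_blocks = 1700 / 512 + 1 = 4, so the JUNK chunk is sized 4 * 512 - 1700 - 5 * 8 = 308 bytes and the movi data that follows starts on a 512-byte boundary. */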
junk_chunk = AddDirectoryEntry( make_fourcc( "JUNK" ), 0, length, riff_list ); movi_list = AddDirectoryEntry( make_fourcc( "LIST" ), make_fourcc( "movi" ), RIFF_LISTSIZE, riff_list ); idx1->aIndex[ idx1->nEntriesInUse ].dwChunkId = make_fourcc( "7Fxx" ); idx1->aIndex[ idx1->nEntriesInUse ].dwFlags = 0; idx1->aIndex[ idx1->nEntriesInUse ].dwOffset = 0; idx1->aIndex[ idx1->nEntriesInUse ].dwSize = 0; idx1->nEntriesInUse++; } void AVI2File::WriteRIFF() { WriteChunk( avih_chunk, ( void* ) & mainHdr ); WriteChunk( strh_chunk[ 0 ], ( void* ) & streamHdr[ 0 ] ); WriteChunk( strf_chunk[ 0 ], ( void* ) & bitmapinfo ); if ( index_type & AVI_LARGE_INDEX ) { WriteChunk( dmlh_chunk, ( void* ) & dmlh ); WriteChunk( indx_chunk[ 0 ], ( void* ) indx[ 0 ] ); WriteChunk( ix_chunk[ 0 ], ( void* ) ix[ 0 ] ); } WriteChunk( strh_chunk[ 1 ], ( void* ) & streamHdr[ 1 ] ); WriteChunk( strf_chunk[ 1 ], ( void* ) & waveformatex ); if ( index_type & AVI_LARGE_INDEX ) { WriteChunk( indx_chunk[ 1 ], ( void* ) indx[ 1 ] ); WriteChunk( ix_chunk[ 1 ], ( void* ) ix[ 1 ] ); } if ( ( index_type & AVI_SMALL_INDEX ) && isUpdateIdx1 ) { int idx1_chunk = AddDirectoryEntry( make_fourcc( "idx1" ), 0, idx1->nEntriesInUse * 16, riff_list ); WriteChunk( idx1_chunk, ( void* ) idx1 ); } RIFFFile::WriteRIFF(); } /** Write a DV video frame \param frame the frame to write */ bool AVI2File::WriteFrame( Frame *frame ) { int audio_chunk; int frame_chunk; // int junk_chunk; char soundbuf[ 20000 ]; int audio_size; int num_blocks; FOURCC type; FOURCC name; off_t length; off_t offset; int parent; /* exit if no large index and 1GB reached */ if ( !( index_type & AVI_LARGE_INDEX ) && isUpdateIdx1 == false ) return false; /* Check if we need a new ix00 Standard Index. It has a capacity of IX00_INDEX_SIZE frames. Whenever we exceed that number, we need a new index. The new ix00 chunk is also part of the movi list. 
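Both stream indexes are flushed together below so that the audio ix01 stays in step with the video ix00. With IX00_INDEX_SIZE at 4028 entries and PAL's 25 frames per second, that works out to a new pair of standard indexes roughly every 4028 / 25 = 161 seconds of recording.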
*/ if ( ( index_type & AVI_LARGE_INDEX ) && ( ( ( streamHdr[ 0 ].dwLength - 0 ) % IX00_INDEX_SIZE ) == 0 ) ) { FlushIndx( 0 ); FlushIndx( 1 ); } /* Write audio data if we have it */ assert( !frame->IsHDV() ); audio_size = ((DVFrame*)frame)->ExtractAudio( soundbuf ); if ( audio_size > 0 ) { audio_chunk = AddDirectoryEntry( make_fourcc( "01wb" ), 0, audio_size, movi_list ); if ( ( index_type & AVI_LARGE_INDEX ) && ( streamHdr[ 0 ].dwLength % IX00_INDEX_SIZE ) == 0 ) { GetDirectoryEntry( audio_chunk, type, name, length, offset, parent ); ix[ 1 ] ->qwBaseOffset = offset - RIFF_HEADERSIZE; } WriteChunk( audio_chunk, soundbuf ); // num_blocks = (audio_size + RIFF_HEADERSIZE) / PADDING_SIZE + 1; // length = num_blocks * PADDING_SIZE - audio_size - 2 * RIFF_HEADERSIZE; // junk_chunk = AddDirectoryEntry(make_fourcc("JUNK"), 0, length, movi_list); // WriteChunk(junk_chunk, g_zeroes); if ( index_type & AVI_LARGE_INDEX ) UpdateIndx( 1, audio_chunk, audio_size / waveformatex.nChannels / 2 ); if ( ( index_type & AVI_SMALL_INDEX ) && isUpdateIdx1 ) UpdateIdx1( audio_chunk, 0x00 ); streamHdr[ 1 ].dwLength += audio_size / waveformatex.nChannels / 2; } /* Write video data */ frame_chunk = AddDirectoryEntry( make_fourcc( "00dc" ), 0, frame->GetDataLen(), movi_list ); if ( ( index_type & AVI_LARGE_INDEX ) && ( streamHdr[ 0 ].dwLength % IX00_INDEX_SIZE ) == 0 ) { GetDirectoryEntry( frame_chunk, type, name, length, offset, parent ); ix[ 0 ] ->qwBaseOffset = offset - RIFF_HEADERSIZE; } WriteChunk( frame_chunk, frame->data ); // num_blocks = (frame->GetDataLen() + RIFF_HEADERSIZE) / PADDING_SIZE + 1; // length = num_blocks * PADDING_SIZE - frame->GetDataLen() - 2 * RIFF_HEADERSIZE; // junk_chunk = AddDirectoryEntry(make_fourcc("JUNK"), 0, length, movi_list); // WriteChunk(junk_chunk, g_zeroes); if ( index_type & AVI_LARGE_INDEX ) UpdateIndx( 0, frame_chunk, 1 ); if ( ( index_type & AVI_SMALL_INDEX ) && isUpdateIdx1 ) UpdateIdx1( frame_chunk, 0x10 ); /* update some variables with the new frame count. */ if ( isUpdateIdx1 ) ++mainHdr.dwTotalFrames; ++streamHdr[ 0 ].dwLength; ++dmlh[ 0 ]; /* Find out if the current riff list is close to 1 GByte in size. If so, start a new (extended) RIFF. The only allowed item in the new RIFF chunk is a movi list (with video frames and indexes as usual). 
*/ GetDirectoryEntry( riff_list, type, name, length, offset, parent ); if ( length > 0x3f000000 ) { /* write idx1 only once and before end of first GB */ if ( ( index_type & AVI_SMALL_INDEX ) && isUpdateIdx1 ) { int idx1_chunk = AddDirectoryEntry( make_fourcc( "idx1" ), 0, idx1->nEntriesInUse * 16, riff_list ); WriteChunk( idx1_chunk, ( void* ) idx1 ); } isUpdateIdx1 = false; if ( index_type & AVI_LARGE_INDEX ) { /* padding for alignment */ GetDirectoryEntry( riff_list, type, name, length, offset, parent ); num_blocks = ( length + 4 * RIFF_HEADERSIZE ) / PADDING_SIZE + 1; length = ( num_blocks * PADDING_SIZE ) - length - 4 * RIFF_HEADERSIZE - 2 * RIFF_LISTSIZE; if ( length > 0 ) { junk_chunk = AddDirectoryEntry( make_fourcc( "JUNK" ), 0, length, riff_list ); WriteChunk( junk_chunk, g_zeroes ); } riff_list = AddDirectoryEntry( make_fourcc( "RIFF" ), make_fourcc( "AVIX" ), RIFF_LISTSIZE, file_list ); movi_list = AddDirectoryEntry( make_fourcc( "LIST" ), make_fourcc( "movi" ), RIFF_LISTSIZE, riff_list ); } } return true; } void AVI1File::setDVINFO( DVINFO &info ) { // do not do this until debugged audio against DirectShow return ; dvinfo.dwDVAAuxSrc = info.dwDVAAuxSrc; dvinfo.dwDVAAuxCtl = info.dwDVAAuxCtl; dvinfo.dwDVAAuxSrc1 = info.dwDVAAuxSrc1; dvinfo.dwDVAAuxCtl1 = info.dwDVAAuxCtl1; dvinfo.dwDVVAuxSrc = info.dwDVVAuxSrc; dvinfo.dwDVVAuxCtl = info.dwDVVAuxCtl; } void AVI2File::setDVINFO( DVINFO &info ) {} void AVIFile::setFccHandler( FOURCC type, FOURCC handler ) { for ( int i = 0; i < mainHdr.dwStreams; i++ ) { if ( streamHdr[ i ].fccType == type ) { int k, j = 0; FOURCC strf = make_fourcc( "strf" ); BITMAPINFOHEADER bih; streamHdr[ i ].fccHandler = handler; while ( ( k = FindDirectoryEntry( strf, j++ ) ) != -1 ) { ReadChunk( k, ( void* ) & bih ); bih.biCompression = handler; } } } } dvgrab-3.5+git20160707.1.e46042e/filehandler.h0000644000175000017500000001705112716434257015630 0ustar eses/* * filehandler.h * Copyright (C) 2000 Arne Schirmacher * Raw DV, JPEG, and Quicktime portions Copyright (C) 2003-2008 Dan Dennedy * Portions of Quicktime code borrowed from Arthur Peters' dv_utils. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
*/ #ifndef _FILEHANDLER_H #define _FILEHANDLER_H enum { PAL_FORMAT, NTSC_FORMAT, AVI_DV1_FORMAT, AVI_DV2_FORMAT, QT_FORMAT, RAW_FORMAT, DIF_FORMAT, JPEG_FORMAT, MPEG2TS_FORMAT, UNDEFINED }; #include using std::vector; #include using std::string; #include "dvframe.h" #include "hdvframe.h" #include "riff.h" #include "avi.h" #include /* this is a struct for each available pid */ typedef struct PayloadList { unsigned char pid, state; struct PayloadList *next; } PayloadList; enum FileCaptureMode { CAPTURE_IGNORE, CAPTURE_FRAME_APPEND, CAPTURE_FRAME_INSERT, CAPTURE_MOVIE_APPEND }; class FileTracker { protected: FileTracker(); ~FileTracker(); public: static FileTracker &GetInstance( ); void SetMode( FileCaptureMode ); FileCaptureMode GetMode( ); unsigned int Size(); char *Get( int ); void Add( const char * ); void Clear( ); private: static FileTracker *instance; vector list; FileCaptureMode mode; }; class FileHandler { public: FileHandler(); virtual ~FileHandler(); virtual bool GetAutoSplit(); virtual int GetTimeSplit(); virtual bool GetTimeStamp(); virtual bool GetTimeSys(); virtual bool GetTimeCode(); virtual string GetBaseName(); virtual string GetExtension(); virtual int GetMaxFrameCount(); virtual off_t GetMaxFileSize(); void CollectionCounterUpdate(); virtual int GetSizeSplitMode(); virtual off_t GetMinColSize(); virtual off_t GetMaxColSize(); virtual off_t GetFileSize() = 0; virtual int GetTotalFrames() = 0; virtual void SetAutoSplit( bool ); virtual void SetTimeSplit(int secs); virtual void SetTimeStamp( bool ); virtual void SetTimeSys( bool ); virtual void SetTimeCode( bool ); virtual void SetBaseName( const string& base ); virtual void SetMaxFrameCount( int ); virtual void SetEveryNthFrame( int ); virtual void SetMaxFileSize( off_t ); virtual void SetSizeSplitMode( int ); virtual void SetMinColSize( off_t ); virtual void SetMaxColSize( off_t ); virtual void SetFilmRate( bool ); virtual void SetRemove2332( bool ); virtual bool WriteFrame( Frame *frame ); virtual bool FileIsOpen() = 0; virtual bool Create( const string& filename ) = 0; virtual int Write( Frame *frame ) = 0; virtual int Close() = 0; virtual bool Done( void ); virtual bool Open( const char *s ) = 0; virtual int GetFrame( Frame *frame, int frameNum ) = 0; off_t GetLastCollectionFreeSpace() { return lastCollectionFreeSpace; } off_t GetCurrentCollectionSize() { return currentCollectionSize; } int GetFramesWritten() { return framesWritten; } virtual string GetFileName() { return filename; } virtual bool IsNewFile() { return isNewFile; } virtual bool IsFirstFile() { return isFirstFile == 0 ? 
false : true; } protected: int isFirstFile; bool isNewFile; bool done; bool autoSplit; int timeSplit; bool timeStamp; bool timeSys; bool timeCode; int maxFrameCount; int framesWritten; int everyNthFrame; int framesToSkip; off_t maxFileSize; off_t minColSize; off_t maxColSize; off_t currentCollectionSize; off_t lastCollectionFreeSpace; int sizeSplitMode; string base; string extension; string filename; TimeCode prevTimeCode; bool filmRate; bool remove2332; time_t prevTime; }; class RawHandler: public FileHandler { public: int fd; RawHandler( const string& ext = string( ".dv" ) ); ~RawHandler(); bool FileIsOpen(); bool Create( const string& filename ); int Write( Frame *frame ); int Close(); off_t GetFileSize(); int GetTotalFrames(); bool Open( const char *s ); int GetFrame( Frame *frame, int frameNum ); private: int numBlocks; }; class AVIHandler: public FileHandler { public: AVIHandler( int format = AVI_DV1_FORMAT ); ~AVIHandler(); void SetSampleFrame( DVFrame *sample ); bool FileIsOpen(); bool Create( const string& filename ); int Write( Frame *frame ); int Close(); off_t GetFileSize(); int GetTotalFrames(); bool Open( const char *s ); int GetFrame( Frame *frame, int frameNum ); bool GetOpenDML(); void SetOpenDML( bool ); protected: const string *filen; AVIFile *avi; int aviFormat; bool infoSet; AudioInfo audioInfo; VideoInfo videoInfo; bool isOpenDML; DVINFO dvinfo; FOURCC fccHandler; }; #ifdef HAVE_LIBQUICKTIME #include class QtHandler: public FileHandler { public: QtHandler(); ~QtHandler(); bool FileIsOpen(); bool Create( const string& filename ); int Write( Frame *frame ); int Close(); off_t GetFileSize(); int GetTotalFrames(); bool Open( const char *s ); int GetFrame( Frame *frame, int frameNum ); private: quicktime_t *fd; long samplingRate; int samplesPerBuffer; int channels; bool isFullyInitialized; unsigned int audioBufferSize; int16_t *audioBuffer; short int** audioChannelBuffer; short int* audioChannelBuffers[ 4 ]; void Init(); inline void DeinterlaceStereo16( void* pInput, int iBytes, void* pLOutput, void* pROutput ); }; #endif #if defined(HAVE_LIBJPEG) && defined(HAVE_LIBDV) extern "C" { #include } class JPEGHandler: public FileHandler { private: struct jpeg_error_mgr jerr; struct jpeg_compress_struct cinfo; JSAMPLE image_buffer[ 2048*2048*3 ]; bool isOpen; string filename; unsigned int count; bool deinterlace; int new_width; int new_height; bool overwrite; string file; string temp; bool usetemp; int fixAspect( Frame *frame ); bool scale( Frame *frame ); public: JPEGHandler( int quality, bool deinterlace = false, int width = -1, int height = -1, bool overwrite = false, string temp = "tmp.jpg", bool usetemp = false ); ~JPEGHandler(); bool FileIsOpen() { return isOpen; } bool Create( const string& filename ); int Write( Frame *frame ); int Close(); off_t GetFileSize() { return 0; } int GetTotalFrames() { return 0; } bool Open( const char *s ) { return false; } string GetFileName() { return file; } int GetFrame( Frame *frame, int frameNum ) { return -1; } }; #endif class Mpeg2Handler: public FileHandler { public: int fd; Mpeg2Handler( unsigned char flags, const string& ext = string( ".m2t" ) ); ~Mpeg2Handler(); bool WriteFrame( Frame *frame ); bool FileIsOpen(); bool Create( const string& filename ); int Write( Frame *frame ); int Close(); off_t GetFileSize(); int GetTotalFrames(); bool Open( const char *s ); int GetFrame( Frame *frame, int frameNum ); private: void ProcessPayload( unsigned char *packet, unsigned int pid, unsigned char len ); void ProcessTSPacket( unsigned char 
*packet ); int writeJVCP25( unsigned char *data, int len ); #define MPEG2_BUFFER_SIZE (2*1024*1024) bool waitingForRecordingDate; unsigned char buffer[MPEG2_BUFFER_SIZE]; int bufferLen; int totalFrames; const unsigned char writerFlags; PayloadList *firstPayloadEntry; }; #endif dvgrab-3.5+git20160707.1.e46042e/raw1394util.h0000644000175000017500000000221712716434257015361 0ustar eses/* * raw1394util.h -- libraw1394 utilities * Copyright (C) 2003-2008 Dan Dennedy * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. * */ #ifndef RAW1394UTIL_H #define RAW1394UTIL_H 1 #include #ifdef __cplusplus extern "C" { #endif int raw1394_get_num_ports( void ); raw1394handle_t raw1394_open( int port ); void raw1394_close( raw1394handle_t handle ); int discoverAVC( int * port, octlet_t* guid ); void reset_bus( int port ); #ifdef __cplusplus } #endif #endif dvgrab-3.5+git20160707.1.e46042e/stringutils.cc0000644000175000017500000000434312716434257016100 0ustar eses/* * stringutils.cc -- C++ STL string functions * Copyright (C) 2003-2008 Dan Dennedy * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
* */ #ifdef HAVE_CONFIG_H #include #endif #include #include using std::ostringstream; #include #include "stringutils.h" string StringUtils::replaceAll ( string haystack, string needle, string s ) { string::size_type pos = 0; while ( ( pos = haystack.find ( needle, pos ) ) != string::npos ) { haystack.erase ( pos, needle.length() ); haystack.insert ( pos, s ); } return haystack; } string StringUtils::stripWhite ( string s ) { ostringstream dest; char c; for ( string::size_type pos = 0; pos < s.size(); ++pos ) { c = s.at( pos ); if ( c != 0x20 && c != 0x09 && c != 0x0d && c != 0x0a ) dest << c; } dest << std::ends; return dest.str(); } bool StringUtils::begins( string source, string sub ) { return ( source.substr ( 0, sub.length() ) == sub ); } bool StringUtils::ends( string source, string sub ) { if ( sub.length() >= source.length() ) return false; return ( source.substr( source.length() - sub.length() ) == sub ); } string StringUtils::ltos( long num ) { char s[ 81 ]; sprintf ( s, "%ld", num ); return string( s ); } string StringUtils::itos( int num ) { char s[ 81 ]; sprintf ( s, "%d", num ); return string( s ); } string StringUtils::toLower( string s ) { transform( s.begin(), s.end(), s.begin(), (int(*)(int)) tolower ); return s; } string StringUtils::toUpper( string s ) { transform( s.begin(), s.end(), s.begin(), (int(*)(int)) toupper ); return s; } dvgrab-3.5+git20160707.1.e46042e/dvgrab.10000644000175000017500000004502712716434257014535 0ustar eses.\" This -*- nroff -*- file has been generated from .\" DocBook SGML with docbook-to-man on Debian GNU/Linux. ...\" ...\" transcript compatibility for postscript use. ...\" ...\" synopsis: .P! ...\" .de P! \\&. .fl \" force out current output buffer \\!%PB \\!/showpage{}def ...\" the following is from Ken Flowers -- it prevents dictionary overflows \\!/tempdict 200 dict def tempdict begin .fl \" prolog .sy cat \\$1\" bring in postscript file ...\" the following line matches the tempdict above \\!end % tempdict % \\!PE \\!. .sp \\$2u \" move below the image .. .de pF .ie \\*(f1 .ds f1 \\n(.f .el .ie \\*(f2 .ds f2 \\n(.f .el .ie \\*(f3 .ds f3 \\n(.f .el .ie \\*(f4 .ds f4 \\n(.f .el .tm ? font overflow .ft \\$1 .. .de fP .ie !\\*(f4 \{\ . ft \\*(f4 . ds f4\" ' br \} .el .ie !\\*(f3 \{\ . ft \\*(f3 . ds f3\" ' br \} .el .ie !\\*(f2 \{\ . ft \\*(f2 . ds f2\" ' br \} .el .ie !\\*(f1 \{\ . ft \\*(f1 . ds f1\" ' br \} .el .tm ? font underflow .. .ds f1\" .ds f2\" .ds f3\" .ds f4\" '\" t .ta 8n 16n 24n 32n 40n 48n 56n 64n 72n .TH "DVGRAB" "1" .SH "NAME" dvgrab \(em Capture DV or MPEG-2 Transport Stream (HDV) video and audio data from FireWire .SH "SYNOPSIS" \fBdvgrab\fP [\fIoptions\fP] [\fIbase\fP] [-] .SH "DESCRIPTION" \fBdvgrab\fP is a program that captures DV or HDV (MPEG2-TS) video and audio data from digital camcorders via FireWire (IEEE 1394). The data is stored in one or several files and can later be processed by video editing software. \fBdvgrab\fP can remote control the camcorder but it does not show the video's content on screen. .PP \fBdvgrab\fP also supports UVC (USB Video Class) compliant DV devices using Linux kernel module uvcvideo, which is a V4L2 driver. In this mode, there is no AV/C VTR control and therefore interactive mode is almost useless. interactive feature is .PP The \fIbase\fP argument is used to construct the filename to store video data: \fIbase\fP-\fInum\fP.\fIext\fP. \fInum\fP is a running number starting from 001, and \fIext\fP is the file name extension specifying the file format used, e.g. avi. 
A different naming scheme is used whenever the \fB-timestamp\fP, \fB-timecode\fP, or \fB-timesys\fP is given (see below). If \fIbase\fP is a full filename including extension, then \fBdvgrab\fP attempts to determine the output file format from the extension, but it still inserts \fInum\fP. The default value for \fIbase\fP is "dvgrab-". .PP If you specify a trailing '-' then the format is forced to raw DV or HDV and sent to stdout. \fBdvgrab\fP will also output raw DV or HDV to stdout while capturing to a file if stdout is piped or redirected. .PP You can use \fBdvgrab's\fP powerful file writing capabilities with other programs that produce raw DV or HDV. Using the \fB-stdin\fP option and if \fBdvgrab\fP detects that it is on the receiving end of a pipe and it is not in interactive mode, then it will try to read raw DV or HDV on stdin. .SH "OPTIONS" Options longer than a single character can be specified with either one or two leading hyphens. Also, you can use a space character or equal sign to separate the option name and its argument value. .IP "\fB-a[\fInum\fP], -autosplit[=\fInum\fP]\fP" 10 Try to detect whenever a new recording starts, and store it into a separate file. This can be combined with the \fB-frames\fP and \fB-size\fP options, and a split occurs whenever a specified event arises. Autosplit is off by default. .IP "" 10 \fInum\fP is optional. Without it, \fBdvgrab\fP determines when to split using a flag in the stream or a discontinuity in the timecode, where timecode discontinuity is anything backwards or greater than one second. If you set the optional argument \fInum\fP you can set the time sensitivity in seconds and ignore the stream's new-recording flag. This basically lets you split on larger time increments such as minutes or hours. For example, \fI-autosplit=3600\fP splits the recording whenever there is a gap in the recording that is an hour or longer. .IP "\fB-buffers \fInum\fP\fP" 10 The number of frames to use for buffering device I/O delays. Defaults to 100. .IP "\fB-card \fInum\fP\fP" 10 Tells \fBdvgrab\fP to receive data from FireWire card \fInum\fP. The default behaviour is to automatically select the first card containing the first discovered camera If used in conjunction with \fB-noavc\fP, then no bus probing is performed If used in conjunction with \fB-guid\fP \fIhex\fP, then only the specified bus is probed for node with guid \fIhex\fP. .IP "\fB-channel \fInum\fP\fP" 10 Isochronous channel to receive data from. Defaults to 63, which is pretty much standard among DV camcorders these days. If you specify anything different, no attempt is made at this time to tell the device which channel to use. You must have some manual way to tell the transmitting device which channel to use. .IP "\fB-cmincutsize \fInum\fP\fP" 10 This option is used to start the collection if a cut occurs \fInum\fP megabytes (actually, mebibytes) prior to the end of the collection. This option reduces small files being created when using the \fB-csize\fP option. When a new collection is started in this manner, the amount of free space in the previous collection is stored, and while the following clips fit within the previous collection, the new collection starting point is reset. .IP "\fB-csize \fInum\fP" 10 This option tells \fBdvgrab\fP to split the files when the collection of files exceeds \fInum\fP . This option is used to create collections of files that fit perfectly into \fInum\fP megabytes (actually, mebibytes) (i.e. for archiving onto DVD). 
When this occurs, a new collection is started (See also the \fB-cmincutsize\fP option) .IP "\fB-debug \fItype\fP\fP" 10 Display HDV debug info, \fItype\fP is one or more of: all,pat,pmt,pids,pid=N,pes,packet,video,sonya1 .IP "\fB-d, -duration \fItime\fP\fP" 10 Set the maximum capture duration across all file splits for a single capture session (multiple sessions are possible in interactive mode). The \fItime\fP value is expressed in SMIL2 MediaClipping Time format. See http://w3.org/AudioVideo/ for the specification. .IP "" 10 Briefly, the formats are: .IP "" 10 XXX[.Y]h, XXX[.Y]min, XXX[.Y][s], XXXms, .IP "" 10 [[HH:]MM:]SS[.ms], or smpte=[[[HH:]MM:]SS:]FF. .IP "\fB-every \fIn\fP\fP" 10 This option tells \fBdvgrab\fP to write every \fIn\fP'th frame only (default all frames). .IP "\fB-f, -format \fIdv1\fP | \fIdv2\fP | \fIavi\fP | \fIraw\fP | \fIdif\fP | \fIqt\fP | \fImov\fP | \fIjpeg\fP | \fIjpg\fP | \fImpeg2\fP | \fIhdv\fP\fP" 10 Specifies the format of the output file(s). File format can also be determined if you include an extension on the \fIbase\fP name. The following extensions are recognizable: avi, dv, dif, mov, jpg, jpeg, and m2t (HDV). .IP "" 10 \fIdv1\fP and \fIdv2\fP both are AVI files with slightly different formats. \fIdv2\fP stores a separate audio track in addition to the DV video track, which is more compatible with other applications. \fIdv1\fP only stores a single, integrated DV track since the DV format natively interleaves audio with video. Therefore, while \fIdv1\fP produces smaller output, some applications won't grok it and require \fIdv2\fP instead. \fBdvgrab\fP is capable of creating extremely large AVI files\(emwell over 2 or 4 GB\(emhowever, compatibility with other tools starts to decrease over the 1 GB size. .IP "" 10 \fIraw\fP stores the data unmodified and have the .dv extension. These files are read by a number of GNU/Linux tools as well as Apple Quicktime. .IP "" 10 \fIdif\fP is a variation of raw DV that names files with a .dif extension so they can be more immediately loaded into MainConcept MainActor5. .IP "" 10 \fIqt\fP is Quicktime, but requires that dvgrab be compiled with libquicktime. .IP "" 10 \fIjpg\fP or \fIjpeg\fP is for a sequence of JPEG image files if dvgrab was compiled with libdv and jpeglib. This option can only be used with a DV input, not HDV (MPEG2-TS). .IP "" 10 \fImpeg2\fP or \fIhdv\fP is for a MPEG-2 transport stream when using, for example, a HDV camcorder or digital TV settop box. .IP "" 10 Defaults to \fIraw\fP .IP "\fB-F, -frames \fInum\fP\fP" 10 This option tells \fBdvgrab\fP to store at most \fInum\fP frames per file before splitting to a new file, where \fInum\fP = 0 means ulimited. The corresponding time depends on the video system used. PAL shows 25, NTSC about 30 frames per second. .IP "\fB-guid \fIhex\fP\fP" 10 If you have more than one DV device, then select one using the node's GUID specified in \fIhex\fP (hexadecimal) format. This is the format as displayed in /proc/bus/ieee1394/devices or the new kernel 2.6 /sys filesystem. When you specify a GUID, \fBdvgrab\fP will establish (or overlay) a peer-to-peer connection with the device instead of listening to the device's broadcast. If you supply a \fIhex\fP value of 1, then \fBdvgrab\fP attempts to discover the device as well as setup a peer-to-peer connection. This is especially handy with MPEG2-TS settop boxes, which typically require a connection management procedure to start transmitting. .IP "\fB-h, -help\fP" 10 Show summary of options. 
.IP "\fB-I, -input \fIfile\fP\fP" 10 Read from \fIfile\fP instead of FireWire. You can use '-' for stdin instead of using \fB-stdin\fP. .IP "\fB-i, -interactive\fP" 10 Make dvgrab interactive where single keypresses on stdin control the camera VTR or start and stop capture. Otherwise, dvgrab runs in session mode, where it immediately starts capture and stops as directed or interrupted (ctrl-c). .IP "\fB-jpeg-deinterlace\fP" 10 If using \fB-format jpeg\fP, deinterlace the output by doubling the lines of the upper field. This is a cheap form of deinterlace that results in an effective 50% loss in resolution. .IP "\fB-jpeg-height \fInum\fP\fP" 10 If using \fB-format jpeg\fP, scale the output of the height to \fInum\fP (1 - 2048). .IP "\fB-jpeg-overwrite \fIname\fP\fP" 10 Write to same image file for each frame, instead of creating a sequence of image files. .IP "\fB-jpeg-quality \fInum\fP\fP" 10 If using \fB-format jpeg\fP, set the JPEG quality level from 0 (worst) to 100 (best). .IP "\fB-jpeg-temp \fIname\fP\fP 10 Use a temporary file to create the jpeg, rename the file to the target file name when done. Useful when using dvgrab with \fB-jpeg-overwrite\fP for generating a webcam image. .IP "\fB-jpeg-width \fInum\fP\fP" 10 If using \fB-format jpeg\fP, scale the output of the width to \fInum\fP (1 - 2048). .IP "" 10 The JPEG scaling width and height must be both either less than or greater than the normal frame size. For example, the scaled size of 700 wide by 525 high yields a nice 4:3 aspect image with square pixels, but it is illegal for NTSC because 700 is less than the normal width of 720 while the height is greater than the normal height of 480. .IP "" 10 Since DV uses non-square pixels, it is nice to be able to scale to an image based upon a 4:3 aspect ratio using square pixels. For NTSC, example sizes are 800x600, 640x480, and 320x240. For PAL, example square pixel sizes are 384x270 and 768x540. .IP "\fB-jvc-p25\fP" 10 Remove repeat_first_field flag and set frames per second to 25 to correct a stream recorded in JVC's HDV P25 mode. .IP "\fB-lockstep\fP" 10 Align capture to a multiple of \fB-frames\fP based on timecode. This is useful for redundancy, when more than one machine is capturing from the same FireWire device, and you want to ensure each file contains the same footage. To ensure the files from each machine have the same name use the \fB-timecode\fP option and the same \fIbase\fP name. .IP "\fB-lockstep_maxdrops \fInum\fP\fP" 10 If \fInum\fP frames are dropped consecutively, then close the file and resume capture on the next lockstop interval. If \fInum\fP is -1, then permit an unlimited number of consecutively dropped frames; this is the default. .IP "\fB-lockstep_totaldrops \fInum\fP\fP" 10 If \fInum\fP frames are dropped in the current file, then close the file and resume capture on the next lockstep interval. If \fInum\fP is -1, then permit an unlimited number of total dropped frames; this is the default. .IP "\fB-noavc\fP" 10 Disable use of AV/C VTR control. This is useful if you are capturing live video from a camera because in camera mode, an AV/C play command tells the camera to start recording, perhaps over material on the current tape. This applies to either interactive more or non-interactive because non-interactive stills sends a play and stop to the VTR upon capture start and stop. .IP "\fB-nostop\fP" 10 Disables sending the AV/C VTR stop command when exiting \fBdvgrab\fP. 
.IP "\fB-opendml\fP" 10 If using \fB-format dv2\fP, create an OpenDML-compliant type 2 DV AVI. This is required to support dv2 files >1GB. dv1 always supports files >1GB. .IP "\fB-r, -recordonly\fP" 10 When the camcorder is in record mode, this option causes \fBdvgrab\fP to only capture when the camcorder is recording and not paused. Normally, when in record mode, dvgrab always captures to let you use the camcorder purely as a camera where the computer operator is in control. This option makes dvgrab act like the VCR where the camera operator controls when capture takes place. This is very handy when used with the \fB-autosplit\fP option to automatically create a new file for each shot. This option requires AV/C and will not work with the \fB-noavc\fP option. .IP "\fB-rewind\fP" 10 Rewind the tape completely to the beginning prior to starting capture. Naturally, this requires AV/C; however, perhaps not so obvious is that this does not apply to interactive mode. .IP "\fB-showstatus\fP" 10 Normally, the capture status information is displayed after finished writing to each file. This option makes it show the capture status during capture, updated for each frame. .IP "\fB-s, -size \fInum\fP\fP" 10 This option tells \fBdvgrab\fP to store at most \fInum\fP megabytes (actually, mebibytes) per file, where \fInum\fP = 0 means unlimited file size for large files. The default size limit is 1024 MB. .IP "\fB-srt\fP" 10 Generate subtitle files containing the recording date and time in SRT format. For each video file that is created two additional files with the extension .srt0 and .srt1 are created. They contain the recording date and time as subtitles in the SRT format. The .srt0 file contains the subtitles with timing based on the running time from the start of the current file. Use this file if you transcode to a format like AVI. The .srt1 file contains the subtitles with timing based on the time code as delivered by the camera. The mplayer program understands this type of subtitles. .IP "\fB-stdin\fP" 10 Read the DV stream from a pipe on stdin instead of FireWire. .IP "\fB-timecode\fP" 10 Put the timecode of the first frame of each file into the file name. .IP "\fB-t, -timestamp\fP" 10 Put information on date and time of recording into file name. .IP "\fB-timesys\fP" 10 Put system rather than recording date and time into file name. This is useful when using converter devices that do not change the recording date time in the DV stream. .IP "\fB-V, -v4l2\fP" 10 Capture from a USB Video Class (UVC) device that supports DV. This uses the uvcvideo kernel module via V4L2. The default device file is /dev/video. Use the \fB-input\fP option to set a different device file. .IP "\fB-v, -version\fP" 10 Show version of program. .IP "\fB-24p\fP" 10 For Quicktime DV, set the frame rate as 24 fps in the Quicktime file. This only works as expected when the video has been shot in 24p mode. .IP "\fB-24pa\fP" 10 For Quicktime, DV, in addition to setting the frame rate to 24 in the Quicktime file, also reverse the 2:3:3:2 pulldown process by removing the interlaced "C" frame. This only works as expected when the video has been shot in 24p Advanced mode. See http://www.adamwilt.com/24p/ .SH "EXAMPLES" .IP "\fBdvgrab foo-\fP" 10 Captures video data from the default FireWire source and stores it to files \fBfoo-001.avi\fP, \fBfoo-002.avi\fP, etc. .IP "\fBdvgrab -frames 25 foo-\fP" 10 Assuming a PAL video source, this command records one second's worth of video data per file. 
.IP "\fBdvgrab -autosplit -frames 750 -timestamp foo-\fP" 10 Records video data from the default FireWire source, cuts it into chunks of 30 seconds (assuming PAL) or when a new recording starts and names the resulting files according to date and time info in the videostream. .IP "\fBdvgrab -autosplit -size 1998 -csize 4400 -cmincutsize 10 foo-\fP" 10 Records video data from the default FireWire source, cuts it into chunks when a new recording starts or when the current file exceeds 1998 megabytes (actually, mebibytes), or the current collection of files exceeds 4400 megabytes. It also reduces the size of the smallest file made due to a collection size cut to 10 megabytes. .IP "" 10 This option is perfect for backing up DV to DVD's as 2 Gb is around the maximum file size that (the current) linux implementation of the ISO9660 filesystem can handle! .IP "" 10 Warning: It is possible to make ISO9660 filesystems with files greater than 2 Gb, but the current linux IS09660 driver can't read them! Newer linux kernels may be able to handle ISO9660 filesystems with filesizes greater than 2 Gb. .IP "\fBdvgrab -format hdv -autosplit\fP" 10 Capture from a HDV camcorder. .IP "\fBdvgrab -format mpeg2 -guid 1\fP" 10 Record from a digital TV settop box. .IP "\fBdvgrab -jpeg-over -jpeg-w=320 -jpeg-h=240 -d smpte=1 webcam.jpeg\fP" 10 Capture a single frame, save it as a JPEG named webcam.jpg and exit. This example also demonstrates option handling. You only need to specify enough of a long option name to uniquely identify it. You can use space or equal sign to separate option name and argument. The file format is inferred from the filename extension. Also, since \fB-jpeg-overwrite\fP is used, the filename will be exactly "webcam.jpeg" and not include any numbers. .IP "\fBdvgrab -V\fP" 10 Capture over USB from a UVC compliant DV device. .IP "\fBdvgrab -v4l -input /dev/video1\fP" 10 Capture over USB from a UVC compliant DV device using device file /dev/video1. .IP "\fBdvgrab -format=hdv -autosplit=28800 -srt foo-\fP" 10 Capture from a HDV camcorder, splitting whenever there is a gap in the recording that lasts longer than 8 hours. This will likely generate a separate file for each day (useful for holiday videos). It will also generate subtitle files. Assuming that the files foo-001.m2t and foo-002.m2t are generated, the corresponding subtitle files will be foo-001.srt0, foo-001.srt1 and foo-002.srt0, foo-002.srt1. You can use the subtitle files to show the recording date and time while viewing the video. .SH "AUTHOR" Dan Dennedy and Daniel Kobras kobras@debian.org> .PP See http://www.kinodv.org/ for more information and support. ...\" created by instant / docbook-to-man, Wed 13 Dec 2000, 17:30 dvgrab-3.5+git20160707.1.e46042e/iec13818-2.h0000644000175000017500000003405412716434257014661 0ustar eses/* * iec13818-2.h * Copyright (C) 2007 Dan Streetman * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. 
* * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ #ifndef _IEC13818_2_H #define _IEC13818_2_H 1 #include "iec13818-1.h" #define START_CODE_VALUE { (unsigned char)0x00, (unsigned char)0x00, (unsigned char)0x01 } #define PICTURE_START_CODE_VALUE 0x00 #define SLICE_START_CODE_MIN 0x01 #define SLICE_START_CODE_MAX 0xaf #define USER_DATA_START_CODE_VALUE 0xb2 #define SEQUENCE_HEADER_CODE_VALUE 0xb3 #define SEQUENCE_ERROR_CODE_VALUE 0xb4 #define EXTENSION_START_CODE_VALUE 0xb5 #define SEQUENCE_END_CODE_VALUE 0xb7 #define GROUP_START_CODE_VALUE 0xb8 #define SYSTEM_START_CODE_MIN 0xb9 #define SYSTEM_START_CODE_MAX 0xff #define START_CODE_PREFIX( n ) ( GetData(n) == 0 && GetData((n)+1) == 0 && GetData((n)+2) == 1 ) #define START_CODE( n, code ) ( START_CODE_PREFIX(n) && GetData((n)+3) == (code) ) #define START_CODE_RANGE( n, min, max ) ( START_CODE_PREFIX(n) && GetData((n)+3) >= (min) && GetData((n)+3) <= (max) ) #define PICTURE_START_CODE( n ) ( START_CODE( (n), PICTURE_START_CODE_VALUE ) ) #define SLICE_START_CODE( n ) ( START_CODE_RANGE( (n), SLICE_START_CODE_MIN, SLICE_START_CODE_MAX ) ) #define USER_DATA_START_CODE( n ) ( START_CODE( (n), USER_DATA_START_CODE_VALUE ) ) #define SEQUENCE_HEADER_CODE( n ) ( START_CODE( (n), SEQUENCE_HEADER_CODE_VALUE ) ) #define SEQUENCE_ERROR_CODE( n ) ( START_CODE( (n), SEQUENCE_ERROR_CODE_VALUE ) ) #define EXTENSION_START_CODE( n ) ( START_CODE( (n), EXTENSION_START_CODE_VALUE ) ) #define SEQUENCE_END_CODE( n ) ( START_CODE( (n), SEQUENCE_END_CODE_VALUE ) ) #define GROUP_START_CODE( n ) ( START_CODE( (n), GROUP_START_CODE_VALUE ) ) #define SYSTEM_START_CODE( n ) ( START_CODE_RANGE( (n), SYSTEM_START_CODE_MIN, SYSTEM_START_CODE_MAX ) ) #define EXTENSION_ID( n, code ) ( EXTENSION_START_CODE( n ) && ( GetBits(((n)+4)*8,4) == (code) ) ) #define SEQUENCE_EXTENSION_ID( n ) ( EXTENSION_ID( n, SEQUENCE_EXTENSION_ID_VALUE ) ) #define SEQUENCE_DISPLAY_EXTENSION_ID( n ) ( EXTENSION_ID( n, SEQUENCE_DISPLAY_EXTENSION_ID_VALUE ) ) #define QUANT_MATRIX_EXTENSION_ID( n ) ( EXTENSION_ID( n, QUANT_MATRIX_EXTENSION_ID_VALUE ) ) #define COPYRIGHT_EXTENSION_ID( n ) ( EXTENSION_ID( n, COPYRIGHT_EXTENSION_ID_VALUE ) ) #define SEQUENCE_SCALABLE_EXTENSION_ID( n ) ( EXTENSION_ID( n, SEQUENCE_SCALABLE_EXTENSION_ID_VALUE ) ) #define PICTURE_DISPLAY_EXTENSION_ID( n ) ( EXTENSION_ID( n, PICTURE_DISPLAY_EXTENSION_ID_VALUE ) ) #define PICTURE_CODING_EXTENSION_ID( n ) ( EXTENSION_ID( n, PICTURE_CODING_EXTENSION_ID_VALUE ) ) #define PICTURE_SPATIAL_SCALABLE_EXTENSION_ID( n ) ( EXTENSION_ID( n, PICTURE_SPATIAL_SCALABLE_EXTENSION_ID_VALUE ) ) #define PICTURE_TEMPORAL_SCALABLE_EXTENSION_ID( n ) ( EXTENSION_ID( n, PICTURE_TEMPORAL_SCALABLE_EXTENSION_ID_VALUE ) ) #define SEQUENCE_EXTENSION_ID_VALUE 0x1 #define SEQUENCE_DISPLAY_EXTENSION_ID_VALUE 0x2 #define QUANT_MATRIX_EXTENSION_ID_VALUE 0x3 #define COPYRIGHT_EXTENSION_ID_VALUE 0x4 #define SEQUENCE_SCALABLE_EXTENSION_ID_VALUE 0x5 #define PICTURE_DISPLAY_EXTENSION_ID_VALUE 0x7 #define PICTURE_CODING_EXTENSION_ID_VALUE 0x8 #define PICTURE_SPATIAL_SCALABLE_EXTENSION_ID_VALUE 0x9 #define PICTURE_TEMPORAL_SCALABLE_EXTENSION_ID_VALUE 0xa #define FRAMERATE_LOOKUP( n ) ( \ (n) == 1 ? 24000 / 1001 : \ (n) == 2 ? 24 : \ (n) == 3 ? 25 : \ (n) == 4 ? 30000 / 1001 : \ (n) == 5 ? 30 : \ (n) == 6 ? 50 : \ (n) == 7 ? 60000 / 1001 : \ (n) == 8 ? 
60 : \ 0 ) class Video; /////////////// // VideoSection class VideoSection { public: VideoSection( Video *video ); virtual ~VideoSection(); virtual void Clear(); virtual void SetOffset( int o ); virtual void AddLength( int l ); virtual bool IsComplete(); virtual unsigned char GetData( int pos ); virtual unsigned long GetBits( int offset, int len ) { GETBITS( offset, len ); } virtual int GetCompleteLength() { return -1; } virtual void Dump() { } protected: Video *video; int offset; int length; }; ////////// // Picture class Picture : public VideoSection { public: Picture( Video *v ); ~Picture(); int GetCompleteLength(); void Dump(); unsigned int picture_start_code(); unsigned short temporal_reference(); unsigned char picture_coding_type(); unsigned short vbv_delay(); bool full_pel_forward_vector(); unsigned char forward_f_code(); bool full_pel_backward_vector(); unsigned char backward_f_code(); unsigned char extra_information_picture( int n ); }; ////////////////// // Sequence Header class SequenceHeader : public VideoSection { public: SequenceHeader( Video *v ); ~SequenceHeader(); int GetCompleteLength(); void Dump(); unsigned int sequence_header_code(); unsigned short horizontal_size_value(); unsigned short vertical_size_value(); unsigned char aspect_ratio_information(); unsigned char frame_rate_code(); unsigned int bit_rate_value(); unsigned short vbv_buffer_size_value(); bool constrained_parameters_flag(); bool load_intra_quantiser_matrix(); unsigned char intra_quantiser_matrix( unsigned int n ); bool load_non_intra_quantiser_matrix(); unsigned char non_intra_quantiser_matrix( unsigned int n ); }; //////////////////// // SequenceExtension class SequenceExtension : public VideoSection { public: SequenceExtension( Video *v ); ~SequenceExtension(); int GetCompleteLength(); void Dump(); unsigned int extension_start_code(); unsigned char extension_start_code_identifier(); unsigned char profile_and_level_indication(); bool progressive_sequence(); unsigned char chroma_format(); unsigned char horizontal_size_extension(); unsigned char vertical_size_extension(); unsigned short bit_rate_extension(); unsigned char vbv_buffer_size_extension(); bool low_delay(); unsigned char frame_rate_extension_n(); unsigned char frame_rate_extension_d(); }; /////////////////////////// // SequenceDisplayExtension class SequenceDisplayExtension : public VideoSection { public: SequenceDisplayExtension( Video *v ); ~SequenceDisplayExtension(); int GetCompleteLength(); void Dump(); unsigned int extension_start_code(); unsigned char extension_start_code_identifier(); unsigned char video_format(); bool colour_description(); unsigned char colour_primaries(); unsigned char transfer_characteristics(); unsigned char matrix_coefficients(); unsigned short display_horizontal_size(); bool marker_bit(); unsigned short display_vertical_size(); }; /////////////////////// // QuantMatrixExtension class QuantMatrixExtension : public VideoSection { public: QuantMatrixExtension( Video *v ); ~QuantMatrixExtension(); int GetCompleteLength(); void Dump(); unsigned int extension_start_code(); unsigned char extension_start_code_identifier(); bool load_intra_quantiser_matrix(); unsigned char intra_quantiser_matrix( unsigned int n ); bool load_non_intra_quantiser_matrix(); unsigned char non_intra_quantiser_matrix( unsigned int n ); bool load_chroma_intra_quantiser_matrix(); unsigned char chroma_intra_quantiser_matrix( unsigned int n ); bool load_chroma_non_intra_quantiser_matrix(); unsigned char chroma_non_intra_quantiser_matrix( unsigned int 
n ); }; ///////////////////// // CopyrightExtension class CopyrightExtension : public VideoSection { public: CopyrightExtension( Video *v ); ~CopyrightExtension(); int GetCompleteLength(); void Dump(); unsigned int extension_start_code(); unsigned char extension_start_code_identifier(); bool copyright_flag(); unsigned char copyright_identifier(); bool original_or_copy(); unsigned int copyright_number_1(); unsigned int copyright_number_2(); unsigned int copyright_number_3(); }; //////////////////////////// // SequenceScalableExtension #define DATA_PARTITIONING 0x00 #define SPATIAL_SCALABILITY 0x01 #define SNR_SCALABILITY 0x02 #define TEMPORAL_SCALABILITY 0x03 class SequenceScalableExtension : public VideoSection { public: SequenceScalableExtension( Video *v ); ~SequenceScalableExtension(); int GetCompleteLength(); void Dump(); unsigned int extension_start_code(); unsigned char extension_start_code_identifier(); unsigned char scalable_mode(); unsigned char layer_id(); unsigned short lower_layer_prediction_horizontal_size(); unsigned short lower_layer_prediction_vertical_size(); unsigned char horizontal_subsampling_factor_m(); unsigned char horizontal_subsampling_factor_n(); unsigned char vertical_subsampling_factor_m(); unsigned char vertical_subsampling_factor_n(); bool picture_mux_enable(); bool mux_to_progressive_sequence(); unsigned char picture_mux_order(); unsigned char picture_mux_factor(); }; ////////////////////////// // PictureDisplayExtension class PictureDisplayExtension : public VideoSection { public: PictureDisplayExtension( Video *v ); ~PictureDisplayExtension(); int GetCompleteLength(); void Dump(); unsigned int extension_start_code(); unsigned char extension_start_code_identifier(); unsigned short frame_centre_horizontal_offset( unsigned int n ); unsigned short frame_centre_vertical_offset( unsigned int n ); unsigned char number_of_frame_centre_offsets(); }; ///////////////////////// // PictureCodingExtension #define PICTURE_STRUCTURE_TOP_FIELD 0x1 #define PICTURE_STRUCTURE_BOTTOM_FIELD 0x2 #define PICTURE_STRUCTURE_FRAME 0x3 class PictureCodingExtension : public VideoSection { public: PictureCodingExtension( Video *v ); ~PictureCodingExtension(); int GetCompleteLength(); void Dump(); unsigned int extension_start_code(); unsigned char extension_start_code_identifier(); unsigned char f_code00(); unsigned char f_code01(); unsigned char f_code10(); unsigned char f_code11(); unsigned char intra_dc_precision(); unsigned char picture_structure(); bool top_field_first(); bool frame_pred_frame_dct(); bool concealment_motion_vectors(); bool q_scale_type(); bool intra_vlc_format(); bool alternate_scan(); bool repeat_first_field(); bool chroma_420_type(); bool progressive_frame(); bool composite_display_flag(); bool v_axis(); unsigned char field_sequence(); bool sub_carrier(); unsigned char burst_amplitude(); unsigned char sub_carrier_phase(); }; ////////////////////////////////// // PictureSpatialScalableExtension class PictureSpatialScalableExtension : public VideoSection { public: PictureSpatialScalableExtension( Video *v ); ~PictureSpatialScalableExtension(); int GetCompleteLength(); void Dump(); unsigned int extension_start_code(); unsigned char extension_start_code_identifier(); unsigned short lower_layer_temporal_reference(); unsigned short lower_layer_horizontal_offset(); unsigned short lower_layer_vertical_offset(); unsigned char spatial_temporal_weight_code_table_index(); bool lower_layer_progressive_frame(); bool lower_layer_deinterlaced_field_select(); }; 
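/* Illustrative sketch (not from the original header): the accessor methods
 * declared in the VideoSection subclasses above are essentially GetBits()
 * calls with the field's bit offset and width relative to the start of the
 * parsed section.  For example, SequenceHeader::frame_rate_code() could be
 * implemented as
 *
 *     unsigned char SequenceHeader::frame_rate_code()
 *     {
 *         // 32-bit sequence_header_code, 12-bit horizontal_size_value,
 *         // 12-bit vertical_size_value and 4-bit aspect_ratio_information
 *         // precede the 4-bit frame_rate_code (see FRAMERATE_LOOKUP).
 *         return GetBits( 60, 4 );
 *     }
 *
 * The bit offsets follow the ISO/IEC 13818-2 bitstream syntax; the concrete
 * accessor definitions are provided in the corresponding .cc file.
 */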
/////////////////////////////////// // PictureTemporalScalableExtension class PictureTemporalScalableExtension : public VideoSection { public: PictureTemporalScalableExtension( Video *v ); ~PictureTemporalScalableExtension(); int GetCompleteLength(); void Dump(); unsigned int extension_start_code(); unsigned char extension_start_code_identifier(); unsigned char reference_select_code(); unsigned short forward_temporal_reference(); unsigned short backward_temporal_reference(); }; /////////// // UserData class UserData : public VideoSection { public: UserData( Video *v ); ~UserData(); int GetCompleteLength(); void Dump(); }; //////// // Group class Group : public VideoSection { public: Group( Video *v ); ~Group(); int GetCompleteLength(); void Dump(); unsigned int group_start_code(); unsigned int time_code(); bool closed_gop(); bool broken_link(); bool drop_frame_flag(); unsigned char time_code_hours(); unsigned char time_code_minutes(); unsigned char time_code_seconds(); unsigned char time_code_pictures(); }; //////// // Slice class Slice : public VideoSection { public: Slice( Video *v ); ~Slice(); int GetCompleteLength(); void Dump(); unsigned int slice_start_code(); unsigned char slice_vertical_position_extension(); unsigned char priority_breakpoint(); unsigned char quantiser_scale_code(); bool intra_slice_flag(); bool intra_slice(); bool extra_bit_slice( unsigned int n ); unsigned char extra_information_slice( unsigned int n ); //FIXME - TODO - parse macroblocks, etc. //this Clear() and last_pos aren't needed if macroblock parsing is added. void Clear(); private: int last_pos; }; //////// // Video class Video { public: Video(); ~Video(); void Clear(); void AddPacket( HDVPacket *packet ); int GetLength(); unsigned char *GetBuffer(); unsigned char GetData( int pos ); unsigned long GetBits( int o, int l ) { GETBITS( o, l ); } bool IsComplete(); void ProcessPacket(); int width, height; float frameRate; TimeCode timeCode; bool isTimeCodeSet; bool hasGOP; // Needed for iec13818-2 parsing. int repeat_first_field; int top_field_first; int picture_structure; int progressive_sequence; int scalable_mode; private: PES pes; bool isComplete; int offset; int sectionStart; VideoSection *currentSection; VideoSection *lastSection; int sliceCount; Picture *picture; SequenceHeader *sequenceHeader; SequenceExtension *sequenceExtension; SequenceDisplayExtension *sequenceDisplayExtension; QuantMatrixExtension *quantMatrixExtension; CopyrightExtension *copyrightExtension; SequenceScalableExtension *sequenceScalableExtension; PictureDisplayExtension *pictureDisplayExtension; PictureCodingExtension *pictureCodingExtension; PictureSpatialScalableExtension *pictureSpatialScalableExtension; PictureTemporalScalableExtension *pictureTemporalScalableExtension; UserData *userData; Group *group; Slice *slice; }; #endif dvgrab-3.5+git20160707.1.e46042e/affine.h0000644000175000017500000000532212716434257014601 0ustar eses/* * affine.h -- Affine Transforms for 2d objects * Copyright (C) 2002 Charles Yates * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. 
* * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ #ifndef _KINOPLUS_AFFINE_H #define _KINOPLUS_AFFINE_H #include /** Affine transforms for 2d image manipulation. Current provides shearing and rotating support. */ class AffineTransform { protected: double matrix[ 2 ][ 2 ]; // Multiply two this affine transform with that void Multiply( AffineTransform that ) { double output[ 2 ][ 2 ]; for ( int i = 0; i < 2; i ++ ) for ( int j = 0; j < 2; j ++ ) output[ i ][ j ] = matrix[ i ][ 0 ] * that.matrix[ j ][ 0 ] + matrix[ i ][ 1 ] * that.matrix[ j ][ 1 ]; matrix[ 0 ][ 0 ] = output[ 0 ][ 0 ]; matrix[ 0 ][ 1 ] = output[ 0 ][ 1 ]; matrix[ 1 ][ 0 ] = output[ 1 ][ 0 ]; matrix[ 1 ][ 1 ] = output[ 1 ][ 1 ]; } public: // Define a default matrix AffineTransform() { matrix[ 0 ][ 0 ] = 1; matrix[ 0 ][ 1 ] = 0; matrix[ 1 ][ 0 ] = 0; matrix[ 1 ][ 1 ] = 1; } // Rotate by a given angle void Rotate( double angle ) { AffineTransform affine; affine.matrix[ 0 ][ 0 ] = cos( angle * M_PI / 180 ); affine.matrix[ 0 ][ 1 ] = 0 - sin( angle * M_PI / 180 ); affine.matrix[ 1 ][ 0 ] = sin( angle * M_PI / 180 ); affine.matrix[ 1 ][ 1 ] = cos( angle * M_PI / 180 ); Multiply( affine ); } // Shear by a given value void Shear( double shear ) { AffineTransform affine; affine.matrix[ 0 ][ 0 ] = 1; affine.matrix[ 0 ][ 1 ] = shear; affine.matrix[ 1 ][ 0 ] = 0; affine.matrix[ 1 ][ 1 ] = 1; Multiply( affine ); } void Scale( double sx, double sy ) { AffineTransform affine; affine.matrix[ 0 ][ 0 ] = sx; affine.matrix[ 0 ][ 1 ] = 0; affine.matrix[ 1 ][ 0 ] = 0; affine.matrix[ 1 ][ 1 ] = sy; Multiply( affine ); } // Obtain the mapped x coordinate of the input double MapX( int x, int y ) { return matrix[ 0 ][ 0 ] * x + matrix[ 0 ][ 1 ] * y; } // Obtain the mapped y coordinate of the input double MapY( int x, int y ) { return matrix[ 1 ][ 0 ] * x + matrix[ 1 ][ 1 ] * y; } }; #endif dvgrab-3.5+git20160707.1.e46042e/stringutils.h0000644000175000017500000000241012716434257015733 0ustar eses/* * stringutils.h -- C++ STL string functions * Copyright (C) 2003-2008 Dan Dennedy * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. * */ #ifndef _STRINGUTILS_H #define _STRINGUTILS_H #include using std::string; class StringUtils { public: static string replaceAll ( string haystack, string needle, string s ); static string stripWhite ( string s ); static bool begins ( string source, string sub ); static bool ends ( string source, string sub ); static string itos ( int num ); static string ltos ( long num ); static string toLower( string s ); static string toUpper( string s ); }; #endif dvgrab-3.5+git20160707.1.e46042e/configure.ac0000644000175000017500000000637412716434257015476 0ustar esesdnl Process this file with autoconf to produce a configure script. 
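dnl Build requirements: dvgrab needs libraw1394 (>= 1.1.0), libavc1394
dnl (>= 0.5.1), libiec61883 (>= 1.0.0) and the pthread library; the checks
dnl below abort configure if any of these is missing.
dnl Optional extras only enable additional features when found: libdv
dnl (JPEG export and better type-2 DV AVI files), libquicktime or
dnl quicktime4linux (Quicktime output), jpeglib (JPEG output), the Linux
dnl V4L2 headers (USB/UVC capture) and ElectricFence (debugging).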
AC_PREREQ(2.69) AC_INIT(dvgrab, 3.5) AM_INIT_AUTOMAKE AM_CONFIG_HEADER(config.h) AM_MAINTAINER_MODE dnl Checks for programs. AC_PROG_CC AC_ISC_POSIX AC_PROG_CXX AM_PROG_CC_STDC AC_C_BIGENDIAN AC_PROG_INSTALL dnl Checks for header files. AC_HEADER_STDC AC_CHECK_HEADERS(fcntl.h unistd.h stdio.h) dnl Checks for libraries. PKG_CHECK_MODULES(LIBRAW1394, libraw1394 >= 1.1.0) AC_SUBST(LIBRAW1394_CFLAGS) AC_SUBST(LIBRAW1394_LIBS) PKG_CHECK_MODULES(LIBAVC1394, libavc1394 >= 0.5.1) AC_SUBST(LIBAVC1394_CFLAGS) AC_SUBST(LIBAVC1394_LIBS) PKG_CHECK_MODULES(LIBIEC61883, libiec61883 >= 1.0.0) AC_SUBST(LIBIEC61883_CFLAGS) AC_SUBST(LIBIEC61883_LIBS) AC_CHECK_LIB(pthread, pthread_create,, [ AC_ERROR(You need the pthread library to compile dvgrab) ]) # LIBDV AC_ARG_WITH(libdv, [  --with-libdv  Enables libdv support for JPEG output], [enable_libdv=$withval], [enable_libdv=yes]) if test "$enable_libdv" = yes; then PKG_CHECK_MODULES(LIBDV, libdv >= 0.103, [ AC_DEFINE(HAVE_LIBDV, 1,[Define to 1 if you have libdv.]) AC_SUBST(LIBDV_CFLAGS) AC_SUBST(LIBDV_LIBS) ], [AC_MSG_RESULT([libdv not installed; I make better dv2 AVI files with libdv 0.103 or newer.])] ) fi # LIBQUICKTIME AC_ARG_WITH(libquicktime, [  --with-libquicktime  Enables Quicktime support], [enable_libquicktime=$withval], [enable_libquicktime=yes]) if test "$enable_libquicktime" = yes; then PKG_CHECK_MODULES(LIBQUICKTIME, [libquicktime >= 0.9.5], [ AC_DEFINE(HAVE_LIBQUICKTIME, 1, [libquicktime.sourceforge.net present]) AC_SUBST(LIBQUICKTIME_CFLAGS) AC_SUBST(LIBQUICKTIME_LIBS) ],[ AC_CHECK_HEADERS(quicktime/quicktime.h, [ LIBQUICKTIME_CFLAGS="-I$prefix/include/quicktime" AC_CHECK_LIB(quicktime, quicktime_open, [ LIBQUICKTIME_LIBS="-lquicktime" AC_DEFINE(HAVE_LIBQUICKTIME, 1, [Heroine Virtual Quicktime4Linux present]) ],[ AC_CHECK_LIB(quicktimehv, quicktime_open, [ LIBQUICKTIME_LIBS="-lquicktimehv" AC_DEFINE(HAVE_LIBQUICKTIME, 1, [cvs.cinelerra.org Quicktime4Linux present]) ],[ AC_WARN([libquicktime missing; install libquicktime or quicktime4linux to support Quicktime files.]) ]) ]) ],[ AC_WARN($LIBQUICKTIME_PKG_ERRORS) AC_WARN([libquicktime missing; install libquicktime or quicktime4linux to support Quicktime files.]) ]) ]) fi AC_SUBST(LIBQUICKTIME_CFLAGS) AC_SUBST(LIBQUICKTIME_LIBS) # LIBJPEG AC_ARG_WITH(libjpeg, [  --with-libjpeg  Enables JPEG support], [enable_libjpeg=$withval], [enable_libjpeg=yes]) if test "$enable_libjpeg" = yes; then AC_CHECK_HEADERS(jpeglib.h,, [ AC_WARN(jpeglib headers missing; install jpeglib to save to JPEG files.) ]) AC_CHECK_LIB(jpeg, jpeg_CreateCompress,, [ AC_WARN(jpeglib missing; install jpeglib to save to JPEG files.) ]) fi # V4L AC_CHECK_HEADERS(linux/videodev2.h,, [ AC_WARN(V4L2 headers missing; install linux 2.6 headers to use USB.) ]) # EFENCE AC_ARG_WITH(efence,[ --with-efence Use ElectricFence for debugging support.], [ AC_CHECK_LIB(efence,free,, [ AC_ERROR(efence not found) ]) ]) dnl Checks for typedefs, structures, and compiler characteristics. AC_C_CONST AC_TYPE_SIZE_T AC_STRUCT_TM dnl Checks for library functions. AC_TYPE_SIGNAL AC_CHECK_FUNCS(mktime) AC_OUTPUT(Makefile) dvgrab-3.5+git20160707.1.e46042e/endian_types.h0000644000175000017500000001324012716434257016031 0ustar eses/* * * Quick hack to handle endianness and word length issues. * Defines _le, _be, and _ne variants to standard ISO types * like int32_t, that are stored in little-endian, big-endian, * and native-endian byteorder in memory, respectively. 
 * Caveat: int32_le_t and friends cannot be used in vararg
 * functions like printf() without an explicit cast.
 *
 * Copyright (c) 2003-2005 Daniel Kobras
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software Foundation,
 * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
 */

#ifndef _ENDIAN_TYPES_H
#define _ENDIAN_TYPES_H

/* Needed for BYTE_ORDER and BIG/LITTLE_ENDIAN macros. */
#ifndef _BSD_SOURCE
# define _BSD_SOURCE
# include <endian.h>
# undef _BSD_SOURCE
#else
# include <endian.h>
#endif

#include <sys/types.h>
#include <byteswap.h>

static inline int8_t bswap(const int8_t& x) { return x; }
static inline u_int8_t bswap(const u_int8_t& x) { return x; }
static inline int16_t bswap(const int16_t& x) { return bswap_16(x); }
static inline u_int16_t bswap(const u_int16_t& x) { return bswap_16(x); }
static inline int32_t bswap(const int32_t& x) { return bswap_32(x); }
static inline u_int32_t bswap(const u_int32_t& x) { return bswap_32(x); }
static inline int64_t bswap(const int64_t& x) { return bswap_64(x); }
static inline u_int64_t bswap(const u_int64_t& x) { return bswap_64(x); }

#define le_to_cpu cpu_to_le
#define be_to_cpu cpu_to_be

template <class T> static inline T cpu_to_le(const T& x)
{
#if BYTE_ORDER == LITTLE_ENDIAN
	return x;
#else
	return bswap(x);
#endif
}

template <class T> static inline T cpu_to_be(const T& x)
{
#if BYTE_ORDER == LITTLE_ENDIAN
	return bswap(x);
#else
	return x;
#endif
}

template <class T> class le_t {
	T m;
	T read() const { return le_to_cpu(m); };
	void write(const T& n) { m = cpu_to_le(n); };
public:
	le_t(void) { m = 0; };
	le_t(const T& o) { write(o); };
	operator T() const { return read(); };
	le_t operator++() { write(read() + 1); return *this; };
	le_t operator++(int) { write(read() + 1); return *this; };
	le_t operator--() { write(read() - 1); return *this; };
	le_t operator--(int) { write(read() - 1); return *this; };
	le_t& operator+=(const T& t) { write(read() + t); return *this; };
	le_t& operator-=(const T& t) { write(read() - t); return *this; };
	le_t& operator&=(const le_t& t) { m &= t.m; return *this; };
	le_t& operator|=(const le_t& t) { m |= t.m; return *this; };
} __attribute__((packed));

/* Just copy-and-pasted from le_t. Too lazy to do it right. */
template <class T> class be_t {
	T m;
	T read() const { return be_to_cpu(m); };
	void write(const T& n) { m = cpu_to_be(n); };
public:
	be_t(void) { m = 0; };
	be_t(const T& o) { write(o); };
	operator T() const { return read(); };
	be_t operator++() { write(read() + 1); return *this; };
	be_t operator++(int) { write(read() + 1); return *this; };
	be_t operator--() { write(read() - 1); return *this; };
	be_t operator--(int) { write(read() - 1); return *this; };
	be_t& operator+=(const T& t) { write(read() + t); return *this; };
	be_t& operator-=(const T& t) { write(read() - t); return *this; };
	be_t& operator&=(const be_t& t) { m &= t.m; return *this; };
	be_t& operator|=(const be_t& t) { m |= t.m; return *this; };
} __attribute__((packed));

/* Define types of native endianness similar to the little and big endian
 * versions below. Not really necessary but useful occasionally to emphasize
 * endianness of data. */
typedef int8_t int8_ne_t;
typedef int16_t int16_ne_t;
typedef int32_t int32_ne_t;
typedef int64_t int64_ne_t;
typedef u_int8_t u_int8_ne_t;
typedef u_int16_t u_int16_ne_t;
typedef u_int32_t u_int32_ne_t;
typedef u_int64_t u_int64_ne_t;

/* The classes work on their native endianness as well, but obviously
 * introduce some overhead. Use the faster typedefs to native types
 * therefore, unless you're debugging. */
#if BYTE_ORDER == LITTLE_ENDIAN
typedef int8_ne_t int8_le_t;
typedef int16_ne_t int16_le_t;
typedef int32_ne_t int32_le_t;
typedef int64_ne_t int64_le_t;
typedef u_int8_ne_t u_int8_le_t;
typedef u_int16_ne_t u_int16_le_t;
typedef u_int32_ne_t u_int32_le_t;
typedef u_int64_ne_t u_int64_le_t;
typedef int8_t int8_be_t;
typedef be_t<int16_t> int16_be_t;
typedef be_t<int32_t> int32_be_t;
typedef be_t<int64_t> int64_be_t;
typedef u_int8_t u_int8_be_t;
typedef be_t<u_int16_t> u_int16_be_t;
typedef be_t<u_int32_t> u_int32_be_t;
typedef be_t<u_int64_t> u_int64_be_t;
#else
typedef int8_ne_t int8_be_t;
typedef int16_ne_t int16_be_t;
typedef int32_ne_t int32_be_t;
typedef int64_ne_t int64_be_t;
typedef u_int8_ne_t u_int8_be_t;
typedef u_int16_ne_t u_int16_be_t;
typedef u_int32_ne_t u_int32_be_t;
typedef u_int64_ne_t u_int64_be_t;
typedef int8_t int8_le_t;
typedef le_t<int16_t> int16_le_t;
typedef le_t<int32_t> int32_le_t;
typedef le_t<int64_t> int64_le_t;
typedef u_int8_t u_int8_le_t;
typedef le_t<u_int16_t> u_int16_le_t;
typedef le_t<u_int32_t> u_int32_le_t;
typedef le_t<u_int64_t> u_int64_le_t;
#endif

#endif
dvgrab-3.5+git20160707.1.e46042e/riffdump.cc0000644000175000017500000000075612716434257015331 0ustar eses#ifdef HAVE_CONFIG_H
#include <config.h>
#endif
#include <stdio.h>
#include "riff.h"
#include "avi.h"

int main (int argc, char *argv[])
{
	try
	{
		AVIFile avi;
		avi.Open(argv[1]);
		avi.ParseRIFF();
		avi.PrintDirectory();
	}
	catch (char *s)
	{
		printf("Exception caught.\n%s\n", s);
	}
	catch (...)
	{
		printf("Unexpected exception caught.\n");
	}
	return 0;
}
dvgrab-3.5+git20160707.1.e46042e/ieee1394io.h0000644000175000017500000001333412716434257015133 0ustar eses/*
 * ieee1394io.cc -- asynchronously grabbing DV data
 * Copyright (C) 2000 Arne Schirmacher
 * Copyright (C) 2003-2008 Dan Dennedy
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software Foundation,
 * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/ #ifndef _IEEE1394IO_H #define _IEEE1394IO_H 1 #include #include #include #include using std::string; #include using std::deque; #include "hdvframe.h" class Frame; class IEEE1394Reader { protected: /// the number of frames that had to be thrown away because /// our inFrames queue did not contain available frames int droppedFrames; int badFrames; /// a pointer to the frame which is currently been transmitted Frame *currentFrame; /// a list of empty frames deque < Frame* > inFrames; /// a list of already received frames deque < Frame* > outFrames; public: IEEE1394Reader( int channel = 63, int frames = 50, bool hdv = false ); virtual ~IEEE1394Reader(); // Mutex protected public methods virtual bool StartThread( void ) = 0; virtual void StopThread( void ) = 0; Frame* GetFrame( void ); void DoneWithFrame( Frame* ); int GetDroppedFrames( void ); int GetBadFrames( void ); int GetOutQueueSize( void ) { return outFrames.size(); } int GetInQueueSize( void ) { return inFrames.size(); } // These two public methods are not mutex protected virtual bool Open( void ) = 0; virtual void Close( void ) = 0; bool WaitForAction( int seconds = 0 ); void TriggerAction( ); virtual bool StartReceive( void ) = 0; virtual void StopReceive( void ) = 0; protected: /// the iso channel we listen to (typically == 63) int channel; /// contains information about our thread after calling StartThread pthread_t thread; /// this mutex protects capture related variables that could possibly /// accessed from two threads at the same time pthread_mutex_t mutex; // This condition and mutex are used to indicate when new frames are // received pthread_mutex_t condition_mutex; pthread_cond_t condition; /// A state variable for starting and stopping thread bool isRunning; /// If this is handling DV or HDV bool isHDV; HDVStreamParams hdvStreamParams; void Flush( void ); }; typedef void (*BusResetHandler)( void* ); typedef void* BusResetHandlerData; class iec61883Reader: public IEEE1394Reader { private: /// the interface card to use (typically == 0) int m_port; /// the handle to libraw1394 raw1394handle_t m_handle; /// the handle to libiec61883 iec61883_dv_fb_t m_iec61883_dv; iec61883_mpeg2_t m_iec61883_mpeg2; BusResetHandler m_resetHandler; const void* m_resetHandlerData; public: iec61883Reader( int port = 0, int channel = 63, int buffers = 50, BusResetHandler = 0, BusResetHandlerData = 0, bool hdv = false ); ~iec61883Reader(); bool Open( void ); void Close( void ); bool StartReceive( void ); void StopReceive( void ); bool StartThread( void ); void StopThread( void ); int Handler( unsigned char *data, int length, int dropped ); void *Thread(); void ResetHandler( void ); private: static int ResetHandlerProxy( raw1394handle_t handle, unsigned int generation ); static int Mpeg2HandlerProxy( unsigned char *data, int length, unsigned int dropped, void *callback_data ); static int DvHandlerProxy( unsigned char *data, int length, int complete, void *callback_data ); static void* ThreadProxy( void *arg ); }; class iec61883Connection { private: raw1394handle_t m_handle; nodeid_t m_node; int m_channel; int m_bandwidth; int m_outputPort; int m_inputPort; public: iec61883Connection( int port, int node ); ~iec61883Connection(); static void CheckConsistency( int port, int node ); int GetChannel( void ) const { return m_channel; } int Reconnect( void ); }; class AVC { private: /// the interface card to use (typically == 0) int port; /// this mutex protects avc related variables that could possibly /// accessed from two threads at the same time 
pthread_mutex_t avc_mutex; /// the handle to the ieee1394 subsystem raw1394handle_t avc_handle; public: AVC( int crd = 0 ); ~AVC(); int isPhyIDValid( int id ); void Noop( void ); int Play( int id ); int Pause( int id ); int Stop( int id ); int FastForward( int id ); int Rewind( int id ); int Forward( int id ); int Back( int id ); int NextScene( int id ); int PreviousScene( int id ); int Record( int id ); int Shuttle( int id, int speed ); unsigned int TransportStatus( int id ); bool Timecode( int id, char* timecode ); int getNodeId( const char *guid ); int Reverse( int id ); bool isHDV( int phyID ) const; private: }; class pipeReader: public IEEE1394Reader { public: pipeReader( const char *filename, int frames = 50, bool hdv = false ) : IEEE1394Reader( 0, frames, hdv ), input_file( filename ) {}; ~pipeReader() {}; bool Open( void ) { return true; }; void Close( void ) {} ; bool StartReceive( void ) { return true; }; void StopReceive( void ) {}; bool StartThread( void ); void StopThread( void ); void* Thread( ); private: bool Handler(); static void* ThreadProxy( void *arg ); FILE *file; const char *input_file; }; #endif dvgrab-3.5+git20160707.1.e46042e/bootstrap0000755000175000017500000000003212716434257015134 0ustar eses#! /bin/sh autoreconf -fi dvgrab-3.5+git20160707.1.e46042e/srt.h0000644000175000017500000000323212716434257014157 0ustar eses/* * srt.h -- Class for writing SRT subtitle files * Copyright (C) 2008 Pelayo Bernedo * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ #ifndef DVGRAB_SRT_H #define DVGRAB_SRT_H #include #include #include "frame.h" class SubtitleWriter { std::ofstream os_tc, os_fr; // Time code and frame number based. float frameRate; int lastYear, lastMonth, lastDay; int lastHour, lastMinute; int frameCount, lastFrameWritten; TimeCode codeNow, nextCodeToWrite; int titleNumber; bool timeCodeValid; void frameToTime( int frameNo, int *hh, int *mm, int *ss, int *millis ); void codeToTime( const TimeCode &tc, int *hh, int *mm, int *ss, int *millis ); void writeSubtitleFrame(); void writeSubtitleCode(); void finishFile(); public: SubtitleWriter(); ~SubtitleWriter(); void newFile( const char *videoName ); void setFrameRate( float fr ) { frameRate = fr; } bool hasFrameRate() const { return frameRate != 0.0; } void addRecordingDate( struct tm &rd, const TimeCode &tc ); }; #endifdvgrab-3.5+git20160707.1.e46042e/dvgrab.cc0000644000175000017500000011411512716434257014755 0ustar eses/* * dvgrab.cc -- DVGrab control class * Copyright (C) 2003-2009 Dan Dennedy * Major rewrite of code based upon older versions of dvgrab by Arne Schirmacher * and some Kino code also contributed by Charles Yates. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. 
* * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ #ifdef HAVE_CONFIG_H #include #endif #include #include #include using std::cerr; using std::endl; #include #include #include #include #include #include #include #include #include #include #include #include #include "error.h" #include "riff.h" #include "avi.h" #include "dvgrab.h" #include "raw1394util.h" #include "smiltime.h" #include "stringutils.h" #include "v4l2reader.h" #include "srt.h" extern bool g_done; pthread_mutex_t DVgrab::capture_mutex; pthread_t DVgrab::capture_thread; pthread_t DVgrab::watchdog_thread; Frame *DVgrab::m_frame; FileHandler *DVgrab::m_writer; static SubtitleWriter subWriter; DVgrab::DVgrab( int argc, char *argv[] ) : m_program_name( argv[0] ), m_port( -1 ), m_node( -1 ), m_reader_active( false ), m_autosplit( false ), m_timestamp( false ), m_channel( DEFAULT_CHANNEL ), m_frame_count( DEFAULT_FRAMES ), m_max_file_size( DEFAULT_SIZE ), m_collection_size( DEFAULT_CSIZE ), m_collection_min_cut_file_size( DEFAULT_CMINCUTSIZE ), m_sizesplitmode ( 0 ), m_file_format( DEFAULT_FORMAT ), m_open_dml( false ), m_frame_every( DEFAULT_EVERY ), m_jpeg_quality( 75 ), m_jpeg_deinterlace( false ), m_jpeg_width( -1 ), m_jpeg_height( -1 ), m_jpeg_overwrite( false ), m_jpeg_temp( "dvtmp.jpg" ), m_jpeg_usetemp( false ), m_dropped_frames( 0 ), m_bad_frames(0), m_interactive( false ), m_buffers( DEFAULT_BUFFERS ), m_total_frames( 0 ), m_duration( "" ), m_timeDuration( 0 ), m_noavc( false ), m_guid( 0 ), m_timesys( false ), m_connection( 0 ), m_raw_pipe( false ), m_no_stop( false ), m_timecode( false ), m_lockstep( false ), m_lockPending( false ), m_lockstep_maxdrops( DEFAULT_LOCKSTEP_MAXDROPS ), m_lockstep_totaldrops( DEFAULT_LOCKSTEP_TOTALDROPS ), m_captureActive( false ), m_avc( 0 ), m_reader( 0 ), m_hdv( false ), m_showstatus( false ), m_isLastTimeCodeSet( false ), m_isLastRecDateSet( false ), m_v4l2( false ), m_jvc_p25( false ), m_24p( false ), m_24pa( false ), m_isRecordMode( false ), m_isRewindFirst( false ), m_timeSplit(0), m_srt( false ), m_isNewFile(false) { m_frame = 0; m_writer = 0; m_input_file_name = NULL; m_dst_file_name = NULL; getargs( argc, argv ); if ( m_v4l2 ) { if ( !m_input_file_name ) m_input_file_name = DEFAULT_V4L2_DEVICE; } else { // if reading stdin, make sure its not a tty! if ( m_input_file_name && ( strcmp( m_input_file_name, "-" ) == 0 ) && ( isatty( fileno( stdin ) ) || m_interactive ) ) throw std::string( "Can't read from tty or in interactive mode" ); if ( ! m_input_file_name && ( ! m_noavc || m_port == -1 ) ) m_node = discoverAVC( &m_port, &m_guid ); if ( ( m_interactive || ! m_input_file_name ) && ( ! m_noavc && m_node == -1 ) ) throw std::string( "no camera exists" ); if ( m_file_format == MPEG2TS_FORMAT ) m_hdv = true; } pthread_mutex_init( &capture_mutex, NULL ); if ( m_port != -1 ) { iec61883Connection::CheckConsistency( m_port, m_node ); if ( ! m_noavc ) { m_avc = new AVC( m_port ); if ( ! 
m_avc ) throw std::string( "failed to initialize AV/C" ); if ( m_interactive ) m_avc->Pause( m_node ); if ( m_avc->isHDV( m_node ) ) { m_file_format = MPEG2TS_FORMAT; m_hdv = true; } } if ( m_guid ) { m_connection = new iec61883Connection( m_port, m_node ); if ( ! m_connection ) throw std::string( "failed to establish isochronous connection" ); m_channel = m_connection->GetChannel(); sendEvent( "Established connection over channel %d", m_channel ); } m_reader = new iec61883Reader( m_port, m_channel, m_buffers, this->testCaptureProxy, this, m_hdv ); } else if ( m_v4l2 ) { #ifdef HAVE_LINUX_VIDEODEV2_H m_reader = new v4l2Reader( m_input_file_name, m_buffers, m_hdv ); #endif } else if ( m_input_file_name ) { m_reader = new pipeReader( m_input_file_name, m_buffers, m_hdv ); } else throw std::string( "invalid source specified" ); if ( m_reader ) { pthread_create( &capture_thread, NULL, captureThread, this ); m_reader->StartThread(); } } DVgrab::~DVgrab() { cleanup(); } void DVgrab::print_usage() { cerr << "Usage: " << m_program_name << " [options] [file] [-]" << endl; cerr << "Try " << m_program_name << " --help for more information" << endl; } void DVgrab::print_version() { cerr << PACKAGE << " " << VERSION << endl; } void DVgrab::print_help() { print_version(); cerr << "Usage: " << m_program_name << " [options] [file] [-]" << endl << endl; cerr << "Use '-' to pipe video to stdout." << endl << endl; cerr << "Options:" << endl << endl; cerr << " -a[n],-autosplit[=n] start a new file when a new recording is detected" << endl; cerr << " or if n is supplied, after n seconds gap in recording date/time" << endl; cerr << " -buffers number the number of internal frames to buffer [default " << DEFAULT_BUFFERS << "]" << endl; cerr << " -card number card number [default automatic]" << endl; cerr << " -channel number iso channel number for listening [default " << DEFAULT_CHANNEL << "]" << endl; cerr << " -cmincutsize num min file size in MiB due to collection split [default " << DEFAULT_CMINCUTSIZE << "]" << endl; cerr << " -csize number split file when collections of files are about to exceed" << endl; cerr << " number MiB, 0 = unlimited [default " << DEFAULT_CSIZE << "]" << endl; cerr << " -debug type display (HDV) debug info, type is one or more of:" << endl; cerr << " all,pat,pmt,pids,pid=N,pes,packet,video,sonya1" << endl; cerr << " -d, -duration time total capture duration specified as a SMIL time value:" << endl; cerr << " XXX[.Y]h, XXX[.Y]min, XXX[.Y][s], XXXms," << endl; cerr << " [[HH:]MM:]SS[.ms], or smpte=[[[HH:]MM:]SS:]FF" << endl; cerr << " [default unlimited]" << endl; cerr << " -every number write every n'th frame only [default " << DEFAULT_EVERY << "]" << endl; cerr << " -f -format type save as one of the following file types [default " << DEFAULT_FORMAT_STR << "]" << endl; cerr << " raw raw DV file with a .dv extension" << endl; cerr << " dif raw DV file with a .dif extension" << endl; cerr << " dv1 'Type 1' DV AVI file" << endl; cerr << " dv2, avi 'Type 2' DV AVI file" << endl; #ifdef HAVE_LIBQUICKTIME cerr << " qt, mov QuickTime DV movie" << endl; #endif cerr << " mpeg2, hdv MPEG-2 transport stream (HDV)" << endl; #if defined(HAVE_LIBJPEG) && defined(HAVE_LIBDV) cerr << " jpeg, jpg sequence of JPEG files (DV only)" << endl; #endif cerr << " -F, -frames number max number of frames per split" << endl; cerr << " 0 = unlimited [default " << DEFAULT_FRAMES << "]" << endl; cerr << " -guid hex select one of multiple DV devices by its GUID" << endl; cerr << " GUID is in hexadecimal; see 
/sys/bus/ieee1394/devices/" << endl; cerr << " -h, -help display this help and exit" << endl; cerr << " -I, -input file read from file (\"-\" = stdin)" << endl; cerr << " -i, -interactive go interactive with camera VTR and capture control" << endl; #if defined(HAVE_LIBJPEG) && defined(HAVE_LIBDV) cerr << " -jpeg-deinterlace deinterlace the output by line doubling the upper field" << endl; cerr << " -jpeg-height n scale the output to the specified height (max=2048)" << endl; cerr << " -jpeg-overwrite overwrite the same file instead of creating a sequence" << endl; cerr << " -jpeg-quality n set the JPEG compression level" << endl; cerr << " -jpeg-temp name use name as temporary file output" << endl; cerr << " -jpeg-width n scale the output to the specified width (max=2048)" << endl; #endif cerr << " -jvc-p25 remove repeat_first_field flag and set fps to 25 (HDV)" << endl; cerr << " (to correct stream from JVC cam recorded in P25 mode)" << endl; cerr << " -lockstep align capture to multiple of -frames based on timecode" << endl; cerr << " -lockstep_maxdrops max consecutive frame drops before closing file" << endl; cerr << " -1 = unlimited [default " << DEFAULT_LOCKSTEP_MAXDROPS << "]" << endl; cerr << " -lockstep_totaldrops max total frame drops before closing file" << endl; cerr << " -1 = unlimited [default " << DEFAULT_LOCKSTEP_TOTALDROPS << "]" << endl; cerr << " -noavc disable use of AV/C VTR control" << endl; cerr << " -nostop do not send AV/C stop command on exit" << endl; cerr << " -opendml use the OpenDML extensions to write large (>1GB)" << endl; cerr << " 'Type 2' DV AVI files (requires -format dv2)" << endl; cerr << " -r, recordonly only capture when not paused while in record mode" << endl; cerr << " -rewind completely rewind the tape prior to capture" << endl; cerr << " -showstatus show the recording status while capturing" << endl; cerr << " -s, -size number max file size, 0 = unlimited [default " << DEFAULT_SIZE << "]" << endl; cerr << " -srt write SRT files with the recording date\n"; cerr << " -stdin read from stdin pipe [default = raw1394]" << endl; cerr << " -timecode put the first frame's timecode into the file name" << endl; cerr << " -t, -timestamp put the date and time of recording into the file name" << endl; cerr << " -timesys put the system date and time into the file name" << endl; #ifdef HAVE_LINUX_VIDEODEV2_H cerr << " -V, -v4l2 capture DV from V4L2 USB device (linux-uvc)" << endl; cerr << " use -input to set device file [default " << DEFAULT_V4L2_DEVICE << "]" << endl; #endif cerr << " -v, -version display version and exit" << endl; #ifdef HAVE_LIBQUICKTIME cerr << " -24p use 24 fps as output rate (Quicktime Only)" << endl; cerr << " -24pa remove 2:3:3:2 pulldown for 24p Advanced (Quicktime Only)" << endl; #endif cerr << endl; cerr << "Check out the dvgrab website for the latest version, news and other software:" << endl; cerr << "http://www.kinodv.org/" << endl << endl; } void DVgrab::set_file_format( char *format ) { if ( strcmp( "dv1", format ) == 0 ) m_file_format = AVI_DV1_FORMAT; else if ( strcmp( "dv2", format ) == 0 || strcmp( "avi", format ) == 0 ) m_file_format = AVI_DV2_FORMAT; else if ( strcmp( "raw", format ) == 0 ) m_file_format = RAW_FORMAT; else if ( strcmp( "qt", format ) == 0 || strcmp( "mov", format ) == 0 ) m_file_format = QT_FORMAT; else if ( strcmp( "dif", format ) == 0 ) m_file_format = DIF_FORMAT; #if defined(HAVE_LIBJPEG) && defined(HAVE_LIBDV) else if ( strcmp( "jpeg", format ) == 0 || strcmp( "jpg", format ) == 0 ) m_file_format = 
JPEG_FORMAT; #endif else if ( strncmp( "mpeg2", format, 5 ) == 0 || strcmp( "hdv", format ) == 0 ) m_file_format = MPEG2TS_FORMAT; else { cerr << "Unknown file format : " << format << endl; print_usage(); exit( EXIT_FAILURE ); } } void DVgrab::set_format_from_name( void ) { std::string filename = m_dst_file_name; if ( filename.find( '.' ) != string::npos ) { std::string ext = StringUtils::toUpper( filename.substr( filename.find_last_of( '.' ) + 1 ) ); if ( ext == "AVI" ) m_file_format = AVI_DV2_FORMAT; else if ( ext == "DV" ) m_file_format = RAW_FORMAT; else if ( ext == "DIF" ) m_file_format = DIF_FORMAT; else if ( ext == "MOV" ) m_file_format = QT_FORMAT; else if ( ext == "JPG" || ext == "JPEG" ) m_file_format = JPEG_FORMAT; else if ( ext == "M2T" ) m_file_format = MPEG2TS_FORMAT; else { cerr << "Unknown filename extension" << endl; print_usage(); exit( EXIT_FAILURE ); } if ( m_file_format != JPEG_FORMAT ) m_dst_file_name[ filename.find_last_of( '.' ) ] = '\0'; } } void DVgrab::getargs( int argc, char *argv[] ) { const char *opts = "a::d:hif:F:I:rs:tVv-"; int optindex = 0; int c; struct option long_opts[] = { // all these use sscanf for int conversion, use val == 0xff to indicate { "autosplit", optional_argument, 0, 'a' }, { "buffers", required_argument, &m_buffers, 0xff }, { "card", required_argument, &m_port, 0xff }, { "channel", required_argument, &m_channel, 0xff }, { "cmincutsize", required_argument, &m_collection_min_cut_file_size, 0xff }, { "csize", required_argument, &m_collection_size, 0xff }, { "debug", required_argument, 0, 0 }, { "duration", required_argument, 0, 0 }, { "every", required_argument, &m_frame_every, 0xff }, { "format", required_argument, 0, 'f' }, { "frames", required_argument, &m_frame_count, 0xff }, { "guid", required_argument, 0, 0 }, { "help", no_argument, 0, 'h' }, { "input", required_argument, 0, 'I' }, { "interactive", no_argument, 0, 'i'}, #if defined(HAVE_LIBJPEG) && defined(HAVE_LIBDV) { "jpeg-deinterlace", no_argument, &m_jpeg_deinterlace, true }, { "jpeg-height", required_argument, &m_jpeg_height, 0xff }, { "jpeg-overwrite", no_argument, &m_jpeg_overwrite, true }, { "jpeg-quality", required_argument, &m_jpeg_quality, 0xff }, { "jpeg-temp", required_argument, &m_jpeg_usetemp, true }, { "jpeg-width", required_argument, &m_jpeg_width, 0xff }, #endif { "jvc-p25", no_argument, &m_jvc_p25, true }, { "lockstep", no_argument, &m_lockstep, true }, { "lockstep_maxdrops", required_argument, &m_lockstep_maxdrops, 0xff }, { "lockstep_totaldrops", required_argument, &m_lockstep_totaldrops, 0xff }, { "noavc", no_argument, &m_noavc, true }, { "nostop", no_argument, &m_no_stop, true }, { "opendml", no_argument, &m_open_dml, true }, { "recordonly", no_argument, 0, 'r'}, { "rewind", no_argument, &m_isRewindFirst, true }, { "showstatus", no_argument, &m_showstatus, true }, { "size", required_argument, &m_max_file_size, 0xff }, { "srt", no_argument, &m_srt, true }, { "stdin", no_argument, 0, 0 }, { "timecode", no_argument, &m_timecode, true }, { "timestamp", no_argument, &m_timestamp, true }, { "timesys", no_argument, &m_timesys, true }, #ifdef HAVE_LINUX_VIDEODEV2_H { "v4l2", no_argument, 0, 'V' }, #endif #ifdef HAVE_LIBQUICKTIME { "24p", no_argument, &m_24p, true }, { "24pa", no_argument, &m_24pa, true }, #endif { "version", no_argument, 0, 'v' }, { 0, 0, 0, 0 } }; while ( -1 != ( c = getopt_long_only( argc, argv, opts, long_opts, &optindex ) ) ) { switch ( c ) { case 0: { const char *name = long_opts[optindex].name; if ( long_opts[optindex].val == 0xff ) { if ( sscanf( 
optarg, "%d", long_opts[optindex].flag ) != 1 ) { cerr << "Parameter " << name << " invalid value : " << optarg << endl; print_usage(); exit( EXIT_FAILURE ); } } else if ( strcmp( "guid", name ) == 0 ) { if ( sscanf( optarg, "%llx", &m_guid ) != 1 ) { cerr << "Parameter m_guid invalid value : " << optarg << endl; print_usage(); exit( EXIT_FAILURE ); } } else if ( strcmp( "debug", name ) == 0 ) { char *str = strdup( optarg ); char *token; while ( token = strsep( &str, "," ) ) { if ( strcmp( token, "all" ) == 0 ) d_all = true; else if ( strcmp( token, "pat" ) == 0 ) d_hdv_pat = true; else if ( strcmp( token, "pmt" ) == 0 ) d_hdv_pmt = true; else if ( strcmp( token, "pids" ) == 0 ) d_hdv_pids = true; else if ( strcmp( token, "pes" ) == 0 ) d_hdv_pes = true; else if ( strcmp( token, "packet" ) == 0 ) d_hdv_packet = true; else if ( strcmp( token, "video" ) == 0 ) d_hdv_video = true; else if ( strcmp( token, "sonya1" ) == 0 ) d_hdv_sonya1 = true; else if ( strncmp( token, "pid=", 4 ) == 0 ) d_hdv_pid_add( (int)strtol( &token[4], NULL, 0 ) ); else { cerr << "Invalid debug parameter : " << token << endl; print_usage(); exit( EXIT_FAILURE ); } } if ( str ) free( str ); } else if ( strcmp( "jpeg-temp", name ) == 0 ) m_jpeg_temp = optarg; else if ( strcmp( "stdin", name ) == 0 ) m_input_file_name = "-"; else if ( strcmp( "duration", name ) == 0 ) m_duration = optarg; } break; case 'a': if ( optarg ) m_timeSplit = atoi( optarg ); else m_autosplit = true; break; case 'd': m_duration = optarg; break; case 'i': m_interactive = true; break; case 'f': set_file_format( optarg ); break; case 'F': m_frame_count = atoi( optarg ); break; case 'h': print_help(); exit( EXIT_SUCCESS ); break; case 'I': m_input_file_name = optarg; break; case 'r': m_isRecordMode = true; break; case 's': m_max_file_size = atoi( optarg ); break; case 't': m_timestamp = true; break; case 'V': m_v4l2 = true; break; case 'v': print_version(); exit( EXIT_SUCCESS ); break; default: print_usage(); exit( EXIT_FAILURE ); } } if ( optind < argc ) { if ( argv[ optind ][0] == '-' ) { m_raw_pipe = true; ++optind; } } if ( optind < argc ) { m_dst_file_name = strdup( argv[ optind++ ] ); set_format_from_name(); } if ( optind < argc ) { if ( argv[ optind ][0] == '-' ) { m_raw_pipe = true; ++optind; } else { cerr << "Too many output file names." << endl; print_usage(); exit( EXIT_FAILURE ); } } if ( m_dst_file_name == NULL && !m_raw_pipe ) m_dst_file_name = strdup( "dvgrab-" ); } void DVgrab::startCapture() { if ( m_dst_file_name ) { pthread_mutex_lock( &capture_mutex ); switch ( m_file_format ) { case RAW_FORMAT: m_writer = new RawHandler(); break; case DIF_FORMAT: m_writer = new RawHandler( ".dif" ); break; case AVI_DV1_FORMAT: { AVIHandler *aviWriter = new AVIHandler( AVI_DV1_FORMAT ); m_writer = aviWriter; break; } case AVI_DV2_FORMAT: { AVIHandler *aviWriter = new AVIHandler( AVI_DV2_FORMAT ); m_writer = aviWriter; if ( m_max_file_size == 0 || m_max_file_size > 1000 ) { sendEvent( "Turning on OpenDML to support large file size." ); m_open_dml = true; } aviWriter->SetOpenDML( m_open_dml ); break; } #ifdef HAVE_LIBQUICKTIME case QT_FORMAT: m_writer = new QtHandler(); m_writer->SetFilmRate( m_24p ); m_writer->SetRemove2332( m_24pa ); break; #endif #if defined(HAVE_LIBJPEG) && defined(HAVE_LIBDV) case JPEG_FORMAT: m_writer = new JPEGHandler( m_jpeg_quality, m_jpeg_deinterlace, m_jpeg_width, m_jpeg_height, m_jpeg_overwrite, m_jpeg_temp, m_jpeg_usetemp ); break; #endif case MPEG2TS_FORMAT: m_writer = new Mpeg2Handler( m_jvc_p25 ? 
MPEG2_JVC_P25 : 0 ); break; } m_writer->SetTimeStamp( m_timestamp ); m_writer->SetTimeSys( m_timesys ); m_writer->SetTimeCode( m_timecode ); m_writer->SetBaseName( m_dst_file_name ); m_writer->SetMaxFrameCount( m_frame_count ); m_writer->SetAutoSplit( m_autosplit ); m_writer->SetTimeSplit ( m_timeSplit ); m_writer->SetEveryNthFrame( m_frame_every ); m_writer->SetMaxFileSize( ( off_t ) m_max_file_size * ( off_t ) ( 1024 * 1024 ) ); if (m_collection_size) { m_sizesplitmode = 1; } m_writer->SetSizeSplitMode( m_sizesplitmode ); m_writer->SetMaxColSize( ( off_t ) ( m_collection_size ) * ( off_t ) ( 1024 * 1024 ) ); m_writer->SetMinColSize( ( off_t ) ( m_collection_size - m_collection_min_cut_file_size ) * ( off_t ) ( 1024 * 1024 ) ); } if ( m_avc ) { if ( m_isRewindFirst && !m_interactive ) { // Stop whatever is happening m_avc->Stop( m_node ); // Wait until it is stopped while ( !g_done && AVC1394_MASK_RESPONSE_OPERAND( m_avc->TransportStatus( m_node ), 3 ) != AVC1394_VCR_OPERAND_WIND_STOP ) { timespec t = {0, 125000000L}; nanosleep( &t, NULL ); } // Rewind if ( !g_done ) { m_avc->Rewind( m_node ); timespec t = {0, 125000000L}; nanosleep( &t, NULL ); } // Wait until is done rewinding while ( !g_done && AVC1394_MASK_RESPONSE_OPERAND( m_avc->TransportStatus( m_node ), 3 ) != AVC1394_VCR_OPERAND_WIND_STOP ) { timespec t = {0, 125000000L}; nanosleep( &t, NULL ); } } // Now Play so we can capture something if ( !g_done ) m_avc->Play( m_node ); } sendEvent( "Waiting for %s...", m_hdv ? "HDV" : "DV" ); // this is a little unclean, checking global g_done from main.cc to allow interruption while ( !g_done && m_frame == NULL ) { timespec t = {0, 25000000L}; nanosleep( &t, NULL ); } if ( !g_done && m_frame ) { // OK, we have data, commence capture sendEvent( "Capture Started" ); m_captureActive = true; m_total_frames = 0; // parse the SMIL time value duration if ( m_timeDuration == NULL && ! m_duration.empty() ) m_timeDuration = new SMIL::MediaClippingTime( m_duration, m_frame->GetFrameRate() ); if ( m_dst_file_name ) pthread_mutex_unlock( &capture_mutex ); } else { // No data received, throw an error if ( m_dst_file_name ) pthread_mutex_unlock( &capture_mutex ); const char *err = m_hdv ? "no HDV. Try again before giving up." : "no DV"; if ( m_hdv ) reset_bus( m_port ); throw std::string( err ); } } void DVgrab::stopCapture() { pthread_mutex_lock( &capture_mutex ); if ( m_writer != NULL ) { std::string filename = m_writer->GetFileName(); int frames = m_writer->GetFramesWritten(); float size = ( float ) m_writer->GetFileSize() / 1024 / 1024; m_writer->Close(); delete m_writer; m_writer = NULL; if ( m_avc && m_interactive ) m_avc->Pause( m_node ); if ( m_frame != NULL ) { TimeCode timeCode; struct tm recDate; m_frame->GetTimeCode( timeCode ); if ( ! 
m_frame->GetRecordingDate( recDate ) ) { // If the month is invalid, then report system date/time time_t timesys; time( ×ys ); localtime_r( ×ys, &recDate ); } sendEvent( "\"%s\": %8.2f MiB %d frames timecode %2.2d:%2.2d:%2.2d.%2.2d date %4.4d.%2.2d.%2.2d %2.2d:%2.2d:%2.2d", filename.c_str(), size, frames, timeCode.hour, timeCode.min, timeCode.sec, timeCode.frame, recDate.tm_year + 1900, recDate.tm_mon + 1, recDate.tm_mday, recDate.tm_hour, recDate.tm_min, recDate.tm_sec ); } else sendEvent( "\"%s\" %8.2f MiB %d frames", filename.c_str(), size, frames ); sendEvent( "Capture Stopped" ); if ( m_dropped_frames > 0 ) sendEvent( "Warning: %d dropped frames.", m_dropped_frames ); if ( m_bad_frames > 0 ) sendEvent( "Warning: %d damaged frames.", m_bad_frames ); m_dropped_frames = 0; m_bad_frames = 0; m_captureActive = false; } pthread_mutex_unlock( &capture_mutex ); } void DVgrab::testCapture( void ) { pthread_attr_t thread_attributes; sendEvent( "Bus Reset, launching watchdog thread" ); pthread_attr_init( &thread_attributes ); pthread_attr_setdetachstate( &thread_attributes, PTHREAD_CREATE_DETACHED ); pthread_create( &watchdog_thread, NULL, watchdogThreadProxy, this ); } void* DVgrab::watchdogThreadProxy( void* arg ) { DVgrab *self = static_cast< DVgrab* >( arg ); self->watchdogThread(); return NULL; } void DVgrab::watchdogThread() { if ( m_reader ) { if ( ! m_reader->WaitForAction( 1 ) ) { cleanup(); sendEvent( "Error: timed out waiting for DV after bus reset" ); throw; } // Otherwise, reestablish the connection if ( m_connection ) { int newChannel = m_connection->Reconnect(); if ( newChannel != m_channel ) { cleanup(); sendEvent( "Error: unable to reestablish connection after bus reset" ); throw; // TODO: the following attempt to recreate reader and restart capture // does not work #if 0 bool restartCapture = m_captureActive; if ( m_captureActive ) stopCapture(); m_reader_active = false; if ( m_reader ) { m_reader->TriggerAction( ); pthread_join( capture_thread, NULL ); m_reader->StopThread(); delete m_reader; } sendEvent( "Closed existing reader" ); m_reader = new iec61883Reader( m_port, m_channel, m_buffers, this->testCaptureProxy, this, m_hdv ); if ( m_reader ) { sendEvent( "new reader created" ); pthread_create( &capture_thread, NULL, captureThread, this ); m_reader->StartThread(); } sendEvent( "restarting capture" ); if ( restartCapture ) startCapture(); #endif } } } } void DVgrab::testCaptureProxy( void* arg ) { DVgrab *self = static_cast< DVgrab * >( arg ); self->testCapture(); } void *DVgrab::captureThread( void *arg ) { DVgrab * me = ( DVgrab* ) arg; me->captureThreadRun(); return NULL; } void DVgrab::sendCaptureStatus( const char *name, float size, int frames, TimeCode *tc, struct tm *rd, bool newline ) { char tc_str[64], rd_str[128]; if ( tc ) sprintf( tc_str, "%2.2d:%2.2d:%2.2d.%2.2d", tc->hour, tc->min, tc->sec, tc->frame ); else sprintf( tc_str, "??:??:??.??" ); if ( rd ) sprintf( rd_str, "%4.4d.%2.2d.%2.2d %2.2d:%2.2d:%2.2d", rd->tm_year + 1900, rd->tm_mon + 1, rd->tm_mday, rd->tm_hour, rd->tm_min, rd->tm_sec ); else sprintf( rd_str, "????.??.?? ??:??:??" ); sendEventParams( 2, 0, "\"%s\": %8.2f MiB %5d frames timecode %s date %s%s", name, size, frames, tc_str, rd_str, newline ? 
"\n" : "" ); } void DVgrab::writeFrame() { // All access to the writer is protected pthread_mutex_lock( &capture_mutex ); // see if we have exceeded requested duration if ( m_timeDuration && m_timeDuration->isResolved() && ( ( float )m_total_frames++ / m_frame->GetFrameRate() * 1000.0 + 0.5 ) >= m_timeDuration->getResolvedOffset() ) { pthread_mutex_unlock( &capture_mutex ); stopCapture(); m_reader_active = false; } else if ( m_writer != NULL ) { std::string fileName = m_writer->GetFileName(); float size = ( float ) m_writer->GetFileSize() / 1024 / 1024; int framesWritten = m_writer->GetFramesWritten(); TimeCode tc, *timeCode = &tc; struct tm rd, *recDate = &rd; TimeCode *lasttc = m_isLastTimeCodeSet ? &m_lastTimeCode : 0; struct tm *lastrd = m_isLastRecDateSet ? &m_lastRecDate : 0; if ( !m_frame->GetTimeCode( tc ) ) timeCode = 0; if ( !m_frame->GetRecordingDate( rd ) ) { // If the month is invalid, then report system date/timem_reader_active time_t timesys; time( ×ys ); localtime_r( ×ys, recDate ); } if ( m_lockstep && m_lockPending && m_frame_count > 0 && m_frame->CanStartNewStream() ) { // If a lock is pending due to dropped frames, close the file if ( m_writer->FileIsOpen() ) { m_writer->CollectionCounterUpdate(); m_writer->Close(); } if ( !m_hdv && timeCode ) { // Convert timecode to #frames SMIL::MediaClippingTime mcTime( m_frame->GetFrameRate() ); std::ostringstream sb; sb << setfill( '0' ) << std::setw( 2 ) << timeCode->hour << ':' << timeCode->min << ':' << timeCode->sec << ':' << timeCode->frame; DVFrame *dvframe = static_cast( m_frame ); if ( dvframe->IsPAL() ) mcTime.parseSmpteValue( sb.str() ); else mcTime.parseSmpteNtscDropValue( sb.str() ); // If lock step point (multiple of frame count) is reached, skip writing if ( mcTime.getFrames() % m_frame_count != 0 ) { pthread_mutex_unlock( &capture_mutex ); return; } } m_lockPending = false; } if ( ! m_writer->WriteFrame( m_frame ) ) { pthread_mutex_unlock( &capture_mutex ); stopCapture(); throw std::string( "writing failed" ); } m_isNewFile |= m_writer->IsNewFile(); if ( m_writer->IsNewFile() && !m_writer->IsFirstFile() ) { sendCaptureStatus( fileName.c_str(), size, framesWritten, lasttc, lastrd, true ); if ( m_dropped_frames > 0 ) sendEvent( "Warning: %d dropped frames.", m_dropped_frames ); m_dropped_frames = 0; if ( m_bad_frames > 0 ) sendEvent( "Warning: %d damaged frames.", m_bad_frames ); m_bad_frames = 0; } else if ( m_showstatus ) { sendCaptureStatus( m_writer->GetFileName().c_str(), (float) m_writer->GetFileSize() / 1024 / 1024, m_writer->GetFramesWritten(), timeCode, recDate, false ); } if ( timeCode ) { memcpy( &m_lastTimeCode, timeCode, sizeof( m_lastTimeCode ) ); m_isLastTimeCodeSet = true; } if ( recDate ) { memcpy( &m_lastRecDate, recDate, sizeof( m_lastRecDate ) ); m_isLastRecDateSet = true; if ( m_srt ) { if ( m_isNewFile ) { subWriter.newFile( m_writer->GetFileName().c_str() ); m_isNewFile = false; } if ( !subWriter.hasFrameRate() ) subWriter.setFrameRate( m_frame->GetFrameRate() ); subWriter.addRecordingDate( *recDate, tc ); } } } pthread_mutex_unlock( &capture_mutex ); } void DVgrab::sendFrameDroppedStatus( const char *reason, const char *meaning ) { TimeCode timeCode; struct tm recDate; char tc[32], rd[32]; if ( m_frame && m_frame->GetTimeCode( timeCode ) ) sprintf( tc, "%2.2d:%2.2d:%2.2d.%2.2d", timeCode.hour, timeCode.min, timeCode.sec, timeCode.frame ); else sprintf( tc, "??:??:??.??" 
); if ( m_frame && m_frame->GetRecordingDate( recDate ) ) sprintf( rd, "%4.4d.%2.2d.%2.2d %2.2d:%2.2d:%2.2d", recDate.tm_year + 1900, recDate.tm_mon + 1, recDate.tm_mday, recDate.tm_hour, recDate.tm_min, recDate.tm_sec ); else sprintf( rd, "????.??.?? ??:??:??" ); sendEvent( "\n\a\"%s\": %s: timecode %s date %s", m_writer ? m_writer->GetFileName().c_str() : "", reason, tc, rd ); sendEvent( meaning ); } void DVgrab::captureThreadRun() { m_lockPending = true; m_reader_active = true; // Loop until we're informed otherwise while ( m_reader_active ) { pthread_testcancel(); // Wait for the reader to indicate that something has happened m_reader->WaitForAction( ); int dropped = m_reader->GetDroppedFrames(); int badFrames = m_reader->GetBadFrames(); // Get the next frame if ( ( m_frame = m_reader->GetFrame() ) == NULL ) // reader has erred or signaling a stop condition (end of pipe) break; // Check if the out queue is falling behind bool critical_mass = m_reader->GetOutQueueSize( ) > m_reader->GetInQueueSize( ); // Handle exceptional situations if ( dropped > 0 ) { m_dropped_frames += dropped; sendFrameDroppedStatus( "buffer underrun near", "This error means that the frames could not be written fast enough." ); if ( m_lockstep && m_frame_count > 0 ) { if ( m_writer->FileIsOpen() ) { if ( ( m_lockstep_maxdrops > -1 && dropped > m_lockstep_maxdrops ) ||( m_lockstep_totaldrops > -1 && m_dropped_frames > m_lockstep_totaldrops ) ) { sendEvent( "Warning: closing file early due to too many dropped frames." ); m_lockPending = true; } for ( int n = 0; n < dropped; n++ ) writeFrame(); } else { m_dropped_frames = 0; } } } if ( badFrames > 0 ) { m_bad_frames += badFrames; sendFrameDroppedStatus( "damaged frame near", "This means that there were missing or invalid FireWire packets." ); } if ( ! m_frame->IsComplete() ) { m_dropped_frames++; sendFrameDroppedStatus( "frame dropped", "This error means that the ieee1394 driver received an incomplete frame." ); if ( m_lockstep && m_frame_count > 0 ) { if ( m_writer->FileIsOpen() ) { if ( m_lockstep_totaldrops > -1 && m_dropped_frames > m_lockstep_totaldrops ) { sendEvent( "Warning: closing file early due to too many dropped frames." ); m_lockPending = true; } writeFrame(); } else { m_dropped_frames = 0; } } } else { if ( m_hdv ) { writeFrame(); } else { DVFrame *dvframe = static_cast( m_frame ); TimeCode timeCode = { 0, 0, 0, 0 }; dvframe->GetTimeCode( timeCode ); if ( dvframe->IsNormalSpeed() && ( m_jpeg_overwrite || !m_avc || !m_isRecordMode || ( m_isRecordMode && strcmp( avc1394_vcr_decode_status( m_transportStatus ), "Recording" ) == 0 && !( timeCode.hour == 0 && timeCode.min == 0 && timeCode.sec == 0 && timeCode.frame == 0 ) ) ) ) writeFrame(); } // drop frame on stdout if getting low on buffers if ( !critical_mass && m_raw_pipe ) { fd_set wfds; struct timeval tv = { 0, 20000 }; FD_ZERO( &wfds ); FD_SET( fileno( stdout ), &wfds ); if ( select( fileno( stdout ) + 1, NULL, &wfds, NULL, &tv ) ) { write( fileno( stdout ), m_frame->data, m_frame->GetDataLen() ); } } } m_reader->DoneWithFrame( m_frame ); } m_reader_active = false; } void DVgrab::status( ) { char s[ 32 ]; unsigned int status; static unsigned int prevStatus = 0; std::string transportStatus( "" ); std::string timecode( "--:--:--:--" ); std::string filename( "" ); std::string duration( "" ); if ( ! 
m_avc ) return ; status = m_avc->TransportStatus( m_node ); if ( ( int ) status >= 0 ) transportStatus = avc1394_vcr_decode_status( status ); if ( prevStatus == 0 ) prevStatus = status; if ( status != prevStatus && AVC1394_MASK_RESPONSE_OPERAND( prevStatus, 2 ) == AVC1394_VCR_RESPONSE_TRANSPORT_STATE_WIND ) { quadlet_t resp2 = AVC1394_MASK_RESPONSE_OPERAND( status, 2 ); quadlet_t resp3 = AVC1394_MASK_RESPONSE_OPERAND( status, 3 ); if ( resp2 == AVC1394_VCR_RESPONSE_TRANSPORT_STATE_WIND && resp3 == AVC1394_VCR_OPERAND_WIND_STOP ) sendEvent( "Winding Stopped" ); } m_transportStatus = prevStatus = status; if ( m_avc->Timecode( m_node, s ) ) timecode = s; if ( m_writer != NULL ) filename = m_writer->GetFileName(); else filename = ""; if ( m_frame != NULL && m_writer != NULL ) { sprintf( s, "%8.2f", ( float ) m_writer->GetFramesWritten() / m_frame->GetFrameRate() ); duration = s; } else duration = ""; fprintf( stderr, "%-80.80s\r", " " ); fprintf( stderr, "\"%s\" %s \"%s\" %8s sec\r", transportStatus.c_str(), timecode.c_str(), filename.c_str(), duration.c_str() ); fflush( stderr ); } bool DVgrab::execute( const char cmd ) { bool result = true; switch ( cmd ) { case 'p': if ( m_avc ) { m_avc->Play( m_node ); } break; case ' ': if ( m_avc ) { if ( isPlaying() ) m_avc->Pause( m_node ); else m_avc->Play( m_node ); } break; case 'h': if ( m_avc ) { m_avc->Reverse( m_node ); } break; case 'j': if ( m_avc ) { m_avc->Pause( m_node ); m_avc->Rewind( m_node ); } break; case 'k': if ( m_avc ) { m_avc->Pause( m_node ); } break; case 'l': if ( m_avc ) { m_avc->Pause( m_node ); m_avc->FastForward( m_node ); } break; case 'a': if ( m_avc ) { m_avc->Stop( m_node ); m_avc->Rewind( m_node ); } break; case 'z': if ( m_avc ) { m_avc->Stop( m_node ); m_avc->FastForward( m_node ); } break; case '1': if ( m_avc ) { m_avc->Shuttle( m_node, -14 ); } break; case '2': if ( m_avc ) { m_avc->Shuttle( m_node, -11 ); } break; case '3': if ( m_avc ) { m_avc->Shuttle( m_node, -8 ); } break; case '4': if ( m_avc ) { m_avc->Shuttle( m_node, -4 ); } break; case '5': if ( m_avc ) { m_avc->Shuttle( m_node, -1 ); } break; case '6': if ( m_avc ) { m_avc->Shuttle( m_node, 1 ); } break; case '7': if ( m_avc ) { m_avc->Shuttle( m_node, 4 ); } break; case '8': if ( m_avc ) { m_avc->Shuttle( m_node, 8 ); } break; case '9': if ( m_avc ) { m_avc->Shuttle( m_node, 11 ); } break; case '0': if ( m_avc ) { m_avc->Shuttle( m_node, 14 ); } break; case 's': case 0x1b: // Esc if ( m_captureActive ) stopCapture(); else if ( m_avc ) m_avc->Stop( m_node ); break; case 'c': startCapture(); break; case 'q': result = false; break; case '?': cerr << "q=quit, p=play, c=capture, Esc=stop, h=reverse, j=backward scan, k=pause" << endl; cerr << "l=forward scan, a=rewind, z=fast forward, 0-9=trickplay, =play/pause" << endl; break; default: //fprintf( stderr, "\nunkown key 0x%2.2x", cmd ); //result = false; break; } return result; } bool DVgrab::isPlaying() { if ( ! 
m_avc ) return false; quadlet_t resp2 = AVC1394_MASK_RESPONSE_OPERAND( m_transportStatus, 2 ); quadlet_t resp3 = AVC1394_MASK_RESPONSE_OPERAND( m_transportStatus, 3 ); return ( ( resp2 == AVC1394_VCR_RESPONSE_TRANSPORT_STATE_PLAY && resp3 != AVC1394_VCR_OPERAND_PLAY_FORWARD_PAUSE ) || ( resp2 == AVC1394_VCR_RESPONSE_TRANSPORT_STATE_RECORD && resp3 != AVC1394_VCR_OPERAND_RECORD_PAUSE ) ); } bool DVgrab::done() { if ( m_reader_active ) { // Stop capture at end of tape if ( !m_interactive && m_writer && m_writer->GetFileSize() > 0 && m_avc && !m_isRecordMode ) { m_transportStatus = m_avc->TransportStatus( m_node ); if ( AVC1394_MASK_RESPONSE_OPERAND( m_transportStatus, 3 ) == AVC1394_VCR_OPERAND_WIND_STOP && AVC1394_MASK_OPCODE( m_transportStatus ) == AVC1394_VCR_RESPONSE_TRANSPORT_STATE_WIND ) return true; } timespec t = {0, 125000000L}; return ( nanosleep( &t, NULL ) == -1 ); } return true; } void DVgrab::cleanup() { stopCapture(); if ( m_avc && !m_no_stop ) m_avc->Stop( m_node ); m_reader_active = false; if ( m_reader ) { m_reader->StopThread(); pthread_join( capture_thread, NULL ); delete m_reader; } delete m_avc; delete m_connection; delete m_timeDuration; if (m_dst_file_name) free(m_dst_file_name); } dvgrab-3.5+git20160707.1.e46042e/COPYING0000644000175000017500000004311012716434257014230 0ustar eses GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc. 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. 
If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. 
(Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. 
Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. 
The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License. dvgrab-3.5+git20160707.1.e46042e/srt.cc0000644000175000017500000001164712716434257014326 0ustar eses/* * srt.cc -- Class for writing SRT subtitle files * Copyright (C) 2008 Pelayo Bernedo * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ #ifdef HAVE_CONFIG_H #include #endif #include #include #include "srt.h" SubtitleWriter::SubtitleWriter() : frameRate( 0.0 ), lastYear( 0 ), lastMonth( 0 ), lastDay( 0 ), lastHour( 0 ), lastMinute( 0 ), frameCount( 0 ), lastFrameWritten( 0 ), titleNumber( 1 ), timeCodeValid( false ) {} SubtitleWriter::~SubtitleWriter() { finishFile(); } void SubtitleWriter::finishFile() { if ( frameCount > lastFrameWritten + 1 ) { if ( os_fr ) { writeSubtitleFrame(); } if ( os_tc ) { writeSubtitleCode(); } } } void SubtitleWriter::newFile ( const char *videoName ) { const char *dot, *cp; finishFile(); os_fr.close(); os_tc.close(); frameRate = 0.0; lastYear = lastMonth = lastDay = lastHour = lastMinute = 0; frameCount = 0; lastFrameWritten = 0; titleNumber = 1; nextCodeToWrite.hour = nextCodeToWrite.min = nextCodeToWrite.sec = 0; nextCodeToWrite.frame = 0; timeCodeValid = false; dot = 0; cp = videoName; while ( *cp ) { if ( *cp == '.' ) { dot = cp; } ++cp; } // Now dot points to the extension. 
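// Illustrative note (not part of the original source): for a capture file named
// "clip001.avi" the code below would derive "clip001.srt0" for the frame-count
// based subtitle stream (os_fr) and "clip001.srt1" for the timecode based stream
// (os_tc); names too long to fit FILENAME_MAX fall back to "datecode.srt" and
// "datecode.srt1".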
char srtName[FILENAME_MAX]; size_t baseLen; if ( dot ) { baseLen = dot - videoName; } else { baseLen = cp - videoName; } if ( baseLen + 6 >= FILENAME_MAX ) { os_tc.open( "datecode.srt1" ); os_fr.open( "datecode.srt" ); } else { memcpy( srtName, videoName, baseLen ); strcpy( srtName + baseLen, ".srt1" ); os_tc.open( srtName ); strcpy( srtName + baseLen, ".srt0" ); os_fr.open( srtName ); } } void SubtitleWriter::frameToTime ( int frameNum, int *hh, int *mm, int *ss, int *millis ) { int t = int( frameNum * 1000 / frameRate ); *millis = t % 1000; t /= 1000; *ss = t % 60; t /= 60; *mm = t % 60; *hh = t / 60; } void SubtitleWriter::codeToTime ( const TimeCode &tc, int *hh, int *mm, int *ss, int *millis ) { *hh = tc.hour; *mm = tc.min; *ss = tc.sec; *millis = int( tc.frame / frameRate * 1000 ); } void SubtitleWriter::writeSubtitleFrame() { int hh1, mm1, ss1, millis1, hh2, mm2, ss2, millis2; frameToTime( lastFrameWritten + 1, &hh1, &mm1, &ss1, &millis1 ); frameToTime( frameCount -1, &hh2, &mm2, &ss2, &millis2 ); if ( os_fr ) { os_fr << titleNumber << '\n' << std::setfill( '0' ); os_fr << std::setw( 2 ) << hh1 << ':' << std::setw( 2 ) << mm1 << ':' << std::setw( 2 ) << ss1 << ',' << std::setw( 3 ) << millis1 << " --> " << std::setw( 2 ) << hh2 << ':' << std::setw( 2 ) << mm2 << ':' << std::setw( 2 ) << ss2 << ',' << std::setw( 3 ) << millis2 << '\n'; os_fr << lastYear << '-' << std::setw( 2 ) << lastMonth << '-' << std::setw( 2 ) << lastDay << ' ' << std::setw( 2 ) << lastHour << ':' << std::setw( 2 ) << lastMinute << "\n\n"; os_fr.flush(); } lastFrameWritten = frameCount; } void SubtitleWriter::writeSubtitleCode() { int hh1, mm1, ss1, millis1, hh2, mm2, ss2, millis2; codeToTime( nextCodeToWrite, &hh1, &mm1, &ss1, &millis1 ); codeToTime( codeNow, &hh2, &mm2, &ss2, &millis2 ); if ( os_tc ) { os_tc << titleNumber << '\n' << std::setfill( '0' ); os_tc << std::setw( 2 ) << hh1 << ':' << std::setw( 2 ) << mm1 << ':' << std::setw( 2 ) << ss1 << ',' << std::setw( 3 ) << millis1 << " --> " << std::setw( 2 ) << hh2 << ':' << std::setw( 2 ) << mm2 << ':' << std::setw( 2 ) << ss2 << ',' << std::setw( 3 ) << millis2 << '\n'; os_tc << lastYear << '-' << std::setw( 2 ) << lastMonth << '-' << std::setw( 2 ) << lastDay << ' ' << std::setw( 2 ) << lastHour << ':' << std::setw( 2 ) << lastMinute << "\n\n"; os_tc.flush(); } titleNumber++; } void SubtitleWriter::addRecordingDate ( struct tm &rd, const TimeCode &tc ) { int y, m, d, hh, mm; ++frameCount; y = rd.tm_year + 1900; m = rd.tm_mon + 1; d = rd.tm_mday; hh = rd.tm_hour; mm = rd.tm_min; if ( y != lastYear || m != lastMonth || d != lastDay || hh != lastHour || mm != lastMinute ) { // We must write a new subtitle. if ( lastYear != 0 ) { writeSubtitleFrame(); writeSubtitleCode(); nextCodeToWrite = tc; titleNumber++; } lastYear = y; lastMonth = m; lastDay = d; lastHour = hh; lastMinute = mm; } codeNow = tc; if ( !timeCodeValid ) { nextCodeToWrite = tc; timeCodeValid = true; } } dvgrab-3.5+git20160707.1.e46042e/frame.h0000644000175000017500000000450312716434257014443 0ustar eses/* * frame.h -- utilities for process digital video frames * Copyright (C) 2000 Arne Schirmacher * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. 
* * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ #ifndef _FRAME_H #define _FRAME_H 1 #ifdef HAVE_CONFIG_H #include #endif // C++ includes #include #include #include #include #include using std::string; using std::ostringstream; using std::setw; using std::setfill; using std::deque; using std::cout; using std::endl; // C includes #include #include #include #define TIMECODE_TO_SEC( tc ) (((((tc).hour * 60) + (tc).min) * 60) + (tc).sec) typedef struct TimeCode { int hour; int min; int sec; int frame; } TimeCode; #define DATA_BUFFER_LEN (1024*1024) class Frame { public: unsigned char data[ DATA_BUFFER_LEN ]; private: int dataLen; public: Frame(); virtual ~Frame(); virtual int GetDataLen( void ); virtual void SetDataLen( int len ); virtual void AddDataLen( int len ); virtual void Clear( void ); // Meta-data virtual bool GetTimeCode( TimeCode &timeCode ) { return false; } virtual bool GetRecordingDate( struct tm &recDate ) { return false; } virtual bool IsNewRecording( void ) { return false; } virtual bool IsComplete( void ) { return false; } // Video info virtual int GetWidth() { return -1; } virtual int GetHeight() { return -1; } virtual float GetFrameRate() { return -1; } // HDV vs DV virtual bool IsHDV() { return false; } virtual bool CouldBeJVCP25() { return false; } // For HDV only GOP packets can start a new stream/file // For DV we can start a new stream/file on any packet virtual bool CanStartNewStream() { return true; } }; #endif dvgrab-3.5+git20160707.1.e46042e/error.cc0000644000175000017500000000671412716434257014646 0ustar eses/* * error.cc Error handling * Copyright (C) 2000 - 2002 Arne Schirmacher * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
*/ #ifdef HAVE_CONFIG_H #include #endif // C includes #include #include // C++ includes #include #include #include #include #include using std::ostringstream; using std::string; using std::endl; using std::ends; // local includes #include "error.h" static bool needNewLine = false; #define MAX_DEBUG_PIDS 512 static int pids[MAX_DEBUG_PIDS]; static int n_pids = 0; bool d_all; bool d_hdv_pat; bool d_hdv_pmt; bool d_hdv_pids; bool d_hdv_pes; bool d_hdv_packet; bool d_hdv_video; bool d_hdv_sonya1; void d_hdv_pid_add( int pid ) { if ( n_pids < MAX_DEBUG_PIDS ) pids[n_pids++] = pid; } bool d_hdv_pid_check( int pid ) { for ( int i=0; i 0 ) vfprintf( stderr, line, list ); va_end( list ); if ( postline ) needNewLine = false; else needNewLine = true; } void real_fail_neg( int eval, const char *eval_str, const char *func, const char *file, int line ) { if ( eval < 0 ) { string exc; ostringstream sb; sb << file << ":" << line << ": In function \"" << func << "\": \"" << eval_str << "\" evaluated to " << eval; if ( errno != 0 ) sb << endl << file << ":" << line << ": errno: " << errno << " (" << strerror( errno ) << ")"; sb << ends; exc = sb.str(); throw exc; } } /** error handler for NULL result codes Whenever this is called with a NULL argument, it will throw an exception. Typically used with functions like malloc() and new(). */ void real_fail_null( const void *eval, const char *eval_str, const char *func, const char *file, int line ) { if ( eval == NULL ) { string exc; ostringstream sb; sb << file << ":" << line << ": In function \"" << func << "\": " << eval_str << " is NULL" << ends; exc = sb.str(); throw exc; } } void real_fail_if( bool eval, const char *eval_str, const char *func, const char *file, int line ) { if ( eval == true ) { string exc; ostringstream sb; sb << file << ":" << line << ": In function \"" << func << "\": condition \"" << eval_str << "\" is true"; if ( errno != 0 ) sb << endl << file << ":" << line << ": errno: " << errno << " (" << strerror( errno ) << ")"; sb << ends; exc = sb.str(); throw exc; } } dvgrab-3.5+git20160707.1.e46042e/error.h0000644000175000017500000000420212716434257014476 0ustar eses/* * error.h Error handling * Copyright (C) 2000 - 2002 Arne Schirmacher * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
*/ #ifndef _ERROR_H #define _ERROR_H 1 #include #ifdef __cplusplus extern "C" { #endif #define fail_neg(eval) real_fail_neg (eval, #eval, __ASSERT_FUNCTION, __FILE__, __LINE__) #define fail_null(eval) real_fail_null (eval, #eval, __ASSERT_FUNCTION, __FILE__, __LINE__) #define fail_if(eval) real_fail_if (eval, #eval, __ASSERT_FUNCTION, __FILE__, __LINE__) extern bool d_all; extern bool d_hdv_pat; extern bool d_hdv_pmt; extern bool d_hdv_pids; extern bool d_hdv_pes; extern bool d_hdv_packet; extern bool d_hdv_video; extern bool d_hdv_sonya1; void d_hdv_pid_add( int p ); bool d_hdv_pid_check( int p ); #define DEBUG_PARAMS( type, prel, postl, msg... ) do { if ( (type) || d_all ) sendEventParams( prel, postl, msg ); } while (0) #define DEBUG_RAW( type, msg... ) DEBUG_PARAMS( type, 0, 0, msg ) #define DEBUG( type, msg... ) DEBUG_PARAMS( type, 1, 1, msg ) #define sendEvent( msg... ) sendEventParams( 2, 1, msg ) void sendEventParams( int clearline, int newline, const char *format, ... ); void real_fail_neg ( int eval, const char * eval_str, const char * func, const char * file, int line ); void real_fail_null ( const void * eval, const char * eval_str, const char * func, const char * file, int line ); void real_fail_if ( bool eval, const char * eval_str, const char * func, const char * file, int line ); #ifdef __cplusplus } #endif #endif dvgrab-3.5+git20160707.1.e46042e/smiltime.cc0000644000175000017500000003755712716434257015351 0ustar eses/* * smiltime.cc -- W3C SMIL2 Time value parser * Copyright (C) 2003-2008 Dan Dennedy * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
* */ #ifdef HAVE_CONFIG_H #include #endif #include "smiltime.h" #include #include #include #include #include #include #include "stringutils.h" namespace SMIL { Time::Time() { Time( 0L ); indefinite = true; timeType = SMIL_TIME_INDEFINITE; resolved = true; } Time::Time( long time ) : timeValue( time ), offset( 0 ), indefinite( false ), resolved( true ), syncbaseBegin( false ), timeType( SMIL_TIME_OFFSET ) { } Time::Time( string time ) { Time( 0L ); parseTimeValue( time ); } void Time::parseTimeValue( string time ) { time = StringUtils::stripWhite( time ); resolved = false; if ( StringUtils::begins ( time, "indefinite" ) || time.empty() || time.size() == 0 ) { indefinite = true; timeType = SMIL_TIME_INDEFINITE; resolved = true; //cerr << "smil: this is an indefinite time value" << endl; } else if ( time.at( 0 ) == '+' || time.at( 0 ) == '-' ) { //cerr << "smil: this is an offset time value" << endl; timeValue = parseClockValue( time.substr( 1 ) ); if ( time.at( 0 ) == '-' ) timeValue *= -1; timeType = SMIL_TIME_OFFSET; resolved = true; indefinite = false; } else if ( StringUtils::begins ( time, "wallclock(" ) ) { //parseWallclockValue(time); timeType = SMIL_TIME_WALLCLOCK; resolved = true; indefinite = false; //cerr << "smil: this is a wallclock time value" << endl; } else if ( StringUtils::begins ( time, "accesskey(" ) ) { timeType = SMIL_TIME_ACCESSKEY; //cerr << "smil: this is an accesskey time value" << endl; } else { std::ostringstream token; char c; string::size_type pos = 0; string base; for ( ; pos < time.size(); ++pos ) { c = time.at( pos ); if ( c == '+' || c == '-' ) { token << std::ends; string symbol = token.str(); token.str( string() ); //cerr << "smil: parsed symbol token '" << symbol << "'" << endl; if ( symbol == string( "begin" ) ) { //cerr << "smil: this is a sync based time value" << endl; syncbaseBegin = true; timeType = SMIL_TIME_SYNC_BASED; } else if ( symbol == string( "end" ) ) { //cerr << "smil: this is a sync based time value" << endl; syncbaseBegin = false; timeType = SMIL_TIME_SYNC_BASED; } else if ( StringUtils::begins( symbol, "marker(" ) ) { //cerr << "smil: this is a media marker time value" << endl; //parseMediaMarkerValue(base, token); // base must not be empty timeType = SMIL_TIME_MEDIA_MARKER; } else if ( StringUtils::begins( symbol, "repeat(" ) ) { //cerr << "smil: this is an event repeat time value" << endl; //parseRepeatValue(base, token); // base can be empty := current element timeType = SMIL_TIME_REPEAT; } else { //cerr << "smil: this is an event based time value" << endl; //parseEventValue( base, token ); // base can be empty := current element timeType = SMIL_TIME_EVENT_BASED; } offset = parseClockValue( time.substr( pos + 1 ) ); if ( c == '-' ) offset *= -1; break; } else if ( c == '.' && ( pos == 0 || time.at( pos - 1 ) != '\\' ) ) { token << std::ends; base = token.str(); token.str( string() ); //cerr << "smil: parsed base token '" << base << "'" << endl; } else { token << c; } } token << std::ends; string remaining = token.str(); //cerr << "smil: parsed remaining token '" << remaining << "'" << endl; if ( !remaining.empty() ) { //cerr << "smil: this is an offset time value" << endl; offset = parseClockValue( time ); timeType = SMIL_TIME_OFFSET; resolved = true; indefinite = false; } } } static inline string getFraction( string& time ) { string::size_type pos; string fraction; if ( ( pos = time.find( '.' 
) ) != string::npos ) { fraction = time.substr( pos ); //cerr << "smil: fraction = " << fraction << endl; time = time.substr( 0, pos ); } return fraction; } long Time::parseClockValue( string time ) { long total = 0; string hours; string minutes; string seconds; string milliseconds; string::size_type pos1, pos2; if ( ( pos1 = time.find( ':' ) ) != string::npos ) { if ( ( pos2 = time.find( ':', pos1 + 1 ) ) != string::npos ) { //parse Full-clock-value //cerr << "smil: parsing Full clock value " << time << endl; hours = time.substr( 0, pos1 ); time = time.substr( pos1 + 1 ); pos1 = pos2 - pos1 - 1; } //parse Partial-clock-value //cerr << "smil: parsing Partial clock value " << time << endl; if ( ( pos2 = time.find( '.' ) ) != string::npos ) { milliseconds = "0" + time.substr( pos2 ); //keep the decimal point time = time.substr( 0, pos2 ); } minutes = time.substr( 0, pos1 ); seconds = time.substr( pos1 + 1 ); } else { //parse Timecount-value //cerr << "smil: parsing Timecount value " << time << endl; if ( StringUtils::ends( time, "h" ) ) { total = ( long ) ( atof( getFraction( time ).c_str() ) * 3600000 ); hours = time; } else if ( StringUtils::ends( time, "min" ) ) { total = ( long ) ( atof( getFraction( time ).c_str() ) * 60000 ); minutes = time; } else if ( StringUtils::ends( time, "ms" ) ) { total = ( long ) ( atof( time.c_str() ) + 0.5 ); } else { total = ( long ) ( atof( getFraction( time ).c_str() ) * 1000 ); seconds = time; } } //cerr << "smil: subtotal = " << total << ", h = " << atol(hours.c_str()) << // ", m = " << atol(minutes.c_str()) << ", s = " << atol(seconds.c_str()) << // ", ms = " << (long) (atof(milliseconds.c_str()) * 1000) << endl; total += ( atol( hours.c_str() ) * 3600 + atol( minutes.c_str() ) * 60 + atol( seconds.c_str() ) ) * 1000 + ( long ) ( atof( milliseconds.c_str() ) * 1000 ); return total; } long Time::getResolvedOffset() { return ( resolved ? ( timeValue + offset ) : 0 ); } bool Time::isNegative() { return ( resolved && !indefinite && ( getResolvedOffset() < 0 ) ); } bool Time::operator< ( Time& time ) { return !( ( *this > time ) || ( *this == time ) ); } bool Time::operator==( Time& time ) { return ( ( this->isIndefinite() && time.isIndefinite() ) || ( this->getResolvedOffset() == time.getResolvedOffset() ) ); } bool Time::operator> ( Time& time ) { return ( ( !resolved ) || ( indefinite && time.isResolved() && !time.isIndefinite() ) || ( resolved && time.isResolved() && this->getResolvedOffset() > time.getResolvedOffset() ) ); } void Time::setTimeValue( Time& source ) { resolved = source.isResolved(); indefinite = source.isIndefinite(); timeValue = source.getTimeValue(); } string Time::toString( TimeFormat format ) { long ms = getResolvedOffset(); ostringstream str; if ( indefinite ) { str << "indefinite"; } else if ( !resolved ) { str << "unresolved"; } else switch( format ) { case TIME_FORMAT_CLOCK: { int hh = ( ms / 3600000 ); ms -= hh * 3600000; int mm = ( ms / 60000 ); ms -= mm * 60000; int ss = ms / 1000; ms -= ss * 1000; str << hh << ":" << mm << ":" << ss << "." << std::setfill( '0' ) << std::setw( 3 ) << ms; break; } case TIME_FORMAT_MS: str << ms << "ms"; break; case TIME_FORMAT_S: str << ( ms / 1000 ) << "." << ( ms % 1000 ) << "s"; break; case TIME_FORMAT_MIN: str << ( ms / 60000 ) << "." << std::setfill( '0' ) << std::setw( 4 ) << round( ( float )( ms % 60000 ) / 6 ) << "min"; break; case TIME_FORMAT_H: str << ( ms / 3600000 ) << "." 
<< std::setfill( '0' ) << std::setw( 5 ) << round( ( float )( ms % 3600000 ) / 36 ) << "h"; break; default: break; } str << std::ends; return str.str(); } string Time::serialise() { return toString(); } MediaClippingTime::MediaClippingTime( ) : Time( 0L ), m_framerate( 0.0 ), m_isSmpteValue( false ), m_subframe( SMIL_SUBFRAME_NONE ) { } MediaClippingTime::MediaClippingTime( float framerate ) : Time( 0L ), m_framerate( framerate ), m_isSmpteValue( false ), m_subframe( SMIL_SUBFRAME_NONE ) { } MediaClippingTime::MediaClippingTime( string time, float framerate ) : Time( 0L ), m_framerate( framerate ), m_isSmpteValue( false ), m_subframe( SMIL_SUBFRAME_NONE ) { parseValue( time ); } void MediaClippingTime::setFramerate( float framerate ) { m_framerate = framerate; } void MediaClippingTime::parseValue( string time ) { time = StringUtils::stripWhite( time ); //cerr << "smil: parsing media clipping time " << time << endl; if ( StringUtils::begins( time, "smpte=" ) || StringUtils::begins( time, "smpte-25=" ) ) parseSmpteValue( time.substr( time.find( '=' ) + 1 ) ); else if ( StringUtils::begins( time, "smpte-30-drop=" ) ) parseSmpteNtscDropValue( time.substr( time.find( '=' ) + 1 ) ); else if ( time.find( '=' ) != string::npos ) // discard npt= parseTimeValue( time.substr( time.find( '=' ) + 1 ) ); else parseTimeValue( time ); } void MediaClippingTime::parseSmpteNtscDropValue( string time ) { string hours; string minutes; string seconds; string frames; string::size_type pos; m_isSmpteValue = true; if ( ( pos = time.find( ':' ) ) != string::npos ) { //cerr << "smil: parsing HH SMPTE value " << time << endl; hours = time.substr( 0, pos ); time = time.substr( pos + 1 ); if ( ( pos = time.find( ':' ) ) != string::npos ) { //cerr << "smil: parsing MM SMPTE value " << time << endl; minutes = time.substr( 0, pos ); time = time.substr( pos + 1 ); if ( ( pos = time.find( ':' ) ) != string::npos ) { //cerr << "smil: parsing SS SMPTE value " << time << endl; seconds = time.substr( 0, pos ); time = time.substr( pos + 1 ); if ( ( pos = time.find( '.' 
) ) != string::npos ) { //cerr << "smil: parsing FF SMPTE value " << time << endl; frames = time.substr( 0, pos ); switch ( time.at( pos + 1 ) ) { case '0' : m_subframe = SMIL_SUBFRAME_0; break; case '1' : m_subframe = SMIL_SUBFRAME_1; break; default: m_subframe = SMIL_SUBFRAME_NONE; break; } } else { frames = time; } } else { // minutes, seconds, and frames only frames = time; seconds = minutes; minutes = hours; hours = ""; } } else { // frames and seconds only frames = time; seconds = hours; hours = ""; } } else { // frames only frames = time; } //cerr << "hh = " << hours << ", mm = " << minutes << ", ss = " << seconds << ", ff = " << frames << endl; const float mspf = 1001.0 / 30.0; long min = atol( minutes.c_str() ); timeValue = atol( hours.c_str() ) * ( 30 * 60 * 60 - 108 ) + min * ( 30 * 60 - 2 ) + min / 10 * 2 + atol( seconds.c_str() ) * 30 + atol( frames.c_str() ); timeValue = ( long )( ( float )timeValue * mspf ); resolved = true; indefinite = false; } void MediaClippingTime::parseSmpteValue( string time ) { string hours; string minutes; string seconds; string frames; string::size_type pos; m_isSmpteValue = true; if ( ( pos = time.find( ':' ) ) != string::npos ) { //cerr << "smil: parsing HH SMPTE value " << time << endl; hours = time.substr( 0, pos ); time = time.substr( pos + 1 ); if ( ( pos = time.find( ':' ) ) != string::npos ) { //cerr << "smil: parsing MM SMPTE value " << time << endl; minutes = time.substr( 0, pos ); time = time.substr( pos + 1 ); if ( ( pos = time.find( ':' ) ) != string::npos ) { //cerr << "smil: parsing SS SMPTE value " << time << endl; seconds = time.substr( 0, pos ); time = time.substr( pos + 1 ); if ( ( pos = time.find( '.' ) ) != string::npos ) { //cerr << "smil: parsing FF SMPTE value " << time << endl; frames = time.substr( 0, pos ); switch ( time.at( pos + 1 ) ) { case '0' : m_subframe = SMIL_SUBFRAME_0; break; case '1' : m_subframe = SMIL_SUBFRAME_1; break; default: m_subframe = SMIL_SUBFRAME_NONE; break; } } else { frames = time; } } else { // minutes, seconds, and frames only frames = time; seconds = minutes; minutes = hours; hours = ""; } } else { // frames and seconds only frames = time; seconds = hours; hours = ""; } } else { // frames only frames = time; } //cerr << "hh = " << hours << ", mm = " << minutes << ", ss = " << seconds << ", ff = " << frames << endl; timeValue = ( atol( hours.c_str() ) * 3600 + atol( minutes.c_str() ) * 60 + atol( seconds.c_str() ) ) * 1000 + ( long )( atof( frames.c_str() ) / m_framerate * 1000 + 0.5 ); resolved = true; indefinite = false; } string MediaClippingTime::toString( TimeFormat format ) { if ( format == TIME_FORMAT_SMPTE ) { if ( indefinite ) return "indefinite"; else if ( !resolved ) return "unresolved"; else { long ms = getResolvedOffset(); int hh = ( ms / 3600000 ); ms -= hh * 3600000; int mm = ( ms / 60000 ); ms -= mm * 60000; int ss = ms / 1000; ms -= ss * 1000; ostringstream str; str << hh << ":" << std::setfill( '0' ) << std::setw( 2 ) << mm << ":" << std::setfill( '0' ) << std::setw( 2 ) << ss << ( m_framerate == 25.0 ? 
":" : ";" ) << std::setfill( '0' ) << std::setw( 2 ) << round( m_framerate * ms / 1000.0 ); if ( m_subframe == SMIL_SUBFRAME_0 ) str << ".0"; else if ( m_subframe == SMIL_SUBFRAME_1 ) str << ".1"; str << std::ends; return str.str(); } } else { return Time::toString( format ); } } string MediaClippingTime::serialise() { std::string s; if ( m_isSmpteValue ) { if ( m_framerate == 25.0 ) s = "smpte-25="; else s = "smpte="; return s + toString(); } return Time::toString(); } string framesToSmpte( int frames, int fps ) { char s[ 12 ]; int hours, mins, secs; int cur = frames; if ( fps == 29 ) fps = 30; if ( frames == 0 ) { hours = 0; mins = 0; secs = 0; } else { /* NTSC drop-frame */ if ( fps == 30 ) { int max_frames = cur; for ( int j = 1800; j <= max_frames; j += 1800 ) { if ( j % 18000 ) { max_frames += 2; cur += 2; } } } hours = cur / ( fps * 3600 ); cur -= hours * ( fps * 3600 ); mins = cur / ( fps * 60 ); cur -= mins * ( fps * 60 ); secs = cur / fps; cur -= secs * fps; } snprintf( s, 12, "%2.2d:%2.2d:%2.2d%s%2.2d", hours, mins, secs, ( fps == 30 ) ? ";" : ":", cur ); return string( s ); } string MediaClippingTime::parseValueToString( string time, TimeFormat format ) { if ( format == TIME_FORMAT_NONE || format == TIME_FORMAT_FRAMES || format == TIME_FORMAT_SMPTE ) parseSmpteValue( time ); else parseValue( time ); return toString( format ); } string MediaClippingTime::parseFramesToString( int frames, TimeFormat format ) { timeValue = ( long )( 1000.0 * frames / m_framerate + 0.5 ); switch ( format ) { case TIME_FORMAT_NONE: return ""; case TIME_FORMAT_FRAMES: { std::ostringstream str; str << frames << std::ends; return str.str(); } case TIME_FORMAT_SMPTE: return framesToSmpte( frames, ( int )m_framerate ); default: return toString( format ); } } int MediaClippingTime::getFrames() { return ( int )( m_framerate * getResolvedOffset() / 1000.0 + 0.5 ); } } // namespace dvgrab-3.5+git20160707.1.e46042e/io.c0000644000175000017500000001033712716434257013755 0ustar eses/* * io.c -- dv1394d client demo input/output * Copyright (C) 2002-2003 Ushodaya Enterprises Limited * Author: Charles Yates * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
*/ #ifdef HAVE_CONFIG_H #include #endif /* System header files */ #include #include #include #include #include #include #include /* Application header files */ #include "io.h" char *chomp( char *input ) { if ( input != NULL ) { int length = strlen( input ); if ( length && input[ length - 1 ] == '\n' ) input[ length - 1 ] = '\0'; if ( length > 1 && input[ length - 2 ] == '\r' ) input[ length - 2 ] = '\0'; } return input; } char *trim( char *input ) { if ( input != NULL ) { int length = strlen( input ); int first = 0; while ( first < length && isspace( input[ first ] ) ) first ++; memmove( input, input + first, length - first + 1 ); length = length - first; while ( length > 0 && isspace( input[ length - 1 ] ) ) input[ -- length ] = '\0'; } return input; } char *strip_quotes( char *input ) { if ( input != NULL ) { char * ptr = strrchr( input, '\"' ); if ( ptr != NULL ) * ptr = '\0'; if ( input[ 0 ] == '\"' ) strcpy( input, input + 1 ); } return input; } char *get_string( char *output, int maxlength, char *use ) { char * value = NULL; strcpy( output, use ); if ( trim( chomp( fgets( output, maxlength, stdin ) ) ) != NULL ) { if ( !strcmp( output, "" ) ) strcpy( output, use ); value = output; } return value; } int *get_int( int *output, int use ) { int * value = NULL; char temp[ 132 ]; *output = use; if ( trim( chomp( fgets( temp, 132, stdin ) ) ) != NULL ) { if ( strcmp( temp, "" ) ) * output = atoi( temp ); value = output; } return value; } /** This stores the previous settings */ static struct termios oldtty; static int mode = 0; /** This is called automatically on application exit to restore the previous tty settings. */ void term_exit( void ) { if ( mode == 1 ) { tcsetattr( 0, TCSANOW, &oldtty ); mode = 0; } } /** Init terminal so that we can grab keys without blocking. */ void term_init( ) { struct termios tty; if ( isatty( fileno( stdin ) ) ) { tcgetattr( 0, &tty ); oldtty = tty; tty.c_iflag &= ~( IGNBRK | BRKINT | PARMRK | ISTRIP | INLCR | IGNCR | ICRNL | IXON ); tty.c_oflag |= OPOST; tty.c_lflag &= ~( ECHO | ECHONL | ICANON | IEXTEN ); tty.c_cflag &= ~( CSIZE | PARENB ); tty.c_cflag |= CS8; tty.c_cc[ VMIN ] = 1; tty.c_cc[ VTIME ] = 0; tcsetattr( 0, TCSANOW, &tty ); mode = 1; atexit( term_exit ); } else { fcntl( fileno( stdin ), F_SETFL, O_NONBLOCK ); } } /** Check for a keypress without blocking infinitely. Returns: ASCII value of keypress or -1 if no keypress detected. */ int term_read( ) { int n = 1; unsigned char ch; struct timeval tv; fd_set rfds; FD_ZERO( &rfds ); FD_SET( 0, &rfds ); tv.tv_sec = 0; tv.tv_usec = 125000; n = select( 1, &rfds, NULL, NULL, &tv ); if ( n > 0 ) { n = read( 0, &ch, 1 ); if ( isatty( fileno( stdin ) ) ) tcflush( 0, TCIFLUSH ); if ( n == 1 ) return ch; return n; } return -1; } char get_keypress( ) { char value = '\0'; int pressed = 0; fflush( stderr ); term_init( ); while ( ( pressed = term_read( ) ) == -1 ) ; term_exit( ); value = ( char ) pressed; return value; } void wait_for_any_key( char *message ) { if ( message == NULL ) printf( "Press any key to continue: " ); else printf( "%s", message ); get_keypress( ); printf( "\n\n" ); } void beep( ) { printf( "%c", 7 ); fflush( stderr ); } dvgrab-3.5+git20160707.1.e46042e/config.h.in0000644000175000017500000000577712716434257015241 0ustar eses/* config.h.in. Generated from configure.ac by autoheader. */ /* Define if building universal (internal helper macro) */ #undef AC_APPLE_UNIVERSAL_BUILD /* Define to 1 if you have the header file. */ #undef HAVE_FCNTL_H /* Define to 1 if you have the header file. 
*/ #undef HAVE_INTTYPES_H /* Define to 1 if you have the header file. */ #undef HAVE_JPEGLIB_H /* Define to 1 if you have libdv. */ #undef HAVE_LIBDV /* Define to 1 if you have the `efence' library (-lefence). */ #undef HAVE_LIBEFENCE /* Define to 1 if you have the `jpeg' library (-ljpeg). */ #undef HAVE_LIBJPEG /* Define to 1 if you have the `pthread' library (-lpthread). */ #undef HAVE_LIBPTHREAD /* libquicktime.sourceforge.net present */ #undef HAVE_LIBQUICKTIME /* Define to 1 if you have the header file. */ #undef HAVE_LINUX_VIDEODEV2_H /* Define to 1 if you have the header file. */ #undef HAVE_MEMORY_H /* Define to 1 if you have the `mktime' function. */ #undef HAVE_MKTIME /* Define to 1 if you have the header file. */ #undef HAVE_QUICKTIME_QUICKTIME_H /* Define to 1 if you have the header file. */ #undef HAVE_STDINT_H /* Define to 1 if you have the header file. */ #undef HAVE_STDIO_H /* Define to 1 if you have the header file. */ #undef HAVE_STDLIB_H /* Define to 1 if you have the header file. */ #undef HAVE_STRINGS_H /* Define to 1 if you have the header file. */ #undef HAVE_STRING_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_STAT_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_TYPES_H /* Define to 1 if you have the header file. */ #undef HAVE_UNISTD_H /* Name of package */ #undef PACKAGE /* Define to the address where bug reports for this package should be sent. */ #undef PACKAGE_BUGREPORT /* Define to the full name of this package. */ #undef PACKAGE_NAME /* Define to the full name and version of this package. */ #undef PACKAGE_STRING /* Define to the one symbol short name of this package. */ #undef PACKAGE_TARNAME /* Define to the home page for this package. */ #undef PACKAGE_URL /* Define to the version of this package. */ #undef PACKAGE_VERSION /* Define as the return type of signal handlers (`int' or `void'). */ #undef RETSIGTYPE /* Define to 1 if you have the ANSI C header files. */ #undef STDC_HEADERS /* Define to 1 if your declares `struct tm'. */ #undef TM_IN_SYS_TIME /* Version number of package */ #undef VERSION /* Define WORDS_BIGENDIAN to 1 if your processor stores words with the most significant byte first (like Motorola and SPARC, unlike Intel). */ #if defined AC_APPLE_UNIVERSAL_BUILD # if defined __BIG_ENDIAN__ # define WORDS_BIGENDIAN 1 # endif #else # ifndef WORDS_BIGENDIAN # undef WORDS_BIGENDIAN # endif #endif /* Define to empty if `const' does not conform to ANSI C. */ #undef const /* Define to `unsigned int' if does not define. */ #undef size_t dvgrab-3.5+git20160707.1.e46042e/dvgrab.spec0000644000175000017500000000231712716434257015322 0ustar eses# spec file to create a RPM package (rpm -ba dvgrab.spec) Name: dvgrab Version: 2.1 Release: 1 Packager: dan@dennedy.org Copyright: 2000-2005, Arne Schirmacher, Charles Yates, Dan Dennedy (GPL) Group: Utilities/System Source0: dvgrab-%{version}.tar.gz URL: http://www.kinodv.org/ Summary: A program to copy Digital Video data from a DV camcorder Requires: libraw1394 >= 1.2.0 Requires: libiec1394 >= 1.0.0 Requires: libavc1394 >= 0.4.1 Requires: libdv >= 0.103 Prefix: /usr BuildRoot: %{_tmppath}/%{name}-buildroot BuildRequires: libraw1394-devel >= 1.2.0 BuildRequires: libiec1394-devel >= 1.0.0 BuildRequires: libavc1394-devel >= 0.4.1 BuildRequires: libdv-devel >= 0.103 %description dvgrab copies digital video data from a DV camcorder. 
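# Illustrative note, not part of the original spec: on current RPM releases the
# package is normally rebuilt with "rpmbuild -ba dvgrab.spec" rather than the
# older "rpm -ba" invocation mentioned in the comment at the top of this file.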
%changelog * Fri May 18 2001 Arne Schirmacher - minor change for libraw 0.9, bugfix * Sat Feb 10 2001 Arne Schirmacher - initial package %prep rm -rf $RPM_BUILD_ROOT %setup -q %build ./configure --prefix=/usr make %install mkdir -p %buildroot%{_bindir} make DESTDIR=%buildroot install %clean rm -rf %buildroot %post %postun %files %defattr(-,root,root) %{_bindir}/* %{_mandir}/man1/* %doc README ChangeLog NEWS dvgrab-3.5+git20160707.1.e46042e/dvframe.cc0000644000175000017500000005570712716434257015147 0ustar eses/* * dvframe.cc -- utilities for processing DV-format frames * Copyright (C) 2000 Arne Schirmacher * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ /** Code for handling raw DV frame data These methods are for handling the raw DV frame data. It contains methods for getting info and retrieving the audio data. \file dvframe.cc */ #include #include #include #include "dvframe.h" VideoInfo::VideoInfo() : width( 0 ), height( 0 ), isPAL( false ) {} #ifndef HAVE_LIBDV bool DVFrame::maps_initialized = false; int DVFrame::palmap_ch1[ 2000 ]; int DVFrame::palmap_ch2[ 2000 ]; int DVFrame::palmap_2ch1[ 2000 ]; int DVFrame::palmap_2ch2[ 2000 ]; int DVFrame::ntscmap_ch1[ 2000 ]; int DVFrame::ntscmap_ch2[ 2000 ]; int DVFrame::ntscmap_2ch1[ 2000 ]; int DVFrame::ntscmap_2ch2[ 2000 ]; short DVFrame::compmap[ 4096 ]; #endif /**ructor All DVFrame objects share a set of lookup maps, which are initalized once (we are using a variant of the Singleton pattern). 
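In practical terms (a clarifying note, not in the original comment): in a build without libdv, the first DVFrame constructed fills the shared PAL/NTSC audio sample-position maps and the 12-bit to 16-bit companding table, and every later instance reuses them via the maps_initialized flag.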
*/ DVFrame::DVFrame() { #ifdef HAVE_LIBDV decoder = dv_decoder_new( 0, 0, 0 ); decoder->quality = DV_QUALITY_BEST; decoder->audio->arg_audio_emphasis = 2; dv_set_audio_correction ( decoder, DV_AUDIO_CORRECT_AVERAGE ); FILE* libdv_log = fopen( "/dev/null", "w" ); dv_set_error_log( decoder, libdv_log ); #else if ( maps_initialized == false ) { for ( int n = 0; n < 1944; ++n ) { int sequence1 = ( ( n / 3 ) + 2 * ( n % 3 ) ) % 6; int sequence2 = sequence1 + 6; int block = 3 * ( n % 3 ) + ( ( n % 54 ) / 18 ); block = 6 + block * 16; { register int byte = 8 + 2 * ( n / 54 ); palmap_ch1[ n ] = sequence1 * 150 * 80 + block * 80 + byte; palmap_ch2[ n ] = sequence2 * 150 * 80 + block * 80 + byte; byte += ( n / 54 ); palmap_2ch1[ n ] = sequence1 * 150 * 80 + block * 80 + byte; palmap_2ch2[ n ] = sequence2 * 150 * 80 + block * 80 + byte; } } for ( int n = 0; n < 1620; ++n ) { int sequence1 = ( ( n / 3 ) + 2 * ( n % 3 ) ) % 5; int sequence2 = sequence1 + 5; int block = 3 * ( n % 3 ) + ( ( n % 45 ) / 15 ); block = 6 + block * 16; { register int byte = 8 + 2 * ( n / 45 ); ntscmap_ch1[ n ] = sequence1 * 150 * 80 + block * 80 + byte; ntscmap_ch2[ n ] = sequence2 * 150 * 80 + block * 80 + byte; byte += ( n / 45 ); ntscmap_2ch1[ n ] = sequence1 * 150 * 80 + block * 80 + byte; ntscmap_2ch2[ n ] = sequence2 * 150 * 80 + block * 80 + byte; } } for ( int y = 0x700; y <= 0x7ff; ++y ) compmap[ y ] = ( y - 0x600 ) << 6; for ( int y = 0x600; y <= 0x6ff; ++y ) compmap[ y ] = ( y - 0x500 ) << 5; for ( int y = 0x500; y <= 0x5ff; ++y ) compmap[ y ] = ( y - 0x400 ) << 4; for ( int y = 0x400; y <= 0x4ff; ++y ) compmap[ y ] = ( y - 0x300 ) << 3; for ( int y = 0x300; y <= 0x3ff; ++y ) compmap[ y ] = ( y - 0x200 ) << 2; for ( int y = 0x200; y <= 0x2ff; ++y ) compmap[ y ] = ( y - 0x100 ) << 1; for ( int y = 0x000; y <= 0x1ff; ++y ) compmap[ y ] = y; for ( int y = 0x800; y <= 0xfff; ++y ) compmap[ y ] = -1 - compmap[ 0xfff - y ]; maps_initialized = true; } #endif for ( int n = 0; n < 4; n++ ) audio_buffers[ n ] = ( int16_t * ) malloc( 2 * DV_AUDIO_MAX_SAMPLES * sizeof( int16_t ) ); } DVFrame::~DVFrame() { #ifdef HAVE_LIBDV dv_decoder_free( decoder ); #endif for ( int n = 0; n < 4; n++ ) free( audio_buffers[ n ] ); } void DVFrame::SetDataLen( int len ) { Frame::SetDataLen( len ); ExtractHeader(); } bool DVFrame::IsHDV() { return false; } /** gets a subcode data packet This function returns a SSYB packet from the subcode data section. \param packNum the SSYB package id to return \param pack a reference to the variable where the result is stored \return true for success, false if no pack could be found */ bool DVFrame::GetSSYBPack( int packNum, Pack &pack ) { #ifdef WITH_LIBDV pack.data[ 0 ] = packNum; #ifdef HAVE_LIBDV_1_0 dv_get_vaux_pack( decoder, packNum, &pack.data[ 1 ] ); #else int id; if ( ( id = decoder->ssyb_pack[ packNum ] ) != 0xff ) { pack.data[ 1 ] = decoder->ssyb_data[ id ][ 0 ]; pack.data[ 2 ] = decoder->ssyb_data[ id ][ 1 ]; pack.data[ 3 ] = decoder->ssyb_data[ id ][ 2 ]; pack.data[ 4 ] = decoder->ssyb_data[ id ][ 3 ]; } #endif return true; #else /* number of DIF sequences is different for PAL and NTSC */ int seqCount = IsPAL() ? 
12 : 10; /* process all DIF sequences */ for ( int i = 0; i < seqCount; ++i ) { /* there are two DIF blocks in the subcode section */ for ( int j = 0; j < 2; ++j ) { /* each block has 6 packets */ for ( int k = 0; k < 6; ++k ) { /* calculate address: 150 DIF blocks per sequence, 80 bytes per DIF block, subcode blocks start at block 1, block and packet have 3 bytes header, packet is 8 bytes long (including header) */ const unsigned char *s = &data[ i * 150 * 80 + 1 * 80 + j * 80 + 3 + k * 8 + 3 ]; // printf("ssyb %d: %2.2x %2.2x %2.2x %2.2x %2.2x\n", // j * 6 + k, s[0], s[1], s[2], s[3], s[4]); if ( s[ 0 ] == packNum ) { // printf("GetSSYBPack[%x]: sequence %d, block %d, packet %d\n", packNum,i,j,k); pack.data[ 0 ] = s[ 0 ]; pack.data[ 1 ] = s[ 1 ]; pack.data[ 2 ] = s[ 2 ]; pack.data[ 3 ] = s[ 3 ]; pack.data[ 4 ] = s[ 4 ]; return true; } } } } return false; #endif } /** gets a video auxiliary data packet Every DIF block in the video auxiliary data section contains 15 video auxiliary data packets, for a total of 45 VAUX packets. As the position of a VAUX packet is fixed, we could directly look it up, but I choose to walk through all data as with the other GetXXXX routines. \param packNum the VAUX package id to return \param pack a reference to the variable where the result is stored \return true for success, false if no pack could be found */ bool DVFrame::GetVAUXPack( int packNum, Pack &pack ) { #ifdef WITH_LIBDV pack.data[ 0 ] = packNum; dv_get_vaux_pack( decoder, packNum, &pack.data[ 1 ] ); //cerr << "VAUX: 0x" //<< setw(2) << setfill('0') << hex << (int) pack.data[0] //<< setw(2) << setfill('0') << hex << (int) pack.data[1] //<< setw(2) << setfill('0') << hex << (int) pack.data[2] //<< setw(2) << setfill('0') << hex << (int) pack.data[3] //<< setw(2) << setfill('0') << hex << (int) pack.data[4] //<< endl; return true; #else /* number of DIF sequences is different for PAL and NTSC */ int seqCount = IsPAL() ? 12 : 10; /* process all DIF sequences */ for ( int i = 0; i < seqCount; ++i ) { /* there are three DIF blocks in the VAUX section */ for ( int j = 0; j < 3; ++j ) { /* each block has 15 packets */ for ( int k = 0; k < 15; ++k ) { /* calculate address: 150 DIF blocks per sequence, 80 bytes per DIF block, vaux blocks start at block 3, block has 3 bytes header, packets have no header and are 5 bytes long. */ const unsigned char *s = &data[ i * 150 * 80 + 3 * 80 + j * 80 + 3 + k * 5 ]; // printf("vaux %d: %2.2x %2.2x %2.2x %2.2x %2.2x\n", // j * 15 + k, s[0], s[1], s[2], s[3], s[4]); if ( s[ 0 ] == packNum ) { pack.data[ 0 ] = s[ 0 ]; pack.data[ 1 ] = s[ 1 ]; pack.data[ 2 ] = s[ 2 ]; pack.data[ 3 ] = s[ 3 ]; pack.data[ 4 ] = s[ 4 ]; return true; } } } } return false; #endif } /** gets an audio auxiliary data packet Every DIF block in the audio section contains 5 bytes audio auxiliary data and 72 bytes of audio data. The function searches through all DIF blocks although AAUX packets are only allowed in certain defined DIF blocks. 
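As an illustrative example (not in the original comment): with the address calculation used below, the AAUX packet of the first audio DIF block in the first DIF sequence starts at byte offset 6*80 + 3 = 483 into the frame data.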
\param packNum the AAUX package id to return \param pack a reference to the variable where the result is stored \return true for success, false if no pack could be found */ bool DVFrame::GetAAUXPack( int packNum, Pack &pack ) { #ifdef WITH_LIBDV bool done = false; switch ( packNum ) { case 0x50: memcpy( pack.data, &decoder->audio->aaux_as, 5 ); done = true; break; case 0x51: memcpy( pack.data, &decoder->audio->aaux_asc, 5 ); done = true; break; #ifdef HAVE_LIBDV_1_0 case 0x52: memcpy( pack.data, decoder->audio->aaux_as1, 5 ); done = true; break; case 0x53: memcpy( pack.data, decoder->audio->aaux_asc1, 5 ); done = true: break; #else default: break; #endif } if ( done ) return true; #endif /* number of DIF sequences is different for PAL and NTSC */ int seqCount = IsPAL() ? 12 : 10; /* process all DIF sequences */ for ( int i = 0; i < seqCount; ++i ) { /* there are nine audio DIF blocks */ for ( int j = 0; j < 9; ++j ) { /* calculate address: 150 DIF blocks per sequence, 80 bytes per DIF block, audio blocks start at every 16th beginning with block 6, block has 3 bytes header, followed by one packet. */ const unsigned char *s = &data[ i * 150 * 80 + 6 * 80 + j * 16 * 80 + 3 ]; if ( s[ 0 ] == packNum ) { // printf("aaux %d: %2.2x %2.2x %2.2x %2.2x %2.2x\n", // j, s[0], s[1], s[2], s[3], s[4]); pack.data[ 0 ] = s[ 0 ]; pack.data[ 1 ] = s[ 1 ]; pack.data[ 2 ] = s[ 2 ]; pack.data[ 3 ] = s[ 3 ]; pack.data[ 4 ] = s[ 4 ]; return true; } } } return false; } /** gets the date and time of recording of this frame Returns a struct tm with date and time of recording of this frame. This code courtesy of Andy (http://www.videox.net/) \param recDate the time and date of recording of this frame \return true for success, false if no date or time information could be found */ bool DVFrame::GetRecordingDate( struct tm &recDate ) { #ifdef HAVE_LIBDV return dv_get_recording_datetime_tm( decoder, ( struct tm * ) &recDate ); #else Pack pack62; Pack pack63; if ( GetSSYBPack( 0x62, pack62 ) == false ) return false; int day = pack62.data[ 2 ]; int month = pack62.data[ 3 ]; int year = pack62.data[ 4 ]; if ( GetSSYBPack( 0x63, pack63 ) == false ) return false; int sec = pack63.data[ 2 ]; int min = pack63.data[ 3 ]; int hour = pack63.data[ 4 ]; sec = ( sec & 0xf ) + 10 * ( ( sec >> 4 ) & 0x7 ); min = ( min & 0xf ) + 10 * ( ( min >> 4 ) & 0x7 ); hour = ( hour & 0xf ) + 10 * ( ( hour >> 4 ) & 0x3 ); year = ( year & 0xf ) + 10 * ( ( year >> 4 ) & 0xf ); month = ( month & 0xf ) + 10 * ( ( month >> 4 ) & 0x1 ); day = ( day & 0xf ) + 10 * ( ( day >> 4 ) & 0x3 ); if ( year < 25 ) year += 2000; else year += 1900; recDate.tm_sec = sec; recDate.tm_min = min; recDate.tm_hour = hour; recDate.tm_mday = day; recDate.tm_mon = month - 1; recDate.tm_year = year - 1900; recDate.tm_wday = -1; recDate.tm_yday = -1; recDate.tm_isdst = -1; /* sanity check of the results */ if ( mktime( &recDate ) == -1 ) return false; return true; #endif } /** gets the timecode information of this frame Returns a string with the timecode of this frame. The timecode is the relative location of this frame on the tape, and is defined by hour, minute, second and frame (within the last second). 
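For example (an illustrative value, not from the original comment): a frame carrying timecode 00:02:15:08 decodes to hour 0, minute 2, second 15, frame 8, and the TIMECODE_TO_SEC macro from frame.h reduces such a value to (0*60 + 2)*60 + 15 = 135 seconds, ignoring the frame field.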
\param timeCode the TimeCode struct \return true for success, false if no timecode information could be found */ bool DVFrame::GetTimeCode( TimeCode &timeCode ) { #ifdef HAVE_LIBDV int timestamp[ 4 ]; if ( dv_get_timestamp_int( decoder, timestamp ) ) { timeCode.hour = timestamp[ 0 ]; timeCode.min = timestamp[ 1 ]; timeCode.sec = timestamp[ 2 ]; timeCode.frame = timestamp[ 3 ]; return true; } else { timeCode.hour = 0; timeCode.min = 0; timeCode.sec = 0; timeCode.frame = 0; return false; } #else Pack tc; if ( GetSSYBPack( 0x13, tc ) == false ) return false; int frame = tc.data[ 1 ]; int sec = tc.data[ 2 ]; int min = tc.data[ 3 ]; int hour = tc.data[ 4 ]; timeCode.frame = ( frame & 0xf ) + 10 * ( ( frame >> 4 ) & 0x3 ); timeCode.sec = ( sec & 0xf ) + 10 * ( ( sec >> 4 ) & 0x7 ); timeCode.min = ( min & 0xf ) + 10 * ( ( min >> 4 ) & 0x7 ); timeCode.hour = ( hour & 0xf ) + 10 * ( ( hour >> 4 ) & 0x3 ); return true; #endif } /** gets the audio properties of this frame get the sampling frequency and the number of samples in this particular DV frame (which can vary) \param info the AudioInfo record \return true, if audio properties could be determined */ bool DVFrame::GetAudioInfo( AudioInfo &info ) { #ifdef HAVE_LIBDV info.frequency = decoder->audio->frequency; info.samples = decoder->audio->samples_this_frame; info.frames = ( decoder->audio->aaux_as.pc3.system == 1 ) ? 50 : 60; info.channels = decoder->audio->num_channels; info.quantization = ( decoder->audio->aaux_as.pc4.qu == 0 ) ? 16 : 12; return true; #else int af_size; int smp; int flag; Pack pack50; info.channels = 2; info.quantization = 16; /* Check whether this frame has a valid AAUX source packet (header == 0x50). If so, get the audio samples count. If not, skip this audio data. */ if ( GetAAUXPack( 0x50, pack50 ) == true ) { /* get size, sampling type and the 50/60 flag. The number of audio samples is dependend on all of these. */ af_size = pack50.data[ 1 ] & 0x3f; smp = ( pack50.data[ 4 ] >> 3 ) & 0x07; flag = pack50.data[ 3 ] & 0x20; if ( flag == 0 ) { info.frames = 60; switch ( smp ) { case 0: info.frequency = 48000; info.samples = 1580 + af_size; break; case 1: info.frequency = 44100; info.samples = 1452 + af_size; break; case 2: info.frequency = 32000; info.samples = 1053 + af_size; info.quantization = 12; break; } } else { // 50 frames (PAL) info.frames = 50; switch ( smp ) { case 0: info.frequency = 48000; info.samples = 1896 + af_size; break; case 1: info.frequency = 44100; info.samples = 0; // I don't know break; case 2: info.frequency = 32000; info.samples = 1264 + af_size; info.quantization = 12; break; } } return true; } else { return false; } #endif } bool DVFrame::GetVideoInfo( VideoInfo &info ) { GetTimeCode( info.timeCode ); GetRecordingDate( info.recDate ); info.isPAL = IsPAL(); return true; } /** gets the size of the frame Depending on the type (PAL or NTSC) of the frame, the length of the frame is returned \return the length of the frame in Bytes */ int DVFrame::GetFrameSize( void ) { return IsPAL() ? 144000 : 120000; } /** get the video frame rate \return frames per second */ float DVFrame::GetFrameRate() { return IsPAL() ? 25.0 : 30000.0 / 1001.0; } /** checks whether the frame is in PAL or NTSC format \todo function can't handle "empty" frame \return true for PAL frame, false for a NTSC frame */ bool DVFrame::IsPAL( void ) { unsigned char dsf = data[ 3 ] & 0x80; bool pal = ( dsf == 0 ) ? false : true; #ifdef HAVE_LIBDV if ( !pal ) { pal = ( dv_system_50_fields ( decoder ) == 1 ) ? 
true : pal; } #endif return pal; } /** checks whether this frame is the first in a new recording To determine this, the function looks at the recStartPoint bit in AAUX pack 51. \return true if this frame is the start of a new recording */ bool DVFrame::IsNewRecording() { #ifdef HAVE_LIBDV return ( decoder->audio->aaux_asc.pc2.rec_st == 0 ); #else Pack aauxSourceControl; /* if we can't find the packet, we return "no new recording" */ if ( GetAAUXPack( 0x51, aauxSourceControl ) == false ) return false; unsigned char recStartPoint = aauxSourceControl.data[ 2 ] & 0x80; return recStartPoint == 0 ? true : false; #endif } /** checks whether this frame is playing at normal speed To determine this, the function looks at the speed bit in AAUX pack 51. \return true if this frame is playing at normal speed */ bool DVFrame::IsNormalSpeed() { bool normal_speed = true; #ifdef HAVE_LIBDV /* don't do audio if speed is not 1 */ if ( decoder->std == e_dv_std_iec_61834 ) { normal_speed = ( decoder->audio->aaux_asc.pc3.speed == 0x20 ); } else if ( decoder->std == e_dv_std_smpte_314m ) { if ( decoder->audio->aaux_as.pc3.system ) /* PAL */ normal_speed = ( decoder->audio->aaux_asc.pc3.speed == 0x64 ); else /* NTSC */ normal_speed = ( decoder->audio->aaux_asc.pc3.speed == 0x78 ); } #endif return ( normal_speed ); } /** check whether we have received as many bytes as expected for this frame \return true if this frames is completed, false otherwise */ bool DVFrame::IsComplete( void ) { return GetDataLen() == GetFrameSize(); } /** retrieves the audio data from the frame The DV frame contains audio data mixed in the video data blocks, which can be retrieved easily using this function. The audio data consists of 16 bit, two channel audio samples (a 16 bit word for channel 1, followed by a 16 bit word for channel 2 etc.) 
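As a hedged usage sketch (not from the original source): the return value is samples * channels * 2 bytes, so a caller can pass a generously sized destination buffer such as int16_t buf[4 * DV_AUDIO_MAX_SAMPLES] and treat the returned byte count as the amount of valid interleaved data.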
\param sound a pointer to a buffer that holds the audio data \return the number of bytes put into the buffer, or 0 if no audio data could be retrieved */ int DVFrame::ExtractAudio( void *sound ) { AudioInfo info; if ( GetAudioInfo( info ) == true ) { #ifdef HAVE_LIBDV int n, i; int16_t* s = ( int16_t * ) sound; dv_decode_full_audio( decoder, data, ( int16_t ** ) audio_buffers ); for ( n = 0; n < info.samples; ++n ) for ( i = 0; i < info.channels; i++ ) *s++ = audio_buffers[ i ][ n ]; } else info.samples = 0; #else /* Collect the audio samples */ char* s = ( char * ) sound; switch ( info.frequency ) { case 32000: /* This is 4 channel audio */ if ( IsPAL() ) { short * p = ( short* ) sound; for ( int n = 0; n < info.samples; ++n ) { register int r = ( ( unsigned char* ) data ) [ palmap_2ch1[ n ] + 1 ]; // LSB *p++ = compmap[ ( ( ( unsigned char* ) data ) [ palmap_2ch1[ n ] ] << 4 ) + ( r >> 4 ) ]; // MSB *p++ = compmap[ ( ( ( unsigned char* ) data ) [ palmap_2ch1[ n ] + 1 ] << 4 ) + ( r & 0x0f ) ]; } } else { short* p = ( short* ) sound; for ( int n = 0; n < info.samples; ++n ) { register int r = ( ( unsigned char* ) data ) [ ntscmap_2ch1[ n ] + 1 ]; // LSB *p++ = compmap[ ( ( ( unsigned char* ) data ) [ ntscmap_2ch1[ n ] ] << 4 ) + ( r >> 4 ) ]; // MSB *p++ = compmap[ ( ( ( unsigned char* ) data ) [ ntscmap_2ch1[ n ] + 1 ] << 4 ) + ( r & 0x0f ) ]; } } break; case 44100: case 48000: /* this can be optimized significantly */ if ( IsPAL() ) { for ( int n = 0; n < info.samples; ++n ) { *s++ = ( ( char* ) data ) [ palmap_ch1[ n ] + 1 ]; /* LSB */ *s++ = ( ( char* ) data ) [ palmap_ch1[ n ] ]; /* MSB */ *s++ = ( ( char* ) data ) [ palmap_ch2[ n ] + 1 ]; /* LSB */ *s++ = ( ( char* ) data ) [ palmap_ch2[ n ] ]; /* MSB */ } } else { for ( int n = 0; n < info.samples; ++n ) { *s++ = ( ( char* ) data ) [ ntscmap_ch1[ n ] + 1 ]; /* LSB */ *s++ = ( ( char* ) data ) [ ntscmap_ch1[ n ] ]; /* MSB */ *s++ = ( ( char* ) data ) [ ntscmap_ch2[ n ] + 1 ]; /* LSB */ *s++ = ( ( char* ) data ) [ ntscmap_ch2[ n ] ]; /* MSB */ } } break; /* we can't handle any other format in the moment */ default: info.samples = 0; } } else info.samples = 0; #endif return info.samples * info.channels * 2; } #ifdef HAVE_LIBDV /** retrieves the audio data from the frame The DV frame contains audio data mixed in the video data blocks, which can be retrieved easily using this function. The audio data consists of 16 bit, two channel audio samples (a 16 bit word for channel 1, followed by a 16 bit word for channel 2 etc.) 
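A minimal calling sketch (illustrative, and assuming the HAVE_LIBDV build): allocate four buffers of 2 * DV_AUDIO_MAX_SAMPLES int16_t samples each, as the DVFrame constructor does for its own audio_buffers member, and pass that array as the channels argument.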
\param channels an array of buffers of audio data, one per channel, up to four channels \return the number of bytes put into the buffer, or 0 if no audio data could be retrieved */ int DVFrame::ExtractAudio( int16_t **channels ) { AudioInfo info; if ( GetAudioInfo( info ) == true ) dv_decode_full_audio( decoder, data, channels ); else info.samples = 0; return info.samples * info.channels * 2; } void DVFrame::ExtractHeader( void ) { dv_parse_header( decoder, data ); dv_parse_packs( decoder, data ); } void DVFrame::Deinterlace( void * image, int bpp ) { int width = GetWidth( ) * bpp; int height = GetHeight( ); for ( int i = 0; i < height; i += 2 ) memcpy( ( uint8_t * ) image + width * ( i + 1 ), ( uint8_t * ) image + width * i, width ); } int DVFrame::ExtractRGB( void * rgb ) { unsigned char * pixels[ 3 ]; int pitches[ 3 ]; pixels[ 0 ] = ( unsigned char* ) rgb; pixels[ 1 ] = NULL; pixels[ 2 ] = NULL; pitches[ 0 ] = 720 * 3; pitches[ 1 ] = 0; pitches[ 2 ] = 0; dv_decode_full_frame( decoder, data, e_dv_color_rgb, pixels, pitches ); return 0; } int DVFrame::ExtractPreviewRGB( void * rgb ) { ExtractRGB( rgb ); return 0; } int DVFrame::ExtractYUV( void * yuv ) { unsigned char * pixels[ 3 ]; int pitches[ 3 ]; pixels[ 0 ] = ( unsigned char* ) yuv; pitches[ 0 ] = decoder->width * 2; dv_decode_full_frame( decoder, data, e_dv_color_yuv, pixels, pitches ); return 0; } int DVFrame::ExtractPreviewYUV( void * yuv ) { ExtractYUV( yuv ); return 0; } /** Get the frame aspect ratio. Indicates whether the frame aspect ratio is normal (4:3) or wide (16:9). \return true if the frame is wide (16:9), false if unknown or normal. */ bool DVFrame::IsWide( void ) { return dv_format_wide( decoder ) > 0; } /** Get the frame image width. \return the width in pixels. */ int DVFrame::GetWidth() { return decoder->width; } /** Get the frame image height. \return the height in pixels. */ int DVFrame::GetHeight() { return decoder->height; } /** Set the RecordingDate of the frame. This updates the calendar date and time and the timecode. However, the timecode is derived from the time in the datetime parameter and the frame number. Use SetTimeCode for more control over the timecode. \param datetime A simple time value containing the RecordingDate and time information. The time in this structure is automatically incremented by one second, depending on the frame parameter, and updated. \param frame A zero-based running frame sequence/serial number. This is used both in the timecode and as a timestamp on DIF block headers. */ void DVFrame::SetRecordingDate( time_t * datetime, int frame ) { dv_encode_metadata( data, IsPAL(), IsWide(), datetime, frame ); } /** Set the TimeCode of the frame. This function takes a zero-based frame counter and automatically derives the timecode. \param frame The frame counter. */ void DVFrame::SetTimeCode( int frame ) { dv_encode_timecode( data, IsPAL(), frame ); } #else void DVFrame::ExtractHeader( void ) {} #endif dvgrab-3.5+git20160707.1.e46042e/iec13818-2.cc0000644000175000017500000010043012716434257015007 0ustar eses/* * iec13818-2.cc * Copyright (C) 2007 Dan Streetman * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ #include using std::string; #include "hdvframe.h" #include "iec13818-2.h" /////////////// // Video Stream Video::Video() { picture = new Picture( this ); sequenceHeader = new SequenceHeader( this ); sequenceExtension = new SequenceExtension( this ); sequenceDisplayExtension = new SequenceDisplayExtension( this ); quantMatrixExtension = new QuantMatrixExtension( this ); copyrightExtension = new CopyrightExtension( this ); sequenceScalableExtension = new SequenceScalableExtension( this ); pictureDisplayExtension = new PictureDisplayExtension( this ); pictureCodingExtension = new PictureCodingExtension( this ); pictureSpatialScalableExtension = new PictureSpatialScalableExtension( this ); pictureTemporalScalableExtension = new PictureTemporalScalableExtension( this ); userData = new UserData( this ); group = new Group( this ); slice = new Slice( this ); Clear(); } Video::~Video() { delete picture; delete sequenceHeader; delete sequenceExtension; delete sequenceDisplayExtension; delete quantMatrixExtension; delete copyrightExtension; delete sequenceScalableExtension; delete pictureDisplayExtension; delete pictureCodingExtension; delete pictureSpatialScalableExtension; delete pictureTemporalScalableExtension; delete userData; delete group; delete slice; } void Video::AddPacket( HDVPacket *packet ) { if ( offset > 0 && packet->payload_unit_start_indicator() ) { if ( lastSection == slice ) DEBUG_RAW( d_hdv_video, "*%d", sliceCount ); DEBUG_RAW( d_hdv_video, "]" ); isComplete = true; } else { pes.AddData( packet->payload(), packet->PayloadLength() ); ProcessPacket(); } } void Video::Clear() { pes.Clear(); currentSection = 0; lastSection = 0; sliceCount = 0; offset = 0; sectionStart = 0; width = 0; height = 0; frameRate = 0; isTimeCodeSet = false; hasGOP = false; repeat_first_field = -1; top_field_first = -1; picture_structure = -1; progressive_sequence = -1; scalable_mode = -1; isComplete = false; } unsigned char *Video::GetBuffer() { return &pes.GetBuffer()[pes.GetPacketDataOffset()]; } unsigned char Video::GetData( int pos ) { if ( pos < pes.GetPacketDataLength() ) return pes.PES_packet_data_byte( pos ); else return 0; } int Video::GetLength() { return pes.GetPacketDataLength(); } bool Video::IsComplete() { return isComplete; } void Video::ProcessPacket() { while ( ( offset + 4 ) < GetLength() ) { int currentLen = GetLength() - offset; if ( !currentSection ) { const char *dstr = NULL; bool restart = false; if ( START_CODE_PREFIX( offset ) ) { int start_code = GetData( offset + 3 ); int extension_code; switch ( start_code ) { case SEQUENCE_HEADER_CODE_VALUE: dstr = "H"; currentSection = sequenceHeader; break; case PICTURE_START_CODE_VALUE: dstr = "P"; currentSection = picture; break; case EXTENSION_START_CODE_VALUE: extension_code = GetBits( ( offset + 4 ) * 8, 4 ); switch ( extension_code ) { case SEQUENCE_EXTENSION_ID_VALUE: dstr = "SE"; currentSection = sequenceExtension; break; case SEQUENCE_DISPLAY_EXTENSION_ID_VALUE: dstr = "SDE"; currentSection = sequenceDisplayExtension; break; case QUANT_MATRIX_EXTENSION_ID_VALUE: dstr = "QME"; currentSection = quantMatrixExtension; break; case COPYRIGHT_EXTENSION_ID_VALUE: dstr = "CE"; currentSection = copyrightExtension; break; case SEQUENCE_SCALABLE_EXTENSION_ID_VALUE: dstr = "SSE"; 
currentSection = sequenceScalableExtension; break; case PICTURE_DISPLAY_EXTENSION_ID_VALUE: dstr = "PDE"; currentSection = pictureDisplayExtension; break; case PICTURE_CODING_EXTENSION_ID_VALUE: dstr = "PCE"; currentSection = pictureCodingExtension; break; case PICTURE_SPATIAL_SCALABLE_EXTENSION_ID_VALUE: dstr = "PSSE"; currentSection = pictureSpatialScalableExtension; break; case PICTURE_TEMPORAL_SCALABLE_EXTENSION_ID_VALUE: dstr = "PTSE"; currentSection = pictureTemporalScalableExtension; break; default: DEBUG( d_hdv_video, "Unknown Extension %x", extension_code ); offset++; restart = true; break; } break; case USER_DATA_START_CODE_VALUE: dstr = "U"; currentSection = userData; break; case GROUP_START_CODE_VALUE: dstr = "G"; currentSection = group; break; case SEQUENCE_END_CODE_VALUE: dstr = "END"; offset += 4; restart = true; break; default: if ( SLICE_START_CODE_MIN <= start_code && start_code <= SLICE_START_CODE_MAX ) { if ( lastSection == slice ) { sliceCount++; } else { dstr = "S"; sliceCount = 1; } currentSection = slice; } else { DEBUG( d_hdv_video, "Unknown Start Code %x", start_code ); offset++; restart = true; } break; } } else { DEBUG_RAW( d_hdv_video, "%02x ", GetData(offset) ); offset++; restart = true; } if ( dstr ) { if ( lastSection == slice && currentSection != slice ) DEBUG_RAW( d_hdv_video, "*%d]", sliceCount ); DEBUG_RAW( d_hdv_video, "[%s", dstr ); if ( restart ) DEBUG_RAW( d_hdv_video, "]" ); } if ( restart ) continue; currentSection->Clear(); currentSection->SetOffset( offset ); sectionStart = offset; } currentSection->AddLength( currentLen ); if ( currentSection->IsComplete() ) { if ( currentSection != slice ) DEBUG_RAW( d_hdv_video, "]" ); if ( currentSection == sequenceHeader ) { width = sequenceHeader->horizontal_size_value(); height = sequenceHeader->vertical_size_value(); frameRate = FRAMERATE_LOOKUP( sequenceHeader->frame_rate_code() ); } else if ( currentSection == pictureCodingExtension ) { repeat_first_field = pictureCodingExtension->repeat_first_field() ? 1 : 0; top_field_first = pictureCodingExtension->top_field_first() ? 1 : 0; picture_structure = pictureCodingExtension->picture_structure(); } else if ( currentSection == group ) { timeCode.hour = group->time_code_hours(); timeCode.min = group->time_code_minutes(); timeCode.sec = group->time_code_seconds(); timeCode.frame = group->time_code_pictures(); isTimeCodeSet = true; hasGOP = true; } else if ( currentSection == sequenceExtension ) progressive_sequence = sequenceExtension->progressive_sequence() ? 
1 : 0; else if ( currentSection == sequenceScalableExtension ) scalable_mode = sequenceScalableExtension->scalable_mode(); currentLen = currentSection->GetCompleteLength() - ( offset - sectionStart ); lastSection = currentSection; currentSection = 0; } offset += currentLen; } } /////////////// // VideoSection VideoSection::VideoSection( Video *v ) { video = v; } VideoSection::~VideoSection() { } void VideoSection::Clear() { offset = 0; length = 0; } void VideoSection::SetOffset( int o ) { offset = o; } void VideoSection::AddLength( int l ) { length += l; } bool VideoSection::IsComplete() { return GetCompleteLength() > 0; } unsigned char VideoSection::GetData( int pos ) { return video->GetData( pos + offset ); } ////////// // Picture Picture::Picture( Video *v ) : VideoSection( v ) { } Picture::~Picture() { } int Picture::GetCompleteLength() { int blen = length * 8; int bits = 61; if ( blen < bits ) return -1; if ( picture_coding_type() == 2 ) bits += 4; else if ( picture_coding_type() == 3 ) bits += 8; if ( blen < bits ) return -1; while ( GetBits( bits, 1 ) ) { bits += 9; if ( blen < bits ) return -1; } return TOBYTES( bits ); } void Picture::Dump() { DEBUG( d_hdv_video, "Picture section" ); } unsigned int Picture::picture_start_code() { return GetBits( 0, 32 ); } unsigned short Picture::temporal_reference() { return GetBits( 32, 10 ); } unsigned char Picture::picture_coding_type() { return GetBits( 42, 3 ); } unsigned short Picture::vbv_delay() { return GetBits( 45, 16 ); } bool Picture::full_pel_forward_vector() { if ( picture_coding_type() == 2 || picture_coding_type() == 3 ) return GetBits( 61, 1 ); else return 0; } unsigned char Picture::forward_f_code() { if ( picture_coding_type() == 2 || picture_coding_type() == 3 ) return GetBits( 62, 3 ); else return 0; } bool Picture::full_pel_backward_vector() { if ( picture_coding_type() == 3 ) return GetBits( 65, 1 ); else return 0; } unsigned char Picture::backward_f_code() { if ( picture_coding_type() == 3 ) return GetBits( 66, 3 ); else return 0; } unsigned char Picture::extra_information_picture( int n ) { int bits = 61; int i = 0; if ( picture_coding_type() == 2 ) bits += 4; else if ( picture_coding_type() == 3 ) bits += 8; while ( GetBits( bits, 1 ) ) { if ( n == i ) return GetBits( bits + 1, 8 ); bits += 9; i++; } return 0; } ////////////////// // Sequence Header SequenceHeader::SequenceHeader( Video *v ) : VideoSection( v ) { } SequenceHeader::~SequenceHeader() { } int SequenceHeader::GetCompleteLength() { int blen = length * 8; int bits = 95; if ( blen < bits ) return -1; if ( load_intra_quantiser_matrix() ) bits += 512; bits += 1; if ( blen < bits ) return -1; if ( load_non_intra_quantiser_matrix() ) bits += 512; if ( blen < bits ) return -1; return TOBYTES( bits ); } void SequenceHeader::Dump() { DEBUG( d_hdv_video, "SequenceHeader section H %d V %d aspect %d rate %d bitrate %d", horizontal_size_value(), vertical_size_value(), aspect_ratio_information(), frame_rate_code(), bit_rate_value() ); } unsigned int SequenceHeader::sequence_header_code() { return GetBits( 0, 32 ); } unsigned short SequenceHeader::horizontal_size_value() { return GetBits( 32, 12 ); } unsigned short SequenceHeader::vertical_size_value() { return GetBits( 44, 12 ); } unsigned char SequenceHeader::aspect_ratio_information() { return GetBits( 56, 4 ); } unsigned char SequenceHeader::frame_rate_code() { return GetBits( 60, 4 ); } unsigned int SequenceHeader::bit_rate_value() { return GetBits( 64, 18 ); } unsigned short SequenceHeader::vbv_buffer_size_value() { 
return GetBits( 83, 10 ); } bool SequenceHeader::constrained_parameters_flag() { return GetBits( 93, 1 ); } bool SequenceHeader::load_intra_quantiser_matrix() { return GetBits( 94, 1 ); } unsigned char SequenceHeader::intra_quantiser_matrix( unsigned int n ) { if ( load_intra_quantiser_matrix() && n < 64 ) return GetBits( 95 + ( n * 8 ), 8 ); else return 0; } bool SequenceHeader::load_non_intra_quantiser_matrix() { return load_intra_quantiser_matrix() ? GetBits( 607, 1 ) : GetBits( 95, 1 ); } unsigned char SequenceHeader::non_intra_quantiser_matrix( unsigned int n ) { if ( load_non_intra_quantiser_matrix() && n < 64 ) return load_intra_quantiser_matrix() ? GetBits( 608 + ( n * 8 ), 8 ) : GetBits( 96 + ( n * 8 ), 8 ); else return 0; } //////////////////// // SequenceExtension SequenceExtension::SequenceExtension( Video *v ) : VideoSection( v ) { } SequenceExtension::~SequenceExtension() { } int SequenceExtension::GetCompleteLength() { int blen = length * 8; int bits = 80; if ( blen < bits ) return -1; return TOBYTES( bits ); } void SequenceExtension::Dump() { DEBUG( d_hdv_video, "SequenceExtension section" ); } unsigned int SequenceExtension::extension_start_code() { return GetBits( 0, 32 ); } unsigned char SequenceExtension::extension_start_code_identifier() { return GetBits( 32, 4 ); } unsigned char SequenceExtension::profile_and_level_indication() { return GetBits( 36, 8 ); } bool SequenceExtension::progressive_sequence() { return GetBits( 44, 1 ); } unsigned char SequenceExtension::chroma_format() { return GetBits( 45, 2 ); } unsigned char SequenceExtension::horizontal_size_extension() { return GetBits( 47, 2 ); } unsigned char SequenceExtension::vertical_size_extension() { return GetBits( 49, 2 ); } unsigned short SequenceExtension::bit_rate_extension() { return GetBits( 51, 12 ); } unsigned char SequenceExtension::vbv_buffer_size_extension() { return GetBits( 64, 8 ); } bool SequenceExtension::low_delay() { return GetBits( 72, 1 ); } unsigned char SequenceExtension::frame_rate_extension_n() { return GetBits( 73, 2 ); } unsigned char SequenceExtension::frame_rate_extension_d() { return GetBits( 75, 5 ); } /////////////////////////// // SequenceDisplayExtension SequenceDisplayExtension::SequenceDisplayExtension( Video *v ) : VideoSection( v ) { } SequenceDisplayExtension::~SequenceDisplayExtension() { } int SequenceDisplayExtension::GetCompleteLength() { int blen = length * 8; int bits = 40; if ( blen < bits ) return -1; if ( colour_description() ) bits += 24; if ( blen < bits ) return -1; bits += 29; if ( blen < bits ) return -1; return TOBYTES( bits ); } void SequenceDisplayExtension::Dump() { DEBUG( d_hdv_video, "SequenceDisplayExtension section" ); } unsigned int SequenceDisplayExtension::extension_start_code() { return GetBits( 0, 32 ); } unsigned char SequenceDisplayExtension::extension_start_code_identifier() { return GetBits( 32, 4 ); } unsigned char SequenceDisplayExtension::video_format() { return GetBits( 36, 3 ); } bool SequenceDisplayExtension::colour_description() { return GetBits( 39, 1 ); } unsigned char SequenceDisplayExtension::colour_primaries() { return colour_description() ? GetBits( 40, 8 ) : 0; } unsigned char SequenceDisplayExtension::transfer_characteristics() { return colour_description() ? GetBits( 48, 8 ) : 0; } unsigned char SequenceDisplayExtension::matrix_coefficients() { return colour_description() ? GetBits( 56, 8 ) : 0; } unsigned short SequenceDisplayExtension::display_horizontal_size() { return colour_description() ? 
GetBits( 64, 14 ) : GetBits( 40, 14 ); } bool SequenceDisplayExtension::marker_bit() { return colour_description() ? GetBits( 78, 1 ) : GetBits( 54, 1 ); } unsigned short SequenceDisplayExtension::display_vertical_size() { return colour_description() ? GetBits( 79, 14 ) : GetBits( 55, 14 ); } /////////////////////// // QuantMatrixExtension QuantMatrixExtension::QuantMatrixExtension( Video *v ) : VideoSection( v ) { } QuantMatrixExtension::~QuantMatrixExtension() { } int QuantMatrixExtension::GetCompleteLength() { int blen = length * 8; int bits = 37; if ( blen < bits ) return -1; if ( load_intra_quantiser_matrix() ) bits += 512; bits += 1; if ( blen < bits ) return -1; if ( load_non_intra_quantiser_matrix() ) bits += 512; bits += 1; if ( blen < bits ) return -1; if ( load_chroma_intra_quantiser_matrix() ) bits += 512; bits += 1; if ( blen < bits ) return -1; if ( load_chroma_non_intra_quantiser_matrix() ) bits += 512; if ( blen < bits ) return -1; return TOBYTES( bits ); } void QuantMatrixExtension::Dump() { DEBUG( d_hdv_video, "QuantMatrixExtension section" ); } unsigned int QuantMatrixExtension::extension_start_code() { return GetBits( 0, 32 ); } unsigned char QuantMatrixExtension::extension_start_code_identifier() { return GetBits( 32, 4 ); } bool QuantMatrixExtension::load_intra_quantiser_matrix() { return GetBits( 36, 1 ); } unsigned char QuantMatrixExtension::intra_quantiser_matrix( unsigned int n ) { int pos = 37; if ( load_intra_quantiser_matrix() && n < 64 ) return GetBits( pos + ( 8 * n ), 8 ); else return 0; } bool QuantMatrixExtension::load_non_intra_quantiser_matrix() { int pos = 37; if ( load_intra_quantiser_matrix() ) pos += 512; return GetBits( pos, 1 ); } unsigned char QuantMatrixExtension::non_intra_quantiser_matrix( unsigned int n ) { int pos = 38; if ( load_intra_quantiser_matrix() ) pos += 512; if ( load_non_intra_quantiser_matrix() && n < 64 ) return GetBits( pos + ( 8 * n ), 8 ); else return 0; } bool QuantMatrixExtension::load_chroma_intra_quantiser_matrix() { int pos = 38; if ( load_intra_quantiser_matrix() ) pos += 512; if ( load_non_intra_quantiser_matrix() ) pos += 512; return GetBits( pos, 1 ); } unsigned char QuantMatrixExtension::chroma_intra_quantiser_matrix( unsigned int n ) { int pos = 39; if ( load_intra_quantiser_matrix() ) pos += 512; if ( load_non_intra_quantiser_matrix() ) pos += 512; if ( load_chroma_intra_quantiser_matrix() && n < 64 ) return GetBits( pos + ( 8 * n ), 8 ); else return 0; } bool QuantMatrixExtension::load_chroma_non_intra_quantiser_matrix() { int pos = 39; if ( load_intra_quantiser_matrix() ) pos += 512; if ( load_non_intra_quantiser_matrix() ) pos += 512; if ( load_chroma_intra_quantiser_matrix() ) pos += 512; return GetBits( pos, 1 ); } unsigned char QuantMatrixExtension::chroma_non_intra_quantiser_matrix( unsigned int n ) { int pos = 40; if ( load_intra_quantiser_matrix() ) pos += 512; if ( load_non_intra_quantiser_matrix() ) pos += 512; if ( load_chroma_intra_quantiser_matrix() ) pos += 512; if ( load_chroma_non_intra_quantiser_matrix() && n < 64 ) return GetBits( pos + ( 8 * n ), 8 ); else return 0; } ///////////////////// // CopyrightExtension CopyrightExtension::CopyrightExtension( Video *v ) : VideoSection( v ) { } CopyrightExtension::~CopyrightExtension() { } int CopyrightExtension::GetCompleteLength() { int blen = length * 8; int bits = 110; if ( blen < bits ) return -1; return TOBYTES( bits ); } void CopyrightExtension::Dump() { DEBUG( d_hdv_video, "CopyrightExtension section" ); } unsigned int 
CopyrightExtension::extension_start_code() { return GetBits( 0, 32 ); } unsigned char CopyrightExtension::extension_start_code_identifier() { return GetBits( 32, 4 ); } bool CopyrightExtension::copyright_flag() { return GetBits( 36, 1 ); } unsigned char CopyrightExtension::copyright_identifier() { return GetBits( 37, 8 ); } bool CopyrightExtension::original_or_copy() { return GetBits( 45, 1 ); } unsigned int CopyrightExtension::copyright_number_1() { return GetBits( 54, 20 ); } unsigned int CopyrightExtension::copyright_number_2() { return GetBits( 65, 22 ); } unsigned int CopyrightExtension::copyright_number_3() { return GetBits( 88, 22 ); } //////////////////////////// // SequenceScalableExtension SequenceScalableExtension::SequenceScalableExtension( Video *v ) : VideoSection( v ) { } SequenceScalableExtension::~SequenceScalableExtension() { } int SequenceScalableExtension::GetCompleteLength() { int blen = length * 8; int bits = 42; if ( blen < bits ) return -1; if ( scalable_mode() == SPATIAL_SCALABILITY ) { bits += 48; } else if ( scalable_mode() == TEMPORAL_SCALABILITY ) { bits += 1; if ( blen < bits ) return -1; if ( picture_mux_enable() ) bits += 1; bits += 6; } if ( blen < bits ) return -1; return TOBYTES( bits ); } void SequenceScalableExtension::Dump() { DEBUG( d_hdv_video, "SequenceScalableExtension section" ); } unsigned int SequenceScalableExtension::extension_start_code() { return GetBits( 0, 32 ); } unsigned char SequenceScalableExtension::extension_start_code_identifier() { return GetBits( 32, 4 ); } unsigned char SequenceScalableExtension::scalable_mode() { return GetBits( 36, 2 ); } unsigned char SequenceScalableExtension::layer_id() { return GetBits( 38, 4 ); } unsigned short SequenceScalableExtension::lower_layer_prediction_horizontal_size() { return scalable_mode() == SPATIAL_SCALABILITY ? GetBits( 42, 14 ) : 0; } unsigned short SequenceScalableExtension::lower_layer_prediction_vertical_size() { return scalable_mode() == SPATIAL_SCALABILITY ? GetBits( 57, 14 ) : 0; } unsigned char SequenceScalableExtension::horizontal_subsampling_factor_m() { return scalable_mode() == SPATIAL_SCALABILITY ? GetBits( 71, 5 ) : 0; } unsigned char SequenceScalableExtension::horizontal_subsampling_factor_n() { return scalable_mode() == SPATIAL_SCALABILITY ? GetBits( 76, 5 ) : 0; } unsigned char SequenceScalableExtension::vertical_subsampling_factor_m() { return scalable_mode() == SPATIAL_SCALABILITY ? GetBits( 81, 5 ) : 0; } unsigned char SequenceScalableExtension::vertical_subsampling_factor_n() { return scalable_mode() == SPATIAL_SCALABILITY ? GetBits( 86, 5 ) : 0; } bool SequenceScalableExtension::picture_mux_enable() { return scalable_mode() == TEMPORAL_SCALABILITY ? GetBits( 42, 1 ) : 0; } bool SequenceScalableExtension::mux_to_progressive_sequence() { if ( scalable_mode() == TEMPORAL_SCALABILITY ) return picture_mux_enable() ? GetBits( 43, 1 ) : 0; else return 0; } unsigned char SequenceScalableExtension::picture_mux_order() { if ( scalable_mode() == TEMPORAL_SCALABILITY ) return picture_mux_enable() ? GetBits( 44, 3 ) : GetBits( 43, 3 ); else return 0; } unsigned char SequenceScalableExtension::picture_mux_factor() { if ( scalable_mode() == TEMPORAL_SCALABILITY ) return picture_mux_enable() ? 
GetBits( 47, 3 ) : GetBits( 46, 3 ); else return 0; } ////////////////////////// // PictureDisplayExtension PictureDisplayExtension::PictureDisplayExtension( Video *v ) : VideoSection( v ) { } PictureDisplayExtension::~PictureDisplayExtension() { } int PictureDisplayExtension::GetCompleteLength() { int blen = length * 8; int bits = 36; if ( blen < bits ) return -1; bits += 34 * number_of_frame_centre_offsets(); if ( blen < bits ) return -1; return TOBYTES( bits ); } void PictureDisplayExtension::Dump() { DEBUG( d_hdv_video, "PictureDisplayExtension section" ); } unsigned int PictureDisplayExtension::extension_start_code() { return GetBits( 0, 32 ); } unsigned char PictureDisplayExtension::extension_start_code_identifier() { return GetBits( 32, 4 ); } unsigned short PictureDisplayExtension::frame_centre_horizontal_offset( unsigned int n ) { if ( n < number_of_frame_centre_offsets() ) return GetBits( 36 + ( n * 34 ), 16 ); else return 0; } unsigned short PictureDisplayExtension::frame_centre_vertical_offset( unsigned int n ) { if ( n < number_of_frame_centre_offsets() ) return GetBits( 53 + ( n * 34 ), 16 ); else return 0; } unsigned char PictureDisplayExtension::number_of_frame_centre_offsets() { // These should be set in the stream before this, but if not return the minimum. if ( video->progressive_sequence < 0 || video->repeat_first_field < 0 || video->top_field_first < 0 || video->picture_structure < 0 ) return 1; bool picture_structure_is_field = video->picture_structure == PICTURE_STRUCTURE_TOP_FIELD || video->picture_structure == PICTURE_STRUCTURE_BOTTOM_FIELD; // Straight from the spec. Ick. if ( video->progressive_sequence == 1 ) { if ( video->repeat_first_field == 1 ) { if ( video->top_field_first == 1 ) return 3; else return 2; } else { return 1; } } else { if ( picture_structure_is_field ) { return 1; } else { if ( video->repeat_first_field == 1 ) return 3; else return 2; } } } ///////////////////////// // PictureCodingExtension PictureCodingExtension::PictureCodingExtension( Video *v ) : VideoSection( v ) { } PictureCodingExtension::~PictureCodingExtension() { } int PictureCodingExtension::GetCompleteLength() { int blen = length * 8; int bits = 66; if ( blen < bits ) return -1; if ( composite_display_flag() ) bits += 20; if ( blen < bits ) return -1; return TOBYTES( bits ); } void PictureCodingExtension::Dump() { DEBUG( d_hdv_video, "PictureCodingExtension section" ); } unsigned int PictureCodingExtension::extension_start_code() { return GetBits( 0, 32 ); } unsigned char PictureCodingExtension::extension_start_code_identifier() { return GetBits( 32, 4 ); } unsigned char PictureCodingExtension::f_code00() { return GetBits( 36, 4 ); } unsigned char PictureCodingExtension::f_code01() { return GetBits( 40, 4 ); } unsigned char PictureCodingExtension::f_code10() { return GetBits( 44, 4 ); } unsigned char PictureCodingExtension::f_code11() { return GetBits( 48, 4 ); } unsigned char PictureCodingExtension::intra_dc_precision() { return GetBits( 52, 2 ); } unsigned char PictureCodingExtension::picture_structure() { return GetBits( 54, 2 ); } bool PictureCodingExtension::top_field_first() { return GetBits( 56, 1 ); } bool PictureCodingExtension::frame_pred_frame_dct() { return GetBits( 57, 1 ); } bool PictureCodingExtension::concealment_motion_vectors() { return GetBits( 58, 1 ); } bool PictureCodingExtension::q_scale_type() { return GetBits( 59, 1 ); } bool PictureCodingExtension::intra_vlc_format() { return GetBits( 60, 1 ); } bool PictureCodingExtension::alternate_scan() { return 
GetBits( 61, 1 ); } bool PictureCodingExtension::repeat_first_field() { return GetBits( 62, 1 ); } bool PictureCodingExtension::chroma_420_type() { return GetBits( 63, 1 ); } bool PictureCodingExtension::progressive_frame() { return GetBits( 64, 1 ); } bool PictureCodingExtension::composite_display_flag() { return GetBits( 65, 1 ); } bool PictureCodingExtension::v_axis() { return composite_display_flag() ? GetBits( 66, 1 ) : 0; } unsigned char PictureCodingExtension::field_sequence() { return composite_display_flag() ? GetBits( 67, 3 ) : 0; } bool PictureCodingExtension::sub_carrier() { return composite_display_flag() ? GetBits( 70, 1 ) : 0; } unsigned char PictureCodingExtension::burst_amplitude() { return composite_display_flag() ? GetBits( 71, 7 ) : 0; } unsigned char PictureCodingExtension::sub_carrier_phase() { return composite_display_flag() ? GetBits( 78, 8 ) : 0; } ////////////////////////////////// // PictureSpatialScalableExtension PictureSpatialScalableExtension::PictureSpatialScalableExtension( Video *v ) : VideoSection( v ) { } PictureSpatialScalableExtension::~PictureSpatialScalableExtension() { } int PictureSpatialScalableExtension::GetCompleteLength() { int blen = length * 8; int bits = 82; if ( blen < bits ) return -1; return TOBYTES( bits ); } void PictureSpatialScalableExtension::Dump() { DEBUG( d_hdv_video, "PictureSpatialScalableExtension section" ); } unsigned int PictureSpatialScalableExtension::extension_start_code() { return GetBits( 0, 32 ); } unsigned char PictureSpatialScalableExtension::extension_start_code_identifier() { return GetBits( 32, 4 ); } unsigned short PictureSpatialScalableExtension::lower_layer_temporal_reference() { return GetBits( 36, 10 ); } unsigned short PictureSpatialScalableExtension::lower_layer_horizontal_offset() { return GetBits( 47, 15 ); } unsigned short PictureSpatialScalableExtension::lower_layer_vertical_offset() { return GetBits( 63, 15 ); } unsigned char PictureSpatialScalableExtension::spatial_temporal_weight_code_table_index() { return GetBits( 78, 2 ); } bool PictureSpatialScalableExtension::lower_layer_progressive_frame() { return GetBits( 80, 1 ); } bool PictureSpatialScalableExtension::lower_layer_deinterlaced_field_select() { return GetBits( 81, 1 ); } /////////////////////////////////// // PictureTemporalScalableExtension PictureTemporalScalableExtension::PictureTemporalScalableExtension( Video *v ) : VideoSection( v ) { } PictureTemporalScalableExtension::~PictureTemporalScalableExtension() { } int PictureTemporalScalableExtension::GetCompleteLength() { int blen = length * 8; int bits = 59; if ( blen < bits ) return -1; return TOBYTES( bits ); } void PictureTemporalScalableExtension::Dump() { DEBUG( d_hdv_video, "PictureTemporalScalableExtension section" ); } unsigned int PictureTemporalScalableExtension::extension_start_code() { return GetBits( 0, 32 ); } unsigned char PictureTemporalScalableExtension::extension_start_code_identifier() { return GetBits( 32, 4 ); } unsigned char PictureTemporalScalableExtension::reference_select_code() { return GetBits( 36, 2 ); } unsigned short PictureTemporalScalableExtension::forward_temporal_reference() { return GetBits( 38, 10 ); } unsigned short PictureTemporalScalableExtension::backward_temporal_reference() { return GetBits( 49, 10 ); } /////////// // UserData UserData::UserData( Video *v ) : VideoSection( v ) { } UserData::~UserData() { } int UserData::GetCompleteLength() { int blen = length * 8; int bits = 32; if ( blen < bits ) return -1; while ( !START_CODE_PREFIX( bits / 8 ) ) 
{ bits += 8; if ( blen < bits ) return -1; } return TOBYTES( bits ); } void UserData::Dump() { DEBUG( d_hdv_video, "UserData section length %d", GetCompleteLength() ); } //////// // Group Group::Group( Video *v ) : VideoSection( v ) { } Group::~Group() { } int Group::GetCompleteLength() { int blen = length * 8; int bits = 59; if ( blen < bits ) return -1; return TOBYTES( bits ); } void Group::Dump() { DEBUG( d_hdv_video, "Group of pictures closed_gap %d broken_link %d time code %02d:%02d:%02d.%02d", closed_gop(), broken_link(), time_code_hours(), time_code_minutes(), time_code_seconds(), time_code_pictures() ); } unsigned int Group::group_start_code() { return GetBits( 0, 32 ); } unsigned int Group::time_code() { return GetBits( 32, 25 ); } bool Group::closed_gop() { return GetBits( 57, 1 ); } bool Group::broken_link() { return GetBits( 58, 1 ); } bool Group::drop_frame_flag() { return GetBits( 32, 1 ); } unsigned char Group::time_code_hours() { return GetBits( 33, 5 ); } unsigned char Group::time_code_minutes() { return GetBits( 38, 6 ); } unsigned char Group::time_code_seconds() { return GetBits( 45, 6 ); } unsigned char Group::time_code_pictures() { return GetBits( 51, 6 ); } //////// // Slice Slice::Slice( Video *v ) : VideoSection( v ) { } Slice::~Slice() { } int Slice::GetCompleteLength() { //FIXME - replace with macroblock parsing if ( !last_pos ) { int blen = length * 8; int bits = 32; if ( video->width > 2800 ) bits += 3; if ( video->scalable_mode == DATA_PARTITIONING ) bits += 7; bits += 5; if ( blen < bits ) return -1; if ( intra_slice_flag() ) { bits += 9; if ( blen < bits ) return -1; while ( GetBits( bits, 1 ) ) { bits += 9; if ( blen < bits ) return -1; } } bits += 1; if ( blen < bits ) return -1; last_pos = TOBYTES( bits ); } while ( ( last_pos + 3 ) < length ) { if ( START_CODE_PREFIX(last_pos) ) return last_pos; else last_pos++; } return -1; } //FIXME - remove this when macroblock parsing is added void Slice::Clear() { last_pos = 0; VideoSection::Clear(); } void Slice::Dump() { DEBUG( d_hdv_video, "Slice section." ); } unsigned int Slice::slice_start_code() { return GetBits( 0, 32 ); } unsigned char Slice::slice_vertical_position_extension() { return video->width > 2800 ? GetBits( 32, 3 ) : 0; } unsigned char Slice::priority_breakpoint() { int bits = 32; if ( video->width > 2800 ) bits += 3; return video->scalable_mode == DATA_PARTITIONING ? GetBits( bits, 7 ) : 0; } unsigned char Slice::quantiser_scale_code() { int bits = 32; if ( video->width > 2800 ) bits += 3; if ( video->scalable_mode == DATA_PARTITIONING ) bits += 7; return GetBits( bits, 5 ); } bool Slice::intra_slice_flag() { int bits = 32; if ( video->width > 2800 ) bits += 3; if ( video->scalable_mode == DATA_PARTITIONING ) bits += 7; bits += 5; return GetBits( bits, 1 ); // Note, this is bit only exists if it's a 1...see spec } bool Slice::intra_slice() { int bits = 32; if ( video->width > 2800 ) bits += 3; if ( video->scalable_mode == DATA_PARTITIONING ) bits += 7; bits += 6; return intra_slice_flag() ? 
GetBits( bits, 1 ) : 0; } bool Slice::extra_bit_slice( unsigned int n ) { unsigned int p = 0; int bits = 32; if ( video->width > 2800 ) bits += 3; if ( video->scalable_mode == DATA_PARTITIONING ) bits += 7; bits += 14; if ( !intra_slice_flag() ) return 0; while ( GetBits( bits + ( p * 9 ), 1 ) ) { if ( n == p ) return GetBits( bits + ( p * 9 ), 1 ); else p++; } return 0; } unsigned char Slice::extra_information_slice( unsigned int n ) { unsigned int p = 0; int bits = 32; if ( video->width > 2800 ) bits += 3; if ( video->scalable_mode == DATA_PARTITIONING ) bits += 7; bits += 14; if ( !intra_slice_flag() ) return 0; while ( GetBits( bits + ( p * 9 ), 1 ) ) { if ( n == p ) return GetBits( bits + 1 + ( p * 9 ), 8 ); else p++; } return 0; } dvgrab-3.5+git20160707.1.e46042e/v4l2reader.h0000644000175000017500000000301412716434257015319 0ustar eses/* * v4l2reader.h -- grab DV/MPEG-2 from V4L2 * Copyright (C) 2007 Dan Dennedy * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ #ifndef _V4L2IO_H #define _V4L2IO_H 1 #ifdef HAVE_CONFIG_H #include <config.h> #endif #ifdef HAVE_LINUX_VIDEODEV2_H #include "ieee1394io.h" class Frame; class v4l2Reader: public IEEE1394Reader { public: v4l2Reader( const char *filename, int frames = 50, bool hdv = false ); ~v4l2Reader(); bool Open( void ); void Close( void ); bool StartReceive( void ); void StopReceive( void ); bool StartThread( void ); void StopThread( void ); void* Thread( void ); private: struct buffer { void *start; size_t length; }; static void* ThreadProxy( void *arg ); bool Handler( void ); int ioctl( int request, void *arg ); const char* m_device; int m_fd; unsigned int m_bufferCount; struct buffer* m_buffers; }; #endif #endif dvgrab-3.5+git20160707.1.e46042e/riff.h0000644000175000017500000000526312716434257014303 0ustar eses/* * riff.h library for RIFF file format i/o * Copyright (C) 2000 - 2002 Arne Schirmacher * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ #ifndef _RIFF_H #define _RIFF_H 1 #include <vector> using std::vector; #include "endian_types.h" #define QUADWORD int64_le_t #define DWORD int32_le_t #define LONG u_int32_le_t #define WORD int16_le_t #define BYTE u_int8_le_t #define FOURCC u_int32_t // No endian conversion needed.
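/* Illustrative use of the FOURCC type and the make_fourcc() helper declared below
   (the call site shown here is hypothetical; riff.cc contains the real ones):
   chunk and list types are compared as plain 32 bit integers, e.g.

       FOURCC movi = make_fourcc( "movi" );
       if ( riff.GetDirectoryEntry( i ).type == movi )
           riff.PrintDirectoryEntry( i );

   where "riff" is a RIFFFile object and "i" a directory index. */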
#define RIFF_NO_PARENT (-1) #define RIFF_LISTSIZE (4) #define RIFF_HEADERSIZE (8) #ifdef __cplusplus extern "C" { FOURCC make_fourcc( const char * s ); } #endif class RIFFDirEntry { public: FOURCC type; FOURCC name; off_t length; off_t offset; int parent; int written; RIFFDirEntry(); RIFFDirEntry( FOURCC t, FOURCC n, int l, int o, int p ); }; class RIFFFile { public: RIFFFile(); RIFFFile( const RIFFFile& ); virtual ~RIFFFile(); RIFFFile& operator=( const RIFFFile& ); virtual bool Open( const char *s ); virtual bool Create( const char *s ); virtual void Close(); virtual int AddDirectoryEntry( FOURCC type, FOURCC name, off_t length, int list ); virtual void SetDirectoryEntry( int i, FOURCC type, FOURCC name, off_t length, off_t offset, int list ); virtual void SetDirectoryEntry( int i, RIFFDirEntry &entry ); virtual void GetDirectoryEntry( int i, FOURCC &type, FOURCC &name, off_t &length, off_t &offset, int &list ) const; virtual RIFFDirEntry GetDirectoryEntry( int i ) const; virtual off_t GetFileSize( void ) const; virtual void PrintDirectoryEntry( int i ) const; virtual void PrintDirectoryEntryData( const RIFFDirEntry &entry ) const; virtual void PrintDirectory( void ) const; virtual int FindDirectoryEntry( FOURCC type, int n = 0 ) const; virtual void ParseChunk( int parent ); virtual void ParseList( int parent ); virtual void ParseRIFF( void ); virtual void ReadChunk( int chunk_index, void *data ); virtual void WriteChunk( int chunk_index, const void *data ); virtual void WriteRIFF( void ); protected: int fd; private: vector directory; }; #endif dvgrab-3.5+git20160707.1.e46042e/Makefile.am0000644000175000017500000000235012716434257015232 0ustar esesAM_CXXFLAGS = -D_REENTRANT -D_FILE_OFFSET_BITS=64 AM_CFLAGS = -D_REENTRANT -D_FILE_OFFSET_BITS=64 AUTOMAKE_OPTIONS = foreign EXTRA_DIST = ChangeLog TODO dvgrab.dox dvgrab.spec dvgrab.1 NEWS man_MANS = dvgrab.1 bin_PROGRAMS = dvgrab #noinst_PROGRAMS = riffdump rawdump dvgrab_SOURCES = affine.h avi.cc avi.h dvframe.cc dvframe.h dvgrab.cc dvgrab.h \ endian_types.h error.cc error.h filehandler.cc filehandler.h frame.cc frame.h \ hdvframe.cc hdvframe.h iec13818-1.cc iec13818-1.h iec13818-2.cc iec13818-2.h \ ieee1394io.cc ieee1394io.h io.c io.h main.cc raw1394util.c raw1394util.h riff.cc \ riff.h smiltime.cc smiltime.h stringutils.cc stringutils.h v4l2reader.h v4l2reader.cc \ srt.h srt.cc AM_CPPFLAGS = \ @LIBRAW1394_CFLAGS@ \ @LIBAVC1394_CFLAGS@ \ @LIBIEC61883_CFLAGS@ \ @LIBDV_CFLAGS@ \ @LIBQUICKTIME_CFLAGS@ dvgrab_LDADD = \ @LIBRAW1394_LIBS@ \ @LIBAVC1394_LIBS@ \ @LIBIEC61883_LIBS@ \ @LIBDV_LIBS@ \ @LIBQUICKTIME_LIBS@ #riffdump_SOURCES = error.cc error.h riffdump.cc avi.h riff.h avi.cc riff.cc dvframe.h dvframe.cc frame.h #rawdump_SOURCES = rawdump.c # a C++ formatter ASTYLEFLAGS = -c -t -b -P pretty: astyle $(ASTYLEFLAGS) $(dvgrab_SOURCES) # a documentation generator dox: doxygen dvgrab.dox dvgrab-3.5+git20160707.1.e46042e/riff.cc0000644000175000017500000003545112716434257014443 0ustar eses/* * riff.cc library for RIFF file format i/o * Copyright (C) 2000 - 2002 Arne Schirmacher * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ #ifdef HAVE_CONFIG_H #include #endif // C++ includes #include //#include #include #include #include using std::cout; using std::hex; using std::dec; using std::setw; using std::setfill; using std::endl; // C includes #include #include #include // local includes #include "error.h" #include "riff.h" /** make a 32 bit "string-id" \param s a pointer to 4 chars \return the 32 bit "string id" \bugs It is not checked whether we really have 4 characters Some compilers understand constants like int id = 'ABCD'; but I could not get it working on the gcc compiler so I had to use this workaround. We can now use id = make_fourcc("ABCD") instead. */ FOURCC make_fourcc( const char *s ) { if ( s[ 0 ] == 0 ) return 0; else return *( ( FOURCC* ) s ); } RIFFDirEntry::RIFFDirEntry() {} RIFFDirEntry::RIFFDirEntry ( FOURCC t, FOURCC n, int l, int o, int p ) : type( t ), name( n ), length( l ), offset( o ), parent( p ), written( 0 ) {} /** Creates the object without an output file. */ RIFFFile::RIFFFile() : fd( -1 ) {} /* Copy constructor Duplicate the file descriptor */ RIFFFile::RIFFFile( const RIFFFile& riff ) : fd( -1 ) { if ( riff.fd != -1 ) { fd = dup( riff.fd ); } directory = riff.directory; } /** Destroys the object. If it has an associated opened file, close it. */ RIFFFile::~RIFFFile() { Close(); } RIFFFile& RIFFFile::operator=( const RIFFFile& riff ) { if ( fd != riff.fd ) { Close(); if ( riff.fd != -1 ) { fd = dup( riff.fd ); } directory = riff.directory; } return *this; } /** Creates or truncates the file. \param s the filename */ bool RIFFFile::Create( const char *s ) { fd = open( s, O_RDWR | O_NONBLOCK | O_CREAT | O_TRUNC, 00644 ); if ( fd == -1 ) return false; else return true; } /** Opens the file read only. \param s the filename */ bool RIFFFile::Open( const char *s ) { fd = open( s, O_RDONLY | O_NONBLOCK ); if ( fd == -1 ) return false; else return true; } /** Destroys the object. If it has an associated opened file, close it. */ void RIFFFile::Close() { if ( fd != -1 ) { close( fd ); fd = -1; } } /** Adds an entry to the list of containers. \param type the type of this entry \param name the name \param length the length of the data in the container \param list the container in which this object is contained. \return the ID of the newly created entry The topmost object is not contained in any other container. Use the special ID RIFF_NO_PARENT to create the topmost object. */ int RIFFFile::AddDirectoryEntry( FOURCC type, FOURCC name, off_t length, int list ) { /* Put all parameters in an RIFFDirEntry object. The offset is currently unknown. */ RIFFDirEntry entry( type, name, length, 0 /* offset */, list ); /* If the new chunk is in a list, then get the offset and size of that list. The offset of this chunk is the end of the list (parent_offset + parent_length) plus the size of the chunk header. */ if ( list != RIFF_NO_PARENT ) { RIFFDirEntry parent = GetDirectoryEntry( list ); entry.offset = parent.offset + parent.length + RIFF_HEADERSIZE; } /* The list which this new chunk is a member of has now increased in size. Get that directory entry and bump up its length by the size of the chunk. Since that list may also be contained in another list, walk up to the top of the tree. 
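For example (hypothetical sizes): appending a 16 byte chunk to a 'movi' LIST that itself sits inside the top-level 'AVI ' RIFF grows the length of the 'movi' entry, then of the 'AVI ' entry, then of the internal FILE entry, each by RIFF_HEADERSIZE + 16 bytes.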
*/ while ( list != RIFF_NO_PARENT ) { RIFFDirEntry parent = GetDirectoryEntry( list ); parent.length += RIFF_HEADERSIZE + length; SetDirectoryEntry( list, parent ); list = parent.parent; } directory.insert( directory.end(), entry ); return directory.size() - 1; } /** Modifies an entry. \param i the ID of the entry which is to modify \param type the type of this entry \param name the name \param length the length of the data in the container \param list the container in which this object is contained. \note Do not change length, offset, or the parent container. \note Do not change an empty name ("") to a name and vice versa */ void RIFFFile::SetDirectoryEntry( int i, FOURCC type, FOURCC name, off_t length, off_t offset, int list ) { RIFFDirEntry entry( type, name, length, offset, list ); assert( i >= 0 && i < ( int ) directory.size() ); directory[ i ] = entry; } /** Modifies an entry. The entry.written flag is set to false because the contents has been modified \param i the ID of the entry which is to modify \param entry the new entry \note Do not change length, offset, or the parent container. \note Do not change an empty name ("") to a name and vice versa */ void RIFFFile::SetDirectoryEntry( int i, RIFFDirEntry &entry ) { assert( i >= 0 && i < ( int ) directory.size() ); entry.written = false; directory[ i ] = entry; } /** Retrieves an entry. Gets the most important member variables. \param i the ID of the entry to retrieve \param type \param name \param length \param offset \param list */ void RIFFFile::GetDirectoryEntry( int i, FOURCC &type, FOURCC &name, off_t &length, off_t &offset, int &list ) const { RIFFDirEntry entry; assert( i >= 0 && i < ( int ) directory.size() ); entry = directory[ i ]; type = entry.type; name = entry.name; length = entry.length; offset = entry.offset; list = entry.parent; } /** Retrieves an entry. Gets the whole RIFFDirEntry object. \param i the ID of the entry to retrieve \return the entry */ RIFFDirEntry RIFFFile::GetDirectoryEntry( int i ) const { assert( i >= 0 && i < ( int ) directory.size() ); return directory[ i ]; } /** Calculates the total size of the file \return the size the file in bytes */ off_t RIFFFile::GetFileSize( void ) const { /* If we have at least one entry, return the length field of the FILE entry, which is the length of its contents, which is the actual size of whatever is currently in the AVI directory structure. Note that the first entry does not belong to the AVI file. If we don't have any entry, the file size is zero. */ if ( directory.size() > 0 ) return directory[ 0 ].length; else return 0; } /** prints the attributes of the entry \param i the ID of the entry to print */ void RIFFFile::PrintDirectoryEntry ( int i ) const { RIFFDirEntry entry; RIFFDirEntry parent; FOURCC entry_name; FOURCC list_name; /* Get all attributes of the chunk object. If it is contained in a list, get the name of the list too (otherwise the name of the list is blank). If the chunk object doesn´t have a name (only LISTs and RIFFs have a name), the name is blank. */ entry = GetDirectoryEntry( i ); if ( entry.parent != RIFF_NO_PARENT ) { parent = GetDirectoryEntry( entry.parent ); list_name = parent.name; } else { list_name = make_fourcc( " " ); } if ( entry.name != 0 ) { entry_name = entry.name; } else { entry_name = make_fourcc( " " ); } /* Print out the ascii representation of type and name, as well as length and file offset. 
*/ cout << hex << setfill( '0' ) << "type: " << ((char *)&entry.type)[0] << ((char *)&entry.type)[1] << ((char *)&entry.type)[2] << ((char *)&entry.type)[3] << " name: " << ((char *)&entry_name)[0] << ((char *)&entry_name)[1] << ((char *)&entry_name)[2] << ((char *)&entry_name)[3] << " length: 0x" << setw( 12 ) << entry.length << " offset: 0x" << setw( 12 ) << entry.offset << " list: " << ((char *)&list_name)[0] << ((char *)&list_name)[1] << ((char *)&list_name)[2] << ((char *)&list_name)[3] << dec << endl; /* print the content itself */ PrintDirectoryEntryData( entry ); } /** prints the contents of the entry Prints a readable representation of the contents of an index. Override this to print out any objects you store in the RIFF file. \param entry the entry to print */ void RIFFFile::PrintDirectoryEntryData( const RIFFDirEntry &entry ) const {} /** prints the contents of the whole directory Prints a readable representation of the contents of an index. Override this to print out any objects you store in the RIFF file. \param entry the entry to print */ void RIFFFile::PrintDirectory() const { int i; int count = directory.size(); for ( i = 0; i < count; ++i ) PrintDirectoryEntry( i ); } /** finds the index finds the index of a given directory entry type \todo inefficient if the directory has lots of items \param type the type of the entry to find \param n the zero-based instance of type to locate \return the index of the found object in the directory, or -1 if not found */ int RIFFFile::FindDirectoryEntry ( FOURCC type, int n ) const { int i, j = 0; int count = directory.size(); for ( i = 0; i < count; ++i ) if ( directory[ i ].type == type ) { if ( j == n ) return i; j++; } return -1; } /** Reads all items that are contained in one list Read in one chunk and add it to the directory. If the chunk happens to be of type LIST, then call ParseList recursively for it. \param parent The id of the item to process */ void RIFFFile::ParseChunk( int parent ) { FOURCC type; DWORD length; int typesize; /* Check whether it is a LIST. If so, let ParseList deal with it */ read( fd, &type, sizeof( type ) ); if ( type == make_fourcc( "LIST" ) ) { typesize = -sizeof( type ); fail_if( lseek( fd, typesize, SEEK_CUR ) == ( off_t ) - 1 ); ParseList( parent ); } /* it is a normal chunk, create a new directory entry for it */ else { fail_neg( read( fd, &length, sizeof( length ) ) ); if ( length & 1 ) length++; AddDirectoryEntry( type, 0, length, parent ); fail_if( lseek( fd, length, SEEK_CUR ) == ( off_t ) - 1 ); } } /** Reads all items that are contained in one list \param parent The id of the list to process */ void RIFFFile::ParseList( int parent ) { FOURCC type; FOURCC name; int list; DWORD length; off_t pos; off_t listEnd; /* Read in the chunk header (type and length). */ fail_neg( read( fd, &type, sizeof( type ) ) ); fail_neg( read( fd, &length, sizeof( length ) ) ); if ( length & 1 ) length++; /* The contents of the list starts here. Obtain its offset. The list name (4 bytes) is already part of the contents). */ pos = lseek( fd, 0, SEEK_CUR ); fail_if( pos == ( off_t ) - 1 ); fail_neg( read( fd, &name, sizeof( name ) ) ); /* Add an entry for this list. */ list = AddDirectoryEntry( type, name, sizeof( name ), parent ); /* Read in any chunks contained in this list. This list is the parent for all chunks it contains. 
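An illustrative on-disk layout (hypothetical 'movi' list holding a single 16 byte '00dc' chunk): the bytes are 'LIST', a little-endian 32 bit length of 0x1C, the list name 'movi', then the sub-chunk header '00dc' with length 16 and its 16 data bytes. The length after 'LIST' covers the list name plus all sub-chunks, which is why the loop below stops at pos + length.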
*/ listEnd = pos + length; while ( pos < listEnd ) { ParseChunk( list ); pos = lseek( fd, 0, SEEK_CUR ); fail_if( pos == ( off_t ) - 1 ); } } /** Reads the directory structure of the whole RIFF file */ void RIFFFile::ParseRIFF( void ) { FOURCC type; DWORD length; off_t filesize; off_t pos; int container = AddDirectoryEntry( make_fourcc( "FILE" ), make_fourcc( "FILE" ), 0, RIFF_NO_PARENT ); pos = lseek( fd, 0, SEEK_SET ); /* calculate file size from RIFF header instead from physical file. */ while ( ( read( fd, &type, sizeof( type ) ) > 0 ) && ( read( fd, &length, sizeof( length ) ) > 0 ) && ( type == make_fourcc( "RIFF" ) ) ) { filesize += length + RIFF_HEADERSIZE; fail_if( lseek( fd, pos, SEEK_SET ) == ( off_t ) - 1 ); ParseList( container ); pos = lseek( fd, 0, SEEK_CUR ); fail_if( pos == ( off_t ) - 1 ); } } /** Reads one item including its contents from the RIFF file \param chunk_index The index of the item to write \param data A pointer to the data */ void RIFFFile::ReadChunk( int chunk_index, void *data ) { RIFFDirEntry entry; entry = GetDirectoryEntry( chunk_index ); fail_if( lseek( fd, entry.offset, SEEK_SET ) == ( off_t ) - 1 ); fail_neg( read( fd, data, entry.length ) ); } /** Writes one item including its contents to the RIFF file \param chunk_index The index of the item to write \param data A pointer to the data */ void RIFFFile::WriteChunk( int chunk_index, const void *data ) { RIFFDirEntry entry; entry = GetDirectoryEntry( chunk_index ); fail_if( lseek( fd, entry.offset - RIFF_HEADERSIZE, SEEK_SET ) == ( off_t ) - 1 ); fail_neg( write( fd, &entry.type, sizeof( entry.type ) ) ); DWORD length = entry.length; fail_neg( write( fd, &length, sizeof( length ) ) ); fail_neg( write( fd, data, entry.length ) ); /* Remember that this entry already has been written. */ directory[ chunk_index ].written = true; } /** Writes out the directory structure For all items in the directory list that have not been written yet, it seeks to the file position where that item should be stored and writes the type and length field. If the item has a name, it will also write the name field. \note It does not write the contents of any item. Use WriteChunk to do that. */ void RIFFFile::WriteRIFF( void ) { int i; RIFFDirEntry entry; int count = directory.size(); /* Start at the second entry (RIFF), since the first entry (FILE) is needed only for internal purposes and is not written to the file. */ for ( i = 1; i < count; ++i ) { /* Only deal with entries that haven´t been written */ entry = GetDirectoryEntry( i ); if ( entry.written == false ) { /* A chunk entry consist of its type and length, a list entry has an additional name. Look up the entry, seek to the start of the header, which is at the offset of the data start minus the header size and write out the items. */ fail_if( lseek( fd, entry.offset - RIFF_HEADERSIZE, SEEK_SET ) == ( off_t ) - 1 ) ; fail_neg( write( fd, &entry.type, sizeof( entry.type ) ) ); DWORD length = entry.length; fail_neg( write( fd, &length, sizeof( length ) ) ); /* If it has a name, it is a list. Write out the extra name field. */ if ( entry.name != 0 ) { fail_neg( write( fd, &entry.name, sizeof( entry.name ) ) ); } /* Remember that this entry already has been written. 
*/ directory[ i ].written = true; } } } dvgrab-3.5+git20160707.1.e46042e/AUTHORS0000644000175000017500000000112212716434257014242 0ustar esesDan Dennedy Dan Streetman Arne Schirmacher Contributors: Andy (http://www.videox.net/) Stephane Gourichon Daniel Kobras Alastair Mayer Mike Piper Glen Nakamura Pierre Marc Dumuid - csize and cmincutsize options Frederik V. - help with linux-uvc (uvcvideo/V4L2) integration Lars Täuber - jvc-p25 option Pelayo Bernedo - srt (subtitle output) - autosplit on recording date/time gap dvgrab-3.5+git20160707.1.e46042e/hdvframe.h0000644000175000017500000000501512716434257015144 0ustar eses/* * hdvframe.h * Copyright (C) 2007 Dan Streetman * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software Foundation, * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ #ifndef _HDVFRAME_H #define _HDVFRAME_H 1 #include "frame.h" #include "iec13818-1.h" #include "iec13818-2.h" #define MPEG2_JVC_P25 (1<<0) class HDVStreamParams { public: HDVStreamParams(); ~HDVStreamParams(); unsigned short program_map_PID; unsigned short video_stream_PID; unsigned short audio_stream_PID; unsigned short sony_private_a0_PID; unsigned short sony_private_a1_PID; Video video; #define CARRYOVER_DATA_MAX_SIZE (4 * DATA_BUFFER_LEN) unsigned char carryover_data[CARRYOVER_DATA_MAX_SIZE]; int carryover_length; int width, height; float frameRate; struct tm recordingDate; bool isRecordingDateSet; TimeCode timeCode; bool isTimeCodeSet; TimeCode gopTimeCode; bool isGOPTimeCodeSet; }; class HDVFrame : public Frame { public: HDVFrame( HDVStreamParams *p ); ~HDVFrame(); void SetDataLen( int len ); void Clear(); // Meta-data bool GetTimeCode( TimeCode &tc ); bool GetRecordingDate( struct tm &rd ); bool IsNewRecording(); bool IsGOP(); bool IsComplete(); // Video info int GetWidth(); int GetHeight(); float GetFrameRate(); // HDV or DV bool IsHDV(); bool CanStartNewStream(); bool CouldBeJVCP25(); // This is public so the reader can set the last // HDVFrame as complete at end of stream/file void SetComplete(); protected: void ProcessFrame( unsigned int start ); void ProcessPacket(); void ProcessPAT(); void ProcessPMT(); void ProcessVideo(); void ProcessAudio(); void ProcessSonyA1(); protected: HDVStreamParams *params; HDVPacket *packet; struct tm recordingDate; bool isRecordingDateSet; TimeCode timeCode; bool isTimeCodeSet; bool isNewRecording; bool isComplete; bool isGOP; int width; int height; float frameRate; int lastVideoDataLen; int lastAudioDataLen; bool repeatFirstField; }; #endif
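/* Illustrative use of the HDVFrame interface declared above; the capture loop that
   fills the frame with 188 byte transport stream packets is assumed and not shown:

       HDVStreamParams params;
       HDVFrame frame( &params );
       // ... reader appends TS packets into the frame and sets its data length ...
       if ( frame.IsComplete() )
       {
           TimeCode tc;
           if ( frame.GetTimeCode( tc ) )
               printf( "timecode %02d:%02d:%02d.%02d\n", tc.hour, tc.min, tc.sec, tc.frame );
       }
*/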