Retired Document
Important: This document may not represent best practices for current development. Links to downloads and other resources may no longer be valid.
Video Output Components
This section describes what video output components are, and what they do.
QuickTime video output, which most often comes from QuickTime movies, can be displayed in windows that appear on a computer’s desktop. Because these windows are created and managed by the computer’s operating system, software that presents QuickTime video can use the operating system’s video display services to specify which display (when there is more than one video display) and window to use for video output.
There are, however, many video output devices that are not recognized by operating systems. To display QuickTime video on these devices, your software can use video output components. The components, which are normally developed by the manufacturers of video output devices, provide a standard interface for video output to a device.
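By way of illustration, an application can use the Component Manager to find the video output components that are installed. The following minimal sketch enumerates components of type QTVideoOutputComponentType and prints their names; error handling is kept to a minimum, and the framework-style include is an assumption (classic builds include QuickTimeComponents.h instead).

#include <QuickTime/QuickTime.h>   /* classic builds: #include <QuickTimeComponents.h> */
#include <stdio.h>

/* Minimal sketch: list every installed video output component by name. */
static void ListVideoOutputComponents(void)
{
    ComponentDescription looking = { QTVideoOutputComponentType, 0, 0, 0, 0 };
    Component            c       = 0;
    Handle               nameH   = NewHandle(0);

    while ((c = FindNextComponent(c, &looking)) != 0) {
        ComponentDescription cd;
        if (nameH != NULL && GetComponentInfo(c, &cd, nameH, NULL, NULL) == noErr) {
            /* The component name comes back as a Pascal string. */
            unsigned char *pName = (unsigned char *) *nameH;
            printf("video output component: %.*s\n", pName[0], pName + 1);
        }
    }
    if (nameH != NULL)
        DisposeHandle(nameH);
}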
How Video Output Components Process Video Data
A video output component receives QuickTime video data and delivers it to a video output device for display. If the incoming data is in a format that the video output device can display directly, the video output component can simply pass the data along to the device. If the incoming data cannot be displayed directly, the video output component must use a transfer codec or decompressor component to convert the data to a format that the video output device can display.
If a video output device cannot directly display 32-bit RGB data or data in one of the other supported QuickTime pixel formats, the developers of the device are strongly encouraged to provide a transfer codec that accepts data in one of the supported QuickTime pixel formats (preferably 32-bit RGB) and converts it to data that can be displayed on the device. When this transfer codec is available, any QuickTime video can be displayed on the video output device: the Image Compression Manager can convert any QuickTime images to a supported QuickTime pixel format and then invoke the transfer codec to display the result.
If any special decompressors, such as a transfer codec, are needed for a video output device, the decompressors are included in the definitions of the component’s display modes, as described in Display Modes. How hardware developers can develop a transfer codec for their device is described in Creating a Transfer Codec for a Video Output Component.
Some video output devices do not accept pixels as input. For example, there are devices that display JPEG data directly. For these devices, a video output component can send the appropriate data directly, or it can invoke a compressor component to convert data in a pixel format to the appropriate data.
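Seen from the client side, this processing is largely invisible: once a display mode has been selected (see Display Modes), the application begins output, obtains the component's graphics world, and lets the Image Compression Manager decompress into it, invoking whatever transfer codec the mode declares. The following is a minimal sketch of that sequence, not production code; vo is assumed to be an open video output component with a display mode already set, and imageData and imageDesc stand for a compressed image and its image description that the caller already has.

#include <QuickTime/QuickTime.h>

/* Minimal sketch: draw one compressed QuickTime image through a video
   output component.  'vo' must already be open with a display mode set;
   'imageData' and 'imageDesc' are assumed to describe a compressed image. */
static ComponentResult DrawImageToVideoOutput(ComponentInstance vo,
                                              Ptr imageData,
                                              ImageDescriptionHandle imageDesc)
{
    GWorldPtr       outputWorld = NULL;
    ComponentResult err;

    /* Gain exclusive access to the video output device. */
    err = QTVideoOutputBegin(vo);
    if (err != noErr)
        return err;

    /* The component supplies a graphics world that represents the device.
       Drawing into it drives the device, by way of whatever transfer codec
       the selected display mode declares. */
    err = QTVideoOutputGetGWorld(vo, &outputWorld);
    if (err == noErr) {
        Rect dstRect;
        GetPortBounds(outputWorld, &dstRect);

        /* Let the Image Compression Manager decompress into the output
           GWorld, scaling the whole image to fill the display. */
        err = DecompressImage(imageData, imageDesc,
                              GetGWorldPixMap(outputWorld),
                              NULL, &dstRect, srcCopy, NULL);
    }

    QTVideoOutputEnd(vo);   /* release the device */
    return err;
}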
Display Modes
A video output device has one or more display modes. The characteristics of each mode determine how video is displayed. When any software displays video on a video output device, it must select which of the device’s display modes to use.
The characteristics of a display mode include
the height of the displayed image, in pixels
the width of the displayed image, in pixels
the horizontal resolution of the display, in pixels per inch
the vertical resolution of the display, in pixels per inch
the refresh rate of the display, in Hertz
the pixel type of the display
a text description of the display mode
The characteristics can also include a list of decompressor components required for the mode that are provided specifically for the video output device. If a video output device cannot directly display any of the pixel formats supported by QuickTime, the vendor of the device must provide one or more special decompressors to convert supported pixel formats to a format the device can display. If a video output device can display one or more of the pixel formats supported by QuickTime, the Image Compression Manager can use standard decompressors that are included with QuickTime, and the list of special decompressor components can be empty.
These characteristics, returned by the QTVideoOutputGetDisplayModeList function, are stored in a QT atom container. For a description of this QT atom container, see Display Mode QT Atom Container.
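For illustration, the following minimal sketch retrieves the display mode list from an open video output component and simply selects the first mode it reports. The atom-walking calls are the standard QT atom container routines; kQTVODisplayModeItem is assumed to be the display-mode atom type constant declared in QuickTimeComponents.h. A real application would examine each mode's child atoms (dimensions, resolution, refresh rate, pixel type, name, and decompressor list) before choosing.

#include <QuickTime/QuickTime.h>

/* Minimal sketch: pick the first display mode offered by an open video
   output component 'vo'. */
static ComponentResult SelectFirstDisplayMode(ComponentInstance vo)
{
    QTAtomContainer modeList = NULL;
    ComponentResult err;

    err = QTVideoOutputGetDisplayModeList(vo, &modeList);
    if (err != noErr || modeList == NULL)
        return err;

    /* Each display mode is a kQTVODisplayModeItem child of the container's
       root; its atom ID is the value passed to QTVideoOutputSetDisplayMode. */
    {
        QTAtomID modeID   = 0;
        QTAtom   modeAtom = QTFindChildByIndex(modeList, kParentAtomIsContainer,
                                               kQTVODisplayModeItem, 1, &modeID);
        if (modeAtom != 0)
            err = QTVideoOutputSetDisplayMode(vo, modeID);
        else
            err = paramErr;   /* no display modes reported */
    }

    QTDisposeAtomContainer(modeList);
    return err;
}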
Transfer Codecs
If you are the manufacturer of a video output device, you need to provide a video output component for your device as described in Creating Video Output Components. In addition, if your video output device cannot display a pixel format defined by QuickTime, you should include a special decompressor, known as a transfer codec, that converts one of the supported QuickTime pixel formats (preferably 32-bit RGB) to data that the device can display. When this transfer codec is available, the QuickTime Image Compression Manager automatically uses it together with its built-in decompressors. This, in turn, lets applications or other software draw any QuickTime video directly to the video output component’s graphics world.
This section gives an overview of developing this transfer codec. Bear in mind that a transfer codec is a specialized image decompressor component, and should be based on the Base Image Decompressor.
Overview of Transfer Codecs
QuickTime 2.5 introduced support that lets codec developers accelerate certain image decompression operations. These features are most likely to be used by developers of video hardware boards that provide special acceleration features, such as arbitrary scaling or color space conversion.
Prior to QuickTime 2.5, if a codec could not decompress an image directly to the screen, the Image Compression Manager (ICM) would prepare an offscreen buffer for the codec, then use the None codec to transfer the image from the offscreen buffer to the screen. With QuickTime 2.5, if a codec cannot decompress directly to the screen, it has the option of specifying that it can decompress to one or more types of non-RGB pixel spaces, specified as an OSType (for example, 'yuvs'). The ICM then attempts to find a decompressor component of that type (a transfer codec) that can transfer the image to the screen. Because the ICM does not define non-RGB pixel types, the transfer codec must support additional calls to set up the offscreen buffer. If no transfer codec can be found that supports the specified non-RGB pixel types, the ICM uses the None codec with an RGB offscreen buffer.
The real speed benefit comes from the fact that, because the transfer codec defines the offscreen buffer, it can place the buffer in on-board memory, or even point it at an overlay plane so that the offscreen image really is on the screen. In this case, the additional step of transferring the bits from offscreen memory onto the screen is avoided.
Creating a Transfer Codec for a Video Output Component
For an image decompressor component to indicate that it can decompress to non-RGB pixel types, it should, in the ImageCodecPreDecompress call, fill in the wantedDestinationPixelTypes field with a handle to a zero-terminated list of the pixel types it can decompress to. The ICM immediately makes a copy of the handle. Cinepak, for example, returns a 12-byte handle containing 'yuvs', 'yuvu', and $00000000. Since ImageCodecPreDecompress can be called often, it is suggested that codecs allocate this handle when their component is opened and simply fill in the wantedDestinationPixelTypes field with it during ImageCodecPreDecompress. Components that use this method should be sure to dispose of the handle when the component is closed.
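A minimal sketch of that pattern follows; it mirrors the 12-byte, zero-terminated layout described above. The XYZCodecGlobals storage type and its field are hypothetical placeholders for whatever per-instance storage your codec already keeps.

/* Hypothetical per-instance storage for the codec (your component's own
   storage structure will differ). */
typedef struct {
    OSType **wantedPixelTypes;   /* zero-terminated list, allocated at open */
} XYZCodecGlobalsRecord, **XYZCodecGlobals;

/* At component-open time: build the 12-byte, zero-terminated pixel type
   list once, so ImageCodecPreDecompress never has to allocate it. */
static OSErr XYZCodecAllocatePixelTypeList(XYZCodecGlobals glob)
{
    OSType **h = (OSType **) NewHandleClear(3 * sizeof(OSType));
    if (h == NULL)
        return memFullErr;
    (*h)[0] = FOUR_CHAR_CODE('yuvs');
    (*h)[1] = FOUR_CHAR_CODE('yuvu');
    (*h)[2] = 0;                          /* terminator */
    (**glob).wantedPixelTypes = h;
    return noErr;
}

/* In ImageCodecPreDecompress: hand the list to the ICM, which copies it.    */
/*     p->wantedDestinationPixelTypes = (**glob).wantedPixelTypes;           */

/* At component-close time: balance the allocation. */
static void XYZCodecDisposePixelTypeList(XYZCodecGlobals glob)
{
    if ((**glob).wantedPixelTypes != NULL) {
        DisposeHandle((Handle) (**glob).wantedPixelTypes);
        (**glob).wantedPixelTypes = NULL;
    }
}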
Apple’s Cinepak decompressor supports decompressing to the 'yuvs' and 'yuvu' pixel types. Type 'yuvs' is a YUV format whose u and v components are signed (center point at $00), while 'yuvu' has its u and v components centered at $80.
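Because the two encodings differ only in where the chroma center sits, converting a u or v sample from one to the other amounts to flipping its high bit. The helper below is purely illustrative and is not part of any QuickTime API.

/* Convert a single u or v sample between 'yuvu' (center at $80) and
   'yuvs' (signed, center at $00).  Adding or subtracting $80 modulo 256
   is the same operation in both directions, i.e. an XOR of the high bit.
   Illustrative only. */
static unsigned char FlipChromaCenter(unsigned char uv)
{
    return (unsigned char) (uv ^ 0x80);
}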
As an example, suppose XYZ Co. has a video board with a YUV overlay plane capable of arbitrary scaling, and that the overlay plane takes data in the same format as Cinepak’s 'yuvs' format. In this case, XYZ would create a component of type 'imdc' and subtype 'yuvs'.
The ImageCodecPreDecompress call would set the codecCanScale, codecHasVolatileBuffer, and codecImageBufferIsOnScreen bits in the capabilities->flags field. The codecImageBufferIsOnScreen bit is necessary to inform the ICM that the codec is a direct screen transfer codec, that is, one that sets up an offscreen buffer that is actually onscreen (such as an overlay plane). Not setting this bit correctly can cause unpredictable results.
The real work of the codec takes place in the ImageCodecNewImageBufferMemory call. This is where the codec is instructed to prepare the non-RGB pixel buffer. The codec must fill in the baseAddr and rowBytes fields of the dstPixMap structure in the CodecDecompressParams record. The ICM then passes these values to the original codec (for example, Cinepak) to decompress into.
The codec must also implement ImageCodecDisposeMemory to balance ImageCodecNewImageBufferMemory.
Since Cinepak then decompresses into the card’s overlay plane, ImageCodecBandDecompress needs to do nothing aside from calling ICMDecompressComplete.
pascal ComponentResult ImageCodecPreDecompress(Handle storage,
                                               CodecDecompressParams *p)
{
    CodecCapabilities *capabilities = p->capabilities;

    /* Only allow a 16-bits-per-pixel source. */
    if ((**p->imageDescription).depth != 16)
        return codecConditionErr;

    /* Only allow a 16-bits-per-pixel destination. */
    if (p->dstPixMap.pixelSize != 16)
        return codecConditionErr;

    capabilities->wantedPixelSize = p->dstPixMap.pixelSize;

    /* Decompress the whole image as a single band. */
    capabilities->bandInc = capabilities->bandMin =
        (**p->imageDescription).height;

    capabilities->extendWidth = 0;
    capabilities->extendHeight = 0;

    /* The overlay plane scales in hardware, may be overwritten at any time,
       and is actually visible on the screen. */
    capabilities->flags = codecCanScale | codecImageBufferIsOnScreen |
                          codecHasVolatileBuffer;

    return noErr;
}

pascal ComponentResult ImageCodecBandDecompress(Handle storage,
                                                CodecDecompressParams *p)
{
    /* Cinepak has already decompressed into the overlay plane, so simply
       report completion of both the source and the destination. */
    ICMDecompressComplete(p->sequenceID, noErr,
                          codecCompletionSource | codecCompletionDest,
                          &p->completionProcRecord);
    return noErr;
}

pascal ComponentResult ImageCodecNewImageBufferMemory(Handle storage,
                                                      CodecDecompressParams *p,
                                                      long flags,
                                                      ICMMemoryDisposedUPP memoryGoneProc,
                                                      void *refCon)
{
    OSErr err = noErr;
    long  offsetH, offsetV;
    Ptr   baseAddr;
    long  rowBytes;

    /* Call predecompress to make sure we can handle this destination. */
    err = ImageCodecPreDecompress(storage, p);
    if (err) goto bail;

    /* Program the video board registers with the scaling matrix.
       (The XYZVideo calls represent the board vendor's driver interface.) */
    XYZVideoSetScale(p->matrix);

    /* Calculate a base address in the overlay plane to write to. */
    offsetH = (p->dstRect.left - p->dstPixMap.bounds.left);
    offsetV = (p->dstRect.top - p->dstPixMap.bounds.top);
    XYZVideoGetBaseAddress(p->dstPixMap, offsetH, offsetV,
                           &baseAddr, &rowBytes);

    p->dstPixMap.baseAddr = baseAddr;
    p->dstPixMap.rowBytes = rowBytes;
    p->capabilities->flags = codecImageBufferIsOnScreen;

bail:
    return err;
}

pascal ComponentResult ImageCodecDisposeMemory(Handle storage, Ptr data)
{
    /* The buffer lives in the overlay plane, so there is nothing to free. */
    return noErr;
}
Some video hardware boards that use an overlay plane require that the image area on the screen be flooded with a particular RGB or alpha-channel value in order to have the overlay buffer show through at that location. Codecs that require this support should set the screenFloodMethod and screenFloodValue fields of the CodecDecompressParams record during ImageCodecPreDecompress. The ICM then manages the flooding of the screen buffer. This method is more reliable than having the codec attempt to flood the screen itself, and it ensures compatibility with future versions of QuickTime.
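As an illustrative fragment, such a codec might add something like the following to its ImageCodecPreDecompress handling. The flood-method constant shown is an assumption; check ImageCodec.h in your QuickTime headers for the flood methods actually defined, and substitute the key value your hardware expects.

/* Illustrative only: ask the ICM to flood the destination area with a key
   color so the overlay plane shows through.  kScreenFloodMethodKeyColor is
   assumed to be one of the flood methods declared in ImageCodec.h, and the
   key value 0 is a placeholder for whatever the hardware expects. */
static void XYZRequestScreenFlood(CodecDecompressParams *p)
{
    p->screenFloodMethod = kScreenFloodMethodKeyColor;   /* assumed constant  */
    p->screenFloodValue  = 0;                            /* placeholder value */
}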
Copyright © 2005, 2006 Apple Computer, Inc. All Rights Reserved. Updated: 2006-01-10