JMF 2.0 API Guide
Any data that changes meaningfully with respect to time can be characterized as time-based media. Audio clips, MIDI sequences, movie clips, and animations are common forms of time-based media. Such media data can be obtained from a variety of sources, such as local or network files, cameras, microphones, and live broadcasts.
This chapter describes the key characteristics of time-based media and explains how such media is handled in terms of a fundamental data processing model:
Figure 1-1: Media processing model.
A key characteristic of time-based media is that it requires timely delivery and processing. Once the flow of media data begins, there are strict timing deadlines that must be met, both in terms of receiving and presenting the data. For this reason, time-based media is often referred to as streaming media--it is delivered in a steady stream that must be received and processed within a particular timeframe to produce acceptable results.
For example, when a movie is played, if the media data cannot be delivered quickly enough, there might be odd pauses and delays in playback. On the other hand, if the data cannot be received and processed quickly enough, the movie might appear jumpy as data is lost or frames are intentionally dropped in an attempt to maintain the proper playback rate.
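The timing constraint can be made concrete with a little arithmetic. The sketch below (plain Java, not part of any media API; the method names are hypothetical) computes the per-frame time budget at a given frame rate, and how many frames a stall of a given length forces a player to drop to stay on schedule:

```java
/** Sketch: per-frame time budget for streaming playback (illustrative, not a JMF API). */
public class FrameBudget {
    /** Milliseconds available to deliver and render each frame at the given rate. */
    static double frameBudgetMillis(double framesPerSecond) {
        return 1000.0 / framesPerSecond;
    }

    /** Frames that must be skipped to hold the playback rate after a delay. */
    static long framesDropped(double delayMillis, double framesPerSecond) {
        return (long) Math.floor(delayMillis / frameBudgetMillis(framesPerSecond));
    }

    public static void main(String[] args) {
        // At 30 fps each frame must arrive and be rendered within ~33.3 ms.
        System.out.println(frameBudgetMillis(30.0));
        // A 100 ms stall forces 3 frames to be dropped.
        System.out.println(framesDropped(100.0, 30.0));
    }
}
```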
The format in which the media data is stored is referred to as its content type. QuickTime, MPEG, and WAV are all examples of content types. Content type is essentially synonymous with file type--content type is used because media data is often acquired from sources other than local files.
A media stream is the media data obtained from a local file, acquired over the network, or captured from a camera or microphone. Media streams often contain multiple channels of data called tracks. For example, a QuickTime file might contain both an audio track and a video track. Media streams that contain multiple tracks are often referred to as multiplexed or complex media streams. Demultiplexing is the process of extracting individual tracks from a complex media stream.
A track's type identifies the kind of data it contains, such as audio or video. The format of a track defines how the data for the track is structured.
A media stream can be identified by its location and the protocol used to access it. For example, a URL might be used to describe the location of a QuickTime file on a local or remote system. If the file is local, it can be accessed through the FILE protocol. On the other hand, if it's on a web server, the file can be accessed through the HTTP protocol. A media locator provides a way to identify the location of a media stream when a URL can't be used.
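A URL-style locator encodes the access protocol in its scheme. The following sketch extracts the scheme from a locator string using the standard java.net.URI class; the locator values are made up for illustration:

```java
import java.net.URI;

/** Sketch: identifying the access protocol from a URL-style media locator string. */
public class LocatorProtocol {
    static String protocolOf(String locator) {
        // The URI scheme names the protocol used to access the stream.
        return URI.create(locator).getScheme();
    }

    public static void main(String[] args) {
        System.out.println(protocolOf("file:/movies/trailer.mov"));    // file
        System.out.println(protocolOf("http://example.com/clip.mpg")); // http
    }
}
```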
Media streams can be categorized according to how the data is delivered:
- Pull--data transfer is initiated and controlled from the client side. For example, Hypertext Transfer Protocol (HTTP) and FILE are pull protocols.
- Push--the server initiates data transfer and controls the flow of data. For example, Real-time Transport Protocol (RTP) is a push protocol used for streaming media. Similarly, the SGI MediaBase protocol is a push protocol used for video-on-demand (VOD).
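The pull/push distinction can be sketched as two small interfaces. These are illustrative only, not JMF's actual classes (JMF models the same split with its PullDataSource and PushDataSource types):

```java
import java.util.Arrays;

/** Sketch: the pull/push transfer models as two hypothetical source interfaces. */
public class TransferModes {
    interface PullSource {
        byte[] read(int length);       // client initiates and controls each transfer
    }
    interface PushHandler {
        void dataArrived(byte[] data); // server delivers data when it chooses
    }

    /** A pull source over an in-memory buffer: nothing moves until read() is called. */
    static PullSource pullFrom(byte[] media) {
        return new PullSource() {
            int pos = 0;
            public byte[] read(int length) {
                int end = Math.min(pos + length, media.length);
                byte[] chunk = Arrays.copyOfRange(media, pos, end);
                pos = end;
                return chunk;
            }
        };
    }

    public static void main(String[] args) {
        PullSource src = pullFrom(new byte[] {1, 2, 3, 4, 5});
        System.out.println(src.read(2).length); // 2 -- client asked for two bytes
        System.out.println(src.read(9).length); // 3 -- only what remains is delivered
    }
}
```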
Common Media Formats
The following tables identify some of the characteristics of common media formats. When selecting a format, it's important to take into account the characteristics of the format, the target environment, and the expectations of the intended audience. For example, if you're delivering media content via the web, you need to pay special attention to the bandwidth requirements.
The CPU Requirements column characterizes the processing power necessary for optimal presentation of the specified format. The Bandwidth Requirements column characterizes the transmission speeds necessary to send or receive data quickly enough for optimal presentation.
Format   Content Type         Quality  CPU Requirements  Bandwidth Requirements
Cinepak  AVI, QuickTime       Medium   Low               High
MPEG-1   MPEG                 High     High              High
H.261    AVI, RTP             Low      Medium            Medium
H.263    QuickTime, AVI, RTP  Medium   Medium            Low
JPEG     QuickTime, AVI, RTP  High     High              High
Indeo    QuickTime, AVI       Medium   Medium            Medium

Table 1-1: Common video formats.
Format        Content Type              Quality  CPU Requirements  Bandwidth Requirements
PCM           AVI, QuickTime, WAV       High     Low               High
Mu-Law        AVI, QuickTime, WAV, RTP  Low      Low               High
ADPCM         AVI, QuickTime, WAV, RTP  Medium   Medium            Medium
MPEG-1        MPEG                      High     High              High
MPEG Layer 3  MPEG                      High     High              Medium
GSM           WAV, RTP                  Low      Low               Low
G.723.1       WAV, RTP                  Medium   Medium            Low

Table 1-2: Common audio formats.
Some formats are designed with particular applications and requirements in mind. High-quality, high-bandwidth formats are generally targeted toward CD-ROM or local storage applications. H.261 and H.263 are generally used for video conferencing applications and are optimized for video where there's not a lot of action. Similarly, G.723 is typically used to produce low bit-rate speech for telephony applications.
Most time-based media is audio or video data that can be presented through output devices such as speakers and monitors. Such devices are the most common destination for media data output. Media streams can also be sent to other destinations--for example, saved to a file or transmitted across the network. An output destination for media data is sometimes referred to as a data sink.
While a media stream is being presented, VCR-style presentation controls are often provided to enable the user to control playback. For example, a control panel for a movie player might offer buttons for stopping, starting, fast-forwarding, and rewinding the movie.
In many cases, particularly when presenting a media stream that resides on the network, the presentation of the media stream cannot begin immediately. The time it takes before presentation can begin is referred to as the start latency. Users might experience this as a delay between the time that they click the start button and the time when playback actually starts.
Multimedia presentations often combine several types of time-based media into a synchronized presentation. For example, background music might be played during an image slide-show, or animated text might be synchronized with an audio or video clip. When the presentation of multiple media streams is synchronized, it is essential to take into account the start latency of each stream--otherwise the playback of the different streams might actually begin at different times.
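One simple way to account for start latency is to schedule every stream to begin at the latency of the slowest one, so all presentations start in step. The helper below is a hypothetical sketch of that calculation, not a JMF API:

```java
/** Sketch: aligning the start of several streams by their start latencies. */
public class SyncStart {
    /** Returns the common start offset in milliseconds: the largest start latency. */
    static long commonStartMillis(long[] startLatenciesMillis) {
        long max = 0;
        for (long latency : startLatenciesMillis) {
            max = Math.max(max, latency);
        }
        return max;
    }

    public static void main(String[] args) {
        // Audio needs 150 ms to prebuffer, video 400 ms: both should start at t = 400 ms.
        System.out.println(commonStartMillis(new long[] {150, 400})); // 400
    }
}
```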
The quality of the presentation of a media stream depends on several factors, including:
- The compression scheme used
- The processing capability of the playback system
- The bandwidth available (for media streams acquired over the network)
Traditionally, the higher the quality, the larger the file size and the greater the processing power and bandwidth required. Bandwidth is usually represented as the number of bits that are transmitted in a certain period of time--the bit rate.
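The bit-rate arithmetic is straightforward. The sketch below computes the uncompressed bandwidth of typical audio and video streams (plain arithmetic with illustrative parameter values, not a JMF API):

```java
/** Sketch: bandwidth arithmetic behind the "bit rate" definition. */
public class BitRate {
    /** Bits per second for uncompressed PCM audio. */
    static long audioBitsPerSecond(int sampleRate, int bitsPerSample, int channels) {
        return (long) sampleRate * bitsPerSample * channels;
    }

    /** Bits per second for uncompressed video frames. */
    static long videoBitsPerSecond(int width, int height, int bitsPerPixel, int framesPerSecond) {
        return (long) width * height * bitsPerPixel * framesPerSecond;
    }

    public static void main(String[] args) {
        // CD-quality stereo audio: 44100 * 16 * 2 = 1,411,200 bits/s (~1.4 Mbps).
        System.out.println(audioBitsPerSecond(44100, 16, 2));
        // 320x240 24-bit video at 30 fps: ~55 Mbps -- why compression is essential.
        System.out.println(videoBitsPerSecond(320, 240, 24, 30));
    }
}
```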
To achieve high-quality video presentations, the number of frames displayed in a given period of time (the frame rate) should be as high as possible. Movies played at a frame rate of 30 frames per second are usually considered indistinguishable from regular TV broadcasts or video tapes.
In most instances, the data in a media stream is manipulated before it is presented to the user. Generally, a series of processing operations occur before presentation:
- If the stream is multiplexed, the individual tracks are extracted.
- If the individual tracks are compressed, they are decoded.
- If necessary, the tracks are converted to a different format.
- Effect filters are applied to the decoded tracks (if desired).
The tracks are then delivered to the appropriate output device. If the media stream is to be stored instead of rendered to an output device, the processing stages might differ slightly. For example, if you wanted to capture audio and video from a video camera, process the data, and save it to a file:
- The audio and video tracks would be captured.
- Effect filters would be applied to the raw tracks (if desired).
- The individual tracks would be encoded.
- The compressed tracks would be multiplexed into a single media stream.
- The multiplexed media stream would then be saved to a file.
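The capture-and-save stages above can be sketched as a chain of functions. The stage names here are toy placeholders that merely record what was applied, not real JMF plug-ins:

```java
import java.util.List;

/** Sketch: the capture-and-save pipeline as a chain of toy stages. */
public class CaptureChain {
    static String applyEffect(String rawTrack)   { return "fx(" + rawTrack + ")"; }
    static String encode(String track)           { return "enc(" + track + ")"; }
    static String multiplex(List<String> tracks) { return "mux" + tracks; }

    public static void main(String[] args) {
        // Raw tracks captured from the device:
        String audio = "audio", video = "video";
        // Effects, then encoding, are applied per track; then the tracks are merged.
        String stream = multiplex(List.of(
            encode(applyEffect(audio)),
            encode(applyEffect(video))));
        System.out.println(stream); // mux[enc(fx(audio)), enc(fx(video))]
    }
}
```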
Demultiplexers and Multiplexers
A demultiplexer extracts individual tracks of media data from a multiplexed media stream. A multiplexer performs the opposite function: it takes individual tracks of media data and merges them into a single multiplexed media stream.
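Multiplexing and demultiplexing can be modeled as interleaving tagged chunks and filtering them back out. This is a toy model for illustration, not JMF's actual Multiplexer and Demultiplexer plug-in interfaces:

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch: multiplexing/demultiplexing modeled as interleaving tagged chunks. */
public class ToyMux {
    static class Chunk {
        final int trackId;   // which track this piece of data belongs to
        final String data;
        Chunk(int trackId, String data) { this.trackId = trackId; this.data = data; }
    }

    /** Merge per-track chunk lists into one interleaved stream, alternating tracks. */
    static List<Chunk> multiplex(List<List<Chunk>> tracks) {
        List<Chunk> stream = new ArrayList<>();
        int longest = 0;
        for (List<Chunk> t : tracks) longest = Math.max(longest, t.size());
        for (int i = 0; i < longest; i++)
            for (List<Chunk> t : tracks)
                if (i < t.size()) stream.add(t.get(i));
        return stream;
    }

    /** Extract one track's data back out of the interleaved stream. */
    static List<String> demultiplex(List<Chunk> stream, int trackId) {
        List<String> out = new ArrayList<>();
        for (Chunk c : stream)
            if (c.trackId == trackId) out.add(c.data);
        return out;
    }

    public static void main(String[] args) {
        List<Chunk> audio = List.of(new Chunk(0, "a0"), new Chunk(0, "a1"));
        List<Chunk> video = List.of(new Chunk(1, "v0"), new Chunk(1, "v1"));
        List<Chunk> stream = multiplex(List.of(audio, video));
        System.out.println(demultiplex(stream, 1)); // [v0, v1]
    }
}
```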
A codec performs media-data compression and decompression. When a track is encoded, it is converted to a compressed format suitable for storage or transmission; when it is decoded it is converted to a non-compressed (raw) format suitable for presentation.
Each codec has certain input formats that it can handle and certain output formats that it can generate. In some situations, a series of codecs might be used to convert from one format to another.
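A concrete miniature of an encode/decode pair is mu-law companding, which appears in the audio format table above. The sketch below applies the standard mu-law formula (mu = 255) to samples in [-1, 1]; real JMF codecs are plug-in objects rather than bare functions:

```java
/** Sketch: a codec pair in miniature -- mu-law companding of audio samples. */
public class MuLaw {
    static final double MU = 255.0;

    /** Encode: compress a linear sample in [-1, 1] into the mu-law domain. */
    static double encode(double x) {
        return Math.signum(x) * Math.log(1 + MU * Math.abs(x)) / Math.log(1 + MU);
    }

    /** Decode: expand a mu-law value back to the linear sample. */
    static double decode(double y) {
        return Math.signum(y) * (Math.pow(1 + MU, Math.abs(y)) - 1) / MU;
    }

    public static void main(String[] args) {
        double sample = 0.5;
        // In the continuous case the round trip recovers the sample exactly;
        // real codecs quantize the encoded value, which is where loss occurs.
        System.out.println(Math.abs(decode(encode(sample)) - sample) < 1e-9); // true
    }
}
```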
An effect filter modifies the track data in some way, often to create special effects such as blur or echo.
Effect filters are classified as either pre-processing effects or post-processing effects, depending on whether they are applied before or after the codec processes the track. Typically, effect filters are applied to uncompressed (raw) data.
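As a concrete case, an echo is a post-processing effect applied to raw audio samples. The sketch below is a toy stand-in for an effect plug-in, mixing a delayed, attenuated copy of the signal back into itself:

```java
/** Sketch: an echo effect as a post-processing filter on raw PCM samples. */
public class EchoEffect {
    /** Add a copy of the signal delayed by delaySamples and scaled by decay. */
    static double[] echo(double[] samples, int delaySamples, double decay) {
        double[] out = samples.clone();
        for (int i = delaySamples; i < out.length; i++) {
            out[i] += samples[i - delaySamples] * decay;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] in = {1.0, 0.0, 0.0, 0.0};
        double[] out = echo(in, 2, 0.5);
        // The impulse reappears two samples later at half amplitude.
        System.out.println(out[2]); // 0.5
    }
}
```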
A renderer is an abstraction of a presentation device. For audio, the presentation device is typically the computer's hardware audio card that outputs sound to the speakers. For video, the presentation device is typically the computer monitor.
Certain specialized devices support compositing. Compositing time-based media is the process of combining multiple tracks of data onto a single presentation medium. For example, overlaying text on a video presentation is one common form of compositing. Compositing can be done in either hardware or software. A device that performs compositing can be abstracted as a renderer that can receive multiple tracks of input data.
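At the pixel level, compositing reduces to blending operations such as the alpha "over" rule. The sketch below shows the arithmetic for a single channel of a single pixel; the values are illustrative:

```java
/** Sketch: compositing as the alpha "over" operation on one pixel channel. */
public class Composite {
    /** Blend a foreground channel value over a background one; alpha in [0, 1]. */
    static double over(double fg, double bg, double alpha) {
        return fg * alpha + bg * (1 - alpha);
    }

    public static void main(String[] args) {
        // Fully opaque white text (1.0) over dark video (0.2) replaces the pixel;
        // half-transparent text mixes the two.
        System.out.println(over(1.0, 0.2, 1.0)); // 1.0
        System.out.println(over(1.0, 0.2, 0.5)); // 0.6
    }
}
```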
Time-based media can be captured from a live source for processing and playback. For example, audio can be captured from a microphone or a video capture card can be used to obtain video from a camera. Capturing can be thought of as the input phase of the standard media processing model.
A capture device might deliver multiple media streams. For example, a video camera might deliver both audio and video. These streams might be captured and manipulated separately or combined into a single, multiplexed stream that contains both an audio track and a video track.
To capture time-based media you need specialized hardware--for example, to capture audio from a live source, you need a microphone and an appropriate audio card. Similarly, capturing a TV broadcast requires a TV tuner and an appropriate video capture card. Most systems provide a query mechanism to find out what capture devices are available.
Capture devices can be characterized as either push or pull sources. For example, a still camera is a pull source--the user controls when to capture an image. A microphone is a push source--the live source continuously provides a stream of audio.
The format of a captured media stream depends on the processing performed by the capture device. Some devices do very little processing and deliver raw, uncompressed data. Other capture devices might deliver the data in a compressed format.
Controls are sometimes provided to enable the user to manage the capture process. For example, a capture control panel might enable the user to specify the data rate and encoding type for the captured stream and start and stop the capture process.