
What is the BWF Format?

Broadcast WAVE Format Explained

The solution to easy file transfer between PC-based workstations of different manufacturers lies in defining a minimum standard necessary for the file exchange, while allowing individual applications to create and use information that belongs only to them.

File formats already exist for this purpose, such as MUSIFILE, developed by Digigram and the CAR Group in 1993, and Microsoft's WAVE (Waveform Audio File Format).

A file using the MUSIFILE format is composed of one header, followed by sound data that conforms to the ISO/MPEG Audio standard.

In the PC environment, the WAVE file format is widely used. It permits storage of sound data, whether in linear PCM (Pulse Code Modulation) mode or in compressed mode using algorithms such as ADPCM, G.721 and, of course, MPEG Audio.

The WAVE format was not specifically designed for broadcast applications, so WAVE files require certain additional information in order to render the format usable by a sophisticated broadcast workstation or automation system. Any modifications, however, must be performed entirely within the framework defined by the WAVE standard to maintain compatibility.

A strict minimum file structure is defined by Microsoft for the WAVE standard, so applications frequently add complementary information to the file. This information can be classified in two ways:

* Data that are very specific to the application
* Data that can be useful for a large number of applications, if the developers of those applications standardize the data structure and content

Data specific to a particular application do not need to be disclosed by the developer, and these data will be ignored by applications that do not recognize them.

A WAVE file is made up of a succession of chunks. The Layer II Audio Interest Group, working in parallel with the EBU/UER committee, has created a definition for two complementary chunks to be added alongside the MPEG1WAVEFORMAT structure from Microsoft. As a result, a sound file can be more precisely described by specifying a standard format for the data.
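Since a WAVE file is just a succession of chunks, unknown chunks can be skipped and known ones picked out. The following is a minimal sketch of walking the chunk list in a buffer that starts right after the 12-byte "RIFF…WAVE" header; it follows the generic RIFF layout (4-byte ASCII identifier, 32-bit little-endian size, payload padded to even length) rather than anything BWF-specific, and the function name is illustrative.

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Walk the chunks in a buffer positioned just past the RIFF/WAVE header,
   printing each chunk's ID and payload size. Returns the number of
   chunks found. Each chunk is a 4-byte ASCII ID, a 32-bit little-endian
   size, then the payload, padded to an even byte count. */
static int walk_chunks(const unsigned char *buf, size_t len)
{
    size_t pos = 0;
    int count = 0;
    while (pos + 8 <= len) {
        char id[4];
        memcpy(id, buf + pos, 4);
        uint32_t size = (uint32_t)buf[pos + 4]
                      | (uint32_t)buf[pos + 5] << 8
                      | (uint32_t)buf[pos + 6] << 16
                      | (uint32_t)buf[pos + 7] << 24;
        printf("chunk %.4s, %u bytes\n", id, size);
        pos += 8 + size + (size & 1);  /* skip payload plus pad byte */
        count++;
    }
    return count;
}
```

An application that does not recognize a chunk ID simply advances past it, which is what allows the complementary chunks described here to coexist with application-specific ones.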

The use of these chunks permits a sound file to relay information about its content and quality to applications. In the absence of this type of information, an application is often led to perform a conversion in order to guarantee that the file is coherent. Additionally, due to the eight character file name limitation of some operating systems, fields such as Description, Time, Date and Originator are very useful, if not absolutely necessary, for identifying a sound file.

The Broadcast WAVE Format proposal respects the WAVE standard, is very simple to implement and provides an unambiguous solution for the exchange of broadcast sound. In the same manner as compressed MPEG Audio, additional chunks can be defined for other types of coding. The first chunk, called broadcast_audio_extension, describes the general parameters for handling the sound file, independent of the sound data format itself. The second chunk is specific to MPEG Audio. It contains complementary information concerning the MPEG stream.

The broadcast chunk:

typedef struct broadcast_audio_extension {
CHAR Description[256]; /* ASCII : "Description of the sound sequence" */
CHAR Originator[32]; /* ASCII : "Name of the originator" */
CHAR OriginatorReference[32]; /* ASCII : "Reference of the originator" */
CHAR OriginationDate[10]; /* ASCII : "yyyy:mm:dd" */
CHAR OriginationTime[8]; /* ASCII : "hh:mm:ss" */
DWORD TimeReferenceLow; /* First sample count since midnight, low word */
DWORD TimeReferenceHigh; /* First sample count since midnight, high word */
CHAR Reserved[256]; /* reserved for future use, set to NULL */
CHAR CodingHistory[]; /* ASCII : "Coding history" */
} BROADCAST_EXT;
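The time reference is a 64-bit sample count split across the two DWORD fields. A small sketch of recombining them and converting the result to seconds since midnight; the function names are illustrative, and the 48 kHz rate used in the usage note is only an example value that would in practice come from the file's fmt chunk.

```c
#include <stdint.h>

/* Recombine the split 64-bit time reference (first sample count since
   midnight) from the TimeReferenceHigh and TimeReferenceLow fields. */
static uint64_t time_reference(uint32_t low, uint32_t high)
{
    return ((uint64_t)high << 32) | (uint64_t)low;
}

/* Convert the sample count to whole seconds since midnight, given the
   sample rate read from the file's fmt chunk. */
static uint64_t samples_to_seconds(uint64_t samples, uint32_t sample_rate)
{
    return samples / sample_rate;
}
```

For example, a time reference of 172 800 000 samples at 48 kHz corresponds to 3600 seconds, i.e. one hour after midnight.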

The second chunk has meaning only for WAVE files that store MPEG Audio. In the absence of this chunk, the sound data is still usable, but the auxiliary data, for example, will not be. The SoundInformation field, through different bit combinations, specifies:

* Whether the MPEG stream is homogeneous or heterogeneous
* Whether the padding bit mechanism is used

The AncillaryDataLength field contains the minimum length of the auxiliary data in the MPEG frames that make up the audio stream. The AncillaryDataDef field defines the content of the auxiliary data, which mainly concerns the energy of the frame.

To get the exact specification of the BWF, contact the EBU.
