<?xml version="1.0" encoding="utf-8"?> <!DOCTYPE rfc SYSTEM 'rfc2629.dtd'> <?rfc toc="yes" symrefs="yes" ?> <rfc ipr="trust200902" category="std" docName="draft-ietf-codec-opus-14"> <front> <title abbrev="Interactive Audio Codec">Definition of the Opus Audio Codec</title> <author initials="JM" surname="Valin" fullname="Jean-Marc Valin"> <organization>Mozilla Corporation</organization> <address> <postal> <street>650 Castro Street</street> <city>Mountain View</city> <region>CA</region> <code>94041</code> <country>USA</country> </postal> <phone>+1 650 903-0800</phone> <email>jmvalin@jmvalin.ca</email> </address> </author> <author initials="K." surname="Vos" fullname="Koen Vos"> <organization>Skype Technologies S.A.</organization> <address> <postal> <street>Soder Malarstrand 43</street> <city>Stockholm</city> <region></region> <code>11825</code> <country>SE</country> </postal> <phone>+46 73 085 7619</phone> <email>koen.vos@skype.net</email> </address> </author> <author initials="T." surname="Terriberry" fullname="Timothy B. Terriberry"> <organization>Mozilla Corporation</organization> <address> <postal> <street>650 Castro Street</street> <city>Mountain View</city> <region>CA</region> <code>94041</code> <country>USA</country> </postal> <phone>+1 650 903-0800</phone> <email>tterriberry@mozilla.com</email> </address> </author> <date day="17" month="May" year="2012" /> <area>General</area> <workgroup></workgroup> <abstract> <t> This document defines the Opus interactive speech and audio codec. Opus is designed to handle a wide range of interactive audio applications, including Voice over IP, videoconferencing, in-game chat, and even live, distributed music performances. It scales from low bitrate narrowband speech at 6 kb/s to very high quality stereo music at 510 kb/s. Opus uses both linear prediction (LP) and the Modified Discrete Cosine Transform (MDCT) to achieve good compression of both speech and music. </t> </abstract> </front> <middle> <section anchor="introduction" title="Introduction"> <t> The Opus codec is a real-time interactive audio codec designed to meet the requirements described in <xref target="requirements"></xref>. It is composed of a linear prediction (LP)-based <xref target="LPC"/> layer and a Modified Discrete Cosine Transform (MDCT)-based <xref target="MDCT"/> layer. The main idea behind using two layers is that in speech, linear prediction techniques (such as Code-Excited Linear Prediction, or CELP) code low frequencies more efficiently than transform (e.g., MDCT) domain techniques, while the situation is reversed for music and higher speech frequencies. Thus a codec with both layers available can operate over a wider range than either one alone and, by combining them, achieve better quality than either one individually. </t> <t> The primary normative part of this specification is provided by the source code in <xref target="ref-implementation"></xref>. Only the decoder portion of this software is normative, though a significant amount of code is shared by both the encoder and decoder. <xref target="conformance"/> provides a decoder conformance test. The decoder contains a great deal of integer and fixed-point arithmetic which needs to be performed exactly, including all rounding considerations, so any useful specification requires domain-specific symbolic language to adequately define these operations. Additionally, any conflict between the symbolic representation and the included reference implementation must be resolved. 
For the practical reasons of compatibility and testability it would be advantageous to give the reference implementation priority in any disagreement. The C language is also one of the most widely understood human-readable symbolic representations for machine behavior. For these reasons this RFC uses the reference implementation as the sole symbolic representation of the codec. </t> <t>While the symbolic representation is unambiguous and complete it is not always the easiest way to understand the codec's operation. For this reason this document also describes significant parts of the codec in English and takes the opportunity to explain the rationale behind many of the more surprising elements of the design. These descriptions are intended to be accurate and informative, but the limitations of common English sometimes result in ambiguity, so it is expected that the reader will always read them alongside the symbolic representation. Numerous references to the implementation are provided for this purpose. The descriptions sometimes differ from the reference in ordering or through mathematical simplification wherever such deviation makes an explanation easier to understand. For example, the right shift and left shift operations in the reference implementation are often described using division and multiplication in the text. In general, the text is focused on the "what" and "why" while the symbolic representation most clearly provides the "how". </t> <section anchor="notation" title="Notation and Conventions"> <t> The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 <xref target="rfc2119"></xref>. </t> <t> Various operations in the codec require bit-exact fixed-point behavior, even when writing a floating point implementation. The notation "Q<n>", where n is an integer, denotes the number of binary digits to the right of the decimal point in a fixed-point number. For example, a signed Q14 value in a 16-bit word can represent values from -2.0 to 1.99993896484375, inclusive. This notation is for informational purposes only. Arithmetic, when described, always operates on the underlying integer. E.g., the text will explicitly indicate any shifts required after a multiplication. </t> <t> Expressions, where included in the text, follow C operator rules and precedence, with the exception that the syntax "x**y" indicates x raised to the power y. The text also makes use of the following functions: </t> <section anchor="min" toc="exclude" title="min(x,y)"> <t> The smallest of two values x and y. </t> </section> <section anchor="max" toc="exclude" title="max(x,y)"> <t> The largest of two values x and y. </t> </section> <section anchor="clamp" toc="exclude" title="clamp(lo,x,hi)"> <figure align="center"> <artwork align="center"><![CDATA[ clamp(lo,x,hi) = max(lo,min(x,hi)) ]]></artwork> </figure> <t> With this definition, if lo > hi, the lower bound is the one that is enforced. </t> </section> <section anchor="sign" toc="exclude" title="sign(x)"> <t> The sign of x, i.e., <figure align="center"> <artwork align="center"><![CDATA[ ( -1, x < 0 , sign(x) = < 0, x == 0 , ( 1, x > 0 . ]]></artwork> </figure> </t> </section> <section anchor="abs" toc="exclude" title="abs(x)"> <t> The absolute value of x, i.e., <figure align="center"> <artwork align="center"><![CDATA[ abs(x) = sign(x)*x . 
]]></artwork> </figure> </t> </section> <section anchor="floor" toc="exclude" title="floor(f)"> <t> The largest integer z such that z <= f. </t> </section> <section anchor="ceil" toc="exclude" title="ceil(f)"> <t> The smallest integer z such that z >= f. </t> </section> <section anchor="round" toc="exclude" title="round(f)"> <t> The integer z nearest to f, with ties rounded towards negative infinity, i.e., <figure align="center"> <artwork align="center"><![CDATA[ round(f) = ceil(f - 0.5) . ]]></artwork> </figure> </t> </section> <section anchor="log2" toc="exclude" title="log2(f)"> <t> The base-two logarithm of f. </t> </section> <section anchor="ilog" toc="exclude" title="ilog(n)"> <t> The minimum number of bits required to store a positive integer n in two's complement notation, or 0 for a non-positive integer n. <figure align="center"> <artwork align="center"><![CDATA[ ( 0, n <= 0, ilog(n) = < ( floor(log2(n))+1, n > 0 ]]></artwork> </figure> Examples: <list style="symbols"> <t>ilog(-1) = 0</t> <t>ilog(0) = 0</t> <t>ilog(1) = 1</t> <t>ilog(2) = 2</t> <t>ilog(3) = 2</t> <t>ilog(4) = 3</t> <t>ilog(7) = 3</t> </list> </t> </section> </section> </section> <section anchor="overview" title="Opus Codec Overview"> <t> The Opus codec scales from 6 kb/s narrowband mono speech to 510 kb/s fullband stereo music, with algorithmic delays ranging from 5 ms to 65.2 ms. At any given time, either the LP layer, the MDCT layer, or both, may be active. It can seamlessly switch between all of its various operating modes, giving it a great deal of flexibility to adapt to varying content and network conditions without renegotiating the current session. The codec allows input and output of various audio bandwidths, defined as follows: </t> <texttable anchor="audio-bandwidth"> <ttcol>Abbreviation</ttcol> <ttcol align="right">Audio Bandwidth</ttcol> <ttcol align="right">Sample Rate (Effective)</ttcol> <c>NB (narrowband)</c> <c>4 kHz</c> <c>8 kHz</c> <c>MB (medium-band)</c> <c>6 kHz</c> <c>12 kHz</c> <c>WB (wideband)</c> <c>8 kHz</c> <c>16 kHz</c> <c>SWB (super-wideband)</c> <c>12 kHz</c> <c>24 kHz</c> <c>FB (fullband)</c> <c>20 kHz (*)</c> <c>48 kHz</c> </texttable> <t> (*) Although the sampling theorem allows a bandwidth as large as half the sampling rate, Opus never codes audio above 20 kHz, as that is the generally accepted upper limit of human hearing. </t> <t> Opus defines super-wideband (SWB) with an effective sample rate of 24 kHz, unlike some other audio coding standards that use 32 kHz. This was chosen for a number of reasons. The band layout in the MDCT layer naturally allows skipping coefficients for frequencies over 12 kHz, but does not allow cleanly dropping just those frequencies over 16 kHz. A sample rate of 24 kHz also makes resampling in the MDCT layer easier, as 24 evenly divides 48, and when 24 kHz is sufficient, it can save computation in other processing, such as Acoustic Echo Cancellation (AEC). Experimental changes to the band layout to allow a 16 kHz cutoff (32 kHz effective sample rate) showed potential quality degradations at other sample rates, and at typical bitrates the number of bits saved by using such a cutoff instead of coding in fullband (FB) mode is very small. Therefore, if an application wishes to process a signal sampled at 32 kHz, it should just use FB. </t> <t> The LP layer is based on the SILK codec <xref target="SILK"></xref>. It supports NB, MB, or WB audio and frame sizes from 10 ms to 60 ms, and requires an additional 5 ms look-ahead for noise shaping estimation. 
A small additional delay (up to 1.5 ms) may be required for sampling rate conversion. Like Vorbis <xref target='Vorbis-website'/> and many other modern codecs, SILK is inherently designed for variable-bitrate (VBR) coding, though the encoder can also produce constant-bitrate (CBR) streams. The version of SILK used in Opus is substantially modified from, and not compatible with, the stand-alone SILK codec previously deployed by Skype. This document does not serve to define that format, but those interested in the original SILK codec should see <xref target="SILK"/> instead. </t> <t> The MDCT layer is based on the CELT codec <xref target="CELT"></xref>. It supports NB, WB, SWB, or FB audio and frame sizes from 2.5 ms to 20 ms, and requires an additional 2.5 ms look-ahead due to the overlapping MDCT windows. The CELT codec is inherently designed for CBR coding, but unlike many CBR codecs it is not limited to a set of predetermined rates. It internally allocates bits to exactly fill any given target budget, and an encoder can produce a VBR stream by varying the target on a per-frame basis. The MDCT layer is not used for speech when the audio bandwidth is WB or less, as it is not useful there. On the other hand, non-speech signals are not always adequately coded using linear prediction, so for music only the MDCT layer should be used. </t> <t> A "Hybrid" mode allows the use of both layers simultaneously with a frame size of 10 or 20 ms and a SWB or FB audio bandwidth. The LP layer codes the low frequencies by resampling the signal down to WB. The MDCT layer follows, coding the high frequency portion of the signal. The cutoff between the two lies at 8 kHz, the maximum WB audio bandwidth. In the MDCT layer, all bands below 8 kHz are discarded, so there is no coding redundancy between the two layers. </t> <t> The sample rate (in contrast to the actual audio bandwidth) can be chosen independently on the encoder and decoder side, e.g., a fullband signal can be decoded as wideband, or vice versa. This approach ensures a sender and receiver can always interoperate, regardless of the capabilities of their actual audio hardware. Internally, the LP layer always operates at a sample rate of twice the audio bandwidth, up to a maximum of 16 kHz, which it continues to use for SWB and FB. The decoder simply resamples its output to support different sample rates. The MDCT layer always operates internally at a sample rate of 48 kHz. Since all the supported sample rates evenly divide this rate, and since the decoder may easily zero out the high frequency portion of the spectrum in the frequency domain, it can simply decimate the MDCT layer output to achieve the other supported sample rates very cheaply. </t> <t> After conversion to the common, desired output sample rate, the decoder simply adds the output from the two layers together. To compensate for the different look-ahead required by each layer, the CELT encoder input is delayed by an additional 2.7 ms. This ensures that low frequencies and high frequencies arrive at the same time. This extra delay may be reduced by an encoder by using less look-ahead for noise shaping or using a simpler resampler in the LP layer, but this will reduce quality. However, the base 2.5 ms look-ahead in the CELT layer cannot be reduced in the encoder because it is needed for the MDCT overlap, whose size is fixed by the decoder. </t> <t> Both layers use the same entropy coder, avoiding any waste from "padding bits" between them. 
The hybrid approach makes it easy to support both CBR and VBR coding. Although the LP layer is VBR, the bit allocation of the MDCT layer can produce a final stream that is CBR by using all the bits left unused by the LP layer. </t> <section title="Control Parameters"> <t> The Opus codec includes a number of control parameters which can be changed dynamically during regular operation of the codec, without interrupting the audio stream from the encoder to the decoder. These parameters only affect the encoder since any impact they have on the bit-stream is signaled in-band such that a decoder can decode any Opus stream without any out-of-band signaling. Any Opus implementation can add or modify these control parameters without affecting interoperability. The most important encoder control parameters in the reference encoder are listed below. </t> <section title="Bitrate" toc="exlcude"> <t> Opus supports all bitrates from 6 kb/s to 510 kb/s. All other parameters being equal, higher bitrate results in higher quality. For a frame size of 20 ms, these are the bitrate "sweet spots" for Opus in various configurations: <list style="symbols"> <t>8-12 kb/s for NB speech,</t> <t>16-20 kb/s for WB speech,</t> <t>28-40 kb/s for FB speech,</t> <t>48-64 kb/s for FB mono music, and</t> <t>64-128 kb/s for FB stereo music.</t> </list> </t> </section> <section title="Number of Channels (Mono/Stereo)" toc="exlcude"> <t> Opus can transmit either mono or stereo frames within a single stream. When decoding a mono frame in a stereo decoder, the left and right channels are identical, and when decoding a stereo frame in a mono decoder, the mono output is the average of the left and right channels. In some cases, it is desirable to encode a stereo input stream in mono (e.g., because the bitrate is too low to encode stereo with sufficient quality). The number of channels encoded can be selected in real-time, but by default the reference encoder attempts to make the best decision possible given the current bitrate. </t> </section> <section title="Audio Bandwidth" toc="exlcude"> <t> The audio bandwidths supported by Opus are listed in <xref target="audio-bandwidth"/>. Just like for the number of channels, any decoder can decode audio encoded at any bandwidth. For example, any Opus decoder operating at 8 kHz can decode a FB Opus frame, and any Opus decoder operating at 48 kHz can decode a NB frame. Similarly, the reference encoder can take a 48 kHz input signal and encode it as NB. The higher the audio bandwidth, the higher the required bitrate to achieve acceptable quality. The audio bandwidth can be explicitly specified in real-time, but by default the reference encoder attempts to make the best bandwidth decision possible given the current bitrate. </t> </section> <section title="Frame Duration" toc="exlcude"> <t> Opus can encode frames of 2.5, 5, 10, 20, 40 or 60 ms. It can also combine multiple frames into packets of up to 120 ms. For real-time applications, sending fewer packets per second reduces the bitrate, since it reduces the overhead from IP, UDP, and RTP headers. However, it increases latency and sensitivity to packet losses, as losing one packet constitutes a loss of a bigger chunk of audio. Increasing the frame duration also slightly improves coding efficiency, but the gain becomes small for frame sizes above 20 ms. For this reason, 20 ms frames are a good choice for most applications. 
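As an informative example of the scale of this overhead, assuming uncompressed 20-byte IPv4, 8-byte UDP, and 12-byte RTP headers (40 bytes of per-packet overhead in total):
<figure align="center"> <artwork align="center"><![CDATA[
40 bytes/packet * 8 bits/byte * 50.0 packets/s = 16.0 kb/s (20 ms frames)
40 bytes/packet * 8 bits/byte * 16.7 packets/s =  5.3 kb/s (60 ms frames)
]]></artwork> </figure>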
</t> </section> <section title="Complexity" toc="exlcude"> <t> There are various aspects of the Opus encoding process where trade-offs can be made between CPU complexity and quality/bitrate. In the reference encoder, the complexity is selected using an integer from 0 to 10, where 0 is the lowest complexity and 10 is the highest. Examples of computations for which such trade-offs may occur are: <list style="symbols"> <t>The order of the pitch analysis whitening filter <xref target="Whitening"/>,</t> <t>The order of the short-term noise shaping filter,</t> <t>The number of states in delayed decision quantization of the residual signal, and</t> <t>The use of certain bit-stream features such as variable time-frequency resolution and the pitch post-filter.</t> </list> </t> </section> <section title="Packet Loss Resilience" toc="exlcude"> <t> Audio codecs often exploit inter-frame correlations to reduce the bitrate at a cost in error propagation: after losing one packet several packets need to be received before the decoder is able to accurately reconstruct the speech signal. The extent to which Opus exploits inter-frame dependencies can be adjusted on the fly to choose a trade-off between bitrate and amount of error propagation. </t> </section> <section title="Forward Error Correction (FEC)" toc="exlcude"> <t> Another mechanism providing robustness against packet loss is the in-band Forward Error Correction (FEC). Packets that are determined to contain perceptually important speech information, such as onsets or transients, are encoded again at a lower bitrate and this re-encoded information is added to a subsequent packet. </t> </section> <section title="Constant/Variable Bitrate" toc="exlcude"> <t> Opus is more efficient when operating with variable bitrate (VBR), which is the default. However, in some (rare) applications, constant bitrate (CBR) is required. There are two main reasons to operate in CBR mode: <list style="symbols"> <t>When the transport only supports a fixed size for each compressed frame</t> <t>When encryption is used for an audio stream that is either highly constrained (e.g. yes/no, recorded prompts) or highly sensitive <xref target="SRTP-VBR"></xref> </t> </list> When low-latency transmission is required over a relatively slow connection, then constrained VBR can also be used. This uses VBR in a way that simulates a "bit reservoir" and is equivalent to what MP3 (MPEG 1, Layer 3) and AAC (Advanced Audio Coding) call CBR (i.e., not true CBR due to the bit reservoir). </t> </section> <section title="Discontinuous Transmission (DTX)" toc="exlcude"> <t> Discontinuous Transmission (DTX) reduces the bitrate during silence or background noise. When DTX is enabled, only one frame is encoded every 400 milliseconds. </t> </section> </section> </section> <section anchor="modes" title="Internal Framing"> <t> The Opus encoder produces "packets", which are each a contiguous set of bytes meant to be transmitted as a single unit. The packets described here do not include such things as IP, UDP, or RTP headers which are normally found in a transport-layer packet. A single packet may contain multiple audio frames, so long as they share a common set of parameters, including the operating mode, audio bandwidth, frame size, and channel count (mono vs. stereo). This section describes the possible combinations of these parameters and the internal framing used to pack multiple frames into a single packet. This framing is not self-delimiting. 
Instead, it assumes that a higher layer (such as UDP or RTP <xref target='RFC3550'/> or Ogg <xref target='RFC3533'/> or Matroska <xref target='Matroska-website'/>) will communicate the length, in bytes, of the packet, and it uses this information to reduce the framing overhead in the packet itself. A decoder implementation MUST support the framing described in this section. An alternative, self-delimiting variant of the framing is described in <xref target="self-delimiting-framing"/>. Support for that variant is OPTIONAL. </t> <t> All bit diagrams in this document number the bits so that bit 0 is the most significant bit of the first byte, and bit 7 is the least significant. Bit 8 is thus the most significant bit of the second byte, etc. Well-formed Opus packets obey certain requirements, marked [R1] through [R7] below. These are summarized in <xref target="malformed-packets"/> along with appropriate means of handling malformed packets. </t> <section anchor="toc_byte" title="The TOC Byte"> <t anchor="R1"> A well-formed Opus packet MUST contain at least one byte [R1]. This byte forms a table-of-contents (TOC) header that signals which of the various modes and configurations a given packet uses. It is composed of a configuration number, "config", a stereo flag, "s", and a frame count code, "c", arranged as illustrated in <xref target="toc_byte_fig"/>. A description of each of these fields follows. </t> <figure anchor="toc_byte_fig" title="The TOC Byte"> <artwork align="center"><![CDATA[ 0 0 1 2 3 4 5 6 7 +-+-+-+-+-+-+-+-+ | config |s| c | +-+-+-+-+-+-+-+-+ ]]></artwork> </figure> <t> The top five bits of the TOC byte, labeled "config", encode one of 32 possible configurations of operating mode, audio bandwidth, and frame size. As described, the LP (SILK) layer and MDCT (CELT) layer can be combined in three possible operating modes: <list style="numbers"> <t>A SILK-only mode for use in low bitrate connections with an audio bandwidth of WB or less,</t> <t>A Hybrid (SILK+CELT) mode for SWB or FB speech at medium bitrates, and</t> <t>A CELT-only mode for very low delay speech transmission as well as music transmission (NB to FB).</t> </list> The 32 possible configurations each identify which one of these operating modes the packet uses, as well as the audio bandwidth and the frame size. <xref target="config_bits"/> lists the parameters for each configuration. </t> <texttable anchor="config_bits" title="TOC Byte Configuration Parameters"> <ttcol>Configuration Number(s)</ttcol> <ttcol>Mode</ttcol> <ttcol>Bandwidth</ttcol> <ttcol>Frame Sizes</ttcol> <c>0...3</c> <c>SILK-only</c> <c>NB</c> <c>10, 20, 40, 60 ms</c> <c>4...7</c> <c>SILK-only</c> <c>MB</c> <c>10, 20, 40, 60 ms</c> <c>8...11</c> <c>SILK-only</c> <c>WB</c> <c>10, 20, 40, 60 ms</c> <c>12...13</c> <c>Hybrid</c> <c>SWB</c> <c>10, 20 ms</c> <c>14...15</c> <c>Hybrid</c> <c>FB</c> <c>10, 20 ms</c> <c>16...19</c> <c>CELT-only</c> <c>NB</c> <c>2.5, 5, 10, 20 ms</c> <c>20...23</c> <c>CELT-only</c> <c>WB</c> <c>2.5, 5, 10, 20 ms</c> <c>24...27</c> <c>CELT-only</c> <c>SWB</c> <c>2.5, 5, 10, 20 ms</c> <c>28...31</c> <c>CELT-only</c> <c>FB</c> <c>2.5, 5, 10, 20 ms</c> </texttable> <t> The configuration numbers in each range (e.g., 0...3 for NB SILK-only) correspond to the various choices of frame size, in the same order. For example, configuration 0 has a 10 ms frame size and configuration 3 has a 60 ms frame size. </t> <t> One additional bit, labeled "s", signals mono vs. stereo, with 0 indicating mono and 1 indicating stereo. 
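As a non-normative illustration, the following C sketch unpacks the three TOC fields and applies the mapping from <xref target="config_bits"/> (the interpretation of the frame count code "c" is given below). All names and types in the sketch are purely illustrative and are not part of the reference implementation.
<figure align="center"> <artwork align="center"><![CDATA[
/* Informative sketch only: unpack the TOC byte and look up the mode,
 * audio bandwidth, and frame duration for its "config" value. */
typedef enum { MODE_SILK_ONLY, MODE_HYBRID, MODE_CELT_ONLY } sketch_mode;

typedef struct {
   sketch_mode mode;
   const char *bandwidth;    /* "NB", "MB", "WB", "SWB", or "FB" */
   double      frame_ms;     /* frame duration in milliseconds */
   int         stereo;       /* the "s" bit */
   int         count_code;   /* the frame count code "c" (0...3) */
} sketch_toc;

static sketch_toc parse_toc(unsigned char toc)
{
   static const char   *silk_bw[3] = { "NB", "MB", "WB" };
   static const char   *celt_bw[4] = { "NB", "WB", "SWB", "FB" };
   static const double  silk_ms[4] = { 10, 20, 40, 60 };
   static const double  celt_ms[4] = { 2.5, 5, 10, 20 };
   sketch_toc t;
   int config   = toc >> 3;           /* top five bits */
   t.stereo     = (toc >> 2) & 1;
   t.count_code = toc & 3;
   if (config < 12) {                 /* configurations 0...11: SILK-only */
      t.mode      = MODE_SILK_ONLY;
      t.bandwidth = silk_bw[config >> 2];
      t.frame_ms  = silk_ms[config & 3];
   } else if (config < 16) {          /* configurations 12...15: Hybrid */
      t.mode      = MODE_HYBRID;
      t.bandwidth = (config < 14) ? "SWB" : "FB";
      t.frame_ms  = (config & 1) ? 20 : 10;
   } else {                           /* configurations 16...31: CELT-only */
      t.mode      = MODE_CELT_ONLY;
      t.bandwidth = celt_bw[(config - 16) >> 2];
      t.frame_ms  = celt_ms[config & 3];
   }
   return t;
}
]]></artwork> </figure>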
</t> <t> The remaining two bits of the TOC byte, labeled "c", code the number of frames per packet (codes 0 to 3) as follows: <list style="symbols"> <t>0: 1 frame in the packet</t> <t>1: 2 frames in the packet, each with equal compressed size</t> <t>2: 2 frames in the packet, with different compressed sizes</t> <t>3: an arbitrary number of frames in the packet</t> </list> This draft refers to a packet as a code 0 packet, code 1 packet, etc., based on the value of "c". </t> </section> <section title="Frame Packing"> <t> This section describes how frames are packed according to each possible value of "c" in the TOC byte. </t> <section anchor="frame-length-coding" title="Frame Length Coding"> <t> When a packet contains multiple VBR frames (i.e., code 2 or 3), the compressed length of one or more of these frames is indicated with a one- or two-byte sequence, with the meaning of the first byte as follows: <list style="symbols"> <t>0: No frame (discontinuous transmission (DTX) or lost packet)</t> <t>1...251: Length of the frame in bytes</t> <t>252...255: A second byte is needed. The total length is (second_byte*4)+first_byte</t> </list> </t> <t> The special length 0 indicates that no frame is available, either because it was dropped during transmission by some intermediary or because the encoder chose not to transmit it. Any Opus frame in any mode MAY have a length of 0. </t> <t> The maximum representable length is 255*4+255=1275 bytes. For 20 ms frames, this represents a bitrate of 510 kb/s, which is approximately the highest useful rate for lossily compressed fullband stereo music. Beyond this point, lossless codecs are more appropriate. It is also roughly the maximum useful rate of the MDCT layer, as shortly thereafter quality no longer improves with additional bits due to limitations on the codebook sizes. </t> <t anchor="R2"> No length is transmitted for the last frame in a VBR packet, or for any of the frames in a CBR packet, as it can be inferred from the total size of the packet and the size of all other data in the packet. However, the length of any individual frame MUST NOT exceed 1275 bytes [R2], to allow for repacketization by gateways, conference bridges, or other software. </t> </section> <section title="Code 0: One Frame in the Packet"> <t> For code 0 packets, the TOC byte is immediately followed by N-1 bytes of compressed data for a single frame (where N is the size of the packet), as illustrated in <xref target="code0_packet"/>. </t> <figure anchor="code0_packet" title="A Code 0 Packet" align="center"> <artwork align="center"><![CDATA[ 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | config |s|0|0| | +-+-+-+-+-+-+-+-+ | | Compressed frame 1 (N-1 bytes)... : : | | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ]]></artwork> </figure> </section> <section title="Code 1: Two Frames in the Packet, Each with Equal Compressed Size"> <t anchor="R3"> For code 1 packets, the TOC byte is immediately followed by the (N-1)/2 bytes of compressed data for the first frame, followed by (N-1)/2 bytes of compressed data for the second frame, as illustrated in <xref target="code1_packet"/>. The number of payload bytes available for compressed data, N-1, MUST be even for all code 1 packets [R3]. 
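As a further non-normative illustration, the following C sketch splits the payload of a code 0 or code 1 packet into its compressed frames while enforcing [R1], [R2], and [R3]; the names and calling convention are illustrative only.
<figure align="center"> <artwork align="center"><![CDATA[
/* Informative sketch only: split a code 0 or code 1 packet.  data/len
 * cover the complete packet, including the TOC byte.  Returns the
 * number of frames, or -1 if the packet is malformed. */
static int split_code0_or_code1(const unsigned char *data, int len,
                  const unsigned char *frame[2], int size[2])
{
   int c;
   if (len < 1) return -1;            /* [R1]: at least one byte */
   c = data[0] & 0x3;                 /* frame count code from the TOC */
   if (c == 0) {
      size[0] = len - 1;              /* a single frame of N-1 bytes */
      if (size[0] > 1275) return -1;  /* [R2]: at most 1275 bytes */
      frame[0] = data + 1;
      return 1;
   } else if (c == 1) {
      if ((len - 1) & 1) return -1;   /* [R3]: N-1 must be even */
      size[0] = size[1] = (len - 1) >> 1;
      if (size[0] > 1275) return -1;  /* [R2] */
      frame[0] = data + 1;
      frame[1] = data + 1 + size[0];
      return 2;
   }
   return -1;                         /* codes 2 and 3 not handled here */
}
]]></artwork> </figure>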
</t> <figure anchor="code1_packet" title="A Code 1 Packet" align="center"> <artwork align="center"><![CDATA[ 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | config |s|0|1| | +-+-+-+-+-+-+-+-+ : | Compressed frame 1 ((N-1)/2 bytes)... | : +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ : | Compressed frame 2 ((N-1)/2 bytes)... | : +-+-+-+-+-+-+-+-+ | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ]]></artwork> </figure> </section> <section title="Code 2: Two Frames in the Packet, with Different Compressed Sizes"> <t anchor="R4"> For code 2 packets, the TOC byte is followed by a one- or two-byte sequence indicating the length of the first frame (marked N1 in <xref target='code2_packet'/>), followed by N1 bytes of compressed data for the first frame. The remaining N-N1-2 or N-N1-3 bytes are the compressed data for the second frame. This is illustrated in <xref target="code2_packet"/>. A code 2 packet MUST contain enough bytes to represent a valid length. For example, a 1-byte code 2 packet is always invalid, and a 2-byte code 2 packet whose second byte is in the range 252...255 is also invalid. The length of the first frame, N1, MUST also be no larger than the size of the payload remaining after decoding that length for all code 2 packets [R4]. This makes, for example, a 2-byte code 2 packet with a second byte in the range 1...251 invalid as well (the only valid 2-byte code 2 packet is one where the length of both frames is zero). </t> <figure anchor="code2_packet" title="A Code 2 Packet" align="center"> <artwork align="center"><![CDATA[ 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | config |s|1|0| N1 (1-2 bytes): | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ : | Compressed frame 1 (N1 bytes)... | : +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | Compressed frame 2... : : | | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ]]></artwork> </figure> </section> <section title="Code 3: A Signaled Number of Frames in the Packet"> <t anchor="R5"> Code 3 packets signal the number of frames, as well as additional padding, called "Opus padding" to indicate that this padding is added at the Opus layer, rather than at the transport layer. Code 3 packets MUST have at least 2 bytes [R6,R7]. The TOC byte is followed by a byte encoding the number of frames in the packet in bits 2 to 7 (marked "M" in <xref target='frame_count_byte'/>), with bit 1 indicating whether or not Opus padding is inserted (marked "p" in <xref target='frame_count_byte'/>), and bit 0 indicating VBR (marked "v" in <xref target='frame_count_byte'/>). M MUST NOT be zero, and the audio duration contained within a packet MUST NOT exceed 120 ms [R5]. This limits the maximum frame count for any frame size to 48 (for 2.5 ms frames), with lower limits for longer frame sizes. <xref target="frame_count_byte"/> illustrates the layout of the frame count byte. </t> <figure anchor="frame_count_byte" title="The frame count byte"> <artwork align="center"><![CDATA[ 0 0 1 2 3 4 5 6 7 +-+-+-+-+-+-+-+-+ |v|p| M | +-+-+-+-+-+-+-+-+ ]]></artwork> </figure> <t> When Opus padding is used, the number of bytes of padding is encoded in the bytes following the frame count byte. 
Values from 0...254 indicate that 0...254 bytes of padding are included, in addition to the byte(s) used to indicate the size of the padding. If the value is 255, then the size of the additional padding is 254 bytes, plus the padding value encoded in the next byte. There MUST be at least one more byte in the packet in this case [R6,R7]. The additional padding bytes appear at the end of the packet, and MUST be set to zero by the encoder to avoid creating a covert channel. The decoder MUST accept any value for the padding bytes, however. </t> <t> Although this encoding provides multiple ways to indicate a given number of padding bytes, each uses a different number of bytes to indicate the padding size, and thus will increase the total packet size by a different amount. For example, to add 255 bytes to a packet, set the padding bit, p, to 1, insert a single byte after the frame count byte with a value of 254, and append 254 padding bytes with the value zero to the end of the packet. To add 256 bytes to a packet, set the padding bit to 1, insert two bytes after the frame count byte with the values 255 and 0, respectively, and append 254 padding bytes with the value zero to the end of the packet. By using the value 255 multiple times, it is possible to create a packet of any specific, desired size. Let P be the number of header bytes used to indicate the padding size plus the number of padding bytes themselves (i.e., P is the total number of bytes added to the packet). Then P MUST be no more than N-2 [R6,R7]. </t> <t anchor="R6"> In the CBR case, let R=N-2-P be the number of bytes remaining in the packet after subtracting the (optional) padding. Then the compressed length of each frame in bytes is equal to R/M. The value R MUST be a non-negative integer multiple of M [R6]. The compressed data for all M frames follows, each of size R/M bytes, as illustrated in <xref target="code3cbr_packet"/>. </t> <figure anchor="code3cbr_packet" title="A CBR Code 3 Packet" align="center"> <artwork align="center"><![CDATA[ 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | config |s|1|1|0|p| M | Padding length (Optional) : +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | : Compressed frame 1 (R/M bytes)... : | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | : Compressed frame 2 (R/M bytes)... : | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | : ... : | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | : Compressed frame M (R/M bytes)... : | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ : Opus Padding (Optional)... | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ]]></artwork> </figure> <t anchor="R7"> In the VBR case, the (optional) padding length is followed by M-1 frame lengths (indicated by "N1" to "N[M-1]" in <xref target='code3vbr_packet'/>), each encoded in a one- or two-byte sequence as described above. The packet MUST contain enough data for the M-1 lengths after removing the (optional) padding, and the sum of these lengths MUST be no larger than the number of bytes remaining in the packet after decoding them [R7]. The compressed data for all M frames follows, each frame consisting of the indicated number of bytes, with the final frame consuming any remaining bytes before the final padding, as illustrated in <xref target="code3vbr_packet"/>. 
The number of header bytes (TOC byte, frame count byte, padding length bytes, and frame length bytes), plus the signaled length of the first M-1 frames themselves, plus the signaled length of the padding MUST be no larger than N, the total size of the packet. </t> <figure anchor="code3vbr_packet" title="A VBR Code 3 Packet" align="center"> <artwork align="center"><![CDATA[ 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | config |s|1|1|1|p| M | Padding length (Optional) : +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ : N1 (1-2 bytes): N2 (1-2 bytes): ... : N[M-1] | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | : Compressed frame 1 (N1 bytes)... : | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | : Compressed frame 2 (N2 bytes)... : | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | : ... : | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | : Compressed frame M... : | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ : Opus Padding (Optional)... | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ]]></artwork> </figure> </section> </section> <section anchor="examples" title="Examples"> <t> Simplest case, one NB mono 20 ms SILK frame: </t> <figure anchor='framing_example_1'> <artwork><![CDATA[ 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | 1 |0|0|0| compressed data... : +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ]]></artwork> </figure> <t> Two FB mono 5 ms CELT frames of the same compressed size: </t> <figure anchor='framing_example_2'> <artwork><![CDATA[ 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | 29 |0|0|1| compressed data... : +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ]]></artwork> </figure> <t> Two FB mono 20 ms Hybrid frames of different compressed size: </t> <figure anchor='framing_example_3'> <artwork><![CDATA[ 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | 15 |0|1|1|1|0| 2 | N1 | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | compressed data... : +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ]]></artwork> </figure> <t> Four FB stereo 20 ms CELT frames of the same compressed size: </t> <figure anchor='framing_example_4'> <artwork><![CDATA[ 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | 31 |1|1|1|0|0| 4 | compressed data... : +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ]]></artwork> </figure> </section> <section anchor="malformed-packets" title="Receiving Malformed Packets"> <t> A receiver MUST NOT process packets which violate any of the rules above as normal Opus packets. They are reserved for future applications, such as in-band headers (containing metadata, etc.). Packets which violate these constraints may cause implementations of <spanx style="emph">this</spanx> specification to treat them as malformed, and discard them. 
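Checking these constraints requires decoding the one- or two-byte frame lengths and the chained Opus padding length described earlier. The following non-normative C sketch shows one way to decode those two fields; the function names and calling convention are illustrative only and do not correspond to the reference implementation.
<figure align="center"> <artwork align="center"><![CDATA[
/* Informative sketch only.  "p" walks through the packet and "end"
 * points one past its last byte; both functions return -1 if the
 * packet ends prematurely. */

/* Frame length: 0 means "no frame"; 1...251 is the length itself;
 * 252...255 means the length is (second_byte*4) + first_byte. */
static int decode_frame_length(const unsigned char **p,
                               const unsigned char *end)
{
   int len;
   if (*p >= end) return -1;
   len = *(*p)++;
   if (len >= 252) {
      if (*p >= end) return -1;
      len += 4 * *(*p)++;       /* maximum value: 255*4 + 255 = 1275 */
   }
   return len;
}

/* Opus padding length: each byte of 255 adds 254 bytes of padding and
 * chains to the next byte; a byte of 0...254 adds that many and stops. */
static int decode_padding_length(const unsigned char **p,
                                 const unsigned char *end)
{
   int pad = 0, b;
   do {
      if (*p >= end) return -1;
      b = *(*p)++;
      pad += (b == 255) ? 254 : b;
   } while (b == 255);
   return pad;
}
]]></artwork> </figure>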
</t> <t> These constraints are summarized here for reference: <list style="format [R%d]"> <t>Packets are at least one byte.</t> <t>No implicit frame length is larger than 1275 bytes.</t> <t>Code 1 packets have an odd total length, N, so that (N-1)/2 is an integer.</t> <t>Code 2 packets have enough bytes after the TOC for a valid frame length, and that length is no larger than the number of bytes remaining in the packet.</t> <t>Code 3 packets contain at least one frame, but no more than 120 ms of audio total.</t> <t>The length of a CBR code 3 packet, N, is at least two bytes, the number of bytes added to indicate the padding size plus the trailing padding bytes themselves, P, is no more than N-2, and the frame count, M, satisfies the constraint that (N-2-P) is a non-negative integer multiple of M.</t> <t>VBR code 3 packets are large enough to contain all the header bytes (TOC byte, frame count byte, any padding length bytes, and any frame length bytes), plus the length of the first M-1 frames, plus any trailing padding bytes.</t> </list> </t> </section> </section> <section title="Opus Decoder"> <t> The Opus decoder consists of two main blocks: the SILK decoder and the CELT decoder. At any given time, one or both of the SILK and CELT decoders may be active. The output of the Opus decode is the sum of the outputs from the SILK and CELT decoders with proper sample rate conversion and delay compensation on the SILK side, and optional decimation (when decoding to sample rates less than 48 kHz) on the CELT side, as illustrated in the block diagram below. </t> <figure> <artwork> <![CDATA[ +---------+ +------------+ | SILK | | Sample | +->| Decoder |--->| Rate |----+ Bit- +---------+ | | | | Conversion | v stream | Range |---+ +---------+ +------------+ /---\ Audio ------->| Decoder | | + |------> | |---+ +---------+ +------------+ \---/ +---------+ | | CELT | | Decimation | ^ +->| Decoder |--->| (Optional) |----+ | | | | +---------+ +------------+ ]]> </artwork> </figure> <section anchor="range-decoder" title="Range Decoder"> <t> Opus uses an entropy coder based on range coding <xref target="range-coding"></xref> <xref target="Martin79"></xref>, which is itself a rediscovery of the FIFO arithmetic code introduced by <xref target="coding-thesis"></xref>. It is very similar to arithmetic encoding, except that encoding is done with digits in any base instead of with bits, so it is faster when using larger bases (i.e., a byte). All of the calculations in the range coder must use bit-exact integer arithmetic. </t> <t> Symbols may also be coded as "raw bits" packed directly into the bitstream, bypassing the range coder. These are packed backwards starting at the end of the frame, as illustrated in <xref target="rawbits-example"/>. This reduces complexity and makes the stream more resilient to bit errors, as corruption in the raw bits will not desynchronize the decoding process, unlike corruption in the input to the range decoder. Raw bits are only used in the CELT layer. 
</t> <figure anchor="rawbits-example" title="Illustrative example of packing range coder and raw bits data"> <artwork align="center"><![CDATA[ 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Range coder data (packed MSB to LSB) -> : + + : : + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ : | <- Boundary occurs at an arbitrary bit position : +-+-+-+ + : <- Raw bits data (packed LSB to MSB) | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ]]></artwork> </figure> <t> Each symbol coded by the range coder is drawn from a finite alphabet and coded in a separate "context", which describes the size of the alphabet and the relative frequency of each symbol in that alphabet. </t> <t> Suppose there is a context with n symbols, identified with an index that ranges from 0 to n-1. The parameters needed to encode or decode symbol k in this context are represented by a three-tuple (fl[k], fh[k], ft), with 0 <= fl[k] < fh[k] <= ft <= 65535. The values of this tuple are derived from the probability model for the symbol, represented by traditional "frequency counts". Because Opus uses static contexts these are not updated as symbols are decoded. Let f[i] be the frequency of symbol i. Then the three-tuple corresponding to symbol k is given by </t> <figure align="center"> <artwork align="center"><![CDATA[ k-1 n-1 __ __ fl[k] = \ f[i], fh[k] = fl[k] + f[k], ft = \ f[i] /_ /_ i=0 i=0 ]]></artwork> </figure> <t> The range decoder extracts the symbols and integers encoded using the range encoder in <xref target="range-encoder"/>. The range decoder maintains an internal state vector composed of the two-tuple (val, rng), representing the difference between the high end of the current range and the actual coded value, minus one, and the size of the current range, respectively. Both val and rng are 32-bit unsigned integer values. </t> <section anchor="range-decoder-init" title="Range Decoder Initialization"> <t> Let b0 be the first input byte (or zero if there are no bytes in this Opus frame). The decoder initializes rng to 128 and initializes val to (127 - (b0>>1)), where (b0>>1) is the top 7 bits of the first input byte. It saves the remaining bit, (b0&1), for use in the renormalization procedure described in <xref target="range-decoder-renorm"/>, which the decoder invokes immediately after initialization to read additional bits and establish the invariant that rng > 2**23. </t> </section> <section anchor="decoding-symbols" title="Decoding Symbols"> <t> Decoding a symbol is a two-step process. The first step determines a 16-bit unsigned value fs, which lies within the range of some symbol in the current context. The second step updates the range decoder state with the three-tuple (fl[k], fh[k], ft) corresponding to that symbol. </t> <t> The first step is implemented by ec_decode() (entdec.c), which computes <figure align="center"> <artwork align="center"><![CDATA[ val fs = ft - min(------ + 1, ft) . rng/ft ]]></artwork> </figure> The divisions here are integer division. </t> <t> The decoder then identifies the symbol in the current context corresponding to fs; i.e., the value of k whose three-tuple (fl[k], fh[k], ft) satisfies fl[k] <= fs < fh[k]. It uses this tuple to update val according to <figure align="center"> <artwork align="center"><![CDATA[ rng val = val - --- * (ft - fh[k]) . 
ft ]]></artwork> </figure> If fl[k] is greater than zero, then the decoder updates rng using <figure align="center"> <artwork align="center"><![CDATA[ rng rng = --- * (fh[k] - fl[k]) . ft ]]></artwork> </figure> Otherwise, it updates rng using <figure align="center"> <artwork align="center"><![CDATA[ rng rng = rng - --- * (ft - fh[k]) . ft ]]></artwork> </figure> </t> <t> Using a special case for the first symbol (rather than the last symbol, as is commonly done in other arithmetic coders) ensures that all the truncation error from the finite precision arithmetic accumulates in symbol 0. This makes the cost of coding a 0 slightly smaller, on average, than its estimated probability indicates and makes the cost of coding any other symbol slightly larger. When contexts are designed so that 0 is the most probable symbol, which is often the case, this strategy minimizes the inefficiency introduced by the finite precision. It also makes some of the special-case decoding routines in <xref target="decoding-alternate"/> particularly simple. </t> <t> After the updates, implemented by ec_dec_update() (entdec.c), the decoder normalizes the range using the procedure in the next section, and returns the index k. </t> <section anchor="range-decoder-renorm" title="Renormalization"> <t> To normalize the range, the decoder repeats the following process, implemented by ec_dec_normalize() (entdec.c), until rng > 2**23. If rng is already greater than 2**23, the entire process is skipped. First, it sets rng to (rng<<8). Then it reads the next byte of the Opus frame and forms an 8-bit value sym, using the left-over bit buffered from the previous byte as the high bit and the top 7 bits of the byte just read as the other 7 bits of sym. The remaining bit in the byte just read is buffered for use in the next iteration. If no more input bytes remain, it uses zero bits instead. See <xref target="range-decoder-init"/> for the initialization used to process the first byte. Then, it sets <figure align="center"> <artwork align="center"><![CDATA[ val = ((val<<8) + (255-sym)) & 0x7FFFFFFF . ]]></artwork> </figure> </t> <t> It is normal and expected that the range decoder will read several bytes into the raw bits data (if any) at the end of the packet by the time the frame is completely decoded, as illustrated in <xref target="finalize-example"/>. This same data MUST also be returned as raw bits when requested. The encoder is expected to terminate the stream in such a way that the decoder will decode the intended values regardless of the data contained in the raw bits. <xref target="encoder-finalizing"/> describes a procedure for doing this. If the range decoder consumes all of the bytes belonging to the current frame, it MUST continue to use zero when any further input bytes are required, even if there is additional data in the current packet from padding or other frames. </t> <figure anchor="finalize-example" title="Illustrative example of raw bits overlapping range coder data"> <artwork align="center"><![CDATA[ n n+1 n+2 n+3 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ : | <----------- Overlap region ------------> | : +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ^ ^ | End of data buffered by the range coder | ...-----------------------------------------------+ | | End of data consumed by raw bits +-------------------------------------------------------... 
]]></artwork> </figure> </section> </section> <section anchor="decoding-alternate" title="Alternate Decoding Methods"> <t> The reference implementation uses three additional decoding methods that are exactly equivalent to the above, but make assumptions and simplifications that allow for a more efficient implementation. </t> <section anchor="ec_decode_bin" title="ec_decode_bin()"> <t> The first is ec_decode_bin() (entdec.c), defined using the parameter ftb instead of ft. It is mathematically equivalent to calling ec_decode() with ft = (1<<ftb), but avoids one of the divisions. </t> </section> <section anchor="ec_dec_bit_logp" title="ec_dec_bit_logp()"> <t> The next is ec_dec_bit_logp() (entdec.c), which decodes a single binary symbol, replacing both the ec_decode() and ec_dec_update() steps. The context is described by a single parameter, logp, which is the absolute value of the base-2 logarithm of the probability of a "1". It is mathematically equivalent to calling ec_decode() with ft = (1<<logp), followed by ec_dec_update() with the 3-tuple (fl[k] = 0, fh[k] = (1<<logp) - 1, ft = (1<<logp)) if the returned value of fs is less than (1<<logp) - 1 (a "0" was decoded), and with (fl[k] = (1<<logp) - 1, fh[k] = ft = (1<<logp)) otherwise (a "1" was decoded). The implementation requires no multiplications or divisions. </t> </section> <section anchor="ec_dec_icdf" title="ec_dec_icdf()"> <t> The last is ec_dec_icdf() (entdec.c), which decodes a single symbol with a table-based context of up to 8 bits, also replacing both the ec_decode() and ec_dec_update() steps, as well as the search for the decoded symbol in between. The context is described by two parameters, an icdf ("inverse" cumulative distribution function) table and ftb. As with ec_decode_bin(), (1<<ftb) is equivalent to ft. icdf[k], on the other hand, stores (1<<ftb)-fh[k], which is equal to (1<<ftb) - fl[k+1]. fl[0] is assumed to be 0, and the table is terminated by a value of 0 (where fh[k] == ft). </t> <t> The function is mathematically equivalent to calling ec_decode() with ft = (1<<ftb), using the returned value fs to search the table for the first entry where fs < (1<<ftb)-icdf[k], and calling ec_dec_update() with fl[k] = (1<<ftb) - icdf[k-1] (or 0 if k == 0), fh[k] = (1<<ftb) - icdf[k], and ft = (1<<ftb). Combining the search with the update allows the division to be replaced by a series of multiplications (which are usually much cheaper), and using an inverse CDF allows the use of an ftb as large as 8 in an 8-bit table without any special cases. This is the primary interface with the range decoder in the SILK layer, though it is used in a few places in the CELT layer as well. </t> <t> Although icdf[k] is more convenient for the code, the frequency counts, f[k], are a more natural representation of the probability distribution function (PDF) for a given symbol. Therefore this draft lists the latter, not the former, when describing the context in which a symbol is coded as a list, e.g., {4, 4, 4, 4}/16 for a uniform context with four possible values and ft = 16. The value of ft after the slash is always the sum of the entries in the PDF, but is included for convenience. Contexts with identical probabilities, f[k]/ft, but different values of ft (or equivalently, ftb) are not the same, and cannot, in general, be used in place of one another. An icdf table is also not capable of representing a PDF where the first symbol has 0 probability. 
In such contexts, ec_dec_icdf() can decode the symbol by using a table that drops the entries for any initial zero-probability values and adding the constant offset of the first value with a non-zero probability to its return value. </t> </section> </section> <section anchor="decoding-bits" title="Decoding Raw Bits"> <t> The raw bits used by the CELT layer are packed at the end of the packet, with the least significant bit of the first value packed in the least significant bit of the last byte, filling up to the most significant bit in the last byte, continuing on to the least significant bit of the penultimate byte, and so on. The reference implementation reads them using ec_dec_bits() (entdec.c). Because the range decoder must read several bytes ahead in the stream, as described in <xref target="range-decoder-renorm"/>, the input consumed by the raw bits may overlap with the input consumed by the range coder, and a decoder MUST allow this. The format should render it impossible to attempt to read more raw bits than there are actual bits in the frame, though a decoder may wish to check for this and report an error. </t> </section> <section anchor="ec_dec_uint" title="Decoding Uniformly Distributed Integers"> <t> The function ec_dec_uint() (entdec.c) decodes one of ft equiprobable values in the range 0 to (ft - 1), inclusive, each with a frequency of 1, where ft may be as large as (2**32 - 1). Because ec_decode() is limited to a total frequency of (2**16 - 1), it splits up the value into a range coded symbol representing up to 8 of the high bits, and, if necessary, raw bits representing the remainder of the value. The limit of 8 bits in the range coded symbol is a trade-off between implementation complexity, modeling error (since the symbols no longer truly have equal coding cost), and rounding error introduced by the range coder itself (which gets larger as more bits are included). Using raw bits reduces the maximum number of divisions required in the worst case, but means that it may be possible to decode a value outside the range 0 to (ft - 1), inclusive. </t> <t> ec_dec_uint() takes a single, positive parameter, ft, which is not necessarily a power of two, and returns an integer, t, whose value lies between 0 and (ft - 1), inclusive. Let ftb = ilog(ft - 1), i.e., the number of bits required to store (ft - 1) in two's complement notation. If ftb is 8 or less, then t is decoded with t = ec_decode(ft), and the range coder state is updated using the three-tuple (t, t + 1, ft). </t> <t> If ftb is greater than 8, then the top 8 bits of t are decoded using <figure align="center"> <artwork align="center"><![CDATA[ t = ec_decode(((ft - 1) >> (ftb - 8)) + 1) , ]]></artwork> </figure> the decoder state is updated using the three-tuple (t, t + 1, ((ft - 1) >> (ftb - 8)) + 1), and the remaining bits are decoded as raw bits, setting <figure align="center"> <artwork align="center"><![CDATA[ t = (t << (ftb - 8)) | ec_dec_bits(ftb - 8) . ]]></artwork> </figure> If, at this point, t >= ft, then the current frame is corrupt. In that case, the decoder should assume there has been an error in the coding, decoding, or transmission and SHOULD take measures to conceal the error and/or report to the application that the error has occurred. 
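The following non-normative C sketch follows this procedure directly. The prototypes shown for the range decoder primitives are simplified for illustration (the reference implementation additionally passes a decoder state object), and the error handling is only indicated by a comment.
<figure align="center"> <artwork align="center"><![CDATA[
/* Informative sketch only: decode a uniformly distributed integer in
 * the range 0 to (ft - 1), inclusive.  Simplified prototypes for the
 * range decoder operations described in this section: */
extern unsigned ec_decode(unsigned ft);
extern void     ec_dec_update(unsigned fl, unsigned fh, unsigned ft);
extern unsigned ec_dec_bits(unsigned nbits);
extern int      ilog(unsigned n);    /* as defined in "ilog(n)" above */

static unsigned dec_uint_sketch(unsigned ft)      /* requires ft > 1 */
{
   unsigned t;
   int ftb = ilog(ft - 1);           /* bits needed to store (ft - 1) */
   if (ftb <= 8) {
      t = ec_decode(ft);
      ec_dec_update(t, t + 1, ft);
      return t;
   } else {
      unsigned ft_hi = ((ft - 1) >> (ftb - 8)) + 1;
      t = ec_decode(ft_hi);          /* up to 8 of the high bits */
      ec_dec_update(t, t + 1, ft_hi);
      t = (t << (ftb - 8)) | ec_dec_bits(ftb - 8);  /* remaining bits */
      if (t >= ft) {
         /* Corrupt frame: conceal the error and/or report it. */
      }
      return t;
   }
}
]]></artwork> </figure>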
</t> </section> <section anchor="decoder-tell" title="Current Bit Usage"> <t> The bit allocation routines in the CELT decoder need a conservative upper bound on the number of bits that have been used from the current frame thus far, including both range coder bits and raw bits. This drives allocation decisions that must match those made in the encoder. The upper bound is computed in the reference implementation to whole-bit precision by the function ec_tell() (entcode.h) and to fractional 1/8th bit precision by the function ec_tell_frac() (entcode.c). Like all operations in the range coder, it must be implemented in a bit-exact manner, and must produce exactly the same value returned by the same functions in the encoder after encoding the same symbols. </t> <t> ec_tell() is guaranteed to return ceil(ec_tell_frac()/8.0). In various places the codec will check to ensure there is enough room to contain a symbol before attempting to decode it. In practice, although the number of bits used so far is an upper bound, decoding a symbol whose probability model suggests it has a worst-case cost of p 1/8th bits may actually advance the return value of ec_tell_frac() by p-1, p, or p+1 1/8th bits, due to approximation error in that upper bound, truncation error in the range coder, and for large values of ft, modeling error in ec_dec_uint(). </t> <t> However, this error is bounded, and periodic calls to ec_tell() or ec_tell_frac() at precisely defined points in the decoding process prevent it from accumulating. For a range coder symbol that requires a whole number of bits (i.e., for which ft/(fh[k] - fl[k]) is a power of two), where there are at least p 1/8th bits available, decoding the symbol will never cause ec_tell() or ec_tell_frac() to exceed the size of the frame ("bust the budget"). In this case the return value of ec_tell_frac() will only advance by more than p 1/8th bits if there was an additional, fractional number of bits remaining, and it will never advance beyond the next whole-bit boundary, which is safe, since frames always contain a whole number of bits. However, when p is not a whole number of bits, an extra 1/8th bit is required to ensure that decoding the symbol will not bust the budget. </t> <t> The reference implementation keeps track of the total number of whole bits that have been processed by the decoder so far in the variable nbits_total, including the (possibly fractional) number of bits that are currently buffered, but not consumed, inside the range coder. nbits_total is initialized to 9 just before the initial range renormalization process completes (or equivalently, it can be initialized to 33 after the first renormalization). The extra two bits over the actual amount buffered by the range coder guarantees that it is an upper bound and that there is enough room for the encoder to terminate the stream. Each iteration through the range coder's renormalization loop increases nbits_total by 8. Reading raw bits increases nbits_total by the number of raw bits read. </t> <section anchor="ec_tell" title="ec_tell()"> <t> The whole number of bits buffered in rng may be estimated via lg = ilog(rng). ec_tell() then becomes a simple matter of removing these bits from the total. It returns (nbits_total - lg). </t> <t> In a newly initialized decoder, before any symbols have been read, this reports that 1 bit has been used. This is the bit reserved for termination of the encoder. 
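A non-normative sketch of this computation, with the nbits_total and rng state passed explicitly (the reference implementation instead keeps them in its decoder state object):
<figure align="center"> <artwork align="center"><![CDATA[
/* Informative sketch only: conservative upper bound, in whole bits, on
 * the number of bits used from the current frame so far. */
static int tell_sketch(int nbits_total, unsigned rng)
{
   return nbits_total - ilog(rng);  /* ilog() as defined in "ilog(n)" */
}
]]></artwork> </figure>
For example, the initial renormalization leaves rng equal to 2**31 (128 successively multiplied by 256 until it exceeds 2**23) with nbits_total equal to 33, so the sketch returns 33 - 32 = 1, matching the single reserved bit described above.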
</t> </section> <section anchor="ec_tell_frac" title="ec_tell_frac()"> <t> ec_tell_frac() estimates the number of bits buffered in rng to fractional precision. Since rng must be greater than 2**23 after renormalization, lg must be at least 24. Let <figure align="center"> <artwork align="center"> <![CDATA[ r_Q15 = rng >> (lg-16) , ]]></artwork> </figure> so that 32768 <= r_Q15 < 65536, an unsigned Q15 value representing the fractional part of rng. Then the following procedure can be used to add one bit of precision to lg. First, update <figure align="center"> <artwork align="center"> <![CDATA[ r_Q15 = (r_Q15*r_Q15) >> 15 . ]]></artwork> </figure> Then add the 16th bit of r_Q15 to lg via <figure align="center"> <artwork align="center"> <![CDATA[ lg = 2*lg + (r_Q15 >> 16) . ]]></artwork> </figure> Finally, if this bit was a 1, reduce r_Q15 by a factor of two via <figure align="center"> <artwork align="center"> <![CDATA[ r_Q15 = r_Q15 >> 1 , ]]></artwork> </figure> so that it once again lies in the range 32768 <= r_Q15 < 65536. </t> <t> This procedure is repeated three times to extend lg to 1/8th bit precision. ec_tell_frac() then returns (nbits_total*8 - lg). </t> </section> </section> </section> <section anchor="silk_decoder_outline" title="SILK Decoder"> <t> The decoder's LP layer uses a modified version of the SILK codec (herein simply called "SILK"), which runs a decoded excitation signal through adaptive long-term and short-term prediction synthesis filters. It runs at NB, MB, and WB sample rates internally. When used in a SWB or FB Hybrid frame, the LP layer itself still only runs in WB. </t> <section title="SILK Decoder Modules"> <t> An overview of the decoder is given in <xref target="silk_decoder_figure"/>. </t> <figure align="center" anchor="silk_decoder_figure" title="SILK Decoder"> <artwork align="center"> <![CDATA[ +---------+ +------------+ -->| Range |--->| Decode |---------------------------+ 1 | Decoder | 2 | Parameters |----------+ 5 | +---------+ +------------+ 4 | | 3 | | | \/ \/ \/ +------------+ +------------+ +------------+ | Generate |-->| LTP |-->| LPC | | Excitation | | Synthesis | | Synthesis | +------------+ +------------+ +------------+ ^ | | | +-------------------+----------------+ | 6 | +------------+ +-------------+ +-->| Stereo |-->| Sample Rate |--> | Unmixing | 7 | Conversion | 8 +------------+ +-------------+ 1: Range encoded bitstream 2: Coded parameters 3: Pulses, LSBs, and signs 4: Pitch lags, Long-Term Prediction (LTP) coefficients 5: Linear Predictive Coding (LPC) coefficients and gains 6: Decoded signal (mono or mid-side stereo) 7: Unmixed signal (mono or left-right stereo) 8: Resampled signal ]]> </artwork> </figure> <t> The decoder feeds the bitstream (1) to the range decoder from <xref target="range-decoder"/>, and then decodes the parameters in it (2) using the procedures detailed in Sections <xref format="counter" target="silk_header_bits"/> through <xref format="counter" target="silk_signs"/>. These parameters (3, 4, 5) are used to generate an excitation signal (see <xref target="silk_excitation_reconstruction"/>), which is fed to an optional long-term prediction (LTP) filter (voiced frames only, see <xref target="silk_ltp_synthesis"/>) and then a short-term prediction filter (see <xref target="silk_lpc_synthesis"/>), producing the decoded signal (6). For stereo streams, the mid-side representation is converted to separate left and right channels (7). 
The result is finally resampled to the desired output sample rate (e.g., 48 kHz) so that the resampled signal (8) can be mixed with the CELT layer. </t> </section> <section anchor="silk_layer_organization" title="LP Layer Organization"> <t> Internally, the LP layer of a single Opus frame is composed of either a single 10 ms regular SILK frame or between one and three 20 ms regular SILK frames. A stereo Opus frame may double the number of regular SILK frames (up to a total of six), since it includes separate frames for a mid channel and, optionally, a side channel. Optional Low Bit-Rate Redundancy (LBRR) frames, which are reduced-bitrate encodings of previous SILK frames, may be included to aid in recovery from packet loss. If present, these appear before the regular SILK frames. They are in most respects identical to regular, active SILK frames, except that they are usually encoded with a lower bitrate. This draft uses "SILK frame" to refer to either one and "regular SILK frame" if it needs to draw a distinction between the two. </t> <t> Logically, each SILK frame is in turn composed of either two or four 5 ms subframes. Various parameters, such as the quantization gain of the excitation and the pitch lag and filter coefficients can vary on a subframe-by-subframe basis. Physically, the parameters for each subframe are interleaved in the bitstream, as described in the relevant sections for each parameter. </t> <t> All of these frames and subframes are decoded from the same range coder, with no padding between them. Thus packing multiple SILK frames in a single Opus frame saves, on average, half a byte per SILK frame. It also allows some parameters to be predicted from prior SILK frames in the same Opus frame, since this does not degrade packet loss robustness (beyond any penalty for merely using fewer, larger packets to store multiple frames). </t> <t> Stereo support in SILK uses a variant of mid-side coding, allowing a mono decoder to simply decode the mid channel. However, the data for the two channels is interleaved, so a mono decoder must still unpack the data for the side channel. It would be required to do so anyway for Hybrid Opus frames, or to support decoding individual 20 ms frames. </t> <t> <xref target="silk_symbols"/> summarizes the overall grouping of the contents of the LP layer. Figures <xref format="counter" target="silk_mono_60ms_frame"/> and <xref format="counter" target="silk_stereo_60ms_frame"/> illustrate the ordering of the various SILK frames for a 60 ms Opus frame, for both mono and stereo, respectively. 
</t> <texttable anchor="silk_symbols" title="Organization of the SILK layer of an Opus frame"> <ttcol align="center">Symbol(s)</ttcol> <ttcol align="center">PDF(s)</ttcol> <ttcol align="center">Condition</ttcol> <c>Voice Activity Detection (VAD) flags</c> <c>{1, 1}/2</c> <c/> <c>LBRR flag</c> <c>{1, 1}/2</c> <c/> <c>Per-frame LBRR flags</c> <c><xref target="silk_lbrr_flag_pdfs"/></c> <c><xref target="silk_lbrr_flags"/></c> <c>LBRR Frame(s)</c> <c><xref target="silk_frame"/></c> <c><xref target="silk_lbrr_flags"/></c> <c>Regular SILK Frame(s)</c> <c><xref target="silk_frame"/></c> <c/> </texttable> <figure align="center" anchor="silk_mono_60ms_frame" title="A 60 ms Mono Frame"> <artwork align="center"><![CDATA[ +---------------------------------+ | VAD Flags | +---------------------------------+ | LBRR Flag | +---------------------------------+ | Per-Frame LBRR Flags (Optional) | +---------------------------------+ | LBRR Frame 1 (Optional) | +---------------------------------+ | LBRR Frame 2 (Optional) | +---------------------------------+ | LBRR Frame 3 (Optional) | +---------------------------------+ | Regular SILK Frame 1 | +---------------------------------+ | Regular SILK Frame 2 | +---------------------------------+ | Regular SILK Frame 3 | +---------------------------------+ ]]></artwork> </figure> <figure align="center" anchor="silk_stereo_60ms_frame" title="A 60 ms Stereo Frame"> <artwork align="center"><![CDATA[ +---------------------------------------+ | Mid VAD Flags | +---------------------------------------+ | Mid LBRR Flag | +---------------------------------------+ | Side VAD Flags | +---------------------------------------+ | Side LBRR Flag | +---------------------------------------+ | Mid Per-Frame LBRR Flags (Optional) | +---------------------------------------+ | Side Per-Frame LBRR Flags (Optional) | +---------------------------------------+ | Mid LBRR Frame 1 (Optional) | +---------------------------------------+ | Side LBRR Frame 1 (Optional) | +---------------------------------------+ | Mid LBRR Frame 2 (Optional) | +---------------------------------------+ | Side LBRR Frame 2 (Optional) | +---------------------------------------+ | Mid LBRR Frame 3 (Optional) | +---------------------------------------+ | Side LBRR Frame 3 (Optional) | +---------------------------------------+ | Mid Regular SILK Frame 1 | +---------------------------------------+ | Side Regular SILK Frame 1 (Optional) | +---------------------------------------+ | Mid Regular SILK Frame 2 | +---------------------------------------+ | Side Regular SILK Frame 2 (Optional) | +---------------------------------------+ | Mid Regular SILK Frame 3 | +---------------------------------------+ | Side Regular SILK Frame 3 (Optional) | +---------------------------------------+ ]]></artwork> </figure> </section> <section anchor="silk_header_bits" title="Header Bits"> <t> The LP layer begins with two to eight header bits, decoded in silk_Decode() (dec_API.c). These consist of one Voice Activity Detection (VAD) bit per frame (up to 3), followed by a single flag indicating the presence of LBRR frames. For a stereo packet, these first flags correspond to the mid channel, and a second set of flags is included for the side channel. </t> <t> Because these are the first symbols decoded by the range coder and because they are coded as binary values with uniform probability, they can be extracted directly from the most significant bits of the first byte of compressed data. 
Thus, a receiver can determine if an Opus frame contains any active SILK frames without the overhead of using the range decoder. </t> </section> <section anchor="silk_lbrr_flags" title="Per-Frame LBRR Flags"> <t> For Opus frames longer than 20 ms, a set of LBRR flags is decoded for each channel that has its LBRR flag set. Each set contains one flag per 20 ms SILK frame. 40 ms Opus frames use the 2-frame LBRR flag PDF from <xref target="silk_lbrr_flag_pdfs"/>, and 60 ms Opus frames use the 3-frame LBRR flag PDF. For each channel, the resulting 2- or 3-bit integer contains the corresponding LBRR flag for each frame, packed in order from the LSB to the MSB. </t> <texttable anchor="silk_lbrr_flag_pdfs" title="LBRR Flag PDFs"> <ttcol>Frame Size</ttcol> <ttcol>PDF</ttcol> <c>40 ms</c> <c>{0, 53, 53, 150}/256</c> <c>60 ms</c> <c>{0, 41, 20, 29, 41, 15, 28, 82}/256</c> </texttable> <t> A 10 or 20 ms Opus frame does not contain any per-frame LBRR flags, as there may be at most one LBRR frame per channel. The global LBRR flag in the header bits (see <xref target="silk_header_bits"/>) is already sufficient to indicate the presence of that single LBRR frame. </t> </section> <section anchor="silk_lbrr_frames" title="LBRR Frames"> <t> The LBRR frames, if present, contain an encoded representation of the signal immediately prior to the current Opus frame as if it were encoded with the current mode, frame size, audio bandwidth, and channel count, even if those differ from the prior Opus frame. When one of these parameters changes from one Opus frame to the next, this implies that the LBRR frames of the current Opus frame may not be simple drop-in replacements for the contents of the previous Opus frame. </t> <t> For example, when switching from 20 ms to 60 ms, the 60 ms Opus frame may contain LBRR frames covering up to three prior 20 ms Opus frames, even if those frames already contained LBRR frames covering some of the same time periods. When switching from 20 ms to 10 ms, the 10 ms Opus frame can contain an LBRR frame covering at most half the prior 20 ms Opus frame, potentially leaving a hole that needs to be concealed from even a single packet loss (see <xref target="Packet Loss Concealment"/>). When switching from mono to stereo, the LBRR frames in the first stereo Opus frame MAY contain a non-trivial side channel. </t> <t> In order to properly produce LBRR frames under all conditions, an encoder might need to buffer up to 60 ms of audio and re-encode it during these transitions. However, the reference implementation opts to disable LBRR frames at the transition point for simplicity. Since transitions are relatively infrequent in normal usage, this does not have a significant impact on packet loss robustness. </t> <t> The LBRR frames immediately follow the LBRR flags, prior to any regular SILK frames. <xref target="silk_frame"/> describes their exact contents. LBRR frames do not include their own separate VAD flags. LBRR frames are only meant to be transmitted for active speech, thus all LBRR frames are treated as active. </t> <t> In a stereo Opus frame longer than 20 ms, although the per-frame LBRR flags for the mid channel are coded as a unit before the per-frame LBRR flags for the side channel, the LBRR frames themselves are interleaved. The decoder parses an LBRR frame for the mid channel of a given 20 ms interval (if present) and then immediately parses the corresponding LBRR frame for the side channel (if present), before proceeding to the next 20 ms interval. 
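The following non-normative sketch illustrates this ordering for a stereo Opus frame longer than 20 ms; silk_decoder and silk_decode_lbrr_frame() are hypothetical stand-ins for the decoder state and the SILK frame parsing of <xref target="silk_frame"/>. <figure align="center"> <artwork align="center"><![CDATA[
/* Non-normative sketch of stereo LBRR parsing.  The per-frame LBRR flag
   values decoded for each channel (packed LSB to MSB, as described in
   the previous section) are expanded first, and then the mid and side
   LBRR frames for each 20 ms interval are parsed in interleaved order.
   silk_decoder and silk_decode_lbrr_frame() are hypothetical. */
typedef struct silk_decoder silk_decoder;
void silk_decode_lbrr_frame(silk_decoder *dec, int channel, int interval);

static void parse_stereo_lbrr(silk_decoder *dec, int nb_frames,
                              int mid_lbrr_sym, int side_lbrr_sym)
{
   int flags[2][3];
   int ch, i;
   for (ch = 0; ch < 2; ch++) {
      int sym = (ch == 0) ? mid_lbrr_sym : side_lbrr_sym;
      for (i = 0; i < nb_frames; i++)
         flags[ch][i] = (sym >> i) & 1;                 /* LSB first */
   }
   for (i = 0; i < nb_frames; i++) {
      if (flags[0][i]) silk_decode_lbrr_frame(dec, 0, i);  /* mid  */
      if (flags[1][i]) silk_decode_lbrr_frame(dec, 1, i);  /* side */
   }
}
]]></artwork> </figure>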
</t> </section> <section anchor="silk_regular_frames" title="Regular SILK Frames"> <t> The regular SILK frame(s) follow the LBRR frames (if any). <xref target="silk_frame"/> describes their contents, as well. Unlike the LBRR frames, a regular SILK frame is coded for each time interval in an Opus frame, even if the corresponding VAD flags are unset. For stereo Opus frames longer than 20 ms, the regular mid and side SILK frames for each 20 ms interval are interleaved, just as with the LBRR frames. The side frame may be skipped by coding an appropriate flag, as detailed in <xref target="silk_mid_only_flag"/>. </t> </section> <section anchor="silk_frame" title="SILK Frame Contents"> <t> Each SILK frame includes a set of side information that encodes <list style="symbols"> <t>The frame type and quantization type (<xref target="silk_frame_type"/>),</t> <t>Quantization gains (<xref target="silk_gains"/>),</t> <t>Short-term prediction filter coefficients (<xref target="silk_nlsfs"/>),</t> <t>A Line Spectral Frequencies (LSF) interpolation weight (<xref target="silk_nlsf_interpolation"/>),</t> <t> Long-term prediction filter lags and gains (<xref target="silk_ltp_params"/>), and </t> <t>A linear congruential generator (LCG) seed (<xref target="silk_seed"/>).</t> </list> The quantized excitation signal (see <xref target="silk_excitation"/>) follows these at the end of the frame. <xref target="silk_frame_symbols"/> details the overall organization of a SILK frame. </t> <texttable anchor="silk_frame_symbols" title="Order of the symbols in an individual SILK frame"> <ttcol align="center">Symbol(s)</ttcol> <ttcol align="center">PDF(s)</ttcol> <ttcol align="center">Condition</ttcol> <c>Stereo Prediction Weights</c> <c><xref target="silk_stereo_pred_pdfs"/></c> <c><xref target="silk_stereo_pred"/></c> <c>Mid-only Flag</c> <c><xref target="silk_mid_only_pdf"/></c> <c><xref target="silk_mid_only_flag"/></c> <c>Frame Type</c> <c><xref target="silk_frame_type"/></c> <c/> <c>Subframe Gains</c> <c><xref target="silk_gains"/></c> <c/> <c>Normalized LSF Stage-1 Index</c> <c><xref target="silk_nlsf_stage1_pdfs"/></c> <c/> <c>Normalized LSF Stage-2 Residual</c> <c><xref target="silk_nlsf_stage2"/></c> <c/> <c>Normalized LSF Interpolation Weight</c> <c><xref target="silk_nlsf_interp_pdf"/></c> <c>20 ms frame</c> <c>Primary Pitch Lag</c> <c><xref target="silk_ltp_lags"/></c> <c>Voiced frame</c> <c>Subframe Pitch Contour</c> <c><xref target="silk_pitch_contour_pdfs"/></c> <c>Voiced frame</c> <c>Periodicity Index</c> <c><xref target="silk_perindex_pdf"/></c> <c>Voiced frame</c> <c>LTP Filter</c> <c><xref target="silk_ltp_filter_pdfs"/></c> <c>Voiced frame</c> <c>LTP Scaling</c> <c><xref target="silk_ltp_scaling_pdf"/></c> <c><xref target="silk_ltp_scaling"/></c> <c>LCG Seed</c> <c><xref target="silk_seed_pdf"/></c> <c/> <c>Excitation Rate Level</c> <c><xref target="silk_rate_level_pdfs"/></c> <c/> <c>Excitation Pulse Counts</c> <c><xref target="silk_pulse_count_pdfs"/></c> <c/> <c>Excitation Pulse Locations</c> <c><xref target="silk_pulse_locations"/></c> <c>Non-zero pulse count</c> <c>Excitation LSBs</c> <c><xref target="silk_shell_lsb_pdf"/></c> <c><xref target="silk_pulse_counts"/></c> <c>Excitation Signs</c> <c><xref target="silk_sign_pdfs"/></c> <c/> </texttable> <section anchor="silk_stereo_pred" toc="include" title="Stereo Prediction Weights"> <t> A SILK frame corresponding to the mid channel of a stereo Opus frame begins with a pair of side channel prediction weights, designed such that zeros indicate normal 
mid-side coupling. Since these weights can change on every frame, the first portion of each frame linearly interpolates between the previous weights and the current ones, using zeros for the previous weights if none are available. These prediction weights are never included in a mono Opus frame, and the previous weights are reset to zeros on any transition from mono to stereo. They are also not included in an LBRR frame for the side channel, even if the LBRR flags indicate the corresponding mid channel was not coded. In that case, the previous weights are used, again substituting in zeros if no previous weights are available since the last decoder reset (see <xref target="decoder-reset"/>). </t> <t> To summarize, these weights are coded if and only if <list style="symbols"> <t>This is a stereo Opus frame (<xref target="toc_byte"/>), and</t> <t>The current SILK frame corresponds to the mid channel.</t> </list> </t> <t> The prediction weights are coded in three separate pieces, which are decoded by silk_stereo_decode_pred() (decode_stereo_pred.c). The first piece jointly codes the high-order part of a table index for both weights. The second piece codes the low-order part of each table index. The third piece codes an offset used to linearly interpolate between table indices. The details are as follows. </t> <t> Let n be an index decoded with the 25-element stage-1 PDF in <xref target="silk_stereo_pred_pdfs"/>. Then let i0 and i1 be indices decoded with the stage-2 and stage-3 PDFs in <xref target="silk_stereo_pred_pdfs"/>, respectively, and let i2 and i3 be two more indices decoded with the stage-2 and stage-3 PDFs, all in that order. </t> <texttable anchor="silk_stereo_pred_pdfs" title="Stereo Weight PDFs"> <ttcol align="left">Stage</ttcol> <ttcol align="left">PDF</ttcol> <c>Stage 1</c> <c>{7, 2, 1, 1, 1, 10, 24, 8, 1, 1, 3, 23, 92, 23, 3, 1, 1, 8, 24, 10, 1, 1, 1, 2, 7}/256</c> <c>Stage 2</c> <c>{85, 86, 85}/256</c> <c>Stage 3</c> <c>{51, 51, 52, 51, 51}/256</c> </texttable> <t> Then use n, i0, and i2 to form two table indices, wi0 and wi1, according to <figure align="center"> <artwork align="center"><![CDATA[
wi0 = i0 + 3*(n/5)
wi1 = i2 + 3*(n%5)
]]></artwork> </figure> where the division is integer division. The range of these indices is 0 to 14, inclusive. Let w_Q13[i] be the i'th weight from <xref target="silk_stereo_weights_table"/>. Then the two prediction weights, w0_Q13 and w1_Q13, are <figure align="center"> <artwork align="center"><![CDATA[
w1_Q13 = w_Q13[wi1]
         + (((w_Q13[wi1+1] - w_Q13[wi1])*6554) >> 16)*(2*i3 + 1)

w0_Q13 = w_Q13[wi0]
         + (((w_Q13[wi0+1] - w_Q13[wi0])*6554) >> 16)*(2*i1 + 1)
         - w1_Q13
]]></artwork> </figure> N.b., w1_Q13 is computed first here, because w0_Q13 depends on it. The constant 6554 is approximately 0.1 in Q16. Although wi0 and wi1 only have 15 possible values, <xref target="silk_stereo_weights_table"/> contains 16 entries to allow interpolation between entry wi0 and (wi0 + 1) (and likewise for wi1).
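The following non-normative C sketch performs the same computation, assuming n, i0, i1, i2, and i3 have already been decoded and that w_Q13[] holds the 16 entries of <xref target="silk_stereo_weights_table"/>. <figure align="center"> <artwork align="center"><![CDATA[
/* Non-normative sketch of the stereo weight reconstruction above. */
#include <stdint.h>

static void stereo_weights_sketch(const int16_t w_Q13[16],
                                  int n, int i0, int i1, int i2, int i3,
                                  int32_t *w0_Q13, int32_t *w1_Q13)
{
   int wi0 = i0 + 3*(n/5);   /* integer division */
   int wi1 = i2 + 3*(n%5);
   int32_t step0_Q13 = ((w_Q13[wi0+1] - w_Q13[wi0])*6554) >> 16;
   int32_t step1_Q13 = ((w_Q13[wi1+1] - w_Q13[wi1])*6554) >> 16;
   /* w1_Q13 is computed first because w0_Q13 depends on it. */
   *w1_Q13 = w_Q13[wi1] + step1_Q13*(2*i3 + 1);
   *w0_Q13 = w_Q13[wi0] + step0_Q13*(2*i1 + 1) - *w1_Q13;
}
]]></artwork> </figure>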
</t> <texttable anchor="silk_stereo_weights_table" title="Stereo Weight Table"> <ttcol align="left">Index</ttcol> <ttcol align="right">Weight (Q13)</ttcol> <c>0</c> <c>-13732</c> <c>1</c> <c>-10050</c> <c>2</c> <c>-8266</c> <c>3</c> <c>-7526</c> <c>4</c> <c>-6500</c> <c>5</c> <c>-5000</c> <c>6</c> <c>-2950</c> <c>7</c> <c>-820</c> <c>8</c> <c>820</c> <c>9</c> <c>2950</c> <c>10</c> <c>5000</c> <c>11</c> <c>6500</c> <c>12</c> <c>7526</c> <c>13</c> <c>8266</c> <c>14</c> <c>10050</c> <c>15</c> <c>13732</c> </texttable> </section> <section anchor="silk_mid_only_flag" toc="include" title="Mid-only Flag"> <t> A flag appears after the stereo prediction weights that indicates if only the mid channel is coded for this time interval. It appears only when <list style="symbols"> <t>This is a stereo Opus frame (see <xref target="toc_byte"/>),</t> <t>The current SILK frame corresponds to the mid channel, and</t> <t>Either <list style="symbols"> <t>This is a regular SILK frame where the VAD flags (see <xref target="silk_header_bits"/>) indicate that the corresponding side channel is not active.</t> <t> This is an LBRR frame where the LBRR flags (see <xref target="silk_header_bits"/> and <xref target="silk_lbrr_flags"/>) indicate that the corresponding side channel is not coded. </t> </list> </t> </list> It is omitted when there are no stereo weights, for all of the same reasons. It is also omitted for a regular SILK frame when the VAD flag of the corresponding side channel frame is set (indicating it is active). The side channel must be coded in this case, making the mid-only flag redundant. It is also omitted for an LBRR frame when the corresponding LBRR flags indicate the side channel is coded. </t> <t> When the flag is present, the decoder reads a single value using the PDF in <xref target="silk_mid_only_pdf"/>, as implemented in silk_stereo_decode_mid_only() (decode_stereo_pred.c). If the flag is set, then there is no corresponding SILK frame for the side channel, the entire decoding process for the side channel is skipped, and zeros are fed to the stereo unmixing process (see <xref target="silk_stereo_unmixing"/>) instead. As stated above, LBRR frames still include this flag when the LBRR flag indicates that the side channel is not coded. In that case, if this flag is zero (indicating that there should be a side channel), then Packet Loss Concealment (PLC, see <xref target="Packet Loss Concealment"/>) SHOULD be invoked to recover a side channel signal. Otherwise, the stereo image will collapse. </t> <texttable anchor="silk_mid_only_pdf" title="Mid-only Flag PDF"> <ttcol align="left">PDF</ttcol> <c>{192, 64}/256</c> </texttable> </section> <section anchor="silk_frame_type" toc="include" title="Frame Type"> <t> Each SILK frame contains a single "frame type" symbol that jointly codes the signal type and quantization offset type of the corresponding frame. If the current frame is a regular SILK frame whose VAD bit was not set (an "inactive" frame), then the frame type symbol takes on a value of either 0 or 1 and is decoded using the first PDF in <xref target="silk_frame_type_pdfs"/>. If the frame is an LBRR frame or a regular SILK frame whose VAD flag was set (an "active" frame), then the value of the symbol may range from 2 to 5, inclusive, and is decoded using the second PDF in <xref target="silk_frame_type_pdfs"/>. <xref target="silk_frame_type_table"/> translates between the value of the frame type symbol and the corresponding signal type and quantization offset type. 
</t> <texttable anchor="silk_frame_type_pdfs" title="Frame Type PDFs"> <ttcol>VAD Flag</ttcol> <ttcol>PDF</ttcol> <c>Inactive</c> <c>{26, 230, 0, 0, 0, 0}/256</c> <c>Active</c> <c>{0, 0, 24, 74, 148, 10}/256</c> </texttable> <texttable anchor="silk_frame_type_table" title="Signal Type and Quantization Offset Type from Frame Type"> <ttcol>Frame Type</ttcol> <ttcol>Signal Type</ttcol> <ttcol align="right">Quantization Offset Type</ttcol> <c>0</c> <c>Inactive</c> <c>Low</c> <c>1</c> <c>Inactive</c> <c>High</c> <c>2</c> <c>Unvoiced</c> <c>Low</c> <c>3</c> <c>Unvoiced</c> <c>High</c> <c>4</c> <c>Voiced</c> <c>Low</c> <c>5</c> <c>Voiced</c> <c>High</c> </texttable> </section> <section anchor="silk_gains" toc="include" title="Subframe Gains"> <t> A separate quantization gain is coded for each 5 ms subframe. These gains control the step size between quantization levels of the excitation signal and, therefore, the quality of the reconstruction. They are independent of and unrelated to the pitch contours coded for voiced frames. The quantization gains are themselves uniformly quantized to 6 bits on a log scale, giving them a resolution of approximately 1.369 dB and a range of approximately 1.94 dB to 88.21 dB. </t> <t> The subframe gains are either coded independently, or relative to the gain from the most recent coded subframe in the same channel. Independent coding is used if and only if <list style="symbols"> <t> This is the first subframe in the current SILK frame, and </t> <t>Either <list style="symbols"> <t> This is the first SILK frame of its type (LBRR or regular) for this channel in the current Opus frame, or </t> <t> The previous SILK frame of the same type (LBRR or regular) for this channel in the same Opus frame was not coded. </t> </list> </t> </list> </t> <t> In an independently coded subframe gain, the 3 most significant bits of the quantization gain are decoded using a PDF selected from <xref target="silk_independent_gain_msb_pdfs"/> based on the decoded signal type (see <xref target="silk_frame_type"/>). </t> <texttable anchor="silk_independent_gain_msb_pdfs" title="PDFs for Independent Quantization Gain MSB Coding"> <ttcol align="left">Signal Type</ttcol> <ttcol align="left">PDF</ttcol> <c>Inactive</c> <c>{32, 112, 68, 29, 12, 1, 1, 1}/256</c> <c>Unvoiced</c> <c>{2, 17, 45, 60, 62, 47, 19, 4}/256</c> <c>Voiced</c> <c>{1, 3, 26, 71, 94, 50, 9, 2}/256</c> </texttable> <t> The 3 least significant bits are decoded using a uniform PDF: </t> <texttable anchor="silk_independent_gain_lsb_pdf" title="PDF for Independent Quantization Gain LSB Coding"> <ttcol align="left">PDF</ttcol> <c>{32, 32, 32, 32, 32, 32, 32, 32}/256</c> </texttable> <t> These 6 bits are combined to form a value, gain_index, between 0 and 63. When the gain for the previous subframe is available, then the current gain is limited as follows: <figure align="center"> <artwork align="center"><![CDATA[ log_gain = max(gain_index, previous_log_gain - 16) . ]]></artwork> </figure> This may help some implementations limit the change in precision of their internal LTP history. The indices which this clamp applies to cannot simply be removed from the codebook, because previous_log_gain will not be available after packet loss. The clamping is skipped after a decoder reset, and in the side channel if the previous frame in the side channel was not coded, since there is no value for previous_log_gain available. It MAY also be skipped after packet loss. 
</t> <t> For subframes which do not have an independent gain (including the first subframe of frames not listed as using independent coding above), the quantization gain is coded relative to the gain from the previous subframe (in the same channel). The PDF in <xref target="silk_delta_gain_pdf"/> yields a delta_gain_index value between 0 and 40, inclusive. </t> <texttable anchor="silk_delta_gain_pdf" title="PDF for Delta Quantization Gain Coding"> <ttcol align="left">PDF</ttcol> <c>{6, 5, 11, 31, 132, 21, 8, 4, 3, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}/256</c> </texttable> <t> The following formula translates this index into a quantization gain for the current subframe using the gain from the previous subframe: <figure align="center"> <artwork align="center"><![CDATA[
log_gain = clamp(0, max(2*delta_gain_index - 16,
                        previous_log_gain + delta_gain_index - 4), 63) .
]]></artwork> </figure> </t> <t> silk_gains_dequant() (gain_quant.c) dequantizes log_gain for the k'th subframe and converts it into a linear Q16 scale factor via <figure align="center"> <artwork align="center"><![CDATA[
gain_Q16[k] = silk_log2lin((0x1D1C71*log_gain>>16) + 2090)
]]></artwork> </figure> </t> <t> The function silk_log2lin() (log2lin.c) computes an approximation of 2**(inLog_Q7/128.0), where inLog_Q7 is its Q7 input. Let i = inLog_Q7>>7 be the integer part of inLog_Q7 and f = inLog_Q7&127 be the fractional part. Then <figure align="center"> <artwork align="center"><![CDATA[
(1<<i) + ((-174*f*(128-f)>>16)+f)*((1<<i)>>7)
]]></artwork> </figure> yields the approximate exponential. The final Q16 gain values lie between 81920 and 1686110208, inclusive (representing scale factors of 1.25 to 25728, respectively). </t> </section> <section anchor="silk_nlsfs" toc="include" title="Normalized Line Spectral Frequency (LSF) and Linear Predictive Coding (LPC) Coefficients"> <t> A set of normalized Line Spectral Frequency (LSF) coefficients follow the quantization gains in the bitstream, and represent the Linear Predictive Coding (LPC) coefficients for the current SILK frame. Once decoded, the normalized LSFs form an increasing list of Q15 values between 0 and 1. These represent the interleaved zeros on the upper half of the unit circle (between 0 and pi, hence "normalized") in the standard decomposition <xref target="line-spectral-pairs"/> of the LPC filter into a symmetric part and an anti-symmetric part (P and Q in <xref target="silk_nlsf2lpc"/>). Because of non-linear effects in the decoding process, an implementation SHOULD match the fixed-point arithmetic described in this section exactly. An encoder SHOULD also use the same process. </t> <t> The normalized LSFs are coded using a two-stage vector quantizer (VQ) (<xref target="silk_nlsf_stage1"/> and <xref target="silk_nlsf_stage2"/>). NB and MB frames use an order-10 predictor, while WB frames use an order-16 predictor, and thus have different sets of tables. After reconstructing the normalized LSFs (<xref target="silk_nlsf_reconstruction"/>), the decoder runs them through a stabilization process (<xref target="silk_nlsf_stabilization"/>), interpolates them between frames (<xref target="silk_nlsf_interpolation"/>), converts them back into LPC coefficients (<xref target="silk_nlsf2lpc"/>), and then runs them through further processes to limit the range of the coefficients (<xref target="silk_lpc_range_limit"/>) and the gain of the filter (<xref target="silk_lpc_gain_limit"/>).
All of this is necessary to ensure the reconstruction process is stable. </t> <section anchor="silk_nlsf_stage1" title="Normalized LSF Stage 1 Decoding"> <t> The first VQ stage uses a 32-element codebook, coded with one of the PDFs in <xref target="silk_nlsf_stage1_pdfs"/>, depending on the audio bandwidth and the signal type of the current SILK frame. This yields a single index, I1, for the entire frame, which <list style="numbers"> <t>Indexes an element in a coarse codebook,</t> <t>Selects the PDFs for the second stage of the VQ, and</t> <t>Selects the prediction weights used to remove intra-frame redundancy from the second stage.</t> </list> The actual codebook elements are listed in <xref target="silk_nlsf_nbmb_codebook"/> and <xref target="silk_nlsf_wb_codebook"/>, but they are not needed until the last stages of reconstructing the LSF coefficients. </t> <texttable anchor="silk_nlsf_stage1_pdfs" title="PDFs for Normalized LSF Stage-1 Index Decoding"> <ttcol align="left">Audio Bandwidth</ttcol> <ttcol align="left">Signal Type</ttcol> <ttcol align="left">PDF</ttcol> <c>NB or MB</c> <c>Inactive or unvoiced</c> <c> {44, 34, 30, 19, 21, 12, 11, 3, 3, 2, 16, 2, 2, 1, 5, 2, 1, 3, 3, 1, 1, 2, 2, 2, 3, 1, 9, 9, 2, 7, 2, 1}/256 </c> <c>NB or MB</c> <c>Voiced</c> <c> {1, 10, 1, 8, 3, 8, 8, 14, 13, 14, 1, 14, 12, 13, 11, 11, 12, 11, 10, 10, 11, 8, 9, 8, 7, 8, 1, 1, 6, 1, 6, 5}/256 </c> <c>WB</c> <c>Inactive or unvoiced</c> <c> {31, 21, 3, 17, 1, 8, 17, 4, 1, 18, 16, 4, 2, 3, 1, 10, 1, 3, 16, 11, 16, 2, 2, 3, 2, 11, 1, 4, 9, 8, 7, 3}/256 </c> <c>WB</c> <c>Voiced</c> <c> {1, 4, 16, 5, 18, 11, 5, 14, 15, 1, 3, 12, 13, 14, 14, 6, 14, 12, 2, 6, 1, 12, 12, 11, 10, 3, 10, 5, 1, 1, 1, 3}/256 </c> </texttable> </section> <section anchor="silk_nlsf_stage2" title="Normalized LSF Stage 2 Decoding"> <t> A total of 16 PDFs are available for the LSF residual in the second stage: the 8 (a...h) for NB and MB frames given in <xref target="silk_nlsf_stage2_nbmb_pdfs"/>, and the 8 (i...p) for WB frames given in <xref target="silk_nlsf_stage2_wb_pdfs"/>. Which PDF is used for which coefficient is driven by the index, I1, decoded in the first stage. <xref target="silk_nlsf_nbmb_stage2_cb_sel"/> lists the letter of the corresponding PDF for each normalized LSF coefficient for NB and MB, and <xref target="silk_nlsf_wb_stage2_cb_sel"/> lists the same information for WB. 
</t> <texttable anchor="silk_nlsf_stage2_nbmb_pdfs" title="PDFs for NB/MB Normalized LSF Stage-2 Index Decoding"> <ttcol align="left">Codebook</ttcol> <ttcol align="left">PDF</ttcol> <c>a</c> <c>{1, 1, 1, 15, 224, 11, 1, 1, 1}/256</c> <c>b</c> <c>{1, 1, 2, 34, 183, 32, 1, 1, 1}/256</c> <c>c</c> <c>{1, 1, 4, 42, 149, 55, 2, 1, 1}/256</c> <c>d</c> <c>{1, 1, 8, 52, 123, 61, 8, 1, 1}/256</c> <c>e</c> <c>{1, 3, 16, 53, 101, 74, 6, 1, 1}/256</c> <c>f</c> <c>{1, 3, 17, 55, 90, 73, 15, 1, 1}/256</c> <c>g</c> <c>{1, 7, 24, 53, 74, 67, 26, 3, 1}/256</c> <c>h</c> <c>{1, 1, 18, 63, 78, 58, 30, 6, 1}/256</c> </texttable> <texttable anchor="silk_nlsf_stage2_wb_pdfs" title="PDFs for WB Normalized LSF Stage-2 Index Decoding"> <ttcol align="left">Codebook</ttcol> <ttcol align="left">PDF</ttcol> <c>i</c> <c>{1, 1, 1, 9, 232, 9, 1, 1, 1}/256</c> <c>j</c> <c>{1, 1, 2, 28, 186, 35, 1, 1, 1}/256</c> <c>k</c> <c>{1, 1, 3, 42, 152, 53, 2, 1, 1}/256</c> <c>l</c> <c>{1, 1, 10, 49, 126, 65, 2, 1, 1}/256</c> <c>m</c> <c>{1, 4, 19, 48, 100, 77, 5, 1, 1}/256</c> <c>n</c> <c>{1, 1, 14, 54, 100, 72, 12, 1, 1}/256</c> <c>o</c> <c>{1, 1, 15, 61, 87, 61, 25, 4, 1}/256</c> <c>p</c> <c>{1, 7, 21, 50, 77, 81, 17, 1, 1}/256</c> </texttable> <texttable anchor="silk_nlsf_nbmb_stage2_cb_sel" title="Codebook Selection for NB/MB Normalized LSF Stage-2 Index Decoding"> <ttcol>I1</ttcol> <ttcol>Coefficient</ttcol> <c/> <c><spanx style="vbare">0 1 2 3 4 5 6 7 8 9</spanx></c> <c> 0</c> <c><spanx style="vbare">a a a a a a a a a a</spanx></c> <c> 1</c> <c><spanx style="vbare">b d b c c b c b b b</spanx></c> <c> 2</c> <c><spanx style="vbare">c b b b b b b b b b</spanx></c> <c> 3</c> <c><spanx style="vbare">b c c c c b c b b b</spanx></c> <c> 4</c> <c><spanx style="vbare">c d d d d c c c c c</spanx></c> <c> 5</c> <c><spanx style="vbare">a f d d c c c c b b</spanx></c> <c> g</c> <c><spanx style="vbare">a c c c c c c c c b</spanx></c> <c> 7</c> <c><spanx style="vbare">c d g e e e f e f f</spanx></c> <c> 8</c> <c><spanx style="vbare">c e f f e f e g e e</spanx></c> <c> 9</c> <c><spanx style="vbare">c e e h e f e f f e</spanx></c> <c>10</c> <c><spanx style="vbare">e d d d c d c c c c</spanx></c> <c>11</c> <c><spanx style="vbare">b f f g e f e f f f</spanx></c> <c>12</c> <c><spanx style="vbare">c h e g f f f f f f</spanx></c> <c>13</c> <c><spanx style="vbare">c h f f f f f g f e</spanx></c> <c>14</c> <c><spanx style="vbare">d d f e e f e f e e</spanx></c> <c>15</c> <c><spanx style="vbare">c d d f f e e e e e</spanx></c> <c>16</c> <c><spanx style="vbare">c e e g e f e f f f</spanx></c> <c>17</c> <c><spanx style="vbare">c f e g f f f e f e</spanx></c> <c>18</c> <c><spanx style="vbare">c h e f e f e f f f</spanx></c> <c>19</c> <c><spanx style="vbare">c f e g h g f g f e</spanx></c> <c>20</c> <c><spanx style="vbare">d g h e g f f g e f</spanx></c> <c>21</c> <c><spanx style="vbare">c h g e e e f e f f</spanx></c> <c>22</c> <c><spanx style="vbare">e f f e g g f g f e</spanx></c> <c>23</c> <c><spanx style="vbare">c f f g f g e g e e</spanx></c> <c>24</c> <c><spanx style="vbare">e f f f d h e f f e</spanx></c> <c>25</c> <c><spanx style="vbare">c d e f f g e f f e</spanx></c> <c>26</c> <c><spanx style="vbare">c d c d d e c d d d</spanx></c> <c>27</c> <c><spanx style="vbare">b b c c c c c d c c</spanx></c> <c>28</c> <c><spanx style="vbare">e f f g g g f g e f</spanx></c> <c>29</c> <c><spanx style="vbare">d f f e e e e d d c</spanx></c> <c>30</c> <c><spanx style="vbare">c f d h f f e e f e</spanx></c> <c>31</c> <c><spanx style="vbare">e e f e f g f g f 
e</spanx></c> </texttable> <texttable anchor="silk_nlsf_wb_stage2_cb_sel" title="Codebook Selection for WB Normalized LSF Stage-2 Index Decoding"> <ttcol>I1</ttcol> <ttcol>Coefficient</ttcol> <c/> <c><spanx style="vbare">0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15</spanx></c> <c> 0</c> <c><spanx style="vbare">i i i i i i i i i i i i i i i i</spanx></c> <c> 1</c> <c><spanx style="vbare">k l l l l l k k k k k j j j i l</spanx></c> <c> 2</c> <c><spanx style="vbare">k n n l p m m n k n m n n m l l</spanx></c> <c> 3</c> <c><spanx style="vbare">i k j k k j j j j j i i i i i j</spanx></c> <c> 4</c> <c><spanx style="vbare">i o n m o m p n m m m n n m m l</spanx></c> <c> 5</c> <c><spanx style="vbare">i l n n m l l n l l l l l l k m</spanx></c> <c> 6</c> <c><spanx style="vbare">i i i i i i i i i i i i i i i i</spanx></c> <c> 7</c> <c><spanx style="vbare">i k o l p k n l m n n m l l k l</spanx></c> <c> 8</c> <c><spanx style="vbare">i o k o o m n m o n m m n l l l</spanx></c> <c> 9</c> <c><spanx style="vbare">k j i i i i i i i i i i i i i i</spanx></c> <c>10</c> <c><spanx style="vbare">i j i i i i i i i i i i i i i j</spanx></c> <c>11</c> <c><spanx style="vbare">k k l m n l l l l l l l k k j l</spanx></c> <c>12</c> <c><spanx style="vbare">k k l l m l l l l l l l l k j l</spanx></c> <c>13</c> <c><spanx style="vbare">l m m m o m m n l n m m n m l m</spanx></c> <c>14</c> <c><spanx style="vbare">i o m n m p n k o n p m m l n l</spanx></c> <c>15</c> <c><spanx style="vbare">i j i j j j j j j j i i i i j i</spanx></c> <c>16</c> <c><spanx style="vbare">j o n p n m n l m n m m m l l m</spanx></c> <c>17</c> <c><spanx style="vbare">j l l m m l l n k l l n n n l m</spanx></c> <c>18</c> <c><spanx style="vbare">k l l k k k l k j k j k j j j m</spanx></c> <c>19</c> <c><spanx style="vbare">i k l n l l k k k j j i i i i i</spanx></c> <c>20</c> <c><spanx style="vbare">l m l n l l k k j j j j j k k m</spanx></c> <c>21</c> <c><spanx style="vbare">k o l p p m n m n l n l l k l l</spanx></c> <c>22</c> <c><spanx style="vbare">k l n o o l n l m m l l l l k m</spanx></c> <c>23</c> <c><spanx style="vbare">j l l m m m m l n n n l j j j j</spanx></c> <c>24</c> <c><spanx style="vbare">k n l o o m p m m n l m m l l l</spanx></c> <c>25</c> <c><spanx style="vbare">i o j j i i i i i i i i i i i i</spanx></c> <c>26</c> <c><spanx style="vbare">i o o l n k n n l m m p p m m m</spanx></c> <c>27</c> <c><spanx style="vbare">l l p l n m l l l k k l l l k l</spanx></c> <c>28</c> <c><spanx style="vbare">i i j i i i k j k j j k k k j j</spanx></c> <c>29</c> <c><spanx style="vbare">i l k n l l k l k j i i j i i j</spanx></c> <c>30</c> <c><spanx style="vbare">l n n m p n l l k l k k j i j i</spanx></c> <c>31</c> <c><spanx style="vbare">k l n l m l l l k j k o m i i i</spanx></c> </texttable> <t> Decoding the second stage residual proceeds as follows. For each coefficient, the decoder reads a symbol using the PDF corresponding to I1 from either <xref target="silk_nlsf_nbmb_stage2_cb_sel"/> or <xref target="silk_nlsf_wb_stage2_cb_sel"/>, and subtracts 4 from the result to give an index in the range -4 to 4, inclusive. If the index is either -4 or 4, it reads a second symbol using the PDF in <xref target="silk_nlsf_ext_pdf"/>, and adds the value of this second symbol to the index, using the same sign. This gives the index, I2[k], a total range of -10 to 10, inclusive. 
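The following non-normative sketch implements this loop; stage2_icdf[] is assumed to hold, for each coefficient, the iCDF form of the PDF selected by I1, and ext_icdf the iCDF form of the extension PDF in <xref target="silk_nlsf_ext_pdf"/>. <figure align="center"> <artwork align="center"><![CDATA[
/* Non-normative sketch of stage-2 index decoding with the extension
   symbol.  ec_dec_icdf() is assumed to match the reference
   implementation's signature; the tables are passed in iCDF form. */
static void decode_stage2_indices(ec_dec *dec, int d_LPC,
                                  const unsigned char *const stage2_icdf[],
                                  const unsigned char *ext_icdf,
                                  int I2[])
{
   int k;
   for (k = 0; k < d_LPC; k++) {
      I2[k] = (int)ec_dec_icdf(dec, stage2_icdf[k], 8) - 4;
      if (I2[k] == -4)
         I2[k] -= (int)ec_dec_icdf(dec, ext_icdf, 8);   /* extend downward */
      else if (I2[k] == 4)
         I2[k] += (int)ec_dec_icdf(dec, ext_icdf, 8);   /* extend upward   */
   }
}
]]></artwork> </figure>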
</t> <texttable anchor="silk_nlsf_ext_pdf" title="PDF for Normalized LSF Index Extension Decoding"> <ttcol align="left">PDF</ttcol> <c>{156, 60, 24, 9, 4, 2, 1}/256</c> </texttable> <t> The decoded indices from both stages are translated back into normalized LSF coefficients in silk_NLSF_decode() (NLSF_decode.c). The stage-2 indices represent residuals after both the first stage of the VQ and a separate backwards-prediction step. The backwards prediction process in the encoder subtracts a prediction from each residual formed by a multiple of the coefficient that follows it. The decoder must undo this process. <xref target="silk_nlsf_pred_weights"/> contains lists of prediction weights for each coefficient. There are two lists for NB and MB, and another two lists for WB, giving two possible prediction weights for each coefficient. </t> <texttable anchor="silk_nlsf_pred_weights" title="Prediction Weights for Normalized LSF Decoding"> <ttcol align="left">Coefficient</ttcol> <ttcol align="right">A</ttcol> <ttcol align="right">B</ttcol> <ttcol align="right">C</ttcol> <ttcol align="right">D</ttcol> <c>0</c> <c>179</c> <c>116</c> <c>175</c> <c>68</c> <c>1</c> <c>138</c> <c>67</c> <c>148</c> <c>62</c> <c>2</c> <c>140</c> <c>82</c> <c>160</c> <c>66</c> <c>3</c> <c>148</c> <c>59</c> <c>176</c> <c>60</c> <c>4</c> <c>151</c> <c>92</c> <c>178</c> <c>72</c> <c>5</c> <c>149</c> <c>72</c> <c>173</c> <c>117</c> <c>6</c> <c>153</c> <c>100</c> <c>174</c> <c>85</c> <c>7</c> <c>151</c> <c>89</c> <c>164</c> <c>90</c> <c>8</c> <c>163</c> <c>92</c> <c>177</c> <c>118</c> <c>9</c> <c/> <c/> <c>174</c> <c>136</c> <c>10</c> <c/> <c/> <c>196</c> <c>151</c> <c>11</c> <c/> <c/> <c>182</c> <c>142</c> <c>12</c> <c/> <c/> <c>198</c> <c>160</c> <c>13</c> <c/> <c/> <c>192</c> <c>142</c> <c>14</c> <c/> <c/> <c>182</c> <c>155</c> </texttable> <t> The prediction is undone using the procedure implemented in silk_NLSF_residual_dequant() (NLSF_decode.c), which is as follows. Each coefficient selects its prediction weight from one of the two lists based on the stage-1 index, I1. <xref target="silk_nlsf_nbmb_weight_sel"/> gives the selections for each coefficient for NB and MB, and <xref target="silk_nlsf_wb_weight_sel"/> gives the selections for WB. Let d_LPC be the order of the codebook, i.e., 10 for NB and MB, and 16 for WB, and let pred_Q8[k] be the weight for the k'th coefficient selected by this process for 0 <= k < d_LPC-1. Then, the stage-2 residual for each coefficient is computed via <figure align="center"> <artwork align="center"><![CDATA[ res_Q10[k] = (k+1 < d_LPC ? (res_Q10[k+1]*pred_Q8[k])>>8 : 0) + ((((I2[k]<<10) - sign(I2[k])*102)*qstep)>>16) , ]]></artwork> </figure> where qstep is the Q16 quantization step size, which is 11796 for NB and MB and 9830 for WB (representing step sizes of approximately 0.18 and 0.15, respectively). 
</t> <texttable anchor="silk_nlsf_nbmb_weight_sel" title="Prediction Weight Selection for NB/MB Normalized LSF Decoding"> <ttcol>I1</ttcol> <ttcol>Coefficient</ttcol> <c/> <c><spanx style="vbare">0 1 2 3 4 5 6 7 8</spanx></c> <c> 0</c> <c><spanx style="vbare">A B A A A A A A A</spanx></c> <c> 1</c> <c><spanx style="vbare">B A A A A A A A A</spanx></c> <c> 2</c> <c><spanx style="vbare">A A A A A A A A A</spanx></c> <c> 3</c> <c><spanx style="vbare">B B B A A A A B A</spanx></c> <c> 4</c> <c><spanx style="vbare">A B A A A A A A A</spanx></c> <c> 5</c> <c><spanx style="vbare">A B A A A A A A A</spanx></c> <c> 6</c> <c><spanx style="vbare">B A B B A A A B A</spanx></c> <c> 7</c> <c><spanx style="vbare">A B B A A B B A A</spanx></c> <c> 8</c> <c><spanx style="vbare">A A B B A B A B B</spanx></c> <c> 9</c> <c><spanx style="vbare">A A B B A A B B B</spanx></c> <c>10</c> <c><spanx style="vbare">A A A A A A A A A</spanx></c> <c>11</c> <c><spanx style="vbare">A B A B B B B B A</spanx></c> <c>12</c> <c><spanx style="vbare">A B A B B B B B A</spanx></c> <c>13</c> <c><spanx style="vbare">A B B B B B B B A</spanx></c> <c>14</c> <c><spanx style="vbare">B A B B A B B B B</spanx></c> <c>15</c> <c><spanx style="vbare">A B B B B B A B A</spanx></c> <c>16</c> <c><spanx style="vbare">A A B B A B A B A</spanx></c> <c>17</c> <c><spanx style="vbare">A A B B B A B B B</spanx></c> <c>18</c> <c><spanx style="vbare">A B B A A B B B A</spanx></c> <c>19</c> <c><spanx style="vbare">A A A B B B A B A</spanx></c> <c>20</c> <c><spanx style="vbare">A B B A A B A B A</spanx></c> <c>21</c> <c><spanx style="vbare">A B B A A A B B A</spanx></c> <c>22</c> <c><spanx style="vbare">A A A A A B B B B</spanx></c> <c>23</c> <c><spanx style="vbare">A A B B A A A B B</spanx></c> <c>24</c> <c><spanx style="vbare">A A A B A B B B B</spanx></c> <c>25</c> <c><spanx style="vbare">A B B B B B B B A</spanx></c> <c>26</c> <c><spanx style="vbare">A A A A A A A A A</spanx></c> <c>27</c> <c><spanx style="vbare">A A A A A A A A A</spanx></c> <c>28</c> <c><spanx style="vbare">A A B A B B A B A</spanx></c> <c>29</c> <c><spanx style="vbare">B A A B A A A A A</spanx></c> <c>30</c> <c><spanx style="vbare">A A A B B A B A B</spanx></c> <c>31</c> <c><spanx style="vbare">B A B B A B B B B</spanx></c> </texttable> <texttable anchor="silk_nlsf_wb_weight_sel" title="Prediction Weight Selection for WB Normalized LSF Decoding"> <ttcol>I1</ttcol> <ttcol>Coefficient</ttcol> <c/> <c><spanx style="vbare">0 1 2 3 4 5 6 7 8 9 10 11 12 13 14</spanx></c> <c> 0</c> <c><spanx style="vbare">C C C C C C C C C C C C C C D</spanx></c> <c> 1</c> <c><spanx style="vbare">C C C C C C C C C C C C C C C</spanx></c> <c> 2</c> <c><spanx style="vbare">C C D C C D D D C D D D D C C</spanx></c> <c> 3</c> <c><spanx style="vbare">C C C C C C C C C C C C D C C</spanx></c> <c> 4</c> <c><spanx style="vbare">C D D C D C D D C D D D D D C</spanx></c> <c> 5</c> <c><spanx style="vbare">C C D C C C C C C C C C C C C</spanx></c> <c> 6</c> <c><spanx style="vbare">D C C C C C C C C C C D C D C</spanx></c> <c> 7</c> <c><spanx style="vbare">C D D C C C D C D D D C D C D</spanx></c> <c> 8</c> <c><spanx style="vbare">C D C D D C D C D C D D D D D</spanx></c> <c> 9</c> <c><spanx style="vbare">C C C C C C C C C C C C C C D</spanx></c> <c>10</c> <c><spanx style="vbare">C D C C C C C C C C C C C C C</spanx></c> <c>11</c> <c><spanx style="vbare">C C D C D D D D D D D C D C C</spanx></c> <c>12</c> <c><spanx style="vbare">C C D C C D C D C D C C D C C</spanx></c> <c>13</c> <c><spanx style="vbare">C C C C D D 
C D C D D D D C C</spanx></c> <c>14</c> <c><spanx style="vbare">C D C C C D D C D D D C D D D</spanx></c> <c>15</c> <c><spanx style="vbare">C C D D C C C C C C C C D D C</spanx></c> <c>16</c> <c><spanx style="vbare">C D D C D C D D D D D C D C C</spanx></c> <c>17</c> <c><spanx style="vbare">C C D C C C C D C C D D D C C</spanx></c> <c>18</c> <c><spanx style="vbare">C C C C C C C C C C C C C C D</spanx></c> <c>19</c> <c><spanx style="vbare">C C C C C C C C C C C C D C C</spanx></c> <c>20</c> <c><spanx style="vbare">C C C C C C C C C C C C C C C</spanx></c> <c>21</c> <c><spanx style="vbare">C D C D C D D C D C D C D D C</spanx></c> <c>22</c> <c><spanx style="vbare">C C D D D D C D D C C D D C C</spanx></c> <c>23</c> <c><spanx style="vbare">C D D C D C D C D C C C C D C</spanx></c> <c>24</c> <c><spanx style="vbare">C C C D D C D C D D D D D D D</spanx></c> <c>25</c> <c><spanx style="vbare">C C C C C C C C C C C C C C D</spanx></c> <c>26</c> <c><spanx style="vbare">C D D C C C D D C C D D D D D</spanx></c> <c>27</c> <c><spanx style="vbare">C C C C C D C D D D D C D D D</spanx></c> <c>28</c> <c><spanx style="vbare">C C C C C C C C C C C C C C D</spanx></c> <c>29</c> <c><spanx style="vbare">C C C C C C C C C C C C C C D</spanx></c> <c>30</c> <c><spanx style="vbare">D C C C C C C C C C C D C C C</spanx></c> <c>31</c> <c><spanx style="vbare">C C D C C D D D C C D C C D C</spanx></c> </texttable> </section> <section anchor="silk_nlsf_reconstruction" title="Reconstructing the Normalized LSF Coefficients"> <t> Once the stage-1 index I1 and the stage-2 residual res_Q10[] have been decoded, the final normalized LSF coefficients can be reconstructed. </t> <t> The spectral distortion introduced by the quantization of each LSF coefficient varies, so the stage-2 residual is weighted accordingly, using the low-complexity Inverse Harmonic Mean Weighting (IHMW) function proposed in <xref target="laroia-icassp"/>. The weights are derived directly from the stage-1 codebook vector. Let cb1_Q8[k] be the k'th entry of the stage-1 codebook vector from <xref target="silk_nlsf_nbmb_codebook"/> or <xref target="silk_nlsf_wb_codebook"/>. Then for 0 <= k < d_LPC the following expression computes the square of the weight as a Q18 value: <figure align="center"> <artwork align="center"> <![CDATA[ w2_Q18[k] = (1024/(cb1_Q8[k] - cb1_Q8[k-1]) + 1024/(cb1_Q8[k+1] - cb1_Q8[k])) << 16 , ]]> </artwork> </figure> where cb1_Q8[-1] = 0 and cb1_Q8[d_LPC] = 256, and the division is integer division. This is reduced to an unsquared, Q9 value using the following square-root approximation: <figure align="center"> <artwork align="center"><![CDATA[ i = ilog(w2_Q18[k]) f = (w2_Q18[k]>>(i-8)) & 127 y = ((i&1) ? 32768 : 46214) >> ((32-i)>>1) w_Q9[k] = y + ((213*f*y)>>16) ]]></artwork> </figure> The constant 46214 here is approximately the square root of 2 in Q15. The cb1_Q8[] vector completely determines these weights, and they may be tabulated and stored as 13-bit unsigned values (with a range of 1819 to 5227, inclusive) to avoid computing them when decoding. The reference implementation already requires code to compute these weights on unquantized coefficients in the encoder, in silk_NLSF_VQ_weights_laroia() (NLSF_VQ_weights_laroia.c) and its callers, so it reuses that code in the decoder instead of using a pre-computed table to reduce the amount of ROM required. 
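The following non-normative sketch computes these weights from a stage-1 codebook vector, writing out the ilog() helper explicitly. <figure align="center"> <artwork align="center"><![CDATA[
/* Non-normative sketch of the IHMW weight computation above: the squared
   Q18 weight followed by the Q9 square-root approximation. */
#include <stdint.h>

static int ilog32(uint32_t x)
{
   int ret = 0;
   while (x) { ret++; x >>= 1; }
   return ret;
}

static void nlsf_vq_weights(int d_LPC, const unsigned char cb1_Q8[],
                            int32_t w_Q9[])
{
   int k;
   for (k = 0; k < d_LPC; k++) {
      int prev = (k > 0) ? cb1_Q8[k-1] : 0;            /* cb1_Q8[-1] = 0      */
      int next = (k + 1 < d_LPC) ? cb1_Q8[k+1] : 256;  /* cb1_Q8[d_LPC] = 256 */
      int32_t w2_Q18 = (1024/(cb1_Q8[k] - prev)
                        + 1024/(next - cb1_Q8[k])) << 16;
      int i = ilog32((uint32_t)w2_Q18);
      int f = (int)((w2_Q18 >> (i - 8)) & 127);
      int32_t y = ((i & 1) ? 32768 : 46214) >> ((32 - i) >> 1);
      w_Q9[k] = y + ((213*f*y) >> 16);
   }
}
]]></artwork> </figure>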
</t> <texttable anchor="silk_nlsf_nbmb_codebook" title="NB/MB Normalized LSF Stage-1 Codebook Vectors"> <ttcol>I1</ttcol> <ttcol>Codebook (Q8)</ttcol> <c/> <c><spanx style="vbare"> 0 1 2 3 4 5 6 7 8 9</spanx></c> <c>0</c> <c><spanx style="vbare">12 35 60 83 108 132 157 180 206 228</spanx></c> <c>1</c> <c><spanx style="vbare">15 32 55 77 101 125 151 175 201 225</spanx></c> <c>2</c> <c><spanx style="vbare">19 42 66 89 114 137 162 184 209 230</spanx></c> <c>3</c> <c><spanx style="vbare">12 25 50 72 97 120 147 172 200 223</spanx></c> <c>4</c> <c><spanx style="vbare">26 44 69 90 114 135 159 180 205 225</spanx></c> <c>5</c> <c><spanx style="vbare">13 22 53 80 106 130 156 180 205 228</spanx></c> <c>6</c> <c><spanx style="vbare">15 25 44 64 90 115 142 168 196 222</spanx></c> <c>7</c> <c><spanx style="vbare">19 24 62 82 100 120 145 168 190 214</spanx></c> <c>8</c> <c><spanx style="vbare">22 31 50 79 103 120 151 170 203 227</spanx></c> <c>9</c> <c><spanx style="vbare">21 29 45 65 106 124 150 171 196 224</spanx></c> <c>10</c> <c><spanx style="vbare">30 49 75 97 121 142 165 186 209 229</spanx></c> <c>11</c> <c><spanx style="vbare">19 25 52 70 93 116 143 166 192 219</spanx></c> <c>12</c> <c><spanx style="vbare">26 34 62 75 97 118 145 167 194 217</spanx></c> <c>13</c> <c><spanx style="vbare">25 33 56 70 91 113 143 165 196 223</spanx></c> <c>14</c> <c><spanx style="vbare">21 34 51 72 97 117 145 171 196 222</spanx></c> <c>15</c> <c><spanx style="vbare">20 29 50 67 90 117 144 168 197 221</spanx></c> <c>16</c> <c><spanx style="vbare">22 31 48 66 95 117 146 168 196 222</spanx></c> <c>17</c> <c><spanx style="vbare">24 33 51 77 116 134 158 180 200 224</spanx></c> <c>18</c> <c><spanx style="vbare">21 28 70 87 106 124 149 170 194 217</spanx></c> <c>19</c> <c><spanx style="vbare">26 33 53 64 83 117 152 173 204 225</spanx></c> <c>20</c> <c><spanx style="vbare">27 34 65 95 108 129 155 174 210 225</spanx></c> <c>21</c> <c><spanx style="vbare">20 26 72 99 113 131 154 176 200 219</spanx></c> <c>22</c> <c><spanx style="vbare">34 43 61 78 93 114 155 177 205 229</spanx></c> <c>23</c> <c><spanx style="vbare">23 29 54 97 124 138 163 179 209 229</spanx></c> <c>24</c> <c><spanx style="vbare">30 38 56 89 118 129 158 178 200 231</spanx></c> <c>25</c> <c><spanx style="vbare">21 29 49 63 85 111 142 163 193 222</spanx></c> <c>26</c> <c><spanx style="vbare">27 48 77 103 133 158 179 196 215 232</spanx></c> <c>27</c> <c><spanx style="vbare">29 47 74 99 124 151 176 198 220 237</spanx></c> <c>28</c> <c><spanx style="vbare">33 42 61 76 93 121 155 174 207 225</spanx></c> <c>29</c> <c><spanx style="vbare">29 53 87 112 136 154 170 188 208 227</spanx></c> <c>30</c> <c><spanx style="vbare">24 30 52 84 131 150 166 186 203 229</spanx></c> <c>31</c> <c><spanx style="vbare">37 48 64 84 104 118 156 177 201 230</spanx></c> </texttable> <texttable anchor="silk_nlsf_wb_codebook" title="WB Normalized LSF Stage-1 Codebook Vectors"> <ttcol>I1</ttcol> <ttcol>Codebook (Q8)</ttcol> <c/> <c><spanx style="vbare"> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15</spanx></c> <c>0</c> <c><spanx style="vbare"> 7 23 38 54 69 85 100 116 131 147 162 178 193 208 223 239</spanx></c> <c>1</c> <c><spanx style="vbare">13 25 41 55 69 83 98 112 127 142 157 171 187 203 220 236</spanx></c> <c>2</c> <c><spanx style="vbare">15 21 34 51 61 78 92 106 126 136 152 167 185 205 225 240</spanx></c> <c>3</c> <c><spanx style="vbare">10 21 36 50 63 79 95 110 126 141 157 173 189 205 221 237</spanx></c> <c>4</c> <c><spanx style="vbare">17 20 37 51 59 78 89 107 123 134 150 164 184 205 224 
240</spanx></c> <c>5</c> <c><spanx style="vbare">10 15 32 51 67 81 96 112 129 142 158 173 189 204 220 236</spanx></c> <c>6</c> <c><spanx style="vbare"> 8 21 37 51 65 79 98 113 126 138 155 168 179 192 209 218</spanx></c> <c>7</c> <c><spanx style="vbare">12 15 34 55 63 78 87 108 118 131 148 167 185 203 219 236</spanx></c> <c>8</c> <c><spanx style="vbare">16 19 32 36 56 79 91 108 118 136 154 171 186 204 220 237</spanx></c> <c>9</c> <c><spanx style="vbare">11 28 43 58 74 89 105 120 135 150 165 180 196 211 226 241</spanx></c> <c>10</c> <c><spanx style="vbare"> 6 16 33 46 60 75 92 107 123 137 156 169 185 199 214 225</spanx></c> <c>11</c> <c><spanx style="vbare">11 19 30 44 57 74 89 105 121 135 152 169 186 202 218 234</spanx></c> <c>12</c> <c><spanx style="vbare">12 19 29 46 57 71 88 100 120 132 148 165 182 199 216 233</spanx></c> <c>13</c> <c><spanx style="vbare">17 23 35 46 56 77 92 106 123 134 152 167 185 204 222 237</spanx></c> <c>14</c> <c><spanx style="vbare">14 17 45 53 63 75 89 107 115 132 151 171 188 206 221 240</spanx></c> <c>15</c> <c><spanx style="vbare"> 9 16 29 40 56 71 88 103 119 137 154 171 189 205 222 237</spanx></c> <c>16</c> <c><spanx style="vbare">16 19 36 48 57 76 87 105 118 132 150 167 185 202 218 236</spanx></c> <c>17</c> <c><spanx style="vbare">12 17 29 54 71 81 94 104 126 136 149 164 182 201 221 237</spanx></c> <c>18</c> <c><spanx style="vbare">15 28 47 62 79 97 115 129 142 155 168 180 194 208 223 238</spanx></c> <c>19</c> <c><spanx style="vbare"> 8 14 30 45 62 78 94 111 127 143 159 175 192 207 223 239</spanx></c> <c>20</c> <c><spanx style="vbare">17 30 49 62 79 92 107 119 132 145 160 174 190 204 220 235</spanx></c> <c>21</c> <c><spanx style="vbare">14 19 36 45 61 76 91 108 121 138 154 172 189 205 222 238</spanx></c> <c>22</c> <c><spanx style="vbare">12 18 31 45 60 76 91 107 123 138 154 171 187 204 221 236</spanx></c> <c>23</c> <c><spanx style="vbare">13 17 31 43 53 70 83 103 114 131 149 167 185 203 220 237</spanx></c> <c>24</c> <c><spanx style="vbare">17 22 35 42 58 78 93 110 125 139 155 170 188 206 224 240</spanx></c> <c>25</c> <c><spanx style="vbare"> 8 15 34 50 67 83 99 115 131 146 162 178 193 209 224 239</spanx></c> <c>26</c> <c><spanx style="vbare">13 16 41 66 73 86 95 111 128 137 150 163 183 206 225 241</spanx></c> <c>27</c> <c><spanx style="vbare">17 25 37 52 63 75 92 102 119 132 144 160 175 191 212 231</spanx></c> <c>28</c> <c><spanx style="vbare">19 31 49 65 83 100 117 133 147 161 174 187 200 213 227 242</spanx></c> <c>29</c> <c><spanx style="vbare">18 31 52 68 88 103 117 126 138 149 163 177 192 207 223 239</spanx></c> <c>30</c> <c><spanx style="vbare">16 29 47 61 76 90 106 119 133 147 161 176 193 209 224 240</spanx></c> <c>31</c> <c><spanx style="vbare">15 21 35 50 61 73 86 97 110 119 129 141 175 198 218 237</spanx></c> </texttable> <t> Given the stage-1 codebook entry cb1_Q8[], the stage-2 residual res_Q10[], and their corresponding weights, w_Q9[], the reconstructed normalized LSF coefficients are <figure align="center"> <artwork align="center"><![CDATA[ NLSF_Q15[k] = clamp(0, (cb1_Q8[k]<<7) + (res_Q10[k]<<14)/w_Q9[k], 32767) , ]]></artwork> </figure> where the division is integer division. However, nothing in either the reconstruction process or the quantization process in the encoder thus far guarantees that the coefficients are monotonically increasing and separated well enough to ensure a stable filter <xref target="Kabal86"/>. When using the reference encoder, roughly 2% of frames violate this constraint. 
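A non-normative sketch of this reconstruction (prior to stabilization): <figure align="center"> <artwork align="center"><![CDATA[
/* Non-normative sketch of the reconstruction formula above.  The
   division is integer division, and the result is clamped to Q15. */
#include <stdint.h>

static void nlsf_reconstruct(int d_LPC, const unsigned char cb1_Q8[],
                             const int32_t res_Q10[], const int32_t w_Q9[],
                             int16_t NLSF_Q15[])
{
   int k;
   for (k = 0; k < d_LPC; k++) {
      int32_t v = ((int32_t)cb1_Q8[k] << 7)
                + (res_Q10[k]*16384)/w_Q9[k];   /* res_Q10[k]<<14 in the text */
      NLSF_Q15[k] = (int16_t)(v < 0 ? 0 : (v > 32767 ? 32767 : v));
   }
}
]]></artwork> </figure>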
The next section describes a stabilization procedure used to make these guarantees. </t> </section> <section anchor="silk_nlsf_stabilization" title="Normalized LSF Stabilization"> <t> The normalized LSF stabilization procedure is implemented in silk_NLSF_stabilize() (NLSF_stabilize.c). This process ensures that consecutive values of the normalized LSF coefficients, NLSF_Q15[], are spaced some minimum distance apart (predetermined to be the 0.01 percentile of a large training set). <xref target="silk_nlsf_min_spacing"/> gives the minimum spacings for NB and MB and those for WB, where row k is the minimum allowed value of NLSF_Q[k]-NLSF_Q[k-1]. For the purposes of computing this spacing for the first and last coefficient, NLSF_Q15[-1] is taken to be 0, and NLSF_Q15[d_LPC] is taken to be 32768. </t> <texttable anchor="silk_nlsf_min_spacing" title="Minimum Spacing for Normalized LSF Coefficients"> <ttcol>Coefficient</ttcol> <ttcol align="right">NB and MB</ttcol> <ttcol align="right">WB</ttcol> <c>0</c> <c>250</c> <c>100</c> <c>1</c> <c>3</c> <c>3</c> <c>2</c> <c>6</c> <c>40</c> <c>3</c> <c>3</c> <c>3</c> <c>4</c> <c>3</c> <c>3</c> <c>5</c> <c>3</c> <c>3</c> <c>6</c> <c>4</c> <c>5</c> <c>7</c> <c>3</c> <c>14</c> <c>8</c> <c>3</c> <c>14</c> <c>9</c> <c>3</c> <c>10</c> <c>10</c> <c>461</c> <c>11</c> <c>11</c> <c/> <c>3</c> <c>12</c> <c/> <c>8</c> <c>13</c> <c/> <c>9</c> <c>14</c> <c/> <c>7</c> <c>15</c> <c/> <c>3</c> <c>16</c> <c/> <c>347</c> </texttable> <t> The procedure starts off by trying to make small adjustments which attempt to minimize the amount of distortion introduced. After 20 such adjustments, it falls back to a more direct method which guarantees the constraints are enforced but may require large adjustments. </t> <t> Let NDeltaMin_Q15[k] be the minimum required spacing for the current audio bandwidth from <xref target="silk_nlsf_min_spacing"/>. First, the procedure finds the index i where NLSF_Q15[i] - NLSF_Q15[i-1] - NDeltaMin_Q15[i] is the smallest, breaking ties by using the lower value of i. If this value is non-negative, then the stabilization stops; the coefficients satisfy all the constraints. Otherwise, if i == 0, it sets NLSF_Q15[0] to NDeltaMin_Q15[0], and if i == d_LPC, it sets NLSF_Q15[d_LPC-1] to (32768 - NDeltaMin_Q15[d_LPC]). For all other values of i, both NLSF_Q15[i-1] and NLSF_Q15[i] are updated as follows: <figure align="center"> <artwork align="center"><![CDATA[ i-1 __ min_center_Q15 = (NDeltaMin_Q15[i]>>1) + \ NDeltaMin_Q15[k] /_ k=0 d_LPC __ max_center_Q15 = 32768 - (NDeltaMin_Q15[i]>>1) - \ NDeltaMin_Q15[k] /_ k=i+1 center_freq_Q15 = clamp(min_center_Q15[i], (NLSF_Q15[i-1] + NLSF_Q15[i] + 1)>>1, max_center_Q15[i]) NLSF_Q15[i-1] = center_freq_Q15 - (NDeltaMin_Q15[i]>>1) NLSF_Q15[i] = NLSF_Q15[i-1] + NDeltaMin_Q15[i] . ]]></artwork> </figure> Then the procedure repeats again, until it has either executed 20 times or has stopped because the coefficients satisfy all the constraints. </t> <t> After the 20th repetition of the above procedure, the following fallback procedure executes once. First, the values of NLSF_Q15[k] for 0 <= k < d_LPC are sorted in ascending order. Then for each value of k from 0 to d_LPC-1, NLSF_Q15[k] is set to <figure align="center"> <artwork align="center"><![CDATA[ max(NLSF_Q15[k], NLSF_Q15[k-1] + NDeltaMin_Q15[k]) . ]]></artwork> </figure> Next, for each value of k from d_LPC-1 down to 0, NLSF_Q15[k] is set to <figure align="center"> <artwork align="center"><![CDATA[ min(NLSF_Q15[k], NLSF_Q15[k+1] - NDeltaMin_Q15[k+1]) . 
]]></artwork> </figure> </t> </section> <section anchor="silk_nlsf_interpolation" title="Normalized LSF Interpolation"> <t> For 20 ms SILK frames, the first half of the frame (i.e., the first two subframes) may use normalized LSF coefficients that are interpolated between the decoded LSFs for the most recent coded frame (in the same channel) and the current frame. A Q2 interpolation factor follows the LSF coefficient indices in the bitstream, which is decoded using the PDF in <xref target="silk_nlsf_interp_pdf"/>. This happens in silk_decode_indices() (decode_indices.c). After either <list style="symbols"> <t>An uncoded regular SILK frame in the side channel, or</t> <t>A decoder reset (see <xref target="decoder-reset"/>),</t> </list> the decoder still decodes this factor, but ignores its value and always uses 4 instead. For 10 ms SILK frames, this factor is not stored at all. </t> <texttable anchor="silk_nlsf_interp_pdf" title="PDF for Normalized LSF Interpolation Index"> <ttcol>PDF</ttcol> <c>{13, 22, 29, 11, 181}/256</c> </texttable> <t> Let n2_Q15[k] be the normalized LSF coefficients decoded by the procedure in <xref target="silk_nlsfs"/>, n0_Q15[k] be the LSF coefficients decoded for the prior frame, and w_Q2 be the interpolation factor. Then the normalized LSF coefficients used for the first half of a 20 ms frame, n1_Q15[k], are <figure align="center"> <artwork align="center"><![CDATA[ n1_Q15[k] = n0_Q15[k] + (w_Q2*(n2_Q15[k] - n0_Q15[k]) >> 2) . ]]></artwork> </figure> This interpolation is performed in silk_decode_parameters() (decode_parameters.c). </t> </section> <section anchor="silk_nlsf2lpc" title="Converting Normalized LSFs to LPC Coefficients"> <t> Any LPC filter A(z) can be split into a symmetric part P(z) and an anti-symmetric part Q(z) such that <figure align="center"> <artwork align="center"><![CDATA[ d_LPC __ -k 1 A(z) = 1 - \ a[k] * z = - * (P(z) + Q(z)) /_ 2 k=1 ]]></artwork> </figure> with <figure align="center"> <artwork align="center"><![CDATA[ -d_LPC-1 -1 P(z) = A(z) + z * A(z ) -d_LPC-1 -1 Q(z) = A(z) - z * A(z ) . ]]></artwork> </figure> The even normalized LSF coefficients correspond to a pair of conjugate roots of P(z), while the odd coefficients correspond to a pair of conjugate roots of Q(z), all of which lie on the unit circle. In addition, P(z) has a root at pi and Q(z) has a root at 0. Thus, they may be reconstructed mathematically from a set of normalized LSF coefficients, n[k], as <figure align="center"> <artwork align="center"><![CDATA[ d_LPC/2-1 -1 ___ -1 -2 P(z) = (1 + z ) * | | (1 - 2*cos(pi*n[2*k])*z + z ) k=0 d_LPC/2-1 -1 ___ -1 -2 Q(z) = (1 - z ) * | | (1 - 2*cos(pi*n[2*k+1])*z + z ) k=0 ]]></artwork> </figure> </t> <t> However, SILK performs this reconstruction using a fixed-point approximation so that all decoders can reproduce it in a bit-exact manner to avoid prediction drift. The function silk_NLSF2A() (NLSF2A.c) implements this procedure. </t> <t> To start, it approximates cos(pi*n[k]) using a table lookup with linear interpolation. The encoder SHOULD use the inverse of this piecewise linear approximation, rather than the true inverse of the cosine function, when deriving the normalized LSF coefficients. These values are also re-ordered to improve numerical accuracy when constructing the LPC polynomials. 
</t> <texttable anchor="silk_nlsf_orderings" title="LSF Ordering for Polynomial Evaluation"> <ttcol>Coefficient</ttcol> <ttcol align="right">NB and MB</ttcol> <ttcol align="right">WB</ttcol> <c>0</c> <c>0</c> <c>0</c> <c>1</c> <c>9</c> <c>15</c> <c>2</c> <c>6</c> <c>8</c> <c>3</c> <c>3</c> <c>7</c> <c>4</c> <c>4</c> <c>4</c> <c>5</c> <c>5</c> <c>11</c> <c>6</c> <c>8</c> <c>12</c> <c>7</c> <c>1</c> <c>3</c> <c>8</c> <c>2</c> <c>2</c> <c>9</c> <c>7</c> <c>13</c> <c>10</c> <c/> <c>10</c> <c>11</c> <c/> <c>5</c> <c>12</c> <c/> <c>6</c> <c>13</c> <c/> <c>9</c> <c>14</c> <c/> <c>14</c> <c>15</c> <c/> <c>1</c> </texttable> <t> The top 7 bits of each normalized LSF coefficient index a value in the table, and the next 8 bits interpolate between it and the next value. Let i = (n[k] >> 8) be the integer index and f = (n[k] & 255) be the fractional part of a given coefficient. Then the re-ordered, approximated cosine, c_Q17[ordering[k]], is <figure align="center"> <artwork align="center"><![CDATA[ c_Q17[ordering[k]] = (cos_Q12[i]*256 + (cos_Q12[i+1]-cos_Q12[i])*f + 4) >> 3 , ]]></artwork> </figure> where ordering[k] is the k'th entry of the column of <xref target="silk_nlsf_orderings"/> corresponding to the current audio bandwidth and cos_Q12[i] is the i'th entry of <xref target="silk_cos_table"/>. </t> <texttable anchor="silk_cos_table" title="Q12 Cosine Table for LSF Conversion"> <ttcol align="right">i</ttcol> <ttcol align="right">+0</ttcol> <ttcol align="right">+1</ttcol> <ttcol align="right">+2</ttcol> <ttcol align="right">+3</ttcol> <c>0</c> <c>4096</c> <c>4095</c> <c>4091</c> <c>4085</c> <c>4</c> <c>4076</c> <c>4065</c> <c>4052</c> <c>4036</c> <c>8</c> <c>4017</c> <c>3997</c> <c>3973</c> <c>3948</c> <c>12</c> <c>3920</c> <c>3889</c> <c>3857</c> <c>3822</c> <c>16</c> <c>3784</c> <c>3745</c> <c>3703</c> <c>3659</c> <c>20</c> <c>3613</c> <c>3564</c> <c>3513</c> <c>3461</c> <c>24</c> <c>3406</c> <c>3349</c> <c>3290</c> <c>3229</c> <c>28</c> <c>3166</c> <c>3102</c> <c>3035</c> <c>2967</c> <c>32</c> <c>2896</c> <c>2824</c> <c>2751</c> <c>2676</c> <c>36</c> <c>2599</c> <c>2520</c> <c>2440</c> <c>2359</c> <c>40</c> <c>2276</c> <c>2191</c> <c>2106</c> <c>2019</c> <c>44</c> <c>1931</c> <c>1842</c> <c>1751</c> <c>1660</c> <c>48</c> <c>1568</c> <c>1474</c> <c>1380</c> <c>1285</c> <c>52</c> <c>1189</c> <c>1093</c> <c>995</c> <c>897</c> <c>56</c> <c>799</c> <c>700</c> <c>601</c> <c>501</c> <c>60</c> <c>401</c> <c>301</c> <c>201</c> <c>101</c> <c>64</c> <c>0</c> <c>-101</c> <c>-201</c> <c>-301</c> <c>68</c> <c>-401</c> <c>-501</c> <c>-601</c> <c>-700</c> <c>72</c> <c>-799</c> <c>-897</c> <c>-995</c> <c>-1093</c> <c>76</c> <c>-1189</c><c>-1285</c><c>-1380</c><c>-1474</c> <c>80</c> <c>-1568</c><c>-1660</c><c>-1751</c><c>-1842</c> <c>84</c> <c>-1931</c><c>-2019</c><c>-2106</c><c>-2191</c> <c>88</c> <c>-2276</c><c>-2359</c><c>-2440</c><c>-2520</c> <c>92</c> <c>-2599</c><c>-2676</c><c>-2751</c><c>-2824</c> <c>96</c> <c>-2896</c><c>-2967</c><c>-3035</c><c>-3102</c> <c>100</c> <c>-3166</c><c>-3229</c><c>-3290</c><c>-3349</c> <c>104</c> <c>-3406</c><c>-3461</c><c>-3513</c><c>-3564</c> <c>108</c> <c>-3613</c><c>-3659</c><c>-3703</c><c>-3745</c> <c>112</c> <c>-3784</c><c>-3822</c><c>-3857</c><c>-3889</c> <c>116</c> <c>-3920</c><c>-3948</c><c>-3973</c><c>-3997</c> <c>120</c> <c>-4017</c><c>-4036</c><c>-4052</c><c>-4065</c> <c>124</c> <c>-4076</c><c>-4085</c><c>-4091</c><c>-4095</c> <c>128</c> <c>-4096</c> <c/> <c/> <c/> </texttable> <t> Given the list of cosine values, silk_NLSF2A_find_poly() (NLSF2A.c) computes the 
coefficients of P and Q, described here via a simple recurrence. Let p_Q16[k][j] and q_Q16[k][j] be the coefficients of the products of the first (k+1) root pairs for P and Q, with j indexing the coefficient number. Only the first (k+2) coefficients are needed, as the products are symmetric. Let p_Q16[0][0] = q_Q16[0][0] = 1<<16, p_Q16[0][1] = -c_Q17[0], q_Q16[0][1] = -c_Q17[1], and d2 = d_LPC/2. As boundary conditions, assume p_Q16[k][j] = q_Q16[k][j] = 0 for all j < 0. Also, assume p_Q16[k][k+2] = p_Q16[k][k] and q_Q16[k][k+2] = q_Q16[k][k] (because of the symmetry). Then, for 0 < k < d2 and 0 <= j <= k+1, <figure align="center"> <artwork align="center"><![CDATA[ p_Q16[k][j] = p_Q16[k-1][j] + p_Q16[k-1][j-2] - ((c_Q17[2*k]*p_Q16[k-1][j-1] + 32768)>>16) , q_Q16[k][j] = q_Q16[k-1][j] + q_Q16[k-1][j-2] - ((c_Q17[2*k+1]*q_Q16[k-1][j-1] + 32768)>>16) . ]]></artwork> </figure> The use of Q17 values for the cosine terms in an otherwise Q16 expression implicitly scales them by a factor of 2. The multiplications in this recurrence may require up to 48 bits of precision in the result to avoid overflow. In practice, each row of the recurrence only depends on the previous row, so an implementation does not need to store all of them. </t> <t> silk_NLSF2A() uses the values from the last row of this recurrence to reconstruct a 32-bit version of the LPC filter (without the leading 1.0 coefficient), a32_Q17[k], 0 <= k < d2: <figure align="center"> <artwork align="center"><![CDATA[ a32_Q17[k] = -(q_Q16[d2-1][k+1] - q_Q16[d2-1][k]) - (p_Q16[d2-1][k+1] + p_Q16[d2-1][k]) , a32_Q17[d_LPC-k-1] = (q_Q16[d2-1][k+1] - q_Q16[d2-1][k]) - (p_Q16[d2-1][k+1] + p_Q16[d2-1][k]) . ]]></artwork> </figure> The sum and difference of two terms from each of the p_Q16 and q_Q16 coefficient lists reflect the (1 + z**-1) and (1 - z**-1) factors of P and Q, respectively. The promotion of the expression from Q16 to Q17 implicitly scales the result by 1/2. </t> </section> <section anchor="silk_lpc_range_limit" title="Limiting the Range of the LPC Coefficients"> <t> The a32_Q17[] coefficients are too large to fit in a 16-bit value, which significantly increases the cost of applying this filter in fixed-point decoders. Reducing them to Q12 precision doesn't incur any significant quality loss, but still does not guarantee they will fit. silk_NLSF2A() applies up to 10 rounds of bandwidth expansion to limit the dynamic range of these coefficients. Even floating-point decoders SHOULD perform these steps, to avoid mismatch. </t> <t> For each round, the process first finds the index k such that abs(a32_Q17[k]) is largest, breaking ties by choosing the lowest value of k. Then, it computes the corresponding Q12 precision value, maxabs_Q12, subject to an upper bound to avoid overflow in subsequent computations: <figure align="center"> <artwork align="center"><![CDATA[ maxabs_Q12 = min((maxabs_Q17 + 16) >> 5, 163838) . ]]></artwork> </figure> If this is larger than 32767, the procedure derives the chirp factor, sc_Q16[0], to use in the bandwidth expansion as <figure align="center"> <artwork align="center"><![CDATA[ (maxabs_Q12 - 32767) << 14 sc_Q16[0] = 65470 - -------------------------- , (maxabs_Q12 * (k+1)) >> 2 ]]></artwork> </figure> where the division here is integer division. This is an approximation of the chirp factor needed to reduce the target coefficient to 32767, though it is both less than 0.999 and, for k > 0 when maxabs_Q12 is much greater than 32767, still slightly too large. 
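In C terms, a non-normative version of this computation (using the variable names above, and assuming signed 32-bit arithmetic) is <figure align="center"> <artwork align="center"><![CDATA[
/* Non-normative: chirp factor for one round of bandwidth expansion.
   sc_Q16_0 corresponds to sc_Q16[0] above; int32_t is from <stdint.h>. */
int32_t sc_Q16_0 = 65470 -
    (((maxabs_Q12 - 32767) << 14) / ((maxabs_Q12*(k + 1)) >> 2));
]]></artwork> </figure>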
The upper bound on maxabs_Q12, 163838, was chosen because it is equal to ((2**31 - 1) >> 14) + 32767, i.e., the largest value of maxabs_Q12 that would not overflow the numerator in the equation above when stored in a signed 32-bit integer. </t> <t> silk_bwexpander_32() (bwexpander_32.c) performs the bandwidth expansion (again, only when maxabs_Q12 is greater than 32767) using the following recurrence: <figure align="center"> <artwork align="center"><![CDATA[ a32_Q17[k] = (a32_Q17[k]*sc_Q16[k]) >> 16 sc_Q16[k+1] = (sc_Q16[0]*sc_Q16[k] + 32768) >> 16 ]]></artwork> </figure> The first multiply may require up to 48 bits of precision in the result to avoid overflow. The second multiply must be unsigned to avoid overflow with only 32 bits of precision. The reference implementation uses a slightly more complex formulation that avoids the 32-bit overflow using signed multiplication, but is otherwise equivalent. </t> <t> After 10 rounds of bandwidth expansion are performed, the coefficients are simply saturated to 16 bits: <figure align="center"> <artwork align="center"><![CDATA[ a32_Q17[k] = clamp(-32768, (a32_Q17[k] + 16) >> 5, 32767) << 5 . ]]></artwork> </figure> Because this performs the actual saturation in the Q12 domain, but converts the coefficients back to the Q17 domain for the purposes of prediction gain limiting, this step must be performed after the 10th round of bandwidth expansion, regardless of whether or not the Q12 version of any coefficient still overflows a 16-bit integer. This saturation is not performed if maxabs_Q12 drops to 32767 or less prior to the 10th round. 
As a simple initial check, the decoder computes the DC response as <figure align="center"> <artwork align="center"><![CDATA[ d_LPC-1 __ DC_resp = \ a32_Q12[n] /_ n=0 ]]></artwork> </figure> and if DC_resp > 4096, the filter is unstable. </t> <t> Increasing the precision of these Q12 coefficients to Q24 for intermediate computations allows more accurate computation of the reflection coefficients, so the decoder initializes the recurrence via <figure align="center"> <artwork align="center"><![CDATA[ a32_Q24[d_LPC-1][n] = a32_Q12[n] << 12 . ]]></artwork> </figure> Then for each k from d_LPC-1 down to 0, if abs(a32_Q24[k][k]) > 16773022, the filter is unstable and the recurrence stops. The constant 16773022 here is approximately 0.99975 in Q24. Otherwise, row k-1 of a32_Q24 is computed from row k as <figure align="center"> <artwork align="center"><![CDATA[ rc_Q31[k] = -a32_Q24[k][k] << 7 , div_Q30[k] = (1<<30) - (rc_Q31[k]*rc_Q31[k] >> 32) , b1[k] = ilog(div_Q30[k]) , b2[k] = b1[k] - 16 , (1<<29) - 1 inv_Qb2[k] = ----------------------- , div_Q30[k] >> (b2[k]+1) err_Q29[k] = (1<<29) - ((div_Q30[k]<<(15-b2[k]))*inv_Qb2[k] >> 16) , gain_Qb1[k] = ((inv_Qb2[k] << 16) + (err_Q29[k]*inv_Qb2[k] >> 13)) , num_Q24[k-1][n] = a32_Q24[k][n] - ((a32_Q24[k][k-n-1]*rc_Q31[k] + (1<<30)) >> 31) , a32_Q24[k-1][n] = (num_Q24[k-1][n]*gain_Qb1[k] + (1<<(b1[k]-1))) >> b1[k] , ]]></artwork> </figure> where 0 <= n < k. Here, rc_Q31[k] are the reflection coefficients. div_Q30[k] is the denominator for each iteration, and gain_Qb1[k] is its multiplicative inverse (with b1[k] fractional bits, where b1[k] ranges from 20 to 31). inv_Qb2[k], which ranges from 16384 to 32767, is a low-precision version of that inverse (with b2[k] fractional bits). err_Q29[k] is the residual error, ranging from -32763 to 32392, which is used to improve the accuracy. The values num_Q24[k-1][n] for each n are the numerators for the next row of coefficients in the recursion, and a32_Q24[k-1][n] is the final version of that row. Every multiply in this procedure except the one used to compute gain_Qb1[k] requires more than 32 bits of precision, but otherwise all intermediate results fit in 32 bits or less. In practice, because each row only depends on the next one, an implementation does not need to store them all. </t> <t> If abs(a32_Q24[k][k]) <= 16773022 for 0 <= k < d_LPC, then the filter is considered stable. However, the problem of determining stability is ill-conditioned when the filter contains several reflection coefficients whose magnitude is very close to one. This fixed-point algorithm is not mathematically guaranteed to correctly classify filters as stable or unstable in this case, though it does very well in practice. </t> <t> On round i, 1 <= i <= 18, if the filter passes these stability checks, then this procedure stops, and the final LPC coefficients to use for reconstruction in <xref target="silk_lpc_synthesis"/> are <figure align="center"> <artwork align="center"><![CDATA[ a_Q12[k] = (a32_Q17[k] + 16) >> 5 . ]]></artwork> </figure> Otherwise, a round of bandwidth expansion is applied using the same procedure as in <xref target="silk_lpc_range_limit"/>, with <figure align="center"> <artwork align="center"><![CDATA[ sc_Q16[0] = 65536 - (2<<i) . ]]></artwork> </figure> During the 15th round, sc_Q16[0] becomes 0 in the above equation, so a_Q12[k] is set to 0 for all k, guaranteeing a stable filter. 
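As a non-normative illustration, one round of the bandwidth expansion procedure used both here and in <xref target="silk_lpc_range_limit"/> might be implemented as follows, assuming a 64-bit intermediate is available for the first multiply: <figure align="center"> <artwork align="center"><![CDATA[
#include <stdint.h>

/* Non-normative sketch of one round of bandwidth expansion applied to
   the a32_Q17[] coefficients.  sc_Q16_0 is the chirp factor sc_Q16[0]:
   either the value derived from maxabs_Q12 (range limiting) or
   65536 - (2<<i) (prediction gain limiting). */
static void bwexpand_sketch(int32_t a32_Q17[], int d_LPC, int32_t sc_Q16_0)
{
    uint32_t sc_Q16 = (uint32_t)sc_Q16_0;
    int k;
    for (k = 0; k < d_LPC; k++) {
        /* This product can need up to 48 bits, so use a 64-bit type. */
        a32_Q17[k] = (int32_t)(((int64_t)a32_Q17[k]*(int64_t)sc_Q16) >> 16);
        /* Unsigned multiply: the product can exceed 31 bits. */
        sc_Q16 = ((uint32_t)sc_Q16_0*sc_Q16 + 32768) >> 16;
    }
}
]]></artwork> </figure>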
</t> </section> </section> <section anchor="silk_ltp_params" toc="include" title="Long-Term Prediction (LTP) Parameters"> <t> After the normalized LSF indices and, for 20 ms frames, the LSF interpolation index, voiced frames (see <xref target="silk_frame_type"/>) include additional LTP parameters. There is one primary lag index for each SILK frame, but this is refined to produce a separate lag index per subframe using a vector quantizer. Each subframe also gets its own prediction gain coefficient. </t> <section anchor="silk_ltp_lags" title="Pitch Lags"> <t> The primary lag index is coded either relative to the primary lag of the prior frame in the same channel, or as an absolute index. Absolute coding is used if and only if <list style="symbols"> <t> This is the first SILK frame of its type (LBRR or regular) for this channel in the current Opus frame, </t> <t> The previous SILK frame of the same type (LBRR or regular) for this channel in the same Opus frame was not coded, or </t> <t> That previous SILK frame was coded, but was not voiced (see <xref target="silk_frame_type"/>). </t> </list> </t> <t> With absolute coding, the primary pitch lag may range from 2 ms (inclusive) up to 18 ms (exclusive), corresponding to pitches from 500 Hz down to 55.6 Hz, respectively. It is comprised of a high part and a low part, where the decoder reads the high part using the 32-entry codebook in <xref target="silk_abs_pitch_high_pdf"/> and the low part using the codebook corresponding to the current audio bandwidth from <xref target="silk_abs_pitch_low_pdf"/>. The final primary pitch lag is then <figure align="center"> <artwork align="center"><![CDATA[ lag = lag_high*lag_scale + lag_low + lag_min ]]></artwork> </figure> where lag_high is the high part, lag_low is the low part, and lag_scale and lag_min are the values from the "Scale" and "Minimum Lag" columns of <xref target="silk_abs_pitch_low_pdf"/>, respectively. </t> <texttable anchor="silk_abs_pitch_high_pdf" title="PDF for High Part of Primary Pitch Lag"> <ttcol align="left">PDF</ttcol> <c>{3, 3, 6, 11, 21, 30, 32, 19, 11, 10, 12, 13, 13, 12, 11, 9, 8, 7, 6, 4, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1}/256</c> </texttable> <texttable anchor="silk_abs_pitch_low_pdf" title="PDF for Low Part of Primary Pitch Lag"> <ttcol>Audio Bandwidth</ttcol> <ttcol>PDF</ttcol> <ttcol>Scale</ttcol> <ttcol>Minimum Lag</ttcol> <ttcol>Maximum Lag</ttcol> <c>NB</c> <c>{64, 64, 64, 64}/256</c> <c>4</c> <c>16</c> <c>144</c> <c>MB</c> <c>{43, 42, 43, 43, 42, 43}/256</c> <c>6</c> <c>24</c> <c>216</c> <c>WB</c> <c>{32, 32, 32, 32, 32, 32, 32, 32}/256</c> <c>8</c> <c>32</c> <c>288</c> </texttable> <t> All frames that do not use absolute coding for the primary lag index use relative coding instead. The decoder reads a single delta value using the 21-entry PDF in <xref target="silk_rel_pitch_pdf"/>. If the resulting value is zero, it falls back to the absolute coding procedure from the prior paragraph. Otherwise, the final primary pitch lag is then <figure align="center"> <artwork align="center"><![CDATA[ lag = previous_lag + (delta_lag_index - 9) ]]></artwork> </figure> where previous_lag is the primary pitch lag from the most recent frame in the same channel and delta_lag_index is the value just decoded. This allows a per-frame change in the pitch lag of -8 to +11 samples. The decoder does no clamping at this point, so this value can fall outside the range of 2 ms to 18 ms, and the decoder must use this unclamped value when using relative coding in the next SILK frame (if any). 
However, because an Opus frame can use relative coding for at most two consecutive SILK frames, integer overflow should not be an issue. </t> <texttable anchor="silk_rel_pitch_pdf" title="PDF for Primary Pitch Lag Change"> <ttcol align="left">PDF</ttcol> <c>{46, 2, 2, 3, 4, 6, 10, 15, 26, 38, 30, 22, 15, 10, 7, 6, 4, 4, 2, 2, 2}/256</c> </texttable> <t> After the primary pitch lag, a "pitch contour", stored as a single entry from one of four small VQ codebooks, gives lag offsets for each subframe in the current SILK frame. The codebook index is decoded using one of the PDFs in <xref target="silk_pitch_contour_pdfs"/> depending on the current frame size and audio bandwidth. Tables <xref format="counter" target="silk_pitch_contour_cb_nb10ms"/> through <xref format="counter" target="silk_pitch_contour_cb_mbwb20ms"/> give the corresponding offsets to apply to the primary pitch lag for each subframe given the decoded codebook index. </t> <texttable anchor="silk_pitch_contour_pdfs" title="PDFs for Subframe Pitch Contour"> <ttcol>Audio Bandwidth</ttcol> <ttcol>SILK Frame Size</ttcol> <ttcol align="right">Codebook Size</ttcol> <ttcol>PDF</ttcol> <c>NB</c> <c>10 ms</c> <c>3</c> <c>{143, 50, 63}/256</c> <c>NB</c> <c>20 ms</c> <c>11</c> <c>{68, 12, 21, 17, 19, 22, 30, 24, 17, 16, 10}/256</c> <c>MB or WB</c> <c>10 ms</c> <c>12</c> <c>{91, 46, 39, 19, 14, 12, 8, 7, 6, 5, 5, 4}/256</c> <c>MB or WB</c> <c>20 ms</c> <c>34</c> <c>{33, 22, 18, 16, 15, 14, 14, 13, 13, 10, 9, 9, 8, 6, 6, 6, 5, 4, 4, 4, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1}/256</c> </texttable> <texttable anchor="silk_pitch_contour_cb_nb10ms" title="Codebook Vectors for Subframe Pitch Contour: NB, 10 ms Frames"> <ttcol>Index</ttcol> <ttcol align="right">Subframe Offsets</ttcol> <c>0</c> <c><spanx style="vbare"> 0 0</spanx></c> <c>1</c> <c><spanx style="vbare"> 1 0</spanx></c> <c>2</c> <c><spanx style="vbare"> 0 1</spanx></c> </texttable> <texttable anchor="silk_pitch_contour_cb_nb20ms" title="Codebook Vectors for Subframe Pitch Contour: NB, 20 ms Frames"> <ttcol>Index</ttcol> <ttcol align="right">Subframe Offsets</ttcol> <c>0</c> <c><spanx style="vbare"> 0 0 0 0</spanx></c> <c>1</c> <c><spanx style="vbare"> 2 1 0 -1</spanx></c> <c>2</c> <c><spanx style="vbare">-1 0 1 2</spanx></c> <c>3</c> <c><spanx style="vbare">-1 0 0 1</spanx></c> <c>4</c> <c><spanx style="vbare">-1 0 0 0</spanx></c> <c>5</c> <c><spanx style="vbare"> 0 0 0 1</spanx></c> <c>6</c> <c><spanx style="vbare"> 0 0 1 1</spanx></c> <c>7</c> <c><spanx style="vbare"> 1 1 0 0</spanx></c> <c>8</c> <c><spanx style="vbare"> 1 0 0 0</spanx></c> <c>9</c> <c><spanx style="vbare"> 0 0 0 -1</spanx></c> <c>10</c> <c><spanx style="vbare"> 1 0 0 -1</spanx></c> </texttable> <texttable anchor="silk_pitch_contour_cb_mbwb10ms" title="Codebook Vectors for Subframe Pitch Contour: MB or WB, 10 ms Frames"> <ttcol>Index</ttcol> <ttcol align="right">Subframe Offsets</ttcol> <c>0</c> <c><spanx style="vbare"> 0 0</spanx></c> <c>1</c> <c><spanx style="vbare"> 0 1</spanx></c> <c>2</c> <c><spanx style="vbare"> 1 0</spanx></c> <c>3</c> <c><spanx style="vbare">-1 1</spanx></c> <c>4</c> <c><spanx style="vbare"> 1 -1</spanx></c> <c>5</c> <c><spanx style="vbare">-1 2</spanx></c> <c>6</c> <c><spanx style="vbare"> 2 -1</spanx></c> <c>7</c> <c><spanx style="vbare">-2 2</spanx></c> <c>8</c> <c><spanx style="vbare"> 2 -2</spanx></c> <c>9</c> <c><spanx style="vbare">-2 3</spanx></c> <c>10</c> <c><spanx style="vbare"> 3 -2</spanx></c> <c>11</c> <c><spanx style="vbare">-3 3</spanx></c> </texttable> <texttable 
anchor="silk_pitch_contour_cb_mbwb20ms" title="Codebook Vectors for Subframe Pitch Contour: MB or WB, 20 ms Frames"> <ttcol>Index</ttcol> <ttcol align="right">Subframe Offsets</ttcol> <c>0</c> <c><spanx style="vbare"> 0 0 0 0</spanx></c> <c>1</c> <c><spanx style="vbare"> 0 0 1 1</spanx></c> <c>2</c> <c><spanx style="vbare"> 1 1 0 0</spanx></c> <c>3</c> <c><spanx style="vbare">-1 0 0 0</spanx></c> <c>4</c> <c><spanx style="vbare"> 0 0 0 1</spanx></c> <c>5</c> <c><spanx style="vbare"> 1 0 0 0</spanx></c> <c>6</c> <c><spanx style="vbare">-1 0 0 1</spanx></c> <c>7</c> <c><spanx style="vbare"> 0 0 0 -1</spanx></c> <c>8</c> <c><spanx style="vbare">-1 0 1 2</spanx></c> <c>9</c> <c><spanx style="vbare"> 1 0 0 -1</spanx></c> <c>10</c> <c><spanx style="vbare">-2 -1 1 2</spanx></c> <c>11</c> <c><spanx style="vbare"> 2 1 0 -1</spanx></c> <c>12</c> <c><spanx style="vbare">-2 0 0 2</spanx></c> <c>13</c> <c><spanx style="vbare">-2 0 1 3</spanx></c> <c>14</c> <c><spanx style="vbare"> 2 1 -1 -2</spanx></c> <c>15</c> <c><spanx style="vbare">-3 -1 1 3</spanx></c> <c>16</c> <c><spanx style="vbare"> 2 0 0 -2</spanx></c> <c>17</c> <c><spanx style="vbare"> 3 1 0 -2</spanx></c> <c>18</c> <c><spanx style="vbare">-3 -1 2 4</spanx></c> <c>19</c> <c><spanx style="vbare">-4 -1 1 4</spanx></c> <c>20</c> <c><spanx style="vbare"> 3 1 -1 -3</spanx></c> <c>21</c> <c><spanx style="vbare">-4 -1 2 5</spanx></c> <c>22</c> <c><spanx style="vbare"> 4 2 -1 -3</spanx></c> <c>23</c> <c><spanx style="vbare"> 4 1 -1 -4</spanx></c> <c>24</c> <c><spanx style="vbare">-5 -1 2 6</spanx></c> <c>25</c> <c><spanx style="vbare"> 5 2 -1 -4</spanx></c> <c>26</c> <c><spanx style="vbare">-6 -2 2 6</spanx></c> <c>27</c> <c><spanx style="vbare">-5 -2 2 5</spanx></c> <c>28</c> <c><spanx style="vbare"> 6 2 -1 -5</spanx></c> <c>29</c> <c><spanx style="vbare">-7 -2 3 8</spanx></c> <c>30</c> <c><spanx style="vbare"> 6 2 -2 -6</spanx></c> <c>31</c> <c><spanx style="vbare"> 5 2 -2 -5</spanx></c> <c>32</c> <c><spanx style="vbare"> 8 3 -2 -7</spanx></c> <c>33</c> <c><spanx style="vbare">-9 -3 3 9</spanx></c> </texttable> <t> The final pitch lag for each subframe is assembled in silk_decode_pitch() (decode_pitch.c). Let lag be the primary pitch lag for the current SILK frame, contour_index be index of the VQ codebook, and lag_cb[contour_index][k] be the corresponding entry of the codebook from the appropriate table given above for the k'th subframe. Then the final pitch lag for that subframe is <figure align="center"> <artwork align="center"><![CDATA[ pitch_lags[k] = clamp(lag_min, lag + lag_cb[contour_index][k], lag_max) ]]></artwork> </figure> where lag_min and lag_max are the values from the "Minimum Lag" and "Maximum Lag" columns of <xref target="silk_abs_pitch_low_pdf"/>, respectively. </t> </section> <section anchor="silk_ltp_filter" title="LTP Filter Coefficients"> <t> SILK uses a separate 5-tap pitch filter for each subframe, selected from one of three codebooks. The three codebooks each represent different rate-distortion trade-offs, with average rates of 1.61 bits/subframe, 3.68 bits/subframe, and 4.85 bits/subframe, respectively. </t> <t> The importance of the filter coefficients generally depends on two factors: the periodicity of the signal and relative energy between the current subframe and the signal from one period earlier. Greater periodicity and decaying energy both lead to more important filter coefficients, and thus should be coded with lower distortion and higher rate. 
These properties are relatively stable over the duration of a single SILK frame, hence all of the subframes in a SILK frame choose their filter from the same codebook. This is signaled with an explicitly-coded "periodicity index". This immediately follows the subframe pitch lags, and is coded using the 3-entry PDF from <xref target="silk_perindex_pdf"/>. </t> <texttable anchor="silk_perindex_pdf" title="Periodicity Index PDF"> <ttcol>PDF</ttcol> <c>{77, 80, 99}/256</c> </texttable> <t> The indices of the filters for each subframe follow. They are all coded using the PDF from <xref target="silk_ltp_filter_pdfs"/> corresponding to the periodicity index. Tables <xref format="counter" target="silk_ltp_filter_coeffs0"/> through <xref format="counter" target="silk_ltp_filter_coeffs2"/> contain the corresponding filter taps as signed Q7 integers. </t> <texttable anchor="silk_ltp_filter_pdfs" title="LTP Filter PDFs"> <ttcol>Periodicity Index</ttcol> <ttcol align="right">Codebook Size</ttcol> <ttcol>PDF</ttcol> <c>0</c> <c>8</c> <c>{185, 15, 13, 13, 9, 9, 6, 6}/256</c> <c>1</c> <c>16</c> <c>{57, 34, 21, 20, 15, 13, 12, 13, 10, 10, 9, 10, 9, 8, 7, 8}/256</c> <c>2</c> <c>32</c> <c>{15, 16, 14, 12, 12, 12, 11, 11, 11, 10, 9, 9, 9, 9, 8, 8, 8, 8, 7, 7, 6, 6, 5, 4, 5, 4, 4, 4, 3, 4, 3, 2}/256</c> </texttable> <texttable anchor="silk_ltp_filter_coeffs0" title="Codebook Vectors for LTP Filter, Periodicity Index 0"> <ttcol>Index</ttcol> <ttcol align="right">Filter Taps (Q7)</ttcol> <c>0</c> <c><spanx style="vbare"> 4 6 24 7 5</spanx></c> <c>1</c> <c><spanx style="vbare"> 0 0 2 0 0</spanx></c> <c>2</c> <c><spanx style="vbare"> 12 28 41 13 -4</spanx></c> <c>3</c> <c><spanx style="vbare"> -9 15 42 25 14</spanx></c> <c>4</c> <c><spanx style="vbare"> 1 -2 62 41 -9</spanx></c> <c>5</c> <c><spanx style="vbare">-10 37 65 -4 3</spanx></c> <c>6</c> <c><spanx style="vbare"> -6 4 66 7 -8</spanx></c> <c>7</c> <c><spanx style="vbare"> 16 14 38 -3 33</spanx></c> </texttable> <texttable anchor="silk_ltp_filter_coeffs1" title="Codebook Vectors for LTP Filter, Periodicity Index 1"> <ttcol>Index</ttcol> <ttcol align="right">Filter Taps (Q7)</ttcol> <c>0</c> <c><spanx style="vbare"> 13 22 39 23 12</spanx></c> <c>1</c> <c><spanx style="vbare"> -1 36 64 27 -6</spanx></c> <c>2</c> <c><spanx style="vbare"> -7 10 55 43 17</spanx></c> <c>3</c> <c><spanx style="vbare"> 1 1 8 1 1</spanx></c> <c>4</c> <c><spanx style="vbare"> 6 -11 74 53 -9</spanx></c> <c>5</c> <c><spanx style="vbare">-12 55 76 -12 8</spanx></c> <c>6</c> <c><spanx style="vbare"> -3 3 93 27 -4</spanx></c> <c>7</c> <c><spanx style="vbare"> 26 39 59 3 -8</spanx></c> <c>8</c> <c><spanx style="vbare"> 2 0 77 11 9</spanx></c> <c>9</c> <c><spanx style="vbare"> -8 22 44 -6 7</spanx></c> <c>10</c> <c><spanx style="vbare"> 40 9 26 3 9</spanx></c> <c>11</c> <c><spanx style="vbare"> -7 20 101 -7 4</spanx></c> <c>12</c> <c><spanx style="vbare"> 3 -8 42 26 0</spanx></c> <c>13</c> <c><spanx style="vbare">-15 33 68 2 23</spanx></c> <c>14</c> <c><spanx style="vbare"> -2 55 46 -2 15</spanx></c> <c>15</c> <c><spanx style="vbare"> 3 -1 21 16 41</spanx></c> </texttable> <texttable anchor="silk_ltp_filter_coeffs2" title="Codebook Vectors for LTP Filter, Periodicity Index 2"> <ttcol>Index</ttcol> <ttcol align="right">Filter Taps (Q7)</ttcol> <c>0</c> <c><spanx style="vbare"> -6 27 61 39 5</spanx></c> <c>1</c> <c><spanx style="vbare">-11 42 88 4 1</spanx></c> <c>2</c> <c><spanx style="vbare"> -2 60 65 6 -4</spanx></c> <c>3</c> <c><spanx style="vbare"> -1 -5 73 56 1</spanx></c> <c>4</c> 
<c><spanx style="vbare"> -9 19 94 29 -9</spanx></c> <c>5</c> <c><spanx style="vbare"> 0 12 99 6 4</spanx></c> <c>6</c> <c><spanx style="vbare"> 8 -19 102 46 -13</spanx></c> <c>7</c> <c><spanx style="vbare"> 3 2 13 3 2</spanx></c> <c>8</c> <c><spanx style="vbare"> 9 -21 84 72 -18</spanx></c> <c>9</c> <c><spanx style="vbare">-11 46 104 -22 8</spanx></c> <c>10</c> <c><spanx style="vbare"> 18 38 48 23 0</spanx></c> <c>11</c> <c><spanx style="vbare">-16 70 83 -21 11</spanx></c> <c>12</c> <c><spanx style="vbare"> 5 -11 117 22 -8</spanx></c> <c>13</c> <c><spanx style="vbare"> -6 23 117 -12 3</spanx></c> <c>14</c> <c><spanx style="vbare"> 3 -8 95 28 4</spanx></c> <c>15</c> <c><spanx style="vbare">-10 15 77 60 -15</spanx></c> <c>16</c> <c><spanx style="vbare"> -1 4 124 2 -4</spanx></c> <c>17</c> <c><spanx style="vbare"> 3 38 84 24 -25</spanx></c> <c>18</c> <c><spanx style="vbare"> 2 13 42 13 31</spanx></c> <c>19</c> <c><spanx style="vbare"> 21 -4 56 46 -1</spanx></c> <c>20</c> <c><spanx style="vbare"> -1 35 79 -13 19</spanx></c> <c>21</c> <c><spanx style="vbare"> -7 65 88 -9 -14</spanx></c> <c>22</c> <c><spanx style="vbare"> 20 4 81 49 -29</spanx></c> <c>23</c> <c><spanx style="vbare"> 20 0 75 3 -17</spanx></c> <c>24</c> <c><spanx style="vbare"> 5 -9 44 92 -8</spanx></c> <c>25</c> <c><spanx style="vbare"> 1 -3 22 69 31</spanx></c> <c>26</c> <c><spanx style="vbare"> -6 95 41 -12 5</spanx></c> <c>27</c> <c><spanx style="vbare"> 39 67 16 -4 1</spanx></c> <c>28</c> <c><spanx style="vbare"> 0 -6 120 55 -36</spanx></c> <c>29</c> <c><spanx style="vbare">-13 44 122 4 -24</spanx></c> <c>30</c> <c><spanx style="vbare"> 81 5 11 3 7</spanx></c> <c>31</c> <c><spanx style="vbare"> 2 0 9 10 88</spanx></c> </texttable> </section> <section anchor="silk_ltp_scaling" title="LTP Scaling Parameter"> <t> An LTP scaling parameter appears after the LTP filter coefficients if and only if <list style="symbols"> <t>This is a voiced frame (see <xref target="silk_frame_type"/>), and</t> <t>Either <list style="symbols"> <t> This SILK frame corresponds to the first time interval of the current Opus frame for its type (LBRR or regular), or </t> <t> This is an LBRR frame where the LBRR flags (see <xref target="silk_lbrr_flags"/>) indicate the previous LBRR frame in the same channel is not coded. </t> </list> </t> </list> This allows the encoder to trade off the prediction gain between packets against the recovery time after packet loss. Unlike absolute-coding for pitch lags, regular SILK frames that are not at the start of an Opus frame (i.e., that do not correspond to the first 20 ms time interval in Opus frames of 40 or 60 ms) do not include this field, even if the prior frame was not voiced, or (in the case of the side channel) not even coded. After an uncoded frame in the side channel, the LTP buffer (see <xref target="silk_ltp_synthesis"/>) is cleared to zero, and is thus in a known state. In contrast, LBRR frames do include this field when the prior frame was not coded, since the LTP buffer contains the output of the PLC, which is non-normative. </t> <t> If present, the decoder reads a value using the 3-entry PDF in <xref target="silk_ltp_scaling_pdf"/>. The three possible values represent Q14 scale factors of 15565, 12288, and 8192, respectively (corresponding to approximately 0.95, 0.75, and 0.5). Frames that do not code the scaling parameter use the default factor of 15565 (approximately 0.95). 
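In C terms, a non-normative mapping from the decoded value (or the absence of the parameter) to the Q14 factor is <figure align="center"> <artwork align="center"><![CDATA[
/* Non-normative: LTP scaling factor in Q14.  "coded" indicates whether
   the parameter was present; ltp_scale_index is the decoded value. */
static int ltp_scale_Q14(int coded, int ltp_scale_index)
{
    static const int scale_Q14[3] = { 15565, 12288, 8192 };
    return coded ? scale_Q14[ltp_scale_index] : 15565;
}
]]></artwork> </figure>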
</t> <texttable anchor="silk_ltp_scaling_pdf" title="PDF for LTP Scaling Parameter"> <ttcol align="left">PDF</ttcol> <c>{128, 64, 64}/256</c> </texttable> </section> </section> <section anchor="silk_seed" toc="include" title="Linear Congruential Generator (LCG) Seed"> <t> As described in <xref target="silk_excitation_reconstruction"/>, SILK uses a linear congruential generator (LCG) to inject pseudorandom noise into the quantized excitation. To ensure synchronization of this process between the encoder and decoder, each SILK frame stores a 2-bit seed after the LTP parameters (if any). The encoder may consider the choice of seed during quantization, and the flexibility of this choice lets it reduce distortion, helping to pay for the bit cost required to signal it. The decoder reads the seed using the uniform 4-entry PDF in <xref target="silk_seed_pdf"/>, yielding a value between 0 and 3, inclusive. </t> <texttable anchor="silk_seed_pdf" title="PDF for LCG Seed"> <ttcol align="left">PDF</ttcol> <c>{64, 64, 64, 64}/256</c> </texttable> </section> <section anchor="silk_excitation" toc="include" title="Excitation"> <t> SILK codes the excitation using a modified version of the Pyramid Vector Quantization (PVQ) codebook <xref target="PVQ"/>. The PVQ codebook is designed for Laplace-distributed values and consists of all sums of K signed, unit pulses in a vector of dimension N, where two pulses at the same position are required to have the same sign. Thus the codebook includes all integer codevectors y of dimension N that satisfy <figure align="center"> <artwork align="center"><![CDATA[ N-1 __ \ abs(y[j]) = K . /_ j=0 ]]></artwork> </figure> Unlike regular PVQ, SILK uses a variable-length, rather than fixed-length, encoding. This encoding is better suited to the more Gaussian-like distribution of the coefficient magnitudes and the non-uniform distribution of their signs (caused by the quantization offset described below). SILK also handles large codebooks by coding the least significant bits (LSBs) of each coefficient directly. This adds a small coding efficiency loss, but greatly reduces the computation time and ROM size required for decoding, as implemented in silk_decode_pulses() (decode_pulses.c). </t> <t> SILK fixes the dimension of the codebook to N = 16. The excitation is made up of a number of "shell blocks", each 16 samples in size. <xref target="silk_shell_block_table"/> lists the number of shell blocks required for a SILK frame for each possible audio bandwidth and frame size. 10 ms MB frames nominally contain 120 samples (10 ms at 12 kHz), which is not a multiple of 16. This is handled by coding 8 shell blocks (128 samples) and discarding the final 8 samples of the last block. The decoder contains no special case that prevents an encoder from placing pulses in these samples, and they must be correctly parsed from the bitstream if present, but they are otherwise ignored. 
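The block counts in <xref target="silk_shell_block_table"/> are simply the frame size in samples rounded up to a whole number of 16-sample blocks; a non-normative way to compute them (with fs_kHz being the SILK-internal sample rate in kHz and frame_ms the SILK frame size in ms) is <figure align="center"> <artwork align="center"><![CDATA[
/* Non-normative: number of 16-sample shell blocks in a SILK frame.
   For 10 ms MB frames this gives 8 blocks (128 samples), of which the
   final 8 samples are discarded as described above. */
static int num_shell_blocks(int fs_kHz, int frame_ms)
{
    return (fs_kHz*frame_ms + 15)/16;
}
]]></artwork> </figure>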
</t> <texttable anchor="silk_shell_block_table" title="Number of Shell Blocks Per SILK Frame"> <ttcol>Audio Bandwidth</ttcol> <ttcol>Frame Size</ttcol> <ttcol align="right">Number of Shell Blocks</ttcol> <c>NB</c> <c>10 ms</c> <c>5</c> <c>MB</c> <c>10 ms</c> <c>8</c> <c>WB</c> <c>10 ms</c> <c>10</c> <c>NB</c> <c>20 ms</c> <c>10</c> <c>MB</c> <c>20 ms</c> <c>15</c> <c>WB</c> <c>20 ms</c> <c>20</c> </texttable> <section anchor="silk_rate_level" title="Rate Level"> <t> The first symbol in the excitation is a "rate level", which is an index from 0 to 8, inclusive, coded using the PDF in <xref target="silk_rate_level_pdfs"/> corresponding to the signal type of the current frame (from <xref target="silk_frame_type"/>). The rate level selects the PDF used to decode the number of pulses in the individual shell blocks. It does not directly convey any information about the bitrate or the number of pulses itself, but merely changes the probability of the symbols in <xref target="silk_pulse_counts"/>. Level 0 provides a more efficient encoding at low rates generally, and level 8 provides a more efficient encoding at high rates generally, though the most efficient level for a particular SILK frame may depend on the exact distribution of the coded symbols. An encoder should, but is not required to, use the most efficient rate level. </t> <texttable anchor="silk_rate_level_pdfs" title="PDFs for the Rate Level"> <ttcol>Signal Type</ttcol> <ttcol>PDF</ttcol> <c>Inactive or Unvoiced</c> <c>{15, 51, 12, 46, 45, 13, 33, 27, 14}/256</c> <c>Voiced</c> <c>{33, 30, 36, 17, 34, 49, 18, 21, 18}/256</c> </texttable> </section> <section anchor="silk_pulse_counts" title="Pulses Per Shell Block"> <t> The total number of pulses in each of the shell blocks follows the rate level. The pulse counts for all of the shell blocks are coded consecutively, before the content of any of the blocks. Each block may have anywhere from 0 to 16 pulses, inclusive, coded using the 18-entry PDF in <xref target="silk_pulse_count_pdfs"/> corresponding to the rate level from <xref target="silk_rate_level"/>. The special value 17 indicates that this block has one or more additional LSBs to decode for each coefficient. If the decoder encounters this value, it decodes another value for the actual pulse count of the block, but uses the PDF corresponding to the special rate level 9 instead of the normal rate level. This process repeats until the decoder reads a value less than 17, and it then sets the number of extra LSBs used to the number of 17's decoded for that block. If it reads the value 17 ten times, then the next iteration uses the special rate level 10 instead of 9. The probability of decoding a 17 when using the PDF for rate level 10 is zero, ensuring that the number of LSBs for a block will not exceed 10. The cumulative distribution for rate level 10 is just a shifted version of that for 9 and thus does not require any additional storage. 
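A non-normative sketch of this loop, using the range decoder call ec_dec_icdf() from the reference implementation and a placeholder array pulse_count_icdf[] holding the ICDF form of the 11 PDFs below, is <figure align="center"> <artwork align="center"><![CDATA[
/* Non-normative sketch: decode the pulse count and the number of extra
   LSBs for one shell block.  "dec" is the range decoder state and
   pulse_count_icdf[0..10] are placeholders for the ICDF form of the
   PDFs in the table below (rate levels 0..8 plus special levels 9
   and 10). */
static int decode_pulse_count(ec_dec *dec,
                              const unsigned char *const pulse_count_icdf[11],
                              int rate_level, int *lsb_count)
{
    int pulses = ec_dec_icdf(dec, pulse_count_icdf[rate_level], 8);
    *lsb_count = 0;
    while (pulses == 17) {
        (*lsb_count)++;
        /* After ten 17's, switch from special level 9 to level 10,
           whose PDF gives the value 17 zero probability. */
        pulses = ec_dec_icdf(dec,
                             pulse_count_icdf[*lsb_count >= 10 ? 10 : 9], 8);
    }
    return pulses;
}
]]></artwork> </figure>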
</t> <texttable anchor="silk_pulse_count_pdfs" title="PDFs for the Pulse Count"> <ttcol>Rate Level</ttcol> <ttcol>PDF</ttcol> <c>0</c> <c>{131, 74, 25, 8, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}/256</c> <c>1</c> <c>{58, 93, 60, 23, 7, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}/256</c> <c>2</c> <c>{43, 51, 46, 33, 24, 16, 11, 8, 6, 3, 3, 3, 2, 1, 1, 2, 1, 2}/256</c> <c>3</c> <c>{17, 52, 71, 57, 31, 12, 5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}/256</c> <c>4</c> <c>{6, 21, 41, 53, 49, 35, 21, 11, 6, 3, 2, 2, 1, 1, 1, 1, 1, 1}/256</c> <c>5</c> <c>{7, 14, 22, 28, 29, 28, 25, 20, 17, 13, 11, 9, 7, 5, 4, 4, 3, 10}/256</c> <c>6</c> <c>{2, 5, 14, 29, 42, 46, 41, 31, 19, 11, 6, 3, 2, 1, 1, 1, 1, 1}/256</c> <c>7</c> <c>{1, 2, 4, 10, 19, 29, 35, 37, 34, 28, 20, 14, 8, 5, 4, 2, 2, 2}/256</c> <c>8</c> <c>{1, 2, 2, 5, 9, 14, 20, 24, 27, 28, 26, 23, 20, 15, 11, 8, 6, 15}/256</c> <c>9</c> <c>{1, 1, 1, 6, 27, 58, 56, 39, 25, 14, 10, 6, 3, 3, 2, 1, 1, 2}/256</c> <c>10</c> <c>{2, 1, 6, 27, 58, 56, 39, 25, 14, 10, 6, 3, 3, 2, 1, 1, 2, 0}/256</c> </texttable> </section> <section anchor="silk_pulse_locations" title="Pulse Location Decoding"> <t> The locations of the pulses in each shell block follow the pulse counts, as decoded by silk_shell_decoder() (shell_coder.c). As with the pulse counts, these locations are coded for all the shell blocks before any of the remaining information for each block. Unlike many other codecs, SILK places no restriction on the distribution of pulses within a shell block. All of the pulses may be placed in a single location, or each one in a unique location, or anything in between. </t> <t> The location of pulses is coded by recursively partitioning each block into halves, and coding how many pulses fall on the left side of the split. All remaining pulses must fall on the right side of the split. The process then recurses into the left half, and after that returns, the right half (preorder traversal). The PDF to use is chosen by the size of the current partition (16, 8, 4, or 2) and the number of pulses in the partition (1 to 16, inclusive). Tables <xref format="counter" target="silk_shell_code3_pdfs"/> through <xref format="counter" target="silk_shell_code0_pdfs"/> list the PDFs used for each partition size and pulse count. This process skips partitions without any pulses, i.e., where the initial pulse count from <xref target="silk_pulse_counts"/> was zero, or where the split in the prior level indicated that all of the pulses fell on the other side. These partitions have nothing to code, so they require no PDF. 
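A non-normative sketch of this recursive split, where split_icdf() is a placeholder that selects the ICDF form of the appropriate PDF below based on the partition size and pulse count, is <figure align="center"> <artwork align="center"><![CDATA[
/* Placeholder: returns the ICDF selected by partition size and count
   from the tables below (not defined here). */
static const unsigned char *split_icdf(int size, int count);

/* Non-normative sketch: recursively decode the pulse locations for one
   shell block, called as decode_split(dec, pulses, 16, pulse_count).
   pulses[] receives the number of pulses at each position. */
static void decode_split(ec_dec *dec, int *pulses, int size, int count)
{
    int left, i;
    if (count == 0) {                      /* nothing to code */
        for (i = 0; i < size; i++) pulses[i] = 0;
        return;
    }
    if (size == 1) {
        pulses[0] = count;
        return;
    }
    left = ec_dec_icdf(dec, split_icdf(size, count), 8);
    decode_split(dec, pulses, size/2, left);                  /* left  */
    decode_split(dec, pulses + size/2, size/2, count - left); /* right */
}
]]></artwork> </figure>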
</t> <texttable anchor="silk_shell_code3_pdfs" title="PDFs for Pulse Count Split, 16 Sample Partitions"> <ttcol>Pulse Count</ttcol> <ttcol>PDF</ttcol> <c>1</c> <c>{126, 130}/256</c> <c>2</c> <c>{56, 142, 58}/256</c> <c>3</c> <c>{25, 101, 104, 26}/256</c> <c>4</c> <c>{12, 60, 108, 64, 12}/256</c> <c>5</c> <c>{7, 35, 84, 87, 37, 6}/256</c> <c>6</c> <c>{4, 20, 59, 86, 63, 21, 3}/256</c> <c>7</c> <c>{3, 12, 38, 72, 75, 42, 12, 2}/256</c> <c>8</c> <c>{2, 8, 25, 54, 73, 59, 27, 7, 1}/256</c> <c>9</c> <c>{2, 5, 17, 39, 63, 65, 42, 18, 4, 1}/256</c> <c>10</c> <c>{1, 4, 12, 28, 49, 63, 54, 30, 11, 3, 1}/256</c> <c>11</c> <c>{1, 4, 8, 20, 37, 55, 57, 41, 22, 8, 2, 1}/256</c> <c>12</c> <c>{1, 3, 7, 15, 28, 44, 53, 48, 33, 16, 6, 1, 1}/256</c> <c>13</c> <c>{1, 2, 6, 12, 21, 35, 47, 48, 40, 25, 12, 5, 1, 1}/256</c> <c>14</c> <c>{1, 1, 4, 10, 17, 27, 37, 47, 43, 33, 21, 9, 4, 1, 1}/256</c> <c>15</c> <c>{1, 1, 1, 8, 14, 22, 33, 40, 43, 38, 28, 16, 8, 1, 1, 1}/256</c> <c>16</c> <c>{1, 1, 1, 1, 13, 18, 27, 36, 41, 41, 34, 24, 14, 1, 1, 1, 1}/256</c> </texttable> <texttable anchor="silk_shell_code2_pdfs" title="PDFs for Pulse Count Split, 8 Sample Partitions"> <ttcol>Pulse Count</ttcol> <ttcol>PDF</ttcol> <c>1</c> <c>{127, 129}/256</c> <c>2</c> <c>{53, 149, 54}/256</c> <c>3</c> <c>{22, 105, 106, 23}/256</c> <c>4</c> <c>{11, 61, 111, 63, 10}/256</c> <c>5</c> <c>{6, 35, 86, 88, 36, 5}/256</c> <c>6</c> <c>{4, 20, 59, 87, 62, 21, 3}/256</c> <c>7</c> <c>{3, 13, 40, 71, 73, 41, 13, 2}/256</c> <c>8</c> <c>{3, 9, 27, 53, 70, 56, 28, 9, 1}/256</c> <c>9</c> <c>{3, 8, 19, 37, 57, 61, 44, 20, 6, 1}/256</c> <c>10</c> <c>{3, 7, 15, 28, 44, 54, 49, 33, 17, 5, 1}/256</c> <c>11</c> <c>{1, 7, 13, 22, 34, 46, 48, 38, 28, 14, 4, 1}/256</c> <c>12</c> <c>{1, 1, 11, 22, 27, 35, 42, 47, 33, 25, 10, 1, 1}/256</c> <c>13</c> <c>{1, 1, 6, 14, 26, 37, 43, 43, 37, 26, 14, 6, 1, 1}/256</c> <c>14</c> <c>{1, 1, 4, 10, 20, 31, 40, 42, 40, 31, 20, 10, 4, 1, 1}/256</c> <c>15</c> <c>{1, 1, 3, 8, 16, 26, 35, 38, 38, 35, 26, 16, 8, 3, 1, 1}/256</c> <c>16</c> <c>{1, 1, 2, 6, 12, 21, 30, 36, 38, 36, 30, 21, 12, 6, 2, 1, 1}/256</c> </texttable> <texttable anchor="silk_shell_code1_pdfs" title="PDFs for Pulse Count Split, 4 Sample Partitions"> <ttcol>Pulse Count</ttcol> <ttcol>PDF</ttcol> <c>1</c> <c>{127, 129}/256</c> <c>2</c> <c>{49, 157, 50}/256</c> <c>3</c> <c>{20, 107, 109, 20}/256</c> <c>4</c> <c>{11, 60, 113, 62, 10}/256</c> <c>5</c> <c>{7, 36, 84, 87, 36, 6}/256</c> <c>6</c> <c>{6, 24, 57, 82, 60, 23, 4}/256</c> <c>7</c> <c>{5, 18, 39, 64, 68, 42, 16, 4}/256</c> <c>8</c> <c>{6, 14, 29, 47, 61, 52, 30, 14, 3}/256</c> <c>9</c> <c>{1, 15, 23, 35, 51, 50, 40, 30, 10, 1}/256</c> <c>10</c> <c>{1, 1, 21, 32, 42, 52, 46, 41, 18, 1, 1}/256</c> <c>11</c> <c>{1, 6, 16, 27, 36, 42, 42, 36, 27, 16, 6, 1}/256</c> <c>12</c> <c>{1, 5, 12, 21, 31, 38, 40, 38, 31, 21, 12, 5, 1}/256</c> <c>13</c> <c>{1, 3, 9, 17, 26, 34, 38, 38, 34, 26, 17, 9, 3, 1}/256</c> <c>14</c> <c>{1, 3, 7, 14, 22, 29, 34, 36, 34, 29, 22, 14, 7, 3, 1}/256</c> <c>15</c> <c>{1, 2, 5, 11, 18, 25, 31, 35, 35, 31, 25, 18, 11, 5, 2, 1}/256</c> <c>16</c> <c>{1, 1, 4, 9, 15, 21, 28, 32, 34, 32, 28, 21, 15, 9, 4, 1, 1}/256</c> </texttable> <texttable anchor="silk_shell_code0_pdfs" title="PDFs for Pulse Count Split, 2 Sample Partitions"> <ttcol>Pulse Count</ttcol> <ttcol>PDF</ttcol> <c>1</c> <c>{128, 128}/256</c> <c>2</c> <c>{42, 172, 42}/256</c> <c>3</c> <c>{21, 107, 107, 21}/256</c> <c>4</c> <c>{12, 60, 112, 61, 11}/256</c> <c>5</c> <c>{8, 34, 86, 86, 35, 7}/256</c> <c>6</c> <c>{8, 23, 55, 90, 
55, 20, 5}/256</c> <c>7</c> <c>{5, 15, 38, 72, 72, 36, 15, 3}/256</c> <c>8</c> <c>{6, 12, 27, 52, 77, 47, 20, 10, 5}/256</c> <c>9</c> <c>{6, 19, 28, 35, 40, 40, 35, 28, 19, 6}/256</c> <c>10</c> <c>{4, 14, 22, 31, 37, 40, 37, 31, 22, 14, 4}/256</c> <c>11</c> <c>{3, 10, 18, 26, 33, 38, 38, 33, 26, 18, 10, 3}/256</c> <c>12</c> <c>{2, 8, 13, 21, 29, 36, 38, 36, 29, 21, 13, 8, 2}/256</c> <c>13</c> <c>{1, 5, 10, 17, 25, 32, 38, 38, 32, 25, 17, 10, 5, 1}/256</c> <c>14</c> <c>{1, 4, 7, 13, 21, 29, 35, 36, 35, 29, 21, 13, 7, 4, 1}/256</c> <c>15</c> <c>{1, 2, 5, 10, 17, 25, 32, 36, 36, 32, 25, 17, 10, 5, 2, 1}/256</c> <c>16</c> <c>{1, 2, 4, 7, 13, 21, 28, 34, 36, 34, 28, 21, 13, 7, 4, 2, 1}/256</c> </texttable> </section> <section anchor="silk_shell_lsb" title="LSB Decoding"> <t> After the decoder reads the pulse locations for all blocks, it reads the LSBs (if any) for each block in turn. Inside each block, it reads all the LSBs for each coefficient in turn, even those where no pulses were allocated, before proceeding to the next one. For 10 ms MB frames, it reads LSBs even for the extra 8 samples in the last block. The LSBs are coded from most significant to least significant, and they all use the PDF in <xref target="silk_shell_lsb_pdf"/>. </t> <texttable anchor="silk_shell_lsb_pdf" title="PDF for Excitation LSBs"> <ttcol>PDF</ttcol> <c>{136, 120}/256</c> </texttable> <t> The number of LSBs read for each coefficient in a block is determined in <xref target="silk_pulse_counts"/>. The magnitude of the coefficient is initially equal to the number of pulses placed at that location in <xref target="silk_pulse_locations"/>. As each LSB is decoded, the magnitude is doubled, and then the value of the LSB added to it, to obtain an updated magnitude. </t> </section> <section anchor="silk_signs" title="Sign Decoding"> <t> After decoding the pulse locations and the LSBs, the decoder knows the magnitude of each coefficient in the excitation. It then decodes a sign for all coefficients with a non-zero magnitude, using one of the PDFs from <xref target="silk_sign_pdfs"/>. If the value decoded is 0, then the coefficient magnitude is negated. Otherwise, it remains positive. </t> <t> The decoder chooses the PDF for the sign based on the signal type and quantization offset type (from <xref target="silk_frame_type"/>) and the number of pulses in the block (from <xref target="silk_pulse_counts"/>). The number of pulses in the block does not take into account any LSBs. Most PDFs are skewed towards negative signs because of the quantization offset, but the PDFs for zero pulses are highly skewed towards positive signs. If a block contains many positive coefficients, it is sometimes beneficial to code it solely using LSBs (i.e., with zero pulses), since the encoder may be able to save enough bits on the signs to justify the less efficient coefficient magnitude encoding. 
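Ignoring the block-interleaved decoding order described above, a non-normative sketch of forming a single coefficient's signed magnitude from its pulses, extra LSBs, and sign is <figure align="center"> <artwork align="center"><![CDATA[
/* Non-normative sketch: combine the pulse count at one location with its
   extra LSBs (most significant first) and its sign.  lsb_icdf and
   sign_icdf are placeholders for the ICDF forms of the corresponding
   PDFs; lsb_count comes from the pulse count decoding above. */
static int decode_coefficient(ec_dec *dec, int pulses_here, int lsb_count,
                              const unsigned char *lsb_icdf,
                              const unsigned char *sign_icdf)
{
    int magnitude = pulses_here;
    int b;
    for (b = 0; b < lsb_count; b++)
        magnitude = (magnitude << 1) + ec_dec_icdf(dec, lsb_icdf, 8);
    if (magnitude > 0 && ec_dec_icdf(dec, sign_icdf, 8) == 0)
        return -magnitude;    /* a decoded 0 means a negative sign */
    return magnitude;
}
]]></artwork> </figure>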
</t> <texttable anchor="silk_sign_pdfs" title="PDFs for Excitation Signs"> <ttcol>Signal Type</ttcol> <ttcol>Quantization Offset Type</ttcol> <ttcol>Pulse Count</ttcol> <ttcol>PDF</ttcol> <c>Inactive</c> <c>Low</c> <c>0</c> <c>{2, 254}/256</c> <c>Inactive</c> <c>Low</c> <c>1</c> <c>{207, 49}/256</c> <c>Inactive</c> <c>Low</c> <c>2</c> <c>{189, 67}/256</c> <c>Inactive</c> <c>Low</c> <c>3</c> <c>{179, 77}/256</c> <c>Inactive</c> <c>Low</c> <c>4</c> <c>{174, 82}/256</c> <c>Inactive</c> <c>Low</c> <c>5</c> <c>{163, 93}/256</c> <c>Inactive</c> <c>Low</c> <c>6 or more</c> <c>{157, 99}/256</c> <c>Inactive</c> <c>High</c> <c>0</c> <c>{58, 198}/256</c> <c>Inactive</c> <c>High</c> <c>1</c> <c>{245, 11}/256</c> <c>Inactive</c> <c>High</c> <c>2</c> <c>{238, 18}/256</c> <c>Inactive</c> <c>High</c> <c>3</c> <c>{232, 24}/256</c> <c>Inactive</c> <c>High</c> <c>4</c> <c>{225, 31}/256</c> <c>Inactive</c> <c>High</c> <c>5</c> <c>{220, 36}/256</c> <c>Inactive</c> <c>High</c> <c>6 or more</c> <c>{211, 45}/256</c> <c>Unvoiced</c> <c>Low</c> <c>0</c> <c>{1, 255}/256</c> <c>Unvoiced</c> <c>Low</c> <c>1</c> <c>{210, 46}/256</c> <c>Unvoiced</c> <c>Low</c> <c>2</c> <c>{190, 66}/256</c> <c>Unvoiced</c> <c>Low</c> <c>3</c> <c>{178, 78}/256</c> <c>Unvoiced</c> <c>Low</c> <c>4</c> <c>{169, 87}/256</c> <c>Unvoiced</c> <c>Low</c> <c>5</c> <c>{162, 94}/256</c> <c>Unvoiced</c> <c>Low</c> <c>6 or more</c> <c>{152, 104}/256</c> <c>Unvoiced</c> <c>High</c> <c>0</c> <c>{48, 208}/256</c> <c>Unvoiced</c> <c>High</c> <c>1</c> <c>{242, 14}/256</c> <c>Unvoiced</c> <c>High</c> <c>2</c> <c>{235, 21}/256</c> <c>Unvoiced</c> <c>High</c> <c>3</c> <c>{224, 32}/256</c> <c>Unvoiced</c> <c>High</c> <c>4</c> <c>{214, 42}/256</c> <c>Unvoiced</c> <c>High</c> <c>5</c> <c>{205, 51}/256</c> <c>Unvoiced</c> <c>High</c> <c>6 or more</c> <c>{190, 66}/256</c> <c>Voiced</c> <c>Low</c> <c>0</c> <c>{1, 255}/256</c> <c>Voiced</c> <c>Low</c> <c>1</c> <c>{162, 94}/256</c> <c>Voiced</c> <c>Low</c> <c>2</c> <c>{152, 104}/256</c> <c>Voiced</c> <c>Low</c> <c>3</c> <c>{147, 109}/256</c> <c>Voiced</c> <c>Low</c> <c>4</c> <c>{144, 112}/256</c> <c>Voiced</c> <c>Low</c> <c>5</c> <c>{141, 115}/256</c> <c>Voiced</c> <c>Low</c> <c>6 or more</c> <c>{138, 118}/256</c> <c>Voiced</c> <c>High</c> <c>0</c> <c>{8, 248}/256</c> <c>Voiced</c> <c>High</c> <c>1</c> <c>{203, 53}/256</c> <c>Voiced</c> <c>High</c> <c>2</c> <c>{187, 69}/256</c> <c>Voiced</c> <c>High</c> <c>3</c> <c>{176, 80}/256</c> <c>Voiced</c> <c>High</c> <c>4</c> <c>{168, 88}/256</c> <c>Voiced</c> <c>High</c> <c>5</c> <c>{161, 95}/256</c> <c>Voiced</c> <c>High</c> <c>6 or more</c> <c>{154, 102}/256</c> </texttable> </section> <section anchor="silk_excitation_reconstruction" title="Reconstructing the Excitation"> <t> After the signs have been read, there is enough information to reconstruct the complete excitation signal. This requires adding a constant quantization offset to each non-zero sample, and then pseudorandomly inverting and offsetting every sample. The constant quantization offset varies depending on the signal type and quantization offset type (see <xref target="silk_frame_type"/>). 
</t> <texttable anchor="silk_quantization_offsets" title="Excitation Quantization Offsets"> <ttcol align="left">Signal Type</ttcol> <ttcol align="left">Quantization Offset Type</ttcol> <ttcol align="right">Quantization Offset (Q23)</ttcol> <c>Inactive</c> <c>Low</c> <c>25</c> <c>Inactive</c> <c>High</c> <c>60</c> <c>Unvoiced</c> <c>Low</c> <c>25</c> <c>Unvoiced</c> <c>High</c> <c>60</c> <c>Voiced</c> <c>Low</c> <c>8</c> <c>Voiced</c> <c>High</c> <c>25</c> </texttable> <t> Let e_raw[i] be the raw excitation value at position i, with a magnitude composed of the pulses at that location (see <xref target="silk_pulse_locations"/>) combined with any additional LSBs (see <xref target="silk_shell_lsb"/>), and with the corresponding sign decoded in <xref target="silk_signs"/>. Additionally, let seed be the current pseudorandom seed, which is initialized to the value decoded from <xref target="silk_seed"/> for the first sample in the current SILK frame, and updated for each subsequent sample according to the procedure below. Finally, let offset_Q23 be the quantization offset from <xref target="silk_quantization_offsets"/>. Then the following procedure produces the final reconstructed excitation value, e_Q23[i]: <figure align="center"> <artwork align="center"><![CDATA[ e_Q23[i] = (e_raw[i] << 8) - sign(e_raw[i])*20 + offset_Q23; seed = (196314165*seed + 907633515) & 0xFFFFFFFF; e_Q23[i] = (seed & 0x80000000) ? -e_Q23[i] : e_Q23[i]; seed = (seed + e_raw[i]) & 0xFFFFFFFF; ]]></artwork> </figure> When e_raw[i] is zero, sign() returns 0 by the definition in <xref target="sign"/>, so the factor of 20 does not get added. The final e_Q23[i] value may require more than 16 bits per sample, but will not require more than 23, including the sign. </t> </section> </section> <section anchor="silk_frame_reconstruction" toc="include" title="SILK Frame Reconstruction"> <t> The remainder of the reconstruction process for the frame does not need to be bit-exact, as small errors should only introduce proportionally small distortions. Although the reference implementation only includes a fixed-point version of the remaining steps, this section describes them in terms of a floating-point version for simplicity. This produces a signal with a nominal range of -1.0 to 1.0. </t> <t> silk_decode_core() (decode_core.c) contains the code for the main reconstruction process. It proceeds subframe-by-subframe, since quantization gains, LTP parameters, and (in 20 ms SILK frames) LPC coefficients can vary from one to the next. </t> <t> Let a_Q12[k] be the LPC coefficients for the current subframe. If this is the first or second subframe of a 20 ms SILK frame and the LSF interpolation factor, w_Q2 (see <xref target="silk_nlsf_interpolation"/>), is less than 4, then these correspond to the final LPC coefficients produced by <xref target="silk_lpc_gain_limit"/> from the interpolated LSF coefficients, n1_Q15[k] (computed in <xref target="silk_nlsf_interpolation"/>). Otherwise, they correspond to the final LPC coefficients produced from the uninterpolated LSF coefficients for the current frame, n2_Q15[k]. </t> <t> Also, let n be the number of samples in a subframe (40 for NB, 60 for MB, and 80 for WB), s be the index of the current subframe in this SILK frame (0 or 1 for 10 ms frames, or 0 to 3 for 20 ms frames), and j be the index of the first sample in the residual corresponding to the current subframe. 
</t> <section anchor="silk_ltp_synthesis" title="LTP Synthesis"> <t> Voiced SILK frames (see <xref target="silk_frame_type"/>) pass the excitation through an LTP filter using the parameters decoded in <xref target="silk_ltp_params"/> to produce an LPC residual. The LTP filter requires LPC residual values from before the current subframe as input. However, since the LPC coefficients may have changed, it obtains this residual by "rewhitening" the corresponding output signal using the LPC coefficients from the current subframe. Let out[i] for (j - pitch_lags[s] - d_LPC - 2) <= i < j be the fully reconstructed output signal from the last (pitch_lags[s] + d_LPC + 2) samples of previous subframes (see <xref target="silk_lpc_synthesis"/>), where pitch_lags[s] is the pitch lag for the current subframe from <xref target="silk_ltp_lags"/>. During reconstruction of the first subframe for this channel after either <list style="symbols"> <t>An uncoded regular SILK frame (if this is the side channel), or</t> <t>A decoder reset (see <xref target="decoder-reset"/>),</t> </list> out[] is rewhitened into an LPC residual, res[i], via <figure align="center"> <artwork align="center"><![CDATA[ 4.0*LTP_scale_Q14 res[i] = ----------------- * clamp(-1.0, gain_Q16[s] d_LPC-1 __ a_Q12[k] out[i] - \ out[i-k-1] * --------, 1.0) . /_ 4096.0 k=0 ]]></artwork> </figure> This requires storage to buffer up to 306 values of out[i] from previous subframes. This corresponds to WB with a maximum pitch lag of 18 ms * 16 kHz samples, plus 16 samples for d_LPC, plus 2 samples for the width of the LTP filter. </t> <t> Let e_Q23[i] for j <= i < (j + n) be the excitation for the current subframe, and b_Q7[k] for 0 <= k < 5 be the coefficients of the LTP filter taken from the codebook entry in one of Tables <xref format="counter" target="silk_ltp_filter_coeffs0"/> through <xref format="counter" target="silk_ltp_filter_coeffs2"/> corresponding to the index decoded for the current subframe in <xref target="silk_ltp_filter"/>. Then for i such that j <= i < (j + n), the LPC residual is <figure align="center"> <artwork align="center"><![CDATA[ 4 e_Q23[i] __ b_Q7[k] res[i] = --------- + \ res[i - pitch_lags[s] + 2 - k] * ------- . 2.0**23 /_ 128.0 k=0 ]]></artwork> </figure> </t> <t> For unvoiced frames, the LPC residual for j <= i < (j + n) is simply a normalized copy of the excitation signal, i.e., <figure align="center"> <artwork align="center"><![CDATA[ e_Q23[i] res[i] = --------- 2.0**23 ]]></artwork> </figure> </t> </section> <section anchor="silk_lpc_synthesis" title="LPC Synthesis"> <t> LPC synthesis uses the short-term LPC filter to predict the next output coefficient. For i such that (j - d_LPC) <= i < j, let lpc[i] be the result of LPC synthesis from the last d_LPC samples of the previous subframe, or zeros in the first subframe for this channel after either <list style="symbols"> <t>An uncoded regular SILK frame (if this is the side channel), or</t> <t>A decoder reset (see <xref target="decoder-reset"/>).</t> </list> Then for i such that j <= i < (j + n), the result of LPC synthesis for the current subframe is <figure align="center"> <artwork align="center"><![CDATA[ d_LPC-1 gain_Q16[s] __ a_Q12[k] lpc[i] = ----------- * res[i] + \ lpc[i-k-1] * -------- . 65536.0 /_ 4096.0 k=0 ]]></artwork> </figure> The decoder saves the final d_LPC values, i.e., lpc[i] such that (j + n - d_LPC) <= i < (j + n), to feed into the LPC synthesis of the next subframe. This requires storage for up to 16 values of lpc[i] (for WB frames). 
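A non-normative floating-point sketch of this synthesis loop for one subframe is <figure align="center"> <artwork align="center"><![CDATA[
/* Non-normative sketch of LPC synthesis for one subframe.  lpc[] must
   contain the final d_LPC samples of the previous subframe (or zeros
   after a reset) at indices j-d_LPC through j-1. */
static void lpc_synthesis_sketch(double *lpc, const double *res,
                                 const int *gain_Q16, const int *a_Q12,
                                 int d_LPC, int j, int n, int s)
{
    int i, k;
    for (i = j; i < j + n; i++) {
        double sum = (gain_Q16[s]/65536.0)*res[i];
        for (k = 0; k < d_LPC; k++)
            sum += lpc[i-k-1]*(a_Q12[k]/4096.0);
        lpc[i] = sum;
    }
}
]]></artwork> </figure>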
</t> <t> Then, the signal is clamped into the final nominal range: <figure align="center"> <artwork align="center"><![CDATA[ out[i] = clamp(-1.0, lpc[i], 1.0) . ]]></artwork> </figure> This clamping occurs entirely after the LPC synthesis filter has run. The decoder saves the unclamped values, lpc[i], to feed into the LPC filter for the next subframe, but saves the clamped values, out[i], for rewhitening in voiced frames. </t> </section> </section> </section> <section anchor="silk_stereo_unmixing" title="Stereo Unmixing"> <t> For stereo streams, after decoding a frame from each channel, the decoder must convert the mid-side (MS) representation into a left-right (LR) representation. The function silk_stereo_MS_to_LR (stereo_MS_to_LR.c) implements this process. In it, the decoder predicts the side channel using a) a simple low-passed version of the mid channel, and b) the unfiltered mid channel, using the prediction weights decoded in <xref target="silk_stereo_pred"/>. This simple low-pass filter imposes a one-sample delay, and the unfiltered mid channel is also delayed by one sample. In order to allow seamless switching between stereo and mono, mono streams must also impose the same one-sample delay. The encoder requires an additional one-sample delay for both mono and stereo streams, though an encoder may omit the delay for mono if it knows it will never switch to stereo. </t> <t> The unmixing process operates in two phases. The first phase lasts for 8 ms, during which it interpolates the prediction weights from the previous frame, prev_w0_Q13 and prev_w1_Q13, to the values for the current frame, w0_Q13 and w1_Q13. The second phase simply uses these weights for the remainder of the frame. </t> <t> Let mid[i] and side[i] be the contents of out[i] (from <xref target="silk_lpc_synthesis"/>) for the current mid and side channels, respectively, and let left[i] and right[i] be the corresponding stereo output channels. If the side channel is not coded (see <xref target="silk_mid_only_flag"/>), then side[i] is set to zero. Also let j be defined as in <xref target="silk_frame_reconstruction"/>, n1 be the number of samples in phase 1 (64 for NB, 96 for MB, and 128 for WB), and n2 be the total number of samples in the frame. Then for i such that j <= i < (j + n2), the left and right channel output is <figure align="center"> <artwork align="center"><![CDATA[ prev_w0_Q13 (w0_Q13 - prev_w0_Q13) w0 = ----------- + min(i - j, n1)*---------------------- , 8192.0 8192.0*n1 prev_w1_Q13 (w1_Q13 - prev_w1_Q13) w1 = ----------- + min(i - j, n1)*---------------------- , 8192.0 8192.0*n1 mid[i-2] + 2*mid[i-1] + mid[i] p0 = ------------------------------ , 4.0 left[i] = clamp(-1.0, (1 + w1)*mid[i-1] + side[i-1] + w0*p0, 1.0) , right[i] = clamp(-1.0, (1 - w1)*mid[i-1] - side[i-1] - w0*p0, 1.0) . ]]></artwork> </figure> These formulas require two samples prior to index j, the start of the frame, for the mid channel, and one prior sample for the side channel. For the first frame after a decoder reset, zeros are used instead. </t> </section> <section title="Resampling"> <t> After stereo unmixing (if any), the decoder applies resampling to convert the decoded SILK output to the sample rate desired by the application. This is necessary when decoding a Hybrid frame at SWB or FB sample rates, or whenever the decoder wants the output at a different sample rate than the internal SILK sampling rate (e.g., to allow a constant sample rate when the audio bandwidth changes, or to allow mixing with audio from other applications). 
The resampler itself is non-normative, and a decoder can use any method it wants to perform the resampling. </t> <t> However, a minimum amount of delay is imposed to allow the resampler to operate, and this delay is normative, so that the corresponding delay can be applied to the MDCT layer in the encoder. A decoder is always free to use a resampler which requires more delay than allowed for here (e.g., to improve quality), but it must then delay the output of the MDCT layer by this extra amount. Keeping as much delay as possible on the encoder side allows an encoder which knows it will never use any of the SILK or Hybrid modes to skip this delay. By contrast, if it were all applied by the decoder, then a decoder which processes audio in fixed-size blocks would be forced to delay the output of CELT frames just in case of a later switch to a SILK or Hybrid mode. </t> <t> <xref target="silk_resampler_delay_alloc"/> gives the maximum resampler delay in milliseconds for each SILK audio bandwidth. Because the actual output rate may not be 48 kHz, it may not be possible to achieve exactly these delays while using a whole number of input or output samples. The reference implementation is able to resample to any of the supported output sampling rates (8, 12, 16, 24, or 48 kHz) within or near this delay constraint. Some resampling filters (including those used by the reference implementation) may add a delay that is not an exact integer, or is not linear-phase, and so cannot be represented by a single delay at all frequencies. However, such deviations are unlikely to be perceptible, and the comparison tool described in <xref target="conformance"/> is designed to be relatively insensitive to them. The delays listed here are the ones that should be targeted by the encoder. </t> <texttable anchor="silk_resampler_delay_alloc" title="SILK Resampler Delay Allocations"> <ttcol>Audio Bandwidth</ttcol> <ttcol>Delay in Milliseconds</ttcol> <c>NB</c> <c>0.538</c> <c>MB</c> <c>0.692</c> <c>WB</c> <c>0.706</c> </texttable> <t> NB is given a smaller decoder delay allocation than MB and WB to allow a higher-order filter when resampling to 8 kHz in both the encoder and decoder. This implies that the audio content of two SILK frames operating at different bandwidths is not perfectly aligned in time. This is not an issue for any transitions described in <xref target="switching"/>, because they all involve a SILK decoder reset. When the decoder is reset, any samples remaining in the resampling buffer are discarded, and the resampler is re-initialized with silence. </t> </section> </section> <section title="CELT Decoder"> <t> The CELT layer of Opus is based on the Modified Discrete Cosine Transform <xref target='MDCT'/> with partially overlapping windows of 5 to 22.5 ms. The main principle behind CELT is that the MDCT spectrum is divided into bands that (roughly) follow the Bark scale, i.e., the scale of the ear's critical bands <xref target="Zwicker61"/>. The normal CELT layer uses 21 of those bands, though Opus Custom (see <xref target="opus-custom"/>) may use a different number of bands. In Hybrid mode, the first 17 bands (up to 8 kHz) are not coded. A band can contain as little as one MDCT bin per channel, and as many as 176 bins per channel, as detailed in <xref target="celt_band_sizes"/>. In each band, the gain (energy) is coded separately from the shape of the spectrum. Coding the gain explicitly makes it easy to preserve the spectral envelope of the signal.
The remaining unit-norm shape vector is encoded using a Pyramid Vector Quantizer (PVQ) <xref target='PVQ-decoder'/>. </t> <texttable anchor="celt_band_sizes" title="MDCT Bins Per Channel Per Band for Each Frame Size"> <ttcol>Frame Size:</ttcol> <ttcol align="right">2.5 ms</ttcol> <ttcol align="right">5 ms</ttcol> <ttcol align="right">10 ms</ttcol> <ttcol align="right">20 ms</ttcol> <ttcol align="right">Start Frequency</ttcol> <ttcol align="right">Stop Frequency</ttcol> <c>Band</c> <c>Bins:</c> <c/> <c/> <c/> <c/> <c/> <c>0</c> <c>1</c> <c>2</c> <c>4</c> <c>8</c> <c>0 Hz</c> <c>200 Hz</c> <c>1</c> <c>1</c> <c>2</c> <c>4</c> <c>8</c> <c>200 Hz</c> <c>400 Hz</c> <c>2</c> <c>1</c> <c>2</c> <c>4</c> <c>8</c> <c>400 Hz</c> <c>600 Hz</c> <c>3</c> <c>1</c> <c>2</c> <c>4</c> <c>8</c> <c>600 Hz</c> <c>800 Hz</c> <c>4</c> <c>1</c> <c>2</c> <c>4</c> <c>8</c> <c>800 Hz</c> <c>1000 Hz</c> <c>5</c> <c>1</c> <c>2</c> <c>4</c> <c>8</c> <c>1000 Hz</c> <c>1200 Hz</c> <c>6</c> <c>1</c> <c>2</c> <c>4</c> <c>8</c> <c>1200 Hz</c> <c>1400 Hz</c> <c>7</c> <c>1</c> <c>2</c> <c>4</c> <c>8</c> <c>1400 Hz</c> <c>1600 Hz</c> <c>8</c> <c>2</c> <c>4</c> <c>8</c> <c>16</c> <c>1600 Hz</c> <c>2000 Hz</c> <c>9</c> <c>2</c> <c>4</c> <c>8</c> <c>16</c> <c>2000 Hz</c> <c>2400 Hz</c> <c>10</c> <c>2</c> <c>4</c> <c>8</c> <c>16</c> <c>2400 Hz</c> <c>2800 Hz</c> <c>11</c> <c>2</c> <c>4</c> <c>8</c> <c>16</c> <c>2800 Hz</c> <c>3200 Hz</c> <c>12</c> <c>4</c> <c>8</c> <c>16</c> <c>32</c> <c>3200 Hz</c> <c>4000 Hz</c> <c>13</c> <c>4</c> <c>8</c> <c>16</c> <c>32</c> <c>4000 Hz</c> <c>4800 Hz</c> <c>14</c> <c>4</c> <c>8</c> <c>16</c> <c>32</c> <c>4800 Hz</c> <c>5600 Hz</c> <c>15</c> <c>6</c> <c>12</c> <c>24</c> <c>48</c> <c>5600 Hz</c> <c>6800 Hz</c> <c>16</c> <c>6</c> <c>12</c> <c>24</c> <c>48</c> <c>6800 Hz</c> <c>8000 Hz</c> <c>17</c> <c>8</c> <c>16</c> <c>32</c> <c>64</c> <c>8000 Hz</c> <c>9600 Hz</c> <c>18</c> <c>12</c> <c>24</c> <c>48</c> <c>96</c> <c>9600 Hz</c> <c>12000 Hz</c> <c>19</c> <c>18</c> <c>36</c> <c>72</c> <c>144</c> <c>12000 Hz</c> <c>15600 Hz</c> <c>20</c> <c>22</c> <c>44</c> <c>88</c> <c>176</c> <c>15600 Hz</c> <c>20000 Hz</c> </texttable> <t> Transients are notoriously difficult for transform codecs to code. CELT uses two different strategies for them: <list style="numbers"> <t>Using multiple smaller MDCTs instead of a single large MDCT, and</t> <t>Dynamic time-frequency resolution changes (See <xref target='tf-change'/>).</t> </list> To improve quality on highly tonal and periodic signals, CELT includes a prefilter/postfilter combination. The prefilter on the encoder side attenuates the signal's harmonics. The postfilter on the decoder side restores the original gain of the harmonics, while shaping the coding noise to roughly follow the harmonics. Such noise shaping reduces the perception of the noise. </t> <t> When coding a stereo signal, three coding methods are available: <list style="symbols"> <t>mid-side stereo: encodes the mean and the difference of the left and right channels,</t> <t>intensity stereo: only encodes the mean of the left and right channels (discards the difference),</t> <t>dual stereo: encodes the left and right channels separately.</t> </list> </t> <t> An overview of the decoder is given in <xref target="celt-decoder-overview"/>. 
</t> <figure anchor="celt-decoder-overview" title="Structure of the CELT decoder"> <artwork align="center"><![CDATA[ +---------+ | Coarse | +->| decoder |----+ | +---------+ | | | | +---------+ v | | Fine | +---+ +->| decoder |->| + | | +---------+ +---+ | ^ | +---------+ | | | | Range | | +----------+ v | Decoder |-+ | Bit | +------+ +---------+ | |Allocation| | 2**x | | +----------+ +------+ | | | | v v +--------+ | +---------+ +---+ +-------+ | pitch | +->| PVQ |->| * |->| IMDCT |->| post- |---> | | decoder | +---+ +-------+ | filter | | +---------+ +--------+ | ^ +--------------------------------------+ ]]></artwork> </figure> <t> The decoder is based on the following symbols and sets of symbols: </t> <texttable anchor="celt_symbols" title="Order of the Symbols in the CELT Section of the Bitstream"> <ttcol align="center">Symbol(s)</ttcol> <ttcol align="center">PDF</ttcol> <ttcol align="center">Condition</ttcol> <c>silence</c> <c>{32767, 1}/32768</c> <c></c> <c>post-filter</c> <c>{1, 1}/2</c> <c></c> <c>octave</c> <c>uniform (6)</c><c>post-filter</c> <c>period</c> <c>raw bits (4+octave)</c><c>post-filter</c> <c>gain</c> <c>raw bits (3)</c><c>post-filter</c> <c>tapset</c> <c>{2, 1, 1}/4</c><c>post-filter</c> <c>transient</c> <c>{7, 1}/8</c><c></c> <c>intra</c> <c>{7, 1}/8</c><c></c> <c>coarse energy</c><c><xref target="energy-decoding"/></c><c></c> <c>tf_change</c> <c><xref target="transient-decoding"/></c><c></c> <c>tf_select</c> <c>{1, 1}/2</c><c><xref target="transient-decoding"/></c> <c>spread</c> <c>{7, 2, 21, 2}/32</c><c></c> <c>dyn. alloc.</c> <c><xref target="allocation"/></c><c></c> <c>alloc. trim</c> <c>{2, 2, 5, 10, 22, 46, 22, 10, 5, 2, 2}/128</c><c></c> <c>skip</c> <c>{1, 1}/2</c><c><xref target="allocation"/></c> <c>intensity</c> <c>uniform</c><c><xref target="allocation"/></c> <c>dual</c> <c>{1, 1}/2</c><c></c> <c>fine energy</c> <c><xref target="energy-decoding"/></c><c></c> <c>residual</c> <c><xref target="PVQ-decoder"/></c><c></c> <c>anti-collapse</c><c>{1, 1}/2</c><c><xref target="anti-collapse"/></c> <c>finalize</c> <c><xref target="energy-decoding"/></c><c></c> </texttable> <t> The decoder extracts information from the range-coded bitstream in the order described in <xref target='celt_symbols'/>. In some circumstances, it is possible for a decoded value to be out of range due to a very small amount of redundancy in the encoding of large integers by the range coder. In that case, the decoder should assume there has been an error in the coding, decoding, or transmission and SHOULD take measures to conceal the error and/or report to the application that a problem has occurred. Such out of range errors cannot occur in the SILK layer. </t> <section anchor="transient-decoding" title="Transient Decoding"> <t> The "transient" flag indicates whether the frame uses a single long MDCT or several short MDCTs. When it is set, then the MDCT coefficients represent multiple short MDCTs in the frame. When not set, the coefficients represent a single long MDCT for the frame. The flag is encoded in the bitstream with a probability of 1/8. In addition to the global transient flag is a per-band binary flag to change the time-frequency (tf) resolution independently in each band. The change in tf resolution is defined in tf_select_table[][] in celt.c and depends on the frame size, whether the transient flag is set, and the value of tf_select. 
The tf_select flag uses a 1/2 probability, but is only decoded if it can have an impact on the result knowing the value of all per-band tf_change flags. </t> </section> <section anchor="energy-decoding" title="Energy Envelope Decoding"> <t> It is important to quantize the energy with sufficient resolution because any energy quantization error cannot be compensated for at a later stage. Regardless of the resolution used for encoding the spectral shape of a band, it is perceptually important to preserve the energy in each band. CELT uses a three-step coarse-fine-fine strategy for encoding the energy in the base-2 log domain, as implemented in quant_bands.c.</t> <section anchor="coarse-energy-decoding" title="Coarse energy decoding"> <t> Coarse quantization of the energy uses a fixed resolution of 6 dB (integer part of base-2 log). To minimize the bitrate, prediction is applied both in time (using the previous frame) and in frequency (using the previous bands). The part of the prediction that is based on the previous frame can be disabled, creating an "intra" frame where the energy is coded without reference to prior frames. The decoder first reads the intra flag to determine what prediction is used. The 2-D z-transform <xref target='z-transform'/> of the prediction filter is: <figure align="center"> <artwork align="center"><![CDATA[ -1 -1 (1 - alpha*z_l )*(1 - z_b ) A(z_l, z_b) = ----------------------------- -1 1 - beta*z_b ]]></artwork> </figure> where b is the band index and l is the frame index. The prediction coefficients applied depend on the frame size in use when not using intra energy and are alpha=0, beta=4915/32768 when using intra energy. The time-domain prediction is based on the final fine quantization of the previous frame, while the frequency domain (within the current frame) prediction is based on coarse quantization only (because the fine quantization has not been computed yet). The prediction is clamped internally so that fixed point implementations with limited dynamic range always remain in the same state as floating point implementations. We approximate the ideal probability distribution of the prediction error using a Laplace distribution with separate parameters for each frame size in intra- and inter-frame modes. These parameters are held in the e_prob_model table in quant_bands.c. The coarse energy quantization is performed by unquant_coarse_energy() and unquant_coarse_energy_impl() (quant_bands.c). The decoding of the Laplace-distributed values is implemented in ec_laplace_decode() (laplace.c). </t> </section> <section anchor="fine-energy-decoding" title="Fine energy quantization"> <t> The number of bits assigned to fine energy quantization in each band is determined by the bit allocation computation described in <xref target="allocation"></xref>. Let B_i be the number of fine energy bits for band i; the refinement is an integer f in the range [0,2**B_i-1]. The mapping between f and the correction applied to the coarse energy is equal to (f+1/2)/2**B_i - 1/2. Fine energy quantization is implemented in quant_fine_energy() (quant_bands.c). </t> <t> When some bits are left "unused" after all other flags have been decoded, these bits are assigned to a "final" step of fine allocation. In effect, these bits are used to add one extra fine energy bit per band per channel. The allocation process determines two "priorities" for the final fine bits. Any remaining bits are first assigned only to bands of priority 0, starting from band 0 and going up.
If all bands of priority 0 have received one bit per channel, then bands of priority 1 are assigned an extra bit per channel, starting from band 0. If any bits are left after this, they are left unused. This is implemented in unquant_energy_finalise() (quant_bands.c). </t> </section> <!-- fine energy --> </section> <!-- Energy decode --> <section anchor="allocation" title="Bit Allocation"> <t>Because the bit allocation drives the decoding of the range-coder stream, it MUST be recovered exactly so that identical coding decisions are made in the encoder and decoder. Any deviation from the reference's resulting bit allocation will result in corrupted output, though implementers are free to implement the procedure in any way which produces identical results.</t> <t>The per-band gain-shape structure of the CELT layer ensures that using the same number of bits for the spectral shape of a band in every frame will result in a roughly constant signal-to-noise ratio in that band. This results in coding noise that has the same spectral envelope as the signal. The masking curve produced by a standard psychoacoustic model also closely follows the spectral envelope of the signal. This structure means that the ideal allocation is more consistent from frame to frame than it is for other codecs without an equivalent structure, and that a fixed allocation provides fairly consistent perceptual performance <xref target='Valin2010'/>.</t> <t>Many codecs transmit significant amounts of side information to control the bit allocation within a frame. Often this control is only indirect, and must be exercised carefully to achieve the desired rate constraints. The CELT layer, however, can adapt over a very wide range of rates, and thus has a large number of codebook sizes to choose from for each band. Explicitly signaling the size of each of these codebooks would impose considerable overhead, even though the allocation is relatively static from frame to frame. This is because all of the information required to compute these codebook sizes must be derived from a single frame by itself, in order to retain robustness to packet loss, so the signaling cannot take advantage of knowledge of the allocation in neighboring frames. This problem is exacerbated in low-latency (small frame size) applications, which would include this overhead in every frame.</t> <t>For this reason, in the MDCT mode Opus uses a primarily implicit bit allocation. The available bitstream capacity is known in advance to both the encoder and decoder without additional signaling, ultimately from the packet sizes expressed by a higher-level protocol. Using this information, the codec interpolates an allocation from a hard-coded table.</t> <t>While the band-energy structure effectively models intra-band masking, it ignores the weaker inter-band masking, band-temporal masking, and other less significant perceptual effects. While these effects can often be ignored, they can become significant for particular samples. One mechanism available to encoders would be to simply increase the overall rate for these frames, but this is not possible in a constant rate mode and can be fairly inefficient. As a result three explicitly signaled mechanisms are provided to alter the implicit allocation:</t> <t> <list style="symbols"> <t>Band boost</t> <t>Allocation trim</t> <t>Band skipping</t> </list> </t> <t>The first of these mechanisms, band boost, allows an encoder to boost the allocation in specific bands. 
The second, allocation trim, works by biasing the overall allocation towards higher or lower frequency bands. The third, band skipping, selects which low-precision high frequency bands will be allocated no shape bits at all.</t> <t>In stereo mode there are two additional parameters potentially coded as part of the allocation procedure: a parameter to allow the selective elimination of allocation for the 'side' (i.e., intensity stereo) in jointly coded bands, and a flag to deactivate joint coding (i.e., dual stereo). These values are not signaled if they would be meaningless in the overall context of the allocation.</t> <t>Because every signaled adjustment increases overhead and implementation complexity, none were included speculatively: the reference encoder makes use of all of these mechanisms. While the decision logic in the reference was found to be effective enough to justify the overhead and complexity, further analysis techniques may be discovered which increase the effectiveness of these parameters. As with other signaled parameters, an encoder is free to choose the values in any manner, but unless a technique is known to deliver superior perceptual results the methods used by the reference implementation should be used.</t> <t>The allocation process consists of the following steps: determining the per-band maximum allocation vector, decoding the boosts, decoding the tilt, determining the remaining capacity of the frame, searching the mode table for the entry nearest but not exceeding the available space (subject to the tilt, boosts, band maximums, and band minimums), linear interpolation, reallocation of unused bits with concurrent skip decoding, determination of the fine-energy vs. shape split, and final reallocation. This process results in a per-band shape allocation (in 1/8th bit units), a per-band fine-energy allocation (in 1 bit per channel units), a set of band priorities for controlling the use of remaining bits at the end of the frame, and a remaining balance of unallocated space, which is usually zero except at very high rates.</t> <t> The "static" bit allocation (in 1/8 bits) for a quality q, excluding the minimums, maximums, tilt and boosts, is equal to channels*N*alloc[band][q]<<LM>>2, where alloc[][] is given in <xref target="static_alloc"/> and LM=log2(frame_size/120). The allocation is obtained by linearly interpolating between two values of q (in steps of 1/64) to find the highest allocation that does not exceed the number of bits remaining. </t> <texttable anchor="static_alloc" title="CELT Static Allocation Table"> <preamble>Rows indicate the MDCT bands, columns are the different quality (q) parameters. 
The units are 1/32 bit per MDCT bin.</preamble> <ttcol align="right">0</ttcol> <ttcol align="right">1</ttcol> <ttcol align="right">2</ttcol> <ttcol align="right">3</ttcol> <ttcol align="right">4</ttcol> <ttcol align="right">5</ttcol> <ttcol align="right">6</ttcol> <ttcol align="right">7</ttcol> <ttcol align="right">8</ttcol> <ttcol align="right">9</ttcol> <ttcol align="right">10</ttcol> <c>0</c><c>90</c><c>110</c><c>118</c><c>126</c><c>134</c><c>144</c><c>152</c><c>162</c><c>172</c><c>200</c> <c>0</c><c>80</c><c>100</c><c>110</c><c>119</c><c>127</c><c>137</c><c>145</c><c>155</c><c>165</c><c>200</c> <c>0</c><c>75</c><c>90</c><c>103</c><c>112</c><c>120</c><c>130</c><c>138</c><c>148</c><c>158</c><c>200</c> <c>0</c><c>69</c><c>84</c><c>93</c><c>104</c><c>114</c><c>124</c><c>132</c><c>142</c><c>152</c><c>200</c> <c>0</c><c>63</c><c>78</c><c>86</c><c>95</c><c>103</c><c>113</c><c>123</c><c>133</c><c>143</c><c>200</c> <c>0</c><c>56</c><c>71</c><c>80</c><c>89</c><c>97</c><c>107</c><c>117</c><c>127</c><c>137</c><c>200</c> <c>0</c><c>49</c><c>65</c><c>75</c><c>83</c><c>91</c><c>101</c><c>111</c><c>121</c><c>131</c><c>200</c> <c>0</c><c>40</c><c>58</c><c>70</c><c>78</c><c>85</c><c>95</c><c>105</c><c>115</c><c>125</c><c>200</c> <c>0</c><c>34</c><c>51</c><c>65</c><c>72</c><c>78</c><c>88</c><c>98</c><c>108</c><c>118</c><c>198</c> <c>0</c><c>29</c><c>45</c><c>59</c><c>66</c><c>72</c><c>82</c><c>92</c><c>102</c><c>112</c><c>193</c> <c>0</c><c>20</c><c>39</c><c>53</c><c>60</c><c>66</c><c>76</c><c>86</c><c>96</c><c>106</c><c>188</c> <c>0</c><c>18</c><c>32</c><c>47</c><c>54</c><c>60</c><c>70</c><c>80</c><c>90</c><c>100</c><c>183</c> <c>0</c><c>10</c><c>26</c><c>40</c><c>47</c><c>54</c><c>64</c><c>74</c><c>84</c><c>94</c><c>178</c> <c>0</c><c>0</c><c>20</c><c>31</c><c>39</c><c>47</c><c>57</c><c>67</c><c>77</c><c>87</c><c>173</c> <c>0</c><c>0</c><c>12</c><c>23</c><c>32</c><c>41</c><c>51</c><c>61</c><c>71</c><c>81</c><c>168</c> <c>0</c><c>0</c><c>0</c><c>15</c><c>25</c><c>35</c><c>45</c><c>55</c><c>65</c><c>75</c><c>163</c> <c>0</c><c>0</c><c>0</c><c>4</c><c>17</c><c>29</c><c>39</c><c>49</c><c>59</c><c>69</c><c>158</c> <c>0</c><c>0</c><c>0</c><c>0</c><c>12</c><c>23</c><c>33</c><c>43</c><c>53</c><c>63</c><c>153</c> <c>0</c><c>0</c><c>0</c><c>0</c><c>1</c><c>16</c><c>26</c><c>36</c><c>46</c><c>56</c><c>148</c> <c>0</c><c>0</c><c>0</c><c>0</c><c>0</c><c>10</c><c>15</c><c>20</c><c>30</c><c>45</c><c>129</c> <c>0</c><c>0</c><c>0</c><c>0</c><c>0</c><c>1</c><c>1</c><c>1</c><c>1</c><c>20</c><c>104</c> </texttable> <t>The maximum allocation vector is an approximation of the maximum space that can be used by each band for a given mode. The value is approximate because the shape encoding is variable rate (due to entropy coding of splitting parameters). Setting the maximum too low reduces the maximum achievable quality in a band while setting it too high may result in waste: bitstream capacity available at the end of the frame which can not be put to any use. The maximums specified by the codec reflect the average maximum. In the reference implementation, the maximums in bits/sample are precomputed in a static table (see cache_caps50[] in static_modes_float.h) for each band, for each value of LM, and for both mono and stereo. 
Implementations are expected to simply use the same table data, but the procedure for generating this table is included in rate.c as part of compute_pulse_cache().</t> <t>To convert the values in cache.caps into the actual maximums: first set nbBands to the maximum number of bands for this mode, and stereo to zero if stereo is not in use and one otherwise. For each band set N to the number of MDCT bins covered by the band (for one channel), set LM to the shift value for the frame size, then set i to nbBands*(2*LM+stereo). Then set the maximum for the band to the i-th index of cache.caps + 64 and multiply by the number of channels in the current frame (one or two) and by N, then divide the result by 4 using integer division. The resulting vector will be called cap[]. The elements fit in signed 16-bit integers but do not fit in 8 bits. This procedure is implemented in the reference in the function init_caps() in celt.c. </t> <t>The band boosts are represented by a series of binary symbols which are entropy coded with very low probability. Each band can potentially be boosted multiple times, subject to the frame actually having enough room to obey the boost and having enough room to code the boost symbol. The default coding cost for a boost starts out at six bits (probability p=1/64), but subsequent boosts in a band cost only a single bit and every time a band is boosted the initial cost is reduced (down to a minimum of two bits, or p=1/4). Since the initial cost of coding a boost is 6 bits, the coding cost of the boost symbols when completely unused is 0.48 bits/frame for a 21 band mode (21*-log2(1-1/2**6)).</t> <t>To decode the band boosts: First set 'dynalloc_logp' to 6, the initial amount of storage required to signal a boost in bits, 'total_bits' to the size of the frame in 8th bits, 'total_boost' to zero, and 'tell' to the total number of 8th bits decoded so far. For each band from the coding start (0 normally, but 17 in Hybrid mode) to the coding end (which changes depending on the signaled bandwidth), the boost quanta in units of 1/8 bit is calculated as quanta = min(8*N, max(48, N)). This represents a boost step size of six bits, subject to a lower limit of 1/8th bit/sample and an upper limit of 1 bit/sample. Set 'boost' to zero and 'dynalloc_loop_logp' to dynalloc_logp. While dynalloc_loop_logp (the current worst case symbol cost) in 8th bits plus tell is less than total_bits plus total_boost and boost is less than cap[] for this band: Decode a bit from the bitstream with dynalloc_loop_logp as the cost of a one, update tell to reflect the current used capacity, if the decoded value is zero break the loop, otherwise add quanta to boost and total_boost, subtract quanta from total_bits, and set dynalloc_loop_logp to 1. When the while loop finishes, boost contains the boost for this band. If boost is non-zero and dynalloc_logp is greater than 2, decrease dynalloc_logp. Once this process has been executed on all bands, the band boosts have been decoded. This procedure is implemented around line 2474 of celt.c.</t> <t>At very low rates it is possible that there won't be enough available space to execute the inner loop even once. In these cases, band boost is not possible, but its overhead is completely eliminated. Because of the high cost of band boost when activated, a reasonable encoder should not be using it at very low rates. The reference implements its dynalloc decision logic around line 1304 of celt.c.</t> <t>The allocation trim is an integer value from 0 to 10.
The default value of 5 indicates no trim. The trim parameter is entropy coded in order to lower the coding cost of less extreme adjustments. Values lower than 5 bias the allocation towards lower frequencies and values above 5 bias it towards higher frequencies. Like other signaled parameters, signaling of the trim is gated so that it is not included if there is insufficient space available in the bitstream. To decode the trim, first set the trim value to 5, then if and only if the count of decoded 8th bits so far (ec_tell_frac) plus 48 (6 bits) is less than or equal to the total frame size in 8th bits minus total_boost (a product of the above band boost procedure), decode the trim value using the PDF in <xref target="celt_trim_pdf"/>.</t> <texttable anchor="celt_trim_pdf" title="PDF for the Trim"> <ttcol>PDF</ttcol> <c>{2, 2, 5, 10, 22, 46, 22, 10, 5, 2, 2}/128</c> </texttable> <t>For 10 ms and 20 ms frames using short blocks that have at least LM+2 bits left prior to the allocation process, one anti-collapse bit is reserved in the allocation process so it can be decoded later. Following the anti-collapse reservation, one bit is reserved for skip if available.</t> <t>For stereo frames, bits are reserved for intensity stereo and for dual stereo. Intensity stereo requires ilog2(end-start) bits. Those bits are reserved if there are enough bits left. Following this, one bit is reserved for dual stereo if available.</t> <t>The allocation computation begins by setting up some initial conditions. 'total' is set to the remaining available 8th bits, computed by taking the size of the coded frame times 8 and subtracting ec_tell_frac(). From this value, one (8th bit) is subtracted to ensure that the resulting allocation will be conservative. 'anti_collapse_rsv' is set to 8 (8th bits) if and only if the frame is a transient, LM is greater than 1, and total is greater than or equal to (LM+2) * 8. Total is then decremented by anti_collapse_rsv and clamped to be equal to or greater than zero. 'skip_rsv' is set to 8 (8th bits) if total is greater than 8, otherwise it is zero. Total is then decremented by skip_rsv. This reserves space for the final skipping flag.</t> <t>If the current frame is stereo, intensity_rsv is set to the conservative log2 in 8th bits of the number of coded bands for this frame (given by the table LOG2_FRAC_TABLE in rate.c). If intensity_rsv is greater than total then intensity_rsv is set to zero. Otherwise total is decremented by intensity_rsv, and if total is still greater than 8, dual_stereo_rsv is set to 8 and total is decremented by dual_stereo_rsv.</t> <t>The allocation process then computes a vector representing the hard minimum amount of allocation any band will receive for shape. This minimum is higher than the technical limit of the PVQ process, but very low rate allocations produce an excessively sparse spectrum and these bands are better served by having no allocation at all. For each coded band, set thresh[band] to twenty-four times the number of MDCT bins in the band and divide by 16. If 8 times the number of channels is greater, use that instead. This sets the minimum allocation to one bit per channel or 48 128th bits per MDCT bin, whichever is greater.
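A minimal non-normative sketch of this minimum computation (array and variable names hypothetical):
<figure align="center"> <artwork align="center"><![CDATA[
/* Non-normative sketch: per-band minimum shape allocation, in
 * 1/8 bit units.  N[] holds the number of MDCT bins per band at
 * the current frame size and C is the channel count. */
static void compute_thresh_sketch(int *thresh, const int *N,
                                  int C, int start, int end)
{
   int band;
   for (band = start; band < end; band++) {
      thresh[band] = (24*N[band])/16;
      if (thresh[band] < 8*C)
         thresh[band] = 8*C;
   }
}
]]></artwork> </figure>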
The band-size dependent part of this value is not scaled by the channel count, because at the very low rates where this limit is applicable there will usually be no bits allocated to the side.</t> <t>The previously decoded allocation trim is used to derive a vector of per-band adjustments, 'trim_offsets[]'. For each coded band take the alloc_trim and subtract 5 and LM. Then multiply the result by the number of channels, the number of MDCT bins in the shortest frame size for this mode, the number of remaining bands, 2**LM, and 8. Then divide this value by 64. Finally, if the number of MDCT bins in the band per channel is only one, 8 times the number of channels is subtracted in order to diminish the allocation by one bit, because width 1 bands receive greater benefit from the coarse energy coding.</t> </section> <section anchor="PVQ-decoder" title="Shape Decoding"> <t> In each band, the normalized "shape" is encoded using a vector quantization scheme called a "pyramid vector quantizer". </t> <t>In the simplest case, the number of bits allocated in <xref target="allocation"></xref> is converted to a number of pulses as described by <xref target="bits-pulses"></xref>. Knowing the number of pulses and the number of samples in the band, the decoder calculates the size of the codebook as detailed in <xref target="cwrs-decoder"></xref>. The size is used to decode an unsigned integer (uniform probability model), which is the codeword index. This index is converted into the corresponding vector as explained in <xref target="cwrs-decoder"></xref>. This vector is then scaled to unit norm. </t> <section anchor="bits-pulses" title="Bits to Pulses"> <t> Although the allocation is performed in 1/8th bit units, the quantization requires an integer number of pulses K. To do this, the encoder searches for the value of K that produces the number of bits nearest to the allocated value (rounding down if exactly halfway between two values), not to exceed the total number of bits available. For efficiency reasons, the search is performed against a precomputed allocation table which only permits some K values for each N. The number of codebook entries can be computed as explained in <xref target="cwrs-decoder"></xref>. The difference between the number of bits allocated and the number of bits used is accumulated to a "balance" (initialized to zero) that helps adjust the allocation for the next bands. One third of the balance is applied to the bit allocation of each band to help achieve the target allocation. The only exceptions are the band before the last and the last band, for which half the balance and the whole balance are applied, respectively. </t> </section> <section anchor="cwrs-decoder" title="PVQ Decoding"> <t> Decoding of PVQ vectors is implemented in decode_pulses() (cwrs.c). The unique codeword index is decoded as a uniformly-distributed integer value between 0 and V(N,K)-1, where V(N,K) is the number of possible combinations of K pulses in N samples. The index is then converted to a vector in the same way specified in <xref target="PVQ"></xref>. The indexing is based on the calculation of V(N,K) (denoted N(L,K) in <xref target="PVQ"></xref>). </t> <t> The number of combinations can be computed recursively as V(N,K) = V(N-1,K) + V(N,K-1) + V(N-1,K-1), with V(N,0) = 1 and V(0,K) = 0, K != 0. There are many different ways to compute V(N,K), including precomputed tables and direct use of the recursive formulation. 
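For illustration only, a direct (and deliberately unoptimized) implementation of this recursion is sketched below; the reference implementation uses the more efficient equivalents described next.
<figure align="center"> <artwork align="center"><![CDATA[
#include <stdint.h>

/* Non-normative, direct implementation of the recursion for V(N,K).
 * A 64-bit result is used because V(N,K) may exceed 32 bits for
 * combinations large enough to trigger splitting (see "Split
 * decoding").  Without memoization this is exponential in N+K and
 * is suitable only as a cross-check for testing. */
static uint64_t V(int N, int K)
{
   if (K == 0)
      return 1;               /* V(N,0) = 1              */
   if (N == 0)
      return 0;               /* V(0,K) = 0 for K != 0   */
   return V(N-1, K) + V(N, K-1) + V(N-1, K-1);
}
]]></artwork> </figure>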
The reference implementation applies the recursive formulation one line (or column) at a time to save on memory use, along with an alternate, univariate recurrence to initialize an arbitrary line, and direct polynomial solutions for small N. All of these methods are equivalent, and have different trade-offs in speed, memory usage, and code size. Implementations MAY use any methods they like, as long as they are equivalent to the mathematical definition. </t> <t> The decoded vector X is recovered as follows. Let i be the index decoded with the procedure in <xref target="ec_dec_uint"/> with ft = V(N,K), so that 0 <= i < V(N,K). Let k = K. Then for j = 0 to (N - 1), inclusive, do: <list style="numbers"> <t>Let p = (V(N-j-1,k) + V(N-j,k))/2.</t> <t> If i < p, then let sgn = 1, else let sgn = -1 and set i = i - p. </t> <t>Let k0 = k and set p = p - V(N-j-1,k).</t> <t> While p > i, set k = k - 1 and p = p - V(N-j-1,k). </t> <t> Set X[j] = sgn*(k0 - k) and i = i - p. </t> </list> </t> <t> The decoded vector X is then normalized such that its L2-norm equals one. </t> </section> <section anchor="spreading" title="Spreading"> <t> The normalized vector decoded in <xref target="cwrs-decoder"/> is then rotated for the purpose of avoiding tonal artifacts. The rotation gain is equal to <figure align="center"> <artwork align="center"><![CDATA[ g_r = N / (N + f_r*K) ]]></artwork> </figure> where N is the number of dimensions, K is the number of pulses, and f_r depends on the value of the "spread" parameter in the bit-stream. </t> <texttable anchor="spread values" title="Spreading Values"> <ttcol>Spread value</ttcol> <ttcol>f_r</ttcol> <c>0</c> <c>infinite (no rotation)</c> <c>1</c> <c>15</c> <c>2</c> <c>10</c> <c>3</c> <c>5</c> </texttable> <t> The rotation angle is then calculated as <figure align="center"> <artwork align="center"><![CDATA[ 2 pi * g_r theta = ---------- 4 ]]></artwork> </figure> A 2-D rotation R(i,j) between points x_i and x_j is defined as: <figure align="center"> <artwork align="center"><![CDATA[ x_i' = cos(theta)*x_i + sin(theta)*x_j x_j' = -sin(theta)*x_i + cos(theta)*x_j ]]></artwork> </figure> An N-D rotation is then achieved by applying a series of 2-D rotations back and forth, in the following order: R(x_1, x_2), R(x_2, x_3), ..., R(x_N-2, x_N-1), R(x_N-1, x_N), R(x_N-2, x_N-1), ..., R(x_1, x_2). </t> <t> If the decoded vector represents more than one time block, then this spreading process is applied separately on each time block. Also, if each block represents 8 samples or more, then another N-D rotation, by (pi/2-theta), is applied <spanx style="emph">before</spanx> the rotation described above. This extra rotation is applied in an interleaved manner with a stride equal to round(sqrt(N/nb_blocks)), i.e., it is applied independently for each set of samples S_k = {stride*n + k}, n=0..N/stride-1. </t> </section> <section anchor="split" title="Split decoding"> <t> To avoid the need for multi-precision calculations when decoding PVQ codevectors, the maximum size allowed for codebooks is 32 bits. When larger codebooks are needed, the vector is instead split in two sub-vectors of size N/2. A quantized gain parameter with precision derived from the current allocation is entropy coded to represent the relative gains of each side of the split, and the entire decoding process is recursively applied. Multiple levels of splitting may be applied up to a limit of LM+1 splits. The same recursive mechanism is applied for the joint coding of stereo audio.
</t> </section> <section anchor="tf-change" title="Time-Frequency change"> <t> The time-frequency (TF) parameters are used to control the time-frequency resolution tradeoff in each coded band. For each band, there are two possible TF choices. For the first band coded, the PDF is {3, 1}/4 for frames marked as transient and {15, 1}/16 for the other frames. For subsequent bands, the TF choice is coded relative to the previous TF choice with probability {15, 1}/15 for transient frames and {31, 1}/32 otherwise. The mapping between the decoded TF choices and the adjustment in TF resolution is shown in the tables below. </t> <texttable anchor='tf_00' title="TF Adjustments for Non-transient Frames and tf_select=0"> <ttcol align='center'>Frame size (ms)</ttcol> <ttcol align='center'>0</ttcol> <ttcol align='center'>1</ttcol> <c>2.5</c> <c>0</c> <c>-1</c> <c>5</c> <c>0</c> <c>-1</c> <c>10</c> <c>0</c> <c>-2</c> <c>20</c> <c>0</c> <c>-2</c> </texttable> <texttable anchor='tf_01' title="TF Adjustments for Non-transient Frames and tf_select=1"> <ttcol align='center'>Frame size (ms)</ttcol> <ttcol align='center'>0</ttcol> <ttcol align='center'>1</ttcol> <c>2.5</c> <c>0</c> <c>-1</c> <c>5</c> <c>0</c> <c>-2</c> <c>10</c> <c>0</c> <c>-3</c> <c>20</c> <c>0</c> <c>-3</c> </texttable> <texttable anchor='tf_10' title="TF Adjustments for Transient Frames and tf_select=0"> <ttcol align='center'>Frame size (ms)</ttcol> <ttcol align='center'>0</ttcol> <ttcol align='center'>1</ttcol> <c>2.5</c> <c>0</c> <c>-1</c> <c>5</c> <c>1</c> <c>0</c> <c>10</c> <c>2</c> <c>0</c> <c>20</c> <c>3</c> <c>0</c> </texttable> <texttable anchor='tf_11' title="TF Adjustments for Transient Frames and tf_select=1"> <ttcol align='center'>Frame size (ms)</ttcol> <ttcol align='center'>0</ttcol> <ttcol align='center'>1</ttcol> <c>2.5</c> <c>0</c> <c>-1</c> <c>5</c> <c>1</c> <c>-1</c> <c>10</c> <c>1</c> <c>-1</c> <c>20</c> <c>1</c> <c>-1</c> </texttable> <t> A negative TF adjustment means that the temporal resolution is increased, while a positive TF adjustment means that the frequency resolution is increased. Changes in TF resolution are implemented using the Hadamard transform <xref target="Hadamard"/>. To increase the time resolution by N, N "levels" of the Hadamard transform are applied to the decoded vector for each interleaved MDCT vector. To increase the frequency resolution (assumes a transient frame), then N levels of the Hadamard transform are applied <spanx style="emph">across</spanx> the interleaved MDCT vector. In the case of increased time resolution the decoder uses the "sequency order" because the input vector is sorted in time. </t> </section> </section> <section anchor="anti-collapse" title="Anti-Collapse Processing"> <t> The anti-collapse feature is designed to avoid the situation where the use of multiple short MDCTs causes the energy in one or more of the MDCTs to be zero for some bands, causing unpleasant artifacts. When the frame has the transient bit set, an anti-collapse bit is decoded. When anti-collapse is set, the energy in each small MDCT is prevented from collapsing to zero. For each band of each MDCT where a collapse is detected, a pseudo-random signal is inserted with an energy corresponding to the minimum energy over the two previous frames. A renormalization step is then required to ensure that the anti-collapse step did not alter the energy preservation property. 
</t> </section> <section anchor="denormalization" title="Denormalization"> <t> Just as each band was normalized in the encoder, the last step of the decoder before the inverse MDCT is to denormalize the bands. Each decoded normalized band is multiplied by the square root of the decoded energy. This is done by denormalise_bands() (bands.c). </t> </section> <section anchor="inverse-mdct" title="Inverse MDCT"> <t>The inverse MDCT implementation has no special characteristics. The input is N frequency-domain samples and the output is 2*N time-domain samples, while scaling by 1/2. A "low-overlap" window reduces the algorithmic delay. It is derived from a basic (full overlap) 240-sample version of the window used by the Vorbis codec: <figure align="center"> <artwork align="center"><![CDATA[ 2 / /pi /pi n + 1/2\ \ \ W(n) = |sin|-- * sin|-- * -------| | | . \ \2 \2 L / / / ]]></artwork> </figure> The low-overlap window is created by zero-padding the basic window and inserting ones in the middle, such that the resulting window still satisfies power complementarity <xref target='Princen86'/>. The IMDCT and windowing are performed by mdct_backward (mdct.c). </t> <section anchor="post-filter" title="Post-filter"> <t> The output of the inverse MDCT (after weighted overlap-add) is sent to the post-filter. Although the post-filter is applied at the end, the post-filter parameters are encoded at the beginning, just after the silence flag. The post-filter can be switched on or off using one bit (logp=1). If the post-filter is enabled, then the octave is decoded as an integer value between 0 and 6 of uniform probability. Once the octave is known, the fine pitch within the octave is decoded using 4+octave raw bits. The final pitch period is equal to (16<<octave)+fine_pitch-1 so it is bounded between 15 and 1022, inclusively. Next, the gain is decoded as three raw bits and is equal to G=3*(int_gain+1)/32. The set of post-filter taps is decoded last, using a pdf equal to {2, 1, 1}/4. Tapset zero corresponds to the filter coefficients g0 = 0.3066406250, g1 = 0.2170410156, g2 = 0.1296386719. Tapset one corresponds to the filter coefficients g0 = 0.4638671875, g1 = 0.2680664062, g2 = 0, and tapset two uses filter coefficients g0 = 0.7998046875, g1 = 0.1000976562, g2 = 0. </t> <t> The post-filter response is thus computed as: <figure align="center"> <artwork align="center"> <![CDATA[ y(n) = x(n) + G*(g0*y(n-T) + g1*(y(n-T+1)+y(n-T+1)) + g2*(y(n-T+2)+y(n-T+2))) ]]> </artwork> </figure> During a transition between different gains, a smooth transition is calculated using the square of the MDCT window. It is important that values of y(n) be interpolated one at a time such that the past value of y(n) used is interpolated. </t> </section> <section anchor="deemphasis" title="De-emphasis"> <t> After the post-filter, the signal is de-emphasized using the inverse of the pre-emphasis filter used in the encoder: <figure align="center"> <artwork align="center"><![CDATA[ 1 1 ---- = --------------- , A(z) -1 1 - alpha_p*z ]]></artwork> </figure> where alpha_p=0.8500061035. </t> </section> </section> </section> <section anchor="Packet Loss Concealment" title="Packet Loss Concealment (PLC)"> <t> Packet loss concealment (PLC) is an optional decoder-side feature that SHOULD be included when receiving from an unreliable channel. Because PLC is not part of the bitstream, there are many acceptable ways to implement PLC with different complexity/quality trade-offs. 
</t> <t> The PLC in the reference implementation depends on the mode of last packet received. In CELT mode, the PLC finds a periodicity in the decoded signal and repeats the windowed waveform using the pitch offset. The windowed waveform is overlapped in such a way as to preserve the time-domain aliasing cancellation with the previous frame and the next frame. This is implemented in celt_decode_lost() (mdct.c). In SILK mode, the PLC uses LPC extrapolation from the previous frame, implemented in silk_PLC() (PLC.c). </t> <section anchor="clock-drift" title="Clock Drift Compensation"> <t> Clock drift refers to the gradual desynchronization of two endpoints whose sample clocks run at different frequencies while they are streaming live audio. Differences in clock frequencies are generally attributable to manufacturing variation in the endpoints' clock hardware. For long-lived streams, the time difference between sender and receiver can grow without bound. </t> <t> When the sender's clock runs slower than the receiver's, the effect is similar to packet loss: too few packets are received. The receiver can distinguish between drift and loss if the transport provides packet timestamps. A receiver for live streams SHOULD conceal the effects of drift, and MAY do so by invoking the PLC. </t> <t> When the sender's clock runs faster than the receiver's, too many packets will be received. The receiver MAY respond by skipping any packet (i.e., not submitting the packet for decoding). This is likely to produce a less severe artifact than if the frame were dropped after decoding. </t> <t> A decoder MAY employ a more sophisticated drift compensation method. For example, the <xref target='Google-NetEQ'>NetEQ component</xref> of the <xref target='Google-WebRTC'>Google WebRTC codebase</xref> compensates for drift by adding or removing one period when the signal is highly periodic. The reference implementation of Opus allows a caller to learn whether the current frame's signal is highly periodic, and if so what the period is, using the OPUS_GET_PITCH() request. </t> </section> </section> <section anchor="switching" title="Configuration Switching"> <t> Switching between the Opus coding modes, audio bandwidths, and channel counts requires careful consideration to avoid audible glitches. Switching between any two configurations of the CELT-only mode, any two configurations of the Hybrid mode, or from WB SILK to Hybrid mode does not require any special treatment in the decoder, as the MDCT overlap will smooth the transition. Switching from Hybrid mode to WB SILK requires adding in the final contents of the CELT overlap buffer to the first SILK-only packet. This can be done by decoding a 2.5 ms silence frame with the CELT decoder using the channel count of the SILK-only packet (and any choice of audio bandwidth), which will correctly handle the cases when the channel count changes as well. </t> <t> When changing the channel count for SILK-only or Hybrid packets, the encoder can avoid glitches by smoothly varying the stereo width of the input signal before or after the transition, and SHOULD do so. However, other transitions between SILK-only packets or between NB or MB SILK and Hybrid packets may cause glitches, because neither the LSF coefficients nor the LTP, LPC, stereo unmixing, and resampler buffers are available at the new sample rate. These switches SHOULD be delayed by the encoder until quiet periods or transients, where the inevitable glitches will be less audible. 
Additionally, the bit-stream MAY include redundant side information ("redundancy"), in the form of additional CELT frames embedded in each of the Opus frames around the transition. </t> <t> The other transitions that cannot be easily handled are those where the lower frequencies switch between the SILK LP-based model and the CELT MDCT model. However, an encoder may not have an opportunity to delay such a switch to a convenient point. For example, if the content switches from speech to music, and the encoder does not have enough latency in its analysis to detect this in advance, there may be no convenient silence period during which to make the transition for quite some time. To avoid or reduce glitches during these problematic mode transitions, and also between audio bandwidth changes in the SILK-only modes, transitions MAY include redundant side information ("redundancy"), in the form of an additional CELT frame embedded in the Opus frame. </t> <t> A transition between coding the lower frequencies with the LP model and the MDCT model or a transition that involves changing the SILK bandwidth is only normatively specified when it includes redundancy. For those without redundancy, it is RECOMMENDED that the decoder use a concealment technique (e.g., make use of a PLC algorithm) to "fill in" the gap or discontinuity caused by the mode transition. Therefore, PLC MUST NOT be applied during any normative transition, i.e., when <list style="symbols"> <t>A packet includes redundancy for this transition (as described below),</t> <t>The transition is between any WB SILK packet and any Hybrid packet, or vice versa,</t> <t>The transition is between any two Hybrid mode packets, or</t> <t>The transition is between any two CELT mode packets,</t> </list> unless there is actual packet loss. </t> <section anchor="side-info" title="Transition Side Information (Redundancy)"> <t> Transitions with side information include an extra 5 ms "redundant" CELT frame within the Opus frame. This frame is designed to fill in the gap or discontinuity in the different layers without requiring the decoder to conceal it. For transitions from CELT-only to SILK-only or Hybrid, the redundant frame is inserted in the first Opus frame after the transition (i.e., the first SILK-only or Hybrid frame). For transitions from SILK-only or Hybrid to CELT-only, the redundant frame is inserted in the last Opus frame before the transition (i.e., the last SILK-only or Hybrid frame). </t> <section anchor="opus_redundancy_flag" title="Redundancy Flag"> <t> The presence of redundancy is signaled in all SILK-only and Hybrid frames, not just those involved in a mode transition. This allows the frames to be decoded correctly even if an adjacent frame is lost. For SILK-only frames, this signaling is implicit, based on the size of the Opus frame and the number of bits consumed decoding the SILK portion of it. After decoding the SILK portion of the Opus frame, the decoder uses ec_tell() (see <xref target="ec_tell"/>) to check if there are at least 17 bits remaining. If so, then the frame contains redundancy. </t> <t> For Hybrid frames, this signaling is explicit. After decoding the SILK portion of the Opus frame, the decoder uses ec_tell() (see <xref target="ec_tell"/>) to ensure there are at least 37 bits remaining. If so, it reads a symbol with the PDF in <xref target="opus_redundancy_flag_pdf"/>, and if the value is 1, then the frame contains redundancy.
Otherwise (if there were fewer than 37 bits left or the value was 0), the frame does not contain redundancy. </t> <texttable anchor="opus_redundancy_flag_pdf" title="Redundancy Flag PDF"> <ttcol>PDF</ttcol> <c>{4095, 1}/4096</c> </texttable> </section> <section anchor="opus_redundancy_pos" title="Redundancy Position Flag"> <t> Since the current frame is a SILK-only or a Hybrid frame, it must be at least 10 ms. Therefore, it needs an additional flag to indicate whether the redundant 5 ms CELT frame should be mixed into the beginning of the current frame, or the end. After determining that a frame contains redundancy, the decoder reads a 1 bit symbol with a uniform PDF (<xref target="opus_redundancy_pos_pdf"/>). </t> <texttable anchor="opus_redundancy_pos_pdf" title="Redundancy Position PDF"> <ttcol>PDF</ttcol> <c>{1, 1}/2</c> </texttable> <t> If the value is zero, this is the first frame in the transition, and the redundancy belongs at the end. If the value is one, this is the second frame in the transition, and the redundancy belongs at the beginning. There is no way to specify that an Opus frame contains separate redundant CELT frames at both the beginning and the end. </t> </section> <section anchor="opus_redundancy_size" title="Redundancy Size"> <t> Unlike the CELT portion of a Hybrid frame, the redundant CELT frame does not use the same entropy coder state as the rest of the Opus frame, because this would break the CELT bit allocation mechanism in Hybrid frames. Thus, a redundant CELT frame always starts and ends on a byte boundary, even in SILK-only frames, where this is not strictly necessary. </t> <t> For SILK-only frames, the number of bytes in the redundant CELT frame is simply the number of whole bytes remaining, which must be at least 2, due to the space check in <xref target="opus_redundancy_flag"/>. For Hybrid frames, the number of bytes is equal to 2, plus a decoded unsigned integer less than 256 (see <xref target="ec_dec_uint"/>). This may be more than the number of whole bytes remaining in the Opus frame, in which case the frame is invalid. However, a decoder is not required to ignore the entire frame, as this may be the result of a bit error that desynchronized the range coder. There may still be useful data before the error, and a decoder MAY keep any audio decoded so far instead of invoking the PLC, but it is RECOMMENDED that the decoder stop decoding and discard the rest of the current Opus frame. </t> <t> It would have been possible to avoid these invalid states in the design of Opus by limiting the range of the explicit length decoded from Hybrid frames by the actual number of whole bytes remaining. However, this would require an encoder to determine the rate allocation for the MDCT layer up front, before it began encoding that layer. By allowing some invalid sizes, the encoder is able to defer that decision until much later. When encoding Hybrid frames which do not include redundancy, the encoder must still decide up-front if it wishes to use the minimum 37 bits required to trigger encoding of the redundancy flag, but this is a much looser restriction. </t> <t> After determining the size of the redundant CELT frame, the decoder reduces the size of the buffer currently in use by the range coder by that amount. The CELT layer reads any raw bits from the end of this reduced buffer, and all calculations of the number of bits remaining in the buffer must be done using this new, reduced size, rather than the original size of the Opus frame.
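A non-normative sketch of this size determination (ec_dec_uint() is the procedure referenced above; the remaining names are hypothetical):
<figure align="center"> <artwork align="center"><![CDATA[
/* Non-normative sketch of computing the redundant CELT frame size.
 * bytes_left is the number of whole bytes remaining in the Opus
 * frame after decoding the SILK portion, and dec is the range
 * decoder state. */
static int redundancy_bytes_sketch(ec_dec *dec, int hybrid,
                                   int bytes_left)
{
   int redundancy_bytes;
   if (hybrid)
      redundancy_bytes = 2 + (int)ec_dec_uint(dec, 256);
   else
      redundancy_bytes = bytes_left;
   if (redundancy_bytes > bytes_left)
      return -1;   /* invalid size: stop decoding this Opus frame */
   /* The buffer used by the range coder is then reduced by
    * redundancy_bytes before decoding any further symbols or
    * reading any raw bits. */
   return redundancy_bytes;
}
]]></artwork> </figure>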
</t> </section> <section anchor="opus_redundancy_decoding" title="Decoding the Redundancy"> <t> The redundant frame is decoded like any other CELT-only frame, with the exception that it does not contain a TOC byte. The frame size is fixed at 5 ms, the channel count is set to that of the current frame, and the audio bandwidth is also set to that of the current frame, with the exception that for MB SILK frames, it is set to WB. </t> <t> If the redundancy belongs at the beginning (in a CELT-only to SILK-only or Hybrid transition), the final reconstructed output uses the first 2.5 ms of audio output by the decoder for the redundant frame as-is, discarding the corresponding output from the SILK-only or Hybrid portion of the frame. The remaining 2.5 ms is cross-lapped with the decoded SILK/Hybrid signal using the CELT's power-complementary MDCT window to ensure a smooth transition. </t> <t> If the redundancy belongs at the end (in a SILK-only or Hybrid to CELT-only transition), only the second half (2.5 ms) of the audio output by the decoder for the redundant frame is used. In that case, the second half of the redundant frame is cross-lapped with the end of the SILK/Hybrid signal, again using CELT's power-complementary MDCT window to ensure a smooth transition. </t> </section> </section> <section anchor="decoder-reset" title="State Reset"> <t> When a transition occurs, the state of the SILK or the CELT decoder (or both) may need to be reset before decoding a frame in the new mode. This avoids reusing "out of date" memory, which may not have been updated in some time or may not be in a well-defined state due to, e.g., PLC. The SILK state is reset before every SILK-only or Hybrid frame where the previous frame was CELT-only. The CELT state is reset every time the operating mode changes and the new mode is either Hybrid or CELT-only, except when the transition uses redundancy as described above. When switching from SILK-only or Hybrid to CELT-only with redundancy, the CELT state is reset before decoding the redundant CELT frame embedded in the SILK-only or Hybrid frame, but it is not reset before decoding the following CELT-only frame. When switching from CELT-only mode to SILK-only or Hybrid mode with redundancy, the CELT decoder is not reset for decoding the redundant CELT frame. </t> </section> <section title="Summary of Transitions"> <t> <xref target="normative_transitions"/> illustrates all of the normative transitions involving a mode change, an audio bandwidth change, or both. Each one uses an S, H, or C to represent an Opus frame in the corresponding mode. In addition, an R indicates the presence of redundancy in the Opus frame it is cross-lapped with. Its location in the first or last 5 ms is assumed to correspond to whether it is the frame before or after the transition. Other uses of redundancy are non-normative. Finally, a c indicates the contents of the CELT overlap buffer after the previously decoded frame (i.e., as extracted by decoding a silence frame). 
<figure align="center" anchor="normative_transitions" title="Normative Transitions"> <artwork align="center"><![CDATA[ SILK to SILK with Redundancy: S -> S -> S & !R -> R & ;S -> S -> S NB or MB SILK to Hybrid with Redundancy: S -> S -> S & !R ->;H -> H -> H WB SILK to Hybrid: S -> S -> S ->!H -> H -> H SILK to CELT with Redundancy: S -> S -> S & !R -> C -> C -> C Hybrid to NB or MB SILK with Redundancy: H -> H -> H & !R -> R & ;S -> S -> S Hybrid to WB SILK: H -> H -> H -> c \ + > S -> S -> S Hybrid to CELT with Redundancy: H -> H -> H & !R -> C -> C -> C CELT to SILK with Redundancy: C -> C -> C -> R & ;S -> S -> S CELT to Hybrid with Redundancy: C -> C -> C -> R & |H -> H -> H Key: S SILK-only frame ; SILK decoder reset H Hybrid frame | CELT and SILK decoder resets C CELT-only frame ! CELT decoder reset c CELT overlap + Direct mixing R Redundant CELT frame & Windowed cross-lap ]]></artwork> </figure> The first two and the last two Opus frames in each example are illustrative, i.e., there is no requirement that a stream remain in the same configuration for three consecutive frames before or after a switch. </t> <t> The behavior of transitions without redundancy where PLC is allowed is non-normative. An encoder might still wish to use these transitions if, for example, it doesn't want to add the extra bitrate required for redundancy or if it makes a decision to switch after it has already transmitted the frame that would have had to contain the redundancy. <xref target="nonnormative_transitions"/> illustrates the recommended cross-lapping and decoder resets for these transitions. <figure align="center" anchor="nonnormative_transitions" title="Recommended Non-Normative Transitions"> <artwork align="center"><![CDATA[ SILK to SILK (audio bandwidth change): S -> S -> S ;S -> S -> S NB or MB SILK to Hybrid: S -> S -> S |H -> H -> H SILK to CELT without Redundancy: S -> S -> S -> P & !C -> C -> C Hybrid to NB or MB SILK: H -> H -> H -> c + ;S -> S -> S Hybrid to CELT without Redundancy: H -> H -> H -> P & !C -> C -> C CELT to SILK without Redundancy: C -> C -> C -> P & ;S -> S -> S CELT to Hybrid without Redundancy: C -> C -> C -> P & |H -> H -> H Key: S SILK-only frame ; SILK decoder reset H Hybrid frame | CELT and SILK decoder resets C CELT-only frame ! CELT decoder reset c CELT overlap + Direct mixing P Packet Loss Concealment & Windowed cross-lap ]]></artwork> </figure> Encoders SHOULD NOT use other transitions, e.g., those that involve redundancy in ways not illustrated in <xref target="normative_transitions"/>. </t> </section> </section> </section> <!-- ******************************************************************* --> <!-- ************************** OPUS ENCODER *********************** --> <!-- ******************************************************************* --> <section title="Opus Encoder"> <t> Just like the decoder, the Opus encoder also normally consists of two main blocks: the SILK encoder and the CELT encoder. However, unlike the case of the decoder, a valid (though potentially suboptimal) Opus encoder is not required to support all modes and may thus only include a SILK encoder module or a CELT encoder module. The output bit-stream of the Opus encoding contains bits from the SILK and CELT encoders, though these are not separable due to the use of a range coder. A block diagram of the encoder is illustrated below. 
<figure align="center" anchor="opus-encoder-figure" title="Opus Encoder"> <artwork> <![CDATA[ +------------+ +---------+ | Sample | | SILK |------+ +->| Rate |--->| Encoder | V +-----------+ | | Conversion | | | +---------+ | Optional | | +------------+ +---------+ | Range | ->| High-pass |--+ | Encoder |----> | Filter | | +--------------+ +---------+ | | Bit- +-----------+ | | Delay | | CELT | +---------+ stream +->| Compensation |->| Encoder | ^ | | | |------+ +--------------+ +---------+ ]]> </artwork> </figure> </t> <t> For a normal encoder where both the SILK and the CELT modules are included, an optimal encoder should select which coding mode to use at run-time depending on the conditions. In the reference implementation, the frame size is selected by the application, but the other configuration parameters (number of channels, bandwidth, mode) are automatically selected (unless explicitly overridden by the application) depend on the following: <list style="symbols"> <t>Requested bitrate</t> <t>Input sampling rate</t> <t>Type of signal (speech vs music)</t> <t>Frame size in use</t> </list> The type of signal currently needs to be provided by the application (though it can be changed in real-time). An Opus encoder implementation could also do automatic detection, but since Opus is an interactive codec, such an implementation would likely have to either delay the signal (for non-interactive applications) or delay the mode switching decisions (for interactive applications). </t> <t> When the encoder is configured for voice over IP applications, the input signal is filtered by a high-pass filter to remove the lowest part of the spectrum that contains little speech energy and may contain background noise. This is a second order Auto Regressive Moving Average (i.e., with poles and zeros) filter with a cut-off frequency around 50 Hz. In the future, a music detector may also be used to lower the cut-off frequency when the input signal is detected to be music rather than speech. </t> <section anchor="range-encoder" title="Range Encoder"> <t> The range coder acts as the bit-packer for Opus. It is used in three different ways: to encode <list style="symbols"> <t> Entropy-coded symbols with a fixed probability model using ec_encode() (entenc.c), </t> <t> Integers from 0 to (2**M - 1) using ec_enc_uint() or ec_enc_bits() (entenc.c),</t> <t> Integers from 0 to (ft - 1) (where ft is not a power of two) using ec_enc_uint() (entenc.c). </t> </list> </t> <t> The range encoder maintains an internal state vector composed of the four-tuple (val, rng, rem, ext) representing the low end of the current range, the size of the current range, a single buffered output byte, and a count of additional carry-propagating output bytes. Both val and rng are 32-bit unsigned integer values, rem is a byte value or less than 255 or the special value -1, and ext is an unsigned integer with at least 11 bits. This state vector is initialized at the start of each each frame to the value (0, 2**31, -1, 0). After encoding a sequence of symbols, the value of rng in the encoder should exactly match the value of rng in the decoder after decoding the same sequence of symbols. This is a powerful tool for detecting errors in either an encoder or decoder implementation. The value of val, on the other hand, represents different things in the encoder and decoder, and is not expected to match. </t> <t> The decoder has no analog for rem and ext. These are used to perform carry propagation in the renormalization loop below. 
Each iteration of this loop produces 9 bits of output, consisting of 8 data bits and a carry flag. The encoder cannot determine the final value of the output bytes until it propagates these carry flags. Therefore, the reference implementation buffers a single non-propagating output byte (i.e., one with a value less than 255) in rem and keeps a count of additional propagating (i.e., equal to 255) output bytes in ext. An implementation may choose to use any mathematically equivalent scheme to perform carry propagation. </t> <section anchor="encoding-symbols" title="Encoding Symbols"> <t> The main encoding function is ec_encode() (entenc.c), which encodes symbol k in the current context using the same three-tuple (fl[k], fh[k], ft) as the decoder to describe the range of the symbol (see <xref target="range-decoder"/>). </t> <t> ec_encode() updates the state of the encoder as follows. If fl[k] is greater than zero, then <figure align="center"> <artwork align="center"><![CDATA[
                  rng
val = val + rng - --- * (ft - fl) ,
                   ft

      rng
rng = --- * (fh - fl) .
       ft
]]></artwork> </figure> Otherwise, val is unchanged and <figure align="center"> <artwork align="center"><![CDATA[
            rng
rng = rng - --- * (ft - fh) .
             ft
]]></artwork> </figure> The divisions here are integer division. </t> <section anchor="range-encoder-renorm" title="Renormalization"> <t> After this update, the range is normalized using a procedure very similar to that of <xref target="range-decoder-renorm"/>, implemented by ec_enc_normalize() (entenc.c). The following process is repeated until rng > 2**23. First, the top 9 bits of val, (val>>23), are sent to the carry buffer, described in <xref target="ec_enc_carry_out"/>. Then, the encoder sets <figure align="center"> <artwork align="center"><![CDATA[
val = (val<<8) & 0x7FFFFFFF ,

rng = rng<<8 .
]]></artwork> </figure> </t> </section> <section anchor="ec_enc_carry_out" title="Carry Propagation and Output Buffering"> <t> The function ec_enc_carry_out() (entenc.c) implements carry propagation and output buffering. It takes as input a 9-bit value, c, consisting of 8 data bits and an additional carry bit. If c is equal to the value 255, then ext is simply incremented, and no other state updates are performed. Otherwise, let b = (c>>8) be the carry bit. Then, <list style="symbols"> <t> If the buffered byte rem contains a value other than -1, the encoder outputs the byte (rem + b). Otherwise, if rem is -1, no byte is output. </t> <t> If ext is non-zero, then the encoder outputs ext bytes---all with a value of 0 if b is set, or 255 if b is unset---and sets ext to 0. </t> <t> rem is set to the 8 data bits: <figure align="center"> <artwork align="center"><![CDATA[
rem = c & 255 .
]]></artwork> </figure> </t> </list> </t> </section> </section> <section anchor="encoding-alternate" title="Alternate Encoding Methods"> <t> The reference implementation uses three additional encoding methods that are exactly equivalent to the above, but make assumptions and simplifications that allow for a more efficient implementation. </t> <section anchor="ec_encode_bin" title="ec_encode_bin()"> <t> The first is ec_encode_bin() (entenc.c), defined using the parameter ftb instead of ft. It is mathematically equivalent to calling ec_encode() with ft = (1<<ftb), but avoids using division. </t> </section> <section anchor="ec_enc_bit_logp" title="ec_enc_bit_logp()"> <t> The next is ec_enc_bit_logp() (entenc.c), which encodes a single binary symbol.
The context is described by a single parameter, logp, which is the absolute value of the base-2 logarithm of the probability of a "1". It is mathematically equivalent to calling ec_encode() with the 3-tuple (fl[k] = 0, fh[k] = (1<<logp) - 1, ft = (1<<logp)) if k is 0 and with (fl[k] = (1<<logp) - 1, fh[k] = ft = (1<<logp)) if k is 1. The implementation requires no multiplications or divisions. </t> </section> <section anchor="ec_enc_icdf" title="ec_enc_icdf()"> <t> The last is ec_enc_icdf() (entenc.c), which encodes a single symbol with a table-based context of up to 8 bits. This uses the same icdf table as ec_dec_icdf() from <xref target="ec_dec_icdf"/>. The function is mathematically equivalent to calling ec_encode() with fl[k] = (1<<ftb) - icdf[k-1] (or 0 if k == 0), fh[k] = (1<<ftb) - icdf[k], and ft = (1<<ftb). This only saves a few arithmetic operations over ec_encode_bin(), but allows the encoder to use the same icdf tables as the decoder. </t> </section> </section> <section anchor="encoding-bits" title="Encoding Raw Bits"> <t> The raw bits used by the CELT layer are packed at the end of the buffer using ec_enc_bits() (entenc.c). Because the raw bits may continue into the last byte output by the range coder if there is room in the low-order bits, the encoder must be prepared to merge these values into a single byte. The procedure in <xref target="encoder-finalizing"/> does this in a way that ensures both the range coded data and the raw bits can be decoded successfully. </t> </section> <section anchor="encoding-ints" title="Encoding Uniformly Distributed Integers"> <t> The function ec_enc_uint() (entenc.c) encodes one of ft equiprobable symbols in the range 0 to (ft - 1), inclusive, each with a frequency of 1, where ft may be as large as (2**32 - 1). Like the decoder (see <xref target="ec_dec_uint"/>), it splits up the value into a range coded symbol representing up to 8 of the high bits, and, if necessary, raw bits representing the remainder of the value. </t> <t> ec_enc_uint() takes a two-tuple (t, ft), where t is the value to be encoded, 0 <= t < ft, and ft is not necessarily a power of two. Let ftb = ilog(ft - 1), i.e., the number of bits required to store (ft - 1) in two's complement notation. If ftb is 8 or less, then t is encoded directly using ec_encode() with the three-tuple (t, t + 1, ft). </t> <t> If ftb is greater than 8, then the top 8 bits of t are encoded using the three-tuple (t>>(ftb - 8), (t>>(ftb - 8)) + 1, ((ft - 1)>>(ftb - 8)) + 1), and the remaining bits, (t & ((1<<(ftb - 8)) - 1)), are encoded as raw bits with ec_enc_bits(). </t> </section> <section anchor="encoder-finalizing" title="Finalizing the Stream"> <t> After all symbols are encoded, the stream must be finalized by outputting a value inside the current range. Let end be the integer in the interval [val, val + rng) with the largest number of trailing zero bits, b, such that (end + (1<<b) - 1) is also in the interval [val, val + rng). This choice of end allows the maximum number of trailing bits to be set to arbitrary values while still ensuring the range coded part of the buffer can be decoded correctly. Then, while end is not zero, the top 9 bits of end, i.e., (end>>23), are passed to the carry buffer in accordance with the procedure in <xref target="ec_enc_carry_out"/>, and end is updated via <figure align="center"> <artwork align="center"><![CDATA[ end = (end<<8) & 0x7FFFFFFF .
]]></artwork> </figure> Finally, if the buffered output byte, rem, is neither zero nor the special value -1, or the carry count, ext, is greater than zero, then 9 zero bits are sent to the carry buffer to flush it to the output buffer. When outputting the final byte from the range coder, if it would overlap any raw bits already packed into the end of the output buffer, they should be ORed into the same byte. The bit allocation routines in the CELT layer should ensure that this can be done without corrupting the range coder data so long as end is chosen as described above. If there is any space between the end of the range coder data and the end of the raw bits, it is padded with zero bits. This entire process is implemented by ec_enc_done() (entenc.c). </t> </section> <section anchor="encoder-tell" title="Current Bit Usage"> <t> The bit allocation routines in Opus need to be able to determine a conservative upper bound on the number of bits that have been used to encode the current frame thus far. This drives allocation decisions and ensures that the range coder and raw bits will not overflow the output buffer. This is computed in the reference implementation to whole-bit precision by the function ec_tell() (entcode.h) and to fractional 1/8th bit precision by the function ec_tell_frac() (entcode.c). Like all operations in the range coder, it must be implemented in a bit-exact manner, and must produce exactly the same value returned by the same functions in the decoder after decoding the same symbols. </t> </section> </section> <section title='SILK Encoder'> <t> In many respects the SILK encoder mirrors the SILK decoder described in <xref target='silk_decoder_outline'/>. Details such as the quantization and range coder tables can be found there, while this section describes the high-level design choices that were made. The diagram below shows the basic modules of the SILK encoder. <figure align="center" anchor="silk_encoder_figure" title="SILK Encoder"> <artwork> <![CDATA[ +----------+ +--------+ +---------+ | Sample | | Stereo | | SILK | ------>| Rate |--->| Mixing |--->| Core |----------> Input |Conversion| | | | Encoder | Bitstream +----------+ +--------+ +---------+ ]]> </artwork> </figure> </t> <section title='Sample Rate Conversion'> <t> The input signal's sampling rate is adjusted by a sample rate conversion module so that it matches the SILK internal sampling rate. The input to the sample rate converter is delayed by a number of samples depending on the sample rate ratio, such that the overall delay is constant for all input and output sample rates. </t> </section> <section title='Stereo Mixing'> <t> The stereo mixer is only used for stereo input signals. It converts a stereo left/right signal into an adaptive mid/side representation. The first step is to compute non-adaptive mid/side signals as half the sum and difference between left and right signals. The side signal is then minimized in energy by subtracting a prediction of it based on the mid signal. This prediction works well when the left and right signals exhibit linear dependency, for instance for an amplitude-panned input signal. Like in the decoder, the prediction coefficients are linearly interpolated during the first 8 ms of the frame. The mid signal is always encoded, whereas the residual side signal is only encoded if it has sufficient energy compared to the mid signal's energy. If it has not, the "mid_only_flag" is set without encoding the side signal. 
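As a non-normative illustration, the sketch below shows the non-adaptive part of this mixing together with a simplified form of the side-energy decision; the function name, the fixed energy ratio, and the omission of the side-prediction step are simplifications for the example and do not reflect the reference encoder's actual thresholds. <figure align="center"> <artwork align="center"><![CDATA[
/* Non-normative sketch: non-adaptive mid/side mixing and a simplified
   "mid only" decision.  The prediction of the side signal from the mid
   signal (described above) is omitted, and the 1e-4 energy ratio is a
   hypothetical placeholder, not the reference encoder's threshold. */
static void stereo_mix_sketch(const float *left, const float *right,
                              float *mid, float *side, int n,
                              int *mid_only_flag)
{
   float mid_energy = 0, side_energy = 0;
   int i;
   for (i = 0; i < n; i++) {
      mid[i]  = 0.5f*(left[i] + right[i]);  /* half the sum        */
      side[i] = 0.5f*(left[i] - right[i]);  /* half the difference */
      mid_energy  += mid[i]*mid[i];
      side_energy += side[i]*side[i];
   }
   /* Encode the side signal only if it has sufficient energy compared
      to the mid signal; otherwise set "mid_only_flag". */
   *mid_only_flag = side_energy < 1e-4f*mid_energy;
}
]]></artwork> </figure>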
</t> <t> The predictor coefficients are coded regardless of whether the side signal is encoded. For each frame, two predictor coefficients are computed, one that predicts between low-passed mid and side channels, and one that predicts between high-passed mid and side channels. The low-pass filter is a simple three-tap filter and creates a delay of one sample. The high-pass filtered signal is the difference between the mid signal delayed by one sample and the low-passed signal. Instead of explicitly computing the high-passed signal, it is computationally more efficient to transform the prediction coefficients before applying them to the filtered mid signal, as follows <figure align="center"> <artwork align="center"> <![CDATA[ pred(n) = LP(n) * w0 + HP(n) * w1 = LP(n) * w0 + (mid(n-1) - LP(n)) * w1 = LP(n) * (w0 - w1) + mid(n-1) * w1 ]]> </artwork> </figure> where w0 and w1 are the low-pass and high-pass prediction coefficients, mid(n-1) is the mid signal delayed by one sample, LP(n) and HP(n) are the low-passed and high-passed signals and pred(n) is the prediction signal that is subtracted from the side signal. </t> </section> <section title='SILK Core Encoder'> <t> What follows is a description of the core encoder and its components. For simplicity, the core encoder is referred to simply as the encoder in the remainder of this section. An overview of the encoder is given in <xref target="encoder_figure" />. </t> <figure align="center" anchor="encoder_figure" title="SILK Core Encoder"> <artwork align="center"> <![CDATA[ +---+ +--------------------------------->| | +---------+ | +---------+ | | |Voice | | |LTP |12 | | +-->|Activity |--+ +----->|Scaling |-----------+---->| | | |Detector |3 | | |Control |<--+ | | | | +---------+ | | +---------+ | | | | | | | +---------+ | | | | | | | |Gains | | | | | | | | +-->|Processor|---|---+---|---->| R | | | | | | |11 | | | | a | | \/ | | +---------+ | | | | n | | +---------+ | | +---------+ | | | | g | | |Pitch | | | |LSF | | | | | e | | +->|Analysis |---+ | |Quantizer|---|---|---|---->| | | | | |4 | | | |8 | | | | E |--> | | +---------+ | | +---------+ | | | | n | 2 | | | | 9/\ 10| | | | | c | | | | | | \/ | | | | o | | | +---------+ | | +----------+ | | | | d | | | |Noise | +--|-->|Prediction|--+---|---|---->| e | | +->|Shaping |---|--+ |Analysis |7 | | | | r | | | |Analysis |5 | | | | | | | | | | | +---------+ | | +----------+ | | | | | | | | | /\ | | | | | | | +----------|--|--------+ | | | | | | | | \/ \/ \/ \/ \/ | | | | | +---------+ +------------+ | | | | | | | |Noise | | | -+-------+-----+------>|Prefilter|--------->|Shaping |-->| | 1 | | 6 |Quantization|13 | | +---------+ +------------+ +---+ 1: Input speech signal 2: Range encoded bitstream 3: Voice activity estimate 4: Pitch lags (per 5 ms) and voicing decision (per 20 ms) 5: Noise shaping quantization coefficients - Short term synthesis and analysis noise shaping coefficients (per 5 ms) - Long term synthesis and analysis noise shaping coefficients (per 5 ms and for voiced speech only) - Noise shaping tilt (per 5 ms) - Quantizer gain/step size (per 5 ms) 6: Input signal filtered with analysis noise shaping filters 7: Short and long term prediction coefficients LTP (per 5 ms) and LPC (per 20 ms) 8: LSF quantization indices 9: LSF coefficients 10: Quantized LSF coefficients 11: Processed gains, and synthesis noise shape coefficients 12: LTP state scaling coefficient. 
Controlling error propagation / prediction gain trade-off 13: Quantized signal ]]> </artwork> </figure> <section title='Voice Activity Detection'> <t> The input signal is processed by a Voice Activity Detector (VAD) to produce a measure of voice activity, spectral tilt, and signal-to-noise estimates for each frame. The VAD uses a sequence of half-band filterbanks to split the signal into four subbands: 0...Fs/16, Fs/16...Fs/8, Fs/8...Fs/4, and Fs/4...Fs/2, where Fs is the sampling frequency (8, 12, 16, or 24 kHz). The lowest subband, from 0 - Fs/16, is high-pass filtered with a first-order moving average (MA) filter (with transfer function H(z) = 1-z**(-1)) to reduce the energy at the lowest frequencies. For each frame, the signal energy per subband is computed. In each subband, a noise level estimator tracks the background noise level and a Signal-to-Noise Ratio (SNR) value is computed as the logarithm of the ratio of energy to noise level. Using these intermediate variables, the following parameters are calculated for use in other SILK modules: <list style="symbols"> <t> Average SNR. The average of the subband SNR values. </t> <t> Smoothed subband SNRs. Temporally smoothed subband SNR values. </t> <t> Speech activity level. Based on the average SNR and a weighted average of the subband energies. </t> <t> Spectral tilt. A weighted average of the subband SNRs, with positive weights for the low subbands and negative weights for the high subbands. </t> </list> </t> </section> <section title='Pitch Analysis' anchor='pitch_estimator_overview_section'> <t> The input signal is processed by the open loop pitch estimator shown in <xref target='pitch_estimator_figure' />. <figure align="center" anchor="pitch_estimator_figure" title="Block diagram of the pitch estimator"> <artwork align="center"> <![CDATA[ +--------+ +----------+ |2 x Down| |Time- | +->|sampling|->|Correlator| | | | | | | |4 | +--------+ +----------+ \/ | | 2 +-------+ | | +-->|Speech |5 +---------+ +--------+ | \/ | |Type |-> |LPC | |Down | | +----------+ | | +->|Analysis | +->|sample |-+------------->|Time- | +-------+ | | | | |to 8 kHz| |Correlator|-----------> | +---------+ | +--------+ |__________| 6 | | | |3 | \/ | \/ | +---------+ | +----------+ | |Whitening| | |Time- | -+->|Filter |-+--------------------------->|Correlator|-----------> 1 | | | | 7 +---------+ +----------+ 1: Input signal 2: Lag candidates from stage 1 3: Lag candidates from stage 2 4: Correlation threshold 5: Voiced/unvoiced flag 6: Pitch correlation 7: Pitch lags ]]> </artwork> </figure> The pitch analysis finds a binary voiced/unvoiced classification, and, for frames classified as voiced, four pitch lags per frame - one for each 5 ms subframe - and a pitch correlation indicating the periodicity of the signal. The input is first whitened using a Linear Prediction (LP) whitening filter, where the coefficients are computed through standard Linear Prediction Coding (LPC) analysis. The order of the whitening filter is 16 for best results, but is reduced to 12 for medium complexity and 8 for low complexity modes. The whitened signal is analyzed to find pitch lags for which the time correlation is high. 
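As a non-normative aside, the whitening step itself is a plain FIR analysis filter; the sketch below illustrates it, assuming the LPC coefficients a[0..order-1] have already been computed and treating samples before the start of the buffer as zero. <figure align="center"> <artwork align="center"><![CDATA[
/* Non-normative sketch: LP whitening of the input prior to the pitch
   search, e(n) = x(n) - sum_{k=1..order} a[k-1]*x(n-k).  "order" is
   16, 12, or 8 depending on the complexity mode. */
static void lp_whiten_sketch(const float *x, float *e, int n,
                             const float *a, int order)
{
   int i, k;
   for (i = 0; i < n; i++) {
      float pred = 0;
      for (k = 1; k <= order && k <= i; k++)
         pred += a[k-1]*x[i-k];
      e[i] = x[i] - pred;
   }
}
]]></artwork> </figure>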
The analysis consists of three stages for reducing the complexity: <list style="symbols"> <t>In the first stage, the whitened signal is downsampled to 4 kHz (from 8 kHz) and the current frame is correlated to a signal delayed by a range of lags, starting from a shortest lag corresponding to 500 Hz, to a longest lag corresponding to 56 Hz.</t> <t> The second stage operates on an 8 kHz signal (downsampled from 12, 16, or 24 kHz) and measures time correlations only near the lags corresponding to those that had sufficiently high correlations in the first stage. The resulting correlations are adjusted for a small bias towards short lags to avoid ending up with a multiple of the true pitch lag. The highest adjusted correlation is compared to a threshold depending on: <list style="symbols"> <t> Whether the previous frame was classified as voiced </t> <t> The speech activity level </t> <t> The spectral tilt. </t> </list> If the threshold is exceeded, the current frame is classified as voiced and the lag with the highest adjusted correlation is stored for a final pitch analysis of the highest precision in the third stage. </t> <t> The last stage operates directly on the whitened input signal to compute time correlations for each of the four subframes independently in a narrow range around the lag with highest correlation from the second stage. </t> </list> </t> </section> <section title='Noise Shaping Analysis' anchor='noise_shaping_analysis_overview_section'> <t> The noise shaping analysis finds gains and filter coefficients used in the prefilter and noise shaping quantizer. These parameters are chosen such that they will fulfill several requirements: <list style="symbols"> <t> Balancing quantization noise and bitrate. The quantization gains determine the step size between reconstruction levels of the excitation signal. Therefore, increasing the quantization gain amplifies quantization noise, but also reduces the bitrate by lowering the entropy of the quantization indices. </t> <t> Spectral shaping of the quantization noise; the noise shaping quantizer is capable of reducing quantization noise in some parts of the spectrum at the cost of increased noise in other parts without substantially changing the bitrate. By shaping the noise such that it follows the signal spectrum, it becomes less audible. In practice, best results are obtained by making the shape of the noise spectrum slightly flatter than the signal spectrum. </t> <t> De-emphasizing spectral valleys; by using different coefficients in the analysis and synthesis part of the prefilter and noise shaping quantizer, the levels of the spectral valleys can be decreased relative to the levels of the spectral peaks such as speech formants and harmonics. This reduces the entropy of the signal, which is the difference between the coded signal and the quantization noise, thus lowering the bitrate. </t> <t> Matching the levels of the decoded speech formants to the levels of the original speech formants; an adjustment gain and a first order tilt coefficient are computed to compensate for the effect of the noise shaping quantization on the level and spectral tilt. 
</t> </list> </t> <t> <figure align="center" anchor="noise_shape_analysis_spectra_figure" title="Noise shaping and spectral de-emphasis illustration"> <artwork align="center"> <![CDATA[ / \ ___ | // \\ | // \\ ____ |_// \\___// \\ ____ | / ___ \ / \\ // \\ P |/ / \ \_/ \\_____// \\ o | / \ ____ \ / \\ w | / \___/ \ \___/ ____ \\___ 1 e |/ \ / \ \ r | \_____/ \ \__ 2 | \ | \___ 3 | +----------------------------------------> Frequency 1: Input signal spectrum 2: De-emphasized and level matched spectrum 3: Quantization noise spectrum ]]> </artwork> </figure> <xref target='noise_shape_analysis_spectra_figure' /> shows an example of an input signal spectrum (1). After de-emphasis and level matching, the spectrum has deeper valleys (2). The quantization noise spectrum (3) more or less follows the input signal spectrum, while having slightly less pronounced peaks. The entropy, which provides a lower bound on the bitrate for encoding the excitation signal, is proportional to the area between the de-emphasized spectrum (2) and the quantization noise spectrum (3). Without de-emphasis, the entropy is proportional to the area between input spectrum (1) and quantization noise (3) - clearly higher. </t> <t> The transformation from input signal to de-emphasized signal can be described as a filtering operation with a filter <figure align="center"> <artwork align="center"> <![CDATA[ -1 Wana(z) H(z) = G * ( 1 - c_tilt * z ) * ------- Wsyn(z), ]]> </artwork> </figure> having an adjustment gain G, a first order tilt adjustment filter with tilt coefficient c_tilt, and where <figure align="center"> <artwork align="center"> <![CDATA[ 16 d __ -k -L __ -k Wana(z) = (1 - \ (a_ana(k) * z )*(1 - z * \ b_ana(k) * z ), /_ /_ k=1 k=-d ]]> </artwork> </figure> is the analysis part of the de-emphasis filter, consisting of the short-term shaping filter with coefficients a_ana(k), and the long-term shaping filter with coefficients b_ana(k) and pitch lag L. The parameter d determines the number of long-term shaping filter taps. </t> <t> Similarly, but without the tilt adjustment, the synthesis part can be written as <figure align="center"> <artwork align="center"> <![CDATA[ 16 d __ -k -L __ -k Wsyn(z) = (1 - \ (a_syn(k) * z )*(1 - z * \ b_syn(k) * z ). /_ /_ k=1 k=-d ]]> </artwork> </figure> </t> <t> All noise shaping parameters are computed and applied per subframe of 5 ms. First, an LPC analysis is performed on a windowed signal block of 15 ms. The signal block has a look-ahead of 5 ms relative to the current subframe, and the window is an asymmetric sine window. The LPC analysis is done with the autocorrelation method, with an order of between 8, in lowest-complexity mode, and 16, for best quality. </t> <t> Optionally the LPC analysis and noise shaping filters are warped by replacing the delay elements by first-order allpass filters. This increases the frequency resolution at low frequencies and reduces it at high ones, which better matches the human auditory system and improves quality. The warped analysis and filtering comes at a cost in complexity and is therefore only done in higher complexity modes. </t> <t> The quantization gain is found by taking the square root of the residual energy from the LPC analysis and multiplying it by a value inversely proportional to the coding quality control parameter and the pitch correlation. 
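As a rough, non-normative sketch (the mapping used by the reference encoder is tuned and more involved than what is shown here), this relationship can be pictured as follows: <figure align="center"> <artwork align="center"><![CDATA[
/* Non-normative sketch: derive a quantization gain (step size) from the
   LPC residual energy.  "quality" is the coding quality control
   parameter in (0, 1] and "pitch_corr" the pitch correlation in [0, 1];
   the exact scaling function shown here is purely illustrative. */
#include <math.h>

static float quant_gain_sketch(float resid_energy, float quality,
                               float pitch_corr)
{
   /* Higher quality or stronger periodicity -> smaller step size. */
   return sqrtf(resid_energy)/(0.001f + quality*(1.0f + pitch_corr));
}
]]></artwork> </figure>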
</t> <t> Next the two sets of short-term noise shaping coefficients a_ana(k) and a_syn(k) are obtained by applying different amounts of bandwidth expansion to the coefficients found in the LPC analysis. This bandwidth expansion moves the roots of the LPC polynomial towards the origin, using the formulas <figure align="center"> <artwork align="center"> <![CDATA[ k a_ana(k) = a(k)*g_ana , and k a_syn(k) = a(k)*g_syn , ]]> </artwork> </figure> where a(k) is the k'th LPC coefficient, and the bandwidth expansion factors g_ana and g_syn are calculated as <figure align="center"> <artwork align="center"> <![CDATA[ g_ana = 0.95 - 0.01*C, and g_syn = 0.95 + 0.01*C, ]]> </artwork> </figure> where C is the coding quality control parameter between 0 and 1. Applying more bandwidth expansion to the analysis part than to the synthesis part gives the desired de-emphasis of spectral valleys in between formants. </t> <t> The long-term shaping is applied only during voiced frames. It uses three filter taps, described by <figure align="center"> <artwork align="center"> <![CDATA[ b_ana = F_ana * [0.25, 0.5, 0.25], and b_syn = F_syn * [0.25, 0.5, 0.25]. ]]> </artwork> </figure> For unvoiced frames these coefficients are set to 0. The multiplication factors F_ana and F_syn are chosen between 0 and 1, depending on the coding quality control parameter, as well as the calculated pitch correlation and smoothed subband SNR of the lowest subband. By having F_ana less than F_syn, the pitch harmonics are emphasized relative to the valleys in between the harmonics. </t> <t> The tilt coefficient c_tilt is for unvoiced frames chosen as <figure align="center"> <artwork align="center"> <![CDATA[ c_tilt = 0.25, ]]> </artwork> </figure> and as <figure align="center"> <artwork align="center"> <![CDATA[ c_tilt = 0.25 + 0.2625 * V ]]> </artwork> </figure> for voiced frames, where V is the voice activity level between 0 and 1. </t> <t> The adjustment gain G serves to correct any level mismatch between the original and decoded signals that might arise from the noise shaping and de-emphasis. This gain is computed as the ratio of the prediction gain of the short-term analysis and synthesis filter coefficients. The prediction gain of an LPC synthesis filter is the square root of the output energy when the filter is excited by a unit-energy impulse on the input. An efficient way to compute the prediction gain is by first computing the reflection coefficients from the LPC coefficients through the step-down algorithm, and extracting the prediction gain from the reflection coefficients as <figure align="center"> <artwork align="center"> <![CDATA[ K ___ 2 -0.5 predGain = ( | | 1 - (r_k) ) , k=1 ]]> </artwork> </figure> where r_k is the k'th reflection coefficient. </t> <t> Initial values for the quantization gains are computed as the square-root of the residual energy of the LPC analysis, adjusted by the coding quality control parameter. These quantization gains are later adjusted based on the results of the prediction analysis. </t> </section> <section title='Prediction Analysis' anchor='pred_ana_overview_section'> <t> The prediction analysis is performed in one of two ways depending on how the pitch estimator classified the frame. The processing for voiced and unvoiced speech is described in <xref target='pred_ana_voiced_overview_section' /> and <xref target='pred_ana_unvoiced_overview_section' />, respectively. 
Inputs to this function include the pre-whitened signal from the pitch estimator (see <xref target='pitch_estimator_overview_section'/>). </t> <section title='Voiced Speech' anchor='pred_ana_voiced_overview_section'> <t> For a frame of voiced speech the pitch pulses will remain dominant in the pre-whitened input signal. Further whitening is desirable as it leads to higher quality at the same available bitrate. To achieve this, a Long-Term Prediction (LTP) analysis is carried out to estimate the coefficients of a fifth-order LTP filter for each of four subframes. The LTP coefficients are quantized using the method described in <xref target='ltp_quantizer_overview_section'/>, and the quantized LTP coefficients are used to compute the LTP residual signal. This LTP residual signal is the input to an LPC analysis where the LPC coefficients are estimated using Burg's method <xref target="Burg"/>, such that the residual energy is minimized. The estimated LPC coefficients are converted to a Line Spectral Frequency (LSF) vector and quantized as described in <xref target='lsf_quantizer_overview_section'/>. After quantization, the quantized LSF vector is converted back to LPC coefficients using the full procedure in <xref target="silk_nlsfs"/>. By using quantized LTP coefficients and LPC coefficients derived from the quantized LSF coefficients, the encoder remains fully synchronized with the decoder. The quantized LPC and LTP coefficients are also used to filter the input signal and measure residual energy for each of the four subframes. </t> </section> <section title='Unvoiced Speech' anchor='pred_ana_unvoiced_overview_section'> <t> For a speech signal that has been classified as unvoiced, there is no need for LTP filtering, as it has already been determined that the pre-whitened input signal is not periodic enough within the allowed pitch period range for LTP analysis to be worth the cost in terms of complexity and bitrate. The pre-whitened input signal is therefore discarded, and instead the input signal is used for LPC analysis using Burg's method. The resulting LPC coefficients are converted to an LSF vector and quantized as described in the following section. They are then transformed back to obtain quantized LPC coefficients, which are then used to filter the input signal and measure residual energy for each of the four subframes. </t> <section title="Burg's Method"> <t> The main purpose of linear prediction in SILK is to reduce the bitrate by minimizing the residual energy. At least at high bitrates, perceptual aspects are handled independently by the noise shaping filter. Burg's method is used because it provides higher prediction gain than the autocorrelation method and, unlike the covariance method, produces stable filters (assuming numerical errors don't spoil that). SILK's implementation of Burg's method is also computationally faster than the autocovariance method. The implementation of Burg's method differs from traditional implementations in two aspects. The first difference is that it operates on autocorrelations, similar to the Schur algorithm <xref target="Schur"/>, but with a simple update to the autocorrelations after finding each reflection coefficient to make the result identical to Burg's method. This brings down the complexity of Burg's method to near that of the autocorrelation method. The second difference is that the signal in each subframe is scaled by the inverse of the residual quantization step size. 
Subframes with a small quantization step size will on average spend more bits for a given amount of residual energy than subframes with a large step size. Without scaling, Burg's method minimizes the total residual energy in all subframes, which doesn't necessarily minimize the total number of bits needed for coding the quantized residual. The residual energy of the scaled subframes is a better measure for that number of bits. </t> </section> </section> </section> <section title='LSF Quantization' anchor='lsf_quantizer_overview_section'> <t> Unlike many other speech codecs, SILK uses variable bitrate coding for the LSFs. This improves the average rate-distortion (R-D) tradeoff and reduces outliers. The variable bitrate coding minimizes a linear combination of the weighted quantization errors and the bitrate. The weights for the quantization errors are the Inverse Harmonic Mean Weighting (IHMW) function proposed by Laroia et al. (see <xref target="laroia-icassp" />). These weights are referred to here as Laroia weights. </t> <t> The LSF quantizer consists of two stages. The first stage is an (unweighted) vector quantizer (VQ), with a codebook size of 32 vectors. The quantization errors for the codebook vectors are sorted, and for the N best vectors a second stage quantizer is run. By varying the number N, a tradeoff is made between R-D performance and computational efficiency. For each of the N codebook vectors the Laroia weights corresponding to that vector (and not to the input vector) are calculated. Then the residual between the input LSF vector and the codebook vector is scaled by the square roots of these Laroia weights. This scaling partially normalizes error sensitivity for the residual vector, so that a uniform quantizer with fixed step sizes can be used in the second stage without too much performance loss. By scaling with Laroia weights determined from the first-stage codebook vector, the process can be reversed in the decoder. </t> <t> The second stage uses predictive delayed decision scalar quantization. The quantization error is weighted by Laroia weights determined from the LSF input vector. The predictor multiplies the previous quantized residual value by a prediction coefficient that depends on the vector index from the first stage VQ and on the location in the LSF vector. The prediction is subtracted from the LSF residual value before quantizing the result, and added back afterwards. This subtraction can be interpreted as shifting the quantization levels of the scalar quantizer, and as a result the quantization error of each value depends on the quantization decision of the previous value. This dependency is exploited by the delayed decision mechanism to search for a quantization sequence with the best R-D performance using a Viterbi-like algorithm <xref target="Viterbi"/>. The quantizer processes the residual LSF vector in reverse order (i.e., it starts with the highest residual LSF value). This is done because the prediction works slightly better in the reverse direction. </t> <t> The quantization index of the first stage is entropy coded. The quantization sequence from the second stage is also entropy coded, where for each element the probability table is chosen depending on the vector index from the first stage and the location of that element in the LSF vector. </t> <section title='LSF Stabilization' anchor='lsf_stabilizer_overview_section'> <t> If the input is stable, finding the best candidate usually results in a quantized vector that is also stable.
Because of the two-stage approach, however, it is possible that the best quantization candidate is unstable. The encoder applies the same stabilization procedure used by the decoder (see <xref target="silk_nlsf_stabilization"/>) to ensure the LSF parameters are within their valid range, sorted in increasing order, and have minimum distances between each other and the border values. </t> </section> </section> <section title='LTP Quantization' anchor='ltp_quantizer_overview_section'> <t> For voiced frames, the prediction analysis described in <xref target='pred_ana_voiced_overview_section' /> resulted in four sets (one set per subframe) of five LTP coefficients, plus four weighting matrices. The LTP coefficients for each subframe are quantized using entropy-constrained vector quantization. A total of three vector codebooks are available for quantization, with different rate-distortion trade-offs. The three codebooks have 10, 20, and 40 vectors and average rates of about 3, 4, and 5 bits per vector, respectively. Consequently, the first codebook has larger average quantization distortion at a lower rate, whereas the last codebook has smaller average quantization distortion at a higher rate. Given the weighting matrix W_ltp and LTP vector b, the weighted rate-distortion measure for a codebook vector cb_i with rate r_i is given by <figure align="center"> <artwork align="center"> <![CDATA[ RD = u * (b - cb_i)' * W_ltp * (b - cb_i) + r_i, ]]> </artwork> </figure> where u is a fixed, heuristically-determined parameter balancing the distortion and rate. Which codebook gives the best performance for a given LTP vector depends on the weighting matrix for that LTP vector. For example, for a low-valued W_ltp, it is advantageous to use the codebook with 10 vectors as it has a lower average rate. For a large W_ltp, on the other hand, it is often better to use the codebook with 40 vectors, as it is more likely to contain the best codebook vector. The weighting matrix W_ltp depends mostly on two aspects of the input signal. The first is the periodicity of the signal; the more periodic, the larger W_ltp. The second is the change in signal energy in the current subframe, relative to the signal one pitch lag earlier. A decaying energy leads to a larger W_ltp than an increasing energy. Both aspects fluctuate relatively slowly, which often causes the W_ltp matrices for different subframes of one frame to be similar. Because of this, one of the three codebooks typically gives good performance for all subframes, and therefore the codebook search for the subframe LTP vectors is constrained to only allow codebook vectors to be chosen from the same codebook, resulting in a rate reduction. </t> <t> To find the best codebook, each of the three vector codebooks is used to quantize all subframe LTP vectors and produce a combined weighted rate-distortion measure for each vector codebook. The vector codebook with the lowest combined rate-distortion over all subframes is chosen. The quantized LTP vectors are used in the noise shaping quantizer, and the index of the codebook plus the four indices for the four subframe codebook vectors are passed on to the range encoder. </t> </section> <section title='Prefilter'> <t> In the prefilter, the input signal is filtered using the spectral valley de-emphasis filter coefficients from the noise shaping analysis (see <xref target='noise_shaping_analysis_overview_section'/>).
By applying only the noise shaping analysis filter to the input signal, it provides the input to the noise shaping quantizer. </t> </section> <section title='Noise Shaping Quantizer'> <t> The noise shaping quantizer independently shapes the signal and coding noise spectra to obtain a perceptually higher quality at the same bitrate. </t> <t> The prefilter output signal is multiplied with a compensation gain G computed in the noise shaping analysis. Then the output of a synthesis shaping filter is added, and the output of a prediction filter is subtracted to create a residual signal. The residual signal is multiplied by the inverse quantized quantization gain from the noise shaping analysis, and input to a scalar quantizer. The quantization indices of the scalar quantizer represent a signal of pulses that is input to the pyramid range encoder. The scalar quantizer also outputs a quantization signal, which is multiplied by the quantized quantization gain from the noise shaping analysis to create an excitation signal. The output of the prediction filter is added to the excitation signal to form the quantized output signal y(n). The quantized output signal y(n) is input to the synthesis shaping and prediction filters. </t> <t> Optionally the noise shaping quantizer operates in a delayed decision mode. In this mode it uses a Viterbi algorithm to keep track of multiple rounding choices in the quantizer and select the best one after a delay of 32 samples. This improves the rate/distortion performance of the quantizer. </t> </section> <section title='Constant Bitrate Mode'> <t> SILK was designed to run in Variable Bitrate (VBR) mode. However the reference implementation also has a Constant Bitrate (CBR) mode for SILK. In CBR mode SILK will attempt to encode each packet with no more than the allowed number of bits. The Opus wrapper code then pads the bitstream if any unused bits are left in SILK mode, or encodes the high band with the remaining number of bits in Hybrid mode. The number of payload bits is adjusted by changing the quantization gains and the rate/distortion tradeoff in the noise shaping quantizer, in an iterative loop around the noise shaping quantizer and entropy coding. Compared to the SILK VBR mode, the CBR mode has lower audio quality at a given average bitrate, and also has higher computational complexity. </t> </section> </section> </section> <section title="CELT Encoder"> <t> Most of the aspects of the CELT encoder can be directly derived from the description of the decoder. For example, the filters and rotations in the encoder are simply the inverse of the operation performed by the decoder. Similarly, the quantizers generally optimize for the mean square error (because noise shaping is part of the bit-stream itself), so no special search is required. For this reason, only the less straightforward aspects of the encoder are described here. </t> <section anchor="pitch-prefilter" title="Pitch Prefilter"> <t>The pitch prefilter is applied after the pre-emphasis. It is applied in such a way as to be the inverse of the decoder's post-filter. The main non-obvious aspect of the prefilter is the selection of the pitch period. 
The pitch search should be optimized for the following criteria: <list style="symbols"> <t>continuity: it is important that the pitch period does not change abruptly between frames; and</t> <t>avoidance of pitch multiples: when the period used is a multiple of the real period (lower frequency fundamental), the post-filter loses most of its ability to reduce noise</t> </list> </t> </section> <section anchor="normalization" title="Bands and Normalization"> <t> The MDCT output is divided into bands that are designed to match the ear's critical bands for the smallest (2.5 ms) frame size. The larger frame sizes use integer multiples of the 2.5 ms layout. For each band, the encoder computes the energy that will later be encoded. Each band is then normalized by the square root of the <spanx style="strong">unquantized</spanx> energy, such that each band now forms a unit vector X. The energy and the normalization are computed by compute_band_energies() and normalise_bands() (bands.c), respectively. </t> </section> <section anchor="energy-quantization" title="Energy Envelope Quantization"> <t> Energy quantization (both coarse and fine) can be easily understood from the decoding process. For all useful bitrates, the coarse quantizer always chooses the quantized log energy value that minimizes the error for each band. Only at very low rates does the encoder allow larger errors to minimize the rate and avoid using more bits than are available. When the available CPU resources allow it, it is best to try encoding the coarse energy both with and without inter-frame prediction such that the best prediction mode can be selected. The optimal mode depends on the coding rate, the available bitrate, and the current rate of packet loss. </t> <t>The fine energy quantizer always chooses the quantized log energy value that minimizes the error for each band because the rate of the fine quantization depends only on the bit allocation and not on the values that are coded. </t> </section> <!-- Energy quant --> <section title="Bit Allocation"> <t>The encoder must use exactly the same bit allocation process as used by the decoder and described in <xref target="allocation"/>. The three mechanisms that can be used by the encoder to adjust the bitrate on a frame-by-frame basis are band boost, allocation trim, and band skipping. </t> <section title="Band Boost"> <t>The reference encoder makes a decision to boost a band when the energy of that band is significantly higher than that of the neighboring bands. Let E_j be the log-energy of band j; we define <list> <t>D_j = 2*E_j - E_(j-1) - E_(j+1) </t> </list> The allocation of band j is boosted once if D_j > t1 and twice if D_j > t2. For LM>=1, t1=2 and t2=4, while for LM<1, t1=3 and t2=5. </t> </section> <section title="Allocation Trim"> <t>The allocation trim is a value between 0 and 10 (inclusive) that controls the allocation balance between the low and high frequencies. The encoder starts with a safe "default" of 5 and deviates from that default in two different ways. First, the trim can deviate by +/- 2 depending on the spectral tilt of the input signal. For signals with more low frequencies, the trim is increased by up to 2, while for signals with more high frequencies, the trim is decreased by up to 2. For stereo inputs, the trim value can be decreased by up to 4 when the inter-channel correlation at low frequencies (the first 8 bands) is high.
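The following non-normative sketch gives the general shape of that decision; the tilt and correlation inputs and their exact mapping to a trim offset are hypothetical, and the reference encoder's actual heuristics differ in detail. <figure align="center"> <artwork align="center"><![CDATA[
/* Non-normative sketch of an allocation trim decision.  "tilt" is a
   hypothetical spectral tilt measure in [-1, 1] (positive when the
   signal has more low-frequency energy) and "lf_corr" a hypothetical
   inter-channel correlation over the first 8 bands in [0, 1]. */
static int alloc_trim_sketch(float tilt, float lf_corr, int stereo)
{
   int trim = 5;                      /* safe default             */
   int tilt_adj = (int)(2*tilt + (tilt >= 0 ? 0.5f : -0.5f));
   if (tilt_adj >  2) tilt_adj =  2;  /* deviate by at most +/- 2 */
   if (tilt_adj < -2) tilt_adj = -2;
   trim += tilt_adj;
   if (stereo) {                      /* up to -4 for highly      */
      int corr_adj = (int)(4*lf_corr + 0.5f); /* correlated stereo */
      if (corr_adj > 4) corr_adj = 4;
      trim -= corr_adj;
   }
   if (trim < 0)  trim = 0;           /* trim is coded in 0..10   */
   if (trim > 10) trim = 10;
   return trim;
}
]]></artwork> </figure>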
</t> </section> <section title="Band Skipping"> <t>The encoder uses band skipping to ensure that the shape of the bands is only coded if there is at least 1/2 bit per sample available for the PVQ. If not, then no bit is allocated and folding is used instead. To ensure continuity in the allocation, some amount of hysteresis is added to the process, such that a band that received PVQ bits in the previous frame only needs 7/16 bit/sample to be coded for the current frame, while a band that did not receive PVQ bits in the previous frames needs at least 9/16 bit/sample to be coded.</t> </section> </section> <section title="Stereo Decisions"> <t>Because CELT applies mid-side stereo coupling in the normalized domain, it does not suffer from important stereo image problems even when the two channels are completely uncorrelated. For this reason it is always safe to use stereo coupling on any audio frame. That being said, there are some frames for which dual (independent) stereo is still more efficient. This decision is made by comparing the estimated entropy with and without coupling over the first 13 bands, taking into account the fact that all bands with more than two MDCT bins require one extra degree of freedom when coded in mid-side. Let L1_ms and L1_lr be the L1-norm of the mid-side vector and the L1-norm of the left-right vector, respectively. The decision to use mid-side is made if and only if <figure align="center"> <artwork align="center"><![CDATA[ L1_ms L1_lr -------- < ----- bins + E bins ]]></artwork> </figure> where bins is the number of MDCT bins in the first 13 bands and E is the number of extra degrees of freedom for mid-side coding. For LM>1, E=13, otherwise E=5. </t> <t>The reference encoder decides on the intensity stereo threshold based on the bitrate alone. After taking into account the frame size by subtracting 80 bits per frame for coarse energy, the first band using intensity coding is as follows: </t> <texttable anchor="intensity-thresholds" title="Thresholds for Intensity Stereo"> <ttcol align='center'>bitrate (kb/s)</ttcol> <ttcol align='center'>start band</ttcol> <c><35</c> <c>8</c> <c>35-50</c> <c>12</c> <c>50-68</c> <c>16</c> <c>84-84</c> <c>18</c> <c>84-102</c> <c>19</c> <c>102-130</c> <c>20</c> <c>>130</c> <c>disabled</c> </texttable> </section> <section title="Time-Frequency Decision"> <t> The choice of time-frequency resolution used in <xref target="tf-change"></xref> is based on R-D optimization. The distortion is the L1-norm (sum of absolute values) of each band after each TF resolution under consideration. The L1 norm is used because it represents the entropy for a Laplacian source. The number of bits required to code a change in TF resolution between two bands is higher than the cost of having those two bands use the same resolution, which is what requires the R-D optimization. The optimal decision is computed using the Viterbi algorithm. See tf_analysis() in celt/celt.c. </t> </section> <section title="Spreading Values Decision"> <t> The choice of the spreading value in <xref target="spread values"></xref> has an impact on the nature of the coding noise introduced by CELT. The larger the f_r value, the lower the impact of the rotation, and the more tonal the coding noise. The more tonal the signal, the more tonal the noise should be, so the CELT encoder determines the optimal value for f_r by estimating how tonal the signal is. The tonality estimate is based on discrete pdf (4-bin histogram) of each band. 
Bands that have a large number of small values are considered more tonal and a decision is made by combining all bands with more than 8 samples. See spreading_decision() in celt/bands.c. </t> </section> <section anchor="pvq" title="Spherical Vector Quantization"> <t>CELT uses a Pyramid Vector Quantization (PVQ) <xref target="PVQ"></xref> codebook for quantizing the details of the spectrum in each band that have not been predicted by the pitch predictor. The PVQ codebook consists of all sums of K signed pulses in a vector of N samples, where two pulses at the same position are required to have the same sign. Thus the codebook includes all integer codevectors y of N dimensions that satisfy sum(abs(y(j))) = K. </t> <t> In bands where there are sufficient bits allocated, PVQ is used to encode the unit vector that results from the normalization in <xref target="normalization"></xref> directly. Given a PVQ codevector y, the unit vector X is obtained as X = y/||y||, where ||.|| denotes the L2 norm. </t> <section anchor="pvq-search" title="PVQ Search"> <t> The search for the best codevector y is performed by alg_quant() (vq.c). There are several possible approaches to the search, with a trade-off between quality and complexity. The method used in the reference implementation computes an initial codeword y0 by projecting the normalized spectrum X onto the codebook pyramid of K-1 pulses: </t> <t> y0 = truncate_towards_zero((K-1) * X / sum(abs(X))) </t> <t> Depending on N, K, and the input data, the initial codeword y0 may contain from 0 to K-1 non-zero values. All the remaining pulses, with the exception of the last one, are found iteratively with a greedy search that minimizes the cost function J, the negated normalized correlation between y and X: <figure align="center"> <artwork align="center"><![CDATA[
       T
J = -X  * y / ||y||
]]></artwork> </figure> </t> <t> The search described above is considered to be a good trade-off between quality and computational cost. However, there are other possible ways to search the PVQ codebook and implementers MAY use any other search method. See alg_quant() in celt/vq.c. </t> </section> <section anchor="cwrs-encoder" title="PVQ Encoding"> <t> The vector to encode, X, is converted into an index i such that 0 <= i < V(N,K) as follows. Let i = 0 and k = 0. Then for j = (N - 1) down to 0, inclusive, do: <list style="numbers"> <t> If k > 0, set i = i + (V(N-j-1,k-1) + V(N-j,k-1))/2. </t> <t>Set k = k + abs(X[j]).</t> <t> If X[j] < 0, set i = i + (V(N-j-1,k) + V(N-j,k))/2. </t> </list> </t> <t> The index i is then encoded using the procedure in <xref target="encoding-ints"/> with ft = V(N,K). </t> </section> </section> </section> </section> <section anchor="conformance" title="Conformance"> <t> It is our intention to allow the greatest possible freedom in implementing the specification. For this reason, outside of the exceptions noted in this section, conformance is defined through the reference implementation of the decoder provided in <xref target="ref-implementation"/>. Although this document includes an English description of the codec, should the description contradict the source code of the reference implementation, the latter shall take precedence.
</t> <t> Compliance with this specification means that in addition to following the normative keywords in this document, a decoder's output MUST also be within the thresholds specified by the opus_compare.c tool (included with the code) when compared to the reference implementation for each of the test vectors provided (see <xref target="test-vectors"></xref>) and for each output sampling rate and channel count supported. In addition, a compliant decoder implementation MUST have the same final range decoder state as that of the reference decoder. It is therefore RECOMMENDED that the decoder implement the same functional behavior as the reference. A decoder implementation is not required to support all output sampling rates or all output channel counts. </t> <section title="Testing"> <t> Using the reference code provided in <xref target="ref-implementation"></xref>, a test vector can be decoded with <list> <t>opus_demo -d <rate> <channels> testvectorX.bit testX.out</t> </list> where <rate> is the sampling rate and can be 8000, 12000, 16000, 24000, or 48000, and <channels> is 1 for mono or 2 for stereo. </t> <t> If the range decoder state is incorrect for one of the frames, the decoder will exit with "Error: Range coder state mismatch between encoder and decoder". If the decoder succeeds, then the output can be compared with the "reference" output with <list> <t>opus_compare -s -r <rate> testvectorX.dec testX.out</t> </list> for stereo or <list> <t>opus_compare -r <rate> testvectorX.dec testX.out</t> </list> for mono. </t> <t>In addition to indicating whether the test vector comparison passes, the opus_compare tool outputs an "Opus quality metric" that indicates how well the tested decoder matches the reference implementation. A quality of 0 corresponds to the passing threshold, while a quality of 100 is the highest possible value and means that the output of the tested decoder is identical to the reference implementation. The passing threshold (quality 0) was calibrated in such a way that it corresponds to additive white noise with a 48 dB SNR (similar to what can be obtained on a cassette deck). It is still possible for an implementation to sound very good with such a low quality measure (e.g. if the deviation is due to inaudible phase distortion), but unless this is verified by listening tests, it is RECOMMENDED that implementations achieve a quality above 90 for 48 kHz decoding. For other sampling rates, it is normal for the quality metric to be lower (typically as low as 50 even for a good implementation) because of harmless mismatch with the delay and phase of the internal sampling rate conversion. </t> <t> On POSIX environments, the run_vectors.sh script can be used to verify all test vectors. This can be done with <list> <t>run_vectors.sh <exec path> <vector path> <rate></t> </list> where <exec path> is the directory where the opus_demo and opus_compare executables are built and <vector path> is the directory containing the test vectors. </t> </section> <section anchor="opus-custom" title="Opus Custom"> <t> Opus Custom is an OPTIONAL part of the specification that is defined to handle special sample rates and frame rates that are not supported by the main Opus specification. Use of Opus Custom is discouraged for all but very special applications for which a frame size different from 2.5, 5, 10, or 20 ms is needed (for either complexity or latency reasons). 
Because Opus Custom is optional, streams encoded using Opus Custom cannot be expected to be decodable by all Opus implementations. Also, because no in-band mechanism exists for specifying the sampling rate and frame size of Opus Custom streams, out-of-band signaling is required. In Opus Custom operation, only the CELT layer is available, using the opus_custom_* function calls in opus_custom.h. </t> </section> </section> <section anchor="security" title="Security Considerations"> <t> Implementations of the Opus codec need to take appropriate security considerations into account, as outlined in <xref target="DOS"/>. It is extremely important for the decoder to be robust against malicious payloads. Malicious payloads must not cause the decoder to overrun its allocated memory or to take an excessive amount of resources to decode. Although problems in encoders are typically rarer, the same applies to the encoder. Malicious audio streams must not cause the encoder to misbehave because this would allow an attacker to attack transcoding gateways. </t> <t> The reference implementation contains no known buffer overflow or cases where a specially crafted packet or audio segment could cause a significant increase in CPU load. However, on certain CPU architectures where denormalized floating-point operations are much slower than normal floating-point operations, it is possible for some audio content (e.g., silence or near-silence) to cause an increase in CPU load. Denormals can be introduced by reordering operations in the compiler and depend on the target architecture, so it is difficult to guarantee that an implementation avoids them. For architectures on which denormals are problematic, adding very small floating-point offsets to the affected signals to prevent significant numbers of denormalized operations is RECOMMENDED. Alternatively, it is often possible to configure the hardware to treat denormals as zero (DAZ). No such issue exists for the fixed-point reference implementation. </t> <t>The reference implementation was validated in the following conditions: <list style="numbers"> <t> Sending the decoder valid packets generated by the reference encoder and verifying that the decoder's final range coder state matches that of the encoder. </t> <t> Sending the decoder packets generated by the reference encoder and then subjected to random corruption. </t> <t>Sending the decoder random packets.</t> <t> Sending the decoder packets generated by a version of the reference encoder modified to make random coding decisions (internal fuzzing), including mode switching, and verifying that the range coder final states match. </t> </list> In all of the conditions above, both the encoder and the decoder were run inside the <xref target="Valgrind">Valgrind</xref> memory debugger, which tracks reads and writes to invalid memory regions as well as the use of uninitialized memory. There were no errors reported on any of the tested conditions. </t> </section> <section title="IANA Considerations"> <t> This document has no actions for IANA. </t> </section> <section anchor="Acknowledgements" title="Acknowledgements"> <t> Thanks to all other developers, including Raymond Chen, Soeren Skak Jensen, Gregory Maxwell, Christopher Montgomery, and Karsten Vandborg Soerensen. We would also like to thank Igor Dyakonov, Jan Skoglund, and Christian Hoene for their help with subjective testing of the Opus codec. 
Thanks to Ralph Giles, John Ridges, Ben Schwartz, Keith Yan, Christian Hoene, Kat Walsh, and many others on the Opus and CELT mailing lists for their bug reports and feedback. </t> </section> <section title="Copying Conditions"> <t>The authors agree to grant third parties the irrevocable right to copy, use and distribute the work (excluding Code Components available under the simplified BSD license), with or without modification, in any medium, without royalty, provided that, unless separate permission is granted, redistributed modified works do not contain misleading author, version, name of work, or endorsement information.</t> </section> </middle> <back> <references title="Normative References"> <reference anchor="rfc2119"> <front> <title>Key words for use in RFCs to Indicate Requirement Levels </title> <author initials="S." surname="Bradner" fullname="Scott Bradner"></author> </front> <seriesInfo name="RFC" value="2119" /> </reference> </references> <references title="Informative References"> <reference anchor='requirements'> <front> <title>Requirements for an Internet Audio Codec</title> <author initials='J.-M.' surname='Valin' fullname='J.-M. Valin'> <organization /></author> <author initials='K.' surname='Vos' fullname='K. Vos'> <organization /></author> <author> <organization>IETF</organization></author> <date year='2011' month='August' /> <abstract> <t>This document provides specific requirements for an Internet audio codec. These requirements address quality, sample rate, bitrate, and packet-loss robustness, as well as other desirable properties. </t></abstract></front> <seriesInfo name='RFC' value='6366' /> <format type='TXT' target='http://tools.ietf.org/rfc/rfc6366.txt' /> </reference> <?rfc include="http://xml.resource.org/public/rfc/bibxml/reference.RFC.3550.xml"?> <?rfc include="http://xml.resource.org/public/rfc/bibxml/reference.RFC.3533.xml"?> <reference anchor='SILK' target='http://developer.skype.com/silk'> <front> <title>SILK Speech Codec</title> <author initials='K.' surname='Vos' fullname='K. Vos'> <organization /></author> <author initials='S.' surname='Jensen' fullname='S. Jensen'> <organization /></author> <author initials='K.' surname='Soerensen' fullname='K. Soerensen'> <organization /></author> <date year='2010' month='March' /> <abstract> <t></t> </abstract></front> <seriesInfo name='Internet-Draft' value='draft-vos-silk-01' /> <format type='TXT' target='http://tools.ietf.org/html/draft-vos-silk-01' /> </reference> <reference anchor="laroia-icassp"> <front> <title abbrev="Robust and Efficient Quantization of Speech LSP"> Robust and Efficient Quantization of Speech LSP Parameters Using Structured Vector Quantization </title> <author initials="R.L." surname="Laroia" fullname="R."> <organization/> </author> <author initials="N.P." surname="Phamdo" fullname="N."> <organization/> </author> <author initials="N.F." surname="Farvardin" fullname="N."> <organization/> </author> </front> <seriesInfo name="ICASSP-1991, Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, pp. 641-644, October" value="1991"/> </reference> <reference anchor='CELT' target='http://celt-codec.org/'> <front> <title>Constrained-Energy Lapped Transform (CELT) Codec</title> <author initials='J-M.' surname='Valin' fullname='J-M. Valin'> <organization /></author> <author initials='T.B.' surname='Terriberry' fullname='Timothy B. Terriberry'> <organization /></author> <author initials='G.' surname='Maxwell' fullname='G. Maxwell'> <organization /></author> <author initials='C.' 
surname='Montgomery' fullname='C. Montgomery'> <organization /></author> <date year='2010' month='July' /> <abstract> <t></t> </abstract></front> <seriesInfo name='Internet-Draft' value='draft-valin-celt-codec-02' /> <format type='TXT' target='http://tools.ietf.org/html/draft-valin-celt-codec-02' /> </reference> <reference anchor='SRTP-VBR'> <front> <title>Guidelines for the use of Variable Bit Rate Audio with Secure RTP</title> <author initials='C.' surname='Perkins' fullname='K. Vos'> <organization /></author> <author initials='J.M.' surname='Valin' fullname='J.M. Valin'> <organization /></author> <date year='2011' month='July' /> <abstract> <t></t> </abstract></front> <seriesInfo name='RFC' value='6562' /> <format type='TXT' target='http://tools.ietf.org/html/rfc6562' /> </reference> <reference anchor='DOS'> <front> <title>Internet Denial-of-Service Considerations</title> <author initials='M.' surname='Handley' fullname='M. Handley'> <organization /></author> <author initials='E.' surname='Rescorla' fullname='E. Rescorla'> <organization /></author> <author> <organization>IAB</organization></author> <date year='2006' month='December' /> <abstract> <t>This document provides an overview of possible avenues for denial-of-service (DoS) attack on Internet systems. The aim is to encourage protocol designers and network engineers towards designs that are more robust. We discuss partial solutions that reduce the effectiveness of attacks, and how some solutions might inadvertently open up alternative vulnerabilities. This memo provides information for the Internet community.</t></abstract></front> <seriesInfo name='RFC' value='4732' /> <format type='TXT' octets='91844' target='ftp://ftp.isi.edu/in-notes/rfc4732.txt' /> </reference> <reference anchor="Martin79"> <front> <title>Range encoding: An algorithm for removing redundancy from a digitised message</title> <author initials="G.N.N." surname="Martin" fullname="G. Nigel N. Martin"><organization/></author> <date year="1979" /> </front> <seriesInfo name="Proc. Institution of Electronic and Radio Engineers International Conference on Video and Data Recording" value="" /> </reference> <reference anchor="coding-thesis"> <front> <title>Source coding algorithms for fast data compression</title> <author initials="R." surname="Pasco" fullname=""><organization/></author> <date month="May" year="1976" /> </front> <seriesInfo name="Ph.D. thesis" value="Dept. of Electrical Engineering, Stanford University" /> </reference> <reference anchor="PVQ"> <front> <title>A Pyramid Vector Quantizer</title> <author initials="T." surname="Fischer" fullname=""><organization/></author> <date month="July" year="1986" /> </front> <seriesInfo name="IEEE Trans. on Information Theory, Vol. 32" value="pp. 568-583" /> </reference> <reference anchor="Kabal86"> <front> <title>The Computation of Line Spectral Frequencies Using Chebyshev Polynomials</title> <author initials="P." surname="Kabal" fullname="P. Kabal"><organization/></author> <author initials="R." surname="Ramachandran" fullname="R. P. Ramachandran"><organization/></author> <date month="December" year="1986" /> </front> <seriesInfo name="IEEE Trans. Acoustics, Speech, Signal Processing, vol. 34, no. 6" value="pp. 
1419-1426" /> </reference> <reference anchor="Valgrind" target="http://valgrind.org/"> <front> <title>Valgrind website</title> <author></author> </front> </reference> <reference anchor="Google-NetEQ" target="http://code.google.com/p/webrtc/source/browse/trunk/src/modules/audio_coding/NetEQ/main/source/?r=583"> <front> <title>Google NetEQ code</title> <author></author> </front> </reference> <reference anchor="Google-WebRTC" target="http://code.google.com/p/webrtc/"> <front> <title>Google WebRTC code</title> <author></author> </front> </reference> <reference anchor="Opus-git" target="git://git.xiph.org/opus.git"> <front> <title>Opus Git Repository</title> <author></author> </front> </reference> <reference anchor="Opus-website" target="http://opus-codec.org/"> <front> <title>Opus website</title> <author></author> </front> </reference> <reference anchor="Vorbis-website" target="http://xiph.org/vorbis/"> <front> <title>Vorbis website</title> <author></author> </front> </reference> <reference anchor="Matroska-website" target="http://matroska.org/"> <front> <title>Matroska website</title> <author></author> </front> </reference> <reference anchor="Vectors-website" target="http://opus-codec.org/testvectors/"> <front> <title>Opus Testvectors (website)</title> <author></author> </front> </reference> <reference anchor="Vectors-proc" target="http://www.ietf.org/proceedings/83/slides/slides-83-codec-0.gz"> <front> <title>Opus Testvectors (proceedings)</title> <author></author> </front> </reference> <reference anchor="line-spectral-pairs" target="http://en.wikipedia.org/wiki/Line_spectral_pairs"> <front> <title>Line Spectral Pairs</title> <author><organization>Wikipedia</organization></author> </front> </reference> <reference anchor="range-coding" target="http://en.wikipedia.org/wiki/Range_coding"> <front> <title>Range Coding</title> <author><organization>Wikipedia</organization></author> </front> </reference> <reference anchor="Hadamard" target="http://en.wikipedia.org/wiki/Hadamard_transform"> <front> <title>Hadamard Transform</title> <author><organization>Wikipedia</organization></author> </front> </reference> <reference anchor="Viterbi" target="http://en.wikipedia.org/wiki/Viterbi_algorithm"> <front> <title>Viterbi Algorithm</title> <author><organization>Wikipedia</organization></author> </front> </reference> <reference anchor="Whitening" target="http://en.wikipedia.org/wiki/White_noise"> <front> <title>White Noise</title> <author><organization>Wikipedia</organization></author> </front> </reference> <reference anchor="LPC" target="http://en.wikipedia.org/wiki/Linear_prediction"> <front> <title>Linear Prediction</title> <author><organization>Wikipedia</organization></author> </front> </reference> <reference anchor="MDCT" target="http://en.wikipedia.org/wiki/Modified_discrete_cosine_transform"> <front> <title>Modified Discrete Cosine Transform</title> <author><organization>Wikipedia</organization></author> </front> </reference> <reference anchor="FFT" target="http://en.wikipedia.org/wiki/Fast_Fourier_transform"> <front> <title>Fast Fourier Transform</title> <author><organization>Wikipedia</organization></author> </front> </reference> <reference anchor="z-transform" target="http://en.wikipedia.org/wiki/Z-transform"> <front> <title>Z-transform</title> <author><organization>Wikipedia</organization></author> </front> </reference> <reference anchor="Burg"> <front> <title>Maximum Entropy Spectral Analysis</title> <author initials="JP." surname="Burg" fullname="J.P.
Burg"><organization/></author> </front> </reference> <reference anchor="Schur"> <front> <title>A fixed point computation of partial correlation coefficients</title> <author initials="J." surname="Le Roux" fullname="J. Le Roux"><organization/></author> <author initials="C." surname="Gueguen" fullname="C. Gueguen"><organization/></author> </front> <seriesInfo name="ICASSP-1977, Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, pp. 257-259, October" value="1977"/> </reference> <reference anchor="Princen86"> <front> <title>Analysis/synthesis filter bank design based on time domain aliasing cancellation</title> <author initials="J." surname="Princen" fullname="John P. Princen"><organization/></author> <author initials="A." surname="Bradley" fullname="Alan B. Bradley"><organization/></author> </front> <seriesInfo name="IEEE Trans. Acoust. Speech Sig. Proc. ASSP-34 (5), 1153-1161" value="1986"/> </reference> <reference anchor="Valin2010"> <front> <title>A High-Quality Speech and Audio Codec With Less Than 10 ms delay</title> <author initials="JM" surname="Valin" fullname="Jean-Marc Valin"><organization/> </author> <author initials="T. B." surname="Terriberry" fullname="Timothy Terriberry"><organization/></author> <author initials="C." surname="Montgomery" fullname="Christopher Montgomery"><organization/></author> <author initials="G." surname="Maxwell" fullname="Gregory Maxwell"><organization/></author> </front> <seriesInfo name="IEEE Trans. on Audio, Speech and Language Processing, Vol. 18, No. 1, pp. 58-67" value="2010" /> </reference> <reference anchor="Zwicker61"> <front> <title>Subdivision of the audible frequency range into critical bands</title> <author initials="E." surname="Zwicker" fullname="E. Zwicker"><organization/></author> <date month="February" year="1961" /> </front> <seriesInfo name="The Journal of the Acoustical Society of America, Vol. 33, No 2" value="p. 248" /> </reference> </references> <section anchor="ref-implementation" title="Reference Implementation"> <t>This appendix contains the complete source code for the reference implementation of the Opus codec written in C. By default, this implementation relies on floating-point arithmetic, but it can be compiled to use only fixed-point arithmetic by defining the FIXED_POINT macro. Information on building and using the reference implementation is available in the README file. </t> <t>The implementation can be compiled with either a C89 or a C99 compiler. It is reasonably optimized for most platforms such that only architecture-specific optimizations are likely to be useful. The FFT <xref target="FFT"/> used is a slightly modified version of the KISS-FFT library, but it is easy to substitute any other FFT library. 
</t> <t> While the reference implementation does not rely on any <spanx style="emph">undefined behavior</spanx> as defined by C89 or C99, it relies on common <spanx style="emph">implementation-defined behavior</spanx> for two's complement architectures: <list style="symbols"> <t>Right shifts of negative values are consistent with two's complement arithmetic, so that a>>b is equivalent to floor(a/(2**b)),</t> <t>For conversion to a signed integer of N bits, the value is reduced modulo 2**N to be within range of the type,</t> <t>The result of integer division of a negative value is truncated towards zero, and</t> <t>The compiler provides a 64-bit integer type (a C99 requirement which is supported by most C89 compilers).</t> </list> </t> <t> In its current form, the reference implementation also requires the following architectural characteristics to obtain acceptable performance: <list style="symbols"> <t>Two's complement arithmetic,</t> <t>At least a 16-bit by 16-bit integer multiplier (32-bit result), and</t> <t>At least a 32-bit adder/accumulator.</t> </list> </t> <section title="Extracting the Source"> <t> The complete source code can be extracted from this draft by running the following command line: <list style="symbols"> <t><![CDATA[ cat draft-ietf-codec-opus.txt | grep '^\ \ \ ###' | sed -e 's/...###//' | base64 -d > opus_source.tar.gz ]]></t> <t> tar xzvf opus_source.tar.gz </t> <t>cd opus_source</t> <t>make</t> </list> On systems where the provided Makefile does not work, the following command line may be used to compile the source code: <list style="symbols"> <t><![CDATA[ cc -O2 -g -o opus_demo src/opus_demo.c `cat *.mk | grep -v fixed | sed -e 's/.*=//' -e 's/\\\\//'` -DOPUS_BUILD -Iinclude -Icelt -Isilk -Isilk/float -DUSE_ALLOCA -Drestrict= -lm ]]></t></list> </t> <t> On systems where the base64 utility is not present, the following commands can be used instead: <list style="symbols"> <t><![CDATA[ cat draft-ietf-codec-opus.txt | grep '^\ \ \ ###' | sed -e 's/...###//' > opus.b64 ]]></t> <t>openssl base64 -d -in opus.b64 > opus_source.tar.gz</t> </list> </t> </section> <section title="Up-to-date Implementation"> <t> As of the time of publication of this memo, an up-to-date implementation conforming to this standard is available in a <xref target='Opus-git'>Git repository</xref>. Releases and other resources are available at <xref target='Opus-website'/>. However, although that implementation is expected to remain conformant with the standard, it is the code in this document that shall remain normative. </t> </section> <section title="Base64-encoded Source Code"> <t> <?rfc include="opus_source.base64"?> </t> </section> <section anchor="test-vectors" title="Test Vectors"> <t> Because of size constraints, the Opus test vectors are not distributed in this draft. They are available in the proceedings of the 83rd IETF meeting (Paris) <xref target="Vectors-proc"/> and from the Opus codec website at <xref target="Vectors-website"/>. These test vectors were created specifically to exercise all aspects of the decoder, and therefore the audio quality of the decoded output is significantly lower than what Opus can achieve in normal operation. </t> <t> The SHA1 hashes of the files in the test vector package are <?rfc include="testvectors_sha1"?> </t> </section> </section> <section anchor="self-delimiting-framing" title="Self-Delimiting Framing"> <t> To use the internal framing described in <xref target="modes"/>, the decoder must know the total length of the Opus packet, in bytes.
This section describes a simple variation of that framing which can be used when the total length of the packet is not known. Nothing in the encoding of the packet itself allows a decoder to distinguish between the regular, undelimited framing and the self-delimiting framing described in this appendix. Which one is used and where must be established by context at the transport layer. It is RECOMMENDED that a transport layer choose exactly one framing scheme, rather than allowing an encoder to signal which one it wants to use. </t> <t> For example, although a regular Opus stream does not support more than two channels, a multi-channel Opus stream may be formed from several one- and two-channel streams. To pack an Opus packet from each of these streams together in a single packet at the transport layer, one could use the self-delimiting framing for all but the last stream, and then the regular, undelimited framing for the last one. Reverting to the undelimited framing for the last stream saves overhead (because the total size of the transport-layer packet will still be known), and ensures that a "multi-channel" stream which only has a single Opus stream uses the same framing as a regular Opus stream does. This avoids the need for signaling to distinguish these two cases. </t> <t> The self-delimiting framing is identical to the regular, undelimited framing from <xref target="modes"/>, except that each Opus packet contains one extra length field, encoded using the same one- or two-byte scheme from <xref target="frame-length-coding"/>. This extra length immediately precedes the compressed data of the first Opus frame in the packet, and is interpreted in the various modes as follows: <list style="symbols"> <t> Code 0 packets: It is the length of the single Opus frame (see <xref target="sd_code0_packet"/>). </t> <t> Code 1 packets: It is the length used for both of the Opus frames (see <xref target="sd_code1_packet"/>). </t> <t> Code 2 packets: It is the length of the second Opus frame (see <xref target="sd_code2_packet"/>).</t> <t> CBR Code 3 packets: It is the length used for all of the Opus frames (see <xref target="sd_code3cbr_packet"/>). </t> <t>VBR Code 3 packets: It is the length of the last Opus frame (see <xref target="sd_code3vbr_packet"/>). </t> </list> </t> <figure anchor="sd_code0_packet" title="A Self-Delimited Code 0 Packet" align="center"> <artwork align="center"><![CDATA[ 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | config |s|0|0| N1 (1-2 bytes): | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | Compressed frame 1 (N1 bytes)... : : | | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ]]></artwork> </figure> <figure anchor="sd_code1_packet" title="A Self-Delimited Code 1 Packet" align="center"> <artwork align="center"><![CDATA[ 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | config |s|0|1| N1 (1-2 bytes): | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ : | Compressed frame 1 (N1 bytes)... | : +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ : | Compressed frame 2 (N1 bytes)... 
| : +-+-+-+-+-+-+-+-+ | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ]]></artwork> </figure> <figure anchor="sd_code2_packet" title="A Self-Delimited Code 2 Packet" align="center"> <artwork align="center"><![CDATA[ 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | config |s|1|0| N1 (1-2 bytes): N2 (1-2 bytes : | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ : | Compressed frame 1 (N1 bytes)... | : +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | Compressed frame 2 (N2 bytes)... : : | | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ]]></artwork> </figure> <figure anchor="sd_code3cbr_packet" title="A Self-Delimited CBR Code 3 Packet" align="center"> <artwork align="center"><![CDATA[ 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | config |s|1|1|0|p| M | Pad len (Opt) : N1 (1-2 bytes): +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | : Compressed frame 1 (N1 bytes)... : | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | : Compressed frame 2 (N1 bytes)... : | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | : ... : | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | : Compressed frame M (N1 bytes)... : | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ : Opus Padding (Optional)... | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ]]></artwork> </figure> <figure anchor="sd_code3vbr_packet" title="A Self-Delimited VBR Code 3 Packet" align="center"> <artwork align="center"><![CDATA[ 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | config |s|1|1|1|p| M | Padding length (Optional) : +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ : N1 (1-2 bytes): ... : N[M-1] | N[M] : +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | : Compressed frame 1 (N1 bytes)... : | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | : Compressed frame 2 (N2 bytes)... : | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | : ... : | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | : Compressed frame M (N[M] bytes)... : | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ : Opus Padding (Optional)... | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ]]></artwork> </figure> </section> </back> </rfc>