WO1994025959A1 - Use of an auditory model to improve quality or lower the bit rate of speech synthesis systems - Google Patents


Info

Publication number: WO1994025959A1
Authority: WO (WIPO PCT)
Prior art keywords: speech, model, excitation, auditory, masking
Application number: PCT/AU1994/000221
Other languages: French (fr)
Inventors: Dipanjan Sen, Warwick Harvey Holmes
Original Assignee: Unisearch Limited
Priority claimed from AUPM506794A0 (AU)
Application filed by Unisearch Limited
Priority to AU66720/94A (AU675322B2)
Publication of WO1994025959A1


Classifications

    • G10L: Speech analysis or synthesis; speech recognition; speech or voice processing; speech or audio coding or decoding
    • G10L19/26: Pre-filtering or post-filtering
    • G10L19/10: Determination or coding of the excitation function, the excitation function being a multipulse excitation
    • G10L19/12: Determination or coding of the excitation function, the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders
    • G10L2019/0001: Codebooks
    • G10L2019/0013: Codebook search algorithms


Abstract

Low bit rate speech coding algorithms are mostly based on the use of voice production models in which vocal tract filters are excited by vectors chosen from fixed and adaptive codebooks. It has been recognized that to improve the perceptual quality of such coders it is necessary to also allow for the psychoacoustic properties of the human ear. The weighting filter (5 of Fig. 1B) traditionally used for this purpose is sub-optimal as it does not explicitly evaluate auditory characteristics. In the preferred embodiment of the present invention, the weighting filter is replaced with an auditory model which enables the search for the optimum stochastic code vector in the psychoacoustic domain. An algorithm, which has been termed PERCELP (for Perceptually Enhanced Random Codebook Excited Linear Prediction), is disclosed which produces speech that is of considerably better quality than obtained with a weighting filter. The computational overhead is low enough to warrant the use of this approach in new speech coders.

Description

USE OF AN AUDITORY MODEL TO IMPROVE QUALITY OR LOWER THE BIT RATE OF SPEECH SYNTHESIS SYSTEMS
Field of the Invention
The present invention relates to speech synthesis systems and, in particular, discloses a system based upon an auditory model.
Background of the Invention
Modern speech coding algorithms, such as Code-Excited Linear Prediction (CELP) and Multiband Excitation (MBE), have exploited properties of the human articulatory process. These schemes use parameters such as pitch, vocal-tract filter coefficients and voiced-unvoiced decisions to encode speech at low bit rates. Fig. 1A illustrates a practical CELP method 1 of obtaining synthesised speech from digital speech by exciting a vocal tract filter, such as a weighted formant filter 2, with vectors chosen from a fixed codebook 3 and an adaptive codebook 4. This method has become the dominant technique in present day low bit rate speech coders. Representations of speech using the codebook indices and vocal-tract filter coefficients have achieved high coding gain.
A mean square error criterion 6 is used to determine the error between the weighted input digital speech, obtained via a weighting filter 5, and the weighted synthesised speech in order to make selections from the codebooks 3, 4. The stochastic codebook 3 includes a large number, typically about 1,000, of random signals, each of which represents between 5 and 30 milliseconds of a sampled speech signal. The adaptive codebook 4 models the periodic components of speech and typically holds approximately 0.1 seconds of speech, parts of which may be accessed by up to about 200 vectors. Typically, selections from the stochastic codebook 3 and adaptive codebook 4 are chosen which minimize the mean square error between the weighted input digital speech and the weighted synthesized speech. This is performed for each frame, typically between 5 and 30 milliseconds, of sampled speech. A zero impulse response (ZIR) filter 7 is used to compensate for framing effects between speech segments.
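By way of illustration only, this analysis-by-synthesis selection might be sketched as follows. This is a minimal Python sketch, not the coder of Fig. 1A: the codebook size, LP coefficients and target are placeholders, the weighting filter and adaptive codebook are omitted, and scipy's lfilter stands in for the formant filter.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
SUBFRAME = 60                                      # samples per sub-frame (illustrative)
codebook = rng.standard_normal((512, SUBFRAME))    # stochastic codebook (size illustrative)
lp = np.array([1.0, -0.9])                         # placeholder LP (vocal tract) coefficients

def mse_codebook_search(target):
    """Return the (index, gain) pair minimising the mean square error
    between the target and the synthesised (filtered) code vector."""
    best_k, best_g, best_err = 0, 0.0, np.inf
    for k, code in enumerate(codebook):
        y = lfilter([1.0], lp, code)               # excite the all-pole filter
        g = (y @ target) / (y @ y)                 # optimum gain for this entry
        err = np.sum((target - g * y) ** 2)        # mean square error criterion
        if err < best_err:
            best_k, best_g, best_err = k, g, err
    return best_k, best_g

idx, gain = mse_codebook_search(rng.standard_normal(SUBFRAME))
```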
While these techniques have resulted in excellent coding gain, the quality of the synthesized speech is often far from the transparent quality achieved in current high fidelity audio coding systems such as the MASCAM system. Apart from the extra bandwidth available in audio coding, the quality of the reproduced sound can be attributed to the modelling of the human auditory system in these systems. These schemes dynamically quantize the samples across the spectrum such that the quantization noise is not perceived. The threshold of perception is computed by modelling the frequency selectivity and masking properties of the human cochlea.
Auditory models have not been used in low bit rate speech coding, possibly because of doubts about the achievable coding gain and the computational overheads. Also, the maintenance of transparent quality has not been a consideration in most algorithms, as distortion is always present. It has however long been recognised that to improve the perceptual quality of these speech coders, which are essentially voice production models, it is necessary to consider the psychoacoustic properties of the human ear. The only technique so far used in speech coding which allows in any way for the properties of the hearing process is to shape the error signal spectrum such that noise levels tend to be generally less than the signal levels at all frequencies, even in formant valleys, in the manner shown in Fig. 1B. This is performed using the weighting filter of Fig. 1A. This scheme does not explicitly evaluate auditory characteristics - it relies on the higher levels of the signal to lessen the effect of the noise and is not optimally matched to the hearing process. The noise spectrum using CELP and the weighting filter 5 of Fig. 1A is shown in Fig. 1C.
Summary of the Invention
It is an object of the present invention to substantially overcome, or ameliorate, some or all of the abovementioned problems. In accordance with one aspect of the present invention there is disclosed a method of determining an optimum excitation in a speech synthesis system, said method comprising the steps of:
(a) analysing an input speech signal on the basis of an auditory model to identify perceptually significant components of said input speech signal; and
(b) selecting from a plurality of candidate excitation signals to the system, an excitation signal which results in a system output optimally matched to said input speech signal in the perceptually significant components of step (a).
In accordance with another aspect of the present invention there is disclosed a method of determining an optimum codebook entry (or "vector") in a code-book excited linear prediction speech system, the method comprising the steps of:
(a) using a perceptual hearing model of digitised speech to determine those perceptible sections of the speech spectra;
(b) passing a vector chosen from a stochastic codebook through a linear prediction filter to produce synthesised speech;
(c) selecting a codebook entry which minimises an error between the digitised and synthesised speech only in the regions of the spectrum found to be perceptible in step (a); and
(d) post-filtering out those regions of the synthesised speech spectrum found to be imperceptible according to the hearing model.
In accordance with another aspect of the present invention there is disclosed a method of determining an optimum codebook entry in a multiband excitation speech coding system, the method comprising the steps of:
(a) using a perceptual hearing model of digitized speech to determine the perceptible sections of the speech spectra;
(b) passing periodic or noise excitation signals through a multiplicity of bandpass filters to produce synthesized speech;
(c) selecting the parameters of such excitation signals to minimize an error between the digitised and synthesised speech only in the regions of the spectrum found to be perceptible in step (a); and
(d) post-filtering out those regions of the synthesised speech spectrum found to be imperceptible according to the hearing model, and using the remaining codebook entries and linear prediction filter parameters in said system.
The significance of the above is the use in some embodiments of the perceptual model of hearing to isolate the perceptually important sections of the speech spectrum to determine the optimum stochastic codebook entries in CELP speech systems (coders).
In the preferred embodiment additional coding gain is achieved by analysing the perceptual content of each sample in the spectrum. An algorithm is disclosed which is able to introduce selective distortion that is a direct function of human hearing perception and is thus optimally matched to the hearing process. It will be shown that good coding gain can be obtained with excellent speech quality. The algorithm may be used on its own, or incorporated into traditional speech coders. For example, the weighting filter in CELP can be replaced by the auditory model, which enables the search for the optimum stochastic code vector in the psychoacoustic domain. It can also be implemented with very little computational overhead and low coding delay.
Brief Description of the Drawings
A number of embodiments of the present invention will now be described with reference to the drawings in which:
Figs. 1A-1C show CELP and weighted noise arrangements of prior art systems;
Fig. 2 illustrates auditory modelling based on the resonance in the Basilar Membrane of the inner ear;
Fig. 3 shows the Bark (or Critical Band) Scale;
Fig. 4 shows an example of a masking threshold;
Fig. 5 shows the simplified masking threshold due to the component v;
Fig. 6 shows the absolute threshold of hearing;
Fig. 7 shows a sound level excess diagram;
Fig. 8 illustrates a speech spectrum and a raised masking threshold;
Fig. 9 shows an arrangement used for auditory model processing in embodiments of the invention;
Fig. 10 illustrates an arrangement for a stochastic code vector search;
Fig. 11 illustrates a first embodiment of the present invention;
Fig. 12 shows the noise spectrum for the embodiment of Fig. 11;
Fig. 13 illustrates the error calculation for a Noise Above Masking embodiment of the present invention;
Fig. 14 shows the noise spectrum of the embodiment of Fig. 13; and
Fig. 15 illustrates a device constructed in accordance with the preferred embodiment.
Detailed Description of the Best and Other Modes of Carrying Out the Invention
The Auditory Masking Model
Auditory models attempt to emulate the signal processing carried out by the human ear. Of particular interest is the function of the basilar membrane, the corresponding transduction process of the hair cells in the inner ear, and innervation of the auditory nerve fibres. Examples of such models are described in:
Zwicker, E. & Zwicker, U.T., "Audio Engineering and Psychoacoustics: Matching Signals to the Final Receiver, the Human Auditory System", J. Audio Eng. Soc., vol. 39, no. 3, March 1991, pp. 115-125 (the "Zwicker Model"); and
Allen, J.B., "Cochlear Modelling", IEEE ASSP Mag., vol. 2, no. 1, January 1985, pp. 3-29 (the "Allen Model").
This process causes hearing perception to be a function of the frequency of the sound, with frequency sensitivity depending on the position along the basilar membrane, in accordance with the place theory illustrated in Fig. 2. The critical bands are a direct result of the way sound is processed in the inner ear. Sound pressure is transmitted via the ear canal, the malleus, stapes, incus and finally to the fluid chambers of the cochlea. This results in a travelling wave along the basilar membrane. The movement of the membrane causes shearing in the hair cells attached to it, thus leading to the hearing sensation. The travelling wave discards high frequency components as it moves towards the helicotrema. The basilar membrane thus performs an instantaneous transform into the frequency domain. Observations have shown that the frequency selectivity of the basilar membrane, hair cells and nerve fibres follows the critical band scale (Bark scale) in Figure 3. The critical bands are approximately equally spaced along the basilar membrane.
As hearing perception is directly related to the deformation of the basilar membrane, caused by different frequency components, the critical bands are typically intimately related to a number of important psychoacoustic phenomena, including the perceived loudness of sounds.
Of particular importance is the fact that sound components at particular frequencies can reduce, or even totally suppress, the perceptual effects of other sound components at neighbouring frequencies, at least partly because of interaction effects along the basilar membrane. This phenomenon is called auditory masking. Using the travelling wave theory, it can be seen that the low frequency components which extend over most of the membrane are able to mask the high frequency components, since the high frequencies predominate in the early portions of the membrane. However, this is not to say that high frequency components do not mask the lower components, but only that the effect is stronger in the former case.
As an example, Figure 4 displays the minimum pressure level required for isolated frequency components in the speech frequency range to be just perceptible in the presence of a 60 dB tone at 1000 Hz. It is seen that the 1000 Hz tone masks neighbouring frequencies. This tone is referred to as a masker in this context, and the minimum perceptible level of other tones is called the corresponding masking threshold.
Masking can be much more complex than this simple example. In general, each frequency component of a signal tends to mask other components of the same signal. Masking can be classified into two types:
• Inter-Bark masking due to the interaction of frequency components across the whole frequency range, and
• Intra-Bark masking due to the frequency components within each bark-band. (This type of masking accounts for the observed asymmetry of masking between tones and noise.)
Investigations into the masking effect of spectral samples on higher frequencies have led to the conclusion that there is a direct relationship between the level of the component and the amount of masking it induces on the spectrum. Thus, consider the masking effect of the vth signal component at frequency f_v Hz and with sound pressure level L_v. The fundamental simplified model used in the preferred embodiment is described by Terhardt, E., "Calculating Virtual Pitch", Hearing Research, 1979, pp. 155-199 ("Terhardt's Model") and assumes that the slope of the masking threshold contribution due to this component on frequencies higher than itself is given by
$$S_v = -24 - \frac{230}{f_v} + 0.2\,L_v \quad \text{dB/Bark} \tag{1}$$

It has been found that the level of the masking signal is not so important when computing the masking effect on lower frequencies. Masking of lower frequencies is hence modelled using the level-independent relationship
$$S_v = 27 \quad \text{dB/Bark} \tag{2}$$
This is illustrated in Figure 5. The masking threshold at the uth frequency index due to all the components in the spectrum is then computed as

$$L_u = 20\log_{10}\sum_{v \neq u} 10^{\left[L_v + S_v\,(z_u - z_v)\right]/20} \tag{3}$$

where S_v is given by Equations (1) or (2), and z_v is the critical band from Figure 3.
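For concreteness, the simplified model of equations (1)-(3) can be sketched in Python as below. The Bark conversion uses Zwicker's published analytic approximation of the critical band rate (an assumption; the text simply refers to Figure 3), and the amplitude-domain combination of individual contributions follows the reconstruction of equation (3) above.

```python
import numpy as np

def bark(f_hz):
    # Zwicker's analytic approximation to the critical band rate of Fig. 3
    return 13.0 * np.arctan(0.00076 * f_hz) + 3.5 * np.arctan((f_hz / 7500.0) ** 2)

def masking_threshold(freqs_hz, levels_db):
    """Masking threshold (dB) at each component index per equations (1)-(3).

    freqs_hz, levels_db: frequencies (Hz, > 0) and sound pressure levels of
    the spectral components; entry u receives the combined masking of all
    components v != u."""
    z = bark(np.asarray(freqs_hz, dtype=float))
    L = np.asarray(levels_db, dtype=float)
    thr = np.full(len(L), -np.inf)
    for u in range(len(L)):
        contrib = []
        for v in range(len(L)):
            if v == u:
                continue
            if z[u] >= z[v]:                   # masking of higher frequencies, eq. (1)
                s = -24.0 - 230.0 / freqs_hz[v] + 0.2 * L[v]
            else:                              # masking of lower frequencies, eq. (2)
                s = 27.0
            contrib.append(L[v] + s * (z[u] - z[v]))
        if contrib:                            # combine contributions, eq. (3)
            thr[u] = 20.0 * np.log10(np.sum(10.0 ** (np.array(contrib) / 20.0)))
    return thr
```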
The asymmetry of masking between tones and noise within each critical band, as described in Hellman, R.P., "Asymmetry of Masking Between Noise and Tone", Perception and Psychophysics, vol. 11, no. 3, 1972 ("Hellman's Model"), is also allowed for in the auditory model used in the preferred embodiment. If there is a tonal component at the frequency index at which the masking effect is being computed, the noise in the critical band depends on the spectral intensities away from the immediate neighbourhood of that index. This noise is modelled here by adding the spectral intensities of all samples in the critical band, except for the three samples directly neighbouring the index under consideration.
The absolute threshold of hearing is the level below which no sound is perceived by the ear. Terhardt's experimental results are used to incorporate the absolute threshold into the auditory model as illustrated in Fig. 6.
Other auditory models that can be used include:
(1) Ghitza, O., "Auditory Nerve Representation for Speech Analysis/Synthesis", IEEE Trans. on ASSP, vol. 35, no. 6, pp. 736-740, 1987 ("Ghitza's model").
(2) Lyon, R.F., "A Computational Model of Filtering, Detection and Compression in the Cochlea", Proc. IEEE ICASSP, pp. 1281-1285, 1982 ("Lyon's model").
(3) Seneff, S., "A Joint Synchrony/Mean-Rate Model of Auditory Speech Processing", J. Phonetics, vol. 16, pp. 55-76, 1988 ("Seneff's model").
(4) Johnston, J.D., "Transform Coding of Audio Signals Using Perceptual Noise Criteria", IEEE J. on Selected Areas in Communications, vol. 6, no. 2, pp. 314-323, February 1988 ("Johnston's model").
Derivatives of those models can also be used, as can non-simultaneous masking models.
Preferred Implementation of the Auditory Model
In the first step of the implementation, input speech samples are divided into the critical frequency bands. The filter-bank used for this purpose is realized using Time Domain Aliasing Cancellation (TDAC), as described in Princen, J.P. & Bradley, A.B., "Analysis/Synthesis Filter Bank Design Based on Time-Domain Aliasing Cancellation", IEEE Trans. on ASSP, vol. 34, no. 5, pp. 1153-1161, 1986. This involves the analysis of 32 ms speech frames with 50% overlap, so that each frame contains 16 ms of the samples from the previous frame. This frame length was chosen with pseudo-stationarity in mind. The overlapping of the frames results in minimal blocking effects at the synthesis stage, and also reduces the effects of frame discontinuities on the spectral estimates. The frames are multiplied by a sine window which satisfies the criterion for perfect reconstruction. This window also has a frequency response with reasonably low leakage into neighbouring samples (i.e. high frequency selectivity). Low leakage is extremely important for accurate psychoacoustic modelling. Alternate Modified Discrete Cosine Transforms (MDCT) and Modified Discrete Sine Transforms (MDST) are then used to transform the data into the frequency domain. The following derivation shows that the MDCT (and similarly the MDST) can be computed using the FFT, thus making them computationally efficient. The MDCT is given by
$$X_k = \sum_{n=0}^{N-1} h(n)\,x(n)\cos\!\left[\frac{2\pi}{N}\left(n + \frac{N/2+1}{2}\right)\left(k + \frac{1}{2}\right)\right], \qquad k = 0, 1, \ldots, \frac{N}{2}-1 \tag{4}$$

where x(n) are the speech samples, h(n) is the window, and X_k is the transformed component.
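A direct rendering of equation (4) is sketched below; it is O(N^2) for clarity, whereas the text notes that an FFT-based computation is possible. The sine window shown satisfies the perfect reconstruction criterion mentioned above; the frame length (256 samples, i.e. 32 ms at 8 kHz) follows the text, while normalisation details are assumptions.

```python
import numpy as np

def mdct(frame):
    """Direct MDCT of one frame per equation (4); returns N/2 coefficients,
    preserving critical sampling under 50% frame overlap."""
    N = len(frame)
    n = np.arange(N)
    h = np.sin(np.pi * (n + 0.5) / N)          # sine window (perfect reconstruction)
    k = np.arange(N // 2)
    n0 = (N / 2 + 1) / 2
    basis = np.cos(2 * np.pi / N * np.outer(n + n0, k + 0.5))
    return (h * frame) @ basis

X = mdct(np.random.default_rng(1).standard_normal(256))   # 32 ms frame at 8 kHz
```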
An additional advantage of using the TDAC transform for the filter bank computation is the maintenance of critical sampling, even though the analysis frames are overlapped by 50%. Following the transformation into the frequency domain, the samples are grouped according to the critical bands. The masking threshold is then computed according to the foregoing theory.
Coding Gain Achieved Using the Auditory Model Alone
Once the masking threshold is calculated, it can be used in a number of different ways in speech coders. Its use in CELP coders is discussed later, but first a general approach which can be used to generate new types of speech coders is discussed.
In principle, spectral samples below the masking threshold need not be transmitted. This may not in itself result in any substantial coding gain, but ensures transparent fidelity of the synthesized sound. In the synthesis process at the receiver, all the samples that lie above the threshold would be used to reconstruct the spectrum.
To obtain more coding gain at the cost of a slight loss of quality, the masking threshold may be increased by 3-5 dB across the spectrum. This procedure adds distortion in a controlled manner which is directly related to human auditory perception, as the distortion is based on a psychoacoustic model. That is, the distortion spectrum is optimized with respect to the human ear.
Figure 7 shows an example of the sound level excess, which is the difference between the spectrum and the masking threshold. Any sample below the zero line shown therefore lies below the auditory threshold, and can be discarded on transmission. This results in a punctured spectrum with "holes" in the regions considered to be masked.
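As an illustrative Python sketch of this puncturing step (the dB representations and the raise parameter are stand-ins for the quantities discussed above):

```python
import numpy as np

def puncture_spectrum(spectrum_db, threshold_db, raise_db=0.0):
    """Discard spectral samples whose sound level excess (Fig. 7) is negative.

    raise_db implements the optional 3-5 dB threshold raise traded for
    additional coding gain. Returns the punctured spectrum (-inf dB marks a
    discarded sample) and the fraction of samples discarded."""
    excess = np.asarray(spectrum_db) - (np.asarray(threshold_db) + raise_db)
    keep = excess > 0.0
    return np.where(keep, spectrum_db, -np.inf), 1.0 - keep.mean()
```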
Figure 8 shows the speech spectrum and the masking threshold raised such that 75% (average) of the spectrum is discarded. Listening tests have shown that this amount of the spectrum can be discarded without noticeable loss of speech quality. When the threshold is raised further, and only 10% of the spectrum is used to synthesize the speech, the quality remains very good, but tonal artefacts begin to appear. New types of speech coders can be based directly on the method presented in this section. However, the masking threshold computation can also be used in conventional speech coders to improve their performance, as illustrated in the following sections.
Use of Auditory Model in CELP Coders (PERCELP)
The foregoing technique of identifying the perceptually important portions of the speech spectrum and the masking threshold also enables the replacement of the weighting filter 5 in a traditional CELP coder (Fig. 1A), as follows. The resulting coders are called Perceptual Code Excited Linear Prediction (PERCELP) coders. These coders are based on the hypothesis that the perceptually important regions of the spectrum should be synthesized with minimal distortion, and the masked regions should remain masked after synthesis. The noise level in the synthesized waveform would thus be very small in the regions above the masking threshold and slightly below the masking threshold in the masked regions.
Discrete samples of the spectrum amplitude which fall below the masking threshold may therefore be replaced by zero values without adding audible distortion. This may also be viewed from the perspective of rate-distortion theory, where the masking threshold is the maximum allowable distortion.
This is considerably different to the conventional weighting filter approach, in which the noise level tends to be shaped so that it is everywhere just below the spectral envelope. Accordingly, PERCELP coders should produce speech quality significantly better than that obtained when sub-optimal weighting filters are used.
The approach in PERCELP is to quantize the areas of the speech spectrum which lie above the masking threshold with minimal distortion, while quantizing those regions below the masking threshold with as much distortion as the masking threshold will allow. This is an attempt to synthesize the perceptually significant regions of the spectrum with greater fidelity.
From a traditional weighting filter point of view, this method emphasizes the error in the perceptually significant regions of the spectrum while (almost) "ignoring" errors in other spectral regions. Further, the fact that there is no simple relation between the vocal tract filter, modelled by the linear prediction (LP) filter, and the masking threshold, means that the analysis is best carried out in the frequency/Bark domain.
The ideas of PERCELP have been tested by applying them to a speech coder similar to the 4.8 kbit/s Federal Standard 1016 (Telecommunications: Analog to Digital Conversion of Radio Voice by 4,800 bit/second CELP, National Communications System, Office of Technology and Standards, Washington DC, 14 February 1991) (FS1016). A frame length of 30 ms with four sub-frames is used, with 60 samples per sub-frame. The coder produces the parameters of 10 LP coefficients per frame, as well as adaptive and stochastic codebook indices and gains each sub-frame. The parameters are quantized to 142 bits (using the FS1016 quantization tables), resulting in an overall rate of 4.66 kbits/s.
Masking Threshold Analysis
The masking analysis is performed in the frequency domain using the arrangement 20 shown in Fig. 9. The arrangement 20 inputs digital speech 21 to a zero padding and window unit 22 where 60 samples per sub-frame are padded with 68 zeros and windowed (using a Hamming window) before an FFT analysis in an FFT processor 23. The length of the padding is chosen not only to take advantage of the fast algorithm of the FFT but also because of circular convolution that has to be accounted for (due to other parts of the algorithm discussed later).
Once in the frequency domain, a selected auditory model 24 is used to identify the perceptually significant (unmasked) regions of the spectrum. The output of the auditory model 24 is a 64 sample binary array reflecting the level of each spectral sample in relation to the masking threshold. The array may be defined as follows, where level[i] represents the sound pressure level of the signal:

$$\text{mask}[i] = \begin{cases} 1, & \text{level}[i] > \text{threshold} + 10\,\text{dB} \\ 0, & \text{level}[i] \le \text{threshold} + 10\,\text{dB} \end{cases} \qquad \text{for } i = 1 \text{ to } 64 \tag{5}$$
The masking threshold is incremented in equation (5) by an offset of 10 dB to take advantage of the results presented above.
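The analysis of Fig. 9 and the mask of equation (5) might be sketched as follows. Assumptions: the dB reference of the level computation and the exact ordering of windowing and padding are illustrative, and threshold_db is the 64-sample masking threshold produced by the chosen auditory model.

```python
import numpy as np

def subframe_mask(subframe, threshold_db):
    """Binary perceptual mask of equation (5) for one 60-sample sub-frame."""
    x = np.zeros(128)
    x[:60] = subframe * np.hamming(60)     # Hamming window, then 68 zeros of padding
    level_db = 20 * np.log10(np.abs(np.fft.rfft(x)[:64]) + 1e-12)
    return (level_db > np.asarray(threshold_db) + 10.0).astype(int)   # 10 dB offset
```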
Short Term Spectral Analysis and Pitch Analysis
A 10th order Linear Prediction (LP) filter is used to model the vocal tract. A total of 34 bits is used to quantize the LP coefficients in the manner specified in FS1016.
The filter parameters (coefficients) of the LP filter are chosen to minimize the error between the filter response and the speech spectrum only in the regions of the spectrum found to be perceptible by the perceptual hearing analysis. The pitch analysis is carried out using the established technique of an adaptive codebook, and requires the searching of 128 integer and 128 non-integer delays stored in an adaptive array. The final excitation (the sum of the pitch excitation and the stochastic codebook excitation) is then used to update the adaptive array. The pitch gain is quantized each sub-frame using 5-bit non-uniform scalar quantization. The total number of bits required to transmit the pitch information is thus 52 bits per frame.
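An integer-delay-only sketch of such an adaptive codebook search is given below; the delay range, the handling of delays shorter than the sub-frame, and the omission of the non-integer delays are all simplifications of the 128 + 128 delay search described above.

```python
import numpy as np

def adaptive_codebook_search(adaptive_array, target, subframe=60, delays=range(20, 148)):
    """Return the (delay, gain) pair of past excitation best matching the target.

    adaptive_array holds the most recent past excitation; for each candidate
    delay the corresponding past segment is compared with the target
    sub-frame, and the pair with least squared error is kept."""
    best_d, best_g, best_err = None, 0.0, np.inf
    for d in delays:
        seg = adaptive_array[-d : len(adaptive_array) - d + subframe]
        if len(seg) < subframe:                     # periodically extend short delays
            seg = np.resize(seg, subframe)
        g = (seg @ target) / max(seg @ seg, 1e-12)  # optimum pitch gain
        err = np.sum((target - g * seg) ** 2)
        if err < best_err:
            best_d, best_g, best_err = d, g, err
    return best_d, best_g
```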
The pitch analysis is unmodified in PERCELP at present, mainly because of the complexity of transforming each of the integer and non-integer delay vectors into the frequency domain every sub-frame. This would be required, as it is not possible to pre-store these frequency domain vectors because the values in the adaptive array are constantly changing.
Frequency Domain Stochastic Codebook
The stochastic codebook contribution is often blamed for the inherent noisiness (often described as background buzziness) of CELP coders. Due to this recognized inaccuracy of the stochastic codebook, and the fact that the final level of the noise is determined by its contribution, it seems the most likely candidate for an auditory analysis to achieve a perceptual improvement. In order to perform a selective search across the spectrum using the masking analysis, the stochastic codebook search algorithm has to be performed in the frequency domain.
In order to minimize the distortion only in the unmasked portions of the speech spectrum, the stochastic codebook (size 512) analysis is performed only in these regions of the spectrum. The optimum code vector is given by the codebook entry which maximizes M defined by:
$$M_k = \frac{\left[\sum_i T(i)\,Y_k(i)\,\text{mask}(i)\right]^2}{\sum_i Y_k^2(i)\,\text{mask}(i)} \tag{6}$$

where Y_k(i) is the filtered codeword (obtained by multiplying h(i), the 128-point FFT of the all-pole LP filter impulse response, with x_k(i), the kth codebook entry), T(i) is the target vector given by subtracting the adaptive codebook excitation from the LP residual, and mask(i) is the binary vector from the auditory masking analysis. The corresponding optimum gain is given by:

$$g_k = \frac{\sum_i T(i)\,Y_k(i)\,\text{mask}(i)}{\sum_i Y_k^2(i)\,\text{mask}(i)} \tag{7}$$
Equations (6) and (7) may easily be derived by minimizing the noise energy only in the unmasked regions of the spectrum. The search for the optimum stochastic code vector is shown diagrammatically in Fig. 10.
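A sketch of this masked-domain search follows. Real-valued spectral arrays are assumed for brevity (the 128-point FFT analysis of the text would use the appropriate complex inner products), and all inputs are stand-ins for the quantities named above.

```python
import numpy as np

def percelp_codebook_search(codebook_spectra, h_spec, target_spec, mask):
    """Masked-domain stochastic codebook search of equations (6) and (7).

    codebook_spectra: (K, n) frequency domain codebook entries x_k(i);
    h_spec: LP filter response h(i); target_spec: target vector T(i);
    mask: binary array from the auditory masking analysis."""
    best_k, best_gain, best_M = 0, 0.0, -np.inf
    for k, x_k in enumerate(codebook_spectra):
        Y = h_spec * x_k                        # filtered codeword Y_k(i)
        num = np.sum(target_spec * Y * mask)    # masked correlation with target
        den = np.sum(Y * Y * mask) + 1e-12
        M = num * num / den                     # equation (6)
        if M > best_M:
            best_k, best_gain, best_M = k, num / den, M   # gain from equation (7)
    return best_k, best_gain
```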
The interfacing of the auditory model to a standard CELP coder is carried out in two steps to form a PERCELP configuration as shown in Fig. 11. Here, a PERCELP coder 50 is shown in which digital speech 52 is input to an adder 54 configured to subtract from the digital speech the output of the zero impulse response (ZIR) filter 56. The output of the adder 54 is supplied to a further adder 58 which receives the synthesised speech signal Sn from a formant filter 60, and outputs an error signal 76. The error signal 76 is supplied to a minimum squared error search 72, similar to that of the prior art, which is used to make selections from an adaptive codebook 64. The error signal 76 is also supplied to a new error search 74 incorporating the auditory masking, which corresponds to the arrangement shown in Fig. 10 and is used to make selections from a stochastic codebook 66.
The speech signal 52 is also input to an auditory masking analysis 62 which implements the auditory model in the manner described above. The output of the auditory masking analysis 62 supplies further input to the new error search 74. The selected outputs from each of the codebooks 64 and 66 are summed in a summer 68 and applied to a post-filter 70, the output of which is returned for updating the adaptive codebook 64, as well as being applied to the formant filter 60. Codebook gain units 78 and 80 are also provided. It will be apparent from the above that the PERCELP coder 50 differs from the standard CELP coder of the prior art. Firstly, the search process for the optimum stochastic codebook index and codebook gain is modified to minimize the error only in the perceptually significant portions of the spectrum. Secondly, the post-filter 70 is used to truncate all sections of the combined adaptive and stochastic excitation spectrum which the auditory analysis indicates are below the masking threshold. This second step is necessary because the error has not been minimized in these regions of the spectrum. The truncated excitation is used to update the adaptive codebook 64.
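The truncation performed by the post-filter 70 might be sketched as follows (an illustrative real-input FFT is used; the patent's transform lengths and the placement of the filter inside the adaptive codebook loop are as described in the surrounding text):

```python
import numpy as np

def postfilter_excitation(excitation, mask):
    """Zero the masked regions of the combined adaptive-plus-stochastic
    excitation spectrum; the truncated excitation then updates the adaptive
    codebook and drives the formant filter."""
    spectrum = np.fft.rfft(excitation)
    m = np.ones(len(spectrum))
    n = min(len(spectrum), len(mask))
    m[:n] = np.asarray(mask)[:n]           # 1 = unmasked (keep), 0 = masked (truncate)
    return np.fft.irfft(spectrum * m, n=len(excitation))
```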
Whilst the use of the post-filter 70 immediately before or after the formant filter 60 appears to be the simplest way to truncate the synthesized speech spectrum, this results in a non-optimized adaptive codebook update. It was therefore decided to place the post-filter 70 inside the adaptive codebook loop, as shown in Fig. 11. An alternative would have been to place it directly at the output of the stochastic codebook 66, but use in that position leads to subjectively poorer quality.
Quality of Synthesized Speech using PERCELP
Examples of the noise levels using CELP and PERCELP, along with the original speech spectrum and masking thresholds for a particular frame, are shown in Figs. 1C and 12, respectively. As may be observed, the noise level for PERCELP is predominantly under the masking threshold.
It is clear that post-filtering the excitation in Fig. 12 has kept the noise level below the masking threshold in the masked regions of the spectrum, and is therefore perceptually optimum for these regions. Also, the noise energy in the unmasked regions is mostly lower than in the CELP coder of Fig. 1C, where the noise energy has been minimized across the whole spectrum rather than just the unmasked regions. The perceived effect of this is a much smoother, more natural sounding synthesized speech. The most noticeable effect is the lack of the background 'buzziness' found in most CELP coders.
However, the noise level in the unmasked regions in Fig. 12 is still above the corresponding speech level in some parts of the spectrum, even though it is significantly lower in other regions of the spectrum. Fig. 12, with equations (6) and (7), illustrates an arrangement termed by the inventors minimization of the Total Unmasked Noise (TUN) in the synthesised speech. Fig. 12 suggests that there may be further scope to perceptually minimize the noise in the unmasked regions, as follows. The required improvement to the results in Fig. 12 is obtained by minimizing only the noise energy which lies above the masking threshold. The minimization is still carried out in the unmasked regions of the spectrum only - however, the error energy that lies below the masking threshold is now ignored. The rationale for this is that, since the noise energy that lies below the masking level is imperceptible, it may safely be allowed to remain. This technique will tend to distribute the unavoidable unmasked noise uniformly above the masking threshold.
This approach requires the calculation of actual noise levels across the spectrum and therefore increases the computational complexity. The total Noise energy Above the Masking threshold (NAM) for code vector k is calculated as follows:
$$\mathrm{NAM}_k = \sum_{i} I_k(i)\, E_k^2(i)$$

where $M(i)$ is the masking threshold across the spectrum, $E_k(i)$ is the error due to the $k$-th code vector, and

$$I_k(i) = \begin{cases} 1, & E_k^2(i) > M^2(i) \\ 0, & E_k^2(i) \leq M^2(i) \end{cases}$$
The code vector which results in the lowest value of NAM is chosen from the 512 codebook entries. The foregoing error calculation is depicted in Fig. 13. The noise level resulting from this modified codebook search algorithm is displayed in Fig. 14. The perceived quality of the synthesized speech is slightly better than with the previous algorithm (Fig. 12), although the signal-to-noise ratio with PERCELP is lower than with the CELP algorithm. This is because the total noise energy has not been minimized, but only the noise energy above the masking threshold.

Real Time Implementation of PERCELP
For practical real time implementations of PERCELP, the decoder will not have access to the masking information needed to calculate the combined excitation. One solution to this problem is to transmit extra information about the masked portions of the spectrum. This, however, increases the bit rate, and would only be realistic if the masking analysis were carried out once per frame, since it would require about 64 bits per frame (i.e. an extra 2.1 kbits/s).
A preferable solution is to compute the masking threshold at both the encoder and the decoder, based on information that is known to both. The short-term spectral envelope (via the LP coefficients) and the adaptive excitation are such information. The masking analysis could be carried out either on the envelope alone, or on the envelope excited by the adaptive excitation.
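By way of illustration, a minimal sketch of such a decoder-side estimate (an assumption, not the patented masking analysis: the short-term envelope is derived from the LP coefficients, and a crude spectral smearing with a fixed offset stands in for a full auditory model; smear and offset_db are hypothetical tuning parameters):

```python
import numpy as np
from scipy.signal import freqz

def masking_from_lp(a, n_bins=256, smear=4, offset_db=12.0):
    """Rough masking-threshold estimate from LP coefficients a = [1, a1, ..., ap].

    Both encoder and decoder know a, so both can compute the same threshold.
    """
    _, h = freqz([1.0], a, worN=n_bins)           # envelope |1/A(e^jw)|
    env_db = 20.0 * np.log10(np.abs(h) + 1e-12)
    kernel = np.ones(2 * smear + 1) / (2 * smear + 1)
    spread_db = np.convolve(env_db, kernel, mode="same")  # crude spreading
    return 10.0 ** ((spread_db - offset_db) / 20.0)       # linear magnitude
```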
Quality and Computational Complexity of PERCELP

Listening tests of PERCELP applied to a 4.8 kbps speech coder have shown that the perceptual quality of the synthesized speech is significantly better than that of a conventional implementation using a weighting filter. The synthesized speech is more natural and lacks the inherent noise of CELP coders, which is often attributed to a non-optimum choice of stochastic codebook index and gain.
The computational overhead associated with the auditory model is small enough to be included in single-DSP full-duplex implementations of CELP coders at 4.8 kbps. The computational overhead of the current implementation is due in part to the frequency domain stochastic codebook. Existing techniques which minimize the computation as well as the storage requirements should make this overhead negligible.
Fig. 15 illustrates a configuration of a PERCELP coder which can be formed as an application specific integrated circuit (ASIC) 100.
The ASIC 100 includes a PCM module 102 which receives and outputs analog speech 101, generally band-limited to 300 Hz - 3300 Hz as in telephony systems. A digital signal processor (DSP) 104 receives digital speech (8 bits sampled at 8 kHz, giving 64 kbps) from the PCM module 102, and is programmed to implement PERCELP coding and decoding as described above using a stochastic codebook initially stored in a ROM 106 but transferred to a RAM 108 to permit high-speed access during operation. The RAM 108 also stores the adaptive codebook. The DSP 104 outputs digital speech at 4.8 kbps to a telecommunications channel 110. A programmable logic device (PLD) 112 is used to "glue" or otherwise link the other components of the ASIC 100.

When the present invention is embodied in a multiband excitation (MBE) speech coding system, a perceptual model of digitized speech is used in the manner described above to determine the perceptible sections of the speech spectra. Periodic or noise excitation signals are then passed through a number of bandpass filters which output the synthesized speech. Parameters of the excitation signals are then selected to minimise the error in the same manner as in the previous embodiments. Post-filtering of the synthesized speech spectra can also be used as before.
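By way of illustration, a minimal sketch of such band-wise selection (an assumed realisation, not the patented MBE coder: candidate spectra for each excitation type are compared against the target only in bands the perceptual model marks as perceptible):

```python
import numpy as np

def select_band_excitation(target_bands, candidates, perceptible):
    """Per band, pick the excitation type with the least spectral error.

    target_bands: (n_bands, n_bins) target magnitude spectra
    candidates:   (n_types, n_bands, n_bins) synthesized spectra per type
    perceptible:  (n_bands,) boolean mask from the auditory model
    Masked bands return None; they are removed by post-filtering anyway.
    """
    choices = []
    for b in range(target_bands.shape[0]):
        if not perceptible[b]:
            choices.append(None)
            continue
        errs = ((candidates[:, b] - target_bands[b]) ** 2).sum(axis=1)
        choices.append(int(np.argmin(errs)))
    return choices
```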
Accordingly, the speech synthesis system disclosed herein has application to the telecommunications industry and similar industries where digital speech is conveyed or stored. The foregoing describes only a number of embodiments of the present invention, and modifications obvious to those skilled in the art can be made thereto without departing from the scope of the present invention.

Claims

CLAIMS:
1. A method of determining an optimum excitation in a speech synthesis system, said method comprising the steps of: (a) analysing an input speech signal on the basis of an auditory model to identify perceptually significant components of said input speech signal; and
(b) selecting from a plurality of candidate excitation signals to the system, an excitation signal which results in a system output optimally matched to said input speech signal in the perceptually significant components of step (a).
2. A method as claimed in claim 1, wherein said auditory model is a masking model.
3. A method as claimed in claim 2, wherein the masking model is selected from the group consisting of "Zwicker's model", "Terhardt's model", "Hellman's model", "Allen's model", "Ghitza's model", "Lyon's model", "Seneff's model", "Johnston's model", derivatives of same, and non-simultaneous masking models.
4. A method as claimed in claim 1, wherein the speech synthesis system comprises a code-excited linear prediction arrangement.
5. A method as claimed in claim 4, wherein the auditory model is used to select from a plurality of codebooks used in said arrangement an optimum codebook entry and gain which form part of said excitation in said arrangement.
6. A method as claimed in claim 5, wherein the selected codebook entry is selected from any one or more subsets of said codebooks.
7. A method as claimed in claim 1, wherein the speech synthesis system is selected from the group consisting of: - a multiband excitation arrangement,
- a linear prediction arrangement, and
- an arrangement employing filter plus excitation models of speech.
8. A method as claimed in claim 1, wherein the analysis of said speech signal and the selection of said candidate excitation signals are performed in the frequency domain.
9. A method as claimed in claim 1, wherein the analysis of said speech signal and the selection of said candidate excitation signals are performed in the time domain.
10. A method as claimed in claim 1, wherein the analysis of said speech signal is performed using time domain aliasing cancellation.
11. A method as claimed in claim 1, wherein the auditory model is configured to control criteria by which the optimal matching of the system output and said input speech signal is determined.
12. A method as claimed in claim 11, wherein the total energy of noise components that exceed a masking threshold of said auditory model is passed to said system output (TUN).
13. A method as claimed in claim 11, wherein the partial energy of noise components above a masking threshold of said auditory model is passed to said system output (NAM).
14. A method as claimed in claim 12 or 13, wherein the noise components are weighted across the frequency spectra of said input speech signal.
15. A method as claimed in claim 1, comprising the further step of cancelling from the excitation signal those portions which are determined to be masked by said auditory model.
16. A method of determining an optimum codebook entry in a code-book excited linear prediction (CELP) speech coding system, the method comprising the steps of: (a) using a perceptual hearing model of digitised speech to determine those perceptible sections of the speech spectra;
(b) passing a stochastic codebook vector through a linear prediction filter to produce synthesised speech;
(c) selecting a codebook vector which minimises an error between the digitised and synthesised speech only in the regions of the spectrum found to be perceptible in step (a); and
(d) post-filtering out those regions of the synthesised speech spectrum found to be imperceptible according to the hearing model, and using the remaining codebook entries and linear prediction filter parameters in said system.
17. A method as claimed in claim 16, wherein the parameters of the linear prediction filter are chosen to minimize an error measure between the filter response and the speech spectrum only in the regions of the spectrum found to be perceptible by the perceptual hearing analysis.
18. A method of determining an optimum codebook entry in a multiband excitation speech coding system, the method comprising the steps of:
(a) using a perceptual hearing model of digitized speech to determine the perceptible sections of the speech spectra;
(b) passing periodic or noise excitation signals through a multiplicity of bandpass filters to produce synthesized speech;
(c) selecting the parameters of such excitation signals to minimize an error between the digitised and synthesised speech only in the regions of the spectrum found to be perceptible in step (a); and
(d) post-filtering out those regions of the synthesised speech spectrum found to be imperceptible according to the hearing model, and using the remaining codebook entries and linear prediction filter parameters in said system.
19. Apparatus configured to implement the method as claimed in any one of the preceding claims.
PCT/AU1994/000221 1993-04-29 1994-04-29 Use of an auditory model to improve quality or lower the bit rate of speech synthesis systems WO1994025959A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU66720/94A AU675322B2 (en) 1993-04-29 1994-04-29 Use of an auditory model to improve quality or lower the bit rate of speech synthesis systems

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
AUPL854293 1993-04-29
AUPL8542 1993-04-29
AUPM5067 1994-04-14
AUPM5067A AUPM506794A0 (en) 1994-04-14 1994-04-14 Perceptual enhancement of c.e.l.p. speech coders

Publications (1)

Publication Number Publication Date
WO1994025959A1 true WO1994025959A1 (en) 1994-11-10

Family

ID=25644453

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU1994/000221 WO1994025959A1 (en) 1993-04-29 1994-04-29 Use of an auditory model to improve quality or lower the bit rate of speech synthesis systems

Country Status (2)

Country Link
AU (1) AU675322B2 (en)
WO (1) WO1994025959A1 (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0415675A2 (en) * 1989-09-01 1991-03-06 AT&T Corp. Constrained-stochastic-excitation coding
US5230036A (en) * 1989-10-17 1993-07-20 Kabushiki Kaisha Toshiba Speech coding system utilizing a recursive computation technique for improvement in processing speed
US5245662A (en) * 1990-06-18 1993-09-14 Fujitsu Limited Speech coding system
US5199076A (en) * 1990-09-18 1993-03-30 Fujitsu Limited Speech coding and decoding system
US5226085A (en) * 1990-10-19 1993-07-06 France Telecom Method of transmitting, at low throughput, a speech signal by celp coding, and corresponding system
WO1992016930A1 (en) * 1991-03-15 1992-10-01 Codex Corporation Speech coder and method having spectral interpolation and fast codebook search
US5233660A (en) * 1991-09-10 1993-08-03 At&T Bell Laboratories Method and apparatus for low-delay celp speech coding and decoding
US5321793A (en) * 1992-07-31 1994-06-14 SIP--Societa Italiana per l'Esercizio delle Telecommunicazioni P.A. Low-delay audio signal coder, using analysis-by-synthesis techniques

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
ALLEN, J.B., "Cochlear Modelling", IEEE ASSP MAG., Vol. 2, No. 1, January 1985, pp. 3-29. *
PRINCEN, J.P. & BRADLEY, A.B., "Analysis/Synthesis Filter Bank Design Based on Time-Domain Aliasing Cancellation", IEEE TRANS. ON ASSP, Vol. 34, No. 5, pp. 1153-1161, 1986. *
GHITZA, O., "Auditory Nerve Representation for Speech Analysis/Synthesis", IEEE TRANS. ON ASSP, Vol. 35, No. 6, pp. 736-740, 1987. *
HELLMAN, R.P., "Asymmetry of Masking Between Noise and Tone", PERCEPTION AND PSYCHOPHYSICS, Vol. 11, No. 3, 1972. *
JOHNSTON, J.D, "Transform Coding of Audio Signals Using Perceptual Noise Criteria", IEEE J. OF SELECTED AREAS IN COMMUNICATIONS, Vol. 6, No. 2, pp. 314-323, February 1988. *
SENEFF, S., "A Joint Synchrony/Mean-Rate Model of Auditory Speech Processing", J. PHONETICS, Vol. 16, pp. 55-76, 1988. *
ZWICKER, E. & ZWICKER, U.T., "Audio Engineering and Psychoacoustics: Matching Signals to the Final Receiver, the Human Auditory System", J. AUDIO ENG. SOC., Vol. 39, No. 3, March 1991, pp. 115-125. *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0673013A1 (en) * 1994-03-18 1995-09-20 Mitsubishi Denki Kabushiki Kaisha Signal encoding and decoding system
US5864794A (en) * 1994-03-18 1999-01-26 Mitsubishi Denki Kabushiki Kaisha Signal encoding and decoding system using auditory parameters and bark spectrum
AU714752B2 (en) * 1995-06-16 2000-01-13 Nokia Technologies Oy Speech coder
WO1997000516A1 (en) * 1995-06-16 1997-01-03 Nokia Mobile Phones Limited Speech coder
ES2146155A1 (en) * 1995-06-16 2000-07-16 Nokia Mobile Phones Ltd Speech coder
US6782365B1 (en) 1996-12-20 2004-08-24 Qwest Communications International Inc. Graphic interface system and product for editing encoded audio data
US5845251A (en) * 1996-12-20 1998-12-01 U S West, Inc. Method, system and product for modifying the bandwidth of subband encoded audio data
US5864813A (en) * 1996-12-20 1999-01-26 U S West, Inc. Method, system and product for harmonic enhancement of encoded audio signals
US5864820A (en) * 1996-12-20 1999-01-26 U S West, Inc. Method, system and product for mixing of encoded audio signals
US6516299B1 (en) 1996-12-20 2003-02-04 Qwest Communication International, Inc. Method, system and product for modifying the dynamic range of encoded audio signals
US6477496B1 (en) 1996-12-20 2002-11-05 Eliot M. Case Signal synthesis by decoding subband scale factors from one audio signal and subband samples from different one
US6463405B1 (en) 1996-12-20 2002-10-08 Eliot M. Case Audiophile encoding of digital audio data using 2-bit polarity/magnitude indicator and 8-bit scale factor for each subband
US6192335B1 (en) 1998-09-01 2001-02-20 Telefonaktieboiaget Lm Ericsson (Publ) Adaptive combining of multi-mode coding for voiced speech and noise-like signals
WO2000013174A1 (en) * 1998-09-01 2000-03-09 Telefonaktiebolaget Lm Ericsson (Publ) An adaptive criterion for speech coding
WO2002023536A3 (en) * 2000-09-15 2002-06-13 Conexant Systems Inc Formant emphasis in celp speech coding
WO2002023536A2 (en) * 2000-09-15 2002-03-21 Conexant Systems, Inc. Formant emphasis in celp speech coding
KR100895745B1 (en) 2001-06-26 2009-04-30 소니 가부시끼 가이샤 Transmission apparatus, transmission method, reception apparatus, reception method, and transmission/reception apparatus
US7043037B2 (en) 2004-01-16 2006-05-09 George Jay Lichtblau Hearing aid having acoustical feedback protection
EP3079151A1 (en) 2015-04-09 2016-10-12 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audio encoder and method for encoding an audio signal
WO2016162375A1 (en) 2015-04-09 2016-10-13 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audio encoder and method for encoding an audio signal
US10672411B2 (en) 2015-04-09 2020-06-02 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Method for adaptively encoding an audio signal in dependence on noise information for higher encoding accuracy

Also Published As

Publication number Publication date
AU6672094A (en) 1994-11-21
AU675322B2 (en) 1997-01-30

Similar Documents

Publication Publication Date Title
Spanias Speech coding: A tutorial review
JP4662673B2 (en) Gain smoothing in wideband speech and audio signal decoders.
EP0732686B1 (en) Low-delay code-excited linear-predictive coding of wideband speech at 32kbits/sec
US5790759A (en) Perceptual noise masking measure based on synthesis filter frequency response
JP3566652B2 (en) Auditory weighting apparatus and method for efficient coding of wideband signals
US5778335A (en) Method and apparatus for efficient multiband celp wideband speech and music coding and decoding
EP0764941B1 (en) Speech signal quantization using human auditory models in predictive coding systems
EP0673013B1 (en) Signal encoding and decoding system
KR100421226B1 (en) Method for linear predictive analysis of an audio-frequency signal, methods for coding and decoding an audiofrequency signal including application thereof
EP1141946B1 (en) Coded enhancement feature for improved performance in coding communication signals
JP3653826B2 (en) Speech decoding method and apparatus
EP0764939B1 (en) Synthesis of speech signals in the absence of coded parameters
EP1328923B1 (en) Perceptually improved encoding of acoustic signals
AU675322B2 (en) Use of an auditory model to improve quality or lower the bit rate of speech synthesis systems
JPH06222798A (en) Method for effective coding of sound signal and coder using said method
US6665638B1 (en) Adaptive short-term post-filters for speech coders
Kroon et al. Predictive coding of speech using analysis-by-synthesis techniques
KR20020012509A (en) Relative pulse position in celp vocoding
Almeida et al. Harmonic coding: A low bit-rate, good-quality speech coding technique
Sen et al. Perceptual enhancement of CELP speech coders
EP0954851A1 (en) Multi-stage speech coder with transform coding of prediction residual signals with quantization by auditory models
US20030055633A1 (en) Method and device for coding speech in analysis-by-synthesis speech coders
Gournay et al. A 1200 bits/s HSX speech coder for very-low-bit-rate communications
Bertorello et al. Design of a 4.8/9.6 kbps baseband LPC coder using split-band and vector quantization
JPH08160996A (en) Voice encoding device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase

Ref country code: US

Ref document number: 1996 545657

Date of ref document: 19960308

Kind code of ref document: A

Format of ref document f/p: F

122 Ep: pct application non-entry in european phase