EP2490459A1 - Method for voice signal blending - Google Patents

Method for voice signal blending

Info

Publication number
EP2490459A1
Authority
EP
European Patent Office
Prior art keywords
signal
noise
blending
microphone
level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP11155021A
Other languages
German (de)
French (fr)
Other versions
EP2490459B1 (en)
Inventor
Bernd Iser
Arthur Wolf
Patrick Hannon
Mohamed Krini
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SVOX AG
Original Assignee
SVOX AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SVOX AG filed Critical SVOX AG
Priority to EP11155021.6A priority Critical patent/EP2490459B1/en
Publication of EP2490459A1 publication Critical patent/EP2490459A1/en
Application granted granted Critical
Publication of EP2490459B1 publication Critical patent/EP2490459B1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 3/00 - Circuits for transducers, loudspeakers or microphones
    • H04R 3/005 - Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • H04R 3/02 - Circuits for transducers, loudspeakers or microphones for preventing acoustic reaction, i.e. acoustic oscillatory feedback
    • H04R 2499/00 - Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R 2499/10 - General applications
    • H04R 2499/11 - Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDAs, cameras
    • H04R 2499/13 - Acoustic transducers and sound field adaptation in vehicles

Definitions

  • SNNR: Signal plus Noise to Noise Ratio
  • SNR: Signal to Noise Ratio
  • This allows compensating for different background-noise levels and different speech levels at the same time; no trade-off between the two optimization criteria is necessary. This means applying Equation 8 and adjusting the filter coefficients of the noise suppression (e.g. a Wiener filter) to (see Fig. 4b):
    H(k,µ) = max( H_min , 1 − Σ_i w_i,speech(k,µ) · S_bibi(k,µ) / S_mimi(k,µ) )
  • Fig. 4a shows how a voice blender for carrying out the method according to the invention may be structured. It is clearly visible that feedback suppression or feedback compensation is done for each of the signals stemming from the microphones 1 and 2 in modules 7a and 7b. The output of these modules 7a, 7b is delivered to the voice blender 5a, i.e. to noise estimation modules 10, 10' (see equations 1-4), to power estimation modules 11, 11' (see equation 8) and to multipliers 12 and 12' for weighting the incoming signals.
  • The multipliers 12, 12' are each controlled by a module W_i which determines the blender weights for each signal of the microphones 1, 2.
  • At the output of blender 5a is a summing point 13.
  • The output signal v(µ,k) of the voice blender 5a can be used to influence noise suppression in module 8. This is shown in Fig. 4b, where the weighting signals W_i,speech(k,µ) are supplied to module 8. Suppressing the noise down to zero mostly leaves an unpleasant impression on the listener, for which reason it is preferred to adjust the noise suppression to a predetermined minimum level. This level typically equals 0.316, meaning a maximum suppression of -10 dB.
  • A configuration according to Fig. 4c is also possible within the scope of the present invention, keeping Equation 8 in mind.
  • The weighting signals W_i,speech(k,µ) are supplied to module 9, the noise dependent gain control, which, in order to have information about the noise present in the signal, also receives the output signal S_bibi(k,µ) of the noise estimation modules 10 and 10'.
  • The weighting signal W_i of the voice blender 5a could be used to control an equalizer 6a which forms part of the post-processor 6 (Fig. 1 to Fig. 2b).
  • The decision of the voice blender module is used for the noise dependent gain control module.
  • The noise dependent gain control module is responsible for increasing the level of the output signal by applying a gain g_l,NDGC(S_blbl(k,µ)), depending on the level of noise perceived by the listening party.
  • This increase of the output signal level depends on a characteristic which maps the level of background noise in the car cabin to a gain applied to the output signal. The characteristic chosen for this mapping depends on which speaker is active. This information is delivered by the voice blender module in the form of the microphone index of the most active microphone l( k ).
  • In Equation 12, S_blbl(k,µ) represents the noise component that is present in the microphone signal of the active speaker. Some sample characteristics are depicted in Fig. 6.
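As an illustration of the mapping described above, the following sketch implements a piecewise-linear noise-to-gain characteristic. The curve points and all names are invented for illustration; the patent only states that the characteristic depends on the active microphone index l(k).

```python
# Hypothetical sketch of a noise-dependent gain control characteristic:
# a piecewise-linear mapping from estimated background-noise power to an
# output gain. The curve points below are purely illustrative.

def ndgc_gain(noise_power, curve):
    """Interpolate the gain from sorted (noise_power, gain) points."""
    if noise_power <= curve[0][0]:
        return curve[0][1]
    for (x0, g0), (x1, g1) in zip(curve, curve[1:]):
        if noise_power <= x1:
            t = (noise_power - x0) / (x1 - x0)
            return g0 + t * (g1 - g0)
    return curve[-1][1]  # saturate above the last point

# Example curve: unity gain in a quiet cabin, up to a factor 2 (+6 dB)
# of amplification in strong noise.
driver_curve = [(0.0, 1.0), (1.0, 2.0)]
print(ndgc_gain(0.5, driver_curve))  # 1.5
```

In a full system there would be one such curve per speaker, selected by the most-active-microphone index delivered by the voice blender.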
  • Part of the post-processing 6 of an intercom system is the equalizer 6a.
  • This equalizer 6a can as well be implemented as a multi-channel equalizer individual for each channel.
  • the settings of the equalizer 6a are chosen depending on the voice blender decision of which person is speaking to enhance the perceived quality of the output for the listening party (see Fig.3 and Fig. 5 ).
  • NDGC mapping characteristic and equalizer setting can be chosen in an even more specific manner.
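The speaker-dependent equalizer selection can be sketched as follows; the preset values and names are assumptions for illustration only, as the patent does not specify concrete equalizer settings.

```python
# Minimal sketch of speaker-dependent equalizer settings: the most active
# microphone index delivered by the voice blender selects a preset of
# per-band gains. The preset values are purely illustrative.

EQ_PRESETS = {
    0: [1.0, 1.2, 0.9],  # e.g. driver microphone
    1: [1.0, 1.0, 1.1],  # e.g. co-driver microphone
}

def equalize(subbands, active_mic):
    """Apply the per-band gains of the active speaker's preset."""
    return [s * g for s, g in zip(subbands, EQ_PRESETS[active_mic])]

print(equalize([1.0, 1.0, 1.0], 0))  # [1.0, 1.2, 0.9]
```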

Abstract

In a method for voice signal blending in an indoor communication system, particularly in a vehicle, or in a hands-free telephony system or an automatic speech recognition system, there are at least two microphones, the voice signals of which should be blended to be delivered to at least one loudspeaker. Moreover, feedback suppression or compensation is provided. This feedback suppression or compensation is effected before blending the signals of the at least two microphones.

Description

    Field of the invention
  • The invention refers to a method and system installed in a car for the communication of people sitting in remote locations. For this purpose at least one loudspeaker is installed and at least two microphones, one assigned to each person. Alternatively, the invention relates to a method applied to a hands-free telephony system or to an automatic speech recognition system, where conditions and requirements are quite similar. More specifically, the invention relates to a method according to the introductory clause of claim 1 as well as to a software product according to claim 8 and a system according to claim 9.
  • Background of the invention
  • EP1850640B1 shows and describes a typical setup for a basic intercom system with two microphones and one loudspeaker. The mixer in this setup decides, depending on a criterion, which of the microphone signals shall be switched to the next module or the output. This criterion can depend on the detected occupation of a seat. However, it has also been proposed in US006549629B2 that such a decision criterion can depend on the Signal plus Noise to Noise Ratio (SNNR).
  • The fundamental disadvantage of such a system is that the feedback is still contained in the signal and therefore SNNR is not a good approach to a Signal to Noise Ratio (SNR). Using SNNR as SNR causes a misinterpretation of the signal power when estimating the SNNR for each microphone. The result of such a misinterpretation is the preferred usage of the microphone containing the highest amount of feedback.
  • A further disadvantage of these systems is a possible change in background-noise level when switching from one speaking person to another (e.g. driver speaks to a passenger in the back while having his window open, and then the co-driver speaks to the passenger in the back having his window closed and driver as well as co-driver are each equipped with a microphone). This change in background-noise level of the signal played back over the loudspeaker might be experienced as unpleasant by a listener.
  • Another disadvantage occurs when the play-back level differs between two speaking persons. This can happen if, for example, the driver is quite tall, so that the distance to the microphone is small, resulting in a high speech level in the microphone signal, while the co-driver is a short person, so that the distance to the microphone is large, resulting in a low speech level. In this basic setup the playback signal for the loudspeaker then differs in speech level between driver and co-driver in a manner that is unpleasant for the listener.
  • US-A-2005/0265560 discloses a system where a beamformer is used for microphone signal blending. The thus blended output signal is then subjected to feedback suppression. This has the disadvantage that the power of the signals to be blended is relatively high and, moreover, the output signal of the beamformer is adulterated by the feedback component of the signal. The suppression, however, merely suppresses frequencies that are just developing resonance oscillations, which means that feedback components remain and only resonance oscillations are prevented.
  • Summary of the invention
  • It is an object of the present invention to improve the quality of the blended signal and to find a method that is more robust in relation to feedback components.
  • This object is achieved by the measure according to the characterizing clause of claim 1. The feedback suppression or compensation is effected before blending the signals of the at least two microphones. If in this connection the term "feedback suppression or compensation" is used, it is in any case a minimization of feedback components of a signal. A feedback compensation is for example described in EP 1 679 874 B1 . Feedback suppression can be applied by filtering, for example with a notch-filter. Another known compensation method uses frequency shifting. These suppression methods reduce the development of feedback, but existing feedback components remain.
  • The method for voice signal blending is applied in a communication system, such as an indoor communication system, particularly in a vehicle, or in a hands-free telephony system or an automatic speech recognition system, comprising at least two microphones and at least one loudspeaker. The microphone signals are blended with respective weights to be delivered to the at least one loudspeaker. The feedback suppression or compensation is effected before blending the signals of the at least two microphones. This improves the quality of the blended signal and is more robust in relation to feedback components.
  • In preferred embodiments the feedback components are minimized by estimating the feedback signal or its energy level, preferably its power spectral density, and applying a Wiener filter which eliminates the estimated feedback signal from the signals to be blended.
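As an illustration of this preferred embodiment, the following is a minimal sketch of per-sub-band feedback minimization with a Wiener-type gain. All function and variable names are ours, not the patent's, and the spectral floor is an assumption of the sketch.

```python
# A minimal sketch (not the patent's implementation) of minimizing feedback
# components with a Wiener-type filter: per sub-band, the gain is reduced
# according to the estimated feedback power spectral density S_ff relative
# to the microphone power S_mm.

def wiener_feedback_gains(S_mm, S_ff, floor=0.0):
    """One Wiener gain per sub-band: 1 - S_ff/S_mm, clamped at `floor`."""
    gains = []
    for s_mm, s_ff in zip(S_mm, S_ff):
        h = 1.0 - s_ff / s_mm if s_mm > 0.0 else floor
        gains.append(max(floor, h))
    return gains

# A sub-band without feedback keeps gain 1.0; a sub-band whose power is
# half feedback is attenuated to 0.5.
print(wiener_feedback_gains([4.0, 2.0], [0.0, 1.0]))  # [1.0, 0.5]
```

Multiplying each microphone sub-band by its gain before blending removes most of the estimated feedback energy from the signals entering the blender.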
  • According to a preferred embodiment, for at least a sub-band of the microphone signals the energies of the microphone signals are determined and at least the energies of the noise components are estimated, wherein for blending, a higher weight is given to at least the sub-band with the highest ratio of signal energy to noise energy. Sub-dividing the entire frequency band into sub-bands makes it possible to give individual sub-bands different weights. Blending will be made for each sub-band with the respective weights.
  • It is favourable if a signal level adjustment of the at least two microphone signals to a predetermined value is carried out, particularly immediately before or immediately after blending, wherein the adjustment depends on the energies of the noise components and minimizes the perceivable difference in the noise level when blending from one microphone to the other. In this way changes of noise levels are better avoided.
  • There can be a perceivable difference in the speech level when blending from one microphone to the other. To solve this problem, the signal levels of the microphone signals are adjusted to a predetermined value so that there is no perceivable difference in the speech level when blending from one microphone to the other.
  • The noise level of at least two microphones can be unpleasantly different. An adjustment can be achieved, if the characteristics of claim 5 are fulfilled.
  • The microphone signals comprise different components, each with a certain level. If a person close to a microphone is speaking, then the signal of this microphone comprises this voice with a corresponding voice level. In a preferred embodiment the voice levels of the at least two microphone signals are adjusted substantially to the same level by adjusting the microphone signals. Then the difference of the noise levels of the at least two adjusted microphone signals is determined. The noise suppressions in the microphone signals are controlled by adapting the parameters of the noise suppression characteristic in such a way that, for each microphone signal, the noise-suppressed signal has substantially the same level of residual noise and the same speech level.
  • In a preferred embodiment the respective speaking and/or listening, non-speaking party is detected by one of the following measures:
    1. a) by analyzing the signal of the at least two microphones during blending;
    2. b) by sensing by means of vehicle sensors, e.g. for the seat occupancy.
  • In further preferred embodiments at least one of the following measures is taken:
    1. a) the signal weight of the microphone signal where no speaking party is detected is reduced, when blending;
    2. b) predetermined amplifier characteristics of the gain control, e.g. noise dependent gain control, for listeners and/or speakers are provided and that amplifier characteristic is chosen, which corresponds to the position of the respective listener and/or speaker;
    3. c) equalizing is effected by means of an equalizer after voice signal blending and that the settings of the equalizer are chosen depending on the detection so as to enhance the perceived quality of the output for the listening and/or speaking party.
    Brief description of the drawings
  • Further details and advantages will become apparent from the following description of embodiments with reference to the drawings, in which
  • Fig. 1
    shows a block diagram of a typical setup for a basic intercom system with two microphones and one loudspeaker;
    Fig. 2a
    is a block diagram of a first embodiment according to the invention, while
    Fig. 2b
    shows a second embodiment according to the invention, and
    Fig. 3
    depicts a third embodiment according to the invention;
    Fig. 4a
    illustrates a circuit for voice blending according to a first structure according to the invention;
    Fig. 4b
    is a circuit for voice blending according to a second structure according to the invention; and
    Fig. 4c
    shows a circuit for voice blending according to a third structure according to the invention;
    Fig. 5
    illustrates setting an equalizer after voice blending; and
    Fig. 6
    depicts a sample of a characteristic used for choosing a gain factor depending on a noise estimate.
    Detailed description of the invention
  • Fig. 1 shows a typical setup for a basic intercom system with two microphones 1 and 2 and one loudspeaker 3. Between a pre-processing stage 4, directly connected to the microphones 1, 2 and a post-processing stage 6, leading to the loudspeaker 3 is a voice mixer 5 which receives the pre-processed signals of the microphones 1, 2 and decides depending on a criterion which of the microphone signals shall be switched to its output and to the next module 6.
  • In Fig. 2a, one embodiment of the invention is illustrated, where the signals of the microphones 1, 2, after an optional pre-processing stage 4a, are subjected to feedback suppression or compensation in a stage 7. Thus, the first component after some optional pre-processing is a feedback suppression or feedback compensation at 7. This module 7 suppresses or compensates for the portion of the loudspeaker signal that is coupling back into the microphone and therefore being an undesired signal component.
  • Before reaching the voice blender 5a (Fig. 2a) or after the blender 5a (Fig. 2b), noise suppression can be effected in a noise suppression module 8. This module 8 suppresses the background-noise components of the microphone signals resulting from the background noise present in the car cabin and picked up by the microphones. A practical embodiment with the noise suppression placed after the voice blender 5a is shown in Fig. 4b.
  • The signal m_i of microphone i is made up of the signal from the speaker in the cabin s_i, the feedback of the system from the loudspeaker into the microphone f_i and the background noise of the cabin recorded by the microphone b_i.
  • This is suitably depicted by the following equation:
    m_i(k,µ) = s_i(k,µ) + f_i(k,µ) + b_i(k,µ)
    where i corresponds to the microphone index, k to the time interval and µ to the frequency band.
  • In the following the noise suppression is located before the voice blender 5a which is not necessarily required (without limitation of generality, see e.g. Fig. 4b).
  • In any case (vide Figs. 2a, 2b), the blended and noise-suppressed signal suitably goes to an NDGC or noise dependent gain control 9.
  • Voice decision
  • The voice blender module blends the at least two feedback-suppressed or -compensated and noise-suppressed signals according to Fig. 2a together into one output signal, following the criterion described below. According to Fig. 2b the blended signals are only feedback suppressed or compensated.
  • The blending is made by giving weights to the microphones. The blending or weighting criterion can be evaluated either in a non-frequency-selective or in a frequency-selective manner, resulting in a non-frequency-selective or a frequency-selective weighting of the microphone signals m_i. The criterion used in this invention for the weighting according to Fig. 4a, 4b, 4c is a function of the ratio of the energy of every microphone signal S_mimi, reduced by the energy S_fifi of the feedback component f_i (due to the coupling of the loudspeaker signal back into the microphone signal) and further reduced by the energy S_bibi of the noise component b_i (resulting from the noise present in the car cabin and picked up by the microphone), to the estimated noise present in each microphone signal (for the setup according to Fig. 2b each microphone signal is only reduced by the feedback). Double indices are used for the energy of the signal, and for power spectral densities:
    w_i(k,µ) = f( S_mm(k,µ), S_ff(k,µ), S_bb(k,µ) )
  • Here k is the time block index and µ the sub-band index. The output of the voice blender 5a is denoted by:
    v(k,µ) = Σ_{i=0}^{N−1} w_i(k,µ) · ( m_i(k,µ) − ( f_i(k,µ) + b_i(k,µ) ) )
    where N is the number of microphones.
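The blender output for one time block and sub-band can be sketched as follows; the function and variable names are illustrative, not from the patent.

```python
# Illustrative sketch of the blender output: each microphone's sub-band
# value, with the estimated feedback and noise components removed, is
# weighted and summed.

def blend(weights, mics, feedbacks, noises):
    """v = sum_i w_i * (m_i - (f_i + b_i)) for one time block and sub-band."""
    return sum(w * (m - (f + b))
               for w, m, f, b in zip(weights, mics, feedbacks, noises))

# Two microphones, one sub-band: the more heavily weighted microphone 0
# dominates the blended output.
v = blend([0.75, 0.25], [1.0, 0.4], [0.1, 0.1], [0.1, 0.1])
```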
  • For estimating the power of the microphone signal, a first-order IIR filter for smoothing can be applied according to
    S_mimi(k,µ) = β_m · |m_i(k,µ)|² + (1 − β_m) · S_mimi(k−1,µ)
    where β_m is a smoothing constant that has to be chosen between zero and one. This means the current short-term power is weighted with β_m and the estimate of the previous time frame is weighted with (1 − β_m).
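This recursive smoothing can be sketched in a few lines; the value chosen for β_m below is only an example.

```python
# Sketch of the first-order IIR power smoothing described above; beta_m
# weights the current short-term power |m|^2, (1 - beta_m) the previous
# estimate.

def smooth_power(prev_estimate, m, beta_m=0.5):
    """S_mm(k) = beta_m * |m(k)|^2 + (1 - beta_m) * S_mm(k-1)."""
    return beta_m * abs(m) ** 2 + (1.0 - beta_m) * prev_estimate

# Feeding a constant amplitude drives the estimate towards |m|^2.
S = 0.0
for _ in range(20):
    S = smooth_power(S, 1.0)
print(round(S, 6))  # close to 1.0
```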
  • The feedback component Sƒiƒi (k,µ) is estimated in the feedback compensation or feedback suppression module and is therefore given.
  • The noise component can be estimated using a minimum tracker according to
    S_bibi(k,µ) = min( S_mimi(k,µ), S_bibi(k−1,µ) ) · (1 + ε)
    where ε is a small number depending on the sampling rate. A good choice for a sampling rate of 44.1 kHz is e.g. ε = 0.00001. This means that the minimum of the current time frame and the previous one is weighted by (1 + ε), resulting in a signal that tracks local minima of the microphone signal power for estimating the noise signal power.
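The minimum tracker can be sketched as a one-line update; names are illustrative.

```python
# Sketch of the minimum tracker for the noise power estimate: the estimate
# follows local minima of the microphone power and is slowly inflated by
# (1 + epsilon) so it can climb again when the noise level rises.

def track_noise(prev_noise, S_mm, eps=1e-5):
    """S_bb(k) = min(S_mm(k), S_bb(k-1)) * (1 + eps)."""
    return min(S_mm, prev_noise) * (1.0 + eps)

# During speech the microphone power is high, so the estimate stays near
# the noise floor; in speech pauses it snaps down to the current power.
S_bb = 1.0
S_bb = track_noise(S_bb, 5.0)   # speech frame: estimate barely rises
S_bb = track_noise(S_bb, 0.5)   # pause frame: estimate drops immediately
```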
  • One possible implementation of w_i(k,µ) is:
    w_i(k,µ) = [ ( S_mimi(k,µ) − S_fifi(k,µ) − S_bibi(k,µ) ) / S_bibi(k,µ) ] / Σ_i [ ( S_mimi(k,µ) − S_fifi(k,µ) − S_bibi(k,µ) ) / S_bibi(k,µ) ]
  • This means that the modified SNR, after subtracting the power of the feedback signal and of the background noise from the microphone signal power, is set into relation with the sum over all microphones of the modified SNR. The resulting weight is more robust against misinterpretations of feedback power as desired speech power as would be the case using an SNNR as described in US006549629B2 .
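A possible implementation of these normalized modified-SNR weights is sketched below. The clamping of negative modified SNRs at zero and the equal-weight fallback for silence are assumptions of this sketch, not stated in the patent.

```python
# Sketch of the normalized modified-SNR blending weights: microphone power
# minus feedback and noise power, divided by the noise power, normalized
# over all microphones.

def blend_weights(S_mm, S_ff, S_bb):
    """w_i = modified SNR of mic i divided by the sum over all mics."""
    snr = [max(0.0, m - f - b) / b for m, f, b in zip(S_mm, S_ff, S_bb)]
    total = sum(snr)
    if total == 0.0:
        return [1.0 / len(snr)] * len(snr)  # assumption: no speech anywhere
    return [x / total for x in snr]

# Mic 0 carries clear speech (modified SNR 8), mic 1 little (modified SNR 2),
# so mic 0 receives four times the weight of mic 1.
print(blend_weights([10.0, 2.0], [1.0, 0.5], [1.0, 0.5]))  # [0.8, 0.2]
```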
  • Perception enhancement
  • Further criteria that can be included in the above weights are:
    • Adjusting the signal level before or after the blending to a common background-noise level, so that the listening party perceives no difference in background-noise level when switching from one microphone to the other, e.g. using

      $w_{i,\text{noise}}(k,\mu) = w_i(k,\mu)\,\dfrac{S_{bb,\text{desired}}(k,\mu)}{\frac{1}{M}\sum_{\mu=0}^{M-1} S_{b_i b_i}(k,\mu)}$

      where M is the number of sub-bands µ and S_{bb,desired}(k,µ) is the desired noise level. A gain is thus applied to any signal whose noise level is lower than the desired one so that it reaches the desired noise level.
    • Adjusting the signal level before or after the blending to a common speech level, so that the listening party perceives no difference in speech level when switching from one microphone to the other:

      $S_{s_i s_i}(k,\mu) = S_{m_i m_i}(k,\mu) - S_{f_i f_i}(k,\mu) - S_{b_i b_i}(k,\mu)$

      $w_{i,\text{speech}}(k,\mu) = w_i(k,\mu)\,\dfrac{S_{ss,\text{desired}}(k,\mu)}{\frac{1}{M}\sum_{\mu=0}^{M-1} S_{s_i s_i}(k,\mu)}$

      where S_{ss,desired}(k,µ) is the desired speech level.
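Both adjustment rules share the same form (scale the weight by the desired level over the sub-band-averaged actual level), so they can be sketched with one hypothetical helper applied either to the noise powers or to the speech powers:

```python
import numpy as np

def level_adjusted_weights(w, s_ii, s_desired):
    """Scale blender weights so each channel is driven toward a desired level.

    w: blender weights w_i(k, mu), shape (num_mics, num_subbands).
    s_ii: per-microphone noise power S_bibi or speech power S_sisi, same shape.
    s_desired: desired level per sub-band, shape (num_subbands,).
    """
    # (1/M) * sum over sub-bands of the per-microphone level
    mean_level = np.mean(s_ii, axis=1, keepdims=True)
    return w * s_desired / mean_level
```

A channel whose average level is below the desired one receives a gain greater than one, and vice versa.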
  • A combination of the above-mentioned criteria, with some trade-offs, is possible as well. A way of satisfying both criteria at the same time, without a trade-off, is described in the following.
  • Speaker individual noise suppression parameterization
  • To obtain the same noise level for each speaker, the voice blender module 5a determines the difference in noise levels between the signals and adapts the parameters of the noise-suppression characteristic so that the noise-suppressed signal has the same level of residual noise for each speaker.
  • Since a balanced speech level between the different speakers is desired as well, the differences in noise level must be evaluated after the signals have been balanced to a common speech level; the resulting noise-level differences are then used to adjust the parameters of the noise-suppression module for each speaker.
  • This allows compensating for different background-noise levels and different speech levels at the same time; no trade-off between the two optimization criteria is necessary. This means applying Equation 8 and adjusting the filter coefficients of the noise suppression (e.g. a Wiener filter) to (see Fig. 4b):

    $H(k,\mu) = \max\left(\beta,\; 1 - \sum_i w_{i,\text{speech}}(k,\mu)\,\dfrac{S_{b_i b_i}(k,\mu)}{S_{m_i m_i}(k,\mu)}\right)$

    with

    $\beta = \beta_{NR}\,\dfrac{1}{M}\sum_{\mu=0}^{M-1}\sum_i w_{i,\text{speech}}(k,\mu)\,\dfrac{S_{s_i s_i}(k,\mu)}{S_{ss,\text{desired}}(k,\mu)}$

    and β_NR being the regular spectral floor, i.e. the maximum attenuation of the Wiener filter (typically β_NR = −10 dB).
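Reading β as a spectral floor that bounds the Wiener gain from below, the adaptation can be sketched as follows (illustrative NumPy code; array layout as in the earlier sketches, β_NR given as a linear gain):

```python
import numpy as np

def wiener_gain(w_speech, s_ss, s_bb, s_mm, s_ss_desired, beta_nr=10 ** (-10 / 20)):
    """Wiener-filter coefficients with a speaker-dependent spectral floor.

    w_speech, s_ss, s_bb, s_mm: shape (num_mics, num_subbands).
    s_ss_desired: desired speech level per sub-band, shape (num_subbands,).
    beta_nr: regular spectral floor, here -10 dB expressed as a linear gain.
    """
    # adapted floor: beta_nr scaled by the weighted, sub-band-averaged
    # ratio of actual to desired speech power
    beta = beta_nr * np.mean(np.sum(w_speech * s_ss / s_ss_desired, axis=0))
    # Wiener attenuation based on the blended noise-to-signal power ratio
    h = 1.0 - np.sum(w_speech * s_bb / s_mm, axis=0)
    return np.maximum(beta, h)  # never attenuate below the floor
```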
  • Joint information usage
  • Fig. 4a shows how a voice blender for carrying out the method according to the invention may be structured. Feedback suppression or feedback compensation is performed for each of the signals stemming from the microphones 1 and 2 in modules 7a and 7b. The output of these modules 7a, 7b is delivered to the voice blender 5a, i.e. to the noise estimation modules 10, 10' (see Equations 1-4), to the power estimation modules 11, 11' (see Equation 8) and to the multipliers 12 and 12' for weighting the incoming signals.
  • To this end, the multipliers 12, 12' are each controlled by a module Wi which determines the blender weights for each signal of the microphones 1, 2. The output of the blender 5a is formed at a summing point 13.
  • It is clear that the representation as distinct blocks 10, 11, 12 does not mean that these blocks have to be realized as separate devices; in practice, all of these operations, or at least part of them, will be handled by software.
  • As has been mentioned above, the output signal v(k,µ) of the voice blender 5a can be used to influence the noise suppression in module 8. This is shown in Fig. 4b, where the weighting signals w_{i,speech}(k,µ) are supplied to module 8. Suppressing the noise down to zero mostly sounds unpleasant to the listener, for which reason it is preferred to limit the noise suppression to a predetermined minimum level. This level typically equals 0.316, corresponding to a maximum suppression of -10 dB.
  • Alternatively, or in combination with the above-mentioned measures (see also Fig. 3, which shows the information exchange between the modules in dotted lines), a configuration according to Fig. 4c is possible within the scope of the present invention, keeping Equation 8 in mind. According to this figure, the weighting signals w_{i,speech}(k,µ) are supplied to module 9, the noise-dependent gain control, which also receives the output signals S_{b_i b_i}(k,µ) of the noise estimation modules 10 and 10' so as to have information on the noise present in the signal.
  • In a further modification according to Fig. 5, the weighting signal Wi of the voice blender 5a could be used to control an equalizer 6a which forms part of the post-processor 6 (Fig. 1 to Fig. 2b).
  • For Figures 3, 4c and 5, the following details are given:
  • (i) Noise dependent gain control
  • The decision of the voice blender module is used by the noise-dependent gain control (NDGC) module. The NDGC module is responsible for increasing the level of the output signal by applying a gain g_{ℓ,NDGC}(S_{b_ℓ b_ℓ}(k,µ)) depending on the level of noise perceived by the listening party. This increase follows a characteristic that maps the level of background noise in the car cabin to a gain applied to the output signal. The characteristic chosen for this mapping depends on which speaker is active; this information is delivered by the voice blender module in the form of the microphone index ℓ(k) of the most active microphone:

    $\ell(k) = \arg\max_i \sum_{\mu=0}^{M-1} w_{i,\text{speech}}(k,\mu)$

    $g_{\ell,NDGC}\big(S_{b_\ell b_\ell}(k,\mu)\big) = \begin{cases} 0, & \text{for } S_{b_\ell b_\ell}(k,\mu) \le S_{bb,\ell,\text{low}}(k,\mu) \\ a\,\big(S_{b_\ell b_\ell}(k,\mu) - S_{bb,\ell,\text{low}}(k,\mu)\big), & \text{for } S_{bb,\ell,\text{low}}(k,\mu) < S_{b_\ell b_\ell}(k,\mu) < S_{bb,\ell,\text{high}}(k,\mu) \\ g_{\ell,NDGC,\max}\big(S_{b_\ell b_\ell}(k,\mu)\big), & \text{for } S_{b_\ell b_\ell}(k,\mu) \ge S_{bb,\ell,\text{high}}(k,\mu) \end{cases}$

    $a = \dfrac{g_{\ell,NDGC,\max}\big(S_{b_\ell b_\ell}(k,\mu)\big)}{S_{bb,\ell,\text{high}}(k,\mu) - S_{bb,\ell,\text{low}}(k,\mu)}$

    where g_{ℓ,NDGC,max}(S_{b_ℓ b_ℓ}(k,µ)) represents the maximum gain of the NDGC (e.g. 10 dB). All properties of the different NDGC characteristics are thus described by g_{ℓ,NDGC,max}(S_{b_ℓ b_ℓ}(k,µ)), S_{bb,ℓ,high}(k,µ) and S_{bb,ℓ,low}(k,µ).
  • In Equation 12, S_{b_ℓ b_ℓ}(k,µ) represents the noise component that is present in the microphone signal of the active speaker. Some sample characteristics are depicted in Fig. 6.
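The three-segment characteristic can be sketched as follows; gains are taken in dB, and anchoring the linear ramp at the lower threshold (so the characteristic is continuous at both thresholds) is an assumption of this sketch:

```python
import numpy as np

def ndgc_gain(s_bb, s_low, s_high, g_max):
    """Noise-dependent gain characteristic for the active speaker.

    s_bb: estimated noise power of the active microphone.
    s_low, s_high: lower and upper noise thresholds of the characteristic.
    g_max: maximum NDGC gain in dB (e.g. 10 dB).
    Returns 0 dB below s_low, a linear ramp in between, g_max above s_high.
    """
    a = g_max / (s_high - s_low)  # slope of the linear segment
    return np.clip(a * (s_bb - s_low), 0.0, g_max)
```

Different speakers get different (s_low, s_high, g_max) triples, which is all that distinguishes the per-speaker characteristics.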
  • (ii) Equalizer
  • As has been mentioned above, part of the post-processing 6 of an intercom system is the equalizer 6a. This equalizer 6a can also be implemented as a multi-channel equalizer, individual for each channel. The settings of the equalizer 6a are chosen depending on the voice blender's decision of which person is speaking, so as to enhance the perceived quality of the output for the listening party (see Fig. 3 and Fig. 5).
  • (iii) Further optional modifications
    • A multi-microphone array can be used for beamforming. This allows for localization of the speaker in the car cabin (similar to the decision ℓ(k) of the voice blender). This information is used by the voice blender to decide which speaker is active.
    • Detection of the position of the listening party (e.g. by weight sensors in the seats) to further enhance joint information usage:
      • The voice blender knows where no speaker is and can set the weights accordingly.
  • Furthermore, by knowing who speaks and who listens, the NDGC mapping characteristic and the equalizer setting can be chosen in an even more specific manner.
  • Numerous modifications can be made within the scope of the invention; for example, instead of voice blending, beamforming could be performed, as indicated above, so that the expression "blending" has to be understood in a broader sense.

Claims (9)

  1. Method for voice signal blending in a communication system, such as an indoor communication system, particularly in a vehicle, or in a hands-free telephony system or an automatic speech recognition system, comprising at least two microphones and at least one loudspeaker, wherein the microphone signals are blended with respective weights to be delivered to the at least one loudspeaker and feedback suppression or compensation is provided, characterized in that feedback suppression or compensation is effected before blending the signals of the at least two microphones.
  2. Method according to claim 1, characterized in that for at least a sub-band of the microphone signals the energies of the microphone signals are determined and at least the energies of the noise components are estimated, wherein for blending, a higher weight is given to at least the sub-band with the highest ratio of signal energy to noise energy.
  3. Method according to claim 1 or 2, characterized in that a signal level adjustment of the at least two microphone signals to a predetermined value is carried out, particularly immediately before or immediately after blending, wherein the adjustment depends on the energies of the noise components and minimizes the perceivable difference in the noise level when blending from one microphone to the other.
  4. Method according to any of the preceding claims, characterized in that signal levels of the microphone signals are adjusted to a predetermined value so that there is no perceivable difference in the signal level when blending from one microphone to the other.
  5. Method according to claim 4, characterized in that voice levels of the at least two microphone signals are adjusted substantially to the same level, then the difference of the noise levels of the at least two adjusted microphone signals is determined and the noise suppression is controlled by adapting the parameters of the noise suppression characteristic, to generate a noise suppressed signal having substantially the same level of residual noise and the same level of speech signal in it for each microphone signal.
  6. Method according to any of the preceding claims, characterized in that the respective speaking and/or listening, non-speaking party is detected by one of the following measures:
    a) by analyzing the signal of the at least two microphones during blending;
    b) by sensing by means of vehicle sensors, e.g. for the seat occupancy.
  7. Method according to claim 6, characterized in that at least one of the following measures is taken after detection:
    a) the signal weight of the microphone signal where no speaking party is detected is reduced, when blending;
    b) predetermined amplifier characteristics of the gain control, e.g. noise dependent gain control, for listeners and/or speakers are provided and that amplifier characteristic is chosen, which corresponds to the position of the respective listener and/or speaker;
    c) equalizing is effected by means of an equalizer after voice signal blending and that the settings of the equalizer are chosen depending on the detection so as to enhance the perceived quality of the output for the listening and/or speaking party.
  8. Software product which executes the method according to any of the preceding claims.
  9. Communication system, such as an in-door communication system, a hands-free telephony system or an automatic speech recognition system, comprising at least one loudspeaker and at least two microphones, as well as a signal treatment device, which carries out a method according to any of claims 1 to 7.
EP11155021.6A 2011-02-18 2011-02-18 Method for voice signal blending Active EP2490459B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP11155021.6A EP2490459B1 (en) 2011-02-18 2011-02-18 Method for voice signal blending

Publications (2)

Publication Number Publication Date
EP2490459A1 true EP2490459A1 (en) 2012-08-22
EP2490459B1 EP2490459B1 (en) 2018-04-11

Family

ID=44065560

Family Applications (1)

Application Number Title Priority Date Filing Date
EP11155021.6A Active EP2490459B1 (en) 2011-02-18 2011-02-18 Method for voice signal blending

Country Status (1)

Country Link
EP (1) EP2490459B1 (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5602962A (en) * 1993-09-07 1997-02-11 U.S. Philips Corporation Mobile radio set comprising a speech processing arrangement
US6549629B2 (en) 2001-02-21 2003-04-15 Digisonix Llc DVE system with normalized selection
US20030026437A1 (en) * 2001-07-20 2003-02-06 Janse Cornelis Pieter Sound reinforcement system having an multi microphone echo suppressor as post processor
US20050265560A1 (en) 2004-04-29 2005-12-01 Tim Haulick Indoor communication system for a vehicular cabin
EP1679874B1 (en) 2005-01-11 2008-05-21 Harman Becker Automotive Systems GmbH Feedback reduction in communication systems
EP1850640A1 (en) * 2006-04-25 2007-10-31 Harman/Becker Automotive Systems GmbH Vehicle communication system
EP1850640B1 (en) 2006-04-25 2009-06-17 Harman/Becker Automotive Systems GmbH Vehicle communication system
US20090316923A1 (en) * 2008-06-19 2009-12-24 Microsoft Corporation Multichannel acoustic echo reduction

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9516418B2 (en) 2013-01-29 2016-12-06 2236008 Ontario Inc. Sound field spatial stabilizer
US9949034B2 (en) 2013-01-29 2018-04-17 2236008 Ontario Inc. Sound field spatial stabilizer
EP2816816A1 (en) * 2013-06-20 2014-12-24 2236008 Ontario Inc. Sound field spatial stabilizer with structured noise compensation
US9099973B2 (en) 2013-06-20 2015-08-04 2236008 Ontario Inc. Sound field spatial stabilizer with structured noise compensation
US9106196B2 (en) 2013-06-20 2015-08-11 2236008 Ontario Inc. Sound field spatial stabilizer with echo spectral coherence compensation
US9271100B2 (en) 2013-06-20 2016-02-23 2236008 Ontario Inc. Sound field spatial stabilizer with spectral coherence compensation
US9743179B2 (en) 2013-06-20 2017-08-22 2236008 Ontario Inc. Sound field spatial stabilizer with structured noise compensation
US11798576B2 (en) 2014-02-27 2023-10-24 Cerence Operating Company Methods and apparatus for adaptive gain control in a communication system
US11463820B2 (en) 2019-09-25 2022-10-04 Oticon A/S Hearing aid comprising a directional microphone system

Also Published As

Publication number Publication date
EP2490459B1 (en) 2018-04-11
