


Title:
A SYSTEM, DEVICE AND METHOD FOR AUDIO ENHANCEMENT AND AUTOMATIC CORRECTION OF MULTIPLE LISTENING ANOMALIES
Document Type and Number:
WIPO Patent Application WO/2024/028656
Kind Code:
A1
Abstract:
A plug-and-play system (1000) for correction of multiple audio inconsistencies by processing vocals-to-music and tonal imbalances, enhancement, and upscaling to develop high-resolution audio, and by correcting the generated output for audio equipment limitations and distortion, further maintaining equalization, volume, and psychoacoustic correction through real-time monitoring of the audio output and tracking (1012, 1014, 1016). The processing engine (1010) automatically converts, remasters, equalizes, enhances, and corrects the input audio signal using the pre-set rules. The generated-output monitoring unit (1014, 1016), along with the processing engine (1012), performs real-time monitoring of the generated audio, comparing a sample of test audio recorded by the tracking unit (1010) to generate a test result, and designs, develops, and implements a plurality of correction filters for the test audio if the test result represents an error (1012G).

Inventors:
DASGUPTA SURANJAN (IN)
Application Number:
PCT/IB2023/052706
Publication Date:
February 08, 2024
Filing Date:
March 20, 2023
Assignee:
DASGUPTA SURANJAN (IN)
International Classes:
G10L21/02
Foreign References:
US20110116642A12011-05-19
US20200252738A12020-08-06
Attorney, Agent or Firm:
DEWAN, Mohan (IN)
Claims:
CLAIMS:

1. A device for audio enhancement and automatic correction of multiple listening anomalies connected to an audio source and an audio player, said device comprising: a second repository (1002) configured to store a plurality of pre-set rules; an inputs module (1004) configured to receive an input audio signal from said audio source; a processing engine (1006) configured to cooperate with said inputs module (1004) and said second repository (1002) to receive said input audio signal and said pre-set rules, and further configured to automatically convert, remaster, equalize, enhance, and correct said input audio signal using said pre-set rules, wherein said processing engine (1006) comprises: a signal conversion module (1008) configured to receive said input audio signal and further configured to convert said input audio signal to a digital audio signal; a first processing engine (1010) configured to receive said digital audio signal and further configured to automatically correct acoustic anomalies of said digital audio signal; a recording unit (1016) communicatively coupled with said processing engine, configured to monitor audio played on said audio player and further configured to record a test audio at a trigger; and a second processing unit (1012) configured to cooperate with said recording unit (1016) to receive said test audio, perform real-time analysis and processing, and generate a test result, and further configured to design, develop, and implement a plurality of correction filters for said test audio if said test result represents an error, to generate a final audio.

2. The device as claimed in claim 1, wherein said device further comprises an output module to send said final audio to said audio player.

3. The device as claimed in claim 1, wherein said device is cooperatively coupled with a second device having a third processing module (1014) downloaded on it, and said third processing module (1014) is configured to provide an interface for setting, monitoring, recording, and processing a plurality of aspects of said device.

4. The device as claimed in claim 1, wherein said first processing engine (1010) further comprises: an audio bifurcation module (1010a) configured to bifurcate said digital audio signal into a vocal track and a plurality of music tracks; an auto-remastering module (1010b) configured to cooperate with said audio bifurcation module (1010a) and said second repository (1002) to receive said vocal track, said plurality of music tracks and said pre-set rules, and further configured to remaster and equalize said vocal track and said plurality of music tracks using said pre-set rules to produce a balanced track; an equalizing circuit (1010c) configured to cooperate with said auto-remastering module (1010b) to receive said balanced track, and further configured to process and enhance said balanced track to produce an enhanced equalized track; and a high resolution audio module (1010d) configured to cooperate with said equalizing circuit (1010c) to receive said enhanced equalized track, and further configured to implement an artificial intelligence technique to convert said enhanced equalized track to a high resolution wave form track.

5. The device as claimed in claim 2, wherein said auto-remastering module (1010b) further includes a vocal muting module configured to mute or unmute said vocal track upon receiving a corresponding signal from said third processing module (1014).

6. The device as claimed in claim 2, wherein said equalizing circuit (1010c) is an audio enhancement circuit that uses digital signal processors (DSP) to provide wide audio processing functionality to the user.

7. The device as claimed in claim 1, wherein said first processing engine (1010) further comprises: a listening area acoustic correction module (1010c) configured to correct a listening area for sound anomalies, said listening area acoustic correction module (1010c) comprising: a test trigger module (lOlOea) configured to trigger said device to play at-least one pre-recorded test tracks and further configured to trigger a recording unit (1016) to record said pre-recorded test track from a listening position to generate a result track; a track comparison module (lOlOeb) configured to cooperate with said test trigger module (101 Oea) to receive said result track and further configured to compare said pre-recorded test track with said result track and further configured to generate a comparison result; a result sending module (lOlOec) configured to cooperate with said track comparison module (lOlOeb) to receive said comparison result and further configured to send it to said second processing unit (1012); a filter exchange module (lOlOed) configured to cooperate with said second processing unit (1012) to receive a plurality of correction filters and further configured to send it to said third processing module (1014). The device as claimed in claim 6, wherein said recording unit (1016) is placed at a listening position in said listening area. The device as claimed in claim 6, wherein said third processing module (1014) receives said correction filters to display on a user device for user’s selection. 
The device as claimed in claim 1, wherein said second processing unit (1012) further comprises: a real-time distortion correction module (1012a) configured to auto execution of correction filters in real-time, said real-time distortion correction module (1012a) comprises: a master splitter module (1012b) configured to cooperate with a first processing engine (1010) to receive a high resolution wave form track and further configured to split said high resolution wave form track into a plurality of master EQ frequency bands; a distortion correction filter (1012c)configured to identify a distortion in said plurality of master EQ frequency bands and further configured to generate a DC filter; an automatic bass treble levelling module (1012d) configured to identify a bass treble mismatch in said plurality of master EQ frequency bands and further configured to generate a ABTL filter; an auto volume levelling module (1012e) configured to identify a volume mismatch in said plurality of master EQ frequency bands and further configured to generate a VOL filter; a psychoacoustics correction module (1012f) configured to identify a plurality of harsh frequencies (at high levels beyond long term exposure limits, at low levels as per minimal auditory response) in said plurality of master EQ frequency bands and further configured to generate a PC correction filter; an automatic summation and implementation of correction filter module (1012g) configured to implement DC filter, ABTL filter, VOL filter and PC correction filter on said plurality of master EQ frequency bands to generate a corrected master EQ frequency bands. The device as claimed in claim 1, wherein said first trigger for said recording unit (1016) generated from at-least one of an audio track change, a volume change, and equalization change. The device as claimed in claim 1, wherein said test audio is corrected for an output distortion, an equalization distortion, and a psychoacoustics. 
The device as claimed in claim 1, wherein said third processing module (1014) is implemented on a device selected from a group of a mobile device, a smart home control center, a tablet, and a device interfaced with a digital assistant.

14. The device as claimed in claim 1, wherein said audio source and said audio player are implemented as a single device.

15. The device as claimed in claim 1, wherein said pre-set rules are customizable based on a plurality of preferences selected from a group of a processing speed, a time lag, a required time, an audio quality, a maximum volume, and a minimum volume in any proportion.

16. The device as claimed in claim 1, wherein said pre-set rules are designed according to one of said audio source and said audio player at the manufacturing stage of said device and said pre-set rules are upgradable or alterable via a remote server.

17. A system (1000) for audio enhancement and automatic correction of multiple listening anomalies connected to an audio source and an audio player, said system comprising: a second repository (1002) configured to store a plurality of pre-set rules; an inputs module (1004) configured to receive an input audio signal from said audio source; a processing engine (1006) configured to cooperate with said inputs module (1004) and said second repository (1002) to receive said input audio signal and said pre-set rules, and further configured to automatically convert, remaster, equalize, enhance, and correct said input audio signal using said pre-set rules, wherein said processing engine (1006) comprises: a signal conversion module (1008) configured to receive said input audio signal and further configured to convert said input audio signal to a digital audio signal; a first processing engine (1010) configured to receive said digital audio signal and further configured to automatically correct acoustic anomalies of said digital audio signal; a recording unit (1016) configured to monitor audio played on said audio player and further configured to record a test audio at a trigger; and a second processing unit (1012) configured to perform real-time analysis and processing of said test audio and generate a test result, and further configured to design, develop, and implement a plurality of correction filters for said test audio if said test result represents an error, to generate a final audio.

18. The system as claimed in claim 17, wherein said system further comprises an output module to send said final audio to said audio player.

19. The system (1000) as claimed in claim 17, wherein said system is cooperatively coupled with a second device having a third processing module (1014) downloaded on it, and said third processing module (1014) is configured to provide an interface for setting, monitoring, recording, and processing a plurality of aspects of said system.
20. The system (1000) as claimed in claim 17, wherein said first processing engine (1010) further comprises: an audio bifurcation module (1010a) configured to bifurcate said digital audio signal into a vocal track and a plurality of music tracks; an auto-remastering module (1010b) configured to cooperate with said audio bifurcation module (1010a) and said second repository (1002) to receive said vocal track, said plurality of music tracks and said pre-set rules, and further configured to remaster and equalize said vocal track and said plurality of music tracks using said pre-set rules to produce a balanced track; an equalizing circuit (1010c) configured to cooperate with said auto-remastering module (1010b) to receive said balanced track, and further configured to process and enhance said balanced track to produce an enhanced equalized track; and a high resolution audio module (1010d) configured to cooperate with said equalizing circuit (1010c) to receive said enhanced equalized track, and further configured to implement an artificial intelligence technique to convert said enhanced equalized track to a high resolution wave form track.

21. The system (1000) as claimed in claim 20, wherein said auto-remastering module (1010b) further includes a vocal muting module configured to mute or unmute said vocal track upon receiving a corresponding signal from said third processing module (1014).

22. The system (1000) as claimed in claim 20, wherein said equalizing circuit (1010c) is an audio enhancement circuit that uses digital signal processors (DSP) to provide wide audio processing functionality to the user.
23. The system (1000) as claimed in claim 17, wherein said first processing engine (1010) further comprises: a listening area acoustic correction module (1010e) configured to correct a listening area for sound anomalies, said listening area acoustic correction module (1010e) comprising: a test trigger module (1010ea) configured to trigger said device to play at least one pre-recorded test track and further configured to trigger a recording unit (1016) to record said pre-recorded test track from a listening position to generate a result track; a track comparison module (1010eb) configured to cooperate with said test trigger module (1010ea) to receive said result track, and further configured to compare said pre-recorded test track with said result track and to generate a comparison result; a result sending module (1010ec) configured to cooperate with said track comparison module (1010eb) to receive said comparison result and further configured to send it to said second processing unit (1012); and a filter exchange module (1010ed) configured to cooperate with said second processing unit (1012) to receive a plurality of correction filters and further configured to send them to said third processing module (1014).

24. The system (1000) as claimed in claim 17, wherein said recording unit (1016) is placed at a listening position in said listening area.

25. The system (1000) as claimed in claim 17, wherein said third processing module (1014) receives said correction filters to display on a user device for the user's selection.
26. The system (1000) as claimed in claim 17, wherein said second processing unit (1012) further comprises: a real-time distortion correction module (1012a) configured for auto-execution of correction filters in real time, said real-time distortion correction module (1012a) comprising: a master splitter module (1012b) configured to cooperate with a first processing engine (1010) to receive a high resolution wave form track and further configured to split said high resolution wave form track into a plurality of master EQ frequency bands; a distortion correction filter (1012c) configured to identify a distortion in said plurality of master EQ frequency bands and further configured to generate a DC filter; an automatic bass treble levelling module (1012d) configured to identify a bass treble mismatch in said plurality of master EQ frequency bands and further configured to generate an ABTL filter; an auto volume levelling module (1012e) configured to identify a volume mismatch in said plurality of master EQ frequency bands and further configured to generate a VOL filter; a psychoacoustics correction module (1012f) configured to identify a plurality of harsh frequencies (at high levels beyond long-term exposure limits, at low levels as per minimal auditory response) in said plurality of master EQ frequency bands and further configured to generate a PC correction filter; and an automatic summation and implementation of correction filter module (1012g) configured to implement said DC filter, ABTL filter, VOL filter and PC correction filter on said plurality of master EQ frequency bands to generate corrected master EQ frequency bands.

27. The system (1000) as claimed in claim 17, wherein said first trigger for said recording unit (1016) is generated from at least one of an audio track change, a volume change, and an equalization change.

28. The system (1000) as claimed in claim 17, wherein said test audio is corrected for an output distortion, an equalization distortion, and psychoacoustics.

29. The system (1000) as claimed in claim 17, wherein said third processing module (1014) is implemented on a device selected from a group of a mobile device, a smart home control center, a tablet, and a device interfaced with a digital assistant.

30. The system (1000) as claimed in claim 17, wherein said pre-set rules are customizable based on a plurality of preferences selected from a group of a processing speed, a time lag, a required time, an audio quality, a maximum volume, and a minimum volume in any proportion.

31. The system (1000) as claimed in claim 17, wherein said pre-set rules are designed according to one of said audio source and said audio player at the manufacturing stage of said device and said pre-set rules are upgradable or alterable via a remote server.

32. A method (2000) for audio enhancement and automatic correction of multiple listening anomalies, said method comprising the steps of: storing, by a second repository (1002), a plurality of pre-set rules; receiving, by an inputs module (1004), an input audio signal from said audio source; receiving, by a processing engine (1006), said input audio signal and said pre-set rules from said inputs module (1004) and said second repository (1002); converting, by a signal conversion module (1008) of said processing engine (1006), said input audio signal to a digital audio signal; receiving, by a first processing engine (1010) of said processing engine (1006), said digital audio signal; automatically correcting, by said first processing engine (1010) of said processing engine (1006), acoustic anomalies of said digital audio signal; monitoring, by a recording unit (1016) of said processing engine (1006), audio played on said audio player; recording, by said recording unit (1016) of said processing engine (1006), a test audio at a trigger; analyzing and processing, by a second processing unit (1012) of said processing engine (1006), said test audio in real time and generating a test result; and designing, developing and implementing, by said second processing unit (1012) of said processing engine (1006), a plurality of correction filters for said test audio if said test result represents an error.

33. The method (2000) as claimed in claim 30, wherein said method can be executed using a second device having a third processing module (1014) downloaded on a user device, and said third processing module (1014) is configured to provide an interface for setting, monitoring, recording, and processing a plurality of aspects of said method.
34. The method (2000) as claimed in claim 30, wherein said step of automatically correcting, by said first processing engine (1010) of said processing engine (1006), further comprises the steps of: bifurcating, by an audio bifurcation module (1010a) of said first processing engine (1010), said digital audio signal into a vocal track and a plurality of music tracks; receiving, by an auto-remastering module (1010b) of said first processing engine (1010), said vocal track, said plurality of music tracks and said pre-set rules from said audio bifurcation module (1010a) and said second repository (1002); remastering and equalizing, by said auto-remastering module (1010b) of said first processing engine (1010), said vocal track and said plurality of music tracks using said pre-set rules to produce a balanced track; receiving, by an equalizing circuit (1010c) of said first processing engine (1010), said balanced track from said auto-remastering module (1010b); processing and enhancing, by said equalizing circuit (1010c) of said first processing engine (1010), said balanced track to produce an enhanced equalized track; receiving, by a high resolution audio module (1010d) of said first processing engine (1010), said enhanced equalized track from said equalizing circuit (1010c); and implementing, by said high resolution audio module (1010d) of said first processing engine (1010), an artificial intelligence technique to convert said enhanced equalized track to a high resolution wave form track.
35. The method (2000) as claimed in claim 30, wherein said step of analyzing and processing, by a second processing unit (1012) of said processing engine (1006), further comprises the steps of: automatically executing, by a real-time distortion correction module (1012a) of said second processing unit (1012), correction filters in real time; receiving, by a master splitter module (1012b) of said real-time distortion correction module (1012a), a high resolution wave form track from a first processing engine (1010); splitting, by said master splitter module (1012b) of said real-time distortion correction module (1012a), said high resolution wave form track into a plurality of master EQ frequency bands; identifying, by a distortion correction filter (1012c) of said real-time distortion correction module (1012a), a distortion in said plurality of master EQ frequency bands; generating, by said distortion correction filter (1012c) of said real-time distortion correction module (1012a), a DC filter; identifying, by an automatic bass treble levelling module (1012d) of said real-time distortion correction module (1012a), a bass treble mismatch in said plurality of master EQ frequency bands; generating, by said automatic bass treble levelling module (1012d) of said real-time distortion correction module (1012a), an ABTL filter; identifying, by an auto volume levelling module (1012e) of said real-time distortion correction module (1012a), a volume mismatch in said plurality of master EQ frequency bands; generating, by said auto volume levelling module (1012e) of said real-time distortion correction module (1012a), a VOL filter; identifying, by a psychoacoustics correction module (1012f) of said real-time distortion correction module (1012a), a plurality of harsh frequencies (at high levels beyond long-term exposure limits, at low levels as per minimal auditory response) in said plurality of master EQ frequency bands; generating, by said psychoacoustics correction module (1012f) of said real-time distortion correction module (1012a), a PC correction filter; and implementing, by an automatic summation and implementation of correction filter module (1012g) of said real-time distortion correction module (1012a), said DC filter, said ABTL filter, said VOL filter and said PC correction filter on said plurality of master EQ frequency bands to generate corrected master EQ frequency bands.

Description:
A SYSTEM, DEVICE AND METHOD FOR AUDIO ENHANCEMENT AND AUTOMATIC CORRECTION OF MULTIPLE LISTENING ANOMALIES

FIELD

The present invention relates to the field of audio signal processing. More particularly, the present invention relates to a system, a device and a method for audio enhancement and automatic correction of multiple listening anomalies.

DEFINITIONS

As used in the present disclosure, the following terms are generally intended to have the meaning as set forth below, except to the extent that the context in which they are used indicates otherwise.

Electronic device/ User device/ Mobile device - The terms 'electronic device', 'user device', and 'mobile device' hereinafter may be referred to as a device used by a user of the present disclosure, wherein the user device includes but is not limited to a mobile phone, a laptop, a tablet, an iPad, a PDA, a notebook, a netbook, a smart device, a smart phone, a personal computer, a handheld device and the like.

Digital Signal Processors - The term 'Digital Signal Processors' hereinafter may be referred to as a Processor. Audio DSPs are used to speed up the execution of audio-related algorithms while consuming less power than a typical CPU.

Equalization (EQ) - The term ‘Equalization (EQ)’ hereinafter may be referred to as the process of changing the gain / quality / level / balance of different frequency components in an audio signal.
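As a purely illustrative sketch (not part of the claimed subject matter), equalization in the sense defined above can be modeled as a per-band gain applied to already band-split audio. All function names, band names, and gain values below are hypothetical.

```python
# Hypothetical 3-band equalizer: gains are specified in dB per band
# and applied as linear amplitude multipliers to each band's samples.

def db_to_linear(db: float) -> float:
    """Convert a dB gain to a linear amplitude multiplier."""
    return 10.0 ** (db / 20.0)

def equalize(bands: dict, gains_db: dict) -> dict:
    """Apply a per-band dB gain to already band-split audio samples."""
    return {
        name: [s * db_to_linear(gains_db.get(name, 0.0)) for s in samples]
        for name, samples in bands.items()
    }

bands = {"bass": [0.5, -0.5], "mid": [0.2, -0.2], "treble": [0.1, -0.1]}
out = equalize(bands, {"bass": -6.0, "treble": +6.0})  # cut bass, boost treble
```

Bands not named in the gain table pass through unchanged, which mirrors the "changing the gain of different frequency components" wording of the definition.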

Master Equalization (EQ) Frequency Bands - The term 'Master Equalization (EQ) Frequency Bands' refers to an automatic equalization process carrying out various frequency band adjustments as per feedback received from monitoring of playback quality of generated audio output.

Distortion Correction (DC) Filter - The term 'Distortion Correction (DC) Filter' hereinafter may be referred to as a filter that is inserted between the mains grid and the primary windings of the transformer to avoid mechanical humming from the mains transformer, caused by a DC component on the mains grid's AC voltage.
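One deliberately simplified way to picture frequency-band splitting, as performed before per-band adjustment, is a complementary two-band split whose bands sum back exactly to the original signal. The crude moving-average low-pass below is only an illustration, not the disclosed implementation.

```python
def moving_average_lowpass(x, window=3):
    """Crude low-pass: each output sample is the mean of the
    surrounding `window` input samples (edges padded by repetition)."""
    n = len(x)
    pad = window // 2
    padded = [x[0]] * pad + list(x) + [x[-1]] * pad
    return [sum(padded[i:i + window]) / window for i in range(n)]

def split_bands(x, window=3):
    """Complementary two-band split: low band + high band == input."""
    low = moving_average_lowpass(x, window)
    high = [s - l for s, l in zip(x, low)]
    return low, high

signal = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
low, high = split_bands(signal)
recombined = [l + h for l, h in zip(low, high)]  # equals the input
```

The complementary property (bands summing to the input) is what lets per-band corrections be applied and then recombined without losing signal content.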

Treble - The term 'Treble' hereinafter may be referred to as the highest sound in music.

Bass - The term 'Bass' hereinafter may be referred to as the lowest sound in music.

Automatic Bass Treble Levelling (ABTL) Filter - The term ‘Automatic Bass Treble Levelling (ABTL) Filter’ hereinafter may be referred to as a filter automatically adjusting the treble and the bass in music for a better sound conducive to the playing hardware.
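A minimal sketch of the bass/treble levelling idea, assuming the two bands have already been separated: the RMS-matching rule below is a hypothetical stand-in for whatever levelling policy an ABTL filter would actually apply.

```python
import math

def rms(samples):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def level_bass_treble(bass, treble):
    """Hypothetical ABTL step: scale the treble band so its RMS
    matches the bass band's RMS, leaving the bass band untouched."""
    b, t = rms(bass), rms(treble)
    gain = b / t if t > 0 else 1.0
    return bass, [s * gain for s in treble]

bass = [0.8, -0.8, 0.8, -0.8]    # RMS 0.8
treble = [0.2, -0.2, 0.2, -0.2]  # RMS 0.2, so the treble is boosted 4x
_, leveled = level_bass_treble(bass, treble)
```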

Auto Volume Leveling (VOL) Filter - The term ‘Auto Volume Leveling (VOL) Filter’ hereinafter may be referred to as a filter automatically adjusting the volume of music for a better sound conducive to the playing hardware.
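The auto volume levelling idea can likewise be sketched as a gain toward a target RMS with a gain cap; the target level and cap below are assumed values for illustration, not disclosed parameters.

```python
import math

TARGET_RMS = 0.25  # assumed target playback level (illustrative only)

def auto_level(samples, target_rms=TARGET_RMS, max_gain=8.0):
    """Hypothetical VOL filter: scale the track toward a target RMS,
    capping the gain so near-silence is not boosted into noise."""
    level = math.sqrt(sum(s * s for s in samples) / len(samples))
    if level == 0.0:
        return list(samples)
    gain = min(target_rms / level, max_gain)
    return [s * gain for s in samples]

quiet = [0.05, -0.05, 0.05, -0.05]  # RMS 0.05, so gain = 5.0
leveled = auto_level(quiet)         # RMS becomes 0.25
```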

Psychoacoustics Correction (PC) Filter - The term ‘Psychoacoustics Correction (PC) Filter’ hereinafter may be referred to as a filter automatically adjusting the sound of music to provide a soothing hearing experience to human ears.

BACKGROUND

The background information herein below relates to the present disclosure but is not necessarily prior art.

Listening to music is enjoyable and relaxing, and is pleasurable to the brain, as it releases dopamine, a chemical in the brain associated with pleasure and relaxation. However, there has been a change in the source of music from traditional sources such as LPs and CDs, which were mastered and checked for level and quality, to the presently popular Internet-based sources.

A lot of information and knowledge is exchanged through e-books, podcasts, and social media content creators. Earlier speeches, radio shows, and older TV shows and content are being streamed without a high-resolution track being available. Sometimes the content creator is unable to develop a high-resolution, studio-quality audio track, and the track may be embedded with background noise. Watching informational content, movies, shows, and series over OTT platforms is popular; in the event of lower internet bandwidth, OTT platforms automatically throttle and lower the quality of AV resolution. Earlier content may not be available in high-resolution audio and may be embedded with distortion and vocals-to-music and tonal imbalances.

In car audio applications, the car listening environment presents several listening challenges: the acoustic properties of each type of car are unique and different, which presents many challenges. To address this, purpose-built car audio DSPs are used that require advanced skill in installation and calibration by professional sound engineers. This system proposes a plug-and-play device / an additional output module to be added to car head units to correct for acoustic anomalies and upscale car radio and music content, including real-time monitoring, and to enhance the generated audio quality in real time by correcting for distortion arising from limitations of playback speakers, maintaining equalization, tonal balance, and volume levels as per user preferences, including reducing harsh frequencies by adopting psychoacoustic corrections. Existing audio playback and sound reinforcement systems are not inherently capable of upscaling low-resolution audio, correcting for vocals-to-music and tonal imbalances, and enhancing music playback quality. Existing audio processing methods apply corrections and equalization to a music source in a passive way with a predefined or user-defined set of rules by way of:

• applying user-defined or preset equalization,

• an embedded digital signal processing (DSP) - using a purpose-built pre-programmed audio processor (MCU) that is programmed in a certain way to carry out several corrections,

• other PC based programs programmed to carry out corrections to the music sources, and

• harmonics enhancers.

However, the above-mentioned conventional technologies for equalization and correction of music sources are not dynamic; they are either pre-programmed or user-defined. They neither perform real-time monitoring of the listening area nor apply automatic corrections to equalization for anomalies in the source and listening environment as per the quality of playback.
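The contrast drawn above, passive pre-programmed correction versus monitoring with automatic correction, can be pictured with the toy flow below. Every function is a hypothetical stand-in for the modules described in this disclosure (signal conversion, first processing engine, monitoring by the second processing unit); the clamping and threshold rules are invented for illustration only.

```python
def convert_to_digital(samples):
    """Stand-in for a signal conversion module."""
    return [float(s) for s in samples]

def correct_acoustic_anomalies(samples):
    """Stand-in for a first processing engine: here, simply clamp
    samples into [-1, 1] to mimic removal of gross distortion."""
    return [max(-1.0, min(1.0, s)) for s in samples]

def monitor_and_correct(samples, threshold=0.9):
    """Stand-in for a second processing unit: flag an error if any
    sample exceeds a threshold (the "test result"), and if so apply
    a correction gain (the "correction filter")."""
    if any(abs(s) > threshold for s in samples):
        return [s * threshold for s in samples]
    return samples

raw = [0, 1.5, -2.0, 0.5]
final = monitor_and_correct(correct_acoustic_anomalies(convert_to_digital(raw)))
```

The point of the sketch is the last stage: correction is driven by a measurement of the generated output, not by a fixed preset.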

There are several purpose-built MCUs with built-in audio I/O, called Audio DSPs. These MCUs have a preprogrammed way of functioning and require the setting up of various crossovers, equalizers, and inbuilt enhancers and surround sound generators or decoders. They come with a program to access the firmware and allow a manufacturer adopting the MCU to set and tweak multiple parameters; however, their overall performance is based on a preprogrammed, passive way of analyzing and processing audio.

For the purpose of room acoustics correction, an open-source program such as Room EQ Wizard (REW) is typically used by audio enthusiasts to assess room acoustics and obtain corrective results. However, the results obtained from REW need to be applied to a computer or similar hardware to achieve room corrections. Further, REW is difficult to use and needs a lot of manual setup. It also requires compatible hardware and skilled knowledge for setting up. Further, REW does not offer some important audio quality management functionalities, such as real-time distortion correction and psychoacoustics correction.
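The room-correction workflow that such tools perform manually can be pictured as comparing per-band reference levels of a test track against what a microphone records at the listening position, then deriving the dB correction needed per band. The band names and dB values below are invented purely for illustration.

```python
def correction_filters(reference_db, measured_db):
    """For each band, the correction is the level deficit (in dB)
    between the reference test track and the in-room measurement."""
    return {band: reference_db[band] - measured_db[band]
            for band in reference_db}

reference = {"63Hz": -6.0, "1kHz": -6.0, "8kHz": -6.0}   # played test levels
measured  = {"63Hz": -3.0, "1kHz": -6.0, "8kHz": -12.0}  # mic at listening spot
filters = correction_filters(reference, measured)
# the room boosts bass (needs a -3 dB cut) and swallows treble (+6 dB boost)
```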

Higher-end home theater amplifiers and music systems provide for room acoustics correction; however, these systems do not provide a plug-and-play module that can be added to existing audio systems, do not use the user's mobile device microphone to assess and process signals, and rely on separate hardware for this processing. These amplifiers are designed only for enclosed spaces, and they usually fail in an open-air environment.

There are some programs that offer the functionality of harmonics enhancement. These programs essentially analyze and boost frequencies that have low or poor recording/input source levels; however, they function according to a complex principle. These programs have been designed to be installed in a computer / similar hardware systems that emulate the playback source. These systems are not available in the form of plug-and-play hardware devices that can adapt to any playback source.

Furthermore, there are some plug-n-play add-on devices that can be used to apply room acoustics corrections. However, they do not facilitate auto-leveling and equalization, real-time distortion correction, and up-sampling of the audio signals.
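Up-sampling, one of the capabilities noted as missing above, can be sketched in its crudest form as midpoint interpolation between neighbouring samples; real up-samplers use proper interpolation filters, so this is only to illustrate the idea.

```python
def upsample_2x(samples):
    """Naive 2x up-sampling: insert the midpoint between each pair of
    neighbouring samples. Illustrative only; production resamplers
    use band-limited interpolation filters instead."""
    out = []
    for a, b in zip(samples, samples[1:]):
        out.append(a)
        out.append((a + b) / 2.0)
    out.append(samples[-1])
    return out

coarse = [0.0, 1.0, 0.0]
fine = upsample_2x(coarse)  # [0.0, 0.5, 1.0, 0.5, 0.0]
```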

Thus, none of the prior art systems provides an integrated plug-n-play solution for autoleveling and equalization, real-time distortions correction, and up-sampling.

There is, therefore, felt a need for a device, a system, and a method for audio enhancement and automatic correction of vocals-to-music and tonal imbalances that alleviates the aforementioned drawbacks.

OBJECTS

Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as follows:

It is an object of the present disclosure to ameliorate one or more problems of the prior art or to at least provide a useful alternative.

An object of the present disclosure is to provide a system, a device and a method for audio enhancement and automatic correction of multiple listening anomalies of an audio source quality to enhance source with real time monitoring of listening environment to reduce distortion, harsh sounds and achieve a constant balanced equalization (during track change) to make listening experience distortion free and enjoyable without constant adjustment by the listener.

Another object of the present disclosure is to provide a system, a device and a method that applies the principles of sound engineering to correct poor source quality, by analyzing source dynamically and processing the source composition by way of intermediate frequency leveling and corrections thereby attempting to achieve original music composition.

Still another object of the present disclosure is to provide a system, a device and a method that performs real-time monitoring of listening environment distortion and corrects for inherent distortion of source and audio playback equipment.

Yet another object of the present disclosure is to provide a system, a device and a method that performs an analysis of room acoustics and facilitates the correction of room and listening area acoustics anomalies.

A further objective of the present disclosure is to monitor, in real-time, the playback quality of audio equipment and provide a system that performs correction to reduce equalization anomalies by way of monitoring frequency levels and intermediate balance between frequencies to automatically adjust, to the optimum equalization levels, such that an optimum audio playback quality is achieved and playback is within the limits of the audio equipment.

Another object of the present disclosure is to provide a system, a device and a method that performs real-time adjustment of intermediate frequency playback levels as per the user’s preference and facilitate the correction of frequency level and intermediate balance between frequencies. Another object of the present disclosure is to provide a system, a device and a method that performs real-time automatic equalization of audio.

Still another object of the present disclosure is to provide a system, a device and a method that performs advanced acoustics tests in a plug and play easy manner and corrects room acoustics.

Yet another object of the present disclosure is to provide a system, a device and a method that corrects for physio acoustics in real-time and performs automatic loudness adjustment.

Still another object of the present disclosure is to monitor distortion and audio equipment overload and automatically apply real-time equalization to bass, treble, volume and intermediate frequency settings to achieve distortion free playback that is optimized to suit audio equipment.

Another object of the present disclosure is to provide real-time monitoring and corrections to uncomfortable levels of harsh frequencies as per the latest physio acoustics studies.

Yet another object of the present disclosure is to provide a system, a device and a method that employs Digital Signal Processing (DSP) harmonics enhancer and surround-sound processor with Artificial Intelligence (Al) based artificial remastering to automatically generate missing content and generate a high resolution audio track.

Still another object of the present disclosure is to provide a plug-and-play, easy to use and inexpensive system that applies all of the above features to correct poor source quality, listening environment and audio equipment anomalies along with real-time monitoring and various above mentioned equalization and corrections to source/ playback.

Another objective is to provide an advanced mobile based user interface for the user to enjoy the above objectives in an effortless manner.

Yet another object of the present disclosure is to monitor type of content - music - speech - movie and switch equalization mode automatically.

Still another object of the present disclosure is to add a Bluetooth headphone & mute larger speakers.

Other objects and advantages of the present disclosure will be more apparent from the following description, which is not intended to limit the scope of the present disclosure. SUMMARY

A system for audio enhancement and automatic correction of multiple listening anomalies connected to an audio source and an audio player.

In an aspect, the audio source and the audio player are implemented as a single device.

The system comprises a second repository, an inputs module, a processing engine and a quality enhancement unit.

In an aspect, the system is cooperatively coupled with a second device having a third processing module downloaded on it and the third processing module configured to provide an interface for setting, monitoring, quality enhancement, and processing a plurality of aspects of the system.

In another aspect, the third processing module is implemented on a device selected from a group of a mobile device, a smart home control center, a tablet, and a device interfaced with a digital assistant.

The second repository is configured to store a plurality of pre-set rules.

In an aspect, the pre-set rules are customizable based on a plurality of preferences selected from a group of a processing speed, a time lag, a required time, an audio quality, a maximum volume, and a minimum volume in any proportion.

In an aspect, the pre-set rules are designed according to one of the audio source and the audio player at the manufacturing stage of the device and the pre-set rules are upgradable or alterable via a remote server.

The inputs module is configured to receive an input audio signal from the audio source.

The processing engine is configured to cooperate with the inputs module and the second repository to receive the input audio signal and the pre-set rules, and is further configured to automatically convert, remaster, equalize, enhance, and correct the input audio signal using the pre-set rules, the processing engine comprises a signal conversion module, a first processing engine, and a second processing unit. The signal conversion module is configured to receive the input audio signal and is further configured to convert the input audio signal to a digital audio signal.

The first processing engine is configured to receive the digital audio signal and is further configured to automatically correct acoustic anomalies of the digital audio signal imbalances in source quality.

In an aspect, the first processing engine comprises an audio bifurcation module, an autoremastering module, an equalizing circuit and a high resolution audio module.

The audio bifurcation module is configured to bifurcate the digital audio signal into a vocal track and a plurality of music tracks.

The auto-remastering module is configured to cooperate with the audio bifurcation module and the second repository to receive the vocal track and the plurality of music tracks and the pre-set rules, and is further configured to remaster and equalize the vocal track and the plurality of music tracks using the pre-set rules to produce a balanced track.

In an aspect, the auto-remastering module further includes a vocal muting module configured to mute or unmute the vocal track upon receiving a corresponding signal from the third processing module.

The equalizing circuit is configured to cooperate with the auto-remastering module to receive the balanced track, and is further configured to process and enhance the balanced track to produce an enhanced equalized track.

In an aspect, the equalizing circuit is an audio enhancement circuit that uses digital signal processors (DSP).

The high resolution audio module is configured to cooperate with the equalizing circuit to receive the enhanced equalized track, and is further configured to implement artificial intelligence techniques to convert the enhanced equalized track to a high resolution wave form track.

In another aspect, the first processing engine further comprises an listening area acoustic correction module. The listening area acoustic anomalies correction module includes a test trigger module, a playback track comparison module, a result sending module and a fdter exchange module.

The test trigger module is configured to trigger the system to play at-least one pre-recorded test tracks and is further configured to trigger a recording unit to record the pre-recorded test track from a listening position to generate a result track.

The track comparison module is configured to cooperate with the test trigger module to receive the result track and is further configured to compare the pre-recorded test track with the result track and is further configured to generate a comparison result.

The result sending module is configured to cooperate with the track comparison module to receive the comparison result and is further configured to send it to the second processing unit.

The filter exchange module is configured to cooperate with the second processing unit to receive a plurality of correction filters and is further configured to send it to the third processing module.

In an aspect, third applications on user’s devise controls the steps of listening area acoustics correction filter generation. Multiple filters are generated and the user is able to select the best suited filter as per user preference.

The recording unit is configured to monitor audio played on the audio player and is further configured to record a test audio at a trigger.

In an aspect, the trigger for the recording unit generated from at-least one of an audio track change, a volume change, and an equalization change.

In an aspect, the test audio is corrected for an output distortion, an equalization distortion, and psychoacoustics.

In an aspect, the user’s devise is placed at a listening position in the listening area.

The second processing unit is configured to perform real-time analysis and processing of the test audio and generate a test result, and is further configured to design, develop, and implement a plurality of correction filters for the test audio if the test result represents an error to generate a final audio. In an aspect, the system further comprises an output module to send said final audio to the audio player.

In an aspect, the second processing unit comprises a real time listening area monitoring & generation of quality enhancement correction filters, a master splitter module, a distortion correction filter, an automatic bass treble levelling module, an auto volume leveling module, a psychoacoustics correction module and an automatic summation and implementation of correction filter module.

The master splitter module named herein as Master Eq is configured to cooperate with a first processing engine to receive a high resolution wave form track and is further configured to split the high resolution wave form track into a plurality of frequency bands. The levels of these frequency bands are automatically adjusted as per feedback filters received from the real time listening area playback quality monitoring unit.

The distortion correction filter is configured to identify distortion in the playback quality and generate a DC filter in the plurality of master EQ frequency bands.

The automatic bass treble levelling module is configured to monitor the generated output & identify a bass treble mismatch between user preference settings and playback levels of low & high frequencies in the plurality of master EQ frequency bands and is further configured to generate an ABTL filter.

The auto volume leveling module is configured to monitor the generated output & identify a volume mismatch in user set sound pressure limit and playback sound pressure limit in the plurality of master EQ frequency bands and is further configured to generate a VOL filter.

The psychoacoustics correction module is configured to monitor the generated output & identify a plurality of harsh frequencies in the plurality of master EQ frequency bands and is further configured to generate a PC correction filter.

In an aspect, the psychoacoustics correction module identifies the plurality of harsh frequencies at high levels beyond long term exposure limits.

In an aspect, the psychoacoustics correction module identifies the plurality of harsh frequencies at low levels and identifies a plurality of frequencies as per the minimal auditory response. The playback quality is automatically balanced as per user tonal preferences & further enhanced by automatic summation and implementation of the DC fdter, ABTL fdter, VOL filter and PC correction filter on the plurality of master EQ frequency bands.

The present disclosure further envisages a device for audio enhancement and automatic correction of vocals to music & tonal imbalances , tonal imbalances, embedded and generated output distortion correction by a plug & play method.

BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWING

The embodiments of the present disclosure will now be described with the help of the accompanying drawing, in which:

Figure 1 illustrates a block diagram of a system for audio enhancement and automatic correction of multiple listening anomalies, in accordance with an embodiment of the present disclosure;

Figure 1A - IB illustrates a flow diagram of a method for audio enhancement and automatic correction of multiple listening anomalies, in accordance with an embodiment of the present disclosure;

Figure 1C illustrates a first exemplary embodiment in a block diagram of a system to enhance audio source quality and correct listening area anomalies, in accordance with an embodiment of the present disclosure;

Figure 1D-1G illustrate a second exemplary embodiment in a block diagram of a system to enhance audio source quality and correct listening area anomalies, in accordance with an embodiment of the present disclosure; and

Figures 2A, 2B and 2C illustrate an exemplary embodiment in a flow diagram of a method for audio enhancement and automatic correction of multiple listening anomalies, in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION

Embodiments, of the present disclosure, will now be described with reference to the accompanying drawing. Embodiments are provided so as to thoroughly and fully convey the scope of the present disclosure to the person skilled in the art. Numerous details are set forth, relating to specific components, to provide a complete understanding of embodiments of the present disclosure. It will be apparent to the person skilled in the art that the details provided in the embodiments should not be construed to limit the scope of the present disclosure. In some embodiments, well-known processes, well-known apparatus structures, and well-known techniques are not described in detail.

The terminology used, in the present disclosure, is only for the purpose of explaining a particular embodiment and such terminology shall not be considered to limit the scope of the present disclosure. As used in the present disclosure, the forms "a,” "an," and "the" may be intended to include the plural forms as well, unless the context clearly suggests otherwise. The terms "comprises," "comprising," “including,” and “having,” are open-ended transitional phrases and therefore specify the presence of stated features, elements, modules, units, and/or components, but do not forbid the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

When an element is referred to as being "connected to," or "coupled to" another element, it may be directly on, engaged, connected, or coupled to the other element. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed elements.

The terms first, second, etc., should not be construed to limit the scope of the present disclosure as the aforementioned terms may be only used to distinguish one element, component, region, layer, or section from another component, region, layer or section. Terms such as first, second, etc. when used herein do not imply a specific sequence or order unless clearly suggested by the present disclosure.

Terms such as “inner,” “outer,” "beneath," "below," "lower," "above," "upper," and the like, may be used in the present disclosure to describe relationships between different elements as depicted from the figures.

The popularity of music sources is drifting away from master copies of physically/studio- recorded mediums, such as DVDs and CDs. Often original studio-recorded tracks are not available to a listener over the internet or shared files. Further, most of the time, the audio source is of poor quality with inherent distortion and severe nonlinearities. Moreover, the internet playback quality is dependent on various factors like internet speed, paid service, and quality of upload of the source.

Lot of information and knowledge is being exchanged through E Books , podcasts , and by social media content creators. Earlier speeches, radio shows and older TV shows & content is being streamed without a high resolution track being available. Sometimes the content creator is unable to develop a high resolution studio quality audio track and maybe embedded with background noise.

Watching informational content, movies, shows and series is popular over OTT platforms, in the event of lesser internet bandwidth the OTT platforms automatically throttles and lowers the quality of AV resolution. Earlier content may not be available in high resolution audio and may be embedded with distortion & vocals to music & tonal imbalances.

To solve the above-mentioned issues, the present disclosure envisages a system (hereinafter “system 1000”), a device (hereinafter “device 3000”) and method (hereinafter “method 2000”) for audio enhancement and automatic correction of multiple listening anomalies. The system 1000 and the method 2000 of the present disclosure are now being described in detail, with reference to Figure 1 through Figure 2C.

A plug and play system for vocals to music & tonal imbalances correction, enhancement, multiple audio processing & upscaling, listening area acoustics correction, and automatic correction by way of real time monitoring of generated output correcting for audio equipment limitations & distortion including maintaining user set tonal balance, equalization and volume of all audio sources.

In an aspect, the audio source and the audio player are implemented as a single device.

The system comprises a second repository 1002, an inputs module 1004, a processing engine 1006, a signal conversion module 1008, a first processing engine 1010, a quality enhancement unit 1016 and a second processing unit 1012.

In an aspect, the system is cooperatively coupled with a second device having a third processing module 1014 downloaded on it and the third processing module 1014 configured to provide an interface for setting, monitoring, recording, and processing a plurality of aspects of the system. In another aspect, the third processing module 1014 is implemented on a device selected from a group of a mobile device, a smart home control center, a tablet, and a device interfaced with a digital assistant.

The second repository 1002 is configured to store a plurality of pre-set rules.

In an aspect, the pre-set rules are customizable based on a plurality of preferences selected from a group of a processing speed, a time lag, a required time, an audio quality, a maximum volume, and a minimum volume in any proportion.

In an aspect, the pre-set rules are designed according to one of the audio source and the audio player and the pre-set rules are upgradable or alterable via a remote server.

The inputs module 1004 is configured to receive an input audio signal from the audio source. The input module is configured to receive input from multiple analog & digital sources including musical instruments allowing for user selection, karaoke & mixing between inputs through a mixing console implemented on Users Device 1014.

The processing engine 1006 is configured to cooperate with the inputs module 1004 and the second repository 1002 to receive the input audio signal and the pre-set rules, and is further configured to automatically convert, remaster, equalize, enhance, and correct the input audio signal using the pre-set rules, the processing engine 1006 comprises a signal conversion module 1008, a first processing engine 1010, a quality enhancement unit 1016 and a second processing unit 1012.

The signal conversion module 1008 is configured to receive the input audio signal and is further configured to convert the input audio signal to a digital audio signal.

The first processing engine 1010 is configured to receive the digital audio signal and is further configured to automatically correct acoustic anomalies of the digital audio signal.

In an aspect, the first processing engine 1010 automatically corrects multiple listening anomalies, embedded tonal imbalances, embedded & audio enhancement.

In an aspect, the first processing engine 1010 comprises an audio bifurcation module 1010a, an auto-remastering module 1010b, an equalizing circuit lOlOcand a high resolution audio module lOlOd. The audio bifurcation module 1010a is configured to bifurcate the digital audio signal into a vocal track and a plurality of music tracks.

The auto-remastering module 1010b is configured to cooperate with the audio bifurcation module 1010a and the second repository 1002 to receive the vocal track and the plurality of music tracks and the pre-set rules, and is further configured to remaster and equalize the vocal track and the plurality of music tracks using the pre-set rules to produce a balanced track.

In an aspect, the auto-remastering module 1010b further includes a vocal muting module configured to mute or unmute the vocal track upon receiving a corresponding signal from the third processing module 1014.

The equalizing circuit 1010c is configured to cooperate with the auto-remastering module 1010b to receive the balanced track, and is further configured to process and enhance the balanced track to produce an enhanced equalized track.

In an aspect, the equalizing circuit is an audio enhancement circuit that uses digital signal processors (DSP).

The high resolution audio module lOlOd is configured to cooperate with the equalizing circuit 1010c to receive the enhanced equalized track, and is further configured to implement artificial intelligence technique to convert the enhanced equalized track to a high resolution wave form track.

In another aspect, the first processing engine 1010 further comprises an listening area acoustic correction module 1010c.

The listening area acoustic correction module 1010c includes a test trigger module lOlOea, a track comparison module lOlOeb, a result sending module lOlOec and a filter exchange module lOlOed.

The test trigger module lOlOea is configured to trigger the system to play at-least one prerecorded test tracks and is further configured to trigger the user's device 1014 to record the pre-recorded test track from a listening position to generate a result track. The track comparison module lOlOeb is configured to cooperate with the test trigger module lOlOea to receive the result track and is further configured to compare the pre-recorded test track with the result track and is further configured to generate a comparison result.

The result sending module lOlOec is configured to cooperate with the track comparison module lOlOeb to receive the comparison result and is further configured to send it to the second processing unit 1012.

The filter exchange module lOlOed is configured to cooperate with the second processing unit 1012 to receive a plurality of correction filters and is further configured to send it to the third processing module 1014.

In an aspect, the third processing module 1014 enables the user to choose a suitable filter on the user's device.

The quality enhancement unit 1016 is configured to monitor the high resolution audio signal played on the audio player and is further configured to record a test audio at a trigger.

In an aspect, the trigger for the quality enhancement unit 1016 generated from at-least one of an audio track change, a volume change, and equalization change.

In an aspect, the test audio is corrected for an output distortion, an equalization distortion, and psychoacoustics.

In an aspect, the recording unit 1016 is placed at a listening position in the listening area.

The second processing unit 1012 is configured to perform real-time analysis and processing of the test audio and generate a test result, and is further configured to design, develop, and implement a plurality of correction filters for the test audio if the test result represents an error.

In an aspect, the second processing unit 1012 comprises a real time listening area monitoring & generation of quality enhancement correction filters 1012a, a master splitter module 1012b, a distortion correction filter, an automatic bass treble levelling module 1012d, an auto volume levelling module 1012e, a psychoacoustics correction module 1012f and an automatic summation and implementation of correction filter module 1012g.

16

SUBSTITUTE SHEET (RULE 26) The master splitter module 1012b is configured to cooperate with a first processing engine 1010 to receive a high resolution wave form track and is further configured to split the high resolution wave form track into a plurality of frequency bands. The levels of these frequency bands are automatically adjusted as per feedback filters received from the real time listening area playback quality monitoring unit.

The distortion correction filter 1012c is configured to identify distortion in the playback quality and to generate a DC filter in the plurality of master EQ frequency bands.

The automatic bass treble levelling module 1012d is configured to monitor the generated output & identify a bass treble mismatch in the user preference settings and playback levels of lo & high frequencies in the plurality of master EQ frequency bands and is further configured to generate an ABTL filter.

The auto volume levelling module 1012e is configured to monitor the generated output & identify a volume mismatch in the user set sound pressure limit and playback sound pressure limit in the plurality of master EQ frequency bands and is further configured to generate a VOL filter.

The psychoacoustics correction module 1012f module is configured to monitor the generated output & identify a plurality of harsh frequencies in the plurality of master EQ frequency bands and is further configured to generate a PC correction filter.

In an aspect, the psychoacoustics correction module identifies the plurality of harsh frequencies at high levels beyond long term exposure limits.

In an aspect, the psychoacoustics correction module identifies the plurality of harsh frequencies at low levels and identifies a plurality of frequencies as per minimal auditory response.

The playback quality is automatically balanced as per user tonal preferences & further enhanced by automatic summation and implementation of the DC filter, ABTL filter, VOL filter and PC correction filter on the plurality of master EQ frequency bands.

In an aspect, a device 3000 compromises an electronic device placed at the listening area to monitor the quality of playback and operate in tandem with devise 1010 to correct for distortions generated by the limitations of the audio equipment & maintain user set tonal balance, equalization, volume preferences along with psychoacoustics corrections in real time.

In an aspect, the audio source and the audio player are implemented as a single device.

The device 3000 comprises a second repository 1002, an inputs module 1004, a processing engine 1006 and a quality enhancement unit.

In an aspect, the device is cooperatively coupled with a second device having a third processing module 1014 downloaded on it and the third processing module 1014 configured to provide an interface for setting, monitoring, quality enhancement, and processing a plurality of aspects of the device.

In another aspect, the third processing module 1014 is implemented on a device selected from a group of a mobile device, a smart home control center, a tablet, and a device interfaced with a digital assistant.

The second repository 1002 is configured to store a plurality of pre-set rules.

In an aspect, the pre-set rules are customizable based on a plurality of preferences selected from a group of a processing speed, a time lag, a required time, an audio quality, a maximum volume, and a minimum volume in any proportion.

In an aspect, the pre-set rules are designed according to one of the audio source and the audio player and the pre-set rules are upgradable or alterable via a remote server.

The inputs module 1004 is configured to receive an input audio signal from the audio source.

The processing engine 1006 is configured to cooperate with the inputs module 1004 and the second repository 1002 to receive the input audio signal and the pre-set rules, and is further configured to automatically convert, remaster, equalize, enhance, and correct the input audio signal using the pre-set rules, the processing engine 1006 comprises a signal conversion module 1008, a first processing engine 1010, a quality enhancement unit 1016 and a second processing unit 1012.

The signal conversion module 1008 is configured to receive the input audio signal and is further configured to convert the input audio signal to a digital audio signal. The first processing engine 1010 is configured to receive the digital audio signal and is further configured to automatically correct vocals to music & tonal imbalances & anomalies in the source.

In an aspect, the first processing engine 1010 comprises an audio bifurcation module 1010a, an auto-remastering module 1010b, an equalizing circuit 1010c and a high resolution audio module lOlOd.

The audio bifurcation module 1010a is configured to bifurcate the digital audio signal into a vocal track and a plurality of music tracks.

The auto-remastering module 1010b is configured to cooperate with the audio bifurcation module 1010a and the second repository 1002 to receive the vocal track and the plurality of music tracks and the pre-set rules, and is further configured to remaster and equalize the vocal track and the plurality of music tracks using the pre-set rules to produce a balanced track.

In an aspect, the auto-remastering module 1010b further includes a vocal muting module configured to mute or unmute the vocal track upon receiving a corresponding signal from the third processing module 1014.

The equalizing circuit 1010c is configured to cooperate with the auto-remastering module 1010b to receive the balanced track, and is further configured to process and enhance the balanced track to produce an enhanced equalized track.

In an aspect, the equalizing circuit is an audio enhancement circuit that uses digital signal processors (DSP).

The high resolution audio module lOlOd is configured to cooperate with the equalizing circuit 1010c to receive the enhanced equalized track, and is further configured to implement artificial intelligence technique to convert the enhanced equalized track to a high resolution wave form track.

In another aspect, the first processing engine 1010 further comprises an listening area acoustic correction module 1010c. The listening area acoustic correction module 1010c includes an listening area acoustic correction module 1010c, a test trigger module lOlOea, a track comparison module lOlOeb, a result sending module lOlOec and a filter exchange module lOlOed.

The test trigger module 1010ea is configured to trigger the device to play at least one pre-recorded test track and is further configured to trigger a quality enhancement unit 1016 to record the pre-recorded test track from a listening position to generate a result track.

The track comparison module 1010eb is configured to cooperate with the test trigger module 1010ea to receive the result track, and is further configured to compare the pre-recorded test track with the result track and generate a comparison result.

The result sending module 1010ec is configured to cooperate with the track comparison module 1010eb to receive the comparison result and is further configured to send it to the second processing unit 1012.

The filter exchange module 1010ed is configured to cooperate with the second processing unit 1012 to receive a plurality of correction filters and is further configured to send them to the third processing module 1014.

In an aspect, the third processing module 1014 controls the steps of listening area acoustics correction filter generation. Multiple filters are generated and the user is able to select the best suited filter as per user preference.

The real time playback monitoring & quality enhancement unit 1016 is configured to monitor audio played on the audio player and is further configured to record a test audio at a trigger.

In an aspect, the trigger for the real time playback monitoring & quality enhancement unit 1016 is generated from at least one of an audio track change, a volume change, and an equalization change.

In an aspect, the generated audio is corrected for an output distortion, an equalization distortion, and psychoacoustics.

In an aspect, the user's device running the third processing module 1014 is placed at a listening position in the listening area. The second processing unit 1012 is configured to perform real-time analysis and processing of the test audio and generate a test result, and is further configured to design, develop, and implement a plurality of correction filters for the test audio if the test result represents an error.

In an aspect, the second processing unit 1012 comprises a real time listening area monitoring & generation of quality enhancement correction filters 1012a, a master splitter module 1012b, a distortion correction filter 1012c, an automatic bass treble levelling module 1012d, an auto volume levelling module 1012e, a psychoacoustics correction module 1012f and an automatic summation and implementation of correction filter module 1012g.

The master splitter module 1012b is configured to cooperate with a first processing engine 1010 to receive a high resolution wave form track and is further configured to split the high resolution wave form track into a plurality of master EQ frequency bands.

The distortion correction filter 1012c is configured to identify a distortion in the playback quality and generate a DC filter in the plurality of master EQ frequency bands.

The automatic bass treble levelling module 1012d is configured to monitor the generated output and identify a bass treble mismatch between user preference settings and playback levels of low & high frequencies in the plurality of master EQ frequency bands, and is further configured to generate an ABTL filter.

The auto volume levelling module 1012e is configured to monitor the generated output and identify a volume mismatch between the user set sound pressure limit and the playback sound pressure limit in the plurality of master EQ frequency bands, and is further configured to generate a VOL filter.

The psychoacoustics correction module 1012f is configured to monitor the generated output & identify a plurality of harsh frequencies in the plurality of master EQ frequency bands and is further configured to generate a PC correction filter.

In an aspect, the psychoacoustics correction module identifies the plurality of harsh frequencies at high levels beyond long term exposure limits. In an aspect, the psychoacoustics correction module identifies the plurality of harsh frequencies at low levels and identifies a plurality of frequencies as per minimal auditory response.
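The two-sided psychoacoustic check described above can be sketched as a per-band comparison against an exposure-limit table (harsh frequencies at high levels) and a minimal-auditory-response table (frequencies at low levels). The band labels and dB values below are illustrative placeholders, not figures from the specification.

```python
# Illustrative sketch of the psychoacoustics identification step.
# The limit/threshold tables are hypothetical placeholder values.

EXPOSURE_LIMIT_DB = {"2k-4k": 85.0, "4k-8k": 80.0}    # assumed long-term exposure limits
HEARING_THRESHOLD_DB = {"2k-4k": 0.0, "4k-8k": 10.0}  # assumed minimal auditory response

def identify_psychoacoustic_issues(band_levels_db):
    """Return bands that are harsh (above the exposure limit) or
    inaudible (below the minimal auditory response)."""
    harsh, inaudible = [], []
    for band, level in band_levels_db.items():
        if level > EXPOSURE_LIMIT_DB.get(band, float("inf")):
            harsh.append(band)
        elif level < HEARING_THRESHOLD_DB.get(band, float("-inf")):
            inaudible.append(band)
    return harsh, inaudible
```

The PC correction filter would then cut the harsh bands and may lift or ignore the inaudible ones, per user preference.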

The automatic summation and implementation of correction filter module 1012g automatically restores the equalization as per user settings & further corrects for distortion and psychoacoustics by implementing the DC filter, ABTL filter, VOL filter and PC correction filter in the plurality of master EQ frequency bands to generate a corrected balanced and distortion free final output equalization.
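The summation step above can be illustrated as a per-band addition of the four filters (DC, ABTL, VOL, PC), clamped to the +10db boost / -15db cut range the specification gives for the master EQ bands. The function name and the two-band example are assumptions for illustration only.

```python
# Minimal sketch of the filter-summation step: the four per-band
# corrections are summed and clamped to the allowed boost/cut range.

def sum_correction_filters(dc, abtl, vol, pc, boost_limit=10.0, cut_limit=-15.0):
    """Sum per-band corrections (in dB) and clamp to the allowed range."""
    return [
        max(cut_limit, min(boost_limit, d + a + v + p))
        for d, a, v, p in zip(dc, abtl, vol, pc)
    ]
```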

The output module 1018 is configured to render multiple analog & digital outputs, allowing for splitting of the final track into several bands for output to collaborative amplifiers, subwoofers, and wireless & network speakers. Control of the various output options, such as pass band level, frequency cutoff & roll off for connecting multiple network speakers, is implemented on the user's device 1014.

Referring to Figures 1A-1B, a method for audio enhancement and automatic correction of multiple listening anomalies is shown in accordance with an embodiment. The order in which the method 2000 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any appropriate order to carry out the method 2000 or an alternative method. Additionally, individual blocks may be deleted from the method 2000 without departing from the scope of the subject matter described herein. The method for audio enhancement and automatic correction of multiple listening anomalies includes steps of:

At step 2002: the method 2000 includes storing, by a second repository 1002, a plurality of pre-set rules;

At step 2004: the method 2000 includes receiving, by an inputs module 1004, an input audio signal from said audio source;

At step 2006: the method 2000 includes receiving, by a processing engine 1006, said input audio signal and said pre-set rules from said inputs module 1004 and said second repository 1002;

At step 2008: the method 2000 includes converting, by a signal conversion module 1008 of said processing engine 1006, said input audio signal to a digital audio signal;

At step 2010: the method 2000 includes receiving, by a first processing engine 1010 of said processing engine, said digital audio signal;

At step 2012: the method 2000 includes automatically correcting, by said first processing engine 1010 of said processing engine 1006, acoustic anomalies of said digital audio signal;

At step 2014: the method 2000 includes monitoring, by a quality enhancement unit 1016 of said processing engine 1006, audio played on said audio player;

At step 2016: the method 2000 includes recording, by said quality enhancement unit 1016 of said processing engine 1006, a test audio at a trigger;

At step 2018: the method 2000 includes analyzing and processing, by a second processing unit 1012 of said processing engine 1006, generated audio output in real time to develop a test result; and

At step 2020: the method 2000 includes designing, developing and implementing, by said second processing unit 1012 of said processing engine 1006, a plurality of correction filters for generated audio output if said test result represents an error.

In an embodiment, wherein said method 2000 can be directed by a second device having a third processing module 1014 downloaded on a user device and said third processing module 1014 configured to provide an interface for setting, monitoring, recording, and processing a plurality of aspects of said method.

In an embodiment, wherein said step of automatically correcting 2012 by said first processing engine 1010 of said processing engine 1006, further comprises steps of:

• bifurcating, by an audio bifurcation module 1010a of said first processing engine 1010, said digital audio signal into a vocal track and a plurality of music tracks;

• receiving, by an auto-remastering module 1010b of said first processing engine 1010, said vocal track and said plurality of music tracks and said pre-set rules from said audio bifurcation module 1010a and said second repository 1002;

• remastering, equalizing and producing, by said auto-remastering module 1010b of said first processing engine 1010, said vocal track and said plurality of music tracks using said pre-set rules to produce a balanced track;

• receiving, by an equalizing circuit 1010c of said first processing engine 1010, said balanced track from said auto-remastering module 1010b;

• processing, enhancing and producing, by said equalizing circuit 1010c of said first processing engine 1010, said balanced track to an enhanced equalized track;

• receiving, by a high resolution audio module 1010d of said first processing engine 1010, said enhanced equalized track from said equalizing circuit;

• implementing, by said high resolution audio module 1010d of said first processing engine 1010, an artificial intelligence technique to convert said enhanced equalized track to a high resolution wave form track.

In an embodiment, wherein said step of analyzing and processing 2018, by a second processing unit 1012 of said processing engine 1006 in tandem operation with module 1010, further comprises steps of:

• automatic execution of correction filters in real-time, by a real-time generated audio correction module 1012a of said second processing unit 1012, said module 1012a also being referred to as the real time listening area monitoring & generation of quality enhancement correction filters 1012a;

• receiving, by a master splitter module 1012b of said real time listening area monitoring & generation of quality enhancement correction filters 1012a, a high resolution wave form track from a first processing engine 1010;

• splitting, by said master splitter module 1012b of said real time listening area monitoring & generation of quality enhancement correction filters 1012a, a waveform graph into a plurality of master EQ frequency bands;

• identifying, by a distortion correction filter 1012c of said real time listening area monitoring & generation of quality enhancement correction filters 1012a, a waveform graph of the playback in the said plurality of master EQ frequency bands;

• developing, by module 1012c and module 1012a, a DC filter to reduce the levels of frequencies in the plurality of master EQ bands found to be distorted by way of comparison of input & output waveform graphs;

• identifying, by an automatic bass treble levelling module 1012d of said real-time generated audio correction module 1012a, a user set tonal balance level sample waveform in said plurality of master EQ frequency bands;

• developing, by said automatic bass treble levelling module 1012d of said real-time generated audio correction module 1012a, an ABTL filter by way of comparing real time output waveform tonal levels to the tonal levels of the user set waveform;

• identifying, by an auto volume levelling module 1012e of said real-time generated audio correction module 1012a, a volume mismatch between user set SPL & real time generated output SPL in said plurality of master EQ frequency bands;

• generating, by said auto volume levelling module 1012e of said real-time generated audio correction module 1012a, a VOL filter;

• identifying, by a psychoacoustics correction module 1012f of said real time listening area monitoring & generation of quality enhancement correction filters 1012a, in said plurality of master EQ frequency bands: at high levels of playback, a plurality of harsh frequencies, by comparing existing data of human saturation level endurance of harsh frequency levels with generated output frequency levels; and at low levels, a plurality of frequencies within the lower range of human sensitivity, by comparing with existing data of the minimal level of perception as per human auditory response;

• generating, by said psychoacoustics correction module 1012f of said real time listening area monitoring & generation of quality enhancement correction filters 1012a, a PC correction filter; and

• implementing, by an automatic summation and implementation of correction filter module 1012g of said real-time output correction module 1012a, said DC filter, said ABTL filter, said VOL filter and said PC correction filter in said plurality of master EQ frequency bands to correct for distortions generated by the limitations of the audio equipment & maintain user set tonal balance, equalization and volume preferences along with psychoacoustics corrections in real time.

In an aspect, the present invention applies studio sound engineering techniques to re-master source with a plug and play inexpensive system to address the common problems associated with listening to music derived from internet sources.

Referring to Figure 1C, the system 100 comprises a hardware unit 102 and an audio monitoring application 104 executable in an electronic device 10. The hardware unit 102 comprises a repository 106, an input module 108, an auto remastering module 110, a quality enhancement module 112, an AI up sampling module 114, a real time filters execution & acoustics correction module 113, a user interface 118, and a hardware beacon 120.

There are several incremental and interlinked steps of processing. The system may be implemented in multiple ways, not limited to the below steps and configuration.

The hardware unit 102 is a plug-and-play device connectable to an audio source 20. The hardware unit 102 is implemented using one or more processors, such as Raspberry Pi, Orange Pi, SBC or other common microprocessors, along with Audio I/O, Audio Digital Signal Processor (DSP), Bluetooth (BT), and/or Wi-Fi modules. The hardware unit 102 is configured to communicate with the electronic device 10 via Bluetooth, Wi-Fi, or other wireless communication networks. The hardware unit 102 is further configured to receive generated audio output or processed data of generated audio output via the electronic device 10. The hardware unit 102 carries out analysis and preprocessing of audio input and post processing of audio output via a wireless hardware beacon 120, comprising a microphone and other sensors placed at the listening position, or a mobile device placed at a listening position using the mobile device's internal microphone or an external microphone & sensors. This component consists of the acoustics correction module & real time auto equalization implementation module 113.

The audio monitoring application 104 comprises the generated output monitoring & correction filter generation module 117 and a user interface 118. This audio monitoring application 104 may be executed in commonly available hardware based on Android / iOS / Windows / Linux developer boards or in the hardware unit 102.

In an embodiment, the generated output monitoring & correction filter generation module 117 is configured to send real-time correction filters from the electronic device 10, via Bluetooth or other wireless communication means to hardware unit 102.

Further, the audio monitoring application 104 provides a user interface 118 executed in a mobile phone / tablet or electronic device 10. The user interface 118 facilitates the user to monitor input versus output audio in real time by way of graphical representation, along with the effect of corrections applied, and to change correction filter & equalization settings as per user preference.

The audio monitoring application 104 executed in the electronic device 10 monitors the audio playback in real-time by receiving the audio playback from the audio source 20 via the following methods:

i. A preprocessing program, a filter implementation application and the audio monitoring application 104 are implemented in one hardware unit 102 along with a dedicated wireless microphone or hardware beacon 120 comprising a microphone and associated hardware designed to be placed in the listening area.

ii. A mobile phone placed in the listening area with the audio monitoring application 104 is implemented with a feature to recognize whether the user mobile is in the preferred listening area.

iii. A dedicated hardware / mobile device placed at the listening position with the audio monitoring application 104 is implemented.

Auto remastering and Quality enhancement modules of component 102:

The repository 106 is configured to store a pre-defined set of frequency detection and splitting rules, a pre-defined set of vocal extraction rules, and a pre-defined set of male and female frequency band separation rules.

In an embodiment, the input module 108 is configured to cooperate with the audio source 20 to receive a digital audio signal as an input from the audio source 20. Alternatively, the input module 108 receives an analog audio signal and performs analog to digital conversion on the received audio signal to generate the digital audio signal, using analog to digital converter.

The auto remastering module 110 is configured to receive the digital audio signal, split the digital audio signal into constituent frequency bands using frequency detection and splitting rules, and extract different vocal components from the audio using the vocal extraction rules. Further, the auto remastering module 110 is configured to perform source intermediate frequency level correction by digitally applying one or more sound engineering techniques.

In one embodiment, the auto remastering module 110 comprises a separation unit 110a, a comparing unit 110b and a level cutting unit 110c. The separation unit 110a is configured to cooperate with the repository 106 to split the digital audio signal into vocal & music components as individual streams using the frequency detection and splitting rules. In an embodiment, the separation unit 110a splits the digital audio signal into 1/3rd octave frequency bands. The separation unit 110a is further configured to separate male and female frequency bands, from the frequency bands received from the input module 108, based on the separation rules stored in the repository 106. The comparing unit 110b is configured to compare the amplitude levels of each music stream frequency band with the amplitude levels of the male and female frequency bands based on the comparison rules stored in the repository 106. The level cutting unit 110c is configured to apply a level cut to frequencies that are higher than the vocal frequency bands, based on the comparison results received from the comparing unit 110b, to generate a remastered audio signal. The auto remastering module 110 is configured to enable a user to control a part of the level of cut via the user interface 118. In an embodiment, the level cut operates in two different ways depending on the presence of vocal frequency. When the vocal frequency is present in the frequency bands, a cut of about 10db to below the vocal level is applied to frequency levels higher than the vocal level, or as per the appropriate sound engineering technique applicable to the music content. When no vocal frequency is present in the frequency bands, a cut of about 5db is applied to frequency levels higher than the vocal level, or as per the appropriate sound engineering technique applicable to the music content.
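The comparing/level-cutting behaviour above can be sketched as follows. The function name and the use of a single reference level (the vocal level when vocals are present, an overall average otherwise) are simplifying assumptions for illustration.

```python
# Hedged sketch of the level-cutting step: music bands louder than the
# reference level are cut by about 10db when vocals are present, or by
# about 5db otherwise, per the dB values stated in the text.

def apply_level_cut(music_bands_db, reference_db, vocal_present):
    """Return music band levels (dB) after the per-band level cut."""
    cut = 10.0 if vocal_present else 5.0
    return [lvl - cut if lvl > reference_db else lvl for lvl in music_bands_db]
```

In a real implementation the cut depth would additionally be capped "to below vocal level" and user-adjustable via the user interface 118.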

Simultaneously, when vocals are present, it is checked in case if karaoke feature is activated and if it is so, the vocal frequencies are removed and a feature to facilitate the user to sing along is activated. Alternatively, when no vocal frequency is present, a uniform smoothening is applied in case any frequency is detected at an exceptionally higher amplitude level than the overall average levels of all frequencies to bring entire audio content to a uniform level. Herein, the levels of cut and smoothening are controlled by the user via the user interface 118.

The karaoke function is accessed and controlled by the user interface 118. The hardware unit 102 is equipped with additional inputs to facilitate the users to connect guitars and other musical instruments. With mixing console features available, the user interface 118 provides studio type Karaoke voice over and music remastering features.

To enable the karaoke functionality, the hardware unit 102 is configured to perform the following functions:

• checking if the Karaoke functionality is activated. If yes, extract music without voice and pass the extracted music to the next level;

• upon activation of the vocal suppress karaoke feature from the electronic device 10, the hardware unit 102 performs the following functions:

o receive singer audio input from the microphone of the electronic device 10 or an external microphone;

o continue the corrections as per the sound engineering principles to enhance the poor source quality;

o continue to apply all the correction filters;

o apply a cut to remove the vocal content from the music source to be passed to the next stage; and

o input the electronic device 10 or add-on microphone as the sources of vocals.

• mixer input:

o support analog and digital mixers for karaoke integration, wherein the playback level of the mixer may be controlled by the user using the electronic device 10;

o dedicated mixers may be made available to be connected to the hardware unit 102 with analog to digital converters and input identification, and each input may be controlled by the electronic device 10.

The audio monitoring application 104 installed in the electronic device 10 is configured to enable the features of vocal suppression, override, cut, and remove. Further, the audio monitoring application 104 provides the user with access to song lyrics and the in-built microphone to voice-over audio playback equipment. The hardware unit 102 may have a few inputs for musical instruments like guitars etc.; these inputs are made readily available in the user interface 118 to facilitate the mixing of input levels. The hardware unit 102 recognizes a dedicated mixer to add on multiple musical instruments at the same time. The audio monitoring application 104 allows the user to have full control over the settings of the add-on digital mixer via the hardware or mobile based user interface 118 and its communication with the plug and play hardware device.

Another feature of the auto remastering module 110 is to identify sporadic frequencies that are at an unusually higher level than the time weighted average of all other frequencies and apply a cut to bring all music content / instruments to a coherent level across all frequencies.

In an embodiment, the quality enhancement module 112 is configured to receive real-time correction filters from the electronic device 10, via Bluetooth or other wireless communication means. In an embodiment, the quality enhancement module 112 is configured to receive the remastered audio signal from the auto remastering module 110 and process the received signal by integrating existing powerful audio tools such as embedded DSP, enhancers, parametric equalizers and surround sound processing by:

• employing Audio Digital Signal Processor MCUs to use existing technologies to enhance source quality;

• executing Harmonics enhancer DSP/program to enhance sound quality;

• implementing Surround sound processor DSP/program to increase headroom and ambience effects; and

• integrating a parametric equalizer DSP program to make studio level equalization available to users.

The user will be able to control these enhancement features via the user interface 118. The quality enhancement module 112 generates a processed audio signal for audio playback.

In another embodiment, the AI up sampling module 114 is configured to reconstruct a lower-resolution waveform to a high-resolution audio waveform using existing & future open AI platforms, including 3rd party service providers, or by using its own AI up-sampling process for reconstruction. The up-sampling of music content is performed using hardware/cloud-based artificial intelligence technologies to generate missing music content.
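The shape of the up-sampling operation can be shown with a stand-in. The specification relies on AI platforms to reconstruct the missing content; the placeholder below merely interpolates linearly between existing samples to show the interface, and is not the AI reconstruction itself.

```python
# Placeholder for the AI reconstruction step: insert (factor - 1)
# interpolated samples between each original pair. A production system
# would instead run a trained model to synthesize the missing content.

def upsample_linear(samples, factor):
    """Linearly up-sample a waveform by an integer factor."""
    out = []
    for a, b in zip(samples, samples[1:]):
        for i in range(factor):
            out.append(a + (b - a) * i / factor)
    out.append(samples[-1])
    return out
```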

In another embodiment, the real time filters execution & acoustics correction module 113 carries out active equalization adapted to the listening environment and user preferences.

Listening area acoustics correction:

Once this feature is activated to correct for listening area acoustical anomalies, the acoustics correction module 113 performs a series of predefined audio frequency sweeps and analyzes the playback quality. Thereafter, module 113 generates a plurality of correction filters to implement listening area acoustics correction as per the detected audio playback quality. In particular, the real time filters execution & acoustics correction module 113 is configured to perform a calibration process to tune and correct for acoustics anomalies resulting from physical interaction of generated audio with the listening environment. The real time filters execution & acoustics correction module 113 performs the following functions:

• initiate playback of predefined stored test signals and sample the generated output to analyze listening area acoustics anomalies;

• storing generated output & analyzing detected test signal:

- Corrections are carried out to correct for Room/ listening area acoustic anomalies, during initial setup or in case there have been physical changes to listening area. These corrections are stored and applied during audio playback.

In an embodiment, the listening area anomalies monitoring & correction filter generation module 117 is configured to send real-time correction filters from the electronic device 10, via Bluetooth or other wireless communication means to hardware unit 102.

The user controls the above listening area acoustic correction feature & chooses correction filters via the user interface 118.

In an embodiment, the audio monitoring application 104 performs the following functions:

• monitoring of quality of audio playback and associated distortion with analysis to generate correction filters to correct for detected anomalies.

• maintaining a seamless balance of level of equalization and volume by monitoring changes to the source quality to set levels of equalization and overall balance of frequencies over audio bandwidth, wherein during a track change, a memory function is used to detect changes to frequencies compared to stored preferences and previous playback settings;

• generating correction filters automatically to correct the equalization to maintain set user preferences over all frequencies automatically; and

• automatic volume level correction between tracks by monitoring average sound pressure level at the listening area and generating correction filters to adjust the volume level to maintain the average sound pressure level (SPL).

• correction of room acoustics, which involves an automated processing of:

o generating test signals for playback to analyze room acoustics anomalies,

o analyzing the test signal in the listening position,

o generating room acoustics correction filters to correct for:

■ Room/ listening area acoustic anomalies, and

■ Loudspeaker phase shift and time alignment, and

o sending the correction filters to the hardware unit 102 and maintaining a record of the correction filters as per location/ room.

• Monitoring and correcting for physio acoustics and painful sounds which involves monitoring of frequency levels of frequencies that causes listening fatigue or are perceived as harsh sounds by the brain. Correction filters are developed to actively reduce levels of targeted frequencies as per latest physio acoustics studies thereby enhancing listening pleasure, comfort, and wellbeing.

Working of corrections to listening area, acoustics and development of active filters as per quality of reproduced audio to enhance overall output suited to listening environment, acoustics, audio equipment and physio acoustics.

Room acoustics and correction for listening environment anomalies:

• The hardware unit 102 generates playback test signals to analyze room acoustics. The test signals are recorded by the electronic device 10 and sent to the hardware unit 102 for analysis and generation of filters to correct for room acoustics anomalies.

A plurality of acoustics correction filters are generated for user selection; the user preference filter is stored and implemented by module 113 until the next calibration.

• The process of generation of correction filters by the hardware unit 102 is controlled by the user interface 118. The user is enabled to playback the suggested filters and save preferred filters for implementation. Acoustics correction and the calibration are carried out when audio playback equipment is changed or there have been physical changes or relocation of listening area.

Development of active filters as per quality of reproduced audio:

The real time filters execution & acoustics correction module 113 and the listening area anomalies monitoring & correction filter generation module 117 work in tandem to enhance the audio quality by applying correction filters for correcting playback distortion and listening area anomalies. The generation of listening area correction filters by tandem operation involves:

• executing a program in module 113, named Master Auto Equalizer bands, that splits the audio signal after AI up sampling into frequency bands, and

• tracking the input and carrying out various analyses & functions to develop real time listening area anomalies correction filters. The correction filter described herein is a string of digits developed by the listening area anomalies monitoring & correction filter generation module 117 to be implemented by the hardware unit 102. It contains information about level set points or adjustments to be made automatically by the hardware unit 102 by summation of all the correction filters to achieve an overall equalization. For example, if an audio source is split into 10 bands, the level setting of each band will range from a +10db boost through 0db to a -15db cut. When no boost/ cut is applied, a filter that may be used is 0, 0, 0, 0 ... across the 10 bands. To boost the 1st band by 3db, a filter that may be used is 3, 0, 0, 0 ... across the 10 bands.
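The digit-string filter described above can be sketched as a comma-separated list of per-band dB adjustments. The encode/decode helpers are illustrative assumptions; the specification states only that the filter is a string of digits, one value per master EQ band.

```python
# Sketch of the correction-filter string: a comma-separated list of
# per-band level adjustments in dB, one value per master EQ band.

def encode_filter(adjustments_db):
    """Serialize a per-band adjustment list into the filter string."""
    return ",".join(str(v) for v in adjustments_db)

def decode_filter(filter_string):
    """Parse a filter string back into per-band adjustments."""
    return [float(v) for v in filter_string.split(",")]

flat = encode_filter([0] * 10)               # no boost/cut on any of 10 bands
boost_first = encode_filter([3] + [0] * 9)   # +3db boost on the 1st band only
```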

For the implementation of correction filters, the source corrected audio is split into various frequency bands, called Master EQ bands, by module 113. A cut or boost filter control setting is assigned to the equalizer bands to apply a cut/ boost to each band, in the range +10db to -15db, as per the filter setting of playback corrected audio received from the listening area anomalies monitoring & correction filter generation module 117 for each band.

Every filter works differently and the principle of corrections by the filters is explained as below.

(1) Filter 1: Real time distortion monitoring and correction.

This distortion correction filter is generated by comparing source and playback waveforms. The hardware unit 102 receives a graphical representation of the source corrected waveform from the electronic device 10.

Listening area anomalies monitoring & correction filter generation module 117 samples the audio output waveform and compares the differences in smoothness between source & playback waveform.

The waveforms, split into master EQ frequency bands, are compared and analyzed to generate correction filters that are then sent via wireless communication means to the real time filters execution & acoustics correction module 113, which implements the correction filters, thereby actively reducing levels of frequencies that are distorted / jagged in real time. The level of correction will depend on the volume setting, overload of speakers, and source bandwidth. To avoid over compensation, the listening area waveform is updated at regular intervals.
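A per-band version of this comparison can be sketched as follows. The "smoothness" metric (mean absolute second difference, which grows when playback is jagged or clipped) and the threshold and cut values are assumptions; the specification says only that source and playback waveforms are compared.

```python
# Hedged sketch of the distortion (DC) filter decision for one band.

def jaggedness(samples):
    """Mean absolute second difference; larger values mean a more jagged waveform."""
    second_diffs = [
        samples[i + 1] - 2 * samples[i] + samples[i - 1]
        for i in range(1, len(samples) - 1)
    ]
    return sum(abs(d) for d in second_diffs) / max(len(second_diffs), 1)

def dc_filter_cut(source, playback, threshold=0.5, cut_db=-5.0):
    """Return a cut (dB) for a band whose playback is more jagged than its source."""
    return cut_db if jaggedness(playback) - jaggedness(source) > threshold else 0.0
```

The -5db default mirrors the compensation range of about -5db to -10db mentioned in the text.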

New filters are generated during - (i) a volume change by the user, (ii) an equalizer settings change by the user, and (iii) a track change detected by the hardware unit 102.

The level of correction by the hardware unit 102 is controlled by the user interface 118. The level of compensation will be limited to about -5db to -10db and can be controlled by user preference.

(2) Filter 2: Auto bass/treble correction.

The listening area anomalies monitoring & correction filter generation module 117 samples and saves the levels of the Lo and Hi frequencies, as a TWA (time-weighted average) waveform, once the user changes preferences.

Thereafter, at fixed time intervals, the module 117 monitors and generates the current playback waveform. Both waveforms are split into frequency bands and compared; if there is a difference in levels, a correction filter is generated to cut/boost the levels of those frequencies and automatically restore the user-preferred levels.

The correction filters are then sent via wireless communication means to the real time filters execution & acoustics correction module 113 to actively restore the user-set equalization levels in real time.

New filters are also generated upon (i) a volume change by the user, (ii) an equalizer settings change by the user, or (iii) a track change detected by the hardware unit 102.

The level of correction by the hardware unit 102 is controlled via the user interface 118, where the user defines the level of automatic correction to be applied.
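The Filter 2 behaviour can be sketched as a per-band comparison between the saved user-preferred levels and the currently sampled levels. This is an assumption-laden illustration; the clamp limit and the function name are not from the specification.

```python
# Illustrative sketch of Filter 2: restore the user-preferred Lo/Hi
# levels. Saved levels are captured when the user changes preferences;
# at fixed intervals the playback levels are compared and a cut/boost
# set point is generated per band, clamped to an assumed user limit.

def bass_treble_filter(saved_levels, current_levels, limit_db=6.0):
    """Per-band correction = saved - current, clamped to a user-defined
    limit to avoid over-compensation."""
    corrections = []
    for saved, current in zip(saved_levels, current_levels):
        delta = saved - current
        corrections.append(max(-limit_db, min(limit_db, delta)))
    return corrections
```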

(3) Filter 3: The auto volume control (AVL) filter operates in a different manner. An overall volume control is implemented in the hardware unit 102. The volume has two inputs: (i) Input 1, the volume set by the user; and (ii) Input 2, the volume control filter setting received from the electronic device 10. Further, the listening area anomalies monitoring & correction filter generation module 117 monitors the SPL at the listening area in a TWA manner. Each change of 7 to 10 dB of SPL is monitored and set as a new threshold, and a memory stores the SPL level.

When a track change is detected by the audio monitoring application 104, the new SPL level is monitored and compared to the stored level of the earlier track by the listening area anomalies monitoring & correction filter generation module 117. If a level change is detected, a correction filter to cut/boost the volume level is generated and sent to the hardware unit 102 via wireless communication means, maintaining the user-preferred levels by automatically adjusting the second volume input correction filter in real time.

The level of AVL correction by the hardware unit 102 is controlled via the user interface 118 by defining the AVL limit. To avoid over-compensation, the filter setting is configurable per user preference between ±1 dB and ±10 dB, preferably ±3 dB to ±6 dB.
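A hypothetical sketch of the AVL behaviour follows: on track change, the new track's SPL is compared to the stored SPL of the previous track and a volume correction (volume Input 2) is generated, limited to a user-configurable range. The class name, the ±6 dB default limit, and the exact threshold handling are assumptions for illustration.

```python
# Hypothetical sketch of Filter 3 (AVL): compare the new track's SPL to
# the stored SPL and generate a limited volume correction.

class AutoVolumeControl:
    def __init__(self, limit_db=6.0, threshold_db=7.0):
        self.limit_db = limit_db          # user-set AVL limit (assumed +/-6 dB)
        self.threshold_db = threshold_db  # 7-10 dB SPL change sets a new threshold
        self.stored_spl = None            # SPL stored for the previous track

    def on_track_change(self, new_spl):
        """Return the volume correction (Input 2) for the new track."""
        if self.stored_spl is None:
            self.stored_spl = new_spl
            return 0.0
        delta = new_spl - self.stored_spl
        correction = 0.0
        if abs(delta) >= self.threshold_db:
            # Counteract the level change, never beyond the user limit.
            correction = max(-self.limit_db, min(self.limit_db, -delta))
            self.stored_spl = new_spl
        return correction
```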

(4) Filter 4: Real-time physio-acoustics correction.

Physio-acoustics is an emerging field which studies the disturbances experienced by human beings when exposed to certain frequencies.

Studies also provide an exposure threshold level to the range of harsh frequencies.

Many of the findings of physio-acoustics have not yet been applied to older studio recordings, and harsh sub-sonic, infrasonic, and high levels of harsh frequencies may be present in music content, which is harmful for long-term listening. The present invention proposes a real-time way of correcting the same.

To generate the physio-acoustics correction filter, the latest graph of frequency versus audible levels of harsh frequencies is updated and stored in the audio monitoring application 104.

The listening area anomalies monitoring & correction filter generation module 117 samples the levels of the audio output in the range of harsh frequencies, compares them to the stored acceptable-range graph, and generates correction filters to reduce high levels of harsh frequencies to acceptable levels. At fixed intervals, the graphs are split into the Master EQ frequency bands, compared, and analyzed to generate correction filters that are then sent wirelessly to the hardware unit 102 to actively correct for psychoacoustics in real time.

New filters are also generated upon (i) a volume change by the user, (ii) an equalizer settings change by the user, or (iii) a track change detected by the audio monitoring application 104.

Moreover, to avoid over-compensation, the level of cut is limited and can be configured by the user via the user interface 118.
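The Filter 4 comparison against the stored acceptable-range graph can be sketched as follows. The function name, the maximum-cut cap, and the example level values are assumptions; the real graph is the one stored and updated in the audio monitoring application 104.

```python
# Illustrative sketch of Filter 4: compare sampled harsh-frequency band
# levels against a stored acceptable-level graph and generate cuts for
# bands that exceed it, limited by an assumed user-configured maximum.

def psychoacoustic_filter(measured_levels, acceptable_levels, max_cut_db=6.0):
    """Cut each harsh-frequency band down toward its acceptable level,
    never by more than the user-configured maximum cut."""
    cuts = []
    for measured, acceptable in zip(measured_levels, acceptable_levels):
        excess = measured - acceptable
        cuts.append(-min(max_cut_db, excess) if excess > 0 else 0.0)
    return cuts
```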

• Finally, the hardware unit 102 collectively processes the automatic correction filters received wirelessly from electronic device 10 and actively changes frequency levels of the Master EQ bands in real time to enhance audio quality, reduce distortion, maintain user preference equalization & volume levels and correct for psychoacoustics.

Advantageously, the user interface 118 of the audio monitoring application 104 and the quality enhancement modules 112, 113 & 114 provide simple controls for the user to operate and execute complex signal processing: equalization, surround effects, harmonic enhancement, and parametric equalization.

Further, the user interface 118 also facilitates:

• To automatically identify associated hardware executing components 20 & 10 and prompt easy setup of internal & external wireless networking.

• To identify wireless speakers using common wireless protocols and setup wireless streaming.

• To add multiple wireless speakers with individual speakers’ control.

• To set up and control a large network of LAN-connected routers to stream wireless audio to multiple wireless speakers over a large area.

• To be compatible with various wireless streaming protocols like - Spotify etc.

• To be compatible with other voice control and smart home systems like Google Home, Alexa etc.

• Component 4 may be embedded into Android/iOS.

• Easy setup of the latest 9.2.4 home theater configuration that uses multiple (e.g., 15) speakers wirelessly.

Referring to Figures 2A and 2B, the method 200 enhances audio source quality and corrects listening area anomalies. The order in which the method 200 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any appropriate order to carry out the method 200 or an alternative method. Additionally, individual blocks may be deleted from the method 200 without departing from the scope of the subject matter described herein. The method 200 comprises the following steps:

At step 202, the method 200 includes receiving, by an input module 108 of a hardware unit 102, from an audio source 20, an analog audio signal.

At step 204, the method 200 includes converting, by the input module 108, the received analog signal into a digital audio signal.

At step 206, the method 200 includes separating, by a separation unit 110a of an auto remastering module 110, the vocal and music contents into two individual streams.

At step 208, the method 200 includes separating, by the separation unit 110a, the music stream into 1/3rd-octave frequency bands.

At step 210, the method 200 includes performing, by the auto remastering module 110, comparison and level cut operations on the extracted music content relative to the levels of the vocal components, by digitally applying sound engineering techniques, to generate a remastered audio signal.

At step 212, the method 200 includes activating, by the auto remastering module 110 in the presence of vocal frequencies, a karaoke mode, via removal of the vocal components, allowing superimposing of vocal content through a sing-along facility for the user.

At step 214, the method 200 includes correcting, by the auto remastering module 110 in the absence of vocal frequencies, the source intermediate frequency level of the extracted music content in comparison with the average levels of all other frequency bands; a cut is applied as appropriate to the case and per the sound engineering technique applicable to the music content.

At step 216, the method 200 includes processing, by the quality enhancement module 112, the remastered audio signal to enrich it by applying existing enhancement techniques of an audio DSP MCU, harmonics enhancers, surround sound processors, and a parametric equalizer.

At step 218, the method 200 includes up-sampling, by the AI up-sampling module 114, the audio signal using hardware-based, cloud-based, or third-party AI to generate missing music content.

At step 220, the method 200 includes receiving, by an electronic device 10, the generated audio output.

At step 222, the method 200 includes performing, by an audio monitoring application 104 executed in the electronic device 10 or in one or more hardware units, real-time analysis of the audio playback quality.

At step 224, the method 200 includes generating, by the real time filters execution & acoustics correction module 113 implemented in the hardware unit 102, an audio sweep of a plurality of audio signals, wherein the hardware unit 102 receives, via the audio monitoring application 104, the generated output inclusive of changes to the original signal caused by the physical limitations and acoustical properties of the listening area, and performs a series of analyses to generate correction filters that correct for listening area acoustic anomalies.

At step 226, the method 200 includes applying active dynamic equalization and real-time monitoring of the generated output by tandem operation between the listening area anomalies monitoring & correction filter generation module 117 of the electronic device 10 and the real time filters execution & acoustics correction module 113 of the hardware unit 102, automatically adjusting the frequency levels of the Master EQ bands in real time to enhance audio quality, reduce distortion, maintain user-preferred equalization and volume levels, and correct for psychoacoustics, as per the analysis and feedback of listening area monitoring, in real time without latency.

At step 228, the method 200 includes performing, by the real time filters execution & acoustics correction module 113, automatic summation of the Master EQ band filters and implementation of the correction filters.
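The vocal-to-music comparison of steps 206-214 can be sketched as follows, assuming per-band average levels in dB for the separated music stream and a single average vocal level. The "cut down to the vocal level" rule and the smoothing margin are simplifying assumptions for illustration, not the disclosed sound engineering techniques themselves.

```python
# A minimal sketch of the auto-remastering comparison (steps 206-214),
# under assumed inputs: per-band average levels (dB) for the separated
# music stream and an optional average vocal level.

def remaster_cuts(music_band_levels, vocal_level=None, smooth_margin=6.0):
    """Case 1 (vocal present): cut music bands that sit above the vocal
    level. Case 3 (no vocal): smooth bands that are exceptionally higher
    than the average of all bands."""
    if vocal_level is not None:
        # Cut each band down to the vocal level; never boost.
        return [min(0.0, vocal_level - level) for level in music_band_levels]
    average = sum(music_band_levels) / len(music_band_levels)
    # Only bands exceptionally above the average (by the assumed margin)
    # are brought back down to the average level.
    return [average - level if level > average + smooth_margin else 0.0
            for level in music_band_levels]
```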

The foregoing description of the embodiments has been provided for purposes of illustration and is not intended to limit the scope of the present disclosure. Individual components of a particular embodiment are generally not limited to that particular embodiment, but are interchangeable. Such variations are not to be regarded as a departure from the present disclosure, and all such modifications are considered to be within the scope of the present disclosure.

Advantageously, the system 100 of the present disclosure supports karaoke functionality. To enable the karaoke functionality, tracks may be made available to the users with music content only, i.e., without voice. However, such tracks may not be readily available for all songs. To address this issue, the audio monitoring application 104 is configured to provide a vocal suppress feature which allows the user to remove vocal component from songs. This system 100 facilitates the mobile device’s microphone to be used as a karaoke microphone for singing along by the users.

The system further provides easy functionality to social media content creators to create high resolution audio tracks for AV content by using the features of auto remastering - imbalance correction and DSP, harmonics enhancer, equalizers & upscaling.

The input end of the system supports features of a mixing console and digital audio interface, accessed by the user device and mobile-app-based control, to add various instruments, wireless microphones, and synthesizers to enable home studio features.

Conventional audio processing systems perform equalization and apply corrections to the music sources passively by way of applying user-defined or preset equalization, using a purpose-built audio processor (MCU) that is programmed in a certain way to carry out several corrections, using computer-based programs that are configured to carry out corrections to the music sources, or by using harmonics enhancers.

However, the conventional systems are either pre-programmed or user-defined/user- controlled. They do not facilitate real-time monitoring of listening area acoustics, automatic corrections to equalization, and anomaly correction as per the quality of playback.

The system 100 of the present disclosure is an active system that monitors the playback audio in real-time and applies various corrections in real-time to the audio as per listening area interaction.

Referring to Figures 1D-1G, which illustrate a second exemplary embodiment in a block diagram of a system, the process flow discussed below shows one implementation of the system and method detailed above. There are several incremental and interlinked steps of processing. There are multiple ways to implement the system, which is not limited to the steps and configuration below.

The steps of processing are divided into four components with tandem processing and operation between the components. The sequence of steps is variable and should not be limited to the simplified explanation below:

Component 1 -

Component 1 carries out analysis, processing, remastering, and enhancement of the input source in various ways. Further, the input is tracked and tandem operations with Component 3 are carried out to develop real-time correction filters for various generated output distortions and anomalies. Correction of acoustic anomalies of the listening environment is also carried out by this component.

Component 2 -

Carries out real time monitoring of the generated audio output at listening position and operates in tandem with component 1&3.

Component 3 -

Operates in tandem with Component 1 & 2.

Performs real-time analysis and processing of output playback quality. As per the analysis, correction filters are developed to correct for generated output distortion, equalization, psychoacoustics, etc., to enhance listening pleasure, taking into account the limitations of the audio playback equipment and listening environment.

Component 4 is a mobile application providing a user interface to control the various functions and features of Components 1 & 3.

Component 1 - Hardware: an available hardware platform such as a Raspberry Pi or similar SBC with additional off-the-shelf add-on MCU modules, or other similar electronic prototyping platforms.

Component 2 - Microphone for listening area monitoring: the user's mobile phone microphone, an add-on Bluetooth microphone, or a professional microphone/wireless microphone beacon.

Component 3 - User's mobile phone or tablet, smart home control center, or similar devices.

Component 4 - User's mobile phone or tablet, smart home control center, or similar devices.

In an aspect, Components 1, 2, and 3 may be implemented in one hardware device.

In an aspect, Components 2, 3, and 4 may be implemented in one hardware device.

In an aspect, Components 1 and 3 may be implemented in one hardware device.

In an aspect, Components 2 and 3 may be implemented in one hardware device.

Each component may be implemented in a separate device.

The components are configured to exchange data wirelessly and operate in tandem with one another.

❖ Working principle of the 4 components: -

• Component 1 comprises six stages of signal processing elements:

In one embodiment,

o Stage 1. Auto Remastering (110)

Carries out auto remastering of the source by applying principles of sound engineering:

■ Separation of Vocal & Music components to individual streams. (110A)

■ Further split the music stream into 1/3rd-octave frequency bands. (110B)

■ Compare the amplitude levels of each music stream frequency band with the amplitude levels of the male/female vocal frequencies as per the rules below. (110C2B)

- Apply a level CUT to Lo & Hi frequencies that are higher than the vocal frequencies,

OR

apply a uniform smoothening to reduce the levels of any frequency found exceptionally higher than the overall average level of the other frequencies.

- Case 1 - when a vocal frequency is present, apply a user-selectable CUT to the levels of the music frequencies, to a level as per the type of music, for vocal-to-music level balancing. (110C2B1)

- Case 2 - Karaoke - in case the karaoke feature is activated, the vocal frequencies are removed and the user sing-along feature is activated. (110C2A)

- Case 3 - when no vocal frequency is present, a uniform smoothening is applied only in case any frequency is detected at an exceptionally higher amplitude level than the overall average level of all frequencies, to bring the entire audio content to a uniform level. (110C1)

■ The levels of CUT & smoothening will be user-controllable via a mobile interface executed in Component 4. (404)

In another embodiment,

o Stage 2. Quality Enhancement of the source. (112)

In another embodiment, provision is made for integrating existing powerful audio tools such as embedded DSPs, enhancers, parametric equalizers, and surround sound processing by:

■ Employing Audio Digital Signal Processor MCUs to use existing technologies to enhance source quality. (112A)

■ Executing a harmonics enhancer program & an additional MCU/DSP to enhance sound quality. (112B)

■ Implementing a surround sound processor program & an additional MCU/DSP to increase headroom and ambience effects. (112C)

■ Integrating a parametric equalizer & an additional MCU/DSP to make studio-level equalization available to the user. (112D)

■ The user will be able to control the above via a mobile interface executed in Component 4. (404)

In another embodiment, a lower-resolution track is reconstructed into a high-resolution track.

o Stage 3. AI up-sampling (114)

■ By using existing and future open AI platforms, including 3rd-party service providers, to reconstruct a lower-resolution waveform into a high-resolution audio waveform. (114A)

■ Or by using the system's own algorithm and added hardware to reconstruct a lower-resolution waveform into a high-resolution audio waveform.

In yet another embodiment,

o Stage 4. Correction of acoustic anomalies of the listening environment. (116)

An acoustics calibration process to tune and correct for acoustic anomalies resulting from the physical interaction of the generated audio with the listening environment. This is done by:

■ Playback of various audio test signals stored in the system. (116A)

■ Sampling the quality of playback at the listening area by Component 2 placed at the listening position. (116B)

■ Analysis and processing of the generated output quality of the test signal. (116C)

■ Generation of correction filters to correct for room/listening-area acoustic anomalies. (116D)

This feature is used during initial setup or in case there have been physical changes to listening area. The corrections are stored and applied during audio playback.

The selected acoustics correction filter is applied to all music content until the next calibration.

■ The user will be able to control the above listening area acoustic correction feature.

Multiple correction filters are generated and saved; the user will be able to choose the preferred correction filter (116E) via the user interface executed in the mobile device, Component 4. (404)

In an embodiment, real-time monitoring of the generated output corrects for audio equipment limitations and distortion, including maintaining the user-set tonal balance, equalization, and volume of all audio sources.

o Stage 5. Generation of listening area correction filters by tandem operation between Components 1 & 3 (118)

■ Executes a program that splits the audio signal after Stage 4* into frequency bands, named the Master Auto Eq bands. (118A)

■ Tracking of the input and carrying out various analyses and functions to develop real-time listening area anomalies correction filters. (118-B,C,D,E)

o Stage 6. The Master Auto Eq band filter settings are automatically executed, and the overall equalization is adjusted as per the summation of the filter set points received from Component 3.

- A filter contains information on level set points along with adjustments to be made automatically by Component 1.

- For example, to implement the Master Auto Eq, the audio source may be split into 10 bands, the level setting of each band ranging from a +10 dB boost through 0 dB to a -15 dB cut.

- When no boost/cut is applied, the filter setting data set = 0, 0, 0, 0 ... x 10 bands.

- To boost the 1st band by 3 dB, the filter setting data set would be +3, 0, 0, 0 ... x 10 bands.

- Various filter formats may be used to automatically implement the corrections received from Component 3.
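The Stage 6 summation may be sketched as follows; the comma-separated encoding of the "filter setting data set" and the clamping of the summed result to the +10 dB / -15 dB range are assumptions based on the examples above.

```python
# A sketch of the Stage 6 behaviour: Component 1 receives several
# correction filters from Component 3 and executes their summation as
# the overall Master Auto Eq setting. Encoding and limits are assumed.

NUM_BANDS = 10

def parse_filter(data_set):
    """Parse a 'filter setting data set' string such as
    '+3,0,0,0,0,0,0,0,0,0' into a list of per-band set points."""
    return [float(value) for value in data_set.split(",")]

def master_auto_eq(filters, boost_limit=10.0, cut_limit=-15.0):
    """Sum the per-band set points of every received filter and clamp
    the result to the Master Auto Eq boost/cut range."""
    totals = [0.0] * NUM_BANDS
    for f in filters:
        for i, setpoint in enumerate(f):
            totals[i] += setpoint
    return [max(cut_limit, min(boost_limit, total)) for total in totals]
```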

• Component 2.

Monitors the quality of the generated output at the listening position in real time and operates in tandem with Components 1 & 3.

■ Component 2 processes data from microphones and sensors to monitor the generated output in real time and transfers this data to Components 1 & 3 for further analysis.

■ Components 2, 1, and 3 work in tandem to monitor the generated output quality at the listening position.

■ In case Components 2 & 3 are executed in one hardware device, a feature will be executed to identify whether the hardware device is in the listening position. In case the hardware is located out of the listening position, the functioning of Component 2 will be paused automatically.

• Component 3 analyzes the quality of the generated audio output in real time and produces correction filters to improve the quality of the generated output in four steps: (302)

Component 3 operates in tandem with Component 1. Component 3 analyzes the generated output and develops correction filters with feedback from Component 1, which tracks and analyzes the input signal. Summation and implementation of the correction filters is carried out by Component 1.

" Step 1 - Distortion correction fdter (118B)

■ By Component 1: at intervals, sample the input waveform after Stage 4*, convert it to a frequency-vs-level graph, and send it to 304B.

■ By Component 3

- The input waveform from Component 1 is compared with the output waveform received via Component 2 and is analyzed for smoothness of the generated output waveform.

Further, the waveforms are split into their frequency components as per the number of Master Auto Eq bands. (304A,B)

- The differences between the waveforms are compared and analyzed for smoothness and waveform distortion between input and output. (304C1)

- Correction filters are developed to reduce the levels of frequencies that are jagged/distorted in the output waveform, bringing them to acceptable levels of distortion. (304C2)

The level of correction will depend on the volume settings, overload of the audio equipment in use, the physical nature of the listening space, etc.

o A correction filter is generated and sent to Component 1 as per the frequencies identified with a distorted waveform; for example, in case of a 10-way Master Auto Eq* split, the correction filter generated will be 0, 0, 0, 0 ... x 10 for no correction. (304C2)

■ Component 1 receives the correction filter from Component 3 and implements it. (118B1)

- To avoid over-compensation, waveforms will be compared and new filters generated upon: (118BA1)

o 1. Regular intervals

o 2. Volume changes by the user

o 3. Equalizer settings changes by the user

o 4. Track change.

■ The level of distortion correction and associated compensation can be limited and controlled by the user via Component 4. (404)

■ Step 2 - Auto bass/treble filter.

■ By Component 3

- Upon a change to the bass/treble settings by the user, tracked by Component 1 (118CA2), Component 3 saves the levels of bass and treble of the generated output received via Component 2, and the waveform is split into frequency components as per the number of Master Auto Eq bands. (306A)

- At intervals, the present playback level is sampled and the waveform split into frequency components as per the number of Master Auto Eq bands. (306B)

- The saved and sampled waveforms are compared, and correction filters are generated to apply a cut/boost to bring the output levels to the saved levels. (306C1)

- A correction filter is generated as per the frequencies identified with higher levels. (306C2)

o For example, in case of a 10-way overall split, a filter of -3, -2, 0, 0, 0 ... x 10 corresponds to an auto low-frequency correction applying a -3 dB cut to the 1st band and a -2 dB cut to the 2nd band.

■ Component 1 receives the correction filter from Component 3 and implements it. (118C1)

- Execution of a new filter will be carried out when one of the following is detected: (118CA1)

- Volume change by the user

- Equalizer settings change by the user

- Track change.

■ The level of compensation will be controllable by the user via Component 4. (404)

■ Step 3 - Auto volume control filter.

■ By Component 1: the volume control set point is split into two inputs:

- Input 1: volume set by the user. (118D1) (404)

- Input 2: volume control set point as per the correction filter received from Component 3.

■ By Component 3

- Processes and saves the generated output SPL received via Component 2 and splits it into about 20 set points between 40 dB and 140 dB, in 5 dB steps, each with an identifier.

- Further samples and saves the average peak SPL of the generated output at fixed intervals. Each change of about 5 dB of SPL is set as a new threshold. (308A)

- When a track change is detected, the ongoing average SPL is sampled (308B), the difference between the saved SPL of the earlier track and the present SPL is compared, and a volume correction filter is generated. (308C1)

- For example, if the difference between the stored SPL and the present SPL = +10 dB, then the correction filter would be -2, with an identifier. (308C2)

■ Component 1

- Monitors the input and processes track changes. (118DA1)

- Volume input 2 receives the volume set point correction filter from Component 3 and changes the volume setting automatically to the saved SPL levels of the earlier track. (118D2,3)

■ Component 4

The level of AVL compensation sensitivity can be set by the user in Component 4. (404)

■ Step 4 - Psychoacoustics correction filter.

Foreword -

It has recently been found that lower-frequency sounds cause annoyance. Infrasonic material/noise may be present in earlier sources, where the effects of low frequencies were not known.

Low frequencies are attenuated by room size, placement of speakers, etc. The listening area may acoustically amplify low-frequency sounds, causing irritation and listening fatigue.

Earlier recordings may not have been corrected for infrasound frequencies and other harsh frequencies.

Studies are presently being carried out, and it has been found that sounds between 40 and 80 Hz are perceived as harsh/annoying.

Further, at low levels, the bandwidth curve of human auditory response from 20 Hz to 20 kHz is not flat. Several frequencies at each end of the human auditory range require higher levels for minimum audibility. Perception of low and high frequencies requires a significant level boost compared to the midrange frequencies.

■ Component 3

- The latest psychoacoustics graph (updated OTA) is stored in the Component 3 repository and split into its frequency components as per the number of Master Auto Eq bands. (310A)

- At fixed intervals, or during a track change or input equalization change, the dB/frequency of the generated output is sampled and split into its frequency components as per the number of Master Auto Eq bands. (310B)

- The differences between the waveforms are compared, and correction filters are developed to reduce the levels of generated output frequencies that are higher than the stored waveform. (310C1)

- A correction filter is generated and sent to Component 1 as per the frequencies identified with higher levels. For example, in case the 1st Lo frequency band is found to be at a higher level by 4 dB, then, with a 10-way Master Auto Eq split, the correction filter generated will be -4, 0, 0, 0 ... x 10 to implement a CUT of 4 dB in the 1st band. (310C2)

■ Component 1 receives the correction filter from Component 3 and implements it. (118E1)

- Execution of a new filter will be carried out when one of the following is detected: (118CA1)

- Volume change by the user

- Equalizer settings change by the user

- Track change.

■ Similarly to the above process, auto loudness is achieved at low playback levels by comparing the generated output to the frequency levels required for a minimal level of perception per the human auditory response; a boost is applied to frequencies that require higher sound pressure levels to be audible.

■ To avoid over-compensation, the extent of correction can be user-controlled and configured via Component 4. (404)
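The auto-loudness behaviour described above can be sketched as a per-band boost toward an assumed minimum-audibility curve. The threshold values and the boost cap are invented placeholders; the real curve follows the human auditory response data stored in Component 3.

```python
# Illustrative sketch of the auto-loudness step: at low playback levels,
# boost bands whose level falls below an assumed minimum-audibility
# threshold for that band. Threshold values are placeholders.

def auto_loudness_boost(band_levels, audibility_thresholds, max_boost_db=10.0):
    """Boost each band up toward its minimum audible level, capped at a
    user-configurable maximum boost to avoid over-compensation."""
    boosts = []
    for level, threshold in zip(band_levels, audibility_thresholds):
        shortfall = threshold - level
        boosts.append(min(max_boost_db, shortfall) if shortfall > 0 else 0.0)
    return boosts
```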

• Component 4: User interface incorporated in the user's mobile device.

Component 4 comprises an easy-to-use, guided user interface to control the various functions and features of Components 1 & 3. (404) The interface will be designed to provide user-friendly, guided, interactive operating instructions for utilizing, in a simple manner, the powerful audio tools executed in Components 1 & 3.

Component 4 can be executed in the following ways:

- In the user's personal mobile device.

- In a dedicated mobile or tablet kept at the listening area.

- In a mobile or tablet where other smart home applications are executed.

Features of Component 4 application: - (404 A,B,C,D,E)

■ Will enable plug & play of voice assistant and smart home features of Google Home, Alexa, Siri, etc.

■ Will be able to automatically identify all partner and associated hardware executing components 1,2,3 and prompt easy setup of internal & external wireless networking.

■ Will be able to identify wireless speakers using common wireless protocols and setup wireless streaming.

■ Will be able to set up and control a large network of LAN-connected routers to stream wireless audio to multiple wireless speakers over a large area.

■ Will be compatible with various wireless streaming protocols like - Spotify etc.

■ Component 4 may be embedded into Android/iOS & other OSs.


The system 102, along with systems 202, 302 & 402 of the present disclosure, is an active system that monitors the playback audio in real time and applies various corrections in real time to the audio as per the listening area interaction.

Alternative ways of implementing the present invention are as follows:

1. Wireless - The audio monitoring application executed in the wireless hardware beacon Component 2 or mobile or the electronic device Component 2, 3&4 can be replaced with purpose-built hardware with wireless communication capability. The hardware can be placed at the listening area to actively monitor listening area acoustics and generate filters to automatically adjust the source playback anomalies.

2. Wired - The audio monitoring application executed in the wireless hardware beacon Component 2 or mobile or the electronic device Component 2, 3&4 can be replaced with a wired microphone or any other hardware device that can be placed at the listening area to actively monitor acoustics and generate filters to automatically adjust the source playback anomalies.

3. The audio monitoring application executed in the wireless hardware beacon Component 2 or mobile or the electronic device Components 2, 3 & 4 can be replaced with an embedded Digital Signal Processor (DSP) with a microphone input to monitor listening area acoustics and carry out automatic corrections to the audio source 20.

4. The audio monitoring application executed in the wireless hardware beacon Component 2 or mobile or the electronic device Component 2, 3&4 can be replaced with any other existing device with a microphone re-purposed to achieve the above function for e.g., a laptop, personal computer, or other electronic device.

The invention can be employed for sound improvement without change of amplifiers, sound improvement to existing sound systems, in smart homes, for home theatre improvement, with smart phones, in cloud processing and servers, with online audio sources such as YouTube, music applications, and streaming applications, with recorded audio sources such as tapes, television, and personal computers, in car audio systems, with various speakers such as Bluetooth and Wi-Fi speakers, with headphones and microphones, in auditoriums, movie halls, night clubs, stadiums, open theatres or open auditoriums, karaoke & sing-along systems, by bands, during live performances, and the like.

The input end of the system supports features of a mixing console & digital audio interface accessed by user device and mobile app based control to add various instruments, wireless microphones, and synthesizers to enable home studio features.
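The home-studio mixing described above, summing multiple instrument and microphone inputs under app-controlled per-channel gains, can be illustrated with a toy mix bus. A minimal sketch, assuming NumPy; the function names and the hard limit at ±1.0 are illustrative choices, not taken from the disclosure.

```python
import numpy as np

def db_to_linear(db):
    """Convert a gain in decibels to a linear amplitude factor."""
    return 10.0 ** (db / 20.0)

def mix_channels(channels, gains_db):
    """Sum input channels (e.g. microphone, instrument, synthesizer feeds)
    after applying each channel's gain, then limit the bus to [-1, 1]."""
    bus = np.zeros_like(channels[0], dtype=float)
    for signal, gain_db in zip(channels, gains_db):
        bus += db_to_linear(gain_db) * np.asarray(signal, dtype=float)
    return np.clip(bus, -1.0, 1.0)
```

The hard clip stands in for whatever limiter a real mix bus would apply; the point is only that each added source carries its own user-set gain before summation.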

TECHNICAL ADVANCEMENTS

The present disclosure described herein above has several technical advantages including, but not limited to, the realization of a system, a device and a method for audio enhancement and automatic correction of vocals to music & tonal imbalances that:

• apply principles of sound engineering to correct poor source quality, to try and achieve original music composition by way of intermediate frequency levelling and corrections;

• perform real time monitoring and corrections without latency by Master equalization feedback correction filters allowing direct pass through of audio and applying necessary corrections digitally after analysis of digital data from listening position;

• perform real-time monitoring of playback quality and correct for the inherent distortion of source and audio playback equipment;

• perform real-time analysis of intermediate levels of audio spectrum playback levels and perform corrections to levels of intermediate frequencies as per the user’s preference including being able to facilitate correction of frequency level and intermediate balance between frequencies;

• monitor distortion and audio equipment overload and automatically apply real-time equalization to bass, treble, volume, and intermediate frequency settings to achieve optimized distortion-free playback from any audio equipment;

• perform advanced acoustics tests by way of a plug and play system in an easy manner by mobile app interface to correct for room acoustics;

• perform real-time monitoring and correction of uncomfortable levels of harsh frequencies as per the latest psychoacoustics studies;

• employ a Digital Signal Processor (DSP) harmonics enhancer and surround sound processor with AI-based artificial remastering and missing content regeneration to generate a high resolution audio output;

• is a plug and play hardware design with mobile phone user interface and control;

• provides easy functionality to social media content creators to create high resolution audio tracks for AV content by using the features of auto remastering - imbalance correction and DSP, harmonics enhancer, equalizers & upscaling;

• the input end of the system supports features of a mixing console & digital audio interface accessed by user device and mobile app based control to add various instruments, wireless microphones, and synthesizers to enable home studio features;

• provides for easy broadcast and public addressing by enabling the mobile phone mic to be used as a microphone;

• provides for karaoke along with adding instruments and home studio features controlled by mobile app, by enabling a mixing console and the ability to add multiple wireless / analog / digital microphones, inputs & instruments to create content easily;

• correct several listening challenges in car audio; the acoustic properties in a car are very unique & different and present many challenges. To address these, purpose-built car audio DSPs are used that require advanced skill in installation and calibration by professional sound engineers. This system proposes a plug and play device / an additional output module to be added to car head units to correct for acoustic anomalies and upscale car radio & music content, including real time monitoring, and to enhance the generated audio quality in real time by correcting for distortion arising from limitations of playback speakers, maintaining equalization & tonal balance and volume levels as per user preferences, including reducing harsh frequencies by adopting psychoacoustics corrections;

• provides for auto loudness control by real time monitoring of frequency levels and boosting levels as per the minimum level required for human perception during low volume playback, to maintain a frequency response aligned with the realm of human auditory response;

• provides for reduced listening fatigue during D.J. playback and in clubs, by automatically reducing patrons' exposure to high levels of distortion;

• provides for real time distortion correction in large open air venues and stadiums and automatically reduces exposure to high levels of disturbing frequencies and distortion;

• adapt plug & play DSP in car audio to control frequency bandwidth and level with several Hi-Pass, Band-Pass, and Lo-Pass filters to maintain stereo image and balance, enabling several amplifiers and speakers to be added at different locations to produce a single coherent playback in unison;

• enable set up of several crossovers / amplifier & speaker outputs by the user's device in easy guided interactive steps for car audio;

• enable set up of several crossovers / amplifier & speaker outputs by the user's device in easy guided interactive steps for professional audio;

• enable set up of several crossovers / amplifier & speaker outputs by the user's device in easy guided interactive steps for home audio;

• enable control of several network speakers by the user's device in easy guided interactive steps in professional audio & large venues;

• exhibit an in-built feature of wireless streaming to wireless speakers and a user interface to add and control multiple wireless speakers; and

• use an electronic device based user interface to control the features in a simplistic manner, including features to set up multiple wireless audio speakers.

LIST OF REFERENCE NUMERALS

10 - Electronic Device

20 - Audio Source

100 - System

102 - Hardware Unit

104 - Audio Monitoring Application

106 - Repository

108 - Input Module

110 - Auto Remastering Module

110a - Separation Unit

110b - Comparing Unit

110c - Level Cutting Unit

112 - Quality Enhancement Module

113 - Real Time Filters Execution & Acoustics Correction Module

114 - Al Up Sampling Module

117 - Listening Area Distortion Anomalies Monitoring & Correction Filter Generation Module

118 - User Interface

120 - Hardware Beacon

1000 - System

2000 - Method

3000 - Device

1002 - Second Repository

1004 - Inputs Module

1006 - Processing Engine

1008 - Signal Conversion Module

1010 - First Processing Engine

1010a - Audio Bifurcation Module

1010b - Auto-Remastering Module

1010c - Equalizing Circuit

1010d - High Resolution Audio Module

1010e - Listening Area Acoustic Correction Module

1010ea - Test Trigger Module

1010eb - Track Comparison Module

1010ec - Result Sending Module

1010ed - Filter Exchange Module

1012 - Second Processing Unit

1012a - Real time listening area monitoring & generation of quality enhancement correction filters

1012b - Master Splitter Module

1012c - Distortion Correction Filter

1012d - Automatic Bass Treble Levelling Module

1012e - Auto Volume Leveling Module

1012f - Psychoacoustics Correction Module

1012g - Automatic Summation and Implementation of Correction Filter Module

1014 - Third Processing Module

1016 - Quality enhancement Unit

1018 - Output Module

Equivalents

The embodiments herein and the various features and advantageous details thereof are explained with reference to the non-limiting embodiments in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.

The foregoing description of the specific embodiments so fully reveals the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.

The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the disclosure to achieve one or more of the desired objects or results.

Any discussion of documents, acts, materials, devices, articles or the like that has been included in this specification is solely for the purpose of providing a context for the disclosure. It is not to be taken as an admission that any or all of these matters form a part of the prior art base or were common general knowledge in the field relevant to the disclosure as it existed anywhere before the priority date of this application.

While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiment as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.