Everyone agrees that this is a digital age, and many people work tirelessly in pursuit of excellent sound quality. With the arrival of the digital age, it is widely accepted that digital audio is superior to analog. So what is an analog signal? Any sound that reaches us through an audio cable or a microphone is a series of analog signals; the analog signal is what we can actually hear. A digital signal, by contrast, records sound as a series of numbers rather than preserving it by physical means (recording onto ordinary tape is a physical method). We cannot hear a digital signal directly.

In this way, we can briefly compare recording production in the analog era with that of the digital age. In the analog era, the original signal was physically recorded onto tape (in the studio, of course), then processed, spliced, and modified, and finally transferred to tape, LP, or another carrier that listeners could enjoy. Every step in that chain is analog, and every step loses part of the signal, so what reaches the listener is naturally far from the original, let alone hi-fi. In the digital age, the first step is to record the original signal as digital audio material, which is then processed in hardware or in software. This process is superior to the analog method because it is nearly lossless: to the machine it is simply a matter of manipulating numbers. Data corruption is possible, of course, but it will not happen as long as the work is done carefully. Finally, the digital signal is transferred to a digital medium such as a CD, and the overall loss is naturally much smaller.
If we look at the CDs around us, we will see codes such as ADD, AAD, and DDD on many of them. The three letters indicate whether analog or digital equipment was used in each of the three stages of production: recording, editing/mixing, and mastering the finished product. A stands for analog and D for digital. AAD means the recording and editing were analog and only the final master is digital; most such discs are older recordings transferred to CD without further modification. ADD includes a digital editing stage: many performances by the great classical soloists and conductors were recorded in the analog era, and the CDs we hear today were remastered digitally, so many of those discs carry the ADD mark. A DDD disc is necessarily a more modern recording. Since the CD itself is a digital carrier, the code must end with D; by the same logic a cassette could be labeled AAA, although that designation does not seem to be used.
Digital audio, then, is a way for us to store and transmit sound signals, and its defining characteristic is that the signal is not easily lost. The analog signal is what we ultimately hear, but editing an analog signal is a disaster: the losses are simply too great. The reclusive Glenn Gould would be astonished if he were alive today. A digital copy, on the other hand, loses nothing even after a hundred generations; if you don't believe it, try copying a WAVE file a hundred times.
The most critical step in digital recording is converting the analog signal into a digital one. On a computer, the analog sound signal is captured as a Wave file. Even the Sound Recorder that comes with Windows can do this, but its functions are very limited and cannot meet our needs, so we use professional audio software such as Sound Forge instead. The recorded file is a Wave file, and a Wave file is mainly described by two figures: the sampling rate and the bit depth. These are two very important concepts in digital audio production, so let's look at them in turn.
What is the sampling rate? Because a Wave file is a digital signal, it describes the original analog signal with a series of numbers, so the original analog signal must first be analyzed. Every sound has a waveform, and digitizing means "taking points" on that waveform at regular intervals and giving each point a value; this is sampling. Connecting all of those points then describes the original analog signal. Obviously, the more points taken in a given period of time, the more accurately the waveform is described, and this rate is what we call the sampling rate. The most common sampling rate is 44.1 kHz, meaning 44,100 samples are taken every second. This value is used because, after repeated experiments, people found it to be the most appropriate: below it the loss becomes clearly audible, while above it the ear can hardly tell the difference and the audio simply takes up more space.
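To make "taking 44,100 points per second" concrete, here is a minimal Python sketch that generates one second of a 440 Hz test tone sampled at 44.1 kHz and writes it as a 16-bit Wave file using only the standard library. The file name and the test frequency are arbitrary choices for illustration, not anything prescribed by the text.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100   # 44,100 "points" taken per second (the CD standard)
BIT_DEPTH   = 16      # each point stored as a 16-bit signed integer
FREQ        = 440.0   # hypothetical test tone (A4)
DURATION    = 1.0     # seconds

frames = bytearray()
amplitude = 2 ** (BIT_DEPTH - 1) - 1            # 32767 for 16-bit audio
for n in range(int(SAMPLE_RATE * DURATION)):
    t = n / SAMPLE_RATE                          # time of this sample point
    value = int(amplitude * math.sin(2 * math.pi * FREQ * t))
    frames += struct.pack("<h", value)           # little-endian 16-bit sample

with wave.open("tone_44k1.wav", "wb") as wav:
    wav.setnchannels(1)                          # mono
    wav.setsampwidth(BIT_DEPTH // 8)             # bytes per sample
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(bytes(frames))
```

Halving SAMPLE_RATE in this sketch would take only 22,050 points per second, and the waveform would be described correspondingly more coarsely.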
To be "extra accurate", sampling rates of 48 kHz or even 96 kHz are also used. In fact, the difference between 96 kHz and 44.1 kHz is nowhere near as large as the difference between 44.1 kHz and 22 kHz. The sampling standard for CD is 44.1 kHz, and it is still the most widespread standard today. Some believe 96 kHz will be the future trend of the recording industry, and a higher sampling rate should in principle be a good thing; but sometimes I wonder: can we really hear the difference between music made at 96 kHz and music made at 44.1 kHz? Can the equipment in an ordinary home reveal that difference?
Bit depth is another term you hear all the time: digital recordings are generally made at 16, 20, or 24 bits. What is a "bit"? Sound can be loud or soft, and the physical quantity behind loudness is amplitude. A digital recording must also represent the loud and soft passages of the music accurately, so it needs a precise unit for describing the amplitude of the waveform. Sixteen bits means the amplitude range is divided into 2^16 = 65,536 levels, so the level of the analog signal at each moment can be represented by a number. As with the sampling rate, the higher the bit depth, the more finely the dynamics of the music can be captured. Twenty bits gives 2^20 = 1,048,576 levels, which is no problem even for highly dynamic symphonic music. I just used the term "dynamics", which refers to the contrast between the loudest and softest parts of a piece of music. We also often speak of "dynamic range", measured in dB, and in recording the usable dynamic range is tightly bound to the bit depth: if we use a very low bit depth, we have only a few levels with which to describe the strength of the sound, and of course we cannot hear the contrast between loud and soft. The relationship is simple: every additional bit adds about 6 dB of dynamic range. With 1-bit recording the dynamic range would be only 6 dB, and the music would be unlistenable. At 16 bits the dynamic range is 96 dB, which meets ordinary needs; at 20 bits it is 120 dB, enough to handle even a high-contrast symphony with ease. Audiophile-grade engineers also use 24 bits, but, as with the sampling rate, the change from 20 bits is not dramatic. In theory 24 bits allows a dynamic range of 144 dB, but this is hard to achieve in practice because every device inevitably produces noise; at least at this stage, 24 bits struggles to deliver its theoretical benefit.
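The numbers quoted above follow directly from the rule of thumb that each bit adds roughly 6 dB of dynamic range (more precisely, 20·log10(2) ≈ 6.02 dB per bit). A short Python sketch to verify them:

```python
import math

# Rough rule of thumb used above: each extra bit adds about 6 dB of dynamic range.
for bits in (1, 16, 20, 24):
    levels = 2 ** bits                      # number of amplitude levels
    dyn_range = 20 * math.log10(levels)     # theoretical dynamic range in dB
    print(f"{bits:>2} bits -> {levels:>10,} levels, ~{dyn_range:.0f} dB")
```

Running this prints 2 levels and ~6 dB for 1 bit, 65,536 levels and ~96 dB for 16 bits, 1,048,576 levels and ~120 dB for 20 bits, and ~144 dB for 24 bits, matching the figures above.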
Audio processing
I. Digital processing of audio media. With the development of computer technology, and especially with mass-storage devices and large amounts of memory available on the PC, digital processing of audio media has become possible. The core of digital processing is sampling the audio information; the collected samples are then processed to achieve various effects. That is the basic meaning of digital processing of audio media.
II. Basic processing of audio media. Basic digital audio processing includes the following:
Transformation and conversion between different sampling rates, frequencies, and numbers of channels. A transformation simply treats the data as another format, while a conversion is done by resampling, where an interpolation algorithm can also be used as needed to compensate for distortion.
Transformations applied to the audio data itself, such as fade-in, fade-out, and volume adjustment.
Transformations performed with digital filtering algorithms, such as high-pass and low-pass filters. (A rough code sketch of these basic operations follows below.)
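As promised above, here is a minimal Python sketch of these three kinds of operations, assuming NumPy is available. The function names (fade, low_pass, resample_linear) are my own, and the one-pole filter and linear-interpolation resampler are deliberately crude stand-ins for the more sophisticated algorithms real audio software uses.

```python
import numpy as np

def fade(samples: np.ndarray, fade_len: int) -> np.ndarray:
    """Apply a linear fade-in and fade-out over the first/last fade_len samples."""
    out = samples.astype(np.float64)
    ramp = np.linspace(0.0, 1.0, fade_len)
    out[:fade_len]  *= ramp           # fade in
    out[-fade_len:] *= ramp[::-1]     # fade out
    return out

def low_pass(samples: np.ndarray, sample_rate: int, cutoff_hz: float) -> np.ndarray:
    """One-pole low-pass filter: a very simple example of digital filtering."""
    dt = 1.0 / sample_rate
    rc = 1.0 / (2 * np.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    out = np.empty(len(samples), dtype=np.float64)
    out[0] = samples[0]
    for n in range(1, len(samples)):
        out[n] = out[n - 1] + alpha * (samples[n] - out[n - 1])
    return out

def resample_linear(samples: np.ndarray, src_rate: int, dst_rate: int) -> np.ndarray:
    """Convert between sampling rates by linear interpolation (a crude resampler)."""
    duration = len(samples) / src_rate
    dst_times = np.arange(int(duration * dst_rate)) / dst_rate
    src_times = np.arange(len(samples)) / src_rate
    return np.interp(dst_times, src_times, samples)
```

Volume adjustment is the simplest case of all: multiplying the sample array by a constant gain factor.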
III. Three-dimensional processing of audio media. For a long time, computer researchers underestimated the role of sound in human information processing. As virtual-reality technology continues to evolve, people are no longer satisfied with flat, monotonous sound; they want a three-dimensional sound effect with a sense of space. The auditory channel works in parallel with the visual channel, so three-dimensional processing of sound can not only express the spatial information of a sound source but also combine with multiple channels of visual information to create an extremely realistic virtual space, which will be very important in future multimedia systems. It is also an important part of media processing.
The most basic theory of how humans perceive the location of a sound source is the duplex theory, which rests on two factors: the difference in the time at which the sound arrives at each ear, and the difference in its intensity between the ears. The time difference comes from distance: when the sound arrives from directly ahead, the distances to both ears are equal and there is no time difference, but if the source is only three degrees to the right, the sound reaches the right ear roughly 30 microseconds earlier than the left ear, and those 30 microseconds are enough for us to identify the location of the source. The intensity difference is caused by attenuation of the signal, whether from distance alone or from the shadowing of the head; the ear on the side of the source hears the sound at a greater intensity than the other ear.
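To illustrate the time-difference factor, the following Python sketch estimates the interaural time difference with Woodworth's spherical-head approximation. The text does not specify this model, so treat it as one common textbook choice; the head radius and speed of sound are assumed values.

```python
import math

HEAD_RADIUS = 0.0875      # metres, an assumed average head radius
SPEED_OF_SOUND = 343.0    # metres per second, assumed room temperature

def interaural_time_difference(azimuth_deg: float) -> float:
    """Woodworth spherical-head estimate of the arrival-time difference (seconds)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source only three degrees off centre already gives a difference of a few
# tens of microseconds, in line with the ~30 microsecond figure quoted above.
print(f"{interaural_time_difference(3.0) * 1e6:.0f} microseconds")
```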
Based on the duplex theory, an ordinary two-channel recording can likewise be given a three-dimensional sound-field effect by mixing the two channels into each other. This involves two concepts about the sound field: its width and its depth.
The width of the sound field is created using the time-difference principle. Since what we are expanding is an ordinary stereo signal, the sound source always sits in the middle of the sound field, which simplifies the work: all that needs to be done is to mix each channel's sound into the other with an appropriate delay and intensity. There is one limit, however: the delay cannot be too long, or it turns into an audible echo.
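A minimal sketch of this idea, assuming NumPy and two equal-length channel arrays; the function name, the default delay, and the cross-mix gain are illustrative assumptions rather than values from the text.

```python
import numpy as np

def widen_stereo(left: np.ndarray, right: np.ndarray, sample_rate: int,
                 delay_ms: float = 15.0, cross_gain: float = 0.3):
    """Mix a delayed, attenuated copy of each channel into the other channel.

    delay_ms should stay short; if the delay grows too long, the delayed copy
    is heard as a separate echo instead of added width.
    """
    delay = int(sample_rate * delay_ms / 1000.0)
    pad = np.zeros(delay)
    left_delayed  = np.concatenate([pad, left])[:len(left)]
    right_delayed = np.concatenate([pad, right])[:len(right)]
    wide_left  = left  + cross_gain * right_delayed
    wide_right = right + cross_gain * left_delayed
    return wide_left, wide_right
```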
The depth of the sound field is created using the intensity-difference principle, and its concrete expression is echo: the deeper the sound field, the longer the echo's delay. An echo effect should therefore provide at least three parameters: the attenuation rate of the echoes, the depth (how much echo is mixed in), and the delay between successive echoes. It should also offer an option to set how much of the sound from the other channel is mixed in.
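Here is one possible sketch of such an echo effect in Python/NumPy, exposing the three parameters described above; the function name and the default values are illustrative assumptions.

```python
import numpy as np

def add_echo(samples: np.ndarray, sample_rate: int,
             delay_ms: float = 250.0,   # delay between successive echoes
             decay: float = 0.5,        # attenuation rate of each repeat
             depth: float = 0.4) -> np.ndarray:
    """Feedback echo: each repeat is 'decay' times quieter than the previous one.

    'depth' sets how much of the echoed signal is mixed with the dry signal;
    a longer delay_ms suggests a deeper, more distant sound field.
    """
    delay = int(sample_rate * delay_ms / 1000.0)
    out = samples.astype(np.float64)
    for n in range(delay, len(out)):
        out[n] += decay * out[n - delay]   # feedback line builds repeated echoes
    return (1.0 - depth) * samples + depth * out
```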
