Digital signal processors (DSPs) offer excellent multimedia performance, typically requiring only 40% to 50% of the cycles a general-purpose processor (GPP) core needs to run a codec. DSPs also offer far greater flexibility and reconfigurability than ASICs. Until now, however, using a DSP in digital video applications has required programmers to invest time in learning specialized languages. With the advent of suitable application programming interfaces (APIs), learning these specialized DSP languages is no longer necessary: applications running on the GPP can take full advantage of the DSP through the API.

Open source multimedia frameworks, which generally run under the Linux operating system on a GPP, are an ideal target for these APIs. The API can be used to offload the computational load of the video codec to the DSP while greatly reducing the complexity of DSP programming. The solution requires only basic DSP knowledge from the programmer, and no code has to be written to integrate the DSP functions with those running on the GPP. This advantage, combined with the many features provided by free open source plug-ins and frameworks, can significantly reduce time-to-market for new video products.

Hardware platform selection

Developers have several options when choosing a hardware platform to run the codec that compresses the digital stream for transfer or storage and decompresses it for viewing or editing. ASICs designed specifically for digital video deliver high performance and low power consumption in such applications. Their drawback is expensive non-recurring engineering (NRE) costs. In addition, if the ASIC must change, for example to track an evolving codec standard, the cost of reimplementation is very high.

GPP cores, on the other hand, carry relatively low development costs, and reprogramming them to accommodate changes is quite easy. However, they are inefficient at the computationally intensive signal processing that digital video demands. For example, a GPP may implement multiplication as a series of shift and add operations, with each shift and add taking one or more clock cycles.

DSPs can combine the advantages of both. Unlike GPPs, DSPs are optimized for the computationally intensive signal processing at the heart of digital video applications. They have single-cycle multipliers or multiply-accumulate (MAC) units to speed up execution of codec algorithms. Higher-performance DSPs also include several independent execution units that operate in parallel, allowing them to perform several operations per instruction. In addition, DSPs are fully software-programmable, including reprogramming in the field, which lets users ship an MPEG-2 product first and later upgrade it to an H.264 video codec. The main limitation of DSPs in digital video applications is that they have typically required programming in a proprietary language, and far fewer programmers are familiar with DSPs than with popular GPP architectures.
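To see why the MAC unit matters, consider the multiply-accumulate inner loop that dominates codec kernels such as FIR filters and transforms. The C sketch below is illustrative only (the function and its parameters are not drawn from any particular codec); a DSP retires each iteration's multiply and add in a single cycle, while a GPP without a MAC unit may need many cycles per tap.

    #include <stdint.h>

    /* One output sample of an FIR filter: the multiply-accumulate
     * pattern that a DSP's single-cycle MAC unit accelerates. */
    int32_t fir_sample(const int16_t *coeffs, const int16_t *samples, int taps)
    {
        int32_t acc = 0;
        for (int i = 0; i < taps; i++)
            acc += (int32_t)coeffs[i] * samples[i];   /* one MAC per tap */
        return acc;
    }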

Figure 1: Multimedia framework responsibilities and data flow in a decode-only example

Component integration challenges

Developers of digital video systems also face integration challenges. A digital video system consists of multiple encoders, decoders, codecs, algorithms, and other software that must be integrated into a single executable image before the system can run. Integrating all of these components and ensuring they work together is a difficult task. Different systems may require distinct video, imaging, voice, audio, and other multimedia modules. Developers who must manually integrate every software module or algorithm have little time left for value-added work, such as adding innovative features.

Many digital video developers are therefore adopting open source approaches to building software. A common approach is to obtain a significant portion of the software from open source projects while applying in-house expertise to usability and hardware integration. Developers often participate in open source development projects to meet specific requirements, integrating internally developed code with open source code to create new products.

New API

To solve these problems, Texas Instruments (TI) has developed an API that lets open source multimedia frameworks such as GStreamer take full advantage of the DSP. The API allows multimedia programmers to use the DSP codec engine from a familiar environment, freeing them from complex DSP programming: ARM/Linux developers can easily exploit the DSP's codec acceleration without detailed knowledge of the underlying hardware. The interface also partitions the work between the ARM and the DSP automatically and efficiently, eliminating the need to write coordination code between functions running on the DSP and those running on the GPP core. TI has released this interface as a GStreamer plugin, in accordance with open source community standards.
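As a rough sketch of what this looks like from the application side, the program below builds an ordinary GStreamer pipeline in which a DSP-accelerated decoder element simply takes the decoder's place. The element name TIViddec2 is borrowed from TI's gstreamer-ti plugin and may differ between releases, and the clip name and sink are placeholders; the surrounding calls are standard GStreamer API.

    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);

        /* The DSP-accelerated decoder drops into the pipeline like any
         * other element; no DSP-specific code appears in the application. */
        GError *err = NULL;
        GstElement *pipeline = gst_parse_launch(
            "filesrc location=clip.264 ! TIViddec2 ! fbdevsink", &err);
        if (pipeline == NULL) {
            g_printerr("Failed to build pipeline: %s\n", err->message);
            g_error_free(err);
            return 1;
        }

        gst_element_set_state(pipeline, GST_STATE_PLAYING);
        g_usleep(10 * G_USEC_PER_SEC);           /* play for ten seconds */
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }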

Figure 2: GStreamer's way of representing data through the GstBuffer structure is consistent with the approaches adopted by several other operating systems and their corresponding multimedia frameworks.

GStreamer is a media processing library that provides an abstract model of a transformation process built around the concept of a pipeline, in which media flows from input to output in a defined direction. GStreamer abstracts the behavior of different media types in a way that simplifies programming, and it is popular in the digital video community. With GStreamer, you can write a universal video or music player that supports many different formats and networks. Most operations are performed by plugins rather than by the GStreamer core, whose basic functionality is primarily concerned with registering and loading plugins and with providing the base classes that define the fundamental GStreamer types.
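The short program below sketches this pipeline model with two stock elements (videotestsrc and autovideosink): the application only assembles and runs the pipeline, while the media-specific work happens inside the elements.

    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);

        /* Media flows in one direction through the pipeline: source -> sink. */
        GstElement *pipeline = gst_pipeline_new("demo");
        GstElement *src  = gst_element_factory_make("videotestsrc", "src");
        GstElement *sink = gst_element_factory_make("autovideosink", "sink");

        g_object_set(src, "num-buffers", 150, NULL);  /* stop after 150 frames */
        gst_bin_add_many(GST_BIN(pipeline), src, sink, NULL);
        gst_element_link(src, sink);

        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        /* Block until the pipeline reports an error or end-of-stream. */
        GstBus *bus = gst_element_get_bus(pipeline);
        GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

        gst_message_unref(msg);
        gst_object_unref(bus);
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }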

GStreamer filter

The source filter is responsible for obtaining raw multimedia data from a data source for processing. The data source may be a file on a hard disk (as with a file source filter), a CD or DVD, a television tuner card, or a real-time network source. Some source filters simply pass the raw data on to a parser or splitter filter, while others perform their own parsing step. Transform filters receive raw or partially processed data and process it further before passing it to the next filter in the chain.

There are many types of transform filters. Parsers, which separate a raw byte stream into samples or frames, are one example; compressors, decompressors, and format converters are others. A renderer filter typically receives fully processed data and plays it on the system display, through a speaker, or on some external device. This category also includes "file writer" filters, which save the data to a hard disk or other persistent storage, and network transport filters.
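The helper below maps these filter roles onto one concrete pipeline; the element names (qtdemux, h264parse, avdec_h264 from gst-libav, autovideosink) are common GStreamer elements standing in for whatever filters a given product would use.

    #include <gst/gst.h>

    /* Build a pipeline exercising each filter role described above. */
    GstElement *build_playback_pipeline(void)
    {
        return gst_parse_launch(
            "filesrc location=movie.mp4 "      /* source filter              */
            "! qtdemux ! h264parse "           /* transform: parser/splitter */
            "! avdec_h264 "                    /* transform: decompressor    */
            "! videoconvert ! autovideosink",  /* renderer filter            */
            NULL);
    }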

Data processing takes place in the plugin's _chain() or _loop() function. This function may be as simple as scaling samples or as complex as a full MP3 decoder. After the data has been processed, it is pushed out through the source pad of the GStreamer element with gst_pad_push(), which passes it to the next element in the pipeline chain.
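A minimal sketch of such a chain function is shown below, using the GStreamer 1.0 signature; MyFilter and its srcpad field are hypothetical placeholders for the usual element boilerplate.

    #include <gst/gst.h>

    /* Hypothetical filter element; a real plugin would define this with
     * the standard GObject boilerplate. */
    typedef struct {
        GstElement parent;
        GstPad *srcpad;
    } MyFilter;

    static GstFlowReturn
    my_filter_chain(GstPad *pad, GstObject *parent, GstBuffer *buf)
    {
        MyFilter *filter = (MyFilter *) parent;
        (void) pad;   /* unused in this sketch */

        /* ... process the buffer here, in place or into a new buffer ... */

        /* Hand the result to the next element in the pipeline chain. */
        return gst_pad_push(filter->srcpad, buf);
    }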

GStreamer buffer

In GStreamer, the buffer is the basic unit of data transfer. The GstBuffer type provides all the state necessary to define a memory region as part of a media stream. Through the GstBuffer structure, GStreamer's internal data representation follows several other operating systems and their respective multimedia frameworks (for example, the media sample concept in Microsoft DirectShow). In addition, sub-buffers are supported, allowing a small portion of a buffer to act as a buffer in its own right; reference counting ensures that the underlying storage is not released prematurely.
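In the current GStreamer 1.0 API this sub-buffer mechanism is exposed through gst_buffer_copy_region() (older 0.10 releases used gst_buffer_create_sub()); the helper below is a minimal sketch of carving out a region that shares, rather than copies, the parent's memory.

    #include <gst/gst.h>

    /* Make a sub-buffer covering [offset, offset + size) of the parent.
     * GST_BUFFER_COPY_MEMORY shares the parent's memory by reference, so
     * the underlying storage stays alive as long as the sub-buffer does. */
    GstBuffer *make_sub_buffer(GstBuffer *parent, gsize offset, gsize size)
    {
        return gst_buffer_copy_region(parent, GST_BUFFER_COPY_MEMORY,
                                      offset, size);
    }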

Figure 3: An efficient way to reuse buffers that have been allocated by the driver and are physically contiguous.

Buffers are usually created with gst_buffer_new(). Once a buffer has been created, memory is generally allocated for it and the size of the buffer data is set. The example below creates a buffer capable of holding a video frame of a given width, height, and bits per pixel.
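This sketch uses gst_buffer_new_and_alloc(), which combines the creation and allocation steps; the helper name and the bits-per-pixel parameterization are illustrative.

    #include <gst/gst.h>

    /* Allocate a buffer big enough for one video frame of the given
     * width, height, and bits per pixel. */
    GstBuffer *alloc_frame_buffer(guint width, guint height, guint bpp)
    {
        gsize size = (gsize) width * height * bpp / 8;
        return gst_buffer_new_and_alloc(size);
    }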
