Video Compression - Entropy Coding
Russian materials
Vladimir Semenyuk | "Probabilistic Methods for Efficient Coding of Video Information" (dissertation)
1. An algorithm for efficient coding of discrete cosine transform coefficients is developed that improves the efficiency of the standard JPEG and MPEG schemes by about 10% on average. 2. An image coding algorithm based on the discrete wavelet transform is developed that is the most efficient in its class: at a fixed output code size, its efficiency is 0.05-0.2 dB above that of comparable solutions, and it can be applied in practice to obtain compact representations of still images and video streams. 3. A method for obtaining adaptive probability estimates is proposed as an effective replacement for the method most widely used in practice; in particular, applying it in the wavelet-based image coding algorithm improved coding efficiency by 0.5% on average and sped coding up by 10%. RAR (abstract) 32 KB, RAR (dissertation) 450 KB
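Item 3 above concerns adaptive probability estimation for arithmetic coding. As a rough illustration of the general technique only (not the specific estimator from the dissertation), a minimal adaptive frequency model for a binary arithmetic coder might look like the sketch below; the rescaling threshold MAX_TOTAL is an assumed parameter.

```python
class AdaptiveBitModel:
    """Minimal adaptive probability estimator for a binary arithmetic coder.

    Keeps frequency counts of the bits seen so far and periodically halves
    them, so recent symbols weigh more than old ones. A generic sketch, not
    the estimator developed in the dissertation.
    """

    MAX_TOTAL = 1 << 12  # assumed rescaling threshold

    def __init__(self):
        self.counts = [1, 1]  # start from a uniform estimate

    def p_zero(self):
        """Current estimate of P(bit = 0), fed to the arithmetic coder."""
        return self.counts[0] / (self.counts[0] + self.counts[1])

    def update(self, bit):
        """Account for one coded bit; halving implements gradual forgetting."""
        self.counts[bit] += 1
        if self.counts[0] + self.counts[1] >= self.MAX_TOTAL:
            self.counts = [(c + 1) // 2 for c in self.counts]
```

The coder queries p_zero() before coding each bit and calls update() afterwards, so the encoder and decoder models stay in lockstep.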
English materials
Jin Soo Choi, Yong Han Kim, Ho-Jang Lee, In-Sung Park, Myoung Ho Lee, and Chieteuk Ahn | Geometry Compression of 3-D Mesh Models Using Predictive Two-Stage Quantization
Abstract—In conventional predictive quantization schemes for 3-D mesh geometry, excessively large residuals or prediction errors, although occasional, lead to visually unacceptable geometric distortion, because such schemes cannot limit the maximum quantization error within a given bound. In order to completely eliminate this distortion, we propose a predictive two-stage quantization scheme. The scheme is very similar to conventional DPCM, except that the embedded quantizer is replaced by a series of two quantizers. Each quantizer output is further compressed by an arithmetic code. When applied to typical 3-D mesh models, the scheme performs much better than conventional predictive quantization methods and, depending upon the input model, even better than the MPEG-4 compression method for 3-D mesh geometry, both in the rate-distortion sense and in subjective viewing. RAR 418 KB
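The core idea, replacing the single quantizer inside a DPCM loop with two cascaded stages so that the reconstruction error stays bounded even for rare large residuals, can be sketched as follows; step1 and step2 are assumed parameters, and the paper's actual quantizer design differs in detail.

```python
def two_stage_quantize(residual, step1, step2):
    """Quantize a DPCM prediction residual in two stages.

    Stage 1 quantizes the residual with a coarse step; stage 2 quantizes
    the stage-1 error with a fine step, so the final reconstruction error
    is bounded by step2 / 2 no matter how large the residual is. Both
    indices are then entropy-coded (e.g., with an arithmetic code).
    """
    idx1 = round(residual / step1)
    idx2 = round((residual - idx1 * step1) / step2)
    return idx1, idx2

def two_stage_dequantize(idx1, idx2, step1, step2):
    """Invert the two-stage quantizer."""
    return idx1 * step1 + idx2 * step2
```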
Jozsef Vass, Bing-Bing Chai, Kannappan Palaniappan, and Xinhua Zhuang | Significance-Linked Connected Component Analysis for Very Low Bit-Rate Wavelet Video Coding
Abstract—In recent years, tremendous success in wavelet image coding has been achieved, mainly attributed to innovative strategies for data organization and representation of wavelet-transformed images. However, there have been only a few successful attempts in wavelet video coding, perhaps the most successful being Sarnoff Corp.'s zerotree entropy (ZTE) video coder. There has also been empirical evidence that the wavelet transform combined with those data organization and representation strategies can be an invaluable asset in very low bit-rate video coding, as long as motion-compensated error frames are ensured to be coherent, i.e., free of blocking effects. In this paper, a novel hybrid wavelet video coding algorithm termed video significance-linked connected component analysis (VSLCCA) is developed for very low bit-rate applications. In the proposed VSLCCA codec, first, fine-tuned motion estimation based on the H.263 Recommendation is developed to reduce temporal redundancy, and exhaustive overlapped block motion compensation is utilized to ensure coherency in motion-compensated error frames. Second, the wavelet transform is applied to each coherent motion-compensated error frame to attain global energy compaction. Third, significant fields of wavelet-transformed error frames are organized and represented as significance-linked connected components, so that both the within-subband clustering and the cross-scale dependency are exploited. Last, the horizontal and vertical components of motion vectors are encoded separately using adaptive arithmetic coding, while significant wavelet coefficients are encoded in bit-plane order using high-order Markov source modeling and adaptive arithmetic coding. Experimental results on eight standard MPEG-4 test sequences show that for intraframe coding, the proposed codec exceeds H.263 and ZTE in peak signal-to-noise ratio by as much as 2.07 and 1.38 dB on average at 28 kbit/s, respectively. For entire-sequence coding, VSLCCA is superior to H.263 and ZTE by 0.35 and 0.71 dB on average, respectively. RAR 894 KB
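Of the steps listed above, the bit-plane ordering of significant coefficients is the easiest to show in isolation. The generic sketch below assumes quantized integer coefficients; the actual codec additionally links significant coefficients into connected components and conditions each emitted bit on a high-order Markov context before arithmetic coding.

```python
def bitplane_order(coeffs, num_planes=8):
    """Emit (plane, index, bit) triples from the most significant plane down.

    Sending magnitude bits plane by plane yields an embedded bit stream:
    truncating it at any point leaves the best available approximation of
    every coefficient. `coeffs` are quantized integer magnitudes.
    """
    for plane in reversed(range(num_planes)):
        mask = 1 << plane
        for i, c in enumerate(coeffs):
            yield plane, i, 1 if abs(c) & mask else 0
```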
Jeong-Kwon Kim, Kyeong Ho Yang, and Choong Woong Lee | Document Image Compression by Nonlinear Binary Subband Decomposition and Concatenated Arithmetic Coding
Abstract—This paper proposes a new subband coding approach to the compression of document images, based on nonlinear binary subband decomposition followed by concatenated arithmetic coding. We choose the sampling-exclusive-OR (XOR) subband decomposition to exploit its beneficial characteristics: it conserves the alphabet size of symbols and provides a small region of support while retaining the perfect reconstruction property. We propose a concatenated arithmetic coding scheme to alleviate the degradation of predictability caused by subband decomposition, where three high-pass subband coefficients at the same location are concatenated and then encoded by an octave arithmetic coder. The proposed concatenated arithmetic coding is performed on a conditioning context properly selected by exploiting the nature of the sampling-XOR subband filter bank as well as taking advantage of the noncausal prediction capability of subband coding. We also introduce a unicolor map to efficiently represent the large uniform regions that frequently appear in document images. Simulation results show that each of the functional blocks proposed in the paper performs very well and, consequently, the proposed subband coder provides good compression of document images. RAR 202 KB
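A minimal perfect-reconstruction binary filter bank in the sampling-XOR spirit can be sketched as below, assuming the simplest possible variant on an even-length input; the filter bank actually used in the paper and its region of support may differ.

```python
def sampling_xor_analysis(x):
    """Split a binary sequence (even length) into two binary subbands.

    low[n]  = x[2n]              (subsampling)
    high[n] = x[2n] ^ x[2n + 1]  (XOR of neighbors)

    The alphabet stays binary and the mapping is exactly invertible, the
    two properties the paper exploits.
    """
    low = x[0::2]
    high = [a ^ b for a, b in zip(x[0::2], x[1::2])]
    return low, high

def sampling_xor_synthesis(low, high):
    """Perfect reconstruction: x[2n] = low[n], x[2n+1] = low[n] ^ high[n]."""
    x = []
    for lo, hi in zip(low, high):
        x.extend((lo, lo ^ hi))
    return x
```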
Limin Wang and André Vincent | Bit Allocation and Constraints for Joint Coding of Multiple Video Programs
Abstract—Recent studies have shown that joint coding is more efficient and effective than independent coding for the compression of multiple video programs [3]-[7]. Unlike independent coding, joint coding is able to dynamically distribute the channel capacity among video programs according to their respective complexities and hence achieve a more uniform picture quality. This paper examines the bit-allocation issues for joint coding of multiple video programs and provides a bit-allocation strategy that results in a uniform picture quality among programs as well as within a program. To prevent the encoder/decoder buffers from overflowing and underflowing, further constraints on bit allocation are also discussed. RAR 370 KB
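The simplest complexity-proportional allocation rule underlying such joint coders can be sketched as follows; the paper's strategy additionally enforces encoder/decoder buffer constraints on top of an allocation of this kind.

```python
def allocate_bits(total_rate, complexities):
    """Split a shared channel's bit budget across programs by complexity.

    Program i receives R_i = R_total * X_i / sum(X), so harder-to-code
    programs get more bits and picture quality evens out across programs.
    The complexity measure X_i is an assumption here; the paper derives
    its own.
    """
    total_complexity = sum(complexities)
    return [total_rate * x / total_complexity for x in complexities]

# Example: a 6 Mbit/s multiplex carrying three programs of unequal activity.
rates = allocate_bits(6_000_000, [3.0, 1.0, 2.0])  # -> [3.0e6, 1.0e6, 2.0e6]
```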
Meng-Han Hsieh and Che-Ho Wei | An Adaptive Multialphabet Arithmetic Coding for Video Compression
Abstract—In this paper, the hardware implementation issues for an adaptive multialphabet arithmetic coder are discussed. A simple weighted history model is proposed to encode the video data; it uses a weighted finite buffer to model the cumulative density function of the arithmetic coder. The performance of the weighted history model is evaluated against several other well-known models. To access, search, and update the cumulative frequencies corresponding to model symbols in real time, we present a low-complexity multibase cumulative occurrence array structure that can offer the probability information needed for high-speed encoding and decoding. For video compression applications, multialphabet arithmetic coding with the weighted history model is a good alternative to variable-length coding of the video symbols. RAR 166 KB
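A finite-buffer history model of this general kind can be sketched as follows; the window length is an assumed parameter, and the paper's weighted model and multibase occurrence array differ in detail.

```python
from collections import deque

class WeightedHistoryModel:
    """Adaptive symbol-frequency model over a finite history window.

    The cumulative distribution handed to the arithmetic coder is built
    from the last `window` symbols only, so stale statistics age out.
    A generic stand-in for the paper's weighted finite-buffer model.
    """

    def __init__(self, alphabet_size, window=1024):
        self.freq = [1] * alphabet_size      # +1 keeps every probability nonzero
        self.history = deque(maxlen=window)

    def update(self, symbol):
        if len(self.history) == self.history.maxlen:
            self.freq[self.history[0]] -= 1  # the oldest symbol falls out
        self.history.append(symbol)
        self.freq[symbol] += 1

    def cumulative(self, symbol):
        # Linear scan for clarity; the paper's multibase cumulative
        # occurrence array exists precisely to make this lookup and the
        # update fast enough for real-time hardware.
        return sum(self.freq[:symbol])
```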
Kuang-Shyr Wu and Ja-Chen Lin | Fast VQ Encoding by an Efficient Kick-Out Condition
Abstract—A new fast approach to the nearest-codeword search using a single kick-out condition is proposed. The nearest codeword found by the proposed approach is identical to the one found by a full search, although the processing time is much shorter. The principle is to bypass those codewords that satisfy the proposed kick-out condition, without the actual (and time-consuming) computation of the distortions from the bypassed codewords to the query vector. Due to the efficiency and simplicity of the proposed condition, a considerable saving of the CPU time needed to encode a data set (using a given codebook) can be achieved. Moreover, the memory requirement is low. Comparisons with some recent works are included to show these two benefits. RAR 89 KB
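A classic kick-out condition of this family follows from the Cauchy-Schwarz inequality: ||c - x||^2 >= (sum(c) - sum(x))^2 / k for k-dimensional vectors, so a codeword whose component sum is far from the query's cannot be the nearest one. The sketch below uses that bound; the exact condition in the paper may differ, and real implementations precompute and sort the codeword sums.

```python
def nearest_codeword(query, codebook):
    """Exact nearest-codeword search that skips hopeless codewords.

    A codeword is 'kicked out' without computing its full distortion
    whenever the sum-based lower bound already exceeds the best
    distortion found so far; the result equals that of a full search.
    """
    k = len(query)
    query_sum = sum(query)
    best, best_dist = None, float("inf")
    for c in codebook:
        if (sum(c) - query_sum) ** 2 / k >= best_dist:
            continue  # lower bound says c cannot beat the current best
        dist = sum((ci - xi) ** 2 for ci, xi in zip(c, query))
        if dist < best_dist:
            best, best_dist = c, dist
    return best
```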
Ramon Llados-Bernaus and Robert L. Stevenson | Fixed-Length Entropy Coding for Robust Video Compression
Abstract—Entropy coding is a fundamental stage in all video compression algorithms, in terms of both compression efficiency and error resilience. Current video codecs use variable-length entropy codes (VLC); designed for noiseless applications, these codes are very sensitive to transmission errors. This paper proposes the use of fixed-length entropy codes (FLC) as an alternative to VLC in video compression applications. Over noisy channels, the FLC-based codec has shown superior performance compared to VLC-based codecs with synchronization words, while matching their coding efficiency. RAR 230 KB
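A toy comparison (not the paper's codec) shows the resilience argument: with a fixed-length code a flipped bit corrupts exactly one symbol, while with a variable-length prefix code it can shift every later codeword boundary. The prefix code below is an assumed example.

```python
VLC = {"0": "a", "10": "b", "11": "c"}  # toy prefix code (assumed)

def decode_vlc(bits):
    """Decode a variable-length prefix code from a '0'/'1' string.

    One flipped bit can change a codeword's length, misparsing every
    symbol that follows until a synchronization word is reached.
    """
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in VLC:
            out.append(VLC[buf])
            buf = ""
    return out

def decode_flc(bits, width=2):
    """Decode a fixed-length code: the decoder never loses its position,
    so an error stays confined to a single symbol."""
    return [int(bits[i:i + width], 2) for i in range(0, len(bits), width)]

# decode_vlc("10110") -> ['b', 'c', 'a']; flipping the first bit gives
# decode_vlc("00110") -> ['a', 'a', 'c', 'a']: every symbol is misparsed.
```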
R. Chandramouli, N. Ranganathan, and Shivaraman J. Ramadoss | Adaptive Quantization and Fast Error-Resilient Entropy Coding for Image Transmission
Abstract—Recently, there has been an outburst of research in image and video compression for transmission over noisy channels, and channel-matched source quantizer design has gained prominence. Further, the presence of variable-length codes in compression standards such as JPEG and MPEG has made the problem more interesting. Error-resilient entropy coding (EREC) has emerged as a new and effective method to combat catastrophic loss in the received signal due to burst and random errors. In this paper, we propose a new channel-matched adaptive quantizer for JPEG image compression, assuming a slow, frequency-nonselective Rayleigh fading channel model. The optimal quantizer that matches the human visibility threshold and the channel bit-error rate is derived. Further, a new fast error-resilient entropy code (FEREC) that exploits the statistics of JPEG-compressed data is proposed. The proposed FEREC algorithm is shown to be almost twice as fast as EREC at encoding the data, and its error resilience is also observed to be significantly better: on average, a 5% decrease in the number of significantly corrupted received image blocks is observed with FEREC, and up to a 2-dB improvement in the peak signal-to-noise ratio of the received image is achieved. RAR 315 KB
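The EREC idea that FEREC accelerates, packing variable-length coded blocks into equal-length slots so that every block start sits at a known position, can be sketched as follows; this is a simplified version, and FEREC's contribution is a faster, statistics-aware placement search.

```python
def erec_pack(blocks, slot_len):
    """Pack variable-length bit strings into equal-length slots (EREC idea).

    Each block first fills its own slot; leftover bits are then placed in
    other slots' free space, probed in a fixed offset sequence that the
    decoder replays to unpack. Because every slot boundary is known a
    priori, a bit error cannot desynchronize the start of other blocks.
    Assumes sum of block lengths <= len(blocks) * slot_len.
    """
    n = len(blocks)
    slots = [b[:slot_len] for b in blocks]
    left = [b[slot_len:] for b in blocks]
    for offset in range(1, n):            # fixed, decoder-known sequence
        for i in range(n):
            j = (i + offset) % n
            space = slot_len - len(slots[j])
            if space > 0 and left[i]:
                slots[j] += left[i][:space]
                left[i] = left[i][space:]
    return slots
```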
See also materials:
- On color spaces
- On JPEG
- On JPEG-2000
Prepared by Sergey Grishin and Dmitriy Vatolin