Download: compression resources

Video compression - Wavelet

Seongman Kim, Seunghyeon Rhee, Jun Geun Jeon, and Kyu Tae Park Interframe Coding Using Two-Stage Variable Block-Size Multiresolution Motion Estimation and Wavelet Decomposition
Abstract—In this paper, we propose a two-stage variable block-size multiresolution motion estimation (MRME) algorithm. In this algorithm, a method to reduce the amount of motion information is developed, and a bit allocation method minimizing the sum of the motion information and the prediction error is obtained in the wavelet transform domain. In the first stage of the proposed scheme, motion vectors are estimated from a set of wavelet components of the four subbands at the lowest resolution. These motion vectors are used directly for the lowest subband and are scaled into initial biases for the other subbands at every layer of the wavelet pyramid. In the second stage, a quadtree is constructed bottom-up by means of a merge operation. The proposed scheme reduces the uncompressed bit rate of 8 bits/pixel to 0.212 bits/pixel at a PSNR of 41.1 dB for the “Claire” sequence, nearly an 11% decrease compared with the conventional method.
RAR, 378 KB
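To make the first-stage idea concrete, the sketch below scales a motion vector found at the lowest-resolution layer into an initial bias for a finer layer and refines it with a small block-matching search. It is a hedged toy, not the authors' algorithm: the block size, search radius, SAD criterion, and the doubling rule per level are assumed for illustration.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum()

def refine_motion_vector(cur, ref, top_left, block, bias, radius=2):
    """Search a small window around an initial bias for the best displacement.

    cur, ref  : current and reference frames (2-D uint8 arrays)
    top_left  : (row, col) of the block in the current frame
    block     : block size in pixels
    bias      : initial motion vector (dy, dx), e.g. a scaled coarse-level vector
    radius    : half-width of the refinement search window
    """
    r0, c0 = top_left
    target = cur[r0:r0 + block, c0:c0 + block]
    best_cost, best_mv = None, bias
    for dy in range(bias[0] - radius, bias[0] + radius + 1):
        for dx in range(bias[1] - radius, bias[1] + radius + 1):
            r, c = r0 + dy, c0 + dx
            if r < 0 or c < 0 or r + block > ref.shape[0] or c + block > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            cost = sad(target, ref[r:r + block, c:c + block])
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv

# Toy usage: a vector found at the lowest-resolution layer is doubled per level
# (an assumed scaling rule) and used as the starting bias at the finer layer.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(3, -2), axis=(0, 1))          # simulate global motion
coarse_mv = (-2, 1)                                     # from the lowest layer
bias = (coarse_mv[0] * 2, coarse_mv[1] * 2)             # scaled to the next layer
print(refine_motion_vector(cur, ref, (16, 16), 8, bias))
```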
Jiebo Luo, Chang Wen Chen, Kevin J. Parker and Thomas S. Huang A Scene Adaptive and Signal Adaptive Quantization for Subband Image and Video Compression Using Wavelets
Abstract—The discrete wavelet transform (DWT) provides an advantageous framework of multiresolution space-frequency representation with promising applications in image processing. The challenge as well as the opportunity in wavelet-based compression is to exploit the characteristics of the subband coefficients with respect to both spectral and spatial localities. A common problem with many existing quantization methods is that the inherent image structures are severely distorted with coarse quantization. Observation shows that subband coefficients with the same magnitude generally do not have the same perceptual importance; this depends on whether or not they belong to clustered scene structures. We propose in this paper a novel scene adaptive and signal adaptive quantization scheme capable of exploiting both the spectral and spatial localization properties resulting from the wavelet transform. The proposed quantization is implemented as a maximum a posteriori probability (MAP) estimation-based clustering process in which subband coefficients are quantized to their cluster means, subject to local spatial constraints. The intensity distribution of each cluster within a subband is modeled by an optimal Laplacian source to achieve the signal adaptivity, while spatial constraints are enforced by appropriate Gibbs random fields (GRF) to achieve the scene adaptivity. Consequently, with spatially isolated coefficients removed and clustered coefficients retained at the same time, the available bits are allocated to visually important scene structures so that the information loss is least perceptible. Furthermore, the reconstruction noise in the decompressed image can be suppressed using another GRF-based enhancement algorithm. Experimental results have shown the potential of this quantization scheme for low bit-rate image and video compression.
RAR, 998 KB
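A much-simplified picture of clustering-driven quantization is sketched below: a two-cluster ICM pass that trades a Laplacian-style data term on coefficient magnitudes against a Potts-style spatial smoothness term, then quantizes each coefficient to its cluster's mean magnitude. The cluster count, the cost weights, and the update rule are assumptions, not the authors' MAP/GRF formulation.

```python
import numpy as np

def toy_cluster_quantize(coef, n_iter=5, beta=0.4):
    """Quantize subband coefficients to cluster means of their magnitudes,
    favouring spatially coherent clusters (a toy stand-in for MAP/GRF clustering)."""
    mag = np.abs(coef)
    labels = (mag > np.median(mag)).astype(int)      # initial split: 2 clusters
    for _ in range(n_iter):
        # cluster statistics: mean magnitude per cluster (Laplacian scale proxy)
        means = np.array([mag[labels == k].mean() if np.any(labels == k) else 0.0
                          for k in (0, 1)])
        new_labels = labels.copy()
        rows, cols = coef.shape
        for i in range(rows):
            for j in range(cols):
                # 4-neighbour labels used by the smoothness (Potts-style) term
                nbrs = [labels[i + di, j + dj]
                        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= i + di < rows and 0 <= j + dj < cols]
                costs = [abs(mag[i, j] - means[k]) + beta * sum(n != k for n in nbrs)
                         for k in (0, 1)]
                new_labels[i, j] = int(np.argmin(costs))
        labels = new_labels
    # quantize: keep the sign, replace the magnitude by its cluster mean
    means = np.array([mag[labels == k].mean() if np.any(labels == k) else 0.0
                      for k in (0, 1)])
    return np.sign(coef) * means[labels]

rng = np.random.default_rng(1)
subband = rng.laplace(scale=2.0, size=(16, 16))
subband[4:8, 4:8] += 12.0          # a clustered "scene structure"
print(toy_cluster_quantize(subband).round(1))
```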
Seung-Kwon Paek and Lee-Sup Kim A Real-Time Wavelet Vector Quantization Algorithm and Its VLSI Architecture
Abstract—In this paper, a real-time wavelet image compression algorithm using vector quantization and its VLSI architecture are proposed. The proposed zerotree wavelet vector quantization (WVQ) algorithm focuses on reducing the time needed to encode wavelet images with high coding efficiency. Conventional wavelet image-compression algorithms exploit the tree structure of wavelet coefficients coupled with scalar quantization, but they cannot provide real-time computation because they use iterative methods to decide zerotrees. In contrast, the zerotree WVQ algorithm predicts zero-vector trees of insignificant wavelet vectors in real time by a noniterative decision rule and then encodes significant wavelet vectors by classified VQ. This gives the zerotree WVQ algorithm the best compromise between coding performance and computation time. The noniterative decision rule was extracted from simulation results based on the statistical characteristics of wavelet images. Moreover, the zerotree WVQ exploits multistage VQ to encode the lowest frequency subband, which is generally known to be robust to wireless channel errors. The proposed WVQ VLSI architecture has only one VQ module to execute the zerotree WVQ algorithm in real time, by utilizing the vacant cycles of zero-vector trees that are not transmitted. The VQ module has only N + 1 processing elements (PEs) for the real-time minimum-distance calculation, where N is the codebook size: N PEs perform the Euclidean distance calculations and one PE performs the parallel distance comparison. Compared with conventional architectures, the proposed VLSI architecture has very cost-effective hardware (H/W) for executing the zerotree WVQ algorithm in real time. Therefore, the zerotree WVQ algorithm and its VLSI architecture are well suited to wireless image communication, because they provide high coding efficiency, real-time computation, and cost-effective H/W. Image-compression techniques robust to transmission channel errors are essential for wireless image communication, because wireless channels suffer from burst errors in which a large number of consecutive bits are lost or corrupted by channel fading. Conventional image-coding standards are very susceptible to transmission errors and hence need powerful error-correction codes. It is therefore desirable to design a robust image-coding technique that has a high compression ratio and produces acceptable image quality over a fading channel. Finally, image compression algorithms and their VLSI architectures should allow portable decoders with small size, low power consumption, and acceptable reconstructed image quality.
RAR, 694 KB
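The work that the N + 1 processing elements parallelize is a nearest-codeword search: N Euclidean distances plus one comparison. A sequential sketch of that search is below; the codebook size and the 2 x 2 (four-sample) vector dimension are illustrative choices.

```python
import numpy as np

def nearest_codeword(vector, codebook):
    """Classified-VQ building block: find the codeword with minimum Euclidean
    distance. In hardware the N distances are computed in parallel and compared
    by one extra processing element; here they are computed sequentially."""
    distances = np.sum((codebook - vector) ** 2, axis=1)   # N squared distances
    return int(np.argmin(distances)), float(distances.min())

rng = np.random.default_rng(8)
codebook = rng.normal(size=(256, 4))        # N = 256 codewords, 2x2 wavelet vectors
wavelet_vector = rng.normal(size=4)
index, dist = nearest_codeword(wavelet_vector, codebook)
print("best codeword index:", index, "squared distance:", round(dist, 3))
```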
Shipeng Li and Weiping Li Shape-Adaptive Discrete Wavelet Transforms for Arbitrarily Shaped Visual Object Coding
Abstract—This paper presents a shape-adaptive wavelet coding technique for coding arbitrarily shaped still texture. This technique includes shape-adaptive discrete wavelet transforms (SA-DWT’s) and extensions of zerotree entropy (ZTE) coding and embedded zerotree wavelet (EZW) coding. Shape-adaptive wavelet coding is needed for efficiently coding arbitrarily shaped visual objects, which is essential for object-oriented multimedia applications. The challenge is to achieve high coding efficiency while satisfying the functionality of representing arbitrarily shaped visual texture. One of the features of the SA-DWT’s is that the number of coefficients after SA-DWT’s is identical to the number of pixels in the original arbitrarily shaped visual object. Another feature of the SA-DWT is that the spatial correlation, locality properties of wavelet transforms, and self-similarity across subbands are well preserved in the SA-DWT. Also, for a rectangular region, the SA-DWT becomes identical to the conventional wavelet transforms. For the same reason, the extensions of ZTE and EZW to coding arbitrarily shaped visual objects carefully treat “don’t care” nodes in the wavelet trees. Comparison of shape-adaptive wavelet coding with other coding schemes for arbitrarily shaped visual objects shows that shape-adaptive wavelet coding always achieves better coding efficiency than other schemes. One implementation of the shape-adaptive wavelet coding technique has been included in the new multimedia coding standard MPEG-4 for coding arbitrarily shaped still texture. Software implementation is also available.
RAR, 2840 KB
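The SA-DWT's defining property, as many coefficients as object pixels, can be illustrated on a single row segment with a toy transform: pairs of samples are transformed Haar-style and, for an odd-length segment, the trailing sample is carried into the approximation band unchanged. The odd-sample convention is an assumption; the real SA-DWT also tracks subsampling positions and supports longer filters.

```python
import numpy as np

def sa_haar_segment(seg):
    """Haar-like transform of an arbitrary-length segment in which the number of
    output coefficients equals the number of input samples."""
    seg = np.asarray(seg, dtype=np.float64)
    n = seg.size // 2 * 2
    approx = (seg[0:n:2] + seg[1:n:2]) / 2
    detail = (seg[0:n:2] - seg[1:n:2]) / 2
    if seg.size % 2:                               # odd length: keep the last sample
        approx = np.append(approx, seg[-1])
    return approx, detail

def inverse_sa_haar_segment(approx, detail):
    odd_tail = approx.size > detail.size           # was the segment odd-length?
    core = approx[:detail.size]
    seg = np.empty(2 * detail.size)
    seg[0::2], seg[1::2] = core + detail, core - detail
    return np.append(seg, approx[-1]) if odd_tail else seg

# Row segments of an arbitrarily shaped object have arbitrary lengths.
for length in (5, 6, 9):
    segment = np.arange(length, dtype=np.float64) * 3 + 10
    a, d = sa_haar_segment(segment)
    assert a.size + d.size == segment.size                       # no extra coefficients
    assert np.allclose(inverse_sa_haar_segment(a, d), segment)   # exactly invertible
print("coefficient count preserved and reconstruction exact")
```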
David B. H. Tay Rationalizing the Coefficients of Popular Biorthogonal Wavelet Filters
Abstract—Many wavelet filters found in the literature have irrational coefficients and thus require infinite-precision implementation. One of the most popular filter pairs is the “9/7” biorthogonal pair of Cohen, Daubechies, and Feauveau, which is adopted in the FBI fingerprint compression standard. We present here a technique to rationalize the coefficients of wavelet filters that preserves biorthogonality and perfect reconstruction. Furthermore, most of the zeros at z = -1 will also be preserved. These zeros are important for achieving regularity. The rationalized-coefficient filters have characteristics close to those of the original irrational-coefficient filters. Three popular pairs of filters, which include the “9/7” pair, will be considered.
RAR, 710 KB
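A quick way to see the perfect-reconstruction property that any rationalization must preserve is to run an analysis/synthesis round trip with a filter pair whose coefficients are already rational. The sketch below is only an illustration of that check, not the paper's rationalization procedure: it uses the 5/3 (LeGall) lifting steps, whose coefficients are the rationals 1/2 and 1/4, with periodic extension.

```python
import numpy as np

def forward_53(x):
    """One level of the 5/3 (LeGall) lifting transform on an even-length signal."""
    even, odd = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = odd - ((even + np.roll(even, -1)) >> 1)        # predict step (coefficient 1/2)
    a = even + ((np.roll(d, 1) + d + 2) >> 2)          # update step (coefficient 1/4)
    return a, d

def inverse_53(a, d):
    """Invert the lifting steps exactly (perfect reconstruction)."""
    even = a - ((np.roll(d, 1) + d + 2) >> 2)
    odd = d + ((even + np.roll(even, -1)) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

rng = np.random.default_rng(2)
signal = rng.integers(0, 256, size=64)
approx, detail = forward_53(signal)
assert np.array_equal(inverse_53(approx, detail), signal)   # exact round trip
print("perfect reconstruction holds")
```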
Ashraf A. Kassim and Lifeng Zhao Rate-Scalable Object-Based Wavelet Codec with Implicit Shape Coding
Abstract—In this paper, we present an embedded approach for coding image regions with arbitrary shapes. Our scheme takes a different approach by separating the objects in the transform domain instead of the image domain so that only one transform for the entire image is required. We define a new shape-adaptive embedded zerotree wavelet coding (SA-EZW) technique for encoding the coefficients corresponding to specific objects in gray-scale and color-image segments by implicitly representing their shapes, thereby forgoing the need for separately coding the region boundary. At our decoder, the shape information can be recovered without separate and explicit shape coding. The implicit shape coding enables the bit stream for the object to be fully rate scalable, since no explicit bit allocation is needed for the object shape. This makes it particularly suitable when content-based functionalities are desired in situations where the user bit rate is constrained and enables precise bit-rate control while avoiding the problem of contour coding. We show that our algorithm sufficiently addresses the issue of content-based scalability and improved coding efficiency when compared with the “chroma keying” technique, an implicit shape-coding technique which is adopted by the current MPEG-4 standard.
RAR, 820 KB
Chengjiang Lin, Bo Zhang, and Yuan F. Zheng Packed Integer Wavelet Transform Constructed by Lifting Scheme
Abstract—A new method for speeding up the integer wavelet transforms constructed by the lifting scheme is proposed. The proposed method packs multiple pixels (wavelet coefficients) in a single word; therefore, it can make use of the 32-bit or 64-bit computational capability of modern computers to accomplish multiple addition/subtraction operations in one instruction cycle. As a result, our method can save the decomposition/reconstruction time by up to 37% on 32-bit machines and require much less working memory in comparison with the original wavelet transform algorithms.
RAR, 106 KB
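The core trick, performing several small additions with one wide machine addition, can be demonstrated directly: viewing a 16-bit array as 64-bit words packs four coefficients per word, and a single 64-bit add then carries out four lane additions as long as no lane overflows. The 16-bit lane width and the headroom margin below are illustrative choices, not the paper's exact packing format.

```python
import numpy as np

# Two arrays of small wavelet coefficients, 16 bits per value with headroom so
# that no per-lane sum exceeds 16 bits (otherwise carries would cross lanes).
rng = np.random.default_rng(3)
a = rng.integers(0, 1 << 14, size=64, dtype=np.uint16)
b = rng.integers(0, 1 << 14, size=64, dtype=np.uint16)

# Pack four 16-bit lanes into each 64-bit word (no copy, just a reinterpretation).
a_packed = a.view(np.uint64)
b_packed = b.view(np.uint64)

# One 64-bit addition per word performs four 16-bit additions at once.
sum_packed = a_packed + b_packed

# Unpack and compare with the ordinary element-wise result.
assert np.array_equal(sum_packed.view(np.uint16), a + b)
print("packed addition matches element-wise addition")
```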
Kai Bao and Xiang-Gen Xia Image Compression Using a New Discrete Multiwavelet Transform and a New Embedded Vector Quantization
Abstract—An embedded image compression scheme using the discrete multiwavelet transform (DMWT) is proposed in this paper. The proposed coding scheme is based on a new prefilter design for the DMWT and a new embedded coding algorithm which combines scalar quantization and 2 × 2 vector quantization (VQ). A new algorithm for embedded VQ codebook generation is proposed, which is shown to have better performance than current schemes. The performance of the proposed compression scheme is comparable to that of the SPIHT algorithm.
RAR, 588 KB
Hyuk Choi and Taejeong Kim Blocking-Artifact Reduction in Block-Coded Images Using Wavelet-Based Subband Decomposition
Abstract—We propose a post-processing method in the wavelet transform domain that can significantly reduce the blocking effects in low-bit-rate block-transform-coded images. Although the quantization noise of transform coefficients is the sole source of error in a coded image, the properties of block transforms make the errors appear in two categories: blocky noise, which causes blocking effects, and granular (nonblocky) noise. Noting that subband coding does not suffer from blocky noise, the proposed technique is designed to work in the subband domain. Once a coded image is decomposed into subbands by wavelet filters, most of the energy of the blocky noise lies on the predetermined block boundaries of the corresponding subbands. We reduce the blocky noise by a linear minimum mean square error filter, which fully exploits the characteristics of the signal and noise components in each subband. After the blocky noise is reduced, the granular noise can be further decreased by exploiting its lack of structure. Computer simulations show that the proposed method visibly reduces the blocking effects in reconstructed images and yields improved PSNR.
RAR, 136 KB
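A toy version of the idea, far simpler than the paper's LMMSE filter, is sketched below for a single image row: a 5/3-style lifting decomposition, attenuation of the detail coefficients that straddle the 8-sample block boundaries, and reconstruction. The block size, the fixed attenuation factor, and the float lifting filter are assumptions made for illustration.

```python
import numpy as np

def lift_forward(x):
    """One-level 5/3-style lifting transform (float, periodic extension)."""
    even, odd = x[0::2].copy(), x[1::2].copy()
    d = odd - 0.5 * (even + np.roll(even, -1))      # predict: detail coefficients
    a = even + 0.25 * (np.roll(d, 1) + d)           # update: approximation
    return a, d

def lift_inverse(a, d):
    even = a - 0.25 * (np.roll(d, 1) + d)
    odd = d + 0.5 * (even + np.roll(even, -1))
    x = np.empty(a.size + d.size)
    x[0::2], x[1::2] = even, odd
    return x

def toy_deblock_row(row, block=8, atten=0.2):
    """Attenuate detail coefficients that straddle the 8-sample block boundaries."""
    a, d = lift_forward(row.astype(np.float64))
    boundary_idx = np.arange(block // 2 - 1, d.size, block // 2)  # e.g. 3, 7, 11, ...
    d[boundary_idx] *= atten
    return lift_inverse(a, d)

def boundary_jumps(r):
    return np.abs(np.diff(r))[7::8]                 # jump sizes at the block edges

# A 1-D "blocky" row: piecewise-constant blocks of 8 samples with step jumps.
row = np.repeat(np.array([40.0, 90.0, 60.0, 120.0]), 8)
print(boundary_jumps(row), boundary_jumps(toy_deblock_row(row)).round(1))
```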
Detlev Marpe, Gabi Blättermann, Jens Ricke, and Peter Maaß A Two-Layered Wavelet-Based Algorithm for Efficient Lossless and Lossy Image Compression
Abstract—In this paper, we propose a wavelet-based image-coding scheme allowing lossless and lossy compression simultaneously. Our two-layered approach combines the best of both worlds: the first stage uses a high-performance wavelet-based or wavelet-packet-based coding technique for lossy compression in the low bit-rate range. For the second (optional) stage, we extend the concept of reversible integer wavelet transforms to the more flexible class of adaptive reversible integer wavelet packet transforms, which are based on the generation of a whole library of bases from which the best representation for a given residue between the reconstructed lossy compressed image and the original image is chosen using a fast search algorithm. We present experimental results demonstrating that our compression algorithm yields a rate-distortion performance similar or superior to the best currently published pure lossy still image-coding methods. At the same time, the lossless compression performance of our two-layered scheme is comparable to that of state-of-the-art pure lossless image-coding schemes. Compared to other combined lossy/lossless coding schemes, such as the emerging JPEG-2000 still image-coding standard, PSNR improvements of up to 3 dB are achieved for a set of standard test images.
RAR, 234 KB
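The two-layer principle itself fits in a few lines: a lossy layer is produced by quantizing transform coefficients, and the second layer codes the integer residual between the original and the lossy reconstruction, so adding the layers back recovers the image exactly. The Haar transform, the quantization step, and the use of a plain residual in place of the paper's adaptive wavelet-packet residue coder are all simplifications.

```python
import numpy as np

def haar_rows(x, inverse=False):
    """One-level Haar transform applied along the last axis."""
    if not inverse:
        return np.concatenate([(x[..., 0::2] + x[..., 1::2]) / 2,
                               (x[..., 0::2] - x[..., 1::2]) / 2], axis=-1)
    half = x.shape[-1] // 2
    a, d = x[..., :half], x[..., half:]
    out = np.empty_like(x)
    out[..., 0::2], out[..., 1::2] = a + d, a - d
    return out

rng = np.random.default_rng(4)
image = rng.integers(0, 256, size=(8, 16)).astype(np.float64)

# Layer 1 (lossy): transform rows then columns, quantize coarsely, reconstruct.
step = 16.0
coeffs = haar_rows(haar_rows(image).T).T
quantized = np.round(coeffs / step) * step        # coarse uniform quantization
lossy = np.round(haar_rows(haar_rows(quantized.T, inverse=True).T, inverse=True))

# Layer 2 (lossless refinement): integer residual between original and lossy layer.
residual = image - lossy                          # this is what the second layer codes

# Decoder: the lossy layer alone is an approximation; adding the residual is lossless.
assert np.array_equal(lossy + residual, image)    # layers 1 + 2 are exactly lossless
print("lossy layer mean abs error:", float(np.abs(residual).mean()))
```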
Ke Shen and Edward J. Delp Wavelet Based Rate Scalable Video Compression
Abstract—In this paper, we present a new wavelet based rate scalable video compression algorithm. We will refer to this new technique as the Scalable Adaptive Motion Compensated Wavelet (SAMCoW) algorithm. SAMCoW uses motion compensation to reduce temporal redundancy. The prediction error frames and the intracoded frames are encoded using an approach similar to the embedded zerotree wavelet (EZW) coder. An adaptive motion compensation (AMC) scheme is described to address error propagation problems. We show that, using our AMC scheme, the quality of the decoded video can be maintained at various data rates. We also describe an EZW approach that exploits the interdependency between color components in the luminance/chrominance color space. We show that, in addition to providing a wide range of rate scalability, our encoder achieves comparable performance to the more traditional hybrid video coders, such as MPEG1 and H.263. Furthermore, our coding scheme allows the data rate to be dynamically changed during decoding, which is very appealing for network-oriented applications.
RAR, 1018 KB
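Rate scalability in embedded coders comes from sending coefficient bitplanes most-significant first, so a decoder that stops early simply obtains a coarser reconstruction. The sketch below illustrates only that refinement principle, not the SAMCoW or EZW coder; the surrogate Laplacian coefficients and the plane count are assumptions.

```python
import numpy as np

def keep_bitplanes(coeffs, dropped):
    """Reconstruct integer coefficients from their most significant bitplanes only,
    discarding the lowest `dropped` bitplanes (what a truncated stream would omit)."""
    mags = np.abs(coeffs)
    truncated = (mags >> dropped) << dropped
    return np.sign(coeffs) * truncated

rng = np.random.default_rng(5)
# Surrogate "wavelet coefficients": signed integers with a peaked distribution.
coeffs = np.round(rng.laplace(scale=40.0, size=1000)).astype(np.int64)

for dropped in range(8, -1, -1):                 # decode progressively more planes
    approx = keep_bitplanes(coeffs, dropped)
    mse = float(np.mean((coeffs - approx) ** 2))
    print(f"bitplanes dropped: {dropped}  MSE: {mse:8.2f}")
```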
Iraj Sodagar, Hung-Ju Lee, Paul Hatrack, and Ya-Qin Zhang Scalable Wavelet Coding for Synthetic/Natural Hybrid Images
Abstract— This paper describes the texture representation scheme adopted for MPEG-4 synthetic/natural hybrid coding (SNHC) of texture maps and images. The scheme is based on the concept of multiscale zerotree wavelet entropy (MZTE) coding technique, which provides many levels of scalability layers in terms of either spatial resolutions or picture quality. MZTE, with three different modes (single-Q, multi-Q, and bilevel), provides much improved compression efficiency and fine-gradual scalabilities, which are ideal for hybrid coding of texture maps and natural images. The MZTE scheme is adopted as the baseline technique for the visual texture coding profile in both the MPEG-4 video group and SNHC group. The test results are presented in comparison with those coded by the baseline JPEG scheme for different types of input images. MZTE was also rated as one of the top five schemes in terms of compression efficiency in the JPEG2000 November 1997 evaluation, among 27 submitted proposals.
RAR, 949 KB
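Spatial scalability falls out of the wavelet pyramid itself: the coarsest approximation subband is already a reduced-resolution picture, and each detail layer doubles the resolution. The minimal sketch below uses a plain Haar pyramid rather than MZTE's zerotree layers.

```python
import numpy as np

def haar2d(x):
    """One 2-D Haar analysis step: returns the approximation and the 3 detail bands."""
    lo = (x[:, 0::2] + x[:, 1::2]) / 2; hi = (x[:, 0::2] - x[:, 1::2]) / 2
    return ((lo[0::2] + lo[1::2]) / 2, (lo[0::2] - lo[1::2]) / 2,
            (hi[0::2] + hi[1::2]) / 2, (hi[0::2] - hi[1::2]) / 2)

def ihaar2d(ll, lh, hl, hh):
    rows, cols = ll.shape
    lo = np.empty((2 * rows, cols)); hi = np.empty((2 * rows, cols))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    out = np.empty((2 * rows, 2 * cols))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

rng = np.random.default_rng(6)
image = rng.integers(0, 256, size=(64, 64)).astype(np.float64)

# Build a 3-level pyramid; the base layer is the coarsest approximation.
layers, approx = [], image
for _ in range(3):
    approx, lh, hl, hh = haar2d(approx)
    layers.append((lh, hl, hh))

recon = approx                          # base spatial layer: an 8x8 picture
print("base layer resolution:", recon.shape)
for lh, hl, hh in reversed(layers):     # each enhancement layer doubles the resolution
    recon = ihaar2d(recon, lh, hl, hh)
    print("decoded resolution:", recon.shape)
assert np.allclose(recon, image)        # all layers together reproduce the full image
```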
Hong Man, Faouzi Kossentini, and Mark J. T. Smith A Family of Efficient and Channel Error Resilient Wavelet/Subband Image Coders
Abstract—We present a new wavelet/subband framework that allows the efficient and effective quantization/coding of subband coefficients in both noiseless and noisy channel environments. Two different models, one based on a zero-tree structure and another based on a quadtree and context-based modeling structure, are introduced for coding the locations of significant subband coefficients. Then, several multistage residual lattice vector quantizers are proposed for the quantization of such coefficients. The proposed framework features relatively simple modeling and quantization/coding structures that produce a bit stream containing two distinct bit sequences, which can then be protected differently according to their importance and channel noise sensitivity levels. The resulting wavelet/subband image coding algorithms provide good tradeoffs between compression performance and resilience to channel errors. In fact, experimental results indicate that for both noiseless and noisy channels, the resulting coders outperform most of the source–channel coders reported in the literature. More importantly, our coders are substantially more robust than all previously reported source–channel coders with respect to varying channel error conditions. This is a desired feature in low-bandwidth wireless applications.
RAR, 273 KB
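The multistage residual idea is easy to isolate: each stage quantizes the residual left by the previous stage, so the stages form an embedded refinement whose pieces could be protected differently. The sketch below uses scalar quantizers as a stand-in for the paper's lattice VQ stages; the step sizes are arbitrary.

```python
import numpy as np

def multistage_quantize(values, steps):
    """Quantize `values` in stages; stage k quantizes the residual of stage k-1.
    Returns the per-stage indices (what would be entropy coded) and the reconstruction."""
    residual = values.astype(np.float64)
    reconstruction = np.zeros_like(residual)
    stage_indices = []
    for step in steps:
        indices = np.round(residual / step)       # this stage's quantizer output
        stage_indices.append(indices.astype(np.int64))
        reconstruction += indices * step
        residual = values - reconstruction        # what the next stage refines
    return stage_indices, reconstruction

rng = np.random.default_rng(7)
coeffs = rng.laplace(scale=10.0, size=2000)
steps = [16.0, 4.0, 1.0]                          # coarse-to-fine stages

indices, recon = multistage_quantize(coeffs, steps)
partial = indices[0] * steps[0]                   # decoding the first stage only
print("1-stage MSE:", float(np.mean((coeffs - partial) ** 2)))
print("3-stage MSE:", float(np.mean((coeffs - recon) ** 2)))
```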
Zixiang Xiong, Kannan Ramchandran, Michael T. Orchard, and Ya-Qin Zhang A Comparative Study of DCT- and Wavelet-Based Image Coding
Abstract—We undertake a study of the performance difference of the discrete cosine transform (DCT) and the wavelet transform for both image and video coding, while comparing other aspects of the coding system on an equal footing based on state-of-the-art coding techniques. Our studies reveal that, for still images, the wavelet transform outperforms the DCT typically by about 1 dB in peak signal-to-noise ratio. For video coding, the advantage of wavelet schemes is less obvious. We believe that image and video compression algorithms should be addressed from the overall system viewpoint: quantization, entropy coding, and the complex interplay among elements of the coding system are more important than spending all the effort on optimizing the transform.
RAR, 66 KB
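A back-of-the-envelope version of such a comparison can be sketched as follows. It is only a crude proxy for the paper's study: a small synthetic image, an 8 x 8 block DCT against a three-level Haar transform, and keep-the-largest-coefficients truncation in place of real quantization and entropy coding.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n).reshape(-1, 1); m = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def block_dct(img, inverse=False, n=8):
    """Apply the 8x8 block DCT (or its inverse) over the whole image."""
    c = dct_matrix(n)
    out = np.empty_like(img)
    for r in range(0, img.shape[0], n):
        for s in range(0, img.shape[1], n):
            b = img[r:r + n, s:s + n]
            out[r:r + n, s:s + n] = c.T @ b @ c if inverse else c @ b @ c.T
    return out

def haar_step(x, inverse=False):
    """One orthonormal 2-D Haar analysis/synthesis step on a square array."""
    if not inverse:
        lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2); hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
        cols = np.hstack([lo, hi])
        lo2 = (cols[0::2] + cols[1::2]) / np.sqrt(2); hi2 = (cols[0::2] - cols[1::2]) / np.sqrt(2)
        return np.vstack([lo2, hi2])
    h = x.shape[0] // 2
    cols = np.empty_like(x)
    cols[0::2], cols[1::2] = (x[:h] + x[h:]) / np.sqrt(2), (x[:h] - x[h:]) / np.sqrt(2)
    out = np.empty_like(x)
    out[:, 0::2], out[:, 1::2] = (cols[:, :h] + cols[:, h:]) / np.sqrt(2), (cols[:, :h] - cols[:, h:]) / np.sqrt(2)
    return out

def wavelet(img, levels=3, inverse=False):
    out = img.copy()
    sizes = [img.shape[0] >> i for i in range(levels)]          # e.g. 64, 32, 16
    for size in (reversed(sizes) if inverse else sizes):
        out[:size, :size] = haar_step(out[:size, :size], inverse)
    return out

def keep_largest(coeffs, fraction=0.05):
    """Zero all but the largest-magnitude coefficients (a crude rate proxy)."""
    flat = np.abs(coeffs).ravel()
    thresh = np.sort(flat)[int((1 - fraction) * flat.size)]
    return np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

def psnr(a, b):
    return 10 * np.log10(255.0 ** 2 / np.mean((a - b) ** 2))

# Synthetic smooth 64x64 test image (a 2-D cosine ramp).
u = np.linspace(0, np.pi, 64)
image = 128 + 100 * np.outer(np.cos(u), np.cos(2 * u))

dct_rec = block_dct(keep_largest(block_dct(image)), inverse=True)
wav_rec = wavelet(keep_largest(wavelet(image)), levels=3, inverse=True)
print("block DCT PSNR:", round(psnr(image, dct_rec), 2), "dB")
print("3-level Haar PSNR:", round(psnr(image, wav_rec), 2), "dB")
```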
Nam Chul Kim, Ick Hoon Jang, Dae Ho Kim, and Won Hak Hong Reduction of Blocking Artifact in Block-Coded Images Using Wavelet Transform
Abstract—We propose a simple yet efficient method which reduces the blocking artifact in block-coded images by using a wavelet transform. An image is considered a set of one-dimensional signals, so all processing, including the wavelet transform, is performed one-dimensionally. The artifact-reduction operation is applied only to the neighborhood of each block boundary in the wavelet transform at the first and second scales. The key idea behind the method is to remove the blocking component, which appears as stepwise discontinuities at block boundaries. Each block boundary is classified into one of three classes: shade region, smooth edge region, or step edge region. Threshold values for the classification are selected adaptively for each coded image. The performance is evaluated for 512 × 512 images JPEG-coded at 30:1 and 40:1 compression ratios. Experimental results show that the proposed method yields not only a PSNR improvement of about 0.69–1.06 dB, but also subjective quality nearly free of the blocking artifact and edge blur.
RAR, 165 KB
Ricardo de Queiroz, C. K. Choi, Young Huh, and K. R. Rao Wavelet Transforms in a JPEG-Like Image Coder
Abstract—The discrete wavelet transform (DWT) is incorporated into the JPEG baseline system for image coding. The discrete cosine transform (DCT) is replaced by an association of two-channel filter banks connected hierarchically. JPEG block-scanning and quantization schemes are adopted, and JPEG’s entropy coder is used. The changes in scanning can be incorporated into the transform block in such a way that the only part that needs to be changed in a JPEG framework is to replace the DCT by the DWT. Objective results and reconstructed images are presented, demonstrating that the proposed coder outperforms JPEG and approaches the performance of more sophisticated and complex wavelet coders, while requiring neither full-image buffering nor a large increase in complexity.
RAR, 542 KB
Hiroyuki Katata, Norio Ito, Tomoko Aono and Hiroshi Kusao Object Wavelet Transform for Coding of Arbitrarily Shaped Image Segments
Abstract—In this paper, an approach to transforming an arbitrarily shaped image region is addressed. The proposed approach, called the object-based wavelet transform (OWT), is simple to implement and is a smooth extension of the regular wavelet transform (WT). OWT consists of two phases: an extrapolation phase for the regular WT and a coefficient-handling phase for eliminating the redundancy caused by the extrapolation. Experimental results confirm that the method performs comparably to other shape-adaptive approaches, with low complexity.
RAR, 167 KB
Stephen A. Martucci, Iraj Sodagar, Tihao Chiang, and Ya-Qin Zhang A Zerotree Wavelet Video Coder
Abstract—This paper describes a hybrid motion-compensated wavelet transform coder designed for encoding video at very low bit rates. The coder and its components have been submitted to MPEG-4 to support the functionalities of compression efficiency and scalability. Novel features of this coder are the use of overlapping block motion compensation in combination with a discrete wavelet transform followed by adaptive quantization and zerotree entropy coding, plus rate control. The coder outperforms the VM of MPEG-4 for coding of I-frames and matches the performance of the VM for P-frames while providing a path to spatial scalability, object scalability, and bitstream scalability.
RAR, 463 KB

Prepared by Sergey Grishin and Dmitry Vatolin