Lossless Coding

The entropy of the images is ~7.92 bits/pixel.

If we use an entropy-minimizing predictive coder, where

I^(x,y) = (7/8)*I(x-1,y) + (5/8)*I(x,y-1) - (1/2)*I(x-1,y-1),

then the entropy of the error image becomes ~5.90 bits/pixel.

If we instead try to use the temporal redundancy between consecutive images, i.e.:

e(t) = I(t) - I(t-1)

The entropy of the error image becomes ~5.76 bits/pixel. However, the main error occurs in the colon area, where there is a rapid transition from colon tissue to air. Temporal prediction does not improve much in that area; the local entropy may even be worse.
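A minimal sketch of how these three entropies can be measured, assuming the slices are loaded as integer numpy arrays (I0 and I1 are placeholder names for two consecutive slices):

    import numpy as np

    def entropy_bits(a):
        # First-order entropy in bits/pixel from the sample histogram.
        _, counts = np.unique(a.ravel(), return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def spatial_residual(I):
        # Residual of I^(x,y) = (7/8)I(x-1,y) + (5/8)I(x,y-1) - (1/2)I(x-1,y-1),
        # evaluated on the interior of the image (borders are left unpredicted).
        I = I.astype(np.float64)
        cur, left = I[1:, 1:], I[1:, :-1]
        up, diag = I[:-1, 1:], I[:-1, :-1]
        pred = (7/8)*left + (5/8)*up - (1/2)*diag
        return np.rint(cur - pred).astype(np.int64)

    # entropy_bits(I0)                               -> ~7.92 (raw image)
    # entropy_bits(spatial_residual(I0))             -> ~5.90 (predictive)
    # entropy_bits(I1.astype(int) - I0.astype(int))  -> ~5.76 (temporal)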

The intuition:

A block-wise transformation is required.

DCT:

Mean squared error, expressed in dB, is used as the distortion measure: D = 10*log10( (1/N) * sum_(x,y) (I(x,y) - J(x,y))^2 ).
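The same measure in code (a sketch; I is the original image, J the reconstruction):

    import numpy as np

    def dist_db(I, J):
        # D = 10*log10( (1/N) * sum (I - J)^2 ), i.e. the MSE in dB.
        err = I.astype(np.float64) - J.astype(np.float64)
        return 10.0 * np.log10(np.mean(err**2))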

Two approaches were taken:

 

1- With RLC:

The flow: DCT - Quantize - RLC+VLC - ReScale - IDCT

It was assumed that 4 bits is a good estimate for the control elements. As expected, this performs better than the no-RLC case as the compression rate increases.

2- Without RLC:

The flow: DCT - Quantize - VLC - ReScale - IDCT

Disadvantage: the DCT encoder is quite complex.

A series of tests with different quantization step sizes was run; the step size ranged from 2^0 to 2^10. Here is a summary of the results for each k (step size = 2^k):

k            0        1        2        3        4        5        6        7        8        9       10
RLC (bpp)    8.04     7.09     5.87     4.51     3.15     1.95     1.07     0.55     0.28     0.14     0.07
Rate         1.99     2.25     2.72     3.54     5.07     8.18    14.91    29       56.3    107.58   207
VLC (bpp)    5.74     4.97     4.09     3.20     2.34     1.57     0.96     0.55     0.31     0.16     0.09
Rate         2.78     3.22     3.91     4.99     6.82    10.14    16.55    28.7     51.2     94.8    175.7
Dist (dB)  -11.68    -5.67     0.34     6.26    11.93    17.18    21.81    25.73    29.31    32.64    35.95

(RLC/VLC rows: coded rate in bits/pixel; Rate rows: the corresponding compression ratio; Dist: MSE in dB.)
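A sketch of the loop behind this table. The 8x8 block size is an assumption (the notes do not state it), and the RLC/VLC stages are approximated by measuring the first-order entropy of the quantized coefficients:

    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_codec(I, k, bs=8):
        # Blockwise DCT -> uniform quantization with step 2^k -> rescale -> IDCT.
        # Returns the reconstruction J and the quantized coefficients Q.
        step = 2.0**k
        I = I.astype(np.float64)
        J = np.zeros_like(I)
        Q = np.zeros_like(I)
        for y in range(0, I.shape[0], bs):
            for x in range(0, I.shape[1], bs):
                blk = I[y:y+bs, x:x+bs]
                c = dctn(blk, norm='ortho')
                q = np.rint(c / step)                              # quantize
                Q[y:y+bs, x:x+bs] = q
                J[y:y+bs, x:x+bs] = idctn(q * step, norm='ortho')  # rescale + IDCT
        return J, Q

    # For each k, entropy_bits(Q) approximates the coded rate in bits/pixel
    # and dist_db(I, J) gives the Dist row (helpers sketched above).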

PCA-based transformation

D seemed to be ~30 dB for a compression rate of 1:16.
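A sketch of the PCA (Karhunen-Loeve) transform on image blocks. The 8x8 block size is an assumption; keeping d of the 64 coefficients per block gives a raw 64:d rate, so d = 4 corresponds to the 1:16 point:

    import numpy as np

    def pca_codec(I, d, bs=8):
        # Collect bs*bs blocks as vectors, compute the top-d principal
        # components, and reconstruct each block from d coefficients.
        H, W = (I.shape[0]//bs)*bs, (I.shape[1]//bs)*bs
        blocks = (I[:H, :W].astype(np.float64)
                  .reshape(H//bs, bs, W//bs, bs)
                  .transpose(0, 2, 1, 3).reshape(-1, bs*bs))
        mean = blocks.mean(axis=0)
        X = blocks - mean
        _, vecs = np.linalg.eigh(X.T @ X)   # eigenvalues in ascending order
        B = vecs[:, -d:]                    # top-d basis vectors
        rec = (X @ B) @ B.T + mean          # project, then reconstruct
        return (rec.reshape(H//bs, W//bs, bs, bs)
                   .transpose(0, 2, 1, 3).reshape(H, W))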

Block-wise Vector Quantization

Initial results: for a rate of 1:50, D is 37.96 dB. The advantage: the decoder is very simple.

Initial results for a rate of 1:100: with vector quantization, the entropy for 32 codewords was 3.6950.

For a 1:108.25 rate, D = 38 dB; for a 1:212.17 rate, D = 39.59 dB.
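A sketch of the block-wise VQ, with the codebook trained on the image's own blocks by k-means. The 4x4 block size is an assumption; with 32 codewords each index costs at most log2(32) = 5 bits, and the 3.6950 figure above is presumably the measured entropy of the index stream:

    import numpy as np
    from scipy.cluster.vq import kmeans2

    def vq_codec(I, n_codewords=32, bs=4):
        # Replace each bs*bs block by its nearest codeword; only the
        # codebook and the index stream need to be transmitted.
        H, W = (I.shape[0]//bs)*bs, (I.shape[1]//bs)*bs
        blocks = (I[:H, :W].astype(np.float64)
                  .reshape(H//bs, bs, W//bs, bs)
                  .transpose(0, 2, 1, 3).reshape(-1, bs*bs))
        codebook, labels = kmeans2(blocks, n_codewords, minit='++')
        rec = codebook[labels]
        out = (rec.reshape(H//bs, W//bs, bs, bs)
                  .transpose(0, 2, 1, 3).reshape(H, W))
        return out, labels

    # Decoding is a table lookup, which is why the decoder is simple.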

Motion Compensated Coding

This is definitely the right thing to do. Visually the reconstructed images look very good, and the compression rate is 512/(minimum entropy of the coded dx and dy). The entropy of the difference images is about 3.82; the entropy of dx is 2.45 and of dy is 2.65. The distortion turned out to be 28 dB for the first image; over the whole sequence the distortions are about 36 dB.

The average number of losslessly encoded blocks was 20, which is about 2.5% of the 900 blocks.
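A sketch of the block-matching step (full integer-pel search; the 0.1-pixel accuracy used below would require interpolating the reference on top of this; block size 16 and the +/-8 search window are assumptions):

    import numpy as np

    def motion_estimate(cur, ref, bs=16, r=8):
        # For each bs*bs block of the current frame, find the displacement
        # (dx, dy) into the reference frame that minimizes the SAD.
        H, W = (cur.shape[0]//bs)*bs, (cur.shape[1]//bs)*bs
        cur = cur.astype(np.float64); ref = ref.astype(np.float64)
        mv, err = [], np.zeros((H, W))
        for y in range(0, H, bs):
            for x in range(0, W, bs):
                blk = cur[y:y+bs, x:x+bs]
                best = (np.inf, 0, 0)
                for dy in range(-r, r+1):
                    for dx in range(-r, r+1):
                        yy, xx = y+dy, x+dx
                        if yy < 0 or xx < 0 or yy+bs > H or xx+bs > W:
                            continue
                        sad = np.abs(blk - ref[yy:yy+bs, xx:xx+bs]).sum()
                        if sad < best[0]:
                            best = (sad, dx, dy)
                _, dx, dy = best
                mv.append((dx, dy))
                err[y:y+bs, x:x+bs] = blk - ref[y+dy:y+dy+bs, x+dx:x+dx+bs]
        return np.array(mv), err

    # entropy_bits(err), entropy_bits(mv[:, 0]) and entropy_bits(mv[:, 1])
    # correspond to the difference-image, dx and dy entropies quoted above.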

 

The final experiment was to combine the motion-based estimator with a lossless, entropy-constrained ROI coder. Two block sizes were tried: 16 and 8. 16 is better for the motion estimation rate, but 8 is better in the sense that the resulting error within the blocks is much smaller.
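A sketch of the ROI decision, under the assumption that a block is flagged as ROI when its motion-compensated residual energy exceeds a threshold (the threshold value is not given in the notes):

    import numpy as np

    def select_roi(err, thresh, bs=16):
        # Flag residual blocks whose mean squared error exceeds the
        # threshold; those blocks go to the lossless coder.
        H, W = (err.shape[0]//bs)*bs, (err.shape[1]//bs)*bs
        roi = []
        for y in range(0, H, bs):
            for x in range(0, W, bs):
                if np.mean(err[y:y+bs, x:x+bs]**2) > thresh:
                    roi.append((y, x))
        return roi   # coded losslessly; everything else is coded lossily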

Exp with block size 16:

The average number of ROI blocks is 125, which is 12% of the total blocks (5 of these 125 are due to blocks with a large average error).

The entropy of dx and dy at 0.1-pixel accuracy: Edx = 2.28, Edy = 2.45.

The entropy of the difference image: 4.38 for the ROI, 5.24 for NBAD.

The total mean squared error over the images is about 33.71 dB, with lossless coding in the ROI.

 

Exp with block size 8:

The average number of ROI blocks is 300, which is 7.3% of the total blocks.

The entropy of dx and dy at 0.1-pixel accuracy: Edx = 1.82, Edy = 1.96.

The entropy of the difference image: 4.31 for the ROI, 4.82 for NBAD.

The total mean squared error over the images is about 30.3 dB, with lossless coding in the ROI.
