Abstract

With the development of technologies such as multimedia technology and information technology, a great deal of video data is generated every day. However, storing and transmitting big video data requires a large amount of storage space and network bandwidth because of its large scale. Therefore, the compression of big video data has become a challenging research topic. The performance of existing content-based video sequence compression methods is difficult to improve further. Therefore, in this paper, we present a fractal-based parallel compression method without content for big video data. First, in order to reduce computational complexity, a video sequence is divided into several fragments according to spatial and temporal similarity. Second, domain and range blocks are classified based on a color similarity feature to reduce the computational complexity within each video fragment. Meanwhile, the fractal compression method is deployed in a SIMD parallel environment to reduce compression time and improve the compression ratio. Finally, experimental results show that, compared with existing compression algorithms, the proposed method not only improves the quality of the recovered image but also improves the compression speed.

1. Introduction

With the rapid development of the Internet and intelligent mobile terminals, multimedia video and image applications are becoming more and more widespread. Video data is ubiquitous and plays a critical role in all aspects of people's lives, including urban security, medical care, education, communications, industrial production, and film and television. Since video applications generate massive amounts of video data every moment, the amount of global video data has exploded [1]. Big video data not only broaden the horizons of human beings and enable us to better experience and recognize the world around us but also contain a large amount of valuable information waiting to be explored [2].

In order to effectively store and manage big video data, efficient video compression technology has become crucial. The purpose of video compression is to maximize the compression ratio while maintaining a certain image quality [3]. Video compression technology is widely used today in applications such as digital cameras, USB cameras, video phones, video on demand, video conferencing systems, and digital surveillance systems. Meanwhile, many video coding methods including fractal coding have been improved, and some coding techniques have been adopted in video coding standards [4].

Existing content-based video compression methods have reached bottlenecks. However, rule-based fractal image compression is a promising image compression method. Its potentially high compression ratio has attracted the attention of many scholars [5]. Fractal image compression converts a digital image into a set of contractive affine transformations (CAT) according to the self-similarity of the image, and the parameters of the CAT are stored as the compressed file [6]. Moreover, the corresponding decompression process is very simple, which suits situations where an image is compressed once and decompressed many times, such as video on demand.

However, the traditional fractal video compression method performs frame-by-frame compression on the entire video without considering spatial-temporal similarity, which causes a lot of computational redundancy [7]. Moreover, since each range block is matched with all domain blocks, the large amount of calculation leads to high computational complexity. Therefore, the potential of existing fractal compression methods has not been fully exploited.

In this paper, we propose a novel fractal video compression method. First, an entire video is divided into several fragments according to the spatial-temporal similarity of video frames. Each fragment contains several similar video frames. Second, we classify range and domain blocks according to their color similarity feature within each fragment. Meanwhile, in order to compress big video data in real time, we propose a video compression framework with a dual-layer parallel structure. In the first layer of the parallel structure, the central server evenly allocates all video fragments to multiple processors for parallel compression. In the second layer, each processor allocates all categories of range and domain blocks to multiple computing nodes for simultaneous matching search.

The rest of the paper is organized as follows. Section 2 reviews related work on video compression methods and fractal image coding. Section 3 presents our proposed parallel fractal compression method for big video data and analyzes its computational complexity. In Section 4, in order to verify the effectiveness of the proposed method, the traditional fractal compression method and some recent image compression methods are used as comparisons. Experimental results show the effectiveness of our method. Finally, Section 5 concludes our work and describes the direction of future research.

2. Related Work

2.1. Video Sequence Compression Method

Since the 1980s, video coding technology has developed rapidly and soon became a hot research field. In March 2003, the two international standardization organizations ISO/IEC and ITU-T jointly developed the video coding standard H.264/AVC [8]. H.264/AVC achieved good results in many aspects such as coding efficiency, image quality, network adaptability, and error resilience. However, its coding algorithm had a high degree of complexity [9]. In the following years, many new technologies such as motion compensation, transformation, interpolation, and entropy coding demonstrated their superiority. Therefore, ISO/IEC and ITU-T jointly developed a new video coding standard, H.265/HEVC [4], in November 2013.

The video coding standards define the syntax and semantics of the code stream and constrain the decoder, while leaving the encoder design open. Therefore, in order to further improve compression efficiency, coding techniques are refined under the constraint of a standard-compliant code stream [10]. Existing research on coding optimization has mainly focused on two aspects: one is how coding efficiency can be further improved; the other is how coding complexity can be effectively reduced.

In order to improve coding performance, HEVC defined 35 intraprediction modes and advanced interframe interpolation techniques that exploit the spatial correlation of images [11]. Zhang et al. proposed a method for the recombination and prediction of reference frames with background modeling [12]. It effectively allocated resources and improved coding performance. Ugur et al. proposed an adaptive filtering technique; the design of the interpolation and deblocking filters improved coding efficiency [13]. Seo et al. proposed a rate control method to maximize coding efficiency [14].

The above methods mainly eliminated redundancy by adding a large number of coding modes, optimization parameters, and traversal searching techniques. Since the compression ratio and compression time can be improved by mining the visual redundancy in video, Zhang et al. proposed a block-adaptive residual preprocessing method based on the stereoscopic visual JND (just noticeable difference) model [15]. It effectively reduced unnecessary perceived redundancy without degrading visual quality. Luo et al. proposed an alternative perceptual video coding method to improve the existing H.264/advanced video coding (AVC) framework, which achieved significant bit savings while maintaining visual quality [16]. By combining video coding with visual perception, Wang et al. proposed a game-based efficient coding method and a bit allocation method, which effectively improved network adaptability and coding efficiency [17].

The compression algorithm of HEVC had a high complexity because many technologies were introduced in it, such as hierarchical variable-size coding units, multiscale prediction units, transform units, and multireference frame motion estimation. Thus, Guo et al. proposed a method to reduce the computational complexity of video compression standards [18]. It included an intraframe and a chroma search algorithm, which accelerated the prediction process of luminance and chromaticity macroblocks. Potluri et al. introduced a new 8-point DCT approximation that required 14 additions without multiplication [19]. Pan et al. proposed an efficient motion and disparity estimation algorithm to reduce the computational complexity of multiview video coding [20].

Today, it is difficult for content-based video compression technologies to achieve major breakthroughs. Most existing video compression methods improve compression quality by increasing the computational complexity of encoding; their computational complexity is generally high because a large number of parameters and operations need to be calculated. Therefore, an important research direction is to build an efficient coding framework. The method we propose better balances the trade-off between computational complexity and image quality.

2.2. Fractal Image Compression

Fractal image compression is a compression method based on CAT [21]. It was first proposed by Barnsley and Hurd in the mid-1980s according to the mathematical theory of fractal geometry. Then, Jacquin divided a coded image into small pieces of equal size and explored the mapping relationship between these pieces [22]. Fractal image coding has advantages such as a high compression ratio and independence from resolution. However, its compression time is very long because each range block needs to be compared with all domain blocks. The key to accelerating fractal coding is a well-designed search scheme. At present, improvements to fractal coding algorithms are mainly divided into two directions: sub-block classification and neighborhood search.

The sub-block classification method classifies image sub-blocks according to a certain characteristic, so that an intragroup search replaces the global search during matching. Jacquin divided image blocks into shade blocks, edge blocks, and midrange blocks according to visual geometry [23]. Jacobs et al. proposed a more refined classification method that obtained 72 types of sub-blocks by classifying image sub-blocks based on gray mean and variance [24]. Jiang et al. applied the k-means clustering algorithm to fractal image compression to cluster range and domain blocks [25]. Jaferzadeh et al. used pixel space and 1D-DCT vectors to implement fuzzy clustering, which improved the compression speed with equal decoding quality [26]. Wu et al. divided domain blocks into simple blocks and complex blocks [27]. In order to shorten the compression time, only complex blocks are coded, while a simple block is stored by its pixel mean and the coordinates of its upper left corner.

Assuming that the positions of optimal matching domain blocks are often concentrated near the range block, the neighborhood search method searches for the optimal matching block only in the neighborhood of a range block, which narrows the search range from global to local [28]. Chong et al. proposed an improved formula for prequantized nearest neighborhood search based on orthogonal projection and fractal transform parameters [29]. In addition, they derived an optimal adaptive scheme for approximating search parameters to improve the performance of the algorithm. Truong et al. proposed a new search strategy based on the spatial correlation of images [30]. Lin et al. implemented a neighborhood search by exploiting the phenomenon that blocks with similar edge shapes are concentrated in certain regions [31]. Wang et al. calculated and sorted the standard deviation of domain blocks, and each range block was limited to searching domain blocks with similar standard deviations [32].

Searching for the optimal matching block only in the neighborhood of a range block speeds up the compression process but ultimately results in a larger pixel difference between the decoded image and the original image. This strategy trades image quality for shorter compression time. The proposed method uses the idea of classification to narrow the matching range of range blocks while preserving image quality. Although there are many classification methods, the method we propose has a lower computational cost and satisfactory speed.

3. The Proposed Fractal Compression Methods

3.1. Traditional Fractal Compression Method for Video Sequence

The traditional fractal compression method for video sequences performs frame-by-frame compression on the entire video. In the traditional method, an image F is divided into nonoverlapping range blocks of the same size N × N.

Assuming that the size of the image F is M × M and the size of each range block is N × N, a 2N × 2N interception window is used to traverse along the horizontal and vertical directions of F by a given step δ. Each position of the interception window constitutes a domain block D. All domain blocks constitute a search space Ω = {D_1, D_2, …, D_ND}, where D_j is the j-th domain block. It is obvious that the number of range blocks is N_R = (M/N)² and the number of domain blocks is N_D = ((M − 2N)/δ + 1)².

For each domain block D_j, averaging each group of four neighboring pixels contracts it to an N × N pixel block D̂_j.

Then, eight isometric transformations T_1, …, T_8 are performed on each contracted domain block D̂_j to generate the codebook used in the matching operation.

For an arbitrary range block R, the optimal matching block is determined by (1), where T_k (k = 1, …, 8) are the eight isometric transformations, and s and o are the scaling and offset parameters of the affine transformation.

Equations (2) and (3) are used to calculate s and o, where r̄ and d̄ represent the mean intensities of the range block and the contracted domain block, ⟨·, ·⟩ is the inner product in Euclidean space, and ‖·‖ stands for the 2-norm. I is an identity matrix with the same size as the range and domain blocks.
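Although the exact forms of Equations (1)–(3) appear as displayed formulas in the typeset paper, a standard least-squares formulation consistent with the description above (writing R for the range block, D̂_j for the contracted domain block, T_k for the isometric transformations, and I for the identity matrix) is:

$$\min_{j,\,k,\,s,\,o}\; \bigl\| R - \bigl(s\,T_k(\hat{D}_j) + o\,I\bigr) \bigr\|^{2},$$
$$s = \frac{\bigl\langle R-\bar r I,\; T_k(\hat{D}_j)-\bar d I\bigr\rangle}{\bigl\| T_k(\hat{D}_j)-\bar d I \bigr\|^{2}},\qquad o = \bar r - s\,\bar d,$$

where r̄ and d̄ are the mean intensities defined in the text.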

For each range block R, the transform cluster obtained by (1), that is, the position (x, y) of the optimal matching domain block, the index k of the isometric transformation, and the parameters s and o, is its fractal code.
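To make the procedure concrete, the following Python sketch implements a plain single-frame fractal encoder of the kind described above (exhaustive global search). It is an illustrative reconstruction rather than the authors' Matlab implementation; the function names and default block sizes are ours.

import numpy as np

def contract(block):
    # Shrink a 2N x 2N domain block to N x N by averaging each 2 x 2 group of pixels.
    return (block[0::2, 0::2] + block[1::2, 0::2] +
            block[0::2, 1::2] + block[1::2, 1::2]) / 4.0

def isometries(block):
    # The eight isometric transformations: four rotations and their mirror images.
    out = []
    for k in range(4):
        r = np.rot90(block, k)
        out.append(r)
        out.append(np.fliplr(r))
    return out

def match(R, Dhat):
    # Least-squares scaling s and offset o for approximating R by s*Dhat + o,
    # together with the resulting squared matching error.
    r_mean, d_mean = R.mean(), Dhat.mean()
    rc, dc = R - r_mean, Dhat - d_mean
    denom = float((dc * dc).sum())
    s = float((rc * dc).sum()) / denom if denom > 0 else 0.0
    o = r_mean - s * d_mean
    err = float(((R - (s * Dhat + o)) ** 2).sum())
    return s, o, err

def encode_frame(img, N=4, step=1):
    # Exhaustive (traditional) fractal coding of one grayscale frame.
    # Brute force: only practical for small images or large steps.
    M = img.shape[0]
    codebook = []
    for y in range(0, M - 2 * N + 1, step):
        for x in range(0, M - 2 * N + 1, step):
            Dhat = contract(img[y:y + 2 * N, x:x + 2 * N])
            for k, T in enumerate(isometries(Dhat)):
                codebook.append((x, y, k, T))
    codes = []
    for ry in range(0, M, N):
        for rx in range(0, M, N):
            R = img[ry:ry + N, rx:rx + N]
            best = None
            for (x, y, k, T) in codebook:
                s, o, err = match(R, T)
                if best is None or err < best[0]:
                    best = (err, x, y, k, s, o)
            codes.append((rx, ry) + best[1:])
    return codes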

The computational complexity of the traditional fractal compression algorithm is given by Lemma 1.

Lemma 1. The computational complexity of the traditional fractal image compression algorithm is O(N_R · N_D), where N_R and N_D are the numbers of range and domain blocks, respectively.

Proof. In the traditional fractal image compression method, searching for the optimal matching block for each range block requires a global search of the codebook. The total numbers of range blocks and domain blocks are given by (4). Each range block must be compared 8·N_D times with all transformed domain blocks to find its optimal self-similar match. The total number of comparisons needed to complete the compression is therefore given by (5), which is proportional to N_R·N_D. Based on the discussion above, Lemma 1 is proved. Proof is finished.
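As a concrete illustration, under the settings used later in Section 4 (256 × 256 frames, 4 × 4 range blocks, 8 × 8 domain blocks, step 1) and assuming the usual counting behind (4) and (5), the orders of magnitude are:

$$N_R = \Bigl(\tfrac{256}{4}\Bigr)^{2} = 4096,\qquad N_D = (256-8+1)^{2} = 62001,\qquad 8\,N_R N_D \approx 2.0\times 10^{9}\ \text{comparisons per frame.}$$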

3.2. The Proposed Fractal Video Sequence Compression Method

There are two drawbacks of traditional fractal video compression. First, spatial-temporal similarity is not considered in the frame-by-frame compression method. Second, using the global search method to find the optimal matching block results in a long compression time. Therefore, we propose a novel video sequence compression method, which consists of two steps. First, the video sequence is divided into video fragments according to spatial-temporal similarity. Second, domain and range blocks are classified based on a color similarity feature within each fragment. A compression flow chart is shown in Figure 1. We explain the steps for fragmenting the video in Section 3.2.1 and describe the steps for classifying domain and range blocks in Section 3.2.2.

3.2.1. Video Content Classification Based on Spatial-Temporal Similarity

Video data is unstructured data with both temporal and spatial properties. Before compression, video data can be processed and structured according to spatial-temporal similarity. Therefore, we fragment the video sequence based on image content. Figure 2 shows three video fragments that belong to the same video sequence "Jogging" [33]. Each row represents a fragment. There are large differences between video frames of different fragments, while within a fragment the difference between sequential frames is small.

According to the similarity of video content, we use the color histogram method to fragment a video. The HSV color model is selected in this paper, as shown in Figure 3. The HSV color model is composed of three components, which are H (hue), S (saturation), and V (value). The human eye has different sensitivities to the three components. Therefore, the weights of the three components are modified to save storage space and reduce computational complexity.

H, S, and V are each divided into a number of quantization intervals. According to this quantization level, the three color components are synthesized into a one-dimensional color feature vector by (6).

The similarity of histograms is calculated by (7), where H_i and H_j represent the color feature vectors of frames f_i and f_j.

A large change occurs between two frames f_i and f_j if their histogram distance is larger than a certain threshold. In that case, f_i and f_j are divided into different video fragments according to (8).

Finally, the entire video is divided into f video fragments, where each video fragment contains several consecutive frame images.
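A minimal Python sketch of this fragmentation step is given below. The quantization levels, the histogram distance, and the threshold value are illustrative placeholders, since the paper's Equations (6)–(8) fix them precisely; OpenCV is used here only for the RGB-to-HSV conversion.

import numpy as np
import cv2

def hsv_feature(frame_bgr, bins=(8, 3, 3)):
    # Quantized HSV color histogram used as the frame feature vector.
    # The (8, 3, 3) quantization is illustrative, not the paper's exact levels.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins), [0, 180, 0, 256, 0, 256])
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-12)

def fragment_video(frames, threshold=0.3):
    # Cut the sequence whenever the histogram distance between neighbouring
    # frames exceeds the threshold (a stand-in for Equations (7) and (8)).
    fragments, current = [], [frames[0]]
    prev = hsv_feature(frames[0])
    for f in frames[1:]:
        feat = hsv_feature(f)
        dist = 0.5 * np.abs(feat - prev).sum()  # L1 histogram distance, illustrative
        if dist > threshold:
            fragments.append(current)
            current = []
        current.append(f)
        prev = feat
    fragments.append(current)
    return fragments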

3.2.2. Video Fragment Compression Method Based on Color Similarity

In order to accelerate the fractal compression speed and improve the decoding quality of the image, we combine the m sequential video frames of each video fragment into a whole image matrix F and classify domain blocks within F.

Figures 4(a) and 4(b) are two frames of the same video sequence. Block 1 is a range block in Figure 4(a). In the traditional frame-by-frame video compression algorithm, block 2 is the optimal matching domain block of block 1 in Figure 4(a). However, block 3 in Figure 4(b) is the optimal matching domain block of block 1 found by the proposed method. The mean squared error (MSE) is calculated by (9), where n is the number of pixels of the image, and f(i, j) and g(i, j), respectively, represent the gray values of images f and g at position (i, j). Block 3's MSE is 193, and block 2's MSE is 351. Therefore, block 3 is a better match than block 2.
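The standard form of the mean squared error consistent with the description of (9) is:

$$\mathrm{MSE}(f,g) = \frac{1}{n}\sum_{i}\sum_{j}\bigl(f(i,j)-g(i,j)\bigr)^{2}.$$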

The matching error of two image blocks a and b is calculated by (10), where the definitions of s, o, and the norm are the same as those in (2) and (3).

Theorem 1. For two matrices a and b of the same size, the relation given in (11) holds, where a(i, j) and b(i, j) are arbitrary elements of matrices a and b.

Proof. Substituting the definitions into (10), the matching error can be rewritten in the forms of (12) and (13), where n is the total number of pixels of a and b. Since the terms of (12) and (13) satisfy the stated bounds, (11) follows from (12) and (13). Based on the discussion above, Theorem 1 is proved. Proof is finished.

Let GA denote this approximate error, obtained by replacing a and b with the mean gray values of the gray matrices a and b; the matching error between a and b in (10) is then approximated by calculating GA. Assuming that GA is smaller than a certain value, the matching error between the two blocks does not differ much from this approximation. Therefore, we classify domain blocks by an automatic classification method according to Theorem 1.

 Classification Algorithm of Domain and Range Blocks (taking one frame as an example)
Input: domain blocks and range blocks
Output: domain block categories and range block categories
Set: the number of categories s and a set of threshold sequences TH
Repeat
 The mean gray value of the remaining domain blocks is calculated by Eq. (16) as the center of the i-th category;
 T is set to the median of the threshold sequence TH;
 The i-th category of domain blocks is initialized as empty;
 Repeat
  Remove a domain block D from the set of remaining domain blocks;
  GA between D and the center is calculated;
  If GA is smaller than T then
   D is added to the i-th category;
   D is removed from the set of remaining domain blocks;
  End
 Until each remaining domain block has been compared;
 If the i-th category contains too many domain blocks then
  T is replaced by a smaller value in TH;
  The i-th category is emptied;
  Perform the previous "Repeat" step again, until the category size is acceptable;
 End
 If the i-th category contains too few domain blocks then
  T is replaced by a larger value in TH;
  The i-th category is emptied;
  Perform the previous "Repeat" step again, until the category size is acceptable;
 End
 The i-th category of domain blocks is determined;
Until all domain blocks are classified and s categories are obtained;
Centers of the s categories of domain blocks are obtained by Eq. (16);
Repeat
 Remove a range block R from the set of range blocks;
 Distances between R and the s centers are calculated by Eq. (17);
 Select the smallest distance;
 R is assigned to the category with the smallest distance;
Until all range blocks are classified and s range block categories are obtained;
Algorithm 1

The mean gray value of all remaining domain blocks is calculated by (16) as the initial cluster center of the i-th category of domain blocks, where n is the number of remaining domain blocks and d(x, y) represents the gray value of a domain block at position (x, y).

The distance between a range block R and a category center is calculated by (17), where r(x, y) and c(x, y) represent the gray values of R and the center at position (x, y), respectively.

A matching search is performed between each category of range blocks and the same category of domain blocks; for example, the i-th category of range blocks is matched only with the i-th category of domain blocks. The matching processes of different categories of range blocks are performed independently.
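The following Python sketch illustrates the classification idea in a simplified form, assuming blocks are given as NumPy arrays. Algorithm 1's adaptive adjustment of the threshold from the preset sequence is collapsed into a single median threshold here, so this is a rough illustration rather than a faithful implementation.

import numpy as np

def classify_blocks(domain_blocks, range_blocks, thresholds, s_max=20):
    # Peel off categories of domain blocks whose mean gray value is close to the
    # current cluster center (analogue of Eq. (16) and the GA test), then assign
    # each range block to the category with the nearest center (analogue of Eq. (17)).
    remaining = list(domain_blocks)
    categories, centers = [], []
    T = float(np.median(thresholds))
    while remaining and len(categories) < s_max:
        center = np.mean([d.mean() for d in remaining])
        close = [d for d in remaining if abs(d.mean() - center) < T]
        if not close:
            close = remaining[:]          # nothing close: take all remaining blocks
        categories.append(close)
        centers.append(np.mean([d.mean() for d in close]))
        remaining = [d for d in remaining if not any(d is c for c in close)]
    range_categories = [[] for _ in categories]
    for r in range_blocks:
        idx = int(np.argmin([abs(r.mean() - c) for c in centers]))
        range_categories[idx].append(r)
    return categories, range_categories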

     Proposed Fractal Video Sequence Compression Algorithm
Input: video sequence
Output: fractal code of the video sequence
Repeat
 Take out m sequential frames of the video sequence and combine them into a large image matrix F for holistic compression. Divide F into non-overlapping N × N range blocks;
 A 2N × 2N window with a step size of δ is used to intercept domain blocks along F;
 Domain blocks are contracted by taking the mean of each group of four neighboring pixels;
 According to Algorithm 1, domain blocks are classified and the corresponding range block categories are obtained;
 Repeat
  Remove a range block R from the p-th range block category and set an initial value Error;
  Repeat
   Remove a domain block D from the p-th domain block category, perform the eight isometric transformations, and calculate the matching error according to Eqs. (2), (3), and (10);
   If the matching error is smaller than Error then
    Replace Error with the matching error;
   End
  Until all domain blocks in the p-th domain block category are completely searched;
  Store the fractal code of range block R;
 Until all range block categories have been matched with their corresponding domain block categories;
 Store the fractal code of the entire image F;
Until the entire image sequence is compressed and its fractal codes are obtained.
Algorithm 2

The computational complexity of the proposed algorithm is given by Theorem 2.

Theorem 2. The computational complexity of the proposed algorithm is O(N_R · N_D / s), where s is the number of categories of domain and range blocks.

Proof. Assume that all domain blocks are uniformly classified into s categories and, correspondingly, range blocks are classified into s categories. The number of domain blocks and range blocks in each category is given by (18). Each category of range blocks only needs to be matched with its corresponding domain block category. The number of comparisons within each category is given by (19), so the total number of comparisons is reduced by a factor of s compared with the global search. Based on the discussion above, Theorem 2 is proved. Proof is finished.

Since the matching range for each range block is narrowed down to one of the s categories of domain blocks instead of the traditional global search, the compression speed of the proposed algorithm is theoretically s times as fast as the traditional algorithm. To achieve an obvious improvement in compression speed, s is normally set to a relatively large value (10–20 categories in our experiments).

Inference 1. The compression speed of the proposed combined algorithm is s/m times as fast as the traditional algorithm, where m is the number of combined frames and s is the number of block categories.

Proof. Assume that the time required to compress a single frame by the traditional algorithm is t. When m frames are combined and compressed integrally, all domain blocks are divided into s categories and, correspondingly, all range blocks are divided into s categories.
Since the combined image contains m times as many range blocks and m times as many domain blocks as a single frame, and each range block is matched only within its own category, the compression time of the proposed combined algorithm is m²·t/s, that is, a mean time of m·t/s per frame. However, the time required to compress the m frames by the traditional algorithm is m·t, so the speed-up is m·t/(m²·t/s) = s/m.
Based on the discussion above, Inference 1 is proved. Proof is finished.

Inference 1 indicates that the compression speed of the proposed combined algorithm is determined by two factors: one is the number of combined frames m, and the other is the number of categories s of domain blocks. How these two factors are balanced determines the trade-off between compression time and image quality.
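As a worked example of this trade-off, using the reconstruction of Inference 1 above with the Seq1 setting of Section 4.2 (m = 4 combined frames, s = 20 categories):

$$T_{\text{combined}} \approx \frac{m^{2}t}{s} = \frac{16t}{20} = 0.8t, \qquad \text{speed-up} = \frac{mt}{m^{2}t/s} = \frac{s}{m} = \frac{20}{4} = 5.$$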

3.3. Parallel Framework for Big Video Data Compression

Parallel computing lays a methodological foundation for solving complex problems. Considering that the compression of each video fragment does not affect the others and that the matching searches of different categories of range blocks are independent of each other, the proposed algorithm is deployed on a SIMD parallel architecture to improve its efficiency. Thus, a double-layer parallel video compression framework is built, as shown in Figure 5. In the first layer of the parallel framework, all video fragments are allocated to multiple processors. In the second layer, all categories of range and domain blocks are allocated to multiple computing nodes.

     Parallel Video Sequence Compression Algorithm Based on Fractal
Input: video sequence
Output: fractal code of the video sequence
Fragment: video fragments, divided according to the spatial-temporal similarity of the video sequence, are allocated to multiple processors for compression.
 Classify: Combine the frames contained in a video fragment into a large image matrix F for overall compression;
  Divide F into non-overlapping N × N range blocks;
  A 2N × 2N window with a step size of δ is used to intercept domain blocks along F;
  Domain blocks are contracted by taking the mean of each group of four neighboring pixels;
  According to Algorithm 1, domain blocks are classified and the corresponding range block categories are obtained;
  Assign all categories of range blocks and the corresponding domain blocks to multiple compute nodes for matching search;
  Repeat
   Remove a range block R from the c-th range block category and set an initial value Error;
   Repeat
    Remove a domain block D from the c-th domain block category, perform the eight isometric transformations, and calculate the matching error according to Eqs. (2), (3), and (10);
    If the matching error is smaller than Error then
     Replace Error with the matching error;
    End
   Until all domain blocks in the c-th domain block category are completely searched;
   Store the fractal code of range block R;
  Until all range block categories have been matched with their corresponding domain block categories;
 Merge: Combine the results of the multiple compute nodes to obtain the fractal code of the video fragment;
Merge: Combine the results of the multiple processors to obtain the fractal code of the whole video;
Algorithm 3
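A compact Python sketch of the double-layer decomposition is shown below. For simplicity, the two layers (fragments over processors, block categories over compute nodes) are flattened into a single process pool of P × C workers; the matching_error helper and the pre-classified input format are our assumptions, standing in for Algorithms 1 and 2.

import numpy as np
from multiprocessing import Pool

def matching_error(r, d):
    # Least-squares affine matching error between a range block r and a
    # contracted domain block d (same form as the sketch in Section 3.1).
    rc, dc = r - r.mean(), d - d.mean()
    denom = float((dc * dc).sum()) or 1.0
    s = float((rc * dc).sum()) / denom
    o = r.mean() - s * d.mean()
    return float(((r - (s * d + o)) ** 2).sum())

def match_category(task):
    # One unit of work: one category of range blocks of one fragment is matched
    # against the same-numbered category of (contracted) domain blocks.
    fragment_id, category_id, range_cat, domain_cat = task
    return [(fragment_id, category_id,
             min(range(len(domain_cat)), key=lambda j: matching_error(r, domain_cat[j])))
            for r in range_cat]

def compress_video(classified_fragments, processors=4, nodes=4):
    # classified_fragments: one entry per fragment, each a pair
    # (range_categories, domain_categories) produced by a classification step
    # such as Algorithm 1.  Every (fragment, category) pair becomes an
    # independent task, so fragments (layer 1) and categories (layer 2) are
    # processed simultaneously.
    tasks = [(i, c, rc, dc)
             for i, (range_cats, domain_cats) in enumerate(classified_fragments)
             for c, (rc, dc) in enumerate(zip(range_cats, domain_cats))]
    with Pool(processors * nodes) as pool:
        results = pool.map(match_category, tasks)
    return [code for part in results for code in part]

In practice, a call to compress_video should sit under an `if __name__ == "__main__":` guard when using multiprocessing, and in the paper's framework the two layers run on physically separate processors and compute nodes rather than one local pool.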

The speed-up ratio and parallel efficiency are used to measure the performance of the parallel algorithm. Their definitions in this paper are given in Lemma 2.

Assume that the video sequence contains K frames and is divided into f fragments. All range blocks and domain blocks are classified into s categories in each fragment. Compression is deployed in a parallel environment of P processors with C compute nodes per processor.

Lemma 2. The speed-up ratio of the parallel algorithm is S = f·P·C·s/K, and its parallel efficiency is given by (21).

Proof. The speed-up ratio is defined as in (20), where T_s is the time required to compress the video sequence by the traditional fractal compression method and T_p is the time required by the parallel method. Let t be the time needed to compress a single frame by the traditional method; since the K/f frames of each fragment are combined and compressed as a whole, the time required for each fragment is (K/f)²·t.
Because each processor gets f/P fragments, the time required by each processor is (f/P)·(K/f)²·t = K²·t/(f·P).
After range blocks and domain blocks are classified and allocated to C compute nodes, the time required by each processor is reduced to T_p = K²·t/(f·P·s·C).
Because the video sequence contains K frames, the serial compression time is T_s = K·t.
Thus, the speed-up is S = T_s/T_p = f·P·s·C/K.
Parallel efficiency is defined as in (21), where S₀ is the absolute parallel speed-up that does not result from the classification of domain and range blocks and n is the total number of compute nodes. Since S₀ ≤ P·C and n = P·C, the parallel efficiency satisfies E = S₀/n ≤ 1.
Based on the discussion above, Lemma 2 is proved. Proof is finished.

The amount of computation required to compress arbitrary video data with a serial algorithm is considerably large. Compared with the traditional serial algorithm, the larger the number of processors, the higher the computing efficiency per unit and the higher the parallel computing efficiency. However, it is necessary to consider the actual availability of computing resources as well as the number of video fragments and the number of categories of domain blocks.

Theorem 3. The computational complexity of the parallel algorithm is O(K²·N_R·N_D/(f·P·s·C)).

Proof. According to Theorem 2, the computational complexity of compressing a single frame is O(N_R·N_D/s), where s represents the number of categories of domain and range blocks. Therefore, the computational complexity of the parallel algorithm is O((K/P)·(K/f)·N_R·N_D/(s·C)) = O(K²·N_R·N_D/(f·P·s·C)), where K/P is the number of video frames assigned to each processor and s/C is the number of block categories assigned to each compute node in this parallel environment.
Based on the discussion above, Theorem 3 is proved. Proof is finished.

Compared with the parallel method we proposed, the computational complexity of the traditional serial method is f·P·s·C/K times as much. A reasonable allocation of the number of fragments f, the number of categories s, and the numbers of processors P and compute nodes C can improve parallel efficiency while reducing computational complexity.
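For instance, with the Seq1 setting of Section 4.3 (K = 16 frames, f = 4 fragments, P = 4 processors, C = 4 compute nodes, s = 20 categories), this ratio is

$$\frac{f\,P\,s\,C}{K} = \frac{4\cdot 4\cdot 20\cdot 4}{16} = 80,$$

which matches the theoretical speed-up reported for Seq1 in Section 4.3.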

4. Experiments and Analysis

In this paper, experiments are performed on a computer with an Intel Core i5-4590 CPU and 12 GB of memory, and the operating environment is Matlab 2016a. Three standard grayscale image sequences are used: Walter Cronkite moving head, chemical plant flyover (close view), and chemical plant flyover (far view) [34]. They are renamed Seq1, Seq2, and Seq3, respectively. The image size is 256 × 256 × 8 bit. The proposed algorithm is compared with the traditional fractal video sequence compression algorithm from three aspects: comparison of single-frame image compression, comparison of sequential frame compression, and comparison of the traditional serial method with the proposed parallel method. In these experiments, the range block size is 4 × 4, the domain block size is 8 × 8, and both the horizontal and vertical steps are 1. Finally, range blocks of size 8 × 8 and domain blocks of size 16 × 16 are added to the experiment when comparing with the AVQIS algorithm [35].

In this paper, a parallel environment consisting of one manager and four processors with four compute nodes per processor is constructed when the traditional serial algorithm is compared with the parallel algorithm. First, the central server fragments the entire video sequence according to its spatial-temporal similarity and sequentially distributes the fragments to processors numbered 1–4 for compression. Then, all range and domain blocks are automatically classified within each video fragment and assigned to four compute nodes for independent processing.

We evaluate the quality of the decoded image by (22). The peak signal-to-noise ratio (PSNR) is the logarithm of the ratio of the squared peak gray level to the mean squared error (MSE) between the original image and the decoded image, where the peak gray level is the upper limit of the gray range and b is the number of bits used to store each pixel. The higher the PSNR value, the lower the distortion.
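The standard PSNR expression consistent with this description of (22) is

$$\mathrm{PSNR} = 10\log_{10}\frac{(2^{b}-1)^{2}}{\mathrm{MSE}},$$

with b = 8 for the 8-bit sequences used here.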

Meanwhile, we use the compression ratio (CR) to measure compression performance in the comparison with the AVQIS algorithm. The compression ratio is defined in (23) as the ratio of the size of the original data to that of the compressed data, where the compressed size is determined by the number of range blocks and the quantization levels of the fractal parameters.

4.1. Comparison of Single-Frame Image Compression

Four sequential frames of Seq1, Seq2, and Seq3 are selected to perform frame-by-frame compression. The mean compression time and PSNR of the 4 frames are calculated. The threshold sequence is shown in Table 1. With the automatic classification method, domain blocks are divided into 10 categories and, correspondingly, range blocks are divided into 10 categories. The restored images obtained by the traditional method and the proposed method are shown in Table 2. The experimental data are shown in Table 3.

From Tables 2 and 3, it can be seen that the quality of the reconstructed images obtained by the proposed algorithm and the traditional algorithm is not obviously different from that of the uncompressed images, which indicates that the proposed algorithm is feasible. Moreover, compared with the traditional fractal compression algorithm, although the PSNR of the proposed algorithm is slightly decreased, the compression speed is several times as fast (Table 3). According to Theorem 2 and its analysis, since range and domain blocks are divided into ten categories, the compression speed should theoretically increase by a factor of ten. Because the classification of domain and range blocks is nonuniform, the actual speed-up ratio of the proposed method is lower than the theoretical value of ten.

4.2. Comparison of Sequential Frame Compression

First, 4 sequential frames of Seq1, Seq2, and Seq3 are compressed by the traditional fractal method. Second, they are compressed by the proposed method frame by frame. Finally, the 4 frames are combined and compressed together by the proposed method. The threshold sequence is shown in Table 1. After combination, the domain and range blocks of Seq1, Seq2, and Seq3 are divided into 20, 15, and 15 categories, respectively. The comparison of the performance of the three methods is shown in Table 4, where PSNR is the mean over the 4 frames.

From Tables 4 and 5, it can be seen that the image quality of the decoded images obtained by the three algorithms is comparable. Suppose that compressing one frame takes time t. According to Inference 1, when 4 frames are compressed together, the combined image contains four times as many range and domain blocks, so the time required becomes 16t. Since the classification of domain and range blocks speeds up the compression process, the compression time is reduced to 16t/s. In this experiment, the domain and range blocks of Seq1, Seq2, and Seq3 are divided into 20, 15, and 15 categories, respectively. Therefore, the theoretical compression time is further reduced to 16t/20, 16t/15, and 16t/15, that is, the theoretical speed-up should be 5, 3.75, and 3.75 times, respectively. When 4 frames are combined for compression, the measured compression speed is somewhat lower than these theoretical values (Table 4) because the classification of domain and range blocks is nonuniform.

Compared with single-frame compression, the speed-up ratio of the combination algorithm is reduced. However, its image quality is closer to the original image.

4.3. Comparison of Traditional Serial and Proposed Parallel Compression

Video sequence Seq1 contains 16 frames. In the double-layer parallel framework, the distribution of its compression task is shown in Figure 6. Seq1 is divided into 4 video fragments, and processors P1, P2, P3, and P4 each get 4 frames. Then, all domain blocks are divided into 20 categories, and the 4 compute nodes C1, C2, C3, and C4 each obtain 5 categories of range blocks. Video sequence Seq2 includes 32 frames and Seq3 includes 11 frames; their distributions are shown in Figures 7 and 8. The threshold sequence is shown in Table 6.

Serial computing is used in the traditional fractal video compression method. The performance data of the traditional serial method and the proposed parallel method are shown in Table 7, where the compression time is the total time required to compress the whole video sequence and PSNR is the mean over all frames of each video sequence.

Table 7 shows that, under the premise that PSNR remains comparable, the mean compression speed of the parallel algorithm reaches more than 40 times that of the traditional serial algorithm, and the parallel efficiency averages 67%. In this experiment, the numbers of processors, compute nodes, and fragments are all four; only the number of frames and the number of categories of domain blocks differ. Video sequences Seq1, Seq2, and Seq3 contain 16, 32, and 11 frames, respectively, and their domain and range blocks are divided into 20, 20, and 12 categories, respectively. According to Lemma 2, the theoretical speed-up ratios for the three sequences are 80.00, 40.00, and 69.82, respectively, so the theoretical average speed-up ratio is about 63. The actual speed-up is lower than this theoretical mean because of communication cost. In addition, since the classification of range and domain blocks is nonuniform, there is a further difference between the actual and theoretical speed-ups.

4.4. Comparison of the AVQIS Algorithm and Proposed Algorithm

The AVQIS algorithm, proposed by Pizzolante et al., is an extension of the AVQ algorithm. It utilizes the correlation between sequential frames of an image sequence to perform lossy compression.

Table 8 shows the recovered images obtained by decompressing the video sequences compressed with the proposed algorithm and the AVQIS algorithm. Table 9 shows the CR and PSNR obtained by compressing Seq1, Seq2, and Seq3 with the proposed algorithm and the AVQIS algorithm.

From Tables 8 and 9, it can be seen that, compared with the AVQIS algorithm, the compression ratio of Seq1 is lower, but the PSNR is obviously improved when the range block size is 4 × 4. The compression ratios of Seq2 and Seq3 are higher than those of the AVQIS algorithm. When the range block size is 8 × 8, the compression ratios of all sequences are higher than those of the AVQIS algorithm.

5. Conclusions

In order to efficiently compress big video data and reduce the computational complexity of the traditional fractal video compression method, a double-layer parallel video compression framework based on fractals was proposed. In the first layer of the parallel structure, a video sequence was divided into many fragments according to its spatial-temporal similarity. Then, video fragments were allocated to multiple processors for simultaneous compression. In the second layer, a novel fractal video compression method was used to compress video fragments. All domain and range blocks were classified within each fragment. Processors distributed all domain and range blocks to multiple computing nodes for parallel processing.

Experimental results showed that compared with the traditional fractal video compression method, the proposed parallel method significantly improved the compression speed when the image quality was similar. In addition, the compression ratio was higher when compared with the AVQIS algorithm. It verified the effectiveness of the proposed method.

Future research directions include two aspects. One is that we will construct new features to further improve the video fragmentation step and the classification of domain blocks. The other is that, based on the concepts of cloud computing and fog computing, we will strive to propose a more effective parallel fractal compression method for real-time big video data processing.

Data Availability

Image sequences used to support the findings of this study are publicly available at http://sipi.usc.edu/database/database.php?volume=sequences. Three sequences consist of 16, 32, and 11 256 × 256 images, respectively.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The research is supported by the National Natural Science Foundation of China (Grant No. 61502254), Program for Young Talents of Science and Technology in universities of the Inner Mongolia Autonomous Region (Grant No. NJYT-18-B10), and open funds of the Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education (Grant No. 93K172018K07).