Article

Block-Based Steganography Method Using Optimal Selection to Reach High Efficiency and Capacity for Palette Images

Department of Computer Science, National Chiao Tung University, Hsinchu 300, Taiwan
* Author to whom correspondence should be addressed.
Submission received: 11 October 2020 / Revised: 1 November 2020 / Accepted: 2 November 2020 / Published: 4 November 2020

Abstract

The primary goal of steganographic methods is to develop statistically undetectable methods with high steganographic capacity. Embedding efficiency is one measure of undetectability. Block-based steganography methods have been proposed to achieve higher embedding efficiency under limited embedding capacity. However, in these methods, some blocks with larger embedding distortions are skipped, and a location map is usually incorporated to record the embedding status of each block; this reduces the embedding capacity for secret messages. In this study, we propose a block-based steganography method without a location map for palette images. In this method, multiple secret bits can be embedded in a block by modifying at most one pixel with minimal embedding distortion; this enables every block to be used for data embedding, so the method provides higher embedding capacity. Furthermore, under the same capacity, the estimated and experimental embedding efficiencies of the proposed method are compared with those of Imaizumi et al.’s and Aryal et al.’s methods; the comparisons indicate that the proposed method has higher embedding efficiency.

1. Introduction

Steganography is a technique for secret communication in which secret messages are embedded into common digital media such as images, audio, and video. The original medium is called the “cover,” and the embedded medium is called the “stego.” Of these media, images are the most popular because they are widely transmitted over the Internet. Many image-based steganography methods [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17] have been proposed and applied to raw [1,2,3,4,5], JPEG [6], VQ [7], and absolute moment block truncation coding (AMBTC) [8] images. However, their application to palette images [9,10,11,12,13,14,15,16,17] is limited. Palette images [18] have gained popularity. The graphics interchange format (GIF) is a type of palette image frequently used by young consumers to communicate and express themselves, and GIF animations are widely used on social media platforms such as Tumblr, Twitter, Facebook, and Line [18,19].
A palette image includes a palette and an index table. The palette consists of a few (generally no more than 256) colors. Each pixel is represented by a color in the palette, and the index table records each pixel’s corresponding color index. In general, the order of colors in the palette is random. Because only a limited number of colors are used in palette images, modifying a pixel’s color index can considerably distort the pixel’s color. Thus, embedding data into a palette image is more challenging than embedding data into other image formats.
Embedding methods are of two types. In pixel-based methods (PBMs) [9,10,11,12,13,14], a pixel is used as a unit to embed secret data. In block-based methods (BBMs) [15,16,17], a block is used as a unit to embed secret data.
Fridrich [11] proposed a method in which the colors of a palette are partitioned into two sets with different parities ($(R + G + B) \bmod 2$) to represent one secret bit. The capacity of this method is 1 bit per pixel (bpp). Under the same embedding capacity, Fridrich and Du [12] proposed an optimal parity assignment algorithm to improve image quality. Tzeng et al. [13] proposed an adaptive data-hiding scheme for palette images based on local complexity; an embedding capacity of approximately 0.1 bpp is achieved for most images [20]. Tanaka et al. [14] proposed an algorithm to partition the colors into $2^k$ sets, each representing one $k$-bit secret value. For each pixel, according to the secret data, the closest color in the corresponding set is determined to replace the pixel’s color. The embedding capacity is thereby increased to $k$ bpp.
In [9,10,11,12,14], each pixel was used to embed secret data in PBMs. Therefore, the embedding distortion for a pixel may be large. In [13], an adaptive scheme was provided to skip pixels with large distortions. However, this reduced the capacity.
Imaizumi and Ozawa [15] proposed a BBM that embeds $k$ bits into each $3 \times 3$ block. In this method, all colors in the palette are first reordered according to the color Euclidean distance. Next, for each block, the sum of all color indices is calculated and divided by $2^k$ to obtain a remainder. Then, some pixels in the block are selected, and their color indices are adjusted so that the remainder equals the value of the $k$ secret bits to be embedded, with the least embedding distortion. The embedding capacity is $k/9$ bpp. Imaizumi and Ozawa [16] adopted a block size of $2 \times 2$ to improve the embedding capacity of their previous method [15]. Subsequently, Aryal et al. [17] used a block size of $1 \times (2^{k-2}+1)$ to further improve the embedding capacity of [16] and replaced the RGB color space with the L*a*b* color space to improve image quality. In all of these BBMs [15,16,17], the palette of an image is first reordered. Then, for each block, under the consideration of minimum embedding distortion, several pixels are selected and their color indices are modified by $+1$ or $-1$. In the reordered palette, two colors with adjacent color indices may not be similar, so some selected pixels may have large embedding distortion, which degrades image quality after data embedding. Furthermore, the color indices of some selected pixels may overflow or underflow after data embedding. To avoid using such blocks, an additional location map is required to record whether a block is used to embed secret data. This reduces the embedding capacity for secret data because some blocks are skipped and some are used to embed the location map.
To address the aforementioned problem, a novel BBM for palette images is proposed in this study. First, Tanaka et al.’s $k$-bit parity assignment [14] is used to assign a $k$-bit parity to each color in the palette of an image. Then, for each block, at most one pixel is modified for data embedding, and no location map is required for data extraction. The proposed method therefore provides higher capacity.
Note that the primary goal of steganographic methods [21] is to develop statistically undetectable methods with high steganographic capacity. Steganographic capacity [21] is defined as the maximum number of bits that can be hidden in a given cover work such that the probability of detection by an adversary is negligible. Embedding efficiency [21] is one measure of undetectability; it is defined as the number of embedded random message bits per embedding change. Crandall [22] noted that the most obvious way to reduce the possibility of detecting hidden information is to reduce the change density in the cover image. Because decreasing the change density raises the embedding efficiency, higher embedding efficiency implies lower detectability. Because BBMs reduce the change density, their detectability is lower.
To measure the undetectability of the proposed method, the chi-square attack [23], RS steganalysis [24], and embedding efficiency are used. Estimated and experimental results demonstrate that the proposed method provides higher undetectability than the aforementioned BBMs [16,17].
The remainder of this paper is organized as follows. Related works are introduced in Section 2. The proposed method is presented in Section 3. The analysis of embedding capacities is given in Section 4. Experimental results are provided in Section 5. Finally, conclusions are presented in Section 6.

2. Related Works

In this section, the parity assignment method proposed by Tanaka et al. [14], which is used in the proposed method, is first described. Then, the method proposed by Aryal et al. [17], which is compared with the proposed method, is described.

2.1. Parity Assignment Method Proposed by Tanaka et al.

Tanaka et al. proposed an algorithm to assign a $k$-bit parity to each color in the palette of an image. The algorithm contains two parts. In the first part, $2^k$ closest colors ($c_0, \ldots, c_{2^k-1}$) are determined sequentially from the palette, and $c_i$ is assigned the $k$-bit parity value $i$. Next, $2^k$ sets ($S_0, \ldots, S_{2^k-1}$) are established, that is, $S_i = \{c_i\}$. In the second part, from the unassigned colors, the color $c$ with the minimal distance from the last assigned color is determined. Next, the minimal distance $d_p$ of $c$ from each set $S_p$ is evaluated. Then, the maximal distance $d_{p'}$ among $\{d_p \mid p = 0, \ldots, 2^k-1\}$ is determined, and $p'$ is assigned as the parity of color $c$. The procedure is repeated until all colors are assigned.
In the algorithm, the distance $d_{c_i, c_j}$ between two colors $c_i = (r_i, g_i, b_i)$ and $c_j = (r_j, g_j, b_j)$ is defined using the following expression:
$d_{c_i, c_j} = \sqrt{(r_i - r_j)^2 + (g_i - g_j)^2 + (b_i - b_j)^2}.$
The details of the parity assignment are described as follows:
Step 1:
Let $A$ be the set of all colors in the palette. Let $S_q = \emptyset$, and assign parity $q$ to $S_q$, $q = 0, \ldots, 2^k - 1$.
Step 2:
Let $h = 0$. Find an initial color $c_h$ using the following expression:
$c_h = \arg\min_{c_i \in A} (256^2 r_i + 256 g_i + b_i),$
$S_h = S_h \cup \{c_h\}$; $A = A \setminus \{c_h\}$.
Step 3:
Let $h = h + 1$. Find color $c_h$ in $A$ with the minimum distance from $c_{h-1}$ using the following equation:
$c_h = \arg\min_{c_i \in A} d_{c_{h-1}, c_i},$
$S_h = S_h \cup \{c_h\}$; $A = A \setminus \{c_h\}$.
Step 4:
Repeat Step 3 until $h = 2^k - 1$.
Step 5:
Let $h = h + 1$. Find color $c_h$ in $A$ with the minimum distance from $c_{h-1}$ using Equation (3).
Step 6:
In each $S_q$, $q = 0, \ldots, 2^k - 1$, find the color in $S_q$ with the minimum distance $md_q$ from $c_h$ using the following expression:
$md_q = \min_{c_i \in S_q} d_{c_h, c_i}.$
Step 7:
Find the maximum distance $md_{q'}$ among all $md_q$ using the following expression:
$q' = \arg\max_q md_q,$
$S_{q'} = S_{q'} \cup \{c_h\}$, $A = A \setminus \{c_h\}$.
Step 8:
Repeat Steps 5 to 7 until A is empty.
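To make the two-part procedure concrete, the following Python sketch implements Steps 1–8 for a palette given as a list of (r, g, b) tuples. The function names, the data layout, and the Euclidean distance helper are illustrative assumptions of this sketch rather than part of the original description, and the palette is assumed to contain at least $2^k$ colors.

import math

def color_dist(a, b):
    """Euclidean distance between two RGB colors (Equation (1))."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def tanaka_parity_assignment(palette, k):
    """Assign a k-bit parity (0 .. 2**k - 1) to every color in `palette`,
    following Steps 1-8 of Section 2.1; returns parity[i] for palette[i]."""
    n_sets = 2 ** k
    unassigned = set(range(len(palette)))
    parity = [None] * len(palette)
    sets = [[] for _ in range(n_sets)]

    # Steps 1-2: the initial color minimizes 256^2*r + 256*g + b and joins S_0.
    h = min(unassigned,
            key=lambda i: 256**2 * palette[i][0] + 256 * palette[i][1] + palette[i][2])
    parity[h] = 0
    sets[0].append(h)
    unassigned.remove(h)
    last = h

    # Steps 3-4: the next 2^k - 1 colors, each closest to the previously
    # assigned color, receive parities 1, ..., 2^k - 1.
    for q in range(1, n_sets):
        h = min(unassigned, key=lambda i: color_dist(palette[i], palette[last]))
        parity[h] = q
        sets[q].append(h)
        unassigned.remove(h)
        last = h

    # Steps 5-8: every remaining color joins the set whose closest member
    # is farthest from it (argmax over the per-set minimum distances).
    while unassigned:
        h = min(unassigned, key=lambda i: color_dist(palette[i], palette[last]))
        md = [min(color_dist(palette[h], palette[j]) for j in s) for s in sets]
        q_star = max(range(n_sets), key=lambda q: md[q])
        parity[h] = q_star
        sets[q_star].append(h)
        unassigned.remove(h)
        last = h
    return parity

The returned list maps each palette index to its $k$-bit parity and can be reused directly in the embedding and extraction processes of Section 3.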

2.2. Aryal et al.’s Method

Aryal et al.’s method [17] improves the capacity and image quality of Imaizumi et al.’s method [16]. In this method, the color $c$ of each pixel in the RGB color space is first converted to a color $(L^*, a^*, b^*)$ in the L*a*b* color space, and the CIEDE2000 formula [25] is used to calculate color distances. The method includes three processes: palette reordering, block embedding, and extraction. In the method, the $k$-bit messages are embedded into each $1 \times (2^{k-2}+1)$ pixel block. For convenience, $k = 3$ is assumed in the following description.

2.2.1. Palette Reordering Process

Step 1:
Let the original palette $P$ be $\{c_0, \ldots, c_{255}\}$, the new palette $P'$ be $\{c'_0, \ldots, c'_{255}\}$, and $A = P$.
Step 2:
Set $h = 0$. Find the first color $c'_h$ in $P'$ using the following expression:
$c'_h = \arg\min_{c_i \in A} L_i^*,$
where $L_i^*$ is the luminance component of color $c_i$ in the L*a*b* color space.
Step 3:
Let $A = A \setminus \{c'_h\}$, $h = h + 1$. Find color $c'_h$ in $A$ with the minimum distance from $c'_{h-1}$ using the following expression:
$c'_h = \arg\min_{c_i \in A} \Delta E_{c'_{h-1}, c_i},$
where $\Delta E_{c'_{h-1}, c_i}$ is the CIEDE2000 color distance [25] between $c'_{h-1}$ and $c_i$.
Step 4:
Repeat Step 3 until $h = 255$.
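A minimal sketch of this greedy reordering is given below. The luminance and delta_e callbacks stand in for the $L^*$ component and the CIEDE2000 distance (e.g., taken from a color-science library); these interfaces are assumptions of the sketch, not part of the original method.

def reorder_palette(palette, luminance, delta_e):
    """Greedy reordering of Section 2.2.1: start from the color with the
    smallest L*, then repeatedly append the unused color with the smallest
    CIEDE2000 distance to the last chosen one."""
    remaining = set(range(len(palette)))
    first = min(remaining, key=lambda i: luminance(palette[i]))
    order = [first]
    remaining.remove(first)
    while remaining:
        nxt = min(remaining, key=lambda i: delta_e(palette[order[-1]], palette[i]))
        order.append(nxt)
        remaining.remove(nxt)
    return [palette[i] for i in order]   # the reordered palette P'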

2.2.2. Embedding Process

Step 1:
Divide the cover image into nonoverlapping blocks, each of which contains 1 × 3 pixels.
Step 2:
Reorder palette $P$ to obtain a new palette $P'$ using the palette reordering process.
Step 3:
Consider a block B with three pixels ( B 0 , B 1 , B 2 ) and three secret data bits w .
Step 4:
Let $I_i$ be the color index of $B_i$ in $P'$, $i = 0, 1, 2$.
Step 5:
Calculate $T_0$ and $T_1$ using the following equations:
$T_0 = I_0 \bmod 2.$
$T_1 = (I_1 + I_2) \bmod 4.$
Step 6:
Obtain t 0 and t 1 based on Table 1 and secret data w .
Step 7:
If T 0 = t 0 and T 1 = t 1 , then go to Step 10.
Step 8:
If $|T_0 - t_0| = 1$, then $I_0$ is changed by $+1$ or $-1$, depending on whether $\Delta E_{c'_{I_0+1}, c'_{I_0}}$ or $\Delta E_{c'_{I_0-1}, c'_{I_0}}$ is smaller.
Step 9:
If $|T_1 - t_1| \neq 0$, then three cases are possible:
Case 1:
If $|T_1 - t_1| = 2$, then both $I_1$ and $I_2$ are changed by $+1$ or by $-1$, depending on whether $\Delta E_{c'_{I_1+1}, c'_{I_1}} + \Delta E_{c'_{I_2+1}, c'_{I_2}}$ or $\Delta E_{c'_{I_1-1}, c'_{I_1}} + \Delta E_{c'_{I_2-1}, c'_{I_2}}$ is smaller.
Case 2:
If $T_1 - t_1 = 1$ or $T_1 - t_1 = -3$, then either $I_1$ or $I_2$ is changed by $-1$, depending on whether $\Delta E_{c'_{I_1-1}, c'_{I_1}}$ or $\Delta E_{c'_{I_2-1}, c'_{I_2}}$ is smaller.
Case 3:
If $T_1 - t_1 = -1$ or $T_1 - t_1 = 3$, then either $I_1$ or $I_2$ is changed by $+1$, depending on whether $\Delta E_{c'_{I_1+1}, c'_{I_1}}$ or $\Delta E_{c'_{I_2+1}, c'_{I_2}}$ is smaller.
Step 10:
Go to Step 3 until all secret data are embedded.
If any $\Delta E > 5$ occurs during the embedding process, the current block is skipped, and no secret data are embedded in it. An additional location map is therefore required to record whether each block is used to embed secret data.
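The following sketch condenses Steps 5–9 for one $1 \times 3$ block. The lookup table reproduces Table 1, while the function names, the delta_e distance callback (giving the CIEDE2000 distance between two reordered-palette indices), and the omission of the $\Delta E > 5$ and index overflow/underflow checks that trigger block skipping are simplifying assumptions of this sketch.

# Table 1: secret value w (3 bits) -> required pair (t0, t1).
T1_TABLE = {7: (0, 3), 6: (0, 2), 5: (0, 1), 4: (0, 0),
            3: (1, 0), 2: (1, 1), 1: (1, 2), 0: (1, 3)}

def aryal_block_targets(indices, w):
    """Current (T0, T1) and target (t0, t1) for a 1x3 block (Steps 5-6).

    indices: [I0, I1, I2], pixel indices in the reordered palette P'.
    w: 3-bit secret value (0..7)."""
    i0, i1, i2 = indices
    return (i0 % 2, (i1 + i2) % 4), T1_TABLE[w]

def aryal_adjustments(indices, w, delta_e):
    """Steps 7-9: return a new index triple whose (T0, T1) equals (t0, t1)."""
    (T0, T1), (t0, t1) = aryal_block_targets(indices, w)
    i0, i1, i2 = indices
    if T0 != t0:                         # Step 8: flip the parity of I0
        i0 += 1 if delta_e(i0 + 1, i0) < delta_e(i0 - 1, i0) else -1
    d = T1 - t1
    if abs(d) == 2:                      # Case 1: move I1 and I2 together
        step = 1 if (delta_e(i1 + 1, i1) + delta_e(i2 + 1, i2)
                     < delta_e(i1 - 1, i1) + delta_e(i2 - 1, i2)) else -1
        i1, i2 = i1 + step, i2 + step
    elif d in (1, -3):                   # Case 2: decrease I1 or I2 by 1
        if delta_e(i1 - 1, i1) < delta_e(i2 - 1, i2):
            i1 -= 1
        else:
            i2 -= 1
    elif d in (-1, 3):                   # Case 3: increase I1 or I2 by 1
        if delta_e(i1 + 1, i1) < delta_e(i2 + 1, i2):
            i1 += 1
        else:
            i2 += 1
    return [i0, i1, i2]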

2.2.3. Extraction Process

Step 1:
Divide the stego image into nonoverlapping blocks, each of which contains 1 × 3 pixels.
Step 2:
Reorder palette $P$ to obtain a new palette $P'$ through the palette reordering process used in the embedding process.
Step 3:
According to the location map, consider an embedded block $B$ with three pixels ($B_0$, $B_1$, $B_2$). Let $I_i$ be the color index of $B_i$ in $P'$, $i = 0, 1, 2$.
Step 4:
Calculate $t_0$ and $t_1$ using the following equations:
$t_0 = I_0 \bmod 2.$
$t_1 = (I_1 + I_2) \bmod 4.$
Step 5:
Extract w according to Table 1, t 0 and t 1 .
Step 6:
Go to Step 3 until all embedded blocks are processed.
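Extraction is simply the inverse lookup of Table 1. Reusing the illustrative T1_TABLE from the embedding sketch above:

# Inverse of Table 1: (t0, t1) -> w, used in Step 5 of the extraction process.
W_FROM_T = {t: w for w, t in T1_TABLE.items()}

def aryal_extract_block(indices):
    """Recover the 3-bit secret value from one embedded 1x3 block."""
    i0, i1, i2 = indices
    return W_FROM_T[(i0 % 2, (i1 + i2) % 4)]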
In Imaizumi et al.’s and Aryal et al.’s methods, the receiver requires the positions of the embedded blocks for data extraction. Thus, a location map with one bit per block must be transmitted to the receiver; this reduces the embedding capacity for secret data, as discussed in Section 4. To overcome this disadvantage, a novel BBM that does not require a location map is proposed.

3. Proposed Method

To avoid using a location map and to obtain higher embedding efficiency and capacity, a novel BBM for palette images is proposed. The method includes three processes: parity assignment, embedding, and extraction. In the parity assignment process, Tanaka et al.’s assignment [14] is used to assign a $k$-bit parity to each color in the palette. Through this assignment, a stego image with lower embedding distortion can be obtained, and a location map is not required for secret data extraction. In the embedding process, an optimal scheme selects the pixel in a block with the minimal embedding distortion, which allows every block to be used for data embedding. The embedding process is performed as follows:

3.1. Embedding Process

Step 1:
Divide the cover image into nonoverlapping blocks, each of which contains n × m pixels.
Step 2:
Use Tanaka et al.’s parity assignment to assign a k -bit parity to each color in palette P .
Step 3:
Take one block $B$ with pixels ($B_0, \ldots, B_{n \times m - 1}$) and $k$ secret data bits $w$ sequentially.
Step 4:
Let $c_{B_j}$ be the color of $B_j$, and $q_j$ be the parity of $c_{B_j}$, $0 \le j \le n \times m - 1$.
Step 5:
Calculate r using the following expression:
$r = \left( \sum_{j=0}^{n \times m - 1} q_j \right) \bmod 2^k.$
Step 6:
If r = w , then go to Step 10.
Step 7:
For each pixel $B_j$, a new parity $q'_j$ is calculated using the following equation:
$q'_j = (q_j + w - r + 2^k) \bmod 2^k, \quad j = 0, \ldots, n \times m - 1.$
Step 8:
For each pixel $B_j$, find a new color $c_{B_j}^*$ with parity $q'_j$ according to the following equation:
$c_{B_j}^* = \arg\min_{c_i \in S_{q'_j}} d_{c_{B_j}, c_i},$
where $S_{q'_j}$ is the set of all colors with parity $q'_j$.
Step 9:
Find the pixel $B_\alpha$ satisfying the following expression, and set the color of $B_\alpha$ to $c_{B_\alpha}^*$:
$\alpha = \arg\min_{j \in \{0, \ldots, n \times m - 1\}} d_{c_{B_j}, c_{B_j}^*}.$
Step 10:
Repeat Steps 3 to 9 until all blocks are processed.
In Step 2, Tanaka et al.’s parity assignment is used so that each color can find a close color in every set with a different parity. In Step 8, for each pixel, the color with the required parity and the minimal embedding distortion is always selected as the candidate replacement. In Step 9, at most one pixel is modified, and it is the one with the minimal embedding distortion; note that changing the parity of pixel $B_\alpha$ from $q_\alpha$ to $q'_\alpha$ shifts the parity sum by $(w - r) \bmod 2^k$, so the new remainder equals $w$. Through these three steps, every block is used to embed secret data, and the embedding quality is also improved.
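The following sketch implements Steps 3–9 for a single block. It operates on palette indices and takes the parity table (for example, from the tanaka_parity_assignment sketch in Section 2.1) and a color-distance callback as inputs; these interfaces, and the assumption that every parity value occurs in the palette, are simplifications of this sketch.

def embed_block(block_indices, w, k, parity, palette, dist):
    """Embed the k secret bits w (0 <= w < 2**k) into one block.

    block_indices: palette indices of the block's pixels.
    parity[i]: k-bit parity of palette color i.
    dist(a, b): distance between two palette colors."""
    m = 2 ** k
    q = [parity[c] for c in block_indices]
    r = sum(q) % m                        # Step 5: current remainder
    if r == w:                            # Step 6: block already carries w
        return list(block_indices)

    best_j, best_idx, best_d = None, None, float("inf")
    for j, c in enumerate(block_indices):
        target = (q[j] + w - r) % m       # Step 7 (Python's % keeps it in 0..m-1)
        # Step 8: closest palette color whose parity equals `target`.
        candidates = [i for i in range(len(palette)) if parity[i] == target]
        c_star = min(candidates, key=lambda i: dist(palette[i], palette[c]))
        d = dist(palette[c_star], palette[c])
        if d < best_d:                    # Step 9: keep the cheapest single change
            best_j, best_idx, best_d = j, c_star, d

    out = list(block_indices)
    out[best_j] = best_idx                # at most one pixel is modified
    return out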

3.2. Extraction Process

In the extraction process, the receiver uses the same parity assignment as that used in the embedding process to assign a k -bit parity to each color. The extraction process is as follows:
Step 1:
Divide the stego image into nonoverlapping blocks, each of which contains n × m pixels.
Step 2:
Use Tanaka et al.’s parity assignment to assign a k -bit parity to each color in palette P .
Step 3:
Take a block $B$ with pixels ($B_0, \ldots, B_{n \times m - 1}$) sequentially.
Step 4:
Use Equation (12) to obtain r .
Step 5:
Output $r$ as the $k$ extracted secret data bits $w$.
Step 6:
Repeat Steps 3 to 5 until all blocks are processed.
Because each block is used to embed secret data, the receiver does not require a location map in data extraction.
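Under the same assumptions as the embedding sketch in Section 3.1, extracting the bits from one block reduces to Equation (12):

def extract_block(block_indices, k, parity):
    """Steps 3-5: the recovered bits are the parity sum of the block mod 2**k."""
    return sum(parity[c] for c in block_indices) % (2 ** k)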

4. Analysis of Embedding Capacities

As mentioned previously, both Imaizumi et al.’s [16] and Aryal et al.’s [17] methods need a location map in the extraction process; thus, the location map must either be transmitted through another channel or be embedded in the stego image. Because transmitting the location map through another channel is unreasonable, in the following we only consider the case in which the location map is embedded in the stego image. Let the image size be $N \times M$, the block size be $n \times m$, and the number of embedded bits per block be $k$; then the total number of blocks is $T = N \times M / (n \times m)$. Assume that in the location map each block is represented by one bit: 1 indicates that the corresponding block is used for embedding, and 0 that it is skipped. Let $X$ be the size of the location map (in bits), and assume that the location map is embedded in the first $[X/k]$ blocks; then the number of blocks available for secret data embedding is $T - [X/k]$. Thus, $X$ should satisfy the following equation:
$X = T - [X/k].$
Note that the embedding capacity (in bits) is $kX$. Let the image size be $256 \times 256$ and the block size be $2 \times 2$; then $T = 16384$. Let $k = 3$; through Equation (16), we obtain $X = 12288$; that is, the number of blocks available for secret data embedding is 12,288, and the number of blocks needed to record the location map is 4096. Table 2 lists the embedding capacities of Imaizumi et al.’s, Aryal et al.’s, and the proposed methods. The table shows that the proposed method is superior to Imaizumi et al.’s and Aryal et al.’s methods in terms of embedding capacity.
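A small sketch of this calculation is shown below. It reads the bracket in Equation (16) as rounding up and searches for the largest location-map size $X$ such that the map-carrying blocks and the data blocks together fit into the $T$ blocks; under this assumption it reproduces the map sizes 12,288 and 16,383 used in Table 2.

import math

def location_map_size(T, k):
    """Largest X (bits) such that ceil(X / k) map-carrying blocks plus X data
    blocks fit into T blocks; the embedding capacity is then k * X bits."""
    X = 0
    while (X + 1) + math.ceil((X + 1) / k) <= T:
        X += 1
    return X

print(location_map_size(16384, 3))   # 2 x 2 blocks in a 256 x 256 image -> 12288
print(location_map_size(21845, 3))   # 1 x 3 blocks in a 256 x 256 image -> 16383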

5. Experimental Results

In the experiments, the 25 images of size $256 \times 256$ shown in Figure 1 were used. These images were obtained from the Standard Image Database [26] or the CBIR Image Database [27] and were in TIFF or JPG format. Photopea [18] was first used to resize and crop each image to $256 \times 256$; then Cloudconvert [18] was applied to convert the TIFF (JPG) images into GIF format. The embedded secret data were generated using a pseudorandom number generator. Image quality was measured using the peak signal-to-noise ratio (PSNR). The chi-square attack, RS steganalysis, and embedding efficiency were used to measure the undetectability of a steganography method.

5.1. PSNR Comparisons

To demonstrate the effectiveness of our method, we conducted three experiments comparing the image quality of the proposed method with that of Imaizumi et al.’s [16] and Aryal et al.’s [17] methods under the same embedding capacity. The first two experiments assume that the location map needed by Imaizumi et al.’s and Aryal et al.’s methods is not embedded in the stego image but is sent through another channel. The third experiment assumes that the location map is embedded in the stego image, which reduces the embedding capacity.
In the first experiment, Imaizumi et al.’s and the proposed methods were run with capacities of 15,000 and 30,000 bits. First, each image was divided into $128 \times 128$ nonoverlapping blocks, each of size $2 \times 2$. Then, $k$ bits were embedded into each block, $k = 1, 2$. Part of the results is listed in Table 3 and Table 4. These tables reveal that the image quality of the proposed method was the best. This indicates that the proposed method has lower embedding distortion for each block, so every block can be used for data embedding. More detailed explanations are given below.
In Imaizumi et al.’s method, the palette is reordered; this increases the distance between two neighboring colors (i.e., two colors with an index difference of 1) when their indices are large. In data embedding, if a pixel’s color $c$ with index $i$ is replaced by color $c'$, then the index of $c'$ is $i - 1$ or $i + 1$. However, when $i$ is large, these two colors with an index difference of 1 may not be close, which increases the embedding distortion.
To provide more explanation, the image Lena was used with setting k = 1 . Figure 2 illustrates the result of Tanaka et al.’s parity assignment. Figure 2a depicts all colors with parity 0, and Figure 2c illustrates all colors with parity 1. Figure 2b depicts the corresponding closest color with parity 1 of each color in Figure 2a. Figure 2d depicts the corresponding closest color with parity 0 of each color in Figure 2c. From these figures, we can determine that for each color with parity 0 (1), a similar color with parity 1 (0) can always be found.
Figure 3 illustrates the result of Imaizumi et al.’s palette reordering. Figure 3a displays the reordered color palette. Figure 3b shows the best replacement color for each color in Figure 3a during data embedding. Figure 3c shows an enlargement of the region marked by the red rectangle in Figure 3a, and Figure 3d shows an enlargement of the region marked by the red rectangle in Figure 3b. These figures indicate that some colors may be replaced by a dissimilar color during data embedding.
In the second experiment, we compared the proposed method with the methods of Imaizumi et al. [16] and Aryal et al. [17] using a capacity of 45,000 bits and $k = 3$. Note that for the image Couple, only 43,845 bits can be embedded using Aryal et al.’s method. Part of the results is listed in Table 4, which reveals that the proposed method provides the best image quality under the same capacity. The reason is the same as that in the first experiment. Furthermore, our method modifies at most one pixel during data embedding, whereas the other methods [16,17] may modify more than one pixel.
As mentioned previously, Imaizumi et al.’s and Aryal et al.’s methods need an extra location map to record which blocks are used for embedding; in the third experiment, the location map was embedded into the stego image. According to Table 2, the maximum valid embedding capacities of Imaizumi et al.’s and Aryal et al.’s methods are 12,288 × k and 16,384 × k bits, respectively. Table 5 lists part of the image qualities of Imaizumi et al.’s, Aryal et al.’s, and our methods under the same embedding capacity. Note that, to embed the location map in the first 4096 (5461) blocks for Imaizumi et al.’s (Aryal et al.’s) method, all pixel indices 0 (255) in these blocks are changed to 1 (254) before embedding; this avoids overflow/underflow after embedding. The table shows that the image quality of the proposed method is always superior to that of Imaizumi et al.’s and Aryal et al.’s methods under the same embedding capacity. The main reason is that embedding the location map occupies several blocks and reduces the embedding capacity. Thus, under the same embedding capacity, the blocks used for embedding the location map in Imaizumi et al.’s and Aryal et al.’s methods are simply skipped for data embedding in the proposed method, which gives the proposed method higher image quality.

5.2. Chi-square Attack and RS Steganalysis

In steganography, the main goal is for a stego image to be statistically undetectable [21] (p. 52). To test whether the embedded images are detectable, the chi-square attack [23], RS steganalysis [24], and embedding efficiency [21] were used. The chi-square attack and RS steganalysis are discussed in this section, and embedding efficiency is discussed in Section 5.3.
For each of the chi-square attack and RS steganalysis, two experiments were conducted: one embedded 15,000 × k bits (k = 1, 2, 3) using Imaizumi et al.’s and the proposed methods, and the other embedded 15,000 × k bits (k = 3) using Aryal et al.’s and the proposed methods.
The chi-square attack employs Pearson’s chi-square test to determine whether there is a statistically significant difference between the expected and observed frequencies in one or more categories of an image; it can detect whether a palette image contains embedded messages. The details of applying the chi-square attack to steganography methods are as follows:
Step 1:
Let the palette indices of a palette image be divided into $K$ categories, where each category contains a pair of indices ($2i$, $2i+1$), $i = 0, \ldots, K-1$. Let $E_i$ and $O_i$ be the theoretically expected and the observed frequencies of pixels with index $2i$ after message embedding, respectively. Then
$E_i = \frac{\text{number of pixels with color index } 2i \text{ or } 2i+1}{2},$
$O_i = \text{number of pixels with color index } 2i.$
Step 2:
The $\chi^2$ statistic is given as
$\chi_{K-1}^2 = \sum_{i=0}^{K-1} \frac{(O_i - E_i)^2}{E_i}$ with $K-1$ degrees of freedom.
Step 3:
Let $p_v$ represent the embedding probability; $p_v$ can be calculated by the following equation:
$p_v = 1 - \frac{1}{2^{\frac{K-1}{2}} \Gamma\left(\frac{K-1}{2}\right)} \int_0^{\chi_{K-1}^2} e^{-\frac{x}{2}} x^{\frac{K-1}{2}-1} \, dx.$
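A sketch of this test on the index table of a palette image is given below. The function name and the use of SciPy’s chi-square survival function are assumptions of the sketch, and categories with zero expected frequency are dropped before the statistic is computed.

import numpy as np
from scipy.stats import chi2

def chi_square_embedding_probability(index_image, n_colors=256):
    """Return p_v of Equation (19) for a 2-D array of palette indices."""
    counts = np.bincount(np.asarray(index_image).ravel(), minlength=n_colors)
    observed = counts[0::2].astype(float)            # O_i: pixels with index 2i
    expected = (counts[0::2] + counts[1::2]) / 2.0   # E_i: average of each pair
    mask = expected > 0                              # keep usable categories only
    stat = np.sum((observed[mask] - expected[mask]) ** 2 / expected[mask])
    dof = int(mask.sum()) - 1
    return chi2.sf(stat, dof)                        # p_v = 1 - CDF of chi2(dof)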
In the chi-square attack, the results for all three methods are $p_v = 0$, which means that the three methods can resist the chi-square attack. There are two reasons: (1) these three methods are block-based, so their embedding capacities are limited and lower than 1 bpp, which makes the chi-square attack fail [21]; (2) the chi-square attack is designed to detect LSB-based methods, and these three methods are not LSB-based.
The chi-square attack uses only sample counts and neglects spatial correlations among pixels in the stego image. Fridrich et al. [24] introduced RS steganalysis for the detection of LSB embedding; it utilizes sensitive dual statistics derived from spatial correlations in images.
In RS steganalysis, for a given image $I$, all pixels can be classified into three groups (regular, singular, and unchanged) through a given local mask, two flipping functions, and a discrimination function. Given a non-negative mask $m$, the relative frequencies of the regular and singular groups are denoted $R^+$ and $S^+$, respectively. Through the mask $-m$, the relative frequencies of the regular and singular groups are denoted $R^-$ and $S^-$, respectively. Let $p_e$ be the embedding rate in $I$; RS steganalysis estimates $p_e$ by solving the following equation:
$2(d_1 + d_0)z^2 + (d'_0 - d'_1 - d_1 - 3d_0)z + d_0 - d'_0 = 0,$
where
$d_0 = R^+(p_e/2) - S^+(p_e/2)$, $d_1 = R^+(1 - p_e/2) - S^+(1 - p_e/2)$, $d'_0 = R^-(p_e/2) - S^-(p_e/2)$, $d'_1 = R^-(1 - p_e/2) - S^-(1 - p_e/2)$,
and $R^+(p/2)$ represents $R^+$ of a stego image with a proportion $p$ of its pixels embedded (i.e., the LSBs of $p/2$ of the pixels flipped relative to the cover image). Thus, $R^+(p_e/2)$ can be obtained directly from $I$; furthermore, by flipping the LSB of each pixel in $I$ to obtain $I'$, $R^+(1 - p_e/2)$ can be obtained from $I'$. $S^+$, $S^-$, and $R^-$ are obtained in a similar way. Then, $p_e$ is calculated from the root $z$ of Equation (20) with the smaller absolute value, using the following equation:
$p_e = \frac{z}{z - \frac{1}{2}}.$
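Given the four measured differences, solving Equation (20) and applying Equation (21) can be sketched as follows; the function name and the use of numpy.roots are assumptions of the sketch.

import numpy as np

def rs_estimate(d0, d1, d0p, d1p):
    """Solve 2(d1+d0)z^2 + (d0'-d1'-d1-3d0)z + (d0-d0') = 0 and return p_e.

    d0, d1 come from R+/S+ of the stego image and its LSB-flipped version;
    d0p, d1p (d0', d1' in the text) come from R-/S- obtained with mask -m."""
    coeffs = [2.0 * (d1 + d0), d0p - d1p - d1 - 3.0 * d0, d0 - d0p]
    z = min(np.roots(coeffs), key=abs).real   # root with the smaller magnitude
    return z / (z - 0.5)                      # Equation (21)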
Two experiments were conducted: one for Imaizumi et al.’s [16] and the proposed methods used a block size of $2 \times 2$ and three capacities (23%, 46%, and 69%); the other for Aryal et al.’s [17] and the proposed methods used a block size of $1 \times 3$ and the same three capacities. In the experiments, the mask $m$ was defined as $[0\ 1\ 1\ 0]$. Table 6 and Table 7 show part of the estimated $p_e$ values for Imaizumi et al.’s, Aryal et al.’s, and the proposed methods. Note that both Imaizumi et al.’s and Aryal et al.’s methods embed messages after reordering the original palette, so two results are reported for their methods, obtained by analyzing the original and the reordered palettes. These tables show that RS steganalysis cannot estimate $p_e$ accurately, because the three methods are not LSB-based. We conclude that the three methods can resist RS steganalysis and are undetectable.

5.3. Embedding Efficiency

Another measure of undetectability is embedding efficiency [21], which is defined as the number of embedded random message bits per embedding change [21]. According to this definition, the embedding efficiency ($EF$) can be expressed as follows:
$EF = \frac{\text{number of embedded bits}}{\text{number of embedding pixel changes}}.$
Methods with higher embedding efficiency are less detectable because, under the same capacity, they change fewer pixels than methods with lower embedding efficiency. For $k = 3$, according to Equation (24), the estimated embedding efficiency of the proposed method ($EF_p$) can be calculated using Equation (25):
$EF_p = \frac{3}{0 \times p_0 + 1 \times p_1},$
where $p_i$ is the probability that $i$ pixels are changed in a block; here $p_0 = 1/8$ and $p_1 = 7/8$.
The estimated embedding efficiency of Imaizumi et al.’s method ($EF_i$) can be calculated using Equation (26):
$EF_i = \frac{3}{0 \times p_0 + 1 \times p_1 + 2 \times p_2 + 3 \times p_3 + 4 \times p_4},$
where $p_i$ is the probability that $i$ pixels are changed in a block; here $p_0 = 1/15$, $p_1 = p_2 = p_3 = 4/15$, and $p_4 = 2/15$.
The estimated embedding efficiency of Aryal et al.’s method ($EF_a$) can be calculated using Equation (27):
$EF_a = \frac{3}{p_0 \times (1 \times p_1 + 2 \times p_2) + (1 - p_0) \times ((1+1) \times p_1 + (1+2) \times p_2)},$
where $p_0$ is the probability that $B_0$ is unchanged, $p_i$ ($i = 1, 2$) is the probability that $i$ pixels from $\{B_1, B_2\}$ are changed, and here $p_0 = 1/2$, $p_1 = 4/7$, and $p_2 = 2/7$.
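These three estimates can be verified with a few lines of plain Python arithmetic; the probabilities are those quoted after Equations (25)–(27).

# Estimated embedding efficiencies for k = 3 (Equations (25)-(27)).
EF_p = 3 / (0 * (1 / 8) + 1 * (7 / 8))                                   # 24/7 (3.428 in Table 8)
EF_i = 3 / (1 * (4 / 15) + 2 * (4 / 15) + 3 * (4 / 15) + 4 * (2 / 15))   # 45/32 (about 1.406)
p0, p1, p2 = 1 / 2, 4 / 7, 2 / 7
EF_a = 3 / (p0 * (1 * p1 + 2 * p2) + (1 - p0) * (2 * p1 + 3 * p2))       # 21/11 (about 1.909)
print(EF_p, EF_i, EF_a)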
Table 8 lists the estimated embedding efficiencies of the proposed, Imaizumi et al.’s [16], and Aryal et al.’s [17] methods obtained using Equations (25)–(27). Table 9 lists part of the experimental embedding efficiencies, calculated by applying Equation (24) to each stego image produced by Imaizumi et al.’s, Aryal et al.’s, or the proposed method. These tables show that, for both the estimated and the experimental values, the proposed method has higher embedding efficiency than Imaizumi et al.’s and Aryal et al.’s methods; that is, the proposed method is less detectable.

6. Conclusions

As mentioned previously, some BBMs produce large embedding distortions in some blocks, so a location map is incorporated into these methods to record which blocks are used for data embedding; this reduces the embedding capacity for secret data. To avoid this disadvantage, we have proposed a BBM for palette images in this paper. The method modifies at most one pixel in a block; if a modification is required, the one pixel with the minimal embedding distortion is selected. This allows every block to be used to embed secret data; that is, the embedding capacity of the proposed method is larger than that of state-of-the-art BBMs. Regarding undetectability, the chi-square attack, RS steganalysis, and embedding efficiency were used. Imaizumi et al.’s, Aryal et al.’s, and the proposed methods can all resist the chi-square attack and RS steganalysis. However, both the estimated and the experimental embedding efficiencies showed that the proposed method provides higher undetectability.

Author Contributions

Conceptualization, H.-Y.W. and L.-H.C.; methodology, H.-Y.W. and L.-H.C.; software, H.-Y.W.; validation, H.-Y.W. and L.-H.C.; writing—Original draft preparation, H.-Y.W.; writing—Review and editing, H.-Y.W. and L.-H.C.; visualization, H.-Y.W. and L.-H.C.; supervision, L.-H.C. and Y.-T.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, X.; Sun, Z.; Tang, Z.; Yu, C.; Wang, X. High capacity data hiding based on interpolated image. Multimed. Tools Appl. 2016, 76, 9195–9218. [Google Scholar] [CrossRef]
  2. Hussain, M.; Wahid, A.; Ho, A.T.; Javed, N.; Jung, K.-H. A data hiding scheme using parity-bit pixel value differencing and improved rightmost digit replacement. Signal. Process. Image Commun. 2017, 50, 44–57. [Google Scholar] [CrossRef]
  3. Zhang, W.; Wang, S.; Zhang, X. Improving Embedding Efficiency of Covering Codes for Applications in Steganography. IEEE Commun. Lett. 2007, 11, 680–682. [Google Scholar] [CrossRef]
  4. Chang, C.; Chou, Y. Using nearest covering codes to embed secret information in gray scale images. In Proceedings of the 2nd International Conference on Ubiquitous Information Management and Communication, Suwon, Korea, 31 January–1 February 2008; Association for Computing Machinery (ACM): New York, NY, USA. [Google Scholar]
  5. Cao, Z.; Yin, Z.; Hu, H.; Gao, X.; Wang, L. High capacity data hiding scheme based on (7, 4) Hamming code. SpringerPlus 2016, 5, 175. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Guo, Y.; Cao, X.; Wang, R.; Jin, C. A new data embedding method with a new data embedding domain for JPEG images. In Proceedings of the 2018 IEEE Fourth International Conference on Multimedia Big Data (BigMM), Laguna Hills, CA, USA, 13–16 September 2018; Institute of Electrical and Electronics Engineers (IEEE): New York, NY, USA, 2018; pp. 1–5. [Google Scholar]
  7. Hong, W.; Zhou, X.; Lou, D.-C.; Chen, T.S.; Li, Y. Joint image coding and lossless data hiding in VQ indices using adaptive coding techniques. Inf. Sci. 2018, 245–260. [Google Scholar] [CrossRef]
  8. Hong, W.; Chen, T.S.; Yin, Z.; Luo, B.; Ma, Y. Data hiding in AMBTC images using quantization level modification and perturbation technique. Multimed. Tools Appl. 2016, 76, 3761–3782. [Google Scholar] [CrossRef]
  9. S-Tools. Available online: https://www.cs.vu.nl/~ast/books/mos2/zebras.html (accessed on 3 November 2020).
  10. Ez Stego. Available online: http://www.informatik.htw-dresden.de/~fritzsch/VWA/Source/EzStego.java (accessed on 3 November 2020).
  11. Fridrich, J. A new steganographic method for palette-based image. In Proceedings of the IS&T PICS Conference, Savannah, GA, USA, 25–28 April 1999. [Google Scholar]
  12. Fridrich, J.; Du, R. Secure steganographic methods for palette images. Comput. Vis. 2000, 1768, 47–60. [Google Scholar] [CrossRef]
  13. Tzeng, C.-H.; Yang, Z.-F.; Tsai, W.-H. Adaptive data hiding in palette images by color ordering and mapping with security protection. IEEE Trans. Commun. 2004, 52, 791–800. [Google Scholar] [CrossRef]
  14. Tanaka, G.; Suetake, N.; Uchino, E. A steganographic method realizing high capacity data embedding for palette-based images. In Proceedings of the International Workshop on Smart Info-Media Systems in Asia, Osaka, Japan, 22–23 October 2009. [Google Scholar]
  15. Imaizumi, S.; Ozawa, K. Multibit embedding algorithm for steganography of palette-based images. Lect. Notes Comp. Sci. 2014, 8333, 99–110. [Google Scholar] [CrossRef]
  16. Imaizumi, S.; Ozawa, K. Palette-based image steganography for high-capacity embedding. Bull. Soc. Photogr. Imag. Jpn. 2015, 25, 7–11. [Google Scholar]
  17. Aryal, A.; Motegi, K.; Imaizumi, S.; Aoki, N. Improvement of Multi-bit Information Embedding Algorithm for Palette-Based Images. Lecture Notes in Computer Science; Springer Science and Business Media LLC: Cham, Switzerland, 2015; Volume 9290, pp. 511–523. [Google Scholar]
  18. EVONIK. Available online: https://www.corporate.evonik.com/ (accessed on 3 November 2020).
  19. Miltner, K.M.; Highfield, T. Never gonna GIF you up: Analyzing the cultural significance of the animated GIF. Soc. Media + Soc. 2017, 3, 1–11. [Google Scholar] [CrossRef] [Green Version]
  20. Zhao, H.; Wang, H.; Khan, M.K. Steganalysis for palette-based images using generalized difference image and color correlogram. Signal. Process. 2011, 91, 2595–2605. [Google Scholar] [CrossRef]
  21. Cox, I.J.; Miller, M.L.; Bloom, J.A.; Fridrich, J.; Kalker, T. Applications and Properties. In Digital Watermarking and Steganography, 2nd ed.; Morgan Kaufmann: San Francisco, CA, USA, 2007; pp. 49–56. [Google Scholar]
  22. Crandall, R. Some Notes on Steganography. Posted on Steganography Mailing List. 1998. Available online: http://dde.binghamton.edu/download/Crandall_matrix.pdf (accessed on 29 May 2018).
  23. Westfeld, A.; Pfitzmann, A. Attacks on steganographic systems: Breaking the steganographic utilities EzStego, Jsteg, Steganos, and S-Tools. Lect. Notes Comput. Sci. 2000, 1768, 61–75. [Google Scholar]
  24. Fridrich, J.; Goljan, M. Practical steganalysis of digital images: State of the art. Electron. Imaging 2002 2002, 4675, 1–13. [Google Scholar] [CrossRef]
  25. ISO/CIE 11664-6:2014 (Formerly CIE S 0146/E:2013). Colorimetry—Part 6: CIEDE2000 Colour-Difference Formula; CIE Central Bureau: Vienna, Austria, 2014. [Google Scholar]
  26. Standard Image Database (SIDBA). Available online: http://www.ess.ic.kanagawa-it.ac.jp/app_images_j (accessed on 7 October 2018).
  27. CBIR Image Database. Available online: http://imagedatabase.cs.washington.edu/groundtruth/ (accessed on 3 October 2020).
Figure 1. Twenty-five test images.
Figure 2. Results of Tanaka et al.’s parity assignment with k = 1 for Lena.
Figure 3. Results of Imaizumi et al.’s palette reordering for Lena.
Table 1. Relation between secret data w and ( t 0 ,   t 1 ).
w (3 bits) | t0 | t1
7 (111) | 0 | 3
6 (110) | 0 | 2
5 (101) | 0 | 1
4 (100) | 0 | 0
3 (011) | 1 | 0
2 (010) | 1 | 1
1 (001) | 1 | 2
0 (000) | 1 | 3
Table 2. The comparisons of embedding capacities among Imaizumi et al.’s [16], Aryal et al.’s [17], and the proposed methods for image size 256 × 256 and k = 3 .
Item | [16] (2 × 2) | Proposed Method (2 × 2) | [17] (1 × 3) | Proposed Method (1 × 3)
Total block number (T) | 16,384 | 16,384 | 21,845 | 21,845
Location map size (X bits) | 12,288 | 0 | 16,383 | 0
Block number needed to record the location map ([X/k]) | 4096 | 0 | 5461 | 0
Number of available blocks for secret data embedding (T − [X/k]) | 12,288 | 16,384 | 16,384 | 21,845
Maximum embedding capacity (bits) | 36,864 | 49,152 | 49,152 | 65,535
Table 3. Peak signal-to-noise ratios (PSNRs) of Imaizumi et al.’s [16] and the proposed methods.
Image | [16] (15,000 bits, k = 1) | Proposed Method (15,000 bits, k = 1) | [16] (30,000 bits, k = 2) | Proposed Method (30,000 bits, k = 2)
Aerial | 38.78 | 41.99 | 36.71 | 39.31
Airplane | 46.29 | 47.18 | 42.95 | 44.09
Balloon | 45.10 | 48.04 | 42.14 | 44.14
Earth | 43.70 | 50.61 | 42.45 | 47.22
Girl | 42.57 | 43.33 | 39.40 | 40.12
Lena | 41.51 | 45.21 | 39.17 | 41.89
Parrots | 40.22 | 42.22 | 37.05 | 38.54
Pepper | 41.42 | 42.53 | 38.02 | 39.17
Mandrill | 38.82 | 40.34 | 35.81 | 37.30
Couple | 41.44 | 46.16 | 39.77 | 43.18
Sailboat | 40.62 | 43.96 | 38.24 | 40.76
Milkdrop | 45.61 | 47.47 | 42.09 | 43.35
Table 4. PSNRs of Imaizumi et al.’s [16], Aryal et al.’s [17], and the proposed methods with a capacity 45,000 bits and k = 3 .
Image | [16] (2 × 2) | Proposed Method (2 × 2) | [17] (1 × 3) | Proposed Method (1 × 3)
Bell | 34.39 | 41.45 | 37.51 | 41.13
Bricks | 36.00 | 35.44 | 34.97 | 36.95
Church | 36.21 | 42.07 | 38.06 | 41.29
Cliffs | 33.28 | 39.51 | 36.05 | 39.50
Kangaroo | 36.53 | 40.31 | 32.63 | 39.99
Kinkaku-ji Temple | 34.60 | 38.61 | 34.66 | 38.54
Monkeys | 36.08 | 39.53 | 34.87 | 39.99
Mosque | 37.42 | 40.38 | 38.70 | 40.03
Painting | 35.94 | 41.79 | 37.58 | 41.47
Rice paddies | 36.18 | 41.18 | 36.46 | 40.93
Statue | 30.79 | 39.28 | 35.25 | 38.77
Sydney Opera House | 40.18 | 43.87 | 39.07 | 43.80
Temple | 40.22 | 40.38 | 36.39 | 40.51
Table 5. PSNRs of Imaizumi et al.’s [16], Aryal et al.’s [17], and the proposed methods with k = 3 .
Image | [16] (2 × 2, 12,288 × k bits) | Proposed Method (2 × 2, 12,288 × k bits) | [17] (1 × 3, 16,384 × k bits) | Proposed Method (1 × 3, 16,384 × k bits)
Aerial | 33.07 | 38.35 | 29.22 | 36.91
Airplane | 37.05 | 42.62 | 34.96 | 40.66
Balloon | 35.82 | 42.13 | 33.78 | 40.52
Earth | 38.94 | 45.60 | 39.14 | 43.99
Girl | 33.53 | 38.89 | 30.55 | 37.33
Lena | 34.34 | 40.30 | 32.52 | 38.61
Parrots | 31.60 | 36.52 | 29.86 | 34.97
Pepper | 32.72 | 37.58 | 30.72 | 36.05
Mandrill | 30.81 | 35.91 | 27.00 | 34.25
Couple | 35.48 | 41.76 | 32.75 | 40.16
Sailboat | 33.92 | 39.52 | 31.18 | 37.99
Milkdrop | 35.58 | 40.99 | 33.50 | 39.20
Table 6. The results of RS steganalysis for Imaizumi et al.’s [16] and the proposed methods with block size 2 × 2 .
Method | Proposed Method (original palette) | Imaizumi et al. (original palette) | Imaizumi et al. (reordered palette)
k | 0 1 2 3 | 0 1 2 3 | 0 1 2 3
Actual embedding rate (%) | 0 23 46 69 | 0 23 46 69 | 0 23 46 69
Estimated embedding rate (%):
Aerial0-3-510101016−20101832
Airplane−16−128−16−37−56−80−9−12−5−9
Balloon−11−19−17−20−11−14−12−217603
Earth−18−12−12−16−18−23−28−26211414−15
Girl12−23−18−301210−4−7−16−8−18−19
Lena012111202−9−446108
Parrots11−11211121115−9−6−7−6
Pepper−1111111−1−1−3−2426−4
Mandrill000−1038−1151097
Couple−11−2−11−7−115−12−4−16201822
Sailboat11−15−25−241112105−12−12−9−12
Milkdrop1510101315141814−120−1
Table 7. The results of RS steganalysis for Aryal et al.’s [17] and the proposed methods with block size 1 × 3 .
Method | Proposed Method (original palette) | Aryal et al. (original palette) | Aryal et al. (reordered palette)
k | 0 3 3 3 | 0 3 3 3 | 0 3 3 3
Actual embedding rate (%) | 0 23 46 69 | 0 23 46 69 | 0 23 46 69
Estimated embedding rate (%):
Bell−11−11−11−13−11−16−15−19−12−17−16−10
Bricks−27−39−55−39−27−28−31−46501−9
Church1111151711111416444−2
Cliffs−9−12−19−20−9−7−6−42102
Kangaroo−40−46−40−43−40−52−60−75−39−44−47−43
Kinkaku−ji Temple−15−16−24−31−15−19−25−3042−1−4
Monkeys455840−6−9−2−223
Mosque−15−11−10−8−15−14−17−19−2−208
Painting−11−13−19−22−11−12−22−35−5−9−13−17
Rice paddies−148−143−144−158−148−151−138−131−54−40−39−38
Statue1816171818232529−9−8−11−4
Sydney Opera House1−2−6−71−21−28141517
Temple−18−16−14−17−18−48−50−456111513
Table 8. Estimated embedding efficiencies of Imaizumi et al.’s [16], Aryal et al.’s [17], and the proposed methods with k = 3 .
Method | [16] (2 × 2) | Proposed Method (2 × 2) | [17] (1 × 3) | Proposed Method (1 × 3)
Estimated embedding efficiency | 1.406 | 3.428 | 1.909 | 3.428
Table 9. Experimental embedding efficiencies of Imaizumi et al.’s [16], Aryal et al.’s [17], and the proposed methods with k = 3 .
Image | [16] (2 × 2, 49,152 bits) | Proposed Method (2 × 2, 49,152 bits) | [17] (1 × 3, 65,535 bits) | Proposed Method (1 × 3, 65,535 bits)
Aerial | 1.492 | 3.417 | 2.006 | 3.442
Airplane | 1.499 | 3.415 | 1.994 | 3.431
Balloon | 1.497 | 3.430 | 1.999 | 3.423
Earth | 1.500 | 3.415 | 1.994 | 3.425
Girl | 1.490 | 3.454 | 2.007 | 3.450
Lena | 1.491 | 3.412 | 1.996 | 3.431
Parrots | 1.496 | 3.445 | 1.988 | 3.424
Pepper | 1.482 | 3.401 | 2.011 | 3.427
Mandrill | 1.507 | 3.434 | 2.015 | 3.442
Couple | 1.482 | 3.432 | 2.012 | 3.440
Sailboat | 1.505 | 3.425 | 1.996 | 3.427
Milkdrop | 1.500 | 3.411 | 1.993 | 3.442
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
