Article

Multilayer Reversible Data Hiding Based on the Difference Expansion Method Using Multilevel Thresholding of Host Images Based on the Slime Mould Algorithm

by Abolfazl Mehbodniya 1, Behnaz Karimi Douraki 2, Julian L. Webber 1, Hamzah Ali Alkhazaleh 3,*, Ersin Elbasi 4, Mohammad Dameshghi 5, Raed Abu Zitar 6 and Laith Abualigah 7
1 Department of Electronics and Communication Engineering, Kuwait College of Science and Technology (KCST), Kuwait City 7207, Kuwait
2 Department of Mathematics, University of Isfahan, Isfahan 81431-33871, Iran
3 IT Department, College of Engineering and IT, University of Dubai, Academic City, United Arab Emirates
4 College of Engineering and Technology, American University of the Middle East, Kust Kuwait 15453, Kuwait
5 Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz 5166-15731, Iran
6 Sorbonne Center of Artificial Intelligence, Sorbonne University-Abu Dhabi, Abu Dhabi, United Arab Emirates
7 Faculty of Computer Sciences and Informatics, Amman Arab University, Amman 11953, Jordan
* Author to whom correspondence should be addressed.
Submission received: 16 March 2022 / Revised: 22 April 2022 / Accepted: 24 April 2022 / Published: 26 April 2022
(This article belongs to the Special Issue Evolutionary Process for Engineering Optimization)

Abstract:
Researchers have scrutinized data hiding schemes in recent years. Data hiding works well in standard images, but it does not provide satisfactory results in distortion-sensitive medical, military, or forensic images, because embedding data in an image can cause permanent distortion that remains after data extraction. Therefore, a reversible data hiding (RDH) technique is required. One of the well-known RDH designs is the difference expansion (DE) method. In DE-based RDH, a significant challenge is finding embedding spaces that create less distortion in the marked image while providing a high insertion capacity. The smaller the difference between the selected pixels and the stronger the correlation between two consecutive pixels, the less distortion is produced in the image after embedding the secret data. This paper proposes a multilayer RDH method that uses multilevel thresholding to reduce the difference values between pixels and to increase both the visual quality and the embedding capacity. Optimization algorithms are among the most popular methods for solving NP-hard problems, and the slime mould algorithm (SMA) gives good results in finding solutions to optimization problems. In the proposed method, the SMA is applied to the host image for optimal multilevel thresholding of the image pixels. Pixels from similar areas of the image are thereby grouped together and classified using the specified thresholds. As a result, the embedding capacity of each class increases, because the difference between two consecutive pixels is reduced, and the distortion of the marked image after inserting the secret data using the DE method decreases. Experimental results show that the proposed method outperforms comparable methods regarding the degree of distortion, the quality of the marked image, and the insertion capacity.

1. Introduction

Data hiding (DH) is a way of secretly sending information or data to others. Data, including text, images, etc., are inserted into a medium such as an image, video, or audio file using a DH algorithm so that they are invisible to others. DH is performed in two domains: spatial and frequency. In the spatial domain, the data are inserted directly into the host image pixels, which is often reversible. Reversibility means that the inserted data and the original host image are entirely recovered in the extraction phase. In the frequency domain, a frequency transform such as the discrete wavelet transform (DWT) or discrete cosine transform (DCT) is first applied to the host image, and the data are then inserted into the frequency coefficients using a special algorithm. Such schemes are often irreversible: the data can be extracted, but the original host image is not fully and accurately recovered.
Reversible data hiding (RDH) techniques in the spatial domain are divided into several categories: difference expansion (DE), histogram shifting (HS), prediction-error expansion (PEE), pixel value grouping (PVG), and pixel value ordering (PVO). Over the years, researchers have proposed various RDH methods in these fields, all of which aim to reduce distortion and increase the quality of the marked image, or to increase the data insertion capacity. This tradeoff remains the main challenge and continues to occupy researchers. The following is a brief description of the papers available in each field:
An RDH method was proposed in [1] to increase the capacity based on the DE method. In this method, the host image is divided into 1 × 2 blocks, and spaces of −1, 0, and +1 are selected to insert the data. A lossless RDH method based on the DE histogram (DEH) was proposed to increase the capacity in [2], where the difference histogram (DH) peak point was selected to insert the data. In [3], a multilayer RDH method based on the DEH was proposed to increase the capacity and reduce the distortion. However, this method was not able to improve the capacity sufficiently. In [4], the RDH method was presented for gray images using the DEH and the module function. The position matrix and the marked image were sent separately to the receiver with low capacity.
In [5], to improve the capacity performance and increase security, a guided filter predictor and an adaptive PEE design were used to insert the data in color images using intrachannel correlation. A two-layer DH method based on the expansion and displacement of a PE pair in a two-dimensional histogram was proposed to increase the capacity by extracting the correlations between the consecutive PEs in [6], where their work was both low-capacity and low-quality. In [7], the authors presented an RDH method based on the multiple histogram correction and PEE to increase the capacity of the gray images, using a rhombus predictor to predict the host image pixels. In PE histogram (PEH) methods [8], to extract the redundancy between adjacent pixels, the correction path detection strategy is used to obtain a two-dimensional PEH that has a high distortion. In an RDH method [9], based on a two-layer insertion and PEH, the pixel pairs are selected based on pixel density and the spatial distance between two pixels. The distortion is much less than in the previous methods. In [10], an RDH method based on multiple histogram shifting was proposed that used a genetic algorithm (GA) to control the substantial image distortion. In [11], an RDH technique was proposed based on pairwise prediction-error expansion (PPEE) or two-dimensional PEH (2D-PEH) to increase capacity. In [12], an RDH technique based on the host image texture analysis was proposed by blocking the image and inserting the data into image texture blocks [13,14,15,16,17,18,19,20].
In another RDH method [21], the PVG method was used with multilayer insertion to increase the capacity. The PVG was applied on each block, and the zero point of the difference histogram was selected to insert the data bits. The pixel-based PVG (PPVG) technique in [22] was proposed to increase the capacity, applied to each block after image blocking. Each time a pixel located in the smooth areas of a block is used for inserting the data, the PE value is the difference between the reference pixel and that particular pixel. This method has a low capacity and moderate distortion. In [23], a technique was presented to increase the reliability of the image for inserting the data. Using PEE based on PVO, the authors inserted the data in the bins of −1 and +1 of the PEH. The problem was that the quality of the image was still low, while the capacity was not high. The authors of [24] presented a robust reversible data hiding (RRDH) scheme based on two-layer embedding with a reduced capacity-distortion tradeoff: it first decomposes the image into two planes, namely, the higher significant bit (HSB) and least significant bit (LSB) planes, and then employs prediction-error expansion (PEE) to embed the secret data into the HSB plane.
In [25], the authors proposed an AMBTC-based RDH method in which the Hamming distance and PVD methods were used to insert information. In [26], a PVD-based RDH method was used. In [27], an RDH method based on PVD and LSB substitution was used to reversibly insert information. The authors of [28] proposed an RDH in encrypted images (RDHEI) method with hierarchical embedding based on PE. PEs are divided into three categories: small-magnitude, medium-magnitude, and large-magnitude. In their approach, pixels with small-magnitude or large-magnitude PEs were used to insert data. Their method had a high capacity, but the image quality was still low.
The authors of [29] proposed an RDH method for color images using HS-based double-layer embedding. The authors used the image interpolation method to generate PE matrices for HS in the first-layer embedding, and local pixel similarity to calculate the difference matrices for HS in the second-layer embedding. In their process, the embedding capacity was low. In [30], the authors proposed a dual-image RDH method based on PE shift. Moreover, in their work, a bidirectional-shift strategy was used to extend the shiftable positions in the central zone of the allowable coordinates. In their work, the embedding capacity was low. Today, optimization algorithms such as the grasshopper optimization algorithm (GOA), whale optimization algorithm (WOA), moth–flame optimization (MFO) [31], Harris hawk optimization (HHO) [32], and artificial bee colony (ABC) [33] are used in many papers [34,35,36,37,38].
This paper proposes a DE-based multilayer image RDH method using the multilevel thresholding technique. Optimization algorithms are among the most popular methods for solving NP-hard problems, and the slime mould algorithm (SMA) gives good results in finding solutions to optimization problems. First, the SMA is applied to the host image to find two thresholds, and the image pixels are assigned to three different classes based on these thresholds. Multilevel thresholding creates more similarity between the pixels of each class. Therefore, owing to the reduced difference between each class's pixels, the quality of the marked image does not decrease when the data are inserted via DE; less distortion is created in the image, and the insertion capacity increases. Table 1 shows the critical acronyms and their meanings.
The method of Arham et al. [3] is the basis of the proposed method. Arham et al. [3] proposed a multilayer RDH method to reduce the distortion in the image. First, the host image is divided into 2 × 2 blocks. Then, a pixel vector is created for each block in each layer to calculate the differences between consecutive pixels, as shown in Figure 1. A threshold Th (2 ≤ Th ≤ 30) is considered, and only the blocks in which the differences between consecutive pixels are positive and less than the threshold value are selected for data insertion.
For each vector w_s, where s ∈ {0, 1, 2, 3} denotes the insertion layer, the s-th pixel of each block is left unmodified as a reference pixel to ensure reversibility in the extraction phase, and three bits of data are inserted into each 2 × 2 block. In the first insertion layer, shown in Figure 1, the pixel vector w_s is created to calculate the values v_1, v_2, and v_3. The difference between each pair of consecutive pixels is calculated using Equation (1):
v_1 = u_1 − u_0,  v_2 = u_2 − u_1,  v_3 = u_3 − u_2
Furthermore, to reduce the distortion and control overflow/underflow, Equation (2) is used to reduce the value of v_k (k = 1, 2, 3):
v′_k = v_k − 2^(n−1), if 2 × 2^(n−1) ≤ |v_k| ≤ 3 × 2^(n−1) − 1
v′_k = v_k − 2^n, if 3 × 2^(n−1) ≤ |v_k| ≤ 4 × 2^(n−1) − 1
where n is calculated using Equation (3):
n = ⌊log_2(|v_k|)⌋
Then, the value of v′_k is expanded using Equation (4) by embedding the k-th bit of the three-bit secret data sequence b = (b_1, b_2, b_3), as follows:
v″_k = 2 × v′_k + b_k
A location map (LM) is used to record the range of |v_k|, as shown in Equation (5), so that the original differences and the original pixels of the host image can be recovered in the extraction phase:
LM = 0, if 2 × 2^(n−1) ≤ |v_k| ≤ 3 × 2^(n−1) − 1
LM = 1, if 3 × 2^(n−1) ≤ |v_k| ≤ 4 × 2^(n−1) − 1
Each insertion layer creates a location map (M) to determine the location of blocks containing data. If the block is inserted with the data bits, the value of M is equal to 1. Otherwise, the value of M is 0. Finally, Equation (6) is used to insert the data in the vector u 0 :
v_0 = ⌊(u_0 + u_1 + u_2 + u_3) / 2⌋
u′_0 = u_0,  u′_1 = v″_1 + u_0,  u′_2 = v″_2 + u_1,  u′_3 = v″_3 + u_2
In the extraction phase, the secret data and the differences v_k are recovered using the location maps LM and M and Equation (7):
v_k = v′_k + 2^⌊log_2|v′_k|⌋, if LM = 0
v_k = v′_k + 2^(⌊log_2|v′_k|⌋+1), if LM = 1
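As an illustration, the reduction of Equations (2) and (3), the expansion of Equation (4), and the recovery of Equation (7) can be sketched as follows. This is a minimal sketch, not the authors' implementation; it assumes n = ⌊log_2|v|⌋ and that the reduction subtracts 2^(n−1) or 2^n according to the range of |v|, values consistent with the worked example later in the paper (e.g., 15 reduces to 7 and 10 reduces to 6):

```python
import math

def reduce_diff(v):
    # Eqs (2)-(3): shrink the difference before expansion; n = floor(log2|v|)
    n = int(math.floor(math.log2(abs(v))))
    if 2 * 2 ** (n - 1) <= abs(v) <= 3 * 2 ** (n - 1) - 1:
        return v - 2 ** (n - 1), 0   # LM = 0
    return v - 2 ** n, 1             # LM = 1

def expand(v_r, bit):
    # Eq (4): classic difference expansion of the reduced value
    return 2 * v_r + bit

def restore_diff(v_r, lm):
    # Eq (7): invert the reduction using the location map LM
    m = int(math.floor(math.log2(abs(v_r))))
    return v_r + 2 ** m if lm == 0 else v_r + 2 ** (m + 1)
```

For example, reduce_diff(15) gives (7, 1), and restore_diff(7, 1) returns the original difference 15.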
The remainder of this paper is organized as follows: Section 2 introduces the proposed plan that uses the Otsu thresholding method and SMA. In Section 3, the proposed methods are evaluated and compared with other works, and Section 4 contains the conclusions.

2. Proposed Method

The proposed method consists of three phases: in the first phase, the multilevel thresholding technique, combining the SMA with the Otsu evaluation function, is applied to the host image to classify the host image pixels based on the resulting thresholds; in the second phase, the data are inserted into the pixels of each class using DE; and in the third phase, the inserted data are extracted and the original image is recovered. Each of these three phases is described in turn below:

2.1. Multilevel Thresholding Using the Slime Mould Algorithm

In DE-based DH methods, finding the best embedding spaces, i.e., those that allow a high capacity while creating less distortion in the host image, is a significant issue. In other words, the smaller the difference between two consecutive pixels and the greater the similarity between pixels, the less distortion is created in the image after inserting the data using DE.
In this paper, we use the multilevel thresholding technique, combining the SMA with the Otsu evaluation function [34], to determine the two optimal thresholds (T_1 and T_2) for classifying the image pixels. The increased similarity and correlation between the pixels of each class reduce the differences between consecutive pixels in each class, which increases the capacity and reduces the distortion of the marked image.
The Otsu method is an automatic thresholding method based on the image histogram, and it defines the boundaries of objects in the image with high accuracy [34]. In gray images, the pixel intensity is between 0 and 255. The Otsu method therefore selects thresholds from 0 to 255 that maximize the interclass variance or, equivalently, minimize the intraclass variance [34]. All image pixels are grouped using multilevel thresholding based on correlation and similarity [35].
SMA is a recent population-based metaheuristic algorithm [36] inspired by the intelligent behavior of slime mould, which can navigate very quickly and without error. In this paper, the input of the SMA is the image histogram, and the output is the threshold vector. The best threshold vector found by the SMA is denoted by X*. Each slime mould is represented by a vector X_kt, in which kt = 1, 2:
X_kt = (T_1, T_2), for 0 < T_1 < T_2 < H
In Equation (8), T_1 and T_2 represent the search thresholds. In this paper, we seek the two optimal thresholds T_1 and T_2, using the SMA to find them, and classify the image pixels with these two thresholds.
The location of each slime mould that represents a threshold value is in the range [ lb ,   ub ], where lb and ub represent the lowest and highest brightness of the pixels in the host image, respectively. The initial location of the slime moulds is determined randomly, according to Equation (9):
x_i^1 = lb + rand(0, 1) × (ub − lb)
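The random initialization of Equation (9) can be sketched as follows. This is a hypothetical minimal sketch: each agent is a (T_1, T_2) pair, sorted so that T_1 < T_2 as required by Equation (8); the function name and signature are illustrative, not from the paper:

```python
import random

def init_slime_moulds(n_agents, lb, ub):
    # Eq (9): x = lb + rand(0,1) * (ub - lb), one pair of thresholds per agent
    population = []
    for _ in range(n_agents):
        t1, t2 = sorted(lb + random.random() * (ub - lb) for _ in range(2))
        population.append((t1, t2))
    return population
```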
where x_i^1 represents each threshold of the threshold vector. The value of the evaluation function is then calculated for all slime moulds; the slime mould with the best evaluation value is taken as the reference, and its location, denoted X*, provides the related thresholds. Slime mould approaches and locates prey by following the odor released into the air. Equation (10) describes the routing behavior of slime moulds based on this odor:
X(t + 1) = X_b(t) + vb · (W · X_A(t) − X_B(t)), if r < p
X(t + 1) = vc · X(t), if r ≥ p
where vb is a parameter in the range [−a, a], vc is a parameter that decreases linearly from 1 to 0, and t represents the current iteration. X_b(t) indicates the location of the slime mould with the highest odor concentration found so far, X(t) indicates the current location of the slime mould, and X(t + 1) is its location in the next iteration. X_A(t) and X_B(t) represent two randomly selected slime mould locations, and W represents the slime mould's weight. The value of p is obtained using Equation (11):
p = tanh(|S(i) − DF|)
where i = 1, 2, …, n indexes the cells of the slime mould, S(i) represents the value of the evaluation function of X(t), DF represents the best evaluation value obtained over all iterations, and vb is obtained from Equations (12) and (13):
vb = [−a, a]
a = arctanh(−(t / max_t) + 1)
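The parameter schedules of Equations (12) and (13) can be sketched as follows. This is a minimal sketch assuming iterations t = 1, …, max_t, with vc decreasing linearly from 1 to 0 as stated above; the function name is illustrative:

```python
import math
import random

def sma_params(t, max_t):
    # Eq (13): a = arctanh(-(t/max_t) + 1); clipped at 0 for the last iteration
    a = math.atanh(max(1.0 - t / max_t, 0.0))
    vb = random.uniform(-a, a)   # Eq (12): vb is drawn from [-a, a]
    vc = 1.0 - t / max_t         # decreases linearly from 1 to 0
    return a, vb, vc
```

The oscillation range [−a, a] shrinks as iterations proceed, so exploration gradually gives way to exploitation.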
W is obtained using Equation (14):
W(SmellIndex(i)) = 1 + r · log((bF − S(i)) / (bF − wF) + 1), if condition
W(SmellIndex(i)) = 1 − r · log((bF − S(i)) / (bF − wF) + 1), otherwise
SmellIndex = sort(S)
Here, condition indicates that S(i) ranks in the first half of the population, r is a random value in [0, 1], bF is the best evaluation value obtained in the current iteration, wF is the worst evaluation value obtained in the current iteration, and SmellIndex is the sequence of evaluation values sorted in ascending order. The location of the slime mould is updated using Equation (16) [36]:
X* = rand · (UB − LB) + LB, if rand < z
X* = X_b(t) + vb · (W · X_A(t) − X_B(t)), if r < p
X* = vc · X(t), if r ≥ p
where LB and UB are the lower and upper limits, which in this case are 0 and 255, respectively; rand and r are random values in [0, 1]; and the value of z is discussed in the parameter-setting test. This process is repeated until the stop condition, set here to 100 iterations, is met. The output X* then represents the optimal threshold vector. Finally, the host gray image A is divided into three separate classes C_1, C_2, and C_3 using the optimal thresholds T_kt (kt = 1, 2), as shown in Equation (17):
C_1 = {g(i, j) ∈ A | 0 ≤ g(i, j) ≤ T_1 − 1}
C_2 = {g(i, j) ∈ A | T_1 ≤ g(i, j) ≤ T_2 − 1}
C_3 = {g(i, j) ∈ A | T_2 ≤ g(i, j) ≤ H}
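The classification of Equation (17) can be sketched as follows. This is a minimal sketch in which pixels is a flat list of gray values; the function name is illustrative:

```python
def classify_pixels(pixels, t1, t2):
    # Eq (17): C1 holds values below t1, C2 values in [t1, t2), C3 values >= t2
    c1 = [g for g in pixels if g <= t1 - 1]
    c2 = [g for g in pixels if t1 <= g <= t2 - 1]
    c3 = [g for g in pixels if g >= t2]
    return c1, c2, c3
```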
where g(i, j) represents the pixel in row i and column j of image A, and H represents the maximum gray level of the gray image A. The thresholds T_kt are obtained by maximizing the evaluation function F, as shown in Equation (18):
T*_kt = argmax_{T_kt} F(T_kt)
where F(T_kt) is the evaluation function of the SMA, i.e., the Otsu evaluation function. The Otsu evaluation function is calculated using Equation (19) [28]:
F = Σ_{i=0}^{2} SUM_i (μ_i − μ_1)²
SUM_i = Σ_{j=T_i}^{T_{i+1}−1} P_j
μ_i = (Σ_{j=T_i}^{T_{i+1}−1} j · P_j) / SUM_i,  P_j = fer(j) / Num_p
In Equation (19), μ_1 is the average intensity of the host image A for T_1 = 0 and T_2 = H, μ_i is the average intensity of the class C_y for thresholds T_1 and T_2, and SUM_i is the sum of the class probabilities. In Equations (20) and (21), P_j is the probability of the j-th gray level, fer(j) is the frequency of the j-th gray level, and Num_p represents the total number of pixels in the host image A [36].
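The evaluation function of Equations (19)–(21) can be sketched as follows. This is a minimal sketch taking a 256-bin histogram and two thresholds; mu_t plays the role of μ_1, the global mean, and the function name is illustrative:

```python
def otsu_fitness(hist, t1, t2):
    # Eqs (19)-(21): between-class variance of the three classes cut at t1, t2
    total = sum(hist)
    p = [h / total for h in hist]                  # Eq (21): P_j = fer(j)/Num_p
    mu_t = sum(j * pj for j, pj in enumerate(p))   # global mean intensity
    f = 0.0
    for lo, hi in ((0, t1), (t1, t2), (t2, len(hist))):
        w = sum(p[lo:hi])                          # Eq (20): SUM_i
        if w > 0:
            mu = sum(j * p[j] for j in range(lo, hi)) / w  # class mean
            f += w * (mu - mu_t) ** 2              # Eq (19) contribution
    return f
```

Thresholds that separate the histogram's modes yield a larger F, which is what the SMA maximizes in Equation (18).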

2.2. Data Insertion Process

In this paper, differences of 0 and 1 are also expanded for inserting data, in addition to the differences between 2 and 32 (0 ≤ Th ≤ 32), to increase the capacity [3]. Two optimal thresholds T_1 and T_2 are obtained for the host image, and the host image pixels are assigned to three separate classes based on these two thresholds. Therefore, there is more similarity and correlation between the pixels of each class, and the differences between consecutive pixels in each class are smaller.
By reducing the value of differences between consecutive pixels in each class, less distortion is created in the image due to data insertion based on the expansion of the value of differences. Furthermore, the capacity increases when increasing the number of pixels with less difference between them.
After the SMA determines the optimal thresholds T_1 and T_2, and starting from the pixel in the first row and first column of the host image, the pixels belonging to the first class, whose values are smaller than T_1, are placed in the linear matrix C_1. The second-class pixels, whose values are larger than or equal to T_1 and smaller than T_2, are placed in the linear matrix C_2. Finally, the pixels belonging to the third class, whose values are larger than or equal to T_2, are placed in the linear matrix C_3. Figure 2 shows the status of the three matrices C_1, C_2, and C_3.
To insert the data, after classification of the host image pixels, the pixels of each matrix C_y (y = 1, 2, 3) are divided into non-overlapping blocks of size 1 × 5. The vector P is then created from the pixels of each block to calculate the difference values between consecutive pixels, as shown in Figure 3.
According to Equation (22), the differences between consecutive pixels in the vector P are calculated as:
v_1 = p_{C_y,2} − p_{C_y,1}
v_2 = p_{C_y,3} − p_{C_y,2}
v_3 = p_{C_y,4} − p_{C_y,3}
v_4 = p_{C_y,5} − p_{C_y,4}
Given that the proposed method inserts the data by expanding the difference values, to reduce distortion in the marked image, only blocks in which the differences between consecutive pixels lie between 0 and +32 (0 ≤ Th ≤ 32) are selected for insertion. Moreover, to reduce distortion in the marked image and control overflow/underflow, the range of each difference v_k (k = 1, 2, 3, 4) is diminished. A location map M_q (q = 1, 2, 3, 4) is created in each insertion layer to record the locations of blocks inserted with data bits: if a block is inserted with data bits, M_q is set to 1; otherwise, it is set to 0. Furthermore, a location map LM_y is used to separate the differences into the ranges 2 ≤ v_k ≤ 32 and 0 ≤ v_k ≤ 1, as shown in Equation (23):
LM_y = 0, if 0 ≤ v_k ≤ 1
LM_y = 1, if 2 ≤ v_k ≤ 32
If 0 ≤ v_k ≤ 1, Equation (24) can be used to change the range of v_k:
v′_k = |v_k + 2| − 2^⌊log_2|v_k + 2|⌋
If 2 ≤ v_k ≤ 32, Equation (25) can be used to change the range of v_k:
v′_k = |v_k| − 2^(n−1), if 2 × 2^(n−1) ≤ |v_k| ≤ 3 × 2^(n−1) − 1
v′_k = |v_k| − 2^n, if 3 × 2^(n−1) ≤ |v_k| ≤ 4 × 2^(n−1) − 1
where n is calculated using Equation (26):
n = ⌊log_2(|v_k|)⌋
In each insertion layer of each class y, a location map LM_{y,f}, where f = 1, 2, as shown in Equations (27) and (28), is used to recover the original differences. LM_{y,1} and LM_{y,2} separate the differences with 0 ≤ v_k ≤ 1 and 2 ≤ v_k ≤ 32, respectively.
LM_{y,1} = 0, if |v_k| = 0
LM_{y,1} = 1, if |v_k| = 1
LM_{y,2} = 0, if 2 × 2^(n−1) ≤ |v_k| ≤ 3 × 2^(n−1) − 1
LM_{y,2} = 1, if 3 × 2^(n−1) ≤ |v_k| ≤ 4 × 2^(n−1) − 1
The matrices M_q, LM_y, and LM_{y,f} are used to recover the original differences and original image pixels. In the proposed method, in the s-th insertion layer (s = 1, 2, 3, 4, 5), the s-th pixel is left unmodified for reversibility, and in each layer four bits of data are inserted into the other four pixels of each block. The data bit sequence is therefore divided into 4-bit subsequences b = (b_1, b_2, b_3, b_4)_2, with k = 1, 2, 3, 4. Then, using Equation (29), v′_k is expanded with a data bit:
v″_k = 2 × v′_k + b_k
The first pixel of each block is not modified in the first layer. Therefore, in the first insertion layer, the 4-bit sequence b is inserted into the pixels of each block as shown in Equation (30). In the second insertion layer, the second pixel of each block is not selected for data insertion, and the 4-bit sequence b is inserted as shown in Equation (31). In the third insertion layer, the third pixel of each block is not selected, according to Equation (32), and in the fourth insertion layer, the fourth pixel of each block is not selected, according to Equation (33).
p′_{C_y,1} = p_{C_y,1}
p′_{C_y,2} = v″_1 + p_{C_y,1}
p′_{C_y,3} = v″_2 + p_{C_y,2}
p′_{C_y,4} = v″_3 + p_{C_y,3}
p′_{C_y,5} = v″_4 + p_{C_y,4}
p′_{C_y,1} = v″_1 + p_{C_y,1}
p′_{C_y,2} = p_{C_y,2}
p′_{C_y,3} = v″_2 + p_{C_y,2}
p′_{C_y,4} = v″_3 + p_{C_y,3}
p′_{C_y,5} = v″_4 + p_{C_y,4}
p′_{C_y,1} = v″_1 + p_{C_y,1}
p′_{C_y,2} = v″_2 + p_{C_y,2}
p′_{C_y,3} = p_{C_y,3}
p′_{C_y,4} = v″_3 + p_{C_y,3}
p′_{C_y,5} = v″_4 + p_{C_y,4}
p′_{C_y,1} = v″_1 + p_{C_y,1}
p′_{C_y,2} = v″_2 + p_{C_y,2}
p′_{C_y,3} = v″_3 + p_{C_y,3}
p′_{C_y,4} = p_{C_y,4}
p′_{C_y,5} = v″_4 + p_{C_y,4}
Therefore, through the data insertion process applied to the pixels p_{C_y,1}, p_{C_y,2}, p_{C_y,3}, p_{C_y,4}, and p_{C_y,5} of each block, five marked pixels p′_{C_y,1}, p′_{C_y,2}, p′_{C_y,3}, p′_{C_y,4}, and p′_{C_y,5} are obtained, yielding the marked image A′.
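The first-layer embedding and its extraction can be sketched as follows. This is a minimal sketch of Equations (22), (29), (30), and (35)–(38), shown without the range reduction of Equations (24) and (25) for clarity; each marked pixel adds the expanded difference to the previous original pixel, and extraction recovers the originals progressively. The function names are illustrative:

```python
def embed_block_layer1(p, bits):
    # Eq (22): differences between consecutive pixels of the 1x5 block
    v = [p[k + 1] - p[k] for k in range(4)]
    # Eq (29): difference expansion, one secret bit per difference
    vpp = [2 * vk + b for vk, b in zip(v, bits)]
    # Eq (30): reference pixel kept; marked pixel = expanded diff + prev original
    return [p[0]] + [p[k] + vpp[k] for k in range(4)]

def extract_block_layer1(pp):
    # Eqs (35)-(38): recover bits and original pixels progressively
    p, bits = [pp[0]], []
    for k in range(4):
        vpp = pp[k + 1] - p[k]      # Eq (35): marked pixel minus recovered pixel
        bits.append(vpp & 1)        # Eq (36): hidden bit is the LSB
        p.append(p[k] + vpp // 2)   # Eqs (37)-(38): v = floor(v''/2)
    return p, bits
```

A round trip on a sample block returns exactly the original pixels and the embedded bits, which is the reversibility property the method relies on.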
The steps of the data insertion process in each insertion layer are as follows:
Step 1: Divide the data sequence into 4-bit subsequences b.
Step 2: Apply the SMA to the host image to determine the optimal thresholds T_1 and T_2.
Step 3: Classify the image pixels based on the thresholds T_1 and T_2.
Step 4: Divide the class pixels C_y into non-overlapping blocks of size 1 × 5.
Step 5: Produce the location map M_q to record the block locations that satisfy the insertion conditions.
Step 6: Produce the vector P for each selected block to calculate the difference values.
Step 7: Calculate the differences v_k between consecutive pixels.
Step 8: Calculate the reduced values v′_k to reduce image distortion and prevent overflow/underflow, and then the expanded values v″_k.
Step 9: Produce the location map LM_y.
Step 10: Calculate the marked pixels p′_{C_y,1}, p′_{C_y,2}, p′_{C_y,3}, p′_{C_y,4}, and p′_{C_y,5}.
Step 11: Produce the marked image A′ and save the location maps LM_y, LM_{y,f}, and M_q, and the thresholds T_1 and T_2, in order to extract the original data and recover the original image.

2.3. Data Extraction Process

In the extraction phase, the marked image A′ is first classified using the thresholds T_1 and T_2, and the pixels belonging to each class are placed in the matrix C_y. The pixels of each matrix C_y are then divided into blocks of size 1 × 5, and using the location maps LM_y and LM_{y,f}, the secret data are extracted from the corresponding blocks and the original image pixels are recovered. The extraction process is performed from the first insertion layer to the last. After dividing the matrix C_y into blocks of size 1 × 5, the vector p′ is created for the pixels of each block, according to Equation (34):
p′ = (p′_{C_y,1}, p′_{C_y,2}, p′_{C_y,3}, p′_{C_y,4}, p′_{C_y,5})
In each insertion layer, the value v″_k is obtained, and its least significant bit (LSB) is extracted as the k-th bit of the secret data. In the fourth insertion layer, extracting the data bits b_k and recovering the original pixels from each vector p′ is performed as follows (see Equations (35)–(38)):
v″_k = p′_{C_y,k+1} − p_{C_y,k}
b_k = LSB(v″_k)
v′_k = ⌊v″_k / 2⌋
p_{C_y,k+1} = p_{C_y,k} + v_k
If LM_{y,1} is equal to 0 or 1, Equation (39) can be used to obtain v_k; if LM_{y,2} is equal to 0 or 1, Equation (40) can be used to obtain v_k.
v_k = |v′_k| − 2^⌊log_2|v′_k|⌋
v_k = |v′_k| + 2^⌊log_2|v′_k|⌋, if LM_{y,2} = 0
v_k = |v′_k| + 2^(⌊log_2|v′_k|⌋+1), if LM_{y,2} = 1
The values of p_{C_y,1}, p_{C_y,2}, b_1, and v_1 are obtained using Equations (41)–(45):
p_{C_y,1} = p′_{C_y,1}
v″_1 = p′_{C_y,2} − p_{C_y,1}
b_1 = LSB(v″_1)
v′_1 = ⌊v″_1 / 2⌋
p_{C_y,2} = p_{C_y,1} + v_1
Therefore, the value v 1 is obtained using Equation (39) or Equation (40). Then, p C y , 3 , b 2 , and v 2 are obtained using the following equations:
v″_2 = p′_{C_y,3} − p_{C_y,2}
b_2 = LSB(v″_2)
v′_2 = ⌊v″_2 / 2⌋
p_{C_y,3} = p_{C_y,2} + v_2
Therefore, the value v 2 is obtained using Equation (39) or Equation (40).
Then, p C y , 4 , b 3 , and v 3 are obtained using the following equations:
v″_3 = p′_{C_y,4} − p_{C_y,3}
b_3 = LSB(v″_3)
v′_3 = ⌊v″_3 / 2⌋
p_{C_y,4} = p_{C_y,3} + v_3
Therefore, the value v_3 is obtained using Equation (39) or Equation (40).
Then, p C y , 5 , b 4 , and v 4 are obtained using the following equations:
v″_4 = p′_{C_y,5} − p_{C_y,4}
b_4 = LSB(v″_4)
v′_4 = ⌊v″_4 / 2⌋
p_{C_y,5} = p_{C_y,4} + v_4
Therefore, the value v_4 is obtained using Equation (39) or Equation (40).
The steps of the data extraction process and the original host image pixel recovery in each layer are as follows:
Step 1: Classify the pixels of the marked image A′ to get three classes C_y.
Step 2: Dividing the class pixels C y into non-overlapping blocks of size 1 × 5.
Step 3: Identify marked blocks using the location map M q .
Step 4: Produce the vector p′ for each block.
Step 5: Calculate the value v″_k.
Step 6: Extract the inserted data bit b_k as the LSB of v″_k.
Step 7: Calculate the value v′_k.
Step 8: Calculate the original difference v_k using LM_y and LM_{y,f}.
Step 9: Calculate the original pixels of each block.
Step 10: Recover the original image A.
The marked pixels in the respective block are shown in Figure 4. An example of the embedding and extraction process is given below.
Embedding process:
P = (106, 121, 148, 158, 155)
v_1 = 121 − 106 = 15
v_2 = 148 − 121 = 27
v_3 = 158 − 148 = 10
v_4 = 158 − 158 = 0
v_k = (15, 27, 10, 0), b = (0, 1, 1, 0)
v_1 = 15, n = 3, v′_1 = 7, LM_{y,2} = 1, b_1 = 0, v″_1 = 2 × 7 + 0 = 14
v_2 = 27, n = 4, v′_2 = 11, LM_{y,2} = 1, b_2 = 1, v″_2 = 2 × 11 + 1 = 23
v_3 = 10, n = 3, v′_3 = 6, LM_{y,2} = 0, b_3 = 1, v″_3 = 2 × 6 + 1 = 13
v_4 = 0, v′_4 = 1, LM_{y,1} = 1, b_4 = 0, v″_4 = 2 × 1 + 0 = 2
v′_k = (7, 11, 6, 1), v″_k = (14, 23, 13, 2), LM_{y,2} = (1, 1, 0)_2, M = 1
p′_1 = 106
p′_2 = 106 + 14 = 120
p′_3 = 121 + 23 = 144
p′_4 = 148 + 13 = 161
p′_5 = 158 + 2 = 160
p′ = (106, 120, 144, 161, 160)
Extraction process:
P′ = (106, 120, 144, 161, 160)
v′_1 = 120 − 106 = 14, b_1 = LSB(14) = LSB((1110)_2) = 0, v″_1 = ⌊14/2⌋ = 7, LM_y,1 = 1, v_1 = 15, p_Cy,1 = 106, p_Cy,2 = 106 + 15 = 121
v′_2 = 144 − 121 = 23, b_2 = LSB(23) = LSB((10111)_2) = 1, v″_2 = ⌊23/2⌋ = 11, LM_y,2 = 1, v_2 = 27, p_Cy,3 = 121 + 27 = 148
v′_3 = 161 − 148 = 13, b_3 = LSB(13) = LSB((1101)_2) = 1, v″_3 = ⌊13/2⌋ = 6, LM_y,2 = 0, v_3 = 10, p_Cy,4 = 148 + 10 = 158
v′_4 = 160 − 158 = 2, b_4 = LSB(2) = LSB((10)_2) = 0, v″_4 = ⌊2/2⌋ = 1, LM_y,1 = 1, v_4 = 0, p_Cy,5 = 158 + 0 = 158
Finally, the initial pixels of the corresponding block are retrieved as follows.
P = (106, 121, 148, 158, 155)

3. Results

In this paper, the grayscale images of Lena, Peppers, Airplane, Baboon, Ship, Lake, Bridge, Cameraman, and Barbara, taken from the USC-SIPI database, are used as host images of size 512 × 512. Figure 5 shows the host images used in this paper. The proposed method was simulated in MATLAB R2018b on a 64-bit Windows system with a Core i5 CPU.

3.1. Evaluation Metrics

To evaluate the proposed method and compare it with the methods of Arham et al. [3] and Kumar et al. [24], the peak signal-to-noise ratio (PSNR), insertion capacity, structural similarity index measure (SSIM), and processing time metrics were used.
PSNR: The PSNR measures the quality of the marked image and its similarity to the original host image, and thus also quantifies the distortion ratio. The PSNR is computed from the mean squared error (MSE) and is inversely related to it. The MSE and PSNR are calculated using Equations (59) and (60), respectively [21,22].
MSE = (1/(m × n)) Σ_{i=1}^{m} Σ_{j=1}^{n} (A(i, j) − A′(i, j))²    (59)
PSNR = 10 log_10(255² / MSE)    (60)
For an image of size m × n, A(i, j) represents the host image pixels, and A′(i, j) represents the marked image pixels.
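As a quick sketch of Equations (59) and (60), the two metrics can be computed as follows (assuming 8-bit images, so the peak value is 255):

```python
import numpy as np

def psnr(host, marked):
    """PSNR (dB) between a host image A and its marked version A',
    per Equations (59) and (60); assumes 8-bit images (peak 255)."""
    host = host.astype(np.float64)
    marked = marked.astype(np.float64)
    mse = np.mean((host - marked) ** 2)       # Eq. (59)
    if mse == 0:
        return float("inf")                   # identical images
    return 10 * np.log10(255.0 ** 2 / mse)    # Eq. (60)

# Example: a +1 change on every pixel gives MSE = 1 and PSNR of about 48.13 dB
a = np.full((512, 512), 100, dtype=np.uint8)
print(round(psnr(a, a + 1), 2))  # 48.13
```

Higher PSNR values therefore indicate less distortion in the marked image.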
Insertion capacity: This paper used a random bit sequence as secret data. For a gray host image with a size of m × n, the maximum insertion capacity was calculated according to the number of bits per pixel (bpp), using Equation (61) [3]:
Insertion capacity (bpp) = (length of the secret bit sequence) / (m × n)    (61)
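Equation (61) is a simple ratio of payload size to pixel count; a minimal sketch:

```python
def insertion_capacity_bpp(secret_bits_length, m, n):
    """Insertion capacity in bits per pixel (bpp) for an m x n host image,
    per Equation (61)."""
    return secret_bits_length / (m * n)

# A payload of 131,072 bits in a 512 x 512 image is exactly 0.5 bpp.
print(insertion_capacity_bpp(131072, 512, 512))  # 0.5
```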
In the proposed method, the multilayer insertion technique enhances the capacity. Arham et al. [3] considered the maximum number of insertion layers to be eight. Therefore, the data insertion process was performed two times, and at each time, data bits were inserted under four layers—the first time from the first layer to the fourth layer, and the second time from the fifth layer to the eighth layer.
SSIM: The SSIM is a widely used metric for measuring the structural similarity between the host image A and the marked image A′. It can be obtained using Equation (62) [34]:
SSIM(A, A′) = ((2 μ_A μ_A′ + c_1)(2 σ_{A,A′} + c_2)) / ((μ_A² + μ_A′² + c_1)(σ_A² + σ_A′² + c_2))    (62)
where μ_A and μ_A′ are the mean brightness intensities of A and A′, respectively; σ_A and σ_A′ are the standard deviations of A and A′; σ_{A,A′} is the covariance between A and A′; and c_1 and c_2 are two constants set to 6.50 and 58.52, respectively. The higher the SSIM value in data hiding methods and the closer it is to 1, the more effective the corresponding process [34].
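A minimal sketch of Equation (62), computed globally over the whole image with the constants given above (library SSIM implementations such as scikit-image average the index over local windows, so their values can differ slightly from this single-window form):

```python
import numpy as np

def ssim_global(a, a_prime, c1=6.50, c2=58.52):
    """Single-window (global) SSIM between host A and marked A',
    following Equation (62)."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(a_prime, dtype=np.float64)
    mu_a, mu_b = a.mean(), b.mean()
    da, db = a - mu_a, b - mu_b
    var_a, var_b = (da ** 2).mean(), (db ** 2).mean()  # sigma_A^2, sigma_A'^2
    cov = (da * db).mean()                             # sigma_{A,A'}
    num = (2 * mu_a * mu_b + c1) * (2 * cov + c2)
    den = (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)
    return num / den

rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(64, 64))
print(ssim_global(img, img))  # identical images give exactly 1.0
```

Any distortion pushes the value below 1, so marked images closer to 1 preserve the host structure better.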
Processing time: processing time is one of the essential parameters for comparing DH methods. The total processing time, taken as the sum of the insertion and extraction times, is therefore used to compare the proposed method with the other methods.

3.2. Comparison with the Other Methods

In this paper, the proposed method is compared with the methods of Arham et al. [3], Yao et al. [29], and Kumar et al. [24]. Multilevel thresholding increases the similarity and correlation between the pixels belonging to each class and decreases the differences between them. As a result, inserting several layers of data creates less distortion in the image.
Because the proposed method expands all zero and positive difference values to insert the data, it achieves a higher insertion capacity and PSNR than the methods of Arham et al. [3], Yao et al. [29], and Kumar et al. [24] in both the first and the higher insertion layers. As a result, the distortion of the marked image is lower for the proposed method than for those methods. Table 2 shows the maximum insertion capacity for the different images for thresholds T_1 and T_2 in the range 0 ≤ Th ≤ +32. The values of T_1 and T_2 are set by the SMA. Table 3 shows the processing times (in seconds) for the different images for the same thresholds.
As can be seen from Table 2 and Table 3, for the two thresholds T_1 and T_2 in the first insertion layer, the proposed method has a higher insertion capacity and PSNR for all images than the methods of Arham et al. [3] and Kumar et al. [24].
According to Table 2, the first insertion layer of the proposed method provides 834 bits (0.0032 bpp), 1689 bits (0.0064 bpp), 766 bits (0.0029 bpp), 1561 bits (0.0059 bpp), 2241 bits (0.0085 bpp), 1604 bits (0.0061 bpp), 4402 bits (0.0168 bpp), 4794 bits (0.0183 bpp), and 1571 bits (0.0061 bpp) more capacity than the method of Arham et al. [3] for the Airplane, Baboon, Barbara, Ship, Lena, Lake, Bridge, Cameraman, and Peppers images, respectively.
In the proposed method, the Lena image has the highest increase in capacity (0.0085 bpp), while the Barbara image has the lowest increase in capacity (0.0029 bpp), compared to the method of Arham et al. [3]. Therefore, on average, the capacity of the first insertion layer in the proposed method is 0.0058 bpp more than that of the method of Arham et al. [3]. The average processing time for the proposed method is 173.2367 s, while that for the method of Arham et al. [3] is 150.8417 s. Therefore, the proposed method is slower than the method of Arham et al. [3], due to the use of the SMA and its repetitions to obtain optimal thresholds.
Moreover, the proposed method outperforms the method of Kumar et al. [24] in terms of PSNR and embedding capacity, as can be seen in Table 2 and Table 3, but is slower than the methods of Kumar et al. [24] and Arham et al. [3] in terms of execution time. Furthermore, compared to the method of Kumar et al. [24], the proposed method yields 82,485 bits (0.3145 bpp), 11,874 bits (0.0324 bpp), 40,762 bits (0.1581 bpp), 5000 bits (0.0191 bpp), 85,000 bits (0.3242 bpp), 34,232 bits (0.1306 bpp), 36,144 bits (0.1379 bpp), 9000 bits (0.0343 bpp), and 18,985 bits (0.0724 bpp) more capacity for the Lake, Bridge, Cameraman, Airplane, Baboon, Barbara, Ship, Lena, and Peppers images, respectively.
Table 4 also compares the PSNR values of the proposed method with the method of Arham et al. [3] for different capacities of 0.1 bpp, 0.2 bpp, 0.3 bpp, 0.4 bpp, 0.5 bpp, 0.6 bpp, and 0.7 bpp in the first insertion layer. As can be seen from Table 4, the average PSNR of the proposed method for different capacities is higher than that of the method of Arham et al. [3].
Figure 6 shows the PSNR comparison diagram of the proposed method and the method of Arham et al. [3] for the different images under the same capacities, drawn using the data shown in Table 4 and Table 5. As can be seen from the diagrams in Figure 6, the proposed method has a higher PSNR value than the method of Arham et al. [3] for all images. For the first insertion layer and the capacities of 0.1 bpp, 0.2 bpp, 0.3 bpp, 0.4 bpp, 0.5 bpp, 0.6 bpp, and 0.7 bpp, the quality gains of the proposed method over the method of Arham et al. [3] are 0.82 dB, 0.85 dB, 1.14 dB, 1.04 dB, 1.62 dB, 1.67 dB, and 0.94 dB for the Airplane image; 1.06 dB, 0.96 dB, 0.90 dB, 0.39 dB, 0.63 dB, 0.57 dB, and 1.49 dB for the Baboon image; 1.45 dB, 0.70 dB, 0.61 dB, 1.19 dB, 0.14 dB, 1.11 dB, and 1.06 dB for the Barbara image; 1.14 dB, 1.60 dB, 1.63 dB, 1.06 dB, 1.23 dB, 1.18 dB, and 1.59 dB for the Ship image; 0.48 dB, 1.81 dB, 0.87 dB, 0.40 dB, 1.27 dB, 1.58 dB, and 1.28 dB for the Lena image; 1.65 dB, 1.82 dB, 1.33 dB, 1.50 dB, 0.59 dB, 1.17 dB, and 1.35 dB for the Peppers image; 1.47 dB, 1.21 dB, 1.10 dB, 1.07 dB, 1.13 dB, 1.14 dB, and 0.69 dB for the Lake image; 1.94 dB, 1.38 dB, 1.69 dB, 1.29 dB, 1.55 dB, 1.11 dB, and 1.06 dB for the Cameraman image; and 1.45 dB, 1.82 dB, 0.96 dB, 1.24 dB, 1.50 dB, 0.84 dB, and 1.06 dB for the Bridge image, respectively.
The proposed method has a higher insertion capacity than that of Arham et al. [3] for the first and higher insertion layers. Table 5 compares the PSNR of the proposed method and the method of Arham et al. [3] for the eight insertion layers. As the number of insertion layers increases, the total insertion capacity increases. As shown in Table 5, the insertion capacity and PSNR values of the proposed method are higher than those of the method of Arham et al. [3]. According to Table 5, the average insertion capacity increase in the eight insertion layers is 0.625 bpp for the proposed method and 0.572 bpp for the method of Arham et al. [3]. In insertion layer eight, the Peppers, Lena, Barbara, and Baboon images show the highest increase in capacity, while the Airplane, Ship, Lake, Bridge, and Cameraman images show the smallest increase, compared to the method of Arham et al. [3].
In insertion layer eight, the Baboon image has the highest increase in PSNR (1.25 dB), while the Barbara image has the lowest increase in PSNR (1 dB), compared to the method of Arham et al. [3]. In insertion layer eight, the average insertion capacity of the proposed method and the method of Arham et al. [3] is 6.33 bpp and 6 bpp, respectively, while the average PSNR is 29.75 dB and 28.88 dB, respectively. In insertion layer eight, the proposed method therefore provides an average capacity increase of 0.33 bpp and an average quality increase of 0.87 dB compared to the method of Arham et al. [3]. Figure 7 shows a comparison diagram of the capacity and PSNR values of the proposed method and the method of Arham et al. [3] for the eight insertion layers, drawn using the data shown in Table 5.
Table 6 shows the SSIM values of the proposed method and the methods of Arham et al. [3] and Kumar et al. [24] at the maximum embedding capacity (bits). According to Table 6, the proposed method achieves a higher SSIM than the compared methods, while its capacity is also greater than that of the compared methods.
Table 7 compares the proposed method with the method of Arham et al. [3] in terms of the SSIM evaluation metric for one embedding layer and several embedding layers. As can be seen in Table 7, the proposed method achieves a higher SSIM than the method of Arham et al. [3] for all capacities (bpp). As the capacity in bpp increases, the SSIM values of both methods decrease; still, in every case, the proposed method remains superior to the method of Arham et al. [3] in terms of SSIM.
Table 8 shows the PSNR and SSIM values for the proposed method and the method of Yao et al. [29] at capacities of 30,000 and 50,000 bits. As can be seen from Table 8, the proposed method has higher SSIM and PSNR values compared to the method of Yao et al. [29].
In this paper, a multilayer RDH method based on the multilevel thresholding technique is proposed, aiming to increase the insertion capacity and reduce the distortion after data embedding by improving the correlation between consecutive pixels of the image. Firstly, the SMA is applied to find the optimal thresholds of host image segmentation. Next, according to the specified threshold, image pixels located in different image areas are classified into different categories. Finally, the difference between two consecutive pixels is reduced in each class, and then the data are embedded via DE.
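The thresholding-and-classification stage described above can be sketched as follows. This is an illustration only: the two thresholds are fixed here (using Lena's T_1 = 85 and T_2 = 152 from Table 2) rather than produced by the SMA.

```python
import numpy as np

def classify_pixels(image, t1, t2):
    """Assign every pixel of the host image to one of the three classes
    C_1, C_2, C_3 using two thresholds. In the proposed method the
    thresholds come from the SMA; here they are fixed for illustration."""
    # label 0: p < t1, label 1: t1 <= p < t2, label 2: p >= t2
    labels = np.digitize(image, bins=[t1, t2])
    return [image[labels == c] for c in range(3)]

rng = np.random.default_rng(1)
host = rng.integers(0, 256, size=(8, 8))
c1, c2, c3 = classify_pixels(host, 85, 152)
assert c1.size + c2.size + c3.size == host.size  # a partition of the pixels
```

Grouping the pixels this way keeps similar intensities together, which is what shrinks the consecutive-pixel differences before the DE embedding.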

4. Conclusions

In this paper, a new multilayer RDH technique was proposed using multilevel thresholding, where the SMA was first used to determine two optimal threshold values on the host image. The image pixels were then assigned to their specific classes based on those thresholds, so that similar, highly correlated pixels were grouped together and the differences between the pixels of each class decreased. It was shown that the distortion of the marked image after data insertion decreased as the differences between the pixels of each class were reduced. The proposed method has a simple implementation. The results showed that, in comparison with the methods of Arham et al. [3] and Kumar et al. [24], the proposed method offers higher capacity and PSNR, but is slightly slower. Therefore, using the multilevel thresholding technique improved the proposed multilayer RDH method compared to the other methods: it reduced the differences between the pixels, and the pixels in the same class were very similar to one another. In future work, we intend to use deep learning methods to extract image features and thresholds in order to classify image features and maximize the capacity and quality of the marked image.

Author Contributions

Conceptualization, A.M., B.k.D., J.L.W., H.A.A., E.E., M.D., R.A.Z. and L.A.; methodology, A.M. and L.A.; software, A.M., J.L.W., H.A.A., M.D. and L.A.; validation, A.M., B.k.D., J.L.W., H.A.A., E.E., M.D., R.A.Z. and L.A.; formal analysis, A.M., B.k.D., J.L.W., H.A.A., E.E., M.D., R.A.Z. and L.A.; investigation, E.E., M.D., R.A.Z. and L.A.; resources, A.M., B.k.D., J.L.W., H.A.A. and L.A.; data curation, A.M., B.k.D., J.L.W., H.A.A., E.E., M.D., R.A.Z. and L.A.; writing—original draft preparation, A.M., B.k.D., J.L.W., H.A.A., E.E., M.D., R.A.Z. and L.A.; writing—review and editing, A.M., B.k.D., J.L.W.; visualization, A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

The study did not require ethical approval.

Data Availability Statement

Data are available upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abdullah, S.M.; Manaf, A.A. Multiple Layer Reversible Images Watermarking Using Enhancement of Difference Expansion Techniques; Springer: Berlin/Heidelberg, Germany, 2010; Volume 87, pp. 333–342. [Google Scholar]
  2. Zeng, X.; Li, Z.; Ping, L. Reversible data hiding scheme using reference pixel and multi-layer embedding. Int. J. Electron. Commun. (AEÜ) 2012, 66, 532–539. [Google Scholar] [CrossRef]
  3. Arham, A.; Nugroho, H.A.; Adji, T.B. Multiple Layer Data Hiding Scheme Based on Difference Expansion of Quad. Signal Process. 2017, 137, 52–62. [Google Scholar] [CrossRef]
  4. Maniriho, P.; Ahmad, T. Information Hiding Scheme for Digital Images Using Difference Expansion and Modulus Function. J. King Saud Univ. Comput. Inf. Sci. 2018, 31, 335–347. [Google Scholar] [CrossRef]
  5. Yao, H.; Qin, C.; Tang, Z.; Tian, Y. Guided filtering based color image reversible data hiding. J. Vis. Commun. Image R 2013, 43, 152–163. [Google Scholar] [CrossRef]
  6. Ou, B.; Li, X.; Zhao, Y. Pairwise Prediction-Error Expansion for Efficient Reversible Data Hiding. IEEE Trans. Image Process. 2013, 22, 5010–5021. [Google Scholar] [CrossRef] [PubMed]
  7. Li, X.; Zhang, W.; Gui, X.; Yang, B. Efficient Reversible Data Hiding Based on Multiple Histograms Modification. IEEE Trans. Inf. Forensics Secur. 2015, 10, 2016–2027. [Google Scholar]
  8. Fu, D.; Jing, Z.J.; Zhao, S.G.; Fan, J. Reversible data hiding based on prediction-error histogram shifting and EMD mechanism. Int. J. Electron. Commun. 2014, 68, 933–943. [Google Scholar] [CrossRef]
  9. Ou, B.; Li, X.; Wang, J.; Peng, F. High-fidelity reversible data hiding based on geodesic path and pairwise prediction-error expansion. Neurocomputing 2016, 68, 933–943. [Google Scholar] [CrossRef]
  10. Wang, J.; Ni, J.; Zhang, X.; Shi, Y.Q. Rate and Distortion Optimization for Reversible Data Hiding Using Multiple Histogram Shifting. IEEE Trans. Cybern. 2016, 47, 315–326. [Google Scholar] [CrossRef]
  11. Xiao, M.; Li, X.; Wang, Y.; Zhao, Y.; Ni, R. Reversible data hiding based on pairwise embedding and optimal expansion path. Signal Process. 2019, 19, 30017–30019. [Google Scholar] [CrossRef]
  12. Zhou, H.; Chen, K.; Zhang, W.; Yu, N. Comments on Steganography Using Reversible Texture Synthesis. IEEE Trans. Image Process. 2017, 26, 1623–1625. [Google Scholar] [CrossRef] [PubMed]
  13. Shehab, M.; Abualigah, L.; Shambour, Q.; Abu-Hashem, M.A.; Shambour, M.K.Y.; Alsalibi, A.I.; Gandomi, A.H. Machine learning in medical applications: A review of state-of-the-art methods. Comput. Biol. Med. 2022, 145, 105458. [Google Scholar] [CrossRef] [PubMed]
  14. Zhu, X.; Zhou, M. Multiobjective Optimized Cloudlet Deployment and Task Offloading for Mobile-Edge Computing. IEEE Internet Things J. 2021, 8, 15582–15595. [Google Scholar] [CrossRef]
  15. Zhu, Q.-H.; Tang, H.; Huang, J.-J.; Hou, Y. Task Scheduling for Multi-Cloud Computing Subject to Security and Reliability Constraints. IEEE/CAA J. Autom. Sin. 2021, 8, 848–865. [Google Scholar] [CrossRef]
  16. Ezugwu, A.E.; Ikotun, A.M.; Oyelade, O.O.; Abualigah, L.; Agushaka, J.O.; Eke, C.I.; Akinyelu, A.A. A comprehensive survey of clustering algorithms: State-of-the-art machine learning applications, taxonomy, challenges, and future research prospects. Eng. Appl. Artif. Intell. 2022, 110, 104743. [Google Scholar] [CrossRef]
  17. Otair, M.; Abualigah, L.; Qawaqzeh, M.K. Improved near-lossless technique using the Huffman coding for enhancing the quality of image compression. Multimed. Tools Appl. 2022, 1–21. [Google Scholar] [CrossRef]
  18. Tang, J.; Liu, G.; Pan, Q. A Review on Representative Swarm Intelligence Algorithms for Solving Optimization Problems: Applications and Trends. IEEE/CAA J. Autom. Sin. 2021, 8, 1627–1643. [Google Scholar] [CrossRef]
  19. Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  20. He, W.; Cai, J.; Xiong, G.; Zhou, K. Improved reversible data hiding using pixel-based pixel value grouping. Optik 2017, 157, 68–78. [Google Scholar] [CrossRef]
  21. He, W.; Xiong, G.; Zhou, K.; Cai, J. Reversible data hiding based on multilevel histogram modification and pixel value grouping. J. Vis. Commun. Image Represent. 2016, 40, 459–469. [Google Scholar] [CrossRef]
  22. Li, X.; Li, J.; Li, B.; Yang, B. High-fidelity reversible data hiding scheme based on pixel-value-ordering and prediction-error expansion. Signal Process. 2013, 93, 198–205. [Google Scholar] [CrossRef]
  23. Kumar, R.; Jung, K.H. Robust reversible data hiding scheme based on two-layer embedding strategy. Inf. Sci. 2020, 512, 96–107. [Google Scholar] [CrossRef]
  24. Kumar, R.; Kim, D.S.; Jung, K.H. Enhanced AMBTC based data hiding method using hamming distance and pixel value differencing. J. Inf. Secur. Appl. 2019, 47, 94–103. [Google Scholar] [CrossRef]
  25. Kim, P.H.; Ryu, K.W.; Jun, K.H. Reversible data hiding scheme based on pixel-value differencing in dual images. Int. J. Distrib. Sens. Netw. 2020, 16, 1550147720911006. [Google Scholar] [CrossRef]
  26. Hussain, M.; Riaz, Q.; Saleem, S.; Ghafoor, A.; Jung, K.H. Enhanced adaptive data hiding method using LSB and pixel value differencing. Multimed. Tools Appl. 2021, 80, 20381–20401. [Google Scholar] [CrossRef]
  27. Yu, C.; Zhang, X.; Zhang, X.; Li, G.; Tang, Z. Reversible Data Hiding with Hierarchical Embedding for Encrypted Images. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 451–466. [Google Scholar] [CrossRef]
  28. Tang, Z.; Nie, H.; Pun, C.M.; Yao, H.; Yu, C.; Zhang, X. Color Image Reversible Data Hiding with Double-Layer Embedding. IEEE Access 2020, 8, 6915–6926. [Google Scholar] [CrossRef]
  29. Yao, H.; Mao, F.; Tang, Z.; Qin, C. High-fidelity dual-image reversible data hiding via prediction-error shift. Signal Process. 2020, 170, 107447. [Google Scholar] [CrossRef]
  30. Salehnia, T.; Izadi, S.; Ahmadi, M. Multilevel image thresholding using GOA, WOA and MFO for image segmentation. In Proceedings of the 8th International Conference on New Strategies in Engineering, Information Science and Technology in the Next Century, Dubai, United Arab Emirates (UAE), 2021; Available online: https://civilica.com/doc/1196572/ (accessed on 16 March 2022).
  31. Raziani, S.; Salehnia, T.; Ahmadi, M. Selecting of the best features for the knn classification method by Harris Hawk algorithm. In Proceedings of the 8th International Conference on New Strategies in Engineering, Information Science and Technology in the Next Century, Dubai, United Arab Emirates (UAE), 2021; Available online: https://civilica.com/doc/1196573/ (accessed on 16 March 2022).
  32. Salehnia, T.; Fath, A. Fault tolerance in LWT-SVD based image watermarking systems using three module redundancy technique. Expert Syst. Appl. 2021, 179, 115058. [Google Scholar] [CrossRef]
  33. El Aziz, M.A.; Ewees, A.A.; Hassanien, A.E. Whale Optimization Algorithm and Moth-Flame Optimization for Multilevel Thresholding Image Segmentation. Expert Syst. Appl. 2017, 83, 242–256. [Google Scholar] [CrossRef]
  34. Akay, B. A study on particle swarm optimization and artificial bee colony algorithms for multilevel thresholding. Appl. Soft Comput. 2013, 13, 3066–3091. [Google Scholar] [CrossRef]
  35. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  36. Liu, H.; Zhou, M.; Guo, X.; Zhang, Z.; Ning, B.; Tang, T. Timetable Optimization for Regenerative Energy Utilization in Subway Systems. IEEE Trans. Intell. Transp. Syst. 2019, 20, 3247–3257. [Google Scholar] [CrossRef]
  37. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  38. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L. Dwarf mongoose optimization algorithm. Comput. Methods Appl. Mech. Eng. 2022, 391, 114570. [Google Scholar] [CrossRef]
Figure 1. Production of vectors for four layers [3]: (a) Layer-1; (b) Layer-2; (c) Layer-3; (d) Layer-4.
Figure 2. Production of the matrices for three classes: (a) Class C 1 ; (b) Class C 2 ; (c) Class C 3 .
Figure 3. Production of the vector P.
Figure 4. An example of quad-pixel reversible data embedding: (a) before embedding; (b) after embedding; (c) after extraction.
Figure 5. Host images used in this paper: (a) Lena; (b) Peppers; (c) Airplane; (d) Baboon; (e) Ship; (f) Barbara; (g) Cameraman; (h) Lake; (i) Bridge.
Figure 6. Comparison of image quality for single-layer capacity: (a) Airplane; (b) Baboon; (c) Barbara; (d) Ship; (e) Lena; (f) Peppers; (g) Lake; (h) Bridge [3].
Figure 7. Comparing the quality values for multilayer capacities in different images: (a) Airplane; (b) Baboon; (c) Barbara; (d) Ship; (e) Lena; (f) Peppers; (g) Lake; (h) Bridge [3].
Table 1. Key acronyms and their meanings.
DH: Data hiding
DWT: Discrete wavelet transform
DCT: Discrete cosine transform
RDH: Reversible data hiding
DE: Difference expansion
HS: Histogram shifting
PEE: Prediction-error expansion
PVG: Pixel value grouping
PVO: Pixel value ordering
DH: Difference histogram
PEH: PE histogram
GA: Genetic algorithm
PPEE: Pairwise prediction-error expansion
2D-PEH: Two-dimensional PEH
RRDH: Robust reversible data hiding
HSB: Higher significant bit
LSB: Least significant bit
RDHEI: RDH in encrypted images
MSE: Mean squared error
PSNR: Peak signal-to-noise ratio
bpp: Bits per pixel
SSIM: Structural similarity index measure
Table 2. A comparison of the actual embedding capacities in common images.
Image | T_1 | T_2 | [3] (bits) | [24] (bits) | PM (bits) | [3] (bpp) | [24] (bpp) | PM (bpp)
Airplane | 111 | 182 | 194,250 | 210,000 | 215,000 | 0.7449 | 0.8010 | 0.8201
Baboon | 78 | 149 | 193,314 | 110,000 | 195,003 | 0.7374 | 0.4196 | 0.7438
Barbara | 55 | 131 | 191,466 | 158,000 | 192,232 | 0.7304 | 0.6027 | 0.7333
Ship | 79 | 156 | 194,583 | 160,000 | 196,144 | 0.7423 | 0.6103 | 0.7482
Lena | 85 | 152 | 196,768 | 200,000 | 209,000 | 0.7480 | 0.7629 | 0.7972
Lake | 76 | 145 | 191,854 | 111,000 | 193,458 | 0.7318 | 0.4234 | 0.7514
Bridge | 84 | 151 | 192,472 | 185,000 | 196,874 | 0.7342 | 0.7186 | 0.7379
Cameraman | 91 | 174 | 194,968 | 159,000 | 199,762 | 0.7437 | 0.6039 | 0.7510
Peppers | 61 | 136 | 195,414 | 178,000 | 196,985 | 0.7454 | 0.6790 | 0.7514
PM = proposed method.
Table 3. A comparison of the processing times in common images.
Image | T_1 | T_2 | [3] (s) | [24] (s) | PM (s)
Airplane | 111 | 182 | 154.09 | 1.29 | 173.94
Baboon | 78 | 149 | 153.72 | 1.81 | 173.60
Barbara | 55 | 131 | 151.44 | 1.37 | 175.35
Ship | 79 | 156 | 138.11 | 1.44 | 160.99
Lena | 85 | 152 | 153.58 | 1.67 | 181.56
Lake | 76 | 145 | 152.41 | 1.37 | 174.56
Bridge | 84 | 151 | 153.25 | 1.41 | 175.92
Cameraman | 91 | 174 | 139.48 | 1.62 | 179.45
Peppers | 61 | 136 | 154.11 | 1.15 | 185.98
PM = proposed method.
Table 4. Comparison of the PSNR (dB) values for single-layer embedding on the test images.
Image | Method | 0.1 bpp | 0.2 bpp | 0.3 bpp | 0.4 bpp | 0.5 bpp | 0.6 bpp | 0.7 bpp
Airplane | [3] | 54.35 | 51.88 | 48.85 | 44.36 | 42.45 | 41.44 | 40.92
Airplane | Proposed | 55.17 | 52.73 | 49.99 | 45.40 | 44.07 | 43.11 | 41.86
Baboon | [3] | 40.95 | 38.97 | 37.98 | 36.95 | 36.48 | 35.97 | 35.40
Baboon | Proposed | 42.01 | 40.03 | 38.88 | 37.34 | 37.11 | 36.54 | 36.89
Barbara | [3] | 49.29 | 46.73 | 43.84 | 41.12 | 38.98 | 37.50 | 36.57
Barbara | Proposed | 50.74 | 47.43 | 44.45 | 42.31 | 39.12 | 38.61 | 37.63
Ship | [3] | 50.71 | 46.86 | 43.85 | 41.95 | 40.92 | 40.46 | 40.19
Ship | Proposed | 51.85 | 48.46 | 45.48 | 43.01 | 42.15 | 41.64 | 41.78
Lena | [3] | 54.70 | 50.31 | 47.74 | 45.77 | 44.19 | 43.03 | 42.16
Lena | Proposed | 55.18 | 52.12 | 48.61 | 46.17 | 45.46 | 44.61 | 43.44
Lake | [3] | 48.32 | 44.16 | 40.35 | 38.36 | 36.56 | 34.79 | 33.21
Lake | Proposed | 49.97 | 45.98 | 41.68 | 39.86 | 37.15 | 35.96 | 34.56
Bridge | [3] | 47.76 | 43.32 | 39.86 | 36.75 | 34.85 | 33.45 | 31.99
Bridge | Proposed | 49.23 | 44.53 | 40.96 | 37.82 | 35.98 | 34.59 | 32.68
Cameraman | [3] | 46.12 | 42.36 | 38.25 | 34.91 | 32.56 | 31.20 | 30.05
Cameraman | Proposed | 48.06 | 43.74 | 39.94 | 36.20 | 34.11 | 32.31 | 31.11
Peppers | [3] | 49.66 | 47.12 | 45.37 | 43.92 | 43.15 | 42.61 | 42.05
Peppers | Proposed | 51.11 | 48.94 | 46.33 | 44.61 | 45.20 | 43.45 | 43.11
Table 5. A comparison of the PSNR (dB) values for multiple-layer embedding on the test images.
Image | Method | 0.7 bpp | 1.5 bpp | 2.2 bpp | 3 bpp | 3.7 bpp | 4.5 bpp | 5.2 bpp | 6 bpp
Airplane | [3] | 40.5 | 36.59 | 35.1 | 34 | 32.5 | 31.9 | 31 | 30
Airplane | Proposed | 41.12 | 38.02 | 36.25 | 35.01 | 33.84 | 32.56 | 32.97 | 31.01
Baboon | [3] | 35.1 | 32 | 30 | 28.5 | 27.1 | 26.2 | 25.3 | 25
Baboon | Proposed | 36.98 | 33.24 | 31.12 | 29.95 | 28.87 | 27.97 | 26.78 | 26.25
Barbara | [3] | 36.2 | 33.9 | 32 | 30.7 | 29.1 | 28.83 | 27.8 | 27
Barbara | Proposed | 37.45 | 34.25 | 33.12 | 31.99 | 30.47 | 30.02 | 28.11 | 28
Ship | [3] | 40.58 | 36.9 | 35 | 33.2 | 32 | 31.57 | 30 | 29.1
Ship | Proposed | 42.01 | 37.2 | 36.11 | 34.42 | 33 | 33.03 | 31 | 30.25
Lena | [3] | 42 | 38.2 | 37 | 35.1 | 34 | 33 | 32.1 | 31.1
Lena | Proposed | 43.52 | 39.44 | 38 | 36.55 | 35 | 34.11 | 33 | 32
Lake | [3] | 37.26 | 34.73 | 32.82 | 31.74 | 30.82 | 29 | 27.76 | 26
Lake | Proposed | 38.99 | 35.47 | 34.15 | 33.26 | 32.42 | 30.76 | 29.12 | 27.22
Bridge | [3] | 37.85 | 33.72 | 32.25 | 31.74 | 29.18 | 28 | 26.87 | 25.10
Bridge | Proposed | 39.23 | 35.28 | 33.76 | 32.06 | 30.46 | 29 | 27.75 | 26
Cameraman | [3] | 40.51 | 36.97 | 34.98 | 32.76 | 30.56 | 28.75 | 27 | 25.34
Cameraman | Proposed | 42.07 | 38.13 | 36.46 | 35.36 | 32.89 | 31.46 | 29.42 | 26.12
Peppers | [3] | 41.9 | 38.3 | 36.3 | 35 | 34 | 33 | 32 | 31.1
Peppers | Proposed | 43 | 39.85 | 37 | 36.55 | 35.89 | 34 | 33 | 32.34
Table 6. Comparison of the SSIM in common images for different capacities.
Image | T_1 | T_2 | [3] (bits) | [24] (bits) | PM (bits) | [3] SSIM | [24] SSIM | PM SSIM
Airplane | 111 | 182 | 194,250 | 210,000 | 215,000 | 0.9128 | 0.9131 | 0.9261
Baboon | 78 | 149 | 193,314 | 110,000 | 195,003 | 0.9125 | 0.9155 | 0.9298
Barbara | 55 | 131 | 191,466 | 158,000 | 192,232 | 0.9135 | 0.9146 | 0.9283
Ship | 79 | 156 | 194,583 | 160,000 | 196,144 | 0.9134 | 0.9124 | 0.9282
Lena | 85 | 152 | 196,768 | 200,000 | 209,000 | 0.9132 | 0.9138 | 0.9272
Lake | 76 | 145 | 191,854 | 111,000 | 193,458 | 0.9157 | 0.9115 | 0.9284
Bridge | 84 | 151 | 192,472 | 185,000 | 196,874 | 0.9125 | 0.9114 | 0.9279
Cameraman | 91 | 174 | 194,968 | 159,000 | 199,762 | 0.9148 | 0.9118 | 0.9256
Peppers | 61 | 136 | 195,414 | 178,000 | 196,985 | 0.9136 | 0.9145 | 0.9274
PM = proposed method.
Table 7. A comparison of the SSIM for single-layer embedding.
Image | Method | 0.1 bpp | 0.2 bpp | 0.3 bpp | 0.4 bpp | 0.5 bpp | 0.6 bpp | 0.7 bpp
Airplane | [3] | 0.9736 | 0.9658 | 0.9531 | 0.9462 | 0.9312 | 0.9243 | 0.9134
Airplane | Proposed | 0.9865 | 0.9736 | 0.9638 | 0.9582 | 0.9462 | 0.9365 | 0.9245
Baboon | [3] | 0.9735 | 0.9685 | 0.9538 | 0.9425 | 0.9365 | 0.9235 | 0.9141
Baboon | Proposed | 0.9846 | 0.9734 | 0.9648 | 0.9536 | 0.9468 | 0.9328 | 0.9267
Barbara | [3] | 0.9762 | 0.9694 | 0.9539 | 0.9462 | 0.9361 | 0.9217 | 0.9119
Barbara | Proposed | 0.9873 | 0.9725 | 0.9647 | 0.9543 | 0.9486 | 0.9369 | 0.9278
Ship | [3] | 0.9748 | 0.9657 | 0.9567 | 0.9474 | 0.9313 | 0.9236 | 0.9167
Ship | Proposed | 0.9839 | 0.9746 | 0.9625 | 0.9512 | 0.9452 | 0.9385 | 0.9286
Lena | [3] | 0.9743 | 0.9651 | 0.9512 | 0.9436 | 0.9321 | 0.9238 | 0.9118
Lena | Proposed | 0.9849 | 0.9783 | 0.9674 | 0.9572 | 0.9445 | 0.9368 | 0.9264
Lake | [3] | 0.9739 | 0.9645 | 0.9536 | 0.9412 | 0.9336 | 0.9225 | 0.9136
Lake | Proposed | 0.9818 | 0.9762 | 0.9652 | 0.9548 | 0.9462 | 0.9368 | 0.9275
Bridge | [3] | 0.9786 | 0.9638 | 0.9536 | 0.9438 | 0.9356 | 0.9283 | 0.9169
Bridge | Proposed | 0.9863 | 0.9782 | 0.9671 | 0.9582 | 0.9468 | 0.9397 | 0.9247
Cameraman | [3] | 0.9768 | 0.9637 | 0.9532 | 0.9413 | 0.9367 | 0.9214 | 0.9179
Cameraman | Proposed | 0.9864 | 0.9726 | 0.9682 | 0.9551 | 0.9439 | 0.9369 | 0.9264
Peppers | [3] | 0.9739 | 0.9632 | 0.9539 | 0.9462 | 0.9363 | 0.9258 | 0.9179
Peppers | Proposed | 0.9827 | 0.9746 | 0.9648 | 0.9583 | 0.9486 | 0.9378 | 0.9248
Table 8. Comparison of the PSNR and SSIM values for the proposed method (PM) and the method of Yao et al. [30].
|  | 30,000 bits |  |  |  | 50,000 bits |  |  |  |
| Image | PSNR [30] | SSIM [30] | PSNR PM | SSIM PM | PSNR [30] | SSIM [30] | PSNR PM | SSIM PM |
| Airplane | 65.56 | 0.9736 | 66.12 | 0.9846 | 64.34 | 0.9616 | 65.22 | 0.9765 |
| Baboon | 65.56 | 0.9747 | 66.68 | 0.9874 | 64.34 | 0.9623 | 65.44 | 0.9716 |
| Barbara | 65.56 | 0.9732 | 66.23 | 0.9834 | 64.34 | 0.9675 | 65.65 | 0.9736 |
| Ship | 65.56 | 0.9747 | 66.42 | 0.9845 | 64.34 | 0.9646 | 65.61 | 0.9765 |
| Lena | 65.56 | 0.9761 | 66.29 | 0.9856 | 64.34 | 0.9654 | 65.12 | 0.9748 |
| Lake | 65.42 | 0.9719 | 66.18 | 0.9885 | 64.12 | 0.9638 | 65.83 | 0.9716 |
| Bridge | 65.56 | 0.9764 | 66.47 | 0.9849 | 64.34 | 0.9676 | 65.79 | 0.9734 |
| Cameraman | 65.55 | 0.9773 | 66.42 | 0.9859 | 64.34 | 0.9649 | 65.68 | 0.9756 |
| Peppers | 65.56 | 0.9772 | 66.82 | 0.9817 | 64.34 | 0.9647 | 65.36 | 0.9748 |
PM = proposed method.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.


MDPI and ACS Style

Mehbodniya, A.; Douraki, B.K.; Webber, J.L.; Alkhazaleh, H.A.; Elbasi, E.; Dameshghi, M.; Abu Zitar, R.; Abualigah, L. Multilayer Reversible Data Hiding Based on the Difference Expansion Method Using Multilevel Thresholding of Host Images Based on the Slime Mould Algorithm. Processes 2022, 10, 858. https://doi.org/10.3390/pr10050858


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
