
Any opinions, findings, conclusions, or recommendations expressed in this dissertation are those of the authors and do not necessarily reflect the views of UKDiss.com.

Signal Processing in Encrypted and Spatial Domains

Info: 18805 words (75 pages) Dissertation
Published: 10th Dec 2019




1.1 Introduction to the Project:

Recently, image processing engineers have begun showing interest in signal processing in the encrypted and spatial domains. The most popular and effective means of protecting privacy and confidential information is encryption, which converts an ordinary signal into incomprehensible data, making traditional signal processing impossible without first decrypting the image or information. In some cases, however, the content owner trusts no one except the customer who buys the information, so it must be possible to manipulate the encrypted data without disclosing its content. For example, when secret information in transit is protected through encryption, a channel provider who does not know the cipher key may still need to compress the encrypted data because of scarce channel resources. An encrypted binary image can be compressed losslessly by finding the syndromes of channel codes. For encrypted gray images, a lossless compression method has been developed that uses rate-compatible punctured turbo codes and progressive decomposition. An encrypted gray image can also be compressed lossily by discarding the excessively fine information in the coefficients generated by an orthogonal transform. From this compressed data, the receiver can reconstruct the principal content of the original image by retrieving the coefficient values. The computation of transforms in the encrypted domain has also been studied: depending on the homomorphic properties of the underlying cryptosystem, the Discrete Fourier Transform (DFT) can be implemented in the encrypted domain. In the composite signal representation method, packing a number of signal samples and processing them as a single sample decreases both the size of the encrypted data and the computational complexity.
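As a minimal illustration of processing encrypted data without the key (much simpler than the compression schemes cited above), a stream cipher based on XOR is homomorphic with respect to XOR: a party holding only ciphertexts can compute an operation whose decrypted result equals the same operation applied to the plaintexts. The bit patterns and key below are arbitrary toy values.

```python
import random

def xor_encrypt(bits, key):
    """Encrypt a bit list with a keystream by bitwise XOR (also decrypts)."""
    return [b ^ k for b, k in zip(bits, key)]

random.seed(1)
key = [random.randint(0, 1) for _ in range(8)]   # shared secret keystream
p1 = [1, 0, 1, 1, 0, 0, 1, 0]
p2 = [0, 1, 1, 0, 0, 1, 1, 0]

c1 = xor_encrypt(p1, key)
c2 = xor_encrypt(p2, key)

# A channel provider XORs the ciphertexts without knowing the key ...
c_combined = [a ^ b for a, b in zip(c1, c2)]
# ... and the result equals the XOR of the plaintexts (keystreams cancel).
assert c_combined == [a ^ b for a, b in zip(p1, p2)]
```

The keystream cancels because each ciphertext bit carries the same key bit, which is exactly the kind of structure that encrypted-domain processing exploits.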

There are many works on hiding data in the encrypted domain. In the buyer–seller watermarking protocol, the seller of digital multimedia encrypts the original data using a public key, and then permutes and embeds an encrypted fingerprint provided by the buyer. After decryption with his private key, the buyer obtains a watermarked product.

This protocol ensures that the seller learns nothing about the buyer's watermarked version, while the buyer cannot access the original version. An anonymous fingerprinting scheme based on the Okamoto–Uchiyama encryption method has been proposed to improve the enciphering rate. Both the large communication bandwidth and the computational overhead caused by homomorphic public-key encryption are reduced significantly by introducing the composite signal representation mechanism. In other joint encryption and data hiding schemes, part of the cover data carries the extra message bits while the remaining data is encrypted, so that both privacy and copyright can be protected. In one scheme, the signs of the DCT coefficients and the motion vector differences of the intra-prediction mode are encrypted, and the watermark is then embedded in the amplitudes of the DCT coefficients. In another, the cover data in the lower and higher bit-planes of the transform domain are first encrypted and then watermarked. If the content owner encrypts the signs of the DCT coefficients and each content user is given a different key to decrypt only a subset of the coefficients, the differently decrypted versions act as different fingerprints for the users.
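The homomorphic public-key encryption underlying such protocols can be sketched with a toy Paillier cryptosystem (a relative of the Okamoto–Uchiyama scheme mentioned above): multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts. The tiny primes below are for illustration only; real deployments use primes of hundreds of digits.

```python
from math import gcd

# Toy Paillier cryptosystem with tiny, insecure primes.
p, q = 17, 19
n = p * q                                       # public modulus
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)
g = n + 1                                       # standard generator choice
mu = pow(lam, -1, n)                            # precomputed decryption constant

def encrypt(m, r):
    # r must be coprime to n; the randomness makes the scheme probabilistic
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

c1 = encrypt(20, 7)
c2 = encrypt(15, 11)
# Multiplying ciphertexts adds the underlying plaintexts (mod n):
assert decrypt((c1 * c2) % n2) == 35
```

This additive property is what lets a seller combine an encrypted fingerprint with encrypted content without ever seeing the plaintext.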

In this project, reversible data hiding in an encrypted image is investigated. Most work on reversible data hiding focuses on data embedding, extraction, and recovery of the cover image in the plain, spatial domain. In some cases, however, a channel assistant or administrator wishes to append some additional message bits, such as image authentication data or origin information, to the encrypted image without knowing the original image content. Reversibility guarantees that the information recovered at the receiver is exactly the same as the information at the sender.
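The scenario above can be sketched with a hypothetical toy (far simpler than the JPEG bit-stream scheme of this project): the content owner XOR-encrypts the pixels, and an assistant who has only the data-hiding key writes bits into the least significant bits of key-selected positions. The pixel values and keys below are arbitrary.

```python
import random

def stream(key, n):
    """Derive a deterministic byte keystream from a key."""
    rng = random.Random(key)
    return [rng.randrange(256) for _ in range(n)]

cover = [52, 55, 61, 66, 70, 61, 64, 73]      # toy "image"
enc_key, hide_key = 1234, 5678

# Content owner encrypts every pixel with the encryption key.
encrypted = [p ^ k for p, k in zip(cover, stream(enc_key, len(cover)))]

# Assistant (no encryption key) writes message bits into the LSBs of the
# positions selected by the data-hiding key.
bits = [1, 0, 1]
pos = random.Random(hide_key).sample(range(len(encrypted)), len(bits))
marked = list(encrypted)
for b, i in zip(bits, pos):
    marked[i] = (marked[i] & ~1) | b

# Receiver decrypts with the encryption key: everything except the marked
# LSBs is recovered exactly (real schemes restore those bits too, by
# exploiting the redundancy of natural images).
decrypted = [c ^ k for c, k in zip(marked, stream(enc_key, len(marked)))]
assert all(d >> 1 == p >> 1 for d, p in zip(decrypted, cover))
```

The toy is not reversible on its own; it only shows the separation of roles between the encryption key and the data-hiding key.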

The encrypted image now contains the extra data. The receiver first decrypts the image using the encryption key, then extracts the embedded data and recovers the original cover image using the data-hiding key.

To put it another way, the additional data has to be extracted from the decrypted image, so the main content of the original image is revealed before data extraction. If a person has only the data-hiding key and not the encryption key, he can extract no data bits from the encrypted image containing the additional data. This project proposes a novel scheme for reversible data hiding in an encrypted JPEG bit-stream.

1.2 Aim of the Project:

The aim of this project is to implement a method for secure image transmission from sender to receiver using reversible data hiding in an encrypted JPEG bit-stream.

1.3 Feasibility Study:

The feasibility of implementing the project is checked during this phase: a business proposal is made with a draft plan for the project and a cost estimate. The feasibility study of the proposed system is carried out as part of system analysis, to ensure that the proposed system will not be a burden to the company or individual. An understanding of the major requirements of the system is essential for the feasibility analysis.

The three main aspects of the feasibility analysis are:

  • Economic Feasibility
  • Technical Feasibility
  • Social Feasibility


1.3.1 Economic Feasibility:

Economic feasibility checks how the system will impact the organization economically. A company may have a very limited budget allocated for research and development, so the expenditure must be justified within the budget.

1.3.2 Technical Feasibility:

Technical feasibility checks the technical requirements the system places on the company. Any system developed should not place a heavy demand on the available technical resources, as this would in turn place heavy demands on the client. The developed system should therefore have modest requirements, needing minimal or no additional resources for its implementation.

1.3.3 Social Feasibility:

Social feasibility deals with the level of acceptance of the system by the user, including the process of training the user to use the system effectively. The user should not feel threatened by the system, but should accept it as a necessity. The level of acceptance depends on the methods employed to educate the user about the system and make him familiar with it. The user's confidence should be raised so that he is also able to offer constructive criticism, which is welcome, since he is the final user of the system.

1.4 System Requirements:

1.4.1 Hardware Requirements:


  • System: Pentium IV or above
  • Hard disk: 40 GB or above
  • Monitor: VGA or high-resolution monitor
  • Mouse
  • RAM: 1 GB or above

Table 1.4.1 Hardware requirements

1.4.2   Software Requirements:

  • Operating system: Windows XP or 7
  • Coding language: MATLAB

Table 1.4.2 Software requirements

1.5 Organization of the Project:

Chapter 1 describes the introduction and the aim of the project. It also deals with the feasibility study and its classification, and lists the software and hardware requirements.

Chapter 2 describes the existing system, its disadvantages, and the proposed system.

Chapter 3 gives information about the basics of image processing and the different types of image files used in the project.

Chapter 4 describes the MATLAB software in detail.

Chapter 5 describes the block diagram and the implementation of the project.

Chapter 6 shows the output waveforms and the results of the project.

Chapter 7 presents the conclusion and future enhancements of the project.







2.1 System Analysis:

System analysis is the process of dividing up the responsibilities of the system based on the user requirements and the characteristics of the problem domain.

2.1.1 Existing System:

In the existing system, the data hider compresses the least significant bits of an encrypted image using a data-hiding key, creating a sparse space that accommodates the additional data. Two reversible data-embedding techniques exist for the lossy JPEG image format. One computes the differences of neighbouring pixel values and selects some difference values for difference expansion (DE); it employs the lossless image compression algorithm CALIC and, during data embedding, modifies all changeable difference values, either appending a new LSB (via the DE) or modifying the existing LSB. Because of overflow/underflow, some pixels cannot be expanded or shifted, which means the watermarked values must be restricted. Embedding with horizontal and vertical pairing alternately yields extra storage space by exploiting the redundancy in the image content.
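The difference expansion step mentioned above can be sketched on a single pixel pair, ignoring the overflow handling that the full scheme adds: the pair's difference is doubled and one message bit is appended as its new LSB. The pixel values are arbitrary examples.

```python
# Sketch of difference expansion (DE) on one pixel pair, overflow ignored.
def de_embed(x, y, bit):
    l = (x + y) // 2            # integer average, kept invariant
    h = x - y                   # difference
    h2 = 2 * h + bit            # expanded difference now carries the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1                # the embedded bit is the new LSB
    h = h2 >> 1                 # undo the expansion (floor division)
    return l + (h + 1) // 2, l - h // 2, bit

x2, y2 = de_embed(206, 201, 1)
assert (x2, y2) == (209, 198)
assert de_extract(x2, y2) == (206, 201, 1)   # bit and pair fully recovered
```

The average of the pair is invariant under embedding, which is why both the bit and the original pixels can be recovered exactly.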

2.1.2 Disadvantages of Existing System:

  • Watermarks are routinely added to digital images as a form of copy protection, but their presence essentially destroys the picture.
  • The decoder needs to know where (from which difference values) to collect and decode the location map.
  • The encoding, decoding and authentication process consists of several steps.

2.1.3 Proposed System:

The proposed method comprises lossless, reversible, and combined data hiding schemes for encrypted images, exploiting the probabilistic and homomorphic properties of cryptosystems.

With these schemes, the division and reorganization of pixels is avoided, so that encryption and decryption are performed directly on the pixels of the cover image, and both the computational complexity and the amount of encrypted data are lowered. In the lossless scheme, the encrypted image data is modified for data embedding by exploiting the probabilistic property; decryption recovers the original image directly, while the embedded data can still be extracted. In the reversible scheme used in our project, a histogram shrink takes place prior to encryption, so that the modification of the encrypted JPEG bit-stream for embedding data causes no pixel oversaturation in the plaintext domain. Although embedding the data may cause a slight distortion in the plain domain, the homomorphic property allows the embedded data to be extracted perfectly and the original content to be recovered completely from the directly decrypted image. In addition, the data embedding operations of the reversible and lossless schemes can be performed at the same time in one encrypted JPEG image. With the combination of the two techniques, part of the embedded data can be extracted at the receiver before decryption, while the other part can be extracted, and the original image recovered, after decryption.
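The histogram shrink mentioned above can be sketched as follows. This is a hypothetical simplification: the pixel range is narrowed from [0, 255] to [1, 254] before encryption so that a later ±1 embedding modification can never overflow or underflow in the plaintext domain (real schemes also record the clipped positions so the shrink is reversible).

```python
# Simplified histogram shrink: clamp pixels into [1, 254] before encryption.
def shrink(pixels):
    # Real schemes record which pixels were clipped, to restore them later.
    return [min(max(p, 1), 254) for p in pixels]

pixels = [0, 17, 128, 255, 254]
shrunk = shrink(pixels)
assert min(shrunk) >= 1 and max(shrunk) <= 254
# A subsequent +1 or -1 embedding change now always stays inside [0, 255]:
assert all(0 <= p + 1 <= 255 and 0 <= p - 1 <= 255 for p in shrunk)
```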








3.1 Image:

An image is generally a two-dimensional block of rows and columns that resembles a person or a physical object in appearance.

In general, a photo taken by a digital camera or a screenshot of a display can be called an image. Photos of three-dimensional and two-dimensional objects, places, or persons can be captured by electronic gadgets such as cameras, and images can also be formed by optical devices such as lenses, mirrors, microscopes, and telescopes. Even natural objects like the human eye and water surfaces form images of the objects they see. The difference between these optical devices, natural objects, and electronic gadgets is the ability to store the captured images.

In a broader perspective, maps, pie-charts, abstract paintings, and graphs can also be regarded as images. Images are made not just automatically by cameras and mirrors; they can also be made by hand, as in painting a picture, drawing the outline of a person, or carving stone. In addition, images can be reproduced automatically by printers and software.


Fig 3.1.1 Two dimensional image

An image is a rectangular block of pixels arranged in rows and columns. The image dimensions are its height and width, counted in pixels: the number of rows gives the height of the image, while the number of columns gives the width.

The pixels in an image are dots arranged in a grid. The numeric values contained in each pixel denote the magnitudes of colour and brightness at that point.


Fig 3.1.2 Pixels in an image

Every pixel consists of a colour value represented by a 32-bit integer, divided into four sections of 8 bits each. The first section of eight bits represents the red component of the pixel, followed by the green and the blue. The last 8 bits represent the transparency of the pixel.

Fig 3.1.3 Structure of a pixel
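The byte layout described above can be sketched in a few lines; note that this follows the ordering given in the text (red in the top byte, transparency in the lowest), while many real image libraries use a different order such as ARGB.

```python
# Unpack a 32-bit pixel into its four 8-bit channels.
def unpack_pixel(value):
    red   = (value >> 24) & 0xFF   # top 8 bits
    green = (value >> 16) & 0xFF
    blue  = (value >> 8)  & 0xFF
    alpha = value         & 0xFF   # transparency, lowest 8 bits
    return red, green, blue, alpha

assert unpack_pixel(0xFF8040C0) == (255, 128, 64, 192)
```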

3.2 Image File Sizes:

The file size of an image is expressed in bits, bytes, kilobytes, or megabytes, and it increases with the number of pixels and the colour bit depth of the image. If the numbers of rows and columns are high, the resolution of the image is high, making the image file large.

In addition, the size of every pixel in an image increases when the colour bit depth of the image increases. For example, an 8-bit pixel can store only 256 colours, whereas a 24-bit pixel can store sixteen million colours; 24-bit colour is called true colour. Compression algorithms are used to decrease the size of image files. Nowadays, digital cameras can take photos of very high resolution, so image sizes range from hundreds of kilobytes to tens of megabytes. A very high resolution camera can take twelve-megapixel images, meaning the image contains 12×10^6 pixels in true colour. Since each true-colour pixel uses twenty-four bits (three bytes), such an uncompressed image occupies 288,000,000 bits. In practice, cameras have to capture and record images of this size, and file formats have been developed so that both cameras and storage elements can handle such image file sizes.
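The arithmetic above can be reproduced directly: a 12-megapixel true-colour image at 24 bits (3 bytes) per pixel, uncompressed.

```python
# File size of an uncompressed 12-megapixel true-colour image.
pixels = 12 * 10**6        # 12 megapixels
bits = pixels * 24         # 24 bits (3 bytes) per true-colour pixel
assert bits == 288_000_000
assert bits // 8 == 36_000_000   # i.e. 36 MB in decimal megabytes
```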

3.3 Image File Formats:


Image file formats organize the rows and columns of pixels and store the images; they are the digital formats used to store photos and images. Images are made up of either pixel (raster) data or vector data, and vector data is rasterized to pixels when displayed on a graphical display. Several image formats are in use at present, but the most common types are JPEG, PNG, BMP, and GIF.


Fig 3.3 Raster and Vector format of an image

Apart from the straightforward image file formats, there are portable meta-file formats that can include both vector and raster information; they are intermediate between the raster and vector formats. Most Windows applications can open meta-files and then save them in their own format.

3.3.1 Raster Formats:


Raster formats store photos or images as bitmaps, which are also known as pix-maps.

  • JPEG:

JPEG stands for Joint Photographic Experts Group, and it is a lossy compression method. JPEG-compressed images are stored in the JPEG File Interchange Format, also known as JFIF. Almost all digital cameras can save images in JPEG format; JPEG supports one byte per colour channel (red, green, and blue) for a three-byte total, producing comparatively small files. Photographic images that are being edited and re-saved should be stored in a non-JPEG format, since quality loss accumulates with each save. Many Adobe PDF files use JPEG for image compression.

  • EXIF:

EXIF stands for Exchangeable Image File Format; it is comparable to the JFIF and TIFF formats. The EXIF format is incorporated in the JPEG-writing software used in most digital cameras, and its purpose is to standardize the exchange of image metadata between digital cameras and viewing and editing software. Every image carries recorded metadata, including the camera settings, shutter speed, exposure, image size, time and date, compression, camera name, colour information, and more. All this information is displayed when the images are viewed or edited in image-editing software.

  • TIFF:

TIFF stands for Tagged Image File Format. It saves one or two bytes per colour channel (three or six bytes per pixel), making it a flexible format. The file usually ends with the extension TIF or TIFF. TIFF files can be either lossy or lossless; some offer relatively good lossless compression for black-and-white images. Some digital cameras save images in TIFF format using LZW compression. Most web browsers do not support the TIFF format, whereas in printing and publishing this format is widely used as a photo file format. TIFF also handles device-specific colour spaces, such as the CMYK defined by a particular set of printing inks.

  •     PNG:

PNG stands for Portable Network Graphics. This image file format is a free, open-source successor to GIF. It supports sixteen million colours (true colour), whereas GIF supports only 256 colours. PNG excels when the image has large, uniformly coloured areas. PNG is a lossless format and is best suited for editing images, whereas the lossy JPEG format is mainly used for the final distribution of photographic images, because a PNG file is larger than the corresponding JPEG.

The name PNG itself indicates a portable, compressed, and extensible file format for storing raster images. As already said, PNG is free and is used in place of GIF, and it has many things in common with the TIFF format. PNG supports grayscale, indexed-colour, and true-colour images, as well as an optional alpha channel. PNG works well in web viewing applications. The main qualities of the PNG format are robustness, integrity checking, and the detection of colour and transmission errors.

  • GIF:

GIF stands for Graphics Interchange Format. It is restricted to an eight-bit palette, or 256 colours. For this reason, the GIF format is useful for storing graphics with few colours, such as simple figures, diagrams, logos, shapes, and cartoons.

Many short animations used in Facebook chats and other social platforms are stored as GIFs. GIF is a lossless compression format that works efficiently for images with large areas of a single colour, but it does not work well for more detailed images or photos.

  • BMP:

BMP stands for bitmap. This image format is used in the Windows operating system for managing and handling graphics. Bitmap images are generally very large because they are not compressed. The main advantages of the BMP format are its simplicity and its wide acceptance in Windows programs.

3.3.2 Vector Formats:

Vector formats are the complement of raster formats. Raster formats concentrate on the characteristics of individual pixels, whereas vector formats contain a geometric description that can be rendered smoothly at any desired display size.

Vector graphics must be rasterized in order to be displayed on digital monitor screens. Analog CRT displays, however, can display vector images directly, and are used in medical monitors, laser shows, radar screens, and electronic test equipment. Plotters are printers that use vector data rather than pixel data to draw graphics.

  • CGM:

CGM stands for Computer Graphics Metafile. It is a file format for two-dimensional raster and vector graphics, as well as text. Each graphic element is described in a text source file that can be compiled into a binary file. CGM provides a means of graphical data interchange for the computer representation of two-dimensional graphical data, independent of any particular system, platform, device, or application.

  • SVG:

SVG stands for Scalable Vector Graphics. It is a free, open standard created and developed by the World Wide Web Consortium, intended for versatile, scriptable, all-purpose vector graphics.

SVG files are compressed by an external program such as gzip, because SVG, owing to the textual nature of XML, does not define a compression method of its own.

3.4 Image Processing:

Nowadays, most images are compressed by the many tools available on the internet, but the main concepts used in compressing images at the pixel level come from research and development in digital image processing. Human beings never expected to be able to capture and record their activities; the invention of the camera made this possible. In the present generation, cameras of very high resolution are coming onto the market, and they use digital image processing to convert photos into small files with the same quality as what was captured.

Photos and images have appealed to and gained the attention of both scientists and laymen. Like other glamorous fields, digital image processing suffers from misconceptions, misinformation, and misunderstandings. It covers diverse topics such as mathematics, electronics, photography, and optics.

Many important factors point to a bright future for digital image processing. The most important is the falling cost of equipment. Other technology trends also improve digital image processing, including parallel processing on inexpensive microprocessors and the use of charge-coupled devices for digitizing image data, storing it during processing, and displaying large images at low cost from stored arrays.

3.5 Fundamental Steps in Digital Image Processing:

Fig 3.5 Steps involved in digital image processing

3.5.1 Image Acquisition:


A digital image is obtained by image acquisition. This requires an image sensor together with hardware capable of digitizing the signals produced by the sensor. The sensor can be a monochrome or colour TV camera that produces a complete image of the domain every 1/30th of a second, or a line-scan camera that produces a single image line at a time.


Fig 3.5.1(a) Camera

A scanner produces a two-dimensional image. If the outputs of the camera or other imaging sensor are not already in digital form, an analog-to-digital converter digitizes them. The nature of the sensor and of the image is determined by the specific application.


Fig 3.5.1(b) Scanner


3.5.2 Image Enhancement:

Image enhancement is one of the simplest and most appealing areas of digital image processing. The main reason for using enhancement techniques is to bring out details that are obscured and to highlight interesting features of an image; the most familiar example is increasing the contrast of an image. The important point to keep in mind is that enhancement is a highly subjective area of digital image processing.

Fig 3.5.2 Image enhancement

3.5.3 Image Restoration:

Image restoration also improves the appearance of an image; however, unlike image enhancement, which is subjective, image restoration is objective. Most restoration techniques are based on probabilistic and mathematical models of image degradation.


Fig 3.5.3 Before and after image restoration

Enhancement, on the other hand, depends on human preferences about what constitutes a good enhancement result. For instance, contrast stretching is considered an enhancement method, as it is primarily aimed at pleasing the viewer, whereas removing image blur by applying a deblurring function is considered a restoration technique.
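The contrast stretching mentioned above can be sketched minimally: the observed pixel range is linearly remapped onto the full [0, 255] scale. The sample values are arbitrary.

```python
# Minimal linear contrast stretching of a 1-D list of pixel values.
def stretch(pixels):
    lo, hi = min(pixels), max(pixels)
    # Remap [lo, hi] onto [0, 255]; assumes hi > lo.
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

low_contrast = [100, 110, 120, 130, 140]     # narrow dynamic range
assert stretch(low_contrast) == [0, 64, 128, 191, 255]
```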

3.5.4 Colour Image Processing:


Two principal factors motivate the use of colour in image processing. The first is that colour is a powerful descriptor that often simplifies object identification and extraction. The second is that humans can distinguish millions of colours, shades, and intensities, compared with only about a dozen shades of gray; this second factor is particularly important in manual image analysis.


Fig 3.5.4 Colour image processing

3.5.5 Wavelets and Multi-resolution Processing:

Wavelets enable multi-resolution processing for the representation of images. For a long time, the Fourier transform was the major transform technique in image processing; it has since been complemented by the wavelet transform, which makes processing easier for tasks such as compression, transmission, and analysis of images. In contrast to the Fourier transform, which is based on sinusoidal functions, wavelet transforms are based on small waves, called wavelets, of varying frequency and limited duration.


Fig 3.5.5 Wavelets

Wavelets were first shown to be the foundation of a powerful new approach to signal processing and analysis called multi-resolution theory. This theory introduces and combines techniques from several fields, such as sub-band coding from signal processing and speech recognition, and pyramidal image processing.
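One level of the simplest wavelet transform, the Haar transform, can be sketched on a 1-D signal: pairwise averages give a coarse approximation, and pairwise differences keep the detail needed to reconstruct the signal exactly.

```python
# One level of the Haar wavelet transform on a 1-D signal.
def haar_step(signal):
    pairs = list(zip(signal[0::2], signal[1::2]))
    approx = [(a + b) / 2 for a, b in pairs]   # coarse approximation
    detail = [(a - b) / 2 for a, b in pairs]   # detail coefficients
    return approx, detail

def haar_inverse(approx, detail):
    out = []
    for s, d in zip(approx, detail):
        out += [s + d, s - d]                  # perfect reconstruction
    return out

signal = [9, 7, 3, 5]
approx, detail = haar_step(signal)
assert approx == [8.0, 4.0] and detail == [1.0, -1.0]
assert haar_inverse(approx, detail) == signal
```

Applying `haar_step` again to the approximation gives the next coarser resolution level, which is the multi-resolution idea described above.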

3.5.6 Compression:

As the name suggests, compression deals with methods that decrease the storage required for an image, or the bandwidth required to transmit it. Over the past years storage capacity has grown substantially, but the same cannot be said of transmission capacity. Image file formats such as JPG and PNG are compression techniques that allow images to be stored in less space.

3.5.7 Morphological Processing:

Morphological processing deals with tools for extracting image components that are useful for the representation and description of shape. The language of mathematical morphology is set theory, so morphology offers a unified approach to many image processing problems. In mathematical morphology, objects in an image are represented as sets; for instance, the set of all black pixels in a binary image is a complete morphological description of the image.


Fig 3.5.7 Morphological processing

The sets in question are members of the two-dimensional integer space Z2, where each element of a set is a two-dimensional vector of coordinates of a black (or white) pixel in the image. Gray-scale digital images are represented by sets whose components are in Z3: the first two components are the coordinates of a pixel and the third is its discrete gray level.
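The set-theoretic view above can be sketched directly: an image is a set of black-pixel coordinates in Z2, and the two basic morphological operations, dilation and erosion, are defined by shifting a structuring element over it.

```python
# Binary morphology on sets of (x, y) pixel coordinates.
def dilate(image, se):
    # Union of the image translated by every offset in the structuring element.
    return {(x + dx, y + dy) for (x, y) in image for (dx, dy) in se}

def erode(image, se):
    # Keep a pixel only if the structuring element fits entirely inside.
    return {(x, y) for (x, y) in image
            if all((x + dx, y + dy) in image for (dx, dy) in se)}

cross = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}    # structuring element
image = {(x, y) for x in range(3) for y in range(3)}  # 3x3 black square

assert erode(image, cross) == {(1, 1)}    # only the centre survives erosion
assert image <= dilate(image, cross)      # dilation grows the set
```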

3.5.8 Segmentation:

Segmentation is the process of partitioning an image into its constituent parts or objects. In general, autonomous segmentation is the most difficult task in digital image processing. A rugged segmentation procedure carries the process a long way toward the successful solution of imaging problems in which objects have to be identified individually.


Fig 3.5.8 Segmentation

Conversely, weak or erratic segmentation algorithms almost always guarantee eventual failure. In short, the more accurate the segmentation, the more likely recognition is to succeed.
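The simplest form of autonomous segmentation can be sketched as global thresholding, which splits pixels into object and background classes; the sample row and threshold below are arbitrary.

```python
# Global-threshold segmentation of a row of pixel intensities:
# 1 marks object pixels, 0 marks background.
def segment(pixels, threshold):
    return [1 if p > threshold else 0 for p in pixels]

row = [12, 200, 30, 180, 240, 15]
assert segment(row, 128) == [0, 1, 0, 1, 1, 0]
```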

3.5.9 Representation and Description:

Representation and description almost always follow the output of the segmentation stage, which is raw pixel data constituting either the boundary of a region or all the points in the region itself. In either case, the data must be converted into a form suitable for computer processing. The first decision is whether the data should be represented as a boundary or as a complete region. Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections.

Regional representation, in contrast, is appropriate when the focus is on internal properties, such as texture or skeletal shape; in some applications the two representations complement each other. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so as to highlight the features of interest. Description, also called feature selection, deals with extracting attributes that yield quantitative information of interest or that differentiate one class of objects from another.

3.5.10 Object Recognition:

The final stage involves recognition and interpretation. Recognition is the process of assigning a label to an object based on the information provided by its descriptors; interpretation is the process of assigning meaning to an ensemble of recognized objects.

  1. Knowledge Base:

Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information.

The knowledge base can also be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem, or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition to guiding the operation of each processing module, the knowledge base controls the interaction between modules. For example, to recognize the significance of the location of a character string relative to the other components of an address field, the system must be equipped with that knowledge. The knowledge base thus not only guides the operation of every module but also enables feedback between modules. In this project, the preprocessing techniques are implemented in MATLAB.

  1. Components of an Image Processing System:

Several models of digital image processing systems were sold throughout the world in the 1980s, most of them substantial peripheral devices attached to equally substantial host computers. In the late 1980s and early 1990s, the image processing hardware market shifted to single boards designed to be compatible with industry-standard buses and to fit into personal computer and engineering-workstation cabinets. This not only lowered costs but also reshaped the market and catalyzed a significant number of companies specializing in software written specifically for image processing.

Fig 3.6 Components of an image processing system

Although large-scale image processing systems are still sold for massive imaging applications, such as satellite image processing, the trend continues toward miniaturization and the merging of general-purpose small computers with specialized image processing hardware. The basic components of a general-purpose system used for digital image processing are shown in Figure 3.6. The function of each component is discussed below.

  • Image sensors:

With reference to sensing, two elements are required to acquire digital images. The first is a physical device that is sensitive to the energy radiated by the object we wish to image. The second, called a digitizer, converts the output of the physical sensing device into digital form. For instance, in a digital video camera the sensors produce an electrical output proportional to light intensity, and the digitizer converts these outputs into digital data.

  • Specialized image processing hardware:

Specialized image processing hardware usually consists of the digitizer just mentioned, plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU), which performs addition, subtraction, multiplication, and division on entire images. One example of how an ALU is used is in averaging images as quickly as they are digitized, for noise reduction. This type of hardware is sometimes called a front-end subsystem, and its most distinguishing characteristic is speed. In other words, this unit performs functions that require fast data throughput (for example, digitizing and averaging video images at thirty frames per second) that the typical main computer cannot handle.

  • Computer:

The computer in an image processing system is a general-purpose computer and can range from a PC to a supercomputer. In dedicated applications, specially designed computers are sometimes used to achieve a required level of performance, but our interest here is in general-purpose image processing systems. In these systems, almost any well-equipped PC-type machine is suitable for offline image processing tasks.

  • Image processing software:

Image processing software consists of specialized modules that perform specific tasks. A well-designed package also includes the capability for the user to write code that, as a minimum, utilizes those specialized modules.

  • Mass storage:

Mass storage capability is a must in image processing applications. Consider an image with a resolution of 1024×1024 pixels, in which the intensity of each pixel is an eight-bit quantity; it requires one megabyte of storage space if the image is not compressed. Providing adequate storage for thousands, or even millions, of uncompressed images is therefore a challenge. Digital storage for image processing applications falls into three principal categories: short-term storage for use during processing, online storage for relatively fast recall, and archival storage, characterized by infrequent access. Storage capacity is measured in bytes/bits, Kbytes, Mbytes, Gbytes, and Tbytes.
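The storage figure quoted above is simple arithmetic and can be checked in a few lines. This Python sketch is illustrative only (the helper name `storage_bytes` is ours, not from the text) and assumes uncompressed images at eight bits per pixel:

```python
def storage_bytes(width, height, bits_per_pixel=8, n_images=1):
    """Raw storage needed for n uncompressed images of the given size."""
    return (width * height * bits_per_pixel // 8) * n_images

# A single 1024x1024 eight-bit image needs one megabyte (2**20 bytes):
print(storage_bytes(1024, 1024))           # -> 1048576
# A thousand such images already approach a gigabyte:
print(storage_bytes(1024, 1024, 8, 1000))  # -> 1048576000
```

This makes concrete why uncompressed archives of even modest images quickly demand the online and archival storage tiers described above.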

One method of providing short-term storage is computer memory. Another is specialized boards, called frame buffers, that store one or more images and can be accessed rapidly, usually at video rates. The latter method also allows virtually instantaneous image zoom, as well as vertical and horizontal shifts. Frame buffers are usually housed in the specialized image processing hardware unit shown in the figure above. Online storage generally takes the form of magnetic disks or optical media; the key factor characterizing online storage is frequent access to the stored data. Finally, archival storage is characterized by massive capacity but infrequent access. Magnetic tapes and optical disks housed in "jukeboxes" are the usual media for archival applications.

  • Image displays:

Most of the image displays in use today are TV monitors, driven by the outputs of image and graphics display cards that are an integral part of the computer system. It is rare that the requirements of an image display application cannot be met by display cards available commercially as part of the computer system. In some cases it is necessary to have stereo displays, implemented as headgear containing two small displays embedded in goggles worn by the user.

  • Hardcopy:

Hardcopy devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units such as optical and CD-ROM disks. Film provides the highest possible resolution, but paper is the obvious medium of choice for written material. For presentations, images are displayed on film or in a digital medium if image projection equipment is used. The latter approach is gaining acceptance as the standard for image presentations.

  • Network:

Networking is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth. In dedicated networks this is typically not a problem, but communications with remote sites via the Internet are not always as efficient.


  4.1  Introduction to MATLAB:

MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include:

• Math and computation

• Algorithm development

• Data acquisition

• Modeling, simulation, and prototyping

• Data analysis, exploration, and visualization

• Scientific and engineering graphics

• Application development, including graphical user interface building

MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar non-interactive language such as C or Fortran.

4.2    The MATLAB System:

The MATLAB system consists of five main parts:

4.2.1 Development Environment:

This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command history, an editor and debugger, and browsers for viewing help, the workspace, files, and the search path.

4.2.2 The MATLAB Mathematical Function Library:

This is a vast collection of computational algorithms ranging from elementary functions, such as sum, sine, cosine, and complex arithmetic, to more sophisticated functions such as matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

4.2.3 The MATLAB Language:

This is a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It allows both "programming in the small" to rapidly create quick throw-away programs, and "programming in the large" to create large and complex application programs.

4.2.4 Graphics:

MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as annotating and printing these graphs. It includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level functions that allow you to fully customize the appearance of graphics and to build complete graphical user interfaces on your MATLAB applications.

4.2.5 The MATLAB Application Program Interface (API):

This is a library that allows you to write C and Fortran programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and reading and writing MAT-files. Various toolboxes are available in MATLAB for computing recognition techniques, but in this project we are using the Image Processing Toolbox.

4.3 Graphical User Interface (GUI):

MATLAB's Graphical User Interface Development Environment (GUIDE) provides a rich set of tools for incorporating graphical user interfaces (GUIs) in M-functions. Using GUIDE, the processes of laying out a GUI (i.e., its buttons, pop-up menus, etc.) and programming the operation of the GUI are divided conveniently into two easily managed and relatively independent tasks. The resulting graphical M-function consists of two identically named (ignoring extensions) files:

• A file with extension .fig, called a FIG-file, that contains a complete graphical description of all the function's GUI objects (or elements) and their placement. A FIG-file contains binary data that does not need to be parsed when the associated GUI-based M-function is executed.

• A file with extension .m, called a GUI M-file, that contains the code that controls the GUI operation. This file includes functions that are called when the GUI is launched and exited, and callback functions that are executed when a user interacts with GUI objects, for instance, when a button is pushed.

• To launch GUIDE from the MATLAB command window, type guide filename, where filename is the name of an existing FIG-file on the current path. If filename is omitted, GUIDE opens a new (i.e., blank) window.

A graphical user interface (GUI) is a graphical display in one or more windows containing controls, called components, that enable a user to perform interactive tasks.

The user of the GUI does not have to create a script or type commands at the command line to accomplish the tasks. Unlike coding programs to accomplish tasks, the user of a GUI need not understand the details of how the tasks are performed. GUI components can include menus, toolbars, push buttons, radio buttons, list boxes, and sliders, to name just a few. GUIs created using MATLAB tools can also perform any type of computation, read and write data files, communicate with other GUIs, and display data as tables or as plots.

Fig 4.3 GUI window

4.4 Starting and Quitting MATLAB:

4.4.1 Starting MATLAB:

  • On a Microsoft Windows platform, to start MATLAB, double-click the MATLAB shortcut icon on your Windows desktop.
  • On a UNIX platform, to start MATLAB, type matlab at the operating system prompt.
  • After starting MATLAB, the MATLAB desktop opens – see MATLAB Desktop.
  • You can change the directory in which MATLAB starts, define startup options including running a script upon startup, and reduce startup time in some situations.

4.4.2 Quitting MATLAB:

  • To end your MATLAB session, select Exit MATLAB from the File menu in the desktop, or type quit in the Command Window. To execute specified functions each time MATLAB quits, such as saving the workspace, you can create and run a finish.m script.

4.5 MATLAB Desktop:

When you start MATLAB, the MATLAB desktop appears, containing tools (graphical user interfaces) for managing files, variables, and applications associated with MATLAB.

The first time MATLAB starts, the desktop appears as shown in the following illustration, although your Launch Pad may contain different entries.

You can change the way your desktop looks by opening, closing, moving, and resizing the tools in it. You can also move tools outside of the desktop or return them to the desktop (docking). All the desktop tools provide common features such as context menus and keyboard shortcuts.

You can specify certain characteristics for the desktop tools by selecting Preferences from the File menu. For example, you can specify the font characteristics for Command Window text. For more information, click the Help button in the Preferences window.

4.5.1 Desktop Tools:

This section provides an introduction to MATLAB's desktop tools. You can also use MATLAB functions to perform most of the features found in the desktop tools. The tools are:

• Current Directory Browser

• Workspace Browser

• Array Editor

• Editor/Debugger

• Command Window

• Command History

• Launch Pad

• Help Browser

Command Window – Use the Command Window to enter variables and run functions and M-files.

Command History – Lines you enter in the Command Window are logged in the Command History window. In the Command History, you can view previously used functions, and copy and execute selected lines. To save the input and output from a MATLAB session to a file, use the diary function.

Running External Programs – You can run external programs from the MATLAB Command Window. The exclamation point character (!) is a shell escape and indicates that the rest of the input line is a command to the operating system. This is useful for invoking utilities or running other programs without quitting MATLAB. On Linux, for example, !emacs magik.m invokes an editor called emacs for a file named magik.m. When you quit the external program, the operating system returns control to MATLAB.

Launch Pad – MATLAB's Launch Pad provides easy access to tools, demos, and documentation.

Help Browser – Use the Help browser to search and view documentation for all your MathWorks products. The Help browser is a web browser integrated into the MATLAB desktop that displays HTML documents. To open the Help browser, click the help button in the toolbar, or type helpbrowser in the Command Window. The Help browser consists of two panes: the Help Navigator, which you use to find information, and the display pane, where you view the information.

Help Navigator – Use the Help Navigator to find information. It includes:

Product filter – Set the filter to show documentation only for the products you specify.

Contents tab – View the titles and tables of contents of documentation for your products.

Index tab – Find specific index entries (selected keywords) in the MathWorks documentation for your products.

Search tab – Look for a specific phrase in the documentation. To get help for a specific function, set the Search type to Function Name.

Favorites tab – View a list of documents you previously designated as favorites.

Display Pane – After finding documentation using the Help Navigator, view it in the display pane. While viewing the documentation, you can:

Browse to other pages – Use the arrows at the tops and bottoms of the pages, or use the back and forward buttons in the toolbar.

Bookmark pages – Click the Add to Favorites button in the toolbar.

Print pages – Click the print button in the toolbar.

Find a term in the page – Type a term in the Find in page field in the toolbar and click Go. Other features available in the display pane are: copying information, evaluating a selection, and viewing web pages.

Current Directory Browser – MATLAB file operations use the current directory and the search path as reference points. Any file you want to run must either be in the current directory or on the search path.

Search Path – To determine how to execute the functions you call, MATLAB uses a search path to find M-files and other MATLAB-related files, which are organized in directories on your file system.

Any file you want to run in MATLAB must reside in the current directory or in a directory that is on the search path. By default, the files supplied with MATLAB and MathWorks toolboxes are included in the search path.

Workspace Browser – The MATLAB workspace consists of the set of variables (named arrays) built up during a MATLAB session and stored in memory. You add variables to the workspace by using functions, running M-files, and loading saved workspaces.

To view the workspace and information about each variable, use the Workspace browser, or use the functions who and whos. To delete variables from the workspace, select the variable and choose Delete from the Edit menu. Alternatively, use the clear function. The workspace is not maintained after you end the MATLAB session. To save the workspace to a file that can be read during a later MATLAB session, select Save Workspace As from the File menu, or use the save function. This saves the workspace to a binary file, called a MAT-file, that has a .mat extension. There are options for saving to different formats. To read in a MAT-file, select Import Data from the File menu, or use the load function.

Array Editor – Double-click a variable in the Workspace browser to see it in the Array Editor. Use the Array Editor to view and edit a visual representation of one- or two-dimensional numeric arrays, strings, and cell arrays of strings that are in the workspace.

Editor/Debugger – Use the Editor/Debugger to create and debug M-files, which are programs you write to run MATLAB functions. The Editor/Debugger provides a graphical user interface for basic text editing, as well as for M-file debugging. You can use any text editor to create M-files, such as Emacs, and you can use preferences (accessible from the desktop File menu) to specify that editor as the default. If you use another editor, you can still use the MATLAB Editor/Debugger for debugging, or you can use debugging functions, such as dbstop, which sets a breakpoint. If you just need to view the contents of an M-file, you can display it in the Command Window by using the type function.


MATLAB provides a large number of standard elementary mathematical functions, including abs, sqrt, exp, and sin. Taking the square root or logarithm of a negative number is not an error; the appropriate complex result is produced automatically. MATLAB also provides many more advanced mathematical functions, including Bessel and gamma functions. Most of these functions accept complex arguments. For a list of the elementary mathematical functions, type

help elfun

help specfun

help elmat

Some of the functions, like sqrt and sin, are built-in. They are part of the MATLAB core, so they are very efficient, but their computational details are not readily accessible. Other functions, like gamma and sinh, are implemented in M-files.

pi       3.14159265…
i        Imaginary unit, √−1
j        Same as i
eps      Floating-point relative precision, 2^−52
realmin  Smallest floating-point number, 2^−1022
realmax  Largest floating-point number, (2 − eps)·2^1023
Inf      Infinity
NaN      Not-a-number

Table 4.6 Special values in MATLAB
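The table's values are the standard IEEE-754 double-precision constants, so they can be verified in any language. A quick check in Python (used here only because the constants are language-independent; the project itself uses MATLAB):

```python
import sys

eps = sys.float_info.epsilon   # MATLAB's eps:     2^-52
realmin = sys.float_info.min   # MATLAB's realmin: 2^-1022
realmax = sys.float_info.max   # MATLAB's realmax: (2 - eps) * 2^1023

assert eps == 2.0 ** -52
assert realmin == 2.0 ** -1022
assert realmax == (2.0 - eps) * 2.0 ** 1023
assert float("inf") > realmax          # Inf exceeds every finite double
assert float("nan") != float("nan")    # NaN compares unequal even to itself
```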


MATLAB implements GUIs as figure windows containing various types of UI control objects. You must program each object to perform the intended action when activated by the user of the GUI. In addition, you must be able to save and launch your GUI. All of these tasks are simplified by GUIDE, MATLAB's graphical user interface development environment.

4.7.1 GUI Development Environment:

The process of implementing a GUI involves two basic tasks:

  • Laying out the GUI components
  • Programming the GUI components

GUIDE is primarily a set of layout tools. However, GUIDE also generates an M-file that contains code to handle the initialization and launching of the GUI. This M-file provides a framework for the implementation of the callbacks – the functions that execute when users activate components in the GUI.

4.7.2 The Implementation of a GUI:

While it is possible to write an M-file that contains all the commands to lay out a GUI, it is easier to use GUIDE to lay out the components interactively and to generate two files that save and launch the GUI:

A FIG-file contains a complete description of the GUI figure and all of its children (UI controls and axes), as well as the values of all object properties.

An M-file contains the functions that launch and control the GUI and the callbacks, which are defined as subfunctions. This M-file is referred to as the application M-file in this documentation.

Note that the application M-file does not contain the code that lays out the uicontrols; this information is saved in the FIG-file.

The following diagram illustrates the parts of a GUI implementation.


Fig 4.7.2 Implementation of GUI


GUIDE simplifies the creation of GUI applications by automatically generating an M-file framework directly from your layout. You can then use this framework to code your application M-file. This approach provides a number of advantages:

The M-file contains code to implement a number of useful features (see Configuring Application Options for information on these features). The M-file adopts an effective approach to managing object handles and executing callback routines (see Creating and Storing the Object Handle Structure for more information). The M-file provides a way to manage global data (see Managing GUI Data for more information).

The automatically inserted subfunction prototypes for callbacks ensure compatibility with future releases. For more information, see Generating Callback Function Prototypes for details on syntax and arguments.

You can elect to have GUIDE generate only the FIG-file and write the application M-file yourself. Keep in mind that there are no UI control creation commands in the application M-file; the layout information is contained in the FIG-file generated by the Layout Editor.


The Layout Editor component palette contains the user interface controls that you can use in your GUI. These components are MATLAB UI control objects and are programmable via their callback properties. This section provides information on these components.

  • Push Buttons
  • Sliders
  • Toggle Buttons
  • Frames
  • Radio Buttons
  • List boxes
  • Checkboxes
  • Popup Menus
  • Edit Text
  • Axes
  • Static Text
  • Figures




5.1   Implementation:

Implementation is the stage of the project where the theoretical design, such as the architecture and block diagram of the system, is transformed into a practical, working system. This section describes the proposed scheme, reversible data hiding in an encrypted JPEG bit-stream. In the proposed reversible data hiding scheme, a process is first applied to shrink the histogram of the cover image, after which each pixel is encrypted by the image provider. Given the encrypted JPEG image, the data-hider then modifies pixel values to embed a bit-sequence generated from error-correction codes together with the additional data.

Because of the homomorphic property, the modification in the encrypted domain results in a slight increase or decrease of pixel values, implying that an image similar to the original can be obtained at the receiver side after decryption.

Because the histogram is shrunk prior to encryption, the data embedding operation does not cause any overflow or underflow in the directly decrypted image. The embedded additional data can then be extracted, and the original image recovered, from the directly decrypted image. Note that data extraction and content recovery in this reversible data hiding scheme are performed in the plaintext domain, whereas data extraction in the earlier lossless scheme is performed in the encrypted domain, making content recovery unnecessary there.

Most spatial-domain reversible data hiding methods are developed from two main principles: difference expansion and histogram modification. Generally, the first kind, difference expansion, provides a higher capacity, whereas the second, histogram modification, yields a better-quality watermarked image.

This project proposes a novel reversible data hiding (RDH) scheme based on histogram modification. Its principle is to modify the histogram of adjacent pixel differences instead of the histogram of the host (cover) image itself. Several peak points exist around the zero bin, and many zero points lie on either side of it. In the histogram, a peak point is the bin with the highest statistical value (i.e., the largest count falling into that bin), while a zero point is a bin with zero count. All differences are divided into levels ranging over [−255, 255], each level representing one histogram bin. It is therefore feasible to modify the histogram with a multilevel mechanism that conceals more secret data. At the receiver, the decoder recovers the cover image pixels one by one, each recovered pixel aided by its previously recovered neighbor; during this process the secret data bits are extracted from the watermarked adjacent pixel differences.

Fig 5.1 Principle of Reversible data hiding based on histogram modification

In a reversible data hiding method based on histogram modification, part of the histogram of the cover image is shifted leftward or rightward, producing extra redundancy for data embedding. This principle is illustrated in Fig 5.1. First, the zero and maximum (peak) points of the histogram of the original image are denoted b(Z) and b(P). Then all bins between b(P) and b(Z) are shifted by one toward the right.

In this way, the bin b(P) is emptied to zero and b(P + 1) becomes the new peak point. In the next step, secret data can be inserted into the cover image by conditionally increasing pixel values equal to P + 1. Whenever a pixel with value P + 1 is encountered, one bit of secret data is hidden: if the current secret bit is '0', the pixel value is changed to P; if the current secret bit is '1', the pixel value P + 1 is left unchanged. At the decoder in the receiver, data extraction and image recovery are the inverse of data embedding. Another reversible data hiding method, adjacent pixel difference (APD), depends on modifying the differences between adjacent pixels. In this method, the image pixels are scanned in an inverse "S" order. In Fig 5.2, a 3 × 3 image (three rows and three columns) is used to explain the principle: odd rows are scanned from left to right and even rows from right to left. The scan direction is marked by the blue line, and the image block can be arranged into a pixel sequence p1, p2, . . ., p9. Assume the cover image is an eight-bit gray-level bitmap of size M × N. The pixel sequence p1, p2, . . ., pM×N is acquired through the inverse "S" scan, and the differences between neighboring pixels are calculated as d1 = p1 and di = pi−1 − pi (2 ≤ i ≤ M × N).
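The histogram-shift embed/extract cycle described above can be sketched in a few lines. The sketch below is in Python for illustration (the dissertation's implementation is in MATLAB; function names and the toy pixel list are ours), and it assumes one peak bin P and one zero bin Z with P < Z, as in the description:

```python
def hs_embed(pixels, bits, P, Z):
    """Histogram-shift embedding: bins [P, Z-1] move right by one, then
    each former peak pixel (now valued P+1) carries one secret bit.
    Assumes len(bits) equals the number of peak-valued pixels."""
    out, it = [], iter(bits)
    for v in pixels:
        if P <= v < Z:
            v += 1                # shift: empties bin P, b(P+1) is new peak
        if v == P + 1:            # former peak pixel hides one bit
            if next(it) == 0:     # bit 0 -> change to P; bit 1 -> keep P+1
                v = P
        out.append(v)
    return out

def hs_extract(pixels, P, Z):
    """Inverse process: read bits from values P / P+1, then undo the shift."""
    bits, rec = [], []
    for v in pixels:
        if v == P:
            bits.append(0); rec.append(P)
        elif v == P + 1:
            bits.append(1); rec.append(P)
        elif P + 1 < v <= Z:
            rec.append(v - 1)     # undo the rightward shift
        else:
            rec.append(v)
    return bits, rec
```

For example, with pixels [2, 5, 2, 3, 2, 0], peak P = 2 and zero point Z = 4, embedding the bits [1, 0, 1] yields [3, 5, 2, 4, 3, 0], and hs_extract recovers both the bits and the original pixels exactly; the capacity equals the count of peak-valued pixels.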

Fig 5.2 Inverse S scan of a 3×3 image

Assuming pi−1 and pi have similar values, a large number of the differences di (2 ≤ i ≤ M × N) are equal or close to 0. The difference histogram is then built from the M × N − 1 statistical differences, with bins defined from left to right as b(−255), b(−254), . . ., b(−1), b(0), b(1), . . ., b(254), b(255). Generally, most differences fall around bin b(0); few fall into bins far from b(0), and the curve drops drastically moving away toward the left and right sides. APD selects bins b(p1) and b(z1) (assume p1 < z1), where b(p1) and b(z1) represent the maximum and minimum points. The bins in [b(p1 + 1), b(z1 − 1)] are shifted by one toward the right, so that bin b(p1 + 1) is emptied to zero for data embedding. If the confidential bit '1' is to be embedded, the differences equal to p1 are increased by 1; if '0' is embedded, nothing is changed. To improve the capacity, APD can select two pairs of maximum and minimum points, for instance [b(p1), b(z1)] and [b(z2), b(p2)] (assume p1 < z1 and z2 < p2). The histogram bins in [b(p1 + 1), b(z1 − 1)] are then shifted by one toward the right, and those in [b(z2 + 1), b(p2 − 1)] by one toward the left, so that bins b(p1 + 1) and b(p2 − 1) are emptied to zero for data embedding. The modulation of the confidential bits is identical to the one-pair case. Note that the ranges [b(p1), b(z1)] and [b(z2), b(p2)] must not overlap.
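The inverse "S" scan and the difference histogram described above can be sketched as follows. This Python sketch is illustrative (names are ours), and it assumes the common APD convention d1 = p1, di = pi−1 − pi for i ≥ 2:

```python
from collections import Counter

def inverse_s_scan(img):
    """Flatten a 2-D image in inverse 'S' order: the first, third, ... rows
    left-to-right, the second, fourth, ... rows right-to-left, giving the
    pixel sequence p1, p2, ..., pMxN."""
    seq = []
    for r, row in enumerate(img):
        seq.extend(row if r % 2 == 0 else reversed(row))
    return seq

def differences(seq):
    # d1 = p1; di = p(i-1) - pi for 2 <= i <= MxN
    return [seq[0]] + [seq[i - 1] - seq[i] for i in range(1, len(seq))]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
seq = inverse_s_scan(img)   # [1, 2, 3, 6, 5, 4, 7, 8, 9]
d = differences(seq)        # [1, -1, -1, -3, 1, 1, -3, -1, -1]
hist = Counter(d[1:])       # difference histogram over d2..dMxN
```

With natural images, neighboring pixels are strongly correlated, so this histogram is sharply peaked near the zero bin, which is exactly what the multilevel modification exploits.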

Fig 5.3 Histogram modification for EL=0


Fig 5.4 Histogram modification for EL=2

5.2   Proposed Scheme:

5.2.1   Motivation:

The main drawback of the adjacent pixel difference method is that it does not provide a very large capacity, because at most two pairs of maximum-minimum points are used for data hiding. This restricts its use in applications where a large amount of data has to be embedded; in practice, many pairs of maximum-minimum points have to be used. Motivated by this, a multilevel histogram modification has been designed for large-capacity data hiding.

5.2.2   Data Embedding:

In the reversible data hiding scheme, the inverse "S" order is adopted to scan the pixels of the image and obtain the differences between adjacent pixels. A pseudo-random number generator produces the secret data bit sequence, and a multilevel histogram modification is applied during the data embedding stage. An integer parameter called the embedding level (EL) controls the hiding capacity: a higher EL value means more secret data bits can be embedded, while a smaller EL value means fewer. The case EL > 0 is more complicated than the case EL = 0.

In step 1, the pixels of the image I are scanned by the inverse "S" scan into a column of pixel sequence p1, p2, . . ., pM×N.

In step 2, the differences di (1 ≤ i ≤ M × N) are computed, and a histogram is built from the values di (2 ≤ i ≤ M × N).

In step 3, the value of EL is selected. If EL is zero, execute step 4; if EL is greater than zero, go to step 5.

In step 4, data embedding for EL = 0 proceeds as follows.

In step 4.1, the bins to the right of b(0) are shifted by one level towards the right:

d′i = di + 1 if di > 0;  d′i = di otherwise  (2 ≤ i ≤ M × N)
In step 4.2, examine the values d′i = 0 (2 ≤ i ≤ M × N) one after another. Each difference equal to zero is used to hide one secret bit: if the current secret bit is zero (w = 0), nothing changes; if w = 1, the difference is increased by 1. The operation can be written as

d′′i = d′i + w if d′i = 0  (w ∈ {0, 1})
Fig. 5.3 shows the histogram modification for embedding level EL = 0; the red arrow indicates embedding '0' and the blue arrow embedding '1'. Then go to step 6.

In step 5, data embedding for EL > 0 proceeds as follows.

In step 5.1, shift the bins to the right of b(EL) by one level towards the right and the bins to the left of b(−EL) by one level towards the left:

d′i = di + 1 if di > EL;  d′i = di − 1 if di < −EL;  d′i = di otherwise
In step 5.2, examine the values d′i (2 ≤ i ≤ M × N) lying in the range [−EL, EL] one after another.

In step 5.2.1, embed the secret data bits into the differences whose magnitude equals the current EL:

d′′i = d′i + w if d′i = EL;  d′′i = d′i − w if d′i = −EL  (w ∈ {0, 1})
In step 5.2.2, the embedding level (EL) is reduced by 1.


Fig 5.5 Example of data embedding for EL=0


Fig 5.6 Example of data extraction and recovery for EL=0


Fig 5.7 Example of data embedding for EL=2


Fig 5.8 Example of data extraction and image recovery for EL=2

In step 5.2.3, if EL is not equal to zero, execute steps 5.2.1 and 5.2.2 again; if EL is equal to zero, move to step 6.


In step 6, the marked pixels p′i are produced as

p′1 = p1,  p′i = p′i−1 − d′′i  (2 ≤ i ≤ M × N)
In step 7, rearrange the marked pixel sequence p′i into the marked image I′.
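The embedding procedure for the EL = 0 case (steps 2, 4 and 6 above) can be sketched in Python. This is an illustration only; the function name and list-based representation are hypothetical, and the pixels are assumed to be pre-clamped so the shifts cannot overflow:

```python
def embed_el0(pixels, bits):
    """Sketch of multilevel histogram-shifting embedding for EL = 0.
    Differences d_i = p_{i-1} - p_i; bins right of b(0) shift by +1,
    then each zero difference hides one secret bit w (0 or 1)."""
    it = iter(bits)
    marked = [pixels[0]]                  # p'_1 = p_1
    for i in range(1, len(pixels)):
        d = pixels[i - 1] - pixels[i]     # step 2: adjacent difference
        if d > 0:
            d += 1                        # step 4.1: shift right bins of b(0)
        elif d == 0:
            d += next(it, 0)              # step 4.2: embed one secret bit
        marked.append(marked[-1] - d)     # step 6: p'_i = p'_{i-1} - d''_i
    return marked

print(embed_el0([5, 5, 5], [1, 0]))  # -> [5, 4, 4]
```

A positive difference of 2, for example, is shifted to 3, while the two zero differences in the example absorb the bits 1 and 0.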

5.2.3   Data Extraction and Image Recovery:

Data extraction and image recovery form the exact inverse of the data embedding process. These operations are explained through the steps below.

In step 1, scan the marked image I′ using the inverse "S" scan into a column of pixel sequence p′i (1 ≤ i ≤ M × N).

In step 2, the embedding level parameter EL is received from the encoder over a secure channel. If EL is 0, execute steps 3 and 4; if EL is greater than 0, execute steps 5 and 6.

In step 3, if EL is equal to zero, the pixels of the host image are recovered from the marked differences d′′i = p′i−1 − p′i as

p1 = p′1;  pi = pi−1 − (d′′i − 1) if d′′i ≥ 1;  pi = pi−1 − d′′i otherwise  (2 ≤ i ≤ M × N)
In step 4, if EL is equal to zero, the secret data w is extracted as

w = 0 if d′′i = 0;  w = 1 if d′′i = 1  (2 ≤ i ≤ M × N)

That is, a secret bit "0" is extracted on encountering p′i−1 − p′i = 0, and a secret bit "1" on encountering p′i−1 − p′i = 1. The extracted bits are concatenated in order to obtain the original secret bit sequence. After this step, go to step 7.

In step 5, if the embedding level is greater than 0, the first host image pixel is obtained as p1 = p′1. The differences are then calculated as:


The original differences can then be obtained through the equation given below:


After that, the host image pixel sequence can be recovered through the conditions below:


In step 6, if the embedding level is greater than zero, the secret data is extracted in (EL + 1) rounds.

First, set the round index R = 1. In step 6.1, extract the data as:


In step 6.2, embedding level is reduced by 1 and the round index is increased by 1.

In step 6.3, if EL is not equal to zero, then follow and execute the steps 6.1 and 6.2 repeatedly.


In step 6.4, regroup and combine the extracted data wR (1 ≤ R ≤ EL + 1) through the equation below:


Thus the secret bits hidden in the cover image are obtained. Then go to step 7.

Finally, in step 7, rearrange the entire recovered sequence pi (1 ≤ i ≤ M × N) into the cover image I.
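For the EL = 0 case, the extraction and recovery steps above can be sketched in Python (an illustrative sketch with hypothetical names, mirroring the three cases a marked difference can fall into):

```python
def extract_el0(marked):
    """Sketch of extraction and recovery for EL = 0 (inverse of embedding).
    Marked differences d''_i = p'_{i-1} - p'_i: a value of 0 or 1 carries a
    hidden bit (original difference 0), a value >= 2 is a shifted bin
    (original d = d'' - 1), and negative values were never touched."""
    bits, recovered = [], [marked[0]]     # p_1 = p'_1
    for i in range(1, len(marked)):
        dpp = marked[i - 1] - marked[i]
        if dpp in (0, 1):
            bits.append(dpp)              # extract one secret bit
            d = 0
        elif dpp >= 2:
            d = dpp - 1                   # undo the right shift
        else:
            d = dpp
        recovered.append(recovered[-1] - d)
    return bits, recovered

print(extract_el0([5, 4, 4]))  # -> ([1, 0], [5, 5, 5])
```

Running it on the marked sequence from the embedding example recovers both the secret bits and the original pixels exactly, which is the reversibility property.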











The output of the project, reversible data hiding in an encrypted JPEG bitstream, can be found through eight steps after running the MATLAB code. These steps are described below.

First, the MATLAB code is executed and the window shown in Fig 6.1 appears. An input colour JPEG image of dimensions 481×321 is then selected by clicking the browse option, and the input image is displayed in window (a) of Fig 6.1.


Fig 6.1 Selecting input JPEG Image


Fig 6.2 Histogram modification of the input image

Next, histogram modification of the selected input image takes place by selecting a value from 0 to 5. Before histogram modification, the input image is converted to a grey bitmap image, which is then treated as the JPEG bit stream. The bitstream is altered and the contrast of the image changes according to the selected value.
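The histogram modification at this stage clamps the pixel values into [P, 255 − P] with P = 2^L, as in the `histmod` callback of the MATLAB listing later in this report. A Python sketch (function name illustrative):

```python
import numpy as np

def clip_for_embedding(gray, L):
    """Histogram modification used before embedding: clamp pixels into
    [P, 255 - P] with P = 2**L, so later +/-P shifts cannot overflow."""
    P = 2 ** L
    return np.clip(gray, P, 255 - P)

img = np.array([0, 3, 128, 250, 255])
print(clip_for_embedding(img, 2).tolist())  # -> [4, 4, 128, 250, 251]
```

With L = 2, pixels below 4 are raised to 4 and pixels above 251 lowered to 251, reserving headroom for the histogram shifts.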


Fig 6.3 Modified gray bitmap image


Fig 6.4 Inverse S-Order Diff to find second order differential coefficients

The x-axis of the histogram shows the range of pixel values. Since it is an 8 bpp image, it has 256 levels (shades) of gray; that is why the x-axis ranges from 0 to 255 with tick marks every 50. The y-axis shows the count of these intensities. As the graph shows, most of the high-frequency bars lie in the first half, the darker portion, which means the image is darker; this can be confirmed from the image itself.



Fig 6.5 Selecting secret content(logo) to be embedded with the input image

For image encryption and data embedding, a content image has to be selected, which is done by clicking on the watermarked logo button. In this project a 32×32 bitmap image is selected; the image contained in window (c) is the logo or content image.


Fig 6.6 Watermarked image after embedding of input cover image and logo

Now the input image, converted into the bitmap image shown in window (b), and the content logo shown in window (c) are embedded into one image; the resulting watermarked image is shown in window (d). It looks similar to the image in window (b), but it also contains the content logo. This is possible because the bitstream leaves some vacant bits for the secret content to be embedded; during the embedding process, the bit lengths of both images are added and subtracted.


Fig 6.7 Extraction of logo and the input cover image at the receiver

Here, upon clicking the Extraction button, the embedded image is extracted: both the input bitmap image and the content logo are retrieved independently. The peak signal-to-noise ratio and mean square error are found to be 37.5542 dB and 11.4197 respectively. For images to be extracted without bit differences and errors, the PSNR has to be between 25 and 50 dB, whereas the MSE has to be very small; PSNR and MSE are inversely related.


Fig 6.8 Comparison of the output images at the receiver end and the images at the sender end

Here, the images shown in the above figure are as follows.

Image (a) is the input JPEG image.

Image (b) is the histogram-modified grey bitmap image.

Image (c) is the secret information image.

Image (d) is the embedded image, which consists of both the input image and the logo.

Image (e) is the lossless secret information image extracted at the receiver.

Image (f) is the comparison of the input image at the sender and the extracted cover image.

Image (g) is the comparison of the secret image at the sender and the extracted secret image, i.e., of images (c) and (e).





Fig 6.9 Waveforms of L, PSNR, MSE and hiding capacity

The above figure contains three waveforms. The first is L vs PSNR, where 'L' is the bit colour depth and 'PSNR' the peak signal-to-noise ratio; the second is L vs MSE; and the third is L vs hiding capacity.


Fig 6.10 Command window output

The PSNR and MSE values for the following images are shown in the table below.


Fig 6.11 Image 1


Fig 6.12 Image 2


Fig 6.13 Image 3


Fig 6.14 Image 4

Image     Peak Signal to Noise Ratio (PSNR) (dB)     Mean Square Error (MSE)
Image 1   37.7191                                    10.9943
Image 2   37.5137                                    11.5267
Image 3   36.8627                                    13.3908
Image 4   37.605                                     11.2869

Table 7.1 PSNR and MSE values for different images

From the table it can be observed that the PSNR values are higher than 25 dB, which conveys that there is not much change in the image bits, and the MSE values are very low. Thus it can be said that the retrieved images are essentially the same as the input images sent from the sender.

PSNR = 10 · log10(L² / MSE) dB, where L is the maximum possible pixel value (255 for an 8-bit image)

From the above equation it is clear that PSNR and MSE are inversely related: if PSNR is high then MSE must be low, and vice versa.
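The PSNR and MSE computation used for these comparisons can be sketched in Python, mirroring the `psnrmse` helper in the MATLAB listing (the Python names are illustrative):

```python
import numpy as np

def psnr_mse(original, distorted, peak=255.0):
    """MSE and PSNR = 10*log10(peak^2 / MSE) for 8-bit images."""
    x = np.asarray(original, dtype=float)
    y = np.asarray(distorted, dtype=float)
    mse = np.mean((x - y) ** 2)
    psnr = float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)
    return psnr, mse

a = np.zeros((2, 2))
b = np.full((2, 2), 5.0)
psnr, mse = psnr_mse(a, b)   # MSE = 25 for a uniform error of 5
print(round(psnr, 2), mse)
```

Identical images give infinite PSNR, which is the limiting case of the inverse relation stated above.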




















7.1 Conclusion:

In this project, reversible data hiding in an encrypted JPEG bitstream, an innovative scheme is proposed consisting of image encryption followed by data embedding, data extraction and image recovery. First, the content owner encrypts the original uncompressed image using histogram modification of the cover image. The data hider can then compress the LSBs of the encrypted JPEG image, creating a sparse space to accommodate the extra data, even without knowing anything about the original content. The encrypted JPEG bitstream now contains the additional data. The receiver can extract the additional data, or obtain an extracted image very similar to the original cover image. By exploiting the spatial correlation in the original image, the receiver can extract the additional data and recover the original content without error, provided the amount of additional data is not very large. Moreover, if a lossless compression method is applied to the encrypted JPEG image containing both the cover image and the embedded secret data, the original content can still be recovered and the additional data bits still extracted, because lossless compression does not change the content of the encrypted JPEG image.

7.2 Future Enhancements:

Although the reversible data hiding method is well matched with encrypted JPEG and PNG images produced by permuting pixel rows and columns, it is incompatible here because the image encryption is done by an exclusive-OR bit operation. A simpler and more natural combination of image encryption and data hiding will be investigated in the future.





































function varargout = reversewatermark(varargin)

% Begin initialization code - DO NOT EDIT

gui_Singleton = 1;

gui_State = struct('gui_Name',       mfilename, ...

    'gui_Singleton',  gui_Singleton, ...

    'gui_OpeningFcn', @reversewatermark_OpeningFcn, ...

    'gui_OutputFcn',  @reversewatermark_OutputFcn, ...

    'gui_LayoutFcn',  [] , ...

    'gui_Callback',   []);


if nargin && ischar(varargin{1})

    gui_State.gui_Callback = str2func(varargin{1});

end

if nargout

    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});

else

    gui_mainfcn(gui_State, varargin{:});

end


% End initialization code – DO NOT EDIT

% — Executes just before reversewatermark is made visible.

function reversewatermark_OpeningFcn(hObject, eventdata, handles, varargin)

handles.output = hObject;

a = ones(256,256);















% Update handles structure

guidata(hObject, handles);

% — Outputs from this function are returned to the command line.

function varargout = reversewatermark_OutputFcn(hObject, eventdata, handles)

% Get default command line output from handles structure

varargout{1} = handles.output;

% — Executes on button press in browseim.

function browseim_Callback(hObject, eventdata, handles)

[file path] = uigetfile('*.jpg','Input Image is Selected');

if file==0

    warndlg('User has to select Input');

    return;

end

Host_im = imread(file);



detail_Inform = imfinfo(file);


Host_im = rgb2gray(Host_im);


handles.Host_im = imresize(Host_im,[128 128]);

detail_Inform.Height = 128;

detail_Inform.Width = 128;

handles.detail_Inform = detail_Inform;


% Update handles structure

guidata(hObject, handles);

% — Executes on button press in histmod.

function histmod_Callback(hObject, eventdata, handles)

Host_im = handles.Host_im;

X = inputdlg('Enter Value from 0 to 5');

L = str2num(X{1});

P = 2^L;

H = Host_im;

location = find(Host_im<P);

location2 = find(Host_im>(255-P));

H(location) = P;

H(location2) = 255-P;



handles.H = H;

handles.P = P;

handles.L = L;

% Update handles structure

guidata(hObject, handles);

% — Executes on button press in invers_S_ord.

function invers_S_ord_Callback(hObject, eventdata, handles)

H = double(handles.H);

P = handles.P;

Host_im = double(handles.Host_im);

detail_Inform = handles.detail_Inform;

Row = detail_Inform.Height;

Col = detail_Inform.Width;

k =1;

for i = 1:Row

    im_val = Host_im(i,:);

    im_val2 = H(i,:);

    if mod(i,2)==0

        d(k:i*Col,:) = im_val(end:-1:1)';

        d2(k:i*Col,:) = im_val2(end:-1:1)';

    else

        d(k:i*Col,:) = im_val(1:1:end)';

        d2(k:i*Col,:) = im_val2(1:1:end)';

    end

    k = i*Col + 1;

end


Diff(1) = d(1);

Diff(2:length(d)) = abs(d(1:length(d)-1) - d(2:length(d)));

Diff2(1) = d2(1);

Diff2(2:length(d)) = abs(d2(1:length(d)-1) - d2(2:length(d)));

%%%%%%%%%%%%% Histogram Image………….

for i = 0:255

    N = find(Host_im==i);

    N2 = find(Diff==i);

    Num_pix(i+1) = length(N);

    Num_diffpix(i+1) = length(N2);

end







%%%%%%%%%%%%% Histogram Image………….

mn = 1;

for i = P:255-P

    N3 = find(H==i);

    N4 = find(Diff2==i);

    Num_pix2(mn) = length(N3);

    Num_diffpix2(mn) = length(N4);

    mn = mn+1;

end






handles.Diff = Diff;

handles.d = d;

handles.Diff2 = Diff2;

handles.d2 = d2;

% Update handles structure

guidata(hObject, handles);

% — Executes on button press in embed.

function embed_Callback(hObject, eventdata, handles)

Logo_im = handles.Logo_im;

x = handles.d2;

Diff = handles.Diff2;

L = handles.L;

len = length(Diff);


Logo_len = length(Logo_im(:));

handles.Logo_len = Logo_len;

New_logo = zeros(1,len);


New_logo(1,1:Logo_len) = reshape(Logo_im,[1 Logo_len]);

P = 2^L;


Y = x;

k = 1;

h = waitbar(0,'Please Wait....');

for i = 2:len

    if (Diff(i)>=P) && (x(i)>=x(i-1))

        Y(i) = x(i)+P;

    elseif (Diff(i)>=P) && (x(i)<x(i-1))

        Y(i) = x(i)-P;

    elseif (Diff(i)<P) && (x(i)>=x(i-1))

        seq_b(k) = New_logo(k);

        Y(i) = x(i)+(Diff(i)+New_logo(k));

        k = k+1;

    elseif (Diff(i)<P) && (x(i)<x(i-1))

        seq_b(k) = New_logo(k);

        Y(i) = x(i)-(Diff(i)+New_logo(k));

        k = k+1;

    end

end

close(h);





handles.secret_data = seq_b;


handles.Y = Y;

handles.P = P;

handles.k = k;

% Update handles structure

guidata(hObject, handles);

warndlg('Embedding Process Completed');

% — Executes on button press in watermarked.

function watermarked_Callback(hObject, eventdata, handles)

Y = handles.Y;

H = handles.H;

cnt = handles.k;

detail_Inform = handles.detail_Inform;

Row = detail_Inform.Height;

Col = detail_Inform.Width;

k =1;

for i = 1:Row

    if mod(i,2)==0

        star_p = Col;

        mid_p = -1;

        end_p = 1;

    else

        star_p = 1;

        mid_p = 1;

        end_p = Col;

    end

    for j = star_p:mid_p:end_p

        watermarked_im(i,j) = Y(k);

        k = k+1;

    end

end






[PSNR MSE] = psnrmse(H,watermarked_im);



L = handles.L;

switch L

    case 0

        PSNR_0 = PSNR;

        MSE_0 = MSE;

        K_0 = cnt;

        save PSNR_0 PSNR_0;

        save MSE_0 MSE_0;

        save K_0 K_0;

    case 1

        PSNR_1 = PSNR;

        MSE_1 = MSE;

        K_1 = cnt;

        save PSNR_1 PSNR_1;

        save MSE_1 MSE_1;

        save K_1 K_1;

    case 2

        PSNR_2 = PSNR;

        MSE_2 = MSE;

        K_2 = cnt;

        save PSNR_2 PSNR_2;

        save MSE_2 MSE_2;

        save K_2 K_2;

    case 3

        PSNR_3 = PSNR;

        MSE_3 = MSE;

        K_3 = cnt;

        save PSNR_3 PSNR_3;

        save MSE_3 MSE_3;

        save K_3 K_3;

    case 4

        PSNR_4 = PSNR;

        MSE_4 = MSE;

        K_4 = cnt;

        save PSNR_4 PSNR_4;

        save MSE_4 MSE_4;

        save K_4 K_4;

    case 5

        PSNR_5 = PSNR;

        MSE_5 = MSE;

        K_5 = cnt;

        save PSNR_5 PSNR_5;

        save MSE_5 MSE_5;

        save K_5 K_5;

end


handles.watermarked_im = watermarked_im;

% Update handles structure

guidata(hObject, handles);

% — Executes on button press in clear.

function clear_Callback(hObject, eventdata, handles)

a = ones(256,256);















set(handles.edit1,'string',' ');

set(handles.edit2,'string',' ');

% — Executes on button press in plot_graph.

function plot_graph_Callback(hObject, eventdata, handles)

load PSNR_0;

load PSNR_1;

load PSNR_2;

load PSNR_3;

load PSNR_4;

load PSNR_5;

load MSE_0;

load MSE_1;

load MSE_2;

load MSE_3;

load MSE_4;

load MSE_5;

load K_0;

load K_1;

load K_2;

load K_3;

load K_4;

load K_5;

X = [0 1 2 3 4 5];

Y = [PSNR_0 PSNR_1 PSNR_2 PSNR_3 PSNR_4 PSNR_5];

Z = [MSE_0 MSE_1 MSE_2 MSE_3 MSE_4 MSE_5];

T = [K_0 K_1 K_2 K_3 K_4 K_5];




figure;

subplot(3,1,1); plot(X,Y);

xlabel('L value');

ylabel('PSNR');

title('L Vs PSNR');

subplot(3,1,2); plot(X,Z);

xlabel('L value');

ylabel('MSE');

title('L Vs MSE');

subplot(3,1,3); plot(X,T);

xlabel('L value');

ylabel('Hiding Capacity');

title('L Vs Hiding Capacity');

% — Executes on button press in feature_extract.

function feature_extract_Callback(hObject, eventdata, handles)

watermarked_im = handles.watermarked_im;

Logo_len = handles.Logo_len;

L = handles.L;

P = 2^(L+1);

detail_Inform = handles.detail_Inform;

Row = detail_Inform.Height;

Col = detail_Inform.Width;

k =1;

for i = 1:Row

    im_val = watermarked_im(i,:);

    if mod(i,2)==0

        d(k:i*Col,:) = im_val(end:-1:1)';

    else

        d(k:i*Col,:) = im_val(1:1:end)';

    end

    k = i*Col + 1;

end


Diff(1) = d(1);

Diff(2:length(d)) = abs(d(1:length(d)-1) - d(2:length(d)));



secret_data = handles.secret_data;

m = 1;

h = waitbar(0,'Please Wait.....');

x = d;

for k = 2:length(d)

    if (abs(d(k)-x(k-1))<P) && (d(k)<x(k-1))

        x(k) = d(k) + ceil(abs(d(k)-x(k-1))/2);

    elseif (abs(d(k)-x(k-1))<P) && (d(k)>x(k-1))

        x(k) = d(k) - ceil(abs(d(k)-x(k-1))/2);

    elseif (abs(d(k)-x(k-1))>=P) && (d(k)<x(k-1))

        x(k) = d(k) + (2^L);

    elseif (abs(d(k)-x(k-1))>=P) && (d(k)>x(k-1))

        x(k) = d(k) - (2^L);

    else

        x(k) = d(k);

    end


    if abs(d(k) - x(k-1))<P && mod(abs(d(k) - x(k-1)),2)~= 0

        b(m) = 1;

        m = m+1;

    elseif abs(d(k) - x(k-1))<P && mod(abs(d(k) - x(k-1)),2)== 0

        b(m) = 0;

        m = m+1;

    end

end

close(h);





Ret_im = b(1,1:Logo_len);

Extract_out = reshape(Ret_im,[sqrt(Logo_len) sqrt(Logo_len)]);



Extrc_out = sum(abs(secret_data - b));

disp('No of Bits Changed');

disp(Extrc_out);



k = 1;  % restart the scan index before rebuilding the image

for i = 1:Row

    if mod(i,2)==0

        star_p = Col;

        mid_p = -1;

        end_p = 1;

    else

        star_p = 1;

        mid_p = 1;

        end_p = Col;

    end

    for j = star_p:mid_p:end_p

        ddd(i,j) = x(k);

        k = k+1;

    end

end





handles.x = x;

% Update handles structure

guidata(hObject, handles);

% — Executes on button press in compare_res.

function compare_res_Callback(hObject, eventdata, handles)

H = handles.H;

watermarked_imold = handles.watermarked_im;

x = handles.x;

%%%%%%%%%%%%%  Checking Process………..

detail_Inform = handles.detail_Inform;

Row = detail_Inform.Height;

Col = detail_Inform.Width;

k =1;

for i = 1:Row

    if mod(i,2)==0

        star_p = Col;

        mid_p = -1;

        end_p = 1;

    else

        star_p = 1;

        mid_p = 1;

        end_p = Col;

    end

    for j = star_p:mid_p:end_p

        watermarked_im(i,j) = x(k);

        k = k+1;

    end

end



LSB_out = double(H) - watermarked_imold;



revers_im = double(H) - watermarked_im;



% — Executes on button press in water_logo.

function water_logo_Callback(hObject, eventdata, handles)

% hObject    handle to water_logo (see GCBO)

% eventdata  reserved – to be defined in a future version of MATLAB

% handles    structure with handles and user data (see GUIDATA)

[file path] = uigetfile('*.bmp','Select a Logo Image');

if file==0

    warndlg('User has to select Input');

    return;

end

Logo_im = imread(file);



handles.Logo_im = Logo_im;


% Update handles structure

guidata(hObject, handles);

function edit1_Callback(hObject, eventdata, handles)

% hObject    handle to edit1 (see GCBO)

% eventdata  reserved – to be defined in a future version of MATLAB

% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,’String’) returns contents of edit1 as text

%        str2double(get(hObject,’String’)) returns contents of edit1 as a double

% — Executes during object creation, after setting all properties.

function edit1_CreateFcn(hObject, eventdata, handles)

% hObject    handle to edit1 (see GCBO)

% eventdata  reserved – to be defined in a future version of MATLAB

% handles    empty – handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.

%       See ISPC and COMPUTER.

if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))

    set(hObject,'BackgroundColor','white');

end

function edit2_Callback(hObject, eventdata, handles)

% hObject    handle to edit2 (see GCBO)

% eventdata  reserved – to be defined in a future version of MATLAB

% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,’String’) returns contents of edit2 as text

% str2double(get(hObject,’String’)) returns contents of edit2 as a double

% — Executes during object creation, after setting all properties.

function edit2_CreateFcn(hObject, eventdata, handles)

% hObject    handle to edit2 (see GCBO)

% eventdata  reserved – to be defined in a future version of MATLAB

% handles    empty – handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.

%       See ISPC and COMPUTER.

if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))

    set(hObject,'BackgroundColor','white');

end

function [PSNR MSE] = psnrmse(Image1,Image2)

x = double(Image1);

y = double(Image2);

[r c p] = size(x);

MSE = (sum(sum((x - y) .^ 2)))/(r*c*p);

PSNR = 10*log10((255*255)/MSE);

if p==3

    PSNR = sum(PSNR)/3;

    MSE = sum(MSE)/3;

end


