Image Encryption - jpeg

I am doing image steganography. If I type a message longer than 3 characters to encrypt, I get an exception that quantization table 0x01 is not defined; if the message is shorter than 3 characters, I get an encrypted image as I need. I think this is due to the JPEG format (I think while injecting bits into the image byte array I have destroyed the properties and attributes of the image). Please help me; I am sure it is something related to metadata, but I don't know much about it.
Here is the code showing what I am doing:
void Creating_image()
{
    if (file == null)
    {
        JOptionPane.showMessageDialog(rootPane, "File is null in encrypt");
    }
    File f = new File(file.getParent() + "/encrypt.jpg");
    try
    {
        FileInputStream imageInFile = new FileInputStream(file);
        byte imageData[] = new byte[(int) file.length()];
        imageInFile.read(imageData);
        // Converting the image byte array into a Base64 string
        String imageDataString = Base64.encode(imageData);
        // Converting the Base64 string back into an image byte array
        pixels = Base64.decode(imageDataString);
        imageInFile.close();
    }
    catch (Exception as)
    {
        JOptionPane.showMessageDialog(rootPane, "Please first select an Image");
    }
    String msg = jTextArea1.getText();
    byte[] bmsg = msg.getBytes();
    String as = Base64.encode(bmsg);
    bmsg = Base64.decode(as);
    int len = msg.length();
    byte[] blen = inttobyte(len);
    String sd = Base64.encode(blen);
    blen = Base64.decode(sd);
    pixels = encode(pixels, blen, 32);
    pixels = encode(pixels, bmsg, 64);
    try
    {
        // Converting the image byte array into a Base64 string
        String imageDataString = Base64.encode(pixels);
        // Converting the Base64 string back into an image byte array
        pixels = Base64.decode(imageDataString);
        InputStream baisData = new ByteArrayInputStream(pixels, 0, pixels.length);
        image = ImageIO.read(baisData);
        if (image == null)
        {
            System.out.println("image is empty");
        }
        ImageIO.write(image, "jpg", f);
    }
    catch (Exception s)
    {
        System.out.println(s.getMessage());
    }
}
And this is what the encode function looks like:
byte[] encode(byte[] old, byte[] add, int offset)
{
    try
    {
        // each byte of the message needs 8 positions in the carrier
        if (8 * add.length + offset > old.length)
        {
            JOptionPane.showMessageDialog(rootPane, "File too short");
        }
    }
    catch (Exception d)
    {
        JOptionPane.showMessageDialog(rootPane, d.getLocalizedMessage());
    }
    byte no;
    for (int i = 0; i < add.length; i++)
    {
        no = add[i];
        for (int bit = 7; bit >= 0; bit--, ++offset)
        {
            int b = (no >> bit) & 1;
            // overwrite the least significant bit of the carrier byte
            old[offset] = (byte) ((old[offset] & 0xfe) | b);
        }
    }
    return old;
}

You are correct in that you have disturbed the file structure. The JPEG format contains highly compressed data, to the point that none of its bytes represent any pixel values directly. In fact, JPEG doesn't even store the pixel values, but the DCT coefficients of pixel blocks.
Your method of reading the raw bytes of the file would work only for a format like BMP, where the pixels are stored directly in the file. However, you'd still have to skip the first few bytes (the header), which contain information like the width and height of the image, the number of colour planes and the bits per pixel.
If you want to embed your message by modifying the least significant bits of pixels, you have to load the actual pixels into a byte array. Then you can modify the pixels with your encode() method. To save the data to a file, convert the byte array to a BufferedImage object and use ImageIO.write(). However, you must use a format that does not involve lossy compression, because that can distort the pixel values, thereby destroying your message. Losslessly compressed (or uncompressed) file formats include BMP and PNG, while JPEG is lossy.
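A minimal sketch of that lossless approach, assuming a PNG cover image; the file names and the choice of embedding in the blue channel's least significant bit are my own illustration, not your exact method:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class LsbSketch {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File("cover.png"));
        byte[] msg = "hello".getBytes();
        int offset = 0; // pixel index, walking the image row by row
        for (byte no : msg) {
            for (int bit = 7; bit >= 0; bit--, offset++) {
                int x = offset % img.getWidth();
                int y = offset / img.getWidth();
                int rgb = img.getRGB(x, y);
                int b = (no >> bit) & 1;
                // overwrite the least significant bit of the blue channel
                img.setRGB(x, y, (rgb & ~1) | b);
            }
        }
        ImageIO.write(img, "png", new File("stego.png")); // lossless save
    }
}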
If you still want to do JPEG steganography, the process is a bit more involved, but this answer pretty much covers what you need to do. Briefly, you want to borrow the source code of a JPEG encoder, because writing one is very complex and requires an intricate understanding of the whole format. The encoder converts the pixels to a bunch of different numbers (the lossy step) and then stores them compactly in a file. Your steganography algorithm should be injected between these two steps, where you can modify those numbers before they are saved to the file.

Related

android AudioTrack playback short array (16bit)

I have an application that plays back audio. It takes encoded audio data over RTP and decodes it into a 16-bit array. The decoded 16-bit array is converted into an 8-bit array (byte array), as this is required for some other functionality.
Even though audio playback works, it breaks up continuously and the output is very hard to recognise. If I listen carefully I can tell it is playing the correct audio.
I suspect this is due to the fact that I convert the 16-bit data stream into a byte array and use write(byte[], int, int, AudioTrack.WRITE_NON_BLOCKING) of the AudioTrack class for playback.
Therefore I converted the byte array back to a short array and used write(short[], int, int, AudioTrack.WRITE_NON_BLOCKING) to see if it could resolve the problem.
However, now there is no audio at all. In the debug output I can see the short array has data.
What could be the reason?
Here is the AudioTrack initialization:
sampleRate = AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_MUSIC);
minimumBufferSize = AudioTrack.getMinBufferSize(sampleRate, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO,
        AudioFormat.ENCODING_PCM_16BIT,
        minimumBufferSize,
        AudioTrack.MODE_STREAM);
Here is the code that converts the short array to a byte array:
for (int i = 0; i < internalBuffer.length; i++) {
    bufferIndex = i * 2;
    byte[] converted = shortToByte(internalBuffer[i]);
    buffer[bufferIndex] = converted[0];
    buffer[bufferIndex + 1] = converted[1];
}
Here is the method that converts the byte array back to a short array:
public short[] getShortAudioBuffer(byte[] b) {
    short audioBuffer[] = null;
    int index = 0;
    int audioSize = 0;
    ByteBuffer byteBuffer = ByteBuffer.allocate(2);
    if ((b == null) || (b.length < 2)) {
        return null;
    } else {
        audioSize = (b.length - (b.length % 2));
        audioBuffer = new short[audioSize / 2];
    }
    if ((audioSize / 2) < 2)
        return null;
    byteBuffer.order(ByteOrder.LITTLE_ENDIAN);
    for (int i = 0; i < audioSize / 2; i++) {
        index = i * 2;
        byteBuffer.put(b[index]);
        byteBuffer.put(b[index + 1]);
        audioBuffer[i] = byteBuffer.getShort(0);
        byteBuffer.clear();
        System.out.print(Integer.toHexString(audioBuffer[i]) + " ");
    }
    System.out.println();
    return audioBuffer;
}
Audio is decoded using the Opus library, and the configuration is as follows:
opus_decoder_ctl(dec,OPUS_SET_APPLICATION(OPUS_APPLICATION_AUDIO));
opus_decoder_ctl(dec,OPUS_SET_SIGNAL(OPUS_SIGNAL_MUSIC));
opus_decoder_ctl(dec,OPUS_SET_FORCE_CHANNELS(OPUS_AUTO));
opus_decoder_ctl(dec,OPUS_SET_MAX_BANDWIDTH(OPUS_BANDWIDTH_FULLBAND));
opus_decoder_ctl(dec,OPUS_SET_PACKET_LOSS_PERC(0));
opus_decoder_ctl(dec,OPUS_SET_COMPLEXITY(10)); // highest complexity
opus_decoder_ctl(dec,OPUS_SET_LSB_DEPTH(16)); // 16bit = two byte samples
opus_decoder_ctl(dec,OPUS_SET_DTX(0)); // default - not using discontinuous transmission
opus_decoder_ctl(dec,OPUS_SET_VBR(1)); // use variable bit rate
opus_decoder_ctl(dec,OPUS_SET_VBR_CONSTRAINT(0)); // unconstrained
opus_decoder_ctl(dec,OPUS_SET_INBAND_FEC(0)); // no forward error correction
Let's assume you have a short[] array which contains the 16-bit, one-channel data to be played.
Then each sample is a value between -32768 and 32767 which represents the signal amplitude at that exact moment, and the value 0 represents the middle point (no signal). This array can be passed to the audio track with the ENCODING_PCM_16BIT format encoding.
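For example, a minimal sketch of that 16-bit path (the mono 44.1 kHz track here is created only for illustration; in your case, reuse the AudioTrack you configured above):

void playSamples(short[] samples) {
    int minSize = AudioTrack.getMinBufferSize(44100,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
            Math.max(minSize, samples.length * 2), AudioTrack.MODE_STREAM);
    track.play();
    track.write(samples, 0, samples.length); // one short per 16-bit sample
    track.stop();
    track.release();
}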
But things get weird when ENCODING_PCM_8BIT is used (see AudioFormat).
In this case each sample is encoded by one byte. But each byte is unsigned: its value is between 0 and 255, while 128 represents the middle point.
Java has no unsigned byte format; the byte format is signed, i.e. the values -128...-1 actually represent the values 128...255. So you have to be careful when converting to the byte array, otherwise the result will be noise with a barely recognizable source sound.
short[] input16 = ... // the source 16-bit audio data;
byte[] output8 = new byte[input16.length];
for (int i = 0 ; i < input16.length ; i++) {
// To convert 16 bit signed sample to 8 bit unsigned
// We add 128 (for rounding), then shift it right 8 positions
// Then add 128 to be in range 0..255
int sample = ((input16[i] + 128) >> 8) + 128;
if (sample > 255) sample = 255; // strip out overload
output8[i] = (byte)(sample); // cast to signed byte type
}
To perform the backward conversion, the same principle applies: each single sample is converted to exactly one sample of the output signal:
byte[] input8 = ... // the source 8-bit unsigned audio data;
short[] output16 = new short[input8.length];
for (int i = 0 ; i < input8.length ; i++) {
// to convert signed byte back to unsigned value just use bitwise AND with 0xFF
// then we need subtract 128 offset
// Then, just scale up the value by 256 to fit 16-bit range
output16[i] = (short)(((input8[i] & 0xFF) - 128) * 256);
}
The issue of not being able to convert data from the byte array to a short array was resolved by using bitwise operators instead of ByteBuffer. It could be due to not setting the correct parameters on the ByteBuffer, or it may simply not be suitable for such a conversion.
Nevertheless, implementing the conversion using bitwise operators resolved the problem. Since the original question has been resolved by this approach, please consider this the final answer.
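For reference, here is a minimal sketch of such a bitwise conversion (the method name is my own; little-endian byte order is assumed, matching the original code):

public static short[] bytesToShorts(byte[] b) {
    short[] audioBuffer = new short[b.length / 2];
    for (int i = 0; i < audioBuffer.length; i++) {
        // mask the low byte to undo sign extension, then OR in the high byte
        audioBuffer[i] = (short) ((b[2 * i] & 0xFF) | (b[2 * i + 1] << 8));
    }
    return audioBuffer;
}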
I will raise a separate topic for the playback issue.
Thank you for all your support.

How to fix .gif with corrupted alpha channel (stuck pixels) collected with Graphicsmagick?

I want to convert an .avi with alpha channel into a .gif.
Firstly, I use
ffmpeg -i source.avi -vf scale=720:-1:flags=lanczos,fps=10 frames/ffout%03d.png
to convert the .avi to a sequence of .png's with alpha channel.
Then, I use
gm convert -loop 0 frames/ffout*.png output.gif
to collect a .gif.
But it seems that pixels of the output.gif just get stuck when something opaque is rendered on top of the transparent areas.
Here's an example:
As you can see, the hearts and explosions do not get derendered.
P.S.
The FFmpeg output (the collection of .png's) is fine.
I do not use GraphicsMagick, but your GIF has image disposal mode 0 (no disposal specified), so old frames are never cleared during animation. You should use disposal mode 2 (clear with background) or 3 (restore previous image); both work for your GIF. The disposal mode is stored in the Packed value of the graphics control extension of each frame.
So either configure your encoder to use disposal = 2 or 3, or write a script that directly stream-copies your GIF and changes the Packed value of the graphics control extension chunk frame by frame, similar to this:
GIF Image getting distorted on interlacing
If you need help with the script then take a look at:
How to find where does Image Block start in GIF images?
Decode data bytes of GIF87a raster data stream
When I tried this (C++ script) on your GIF using disposal 2 I got this result:
The disposal is changed in C++ like this:
struct __gfxext
{
    BYTE Introducer;  /* Extension Introducer (always 21h) */
    BYTE Label;       /* Graphic Control Label (always F9h) */
    BYTE BlockSize;   /* Size of remaining fields (always 04h) */
    BYTE Packed;      /* Method of graphics disposal to use */
    WORD DelayTime;   /* Hundredths of seconds to wait */
    BYTE ColorIndex;  /* Transparent Color Index */
    BYTE Terminator;  /* Block Terminator (always 0) */

    __gfxext() {}
    __gfxext(__gfxext& a) { *this = a; }
    ~__gfxext() {}
    __gfxext* operator = (const __gfxext *a) { *this = *a; return this; }
    /*__gfxext* operator = (const __gfxext &a) { ...copy... return this; }*/
};
__gfxext p;
p.Packed&=255-(7<<2); // clear old disposal and leave the rest as is
p.Packed|= 2<<2; // set new disposal=2 (the first 2 is disposal , the <<2 just shifts it to the correct position in Packed)
It is a good idea to leave the other bits of Packed as they are, because no one knows what could be encoded there in the future...

Read and convert Monochrome bitmap file into CByteArray MFC

In my MFC project, I need to read a monochrome bitmap file and convert it into a CByteArray. While reading the bitmap file using the CFile class in read mode, it seems to give a greater length than the original.
My MFC code:
CFile ImgFile;
CFileException FileExcep;
CByteArray* pBinaryImage = NULL;
CString strFilePath;
strFilePath.Format("%s", "D:\\Test\\Graphics0.bmp");
if (!ImgFile.Open((LPCTSTR)strFilePath, CFile::modeReadWrite, &FileExcep))
{
    return NULL;
}
pBinaryImage = new CByteArray();
pBinaryImage->SetSize(ImgFile.GetLength());
// get the byte array's underlying buffer pointer
LPVOID lpvDest = pBinaryImage->GetData();
// perform a massive copy from the file to the byte array
if (lpvDest)
{
    ImgFile.Read(lpvDest, pBinaryImage->GetSize());
}
ImgFile.Close();
Note: the file length is set as the byte array's size.
I checked with C# using the following sample:
Bitmap bmpImage = (Bitmap)Bitmap.FromFile("D:\\Test\\Graphics0.bmp");
ImageConverter ic = new ImageConverter();
byte[] ImgByteArray = (byte[])ic.ConvertTo(bmpImage, typeof(byte[]));
While comparing the sizes of "pBinaryImage" and "ImgByteArray", they are not the same, and I guess the "ImgByteArray" size is the correct one, since from this array value I can get my original bitmap back.
As I noted in the comments, by reading the whole file with CFile you are also reading the bitmap headers, which will be corrupting your data.
Here is an example function, showing how to load a monochrome bitmap from file, wrap it in MFC's CBitmap object, query the dimensions etc. and read the pixel data into an array:
void LoadMonoBmp(LPCTSTR szFilename)
{
// load bitmap from file
HBITMAP hBmp = (HBITMAP)LoadImage(NULL, szFilename, IMAGE_BITMAP, 0, 0,
LR_LOADFROMFILE | LR_MONOCHROME);
// wrap in a CBitmap for convenience
CBitmap *pBmp = CBitmap::FromHandle(hBmp);
// get dimensions etc.
BITMAP pBitMap;
pBmp->GetBitmap(&pBitMap);
// allocate a buffer for the pixel data
unsigned int uBufferSize = pBitMap.bmWidthBytes * pBitMap.bmHeight;
unsigned char *pPixels = new unsigned char[uBufferSize];
// load the pixel data
pBmp->GetBitmapBits(uBufferSize, pPixels);
// ... do something with the data ....
// release pixel data
delete [] pPixels;
pPixels = NULL;
// free the bmp
DeleteObject(hBmp);
}
The BITMAP structure will give you information about the bitmap (MSDN here) and, for a monochrome bitmap, the bits will be packed into the bytes you read. This may be another difference from the C# code, where it is possible that each bit is unpacked into a whole byte. In the MFC version, you will need to interpret this data correctly.

Convert String Data to Binary Image

I am using Qt and I am new to it. I am getting a stream of string data from a server on a particular port.
I am receiving 1s and 0s, one line at a time, like this:
1111110001111111111111111111100000000000011111111111
After receiving n such lines, I need to create a binary image file from the data: 1 for white and 0 for black.
How do I do this? I have already implemented receiving the data, but I have no idea how to convert this data to an image.
Please help me find a solution to this problem.
You must know the dimensions of your image (for example NxM).
According to the dimensions of the image, you must parse the string you received (think about how to write a correct loop to get an NxM 2D array from a 1D array consisting of NxM elements).
For holding your image data you can use the QImage class. Create a QImage object, passing height and width to the constructor, and use its methods to fill the image. For setting the color of a pixel, you can use QImage's method setPixel(int x, int y, uint index_or_rgb).
That's all. Good luck!
You may try doing it this way:
QImage Image(500, 500, QImage::Format_Indexed8);
for (int i = 0; i < 500 /*image_width*/; i++)
{
    for (int j = 0; j < 500 /*image_height*/; j++)
    {
        QRgb value;
        if (data[j * 500 + i] == 0) /*the data array should contain all the information, row by row*/
        {
            value = qRgb(0, 0, 0);
            Image.setPixel(i, j, qGray(value));
        }
        else
        {
            value = qRgb(255, 255, 255);
            Image.setPixel(i, j, qGray(value));
        }
    }
}
From Qt docs:
"Because QImage is a QPaintDevice subclass, QPainter can be used to draw directly onto images."
So, you can create a QImage sized 500x500
QImage image(500, 500, QImage::Format_RGB32); // a format must be specified
and then draw on this image
QPainter p(&image);
p.drawPoint(0,0);
p.drawPoint(0,1);
etc;
Another way is to save your bit stream into a char[] array and simply create a QImage with the format Format_Mono or Format_MonoLSB.
QImage image = QImage(bitData, 500, 500, QImage::Format_Mono);
Thanks for the help, I created the image. Here is my code:
QImage testClass::GetImage(QString rdata, int iw, int ih)
{
    QImage image(iw, ih, QImage::Format_ARGB32);
    for (int i = 0; i < ih; i++)
    {
        for (int j = 0; j < iw; j++)
        {
            if (rdata.at((i * iw) + j) == '0')
                image.setPixel(QPoint(j, i), qRgb(0, 0, 0));
            else
                image.setPixel(QPoint(j, i), qRgb(255, 255, 255));
        }
    }
    return image;
}

Convert a string of bytes to cv::mat

I need to implement a function that receives a string containing the bytes of an image (received via boost socket connection) and converts the info into an OpenCV cv::Mat.
I also know the width and height of the image and its size in bytes. My function looks like this:
void createImageFromBytes(const std::string& name, std::pair<int,int> dimensions, const std::string& data)
{
    int width, height;
    width = dimensions.first;
    height = dimensions.second;
    // convert data to cv::Mat image
    std::string filepng = DATA_PATH "/" + name + ".png";
    imwrite(filepng, image);
}
Which is the best method for doing this? Does OpenCV have a constructor for Mat from a string?
OpenCV's Mat has a constructor from vector<byte>, but this is not so intuitive. You need to convert from string to vector in this way first:
std::vector<byte> vectordata(data.begin(),data.end());
Then you can create a cv::Mat from the vector:
cv::Mat data_mat(vectordata,true);
You also need to decode the image (check the documentation for which types are allowed, e.g. png or jpg, depending on the OpenCV version):
cv::Mat image(cv::imdecode(data_mat,1)); //put 0 if you want greyscale
Now you can check if the resulting size of the image is the same as the one you sent:
cout<<"Height: " << image.rows <<" Width: "<<image.cols<<endl;
It is easy to trip up here, as the image may contain null characters, and any C function handling the data as a string will treat a null as the end of the string.
Read the image
cv::Mat image;
image = cv::imread("../test/image.png", CV_LOAD_IMAGE_COLOR);
Convert to Bytes (this is just working code, not checked for leaks)
int dataSize = image.total() * image.elemSize();
//convert to bytes
std::vector<char> vec(dataSize);
memcpy(&vec[0], reinterpret_cast<char *>(image.data), dataSize);
std::string test2(vec.begin(), vec.end());
Test and see if conversion works
//test
cv::Mat data_mat(height,width,CV_8UC3,const_cast<char*>(test2.c_str()));
imwrite("out2.png", data_mat);
If the data in the string is raw pixels (rather than a Jpeg/png etc) you can create the cv::mat directly
// assuming an RGB image in bytes
cv::Mat mat(height,width,CV_8UC3,string.data());
Here is my improved version of Jav_Rock's solution. The problem there is that vector<byte> is unclear to use (the byte type is not defined in standard C++; I didn't find it), so use vector<unsigned char> instead. Here is example code:
int func(char *pfile)
{
    string strfile = pfile;
    std::vector<unsigned char> vectordata(strfile.begin(), strfile.end());
    Mat data_mat(vectordata, true);
    Mat graySacleFrame = imdecode(data_mat, 0); // PGM image
    ...
}
