Linux SCSI ATA cmd write or read sometimes works and sometimes doesn't when transfer length is over 1345 - linux

My code is as follows:
unsigned char cmd[16];
cmd[0] = WRITE_16;
//lba is start address
cmd[2] = (lba >> 54) & 0xFF;
cmd[3] = (lba >> 48) & 0xFF;
cmd[4] = (lba >> 40) & 0xFF;
cmd[5] = (lba >> 32) & 0xFF;
cmd[6] = (lba >> 24) & 0xFF;
cmd[7] = (lba >> 16) & 0xFF;
cmd[8] = (lba >> 8) & 0xFF;
cmd[9] = lba & 0xFF;
//len is transfer length
cmd[10] = (len >> 24) & 0xFF;
cmd[11] = (len >> 16) & 0xFF;
cmd[12] = (len >> 8) & 0xFF;
cmd[13] = len & 0xFF;
void* buffer;
buffer = malloc(len*512);
__u64 buffer_len = 512*len;
io_hdr.interface_id = 'S';
io_hdr.cmd_len = sizeof(cmd);
io_hdr.mx_sb_len = sizeof(sense);
io_hdr.dxfer_direction = SG_DXFER_TO_FROM_DEV;
io_hdr.dxfer_len = buffer_len;
io_hdr.dxferp = buffer;
io_hdr.cmdp = cmd;
io_hdr.sbp = sense;
io_hdr.timeout = 30000;
ioctl(fd, SG_IO, &io_hdr);
If I send a cmd with a transfer length over 1345, it sometimes works and sometimes it doesn't. As the transfer length grows, the portion that doesn't work grows too. There's no UART log or kernel log when the cmd doesn't work.
P.S. If the cmd doesn't work, errno is 22 (Invalid argument).

You're not initializing the bytes in the SCSI CDB to zero, so sometimes there's trash in cmd[1], cmd[14], and cmd[15]. Add a call to memset up at the top, or initialize the array with = { };.
Also, I know a bunch of examples use this technique for initializing the command structure, but man, it's really going to drive you nuts. I recommend defining an __attribute__ ((packed)) structure for the CDB that uses bitfields.
Lastly, the line cmd[2] = (lba >> 54) & 0xFF; should shift lba by 56 bits, not 54 bits.
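For instance, here is a minimal sketch of those two fixes together (zero-initialized CDB plus the 56-bit shift), assuming lba and len keep the same types they have in your code:
unsigned char cmd[16] = { 0 };   /* every CDB byte starts at zero */
cmd[0] = WRITE_16;
/* lba is the start address, big-endian in bytes 2..9 */
cmd[2] = (lba >> 56) & 0xFF;     /* 56, not 54 */
cmd[3] = (lba >> 48) & 0xFF;
cmd[4] = (lba >> 40) & 0xFF;
cmd[5] = (lba >> 32) & 0xFF;
cmd[6] = (lba >> 24) & 0xFF;
cmd[7] = (lba >> 16) & 0xFF;
cmd[8] = (lba >> 8) & 0xFF;
cmd[9] = lba & 0xFF;
/* len is the transfer length, big-endian in bytes 10..13 */
cmd[10] = (len >> 24) & 0xFF;
cmd[11] = (len >> 16) & 0xFF;
cmd[12] = (len >> 8) & 0xFF;
cmd[13] = len & 0xFF;
/* cmd[1], cmd[14], and cmd[15] now reliably stay 0 */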

Related

c++ visual studio 2017 reading png dimension through header data

Hello, I have a problem reading out the correct PNG dimension size:
unsigned width = 0;
unsigned height = 0;
bool output_json = false;
std::ifstream in(filepath, std::ios_base::binary | std::ios_base::in);
if (in.is_open())
{
in.seekg(16, std::ios_base::cur);
in.read((char *)&width, 4);
in.read((char *)&height, 4);
_byteswap_uint64(width);
_byteswap_uint64(height);
output_json = true;
in.close();
}
The width should be 155, but the output is 2600468480.
The height should be 80, but the output is 1342177280.
There is a problem of endianness.
2600468480 is, in hexadecimal form, 0x9B000000; 155 is 0x9B.
So the order of the least significant / most significant bytes is switched. (Note that _byteswap_uint64 is for 64-bit values and returns the swapped value rather than modifying its argument, so your two calls have no effect; with MSVC you would want width = _byteswap_ulong(width);.)
Try swapping the bytes yourself (and do the same for height):
unsigned w0;
in.read((char *)&w0, 4);
width = ((w0 >> 24) & 0xff) |
((w0 << 8) & 0xff0000) |
((w0 >> 8) & 0xff00) |
((w0 << 24) & 0xff000000);
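If you'd rather not swap after the fact, you can also read the four bytes into an unsigned char array and assemble the big-endian value directly; a minimal sketch (be32_from_bytes is just a hypothetical helper name):
/* Hypothetical helper: PNG stores multi-byte integers most significant byte first */
unsigned be32_from_bytes(const unsigned char b[4])
{
    return ((unsigned)b[0] << 24) |
           ((unsigned)b[1] << 16) |
           ((unsigned)b[2] << 8) |
           (unsigned)b[3];
}
/* usage: unsigned char raw[4]; in.read((char *)raw, 4); width = be32_from_bytes(raw); */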

Parse Integer from Buffer

I created this function to parse an Integer from a Buffer:
var makeInt = function(b1, b2, b3, b4) {
return ((b1 << 0) & 0x000000FF) +
((b2 << 8) & 0x0000FF00) +
((b3 << 16) & 0x00FF0000) +
((b4 << 24) & 0xFF000000);
}
From a buffer I read the Integer like that:
var buffer = new Buffer([0,0,15,47,0,0,0,64,0,0])
console.log(makeInt(buffer[3],buffer[2],buffer[1],buffer[0]))
=> 3887
What's the official Buffer function from https://nodejs.org/api/buffer.html that does the same like my makeInt function?
I tried https://nodejs.org/api/buffer.html#buffer_buf_readuintbe_offset_bytelength_noassert
But buf.readUIntLE(offset, byteLength[, noAssert]) returns:
buffer.readUIntLE(0, 3)
=> 983040
Why is it returning 983040 instead of 3887?
Thanks
Your makeInt uses 4 bytes, but you passed a byteLength of 3 to readUIntLE. There are two ways to store a number - little-endian and big-endian - and your call, which treats buffer[0] as the most significant byte, implements a big-endian read. Use readUIntBE with a byteLength of 4, which returns 3887:
var buffer = new Buffer([0,0,15,47,0,0,0,64,0,0]);
console.log(buffer.readUIntBE(0, 4));

Two questions with base64 encoding

I am confused about how to convert a const char * to base64, and have 2 questions:
Question #1: How do I define the length of the output string so that it exactly matches the length of the base64 output? I have found code from the Apple open source site, shown below: http://www.opensource.apple.com/source/QuickTimeStreamingServer/QuickTimeStreamingServer-452/CommonUtilitiesLib/base64.c
Alternatively, I could directly use "atlenc.h" in VC++. If the length of coded_dst that I have defined is smaller than what is actually needed, the program may crash.
int Base64encode(char *coded_dst, const char *plain_src, int len_plain_src)
{
const char basis_64[] ="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
int i;
char *p;
p = coded_dst;
for (i = 0; i < len_plain_src - 2; i += 3) {
*p++ = basis_64[(plain_src[i] >> 2) & 0x3F];
*p++ = basis_64[((plain_src[i] & 0x3) << 4) |
((int) (plain_src[i + 1] & 0xF0) >> 4)];
*p++ = basis_64[((plain_src[i + 1] & 0xF) << 2) |
((int) (plain_src[i + 2] & 0xC0) >> 6)];
*p++ = basis_64[plain_src[i + 2] & 0x3F];
}
if (i < len_plain_src) {
*p++ = basis_64[(plain_src[i] >> 2) & 0x3F];
if (i == (len_plain_src - 1)) {
*p++ = basis_64[((plain_src[i] & 0x3) << 4)];
*p++ = '=';
}
else {
*p++ = basis_64[((plain_src[i] & 0x3) << 4) |
((int) (plain_src[i + 1] & 0xF0) >> 4)];
*p++ = basis_64[((plain_src[i + 1] & 0xF) << 2)];
}
*p++ = '=';
}
*p++ = '\0';
return p - coded_dst;
}
Question #2: As we all know, the byte type in C++ is unsigned char; how do I convert a char * to an unsigned char *?
thanks
regards
Ken
The design of your function, based on the signature, tells me it's up to the caller to provide a sufficient buffer for the output. This is unsafe in your example because the caller isn't telling the function how large that buffer is. Your function has no way to limit what it writes to coded_dst to the buffer provided, so you should add, at the least, a parameter for that.
As such, you would need to check as you loop to be sure p, a pointer into coded_dst, stays within that limit, returning an error to the caller if there's insufficient room.
That said, notice how many increments of p occur for every 3 source bytes processed. The ratio is 3 to 4: for every 3 bytes that go into that loop, 4 characters come out. So the calculation of the required length starts with
( len_plain_src / 3 ) * 4;
Now consider r = len_plain_src % 3. If r is zero, the loop has consumed everything and the tail block emits nothing. If r is 1 or 2, the tail block emits 4 more characters (the leftover data plus '=' padding).
After that, you append a zero terminator, so add one more byte.
Looking closely at the tail, the '=' padding actually comes out right: the code emits "==" when one input byte remains and "=" when two remain, which is standard base64.
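Put together, a small sketch of that size calculation which the caller could use to allocate coded_dst up front (base64_encoded_size is just a hypothetical helper name):
#include <stddef.h>

/* Hypothetical helper: bytes the caller must provide for coded_dst, given the
 * Base64encode() above - 4 output characters per started group of 3 input
 * bytes, plus 1 for the terminating '\0'. */
size_t base64_encoded_size(size_t len_plain_src)
{
    return ((len_plain_src + 2) / 3) * 4 + 1;
}
For example, 4 input bytes need ((4 + 2) / 3) * 4 + 1 = 9 bytes.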
Now, to handle the unsigned char, you could change the declaration and initial assignment of p with,
unsigned char * p = (unsigned char *) coded_dst;
At that point it would also be more convenient to declare basis_64 as unsigned char.

How to multiply hex color codes?

I want to change color from 0x008000 (green) to 0x0000FF (blue).
If I multiply 0x008000 * 256 = 0x800000 (Google search acts as a calculator).
I need to find the correct multiplier so the result would be 0x0000FF.
To answer people below - I am doing this in order to make a color transition on a rectangle in pixi.js.
From what I've gathered, an RGB color code is divided into 3 parts - red, green and blue - each on a scale of 0-FF in hex, or 0-255 in decimal. But how do I multiply correctly to get the desired result?
If you want a linear change from one color to another, I recommend something like this:
int startColor = 0x008000;
int endColor = 0x0000FF;
int startRed = (startColor >> 16) & 0xFF;
int startGreen = (startColor >> 8) & 0xFF;
int startBlue = startColor & 0xFF;
int endRed, endGreen, endBlue; //same code
int steps = 24;
int[] result = new int[steps];
for(int i=0; i<steps; i++) {
int newRed = ( (steps - 1 - i)*startRed + i*endRed ) / (steps - 1);
int newGreen, newBlue; //same code
result[i] = newRed << 16 | newGreen << 8 | newBlue;
}
This is for JavaScript:
var startColor = 0x008000;
var endColor = 0x0000FF;
var startRed = (startColor >> 16) & 0xFF;
var startGreen = (startColor >> 8) & 0xFF;
var startBlue = startColor & 0xFF;
var endRed = (endColor >> 16) & 0xFF;
var endGreen = (endColor >> 8) & 0xFF;
var endBlue = endColor & 0xFF;
var steps = 24;
var result = [];
for (var i = 0; i < steps; i++) {
var newRed = ((steps - 1 - i) * startRed + i * endRed) / (steps - 1);
var newGreen = ((steps - 1 - i) * startGreen + i * endGreen) / (steps - 1);
var newBlue = ((steps - 1 - i) * startBlue + i * endBlue) / (steps - 1);
var comb = newRed << 16 | newGreen << 8 | newBlue;
console.log(i + " -> " + comb.toString(16));
result.push(comb);
}
console.log(result);

Flex ActionScript 3 How to Transform part of String to Big Endian Integer?

I currently have tried something like this, but unfortunately it doesn't compile.
public function processPacket(event:PacketEvent):void {
var packetType:int = event.packetType;
var packetData:String = event.packetData;
var size:int = ((((byte)packetData.charAt(0)) & 0xff) << 24) | ((((byte)packetData.charAt(1)) & 0xff) << 16) |
((((byte)packetData.charAt(2)) & 0xff) << 8) | (((byte)packetData.charAt(3)) & 0xff);
//...
//TODO: Retrieve String based on the size above.
// processedSize += size;
//Then if(packetData.length > processedSize) size = old string +1
}
Error I get
C:\src\flash.mxml(111): Error: Syntax error: expecting rightparen before packetData.
var size:int = ((((byte)packetData.charAt(0)) & 0xff) <<
24) | ((((byte)packetData.charAt(1)) & 0xff) << 16) |
C:\src\flash.mxml(111): Error: Syntax error: expecting semicolon before rightparen.
var size:int = ((((byte)packetData.charAt(0)) & 0xff) <<
24) | ((((byte)packetData.charAt(1)) & 0xff) << 16) |
Is there any function which can do this, maybe in one line as well, maybe by String index?
I know I'm not using ByteArrays, which probably would have had no problem. But in the Socket I do something like this:
recvPacketData = socket.readUTFBytes(recvPacketSize);
So I don't have access to any ByteArrays at this point.
Got it to compile. It turns out that if you want to convert a String back into a ByteArray, you just have to fill the ByteArray with ByteArray.writeUTFBytes(String).
Here is my example; I got it to compile like this:
public function processPacket(event:PacketEvent):void {
var packetType:int = event.packetType;
var packetData:String = event.packetData;
var packetDataBytes:ByteArray = new ByteArray();
packetDataBytes.writeUTFBytes(packetData);
var arguments:Array = new Array();
var size:int = 4; //4 bytes for readInt() + 1 for possible String
while(packetDataBytes.length > size) {
size = packetDataBytes.readInt();
if(size <= 0 || size > 2000000) break;
arguments.push(packetDataBytes.readUTFBytes(size));
}
switch(packetType) {
//...
}
}
