Setting values of bytearray - visual-c++

I have a BYTE data[3]. The first element, data[0], has to be a BYTE with very specific values, which are as follows:
typedef enum
{
    SET_ACCURACY = 0x01,
    SET_RETRACT_LIMIT = 0x02,
    SET_EXTEND_LIMT = 0x03,
    SET_MOVEMENT_THRESHOLD = 0x04,
    SET_STALL_TIME = 0x05,
    SET_PWM_THRESHOLD = 0x06,
    SET_DERIVATIVE_THRESHOLD = 0x07,
    SET_DERIVATIVE_MAXIMUM = 0x08,
    SET_DERIVATIVE_MINIMUM = 0x09,
    SET_PWM_MAXIMUM = 0x0A,
    SET_PWM_MINIMUM = 0x0B,
    SET_PROPORTIONAL_GAIN = 0x0C,
    SET_DERIVATIVE_GAIN = 0x0D,
    SET_AVERAGE_RC = 0x0E,
    SET_AVERAGE_ADC = 0x0F,
    GET_FEEDBACK = 0x10,
    SET_POSITION = 0x20,
    SET_SPEED = 0x21,
    DISABLE_MANUAL = 0x30,
    RESET = 0xFF,
} TYPE_CMD;
As is, if I set data[0] = SET_ACCURACY it doesn't set it to 0x01, it sets it to 1, which is not what I want. data[0] must take the value 0x01 when I set it equal to SET_ACCURACY. How do I make it do this not only for SET_ACCURACY, but for all the other values as well?
EDIT: Actually this works... I had a different error in my code that I attributed to this. Sorry!
Thanks!

"data[0]=SET_ACCURACY doesn't set it to 0x01, it sets it to 1"
It assigns the value of SET_ACCURACY to data[0], which means the bits 00000001 are stored in memory at address &data[0]. 0x01 is just the hexadecimal representation of that byte and 1 is its decimal representation; they are the same value.
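A quick way to convince yourself of this (a tiny demonstration in Python purely for brevity; the point is language-independent and applies equally to the C++ enum above):
SET_ACCURACY = 0x01                  # same literal as in the enum above
print(SET_ACCURACY == 1)             # True - 0x01 and 1 are the same value
print(hex(SET_ACCURACY))             # '0x1' - hexadecimal spelling
print(format(SET_ACCURACY, '08b'))   # '00000001' - the bits actually stored in data[0]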

Related

Casting to an 8-byte variable takes only 4 bytes

I have a structure that contains two fields:
struct ggg {
    unsigned long long int a;
    unsigned int b;
};
Field a should be 8 bytes long, while b is 4 bytes long.
I'm trying to overlay it on an array of bytes:
unsigned char c[8 + 4] = { 0x01, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00,
                           0x03, 0x00, 0x00, 0x00, };
ggg* g = (ggg *)c;
char tt[1024];
sprintf(tt, "a=%d b=%d ", g->a, g->b);
I got this result in the tt string:
a=1 b=2
It looks like a takes up only 4 bytes instead of 8 when casting. Why?
The problem is not the cast but your sprintf format specifiers. You are using %d, which means signed int and is typically 4 bytes.
Try changing the format string to "a=%llu b=%u" and you are more likely to get the expected output.
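If it helps to see what those 12 bytes actually encode, here is a quick check with Python's struct module; it assumes a little-endian machine with a at offset 0 and b at offset 8 (no padding before b):
import struct

# The 12 initialiser bytes from the question.
raw = bytes([0x01, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00,
             0x03, 0x00, 0x00, 0x00])

# '<' = little-endian, 'Q' = unsigned 64-bit (a), 'I' = unsigned 32-bit (b)
a, b = struct.unpack('<QI', raw)
print(a, b)   # 8589934593 3  (a == 0x0000000200000001, b == 0x00000003)
With the corrected format string you should see those values; the original %d was only ever formatting 4 of a's 8 bytes.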

How do I reduce the color palette of an image in Python 3?

I've seen answers for Python 2, but support for it has ended and the Image module for Python 2 is incompatible with aarch64.
I'm just looking to convert some images to full CGA color, without the hassle of manually editing the image.
Thanks in advance!
Edit: I should mention that I'm still pretty new to Python, and also that I'm trying to use colors from a typical CGA adapter.
Here's one way to do it with PIL - you may want to check that I have transcribed the list of colours accurately from here:
#!/usr/bin/env python3
from PIL import Image
# Create new 1x1 palette image
CGA = Image.new('P', (1,1))
# Make a CGA palette and push it into image
CGAcolours = [
    0x00, 0x00, 0x00,
    0x00, 0x00, 0xaa,
    0x00, 0xaa, 0x00,
    0x00, 0xaa, 0xaa,
    0xaa, 0x00, 0x00,
    0xaa, 0x00, 0xaa,
    0xaa, 0x55, 0x00,
    0xaa, 0xaa, 0xaa,
    0x55, 0x55, 0x55,
    0x55, 0x55, 0xff,
    0x55, 0xff, 0x55,
    0x55, 0xff, 0xff,
    0xff, 0x55, 0x55,
    0xff, 0x55, 0xff,
    0xff, 0xff, 0x55,
    0xff, 0xff, 0xff]
CGA.putpalette(CGAcolours)
# Open our image of interest
im = Image.open('swirl.jpg')
# Quantize to our lovely CGA palette, without dithering
res = im.quantize(colors=len(CGAcolours), palette=CGA, dither=Image.Dither.NONE)
res.save('result.png')
That transforms the original swirl.jpg into a 16-colour CGA-palette version, saved as result.png.
Or, if you don't really want to write any Python, you can do it on the command-line with ImageMagick in a highly analogous way:
# Create 16-colour CGA swatch in file "cga16.gif"
magick xc:'#000000' xc:'#0000AA' xc:'#00AA00' xc:'#00AAAA' \
xc:'#AA0000' xc:'#AA00AA' xc:'#AA5500' xc:'#AAAAAA' \
xc:'#555555' xc:'#5555FF' xc:'#55FF55' xc:'#55FFFF' \
xc:'#FF5555' xc:'#FF55FF' xc:'#FFFF55' xc:'#FFFFFF' \
+append cga16.gif
That creates cga16.gif, a 16x1 swatch containing the CGA palette.
# Remap colours in "swirl.jpg" to the CGA palette
magick swirl.jpg +dither -remap cga16.gif resultIM.png
Doubtless the code could be adapted for EGA, VGA and the other early PC hardware palettes... or even to the Rubik's Cube, Bootstrap or Google palettes.

BPF filter fails

Can anyone suggest why this (classic) BPF program sometimes lets non-DHCP-response packets through:
# Load the Ethertype field
BPF_LD | BPF_H | BPF_ABS 12
# And reject the packet if it's not 0x0800 (IPv4)
BPF_JMP | BPF_JEQ | BPF_K 0x0800 0 8
# Load the IP protocol field
BPF_LD | BPF_B | BPF_ABS 23
# And reject the packet if it's not 17 (UDP)
BPF_JMP | BPF_JEQ | BPF_K 17 0 6
# Check that the packet has not been fragmented
BPF_LD | BPF_H | BPF_ABS 20
BPF_JMP | BPF_JSET | BPF_K 0x1fff 4 0
# Load the IP header length field
BPF_LDX | BPF_B | BPF_MSH 14
# And load that offset + 16 to get the UDP destination port
BPF_LD | BPF_IND | BPF_H 16
# And reject the packet if the destination port is not 68
BPF_JMP | BPF_JEQ | BPF_K 68 0 1
# Accept the frame
BPF_RET | BPF_K 1500
# Reject the frame
BPF_RET | BPF_K 0
It doesn't let every frame through, but under heavy network load it fails quite often. I'm testing it with this Python 3 program:
import ctypes
import struct
import socket

ETH_P_ALL = 0x0003
SO_ATTACH_FILTER = 26
SO_ATTACH_BPF = 50

filters = [
    0x28, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x15, 0x00, 0x00, 0x08, 0x00, 0x08, 0x00, 0x00,
    0x30, 0x00, 0x00, 0x00, 0x17, 0x00, 0x00, 0x00, 0x15, 0x00, 0x00, 0x06, 0x11, 0x00, 0x00, 0x00,
    0x28, 0x00, 0x00, 0x00, 0x14, 0x00, 0x00, 0x00, 0x45, 0x00, 0x04, 0x00, 0xff, 0x1f, 0x00, 0x00,
    0xb1, 0x00, 0x00, 0x00, 0x0e, 0x00, 0x00, 0x00, 0x48, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
    0x15, 0x00, 0x00, 0x01, 0x44, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0xdc, 0x05, 0x00, 0x00,
    0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
]
filters = bytes(filters)
b = ctypes.create_string_buffer(filters)
mem_addr_of_filters = ctypes.addressof(b)
pf = struct.pack("HL", 11, mem_addr_of_filters)
pf = bytes(pf)

def main():
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    sock.bind(("eth0", ETH_P_ALL))
    sock.setsockopt(socket.SOL_SOCKET, SO_ATTACH_FILTER, pf)
    # sock.send(req)
    sock.settimeout(1)
    try:
        data = sock.recv(1500)
        if data[35] == 0x43:
            return
        print('Packet got through: 0x{:02x} 0x{:02x}, 0x{:02x}, 0x{:02x}'.format(
            data[12], data[13], data[23], data[35]))  # Ethertype bytes, IP protocol, byte checked above
    except:
        print('Timeout')
        return
    sock.close()

for ii in range(1000):
    main()
If I do this while SCPing a big core file to the host running that script, it doesn't reach the one-second timeout (i.e. an unwanted packet gets through) in the large majority of cases, though not all. Under lighter load - e.g. twiddling around on an ssh link while the socket is receiving - failures are much rarer, and sometimes it gets through 1000 iterations without one.
The host in question is Linux 4.9.0. The kernel has CONFIG_BPF=y.
Edit
For a simpler version of the same question, why does this BPF program let through any packets at all:
BPF_RET | BPF_K 0
Edit 2
The tests above were on an ARM64 machine. I've retested on amd64 / Linux 5.9.0. I still see failures, though not nearly as many.
I got a response on LKML explaining this.
The problem is that the filter is applied as a frame arrives on the interface, not as it is passed to userspace by recv(). So under heavy load, frames arrive between the creation of the socket with socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL)) and the attachment of the filter with sock.setsockopt(socket.SOL_SOCKET, SO_ATTACH_FILTER, pf). Those frames sit in the queue unfiltered; only packets arriving after the setsockopt() call have the filter applied to them.
So once the filter is applied, it's necessary to "drain" any queued frames from the socket before you can rely on the filter.
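A minimal sketch of that workaround, reusing the SO_ATTACH_FILTER constant and the struct.pack("HL", ...) packing from the test program above (the helper name attach_filter_safely and the idea of attaching a temporary reject-everything program while draining are mine, not from the LKML reply):
import ctypes
import socket
import struct

SO_ATTACH_FILTER = 26

# A one-instruction program, BPF_RET | BPF_K 0, i.e. reject everything
# (the same bytes as the last instruction of the filter above).
reject_all = ctypes.create_string_buffer(
    bytes([0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00]))
reject_all_prog = struct.pack("HL", 1, ctypes.addressof(reject_all))

def attach_filter_safely(sock, real_prog):
    # 1. Attach the reject-all program so no new frames can be queued.
    sock.setsockopt(socket.SOL_SOCKET, SO_ATTACH_FILTER, reject_all_prog)
    # 2. Drain whatever was queued between socket creation and now.
    sock.setblocking(False)
    while True:
        try:
            sock.recv(65535)
        except BlockingIOError:
            break
    sock.setblocking(True)
    # 3. Attach the real filter; everything received from here on has passed it.
    sock.setsockopt(socket.SOL_SOCKET, SO_ATTACH_FILTER, real_prog)
Calling attach_filter_safely(sock, pf) immediately after bind() should mean the recv() loop only ever sees frames that matched the DHCP filter.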

ISO 15693: read multiple security blocks

I am trying to modify an existing SCardTransmit() command (C#) that currently reads one security status/block from an ISO 15693 vicinity RFID card (TI Tag-it HF), to one that will retrieve the security status for all 64 blocks on the card. The existing code is as follows:
Byte[] sendHeader = { 0xFF, 0x30, 0x00, 0x03, 0x05, 0x01, 0x00, 0x00, 0x00, Convert.ToByte(blockNum), 0x01 };
Byte[] sendBuffer = new Byte[255]; //Send Buffer in SCardTransmit
int sendbufferlen; //Send Buffer length in SCardTransmit
SmartCardData pack = new SmartCardData();
sendHeader.CopyTo(sendBuffer, 0);
sendbufferlen = Convert.ToByte(sendHeader.Length);
SCardTransmitReceived rxBuf = SmartCardTransmit(sendBuffer, sendbufferlen);
The way I understand it, the bytes preceding Convert.ToByte(blockNum) represent the command to get a security status, followed by the block in question and the number of blocks to read. The only reference I have seen regarding security status reads is section 10.3.4 of the "Contactless Smart Card Reader Dev Guide".
NOTE: SmartCardTransmit takes care of calling SCardTransmit with the correct card handle and the other needed parameters. I'm more interested in the format of the send header that represents a request for security blocks 0 to 63.
Unfortunately, that is not possible. The Get Security Status command of the HID/Omnikey smartcard reader can only retrieve the security status of one block with each command. So regardless of what Le byte you try to provide, the reader will always only return the security status of the block you specify with blockNum.
So the only way to get the security status for all blocks is by iterating through all blocks and issuing the command for each of them:
bool moreBlocks = true;
int blockNum = 0;
while (moreBlocks) {
    byte[] sendBuffer = {
        0xFF, 0x30, 0x00, 0x03,
        0x05,
        0x01,
        0x00, 0x00,
        (byte)((blockNum >> 8) & 0xFF), (byte)(blockNum & 0xFF),
        0x00
    };
    SCardTransmitReceived rxBuf = SmartCardTransmit(sendBuffer, sendBuffer.Length);
    moreBlocks = check_if_rxBuf_ends_with_sw9000(rxBuf);
    ++blockNum;
}
From this document (link) it appears that your tag complies with the ISO 15693 standard. From the documentation you have provided, the command you need is on page 59. From the description of that command, 0x01 is the Version and the following two bytes (0x00 and 0x00) mean reading by block. The byte before Convert.ToByte() seems to be the MSB of the starting block (0x00), and the Convert.ToByte() is its LSB. The next byte is Le in the command description (the number of blocks to be read).
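Putting that byte layout together, here is a hypothetical sketch (in Python, mirroring the C# header above; the helper name is mine and the field labels follow the description in this answer) of how the send header is assembled:
def get_security_status_header(block_num, num_blocks=1):
    # FF 30 00 03 : fixed command bytes (Get Security Status)
    # 05          : presumably Lc - five data bytes follow
    # 01          : Version
    # 00 00       : addressing mode - read by block
    # MSB, LSB    : starting block number
    # Le          : number of blocks to read (the reader appears to honour
    #               only one block per command, as noted in the other answer)
    return bytes([0xFF, 0x30, 0x00, 0x03,
                  0x05,
                  0x01,
                  0x00, 0x00,
                  (block_num >> 8) & 0xFF, block_num & 0xFF,
                  num_blocks & 0xFF])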

Smooth Streaming and AAC-Low Complexity audio codec. Data format?

I am writing a Smooth Streaming client application. On the server side (IIS 7 with Media Services extensions), I have a bunch of ISMV and ISMA files encoded using Expression Encoder pro 4 with the "H.264 IIS Smooth Streaming iPhone WiFi" preset. In a nutshell, it uses the "H.264 baseline" video codec, and the AAC-LC audio codec.
On the client side however is where I'm having problems, specifically with the audio chunks. While I have been able to make sense of the H.264 video stream (it is essentially a sequence of raw NAL units prefixed by their length, without the NAL unit "start code" 0, 0, 0, 1), I still haven't been able to crack what is inside the AAC LC audio stream, i.e. what comes in the "mdat" (Media Data Box) atom. It is most definitely not an MP4 container, but what is it then?
I am pasting below the first 128 (number chosen arbitrarily) bytes of one AAC-LC fragment (MDAT portion only) obtained from the server, in case anyone can figure it out from there.
unsigned char data[128] = {
0x21, 0x09, 0x0A, 0xBF, 0xBF, 0xFF, 0xFF, 0xD5, 0xB1, 0x8D, 0xC4, 0xA1,
0x18, 0x0D, 0x25, 0xC9, 0x2E, 0x49, 0x2E, 0x10, 0x88, 0x91, 0x10, 0x01,
0x13, 0x23, 0x2C, 0x36, 0x25, 0x60, 0x6B, 0x94, 0x8C, 0x74, 0xD7, 0x4A,
0x95, 0xD3, 0x03, 0x91, 0x5B, 0x76, 0xDE, 0x27, 0xC5, 0xB2, 0x4C, 0xCF,
0xEB, 0x3E, 0xDD, 0xFF, 0x22, 0xAF, 0xC3, 0xF8, 0x60, 0x36, 0x49, 0xBC,
0xAE, 0x4D, 0x10, 0x31, 0xC6, 0x28, 0x2A, 0xEB, 0xCA, 0x94, 0x51, 0xD8,
0x61, 0x1B, 0xC6, 0x2A, 0x91, 0x71, 0xE4, 0x8C, 0xF8, 0x19, 0x2C, 0xDE,
0x71, 0xBB, 0xE3, 0xBD, 0x36, 0xB4, 0x45, 0x37, 0x02, 0x61, 0x48, 0x8E,
0x19, 0x80, 0xD5, 0x24, 0x97, 0x24, 0x92, 0x44, 0x08, 0x89, 0x12, 0x00,
0xB3, 0xF8, 0x1E, 0xE2, 0xBD, 0xCD, 0x4E, 0xF7, 0xA9, 0xE2, 0x0E, 0xD8,
0xEA, 0xFA, 0xCF, 0xDB, 0x4E, 0x69, 0x6F, 0xEE
};
After a lot of research and this tip I received on the IIS forums, I was able to figure it out. Basically this is a raw AAC stream, which needs to be wrapped with headers before it can be played back. The simplest and most common header format appears to be ADTS, which consists of adding a 7-byte header in front of each sample.
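For reference, here is a small sketch of building such a header (the helper name is mine; the 44.1 kHz / stereo defaults are assumptions and must be adjusted to match what the stream's manifest actually declares):
def adts_header(frame_len, sample_rate_index=4, channel_config=2):
    # Build the 7-byte ADTS header for one raw AAC-LC frame.
    # sample_rate_index 4 = 44100 Hz, channel_config 2 = stereo.
    full_len = frame_len + 7         # the length field includes the header itself
    profile = 1                      # AAC-LC (audio object type 2, stored as 2 - 1)
    hdr = bytearray(7)
    hdr[0] = 0xFF                    # syncword 0xFFF (high 8 bits)
    hdr[1] = 0xF1                    # syncword low 4 bits, MPEG-4, layer 0, no CRC
    hdr[2] = (profile << 6) | (sample_rate_index << 2) | (channel_config >> 2)
    hdr[3] = ((channel_config & 0x3) << 6) | (full_len >> 11)
    hdr[4] = (full_len >> 3) & 0xFF
    hdr[5] = ((full_len & 0x7) << 5) | 0x1F   # buffer fullness high bits (0x7FF = VBR)
    hdr[6] = 0xFC                             # buffer fullness low bits, one raw block
    return bytes(hdr)
Each raw sample extracted from the mdat box is then emitted as adts_header(len(sample)) + sample, which a standard AAC decoder should accept as a playable stream.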
