Replace parts of file with 0xFF? - linux

I want to modify a file so that every byte from offset 0x3000 to 0xDC000 is replaced with 0xFF; everything else should remain unmodified.
How can I accomplish this with standard Linux tools?

This is jhnc's answer with small improvements (explained at the end of this answer).
#!/bin/bash
overwrite() {
    file="$1"; from="$2"; to="$3"; with="$4"
    yes '' | tr \\n "\\$(printf %o "$with")" |
        dd conv=notrunc bs=1 seek="$((from))" count="$((to-from))" of="$file"
}
In your case you would use the function from above like
overwrite yourFile 0x3000 0xDC000 0xFF
The start and end offsets are both 0-based; the start is inclusive and the end is exclusive. Example:
$ printf 00000 > file
$ overwrite file 1 3 0x57
$ hexdump -C file
00000000 30 57 57 30 30 |0WW00|
00000005
Improvements made:
- Fixed the wrong count=... and explained the interpretation of start and end.
- Allowed filling with null bytes.
If you want to write null bytes (0x00) you cannot use yes $'\x00': the null byte terminates yes's argument string, making the call equivalent to yes ''. Since yes '' | tr -d \\n produces no output, dd would wait indefinitely.
The command presented in this answer lets you fill the region with any byte value from 0x00 to 0xFF.
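As a quick sanity check that the null-byte case works, here is a sketch (demo.bin is a hypothetical file created just for the demonstration; dd's status output is silenced for readability):

```shell
#!/bin/bash
# Same overwrite() as above, repeated so this sketch is self-contained.
overwrite() {
    file="$1"; from="$2"; to="$3"; with="$4"
    yes '' | tr \\n "\\$(printf %o "$with")" |
        dd conv=notrunc bs=1 seek="$((from))" count="$((to-from))" of="$file" 2>/dev/null
}

printf 00000 > demo.bin        # hypothetical 5-byte demo file
overwrite demo.bin 1 3 0x00    # fill bytes 1..2 with null bytes
od -An -tx1 demo.bin           # -> 30 00 00 30 30
```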

If Perl is your option, would you please try the following:
perl -e '
    $start = 0x3000;       # start position to overwrite
    $end   = 0xDC000;      # end position to overwrite
    $file  = "file";       # filename to modify (replace with your filename)
    open(FH, "+< $file") or die "$file";  # open "$file" for both read & write with the filehandle "FH"
    seek(FH, $start, 0);   # jump to the start position
    for ($i = $start; $i < $end; $i++) {  # loop over the overwrite area
        print FH "\xFF";   # replace the byte with 0xFF
    }
    close(FH);
'

How to launch an app in memory on linux system

I read an encrypted file and decrypt it into a buffer. How can I run the decrypted code?
Where should I jump to? In DOS, I know to jump to buffer offset 0x100, the code entry point. What about on Linux?
Thank you,
Xian
Try using tail -c (output last K bytes).
Full answer:
First, convert the offset from hex to dec (drop the "0x" before converting). Then find your input file's size and subtract 0x100:
hex="100"
# convert hex to dec
dec=$(echo "obase=10; ibase=16; ${hex}" | bc)
# input_file size in bytes
file_size=$(stat --printf="%s" input_file)
truncated_file_size=$((file_size - dec))
tail -c "$truncated_file_size" input_file > new_file
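Since bash arithmetic understands hex literals directly, the conversion and subtraction steps can be collapsed into a single tail -c +N call, which starts output at byte N (1-based). A sketch, assuming GNU tail; the demo header content is hypothetical:

```shell
#!/bin/bash
# Skipping 0x100 bytes means starting at byte 0x100 + 1.
head -c 256 /dev/zero > input_file   # a 256-byte (0x100) dummy header
printf 'PAYLOAD' >> input_file       # followed by the bytes we want to keep
tail -c +$((0x100 + 1)) input_file > new_file
cat new_file                         # -> PAYLOAD
```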

how to add a string to a binary operation then save it to a .dat file in bash

I am trying to perform a binary operation, prefix the result with 0b (for example 0b1101), and save it into an output.dat file. But the binary result seems to overwrite the 0b.
#!/bin/bash
binary="0b"
while IFS=" ," read i1 i2 i3   # assigns each line to three separate entities
do
    # checks if it's in binary, decimal or hexadecimal
    if [[ $i1 == *"0b"* ]]; then   # binary
        i1=${i1//$binary/}
        i3=${i3//$binary/}
        if [ "$i2" = "+" ]; then
            echo "0b" >$HOME/Desktop/Homework_1/output.dat
            echo "ibase=2;obase=2; $i1+$i3" | bc -l
            >$HOME/Desktop/Homework_1/output.dat
There are two errors here:
First, you're redirecting both outputs with >. You should be aware that this will clear the content of the target file before writing. To append, use >> as the redirection operator:
echo "ibase=2;obase=2; $i1+$i3" | bc -l >> $HOME/Desktop/Homework_1/output.dat
# ^^
Second, there's another issue with your bc calculation: once ibase=2 is set, a later obase=... is read in the new input base, so specify obase first:
echo "obase=2;ibase=2; $i1+$i3" | bc -l
# specify obase first
You can read more about this issue here: bc: Why does ibase=16; obase=10; FF return FF and not 255?
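Putting both fixes together, a command substitution keeps the 0b prefix and the binary sum on one line. A sketch; the operands are examples and the output path is shortened to output.dat for illustration:

```shell
#!/bin/bash
i1=1101; i3=0010                                  # example operands, 0b already stripped
sum=$(echo "obase=2;ibase=2; $i1+$i3" | bc -l)    # obase first, then ibase
echo "0b${sum}" >> output.dat                     # >> appends instead of overwriting
tail -n 1 output.dat                              # -> 0b1111
```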

Using bash to copy contents of row in one file to specific character location in another file

I'm new to bash and need help copying row 2 onwards from one file into a specific position (150 characters in) in another file. Looking through the forum, I've found a way to insert specific literal text at this position:
sed -i -E 's/^(.{150})/\1specifictextlisted/' destinationfile.txt
However, I can't seem to find a way to copy content from one file into this.
Basically, I'm working with these 2 starting files and need the following output:
File 1 contents:
Sequence
AAAAAAAAAGGGGGGGGGGGCCCCCCCCCTTTTTTTTT
File 2 contents:
chr2
tccccagcccagccccggccccatccccagcccagcctatccccagcccagcctatccccagcccagccccggccccagccccagccccggccccagccccagccccggccccagccccggccccatccccggccccggccccatccccggccccggccccggccccggccccggccccatccccagcccagccccagccccatccccagcccagccccggcccagccccagcccagccccagccacagcccagccccggccccagccccggcccaggcccagcccca
Desired output contents:
chr2
tccccagcccagccccggccccatccccagcccagcctatccccagcccagcctatccccagcccagccccggccccagccccagccccggccccagccccagccccggccccagccccggccccatccccggccccggccccatccccgAAAAAAAAAGGGGGGGGGGGCCCCCCCCCTTTTTTTTTgccccggccccggccccggccccggccccatccccagcccagccccagccccatccccagcccagccccggcccagccccagcccagccccagccacagcccagccccggccccagccccggcccaggcccagcccca
Can anybody put me on the right track to achieving this?
If the file is really huge instead of just 327 characters you might want to use dd:
dd if=chr2 bs=1 count=150 status=none of=destinationfile.txt
tr -d '\n' < Sequence >> destinationfile.txt
dd if=chr2 bs=1 skip=150 seek=189 status=none of=destinationfile.txt
189 is 150+length of Sequence.
You can use awk for that:
awk 'NR==FNR{a=$2;next}{print $1, substr($2, 1, 149) "" a "" substr($2, 150)}' file1 file2
Explanation:
# Total row number == row number in file
# This is only true when processing file1
NR==FNR {
    a=$2    # store column 2 in a variable 'a'
    next    # do not process the block below
}
# Because of the 'next' statement above, this
# block gets only executed for file2
{
    # put 'a' in the middle of the second column and print it
    print $1, substr($2, 1, 149) "" a "" substr($2, 150)
}
I assume that both files contain only a single line, like in your example.
Edit: In comments you said that the files actually span two lines; in that case you can use the following awk script:
# usage: awk -f this_file.awk file1 file2

# True for the second line in each file
FNR==2 {
    # Total line number equals line number in file
    # This is only true while we are processing file1
    if(NR==FNR) {
        insert=$0    # store the string to be inserted in a variable
    } else {
        # Insert the string into the line from file2
        # Assigning to $0 will modify the current line
        $0 = substr($0, 1, 149) "" insert "" substr($0, 150)
    }
}
# Print lines of file2 (line 2 has been modified above)
NR!=FNR
You can use bash and read one char at a time from the file:
i=1
while read -n 1 -r; do
    echo -n "$REPLY"
    let i++
    if [ $i -eq 150 ]; then
        echo -n "AAAAAAAAAGGGGGGGGGGGCCCCCCCCCTTTTTTTTT"
    fi
done < chr2 > destinationfile.txt
This simply reads a char, echoes it and increments the counter. If the counter is 150 it echoes your sequence. You can replace the echo with a cat file | tr -d '\n'. Just make sure to remove any newlines, like here with tr; that is also why I use echo -n, so it doesn't add any.
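The same splice can also be done without a byte-at-a-time loop by composing head -c and tail -c. A sketch; the 150-byte stand-in file is hypothetical, and the split after byte 149 mirrors the counter logic above:

```shell
#!/bin/bash
SEQ="AAAAAAAAAGGGGGGGGGGGCCCCCCCCCTTTTTTTTT"
printf 'x%.0s' {1..150} > chr2       # hypothetical 150-byte stand-in for the sequence line
{
    head -c 149 chr2                 # bytes 1..149
    printf '%s' "$SEQ"               # the inserted sequence
    tail -c +150 chr2                # byte 150 onward
} > destinationfile.txt
```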

Add line feed every 2391 byte

I am using Redhat Linux 6.
I have a file which comes from a mainframe (MVS) with EBCDIC-to-ASCII conversion.
(But I suspect some of the conversion may be wrong.)
Anyway, I know that the record length is 2391 bytes. There are 10 records and the file size is 23910 bytes.
Within each 2391-byte record there are many 0a or 0d characters (not CRLF). I want to replace them with, say, #.
Also, I want to add an LF (i.e. 0a) every 2391 bytes so that the file becomes a normal Unix text file for further processing.
I have tried
dd ibs=2391 obs=2391 if=emyfile of=myfile.new
but this does not work; both files are the same.
I also tried
dd ibs=2391 obs=2391 if=myfile | awk '{print $0}'
but this does not work either.
Can anyone help with this?
Something like this:
#!/bin/bash
for i in {0..9}; do
    dd if=emyfile bs=2391 count=1 skip=$i | LC_CTYPE=C tr '\r\n' '##'
    echo
done > newfile
If your files are longer, you will need more than 10 iterations. I would handle that by running an infinite loop and exiting the loop on error, like this:
#!/bin/bash
i=0
while :; do
    dd if=emyfile bs=2391 count=1 skip=$i | LC_CTYPE=C tr '\r\n' '##'
    [ ${PIPESTATUS[0]} -ne 0 ] && break
    echo
    ((i++))
done > newfile
However, on my iMac under OSX, dd doesn't seem to exit with an error when you go past end of file - maybe try your luck on your OS.
You could try
$ dd bs=2391 cbs=2391 conv=ascii,unblock if=emyfile of=myfile.new
conv=ascii converts from EBCDIC to ASCII. conv=unblock inserts a newline at the end of each cbs-sized block (after removing trailing spaces).
If you already have a file in ASCII and just want to replace some characters in it before splitting the blocks, you could use tr(1). For example, the following will replace each carriage return with '#' and each newline (linefeed) with '#':
$ tr '\r\n' '##' < emyfile | dd bs=2391 cbs=2391 conv=unblock of=myfile.new
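If the data is already ASCII, GNU fold offers an alternative to dd for the splitting step: fold -b counts bytes rather than screen columns and inserts a newline at each width boundary. A sketch with a hypothetical 12-byte sample and width 4; use -w 2391 for the real file:

```shell
#!/bin/bash
printf 'abc\ndef\rghij' > emyfile                      # hypothetical sample with stray LF/CR bytes
tr '\r\n' '##' < emyfile | fold -b -w 4 > myfile.new   # '#' out the LF/CR, break every 4 bytes
head -n 1 myfile.new                                   # -> abc#
```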

Read characters from a text file using bash

Does anyone know how I can read the first two characters from a file using a bash script? The file in question is actually an I/O driver; it has no newline characters in it and is, in effect, infinitely long.
The read builtin supports the -n parameter:
$ echo "Two chars" | while read -n 2 i; do echo $i; done
Tw
o
ch
ar
s
$ cat /proc/your_driver | (read -n 2 i; echo $i;)
I think
dd if=your_file ibs=2 count=1
will do the trick. Looking at it with strace shows it effectively does a two-byte read from the file.
Here is an example reading from /dev/zero and piping into hd to display the zeros:
dd if=/dev/zero bs=2 count=1 | hd
1+0 records in
1+0 records out
2 bytes (2 B) copied, 2.8497e-05 s, 70.2 kB/s
00000000  00 00  |..|
00000002
echo "Two chars" | sed 's/../&\n/g'
G'day,
Why not use od to get the slice that you need?
od --read-bytes=2 my_driver
Edit: You can't use head on its own for this, as head writes the raw bytes to stdout; if the first two chars are not printable, you don't see anything. The od command has several options to format the bytes as you want.
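Combining the two points above: head -c can still fetch the bytes as long as something like od makes them visible. A sketch; drv is a hypothetical stand-in for the driver:

```shell
#!/bin/bash
printf '\x01\x02rest' > drv      # hypothetical device stand-in: two unprintable bytes, then data
head -c 2 drv | od -An -tx1      # -> 01 02
```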
