I am looking for a mechanism to patch my EEPROM image with a unique device ID. I'd like to do this in a makefile, so that a new ID is automatically obtained, written into the data image, and then flashed to the device. In pseudocode:
wget http://my.centralized.uid.service/new >new.id
binedit binary.image -write 0xE6 new.id
flash binary.image into device
So first we get an id into a separate file, then we overwrite the image (from given offset) with the contents of this ID file. Then flash. But how to do the second part? I looked up bvi, which seems to have some scripting abilities, but I did not fully understand it, and to be honest vi always gave me the creeps.
Thanks for help beforehand!
(Full disclosure: I made the initial vote to close as a duplicate. This answer is adapted from the referenced question.)
Use dd with the notrunc option:
offset=$(( 0xe6 ))           # byte offset at which to write (0xE6)
length=$( wc -c < new.id )   # size of the ID file in bytes
# bs=1 makes seek and count operate on bytes; conv=notrunc overwrites
# in place instead of truncating binary.image at the end of the write.
dd bs=1 if=new.id of=binary.image count=$length seek=$offset conv=notrunc
You may want to try this on a copy first, just to make sure it works properly.
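To tie this into the makefile workflow from the question, something like the following could serve as a starting point (a minimal sketch: the ID-service URL comes from the pseudocode above, flash-tool is a stand-in for whatever actually flashes your device, and recipe lines must be indented with tabs):

flash: binary.image new.id
	dd bs=1 if=new.id of=binary.image seek=$$((0xE6)) conv=notrunc
	flash-tool binary.image   # replace with your real flashing command

new.id:
	wget -q -O new.id http://my.centralized.uid.service/new

Note the doubled $$ so that make passes the arithmetic expansion through to the shell.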
If you know the offset at which the replacement should start, you can use the split command to cut the original file at that offset, and then use cat to join the required pieces back together around the new content.
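A minimal sketch of that idea, using head and tail in place of split (offset and file names taken from the question; try it on a copy first):

offset=$(( 0xe6 ))
idlen=$( wc -c < new.id )
# Keep everything before the offset, splice in the ID, then append
# everything that follows the replaced region.
head -c "$offset" binary.image > patched.image
cat new.id >> patched.image
tail -c +"$(( offset + idlen + 1 ))" binary.image >> patched.image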
Another useful tool when working with binary files is od, which lets you examine a binary file in human-readable form.
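For example, to dump the image as hex bytes with the printable characters shown alongside:

od -A x -t x1z binary.image | less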
I would perhaps use something like Perl. See here, and in particular the section labelled "Updating a Random-Access File" (example here).
I'm running a bash script that will play a video with mplayer depending on an input from an Arduino (on/off).
When the movie ends, I need to write a timestamp to a txt file. My first question is whether there's a command in mplayer's slave mode that tells me the movie has ended, so I can output a timestamp easily.
If not, here's my strategy so far:
I'm running mplayer in slave mode with a fifo, into which I echo "pause" whenever I want it to stop.
So I've been doing this: echo "get_time_pos" into my fifo, which tells mplayer to print the current position in the movie, in seconds, to my terminal (that is, to the same window where I'm running my script).
Now I need to store this value in a variable so I can compare it with the total length and then output the time.
I'm stuck at getting this output into a variable in my bash script.
I recently put together a small bash library that may grow with time. At the moment, it has the functionality you're looking for. I'll explain how to get the info you seek and then point you to my library which simplifies the task.
To get the information you seek, you don't even need to call get_time_pos. You can simply dump the output of mplayer (not running in quiet mode) to a file and search it for the last timestamp. The catch is that mplayer redraws its status line in place using carriage returns rather than newlines, so the dump is hard to search as-is. Replace the carriage returns with newlines and the timestamps become easy to find; then grab the last two lines, in case the very last line is not a timestamp.
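A rough sketch of that approach (the log filename is a placeholder, and the grep pattern assumes the usual "A: <seconds>" status line; adjust for your mplayer build):

# Assuming mplayer's output was dumped to mplayer.log while playing,
# e.g.: mplayer video.avi > mplayer.log 2>&1
# Turn the carriage returns into newlines, then pull the seconds value
# out of the last status line.
pos=$(tr '\r' '\n' < mplayer.log | grep -o 'A: *[0-9]*\.[0-9]*' | tail -n 1 | sed 's/A: *//')
echo "Last position: $pos seconds"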
Using my bash library:
Now, if you would like to simplify this process, check out this little library I wrote. Follow the usage directions on my GitHub to incorporate it, and then when you play a media file, play it with the playMediaFile function. If you do this, you'll be able to call the getElapsedSeconds or getElapsedTimestamp function to retrieve the current playback position or the playback position after mplayer has stopped. Storing it to a variable from within bash would be as simple as:
pos=$(getElapsedSeconds)
or
pos=$(getElapsedTimestamp)
This library contains other functions as well. The isFinishedPlaying function may or may not also be of use to you.
I'm interested in simply adding a comment next to my files in Linux (Ubuntu). An example would be:
info user ... my_data.csv Raw data which was sent to me.
info user ... my_data_cleaned.csv Raw data with duplicates filtered.
info user ... my_data_top10.csv Cleaned data with only top 10 values selected for each ID.
So, sort of the way you can comment commits in Git. I don't particularly care about searching on these tags, filtering them, etc.; I just want to see them when I list files in a directory. Bonus if the comments/tags follow the document around as I copy or move it.
Most filesystem types support extended attributes where you could store comments.
So for example to create a comment on "foo.file":
xattr -w user.comment "This is a comment" foo.file
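And to read it back (the -p flag prints the named attribute):

xattr -p user.comment foo.file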
The attributes can be copied/moved along with the file, but be aware that many utilities require special options to copy extended attributes.
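For example, with GNU cp and rsync (flag names from their man pages; the destination paths here are just illustrative):

cp --preserve=xattr foo.file backup/foo.file
rsync -X foo.file backup/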
Then, to list files with comments, use a script or program that grabs the extended attribute. Here is a simple example to use as a starting point; it just lists the files in the current directory:
#!/bin/sh
# List each file in the current directory, appending its user.comment
# extended attribute when one is set.
ls -1 | while read -r FILE; do
    comment=$(xattr -p user.comment "$FILE" 2>/dev/null)
    if [ -n "$comment" ]; then
        echo "$FILE Comment: $comment"
    else
        echo "$FILE"
    fi
done
The xattr command is really slow and poorly written (it doesn't even return a useful exit status), so I suggest something else if possible: use setfattr and getfattr in a more capable script than the one I have provided, or perhaps a custom ls replacement that is aware of the user.comment attribute.
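The equivalent calls with the attr-package tools (setfattr/getfattr are standard on most Linux distributions) would be:

setfattr -n user.comment -v "This is a comment" foo.file
getfattr -n user.comment --only-values foo.file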
This is a moderately serious challenge. Basically, you want to add attributes to files, keep the attributes when the file is copied or moved, and then modify ls to display the values of these attributes.
So, here's how I would attack the problem.
1) Store the information in a SQLite database. You can probably get away with one table, containing the complete path to the file and your comment. I'd name the database something like ~/.dirinfo/dirinfo.db, and I'd store it in a subfolder because you may find later on that you need other information there. It'd be nice to use inodes rather than pathnames, but they change too frequently. Still, you might be able to do something where you store both the inode and the pathname, and retrieve by pathname only if the retrieval by inode fails, in which case you'd then update the inode information.
2) Write a bash script to create/read/update/delete the comment for a given file (a minimal sketch of this step follows the list).
3) Write another bash function or script that works with ls. I wouldn't call it "ls", though, because you don't want to mess with all the command-line options available to ls. You're going to be calling ls as ls -1 in your script, possibly with some sorting options such as -t and/or -r. Anyway, your script will call ls -1 and loop through the output, displaying the file name and the comment, which you'll look up using the script from 2). You may also want to add the file size, but that's up to you.
4) Write functions to replace mv and cp (and ln??). These would be wrapper functions that update the information in your table and then call the regular Unix versions of these commands, passing along any arguments received by the functions (i.e. "$@"). If you're really paranoid, you'd also do it for things like scp, which can be used (inefficiently) to copy files locally. Still, it's unlikely you'll catch all the possibilities. What if someone else, who doesn't have your wrapper functions, does a mv on your file? What if some script moves the file by calling /bin/mv? You can't easily get around these kinds of issues.
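Here is the promised sketch of step 2), assuming the sqlite3 command-line tool is installed; the database path and table layout follow the suggestion above, and the quoting is deliberately naive (a real script should guard against quote characters in paths and comments):

#!/bin/sh
# Usage: comment set|get|del FILE ["COMMENT TEXT"]
DB="$HOME/.dirinfo/dirinfo.db"
mkdir -p "$HOME/.dirinfo"
sqlite3 "$DB" 'CREATE TABLE IF NOT EXISTS comments (path TEXT PRIMARY KEY, comment TEXT);'

path=$(realpath "$2")
case "$1" in
    set) sqlite3 "$DB" "INSERT OR REPLACE INTO comments VALUES ('$path', '$3');" ;;
    get) sqlite3 "$DB" "SELECT comment FROM comments WHERE path = '$path';" ;;
    del) sqlite3 "$DB" "DELETE FROM comments WHERE path = '$path';" ;;
esac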
Or if you really wanted to get adventurous, you'd write some C/C++ code to do this. It'd be faster, and honestly not all that much more challenging, provided you understand fork() and exec(). SQLite does have a C API (it is itself a C library), so you'd have to tangle with that, too, but since you only have one database and one table, that shouldn't be too challenging.
You could do it in Perl, too, but I'm not sure it would be much easier in Perl than in bash. Your actual code isn't that complex, and you're not likely to be doing any crazy regex stuff or string manipulation. There are just lots of small pieces to fit together.
Doing all of this is much more work than should be expected for a person answering a question here, but I've given you the overall design. Implementing it should be relatively easy if you follow the design above and can live with the constraints.
OK, what I need is fairly simple.
I want to download LOTS of different files (from a specific server) via cURL, and I want to save each of them under a specific new filename on disk.
Is there an existing way (parameter, or whatever) to achieve that? How would you go about it?
(If there were an option to put all the URL-filename pairs in a text file, one per line, and have cURL process it, that would be ideal.)
E.g.
http://www.somedomain.com/some-image-1.png --> new-image-1.png
http://www.somedomain.com/another-image.png --> new-image-2.png
...
OK, I just figured out a smart way to do it myself.
1) Create a text file with pairs of URL (what to download) and filename (what to save it as on disk), separated by a comma (,), one pair per line, and save it as input.txt.
2) Use the following simple bash script:
while read -r line; do
    # Split the line on the comma: PART[0] is the URL, PART[1] the target filename.
    IFS=',' read -ra PART <<< "$line"
    curl "${PART[0]}" -o "${PART[1]}"
done < input.txt
*Haven't thoroughly tested it yet, but I think it should work.
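An alternative that avoids the loop entirely: curl can read its options from a config file with -K (see the curl man page). The same pairs could be written as url/output entries (the filenames below just mirror the example in the question):

# input.cfg
url = "http://www.somedomain.com/some-image-1.png"
output = "new-image-1.png"
url = "http://www.somedomain.com/another-image.png"
output = "new-image-2.png"

and then fetched in one go with curl -K input.cfg.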
Sorry if this belongs on serverfault
I'm wondering what the proper way is to use an SVG (XML) string as standard input for a "convert msvg:- jpeg:- 2>&1" command (using Linux). Currently I'm just saving a temp file to use as input, but the data originates from an API in my case, so feeding the string directly to the command would obviously be most efficient.
I appreciate everyone's help. Thanks!
This should work:
convert - output.jpg
Example:
convert logo: logo.svg
cat logo.svg | convert - logo.jpg
Explanation:
The example's first line creates an SVG file and writes it to disk (logo: is one of ImageMagick's built-in sample images). This is only a preparatory step so that we can run the second line.
The second line is a pipeline of two commands. cat streams the bytes of the file to stdout (standard output), and the next command in the pipeline, convert, is what actually reads them in.
The - character is a way to tell convert to read its input data not from disk, but from stdin (standard input).
So convert reads its input data from its stdin and writes its JPEG output to the file logo.jpg.
So my first command/line is similar to your step described as 'currently I'm just saving a temp file to use as input'.
My second command/line does not use your API (I don't have access to it, do I?), but it demonstrates a different method of 'feeding the string directly to the command'.
So the most important lesson is this: wherever convert would usually read input from a file, and where you would write the file's name on the command line, you can replace the filename with - to tell convert it should read from stdin. (But you need to make sure that there is actually something on convert's standard input for it to digest...)
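Applied to your API case, that might look like this (the URL is a stand-in for wherever your API lives; msvg:- and jpeg:- are taken from your question):

curl -s 'http://api.example.com/image.svg' | convert msvg:- jpeg:- > output.jpg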
Sorry, I can't explain better than this...
So I just found out I can create log files of everything I do in screen (C-a H). Sounds like a nice way to keep track of potential goofs in a particular screen session. However, when I went to try it out, the logfile is reported as being a binary file (and can't be viewed like regular text). So am I missing something? A quick man page looksee and searching Google (and SO) turn up nothing about this.
So my question is: How do I generate plain text log files in screen?
Assuming the answer is "What a noob... how about you try making them? RTFM." my question becomes: How do I use less to view screen logfiles I've created (since less screenlog.0 does not work on a binary file)?
EDIT: So cat works fine but less complains that the file is binary... why?
SOLUTION: as jcomeau_ictx helpfully pointed out, you can view these logfiles fine with cat or more, but with less you must add the -r flag: less -r screenlog.0
I just found a screenlog.0 on the net; it is plain text, with some escape sequences. Just 'cat' the file; you should be able to view it just fine.
[after more checking]
Control-A H is what generates the screenlog on my system. And though 'cat' works, you'll miss a lot of data. Use 'more' instead of 'less' to interpolate the escape codes.
I found neither less nor more nor cat to be an ideal solution for viewing screenlog files. All of them "replay" some of the control characters, so that e.g. screen deletions as produced by "clear" (I don't remember the corresponding control character) are shown, hiding what was cleared.
What I know works great is: use "view" or "vi"; they just show the control characters in escaped notation. Probably any other text editor works too (not tested).
screen -L logs to a file, and tail -f 'logfilename' lets you monitor that file as it grows.