Sorry if this belongs on Server Fault.
I'm wondering what the proper way is to use an SVG (XML) string as standard input
for a "convert msvg:- jpeg:- 2>&1" command (on Linux).
Currently I'm just saving a temp file to use as input,
but the data originates from an API in my case, so feeding
the string directly to the command would obviously be most efficient.
I appreciate everyone's help. Thanks!
This should work:
convert - output.jpg
Example:
convert logo: logo.svg
cat logo.svg | convert - logo.jpg
Explanation:
The example's first line creates an SVG file and writes it to disk. This is only a preparatory step so that we can run the second line.
The second line is a pipeline of two commands: cat streams the bytes of the file to stdout (standard output), so that the next command in the pipeline has something to read in.
That next command is convert.
The - character is a way to tell convert to read its input data not from disk, but from stdin (standard input).
So convert reads its input data from its stdin and writes its JPEG output to the file logo.jpg.
So my first command/line is similar to your step described as 'currently I'm just saving a temp file to use as input'.
My second command/line does not use your API (I don't have access to it, do I?), but it demonstrates a different method of 'feeding the string directly to the command'.
So the most important lesson is this: wherever convert would usually read input from a file and you would write the file's name on the command line, you can replace the filename with - to tell convert it should read from stdin. (But you need to make sure that there is actually something offered on convert's standard input which it can digest...)
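Applied to your case, that means you can pipe the API response straight into convert with no temp file. A minimal sketch, assuming the SVG markup ends up in a shell variable (the API URL is made up for illustration):

# Hypothetical endpoint; substitute however you actually fetch the SVG.
svg=$(curl -s 'https://api.example.com/image.svg')
# printf feeds the string to convert's stdin; msvg:- forces the SVG decoder.
printf '%s' "$svg" | convert msvg:- jpeg:output.jpg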
Sorry, I can't explain better than this...
Related
I have a project that involves extracting data from a database into a text file and then ingesting it into Hadoop. So I want to create a shell script that NiFi can run automatically to check whether a text file has been extracted and ingest it, but I need to make sure that the whole data set has been extracted before ingesting it. That means I would need to check that the text file has an EOF; how do I do that?
I don't have any code as of yet; I have very little knowledge of writing shell scripts.
While creating the file, use a different name. Rename it to the expected name once the extraction is done. Then, the other process can start its work once the file exists.
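A minimal sketch of that pattern (the extraction command and paths are placeholders):

# Extract into a temporary name that the ingest job ignores...
extract_from_db > /data/export/batch.txt.part
# ...then rename once extraction has finished.
mv /data/export/batch.txt.part /data/export/batch.txt

On the same filesystem, mv is an atomic rename, so the ingest side can never see a half-written batch.txt.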
EOF is not something that actually gets put in the text file - in fact, there isn't really any EOF value. EOF or end-of-file is a condition that occurs when you try to consume input from a source that has none to give.
There is no general marker you can look for in your text files that will tell you whether they are complete. You'll need to make your script indicate when a given chunk of data has been extracted in some other way. There are many possibilities; you could change the name of the file as choroba suggested, or you could create a lock file and remove it once the data extraction is done, or you could have your extraction program write a distinctive sequence of bytes to the file at the end, and so on.
I'm trying to filter out lines from all .js source files and put them into a separate file. (Specifically, I'm trying to grep all calls to a string translation function and post-process them.)
I think I have the different parts figured out but can't make them fit together.
For each file, process it
Write each file's grepped lines to output.
Append the result to a file
I've tried calling through.push(<output per file>) from the plugin, but the following step expects a file, not a string.
From there, I expect I could do something like gulp-concat or stream merging on the results and pipe it on to gulp.dest, but there's a bit missing here.
I figured out a way: simply replace the Vinyl file's contents with the lines to output, and pass that file along with through.push.
OK, what I need is fairly simple.
I want to download LOTS of different files (from a specific server) via cURL, and I want to save each of them as a specific new filename on disk.
Is there an existing way (parameter, or whatever) to achieve that? How would you go about it?
(If there were an option to input all URL-filename pairs in a text file, one per line, and have cURL process it, that would be ideal.)
E.g.
http://www.somedomain.com/some-image-1.png --> new-image-1.png
http://www.somedomain.com/another-image.png --> new-image-2.png
...
OK, I just figured out a way to do it myself.
1) Create a text file with pairs of URL (what to download) and filename (how to save it to disk), separated by a comma (,), one pair per line, and save it as input.txt.
2) Use the following simple Bash script:
while IFS= read -r line; do
  # Split the line into URL and filename at the comma.
  IFS=',' read -ra PART <<< "$line"
  curl "${PART[0]}" -o "${PART[1]}"
done < input.txt
*Haven't thoroughly tested it yet, but I think it should work.
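It turns out curl can also do this natively: put the pairs in a config file and pass it with -K. A sketch using the made-up URLs from the question:

url = "http://www.somedomain.com/some-image-1.png"
output = "new-image-1.png"
url = "http://www.somedomain.com/another-image.png"
output = "new-image-2.png"

Save that as urls.txt and run:

curl -K urls.txt

curl pairs each output entry with the url it accompanies, so no shell loop is needed.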
I am looking for a mechanism to manipulate my EEPROM image with a unique device ID. I'd like to do this in a makefile so that the device would automatically obtain a new ID, have it written into the data image, and then be flashed. In pseudocode:
wget http://my.centralized.uid.service/new >new.id
binedit binary.image -write 0xE6 new.id
flash binary.image into device
So first we get an ID into a separate file, then we overwrite part of the image (from a given offset) with the contents of this ID file. Then we flash. But how to do the second part? I looked up bvi, which seems to have some scripting abilities, but I did not fully understand it, and to be honest vi has always given me the creeps.
Thanks in advance for any help!
(Full disclosure: I made the initial vote to close as a duplicate. This answer is adapted from the referenced question.)
Use dd with the notrunc option:
offset=$(( 0xe6 ))          # byte offset to patch, converted from hex
length=$( wc -c < new.id )  # number of bytes in the ID file
# bs=1 makes seek/count operate on single bytes; conv=notrunc overwrites
# in place instead of truncating binary.image after the written region.
dd bs=1 if=new.id of=binary.image count=$length seek=$offset conv=notrunc
You may want to try this on a copy first, just to make sure it works properly.
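To confirm the patch landed where you expected, you can dump just that region with od (0xE6 is 230 in decimal):

# Skip 230 bytes, then show as many bytes as new.id contains, as hex
od -A x -j 230 -N "$(wc -c < new.id)" -t x1 binary.image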
If you know the offset in the file that you want to replace from, you can use the split command to split the initial file up to the offset. The cat command can then be used to join the required pieces back together.
Another useful tool when working with binary files is od, which lets you examine binary files in a human-readable format.
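The same splice idea is arguably easier with head and tail than with split; a rough sketch, assuming the 0xE6 offset from the question:

offset=230                 # 0xE6 in decimal
idlen=$(wc -c < new.id)
head -c "$offset" binary.image > patched.image                    # bytes before the ID
cat new.id >> patched.image                                       # the new ID itself
tail -c +"$((offset + idlen + 1))" binary.image >> patched.image  # the rest
mv patched.image binary.image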
I would perhaps use something like Perl; see in particular the section labelled Updating a Random-Access File in the Perl documentation.
I want to save results in a text file. How can I do that? With the write command?
Yes, the write command. The details should be in some book or on the net, but here's a simple example:
OPEN(UNIT=20, FILE='FILENAME.TXT', STATUS='NEW')
C STATUS='NEW' WILL CREATE A NEW FILE IF ONE DOESN'T EXIST; 'REPLACE' WILL
C OVERWRITE AN OLD ONE
WRITE(UNIT=20, FMT=*) (A(I), I=1,10)
CLOSE(UNIT=20)
In Fortran 77 it was always good practice to avoid low unit numbers (below 10), because some of them were reserved, depending on the platform, compiler, and so on. Generally, start with those above 10.
Yes, the write command, plus the open command to open the file. Something like this, if my rusty FORTRAN memory serves:
OPEN(UNIT=1, FILE=FNAME, STATUS='NEW')
WRITE(UNIT=1,FMT=*) "your data"
Your other option is to simply write to stdout (unit=*) and then redirect the output from the command line (e.g. $ myfortranprogram > output.txt).
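An end-to-end sketch of that redirect option, assuming gfortran and placeholder file names:

# Compile the program, run it, and capture its stdout in a text file
gfortran myprogram.f -o myfortranprogram
./myfortranprogram > output.txt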
If you are on unix/linux (which is likely), then just redirect the output to a file:
a.out > myoutputfile
where a.out is the name of the compiled executable. Alternatively, change your code to write to a file instead of just to the console:
io=22 !or some other integer number
open(io,file="myoutputfile")
write(io,*)myint,myreal
close(io)
or to keep appending the values to an existing file:
open(io,file="myoutputfile",position="APPEND")
but this is only possible in Fortran 90, not in Fortran 77. Try renaming your .f file to .f90 in that case.