I need to be able to sign an EXE on Linux. Following the directions at https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions/Signing_an_executable_with_Authenticode, I was able to get it done using Mono. However, I'm running into a problem: apparently Mono's signcode only supports the md5 and sha1 hash algorithms, and I'm trying to sign using sha256. Does anyone know how I can get this done?
Thanks!
I was unable to find a way to do this with Mono's signcode, but it can be done with osslsigncode (https://sourceforge.net/projects/osslsigncode/files/osslsigncode/) -- pass the -h sha2 parameter when signing.
According to the Mono documentation, the following algorithms are supported:
sha1 | md5 | sha2 | sha256 | sha384 | sha512
So the adjusted example from Mozilla would look like this:
signcode \
-spc authenticode.spc \
-v authenticode.pvk \
-a sha256 -$ commercial \
-n My\ Application \
-i http://www.example.com/ \
-t http://timestamp.verisign.com/scripts/timstamp.dll \
-tr 10 \
MyApp.exe
Related
I'm trying to create a hash for files in the directory using this script:
for file in *.zip; do openssl dgst -sha256 -binary ${file%.*}.zip $file | base64 >> ${file%.*}.zip.base64sha256; done
It creates hash like this:
b5iQL1fo5r+6osykGr0mcEZ14Xdbn8y0SrFGIuzMfeRvmJAvV+jmv7qh7OUavSZwRnXhd1ufzLRKsUYi7Mx95A==
But for Terraform and AWS Lambda I need a shorter hash value. I can get one using the terminal and a command like this:
openssl dgst -sha256 -binary archive.zip | base64 >> hash.base64sha256
And output is b5iQL1fo5r+6osykGr0mcEZ14Xdbn8y0SrFGIuzMfeQ=
So the question is: how can I get the short version of the hash? It's required by Terraform and AWS (when the hash value is the long one, the Lambda gets redeployed every time).
If you decode the "long" base64 you'll see that it's the same sequence of bytes repeated. That's because here
openssl dgst -sha256 -binary ${file%.*}.zip $file
you are specifying the file twice: once with the extension stripped and re-added as .zip in ${file%.*}.zip, and once plainly as $file. This makes openssl output the concatenated digests of both inputs (which are the same file), hence the doubled-length hash. To fix it, specify the file just once:
openssl dgst -sha256 -binary "$file"
(with quotes to avoid problems with whitespace in shell expansion)
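A quick way to see the doubling for yourself, assuming openssl, base64, and a file named demo.zip created just for this demonstration:

```shell
# Create a small sample file (name and contents are arbitrary)
printf 'hello' > demo.zip

# Hashing the file once yields one 32-byte digest: 44 base64 characters
openssl dgst -sha256 -binary demo.zip | base64 | tr -d '\n' | wc -c

# Passing the file twice yields two concatenated digests: 88 characters
openssl dgst -sha256 -binary demo.zip demo.zip | base64 | tr -d '\n' | wc -c
```

The 44-character form is what Terraform expects for a single SHA-256 digest.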
Instead of
for file in *.zip; do openssl dgst -sha256 -binary ${file%.*}.zip $file | base64 >> ${file%.*}.zip.base64sha256; done
try
for file in *.zip; do openssl dgst -sha256 -binary "$file" | base64 >> "$file.base64sha256"; done
My file test.txt contains
checksum test file
When I upload it into blob storage, its MD5 is
CONTENT-MD5 cvL65GNcvWFoqZUTI5oscw==
but when I run md5sum test.txt locally, the value is
72f2fae4635cbd6168a99513239a2c73
As discussed in the comments. Solution from here:
Googled around and found a suggestion to use openssl dgst, and it
worked!
openssl dgst -md5 -binary $filename | base64
Turns out, md5sum returns a hex representation of the hash and I had
to unhex it before computing its base64:
md5sum --binary $filename | awk '{print $1}' | xxd -p -r | base64
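You can verify locally that the two pipelines agree, assuming md5sum, xxd, and openssl are available (test.txt and its contents here are just stand-ins):

```shell
# Create a sample file (contents are arbitrary)
printf 'checksum test file\n' > test.txt

# md5sum emits hex; unhex it to raw bytes before base64-encoding
a=$(md5sum test.txt | awk '{print $1}' | xxd -p -r | base64)

# openssl can emit the raw 16-byte digest directly
b=$(openssl dgst -md5 -binary test.txt | base64)

# Both pipelines produce the identical Content-MD5 value
[ "$a" = "$b" ] && echo match
```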
I want to get the short hash/SHA of a GitHub commit. Is there a way to get the short hash using the GitHub API?
I was not able to find anything about it on the official documentation page.
This trick did it for me:
curl -s -L https://api.github.com/repos/:ORG/:REPO/git/refs/heads/master | grep sha | cut -d '"' -f 4 | cut -c 1-7
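The short form is nothing special: by convention it is simply the first 7 characters of the full 40-character SHA, so any full hash can be shortened locally (the SHA below is fabricated for illustration):

```shell
# A full 40-character commit SHA (made up for this example)
full=4f0c2b7e9a1d3c5f8b6e0a2d4c6e8f0a1b3c5d7e

# Take the first 7 characters, as the cut -c 1-7 in the curl pipeline does
echo "$full" | cut -c 1-7
```

If you have a local clone, `git rev-parse --short <sha>` does the same thing and additionally lengthens the abbreviation when 7 characters would be ambiguous within the repository.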
Easy way to count occurrences of a key
My way:
cat \
public.log.2015050723 \
public.log.2015050800 \
public.log.2015050801 \
public.log.2015050802 \
public.log.2015050803
| grep 18310680207 | wc -l
I need an easy way to count this. In fact, my question is: how do I feed the output of cat into grep?
File list:
public.log.2015050723
public.log.2015050800
public.log.2015050801
public.log.2015050802
public.log.2015050803
This is easier because it uses one fewer process:
cat public.log.2015050723 \
public.log.2015050800 \
public.log.2015050801 \
public.log.2015050802 \
public.log.2015050803 | # Note pipe or backslash needed here!
grep -c 18310680207
Note that the pipe symbol needs to appear after the last file name, or you need a backslash after the last file name.
If you need the occurrences per file, then you can lose the cat too (which is what anubhava suggested):
grep -c 18310680207 \
public.log.2015050723 \
public.log.2015050800 \
public.log.2015050801 \
public.log.2015050802 \
public.log.2015050803
You can reduce the list of file names, with your sample file names, to:
cat public.log.2015050723 public.log.201505080[0-3] |
grep -c 18310680207
or:
grep -c 18310680207 public.log.2015050723 public.log.201505080[0-3]
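A self-contained way to check the counting behaviour, using tiny stand-in log files (the contents are made up):

```shell
# Create stand-in log files: one match in the first, two in each of the rest
printf '18310680207\nnoise\n' > public.log.2015050723
for h in 0 1 2 3; do
  printf '18310680207\n18310680207\n' > "public.log.201505080$h"
done

# Total matching lines across all five files: 1 + 4*2 = 9
cat public.log.2015050723 public.log.201505080[0-3] | grep -c 18310680207
```

Note that grep -c counts matching lines, not individual occurrences; a line containing the number twice still counts once.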
Ok, so I can print a PDF doing:
pdf2ps file.pdf - | lp -s
But now I want to use convert to merge several PDF files, I can do this with:
convert file1.pdf file2.pdf merged.pdf
which merges file1.pdf and file2.pdf into merged.pdf; the target can be replaced with '-' to write to stdout.
Question
How could I pipe convert into pdf2ps and then into lp though?
convert file1.pdf file2.pdf - | pdf2ps - - | lp -s
should do the job.
You send the output of the convert command to pdf2ps, which in turn feeds its output to lp.
You can use /dev/stdout like a file:
convert file1.pdf file2.pdf /dev/stdout | ...
I use gs for merging pdfs like:
gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=/dev/stdout -f ...
Since hidden behind your pdf2ps command there is a Ghostscript command running (which accomplishes the PDF -> PS conversion), you could also run Ghostscript directly to generate the PostScript:
gs -o output.ps \
-sDEVICE=ps2write \
file1.pdf \
file2.pdf \
file3.pdf ...
Note that older GS releases didn't include the ps2write device (which generates PostScript Level 2), only pswrite (which generates the much larger PostScript Level 1). Change the above parameter accordingly if need be.
Older Ghostscript versions may also need to replace the modern abbreviation of -o - with the more verbose -dNOPAUSE -dBATCH -sOutputFile=/dev/stdout. Only newer GS releases (all after April 2006) know about the -o parameter.
Now, to directly pipe the PostScript output to the lp command, you would have to do this:
gs -o - \
-sDEVICE=ps2write \
file1.pdf \
file2.pdf \
file3.pdf ... \
| lp -s <other-lp-options>
This may be considerably faster than running pdftk first (but this also depends on your input files).
convert file1.pdf file2.pdf merged.pdf
pdf2ps merged.pdf - | lp -s