On an OpenWRT installation, I have an update script that downloads a file and checks its GPG signature.
If I run this script at boot in rc.d at priority 99 (it's the last one), I get a "gpg: Can't check signature: public key not found" error. If I run it via Cron or manually, everything works.
I also tried adding a 60-second sleep before running the script.
Is there a way to know when GPG has finished its initialization?
Can you post the script you use?
A possible solution would be to add the public key import as part of the script before you check the signature, so it's always available for gpg.
This answer may also shed some light on this error: Can't check signature: public key not found
It turns out that scripts in rc.d are not run as root, or rather the root home directory is not set yet (?), so the home directory where GPG looks for its keyring is different: it looks in //.gnupg/ instead of /root/.gnupg/.
Adding the --homedir option to GPG lets you specify the directory explicitly; this works:
gpg --homedir /root/.gnupg/ --verify update.gpg
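The same fix can also be applied the other way around, by pinning the keyring directory in the environment before gpg runs. A minimal sketch (GNUPGHOME is gpg's standard override variable; the echo is only for illustration):

```shell
#!/bin/sh
# At boot, HOME may be unset or "/", which is why gpg resolves its
# default keyring directory to //.gnupg/. Pinning GNUPGHOME avoids that.
GNUPGHOME=/root/.gnupg
export GNUPGHOME
# With GNUPGHOME set, a plain verify finds root's public keys:
#   gpg --verify update.gpg
echo "keyring dir: $GNUPGHOME"
```

Either approach works; exporting GNUPGHOME keeps the gpg invocations themselves unchanged.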
I'm trying to use gnome-keyring to remember my GPG passphrase on a headless Ubuntu server (22.04.1 LTS, GNU/Linux 5.15.0-57-generic x86_64). The reason I'm doing this with gnome-keyring instead of the gpg-agent cache is that I'd like the GPG certificate to be immediately usable by some systemd cron jobs when I reboot my server.
I've followed the Gnome/Keyring instructions but using pinentry-gnome3 doesn't seem to work:
No Gcr System Prompter available, falling back to curses
I've also tried using pinentry-gtk-2, as mentioned in the GnuPG instructions, and although I don't get any error, the passphrase is not stored.
When doing some debugging, I've found some weird behavior. Trying to store something in my keyring gives me this error:
$ secret-tool store --label='test' foo bar
secret-tool: Cannot create an item in a locked collection
Can anyone help me? I'm also willing to drop gnome-keyring for something else, but I haven't found anything that fits my use case.
This is a very weird behavior. I am using gpg (GnuPG) 2.2.19, and I am trying to sign a git commit. The first time I try I get an error saying:
error: gpg failed to sign the data
fatal: failed to write commit object
... but then someone suggested in another Stack Overflow question that if you sign a local dummy file first and then try to sign the commit again, it would work. And it does! But why? How can I avoid this weird workaround of signing a local file every time I want to sign a git commit?
I am using WSL on Windows 11, so all these take place in WSL.
OK, I don't know if this will solve it for everyone; the comments on the original question provide some other solutions, which did not work for me. But I did find a solution in this guide, in the section "Configure pinentry to use the correct TTY".
The issue was that I had to specify the correct TTY, as described in the gpg-agent documentation.
To achieve this, I added the following to my ~/.bashrc (or ~/.zshrc in my case, since I use ohmyzsh):
# update tty for gpg-agent
# (the first two lines let gpg-agent stand in for ssh-agent; the
# GPG_TTY line is the part that fixes signing from a terminal)
export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
unset SSH_AGENT_PID
export GPG_TTY=$(tty)
gpgconf --launch gpg-agent
I want to write a shell script which basically goes through all the installation steps for gcloud, as outlined at: https://cloud.google.com/sdk/?hl=en
However, when you run install.sh, you will be asked to enter an authorization code, your project ID, whether you want to help improve Google Cloud, and so on. In short, user input is required.
But if I want to automate the installation process on a machine where there will not be any user, how can I do this?
There are two separate problems here.
First, how do you install without prompts:
Download the Google Cloud SDK tar file. It can be found right under the curl command on https://cloud.google.com/sdk/
Untar it and cd into the newly created directory.
Run CLOUDSDK_CORE_DISABLE_PROMPTS=1 ./install.sh (or install.bat)
This disables all prompts. If you don't like the way it answers the prompts, you can pre-answer them with flags. If you pre-answer all the questions, you don't need the CLOUDSDK_CORE_DISABLE_PROMPTS environment variable set. Run ./install.sh --help for a list of flags.
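A hedged example of that flag-based approach (the exact flag names come from ./install.sh --help and may differ between SDK versions, so treat these as illustrative):

```shell
# Pre-answer the installer's questions instead of suppressing them:
./google-cloud-sdk/install.sh \
    --usage-reporting false \
    --path-update true \
    --bash-completion true
```

With every prompt answered up front, the install runs unattended without needing CLOUDSDK_CORE_DISABLE_PROMPTS.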
Now that you have it installed, how do you auth it?
If you are on GCE, you can use the credentials on the machine itself automatically. If not, some setup is required.
Since these are automated installs, you want to give them a service account key. If a human were involved, they could just proceed through the normal flow.
Keys can be downloaded from the developer console under "APIs & auth -> Credentials". Click "New credentials -> Service account key". Google recommends you use a JSON key.
When you have that key, you need to move it to the new server and run:
gcloud auth activate-service-account --key-file servicekey.json
gcloud config set project MYPROJECT
There seems to be a better/more elegant way to do this, as per the docs:
curl https://sdk.cloud.google.com > install.sh
bash install.sh --disable-prompts
This sequence of commands should help:
# note: this pins a specific (now old) SDK version; adjust as needed
file="google-cloud-sdk-101.0.0-linux-x86_64.tar.gz"
link="https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/"
# download and unpack in one step, then run a prompt-free install
curl -L "$link$file" | tar xz
CLOUDSDK_CORE_DISABLE_PROMPTS=1 ./google-cloud-sdk/install.sh
I've got an SVN instance installed on a free EC2 AWS server. In short: I'm using LAMP.
I set it up using what I read in this article, and encountered the "you need a TTY" error mentioned in the comments. I followed the second resource and it cleared the error message, but the hook doesn't seem to be executing the script. When I run the script manually, however, it works.
Any clue what I'm missing?
When I followed the second resource to fix the TTY error, I changed the contents of my /svn/repository/hooks/post-commit script from:
#!/bin/bash
sudo /usr/local/bin/svn-post-commit-update-mysite 1>&2
to:
#!/bin/bash
su --session-command="/usr/local/bin/svn-post-commit-update-mysite 1>&2" dynamic &
First possible issue:
You cannot rely on the value of the $PATH variable inside the hook. This means you need to specify complete paths for all executables.
In particular, su is a program located at /bin/su in most distributions. To be sure of its location, type
type su
Next possible issue:
Is your Subversion server being run as root? su will try to ask for a password when run by other users, and will fail if it is not run interactively, even if the user is in the sudoers file!
If you are using Apache+DAV, this means the apache service must be run as root for this to work (instead of www-data), which is a serious security problem.
You probably don't need to use su or sudo at all if all of the files are owned by the same user (www-data, for instance). You can change the ownership of the site files with something like
sudo chown -R www-data:www-data /var/www/<my-project>
And then remove the sudo and su from both the hook and the svn-post-commit-update-mysite file.
My best guess would be that something in your script depends on the PATH environment variable. Subversion runs hooks in an empty environment for security reasons. So you need to either setup the environment in your shell script or use absolute paths.
You might want to read the Subversion book entry on implementing hook scripts. The particular issue I mentioned is explained in the information block.
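Putting both points together, a minimal post-commit hook might look like the sketch below (the updater path is illustrative; the key points are that the hook sets its own PATH, uses absolute paths, and avoids su/sudo entirely):

```shell
#!/bin/sh
# Hedged sketch of a post-commit hook. Subversion runs hooks with an
# empty environment, so set PATH explicitly (or use absolute paths).
PATH=/usr/local/bin:/usr/bin:/bin
export PATH

REPOS="$1"   # repository path, passed by Subversion
REV="$2"     # revision number, passed by Subversion

# Call the updater by absolute path; no su/sudo is needed when the
# site files are owned by the same user the server runs as:
#   /usr/local/bin/svn-post-commit-update-mysite "$REPOS" "$REV" 1>&2
echo "hook ran for revision ${REV:-<none>}"
```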
I have installed PostgreSQL version 8.4-91 on my Linux OS.
Going to the directory where it's installed, I am able to locate psql.
I am having two issues:
On typing ./psql, it asks for a password and doesn't accept any password.
On typing psql, I get "command not found".
The second one is easy. Most secure Linux systems don't include . (the current directory) in the path (i.e., $PATH).
This avoids the attack vector of providing an ls script in your directory that will run if someone is foolish enough to have . before the real location of ls in their path.
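A small self-contained illustration of the lookup difference (the directory and script names are made up):

```shell
#!/bin/sh
# A bare command name is resolved through $PATH; "./name" bypasses PATH.
mkdir -p /tmp/pathdemo
printf '#!/bin/sh\necho hello\n' > /tmp/pathdemo/demo
chmod +x /tmp/pathdemo/demo
cd /tmp/pathdemo
demo 2>/dev/null || echo "bare name: not found (. is not in PATH)"
./demo                              # explicit relative path: runs
export PATH="$PATH:/tmp/pathdemo"
demo                                # found now that its dir is on PATH
```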
If you really want to be able to run it without the dot, the safest option is to set up an alias like:
alias pg='./psql'
and then use pg to run it. I would advise against putting . in your $PATH variable, at least on a shared machine. If you're the only one able to muck about on your machine, then you could probably do it safely.
The first one you can probably get around by editing the pg_hba.conf file to temporarily disable authentication, using ALTER USER (or CREATE USER) to set up a password, and then turning authentication back on.
Or you could just run without authentication in your development environment, as so many of us do :-)
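A sketch of that dance, assuming a typical layout (the data directory path varies by distribution, and the password is a placeholder; 8.4 predates newer auth methods, so md5 is used here):

```shell
# 1. In pg_hba.conf, temporarily trust local connections:
#      local   all   all   trust
# 2. Tell the server to re-read its config, then set a password:
pg_ctl reload -D /var/lib/pgsql/data
psql -U postgres -c "ALTER USER postgres PASSWORD 'newpassword';"
# 3. Change the pg_hba.conf line back to md5 and reload again.
```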