I am having some issues while trying to set up a Paramiko SCP transfer using public/private key authentication.
The problem is not so much related to Paramiko itself, I think, but to the fact that the script is launched from a user cron job (crontab -e).
The script works from a normal terminal, but it does not from cron. I tried specifying the exact location of the private key (key_filename="/home/myuser/.ssh/id_rsa") when calling connect. It returns the following error: "Not a valid RSA private key file".
In the crontab, I also tried declaring the shell and environment to use:
SHELL=/bin/bash
PATH=... (all the typical values)
HOME=/home/myuser
I also tried sourcing $HOME/.profile before launching the task.
It keeps failing.
Either making the cron execution environment have the same variables as a normal bash session, or being able to properly tell Paramiko where the private key is, would solve it, but nothing I tried has worked.
I also tried: Paramiko can not access private key
But it did not work.
And this question, Paramiko: "not a valid RSA private key file", is not applicable, because the script works when launched from a normal terminal as that user; it only fails under cron.
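For what it is worth, a quick way to see what environment the script actually gets under cron is to dump a few variables at the top of the script and compare them with an interactive shell (a diagnostic sketch only; the log path is just an example):
import os
import logging

# Log the environment cron provides so it can be compared with a normal shell.
logging.basicConfig(filename="/tmp/cron_env.log", level=logging.DEBUG)
logging.debug("HOME=%s USER=%s", os.environ.get("HOME"), os.environ.get("USER"))
logging.debug("PATH=%s", os.environ.get("PATH"))
logging.debug("expanduser('~') -> %s", os.path.expanduser("~"))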
Any clue?
Python3
Paramiko 2.6.0
Ubuntu 20.04.2 LTS
Thanks to Martin Prikryl: after activating the Paramiko logging, which reported an error about something not being implemented, I ended up here:
Getting Oops, unhandled type 3 ('unimplemented') while connecting SFTP with Paramiko
By applying the private key as per that post (and converting the key as per this other post: Paramiko: "not a valid RSA private key file"; note I am using Paramiko 2.6.0)...
Now it works!!
Thanks, Martin!
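In case it is useful to anyone else, this is roughly the shape of the setup those two linked posts lead to (sketch only: the hostname is a placeholder, the key path is the one from the question, and it assumes the key has already been converted to the classic PEM format; host key handling is simplified):
import paramiko

# Paramiko's own log is what surfaced the "unimplemented" error mentioned above.
paramiko.util.log_to_file("/tmp/paramiko.log")

# Load the converted private key explicitly instead of relying on automatic
# key discovery, which behaves differently under cron.
pkey = paramiko.RSAKey.from_private_key_file("/home/myuser/.ssh/id_rsa")

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # simplified for the sketch
ssh.connect(hostname="example.com", username="myuser", pkey=pkey)

sftp = ssh.open_sftp()
# ... transfer the files here ...
sftp.close()
ssh.close()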
Last night I set up the Pass password manager. I used gpg2 and followed this tutorial. I didn't implement the git integration. Everything worked successfully: to view a password I had to enter my master key, exactly how I want it. This morning I tried to use pass. In my terminal I typed:
pass account/adobe/my@email.com
I get the following error:
gpg: decryption failed: No secret key
It didn't ask me to enter my master key. I tried restarting gpg-agent and editing ~/.gnupg/gpg-agent.conf, but nothing is working.
This is what my ~/.gnupg/gpg-agent.conf looks like:
default-cache-ttl 28800
# 8 hours
pinentry-program /usr/bin/pinentry-curses
allow-loopback-pinentry
I should mention that I am using the Windows Subsystem for Linux on Windows 10.
I put this in ~/.gnupg/gpg-agent.conf:
default-cache-ttl 3153600000
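# 100 years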
pinentry-program /usr/bin/pinentry-curses
allow-loopback-pinentry
After that, enter the following commands:
$ gpgconf --kill gpg-agent
$ gpg-connect-agent /bye
I am connecting to a remote server with the following code:
ssh = paramiko.SSHClient()
ssh.load_system_host_keys()
ssh.set_missing_host_key_policy(paramiko.WarningPolicy())
ssh.connect(
    hostname=settings.HOSTNAME,
    port=settings.PORT,
    username=settings.USERNAME,
)
When I'm on local server A, I can ssh onto the remote from the command line, suggesting it is in known_hosts. And the code works as expected.
On local server B, I can also ssh onto the remote from the command line. But when I try to use the above code I get:
/opt/mysite/virtualenv/lib/python3.5/site-packages/paramiko/client.py:763: UserWarning: Unknown ssh host key for [hostname]:22: b'12345'
key.get_fingerprint())))
...
File "/opt/mysite/virtualenv/lib/python3.5/site-packages/paramiko/client.py", line 416, in connect
look_for_keys, gss_auth, gss_kex, gss_deleg_creds, t.gss_host,
File "/opt/mysite/virtualenv/lib/python3.5/site-packages/paramiko/client.py", line 702, in _auth
raise SSHException('No authentication methods available')
paramiko.ssh_exception.SSHException: No authentication methods available
Unlike "SSH - Python with paramiko issue" I am using both load_system_host_keys and WarningPolicy, so I should not need to programatically add a password or key (and I don't need to on local server A).
Is there some system configuration step I've missed?
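A rough way to check what authentication material the Python process on local server B can actually see (a diagnostic sketch only, not a fix; the key path is just the default location Paramiko looks in):
import os
import paramiko

# With no password and no pkey argument, connect() falls back to the SSH agent
# and the default key files under ~/.ssh; check what this process can see.
print("agent keys:", len(paramiko.Agent().get_keys()))
print("~/.ssh/id_rsa present:", os.path.exists(os.path.expanduser("~/.ssh/id_rsa")))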
Try using Fabric (which is built on top of Invoke and Paramiko) instead of Paramiko directly, and set the following parameters:
con = fabric.Connection('username@hostname', connect_kwargs={'password': 'yourpassword', 'allow_agent': False})
If it keeps failing, check whether your password is still valid and that you are not required to change it.
I tested with the wrong user on local server B. The user running the Python process did not have ssh permissions after all. (Command line ssh failed for that user.) Once I gave it permissions, the connection worked as expected.
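For anyone hitting the same thing, a quick sanity check along those lines (sketch only) is to log which account the Python process actually runs as, and therefore which ~/.ssh it will use:
import getpass
import os

# Paramiko's key and agent lookup depends on this user's home directory,
# which under a service process is often not the account you expect.
print("running as:", getpass.getuser())
print("ssh dir:", os.path.expanduser("~/.ssh"))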
I'm reading that a shell command can be executed from custom facts with Facter::Core::Execution.exec. I've made a fact with the following code:
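# lib/facter/controller_id.rb: use jq to read device._id from the system JSON and expose it as the controller_id fact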
Facter.add(:controller_id) do
  setcode do
    Facter::Core::Execution.exec('/usr/bin/jq -r .device._id /var/lib/mylib/system.json')
  end
end
When I run the command standalone, like /usr/bin/jq -r .device._id /var/lib/mylib/system.json, on an agent, it returns a string. But when I run the agent with puppet agent -t, PuppetDB doesn't contain the new fact.
I can see that the agent picks up the new fact code, because it tells me the code has changed:
Notice:
/File[/opt/puppetlabs/puppet/cache/lib/facter/controller_id.rb]/content:
content changed '{md5}c3567db500497e3586617bfed072ca6d' to
'{md5}bb617198c5612eee365b5af8d410d4bc'
But no error is returned telling me why the fact wasn't saved. Does anyone know what might be causing this issue?
The command below works and the machines are created on the right private VLAN, BUT they also come up with a public VLAN, even though I don't want that. The command that works so far:
slcli vs create --billing=hourly --image=1060669 --hostname=ejkpoc --domain=ejk.co.uk --cpu=1 --memory=1 --datacenter=lon02 --postinstall=https://10.1.1.13/files/bootstrap-rhel-5.sh --vlan-private=1227409
The major trouble with the assignment of a public VLAN, for me, is that the post-install bootstrap that attaches to Chef etc. now registers the FQDN of the public interface! Cheers, EJK
I didn't "RTFM" correctly, my bad: among the options there is a "--private" flag to force the machine to be private only. Running the command now...
I will update with the command once it works ...
Cheers
EJK
OK, so I'm trying to install and configure svnserve on my Ubuntu server. So far so good, up to the point where I try to configure SASL (to avoid plain-text passwords).
So: I installed svnserve and made it run as a daemon (also set it up as a startup script, with the command svnserve -d -r /var/svn).
My repository is in /var/svn and has the following configuration (found in /var/svn/myrepo/conf/svnserve.conf; I left the comments out):
[general]
anon-access = none
auth-access = write
realm = my_repo
[sasl]
use-sasl = true
min-encryption = 128
max-encryption = 256
On to SASL: I created an svn.conf file in /usr/lib/sasl2/:
pwcheck_method: auxprop
auxprop_plugin: sasldb
sasldb_path: /etc/my_sasldb
mech_list: DIGEST-MD5
I created it in that folder as the article at this link suggested: http://svnbook.red-bean.com/nightly/en/svn.serverconfig.svnserve.html#svn.serverconfig.svnserve.sasl (and also because the folder existed and was listed as a result when I executed locate sasl).
Right after that I executed this command:
saslpasswd2 -c -f /etc/my_sasldb -u my_repo USERNAME
It asked me for a password twice, which I supplied. All going great.
When issuing the following command:
sasldblistusers2 -f /etc/my_sasldb
I get the - correct, as far as I can see - result:
USERNAME#my_repo: userPassword
Restarted svnserve, also restarted the whole server, and tried to connect.
This was the result from my TortoiseSVN client:
Authentication error from server: SASL(-13): user not found: unable to canonify
user and get auxprops
I have no clue at all what I'm doing wrong. I've been scouring the web for the past few hours, but haven't found anything except that I might need to move the svn.conf file to another location, for example the install location of Subversion itself. which svn returns /usr/bin/svn, so I moved svn.conf to /usr/bin (although that doesn't feel right to me).
Still doesn't work, even after a new reboot.
I'm running out of ideas. Anyone else?
EDIT
I tried changing this (according to what some other forums on the internet told me to do): in the file /etc/default/saslauthd, I changed
START=no
MECHANISMS="pam"
to
START=yes
MECHANISMS="sasldb"
(Actually I had already changed START=no to START=yes before, but I forgot to mention it). But still no luck (I did reboot the whole server).
It looks like svnserve uses default values for SASL...
Check that /etc/sasl2/svn.conf is readable by the svnserve process owner.
If /etc/sasl2/svn.conf is owned by user root, group root, with mode -rw------- (600), svnserve uses the default values.
You will not be warned by any log file entry.
See section 4 of https://svn.apache.org/repos/asf/subversion/trunk/notes/sasl.txt:
This file must be named svn.conf, and must be readable by the svnserve process.
(It took me more than three days to understand both svnserve + SASL + LDAP and this pitfall at the same time.)
I recommend installing the package cyrus-sasl2-doc and reading the section "Cyrus SASL for System Administrators" carefully.
I expect this is caused by the way svnserve calls the SASL API:
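/* Any failure from sasl_server_new() is reported to the client only as a
   generic SVN_ERR_RA_NOT_AUTHORIZED error; there is no special handling
   for an unreadable svn.conf. */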
result = sasl_server_new(SVN_RA_SVN_SASL_NAME,
                         hostname, b->realm,
                         localaddrport, remoteaddrport,
                         NULL, SASL_SUCCESS_DATA,
                         &sasl_ctx);
if (result != SASL_OK)
  {
    svn_error_t *err = svn_error_create(SVN_ERR_RA_NOT_AUTHORIZED, NULL,
                                        sasl_errstring(result, NULL, NULL));
    SVN_ERR(write_failure(conn, pool, &err));
    return svn_ra_svn__flush(conn, pool);
  }
As you can see, svnserve does not specifically handle this access failure; it only expects OK or a generic error.
I looked in /var/log/messages and found
localhost svnserve: unable to open Berkeley db /etc/sasldb2: No such file or directory
When I created the sasldb at the path above and got the permissions right, it worked. It looks like svnserve ignores or does not use the configured sasldb_path.
There was another suggestion that rebooting solved the problem, but that option was not available to me.