I have a small Apache Camel application that picks up files from an SMB endpoint:
smb://domain.nl;user-name#domain.nl/my/file/location?password=Blablabla&move=${processedFolder}&sendEmptyMessageWhenIdle=true&consumer.bridgeErrorHandler=true
Recently we had a glitch and the process failed after moving some of the files we had lined up. The logs made it clear that the files at the SMB location had suddenly become inaccessible.
The first thing I tried was checking whether I could reach the SMB endpoint, like this:
smbclient -L domain.nl -U user-name -d 10
This returned session setup failed: NT_STATUS_LOGON_FAILURE after many debug lines.
Long story short, it turned out to be just a glitch: after rerunning Camel, all the files were picked up. So smbclient is not the way to check whether Apache Camel, which uses JCIFS, can access a location. But what is? Next time, how can I check manually from the Linux server whether an SMB location is available?
I should note that the Linux version is ancient: Red Hat 5.5.
Camel version: 2.20.1
camel-jcifs version: 2.18.0
JCIFS version: 1.3.18
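Would something along these lines be the right manual check? It lists the actual share instead of enumerating everything with -L, and passes the domain explicitly with -W (the host/share/path split and the domain/username split are my guesses from the endpoint URI above):
smbclient //domain.nl/my -W domain.nl -U user-name -c 'cd file/location; ls' -d 3
My thinking is that -L can fail on share enumeration while the share itself is still reachable, and that old JCIFS (1.3.x) may authenticate differently from smbclient, so testing the exact share and supplying the domain should get closer to what the Camel consumer actually does.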
I am learning to use Jenkins to deploy a .NET 5.0 application on an AWS EC2 server. This is the first time I am using a Linux server and Jenkins for .NET (I'm a lifelong Windows guy), and I am facing an error while trying to publish my artifacts over SSH to the web server.
My setup:
Jenkins server is an AWS EC2 Linux AMI server.
The web server is also an AWS EC2 Linux AMI server.
My Jenkins is correctly installed and working. I am able to build and run unit test cases without any issues.
For deployment, I am using the 'Publish Over SSH' plugin, and I have followed all the steps to configure it as described here: https://plugins.jenkins.io/publish-over-ssh/.
However, when I try 'Test Configuration', I get the error below:
Failed to connect or change directory
jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection. Message: [Failed to connect session for config [WebServer]. Message [Auth fail]]
I did a ping test from the Jenkins server to the web server, and it succeeds.
I'm using the .pem key in the 'Key' section of 'Publish over SSH'. This key is the same key I use to SSH into the web server.
The link below suggests many different solutions, but none of them works in my case:
Jenkins Publish over ssh authentification failed with private key
I was also looking at the link below, which describes the same problem:
Jenkins publish over SSH failed to change to remote directory
In my case, however, I have left 'Remote Directory' empty; I don't know whether I have to specify a directory here. I did try creating a new directory under the home directory of user ec2-user, '/home/ec2-user/publish', and used that path as the Remote Directory, but it still didn't work.
Screenshot of my settings in Jenkins:
I would appreciate it if anyone could point me in the right direction or highlight any mistake I'm making in my configuration.
In my case the following steps solved the problem (the solution is based on Ubuntu 22.04):
Add these two lines to /etc/ssh/sshd_config:
PubkeyAuthentication yes
PubkeyAcceptedKeyTypes +ssh-rsa
Then restart the sshd service:
sudo service sshd restart
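To confirm the settings were actually picked up after the restart, you can print the effective configuration with sshd's extended test mode:
sudo sshd -T | grep -i pubkey
Both options should show up with the values set above (on newer OpenSSH versions PubkeyAcceptedKeyTypes is reported under its newer name, PubkeyAcceptedAlgorithms).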
You might consider the following:
a. From the screenshot you've provided, it seems that you have checked the Use password authentication, or use different key option, which requires you to fill in your key and password (the inputs from these fields will be used when connecting to your server via SSH). If you use the same SSH key and passphrase/password on all of your servers, you can untick that box and just use the config you have specified above.
b. You might also check whether port 22 of your web server allows inbound traffic from the security group in which your Jenkins server/EC2 instance is running. See the reference here.
c. Also, make sure that the remote directory you have specified exists, otherwise the connection may fail.
Here's the sample config
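Independently of the plugin, it is worth ruling the key and the network out manually from the Jenkins machine first (a sketch; the key path and host are placeholders):
ssh -v -i /path/to/key.pem ec2-user@<web-server-ip> 'echo ok'
If this prints ok, the key, user and security group are fine and the problem is in the plugin configuration; if the verbose output ends in an authentication failure, the plugin's Auth fail is coming from the key or user name rather than from Jenkins.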
I've been working on a file server product that uses smbclient to transfer files between client computers and the server. It's been working great so far with our LAMP (Ubuntu) server and Windows machines.
I'm currently trying to expand the setup to include Macs, but am having trouble with the server accessing the share on the Mac.
Here's my command and error (bracketed descriptions replace private info):
# smbclient //10.101.0.7/[share-file] -U [username]%[password] -c ls
WARNING: The "syslog" option is deprecated
NTLMSSP packet check failed due to short signature (0 bytes)!
NTLMSSP NTLM2 packet check failed due to invalid signature!
session setup failed: NT_STATUS_ACCESS_DENIED
Things I've tried:
✓ Accessing share using a Windows machine to ensure the share is setup properly - check! Works fine there.
✓ Invoking -S off or --signing=off in the command - no change.
✓ Just looking at the shares first using smbclient -L 10.101.0.7 -U [username]%[password] - same error.
✓ Googling for an answer - check! Several people with similar problems, but no working solutions so far.
The most promising thing I've seen so far involves compiling smbclient 4.4 from source and running it with no authentication (-U ""%""), but that seems like a temporary workaround based on a bug rather than a solid plan that will keep working long term. (But I'll try that next if I can't find any better ideas...)
Thanks for reading and trying to help!
Try adding --option="ntlmssp_client:force_old_spnego = yes" to the smbclient command as suggested on the samba-technical mailing list.
For me, this now lists shares on a Mac OSX server:
smbclient -U$user%$password -L $mac_host --option="ntlmssp_client:force_old_spnego = yes"
For mounting, you may need to add the nounix,sec=ntlmssp options as in
sudo mount -t cifs //$mac_host/$share $mountpoint -o nounix,sec=ntlmssp,username=$user,password=$password
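If the mount should survive reboots, the same options translate into an /etc/fstab entry; a credentials file keeps the password out of fstab (a sketch, with the mount point and file locations as placeholders):
//mac_host/share  /mnt/macshare  cifs  nounix,sec=ntlmssp,credentials=/etc/smb-cred  0  0
Here /etc/smb-cred contains username= and password= lines and should be readable by root only (chmod 600).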
On recent versions of macOS (e.g. Monterey) it is necessary to do several configuration steps to enable SMB access from Linux:
1) Open System Preferences.
2) Select Sharing.
3) Select File Sharing.
4) Ensure that the directory is listed in Shared Folders.
5) Right-click/two-finger click on the share directory.
6) Click on Advanced Options.
7) Ensure Only allow SMB encrypted connections is checked.
8) Click OK.
9) Click on Options.
10) Click on the checkbox for Share files and folders using SMB.
11) Under Windows File Sharing, ensure the appropriate user is checked.
12) Type the user's password in the 'Authenticate' dialog box and press 'OK'.
13) Click 'Done'.
You should now be able to connect from Linux to the macOS share using the commands given by @mivk.
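One caveat on mounting (an assumption based on the Only allow SMB encrypted connections setting above): if encryption is enforced, the Linux cifs mount may also need to request it with the seal option, which requires SMB3:
sudo mount -t cifs //$mac_host/$share $mountpoint -o vers=3.0,seal,username=$user,password=$password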
I am trying to implement rsyncd (through BackupPC) on a Windows 2002R2 server which already has Cygwin on it (for accessing mail logs). I normally use a lighter installation with just cygwin1.dll and rsyncd.exe plus the config files (rsyncd.conf, rsyncd.lock, rsyncd.log and rsyncd.secret), installed as a service so that it can be triggered by my remote BackupPC server, but that approach doesn't work here because the server already has a Cygwin installation.
I installed the rsyncd package through the Cygwin installer, set it up as a service (following this guide) and configured it to work with my BackupPC server.
Pings from the server are okay, and I know it passes authentication (as I originally had the path to rsyncd.secrets wrong), but now it presents me with this error:
2014-06-26 13:03:01 full backup started for directory cDrive
2014-06-26 13:03:01 Got fatal error during xfer (setuid failed)
2014-06-26 13:03:06 Backup aborted (setuid failed)
The user is privileged and I have not received this error with the light installation method (mentioned above) in the same OS environment.
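(For context: setuid failed generally means the daemon, started as root, tried to drop privileges to the uid/gid configured for the module, which defaults to nobody, and the switch failed. A module definition that sidesteps the switch would look like the sketch below; the path mirrors the cDrive module from the log, and uid = 0 is illustrative rather than my exact config.)
[cDrive]
    path = /cygdrive/c
    use chroot = false
    read only = false
    uid = 0
    gid = 0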
Unfortunately I could not get around the setuid error; however, I found a different implementation that lets me achieve the same results with my systems.
Here is the guide that I followed.
The crux of this solution is using DiskShadow in conjunction with rsyncd, and it requires DiskShadow scripts to run as part of the backup process.
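A DiskShadow script for this typically snapshots the volume and exposes the snapshot under a drive letter for rsyncd to serve (a sketch, not the exact script from the guide; the alias and drive letter are placeholders):
set context persistent
add volume c: alias cdrive
create
expose %cdrive% x:
It is run with diskshadow /s script.dsh before the transfer, and delete shadows exposed x: cleans the snapshot up afterwards.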
I have two PCs on my network:
1) CentOS
2) Windows 7
I created a repository on the Linux machine and added some pre-commit hook scripts. Then I checked out files into working-copy directories on both machines. Now, when I make changes and commit them from the Linux working copy, the pre-commit hooks work as they should. But when I commit my changes from Windows (using Tortoise or the command line), the commit executes but without any sign of the scripts running.
I have read that the scripts are launched on the PC that holds the repository (correct me if I'm wrong), so the platform I'm committing from shouldn't matter.
If anyone can explain why this doesn't work from Windows, I would be grateful.
The pre-commit hook is run by the machine that's hosting the server. If you're using the repository with a file:// URL or using svnlook or svnadmin commands then that's always the local machine since there isn't actually a server and the repository is accessed directly.
From what you're saying, it sounds to me like you're putting the repository on a network volume (SMB, NFS, etc.) and then using a file:// URL to access it. If you use one of the other access methods, you won't have this problem.
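A quick way to confirm which access method a working copy is actually using is:
svn info
If the URL line starts with file://, each client accesses the repository directly and runs the hooks itself, which matches the behaviour you're seeing.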
You have 3 options.
svnserve
svnserve is a simple daemon that provides the svn:// access method. It listens on its own network port and talks a protocol that's specific to Subversion.
svnserve over ssh
The svnserve protocol is tunneled over SSH, and an svnserve process is started on demand.
Apache HTTP
The mod_dav_svn and mod_authz_svn modules provide access to Subversion via an Apache httpd server. This uses the DAV and DeltaV protocols over HTTP (optionally with SSL/TLS support).
The SVN Book has a whole section on server setup that covers everything from choosing a server to configuring it. You probably want to read this before you make a choice, and then read the configuration steps for your chosen server.
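For example, the svnserve route is only a couple of commands (a sketch; the repository root and host name are placeholders):
svnserve -d -r /var/svn    # on the CentOS machine: run as a daemon, serving repositories under /var/svn
svn checkout svn://centos-host/repo    # on the Windows machine
With this, commits from either machine go through the daemon on CentOS, so the pre-commit hook always runs there.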
I need to set up HTTP Live Streaming on CentOS. Can anyone help with a step-by-step configuration?
I did a lot of googling but didn't find a proper solution. Everyone says this can be achieved with FFmpeg, but without giving a proper procedure.
You need to install a web server with WebDAV support; Apache with the WebDAV modules activated will do it. The mod_dav_fs module extends Apache so that users have the right to write to a predefined directory. So, first create this directory (e.g. /opt/webdav/hls/path2stream) and chmod and chown it to, e.g., user and group "apache". Then edit httpd.conf to set the server name according to the uname convention. Finally, you can log the PUT commands from the encoder and the GET commands from the player in the Apache log directory (/var/log/httpd/...access_log).
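For the encoder side, a minimal FFmpeg invocation that segments an existing file into HLS and writes it into that directory could look like this (a sketch; the input file is a placeholder, and -codec copy assumes the input is already H.264/AAC):
ffmpeg -i input.mp4 -codec copy -f hls -hls_time 10 -hls_list_size 0 /opt/webdav/hls/path2stream/stream.m3u8
The player then fetches stream.m3u8 and the .ts segments from Apache over HTTP.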
greez, nico