CVS error: connection refused - Linux

Hi, I have installed the CVS binary (i.e. created a link to the binary file) in /home/mrsx/bin and created the repository at /apps/src/CVSROOT (CVSROOT is the repository name).
I added an entry in inetd.conf as follows (all on a single line):
cvspserver stream tcp nowait root /home/mrsx/bin/cvs cvs -f --allow-root=/apps/src/CVSROOT pserver
and in /etc/services as: cvspserver 2401/tcp
Then I restarted inetd,
set CVSROOT to :pserver:username@servername:2401/apps/src/CVSROOT,
and tried to log in, but I got a connection refused error.
Can anybody please tell me what is wrong with the above steps?

I just had this problem migrating an Ubuntu cvs repository. In the Debian-Ubuntu world, do this:
apt-get install cvs xinetd
establish your repository (just follow the instructions in the manual)
make sure your users have write permission. Typically you create a cvs group, put your users in it, and mark the repository 775; chgrp -R cvs * (CVS lacks security, read the manual)
add a file in /etc/xinetd.d called cvspserver
edit the file to look similar to this:
service cvspserver
{
    port        = 2401
    socket_type = stream
    protocol    = tcp
    user        = root
    wait        = no
    type        = UNLISTED
    server      = /usr/bin/cvs
    server_args = -f --allow-root /usr/local/cvs pserver
    disable     = no
}
reboot or restart xinetd
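After restarting, a quick way to check that the pserver port is actually being served (a rough sketch; the user name and repository path are placeholders, adjust them to your setup):
sudo service xinetd restart
netstat -tln | grep 2401                      # something should be LISTENing on the pserver port
telnet localhost 2401                         # "Connection refused" here means xinetd/inetd is not serving cvspserver
cvs -d :pserver:youruser@localhost:2401/usr/local/cvs login   # placeholder user and repository path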

Related

Switching the update channel on Firefox Flame fails

I tried to follow the steps to change the update channel described here: Switch to nightly update channel. But the phone won't reboot after executing change_channel.sh because the script fails with
$ ./change_channel.sh -v aurora
adbd is already running as root
remount succeeded
cannot stat '/tmp/channel-prefs/updates.js': No such file or directory
Currently I have B2G 21.0.0.0-prerelease installed from here.
If you open the script and read line 57, there is
cat >$TMP_DIR/updates.js <<UPDATES
If it fails to create the file in that directory, it won't be able to push it when doing adb push:
$ADB push $TMP_DIR/updates.js $B2G_PREF_DIR/updates.js
So check your permissions, or change the temp directory so the script can create the updates.js file:
TMP_DIR=/tmp/channel-prefs
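As a rough sketch (assuming you keep the default TMP_DIR), simply making sure the directory exists and is writable before running the script may be enough:
mkdir -p /tmp/channel-prefs        # create the temp directory the script expects
chmod u+w /tmp/channel-prefs       # make sure your user can write updates.js there
./change_channel.sh -v aurora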

Fail2Ban is unable to block ip after multiple try

I have installed fail2ban on my Linux server, RHEL 5.4. It's not blocking IPs after the max retry limit described in jail.conf. When I try to restart fail2ban, I get the following error message:
/etc/init.d/fail2ban restart
Stopping fail2ban: [ OK ]
Starting fail2ban: ERROR NOK: (2, 'No such file or directory')
[ OK ]
I have tried many things but have failed to solve the above issue. The following is the ssh jail in the jail.conf file:
[ssh]
enabled = true
filter = sshd
action = iptables[name=SSH, port=ssh, protocol=tcp]
         sendmail-whois[name=SSH, dest=a@exm.com, sender=a@exmp.com, sendername="Fail2Ban"]
logpath = /var/log/secure
maxretry = 3
Can anybody suggest where the issue is?
To configure fail2ban, make a 'local' copy of the jail.conf file in /etc/fail2ban:
cd /etc/fail2ban
sudo cp jail.conf jail.local
Also try to restart with the default configuration before editing anything.
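Since the error only says 'No such file or directory', it also helps to check that every file the jail refers to actually exists; a rough sketch:
ls -l /var/log/secure                               # the logpath used by the [ssh] jail
ls /etc/fail2ban/filter.d/sshd.conf                 # the filter the jail references
ls /etc/fail2ban/action.d/iptables.conf /etc/fail2ban/action.d/sendmail-whois.conf
which iptables sendmail                             # binaries those actions call
fail2ban-client -d                                  # dump the parsed configuration; it often points at the broken entry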

Authentication error from server: SASL(-13): user not found: unable to canonify

Ok, so I'm trying to configure and install svnserve on my Ubuntu server. So far so good, up to the point where I try to configure sasl (to prevent plain-text passwords).
So: I installed svnserve and made it run as a daemon (I also installed it as a startup script, with the command svnserve -d -r /var/svn).
My repository is in /var/svn and has following configuration (to be found in /var/svn/myrepo/conf/svnserve.conf) (I left comments out):
[general]
anon-access = none
auth-access = write
realm = my_repo
[sasl]
use-sasl = true
min-encryption = 128
max-encryption = 256
Over to sasl, I created a svn.conf file in /usr/lib/sasl2/:
pwcheck_method: auxprop
auxprop_plugin: sasldb
sasldb_path: /etc/my_sasldb
mech_list: DIGEST-MD5
I created it in that folder as the article at this link suggested: http://svnbook.red-bean.com/nightly/en/svn.serverconfig.svnserve.html#svn.serverconfig.svnserve.sasl (and also because it existed and was listed as a result when I executed locate sasl).
Right after that I executed this command:
saslpasswd2 -c -f /etc/my_sasldb -u my_repo USERNAME
Which also asked me for a password twice, which I supplied. All going great.
When issuing the following command:
sasldblistusers2 -f /etc/my_sasldb
I get the - correct, as far as I can see - result:
USERNAME@my_repo: userPassword
Restarted svnserve, also restarted the whole server, and tried to connect.
This was the result from my TortoiseSVN client:
Authentication error from server: SASL(-13): user not found: unable to canonify
user and get auxprops
I have no clue at all what I'm doing wrong. I've been scouring the web for the past few hours, but haven't found anything except that I might need to move the svn.conf file to another location - for example, the install location of Subversion itself. which svn results in /usr/bin/svn, so I moved svn.conf to /usr/bin (although that doesn't feel right to me).
Still doesn't work, even after a new reboot.
I'm running out of ideas. Anyone else?
EDIT
I tried changing this (according to what some other forums on the internet told me to do): in the file /etc/default/saslauthd, I changed
START=no
MECHANISMS="pam"
to
START=yes
MECHANISMS="sasldb"
(Actually I had already changed START=no to START=yes before, but I forgot to mention it). But still no luck (I did reboot the whole server).
It looks like svnserve uses default values for SASL...
Check that /etc/sasl2/svn.conf is readable by the svnserve process owner.
If /etc/sasl2/svn.conf is owned by user root, group root, with mode -rw------- (600), svnserve uses the default values.
You will not be warned by any log file entry.
See section 4 of https://svn.apache.org/repos/asf/subversion/trunk/notes/sasl.txt:
This file must be named svn.conf, and must be readable by the svnserve process.
(It took me more than 3 days to understand both svnserve-sasl-ldap and this pitfall at the same time.)
I recommend installing the package cyrus-sasl2-doc and reading the section Cyrus SASL for System Administrators carefully.
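A minimal sketch of making the file readable (assuming svnserve runs as an unprivileged user in a hypothetical svn group; adjust the names to your setup):
ls -l /etc/sasl2/svn.conf           # check the current owner and mode
chown root:svn /etc/sasl2/svn.conf  # hypothetical group of the svnserve process owner
chmod 640 /etc/sasl2/svn.conf       # or simply 644; the file only holds the sasldb path and mech list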
I expect this is caused by how svnserve uses the SASL API in this call:
result = sasl_server_new(SVN_RA_SVN_SASL_NAME,
                         hostname, b->realm,
                         localaddrport, remoteaddrport,
                         NULL, SASL_SUCCESS_DATA,
                         &sasl_ctx);
if (result != SASL_OK)
  {
    svn_error_t *err = svn_error_create(SVN_ERR_RA_NOT_AUTHORIZED, NULL,
                                        sasl_errstring(result, NULL, NULL));
    SVN_ERR(write_failure(conn, pool, &err));
    return svn_ra_svn__flush(conn, pool);
  }
As you can see, svnserve does not specifically handle the case where the config file is unreadable; only OK or a generic error is expected...
I looked in /var/log/messages and found
localhost svnserve: unable to open Berkeley db /etc/sasldb2: No such file or directory
When I created the sasldb at the above path and got the permissions right, it worked. It looks like it ignores, or does not use, the configured SASL database path.
There was another suggestion that rebooting solved the problem but that option was not available to me.
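If you end up relying on the default /etc/sasldb2 path as described above, a rough sketch of recreating the users there and fixing the permissions (the realm, username and the hypothetical svn group are placeholders):
saslpasswd2 -c -f /etc/sasldb2 -u my_repo USERNAME   # same realm and user as before, but in the default db
sasldblistusers2 -f /etc/sasldb2                     # confirm the entry exists
chgrp svn /etc/sasldb2                               # hypothetical group of the svnserve process owner
chmod 640 /etc/sasldb2                               # readable by that group, not world-readable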

Vagrant puppet change owner of folder in pp exec

I am trying to develop a CakePHP application, and I am using Vagrant to run a testing environment. However, I was getting this error in the browser
Warning (2):
session_start() [http://php.net/function.session-start]:
open(/var/lib/php/session/sess_speva7ghaftl8n98r9id5a7434, O_RDWR) failed:
Permission denied (13) [CORE/Cake/Model/Datasource/CakeSession.php, line 614]
I can get rid of the error by SSHing to the vm and doing
[vagrant@myserver ~]$ sudo su -
[root@myserver ~]# chown -R vagrant. /var/lib/php/session/
I don't want to have to do this every time I restart the vm, so I tried adding this to myserver.pp
exec { 'chown':
  command => 'chown -R vagrant. /var/lib/php/session/',
  path    => '/bin',
  user    => 'root'
}
but it gets an error while starting up the vm...
err:
/Stage[main]/Myserver/Exec[chown]/returns: change from notrun to 0 failed:
chown -R vagrant. /var/lib/php/session/
returned 1 instead of one of [0] at /tmp/vagrant-puppet/manifests/myserver.pp:35
I was unable to find any useful examples of how to use exec on the internet, and I have never used Vagrant or Puppet before, so the above code is just my best guess; I apologize if it is a simple fix to get this working.
I have verified, using which chown within the VM, that the path is /bin, and the command is exactly the same as when I run it in the VM myself. I'm thinking it is the user that is causing the problem. Do I have that line right? Is it even possible to exec commands as root from a .pp file?
When using exec, you normally have to enter the full path to the command you execute. So if you change your command into
exec { 'chown':
  command => '/bin/chown -R vagrant:vagrant /var/lib/php/session/',
  path    => '/bin',
  user    => 'root'
}
it should work imo.
However, it depends a lot on how you install your application. If the setup/start of the application is also managed with Puppet, you can instead manage the directory you're interested in with Puppet, like this:
file { "/var/lib/php/session":
  ensure  => directory,
  group   => "vagrant",
  owner   => "vagrant",
  recurse => true,
}
before you start your app. This would be much more the Puppet way, as you then manage a resource instead of executing commands. However, normally /var/lib/... should not be owned by anyone other than root.
So you should maybe look into how your app is started and make it start with another user or as root. If it is started with an exec, you can add an additional property
user => root
to it and that should also do the trick.
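Either way, you can verify the result without logging into the VM interactively; for example:
vagrant provision                                    # re-run Puppet against the running VM
vagrant ssh -c 'ls -ld /var/lib/php/session'         # owner and group should now be vagrant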

Automating Linux EBS snapshots backup and clean-up

Are there any good, up-to-date shell scripts for EBS snapshots to S3, and clean-up of older snapshots?
I looked through SO, but most are from 2009 and refer to links that are either broken or outdated.
Thanks.
Try the following shell script; I use this to create snapshots for most of my projects and it works well.
https://github.com/rakesh-sankar/Tools/blob/master/AmazonAWS/EBS/EBS-Snapshot.sh
You can fork the project and send me a pull request to add the clean-up of old entries. Also watch this repo; when I find some time I will update the code with clean-up functionality.
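Until the script gains a clean-up step, a rough sketch of pruning old snapshots with the AWS CLI (assuming the CLI is installed and configured; the volume ID is a placeholder) could look like this:
VOLUME=vol-xxxxxxxx                                  # placeholder volume ID
CUTOFF=$(date -d '10 days ago' +%Y-%m-%d)            # keep roughly 10 days of snapshots
for SNAP in $(aws ec2 describe-snapshots --owner-ids self \
        --filters Name=volume-id,Values=$VOLUME \
        --query "Snapshots[?StartTime<='$CUTOFF'].SnapshotId" --output text); do
    aws ec2 delete-snapshot --snapshot-id "$SNAP"    # remove snapshots older than the cutoff
done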
If it is OK to use PHP as a shell script, you can use my latest script with the latest AWS PHP SDK. This is much simpler because you do not need to set up the environment. Just feed the script your API keys.
How to setup
Open an SSH connection to your server.
Navigate to the folder:
$ cd /usr/local/
Clone this gist into an ec2 folder:
$ git clone https://gist.github.com/9738785.git ec2
Go to that folder
$ cd ec2
Make backup.php executable
$ chmod +x backup.php
Open the releases page of the AWS PHP SDK GitHub project and copy the URL of the aws.zip button. Now download it to your server:
$ wget https://github.com/aws/aws-sdk-php/releases/download/2.6.0/aws.zip
Unzip this file into an aws directory:
$ unzip aws.zip -d aws
Edit the backup.php file and set all the settings in lines 5-12:
$dryrun = FALSE;
$interval = '24 hours';
$keep_for = '10 Days';
$volumes = array('vol-********');
$api_key = '*********************';
$api_secret = '****************************************';
$ec2_region = 'us-east-1';
$snap_descr = "Daily backup";
Test it. Run this script
$ ./backup.php
Check whether a snapshot was created.
If everything is OK, just add a cron job:
* 23 * * * /usr/local/ec2/backup.php
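To confirm the job is installed and snapshots are being created, something like this works (again assuming the AWS CLI is configured). Note that * 23 * * * runs the script every minute between 23:00 and 23:59; 0 23 * * * would run it once a day.
crontab -l | grep backup.php                         # the cron entry should be listed
aws ec2 describe-snapshots --owner-ids self \
    --query 'Snapshots[*].[SnapshotId,StartTime,Description]' --output table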
