Over the past week, our users have been reaching out to complain that they can't upload or modify files on the company file server. Specifically, they drag files onto the share through Windows Explorer and are met with 'Access Denied'.
The file server is an Ubuntu VM joined to the Windows domain following the documentation in Setting_up_Samba_as_a_Domain_Member. Admittedly, I reused our old smb.conf (see below) from the previous file server, as my understanding of Linux/Samba is very limited and I needed to get the share up and running as soon as possible.
Here's what I've done so far:
1. SSHed into the file server and checked the permissions of a folder that was known to be having issues.
ls -ll directory_in_question
drwxrwsr-x 12 root name_of_active_directory_group 4096 Dec 17 15:21 ./
Noticed that 'name_of_active_directory_group' seems to be correct; however, the members of this group still can't upload files to this location through Explorer.
2. Checked to see if I can even resolve the group using getent group 'name_of_active_directory_group', which returns name_of_active_directory_group:*:10083:username_one,username_two.... I also tried running id username_one, and it seems to reach our AD DC fine.
3. Set the Samba log level to 5 and monitored /var/log/samba/ for anything useful. The only line that really jumps out at me is smbd_smb2_request_error_ex: smbd_smb2_request_error_ex: idx[1] status[NT_STATUS_ACCESS_DENIED]. I can provide the full log if it helps.
Temporary Fix
If I run setfacl -Rm u:username:rwX directory_in_question, the user can make changes. Likewise, if I change the permissions of the folder with chmod o+rwx directory_in_question, it works without a hitch. However, chmod g+rwx directory_in_question, where I grant the permissions to the group directly, doesn't work.
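(For anyone puzzled by the capital X in rwX: it grants execute only to directories, or to files that already have some execute bit, which is why it is safe to use recursively. A throwaway demo with chmod, which uses the same notation; the temp directory and umask here are just for the demo:)

```shell
# Capital X adds execute only to directories (and files that already
# have an execute bit), so a recursive rwX won't make plain files executable.
umask 022
tmp=$(mktemp -d)
touch "$tmp/file"
mkdir "$tmp/dir"
chmod -R go+rwX "$tmp"
stat -c '%a %n' "$tmp/file" "$tmp/dir" | sed "s|$tmp/||"
```

On a stock Linux shell this prints 666 for the file and 777 for the directory: the file gained rw but not x.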
The smb.conf looks like this
#======================= Global Settings =======================
[global]
## Browsing/Identification ###
server string = %h server (Samba, Ubuntu)
security = ads
workgroup = COMPANY_A
realm = COMPANY_A.net
# dedicated keytab file = /etc/krb5.keytab
kerberos method = system keytab
disable netbios = Yes
load printers = No
printing = bsd
printcap name = /dev/null
disable spoolss = Yes
## User mapping!! (to map old users on server)
username map = /etc/samba/smbusers
#### Debugging/Accounting ####
# This tells Samba to use a separate log file for each machine
# that connects
log file = /var/log/samba/log.%m
# log level = 5
# Cap the size of the individual log files (in KiB).
max log size = 1000
# If you want Samba to only log through syslog then set the following
# parameter to 'yes'.
# syslog only = no
# We want Samba to log a minimum amount of information to syslog. Everything
# should go to /var/log/samba/log.{smbd,nmbd} instead. If you want to log
# through syslog you should set the following parameter to something higher.
syslog = 0
# Do something sensible when Samba crashes: mail the admin a backtrace
panic action = /usr/share/samba/panic-action %d
# Log level
# log level = 5
lm announce = no
server max protocol = SMB3
server min protocol = NT1
client max protocol = SMB3
client min protocol = NT1
[share]
comment = Share folder
path = /mnt/share_name
read only = no
guest ok = no
directory mask = 0744
force directory mode = 02775
create mask = 0664
force create mode = 0664
follow symlinks = yes
wide links = no
veto files = /._*/.DS_Store/
vfs objects = streams_xattr
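As an aside, the directory mask and force directory mode lines above combine bitwise: Samba first ANDs the client-requested mode with the mask, then ORs in the force mode. A quick sanity check of the values from the [share] section (the requested mode of 0777 is an assumption about what a Windows client asks for):

```shell
# final mode = (requested AND "directory mask") OR "force directory mode"
requested=0777   # assumed client-requested mode
mask=0744        # "directory mask" from the [share] section
force=02775      # "force directory mode" from the [share] section
printf '%o\n' $(( (requested & mask) | force ))   # -> 2775
```

So new directories still end up 2775 here; the mask is not what strips group write on directories in this config.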
Output of realm list:
company_A.net
type: kerberos
realm-name: company_A.NET
domain-name: company_A.net
configured: kerberos-member
server-software: active-directory
client-software: winbind
required-package: winbind
required-package: libpam-winbind
required-package: samba-common-bin
login-formats: COMPANY_A\%U
login-policy: allow-any-login
company_A.net
type: kerberos
realm-name: company_A.NET
domain-name: company_A.net
configured: kerberos-member
server-software: active-directory
client-software: sssd
required-package: sssd-tools
required-package: sssd
required-package: libnss-sss
required-package: libpam-sss
required-package: adcli
required-package: samba-common-bin
login-formats: %U
login-policy: allow-permitted-logins
permitted-logins:
permitted-groups:
Go and read the Samba wiki page again and then set up your smb.conf correctly, this time without sssd.
I also noticed this: '## User mapping!! (to map old users on server)'. That isn't what username map is for (well, not in an AD domain).
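In particular, the winbind-based member setup on that wiki page wants explicit idmap ranges in [global], which your smb.conf is missing entirely. Roughly along these lines (the ranges and the choice of the rid backend are examples to adapt, not drop-in values):

```
[global]
    security = ads
    workgroup = COMPANY_A
    realm = COMPANY_A.NET
    # Default (*) domain: BUILTIN and local accounts
    idmap config * : backend = tdb
    idmap config * : range = 3000-7999
    # The AD domain itself; rid gives deterministic IDs (range-low + RID)
    idmap config COMPANY_A : backend = rid
    idmap config COMPANY_A : range = 10000-999999
```

Without these, the UIDs/GIDs Samba sees may not match what sssd hands to the filesystem, which is exactly the kind of mismatch that produces NT_STATUS_ACCESS_DENIED on an apparently correct mode.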
I am currently using Chef to deploy a Jenkins instance on a managed node. I am using the following public supermarket cookbook: https://supermarket.chef.io/cookbooks/jenkins .
I am using the following code in my recipe file to enable authentication:
jenkins_script 'activate global security' do
  command <<-EOH.gsub(/^ {4}/, '')
    import jenkins.model.*
    import hudson.security.*

    def instance = Jenkins.getInstance()

    def hudsonRealm = new HudsonPrivateSecurityRealm(false)
    hudsonRealm.createAccount("Administrator","Password")
    instance.setSecurityRealm(hudsonRealm)
    instance.save()

    def strategy = new GlobalMatrixAuthorizationStrategy()
    strategy.add(Jenkins.ADMINISTER, "Administrator")
    instance.setAuthorizationStrategy(strategy)
    instance.save()
  EOH
end
This works great to set up security on the instance the first time the recipe is run on the managed node. It creates an administrator user with administrator permissions on the Jenkins server. In addition to enabling security on the Jenkins instance, this recipe also installs plugins.
Once security has been enabled, installation of plugins which do not yet exist (but are specified to be installed) fails:
ERROR: anonymous is missing the Overall/Read permission
I assume this error is related to the newly created administrator account, with Chef attempting to install the plugins as the anonymous user rather than the administrator user. Is there anything I should set in my recipe file to work around this permissions issue?
The goal here is that in the event a plugin is upgraded to an undesired version or uninstalled completely, running the recipe will reinstall / rollback any plugin changes. Currently this does not appear to be possible if I also have security enabled on the Jenkins instance.
EDIT: It should also be noted that each time I need to repair plugins this way, I currently have to disable security and then run the entire recipe (plugin installation + security enable).
Thanks for any help!
The jenkins_plugin resource doesn't appear to expose any authentication options, so you'll probably need to build your own resource. If you dive into the code you'll see that the underlying executor layer in the cookbook does support auth (and a whole bunch of other stuff), so it might be easy to do in a copy-fork of just that resource (and send us a patch).
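For context, that executor layer ultimately drives jenkins-cli.jar, so the manual equivalent of an authenticated plugin install looks roughly like this (URL, key path, and plugin name are all placeholders, not values from the cookbook):

```
java -jar jenkins-cli.jar -s http://localhost:8080 \
    -i ~/.ssh/chefjenkins_rsa install-plugin git -deploy
```

Any custom resource you build only needs to arrange for those credentials to reach the executor.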
We ran into this because we had previously been defining :jenkins_username and :jenkins_password, but those only work with the remoting protocol, which is being deprecated in favor of the REST API accessed via SSH or HTTPS, and in newer releases remoting defaults to DISABLED.
We ended up combining the logic from @StephenKing's cookbook and the information from chef-cookbooks/jenkins and this GitHub issue comment on that repo to get our plugin installation working after enabling authentication via Active Directory on our instances (we used SSH).
We basically pulled the example from https://github.com/TYPO3-cookbooks/jenkins-chefci/blob/e1b82e679074e96de5d6e668b0f10549c48b58d1/recipes/_jenkins_chef_user.rb, removed the portion that automatically generated the key if it didn't exist (our instances stick around and need to be mostly deterministic), and replaced the File.read with a lookup in our encrypted data bag (or functional equivalent).
recipes/authentication.rb
require 'aws-sdk'
require 'net/ssh'
require 'openssl'

ssm = Aws::SSM::Client.new(region: 'us-west-2')

unless node.run_state[:jenkins_private_key]
  key_contents = ssm.get_parameter(name: node['jenkins_wrapper']['secrets']['chefjenkins']['id_rsa']['path'], with_decryption: true).parameter.value
  key_path = node['jenkins_wrapper']['secrets']['chefjenkins']['id_rsa']['path']
  key = OpenSSL::PKey::RSA.new key_contents

  # We use `log` here so we can assert the correct path was queried without
  # exposing or hardcoding the secret in our tests
  log 'Successfully read existing private key from ' + key_path

  public_key = [key.ssh_type, [key.to_blob].pack('m0'), 'auto-generated key'].join(' ')

  # Create the Chef Jenkins user with the public key
  jenkins_user 'chefjenkins' do
    id 'chefjenkins' # This also matches up with an Active Directory user
    full_name 'Chef Client'
    public_keys [public_key]
  end

  # Set the private key on the Jenkins executor
  node.run_state[:jenkins_private_key] = key.to_pem
end

# This was our previous implementation that stopped working recently
# jenkins_password = ssm.get_parameter(name: node['jenkins_wrapper']['secrets']['chefjenkins']['path'], with_decryption: true).parameter.value
# node.run_state[:jenkins_username] = 'chefjenkins' # ~FC001
# node.run_state[:jenkins_password] = jenkins_password # ~FC001
recipes/enable_jenkins_sshd.rb
port = node['jenkins']['ssh']['port']

jenkins_script 'configure_sshd_access' do
  command <<-EOH.gsub(/^ {4}/, '')
    import jenkins.model.*

    def instance = Jenkins.getInstance()
    def sshd = instance.getDescriptor("org.jenkinsci.main.modules.sshd.SSHD")
    def currentPort = sshd.getActualPort()
    def expectedPort = #{port}
    if (currentPort != expectedPort) {
      sshd.setPort(expectedPort)
    }
  EOH
  not_if "grep #{port} /var/lib/jenkins/org.jenkinsci.main.modules.sshd.SSHD.xml"
  notifies :execute, 'jenkins_command[safe-restart]', :immediately
end
attributes/default.rb
# Enable/disable SSHd.
# If the port is 0, Jenkins will serve SSHd on a random port.
# If the port is > 0, Jenkins will serve SSHd on that port specifically.
# If the port is -1, SSHd is turned off.
default['jenkins']['ssh']['port'] = 8222
# This happens to be our lookup path in AWS SSM, but
# this could be a local file on Jenkins or in databag or wherever
default['jenkins_wrapper']['secrets']['chefjenkins']['id_rsa']['path'] = 'jenkins_wrapper.users.chefjenkins.id_rsa'
I have set up an SVN server on a RHEL 7.2 machine using the built-in RPM, and then created a repository.
After creating the repository demorepo, I was able to access it from a client through the 'svn+ssh' protocol as the 'root' user.
But later I enabled path-based authorization and configured the svnserve.conf, passwd and authz files of the repository as below:
svnserve.conf file
anon-access = none
auth-access = write
password-db = passwd
authz-db = authz
passwd file
rouser1 = pswd1
rouser2 = pswd2
rwuser1 = pswd3
rwuser2 = pswd4
spluser = pswd5
authz file
[groups]
readgrp = rouser1,rouser2,spluser
writegrp = rwuser1,rwuser2
[demorepo:/]
#readgrp = r
#writegrp = rw
[demorepo:/proj1]
spluser = rw
[demorepo:/proj2]
spluser =
Now, after configuring the above files, I can access the repository through the "svn" protocol (not through the ssh tunnel), but I have lost access through the "svn+ssh" protocol.
So, is there any way to access the repository through both protocols simultaneously while path-based authorization is enabled? Or please let me know if I have made any mistake in my configuration.
Most likely it is a path issue.
If you are using the same path for svn and svn+ssh, that is the problem: over ssh, svnserve takes the full filesystem path. So if we assume /proj1 is located in
/home/user/project1
the svn+ssh path will be yoursite.com/home/user/project1
while the svn path is yoursite.com/project1
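One way to make both protocols see the same repository root (so the [demorepo:/...] sections in your authz file match in both cases) is to pin the root for ssh-spawned svnserve with a command restriction in the server-side authorized_keys. A sketch, with the repository parent directory and key material as placeholders:

```
# ~/.ssh/authorized_keys entry on the server, all on one line:
command="svnserve -t -r /apps/src" ssh-rsa AAAA... user@client
```

With that in place, svn+ssh://server/demorepo/proj1 and svn://server/demorepo/proj1 resolve to the same repository path, and the path-based rules apply identically over both protocols.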
This is my current setup:
I have 2 Active Directory servers (AD1 = TEST1, AD2 = TEST2). AD2 is a trusted domain, and the Samba version is 3.6.9-168.el6_5.
I have successfully integrated my Linux clients (Red Hat Enterprise Linux Server release 6.5) into the AD1 server with the smb.conf file below. I need consistent UIDs and GIDs for the users of both domains, TEST1 and TEST2. wbinfo -u and -g work fine for me with this configuration for both domains, and I get consistent UIDs and GIDs for the TEST1 domain. But for TEST2 (the trusted domain), wbinfo -i shows the error below:
# wbinfo -i TEST2\\user1
failed to call wbcGetpwnam: WBC_ERR_DOMAIN_NOT_FOUND
Could not get info for user user1
My current smb.conf file as below
[global]
    workgroup = TEST1
    realm = TEST1.LOCAL
    netbios name = LB001
    security = ads
    winbind offline logon = yes
    allow trusted domains = yes
    winbind enum users = yes
    winbind enum groups = yes
    winbind use default domain = yes
    template home dir = /home/%U
    idmap config * : backend = autorid
    idmap config * : range = 1000000-1999999
    template shell = /bin/bash
I need help getting consistent UIDs and GIDs for both domains with this autorid option.
Thanks in advance.
Firstly, wbinfo -i is a high-level tool that tests nsswitch integration; much more than just ID mapping can go wrong there. The error message DOMAIN_NOT_FOUND is also misleading (a bug in the wbinfo code).
The atomic test of ID mapping would be this:
wbinfo -n TEST2\\user1
to get the SID (the Windows user ID) for the user, and then
wbinfo -S SID
with that SID to check the UID assignment. Similarly, use
wbinfo -Y SID
if you are testing a group object.
That being said, with idmap autorid you are not guaranteed to get the exact same mappings on every server, because it assigns incrementing subranges of the defined overall range to the domains in the order in which they first contact the server.
So if you want to be 100% certain to have the exact same IDs, you should consider an explicit configuration using the rid backend, like so:
[global]
idmap config * : backend = tdb
idmap config * : range = 1000000-1999999
idmap config TEST1 : backend = rid
idmap config TEST1 : range = 2000000-2999999
idmap config TEST2 : backend = rid
idmap config TEST2 : range = 3000000-3999999
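With the rid backend the mapping is plain arithmetic: unix ID = RID + range-low (with the default base_rid of 0), so the same SID always yields the same UID on every server. A quick sketch using the TEST2 range above and a made-up SID:

```shell
# idmap_rid: unix ID = RID - base_rid + range_low  (base_rid defaults to 0)
SID='S-1-5-21-1111111111-2222222222-3333333333-1105'  # hypothetical user SID
RID=${SID##*-}        # the RID is the last hyphen-separated component
RANGE_LOW=3000000     # low end of TEST2's range in the config above
echo $(( RANGE_LOW + RID ))   # -> 3001105
```

That determinism is exactly what autorid cannot promise across independently built servers.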
I want to monitor an Azure PaaS database with Nagios. I'm using the plugin available at https://github.com/MsOpenTech/WaMo
When I try to check the database:
./check_azure_sql.py -u -p -d -k top5queries
I get this error message:
('08001', '[08001] [unixODBC][FreeTDS][SQL Server]Unable to connect to data source (0) (SQLDriverConnect)')
Error connecting to database
All dependencies are installed (listed on the GitHub plugin site).
Here is my /etc/odbcinst.ini:
[ODBC]
Trace = Yes
TraceFile = /tmp/odbc.log
[FreeTDS]
Description = ODBC For TDS
Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsS.so
UsageCount = 1
Here you can see my /etc/freetds/freetds.conf:
# $Id: freetds.conf,v 1.12 2007/12/25 06:02:36 jklowden Exp $
#
# This file is installed by FreeTDS if no file by the same
# name is found in the installation directory.
#
# For information about the layout of this file and its settings,
# see the freetds.conf manpage "man freetds.conf".
# Global settings are overridden by those in a database
# server specific section
[global]
# TDS protocol version
; tds version = 4.2
# Whether to write a TDSDUMP file for diagnostic purposes
# (setting this to /tmp is insecure on a multi-user system)
; dump file = /tmp/freetds.log
; debug flags = 0xffff
# Command and connection timeouts
; timeout = 10
; connect timeout = 10
# If you get out-of-memory errors, it may mean that your client
# is trying to allocate a huge buffer for a TEXT field.
# Try setting 'text size' to a more reasonable limit
text size = 64512
# A typical Sybase server
[egServer50]
host = symachine.domain.com
port = 5000
tds version = 5.0
# A typical Microsoft server
[egServer70]
host = ntmachine.domain.com
port = 1433
tds version = 7.0
And my /etc/odbc.ini is empty.
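For reference, from the FreeTDS documentation I understand Azure SQL requires TDS protocol 7.1 or newer and a server entry (or DSN) pointing at the Azure host. Something like the following is what I believe is missing; the server and database names are placeholders, and I'm not sure this alone fixes it:

```
# /etc/freetds/freetds.conf -- an entry for the Azure server
[azure]
    host = yourserver.database.windows.net
    port = 1433
    tds version = 7.2

# /etc/odbc.ini -- a matching DSN, if the plugin connects by DSN
[azure]
    Driver = FreeTDS
    Servername = azure
    Database = yourdatabase
```

Azure also expects the login in the form username@yourserver rather than a bare username.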
Does anybody have any idea?
bhagdev, to keep it simple, I'm trying to monitor an Azure PaaS SQL database with Nagios.
I did not write the plugin available at github.com/MsOpenTech/WaMo. As a Nagios admin, I only need to execute the command ./check_azure_sql.py -u (username) -p (password) -d (database) -k (key) (check_azure_sql.py is written in Python) from the Debian Linux CLI.
So when I execute the command above, I get the error message:
('08001', '[08001] [unixODBC][FreeTDS][SQL Server]Unable to connect to data source (0) (SQLDriverConnect)') Error connecting to database.
Thanks for your help, guys.
Hi, I have installed the CVS binary (i.e. created a link to the binary file) in /home/mrsx/bin and created the repository in /apps/src/CVSROOT (CVSROOT is the repository name).
I added this entry to inetd.conf (all on a single line):
cvspserver stream tcp nowait root /home/mrsx/bin/cvs cvs -f --allow-root=/apps/src/CVSROOT pserver
and this to /etc/services: cvspserver 2401/tcp
Then I restarted inetd and set CVSROOT to :pserver:username@servername:2401/apps/src/CVSROOT
When I tried to log in, I got a 'connection refused' error.
Can anybody please tell me what is wrong in the steps above?
I just had this problem migrating an Ubuntu cvs repository. In the Debian/Ubuntu world, do this:
apt-get install cvs xinetd
establish your repository (just follow the instructions in the manual)
make sure your users have write permission. Typically, create a cvs group, put them in it, and mark the repos 775: chgrp -R cvs * (cvs lacks security; read the manual)
add a file in /etc/xinetd.d called cvspserver
edit the file similar to this:
service cvspserver
{
port = 2401
socket_type = stream
protocol = tcp
user = root
wait = no
type = UNLISTED
server = /usr/bin/cvs
server_args = -f --allow-root /usr/local/cvs pserver
disable = no
}
reboot or restart xinetd
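After the restart, a few sanity checks run from the client side can narrow down a 'connection refused' quickly (host, user, and repository path below are placeholders for your own values):

```
netstat -tln | grep 2401                                   # on the server: is xinetd listening?
telnet cvshost 2401                                        # from the client: is the port reachable?
cvs -d :pserver:username@cvshost:2401/usr/local/cvs login  # then try the actual login
```

If telnet can't connect, the problem is xinetd or a firewall, not CVS itself.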