I'm having some issues with SELinux.
When trying to visit my website I get 403 Forbidden from nginx, and the server pops up with an error telling me to run grep NGINX /var/log/audit/audit.log | audit2allow -M mypol, which I did.
However, when trying to load the page it now says Access Denied and asks me to run grep PHP-FPM /var/log/audit/audit.log | audit2allow -M mypol, and when I do this it reverts back to 403 Forbidden and asks me to run the first command again.
It's as if the NGINX policy overwrites the php-fpm one and vice versa. How would I solve this without disabling SELinux?
I have access to the GNOME desktop on my server, and the SELinux security alert tells me to use these commands to solve the issue. The first command does solve it, but then another issue comes up, and when I use the second command it overwrites the first and I'm back to square one. I know that if I disable SELinux it will work, but that's unsafe and puts the server at risk.
Thanks.
Figured it out. For anyone else with the same issue (403 Forbidden plus an SELinux security alert), run this command as root against your site's document root:
restorecon -r /srv/www/domain.com
Fixed it for me and now everything is running as it should.
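For the record, the two suggested commands most likely kept clobbering each other because both build a policy module with the same name (-M mypol), so installing the second replaces the first. If a custom module really is needed (rather than just relabeling with restorecon as above), one option is to feed both sets of denials into a single module; the grep pattern and module name below are only illustrations:
grep -iE 'nginx|php-fpm' /var/log/audit/audit.log | audit2allow -M nginx_phpfpm_local
semodule -i nginx_phpfpm_local.pp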
I've been trying to send an HTTPS request using the ssl.https library in Lua, but no matter what URL I give, I always get permission denied and no other values (headers, etc.). The Linux distribution I am using is CentOS 7.
Here is the example code:
local httpsocket = require("socket.http")  -- LuaSocket's plain HTTP module (not used in this snippet)
local httpssocket = require("ssl.https")   -- LuaSec's HTTPS module
local ltn12 = require("ltn12")             -- LTN12 sources/sinks (not used in this snippet)

-- On failure, res is nil and code holds an error string (here: "permission denied")
-- instead of the HTTP status code.
local res, code, response_headers, status = httpssocket.request("https://www.google.com")
module:log("info", "%s %s", code .. "", response_headers)
The code itself is part of a prosody plugin and the last line in this example prints this out:
permission denied <nil>
My question is how do I fix this issue so that I can access the page?
EDIT: It seems that the problem might be the user that the service runs under: it needs root privileges, otherwise it throws an EACCES (permission denied) error for ports lower than 1024. Does anyone know what to do in this case?
So... after attempting to fix this issue again, I finally found the solution. If you are having trouble with services not being able to send HTTP/HTTPS requests on CentOS, there is a single command that has to be run to fix this issue:
setsebool -P nis_enabled 1
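To confirm the boolean actually changed, getsebool prints its current value:
getsebool nis_enabled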
For those who might have similar issues but not quite the same as mine: look into /var/log/audit/audit.log for anything related to your program, process, or service, then use this command:
grep <pattern_to_match_specific_log> /var/log/audit/audit.log | audit2why
This will give you a reason why it failed and how to fix it.
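For example, for a Prosody plugin like the one above, the process name is a reasonable starting pattern (the pattern here is only an illustration; match whatever appears in your audit log):
grep prosody /var/log/audit/audit.log | audit2why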
OS: Ubuntu 16.04
Hypervisor: VirtualBox
Network configuration: NAT network with port forwarding to access the VMs through the host IP. I can also ping a VM from another VM.
I am trying to connect my Jenkins app hosted on a VM to my BitBucket server, also on a VM. I followed a tutorial on the internet, but when I enter the address of my Git repository I get this:
Failed to connect to repository : Command "usr/bin/git ls-remote -h http://admin@192.168.6.102:8005/scm/tes/repository-test.git HEAD" returned status code 128:
stdout:
stderr: fatal: unable to access 'http://admin@192.168.6.102:8005/scm/tes/repository-test.git/': The requested URL returned error: 403
So, to be sure, I tried to execute the command in a terminal... and there it works (see the attached screenshot); I can also push, clone, pull, etc.
Do you have an explanation?
EDIT:
I tried some other things, like running the command with and without sudo, to see whether it was a permissions problem, and it seems that it's not.
But I see that there is no output when the "HEAD" argument is used.
Do you think that because "HEAD" gives no output, Git inside Jenkins interprets it as no answer and returns the 403 error?
EDIT 2:
I found this on the web: http://jenkins-ci.361315.n4.nabble.com/Jenkins-GIT-ls-remote-error-td4646903.html
The guy has the same problem but in a different way; I will try to allocate more RAM to see if it does the trick.
There could be many possible problems, but you are getting 403 - Access Forbidden, which indicates some problem with permissions. I would suggest checking these common mistakes first:
a) trying https instead of http - my SCM only uses https,
b) check if admin is the correct user - the SCM by default uses scmadmin.
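A quick way to test both points by hand is to rerun the same ls-remote call with the adjusted scheme and user (the URL is the one from the question; change it to match your setup):
git ls-remote -h https://admin@192.168.6.102:8005/scm/tes/repository-test.git HEAD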
Here I run the exact same command twice.
The first time I used the proxy configuration which I need to access the internet, and the second time I set the proxy server to "none".
So there is a problem with the proxy.
I had thought that the proxy was not used for NAT connections in VirtualBox...
I found the solution.
I had to reinstall Jenkins to get a user named "jenkins" with its own home directory.
I don't know if it is related or not, but I had configured my BitBucket server to use only HTTPS with a self-signed certificate (I work on a LAN).
My problem turned out to be linked to my proxy settings.
I had disabled all my proxy settings in Linux, which is why the command that didn't work in Jenkins worked from a terminal.
When I logged in with sudo su jenkins, the commands also worked.
I then found a "proxy.xml" file in the home directory of the jenkins user. I opened it and saw my old proxy settings.
I deleted its contents with vim, saved, restarted Jenkins, and the error was gone.
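A rough sketch of those steps, assuming the default Jenkins home directory of /var/lib/jenkins (adjust the paths for your install):
sudo su - jenkins                 # switch to the jenkins user
env | grep -i proxy               # look for leftover proxy environment variables
cat /var/lib/jenkins/proxy.xml    # inspect the proxy settings Jenkins has stored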
There could also be a Git version mismatch.
I would suggest updating Git; maybe that will resolve your issue.
So, I tried to set up a public SMB share with Samba on CentOS 7. Now I have it set up, and I have a headache. But, sweet victory. I'm posting this here for all y'all so that you don't need to waste your time. It's actually easy; you just need to know the hoops you have to jump through. I'll also edit the Samba wiki.
The first problem was that it wouldn't connect at all, except locally:
Remote Connection (my Linux desktop):
-------
[root@my-desktop ~]# smbclient //sambaserver/PublicDocs -N
Error connecting to 192.168.100.97 (No route to host)
Connection to cgybkp01 failed (Error NT_STATUS_HOST_UNREACHABLE)
On Windows 8, using Windows Explorer, after typing "\\sambaserver" into the address bar, the progress bar would wait, wait, wait, then time out. The error message was:
Remote Connection (my Windows 8 desktop):
Windows cannot access \\sambaserver
Check the spelling of the name. Otherwise, there might be a problem with your network. To
try to identify and resolve network problems, click Diagnose.
This ended up being a problem with firewalld. To unblock Samba, I needed to add this line to /etc/firewalld/zones/public.xml:
<service name="samba"/>
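The same change can also be made with firewall-cmd instead of editing the XML by hand (this assumes the default public zone):
firewall-cmd --permanent --zone=public --add-service=samba
firewall-cmd --reload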
Perfect, now I can connect!
But, I was actually mounting an NFS share, so I had one more issue, with SELinux. Now, when I attempt to connect with smbclient...
smbclient //sambaserver/PublicDocs -N
I can connect, but when I try to ls, I get the error: "NT_STATUS_ACCESS_DENIED" in CentOS 7. So, how do I connect?
The first thing everyone recommended that I try was file permissions. If you're not familiar with file permissions in Linux, I'd recommend trying those first. But for me, that didn't work, because SELinux was blocking me.
To see all of the SELinux options for Samba, type:
getsebool -a | grep samba
getsebool -a | grep smb
The one I needed to change was samba_share_nfs, because I was sharing an NFS mounted directory:
setsebool -P samba_share_nfs on
CentOS maintains a list of these booleans here.
I have installed Nagios successfully on Fedora 17, but when I try to open Nagios at http://mylocalhost.com/nagios it asks for a username and password. After entering them, I get a 403 Forbidden error with the message: You don't have permission to access /nagios/ on this server.
I am a bit confused about how to resolve this issue. I read some posts that said to create an empty index.html inside the HTTP root directory. I tried that, but the same error is still there.
http://www.unixmen.com/nagios-http-warning-http11-403-forbidden-solved/
If I am not wrong, the HTTP root directory is /var/www/html?
Oops... sorry, it was a problem with my httpd service, which was actually running but was not publicly accessible.
I simply flushed iptables, then checked whether the httpd service was running properly.
Now it's working great.
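Roughly, those checks look like this (note that iptables -F removes all firewall rules, so treat it as a temporary test rather than a fix):
iptables -L -n            # list the current firewall rules
iptables -F               # flush all rules (temporary, for testing only)
systemctl status httpd    # confirm Apache is actually running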
I think you should create an index.html file in /var/www/html.
After that you can restart the nagios and httpd services.
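A minimal sketch of that suggestion (the service names assume a standard Fedora setup and may differ on yours):
touch /var/www/html/index.html
systemctl restart httpd nagios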
varnishlog is returning:
_.vsm: No such file or directory
Has anyone else seen this before?
It looks like varnishlog is not pointing to the correct directory, or does not have access to it.
Please check the command-line options of varnishd. If the daemon runs with the -n <instancename> argument, you have to pass the same argument to varnishlog as well.
The second thing is to check the permissions of the Varnish working directory.
To see the directory currently in use, log in as root and run the command below:
$ lsof -p <PID of varnishd> | grep vsm
Once revealed, make sure the full path is readable by your user.
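For example, if varnishd was started with -n myinstance (the instance name is only an illustration), varnishlog needs the same flag:
varnishlog -n myinstance    # use the same name passed to varnishd via -n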
In Varnish 4.1 the root cause can be incorrect permissions for reading the _.vsm file. For example:
# service varnishncsa start
* Starting HTTP accelerator log deamon [fail]
Can't open log - retrying for 5 seconds
Can't open VSM file (Cannot open /var/lib/varnish/dev-me/_.vsm: Permission denied
varnishncsa runs as the varnishlog user, but /var/lib/varnish/dev-me/_.vsm is readable only by the varnish group and the root user:
# ls -l /var/lib/varnish/dev-me/_.vsm
-rw-r----- 1 root varnish 84934656 Apr 15 05:58 /var/lib/varnish/dev-me/_.vsm
So you can fix this problem in the following way:
# usermod -a -G varnish varnishlog
# id varnishlog
uid=110(varnishlog) gid=116(varnishlog) groups=116(varnishlog),115(varnish)
And now you can start varnishncsa.
In our case the hostname of the server was changed.
If you do not specify an instance name, Varnish uses the hostname. It was looking for the directory holding the shared-memory logging configuration under the new hostname, but the instance was still running from the directory named after the old hostname.
Restarting varnish solved the problem.
I just had the same error message while trying to issue varnishadm commands. It turned out that I had renamed my machine without stopping Varnish. There was a directory in /var/varnish/ corresponding to the machine name that Varnish needed access to. "sudo service varnish restart" fixed this for me.