I've been trying to send an HTTPS request using the ssl.https library in Lua, but no matter what URL I give, I always get "permission denied" and no other values (headers, etc.). The Linux distribution I am using is CentOS Linux 7.
Here is the example code:
local httpsocket = require("socket.http")
local httpssocket = require("ssl.https")
local ltn12 = require("ltn12")
-- On failure, request() returns nil and an error string in `code`.
local res, code, response_headers, status = httpssocket.request("https://www.google.com")
module:log("info", "%s %s", code.."", response_headers);
The code itself is part of a Prosody plugin, and the last line in this example prints this out:
permission denied <nil>
My question is how do I fix this issue so that I can access the page?
EDIT: It seems the problem might be the user the service runs under: it needs root privileges, otherwise it throws an EACCES error for ports lower than 1024. Does anyone know what to do in this case?
So... after attempting to fix this issue again, I finally found the solution. If you are having trouble with services not being able to send HTTP/HTTPS requests on CentOS, a single command has to be run to fix this issue:
setsebool -P nis_enabled 1
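To confirm the boolean actually took effect (-P makes it persistent across reboots), you can query it with getsebool:
getsebool nis_enabled
which should now report nis_enabled --> on.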
For those who might have similar issues that are not quite the same as mine, look into /var/log/audit/audit.log for anything related to your program, process, service, etc., then use this command:
grep <pattern_to_match_specific_log> /var/log/audit/audit.log | audit2why
This will tell you why it failed and how to fix it.
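If audit2why confirms an SELinux denial that no existing boolean covers, audit2allow from the same package can generate a local policy module. A rough sketch of that workflow, reusing the grep pattern from above (mylocalpolicy is a module name of your choosing):
grep <pattern_to_match_specific_log> /var/log/audit/audit.log | audit2allow -M mylocalpolicy
semodule -i mylocalpolicy.pp
Prefer flipping an existing boolean (like nis_enabled above) when one matches the denial; a custom module is the fallback.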
I'm trying to execute a .bat file on a server in a local network with PsExec.
I'm currently trying with this command:
.\PsExec.exe -i -u Administrator \\192.168.4.36 -s -d cmd.exe -c "Z:\NX_SystemSetup\test.bat"
The server has no password (it has no internet connection and is running a clean install of Windows Server 2016), so I'm currently not entering one, and when a password is asked for I simply press Enter, which seems to work. Also, the .bat file currently only opens Notepad on execution.
When I enter this command, I get the message "The file cannot be accessed by the system".
I've tried executing it in PowerShell with administrator privileges (and also without, since I saw another user on Stack Overflow mention that it only worked for them that way), but without success.
I'm guessing this is a privilege problem, since it "can't be accessed", which would indicate to me that the file was indeed found.
I used net share in a cmd and it says that C:\ on my server is shared.
The file I'm trying to copy is also not in any kind of restricted folder.
Any ideas what else I could try?
EDIT:
I have done a lot more troubleshooting.
On the server, I went into the firewall settings and opened TCP ports 135 and 445 explicitly, since according to Google, PsExec uses these.
Also on the server, I opened Properties of the "Windows" folder in C: and added an admin$ share, where I gave everyone all rights to the folder (stupid, I know, but I'm desperate for this to work).
I also played around a bunch more with different commands. Not even .\PsExec.exe \\192.168.4.36 ipconfig seems to work; I still get the same error: "The file cannot be accessed by the system".
This is honestly maddening. There is no documentation of this error on the internet. Searching explicitly for "File cannot be accessed" still only brings up results for "File cannot be found" and similar.
I'm surely just missing something obvious. Right?
EDIT 2
I also tried adding the domain name in front of the username. I checked the domain by using set user in cmd on the server.
.\PsExec.exe \\192.168.4.16 -u DomainName\Administrator -p ~ -c "C:\Users\UserName\Documents\Mellanox Update.bat"
-p ~ seems to work for the password, so I added that.
I also tried creating a shortcut of the .bat file and executing it as Administrator, using it instead of the original .bat file. The error stays the same: "The file cannot be accessed by the system".
As additional info, the PC I'm trying to send the command from runs Windows 10; the server is running Windows Server 2016.
So, the reason for this specific error is as simple and as stupid as it gets.
Turns out I was using the wrong IP. The IP I was using is an IPMI address, which does not allow any traffic (other than IPMI-related traffic).
I have not gotten everything working yet, since I've run into some different errors, but the original question/problem has been resolved.
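For anyone who hits the same symptom: before digging into PsExec flags, it may be worth confirming that the target IP actually answers on the SMB port. A quick check from the Windows 10 client, in PowerShell, using the address from the question:
Test-NetConnection 192.168.4.36 -Port 445
A BMC/IPMI address will typically fail this test even if it responds to ping, since it only serves IPMI traffic.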
I'm trying to connect to an Azure file share from my Mac running High Sierra 10.13.6 using the following command:
mount_smbfs -d 0777 -f 0777 //dolphins:PASSWORDHERE@dolphins.file.core.windows.net/models /Users/b3020111/Azure
However I keep getting the error:
mount_smbfs: server connection failed: No route to host
I have turned off packet signing in /etc/nsmb.conf:
[default]
signing_required=no
After looking around the web I seem to be at a loss as to where to go; any help is appreciated.
I got it working with the Azure-provided connection example:
mount_smbfs -d 777 -f 777 //user:key@storageurl/folder ~/mountfolder
A folder inside the file share is needed after the URL, and the mount folder must exist.
But the main reason for "No route to host" was that the access key had a forward slash in it! I rebuilt key1 until I got a key without a forward slash.
BUT! Be aware: rebuilding the key will kill all mounts and connections to that storage account.
Came across this issue myself today. Do double-check that your ISP does not block SMB port 445. In my case, AT&T actually does block this port; I found this in their guide: http://about.att.com/sites/broadband/network
The solution for me was to connect with a VPN which I'm already hosting on Azure. Additionally, as others have mentioned in this thread, escape any / with %2f. Also, add the share name to the connection URL. For example, if your share name is my-data then the connection URL should contain xxx.file.core.windows.net/my-data.
This is omitted for some reason in the Azure docs/UI and was required for a successful connection on OSX.
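Putting both tips together, the connection string might look like this (hypothetical account name, share name, and key; abc%2fdef escapes a literal key of abc/def):
mount_smbfs //mystorageacct:abc%2fdef@mystorageacct.file.core.windows.net/my-data ~/mountfolder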
It was the "/" after all. I had to regenerate the key over ten times till I get a key that doesn't have the "/" character and then it worked fine through the terminal.
It should work using the following syntax:
mount_smbfs //<storage-account-name>@<storage-account-name>.file.core.windows.net/<share-name> <desired-mount-point>
Without adding the permissions.
This can also be done via Finder (Go > Connect to Server) using the same smb:// URL.
"mount(2) system call failed no route to host "
while mounting azure file share on linux vm we can have this error.
In my case One package was missing which is - cifs-utils
So, I have used below command
"sudo yum install cifs-utils -y" to resolv the issue.
It is important to allow port 445 (TCP) for SMB communication. If you can't access it, your firewall is blocking it! Please enable it and try again.
I ran into this same problem, and while I was never able to get it working through the terminal, I did manage to get it resolved in Finder.
Essentially the same instructions as @Adam Smith-MSFT, however with one key difference:
I created a directory via Azure's web interface, and after that I was able to connect by adding /<directory-name> to the connection string. Without a directory this would not work at all.
OS: Ubuntu 16.04
Hypervisor: VirtualBox
Network configuration: NAT network with port forwarding to access the VMs through the host IP. I can also ping a VM from another VM.
I'm trying to connect my Jenkins app, hosted on a VM, to my BitBucket server, also on a VM. I followed a tutorial on the internet, but when I enter the address of my Git repository I get this:
Failed to connect to repository : Command "/usr/bin/git ls-remote -h http://admin@192.168.6.102:8005/scm/tes/repository-test.git HEAD" returned status code 128:
stdout:
stderr: fatal: unable to access 'http://admin@192.168.6.102:8005/scm/tes/repository-test.git/': The requested URL returned error: 403
So, to be sure, I tried to execute the command in the terminal... and in the terminal it seems to work. I can also push, clone, pull, etc.
Do you have an explanation?
EDIT:
I tried some other things, like running with and without sudo, to see if the problem came from permissions, and it seems that's not the case.
But I see that there is no output when we use the "HEAD" argument.
Do you think that because "HEAD" gives no result, Git in Jenkins interprets it as no answer and returns the 403 error?
EDIT 2:
I found this on the web: http://jenkins-ci.361315.n4.nabble.com/Jenkins-GIT-ls-remote-error-td4646903.html
The guy has the same problem but in a different way. I will try to allocate more RAM to see if it does the trick.
There could be many possible problems, but you are getting 403 - Access Forbidden, which indicates a problem with permissions. I would suggest checking these common mistakes first:
a) try https instead of http - my SCM only uses https;
b) check that admin is the correct user - the SCM by default uses scmadmin.
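One quick way to test both suggestions from the Jenkins machine is to request the same URL with curl under each scheme (URL taken from the question; -I fetches only the response status and headers):
curl -I http://admin@192.168.6.102:8005/scm/tes/repository-test.git/info/refs
curl -I https://admin@192.168.6.102:8005/scm/tes/repository-test.git/info/refs
Comparing the status codes shows which scheme the server actually serves.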
Here I ran the exact same command twice.
The first time I used the proxy configuration which I need to access the internet, and the second time I set the proxy server to "none".
So there is a problem with the proxy.
I was thinking that the proxy was not used in a NAT connection with VirtualBox...
I found the solution.
I had to reinstall Jenkins to have a user named "jenkins" with its own home directory.
I don't know if it is linked or not, but I configured my BitBucket server to use only HTTPS with a self-signed certificate (I work on a LAN).
My troubleshooting was linked to my proxy settings.
I disabled all my proxy settings in Linux, so I was able to launch in a terminal the command that didn't work in Jenkins.
Logged in with sudo su jenkins, the commands also worked.
I found out that in the home directory of the jenkins user there was a proxy.xml file. I opened it and saw my old proxy settings.
I deleted all the content with vim, saved, restarted, and the error was gone.
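If you hit the same thing, the cleanup looks roughly like this (assuming Jenkins runs as the jenkins user, its home directory is JENKINS_HOME, and the service is managed by systemd):
sudo su - jenkins
cat ~/proxy.xml    # inspect the stale proxy settings
rm ~/proxy.xml     # remove them (or edit the file instead)
exit
sudo systemctl restart jenkins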
There can be a Git version mismatch.
I would suggest you update Git once; maybe it will resolve your issues.
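On Ubuntu 16.04 (the asker's OS), checking and updating Git would look something like:
git --version
sudo apt-get update
sudo apt-get install --only-upgrade git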
I'm having some issues with SELinux.
When trying to visit my website I get 403 Forbidden from nginx, and the server pops up an error telling me to use grep NGINX /var/log/audit/audit.log | audit2allow -M mypol, which I did. However,
when trying to load the page it then says Access Denied and asks me to use the command grep PHP-FPM /var/log/audit/audit.log | audit2allow -M mypol, and when I do that it reverts back to 403 Forbidden and asks me to use the first command again.
It's as if the NGINX policy overwrites the php-fpm one and vice versa. How would I solve this without disabling SELinux?
I have access to the GNOME desktop on my server, and the SELinux security alert tells me to use those commands to solve the issue. The first command does solve it but then throws up another issue, and using the second command overwrites the first, putting me back to square one. I know that if I disable SELinux it will work, but it's unsafe and puts the server at risk.
Thanks.
Figured it out. For anyone else with the same issue (403 Forbidden plus an SELinux security error), run this command, as root, against your web root:
restorecon -r /srv/www/domain.com
Fixed it for me and now everything is running as it should.
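restorecon resets files to the context SELinux already expects for that path. If the web root lives in a non-standard location and the wrong context keeps coming back after a relabel, you may also need to register the path first (semanage is in the policycoreutils-python package on CentOS 7; the path matches the answer's example):
semanage fcontext -a -t httpd_sys_content_t "/srv/www/domain.com(/.*)?"
restorecon -r /srv/www/domain.com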
My OS is Ubuntu. I have some code hosted on github.com, and everything was OK before, but one day, when typing:
git pull
I'm asked to input my password as usual, and then I get this error:
error: couldn't connect to host while accessing https://ghosert@github.com/ghosert/VimProject.git/info/refs
fatal: HTTP request failed
until I try the sudo prefix, like:
sudo git pull
It works as before once again. It seems I lost the permission to access HTTPS when Git needs it. Does anyone have an idea about this?
The error you posted doesn't indicate that the problem was permissions.
error: couldn't connect to host while accessing
https://ghosert@github.com/ghosert/VimProject.git/info/refs fatal: HTTP request failed
"HTTP request failed" sounds like a connectivity problem.
I would simply bet that your internet connection failed when you typed it the first time, and was back up when you typed it again, with sudo, which I doubt had any effect on fixing the problem.
Worse, it has probably messed up your permissions now; refer to sarnold's answer.
I faced the same problem today and here is my analysis and solution.
I had set system-wide proxy settings in my Chrome browser for some purpose, and it seems that created some environment variables which were causing my shell to believe there was no connectivity: I had killed the proxy server when the job was finished, but the environment variables were not removed.
Check if your environment has some unnecessary variables set; env will show all the environment variables set in your shell. Look for these variables:
http_proxy, https_proxy
Remove them and everything will work.
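A minimal sketch of that check and cleanup in the shell:
env | grep -i proxy            # list any proxy-related variables
unset http_proxy https_proxy   # remove them for the current shell
If they reappear in new shells, look for where they are exported, e.g. ~/.bashrc or /etc/environment.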