semanage command is not found in Centos 5.4 - linux

I am using CentOS 5.4.
I want to run the following command (without the quotes) to allow an SELinux port for HTTP traffic:
"semanage port -m -t http_port_t -p tcp 7000"
but I get an error: semanage command is not found.
semanage is available at /usr/sbin/semanage on my OS. I have already installed and updated the policycoreutils-1.33.12-14.13.el5.x86_64.rpm package, and I have already tried many of the solutions available on the internet for this problem.
How do I resolve this problem?

I resolved the issue by running the command with its full path:
/usr/sbin/semanage port -m -t http_port_t -p tcp 7000
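If you don't want to type the full path every time, you can check whether /usr/sbin is on your PATH and add it for the current shell. A minimal sketch, assuming a bash login shell (add the export to ~/.bash_profile to make it permanent):
which semanage || echo "semanage is not on PATH"
export PATH=$PATH:/usr/sbin
semanage port -m -t http_port_t -p tcp 7000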

Related

CentOS need explanation about semanage ... http_port_t

I came across the following command
semanage port -m -t http_port_t -p tcp 5500
However, this argument "http_port_t" doesn't exist, or at least that is what I understand from the error.
What else can be used instead of http_port_t? And is the command needed at all? Without it my http service works fine, i.e.
service http restart generates no error for that port.
I am using CentOS 7.9
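For reference, you can check whether the http_port_t type exists and which ports it already covers before modifying anything. A hedged sketch (on CentOS 7, semanage comes from the policycoreutils-python package); by default this lists tcp ports such as 80 and 443 labelled http_port_t:
semanage port -l | grep http_port_t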

How does one open a tensorboard port in Linux?

I have some tensorboard data and I want my server to let me see the data. I don't want to have to send the tensorboard data files to my computer, so it would be ideal if I can just access them remotely. How does one do that? I would assume that the server would just host it as a normal website? What are the Tensorboard commands for this?
I know that locally one can do:
tensorboard --logdir=path/to/log-directory
and then go to the browser to do:
http://localhost:6006/
but is it possible to do the equivalent from a server and then just read the data in my local browser/computer from the server?
Assuming that there is no firewall preventing access to port 6006 from the outside, and that your server's address is server.example.com, you should be able to simply type http://server.example.com:6006 into your browser and have it work.
In case of a restrictive firewall, tunneling the tensorboard port over SSH using Local Port Forwarding is a good approach (this is also more secure than opening random ports publicly). When logging in to your server, you could type (for instance):
ssh -L 12345:localhost:6006 server.example.com
After that, start tensorboard on the server as usual, and you will be able to access it at http://localhost:12345 in your browser.
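If you only want TensorBoard reachable through the tunnel, you can also bind it to localhost on the server. A small sketch using the standard --host and --port flags:
tensorboard --logdir=path/to/log-directory --host localhost --port 6006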
mvoelske's instructions for setting up port forwarding are correct. If you have administrative privileges on the machine, you can open port 6006 to your IP address using the following commands:
$ sudo iptables -A INPUT -p tcp -s <insert your ip> --dport 6006 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
$ sudo iptables -A OUTPUT -p tcp --sport 6006 -m conntrack --ctstate ESTABLISHED -j ACCEPT
The iptables change can be saved with the following command:
$ sudo service iptables save
Note that this is for CentOS v6 and below. CentOS v7 and above use firewalld by default.
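For those newer releases, the rough firewalld equivalent is the following (a sketch; the zone name may differ on your system):
$ sudo firewall-cmd --zone=public --add-port=6006/tcp --permanent
$ sudo firewall-cmd --reload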
If you have reached this stackoverflow question because you are troubleshooting a previously working TensorBoard setup, you might consider adding the --bind-all flag to your command line.
$ tensorboard --logdir=path/to/log-directory --bind-all
This resolved my problem reaching TensorBoard by URL within an internal network.
http://my_server.company.com:6006

Difficulty accessing Docker's API

I was struggling to connect to the Docker API running on a Red Hat 7.1 VM; the API is exposed on both a TCP port and a UNIX socket.
To configure this I set the -H options in OPTIONS as follows:
OPTIONS='--selinux-enabled -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock'
in the file:
/etc/sysconfig/docker
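After editing that file the daemon needs a restart before it listens on the new endpoints. A quick sketch, assuming systemd as used on RHEL 7.1:
sudo systemctl restart docker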
Running the docker client on the same box, it connected OK to the docker daemon via either communication path:
docker images
or
docker -H=tcp://127.0.0.1:2375 images
both work equally well.
I was unable to get any sense out of it from another box, so I figured the first thing to do would be to prove I could connect to port 2375 from elsewhere. I had no joy when I tried:
telnet 10.30.144.66 2375
I figured it must be a firewall problem but it took a while longer before I realised it was the firewall built into Linux.
To make 2375 accessible:
Use one of the following depending on your distro
sudo firewall-cmd --zone=public --add-port=2375/tcp --permanent
sudo firewall-cmd --reload
OR
sudo iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 2375 -j ACCEPT
sudo /sbin/service iptables save
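Once the rule is in place you can verify from another box that the API answers. A hedged check using the Docker Remote API's /version endpoint:
telnet 10.30.144.66 2375
curl http://10.30.144.66:2375/version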
I was facing a similar problem when my IntelliJ IDE was not able to connect to the Docker Engine API installed on RHEL.
It got resolved with the following:
firewall-cmd --add-port=2376/tcp --permanent
firewall-cmd --reload
systemctl restart docker

How can I run node.js Express in production mode via sudo?

I'm using the npm package express version 2.5.2 with node version 0.6.5. I appear to be running bash version 4.1.5 on Debian 4.4.5.
I'm trying to run my server in production mode but it still runs in development mode.
I run these commands in my bash shell:
$ export NODE_ENV=production
$ echo $NODE_ENV
production
$ sudo echo $NODE_ENV
production
$ sudo node bootstrap.js
I have this code inside bootstrap.js:
var express = require('express'); // express 2.x exposes createServer()
var bootstrap_app = module.exports = express.createServer();
//...
console.log(bootstrap_app.settings.env);
and here's what I see printed to standard out:
development
Is this a problem with my usage, or my system?
EDIT:
Thanks to ThiefMaster for properly identifying that this issue stems from my running node as root. ThiefMaster suggested using iptables to forward from port 80 to an unprivileged port, but my system gives me an error. Moving this discussion to superuser.com or serverfault.com (link to follow).
Most environment variables are unset when using sudo for security reasons. So you cannot pass that environment variable to node without modifying your sudoers file to allow that variable to pass through.
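For completeness, if you do decide to let the variable through, sudoers can be told to keep it, or you can preserve the caller's environment with -E where the policy allows it. A sketch (edit sudoers only via visudo):
Defaults env_keep += "NODE_ENV"
$ sudo -E node bootstrap.js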
However, you shouldn't run node as root anyway. So here's a good workaround:
If you just need it for port 80, run node on an unprivileged port and setup an iptables forward to map port 80 to that port:
iptables -t nat -A PREROUTING -d 1.2.3.4/32 -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 2.3.4.5:1234
Replace 1.2.3.4 with your public IP, 2.3.4.5 with the IP node runs on (could be the public one or 127.0.0.1) and 1234 with the port node runs on.
With a sufficiently recent kernel that has capability support you could also grant the node executable the CAP_NET_BIND_SERVICE privilege using the following command as root:
setcap 'cap_net_bind_service=+ep' /usr/bin/node
Note that this will allow any user on your system to open privileged ports using node!
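You can confirm or later remove the capability with the libcap tools (a small sketch):
getcap /usr/bin/node
setcap -r /usr/bin/node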
Alternatively, pass the variable directly on the sudo command line:
sudo NODE_ENV=production /usr/local/bin/node /usr/local/apps/test/app.js

Openfire and Windows Azure

Has anyone installed OpenFire on Windows Azure before?
Is it easy to create another instance with the OpenFire in it?
Thanks!
Yes, I've installed openFire on both EC2 (Linux) and Azure. It is as painless as you could imagine.
get a VM
install java
install openfire
install openfire db to SQL azure (connection string syntax below)
jdbc:jtds:sqlserver://SQLAzInstance.database.windows.net:1433/OpenFireSqlDBName;ssl=require
be sure to allow the proper ports through the endpoints tab of the virtual machine in the new Azure management portal (a quick reachability check follows this list)
TCP 5222/5223 (std/SSL client connectivity)
TCP 5269 (server-to-server)
TCP 9090 (default openfire web ui port, you could change this)
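Once the endpoints are open, a quick reachability check from your own machine looks like this (a sketch using nc; substitute your actual cloudapp hostname):
nc -vz chatserver.cloudapp.net 5222
nc -vz chatserver.cloudapp.net 9090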
Log into your Windows Azure account.
Create a Machine running Ubuntu 14 LTS
Then go to your SSH client (for Mac and Linux users, you can use the terminal by typing
ssh username@servername, e.g. ssh joel@chatserver.cloudapp.net); Windows users can install the PuTTY SSH client, which comes with Bitvise.
log in as an admin by typing
sudo su
then update the server by typing
apt-get update
then upgrade any packages that have new releases by typing
apt-get upgrade
Then check if java is installed (it is usually not installed anyway) by typing
java -version
if it is not installed, install it by typing
apt-get install default-jre
accept it to install by typing y to mean yes
wait for it to install
Then install openfire by first downloading it. Use the wget command to download it directly to your server as below (at the time of writing, openfire 3.9.3 is the latest version):
wget -O openfire.deb http://www.igniterealtime.org/downloadServlet?filename=openfire/openfire_3.9.3_all.deb
Then after it has finished downloading, install it by typing
dpkg --install openfire.deb
Before you go to the browser, go to your Windows Azure dashboard
Click on the Virtual Machine you have created
Then click on Endpoints
Add the following end points, they are all of TCP type
Public port 5222, private port 5223: SSL client connectivity
Public port 5269, private port 5269: server-to-server connectivity
Public port 9090, private port 9090: openfire web UI
After all this, you are good to go.
Go to your browser and type in your server URL with :9090 at the end, e.g.
chatserver.cloudapp.net:9090
Hope that helped and happy chatting!!
To use default ports such as 80 and 443 (replace 5222 and 5223 with 80 and 443), use the following commands to redirect traffic on the Linux machine.
iptables -A INPUT -i eth0 -p tcp --dport 5222 -j ACCEPT
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 5222
iptables -A INPUT -i eth0 -p tcp --dport 5223 -j ACCEPT
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 5223
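These rules live only in the running kernel; to keep them across a reboot, save them the same way as earlier on this page. A sketch for iptables-based distros (the second form assumes the RHEL/CentOS rules file path):
sudo service iptables save
or, where that service script is unavailable:
sudo sh -c 'iptables-save > /etc/sysconfig/iptables'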
