Why is my application not being deployed on OpenShift? - node.js

I believe I have everything set up properly for my server, but I keep getting this error:
Starting NodeJS cartridge
Tue Jan 05 2016 10:49:19 GMT-0500 (EST): Starting application 'squadstream' ...
Waiting for application port (8080) become available ...
Application 'squadstream' failed to start (port 8080 not available)
-------------------------
Git Post-Receive Result: failure
Activation status: failure
Activation failed for the following gears:
568be5b67628e1805b0000f2 (Error activating gear: CLIENT_ERROR: Failed to
execute: 'control start' for /var/lib/openshift/568be5b67628e1805b0000f2/nodejs
#<IO:0x0000000082d2a0>
#<IO:0x0000000082d228>
)
Deployment completed with status: failure
postreceive failed
I have my git repo set up with all the steps followed properly.
https://github.com/ammark47/SquadStreamServer
Edit: I have another app on openshift that is on 8080. I'm not sure if that makes a difference.

If the other application is running on the same gear, then it is binding to port 8080 first, making the port unavailable for your second application. You will need to run each application on its own gear. Also, make sure that you are binding to port 8080 on the correct IP address for your gear; you can't bind to 0.0.0.0 or 127.0.0.1.
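On OpenShift v2, the IP and port a Node app must bind to are exposed as environment variables on the gear. A quick way to check them, assuming the rhc client is configured (the app name is taken from the log above; the values shown are placeholders):

```
$ rhc ssh squadstream --command 'env | grep OPENSHIFT_NODEJS'
OPENSHIFT_NODEJS_IP=...
OPENSHIFT_NODEJS_PORT=8080
```

In server.js the listen call should then read something like `server.listen(process.env.OPENSHIFT_NODEJS_PORT || 8080, process.env.OPENSHIFT_NODEJS_IP || '127.0.0.1')` rather than binding a hard-coded address.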

Related

How to try all servers in dns using libcurl?

Using Linux/C++/libcurl, I need to regularly and randomly test the responses of several servers that are available through a single DNS name, such as:
$ host example.com
n1.example.com 1.2.3.4
n2.example.com 1.2.3.5
n3.example.com 1.2.3.6
The list changes. When I try https://example.com, libcurl always uses the same IP for the span of the TTL, and I cannot switch to the next host. There is the CURLOPT_DNS_CACHE_TIMEOUT setopt, but setting it to zero does not help; even if I fully recreate the easy-curl object I still get the same IP. So the answers in "curl - How to set up TTL for dns cache" and "How to clear the curl cache" do not help.
I can of course manually resolve the DNS names and iterate over them, but are there other options? Polling randomly is okay. I see curl uses c-ares. Is there a way to clear the cache there, and would it help?
I cannot do exactly what I need with curl without doing the resolve myself, but here are some findings to share with others:
First of all, as a well-behaved TCP client, curl will try the hosts from the DNS list from top to bottom until a successful connection is made. From then on it will keep using that host, even if it returns a higher-level error (such as an SSL error or HTTP 500). This is fine for most common cases.
The curl command line in newer versions has --retry and --retry-all-errors, but unfortunately there are no such options in libcurl. The feature is being enhanced right now, and as of 2021-07-14 there is no release yet that will enumerate all DNS hosts until one returns HTTP 200; the released versions (I tried 7.76 and 7.77) always retry with the same host. The nightly build (2021-07-14), however, does enumerate all DNS hosts. Here is how it behaves with two retries and three nonexistent hosts (note that the retries happen if any host returns HTTP 5xx):
$ ./src/curl http://nohost.sureno --trace - --retry 2 --retry-all-errors
== Info: Trying 192.168.1.112:80...
== Info: connect to 192.168.1.112 port 80 failed: No route to host
== Info: Trying 192.168.1.113:80...
== Info: connect to 192.168.1.113 port 80 failed: No route to host
== Info: Trying 192.168.1.114:80...
== Info: connect to 192.168.1.114 port 80 failed: No route to host
== Info: Failed to connect to nohost.sureno port 80 after 9210 ms: No route to host
== Info: Closing connection 0
curl: (7) Failed to connect to nohost.sureno port 80 after 9210 ms: No route to host
Warning: Problem (retrying all errors). Will retry in 1 seconds. 2 retries
Warning: left.
== Info: Hostname nohost.sureno was found in DNS cache
== Info: Trying 192.168.1.112:80...
== Info: connect to 192.168.1.112 port 80 failed: No route to host
== Info: Trying 192.168.1.113:80...
== Info: connect to 192.168.1.113 port 80 failed: No route to host
== Info: Trying 192.168.1.114:80...
== Info: connect to 192.168.1.114 port 80 failed: No route to host
== Info: Failed to connect to nohost.sureno port 80 after 9206 ms: No route to host
== Info: Closing connection 1
curl: (7) Failed to connect to nohost.sureno port 80 after 9206 ms: No route to host
Warning: Problem (retrying all errors). Will retry in 2 seconds. 1 retries
This behavior can be very helpful for users of libcurl, but unfortunately these retry flags presently have no mapping to curl_easy_setopt. As a result, if you pass --libcurl to the command line, you will not see any retry-related code in the generated source.
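Until those flags land in libcurl, one workaround is the manual resolve-and-iterate approach the question mentions, done from the shell: resolve every address yourself and pin each attempt with --resolve so curl cannot fall back to its cached choice. A sketch (example.com is a placeholder; it assumes glibc's getent and a curl new enough to have --resolve):

```shell
# Try every IPv4 address behind a name, one at a time.
# --resolve pre-loads curl's DNS cache with a single chosen address,
# so each request goes to exactly that host.
try_all_hosts() {
  local host=$1 port=${2:-443} ip code
  for ip in $(getent ahostsv4 "$host" | awk '{print $1}' | sort -u); do
    code=$(curl --resolve "$host:$port:$ip" -s -o /dev/null \
                -w '%{http_code}' "https://$host:$port/")
    echo "$ip -> HTTP $code"
  done
}

# Usage: try_all_hosts example.com 443
```

The same pinning is available programmatically via CURLOPT_RESOLVE, which takes entries of the form HOST:PORT:ADDRESS, so a C++ client can loop over the resolved addresses itself.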

How to install MongoDB Enterprise 4.4 on remote redhat server?

I followed the instructions listed here, https://docs.mongodb.com/manual/tutorial/install-mongodb-enterprise-on-red-hat/, and tried to install on a remote server from my local machine. I ssh'd from my local machine into the server and performed the installation steps.
I'm not sure whether there are additional steps that need to be completed, or whether you have to set directory paths other than the default ones since you are using a server instead of a local machine. Currently, when I run mongo from my terminal, I get this error:
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect#src/mongo/shell/mongo.js:374:17
#(connect):2:6
exception: connect failed
exiting with code 1
[h699972#csc2cxp00020938 ~]$ mongo --host
Editing /etc/mongod.conf with sudo vim and setting bindIp: 0.0.0.0 did not work. Any help would be appreciated.
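"Connection refused" on 127.0.0.1:27017 usually means mongod is not running at all; bindIp only matters once the daemon is actually listening. A diagnostic sketch, assuming the RHEL default service name and log path from the tutorial:

```
$ sudo systemctl status mongod        # is the daemon running?
$ sudo systemctl start mongod         # start it if not
$ sudo ss -tlnp | grep 27017          # confirm something listens on 27017
$ sudo tail -n 20 /var/log/mongodb/mongod.log   # startup errors, if any
```

If mongod refuses to start, the log will typically show why; permission or SELinux problems on the default dbPath and log directories are a common cause on fresh RHEL installs.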

OpenVPN client is not working on Windows computer?

I am trying to run the OpenVPN client on my Windows 10 machine in order to connect to a remote OpenVPN CentOS 7 server, but it does not work. I get the errors below:
Options error: --capath fails with 'C:\Users\Desktop\OpenVPN\ca.crt': No such process (errno=3)
Options error: --cert fails with 'C:\Users\Desktop\OpenVPN\Win10client.crt': No such process (errno=3)
Fri Mar 22 22:56:20 2019 WARNING: cannot stat file 'C:\Users\Desktop\OpenVPN\Win10client.key': No such process (errno=3)
Options error: --key fails with 'C:\Users\Desktop\OpenVPN\Win10client.key'
Fri Mar 22 22:56:20 2019 WARNING: cannot stat file 'C:\Users\Desktop\OpenVPN\myvpn.tlsauth': No such process (errno=3)
Options error: --tls-crypt fails with 'C:\Users\Desktop\OpenVPN\myvpn.tlsauth': No such process (errno=3)
This is the config that I have in my .ovpn file:
client
tls-client
--capath C:\\Users\\Desktop\\OpenVPN\\ca.crt
--cert C:\\Users\\Desktop\\OpenVPN\\Win10client.crt
--key C:\\Users\\Desktop\\OpenVPN\\Win10client.key
--tls-crypt C:\\Users\\Desktop\\OpenVPN\\myvpn.tlsauth
remote-cert-eku "TLS Web Client Authentication"
proto udp
remote serveraddress 1194 udp
dev tun
topology subnet
pull
Assuming your config file is correct, try reinstalling OpenVPN and putting your config file into the C:\Program Files\OpenVPN\config folder. Then you can start the OpenVPN service, so you don't need to use the OpenVPN GUI.
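Separately, the "No such process (errno=3)" messages mean OpenVPN could not find the files at those paths: C:\Users\Desktop\... is usually missing an account name (C:\Users\&lt;name&gt;\Desktop\...), and --capath expects a directory of CA certificates, whereas a single ca.crt file belongs with the ca directive. Inside a .ovpn file the leading -- is also normally dropped. A sketch of the corrected fragment, with placeholder paths:

```
client
tls-client
dev tun
proto udp
remote serveraddress 1194 udp
remote-cert-eku "TLS Web Client Authentication"
ca "C:\\Users\\<name>\\Desktop\\OpenVPN\\ca.crt"
cert "C:\\Users\\<name>\\Desktop\\OpenVPN\\Win10client.crt"
key "C:\\Users\\<name>\\Desktop\\OpenVPN\\Win10client.key"
tls-crypt "C:\\Users\\<name>\\Desktop\\OpenVPN\\myvpn.tlsauth"
topology subnet
pull
```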

RedHat Redis Cluster port permission trouble

I am running into a problem trying to create a redis cluster following the instructions outlined here:
https://redis.io/topics/cluster-tutorial
The error I am getting in the logs when calling sudo service redis start:
/etc/log/redis/redis.log:
3432:M 04 Aug 13:38:57.411 * Node configuration loaded, I'm 7442dbd9342231844b12ede7513470c092bd4646
3432:M 04 Aug 13:38:57.411 # Creating Server TCP listening socket *:16379: bind: Permission denied
Interestingly enough, when I start the server directly with sudo using the same configuration file, it starts as expected according to the redis.log file. The command, copied from the service script, was sudo /usr/bin/redis-server /etc/redis.conf:
3484:M 04 Aug 13:59:14.900 * DB loaded from disk: 0.000 seconds
3484:M 04 Aug 13:59:14.900 * The server is now ready to accept connections on port 6379
From what I know it seems like a permission issue, but I am failing to understand, or to find out, where there is any such thing as a user/group-to-port binding permission. The same service is able to bind the Redis port 6379 but is unable to bind port 16379.
Any suggestions/thoughts?
Thank you Florian, it was indeed SELinux blocking access to port 16379 for the redis process.
The article that led to the answer:
https://serverfault.com/questions/566317/nginx-no-permission-to-bind-port-8090-but-it-binds-to-80-and-8080
A gist for installing Redis in cluster mode on RedHat, to spare others the nightmare:
https://gist.github.com/vkhazin/f5c1b6e36e3a6c29aaf882041aaf78cb
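For completeness, the SELinux side of the fix: the redis policy type is only allowed to bind its registered ports, and the cluster bus port is the data port plus 10000 (6379 -> 16379), which is not in that list by default. The commands, assuming semanage is available (policycoreutils-python on RHEL):

```
$ sudo semanage port -l | grep redis          # ports redis_port_t may bind
$ sudo semanage port -a -t redis_port_t -p tcp 16379
$ sudo service redis start
```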

Network Security Group for Filezilla(client) connection

I am new here.
A few days ago I attended an MS Azure event, and today I registered with Azure (free account).
VM environment: CentOS 7, apache + php + mysql + vsftpd + phpMyAdmin.
Everything is up and running, and I am able to visit info.php via the public IP address.
SELinux is disabled, firewalld is disabled.
My problem is that I am not able to connect to this server via FileZilla (PC client).
From the Windows command prompt (ftp/put) it works, and I am able to upload files.
But via FileZilla:
Status: Connecting to 5x.1xx.1xx.7x:21...
Status: Connection established, waiting for welcome message...
Status: Insecure server, it does not support FTP over TLS.
Status: Logged in
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/home/ftpuser"
Command: TYPE I
Response: 200 Switching to Binary mode.
Command: PORT 192,168,1,183,234,99
Response: 200 PORT command successful. Consider using PASV.
Command: LIST
Error: Connection timed out after 20 seconds of inactivity
Error: Failed to retrieve directory listing
Status: Disconnected from server
Status: Connecting to 5x.1xx.1xx.7x:21...
Status: Connection established, waiting for welcome message...
Status: Insecure server, it does not support FTP over TLS.
Status: Logged in
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/home/ftpuser"
Command: TYPE I
Response: 200 Switching to Binary mode.
Command: PORT 192,168,1,183,234,137
Response: 200 PORT command successful. Consider using PASV.
Command: LIST
Error: Connection timed out after 20 seconds of inactivity
Error: Failed to retrieve directory listing
I believe that is because of the Network Security Group settings for inbound and outbound rules and that I need to open some ports, but I am not sure, because I tried allowing all of 1024-65535 and it still does not work.
If you use passive-mode FTP, you should open ports 20 and 21 plus the passive ports you need on the Azure NSG (inbound rules). You can check /etc/vsftpd.conf:
pasv_enable=YES
pasv_min_port=60001
pasv_max_port=60005
For this example, you should open ports 60001-60005 on Azure NSG(Inbound rules).
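The log above also shows why the transfer stalls in the first place: in active mode the client tells the server where to connect back with PORT h1,h2,h3,h4,p1,p2, and here it advertises a private LAN address that the Azure VM cannot reach, so LIST times out. A small sketch that decodes the PORT argument from the log:

```shell
# Decode an FTP PORT argument "h1,h2,h3,h4,p1,p2" into ip:port.
# The last two numbers encode the data port as p1*256 + p2.
decode_port() {
  IFS=, read -r h1 h2 h3 h4 p1 p2 <<< "$1"
  echo "$h1.$h2.$h3.$h4:$(( p1 * 256 + p2 ))"
}

decode_port "192,168,1,183,234,99"   # prints 192.168.1.183:60003, a private address
```

Passive mode reverses the direction: the client opens the data connection out to the server's pasv_min_port-pasv_max_port range, which is why those ports must be opened on the NSG instead.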
