Getting logs of OpenShift applications from a Node.js server - node.js

I have a Node.js server which is used to get the logs of a few servers sitting on VMs, by connecting to each VM via SSH and running the tail command on the log file.
I wanted to know if there is a similar way, or any way, to get the logs of an app whose container is on OpenShift rather than a VM, and whether there is a way to get the logs of all the pods as one.
Thanks

Did you check the OpenShift documentation?
You can install a full EFK stack directly in OCP via an operator, and/or forward the logs of everything running in OCP to an external EFK stack:
https://docs.okd.io/latest/logging/cluster-logging.html
This works if the logs are written to stdout/stderr in the pods.
Here are some other techniques for centralizing logs with Kubernetes:
https://kubernetes.io/docs/concepts/cluster-administration/logging/
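If you want to keep your existing Node.js tailing server, one lightweight option is to replace the SSH + tail step with the oc CLI, which can follow the logs of every pod behind an app in a single stream. A minimal sketch, assuming the oc binary is on the PATH, an active login to the cluster, and a hypothetical label app=myapp in a hypothetical project my-namespace:

    const { spawn } = require('child_process');

    // Follow the logs of all pods matching the (assumed) label app=myapp.
    // --all-containers merges every container's output into one stream,
    // which covers the "all the pods as one" part of the question.
    const oc = spawn('oc', [
      'logs',
      '-f',                  // follow, like `tail -f`
      '-l', 'app=myapp',     // label selector (assumed label)
      '--all-containers=true',
      '--tail=100',          // start from the last 100 lines
      '-n', 'my-namespace',  // assumed project/namespace
    ]);

    oc.stdout.on('data', (chunk) => process.stdout.write(chunk));
    oc.stderr.on('data', (chunk) => process.stderr.write(chunk));
    oc.on('close', (code) => console.log(`oc exited with code ${code}`));

The same approach works with kubectl logs; note that following multiple pods via a selector is capped by --max-log-requests (5 by default).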

Related

Kubernetes Pod - read ECONNRESET when requesting external web services

I have a bare-metal Kubernetes cluster running on Ubuntu 20.04, with separate LXC containers as its nodes. It has an Istio service mesh configured and approximately 20 application services running on it (Istio ServiceEntries are created so that external services can be reached). I use MetalLB for the gateway's external IP provisioning.
I have an issue with pods making requests outside the cluster (egress), specifically reaching external web services such as the Cloudflare API or the SendGrid API to make REST API calls. DNS is working fine, as the hosts I try to reach are indeed reachable from the pods (containers). The problem is that only a pod's first request to the internet succeeds; after that, a random read ECONNRESET error happens when it tries to make REST API calls, and sometimes even connect ETIMEDOUT, though less frequently than the first error. Making network requests from the nodes themselves to the internet seems to cause no problems at all, and pods communicate with each other through the cluster's Services without any of these problems.
My guess is that something is not configured correctly and the packets are not properly delivered back to the pod, but I can't find any relevant help on the internet and I am a little bit lost on this one. I appreciate and am very grateful for any help, and I will happily provide more details if needed.
Thank you all once again!

Accessing data from a relative Kubernetes service URL from another service using Node.js in the same cluster

I am in a Kubernetes cluster with two services running. One of the services exposes an endpoint like /customer/servcie-endpoint, and the other service is a Node.js application which is trying to access data from that endpoint. Axios doesn't work as-is, since it needs a host to work with.
If I open a shell with kubectl exec and run curl /customer/servcie-endpoint, I receive all the data.
I am not sure how to get this data in a Node.js application. Sorry for the naive ask!
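Inside the cluster you do have a host: the other Service's DNS name. Kubernetes DNS resolves a Service name to its ClusterIP, so Axios just needs that name as the host. A minimal sketch, assuming the first service is exposed as a Kubernetes Service with the hypothetical name customer-service in the same namespace:

    const axios = require('axios');

    async function fetchCustomerData() {
      // "customer-service" is an assumed Service name; from another namespace
      // you would use customer-service.<namespace>.svc.cluster.local instead.
      const response = await axios.get(
        'http://customer-service/customer/servcie-endpoint'
      );
      return response.data;
    }

    fetchCustomerData()
      .then((data) => console.log(data))
      .catch((err) => console.error(err.message));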

MEAN stack app not working well on AWS EC2 instance

I have developed a SaaS app using the MEAN stack that works perfectly on my local machine and server; I have now deployed the app on an AWS EC2 instance.
The problem is that whenever I make a request with a big data query, my EC2 instance/server stops responding and I cannot access it from PuTTY or FileZilla.
Should I use another hosting service, or is there a problem with my app's infrastructure?
(sorry for my bad English)
It seems like your EC2 instance is running out of resources, hence not responding to the PuTTY/FileZilla connections.
You can check the CPU% on the Monitoring tab in the EC2 console, or via CloudWatch.
You may also install and configure the CloudWatch agent on your instance to get improved logging of RAM usage as well as application logs:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html
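For the RAM metrics specifically, the agent reads a JSON config file. A minimal sketch, assuming the standard metrics_collected schema from the linked docs (measurement names and file location may vary by agent version):

    {
      "metrics": {
        "metrics_collected": {
          "mem": { "measurement": ["mem_used_percent"] },
          "disk": { "measurement": ["used_percent"], "resources": ["/"] }
        }
      }
    }

This publishes memory and root-disk usage to CloudWatch, so you can see whether the instance exhausts its RAM when the big query runs.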
If the problem is resources (CPU, RAM, disk), you can change your instance type to a more appropriate one.
By the way, instead of using PuTTY/FileZilla, you can connect to your instance via the Connect tab or Session Manager: right-click on the instance name and choose "Connect".

Link containers in Azure Container Service with Mesos & Marathon

I'm trying to deploy a simple WordPress example (WordPress & MySQL DB) on Microsoft's new Azure Container Service with Mesos & Marathon as the underlying orchestration platform. I already ran this on the services offered by Google (Kubernetes) and Amazon (ECS) and thought it would be an easy task on ACS as well.
I have my Mesos cluster deployed and everything is up and running. Deploying the MySQL container isn't a problem either, but when I deploy my WordPress container I can't get a connection to my MySQL container. I think this might be because MySQL runs on a different Mesos agent?
What I tried so far:
Using Mesos-DNS to get hold of the MySQL container's host (for now I don't really care which container I get hold of). I set the WORDPRESS_DB_HOST environment variable to mysql.marathon.mesos and specified the host of the MySQL container as suggested here.
I created a new rule for the Agent Load Balancer and a probe for port 3306 in Azure itself. This worked, but it seems like a very complicated way to achieve something so simple. In Kubernetes and ECS, links can simply be defined by using the container name as the hostname.
Another question that came up: what is the difference in Marathon between setting the port in the Port Mappings section and in the Optional Settings section? (See screenshot attached.)
Update: If I SSH into the master node, then I can resolve mysql.marathon.mesos with dig; however, I can't get a connection to work from within another container (in my case the WordPress container).
So there are essentially two questions here: one around stateful services on Marathon, the other around port management. Let me first clarify that neither has anything to do with Azure or ACS in the first place; they are both Marathon-related.
Q1: Stateful services
Depending on your requirements (development/testing or production), you can either use Marathon's persistent volumes feature (simple, but no automatic failover/HA for the data) or, since you are on Azure, a robust solution like the one I showed here (essentially mounting a file share).
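For the persistent volumes route, a hedged sketch of the relevant part of a Marathon app definition (field names per the Marathon persistent volumes docs; the image, size, and paths are assumptions):

    {
      "id": "/mysql",
      "container": {
        "type": "DOCKER",
        "docker": { "image": "mysql:5.7" },
        "volumes": [
          { "containerPath": "data", "mode": "RW", "persistent": { "size": 1024 } }
        ]
      },
      "residency": { "taskLostBehavior": "WAIT_FOREVER" }
    }

The residency section pins the task to the agent that holds its volume, which is also why this variant gives you no automatic failover for the data.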
Q2: Ports
The port mapping you see in the Marathon UI screenshot is only relevant if you launch a Docker image and want to explicitly map container ports to host ports in BRIDGE mode; see the docs for details.
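For illustration, this is roughly what that looks like in the app definition JSON (classic Marathon format; the ports and image are assumptions):

    {
      "id": "/wordpress",
      "container": {
        "type": "DOCKER",
        "docker": {
          "image": "wordpress",
          "network": "BRIDGE",
          "portMappings": [
            { "containerPort": 80, "hostPort": 0, "servicePort": 10000, "protocol": "tcp" }
          ]
        }
      },
      "env": { "WORDPRESS_DB_HOST": "mysql.marathon.mesos:3306" }
    }

Setting hostPort to 0 lets Marathon assign a random host port, which Mesos-DNS then exposes via SRV records so other tasks can discover it.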

How to access a Node.js application installed on AWS EC2

I have an AWS EC2 instance with Ubuntu. I successfully set up SSH access and am able to log in via an SSH console. I installed Node.js and one simple Node.js application, successfully started it with node server.js, and by executing curl http://localhost:8080 I can confirm the application is up and running. My only issue is that I am not able to access it using the public IP provided by AWS.
I can see my public IP in the AWS console, and I thought it would be enough to type:
http://aws-public-ip:8080 and it would load the application. It seems I was wrong, since I can't reach my app.
Any hints would be appreciated.
Actually, I found the answer myself: I had to edit the security group rules and just add a rule for the corresponding port. By default, the security group created with your instance has only one inbound rule, for port 22 (SSH).
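For reference, the same rule can be added from the AWS CLI (the security group ID below is a hypothetical placeholder):

    # Open TCP 8080 to the world on the instance's security group
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp \
        --port 8080 \
        --cidr 0.0.0.0/0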
