How to have a VIP for two servers (Prod and DR) on Linux

We have prod and DR servers and would like a VIP for them. They are not exposed to the internet. One server is active for the first six months; after a DR drill, the DR server acts as prod for the next six months. Upstream systems push files (CSV, text, or ZIP) to our servers via SFTP, roughly 200-300 MB per day. Currently, before every DR drill, these upstream systems must raise a change request to update the IP, which takes at least two weeks. To resolve this, we decided to provide a VIP from our end, so that they can use the VIP to transfer files via SFTP.
Note: the DR server will be up but not active; app services won't be running.
File transfer via SFTP is not recommended over an F5 network (we are not on F5).
Both servers, Prod and DR, run on VMware.
We would like to have a VIP for these servers. Need your advice and suggestions.
Thanks in advance.
Bala

Bala, I think I understand your question. It's not entirely clear what you're asking, but my reading is that you are trying to determine how to load balance the two server nodes.
==> First of all, your group will have to acquire an F5 load balancer configured in accordance with your network requirements; I'll assume the load balancer is already live on the network. To load balance the two servers, you create a pool consisting of the two servers, then create a virtual server and associate the pool with it. Below are the essential steps. Also note that the server nodes have to be added under Nodes in the load balancer first.
Add Node:
Go to Local Traffic --> Nodes --> Create
a. Give the node a name.
b. Enter the IP address of the node in the IP field.
c. In the Configuration section, select "Node Default" for the Health Monitor;
leave the rest at their default settings of 1, 0, 0.
Create Pool:
From the GUI, go to Local Traffic --> Pools --> Create
a. Give the pool a name.
b. For now, use tcp as the monitor (select it from the available options).
c. In the Resources section, fill in the following:
Load Balancing Method ==> Round Robin (traffic is distributed in a circular fashion).
Other options include Least Connections, Observed, Random, and more. A good reference with links on creating pools, VIPs, etc.:
https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/ltm_configuration_guide_10_0_0/ltm_pools.html
Create the Virtual Server:
In the GUI, go to Local Traffic --> Virtual Servers --> Create
a. Give the virtual server a name.
b. For the Type, select "Standard" in this case. There are other options that do not apply to your request at the moment, but I advise you to read up on them: Forwarding (Layer 2), Forwarding IP, Performance (HTTP), Performance (Layer 4), Stateless, Reject, Internal.
c. In the Source field, enter 0.0.0.0/0.
d. In the Destination field, select "Host" and enter the IP address of the VIP,
which is normally the address the URL's DNS name resolves to.
e. Select the service port. For HTTP traffic select http/80; this could be whatever port your services listen on. Note that for port 443/HTTPS you will require an SSL certificate.
f. In the Configuration section, select Advanced and set the following:
Protocol: TCP
Protocol Profile (Client): tcp
HTTP Profile: http
SNAT: Auto Map
I am assuming you are using SNAT Auto Map here, which is much simpler to deal with; otherwise, a SNAT pool will have to be created.
g. At the bottom, under "Resources", select the pool you created above in the "Default Pool" drop-down.
h. Select "Source Address" for the "Default Persistence Profile".
Click Finished.
At this point, if the server nodes are live and configured with the appropriate content, the resources should be reachable. There are other criteria, such as monitors that can be configured to check a particular page, but that is for another session.
I hope I pointed you in the right direction.
Note: you have to determine the type of service and application running on the servers. If the application requires requests to return to the same server, the source-address persistence set in step h above is what keeps a client pinned to one node.
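For reference, the three objects created in the GUI steps above can also be expressed as iControl REST payloads and scripted. This is a minimal sketch only: the management address, object names, IP addresses, and the SFTP port are illustrative assumptions, and authentication and the actual POST calls are omitted.

```python
# Sketch: the node, pool, and virtual server from the GUI steps above,
# expressed as F5 iControl REST payloads. All names, IPs, and the BIG-IP
# management address are illustrative assumptions.

BIGIP = "https://bigip.example.com"  # hypothetical management address

def node_payload(name, address):
    # Local Traffic -> Nodes -> Create
    return {"name": name, "address": address, "monitor": "default"}

def pool_payload(name, members):
    # Local Traffic -> Pools -> Create: tcp monitor, Round Robin
    return {
        "name": name,
        "monitor": "tcp",
        "loadBalancingMode": "round-robin",
        "members": [{"name": m} for m in members],  # "ip:port" strings
    }

def virtual_payload(name, vip, port, pool):
    # Local Traffic -> Virtual Servers -> Create (Standard type)
    return {
        "name": name,
        "destination": f"{vip}:{port}",
        "source": "0.0.0.0/0",
        "ipProtocol": "tcp",
        "sourceAddressTranslation": {"type": "automap"},  # SNAT Auto Map
        "pool": pool,
        "persist": [{"name": "source_addr"}],  # step h above
    }

# Example: an SFTP VIP in front of the prod and DR nodes
nodes = [node_payload("prod", "10.0.0.10"), node_payload("dr", "10.0.0.11")]
pool = pool_payload("sftp_pool", ["10.0.0.10:22", "10.0.0.11:22"])
vs = virtual_payload("sftp_vip", "10.0.0.100", 22, "sftp_pool")
# Each dict would be POSTed with credentials to e.g.
# f"{BIGIP}/mgmt/tm/ltm/pool" (node/pool/virtual endpoints); omitted here.
```

The same objects can of course be created entirely in the GUI as described; the payloads just make the relationships between node, pool, and virtual server explicit.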

Related

Standby setup for redundancy using Kubernetes Ingress

I am trying to set up a standby as below:
Normally, clients connect to the primary hostname, for example updates.mysite.com, which resolves to an Ingress LoadBalancer IP, say A, behind which is my actual (primary) application.
I have set up another standby application, which is basically a copy in a different data center, that clients would need to connect to in case the primary site goes down. Clients would then connect to the LoadBalancer IP B, behind which is the standby application.
Options I considered:
1. When the primary goes down, bring up the standby so that updates.mysite.com now points to IP B. We set the TTL very low, say 1 minute. For at most 1 minute, clients may try to connect to the primary, after which they would be switched to the new IP (same hostname). The problem here is that it leads to unnecessary calls to re-fetch the DNS record for something that happens infrequently.
2. Set up updates2.mysite.com and let clients know that on disconnection, or when updates.mysite.com is not available, they need to connect to updates2.mysite.com.
3. Set up updates2.mysite.com, which would point to IP B. Expose an API that returns the hostname to connect to; clients should get the hostname from there before trying to connect, and also on disconnections. We make sure the API returns the correct hostname.
What would be the best approach, and are there any other effective ways I can achieve this? I prefer Option 1 due to the small amount of work and simple setup, but I might be missing something.
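The client-side behaviour in Option 2 can be sketched in a few lines: try the primary hostname and fall back to the standby on failure. This is a sketch under assumptions; the connect function is injected so the policy is testable, and the hostnames are the ones from the question.

```python
# Sketch of Option 2's client-side fallback: try the primary hostname,
# fall back to the standby on a connection failure.

PRIMARY = "updates.mysite.com"
STANDBY = "updates2.mysite.com"

def connect_with_fallback(connect, hosts=(PRIMARY, STANDBY)):
    """Try each host in order; return (host, connection) on first success."""
    last_error = None
    for host in hosts:
        try:
            return host, connect(host)
        except ConnectionError as err:
            last_error = err  # this host is down: move on to the next one
    raise last_error

# Example with a fake connector in which the primary site is down
def fake_connect(host):
    if host == PRIMARY:
        raise ConnectionError("primary unreachable")
    return f"session-to-{host}"

host, session = connect_with_fallback(fake_connect)
```

The same loop also covers Option 3 if the host list is fetched from the lookup API instead of being hard-coded.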

Make a domain target two servers depending on the port

I have an older server with SMTP configured, and I've bought a faster server.
I want to make this new server the domain's main target and keep the older server as the SMTP server.
But I want the domain to target both servers depending on the port being used.
How can I do that?
What will do this is NAT (Network Address Translation).
How do you receive your internet?
In general the path is:
Internet -> Modem -> Firewall -> Servers
If your scenario is like that, the configuration must be made on the firewall.
Depending on your firewall solution, this configuration may be called a "publish rule" or a "NAT rule".
If you're using Azure it's simple: you just need to create a load balancer to do it.
As you have two servers, you need another element to receive the traffic and forward it.
For it to work, all users must use the load balancer's IP (you need to adjust your DNS record).
You need to configure the load balancer to forward traffic to the appropriate server based on the requested port.
The official documentation can help: https://learn.microsoft.com/en-us/azure/load-balancer/tutorial-load-balancer-port-forwarding-portal
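The decision the firewall or load balancer makes is just a port-to-backend map. A minimal sketch, with illustrative server names and an assumed port split (web on the new server, SMTP on the old one):

```python
# Sketch of port-based forwarding: one public address, two backends
# chosen by destination port. Names and ports are illustrative.

PORT_RULES = {
    80: "new-server",    # web traffic to the faster server
    443: "new-server",
    25: "old-server",    # SMTP stays on the older server
    587: "old-server",   # mail submission
}

def backend_for(port):
    """Return the backend a NAT/publish rule would forward this port to."""
    try:
        return PORT_RULES[port]
    except KeyError:
        raise ValueError(f"no rule for port {port}: traffic is dropped")
```

Each entry corresponds to one NAT/publish rule (or one Azure inbound NAT rule) on the device in front of the servers.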
Take care with the MX change; it may not be necessary:
Every domain must have at least one MX record, which is what handles email requests.
If you split off your mail server just for webmail purposes, for example, it may not be necessary to change the MX record.
If you prefer, share your complete situation here and we'll try to help with more precision.
I've done some more research and found that on the DNS server I could create an MX record targeting the other server. Is that right?
https://support.google.com/a/answer/48090

How to secure Amazon EC2 with Tomcat7 and mySQL

I'm very new to EC2. I have Tomcat 7 and MySQL installed. The security group I have set up is:
Custom TCP Rule | TCP | 8080
SSH             | TCP | 22
MYSQL           | TCP | 3306
For outbound, it is open to all traffic.
I got a report from Amazon that said the following:
Instance ID: i-1e42db06
AWS ID: 772517067349
Reported Activity: DoS
What should I do to stop it?
And I also got a bill as below:
$0.090 per GB - first 10 TB / month data transfer out beyond the global free tier: 637.521 GB
Please advise me on steps to protect my instance in EC2.
Updated: with email from Amazon
We've received a report(s) that your EC2 instance(s)
AWS ID: 772517067349
Instance Id: i-1e42db06
IP Address: 172.31.25.202
has been implicated in activity that resembles a Denial of Service attack against remote hosts; please review the information provided below about the activity.
Please take action to stop the reported activity and reply directly to this email with details of the corrective actions you have taken. If you do not consider the activity described in these reports to be abusive, please reply to this email with details of your use case.
If you're unaware of this activity, it's possible that your environment has been compromised by an external attacker, or a vulnerability is allowing your machine to be used in a way that it was not intended.
Check the instance monitoring panel in EC2 to see your traffic, and check the logs on your server to see what kind of traffic it is.
MySQL has just had a zero-day exploit that you could be vulnerable to, and SSH has had quite a few critical bugs lately, so it's not only your firewall settings that you need to take into account here; you need to secure the services behind those ports too.
Besides this, if the web application deployed in Tomcat contains a vulnerability, you are open to all sorts of attacks, many of which will be reflected in an increase in traffic. Tomcat itself, of course, must also be up to date and properly secured.
There are just too many things that could be happening to enumerate, but if the "transfer out" in your question refers to outbound traffic, you could have been compromised and the server could be part of a botnet. It's not clear from the above whether they are reporting that you are suffering a DoS or that you are launching one.
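A first concrete step is tightening the security group so SSH and MySQL are not open to the world. A sketch of what the restricted rules could look like, in the structure boto3's `authorize_security_group_ingress` expects; the trusted CIDR and group ID are illustrative assumptions, and no AWS call is made here:

```python
# Sketch: restrict SSH and MySQL ingress to a trusted CIDR instead of the
# wide-open rules in the question. The CIDR is a hypothetical office/VPN
# range; the dicts match boto3's IpPermissions structure.

TRUSTED_CIDR = "203.0.113.0/24"  # hypothetical trusted range

def locked_down_rules(trusted_cidr):
    def rule(port, cidr, desc):
        return {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": cidr, "Description": desc}],
        }
    return [
        rule(8080, "0.0.0.0/0", "public web (consider fronting with 443/TLS)"),
        rule(22, trusted_cidr, "SSH from trusted range only"),
        rule(3306, trusted_cidr, "MySQL from trusted range only"),
    ]

rules = locked_down_rules(TRUSTED_CIDR)
# With boto3, these would be applied after revoking the old rules, e.g.:
# ec2.authorize_security_group_ingress(GroupId="sg-...", IpPermissions=rules)
```

If the database is only used by the local Tomcat, an even better option is removing the 3306 rule entirely and binding MySQL to localhost.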

Two external IPs one WebServer/Website

I'm having the following dilemma: I have a website on IIS with two internal IPs, and each of those IPs is NATed to a different external IP (each IP is from a different ISP). I also configured round-robin DNS (two A records with the same name but different IPs). Basically, this balances the traffic between the two ISPs, which is what we want. The thing is that apparently this configuration (DNS round robin) is meant for a cluster of servers where each server has its own ISP on its own NIC, so the traffic from the web server to the client goes out over that ISP.
Right now we are being told that no matter where our inbound traffic comes from, the outbound traffic always goes through our main WAN, which is also OK, because we have tested that when the primary WAN link is down, the website keeps working on the secondary link.
OK, the question is: do you think there may be a problem with this configuration? Is DNS round robin also useful in this configuration?
Thanks a lot for your feedback.
Normally when you host a web service, the responses are much bigger than the inbound traffic (you typically receive an HTTP GET and deliver the whole content back), so it would make much more sense to balance the outbound traffic over your ISPs to get value out of your additional bandwidth.
Does it make sense? Yes: you can lose one ISP and your site is still available (assuming you do health checks on your DNS server to determine whether the sites are available before you send the IP address back; if you always return both IPs even when one ISP is down, it won't help you at all).
It would be better to add an additional server, or do policy-based routing on your single server, sending the response out of the interface where the request was received.
Hope that helps!
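The health-checked DNS behaviour described above can be sketched as a simple filter over the A records: only hand out addresses whose ISP link is currently up. The IPs and health states below are illustrative assumptions.

```python
# Sketch of health-checked round-robin DNS: filter the A records by a
# health check so a dead IP is never handed to clients. Illustrative IPs.

A_RECORDS = ["198.51.100.10", "192.0.2.10"]  # one external IP per ISP

def answers(records, is_healthy):
    """Return only the healthy A records; if everything looks down,
    return them all rather than an empty answer."""
    live = [ip for ip in records if is_healthy(ip)]
    return live or list(records)

# Example: ISP 2's link is down, so only ISP 1's address is answered
health = {"198.51.100.10": True, "192.0.2.10": False}
result = answers(A_RECORDS, lambda ip: health[ip])
```

Without this filtering step, plain round-robin DNS keeps sending roughly half the clients to the dead link until its record is removed.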

IIS network load balancing

I have a clustered setup with 4 nodes running Windows Server 2008 R2 with IIS 7.
Failover kicks in when one of the nodes fails, but is there a way to have it round-robin distribute incoming calls to different servers?
Distribution happens when incoming requests come from different clients, but our investigation shows that if one client is making many requests, they all go to the same server.
I would like the cluster to round-robin requests so that node 1 receives the first request, node 2 receives the second request, and so on.
Each request can take a long time, and having all requests go to the same node while the other three idle is causing us performance issues. Thanks!
NLB port rules have a couple of properties that control how requests are routed. The relevant properties seem to be:
Filtering mode - specifies whether a single host or multiple hosts in the cluster handle traffic for the given port
Affinity - controls how traffic is routed to hosts in the cluster
It is likely you need to set the Affinity value to None, which allows requests from a single client to be routed to multiple hosts within the cluster. The docs do not state whether round robin or another algorithm is used for load balancing.
For more on Filtering Mode and Affinity: Network Load Balancing Manager Properties
How to: Edit a Network Load Balancing Port Rule
Round-robin load balancing with source affinity will not distribute traffic coming from a single source. You will need to configure your load balancer for 'Least Connections':
basically, the NLB passes each new connection to the pool member or node that has the fewest active connections.
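The least-connections choice described above is a one-line selection over the nodes' active-connection counts. A minimal sketch with illustrative node names and counts:

```python
# Sketch of least-connections balancing: each new connection goes to the
# node with the fewest active connections, which spreads load even when
# all traffic comes from a single chatty client.

def least_connections(active):
    """Pick the node name with the fewest active connections."""
    return min(active, key=active.get)

# Example: four nodes, node1 is busy handling the chatty client
active = {"node1": 9, "node2": 2, "node3": 2, "node4": 5}
chosen = least_connections(active)
active[chosen] += 1  # the new connection is assigned to that node
```

Contrast this with round robin, which ignores the counts and would happily queue a tenth request behind node1's nine long-running ones.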
