Verify Configuration of multiple OpenAM and OpenDJ

I have a solution that uses two OpenAMs (v12.0.0) and two OpenDJs (v2.6.0). The OpenAMs are behind a load balancer, and the OpenDJs are configured so that one OpenAM prefers one OpenDJ and the other OpenAM prefers the other OpenDJ. Each OpenAM machine hosts the OpenDJ it prefers. I followed this post to configure the OpenDJ instances in OpenAM:
Original Blog post
Once I had my OpenAM configured with two servers in the site, I looked at the value of com.iplanet.am.lbcookie.value in the site config, which reportedly should give me the server ID, and calculated the site ID to be one more than the highest server ID. I got 01 for the server ID of my first instance and 03 for my second. So in my LDAP server values I have the following:
opendj1:1389|01|04
opendj2:1389|03|04
From my reading, this should mean OpenDJ1 will be used by OpenAM1 (server ID 01) and OpenDJ2 will be used by OpenAM2 (server ID 03). If OpenAM1 can't access OpenDJ1, it will fail over to OpenDJ2. Is this correct?
The reason I ask is that the OpenAM access logs show an almost 50-50 split in the number of requests each instance is handling, while the OpenDJ access logs show requests favouring OpenDJ1, i.e. a 75-25 split between OpenDJ1 and OpenDJ2.
Any advice welcome.

You should check the access logs of both DJ servers and identify which AM server is responsible for what proportion of the traffic. Once you see the culprit, make sure that the OpenAM server in question doesn't have any connection issues mentioned in its debug logs.
There is a good chance that one of the AM servers had to fail over to the other DJ instance. Please keep in mind that after a failover and recovery, OpenAM retains the connections made during the failover and will keep sending heartbeat requests to that DJ node.
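To quantify that from the DJ side, here is a minimal sketch that tallies connections per client address in an OpenDJ access log (the log path and the from= field are assumptions based on the usual OpenDJ 2.6 access-log format; adjust the pattern to match yours):

    import re
    from collections import Counter

    # Count CONNECT entries per client address (i.e. per AM server).
    # Path and line format are assumptions - adjust for your install.
    counts = Counter()
    client_re = re.compile(r'from=([\d.]+):\d+')

    with open('/path/to/opendj/logs/access') as log:
        for line in log:
            if 'CONNECT' in line:
                match = client_re.search(line)
                if match:
                    counts[match.group(1)] += 1

    for client_ip, total in counts.most_common():
        print(f'{client_ip}: {total} connections')

If one AM server shows up against both DJ instances, that is the failover-and-retained-connections behaviour described above.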

Azure: RDP and brute force attack

I have a small server on Azure (Windows 10 Pro). There is only one service (a REST web service) on this server and it's not critical: if the service is down for a few hours or even days, or if someone steals the data, that's not a big deal.
I'm the only person who has access to this server; I have a fairly strong password with a custom admin user name, so I just use RDP to manage the server without a VPN. A few days ago I saw that my Azure bill was higher than usual (more or less +10 USD) and that it was because of higher "data transfer out", so I'm trying to understand the reason.
I saw that:
in my web server access and error logs (Apache) there are about 80 connections that were blocked (HTTP code 400/403).
in my web service log (custom log) there are no requests at all (they were blocked by Apache, which requires a valid user and password).
in my Windows security event logs it's more complicated: I have about 31,000 "audit failure" events. It looks like a brute-force attack, probably through the RDP port (login events with different account names). I haven't seen any successful attempts. So in my opinion it's this brute-force attack that made my bill higher.
So my question is: could you help me evaluate how much data transfer those 31,000 connections through RDP could represent? Are there other elements I should take into consideration?
To avoid this kind of thing I'll try to install a VPN. For now I just allow my IP address through RDP in the Azure portal.
Thank you for your help
Loic
For this, you can block all incoming ports except the RDP one, i.e. 3389. (You can also restrict it so that only your public IP can use it.)
Blocking unused ports is always the best option.
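As a sketch, the same restriction can be scripted at the OS level with Windows' netsh (the IP and rule name are placeholders; in Azure the equivalent is usually an NSG rule like the one the asker set in the portal, and the netsh flags should be verified on your Windows version):

    import subprocess

    # Placeholder - replace with your own public IP.
    MY_IP = '203.0.113.10'

    # Allow inbound RDP (TCP 3389) only from one address via the
    # Windows firewall. Verify flags with: netsh advfirewall firewall /?
    subprocess.run(
        ['netsh', 'advfirewall', 'firewall', 'add', 'rule',
         'name=RDP-allow-my-IP', 'dir=in', 'action=allow',
         'protocol=TCP', 'localport=3389', f'remoteip={MY_IP}'],
        check=True,
    )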
Also try wail2ban; I used it a long time ago to protect against brute-force attacks.
I would recommend activating the just-in-time access feature in Azure.
This feature protects your management ports from attacks and it's a good option if you don't want to spend more money on an Azure App Gateway or Azure Bastion.
https://learn.microsoft.com/en-us/azure/defender-for-cloud/just-in-time-access-usage?tabs=jit-config-asc%2Cjit-request-asc
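As for the original question of how much data 31,000 failed logons might represent, here is a rough back-of-envelope sketch (the per-attempt size and the egress rate are assumptions, not measurements; check your region's pricing):

    # Rough estimate of egress attributable to failed RDP logons.
    # Assumptions: each failed RDP/NLA handshake exchanges roughly
    # 10-50 KB in total (only part of it outbound), and egress costs
    # on the order of 0.087 USD per GB in many Azure regions.
    attempts = 31_000
    bytes_per_attempt = 50_000        # pessimistic upper bound, assumed
    egress_usd_per_gb = 0.087         # assumed regional rate

    total_gb = attempts * bytes_per_attempt / 1e9
    print(f'~{total_gb:.2f} GB upper bound')           # ~1.55 GB
    print(f'~{total_gb * egress_usd_per_gb:.2f} USD')  # ~0.13 USD

If those assumptions are anywhere near right, the brute-force attempts alone are unlikely to explain +10 USD of egress (which implies on the order of 100+ GB), so it's worth looking for other sources of outbound traffic as well.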

Configuring Azure SQL Server firewall - is allowing all IP's to connect a red flag?

I just created my first Azure SQL Server and database, and am now trying to configure the firewall so that my web application can connect and make changes to the single database on the server.
I see that you can allow all clients to connect by adding a rule that covers the entire range (0.0.0.0 to 255.255.255.255).
However, is this bad practice? I see my client IP address listed in the Azure portal. Can I get clarification that I should allow just that single IP address access for now, and then later, when I publish my web application to Azure, restrict access to the IP address where that web app lives (assuming that people can only make database changes through my front-end application)?
Thanks
Yes, this is bad practice. There are other layers they'd have to get through (e.g., the server login), but this opens the front door and allows anyone to pick away at it at their leisure.
If you have a web server hosting your web application (which you must), whitelist that server's IP address (and perhaps your own, for development/admin purposes). But allowing all IPs is not considered good practice, no.
One particular case where you may really want to allow this is if you are distributing a desktop application to unknown clients that must connect to the backend. It becomes extremely enticing at that point, but even so, the recommended practice (or at least, my recommended practice) would be to utilize a web service that accepts an application registration on startup of the app, registers the client IP temporarily through that, and then have a background server (think WebJobs) flush the firewall rules, say, every night or so (or, for a more elaborate setup, accept the registration with a timeout and use a continuous WebJob to poll for the timeout and refresh/revoke as necessary).
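A minimal sketch of that register/flush idea using the azure-mgmt-sql package (the resource names are placeholders and the SDK signatures should be verified against the version you install; this is an illustration, not the answerer's actual implementation):

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.sql import SqlManagementClient

    # Placeholders - substitute your own values.
    SUBSCRIPTION_ID = '<subscription-id>'
    RESOURCE_GROUP = 'my-resource-group'
    SERVER_NAME = 'my-sql-server'

    client = SqlManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    def register_client(rule_name: str, client_ip: str) -> None:
        """Temporarily whitelist one client IP (on app registration)."""
        client.firewall_rules.create_or_update(
            RESOURCE_GROUP, SERVER_NAME, rule_name,
            {'start_ip_address': client_ip, 'end_ip_address': client_ip},
        )

    def flush_rules(prefix: str = 'client-') -> None:
        """Drop all temporary client rules (run nightly, e.g. from a WebJob)."""
        for rule in client.firewall_rules.list_by_server(RESOURCE_GROUP, SERVER_NAME):
            if rule.name.startswith(prefix):
                client.firewall_rules.delete(RESOURCE_GROUP, SERVER_NAME, rule.name)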

Redundancy with self hosted ServiceStack 3.x service [closed]

We are running a self-hosted AppService with ServiceStack 3.x.
We would like to have an automatic failover mechanism on the clients if the current service running as master fails.
Clients at the moment are strongly typed C# clients using the default ServiceStack JSON client, but we will add web-based clients (AngularJS) in the future.
Does anybody have an idea, how that could be done?
Server side redundancy & failover:
That's a very broad question. A ServiceStack self-hosted application is no different from any other web-facing resource, so you can treat it like a website.
Website Uptime Monitoring Services:
You can monitor it with regular website monitoring tools. These can be as simple as an uptime monitoring site that pings your web service at regular intervals to determine whether it is up and, if not, takes an action, such as triggering a restart of your server or simply sending you an email to say it's not working.
Cloud Service Providers:
If you are using a cloud provider such as Amazon EC2, they provide CloudWatch services that can be configured to monitor the health of your host machine and the service. In the event of failure, it can restart your instance or spin up another one. Other providers offer similar tools.
DNS Failover:
You can also consider DNS failover. Many DNS providers can monitor service uptime, and in the event of a failure their service will change the DNS record to point to another standby service, so the failover is transparent to the client.
Load Balancers:
Another option is to put your service behind a load balancer and have multiple instances running your service. The likelihood of all the nodes behind the load balancer failing is usually low, unless there is something catastrophically wrong with your service design.
Watchdog Applications:
As you are using a self-hosted application, you may consider making another application on your system that simply checks that your service application host is running and, if not, restarts it. This will handle cases where an exception has caused your app to terminate unexpectedly - of course this is not a long-term solution; you will need to fix the exception.
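A minimal sketch of such a watchdog (the health URL and restart command are assumptions; a real one would add logging and backoff):

    import subprocess
    import time
    import urllib.request

    # Assumed values - point these at your own service host.
    HEALTH_URL = 'http://localhost:8080/ping'
    RESTART_CMD = ['MyServiceHost.exe']  # hypothetical self-host binary

    def is_alive(url: str, timeout: float = 5.0) -> bool:
        """Return True if the service answers an HTTP request."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    while True:
        if not is_alive(HEALTH_URL):
            # Host is down: start it again. Fixing the underlying
            # exception is still the real long-term fix.
            subprocess.Popen(RESTART_CMD)
        time.sleep(30)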
High Availability Proxies (HAProxy, NGINX etc):
If you run your ServiceStack application using Mono on a Linux platform, there are many high-availability solutions, including HAProxy and NGINX. If you run on Windows Server, it provides failover mechanisms of its own.
Considerations:
The right solution will depend on your environment, your project budget, and how quickly you need the failover to resolve. The ultimate consideration should be: where will the service fail over to?
Will you have another server running your service, simply on standby - just in case?
Will you use the cloud to start up another instance on demand?
Will you try and recover the existing application server?
Resources:
There are lots of articles out there about failover of websites; as your web service uses HTTP like a website, they will also apply here. You should research High Availability.
Amazon AWS has a lot of solutions to help with failover. Their Route 53 service is very good in this area, as are their load balancers.
Client side failover:
Client side failover is rarely practical. In your clients you can ultimately only ever test for connectivity.
Connectivity Checking:
When connectivity to your service fails, you'll get an exception. Upon getting the exception, the only solution is to change the target service URL and retry the request. But there are a number of problems with this:
It can be as expensive as server side failover, as you have to keep the failover service(s) online all the time for the just-in-case moments. Some server side solutions would allow you to start up a failover service on demand, thus reducing cost significantly.
All clients must be aware of the URL(s) to fail over to. If you managed the failover at the DNS, i.e. server side, then clients wouldn't have to worry about this complexity.
Your client can only see connectivity failures; there may not be an issue with the server at all, it may be the client's connectivity. Imagine the client's wifi goes down for a few seconds while your primary service server is handling their request. During that time the client gets the connectivity exception and sends the request to the failover secondary service server, at which point the wifi comes back online. Now you have clients using both the primary and secondary services, so their network connectivity issues become your data consistency problems.
If you are planning web-based clients, then you will have to set up CORS support on the server, and all clients will require compatible browsers so they can change the target service URL. CORS requests have the disadvantage of more overhead than regular requests, because the client has to send preflight OPTIONS requests too.
Connectivity error detection in clients is rarely fast. Sometimes it can take in excess of 30 seconds before a client times out a request as having failed.
If your service API is public, then you rely on the end user implementing the failover mechanism. You can't guarantee they will do so, or that they will do so correctly, or that they won't take advantage of knowing your other service URLs and send requests there instead. Besides, it looks very unprofessional.
You can't guarantee that the failover will work when needed. It's difficult to guarantee that for any system; even big companies have issues with failover. Server-side failover solutions sometimes fail to work properly, but this is even more true for client-side solutions, because you can't test the failover ahead of time under all the different client-side environmental factors. Just because your implementation of failover in the client worked in your deployment, will it work in all deployments? The point of a failover solution, after all, is to minimise risk. The risk of server-side failover not working is far lower than client-side, because it's a smaller, controllable environment which you can test.
Summary:
So while my considerations may not favour client-side failover, if you were going to do it, it's a case of catching connectivity exceptions and deciding how to handle them. You may want to wait a few seconds and retry your request to the primary server before swapping to the secondary, just in case it was an intermittent error.
So:
Catch the connectivity exception
Retry the request (maybe after a small delay)
If it still fails, change the target host and retry
If that fails, it's probably a client connectivity issue.
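A minimal sketch of that retry-then-failover loop (shown in Python for illustration; the hosts and delay are placeholders, and a C# JsonServiceClient version would follow the same shape):

    import time
    import urllib.request

    # Assumed hosts - primary first, then standby.
    HOSTS = ['https://primary.example.com', 'https://secondary.example.com']
    RETRY_DELAY = 2.0  # seconds to wait before retrying the primary

    def request_with_failover(path: str) -> bytes:
        """Try the primary, retry it once after a delay, then fail over."""
        last_error = None
        for attempt, host in enumerate([HOSTS[0]] + HOSTS):
            if attempt == 1:
                time.sleep(RETRY_DELAY)  # maybe it was intermittent
            try:
                with urllib.request.urlopen(host + path, timeout=10) as resp:
                    return resp.read()
            except OSError as exc:  # URLError and socket timeouts
                last_error = exc
        # Every host failed: probably a client connectivity issue.
        raise ConnectionError('all hosts unreachable') from last_error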

Connection to LDAP with multiple DCs (rotating on/off) (Confluence running on Tomcat behind IIS)

I'm looking for some advice, or a pointer to more information, on how to solve this:
I have Confluence 3.5 running on Tomcat 6, connected by AJP to an IIS HTTP server.
In Confluence's Administration it is possible to configure a connection to Active Directory through an LDAP server, but I have a problem: nslookup shows me multiple DCs (about 10 IP addresses, of which 6 are always online and 4 are offline - they rotate). How and where do I have to configure this? (I think it is not possible to configure it directly in Confluence, so should I configure it in Tomcat's server.xml via a "realm", or...?)
thank you in advance for answering my question.
This problem is probably quite common, but in case you run into trouble with it, try a local LDAP load balancer. Such a tool queries the defined AD connections; in my case Confluence's queries are redirected through "pen" to a free and available AD server.
(It creates a local service on a defined port, so Confluence is configured to connect to it as if it were the LDAP server, and it redirects all queries to the AD servers.)
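For illustration, here is a toy version of what such a balancer does - listen locally and forward each connection to the first reachable AD host (the host list and ports are assumptions; use pen or another battle-tested proxy in production):

    import socket
    import threading

    # Assumed AD hosts - replace with your domain controllers.
    AD_HOSTS = [('dc1.example.com', 389), ('dc2.example.com', 389)]
    LISTEN_PORT = 3890  # point Confluence's LDAP connection here

    def pipe(src: socket.socket, dst: socket.socket) -> None:
        """Copy bytes one way until either side closes."""
        try:
            while data := src.recv(4096):
                dst.sendall(data)
        except OSError:
            pass
        finally:
            dst.close()

    def connect_first_reachable() -> socket.socket:
        """Return a connection to the first AD host that accepts."""
        for host in AD_HOSTS:
            try:
                return socket.create_connection(host, timeout=3)
            except OSError:
                continue
        raise ConnectionError('no AD host reachable')

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('127.0.0.1', LISTEN_PORT))
    server.listen()

    while True:
        client, _ = server.accept()
        try:
            backend = connect_first_reachable()
        except ConnectionError:
            client.close()
            continue
        threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
        threading.Thread(target=pipe, args=(backend, client), daemon=True).start()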

Load-Balancing Windows Azure Instances Locally

When deploying a multi-instance WebRole to the Windows Azure Emulator, the emulator runs multiple IIS Express instances of the WebRole, each one on a reserved local IP, like:
127.255.0.1
127.255.0.2
127.255.0.3
The problem is that I want to access the WebRole as if it were really deployed on Azure; I need to check that session state is persisted between instances.
Since my session ID is stored in a cookie, each time I connect to a different instance I need to manually 'inject' the cookie into the request to check the session data (since the browser considers the IP of the next instance to be a different domain).
Is there a way I can use a hostname (on a Windows 7 machine) that will point randomly to one of those IPs?
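For the manual cookie-injection check described above, a minimal sketch (the cookie name and the echo page are assumptions to adapt to your app):

    import urllib.request

    # Assumed values: the emulator's per-instance IPs, the ASP.NET
    # session cookie, and a page that echoes session-stored data.
    INSTANCES = ['127.255.0.1', '127.255.0.2', '127.255.0.3']
    SESSION_COOKIE = 'ASP.NET_SessionId=<id-copied-from-browser>'

    # Send the same session cookie to every instance; if session state
    # is shared between instances, each response should show the same
    # session data.
    for ip in INSTANCES:
        req = urllib.request.Request(
            f'http://{ip}/session-check',      # hypothetical echo page
            headers={'Cookie': SESSION_COOKIE},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(ip, resp.read()[:80])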
Well, apparently the emulator does load balance all requests between instances:
Clicking 'Debug' on the cloud project will open a web page with an IP that is the virtual load balancer (usually 127.0.0.1:80, if not already taken).
Yet there were two things that misled me in the first place:
1. The list of multiple IIS Express instances, each with its own binding (image attached).
2. Implicit affinity: I made my web application output the instance ID and kept getting the same instance ID all the time. The reason for that is (I guess) the affinity that the emulator enforces (probably using cookie comparison).
Conclusion:
If you want to manually load balance or control the affinity yourself, you can leverage IIS Server Farming capabilities (as I did eventually) to emulate load balancing.
(Apache/Nginx as a kind of 'reverse proxy' is also a good option, but I preferred to stick with products that are already installed and in use.)
