I am working on a Sitecore application where I need to set up a 3-node cluster, with Solr configured on 2 of the servers and ZooKeeper on all 3, so I am looking for a solution to load balance the Solr instances. We are working on Azure Windows 2012 VMs. The front-end application is built on top of Sitecore, and we are using IIS as our web server. Which option would be best without going for a 3rd-party load balancer: Solr's internal load balancer, LBHttpSolrServer, or the external Azure-provided load balancer? Please suggest one, but I can't go for a 3rd-party load balancer. We have 5 CD servers.
Could I go for a server farm using the Application Request Routing module? If so, where would I need to install it: on the CD servers or on the Solr servers?
Based on all the information you have provided, the best approach is to deploy LBHttpSolrServer as your load-balancing solution.
Please find more info here: LBHttpSolrServer
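For reference, LBHttpSolrServer is the client-side load balancer in Solr's SolrJ (Java) client library, so the balancing happens in the querying application rather than in a separate server process. A minimal sketch of its use (hostnames and core name below are placeholders) looks like this:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.impl.LBHttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    import java.io.IOException;

    public class LbQueryExample {
        public static void main(String[] args) throws IOException, SolrServerException {
            // Round-robins requests across the listed Solr instances and
            // temporarily drops any instance that stops responding.
            LBHttpSolrServer lb = new LBHttpSolrServer(
                    "http://solr1:8983/solr/collection1",
                    "http://solr2:8983/solr/collection1");

            QueryResponse response = lb.query(new SolrQuery("*:*"));
            System.out.println("Found " + response.getResults().getNumFound() + " documents");
        }
    }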
Also, please note that even when running as a Windows service, it has to have a Path to Executable that runs either Solr or Apache Tomcat behind the scenes.
Thanks,
Vinicius
We have several websites currently running on the IIS web server, and we are trying to determine how to analyze the bandwidth consumption of each website.
We are using Windows Server 2019 and are not able to determine which website is using the most bandwidth on the server.
Please let me know if there is any solution available for this.
We have tried Log Parser 2.2 but were unable to figure out the values.
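For context, the sort of query we have been trying looks like the following, which should total bytes per site since IIS writes each site's W3C logs to its own W3SVC<SiteID> folder (the log path is a placeholder, and the sc-bytes / cs-bytes fields must be enabled in the IIS logging configuration, as they are off by default):

    LogParser.exe -i:W3C "SELECT SUM(sc-bytes) AS BytesSent, SUM(cs-bytes) AS BytesReceived FROM C:\inetpub\logs\LogFiles\W3SVC1\u_ex*.log"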
I created a sample web API in .NET Core, registered it in the default file in Nginx, and was able to access it from outside.
The API looked like https://<>/api/values.
Now I want to add more configuration to host more web APIs with different port numbers. The problem is how the default file will differentiate between the multiple APIs, since the base URL, i.e. localhost\<>, is the same for all of them.
You need to create server blocks. Each of these server blocks will handle/listen/respond to a different app. You can host as many apps as you want on a single Ubuntu machine using nginx this way.
This will be very helpful and describes the entire process of creating server blocks for your nginx server.
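As a rough sketch (domain names and ports below are placeholders), two server blocks proxying to two Kestrel apps would look something like this:

    # nginx picks the server block whose server_name matches the Host header.
    server {
        listen 80;
        server_name api-one.example.com;    # placeholder domain

        location / {
            proxy_pass http://localhost:5000;    # first .NET Core API
        }
    }

    server {
        listen 80;
        server_name api-two.example.com;    # placeholder domain

        location / {
            proxy_pass http://localhost:5001;    # second .NET Core API
        }
    }

If you only have one hostname, you can instead put two location blocks with different path prefixes inside a single server block and proxy each prefix to a different port.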
I have an Ubuntu Server on DigitalOcean which hosts a website, and a Windows Server on AWS which hosts another website.
I just built a MEAN.js stack app on my Mac, and I plan to deploy it to production.
It seems that most of the existing threads discuss using a new dedicated server. For example, this thread is about deploying on a new AWS EC2 instance; this video is about deploying on a new Windows Azure server; this one is about creating a new droplet in DigitalOcean.
My question is, is it possible to use an existing server (which hosts other websites), rather than creating a new server? If yes, will there be any difference in terms of performance?
My question is, is it possible to use an existing server (which hosts other websites), rather than creating a new server?
Yes. Both Windows and Ubuntu allow you to deploy multiple applications on the same instance.
For Ubuntu you can read this post, which will help you serve multiple apps.
That example uses Nginx, but you can follow it and go without a server like Apache or Nginx at all. If you need subdomains, I would suggest using Apache virtual hosts with the reverse proxy module and pm2.
For Windows and its IIS I would suggest using iisnode; you can find a lot of articles on Google about how to configure it.
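With iisnode, the core of the setup is a web.config in the site root that hands your Node entry point to the iisnode module. A minimal sketch (assuming your entry point is named server.js; real setups usually also add a URL rewrite rule so all paths reach the Node app):

    <configuration>
      <system.webServer>
        <handlers>
          <!-- Route requests for server.js through the iisnode module -->
          <add name="iisnode" path="server.js" verb="*" modules="iisnode" />
        </handlers>
      </system.webServer>
    </configuration>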
will there be any difference in terms of performance?
It depends on your applications. If you are already serving applications that handle huge traffic and need CPU and memory, I would not suggest running multiple apps on the same instance; but if you are going to run simple web apps, you can easily use the same instance.
Hope this answer will help you!
We have a web application running on AWS with the following architecture:
1 Elasticsearch cluster with 2 data nodes
1 auto-scaling load-balanced cluster of web servers
As Elasticsearch does some clever internal load balancing, we could just point all the web servers at one of the data nodes. But this would create a single point of failure: if that node goes down, I'm not going to get any query results.
My solution thus far has been to run Elasticsearch on each web server as a non-data node. Each web server queries its local Elasticsearch node, which in turn farms the request out to one of the data nodes. This seems to be the suggested approach on the Elasticsearch website.
This is great in that if one of the data nodes fails in some way, we don't lose the ability to serve search queries. However, it does mean Elasticsearch is using resources on each web server, and if we migrate to using Elastic Beanstalk (which I'm keen to do) then we'll need to somehow get Elasticsearch installed on our web instances. EDIT: I've succeeded with this now, but have yet to figure out how to specify a different config for each environment.
Is there another way to avoid a single point of failure without having Elasticsearch running on each web server?
I thought about using a load balancer in front of the data nodes to serve queries from the web servers, but that would also mean opening the cluster up to public access unless we set up a VPC to restrict access.
Is there a simpler solution I'm missing?
I don't think this directly answers your question, but if you are still ok with running ES on your web server nodes, you can customize the software that is installed using the .ebextensions mechanism, which allows you to run scripts and/or install packages when new Elastic Beanstalk instances are started up. If this isn't sufficient you can start your Elastic Beanstalk instances using a custom AMI.
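For example, a config file dropped into the .ebextensions folder of your application bundle runs on every new instance at deploy time and could install the client-only node there. A rough sketch, where the file name, install command, and config path are assumptions for your environment:

    # .ebextensions/elasticsearch.config -- hypothetical file name
    commands:
      01_install_es:
        command: yum install -y elasticsearch    # assumes an Elasticsearch package repo is configured
      02_client_only_node:
        # Matches the non-data, non-master node setup described above
        command: |
          echo "node.data: false" >> /etc/elasticsearch/elasticsearch.yml
          echo "node.master: false" >> /etc/elasticsearch/elasticsearch.yml

    services:
      sysvinit:
        elasticsearch:
          enabled: true
          ensureRunning: true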
Also, you may not be aware that you can run Elastic Beanstalk in a VPC.
As ServiceStack leaves it open whether to host a service in a web server or in a standalone app:
What is best in terms of performance, both raw and for a high number of clients?
Is hosting on Apache, nginx, XSP, or IIS just for added functionality, or for performance?
servicestack.net itself runs on Ubuntu / Nginx + MonoFastCGI, although we've been notified that others have been able to get better performance with self-hosting, which you can still run behind an Nginx/Apache reverse proxy if you still want access to a full-featured web server.
You can also wrap a self-hosted ServiceStack in a Linux Daemon.
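For reference, a minimal self-host sketch against the ServiceStack 3.x API of that era (the service name, DTO, and port below are placeholders):

    using System;
    using ServiceStack.ServiceInterface;
    using ServiceStack.WebHost.Endpoints;

    public class Hello { public string Name { get; set; } }

    public class HelloService : Service
    {
        public object Any(Hello request)
        {
            return new { Result = "Hello, " + request.Name };
        }
    }

    public class AppHost : AppHostHttpListenerBase
    {
        // Register every service in this assembly
        public AppHost() : base("SelfHost Example", typeof(HelloService).Assembly) { }

        public override void Configure(Funq.Container container) { }
    }

    class Program
    {
        static void Main()
        {
            var appHost = new AppHost();
            appHost.Init();
            appHost.Start("http://*:8080/");    // placeholder port
            Console.WriteLine("Listening on port 8080; press Enter to exit");
            Console.ReadLine();
        }
    }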
We ran into the same question while choosing a hosting scheme for our ServiceStack services. We ran some benchmarks with the same service hosted via self-host and under IIS. The self-hosted Windows service showed nearly 1.5x better performance than the IIS-hosted app.
Surely this is not an absolute number, and it may vary with the service's load type (CPU/IO), but it is clear that the IIS pipeline adds tons of overhead.
If you need speed and don't worry about all the features IIS can give you (monitoring / advanced routing / admin / etc.), self-host is the way to go. Our setup hides the ServiceStack hosts behind nginx nodes that handle all the routing/proxy/balancing, so we don't need the monstrous IIS pipeline.
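The nginx side of that setup is essentially an upstream pool of the self-hosted ServiceStack nodes; a rough sketch (addresses and ports are placeholders):

    # Pool of self-hosted ServiceStack nodes; round-robin by default
    upstream servicestack_pool {
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://servicestack_pool;
        }
    }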