We faced an issue in our staging environment where SignalR thought the request was coming from another domain, so we enabled CORS for the time being. However, we then realized that the connection was downgraded to long polling, which means we can't maintain server affinity.
We are using .NET 4 / SignalR 1.2.2, and our requests go through an F5 load balancer. We're trying to debug this issue, obtain logs, and disable CORS to get the exact details. I tried to map a hub with "http://domainname.com/signalr", but it didn't work: the application launched without complaining, but I could no longer connect to SignalR. Since we're on .NET 4, we can't move to the latest version / WebSockets.
What is the best way to instruct SignalR to allow a range of domains? (I've also tried multiple calls to map hubs, but that failed.)
UPDATE:
Upon further investigation, I've realized that the application can be accessed both internally and externally. SignalR seems to bind to the machine name, so everything works on the local URL. However, when we make a request from an external domain that the F5 load balancer forwards, SignalR thinks it's a cross-site request, which theoretically it isn't in this case.
Is it possible that this is an F5 issue?
Is there a way to ask SignalR to allow certain domains without downgrading to long polling?
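For reference, cross-domain connections in SignalR 1.x are enabled when mapping the hubs. A minimal Global.asax sketch of that wiring (note this is a global switch; SignalR 1.x does not accept a per-origin whitelist, so restricting specific domains would have to happen at the F5 or a reverse proxy):

```csharp
using System;
using System.Web.Routing;
using Microsoft.AspNet.SignalR;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // SignalR 1.x: allow cross-domain (CORS / JSONP) connections.
        // There is no list of allowed origins at this level; per-domain
        // restrictions belong on the load balancer or reverse proxy.
        RouteTable.Routes.MapHubs(new HubConfiguration
        {
            EnableCrossDomain = true
        });
    }
}
```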
If you've implemented load balancing, then you must also implement scaleout in SignalR. Take a look at scaleout in SignalR; one approach is scaleout with SQL Server.
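The SQL Server backplane for SignalR 1.x comes from the Microsoft.AspNet.SignalR.SqlServer package; a sketch of the wiring (the connection string is a placeholder for a database all web nodes can reach):

```csharp
using System;
using System.Web.Routing;
using Microsoft.AspNet.SignalR;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Placeholder connection string; point it at a database that
        // every node behind the load balancer can reach.
        string connectionString =
            "Server=.;Database=SignalR;Trusted_Connection=True;";

        // Register the SQL Server backplane before mapping hubs, so
        // hub messages are relayed across all servers.
        GlobalHost.DependencyResolver.UseSqlServer(connectionString);

        RouteTable.Routes.MapHubs();
    }
}
```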
There are 4 instances of an application running behind a load balancer. How would the ConfigCat webhook work in this case? Do we need to configure 4 public URLs, one for each server, in the webhook settings?
Could you share some details about your use case? What would you like to achieve with webhooks? Which SDK are you using? What is the polling mode?
If you want to refresh the SDK's cache on feature flag value changes, you should consider using a distributed cache implementation (e.g. Redis). Example custom cache in Java: https://configcat.com/docs/sdk-reference/java#custom-cache
If you implement a custom distributed cache, you'll only need to add your load balancer's URL to the webhook: refreshing the cache from one instance updates the distributed cache, so all of your instances can work with the latest configuration.
If you want to get notified about changes in each application instance, there are different possibilities:
You can configure 4 public URLs and use the webhooks just as you mentioned.
If you are using auto polling mode, you can skip the webhooks and use the SDK's built-in configuration-changed callbacks, e.g. in Java the configurationChangeListener part at https://configcat.com/docs/sdk-reference/java#auto-polling-default. When the auto poll mode's polling happens, the SDK detects whether the configuration changed and fires this event.
If you could share more details, I could help you more.
Disclaimer: I am one of the founders of ConfigCat.
I want to host my own server and database on my computer; I don't want to pay monthly for services.
I developed a Node.js app and it uses a PostgreSQL database. I have a domain with an Angular app, and the app needs to use data from the server.
Can someone tell me how I can do this and which OS would be the best?
Thanks!
You have to do a few things for this to work.
1. Your Angular app needs to be able to connect to your home server, so the server needs either a static IP address accessible from the outside, a dynamic IP with dynamic DNS, or a VPN.
2. Your server needs to properly support CORS so that your Angular app can connect to it. The browser will send preflight OPTIONS requests that your server needs to handle correctly.
3. Make sure that your server is always on, the internet connection is reliable, the power is reliable, and that your services are properly restarted on reboot.
4. Make sure that your server is always up to date with security patches, is configured properly, and doesn't run any unneeded software or services.
For (1) you have a lot of options, and it all depends on whether you have a static or dynamic IP address, whether it is accessible from the internet, etc., which you didn't include in your question.
For (2) it depends on which Node framework you use for your server-side application, which you also didn't include in your question. You need to set up CORS in the way that is specific to the framework you use.
Point (3) is hard in a home environment, but it's important, because during any downtime your users will not be able to use your application.
Point (4) is critical in a home environment, because if anyone breaks into your server, they'll have access to your home network, which may have different kinds of consequences than breaking into a data center.
Another option would be to use a cheap VPS provider like DigitalOcean, where you can get a server for $5 a month, which may be less hassle than setting up your own server, for which you have to pay for electricity, manage the hardware, monitor the connectivity, etc.
If you choose a VPS, then (1) is taken care of for you: you get your own static IP address accessible from the world. (3) is taken care of completely, (4) is relatively easy to do, and the biggest remaining issue is making sure that CORS works as it should. But here you can host your API on the same domain as your frontend, and then you don't need to worry about CORS at all.
If you get a VPS, you can also host your frontend Angular app from the same server, so it doesn't even have to cost you more.
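The same-domain setup on a VPS can be done with an nginx config along these lines (the domain, paths, and Node port are all placeholders):

```nginx
server {
    listen 80;
    server_name example.com;

    # Serve the static Angular build.
    root /var/www/frontend/dist;
    try_files $uri $uri/ /index.html;

    # Same-origin API: the browser sees one domain, so no CORS needed.
    location /api/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```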
Currently I have two servers on which I have deployed Node.js/Express-based web service APIs. I am using Redis for caching JSON strings.
What will be the best option for deploying this setup to production? I see here it advises to go with a dedicated Redis server. OK, I'll take that and use a dedicated server for running the Redis master. Can I use the existing app servers as slave nodes? Note: these app servers are running a Node/Express application.
What other options do I have?
You can.
It all depends on the load those other servers have; it's a problem of resource sharing. To be honest, my main issue with your architecture is not dedicated vs. non-dedicated servers, it's the fact that you are placing a Redis server (master or not) on a host that will most likely be facing the internet (the Express app), meaning it's quite exposed.
If you can simulate HTTP load against your Node/Express servers, compare benchmark results on your dedicated server vs. the non-dedicated ones.
On a running Redis server, type:
redis-benchmark -q -n 100000
If the app servers are being hammered and frequently using all cores, you should see a substantial difference in the benchmarks.
My suggestion is: go ahead with your first setup, add monitoring for Redis response times, and only act when you have to, which might be now if the benchmarks show very poor results.
As a side note, consider the option of not sharing hosts for services that you expose to the internet with services that perform internal functions to your application.
ServiceStack leaves it open whether to host a service in a web server or in a standalone app.
What is best in terms of performance, both raw and for a high number of clients?
Is hosting on Apache, Nginx, XSP, or IIS just for added functionality, or for performance?
servicestack.net itself runs on Ubuntu / Nginx + MonoFastCGI, although we've been told others have been able to get better performance with self-hosting, which you can still serve behind an Nginx/Apache reverse proxy if you want access to a full-featured web server.
You can also wrap a self-hosted ServiceStack in a Linux Daemon.
We ran into the same question while choosing a hosting scheme for our ServiceStack services. We ran some benchmarks with the same service self-hosted and hosted under IIS. The self-hosted Windows service showed nearly 1.5x better performance than the IIS-hosted app.
Surely this is not an absolute number, and it may vary with the service's load type (CPU/IO), but it is clear that the IIS pipeline adds tons of overhead.
If you need speed and don't need all the features IIS can give you (monitoring / advanced routing / admin / etc.), self-hosting is the way to go. Our setup hides the ServiceStack hosts behind nginx nodes that handle all the routing/proxying/balancing, so we don't need the monstrous IIS pipeline.
Does anyone know a way to have a JavaScript file or set of files always running under IISNode without the need for a client request? The idea would be to have scripts that behave as services, but have them running under IISNode.
Thanks!
How about trying node-windows? It allows Node.js applications to run as a Windows service. A nice feature is that it also exposes a way to write to the EventLog.
It probably fits your scenario better, considering that you don't need any of the IIS features other than the long-running aspect.
Hope this points you in a more applicable direction.
I guess you have some reason to use iisnode, but you are trying to run a service in IIS, which is not a good idea. If you want to run it as a service, then run it as a service (e.g. with node-windows, as mentioned above).
If you still insist on using iisnode, your options are:
Use Application Initialization for IIS
Or write a scheduled job that pings your iisnode page
Or use a service like Pingdom to ping your iisnode application