What is the difference between Diameter as a client and Diameter as a server? - diameter-protocol

So far I have installed and run freeDiameter. CER and CEA messages are exchanged between client and server successfully. But I am unable to grasp what makes one of the instances always run as the "client" and the other as the "server", since both the client and the server instance are built from the same source code. Does it depend on the "diameter identity", the "ip" value, a certain "flag", or "command line arguments"? I want my current client instance to run as a server and the current server instance to run as a client, but I am unable to do so.

Related

How can I get Express JS to keep local variables upon server restarts?

I am using an Express server in NodeJS v14.15.1 to handle HTTP GET and POST requests. The server performs some cryptographic operations and obtains a key which must be used for subsequent requests. The obtained key is set as a global variable within my index.js file (where my express() app resides). However, the server restarts automatically (I am using nodemon) upon handling each HTTP request, and in doing so it erases the key global variable. So the next request which relies on reading the global key variable is unable to succeed. NB: The key cannot be stored on-disk or on the client-side due to security reasons. Also, this is for a university assignment, not a real production environment.
How can I keep the global variable upon server restart?
Any help is greatly appreciated.
Nothing in the nodejs/Express environment automatically survives from one restart to the next. So, if you have specific data that you want to always be available after a restart, then you would typically save it to disk (often in a JSON file) every time it changes; then, every time your server starts, it can read that state from the previous JSON file and initialize your variables from that data.
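For illustration, a minimal sketch of that save/restore pattern (the state.json file name and the module layout are just assumptions, not anything from the question):

    // state.js -- hypothetical helper: persist a small state object to disk
    // so it can be re-read after a restart.
    const fs = require('fs');
    const path = require('path');

    const STATE_FILE = path.join(__dirname, 'state.json'); // assumed location

    // Load the previous state on startup (empty object if the file doesn't exist yet).
    function loadState() {
      try {
        return JSON.parse(fs.readFileSync(STATE_FILE, 'utf8'));
      } catch (err) {
        return {};
      }
    }

    // Write the whole state back to disk every time it changes.
    function saveState(state) {
      fs.writeFileSync(STATE_FILE, JSON.stringify(state));
    }

    module.exports = { loadState, saveState };

In index.js you would call loadState() once at startup to repopulate your global, and saveState() whenever the key is (re)computed.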
However, the server restarts automatically (I am using nodemon) upon handling each HTTP request
This should not happen, so the first thing to solve is to stop the server from restarting. Your server should run for days or weeks and be able to field millions of http requests without restarting. Is the server crashing and restarting, or is nodemon seeing something change and automatically killing/restarting your server?
Nodemon is often used in a "development" mode where it automatically restarts your server any time files in a specific directory change, which speeds up development cycles while you are editing your source files. But this should NOT be happening except when you actually edit your server source files, and only when you are using nodemon in a "debug" or "development" mode. If this is why nodemon is restarting your server, then you probably need to tweak the configuration so it isn't detecting file changes that are part of your normal server operation and thus doesn't restart just because your server does something normal.
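For example, if nodemon is picking up files your server writes at runtime (such as the hypothetical state.json above), a nodemon.json along these lines restricts what it watches (file and directory names are illustrative):

    {
      "watch": ["index.js", "routes/"],
      "ignore": ["state.json", "*.log"]
    }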
The key cannot be stored on-disk
Well, there's no simple way to get data to survive a server restart without storing it somewhere. NOTHING from your Express process survives a restart, so you can't keep the key only inside the Express process if you want access to it again after a restart. So, it appears to me that you've put yourself in a box.
Your options are to either stop the server from restarting in the first place or find a secure place to store the key.

nodejs local agent functionality

I have a website hosted on Heroku and Firebase (frontend (react) and backend (nodejs)), and I have some "long running scripts" that I need to perform. I had the idea of deploying a node process to my raspberry pi to execute these (because I need resources from inside my network).
How would I set this up securely?
I think I need to create a nodejs process that regularly checks the central server for any jobs to be done. Can I use sockets for this? What technology would you guys use?
I think the design would be:
1. Local agent starts and connects to server
2. Server sends messages to agent, or local agent polls with time interval
EDIT: I have multiple users that I would like to serve. The user should be able to "download" the agent and set it up so that it connects to the remote server.
You could just use firebase for this, right? Create a new firebase db for "tasks" or whatever, that is only accessible to you. When the central server (whatever that is) determines there's a job to be done, it adds it to your tasks db.
Then you write a simple node app you can run on your raspberry pi that starts up, authenticates with firebase, and listens for updates on your tasks database. When one is added, it runs your long running task, then removes that task from the database.
Wrap it up in a bash script that'll automatically run it again if it crashes, and you've got a super simple pubsub setup without needing to expose anything on your local network.
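A minimal sketch of such an agent, assuming the Firebase Admin SDK, a Realtime Database with a /tasks node, and a local runTask() function (the project URL, credentials file and function names are all illustrative):

    // agent.js -- runs on the raspberry pi, listens for tasks and executes them.
    const admin = require('firebase-admin');
    const serviceAccount = require('./serviceAccountKey.json'); // assumed credentials file

    admin.initializeApp({
      credential: admin.credential.cert(serviceAccount),
      databaseURL: 'https://your-project.firebaseio.com' // hypothetical URL
    });

    const tasksRef = admin.database().ref('tasks');

    // Fires once per existing task and again for every task added later.
    tasksRef.on('child_added', async (snapshot) => {
      const task = snapshot.val();
      try {
        await runTask(task);         // your long-running work, using local resources
      } finally {
        await snapshot.ref.remove(); // mark the task as done by deleting it
      }
    });

    // Placeholder for the actual long-running script.
    async function runTask(task) {
      console.log('running task', task);
    }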

Which is the better way to implement heartbeat on the client side for websockets?

On the server side for websockets there is already a ping/pong implementation where the server sends a ping and the client replies with a pong, to let the server know whether a client is connected or not. But there isn't something implemented in reverse to let the client know if the server is still connected to them.
There are two ways to go about this I have read:
1. Every client sends a message to the server every x seconds, and whenever an error is thrown when sending, that means the server is down, so reconnect.
2. The server sends a message to every client every x seconds; the client receives it and updates a variable, and a client-side timer checks every x seconds whether that variable has changed. If it hasn't changed in a while, the client assumes the server is down and re-establishes the connection (sketched below).
You can try to figure out on the client side whether the server is still online using either method. With the first you'll be sending traffic to the server, whereas with the second you'll be sending traffic out of the server. Both seem easy enough to implement, but I'm not so sure which is the better way in terms of efficiency and cost.
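For illustration, a rough client-side sketch of the second option using the plain browser WebSocket API (the URL and the interval values are placeholders):

    // Client-side sketch: assume the server sends some message at least
    // every HEARTBEAT_INTERVAL ms (option 2 above).
    const HEARTBEAT_INTERVAL = 30000; // assumed value
    let socket;
    let heartbeatTimer;

    function connect() {
      socket = new WebSocket('wss://example.com/ws'); // hypothetical URL
      socket.onopen = resetHeartbeat;
      socket.onmessage = (event) => {
        resetHeartbeat();            // any traffic proves the server is alive
        // ... handle event.data ...
      };
      socket.onclose = scheduleReconnect;
    }

    function resetHeartbeat() {
      clearTimeout(heartbeatTimer);
      // If nothing arrives within ~1.5 intervals, assume the server is gone.
      heartbeatTimer = setTimeout(() => socket.close(), HEARTBEAT_INTERVAL * 1.5);
    }

    function scheduleReconnect() {
      clearTimeout(heartbeatTimer);
      setTimeout(connect, 5000);     // simple fixed back-off
    }

    connect();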
Server upload speeds are higher than client upload speeds, but server CPUs are an expensive resource while client CPUs are relatively cheap. Unloading logic onto the client is a more cost-effective approach...
Having said that, servers must implement this specific logic (actually, all ping/timeout logic), otherwise they might be left with "half-open" sockets that drain resources but aren't connected to any client.
Remember that sockets (file descriptors) are a limited resource. Not only do they use memory even when no traffic is present, but they prevent new clients from connecting when the resource is maxed out.
Hence, servers must clear out dead sockets, either using timeouts or by implementing ping.
P.S.
I'm not a node.js expert, but this type of logic should be implemented using the Websocket protocol ping rather than by your application. You should probably look into the node.js server / websocket framework and check how to enable ping-ing.
You should set pings to accommodate your specific environment, i.e., if you host on Heroku, then Heroku will enforce a timeout of ~55 seconds and your pings should be sent before this timeout occurs.
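For node.js specifically, a common server-side pattern with the popular ws package looks roughly like this (the interval value is arbitrary):

    // server.js -- sketch of protocol-level ping with the `ws` package.
    const WebSocket = require('ws');
    const wss = new WebSocket.Server({ port: 8080 });

    wss.on('connection', (ws) => {
      ws.isAlive = true;
      ws.on('pong', () => { ws.isAlive = true; }); // client answered our last ping
    });

    // Every 30s: drop sockets that never answered, then ping the rest.
    const interval = setInterval(() => {
      wss.clients.forEach((ws) => {
        if (ws.isAlive === false) return ws.terminate();
        ws.isAlive = false;
        ws.ping();
      });
    }, 30000);

    wss.on('close', () => clearInterval(interval));

Note this covers the server clearing out dead clients; browsers answer protocol pings automatically but give your code no access to them, which is why client-side detection of a dead server ends up at the application level, as in the timer sketch above.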

How to make a big real-time app with socket.io or lightstreamer and scale horizontally

I have some questions about a real-time app.
I am making a real-time app at the moment, using socket.io, mongodb and nodejs. This app works nicely as a prototype, but what will happen when the number of users increases?
I want to scale horizontally.
e.g. I have two servers (server A and server B):
Client A connects to server A
Client B connects to server B
How can client A send a message to client B? This part with different servers has been confusing me.
I found that redis is used for this. Would a redis server be enough?
As a result, what should I use, and which tech (redis, lightstreamer, jabber, socket.io, nginx)?
You can't send a message directly from A to B because they aren't connected to the same server.
The solution to this is to enable communication between the two node servers.
You mentioned redis, so if you go that route you can have a central redis server that holds two lists (one for each server). When client A wants to reach client B, it sends its message to server A. Server A will not find client B among its local sockets and will push the message to redis. Sooner or later, server B will collect its pending messages from redis and dispatch them to client B.
It's a basic implementation that you can change to fit your needs. You can have, for example, a single list of messages per server, but also why not a list per user (with the server that has this user connected consuming that list).
Also, as a side note, any central data store such as a database server (mongo? MySQL?) can do the same job as redis. It all comes down to what you already have, what you can have, and what type of persistence you want.
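A rough sketch of that relay, using Redis pub/sub channels instead of polled lists (same cross-server idea, just push-based), assuming the node-redis v4 client and socket.io v4 (where io.sockets.sockets is a Map):

    // relay.js -- sketch for one server: receive messages addressed to locally
    // connected sockets, and forward messages meant for sockets on other servers.
    const { createClient } = require('redis');

    async function setupRelay(io, serverName) {
      const pub = createClient();  // assumes a reachable local redis
      const sub = pub.duplicate();
      await pub.connect();
      await sub.connect();

      // Messages published to our channel are delivered to the local socket.
      await sub.subscribe(serverName, (raw) => {
        const { socketId, event, data } = JSON.parse(raw);
        const socket = io.sockets.sockets.get(socketId);
        if (socket) socket.emit(event, data);
      });

      // Call this when the target user is not connected to this server.
      return function forward(targetServer, socketId, event, data) {
        return pub.publish(targetServer, JSON.stringify({ socketId, event, data }));
      };
    }

In practice the socket.io redis adapter (@socket.io/redis-adapter, formerly socket.io-redis) packages essentially this pattern for you, so you usually don't have to hand-roll it.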

Nodejs handling login on another server

Suppose you had 4 machines, each running an identical instance of a nodejs app, and users have to log in to access your website. After a user logs in, is it possible to move his connection to one of the other machines?
To clear it up:
1. Node 1 only holds the main app page, handles login validation, and knows how many users are on each node; it routes the user who logs in to the node with the lowest number of users, or, to make it more complicated, to the server with the lowest load (based on traffic, not on the number of users).
2. Each of the other nodes runs CentOS with a nodejs server instance (a cluster of Node processes).
3. I am using socket.io intensively, and after login I always have a persistent connection with the client; even on my client no ajax requests are made, everything is handled using sockets.
4. In my current source code, everything is combined in one nodejs app, and I do socket authentication for login.
The clients have no kind of interaction with each other, which makes the job easier.
Is it possible to pass a socket connection from one nodejs server to another?
How would you solve this problem yourself, considering that the nodejs app that handles the login and the actual nodejs app are 2 separate machines?
I would keep it simple. I would create a load balancer node with its own balancer application. This node would redirect to the least loaded "worker" node.js instance, based on the number of authenticated user sessions on each node. This should happen even before authentication is done. All the other "worker" nodes would run the same main app with exactly the same logic: main page, authentication and application logic.
Just save an access token (i.e. a cookie in the case of http) into a database and send it from the client to the server every time it connects.
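A minimal sketch of that handshake, assuming socket.io v3+ (which carries an auth payload in the handshake) and a hypothetical findSession() lookup against your database:

    // On the client: send the token issued by the login node with every connection.
    const socket = io('https://node2.yourdomain.com', {
      auth: { token: savedToken }      // savedToken came back from the login step
    });

    // On each worker node: verify the token before accepting the socket.
    io.use(async (socket, next) => {
      try {
        const session = await findSession(socket.handshake.auth.token); // hypothetical DB lookup
        if (!session) return next(new Error('unauthorized'));
        socket.user = session.user;
        next();
      } catch (err) {
        next(err);
      }
    });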
Well, it's tough to give a complete answer without having a better sense of your application's architecture... but I'll lay out my assumptions and go from there:
a) www.yourdomain.com points to Node 1.
b) Node 1 is the only server that responds to HTTP requests, Node 2 through Node 5 only communicate through sockets.
c) Once a user is authenticated through Node 1, it does not need to re-authenticate through Node 1 for subsequent communication through sockets (this one is a bit tricky, if you really want to ensure that only authenticated users can access your app that authentication must be passed over the socket connection, but it can be simpler than the full authentication performed by Node 1, and it sounds like you're doing this, just want to raise the issue)
Given those assumptions, I would assign subdomains to each app server (perhaps node2.yourdomain.com, node3.yourdomain.com, etc?). When Node 1 is ready to pass the client over to an app server, it determines which node to send them to, passes that subdomain to the client, and has the client create a socket connection to the assigned app server; all further communication then happens through there.
If I've misunderstood or over-simplified things, feel free to set me straight in comments.
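For illustration, a sketch of that handoff under the assumptions above (the /assign-server endpoint, the load map, the requireAuth middleware and the subdomain names are all made up):

    // On Node 1 (Express): hypothetical endpoint the client calls right after logging in.
    const load = { 'node2.yourdomain.com': 0, 'node3.yourdomain.com': 0 }; // updated elsewhere

    app.post('/assign-server', requireAuth, (req, res) => {
      // Pick the app server with the fewest connected users.
      const server = Object.keys(load).sort((a, b) => load[a] - load[b])[0];
      res.json({ server, token: req.session.token }); // token to present over the socket
    });

    // On the client (inside an async function, with socket.io-client loaded):
    const { server, token } = await fetch('/assign-server', { method: 'POST' }).then(r => r.json());
    const socket = io('https://' + server, { auth: { token } });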
