Cloud Haskell topology layer

I'm using Cloud Haskell to build a fairly simple client-server application (a game, to be a little more specific). It may be a little overkill, but its idea of processes and nodes seems nice and clear to me.
The application runs fine with the simplelocalnet backend, but I would like to work over more than just the local network. From what I have read about the process/transport layer design, it should be possible to swap topologies at the 'backend' layer alone, without going down to the network transport.
However, so far I have found only a couple of backends besides simplelocalnet: one for Azure, which is stated to be just a proof of concept, and one for p2p. Neither of these seems useful for my purpose.
The concept of the application is: the server "meets" the clients (players), then starts the game with them. The server "plays" with each player independently, then shows the results for each one.
I hoped to find something like a TCP backend, so it would work just like on the local network but over the internet (the clients know the server's IP address, of course). Still, I found nothing.
Can anyone point me to a backend I could use (ideally working over TCP or UDP)? Or do I have to go down to Network.Transport anyway? Or maybe you can suggest a different solution?
Thank you in advance - or just for reading this:)
Have a nice day.
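For what it's worth, dropping down a level may be less work than it sounds: a backend is mostly convenience glue (peer discovery, node setup) over Network.Transport, and network-transport-tcp already speaks plain TCP. Below is a minimal, untested sketch of starting a node directly on the TCP transport; the IP and port are placeholders, and the exact createTransport signature varies slightly between network-transport-tcp versions.

    import Network.Transport.TCP (createTransport, defaultTCPParameters)
    import Control.Distributed.Process
    import Control.Distributed.Process.Node

    main :: IO ()
    main = do
      -- The server binds its public IP and a fixed port; a client does the
      -- same with its own address. (Newer network-transport-tcp versions
      -- take the address wrapped with defaultTCPAddr instead of two args.)
      Right transport <- createTransport "203.0.113.10" "10501" defaultTCPParameters
      node <- newLocalNode transport initRemoteTable
      runProcess node $ do
        self <- getSelfPid
        -- register under a well-known name so clients can look us up
        register "gameServer" self
        liftIO (putStrLn ("Server process running as " ++ show self))

From a client, you can reconstruct the server's NodeId from its endpoint address (network-transport-tcp addresses look like host:port:0) and call whereisRemoteAsync on it with "gameServer" to get the server's ProcessId, which is essentially the discovery work simplelocalnet does for you on a LAN.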

Related

Python - Bridge server and client communication over the internet with websockets

My question is more theoretical than about actual code.
I have a program that is computationally heavy and searches for specific events. Once it finds one, I want to forward a small JSON message so that computers on other networks can read it and answer in a similar fashion.
More or less the ABC of websockets.
My question is how to bridge the communication gap between them.
I've read that servers can be hosted in places like PythonAnywhere, and that you could forward your ports so anybody can connect to you.
The first one is not appealing, as I don't need to host the whole program; I have a computer acting as the server for that. I just need a public address outside my network for the clients to look at.
As I don't want to be exposed, and I'm by no means an expert on security, the second option is also out of the question.
I've been looking everywhere, and it seems I can't think of the right words for my query, because I haven't found anything that could be a solution.
Can you give me a hint here?
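One common pattern that matches these constraints is to rent the cheapest possible VPS with a public IP and run only a tiny relay on it: the worker and all clients connect out to the relay, so nobody has to forward ports at home. Purely for illustration (and in Haskell, the language of the main question on this page), a minimal broadcast relay with the websockets package might look like the untested sketch below; the bind address and port are placeholders.

    import qualified Network.WebSockets as WS
    import Control.Concurrent.MVar
    import Control.Monad (forM_, forever)
    import qualified Data.Text as T

    main :: IO ()
    main = do
      -- every connected peer: the event producer and all the listeners
      clients <- newMVar ([] :: [WS.Connection])
      WS.runServer "0.0.0.0" 9160 $ \pending -> do
        conn <- WS.acceptRequest pending
        modifyMVar_ clients (return . (conn :))
        -- relay every incoming message (e.g. a small JSON event) to everyone
        forever $ do
          msg <- WS.receiveData conn :: IO T.Text
          conns <- readMVar clients
          forM_ conns (\c -> WS.sendTextData c msg)

A real relay would also drop closed connections and authenticate clients, but the point is that the publicly reachable piece can be this small.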

Online P2P game: Establishing a connection between PCs without a matchmaking server

I was trying to make a simple multiplayer (P2P) board game that connects players over the internet. So far, the only networking applications I've worked on only communicate with permanent web servers via HTTP.
After some preliminary reading, I learned that most devices sit behind NAT nowadays, which makes P2P connections a pain to establish. I've read about two ways around the NAT hurdle:
Set up a central server to facilitate NAT punch-through.
Use Internet Gateway Device (from UPnP) to do automatic port-forwarding.
I haven't delved into the details yet, but it seems like NAT punch-through requires me to set up a permanent matchmaking server (which I doubt is within my budget) and to administer it (which I have no experience in). UPnP seems like the way to go.
Would you say that my assessment is accurate? Are there any other options? Is there anything else I should take into account before designing/implementing my game?
(I'm planning to write my game using Qt, to support multiple platforms. Someone has made a Qt-based P2P file sharer using the MiniUPnPc library, so I'm thinking of studying that implementation.)
One thing you can do to avoid the issue altogether is just to require your end users to run a local version of the app server and forward the ports themselves. Just give them the option to either "Create Game" or "Join Game", and if they create a game, make them specify the port they want to run it on. Then it's their responsibility to make sure that port is open.
Having said that, I recently went through setting up a matchmaking style networked game using Unity3d and I can comment on setting up a matchmaking server within that framework.
There are two pieces to the process, the MasterServer and the Facilitator: the MasterServer just holds all the game-related information, and the Facilitator handles incoming connections to the server.
Because the MasterServer/Facilitator are only handling connection information, they're fairly light processes and you can just run them in the background on your home machine if you don't mind leaving your machine on all the time.
It pretty much regulates itself once you have it up and running. It's not any more difficult than running a standard web server anyway.
I haven't explored using Internet Gateway Device to handle networking, but looking at its wiki page (http://en.wikipedia.org/wiki/Internet_Gateway_Device_Protocol), it looks like a magic thing that handles NAT punch-through automatically. I have done little to no research on this method, but it looks like it might be something that has to be installed on the client end, which I feel isn't what you want. If you find a way to make it work the way you want, let me know; I'd be curious to see it in action.
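For completeness, the punch-through step itself is surprisingly small once each peer has learned the other's public IP:port (from a matchmaking server, or exchanged manually for testing). A rough, untested Haskell sketch of UDP hole punching; the addresses are whatever the peers exchanged out of band:

    import Network.Socket
    import qualified Network.Socket.ByteString as UDP
    import qualified Data.ByteString.Char8 as BS
    import Control.Concurrent (forkIO, threadDelay)
    import Control.Monad (forever)
    import System.Environment (getArgs)

    main :: IO ()
    main = withSocketsDo $ do
      -- usage: punch <localPort> <peerPublicHost> <peerPublicPort>
      [localPort, peerHost, peerPort] <- getArgs
      peer:_ <- getAddrInfo
                  (Just defaultHints { addrFamily = AF_INET
                                     , addrSocketType = Datagram })
                  (Just peerHost) (Just peerPort)
      sock <- socket AF_INET Datagram defaultProtocol
      bind sock (SockAddrInet (fromIntegral (read localPort :: Int)) 0)
      -- Both sides keep sending; the outgoing packets create (and keep
      -- alive) the NAT mapping that lets the peer's packets back in.
      _ <- forkIO $ forever $ do
             _ <- UDP.sendTo sock (BS.pack "punch") (addrAddress peer)
             threadDelay 500000  -- every half second
      forever $ do
        (msg, from) <- UDP.recvFrom sock 4096
        putStrLn ("received " ++ BS.unpack msg ++ " from " ++ show from)

This only works behind the friendlier NAT types (it fails behind symmetric NATs), which is why real games pair it with a fallback relay.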

Node.js common practices

I've been reading a few Node tutorials, but there are some best/common practices I would like to ask about, for those out there who have built real Node apps before.
Who do you run the Node application as on your Linux box? None of the tutorials I've read mention adding a node user and group, so I'm curious whether they just neglect to mention it or whether they do something else.
Where do you keep your projects? '/home/'? '/var/'?
Do you typically put something in front of your Node app, such as nginx or HAProxy?
Do you run other resources, such as storage (Redis, MongoDB, MySQL, ...), message queues, etc., on the same machine or on separate machines?
I am guessing this question is mostly about setting up your online server and not your local development machine.
In the IRC channel somebody answered the same question and said that he uses a separate user for each application, so I am guessing that is a good common practice.
I mostly use /home/user/apps.
I see a lot of nginx examples, so I am guessing that is what most people use. I have a server with Varnish in front of a Node.js application, and that works well and was easy to set up. There are some pure Node.js solutions, but for something as important as your reverse proxy I would go for something a little more battle-tested.
To answer this correctly you probably have to ask yourself: what are my resources? Can I afford many small servers? How important is my application? Will I lose money if my app goes down?
If you run a full stack on, let's say, one VPS per app, then a problem with that VPS affects only one of your apps.
In terms of maintenance, having one database server for multiple apps might seem attractive: if you need to patch a security hole in the database, you only need to do it in one place. On the other hand, you now have a single point of failure for all the apps depending on that database server.
I personally went for many full-stack servers, and I am learning how to automate deployment and maintenance. Tools like Puppet and Chef seem to be really helpful for this.
I have only owned my own Linux servers for the last 3 months, and have been a Linux user for 1.5 years, so before setting up a server park based on these answers, make sure you do some additional research.
Here's what I think:
Using a separate user for each app is the way I do it.
I keep apps in /home/user/ to make sure that only that user (and root, of course) has access to the app.
Some time ago I created my own reverse proxy in Node.js based on the node-http-proxy module. If you don't need a reverse proxy, there's no point in putting anything in front of Node. It may even harm the app, since, for example, nginx can't speak HTTP/1.1 to the proxied backend (at least at the time of writing).
I run all resources on the same machine. Only when I actually need to distribute an app across machines do I start thinking about separate machines; there's no need to optimize prematurely. The app's code is a different matter, though.
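Whatever sits in front, the proxy itself is conceptually tiny: listen on the public port and forward to the app's local port. As an untested sketch (in Haskell, the language of this page's main question, using the http-reverse-proxy package; both ports are placeholders):

    {-# LANGUAGE OverloadedStrings #-}
    import Network.HTTP.Client (defaultManagerSettings, newManager)
    import Network.HTTP.ReverseProxy
    import Network.Wai.Handler.Warp (run)

    main :: IO ()
    main = do
      manager <- newManager defaultManagerSettings
      -- listen on the public port, forward everything to the app's port
      run 8080 $ waiProxyTo
        (\_request -> return (WPRProxyDest (ProxyDest "127.0.0.1" 3000)))
        defaultOnExc
        manager

The dispatch function gets the full request, so routing several apps behind one port (by Host header or path) is a matter of returning a different ProxyDest per request.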
Visit the following links:
nettuts
nodetuts
lynda nodejs tutorials
Best practice seems to be to use the same user/group as you would for Apache or a similar web server.
On Debian, that is www-data:www-data
However, that can be problematic for applications that require higher permissions. For example, I've been trying to write something similar to Webmin using Node, and that requires root permissions (or at least the adm group) for a number of tasks.
On Debian, I use /var/nodejs (I use /var/www for "normal" web applications such as PHP)
One of the reasons I'm still reluctant to use Node (apart from the appalling lack of good-quality documentation) is the need to assign multiple IP ports when running multiple applications. I think that for any reasonably sized production environment you would use virtual servers to partition out the Node server processes.
One thing that Node developers seem to often forget is that, in many enterprise environments, IP ports are very tightly controlled. Getting a new port opened through the firewall is a very painful and time-consuming task.
The other thing to remember if you are using a reverse proxy is that web apps often fail when run from behind a proxy - especially when mapping a virtual folder (e.g. https://extdomain/folder -> http://localhost:1234) - so you need to keep testing.
I'm just running a single VPS for my own systems. However, for a production app, you would need to understand the requirements. Production apps would be very likely to need multiple servers if only for resilience and scalability.

Distributing web traffic to various servers?

Something I have been curious about for quite some time now.
How exactly do you distribute your web traffic to various servers? And when do you know when to distribute to another server?
Sites like Facebook have one point of entry via the domain www.facebook.com. So if server A is running at 90% of its capacity, how does the system know to switch to server X, or to use a server closer to your location? How exactly is this achieved?
And when building a website that will receive heavy traffic, how do you deal with this? Is it something you consider as a developer?
The more information you can provide, the better.
Thanks.
You probably want to look into load balancing. In broad strokes, a single entry point like www.facebook.com resolves via DNS to one of many front-end load balancers (often chosen by geography), and each balancer then spreads incoming requests across a pool of backend servers, steering away from overloaded ones.
If you have specific questions beyond that, they're probably more suitable for Server Fault.
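To make the idea concrete, here is a toy, untested sketch (in Haskell, the language of this page's main question) of the core of a layer-4 balancer: one public listening port, with each incoming connection handed to the next backend in round-robin order. The backend addresses are made up, and a real balancer would add health checks, error handling, and socket cleanup.

    import Network.Socket
    import qualified Network.Socket.ByteString as NSB
    import qualified Data.ByteString as BS
    import Control.Concurrent (forkIO)
    import Control.Monad (forever, unless)
    import Data.IORef

    -- placeholder backend pool
    backends :: [(String, String)]
    backends = [("10.0.0.1", "8080"), ("10.0.0.2", "8080")]

    main :: IO ()
    main = withSocketsDo $ do
      next <- newIORef (cycle backends)      -- endless round-robin stream
      lsock <- socket AF_INET Stream defaultProtocol
      setSocketOption lsock ReuseAddr 1
      bind lsock (SockAddrInet 8080 0)       -- the single public entry point
      listen lsock 128
      forever $ do
        (client, _) <- accept lsock
        (host, port) <- atomicModifyIORef' next (\(b:bs) -> (bs, b))
        _ <- forkIO $ do
          addr:_ <- getAddrInfo (Just defaultHints { addrSocketType = Stream })
                                (Just host) (Just port)
          server <- socket (addrFamily addr) Stream defaultProtocol
          connect server (addrAddress addr)
          _ <- forkIO (pipe client server)   -- client -> backend
          pipe server client                 -- backend -> client
        return ()

    -- copy bytes from one socket to the other until EOF
    pipe :: Socket -> Socket -> IO ()
    pipe from to = do
      chunk <- NSB.recv from 4096
      unless (BS.null chunk) (NSB.sendAll to chunk >> pipe from to)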

Chat program without a central server

I'm developing a chat application (in VB.Net). It will be a "secure" chat program: all traffic will be encrypted (I also need to find the best approach for that, but that's not the question for now).
Currently the program works. I have a server application and a client application. However, I want to set the application up so that it doesn't need a central server to work.
What approach can I take to decentralize the network?
I think I need to develop the clients in such a way that they also act as servers.
But how would a client know which server it needs to connect to, and what happens if that server is down? How would the clients/servers know what other nodes there are in the network, without a central server?
Ideally the clients wouldn't know the IP addresses of the different nodes at all, but I don't think that's possible without a central server.
As stated, the application will be written in VB.Net, but I think the language doesn't really matter at this point.
I just want to know the different approaches I can follow.
Look, for example, at the Kademlia protocol paper (you can find it here). If you just want a quick overview, see the Wikipedia page: http://en.wikipedia.org/wiki/Kademlia. The Kademlia protocol defines a decentralized way of looking up nodes in a network, and it has been successfully applied in the eMule software, so it is tested and known to really work.
It should cause no serious problems to apply it to your chat software.
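The core idea is small enough to sketch: node IDs live in a space where "distance" is the XOR of two IDs, and a lookup iteratively asks the closest nodes it knows about for nodes even closer to the target. A toy illustration in Haskell (the language of this page's main question), using 64-bit IDs for brevity where real Kademlia uses 160-bit ones:

    import Data.Bits (xor)
    import Data.List (sortOn)
    import Data.Word (Word64)

    type NodeId = Word64

    -- Kademlia's metric: the distance between two IDs is their XOR
    distance :: NodeId -> NodeId -> Word64
    distance a b = a `xor` b

    -- the k contacts we know that are closest to a target ID
    -- (k = 20 in the original paper)
    closest :: Int -> NodeId -> [NodeId] -> [NodeId]
    closest k target = take k . sortOn (distance target)

Each node keeps its contacts grouped into buckets by distance, so a lookup converges in O(log n) hops without any single node knowing the whole network.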
You need some known IP address for clients to initially get into a network. Once a client is part of a network, things can be more decentralized, but that first step needs something.
There are basically only two options: either the user provides one (the address of an existing node of the network - essentially how BitTorrent trackers work), or you hard-code a gateway node (which is effectively a central server).
You could also look at the uChat program. It's a program from the uTorrent creators, built with serverless chat in mind.
The idea is to connect to a swarm via a magnet link and use it to send and receive messages. As Amber's answer says, you need an entry point, whether that's a server, a known swarm, a manually entered IP, etc.
Here is the uChat announcement: http://blog.bittorrent.com/2011/06/30/uchat-we-just-need-each-other/
