I have been trying to figure out how to set up a Play! application on a CentOS server but am running into several issues that I can't resolve.
I am using git and have a working Play! application on my local machine which I want to deploy to my server.
I have initialised a bare git repository in /home/git on my server using git init --bare and have pushed just the commit data to this bare repo using git push production +master:refs/heads/master as advised in this tutorial.
The plan is to use a git hook to automatically check out my application to my website root whenever I deploy to production (so note that the /home/git directory containing my bare git repository is not my web root).
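As a sketch of that hook idea: a small helper that installs a post-receive hook into the bare repo. The paths (/home/git/myapp.git, /var/www/myapp) and the helper name are placeholders, not anything git mandates; adjust them to your layout.

```shell
#!/bin/sh
# install_deploy_hook BARE_REPO WEB_ROOT
# Writes a post-receive hook into BARE_REPO that force-checks-out master
# into WEB_ROOT on every push. Paths below are hypothetical examples.
install_deploy_hook() {
  bare=$1
  webroot=$2
  mkdir -p "$webroot"
  # The heredoc is unquoted on purpose: $webroot is expanded now,
  # so the generated hook contains the literal web root path.
  cat > "$bare/hooks/post-receive" <<EOF
#!/bin/sh
GIT_WORK_TREE=$webroot git checkout -f master
EOF
  chmod +x "$bare/hooks/post-receive"
}

# e.g. install_deploy_hook /home/git/myapp.git /var/www/myapp
```

After that, every `git push production master` refreshes the web root automatically.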
So my questions at this stage are:
Which directory should my Play! application be deployed to on my server? I have read that /var/www/html is traditional for servers hosting a single website at a given IP address.
I will not be using an apache server, just the default Play! distribution. But I remember when setting up an apache server we define the DocumentRoot. I think I am right in saying that this defines where any request to the root of http://www.mydomain.com will be routed. As I am not using Apache, how do I define that routing for the Play! application?
For a Play! application, which user should own the web root directory?
Thanks for reading
For git I'd suggest using gitolite: it's lightweight, but lets you manage many git accounts, user access, and permissions through a simple config file.
For questions:
It doesn't matter at all; you can use any folder you have access to (even via sudo). DocumentRoot is a concept of common HTTP servers. For a Java program of any kind, what matters instead is the port on which you start your app. If you want to start the application on port 80, you need to do it via sudo, as ports below 1024 are privileged. To run several applications on port 80 under different domains, you need to install an HTTP server (e.g. nginx or Apache) and use its reverse-proxy capabilities in a server block/vhost config. Either way, the folder you use still doesn't matter.
As mentioned, DocumentRoot is an Apache directive.
There's no web root directory... again, it doesn't matter.
Play serves all resources through its own process and doesn't serve anything directly from file storage, so your files are as safe as your own app allows (especially if no HTTP server is running on the machine).
On the other hand, this way you can't run multiple applications responding on port 80, and the app burns CPU every time it handles static assets such as CSS files, public images, etc. Therefore I definitely prefer to use an HTTP server as a reverse proxy/load balancer and as a server for static files. This way I can place several domains on one host; an HTTP server also serves files somewhat faster and doesn't disturb the main app with sending them to browsers.
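To illustrate that reverse-proxy setup, here is a minimal nginx vhost sketch. The server name, the backend port 9000 (Play's default HTTP port), and the asset path are assumptions to adapt:

```nginx
server {
    listen 80;
    server_name www.mydomain.com;

    # hand dynamic requests to the Play app running on its own port
    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # serve static assets straight from disk instead of through the app
    location /assets/ {
        alias /path/to/your/app/public/;
    }
}
```

A second `server` block with a different `server_name` and `proxy_pass` target is all it takes to host another domain on the same machine.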
Normally when I run my application it runs at http://localhost:3001. When I run this same application in a Gitlab pipeline, it says:
Project is running at
http://runner-87654321-project-1234567-concurrent-0:3001/
Naturally, I cannot access the application, so how do I either change this to run at localhost, or get the runner url at runtime?
Service aliases might be what you're looking for: https://docs.gitlab.com/ee/ci/services/. So you can provide an image of your app and then refer to it with an alias.
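A minimal sketch of that idea in .gitlab-ci.yml (the image name, alias, and port are made up for illustration):

```yaml
test:
  services:
    - name: registry.example.com/my-app:latest
      alias: my-app            # the job can now reach it as http://my-app:3001
  script:
    - curl http://my-app:3001/
```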
I think it's better to make the app listen on all network interfaces. For instance, in the Spring framework it's enough to set server.address=0.0.0.0 in the application properties file.
I have a web-site that needs to be up all the time. I also, of course, need to do new releases. Each page tends to be very long-lived, with lots of JavaScript doing AJAX calls to the server.
What I do is build a new WAR file and put it in Tomcat's webapps directory, which ends up looking like this:
20110701-7f077d 20110711-aa8db4 20110715-6f4a12
20110701-7f077d.war 20110711-aa8db4.war 20110715-6f4a12.war live
The war file is named after the date of its release plus the first few characters of its git commit id, just so I can keep track of everything. Tomcat automatically unpacks each war file into a directory of the same name. The live directory just contains a file giving the name of the "live" version.
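That naming scheme is easy to script; here is a sketch (the function name and the deployment command in the comment are mine, not part of the original setup):

```shell
#!/bin/sh
# release_name REPO_DIR
# Prints a name like 20110715-6f4a12: today's date plus the short
# commit id of the git repository at REPO_DIR.
release_name() {
  printf '%s-%s\n' "$(date +%Y%m%d)" "$(git -C "$1" rev-parse --short HEAD)"
}

# e.g.: cp target/app.war "$TOMCAT_HOME/webapps/$(release_name .).war"
```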
This way, each user can continue using the version of the back-end that works with the version of the front-end that he has loaded into his browser. And obviously, version upgrade and reversion is painless.
Now, I'm switching to node.js and I want to do the same thing. I am reliably informed that node.js doesn't support independent applications in one instance. So, what to do?
The only thing I can think of is to designate n slots (where n is some small number like 10 or 100), each slot corresponding to a port (i.e., slot 1 is 8081 and so on), put Apache in front of several node.js instances, one per slot, and have Apache use mod_proxy or mod_redirect to proxy requests like '/slot01' to port 8081. "live" would point to the current slot.
This would be clumsy and error prone and require an otherwise useless Apache instance and most of all I cannot believe that node.js doesn't have a good solution to what seems like a near-universal problem.
You can use node-http-proxy and write some code to monitor your 'deployment directory' for new versions; when one is found, start the corresponding script and proxy it under the directory name. To make myself clear: if you find a new directory 'version-11-today', your parent node-http-proxy script could start the new script, assigning it a port passed as a parameter, and then proxy to the new app under the path '/version-11-today'.
A similar solution could be built with nginx; in that case you could write a script to monitor the deployment directory and generate a new nginx configuration when new apps are found.
If you are afraid you might run out of ports, I believe both node.js and nginx can listen on and proxy to Unix sockets as well as inet sockets.
An advantage of the above is that each app runs in its own process, protecting the other apps from crashes and enabling individual app restarts.
A third solution, if you are not afraid that some bug will crash your app, is to have a parent script that loads all the app versions in the same process and maps them under different paths depending on the directory they were found in. You can still restart your server without downtime, as in this example: http://codegremlins.com/28/Graceful-restart-without-downtime
Are there any tools available to create and configure (add new users, change authorization settings) Mercurial repositories hosted on an IIS server?
Currently I have to log in to the server remotely and create and configure each new repository manually.
I would take a look at Kallithea. It is like hgweb on steroids :) It has user management with authentication against LDAP (so you can use Active Directory), repository management, browsing, etc.
You ask about IIS, and I know that it can run Python WSGI applications. Kallithea is based on Pylons, and while it's described as difficult, you can run Pylons on IIS if you want.
I would probably install Apache instead, or simply use the small web server that comes with Kallithea (depending on the load you expect).
What configuration differences are there between a localhost server (e.g. one running phpMyAdmin) and a web hosting server?
Is it possible or convenient to use a laptop (instead of a desktop computer) as a server host?
Is there a PHP script for automatic backup or syncing of files?
For a web-based application, which is better: running the code first on localhost, or uploading the code directly to the web host?
I'd appreciate any responses to these.
Default configurations on localhost servers are almost the same as the configurations hosting companies offer.
Of course!
Just google it; backup is always possible. 90% of the time, hosting companies provide a tool in the control panel to back up your website.
Always test on localhost first, then deploy to the web server and watch the results.
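On the backup question, a tiny shell sketch of the do-it-yourself route (the function name and paths are placeholders; as noted above, most hosts also offer this in the control panel):

```shell
#!/bin/sh
# backup_dir SRC DEST
# Creates a dated .tar.gz archive of directory SRC inside DEST,
# e.g. backup_dir /var/www/html /backups -> /backups/html-20240101.tar.gz
backup_dir() {
  src=$1
  dest=$2
  mkdir -p "$dest"
  tar -czf "$dest/$(basename "$src")-$(date +%Y%m%d).tar.gz" \
      -C "$(dirname "$src")" "$(basename "$src")"
}
```

Run it from cron (and ship the archives off the machine) and you have a basic nightly backup without any PHP involved.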
Is there an alternative, other than modifying HOSTS, to set up temp domains when testing websites locally? I'm using IIS7 on Win7.
I don't want to use /localhost/domainname. I'd rather use /domainname so I don't have to worry about paths to files, etc. My websites are set up so that paths to files are relative to the root folder and not to the page.
Unless your code explicitly checks the domain name, you should be able to deploy on IIS7 and test through http://localhost.
There are a few caveats with this approach, though:
if you are using a third-party API that requires a key tied to the domain name of your app, you might have to request two keys: one for the domain name (for PROD purposes) and one for localhost (for DEV purposes). I do that with both the Google Ajax API and Facebook Connect keys.
http://localhost is in a different security zone in IE than regular internet sites, so if your app uses any API that requires cross-domain communication (like Facebook Connect), you might have problems testing on IE7. It works like a charm in Chrome and seems to work properly in IE8.
if you are working on multiple apps at the same time, you can't have all of them listen on port 80 at once. So, some of the apps will have to be moved to http://localhost:8080 or another port.
My approach is to run the VS Dev WebServer (Cassini) on ports 808x during development and to deploy to the local IIS7 (using CruiseControl.NET) on ports 888x. This allows me to debug easily with VS while working on the code, yet still have the site running under medium trust on IIS7.
I also have a host name on the target domain pointing to my dev machine, so the IIS7 instances are available both as http://localhost:888x and http://dev.domain.com:888x, which allows me to also test the domain integration with Google Ajax and Facebook Connect APIs. Of course, this requires control over the domain DNS and the ability to add an A record to it.
However, note that nothing in this setup requires actual testing on the domain URL.