I'm trying to integrate two pieces of software: one is a web app, the other a command-line application. The problem is that the CLI application must run as a different user than the web server. However, I need to retrieve output from the CLI application and pass it to the web app. I was thinking about using some kind of buffer file, but I'm afraid of crashes when one of the apps can't read from or write to the file while the other is using it.
I'm sure I've seen a solution to something like this once, but I can't recall it. Any help would be valuable. Thanks a lot.
Run with sudo or set the appropriate owner and the suid bit on your command line application?
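For instance, a minimal sketch of the sudo route, assuming the web server runs as www-data and the CLI tool lives at /usr/local/bin/mytool, to be run as a hypothetical account tooluser (all of these names are placeholders):

# /etc/sudoers.d/webapp -- allow www-data to run exactly this one
# command as tooluser, with no password prompt and nothing else
www-data ALL=(tooluser) NOPASSWD: /usr/local/bin/mytool

# from the web app, run the tool and capture its stdout directly:
sudo -u tooluser /usr/local/bin/mytool 2>&1

Capturing stdout from the sudo call (via popen/shell_exec or whatever your web stack offers) avoids the shared buffer file, and with it the concurrent read/write worry, entirely.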
I'm moving my Three.js app and its customized node.js environment, which I've been running on my local machine, to Google Cloud. I want to test things out there and hopefully soon get some early alpha testing going with other people.
I'm not sure which is the wiser way to go: upload the repo I've been running locally as-is onto a VM, which users would then access via the VM's external IP until I get a good name for this app, or merge my local node.js environment with what's available via Google App Engine and run it on GAE.
Issues I'm running into with the Linux VM approach: I'm not sure how to do the equivalent on the VM of what I've been doing locally. In Windows PowerShell I cd into the app directory and then enter node index.js. I'm assuming that with this method of deployment I can get the app running as soon as the browser hits the external IP. I should mention too that the app will allow users to save content as well as upload images and, eventually, 3D models and JSON datasets.
Issues I'm running into with the App Engine approach: it looks like I only have access to a Linux-based command line and have to install all the node.js modules manually. Meanwhile I have a bunch of files to upload, both the server-side node files and all the frontend stuff. I don't see where to upload those files, and ultimately what I'd like is a visual, editable file-tree interface, as I have in Windows and FileZilla, so I can swap files in and out, etc. Alternatively I suppose I could import a repo from GitHub? GitHub would be fine as long as I can visually see what's happening. Is there a visual interface for file structure available in GAE somewhere? Am I missing something?
I went through the GAE "Hello World" tutorial and that worked fine, but was left scratching my head afterward regarding how to actually see and edit the guts of the tutorial app, or even where to look for the files.
So first off, I want to determine what's the better approach, and then if possible, determine how to make the experience of getting my app up there and running a more visual, user-friendly experience.
Thanks.
There are many things to consider when choosing how to run an app, but my instinct for your use case is to simply use a VM on GCE. The most compelling reason for this is that it's the most similar thing to what you have now. You can SSH into the machine and run nohup node index.js & (or node index.js inside tmux/screen if you prefer) and it will start the app and not stop it when you log out of SSH. You can use SCP / SFTP with whatever GUI client you want to upload files. You don't have to learn anything new! If you wanted to, you could even use a Windows VM (although I think you have to pay a little more than for a comparable Linux VM due to the licensing fees).
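A rough sketch of that workflow, assuming the instance is named my-vm and the app lives in ~/myapp (both placeholders):

# from your local machine: log in to the instance
gcloud compute ssh my-vm

# on the VM: install dependencies and start the app so it
# survives logging out of SSH
cd ~/myapp
npm install
nohup node index.js &

# note: you'll also need a firewall rule allowing traffic to
# whatever port the app listens on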
That said, the other way is arguably more "correct" by modern development standards, but it will involve a lot more learning that will prevent you from getting your app running somewhere other than your laptop in the short term:
First, you'll need to learn about Docker and stateless containers, which is basically what your app runs inside of on AppEngine.
Next, you'll need to learn how to hook up a separate stateful service (database, file server, ...) to your app's container so you can store your files, etc. in it, and then probably rewrite your app somewhat to use it to store stuff.
Next, you'll probably want some way to automatically deploy this from code instead of manually doing it, which gets you into build systems, package managers, artifact storage, continuous integration systems, and on and on and on.
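For what it's worth, if you do eventually try App Engine, the upload step that puzzled you is handled by the gcloud CLI rather than a visual file tree. Roughly, assuming a standard-environment Node app (the runtime name is a placeholder; pick whichever is current):

# app.yaml, next to index.js, needs little more than:
#   runtime: nodejs20
# then, from the project directory on your local machine:
gcloud app deploy    # uploads the directory and starts the app
gcloud app browse    # opens the deployed app in your browser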
This latter path is certainly what you should choose for a long-running production service if you work with a big team of developers -- but that doesn't mean that it's necessarily the right path for your project today. If you don't care about scaling up automatically, load balancing between nodes, redundant copies of your app running in different regions in case there's a natural disaster, etc., then go with the easy way for now, and you can learn new ways to improve the service when they're actually needed.
I am trying to make an application that runs submitted scripts, and would like to sandbox those scripts. The scripts need to be able to read in a certain directory (and in all of its subdirectories) but shouldn't be able to write at all, and, other than reading, should not be able to do anything that couldn't be done in a browser (i.e. downloading files over HTTP would be fine). How would I go about doing this?
I don't think Node has this capability built in, but you should be able to run an "unsandboxed" Node on a *nix operating system as a severely restricted user (might be possible in other OSes too, I'm not sure). You might also want to look at Node's VM module.
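A rough sketch of that restricted-user approach on Linux (user and path names are placeholders):

# one-time setup: a system user with no login shell
sudo useradd --system --shell /usr/sbin/nologin sandboxuser

# make the permitted tree readable (and only readable) to everyone
sudo chmod -R a+rX /srv/readable-data

# run the submitted script as that user: it can read the tree but
# owns nothing, so ordinary file permissions stop it from writing
# anywhere (apart from world-writable places like /tmp)
sudo -u sandboxuser node submitted-script.js

# note: this alone does not block network access; that needs extra
# work, e.g. iptables OUTPUT rules matching -m owner --uid-owner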
Eventually, I decided on using the vm node module. I basically just made a namespace that the script running in the sandbox could use, one that would filter out malicious requests / requests that ought to be out of the bounds of the sandbox. The namespace included the fs methods that would be necessary, but refused to execute any of the ones that would modify a directory other than the one I wished the script to be able to modify.
I have a tool written in Perl which is used by different users in my company. Each user has his/her own disk space allocated to them, and they run the tool in that disk space. This is working fine without any issues. As a next step, I wanted to make the tool available over the web, and created a web application through which users can run it. The issue I have is that the tool always runs as a single user. I know the user name through authentication; is there a way I can run the tool as the user who is using the web application?
Yes, suexec.
Also see questions tagged suexec.
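As a rough sketch of how that looks in practice: with suexec enabled and Apache configured with a SuexecUserGroup directive (per virtual host, or implicitly per ~user directory), a thin CGI wrapper runs your Perl tool as that user rather than as the web server (the tool path is a placeholder):

#!/bin/sh
# under suexec this runs as the configured user, not the server user
printf "Content-type: text/plain\n\n"
exec /path/to/the-tool.pl

The catch is that suexec maps requests to users through the server configuration, so you'd typically need one virtual host or one per-user directory per account rather than switching users dynamically from a single URL.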
I have a UI app (it uses GTK) for Linux that needs to run as root (it reads and writes /dev/sd*).
Instead of requiring the user to open a root shell or use "sudo" manually every time he launches my app, I wonder if the app can use some OS-provided API to get root permissions. (Note: GTK apps can't use the setuid bit, so that's not an option here.)
The advantage here would be an easier workflow: the user could, from his default user account, double-click my app on the desktop instead of having to open a root terminal and launch it from there.
I ask this because OS X offers exactly this: An app can ask the OS to launch an executable with root permissions - the OS (and not the app) then asks the user to input his credentials, verifies them and then launches the target as desired.
I wonder if there's something similar for Linux (e.g. Ubuntu).
Clarification:
So, after the hint at PolicyKit, I wonder if I can use that to get r/w access to the "/dev/sd..." block devices. I find the documentation quite hard to understand, so I thought I'd first ask whether this is possible at all before I spend hours trying to understand it in vain.
Update:
The app is a remotely operated disk repair tool for the unsavvy Linux user, and those Linux noobs won't have much understanding of using sudo or of changing their user's group memberships, especially if their disk has just started acting up and they're freaking out. That's why I seek a solution that avoids technicalities like this.
The old way, simple but now being phased out, is GKSu. Here is the discussion on GKSu's future.
The new way is to use PolicyKit. I'm not quite sure how this works but I think you need to launch your app using the pkexec command.
UPDATE:
Looking at the example code on http://hal.freedesktop.org/docs/polkit/polkit-apps.html, it seems that you can use PolicyKit to obtain authorization for certain actions which are described by .policy files in /usr/share/polkit-1/actions. The action for executing a program as another user is org.freedesktop.policykit.exec. I can't seem to find an action for directly accessing block devices, but I have to admit, the PolicyKit documentation breaks my brain too.
So, perhaps the simplest course of action for you is to separate your disk-mangling code that requires privileges into a command-line utility, and run that from your GUI application using g_spawn_[a]sync() with pkexec. That way you wouldn't have to bother with requesting actions and that sort of thing. It's probably bad practice anyway to run your whole GUI application as root.
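A rough sketch of that split (the helper name and device are placeholders):

# the GUI stays unprivileged and spawns only the helper via pkexec;
# PolicyKit puts up the authentication dialog itself
pkexec /usr/local/bin/disk-helper --check /dev/sda

# without a custom .policy file installed under
# /usr/share/polkit-1/actions, pkexec falls back to the generic
# org.freedesktop.policykit.exec action, which by default prompts
# for administrator credentials

Shipping your own .policy file for the helper then lets you control the prompt text and whether the authorization is retained for a while.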
Another suggestion is to ask the author of PolicyKit (David Zeuthen) directly. Or try posting your question to the gtk-app-devel list.
I've been looking into different web statistics programs for my site, and one promising one is Visitors. Unfortunately, it's a C program and I don't know how to call it from the web server. I've tried using PHP's shell_exec, but my web host (NFSN) has PHP's safe mode on and it's giving me an error message.
Is there a way to execute the program within safe mode? If not, can it work with CGI? If so, how? (I've never used CGI before)
Visitors looks like a log analyzer and report generator. It's probably best set up as a cron job that creates static HTML pages once a day or so.
If you don't have shell access to your hosting account, or some sort of control panel that lets you set up cron jobs, you'll be out of luck.
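If shell access does turn out to be available, the setup is small; a sketch with placeholder paths:

# crontab -e, then a line like this rebuilds the report
# nightly at 03:00 from the server's access log:
0 3 * * * visitors -A /home/logs/access_log > /home/www/stats.html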
Is there any reason not to just use Google Analytics? It's free, and you don't have to write it yourself. I use it, and it gives you a lot of information.
Sorry, I know it's not a "programming" answer ;)
I second Jonathan's answer: this is a log analyzer, meaning that you must feed it the webserver's logfile as input, and it generates a summary of it. Given that you are on a shared host, it is unlikely that you can access that file, and even if you could, it probably contains entries for all the websites hosted on the given machine (setting up separate logging for each VirtualHost is certainly possible with Apache, but I don't know whether it is common practice).
One possible workaround would be to write out a logfile from your own pages. However, this is rather difficult and can have a severe performance impact (for one, you have to serialize the writes to the logfile if you don't want to get garbage from time to time). All in all, I would suggest going with an online analytics service, like Google Analytics.
As fortune would have it, I do have access to the log file for my site. I've been able to generate the HTML page on the server manually; I've just been looking for a way to make that happen automatically. All I need is to execute a shell command and get the output to display as the page.
Sounds like a good job for an intern.
=)
Call your host and see if you can work out a deal for doing a shell execute.
I managed to solve this problem on my own. I put the following lines in a file named visitors.cgi:
#!/bin/sh
# print the CGI header, then hand off to visitors, which writes
# its HTML report to stdout
printf "Content-type: text/html\n\n"
exec visitors -A /home/logs/access_log
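(The script also needs to be executable -- chmod +x visitors.cgi -- and visitors must be on the CGI environment's PATH, or invoked by its full path.)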