It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 11 years ago.
I want to protect a web application from an administrator or anyone else with physical access to the server.
Any ideas?
Thanks
How physical is physical? :P
Your webserver (let's say Apache) needs to access your files. It runs under a user account (www-data or apache or something). Ergo: the files for the web application must be accessible to that user.
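As a sketch (assuming a Debian-style www-data account and a hypothetical /var/www/myapp path — adjust both to your setup), locking the files down to the webserver's account looks like:

```shell
# Hypothetical path and account names; adjust to your distro/vhost.
sudo chown -R www-data:www-data /var/www/myapp
sudo chmod -R u=rwX,g=rX,o= /var/www/myapp   # no access for "other" users
# Note: root can still read these files regardless of the mode bits.
```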
An administrator (root user?) can impersonate any user, and has access to all files, so if you're dealing with a very smart administrator he can always get to your files.
You could run your webserver under a different account and encrypt the part of the disk where the web application files are stored. But since the webserver needs to decrypt them, the decryption key has to be stored somewhere, and the administrator can get to it.
So, I'd go for obfuscating your web application with (in the case of PHP) something like Zend Guard, which makes the source unreadable. With a license manager on top, the source is fairly useless outside the server (though not completely unbreakable).
The only way to be really sure your sources are safe is to be the only one with access to the root/administrator account.
Physical access can only be prevented by hosting your own server in a secure data center...
Closed 10 years ago.
We have a new extension that we spent many months building, and it is now live on the web store.
The extension communicates with our API, and we see many fake installs daily: installs that show up in our API but not in the web store.
After many sleepless nights, our best guess is that a bot running Chrome installs our extension and deletes its local storage every time. We are not sure how likely that is, though, because the IP of each installation is different.
There are many more details, so if you would like to help and need further info, let me know what you need and I can elaborate.
I'm not familiar with the Chrome extension API, but if someone earns money by getting people to install your extension, I'd check that lead.
Presumably, the clients (the many IPs you mention) that apparently "install" your extension report that event by making an HTTP request to your API — again, I'm only presuming!
If so, it may be that someone controls many machines which simply issue those HTTP requests to "report" a (fake) installation, thereby making money.
If that is the case, and someone does make money from your extension (possibly indirectly), check the affiliate code or whatever the incentive is. This is also worth reporting to Google itself; they can certainly investigate it.
Closed 10 years ago.
I am currently writing a user interface in Ember.js and need help with some server-side decisions concerning the transport technology and the server-side script.
The app is planned as a simple wrapper that calls a few server-side scripts, with some database sugar added for handling user permissions and storing inputs/outputs.
Users have to be logged in for the application to work properly.
I am expecting high concurrency of working users, and since I can use more parallel threads on the server, I am not entirely sure whether I have to go with Node.js + socket.io.
Half of the requests will be simple database queries; the other half will need a little more computation time in another server-side script (up to 5 s).
I will most probably use MongrelDB as the database.
My biggest questions:
Is it technologically safe today to go with pure WebSockets, or is it better to have the graceful degradation of Socket.io?
Will Node.js scale nicely on a multi-core box, or should I use something like Mongrel2 with a Python backend?
Will a Python backend handle high concurrency, given that some responses take really long?
How do I handle logged-in users with Node.js + socket.io?
Better to have graceful degradation, because the WebSocket protocol is still changing rapidly.
For scaling, I use Redis pub/sub, but you can also use the cluster module for multi-core.
Don't know.
I share sessions between Connect and Socket.io with RedisStore. You can also use RedisStore just to handle logged-in users.
Closed 10 years ago.
Do I really need a portal?
One question always comes to my mind: why do people go for portal development? Can't they manage with a normal web application, managing the UI page with different sections? I am sorry if my question is not valid.
Or, in what scenarios do we actually need a web portal?
Managing a UI page with different sections is fine if your users are using only your application. A portal, however, lets your users use multiple applications (many of them not created by you) and 'aggregate' their content — or at least have one place that provides easy access to all of them — on a single page. The emphasis here is on multiple applications: applications that you as a developer may have no control over, because they were written by a third party (such as Google, or any other developer).
The items on a portal page may not necessarily be from the problem domain that you are developing for. There's no reason why a user can't put the local weather on the same page that he's examining the inventory for his company.
Portals provide a single point of entry; that's the key point. A portal also provides some other niceties such as managed logins. If you are creating an application for a customer, and you don't have a need for a portal, a portal may be overkill (and probably is, if you have to ask).
Closed 11 years ago.
If /usr/bin/passwd was not a set-UID program, what capabilities would it require for a normal user to still be allowed to change his password?
It would need to be able to modify /etc/passwd, /etc/shadow and/or various other files (depending on how authentication is configured).
So CAP_DAC_OVERRIDE would seem to be sufficient, however, it is trivial to root a box with CAP_DAC_OVERRIDE, because any binary can be replaced (such as /bin/sh which is often executed by root cron-jobs).
On some systems, no extra privileges are required to change passwords because a daemon is already used, e.g. in most decentralised authentication systems (NIS+, etc.).
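For illustration, a hypothetical (and, for exactly the reason given above, unsafe) setup replacing the set-UID bit with a file capability would look like:

```shell
# Hypothetical sketch only — granting CAP_DAC_OVERRIDE this broadly
# is itself dangerous, as discussed above.
sudo chmod u-s /usr/bin/passwd                    # drop the set-UID bit
sudo setcap cap_dac_override+ep /usr/bin/passwd   # grant the capability instead
getcap /usr/bin/passwd                            # inspect the result
# Depending on how passwd rewrites /etc/shadow (temp file + rename),
# it may additionally need cap_chown and/or cap_fowner.
```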
Impossible unless you want to destroy the security of the system.
If the "passwd" utility can do its job as a normal user, then any user could write their own version to change the password of any other user. (That is, take the source code to the utility, modify it to skip asking for the current password, compile, and run.)
I suppose you could create a "password daemon" that runs as root and listens on a socket in order to service password change requests. Why you would want that instead of a set-uid /usr/bin/passwd is beyond me, though; the security implications are identical.
But no matter what you do, changing the password database can only be allowed for some trusted process. Otherwise anybody can change anybody else's password, which kind of defeats the purpose of a multi-user OS.
You'd need to be running as root. passwd requires read/write access to /etc/passwd and /etc/shadow, and those are files that only root can directly manipulate.
Closed 10 years ago.
I need to develop part of a Drupal web site. I cloned the git repository and set up settings.php. The site shows, and I am using a dump of the MySQL DB from the live server. I am 99% sure that the dump was created without clearing the cache first. No link on the site works, so don't tell me to go to /admin/, because I can't. I tried truncating the cache* tables without success. I have .htaccess in place. What else do I have to do to get the Drupal site working properly?
I am using Drupal 6.
Ok, I couldn't type this all in a comment, so it is now an answer...
~ "It is just that Drupal is rubbish. Now I remember why I moved away from PHP. I tried for follow Drupal's code but it is a nightmare. They should learn what OOP is (although OOP support in PHP is rubbish too)."
That comment is unconstructive, contradictory, and comical, especially considering you haven't even put in the time to provide enough info for others to answer this question.
With such little info, I'd guess that Alexander is on the right track in questioning around it being an issue with your system's rewrite config.
Here are some other questions you should either have provided answers for or tried debugging already before you start blaming the platform over your ability to troubleshoot it.
Are you able to log in? If not, maybe via http://example.com/index.php?q=user/login?
Any errors in the Apache log? Have you tried turning on rewrite debugging, and if so, are any errors reported?
Is mod_rewrite enabled on your local environment for clean urls?
Is your Apache config set to AllowOverride All for the specific VirtualHost, so that .htaccess is picked up? If not, have you configured Apache to load that .htaccess file?
Is $base_url set correctly in the settings.php file?
What format do the generated links have?
Have you tried disabling any custom modules via the system table?
Can you install a brand new instance of Drupal on the same system configured the same way successfully?
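A few of the checks above can be run directly. These are Debian/Ubuntu-style commands with a hypothetical database name (`drupal`); paths, service names, and table names vary with your setup:

```shell
# Clean URLs depend on mod_rewrite plus AllowOverride All:
sudo a2enmod rewrite
apachectl -M | grep rewrite            # confirm the module is loaded
sudo service apache2 restart

# Bypass clean URLs entirely to isolate the rewrite layer:
curl -I "http://localhost/index.php?q=user/login"

# Drupal 6 cache tables can be emptied directly in MySQL:
mysql -u root -p drupal -e "TRUNCATE cache; TRUNCATE cache_menu; TRUNCATE cache_page;"
```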
The platform you know well is usually going to seem much more elegant than the one you don't know much about at all.