I was wondering how storage solutions like S3 or Google Drive check whether their storage platform is being abused for the storage of malicious content.
For example, if someone uploads a password-protected zip file to their servers, I don't see how they can verify its contents.
For unencrypted files, I can understand that some sort of file parser could work. But if someone uploads a password-protected file, the only way to see or verify the contents is to brute-force your way into it (ignoring the organisation's moral obligation not to do that).
So, how do these companies/solutions verify the kind of data that is being uploaded to their platforms?
There isn't a technical solution, only a legal one. They say: "We are only a service provider, not a content provider. We aren't responsible for illegal use of our services."
The stance used to be the same with YouTube, where you were able to upload copyrighted content without any issue with Google (though not with the copyright owner). That has since changed and YouTube now performs checks, but the same legal principle applies.
I am new to web development, particularly the back end, and I was wondering what basic precautions should be implemented first to ensure security and avoid exploits that could leak user data or credentials, for example.
First of all, make sure you are following the CIA model:
Confidentiality: Refers to access control of information to ensure that those who should not have access are kept out. This can be done with passwords, usernames, and other access control components.
Integrity: Ensures that the information end-users receive is accurate and unaltered by anyone other than the site owner. This is often achieved with encryption, such as Secure Sockets Layer (SSL/TLS) certificates, which ensure that data in transit is encrypted.
Availability: Ensures information can be accessed when needed.
Some other tips would be :
Use an SSL/TLS certificate.
Take precautions when accepting file uploads through your site (if you accept them).
Use a Content Security Policy (CSP) to help prevent cross-site scripting (see the sketch after this list).
Set permissions that control who can read, write, and execute any given file or folder of your website.
Limit login attempts and temporarily lock out IP addresses that make several failed attempts to get in.
Keep scripts up-to-date.
Maintain multilayer security and keep backups.
And please take care with how you create and connect to your database.
Lastly, show the beta version of your website to someone experienced who can look for any loopholes before it goes live.
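To illustrate the CSP point above, here is a minimal sketch of sending a Content-Security-Policy header from a Node/Express backend written in TypeScript. The policy values are only examples and would need to be tailored to your own site:

```typescript
import express from "express";

const app = express();

// Send a Content-Security-Policy header with every response.
// This example policy only allows scripts, styles, and images from our own
// origin, which blocks most injected third-party scripts (a common XSS vector).
app.use((_req, res, next) => {
  res.setHeader(
    "Content-Security-Policy",
    "default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self' data:"
  );
  next();
});

app.get("/", (_req, res) => {
  res.send("Hello with CSP enabled");
});

app.listen(3000, () => console.log("Listening on port 3000"));
```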
I am a beginner and I'm creating a web app with React. I want my web app to be able to read and write a JSON or CSV file on my hard disk. I've done this easily with C++ and Python. Should I learn Node.js, Django, or something like that? I've searched and I don't know what to do.
What should I do?
Edit: In this question I mean my own disk specifically. I've read the answers and I now understand that this is not a good idea.
Part of the beauty of the web is that web browsers generally do not have access to the computer's filesystem. This is an intentional security choice. It would be horrible if advertisers could see the contents of your hard drive.
There are technologies that let individual websites store information on your computer and act a little bit like a filesystem, ranging from old-school cookies to more advanced storage APIs such as localStorage and the IndexedDB database.
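As a small sketch of that idea, assuming browser-side TypeScript, here is how you might persist and read JSON with the localStorage API. The key name and data shape are arbitrary examples:

```typescript
// Minimal sketch: persisting and reading JSON in the browser with localStorage.
// The "appData" key and the AppData shape are arbitrary placeholders.

interface AppData {
  notes: string[];
  updatedAt: string;
}

function saveData(data: AppData): void {
  // localStorage only stores strings, so serialize to JSON first.
  localStorage.setItem("appData", JSON.stringify(data));
}

function loadData(): AppData | null {
  const raw = localStorage.getItem("appData");
  return raw ? (JSON.parse(raw) as AppData) : null;
}

saveData({ notes: ["hello"], updatedAt: new Date().toISOString() });
console.log(loadData());
```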
I'm beginning with PowerShell and looking at the automated tasks that can be run as Admin.
Are there security risks in automating tasks? How can I resolve these issues? Is there a way to script more security before the task is run?
I have been looking for hours trying to find an answer and nothing has come up for database risks, only for PHP and website issues. But what about Active Directory, or communicating with servers? Or just simple things like checking for free disk space? Do those pose a security threat to the network?
The topic of securing an OS is huge and really off-topic on SO. I recommend getting some basic course material, say, CompTIA Security+ or the like. Learn the basic principles and concepts first, then focus on technology-specific issues.
That being said, the most obvious security hole with scripted operations is invalid permissions. Consider an admin script that sits in a directory which allows write access to non-admin users. Oops, immediate backdoor. Can you figure out why?
Even read access is dangerous. Maybe the admin script has credentials stored in plain text or serialized on disk? Oops, another security hole.
I'm creating a browser based image cloud service and every user will have his own picture folder.
My question is how to make sure that other users or attackers cannot access folders belonging to someone else.
What do I need to consider?
Is it enough, for example, to check session variables?
Thanks in advance!
I'm not 100% sure what you mean by just checking the session variables.
I would create a setup like this:
\root
    \userImages
        \user1
            \img1.png
            \img2.png
        \user2
            \img1.png
            \img2.png
    \public
        \index.php
I'm assuming you would use PHP or ASP.NET or something similar, running behind a server like nginx or Apache. You can set the server's document root to the public folder. This means only your code would have access to the user images.
You can use PHP or whatever language to look at the session information and see if the user is authenticated. If you can, I would recommend encrypting the cookie data with Mcrypt. Once you have checked the authentication, you can fetch the file with a script and send back the appropriate header information. Here's a really in-depth article that I think would help you if you actually want a how-to: Protecting Images with PHP
If you are using PHP, Laravel handles sessions and protecting images really nicely.
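For reference, here is a rough sketch of the same pattern using a Node/Express backend in TypeScript rather than PHP. The authenticatedUserId helper is a placeholder for your real session check, and userImages is assumed to sit outside the public web root:

```typescript
import express from "express";
import path from "path";
import fs from "fs";

const app = express();

// Images live OUTSIDE the public web root, so they can never be fetched directly.
const IMAGE_ROOT = path.resolve(__dirname, "..", "userImages");

// Placeholder: replace with your real session/authentication lookup.
function authenticatedUserId(req: express.Request): string | null {
  // e.g. read a session cookie and return the logged-in user's id
  return (req.headers["x-demo-user"] as string) ?? null;
}

app.get("/images/:file", (req, res) => {
  const userId = authenticatedUserId(req);
  if (!userId) {
    res.status(401).send("Not logged in");
    return;
  }

  // Only ever serve files from the *current* user's folder, and strip any
  // path components from the requested name to prevent directory traversal.
  const safeName = path.basename(req.params.file);
  const filePath = path.join(IMAGE_ROOT, userId, safeName);

  if (!fs.existsSync(filePath)) {
    res.status(404).send("Not found");
    return;
  }

  res.sendFile(filePath);
});

app.listen(3000);
```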
For Google Analytics, I had to prove that I owned my domain. I added a TXT record to do this. I also had to prove to Microsoft that I owned my domain by uploading a file (BingSiteAuth.xml) to my site.
Now that I'm up and running with Analytics and Webmaster tools for Google and Bing, can I remove these verification records, or will that break analytics? Does leaving the record and file there pose any kind of security risk?
No, you shouldn't remove any of the verification files or DNS records. Google periodically rechecks your site, and if the check fails you will lose access to WMT, for example. See this WMT support page:
Removing the record from your server can cause your site to become unverified, and you will need to go through the verification process again.
I'm not 100% sure, but I think Bing does the same. It makes sense, because the domain owner or the administrators' roles might change, and you don't want anyone who ever had access to your site's data to keep that access forever.
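As a small sketch of what such a recheck looks like, here is how you could confirm that your TXT verification record is still resolvable using Node's built-in dns module in TypeScript. The domain and token value are placeholders:

```typescript
import { promises as dns } from "dns";

// Rough sketch: check that a domain's TXT records still contain a given
// verification token, similar to what Google or Bing do when re-verifying.
// "example.com" and the token string below are placeholders.
async function hasVerificationToken(domain: string, token: string): Promise<boolean> {
  // resolveTxt returns string[][]: each record may be split into chunks.
  const records = await dns.resolveTxt(domain);
  return records.some((chunks) => chunks.join("").includes(token));
}

hasVerificationToken("example.com", "google-site-verification=abc123")
  .then((ok) => console.log(ok ? "Token still present" : "Token missing - re-verification would fail"))
  .catch((err) => console.error("DNS lookup failed:", err));
```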