Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
I have finished watching the 2nd season of House of Cards, and I'm amazed. The show really is fantastic; however, let me get straight to the point - computer-wise, how realistic is the show? Gavin Orsay, a hacker and informant for the FBI, created a USB stick for The Washington Herald reporter Lucas Goodwin, which contained malicious code that would automatically be injected into any machine it was plugged into.
- Theoretically, is this possible?
Not that I would want to do something like that - I'm just interested... And I'm not talking about autorun.inf files; I mean real code that would be able to penetrate a system - a virus, essentially. And if there were such a virus, would it be able to inject itself cross-platform? That is, do the same amount of damage on Windows, Unix, Linux distros, etc.?
This is possible. Certain countermeasures have been implemented in Unix systems, which pride themselves on being safe from attacks such as these.
That said, it wouldn't work on every machine.
This is not only possible, it has been done many times. Have a look at the Stuxnet worm, reportedly developed by the United States to slow down Iran's uranium enrichment program.
http://en.wikipedia.org/wiki/Stuxnet
As far as cross-platform goes, the flash drive could carry different versions of the same virus/trojan/worm compiled for different hardware and operating systems. Developing software like this is not a matter of whether it can be done; it is a matter of how much time and money you have to make it happen!
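To make the cross-platform point concrete in a harmless way, here is a minimal sketch (Python, hypothetical file names, nothing malicious) of how any launcher shipped on removable media could detect the host OS at run time and start the matching platform-specific binary - the same dispatch pattern cross-platform installers use:

    import platform
    import subprocess
    from pathlib import Path

    # Hypothetical per-platform binaries shipped alongside the launcher.
    BINARIES = {
        "Windows": "tool_win.exe",
        "Linux": "tool_linux",
        "Darwin": "tool_macos",
    }

    def launch_for_host(media_root: str) -> None:
        """Run whichever binary matches the operating system we're on."""
        system = platform.system()       # "Windows", "Linux", "Darwin", ...
        binary = BINARIES.get(system)
        if binary is None:
            raise RuntimeError(f"no binary shipped for {system}")
        subprocess.run([str(Path(media_root) / binary)], check=True)

    launch_for_host("/media/usb-stick")  # path is illustrative only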
Closed. This question needs to be more focused. It is not currently accepting answers.
So I was messing around, looking at different ways to operate a computer with total security. I found that people were using specialized operating systems like Tails, and that got me thinking: could a computer be secured by running an operating system that nobody has ever seen?
Obviously it would take a lot of work to build an OS from the ground up without any help, but would that be safe? Could having no information available about an OS make it invulnerable to attack?
P.S. I am talking about anti-hacking and anti-malware, not private web browsing.
What you're suggesting sounds a lot like security through obscurity.
Firstly, there's the issue that if you write your own operating system from the ground up, it won't have had exposure to close scrutiny, and it's very likely you would have undiscovered exploitable bugs and vulnerabilities. A lot like cryptography, anyone can design an operating system that they, themselves, can't break into. Unfortunately, there's always someone in the world who's smarter than you and who will be able to break in.
Secondly (and following up on the first point), the entire security of your architecture will essentially rely on the secrecy of your implementation. The moment someone manages to get a copy of your operating system or source code, you can be sure the security of your whole system will come crashing down like a ton of bricks before you can finish saying "oops". This is a very fragile defence against attack.
Lastly, nothing is provably 'invulnerable to attack'. The closest thing to it is to have as many people using the system as possible and hope the good guys find the vulnerabilities before the bad guys do. But then you'd be back to square one, since this is pretty much what most major operating systems already do.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
I've researched GPL licensing on the internet, and whether it is possible to use the Linux kernel in a commercial product, but I always come across conflicting information.
Say I have an embedded system that runs the kernel, and on top of that runs embedded software written by me. Am I allowed to sell the product to customers without giving them the source code for the software I have written - for example, because of trade secrets, commercial viability, the risk of someone else using my code to produce a similar product, etc.?
If your software runs on top of the kernel, then I suggest you read the license of the kernel:
NOTE! This copyright does not cover user programs that use kernel
services by normal system calls – this is merely considered normal use
of the kernel, and does not fall under the heading of "derived work".
Closed. This question is off-topic. It is not currently accepting answers.
You know, those electronic road signs - the variable message displays you see over the highway. I assume they run on some old computer language/framework; does anyone know what that might be?
The displays themselves are pretty basic: in most cases they just have a microcontroller with some firmware that converts the commands they receive serially into patterns and/or characters. The more recent ones also give feedback about broken LEDs, for example. Typically this firmware is written either in assembly or C.
The real intelligence of these systems is often located in a central control system that coordinates an entire city or even a state. These control systems can perform intelligent tasks on entire groups of signs: given the location of an accident, for example, they can add the correct distance to the accident to each sign's warning message, automatically divert traffic, and so on.
I know of such systems written in C, C++, Java, G2, ... It depends on when they were designed. So no, they're not by definition outdated and antique! They do tend to have a longer lifespan than your average desktop app, though, which often leads to the oldest parts being swapped out for more recent developments; these newer modules are in many cases based on more recent technologies.
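As a purely illustrative sketch (hypothetical sign IDs and message format, not any real vendor protocol), here in Python is the kind of computation such a central control system performs: given the location of an accident, build a distance-aware warning for every sign upstream of it.

    # Kilometre markers of the signs along one carriageway (hypothetical).
    SIGNS = {
        "A12-01": 10.0,
        "A12-02": 14.5,
        "A12-03": 18.2,
    }

    def warning_messages(accident_km):
        """Return a warning text for each sign located before the accident."""
        messages = {}
        for sign_id, sign_km in SIGNS.items():
            distance = accident_km - sign_km
            if distance > 0:                 # only signs upstream of the accident
                messages[sign_id] = "ACCIDENT IN %.1f KM - SLOW DOWN" % distance
        return messages

    print(warning_messages(16.0))
    # {'A12-01': 'ACCIDENT IN 6.0 KM - SLOW DOWN',
    #  'A12-02': 'ACCIDENT IN 1.5 KM - SLOW DOWN'}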
Closed. This question is off-topic. It is not currently accepting answers.
Is it possible to get into legal trouble for identifying vulnerabilities in a web application even if you don't exploit them?
I have considered using tools like NetSparker on occasion to see if a site has any vulnerabilities, and I'd like to contact the owner of the site to see if they'd be interested in me fixing them. I suspect that some of these people might get angry or misinterpret my intentions, and I'm curious whether I could get into any trouble for simply finding these security issues.
If you are looking for vulnerabilities in open source software or commercially distributed software and you are a US citizen, you are protected by the First Amendment. It is legal for you to write exploit code and do whatever you want with it (as long as you aren't selling it to terrorists or the mob). If you do find a flaw, report it to BugTraq and put it on your resume. I have racked up a lot of CVE numbers over the years, and I actively write exploit code.
In Germany and France the laws are a bit different: possession of "hacking tools" such as exploit code, or even Nmap, can land you in jail. You might also be interested in the laws around full disclosure.
On the flip side, if you go around scanning people's websites looking for vulnerabilities, you are breaking the law and the FBI may investigate you. Do not look for vulnerabilities in random websites without the owner's permission.
You shouldn't get into trouble, but depending on how big of a prick the other party is, who gets embarrassed, and who feels threatened, you could easily turn into the next Adrian Lamo.
What one can get into trouble for often comes down to what "they" can convince a judge of. It's certainly possible for a company to see such an act as a genuine attack (the wrong person in the company gets the wrong idea and yells loudly enough about it) and seek some kind of damages from you. Just remember that "being right", "being reasonable", or "making sense" don't really mean much in the US legal system (assuming US here).
That said, as a developer I absolutely encourage vulnerability testing and reporting back to the developer for the product being tested. But, unfortunately, you should tread carefully.
Closed. This question needs to be more focused. It is not currently accepting answers.
After reading The E-Myth Revisited, I realize that I can do a better job of making my company less reliant upon me... I spend a tremendous amount of time answering silly questions (silly to me, but necessary for my developers to get the job done).
I need to write a set of operating manuals for what to do in certain situations...
For instance:
How to make a build
How to write test cases
How to report status
How to fix a bug
How to handle support question A, B, C, etc...
What to do when you are stalled
What to do when the power goes out (really, I need to do this)
etc...
What are some useful, generic operating manuals that you can think of for a software development company? And if you know of some good, short, online versions, please post them. I would much rather use a starter manual and modify it for my needs than start from scratch.
What about a wiki? At least then other people can start to contribute. Otherwise they are just going to rely on you for the manuals.
I disagree with the wiki. As the owner of the company, it is your responsibility to write the manuals, or to delegate the job in a very controlled fashion. People should rely on you for the manuals.
Really though, back to the question: the obvious standards (coding, SQL, etc.) for your platform and programming languages. You'll be able to find examples of these anywhere on the internet. As for customer support, you should probably write that yourself; you know how you want your customers treated. As for test cases, again, you'd expect your developers or testers to have a professional understanding of what needs to be done, though you might indicate the acceptable minimums.
What to do when you are stalled. That's what managers are for :-)
I think it boils down to writing the manuals that are unique to your business, and trying to steal or borrow manuals for the generic processes.