How to check virus code without infecting my machine?

I need to write some simple virus code for an assignment, and I need to check whether it works properly. But if I run it, it will infect my machine. How can I test it safely?

You can run it on a virtual machine (VMware, Oracle VirtualBox, etc.):

Grab a Windows virtual machine image.
Install VirtualBox.
Take a snapshot so you can restore the system to its pre-infection state at any time.
Run your malware for the test.
Restore the snapshot as needed to re-test your malware (a command-line sketch follows).
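For example, snapshots can be taken and restored from the command line with VBoxManage; a minimal sketch, where "Win10-Test" is a hypothetical VM name:

    # snapshot the clean, pre-infection state
    VBoxManage snapshot "Win10-Test" take clean-baseline
    # start the VM and run the malware inside the guest
    VBoxManage startvm "Win10-Test"
    # when the test is done, power off and roll back to the clean state
    VBoxManage controlvm "Win10-Test" poweroff
    VBoxManage snapshot "Win10-Test" restore clean-baseline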
Alternatively, you can try Malwr, an online sandbox service for malware analysis.

Related

Change KVM qcow2 file with Windows 10

I have a qcow2 file that contains Windows 10. The host operating system is RHEL 8.2, the virtualization stack is KVM, QEMU, and virt-viewer, and the command-line tool used to manage the virtual machine is 'virsh'.
I need to update the Windows drivers and kernel, change some registry entries, uninstall some applications, add things to the Task Scheduler, and more.
My question is: what is the best process to achieve this? Should the result be a new qcow2 file? Are changes required to the XML configuration file of the virtual machine?
There are two modes of editing the virtual machine, online and offline, the difference being whether the virtual machine is running during the edit. Which mode is best for the task described above?
As I understand it, snapshots are stored inside the qcow2 file, and the user then needs to pick between them. The users on the system I am working on are not aware they are running on virtual machines, so I cannot take this path, unless I am missing something.
There are also the 'managedsave' and 'save' commands for virsh, but they don't create a new qcow2 file, and I don't think those commands are meant for this.
Finally, I found that the qcow2 file can be mounted as a device, changed, and then unmounted. But how could I uninstall applications and so on that way?
Thank you!
All the changes you described (updating the Windows drivers and kernel, changing the registry, uninstalling applications, adding things to the Task Scheduler, and more) affect only the guest disk, i.e. the qcow2 file, and guest memory.
You can run the guest, make these changes, and power it off. All changes will be saved to the guest disk. If you suspend the guest instead of powering it off, some of the changes may be kept only in guest memory.
No changes are needed to the XML configuration file of the virtual machine, and no new qcow2 file will be created.
Yes, snapshots are stored inside the qcow2 file, but since you have a copy, you don't need to create snapshots. There is also no need for the 'managedsave' and 'save' commands.
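A rough sketch of that run, change, power off workflow, assuming the libvirt domain is called win10 and its disk lives under /var/lib/libvirt/images (names and paths are just placeholders):

    # keep a safety copy of the disk before making changes
    cp /var/lib/libvirt/images/win10.qcow2 /var/lib/libvirt/images/win10-backup.qcow2
    # boot the guest and make the changes from inside Windows
    virsh start win10
    # ...update drivers, registry, Task Scheduler, uninstall applications...
    # shut the guest down cleanly; everything is now persisted in win10.qcow2
    virsh shutdown win10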

Is there a manual way to roll back an Update on Windows 10?

Long story short, Windows 10 is utterly broken on my laptop after it automatically installed some updates. It's now stuck in a loop that always ends with "Undoing changes made to your computer".
I can't get into the BIOS.
I can't get into the Windows Recovery Environment.
I've been talking to MS support for far too long, so I'm wondering if it's possible to attach the drive as a secondary disk in another machine that does work and manually remove any installed updates directly through the filesystem.
The only solution MS were willing to offer was to format the whole drive and re-install Windows.
When I moved the HDD into a working Windows 7 machine, it actually prompted a chkdsk run over the disk.
It found a whole load of orphaned files. I'm not sure if that was really the cause, but after backing up as many files as I had access to, I put the drive back into the other machine and now it boots.
tl;dr: chkdsk fixed it.
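For reference, here is roughly what that looks like from the working machine, run in an elevated PowerShell and assuming the broken installation shows up as drive E: (if chkdsk alone doesn't help, DISM can list and remove update packages from the offline installation):

    # fix filesystem errors on the attached drive
    chkdsk E: /f
    # list the update packages installed in the offline Windows image
    dism /Image:E:\ /Get-Packages
    # remove a specific package, using the identity shown by the previous command
    dism /Image:E:\ /Remove-Package /PackageName:<package identity from the list>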
I was stuck in this loop last night.
Machine configuration: Dell Inspiron
Windows 10 (original)
The one thing you can do is use an application called the Dell USB Recovery Tool. You will have to format your whole computer, C: and every other partition, so you will need an extra hard drive for a backup.
The process goes like this:
Install the application above on another computer, open it, fill in your service tag, and use it to make a pen drive bootable.
Plug that pen drive into the laptop.
Go to Troubleshoot.
Repair.
Install a new original OS.
It will ask about a backup; make a backup to the other HDD.
Install and recover your backup.

Is installing compiler on a virtual pc vs desktop more secure?

My supervisor is pushing developers to install any compiler (Visual Studio, etc.) on a virtual machine rather than on the desktop. His argument is that it is more secure to put the compiler on a VM in case the desktops are hacked. But if I can access the virtual machine from my desktop, then a hacker can too. I am just trying to understand why it is more secure to put the compiler on a VM. Thank you.
If the virtual machine is hosted in the cloud (public/private/internal), then somebody else is probably managing security in the cloud. In that case it will almost certainly provide more security than a bare desktop.
However, we are then relying on that somebody.
If your VM is running on a desktop hypervisor, I would still prefer doing all work in the VM. In a hacking scenario I would still have my desktop, even if the VM were gone.
I have seen something similar when a ransomware attack happened: the Windows desktops were gone, including their local filesystems and VMs, but only the VMs running Windows were affected; the hypervisor and local filesystem were fine where the host OS was not Windows.
Not sure if this answers your question, but that is my perspective from what I have seen so far in the industry.

Cygwin vs Linux Virtual Machine for Development?

< skippable part >
I work in IT (mostly desktop support and network administration) in a Windows environment, and I occasionally program.
A couple of weeks ago, I decided I couldn't be as effective as I want to be without a Bash environment for my command-prompt needs. This is especially true when I am using Ruby and Git. I used Msysgit for a while, but I just didn't like that it wasn't extensible like Linux. So I installed Cygwin and played around with that for a couple of weeks.
As great as Cygwin is, it seems like it is meant to be a souped-up command prompt, and its compatibility with Linux is just a pleasant side effect. This became especially evident when I tried to upgrade Ruby to 1.9.3 (it worked, but it wasn't straightforward), install RVM (never worked), and install RMagick (may or may not work, but it looks like a headache).
So now I'm considering running Linux in a virtual machine. But I'm worried that might be another can of worms, and I'll have wasted hours before I find that out. I like that Cygwin runs in Windows and that I get to use my IDE, user folder, and more with it. But I don't like that support for it is not as thorough as for a major distro.
< /skippable part >
Does anyone here have insight on using Cygwin vs running a Linux virtual machine?
Any advice on setting up a Linux development environment in a virtual machine within Windows?
I have faced the same issues before, and the best solution in my experience is simply two workstations :).
Short of that, having Linux running in a virtual environment is far better.
First of all, you will have full Linux capabilities (except 3D acceleration, which you probably don't need).
You will be able to create snapshots and revert to them when things go wrong!
You can spin up multiple environments from templates, which is very convenient (a sketch follows this answer).
The only downside I can think of is the performance hit on the host machine.
If it's a normal workstation/PC, an IDE plus one virtual machine plus a browser with 100+ tabs makes it slow.
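For instance, with VirtualBox you can snapshot a template VM once and then create cheap linked clones from it; a sketch, where "dev-template" and "dev-project1" are hypothetical VM names:

    # snapshot the template once
    VBoxManage snapshot "dev-template" take base
    # create a linked clone from that snapshot and register it
    VBoxManage clonevm "dev-template" --snapshot base --options link --name dev-project1 --register
    # boot the new environment
    VBoxManage startvm "dev-project1"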
1: Cygwin is good for quick hacks and for being able to access host-OS resources (you can run IE from a bash script, for example). For something tightly integrated and some "real" work, go for a VM. It will emulate everything and separate development from the real machine, which can be a good thing in some cases... and as a plus it simulates a real server :)
2: In VirtualBox at least, you have shared folders: you can share a local folder and see it in the VM as a local folder (or as a Windows share; it depends). You can then use that "entry point" to symlink things into the VM and do what you need with the real files living on the real (host) machine.
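A minimal sketch of that setup, assuming a VM called "dev-vm", a host folder C:\Users\me\projects, and the Guest Additions installed in the guest:

    # on the host: attach the folder to the VM (VM powered off)
    VBoxManage sharedfolder add "dev-vm" --name projects --hostpath "C:\Users\me\projects" --automount
    # inside the Linux guest: mount it and symlink what you need
    sudo mkdir -p /mnt/projects
    sudo mount -t vboxsf projects /mnt/projects
    ln -s /mnt/projects/myapp ~/myapp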
SSH into a Linux box. This is what everyone does. Why isn't this the answer?
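A minimal example, with hypothetical names; any Linux box or VM reachable over the network will do:

    # open a shell on the Linux machine and work there
    ssh dev@linux-box
    # or copy a project over first
    scp -r ./myproject dev@linux-box:~/work/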
There is something I have heard of called Cooperative Linux. It runs Linux alongside the Windows kernel so you can use both at the same time. I've never used it, but here:
http://www.colinux.org/
What I think now is that the way to get the pros of both options is to use Docker, which gives you Cygwin's simplicity and a VM's functionality with better performance.
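For example, a throwaway Ubuntu shell with a Windows folder mounted into it is one docker run away; a sketch, assuming Docker Desktop is installed and using an example image and path:

    # start an interactive Ubuntu container with the project folder mounted at /work
    docker run -it --rm -v "C:\Users\me\projects:/work" -w /work ubuntu:22.04 bash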
Linux in a virtual machine will give you the experience you want more than Cygwin or any mock shell, as I like to call them.
Running VMs, though, requires a lot of RAM, depending on whether you want a desktop version of Linux or just the command line.
At work I have a PC with 8 GB of RAM and I run 64-bit Ubuntu as the main OS, two Ubuntu servers (dev environments for two different projects), a Windows 7 VM, and a Windows XP VM.
I can run the two Ubuntu servers and one other VM at the same time; the key is more RAM if you want to run several VMs.
If you're going to be working with Ruby, then get an Ubuntu virtual machine up and running :) I've not tried Ruby, etc. on Windows, but I have heard that it is a pain to set up and configure. I use a Mac for all my Rails development, so I cannot comment on the Windows side of that.
As for virtual machine creation, I prefer VMware Workstation, but there are free alternatives such as VirtualBox and VMware Server.
I'm using a Linux VM within a Windows 7 environment, as this VM is as representative as possible of the final production environment. The whole setup is bound to the Eclipse IDE under Windows 7, so it is really great for full local testing before committing or tagging the tested version to the production servers.
As you mentioned, this takes some time to set up and fully configure. So if you only need it for small tricks or tasks, you may want to keep using Cygwin. For example, I faced significant issues configuring Perl and compiling MySQL within Cygwin. It's OK for basic usage, but not for taking full advantage of a complete Linux environment.
Your choice strongly depends on the purpose of the final setup. A VM will do the job whatever your need is, but its setup cost is higher, so that time investment needs to be used often to pay off.

Building Linux kernels

I just got the book Linux Kernel Development by Robert Love. It has lots of places where you are required to modify and build the kernel. So how should I go about it? Is it better to use a VM, or should I somehow get a proper test machine, since I don't want to mess up my system and data?
A VM has the advantage of offering snapshots. These allow you to save the state of the machine: if the kernel build doesn't work, you simply restore the snapshot, and you can take as many snapshots as you have disk space to store. You can also clone and re-deploy the VM image, so you have many identical systems to test on.
The same experiment on a physical machine would require far greater effort (ghosting/cloning the disk, re-installing the OS, etc.).
VirtualBox is free, cross-platform virtualisation software.
There are a lot of tutorials on the web about this topic, e.g. here; the basic build sequence they cover is sketched after the links:
http://www.digitalhermit.com/linux/Kernel-Build-HOWTO.html
http://www.kernel.org/pub/linux/kernel/people/gregkh/lkn/lkn_pdf/ch04.pdf
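Inside the VM, the core build cycle those tutorials walk through is roughly the following (a sketch, run from the top of the kernel source tree):

    # choose kernel options interactively (or start from the distro's config in /boot)
    make menuconfig
    # build the kernel and modules on all CPU cores
    make -j"$(nproc)"
    # install the modules, then the kernel image and bootloader entry
    sudo make modules_install
    sudo make install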
You could do either or both. An alternative somewhere in between is to set up a dual boot. This is a little riskier than a VM, but not by much.
coLinux
Or run a Linux ISO image using QEMU on Windows.
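A minimal QEMU sketch with hypothetical file names: create a disk image, then boot the installer ISO against it:

    # create a 20 GB qcow2 disk for the guest
    qemu-img create -f qcow2 linux-dev.qcow2 20G
    # boot from the Linux ISO with 2 GB of RAM, installing onto that disk
    qemu-system-x86_64 -m 2048 -hda linux-dev.qcow2 -cdrom linux.iso -boot d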
