OpenVAS scan not working - Kali Linux

Currently having an issue with my Kali VM.
I installed OpenVAS, but when I try to initiate a scan with the admin user I get the following error message:
Operation: Run Wizard
Status Code: 400
Status message: Service temporarily down
Has anyone experienced this before? I searched online and couldn't really find any solutions.
Thanks for your time.

I used the Virtual Appliance (Virtual Machine), getting error 400 first and later 503 with "Service temporarily down" when trying to scan.
Solution that worked was found at http://openvas-discuss.wald.intevation.narkive.com/TTK5H8YI/openvas-8-virtual-appliance-unable-to-start-task and can be found at http://plugins.openvas.org/ova_503.txt .
It comes down to stopping the services, rebuilding the certificates, updating the config, and then restarting the services. It is a 12-step process that takes several minutes (maybe 30). I will not copy/paste it here; the second link above contains all the steps.
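For orientation, the procedure boils down to something like the sketch below. This is an assumption-laden outline based on typical OpenVAS 8 tooling, not a substitute for the 12 steps in the ova_503.txt link above, which remains authoritative:

```shell
# Sketch only -- service names and flags assumed from an OpenVAS 8 setup;
# follow the ova_503.txt link above for the exact, complete steps.
service openvas-manager stop
service openvas-scanner stop
openvas-mkcert -f               # regenerate the server certificate
openvas-mkcert-client -n -i     # regenerate the client certificate
openvasmd --rebuild             # rebuild the manager database (this is the slow part)
service openvas-scanner start
service openvas-manager start
```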

Related

Azure VM won't boot - Error is 'Fatal error C0000034 applying update operation 63 of 82641'

VM is set to start every morning at 8am. This morning I got the following error:
'Fatal error C0000034 applying update operation 63 of 82641' in the Boot Diagnostics section in the VM Console
Every previous occurrence I found by googling the error related not to an Azure VM but to a standalone laptop. All of these suggest booting from a different partition or rescue disk, which is not possible in my case.
Tried re-starting the VM
Redeploying the VM
Resizing the VM
Whatever I try I still can't RDP to the VM.
I can't restore the C: drive as I can't connect to the VM to do it.
Any ideas how I can recover from this or rescue the VM ? All greatly appreciated.
Thanks,
Dan.
I've managed now to resolve this, so I will post the solution here for anyone else.
Found the error in the Microsoft Docs here:
https://learn.microsoft.com/en-gb/troubleshoot/azure/virtual-machines/troubleshoot-stuck-updating-boot-error?WT.mc_id=Portal-Microsoft_Azure_Support
This advised taking a copy of the OS disk and attaching it as a data disk to a 'Rescue VM'.
Running the following command against the attached disk (in an elevated PowerShell prompt on the rescue VM, where <drive> is the letter of the attached OS disk):
dism /image:<drive>:\ /get-packages > c:\temp\Patch_level.txt
Open the file, scroll to the bottom, and look for updates that are Install Pending or Uninstall Pending.
Then run the following command, where <package-identity> is the Package Identity value from the text file:
dism /image:<drive>:\ /remove-package /packagename:<package-identity>
Detach the now-repaired OS disk from the rescue VM and attach it back to the original VM.
Start the VM - it may take a while to start.
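The "scroll to the bottom and look for pending updates" step can be scripted. Below is a sketch using grep on an invented excerpt of dism output (the package names are made up for illustration; a real Patch_level.txt is much longer):

```shell
# Invented excerpt of `dism /get-packages` output, for illustration only
cat > /tmp/Patch_level.txt <<'EOF'
Package Identity : Package_for_KB1111111~31bf3856ad364e35~amd64~~1.0.0.0
State : Installed

Package Identity : Package_for_RollupFix~31bf3856ad364e35~amd64~~14393.0.0.0
State : Install Pending
EOF

# Print the Package Identity line for each entry whose State is Pending
grep -B1 "Pending" /tmp/Patch_level.txt | grep "Package Identity"
```

The identity it prints is what you would pass to dism /remove-package.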
With the oncoming tide of Managed Disks, I don't suppose there will be much call for this solution, but it's here if anyone needs it.

Azure Virtual Machine Crashing every 2-3 hours

We've got a classic VM on Azure. All it's doing is running SQL Server with a lot of DBs (we've got another VM, a web-facing web server, which accesses the classic SQL VM for data).
The problem is that since yesterday morning we have been experiencing outages every 2-3 hours. There doesn't seem to be any reason for it. We've been working with Azure support, but they still seem to be struggling to work out what the issue is. There doesn't seem to be anything in the event logs that gives us any information.
All that happens is that we receive a Pingdom alert saying the box is down; we then can't remote into it (it times out) and all database calls to it fail. Five minutes later it comes back up. It doesn't seem to fully reboot or anything, it just halts.
Any ideas on what this could be caused by? Or any places that we could look for better info? Or ways to patch this from happening?
The only thing that seems to be in the event logs that occurs around the same time is a DNS Client Event "Name resolution for the name [DNSName] timed out after none of the configured DNS servers responded."
Quickest recovery:
Check SQL Server by connecting inside the VM using localhost or 127.0.0.1\InstanceName. If you can connect to SQL Server internally without any issue, capture a snapshot of the SQL Server VM and create a new VM from the captured image (i.e., without losing any data).
This issue may be caused by the following:
Azure network firewall
Windows Server updates
This ended up being a fault with the node/sector that our VM was on. I fixed it by enlarging our VM instance (4 cores to 8 cores), which forced Azure to move it to another node/sector, and this rectified the issue.

Azure Server Inaccessible

One of my 10 Azure VMs running Windows has suddenly become inaccessible! The Azure Management Console shows the state of this VM as "running", but the Dashboard shows no server activity since my last RDP logout 16 hours ago. I tried restarting the instance with no success; it's still inaccessible (no RDP access, hosted application down, unsuccessful ping...).
I changed the instance size from A5 to A6 using the management portal and everything went back to normal. The Windows event viewer showed no errors except the unexpected shutdown today after my instance size change. Nothing was logged between my RDP logout yesterday and the system startup today after changing the size.
I can't afford having the server down for 16 hours! Luckily this server was the development server.
How can I know what went wrong? Anyone faced a similar issue with Azure?
Thanks
There is no easy way to troubleshoot this without capturing it in a stuck state.
Sounds like you followed the recommended steps, i.e.:
- Check VM is running (portal/PowerShell/CLI).
- Check endpoints are valid.
- Restart the VM.
- Force a redeployment by changing the instance size.
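The steps above map roughly onto the Azure CLI calls below. This is a sketch using the modern CLI (the classic portal era predates it), and MyRG, MyVM, and the target size are placeholders:

```shell
# Placeholders: MyRG / MyVM / Standard_A6
az vm get-instance-view -g MyRG -n MyVM \
    --query "instanceView.statuses[].displayStatus" -o tsv   # check the VM's reported state
az vm restart -g MyRG -n MyVM                                # restart the VM
az vm resize -g MyRG -n MyVM --size Standard_A6              # a size change forces a move to a new host
az vm redeploy -g MyRG -n MyVM                               # or redeploy without changing size
```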
To understand why it happened it would be necessary to leave it in a stuck state and open a support case to investigate.
There is work underway to make both self-service diagnosis and redeployment easier for situations like this.
Apparently nothing was wrong! After the reboot the machine was installing updates to complete the reboot. When I panicked, I rebooted it again, stopped it, started it again, and even changed its configuration, thinking it was dead, while in fact it was only installing updates.
Too bad that we cannot disable the automatic reboot or estimate how long it takes to complete.

Cannot RDP to Azure VM after sysprep

I followed the instructions listed here to capture an image of my Azure VM:
Now I am unable to RDP to the VM - I get the generic message "Remote Desktop can't connect to the remote computer for one of these reasons: 1, 2, 3, etc."
The VM I'm trying to connect to is: teamsitepoc.cloudapp.net:59207
Here's what I've tried:
I have checked that it's started.
Tried re-sizing to extra small then back to small.
Attached the disk that was captured, giving the following:
Could anyone please advise what else I can try to troubleshoot?
It is entirely possible that you encountered the shutdown bug listed at the top of the page you link to.
Unfortunately, rather than updating the documentation, all they did was add a warning to the top of the page and leave the incorrect instructions in the actual steps, so many other people will likely encounter the same issue.
The workaround is available here: Image capture issue / VM unexpectedly started after guest-initiated shutdown
I also had this problem: pings went through from the VMs, but no RDP port was open.
Then I realized that Windows was still updating!

The server encountered an error while retrieving metrics - No dashboard metrics in Azure Ubuntu VM

I'm getting this error: "The server encountered an error while retrieving metrics. Retry the operation." in the dashboard and no Usage overview stats displayed after I've installed and removed a squid proxy server inside an Azure Ubuntu 12.04 server VM.
Anyone know any way to restore them?
I don't think this is related to anything that you've done; I think MS is having some issues with metrics, as I'm getting the same message on instances that I haven't changed.
If it's important to you, I would log an issue with MS support.
I have exactly the same error as you. My service instances ran smoothly before this error showed up, and they seem to still be running smoothly even with the error; it's just that I cannot get statistics about the instances. It's kind of understandable, since Azure is only a CTP now, but I really hope MS will fix this soon.