I have a Shodan account and am trying to get it to scan an IP and report the results. Unfortunately, the method reported in the documentation for doing this doesn't seem to work. Here's what I've been doing, using the Shodan CLI. All of these commands are being issued using the same API key.
Used the shodan scan submit command to initiate a scan of the desired IP.
Used shodan scan list to monitor the status of the scan I submitted. From the list entry, I can get the scan ID and its status. Wait until the scan status is "DONE".
According to the Shodan API documentation, the way to retrieve my results is by using shodan download <download_file_name> scan:<my_scan_id>. However, when I send that command I am informed it is downloading 0 results.
Searching the database with shodan search scan:<my_scan_id> also shows zero results.
I've looked through the documentation and there doesn't seem to be another way of getting results without a dedicated data pipe, which I can't use since I'm on the $50 lifetime level. So what's going on? Has the API changed? Does it take time for the results of on-demand scans to be incorporated into the database?
Thanks in advance to anyone who can offer some insight on this.
So after a late night with Shodan's API, I think I've figured this one out. There does not appear to be a way to download your scan results after the fact. They appear on the command line that launched the scan once it concludes. The only way to save them is to use scan submit --filename <your_file_name> <your_ip>. This also means you need to make sure the process that submitted the scan is still around to receive the result.
I can't account for the documentation saying you can use shodan download with your scan ID, but I've tried multiple times from the Python API and the Shodan CLI and it doesn't seem to work. Unless someone comes along to tell me differently, I'm assuming that functionality is not available.
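For reference, this is roughly the sequence I was trying through the official shodan Python library (the API key and target IP below are placeholders, and the call names are as I recall them from that package): submitting and polling the scan work fine, but the scan: search keeps coming back empty for me.

import time
import shodan

api = shodan.Shodan("YOUR_API_KEY")              # placeholder API key

scan = api.scan("198.51.100.7")                  # placeholder target IP; returns a dict with the scan id
while api.scan_status(scan["id"])["status"] != "DONE":
    time.sleep(30)                               # poll until Shodan reports the scan as finished

# This is the step that comes back empty for me, same as `shodan download ... scan:<id>`
results = api.search("scan:{}".format(scan["id"]))
print(results["total"], "results")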
Some time ago, I set up a Linux task to run speedtest-cli every 30 minutes to figure out a network issue. The task used the "--server ID" argument to get the speed to the same server each time. I used it for a while and then forgot about it. Today I went back to revisit this, only to find out that the API seems to have changed. Now the --list argument does not print a list of hundreds of servers, but only the few (~10) nearest you. In my case, the servers it reports seem to change at least daily. Requesting a speedtest against any server ID not reported in the list gives a failure. Has anyone figured out a way to get a periodic speedtest to a fixed server using speedtest-cli or any other tool?
If you are still looking for a solution, here is my suggestion.
While this does not use speedtest-cli (which is no longer supported; you should look at the Ookla Speedtest command line client instead), I believe this is what you are looking for. I'm running this in a Debian VM, but if you have access to an RPi you can dedicate to this task, you may want to check this out.
https://github.com/geerlingguy/internet-pi
You can modify the docker-compose file to hard-code the server ID of your choice. You can get the ID from the Ookla Speedtest command line client.
You would need to run the command:
speedtest -L
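If it helps, here is a rough sketch of how you could wrap that into a periodic probe pinned to one server (the server ID, log file name, and CLI flags are assumptions based on my install of the Ookla client; double-check them with speedtest --help):

#!/usr/bin/env python3
# Runs the Ookla speedtest CLI against a fixed server ID and appends the JSON result to a log.
import json
import subprocess
from datetime import datetime

SERVER_ID = "12345"                   # placeholder: use an ID printed by `speedtest -L`
LOG = "speedtest-results.jsonl"       # placeholder: one JSON record per line

out = subprocess.run(
    ["speedtest", "--server-id", SERVER_ID, "--format", "json", "--accept-license"],
    capture_output=True, text=True, check=True,
).stdout

record = json.loads(out)
record["logged_at"] = datetime.now().isoformat()

with open(LOG, "a") as f:
    f.write(json.dumps(record) + "\n")

Schedule it with cron (every 30 minutes, the way you were running speedtest-cli) and you get a comparable series against a single, fixed server.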
Good Luck!
I am creating a dashboard in Grafana with data from PNP4Nagios for problem resolution. One of the criteria is whether there is a connection to a certain service. I have a plugin that verifies this properly. The answer is either connected or not connected.
Is it possible to generate output that PNP4Nagios will understand, so I can add it to my dashboard?
I was looking for a status plugin for Grafana when I found this question.
PNP4Nagios only understands performance data, so, as stated by pzkpfw, you need to add it in your check script by appending a pipe after your message followed by a label=value pair. Then, if you want to display up/down or OK/warning/critical, there's the Vonage Status Panel.
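As an illustration, a check script in that shape could look roughly like this (the host, port, and the connection label are placeholders; the important bits are the text before the pipe, the label=value after it, and the Nagios exit code):

#!/usr/bin/env python3
# Hypothetical connectivity check that also emits perfdata PNP4Nagios can graph.
import socket
import sys

HOST, PORT = "service.example.com", 443   # placeholders for the service being checked

try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print("OK - connected | connection=1")        # message | label=value (perfdata)
        sys.exit(0)                                   # exit code 0 = OK
except OSError:
    print("CRITICAL - not connected | connection=0")
    sys.exit(2)                                       # exit code 2 = CRITICAL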
I know that this question seems similar to a lot of Stack Overflow questions, but this is not the same as the other questions.
Basically, I've got a PS script that uses the "AzSK" module to run something, and I used this command in a loop to add multiple properties to my Azure storage. On every step the command keeps asking me to confirm whether I want to continue (Y/N).
Because I use a loop with more than 40 iterations, I need to confirm every single time the command runs.
As many Stack Overflow questions and the internet told me, I tried adding -Force or -Confirm to my command to automatically answer yes to the read input. But that only applies to commands that have these parameters built in, and in the Get-Help -Detailed output for the command I didn't see either parameter available. So I was wondering if it is possible to create this auto "Y" reply even if the command does not expose a parameter for it.
The command I use is Get-AzSKAzureServicesSecurityStatus, and it adds attestation statuses to control IDs inside an Azure blob storage. The command only allows one attestation status to be added at a time, so I wrapped it inside a for loop, which makes my struggle with constant confirmations even worse.
Please try to use the format below:
cmd /c echo y | powershell "the command which will prompt"
I did a simple test which deletes a directory, and it works.
This may not be an answer to your query "if it was possible to create this auto "Y" reply even if the command does not allow any parameter for it."
But since you are trying it specifically for the attestation feature of the Secure DevOps Kit for Azure (AzSK), this might help:
The reason the confirmation message pops up for each control and does not allow a "Forced" yes is that:
Utmost discretion is to be used when attesting controls using the Secure DevOps Kit for Azure (AzSK). In particular, when choosing to not fix a failing control, you are taking accountability that nothing will go wrong even though security is not correctly/fully configured.
Ideally, the bulk attestation feature is meant to be used in case the same control needs to be attested across multiple resource instances/resource groups, and not vice versa. Refer to this for scenarios where this feature can be used (although not recommended).
Hope this helps!
Background:
I am moving a legacy app that was storing images and documents on local disk on the web server over to a PaaS Azure web app, using Azure File Storage to store and serve the files.
Question:
I have noticed that sometimes the URL for a file download fails the first time: either image links on a page are broken until I refresh, or a download fails the first time and then succeeds the next. I am guessing that this is due to some issue with how Azure File Storage works and that it hasn't started up or something. The only consistent thread I have observed is that this seems to happen once or twice in the morning when I am first working with it. I am guessing maybe my account has to ramp up or something, so it's not ready on the first go-round. I tried to come up with steps to reproduce, but I could not reproduce the symptom. If my hunch is correct, I will have to wait until tomorrow morning to try. I will post more detailed error information if/when I can.
// Build the share-relative path, then prepend the storage root URL and append the SAS token
var fullRelativePath = $"{_fileShareName}/{_fileRoot}/{relativePath}".Replace("//","/");
return $"{_fileStorageRootUrl}{fullRelativePath}{_fileStorageSharedAccessKey}";
Thanks!
So it's been a while, but I remember I was able to resolve this, so I'll be writing this from memory. To be able to access an image from File Storage via a URL, you need to use a SAS token. I already had one, which is why I was perplexed about this. I'm not sure if this is the ideal solution, but what I wound up doing was just appending some random characters to the end of the URL, after the SAS token, and that made it work. My guess is this somehow made the URL unique, which may have helped it bypass some caching mechanism that was behaving erratically.
I'll see if I can dig up a working example from my archive. If so, I'll append it to this answer.
I would really like to measure the connection speed between two specific sites. Naturally, one of the sites is ours. Somehow I need to prove that it is not our internet connection that is flaky, but that the site at the other end is overcrowded.
At our end I have Windows and Linux machines available for this.
I imagine I would run a script at certain times of day which, for example, tries to download an image from that site and measures the download time, then puts the download time into a database and creates a graph from the records in the database. (I know that this is really simple and not sophisticated enough, but hence my question.)
I need help on the time measurement.
The perceived speed differences are big: sometimes the application works flawlessly, but sometimes we get timed-out errors.
Now I use speedtest to check whether our internet connection is OK, but this does not show that the site that is not working is slow, so I can't provide hard numbers to support my case.
Maybe it is worth mentioning that the application we try to use at the other end is Java-based.
Here's how I would do it in Linux:
Use wget to download whatever URL you think represents your site best. Parse the output into a file (sed, awk) and use crontab to trigger the download multiple times.
wget www.google.com
...
2014-02-24 22:03:09 (1.26 MB/s) - 'index.html' saved [11251]
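If you want the numbers to land straight in a file you can graph, here is a minimal Python variant of the same idea (the URL and the CSV path are placeholders):

#!/usr/bin/env python3
# Minimal timing probe: downloads a test URL and appends duration/throughput to a CSV.
import csv
import time
import urllib.error
import urllib.request
from datetime import datetime

URL = "https://www.example.com/test-image.jpg"  # placeholder: a file hosted at the remote site
LOG = "site-speed.csv"                          # placeholder: where measurements accumulate

start = time.monotonic()
try:
    with urllib.request.urlopen(URL, timeout=30) as response:
        size = len(response.read())
    elapsed = time.monotonic() - start
    row = [datetime.now().isoformat(), round(elapsed, 3), size,
           round(size / 1024 / elapsed, 1)]              # timestamp, seconds, bytes, KB/s
except (urllib.error.URLError, OSError) as exc:
    row = [datetime.now().isoformat(), "ERROR", str(exc), ""]  # timeouts become ERROR rows

with open(LOG, "a", newline="") as f:
    csv.writer(f).writerow(row)

Run it from cron at the times of day you care about; timeouts show up as ERROR rows instead of crashing the probe, and the CSV gives you the hard numbers to graph.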