Does a vulners nmap scan use CVSS v2 or v3 ratings?

nmap -oN scan.txt -sV -sC --script=vulners -iL ips.txt
I am scanning with nmap, using vulners as my NSE script.
The resulting scan.txt flags the vulnerabilities it finds along with their corresponding CVSS ratings.
When I check each of these vulnerabilities, I see that the script is reporting CVSS v2 ratings rather than CVSS v3.
I thought that, according to NVD, CVSS v1 and v2 are outdated:
https://nvd.nist.gov/vuln-metrics/cvss
Any explanation as to why would be much appreciated.
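One way to check the CVSS v3 rating for a CVE that vulners flags is to look it up in NVD directly. Here is a minimal sketch, assuming the NVD CVE API 2.0 endpoint and its usual response field names (cvssMetricV31 / cvssMetricV30); adjust if the API layout differs:

# Sketch: look up the CVSS v3 base score for a CVE reported by the vulners script.
# Assumes the NVD CVE API 2.0 endpoint and response field names; adjust if they differ.
import requests

def cvss_v3_base_score(cve_id):
    url = "https://services.nvd.nist.gov/rest/json/cves/2.0"
    data = requests.get(url, params={"cveId": cve_id}, timeout=30).json()
    for vuln in data.get("vulnerabilities", []):
        metrics = vuln.get("cve", {}).get("metrics", {})
        # Prefer v3.1, fall back to v3.0; older entries may have only a v2 score.
        for key in ("cvssMetricV31", "cvssMetricV30"):
            for m in metrics.get(key, []):
                return m["cvssData"]["baseScore"]
    return None  # no CVSS v3 data published for this CVE

print(cvss_v3_base_score("CVE-2014-0160"))  # e.g. a CVE that shows up in scan.txt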

Related

sha256sum and hashalot produce different values on Linux

I've suddenly discovered that two different SHA-256 calculators produce different values. Here is a real example: after downloading a Neovim AppImage, at first I didn't understand what was going on:
> cat nvim.appimage | sha256sum
ef9056e05ef6a4c1d0cdb8b21f79261703122c0fd31f23f782158d326fdadbf5 -
> cat nvim.appimage | hashalot -x sha256
ced1af6d51438341a0335cc00e1c2867fb718a537c1173cf210070a6b1cdf40a
The correct result is the one sha256sum gives; it matches the value on the official page. Did I do anything wrong? And how can I avoid such unexpected effects in the future?
The operating system is Linux Mint 19 Cinnamon.
Thanks to user17732522 (https://stackoverflow.com/users/17732522/user17732522)
I had forgotten the name of the proper program, and after I typed the wrong name, the shell suggested installing hashalot to calculate the sum. I did, but read only the first lines of the man page. If I had looked deeper, I wouldn't have been left wondering and wouldn't have asked the question. The answer turned out to be quite simple. Thanks a lot!
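For reference, a tool-independent way to verify a download is to compute the digest yourself. Below is a minimal sketch using Python's hashlib, reading the file in binary chunks; the filename is the one from the question:

# Sketch: compute the SHA-256 of a downloaded file, independent of sha256sum/hashalot.
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:  # read as raw bytes, no text conversion
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

print(sha256_of_file("nvim.appimage"))  # should match the checksum on the release page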

Convert pdf to Tiff with same quality

We are using the following shell command to convert a PDF attachment to TIFF, but we are having some issues with quality. Can you please check the command below and let us know how we can improve the quality, and also compress the file size as much as possible, during the conversion?
shell_exec('/usr/bin/gs -q -sDEVICE=tiffg4 -r204x392 -dBATCH -dPDFFitPage -dNOPAUSE -sOutputFile=america_out7.tif america_test.pdf');
We have tried the following command and the quality seems better:
gs -q -r1233x1754 -dFitPage -sPAPERSIZE=a4 -dFIXEDMEDIA -dPDFSETTINGS=/ebook -dNOPAUSE -dBATCH -sDEVICE=tiffg4 - -sOutputFile=america_out7.tif america_test.pdf
But when we send the fax via FreeSWITCH, we get the error below:
"Fax processing not successful - result (11) Far end cannot receive at the resolution of the image. "
So we need your help to resolve this issue; please suggest any other approach as well.
We are awaiting a response on this.
Fax machines only support certain resolutions. If you produce a Group 3 or Group 4 CCITT-compressed TIFF, many fax applications can read that, extract the compressed image data, and send it directly to a compatible fax machine.
"Standard" is 204x98, "fine" is 204x196, "superfine" is 400x391.
You've chosen a resolution of 1233x1754. That's 3 times higher than any fax specification I know of supports, so of course the receiving fax machine can't cope with it. Note that no fax standard (unless there's been a new one, which seems unlikely) supports 600x600 either, though it's entirely possible that specific manufacturers support such a thing between their own equipment.
Naturally, the higher the resolution, the better the quality of your rendered output will be, which is why your higher-resolution command looks better.
Everyone wants the magical goal of "better quality and lower filesize" but there is no such thing. This is always a tradeoff.
You will probably find that using superfine resolution (400x391) will give you better quality, at the cost of larger file sizes. You can't go higher than that with ordinary fax.
Note that the PDFSETTINGS switch has no effect except with the pdfwrite device, which is used to create PDF files, not read them.
This is also off-topic for Stack Overflow, since this is not a programming question. Not even a little bit, and Stack Overflow is not a generalised support forum.
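For completeness, here is a minimal sketch of the suggested fix driven from Python rather than PHP's shell_exec; the filenames are the ones from the question, and the key change is rendering at a resolution fax equipment actually supports:

# Sketch: render a PDF to a Group 4 TIFF at a standard fax resolution.
# 204x196 ("fine") is widely accepted; 400x391 ("superfine") gives better quality and
# larger files, provided the receiving end supports it.
import subprocess

subprocess.run([
    "/usr/bin/gs", "-q", "-dNOPAUSE", "-dBATCH",
    "-sDEVICE=tiffg4",      # CCITT Group 4 compressed TIFF
    "-r204x196",            # "fine" fax resolution; try -r400x391 for superfine
    "-dPDFFitPage",         # scale the page to the output media
    "-sOutputFile=america_out7.tif",
    "america_test.pdf",
], check=True)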

How to index text files to improve grep time

I have a large number of text files I need to grep through on a regular basis.
There are ~230,000 files amounting to around 15GB of data.
I've read the following threads:
How to use grep efficiently?
How to use grep with large (millions) number of files to search for string and get result in few minutes
The machine I'll be grepping on is an Intel Core i3 (i.e. dual-core), so I can't parallelize to any great extent. The machine is running Ubuntu and I'd prefer to do everything via the command line.
Instead of running a bog-standard grep each time, is there any way I can either index or tag the contents of the text files to improve searching?
To search a large number of files for text patterns, qgrep uses indexing. See the article on why and how: https://zeux.io/2019/04/20/qgrep-internals
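To illustrate the general idea behind index-assisted search (a toy sketch only, not how qgrep itself is implemented): build a word-to-files index once, then restrict the expensive scan to the candidate files the index returns.

# Toy sketch of index-assisted search: index once, then search only candidate files.
# Not how qgrep is implemented; see the linked article for the real design.
import os
import re
import pickle
from collections import defaultdict

WORD = re.compile(r"\w+")

def build_index(root, index_path="index.pkl"):
    index = defaultdict(set)  # word -> set of files containing it
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    for word in WORD.findall(f.read()):
                        index[word.lower()].add(path)
            except OSError:
                continue
    with open(index_path, "wb") as f:
        pickle.dump(dict(index), f)

def candidates(word, index_path="index.pkl"):
    with open(index_path, "rb") as f:
        index = pickle.load(f)
    return sorted(index.get(word.lower(), ()))

# build_index("/path/to/textfiles")  # slow, done once (or whenever the files change)
# print(candidates("error"))         # fast lookup; run grep over just these files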
Alternatively, perhaps try modern multi-threaded grep tools such as the new ugrep or ag (a.k.a. The Silver Searcher). Note that the ag bug list on GitHub shows that the most recent ag 2.2.0 may run slower with multiple threads, which I assume will be fixed in a future update.
Have you tried ag as a replacement for grep? It should be in the Ubuntu repositories. I had a similar problem to yours, and ag is really much faster than grep for most regex searches. There are some differences in syntax and features, but those would only matter if you had special grep-specific needs.

What is * in AT command?

I came across this kind of line in a proprietary use of AT commands:
AT*REF=1,290717696<LF>
It is a proprietary command as it is used in a protocol to control a robot.
According to what I read on Wikipedia and other sources, AT command extensions should use "\" or "%". There is no mention of "*".
So what does * define?
There are more than two different characters in use by various manufacturers when implementing the first character of a proprietary AT command. Some that I've seen:
! # # $ % ^ * _
The manufacturer of your device may have chosen '*' for commands that have similar functionality, or they may have chosen to implement ALL of their proprietary AT commands with '*' as the first character.
There are many AT command references available online in PDF format from many different manufacturers. Perhaps the manufacturer of your device makes this information available as well.
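If you also need to handle such lines in code, they follow the same AT<marker><NAME>=<params> shape regardless of which marker character a manufacturer picked. A small sketch of a parser, assuming comma-separated parameters as in your example:

# Sketch: split a proprietary AT command line like "AT*REF=1,290717696" into its parts.
# The accepted marker characters below are just the ones mentioned in this thread.
import re

AT_LINE = re.compile(r"^AT([!#$%^*_\\])?([A-Z]+)(?:=(.*))?$")

def parse_at(line):
    m = AT_LINE.match(line.strip())
    if not m:
        raise ValueError("not an AT command: " + line)
    marker, name, params = m.groups()
    values = params.split(",") if params else []
    return marker, name, values

print(parse_at("AT*REF=1,290717696"))  # ('*', 'REF', ['1', '290717696'])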

Wikipedia text download

I am looking to download full Wikipedia text for my college project. Do I have to write my own spider to download this or is there a public dataset of Wikipedia available online?
To give you some overview of my project: I want to find the interesting words of a few articles I am interested in. To find these interesting words, I am planning to apply tf/idf to calculate the term frequency for each word and pick the ones with high frequency. But to calculate the tf, I need to know the total occurrences across the whole of Wikipedia.
How can this be done?
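Once you have article text (from a dump or the API, as described in the answers below), the tf/idf computation itself is small. Here is a rough sketch over in-memory strings; for Wikipedia-wide statistics, the document-frequency counts would have to come from the dump rather than from this toy corpus:

# Rough sketch of tf-idf over a handful of plain-text documents.
import math
import re
from collections import Counter

WORD = re.compile(r"[a-z]+")

def tf_idf(documents):
    tokenized = [WORD.findall(d.lower()) for d in documents]
    doc_freq = Counter()  # in how many documents each word appears
    for tokens in tokenized:
        doc_freq.update(set(tokens))
    n_docs = len(documents)
    scores = []
    for tokens in tokenized:
        counts = Counter(tokens)
        total = len(tokens) or 1
        scores.append({w: (c / total) * math.log(n_docs / doc_freq[w])  # tf * idf
                       for w, c in counts.items()})
    return scores

docs = ["the cat sat on the mat", "the dog chased the cat", "quantum chromodynamics"]
for ranking in tf_idf(docs):
    print(sorted(ranking.items(), key=lambda kv: kv[1], reverse=True)[:3])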
From Wikipedia: http://en.wikipedia.org/wiki/Wikipedia_database
Wikipedia offers free copies of all available content to interested users. These databases can be used for mirroring, personal use, informal backups, offline use or database queries (such as for Wikipedia:Maintenance). All text content is multi-licensed under the Creative Commons Attribution-ShareAlike 3.0 License (CC-BY-SA) and the GNU Free Documentation License (GFDL). Images and other files are available under different terms, as detailed on their description pages. For our advice about complying with these licenses, see Wikipedia:Copyrights.
Seems that you are in luck too. From the dump section:
As of 12 March 2010, the latest complete dump of the English-language Wikipedia can be found at http://download.wikimedia.org/enwiki/20100130/ This is the first complete dump of the English-language Wikipedia to have been created since 2008.
Please note that more recent dumps (such as the 20100312 dump) are incomplete.
So the data is only 9 days old :)
If you need a text only version, not a Mediawiki XML, then you can download it here:
http://kopiwiki.dsd.sztaki.hu/
Considering the size of the dump, you would probably be better served by using word frequencies for the English language in general, or by using the MediaWiki API to poll pages at random (or the most-consulted pages). There are frameworks for building bots based on this API (in Ruby, C#, ...) that can help you.
http://en.wikipedia.org/wiki/Wikipedia_database#Latest_complete_dump_of_english_wikipedia
See http://en.wikipedia.org/wiki/Wikipedia_database
All the latest Wikipedia datasets can be downloaded from Wikimedia.
Just make sure to click on the latest available date.
Use this script
# Example API call:
# https://en.wikipedia.org/w/api.php?action=query&prop=extracts&pageids=18630637&inprop=url&format=json
# Takes the first and last page IDs to fetch as command-line arguments.
import os
import sys
import requests

os.makedirs("wikipedia", exist_ok=True)  # output directory for the downloaded JSON
for i in range(int(sys.argv[1]), int(sys.argv[2])):
    print("[wikipedia] getting source - id " + str(i))
    text = requests.get("https://en.wikipedia.org/w/api.php?action=query&prop=extracts"
                        "&pageids=" + str(i) + "&inprop=url&format=json").text
    print("[wikipedia] putting into file - id " + str(i))
    with open("wikipedia/" + str(i) + "--id.json", "w+") as f:
        f.write(text)  # write the raw JSON response as-is
    print("[wikipedia] archived - id " + str(i))
1 to 1062 is at https://costlyyawningassembly.mkcodes.repl.co/.
