Uncommon checksum representation on an FTP site - linux

I'm sorry if I'm asking something basic. I've found a list of files with corresponding checksums at ftp://ftp.ensemblgenomes.org/pub/plants/release-39/fasta/helianthus_annuus/dna/CHECKSUMS.
But the checksum representation looks quite uncommon to me. Previously I have seen something like 58c1c7a730ba1e90169ab95efdc39e5c (I got this string as md5sum output for one of the files from the link). How can I convert the checksums from the link above to the regular format?

The output looks like it is from the sum or cksum command.
See the sum and cksum Wikipedia articles.
You cannot use this data to get back an md5sum or shasum.
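If the CHECKSUMS file was indeed generated with sum, you can reproduce an entry like this (the file name below is a placeholder for one of the files in that FTP directory):
# `sum` prints two numbers, a 16-bit checksum and a block count,
# which are the same two columns you see in the CHECKSUMS file.
sum somefile.fa.gz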

Related

What command to search for ID in .bz2 file?

I am new to Linux and I'm trying to look for an ID number within a .bz2 file. It seems like a fairly straightforward requirement, but I cannot find the correct command anywhere online. I believe I need to use bzgrep.
I want to look for '123456' in the file Bulk9876.bz2
How would I construct this command?
You probably just need to tell grep that it's okay to parse that data as text:
bzgrep -a 123456 Bulk9876.bz2
If you're trying to view the compressed data (rather than decompressing it and searching the decompressed data), just use grep -a ….
Otherwise, it might make sense to verify that the desired string is even present in the file; bunzip2 it and grep -a the decompressed file. If that works, the problem is in your bzgrep instance (which is odd because it should be using the same decompression library as bunzip2).
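A minimal sketch of that verification, using the file and pattern from the question:
# Stream-decompress and search without writing a decompressed copy to disk
bzcat Bulk9876.bz2 | grep -a 123456
# Or decompress to disk (keeping the original archive with -k) and search the result
bunzip2 -k Bulk9876.bz2
grep -a 123456 Bulk9876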

How to use GNU sort with numeric data in binary format?

Is there any way to use GNU Coreutils sort with 64-bit numbers stored in a binary file?
If the file weren't binary, sort -n would be the solution, but I couldn't find any option for using it with binary data.
The file is quite large (~100 GB), and if possible I don't want to make a text (non-binary) copy of it.
Sample of data:
$ xxd file
00292e0: 4036 1eb7 6888 d319 de6b 7402 9ca9 f116 #6..h....kt.....
00292f0: db68 7f05 199f 9d36 cf01 cb28 e49f 1116 .h.....6...(....
0029300: 0c7c 8b55 2963 ef0c 277a f2b0 38d7 2b19 .|.U)c..'z..8.+.
0029310: c83b 2614 4327 d838 820c 1bb8 444f 1731 .;&.C'.8....DO.1
0029320: 1695 cab3 cd12 092a 0691 d7e4 5fcc b01d .......*...._...
0029330: b12b 7c1b a209 7c1c 568a 125c 541c d334 .+|...|.V..\T..4
0029340: 09a3 ecbc 8370 e205 9265 7759 a378 4e2f .....p...ewY.xN/
The bsort utility does this.
It is a lightning-fast in-place radix sort written in C. One of the test cases during its development was a 100 GB file on a machine with 16 GB of RAM; it took about 22 seconds to sort.
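A rough usage sketch for 64-bit (8-byte) records; the -r (record size) and -k (key size) flag names are assumptions here, so check bsort's README or --help for the exact options in your build:
# bsort sorts in place, so work on a copy if you need to keep the unsorted original
cp file file.sorted
# Sort 8-byte records, keying on all 8 bytes of each record (flag names assumed; see above).
# Note: a byte-wise radix sort gives numeric order only if the integers are stored
# as unsigned big-endian values; little-endian data would need a byte swap first.
bsort -r 8 -k 8 file.sorted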
sort(1) will not help you here. For a small file it might be possible to split your file into lines and feed it to sort(1), but of course not for a 100 GB file.
The answer to this question on Serverfault links to a tool written to solve exactly this task. You can check out the GitHub project there (it seems to be written in Go, so you will need to install the compiler if you decide to use it).
A quick search does not turn up any other popular tool for this task written in a more mainstream language (which surprises me a bit, as the task itself is just a merge sort that thousands of students implement every year in their CS courses, but that's off-topic).

Perl: Find duplicates in Excel/CSV files and write an output file with them

First of all, I am a complete newbie (for now) with Perl, and I would like to ask you for some quick advice.
I have to deal with some lists of journals and publishers in different Excel/CSV files. I would like to find a way to cross-reference the data in order to get the list of titles & publishers common to two files, and a list of each publisher with the number of journals published.
I would like to ask whether it is possible to do this with Perl (from what I understand it should be the best method, but I would like confirmation!), and how advanced it is.
Sorry for the strange request, but I am writing my thesis and I would not like to spend time on something only to discover that it is not possible!
Thanks!
Yes.
To parse a CSV file:
Text::CSV_XS
To parse an Excel file:
Spreadsheet::ParseExcel
To find the common elements of two lists:
my %list1 = map { $_ => 1 } @list1;    # index the first list by value
my @common = grep $list1{$_}, @list2;  # keep items that also appear in the first list

Getting specific fields from ID3 tags using command line tool?

I'm looking for a way that would let me get specific fields from ID3 tags from mp3 files.
All tools I have so far found return all fields, and they also format them for "easier reading". I need just some fields, and formatted differently (artist\talbum\ttitle\n) for reporting purposes.
Is there any such tool? I would love tool that would let me output separately values from ID3v1 and ID3v2.
id3v2 -R sounds like it does what you want. The Debian package name is id3v2; upstream is http://id3v2.sourceforge.net/
From the manpage:
-R, --list-rfc822
Lists using an rfc822-style format for output
Example:
$ id3v2 -R 365-Days-Project-04-26-sprinkle-leland-w-the-great-stalacpipe-organ.mp3
Filename: 365-Days-Project-04-26-sprinkle-leland-w-the-great-stalacpipe-organ.mp3
TALB: Released independently through Luray Caverns
TPE1: Leland W. Sprinkle
TIT2: The Great Stalacpipe Organ
COMM: ()[eng]: © 2004, Copyright resides with the artist, The 365 Days Project, and UbuWeb (http://ubu.com) / PennSound (http://www.writing.upenn.edu/pennsound/). All materials at UbuWeb / PennSound are available for free exchange for noncommerical purposes.
365-Days-Project-04-26-sprinkle-leland-w-the-great-stalacpipe-organ.mp3: No ID3v1 tag
The easiest way is to create a bash script.
grep the fields returned by your tool so you get just the ones you want, then use awk (if you know how to use it), or cut, etc.
If you give us the format used by one of the tools you found, we can help you write it. The simpler the format, the simpler the script will be.
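For example, with the id3v2 -R output shown in the other answer, a rough sketch could look like this (the TPE1/TALB/TIT2 frame names are taken from that example output; adjust if your files use different frames):
#!/bin/sh
# Print "artist<TAB>album<TAB>title" for every MP3 in the current directory,
# pulling the values out of id3v2 -R output with awk.
for f in *.mp3; do
  id3v2 -R "$f" | awk '
    /^TPE1: / { artist = substr($0, 7) }
    /^TALB: / { album  = substr($0, 7) }
    /^TIT2: / { title  = substr($0, 7) }
    END       { printf "%s\t%s\t%s\n", artist, album, title }'
done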

Compare two websites and see if they are "equal"?

We are migrating web servers, and it would be nice to have an automated way to check some of the basic site structure to see if the rendered pages are the same on the new server as the old server. I was just wondering if anyone knew of anything to assist in this task?
Get the formatted output of both sites (here we use w3m, but lynx can also work):
w3m -dump http://google.com 2>/dev/null > /tmp/1.html
w3m -dump http://google.de 2>/dev/null > /tmp/2.html
Then use wdiff, it can give you a percentage of how similar the two texts are.
wdiff -nis /tmp/1.html /tmp/2.html
It can be also easier to see the differences using colordiff.
wdiff -nis /tmp/1.html /tmp/2.html | colordiff
Excerpt of output:
Web Images Vidéos Maps [-Actualités-] Livres {+Traduction+} Gmail plus »
[-iGoogle |-]
Paramètres | Connexion
Google [hp1] [hp2]
[hp3] [-Français-] {+Deutschland+}
[ ] Recherche
avancéeOutils
[Recherche Google][J'ai de la chance] linguistiques
/tmp/1.html: 43 words 39 90% common 3 6% deleted 1 2% changed
/tmp/2.html: 49 words 39 79% common 9 18% inserted 1 2% changed
(Google actually served google.com in French... funny)
The common % values show how similar the two texts are, and you can easily see the differences word by word (instead of line by line, which can be cluttered).
The catch is how to check the 'rendered' pages. If the pages don't have any dynamic content, the easiest way is to generate hashes for the files with the md5sum or sha1sum commands and check them against the new server.
If the pages have dynamic content, you will have to download the site using a tool like wget
wget --mirror http://thewebsite/thepages
and then use diff as suggested by Warner, or do the hash thing again. I think diff may be the best way to go, since even a change of one character will mess up the hash.
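A minimal sketch of the hash comparison (old/ and new/ are placeholders for the two downloaded copies of the site):
# Hash every file on each side, sort by path so the listings line up, then diff them
( cd old && find . -type f -exec md5sum {} + | sort -k 2 ) > old.md5
( cd new && find . -type f -exec md5sum {} + | sort -k 2 ) > new.md5
diff old.md5 new.md5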
I've created the following PHP code that does what Weboide suggests here. Thanks, Weboide!
The paste is here:
http://pastebin.com/0V7sVNEq
Using the open source tool recheck-web (https://github.com/retest/recheck-web), there are two possibilities:
Create a Selenium test that checks all of your URLs on the old server, creating Golden Masters. Then run that test against the new server and see how they differ.
Use the free and open-source Chrome extension (https://github.com/retest/recheck-web-chrome-extension), which internally uses recheck-web to do the same: https://chrome.google.com/webstore/detail/recheck-web-demo/ifbcdobnjihilgldbjeomakdaejhplii
For both solutions you currently need to list all relevant URLs manually. In most situations this shouldn't be a big problem. recheck-web will compare the rendered websites and show you exactly where they differ (e.g. different font, different meta tags, even different link URLs), and it gives you powerful filters to let you focus on what is relevant to you.
Disclaimer: I have helped create recheck-web.
Copy the files to the same server in /tmp/directory1 and /tmp/directory2 and run the following command:
diff -r /tmp/directory1 /tmp/directory2
For all intents and purposes, you can put them in your preferred location with your preferred naming convention.
Edit 1
You could potentially use lynx -dump or wget and run a diff on the results.
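A minimal sketch of that approach (the URLs are placeholders for the same page on the old and new servers):
# Dump the rendered text of one page from each server and compare them
lynx -dump http://old.example.com/page.html > old.txt
lynx -dump http://new.example.com/page.html > new.txt
diff -u old.txt new.txt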
Short of rendering each page, taking screen captures, and comparing those screenshots, I don't think it's possible to compare the rendered pages.
However, it is certainly possible to compare the downloaded website after downloading recursively with wget.
wget [option]... [URL]...
-m
--mirror
Turn on options suitable for mirroring. This option turns on recursion and time-stamping, sets infinite recursion depth and keeps FTP directory listings. It is currently equivalent to -r -N -l inf --no-remove-listing.
The next step would then be to do the recursive diff that Warner recommended.
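Putting both steps together, a minimal sketch (the host names are placeholders for the old and new servers):
# Mirror each server into its own directory, then compare the trees recursively
wget --mirror -P old-site http://old.example.com/
wget --mirror -P new-site http://new.example.com/
diff -r old-site/old.example.com new-site/new.example.com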
