I have 200 files in my folder that I would like to sort by name. Here is a short list of my files according to the way that my Linux computer sorted them by default.
Name at 0.2142.png
Name at 0.4284.png
Name at 0.04284.png
Name at 0.6426.png
Name at 0.8568.png
Name at 0.08568.png
...
I would like to re-sort them by name from lowest to highest, so my list would become
Name at 0.04284.png
Name at 0.08568.png
...
Name at 0.2142.png
Name at 0.4284.png
Name at 0.8568.png
...
(I put ellipses in the middle of my list because there are many more files between Name at 0.08568.png and Name at 0.2142.png.)
I'm still new to writing bash scripts, but I would think I could write one to sort my figures.
Thanks
Addendum:
Since the file names are so long, I abbreviated them as Name at...
The real file name is Planetary Vorticity Tilting by Radial Velocity at t = with a number at the end. I hope you can understand why I didn't put that in the original question. I'm also pretty sure I said in the comments below that I am new to Linux and Ubuntu; I do not know "how" the file listing is created. I have MATLAB code that makes PNG files from my CFD data, using the name I provided above plus the appropriate time step. When I open the folder on the Linux computer, the files are automatically sorted in the way I listed above, and I have no idea why.
If there is any other clarification needed I would be happy to put it here.
Given:
$ echo "$files"
Name at 0.2142.png
Name at 0.4284.png
Name at 0.04284.png
Name at 0.6426.png
Name at 0.8568.png
Name at 0.08568.png
Name at 1.11.png
You can do:
$ echo "$files" | sort -t. -k1
Name at 0.04284.png
Name at 0.08568.png
Name at 0.2142.png
Name at 0.4284.png
Name at 0.6426.png
Name at 0.8568.png
Name at 1.11.png
That assumes the prefix Name at is the same in all cases.
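If you want to sort strictly by the numeric value at the end of each name, regardless of the prefix, one option is to extract the number first and sort on that. A minimal, untested sketch, assuming every file name ends in "<number>.png":
for f in *.png; do
    n=${f##*[ =]}                        # keep only what follows the last space or '='
    printf '%s\t%s\n' "${n%.png}" "$f"   # print "number <TAB> original name"
done | sort -n | cut -f2-                # numeric sort on the number, then drop that column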
Related
I have 3 folders on my server.
Assuming the folder names are:
workbook_20220217
workbook_20220407
workbook_20220105
Each folder consists of its respective files.
I only want to print the latest folder based on the date in its name. There are two methods I have tried so far.
The first method I tried:
Variable declared:
TABLEAU_REPORTING_FOLDER=/farid/reporting/workbook
#First Method
ls $TABLEAU_REPORTING_FOLDER *_* | sort -t_ -n -k2 | sed ':0 N;s/\n/, /;t0'
#The first method returns all of the folder's other contents as well
#The second method I have tried
$(ls -td ${TABLEAU_REPORTING_FOLDER}/workbook/* | head -1)
# This returns the first folder of a listing sorted by modification time
The target output should be workbook_20220407.
What is the best approach I should look into? I cannot think of any other logic than using the date in the folder name, taking the biggest value as the latest date.
PS: I cannot rely on the folders' modification dates, because once the folders have been transferred to my server, all 3 folders end up with the same date.
UPDATE
I found a way to get the latest folder based on filename based on this reference : https://www.unix.com/shell-programming-and-scripting/174140-how-sort-files-based-file-name-having-numbers.html
ls | sort -t'-' -nk2.3 | tail -1
This returns the latest folder based on the folder title. Will this be safe to use?
Also, what do -n and -k2.3 do and mean?
You can list the files in a directory in reverse order with the option -r (independently of which sort order you have selected). See the man page of the ls(1) command for details.
The options -n and -k2.3 of the sort(1) command mean, respectively (see the sort(1) man page for details):
-n: sort numerically, i.e. the keys are treated as numbers and compared accordingly.
-k2.3: start the sort key at the third character of the second field and run to the end of the line (note that -k2,3, with a comma, would instead mean a key spanning fields 2 through 3).
Read the man pages of both commands; they are your friends.
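As a concrete illustration, assuming the three workbook_* directories sit directly under $TABLEAU_REPORTING_FOLDER and that nothing else in the path contains an underscore, sorting numerically on the part after the underscore and taking the last line yields the newest folder:
$ ls -d "$TABLEAU_REPORTING_FOLDER"/workbook_* | sort -t_ -n -k2 | tail -1
/farid/reporting/workbook/workbook_20220407
Because the dates are zero-padded YYYYMMDD, a plain sort (without -n) would order them the same way.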
I am looking to rename a bunch of files according to the names found in a separate list. Here is the situation:
Files:
file_0001.txt
file_0102.txt
file_ab42.txt
I want to change the names of these files according to a list of corresponding names that looks like:
0001 abc.01
0102 abc.02
ab42 def.01
I want to replace, for each file, the part of the name present in the first column of my list by the part in the second column:
file_0001.txt -> file_abc.01.txt
file_0102.txt -> file_abc.02.txt
file_ab42.txt -> file_def.01.txt
I looked into several mv, rename, and similar commands, but I only found ways to batch-rename files according to a single pattern in the file name, not to match the changes against a list.
Does anyone have an example of a script that I could use to do that?
while read -r a b; do mv "file_$a.txt" "file_$b.txt"; done < listfile
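If you want something slightly more defensive than the one-liner, here is an untested sketch of the same idea; it assumes the list is in a file called listfile with two whitespace-separated columns (old part, then new part):
while read -r old new; do
    [ -e "file_$old.txt" ] || continue      # skip list entries that have no matching file
    mv -- "file_$old.txt" "file_$new.txt"   # e.g. file_0001.txt -> file_abc.01.txt
done < listfile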
Hi, when I try to download a file from the mainframe using Attachmate EXTRA!, it appends the username to the file name, and I don't know where to turn that off.
For example, if the file name is yyyy.file.name, then when I try to transfer the file it transfers username.yyyy.file.name.
In 3.4 the option to append the user name is turned off, but it still happens.
Enclose the entire dataset name (including the high-level qualifier) in single quotes. This is a TSO (not JCL) convention - if you refer to a dataset without single quotes, it pre-pends your user ID as the high-level qualifier; however if you place single quotes around the dataset name it will take it 'as is' (well, it will uppercase it, since all z/OS dataset names are uppercase, but otherwise it will be 'as is').
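For example, assuming your TSO user ID is USERID:
yyyy.file.name      is transferred as USERID.YYYY.FILE.NAME
'yyyy.file.name'    is transferred as YYYY.FILE.NAME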
I would like to archive every file in a folder by putting it in another archive folder with a name like this: "Archive/myfolder-06-2014"
My problem is how to retrieve the current month and year and then how to create a folder named after them (if it does not already exist).
This solution may be a little awkward (due to the required fuss), but it seems to work. The idea is to precompute the target filename in a separate transformation and store it as a system variable (TARGET_ZIP_FILENAME):
The following diagrams show the settings of selected components.
Get the current time...
Provide the pattern of the target filename as a string constant...
Extract the month and year as formatted integers...
Replace the month in the pattern (the year will work equivalently)
Set the resulting filename as a system variable
The main job will call the transformation and use the system variable as the zip target filename.
Also, you have to make sure that the setting Create Parent folder is active.
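If the same month/year logic is ever needed outside of the transformation/job setup, a plain-shell equivalent could look like this (the folder names are simply the ones from the question):
$ archive_dir="Archive/myfolder-$(date +%m-%Y)"   # e.g. Archive/myfolder-06-2014
$ mkdir -p "$archive_dir"                         # only created if it does not already exist
$ mv myfolder/* "$archive_dir"/                   # move every file into the archive folder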
I have ~60K bibliographic records, which can be identified by system number. These records also have full text (individual text files named by system number).
I have lists of system numbers in batches of 5K, and I need to find a way to copy only the text files from each 5K list.
All text files are stored in a directory (/fulltext) and are named something along these lines:
014776324.txt.
The 5K lists are plain text stored in separate directories (e.g. /5k_list_1, /5k_list_2, ...), where each system number matches a .txt file.
For example: bibliographic record 014776324 matches 014776324.txt.
I am struggling to find a way to copy into the 5k_list_* folders only the corresponding text files.
Any idea?
Thanks indeed,
Let's assume we invoke the following script this way:
./the-script.sh fulltext 5k_list_1 5k_list_2 [...]
Or more succinctly:
./the-script.sh fulltext 5k_list_*
Then try using this (totally untested) script:
#!/usr/bin/env bash
set -eu      # exit on errors and on use of unset variables
src_dir=$1   # first argument: the directory to copy files from (e.g. fulltext)
shift 1
for list_dir; do   # implicitly loops over the remaining arguments
    # assumes each list directory contains a file named list.txt whose lines
    # start with a system number (anything after it on the line is ignored)
    while read -r sys_num _rest; do
        cp "$src_dir/$sys_num.txt" "$list_dir/"
    done < "$list_dir/list.txt"
done
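To preview what the script would do before running it for real, you can temporarily put echo in front of the cp command so that each copy is printed instead of executed (a common dry-run trick):
echo cp "$src_dir/$sys_num.txt" "$list_dir/"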