Portable Bourne shell script without using features of modern shells such as bash, ksh, zsh, etc. [closed] - linux

First of all, I want to thank everyone who helps me solve this. I have an exam tomorrow and I have to prepare this script for it. I am really new to Linux and to Bourne shell scripting.
My project should be a portable Bourne shell script that scans a directory for the following files: header.txt, footer.txt and content.txt. The content of the files should be read while ignoring lines starting with #, and used to generate an HTML page with that header, footer and content. These files can contain any text and/or HTML code, but they cannot contain head and body tags. When scanning the directory, the script has to compare the last-modification dates of the files (header.txt, footer.txt and content.txt) with the last-modification date of the HTML page (if one already exists), and if any of the files is newer than the HTML page, the script should regenerate the page with the latest content.
Thank you very much; this is very important for me. Please help me get this done.

To remove lines beginning with # try this:
grep -v "^#" file
To remove lines that may contain spaces (or blank characters) before a #:
grep -v "^[[:blank:]]*#" file

Related

How to get the last added folder to a directory [closed]

I work at a software development company, and every day when my machine boots I have to execute the same commands to start coding. I recently decided to create a bash script to do that for me. The problem is that the commands I need to type differ from one another in a single way: the folder I need to access.
I always have to access a directory that contains folders with different versions of the company code (let's call it "codes" for the sake of the discussion), and every day another folder is added to the "codes" directory (they update the company code every day), named with a timestamp, e.g. 2021-07-05-17-52-51.
To create my automation script, I need to get into the "codes" directory and find the most recently added folder, i.e. the one with the latest timestamp.
I am new to bash and I couldn't find answers on how to get the last folder added to a directory using bash, or some other way to get the latest one.
You can use something like this:
directory=$(ls -At1 | head -n 1)
An explanation in parts:
ls -At1 lists all entries, sorted by modification time, one per line
head -n 1 returns the first entry, i.e. the most recent one
$(...) runs the command in a subshell and sets directory to the name of the item with the most recent modification time. If you want to ignore hidden files and folders, drop the -A flag from ls.
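Because the folder names in the question are sortable timestamps, a glob can avoid parsing ls entirely; a rough sketch, assuming "codes" is the directory from the question and at least one folder exists:
cd codes || exit 1
for d in */                # globs expand in sorted (lexical) order,
do                         # so the last match is the newest timestamp
    latest="$d"
done
directory=${latest%/}      # strip the trailing slash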

How can I search the content of a pdf file in linux shell script? [closed]

Suppose I am given some journal papers in PDF format. I want to find the title and author list of each paper. How can I do that in a shell script?
I do not know if this works for your journal, but it works on some PDF files:
strings "myjournal.pdf" | egrep "/Author|/Title" | tr '/' '\n' | egrep "Author|Title"
I worked on a project where we had to search the content of PDF files. The process we decided to use was the following:
First, we would convert the PDF file to an image with the following command:
convert -density 500 "pdf_path.pdf" -depth 8 "image_output.png"
After the image file has been created, we use the command below to create a txt file with the PDF's content.
tesseract "image_output.png" "out_put_txt_file_name" -l por
You are probably going to have to change the -l por argument, because we used this for texts in Portuguese.
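For what it's worth, a rough wrapper around those two steps; every file name here is a placeholder, and a multi-page PDF will produce page-0.png, page-1.png and so on:
pdf="paper.pdf"
convert -density 500 "$pdf" -depth 8 page.png
tesseract page.png result -l eng        # writes result.txt; pick your language with -l
grep -i "author" result.txt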

Error in shell script and how to write to a file [closed]

I am writing a shell script that extracts the data from a command:
I have tried running the script from both the vi and vim editors, but everything was in vain.
Please help me out. Also, how do I write the output of this to a file?
Note that this is just a starting point; the script will produce multiple files, so I cannot simply write:
Script_name > filename
I think this question is fine now; after the edit, the input file is good enough and I can fully understand what you are asking for.
With awk, you need to learn to use a two-dimensional array; it will simplify the code.
awk 'BEGIN{print "Instance id Name Owner Cost.centre"}
     # on TAG lines: copy the fields into a, lowercase the tag name, blank the
     # first four fields so $0 keeps only the value, then store it keyed by
     # (instance id, tag name) in the 2-d array b; c collects the instance ids
     /TAG/{split($0,a,FS);a[4]=tolower(a[4]);$1=$2=$3=$4="";b[a[3],a[4]]=$0;c[a[3]]}
     END{for (i in c) printf "%-18s%-26s%-14s%-20s\n",i,b[i,"name"],b[i,"owner"],b[i,"cost.center"]}' file
Instance id Name Owner Cost.centre
i-e1cfc499 Memcached
i-7f4b9300 Test_LB01_Sachin
i-c4260db8 Rishi_Win_SAML Rishi Pandey
i-fb5ca283 CLIQR-DO NOT TOUCH mataa 1234
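As for writing the output to a file when the script produces several of them, redirect each command to its own file, or redirect everything from a given point with exec; the names below are only examples:
awk -f report.awk file > report.txt     # report.awk holds the program above
exec > script_output.txt                # everything printed after this line goes to the file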

source a file every time I start unix [closed]

I created a few .sh files and put them in one of the directories on $PATH. Unfortunately, every time I start a new session I have to source them if I want to use them. I did a Google search and couldn't really find what I am looking for: a way to avoid having to source these files by hand.
I guess I could place a source-all command in ~/.bashrc, but there should be a simpler way to get this done.
Thanks
Let's say all of your scripts are under the ~/.functions directory. Put this in your $HOME/.bashrc:
for file in ~/.functions/*
do
    . "$file"
done
This will source all files in the ~/.functions directory whenever you start a new shell.
Sourcing all commands in .bashrc is the simple way.
You may want a more sophisticated way of sourcing your start scripts: create a specific directory, say ~/.start_scripts, where you put all your commands, and write a loop in your .bashrc that sources whatever is in this directory, as sketched below. That way, you no longer have to edit .bashrc each time a new command is put in the .start_scripts directory.
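A minimal sketch of that loop, assuming the scripts live in ~/.start_scripts (the directory name is just the example from above):
for f in ~/.start_scripts/*
do
    [ -r "$f" ] && . "$f"   # source every readable file in the directory
done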

OSX/Linux, slow down the output from terminal [closed]

I'm printing a long text file to the screen and it scrolls very, very quickly. Is there a way to slow down the scrolling? In other words, is there a system setting that controls the speed at which output is displayed on the screen (OSX/Linux)?
Simple answer: No.
Extended version: There are other solutions. You could pick from one of the following:
Use pipes. A pipe lets you redirect terminal output and review it at your own pace. The appropriate symbol is |. Redirect the output to a pager such as less or more. Both allow you to scroll through the output by pressing Return, and you can exit at any time by pressing q. For instance, to handle a long directory listing, you could use
ls | more
Redirect your output into a file. If your output is stored in a file, it is persistent, and you can open it with an editor of your choice to view (and edit) it. The symbol is >.
touch log.txt # create the file (optional; the redirection below creates it too)
ls > log.txt
nano log.txt # use nano text editor to view
script allows you to record entire terminal sessions. This might be overkill for your use case, but it is really useful. From the man page:
script makes a typescript of everything printed on your terminal. It is
useful for students who need a hardcopy record of an interactive session
as proof of an assignment, as the typescript file can be printed out
later with lpr(1).
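Typical usage, assuming you want the record in session.log (without an argument, script writes to a file called typescript):
script session.log      # start recording
ls -lR /                # ...produce lots of output...
exit                    # stop recording
less session.log        # replay it at your own pace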
Use less to page through files; you can page back and forth, search, etc.
xterm has limited control over scrolling speed; most other terminal emulators have none, because that's the wrong way to step through a file when you can use a program like less to filter the output.
