Normalized audio in sox: no such file - audio

I'm trying to use this script to batch normalize audio using SoX. The problem is that it apparently never creates the temp file, and then of course there is no normalized audio file either. I'm getting this error for every file:
norm_fade.sh: line 57: /Applications/sox/WantNotSamples/Who Am I-temp.wav: No such file or directory
Normalized File "wav_file" exists at "/Applications/sox/WantNotSamplesNormalize"
rm: /Applications/sox/WantNotSamples/Who Am I-temp.wav: No such file or directory
Here is my script:
#!/bin/sh
# Script.sh
#
#
# Created by scacinto on 1/31/13.
#
# For now, only put audio files in the working directory - working on a fix
# This is the directory to the source soundfiles that need to be
# normalized and faded (the first argument on the command line.)
src=$1
# This is the directory to write the normalized and faded files to
# (The second path you must supply on the command line.)
dest=$2
# This is the sox binary directory. Please set this to your sox path.
# As it is now, this assumes that the sox binary is in the same directory
# as the script.
SOX= ./sox
#enable for loops over items with spaces in their name
IFS=$'\n'
# This is the 'for' loop - it will run for each file in your directory.
for original_file in `ls "$src/"`
do
    # Get the base filename of the current wav file
    base_filename=`basename "$original_file" .wav`
    # We need a temp file name to save the intermediate file as
    temp_file="${base_filename}-temp.wav"
    echo "Creating temp file: \"$temp_file\" in \"$src\""
    # And we need the output WAV file
    wav_file="${base_filename}-nf.wav"
    # Convert all spaces to hyphens in the output file name
    wav_file=`echo $wav_file | tr -s " " "-"`
    # Print a progress message
    echo "Processing: \"$original_file\". Saving as \"$wav_file\" ..."
    # We need the length of the audio file
    original_file_length=`$SOX $src/"$original_file" 2>&1 -n stat | grep Length | cut -d : -f 2 | cut -f 1`
    # Use sox to perform the fade-in and fade-out, saving the result as our
    # temp_file. Adjust the 0.1s values to your desired fade times.
    #$SOX $src/"$original_file" $src/"$temp_file" fade t 0.1 $original_file_length 0.1
    # If files have readable headers, you can skip the above operation to get the
    # file length and just use 0 as below.
    #$SOX $src/"$original_file" $src/"$temp_file" fade t 0.5 0 0.5
    # Normalize and write to the output wav file
    $SOX $src/"$temp_file" $dest/"$wav_file" norm -0.5
    echo "Normalized File \"wav_file\" exists at \"$dest\""
    # Delete that temp file
    rm $src/$temp_file
done
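For what it's worth, two things in this script would produce exactly the errors shown. First, the assignment SOX= ./sox contains a space, so $SOX expands to an empty string and the shell ends up trying to execute the temp-file path itself (hence "No such file or directory"). Second, both fade commands are commented out, so the temp file is never created at all. A minimal corrected sketch (assuming, as the comment says, that the sox binary sits next to the script):

# no spaces around '=' in a shell assignment
SOX=./sox

# one of the fade commands must actually run, or the temp file never exists
$SOX "$src/$original_file" "$src/$temp_file" fade t 0.1 "$original_file_length" 0.1

# normalize the faded temp file into the destination
$SOX "$src/$temp_file" "$dest/$wav_file" norm -0.5
echo "Normalized file \"$wav_file\" exists at \"$dest\""
rm "$src/$temp_file"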

Related

Removing every line from a large text file except specific lines

For my project I am handling large data files. When these data files come in they are "uncleaned", and I need to clean them so that I can calculate the required functions from them. The first 9 lines are text with information such as the time and the number of atoms, while the next 10000 lines are trajectory data; this pattern repeats until a certain time.
Now I have written code that cleans the text out of it:
homedir=$(pwd) # save the working directory
for ex in 0 #5
do
    dirname="ex-$ex"
    cd $dirname
    dirname2="Tq-0.25-N10000"
    cd $dirname2
    for i in $(seq 1 1 100)
    do
        dirname3="tr-$i"
        cd $dirname3
        mv traj-passive-afterquench.atom traj-afterquench
        sed -i "1,9d" traj-afterquench
        awk '{if((NR-1) % 10009<=9999){print $0}}' traj-afterquench > test
        cd .. # tr
    done
    cd .. # Tq
    cd .. # ex
done
But now I want to create another file that removes every line except the time lines. These are located on lines 2 + 10009*i, where i runs over the timesteps until the end of the file. How would I write code that removes every line except the ones given by this formula?
If you have GNU sed:
sed '2~10009!d' file
should do the job.
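The 2~10009 address is a GNU extension that matches line 2 and then every 10009th line after it. If GNU sed is not available, a portable awk one-liner makes the same selection (a sketch; the times output name is just an example):

awk '(NR - 2) % 10009 == 0' traj-afterquench > times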

Linux Shell Script to unzip and split file outputs unreadable files

I have a zip archive which contains multiple files of the same format, each around 50 MB in size. I need to split each file into multiple chunks (say 1000 lines per split output file).
I have written a shell script which unzips the archive and saves the split output files in a directory.
The problem is that the output chunks are in an unreadable format containing symbols and random characters.
When I do it for each file individually, it outputs perfectly readable split text files, but not when run on the whole zip archive.
Does anyone know how I can get those files in txt format?
Here is my script.
for z in input.zip ; do
if unzip -p "$z" | split -l 1000 $z output_dir ; then
echo "$z"
fi
done
Problem
You need to unzip the files first. Otherwise, you are just chunking the original binary ZIP file.
Solution
The following is untested because I don't have your source file. However, it should work for you with a little tweaking.
unzip -d /tmp/unzipped input.zip
mkdir /tmp/split_files
for file in /tmp/unzipped/*.txt; do
    split -l 1000 "$file" "/tmp/split_files/$(basename "$file" .txt)"
done
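If extracting everything to disk first is a concern, each member can also be streamed straight from the archive into split. An untested sketch, assuming the members are .txt files and that your unzip supports -Z1 for listing archive contents:

unzip -Z1 input.zip '*.txt' | while IFS= read -r name; do
    unzip -p input.zip "$name" | split -l 1000 - "output_dir/$(basename "$name" .txt)"
done

Here split reads from stdin (the - argument) and writes chunks using the member's base name as the prefix.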

Linux shell script to tar.gzip log files older than 1 month grouped by month

I have a directory full of various application logs.
Example:
FailedAudit_20150101_000000.log FailedAudit_20150209_000000.log
FailedAudit_20150316_000000.log stats20150116.log stats20150224.log
FailedAudit_20150102_000000.log FailedAudit_20150210_000000.log
FailedAudit_20150317_000000.log stats20150117.log stats20150225.log
FailedAudit_20150103_000000.log RepoV4Error20150227.log
All the logs have a timestamp in the format YYYYMMDD, but there are also other numbers involved, as you can see.
My objective is to write a script that can be run periodically to go through this directory and do the following:
For all log files older than 1 month, based on the filename timestamp:
- tar.gz each month's worth of files (30~31 files) into one file
- label the tar.gz file as App1_201508.tar.gz <-- contains all ~30 log files
So the format is AppnameYYYYMM.tar.gz. The log file application name is static; only the timestamp varies.
I suppose there are a few ways to do this, but I would like to gather ideas from the great minds of Stack Overflow to find the simplest way.
Thanks in advance
Here's the third solution for your updated question:
#!/usr/bin/env bash
LOGTYPES=$( ls *log* | sed -rn "s/([0-9]{6})[0-9]{2}.*$/\1/p" | sort -u )
# the sed command, item by item:
#
# s/          search and replace
# ([0-9]{6})  block of 6 digits, and store it
# [0-9]{2}    followed by 2 more digits
# .*$         followed by any and all characters until the end of the input
# /           replace all of that with
# \1          the first stored block (the 6 digits)
# /p          print the output
#
# So this turns FailedAudit_20150101_000000.log into FailedAudit_201501
THIS_MONTH=$(date +%Y%m)
for LOG in $LOGTYPES; do
    MONTH=${LOG: -6} # Last 6 characters of the LOGTYPE are YYYYMM
    if [[ "$MONTH" -lt "$THIS_MONTH" ]]; then
        LOG_FILES=$(ls ${LOG}*)
        tar -czf ${LOG}.tar.gz ${LOG_FILES}
        RC=$? # Check whether an error occurred
        if [[ "$RC" == "0" ]]; then
            rm ${LOG_FILES}
        fi
    fi
done
Note: This assumes that the first block of 8 digits is the datestamp, and everything after that is not relevant for which archive it is to go to.
Update:
The sed script no longer outputs files that do not contain a timestamp.
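To preview the grouping before anything is tarred, you can run just the first pipeline on its own; for the sample listing above it should print something like:

$ ls *log* | sed -rn "s/([0-9]{6})[0-9]{2}.*$/\1/p" | sort -u
FailedAudit_201501
FailedAudit_201502
FailedAudit_201503
RepoV4Error201502
stats201501
stats201502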
Here; not sure if this is working:
#!/bin/bash
MONTH=$(date +%m)
OLDMONTH=$(printf '%02d' $(( (10#$MONTH + 10) % 12 + 1 ))) # previous month, wrapping January back to 12
for FILE in "$DIR"/*
do
    # the substring offset is illustrative; line it up with the MM digits in your filenames
    if [ "${FILE:17:2}" == "$OLDMONTH" ]; then
        : # do what you want with the file, it's one month old, e.g. add it to a list
    fi
done
# do what you want with the list, e.g. tar, ...
Run the script once a day, for example with runwhen or cron.
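For instance, a crontab entry running a (hypothetical) archive script daily at 03:00 would look like:

0 3 * * * /path/to/archive-logs.sh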

Merge some parts of a split tar.gz file Linux command

I have a large tar.gz file (approximately 63 GB) on a Linux server. This file contains about 1000 compressed csv files. I need to save the data from the csv files in a database.
I can't extract the whole file in one go due to limited space on the server. So I split the tar.gz file into 5 parts (4 parts of 15 GB and 1 of 3 GB) but did not merge all of them, as the server would have no space left once the extraction was done. I merged the first two parts to make a new tar.gz file and extracted the csv files from that.
When I tried to merge the last 3 parts, the result was not a valid tar.gz file and could not be extracted. This was not because of server space, because I had deleted the files that were no longer required after extracting the first two parts.
Is there any way the last 3 parts of the split tar.gz file can be merged into a valid tar.gz format and then extracted?
Command used to split :
split -b 15G file.tar.gz parts
Command used to merge :
cat parts* > combined.tar.gz
Command used to extract :
tar zxvf file.tar.gz -C folderwhereextracted
You can use a short shell script:
#!/bin/bash
path='./path'
list=""
i=0
for file in `ls "$path"/*.tar.gz.*`
do
    let i++
    if [[ -f $(find $path/*.tar.gz.$i) ]]
    then
        echo "file $path/*.tar.gz.$i found."
        list="$list $path/*.tar.gz.$i"
    else
        echo "file $path/*.tar.gz.$i not found!"
    fi
done
cat $list > full.tar.gz
tar zxvf ./full.tar.gz -C $path
# rm -rf $list
Set the path variable to your own path.
Uncomment the last line to remove the source files after untarring.
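Alternatively, if merging is only needed to save disk space, the parts can be streamed straight into tar without ever writing the combined archive to disk; a minimal sketch using the question's own file names:

cat parts* | tar zxvf - -C folderwhereextracted

Note that all parts from the first onward are required: the gzip header lives in the first part, so a concatenation of only the last 3 parts is not a valid gzip stream on its own, which is why that merge could not be extracted.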

Moving multiple files in directory that might have duplicate file names

Can anyone help me with this?
I am trying to copy images from my USB stick to an archive on my computer, and I have decided to write a bash script to make this job easier. I want to copy files (e.g. IMG_0101.JPG), and if there is already a file with that name in the archive (which there will be, as I wipe my camera every time I use it), the file should be named IMG_0101.JPG.JPG so that I don't lose it.
# pseudocode: if the destination name is already taken, then
mv IMG_0101.JPG IMG_0101.JPG.JPG
# else
mv IMG_0101.JPG path/to/destination
for file in "$source"/*; do
    newfile="$dest/$(basename "$file")"
    while [ -e "$newfile" ]; do
        newfile=$newfile.JPG
    done
    cp "$file" "$newfile"
done
There is a race condition here (if another process could create a file by the same name between the first done and the cp) but that's fairly theoretical.
It would not be hard to come up with a less primitive renaming policy; perhaps replace .JPG at the end with an increasing numeric suffix plus .JPG?
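A minimal sketch of that numeric-suffix policy (hypothetical; it assumes .JPG extensions and inserts .1, .2, ... before the extension):

for file in "$source"/*.JPG; do
    base=$(basename "$file" .JPG)
    newfile="$dest/$base.JPG"
    n=0
    # bump the numeric suffix until the name is free: IMG_0101.JPG, IMG_0101.1.JPG, ...
    while [ -e "$newfile" ]; do
        n=$((n + 1))
        newfile="$dest/$base.$n.JPG"
    done
    cp "$file" "$newfile"
done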
Use the last-modified timestamp of the file to tag each filename, so that if it is the same file it doesn't get copied over again.
Here's a bash specific script that you can use to move files from a "from" directory to a "to" directory:
#!/bin/bash
for f in from/*
do
    filename="${f##*/}"$(stat -c %Y "$f")
    if [ ! -f "to/$filename" ]
    then
        mv "$f" "to/$filename"
    fi
done
Here's some sample output (using the above code in a script called "movefiles"):
# ls from
# ls to
# touch from/a
# touch from/b
# touch from/c
# touch from/d
# ls from
a b c d
# ls to
# ./movefiles
# ls from
# ls to
a1385541573 b1385541574 c1385541576 d1385541577
# touch from/a
# touch from/b
# ./movefiles
# ls from
# ls to
a1385541573 a1385541599 b1385541574 b1385541601 c1385541576 d1385541577
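One portability note: stat -c %Y is GNU coreutils syntax; on BSD/macOS the equivalent modification-time query is stat -f %m.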
