How to create an offline repository for Debian non-free?

I am using Debian squeeze and want to create an offline repository or a CD/DVD for the Debian non-free branch. I looked around the internet, and all I found out is that there are neither ISO images nor jigdo files for creating such an image, so I had the idea to fetch the packages from one of the Debian package servers using:
wget -r --no-parent -nH -A "*all.deb,*any.deb,*i386.deb" \
ftp://debian.oregonstate.edu/debian/pool/non-free/
I know that I must use file: in my /etc/apt/sources.list to indicate local repositories, but how do I actually create one so that apt or aptitude understands it?

(Answered in a question edit. Converted to a community wiki answer. See What is the appropriate action when the answer to a question is added to the question itself?)
The OP wrote:
Update: With a few ugly tricks I was able to extract the needed data from the pool and dists folders.
I used the unzipped Packages.gz to do this:
grep '^Package\:.*' Packages|awk '{print $2}' >> Names.lst
grep '^Version\:.*' Packages|awk '{print $2}' >> Versions.lst
grep '^Architecture\:.*' Packages|awk '{print $2}' >> Arch.lst
With vim I find and remove the ':' characters in Versions.lst, and generate a shorter Content.lst that is easier to parse with bash tools:
paste Names.lst Versions.lst Arch.lst >> Content.lst
Now I do this:
cat Content.lst | while read line; do
echo "$(echo $line | awk '{print $1}')_$(echo $line | awk '{print $2}')_$(echo $line | awk '{print $3}')"
done > Content.lst.tmp && mv Content.lst.tmp Content.lst
which generates the file names in the debian directory that I need. When my wget downloads had finished, I used find and rsync to copy the needed files. mv does not work here because I need the directory structure exactly as it is referenced in Packages.gz:
cat Content.lst | while read line; do
find debian/ -type f -name "${line}.deb" -exec rsync -rtpog -cisR {} debian2/ \;
done
rm -r debian && mv debian2 debian
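For reference, each stanza in the Packages index also carries a Filename: field holding the full pool path of its .deb, so the name-reconstruction above can be avoided entirely. A minimal sketch, assuming the mirror was fetched into debian/ as above (the file name Files.lst is my own):
awk '/^Filename:/ {print $2}' Packages > Files.lst
# the listed paths are relative to the mirror root; --files-from keeps
# the pool/ structure just like -R did above
rsync -rtpog -cis --files-from=Files.lst debian/ debian2/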
To retrieve the complete dists tree structure I used wget again:
wget -c -r --no-parent -nH -A "*.bz2,*.gz,Release" \
ftp://debian.oregonstate.edu/debian/dists/squeeze/non-free/binary-i386/
I think the only thing I have to do now is to create the Contents.gz file.
The Contents.gz file can easily be created using the apt-ftparchive program:
apt-ftparchive contents debian/ > Contents-i386 && gzip -f Contents-i386
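apt also needs a Packages index before a file: source will work; a minimal sketch, run from inside the local debian/ mirror (layout assumed from the steps above):
cd debian
apt-ftparchive packages pool/non-free > dists/squeeze/non-free/binary-i386/Packages
gzip -c dists/squeeze/non-free/binary-i386/Packages > dists/squeeze/non-free/binary-i386/Packages.gz
# then point apt at the tree in /etc/apt/sources.list (adjust the absolute path):
# deb file:/path/to/debian squeeze non-free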

Related

Script will not download files when called by cron or udev rules

So I'm trying to make a script that will download my podcasts upon detecting my smart watch connecting, and transfer them to it. I've configured the udev rule to detect when the watch is connected, and it executes /bin/watch_transfer.sh, whose code is:
#!/usr/bin/env sh
echo "Watch connected at $(date)" >>/tmp/scripts.log
# Download new podcasts
cd /home/pi/Scripts/
./bashpodder.shell >>/tmp/pscripts.log
echo "Upodder should've run by now">>/tmp/scripts.log
# Transfer podcasts
for file in /home/pi/Downloads/podcasts/*
do
/usr/bin/mtp-sendfile "$file" /Podcasts
echo "Processing $file" >>/tmp/scripts.log
done
echo "Sent all files" >>/tmp/scripts.log
I know that the script runs when the watch is connected because /tmp/scripts.log is created and updated, and bashpodder.shell creates the podcast.m3u file, so the bashpodder script is running; but it doesn't download any files to ~/Downloads/podcasts. Bashpodder is a simple podcast downloader (I was using upodder but switched because it didn't seem to work) and mtp-tools is a way to transfer files through MTP. Bashpodder.shell script below:
# By Linc 10/1/2004
# Find the latest script at http://lincgeek.org/bashpodder
# Revision 1.21 12/04/2008 - Many Contributers!
# If you use this and have made improvements or have comments
# drop me an email at linc dot fessenden at gmail dot com
# and post your changes to the forum at http://lincgeek.org/lincware
# I'd appreciate it!
# Make script crontab friendly:
cd $(dirname $0)
# datadir is the directory you want podcasts saved to:
datadir=/home/pi/Downloads/podcasts
# create datadir if necessary:
mkdir -p $datadir
# Delete any temp file:
rm -f temp.log
# Read the bp.conf file and wget any url not already in the podcast.log file:
while read podcast
do
file=$(xsltproc parse_enclosure.xsl $podcast 2> /dev/null || wget -q $podcast -O - | tr '\r' '\n' | tr \' \" | sed -n 's/.*url="\([^"]*\)".*/\1/p')
for url in $file
do
echo $url >> temp.log
if ! grep "$url" podcast.log > /dev/null
then
wget -t 10 -U BashPodder -c -q -O $datadir/$(echo "$url" | awk -F'/' {'print $NF'} | awk -F'=' {'print $NF'} | awk -F'?' {'print $1'}) "$url"
fi
done
done < bp.conf
# Move dynamically created log file to permanent log file:
cat podcast.log >> temp.log
sort temp.log | uniq > podcast.log
rm temp.log
# Create an m3u playlist:
ls $datadir | grep -v m3u > $datadir/podcast.m3u
I think it might be something to do with permissions, as when I run ./watch_transfer.sh from the terminal it runs perfectly. Thanks in advance for your help.
edit:
After connecting my watch:
Output of $ cat /tmp/scripts.log:
Watch connected at Thu Jul 16 22:25:47 BST 2020
Upodder should've run by now
Processing /home/pi/Downloads/podcasts/podcast.m3u
Sent all files
$ cat /tmp/pscripts.log outputs nothing, but /tmp/pscripts.log does exist.
Output of $ cat ~/Scripts/temp.log:
http://rasterweb.net/raster/audio/rwaudio20060108.mp3
http://rasterweb.net/raster/audio/rwaudio20051020.mp3
http://rasterweb.net/raster/audio/rwaudio20051017.mp3
http://rasterweb.net/raster/audio/rwaudio20050807.mp3
http://rasterweb.net/raster/audio/rwaudio20050719.mp3
http://rasterweb.net/raster/audio/rwaudio20050615.mp3
http://rasterweb.net/raster/audio/rwaudio20050525.mp3
http://rasterweb.net/raster/audio/rwaudio20050323.mp3
This seems to suggest that bashpodder is running through the URLs but not actually downloading them?
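One possibility worth testing (an assumption on my part, not something the logs prove): udev runs RUN scripts in a restricted environment and kills them shortly after the event is handled, which would cut off the long wget downloads while the quick echo and m3u steps still succeed. Capturing wget's errors and detaching the long-running job are two quick checks:
# in watch_transfer.sh, capture stderr so download failures become visible:
./bashpodder.shell >>/tmp/pscripts.log 2>>/tmp/pscripts.err
# hypothetical udev rule: detach the work into a transient systemd unit
# so it is not killed along with the udev worker:
# RUN+="/usr/bin/systemd-run --no-block /bin/watch_transfer.sh"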

Removing com.apple.quarantine from files on Linux

From time to time a user uploads a file with a tag "com.apple.quarantine". This is added, I think, when the user has downloaded a file onto his computer from the internet.
My question is, how do I remove this from a file if I'm on Linux?
Thanks
Use setfattr. On Linux the extended attribute should be in the "user." namespace (your mileage may vary):
setfattr -x 'user.com.apple.quarantine' file1 [ file2 [ ... ] ]
Unfortunately, the -xattr predicate hasn't made it into GNU find yet, so processing a complete hierarchy involves a brute-force-and-ignorance approach looking something like this:
cd /path/to/search
errors=/var/tmp/setfattr.errors
find . -exec setfattr -x 'user.com.apple.quarantine' {} + 2> "$errors"
After which the $errors file should only contain entries for files which didn't have the relevant attribute:
grep -v 'No such attribute' -- "$errors"
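To see which files actually carry the attribute before or after the sweep, getfattr (shipped in the same attr package as setfattr) dumps the user.* namespace:
getfattr -d file1    # lists all user.* extended attributes on file1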
Alternatively, if the string appears in the file names themselves, you can strip it by renaming. Create two test files:
touch /tmp/com.apple.quarantine.test1
touch /tmp/com.apple.quarantine.test2
Then run the following code:
for f in $(find /tmp/ -type f | grep -i 'com.apple.quarantine'); do
OLD_NAME=$(echo "$f" | awk -F "/" '{print $NF}')
NEW_NAME=$(echo "$OLD_NAME" | sed "s/com\.apple\.quarantine\.//g")
echo "$NEW_NAME"
DIR_NAME=$(dirname "$f")
cd "$DIR_NAME"
mv "$OLD_NAME" "$NEW_NAME"
done
Now only test1 and test2 remain under /tmp.

Replacement for chmod --reference on OS X?

I'm trying to port some Jenkins bash scripts from Ubuntu to OS X. The Linux chmod (I think it is originally GNU) has a --reference option that allows copying the mode from a reference file. I am looking for equivalent code for OS X, preferably without installing extra packages. Even better would be a cross-platform solution.
The concrete snippet:
# expand all the templates
find "$OUTPUT_PATH" -name "*.template" | while read FILE ; do
sed \
-e "s/%{NAME}/$OPTION_NAME/g" \
-e "s/%{TITLE}/$OPTION_TITLE/g" \
-e "s/%{VERSION}/$OPTION_VERSION/g" \
-e "s/%{WHEN}/$OPTION_WHEN/g" \
"$FILE" > "${FILE%.*}"
chmod --reference="$FILE" "${FILE%.*}"
rm -f "$FILE"
done
[edit] The combination of stat -r and saving the file mode is the right one; stat -c doesn't exist on OS X.
Copy the file first and only then overwrite with a shell redirection. This should preserve the original permissions.
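Applied to the template loop from the question, that looks something like this (a sketch; only the first sed expression is shown):
find "$OUTPUT_PATH" -name "*.template" | while read FILE ; do
cp -p "$FILE" "${FILE%.*}"   # copy first so the target inherits the mode
sed -e "s/%{NAME}/$OPTION_NAME/g" "$FILE" > "${FILE%.*}"
rm -f "$FILE"
done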
How about using the format switch to FreeBSD stat:
stat -f "%p" ~/.bashrc
stat -f "%Sp" ~/.bashrc
stat -f "%u:%g:%p" ~/.bashrc
If your OS X has the stat command
# expand all the templates
find "$OUTPUT_PATH" -name "*.template" | while read FILE ; do
savemod=$(stat -c "%a" "$FILE")
sed \
-e "s/%{NAME}/$OPTION_NAME/g" \
-e "s/%{TITLE}/$OPTION_TITLE/g" \
-e "s/%{VERSION}/$OPTION_VERSION/g" \
-e "s/%{WHEN}/$OPTION_WHEN/g" \
"$FILE" > "${FILE%.*}"
chmod $savemod "${FILE%.*}"
rm -f "$FILE"
done
If it doesn't have the -c option, check the man page of stat under formatting; you can find similar ways to get the permission/mode of the file.
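A hedged cross-platform variant of the savemod line, trying the GNU flag first and falling back to the BSD/OS X format shown above:
savemod=$(stat -c "%a" "$FILE" 2>/dev/null || stat -f "%OLp" "$FILE")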

CentOS directory structure as tree?

Is there an equivalent to tree on CentOS?
If tree is not installed on your CentOS system (I typically recommend using the minimal install disk for server setups anyhow), type the following at the command line:
# yum install tree -y
If this doesn't install, it's because you don't have the proper repository. I would use the Dag Wieers repository:
http://dag.wieers.com/rpm/FAQ.php#B
After that you can do your install:
# yum install tree -y
Now you're ready to roll. Always read the man page: http://linux.die.net/man/1/tree
So quite simply the following will return a tree:
# tree
Alternatively you can output this to a text file. There are a ton of options too. Again, read the man page if you're looking for something other than the default output.
# tree > recursive_directory_list.txt
(^^ in a text file for later review ^^)
You can make your own primitive "tree" ( for fun :) )
#!/bin/bash
# only if you have bash 4 in your CentOS system
shopt -s globstar
for file in **/*
do
slash=${file//[^\/]}
case "${#slash}" in
0) echo "|-- ${file}";;
1) echo "| |-- ${file}";;
2) echo "| | |-- ${file}";;
esac
done
As you can see here, tree is not installed by default in CentOS, so you'll need to look for an RPM and install it manually.
Since tree is not installed by default in CentOS ...
[user@CentOS test]$ tree
-bash: tree: command not found
[user@CentOS test]$
You can also use the following ls command to produce output almost similar to tree:
ls -R | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/ /' -e 's/-/|/'
Example:
[user@CentOS test]$ ls -R | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/ /' -e 's/-/|/'
.
|-directory1
|-directory2
|-directory3
[user@CentOS directory]$
You have tree in the base repo.
Show it (yum list package-name):
# yum list tree
Available Packages
tree.i386 1.5.0-4 base
Install it:
yum install tree
(verified on CentOS 5 and 6)
I need to work on a remote computer that won't allow me to yum install. So I modified bash-o-logist's answer to get a more flexible one.
It takes an (optional) argument that is the maximum level of subdirectories you want to show. Add it to your $PATH, and enjoy a tree command that doesn't need installation.
I am not an expert in shell (I had to Google a ton of times just for this very short script). So if I did anything wrong, please let me know. Thank you so much!
#!/bin/bash
# only if you have bash 4 in your CentOS system
shopt -s globstar # enable double star
max_level=${1:-10}
for file in **
do
# Get just the folder or filename
IFS='/'
read -ra ADDR <<< "$file"
last_field=${ADDR[-1]}
IFS=' '
# Get the number of slashes
slash=${file//[^\/]}
# print folder or file with correct number of leadings
if [ ${#slash} -lt $max_level ]
then
spaces=" "
leading=""
if [ "${#slash}" -gt 0 ]
then
# repeat "|${spaces}" once per level; the eval is needed so the brace
# expansion {1..N} sees the computed depth
leading=$(eval $(echo printf '"|${spaces}%0.s"' {1..${#slash}}))
fi
echo "${leading}|-- $last_field"
fi
done
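For example, saved as mytree (name assumed) somewhere on your $PATH:
chmod +x ~/bin/mytree
mytree 2   # print only the first two directory levels under the current directory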

Wget output in recursive mode

I am using wget -r to download 3 .zip files from a specified webpage. Here is what I have so far:
wget -r -nd -l1 -A.zip http://www.website.com/example
Right now, the zip files all begin with abc_*.zip, where * seems to be random. I want the first downloaded file to be called xyz_1.zip, the second xyz_2.zip, and the third xyz_3.zip.
Is this possible with wget?
Many thanks!
I don't think it's possible with wget alone. After downloading you could use some simple shell scripting to rename the files, like:
i=1; for f in abc_*.zip; do mv "$f" "xyz_$i.zip"; i=$(($i+1)); done
Try to get a listing first and then download each file separately.
let n=1
wget -nv -l1 -r --spider http://www.website.com/example 2>&1 | \
egrep -io 'http://.*\.zip'| \
while read url; do
wget -nd -nv -O $(echo $url|sed 's%^.*/\(.*\)_.*$%\1%')_$n.zip "$url"
let n++
done
I don't think there is a way you can do it within a single wget command.
wget does have a -O option which you can use to tell it which file to output to, but it won't work in your case because multiple files will get concatenated together.
You will have to write a script which renames the files from abc_*.zip to xyz_*.zip after wget has completed.
Alternatively, invoke wget for one zip file at a time and use the -O option.
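A sketch of that per-file approach, reusing the spider listing from the earlier answer (URL from the question; output names assumed):
n=1
wget -nv -l1 -r --spider http://www.website.com/example 2>&1 | \
egrep -io 'http://.*\.zip' | \
while read url; do
wget -nv -O "xyz_$n.zip" "$url"
n=$((n+1))
done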
