How to rename a downloaded YouTube playlist via youtube-dl

I have already downloaded a YouTube playlist; I didn't think of renaming the videos while downloading.
Now I need to rename all the videos according to the playlist numbering. I found this command to do so:
youtube-dl -o "%(playlist_index)s-%(title)s.%(ext)s" <playlist_link>
But it will download the playlist again, which I don't want. I tried different options like --skip-download without success, and I read through youtube-dl -h to find a solution, but I can't seem to find one. I am stuck here; any help is very much appreciated.

I don't think youtube-dl has an option for renaming files it has already downloaded. You can use Flash Renamer or something similar after the download has completed; it is very easy and useful.
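That said, youtube-dl's --get-filename flag prints the filename a video would get under a given output template without downloading anything, so you can build a rename map on the command line. A rough sketch, assuming the originals were saved with the default %(title)s-%(id)s.%(ext)s template (and that the printed extension matches what is actually on disk):

youtube-dl --get-filename -o "%(title)s-%(id)s.%(ext)s" "$PLAYLIST_URL" > old.txt
youtube-dl --get-filename -o "%(playlist_index)s-%(title)s.%(ext)s" "$PLAYLIST_URL" > new.txt
# pair old and new names line by line and rename only the files that exist
paste old.txt new.txt | while IFS=$'\t' read -r old new; do
    [ -f "$old" ] && mv -- "$old" "$new"
done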

Related

Is there a way to get, say, the top 5 comments of a YouTube video with a command-line utility on Linux?

I want to get the top comments of YouTube videos.
Is there a way to do this with a scriptable command-line utility, or do I need to use curl and the API?
I thought of using youtube-dl, but there seems to be no such function.
Is there a similar tool capable of doing this?
I also read some older questions which suggested that there is no way of doing this (except by fetching all comments and searching them locally), since it is not implemented in the API.
So I was wondering if this has changed recently.
question from 2011
question from 2015
The API doesn't order comments into 'top comments' (unless you mean top-level comments, which is the default), but you can use wget and parse the output file.
wget -O output "https://www.googleapis.com/youtube/v3/commentThreads?part=snippet&maxResults=5&videoId=[VIDEO_ID]&order=time&textFormat=plainText&key=[API_KEY]"
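If you only want the comment text, the JSON response can be filtered on the command line. A sketch assuming jq is installed; order=relevance is the API's closest equivalent to "top comments" (order=time sorts by date):

# [VIDEO_ID] and [API_KEY] are placeholders, as above
wget -qO- "https://www.googleapis.com/youtube/v3/commentThreads?part=snippet&maxResults=5&videoId=[VIDEO_ID]&order=relevance&textFormat=plainText&key=[API_KEY]" \
  | jq -r '.items[].snippet.topLevelComment.snippet.textDisplay'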

Download complete playlist with specific format in Linux

I am trying to download a complete playlist from YouTube with youtube-dl, but I am not able to specify the format of the video, i.e. webm or HD etc.
Use this link to get the download link sequence for all the videos.
Copy all the links and save them to a text file.
Use a Linux download client like uGet. In it, select "New batch download".
For the source, select "Import from text file" and choose the file where you saved all the download links.
Start your download :)
This will list all the available formats:
youtube-dl -F <video_link>
To download:
youtube-dl -f <format_code> <video_link>
The format code (e.g. 22 or 17) is given by the first query.
And in order to download a complete video/playlist, just run:
youtube-dl -f 22 <playlist_link>
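Putting this together with the output template from the first question, a sketch that downloads a whole playlist in format 22 (720p MP4), falls back to the best available format when 22 doesn't exist, and numbers the files:

# "22/best" is youtube-dl's fallback syntax: try format 22, else the best available
youtube-dl -f "22/best" -o "%(playlist_index)s-%(title)s.%(ext)s" <playlist_link>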

How to play audio on Corona?

I am trying to play audio as I used to, but it doesn't seem to work now. This is the code I tried:
local birdSound = audio.loadSound("bird.mp3")
audio.play(birdSound)
It gives an error like this:
WARNING: Failed to create audio sound
Can you help me out? Thanks.
Don't use .mp3; .wav works for both iPhone and Android.
Make sure the .wav file is in your project folder.
Sometimes a sound file plays fine on your computer but not in the simulator. In that case, use a different file.
If you really like that .mp3 file and cannot find a .wav version, use some free software to convert it.
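For the conversion step, one free option is ffmpeg; a minimal sketch using the file name from the question:

# decode the mp3 and write it back out as wav
ffmpeg -i bird.mp3 bird.wav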
Increasing the bit rate of my files solved the problem for me. Thanks to SatheeshJM!

How to download images from a Wikimedia search result using wget?

I need to mirror every image which appears on this page:
http://commons.wikimedia.org/w/index.php?title=Special:Search&ns0=1&ns6=1&ns12=1&ns14=1&ns100=1&ns106=1&redirs=0&search=buitenzorg&limit=900&offset=0
The mirror should give us the full-size images, not the thumbnails.
What is the best way to do this with wget?
UPDATE:
I have updated the solution below.
Regex is your friend, my friend!
Using grep and wget you'll get this task done pretty fast.
Download the search results page with wget, then run:
grep -oP 'class="searchResultImage".+?href="\K[^"]+?\.jpg' DownloadedSearchResults.html
(lookbehind-style matching needs grep's PCRE mode, -P; plain egrep won't accept it)
That should give you http://commons.wikimedia.org/ based links to each image's web page. Now, for each one of those results, download the page and run:
grep -oP 'class="fullImageLink".*?href="\K[^"]+?\.jpg' DownloadedImagePage.html
That should give you a direct link to the highest resolution available for that image.
I'm hoping your bash knowledge will do the rest. Good luck.
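Tying the two greps together, an end-to-end sketch of this approach; untested, and the HTML class names come from the answer above, so they may have changed on the live site:

# SEARCH_URL is the search results URL from the question
wget -qO results.html "$SEARCH_URL"
# extract the per-image page links, then pull the full-size jpg from each
grep -oP 'class="searchResultImage".+?href="\K[^"]+?\.jpg' results.html |
while read -r page; do
    wget -qO page.html "http://commons.wikimedia.org${page}"
    grep -oP 'class="fullImageLink".*?href="\K[^"]+?\.jpg' page.html | xargs -r wget
done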
Came here with the same problem... found this >> http://meta.wikimedia.org/wiki/Wikix
I don't have access to a Linux machine right now, so I haven't tried it yet.
It is quite difficult to write the whole script in the Stack Overflow editor, so you can find the script at the address below. The script only downloads the images on the first page; you can modify it to automate downloading from the other pages.
http://pastebin.com/xuPaqxKW

Continuously add pictures to a video

Every x minutes I grab an image from a network cam. Now I want to add this picture to an existing video file, on the fly.
I don't want to keep numerous image files and then encode them once in a while with e.g.
mencoder mf://@${LIST} -mf type=jpg:fps=${FPS} ...
The video format/codec doesn't really matter, as long as standard tools (mplayer, ffmpeg, vlc, ...) can handle it.
Any ideas or suggestions?
Thanks in advance!
One obvious way which should work (at least according to my first tests) is to just append each new JPEG image to the end of the video file, so the video is an MJPEG stream:
cat ${PIC} >> ${VIDEO}
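For reference, the whole grab-and-append cycle built on this trick might look like the following; the camera URL and interval are placeholders:

# grab a frame every 5 minutes and append it to the growing MJPEG file
while true; do
    wget -qO frame.jpg "http://camera.local/snapshot.jpg"
    cat frame.jpg >> webcam.mjpeg
    sleep 300
done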
This answers my own question; however, I was looking for something that consumes less space than the pictures would take up if each were stored on its own.
Other hints?
