I am giving one of my bots a YouTube download command, both because I'm tired of using the sketchy downloaders on the internet and because I've run out of things to code. So far my code fetches the video in question and returns the file in the chat, and it works great, except that 90% of the videos are too big to be sent. What I want to do instead is save the file to D:\bot_yt_vids\ or somewhere similar, but I cannot figure out how to do this (I have searched everywhere and nothing works for some reason).
My current code is this:
var splitMessage = message.content.split(' ');
if (splitMessage[0] === "!ytdl") {
  try {
    const vid = ytdl(splitMessage[1], { filter: format => format.container === 'mp4' })
    message.channel.send({ files: [{ attachment: vid, name: "video .mp4" }] })
  } catch (e) {
    message.channel.send("An error occured");
    console.log(e)
  }
}
ytdl() returns a readable file stream, so I suggest creating a writable stream in Node.js (fs.createWriteStream) and using the readable stream's .pipe() method to pass the data to it, which writes it to the file. I'm not that familiar with file streams, but I think that should be all you need.
readableStream#pipe
Great article on filestreams
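Something along these lines should work (an untested sketch; D:\bot_yt_vids is the folder from your question, and the fixed video.mp4 filename is just a placeholder):
const fs = require('fs');
const path = require('path');

if (splitMessage[0] === "!ytdl") {
  try {
    const vid = ytdl(splitMessage[1], { filter: format => format.container === 'mp4' });
    // pipe the readable stream into a writable file stream instead of attaching it to the message
    const outPath = path.join('D:\\bot_yt_vids', 'video.mp4');
    const file = fs.createWriteStream(outPath);
    vid.pipe(file);
    file.on('finish', () => message.channel.send(`Saved to ${outPath}`));
    file.on('error', (e) => { message.channel.send("An error occured"); console.log(e); });
  } catch (e) {
    message.channel.send("An error occured");
    console.log(e);
  }
}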
Please consider the code below:
navigator.mediaDevices.getUserMedia({audio: true}).then(function() {
  navigator.mediaDevices.enumerateDevices().then((devices) => {
    devices.forEach(function(device1, k1) {
      if (device1.kind == 'audiooutput' && device1.deviceId == 'default') {
        const speakersGroupId = device1.groupId;
        devices.forEach(function(device2, k2) {
          if (device2.groupId == speakersGroupId && ['default', 'communications'].includes(device2.deviceId) === false) {
            const speakersId = device2.deviceId;
            const constraints = {
              audio: {
                deviceId: {
                  exact: speakersId
                }
              }
            };
            console.log('Requesting stream for deviceId ' + speakersId);
            navigator.mediaDevices.getUserMedia(constraints).then((stream) => { // **this always fails**
              console.log(stream);
            });
          }
        });
      }
    });
  });
});
The code asks for permissions via the first getUserMedia, then enumerates all devices, picks the default audio output then tries to get a stream for that output.
But it will always throw the error: OverconstrainedError { constraint: "deviceId", message: "", name: "OverconstrainedError" } when getting the audio stream.
There is nothing I can do in Chrome (don't care about other browsers, tested Chrome 108 and 109 beta) to get this to work.
I see a report here that it works, but not for me.
Please tell me that I'm doing something wrong, or if there's another way to get the speaker stream that doesn't involve chrome.tabCapture or chrome.desktopCapture.
Chrome MV3 extension approaches are welcome too, not just plain HTML5.
.getUserMedia() is used to get input streams. So, when you tell it to use a speaker device, it can't comply. gUM's error reporting is, umm, confusing (to put it politely).
To use an output device, use element.setSinkId(deviceId). Make an audio or video element, then set its sink id. Here's the MDN example; it creates an audio element. You can also use a preexisting audio or video element.
const devices = await navigator.mediaDevices.enumerateDevices()
const audioDevice = devices.find((device) => device.kind === 'audiooutput')
const audio = document.createElement('audio')
await audio.setSinkId(audioDevice.deviceId)
console.log(`Audio is being played on ${audio.sinkId}`)
I have built a simple CRUD API with multer/gridfs to store image files. It works properly in terms of adding, deleting, and finding a certain image by its name. However, it does not list all the added files with find(). I am new to multer, so maybe I don't know something specific about chunks/files that makes it behave like this.
Here is my API for listing all files:
const Grid = require("gridfs-stream");
...
const conn = mongoose.connection;
conn.once("open", function () {
  gfs = Grid(conn.db, mongoose.mongo);
  gfs.collection("images");
});
...
app.get("/api/v1/files", async (req, res) => {
  try {
    const file = await gfs.files.find();
    const readStream = gfs.createReadStream(file);
    readStream.pipe(res);
  } catch (error) {
    res.send("not found");
  }
});
Here is error log for GET request:
.../WebstormProjects/irrisfileapiv1/node_modules/mongodb/lib/utils.js:113
throw err;
^
MongoError: file with id not opened for writing
at Function.create (.../WebstormProjects/irrisfileapiv1/node_modules/mongodb/lib/core/error.js:59:12)
I don't have much experience with gridfs-stream or multer, but I think we can use some context clues to start figuring out what might be happening here.
The error message is:
MongoError: file with id not opened for writing
Looking very closely, note there are two spaces here:
id not
I am guessing this is not a typo. The code generating this message is probably generating a string such as:
"file with id " + fileName " not opened for writing"
So the file name that this function is attempting to use/open is probably incorrectly set to null. In your code we can see:
const file = await gfs.files.find();
const readStream = gfs.createReadStream(file);
This looks incorrect to me. The .find() function is going to return a cursor. But as far as I can tell from the readme, the createReadStream function is expecting to receive a filter with an _id or a filename.
Also, the error message specifically references opening the file for writing. But the code snippets you've shared are just about read streams. So the specific line of code that is generating the error message you've reported may be elsewhere.
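If the goal is just to list all the files, something like this might work (a sketch based on the gridfs-stream readme, not tested; it reuses the gfs and route from your snippet):
app.get("/api/v1/files", async (req, res) => {
  try {
    // find() returns a cursor; resolve it to an array of file documents
    const files = await gfs.files.find().toArray();
    res.json(files);
  } catch (error) {
    res.send("not found");
  }
});

// to stream one specific file, pass a filter with _id or filename instead of the cursor:
// gfs.createReadStream({ filename: "some-image.png" }).pipe(res);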
Here is the problem: when I run this code I get an error saying: song_queue.connection.play is not a function. The bot joins the voice chat correctly, but the error comes when it tries to play a song. Sorry for the large amount of code, but I really want to fix this so my bot can work. I got the code from a YouTube tutorial recorded with discord.js 12.4.1 (my version is the latest, 13.1.0), and I think the error has to do with @discordjs/voice. I would really appreciate any help with getting this to work.
const ytdl = require('ytdl-core');
const ytSearch = require('yt-search');
const { joinVoiceChannel, createAudioPlayer, createAudioResource } = require('@discordjs/voice');

const queue = new Map();
// queue (message.guild.id, queue_constructor object { voice channel, text channel, connection, song[]});

module.exports = {
    name: 'play',
    aliases: ['skip', 'stop'],
    description: 'Advanced music bot',
    async execute(message, args, cmd, client, discord) {
        const voice_channel = message.member.voice.channel;
        if (!voice_channel) return message.channel.send('You need to be in a channel to execute this command');
        const permissions = voice_channel.permissionsFor(message.client.user);
        if (!permissions.has('CONNECT')) return message.channel.send('You dont have permission to do that');
        if (!permissions.has('SPEAK')) return message.channel.send('You dont have permission to do that');

        const server_queue = queue.get(message.guild.id);

        if (cmd === 'play') {
            if (!args.length) return message.channel.send('You need to send the second argument');
            let song = {};

            if (ytdl.validateURL(args[0])) {
                const song_info = await ytdl.getInfo(args[0]);
                song = { title: song_info.videoDetails.title, url: song_info.videoDetails.video_url }
            } else {
                // If the video is not a URL then use keywords to find that video.
                const video_finder = async (query) => {
                    const videoResult = await ytSearch(query);
                    return (videoResult.videos.length > 1) ? videoResult.videos[0] : null;
                }

                const video = await video_finder(args.join(' '));
                if (video) {
                    song = { title: video.title, url: video.url }
                } else {
                    message.channel.send('Error finding your video');
                }
            }

            if (!server_queue) {
                const queue_constructor = {
                    voice_channel: voice_channel,
                    text_channel: message.channel,
                    connection: null,
                    songs: []
                }

                queue.set(message.guild.id, queue_constructor);
                queue_constructor.songs.push(song);

                try {
                    const connection = await joinVoiceChannel({
                        channelId: message.member.voice.channel.id,
                        guildId: message.guild.id,
                        adapterCreator: message.guild.voiceAdapterCreator
                    })
                    queue_constructor.connection = connection;
                    video_player(message.guild, queue_constructor.songs[0]);
                } catch (err) {
                    queue.delete(message.guild.id);
                    message.channel.send('There was an error connecting');
                    throw err;
                }
            } else {
                server_queue.songs.push(song);
                return message.channel.send(`<:seelio:811951350660595772> **${song.title}** added to queue`);
            }
        }
    }
}

const video_player = async (guild, song) => {
    const song_queue = queue.get(guild.id);

    if (!song) {
        song_queue.voice_channel.leave();
        queue.delete(guild.id);
        return;
    }

    const stream = ytdl(song.url, { filter: 'audioonly' });
    song_queue.connection.play(stream, { seek: 0, volume: 0.5 }).on('finish', () => {
        song_queue.songs.shift();
        video_player(guild, song_queue.songs[0]);
    });
    await song_queue.text_channel.send(`<:seelio:811951350660595772> Now Playing **${song.title}**`)
}
Discord.js V13 and @discordjs/voice
Since a relatively recent update to the Discord.js library a lot of things have changed in the way you play audio files or streams over your client in a Discord voice channel. There is a really useful guide by Discord to explain a lot of things to you on a base level right here, but I'm going to compress it down a bit and explain to you what is going wrong and how you can get it to work.
Some prerequisites
It is important to note that for anything to do with voice channels for your bot it is necessary to have the GUILD_VOICE_STATES intent in your client. Without it your bot will not actually be able to connect to a voice channel even though it seems like it is. If you don't know what intents are yet, here is the relevant page from the same guide.
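For example, a discord.js v13 client created with that intent could look like this (the other intents are just the ones a typical message-command bot needs, so treat them as placeholders):
const { Client, Intents } = require('discord.js');

const client = new Client({
    intents: [
        Intents.FLAGS.GUILDS,
        Intents.FLAGS.GUILD_MESSAGES,
        Intents.FLAGS.GUILD_VOICE_STATES // without this the bot cannot actually join voice channels
    ]
});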
Additionally you will need some extra libraries that will help you with processing and streaming audio files. These do a lot of work in the background that you do not need to worry about, but without them playing any audio will not work. To find out what you need you can use the generateDependencyReport() function from @discordjs/voice. Here is the page explaining how to use it and what dependencies you will need. To use the function you will have to import it from the @discordjs/voice library.
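For example:
const { generateDependencyReport } = require('@discordjs/voice');
console.log(generateDependencyReport());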
Playing audio over a client
So once everything is set up you can get to how to play audio and music. You're already a good few steps along the way by using ytdl-core and getting a stream object from it, but audio is no longer played by calling .play() on the connection. Instead you will need to utilize AudioPlayer and AudioResource objects.
AudioPlayer
The AudioPlayer is essentially your jukebox. You can make one by simply calling its function and storing that in a const like so:
const player = createAudioPlayer()
This is a function from the @discordjs/voice library and will have to be imported just like generateDependencyReport().
There are a few parameters you can give it to modify its behavior, but right now that is not important. You can read more about that on its page from the Discord guide right here.
AudioResource
To get your AudioPlayer to actually play anything you will have to create an AudioResource. This is basically a version of your file or stream modified to work with the player. This is very simply done with another function from the @discordjs/voice library called createAudioResource(...). This must once again be imported. As a parameter you can pass the location of an mp3 or webm file, but you can also use a stream object like the one you have already acquired. Just pass the stream as the parameter of that function.
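For example, with the ytdl-core stream you already have:
const resource = createAudioResource(stream)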
To now play the resource there are two more steps. First you must subscribe your connection to the player. This basically tells the connection to broadcast whatever your AudioPlayer is playing. To do this simply call the .subscribe() function on your connection object with the player as a parameter like so:
connection.subscribe(player)
player.play(resource)
The second line of code you see above is how you get your player to play your AudioResource. Just pass the resource as a parameter and it will start playing. You can find more on the AudioResource side of things on its page in the Discord guide right here.
This way takes a few more steps than it did in V12, but once you get the hang of this system it really isn't that bad or difficult.
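Applied to your video_player function, the whole thing could look roughly like this (a sketch, not tested; AudioPlayerStatus is another import from @discordjs/voice, and the queue handling is kept from your code):
const { createAudioPlayer, createAudioResource, AudioPlayerStatus } = require('@discordjs/voice');

const video_player = async (guild, song) => {
    const song_queue = queue.get(guild.id);
    if (!song) {
        song_queue.connection.destroy(); // see "Leaving a voice channel" below
        queue.delete(guild.id);
        return;
    }

    const stream = ytdl(song.url, { filter: 'audioonly' });
    const resource = createAudioResource(stream); // wrap the stream for the player
    const player = createAudioPlayer();           // the "jukebox"

    song_queue.connection.subscribe(player);      // broadcast whatever the player plays
    player.play(resource);

    // when the current track finishes the player goes idle; move on to the next song
    player.on(AudioPlayerStatus.Idle, () => {
        song_queue.songs.shift();
        video_player(guild, song_queue.songs[0]);
    });

    await song_queue.text_channel.send(`Now Playing **${song.title}**`);
}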
Leaving a voice channel
There is another thing that is going wrong in your code when you try to leave a voice channel. I can see that you did figure out how to join in V13 already, but .leave() unfortunately is no longer a valid function. Now, to leave a voice channel you must retrieve the connection object that you get from calling joinVoiceChannel(...) and call either .disconnect() or .destroy() on it. They are almost the same, but the latter also makes it so that you cannot use the connection again.
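If you do not have the connection object stored somewhere (like the queue constructor in your code), you can retrieve it again with getVoiceConnection from @discordjs/voice:
const { getVoiceConnection } = require('@discordjs/voice');

const connection = getVoiceConnection(guild.id);
connection.destroy(); // or connection.disconnect() if you might reuse it later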
I've been trying different things all day, but nothing seems to offer a simple, straightforward way to write a ReadableStream (which is an image) to a file. I'm calling an API which returns a ReadableStream, but what then? I tried digging into the object a bit more and followed it all the way down to a Buffer[], which seems like it should be what goes into fs.writeFile(), but nothing works. The file gets created, but when I try to open the image it says it can't open that file type (which file type it's talking about, I have no idea).
Here is my code that returns a Buffer[]. I can also cut off some of those chains to only return the ReadableStream body, but that returns a PullThrough and I am already so lost; there is very little about that class online. Any suggestions?
Here is the api I'm using: https://learn.microsoft.com/en-us/javascript/api/#azure/cognitiveservices-computervision/computervisionclient?view=azure-node-latest#generatethumbnail-number--number--string--computervisionclientgeneratethumbnailoptionalparams--servicecallback-void--
// Image of a dog.
const dogURL = 'https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png';
await computerVisionClient.generateThumbnail(100, 100, dogURL, { smartCropping: true })
  .then((thumbResponse) => {
    console.log(thumbResponse.readableStreamBody.readableBuffer.head.data)
    fs.writeFile("thumb.jpg", thumbResponse.readableStreamBody.readableBuffer.head.data, "binary", (err) => {
      console.log('Thumbnail saved')
      if (err) throw err
    })
  })
Finally found a solution. I don't understand pipe() all that well, but when it's called on a ReadableStream with a writable file stream (created from a file path) as the parameter, it works.
The API response thumbResponse.readableStreamBody was the ReadableStream. So anyone who has a readable stream can use this solution. No need to call an API for anything.
// Image of a dog.
const dogURL = 'https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png';
await computerVisionClient.generateThumbnail(100, 100, dogURL, { smartCropping: true })
  .then((thumbResponse) => {
    const destination = fs.createWriteStream("thumb.png")
    thumbResponse.readableStreamBody.pipe(destination)
    // log once the file has actually been written
    destination.on('finish', () => console.log('Thumbnail saved'))
  })
I am trying to convert some PDF files into answer units with Watson's Document Conversion service. These files are all zipped up into one big .zip file, which is uploaded to my Bluemix server running a Node.js application. The application unzips the files in memory and tries to send each one in turn to the conversion service:
var document_conversion = watson.document_conversion(dcCredentials);

function createCollection(res, solrClient, docs)
{
  for (var doc in docs) // docs is an array of objects describing the pdf files
  {
    console.log("Converting: %s", docs[doc].filename);
    // make a stream of this pdf file
    var rs = new Readable; // create the stream
    rs.push(docs[doc].data); // add pdf file (string object) to stream
    rs.push(null); // end of stream marker
    document_conversion.convert(
      {
        file: rs,
        conversion_target: "ANSWER_UNITS"
      },
      function (err, response)
      {
        if (err)
        {
          console.log("Error converting doc: ", err);
          .
          .
          .
etc...
Every time, the conversion service returns error 400 with the description "Error in the web application".
After scratching my head for two days trying to figure out the cause of this rather unhelpful error message, I have pretty much decided that the problem must be that the conversion service can't figure out what type of file is being sent, since there's no filename associated with it. This of course is just a guess on my part, but I can't test this theory because I don't know how to provide that information to the service without actually writing each file to disk and reading it back.
Can anyone help?
Updated: The problem is in how the underlying form-data library handles streams: it doesn't calculate the length of streams (with the exception of file and request streams, which it has extra logic to handle).
The getLengthSync() method DOESN'T calculate the length for streams; use the knownLength option as a workaround.
I found two ways around this. Calculate the length yourself and pass it as an option:
document_conversion.convert({
file: { value: rs, options: { knownLength: 12345 } }
...
Or use a Buffer:
document_conversion.convert({
file: { value: myBuffer, options: {} }
...
The reason you were getting a 400 response is that the Content-Length header of your request was incorrectly calculated: the length was too small for the request, causing the MIME part of the request to be truncated (and not closed).
I suspect this is due to the Readable stream not providing a length or size for your content when the request library calculates the size of the entity.
Also, apologies for the useless error message. We'll make that better.
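Applied to the loop in your question, the two workarounds would look roughly like this (a sketch; the 'binary' encoding assumes docs[doc].data is a binary string, and convertCallback stands in for the callback you already have):
// Option 1: keep the Readable stream (the rs from your loop) but tell form-data its length
document_conversion.convert({
  file: { value: rs, options: { knownLength: Buffer.byteLength(docs[doc].data, 'binary') } },
  conversion_target: "ANSWER_UNITS"
}, convertCallback);

// Option 2: skip the stream entirely and pass the data as a Buffer
document_conversion.convert({
  file: { value: Buffer.from(docs[doc].data, 'binary'), options: {} },
  conversion_target: "ANSWER_UNITS"
}, convertCallback);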
The code below iterates over a zip file and converts each document to ANSWER_UNITS.
It uses node-unzip-2, and the zip file documents.zip contains these 3 sample files.
var unzip = require('node-unzip-2');
var watson = require('watson-developer-cloud');
var fs = require('fs');

var document_conversion = watson.document_conversion({
  username: 'USERNAME',
  password: 'PASSWORD',
  version_date: '2015-12-01',
  version: 'v1'
});

function convertDocument(doc) {
  document_conversion.convert({
    file: doc,
    conversion_target: document_conversion.conversion_target.ANSWER_UNITS,
  }, function (err, response) {
    if (err) {
      console.error(doc.path, 'error:', err);
    } else {
      console.log(doc.path, 'OK');
      // hide the results for now
      //console.log(JSON.stringify(response, null, 2));
    }
  });
}

fs.createReadStream('documents.zip')
  .pipe(unzip.Parse())
  .on('entry', function (entry) {
    if (entry.type === "File") {
      convertDocument(entry);
    } else {
      // Prevent out of memory issues calling autodrain for non processed entries
      entry.autodrain();
    }
  });
Example output:
$ node app.js
sampleHTML.html OK
sampleWORD.docx OK
samplePDF.pdf OK