How do I get the downloadId of a downloaded file? - google-chrome-extension

I want to be able to use an extension to download a file and memorize the downloadId for later use. My original attempt at doing this was by using the callback function and assigning the downloadId as a variable like this:
chrome.downloads.download({
    url: downloadURL
  },
  function getDownloadId(downloadId) {
    var thisDownloadId = downloadId;
  }
);
But this caused my Chromebook to crash, so I don't think that will work.
I attempted to use the solution from this question. However, when I tried that solution, it ended up with this error:
TypeError: Cannot read property 'current' of undefined.
This is the snippet of my script that downloads the file (popup.js)
chrome.downloads.download({
  url: downloadURL
});
And the snippet of where it waits for the file to download, which currently only displays the downloadId for testing (popup.js)
chrome.downloads.onChanged.addListener(function (detail) {
  if (detail.state.current == "complete") { // This is where the error occurs
    var downloadId = detail.id;
    alert("downloadId: " + downloadId);
  }
});
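As an aside, the TypeError is most likely because onChanged fires with a delta object that only contains the properties that actually changed, so detail.state is often undefined. A guarded version of the listener (a sketch, not part of the original post) might look like this:

chrome.downloads.onChanged.addListener(function (detail) {
  // detail only carries the properties that changed in this event,
  // so check that state is present before reading state.current.
  if (detail.state && detail.state.current === "complete") {
    alert("downloadId: " + detail.id);
  }
});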

The downloadId variable was getting lost because it only existed inside the callback function's scope. I needed to use the chrome.storage API to get it out.
The download function (modified)
chrome.downloads.download({
    url: downloadURL
  },
  function (downloadId) {
    chrome.storage.local.set({ 'downloadId': downloadId }, function () {
      console.log("Stored download ID.");
    });
  }
);
The downloadId alert function:
chrome.storage.local.get(['downloadId'], function (result) {
  var downloadId = result.downloadId;
  alert(downloadId);
});
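Once the ID is persisted, it can be fed back into the downloads API later. For example, a hedged sketch that looks the download up again (chrome.downloads.search is standard API; the logging is just illustrative):

chrome.storage.local.get(['downloadId'], function (result) {
  // Look the download up by the stored ID to check its current state.
  chrome.downloads.search({ id: result.downloadId }, function (items) {
    if (items.length && items[0].state === "complete") {
      console.log("File is at: " + items[0].filename);
    }
  });
});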

Related

Why is the second Jest mock function never being called?

I am mocking navigator functions for simple clipboard functionality. Here is the relevant code:
// FUNCTION
/**
 * Adds a click event to the button which will save a string to the navigator clipboard. Checks for
 * clipboard permissions before copying.
 */
function loader(): void {
  async function copyUrl(): Promise<void> {
    const permission = await navigator.permissions.query({ name: "clipboard-write" });
    if (permission.state == "granted" || permission.state == "prompt") {
      await navigator.clipboard.writeText("the url");
    } else {
      console.error('Permission not supported');
    }
  }
  const button = document.querySelector('button') as HTMLElement;
  button.addEventListener('click', async () => {
    await copyUrl();
  });
}
// TEST
it('works', () => {
  // mock navigator functions
  Object.assign(navigator, {
    permissions: {
      query: jest.fn(async () => ({ state: "granted" }))
    },
    clipboard: {
      writeText: jest.fn(async () => {})
    }
  });
  // initialize DOM
  document.body.innerHTML = '<button></button>';
  loader(); // adds the event listener
  // click the button!
  const button = document.querySelector('button') as HTMLElement;
  button.click();
  expect(navigator.permissions.query).toHaveBeenCalledTimes(1);
  expect(navigator.clipboard.writeText).toHaveBeenCalledWith('the url');
});
The test fails on expect(navigator.clipboard.writeText).toHaveBeenCalledWith('the url') with:
Expected: "the url" Number of calls: 0
Defeats the purpose of permissions, yes, but for the sake of debugging:
Try adding a clipboard call before the permissions call, like so:
// FUNCTION
// ...
async function copyUrl(): Promise<void> {
  // add this
  await navigator.clipboard.writeText('the url');
  // keep the rest still
  const permission = await navigator.permissions.query({ name: "clipboard-write" });
  // ...
}
This fails on the first assertion now, expect(navigator.permissions.query).toHaveBeenCalledTimes(1) with
Expected number of calls: 1 Received number of calls: 0
With the addition above, I also changed the assertions to be:
expect(navigator.clipboard.writeText).toHaveBeenCalledWith('the url');
expect(navigator.clipboard.writeText).toHaveBeenCalledTimes(2);
expect(navigator.permissions.query).toHaveBeenCalledTimes(1);
... which failed on the second assertion because it expected 2 calls but only received 1.
I have been testing in a VSCode devcontainer and tried out the extension firsttris.vscode-jest-runner to debug the test. With breakpoints in the loader function, I'm able to see that every single line executes perfectly with my mocks, but the test still fails at the end of the debug run.
I even changed the mock navigator.permissions.query function to return { state: 'denied' } instead. Both when running and when debugging, it did not satisfy the permission check and logged an error to the console as expected, but the test still failed at expect(navigator.permissions.query).toHaveBeenCalledTimes(1) (with the added writeText call before it).
It seems to me that after the first call of a mock function, the others just don't work.
Am I missing something? Send help pls lol
EDITS
Using jest.spyOn as in this answer has the same issues.
Using an async test with an expect.assertions(n) assertion still produces the exact same issue.
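For what it's worth, the symptoms are consistent with a timing issue rather than broken mocks: button.click() dispatches the event synchronously, but the async listener's awaits resolve as microtasks after the test's assertions have already run. A minimal sketch of flushing those pending promises before asserting (assuming real timers; the setTimeout trick is just one way to yield to the event loop):

it('works', async () => {
  // ... same mock setup, DOM setup, loader() and button.click() as above ...

  // Yield to the event loop so the async click handler's awaited
  // promises (permissions.query, clipboard.writeText) can settle.
  await new Promise((resolve) => setTimeout(resolve, 0));

  expect(navigator.permissions.query).toHaveBeenCalledTimes(1);
  expect(navigator.clipboard.writeText).toHaveBeenCalledWith('the url');
});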

cy.readFile results in timeout?

I am trying to read a JSON file that I have just written in another test in the same Cypress project. However, when it tries to read the file, it times out after 4000 milliseconds.
Has anyone experienced this before? How could I solve it?
I have tried increasing the timeout by passing an options object, but that doesn't change the timeout. I thought it might be a file permissions issue, but that doesn't seem to be it either.
I am running on Mac but tried the same project on Windows with the same result.
before('grab generated user data', function () {
  let data = cy.readFile("Generated User/Cypress test 131.json", { log: true, timeout: 180000 });
});
I expect it to just give back the parsed JSON object, as it says in the Cypress docs (https://docs.cypress.io/api/commands/readfile.html#Syntax).
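Note that cy.readFile does not return the parsed object directly; like other Cypress commands it yields its result asynchronously, so it is normally consumed with .then() or chained assertions. A minimal sketch (the path is the one from the question, assumed to be relative to the project root):

before('grab generated user data', function () {
  cy.readFile("Generated User/Cypress test 131.json", { timeout: 180000 }).then((data) => {
    // data is the parsed JSON object yielded by the command
    expect(data).to.have.property('name');
  });
});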
1. Your file should be in the project directory, where the cypress.json file is present.
2. Your file name should be Cypresstest131.json or Cypress-test-131.json.
before('grab generated user data', function () {
  let data = cy.readFile("Cypresstest131.json", { log: true, timeout: 4000 });
  data.its('name').should('eq', 'Eliza');
});
or
before('grab generated user data', function () {
  cy.readFile("Cypress-test-131.json", { log: true, timeout: 4000 }).its('name').should('eq', 'Eliza');
});
Hope this helps you.
I ended up creating a data.json with cy.createFileSync(). When I want to read the file, instead of using Cypress's cy.readFile() function, I created a Cypress task that uses the fs library to read the file.
I am leaving you with the code snippet of the task I used to read the file.
const fs = require('fs');
const request = require('request');

module.exports = (on, config) => {
  on('task', {
    // Cypress task to get the last ID
    getLastId: () => {
      // Make a promise to tell Cypress to wait for this task to complete
      return new Promise((resolve) => {
        fs.readFile("data.json", 'utf8', (err, data) => {
          if (data !== null && data !== undefined) {
            data = JSON.parse(data);
            if (data.hasOwnProperty('last_id')) {
              resolve(data.last_id);
            } else {
              resolve("Missing last_id");
            }
          } else {
            resolve(err);
          }
        });
      });
    }
  });
};
Calling this task would be as simple as:
let id = 0;
before('grab generated user data', function () {
  cy.task('getLastId').then((newID) => {
    id = newID;
  });
});

in firebase cloud function the bucket.upload promise resolves too early

I wrote a function that works like this:
onNewZipFileRequested
{get all the necessary data}
.then{download all the files}
.then{create a zipfile with all those files}
.then{upload that zipfile} (*here is the problem)
.then{update the database with the signedUrl of the file}
Here is the relevant code
[***CREATION OF ZIP FILE WORKING****]
}).then(() => {
  zip.generateNodeStream({ type: 'nodebuffer', streamFiles: true })
    .pipe(fs.createWriteStream(tempPath))
    .on('finish', function () {
      console.log("zip written.");
      return bucket.upload(tempPath, { //**** problem****
        destination: destinazionePath
      });
    });
}).then(() => {
  const config = {
    action: 'read',
    expires: '03-09-2391'
  }
  return bucket.file(destinazionePath).getSignedUrl(config)
}).then(risultato => {
  const daSalvare = {
    signedUrl: risultato[0],
    status: 'fatto',
    dataInserimento: zipball.dataInserimento
  }
  return event.data.ref.set(daSalvare)
})
On the client side, as soon as the app sees the status change and the new URL, a download button (pointing to the new URL) appears.
Everything is working, but if I try to download the file immediately... there is no file yet!!!
If I wait some time and retry, the file is there.
I noted that the time I have to wait depends on the size of the zipfile.
The bucket.upload promise should resolve at the end of the upload, but apparently it fires too early.
Is there a way to know exactly when the file is ready?
I may have to make some very big files; it's not a problem if the process takes several minutes, but I need to know when it's over.
* EDIT *
There was an unnecessary nesting in the code. While it was not the error (results are the same before and after refactoring), it was causing some confusion in the answers, so I edited it out.
I'd like to point out that I update the database only after getting the signed URL, and I get that only after the upload (I could not otherwise), so to get any result at all the promise chain MUST work, and in fact it does. When the download button appears on the client side (which happens when 'status' becomes 'fatto'), it is already linked to the correct signed URL, but if I press it too early the file is not there (Failed - No file). If I wait some seconds (the bigger the file, the longer I have to wait), then the file is there.
(English is not my mother language; if I have been unclear, ask and I will try to explain myself better.)
It looks like the problem could be that the braces are not aligned properly, causing a then statement to be embedded within another. Here is the code with the then statements separated:
[***CREATION OF ZIP FILE WORKING****]
}).then(() => {
  zip.generateNodeStream({type: 'nodebuffer', streamFiles: true})
    .pipe(fs.createWriteStream(tempPath))
    .on('finish', function () {
      console.log('zip written.')
      return bucket.upload(tempPath, {
        destination: destinazionePath
      })
    })
}).then(() => {
  const config = {
    action: 'read',
    expires: '03-09-2391'
  }
  return bucket.file(destinazionePath).getSignedUrl(config)
}).then(risultato => {
  const daSalvare = {
    signedUrl: risultato[0],
    status: 'fatto',
    dataInserimento: zipball.dataInserimento
  }
  return event.data.ref.set(daSalvare)
})
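For context, the likely root cause (not spelled out in the answer above) is that the bucket.upload promise is returned from the stream's 'finish' handler, not from the .then() callback itself, so the chain moves on to getSignedUrl before the upload has completed. One hedged way to make the chain actually wait, reusing the same zip, tempPath, bucket and destinazionePath variables from the question:

}).then(() => {
  // Wrap the write stream and the upload in a single promise that the
  // chain can wait on; resolve only after bucket.upload itself resolves.
  return new Promise((resolve, reject) => {
    zip.generateNodeStream({ type: 'nodebuffer', streamFiles: true })
      .pipe(fs.createWriteStream(tempPath))
      .on('error', reject)
      .on('finish', () => {
        console.log('zip written.');
        bucket.upload(tempPath, { destination: destinazionePath })
          .then(resolve)
          .catch(reject);
      });
  });
}).then(() => {
  // Only now is the file fully uploaded, so the signed URL will point
  // at an object that actually exists.
  return bucket.file(destinazionePath).getSignedUrl({ action: 'read', expires: '03-09-2391' });
})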

Calling external function from within Phantomjs+node.js

I'm going to be honest. I'm way in over my head here.
I need to scrape data from a dynamic site for my employer. Before the data is visible on the page, there are some clicks and waits necessary. Simple PHP scraping won't do. So I found out about this NodeJS + PhantomJS combo. Quite a pain to set up, but I did manage to load a site, run some code and get a result.
I wrote a piece of jQuery which uses timeout loops to wait for some data to be loaded. Eventually I get a js object that I want to write to a file (JSON).
The issue I'm facing:
I build up the JS object inside the PhantomJS .evaluate() scope, which runs in a headless browser, so not directly in my Node.js server scope. How do I send the variable I built up inside evaluate back to my server so I can write it to my file?
Some example code (I know it's ugly, but it's for illustrative purposes). I use node-phantom-simple as a bridge between Phantom and Node.
var phantom = require('node-phantom-simple'),
    fs = require('fs'),
    webPage = 'https://www.imagemedia.com/printing/business-card-printing/'

phantom.create(function(err, ph) {
  return ph.createPage(function(err, page) {
    return page.open(webPage, function(err, status) {
      page.onConsoleMessage = function(msg) {
        console.log(msg);
      };
      console.log("opened site? ", status);
      page.evaluate(function() {
        setTimeout(function() {
          $('.price-select-cnt').eq(0).find('select').val('1266').change()
          timeOutLoop()
          function timeOutLoop() {
            console.log('looping')
            setTimeout(function() {
              if ($('#ajax_price_tool div').length != 6) {
                timeOutLoop()
              } else {
                $('.price-select-cnt').eq(1).find('select').val('25')
                $('.price-select-cnt').eq(2).find('select').val('Premium Card Stock')
                $('.price-select-cnt').eq(3).find('select').val('Standard').change()
                timeOutLoop2()
              }
            }, 100)
          }
          function timeOutLoop2() {
            console.log('looping2')
            setTimeout(function() {
              if ($('.pricing-cost-cnt').text() == '$0' || $('.pricing-cost-cnt').text() == '') {
                timeOutLoop2()
              } else {
                var price = $('.pricing-cost-cnt').text()
                console.log(price)
              }
            }, 100)
          }
        }, 4000)
      });
    });
  });
});

function writeJSON(plsWrite) {
  var key = 'file'
  fs.writeFile('./results/' + key + '.json', plsWrite, 'utf8', function() {
    console.log('The JSON file is saved as');
    console.log('results/' + key + '.json');
  });
}
So how do I take the price this code scrapes from the website, get it out of the evaluate scope, and write it to a file?
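Since no answer is included here, one hedged approach that fits the code above: the page already forwards console output to Node via page.onConsoleMessage, so the evaluated code can emit the price with a recognizable prefix and the Node side can pick it out and call writeJSON. A sketch under that assumption (the 'PRICE::' prefix is made up for illustration):

// Inside the Node script, instead of the plain logger:
page.onConsoleMessage = function(msg) {
  if (msg.indexOf('PRICE::') === 0) {
    // Strip the prefix and persist the scraped value.
    writeJSON(JSON.stringify({ price: msg.slice('PRICE::'.length) }));
  } else {
    console.log(msg);
  }
};

// Inside page.evaluate, where the price becomes available:
// console.log('PRICE::' + price)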

A way to make onDeterminingFileName and NativeMessaging work together

I'm trying to determine if a file is already in my system before I download it.
I get the file name from the server through
chrome.downloads.onDeterminingFilename.addListener(function (item, suggest) {
  var message = { "fileName": item.filename, "directory": directory };
  suggest({
    filename: item.filename,
    conflict_action: 'prompt',
    conflictAction: 'prompt'
  });
  // conflict_action was renamed to conflictAction in
  // https://chromium.googlesource.com/chromium/src/+/f1d784d6938b8fe8e0d257e41b26341992c2552c
  // which was first picked up in branch 1580.
});
which works great.
I am checking the filename through native messaging.
chrome.runtime.sendNativeMessage("testcsharp", message, function (response) {
  if (chrome.runtime.lastError) {
    alert("ERROR: " + chrome.runtime.lastError.message);
  } else {
    isDownloaded = JSON.parse(JSON.stringify(response)).data;
  }
});
The problem is that they are both asynchronous, but I need to avoid the Save As dialog if my file already exists. The only way I've found to do this is to call chrome.downloads.cancel(item.id), but there is no guarantee that the native messaging response will come in time to cancel inside onDeterminingFilename. Is there a way to do this, or am I just stuck notifying the user that the file already exists and closing the save dialog manually?
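One hedged option worth noting: chrome.downloads.onDeterminingFilename allows the listener to respond asynchronously if it returns true, in which case suggest() can be called after the native messaging reply arrives. A sketch combining the two snippets above under that assumption:

chrome.downloads.onDeterminingFilename.addListener(function (item, suggest) {
  var message = { "fileName": item.filename, "directory": directory };
  chrome.runtime.sendNativeMessage("testcsharp", message, function (response) {
    var isDownloaded = response && response.data;
    if (isDownloaded) {
      // The file is already on disk: cancel instead of showing Save As.
      chrome.downloads.cancel(item.id);
      suggest(); // keep Chrome's default name; the download is being cancelled anyway
    } else {
      suggest({ filename: item.filename, conflictAction: 'prompt' });
    }
  });
  return true; // tells Chrome that suggest() will be called asynchronously
});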
