Bulk Excel Upload from Angular shows Page Unresponsive - node.js

I am uploading an Excel file from the UI (Angular 8 and Node 14) and processing it, but while extracting the data from a bulk file (about 1800 rows, almost a 12 MB file) Chrome shows the Page Unresponsive pop-up several times. I understand why: the browser is using a lot of memory while parsing, and the result is the Page Unresponsive pop-up. Is there any way to disable or suppress this pop-up, or any other solution to this issue?
Thanks in advance.
The working TS file code is attached below.
code:
onFileChange(evt: any) {
  const target: DataTransfer = (evt.target) as DataTransfer;
  const selectedFile = evt.target.files[0];
  const reader: FileReader = new FileReader();

  reader.onload = (e: any) => {
    const bstr: string = e.target.result;
    const wbk: XLSX.WorkBook = XLSX.read(bstr, { type: 'binary' });

    wbk.SheetNames.forEach(sheet => {
      setTimeout(() => {}, 250); // trying to use a timeout to let the UI catch up, but it does not help
      const data1 = XLSX.utils.sheet_to_json(wbk.Sheets[sheet]);
      console.log(data1);
    });
  };

  reader.readAsBinaryString(selectedFile);
}
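Since the question asks for a solution rather than just hiding the pop-up: one common fix is to parse the workbook off the main thread so Chrome never flags the page as unresponsive. Below is a minimal sketch under a few assumptions (the SheetJS xlsx package, Angular CLI web-worker support via ng generate web-worker); the worker file name and paths are illustrative, not from the original post.
// xlsx.worker.ts — parses the workbook off the main thread
import * as XLSX from 'xlsx';

addEventListener('message', ({ data }: MessageEvent) => {
  const wbk: XLSX.WorkBook = XLSX.read(new Uint8Array(data), { type: 'array' });
  const rows = wbk.SheetNames.map(sheet => XLSX.utils.sheet_to_json(wbk.Sheets[sheet]));
  postMessage(rows);
});
The component then reads the file as an ArrayBuffer and hands it to the worker, keeping the UI thread free:
// Component side (sketch): transfer the raw bytes to the worker
onFileChange(evt: any) {
  const selectedFile = evt.target.files[0];
  const reader = new FileReader();
  reader.onload = (e: any) => {
    const worker = new Worker('./xlsx.worker', { type: 'module' }); // illustrative path
    worker.onmessage = ({ data }) => {
      console.log(data); // one array of rows per sheet
      worker.terminate();
    };
    worker.postMessage(e.target.result, [e.target.result]); // transfer, don't copy, the 12 MB buffer
  };
  reader.readAsArrayBuffer(selectedFile);
}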

Related

Why doesn't Chromium open in headless mode?

I have the following Node.js code to open Chromium in headless mode and record a web page to a video:
const { launch, getStream } = require("puppeteer-stream");
const fs = require("fs");
const { exec } = require("child_process");

async function test() {
  const browser = await launch({ headless: true });
  const page = await browser.newPage();
  await page.goto("https://www.someurl.com");

  const stream = await getStream(page, { audio: true, video: true });

  // record the web page to mp4 video
  const ffmpeg = exec('ffmpeg -y -i - output.mp4');
  stream.pipe(ffmpeg.stdin);

  setTimeout(async () => {
    await stream.destroy();
    stream.on("end", () => {});
  }, 1000 * 60);
}
The code above works properly but does not open Chromium in headless mode. No matter what I do, the browser is still opened and visible while browsing the page. No error is thrown.
Does anyone know why it is not opened in headless mode?
Thanks
It says in the documentation for puppeteer-stream:
Notice: This will only work in headful mode
This is due to a limitation of Chromium: the Tab Capture API used by the extension doesn't work in headless mode. (There are a couple of bug reports about this, but I can't find the links at the moment.)
I had the same issue: headless mode doesn't work with some websites and elements (blank page content, elements not being found, etc.).
There is, however, another way to "simulate" headless mode: shrink the window and move it to a position the user cannot see.
This doesn't hide the Chrome task from the taskbar, but the Chrome window itself stays out of the user's sight.
Just use the following arguments (C#/Selenium example):
var chromeOptions = new ChromeOptions();
chromeOptions.AddArguments(new List<string>() { "--window-size=1,1", "--window-position=-2000,0" }); // This hides the Chrome window off-screen
var chromeDriverService = ChromeDriverService.CreateDefaultService();
chromeDriverService.HideCommandPromptWindow = true; // This hides the console window.
ChromeDriver driver = new ChromeDriver(chromeDriverService, chromeOptions);
driver.Navigate().GoToUrl("https://google.com");
In short, the important part:
chromeOptions.AddArguments(new List<string>() { "--window-size=1,1", "--window-position=-2000,0" });
chromeDriverService.HideCommandPromptWindow = true;
//driver.Manage().Window.Minimize(); // use this if the code above does not work
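For the Node/puppeteer-stream case in the question, the same off-screen-window idea looks roughly like the sketch below. It assumes puppeteer-stream's launch() forwards standard Puppeteer launch args, which may vary by version.
const { launch } = require("puppeteer-stream");

async function launchHidden() {
  // Tab Capture only works headful, so keep headless: false and park the
  // window where it cannot be seen instead.
  const browser = await launch({
    headless: false,
    args: ["--window-size=1,1", "--window-position=-2000,0"],
  });
  return browser;
}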

jsPDF saving the file without extension when sent to nodejs/expressjs server

I have a class-based React component that takes data from a user. The data is then fed to jsPDF.
let doc = new jsPDF();
doc.save()
This works fine; it saves the file with a .pdf extension.
The problem is that I am now sending this file to the Express.js backend.
const pdf = new Blob([this.state.doc.output("blob")], {
type: "application/pdf",
});
OR
const pdf = this.state.doc.output("blob");
Node.js
I am using Formidable.js for receiving the file.
const newPath = files.pdf.path;
The file gets saved without an extension.
I also tried this:
const newPath = `${files.pdf.path}.pdf`;
This appends .pdf to the string that is saved to MongoDB, but the file written to disk still has no extension.
Solved.
https://github.com/node-formidable/formidable/issues/680
//front-end
const pdf = new File([doc.output("blob")], "myDoc.pdf", {
type: "application/pdf",
});
//Node
const newPath = files.pdf.path;
Thanks to https://github.com/GrosSacASac
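For completeness, a minimal sketch (not from the original post) of how the named File can be posted to Express with FormData; the field name "pdf" has to match what Formidable exposes as files.pdf on the server, and the /upload route is hypothetical.
const formData = new FormData();
formData.append("pdf", pdf); // the File created above, carrying its "myDoc.pdf" name
fetch("/upload", { method: "POST", body: formData })
  .then(res => res.json())
  .then(result => console.log(result));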

Node.js Google Cloud Storage Get Multiple Files' Metadata

I have several files on Google Cloud Storage named 0.jpg, 1.jpg, 2.jpg, etc. I want to get the metadata for each file without setting the file names separately, and then send this metadata to a React application. In the React application, when someone clicks an image, a popup displays the metadata for that image.
For only one file, I used the following code:
const express = require("express");
const cors = require("cors");
// Imports the Google Cloud client library
const { Storage } = require("@google-cloud/storage");

const bucketName = "bitirme_1";
const filename = "detected/0.jpg";

const storage = new Storage();
const app = express();

app.get("/api/metadata", cors(), async (req, res, next) => {
  try {
    // Gets the metadata for the file
    const [metadata] = await storage
      .bucket(bucketName)
      .file(filename)
      .getMetadata();
    const metadatas = [
      { id: 0, name: `Date: ${metadata.updated.substring(0, 10)}, Time: ${metadata.updated.substring(11, 19)}` },
      { id: 1, name: metadata.contentType }
    ];
    res.json(metadatas);
  } catch (e) {
    next(e);
  }
});

const port = 5000;
app.listen(port, () => console.log(`Server started on port ${port}`));
I first set the bucket name. Then I set the filename array as follows (89 is the number of files):
const filename = Array(89).fill(1).map((_, i) => ('detected/' + i + '.jpg'));
These files are in the detected folder. When I try this, it gives me this error:
Error: No such object: bitirme_1/detected/0.jpg, detected/1.jpg, detected/2.jpg, detected/3.jpg, detected/4.jpg,detected/5.jpg,detected/6.jpg, ....
How can I solve the problem of getting metadata for multiple files?
Also, I want to get the number of files in a bucket (or in the detected folder). I searched the API but could not find anything. I do not want to hard-code the total number of files as 89; I want to get it from the API.
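For reference, .file() accepts a single object name, which is why passing the whole array produces the "No such object" error above. One way to fetch metadata for every file (a sketch, not from the original post) is to request each object individually and wait for all of them:
const filenames = Array(89).fill(1).map((_, i) => `detected/${i}.jpg`);

const results = await Promise.all(
  filenames.map(async (name) => {
    const [metadata] = await storage.bucket(bucketName).file(name).getMetadata();
    return { name, updated: metadata.updated, contentType: metadata.contentType };
  })
);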
I found the solution for getting the number of files in a bucket (or in a folder within a bucket). This is the solution:
const [files] = await storage.bucket(bucketName).getFiles();

// e.g. "detected/12.jpg" -> "12", "detected/0.jpg" -> "0."
const fileStrings = files.map(file => file.name);
const fileSliced = fileStrings.map(el => el.slice(9, 11));

// Single-digit names pick up the trailing "." from ".jpg"; strip it.
for (let i = 0; i < fileSliced.length; i++) {
  if (fileSliced[i].includes('.')) {
    fileSliced[i] = fileSliced[i].slice(0, 1);
  }
}

const fileNumbers = fileSliced.map(function (item) {
  return parseInt(item, 10);
});

const numOfFiles = Math.max(...fileNumbers) + 1;
console.log(numOfFiles);
First, I got all the file names in a string array. In my case the names are detected/0.jpg, detected/1.jpg, detected/2.jpg, etc. I only want the number part of each name, so I sliced each string from index 9 up to (but not including) index 11. That gives the numbers, except for the one-digit names.
To handle the one-digit case, where the sliced name ends with a '.', I also stripped the '.' from those names.
As a result, I got ['0', '1', '2', '3', ...]. Next, I converted this string array to a number array using parseInt. Finally, to get the number of files, I took the maximum of the array and added 1.
I have an image detail component that shows the location of the sender's IP, a download option, and a control to close the popup. This detail popup opens at /#i, where i is the name of the image file, such as 1.jpg or 2.jpg. So, for example, when I click the first image, the popup opens at /#1. In this popup I want to show the metadata for the opened image, but I could not find a solution for this.
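One way to wire that popup up (a sketch under the question's assumptions: an Express app, the @google-cloud/storage client, and the detected/<n>.jpg naming) is a route parameter per image, so the popup opened at /#1 can call /api/metadata/1. Note that storage.bucket(bucketName).getFiles({ prefix: "detected/" }) also returns an array whose length gives the file count directly.
app.get("/api/metadata/:id", cors(), async (req, res, next) => {
  try {
    const [metadata] = await storage
      .bucket(bucketName)
      .file(`detected/${req.params.id}.jpg`)
      .getMetadata();
    res.json({
      date: metadata.updated.substring(0, 10),
      time: metadata.updated.substring(11, 19),
      contentType: metadata.contentType,
    });
  } catch (e) {
    next(e);
  }
});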

Why doesn't this .js code open the browser when I run it?

I want to open a website with this code, but it doesn't work. What's the problem? No errors are shown.
const Nightmare = require('nightmare');
var d = Nightmare({show:true});
d.goto('https://duckduckgo.com').wait(3000).end().then(result => {});
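One thing worth checking (not part of the original post): the chain ends in a then() with an empty handler and no catch(), so a failure may only surface as an unhandled-rejection warning, or not at all depending on the Node version. Attaching a catch makes any underlying error visible:
const Nightmare = require('nightmare');
const d = Nightmare({ show: true });

d.goto('https://duckduckgo.com')
  .wait(3000)
  .end()
  .then(result => console.log('done', result))
  .catch(err => console.error('Nightmare failed:', err));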

Navigating to another URL during webdriver task

I am trying to log in to a website as an admin and then navigate to another page (a portal) that requires this admin login to display data. I don't think I can access the cookies because of an issue with accessing HTTPS cookies that I read up on earlier (correct me if I'm wrong).
So my current plan is to enter the URL as soon as the login process is complete and then continue with the other tasks. Could you please advise on the methods/functions I can use to do this? If there are better ways to do this, I'd also be happy to hear about those!
var webdriver = require("selenium-webdriver");
var By = require("selenium-webdriver").By;
var until = require("selenium-webdriver").until;
var assert = require("chai").assert;
var filename = "img";
var fs = require('fs');
var err = "error caught!";
var testName = "get_login_cookies";

var driver = new webdriver.Builder()
  .forBrowser('chrome')
  .build();

describe('email register', function () {
  this.timeout(25000);

  before(function (done) {
    driver.navigate().to('https://www.perlego.com/#');
    driver.manage().deleteAllCookies(); // was missing the call parentheses
    driver.manage().window().maximize()
      .then(() => done());
  });

  it('logs in with admin user and gets cookies', (done) => {
    driver.findElement(By.name('email')).sendKeys("user@example.com");
    driver.findElement(By.css('#password')).sendKeys("examplePassword");
    driver.findElement(By.css('.login-button')).click();
    // some code here to navigate to other page via url
    // runs remainder of tests
  });

  after(function (done) {
    driver.quit()
      .then(() => done());
  });
});
So I found that it was as simple as running the driver.navigate() method where I wanted to go to a new page:
driver.navigate().to('https://www.somesite.com/#');
Because of the cookie settings on the site, I was unable to access them with the webdriver, so I had to enter the password each time.
I was tripped up by AJAX calls on the page when trying to select elements; this method helped:
driver.manage().timeouts().implicitlyWait(3000);
Hope this helps someone out there!
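Putting the pieces together, the navigation and the implicit wait might slot into the test roughly as below (a sketch; the portal URL and the .portal-content selector are placeholders, not from the original posts):
it('logs in with admin user and opens the portal', async () => {
  await driver.manage().timeouts().implicitlyWait(3000); // tolerate AJAX-driven rendering
  await driver.findElement(By.name('email')).sendKeys('user@example.com');
  await driver.findElement(By.css('#password')).sendKeys('examplePassword');
  await driver.findElement(By.css('.login-button')).click();
  await driver.navigate().to('https://www.somesite.com/portal'); // placeholder portal URL
  await driver.wait(until.elementLocated(By.css('.portal-content')), 10000); // placeholder selector
});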
