I have tested this with Script Lab (https://www.microsoft.com/en-us/garage/profiles/script-lab/) and it seems that saving a document (automatically) to OneDrive breaks Word if the document contains something inserted through an add-in (reproduced with a different add-in that uses the same piece of code).
This is the code for Script Lab:
$("#run").click(run);
var content = '<span>Hello, World!</span>';
async function run() {
Word.run(function (context) {
var range = context.document.getSelection();
var cc = range.insertContentControl();
var ccRange = cc.insertHtml(content, 'replace');
cc.tag = 'citation';
cc.select('end');
context.load(cc);
context.load(range);
context.load(ccRange);
return context.sync();
})
}
The code above successfully inserts content at the selection. However, if I wait a couple of moments after inserting, Word breaks on saving and asks me to send an error report. The same code does not produce any errors in the desktop version of Word. Also, the same piece of code has been used by our add-in for months now without any issues until now.
Any ideas what this is about? Is there something wrong with the online version of Word, or should the code for inserting somehow be updated?
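For reference, here is a minimal variant of the snippet above that awaits the batch and surfaces any errors, in case the unawaited Word.run promise plays a part (this is only a sketch; the insertion logic itself is unchanged):
async function run() {
    try {
        // Await the batch so a failure inside Word.run is not silently swallowed.
        await Word.run(async function (context) {
            var range = context.document.getSelection();
            var cc = range.insertContentControl();
            cc.insertHtml(content, 'replace');
            cc.tag = 'citation';
            cc.select('end');
            await context.sync();
        });
    } catch (error) {
        console.error(error);
    }
}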
Good evening everyone, here's my situation.
I am working with Node and Express and I am getting the following error.
My PDF currently has 3 pages, but that can vary. I need to find a way to read the number of pages the PDF has; I'm using pdf.js.
So in summary:
I need this to work so that if the PDF has 3 pages it reads all 3 pages, if it has 4 it reads all 4, and so on. I was reading the information at https://mozilla.github.io/pdf.js/examples/ but it doesn't really help much. Here's a picture of what I've done.
doc.numPages returns the number of pages, but when I pass it in (in this case numPages is 3), it reads only the 3rd page.
It looks like you are only calling await doc.getPage() after counting all the pages, so you only ever get the last page.
I'd imagine you need to move the getPage and getTextContent calls into the for loop and save the results in a data structure like an array until you've read the whole PDF and are ready to return it. For example:
async function getAllPages(doc) {
    let pages = [];
    // Page numbers in pdf.js are 1-based and run up to doc.numPages inclusive.
    for (let i = 1; i <= doc.numPages; i++) {
        let page = await doc.getPage(i);
        let pageContent = await page.getTextContent();
        pages.push(pageContent);
    }
    return pages;
}
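A usage sketch, assuming pdf.js is available as pdfjsLib, 'sample.pdf' is a placeholder path, and the calls happen inside an async function:
// Load the document, then collect the text content of every page.
const doc = await pdfjsLib.getDocument('sample.pdf').promise;
const pages = await getAllPages(doc);
// Each entry is a TextContent object; join its items to get plain text per page.
const text = pages.map(p => p.items.map(item => item.str).join(' ')).join('\n');
console.log(text);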
(P.S. it's much easier to help if you paste code as text instead of sharing a screenshot)
The title pretty much explains it all. I'm trying to add objects to a JSON file from Node.js and can't seem to get it working.
Each file essentially looks like this
[{"name":name,"date":date},{"name":name,"date":date}] (in simplest terms)
I want to be able to add an object to the array that is in that file. Here is the code I came up with:
for (o in collections) {
    fs.readFile(__dirname + "/HowIsCollections/" + collections[o].mintDate, 'utf8', function (err, data) {
        const dat = JSON.parse(data)
        const existedData = []
        //console.log(existedData)
        for (i in dat) {
            existedData.push(JSON.stringify(dat[i]))
        }
        const project = JSON.stringify(collections[o])
        if (!existedData.includes(project)) {
            console.log("?")
            dat.push(project)
        }
        fs.writeFileSync(__dirname + "/HowIsCollections/" + collections[o].mintDate, JSON.stringify(dat))
        console.log("????")
    })
}
It's pretty self-explanatory. From the top, it's reading the file, getting the data, and putting all of the objects found in the file into an array.
The second half of the code stringifies each object, then compares it against the array to see if that object already exists in the array (existedData, the data from the file). If it doesn't, it adds it. Then at the end I'm just re-saving the file.
dat.push(project) adds it to dat, the array from the file.
I have similar setups like this in other parts of my code, which work. This one, however, does not; I get no errors, nothing, it just doesn't work. All of my console.logs show, but that's it.
I tried looking on here for solutions, but most of them were just about stringifying an object in fs.writeFile, which isn't what I need here.
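For comparison, a minimal sketch of the read-compare-append-write flow described above, using readFileSync instead of the readFile callback and pushing the parsed object rather than its stringified form (the file path and the shape of collections are taken from the snippet above):
const fs = require('fs')
const path = require('path')

for (const key in collections) {
    const filePath = path.join(__dirname, 'HowIsCollections', collections[key].mintDate)
    // Read and parse the existing array from the file.
    const dat = JSON.parse(fs.readFileSync(filePath, 'utf8'))
    const project = collections[key]
    // Compare stringified forms to check whether the object is already stored.
    const alreadyStored = dat.some(item => JSON.stringify(item) === JSON.stringify(project))
    if (!alreadyStored) {
        // Push the object itself, not its stringified form, so the file stays an array of objects.
        dat.push(project)
        fs.writeFileSync(filePath, JSON.stringify(dat))
    }
}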
I have a folder that will receive multiple xlsx files uploaded via Google Forms. New sheets will be added a couple of times a week, and this data will need to be added as well.
I want to convert all of these xlsx files into a single sheet that will feed a Data Studio report.
I had started working with this script:
function myFunction() {
  //folder ID
  var folder = DriveApp.getFolderById("folder ID");
  var filesIterator = folder.getFiles();
  var file;
  var filetype;
  var ssID;
  var combinedData = [];
  var data;
  while (filesIterator.hasNext()) {
    file = filesIterator.next();
    filetype = file.getMimeType();
    if (filetype === "application/vnd.google-apps.spreadsheet") {
      ssID = file.getId();
      data = getDataFromSpreadsheet(ssID);
      combinedData = combinedData.concat(data);
    } //if ends here
  } //while ends here
  Logger.log(combinedData.length);
}

function getDataFromSpreadsheet(ssID) {
  var ss = SpreadsheetApp.openById(ssID);
  var ws = ss.getSheets()[0];
  var data = ws.getRange("A:W" + ws.getLastRow()).getValues();
  return data;
}
Unfortunately that array is returning 0! I think this may be due to the xlsx issue.
1. Fetch the Excel data
Unfortunately, Apps Script cannot deal directly with Excel values. You need to first convert those files into Google Sheets to access the data. This is fairly easy to do and can be accomplished using the Drive API (you can check the documentation here) with the following two lines at the top of your code:
var filesToConvert = DriveApp.getFolderById(folderId).getFilesByType(MimeType.MICROSOFT_EXCEL);
while (filesToConvert.hasNext()) {
  Drive.Files.copy({mimeType: MimeType.GOOGLE_SHEETS, parents: [{id: folderId}]}, filesToConvert.next().getId());
}
Please note that this duplicates the existing file by creating a Google Sheets copy of the Excel file but does not remove the Excel file itself. Also note that you will need to activate the Drive API service.
2. Remove duplicates from combinedData
This is not as straightforward as removing duplicates from a regular array, as combinedData is an array of arrays. Nevertheless, it can be accomplished by creating an intermediate object that stores a stringified version of the row array as the key and the row array itself as the value:
var intermediateStep = {};
combinedData.forEach(row => {intermediateStep[row.join(":")] = row;});
var finalData = Object.keys(intermediateStep).map(row => intermediateStep[row]);
Extra
I also found another mistake in your code. You should add a 1 (or whichever first row you want to read) when declaring the range of values to be read, so
var data = ws.getRange("A1:W"+ws.getLastRow()).getValues();
instead of:
var data = ws.getRange("A:W" + ws.getLastRow()).getValues();
As it currently is, Apps Script cannot work out the exact range you want read and just assumes it is the whole sheet.
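Putting both steps together with the corrected range, the reading part might look roughly like this (a sketch that assumes the conversion from step 1 has already run over the folder):
function myFunction() {
  var folderId = "folder ID"; // placeholder, as in the original snippet
  var filesIterator = DriveApp.getFolderById(folderId).getFiles();
  var combinedData = [];
  while (filesIterator.hasNext()) {
    var file = filesIterator.next();
    // Only the converted Google Sheets copies can be opened by SpreadsheetApp.
    if (file.getMimeType() === "application/vnd.google-apps.spreadsheet") {
      combinedData = combinedData.concat(getDataFromSpreadsheet(file.getId()));
    }
  }
  // Step 2: drop duplicate rows before handing the data on.
  var intermediateStep = {};
  combinedData.forEach(row => {intermediateStep[row.join(":")] = row;});
  var finalData = Object.keys(intermediateStep).map(row => intermediateStep[row]);
  Logger.log(finalData.length);
}

function getDataFromSpreadsheet(ssID) {
  var ss = SpreadsheetApp.openById(ssID);
  var ws = ss.getSheets()[0];
  // Start at row 1 so the range is an explicit A1:W<lastRow>.
  return ws.getRange("A1:W" + ws.getLastRow()).getValues();
}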
In Google's "Getting started with Node.js" tutorial, they perform the following operation
data = {...data};
in the code for sending data to Firestore.
You can see it on their GitHub, line 63.
As far as I can tell this doesn't do anything.
Is there a good reason for doing this?
Is it potentially future-proofing, so that if you added your own data you'd be less likely to do something like data = {data, moreData}?
Manu's answer details what the line of code is doing, but not why it's there.
I don't know exactly why the Google code example uses this approach, but I would guess at the following reason (and would do the same myself in this situation):
Because objects in JavaScript are passed by reference, it becomes necessary to rebuild the 'data' object from its constituent parts to avoid the original data object being further modified by the ref.set(data) call on line 64 of the example code:
await ref.set(data);
For example, in MongoDB, when you pass an object into a write or update method, Mongo will actually modify the object to add extra properties such as the datetime it was inserted into a collection or its ID within the collection. I don't know for sure whether Firestore does the same, but if it doesn't now, it may in future. If it does, and if the code that calls the update method from Google's example goes on to further manipulate the data object it originally passed, that object would now have extra properties on it that may cause unexpected problems. Therefore, it's prudent to rebuild the data object from the original object's properties to avoid contamination of the original object elsewhere in the code.
I hope that makes sense - the more I think about it, the more I'm convinced that this must be the reason and it's actually a great learning point.
I include the full original function from Google's code here in case others come across this in future, since the code is subject to change (copied from https://github.com/GoogleCloudPlatform/nodejs-getting-started/blob/master/bookshelf/books/firestore.js at the time of writing this answer):
// Creates a new book or updates an existing book with new data.
async function update(id, data) {
  let ref;
  if (id === null) {
    ref = db.collection(collection).doc();
  } else {
    ref = db.collection(collection).doc(id);
  }
  data.id = ref.id;
  data = {...data};
  await ref.set(data);
  return data;
}
It's making a shallow copy of data; let's say you have a third-party function that mutates the input:
const foo = input => {
  input['changed'] = true;
}
And you need to call it, but don't want to get your object modified, so instead of:
data = {life: 42}
foo(data)
// > data
// { life: 42, changed: true }
You may use the Spread Syntax:
data = {life: 42}
foo({...data})
// > data
// { life: 42 }
Not sure if this is the particular case with Firestore, but the point is: by spreading an object you get a shallow copy of that object.
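Note that because the copy is shallow, nested objects are still shared between the original and the copy; a quick plain-JavaScript illustration:
const data = {life: 42, nested: {untouched: true}};
const copy = {...data};

copy.life = 0;                 // top-level property: only the copy changes
copy.nested.untouched = false; // nested object: shared, so the original changes too

// > data
// { life: 42, nested: { untouched: false } }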
===
Related: Object copy using Spread operator actually shallow or deep?
I've spent hours trying to find out why the Excel export with the cyber-duck/laravel-excel package exports the Excel file correctly when the data source is a query, but simply stops formatting the Excel file correctly when using a custom serialiser.
No errors in the code, and it's a super simple Excel file. This happens even when trying the code posted in the documentation:
Usage:
$serialiser = new CustomSerialiser();
$excel = Exporter::make('Excel');
$excel->load($collection);
$excel->setSerialiser($serialiser);
return $excel->stream('filename.xlsx');
CustomSerialiser:
namespace App\Serialisers;

use Illuminate\Database\Eloquent\Model;
use Cyberduck\LaravelExcel\Contract\SerialiserInterface;

class ExampleSerialiser implements SerialiserInterface
{
    public function getData($data)
    {
        $row = [];
        $row[] = $data->field1;
        $row[] = $data->relationship->field2;
        return $row;
    }

    public function getHeaderRow()
    {
        return [
            'Field 1',
            'Field 2 (from a relationship)'
        ];
    }
}
Any thoughts?
What software do you use to open the file? Excel? OpenOffice?
If you open the test folder > Unit > ExporterTest.php, you should see a working example in test_can_use_a_custom_serialiser.
You can change row 155 to $exporter = $this->app->make('cyber-duck/exporter')->make('Excel');, row 160 to $reader = ReaderFactory::create(Type::XLSX); (otherwise it would use a CSV), and comment out line 174 to keep the file so you can open it after the test has run.
The ExampleSerialiser you posted needs to be modified to match your Eloquent model and relationships. Also, the example uses an Eloquent version and you mentioned the query builder. If you want to use the query builder version, you need to use loadQuery (I'll try to update the documentation next week to cover this use case). Feel free to drop me an email with your code so I can have a look and try to help out (it's a bit hard to understand the issue without seeing the actual implementation). You should find me on GitHub; I'm one of the Cyber-Duck guys working on our projects.