jsreport not displaying Thai text when deployed on Lambda - node.js

I am using jsreport to generate a PDF with Thai text.
My whole process is:
1. Get a CSV file from S3.
2. Read the data and convert the CSV to objects.
3. Send the objects to jsreport.
The whole process works fine locally, but when I deploy the project to Lambda (jsreport can be attached to Lambda; more details at this URL: jsreport-aws-lambda) the generated PDF does not display the Thai text (and I suspect other non-Latin languages would be affected too).
At first I thought it was the encoding ('base64'); I tried changing it to utf-8, but then the file was corrupted.
I have already set the meta content tag of the HTML file, but it still isn't working.
What can I do to solve this? Please help.
Thanks.
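For reference, the three-step pipeline above can be sketched like this (in Python rather than Node.js for illustration; the bucket, key, template name, and jsreport endpoint are placeholder assumptions, not from the original post):

```python
import csv
import io

def csv_to_objects(csv_text):
    """Step 2: parse CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def fetch_csv_from_s3(bucket, key):
    """Step 1: download the CSV from S3 (needs boto3 and AWS credentials)."""
    import boto3
    body = boto3.client("s3").get_object(Bucket=bucket, Key=key)["Body"]
    return body.read().decode("utf-8")  # decode the bytes explicitly as UTF-8

def render_pdf(rows, url="http://localhost:5488/api/report"):
    """Step 3: POST the data to jsreport's HTTP rendering API."""
    import requests
    payload = {"template": {"name": "my-template"}, "data": {"rows": rows}}
    resp = requests.post(url, json=payload)
    resp.raise_for_status()
    return resp.content  # the rendered PDF bytes
```

If the parsed objects contain the correct Thai strings both locally and on Lambda, the data path is sound and the problem lies on the rendering side rather than in the CSV handling.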

Related

Taiko UI automation with Angular - unable to use fileField to upload a CSV file

Experimenting with Taiko for UI automation. I'm trying to upload a CSV file, but giving the id of the CSV file selector is not working. A red rectangle outline blinks on top of the file upload link after firing attach("/Users/username/Downloads/report.csv", $('*[id="some"]')), but the following error message shows in the console:
Error: Node is not a file input element, run `.trace` for more info.
I've tried the following fileField examples from https://docs.taiko.dev/#filefield
attach('report.csv', to(fileField('Upload CSV file (Optional)')))
fileField('Upload CSV file (Optional)').exists()
fileField({'id':'event-csv-upload'}).exists()
fileField({id:'event-csv-upload'},below('Upload CSV file (Optional)')).exists()
fileField(below('Upload CSV file (Optional)')).exists()
None of these works, so I finally tried the following
attach("/Users/username/Downloads/report.csv",$('*[id="event-csv-upload"]'))
and
attach("/Users/username/Downloads/report.csv",fileField({id:'event-csv-upload'}))
source:https://github.com/getgauge/taiko/issues/309
I'm still not able to upload the file using Taiko.
Why is this file upload element so difficult to locate in the Angular code?
Is it too early to try Taiko for Angular web projects?
Do you recommend any other UI automation framework that works well with all Angular versions?
attach expects a file input field as the selector to perform the action on. In your case that element seems to be a hidden element linked to a button, so attaching to that hidden element should work.
Try:
await attach("/Users/username/Downloads/report.csv",fileField({id:'eventCSVFileInput'},{ selectHiddenElements: true }))
Try this
await attach("/Users/username/Downloads/report.csv",fileField({id:'eventCSVFileInput'},{force:true}))

How to convert JSON file into PO?

Hello friends and colleagues,
A quick question: how do I convert a JSON file into PO?
I had a PO file with the relevant translations, converted it to JSON on some website, then wrote a little Node.js script to translate the keys via the Google Translate API, and now I just want to convert this translated JSON back to PO.
Is there an easy way? I can't seem to find any working npm packages or anything else.
Please help,
Thanks
Online tools like https://localise.biz/free/converter/po-to-json can help.
To convert from the command line, there's an open-source repo on GitHub: https://github.com/2gis/i18n-json-po
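Since the PO format is plain text, the conversion can also be hand-rolled in a few lines if a small script (shown here in Python; the workflow in the question is Node.js) is acceptable. This is a minimal sketch for a flat {msgid: translation} JSON; it does not handle plural forms, contexts, or comments, which a dedicated library would:

```python
import json

def json_to_po(translations):
    """Convert a flat {msgid: translation} dict to PO file text."""
    def quote(s):
        # Escape backslashes first, then quotes and newlines.
        s = s.replace('\\', '\\\\').replace('"', '\\"').replace('\n', '\\n')
        return '"%s"' % s
    # Standard PO header entry (empty msgid) declaring the charset.
    lines = [
        'msgid ""',
        'msgstr ""',
        '"Content-Type: text/plain; charset=UTF-8\\n"',
        '',
    ]
    for msgid, msgstr in translations.items():
        lines.append('msgid %s' % quote(msgid))
        lines.append('msgstr %s' % quote(msgstr))
        lines.append('')
    return '\n'.join(lines)

po_text = json_to_po(json.loads('{"Hello": "Bonjour"}'))
```

For anything beyond flat key/value pairs, a maintained gettext library is the safer choice, since PO escaping and plural rules have many edge cases.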

Use images in s3 with SageMaker without .lst files

I am trying to create (what I thought was) a simple image classification pipeline between s3 and SageMaker.
Images are stored in an s3 bucket with their class labels in their file names currently, e.g.
My-s3-bucket-dir
cat-1.jpg
dog-1.jpg
cat-2.jpg
..
I've been trying to leverage several related example .py scripts, but most seem to download data sets already in .rec format or to rely on special manifest or annotation files I don't have.
All I want is to pass the images from S3 to the SageMaker image classification algorithm that's located in the same region, IAM account, etc. I suppose this means I need a .lst file.
When I try to create the .lst manually it doesn't seem to be accepted, and doing it by hand takes too long to be good practice.
How can I automatically generate the .lst file (or otherwise send the images/classes for training)?
Things I read made it sound like im2rec.py was a solution, but I don't see how. The example I'm working with now is
Image-classification-fulltraining-highlevel.ipynb
but it seems to download the data as .rec,
download('http://data.mxnet.io/data/caltech-256/caltech-256-60-train.rec')
download('http://data.mxnet.io/data/caltech-256/caltech-256-60-val.rec')
which just skips working with the .jpeg files. I found another that converts them to .rec but again it has essentially the .lst already as .json and just converts it.
I have mostly been working in a Python Jupyter notebook within the AWS console (in my browser) but I have also tried using their GUI.
How can I simply and automatically generate the .lst or otherwise get the data/class info into SageMaker without manually creating a .lst file?
Update
It looks like im2rec.py can't be run against S3. You'd have to completely download everything from all S3 buckets into the notebook's storage...
"Please note that [...] im2rec.py is running locally, therefore cannot take input from the S3 bucket. To generate the list file, you need to download the data and then use the im2rec tool." - AWS SageMaker Team
There are 3 options to provide annotated data to the Image Classification algo: (1) packing labels in RecordIO files, (2) storing labels in a JSON manifest file (the "augmented manifest" option), (3) storing labels in a list file. All options are documented here: https://docs.aws.amazon.com/sagemaker/latest/dg/image-classification.html.
The augmented manifest and .lst options are quick, since they just require you to create an annotation file, usually with a quick for loop. RecordIO requires the im2rec.py tool, which is a little more work. For a .lst file, the loop looks like this:
# assuming train_index, train_class, train_pics store the pic index, class and path
with open('train.lst', 'a') as lst_file:
    for index, cl, pic in zip(train_index, train_class, train_pics):
        lst_file.write(str(index) + '\t' + str(cl) + '\t' + pic + '\n')
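If the class really is encoded in the file name, as in the bucket listing above, the annotation can be derived automatically from the object keys. A sketch, assuming the `<class>-<n>.jpg` naming convention from the question (the helper names are made up, and fetching the keys requires boto3, e.g. the `list_objects_v2` paginator):

```python
import re

def label_from_key(key):
    """Extract the class name from keys like 'cat-1.jpg' or 'pics/dog-2.jpg'."""
    m = re.match(r'(?:.*/)?([A-Za-z]+)-\d+\.jpe?g$', key)
    return m.group(1) if m else None

def build_lst(keys):
    """Return (.lst text, class-name -> integer-label mapping)."""
    labeled = [k for k in keys if label_from_key(k)]
    classes = sorted({label_from_key(k) for k in labeled})
    class_id = {c: i for i, c in enumerate(classes)}
    lines = ['%d\t%d\t%s' % (i, class_id[label_from_key(k)], k)
             for i, k in enumerate(labeled)]
    return '\n'.join(lines) + '\n', class_id
```

Write the returned text to train.lst and upload it alongside the images; keep the class-to-integer mapping around, since it is needed later to decode the model's predictions.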

I'm trying to download an Excel sheet using the Python requests module and getting junk output

I'm trying to download an excel file which is uploaded on a Sharepoint 2013 site.
My code is as follows:
import requests
from requests_ntlm import HttpNtlmAuth

url = 'https://<sharepoint_site>/<document_name>.xlsx?Web=0'
author = HttpNtlmAuth('<username>', '<password>')
response = requests.get(url, auth=author, verify=False)
print(response.status_code)
print(response.content)
This gives me a long output which is something like:
x00docProps/core.xmlPK\x01\x02-\x00\x14\x00\x06\x00\x08\x00\x00\x00!\x00\x7f\x8bC\xc3\xc1\x00\x00\x00"\x01\x00\x00\x13\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xb8\xb9\x01\x00customXml/item1.xmlPK\x05\x06\x00\x00\x00\x00\x1a\x00\x1a\x00\x12\x07\x00\x00\xd2\xba\x01\x00\x00\x00'
I did something like this before for another site and got XML as output, which was acceptable, but I'm not sure how to handle this data.
Any ideas on how to process this into xlsx or XML?
Or maybe download the xlsx another way? (I tried the wget library and the Excel file seems to get corrupted.)
Any ideas would be really helpful.
Regards,
Karan
It's too late, but I hit a similar issue; this might help someone else.
Try writing the output to a file. response.content is bytes, so the file must be opened in binary mode:
file = open("./temp.xlsx", 'wb')
file.write(response.content)
file.close()
Don't write response.text here: it is a decoded string, so it can't be written in 'wb' mode, and decoding the binary .xlsx corrupts it. Likewise, bytes have no .encode() method, so response.content.encode("utf-8") fails; print(response.text.encode("utf-8")) only makes sense for text responses.
Make the appropriate imports.
Hope this helps.
It may be that the file is encrypted and requests can't handle this.
Maybe the web service provides an API for downloading and secure decoding.
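As a sanity check (the PK markers in the output above are actually a good sign: a valid .xlsx file is a ZIP archive, and ZIP data starts with PK), the bytes can be validated before writing. A small sketch, with the function name being a made-up helper:

```python
import io
import zipfile

def save_xlsx(content, path):
    """Write raw response bytes to disk, first checking that they look
    like a ZIP archive (which every valid .xlsx file is)."""
    if not zipfile.is_zipfile(io.BytesIO(content)):
        raise ValueError("response body is not a ZIP archive, so not a valid .xlsx")
    with open(path, "wb") as f:
        f.write(content)
```

If this raises, the server probably returned something other than the workbook (for example an HTML login page or an encrypted payload), which would fit the encryption theory above.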

openxml can't open a docx file retrieved through the SharePoint REST API

I'm using the SharePoint REST API to get the contents of a docx file like so:
_api/web/getfolderbyserverrelativeurl('openxmlJsPoc')/files('TemplateDocument.docx')/$value
I get the contents of the file, but I'm having trouble reading it with the openxml JavaScript API.
This is a sample of the returned data:
PK ! ... (raw binary ZIP bytes of the .docx, truncated)
which I'm positive its correct because when i save this as a docx file it opens correctly.
I tried using
openXml.OpenXmlPackage(result);
// and
doc = new openXml.OpenXmlPackage();
doc.openFromArrayBuffer(result);
but I keep getting errors.
Please help!
The problem was with the JZIP.js that comes packaged with the SDK.
A better approach is to save the template as a Word XML file, then download it through AJAX and open it.
That worked for me.
