Adding json data to json column is adding escape characters - node.js

I am using a Postgres database. I have a table called names with a column named 'info' of type json. I am adding:
{ "info": "security" , description : "Sednit update: Analysis of Zebrocy: The Sednit group \u2013 also known as APT28, Fancy Bear, Sofacy or STRONTIUM \u2013 is a group of attackers operating since 2004, if not earlier, and whose main objective is to steal confidential information from specific targets.\n\nToward the end of 2015, we started seeing a new component deployed by the group; a downloader for the main Sednit backdoor, Xagent. Kaspersky mentioned this component for the first time in 2017 in their APT trend report and recently wrote an article where they quickly described it under the name Zebrocy.\n\nThis new component is a family of malware, comprising downloaders and backdoors written in Delphi and AutoIt. These components play the same role in the Sednit ecosystem as Seduploader; that of first-stage malware."}
I am using Node.js with Sequelize as the ORM. When I save it to the table, I see "\\n" for "\n" and "\\u" for "\u". Can anyone help me avoid the escape characters when saving to the table?

I see "\\n" for "\n" and "\\u" for "\u".
In your JSON, description is a string, so the newline is converted to \n. That is the default behavior; otherwise you would not get the newline back when you fetch the data again.
And \u is a Unicode escape: you are probably saving a smiley or another special character, and that gets serialized as such a sequence.
So there is no bug; this is how it works.
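You can see the round trip with any JSON library; here is a minimal Python sketch (illustrative only; Node's JSON.stringify/JSON.parse behave the same way):
import json

# a description containing a real newline and an en dash (U+2013)
record = {"info": "security", "description": "first line\nsecond line \u2013 end"}

text = json.dumps(record)
print(text)  # the newline is serialized as \n, the dash as \u2013
restored = json.loads(text)
print(restored["description"] == record["description"])  # True - nothing was lost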

Related

Can't receive text from PostgreSQL into Excel via ODBC due to character coding problem (UTF-8 vs Win1250)

My base scenario is that I want to make an Excel report with data from a PostgreSQL DB.
I fetch the data via ODBC, making a simple linked table with Power Query.
For DSN I choose (None), then I write the connection string and the SQL statement. Generally it works fine, but with one column it doesn't. I receive the following error message:
ODBC: ERROR [22P05] ERROR: character with byte sequence 0xc2 0xb2 in encoding "UTF8" has no equivalent in encoding "WIN1250";Error while executing the query
So that much is clear: the source is in UTF-8, with characters that are not compatible with Win1250.
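You can reproduce the diagnosis in isolation; a quick Python sketch (cp1250 is Python's name for WIN1250):
ch = b'\xc2\xb2'.decode('utf-8')  # the byte sequence from the error message
print(ch)                         # '²' (superscript two)
try:
    ch.encode('cp1250')           # fails: no Win1250 equivalent, as Postgres says
except UnicodeEncodeError as err:
    print(err)
print(ch.encode('cp1250', errors='replace'))  # b'?' - a lossy fallback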
What I am looking for is a general solution on either the DB or the Excel side.
The SQL statement used is a simple SELECT * FROM [view], so I can use any replacement or conversion, anything that lets me handle it with transformations on the column. I can replace the view with a function if that is better.
It would be better, though, if you can suggest an Excel-side solution.
There is one criterion for that: a scenario where "I first get the data as text, then convert it to Win1250, then import it into Excel" won't fit. I need something that connects to the Excel file itself, so that if I move it to another PC it still works without any further modification.
Thanks for all the help!

Excel Import of custom mandatory field doesn't work [Hybris 6.7.0]

I'm using Hybris version 6.7.0 and I'm stuck with the following problem:
When I try to import products from an Excel file, it gives me the following error:
I've checked the Excel file and of course the field "Subscription Term*" is there; it is mandatory, which is why it has an asterisk. Worth mentioning is that this field is custom, so I wrote a custom translator for it. The exporting part works fine, but when I debugged the importing part I found a strange fact:
The WorkbookMandatoryColumnsValidator validator calls the method findColumnIndex(typeSystemSheet, sheet, this.prepareSelectedAttribute(mandatoryField)); from DefaultExcelTemplateService. This method returns -1 and the validation does not pass. I dug into this method and found this line of code:
String attributeDisplayName = this.findAttributeDisplayNameInTypeSystemSheet(typeSystemSheet, selectedAttribute); which returns the string "Subscription Term" which, as you can see, has no asterisk.
I've checked the other mandatory fields, e.g. "Catalog version*^", and each is returned with its trailing symbols intact.
The thing is that the string equality check between "Subscription Term" and "Subscription Term*" returns false and the validation fails, as you can see here:
attributeDisplayName.equals(this.getCellValue(headerRow.getCell(i))).
Of course, the second value is taken from the Excel file, where the asterisk is present.
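Reduced to its essence, the failing check is just this (a Python sketch for illustration, using the values observed while debugging):
type_system_name = "Subscription Term"   # from findAttributeDisplayNameInTypeSystemSheet
excel_header     = "Subscription Term*"  # from getCellValue on the header row
print(type_system_name == excel_header)  # False -> findColumnIndex returns -1

# "Catalog version" passes because its type-system entry keeps its markers:
print("Catalog version*^" == "Catalog version*^")  # True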
If I remove the asterisk from the Excel file, then I get an Unknown attributes of type ISku error in the WorkbookTypeCodeAndSelectedAttributeValidator validator.
The asterisk should be present in the Excel file; I just checked what would happen...
It doesn't help me at all to understand what really happens.
I can't understand one thing: what is the source of the "Subscription Term" string? Why is it without an asterisk? Is it a predefined constant somewhere?
From debugging I couldn't figure out which source that string comes from.
I do not know for sure, but I expect that string (i.e. Subscription Term) comes from a localization file based on the backoffice's current session language (e.g. {extensionName}-locales_en.properties if the current language is en).
Try searching for "Subscription Term" in all properties files.
Maybe, if the attribute is mandatory (i.e. optional="false" in items.xml), Hybris adds an "*" to its name when performing the import.
Check whether you provided read and write permission on that attribute for that user.
Check with an admin user before doing that: if there is no issue with the admin user, then it is only a permission issue with that user.

2 Sequential Transactions, setting Detail Number (Revit API / Python)

Currently, I have a tool that renames view numbers (“Detail Number”) on a sheet based on their location on the sheet. Where this breaks is the transactions. I'm trying to do two transactions sequentially in Revit Python Shell. I originally did this in Dynamo, and that failed in a similar way, so I know it's something to do with transactions.
Transaction #1: Add a suffix (“-x”) to each detail number to ensure the new numbers won’t conflict (1 will be 1-x, 4 will be 4-x, etc)
Transaction #2: Change detail numbers with calculated new number based on viewport location (1-x will be 3, 4-x will be 2, etc)
Better visual explanation here: https://www.docdroid.net/EP1K9Di/161115-viewport-diagram-.pdf.html
Py File here: http://pastebin.com/7PyWA0gV
The Python file is attached, but essentially what I'm trying to do is:
# <---- Make unique numbers
t = Transaction(doc, 'Rename Detail Numbers')
t.Start()
for i, viewport in enumerate(viewports):
    setParam(viewport, "Detail Number", getParam(viewport, "Detail Number") + "x")
t.Commit()

# <---- Do the thang
t2 = Transaction(doc, 'Rename Detail Numbers')
t2.Start()
for i, viewport in enumerate(viewports):
    setParam(viewport, "Detail Number", detailViewNumberData[i])
t2.Commit()
As I explained in my answer to your comment in the Revit API discussion forum, the behaviour you describe may well be caused by a need to regenerate between the transactions. The first modification does something, and the model needs to be regenerated before the modifications take full effect and are reflected in the parameter values that you query in the second transaction. You are accessing stale data. The Building Coder provides all the nitty gritty details and numerous examples on the need to regenerate.
Summary of this entire thread including both problems addressed:
http://thebuildingcoder.typepad.com/blog/2016/12/need-for-regen-and-parameter-display-name-confusion.html
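A minimal sketch of that advice, assuming the same Revit Python Shell context and the getParam/setParam helpers from the script above: fold both passes into one transaction and regenerate in between, so the second pass does not read stale values:
t = Transaction(doc, 'Rename Detail Numbers')
t.Start()
for viewport in viewports:
    setParam(viewport, "Detail Number", getParam(viewport, "Detail Number") + "x")
doc.Regenerate()  # flush the first batch so subsequent reads see fresh values
for i, viewport in enumerate(viewports):
    setParam(viewport, "Detail Number", detailViewNumberData[i])
t.Commit()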
So this issue actually had nothing to do with transactions or document regeneration. I discovered (with some help :) ) that the problem lay in how I was getting/setting the parameter. "Detail Number", like a lot of parameters, has a duplicate version that shares the same descriptive parameter name on a viewport element.
Apparently the reason for this might be legacy issues, though I'm not sure. So when I tried to get/set the detail number just by passing the descriptive name "Detail Number", it occasionally grabbed the wrong, read-only parameter, whose built-in enumeration is "VIEWER_DETAIL_NUMBER"; the correct one is "VIEWPORT_DETAIL_NUMBER". Revising how I get/set parameters via the built-in enum resolved the issue.
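A sketch of the working approach, assuming the viewport is in hand and a transaction is open:
from Autodesk.Revit.DB import BuiltInParameter

# Resolve the parameter by its built-in enum instead of the display name;
# "Detail Number" also matches the read-only VIEWER_DETAIL_NUMBER twin.
param = viewport.get_Parameter(BuiltInParameter.VIEWPORT_DETAIL_NUMBER)
param.Set(param.AsString() + "x")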
Please see pdf for visual explanation: https://www.docdroid.net/WbAHBGj/161206-detail-number.pdf.html

File transfer: Attachmate Extra! prepends username to host file name

Hi, when I try to download a file from the mainframe using Attachmate Extra!, it prepends my username to the host file name, and I don't know where to turn that off.
For example, if the file name is yyyy.file.name, then when I try to transfer the file it transfers username.yyyy.file.name.
In 3.4 the option to append the user name is turned off; still, it happens.
Enclose the entire dataset name (including the high-level qualifier) in single quotes. This is a TSO (not JCL) convention: if you refer to a dataset without single quotes, TSO prepends your user ID as the high-level qualifier; if you place single quotes around the dataset name, it is taken 'as is' (well, it is uppercased, since all z/OS dataset names are uppercase, but otherwise it is 'as is'). For example, yyyy.file.name resolves to USERNAME.YYYY.FILE.NAME, while 'YYYY.FILE.NAME' resolves to exactly YYYY.FILE.NAME.
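The same convention applies outside Attachmate, too. As a purely hypothetical illustration (host and credentials invented), pulling a dataset from z/OS with Python's ftplib shows how the quotes decide whether the user ID is prepended:
from ftplib import FTP

ftp = FTP('mainframe.example.com')  # hypothetical host
ftp.login('USERID', 'password')     # hypothetical credentials
with open('local.dat', 'wb') as f:
    # Quoted: transferred as-is. Unquoted ("RETR YYYY.FILE.NAME") would be
    # resolved as USERID.YYYY.FILE.NAME - exactly the symptom described above.
    ftp.retrbinary("RETR 'YYYY.FILE.NAME'", f.write)
ftp.quit()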

Can I import SAP tables that were exported by SE16?

I have exported the contents of a table with transaction SE16 by selecting all the entries and choosing Download, unconverted.
I'd like to import these entries into another system (where the same table exists and is active).
Furthermore, when I import, there's a possibility that the specific key already exists for a number of entries (old entries).
Other entries won't have a field with the same key present in the table where they're to be imported (new entries).
Is there a way to easily update my table in the second system with the file provided from the first system? If needed, I can export the data in the 3 other formats (Spreadsheet, Rich text format and HTML format). It seems to me, though, that the spreadsheet and rich text formats sometimes corrupt the data, and the HTML is far too verbose.
[EDIT]
As per popular demand: the table I'm trying to export/import is a Z table whose fields are all numeric, character, date or time fields (flat data types).
I'm trying to do it like this because the clients don't have any Basis resource to help them transport, and they would like to more or less automate the process of updating one of the tables in one system.
At the moment it's a business request to do it like this, but I'm open to suggestions (and the clients are open too).
Edit
OK, I doubt that what you describe in your comment exists out of the box, but you can easily write something like this:
Create a method (or function module, if that floats your boat) that accepts the following:
iv_table_name TYPE string and
iv_filename TYPE string
This would be the method:
method upload_table.
  data: lt_table type ref to data,
        lx_root  type ref to cx_root.
  field-symbols: <table> type standard table.

  try.
      " Create an internal table typed dynamically after the target table
      create data lt_table type table of (iv_table_name).
      assign lt_table->* to <table>.

      call method cl_gui_frontend_services=>gui_upload
        exporting
          filename            = iv_filename
          has_field_separator = abap_true
        changing
          data_tab            = <table>
        exceptions
          others              = 4.
      if sy-subrc <> 0.
        "Some appropriate error handling
        "message id sy-msgid type 'I'
        "        number sy-msgno
        "        with sy-msgv1 sy-msgv2
        "             sy-msgv3 sy-msgv4.
        return.
      endif.

      " Overwrite or insert the uploaded rows into the target table
      modify (iv_table_name) from table <table>.
      "write: / sy-tabix, ' entries updated'.
    catch cx_root into lx_root.
      "lv_text = lx_root->get_text( ).
      "some appropriate error handling
      return.
  endtry.
endmethod.
This would still require you to make sure that the exported file matches the table you want to import. However, cl_gui_frontend_services=>gui_upload should return sy-subrc > 0 in that case, so you can bail out before you corrupt any data.
Original Answer:
I'll assume that you want to update a Z table and not an SAP standard table.
You will probably have to format your data file a little to make it tab- or comma-delimited.
You can then upload the data file using cl_gui_frontend_services=>gui_upload.
Then, if you want to overwrite the existing data in the table, you can use:
modify zmydbtab from table it_importeddata.
If you do not want to overwrite existing entries, you can use:
insert zmydbtab from table it_importeddata.
You will get a return code of sy-subrc = 4 if any of the keys already exist, but any new entries will be inserted.
Note
There are many reasons why you would NOT do this for an SAP standard table. Most prominent is that there is almost always more to the data model than we are aware of. Also, when creating transactional data there are often follow-on events or workflows that kick off, which will not happen if you update the database directly. As a rule of thumb, it is usually a bad idea to update SAP standard tables directly.
In that case, try to find a BAdI, or if none is available, record a BDC and do the updates that way.
If the system landscape were set up correctly, your client would not need any kind of Basis operations support whatsoever to perform the transports. So instead of re-inventing the wheel, I'd strongly suggest catching up on what the CTS and TMS can do once they're set up with sensible settings.
