Convert ITAB to XSTRING and back

I need to save an itab as an xstring (or something similar) in a dbtab.
Later I need to read this xstring from the dbtab and convert it back into an itab with exactly the same content as before.
I tried a lot of function modules ("FuBas") like SCMS_STRING_TO_XSTRING or SCMS_XSTRING_TO_BINARY, but I didn't find anything to convert it back.
Has somebody tried something like this before, and do you have some samples for me? Unfortunately I didn't find anything on other blogs either.

An easy solution to convert the table into an xstring:
CALL TRANSFORMATION id SOURCE root = it_table RESULT XML DATA(lv_xstring).
Converting it back:
CALL TRANSFORMATION id SOURCE XML lv_xstring RESULT root = it_table.
For more information, see the ABAP documentation on data serialization and deserialization using the XSL identity transformation.
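A minimal end-to-end sketch of this approach (the SFLIGHT row type is just an assumption; any table type works):

DATA lt_data     TYPE STANDARD TABLE OF sflight.
DATA lt_restored TYPE STANDARD TABLE OF sflight.
DATA lv_xstring  TYPE xstring.

" Serialize the internal table to an xstring (asXML)
CALL TRANSFORMATION id SOURCE root = lt_data RESULT XML lv_xstring.

" ... store lv_xstring in a RAWSTRING field of the dbtab and read it back ...

" Deserialize into a table of the same type
CALL TRANSFORMATION id SOURCE XML lv_xstring RESULT root = lt_restored.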

Use
IMPORT ... FROM DATA BUFFER
and
EXPORT ... TO DATA BUFFER
to (re)store any variable as an xstring.
Or you can use
IMPORT|EXPORT ... FROM|TO DATABASE ...
to read and write it directly in an export/import (cluster) table such as INDX.
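A minimal sketch of the data-buffer variant (again assuming an SFLIGHT table; deep structures work too):

DATA lt_data     TYPE STANDARD TABLE OF sflight.
DATA lt_restored TYPE STANDARD TABLE OF sflight.
DATA lv_buffer   TYPE xstring.

" Serialize the table into an xstring
EXPORT itab = lt_data TO DATA BUFFER lv_buffer.

" ... save lv_buffer in the dbtab, read it back later ...

" Restore into a variable of the same type under the same key (itab)
IMPORT itab = lt_restored FROM DATA BUFFER lv_buffer.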

I wrote some methods to do this.
First I loop over the table and concatenate the rows into one string, then convert the string into an xstring:
" Build one newline-separated string from the table rows
LOOP AT IT_TABLE ASSIGNING FIELD-SYMBOL(<LS_TABLE>).
  CONCATENATE LV_STRING <LS_TABLE> INTO LV_STRING SEPARATED BY CL_ABAP_CHAR_UTILITIES=>NEWLINE.
ENDLOOP.

CALL FUNCTION 'SCMS_STRING_TO_XSTRING'
  EXPORTING
    TEXT   = LV_STRING
  IMPORTING
    BUFFER = LV_XSTRING.
Converting back works the other way around: convert the xstring back to a string, then split the string into a table:
TRY.
    CL_BCS_CONVERT=>XSTRING_TO_STRING(
      EXPORTING
        IV_XSTR = IV_XSTRING
        IV_CP   = 1100 " SAP character set identification
      RECEIVING
        RV_STRING = LV_STRING ).
  CATCH CX_BCS.
ENDTRY.

SPLIT LV_STRING AT CL_ABAP_CHAR_UTILITIES=>NEWLINE INTO TABLE <LT_TABLE>.

" The CONCATENATE above adds a leading separator, so the first line is empty
READ TABLE <LT_TABLE> ASSIGNING FIELD-SYMBOL(<LS_TABLE>) INDEX 1.
IF <LS_TABLE> IS INITIAL.
  DELETE <LT_TABLE> INDEX 1.
ENDIF.

Related

How to stop Python Pandas from converting specific column from int to float

Trying to output a dataframe into a txt file (for a feed). A few specific columns are getting automatically converted to float instead of int as intended.
How can I specify that those columns should use int as the dtype?
I tried to output the whole dataframe as string and that did not work.
The columns I would like to specify are named [CID1] and [CID2].
data = pd.read_sql(sql, conn)
data = data.astype(str)
data.to_csv('data_feed_feed.txt', sep='\t', index=True)
Based on the code you provided, you turn all of your data into strings just before export.
Thus, you either need to turn some columns back to the desired type, such as:
data["CID1"] = data["CID1"].astype(int)
or not convert them in the first place.
It is not clear from what you provided why you'd have issues with ints being converted to floats.
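For instance, a minimal sketch of the first option (the DataFrame below is made up; only the CID1/CID2 column names come from the question):

import pandas as pd

# Stand-in for the result of pd.read_sql
data = pd.DataFrame({"CID1": [101, 102], "CID2": [7, 8], "note": ["a", "b"]})

# Stringify everything except the ID columns, which stay int
int_cols = ["CID1", "CID2"]
other_cols = data.columns.difference(int_cols)
data[other_cols] = data[other_cols].astype(str)
data[int_cols] = data[int_cols].astype(int)

data.to_csv("data_feed.txt", sep="\t", index=True)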
This post provides heaps of info:
stackoverflow.com/a/28648923/9249533

Converting Issue date with download to CSV

I have a problem with the function module GUI_DOWNLOAD because of date conversion.
I want to get the date like I have it in my internal table, but CSV (Excel) keeps converting it every time.
The internal table contains a line like this: 12345678;GroupDate;2021-12-31;
The output in the .csv file should be "2021-12-31", but it keeps getting converted to "31.12.2021".
I also tried to put an apostrophe before the date, but then the output is '2021-12-31.
Does anybody have an idea?
lv_conv = '2021-12-31'.
CONCATENATE text-001 lv_conv INTO lv_conv.

CALL FUNCTION 'GUI_DOWNLOAD'
  EXPORTING
    filename = iv_path
  TABLES
    data_tab = lt_file.
LT_FILE is a string table.
Thanks for the help.
As Suncatcher and Sandra said, the file is right; it is only the settings of Excel which convert the date for display.
If the output file isn't needed for purposes other than viewing, the code could be something like this:
CONCATENATE '=("' LV_CONV '")' INTO LV_CONV.
The raw CSV output then contains =("1960-01-01"), but the cell in Excel displays the date as 1960-01-01.
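Putting the pieces together, a rough sketch (iv_path and the line layout come from the question; the APPEND line is illustrative):

DATA lv_conv TYPE string VALUE '2021-12-31'.
DATA lt_file TYPE TABLE OF string.

" Wrap the date in =("...") so Excel displays it verbatim
CONCATENATE '=("' lv_conv '")' INTO lv_conv.
APPEND |12345678;GroupDate;{ lv_conv };| TO lt_file.

CALL FUNCTION 'GUI_DOWNLOAD'
  EXPORTING
    filename = iv_path
  TABLES
    data_tab = lt_file.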

Remove the alias name from the json object in stream analytics

I use the UDF.Javascript function to process the message. After converting to a JSON object, I see the UDF.Javascript alias name getting added to the JSON:
{"Device":{"deviceId":"DJT3COE4","productFilter":"pcmSensor","SignalDetails":[{"Devicevalue":"72.04","DisplayName":"Valve Open Status","Description":"Machine Valve Open State Information","DataType":"BOOLEAN","Precision":"undefined","DefaultUoM":"undefined"},{"Devicevalue":"2.7","DisplayName":"Temperature","Description":"Temperature Sensor Reading","DataType":"TEMPERATURE","Precision":"2","DefaultUoM":"DEG_CELSIUS"},{"Devicevalue":"2.99","DisplayName":"Location","Description":"Location","DataType":"LOCATION","Precision":"undefined","DefaultUoM":"LAT_LONG"},{"Devicevalue":"15","DisplayName":"Valve Control","Description":"On / Off control","DataType":"BOOLEAN","Precision":"undefined","DefaultUoM":"undefined"}]}}
I want to remove the alias name {"Device": from the JSON.
Maybe you could use WITH ... AS ... in your SQL; please see the example below:
WITH c AS (
    SELECT udf.processArray(input)
    FROM input
)
SELECT c.processarray.item, c.processarray.name
INTO output
FROM c
In my example the columns are very few; you would need to define all of your columns, which is a little bit tedious. But it does work, please have a try.
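For reference, a hypothetical shape for the UDF behind the processArray alias (the real one isn't shown in the question; in Stream Analytics the JavaScript function itself is always named main):

// Hypothetical UDF body registered under the alias "processArray"
function main(input) {
    // Return a flat object; the WITH query above then selects
    // item and name directly instead of the wrapped alias column
    return {
        item: input.Device.deviceId,
        name: input.Device.productFilter
    };
}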

constructing data-type instances from CSV

I have CSV data (inherited - no choice here) which I need to use to create data-type instances in Haskell. Parsing the CSV is no problem - tutorials and APIs abound.
Here's what 'show' generates for my simplified trimmed-down test-case:
JField {fname = "cardNo", ftype = "str"} (string representation)
I am able to do a read to convert this string into a JField data record. My CSV data is just the values of the fields, so the CSV row corresponding to JField above is:
cardNo, str
and I am reading these in as a list of strings: ["cardNo", "str"].
So - it's easy enough to brute-force the exact format of the string representation (but writing Java- or Python-style string formatting in Haskell isn't my goal here).
I thought of doing something like this (the first list is static, and the second list would be read from the CSV):
let stp1 = zip ["fname = ", "ftype = "] ["cardNo", "str"]
resulting in
[("fname = ","cardNo"),("ftype = ","str")]
and then concatenating the tuples - either explicitly with ++ or in some more clever way yet to be determined.
This is my first simple piece of code outside of tutorials, so I'd like to know if this seems a reasonably Haskellian way of doing this, or what clearly better ways there are to build just this piece:
fname = "cardNo", ftype = "str"
Not expecting solutions (this is not homework, it's a learning exercise), but rather critique or guidelines for better ways to do this. Brute-forcing it would be easy but would defeat my objective, which is to learn.
I might be way off, but wouldn't a map be better here? I guess I'm assuming that you read the file in with each row as a [String], i.e.
field11, field12
field21, field22
etc.
You could write
map (\[x,y] -> JField {fname = x, ftype = y}) rows
where rows is your input (note that data is a reserved word in Haskell, so it can't be used as a variable name). I think that would do it.
If you already have the value of the fname field (say, in the variable fn) and the value of the ftype field (in ft), just do JField {fname=fn, ftype=ft}. For non-String fields, just insert a read where appropriate.
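Building on that, a minimal self-contained sketch (csvRows is a made-up stand-in for the parsed CSV; the Maybe result guards against malformed rows):

import Data.Maybe (mapMaybe)

-- The record type from the question
data JField = JField { fname :: String, ftype :: String }
  deriving (Show, Read)

-- Total conversion: rows without exactly two cells yield Nothing
-- instead of crashing on a pattern-match failure
toJField :: [String] -> Maybe JField
toJField [n, t] = Just (JField { fname = n, ftype = t })
toJField _      = Nothing

main :: IO ()
main = do
  let csvRows = [["cardNo", "str"], ["expiry", "date"]]
  mapM_ print (mapMaybe toJField csvRows)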

How to store string matrix and write to a file?

I don't know if Matlab can do this, but I want to store some strings in a 4×3 matrix, each element in the matrix is a string.
test_string_01 test_string_02 test_string_03
test_string_04 test_string_05 test_string_06
test_string_07 test_string_08 test_string_09
test_string_10 test_string_11 test_string_12
Then, I want to write this matrix into a plain text file, either comma or space delimited.
test_string_01,test_string_02,test_string_03
test_string_04,test_string_05,test_string_06
test_string_07,test_string_08,test_string_09
test_string_10,test_string_11,test_string_12
Seems like the matrix data type is not capable of storing strings. I looked at cell arrays. I tried to use dlmwrite() or csvwrite(), but both of them only accept matrices. I also tried cell2mat() first, but that way all letters in the strings are comma-separated, like
t,e,s,t,_,s,t,r,i,n,g,_,0,1,t,e,s,t,_,s,t,r,i,n,g,_,0,2,t,e,s,t,_,s,t,r,i,n,g,_,0,3
So is there any way to achieve this?
It is possible to shorten yuk's solution a bit.
strings = {
'test_string_01','test_string_02','test_string_03'
'test_string_04','test_string_05','test_string_06'
'test_string_07','test_string_08','test_string_09'
'test_string_10','test_string_11','test_string_12'};
fid = fopen('output.txt','w');
fmtString = [repmat('%s\t',1,size(strings,2)-1),'%s\n'];
strsT = strings.'; % fprintf walks cell arrays column-major, so transpose to keep row order
fprintf(fid,fmtString,strsT{:});
fclose(fid);
A cell array is the way to store strings.
I agree it's a pain to save strings into a text file, but you can do it with this code:
strings = {
'test_string_01','test_string_02','test_string_03'
'test_string_04','test_string_05','test_string_06'
'test_string_07','test_string_08','test_string_09'
'test_string_10','test_string_11','test_string_12'};
fid = fopen('output.txt','w');
for row = 1:size(strings,1)
    fprintf(fid, repmat('%s\t',1,size(strings,2)-1), strings{row,1:end-1});
    fprintf(fid, '%s\n', strings{row,end});
end
fclose(fid);
Substitute \t with , to get a CSV file.
You can also store a cell array of strings into an Excel file with XLSWRITE (requires the COM interface, so it's Windows-only):
xlswrite('output.xls',strings)
In most cases you can use the empty delimiter '' and get Matlab to save a string into a file with dlmwrite.
For example,
output=('my_first_String');
dlmwrite('myfile.txt',output,'delimiter','')
will save a file named myfile.txt containing my_first_String.
