How to get the current generation of a Dsname - mainframe

I have these datasets.
FILE.AB1.NAME.G1000V00
FILE.AB2.NAME.G1000V00
FILE.AB3.NAME.G1000V00
When I enter FILE.AB1.NAME(0) I get FILE.AB1.NAME.G1000V00.
But when I enter FILE.*.NAME(0), I get "No data set names found"
How can I get all the following listed using (0)?
FILE.AB1.NAME.G1000V00
FILE.AB2.NAME.G1000V00
FILE.AB3.NAME.G1000V00

If you use (0) at the end of a fully qualified dataset name, you get the latest generation of that Generation Data Group (GDG); (-1) gives the previous (penultimate) generation, and so on. A relative generation number is resolved against a single GDG base, so it cannot be combined with a wildcard; that is why FILE.*.NAME(0) returns "No data set names found". If you want to list all datasets that share the same generation number, you can use a mask such as
FILE.**.G1000V00
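For illustration, assuming the base FILE.AB1.NAME also had (hypothetical) earlier generations G0999V00 and G0998V00, the relative numbers would resolve like this:
FILE.AB1.NAME(0)  -> FILE.AB1.NAME.G1000V00  (current generation)
FILE.AB1.NAME(-1) -> FILE.AB1.NAME.G0999V00
FILE.AB1.NAME(-2) -> FILE.AB1.NAME.G0998V00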

How to change a numeric ID into a sentence in Graylog using pipelines?

I am trying to "beautify" the data I receive from some windows logs on Graylog. My idea is to change the windows log ID from a number to the actual definition for that ID. For example: I receive a log with ID 4625, I want to show in my widget "An account failed to log on".
To do that, I am using a pipeline and a lookup table, which reads the IDs and the respective definitions in natural language from a .csv that I've uploaded on the server.
This is the rule that I wrote for my pipeline, that doesn't seem to work:
rule "eventid_windows_rule"
when
has_field("winlogbeat_winlog_event_id")
then
let winlogbeat_winlog_italiano = lookup("winlogbeat_winlog_event_id", to_string($message.winlogbeat_winlog_event_id));
set_field("winlogbeat_winlog_italiano", winlogbeat_winlog_italiano);
end
I think my problem is specifically in this rule, because Graylog allows you to test lookup tables, and when I manually enter an ID, the lookup table finds the corresponding description.
I solved the issue myself. The first argument of lookup() must be the name of the lookup table (here "eventid_widget_windows_lookup"), not the name of a field. This is the correct code for the rule:
rule "eventid_windows_rule"
when
has_field("winlogbeat_winlog_event_id")
then
let winlogbeat_winlog_italiano = lookup("eventid_widget_windows_lookup", $message.winlogbeat_winlog_event_id);
set_field("winlogbeat_winlog_italiano", winlogbeat_winlog_italiano);
end
This rule checks whether the log has the field "winlogbeat_winlog_event_id"; if it does, it looks up the numeric value of that field in the lookup table backed by my .csv and stores the matching natural-language description in the new field "winlogbeat_winlog_italiano".
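For reference, a minimal sketch of what the backing .csv could look like, assuming the lookup table's CSV file adapter is configured with event_id as the key column and description as the value column (both column names are examples and must match your adapter configuration):
"event_id","description"
"4625","An account failed to log on"
"4624","An account was successfully logged on"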

Dynamically passing a row number to the Session.FindById method

I am trying to write a script which posts invoices in SAP through Excel (fairly new to this), and am running into the following error:
"The control could not be found by id".
The error is coming up at the below line:
session.FindById("wnd[0]/usr/subITEMS:SAPLFSKB:0100/tblSAPLFSKBTABLE/ctxtACGL_ITEM-HKONT[1,w_counter]").Text = w_glacc
Here, I am trying to pass the GL account number in the first row. There can be multiple rows, so instead of hard-coding ctxtACGL_ITEM-HKONT[1,0], ctxtACGL_ITEM-HKONT[1,1], and so on, I want to initialize a counter and pass its value into this method.
Is there any way this can be achieved?
Found a solution:
Instead of writing w_counter inside the string literal, I had to concatenate its value into the ID string with " & w_counter & ".
Previously, code was:
session.FindById("wnd[0]/usr/subITEMS:SAPLFSKB:0100/tblSAPLFSKBTABLE/ctxtACGL_ITEM-HKONT[1,w_counter]").Text = w_glacc
Now, it is:
session.FindById("wnd[0]/usr/subITEMS:SAPLFSKB:0100/tblSAPLFSKBTABLE/ctxtACGL_ITEM-HKONT[1,"& w_counter &"]").Text = w_glacc

Excel in SSIS: How to import a column that may have more than 255 characters when DT_NTEXT causes failures?

OK, so my latest project requires loading an Excel 2007 spreadsheet into a SQL Server table. I'm working in SSIS 2008 R2. Based on some stuff I found on the internet, I opened the Excel source in the Advanced Editor and changed the data type of the long column to DT_NTEXT so that it wouldn't be truncated. Then I made the database column VARCHAR(MAX). This runs correctly in debug mode on my laptop.
Then I deployed it to the development server and attempted to load the same test file. It failed with the following error messages:
Error: Code: 0xC0208265
Source: Main Data Flow Task Get Main Data [1]
Description: Failed to retrieve long data for column "DESCR".
End Error
Error: Code: 0xC020901C
Source: Main Data Flow Task Get Main Data [1]
Description: There was an error with output column "DESCR" (72) on output "Excel Source Output" (9). The column status returned was: "DBSTATUS_UNAVAILABLE".
End Error
Error: Code: 0xC0209029
Source: Main Data Flow Task Get Main Data [1]
Description: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "output column "DESCR" (72)" failed because error code 0xC0209071 occurred, and the error row disposition on "output column "DESCR" (72)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.
End Error
Searching for information about the error, I found about a million sites offering the same three suggested solutions:
Add 'IMEX=1' to the extended properties of the connection string.
It was already there.
Change the TypeGuessRows key in the registry.
This was set to zero on the server, which I understand to mean that it should look at the entire file. Nevertheless, I changed it to 8 to match my laptop. The same error occurred when I ran it again. Then I changed it to 1,763, which is more than the number of rows in the spreadsheet. It still gave the same error. So, I put it back to zero. (There's a 1,900-character value in the first row of my test file, so it shouldn't really matter how many rows it checks, in this case. A sample connection string and the registry location for this key are shown after this list.)
Change the datatype to DT_WSTR(4000) in the source.
The column is supposed to have up to 10,000 characters, so I'm not sure this would be a good idea even if it worked. However, I tried it anyway. This time it gave me a truncation error. I changed the truncation error disposition to "ignore failure" and it loaded the data, but truncated the value to 255 characters. I have verified that the length is 4000 and doesn't get changed when I save the file, but it's still truncating at 255 characters.
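For reference, the first suggestion refers to the Extended Properties clause of the connection string and the second to a registry value; the exact registry path depends on the installed ACE/Jet version and bitness, so the 12.0 path below is only one common case and the Data Source path is a placeholder:
Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\files\test.xlsx;Extended Properties="Excel 12.0 XML;HDR=YES;IMEX=1";
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\12.0\Access Connectivity Engine\Engines\Excel\TypeGuessRows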
I have no idea what else to look at. Any help would be appreciated.
UPDATE 1/29: The package, without any changes, works correctly when running on the pre-production server. It still fails when running on the development server. Both servers have the same version of SSIS (including minor version numbers) as well as the same versions of Windows, Access and Excel. I do not know how to explain this, nor do I know how to tell if it would work in production.
I created a new package with similar non-functional requirements (Excel 2007 file, SSIS 2008, SQL Server 2008 R2, VARCHAR(MAX) target column) and it worked just fine after deployment into the database server. My package:
Metadata at the Excel Source component's output (checked using Advanced Editor): DT_NTEXT
Derived Column component between source and destination to cast to non-unicode from unicode using (DT_TEXT,1252)
Metadata at the OLE DB Destination component's input (checked using Advanced Editor): DT_TEXT
Target Column data type: VARCHAR(MAX)
I do not explicitly use the extended property IMEX in the connection string.
I executed the package by right-clicking it at the database server, and it loaded a file with a few thousand characters per record into the table without truncation. Hope this helps.
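For reference, the cast in the Derived Column component is an expression along these lines, assuming the long column is named DESCR as in the error messages above:
(DT_TEXT,1252)DESCR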
I have faced this issue while importing an Excel file with a field containing more than 255 characters. I solved the issue using Python.
Simply import the Excel file into a pandas data frame and calculate the length of each of those string values per row.
Then sort the data frame in descending order of that length. The longest value now appears first, so SSIS allocates enough space for that field when it scans the first few rows to size each column:
import pandas as pd
from pandas import ExcelWriter

df = pd.read_excel(f, sheet_name=0, skiprows=1)   # f is the path to the source Excel file
df = df.drop(df.columns[[0]], axis=1)             # drop the first (unused) column
df['length'] = df['Item Description'].str.len()   # length of the long text field
df.sort_values('length', ascending=False, inplace=True)  # longest value first
writer = ExcelWriter('Clean/Cleaned_' + f[5:])
df.to_excel(writer, sheet_name='Billing', index=False)
writer.save()

Can I import SAP tables that were exported by SE16?

I have exported the contents of a table with transaction SE16 by selecting all the entries and choosing Download, unconverted.
I'd like to import these entries into another system (where the same table exists and is active).
Furthermore, when I import, there's a possibility that the specific key already exists for a number of entries (old entries).
Other entries won't have a field with the same key present in the table where they're to be imported (new entries).
Is there a way to easily update my table in the second system with the file provided from the first system? If needed, I can export the data in the 3 other format types (Spreadsheet, Rich text format and HTML format). It seems to me though like the spreadsheet and rich text formats sometimes corrupt the data, and the html is far too verbose.
[EDIT]
As per popular demand: the table I'm trying to export/import is a Z table whose fields are all numeric, character, date or time fields (flat data types).
I'm trying to do it this way because the clients don't have any Basis resource to help them with transports, and they would like to more or less automate the process of updating one of the tables in one system.
At the moment it's a business request to do it like this, but I'm open to suggestions (and the clients are too).
Edit
OK, I doubt that what you describe in your comment exists out of the box, but you can easily write something like it yourself.
Create a method (or function module, if that floats your boat) that accepts the following parameters:
iv_table_name TYPE string and
iv_filename TYPE string
This would be the method:
method upload_table.
  data: lt_table type ref to data,
        lx_root  type ref to cx_root.
  field-symbols: <table> type standard table.

  try.
      " Build an internal table typed dynamically after the target table
      create data lt_table type table of (iv_table_name).
      assign lt_table->* to <table>.

      call method cl_gui_frontend_services=>gui_upload
        exporting
          filename            = iv_filename
          has_field_separator = abap_true
        changing
          data_tab            = <table>
        exceptions
          others              = 4.
      if sy-subrc <> 0.
        " Some appropriate error handling
        " message id sy-msgid type 'I'
        "   number sy-msgno
        "   with sy-msgv1 sy-msgv2
        "        sy-msgv3 sy-msgv4.
        return.
      endif.

      modify (iv_table_name) from table <table>.
      " write: / sy-tabix, ' entries updated'.
    catch cx_root into lx_root.
      " lv_text = lx_root->get_text( ).
      " some appropriate error handling
      return.
  endtry.
endmethod.
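A call to the method could then look like this; the class name and file path are made up for illustration, and the method is assumed to be static:
zcl_table_tools=>upload_table(
  iv_table_name = 'ZMYDBTAB'
  iv_filename   = 'C:\temp\zmydbtab.txt' ).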
This would still require you to make sure that the exported file matches the table you want to import. However, cl_gui_frontend_services=>gui_upload should return sy-subrc > 0 in that case, so you can bail out before you corrupt any data.
Original Answer:
I'll assume that you want to update a Z table and not an SAP standard table.
You will probably have to format your data file a little bit to make it tab- or comma-delimited.
You can then upload the data file using cl_gui_frontend_services=>gui_upload.
Then, if you want to overwrite existing data in the table, you can use:
modify zmydbtab from table it_importeddata.
If you do not want to overwrite existing entries, you can use:
insert zmydbtab from table it_importeddata accepting duplicate keys.
With the ACCEPTING DUPLICATE KEYS addition you will get a return code of sy-subrc = 4 if any of the keys already exist, but all new entries will be inserted. (Without the addition, a duplicate key triggers a runtime error instead.)
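So a cautious import could check the return code explicitly; a minimal sketch:
insert zmydbtab from table it_importeddata accepting duplicate keys.
if sy-subrc = 4.
  " at least one key already existed; those rows were skipped,
  " but all new entries were inserted
endif.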
Note
There are many reasons why you would NOT do this for an SAP standard table. The most prominent is that there is almost always more to the data model than we are aware of. Also, creating transactional data often kicks off follow-on events or workflows, which will not happen if you update the database directly. As a rule of thumb, it is usually a bad idea to update SAP standard tables directly.
In that case, try to find a BAdI or, if none is available, record a BDC and do the updates that way.
If the system landscape were set up correctly, your client would not need any kind of Basis operations support to perform the transports. So instead of re-inventing the wheel, I'd strongly suggest catching up on what the CTS and TMS can do once they're set up with sensible settings.

Resolving an error when using the CRF++ toolkit

To all those who have experience using the CRF++ toolkit (refer: http://crfpp.sourceforge.net/):
Please find below the error message which pops up when trying to execute the CRF++ training program:
CRF++: Yet Another CRF Tool Kit
Copyright (C) 2005-2009 Taku Kudo, All rights reserved.
encoder.cpp(280) [feature_index.open(templfile, trainfile)] feature_index.cpp(86) [max_size == size] inconsistent column size: 21 20 train.data
I'm not sure how to interpret the error message.
There are 20 features in my training file, and the 21st token is the class value.
I have created the CRF++ template file as per the instructions on the site.
It looks like a training-data format issue; make sure the number of columns is consistent across all sentences.
I got this error today and found that the CRF++ toolkit uses the tab character (\t) as the default column separator, whereas my training data file used a single space, which led to the error.
Some points to check:
1. Check that you have a blank line after each sentence.
2. Check that your column values do not contain any spaces.
The error suggests that the number of columns is not the same in every row. Your maximum number of columns is 21, and that count should be consistent throughout the training file, but crf_learn finds 20 somewhere in your train.data file. Locate that row and repair or remove it.
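Since all of these answers boil down to checking column consistency, here is a minimal Python sketch that locates the offending rows, assuming tab-separated columns (the default separator mentioned above) and blank lines between sentences; the file name and the expected column count of 21 come from the error message:
def find_bad_rows(path, expected_cols=21):
    # CRF++ expects every non-blank line to have the same number of columns;
    # blank lines only separate sentences.
    with open(path, encoding='utf-8') as fh:
        for lineno, line in enumerate(fh, start=1):
            line = line.rstrip('\n')
            if not line.strip():
                continue  # sentence separator
            ncols = len(line.split('\t'))
            if ncols != expected_cols:
                print(f'line {lineno}: {ncols} columns: {line!r}')

find_bad_rows('train.data')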
