File transfer: Attachmate Extra! appends username to host file name (mainframe)

Hi, when I try to download a file from the mainframe using Attachmate Extra!, it appends my username to the host file name, and I don't know where to turn this off.
For example, if the file name is yyyy.file.name, the transfer uses username.yyyy.file.name instead.
In version 3.4 the option to append the user name is turned off, but it still happens.

Enclose the entire dataset name (including the high-level qualifier) in single quotes. This is a TSO (not JCL) convention: if you refer to a dataset without single quotes, TSO prepends your user ID as the high-level qualifier; if you place single quotes around the dataset name, it is taken 'as is' (well, it is uppercased, since all z/OS dataset names are uppercase, but otherwise it is 'as is').
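For example (illustrative names; the exact host-file-name field varies by emulator version), the same name with and without quotes resolves differently under TSO:
yyyy.file.name       (unquoted)  ->  resolved as USERID.YYYY.FILE.NAME
'yyyy.file.name'     (quoted)    ->  taken as is: YYYY.FILE.NAME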

Related

Run IBM DataStage job with different files in the same job

I created a job to load Excel data into a database. I need the job to be reusable for different versions of the Excel file: the columns will be the same, only the values change; essentially, inserting the newest version of the Excel values into the database.
For example, the files sales_report_january.xlsx and sales_report_february.xlsx both have the same columns; only the row values differ. I need the job to process both files without changing anything except the file path, because recreating a different job that is identical (except for the file path) for the same task seems inefficient.
Is it possible to do this in IBM DataStage, or do I need to remap everything even though nothing else changes? I already tried changing the file path manually, but it raised an error.
In a word: Parameter
Construct your job using a job parameter for the pathname of the Excel workbook.
Whichever stage you are using to read the worksheet will have the workbook name set up as reference(s) to that parameter.
Tip: Use two parameters; one for the dirname part of the pathname and one for the actual name of the workbook. This is a more flexible design in the long run.
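A sketch of the two-parameter approach (the parameter names here are illustrative, not from your job): reference the parameters in the stage's file name property as
#ExcelDirPath#/#ExcelFileName#
and supply values such as /data/reports and sales_report_february.xlsx at run time.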
I can think of at least four ways to do this. Usually, if the files are all in the same directory, we use looping in the sequence job to process a list of the file names obtained through an appropriate command (such as ls -m pattern for UNIX/Linux). Capture the output, convert the newlines to a delimiter such as comma if necessary, and use that list in the StartLoop activity.
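For instance (activity name and path are illustrative), an Execute Command activity could run
ls -m /data/reports/sales_report_*.xlsx
and the StartLoop activity's delimited list could then be derived from its output with an expression along the lines of
Convert(@FM, ",", Execute_Command_0.$CommandOutput)
passing one file name to the job's parameter on each iteration.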

Issue setting up a save path with integer variables and strings in kdb+

I am basically trying to save to data/${EPOCH_TIME}:
begin_unix_time: "J"$first system "date +%s"
\t:1 save `data/"string"$"begin_unix_time"
I am expecting it to save to data/1578377178
You do not need to cast first system "date +%s" to a long in this case, since you want to join one string to another. Instead you can use
begin_unix_time:first system "date +%s"
to store the string of numbers:
q)begin_unix_time
"1578377547"
q)`$"data/",begin_unix_time
`data/1578377547
Here the comma , joins one string to another, and the cast `$ converts the resulting string to a symbol.
The keyword save saves global data to a file. Given your file path, it looks like you're trying to save down a global variable named 1578377547, and kdb+ cannot handle variable names that are purely numeric.
You might want to try saving a variable named a1578377547 instead, for example. This would change the above line to
q)`$"data/a",begin_unix_time
`data/a1578377547
and your save would work correctly, provided the global variable a1578377547 exists. However, because you are reading the time (down to the second) from Linux directly in the same line that saves the variable, this will likely not work, since the time is constantly changing!
Also note that the timer system command repeats the execution n times (as in \t:n), meaning the same variable will be saved down multiple times if the second does not change. For large n the time will also likely change, and you won't have anything assigned to the global variable you are trying to save once the second rolls over.
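A minimal q sketch of a working version (the a prefix and the 42 payload are illustrative): capture the timestamp once, create the global under that name, then save it.
q)begin_unix_time:first system "date +%s"
q)(`$"a",begin_unix_time) set 42          / creates e.g. global a1578377547
q)save `$"data/a",begin_unix_time         / writes to data/a1578377547
Alternatively, set can write directly to a file handle, e.g. (hsym `$"data/a",begin_unix_time) set 42, which avoids depending on a matching global variable name at all.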

Adding json data to json column is adding escape characters

I am using a Postgres database. I have a table called names, which has a column named 'info' of type json. I am adding
{ "info": "security" , description : "Sednit update: Analysis of Zebrocy: The Sednit group \u2013 also known as APT28, Fancy Bear, Sofacy or STRONTIUM \u2013 is a group of attackers operating since 2004, if not earlier, and whose main objective is to steal confidential information from specific targets.\n\nToward the end of 2015, we started seeing a new component deployed by the group; a downloader for the main Sednit backdoor, Xagent. Kaspersky mentioned this component for the first time in 2017 in their APT trend report and recently wrote an article where they quickly described it under the name Zebrocy.\n\nThis new component is a family of malware, comprising downloaders and backdoors written in Delphi and AutoIt. These components play the same role in the Sednit ecosystem as Seduploader; that of first-stage malware."}
Here I am using Node.js, with Sequelize as the ORM. When I save it in the table, I see "\\n" for "\n" and "\\u" for "\u". Can anyone help me avoid these escaped characters when saving to the table?
"I see \\n for \n and \\u for \u."
In your JSON, description is of type string, so the newline/enter is stored as \n. That is the default behavior; otherwise you would not get the newline/enter back when you fetch the data again.
And \u is for Unicode: you might be saving a smiley or a special character, and that gets converted to such an escape sequence.
So there is no bug; this is how it works.
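A small SQL sketch (table and values are illustrative) showing that the escapes are just the JSON text representation, and the original characters come back when the value is extracted:
CREATE TABLE names (info json);
INSERT INTO names (info) VALUES ('{"description": "line1\nline2 \u2013 dash"}');
SELECT info->>'description' FROM names;  -- prints line1, a real newline, then line2 and an en dash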

Skip element in BizTalk flat file assembly?

I've been tasked to map an input XML (actually an SAP IDoc XML) and to generate a number of flat files. Each input XML may yield multiple output files (one output file per lot number), so I will be using xsl:key and the key() function in my mapping, based on the lot number.
The thing is, the lot number will not be in the file content itself, but the output file name needs to contain that lot number value.
So the question really is: can I map the lot number into the XML and have the flat file assembler skip it when it produces the file? Or is there another way the lot number can be applied as the file name by the assembler without having it inside the file itself?
In your orchestration you can set a context property for each output message:
msgOutput(FILE.ReceivedFileName) = "DynamicStuff";
msgOutput then goes to the send shape.
In your send port you set the output file like this:
FixedStuff_%SourceFileName%.xml
The result:
FixedStuff_DynamicStuff.xml
If the value is not required in the message content, don't map it. That's it.
To insert a value into the file name, the lot number in this case, you will need to promote that value to the FILE.ReceivedFileName context property. Then you can use the %SourceFileName% macro as part of the name setting in the Send Port. You can set FILE.ReceivedFileName by either property promotion or xpath() in an orchestration.
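For the xpath() route, a sketch (the message names and the LotNumber element are assumptions for illustration, not from your schema):
msgOutput(FILE.ReceivedFileName) = xpath(msgInput, "string(//*[local-name()='LotNumber'])");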
Bonus: Sorting and Grouping in xslt is rather unwieldy, which is why I don't do that anymore. Instead, you can use SQL: BizTalk: Sorting and Grouping Flat File Data In SQL Instead of XSL

How to get non standard tags in rpm query

I wanted to add things such as Size, BuildHost, BuildDate etc. to an rpm query, but adding them to the spec file results in an "unknown tag" error. How can I do this so that these things are reflected when I run the rpm query command?
These tags are determined when the package is built; they cannot be forced to specific values.
For example, BuildHost is hardcoded in rpmbuild and cannot be changed. There is an RFE, https://bugzilla.redhat.com/show_bug.cgi?id=1309367, to allow modifying it from the command line, but right now you cannot change it with any tag in the spec file, nor by passing an option to rpmbuild on the command line.
I assume it is very similar for the other values you mentioned.
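Note that these values are recorded automatically in every built package, so no spec changes are needed to read them; --queryformat with the built-in tags is enough. For example (the package name is illustrative):
rpm -q --queryformat '%{NAME}-%{VERSION}: %{SIZE} bytes, built on %{BUILDHOST}, %{BUILDTIME:date}\n' bash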
RPM5 permits arbitrary unique tag names to be added to header metadata.
The tag names are configured in a colon-separated list in a macro; the new tags can then be used in spec files and extracted using --queryformat.
All arbitrary tags are string (or string array) valued.
