CICS region automatically restarts after being shutdown - mainframe

It won't let me kill the winning vote (the top one); see below:
Cmd: C cancel request   K kill request   S show request details

Cmd Action WIN Request/Vote Data
--- ------ --- ----------------------------------------------------------------
    START  Y   Vote       : MakeAvailable
               From Req.  : MakeAvailable for TCPIP/APL/DEVP
               Created    : 2018-11-14 21:03:02
               Originator : OPERATOR(XTSG101)
               Priority   : 02740000  Must Be Up - Operator
    STOP   *   Request    : MakeUnAvailable
               Created    : 2018-06-01 15:13:56
               Originator : AUTOOPS(AUTOPCR)
               Priority   : 02620000  Must Be Down - Automation
               Status     : Losing/Satisfied


Remove audit trail from a RLIST command issued via ADDRESS TSO

I'm trying to write a script that queries specific resource profiles in a RACF class and later does a bit of logic to match a few things (not relevant here).
The problem is that when I issue the command below, I get the AUDIT TRAIL on the terminal. The script is meant to just return a 1 or a 0. All the logic works as it should, but when I run the script I get the whole AUDIT TRAIL from RACF, with the result at the bottom.
y = outtrap('resourceAccess.')
address tso 'RLIST CLASSX CLASSX.RESOURCE.LIST'
y = outtrap('off')
I already tried to create another outtrap after the one above with no success.
Is there a way to remove that AUDIT TRAIL bit?
It's possible that those lines of text are being issued in such a way that they cannot be trapped using outtrap and are instead being placed on the external data queue (EDQ) and then echoed to the terminal when the REXX exits. ACF2 does this with all output, making trapping command responses a bit tricky.
Try this:
/* Trap command response*/
y = outtrap('temp.')
address tso 'RLIST CLASSX CLASSX.RESOURCE.LIST'
y = outtrap('off')
/* Display anything put onto the EDQ */
do queued()
  parse pull line /* parse pull preserves case; plain pull would uppercase it */
  say line
end
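If the audit trail does turn out to arrive via the EDQ, a variation on the same idea is to pull those lines into the trapped stem rather than displaying them (a sketch, not tested against RACF):
/* Append any EDQ lines to the trapped output */
do while queued() > 0
  n = temp.0 + 1
  parse pull temp.n /* preserve case while reading from the queue */
  temp.0 = n
end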
Old answer:
If the output you are getting matches what's in the IBM docs you linked to (https://www.ibm.com/docs/en/szs/2.2?topic=effects-command-audit-trail), then what you need to do, after having trapped the output, is simply discard the first 2 lines, which should be:
Command Audit Trail for USER IBMUSER
(one line of text and a blank line).
You could do this as follows:
y = outtrap('temp.')
address tso 'RLIST CLASSX CLASSX.RESOURCE.LIST'
y = outtrap('off')
/* Copy from the 3rd command response line into our 'real' response var */
do tempIndex = 3 to temp.0
  desiredIndex = tempIndex - 2
  resourceAccess.desiredIndex = temp.tempIndex
end
resourceAccess.0 = temp.0 - 2 /* Set number of lines */
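If the number of header lines isn't guaranteed, a slightly more defensive variant (a sketch, assuming the header text matches the IBM docs above) is to look for the "Command Audit Trail" line itself:
y = outtrap('temp.')
address tso 'RLIST CLASSX CLASSX.RESOURCE.LIST'
y = outtrap('off')
/* Skip the audit trail header line and its trailing blank line, if present */
first = 1
if temp.0 >= 1 then
  if pos('COMMAND AUDIT TRAIL', translate(temp.1)) > 0 then
    first = 3
j = 0
do tempIndex = first to temp.0
  j = j + 1
  resourceAccess.j = temp.tempIndex
end
resourceAccess.0 = j /* Set number of lines */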

How can I avoid escape chars in inserted binary string with Elixir/Ecto/Postgrex?

I'm new to Elixir/Ecto and I don't understand why my error_data field (defined as :binary in the schema) gets inserted slash-escaped in my PostgreSQL column:
params = %{error_data: "eyJtZXNzYWdlIjoiSW52YWxpZCB0b2tlbiIsImNhdXNlIjpbXSwiZXJyb3IiOiJub3RfZm91bmQiLCJzdGF0dXMiOjQwMX0="}
cast(%{}, params, [:error_data])
|> change(%{error_data: Base.decode64!(params.error_data)})
|> Ecto.Repo.insert()
Following #smathy's insight, I've put an IO.puts(get_change(changeset, :error_data)) between the change and insert calls. It shows the data has been decoded and is not slash-escaped before insertion. But the next line, showing the Ecto query, is escaped... Check my app's output:
[info] Creating error for 1 on channel 1
{"message":"Invalid token","cause":[],"error":"not_found","status":401}
[debug] QUERY OK db=0.5ms
INSERT INTO "errors" ("code","error","error_message","http_status","id","channel_id","inserted_at","updated_at") VALUES ($1,$2,$3,$4,$5,$6,$7,$8) RETURNING "id" ["error-03", "{\"message\":\"Invalid token\",\"cause\":[],\"error\":\"not_found\",\"status\":401}", "Invalid token", 401, 1, 1, ~N[2021-02-16 12:24:58], ~N[2021-02-16 12:24:58]]
Then check these DB queries out: the first is for the error inserted by the code; the last is from a manually inserted, non-escaped error:
dev=# SELECT error FROM errors ORDER BY updated_at DESC limit 1;
error
---------------------------------------------------------------------------------------
"{\"message\":\"Invalid token\",\"cause\":[],\"error\":\"not_found\",\"status\":401}"
(1 row)
dev=# SELECT error FROM errors ORDER BY updated_at ASC limit 1;
error
---------------------
{"eita": "deu pau"}
(1 row)
How can I avoid that escape and insert the plain decoded ({"message":"Invalid token","cause":[],"error":"not_found","status":401}) content?
If I could use Ecto fragments on insertion, I'd have told the DB to decode the base64 string itself... I didn't find how to do that either... any help?
I wonder if there is any environment configuration that affects Ecto's query logging and ends up string-casting/escaping the error_data binary...
They're not really there; they're just being displayed that way by whatever tool you're using to print out that value, because that tool uses " as the string delimiter and therefore escapes embedded quotes to avoid ambiguity.
The same thing happens in an iex session: if you actually print out the value, it comes out as you're expecting, because outputting a string doesn't include the delimiters:
iex(6)> Base.decode64! "eyJtZXNzYWdlIjoiSW52YWxpZCB0b2tlbiIsImNhdXNlIjpbXSwiZXJyb3IiOiJub3RfZm91bmQiLCJzdGF0dXMiOjQwMX0="
"{\"message\":\"Invalid token\",\"cause\":[],\"error\":\"not_found\",\"status\":401}"
iex(7)> IO.puts v
{"message":"Invalid token","cause":[],"error":"not_found","status":401}
:ok
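If you want further convincing that there is no literal backslash byte in the data, you can check it directly (a quick sanity check; the variable name is just illustrative):
decoded = Base.decode64!("eyJtZXNzYWdlIjoiSW52YWxpZCB0b2tlbiIsImNhdXNlIjpbXSwiZXJyb3IiOiJub3RfZm91bmQiLCJzdGF0dXMiOjQwMX0=")
IO.inspect(decoded)                      # inspect adds the \" escapes for display only
IO.puts(decoded)                         # prints the raw JSON, no backslashes
IO.puts(String.contains?(decoded, "\\")) # false: no literal backslash in the data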
Update
This is me running a psql query after running precisely the code you've shown above on a string (varchar) field:
testdb=# select error_data from tt;
error_data
-------------------------------------------------------------------------
{"message":"Invalid token","cause":[],"error":"not_found","status":401}
(1 row)

Removing commas from ADF Sink Text File in my Pipeline

I have a sink file (essentially a report) as an output from an ADF Copy Activity in a Pipeline, where the formatting has to be "perfect" to meet a 3rd-party vendor's requirements. The following shows the first line; it is all perfectly correct, BUT I need ALL the commas removed as the last step in my ADF Pipeline. I could not get ADF to create this without delimiters.
Can anyone please suggest a possible solution? I need this for Production.
Thanks!
Mike Kiser
05, ,2021-01-21, ,BMECOL, ,,,,,,, ,0000000000,0000000000,0000000000,000000000000000,+,00000000000,+,000000000000000000000,10,E,2007-10-09 00:00:00.0000000,XXXXXXXXX06,BMECOL, , , , ,00,,Henry,W, ,Loescherkisertest3,,M,1960-01-01 00:00:00.0000000, ,USA,,,,XXXXXXXXX06,,,,,,,S,Single,,010004,,,,,,,,,,15,1,,XXXXXXXXX06,BMECOL , , , , ,0 ,Address 1,Address 2,Address 3, ,City,OH, ,United States,12345,Home, ,541/981-1818, ,, ,hloescher#battelleecology.org,20, ,,XXXXXXXXX06,BMECOL , , , , ,00,,5,HIS, , , ,25, ,,XXXXXXXXX06,BMECOL , , , , ,00,0000,C4029O,Professional,000000,000000 ,S,5200,M ,CO0001,BE CS Boulder HQ Ops,D22,NO ,REGULAR , ,,BCO , , ,40,0000000000 ,0000000000 ,1900-01-01 00:00:00.0000000,1900-01-01 00:00:00.0000000
I think you can use column patterns in a Data Flow to replace the commas.
I've created a simple test: I saved the sample data as a single row in a txt file and selected No delimiter as the Column delimiter, so the data preview shows the whole line read into one column.
In the DerivedColumn1 activity, we can use replace($$,',','') to remove the commas; the sink preview then shows the line with the commas stripped.

Time Summing Issue with PL/SQL-Linked Excel File

Going to try my best to explain this issue thoroughly, as I've spent literally days and days trying to figure out a fix to this problem to no avail.
I have some excel report templates I have built that reference Five9 Call Data exports. I've been manually having to export the data from the Five9 application on a regular basis to refresh these reports, so I decided to make my process more efficient and automated by coding my exports and having them link directly to my excel workbook reports for easy refreshing.
I have basic T-SQL experience, but realized I would need PL/SQL experience since this is an Oracle-based DB. After a couple of days I figured out how to code the exports I need in PL/SQL and set them up as linked Excel workbooks. But then I ran into the issue...
My reports rely on being able to SUM the Time data from my exports, but my SUM formulas are now resulting in Zero when they shouldn't. The issue is not a matter of formatting the columns to Time ([h]:mm:ss...yes that was my first thought too). The issue is that the data is literally not recognized as a time at all and is recognized as text.
This article explains the exact issue I'm experiencing: http://theexceltrainer.co.uk/adding-up-times-in-excel-results-in-zero/
The problem is it doesn't give me a reasonable workaround. The text-to-columns trick it suggests will not stick when I refresh the data the next time I need to update the report (the times revert to being identified as text). The whole purpose of coding my exports was to make this process completely automated, so having to go in and reformat (do the text-to-columns thing) each time completely defeats the purpose.
Is there no other workaround for this? Possibly something to add into the code that will make Excel recognize the time as actual time and not text? The problem did not exist when I used to manually export the data as a CSV file... but I can't (or at least I don't know how to) set up a PL/SQL-linked Excel file that is also CSV. I'm at a loss...
I have included my code, if that helps:
SELECT
CALL_DATE as "Date"
, CALL_TIME as "Time"
, CALL_TIMESTAMP as "Timestamp"
, RING_TIME as "Ring Time"
, DIAL_TIME as "Dial Time"
, CALL_TIME_2 as "Call Time"
, AFTER_CALL_WORK_TIME as "After Call Work Time"
, CAMPAIGN as "Campaign"
, CAMPAIGN_TYPE as "Campaign Type"
, CALL_TYPE as "Call Type"
, DISPOSITION as "Disposition"
, AGENT_GROUP as "Agent Group"
, AGENT_NAME as "Agent Name"
, SALESFORCE_ID as "Salesforce ID"
FROM Comp_DB.FIVE9_CALLS
WHERE
AGENT_GROUP = 'Collections'
and TRUNC(CALL_DATE) BETWEEN '01-June-2017' and TRUNC(SYSDATE)
ORDER BY
CALL_DATE asc
RING_TIME, DIAL_TIME, CALL_TIME_2 and AFTER_CALL_WORK_TIME are the fields I need to be able to SUM. They already pull in the correct format of "hh:mm:ss"...but Excel still isn't recognizing them as time...
First off, may I recommend replacing that string in your WHERE clause with something like this:
TRUNC(CALL_DATE) BETWEEN to_date('01-06-2017','dd-mm-yyyy') and TRUNC(SYSDATE)
so that you don't leave it to the Oracle engine to (hopefully) resolve your date string implicitly.
Now for your problem: maybe you can do your "math" within the query as opposed to doing it in Excel, as in:
SELECT
CALL_DATE as "Date"
, CALL_TIME as "Time"
, CALL_TIMESTAMP as "Timestamp"
, RING_TIME as "Ring Time"
, DIAL_TIME as "Dial Time"
, CALL_TIME_2 as "Call Time"
, AFTER_CALL_WORK_TIME as "After Call Work Time"
, CAMPAIGN as "Campaign"
, CAMPAIGN_TYPE as "Campaign Type"
, CALL_TYPE as "Call Type"
, DISPOSITION as "Disposition"
, AGENT_GROUP as "Agent Group"
, AGENT_NAME as "Agent Name"
, SALESFORCE_ID as "Salesforce ID"
, numtodsinterval(
(SUBSTR(RING_TIME, 1, 2)*3600 + SUBSTR(RING_TIME, 4, 2)*60 + SUBSTR(RING_TIME, 7, 2))+
(SUBSTR(DIAL_TIME, 1, 2)*3600 + SUBSTR(DIAL_TIME, 4, 2)*60 + SUBSTR(DIAL_TIME, 7, 2))+
(SUBSTR(CALL_TIME_2, 1, 2)*3600 + SUBSTR(CALL_TIME_2, 4, 2)*60 + SUBSTR(CALL_TIME_2, 7, 2))+
(SUBSTR(AFTER_CALL_WORK_TIME, 1, 2)*3600 + SUBSTR(AFTER_CALL_WORK_TIME, 4, 2)*60 + SUBSTR(AFTER_CALL_WORK_TIME, 7, 2))
, 'SECOND') as "Total duration"
FROM Comp_DB.FIVE9_CALLS
WHERE
AGENT_GROUP = 'Collections'
and TRUNC(CALL_DATE) BETWEEN to_date('01-06-2017','dd-mm-yyyy') and TRUNC(SYSDATE)
ORDER BY
CALL_DATE asc
This basically splits the time elements (hours, minutes, seconds) out of every field, converts each to its representation in seconds, adds those up across all your desired columns, and finally converts the resulting total of seconds into a day-to-second interval with numtodsinterval.
Note that I'm assuming from your OP that the fields "already pull in the correct format of hh:mm:ss", and so the substr calls are picking out the hour, minute, and second elements at those expected positions.
PS: The method I've just described gives you a row-level summation, not a column-level one; please specify which one you actually need so we can provide a better answer.
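For what it's worth, if it turns out to be the column-level total you need, the same seconds conversion can simply be wrapped in SUM(); a minimal sketch under the same hh:mm:ss assumption:
SELECT numtodsinterval(
  SUM( SUBSTR(RING_TIME, 1, 2)*3600 + SUBSTR(RING_TIME, 4, 2)*60 + SUBSTR(RING_TIME, 7, 2)
     + SUBSTR(DIAL_TIME, 1, 2)*3600 + SUBSTR(DIAL_TIME, 4, 2)*60 + SUBSTR(DIAL_TIME, 7, 2)
     + SUBSTR(CALL_TIME_2, 1, 2)*3600 + SUBSTR(CALL_TIME_2, 4, 2)*60 + SUBSTR(CALL_TIME_2, 7, 2)
     + SUBSTR(AFTER_CALL_WORK_TIME, 1, 2)*3600 + SUBSTR(AFTER_CALL_WORK_TIME, 4, 2)*60 + SUBSTR(AFTER_CALL_WORK_TIME, 7, 2) )
  , 'SECOND') as "Grand Total"
FROM Comp_DB.FIVE9_CALLS
WHERE
AGENT_GROUP = 'Collections'
and TRUNC(CALL_DATE) BETWEEN to_date('01-06-2017','dd-mm-yyyy') and TRUNC(SYSDATE)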

Find a string using PHPMyAdmin

I have a table in my DB called dle_post whose rows contain id and full_story. I want to check whether full_story starts with "1." and, if so, list its id. The big problem is that there are some spaces at the start of full_story, sometimes 1, sometimes 2, and sometimes 3. How can I list all ids starting with "1."?
You want to execute some SQL like this, which you can also do in phpMyAdmin...
SELECT id FROM dle_post WHERE LTRIM(full_story) LIKE '1.%';
I think this will work!
Would this query help:
$id = '...'; /* fetch the id here */
mysql_query("SELECT * FROM YOUR_TABLE WHERE id LIKE '%" . $id . "%'", $someconnection);
Replace YOUR_TABLE with your table name. (Note that the old mysql_* API is deprecated; prefer mysqli or PDO.)
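For completeness, if you'd rather run the first answer's LTRIM query from PHP than from phpMyAdmin, a minimal sketch using mysqli (the connection details are placeholders):
$db = new mysqli('localhost', 'user', 'password', 'database'); // placeholder credentials
$result = $db->query("SELECT id FROM dle_post WHERE LTRIM(full_story) LIKE '1.%'");
while ($row = $result->fetch_assoc()) {
    echo $row['id'], PHP_EOL; // list every matching id
}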
