I am a new mainframer and I have been given access to (and control of) a test system to play around in and learn. We have been trying to get IMS set up on the system, but when I try to log on to IMS 14 I get the error
"INIT SELF FAILED WITH SENSE 08570002".
I have found that the error code means, "The SSCP-PLU session is inactive."
I am thinking that the issue is with the VTAM configuration but I am not sure what exactly needs to be fixed or where in z/OS to look for it.
I have asked around and dug through documentation with no luck so any help would be very much appreciated.
The message indicates that an attempt was made to establish a session between the SSCP (VTAM) and a primary LU (PLU, typically an application) and the application was not available. This is done on behalf of an SLU (secondary logical unit), which is generally a terminal or printer.
This could be the result of several situations, but here are some common ones:
An attempt was made to log on to something like TSO, CICS, IMS, etc. before the VTAM ACB was actually opened. You can attempt the request again later once the service is up.
To determine whether the PLU (application) is available, use the VTAM command D NET,ID=vtamappl, where vtamappl is the application ID you are trying to connect to. This command is entered on the console directly or through a secondary means such as SDSF.
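For example (the application ID IMS14 is illustrative, and the output below is abbreviated and from memory):

D NET,ID=IMS14,E
IST075I NAME = IMS14, TYPE = APPL
IST486I STATUS= ACTIV, DESIRED STATE= ACTIV

A STATUS of ACTIV means the application has opened its ACB and can accept sessions; CONCT means the minor node is active but the application has not yet opened its ACB, which matches this sense code.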
There may be a LOGAPPL= statement coded on the LU definition that tells VTAM to attempt to initiate a session when the LU is activated. In your case this appears to be happening before the PLU (application) is up. The LU definition (or a generic definition) is in the VTAMLST concatenation.
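As an illustration (all names here are made up), such a definition in a VTAMLST member might look like:

TERM01   LU    LOCADDR=02,LOGAPPL=IMS14,DLOGMOD=D4C32782

If IMS14 is not yet active when TERM01 is activated, the automatic logon attempt fails with exactly this sense code.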
This manual describes the sense code in more detail.
I get this every time I try to create an account to ask this on Stack Overflow:
Oops! Something Bad Happened!
We apologize for any inconvenience, but an unexpected error occurred while you were browsing our site.
It’s not you, it’s us. This is our fault.
That's the reason I'm posting it here: I literally cannot ask it on Stack Overflow, even after spending hours of my day (on and off) repeating my attempts and solving a million reCAPTCHA puzzles. Can you maybe fix this error soon?
With no meaningful/complete examples, and basically no documentation whatsoever, I've been trying to use the "shmop" part of PHP for many years. Now I must find a way to send data between two different CLI PHP scripts running on the same machine, without abusing the database for this. It must work without database support, which means I'm trying to use shmop, but it doesn't work at all:
$shmopid = shmop_open(1, 'w', 0644, 99999); // I have no idea what the "key" is supposed to be. It says: "System's id for the shared memory block. Can be passed as a decimal or hex.", so I've given it a 1 and also tried with 123. It gave an error when I set the size to 64, so I increased it to 99999. That's when the error changed to the one I now face above.
shmop_write($shmopid, 'meow 123', 0); // Write "meow 123" to the shared variable.
while (1)
{
$shared_string = shmop_read($shmopid, 0, 8); // Read the "meow 123", even though it's the same script right now (since this is an example and minimal test).
var_dump($shared_string);
sleep(1);
}
I get the error for the first line:
shmop_open(): unable to attach or create shared memory segment 'No error':
What does that mean? What am I doing wrong? Why is the manual so insanely cryptic for this? Why isn't this just a built-in "superarray" that can be accessed across the scripts?
About CLI:
It cannot work in standalone CLI processes, as an answer here says:
https://stackoverflow.com/a/34533749
The master process is the one to hold the shared memory block, so you will have to use php-fpm or mod_php or some other web/service-running version, and maybe even start/request/stop it all from a CLI php script.
About shmop usage itself:
Use "c" mode in shmop_open() for creating the segment before it can be used with "a" or "w".
I stumbled on this error in a different scenario, where shared memory is completely optional and just speeds up some repeated operations. There I wanted to try reading first, without knowing the memory size, and then allocate based on the actual data when needed. In my case I had to call it as @shmop_open() to suppress this error output.
About shmop on Windows:
PHP 7 crashed the Apache worker process (restarting it with status 3221225477) when trying to reallocate a segment with the same predefined (arbitrary) key but a different size, even after shmop_delete(). As a workaround, I took the filemtime() of the source file containing the data to be stored in memory and used that timestamp as the key in shmop_open(). It still wasn't flawless IIRC, and I don't know whether it leaks memory, but it was enough to test my code, which would mainly run on Linux anyway.
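Roughly, the workaround looked like this (the file path is hypothetical):

$dataFile = '/path/to/data.bin';                         // hypothetical source file
$key = filemtime($dataFile);                             // timestamp doubles as the shmop key
$shm = shmop_open($key, 'c', 0644, filesize($dataFile)); // fresh key each time the file changes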
Finally, as of PHP version 8.0.7, shmop seems to work fine with Apache 2.4.46 and mod_php in Windows 10.
I'm volunteering at a uni lab, and I was tasked with removing the dependency on Labview (among other things).
The only problem there for me is the VISA resource. I have no clue (and can't seem to figure out) what exactly the format of the data being sent is.
The VISA buffer seems to get a string, but I've been told that what's being sent is just numbers (0-255), which makes sense, except that the buffer receives a string.
When I looked at the COM port using MAX, I saw that there's a termination character on write only (which does make sense, given that the device isn't meant to send any data back).
The baud rate on the COM port also says 96,000, while the block diagram feeds in a higher number when initializing the VISA resource (though I didn't check it through MAX after running the thing, so it may just stay at the default until I run it).
The device also doesn't respond to an *IDN? query (it times out), though I hope that's not a problem since, as mentioned, the device isn't meant to send back data; I had assumed whatever chip implements the instrument side should also respond. pyVISA throws no errors (even with logging enabled), and any attempt to write just gives me success code 0.
All in all, short of debugging LabVIEW to see exactly what's being fed to the buffer (which I haven't done yet; as a volunteer I'm not sure I'm even entitled to a LabVIEW license on my laptop), I'm at a loss as to how to get all the information I need to imitate in pyVISA what's going on in LabVIEW. Right-clicking on the VISA resource and looking at its properties is of little help.
Note: I'm using pyVISA-py as a backend for pyVISA, since it seems I would also need a license for NI's VISA drivers.
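For what it's worth, the serial settings worth lining up against what MAX and the block diagram report can all be set explicitly through pyVISA. Everything in this sketch (resource name, baud rate, terminator, payload) is a placeholder to be replaced with the real values:

import pyvisa

rm = pyvisa.ResourceManager('@py')        # force the pure-Python pyvisa-py backend
print(rm.list_resources())                # discover the real serial resource name

inst = rm.open_resource('ASRL1::INSTR')   # placeholder resource name
inst.baud_rate = 115200                   # must match what the LabVIEW VI configures
inst.data_bits = 8
inst.write_termination = '\r'             # appended by write(); MAX showed a write-only terminator
inst.read_termination = None              # device sends nothing back

inst.write_raw(bytes([0x01, 0xFF, 0x00])) # raw byte values 0-255; write_raw does NOT append the terminator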
Using uClinux, we have one of two flash devices installed: a 1GB flash or a 2GB flash. We need to determine at runtime which one is present.
The only way I can think of to solve this is to somehow get the device ID, which is down in the device driver code; for me that is in:
drivers/mtd/devices/m25p80.c
I have been using the command mtdinfo (which comes from the mtd-utils binaries, derived from mtdinfo.c/h). There is various information in there about the flash partitions, including the flash type ('nor'), eraseblock size ('65536'), etc., but nothing that I can identify the chip with.
It's not very clear to me how I can get information from "driver-land" into "user-land". I am looking at extending the mtdinfo command to print more information, but there are many layers...
What is the best way to achieve this?
At the moment, I have found no easy way to do this without code changes. However I have found an easy code change (probably a bit of a hack) that allows me to get the information I need:
In the relevant file (in my case drivers/mtd/devices/m25p80.c) you can call one of the following (each takes the struct device pointer as its first argument; in m25p80.c that is &spi->dev):
dev_err(dev, "...");
dev_alert(dev, "...");
dev_warn(dev, "...");
dev_notice(dev, "...");
_dev_info(dev, "...");
These are defined in include/linux/device.h, so they are part of the Linux driver interface and you can use them from any driver.
I found that dev_err() and dev_alert() both get printed "on screen" at run time. However, all of these device messages can be found in /var/log/messages. Since I added a message in the format dev_notice(dev, "JEDEC id %06x\n", jedecid);, I could find the device ID with the following command:
cat /var/log/messages | grep -i jedec
Obviously using dev_err() or dev_alert() is not quite right! dev_notice() or even _dev_info() seems more appropriate.
Not yet marking this as the answer since it requires code changes - still hoping for a better solution if anyone knows of one...
Update
Although the above "solution" works, it's a bit crappy: it will certainly do the job and is good enough for mucking around, but I decided that if I am making code changes I may as well do them properly. So I have now implemented changes that add an interface in sysfs, such that you can get the flash ID with the following command:
cat /sys/class/m25p80/m25p80_dev0/device_id
The main function calls required for this are (in this order):
alloc_chrdev_region(...)
class_create(...)
device_create(...)
sysfs_create_group(...)
This should give enough of a hint for anyone wanting to do the same, though I can expand on that answer if anyone wants it.
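For anyone wanting the same, a compressed sketch of that plumbing is below, written against a reasonably recent 2.6+ kernel (older trees differ slightly). The names are my own, the id is assumed to have been stashed in a variable by the probe code, and error handling is omitted for brevity:

#include <linux/device.h>
#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/module.h>

static u32 jedec_id_saved;   /* assume the probe routine saved the id here */
static dev_t devt;
static struct class *m25p80_class;
static struct device *m25p80_dev;

/* Invoked for: cat /sys/class/m25p80/m25p80_dev0/device_id */
static ssize_t device_id_show(struct device *dev,
                              struct device_attribute *attr, char *buf)
{
        return sprintf(buf, "%06x\n", jedec_id_saved);
}
static DEVICE_ATTR(device_id, 0444, device_id_show, NULL);

static struct attribute *m25p80_attrs[] = {
        &dev_attr_device_id.attr,
        NULL,
};
static struct attribute_group m25p80_attr_group = {
        .attrs = m25p80_attrs,
};

static int __init m25p80_sysfs_init(void)
{
        alloc_chrdev_region(&devt, 0, 1, "m25p80");
        m25p80_class = class_create(THIS_MODULE, "m25p80");
        m25p80_dev = device_create(m25p80_class, NULL, devt, NULL, "m25p80_dev0");
        return sysfs_create_group(&m25p80_dev->kobj, &m25p80_attr_group);
}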
I need to make an old Linux box running a 2.6.12.1 kernel communicate with an older computer that uses:
ISO 8602 Datagram (connectionless service) 1987 12 15 (1st Edition)
ISO 8073 Class 4 (connection oriented service)
These are using "Inactive Network Layer" subset. (I am pretty sure this means I do not have to worry about routing. The two end points are hitting each other with their mac addresses.)
I have a kernel module that implements the connectionless part. In order to get the connection oriented service operational, what is the best approach? I have been taking the approach of adding in the struct proto_ops .connect, .accept, .listen functions to my existing connectionless driver by referring to the tcp implementation.
Maybe there is a better approach? I am spending a lot of time trying to work out what the tcp code is doing and then deciding whether that is relevant to my needs. For example, the Nagle algorithm isn't needed, because I don't have small bits of data being transmitted. Likewise, there is probably a lot of error-recovery and flow-control machinery I don't need, because I know what data the two endpoints transmit and how frequently they transmit it. My plan is to implement this first with whatever simplistic (if any) packet retransmission, sequencing, etc. gets my Wireshark capture looking similar to the capture I have from the live system; then try mine against the real thing, and then add in whatever error-recovery/retransmit logic seems necessary. In other words, it is a pain in the rear trying to separate the guts of the tcp/stream implementation that I want to copy from the extra error-correction/flow-control stuff that I might never need.
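For reference, the shape of that approach, a skeletal proto_ops table in the 2.6 style, looks something like the following; the tp4_* handler names and the PF_ value are hypothetical, and the sock_no_* stubs come from net/core/sock.c:

static struct proto_ops tp4_proto_ops = {
        .family     = PF_ISOTP,        /* hypothetical protocol family */
        .owner      = THIS_MODULE,
        .release    = tp4_release,
        .bind       = tp4_bind,
        .connect    = tp4_connect,     /* active open: send CR TPDU, wait for CC */
        .accept     = tp4_accept,      /* dequeue a connection whose handshake completed */
        .listen     = tp4_listen,      /* start accepting inbound CR TPDUs */
        .sendmsg    = tp4_sendmsg,
        .recvmsg    = tp4_recvmsg,
        .poll       = tp4_poll,
        /* anything not implemented can point at the kernel's sock_no_* stubs */
        .ioctl      = sock_no_ioctl,
        .shutdown   = sock_no_shutdown,
        .mmap       = sock_no_mmap,
};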
I found net/core/stream.c which says:
* Generic stream handling routines. These are generic for most
* protocols. Even IP. Tonight 8-).
* This is used because TCP, LLC (others too) layer all have mostly
* identical sendmsg() and recvmsg() code.
* So we (will) share it here.
This suggested to me that there might be a simpler stream implementation that I can start from. Can someone recommend a more basic stream driver that I should start from instead of tcp?
Is there any example code that provides a basic stream implementation?
I made a user-level library to implement the protocol, providing my own versions of open/read/write/select etc. If anyone else cares, you can find me at http://pnwsoft.com
Do not attempt to use openss7. It is a total waste of time.
This one has me rather confused. I've written a query which runs fine from my development client but fails on the production client with the error "ORA-01652: unable to extend temp segment by....". In both instances, the database and user are the same. On my development machine (MS Windows) I've got SQL*Plus (Release 9.0.1.4.0) and Toad 9.0 (both using version 9.0.4.0.1 of the oci.dll). Both run the code without errors.
However, when I run the same file, against the same database, using the same username/password, from a different machine, this time with version 10.2.0.4.0 (from the 10.2.0.4-1 Oracle Instant Client), I get the error.
It does occur reproducibly.
Unfortunately I've only got limited access to the dictionary views on the database which is set up as read-only (can't even get an explain plan!).
I've tried working around the problem by tuning the query (I suspect that there is a large interim result set which is subsequently trimmed down) but have not managed to change the behaviour at either client.
It may be possible to deploy a different version of the client on the machine causing the problems - but currently that looks like downgrading to a previous version.
Any ideas?
TIA
Update
Based on Gary's answer below, I had a look at the glogin.sql scripts. The only difference was that 'SET SQLPLUSCOMPATIBILITY 8.1.7' was present on the working client but absent on the failing client; adding it in did not resolve the problem.
I also tried
alter session set workarea_size_policy=manual;
alter session set hash_area_size=1048576000;
and
alter session set sort_area_size=1048576000;
to no avail :(
Update 2
I managed to find the same behaviour, this time talking to an Oracle 8i backend. In this case the database was read-write, which allowed me to confirm that the different clients were, as I suspected, generating different plans. But why????
Looking at the output of 'SHOW PARAMETERS' both clients reported exactly the same settings!
Years ago I worked on a DR database that was fully READONLY, and even the TEMP tablespace wasn't writable. Any query that tried to spill to temp would fail (even if the temp space to be used was pretty trivial).
If this is the same situation, I wouldn't be surprised if there were a login.sql (or glogin.sql, or a logon trigger) that does an ALTER SESSION to set a larger PGA memory value for the session, and/or changes the optimizer goal to FIRST_ROWS.
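For instance, entries like these in a client-side login.sql/glogin.sql would silently change plans for every session from that machine (the values are purely illustrative):

-- hypothetical login.sql / glogin.sql content
ALTER SESSION SET optimizer_mode = FIRST_ROWS;
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 104857600;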
If you can, compare the results of the following from both clients:
select * from v$parameter
where ismodified != 'FALSE';
Also, from each client, try the following for the problem SQL:
EXPLAIN PLAN FOR SELECT ...;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
See if it is coming up with different query plans.
Not really an answer - but a bit more information....
Our local DBAs were able to confirm that the 16GB (!) TEMP tablespace was indeed being used and had filled up, but only from the Linux clients (I was able to recreate the error by making an oci8 call from PHP). In the case of the sqlplus client, I was actually using exactly the same file to run the query on both clients (copied via scp without text conversion, so line endings were CRLF; i.e. byte-for-byte the same as what was running on the Windows client).
So the only rational conclusion was that the two client stacks were resulting in different execution plans!
Running the query from both clients approximately simultaneously on a DBMS with very little load gave the same result, meaning that the two clients also generated different sqlids for the query.
(and also Oracle was ignoring my hints - I hate when it does that).
There is no way Oracle should be doing this: even if the client stack were doing some internal munging of the query before presenting it to the DBMS (which would give rise to the different sqlids), it should be totally transparent with regard to the choice of an execution plan; that should only ever change based on the content of the query and the state of the DBMS.
The problem was complicated by not being able to see any explain plans, but for the query to use up so much temporary tablespace, it had to be doing a very ugly join (at least partially Cartesian) before filtering the result set. Adding hints to override this had no effect. So I resolved the problem by splitting the query into two cursors and doing a nested lookup in PL/SQL. A very ugly solution, but it solved my immediate problem; fortunately I just need to generate a text file.
For the benefit of anyone finding themselves in a similar pickle:
DECLARE
  CURSOR query_outer IS
    SELECT some_primary_key, some_other_stuff
      FROM atable
     WHERE ....;

  CURSOR query_details (p_some_pk atable.some_primary_key%TYPE) IS
    SELECT COUNT(*) AS cnt, SUM(avalue) AS total -- aliases so the loop can reference n.cnt / n.total
      FROM btable
     WHERE fk = p_some_pk
       AND ....;
BEGIN
  FOR m IN query_outer
  LOOP
    FOR n IN query_details(m.some_primary_key)
    LOOP
      dbms_output.put_line(....);
    END LOOP;
  END LOOP;
END;
/
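(If you run this from SQL*Plus, remember SET SERVEROUTPUT ON, or the dbms_output lines will be swallowed.)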
The more I use Oracle, the more I hate it!
A bit more information - I've run into the same problem connecting to a different database - this time an Oracle 8i. And I can get EXPLAIN plans.
Although in this case the query completed on both clients, it took 50% longer running from Linux SQL*Plus 10.2.0.4.0 than from WinXP SQL*Plus 8.0.6.0.
As I suspected, the plans are different, but both clients are using the default settings and the same optimizer mode. The optimizer seems to think the plan generated from the Linux client has a lower cost than the one from the XP client (although that plan takes longer to run). How does the optimizer prune the search path for potential plans?