I'm currently working as a developer in the mainframe field. Reading a lot of documentation, I understand that the real power of such systems is that they can handle many transaction (input/output) operations at the same time. They're also expected to maintain high performance.
So I was wondering: aren't modern systems capable of performing the same or even better?
To correct some misunderstandings:
Mainframe hardware is not "old" -- it has seen continuous development and undergone a refresh cycle every two or three years. The chippery involved is in some ways more advanced than x86 -- things like having a spare CPU on each chip -- and most of the differences are aimed at reliability and availability rather than raw performance.
Having said that, both manufacturers are moving the same electrons around on the same silicon, so actual per-CPU performance is much the same.
Likewise, mainframe software comes in two varieties: "ancient" and "modern".
Some software, like CICS, was first developed in the 1970s, and although it is actively maintained it still contains some of the original code.
Some software (IEBCOPY, we are looking at you) was developed in the 1960s, was considered terrible even then, and has been left untouched for decades.
However, z/OS also provides a fully POSIX-compliant UNIX environment in which you can run any compliant J2EE application or compile and run any C/C++ program.
While a well set up x86 environment can match the raw processing power, it falls slightly behind when it comes to reliability and availability.
The main reason why so many large corporations stick with the mainframe is the large body of bespoke software written for COBOL/CICS and PL/1-IMS environments at a time when hardware was expensive and coding efficiency was at a premium.
So you could re-write an old COBOL/CICS application in Java/J2EE, but you would need about five times the raw processing power for the new system, always assuming you could work out what business rules and logic were embedded in the older system.
There are many factors involved in choosing a platform. The fact is that existing mainframes (generally IBM z/OS systems are what is implied) have a massive amount of existing programs, business processes, disaster recovery plans, etc. that would all need to be refactored. You're talking about migrating existing applications based on runtimes that do not exist on other platforms, not to mention the massive amount of data that exists both transactionally and historically.
For instance, Customer Information Control System (CICS) uses a specific API, the EXEC CICS command interface, through which program calls, database interactions, and internal programming facilities like queues are provided. All of these programs would need to be rewritten, ported and re-established by moving the programs, processes and data to new platforms. It's rewriting 50 years of a business's investment.
This is inherently risky to a business. You are disrupting existing operations, intellectual property and data to gain what? The cost of any such move is massive and risky, and for what benefit? It ends up being a risk/reward calculation.
Bear in mind that there is a new legacy built on Windows and Linux that will likely be "disrupted" in the future, and it's not likely that one would move all those applications either, for the same reasons.
As @james pointed out, mainframes are close to being, if not currently, the fastest single general computing platforms out there. New hardware versions come out every two years, and software is always being added to the platform: Java, Node, etc. The platform continues to evolve.
It's a complicated subject and not as simple as "use other technology" to perform the same or better. It's moving the programs, data and processes that is really the hard part.
"performing better" is highly unlikely because the mainframe segment is still highly relevant in business and its architectures are kept closely up-to-date with all evolutions both in hardware and [system] software.
"performing the same" is theoretically certainly possible, but it must be noted that some of the more popular mainframe architectures have significantly different hardware setups, e.g. the processors in z/OS systems are, comparatively, pathetically slow, but they delegate lots and lots of work to coprocessors, and it must also be noted that on the software side, mainframers tend to have a higher degree of "resource-awareness" than, eurhm, more "modern" developers.
That said, of course any answers to this question will necessarily be more opinion than hard-facts based, which makes it an unfortunate thing to ask here.
At some point I read that operating systems were meant to be created in a certain way (i.e. as a 'microkernel') to be resistant to faults, but are made in another way (i.e. 'monolithic') for practical purposes such as speed. While this is not the question, it does bring up the question:
Have any fundamental tradeoffs been made in computer architecture that have reduced security from the earlier theoretical systems?
I'm looking for answers that are accepted in the field of computer science, not opinions on current implementations. For example, programs could run faster if they were all built on custom hardware; this is known, but it is impractical, which is why we have general-purpose computers.
Anyone saying that microkernels are "the right way" is wrong. There is no "right way". There is no objectively best approach to security.
The problem is that security is not fundamental to computing. It's a sociological issue in a lot of ways; it only exists because humans exist - unlike computation, which is a literal science.
That said, there are principles of security that have held true and been incorporated into hardware and software, like the principle of least privilege. The kernel in an operating system runs, on the hardware, at a higher privilege level than userland processes. That's why your program can't actually interact with hardware directly, and has to use system calls to do so.
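As a minimal illustration of that boundary (a sketch assuming a POSIX system, not tied to any particular kernel design), even printing a string is a request to the kernel via a system call rather than a direct poke at the hardware:

```c
/* Minimal sketch: userland code cannot touch hardware directly; it asks the
 * kernel via a system call, which is serviced at a higher privilege level.
 * Assumes a POSIX system, where write(2) is a thin wrapper over the syscall. */
#include <string.h>
#include <unistd.h>

int main(void) {
    const char *msg = "hello from userland\n";
    /* The CPU switches to kernel mode to service this request; the process
     * itself never drives the display or disk hardware. */
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}
```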
There are also issues of complexity, and various measurements of complexity. Programs tend to get more complex as our needs grow - instead of a basic game of pong we now have 1,000 AI units on a giant map. Complexity goes up, and our ability to reason about the program will likely go down, opening up holes for vulnerabilities.
There's no real answer to your question but this: if there is an objective method for security, we haven't discovered it yet.
SECURITY IS NOT A FUNCTION OF THE NATURE OF THE KERNEL.
The type of kernel has nothing to do with the security of the operating system, though I would agree that the efficiency of an operating system does depend on its nature.
A monolithic kernel is a single large process running entirely in a single address space; it is a single static binary file. In a microkernel, by contrast, the kernel is broken down into separate processes, known as servers. Some of the servers run in kernel space and some run in user space. All servers are kept separate and run in different address spaces. Communication in microkernels is done via message passing.
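As a rough userland analogy of that message-passing style (a hedged sketch, not actual microkernel code), two processes in separate address spaces can cooperate only by exchanging messages over a channel such as a pipe:

```c
/* Hedged userland analogy of microkernel-style IPC: a "client" and a
 * "server" live in separate address spaces and cooperate only by passing
 * messages over a channel (here an ordinary pipe). Not actual kernel code. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int channel[2];
    if (pipe(channel) != 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                        /* child acts as the "server" */
        char msg[64] = {0};
        close(channel[1]);
        read(channel[0], msg, sizeof msg - 1);
        printf("server got request: %s\n", msg);
        return 0;
    }

    /* parent acts as the "client": it cannot reach into the server's memory,
     * it can only send it a message */
    close(channel[0]);
    const char *request = "open file X";
    write(channel[1], request, strlen(request) + 1);
    close(channel[1]);
    wait(NULL);
    return 0;
}
```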
Developers seem to prefer microkernels, as they provide flexibility and make it easier to work with different userspaces. A monolithic kernel is somewhat more complex in its nature and is beneficial for lightweight systems.
Is there some fundamentally flawed way our computers are structured that allows all the security holes that are found? What I mean by this is that there are sometimes proper theoretical ways to do things in computer science that satisfy all our requirements and are robust, etc.
There are certain concepts like protection rings and capability-based security, but in the end this depends on the requirements of the system. For more clarity, be sure to visit the links provided. Some of the main ideas are highlighted below.
Capability-based security: although most operating systems implement a facility which resembles capabilities, they typically do not provide enough support to allow the exchange of capabilities among possibly mutually untrusting entities to be the primary means of granting and distributing access rights throughout the system.
Protection ring: computer operating systems provide different levels of access to resources. A protection ring is one of two or more hierarchical levels or layers of privilege within the architecture of a computer system. This is generally hardware-enforced by some CPU architectures that provide different CPU modes at the hardware or microcode level. Rings are arranged in a hierarchy from most privileged (most trusted, usually numbered zero) to least privileged (least trusted, usually with the highest ring number).
Long story short: the teacher who taught me throughout the last year has only recently left and has been replaced with a new one. This new teacher has given me an assignment that involves things (like this) that we were never previously taught. So this task has shown up on the assignment and I have no idea how to do it. I can't get hold of the teacher because he's unwell and not coming in for the next few days. And even when I do ask him to explain further, he gets into a mood and makes me feel like I'm completely stupid.
Describe how the hardware platform impacts upon the choice of programming language
Looking at my activity here on SO, you can tell that I'm into programming, I'm into developing things, and I'm into learning, so I'm not just trying to get one of you guys to do my homework for me.
Could someone here please explain how I would answer a question like this?
Some considerations below, but not a full answer by any means.
If your hardware platform is a small embedded device of some kind, then your choice of programming language is going to be directed towards the lower-level unmanaged languages - you probably won't be able to (or want to) load a managed language runtime like the Java JVM or .NET CLR. This is down to memory and storage requirements. Similarly, interpreted languages will be out of the question as you won't have space for the interpreter.
If you're on a larger machine, it's more a question of compatibility. A managed language must run on a platform where its runtime is supported. In the case of .NET, that's Windows, or other platforms if you substitute the Microsoft CLR with the Mono runtime. In the case of Java, that's a far wider range of platforms.
This is by no means a definitive answer, but my first thought would be embedded systems. A task I perform on an embedded system, or other low-powered battery-operated computer, would need to be handled completely differently from one performed on a computer which has access to mains electricity.
One simple impact would be battery life.
If I use wasteful algorithms on an embedded system, the battery life will be affected.
Hope that helps stir the brain juices!
Clearly, the speed and amount of memory of the device will impact the choice. The more primitive and weak the platform is, the harder it is to run code developed with very high level languages. Code written with them may just not work at all (e.g. when there isn't enough memory) or be too slow or it will require serious optimizations (i.e. incur more work), perhaps affecting negatively the feature set or quality.
Also, some languages and software may rely heavily on or benefit from the availability of page translation in the CPU. If the CPU doesn't have it, certain checks will have to be done in software instead of being done automatically in hardware, and that will affect the performance or the language/software choice.
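To make that concrete, here is a small hedged sketch (the buffer name and sizes are illustrative) of the kind of bounds check that has to be performed in software when the CPU offers no page translation or memory protection to catch a bad access automatically:

```c
/* Hedged sketch: on a CPU without page translation / memory protection, a
 * runtime (or the programmer) has to bounds-check accesses in software,
 * where an MMU would otherwise fault on a bad address. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint8_t heap[256];            /* pretend this is all the RAM we manage */

int checked_write(size_t offset, uint8_t value) {
    if (offset >= sizeof heap) {     /* the "software MMU" check */
        return -1;                   /* reject instead of corrupting memory */
    }
    heap[offset] = value;
    return 0;
}

int main(void) {
    printf("in range: %d\n", checked_write(10, 42));     /* prints 0  */
    printf("out of range: %d\n", checked_write(999, 7)); /* prints -1 */
    return 0;
}
```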
Developers can learn a lot from other industries. As a thought exercise, is it possible to build a passenger aircraft using agile techniques?
Forgetting cost for now: how feasible is it to use iterative and incremental development for both the hardware (fuselage, wings, etc) as well as the software, and still come out with a working and safe product which meets the customer’s requirements at the time of delivery?
Does it make sense to refactor a plane?
Agile in software and Agile in manufacturing are really quite different, although they share similar principles and values.
Agile in manufacturing emerged in Japan in the 1950s. Read up on W.E. Deming and the Toyota Production System to find out more. It's all about constantly improving the process whereby a product is reproduced.
Agile in software evolved in the early 1990s as a rapid development model. It's all about constantly improving the product.
You can certainly build a plane using Agile manufacturing methods; I've no doubt that some already are. Anything built in Japan definitely will be, as Agile manufacturing is very well established there (it's taught in primary schools).
You couldn't build a plane using Agile software methods because you can't afford to rapidly change the product - in software changes and mistakes are cheap and reproduction is free. This isn't the case for aviation.
You could design a prototype plane using something like Agile software methods - but it would have to be standardised in order to be reproduced (a design task in itself).
How would you work using Test Driven Development? Would you automatically build and test a plane every iteration? Would you be able to do a ten-minute build? How easy is it to make changes to the airplane? Even if you are really flexible in your design, some components need to be sent to specialist factories to be built, so there is no immediate feedback.
From the design in CAD software you need to make a mould, take the piece of fibre, put it in the plane, and so on. So here a trivial change has a non-trivial cost. In Agile you can make a very small change and have it tested, built and ready to ship in 20 minutes. If small changes are expensive, then the short development cycle and refactoring won't be so useful. Your feedback can take longer than a week, so there is a strong reason for thinking in advance, as in the waterfall model. And every attempt has a cost in physical materials, unless you are programming. The idea is not new: carpenters measure twice; programmers just code first and then test.
In summary: there may be some similarities, but it definitely won't be the same.
I'm going to say "kind of". In fact there's one example right now that I think is pretty close to answering this question.
Boeing is attempting to do this now with the new 787 - see following: Boeing 787 - Specification vs. Collaboration (From the 777 to the 787, the initial specifications document supposedly went from 2500 pages to 20 pages with the change in technique.) Suppliers from around the world are working independently to develop the components for this aircraft. (We'll call this the "teams".)
So I want to say yes, but at the same time, iterations in creating the aircraft have resulted in delays of 2+ years and have resulted in stories like this one - (787 Delayed for 5th Time).
Will the airplane ever get built? Yes, of course it will. But when you look at the rubber hitting the road here, it seems like "integration test" is having one heck of a time.
Edit: At the same time, this shift in technique has resulted in building a new breed of aircraft built out of entirely new materials that will arguably be one of the most advanced in the world. This may be a direct result of the more Agile approach. Maybe that's actually the question - not a "can you?" but a "if Agile delays complex systems, does it provide a more innovative product in the payoff?"
Toyota pioneered Lean Production, which Agile methodologies followed on from. For building the hardware of the aircraft, lean production would be the way to go, and for the software an agile methodology would be the way to go.
Pick the right tools for the job.
A great book following how TPS was created and works
http://www.amazon.com/Machine-That-Changed-World-Production/dp/0060974176
http://en.wikipedia.org/wiki/Toyota_Production_System
I think in this case you are thinking too big. Agile is about breaking things down into more manageable pieces and then working against that. The whole idea of Agile (XP in particular) is that you do your testing first so that you cut the number of bugs out, and because plane software needs to have very high code coverage for its testing, it fits in quite neatly, I think.
You aren't going to 'refactor' the mechanics of the plane, but you will tweak them if they are unsafe, and that's the whole iterative approach for you.
I have heard of Air Traffic Control software written with Agile Methodologies pushing it forward.
This is taken from http://requirements.seilevel.com/blog/2008/06/incose-2008-can-you-build-airplane-with.html
***Actually, that's not true.***
My first guess - this probably relates to some of the core differences between systems and software engineering. I am going to oversimplify this and just say that it is about scale. Systems projects are just a superset of software and hardware projects, integrating and deploying some combination of these. The teams of people deploying systems projects are quite large. And many of the projects discussed here are for government or regulated systems where specification and traceability are necessary. I could see how subsets of systems projects could in fact be developed using agile (pure software components), but I’m not convinced that an entire end-to-end project can. To put this in context, imagine you are building an airplane - a very commonly referenced type of systems engineering project. Can you see this working using agile?
All skepticism aside, I do think that iterative development most certainly could work well on systems projects, and most people here would not argue that. In fact, I would love it if we could find examples of agile working on systems projects, because the number one feeling I get at systems engineering conferences is a craving for lighter processes.
I decided to do a little research outside the conference walls, and lo and behold, I found a great article on this exact topic – “Toward Agile Systems Engineering Processes” by Dr. Richard Turner of the Systems and Software Consortium. The article is very well laid out, and I highly recommend reading it. He defines what agile is and what he believes is the reason most systems engineering projects are not agile. For example, he suggests that executives and program managers tend to believe that the teams involved have perfect knowledge about the systems being built, so we can plan them out in advance and work to a perfect execution against a perfect schedule.
Agile Can Work With Complex Systems
He talks about how agile concepts can work in systems projects. Here are a few examples summarized from his article:
Learning based: The traditional V-model implies a one-time trip through, implying one time to re-learn. However, perhaps the model can be re-interpreted to allow multiple iterations through it to fulfill this.
Customer focus: Typically systems engineering processes do not support multiple interactions with the customer throughout the project (just up front to gather requirements). That said, he references a study indicating the known issues with that on systems projects. Therefore, perhaps processes should be adapted to allow for this, particularly allowing for them to help prioritize requirements throughout projects.
Short iterations: Iterations are largely unheard of because the V-model is a one-time pass through the development lifecycle. That said, iterations of prototyping through testing could be done in systems engineering in many cases. The issue is in delivering something complete at the end of each iteration. He suggests that this is not as important to the customer in large deployments as is reducing risk, validating requirements, etc. This is a great point at which to remember the airplane example! Could we have even a working part of an airplane after 2 weeks? What about even the software to run a subsystem on the aircraft?
Team ownership: Systems engineering is very process driven, so this one is tricky. Dr. Turner puts the idea out that perhaps allowing the systems engineers to drive it instead of the process to drive them, while more uncomfortable for management, might produce more effective results.
There is this story of an aircraft engine plant (September 1999). Their methods seem quite agile:
http://www.fastcompany.com/magazine/28/ge.html
Yes, you could do it. If you followed Agile Software Development techniques too closely however, it would be astronomically expensive, because of the varying costs of activities.
Consider the relative costs of design and build. If we include coding as part of the software design process, then design is definitely the expensive part and build is ridiculously easy and cheap. Most Agile projects would plan to release every few iterations at least. So we can work in small iterations with a continuous build process. Not so easy when you have to assemble a plane once a fortnight. Worse if you actually plan on "releasing" it. You'd probably need to get the airworthiness & safety people on to an Agile process too.
I'd truly love to see it tried.
Yes, you can use agile techniques for building complex systems, but I don't know if I'd use it for this particular system.
The problem with aircraft is the issue of safety. This means every precaution needs to be taken, from correctly identifying and interpreting the requirements to verifying and validating each and every single line of code.
Additionally, formal methods should probably be used to make sure that the system is truly safe by making sure the programming logic is sound and satisfies its conditions properly.
I'm fairly certain the answer is irrelevant. Even if you could, you would not be allowed to. There are too many safety requirements. You would not even be allowed to develop the flight software using Agile. For instance, you do not have a Software Requirements Specification (SRS) in Agile. Yet, for any avionics software onboard an airplane that can affect flight safety you will need an SRS.
Of course one can refactor a plane.
When one refactors, one modifies the source code, not the binaries. With a plane it's exactly the same thing: one modifies the blueprints, not the plane itself.
I didn't find much useful information about programming languages for real-time systems. All I found was Real Time Systems and Programming Languages: Ada 95, Real-Time Java and Real-Time C/POSIX (some pdf here), which seems to talk about extensions of Java and C for real-time systems (I don't have the book to read). Also, the book was published in 2001, and the information may be obsolete now.
So, I'm unsure whether these languages are used in real-world applications, or whether real-time systems in the real world are made in other languages, like DSLs.
If the second option is true for you, what are the outstanding characteristics of the language you use?
I am an avionics software engineer.
I was able to participate in several development projects.
The languages I used in those projects are: C, C++, and Real-time Java.
C is great.
C++ is not so bad, but C/C++ require strict coding standards for safety considerations such as DO-178B.
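To give a feel for the style such standards push you towards (a hedged illustration, not actual DO-178B or MISRA text), avionics-flavoured C typically avoids dynamic allocation and keeps loop bounds statically known, so worst-case memory and time can be argued:

```c
/* Hedged illustration of safety-oriented coding style: no dynamic allocation,
 * defensive input checks, and statically bounded loops, so worst-case memory
 * use and execution time are known up front. Names and sizes are made up. */
#include <stdint.h>
#include <stdio.h>

#define MAX_SAMPLES 64U

static int32_t samples[MAX_SAMPLES];          /* fixed at build time, no malloc */

static int32_t average(uint32_t count) {
    if (count == 0U || count > MAX_SAMPLES) {
        return 0;                              /* defensive: reject bad input */
    }
    int64_t sum = 0;
    for (uint32_t i = 0U; i < count; i++) {    /* loop bound provably <= 64 */
        sum += samples[i];
    }
    return (int32_t)(sum / (int64_t)count);
}

int main(void) {
    for (uint32_t i = 0U; i < 4U; i++) {
        samples[i] = (int32_t)(i * 10U);       /* 0, 10, 20, 30 */
    }
    printf("average = %d\n", average(4U));     /* prints 15 */
    return 0;
}
```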
I think Real-time Java is the way to go but I don't see many avionics applications, yet.
Korean jet trainer T-50 will have a mission computer running RT Java application serving HUD and MFD displays, and all of the mission critical functions.
The Real-Time Specification for Java now has several commercial-grade implementations:
Sun's JavaRTS
IBM's WebSphere Real-Time
Aonix PERC
aicas JamaicaVM
Apogee Aphelion
These products span the continuum from compilation to native code (Aonix) to J2ME (aicas, apogee), to full J2SE (Sun, IBM). Most, if not all, have seen deployments in small numbers of safety- or mission-critical systems, but momentum is building. Examples include Eglin AFB's space surveillance radar modernization and the US Navy's use of RTSJ in the DDG-1000/Zumwalt destroyer. Sun also claims deployment in the financial transaction processing domain.
If you are interested in RTSJ, I suggest Peter Dibble's Real-Time Platform Programming, or Professor Wellings' Concurrent and Real-Time Programming in Java.
On a related note, there is also work underway to provide a Safety-Critical profile for the Java programming language, built as a subset of RTSJ. Also, an expert group has formed to explore a Distributed RTSJ (DRTSJ), but the work is stalled.
The book covers use of Ada 95, the Java Real-Time System and realtime POSIX extensions (programmed in C). None of these is directly a domain specific language.
Ada 95 is a programming language commonly used in the late 90s and (AFAIK) still widely used for realtime programming in defence and aerospace industries. There is at least one DSL built on top of Ada - SparkAda - which is a system of annotations which describe system characteristics to a program verification tool.
This interview of April 6, 2006 indicates some of the classes and virtual machine changes which make up the Java Real-Time System. It doesn't mention any domain specific language extensions. I haven't come across use of Java in real-time systems, but I haven't been looking at the sorts of systems where I'd expect to find it (I work in aerospace simulation, where it's C++, Fortran and occasionally Ada for real-time in-the-loop systems).
Realtime POSIX is a set of extensions to the POSIX operating system facilities. As OS extensions, they don't require anything specific in the language. That said, I can think of one C based DSL for describing embedded systems - SystemC - but I've no idea if it's also used to generate the embedded systems.
Not mentioned in the book is Matlab, which in the last few years has gone from a simulation tool to a model driven development system for realtime systems.
Matlab/Simulink is, in effect, a DSL for linear programming, state machines and algorithms. Matlab can generate C or HDL for realtime and embedded systems. It's very rare to see an avionics, EW or other defence-industry real-time job advertised which doesn't require some Matlab experience. (I don't work for Matlab, but it's hard to overemphasize how ubiquitous it really is in the industry.)
Real-time applications can be written in almost any language. The environment (operating system, runtime and runtime libraries) must, however, comply with real-time constraints. In most cases real-time means that there's always a deterministic time within which something happens, deterministic time usually being a very low value in the microseconds/milliseconds range.
Real-time systems depend solely on this criterion, as the specifications usually say something like 'every x (period of time), (do something | check something)'. Usually this applies when the system interfaces with external sensors and controls life-saving or life-threatening systems.
I was working on an in-car navigation and infotainment system developed mostly in C/C++ with an operating system configured specifically to meet the real-time constraints to provide real-time navigation and media playback.
But this is not all there is to real-time systems: usually the algorithms across the entire system are chosen to have deterministic runtimes according to Big-O notation, mostly linear or constant time. Everything else is considered non-deterministic and thus not usable for real-time systems.
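A minimal sketch of that "every x, do something" pattern (assuming Linux/POSIX and an illustrative 10 ms period) drives a periodic task from an absolute deadline, so jitter in one cycle does not drift into the next:

```c
/* Minimal sketch (Linux/POSIX assumed) of a periodic real-time style loop:
 * the task wakes at absolute deadlines rather than sleeping for relative
 * intervals, so timing error does not accumulate. Period is illustrative. */
#include <stdio.h>
#include <time.h>

#define PERIOD_NS 10000000L            /* 10 ms period */

int main(void) {
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int cycle = 0; cycle < 5; cycle++) {
        /* ... read sensors / run the control step here: bounded work only ... */
        printf("cycle %d\n", cycle);

        /* compute the next absolute deadline */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        /* sleep until the absolute deadline, not for a relative interval */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}
```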
All of the real-time systems I have worked with were predominantly written in C with some bits of assembler, or written mostly in assembler with little bits of C. (Depending on whether we're talking the 90s and beyond, or the 80s, respectively.) However, some of the real-time systems I've worked with have used -- not exactly DSLs -- special homegrown code generators.
Real-time oriented language?
What is real-time?
First we have to define what real-time means.
Of course, depending on how your tool works against the physical environment, pure real-time may not be effectively achievable, mostly because there will be a lot of third-party dependencies.
If you are building embedded stuff using microcontrollers like Arduino, the language to use will be limited by the hardware, but with more complex hardware like the Raspberry Pi, the language choice is very wide.
Granularity
This depends on what you are measuring. If you're working with:
weather temperatures, one reading every 10 minutes could be enough
people's height or weight, one or maybe four readings per day
server status, anything between 1 second for fine debugging and approximately 1 hour for a quiet, unimportant secondary server
atomic collision counts: something finer...
Event-based reading
The right (better) way to collect data is based on value-change events... whenever the device permits it.
Your tool should not poll values from the device; the device should send values to your tool when they change.
This can be done by using a hardware interrupt trigger or by using a port protocol like RS-232 and listening on some serial port, for example.
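A hedged sketch of that idea (POSIX assumed; the device path is a placeholder): block until the port actually has data instead of polling it in a busy loop:

```c
/* Hedged sketch (POSIX assumed): instead of polling a device in a busy loop,
 * block in select() until the serial port actually has data to read.
 * The device path /dev/ttyS0 is a placeholder. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/ttyS0", O_RDONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    fd_set readfds;
    char buf[128];
    for (;;) {
        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);
        /* sleep until the device signals that a new value has arrived */
        if (select(fd + 1, &readfds, NULL, NULL, NULL) > 0) {
            ssize_t n = read(fd, buf, sizeof buf);
            if (n <= 0) break;
            fwrite(buf, 1, (size_t)n, stdout);   /* hand the value to the tool */
        }
    }
    close(fd);
    return 0;
}
```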
Monitoring environment
The last thing to be aware of is how the legitimate user will interact with the system.
If you're building an embedded standalone device, like a robot, you may use graphics libraries to interact with a touch screen.
If you're building a web-based monitor, you may have to keep in mind that the client could be an old 800x600 monochrome screen, using a poor internet connection and a small processor... But depending on the final goal, if you control the clients, you could insist on strong hardware and strong internet connections. In any case you have to watch for connection loss and allow for communication delay between server and client. These are mostly third-party dependencies.
Which programming language?
From there, the language choice is wide and clearly depends on:
your knowledge
the granularity requested (using event-based reading too, of course)
the amount of time you have to build the tool (money ;)
delays, co-workers...
kind of device
kind of monitoring
some other political reasons
You could build a real-time monitoring engine using only bash and SQL; I've seen sophisticated engines built on PostgreSQL alone... I've personally built a web-based solar energy monitor using Perl, MySQL and JavaScript.
I cannot believe no one has mentioned LabVIEW, a programming language which is widely used for real-time, safety-critical systems. It has extensive libraries and well-known design patterns for architecting and implementing RT systems.
Also, National Instruments makes various hardware (cRIO, PXI, etc.) designed for real-time applications.
We use LabVIEW for fracking (hydraulic fracturing), which is used in safety-critical environments.
PLCs run ladder and FBD code, which is really a real-time DSL in the sense that your options are so limited that it is difficult to program in a way that would result in unpredictable runtime performance.
A really purposeful application of the C language to real-time programming - and all related issues (such as parallel programming) - is offered by my Kickstarter
http://www.kickstarter.com/projects/767046121/crawl-space-computing-with-connel
It is called "Wide Programming" and I've been doing it most of my life. The rewards include a software library and a book - designed to be useful.
The company I've been working for since 2003 has been developing and deploying a SCADA/MES platform. The original implementation started in 1993, using Modula-2 on OS/2. Later (1998) it was ported to Ada 95 and Windows. Currently (2019) we use the Ada compiler from AdaCore. Our system has been ported and deployed to 32/64-bit Windows, HP-UX, OpenVMS (and lately even to the Raspberry Pi). We have multiple installations in central Europe (gas industry, refineries, factories, power plants).
We feel Ada's features give our system a high degree of reliability and prevent a lot of errors that would easily occur if we used languages like C.
See also my blog
https://www.ipesoft.com/en/blog/what-language-is-the-d2000-written
I've been coding and managing Java & ASP.Net applications & servers for my entire career. Now I'm being directed towards involvement in mainframes, i.e. z/OS & JCL, and I'm finding it difficult to wrap my head around it (they still talk about punch cards!). What's the best way to go about learning all this after having been completely spoilt by modern luxuries?
There are no punch cards in modern mainframes, they're just having you on.
You will have a hard time since there are still many things done the "old" way.
Data sets are still allocated with properties such as fixed-block-80, variable-block-255 and so on. Plan your file contents.
No directories. There are levels of hierarchy and they're limited to 8 characters each.
The user interface is ISPF, a green-screen text-mode user interface from the seventh circle of hell for those who aren't used to it.
Most jobs will still be submitted as batch jobs and you will have to monitor their progress with SDSF (sort of task manager).
That's some of the bad news, here's the good news:
It has a USS subsystem (UNIX) so you can use those tools. It's remarkably well integrated with z/OS. It runs Java, it runs Websphere, it runs DB2 (proper DB2, not that little Linux/UNIX/Windows one), it runs MQ, etc, etc. Many shops will also run z/VM, a hypervisor, under which they will run many LPARs (logical partitions), including z/OS itself (multiple copies, sometimes) and zLinux (SLES/RHEL).
The mainframe is in no danger of disappearing anytime soon. There is still a large amount of work being done at the various IBM labs around the world and the 64-bit OS (z/OS, was MVS, was OS/390, ...) has come a long way. In fact, there's a bit of a career opportunity as all the oldies that know about it are at or above 55 years of age, so expect a huge suction up the corporate ladder if you position yourself correctly.
It's still used in the big corporations as it's the only thing that can be trusted with their transactions - the z in System z means zero downtime and that's not just marketing hype. The power of the mainframe lies not in its CPU grunt (the individual processors aren't that powerful but they come in books of 54 CPUs with hot backups, and you can run many books in a single System z box) but in the fact that all the CPU does is process instructions.
Everything else is offloaded to specialist processors, zIIPs for DB2, zAAPs for Java workloads, other devices for I/O (and I/O is where the mainframe kills every other system, using fibre optics and very large disk arrays). I wouldn't use it for protein folding or genome sequencing but it's ideal for where it's targeted, massively insane levels of transaction processing.
As I stated, z/OS has a UNIX subsystem and z/VM can run multiple copies of z/OS and other operating systems - I've seen a single z800 box running tens of thousands of instances of RHEL concurrently. This puts all the PC manufacturers' 'green' claims to shame and communications between the instances is blindingly fast with HyperSockets (TCP/IP but using shared memory rather than across slow network cables (yes, even Gigabit Ethernet crawls compared to HyperSockets (and sorry for the nested parentheses :-))).
It runs Websphere Application Server and Java quite well in the Unix space while still allowing all the legacy (heritage?) stuff to run as well. In fact, mainframe shops need not buy PC-based servers at all, they just plonk down a few zLinux VMs and run everything on the one box.
And recently, there's talk about that IBM may be providing xSeries (i.e., PCs) plugin devices for their mainframes as well. While most mainframe people would consider that a wart on the side of their beautiful box, it does open up a lot of possibilities for third-party vendors. I'm not sure they'll ever be able to run 50,000 Windows instances but that's the sort of thing they seem to be aiming for (one ring to rule them all?).
If you're interested, there's a System z emulator called Hercules which I've seen running at 23 MIPS on a Windows box and it runs the last legally-usable MVS 3.8j fast enough to get a feel. Just keep in mind that MVS 3.8j is to z/OS 1.10 as CP/M is to Windows XP.
To provide a shameless plug for a book one of my friends at work has written, check out What On Earth is a Mainframe? by David Stephens (ISBN-13 = 978-1409225355). I found this invaluable since I came from a PC/UNIX background, and it is quite a paradigm shift. I think this book would be ideal for your particular question. I think chunks of it are available on Google Books so you can try before you buy.
Regarding JCL, there is a school of thought that only one JCL file has ever been written and all the others were cut'n'paste jobs on that. Having seen the contents of them, I can understand this. Programs like IEBGENER and IEFBR14 make Unix look, if not verbose, at least understandable.
Your first misconception is believing the "L" in JCL. JCL isn't a programming language; it's really a static declaration of how a program should run and what files etc. it should use.
In this way it is much like (though superior to) the XML config spaghetti that is used to control such "modern" software as Spring, Hibernate and Ant.
If you think of it in these terms all will become clear.
Mainframe culture is driven by two seemingly incompatible obsessions.
Backward compatibility. You can still run executables written and compiled in 1970. Forty-year-old JCLs and scripts still run and work!
Bleeding-edge performance. You can have 128 CPUs on four machines in two datacentres working on a single DB2 query. It will run the latest J2EE (WebSphere) applications faster than any other machine.
If you ever get involved with CICS (mainframe transaction server) on Z/OS, I would recommend the book "Designing and Programming CICS applications".
It is very useful.
If you are going to be involved with traditional legacy application development, read the books by Steve Eckols. They are pretty good. You need to map the terms from open systems to the mainframe, which will cut down your learning time. A couple of examples:
Files are called data sets on the mainframe
JCL is more like a shell script
Subprograms/routines are like common functions, etc. Good luck!
The more hand-holding at the beginning, the better. I've done work on a mainframe as an intern and it wasn't easy even though I had a fairly strong UNIX background. I recommend asking someone who works in the mainframe department to spend a day or two teaching you the basics. IBM training may be helpful as well, but I don't have any experience with it so can't guarantee it will be. I've put my story about learning how to use the mainframe below for some context.

It was decided that all the interns were going to learn how to use the mainframe as a summer project that would take 20% of their time. It was a complete disaster, since all the interns except me were working in non-mainframe areas and had no one they could yell over the cube wall to for help. The ISPF and JCL environment was too alien for them to become proficient with quickly. The only success they had was basic programming under USS, since it's basically UNIX and college had familiarized them with that.

I had better luck for two reasons. First, I worked in a group of about 20 mainframe programmers, so I was able to have someone sit down with me on a regular basis to help me figure out JCL, submitting jobs, etc. Second, I used Rational Developer for System z back when it was named WebSphere Developer for System z. This gave me a mostly usable GUI that let me perform most tasks such as submitting jobs, editing datasets, allocating datasets, debugging programs, etc. Although it wasn't polished, it was usable enough and meant I didn't have to learn ISPF. The fact that I had an Eclipse-based IDE for basic mainframe tasks decreased the learning curve significantly and meant I only had to learn new technologies such as JCL, not an entirely new environment.

As a further note, I now use ISPF, since the software needed to allow Rational to run on the mainframe wasn't installed on one of the production systems I used, so ISPF was the only choice. I now find that ISPF is faster than Rational Developer and I'm more efficient with it. This is only because I was able to learn the underlying technology such as JCL with Rational and the ISPF interface at a later date. If I had to learn both at once it would have been much harder and required more one-on-one instruction.