Where am I? (Geolocation, Emacs, Perl) [closed] - linux

This is one of those "surely there is some generic utility that is
better than the quick and dirty thing I have whipped up" questions.
As in, I know how to do this in several ad-hoc ways, but would love to
use a standard.
BRIEF: I am looking for reasonably standard and ubiquitous tools to
determine my present geographical location. Callable from Linux/UNIX
command line, Perl, Emacs, etc.
DETAIL:
A trivial situation inspires this question (but there are undoubtedly
more important applications): I use emacs org-mode, often to record a
log or diary. I don't actually use the official org-mode diary much -
mainly, I drop timestamps in an ordinary org-mode log, hidden in
metadata that looks like a link.
[[metadata: timestamp="<2014-01-04 15:02:35 EST, Saturday, January 4, WW01>" <location location="??" timestamp="??"/>][03:02 PM]]
As you can see, I long ago added the ability to RECORD my location, but hitherto I have had to set it manually. I am lazy, and often neglect to do so. (Minor note: I record the last time I manually set the location, which is helpful when I move and neglect to update it.)
I would much prefer code that automatically infers my location. That is particularly true since I have been travelling quite a bit in the last month, but it would probably be even more useful for the half-dozen or so locations I move between on a daily basis: home, work, oceanside, and the usual restaurants where I eat a working lunch or breakfast.
I can figure out my location using any of several tools, such as:
Where Am I - See your Current Location on Google Maps - ctrlq.org/maps/where/
http://www.wolframalpha.com/input/?i=Where+am+I%3F
Perl CPAN packages such as IP::Location, to map an IP address to a location (see the sketch below)
note: this doesn't necessarily work for a private IP address behind NAT, but it can be combined with traceroute
and heuristics such as looking at WiFi SSIDs, etc.
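For instance, the IP-to-location lookup is only a few lines of Perl. A sketch using the CPAN Geo::IP module with a local city database (the database path is an assumption, and the NAT caveat above still applies):

    use Geo::IP;

    # Sketch only: needs Geo::IP plus a local GeoIP City database;
    # the path below is a common default, not a guarantee.
    my $gi  = Geo::IP->open('/usr/share/GeoIP/GeoIPCity.dat', GEOIP_STANDARD);
    my $rec = $gi->record_by_addr('8.8.8.8');   # your public IP goes here
    printf "%s, %s\n", $rec->city, $rec->country_name if $rec;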
I have already coded something up.
But... there's more depth to this than I have coded.
None of the techniques above is perfect - e.g. I may not have net connectivity. Some are OS-specific.
If there is already some open source facility, I should use that.
Therefore my question: is there any reasonably ubiquitous geolocation service?
My wishlist:
- Works cross-OS
  - Cygwin
  - Linux
  - Android? OS X? (just use the OS standard)
    - e.g. tries to exec a command like Windows netsh, and if that fails...
- Command-line utility
  - Perl, etc.
  - callable in Emacs
    - because that is where I want to use it
    - but I am sure I would want to use it in other places too
- Can connect to widely available standard geolocation services
  - e.g. Perl CPAN IP::Location, IP -> country/city/...
  - e.g. Google etc., which infer geographical location from the browser
- Works even when it cannot connect to standard geolocation services, or to the Internet
  - e.g. caches the last location
  - e.g. can associate a name with a private network environment
    - e.g. if in a lab that is isolated from the network
    - or at home, connected to WiFi, but with broadband down
  - e.g. looks at the WiFi SSID
- Customizable
  - can use information that is NOT part of any ubiquitous geolocation database
    - e.g. I may recognize certain SSIDs as being my home or office
- Learning
  - knows (or can learn) that some SSIDs are mobile, not geographically fixed (e.g. the mobile hotspot on my phone), while others are mainly fixed (e.g. WiFi at home connected to a cable modem)
  - can override incorrect inferences (geo databases are sometimes wrong, especially with VPNs)
  - can extend or refine inferred locations
  - I wouldn't mind being able to write rules, but it would be even better if some inference engine maintained the rules itself
    - e.g. if I correct the location, it makes inferences about the SSID coordinates used for the faulty inference
- Heuristics (see the sketch after this list)
  - Windows 7 "netsh wlan show interfaces"
  - Windows / Cygwin ipconfig
  - *IX ifconfig
  - traceroute / tracert
  - reverse IP lookup
- Caching
  - to avoid expensive lookups
  - but the cache is NOT global; it can be per app
    - some apps may want to bypass the cache
    - others can use old data
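To make the SSID heuristic concrete, here is a minimal sketch of the kind of thing I have whipped up (Linux iwgetid; on Windows one would parse netsh wlan show interfaces instead; the name-to-place table and cache stub are of course just illustrations):

    use strict;
    use warnings;

    # Known fixed SSIDs -> place names; purely illustrative values.
    my %known = ('HomeNet' => 'home', 'CorpWiFi' => 'office');

    sub current_ssid {
        my $ssid = `iwgetid -r 2>/dev/null` // '';   # empty if no WiFi
        chomp $ssid;
        return $ssid;
    }

    sub cached_location { 'last-known-location' }    # stub for the cache

    my $where = $known{ current_ssid() } // cached_location();
    print "$where\n";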

GeoClue seems to satisfy at least some of your requirements.
To convert coordinates to a human-readable address, one can use the OSM Nominatim API.
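For example, a reverse lookup against Nominatim is a single HTTP GET. A minimal Perl sketch (the coordinates are placeholders, and Nominatim's usage policy asks for an identifying User-Agent):

    use LWP::UserAgent;
    use JSON::PP qw(decode_json);

    my ($lat, $lon) = (48.8584, 2.2945);   # placeholder coordinates
    my $ua  = LWP::UserAgent->new(agent => 'my-location-logger/0.1');
    my $res = $ua->get("https://nominatim.openstreetmap.org/reverse"
                       . "?format=jsonv2&lat=$lat&lon=$lon");
    die $res->status_line unless $res->is_success;
    print decode_json($res->decoded_content)->{display_name}, "\n";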

Why not just use GPS? You could add coordinates to your metadata and bind them to an address (going from plain numbers to an actual place) when reading them back.
That way almost anything can be tagged with coordinates.
On GNU/Linux and other Unices, gpsd should do.
On Windows, I have no idea.
On Android, the Scripting Layer for Android should provide access to the GPS device.
I am not sure this meets your requirements, but I'm just proposing it.
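For what it's worth, reading a fix from gpsd needs no special library; it speaks JSON over a TCP socket. A minimal Perl sketch (assuming gpsd is running on its default port 2947):

    use IO::Socket::INET;
    use JSON::PP qw(decode_json);

    my $gpsd = IO::Socket::INET->new('localhost:2947')
        or die "no gpsd running: $!";
    print $gpsd qq(?WATCH={"enable":true,"json":true}\n);
    while (my $line = <$gpsd>) {
        my $msg = eval { decode_json($line) } or next;
        # TPV ("time-position-velocity") reports carry the actual fix.
        next unless ($msg->{class} // '') eq 'TPV' && defined $msg->{lat};
        printf "lat=%.6f lon=%.6f\n", $msg->{lat}, $msg->{lon};
        last;
    }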

You could use wget to pull data from one of the sites you mentioned, something like wget http://www.wolframalpha.com/input/?i=Where+am+I%3F, and then extract the location from the file you just downloaded.
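Note, though, that the Wolfram Alpha page builds its result with JavaScript, so scraping it is fragile. A service that returns machine-readable JSON is easier to handle; for example, a one-line Perl sketch using ipinfo.io (subject to the usual IP-geolocation inaccuracy):

    use LWP::Simple qw(get);

    # Returns JSON with ip, city, region, and "loc" (lat,lon);
    # needs LWP::Protocol::https for the https URL.
    print get('https://ipinfo.io/json') // "lookup failed\n";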

Let me put it this way: you intend to track your location without using a positioning device such as GPS. This is done by inferring your geolocation from the nearest network access point, since access points are usually geocoded. I assume you are tracking your location on your laptop, as it doesn't have a GPS.
There must be a few frameworks out there to do this. Since you want it to be cross-platform, I think a Python-based framework is your best option. You can also give Google geolocation a shot, and there are a few APIs built into HTML5 for geolocation. You could cook up your own application and release it as open source for everyone else to use.
For Windows there are many commercial PC tracking apps. All of them do a fine job at it.

Related

What application help system (like chm files) exist on Linux/GTK? [closed]

On Windows, CHM is a very good option.
Is there anything better than delivering a static set of HTML pages and making a primitive call to a web browser (which is itself a problem on Linux)? That approach offers no full-text search, no separate bookmarks, and it opens a new browser tab for every help call.
The Gnome yelp program is what GTK/Gnome applications use. It supports a number of formats, but not CHM directly. They have started to define their own markup, named Mallard, but I don't know its current status.
I'd still recommend static HTML as the best option (and of course man pages!). For example, you can use Sphinx to write beautiful documentation with full-text search support!
There are CHM viewers available on Linux though frankly as a Linux user I'd prefer to get static HTML pages.
Some examples are chmsee and kchmviewer.
AFAIK there is no universal system. Depending on your desktop environment (GNOME/KDE) there may be help systems, but they are usually based on loose files and use full-blown (usually WebKit-based) browsers.
For Lazarus a CHM based helpsystem and embedded browser was created, including CHM write support.
The reasons to avoid loose static HTML were mostly:
- the 60,000-lemma static documentation took too long to install on lighter systems or systems with specialist filesystems; CHM removes slack and adds compression.
- we also support non-POSIX and OS X systems, and little filesystem-related problems (charsets/encodings, separators, path depth, etc.) and case-insensitive filesystems on *nix caused a lot of grief. The CHM-based help solved that, allowing one set of routines to access help data on all systems.
- indexing and the TOC are B-tree based, and can easily be merged at runtime from independently produced help sets. In general, integrating independently produced help files is an underappreciated aspect of help systems, yet key to open platforms.
- native full-text search.
A custom viewer also has the ability to take advantage of extra features on top of the base system.
I'm not mentioning the Lazarus system in the hope that you adopt it, since it is at the moment too much a development-system (SDK) oriented solution; the viewer is not even available as a separate package. I mainly mention it to illustrate the problems of loose HTML.
I haven't investigated what KDE/Gnome/Eclipse use as help systems for a while, though. If I had to restart from scratch, that's where I would look first.
If I had to create something myself quickly, I would use zipped static HTML, a single gzipped file with metadata/indexes, and the lightest browser (Konqueror?) I could find. Not ideal, not like Windows, but apparently the best Linux can offer.

Which is a better Unix Distribution to learn from a job perspective [closed]

I work on text mining and statistical modeling, mostly course and research projects at school. I have primarily been using the Windows GUI versions of R and Python. I will be done soon and I realize that going into the industry, most work is done on Unix/Linux machines.
I wanted to get some hands-on experience working on Unix before I start looking for jobs (in about 6 months), especially at the command line. I wanted to ask you for two things:
a. Which Unix/Linux distribution would be most beneficial to get familiar with? I understand that most of the knowledge will transfer across distributions, but I still wanted to know which one would be the best to invest time in.
b. Is there any resource or book that would help me pick up speed working from the command line instead of a GUI such as Gnome or KDE?
I am not sure if it matters, but I also wanted to mention that, alongside this, I want to invest some time in learning the basics of Hadoop, Pig, and Mahout.
I use Ubuntu myself, but for your purposes, it doesn't matter too much which one you choose - as long as the chosen one doesn't eat up all your time learning UNIX itself - you want to focus on tools, not on system administration.
Your time is better spent learning an editor (vim/emacs), a scripting language (Python, Ruby), and MapReduce tooling (Hadoop, Pig, and Mahout).
I wouldn't worry much about the specific Linux distribution you end up learning. It almost certainly won't be the same as what they use at your eventual employer. Instead, pick a distribution that your friends and fellow students use. If no one else is using Linux, then Ubuntu is a good place to start.
You should also consider learning Mac OS X. The differences aren't huge, but more and more developers prefer to use OS X as their desktop unix environment.
You should also take some time to learn the basics of SQL. At the very least, grab SQLite so that you can make a database and run some queries. If you want to go deeper, give MySQL a try. Big statistical analysis projects often have SQL databases to manage the data sets. Even with medium sized projects, you may find it much easier to work with your data in a database than in flat files.
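For instance, once SQLite is installed, a first database is only a few lines away. Sketched here in Perl with DBI/DBD::SQLite (any scripting language has an equivalent binding; the file and table names are arbitrary):

    use DBI;

    # Creates practice.db in the current directory if it doesn't exist.
    my $dbh = DBI->connect('dbi:SQLite:dbname=practice.db', '', '',
                           { RaiseError => 1 });
    $dbh->do('CREATE TABLE IF NOT EXISTS words (word TEXT, freq INTEGER)');
    $dbh->do('INSERT INTO words VALUES (?, ?)', undef, 'unix', 42);
    print "@$_\n" for @{ $dbh->selectall_arrayref('SELECT * FROM words') };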
I agree with the others that Ubuntu is great for learning. Many companies opt for Red Hat Enterprise Linux because they can get official support, and companies like support.
CentOS is a free equivalent to that.
I like Unix Power Tools as a resource for the command line, and you can always google for "unix shell tips" and the like.
One of the most user-friendly Linux distributions nowadays is without any doubt Ubuntu, and there are tons of guides to the Linux shell out there on the web.
However, this is really a question for Super User... ;)
There are Hadoop VMs from Cloudera; you can use them on Linux or Windows. In general, VMs are a good idea for learning almost anything, because you don't have to worry about trashing your main system just because you followed some random blogger's instructions. You can use multiple VMs to simulate a small Hadoop cluster.

How do I protect my licensing file? [closed]

I have a licensing file for my application. Right now, the only way it can be broken so that the user gets unlimited access is by deleting the licensing file every 30 days and reinstalling the program.
How do I best protect this file? (or information)
My first thought is to hide the file a few folders deep somewhere under Windows %AllUsersProfile%, using obscure names for the folders so as not to advertise the location.
Another thought was to write to the registry, but we cannot always write to HKEY_LOCAL_MACHINE as I wanted to, since that requires admin privileges.
Create a license file during setup, and treat the absence of a license file as an invalid license. Uninstalling, deleting the file, and then reinstalling is much more annoying than just deleting a file.
Also, I hate it when files with obscure random names appear somewhere on my system, since I wonder whether I got infected with a virus or it's just some badly behaved software.
And don't try too hard. It's no use making it harder than downloading a crack or license-reset tool from the next warez site, and a cracker will find your license file very quickly with tools like FileMon.
One idea that might actually work (except against cracks that patch your binary): fix the expiration date at download time and embed the license file in the setup. That way they actually need to download a new version whenever their license expires. Of course, your users might find that unacceptable, or it might not fit your distribution model...
Pretty much any method of hiding or obscuring your license file is crackable. You really need to decide among some scenarios:
I trust my customers, but they might inadvertently break my license terms
My product is highly desirable and thus will be the target of crackers
My product isn't interesting to crackers, but my customers are slimebags and will copy the snot out of it.
In case 1, a simple license file to help your customers with compliance is fine. Probably hiding it or obscuring it isn't necessary. I assume that isn't your case since you posted this question.
In cases 2 and 3, there's little you can do that can't be easily defeated. The tools available to crackers are quite powerful and widely available, along with techniques for using them. Our company (www.wibu.us) has a full-time cryptographer who just watches how people crack software so we can build stronger protection against it.
Probably the most "normal" approach for a DIY solution is to encrypt the license file using some "standard" algorithm like AES 128-bit or triple DES. Then make the key from a hash of several factors, like the MAC address, MB serial number, install date, and perhaps some user-input data ("name" "Phone" etc). However, crypto can get complicated so you want to make sure you know what you're doing with this approach.
Force the application to go to a remote server over port 80 to check a hash which was set at the first install, perhaps against the MAC address (not an absolute guarantee but good enough). If they try to install again it is at least tied to the MAC address and you can stop the install.
EDIT:
If your customer base is small and you have the resources to support them you can perform the same behavior without internet access except that the customer needs to come to you to get a license file. They generate a key via their system, again tied to the MAC address, then send you the key which you generate the license file from. This is dependent of course on the number of outgoing downloads a day.
One solution is to embed an expiration date in the license file, and have the protected program treat a missing license file as invalid.
That way, even if the user deletes the file, it won't help much: the license is still expired.
The only problem in this case is that you need to fix an expiration date when you distribute license files. You can work around this with other tricks (redistribute new licenses regularly, etc.), but that may not suit every need.
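The missing-file-counts-as-expired rule takes only a few lines. A Perl sketch (the file name and "expires=" format are made up for illustration):

    use POSIX qw(strftime);

    sub refuse { die "License missing or expired\n" }

    # Absence of the file is treated exactly like an expired license.
    open my $fh, '<', 'myapp.lic' or refuse();
    my ($expiry) = (do { local $/; <$fh> }) =~ /expires=(\d{4}-\d{2}-\d{2})/
        or refuse();
    refuse() if strftime('%Y-%m-%d', localtime) gt $expiry;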

Need advice to design 'crack-proof' software [closed]

I am currently working on a project where I need to create an architecture, framework, or set of standards that will at least make a piece of software harder to crack, i.e., add to its security. There are already different ways to activate software, including online activation, keys, etc. I am currently studying a few research papers as well, but there are still a lot of things I want to discuss.
Could someone point me to a decent forum, mailing list, or something like that? Any other help would also be appreciated.
I'll tell you the closest thing to "crackproof": a web application.
Desktop applications are doomed, for many other reasons, but making your application run "in the cloud", in a browser, gives you a lot more control about security.
Desktop software runs on the client's computer, so the client has full access to it. A web app runs on your server, so the client only sees a tiny bit of it.
You need to begin by infiltrating the local hacking gang, posing as an 11 year old who wants to "hack it up". Once you've earned their trust you can learn what features they find hardest to crack. As you secretly release "uncrackable" software to the local message boards, you can see what they do with it. Build upon your inner knowledge until they can no longer crack your software. When that is done, let your identity be known. Ideally, this will be seen as a sign of betrayal, that you're working against them. Hopefully this will lead them to contact other hackers outside the local community to attack your software.
Continue until you've reached the top of the hacker mafia. Write your thesis as a book, sell to HBO.
Isn't it a sign of success when your product gets cracked? :)
Seriously though - one approach is to use License objects that are serialized to XML and then encrypted using public/private key pairs. They are then read back in at runtime, de-serialized and processed to ensure they are valid.
But there is still the ubiquitous "IsValid()" method which can be cracked to always return true.
You could even put that method into a signed assembly to prevent tampering, but all you've done then is create another layer of "IsValid()" which too can be cracked.
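For reference, the sign-and-verify half of that scheme is straightforward. A Perl sketch with Crypt::OpenSSL::RSA (a real deployment would ship only the public key with the application, and the check remains exactly as patchable as the IsValid() above):

    use Crypt::OpenSSL::RSA;

    # Vendor side: sign the license data with the private key.
    my $rsa = Crypt::OpenSSL::RSA->generate_key(2048);
    $rsa->use_sha256_hash;
    my $license = '<License expires="2014-12-31" features="pro"/>';
    my $sig     = $rsa->sign($license);

    # Application side: verify with the embedded public key.
    my $pub = Crypt::OpenSSL::RSA->new_public_key(
        $rsa->get_public_key_x509_string);
    $pub->use_sha256_hash;
    die "License tampered with\n" unless $pub->verify($license, $sig);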
We use licenses to turn on or off various features in our software, and to validate support/upgrade periods. But this is only for our legitimate customers. Anyone who wants to bypass it probably could.
We trust our legitimate customers to not try to bypass the licensing, and we accept that our illegitimate customers will find a way.
We would waste more money attempting to improve the 'tamper-proof' nature of our solution than we lose to people who pirate our software.
Plus you've got to consider the pain to our legitimate customers, and asking them to paste a license string from their online account page is as much pain as I'd want to put them through. Why create additional barriers to entry for potential customers?
Anyway, depending on which solution you've already got in place, my description above might give you some ideas that could decrease the likelihood of someone cracking your product.
As nute said, any code you release to a customer's machine is crackable.
Don't try for "uncrackable." Try for "there's enough deterrent to reasonably protect my assets."
There are a lot of ways you can try and increase the cost of cracking. Most of them cost you but there is one thing you can do that actually reduces your costs while increasing the cost of cracking: deliver often.
There is a finite cost to cracking any given binary. That cost is increased by the number of binaries being cracked. If you release new functionality every week, you essentially bifurcate your users into two groups:
Those who don't need the latest features and can wait for a crack.
Those who do need the latest features and will pay for your software.
By engaging in the traditional anti-cracking techniques, you can multiply the cost of cracking one binary and, consequently, widen the gap between when a new feature is released and when it is available on the black market. To top it all off, your costs will go down and the amount of value you deliver in a period of time will go up - that is what makes this approach essentially free.
The more often you release, the more you will find that quality and value go up, cost goes down, and the less likely people will be to steal your software.
As others have mentioned, once you release the bits to users you have given up control of them. A dedicated hacker can change the code to do whatever they want. If you want something that is closer to crack-proof, don't release the bits to users. Keep it on the server. Provide access to the application through the Internet or, if the user needs a desktop client, keep critical bits on the server and provide access to them via web services.
As others have said, there is no way to create completely crack-proof software, but there are ways to make cracking it more difficult; most of these techniques are actually used by bad guys to hide malware inside binaries, and by game companies to make cracking and copying games more difficult.
If you are really serious about doing this, you could check e.g., what executable packers like UPX do. But then you need to implement the unpacker also. I do not actually recommend doing this, but studying game protectors and binary obfuscation might help you in your quest.
First of all, in what language are you writing this?
It's true that a crack-proof program is impossible to achieve, but you can always make it harder. A naive approach to application security means that a program can be cracked in minutes. Some tips:
- If you're deploying to a virtual machine, that's too bad. There aren't many alternatives there. All popular VMs (Java, CLR, etc.) are very simple to decompile, and no obfuscator or signature is enough.
- Try to decouple the UI code from the underlying program as much as possible. This is also a great design principle, and it makes it harder for a cracker to trace from the GUI (e.g. the "enter your serial" window) to the code where you actually perform the check.
- If you're compiling to actual native machine code, always build a release (including no debug information is crucial) with optimization as high as possible. Also, in the critical parts of your application (e.g. where you validate the software), make the check an inlined function call so you don't end up with a single point of failure, and call it from many different places in your app.
- As was said before, packers add another layer of protection. While there are many reliable choices now, you can end up being flagged as a false positive by some anti-virus programs, and all the famous choices (e.g. UPX) already have pretty straightforward unpackers.
- There are also some anti-debugging tricks you can look into. But these are a hassle for you, because at some point you might need to debug the release application yourself!
Keep in mind that your priority is to make the critical parts of your code as untraceable as possible. Clear-text strings, library calls, GUI elements, etc. are all footholds an attacker may use to trace the critical parts of your code.

Learning mainframe & JCL with Java/OOP/SQL background [closed]

I've been coding and managing Java & ASP.Net applications & servers for my entire career. Now I'm being directed towards involvement with mainframes, i.e. z/OS & JCL, and I'm finding it difficult to wrap my head around it (they still talk about punch cards!). What's the best way to go about learning all this after having been completely spoilt by modern luxuries?
There are no punch cards in modern mainframes, they're just having you on.
You will have a hard time, since many things are still done the "old" way.
- Data sets are still allocated with properties such as fixed-block-80, variable-block-255, and so on. Plan your file contents.
- No directories. Dataset names have levels of hierarchy instead, each level limited to 8 characters.
- The user interface is ISPF, a green-screen text-mode user interface from the seventh circle of hell for those who aren't used to it.
- Most jobs will still be submitted as batch jobs, and you will have to monitor their progress with SDSF (a sort of task manager).
That's some of the bad news, here's the good news:
It has a USS subsystem (UNIX) so you can use those tools. It's remarkably well integrated with z/OS. It runs Java, it runs Websphere, it runs DB2 (proper DB2, not that little Linux/UNIX/Windows one), it runs MQ, etc, etc. Many shops will also run z/VM, a hypervisor, under which they will run many LPARs (logical partitions), including z/OS itself (multiple copies, sometimes) and zLinux (SLES/RHEL).
The mainframe is in no danger of disappearing anytime soon. There is still a large amount of work being done at the various IBM labs around the world and the 64-bit OS (z/OS, was MVS, was OS/390, ...) has come a long way. In fact, there's a bit of a career opportunity as all the oldies that know about it are at or above 55 years of age, so expect a huge suction up the corporate ladder if you position yourself correctly.
It's still used in the big corporations as it's the only thing that can be trusted with their transactions - the z in System z means zero downtime, and that's not just marketing hype. The power of the mainframe lies not in its CPU grunt (the individual processors aren't that powerful, but they come in books of 54 CPUs with hot backups, and you can run many books in a single System z box) but in the fact that all the CPU does is process instructions.
Everything else is offloaded to specialist processors, zIIPs for DB2, zAAPs for Java workloads, other devices for I/O (and I/O is where the mainframe kills every other system, using fibre optics and very large disk arrays). I wouldn't use it for protein folding or genome sequencing but it's ideal for where it's targeted, massively insane levels of transaction processing.
As I stated, z/OS has a UNIX subsystem and z/VM can run multiple copies of z/OS and other operating systems - I've seen a single z800 box running tens of thousands of instances of RHEL concurrently. This puts all the PC manufacturers 'green' claims to shame and communications between the instances is blindingly fast with HyperSockets (TCP/IP but using shared memory rather than across slow network cables (yes, even Gigabit Ethernet crawls compared to HyperSockets (and sorry for the nested parentheses :-))).
It runs Websphere Application Server and Java quite well in the Unix space while still allowing all the legacy (heritage?) stuff to run as well. In fact, mainframe shops need not buy PC-based servers at all, they just plonk down a few zLinux VMs and run everything on the one box.
And recently, there's talk about that IBM may be providing xSeries (i.e., PCs) plugin devices for their mainframes as well. While most mainframe people would consider that a wart on the side of their beautiful box, it does open up a lot of possibilities for third-party vendors. I'm not sure they'll ever be able to run 50,000 Windows instances but that's the sort of thing they seem to be aiming for (one ring to rule them all?).
If you're interested, there's a System z emulator called Hercules which I've seen running at 23 MIPS on a Windows box and it runs the last legally-usable MVS 3.8j fast enough to get a feel. Just keep in mind that MVS 3.8j is to z/OS 1.10 as CP/M is to Windows XP.
To provide a shameless plug for a book one of my friends at work has written, check out What On Earth is a Mainframe? by David Stephens (ISBN-13 = 978-1409225355). I found this invaluable since I came from a PC/UNIX background, and it is quite a paradigm shift. I think this book would be ideal for your particular question. I think chunks of it are available on Google Books so you can try before you buy.
Regarding JCL, there is a school of thought that only one JCL file has ever been written and all the others were cut'n'paste jobs on that. Having seen the contents of them, I can understand this. Programs like IEBGENER and IEFBR14 make Unix look, if not verbose, at least understandable.
Your first misconception is believing the "L" in JCL. JCL isn't a programming language; it's really a static declaration of how a program should run and what files etc. it should use.
In this way it is much like (though superior to) the XML config spaghetti that is used to control such "modern" software as Spring, Hibernate, and Ant.
If you think of it in these terms, all will become clear.
Mainframe culture is driven by two seemingly incompatible obsessions.
Backward compatibility: you can still run executables written and compiled in 1970. Forty-year-old JCL decks and scripts still run and work!
Bleeding-edge performance: you can have 128 CPUs in four machines in two data centres working on a single DB2 query. It will run the latest J2EE (WebSphere) applications faster than any other machine.
If you ever get involved with CICS (mainframe transaction server) on Z/OS, I would recommend the book "Designing and Programming CICS applications".
It is very useful.
If you are going to be involved with traditional legacy application development, read the books by Steve Eckols; they are pretty good. Mapping the terms from open systems to the mainframe will cut down your learning time. A couple of examples:
Files are called data sets on the mainframe.
JCL is more like a shell script.
Subprograms/routines are like common functions.
Good luck!
The more hand-holding at the beginning, the better. I've done work on a mainframe as an intern and it wasn't easy, even though I had a fairly strong UNIX background. I recommend asking someone who works in the mainframe department to spend a day or two teaching you the basics. IBM training may be helpful as well, but I don't have any experience with it so I can't guarantee it will be. I've put my story about learning how to use the mainframe below for some context.
It was decided that all the interns were going to learn how to use the mainframe as a summer project that would take 20% of their time. It was a complete disaster, since all the interns except me were working in non-mainframe areas and had no one they could yell over the cube wall to for help. The ISPF and JCL environment was too alien for them to get proficient with quickly. The only success they had was basic programming under USS, since it's basically UNIX and college had familiarized them with that.
I had better luck for two reasons. First, I worked in a group of about 20 mainframe programmers, so I was able to have someone sit down with me on a regular basis to help me figure out JCL, submitting jobs, etc. Second, I used Rational Developer for System z back when it was named WebSphere Developer for System z. This gave me a mostly usable GUI that let me perform most tasks, such as submitting jobs, editing datasets, allocating datasets, and debugging programs. Although it wasn't polished, it was usable enough and meant I didn't have to learn ISPF. The fact that I had an Eclipse-based IDE for basic mainframe tasks decreased the learning curve significantly and meant I only had to learn new technologies such as JCL, not an entirely new environment.
As a further note, I now use ISPF, since the software needed to allow Rational to run on the mainframe wasn't installed on one of the production systems I used, so ISPF was the only choice. I now find that ISPF is faster than Rational Developer and I'm more efficient with it. But that is only because I was able to learn the underlying technology, such as JCL, with Rational, and the ISPF interface at a later date. If I had to learn both at once, it would have been much harder and would have required more one-on-one instruction.
