Parallel Computer Simulator - Haskell

I am using Haskell, which is reported to be great and easy for parallelism. Unfortunately, I have no way to test this claim, as my computer has a single processor. Does anyone know of a utility that will make it appear as if my computer has 2 or 4 processors (obviously slower than the real one), and would let me track their performance? It should also let me test one imaginary processor at a time, so I could see how parallel compares to non-parallel code on such a computer. Although a more universal app would be better for the community, I will take answers even if they only work with Haskell.
P.S. I am running Ubuntu 13.10.

The search keywords you are looking for are "simulate multiple cores".
Here's one: Sniper.
It is not open source. From the FAQ:
Q: What are the license terms for using Sniper?
A: In short, the interval core model is protected under a US patent application. We automatically grant you a free license for using the interval model inside Sniper for academic purposes. For commercial use, please contact Lieven Eeckhout. All other code is licensed under the very liberal MIT license. You can view the full details on our License page.
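For the Haskell side of the question specifically, you may not need a simulator at all: GHC's threaded runtime lets you choose the number of "capabilities" (virtual processors) at run time with +RTS -N, and it will accept -N2 or -N4 even on a single-core machine, time-slicing the capabilities on the one real CPU. That won't model slower imaginary cores the way a cycle-accurate simulator would, but it does let you run the same program with one capability at a time and compare. A minimal sketch, assuming the parallel and deepseq packages are installed (the fib inputs are arbitrary, just enough work to be visible):

    -- Compile with:              ghc -O2 -threaded -rtsopts -eventlog Par.hs
    -- One imaginary processor:   ./Par +RTS -N1 -l
    -- Two imaginary processors:  ./Par +RTS -N2 -l
    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- A deliberately slow Fibonacci, so there is real work to spread around.
    fib :: Int -> Integer
    fib n
      | n < 2     = fromIntegral n
      | otherwise = fib (n - 1) + fib (n - 2)

    main :: IO ()
    main = print (sum (parMap rdeepseq fib [28 .. 34]))

The -l flag makes the run write Par.eventlog, which ThreadScope can display as a per-capability activity timeline - that covers the "track their performance" part.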

Information on open source ALM tools

My objective is to have traceability between requirements, design, test cases and test results of a project. Can anyone give me the details of such an ALM tool? It should be an open source tool.
There are many tools for this. The first question, as always, is: which programming language is this for? How big is the team (including the specialist divisions which will use these tools => requirements)?
Assuming Java is the language, I'd prefer these tools:
Requirements: JIRA (not free, but the best!); Mantis or Bugzilla may also do an acceptable job
Design: depending on which kind of design? For UML, a good choice used to be TogetherJ (RIP => now part of Borland's toolbox); you may try ArgoUML or WhiteStarUML. Using a wiki, I'd suggest e.g. DokuWiki, and a good office suite is also a choice - depending on the needs within your team! (Yes, a design always includes text.)
Test cases: I'd like to split this topic a bit into “test planning”, “test execution” and, last but not least, “test documentation”
test planning: give TestLink a look
test execution: (free of charge!) Jubula, JUnit, Selenium => depending on your needs
test documentation: you should use a standard editor like Word or Writer etc. (not the wiki)
Additional perspectives:
Build server: I missed a build server in your list. If you write a piece of software, how do you make certain that it can still be built if a machine or a person becomes unavailable (for whatever reason)? Building software only on the developer’s machine carries exactly the risk that it may not be buildable on another machine or by another person. So use a build server (Jenkins/Hudson should be on your shortlist).
Repository: in addition to keeping the source code in version control, you should also ensure access to all the external libraries you use within your program. Try Artifactory or Nexus.
Clearing process: if you work in a company whose strategy is actually to test software before publishing it, you should think of a clearing (sign-off) process based on the test results, and about the group of people who should be involved in that process. Get them into your project as partners - otherwise it'll be hard!
I hope the answer is helpful and fits your needs!
ALM is a huge topic, and here we're discussing just a part of the SDLC, which is only ONE topic within ALM.

How does the hardware platform impact the choice of programming language?

Long story short: the teacher who taught me throughout the last year has only recently left and has been replaced with a new one. This new teacher has given me an assignment that involves things (like this) that we were never previously taught. So this task has shown up on the assignment and I have no idea how to do it. I can't get hold of the teacher because he's unwell and not coming in for the next few days. And even when I do ask him to explain further, he gets into a right mood and makes me feel like I'm completely stupid.
Describe how the hardware platform impacts upon the choice for
the programming language
Looking at my activity here on SO, you can tell that I'm into programming, I'm into developing things, and I'm into learning, so I'm not just trying to get one of you guys to do my homework for me.
Could someone here please explain how I would answer a question like this?
Some considerations below, but not a full answer by any means.
If your hardware platform is a small embedded device of some kind, then your choice of programming language is going to be directed towards lower-level unmanaged languages - you probably won't be able to (or want to) load a managed language runtime like the Java JVM or .NET CLR. This is down to memory and storage requirements. Similarly, interpreted languages will be out of the question, as you won't have space for the interpreter.
If you're on a larger machine, it's more a question of compatibility. A managed language must run on a platform where its runtime is supported. In the case of .NET, that's Windows, or other platforms if you substitute the Microsoft CLR with the Mono runtime. In the case of Java, that's a far wider range of platforms.
This is by no means a definitive answer, but my first thought would be embedded systems. A task I perform on an embedded system, or other low-powered battery-operated computer, would need to be handled completely differently from one performed on a computer which has access to mains electricity.
One simple impact would be battery life.
If I use wasteful algorithms on an embedded system, the battery life will be affected.
Hope that helps stir the brain juices!
Clearly, the speed and amount of memory of the device will impact the choice. The more primitive and weak the platform is, the harder it is to run code developed with very high-level languages. Code written with them may just not work at all (e.g. when there isn't enough memory), or be too slow, or it will require serious optimizations (i.e. incur more work), perhaps negatively affecting the feature set or quality.
Also, some languages and software may rely heavily on or benefit from the availability of page translation in the CPU. If the CPU doesn't have it, certain checks will have to be done in software instead of being done automatically in hardware, and that will affect the performance or the language/software choice.

Successful Non-programmer, 5GL, Visual, 0 Source Code or Similar Tools?

Can anyone give me an example of successful non-programmer, 5GL (not that I am sure what they are!), visual, 0 source code or similar tools that business users or analysts can use to create applications?
I don’t believe there are and I would like to be proven wrong.
At the company where I work, we have developed an in-house MVC framework that we use to develop web applications. It is basically a reduced state machine written in XML (à la Spring WebFlow; a sketch follows the list below) for the controller, and a simple template-based engine for the presentation. Some of the benefits:
dynamic nature: no need to recompile to see the changes
reduced “semantic load”: basically, actions in the controller know only “if”. Therefore, it is easy to train someone to develop apps.
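To give a flavour of the XML, here is a hypothetical flow in plain Spring WebFlow 2.x syntax (not our in-house format, which I can't share; the state ids and view names are made up):

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- A two-step wizard: show a form, then a confirmation page. -->
    <flow xmlns="http://www.springframework.org/schema/webflow">

        <view-state id="enterDetails" view="details-form">
            <transition on="submit" to="confirm"/>
            <transition on="cancel" to="cancelled"/>
        </view-state>

        <view-state id="confirm" view="details-confirm">
            <transition on="ok" to="finished"/>
            <transition on="back" to="enterDetails"/>
        </view-state>

        <end-state id="finished"/>
        <end-state id="cancelled"/>
    </flow>

The whole controller is just states and transitions; that is what keeps the “semantic load” low.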
The current trend in the company (or at least at management level) is to try to produce tools for the platform that require 0 source code, are visual, etc. This has a good effect on clients (or at least on their management) since:
they can be convinced that this way they will need no programmers, or at least will be able to hire bottom-of-the-ladder programmers who cost much less than typical programmers.
It appears that there is reduced risk involved, since the tool limits what the implementer or user (just don’t use the word programmer!) can do, so there is less chance that he can introduce errors.
It appears to simplify the whole problem, since there seems to be no (notoriously complex) programming involved. Since applications load dynamically, there is less complexity than is typically associated with the J2EE lifecycle: compile, package, deploy, etc.
I am personally skeptical that something like this can be achieved. The solution we have today has a number of problems:
Implementers write JavaScript code to enrich pages (this could be solved by developing widgets). Albeit client-side, it is still code that can become very complex and result in some difficult bugs.
There is already a visual tool, but implementers prefer editing the XML since it is quicker and easier. For comparison, I guess not many people use the Eclipse Spring WebFlow plug-in to edit flow XML.
There is very poor reuse in the solution (it is based on copy-pasting XML). This hampers productivity and some other aspects, like fostering business knowledge.
There have been numerous performance and other issues caused by incorrect use of the tools. No matter how reduced the playing field, there is always room for error.
While the platform is probably more productive than Struts, I doubt it is more productive than today’s RAD web frameworks like RoR or Grails.
Verbosity
Historically, there have been numerous failures in this direction. The idea of programs written by non-programmers is old but, AFAIK, has never been successful. At a certain level of complexity, the power of source code becomes irreplaceable.
Today, there is a lot of talk about DSLs, but not as something that non-programmers should write - more as something they could read.
It seems to me that the direction the company is taking in this respect is a dead end. What do you think?
EDIT: It is worth noting (and that's where some of the inspiration is coming from) that many big players are experimenting in that direction. See Microsoft Popfly, Google Sites, iRise, many mashup solutions, etc.
Yes, it's a dead end. The problem is simple: no matter how simple you make the expression of a solution, you still have to analyze and understand the problem to be solved. That's about 80-90% of how (most good) programmers spend their time, and it's the part that takes the real skill and thinking. Yes, once you've decided what to do, there's some skill involved in figuring out how to do that (in a programming language of your choice). In most cases, that's a small part of the problem, and the least open to things like schedule slippage, cost overrun or outright failure.
Most serious problems in software projects occur at a much earlier stage: the part where you're simply trying to figure out what the system should do, which users must/should/may do which things, what problems the system will (and won't) attempt to solve, and so on. Those are the hard problems, and changing the environment to express the solution in some way other than source code will do precisely nothing to help with any of them.
For a more complete treatise on the subject, you might want to read No Silver Bullet: Essence and Accidents of Software Engineering, by Frederick Brooks (included in the 20th Anniversary Edition of The Mythical Man-Month). The entire paper is about essentially this question: how much of the effort involved in software engineering is essential, and how much is an accidental result of the tools, environments, programming languages, etc., that we use? His conclusion was that no available technology gave any reasonable hope of improving productivity by as much as one order of magnitude.
Not to question the decision to use 5GLs, etc, but programming is hard.
Jon Skeet - Programming is Hard
Coding Horror - Programming is Hard
5GLs have been considered a dead-end for a while now.
I'm thinking of the family of products that includes MS Access, Excel, Clarion for DOS, etc., where you can make applications with 0 source code and no programmers. Not that they are capable of AI-quality operations, but they can produce very usable applications.
There will always be "real" languages to do the work, but we can drag and drop the workflow.
I'm using Apple's Automator, which allows users to chain together "Actions" exposed by the various applications on their systems.
Actions have inputs and/or outputs, some have UI elements, and basic logic can be applied to the chain.
The key difference between Automator and other visual environments is that the actions use existing application code and don't require any special installation.
More info: http://www.macosxautomation.com/automator/
I've used it to "automate" many batch processes and have had really great results (it surprises me every time). I've got it running builds and backups, and whenever I need to process a mess of text files it comes through.
I would love to know whether iHook or Platypus (OS X wrapper builders for shell scripts) could let me develop plugins in Python ....
There is definitely room for more applications like this, and for more support from OS X application developers, but the idea is sound.
Until there's major support there aren't many "actions" available, but a quick check on my system just showed me an extra 30 that I didn't know I had.
P.S. There was another app for pre-OS X systems called "Filter Tops" which had a much more limited set of plugins.
How about Dabble DB?
Of course, just like MS Access and other non-programmer programming platforms, it has some necessary limitations so that users won't get themselves stuck... as John pointed out, programming is hard. But it does give the user a lot of power, and it seems that most applications non-programmers want to build are database-type applications anyway.

Confining user to one window during a test

Suppose you have written a testing program, to be used in a classroom.
How would you enforce that the participants don't use any other software (no Internet, no calculator, no console, etc.) during the exam time? Something similar to the "testing mode" on some TI calculators.
Another possible application: locking players in during a chess/bridge computer tournament.
Raymond Chen would recommend asking "What if Two Programs Did This?"... that is, what would happen if you had two such programs running at the same time? If the resulting scenario doesn't make sense, you may want to rethink your design.
That being said, if the students are taking the test on computers you (or your IT department) have control over, I think it would be much easier to make a special StudentTest account that only has permission to run the testing software and is denied access to everything else. That way your testing software isn't 5% test and 95% attempt to keep the student from cheating - which will probably have a hole in it at some point anyway.
This is a common Windows programming problem. Do a Google search for "windows kiosk mode".
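To make the locked-down-account idea concrete, one classic sketch (the account name and program path are hypothetical, and on recent Windows versions the built-in Assigned Access kiosk feature is the supported route) is to replace the Explorer shell for that account, so logging in as StudentTest launches nothing but the test:

    Windows Registry Editor Version 5.00

    ; Hypothetical sketch: import while logged on as the StudentTest account
    ; to replace its shell (normally explorer.exe) with the testing program.
    [HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Winlogon]
    "Shell"="C:\\ExamSoftware\\TestingProgram.exe"

Without Explorer there is no desktop, taskbar or Start menu for that user, though you would still want to disable Task Manager and similar escape hatches via policy.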

Learning mainframe & JCL with Java/OOP/SQL background [closed]

I've been coding and managing Java & ASP.NET applications & servers for my entire career. Now I'm being directed towards involvement with mainframes, i.e. z/OS & JCL, and I'm finding it difficult to wrap my head around it (they still talk about punch cards!). What's the best way to go about learning all this after having been completely spoilt by modern luxuries?
There are no punch cards in modern mainframes; they're just having you on.
You will have a hard time since there are still many things done the "old" way.
Data sets are still allocated with properties such as fixed-block-80, variable-block-255 and so on. Plan your file contents.
No directories. There are levels of hierarchy and they're limited to 8 characters each.
The user interface is ISPF, a green-screen text-mode user interface from the seventh circle of hell for those who aren't used to it.
Most jobs will still be submitted as batch jobs, and you will have to monitor their progress with SDSF (a sort of task manager).
That's some of the bad news, here's the good news:
It has a USS subsystem (UNIX) so you can use those tools. It's remarkably well integrated with z/OS. It runs Java, it runs Websphere, it runs DB2 (proper DB2, not that little Linux/UNIX/Windows one), it runs MQ, etc, etc. Many shops will also run z/VM, a hypervisor, under which they will run many LPARs (logical partitions), including z/OS itself (multiple copies, sometimes) and zLinux (SLES/RHEL).
The mainframe is in no danger of disappearing anytime soon. There is still a large amount of work being done at the various IBM labs around the world and the 64-bit OS (z/OS, was MVS, was OS/390, ...) has come a long way. In fact, there's a bit of a career opportunity as all the oldies that know about it are at or above 55 years of age, so expect a huge suction up the corporate ladder if you position yourself correctly.
It's still used in the big corporations, as it's the only thing that can be trusted with their transactions - the z in System z means zero downtime, and that's not just marketing hype. The power of the mainframe lies not in its CPU grunt (the individual processors aren't that powerful, but they come in books of 54 CPUs with hot backups, and you can run many books in a single System z box) but in the fact that all the CPU does is process instructions.
Everything else is offloaded to specialist processors, zIIPs for DB2, zAAPs for Java workloads, other devices for I/O (and I/O is where the mainframe kills every other system, using fibre optics and very large disk arrays). I wouldn't use it for protein folding or genome sequencing but it's ideal for where it's targeted, massively insane levels of transaction processing.
As I stated, z/OS has a UNIX subsystem and z/VM can run multiple copies of z/OS and other operating systems - I've seen a single z800 box running tens of thousands of instances of RHEL concurrently. This puts all the PC manufacturers 'green' claims to shame and communications between the instances is blindingly fast with HyperSockets (TCP/IP but using shared memory rather than across slow network cables (yes, even Gigabit Ethernet crawls compared to HyperSockets (and sorry for the nested parentheses :-))).
It runs Websphere Application Server and Java quite well in the Unix space while still allowing all the legacy (heritage?) stuff to run as well. In fact, mainframe shops need not buy PC-based servers at all, they just plonk down a few zLinux VMs and run everything on the one box.
And recently, there's been talk that IBM may provide xSeries (i.e., PC) plugin devices for their mainframes as well. While most mainframe people would consider that a wart on the side of their beautiful box, it does open up a lot of possibilities for third-party vendors. I'm not sure they'll ever be able to run 50,000 Windows instances, but that's the sort of thing they seem to be aiming for (one ring to rule them all?).
If you're interested, there's a System z emulator called Hercules which I've seen running at 23 MIPS on a Windows box and it runs the last legally-usable MVS 3.8j fast enough to get a feel. Just keep in mind that MVS 3.8j is to z/OS 1.10 as CP/M is to Windows XP.
To provide a shameless plug for a book one of my friends at work has written, check out What On Earth is a Mainframe? by David Stephens (ISBN-13 = 978-1409225355). I found this invaluable since I came from a PC/UNIX background, and it is quite a paradigm shift. I think this book would be ideal for your particular question. I think chunks of it are available on Google Books so you can try before you buy.
Regarding JCL, there is a school of thought that only one JCL file has ever been written and all the others were cut'n'paste jobs on that. Having seen the contents of them, I can understand this. Programs like IEBGENER and IEFBR14 make Unix look, if not verbose, at least understandable.
Your first misconception is believing the "L" in JCL. JCL isn't a programming language; it's really a static declaration of how a program should run and what files etc. it should use.
In this way it is much like (though superior to) the XML config spaghetti that is used to control such "modern" software as Spring, Hibernate and Ant.
If you think of it in these terms, all will become clear.
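As a tiny (hypothetical) illustration: IEFBR14 is a real IBM utility that does nothing at all, yet jobs running it are common, because the DD statements in the JCL do the actual work. Here the job merely declares that a new fixed-block-80 dataset should be allocated and catalogued (the job name, account field and dataset name are made up):

    //MYJOB1   JOB (ACCT),'JCL SKETCH',CLASS=A,MSGCLASS=X
    //* IEFBR14 does nothing; the DD statement below is the whole point.
    //STEP1    EXEC PGM=IEFBR14
    //NEWDS    DD  DSN=USERID.SAMPLE.DATA,
    //             DISP=(NEW,CATLG,DELETE),
    //             RECFM=FB,LRECL=80,SPACE=(TRK,(1,1))

Nothing "runs" in the usual sense: the declaration itself is the work.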
Mainframe culture is driven by two seemingly incompatible obsessions:
Backward compatibility. You can still run executables written and compiled in 1970. Forty-year-old JCL and scripts still run and work!
Bleeding-edge performance. You can have 128 CPUs on four machines in two data centres working on a single DB2 query. It will run the latest J2EE (WebSphere) applications faster than any other machine.
If you ever get involved with CICS (the mainframe transaction server) on z/OS, I would recommend the book "Designing and Programming CICS Applications".
It is very useful.
If you are going to be involved with traditional legacy application development, read the books by Steve Eckols. They are pretty good. You should map the terms from open systems to the mainframe, which will cut down your learning time. A couple of examples:
Files are called datasets on the mainframe
JCL is more like a shell script
Subprograms/routines are like common functions
Good luck!
The more hand-holding at the beginning, the better. I've done work on a mainframe as an intern and it wasn't easy, even though I had a fairly strong UNIX background. I recommend asking someone who works in the mainframe department to spend a day or two teaching you the basics. IBM training may be helpful as well, but I don't have any experience with it so I can't guarantee it will be. I've put my story about learning how to use the mainframe below for some context.

It was decided that all the interns were going to learn how to use the mainframe as a summer project that would take 20% of their time. It was a complete disaster, since all the interns except me were working in non-mainframe areas and had no one they could yell to over the cube wall for help. The ISPF and JCL environment was too alien for them to get proficient with quickly. The only success they had was basic programming under USS, since it's basically UNIX and college had familiarized them with it.

I had better luck for two reasons. One, I worked in a group of about 20 mainframe programmers, so I was able to have someone sit down with me on a regular basis to help me figure out JCL, submitting jobs, etc. Two, I used Rational Developer for System z back when it was named WebSphere Developer for System z. This gave me a mostly usable GUI that let me perform most tasks, such as submitting jobs, editing datasets, allocating datasets, debugging programs, etc. Although it wasn't polished, it was usable enough and meant I didn't have to learn ISPF. The fact that I had an Eclipse-based IDE for basic mainframe tasks decreased the learning curve significantly and meant I only had to learn new technologies such as JCL, not an entirely new environment.

As a further note, I now use ISPF, since the software needed to allow Rational to run on the mainframe wasn't installed on one of the production systems I used, so ISPF was the only choice. I now find that ISPF is faster than Rational Developer and I'm more efficient with it. But this is only because I was able to learn the underlying technology, such as JCL, with Rational, and the ISPF interface at a later date. If I had had to learn both at once it would have been much harder and would have required more one-on-one instruction.
