Is there a way to automatically test an interactive command-line (console) Linux app?

I am one of the developers of a console two-pane file manager for Linux (a port of Far Manager, called far2l); the application interface resembles Midnight Commander. I am now faced with the need to implement automated testing. Can you please tell me which application or framework can be used for this?
I need the ability to write scripts containing a sequence of keystrokes that will be transmitted to the console application (the ability to specify delays between emulated keystrokes is also needed), as well as the ability to automatically analyze the application interface drawn in the console, for example, for the presence of certain strings. I also need some kind of framework to run a number of such tests automatically and generate test reports.
Most console application testing tools I could find (like "cram", "cli-unit", "aruba", or "exactly") unfortunately don't seem to be designed specifically for testing interactive applications.
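One possible approach, shown as a minimal sketch rather than a full framework: pexpect spawns the application in a pseudo-terminal and sends keystrokes with delays, while pyte emulates the terminal so the drawn screen can be inspected. The binary name, key sequence, and expected string below are placeholders, not anything specific to far2l.

```python
# Minimal sketch: drive a TUI app with pexpect and inspect the rendered screen with pyte.
# The binary name, keys, and expected string are placeholders.
import time

import pexpect  # pip install pexpect
import pyte     # pip install pyte

# Emulate an 80x24 terminal; escape sequences get rendered into this buffer.
screen = pyte.Screen(80, 24)
stream = pyte.ByteStream(screen)

child = pexpect.spawn("far2l", dimensions=(24, 80))

def feed_pending_output(timeout=0.5):
    """Read whatever the app has written so far and feed it to the emulator."""
    try:
        while True:
            stream.feed(child.read_nonblocking(size=4096, timeout=timeout))
    except (pexpect.TIMEOUT, pexpect.EOF):
        pass

# Send keystrokes with a delay between them (here: Tab, then Enter).
for key in ["\t", "\r"]:
    child.send(key)
    time.sleep(0.2)

feed_pending_output()

# Assert on what is actually drawn in the console.
rendered = "\n".join(screen.display)
assert "some expected string" in rendered, rendered
```

A generic test runner such as pytest could then collect many such scripts and take care of running them in bulk and producing reports.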

Related

Extract a SAS Enterprise Guide project into a batch job runnable on a Unix server?

We have built a project in Enterprise Guide with the aim of creating easily understandable and maintainable code. The project contains a set of process flows that should be run in a specific order. We need to run this project on a Linux server machine, where the SAS Metadata Server is running.
The basic idea is to extract this project into SAS code that we can run from the command line in Linux as a batch job.
Question 1:
Is there any other way to schedule a batch job on a Linux-hosted SAS server? I have read about VBS scripting for scheduling/running batch jobs, but for that to work on a Linux server an installation of WINE is required, which is almost completely out of the question on a production machine that already runs a number of other important applications.
Is there a way to export a complete project into SAS code, provided that I specify the order in which the process flows run? I have tried an ordered list, which lets you build a list of tasks to run in order (although there is no way to choose a whole process flow as a single task), but unfortunately the ordered list itself cannot later be exported as SAS code.
Our current solution is the following:
We export each process flow of the SAS EG project into SAS code, and then create another SAS program with %include lines that runs all the extracted programs in the order we want. This is of course a workable solution, but definitely not the most elegant one.
Question 2:
Since I don't know exactly how the code is exported, are there any dangers I should bear in mind with the solution I chose?
Is there any other, more elegant way?
You have a couple of options from what I'm familiar with, plus I suspect if Dom happens by he'll know more. These answers are based on EG 6.1, which is the current version (ships with 9.4); it's possible some of these things may not be true in earlier versions.
First, if you're running Enterprise Guide from Windows, you can schedule the job locally (on any Windows machine with Enterprise Guide). You're not scheduling the server directly, you schedule Windows to launch an EG process that connects to the server and does its magic. That's how I largely interact with scheduling (because I have a fairly 'light' scheduling need).
Second, from the blog post "Four Ways to Schedule SAS Tasks", options 3 and 4 may be helpful for you: the SAS Platform Suite is designed in part for scheduling, and SAS Management Console can be used to schedule via operating system tools. Both are very helpful.
Third, you may want to look into SAS Stored Processes, which should be schedulable. A process flow can be converted into a stored process.
For your specific questions:
Question 1: When you export a process flow or a project, at least in 6.1 you have the option to change the order in which the programs are exported. It's manual, so it's probably not perfect, but it does give you that option. (The code seems to be exported in creation order by default, which is sub-optimal.) The project export does group process flows together, but you don't have the option of manipulating the order of the process flows - you have to move each program around individually, which would be tedious. It also, of course, gives you less flexibility if you need to run some programs multiple times.
Question 2: As Stig Eide points out in the comments, make sure your system option LRECL is > 256 (the default) or you run some risk of code being cut off. In 9.2+ this is modifiable; just place LRECL=32767 in your config.sas file.

DoS attack on my localhost Tomcat

I'm using Tomcat 6 on localhost and running an application site.
I want to stress test it with a DoS-style load from the command prompt.
Can anyone help me with this?
http://localhost:8080/web/login.xhtml
That's my URL.
Since you are using Tomcat, you are living in the Java world. The best Java-based tool I know of to perform load-testing is Apache JMeter.
It is honestly really great. You can set up complete workflows for a particular "user" to run through, and then run lots of them in parallel. You can set up a bunch of different workflows to represent your various users and then launch an arbitrary number of them against your test site. You want 1 admin user and 5000 "regular" users? You got it. You want some users to be creating accounts and exploring the site while others continuously buy items in their shopping carts? No problem. It handles session tracking, etc. You can even set the time interval between requests (or just go as fast as possible).
JMeter's test plans are built in a GUI, but it is not GUI-only: you can create a test plan in the GUI and then launch it from the command line in non-GUI mode (jmeter -n -t plan.jmx).
If you want to stick with Apache, you can use ApacheBench (aka "ab"), which comes with Apache httpd. It's pretty simple, but has some shortcomings due to its primitive threading model: you can easily max out ab's connection-making capabilities before you exhaust the server's resources.
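Purely as a toy illustration of the idea (this is neither JMeter nor ab, and not a substitute for either), a load generator at its core just fires many concurrent requests and tallies the outcomes. The URL below is the one from the question; the request and concurrency counts are arbitrary.

```python
# Toy concurrent load generator -- for illustration only, not a JMeter/ab replacement.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/web/login.xhtml"  # the URL from the question
REQUESTS = 1000      # total requests to send (arbitrary)
CONCURRENCY = 50     # parallel workers (arbitrary)

def hit(_):
    """Issue one GET and return the status code, or the exception name on failure."""
    try:
        with urlopen(URL, timeout=5) as resp:
            return resp.status
    except Exception as exc:
        return type(exc).__name__

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(hit, range(REQUESTS)))

# Crude summary: how many 200s, timeouts, connection errors, etc.
print(Counter(results))
```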

Test automation tool suited for an operations team

I would like to start using a testing framework that does the following:
contains a process (a process can be a test) management engine that is able to start processes (tests) with the help of a scheduler
it is distributed: processes can run locally or on other machines
tests can be anything:
simple telnet on a given port (infrastructure testing)
a disk I/O or MySQL benchmark
a jar exported from Selenium that does acceptance testing
it needs to know whether the test passed or not
has the capability to get real-time data from the test (something like Graphite) -- this is optional
allows processes to be built in many programming languages: Perl, Ruby, C, Bash
has a graphical interface
open-source
written in any language as long as it is light on resources; I would prefer C, Perl, or Ruby
runs on Linux
What not to be:
an add-on to a browser: Selenium, BITE, etc.
I do not want something focused on web development
I would like to use such a tool, or maybe collaborate on building one. I hope I was explicit enough. Thank you.
You might want to look at Robot Framework combined with Jenkins. Robot Framework is a tool written in Python for doing keyword-based acceptance testing. Jenkins is a continuous integration tool which allows you to schedule jobs and distribute them amongst a grid of nodes.
Robot Framework tests can do anything Python can do, plus a whole lot more. It has a remote interface, which means you can write test keywords in just about any language. For example, in one job we had keywords written in Java; in another we used Robot Framework with .NET-based keywords. This can be accomplished via the remote interface, or by running Robot under Jython to work on the JVM, or under IronPython to work in a .NET environment.
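As a rough sketch of what that looks like (the library and keyword below are made up for illustration), a plain Python class becomes a Robot Framework keyword library simply by exposing methods:

```python
# InfraKeywords.py -- a hypothetical Robot Framework keyword library.
# A suite would import it with:   Library    InfraKeywords.py
# and could then call:            Check Port Open    example.com    22
import socket

class InfraKeywords:

    def check_port_open(self, host, port, timeout=3):
        """Fail the test if host:port does not accept a TCP connection."""
        # Robot passes arguments as strings by default, so coerce them here.
        with socket.socket() as sock:
            sock.settimeout(float(timeout))
            if sock.connect_ex((host, int(port))) != 0:
                raise AssertionError(f"{host}:{port} is not reachable")
```

Jenkins can then run such suites on a schedule or across its build agents and archive the reports Robot generates.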

Node server GUI frontend

Well, we all know about headless servers. Actually, probably the vast majority of servers out there are headless.
As usual (it seems), my situation calls for something quite different. Basically, the proposed architecture looks more or less like this:
The app server (node.js) is situated on a physical machine physically connected to two screens.
Between this machine and the 'net there are all sorts of regular networking layers. Please keep in mind that one of the main reasons for this setup is physical portability: i.e., the client gets the necessary hardware as the product. The server itself relies on a CDN for static files, etc.
Each monitor/screen needs to show something different, produced by the same node server.
For now this server will probably run on Windows, but given a concept (which is what my question is after), I can change the code to run on the target platform. Well, depending on my code, this could even be done automatically.
So, my actual question. Node is quite flexible in that it can be run by anything - even custom-made software (C++, Delphi, even GM). Just shell_exec('node server.js') and we're off.
But the screens themselves need to be quite dynamic, so Node needs to drive both screens in some way. A few options I'm considering:
A custom app which creates two resizable, featureless windows with an embedded Chromium browser, to be controlled by the Node server somehow (how would Node interact with these browsers?)
A custom app which, based on Node's CLI output, updates the two screens' UI. Since I need something flashy as the UI, this app would be created in something like GameMaker or a similar engine.
PS: Just in case you're asking: the physical connection, as opposed to a network one (e.g. a web-based GUI frontend), is by design.
I'd just wire up the result/monitoring screens as regular HTML pages. In your Node app, create a second HTTP server (on a non-standard port, firewalled from the public) that serves up the monitoring page.
Use socket.io to send the realtime data to the monitoring page, which can make everything look pretty. Fire it up in a full-screen instance of Chrome.
This approach completely frees you from any kind of platform dependency, and decouples the monitoring app from the server app. It leaves you the latitude to run the monitoring app on a separate box if necessary.

Run a CGI script as a different user

I have a tool written in Perl which is used by different users in my company. Each user has their own disk space allocated to them, and they run the tool in that disk space. This works fine without any issues. As a next step, I wanted to make the tool available through the web, so I created a web application through which users can run it. The issue I have is that the tool always runs as a single user. I know the user name through authentication; is there a way I can run the tool as the user who is running the web application?
Yes, suexec.
Also see questions tagged suexec.
