Is there an API to see how flow files are being processed by the 'origen program' command? - origen-sdk

If I pass multiple flow files to the Origen program command like so:
origen p flow_file1 flow_file2 flow_file3
Is there an API to see the queue in real time in the test interface? I see there is a current_command API, but it doesn't contain any information about the number of flow files being processed. I'm looking for something along the lines of:
Origen.commands.queue # => ['flow_file1', 'flow_file2', 'flow_file3']
thx

This is not currently available, but it should not be hard to add if you want it.
Internally, the list of files is held in an anonymous array returned by this call to expand_lists_and_directories - https://github.com/Origen-SDK/origen/blob/60746ae33fd813b8cf1a3624f985476138a59920/lib/origen/application/runner.rb#L85
You could capture that into a named array and then make a read method for it.

Related

Logstash Read a Property File

I am looking for a way of reading a property file in a logstash config file so that I can do some data transformation based on the property file's values. For example, I could skip processing type 1 events and send them to one index, and process type 2 events and send them to another index.
If I understand your question correctly: note that Logstash will read all the files in your config directory. You can put different processing blocks in different config files, which makes for a nice separation of code. Be sure that each block is wrapped in a conditional so that they don't all run for all events, as sketched below.
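A minimal sketch of that conditional wrapping, assuming events carry a type field; the file names, hosts, and index names here are placeholders:

# 10-type1.conf -- skip type 1 events entirely
filter {
  if [type] == "type1" {
    drop { }
  }
}

# 20-type2.conf -- only this block handles type 2 events
output {
  if [type] == "type2" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "index-b"
    }
  }
}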

Passing the output of scenario 1 as input to scenario 2 in Cucumber for Python

I would like to pass data from scenario 1 to scenario 2; are there any inbuilt methods for this?
scenario 1: I am generating a user with details
scenario 2: I want to fetch the generated user name from scenario 1 and use it in further steps
You can use Behave's context object to store data between test steps. For instance, let's say you have 2 scenarios:
Scenario: First one
# Some Givens and stuff
Then I am generating user with details
Scenario: Second one
# Some Givens and stuff
Then I fetch and use generated user details
In your step implementation file within the steps/ directory:
#then("I am generating user with details")
def step_impl(context):
context.user_details = function_to_generate_user_details()
#then("I fetch and use generated user details")
def step_impl(context):
function_to_do_something(context.user_details)
Note that your step implementations do not have to be in the same file, since Behave searches through all the files within the steps/ directory when running your feature files. One caveat: Behave layers the context per scenario, so attributes set on it during one scenario are cleaned up before the next scenario starts; for cross-scenario sharing you may need a module-level variable or external storage, as the next answer suggests.
I think instead of messing with the internals of Behave, you should save that data to a file (Excel, JSON, whatever), then parse that data in your other function.
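A minimal sketch of that file-based approach, assuming JSON and reusing the hypothetical step names from above:

import json
from behave import then

STATE_FILE = "shared_state.json"  # hypothetical scratch file

@then("I am generating user with details")
def step_generate(context):
    details = {"name": "alice"}  # stand-in for the real user generator
    with open(STATE_FILE, "w") as f:
        json.dump(details, f)

@then("I fetch and use generated user details")
def step_fetch(context):
    with open(STATE_FILE) as f:
        details = json.load(f)
    assert details["name"] == "alice"

Because the file outlives any one scenario, this works even though the context does not carry scenario-local attributes across scenario boundaries.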

tailLines and sinceTime in the logging API do not work simultaneously

I am using Container Engine, and my pods are hosted there.
I am trying to fetch logs using the log API:
http://localhost:8000/api/v1/namespaces/app-test/pods/designer-0/log?tailLines=100&sinceTime=2017-09-17T10:47:58Z
If I use either query param separately, it works and shows the proper result, but if I use them simultaneously, only 100 lines come back and the sinceTime param seems to get ignored.
My scenario is: I need the logs from a specific time onward, delivered in chunks, like 100 lines at a time.
I am not sure whether it is a bug or just not implemented.
I found this from the api reference manual
https://kubernetes.io/docs/api-reference/v1.6/
tailLines - If set, the number of lines from the end of the logs to
show. If not specified, logs are shown from the creation of the
container or sinceSeconds or sinceTime
So that means if you specify tailLines, it starts from the end. I don't see any other option explicitly mentioned except limitBytes, but you will have to play around with it, as it does not guarantee a number of lines.
tailLines=X tells the server to start that many lines from the end
sinceTime tells the server to start from the specified time
the options are mutually exclusive
Thanks all,
I have since realized that it is not ignoring sinceTime; tailLines' intended functionality is to return lines from the end.
So if I set sinceTime to 10 PM yesterday, it will select the records from that time onward, and if tailLines is also given, it will return the most recent lines of that chunk.
So it was working as expected. I need to play with limitBytes to get the logs in chunks from that time, instead of the full logs.
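A rough sketch of that chunked approach (Python with requests, reusing the proxy URL from the question; limitBytes and timestamps are real log-API params, but the chunk size and cursor handling here are illustrative):

import requests

BASE = ("http://localhost:8000/api/v1/namespaces/app-test"
        "/pods/designer-0/log")

since = "2017-09-17T10:47:58Z"
while True:
    # timestamps=true prefixes each line with its RFC3339 time,
    # which lets us advance the sinceTime cursor between requests
    resp = requests.get(BASE, params={
        "sinceTime": since,
        "limitBytes": 16384,  # one "chunk"; tune to taste
        "timestamps": "true",
    })
    lines = resp.text.splitlines()
    if not lines:
        break  # nothing new; a real poller would sleep and retry
    for line in lines:
        print(line)
    since = lines[-1].split(" ", 1)[0]  # timestamp of the last line seen

Depending on boundary semantics, the last line of one chunk may reappear at the start of the next, so deduplicate on the timestamp if that matters.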

Passing key material to openssl commands

Is it safe to pass a key to the openssl command via command-line parameters in Linux? I know it nulls out the actual parameter, so it can't be viewed via /proc, but even with that, is there some way to exploit it?
I have a Python app that uses OpenSSL to do the encryption/decryption through stdin/stdout streaming in a subprocess, but I want to know my keys are safe.
Passing the credentials on the command line is not safe. It will result in your password being visible in the system's process listing - even if openssl erases it from the process listing as soon as it can, it'll be there for an instant.
openssl gives you a few ways to pass credentials in - the man page has a section called "PASS PHRASE ARGUMENTS", which documents all the ways you can pass credentials into openssl. I'll explain the relevant ones:
env:var
Lets you pass the credentials in an environment variable. This is better than using the process listing, because on Linux your process's environment isn't readable by other users by default - but this isn't necessarily true on other platforms.
The downside is that other processes running as the same user, or as root, will be able to easily view the password via /proc.
It's pretty easy to use with python's subprocess:
import os
import subprocess

new_env = os.environ.copy()
new_env["MY_PASSWORD_VAR"] = "my key data"
p = subprocess.Popen(["openssl", ..., "-passin", "env:MY_PASSWORD_VAR"], env=new_env)
fd:number
This lets you tell openssl to read the credentials from a file descriptor, which it will assume is already open for reading. By using this you can write the key data directly from your process to openssl, with something like this:
import os
import subprocess

r, w = os.pipe()
# pass_fds (Python 3.2+) keeps the read end open in the child
p = subprocess.Popen(["openssl", ..., "-passin", "fd:%i" % r], pass_fds=(r,))
os.close(r)  # the parent only needs the write end
os.write(w, b"my key data\n")  # os.write takes bytes in Python 3
os.close(w)
This will keep your password secure from other users on the same system, assuming that they are logged in with a different account.
With the code above, you may run into issues with the os.write call blocking. This can happen if openssl waits for something else to happen before reading the key in. It can be addressed with asynchronous I/O (e.g. a select loop) or an extra thread to do the write() and close().
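For instance, a sketch of the extra-thread variant (the enc invocation here is just an illustrative command, not taken from the question):

import os
import subprocess
import threading

r, w = os.pipe()
p = subprocess.Popen(
    ["openssl", "enc", "-aes-256-cbc", "-in", "plain.txt",
     "-out", "cipher.bin", "-pass", "fd:%d" % r],
    pass_fds=(r,),
)
os.close(r)

def feed_key():
    # runs in its own thread so a blocking write can't stall the parent
    os.write(w, b"my key data\n")
    os.close(w)

threading.Thread(target=feed_key).start()
p.wait()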
One thing to watch out for is close_fds: with close_fds=True (the default in Python 3), the child loses the descriptor unless you list it in pass_fds, as the snippets above do. Python 2's subprocess has no way to say "don't close one specific fd", so if you need close_fds=True there, I'd suggest using the file: syntax (below) with a named pipe.
file:pathname
Don't use this with an actual file to store passwords! That should be avoided for many reasons, e.g. your program may be killed before it can erase the file, and with most journalling file systems it's almost impossible to truly erase the data from a disk.
However, if used with a named pipe with restrictive permissions, this can be as good as using the fd option above. The code to do this will be similar to the previous snippet, except that you'll need to create a fifo instead of using os.pipe():
pathToFifo = my_function_that_securely_makes_a_fifo()
p = subprocess.Popen(["openssl", ..., "-passin", "file:%s" % pathToFifo])
with open(pathToFifo, "w") as fifo:
    fifo.write("my key data\n")
The write here can have the same blocking I/O problems as the os.write call above, and the resolutions are also the same.
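For completeness, here is one way the hypothetical my_function_that_securely_makes_a_fifo() could be written (a sketch, not the original poster's code):

import os
import tempfile

def my_function_that_securely_makes_a_fifo():
    # mkdtemp creates the directory with mode 0700, so only our user can reach it
    private_dir = tempfile.mkdtemp()
    path = os.path.join(private_dir, "pass.fifo")
    os.mkfifo(path, 0o600)  # the fifo itself is readable/writable only by us
    return path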
No, it is not safe. No matter what openssl does with its command line after it has started running, there is still a window of time during which the information is visible in the process' command line: after the process has been launched and before it has had a chance to null it out.
Plus, there are many ways for an accident to happen: for example, the command line gets logged by sudo before it is executed, or it ends up in a shell history file.
Openssl supports plenty of methods of passing sensitive information so that you don't have to put it in the clear on the command line. From the manpage:
pass:password
the actual password is password. Since the password is visible to utilities (like 'ps' under Unix) this form should only be used where security is not important.
env:var
obtain the password from the environment variable var. Since the environment of other processes is visible on certain platforms (e.g. ps under certain Unix OSes) this option should be used with caution.
file:pathname
the first line of pathname is the password. If the same pathname argument is supplied to -passin and -passout arguments then the first line will be used for the input password and the next line for the output password. pathname need not refer to a regular file: it could for example refer to a device or named pipe.
fd:number
read the password from the file descriptor number. This can be used to send the data via a pipe for example.
stdin
read the password from standard input.
All but the first two options are good.
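For the last of those, a short subprocess sketch (hypothetical rsa invocation; note that feeding the password on stdin conflicts with streaming the payload over stdin as the question intends, so fd: or file: are better fits there):

import subprocess

p = subprocess.Popen(
    ["openssl", "rsa", "-in", "key.pem", "-passin", "stdin",
     "-out", "key_plain.pem"],
    stdin=subprocess.PIPE,
)
p.communicate(b"my key data\n")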

Perl program structure for parsing

I've got a question about program architecture.
Say you've got 100 different log files with different formats and you need to parse and put that info into an SQL database.
My view of it is like this:
use a general config file like:
program1->name1("apache",/var/log/apache.log) (modulename,path to logfile1)
program2->name2("exim",/var/log/exim.log) (modulename,path to logfile2)
....
sqldb->configuration
use something like one module per program, e.g. type1.module (regexp, log structure (some variables), sql (tables and functions))
fork or thread processes (I don't know which is better on Linux now) for the different programs.
So the question is: is my view of this correct? Should I use one module per program (web/MTA/iptables),
or is there some better way? I think some regexps would be the same, like date/time/IP/URL. What should I do with those? Or what have I missed?
example: mta exim4 mainlog
2011-04-28 13:16:24 1QFOGm-0005nQ-Ig
<= exim@mydomain.org.ua H=localhost
(exim.mydomain.org.ua)
[127.0.0.1]:51127 I=[127.0.0.1]:465
P=esmtpsa
X=TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32
CV=no A=plain_server:spam S=763
id=1303985784.4db93e788cb5c@mydomain.org.ua T="test" from
<exim@exim.mydomain.org.ua> for
test@domain.ua
Everything highlighted is already parsed and will be put into the sqldb.incoming table. For now I have a structure in Perl to hold every parsed variable, like $exim->{timestamp} or $exim->{host}->{ip}.
My program will do something like tail -f /file and parse it line by line.
Flexibility: let's say I want to add support for an Apache server (just timestamp, user IP, and file downloaded). All I need to know is which logfile to parse, what the regexp should be, and what the SQL structure should be. So I'm planning to have this as a module: just fork or thread the main process with parameters (logfile, filetype). Maybe later I would add some options for what not to parse (maybe some log level is low and you just don't see much there).
I would do it like this:
Create a config file that is formatted like this: appname:logpath:logformatname
Create a collection of Perl classes that inherit from a base parser class.
Write a script which loads the config file and then loops over its contents, passing each iteration to the appropriate handler object (sketched below).
If you want an example of steps 1 and 2, we have one in our project. See MT::FileMgr and MT::FileMgr::* here.
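As a language-agnostic illustration of those three steps (sketched here in Python for brevity; every name is hypothetical):

# parse config lines of the form  appname:logpath:logformatname
class BaseParser:
    def __init__(self, logpath):
        self.logpath = logpath

    def parse_line(self, line):
        raise NotImplementedError

class ApacheParser(BaseParser):
    def parse_line(self, line):
        # a real parser would apply the Apache regexp and return a field dict
        return {"raw": line}

PARSERS = {"apache_combined": ApacheParser}

def load_parsers(config_path):
    with open(config_path) as f:
        for line in f:
            appname, logpath, fmt = line.strip().split(":")
            yield PARSERS[fmt](logpath)

Shared patterns (date/time/IP/URL regexps) naturally live in the base class, which answers the duplication concern above.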
The log-monitoring tool wots could do a lot of the heavy lifting for you here. It runs as a daemon, watching as many log files as you could want, running any combination of perl regexes over them and executing something when matches are found.
I would be inclined to modify wots itself (which its licence freely allows) to support a database write method - have a look at its existing handle_* methods.
Most of the hard work has already been done for you, and you can tackle the interesting bits.
I think File::Tail is a nice fit.
You can make an array of File::Tail objects and poll them with select like this:
while (1) {
    ($nfound, $timeleft, @pending) =
        File::Tail::select(undef, undef, undef, $timeout, @files);
    unless ($nfound) {
        # timeout - do something else here, if you need to
    } else {
        foreach (@pending) {
            # here you can handle log messages depending on filename
            print $_->{"input"} . " (" . localtime(time) . ") " . $_->read;
        }
    }
}
(adapted from the File::Tail documentation)
