I have been using the Origen parameters feature and have successfully created single-dimension parameter sets dynamically, like so:
pcie.define_params "pcie_demphasis_#{a}_logvar_#{v}".to_sym, inherit: :logvar_default do |p|
  pcie.pcie_tx_p_pins.each_with_index do |(pin_id, pin_obj), i|
    p.tm.send("Variable#{i}.Name=", "var name1")
  end
end
Is there a way to chain these sends together? I have a 4-D data set that needs to be dynamically defined when importing data from Excel.
thx
Does this not work, or do you mean something else?
p.tm.send("blah").send("blah2").send("blah3=", "blah3_val")
I'm trying to loop over different country folders that each contain a fixed sub-folder named survey (i.e. Spain/survey, USA/survey).
Where and how do I need to define a wildcard/parameter for the countries so I can loop over all the files in the survey folders?
What is the right wildcard syntax (the equivalent of LIKE 'survey%' in SQL)?
I tried several ways to define it with no success and I would be happy to get some help on this. Thanks!
If the list of paths is static, you can create a parameter, or store the paths in a SQL database and retrieve them with a Lookup activity.
Pass the output to a ForEach activity, and within the ForEach use a Copy activity.
You can parameterize the input dataset to receive the file paths, so you don't need any wildcard characters; you use the actual paths themselves.
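For illustration, here is a rough sketch of the ForEach wiring. The activity and dataset names, the folderPath parameter, and the shape of the lookup output are all assumptions, and the Copy activity's source/sink settings are omitted:

{
  "name": "ForEachCountry",
  "type": "ForEach",
  "typeProperties": {
    "items": { "value": "@activity('LookupFolders').output.value", "type": "Expression" },
    "activities": [
      {
        "name": "CopySurveyFiles",
        "type": "Copy",
        "inputs": [
          {
            "referenceName": "SourceDataset",
            "type": "DatasetReference",
            "parameters": { "folderPath": "@item().folderPath" }
          }
        ]
      }
    ]
  }
}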
Hope this is helpful.
I am having trouble creating a particular type of visualization in Kibana. My events in Kibana are statistics on communications between two IP addresses. Two of the fields are lists of ports used by each particular IP address. An example of the fields would be:
ip1 = 192.168.101.2
ip2 = 192.168.101.3
ip2Ports = 80,443
ip1Ports = 80,57000,0
I would like to have a top count of all the values, such as:
port    count
80      2
57000   1
443     1
I have been able to parse ip2Ports into ip2Ports_List.column1, ip2Ports_List.column2, etc., but I can only choose one term with the terms aggregation in the visualization. I can split the chart, but that leads to separate counts for each field. If I go by the original ip2Ports field, it is just aggregated as the string, such as "80,443".
Is it even possible to create a top-count visualization of fields with multiple values? If so, how would I do so? If not, is there a way to restructure my data so I can do it? Thank you!
My issue stemmed from the format of the values being sent in by Logstash. I had thought that the 'ip2Ports_List.column1' format, which was a result of using the csv filter, was part of an array. It wasn't. After analyzing it, 'ip2Ports_List.column1' didn't seem to be much different from a new field.
Elasticsearch needed an actual array to give me the visualization I wanted. I wasn't sure what the best way to produce it was, so I just ended up using the ruby filter. This is what the code ended up looking like:
ruby {
  code => "fields = event.get('portsIp').split(',')
           event.set('portsIpArray', fields)"
}
Here 'portsIp' looked something like "80,443". Splitting it produced a Ruby array, which I then set as the value of a new event field, 'portsIpArray'.
From there, when I tried to visualize the 'portsIpArray' field, it looked exactly how I wanted it to, treating each port as a separate value while still associating each port with the same event/field.
Extra:
Also, something I discovered: if you're writing your code like I was, directly in the Logstash conf file, Logstash doesn't like it if you use double quotes within the double-quoted code string. In hindsight it makes sense, but it doesn't give a clear error, so it's difficult to figure out.
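For example, something along these lines (sketched from the filter above) trips up the conf parser, while the single-quoted version is fine:

# breaks: double quotes nested inside the double-quoted code string
code => "event.set("portsIpArray", fields)"
# works: single quotes inside the double-quoted string
code => "event.set('portsIpArray', fields)"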
In my application I've got two browser windows (main_win and info_win). Now I want to access a variable which is held in global.js.
In main_win_controller.js I do global = require('../global.js'); and then write to the variable once, initially, like this: global.myVar = "from main_win";
In info_win_controller.js I do global = require('../global.js'); and then write to the variable once, initially, like this: global.myVar = "from info_win";
After doing so I read the variables continuously via an interval, for testing, in both files. The two outputs are different, so it seems like I've got two independent instances of my global.js.
How can I get rid of this? I want to realise a central part of my application where I can define special reusable functions (in my example I want to use the GPIO pins of my Raspberry Pi via a special npm module).
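For reference, this is roughly the setup described; the file contents below are assumed, only the property name and file names come from the question:

// global.js
module.exports = {
  myVar: null
};

// main_win_controller.js
var global = require('../global.js');
global.myVar = "from main_win";

// info_win_controller.js
var global = require('../global.js');
global.myVar = "from info_win";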
I have a requirement such that whenever I run my Kettle job, the database connection parameters must be taken dynamically from an Excel source on each run.
Say I have an Excel file with the column names: HostName, Username, Database, Password.
I want to pass these connection parameters to my Table Input step dynamically whenever the job runs.
This is what I was trying to do.
You can achieve this by
reading the DB connection parameters from a source (e.g. Excel or in my example a CSV file)
storing the parameters in variables
using the variables in your connection setting.
Proceed as follows
Create another transformation for setting the variables (you cannot do this in the same transformation that uses it):
In the Set Variables element configure the variables:
In the element reading/writing your data create a new connection and set the connection parameters using ${variable_name} (see the sketch after these steps). Note that you have to blindly write ${password} into the appropriate field. Also note that this may be a security issue because the value may show up as plain text in log files!
In your job call the variable transformation first and then the functional part:
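As a rough sketch, the connection dialog fields might then look something like this (the variable names are just examples, not fixed names):

Host Name:     ${DB_HOST}
Database Name: ${DB_NAME}
Port Number:   ${DB_PORT}
User Name:     ${DB_USER}
Password:      ${DB_PASSWORD}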
All you need is the XLS Input and the Set Variables step. Define your variables as being valid in the Root job, and you can then use them when defining the connection in subsequent jobs, as long as those jobs are called by the same root job.
The "Copy rows to result" and "Get rows from result" are used to send information (rows of data) from one transformation to the next transformation or job in the same parent job. They're not used to send data between steps, that's what the hops are for.
I am trying to write a Matlab script to analyze two specific sets of data, create histograms for them, and write them to a single file where you can see both histograms overlapped on one plot.
I created a functioning script that created the histogram for 1 set of data that basically went like this:
h1 = figure;
hist(data,nbins);
print(h1,'-dpng','hist.png')
Then I tried to simply add a second line of:
h2=figure;
and changed the print function to include h2. That obviously didn't work. I found that I couldn't have both an h1 and an h2 with the print function.
After searching the internet and looking for ways to get around this I decided to try to use saveas instead. I got to the following:
h=findobj(gca,'Type','patch');
hist(data1,nbins);
hold on;
hist(data2,nbins);
set(h(1),'FaceColor','r','EdgeColor','k');
set(h(2),'FaceColor','b','EdgeColor','k');
saveas(h,'-dpng','hist.png')
But this won't quite work either. I haven't found anything on the Mathworks website that helps me with this problem, and I haven't found anything on any other site either. I am using a Linux computer connecting to a different server via SSH so the only way that I can view plots that I make is by saving them to a file and then opening them. Please let me know if you have any suggestions to accomplish my task as outlined in my first paragraph. Thank you.
One way is to use different axes for the different histograms. You can use SUBPLOT for this:
subplot(2,1,1)
hist(data1,nbins);
subplot(2,1,2)
hist(data2,nbins);
Another way is to find common bins (x) and capture the hist outputs in vectors, then use the BAR function for the plot.
nbins = 20;
x = linspace(min([data1(:);data2(:)]),max([data1(:);data2(:)]),nbins);
h1 = hist(data1, x);
h2 = hist(data2, x);
hb = bar(x,[h1(:),h2(:)],'hist');
% change colors and set x limits
set(hb(1),'FaceColor','r','EdgeColor','k');
set(hb(2),'FaceColor','b','EdgeColor','k');
gap = x(2)-x(1);
xlim([x(1)-gap x(end)+gap])
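Since you can only view plots by saving them to a file, you can then write the combined bar plot out with a print call, just as in your original script (adjust the filename/format as needed):

print(gcf,'-dpng','hist.png')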