Can the test_ids gem allow for bin grouping based on test metadata? - origen-sdk

Is there a way for the test_ids gem to group tests such that the same softbin gets assigned? For example, here are 3 tests passed in the flow file:
func :test1, speed: 1000, vdd: :vmin
func :test2, speed: 1200, vdd: :vmin
func :test3, speed: 1000, vdd: :vmax
Would like to be able to tell test_ids gem to group by :vdd and get the following softbins assigned (assume the range is 200-299):
200, func_vmin
201, func_vmax
If I passed speed as the grouping arg I would get the following softbins:
200, func_1000
201, func_1200
The examples shown above only pass one piece of metadata but the ask would be that any combination of test metadata could be used to create the softbin group name.
thx

With no special options, the test IDs plugin will use the test name as a unique ID. In that case, tests with different names will be assigned different test numbers, bins and softbins, while tests with the same name will use the same numbers.
Sometimes, like in this case, it is desired for differently named tests to share all or some of their number allocations, and there are a few options available to control this.
First, you can supply a test_id: option. This explicitly defines the ID that should be used for the test when assigning numbers; now your tests will all have the same test numbers, bins and softbins:
func :test1, speed: 1000, vdd: :vmin, test_id: :t1
func :test2, speed: 1200, vdd: :vmin, test_id: :t1
func :test3, speed: 1000, vdd: :vmax, test_id: :t1
This can be further fine-tuned by supplying number:, bin: and/or softbin: options with symbol values, and these will be used as the test ID when assigning that specific number type.
For example, this will assign the softbin as you want based on vdd:
func :test1, speed: 1000, vdd: :vmin, softbin: :func_vmin
func :test2, speed: 1200, vdd: :vmin, softbin: :func_vmin
func :test3, speed: 1000, vdd: :vmax, softbin: :func_vmax
This is covered in the docs here - https://origen-sdk.org/test_ids/#Multiple_Instances_of_the_Same_Test
Use your test program interface to programmatically assign the IDs based on your business rules, for example in your func method:
def func(name, options)
  options[:softbin] = "func_#{options[:vdd] || :nom}".to_sym
  # ...
end
It is recommended to have all of your test handlers like this func method hand over to a single method to add the test to the flow - https://origen-sdk.org/origen//guides/program/interface/#Detecting_Changes_in_the_Execution_Context
That would then give you a single place to implement more global rules, like choosing between vdd and speed as the attribute to group by.
For example, if you wanted to group by the test type and then speed, you could do something like:
def func(name, options)
  options[:softbin] = "func"
  # ...
  add_to_flow(my_test, options)
end
def add_to_flow(test, options)
  if group_by_speed?
    options[:softbin] = "#{options[:softbin]}_#{options[:speed] || 1000}".to_sym
  else
    options[:softbin] = "#{options[:softbin]}_#{options[:vdd] || :nom}".to_sym
  end
  # ...
end

Related

How to make Oban run more frequently than once in a minute?

I'm using Oban for the ongoing tasks.
# .....
{Oban.Plugins.Cron,
 crontab: [
   {"* * * * *", MyApp.Workers.W1},
   {"* * * * *", MyApp.Workers.W2}
 ]},
I now need to run W1 and W2 more frequently than every minute - around once every 10...30 seconds. Since cron doesn't support a higher frequency than once per minute, how would I get around this limitation? Preferably without hacks, unless absolutely necessary.
I am not considering switching from Oban to another library.
I don't believe Oban.Plugins.Cron supports granularity finer than one minute, by glancing at the current code.
One way you could do this is by having a process (like a GenServer) in your application that uses Process.send_after/3 or :timer.send_interval/2 to periodically queue the Oban jobs you want. This is essentially what Oban.Plugins.Cron is doing. You'll probably want to pay attention to making sure the jobs are unique (documentation).
Some very simplified code:
defmodule MyApp.Scheduler do
  use GenServer

  def start_link(_) do
    GenServer.start_link(__MODULE__, :no_args)
  end

  @impl true
  def init(:no_args) do
    :timer.send_interval(:timer.seconds(30), {:schedule_job, MyApp.Workers.W1, 30})
    :timer.send_interval(:timer.seconds(20), {:schedule_job, MyApp.Workers.W2, 20})
    {:ok, :no_state}
  end

  @impl true
  def handle_info({:schedule_job, worker, unique_seconds}, state) do
    %{my: :params}
    |> Oban.Job.new(worker: worker, unique: [period: unique_seconds])
    |> Oban.insert!()

    {:noreply, state}
  end
end

How to fetch only parts of json file in python3 requests module

So, I am writing a program in Python to fetch data from the Google Classroom API using the requests module. I am getting the full JSON response from the classroom as follows:
{'announcements': [{'courseId': '#############', 'id': '###########', 'text': 'This is a test','state': 'PUBLISHED', 'alternateLink': 'https://classroom.google.com/c/##########/p/###########', 'creationTime': '2021-04-11T10:25:54.135Z', 'updateTime': '2021-04-11T10:25:53.029Z', 'creatorUserId': '###############'}, {'courseId': '############', 'id': '#############', 'text': 'Hello everyone', 'state': 'PUBLISHED', 'alternateLink': 'https://classroom.google.com/c/#############/p/##################', 'creationTime': '2021-04-11T10:24:30.952Z', 'updateTime': '2021-04-11T10:24:48.880Z', 'creatorUserId': '##############'}, {'courseId': '##################', 'id': '############', 'text': 'Hello everyone', 'state': 'PUBLISHED', 'alternateLink': 'https://classroom.google.com/c/##############/p/################', 'creationTime': '2021-04-11T10:23:42.977Z', 'updateTime': '2021-04-11T10:23:42.920Z', 'creatorUserId': '##############'}]}
I was unable to convert this into a pretty format, so I'm pasting it as I got it from the HTTP request. What I actually want is to request only the first few announcements (say 1, 2 or 3, depending on the requirement), while what I'm getting is every announcement made since the classroom was created. Fetching all the announcements might make the program slower, so I would prefer to get only the required ones. Is there a way to do this by passing some arguments? There are a few direct functions provided by Google Classroom, but I came across those a little later and have already written everything using the requests module; switching would require changing a lot of things, which I would like to avoid. However, if it's unavoidable, I would go that route as well.
Answer:
Use the pageSize field to limit the number of responses you want in the announcements: list request, with an orderBy parameter of updateTime asc.
More Information:
As per the documentation:
orderBy: string
Optional sort ordering for results. A comma-separated list of fields with an optional sort direction keyword. Supported field is updateTime. Supported direction keywords are asc and desc. If not specified, updateTime desc is the default behavior. Examples: updateTime asc, updateTime
and:
pageSize: integer
Maximum number of items to return. Zero or unspecified indicates that the server may assign a maximum.
So, let's say you want the first 3 announcements for a course, you would use a pageSize of 3, and an orderBy of updateTime asc:
# Copyright 2021 Google LLC.
# SPDX-License-Identifier: Apache-2.0
service = build('classroom', 'v1', credentials=creds)
asc = "updateTime asc"
pageSize = 3
course_id = "[COURSE_ID]"  # ID of the course to fetch announcements from

# Call the Classroom API
results = service.courses().announcements().list(
    courseId=course_id, pageSize=pageSize, orderBy=asc).execute()
or an HTTP request example:
GET https://classroom.googleapis.com/v1/courses/[COURSE_ID]/announcements
?orderBy=updateTime%20asc
&pageSize=3
&key=[YOUR_API_KEY] HTTP/1.1
Authorization: Bearer [YOUR_ACCESS_TOKEN]
Accept: application/json
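Since the question uses the requests module directly, the same limited fetch can be sketched without switching to the Google client library. This is a sketch, not tested against the live API; `announcement_request` and `fetch_announcements` are hypothetical helper names, and you still need a valid OAuth access token and course ID:

```python
import requests

API_BASE = "https://classroom.googleapis.com/v1"

def announcement_request(course_id, count):
    # Build the URL and query parameters for an announcements.list call,
    # limited to the first `count` announcements, oldest first.
    url = f"{API_BASE}/courses/{course_id}/announcements"
    params = {"orderBy": "updateTime asc", "pageSize": count}
    return url, params

def fetch_announcements(course_id, access_token, count=3):
    # Fetch only the first `count` announcements for a course.
    url, params = announcement_request(course_id, count)
    headers = {"Authorization": f"Bearer {access_token}"}
    resp = requests.get(url, params=params, headers=headers)
    resp.raise_for_status()  # surface auth/quota errors early
    return resp.json().get("announcements", [])
```

The server still treats pageSize as a maximum per page, so a response never contains more than `count` announcements; simply not following the returned nextPageToken keeps you at the first page.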
References:
Method: announcements.list | Classroom API | Google Developers

How to force gem that converts all bins to 93k multibins to output 93k native bins?

My need is to get good old-fashioned 93k native bad bins defined in my testflow. My Ruby file compiles, but it looks like the gem is converting all bins to multibins. Is there a way to force this from my Ruby file instead of hacking the gem files? If yes, going ahead with this, I couldn't find how to specify the hardbin and softbin descriptions in Origen. That is something I would like to add in the Ruby code instead of on the ATE.
Also, on a side note, I am trying to force the output file name to something I want. Like in the sample code below, I want the output file to be test.tf. The gem is adding some string and an underscore in front of "test". I don't need that either.
sample code:
Flow.create interface: 'MyTester::Interface', params: :room, unique_test_names: nil,
            flow_name: :test, file_name: :test, insertion: :prb do

  test_info1 = {
    "key_1" => [{ testname: "t1", sbin: 100, patternname: "p1" }],
    "key_2" => [{ testname: "t2", sbin: 200, patternname: "t3" }]
  }

  testnum = 100000
  test_info1.each do |key, val|
    puts key
    val.each do |info|
      tname, sb, pname = info.values_at(:testname, :sbin, :patternname)
      puts "#{tname} : #{sb} : #{pname}"
      test_suites.add("#{tname}", pattern: "#{pname}", tim_spec_set: 1, timset: 1,
                      lev_equ_set: 1, lev_spec_set: 10, levset: 1,
                      test_method: test_methods.ac_tml.ac_test.functional_test)
      testnum = testnum + 100
      test :"#{tname}", bin: 10, softbin: "#{sb}", tnum: testnum
    end
  end
end

Bracket Order With Multiple Target Exits Using ib_insync

I am trying to create a bracket order that will exit half the position at target 1 and the second half at target 2. I am able to generate the orders; however, the quantity for both of the targets is the entire position amount rather than half. For instance, when placing an order to buy 200 SPY, the profit-taking orders should have a quantity of 100 each, not 200. However, when I run my code this is the order displayed in TWS:
I am running:
Python 3.8.2 (default, May 6 2020, 09:02:42) [MSC v.1916 64 bit (AMD64)]
ib_insync 0.9.61
TWS Build 979.4t
Java Version: 1.8.0_152, OS: Windows 10 (amd64, 10.0)
Any ideas on how to solve this issue?
Please run this code to reproduce the situation:
from ib_insync import *
import time
from typing import NamedTuple


class BracketOrderTwoTargets(NamedTuple):
    parent: Order
    takeProfit: Order
    takeProfit2: Order
    stopLoss: Order


# modified from this post: https://groups.io/g/insync/message/3135
def bracketStopLimitOrderTwoTargets(
        action: str, quantity: float, stopPrice: float,
        limitPrice: float, takeProfitPrice1: float, takeProfitPrice2: float,
        stopLossPrice: float, **kwargs) -> BracketOrderTwoTargets:
    """
    Create a stop-limit order that is bracketed by 2 take-profit orders and
    a stop-loss order. Submit the bracket like:

    Args:
        action: 'BUY' or 'SELL'.
        quantity: Size of order.
        stopPrice: Stop price for the stop-limit entry order.
        limitPrice: Limit price of entry order.
        takeProfitPrice1: 1st limit price of profit order.
        takeProfitPrice2: 2nd limit price of profit order.
        stopLossPrice: Stop price of loss order.

    StopLimitOrder(action, totalQuantity, lmtPrice, stopPrice, **kwargs)
    """
    assert action in ('BUY', 'SELL')
    reverseAction = 'BUY' if action == 'SELL' else 'SELL'
    parent = StopLimitOrder(
        action, quantity, limitPrice, stopPrice,
        orderId=ib.client.getReqId(),
        transmit=False,
        outsideRth=True,
        **kwargs)
    takeProfit1 = LimitOrder(
        action=reverseAction, totalQuantity=quantity / 2, lmtPrice=takeProfitPrice1,
        orderId=ib.client.getReqId(),
        transmit=False,
        parentId=parent.orderId,
        outsideRth=True,
        **kwargs)
    takeProfit2 = LimitOrder(
        action=reverseAction, totalQuantity=quantity / 2, lmtPrice=takeProfitPrice2,
        orderId=ib.client.getReqId(),
        transmit=False,
        parentId=parent.orderId,
        outsideRth=True,
        **kwargs)
    stopLoss = StopOrder(
        reverseAction, quantity, stopLossPrice,
        orderId=ib.client.getReqId(),
        transmit=True,
        parentId=parent.orderId,
        outsideRth=True,
        **kwargs)
    return BracketOrderTwoTargets(parent, takeProfit1, takeProfit2, stopLoss)


ib = IB()
client_id = int(time.time())  # seconds since epoch; no need for milliseconds
ib.connect('127.0.0.1', 7497, clientId=client_id, timeout=10)
contract = Stock('SPY', exchange='SMART', currency='USD')
[ticker] = ib.reqTickers(contract)
contract_price = ticker.marketPrice()
high_bracket = bracketStopLimitOrderTwoTargets(
    action='BUY', quantity=200, stopPrice=contract_price + 18,
    limitPrice=contract_price + 20,
    takeProfitPrice1=contract_price + 30,
    takeProfitPrice2=contract_price + 45,
    stopLossPrice=contract_price - 5)
for order in high_bracket:
    order_res = ib.placeOrder(contract=contract, order=order)
    print(order_res)
A different way to do it is using the scale fields attached to the profit taker. I don't know what the API looks like, but check it out in the TWS UI first. In the UI you will see: Initial component, Scale component and Scale price. You can then have an initial component of 100, and if you want the subsequent lots to be different, specify that size in the scale component.
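Whichever approach you take, one detail worth checking in the original code is the child order quantity: quantity / 2 in Python 3 yields a float (200 / 2 == 100.0), and an odd total would produce a fractional share count that TWS will reject for stocks. A tiny plain-Python helper (hypothetical, not part of ib_insync) to split a position into two whole-number lots:

```python
def split_lots(quantity):
    # Split a position into two whole-number lots; the first lot
    # absorbs the extra share when the quantity is odd.
    first = quantity // 2 + quantity % 2
    second = quantity // 2
    return first, second
```

For example, `split_lots(200)` returns `(100, 100)` and `split_lots(201)` returns `(101, 100)`; the two take-profit orders can then use these sizes instead of `quantity / 2`.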

Mnesia pagination with fragmented table

I have a mnesia table configured as follow:
-record(space, {id, t, s, a, l}).
mnesia:create_table(space, [ {disc_only_copies, nodes()},
{frag_properties, [ {n_fragments, 400}, {n_disc_copies, 1}]},
{attributes, record_info(fields, space)}]),
I have at least 4 million records for test purposes on this table.
I have implemented something like this Pagination search in Erlang Mnesia
fetch_paged() ->
    MatchSpec = {'_', [], ['$_']},
    {Records, Continuation} =
        mnesia:activity(async_dirty, fun mnesia:select/4,
                        [space, [MatchSpec], 10000, read], mnesia_frag).

next_page(Cont) ->
    mnesia:activity(async_dirty, fun() -> mnesia:select(Cont) end, mnesia_frag).
When I execute the pagination methods, the batches contain between 3,000 and 8,000 records, but never 10,000.
What do I have to do to get batches of a consistent size?
The problem is that you expect mnesia:select/4, which is documented as:
select(Tab, MatchSpec, NObjects, Lock) -> transaction abort | {[Object],Cont} | '$end_of_table'
to get you the NObjects limit, being NObjects in your example 10,000.
But the same documentation also says:
For efficiency the NObjects is a recommendation only and the result may contain anything from an empty list to all available results.
and that's the reason you are not getting consistent batches of 10,000 records, because NObjects is not a limit but a recommendation batch size.
If you want exactly 10,000 records per batch, you have no option other than writing your own function that accumulates results across select/4 calls. But select/4 is written this way for optimization purposes, so the code you write will most probably be slower than the original.
BTW, you can find the mnesia source code at https://github.com/erlang/otp/tree/master/lib/mnesia/src
