Pyomo: sending options="threads" to cbc solver causes an error - multithreading

It is possible to activate multithreading from the command line:
$cbc -threads=6
Welcome to the CBC MILP Solver
Version: 2.9.9
Build Date: Aug 21 2017
command line - cbc -threads=6 (default strategy 1)
threads was changed from 0 to 6
But when I try to activate this option in my Pyomo code
opt = SolverFactory('cbc')
result = opt.solve(instance, options="threads=4")
I get an error:
File "/usr/local/lib/python3.9/dist-packages/pyomo/opt/base/solvers.py", line 561, in solve
self.options.update(kwds.pop('options', {}))
File "/usr/local/lib/python3.9/dist-packages/pyutilib/misc/misc.py", line 360, in update
if type(d[k]) is dict:
TypeError: string indices must be integers
Any ideas?

The options keyword argument expects a dictionary. If you want to use the same syntax as the command line, you're after options_string. Either of the following works:
opt.solve(instance, options_string="threads=4")
opt.solve(instance, options={"threads": 4})
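If you use the same options on every solve, you can also store them on the solver object once; later solve() calls will pick them up. A minimal sketch (assuming a model named instance, as in the question):
opt = SolverFactory('cbc')
opt.options['threads'] = 4  # persists across solve() calls
result = opt.solve(instance)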

Related

Encountered an internal AutoML error - ClientException: Message: No objects to concatenate

I am trying to implement hierarchical time series forecasting on Azure AutoML pipelines.
I followed this notebook for the implementation:
https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb
The training pipeline worked when I ran it on a compute instance, but when I run the same pipeline on a compute cluster it breaks at the hts-proportion-calculation step.
This is the error I am getting:
system error:
Encountered an internal AutoML error. Error Message/Code: ClientException. Additional Info: ClientException:
      Message: No objects to concatenate
      InnerException: None
      ErrorResponse
{
  "error": {
    "message": "No objects to concatenate"
  }
}
Logs:
Loading arguments for scenario proportions-calculation
adding argument --input-medatadata
adding argument --hts-graph
adding argument --enable-event-logger
Input arguments dict is {'--input-medatadata': '/mnt/azureml/cr/j/85509be625484b6caa3c1d97b7ab2e33/cap/data-capability/wd/INPUT_automl_training_workspaceblobstore/azureml/17ca5ae7-7269-4246-888f-e781071e3f5c/automl_training', '--hts-graph': '/mnt/azureml/cr/j/85509be625484b6caa3c1d97b7ab2e33/cap/data-capability/wd/INPUT_hts_graph_workspaceblobstore/azureml/a2c1b15a-c895-41e8-b6a6-1ca37ebe9e77/hts_graph', '--enable-event-logger': None}
Unknown file to proceed outputs.txt
processing: outputs.txt with type None.
Cleaning up all outstanding Run operations, waiting 300.0 seconds
3 items cleaning up...
Cleanup took 0.001676321029663086 seconds
Traceback (most recent call last):
File "proportions_calculation_wrapper.py", line 47, in <module>
runtime_wrapper.run()
File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_many_models/automl_pipeline_step_wrapper.py", line 63, in run
self._run()
File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_hts/proportions_calculation.py", line 44, in _run
proportions_calculation(self.arguments_dict, self.event_logger, script_run=self.step_run)
File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_hts/proportions_calculation.py", line 173, in proportions_calculation
proportion_files_list, forecasting_parameters.time_column_name, graph.label_column_name
File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_hts/proportions_calculation.py", line 92, in calculate_time_agg_sum_for_all_files
df = pd.concat(pool.map(concat_func, files_batches), ignore_index=True)
File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/pandas/util/_decorators.py", line 311, in wrapper
return func(*args, **kwargs)
File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/pandas/core/reshape/concat.py", line 304, in concat
sort=sort,
File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/pandas/core/reshape/concat.py", line 351, in __init__
raise ValueError("No objects to concatenate")
ValueError: No objects to concatenate
Please let me know how I can resolve this issue.
This error occurred because the iteration timeout was not less than the experiment timeout, but the system error and logs are misleading: they point at pandas' "No objects to concatenate", raised from
df = pd.concat(pool.map(concat_func, files_batches), ignore_index=True)
The error can be avoided by setting the iteration timeout to a value less than the experiment timeout. I had set iteration_timeout_minutes=60, which, with experiment_timeout_hours=1, caused the error. The following configuration works:
automl_settings = AutoMLConfig(
    task="forecasting",
    primary_metric="normalized_root_mean_squared_error",
    experiment_timeout_hours=1,
    label_column_name=label_column_name,
    track_child_runs=False,
    forecasting_parameters=forecasting_parameters,
    pipeline_fetch_max_batch_size=15,
    model_explainability=model_explainability,
    n_cross_validations="auto",  # Feel free to set to a small integer (>=2) if runtime is an issue.
    cv_step_size="auto",
    # The following settings are specific to this sample and should be adjusted according to your own needs.
    iteration_timeout_minutes=10,
    iterations=15,
)
We are able to run the sample successfully using a compute cluster created as shown below.
from azureml.core.compute import ComputeTarget, AmlCompute

# Name your cluster
compute_name = "hts-compute"

if compute_name in ws.compute_targets:
    compute_target = ws.compute_targets[compute_name]
    if compute_target and type(compute_target) is AmlCompute:
        print("Found compute target: " + compute_name)
else:
    print("Creating a new compute target...")
    provisioning_config = AmlCompute.provisioning_configuration(
        vm_size="STANDARD_D16S_V3", max_nodes=20
    )
    # Create the compute target
    compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)

    # Can poll for a minimum number of nodes and for a specific timeout.
    # If no min node count is provided it will use the scale settings for the cluster.
    compute_target.wait_for_completion(
        show_output=True, min_node_count=None, timeout_in_minutes=20
    )

    # For a more detailed view of current cluster status, use the 'status' property.
    print(compute_target.status.serialize())

How to profile a vim plugin written in python

Vim offers the :profile command, which is really handy. But it is limited to Vim script -- when it comes to plugins implemented in Python it isn't that helpful.
Currently I'm trying to understand what is causing a large delay in Denite. Since it doesn't happen in vanilla Vim, but only under some specific conditions that I'm not sure how to reproduce, I couldn't find out which setting/plugin is interfering.
So I turned to profiling, and this is what I got from :profile:
FUNCTION denite#vim#_start()
Defined: ~/.vim/bundle/denite.nvim/autoload/denite/vim.vim line 33
Called 1 time
Total time: 5.343388
Self time: 4.571928
count total (s) self (s)
1 0.000006 python3 << EOF
def _temporary_scope():
    nvim = denite.rplugin.Neovim(vim)
    try:
        buffer_name = nvim.eval('a:context')['buffer_name']
        if nvim.eval('a:context')['buffer_name'] not in denite__uis:
            denite__uis[buffer_name] = denite.ui.default.Default(nvim)
        denite__uis[buffer_name].start(
            denite.rplugin.reform_bytes(nvim.eval('a:sources')),
            denite.rplugin.reform_bytes(nvim.eval('a:context')),
        )
    except Exception as e:
        import traceback
        for line in traceback.format_exc().splitlines():
            denite.util.error(nvim, line)
        denite.util.error(nvim, 'Please execute :messages command.')
_temporary_scope()
if _temporary_scope in dir():
    del _temporary_scope
EOF
1 0.000017 return []
(...)
FUNCTIONS SORTED ON TOTAL TIME
count total (s) self (s) function
1 5.446612 0.010563 denite#helper#call_denite()
1 5.396337 0.000189 denite#start()
1 5.396148 0.000195 <SNR>237_start()
1 5.343388 4.571928 denite#vim#_start()
(...)
I tried to use the Python profiler directly by wrapping the main line:
import cProfile
cProfile.run(_temporary_scope(), '/path/to/log/file')
, but no luck -- just a bunch of errors from cProfile. Perhaps it is because of the way Python is started from Vim, as it is hinted here that it only works on the main thread.
I guess there should be an easier way of doing this.
The Python profiler does work if you enclose the whole code:
cProfile.run("""
(...)
""", '/path/to/log/file')
, but it is not that helpful. Maybe that is all that is possible.
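A slightly more targeted variant (my own sketch, not part of the original answer) is to drive a cProfile.Profile object around the single call, so only that call is measured and the stats can be inspected afterwards with pstats; _temporary_scope is the function from the heredoc above:
import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()
_temporary_scope()  # the function defined in the plugin's heredoc
profiler.disable()

# Write the raw stats, then print the 20 most expensive calls.
profiler.dump_stats('/path/to/log/file')
pstats.Stats('/path/to/log/file').sort_stats('cumulative').print_stats(20)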

Groovy complains about an unexpected token when evaluating a relational operator on decimal values without leading zeroes

I am facing an issue with the execution of the following Groovy script snippet.
GroovyShell sh = new GroovyShell();
sh.evaluate("\"abcd\".length() >= .34");
If I change .34 to 0.34, it works. However, because of some limitation, I won't be able to change the script content. Any help to overcome this will be appreciated.
I am getting the following exception; the entire stack trace is below.
Exception in thread "main" org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
Script1.groovy: 1: unexpected token: >= @ line 1, column 17.
"abcd".length() >= .34d
                ^
1 error
at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
at org.codehaus.groovy.control.ErrorCollector.addFatalError(ErrorCollector.java:150)
at org.codehaus.groovy.control.ErrorCollector.addError(ErrorCollector.java:120)
at org.codehaus.groovy.control.ErrorCollector.addError(ErrorCollector.java:132)
at org.codehaus.groovy.control.SourceUnit.addError(SourceUnit.java:350)
at org.codehaus.groovy.antlr.AntlrParserPlugin.transformCSTIntoAST(AntlrParserPlugin.java:144)
at org.codehaus.groovy.antlr.AntlrParserPlugin.parseCST(AntlrParserPlugin.java:110)
at org.codehaus.groovy.control.SourceUnit.parse(SourceUnit.java:234)
at org.codehaus.groovy.control.CompilationUnit$1.call(CompilationUnit.java:168)
at org.codehaus.groovy.control.CompilationUnit.applyToSourceUnits(CompilationUnit.java:943)
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:605)
at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
at groovy.lang.GroovyShell.evaluate(GroovyShell.java:584)
at groovy.lang.GroovyShell.evaluate(GroovyShell.java:623)
at groovy.lang.GroovyShell.evaluate(GroovyShell.java:594)
at groovytest.Testtest.main(Testtest.java:18)
Your Groovy snippet is incorrect: Groovy does not support the notation without a leading zero for decimal numbers smaller than 1.0. If you try to compile the following expression directly using groovyc:
"abcd".length() >= .34
compilation will fail with error like:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
test.groovy: 2: Unexpected input: '.' @ line 2, column 20.
"abcd".length() >= .34
                   ^
1 error
Java supports this notation; however, Groovy versions from 2.x up to 3.0.0-alpha-3 do not.
Solid solution
Fix the input Groovy code snippet so that it contains only valid, compile-ready code. Any invalid Groovy statement or expression will lead to failures and compilation errors.
Workaround: add leading zeros with the replaceAll() method
The only way to compile such an incorrect snippet is to replace every occurrence of a space followed by .\d+ (a dot and one or more digits) with 0.$1. Consider the following example:
def snippet = "\"abcd\".length() >= .34; \"efgh\".length() >= .22; \"xyz\".length() >= 0.11;"
println snippet.replaceAll(' \\.(\\d+)', ' 0.$1')
It adds a leading 0 to every decimal number where it is missing. Running this example prints the following output to the console:
"abcd".length() >= 0.34; "efgh".length() >= 0.22; "xyz".length() >= 0.11;
If you pass the modified snippet to the GroovyShell.evaluate() method, it will run with no errors.
Of course this is not a rock-solid solution; it is just a way to automatically fix some of the syntax errors introduced in the code snippet. There are corner cases where this workaround may cause side effects (for example, a string literal that happens to contain a space followed by a dot and digits would also be rewritten), so you have to be aware of them.

pywinauto error argument 4: int too long to convert

I use Python 3, pywinauto, and the tested app, all 64-bit.
I get an error when I try to expand a tree:
tree_item = systreeview.GetItem([current_menu_item, u'xxxxxx'])
Everything worked with the 32-bit app.
Log:
File "C:\Python36\lib\site-packages\pywinauto\controls\common_controls.py", line 1523, in get_item
texts = [r.text() for r in roots]
File "C:\Python36\lib\site-packages\pywinauto\controls\common_controls.py", line 1523, in <listcomp>
texts = [r.text() for r in roots]
File "C:\Python36\lib\site-packages\pywinauto\controls\common_controls.py", line 960, in text
return self._readitem()[1]
File "C:\Python36\lib\site-packages\pywinauto\controls\common_controls.py", line 1383, in _readitem
remote_mem)
ctypes.ArgumentError: argument 4: <class 'OverflowError'>: int too long to convert
It was a bug. Fixed now. Thank you everyone.
Fixed another way in pull request #373. pywinauto 0.6.3 is out with the fix.
The pull request just replaced the two remaining win32functions.SendMessage calls with self.send_message.

Search a multiline error log for an error code and then some of its parameters on Linux

What command would give me the output I need for each instance of an error code in a very large log file? The file has records marked by begin and end lines with a character count, such as:
SR 120
1414760452 0 1 Fri Oct 31 13:00:52 2014 2218714 4
GROVEMR2 scn
../SrxParamIF.m 284
New Exam Started
EN 120
The fifth field is the error code, 2218714 in the previous example.
I thought of just grep'ing for the error code and printing the lines following each match with -A, then picking what I needed from that rather than parsing the entire file. That seems easy, but my grep/awk/sed usage isn't at that level.
Only when error 2274021 is encountered, as in the following example, I'd like some output as shown below.
Show me output such as: egrep 'Coil:|Connector:|Channels faulted:|First channel:' ERRORLOG | less
Part of input file of interest:
Mon Nov 24 13:43:37 2014 2274021 1
AWHMRGE3T NSP
SCP:RfHubCanHWO::RfBias 4101
^MException Class: Unknown Severity: Unknown
Function: RF: RF Bias
PSD: VIBRANT Coil: Breast SMI Scan: 1106/14
Coil Fault - Short Circuit
A multicoil bias fault was detected.
.
Connector: Port 1 (P1)
Channels faulted: 0x200
First channel: 10 of 32, counting from 1
Fault value: -2499 mV, Channel: 10->
Desired output:
Coil: Breast SMI
Connector: Port 1 (P1)
Channels faulted: 0x200
First channel: 10 of 32, counting from 1
Thanks in advance for any pointers!
Try the following (with convenient adaptations):
#!/usr/bin/perl
use strict;
$/="\nEN "; # register separated by "\nEN "
my $error=2274021; # the error!
while(<>){ # for all registers
next unless /\b$error\b/; # ignore unless error
for my $line ( split(/\n/,$_)){
print "$line\n" if ($line =~ /Coil:|Connector:|Channels faulted:|First channel:/);
}
print "====\n"
}
Is this what you need?
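If you'd rather stay out of Perl, a rough Python equivalent of the same idea (a sketch; ERRORLOG stands in for your actual log path) splits the file into records on the EN terminator, keeps only records mentioning the error code, and prints the interesting fields:
import re

FIELDS = re.compile(r'Coil:|Connector:|Channels faulted:|First channel:')

with open('ERRORLOG') as f:
    # Records are terminated by lines starting with "EN ".
    for record in f.read().split('\nEN '):
        if not re.search(r'\b2274021\b', record):
            continue  # skip records without the error code
        for line in record.splitlines():
            if FIELDS.search(line):
                print(line)
        print('====')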
