What is the difference between cmd and IDLE when using tqdm? - python-3.x

Recently I wanted to add a simple progress bar to my script, and I used tqdm for that, but what puzzles me is that the output is different in IDLE and in cmd.
For example, this:
from tqdm import tqdm
import time

def test():
    for i in tqdm(range(100)):
        time.sleep(0.1)
gives the expected output in cmd:
30%|███ | 30/100 [00:03<00:07, 9.14it/s]
but in IDLE the output looks like this:
0%| | 0/100 [00:00<?, ?it/s]
1%|1 | 1/100 [00:00<00:10, 9.14it/s]
2%|2 | 2/100 [00:00<00:11, 8.77it/s]
3%|3 | 3/100 [00:00<00:11, 8.52it/s]
4%|4 | 4/100 [00:00<00:11, 8.36it/s]
5%|5 | 5/100 [00:00<00:11, 8.25it/s]
6%|6 | 6/100 [00:00<00:11, 8.17it/s]
7%|7 | 7/100 [00:00<00:11, 8.12it/s]
8%|8 | 8/100 [00:00<00:11, 8.08it/s]
9%|9 | 9/100 [00:01<00:11, 8.06it/s]
10%|# | 10/100 [00:01<00:11, 8.04it/s]
11%|#1 | 11/100 [00:01<00:11, 8.03it/s]
12%|#2 | 12/100 [00:01<00:10, 8.02it/s]
13%|#3 | 13/100 [00:01<00:10, 8.01it/s]
14%|#4 | 14/100 [00:01<00:10, 8.01it/s]
15%|#5 | 15/100 [00:01<00:10, 8.01it/s]
16%|#6 | 16/100 [00:01<00:10, 8.00it/s]
17%|#7 | 17/100 [00:02<00:10, 8.00it/s]
18%|#8 | 18/100 [00:02<00:10, 8.00it/s]
19%|#9 | 19/100 [00:02<00:10, 8.00it/s]
20%|## | 20/100 [00:02<00:09, 8.00it/s]
21%|##1 | 21/100 [00:02<00:09, 8.00it/s]
22%|##2 | 22/100 [00:02<00:09, 8.00it/s]
23%|##3 | 23/100 [00:02<00:09, 8.00it/s]
24%|##4 | 24/100 [00:02<00:09, 8.00it/s]
25%|##5 | 25/100 [00:03<00:09, 8.00it/s]
26%|##6 | 26/100 [00:03<00:09, 8.00it/s]
27%|##7 | 27/100 [00:03<00:09, 8.09it/s]
28%|##8 | 28/100 [00:03<00:09, 7.77it/s]
29%|##9 | 29/100 [00:03<00:09, 7.84it/s]
30%|### | 30/100 [00:03<00:08, 7.89it/s]
31%|###1 | 31/100 [00:03<00:08, 7.92it/s]
32%|###2 | 32/100 [00:03<00:08, 7.94it/s]
33%|###3 | 33/100 [00:04<00:08, 7.96it/s]
34%|###4 | 34/100 [00:04<00:08, 7.97it/s]
35%|###5 | 35/100 [00:04<00:08, 7.98it/s]
36%|###6 | 36/100 [00:04<00:08, 7.99it/s]
37%|###7 | 37/100 [00:04<00:07, 7.99it/s]
38%|###8 | 38/100 [00:04<00:07, 7.99it/s]
39%|###9 | 39/100 [00:04<00:07, 8.00it/s]
40%|#### | 40/100 [00:04<00:07, 8.00it/s]
41%|####1 | 41/100 [00:05<00:07, 8.00it/s]
I also get the same result if I write my own progress bar, like this:
import sys

def progress_bar_cmd(count, total, suffix="", *, bar_len=60, file=sys.stdout):
    filled_len = round(bar_len * count / total)
    percents = round(100 * count / total, 2)
    bar = "#" * filled_len + "-" * (bar_len - filled_len)
    file.write("[%s] %s%s ...%s\r" % (bar, percents, "%", suffix))
    file.flush()

for i in range(101):
    time.sleep(1)
    progress_bar_cmd(i, 100, "range 100")
Why is that? And is there a way to fix it?

Limiting ourselves to ASCII characters, the program output of your second code is the same in both cases -- a stream of ASCII bytes representing ASCII chars. The language definition does not and cannot specify what an output device or display program will do with the bytes, in particular with control characters such as '\r'.
The Windows Command Prompt console at least sometimes interprets '\r' as 'return the cursor to the beginning of the current line without erasing anything'.
In a Win10 console:
>>> import sys; out=sys.stdout
>>> out.write('abc\rdef')
def7
However, when I run your second code, with the missing time import added, I do not see the overwrite behavior, but see the same continued line output as with IDLE.
C:\Users\Terry>python f:/python/mypy/tem.py
[------------------------------------------------------------] 0.0% ...range 100[#-----------------------------------------------------------] ...
On the third hand, if I shorten the write to file.write("[%s]\r" % bar), then I do see one output overwritten over and over -- presumably because the original, longer line exceeds the console width and wraps, so the '\r' only returns the cursor to the start of the wrapped portion.
The tk Text widget used by IDLE only interprets \t and \n, but not other control characters. To some of us, this seems appropriate for a development environment, where erasing characters is less appropriate than in a production environment.
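If the goal is simply a progress display that stays readable both in a console and under IDLE, one workaround (a minimal sketch, not anything tqdm does itself) is to rely on '\r' only when stdout is a real terminal; under IDLE, sys.stdout.isatty() returns False, so the sketch falls back to printing an occasional full line instead:
import sys
import time

def progress(count, total, bar_len=60, file=sys.stdout):
    filled = round(bar_len * count / total)
    bar = "#" * filled + "-" * (bar_len - filled)
    if file.isatty():
        # Real console: '\r' rewinds the cursor, so redraw the same line.
        file.write("[%s] %d/%d\r" % (bar, count, total))
    elif count % 10 == 0 or count == total:
        # IDLE or a pipe: '\r' is not interpreted, so emit a full line
        # only every tenth step to keep the output short.
        file.write("[%s] %d/%d\n" % (bar, count, total))
    file.flush()

for i in range(101):
    time.sleep(0.05)
    progress(i, 100)
(tqdm also accepts a file= argument, but redirecting its output does not change how IDLE treats '\r'.)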

Related

Bitbucket Pipelines export into variable using jq and xq causes error

When I run a pipeline in Bitbucket, I want to export into a variable using
export APEX_CLASSES=$(xq . < package/package.xml | jq '.Package.types | [.] | flatten | map(select(.name=="ApexClass")) | .[] | .members | [.] | flatten | map(select(. | index("*") | not)) | unique | join(",")' -r)
but I get this error in the pipeline:
parse error: Invalid numeric literal at line 1, column 5
I tried to identify the error, but I always get the same one.
When I add an escape \ before each " I get this error:
jq: error: syntax error, unexpected INVALID_CHARACTER (Unix shell quoting issues?) at <top-level>, line 1:
.Package.types | [.] | flatten | map(select(.name==\"ApexClass\")) | .[] | .members | [.] | flatten | map(select(. | index(\"*\") | not)) | unique | join(\",\")
jq: 1 compile error
This is package.xml:
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>AccountHelper</members>
        <members>BoatHelper</members>
        <members>CaseHelper</members>
        <name>ApexClass</name>
    </types>
    <version>57.0</version>
</Package>
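For reference, the jq filter above is just trying to collect the <members> of the <types> block whose <name> is ApexClass, drop wildcard entries, and join the unique names with commas. Here is a rough Python sketch of that same extraction using only the standard library (illustrative only, not a fix for the pipeline's parse error; the file path is taken from the command above, and the namespace handling is my assumption):
import xml.etree.ElementTree as ET

NS = {"sf": "http://soap.sforce.com/2006/04/metadata"}

def apex_classes(path="package/package.xml"):
    root = ET.parse(path).getroot()
    names = set()
    for types in root.findall("sf:types", NS):
        # Only the <types> block whose <name> is ApexClass is of interest.
        if types.findtext("sf:name", namespaces=NS) != "ApexClass":
            continue
        for member in types.findall("sf:members", NS):
            if "*" not in member.text:   # skip wildcard members
                names.add(member.text)
    return ",".join(sorted(names))

print(apex_classes())   # AccountHelper,BoatHelper,CaseHelper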

Unable to connect to the PYMQI Client facing FAILED: MQRC_ENVIRONMENT_ERROR

I am getting the below error while connecting to IBM MQ using the pymqi library. It is a clustered MQ channel.
Traceback (most recent call last):
  File "postToQueue.py", line 432, in <module>
    qmgr = pymqi.connect(queue_manager, channel, conn_info)
  File "C:\Python\lib\site-packages\pymqi\__init__.py", line 2608, in connect
    qmgr.connect_tcp_client(queue_manager or '', CD(), channel, conn_info, user, password)
  File "C:\Python\lib\site-packages\pymqi\__init__.py", line 1441, in connect_tcp_client
    self.connect_with_options(name, cd, user=user, password=password)
  File "C:\Python\lib\site-packages\pymqi\__init__.py", line 1423, in connect_with_options
    raise MQMIError(rv[1], rv[2])
pymqi.MQMIError: MQI Error. Comp: 2, Reason 2012: FAILED: MQRC_ENVIRONMENT_ERROR
Please see my code below.
queue_manager = 'quename here'
channel = 'channel name here'
host ='host-name here'
port = '2333'
queue_name = 'queue name here'
message = 'my message here'
conn_info = '%s(%s)' % (host, port)
print(conn_info)
qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, queue_name)
queue.put(message)
print("message sent")
queue.close()
qmgr.disconnect()
I am getting the error at the line below:
qmgr = pymqi.connect(queue_manager, channel, conn_info)
I added the IBM MQ client to the Scripts folder as well. I am using Windows 10, Python 3.8.1, and the IBM MQ 9.1 Windows client installation image. Below is the FDC header:
+-----------------------------------------------------------------------------+
| |
| WebSphere MQ First Failure Symptom Report |
| ========================================= |
| |
| Date/Time :- Tue January 28 2020 16:27:51 Eastern Standard Time |
| UTC Time :- 1580246871.853000 |
| UTC Time Offset :- -300 (Eastern Standard Time) |
| Host Name :- CA-LDLD0SQ2 |
| Operating System :- Windows 10 Enterprise x64 Edition, Build 17763 |
| PIDS :- 5724H7251 |
| LVLS :- 8.0.0.11 |
| Product Long Name :- IBM MQ for Windows (x64 platform) |
| Vendor :- IBM |
| O/S Registered :- 0 |
| Data Path :- C:\Python\Scripts\IBM |
| Installation Path :- C:\Python |
| Installation Name :- MQNI08000011 (126) |
| License Type :- Unknown |
| Probe Id :- XC207013 |
| Application Name :- MQM |
| Component :- xxxInitialize |
| SCCS Info :- F:\build\slot1\p800_P\src\lib\cs\amqxeida.c, |
| Line Number :- 5085 |
| Build Date :- Dec 12 2018 |
| Build Level :- p800-011-181212.1 |
| Build Type :- IKAP - (Production) |
| UserID :- alekhya.machiraju |
| Process Name :- C:\Python\python.exe |
| Arguments :- |
| Addressing mode :- 32-bit |
| Process :- 00010908 |
| Thread :- 00000001 |
| Session :- 00000001 |
| UserApp :- TRUE |
| Last HQC :- 0.0.0-0 |
| Last HSHMEMB :- 0.0.0-0 |
| Last ObjectName :- |
| Major Errorcode :- xecF_E_UNEXPECTED_SYSTEM_RC |
| Minor Errorcode :- OK |
| Probe Type :- INCORROUT |
| Probe Severity :- 2 |
| Probe Description :- AMQ6090: MQM could not display the text for error |
| 536895781. |
| FDCSequenceNumber :- 0 |
| Comment1 :- WinNT error 1082155270 from Open ccsid.tbl. |
| |
+-----------------------------------------------------------------------------+
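One detail in the FDC above is "Addressing mode :- 32-bit" for the python.exe process on an x64 OS, so it may be worth confirming which interpreter (and therefore which set of MQ client DLLs) is actually being loaded. The snippet below is only a diagnostic sketch, not a known fix for MQRC_ENVIRONMENT_ERROR:
# Diagnostic sketch only: confirm the bitness of the running interpreter,
# since the FDC above reports "Addressing mode :- 32-bit" on an x64 OS.
import platform
import struct
import sys

print(sys.executable)                  # which python.exe is actually running
print(platform.architecture()[0])      # '32bit' or '64bit'
print(struct.calcsize("P") * 8)        # pointer width in bits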

How to generate single output json object from multiple log lines in logstash filter?

I am new to Logstash and the grok filter. I want to parse logs like these:
2018-01-11 17:17:16,071 | DEBUG | [Thread-2] | com.example.monitor.MonitorHelper:cpuMonitoring(307) | CommittedVirtualMemorySize :: 401186816
2018-01-11 17:17:16,071 | DEBUG | [Thread-2] | com.example.monitor.MonitorHelper:cpuMonitoring(307) | FreePhysicalMemorySize :: 1751130112
2018-01-11 17:17:16,072 | DEBUG | [Thread-2] | com.example.monitor.MonitorHelper:cpuMonitoring(307) | FreeSwapSpaceSize :: 4294967295
2018-01-11 17:17:16,694 | DEBUG | [Thread-2] | com.example.monitor.MonitorHelper:cpuMonitoring(307) | ProcessCpuLoad :: -1.0
2018-01-11 17:17:16,694 | DEBUG | [Thread-2] | com.example.monitor.MonitorHelper:cpuMonitoring(307) | ProcessCpuTime :: 47471104300
2018-01-11 17:17:16,698 | DEBUG | [Thread-2] | com.example.monitor.MonitorHelper:cpuMonitoring(307) | SystemCpuLoad :: 1.0
2018-01-11 17:17:16,698 | DEBUG | [Thread-2] | com.example.monitor.MonitorHelper:cpuMonitoring(307) | TotalPhysicalMemorySize :: 4285849600
2018-01-11 17:17:16,698 | DEBUG | [Thread-2] | com.example.monitor.MonitorHelper:cpuMonitoring(307) | TotalSwapSpaceSize :: 4294967295
into a JSON object like this:
{
    "timestamp": "2018-01-11 17:17:16,071",
    "log_level": "DEBUG",
    "thread_name": "Thread-2",
    "class": "com.example.monitor.MonitorHelper",
    "method": "cpuMonitoring",
    "line_number": "307",
    "CommittedVirtualMemorySize": "401186816",
    "FreePhysicalMemorySize": "1751130112",
    "FreeSwapSpaceSize": "4294967295",
    "ProcessCpuLoad": "-1.0",
    "ProcessCpuTime": "47471104300",
    "SystemCpuLoad": "1.0",
    "TotalPhysicalMemorySize": "4285849600",
    "TotalSwapSpaceSize": "4294967295"
}
As of now my grok pattern is:
%{TIMESTAMP_ISO8601:timestamp} \| %{LOGLEVEL:log_level} \| \[(?<thread_name>\b[\w-]+\b)\] \| %{JAVAFILE:class}:%{JAVAMETHOD:method}\(%{NUMBER:line_number}\) \| %{GREEDYDATA:log_message}
which produces a separate JSON object for each input log line. The output looks like this:
{
    "timestamp": "2018-01-11 17:17:16,071",
    "log_level": "DEBUG",
    "thread_name": "Thread-2",
    "class": "com.example.monitor.MonitorHelper",
    "method": "cpuMonitoring",
    "line_number": "307",
    "log_message": "CommittedVirtualMemorySize :: 401186816 "
}
Can you please help me with what I need to look at in order to achieve this?
The first recommendation is to change the original log output into a single line.
If you can't, and you're using Filebeat to ship the file, use Filebeat's multiline config to merge the lines before sending them to Logstash.
If you're not using Filebeat, you can try the multiline codec in Logstash.
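For illustration only, here is roughly the merge being asked for, sketched in plain Python rather than Logstash configuration: parse each line, then fold the "Key :: value" tails that share a thread and method into one object that keeps the metadata of the first line seen. The grouping key and the input file name are assumptions of the sketch.
import json
import re

LINE = re.compile(
    r"(?P<timestamp>\S+ \S+) \| (?P<log_level>\w+) \| \[(?P<thread_name>[\w-]+)\] \| "
    r"(?P<class>[\w.]+):(?P<method>\w+)\((?P<line_number>\d+)\) \| "
    r"(?P<key>\w+) :: (?P<value>\S+)"
)

def merge(lines):
    merged = {}
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue
        d = m.groupdict()
        key = (d["thread_name"], d["method"])   # assumed grouping key
        event = merged.setdefault(key, {
            "timestamp": d["timestamp"],
            "log_level": d["log_level"],
            "thread_name": d["thread_name"],
            "class": d["class"],
            "method": d["method"],
            "line_number": d["line_number"],
        })
        event[d["key"]] = d["value"]
    return list(merged.values())

with open("monitor.log") as f:   # hypothetical log file name
    print(json.dumps(merge(f), indent=2))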

Strange bug in Julia or Screen

On Debian 8 Linux, I use vim together with screen to send lines from vim to a Julia session. Today, I tried https://github.com/Keno/Cxx.jl. I followed the instructions there (i.e. I compiled the latest version, 0.5.0-dev+3609, of Julia). The following bug appeared when I tried example 1; I could pinpoint it to the following simple steps:
Create the following files (don't overwrite your own files):
printf "using Cxx\ncxx\"\"\"#include<iostream> \"\"\"\ncxx\"\"\"void testfunction(){std::cout<<\"worked\"<< std::endl;}\"\"\"\njulia_function() = #cxx testfunction()\n" > start
and printf "julia_function()\n" > freeze
Open terminal 1 (I use gnome-terminal) and write screen -S session and then start Julia with julia. It should look like
               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.5.0-dev+3609 (2016-04-18 07:07 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit a136a6e (0 days old master)
|__/                   |  x86_64-linux-gnu
julia>
Open terminal 2 and do
screen -S session -p 0 -X eval "readreg p start" "paste p"
Terminal 1 should now look like this:
               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.5.0-dev+3609 (2016-04-18 07:07 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit a136a6e (0 days old master)
|__/                   |  x86_64-linux-gnu
julia> using Cxx
julia> cxx"""#include<iostream> """
true
julia> cxx"""void testfunction(){std::cout<<"worked"<< std::endl;}"""
true
julia> julia_function() = #cxx testfunction()
julia_function (generic function with 1 method)
julia>
Now the strange bug appears: writing "ju", using TAB to complete to "julia_function" and adding "()" by hand in terminal 1 leads to
julia> julia_function()
worked
julia>
But if I do steps 1, 2, 3 and then run screen -S session -p 0 -X eval "readreg p freeze" "paste p" in terminal 2, or write julia_function() in terminal 1 without using TAB to complete, then I get a freeze in terminal 1:
julia> julia_function()
If I do steps 1, 2, 3 and use TAB completion as described above, then afterwards (i.e. after the first call, which compiles the function) both screen -S session -p 0 -X eval "readreg p freeze" "paste p" in terminal 2 and julia_function() without using TAB in terminal 1 work as expected (they print "worked") and don't cause a freeze.
If I do not use screen at all, it works (with and without TAB). Can you please tell me what is going on here?

How to include modules for code coverage for unit testing?

My assumption is that any module tested using Intern will automatically be covered by Istanbul's code coverage. For reasons unknown to me, my module is not being included.
I am:
running Intern 1.6.2 (installed with npm locally)
testing NodeJS code
using callbacks, not promises
using CommonJS modules, not AMD modules
Directory Structure (only showing relevant files):
plister
|
|--libraries
|  |--file-type-support.js
|
|--tests
|  |--intern.js
|  |--unit
|  |  |--file-type-support.js
|
|--node_modules
|  |--intern
plister/tests/intern.js
define({
    useLoader: {
        'host-node': 'dojo/dojo'
    },
    loader: {
        packages: [
            {name: 'libraries', location: 'libraries'}
        ]
    },
    reporters: ['console'],
    suites: ['tests/unit/file-type-support'],
    functionalSuites: [],
    excludeInstrumentation: /^(tests|node_modules)\//
});
plister/tests/unit/file-type-support.js
define([
    'intern!bdd',
    'intern/chai!expect',
    'intern/dojo/node!fs',
    'intern/dojo/node!path',
    'intern/dojo/node!stream-equal',
    'intern/dojo/node!../../libraries/file-type-support'
], function (bdd, expect, fs, path, streamEqual, fileTypeSupport) {
    'use strict';

    bdd.describe('file-type-support', function doTest() {
        bdd.it('should show that the example output.plist matches the ' +
            'temp.plist generated by the module', function () {
            var deferred = this.async(),
                input = path.normalize('tests/resources/input.plist'),
                output = path.normalize('tests/resources/output.plist'),
                temporary = path.normalize('tests/resources/temp.plist');

            // Test deactivate function by checking output produced by
            // function against test output.
            fileTypeSupport.deactivate(fs.createReadStream(input),
                fs.createWriteStream(temporary),
                deferred.rejectOnError(function onFinish() {
                    streamEqual(fs.createReadStream(output),
                        fs.createReadStream(temporary),
                        deferred.callback(function checkEqual(error, equal) {
                            expect(equal).to.be.true;
                        }));
                }));
        });
    });
});
Output:
PASS: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (29ms)
1/1 tests passed
1/1 tests passed
Output (on failure):
FAIL: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (30ms)
AssertionError: expected true to be false
AssertionError: expected true to be false
0/1 tests passed
0/1 tests passed
npm ERR! Test failed. See above for more details.
npm ERR! not ok code 0
Output (after removing excludeInstrumentation):
PASS: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (25ms)
1/1 tests passed
1/1 tests passed
------------------------------------------+-----------+-----------+-----------+-----------+
File | % Stmts |% Branches | % Funcs | % Lines |
------------------------------------------+-----------+-----------+-----------+-----------+
node_modules/intern/ | 70 | 50 | 100 | 70 |
chai.js | 70 | 50 | 100 | 70 |
node_modules/intern/lib/ | 79.71 | 42.86 | 72.22 | 79.71 |
Test.js | 79.71 | 42.86 | 72.22 | 79.71 |
node_modules/intern/lib/interfaces/ | 80 | 50 | 63.64 | 80 |
bdd.js | 100 | 100 | 100 | 100 |
tdd.js | 76.19 | 50 | 55.56 | 76.19 |
node_modules/intern/lib/reporters/ | 56.52 | 35 | 57.14 | 56.52 |
console.js | 56.52 | 35 | 57.14 | 56.52 |
node_modules/intern/node_modules/chai/ | 37.9 | 8.73 | 26.38 | 39.34 |
chai.js | 37.9 | 8.73 | 26.38 | 39.34 |
tests/unit/ | 100 | 100 | 100 | 100 |
file-type-support.js | 100 | 100 | 100 | 100 |
------------------------------------------+-----------+-----------+-----------+-----------+
All files | 42.14 | 11.35 | 33.45 | 43.63 |
------------------------------------------+-----------+-----------+-----------+-----------+
My module passes the test and I can make it fail too. It just will not show up in the code coverage. I have done the tutorial hosted on GitHub without any problems.
I tried dissecting the Istanbul and Intern dependencies. I placed a console.log where the files to be covered seem to pass through, but my module never gets passed to it. I have tried every variation of deferred.callback and deferred.rejectOnError with no difference to the code coverage.
Also, any feedback on my use of deferred.callback and deferred.rejectOnError will be greatly appreciated. I am still a little uncertain on their usage.
Thanks!
As of Intern 1.6, only require('vm').runInThisContext is hooked to add code coverage data, not require. Instrumentation of require was added in Intern 2.0.
The use of callback/rejectOnError in the above code is correct.
