Simple question: is allure supported with parallel pytest variants, like pytest-parallel and pytest-concurrent?
Both of these pytest plugins allow individual tests to be run in a thread pool. That part is working fine. The problem I'm having is getting allure to work with them.
When I just use the bog-standard pytest all is fine.
pytest -lvs --alluredir=allure simpletest.py
...
dir /x allure
Volume in drive C has no label.
Volume Serial Number is 780F-2C58
Directory of C:\Users\XXX\allure
04/03/2020 14:15 <DIR> .
04/03/2020 14:15 <DIR> ..
04/03/2020 14:15 6 15B0C0~1.ATT 15b0c095-9809-4054-aad4-5f92123af984-attachment.attach
04/03/2020 14:15 2,382 2A5A56~1.JSO 2a5a5697-ea48-4b6c-a697-1e858d1a06ab-result.json
04/03/2020 14:15 2,035 487A7C~1.JSO 487a7cfd-69ba-4b8a-ae02-95dd092abdca-result.json
04/03/2020 14:15 6 4B3F99~1.ATT 4b3f9954-21e3-4a11-8897-23617e67678b-attachment.attach
04/03/2020 14:15 6 DA6010~1.ATT da601030-9330-4e12-a123-f67be3f445e5-attachment.attach
5 File(s) 4,435 bytes
2 Dir(s) 11,812,552,704 bytes free
When I run the same test with either pytest-parallel or pytest-concurrent, I see the allure output directory created, but nothing gets written to it.
pytest -lvs --alluredir=allure simpletest.py
...
dir allure
Volume in drive C has no label.
Volume Serial Number is 780F-2C58
Directory of C:\Users\XXX\allure
04/03/2020 14:18 <DIR> .
04/03/2020 14:18 <DIR> ..
0 File(s) 0 bytes
2 Dir(s) 11,822,096,384 bytes free
I don't see anything on the allure-pytest site or the allure site that mentions threading.
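For context, the pattern a reporting plugin has to get right under a thread pool boils down to collecting results from test hooks and serializing them at session end. A toy sketch of that pattern, with the thread-safe collection a parallel runner would need (all names here are illustrative, not allure's actual internals):

```python
import json
import threading
from concurrent.futures import ThreadPoolExecutor

class ToyReporter:
    """Collects per-test results and dumps them once at the end."""
    def __init__(self):
        self._lock = threading.Lock()
        self._results = []

    def record(self, name, status):
        # Guard the shared list: results arrive from multiple pool threads.
        with self._lock:
            self._results.append({"name": name, "status": status})

    def dump(self):
        return json.dumps(self._results)

reporter = ToyReporter()

def run_test(i):
    reporter.record(f"test_{i}", "passed")

with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(run_test, range(8)))

print(len(json.loads(reporter.dump())))  # 8: nothing lost across threads
```

If the real plugin keeps per-session state that worker threads never see (or flushes only from the main thread), results from pooled tests can silently vanish, which would match the empty output directory above.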
(Ubuntu 18.04)
I'm attempting to extract an ODBC driver from a tarball, following these instructions, with the command:
tar --directory=/opt -zxvf /SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux.tar.gz
This results in the following output:
root@08ba33ec2cfb:/# tar --directory=/opt -zxvf SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux.tar.gz
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/GoogleBigQueryODBC.did
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/docs/
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/docs/release-notes.txt
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/docs/Simba Google BigQuery ODBC Connector Install and Configuration Guide.pdf
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/docs/OEM ODBC Driver Installation Instructions.pdf
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/setup/
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/setup/simba.googlebigqueryodbc.ini
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/setup/odbc.ini
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/setup/odbcinst.ini
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/SimbaODBCDriverforGoogleBigQuery32_2.4.6.1015.tar.gz
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015.tar.gz
The guide linked to above says:
The Simba Google BigQuery ODBC Connector files are installed in the
/opt/simba/googlebigqueryodbc directory
Not for me, but I do see:
ls -l /opt/
total 8
drwxr-xr-x 1 1000 1001 4096 Apr 26 00:39 SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux
And:
ls -l /opt/SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/
total 52324
-rwxr-xr-x 1 1000 1001 400 Apr 26 00:39 GoogleBigQueryODBC.did
-rw-rw-rw- 1 1000 1001 26688770 Apr 26 00:39 SimbaODBCDriverforGoogleBigQuery32_2.4.6.1015.tar.gz
-rw-rw-rw- 1 1000 1001 26876705 Apr 26 00:39 SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015.tar.gz
drwxr-xr-x 1 1000 1001 4096 Apr 26 00:39 docs
drwxr-xr-x 1 1000 1001 4096 Apr 26 00:39 setup
I was specifically looking for the .so driver file. All of the above is inside a Docker container. I tried extracting the tarball locally on Ubuntu 18.04 (same as my Docker container), and when I use the Ubuntu desktop GUI to extract, by double-clicking the tar.gz file and then clicking 'Extract', I do indeed see the expected files.
It seems my tar command (tar --directory=/opt -zxvf /SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux.tar.gz) is not extracting the tarball as expected.
How can I extract the contents of the tarball properly? The tarball in question is the Linux one at this link.
[edit]
Adding screenshots of the contents of the tarball, per the comments. I had to click down two levels of nesting to arrive at the actual 'stuff':
The instructions you linked to do not match the contents of the file I found from here. The first .tar.gz contains two other .tar.gz files. I looked into the 64 bit one and it has:
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/en-US/
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/en-US/SimbaBigQueryODBCMessages.xml
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/en-US/ODBCMessages.xml
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/en-US/SQLEngineMessages.xml
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/en-US/DSMessages.xml
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/en-US/DSCURLHTTPClientMessages.xml
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/third-party-licenses.txt
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/lib/
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/lib/libgooglebigqueryodbc_sb64.so
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/lib/cacerts.pem
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/lib/EULA.txt
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/Tools/
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/Tools/get_refresh_token.sh
Your .so is in the lib directory. Based on the instructions, it looks like you need to extract this inner tarball (or the 32-bit one, if appropriate) and rename the top-level directory, in this case from SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015 to simba/googlebigqueryodbc. The tar command is doing what it is told; the instructions are just way off.
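If you'd rather script the nested extraction than do it by hand, here is a sketch using Python's tarfile module. The toy archive built below just stands in for the Simba download, so the names are made up; the `extract_nested` helper is the reusable part:

```python
import io
import tarfile
import tempfile
from pathlib import Path

def extract_nested(outer_path, dest):
    """Extract an archive, then extract any .tar.gz files found inside it."""
    dest = Path(dest)
    with tarfile.open(outer_path) as outer:
        outer.extractall(dest)
    # Materialize the list first so newly extracted files don't affect the walk.
    for inner in list(dest.rglob("*.tar.gz")):
        with tarfile.open(inner) as t:
            t.extractall(inner.parent)

# Build a toy nested archive: an outer .tar.gz containing an inner .tar.gz
# which in turn holds a fake .so, mimicking the Simba layout.
work = Path(tempfile.mkdtemp())
inner_buf = io.BytesIO()
with tarfile.open(fileobj=inner_buf, mode="w:gz") as t:
    data = b"fake shared object"
    info = tarfile.TarInfo("driver64/lib/libfake_sb64.so")
    info.size = len(data)
    t.addfile(info, io.BytesIO(data))

outer_path = work / "outer.tar.gz"
with tarfile.open(outer_path, "w:gz") as t:
    payload = inner_buf.getvalue()
    info = tarfile.TarInfo("driver/inner64.tar.gz")
    info.size = len(payload)
    t.addfile(info, io.BytesIO(payload))

extract_nested(outer_path, work / "opt")
print(sorted(p.name for p in (work / "opt").rglob("*.so")))
```

Running this prints `['libfake_sb64.so']`: the inner archive is unpacked in place, which is exactly the extra step the vendor's instructions skip over.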
train_data=torchvision.datasets.CIFAR100(root='',....,download=True)
When I download the CIFAR100 dataset and select the root, is the dataset downloaded into the root that I selected?
I cannot find any files or images.
How can I see the files?
The root argument you pass to torchvision.datasets.CIFAR100 is relative to your current working directory. It will download the data there:
>>> torchvision.datasets.CIFAR100(root='.', download=True)
In the current directory (since root='.'), you will find the .tar.gz and uncompressed directory cifar-100-python/, containing the dataset:
$ ls -al
drwxr-xr-x 2 1000 4096 Feb 20 2010 cifar-100-python/
-rw-r--r-- 1 root 169001437 Jan 21 16:02 cifar-100-python.tar.gz
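One thing that trips people up: root is resolved against whatever directory the interpreter was started in, not the script's location. A quick way to see where the files will land (no torchvision needed for this part):

```python
import os

root = '.'  # the same value you would pass as root= to torchvision.datasets.CIFAR100
target = os.path.join(os.path.abspath(root), 'cifar-100-python')
print(target)  # the extracted dataset directory will appear here

# Once download=True has run, the pickled files can be listed:
if os.path.isdir(target):
    print(sorted(os.listdir(target)))
```

If the printed path isn't where you were looking, that explains the "missing" files; Spyder and Jupyter in particular often start the interpreter in a directory other than the one your script lives in.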
I am using Python 3.8 on Windows 10 with Spyder 4, and am busy working through examples from Darwinex about algo trading and how they do it, but I've run into a basic issue. When I try to run the file in IPython, it does not see the other files in the same directory that it is supposed to import. I know I am doing something wrong, just not sure what.
I have tried to hard-code the path, as you can see in the image below, and also tried another way as per a post here on SO.
I need to run 'coin_flip_traders_v1.0.py', which is what then raises the errors.
Here it shows all files in the same directory.
In[65]: pwd
Out[65]: 'C:\\DNNTrain\\Coursera\\darwinex'
ls
Volume in drive C has no label.
Volume Serial Number is 544C-EAA4
Directory of C:\DNNTrain\Coursera\darwinex
24/01/2020 11:24 <DIR> .
24/01/2020 11:24 <DIR> ..
23/01/2020 23:43 <DIR> __pycache__
24/01/2020 11:24 9,700 coin_flip_traders_v1.0.py
24/01/2020 10:14 <DIR> DarwinexLabs-master
24/01/2020 09:47 5,101 DWX_HISTORY_IO_v2_0_1_RC8.py
23/01/2020 22:25 26,718 DWX_ZeroMQ_Connector_v2_0_1_RC8.py
24/01/2020 09:47 35,491 DWX_ZeroMQ_Server_v2.0.1_RC8.mq4
24/01/2020 09:47 2,195 DWX_ZMQ_Execution.py
24/01/2020 09:47 1,928 DWX_ZMQ_Reporting.py
24/01/2020 11:23 2,219 DWZ_ZMQ_Strategy.py
24/01/2020 11:10 <DIR> EXAMPLES
7 File(s) 83,352 bytes
5 Dir(s) 116,616,744,960 bytes free
This is the part of the code that makes the call.
import os
#_path = 'C:\\DNNTrain\\Coursera\\darwinex\\' # Tried this with no luck
_path = './' # Also not working
os.chdir(_path)
#from EXAMPLES.TEMPLATE.STRATEGIES.BASE.DWX_ZMQ_Strategy import DWX_ZMQ_Strategy
from DWX_ZMQ_Strategy import DWX_ZMQ_Strategy
Here is the command and output I get.
Traceback (most recent call last):
File "C:\DNNTrain\Coursera\darwinex\coin_flip_traders_v1.0.py", line 47, in <module>
from DWX_ZMQ_Strategy import DWX_ZMQ_Strategy
ModuleNotFoundError: No module named 'DWX_ZMQ_Strategy'
Appreciate the help.
Just before hanging myself I saw that I had copied all the files into the same directory and provided _path correctly, but I never thought that they would have a typo in their file name. In the ls above you can see it is "DWZ..." when it should be "DWX...", as per the call from the script and all the other files.
Guess Z and X are indeed very close to each other.
Lesson learned: analysis paralysis. I couldn't imagine it would be something so simple.
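A near-miss like DWZ vs DWX is exactly the kind of thing difflib.get_close_matches is good at flagging. A quick sanity check of this sort (with the directory listing mocked from the ls above) would have surfaced the typo immediately:

```python
import difflib

# File names as they actually sat on disk (note the DWZ typo).
on_disk = [
    'coin_flip_traders_v1.0.py',
    'DWX_ZMQ_Execution.py',
    'DWX_ZMQ_Reporting.py',
    'DWZ_ZMQ_Strategy.py',
]

wanted = 'DWX_ZMQ_Strategy.py'
if wanted not in on_disk:
    close = difflib.get_close_matches(wanted, on_disk, n=1)
    print(f"{wanted} missing; did you mean {close[0]}?")
```

In a real session you would build `on_disk` from `os.listdir('.')` instead of a hardcoded list; the fuzzy match then points straight at the misnamed file.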
I am trying to copy directories (& files) recursively from one directory to another.
I tried the following -
rsync -avz <source> <target>
cp -ruT <source> <target>
Both were successful, but when I try to compare the sizes using du -c, the empty directories seem to have a mismatch in size.
In target directory
drwxrwxr-x 2 abc devl 4096 Jun 9 01:25 .
drwxrwxr-x 4 abc devl 4096 Jul 20 07:46 ..
In source directory
drwxrwxr-x 2 prod ops 2 Jun 9 01:25 .
drwxrwxr-x 4 prod ops 36 Jul 20 07:46 ..
Is there a special way to handle this? diff -qr doesn't show any differences though.
Thanks for your help.
Are both folders on the same volume? If not, chances are that the sector sizes for those volumes are different, and in turn the inode sizes differ. diff is just looking at whether or not the directory exists and whether it contains the corresponding files. It's similar to how diff doesn't report permission differences, because those might be pretty system-specific.
A pretty comprehensive answer can be found here: Why size reporting for directories is different than other files?
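The point that a directory's reported size is filesystem metadata, not content, can be checked directly. A minimal sketch; the exact number depends on the filesystem (ext4 typically reports 4096 for an empty directory, while the source listing above shows much smaller values):

```python
import os
import tempfile

# A directory's st_size describes its entry table, not its contents,
# and each filesystem sizes that table differently.
d = tempfile.mkdtemp()
size = os.stat(d).st_size
print(size)               # e.g. 4096 on ext4; other filesystems differ
assert not os.listdir(d)  # the directory is empty either way
```

This is why `du -c` totals disagree across volumes even when `diff -qr` confirms the file contents match: the per-directory overhead being summed is simply different.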
My host used this tutorial: http://blog.urbaninsight.com/2012/10/31/installing-nodejs-centos-55-server
and apparently it went well: 100% good, no errors.
Now I am here: http://www.nodebeginner.org/
I can't find any instructions on what Node modules I need to put on my server, so I am guessing... I downloaded the latest Node source code from the Node site and have put the lib folder into my public_html.
I now have made a hello.js which looks like this:
var http = require("lib/http.js");
http.createServer(function(request, response) {
response.writeHead(200, {"Content-Type": "text/plain"});
response.write("Hello World");
response.end();
}).listen(8888);
and, as I expected, my guess is utter poop. When I go to mysite.com:8888 I get "Oops! Google Chrome could not connect to" blah blah blah.
I have to think hard about how/what exactly I am trying to ask here... OK, I keep reading tutorials about doing things locally but can find nothing about doing them online. To be honest, after my host finished installing stuff I expected to see some new .js file(s) sitting on the server (http.js? or something). I actually can't even figure out how to ask Google about this one...
I can look through the tutorials at the code and see that it looks like very easy JavaScript (my favourite language, better than English), but it's like I'm missing the part where I need to upload or connect to the framework, like when you use jQuery: you can't just write jQuery code, because the browser will be like "what the hell is $?" First you must do something like:
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
1. Use a server that can interpret/run Node (done)
2. ??!
3. Write simple code
---------------update-----------------------------
[root@user node-latest-install]# curl https://npmjs.org/install.sh | sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 7882 100 7882 0 0 2835 0 0:00:02 0:00:02 --:--:-- 7697k
tar=/bin/tar
version:
tar (GNU tar) 1.15.1
install npm#latest
fetching: http://registry.npmjs.org/npm/-/npm-1.2.18.tgz
0.10.3
1.2.18
cleanup prefix=/root/local
find: /root/local/lib/node: No such file or directory
find: /root/local/lib/node: No such file or directory
All clean!
/root/local/bin/npm -> /root/local/lib/node_modules/npm/bin/npm-cli.js
npm#1.2.18 /root/local/lib/node_modules/npm
It worked
[root@user node-latest-install]# cd ~
[root@user ~]# ls -l
total 548
drwxr-xr-x 5 root root 4096 Apr 7 04:03 local
drwx------ 5 root root 4096 Apr 4 19:37 Maildir
drwxr-xr-x 10 root root 4096 Apr 7 04:04 node-latest-install
drwxr-xr-x 2 root root 4096 Apr 7 04:04 tmp
-rw-r--r-- 1 root root 536584 Apr 4 19:38 virtualmin-install.log
[root@user ~]# ls -l ~/local
total 12
drwxr-xr-x 2 root root 4096 Apr 7 04:04 bin
drwxr-xr-x 4 root root 4096 Apr 7 04:03 lib
drwxr-xr-x 3 root root 4096 Apr 7 04:03 share
[root@user ~]#
I've also changed it to require("http") and it still gives me the same 'oops' error.
To answer your main question about programming in Node in general: you seem to be missing npm.
1) First things first, install npm using
curl http://npmjs.org/install.sh | sh
Once you have npm, programming in node becomes much easier.
2) In your file, change
var http = require("lib/http.js");
to
var http = require("http");
Everything should work fine then.