How to install after downloading the files? - python-3.x

I have downloaded numpy-1.4.0rc2.zip from SourceForge and have extracted the contents to a folder:
Directory: D:\temp\numpy-1.4.0rc2
Mode LastWriteTime Length Name
---- ------------- ------ ----
d---- 9/10/2015 13:47 doc
d---- 9/10/2015 13:48 numpy
-a--- 9/10/2015 13:47 1620 COMPATIBILITY
-a--- 9/10/2015 13:47 687 DEV_README.txt
-a--- 9/10/2015 13:47 4732 INSTALL.txt
-a--- 9/10/2015 13:47 1543 LICENSE.txt
-a--- 9/10/2015 13:47 785 MANIFEST.in
-a--- 9/10/2015 13:47 1613 PKG-INFO
-a--- 9/10/2015 13:47 783 README.txt
-a--- 9/10/2015 13:47 5989 setup.py
-a--- 9/10/2015 13:47 149 setupegg.py
-a--- 9/10/2015 13:47 4161 setupscons.py
-a--- 9/10/2015 13:47 154 setupsconsegg.py
-a--- 9/10/2015 13:47 5010 site.cfg.example
-a--- 9/10/2015 13:47 3109 THANKS.txt
INSTALL.txt and README.txt do not offer any installation instructions. What should the next step be? I tried python setup.py but got:
ImportError: No module named '__builtin__'
I am using Python 3.4 on Windows 10.
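For context: __builtin__ is the Python 2 name of the module that Python 3 renamed to builtins, so this traceback means the numpy 1.4.0rc2 setup script is Python 2 only code; to the best of my knowledge, numpy did not gain Python 3 support until the 1.5 series. Rather than building this old release from source, a sketch of the usual fix (assuming pip is available, as it is bundled with Python 3.4) is to install a Python 3 compatible release from PyPI:
py -3.4 -m pip install numpy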

Related

How to generate a service using generate-odata-client from an ECC or S4 system using the service's $metadata?

I'm trying to convert a $metadata file into a service for use with the SAP Cloud SDK library.
generate-odata-client --inputDir .\ctradeslipdata_metadata.xml --outputDir tradeslipdata
Error
[2022-06-10T01:36:13.788Z] ERROR (generator-cli): ErrorWithCause: Generation of services failed.
at C:\Users\Avell\AppData\Roaming\nvm\v14.17.0\node_modules\@sap-cloud-sdk\generator\dist\generator-cli.js:22:18
Caused by:
Error: EEXIST: file already exists, mkdir 'D:\PESSOAL\PROGRAMACAO\JS\nest\exemplo-api-btp\ctradeslipdata_metadata.xml'
Structure:
d---- dist
d---- node_modules
d---- src
d---- test
d---- tradeslipdata
-a--- 77 .env
-a--- 20 .eslintignore
-a--- 681 .eslintrc.js
-a--- 49 .gitignore
-a--- 51 .prettierrc
-a--- 62594 ctradeslipdata_metadata.xml
-a--- 5742 default-env.json
-a--- 443 manifest.yml
-a--- 220 nest-cli.json
-a--- 3107 package.json
-a--- 97 tsconfig.build.json
-a--- 581 tsconfig.json
-a--- 317412 yarn.lock
Your command uses the option --inputDir, which expects a directory path as its parameter. However, you passed a file; point --inputDir at the directory that contains the metadata file instead.
Please check the complete documentation of the generator here.
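For illustration, a corrected invocation could move the metadata file into a directory of its own and point --inputDir at that directory (the metadata directory name here is just an assumption):
mkdir metadata
move .\ctradeslipdata_metadata.xml .\metadata\
generate-odata-client --inputDir .\metadata --outputDir tradeslipdata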

Unable to find pyspark testing module

Although I can see the pyspark.testing module in the source code on GitHub, my local environment throws an error that pyspark.testing is not found.
https://github.com/apache/spark/blob/master/python/pyspark/testing/sqlutils.py#L27
(Source Code)
I have installed pyspark with
pip install pyspark
Folder structure inside the pyspark module in the conda environment:
d---- 6/2/2022 12:15 PM bin
d---- 6/2/2022 12:15 PM cloudpickle
d---- 6/2/2022 12:15 PM data
d---- 6/2/2022 12:15 PM examples
d---- 6/2/2022 12:15 PM jars
d---- 6/2/2022 12:15 PM licenses
d---- 6/2/2022 12:15 PM ml
d---- 6/2/2022 12:15 PM mllib
d---- 6/2/2022 12:15 PM pandas
d---- 6/2/2022 12:15 PM python
d---- 6/2/2022 12:15 PM resource
d---- 6/2/2022 12:15 PM sbin
d---- 6/2/2022 12:15 PM sql
d---- 6/2/2022 12:15 PM streaming
All the other folders from the git repo are present, but the testing folder is missing.
Installed pyspark version
pyspark==3.2.1
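A hedged observation on the listing above: the PyPI wheel for pyspark 3.2.1 apparently does not ship the testing package, even though it exists in the GitHub source tree, and pyspark.testing was only documented as a public API in later releases (3.5+, as far as I know). A sketch of a check after upgrading:
pip install --upgrade "pyspark>=3.5"
python -c "from pyspark.testing import assertDataFrameEqual; print('ok')"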

Docker: Permission denied to local MySQL volume

I'm new to Docker and I don't know Linux well. I'm trying to build my own environment for local development with Docker, using the docker-compose utility, and I want to store the MySQL data in a local volume. When I run the docker-compose build and docker-compose up -d commands for the first time, there are no errors: data from the MySQL container goes into the local folder, and everything works well except for one thing. When I change my docker-compose.yml file and rebuild the containers, I get an error:
vo#vo-ThinkPad-Edge-E330:~/www/test$ docker-compose build
mysql uses an image, skipping
nginx uses an image, skipping
Building app
Traceback (most recent call last):
  File "bin/docker-compose", line 3, in <module>
  File "compose/cli/main.py", line 67, in main
  File "compose/cli/main.py", line 126, in perform_command
  File "compose/cli/main.py", line 302, in build
  File "compose/project.py", line 468, in build
  File "compose/project.py", line 450, in build_service
  File "compose/service.py", line 1125, in build
  File "docker/api/build.py", line 160, in build
  File "docker/utils/build.py", line 30, in tar
  File "docker/utils/build.py", line 49, in exclude_paths
  File "docker/utils/build.py", line 214, in rec_walk
  File "docker/utils/build.py", line 214, in rec_walk
  File "docker/utils/build.py", line 214, in rec_walk
  [Previous line repeated 1 more time]
  File "docker/utils/build.py", line 184, in rec_walk
PermissionError: [Errno 13] Permission denied: '/home/vo/www/test/docker/mysql/dbdata/performance_schema'
[301838] Failed to execute script docker-compose
I found out that the owner of the folder is systemd-coredump from the root group. So I have two options:
Run sudo docker-compose build.
Delete the /home/vo/www/test/docker/mysql/dbdata folder with sudo permissions and run docker-compose build again.
So, my question: Is this how it should be or is it possible to solve the permissions problem?
My project structure:
/
├── docker
│   ├── mysql
│   │   ├── conf
│   │   │   └── my.cnf
│   │   └── dbdata
│   ├── nginx
│   │   └── conf
│   │       └── nginx.conf
│   └── php
│       ├── conf
│       │   └── local.ini
│       ├── config
│       │   └── local.ini
│       └── Dockerfile
├── docker-compose.yml
└── src
My docker-compose.yml:
version: "3.7"
services:
#PHP Service
app:
build:
args:
user: laravel
uid: 1000
context: ./
dockerfile: ./docker/php/Dockerfile
image: laravel-image
container_name: laravel
restart: unless-stopped
tty: true
environment:
SERVICE_NAME: app
SERVICE_TAGS: dev
working_dir: /var/www/
volumes:
- ./src:/var/www
- ./docker/php/config/local.ini:/usr/local/etc/php/conf.d/local.ini
networks:
- laravel
#MySQL Service
mysql:
image: mysql:5.7
container_name: mysql
restart: unless-stopped
tty: true
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: laravel
MYSQL_ROOT_PASSWORD: secret
MYSQL_PASSWORD: secret
MYSQL_USER: laravel
SERVICE_TAGS: dev
SERVICE_NAME: mysql
volumes:
- ./docker/mysql/dbdata:/var/lib/mysql
- ./docker/mysql/conf/my.cnf:/etc/mysql/my.cnf
networks:
- laravel
#Nginx Service
nginx:
image: nginx:1.17-alpine
container_name: nginx
restart: unless-stopped
tty: true
ports:
- "80:80"
- "443:443"
environment:
SERVICE_NAME: nginx
SERVICE_TAGS: dev
volumes:
- ./src:/var/www
- ./docker/nginx/conf:/etc/nginx/conf.d
networks:
- laravel
#Networks
networks:
laravel:
driver: bridge
OK, I found a trick. In the service's volumes section of my docker-compose.yml I have to use a named volume instead of a path: for example, mysqldbvolume instead of ./docker/mysql/dbdata. Then I have to define the named volume under the top-level volumes key:
services:
  #MySQL Service
  mysql:
    image: mysql:5.7
    ...
    volumes:
      - mysqldbvolume:/var/lib/mysql
      - ./docker/mysql/conf/my.cnf:/etc/mysql/my.cnf
    ...
...
# Volumes
volumes:
  mysqldbvolume:
    driver: local
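A note on why this likely helps: docker-compose build tars up the whole build context by recursively walking it (the rec_walk frames in the traceback above), and the bind-mounted dbdata folder inside the context was owned by root, hence the PermissionError. A named volume lives under /var/lib/docker instead of inside the project folder, so the build never touches it.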
So, where is my volume now? If I want to see a list of my volumes, I run docker volume ls:
DRIVER VOLUME NAME
local test_mysqldbvolume
local test_postgresdbvolume
Inspect volume - docker volume inspect test_mysqldbvolume:
[
    {
        "CreatedAt": "2020-12-17T21:54:53+02:00",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "test",
            "com.docker.compose.version": "1.27.4",
            "com.docker.compose.volume": "mysqldbvolume"
        },
        "Mountpoint": "/var/lib/docker/volumes/test_mysqldbvolume/_data",
        "Name": "test_mysqldbvolume",
        "Options": null,
        "Scope": "local"
    }
]
So, path is "Mountpoint": "/var/lib/docker/volumes/test_mysqldbvolume/_data"
Running ls -la /var/lib/docker/volumes/test_mysqldbvolume/_data as a regular user says access is denied, but if I run sudo ls -la /var/lib/docker/volumes/test_mysqldbvolume/_data I see my volume data:
drwxrwxrwt 6 systemd-coredump systemd-coredump 4096 Dec 17 21:54 .
drwxr-xr-x 3 root root 4096 Dec 17 21:42 ..
-rw-r----- 1 systemd-coredump systemd-coredump 56 Dec 17 21:42 auto.cnf
-rw------- 1 systemd-coredump systemd-coredump 1676 Dec 17 21:42 ca-key.pem
-rw-r--r-- 1 systemd-coredump systemd-coredump 1112 Dec 17 21:42 ca.pem
-rw-r--r-- 1 systemd-coredump systemd-coredump 1112 Dec 17 21:42 client-cert.pem
-rw------- 1 systemd-coredump systemd-coredump 1680 Dec 17 21:42 client-key.pem
-rw-r----- 1 systemd-coredump systemd-coredump 2 Dec 17 21:54 ed50eca9e01e.pid
-rw-r----- 1 systemd-coredump systemd-coredump 6093953 Dec 17 21:54 general.log
-rw-r----- 1 systemd-coredump systemd-coredump 445 Dec 17 21:49 ib_buffer_pool
-rw-r----- 1 systemd-coredump systemd-coredump 79691776 Dec 17 21:54 ibdata1
-rw-r----- 1 systemd-coredump systemd-coredump 50331648 Dec 17 21:54 ib_logfile0
-rw-r----- 1 systemd-coredump systemd-coredump 50331648 Dec 17 21:42 ib_logfile1
-rw-r----- 1 systemd-coredump systemd-coredump 12582912 Dec 17 21:54 ibtmp1
drwxr-x--- 2 systemd-coredump systemd-coredump 4096 Dec 17 21:47 laravel
drwxr-x--- 2 systemd-coredump systemd-coredump 4096 Dec 17 21:42 mysql
drwxr-x--- 2 systemd-coredump systemd-coredump 4096 Dec 17 21:42 performance_schema
-rw------- 1 systemd-coredump systemd-coredump 1680 Dec 17 21:42 private_key.pem
-rw-r--r-- 1 systemd-coredump systemd-coredump 452 Dec 17 21:42 public_key.pem
-rw-r--r-- 1 systemd-coredump systemd-coredump 1112 Dec 17 21:42 server-cert.pem
-rw------- 1 systemd-coredump systemd-coredump 1680 Dec 17 21:42 server-key.pem
drwxr-x--- 2 systemd-coredump systemd-coredump 12288 Dec 17 21:42 sys
Most importantly, the permission error is gone.
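A side note that may explain the systemd-coredump owner: the mysql user inside the official image has UID 999, and on many host distributions that UID happens to be assigned to systemd-coredump, so the name itself is cosmetic. If you would rather keep the bind mount than switch to a named volume, a sketch of an alternative is to run the container under your own UID (this assumes your host user is UID 1000 and the data directory starts out empty):
services:
  mysql:
    image: mysql:5.7
    # assumption: the host user has UID/GID 1000; files created in the
    # bind mount are then owned by that user instead of UID 999
    user: "1000:1000"
    volumes:
      - ./docker/mysql/dbdata:/var/lib/mysql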
I had this issue too but for a different reason than the majority of answers here.
I have a dual boot setup with a 2nd disk accessible to both Linux and Windows.
I had my Docker images and code on the 2nd disk, an NTFS-3G drive. I tried all the workarounds with chmod and chown, but it just would not work.
When the penny dropped that NTFS-3G was causing the issue, I moved Docker back to the default configuration, with everything on the system disk including my project's code.
Images and containers were once again in the default location /var/lib/docker, and I moved my code to ~/code/project.
Once I did this, all permission issues went away.

How to fix the permission problem in /tmp/.config when deploying Node.js to AWS Beanstalk?

I am trying to deploy my React application to Node.js on AWS Beanstalk, but unfortunately I keep getting this in /var/log/nodejs/nodejs.log:
ℹ 「wds」: Project is running at http://172.31.28.128/
ℹ 「wds」: webpack output is served from
ℹ 「wds」: Content not from webpack is served from /var/app/current/public
ℹ 「wds」: 404s will fallback to /
Starting the development server...
┌──────────────────────────────────────────────────┐
│ npm update check failed │
│ Try running with sudo or get access │
│ to the local update config store via │
│ sudo chown -R $USER:$(id -gn $USER) /tmp/.config │
└──────────────────────────────────────────────────┘
I have tried all the possible solutions I could find, including:
changing permissions via a config in .ebextensions (many variants checked; one is sketched below)
a .npmrc file with unsafe-perm=true in the root folder of the application (also added to src just to check)
NPM_CONFIG_UNSAFE_PERM=true in Environment properties
removing package-lock.json, then npm i
changing the instance to a more powerful one, currently t2.small
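For reference, a typical variant of that .ebextensions config looks like the sketch below. It is only a sketch: the nodejs user name is an assumption based on the older Amazon Linux Node.js platform, so adjust it to whatever user the app actually runs as.
# .ebextensions/00_change_npm_permissions.config
container_commands:
  01_fix_tmp_config:
    command: "mkdir -p /tmp/.config && chown -R nodejs:nodejs /tmp/.config"
    ignoreErrors: true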
Interestingly, /tmp/.config currently has sufficient permissions, and still it fails:
drwxrwxrwx 3 ec2-user ec2-user 4096 May 2 12:05 .config
Below are the files that were deployed via eb deploy (I downloaded them from S3):
wrmac:app-388c-200502_153210 wojtek$ ls -la
total 1600
drwxr-xr-x# 12 wojtek staff 384 May 2 20:57 .
drwx------# 294 wojtek staff 9408 May 2 20:57 ..
-rw-r--r--# 1 wojtek staff 6148 May 2 20:57 .DS_Store
drwxr-xr-x# 4 wojtek staff 128 May 2 15:22 .ebextensions
-rw-r--r--# 1 wojtek staff 407 May 1 15:26 .gitignore
-rw-r--r--# 1 wojtek staff 17 May 2 14:46 .npmrc
-rw-r--r--# 1 wojtek staff 121322 Sep 29 2018 README.md
-rw-r--r--# 1 wojtek staff 296 Sep 29 2018 frontend.iml
-rw-r--r--# 1 wojtek staff 667957 May 2 15:31 package-lock.json
-rw-r--r--# 1 wojtek staff 1035 May 1 22:14 package.json
drwxr-xr-x# 11 wojtek staff 352 Oct 17 2019 public
drwxr-xr-x# 13 wojtek staff 416 May 2 14:46 src
wrmac:app-388c-200502_153210 wojtek$ ls -la .ebextensions/
total 16
drwxr-xr-x# 4 wojtek staff 128 May 2 15:22 .
drwxr-xr-x# 12 wojtek staff 384 May 2 20:57 ..
-rw-r--r--# 1 wojtek staff 212 May 2 14:45 00_change_npm_permissions.config
-rw-r--r--# 1 wojtek staff 3856 Apr 25 15:40 https-redirect-nodejs.config
As you can see, node_modules is not there, which was the problem for some people.
In .ebignore I also excluded:
node_modules/
.idea/
.git/
(I have tried also excluding .DS_Store, .gitignore and frontend.iml)
I also have two other Environment properties: NODE_ENV and NPM_CONFIG_PRODUCTION, both true.
To be specific, I am using:
Platform branch: Node.js running on 64bit Amazon Linux
Current platform version: 4.14.1
Current Node.js version: 12.16.1
Do you have any ideas what could solve the problem?
It seems the only solution for this strange AWS-specific problem is buying their "Business" support, which costs over $100; there is also the cheaper "Developer" support option, but according to the description that might not be enough.
In the end I decided to move to a VPS (a different hosting provider), which fully solved my problem. Additionally, it is several times cheaper than AWS, there is no strange counting of CPU cycles, the price is easily predictable, and there are probably a few other benefits.

How to clear diagnostic logs on Azure Web Apps when using the Local Cache feature?

I'm currently hosting an ASP.NET MVC web app on Azure's App Service offering and have opted into their App Service Local Cache feature, since my app does not need to write to the local file system.
This has improved the performance of my web app, but I've come to realise that it fills up the local file system very quickly, despite my having the app setting "WEBSITE_HTTPLOGGING_RETENTION_DAYS": "2" with a quota of 99 MB (not sure why I can't see this in App Settings?) set in the App > Diagnostic logs area.
The documentation says the feature modifies the way log files are stored, but it doesn't mention how to remove them:
Web apps can continue to write log files and diagnostic data as they do currently. Log files and data, however, are stored locally on the VM. Then they are copied over periodically to the shared content store. The copy to the shared content store is a best-effort operation; write-backs could be lost due to a sudden crash of a VM instance.
There is a change in the folder structure of the LogFiles and Data folders for web apps that use Local Cache. There are now subfolders in the storage LogFiles and Data folders that follow the naming pattern of "unique identifier" + time stamp. Each of the subfolders corresponds to a VM instance where the web app is running or has run.
This is confirmed by a quick PowerShell command in the Kudu console on the D:\home\logfiles directory.
PS D:\home\logfiles> Get-ChildItem | Sort LastWriteTime
Mode LastWriteTime Name
---- ------------- --------------------------
d----- 11/28/2017 1:52 AM http
d----- 11/28/2017 4:42 AM kudu
d----- 11/28/2017 4:46 AM SiteExtensions
d----- 12/1/2017 1:26 AM Application
d----- 12/1/2017 1:26 AM Monaco
d----- 3/7/2018 3:37 PM 5198ab_18_03_03_06_28_59
d----- 3/7/2018 4:03 PM 907c1a_18_03_06_06_44_22
d----- 3/8/2018 3:59 PM 5198ab_18_03_07_14_25_08
d----- 3/8/2018 3:59 PM ff0fc5_18_03_07_11_46_16
d----- 3/8/2018 4:02 PM 0c1a69_18_03_07_08_14_26
d----- 3/8/2018 4:03 PM a9316b_18_03_08_04_25_31
d----- 3/9/2018 4:02 PM 5e5ff2_18_03_08_08_44_41
d----- 3/9/2018 4:02 PM c87157_18_03_09_03_50_23
d----- 3/10/2018 6:18 AM 0c1a69_18_03_09_18_29_49
d----- 3/10/2018 10:32 AM 1440de_18_03_09_11_09_37
... <Excluded for brevity>
... <Excluded for brevity>
d----- 7/31/2018 12:33 PM fe780a_18_07_30_03_36_41
d----- 7/31/2018 4:02 PM f713f0_18_07_31_04_13_17
d----- 7/31/2018 4:03 PM 2a1ef5_18_07_30_23_18_41
d----- 8/1/2018 4:17 PM 7254a1_18_07_31_08_14_27
d----- 8/1/2018 4:18 PM c01540_18_08_01_02_17_00
d----- 8/1/2018 4:18 PM bcecd7_18_07_31_09_41_33
d----- 8/1/2018 4:18 PM 11e825_18_07_31_16_28_38
d----- 8/2/2018 6:18 AM 2ee8db_18_08_01_17_19_18
d----- 8/2/2018 6:18 AM 6ba224_18_08_01_04_13_34
d----- 8/2/2018 10:33 AM dca085_18_08_01_04_14_00
d----- 8/2/2018 10:33 AM 6ead4a_18_08_01_11_08_46
d----- 8/2/2018 12:32 PM 4c87a3_18_08_02_00_07_21
d----- 8/2/2018 12:33 PM faf4de_18_07_28_16_57_44
d----- 8/2/2018 4:21 PM 46b00f_18_07_31_23_36_35
d----- 8/2/2018 4:21 PM 52a38c_18_08_01_11_02_26
d----- 8/3/2018 12:33 PM bcecd7_18_08_01_07_50_23
d----- 8/3/2018 4:02 PM 8eac1e_18_08_03_02_32_24
d----- 8/5/2018 2:35 PM 2341b7_18_08_03_04_15_08
d----- 8/6/2018 12:16 AM 0d1e3e_18_08_02_09_30_34
d----- 8/6/2018 12:17 AM 540bec_18_08_01_10_02_30
d----- 8/6/2018 11:04 AM 3952e0_18_08_05_20_58_56
d----- 8/6/2018 11:59 AM NewRelic
My Environment settings are
"WEBSITE_LOCAL_CACHE_OPTION" : "Always"
"WEBSITE_LOCAL_CACHE_SIZEINMB" : "1800"
"WEBSITE_HTTPLOGGING_RETENTION_DAYS": "2"
"WEBSITE_LOCALCACHE_READY": "TRUE"
So the questions are:
1) How do I periodically remove the "unique identifier" + time stamp folders, e.g. 3952e0_18_08_05_20_58_56?
2) How can I make it respect my diagnostic settings, e.g. retention of 2 days with a maximum quota of 99 MB?
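For anyone facing question 1, a manual workaround from the same Kudu PowerShell console might look like the sketch below. The name filter is an assumption derived from the 6-hex-character + timestamp pattern in the listing above, and it is worth adding -WhatIf to Remove-Item on a first run:
PS D:\home\logfiles> Get-ChildItem -Directory |
    Where-Object { $_.Name -match '^[0-9a-f]{6}(_\d{2}){6}$' -and $_.LastWriteTime -lt (Get-Date).AddDays(-2) } |
    Remove-Item -Recurse -Force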
