I am trying to create a Portus environment using Docker Compose, but I get this error and I don't know how to solve it:
ERROR: for crono Container command 'bin/crono' not found or does not exist.
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "compose/cli/main.py", line 63, in main
AttributeError: 'ProjectError' object has no attribute 'msg'
docker-compose returned -1
It's probably because the Data Space Available value is near 0 MB. You can check this value using the command "docker info".
If this is your case, you can resolve the problem with the following steps (a consolidated sketch follows the list):
If your images are not already pushed to a Docker registry, save them first with "docker save -o DockerImageName.tar <image-name>"
su
systemctl stop docker
mkdir /new/path/to/docker
vim /etc/docker/daemon.json
Add the following line to your daemon.json -> { "graph": "/new/path/to/docker" }
systemctl start docker
Then you can try bringing your containers up again:
su docker
If you saved your Docker images in the first step, load them again with "docker load -i DockerImageName.tar"
cd /path/to/docker-compose
docker-compose up -d
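For reference, here is the whole relocation as a minimal sketch. The target path is only an illustration, and note that on Docker 17.05 and later the "graph" key is deprecated in favor of "data-root":
su
systemctl stop docker
mkdir -p /mnt/bigdisk/docker
cat > /etc/docker/daemon.json <<'EOF'
{ "data-root": "/mnt/bigdisk/docker" }
EOF
systemctl start docker
docker info | grep -i 'docker root dir'    # should now print the new path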
I'm passing an env var via docker container run in GitHub Actions like so:
run: docker container run -d -e MY_KEY="some key" -p 3000:3000 somedockerimage/somedockerimage:0.0.2
I know it should be passed the right way because it works with Node.js.
In the Python file:
import os
api_key = os.environ['MY_KEY']
print(api_key)
The result I get:
File "print.py", line 4, in <module>
api_key = os.environ['MY_KEY']
File "/usr/lib/python3.8/os.py", line 675, in __getitem__
raise KeyError(key) from None
KeyError: 'MY_KEY'
I don't see anything incorrect with the way you're running your Docker container. It's difficult to say without seeing the rest of the project, but you may need to delete your .pyc files with something like find . -name \*.pyc -delete. This answer could add more context.
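If nothing else, it is worth confirming that the variable actually reaches the container. A quick check from the shell (the image tag is taken from your run command; the python:3.8 image is just a generic stand-in):
docker container run --rm -e MY_KEY="some key" somedockerimage/somedockerimage:0.0.2 env | grep MY_KEY
# or read it defensively from Python, with a fallback instead of a KeyError:
docker container run --rm -e MY_KEY="some key" python:3.8 python -c 'import os; print(os.environ.get("MY_KEY", "<missing>"))'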
I followed the official Airflow docker guide.
It works fine for most of the simple jobs I have.
I tried to use this guide; for that, I needed to add this line to the .env file:
_PIP_ADDITIONAL_REQUIREMENTS=pyspark xlrd apache-airflow-providers-apache-spark
Unfortunately, the dag is not being loaded.
The problem seems to be related to JAVA_HOME because the docker output shows this message:
airflow-scheduler_1 | is not set
In the Airflow web GUI it shows the following error:
Broken DAG: [/opt/airflow/dags/SparkETL.py] Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/pyspark/context.py", line 339, in _ensure_initialized
SparkContext._gateway = gateway or launch_gateway(conf)
File "/home/airflow/.local/lib/python3.7/site-packages/pyspark/java_gateway.py", line 108, in launch_gateway
raise RuntimeError("Java gateway process exited before sending its port number")
RuntimeError: Java gateway process exited before sending its port number
I tried adding an install -y openjdk-11-jdk command in the docker-compose file, and also setting JAVA_HOME: '/usr/lib/jvm/java-11-openjdk-amd64' there. In this situation, airflow-scheduler reports that the path does not exist.
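For what it's worth, a common way to make that attempt stick is to bake Java and the extra packages into a derived image instead of installing them through docker-compose. A minimal sketch (the base image tag and the JDK path for Debian amd64 are assumptions; adjust to your setup):
cat > Dockerfile <<'EOF'
FROM apache/airflow:2.3.0
USER root
RUN apt-get update \
    && apt-get install -y --no-install-recommends openjdk-11-jre-headless \
    && apt-get clean
ENV JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
USER airflow
RUN pip install --no-cache-dir pyspark xlrd apache-airflow-providers-apache-spark
EOF
docker build -t my-airflow-spark .
# then point the image: line of each Airflow service in docker-compose.yaml at my-airflow-spark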
We are sitting behind a firewall and are trying to run a Docker image (cBioPortal). Docker itself could be installed through a proxy, but now we encounter the following issue:
Starting validation...
INFO: -: Unable to read xml containing cBioPortal version.
DEBUG: -: Requesting cancertypes from portal at 'http://cbioportal-container:8081'
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Error occurred during validation step:
Traceback (most recent call last):
File "/cbioportal/core/src/main/scripts/importer/validateData.py", line 4491, in request_from_portal_api
response.raise_for_status()
File "/usr/local/lib/python3.5/dist-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Timeout for url: http://cbioportal-container:8081/api-legacy/cancertypes
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/metaImport.py", line 127, in <module>
exitcode = validateData.main_validate(args)
File "/cbioportal/core/src/main/scripts/importer/validateData.py", line 4969, in main_validate
portal_instance = load_portal_info(server_url, logger)
File "/cbioportal/core/src/main/scripts/importer/validateData.py", line 4622, in load_portal_info
parsed_json = request_from_portal_api(path, api_name, logger)
File "/cbioportal/core/src/main/scripts/importer/validateData.py", line 4495, in request_from_portal_api
) from e
ConnectionError: Failed to fetch metadata from the portal at [http://cbioportal-container:8081/api-legacy/cancertypes]
Now we know that it is a firewall issue, because it works when we install it outside the firewall. But we do not know yet how to change the firewall. Our idea was to look at the files and lines which throw the errors. But we do not know how to inspect the files, since they are inside the Docker container.
So we cannot just do something like
vim /cbioportal/core/src/main/scripts/importer/validateData.py
...because there is nothing there. Of course we know this file is within the Docker image, but as I said, we don't know how to look into it. At the moment we do not know how to solve this riddle - any help appreciated.
Maybe you still need this.
You can access this Python file within the container by using docker-compose exec cbioportal sh or docker-compose exec cbioportal bash.
Then you can use cd, cat, vi, vim, etc. to access the path given in your post.
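For example (the service and container names are taken from the logs in your post and may differ in your setup):
docker-compose exec cbioportal bash
# inside the container, open the file from the traceback:
less /cbioportal/core/src/main/scripts/importer/validateData.py
# or copy it out to the host without an interactive shell:
docker cp cbioportal-container:/cbioportal/core/src/main/scripts/importer/validateData.py .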
I'm not sure which command you're actually running but when I did the import call like
docker-compose run --rm cbioportal metaImport.py -u http://cbioportal:8080 -s study/lgg_ucsf_2014/lgg_ucsf_2014/ -o
I had to replace the http://cbioportal:8080 with the server's IP address.
Also notice that the studies path is one level deeper than in the official documentation.
With cBioPortal behind a proxy, the study import is only available in offline mode:
First you need to get inside the container:
docker exec -it cbioportal-container bash
Then generate the portal info folder:
cd $PORTAL_HOME/core/src/main/scripts
./dumpPortalInfo.pl $PORTAL_HOME/my_portal_info_folder
Then import the study offline. -o is important in order to overwrite despite warnings:
cd $PORTAL_HOME/core/src/main/scripts
./importer/metaImport.py -p $PORTAL_HOME/my_portal_info_folder -s /study/lgg_ucsf_2014 -v -o
Hope this helps.
I am trying to launch Presto by entering the following in the terminal:
sudo bin/launcher start
It shows me this:
Started as 16501 (This integer varies on every attempt)
Then, I tried to launch it by entering the following in terminal:
sudo bin/launcher run --verbose
The output I get is:
config_path = /media/polly/161813A518138343/PrestoDB/presto-server-0.203/etc/config.properties
data_dir = /media/polly/161813A518138343/PrestoDB/presto-server-0.203
etc_dir = /media/polly/161813A518138343/PrestoDB/presto-server-0.203/etc
install_path = /media/polly/161813A518138343/PrestoDB/presto-server-0.203
jvm_config = /media/polly/161813A518138343/PrestoDB/presto-server-0.203/etc/jvm.config
launcher_config = /media/polly/161813A518138343/PrestoDB/presto-server-0.203/bin/launcher.properties
launcher_log = /media/polly/161813A518138343/PrestoDB/presto-server-0.203/var/log/launcher.log
log_levels = /media/polly/161813A518138343/PrestoDB/presto-server-0.203/etc/log.properties
log_levels_set = False
node_config = /media/polly/161813A518138343/PrestoDB/presto-server-0.203/etc/node.properties
pid_file = /media/polly/161813A518138343/PrestoDB/presto-server-0.203/var/run/launcher.pid
properties = {}
server_log = /media/polly/161813A518138343/PrestoDB/presto-server-0.203/var/log/server.log
verbose = True
['java', '-cp', '/media/polly/161813A518138343/PrestoDB/presto-server-0.203/lib/*', '-server', '-Xmx16G', '-XX:+UseG1GC', '-XX:G1HeapRegionSize=32M', '-XX:+UseGCOverheadLimit', '-XX:+ExplicitGCInvokesConcurrent', '-XX:+HeapDumpOnOutOfMemoryError', '-XX:+ExitOnOutOfMemoryError', '-Dconfig=/media/polly/161813A518138343/PrestoDB/presto-server-0.203/etc/config.properties', 'com.facebook.presto.server.PrestoServer']
Traceback (most recent call last):
File "bin/launcher.py", line 445, in main
handle_command(command, o)
File "bin/launcher.py", line 329, in handle_command
run(process, options)
File "bin/launcher.py", line 251, in run
os.execvpe(args[0], args, env)
File "/usr/lib/python2.7/os.py", line 355, in execvpe
_execvpe(file, args, env)
File "/usr/lib/python2.7/os.py", line 382, in _execvpe
func(fullname, *argrest)
OSError: [Errno 2] No such file or directory
I am unable to understand the error message. Any help would be appreciated.
Here is the config.properties file:
coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=3306
query.max-memory=2GB
query.max-memory-per-node=1GB
discovery-server.enabled=true
discovery.uri=http://localhost:3306
EDIT: After entering sudo bin/launcher start into the terminal and then sudo bin/launcher status, it says "Not running". Also, there is no web page at localhost:3306. If it had started successfully, I should get a web page.
Since I got it fixed myself, I will answer my own question for anyone who encounters this issue in future and comes across this question.
Where exactly was the problem: the JRE. (Thanks to kokosing for pointing out that there might be some problem with Java.)
What I did before: I downloaded jre-8u171-linux-x64.tar.gz from https://java.com/en/download/help/linux_x64_install.xml, placed it in a partition or "media" different from where Ubuntu is installed, and configured .bashrc myself by adding the following lines:
JAVA_HOME=/media/polly/161813A518138343/Java/jdk-10.0.1
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
export JAVA_HOME
export JRE_HOME
export PATH
For changes to take place, I executed exec bash in terminal.
To check if it was running I tried java -version and it displayed the version of java running.
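Note, incidentally, that the fragment above exports JRE_HOME without ever setting it, and JAVA_HOME points at a JDK 10 directory even though the downloaded archive was a JRE 8. A self-consistent version of the fragment (paths taken from the post; JDK 9+ no longer ships a separate jre/ directory, so JRE_HOME simply mirrors JAVA_HOME here) would look like:
JAVA_HOME=/media/polly/161813A518138343/Java/jdk-10.0.1
JRE_HOME=$JAVA_HOME    # JDK 9+ has no separate jre/ directory
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
export JAVA_HOME JRE_HOME PATH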
I tried to launch Presto, it wouldn't run.
What I did after: I removed the part that I had added to .bashrc.
I used the command sudo apt-get install default-jre. After successful installation I entered java -version and it showed me the version of Java installed and running. I tried to launch Presto and it ran successfully. I am able to see the page at localhost:3306.
The commands sudo bin/launcher start and sudo bin/launcher run conflict with each other. The first starts Presto in the background while the second starts Presto in the foreground. You cannot start two Presto processes on the same machine because they try to allocate the same port (see http-server.http.port=3306 in your config.properties).
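To see why the background start dies right after launch, running in the foreground (as you did) or tailing the logs usually reveals the real error. For example, from the installation directory (log paths taken from the verbose output above; lsof availability is an assumption about your system):
tail -n 50 var/log/launcher.log var/log/server.log
# and check that nothing else (e.g. MySQL, whose default port is 3306) already holds the port:
sudo lsof -i :3306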
What did you want to achieve with sudo bin/launcher run? If you want to run a query, then please use presto-cli-*-executable.jar.
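A minimal usage sketch (the jar version and the catalog/schema are placeholders; the port matches your config.properties):
java -jar presto-cli-0.203-executable.jar --server localhost:3306 --catalog system --schema runtime
# then, at the prompt, try a trivial query such as: SELECT * FROM nodes;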
I can run this command on my instance using the web console:
gsutil rsync -d -r /my-path gs://my-bucket
But when I try it in my remote SSH terminal I get this error:
root@instance-2: gsutil rsync -d -r /my-path gs://my-bucket
Building synchronization state...
INFO 0923 12:48:48.572446 multistore_file.py] Error decoding credential, skipping
Traceback (most recent call last):
File "/usr/lib/google-cloud-sdk/platform/gsutil/third_party/oauth2client/oauth2client/multistore_file.py", line 381, in _refresh_data_cache
(key, credential) = self._decode_credential_from_json(cred_entry)
File "/usr/lib/google-cloud-sdk/platform/gsutil/third_party/oauth2client/oauth2client/multistore_file.py", line 400, in _decode_credential_from_json
credential = Credentials.new_from_json(json.dumps(cred_entry['credential']))
File "/usr/lib/google-cloud-sdk/platform/gsutil/third_party/oauth2client/oauth2client/client.py", line 292, in new_from_json
return from_json(s)
File "/usr/lib/google-cloud-sdk/platform/gsutil/third_party/apitools/apitools/base/py/credentials_lib.py", line 356, in from_json
data['token_expiry'], oauth2client.client.EXPIRY_FORMAT)
TypeError: must be string, not None
Caught non-retryable exception while listing gs://my-bucket/: Could not reach metadata service: Not Found
At source listing 10000...
At source listing 20000...
At source listing 30000...
At source listing 40000...
CommandException: Caught non-retryable exception - aborting rsync
I solved this by switching the user to the default GCE one that is created when the project is created. It seems root on the VM does not have the privileges to run gsutil commands.
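In practice that looks something like this (the username is a placeholder; on a GCE VM the default user is the one the project's SSH key was created for):
gcloud auth list              # shows which account gsutil is currently using as root
su - my_default_user          # switch back to the default (non-root) user
gsutil rsync -d -r /my-path gs://my-bucket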