Using pip install when using automatic configuration script for proxy settings - python-3.x

When I try to run:
pip install robotframework
I get a connection timed-out error message.
I am using an automatic configuration script for my proxy.
Could someone tell me how to add this detail to an environment variable, or any other method to overcome this?

Take a look inside your configuration script to find the proxy server and port.
After that you can specify the required parameters in the following command:
pip install --proxy http://user:password@proxyserver:port robotframework
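pip also honors the standard proxy environment variables, so you can export them once per shell session instead of passing --proxy each time. A minimal sketch, where user, password, proxyserver, and port are placeholders for the values found in your configuration script:
# The placeholders below are assumptions: substitute the values
# from your proxy auto-configuration script.
export HTTP_PROXY="http://user:password@proxyserver:port"
export HTTPS_PROXY="http://user:password@proxyserver:port"
pip install robotframework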

Related

AWS CLI - aws-shell: 'Namespace' object has no attribute 'cli_binary_format'

When I attempt to use aws-shell to check my S3 bucket list, I get the cli_binary_format error mentioned in the title.
My environment is: [cloudshell-user@ip-10-0-***~]$ aws --version
aws-cli/2.2.43 Python/3.8.8 Linux/4.14.252-195.483.amzn2.x86_64 exec-env/CloudShell exe/x86_64.amzn.2 prompt/off
Is anything wrong with any of my environment versions? Please advise.
See below for the recommended approach, or keep reading for a fix for aws-shell.
aws-shell requires awscli version 1 to function correctly; otherwise you'll receive the cli_binary_format error. To work around this, you can do the following in the CloudShell environment.
Install awscli version 1 and aws-shell:
pip3 install --user -U awscli aws-shell boto3 --use-feature=2020-resolver --no-cache-dir
Update your PATH so that awscli version 1 becomes the default:
export PATH=/home/cloudshell-user/.local/bin/:$PATH
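To confirm that version 1 is now the one being picked up (a quick sanity check; the path above assumes the default cloudshell-user home directory):
which aws
aws --version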
However, the better solution would be to use awscli version 2 and enable the auto-prompt feature, as described here: https://github.com/aws/aws-cli/issues/5664
aws configure set cli_auto_prompt on
or
export AWS_CLI_AUTO_PROMPT=on
Then awscli version 2 will behave similarly to aws-shell, providing completion hints etc.

How to set path to chromedriver in Google Cloud Composer

I am attempting to run a DAG on Cloud Composer that will use Selenium to scrape a web page every week.
I have already tried passing the path of a driver that I uploaded to GCS when creating the webdriver.Chrome() instance, though I imagine this is not the best way to do this.
Airflow is giving this error:
Message: 'chromedriver' executable needs to be in PATH. Please see https://sites.google.com/a/chromium.org/chromedriver/home
Any tips on updating Cloud Composer's PATH variable would be greatly appreciated. If I need to provide more info, drop a comment and I'll add it.
So there was no official answer, and neither the Composer nor the GKE Slack channel was able to help. The real problem was that the binaries were not on Composer. The best answer for right now is to manually SSH into all of your GKE airflow-workers and install Google Chrome yourself: https://linuxize.com/post/how-to-install-google-chrome-web-browser-on-ubuntu-18-04/
Then place the chromedriver for the matching version of Chrome in your dags/dependencies folder and reference it when instantiating your WebDriver object. Hope this helps!
You can create a Dockerfile and add the command to install Chrome to it (see the sketch below). Or else, as mentioned by Alex, you can manually install Chrome on all worker nodes.
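A minimal sketch of such a Dockerfile, assuming a Debian-based Airflow worker image (the base image tag here is hypothetical; match it to your actual environment):
# Base image is an assumption - use your actual Airflow worker image.
FROM apache/airflow:2.3.0
USER root
RUN apt-get update && apt-get install -y wget \
 && wget -q https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb \
 && apt-get install -y ./google-chrome-stable_current_amd64.deb \
 && rm google-chrome-stable_current_amd64.deb
USER airflow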
Follow this tutorial to connect to your worker nodes using Cloud Shell: https://towardsdatascience.com/connect-airflow-worker-gcp-e79690f3ecea
Once inside the worker, run the following to install Chrome:
sudo apt-get update
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo dpkg -i google-chrome-stable_current_amd64.deb
If you get a dependency error, run the command below and then re-run the install command:
sudo apt --fix-broken install
To check the Chrome installation, run:
google-chrome --version
Now check where the Chrome binary is installed:
which google-chrome-stable
Copy this path and set it as binary_location in the Selenium options:
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager  # fetches a matching chromedriver

options = webdriver.ChromeOptions()
options.binary_location = '/usr/bin/google-chrome-stable'
browser = webdriver.Chrome(ChromeDriverManager().install(), chrome_options=options)
If you are looking for ChromeDriver, ChromeDriverManager (from the webdriver-manager package) installs it on the fly when the webdriver object is created, as shown above.

Group Install "GNOME Desktop"

Puppet Version: 3.8.7
I have been working on building some system monitoring boxes and have run into an issue when it comes to installing yum package groups. The normal way of installing packages isn't working, but I figured I would at least be able to work around this by including an exec to run the install as a command (like below):
exec { "GNOME Desktop":
  command => "/usr/bin/yum -y groups install 'GNOME Desktop'",
  timeout => 600,
}
There is a module available on the Puppet Forge that seems to do what I want, but it's not compatible with our version of Puppet, and we are not in a position to upgrade at this time.
I also tried the setup that was listed in the below server fault question but it also did not work for me:
https://serverfault.com/questions/127460/how-do-i-install-a-yum-package-group-with-puppet
I have also been able to run the following command manually, but when I exec it as a Puppet command, it fails:
/usr/bin/yum -y groups install "GNOME Desktop"
Why is this? I assumed that Puppet just issues the command in exactly the same way the terminal would.
Changing the timeout (or removing it) had zero effect; the issue is with the version of Puppet and its ability to install group packages. I ended up installing the desktop environment in my kickstart file and running Puppet for everything else.
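For reference, the kickstart route looks something like this (a sketch; the exact environment group name can vary by distribution and version):
%packages
@^GNOME Desktop
%end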

Install and work with Cloud Custodian in EC2 instance (Linux)?

I have executed the following command on my Linux instance:
yum install custodian
It is installed, but I don't know how to start it and use it. Can anybody help me with how to execute a YAML policy script?
On Linux, you can install and set up Custodian with commands like the following:
1) Create a virtual environment:
virtualenv custodian
2) Activate it to start working with Custodian:
source custodian/bin/activate
3) Install the AWS CLI in it:
pip install awscli
4) Install c7n:
pip install c7n
5) Now configure your user credentials:
aws configure
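Once your credentials are configured, you execute a YAML policy file with the custodian CLI. A minimal sketch, using a hypothetical policy.yml that just lists EC2 instances:
# policy.yml here is a hypothetical example policy
cat > policy.yml <<'EOF'
policies:
  - name: list-ec2
    resource: ec2
EOF
custodian run --output-dir=out policy.yml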

NodeJs debug with WebStorm - org.jetbrains.v8.protocol.V8ProtocolReaderImpl$Mi cannot be cast to org.jetbrains.v8.protocol.ValueHandle

I'm trying to debug a Remote NodeJs app (Volumio).
I'm able to set break points and step over the code, but when I try to inspect a value of any variable I get:
org.jetbrains.v8.protocol.V8ProtocolReaderImpl$Mi cannot be cast to org.jetbrains.v8.protocol.ValueHandle
I assume it's some version mismatch between the local and remote environments, but any help would be appreciated.
In a Volumio SSH session, run (update refreshes the package lists, so it should come before upgrade):
sudo apt-get update
sudo apt-get upgrade
