AWS IoT basicPubSub.py example - clarification on certificates (CLI) - python-3.x

I have been trying to get AWS IoT working and keep hitting problems and errors without getting anywhere. I am trying to use the AWS IoT basicPubSub.py script to test the connection, but I am getting an error:
ssl.SSLError: unknown error (_ssl.c:3946)
I have been through all the certificates several times, but I want to confirm whether I can pull the rootCAFile, certFile and privateKey from the command line utility and/or from the IAM interface. I have downloaded each piece of information and stored it in local files.
python basicPubSub.py -e <endpoint> -r <rootCAFilePath> -c <certFilePath> -k <privateKeyFilePath>
The main aim is just to confirm that everything is correct, or whether my problem lies somewhere else. Is there a way to test each certificate to ensure each file is correct and has the right information?

I am not sure exactly which step fixed this problem; I tried the following:
Re-created all the certificates
Re-installed the CLI using sudo
Installed ssl (sudo apt-get install -y libssl-dev)
I am going to do a fresh installation on my RPi and repeat the steps to understand how this was resolved.
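For anyone who wants to sanity-check the certificate files themselves, openssl can test each piece. This is a sketch; the file names below are assumptions, so substitute your own paths and endpoint:
# Verify the device certificate chains up to the root CA
openssl verify -CAfile rootCA.pem certificate.pem.crt
# Confirm the private key matches the certificate: the two digests should be identical
openssl x509 -noout -modulus -in certificate.pem.crt | openssl md5
openssl rsa -noout -modulus -in private.pem.key | openssl md5
# Test the full TLS handshake against the endpoint (8883 is MQTT over TLS)
openssl s_client -connect <endpoint>:8883 -CAfile rootCA.pem -cert certificate.pem.crt -key private.pem.key
If the handshake completes, the certificate files are consistent, and the SSL error most likely lies in the local Python/OpenSSL installation instead.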

Related

Error: Could not connect to Redis at 127.0.0.1:6379: Connection refused

A detailed guide to installing Redis on a Mac
Hello Everyone,
I recently stumbled upon a YouTube video, Redis Crash Course by Brad on the Traversy Media channel (https://www.youtube.com/channel/UC29ju8bIPH5as8OGnQzwJyA). Below are the issues I got stuck on while installing Redis.
I was unable to download Redis through the CLI, i.e. wget https://download.redis.io/releases/redis-6.2.6.tar.gz; note that I used curl, as wget was not functional.
I was unable to start redis-cli, and it tortured me with the error: Could not connect to Redis at 127.0.0.1:6379: Connection refused, followed by a not connected> prompt. Below are the steps that I followed to install and run Redis successfully.
[Solution] Problem statement 1:
Instead of downloading through the CLI, I downloaded the "tar.gz" file directly. I grabbed the stable version 6.2.6 and then ran the CLI commands below.
$ tar xzf redis-6.2.6.tar.gz
$ cd redis-6.2.6
$ make
This made it easy to build the binaries. After that, I followed the Redis documentation to run redis-server, and it worked fine.
[Solution] Problem statement 2:
As I said, I was unable to run redis-cli even though I could successfully run redis-server. I went through several websites and Stack Overflow answers to understand the error. That's when I realized that redis-server and redis-cli are two separate executables/processes, so for the client to work, the server must already be running, either in the background or in another terminal.
Note: if you're starting redis-server in the same terminal, make sure to run the server in the background using the command below.
redis-server --daemonize yes
This should solve the problem; now try using redis-cli. It will work perfectly.
Now you can see the prompt with the localhost IP and port 6379; send a test PING and confirm it is connected.
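Putting the two fixes together, a quick smoke test looks like this (the PONG reply confirms the client reached the server):
$ redis-server --daemonize yes
$ redis-cli
127.0.0.1:6379> PING
PONG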

No Internet on Custom Image VM for Azure

I launched an Ubuntu 18.04 VM with Azure. I installed a bunch of stuff that I need. Then, I used the dashboard to create a custom image from this machine. After that, I checked that the image was okay by launching some machines with that image. Everything seemed to be working fine.
Today, I launched a new instance with my custom image. Then I tried to install a few things with apt-get install and got the following error (e.g. for unzip):
sudo: unable to resolve host ABCDEFG: Resource temporarily unavailable
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package unzip is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'unzip' has no installation candidate
The same thing happens for any package I try to install. After testing some basic things with my repositories, I checked the internet connection with ping, e.g. ping www.google.com, which is also not working. I launched a vanilla Ubuntu 18.04 instance and I am not having these problems on that machine.
I have also tried sudo reboot, but no luck with that. I did notice that when the system booted it showed the following error, also indicating that something is wrong with the internet:
Failed to connect to https://changelogs.ubuntu.com/meta-release-lts. Check your Internet connection or proxy settings
Any help is greatly appreciated.
So, after some digging around, I found this answer to something similar: https://askubuntu.com/questions/1045278/ubuntu-server-18-04-temporary-failure-in-name-resolution.
I used the following command and the internet started working again:
sudo ln -s ../run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
This is a little different from the answer on askubuntu because this is on an Azure image. First, I noticed that my image was missing resolv.conf in /etc. Running ls -la /etc/resolv.conf on a different Azure image, I saw that it was a symbolic link to ../run/systemd/resolve/stub-resolv.conf, so I created a link matching that format on my machine, and that fixed things.
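For anyone verifying the fix, a short check sequence like the following should confirm the state (a sketch, assuming systemd-resolved is in use, as on stock Ubuntu 18.04):
# Confirm the symlink exists and points at the stub resolver
ls -la /etc/resolv.conf
# Confirm systemd-resolved is running
systemctl status systemd-resolved
# Confirm name resolution works again
ping -c 3 www.google.com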
** EDIT **
It's worth noting that when you deprovision the VM to create the custom image, it does say:
WARNING! The waagent service will be stopped.
WARNING! Cached DHCP leases will be deleted.
WARNING! root password will be disabled. You will not be able to login as root.
WARNING! /etc/resolv.conf will be deleted.
WARNING! xxxx account and entire home directory will be deleted.

Google Cloud SDK - Is there a way to manually install google cloud sdk on Linux without internet access?

I am trying to install Google Cloud SDK on a Linux machine without any Internet access.
I am following the instructions at: https://cloud.google.com/sdk/?hl=en
I downloaded the tar file on my local machine and transferred it to the Linux machine using scp. I then ran the install.sh file and got the following error:
[me@user google-cloud-sdk]$ ./install.sh
Welcome to the Google Cloud SDK!
To help improve the quality of this product, we collect anonymized data on how
the SDK is used. You may choose to opt out of this collection now (by choosing
'N' at the below prompt), or at any time in the future by running the following
command:
gcloud config set --scope=user disable_usage_reporting true
Do you want to help improve the Google Cloud SDK (Y/n)? n
This will install all the core command line tools necessary for working with
the Google Cloud Platform.
/home/me/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py:661: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
exc_message = getattr(exc, 'message', None)
/home/me/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py:664: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
msg = u'({0}) {1}'.format(command_path_string, exc.message)
ERROR: (gcloud.components.update) Failed to fetch component listing from server. Check your network settings and try again.
I have a proxy server that I can use to access the internet from this Linux machine. I tried running the installer as 'sh install.sh --proxy host:port', but obviously there is no proxy input parameter to install.sh.
How can I work around this problem?
Thanks in advance.
I exported my proxy details with "export https_proxy='...'" before running the install.sh file. This worked for me.
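In case it helps, a minimal sketch of what that looks like; the proxy host and port here are placeholders, so substitute your own:
# Point HTTPS traffic at your proxy before running the installer
export https_proxy='http://proxy.example.com:3128'
./install.sh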
On Windows: go to Advanced System Settings and create a variable HTTPS_PROXY, then restart CMD. Run
echo %HTTPS_PROXY%
to make sure it has taken the change into account, then launch install.bat.

Issue Installing Composer on Hosted Web Server

I’m having a heck of a time trying to install composer (to install Laravel) on my server. I’m accessing my web server via the built-in terminal in Coda 2. These are the commands I’ve been trying to run:
curl -sS https://getcomposer.org/installer | php
mv composer.phar /usr/local/bin/composer
The curl command executes fine, but when I try the move, I get the following error:
mv: inter-device move failed: `composer.phar' to `/usr/local/bin/composer'; unable to remove target: Read-only file system
I tried to run the move with sudo per the Composer website, but that results in these errors:
sudo: unable to stat /etc/sudoers: No such file or directory
sudo: no valid sudoers sources found, quitting
I’ve been trying to Google around to figure it out, but I haven’t had much success. I’m not too savvy with server issues like this, so it has been hard for me to figure out what is going on.
Thanks in advance.
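One common workaround on shared hosts, where /usr/local/bin is read-only and sudo is unavailable, is a per-user install. This is only a sketch, assuming your shell reads ~/.bashrc:
# Keep composer in your home directory instead of the read-only /usr/local/bin
mkdir -p ~/bin
mv composer.phar ~/bin/composer
chmod +x ~/bin/composer
# Add ~/bin to PATH for this session (append this line to ~/.bashrc to persist)
export PATH="$HOME/bin:$PATH"
composer --version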

Adding Support for SCP and SFTP for Curl on Linux

I've been desperately trying to add SFTP and SCP support to Curl on my CentOS box. I found something resembling a solution here:
http://andrewberls.com/blog/post/adding-sftp-support-to-curl
I followed those steps but found that, when attempting to get a file via both SCP and SFTP, the connection hangs once the file has been found. I cannot fix this and cannot find an alternative solution.
I have to use Curl for a job at work and therefore cannot use another lib. Has anyone managed to successfully add support for SCP and SFTP on Curl? I have a test server setup and other protocols such as FTP work as expected.
Any help would be greatly appreciated!
Thanks in advance,
Peter
Although Curl does support SFTP, support isn't automatically included in the default package.
This website: http://andrewberls.com/blog/post/adding-sftp-support-to-curl provided the details which helped me add the required support for SFTP. As the site didn't work 100% for me, I've outlined the different steps taken below.
Manually downloading libssh2 didn't work for me so I used yum to install the two packages:
yum install libssh2 libssh2-devel
and then followed step two, configuring Curl to build against the above libraries.
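For reference, the configure-and-rebuild step looks roughly like this; a sketch run from the Curl source directory, assuming the libssh2 headers from the packages above are installed:
# Rebuild Curl with SSH (SCP/SFTP) support via libssh2
./configure --with-libssh2
make
sudo make install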
The final step was to restart sshd:
service sshd restart
There you have it. Double-check that SFTP is on the list of supported protocols by running
curl -V
When I initially tested, Curl complained about key authentication issues, but you can force Curl to use any authentication to connect:
curl --anyauth sftp://user:passwd@127.0.0.1/directory -o Test.txt
This will round-robin the different supported authentication methods and let you use your login credentials instead.
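If you would rather keep key-based authentication than fall back to a password, Curl can also be pointed at a key pair explicitly; the key paths here are assumptions:
# Authenticate with an SSH key pair instead of a password
curl --key ~/.ssh/id_rsa --pubkey ~/.ssh/id_rsa.pub -u user: sftp://127.0.0.1/directory -o Test.txt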
I hope this helps alleviate any other headaches for people trying to achieve the same.
