How to Retrieve List of Servers from Rackspace via OpenStack Nova Client? - rackspace-cloud

I'm trying to use the OpenStack Nova client to run operations on my Rackspace account. The closest I was able to get was this blog post: http://www.zippykid.com/2011/10/06/using-the-rackspace-cloud-control-panel-via-openstack-cli-tools-on-os-x-lion-and-other-unixes/ However, it doesn't seem to work now. Does anyone know how to do this? Thanks.

You should be able to download and install the nova command-line client to operate with any OpenStack endpoint. So a couple of things to check:
To get the client from source:
git clone https://github.com/openstack/python-novaclient
cd python-novaclient
(sudo) python setup.py install
To get the client from PyPi:
pip install python-novaclient
Make sure you're working against an OpenStack endpoint - as far as I know, not all systems at Rackspace are running OpenStack. While the APIs are darned similar, they're not guaranteed to be identical.
The nova command line has a --debug option that shows the raw HTTP request and response for each call it makes, which may be useful in determining what's going wrong.
I'm afraid this only gets you to the point where we can determine why it's not working; without a bit more detail I can't say what's actually broken.

There is a good guide on the Rackspace blog here.
Basically, add these lines to your /etc/profile:
export OS_AUTH_SYSTEM=rackspace
export OS_REGION_NAME=IAD  # or any other region you have
export OS_PASSWORD=<YOUR_API_PASSWORD>
export OS_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0/
export OS_VERSION=2.0
export OS_USERNAME=<YOUR_API_USERNAME>
export OS_TENANT_NAME=<YOUR_CUSTOMER_ID>
export OS_SERVICE_NAME=cloudserversOpenStack
Then run:
$ nova list
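A quick sanity check before digging deeper: export the variables and confirm they are actually visible in the shell that will run nova. This is just a sketch - every value below is a placeholder, not a real credential.

```shell
# Placeholder values -- substitute your own Rackspace credentials
export OS_AUTH_SYSTEM=rackspace
export OS_REGION_NAME=IAD
export OS_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0/
export OS_VERSION=2.0
export OS_USERNAME=my-api-username
export OS_PASSWORD=my-api-key
export OS_TENANT_NAME=my-customer-id
export OS_SERVICE_NAME=cloudserversOpenStack

# The nova client reads these from the environment, so they must show up here
env | grep '^OS_' | sort
```

If the variables look right but nova list still fails, re-run it with --debug to see the HTTP exchange.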

Related

Upgrading MariaDB on AWS Linux machine

I have a Moodle site which runs on a Linux AWS box and I'm trying to upgrade it. I need to have MariaDB 10.3 on there, and I currently have 10.2.10.
I've followed the instructions for upgrading using yum from this webpage https://www.ryadel.com/en/mariadb-10-upgrade-10-3-without-losing-data-how-to/ and all goes fine until I get to the Running Transaction Check step, at which point I get the following:
Transaction check error:
file /etc/my.cnf from install of MariaDB-common-10.3.27-1.el7.centos.x86_64 conflicts with file from package mariadb-config-3:10.2.10-2.amzn2.0.3.x86_64
file /usr/lib64/libmysqlclient.so.18 from install of MariaDB-compat-10.3.27-1.el7.centos.x86_64 conflicts with file from package mariadb-libs-3:10.2.10-2.amzn2.0.3.x86_64
I'm not sure what to do now. Any help or pointers would be appreciated.
EC2 is not designed specifically for databases
You seem to be installing and running your database on EC2 (what you call a Linux AWS box). This means you can SSH into the instance, install software manually, carry out updates, edit configuration files and settings, etc.
RDS is designed for databases
RDS also has really convenient features like automatic version upgrades and maintenance window management.
If your situation allows, I would suggest using a service designed for databases instead of having to configure things manually. It will save you a lot of time and troubleshooting, and it is also more secure.
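That said, if you need to stay on EC2, the transaction check errors above usually mean the stock Amazon Linux mariadb packages have to be removed before the MariaDB.org 10.3 packages can be installed. A rough, untested sketch follows - the package names are taken from the error message, take a full mysqldump backup first, and note the RUN_UPGRADE guard so nothing runs by accident:

```shell
# Dry-run by default: set RUN_UPGRADE=yes on the instance to actually execute.
if [ "${RUN_UPGRADE:-no}" = "yes" ] && command -v yum >/dev/null 2>&1; then
  sudo yum remove mariadb-libs mariadb-config          # the packages named in the conflict
  sudo yum install MariaDB-server MariaDB-client MariaDB-compat
  sudo systemctl start mariadb
  sudo mysql_upgrade                                   # migrate system tables to 10.3
else
  echo "Dry run: set RUN_UPGRADE=yes on the Amazon Linux instance to proceed"
fi
```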

How does Galaxy Meteor hosting for windows work?

I have a node.js application I have adopted from a more senior developer. I want to deploy it, and I know it will work because he already deployed it several times. I am reading these instructions:
https://galaxy-guide.meteor.com/deploy-quickstart.html
I use windows, as did he.
How does deployment work?
Take these instructions:
Windows: If you are using Windows, the commands to deploy are slightly different. You need to set the environment variable first, then run the deployment command second (the syntax is the same as everything you'd put for meteor deploy).
In the case of US East, the commands would be:
$ SET DEPLOY_HOSTNAME=galaxy.meteor.com
$ meteor deploy [hostname] --settings path-to-settings.json
Am I just supposed to go to the source directory on my laptop and run these commands? What then happens? Is the source uploaded to their server from my laptop and then their magic takes care of the rest?
What about when I want to make a change to the code? Do I just do the same thing, pointing to an existing container and, again, they do the magic?
Am I just supposed to go to the source directory on my laptop and run these commands? What then happens? Is the source uploaded to their server from my laptop and then their magic takes care of the rest?
It is not magic. You basically go to your dev root and enter these commands. Under the hood it builds your app for production (including minification and prod flags for optimization) and, once complete, opens a connection to the AWS infrastructure and pushes the build bundle.
See: https://github.com/meteor/meteor/blob/devel/tools/meteor-services/deploy.js
On the server there will be some install and post-install scripts that set up the whole environment for you and, if there are no errors in the process, start your app.
These scripts of course have some automation, depending on your account settings and the commands you have entered.
What about when I want to make a change to the code? Do I just do the same thing, pointing to an existing container and, again, they do the magic?
You will have to rebuild (using the given deploy command) again but Galaxy will take care of the rest.

How could I prohibit anonymous access to my Node-RED UI Dashboard on IBM Cloud (Bluemix)?

I'm working with Node-RED on an IBM Cloud boilerplate. I know that there is a way, by changing the values of the environment variables (NODE_RED_USERNAME and NODE_RED_PASSWORD), to change the username and password of the flow editor. But what about the UI dashboard? I mean, when using dashboard nodes, how do I forbid access to
https://noderedservicename.mybluemix.net/ui/
I know that in the code I can do what I want by changing the httpNodeAuth variable in the settings.js file. What is the way to do that on IBM Cloud?
Thank you in advance!
You need to add httpNodeAuth (not httpAdminAuth, which controls access to the Node-RED editor and can be done with the environment variables discussed in the other answer) to the app/bluemix-settings.js file.
Something like this:
...
httpStatic: path.join(__dirname,"public"),
httpNodeAuth: {user:"user",pass:"$2a$08$zZWtXTja0fB1pzD4sHCMyOCMYz2Z6dNbM6tl8sJogENOMcxWV9DN."},
functionGlobalContext: { },
...
Details of how to generate the pass can be found here
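For example, a hash like the one shown above can be produced with a bcrypt one-liner. This sketch assumes a Node.js install where the bcryptjs module is available (it ships with Node-RED); the password is a placeholder, and the script falls back to a note if bcryptjs is missing.

```shell
# Print a bcrypt hash suitable for the httpNodeAuth "pass" field.
# Falls back to a note when Node.js/bcryptjs is not available locally.
if node -e "require('bcryptjs')" >/dev/null 2>&1; then
  node -e "console.log(require('bcryptjs').hashSync(process.argv[1], 8));" "my-secret-password"
else
  echo "bcryptjs not available here - run this from your Node-RED app directory"
fi
```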
There are a number of ways you can edit the file. Some of them include linking the Node-RED deployment to a git repository, or downloading the whole app, editing the file, and pushing it back to Bluemix (when you first deploy Node-RED from the starter pack it gives you instructions on how to download the source to make changes and then push them back; you can get to these instructions by clicking on the "Getting started" link in your Node-RED Bluemix console page).
But the quickest/simplest/dirtiest way is probably to just SSH into the instance and change the file with something like vi. Details on how to ssh to an app instance can be found here. But the following should work:
cf ssh [app name]
Once you have edited the file you will need to tell Bluemix to restart the app. You can do this from the web console or with the cf command-line tool.
(The changes made by this method will not survive if the app is restaged, or if Bluemix decides to move your instance to another machine internally, because it will rebuild the app from the pushed sources. The permanent solution is to download the source, edit it, and push it back.)
This link will help you but it's written in Japanese.
http://dotnsf.blog.jp/archives/1030376575.html
Summary
You can define "user-defined" environment variables through the IBM Cloud dashboard, including the variables that protect the Node-RED GUI.
You have to set them as follows:
NODE_RED_USERNAME : username
NODE_RED_PASSWORD : password

Azure Chef Automate Starter Kit

I have a Chef Automate server in Azure that I am just starting to configure according to the Chef Docs (https://docs.chef.io/azure_portal.html). I successfully setup my credentials and logged into the Chef server, however, I was not prompted to download the starter_kit.zip file. Is there a way to manually download this file, and if so, how/where can I do that?
I also ran the following command on the Chef Automate server and got a 404 Not Found error.
curl -X GET https://MY-AUTOMATE-SERVER.azure.com/biscotti/setup/starter-kit -k
I don't think the Starter Kit was carried over from the Manage product to Automate. You would normally just make your own knife config file these days.
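To make that concrete, a minimal hand-written knife configuration looks something like the sketch below; every value is a placeholder for your own server, organization, user, and key (the starter kit used to generate exactly this kind of file for you).

```ruby
# ~/.chef/config.rb -- all values are placeholders
chef_server_url "https://MY-AUTOMATE-SERVER.azure.com/organizations/my-org"
node_name       "my-username"
client_key      "~/.chef/my-username.pem"
```

Once it is in place, knife client list is a quick way to confirm the workstation can reach the server.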

Hgweb "Push" in IIS returning 502 (bad gateway)

I've got hgweb up and running on IIS 7 (on Windows Server 2008). The web interface works, and I can view, pull, and clone the repositories there. But I cannot push; doing so gives me a 502 error right after "searching for changes". Using --debug shows the last few lines as:
sending unbundle command
sending 622 bytes
HTTP Error: 502 (Bad Gateway)
I am using TortoiseHG to push, but the result is the same when using the mercurial command line.
I had followed the tutorial here: http://www.sjmdev.com/blog/post/2011/03/30/setting-mercurial-18-server-iis7-windows-server-2008-r2.aspx to set up hgweb.
Looks like an old question, but someone is bound to come across it again. I was close to banging my head against a wall; anyhow, the issue for us was the way the central repository was created. We cloned it from BitBucket while remotely connected to the machine as the local administrator.
The issue was in the [Repository]\.hg folder. You need to set the correct permissions on it. Try adding Everyone -> Full control for testing purposes. Please make sure you change this to a dedicated network login or an appropriate local account afterwards.
I was seeing the exact same behaviour - the push itself appeared to work, except for getting a Bad Gateway at the very end. After the correct permissions were set, the issue was gone.
Thinking about it now, probably the best solution is to add each network login that uses the repo to the machine's users, and then grant those local users access permissions on the .hg folder.
Hope it helps someone.
Try using the ISAPI module method instead of the CGI method that executes python.exe, as documented here. There's also another related, and possibly duplicate, question here as well.
Take a look at the push_ssl setting in your hgweb.config file.
I was getting the same error (I had mine set to '*'), and was able to resolve it by removing the line entirely. Granted, this makes Mercurial somewhat less secure, but it lets me get past the configuration issue (for now) while I investigate properly configuring SSL on the server.
You may also have to review the 'Allow_push' setting in order to get past further errors (or take another look at your authorization).
NOTE: At least in my case, having 'push_ssl = false' wasn't enough as that resulted in further errors (authorization failed).
(Again this is simply a temporary solution until the server can be properly secured.)
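For reference, both settings mentioned above live in the [web] section of hgweb.config. A sketch with example values (push_ssl = false should only ever be a stopgap on a trusted network):

```ini
[web]
; who may push ('*' means any authenticated user)
allow_push = *
; accept pushes over plain HTTP -- temporary until SSL is configured
push_ssl = false
```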
It could happen for different reasons; to get more details about the error, run
hg push --config ui.usehttp2=true --config ui.http2debuglevel=info
For example, the problem may occur because of a proxy server, or in the case when the Mercurial web server "forgets" about the repositories it needs to serve: if you are using the TortoiseHg Workbench, go to the Workbench UI, Repository -> Start Web Server, and make sure that your repository is in the list of served repos.
Try using https instead of http in .hg/hgrc; that resolved this problem for me with code.google.com.
I had this issue, and the problem ended up being the server running out of disk space.
