Eclipse Hono - Installation (Version 1.1.1)

I am not sure of the exact instructions to install Hono 1.1.1 locally. Following the documentation, I was able to build the project with Maven, but I am not sure how to proceed.
I was using version 0.9 before, in which I managed to run Hono on Docker Swarm by running the swarm_deploy.sh script located in the deploy folder after building the project with Maven. In Hono 1.1.1 the deploy folder contains services.sh instead of swarm_deploy.sh.
I would like to know: how can I run Hono on Docker Swarm as in version 0.9? Are there any major drawbacks to this approach?
Note: I am looking for a simple way to install Hono locally, as it's a small experimental project; I am not yet aiming at a fully scalable setup such as Kubernetes.

Sorry, but we no longer support deployment to plain Docker Swarm. You shouldn't have any issues installing Hono 1.1.1 using the Helm chart on a local minikube or kind (single-node) cluster, though. There is no big difference in resource consumption compared to plain Docker Swarm, in particular if you are using kind.
Using this approach there is also no need to compile Hono from source. Just follow the Hono chart's README.
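For reference, a minimal sketch of what that typically looks like, assuming the chart is pulled from the Eclipse IoT Packages chart repository (double-check the repository URL, chart name, and recommended values in the chart's README):

# start a throw-away local cluster (kind shown here; minikube works the same way)
kind create cluster --name hono

# add the chart repository and install Hono into its own namespace
helm repo add eclipse-iot https://eclipse.org/packages/charts
helm repo update
helm install hono eclipse-iot/hono -n hono --create-namespace

# watch the pods come up
kubectl -n hono get pods -w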

Related

How to distribute python3 code which contains external libraries

I wrote a small script in Python 3 that uses numpy, matplotlib, and other libraries, managed by PyCharm CE on my Linux machine.
I used PyCharm to write the code and create the virtual env.
The script only works inside PyCharm because of the dependencies.
A friend of mine wants to use my script on a Windows machine, and I'm not sure he even has Python installed.
How can I run my script outside pyCharm, or how can I activate the virtual env created by pyCharm to run the script?
And
How can I create a package or something so I can give the script to my friend, or anyone else, to use freely?
Thanks
One way of going about this is to ask your friend to install Python 3.x and pip on his system. Meanwhile, you create a requirements.txt listing the libraries that need to be installed and their versions, in this format:
dj-database-url==0.5.0
Django==2.2.5
pytz==2019.2
sqlparse==0.3.0
psycopg2>=2.7,<3.0
Then ask your friend to run pip install -r <path to requirements.txt>. This will install all the required libraries, and if there are no OS-specific dependencies, the project should run fine.
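As a small sketch (the script name and paths are placeholders), requirements.txt can be generated from the virtual env that PyCharm created and then replayed on the other machine:

# on your machine, inside the project's virtual env
pip freeze > requirements.txt

# on your friend's machine, after installing Python 3.x and pip
pip install -r requirements.txt
python your_script.py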
Another way of doing it, in the case of a bigger project with OS-level dependencies, is to use a containerization tool such as Docker. Containerization lets you run projects on other machines even when they depend on packages or environments that are only available/installed on your machine.
For example: imagine I created a Python-based application on my Debian machine that depends on multiple packages. I can build a Docker image using Python 3.x as the base and install the required packages inside the image at build time. It is fairly simple to do so. After that I can push the image to Docker Hub, which is a registry for storing Docker images. Do mind that images stored there are publicly available; if you are worried about that, you can use a private registry such as AWS ECR to store your images. Once I have pushed the image, anyone with access to it can pull it and spin up a container. A container is an instance of an image which can run the applications/scripts/anything that the image was built to do. In order to spin up containers they will need Docker installed on their machine.
This way you can share your project and make it run on anyone's machine with as little hassle as possible. They will not need anything other than Docker installed on their machine. Unlike virtual machines, Docker containers are not heavy on your machine.
In your case, using Docker you can build an image (much like an ISO image) with Python 3.x as the base, install all the required packages such as numpy, matplotlib and the other libraries, then copy the scripts required for the project into the image and push it to Docker Hub or a private registry of your choice. Then you can give your friend access to the image. Your friend will need Docker for Windows installed on his machine in order to spin up a container from the image you provide. This container will run your script, as it has all the required dependencies that you installed while building the image.
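A minimal sketch of that flow, using hypothetical names (the image tag myuser/myscript and the script name main.py are placeholders):

# write a Dockerfile next to your script
cat > Dockerfile <<'EOF'
FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY main.py .
CMD ["python", "main.py"]
EOF

# build and push the image (run docker login first)
docker build -t myuser/myscript:latest .
docker push myuser/myscript:latest

# on your friend's Windows machine (Docker for Windows installed)
docker run --rm myuser/myscript:latest

Note that a container has no display, so if the script produces matplotlib plots it is easiest to have it save them to files rather than open a window.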
For more info on Docker: https://www.docker.com/

How to install dependent binaries on Azure App Service with Linux?

I have a Spring Boot application that I am running on Azure App Service (Linux). My application depends on a binary and needs it to be present on the system. How do I install it on my App Service?
I tried the following two options:
SSHed in via Kudu and installed the package (apk add package), but the changes are not persisted beyond /home. The dependencies were installed in other folders, and when the App Service was redeployed all of them were gone.
Used the post-deployment hook to run apk add package once the deployment finishes. The script does run, as can be seen from the custom log statements, but I still do not see the installed package. Even when I use apt-get it says "unable to lock administration directory".
Using a statically compiled binary is not an option for me since that has its own issues.
Thanks
For the Tomcat, Java SE and WildFly apps on App Service Linux, you can create a file at /home/startup.sh and use it to initialize the container in any way you want (for example, you can install the required packages from this script).
App Service Linux checks for the presence of /home/startup.sh at startup time and executes it if it exists. This gives web app developers an extension point for any necessary customization during startup, such as installing required packages when the container starts.
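A minimal sketch of such a script (the package name is a placeholder; apk is used to match the question, swap in apt-get if your image is Debian-based):

#!/bin/sh
# /home/startup.sh - executed by App Service Linux when the container starts
# install the binary the Spring Boot app depends on (placeholder package name)
apk add --no-cache some-package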
I think this is a common problem with Linux on Azure.
I recommend taking a step back and considering one of the following options:
Run your application in a container that has all the dependencies you are looking for.
Run your application on a Linux VM (IaaS) instead of Azure App Service (Linux), which is PaaS.
Run your application on the Windows PaaS and add an extension for your dependency (you most likely won't run into this problem when using Windows).
While I understand that none of these might be acceptable to you, I have not found a solution to that problem under those specific circumstances.

Upgrading AWS EB Platform version from 2.0.1 to 3.1.0

My current platform version is: Node.js running on 64bit Amazon Linux/2.0.1
Which supports the following NodeJS versions: 0.12.6, 0.10.39, 0.10.38, 0.10.31, 0.8.28
I am looking for a way to upgrade to NodeJS 4.x.x, which seems to be available in platform version: Node.js running on 64bit Amazon Linux/3.1.0
but when I try to upgrade, it fails with an error saying the configured NodeJS version is not allowed on the new platform.
How can I select an allowed version when it is not available?
Any help is appreciated.
Thanks,
P.S.
1. Already tried via save/load configurations; unable to find any option there.
2. I don't want to set it up from scratch for now.
First, ensure that you have tested the changes adequately before deploying to production. After that, you can:
Note the name of the Platform ARN/Solution Stack you want to upgrade to.
Perform eb init --region REGION_NAME and pick the application and environment you are working on
Perform eb config. This opens your environment's configuration in an editor. Change the value of the PlatformArn to the one you noted above in step 1.
Also in the editor, find the option setting aws:elasticbeanstalk:container:nodejs. Change the NodeVersion to 6.9.1 or one that the error message above suggests.
Save and quit.
After the configuration is complete:
Perform eb status to verify that your environment is using the upgraded Solution Stack.
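Put together, the flow looks roughly like this (the region name and versions are placeholders, and the exact keys shown in the eb config editor may differ slightly between platform generations):

eb init --region us-east-1          # pick the application and environment when prompted
eb config                           # opens the environment configuration in your editor
# in the editor, change entries along these lines, then save and quit:
#   PlatformArn: <ARN of "Node.js running on 64bit Amazon Linux/3.1.0">
#   aws:elasticbeanstalk:container:nodejs:
#     NodeVersion: '6.9.1'
eb status                           # verify the environment now reports the new platform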
You can clone the existing environment to a new one, using a different platform version.
In the Actions menu, select "Clone with the latest platform".
That opens a new page where you can select from the available OS/NodeJS versions.
Once you are satisfied with the new environment, you can swap URLs with the old one in order to replace it. After that, you can remove the older environment.
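If you prefer the CLI, something along these lines should achieve the same (the environment names are placeholders; check eb clone --help for the options your CLI version supports):

eb clone my-env -n my-env-v3        # clone onto the latest platform version
# test my-env-v3, then swap CNAMEs so traffic goes to the new environment
eb swap my-env -n my-env-v3
eb terminate my-env                 # remove the old environment once you are happy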
This has happened a few times before when Amazon releases new environments that have no NodeJS versions in common with the old ones. The solution is to set the NodeJS version to an empty string, which means "use the latest version". It could break your app, but you can clone with the latest platform and switch to the desired NodeJS version later. So run this command to do that:
aws elasticbeanstalk update-environment --region "your region" --application-name "your app" --environment-name "your env" --option-settings "OptionName=NodeVersion,Namespace=aws:elasticbeanstalk:container:nodejs,Value=''"

shinyproxy basic basics (+ some general web knowledge)

The problem
While searching for ways to deploy Shiny apps I stumbled across ShinyProxy. From what I understand it's an alternative to ShinyServer. However, I lack some (very basic) knowledge needed to follow the guide provided.
The questions
Can ShinyProxy be installed just on any bought/rented server? Do I need to preinstall some other software?
Where do I type in the commands provided in the ShinyProxy guide?
Does Docker need to be installed on the server, or is it a tool used to deploy to the server and therefore installed locally?
The ShinyProxy guide misses a point about installing ShinyProxy. Why? Is it not installed (or is installation so obvious)?
I couldn't actually find instructions on how to run a shiny app with ShinyProxy.
The authors of ShinyProxy can probably provide a much better answer, but here is my understanding:
Your server needs to support Java 8 and Docker (or you can install Java 8 and Docker on your server).
Assuming you log on to your server via SSH, the commands are typed in the SSH terminal.
Yes, Docker needs to be installed on the server.
It appears that ShinyProxy does not need to be installed. You just need to download it (the shinyproxy-0.5.0.jar file) to a location on the server and then run java -jar shinyproxy-0.5.0.jar (in your SSH terminal).
To run a Shiny app, you need to package it as an R package first and then build a Docker image for that R package. The app then actually runs inside a Docker container. You also need a configuration file that tells ShinyProxy where to look for your Docker image. An example is here: https://github.com/openanalytics/shinyproxy-demo
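Putting that together as a rough sketch for a Debian/Ubuntu server reached over SSH (the package names and the demo image are examples; get the jar from the ShinyProxy downloads page and the application.yml from the demo repository above):

# on the server, over SSH: install Java 8 and Docker
sudo apt-get update
sudo apt-get install -y openjdk-8-jre-headless docker.io

# pull the demo app image referenced by the demo configuration
sudo docker pull openanalytics/shinyproxy-demo

# with shinyproxy-0.5.0.jar and application.yml in the current directory:
java -jar shinyproxy-0.5.0.jar
# then browse to http://<server>:8080 (ShinyProxy's default port)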

Can CCM create command use a locally installed version?

I'm trying to create a Cassandra Cluster locally on a single Windows 64 bit machine and followed these instructions.
I already have Cassandra 3.7 installed locally and was assuming there would be a way to make use of the same installation through ccm. But it looks like ccm always tries to download and install the Cassandra version. Looking into ccm create [options] didn't give me a pointer.
Does this need to be followed instead for an already installed version?
You can create a cluster with ccm by using the --install-dir= parameter as described in the README.
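A quick sketch (the cluster name, node count, and install path are placeholders; point --install-dir at your existing Cassandra 3.7 directory):

# create a 3-node cluster from the existing local installation instead of downloading one
ccm create local37 --install-dir=/path/to/apache-cassandra-3.7 -n 3
ccm start
ccm status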
