I am new to AWS CodeDeploy. I am getting an error when deploying:
Deployment Failed
No hosts succeeded
I checked the codedeploy-agent service on my Linux machine and it's running.
How can I fix this issue?
Most of the time this issue occurs due to insufficient IAM permissions on the instance and the CodeDeploy service. Check the /var/log/aws/codedeploy-agent/codedeploy-agent.log file for detailed information. You can also set :verbose: true in /etc/codedeploy-agent/conf/codedeployagent.yml to get more detail in the log file.
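A minimal sketch of that triage, assuming the agent's default install paths (the config edit and restart need root, and both steps are guarded so they no-op on a machine without the agent):

```shell
#!/bin/sh
# Default locations for the CodeDeploy agent's log and config files.
AGENT_LOG=/var/log/aws/codedeploy-agent/codedeploy-agent.log
AGENT_CONF=/etc/codedeploy-agent/conf/codedeployagent.yml

# 1. Read the end of the agent log for the real failure reason.
if [ -f "$AGENT_LOG" ]; then
    tail -n 100 "$AGENT_LOG"
else
    echo "no agent log found at $AGENT_LOG"
fi

# 2. Switch on verbose logging and restart the agent (run as root).
if [ -f "$AGENT_CONF" ]; then
    sed -i 's/^:verbose: false/:verbose: true/' "$AGENT_CONF"
    service codedeploy-agent restart
fi
```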
These are the IAM policies you need to update:
Policy for the CodeDeploy service role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "autoscaling:PutLifecycleHook",
                "autoscaling:DeleteLifecycleHook",
                "autoscaling:RecordLifecycleActionHeartbeat",
                "autoscaling:CompleteLifecycleAction",
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:PutInstanceInStandby",
                "autoscaling:PutInstanceInService",
                "ec2:Describe*"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
Trust policy for the CodeDeploy role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "codedeploy.us-west-2.amazonaws.com",
                    "codedeploy.us-east-1.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
Policy for the EC2 instance role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
As BrunoLevy said, we will need more information about the deployments you were trying to do.
However, as a starting point for debugging, you can see on the deployment page which step your deployment failed at.
You can also look at the host agent log file on your hosts (/var/log/aws/codedeploy-agent/codedeploy-agent.log); it contains information about the deployment.
This can also happen because CodeDeploy checks the health of the EC2 instances by hitting them over HTTP. Before deploying, run the bash script below on the instances and check that it worked; the httpd service must be started. Then reboot the instance.
#!/bin/bash
# Run this as root (e.g. with sudo); note that 'sudo su' inside a script
# would not apply to the commands that follow it.
yum update -y
yum install -y httpd ruby aws-cli
cd ~
# Download and install the CodeDeploy agent for your region.
aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
chmod +x ./install
./install auto
echo 'hello world' > /var/www/html/index.html
hostname >> /var/www/html/index.html
chkconfig httpd on
service httpd start
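After the script runs, two quick sanity checks can confirm the health-check path works. This is a sketch assuming the default service name and port 80; the commands only report, they change nothing:

```shell
#!/bin/sh
# The local URL the health check hits, and the agent's service name.
HEALTH_URL=http://localhost:80/
AGENT_SERVICE=codedeploy-agent

# Is httpd answering locally?
if command -v curl >/dev/null 2>&1; then
    curl -s -o /dev/null "$HEALTH_URL" \
        && echo "httpd is answering on $HEALTH_URL" \
        || echo "nothing answering on $HEALTH_URL"
fi

# Is the agent actually running?
if command -v service >/dev/null 2>&1; then
    service "$AGENT_SERVICE" status || echo "$AGENT_SERVICE is not running"
fi
```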
I am testing VS Code's built-in HttpTrigger on Windows 10. The function itself works, but debugging does not (breakpoints are never hit).
Below is the logging when starting Run -> Start Debugging:
connect econnrefused 127.0.0.1:9091
It processes the request successfully, but no breakpoints are hit.
Executing task: .venv\Scripts\python -m pip install -r requirements.txt <
Requirement already satisfied: azure-functions in
c:.me\mylab.code\azurecode\functions\funcpython\app.venv\lib\site-packages
(from -r requirements.txt (line 5)) (1.11.2)
Terminal will be reused by tasks, press any key to close it.
Executing task: func host start <
Found Python version 3.9.13 (py).
Azure Functions Core Tools
Core Tools Version: 3.0.3904 Commit hash: c345f7140a8f968c5dbc621f8a8374d8e3234206 (64-bit)
Function Runtime Version: 3.3.1.0
Functions:
HttpTrigger1: [GET,POST] http://localhost:7071/api/HttpTrigger1
For detailed output, run func with --verbose flag. info:
Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/2 POST http://127.0.0.1:63019/AzureFunctionsRpcMessages.FunctionRpc/EventStream
application/grpc info:
Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
Executing endpoint 'gRPC - /AzureFunctionsRpcMessages.FunctionRpc/EventStream'
[2022-07-21T21:23:59.336Z] Worker process started and initialized.
[2022-07-21T21:24:03.839Z] Host lock lease acquired by instance ID '000000000000000000000000A150D788'.
[2022-07-21T21:39:28.145Z] Executing 'Functions.HttpTrigger1' (Reason='This function was programmatically called via the host APIs.', Id=d2265561-ccdd-47e6-ae21-edc358753208)
[2022-07-21T21:39:28.218Z] Python HTTP trigger function processed a request.
[2022-07-21T21:39:28.338Z] Executed 'Functions.HttpTrigger1' (Succeeded, Id=d2265561-ccdd-47e6-ae21-edc358753208, Duration=208ms)
launch.json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Attach to Python Functions",
            "type": "python",
            "request": "attach",
            "port": 9091,
            "preLaunchTask": "func: host start"
        }
    ]
}
tasks.json
{
    "version": "2.0.0",
    "tasks": [
        {
            "type": "func",
            "command": "host start",
            "problemMatcher": "$func-python-watch",
            "isBackground": true,
            "dependsOn": "pip install (functions)"
        },
        {
            "label": "pip install (functions)",
            "type": "shell",
            "osx": {
                "command": "${config:azureFunctions.pythonVenv}/bin/python -m pip install -r requirements.txt"
            },
            "windows": {
                "command": "${config:azureFunctions.pythonVenv}\\Scripts\\python -m pip install -r requirements.txt"
            },
            "linux": {
                "command": "${config:azureFunctions.pythonVenv}/bin/python -m pip install -r requirements.txt"
            },
            "problemMatcher": []
        }
    ]
}
What I tried:
1. The suggestion from the link below: the "connect econnrefused 9091" window no longer appears, but debugging still doesn't work.
https://stackoverflow.com/a/53722540/665335
2. All three approaches from this answer: none of them work.
https://stackoverflow.com/a/71852516/665335
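Since the reported error is connect econnrefused on 127.0.0.1:9091, one more check worth making is whether the Python worker ever opens the debug port the launch.json attaches to. A small sketch (the port 9091 comes from the launch.json above; the commands only report, they change nothing):

```shell
#!/bin/sh
# Debug port from launch.json's "port" setting.
DEBUG_PORT=9091

# Look for a listener on the attach port; if nothing is listening, the
# worker was started without debugpy and the attach will be refused.
if command -v netstat >/dev/null 2>&1; then
    netstat -an | grep ":$DEBUG_PORT " \
        || echo "nothing listening on port $DEBUG_PORT"
else
    echo "netstat not available; try 'lsof -i :$DEBUG_PORT' instead"
fi
```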
Environment:
Azure Functions Core Tools: 3.0.4626
Function Runtime Version: 3.9.0.0
https://go.microsoft.com/fwlink/?linkid=2135274
https://github.com/Azure/azure-functions-core-tools
VS Code: Version 1.69
Python: 3.9.13
Python extension: v2022.10.1
Azure Functions extension: v1.7.4
After I updated the relevant extensions, it works now.
Examples of extensions (not all) that might be relevant:
Azure Functions: 1.7.4
Jupyter: 2022.6.12
Pylance: 2022.7.40
Python: 2022.10.1
There might be more extensions for you to update. Try to apply all relevant extension updates if possible.
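To spot outdated extensions quickly, the VS Code CLI can list everything installed with its version. A sketch, assuming the code command is on your PATH:

```shell
#!/bin/sh
# Print every installed VS Code extension with its version, sorted by name.
LIST_CMD="code --list-extensions --show-versions"

if command -v code >/dev/null 2>&1; then
    $LIST_CMD | sort
else
    echo "VS Code 'code' CLI not on PATH"
fi
```

Compare the versions printed against the Marketplace to see which extensions are stale.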
I am new to the development of blockchain technologies. To make the development and implementation process easier I am using the IBM extension, which includes a tutorial covering all of the infrastructure setup. I was able to finish the entire tutorial with no problem, and at this point I have:
A smart contract developed in TypeScript
An API in Node.js that inserts some assets
In this local environment everything works great: I can make requests from Postman, Node.js listens on port 8089, and the requests (GET, POST, PUT, DELETE) all succeed.
The problem comes when I create a Dockerfile for my Node.js project, which has the following structure:
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY --chown=node:node . .
EXPOSE 8089
CMD [ "node", "server.js" ]
The image builds and the container launches successfully, but when I try to make a request to the container running the Node.js API, I see the following error in the container's logs:
error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Endorser- name: org1peer-api.127-0-0-1.nip.io:8080, url:grpc://org1peer-api.127-0-0-1.nip.io:8080, connected:false, connectAttempted:true}
error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server org1peer-api.127-0-0-1.nip.io:8080 url:grpc://org1peer-api.127-0-0-1.nip.io:8080 timeout:3000
I am not sure whether it is because the container cannot connect to my Hyperledger Fabric network deployed via the IBM extension, or because I am not configuring the ports correctly.
Finally, here is the connection.json file generated by my Hyperledger Fabric IBM extension, which I am using to connect from the API to the chaincode:
{
    "certificateAuthorities": {
        "org1ca-api.127-0-0-1.nip.io:8080": {
            "url": "http://org1ca-api.127-0-0-1.nip.io:8080"
        }
    },
    "client": {
        "connection": {
            "timeout": {
                "orderer": "300",
                "peer": {
                    "endorser": "300"
                }
            }
        },
        "organization": "Org1"
    },
    "display_name": "Org1 Gateway",
    "id": "org1gateway",
    "name": "Org1 Gateway",
    "organizations": {
        "Org1": {
            "certificateAuthorities": [
                "org1ca-api.127-0-0-1.nip.io:8080"
            ],
            "mspid": "Org1MSP",
            "peers": [
                "org1peer-api.127-0-0-1.nip.io:8080"
            ]
        }
    },
    "peers": {
        "org1peer-api.127-0-0-1.nip.io:8080": {
            "grpcOptions": {
                "grpc.default_authority": "org1peer-api.127-0-0-1.nip.io:8080",
                "grpc.ssl_target_name_override": "org1peer-api.127-0-0-1.nip.io:8080"
            },
            "url": "grpc://org1peer-api.127-0-0-1.nip.io:8080"
        }
    },
    "type": "gateway",
    "version": "1.0"
}
Was the blockchain network still running when you created the Docker image? If not, the registered user in the 'wallet' will have become stale and will no longer be valid for connecting to the network. It's been a while since I last used the IBM extension, so I don't know whether it can stop the network as well as tear it down entirely, but do check that the client credentials are up to date as a potential reason for being unable to connect to the network.
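Another thing worth checking, separate from the wallet: org1peer-api.127-0-0-1.nip.io resolves to 127.0.0.1, and inside the container that loopback address is the container itself, not the host where the Fabric peer is listening. A sketch of two common workarounds; the image name my-nodejs-api is a placeholder, and the commands are printed rather than executed since you would substitute your own image:

```shell
#!/bin/sh
# The peer endpoint from connection.json; nip.io resolves it to 127.0.0.1,
# which inside a container is the container's own loopback, not the host.
PEER_HOST=org1peer-api.127-0-0-1.nip.io
IMAGE=my-nodejs-api   # placeholder for your API image name

# Workaround 1 (Linux): share the host's network namespace, so 127.0.0.1
# inside the container is the host's loopback:
echo "docker run --network host $IMAGE"

# Workaround 2 (Docker Desktop): map the nip.io name onto the host gateway:
echo "docker run -p 8089:8089 --add-host $PEER_HOST:host-gateway $IMAGE"
```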
According to the GitHub readme (https://github.com/SeleniumHQ/docker-selenium), the Chrome standalone image needs the option "-v /dev/shm:/dev/shm", but I am struggling to find in the documentation how to do this correctly on Azure Container Instances.
The full docker run command looks like this:
docker run -d -p 4444:4444 -v /dev/shm:/dev/shm selenium/standalone-chrome:3.12.0-cobalt
My reason for needing this is I have tests that specifically fail due to not having this option enabled.
Currently my azure command looks like this:
az container create --resource-group ${resourceGroup} --name ${containerName} --image selenium/standalone-chrome:3.12.0-cobalt --dns-name-label ${dnsNameLabel} --ports 4444
I have been trying to play around with the --azure-file-volume options with no success. Any help is greatly appreciated.
Edit:
Until this is figured out I have decided to use Azure VMs, with a VM image that has Docker installed and starts the docker-selenium container. It is not quite as fast or as pretty to script, but it gets the job done without the issue of passing options when starting a Docker container. For anyone who decides to go this route, here is my cloud-init code for the VM:
#cloud-config
package_upgrade: true
package_reboot_if_required: true
runcmd:
- apt-get update
- curl -fsSL https://get.docker.com/ | sh
- curl -fsSL https://get.docker.com/gpg | sudo apt-key add -
- sudo docker run -d -p 4444:4444 -v /dev/shm:/dev/shm selenium/standalone-chrome:3.12.0-cobalt
While there isn't a way to use Azure CLI to do this, you can use an Azure Resource Manager Template deployment.
Create a deployment template file, for example:
selenium-aci-standalone-example.json
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "dnsNameLabel": {
            "type": "String"
        }
    },
    "resources": [
        {
            "apiVersion": "2018-06-01",
            "location": "[resourceGroup().location]",
            "name": "[parameters('dnsNameLabel')]",
            "properties": {
                "containers": [
                    {
                        "name": "standalone-chrome",
                        "properties": {
                            "image": "selenium/standalone-chrome",
                            "ports": [{ "port": "4444", "protocol": "TCP" }],
                            "resources": { "requests": { "cpu": "1.0", "memoryInGb": "1.5" } },
                            "volumeMounts": [{ "name": "devshm", "mountPath": "/dev/shm" }]
                        }
                    }
                ],
                "ipAddress": {
                    "ports": [{ "port": "4444", "protocol": "TCP" }],
                    "type": "Public",
                    "dnsNameLabel": "[parameters('dnsNameLabel')]"
                },
                "osType": "Linux",
                "volumes": [
                    {
                        "name": "devshm",
                        "emptyDir": {}
                    }
                ]
            },
            "type": "Microsoft.ContainerInstance/containerGroups"
        }
    ]
}
Then you can execute the deployment with Azure CLI:
az group create -n selenium-standalone-rg -l westus2
az group deployment create -g selenium-standalone-rg --template-file .\selenium-aci-standalone-example.json --parameters dnsNameLabel=test-standalone-selenium-chrome
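Once the deployment finishes, the grid URL can be derived from the FQDN ACI assigns. A sketch using the resource names from the commands above, guarded so it only reports if the az CLI is missing:

```shell
#!/bin/sh
# Resource names used in the deployment commands above.
RG=selenium-standalone-rg
DNS_LABEL=test-standalone-selenium-chrome

if command -v az >/dev/null 2>&1; then
    # The container group is named after the dnsNameLabel parameter.
    FQDN=$(az container show -g "$RG" -n "$DNS_LABEL" \
        --query ipAddress.fqdn -o tsv)
    echo "Selenium hub: http://$FQDN:4444/wd/hub"
else
    echo "az CLI not available"
fi
```

Point your RemoteWebDriver at the printed URL to verify the grid is reachable.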
Mounting an emptyDir volume to /dev/shm on the node containers solved this issue for us when running Selenium Grid on Azure Container Instances. It seems it's not possible to directly control the size of the volume (I couldn't find information about the size of an emptyDir volume in the ACI documentation), but all the broken-pipe errors went away in our test runs after we added the volume configuration to our ARM template.
How can I use electron-builder's auto-update feature with Amazon S3 in my electron app?
Maybe someone who has already implemented it can give more details than those provided in the electron-builder documentation?
Yeah, I agree with you, I've been through this recently...
Even if I'm late, I will try to share as much as I know for others!
In my case, I'm using electron-builder to package my Electron/Angular app.
To use electron-builder, I suggest you create a file called electron-builder.json at the project root.
This is the content of mine:
{
    "productName": "project-name",
    "appId": "org.project.project-name",
    "artifactName": "${productName}-setup-${version}.${ext}", // the output artifact name
    "directories": {
        "output": "builds/" // the output directory
    },
    "files": [ // included/excluded files
        "dist/",
        "node_modules/",
        "package.json",
        "**/*",
        "!**/*.ts",
        "!*.code-workspace",
        "!package-lock.json",
        "!src/",
        "!e2e/",
        "!hooks/",
        "!angular.json",
        "!_config.yml",
        "!karma.conf.js",
        "!tsconfig.json",
        "!tslint.json"
    ],
    "publish": {
        "provider": "generic",
        "url": "https://project-release.s3.amazonaws.com",
        "path": "bucket-path"
    },
    "nsis": {
        "oneClick": false,
        "allowToChangeInstallationDirectory": true
    },
    "mac": {
        "icon": "src/favicon.ico"
    },
    "win": {
        "icon": "src/favicon.ico"
    },
    "linux": {
        "icon": "src/favicon.png"
    }
}
As you can see, you need to add the publish config if you want electron-builder to publish the app to S3 automatically. The thing I don't like about that is that all artifacts and files end up in the same folder. In my case, as you can see in the package.json below, I decided to package manually with electron-builder build -p never. This basically says "never publish it", but I needed it because without it the latest.yml file would not be generated. I'm using GitLab CI to generate the artifacts and then a script to publish them to S3, but you can use the -p always option if you want.
electron-builder needs the latest.yml file, because that is how it knows whether the artifact on S3 is more recent.
Example latest.yml content:
version: 1.0.1
files:
  - url: project-setup-1.0.0.exe
    sha512: blablablablablablablabla==
    size: 72014605
path: project-setup-1.0.0.exe
sha512: blablablablabla==
releaseDate: '2019-03-10T22:18:19.735Z'
One other important thing to mention is that electron-builder will try to fetch content at the URL you provided in the electron-builder.json publish config, like so:
https://project-release.s3.amazonaws.com/latest.yml
https://project-release.s3.amazonaws.com/project-setup-1.0.0.exe
And this is the default uploaded content.
For that, you need your S3 bucket to be public so everyone with the app can fetch the newest versions...
Here's the policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:GetObjectVersion",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::your-bucket-name/*",
                "arn:aws:s3:::your-bucket-name"
            ]
        }
    ]
}
Replace your-bucket-name with the name of your bucket.
Second, to package the app, I added these scripts to package.json ("build:prod" is for Angular only):
"scripts": {
"build:prod": "npm run build -- -c production",
"package:linux": "npm run build:prod && electron-builder build --linux -p never",
"package:windows": "npm run build:prod && electron-builder build --windows -p never",
"package:mac": "npm run build:prod && electron-builder build --mac -p never",
},
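The publish script mentioned above isn't shown; here is a minimal sketch of one using the AWS CLI, assuming the builds/ output directory and the project-release bucket from the electron-builder.json (both names are assumptions, and the upload is guarded so it no-ops where the CLI or directory is missing):

```shell
#!/bin/sh
# Output dir and bucket from the electron-builder.json above; substitute
# your own bucket name -- 'project-release' is an assumption.
OUT_DIR=builds
BUCKET=project-release

if command -v aws >/dev/null 2>&1 && [ -d "$OUT_DIR" ]; then
    # latest.yml must sit next to the installer at the bucket root,
    # because that is the URL the updater fetches.
    aws s3 cp "$OUT_DIR/latest.yml" "s3://$BUCKET/latest.yml"
    aws s3 cp "$OUT_DIR" "s3://$BUCKET/" --recursive \
        --exclude "*" --include "*.exe" --include "*.blockmap"
else
    echo "aws CLI not available or $OUT_DIR missing; nothing uploaded"
fi
```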
Finally, there is a really well-written article on this that works with GitLab CI.
I might have forgotten some parts, so feel free to ask any questions!
Here is the documentation for the S3 auto-updater options in electron-builder:
https://www.electron.build/configuration/publish#s3options
You put your configuration inside the package.json build tag, for example:
{
    "name": "ps-documentation",
    "description": "Provides a design pattern for Precisão Sistemas",
    "build": {
        "publish": {
            "provider": "s3",
            "bucket": "your-bucket-name"
        }
    }
}
I'm trying to deploy a multi-container Docker application to Elastic Beanstalk. There are two containers: one for the supervisor+uwsgi+Django application and one for the JavaScript frontend.
Using docker-compose, it works fine locally.
My docker-compose file:
version: '2'
services:
  frontend:
    image: node:latest
    working_dir: "/frontend"
    ports:
      - "3000:3000"
    volumes:
      - ./frontend:/frontend
      - static-content:/frontend/build
    command: bash -c "yarn install && yarn build"
  web:
    build: web/
    working_dir: "/app"
    volumes:
      - ./web/app:/app
      - static-content:/home/docker/volatile/static
    command: bash -c "pip3 install -r requirements.txt && python3 manage.py migrate && supervisord -n"
    ports:
      - "80:80"
      - "8000:8000"
    depends_on:
      - db
      - frontend
volumes:
  static-content:
The image for the Node.js container is the official Docker one.
For "web" I use the following Dockerfile:
FROM ubuntu:16.04
# Install required packages and remove the apt packages cache when done.
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y \
        python3 \
        python3-dev \
        python3-setuptools \
        python3-pip \
        nginx \
        supervisor \
        sqlite3 && \
    pip3 install -U pip setuptools && \
    rm -rf /var/lib/apt/lists/*
# Install uwsgi now because it takes a little while.
RUN pip3 install uwsgi
# Set up all the config files.
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
COPY nginx-app.conf /etc/nginx/sites-available/default
COPY supervisor-app.conf /etc/supervisor/conf.d/
EXPOSE 80
EXPOSE 8000
However, AWS uses its own "compose" settings, defined in Dockerrun.aws.json, which has a different syntax, so I had to adapt the file.
First, I used the container-transform tool to generate the file based on my docker-compose file.
Then I had to make some adjustments, e.g. the AWS file doesn't seem to have a "workdir" property, so I had to change it accordingly.
I also published my image to AWS Elastic Container Registry.
The file became the following:
{
    "AWSEBDockerrunVersion": 2,
    "containerDefinitions": [
        {
            "command": [
                "bash",
                "-c",
                "yarn install --cwd /frontend && yarn build --cwd /frontend"
            ],
            "essential": true,
            "image": "node:latest",
            "memory": 128,
            "mountPoints": [
                {
                    "containerPath": "/frontend",
                    "sourceVolume": "_Frontend"
                },
                {
                    "containerPath": "/frontend/build",
                    "sourceVolume": "Static-Content"
                }
            ],
            "name": "frontend",
            "portMappings": [
                {
                    "containerPort": 3000,
                    "hostPort": 3000
                }
            ]
        },
        {
            "command": [
                "bash",
                "-c",
                "pip3 install -r /app/requirements.txt && supervisord -n"
            ],
            "essential": true,
            "image": "<my-ecr-image-path>",
            "memory": 128,
            "mountPoints": [
                {
                    "containerPath": "/app",
                    "sourceVolume": "_WebApp"
                },
                {
                    "containerPath": "/home/docker/volatile/static",
                    "sourceVolume": "Static-Content"
                },
                {
                    "containerPath": "/var/log/supervisor",
                    "sourceVolume": "_SupervisorLog"
                }
            ],
            "name": "web",
            "portMappings": [
                {
                    "containerPort": 80,
                    "hostPort": 80
                },
                {
                    "containerPort": 8000,
                    "hostPort": 8000
                }
            ],
            "links": [
                "frontend"
            ]
        }
    ],
    "family": "",
    "volumes": [
        {
            "host": {
                "sourcePath": "/var/app/current/frontend"
            },
            "name": "_Frontend"
        },
        {
            "host": {
                "sourcePath": "static-content"
            },
            "name": "Static-Content"
        },
        {
            "host": {
                "sourcePath": "/var/app/current/web/app"
            },
            "name": "_WebApp"
        },
        {
            "host": {
                "sourcePath": "/var/log/supervisor"
            },
            "name": "_SupervisorLog"
        }
    ]
}
But then after deploy I see it on the logs:
> ------------------------------------- /var/log/containers/frontend-xxxxxx-stdouterr.log
>
> ------------------------------------- yarn install v1.3.2
> [1/4] Resolving packages...
> [2/4] Fetching packages...
> info There appears to be trouble with your network connection.
> Retrying...
> info There appears to be trouble with your network connection.
> Retrying...
> info There appears to be trouble with your network connection.
> Retrying...
> info There appears to be trouble with your network connection.
> Retrying...
> info There appears to be trouble with your network connection.
> Retrying...
> info There appears to be trouble with your network connection.
> Retrying...
> info There appears to be trouble with your network connection.
> Retrying...
> error An unexpected error occurred:
> "https://registry.yarnpkg.com/aws-sdk/-/aws-sdk-2.179.0.tgz:
> ESOCKETTIMEDOUT".
I have tried to increase the timeout for yarn, but the error still happens.
I also can't execute bash in the container (it gets stuck forever), or any other command (e.g. trying to reproduce the yarn issue).
The _SupervisorLog volume doesn't seem to be mapped accordingly; the folder is empty, and I can't understand exactly what is happening or reproduce the error properly.
If I try to go to the URL, sometimes I get a Bad Gateway and sometimes I don't even get that. If I try to go to the path where it should load the frontend, I get a "Forbidden" error.
Just to clarify, all of this works fine when I run the containers locally with docker-compose.
I have started using Docker recently, so feel free to point out any other issues you might find in my files.