Tutorial fails for AWS Node.js - node.js

I am currently working through AWS's Node.js tutorial, but am stymied at the deployment phase. When I try to upload the provided source bundle, the build fails and I get the following error:
Unable to deploy application version: Configuration validation exception: Invalid option specification (Namespace: 'aws:elasticbeanstalk:container:nodejs:staticfiles', OptionName: '/static'): Unknown configuration setting.
Where does this error come from, and where can I look to fix it?

"The current configuration assumes that you are using Amazon Linux AMI (pre-Amazon Linux 2), but the current default image is "Amazon Linux 2" and the static file parameter has changed."
Solution:
Edit the .ebextensions/options.config file and change:
aws:elasticbeanstalk:container:nodejs:staticfiles:
to
aws:elasticbeanstalk:environment:proxy:staticfiles:
reference: https://github.com/aws-samples/eb-node-express-sample/pull/21/files
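For context, a minimal sketch of what the updated .ebextensions/options.config could look like after that change (the /static option name comes from the error message; the directory it maps to is an assumption based on the sample):

option_settings:
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /static: /static

With this namespace, the proxy on the Amazon Linux 2 platform serves the mapped folder directly instead of routing those requests through Node.js.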

Related

Error accessing remote module registry in Terraform

We have been given a remote private Terraform registry to use. Along with that came a credentials name and token.
Once we had configured the general details in the Terraform script, we created a .terraformrc file in the same directory (on a Mac) containing the following:
credentials "my remote registy"
token = "tokenvaluegoeshere"
When we run terraform init we get the following error (for all modules):
Error: Error Accessing remote module registry
failed to retrieve available versions for module "x" from external.location - failed to request discovery document: 401 Unauthorised
It feels like I haven't got something set up correctly in Terraform (even though it looks fine).
I have tried running Terraform from different locations on my Mac and have also created new .terraformrc files, but it still doesn't work.
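For comparison, a minimal sketch of the CLI configuration format, with a hypothetical registry hostname (the quoted hostname has to match the host used in the module source addresses exactly, and Terraform only reads this file from ~/.terraformrc or the path in the TF_CLI_CONFIG_FILE environment variable, not from the project directory):

# ~/.terraformrc (macOS/Linux); registry.example.internal is a placeholder
credentials "registry.example.internal" {
  token = "tokenvaluegoeshere"
}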

Azure Function deployed using FTP, how to fix non-existent Docker container issue

I am in the development phase, and I am trying out Azure Functions with the following settings:
Linux
Premium Plan
NodeJS 12
Deploy using FTP
What I have done:
I have deployed a sample Durable Functions HTTP Starter as specified here: https://learn.microsoft.com/en-us/azure/azure-functions/durable/quickstart-js-vscode#client-function-http-starter
I deployed my code to xxxxxxxxxxx.ftp.azurewebsites.windows.net under /site/wwwroot.
I then received the following error in LogFiles/2020_06_10_xxxx_docker.log:
2020-06-10T01:05:51.825Z ERROR - Image pull failed: Verify docker image configuration and credentials (if using private repository)
2020-06-10T01:05:51.845Z INFO - Stopping site XXXXXXXXXX because it failed during startup.
2020-06-10T01:09:59.152Z INFO - Pulling image from Docker hub: mcr.microsoft.com/azure-functions/node:3.0-node8-appservice-stage6
2020-06-10T01:10:00.049Z ERROR - DockerApiException: Docker API responded with status code=NotFound, response={"message":"manifest for mcr.microsoft.com/azure-functions/node:3.0-node8-appservice-stage6 not found: manifest unknown: manifest tagged by \"3.0-node8-appservice-stage6\" is not found"}
Upon inspection, the mcr.microsoft.com/azure-functions/node:3.0-node8-appservice-stage6 Docker image doesn't exist, so the pull failed.
My question is: how do I instruct the Azure Function to use a valid Docker image instead of a non-existent one? Or did I do something wrong in the steps above that resulted in this issue? Thanks.
Completely removing the Azure Function and creating a new one fixed this issue.
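If recreating the app is not desirable, one thing worth checking is the runtime stack recorded on the site, since that is what the platform uses to pick the image; a rough sketch with the Azure CLI (names are placeholders, and resetting linuxFxVersion to "Node|12" is an assumption based on the NodeJS 12 setting above):

# Inspect the current runtime setting (linuxFxVersion) for the function app
az functionapp config show --name <app-name> --resource-group <resource-group>

# Point the app back at the built-in Node 12 runtime instead of the missing image
az functionapp config set --name <app-name> --resource-group <resource-group> --linux-fx-version "Node|12"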

VSTS - Build a Docker Image

I have a .NET Core repo in VSTS. I'm trying to create a Build pipeline that builds a Docker image and adds it to my Azure Container Registry. My Build pipeline has a Docker task. This task has the "Build an image" action selected. This action relies on my Dockerfile, which looks like this:
FROM microsoft/dotnet:2.1.2-runtime-nanoserver-1803
# Install .NET Core
ENV DOTNET_VERSION 2.1.2
When my Build pipeline runs, I get an error that says:
failed to register layer: re-exec error: exit status 1: output: ProcessUtilityVMImage \\?\C:\ProgramData\docker\windowsfilter\82aba535faccd8bf0e5ce3c122247672fa671214000a12c5481972212c5e2ca0\UtilityVM: The system cannot find the path specified.
##[error]C:\Program Files\Docker\docker.exe failed with return code: 1
Why am I getting this error? How do I fix it?
It should be the same issue as this one: https://github.com/Microsoft/vsts-tasks/issues/6510
It seems there are still some issues with nanoserver-1803.
Try setting up and hosting a custom agent on an Azure VM, then check it again.
https://github.com/Microsoft/vsts-tasks/issues/6510#issuecomment-370152300
I found what might be an explanation for this error: VSTS agents don't seem to support nanoserver-1709 at the moment. Maybe this will change with the next version, 1803.
See details here: Microsoft/vsts-agent#1393
When I set up and host a custom agent on a machine on Azure, it works. So it's not a bug with this task. I'll close this issue. Thanks!
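If hosting a custom agent is not an option, the other way around the mismatch is to base the image on the same Windows release the hosted agent runs (Windows Server 2016 for the VS2017 hosted pool; Windows containers have to match the host OS version unless Hyper-V isolation is available). A sketch of the Dockerfile change, assuming such a tag exists for the .NET Core version you need:

# The exact tag is an assumption -- check the repository for a nanoserver-sac2016
# (Windows Server 2016) variant of the runtime version you want
FROM microsoft/dotnet:2.1-runtime-nanoserver-sac2016

# Install .NET Core
ENV DOTNET_VERSION 2.1.2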

Bad SSL Key When Trying to Use spark-ec2 script to launch cluster on EC2?

Version of Apache Spark: spark-1.2.1-bin-hadoop2.4
Platform: Ubuntu
I have been using the spark-1.2.1-bin-hadoop2.4/ec2/spark-ec2 script to create temporary clusters on ec2 for testing. All was working well.
Then I started to get the following error when trying to launch the cluster:
[Errno 185090050] _ssl.c:344: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib
I have traced this back to the following line in the spark_ec2.py script:
conn = ec2.connect_to_region(opts.region)
Thus, the first time the script interacts with ec2, it is throwing this error. Spark is using the Python boto library (included with the Spark download) to make this call.
I assume the error I am getting is because of a bad cacert.pem file somewhere.
My question: which cacert.pem file gets used when I try to invoke the spark-ec2 script, and why is it not working?
I also had this error with spark-1.2.0-bin-hadoop2.4
SOLVED: the embedded boto library that comes with Spark found a ~/.boto config file I had for another non-Spark project (actually it was for the Google Cloud Services...GCS installed it, I had forgotten about it). That was screwing everything up.
As soon as I deleted the ~/.boto config file GCS installed, everything started working again for Spark!
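If you want to confirm that it really is the ~/.boto file rather than the spark-ec2 script itself, the failing call can be reproduced with boto 2.x directly; a small sketch, with the region name as an assumption:

# Reproduce the connection step from spark_ec2.py outside of Spark (boto 2.x)
import boto
import boto.ec2

# Show which config sections boto picked up (e.g. from a stray ~/.boto file)
print(boto.config.sections())

# This is essentially the call spark_ec2.py makes; it should raise the same
# SSL/certificate error while the bad ~/.boto is present and work once it is removed
conn = boto.ec2.connect_to_region("us-east-1")
print(conn)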

Cobertura - java.lang.IllegalArgumentException: Class does not have a default interface

I am using Cobertura for code coverage of integration tests. I am facing the issue below while deploying the instrumented jar on a JBoss server.
DEPLOYMENTS IN ERROR: Deployment "vfszip:/D:/jboss-5.1.0.GA/server/test/some_jar.jar/" is in error due to the following reason(s):
java.lang.IllegalArgumentException:
Class class com.someclass does not have a default interface
Here are the steps I followed so far:
Downloaded cobertura-1.9.4.1.
Using the command cobertura-instrument.bat C:\some_jar.jar, I generated the .ser file and the instrumented jar for some_jar.jar.
Placed the instrumented jar in the JBoss server's test/ folder.
Copied the .ser file to the JBoss/bin folder.
Copied cobertura.jar to the JBoss/lib folder.
Ran the JBoss server.
Please let me know if I am missing anything here.
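For reference, the instrumentation command above is usually run with explicit output locations so it is clear where the instrumented classes and the .ser file end up; a rough sketch with placeholder paths:

REM Write instrumented classes to C:\instrumented and coverage data to C:\cobertura.ser
cobertura-instrument.bat --destination C:\instrumented --datafile C:\cobertura.ser C:\some_jar.jar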
This is probably a configuration file error; you may not have adjusted a setting before starting for the first time.
