Whenever I try to trigger a job that depends on that EC2 slave, it just sits in the queue. I looked at the logs and saw this exception:
com.amazonaws.services.ec2.model.AmazonEC2Exception: Network interfaces and an instance-level security groups may not be specified on the same request
Whenever I click on Build Executor Status on the left, there is a button that says "provision via ". When I click on it, I see the correct Amazon Linux image name that I entered under Cloud in Jenkins' System Configuration, but when I click on that, I see that same exception as well... I just don't know how to fix this and cannot find any helpful information on it.
Any help would be much appreciated.
OK, I'm not exactly sure what was causing the error, since I don't really know how the Jenkins plugin interfaces with the AWS API. But after a good amount of trial and error, I was able to provision the On Demand worker by adding more details/parameters in Configuration, under Cloud.
Adding a subnet ID for the VPC and an IAM instance profile did the trick (I already had everything else, including security groups, availability zone, instance type, etc.). So it seems you either leave out security groups, or go all in and fill in pretty much everything.
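For context, the underlying EC2 API constraint seems to be that RunInstances rejects instance-level security groups when an explicit network interface is also part of the request; once a network interface is in play, the security groups have to be attached to the interface itself. A rough AWS CLI sketch of the difference (all IDs are placeholders, and this is my guess at what the plugin was sending, not a confirmed trace):

# Fails with the same error: instance-level security groups plus an explicit network interface
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type t3.micro \
  --security-group-ids sg-0123456789abcdef0 \
  --network-interfaces 'DeviceIndex=0,SubnetId=subnet-0123456789abcdef0'

# Works: attach the security groups to the network interface instead
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type t3.micro \
  --network-interfaces 'DeviceIndex=0,SubnetId=subnet-0123456789abcdef0,Groups=sg-0123456789abcdef0'

That would explain why filling in the subnet (which presumably makes the plugin move everything onto a network interface) made the error go away.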
As an FYI, if you see this with Jenkins EC2 Plugin v1.46, it looks like a genuine bug:
https://issues.jenkins-ci.org/browse/JENKINS-59543
The solution is to use 1.45 until it's fixed (see link above for more details).
I'm deploying a container to a Container-Optimized OS (COS) VM on Google Compute Engine.
I want to specify Logging and Monitoring for the VM. There are two ways to do this:

1. Specify metadata flags
2. Mark the checkboxes in the Cloud Console UI
But when I then click on "Equivalent command line", there's no indication of these options.
Am I just misinterpreting something here or am I not allowed to specify these flags in the command?
I tried with a non-COS VM instance and the expected metadata flag showed up in the generated command. But it does not show up in the COS command.
gcloud compute instances create instance-1 \
...
--metadata=MY_TEST_FLAG=test_value
Yes. This issue occurs when creating a VM using Container-Optimized OS images, but it affects the generated command line only; the REST equivalent is generated properly. As a workaround, you can add the metadata flag to the generated command, as shown below:
--metadata=google-logging-enabled=true,google-monitoring-enabled=true
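For reference, a full command with the workaround applied might look like this (the container deployment flow uses create-with-container; the zone, project, and container image here are placeholders, and the logging/monitoring flags are the part that matters):

gcloud compute instances create-with-container instance-1 \
  --zone=us-central1-a \
  --image-family=cos-stable \
  --image-project=cos-cloud \
  --container-image=gcr.io/my-project/my-app:latest \
  --metadata=google-logging-enabled=true,google-monitoring-enabled=true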
I have raised a request about this issue. Please monitor the Google Public Issue Tracker for further updates on the fix.
If you find any such issues in the future, you can report them to Google using Report issues and request features with issue trackers.
I am using the RDS command line tool from here and am having trouble copying the parameter group to a different region. Running the rds-copy-db-parameter-group fails with the following error:
rds-copy-db-parameter-group: Could not find the resource you requested: DB ParameterGroup not found, not allowed to do cross region copy.
The command I am using is:
rds-copy-db-parameter-group arn:aws:rds:ap-southeast-1:myAccntId:pg:myParamGroup-utf8mb4 -t copyOfMyParam -td testcopy
I'm pretty sure the ARN is correct and the parameter group does exist. Is this a problem with the tool or with AWS? Is anyone else encountering a similar issue?
I ran into this same issue recently and opened a support ticket with AWS. The response I got was that the RDS team added this feature to the documentation but haven't yet built the actual support for this feature.
This bothered me a lot and ate up a couple of hours so I put this simple script together. There's loads of room for improvement so please share if you improve upon it or find issues!
https://gist.github.com/phill-tornroth/f0ef50f9402c7c94cbafd8c94bbec9c9
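For anyone who wants the gist of the approach without following the link: since there is no server-side cross-region copy, you essentially recreate the group in the target region and replay the user-modified parameters. A minimal sketch with the modern AWS CLI (the target region, group names, and parameter-group family are placeholders for your own values; large groups need the last call chunked, since modify-db-parameter-group accepts at most 20 parameters per request):

# Create an empty group in the target region (the family must match the source group)
aws rds create-db-parameter-group \
  --region us-east-1 \
  --db-parameter-group-name copyOfMyParam \
  --db-parameter-group-family mysql5.6 \
  --description "Copy of myParamGroup-utf8mb4"

# Export only the user-modified parameters from the source group
aws rds describe-db-parameters \
  --region ap-southeast-1 \
  --db-parameter-group-name myParamGroup-utf8mb4 \
  --output json \
  --query "Parameters[?Source=='user'].{ParameterName:ParameterName,ParameterValue:ParameterValue,ApplyMethod:ApplyMethod}" \
  > params.json

# Replay them into the new group in the target region
aws rds modify-db-parameter-group \
  --region us-east-1 \
  --db-parameter-group-name copyOfMyParam \
  --parameters file://params.json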
When I manually provision a system (select the system, distro tree, etc) and click on Provision, I do not see a new job created. I get the impression that nothing is happening.
I am using Beaker 19.0.
This was one of the changes in Beaker 19.0: a manual provision doesn't create a job any more. To see what is happening, you have to hop onto the system's serial console, either physically (if possible) or by using the console program, provided the required infrastructure is in place.
To learn more about the change, see the relevant release note entry.
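If conserver is what's deployed in your lab, attaching to the serial console is typically a one-liner (the hostname here is a placeholder for your system's FQDN):

console system1.example.com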
I've just installed Neo4j 1.8.2 onto Azure by following this step-by-step process...
http://de.slideshare.net/neo4j/neo4j-on-azure-step-by-step-22598695
Unfortunately, when I browse to http://:7474/webadmin, Fiddler says Error 10061 - No connection could be made because the target machine actively refused it.
I've followed the instructions exactly and haven't received any errors.
Any help much appreciated.
So, I think I got to the bottom of this: it was due to the size of the VM I was creating. It looks like the problem occurs when running on Extra Small instances. I created a new installation using a Small instance and everything now works :).
Try setting the server to accept connections from all hosts, and maybe use a newer Neo4j, say 1.9.4:
http://docs.neo4j.org/chunked/stable/security-server.html#_secure_the_port_and_remote_client_connection_accepts
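On those 1.x versions the relevant setting lives in conf/neo4j-server.properties; something like this should do it (0.0.0.0 binds the web server to all interfaces rather than localhost only):

# conf/neo4j-server.properties
org.neo4j.server.webserver.address=0.0.0.0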
The way the VM Depot image is set up, it's pre-configured to allow all hosts to connect, and the Neo4j server will auto-start. The only thing you need to take care of, when constructing your VM, is to open an Input Endpoint, with any public port you want (preferably 7474 to stay true to Neo4j) and internal port 7474.
Note that the UI has changed a bit since the how-to was published: you can specify the endpoint as the last step before creating your virtual machine. Other than that, the instructions should be the same. And... once the VM is up and running (it'll take about 5-10 minutes), you just visit http://yourservicename.cloudapp.net:7474 and you should see the web admin. Note: this is not the same as your VM name. If you named your VM something like 'neo', then you do not want http://neo:7474 or http://neo.cloudapp.net:7474. You need to use your cloud service name (you had to create a name for the service when you deployed the VM).
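If you prefer scripting over the portal, the classic Azure cross-platform CLI of that era could open the endpoint as well; something along these lines (the VM name is a placeholder, and the arguments are public port followed by local port):

azure vm endpoint create neo-vm 7474 7474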
I've deployed that image several times in demos, and just tried again right now to make sure nothing wonky happened. Worked perfectly.
I know this question has been asked before, like this one. But they are all very old, the method is very complex, and I could not really get it to work when I tried. So I wonder if the new Azure SDK provides something easier; I guess it should come from the Microsoft.WindowsAzure.ServiceRuntime namespace.
I need this because I use a worker role that mounts a CloudDrive, keeps checking it and shares it to the network, then builds a Lucene.NET index on it.
This deployment works very well.
Since only one instance can mount the CloudDrive, when I do a VIP swap I have to stop (or delete) the staging deployment before the new production deployment can successfully mount the drive. This causes the full-text search to stop for a while (around 1-2 minutes if everything goes well and I click the button fast enough). So I wonder if I can detect the current status, and only mount in production and unmount in staging.
I figured out one way to solve this, please see my answer here:
https://stackoverflow.com/a/18138700/1424115
Here is a somewhat simpler solution.
What I did was an IP check. The staging environment gets a different external IP than the production environment. The production IP address is the IP of (yourapp).cloudapp.net. So the only thing you need to do is check whether these two match.
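To make the idea concrete, here is a hypothetical sketch of that check as a shell script; in a real worker role you would do the equivalent with the .NET DNS APIs, and both the hostname and the external-IP lookup service here are placeholders:

#!/bin/sh
# Resolve the production cloud service's public IP
PROD_IP=$(dig +short yourapp.cloudapp.net | head -n 1)
# Ask an external service which IP this deployment egresses from
MY_IP=$(curl -s https://checkip.amazonaws.com)

if [ "$PROD_IP" = "$MY_IP" ]; then
  echo "production deployment - safe to mount the CloudDrive"
else
  echo "staging deployment - leave the drive unmounted"
fi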