Is there a way to hide node.js code

As part of deploying our app, we were thinking of publishing it on the AWS Marketplace as a private AMI. The idea is that when someone spins up a VM from our AMI, it pulls the Docker image of the app. But the user who spins up the AMI is the admin of the VM, so they have full access. Does that mean they can access our code base when they SSH into the Docker container? Is there a way to restrict the user from accessing the code?

Yes: run the code on your own AWS server and don't give their account SSH access to it.
Seriously, there isn't a trick to this one. Anyone with root on the VM can read everything on it, including the contents of your Docker image. Hosting the code yourself is the only way you can do it, short of SELinux.

Related

Get Mount script from Azure file share - Terraform

For every new file share that we create in an Azure Storage account we get a Connect option; clicking Connect shows a generated mount script. Is it possible to get that piece of code to mount the file share through Terraform? I could not find it anywhere. Any help on this would be appreciated.
Of course it's possible. You just need to copy the code into a script and then use a VM extension to execute it inside the VM. It's not complex at all.
But there is one thing you need to pay attention to: the VM extension only supports non-interactive scripts. In the Linux connect code, for example, the sudo command can prompt for input, so it's not recommended inside a VM extension. You can find more details in the VM extension documentation.
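As a sketch, the script the extension executes might look like the following, assuming placeholder values for the storage account (mystorageacct), share name (myshare), and storage account key; the Custom Script Extension runs as root, so sudo is not needed:

```shell
#!/bin/bash
# Sketch of the kind of mount script the portal's Connect blade generates.
# mystorageacct, myshare, and <storage-account-key> are placeholders.

mkdir -p /mnt/myshare
mkdir -p /etc/smbcredentials

# Store the credentials so the share can be mounted non-interactively
cat > /etc/smbcredentials/mystorageacct.cred <<'EOF'
username=mystorageacct
password=<storage-account-key>
EOF
chmod 600 /etc/smbcredentials/mystorageacct.cred

# Persist the mount across reboots, then mount it now
echo "//mystorageacct.file.core.windows.net/myshare /mnt/myshare cifs nofail,vers=3.0,credentials=/etc/smbcredentials/mystorageacct.cred,dir_mode=0777,file_mode=0777,serverino" >> /etc/fstab
mount -t cifs //mystorageacct.file.core.windows.net/myshare /mnt/myshare \
  -o vers=3.0,credentials=/etc/smbcredentials/mystorageacct.cred,dir_mode=0777,file_mode=0777,serverino
```

You would then point an azurerm_virtual_machine_extension resource at this script so it runs when the VM is provisioned.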

How can I issue a certificate after I've moved to a new cluster?

I set up a prototype cluster in Azure Kubernetes Service to test the ability to configure HTTPS ingress with cert-manager. I was able to make everything work, and now I'm ready to set up my production environment.
The problem is I used the sub domain name I needed (sub.domain.com) on the prototype and now I can't seem to make Let's Encrypt give a certificate to the production cluster.
I'm still very new to Kubernetes and I can't seem to find a way to export or move the certificate from one to the other.
Update:
It appears that the solution provided below would have worked, but it came down to needing to suspend/turn off the prototype's virtual machine. Within a couple of minutes the production environment picked up the certificate.
You can just do something like:
kubectl get secret -o yaml
and copy/paste your certificate secret into the new cluster, or use something like Heptio Ark to do a backup/restore.
P.S. I don't know why it wouldn't let you create a new cert; at worst you would need to wait 7 days for your Let's Encrypt rate limit to reset.
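As a sketch, assuming the secret is named tls-secret in the default namespace and the kubectl contexts are called proto-cluster and prod-cluster (all placeholder names):

```shell
# Export the certificate secret from the old cluster
# (tls-secret and the context names are placeholders)
kubectl --context proto-cluster -n default get secret tls-secret -o yaml > tls-secret.yaml

# Optionally strip cluster-specific metadata (resourceVersion, uid,
# creationTimestamp) from tls-secret.yaml before applying it.

# Import it into the new cluster
kubectl --context prod-cluster -n default apply -f tls-secret.yaml
```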

AWS OptInRequired and Terraform

I'm trying to develop Terraform code for an OpenVPN Access Server; however, I get the error:
aws_instance.openvpn_srv: Error launching source instance: OptInRequired:
In order to use this AWS Marketplace product you need to accept terms and
subscribe.
Does Terraform have any support for using AMIs like this?
We've found it easiest to accept the EULA one time through the console; that solves the issue for the life of the AMI.
Apparently your account is not allowed to access the resource in AWS. Go to the Marketplace listing for the product, and that page will give you the next instructions (set up a credit card, etc.).
Oliver
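As a sketch: once the terms have been accepted in the console, you can look up the Marketplace AMI ID with the AWS CLI and reference it from your Terraform configuration (the name filter below is an assumed value for the OpenVPN Access Server listing):

```shell
# Find the newest Marketplace AMI matching the product name.
# "OpenVPN Access Server*" is an assumed filter; adjust it to the listing.
aws ec2 describe-images \
  --owners aws-marketplace \
  --filters "Name=name,Values=OpenVPN Access Server*" \
  --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
  --output text
```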

Failed to configure Release Management

I have a problem when I try to configure the agent on another server.
I have installed the RM Server on one machine, and I use a user named usr_deploy.
(This machine is in a domain called mydomain.local.)
I have another server that I need to map to submit files for deployment. What did I do? I installed the RM Agent using the same account and password, but when I try to configure it I get an error:
(This machine is in a domain called anotherdomain.local.)
(Because I'm a new user I can't post an image. I found the same image at this URL: http://i.stack.imgur.com/vrkpQ.jpg)
All the users named usr_deploy are local accounts on each server.
I need to use the same account, but do all the accounts need to be domain accounts?
I have found it very difficult to find articles or steps on the web for the correct configuration.
My scenario is one server with the RM Server and three servers to deploy to.
Can anyone help me?
Thanks!
If you don't have a trust relationship between your domains, you'll have to use shadow accounts.
MSDN:
Follow these steps to configure the Release Management Server and the Deployment Agent on machines that run in different domains that do not have a two-way trust relationship.
1. On each computer where you will install the RM Server or Deployment Agent, create a local user account that is a member of the Administrators group. Use the same account name and password on each machine (i.e. a shadow account).
2. Add the RM Server's shadow account to RM and grant it both "Service User" and "Release Manager" permissions.
3. Add the Deployment Agent's shadow account to RM and grant it "Service User" permission.
4. Use the shadow account as the service account when you install and configure the Deployment Agent.
Note: When you add the local accounts to Release Management, include the name of the local machine where the account resides, e.g. add the user account as machinename\username.
When you configure the shadow account as the service account in the Deployment Agent, make sure that you are logged in using that same shadow account.
Also, write the Release Management server URL as:
Correct way: http://(server):(port)
Incorrect way: http://(server):(port)/ReleaseManagement
Do not write "/ReleaseManagement/" or any other URL segment after the port. This will solve your problem.
For example:
http://sunnyserver:1000
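A minimal sketch of creating the shadow account on each machine from an elevated PowerShell prompt (the account name and password below are placeholders; use the same pair on every machine):

```shell
# Create the local shadow account (usr_deploy / P@ssw0rd123 are placeholders)
net user usr_deploy P@ssw0rd123 /add

# Make it a member of the local Administrators group, as the MSDN steps require
net localgroup Administrators usr_deploy /add
```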

Allowing additional users to access an EC2 instance

I have set up an Amazon EC2 instance and am able to SSH into it. Can anyone please tell me how I could allow additional users to SSH into this instance from a different location?
Max.
I started out creating additional users, but it is pointless if you want to give them sudo access anyway, which you probably do. Giving them sudo access grants them everything they would want to do anyway, so creating a separate user account is mostly a waste of time. Creating additional users is also an onerous task that leads to a lot of permission problems, and it means you have to monkey around with the sudoers file so they can run sudo tasks without entering their password every time.
My recommendation is to have the new user provide you with a public key and have them use the primary ubuntu or root account directly:
ssh-keygen -f matthew
Get them to give you the .pub keyfile and paste its contents into the .ssh/authorized_keys file on your EC2 server.
Then they can log in with their private key directly to the ubuntu or root account of your EC2 instance.
I store the authorized_keys file in my private GitHub account. Public keys are not very useful unless you have the private key component, so putting them in GitHub seems fine to me. I then deploy the centrally stored authorized_keys file as part of my new-server build process.
When someone leaves your employment, remove their public key from the file; this will lock them out.
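The key exchange above can be sketched as follows (the key name matthew and the comment are placeholders):

```shell
# On the new user's machine: generate a key pair ("matthew" is a placeholder)
ssh-keygen -t ed25519 -f matthew -N "" -C "matthew@example.com"

# On the EC2 instance: append the .pub file they send you to the shared
# account's authorized_keys, and keep the permissions strict
mkdir -p ~/.ssh
cat matthew.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
```

They can then connect with something like ssh -i matthew ubuntu@your-instance.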
Create additional users at a *nix command prompt:
useradd
Then create a new rule in the security group that has been applied to your instance, enabling SSH for the public IP range of your remote user.
For specific instructions check out: http://developer.amazonwebservices.com/connect/entry.jspa?externalID=1233.
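A minimal sketch of the user-creation step, assuming the new user is called matthew and has sent you their public key (the name and the key line below are placeholders; run as root):

```shell
# Create the user with a home directory ("matthew" is a placeholder)
useradd -m -s /bin/bash matthew

# Install their public key so they can SSH in as themselves
install -d -m 700 -o matthew -g matthew /home/matthew/.ssh
echo "ssh-ed25519 AAAAplaceholderkey matthew@example.com" > /home/matthew/.ssh/authorized_keys
chown matthew:matthew /home/matthew/.ssh/authorized_keys
chmod 600 /home/matthew/.ssh/authorized_keys
```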
