I understand that, within the same manifest, each resource of a given type must have a unique name.
For the "mount" resource, the name is the path where the device will be mounted.
I want to do the following with Puppet:
mount an NFS partition
execute a script from this mount point
unmount this partition
So I declare:
mount { '/mnt/tina':
  device   => 'tina-iuem:/distrib',
  fstype   => 'nfs',
  options  => 'defaults',
  remounts => false,
  atboot   => false,
  ensure   => mounted,
}
exec { 'install':
  command => '/mnt/tina/mycommand.sh',
  require => Mount['/mnt/tina'],  # make sure the share is mounted first
}
Then how do I unmount the '/mnt/tina' resource?
Ultimately, what you are attempting to do with Puppet is not the intended "Puppet way", so to speak. Puppet is a configuration management tool, not a tool designed for one-time batch jobs, so things like this become annoying.
Given that you cannot have conflicting resources in the same catalog compile (i.e. mount ensure => mounted and ensure => absent), you are probably better off offloading the mounting etc. to a script and exec'ing out to it (which, sadly, is in my opinion always the cheap way, but it is best suited for this situation).
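A minimal sketch of that script-plus-exec approach (the script path and guard file are assumptions for illustration):

# Let one script own the whole mount/run/unmount cycle, so the catalog
# never has to express two conflicting states for Mount['/mnt/tina'].
exec { 'install':
  command => '/usr/local/bin/install_from_nfs.sh',
  creates => '/var/tmp/.install_done',  # guard file keeps the one-shot job idempotent
}

# install_from_nfs.sh (shipped separately, e.g. via a file resource):
#   #!/bin/sh
#   set -e
#   mount -t nfs tina-iuem:/distrib /mnt/tina
#   /mnt/tina/mycommand.sh
#   umount /mnt/tina
#   touch /var/tmp/.install_done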
service { 'cron':
  ensure => 'running',
  enable => 'true',
}
Error:
change from 'running' to 'stopped' failed: systemd stop for cron failed.
Drop this
service { 'crond':
  ensure => 'running',
  enable => 'true',
}
into a file on the server (let's call the file crontest.pp), then as root run puppet apply crontest.pp and you should see cron start.
Also, if you're trying to debug this sort of thing, a good starting place is puppet resource. In this case, puppet resource service will show you a list of all your services. Look through that to find the one relating to cron; it gives you the Puppet code for its current state, so you can copy that directly into a class file. Just ignore the provider => line, as Puppet's resource abstraction layer will take care of that.
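For illustration, the output for a single service looks roughly like this (attributes vary by platform; the provider line is the one to ignore):

$ puppet resource service crond
service { 'crond':
  ensure   => 'running',
  enable   => 'true',
  provider => 'systemd',
}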
I am using the host resource. During my experiments, the resource type "resources" has no effect if a host resource uses a non-standard target.
resources { 'host':
  purge => true,
}

host { 'localhost.localdomain':
  ip     => '127.0.0.1',
  target => '/chroot/etc/hosts',
}
When I use target /etc/hosts and I remove or rename the host resource, the output is:
Info: Applying configuration version '1560267493'
Notice: /Stage[main]/Profile::Abc::Hosts/Host[localhost.localdomain]/ensure: removed
Info: Computing checksum on file /etc/hosts
Notice: /Stage[main]/Profile::Abc::Hosts/Host[localhost.localdomain]/ensure: created
When I use a non-standard target, e.g. /chroot/etc/hosts, nothing happens. (If I rename the entry, just another host entry is created.)
Another strange behaviour is that when there is no /etc/hosts file on the agent node, an error is thrown, even if I am using a different target:
Error: Could not find a suitable provider for host
Versions: Puppetserver: 5.3.8, puppet agent: 4.10.8
I am using the host resource. During my experiments, the resource type "resources" has no effect if a host resource uses a non-standard target.
That is not surprising. The resources resource type can purge only resource instances that the specified resource type "prefetches". For the Host resource type, that means hosts recorded in the default hosts file. This is the same reason for the documented limitation that Resources cannot purge the ssh_authorized_key type. For that type, as for hosts in other target files, Puppet has no way to identify the resources you want to purge.
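To illustrate the distinction (host names here are placeholders):

# Purged as expected: entries in the default /etc/hosts are prefetched.
resources { 'host':
  purge => true,
}
host { 'localhost.localdomain':
  ip => '127.0.0.1',  # no target set, so the default hosts file is used
}

# Never purged: Puppet does not prefetch entries from a non-default target,
# so it cannot know which unmanaged entries in /chroot/etc/hosts to remove.
host { 'localhost.chroot':
  ip     => '127.0.0.1',
  target => '/chroot/etc/hosts',
}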
Another strange behaviour is that when there is no /etc/hosts file on the agent node, an error is thrown, even if I am using a different target: Error: Could not find a suitable provider for host
I would count that as a bug. You could consider filing a ticket.
I found it tricky to expand the root volume when creating an EC2 instance from the Node.js API call RunInstances, because the AMI usually comes with low disk space (8 GB). If you use the following block, it will add another EBS volume to the EC2 instance without expanding the root volume:
BlockDeviceMappings: [
  {
    DeviceName: "/dev/sdh",
    Ebs: {
      VolumeSize: 100
    }
  }
],
What can we do to expand the root volume?
Have you tried modifyVolume(params = {}, callback) ⇒ AWS.Request? You can also modify volume attributes with modifyVolumeAttribute(params = {}, callback) ⇒ AWS.Request.
Both are mentioned in the same documentation link you shared.
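For illustration, a hedged sketch with the AWS SDK for JavaScript v2 (the region and volume ID are placeholders; look the volume ID up from the instance's block device mappings first):

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

// Grow the already-attached root volume to 100 GiB.
ec2.modifyVolume({ VolumeId: 'vol-0123456789abcdef0', Size: 100 }, (err, data) => {
  if (err) console.error(err);
  else console.log(data.VolumeModification.ModificationState);
});

Note that after the volume modification completes, you typically still have to grow the partition and file system inside the instance (e.g. growpart and resize2fs on Linux).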
Thanks
I have a similar issue when calling run-instances: I wanted to exclude some volumes, but that is not possible in run-instances.
However, if you do the operation through the AWS Console, the UI does allow the same things, like expanding a volume or excluding a volume.
You can launch the instance using run-instances and then, in a separate command, modify the volume and increase its size.
If you find any other solution, please let me know by answering the question below:
Exclude EBS volume while create instance from the AMI
I am trying to build a service to execute programs in different languages in Node and return the output. So far I'm using child_process's spawn for spawning commands.
const { spawn } = require('child_process');

let p = spawn('python', [sourceFilePath]);
Is there a way to limit this child process's access to my system so it can't access the network, file system, etc.?
Node.js itself does not offer any mechanism to restrict child processes to a subset of available resources, other than setting the child process's UID/GID, which might not be sufficient given your goal.
Note that remote code execution as a business model (i.e. the various *fiddle.org sites, online playgrounds etc.) is very difficult to offer safely, because protecting the host operating system is a non-trivial task.
I will assume that your program will eventually run on a Linux server, as that is the most common type of server available nowadays for Node.js deployments. Covering this topic for all types of operating systems would be too broad and probably not that helpful.
Your goal:
execute a program with user-provided input and return the output back to the user
restrict this program from accessing some or all I/O resources (file system, network, host OS information, memory etc.)
tolerate malicious users trying to do harm, extract information, compromise your server etc.
Node.js will not help here, at all. Node can spawn a child process, but that's about it. It does not control any of those I/O resources. The kernel does. We must look to what features the Linux kernel offers in this area.
I found a very detailed article about Linux sandboxing which I will use as a source of inspiration. If you are interested I recommend you read it and search for similar ones.
Low-level: Linux namespaces
The Linux kernel offers low-level mechanisms to isolate processes from various system resources. You might want to check them out, although I feel this is too low-level for your use case.
Firejail
Firejail is a tool that isolates a process from other system resources, primarily for testing purposes. I have never used it, but it looks like it could work for your use case.
Containers (i.e. Docker)
Containers usually utilise Linux namespaces to create an environment that looks like a full operating system to the process running inside them, while keeping that process completely isolated from the host OS. You can restrict network access, filesystem access and even CPU/memory usage when running a program inside a container.
Given your use case, I would probably go with container isolation, as the community around containers is quite huge nowadays, which increases the likelihood of finding the right support/documentation to achieve your goals.
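A minimal sketch of that idea (the image name, limits and paths are assumptions, not a hardened setup):

const { spawn } = require('child_process');

// Run the untrusted program inside a network-less, resource-capped container
// instead of directly on the host.
const p = spawn('docker', [
  'run', '--rm',
  '--network', 'none',                     // no network access
  '--memory', '128m',                      // cap memory
  '--cpus', '0.5',                         // cap CPU
  '--read-only',                           // read-only root filesystem
  '-v', `${process.cwd()}/jobs:/code:ro`,  // mount the submission read-only
  'python:3-alpine',
  'python', '/code/source.py',
]);

p.stdout.on('data', (d) => process.stdout.write(d));
p.stderr.on('data', (d) => process.stderr.write(d));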
If it's Unix, you can provide the uid and gid of a user/group with lower permissions:
var child = spawn(command, args, {   // command: the program to execute
  detached: true,
  uid: uid,                          // numeric uid of an unprivileged user
  gid: gid,                          // numeric gid of an unprivileged group
  cwd: appDir,
  stdio: ['ignore', 'pipe', 'pipe'], // no stdin; capture stdout/stderr
})
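For example, the numeric IDs could come from a dedicated low-privilege account (the 'sandbox' user below is hypothetical):

const { execSync } = require('child_process');

// spawn() needs numeric IDs, not user names.
const uid = Number(execSync('id -u sandbox').toString().trim());
const gid = Number(execSync('id -g sandbox').toString().trim());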
You can use the vm2 module for this. I also use it to run untrusted user code. You can create a sandbox to run the user's code, and the sandbox can access only the resources you specify.
For example
const { VM } = require('vm2');

const vm = new VM({
  timeout: 200,           // kill sandboxed code that runs longer than 200 ms
  sandbox: {
    console: console,
    fileName: fileName,
    cmdCommand: cmdCommand,
    url: url,
    exec: exec,
    input: inputs,
    imageName: imageName,
    reqs: reqs,
    resp: resp,
  },
});
This can be your sandbox. The user can use only the specific variables listed above. For example, if you remove console from the sandbox, the user will not be able to log; if they try, the code will throw an error.
vmCode = `var child = exec(cmdCommand, function (stderr, result) {
  console.log("done in exec")
  console.log(fileName)
  if (stderr) {
    console.log("stderr")
    console.log(stderr)
    resp.send({"error": "Syntax error"})
  } else {
    console.log(result)
  }
})`
This is where your child process code is defined (as a string).
Finally
try {
  vm.run(vmCode)
  console.log("done")
} catch (err) {
  console.log(err)
}
This statement executes the code passed as a string (vmCode, defined above); errors thrown inside the sandbox are caught here.
The documentation states that you can use it to run untrusted code.
For more, see https://www.npmjs.com/package/vm2
Presently I have my logs and Logstash running on the same machine, so I read my logs from the local machine with this config (a pull model):
input {
  file {
    path => "/home/Desktop/Logstash-Input/**/*_log"
    start_position => "beginning"
  }
}
Now we have Logstash running on a different machine and want to read the logs from the remote machine.
Is there a way to set the IP in the file input of the config file?
EDIT:
I managed to do this with logstash-forwarder, which is a push model (the log shipper/logstash-forwarder ships logs to the Logstash index server), but I am still looking for a pull model without a shipper, where the Logstash index server contacts the remote host directly.
Take a look at Filebeat: https://www.elastic.co/products/beats/filebeat
It's not a pull model, but it seems a better choice than logstash-forwarder.
It monitors log files and forwards them to Logstash or Elasticsearch. It also keeps the state of log files and guarantees that events will be delivered at least once (depending on log rotation speed). It's really easy to configure:
Input configuration:
input_type: log
paths:
  - /opt/app/logs
Output configuration:
output.logstash:
  hosts: ["remote_host:5044"]
  index: filebeat_logs
On the Logstash side, you must install and configure the Beats input plugin:
input {
  beats {
    port => 5044
  }
}
Logstash doesn't contain any magic to read files from other computers' file systems (and that's probably a good thing). You'll either have to mount the remote file system that contains the logs you're interested in, or you'll have to install a log shipper (e.g. Logstash) on the remote machine and configure it to send the data to your current Logstash instance (or an intermediate broker like Redis, RabbitMQ, or Kafka).
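For the mount option, a minimal sketch (the share and mount point are assumptions): mount the remote directory on the Logstash machine, then point the file input at the mount.

# First mount the remote logs, e.g.:
#   mount -t nfs remote_host:/var/log/app /mnt/remote-logs
input {
  file {
    path => "/mnt/remote-logs/**/*_log"
    start_position => "beginning"
  }
}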
You could also use the syslog daemon (which is probably already installed on the machine) to ship logs via the syslog protocol, but keep in mind that there's no guarantee about the maximum allowed length of each message.
You can add the remote system's IP in the path and access the logs from the remote machine.
input {
  file {
    path => "\\IP address/home/Desktop/Logstash-Input/**/*_log"
    start_position => "beginning"
  }
}