Resize container resources online

How can I resize a container's resources after creation, while it is online? I would like a permanent solution that isn't reset when the container restarts.
I set the resources at creation time with the following options:
-c, --cpu-shares=0
--cpuset=""
-m, --memory=""
I have already tried changing the values under
/sys/fs/cgroup/

Out of the box you can't do this; containers are meant to be immutable, at least configuration-wise.
You might be better off using docker-compose to define these parameters so they are always set for a given application.
An untested example docker-compose.yml (with illustrative values) might look like this:
awesome_app:
  cpu_shares: 512
  cpuset: "0,1"
  mem_limit: 512m
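With the service defined this way, changing the limits later just means editing the file and recreating the container, e.g.:
docker-compose up -d awesome_app
This recreates the container rather than resizing it live, but the values are kept across restarts.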

Related

How to toggle (enable or disable) a feature in Terraform without modifying code

In Terraform we need to implement functionality to toggle a block of Terraform code (from enabled to disabled) and redeploy, without manually changing the code by commenting out lines. What is the best way to do this without manually changing the code?
Regards
Flashnightss
We tried manually changing the code, for example disabling a feature (e.g. switching storage replication from enabled to disabled) using /* */ comment lines, but that's not a good approach for novice users. We are looking for a way to do this programmatically, without modifying the code manually.
One option is to introduce a feature flag, i.e., a variable that tells the code whether a feature/module/block should be used or not.
E.g., assuming you have a variable myfeatureflag declared in your terraform.tfvars and you want to optionally use a resource block, you could write:
resource "myresource" "myresourceid" {
count = var.myfeatureflag == true ? 1 : 0
<resource-specific code>
}
You could do it similarly for calling complete modules.
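To flip the flag without touching any .tf files, you can also override the variable on the command line at deploy time (assuming myfeatureflag is declared as a boolean variable):
terraform apply -var='myfeatureflag=true'
terraform apply -var='myfeatureflag=false'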

Cannot create a Dataproc cluster when setting the fs.defaultFS property?

This was already the object of discussion in a previous post; however, I'm not convinced by the answers, as the Google docs specify that it is possible to create a cluster setting the fs.defaultFS property. Moreover, even if it is possible to set this property programmatically, it is sometimes more convenient to set it from the command line.
So I wanted to know why the following option, when passed to my cluster creation command, does not work: --properties core:fs.defaultFS=gs://my-bucket. Please note I haven't included all parameters, as I ran the command without this flag and it succeeded in creating the cluster. However, when passing it, I get: "failed: Cannot start master: Insufficient number of DataNodes reporting."
If anyone has managed to create a Dataproc cluster by setting fs.defaultFS, that'd be great. Thanks.
It's true there are still known issues due to certain dependencies on actual HDFS; the docs were not intended to imply that setting fs.defaultFS to a GCS path at cluster-creation time would work, but to simply provide a convenient example of a property that appears in core-site.xml; in theory it would work to set fs.defaultFS to a different preexisting HDFS cluster, for example. I've filed a ticket to change the example in the documentation to avoid confusion.
Two options:
Just override fs.defaultFS at job-submission time using per-job properties (see the example after the init-action snippet below).
Work around some of the known issues by setting fs.defaultFS explicitly using an initialization action instead of cluster properties.
Option 1 is better understood to work because cluster-level HDFS dependencies won't change. Option 2 works because most of the incompatibilities occur during initial startup only, and initialization actions run after the relevant daemons have already started up. To override the setting in an init action, you'd use bdconfig:
bdconfig set_property \
--name 'fs.defaultFS' \
--value 'gs://my-bucket' \
--configuration_file /etc/hadoop/conf/core-site.xml \
--clobber
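For option 1, overriding the property when submitting a single job might look like this (a sketch only; the cluster name and jar path are placeholders):
gcloud dataproc jobs submit hadoop \
    --cluster my-cluster \
    --jar gs://my-bucket/my-job.jar \
    --properties fs.defaultFS=gs://my-bucket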

How does DIG utility work in FreeBSD and BIND?

I want to know how the dig (Domain Information Groper) command really works when it comes to code and implementation. I mean, when we enter a dig command, which part of the code in FreeBSD or BIND is hit first?
Currently, when I run the dig command, I see control going to a file called client.c. Inside this file, the following function is called:
static void
client_request(isc_task_t *task, isc_event_t *event);
But how control reaches this place is still a big mystery for me, even after digging a lot into the 'named' part of the BIND code.
Further, I see this function being called from two places within this file. I put logging in those places to check whether control reaches them through those paths, but unfortunately it doesn't. It seems client_request() is somehow being called from somewhere else that I am not able to figure out.
Is there anybody here who can help me resolve this mystery?
Thanks.
Not only for BIND but for any other command: within FreeBSD you can use ktrace. It is very verbose, but it can help you get a quick overview of how a program is behaving.
For example, in the latest FreeBSD releases you have the drill command instead of dig, so if you would like to know what is happening behind the scenes when you run the command, you could give this a try:
# ktrace drill freebsd.org
Then to disable tracing:
# ktrace -C
Once tracing is enabled on a process, trace data will be logged until either the process exits or the trace point is cleared. A traced process can generate enormous amounts of log data quickly; it is strongly suggested that users memorize how to disable tracing before attempting to trace a process.
After running ktrace drill freebsd.org, a file ktrace.out should be created, which you can read with kdump, for example:
# kdump -f ktrace.out | less
That will hopefully "reveal the mystery". In your case, just replace drill with dig and use something like:
# ktrace dig freebsd.org
Thanks to the FreeBSD Ports system, you can compile your own BIND with debugging enabled. To do so, run:
cd /usr/ports/dns/bind913/ && make install clean WITH_DEBUG=1
Then you can run it inside a debugger (lldb /usr/local/bin/dig), break on the line you are interested in, and look at the backtrace to figure out how control reached there.
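A rough sketch of such a session, using the function name from your question as the breakpoint (untested; if that function actually lives in named rather than dig, attach the debugger to the running named process instead):
lldb /usr/local/bin/dig
(lldb) breakpoint set -n client_request
(lldb) run freebsd.org
(lldb) bt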

How to add entries with slapadd in OpenLDAP replication (syncrepl)

We have OpenLDAP replication with syncrepl, and I don't know how to add entries to it with slapadd.
On a standalone server it works fine, but when I add entries on one of the machines in the replication setup, the second machine fails to start slapd.
Thanks
Unfortunately slapadd doesn't write to the accesslog and thus the modifications aren't replicable. This is especially bad, because some attributes can't be modified via ldapadd.
If you only need ordinary attributes, use ldapadd instead.
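For example, loading ordinary entries over the LDAP protocol is replicated as usual (the bind DN and LDIF file name below are placeholders):
ldapadd -x -D "cn=admin,dc=example,dc=com" -W -f entries.ldif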
UPDATE:
It looks like you can use the -w switch:
Write syncrepl context information. After all entries are added, the contextCSN will be updated with the greatest CSN in the database.
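So, with slapd stopped on the provider, something like this should also keep the syncrepl context up to date (the database number and LDIF file name are placeholders):
slapadd -n 1 -w -l entries.ldif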

What does "No more variables left in this MIB View" mean (Linux)?

On Ubuntu 12.04 I am trying to get the subtree of management values with the following command:
snmpwalk -v 2c -c public localhost
with the last line of the output being
iso.3.6.1.2.1.25.1.7.0 = No more variables left in this MIB View (It is past the end of the MIB tree)
Is this an error? A warning? Does the subtree end there?
There's a bit more going on here than you might suspect. I encounter this on every new Ubuntu box that I build, and I do consider it a problem (not an error, but a problem; more on this further down).
Here's the technically-correct explanation (why this is not an "error"):
"No more variables left in this MIB View" is not particularly an error; rather, it is a statement about your request. The request started at something simple, say ".1.3" and continued to ask for the "next" lexicographic OID. It got "next" OIDs until that last one, at which point the agent has informed you that there's nothing more to see; don't bother asking.
Now, here's why I consider it a problem (in the context of this question):
The point of installing "snmpd" and running it is to gather meaningful information about the box; typically, this information is performance-oriented. For example, the three general things that I need to know about are network-interface information (IF-MIB::ifHCInOctets and IF-MIB::ifHCOutOctets), disk information (UCD-SNMP-MIB::dskUsed and UCD-SNMP-MIB::dskTotal), and CPU information (UCD-SNMP-MIB::ssCpuRawIdle, UCD-SNMP-MIB::ssCpuRawWait, and so on).
The default Ubuntu "snmpd" configuration specifically denies just about everything useful with this configuration (limiting access to just enough information to tell you that the box is a Linux box):
view systemonly included .1.3.6.1.2.1.1
view systemonly included .1.3.6.1.2.1.25.1
rocommunity public default -V systemonly
This configuration locks the box down, which may be "safe" if it will be on an insecure network with little SNMP administration knowledge available.
However, the first thing that I do is remove the "-V systemonly" portion of the "rocommunity" setting; this will allow all available SNMP information to be accessed (read-only) via the community string of "public".
If you do that, then you'll probably see what you're expecting, which is pages and pages of SNMP information that you can use to gauge the performance of your box.
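Concretely, the rocommunity line in /etc/snmp/snmpd.conf ends up looking like the following, after which the agent needs a restart (Ubuntu service name assumed):
rocommunity public default
sudo service snmpd restart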
I know this thread is probably very old, but the way I fixed this was to use:
rocommunity public
and that should fix the problem.
Briefly, this is not an error. When you have walked through all of the OIDs on your agent, it will show you this line.
Sometimes it won't show you this line, because the last OID you requested is not on your agent (you have already walked all of the OIDs on your agent, but not all of the OIDs in the tree).
$ snmpwalk -v 2c -c public localhost NET-SNMP-EXTEND-MIB::nsExtendObjects
NET-SNMP-EXTEND-MIB::nsExtendObjects = No more variables left in this MIB View (It is past the end of the MIB tree)
You can also get this error when trying to see executed scripts. I fixed that problem by adding the line
view all included .1 80
to snmpd.conf and then restarting the service.
Then you will see the output change for both inputs.
