Memory usage of a Linux Buildserver [closed] - linux

I'm using a Linux build server; to be honest, this is the first time I've done so.
I access it through VS Code, using the available extensions.
Every time I build the code, the build server's memory usage grows until it reaches its limit, and then errors start appearing because of it.
Another thing I noticed: if I use a Linux screen session, it uses a huge amount of memory too.
What are the recommended ways to deal with this kind of environment?
I don't know whether this is normal behaviour; every time I build, it consumes more memory.
Could VS Code be having some impact on this?
Is this the normal behaviour of a build server? I want to understand why.
Thank you very much!
I'm trying to use a Linux build server, but every time I build my code it uses too much of the server's memory. I want to know whether this is normal or whether there is some setup I need to do.
In general, I would appreciate some explanation of how a build server behaves with regard to memory management.
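As a first diagnostic step (not something from the question itself, and assuming a GNU make based build), a minimal sketch for watching the server's memory while a build runs and for capping the number of parallel compile jobs, which is a common way builds exhaust RAM:

    # Watch overall memory use while the build runs, refreshing every 5 seconds
    # (for example in a second terminal or a separate screen window).
    watch -n 5 free -h

    # Limit the build to 2 parallel compile jobs instead of one per core;
    # every concurrent compiler/linker process adds to peak memory use.
    make -j2

If memory stays high between builds rather than only during them, it may also be worth checking top for leftover build or VS Code helper processes, since the question mentions accessing the server through VS Code.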

Related

Why is mlockall occasionally slow? [closed]

Background: we recently updated our operating system from RHEL 6.7 MRG to RHEL 7.7 (with RealTime). Our real-time Ada radar applications call mlockall (via its C binding) to lock themselves into memory at startup (yes, I understand this is rarely necessary, and likely isn't for all of the applications, but it is required for many).
The problem:
Since the upgrade, mlockall occasionally takes over 2 minutes, where it usually takes less than 1 second. What could be causing this behaviour?
I was pointed at the file cache/memory buffers, so we ran some tests after dropping caches, but it didn't seem to have a positive effect.
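For reference, a rough sketch of the kind of test described above (dropping caches, then timing the start-up that performs the mlockall); the application name is a placeholder:

    # Flush the page cache before the test (run as root), as in the tests above.
    sync
    echo 3 > /proc/sys/vm/drop_caches

    # See how much memory is currently locked/unevictable on the box.
    grep -E 'Mlocked|Unevictable' /proc/meminfo

    # Time the application start-up, which includes the mlockall() call.
    time ./radar_app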

How can I use a stack buffer overflow to make an attack [closed]

I am an IT security student and I have to write a paper about a vulnerability in a real-life case. I chose a small application for creating icons under Windows, which is vulnerable to a stack buffer overflow.
I've already done manipulations on exercise scripts (injecting shellcode into a script that can be used on the Linux command line, etc.), but never on a real application.
Is it possible for you to help me use the vulnerability to carry out some kind of "attack", or to execute a program function that isn't supposed to be executed at that moment?
I haven't tried to reverse the code yet; I will try to find where the program stores the long string I use to make it crash, the size of the memory for this variable, and the return address.
I found this app while looking for a vulnerable app on Vulnerability Lab: https://www.vulnerability-lab.com/get_content.php?id=1609
You can also download the app from this link (the vulnerability is still present in the latest version): http://www.aha-soft.com/iconlover/
PS: I've only been studying IT security for a year and a half, so I am a beginner. Sorry if there are mistakes in my text; I am French. This is one of my first posts on this forum, and I hope I did it well.
There are already ready-made exploits that do what you are asking for IconLover; see here. The script forces the application to open up calculator.exe, which indicates a very severe security issue.
You can modify the given shellcode for RCE or to execute malicious programs (assuming you already have some form of access to the target system).

Docker on a Linux VM: performance? [closed]

I would like to know what running Docker on a VM implies for performance. Will I have issues?
To me, adding a "layer" should decrease performance. Is that right or wrong, and most importantly, why?
I want to know the best way to approach new projects when containers are involved.
Thanks in advance :)
Every part of the system stack has some performance cost, but it’s probably close to immeasurable. In what you describe the cost of the VM will probably be greater than the cost of Docker, but the cost of either will be dwarfed by the cost of any database I/O you do. As always database tuning and algorithmic tuning will probably make the biggest difference.
An additional layer in a Docker image has approximately zero performance impact. It’s mildly “nicer” to have fewer layers but it doesn’t really matter that much.
If your program is in an interpreted language like Ruby or Python, or if you’re frequently starting JVMs, the performance difference from using a virtual machine or not is noise compared to the sheer overhead of these systems.
As always, the real answer is to run real benchmarks, and profile the system/application if it’s too slow. The sorts of questions you’re asking aren’t things you need to optimize for early and often aren’t things you need to optimize for at all.
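In the spirit of "run real benchmarks", a minimal sketch comparing the same workload on the host and inside a container (the image and benchmark names are made up):

    # Same workload, bare metal vs. container: compare the wall-clock times.
    time ./my_benchmark
    time docker run --rm my_image ./my_benchmark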

Error running bash script in supervisor [closed]

I have a bash script that does some work supervising network state. It works great when I run it manually, but when I put it under supervisor the ifs and the whiles do not work: execution just stops before any of those statements. Echoes, running cat, more and other things work just fine, but the minute I add an if, nothing else works from there on.
Please give me some tips; I really need to run this script from supervisor.
Thanks Etan Reisner, I have already fixed the problem. I will put it here so that anyone else who has the same problem can see why.
The problem was simple (as usual). I was reading the contents of the files /sys/class/net/eth0/carrier and /sys/class/net/eth0/operstate to detect when the network cable was plugged in or unplugged, and I was doing it with more. I don't know why (the script worked fine when run manually), but when it was executed under supervisor, execution stopped right after the first more. I just changed it to cat and that was it.
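In script form, the fix described above looks roughly like this (a sketch only; the interface name and polling interval are assumptions):

    #!/bin/bash
    # Poll the link state with cat (not more), so the reads also work when the
    # script runs non-interactively under supervisor.
    while true; do
        carrier=$(cat /sys/class/net/eth0/carrier 2>/dev/null)
        state=$(cat /sys/class/net/eth0/operstate)
        if [ "$carrier" = "1" ]; then
            echo "cable plugged in (operstate: $state)"
        else
            echo "cable unplugged (operstate: $state)"
        fi
        sleep 5
    done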
Half a day spent to solve that.
I hope that anyone who runs into this kind of trouble finds this answer and can solve the problem quickly.
Regards

Power Consumption of an Application [closed]

Is there any way to find out the power consumed by an application? For example, if I have some ten user applications running on my laptop, how can I find out how much power each one is consuming in a Linux environment?
The PowerTOP tool might have something for you; look up its "Power usage" section. If the tool itself is not what you want, you can research where it retrieves its information and evaluate that data in the way you want.
That's an interesting question, and it does not have an easy answer that I've heard of.
Presuming that you have a way of metering the machine's minute-to-minute consumption, you can get a crude approximation by examining the amount of CPU time used, either by watching things in top or by examining the output of time(1). Compare the machine's total power consumption in various states of idleness and load to the amount of work done by each process; with enough statistics you should have a solvable system, possibly even an over-constrained one that calls for some kind of best-fit solution.
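As a very rough illustration of that CPU-time approach (the program name is a placeholder):

    # Accumulated CPU time per process, as a crude proxy for energy use.
    ps -eo pid,comm,time --sort=-time | head -n 11

    # CPU time (and peak memory) for a single run of one program, via GNU time.
    /usr/bin/time -v ./some_app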
The only way that occurs to me to do it with high precision would be to use either:
- an instrumented virtual machine that accumulates statistics on which parts of the CPU were activated (do such things even exist at this time?), or
- the manufacturer's documentation for the chip and board you are running on, to total up the power implied,
which would be a horribly complicated mess.
Sorting out which bits were needed just to provide the environment and which could be unambiguously attributed to the program won't be easy.
I have to ask...why?
I don't know if there's really a "good way" to do this, but here's a suggestion for a generic approach that would work regardless of operating system: remove the battery from your laptop and hook up its power adapter to a high-precision current meter. Note the draw when no "normal" applications are running, then run each application on its own and note the difference in current draw.
