Size of access.log and server load? [closed] - linux

Closed. This question is off-topic and is not currently accepting answers. Closed 10 years ago.
Is there any difference at all in server load when adding new lines to a big vs. small access.log file?
What I mean is: should I delete my access.log files if they become too big, or leave them alone? Mine is 6 GB right now, and I do not rotate.

I'm not sure about the performance difference between big and small files, but you may want to split them every month and compress the old access-log files. For that you can use logrotate. More information is in its man page.

Log rotation is an important part of maintaining a server. Without it, you're likely to fill up your disk, and then your server will behave extremely strangely, depending on the app.
Regardless of performance, you should be using logrotate or something similar.
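For reference, a minimal logrotate snippet for a web server access log might look like the following. The path and retention values here are assumptions; adjust them for your distribution and server:

```
/var/log/apache2/access.log {
    monthly
    rotate 12
    compress
    notifempty
    # copytruncate lets the server keep writing to its open file handle;
    # without it you must signal the server to reopen the log after rotation.
    copytruncate
}
```

Drop a file like this into /etc/logrotate.d/ and the regular logrotate cron job will pick it up.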

Related

Why can one remove/rename open files in Linux? [closed]

Closed 9 years ago.
I learned that open files cannot be removed/renamed in Windows but can be removed/renamed in Linux (by default). I think I understand the reasons for the Windows behaviour.
Now I wonder why Linux allows renaming/removing open files. What was the design rationale behind this decision? What are the use cases where one needs it?
The difference is that Linux works on file handles rather than file names: as long as the file handle is valid, you can read from and write to it.
Renaming a file in Linux does not alter the file handle.
One very interesting use case is to delete temporary files right after opening them.
This makes it impossible for any other process to access the file, while the process that owns the file handle can still read and write it.
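A small shell sketch of that use case: a process opens a file, unlinks it so the name disappears, and keeps using the data through its open file descriptor. (The file name and fd number here are arbitrary choices for the demo.)

```shell
printf 'temp data\n' > scratch.tmp

# Open fd 3 on the file, then remove its directory entry.
exec 3< scratch.tmp
rm scratch.tmp

# The name is gone -- no other process can open it by name...
ls scratch.tmp 2>/dev/null || echo "name is gone"

# ...but our open descriptor still reads the contents.
cat <&3            # prints: temp data

exec 3<&-          # closing the last handle finally frees the inode
```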

How can I uncompress .z file under Ubuntu? [closed]

Closed 10 years ago.
I have files from TREC (Text REtrieval Conference) whose extensions are .0z, .1z, etc. I tried every method I could, but still failed. Could someone do me a favour, please?
There is some evidence that might be helpful.
In the terminal, I used the "file" command, and it shows "fr940104.1z: compress'd data 16 bits".
I also checked the properties of the file under the GUI, which shows UNIX-compressed file (application/x-compress).
The statement that the files "are stored in chunks of about 1 megabyte each" indicates that you need to recombine the chunks before decompressing. Hopefully the filenames can help you with that ("chunk001.z", "chunk002.z", …). Assuming that you can figure out the order, use cat to combine them into one file, then use Unix uncompress. Or pipe directly from cat to uncompress.
.z normally means simple Unix compression. Does
uncompress filename.z
not work?
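The recombine-then-decompress step can be demonstrated end to end. This sketch uses gzip as a stand-in, since gzip -d also understands old Unix compress'd (.Z) data and, unlike uncompress, is available almost everywhere; the file names are made up for the demo:

```shell
# Make a compressed file and split it into small byte chunks,
# playing the role of the TREC .0z/.1z pieces.
printf 'TREC sample data\n' > orig.txt
gzip -c orig.txt > whole.gz
split -b 8 whole.gz part.

# Recombine the chunks in order (glob order matches split's suffixes),
# then decompress the rejoined stream.
cat part.* > rejoined.gz
gzip -dc rejoined.gz       # prints: TREC sample data
```

The key point is that the chunks are plain byte slices, so simple concatenation in the right order reconstructs the original compressed file exactly.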

Deleting TTY_00000000.log logs from server [closed]

Closed 10 years ago.
I have a log file named TTY_00000000.log on my server which is more than 2 GB.
Can anyone let me know whether I can empty this file, and how I can stop the creation of these huge logs or minimize them in any way?
Thanks,
Gaurav.
See this thread. Quotes:
These are a very important debugging tool for the PERC 5 controllers, so the ability to disable them is not exposed.
What you need is a simple logrotate script (Dell should have included one). Compression would save almost all the space, since the logs are mostly repetitive text.
cat > /etc/logrotate.d/omsa-tty <<EOF
/var/log/TTY_00000000.log {
monthly
notifempty
rotate 15
compress
}
EOF
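As for emptying the file right now: truncate it in place rather than deleting it, because the writing process keeps its file handle open and would silently keep consuming the space of a deleted file. A sketch (run as root; the path is from the question):

```shell
# Truncate the log to zero bytes while leaving the file -- and any
# open file handles on it -- in place.
: > /var/log/TTY_00000000.log

# Equivalent, using coreutils:
truncate -s 0 /var/log/TTY_00000000.log
```

This is also why the logrotate stanza above benefits from copytruncate-style handling when the writer never reopens its log.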

What makes a copied disk different from the original disk? [closed]

Closed 11 years ago.
For some reason, when we create a backup of our game disc, there is always a difference between the original disc and the self-burned backup. A lot of games can detect that the disc inserted in the optical drive isn't an original one.
The game isn't satisfied with a virtually mounted image file either.
So what makes the difference, and how does the software detect it?
Thanks
Maybe this is a superuser.com question, but I'm not sure...
Copy protection schemes involve putting features on the manufactured disc that are difficult or impossible to create with a consumer recorder. One common technique is to put deliberate errors on the disc. See the Wikipedia article on CD/DVD copy protection for more information.

Destroy a large amount of data as quickly as possible? [closed]

Closed 12 years ago.
How would you go about securely destroying several hundred gigabytes of arbitrary data as quickly as possible?
Incinerating hard drives is a slow, manual (and therefore insecure) process.
Physically destroying the drives does not (necessarily) take a significant amount of time. Consider, for example, http://www.redferret.net/?p=14528 .
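On the software side, overwriting in place is usually much faster than any manual process. A sketch using coreutils shred; the device name is a placeholder, and the whole-drive command is irreversibly destructive:

```shell
# Overwrite a single file once with random data, then unlink it.
printf 'secret payload' > sensitive.dat
shred -n 1 -u sensitive.dat

# For a whole drive (DESTRUCTIVE -- double-check the device name first):
# shred -n 1 /dev/sdX
```

One random overwrite pass is generally considered sufficient for modern drives; note that shred's guarantees are weaker on journaling or copy-on-write filesystems and on SSDs with wear levelling.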
I know the answer but this seems like one of those questions best left unanswered unless you know why it's being asked.
