I'm new to Linux. I've been having a few problems since I installed it some time ago, one of the main issues being that my keyboard layout defaults to US on a GB keyboard.
I've found that the command setxkbmap -layout gb fixes this. The problem is, I have to run it each time I restart the laptop.
I've tried creating the following shell script in /etc/init.d:
#!/bin/bash
# A shebang - says which interpreter to use
### BEGIN INIT INFO
# Provides: SetKeyboardGB
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Change Keyboard to GB
# Description: Changes Keyboard to GB on startup.
### END INIT INFO
# This will always run (Hopefully)
setxkbmap -layout gb
exit 0
This executes absolutely fine when I just run it, and I've already run update-rc.d SetKeyboardGB defaults, but it doesn't seem to run when restarting the laptop. I feel like I've missed something in setting up my init.d script, but I haven't found the documentation easy to follow.
Any help is appreciated.
You could probably get away with creating a service that sets the keyboard like you are trying to do. However, that service would have to compete with the one that comes with Debian, keyboard-setup, so that is not the answer.
I would recommend running the following as root
# dpkg-reconfigure keyboard-configuration
# service keyboard-setup restart
to set the layout. The latter of the commands should apply the settings, but a proper reboot might be necessary if it does not work.
The keyboard layout settings are stored in /etc/default/keyboard which will be set by dpkg-reconfigure keyboard-configuration.
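For reference, after selecting a UK layout, /etc/default/keyboard typically ends up looking roughly like this (a sketch; the exact model and options depend on what you pick during dpkg-reconfigure):
# /etc/default/keyboard - system-wide keyboard defaults on Debian
XKBMODEL="pc105"
XKBLAYOUT="gb"
XKBVARIANT=""
XKBOPTIONS=""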
Best of luck!
I tried the above, but unfortunately it didn't seem to work. The solution was actually a little simpler than I had imagined - I feel dumb for not noticing it. In the bottom right of the LXDE desktop there is an option for keyboard layout which I somehow missed. Despite specifying otherwise during installation, it defaulted to US - I changed this to UK, Extended WinKeys and this seems to have stuck between reboots.
I have referred to many solutions, yet no luck. I have a Linux automation script that runs a few gcloud commands with some conditions. I wrote it with Node.js, but it is so incredibly slow that I can finish the task manually before the script completes its run.
The same slowness applies to gcloud commands when I connect to a cluster, and to kubectl commands when I query something.
Please help!!
It could be a DNS configuration error on the WSL side. I had the same issue today; here's how I fixed it!
1. Checking the (deadly slow) response time
[tbg#~] time kubectl get deployments
No resources found in default namespace.
real 0m10.530s
user 0m0.151s
sys 0m0.050s
2. Checking the WSL/DNS configuration
[tbg#~] cat /etc/wsl.conf
[network]
generateResolvConf=false
[tbg#~] cat /etc/resolv.conf
nameserver XX.XXX.XXX.X
nameserver YYY.YY.YY.YY
nameserver 1.1.1.1
If you see something like that, remove those lines (the generateResolvConf=false override and the manual nameservers) to get back to automatic resolv.conf generation, then restart WSL (wsl --shutdown).
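As a rough cleanup sketch (assuming you no longer need the manual DNS override at all):
# Inside the WSL distro: drop the override so WSL regenerates resolv.conf itself
sudo sed -i '/generateResolvConf=false/d' /etc/wsl.conf
sudo rm /etc/resolv.conf
# Then, from Windows (PowerShell or cmd), restart WSL so the change takes effect
wsl --shutdown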
3. Checking the (fixed !) response time
[tbg#~] time kubectl get deployments
No resources found in default namespace.
real 0m1.212s
user 0m0.087s
sys 0m0.043s
I found out that my resolv.conf configuration was causing that latency by trying to reinstall kubectl with apt and noticing that apt was really slow too.
Right now, access to the /mnt folders in WSL2 is very slow, and by default the entire Windows PATH is appended to the Linux $PATH at launch, so any Linux binary that scans $PATH becomes unbearably slow.
To disable this feature, edit the /etc/wsl.conf to add the following section:
[interop]
appendWindowsPath = false
This avoids adding the Windows PATH to the Linux $PATH; for now, the best approach is to add the folders you need to $PATH manually.
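For example (a sketch; the Windows paths below are only illustrative, re-add just the ones you actually use):
# In ~/.bashrc inside WSL: add selected Windows folders instead of inheriting the whole Windows PATH
export PATH="$PATH:/mnt/c/Windows/System32"
export PATH="$PATH:/mnt/c/Users/<your-user>/AppData/Local/Programs/Microsoft VS Code/bin"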
Terminate the WSL distro (wsl.exe --terminate <distro_name>), or run wsl.exe --shutdown, then start the terminal again to make the change effective immediately.
Refer to the stack link for more information.
My computer's clock was reset. After that, I turned on my computer and waited for CentOS to boot, but I was faced with a black screen containing:
****An error occurred during the file system check.
**** Dropping you to a shell; the system will reboot
****when you leave the shell.
****Warning -- SELinux is active
****Disabling security enforcement for system recovery.
**** Run 'setenforce 1' to reenable.
Give root password for maintenance (or type Control-D to continue):
I typed my password and got a root shell prompt (#) on the same black screen.
I really need my CentOS to work in the GUI. Please help me.
If you have access to a root shell, try the command fsck -a. It will try to automatically fix errors on your filesystem.
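As a rough sequence from that maintenance shell (a sketch; /dev/sda1 is only an example device, use the partition named in the fsck error message or in /etc/fstab):
# Attempt automatic repair of the failing filesystem
fsck -a /dev/sda1
# If -a reports problems it cannot fix on its own, rerun answering yes to all prompts
fsck -y /dev/sda1
# Then reboot and let CentOS come up normally
reboot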
I've been using forever for a long time but recently it started to behave a little bit weird: everything is ok except for the logs.
I used to run forever start /path/to/app.js and everything was fine. Even for huge logs (1-2 gigs).
But I currently have a very busy web app, and its log is being truncated every 3-4 hours. The size is not that big, 80-120 MB actually.
After realising this I decided to try (unsuccessfully though) starting forever with options: forever --append -o /path/to/out.log -e /path/to/error.log start /path/to/app.js but the problem persists.
I really don't know what to do.
Any thoughts??
Thanks!!
Versions:
node v0.12.7
npm 2.11.3
forever v0.15.1
3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u6 (2015-11-09) x86_64 GNU/Linux
UPDATE1:
BTW: I have plenty of hard disk and memory available.
UPDATE2:
I found a related issue (https://github.com/foreverjs/forever/issues/106#issuecomment-116933382) and began testing with the following command: forever -a -l >(logger -t fileteTrackchile) start /path/to/app.js. So far so good, but it stores the log information in /var/log/user.log, /var/log/syslog and /var/log/messages. It's the same information, so it would be better to save it only once.
I'll leave it running for a couple of days and see if it works or not.
UPDATE3 (FINAL):
The problem had nothing to do with forever, nor with winston. I didn't realise that the files were so big that the log viewer was only showing part of them. The confusing part is that the first (oldest) line was truncated by the viewer (OS X Console), which led me to think that the file itself was truncated.
Regarding your first approach, unexpected log truncation is often caused by log rotation. However, you didn't mention rotating your logs.
Your second approach is to send the logs to a syslog daemon using the logger client.
Your logging then ends up in three logs due to your syslog configuration.
Try this solution. Create a file named /etc/rsyslog.d/10-fileteTrackchile.conf.
In the file, add these lines:
$template JustMsg,"%msg:2:10000%\n"
if $programname == 'fileteTrackchile' then /var/log/fileteTrackchile.log;JustMsg
if $programname == 'fileteTrackchile' then stop
Then:
service rsyslog stop
service rsyslog start
Because the file name starts with 10, it will be processed before the other rsyslog configuration files.
The logic sends your app's logging to a dedicated file, unmodified, and then stops further processing by rsyslog, so the same messages don't also end up in the three other files you mentioned.
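To confirm the routing works after restarting rsyslog, a quick manual test (using the same tag assumed above) could be:
# Send a test message with the tag your forever command uses
logger -t fileteTrackchile "rsyslog routing test"
# The message should land here, and no longer in /var/log/syslog
tail -n 1 /var/log/fileteTrackchile.log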
You may also wish to create /etc/logrotate.d/fileteTrackchile with contents like this:
/var/log/fileteTrackchile.log
{
rotate 7
daily
missingok
notifempty
compress
sharedscripts
postrotate
service rsyslog reload >/dev/null 2>&1 || true
endscript
}
See man logrotate for the details of those log rotation options.
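If you want to sanity-check the rotation rules before relying on them, a debug run prints what would happen without rotating anything:
# -d is logrotate's debug/dry-run mode
sudo logrotate -d /etc/logrotate.d/fileteTrackchile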
A few questions already ask how to fix the MongoDB warning:
** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always.'
** We suggest setting it to 'never'
But I'm wondering if it should be fixed. I get this warning from MongoDB 3.0.1 on a Ubuntu VM running on Google's Cloud. Should I trust MongoDB that 'never' is better? Or should I trust Google/Ubuntu that they set it to 'always' for a good reason? I imagine there are tradeoffs to be considered and don't know what I'd be trading to keep it or fix it.
Asking how to fix it is fine, but asking whether to fix it seems wiser.
Edit: MongoDB has addressed this issue since I wrote this answer. Their recommendation is at https://docs.mongodb.com/master/tutorial/transparent-huge-pages/ and probably ought to be your go-to solution. My original answer will still work, but I'd consider it a hack now that an official solution is available.
Original answer: According to the MongoDB documentation, http://docs.mongodb.org/manual/reference/transparent-huge-pages/, and support, https://jira.mongodb.org/browse/DOCS-2131, transparent_hugepage (THP) is designed to create fewer large memory blocks rather than many small memory blocks in systems with a lot of memory. This is great if your software needs large contiguous memory accesses. For MongoDB, however, regardless of memory available, it requires numerous smaller memory accesses and therefore performs better with THP disabled.
That makes me think either way will work, but you'll get better Mongo (or any database) performance with THP off, giving you smaller chunks of memory. If you don't have much memory anyway, THP probably ought to be off no matter what you run.
Several ways to do that are outlined in the link above. The most universally applicable appears to be editing rc.local.
$ sudo nano /etc/rc.local
Insert the following lines before the "exit 0" line.
...
# Disable THP defrag and THP itself at boot (the khugepaged defrag flag takes 0/1, the others take never)
if test -f /sys/kernel/mm/transparent_hugepage/khugepaged/defrag; then
  echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
  echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
exit 0
Note: redhat-based systems may use "redhat_transparent_hugepage" rather than "transparent_hugepage" and can be checked by:
ls /sys/kernel/mm/*transparent_hugepage*/enabled
cat /sys/kernel/mm/*transparent_hugepage*/enabled
To apply the changes, reboot (which will run rc.local) or:
$ sudo su
# source /etc/rc.local
# service mongod restart
# exit
in order to properly apply the changes made above without a full reboot.
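Either way, you can verify the result afterwards; the kernel marks the active value in brackets:
cat /sys/kernel/mm/transparent_hugepage/enabled
# Expected output once disabled: always madvise [never]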
For Ubuntu using upstart scripts:
Since we are deploying machines with Ansible, I don't like modifying rc files or GRUB configs.
I tried using sysfsutils / sysfs.conf but ran into timing issues when starting the services on fast (or slow) machines. It looked like mongod was sometimes started before sysfsutils. Sometimes it worked, sometimes it did not.
Since mongod is an upstart process I found that the cleanest solution was to add the file /etc/init/mongod_vm_settings.conf with the following content:
# Ubuntu upstart file at /etc/init/mongod_vm_settings.conf
#
# This file will set the correct kernel VM settings for MongoDB
# This file is maintained in Ansible
start on (starting mongod)
script
    echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
    echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
end script
This will run the script just before mongod is started.
Restart mongod (sudo service mongod restart) and done.
In Ubuntu I used the option 'Init Script' of this document: http://docs.mongodb.org/manual/tutorial/transparent-huge-pages/
None of these worked for me on an Amazon EC2 instance running Ubuntu 14.04, not even the init.d script recommended by MongoDB. I had to use the hugeadm tool: first install it via apt-get, then run sudo hugeadm --thp-never (this post pointed me to hugeadm). I'm still trying to figure out how to disable the transparent_hugepage defrag; hugeadm doesn't seem to have an easy way to do that.
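Since hugeadm only seems to cover the enabled flag, one workaround sketch (not a hugeadm feature, just the same sysfs write the other answers use) is:
# Disable the defrag knob directly; the path may contain redhat_transparent_hugepage on RHEL-based systems
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag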
I don't know if this was an effect of the Shellshock attack my server fell victim to (or another attack that worked), but it basically enabled the hacker to overwrite my SSH config file when the server rebooted.
This new file used wget to load in a file from a website, then another library of hack functions which I guess he then used to run hacks/DoS attacks from my server. I caught it pretty fast and ideally want to upgrade, but because I have cancer and just had a big operation it is too much effort at the moment.
Therefore I did a lot of housekeeping: changing passwords, removing shell access, reverting back to dash, pointing the default shell for root and any other users to another folder via symbolic links, restoring the config file for SSH, and removing CGI functionality from config files, e.g.
ScriptAlias /cgi-bin/ /home/searchmysite/cgi-bin/
<Directory /home/searchmysite/cgi-bin/>
allow from all
</Directory>
I removed AWStats and Webalizer for all Virtualmin sites.
I already had DenyHosts and Fail2Ban installed.
I also blocked in/outbound traffic to the IPs of the sites he was getting the files from.
However, since these changes it seems I have lost the visual cron manager in Webmin.
When I go to the menu item "Scheduled Cron Jobs", it says, "The command crontab for managing user Cron configurations was not found. Maybe Cron is not installed on this system?"
However I can see in the file system it exists.
When I run crontab -l or crontab -e I get "Permission Denied"
whoami shows "root"
At the time of the hack I did think this was all related and that he had used SSH and a cron job to get his hack running.
What I want to know is how I can get the CronTab manager back.
All the cron jobs are still running, such as importing feeds into my websites, sending scheduled emails and so on; what I don't know is how to resolve this without a full rebuild.
If I had the time and energy I would do that but I am totally drained and before this hack everything was just running smoothly and my websites which bring me in money were working fine.
They are currently still working fine and I regularly check my logs for IPs that look odd, have strong .htaccess rules against XSS/SQL injection/path traversal/file hacks, and ban whole countries via Cloudflare, which the site sits behind. So I don't "think" the machine is compromised at the moment, even if it is old - could be wrong though!
Details of the box:
Operating system: Debian Linux 5.0
Virtualmin version: 3.98.gpl GPL
Webmin version: 1.610
Kernel and CPU: Linux 2.6.32.9-rscloud on x86_64
So if anyone can help me get my crontab manager back that would be great.
Thanks
1) Check if chattr exists; if not, download a fresh copy.
2) Type whereis crontab, then run chattr -isa /path/to/crontab (usually /usr/bin/crontab), then chmod crontab back to its original settings.
3) navigate to /var/spool/ and
chattr -isa cron
cd cron
chattr -isa crontabs
4) Remove the rogue cron entry in /etc/cron.weekly.
Look in /etc/cron.weekly for any new entries the attacker may have added and delete them.
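As a rough consolidated sketch of the steps above (paths assume a Debian layout; the mode and group used for chmod/chown are the usual Debian defaults for crontab, so verify them against a clean machine first):
# See whether the attacker set the immutable/append-only attributes
lsattr -d /usr/bin/crontab /var/spool/cron /var/spool/cron/crontabs
# Clear those attributes and restore the normal setgid crontab permissions on the binary
chattr -isa /usr/bin/crontab
chown root:crontab /usr/bin/crontab
chmod 2755 /usr/bin/crontab
chattr -isa /var/spool/cron /var/spool/cron/crontabs
# Finally, review the periodic job directories for anything you don't recognise
ls -la /etc/cron.weekly /etc/cron.daily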