trouble starting shorewall on Linode - firewall

I'm having trouble configuring Shorewall on my Linode instance.
I just thought you might know of an issue, perhaps related to your Xen virtualization, with running Shorewall on it...
When attempting to start Shorewall I get the following error:
"ERROR: UNTRACKED state requires Raw Table in your kernel and iptables"
Any ideas would be appreciated.
Thanks

Ideally the kernel should have CONFIG_IP_NF_RAW (and CONFIG_IP6_NF_RAW for IPv6) enabled, which provides support for the missing "Raw Table" mentioned in the error.
A link to an (unmaintained) page for kernel configuration options with Shorewall can be found here:
http://shorewall.net/kernel.htm
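For a quick check of whether the raw table is actually available on the running kernel, something along these lines (assuming iptables is installed) will tell you:
iptables -t raw -L -n
If the table is missing, the command fails with an error along the lines of "can't initialize iptables table `raw'".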
However, if you are unable to update the kernel, you may be able to work around the issue by editing the shorewall.conf (or shorewall6.conf) file, and changing the following line:
BLACKLIST="NEW,INVALID,UNTRACKED"
to:
BLACKLIST="NEW,INVALID"
This would, obviously, reduce the effectiveness of the firewall somewhat, so ideally the kernel should be updated instead.
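As a rough sketch of applying the workaround (using the standard Shorewall command-line tools), after editing the file you can validate and reload the configuration with:
shorewall check
shorewall restart
(or shorewall6 check and shorewall6 restart for the IPv6 configuration).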

Related

"No Kernel!" error Azure ML compute JupyterLab

When using the JupyterLab found within the Azure ML compute instance, every now and then I run into an issue where it says that the network connection is lost.
I have confirmed that the computer is still running.
The notebook itself can be edited and saved, so the computer/VM is definitely running.
Of course, the internet is fully functional.
In the top right corner, next to the now-blank circle, it says "No Kernel!"
We can't repro the issue; can you help give us more details? One possibility is that the kernel has bugs and hangs (could be due to installed extensions or widgets), or the resources on the machine are exhausted and the kernel dies. What VM type are you using? If it's a small VM you may have run out of resources.
After some troubleshooting on the internet, I found that you can force a reconnect (if you wait long enough, a few minutes, it will do so on its own) by using Kernel > Restart Kernel.
Based on my own experience, this seems to be a fairly common issue, but I did spend a few minutes figuring it out. Hope this helps others who run into it.
Check your browser console for any language-pack loading errors.
Part of our team had this issue this week. The root cause for us was some language packs for pt-br not loading correctly; once the affected team members changed the page/browser language to en-us, the problem was solved.
I have been dealing with the same issue. After some research around this problem, I learnt that my firewall was blocking JupyterLab, Jupyter and the terminal; allowing access to them solved the issue.

Cygwin intermittently loses its mapped drives in /cygdrive

So, I have a collection of Windows Server 2016 virtual machines that are used to run some tests in pairs. To perform these tests, I copy a selection of scripts and files from the network onto the machine before running them.
I'm basically using a selection of scripts that have existed around here since before my time, and whilst I would like to use other methods, so much of our infrastructure relies on these scripts that overhauling the system would be a colossal task.
First up, I sort out the mapped drives with
net use X: \\network\location1 /user:domain\user password
net use Y: \\network\location2 /user:domain\user password
and so on
Soon after, I use rsync to copy files from a location in /cygdrive/y/somewhere to /cygdrive/c/somewhere_else
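For illustration, the copy step looks roughly like the following (using the placeholder paths from above; the rsync options are just an example, not my exact invocation):
rsync -av /cygdrive/y/somewhere/ /cygdrive/c/somewhere_else/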
During the rsync, I will get errors that "files have vanished" (I'm currently unable to post the exact error, I will edit this later to include it). When I check what's currently in the /cygdrive directory, all I see is /cygdrive/c and everything else has disappeared.
I've tried making a symbolic link to /cygdrive/y in a different location, I've tried including persistent:yes on the net use command, I've changed the power settings on the network card to not sleep. None of these work.
I'm currently looking into the settings for the virtual machines themselves, but I have some doubts, as we have other virtual Windows machines that do not seem to have this issue.
Has anyone heard of anything similar and/or know of a decent method to troubleshoot this?
Right, so I've been working on this all day and finally noticed a positive change, but since my systems are in VMware's vCloud, this may not work for some people. It was simply a matter of having the VM turned off and upgrading the Virtual Hardware Version to the latest version. I have noticed with this, though, that upon a restart, one of the first messages that comes up mentions that the computer is "disabling group policies".
I did a bit of research into this and found out that Windows 8 and 10 (no mention of any Windows Server machines) both automatically update Group Policies in the background, disconnecting and reconnecting mapped drives to recreate them.
It's possible that changing the Group Policy drive mapping from "recreate" to "update" would fix this issue, and that the Virtual Hardware update happened to resolve it in a similar manner.

illegal activity on virtual server: ntp.client & smartctl.dump

We have a virtual web server with Ubuntu 12.04. Today we received a message from the web hoster because there is illegal activity on this server.
I found bad code in different Joomla installations and cleaned it. Now I have two processes on this server, started by our FTP user with the following commands:
/tmp/ntp.client -p9406 -d
/tmp/smartctl.dump -p3218 -d
They use a lot of CPU time and look similar, and Google says nothing about ntp.client or smartctl.dump.
Can anybody say something about these processes? Can I kill them?
Thanks
PS: sorry for my English!
Unless you installed them in /tmp yourself, get rid of them. And reinstall the server. Those two are easy to spot; you have no idea how many well-hidden backdoors you already have on the system. Or better yet, get someone to install it for you and take care of it/secure it for you ...
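If you want to confirm what they are before you rebuild, a rough sketch for inspecting and stopping them (the PID is a placeholder you'd take from the ps output) looks like:
ps aux | grep -E 'ntp\.client|smartctl\.dump'
ls -l /proc/<PID>/exe /proc/<PID>/cwd
kill <PID>
Killing them only removes the visible symptom, though; the advice to reinstall still stands.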
edit: And see this canonical question and the other linked questions on ServerFault, where this question actually belongs.

Collectd server not writing down received client data

I have a pretty strange problem with Collectd. I'm not new to Collectd, having used it for a long time on CentOS-based boxes, but now we have Ubuntu 12.04 LTS boxes, and I have a really strange issue.
So, I'm using version 5.2 on Ubuntu 12.04 LTS. Two boxes residing on Rackspace (maybe important, but I'm not sure). The network plugin is configured using two local IPs, without any firewall in between and without any security (just to try to set up a simple client-server scenario).
On both machines collectd writes to its configured folders as it should, but on the server machine it doesn't write the data received from the client.
I troubleshot with tcpdump, and I can clearly see UDP traffic and collectd data, including the hostname and plugin names from my client machine, arriving at the server, but it is never flushed to the appropriate folder (configured by collectd). I'm also running everything as root, to avoid troubleshooting permissions.
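For reference, a capture along these lines (the interface name is a placeholder and 25826 is collectd's default network port) is enough to see the incoming traffic:
tcpdump -n -i eth0 udp port 25826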
Does anyone have any idea or similar experience with this? Or maybe some idea of what I could do to troubleshoot this, besides crawling the internet (I think I clicked on every sensible link Google gave me in the last two days) and checking the network layer (which looks fine)?
And just a small note: exactly the same thing happened with the official 4.10.2 version from Ubuntu's repo. After trying to troubleshoot it for hours, I moved on to upgrading to version five.
I'd suggest trying out the quite generic troubleshooting procedure based on the csv and logfile plugins, as described in this answer. As everything seems to be fine locally, follow this procedure on the server, activating only the network plugin (in addition to logfile, csv and possibly rrdtool).
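As a sketch of what that minimal server-side configuration might look like (the listen address and paths below are placeholders, not values from the question):
LoadPlugin logfile
LoadPlugin network
LoadPlugin csv
<Plugin logfile>
  LogLevel info
  File "/var/log/collectd.log"
</Plugin>
<Plugin network>
  Listen "192.0.2.10" "25826"
</Plugin>
<Plugin csv>
  DataDir "/var/lib/collectd/csv"
</Plugin>
If values from the client appear under the csv DataDir, the network side is fine and the problem is in the write plugin; if nothing appears there or in the log, the network plugin is not handing the data over.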
So, after finding no way of fixing this, I upgraded my Ubuntu to 12.04.2 LTS (3.2.0-24-virtual) and it just started working fine, without any further intervention.

Running external code in a restricted environment (linux)

For reasons beyond the scope of this post, I want to run external (user submitted) code similar to the computer language benchmark game. Obviously this needs to be done in a restricted environment. Here are my restriction requirements:
Can only read/write to current working directory (will be large tempdir)
No external access (internet, etc)
Anything else I probably don't care about (e.g., processor/memory usage, etc).
I myself have several restrictions. A solution which uses standard *nix functionality (specifically RHEL 5.x) would be preferred, as then I could use our cluster for the backend. It is also difficult to get software installed there, so something in the base distribution would be optimal.
Now, the questions:
Can this even be done with externally compiled binaries? It seems like it could be possible, but also like it could just be hopeless.
What if we force the code itself to be submitted and compile it ourselves? Does that make the problem easier or harder?
Should I just give up on home directory protection, and use a VM/rollback? What about blocking external communication (isn't the VM usually talked to over a bridged LAN connection?)
Something I missed?
Possibly useful ideas:
rssh. Doesn't help with compiled code though
Using a VM with rollback after code finishes (can network be configured so there is a local bridge but no WAN bridge?). Doesn't work on cluster.
I would examine and evaluate both a VM and a special SELinux context.
I don't think you'll be able to do what you need with simple file-system protection, because you won't be able to prevent access to syscalls that allow access to the network, etc. You can probably use AppArmor to do what you need, though; it uses kernel-level mandatory access control to confine the foreign binary.
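As a very rough sketch of the AppArmor approach (the profile path, binary path and working directory below are made-up placeholders, and whether AppArmor is available at all depends on your distribution and kernel), a profile along these lines confines a runner binary to its working directory and denies networking:
# /etc/apparmor.d/opt.sandbox.runner
#include <tunables/global>
/opt/sandbox/runner {
  #include <abstractions/base>
  # explicitly deny all network access
  deny network,
  # allow read/write only inside the per-run working directory
  /var/tmp/sandbox/** rw,
}
On RHEL the closest equivalent would be a custom SELinux policy module, since RHEL ships SELinux rather than AppArmor.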
