Method to find DNS hijack?

I'm getting worried. On one specific computer running Windows 10 and Chrome, maybe once every 30-60 days, I get a different web page than the URL I manually typed in.
I have tried Kaspersky and Avast, but neither of them found anything on my computer.
I suspect that some service on my computer occasionally injects itself between Chrome and the DNS server and hands back the wrong IP address for the site I'm looking for.
I have a Ubiquiti EdgeRouter and cannot see any odd settings in it.
I have tried running Wireshark, but since this happens so rarely, I have never managed to have it capturing while it happens.
The pages I land on instead of the ones I want are online gambling sites.
I have searched the web for instructions on how to troubleshoot this and tried many of them without success.
What is the best way to track this down? Grateful for any help or direction.
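Since the redirects are too rare to catch with Wireshark by hand, one option is to leave a small logger running that records, with timestamps, what the system resolver returns for a few domains you visit often, plus which DNS server the machine is currently using. A minimal sketch, not a finished tool; the domain list, interval, and log file name are assumptions to adapt:

    import socket
    import subprocess
    import time
    from datetime import datetime

    # Domains you visit regularly; adjust to the sites where the redirects happened.
    DOMAINS = ["example.com", "www.google.com"]
    LOG_FILE = "dns_watch.log"
    INTERVAL_SECONDS = 300  # check every 5 minutes

    def log(line):
        stamp = datetime.now().isoformat()
        with open(LOG_FILE, "a", encoding="utf-8") as f:
            f.write("{} {}\n".format(stamp, line))

    while True:
        # Record which DNS server Windows is currently using (the first lines of
        # nslookup output name the default resolver), so a temporarily changed
        # resolver setting still shows up in the log.
        out = subprocess.run(["nslookup", "localhost"],
                             stdout=subprocess.PIPE, stderr=subprocess.DEVNULL,
                             universal_newlines=True).stdout.splitlines()[:2]
        log("resolver: " + " | ".join(s.strip() for s in out))

        # Record what the system resolver returns for each watched domain.
        for domain in DOMAINS:
            try:
                addresses = socket.gethostbyname_ex(domain)[2]
                log("{} -> {}".format(domain, ", ".join(addresses)))
            except OSError as exc:
                log("{} -> lookup failed: {}".format(domain, exc))

        time.sleep(INTERVAL_SECONDS)

When the redirect happens again, compare the log entries from around that time with a manual lookup against a known-good resolver (for example, nslookup of the same domain against 1.1.1.1); a mismatch points at the local resolver path rather than the site itself.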

Related

How to troubleshoot an Angular app hosted in WSL2

I am very new to development, so I apologize in advance if I am not being clear enough. I appreciate any feedback on the question and how to better pose it.
I'm currently working on an Angular app hosted in WSL2. To get it to run locally, this is what I do:
Launch the solution in VS Code
From the terminal, run npm start
Then, I click on Run and Debug
Once I get the Now listening on: ..... message, I move to Chrome to start debugging my app
Since this morning, I keep getting an ERR_CONNECTION_REFUSED error in the browser, and I'm currently waiting on IT to step in, since security changes that block the requests could have been put in place. Also, I'm not using the VPN.
In the meantime, is there a way to know for sure (or get as close as possible to) what is causing the connection to be refused in my particular scenario? I'm curious about it, but I don't know how to even search the topic properly for lack of the right terms.
What should one do to at least obtain more details about the issue on their own (perhaps to help expedite help by collecting important info upfront)? Since my apps are hosted in WSL2, I get very confused between the IP address originating the request and that of the server.
I appreciate any guidance anyone could provide.
It turned out there was VPN software running with its firewall turned ON, even though I was not connected to the VPN (it switched on after I connected to the VPN the previous day). I guess there was nothing else I could have done anyway.
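For anyone in a similar spot: before waiting on IT, a quick first check is whether anything is actually listening on the port the browser is trying to reach. A minimal sketch; the host and port are assumptions, so use whatever address the "Now listening on:" message reports:

    import socket

    # Hosts/ports to probe; adjust to match the URL the browser is using.
    TARGETS = [("localhost", 4200), ("127.0.0.1", 4200)]

    for host, port in TARGETS:
        try:
            # A successful connect means something is listening there;
            # ECONNREFUSED means nothing is bound to that address/port,
            # or a firewall is actively rejecting the connection.
            with socket.create_connection((host, port), timeout=3):
                print("{}:{} -> something is listening".format(host, port))
        except ConnectionRefusedError:
            print("{}:{} -> connection refused".format(host, port))
        except OSError as exc:
            print("{}:{} -> {}".format(host, port, exc))

Running it once from Windows Python and again from inside the WSL2 distro helps narrow down whether the app itself is down or the Windows-to-WSL2 forwarding (or a firewall, as it turned out here) is the problem.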

Determining Website Crash Time on Linux Server

2.5 months ago, I was running a website on a Linux server to do a user study of 3 variations of a tool. All 3 variations ran on the same website. While I was conducting the user study, the website (i.e., the process hosting the website) crashed. In my sleep-deprived state, I unfortunately did not record when the crash happened. However, I now need to know a) when the crash happened, and b) how long the website was down until I brought it back up. I only have a rough timeframe for when the crash happened and how long it was down, but I need to pinpoint this information as precisely as possible to do some time-on-task analyses with my user study data.
The server runs Ubuntu 16.04.4 LTS (GNU/Linux 4.4.0-165-generic x86_64) and has been minimally set up to run our website. As such, it is unlikely that any utilities aside from those that came with the OS have been installed, and no additional setup has likely been done. For example, I tried looking at the history of commands in the hope that HISTTIMEFORMAT had previously been set so that I could see timestamps. This ended up not being the case; while I can now see timestamps for commands, setting HISTTIMEFORMAT is not retroactive, meaning I can't get accurate timestamps for the commands I ran 2.5 months ago. That all being said, if you have an idea that you think might work, I'm willing to try it (as long as it doesn't break our server)!
It is also worth mentioning that I currently do not know whether it's possible to get a remote desktop or something of the like; I've just been SSHing in and using the terminal to interact with the server.
I've been bouncing ideas off friends and colleagues, and we all feel that there must be SOMETHING we could use to pinpoint when the server went down (e.g., network activity logs showing spikes around the time the user study began and when the website was revived, a log of previous/no-longer-running processes, etc.). Unfortunately, none of us knows enough about Linux logs or commands to really dig deep into this very specific issue.
In summary:
I need a timestamp for either when the website crashed or when it was revived. It would be nice to have both (or to otherwise determine how long the website was down), but that is not strictly necessary.
I'm guessing only a "native" Linux command will be useful, since nothing new/special has been installed on our server; otherwise, any additional command/tool/utility would have to work retroactively.
It may or may not be possible to get a remote desktop working with the server (e.g., to use some GUI tool to help gather information).
My colleagues and I share that sense of "there must be SOMETHING we could use" among the various logs and system information, such as network activity, process start times, etc., but none of us knows enough about Linux to do the deep digging without some help.
Any ideas for what I can try to figure out at least when the website crashed (if not also how long it was down)?
A friend of mine pointed me to the journalctl command, which keeps timestamped system logs independently of HISTTIMEFORMAT; for me these went as far back as October 7. They contained enough information to determine both when I revived my Node.js server and when it initially went down.
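To make that concrete: the journal can be filtered by time and searched for mentions of the crashed process, which is roughly how the two timestamps were recovered here. A rough sketch; the time window and the "node" pattern are assumptions to adjust to the actual process name and study period:

    import subprocess

    # Ask journalctl for timestamped entries from the rough window when the crash
    # happened; --no-pager keeps the output plain, -o short-iso gives ISO timestamps.
    result = subprocess.run(
        ["journalctl", "--no-pager", "-o", "short-iso", "--since", "3 months ago"],
        stdout=subprocess.PIPE, universal_newlines=True,
    )

    # Keep only lines mentioning the crashed process (assumed here to be Node.js).
    for line in result.stdout.splitlines():
        if "node" in line.lower():
            print(line)

If the journal includes kernel or systemd messages about the process exiting and, later, starting again, those two timestamps bracket the downtime.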

Stop browser from telling me my local dev site is dangerous?

Sorry, I wasn't sure how best to tag this. Maybe someone can help? Or maybe this even belongs on a different site?
IE is giving me the big red screen telling me I should not proceed, but the address is one that is set in my hosts file to point to the local dev machine we've used for years. I guess someone got hold of the address and used it publicly in a malicious way.
Is there a way to prevent IE from showing this red screen, or is my only choice to change the local hostname that I use for development?
It's super annoying because it pops up on every navigation.

Cannot access websites on Apache from outside the server

I have a Debian 7.5-based Ubuntu server running Apache 2.2.22.
It's a rather vanilla XAMPP install used as a basic web server.
It used to work fine, and I have no idea why it suddenly stopped working (there was some maintenance today, but it still worked when I left it; I changed partition sizes with GParted).
When I try to access a website from the server itself (tried with w3m), everything works, including PHP and MySQL access.
When I try to access the same host (using a domain) from outside, the browser keeps loading for a long while, eventually (after a few minutes) saying the page could not be loaded.
I made sure that the ports are open and accessible with an outside scanner.
So I'm sure Apache is available (it works from inside the network; websites load over SSH using w3m, and pinging works).
I'm sure the server is connected to the web (I can use PuTTY to SSH in).
The host resolves to the correct IP (but won't respond to ping from outside, only from inside).
The ports seem to be open (scanned and got OK for port 80).
I'm not an IT professional, so if there is info I can add that could help, just ask.
I would really appreciate any idea or direction.
Thanks!
I still suspect the UFW/iptables firewall is blocking all incoming connections... Please go through this article and double-check:
http://www.cyberciti.biz/faq/ubuntu-server-disable-firewall/
If you're sure that the firewall config is OK, please try packet capturing with Wireshark to see what's going on underneath.
http://www.youtube.com/watch?v=sOTCRqa8U9Y (how to install)
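To separate a firewall/forwarding problem from a DNS or Apache problem, it can also help to test raw TCP reachability of port 80 from a machine outside the network. A minimal sketch; the hostname is a placeholder for the server's public name or IP:

    import socket

    HOST = "example.com"  # placeholder: the server's public hostname or IP
    PORT = 80

    try:
        # What the hostname resolves to from this outside machine.
        print("{} resolves to {}".format(HOST, socket.gethostbyname(HOST)))
        # A completed TCP handshake means the port is reachable; a timeout
        # usually points at a firewall or NAT/port-forwarding issue rather
        # than at Apache itself.
        with socket.create_connection((HOST, PORT), timeout=10) as conn:
            conn.sendall(b"HEAD / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
            print(conn.recv(200).decode(errors="replace"))
    except socket.timeout:
        print("TCP connection timed out (likely firewall/NAT, not Apache)")
    except OSError as exc:
        print("Connection failed: {}".format(exc))

If the TCP connection succeeds but the browser still hangs, the problem is more likely in Apache's configuration or name resolution than in the firewall.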
Thanks for the help.
Oddly enough, it just started working again after 12 hours of not working.
A friend of mine, an IT person, just called to try and help; he simply connected (5 minutes after I tried) and said it was all working for him.
I tried, and it's working for me as well.
I have no idea why it stopped working, or why it is working now.
I think it might be an ISP problem or a router issue... The server is in our offices, so I guess it could be either. I just don't understand why SSH would work and HTTP wouldn't.

Slow website even though VPS is up and running

Sorry if this is a bit of a newbie question, but I am quite new to VPSs and their relatively more complicated setup. I have a VPS set up, and once or twice a day the site loads for about 10 minutes with no luck. Then, when it comes back online, it's fine after that. Upon logging on to Plesk, the server is up and running, with very low CPU usage (0.10, dropping to 0.00 after a few minutes) and around 18% RAM usage.
The MySQLAdmin loads up fine.
So it seems the VPS is running fine.
Is there maybe another reason? The domain is with Daily.co.uk and the VPS is with LCN.com. Could there be a problem somewhere else? On Daily.co.uk, there are two nameservers set: ns0.etc*** and ns1.etc***. I did a tracert from the Windows command prompt; it traced down to the server, with two timeouts.
I also ran a check on http://dnscheck.pingdom.com/ while the site was slow, and it came back fine except for this: "Too few IPv4 name servers (1). Only one IPv4 name server was found for the zone. You should always have at least two IPv4 name servers for a zone to be able to handle transient connectivity problems."
Any help would be appreciated. I have tried searching but with no luck.
The recommended diagnostic for the issue you are experiencing is a DNS lookup with dig.
On your Windows system, this tool is not available out of the box, but it can be downloaded from http://members.shaw.ca/nicholas.fong/dig/
Once you have installed it, run it from the command prompt with the following syntax:
C:\> dig <your domain here> +trace
This will show you how DNS resolution proceeds from your location to the requested endpoint. Chances are, the error you received is correct. Most DNS setups assign several name servers to your domain registration so that the delegated name servers can be used round-robin if one becomes unresponsive.
My personal recommendation would be to outsource the DNS to a managed provider. Doing so will increase the availability of the zone and reduce latency.
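Since the slowness is intermittent, it can also help to log how long name resolution takes from your own machine while the problem is happening, so a DNS stall can be told apart from a slow web server. A small sketch; the domain and interval are placeholders:

    import socket
    import time
    from datetime import datetime

    DOMAIN = "example.com"   # placeholder: your site's domain
    INTERVAL_SECONDS = 60

    while True:
        start = time.monotonic()
        try:
            addresses = socket.gethostbyname_ex(DOMAIN)[2]
            elapsed = time.monotonic() - start
            print("{:%Y-%m-%d %H:%M:%S} resolved to {} in {:.2f}s".format(
                datetime.now(), addresses, elapsed))
        except OSError as exc:
            elapsed = time.monotonic() - start
            print("{:%Y-%m-%d %H:%M:%S} lookup failed after {:.2f}s: {}".format(
                datetime.now(), elapsed, exc))
        time.sleep(INTERVAL_SECONDS)

If lookups stay fast while the site hangs, the single IPv4 name server is probably not the culprit and the delay lies further along, in the network path or on the VPS itself.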
