File upload is limited to 1M, even after PHP, Nginx, and Apache configuration - firewall

I'm asking about an issue with a WordPress website that runs on Ubuntu 18.04 behind a WAF (Web Application Firewall) service.
The server had been working fine for a year. Four days ago I tried to upload a file and got an HTTP error.
upload_max_filesize and the other relevant configuration values are set to about 2G.
First of all, I checked the VM configuration and found that the VM memory had been reduced to 4G. After increasing the memory, I checked the PHP, Nginx, and Apache configs in Vestacp, and they had not changed. Then I tried the upload again, this time from the local network, and the file uploaded successfully!
Here are my questions:
Did I miss anything while checking the configurations?
Is the WAF the reason for the problem?
Could reducing and then increasing the VM memory have caused the problem?
And finally, how can I fix this?
Edit: Can someone explain the Juniper strategy and why it acted this way?

Double-check your php.ini settings; look for upload_max_filesize and post_max_size. Both of those values affect the maximum upload size of a form. Also, don't forget to restart your server services after making changes.
If that does not work, check the output of phpinfo() to make sure your settings were saved and that the appropriate php.ini file is being loaded. If so,
check the code for ini_set( ... ) calls. If one of them is changing the limit dynamically as the script executes, change it. Or, as a blanket fix, you can call that function yourself and set the post/upload maximum sizes to what you want. However, you should still investigate what is actually going on.
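As a rough illustration (the 2G values, PHP version, and service names below are assumptions for an Ubuntu 18.04 stack, not something taken from your server), the relevant php.ini lines and a quick sanity check might look like this:

    ; in the php.ini reported by phpinfo() as "Loaded Configuration File"
    upload_max_filesize = 2G
    post_max_size = 2G

    # verify the values, then restart the services (service names vary by setup)
    php -i | grep -E 'upload_max_filesize|post_max_size'
    sudo systemctl restart php7.2-fpm nginx apache2

Keep in mind that the CLI can load a different php.ini than the web server, so trust a phpinfo() page served through the web server rather than the CLI output.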

The problem was solved after bypassing the WAF.
Edit (Feb 15, 2019):
According to the network security administrator's investigation of the issue, the WAF saw a couple of malicious requests to the server and made the right call by blocking the rest of them. However, we are still trying to find the source of those requests. One theory is that the MSN Bot is doing this.

Related

For IIS, what is the best way to update static files while maintaining availability on a live website?

I've investigated this and found lots of related information, but nothing that answers my question.
A little background: a small set of files is served statically by IIS 10. These files usually need to be updated weekly, but never more than once an hour (unless someone manually runs an update utility for testing). The files are expected to be a couple of kilobytes in size, no larger than 10 kilobytes. The update process can run on the IIS server and will be written in PowerShell or C#.
My plan for updating files that are actively being served as static files by IIS is:
Copy the files to a temporary local location (on the same volume)
Attempt to move the files to the IIS static site location
The move may fail if the file is in use (by IIS), so implement a simple retry strategy for this (sketched below).
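Roughly, I'm imagining something like the following PowerShell sketch (the paths and retry count are placeholders, not a final implementation):

    # assumes the new files have already been copied into a staging dir on the same volume
    $staging = 'D:\temp\static-staging'      # placeholder path, same volume as the site
    $siteDir = 'D:\inetpub\wwwroot\static'   # placeholder path

    Get-ChildItem -Path $staging -File | ForEach-Object {
        $dest  = Join-Path $siteDir $_.Name
        $tries = 0
        while ($true) {
            try {
                # a same-volume move is a rename; -Force overwrites the existing file
                Move-Item -Path $_.FullName -Destination $dest -Force -ErrorAction Stop
                break
            }
            catch {
                if (++$tries -ge 5) { throw }     # give up after a few attempts
                Start-Sleep -Milliseconds 250     # back off in case IIS has the file locked
            }
        }
    }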
It doesn't cause a problem if there is a delay in publishing these files. What I really want to avoid is IIS trying to access one of the files at just the wrong time: a race condition while my file replacement is in progress. I have no control over the HTTP client, which might be a program that isn't tolerant of the kind of error IIS could be expected to return, such as an HTTP status 404, "Not Found".
I have a couple of random ideas:
HTTP GET the file from IIS before I replace it, with the intention of getting the file into IIS's cache, in the hope that this will improve the situation.
Just ignore this potential issue and hope for the best.
I can't be the only developer who's faced this. What's a good way to address this issue? (Or is it somehow not an issue at all and I'm just overthinking it?)
Thanks in advance for any help.

gitlab runner errors occasionally

I have GitLab set up with runners on a dedicated VM (24 GB RAM, 12 vCPUs, and very low runner concurrency=6).
Everything worked fine until I added more browser tests: 11 at the moment.
These tests are in the browser-test stage and start properly.
My problem is that the stage sometimes succeeds and sometimes doesn't, with seemingly random errors.
Sometimes it cannot resolve a host, other times it is unable to find an element on the page...
If I rerun the failed tests, everything always goes green.
Does anyone have an idea what is going wrong here?
BTW, I've checked: this dedicated VM is not overloaded...
I have resolved all my initial issues (not yet tested under full machine load), and I've decided to post some of my experiences.
First of all, I was experimenting with gitlab-runner concurrency (to speed things up), and it turned out that it filled my storage space really quickly. So for anybody experiencing storage shortages, I suggest installing this package.
Secondly, I was using the runner cache and artifacts, which in the end were cluttering my tests a bit, and I believe that was the root cause of my problems.
My observations:
If you want to take advantage of the cache in gitlab-runner, remember that by default it is only accessible on the host where the runner starts, and that the cache is restored on top of your checkout, meaning it can override files from your project.
Artifacts are a little more flexible, because they are stored on and fetched from your GitLab installation. You should develop your own naming convention (using CI variables) for them, to control what is fetched/cached between stages and to make sure everything works as you expect (see the sketch below).
Cache and artifacts in your tests should be used with caution and understanding, because they can introduce a ton of problems if not used properly...
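A minimal .gitlab-ci.yml sketch of what I mean (the job name, paths, and the Dusk command are examples from my kind of setup, not something to copy verbatim):

    browser-test:
      stage: browser-test
      cache:
        key: "$CI_COMMIT_REF_SLUG"          # per-branch cache, kept on the runner host
        paths:
          - vendor/
          - node_modules/
      artifacts:
        name: "dusk-$CI_COMMIT_REF_SLUG-$CI_JOB_ID"   # explicit naming via CI variables
        when: always
        expire_in: 1 week
        paths:
          - tests/Browser/screenshots/
      script:
        - php artisan dusk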
Side note:
Although my VM was not overloaded, certain lags in storage were causing timeouts in the network, and finally in Dusk, when running multiple gitlab-runners concurrently...
Update as of 2019-02:
Finally, I have tested this under full load, and I can confirm that my earlier side note about machine overload is more than true.
After tweaking the Linux parameters that govern heavy load (max open files, connections, sockets, timeouts, etc.) on the hosts running gitlab-runners, all concurrent tests pass green, without any strange, occasional errors.
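For reference, this is the kind of tuning I mean (the file names, the runner user, and the values are illustrative, not recommendations):

    # /etc/security/limits.d/gitlab-runner.conf -- raise the open-file limit for the runner user
    gitlab-runner  soft  nofile  65535
    gitlab-runner  hard  nofile  65535

    # /etc/sysctl.d/99-ci-tuning.conf -- file handle and socket headroom
    fs.file-max = 2097152
    net.core.somaxconn = 1024
    net.ipv4.tcp_fin_timeout = 15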
Hope this helps anybody configuring gitlab-runners...

When should an Azure website be restarted, and what are the consequences?

In the Azure Management Portal, you can configure your website. As an example, you can change the PHP version your website is using. When you have edited a configuration option, you have to click “Save”.
So far, so good. But you also have the option to restart your site (by clicking “Restart“ next to “Save”).
My question is, when should you restart your website? Are there some configuration changes that require a restart, and others that don't? I haven't found any hints in the user interface.
Are there other situations that require a restart? Say, when the website has been running for a long time without one?
Also, what are the consequences of restarting a website? Does it affect cookies/sessions in any way (i.e. delete a user's shopping cart or log them out)? Are there any other consequences I should be aware of?
Generally speaking, you may want to restart your website because of application performance issues. For example, you may have a memory leak in your application, connections not getting closed, or other things that degrade the performance of the application over time. As you monitor your website and observe conditions like this, you may decide to restart it. Even better, you may automate the task of restarting when these conditions occur. Anyway, these kinds of things are not unique to Azure Websites; you would take similar actions for a website running on-premises.
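For example, assuming the current Azure CLI is available to you, that automation could be as simple as scheduling a command like this (the site and resource group names are placeholders):

    az webapp restart --name my-site --resource-group my-rg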
As for configuration changes, if you make a change to your web.config file, this change is detected and your website is restarted automatically for you. Similarly, if you make configuration changes in the CONFIG page of your website in the Azure Management Portal, such as application settings, connection strings, etc., then Azure Websites will detect the change to your environment and automatically restart it.
Indeed, restarting a website will result in any session data kept in memory being lost for that instance. Additionally, if you have startup/initialization code that takes time to complete then that will have to be rerun. Again, this is not anything unique to Azure Websites though.

Slow website even though VPS is up and running

Sorry if this is a bit of a newbie question, but I am quite new to VPS hosting and its relatively more complicated setup. I have a VPS set up, and once or twice a day the site loads for about 10 minutes with no luck. Then, when it comes back online, it's fine after that. Upon logging in to Plesk, the server is up and running, with very low CPU usage (0.10, dropping to 0.00 after a few minutes) and around 18% RAM usage.
The MySQLAdmin loads up fine.
So it seems the VPS is running fine.
Is there maybe another reason? The domain is with Daily.co.uk and the VPS is with LCN.com. Could there be a problem somewhere else? On daily.co.uk, two nameservers are set: ns0.etc*** and ns1.etc***. I ran a tracert from the Windows command prompt; it traced down to the server, with two timeouts.
I also tried a check on http://dnscheck.pingdom.com/ while the site was slow and this came back fine, except this: Too few IPv4 name servers (1). Only one IPv4 name server was found for the zone. You should always have at least two IPv4 name servers for a zone to be able to handle transient connectivity problems.
Any help would be appreciated. I have tried searching but with no luck.
The recommended diagnostic check for the issue you are experiencing is a dig query.
On a Windows system this tool is not available out of the box, but it can be downloaded from http://members.shaw.ca/nicholas.fong/dig/
Once you have installed it, you'll want to run it from the command prompt with the following syntax:
C:> dig -insert your domain here- +trace
This will show you how DNS resolution happens from your location to the requested endpoint. Chances are, the error you received is correct. Most DNS setups assign several name servers to your domain registration so that the delegated name servers can be round-robined in the event that one becomes unresponsive.
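If you would rather not install anything, Windows' built-in nslookup can at least list the name servers delegated for the zone (replace the example domain with your own):

C:> nslookup -type=NS yourdomain.co.uk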
My personal recommendation would be to outsource the DNS to a managed provider. Doing so will increase the availability of the zone, and reduce latency.

FTP suddenly refuses connection after multiple & sporadic file transfers

I have an issue that my idiot web host support team cannot solve, so here it is:
When I'm working on a site and uploading many files here and there (small files, most of them a few dozen lines at most, mostly PHP and JS files, with some PNG and JPG files), after multiple uploads in a very short timeframe the FTP chokes on me. It cuts me off with a "refused connection" error from the server end, as if I were brute-force attacking the server or trying to overload it. Then, after 30 minutes or so, it seems to work again.
I have a dedicated server with inmotion hosting (which I do NOT recommend, but that's another story - I have too many accounts to switch over), so I have access to all logs etc. if you want me to look.
Here's what I have as settings so far:
I have my own IP on the whitelist in the firewall.
FTP settings allow a maximum of 2000 connections at a time (which I am nowhere near hitting; most of the accounts I manage myself, without client access allowed)
Broken Compatibility ON
Idle time 15 mins
On regular port 21
regular FTP (not SFTP)
access to a sub-domain of a major domain
Anyhow, this is very frustrating because I have to pause my web development work in the middle of an update. Restarting FTP in WHM doesn't seem to resolve it right away either; I just have to wait. However, when I try to access the website directly through the browser, or use ping/traceroute to see if I can reach it, there's no problem: only the FTP is cut off.
The FTP server is configured for that behavior. If you cannot change its configuration (or switch to another FTP server program on the server), you can't avoid it.
For example, vsftpd has many such configuration switches.
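As a sketch only (these directive names are vsftpd's; other FTP daemons, including whatever your WHM box actually runs, use different ones), the limits involved look something like this:

    # /etc/vsftpd.conf (illustrative values)
    # total simultaneous sessions the daemon accepts
    max_clients=50
    # simultaneous connections allowed from a single client IP
    max_per_ip=10
    # seconds before an idle session is dropped
    idle_session_timeout=900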
Moving to something else like scp or ssh should help.
(I'm not sure that calling your web host's support team idiots will help you, though.)
