In my typo3temp folder I always find a file called javascript_a1cb3a5978.js. It seems to be a JS file generated by TYPO3 to encrypt email addresses. The trojan code is always appended to it. I delete the file from the TYPO3 cache, but as soon as the page is called in the browser, the file is generated again.
I tried to download the site and scan it with Security Essentials. I also tried searching for eval, but there are too many matches in the whole TYPO3 folder. I didn't find anything in index.php, and I didn't find it in the .htaccess either. Permissions should be OK for the site.
Do you have any ideas where this code might be getting appended?
Check typo3conf/localconf.php, the typo3conf/temp_* files, and typo3conf/extTables.php.
Deactivate every extension and update your TYPO3. Check your TypoScript. I guess you should shut down your website and analyse how the attacker injected that code.
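Since searching for every eval turns up too many hits, it can help to narrow the search to patterns typical of injected code. A rough sketch in PHP (the path is a placeholder for your TYPO3 root, and the pattern is only a starting point, not a complete malware signature):

<?php
// scan.php - list PHP files containing constructs frequently used by
// injected code (eval combined with a decoding function), for manual review.
$files = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator('/path/to/typo3-root')
);
foreach ($files as $file) {
    if ($file->isFile() && $file->getExtension() === 'php') {
        $code = file_get_contents($file->getPathname());
        if (preg_match('/eval\s*\(\s*(base64_decode|gzinflate|str_rot13)/i', $code)) {
            echo $file->getPathname(), "\n";
        }
    }
}

This only locates suspicious files; it does not tell you how the code got there, so the advice above about taking the site offline and analysing the break-in still stands.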
My website is defaced, and every time I load it a page appears with a message in Turkish, and the title of the page is 'hacked by 58'.
I tried searching for this page in my public_html directory but couldn't find the HTML in any directory. Yet it still loads when we visit the website.
Any help?
nginx or Apache? Locate the configuration file; inside it there is a 'root' directive, which is the path to your index folder. Enter that folder and fix the index.html there.
By the way, if your server has been hacked, it is very likely running malicious programs. You'd better reinstall the system and harden it against future attacks, for example by using SSH key-based login.
I am working on an e-commerce website project that I created on localhost. It worked fine until I moved it online.
Since I moved it online, I've had issues accessing the admin page and index.php. I've managed to make the admin page work and can now access the back office without any issues, but my index.php still shows me a "too many redirects" error page.
What's happening?
The main page of my website is stuck in a redirect loop (Chrome error message: "This URL tried to redirect you too many times").
Every time I reload the main page, the URL switches between www.mydomain.com and mydomain.com (might it be an .htaccess issue?).
What I've done to try and solve the problem:
I have checked everything in the core_config_data table to make sure the right URLs are written in web/secure/base_url and web/unsecure/base_url. They are.
I have manually cleared the var/cache and var/session from my FTP.
I have cleared all cookies and cache from Chrome and Firefox.
I have reuploaded the files and database multiple times, thinking it might be due to a corrupted file from the upload.
I have tried to edit the .htaccess, but it didn't change anything.
What should I do now?
I feel like I've tried everything.
As it's my first time with Magento, I'm sure it's some dumb thing I don't know about, but I've read nearly every single post about this kind of issue on this website and haven't found anything that resolves it.
So I'm asking you. I'm willing to try every single idea you throw at me, as I've been stuck on this issue for a while now ^^
Thanks for reading :)
Weird, it seems that you've done everything right. Try to find and update all URL settings in core_config_data: select * from core_config_data where path like '%url%'.
You can try updating the web/url/redirect_to_base config to 0 (if it is currently 1).
Remember to clear the cache.
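If you prefer to check and fix this directly in the database, here is a minimal sketch using PDO (host, credentials and database name are placeholders for your own installation; it bypasses the admin panel, so back up core_config_data first):

<?php
$pdo = new PDO('mysql:host=localhost;dbname=magento', 'db_user', 'db_pass');

// List every URL-related setting so a wrong base URL stands out.
$rows = $pdo->query("SELECT scope, scope_id, path, value
                     FROM core_config_data
                     WHERE path LIKE '%url%'");
foreach ($rows as $row) {
    echo "{$row['scope']} {$row['scope_id']} {$row['path']} = {$row['value']}\n";
}

// Turn off the automatic redirect to the base URL (0 = disabled).
$pdo->exec("UPDATE core_config_data
            SET value = '0'
            WHERE path = 'web/url/redirect_to_base'");

Afterwards, delete var/cache again so Magento re-reads the configuration.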
Using website grabbers, a whole website can be downloaded along with its folder structure.
Is there any way to prevent this?
If so, how?
The only way to protect a website's markup is not to publish it. If you want your users to see something, they need to receive the HTML markup and the images that should be displayed, and therefore those files need to be accessible. And if your files are accessible, every user/bot/crawler/grabber can save them.
The best way is to put only a few files, such as the index page, in the main directory and call the other sub-pages from it. If you are using PHP, you can do the following.
Say you keep index.php in the main folder and homepage.php in a directory called includes, and pull the homepage into index.php via PHP's include function.
Now add a .htaccess file to the includes folder which must contain
"deny from all"
This way users can use the page but will not have direct access to the included files, and the same goes for the grabber.
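A minimal sketch of that layout, using the file names from the example above:

index.php (in the main folder):

<?php
// The only file requested directly by the browser; it pulls the real
// page content from the protected includes/ directory.
include __DIR__ . '/includes/homepage.php';

includes/homepage.php (blocked from direct requests by the "deny from all" .htaccess):

<?php
// Never served directly - only reachable through the include above.
echo '<h1>Welcome to the homepage</h1>';

Note that this only hides the sub-page files themselves; the HTML that index.php finally sends to the browser can still be saved by a grabber, as described above.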
My question pertains specifically to the two pages below, but also relates more generally to methods for using clean URLs without an .htaccess file.
http://www.decitectural.com/
and
http://www.decitectural.com/about/
The pages above are hosted on Amazon S3, which does not allow the use of .htaccess files. As a result, I have found no easy way to create a clean URL rewrite scheme that sends all requests to an index file which, in turn, interprets the URL using JavaScript and loads up the correct page (with AJAX or, as is the case with decitectural, with simple div visibility toggling).
In order to circumvent this problem, I usually edit the Amazon S3 bucket properties and set both the index page and the error page to the index.html file. In this case, the index.html file is served even when an invalid path (such as /about/) is requested. This has, for the most part, been a functioning solution... That is, until I realized that I was also getting a 404 status code with the index.html page, which would stop Google from indexing it.
This has led me to seek out an alternative solution to this problem. Currently, as a temporary fix, I am actually creating the /about/ directory on the server with a duplicate of the index.html file in it. This works, but obviously is not a real solution to the problem.
I would appreciate any advice on how to set up a clean URL routing scheme on S3 or in any instance where an .htaccess file can't be used.
Here are a few solutions: Pretty URLs without mod_rewrite, without .htaccess
Also, I guess you can run a script to create the files dynamically from an array or database so it generates all your URLs:
/index.html
/about/index.html
/contact/index.html
...
And hook the script into every edit, run it from cron, or run it manually. Not the best in terms of performance, but hey, it should work.
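A minimal sketch of such a generator script, assuming the page list lives in a simple array (in practice it would come from your database):

<?php
// generate.php - rebuild /index.html, /about/index.html, /contact/index.html
// so every clean URL maps to a real object that can be uploaded to S3.
$pages = [
    ''        => '<h1>Home</h1>',
    'about'   => '<h1>About</h1>',
    'contact' => '<h1>Contact</h1>',
];

foreach ($pages as $slug => $body) {
    $dir = __DIR__ . '/build' . ($slug === '' ? '' : '/' . $slug);
    if (!is_dir($dir)) {
        mkdir($dir, 0755, true);              // create /about, /contact, ...
    }
    file_put_contents($dir . '/index.html', "<!doctype html>\n" . $body);
}

After the build directory is regenerated, sync it to the S3 bucket.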
I think you are going about it the wrong way. S3 gives you complete control of the page structure of your site. If you want your link to be "/about", just upload a file called "about", and you're done. (Set the Content-Type header so that the browser knows it's HTML.)
Yes, it will break if someone links to "/about/" or "/about.html". But pretty much any site will break if you mess with their links in odd ways. You will have to be vigilant when linking to your own site, because you won't have any rewrite rules to clean up for you. But you should have automation doing that.
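For reference, a sketch of such an upload with the AWS SDK for PHP (the bucket name and region are placeholders; the point is only that the Content-Type is set explicitly, since the key "about" has no file extension):

<?php
require 'vendor/autoload.php';

$s3 = new Aws\S3\S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',                    // placeholder region
]);

$s3->putObject([
    'Bucket'      => 'www.example.com',          // placeholder bucket
    'Key'         => 'about',                    // served as /about
    'Body'        => file_get_contents('about.html'),
    'ContentType' => 'text/html',                // tell the browser it is HTML
]);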
I am stuck in a situation where I want a URL, which points to a folder containing some files (HTML, SWF, etc.), to be accessible only after I have validated the user.
For example:
The URL to access is:
A - http://mysite.com/files/version/1/file.swf
And the above URL is accessible from the link,
B - http://mysite.com/view/1
I have implemented a way to hide URL A from a normal user, but if the user is a semi-techie person, he can find the SWF file location with Firebug or other tools. So, what should I do to make access to the file secure?
If a user somehow knows the first URL (A) and enters it in the browser, I have to check whether the user is logged in, and only if validation succeeds should URL A be allowed to load.
Since, in CI, a controller cannot have the same name as a folder in the root directory, in this case I cannot have a controller called "files". So the only option left to make this secure URL access work seems to be an .htaccess rewrite rule/condition. If this is the only option, how can it be achieved with .htaccess, and if not, what other options do I have?
Will CodeIgniter's URI routes work? Because when I tried it like this:
$route['files/version/1/(:any)'] = 'view/$1';
it doesn't work, maybe because there is no controller/function/parameter matching files/version/1 ...
Looking for quick help. Thanks.
There isn't a sure-fire way to do it without, for example, using .htpasswd.
One thing you could implement is a sort of "security by obscurity". In that case you could redirect all requests for a file to the URL http://mysite.com/view/file-id and then, instead of loading the requested file directly, load a .php template that sends the appropriate headers - be it for an image, a Flash file or anything else.
But it really depends on how the files are going to be managed, since every file will need an entry in the database and you would have to output different headers for different types of files. And if someone still manages to guess the path to the file, it will be directly accessible.
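As a rough sketch of that idea in CodeIgniter (the controller name View, the logged_in session key and the path under protected/ are all assumptions for illustration, and a route such as $route['view/(:num)'] = 'view/index/$1'; is assumed so that /view/1 reaches the method below):

<?php
// application/controllers/View.php
class View extends CI_Controller
{
    public function __construct()
    {
        parent::__construct();
        $this->load->library('session');
    }

    public function index($id)
    {
        // Only logged-in users get the file.
        if (!$this->session->userdata('logged_in')) {
            show_404();
            return;
        }

        // Resolve the real file for this id; in practice this mapping would
        // come from the database. The protected/ folder itself is assumed to
        // be blocked from direct requests (e.g. with "deny from all").
        $path = FCPATH . 'protected/version/' . (int) $id . '/file.swf';
        if (!is_file($path)) {
            show_404();
            return;
        }

        // Send the correct headers for this file type and stream it.
        header('Content-Type: application/x-shockwave-flash');
        header('Content-Length: ' . filesize($path));
        readfile($path);
    }
}

The page at http://mysite.com/view/1 would then embed the file via this URL instead of the real path, so the location under /files/ is never exposed to the browser.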