Google App Engine bot attack? - security

I have an application on Google App Engine that only runs cron jobs and uses a backend, so there are no incoming requests from any client. I noticed that a request from a user agent named 'niki-bot' was received, and I'm quite surprised, as my app URL does not appear anywhere; it's only used by the admin account, which sends the cron requests. Fortunately I had set up security on my crons, so this user got a 403 Forbidden, but I'm still wondering how this could happen. Have any of you experienced something similar?

You were likely running the 'Awesome Screenshot' plugin in your browser, or similar software that leaks your browsing history to an upstream service; that upstream service then appears to come back with a niki-bot crawler to scrape or otherwise probe those 'impossible to otherwise find' URLs.
Read more about it here: https://mig5.net/content/awesome-screenshot-and-niki-bot

As I think you are aware, backends are addressable from the outside world; it's only the public/private status and the security level applied to the endpoints that determine whether the calls succeed.
Regarding how a bot would have gotten your App ID, I suppose they could just be trying random ones to see if there is anything they can exploit.
Were the requests for standard admin endpoints? I get many random requests for the PHP files below, and my app isn't even on PHP. People are just trying to attack known systems (this is on my front-end module):
/mysqladmin/scripts/setup.php
/myadmin/scripts/setup.php
/MyAdmin/scripts/setup.php
/pma/scripts/setup.php
/phpMyAdmin/scripts/setup.php
/phpmyadmin/scripts/setup.php
/db/scripts/setup.php
/dbadmin/scripts/setup.php
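As for the cron protection the OP mentions: App Engine adds an X-Appengine-Cron: true header to requests that genuinely come from its cron service and strips that header from any external request, so a handler can simply reject everything else. A minimal sketch for the Python runtime with webapp2 (the /tasks/cleanup route name is purely illustrative):

import webapp2

class CleanupHandler(webapp2.RequestHandler):
    def get(self):
        # App Engine sets this header only for requests issued by its cron
        # service and strips it from external traffic, so a bot cannot fake it.
        if self.request.headers.get('X-Appengine-Cron') != 'true':
            self.abort(403)  # the same 403 Forbidden the OP saw
        self.response.write('cron task ran')

# '/tasks/cleanup' is a hypothetical route used only for this sketch.
app = webapp2.WSGIApplication([('/tasks/cleanup', CleanupHandler)])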

Related

Slackbox - the requested URL could not be retrieved - access denied

I have slackbox running locally, have created a Spotify dev application and have successfully authenticated slackbox. It says I am logged in at http://localhost:5000/. All of my variables have been set, including the slack token, in an .env file via dotenv.
All seems well there.
On the slack side, I have created a slash command mapped to /spotify that POSTs to http://localhost:5000/store. The slash command shows up in my command description list when typing.
When I attempt to use it, though, I get an access denied message in chat, which I assume is due to cross-domain issues:
ERROR: The requested URL could not be retrieved Access Denied.
According to their docs - https://github.com/benchmarkstudios/slackbox - running this locally should work. I also run a Hubot bot locally and it integrates fine with the same slack room.
Any help is appreciated!
https://sprint.ly/blog/5-steps-to-a-slack-integration/
Slack’s outgoing slash command requests need to be sent to a public-facing URL, which is a problem if we want to receive these messages on our local development server.
How do we solve this?
One way is to use a secure tunnel, which acts as a public HTTPS URL for our local development server. Problem solved!
Who provides this service?
ForwardHQ provides the best user experience, including a browser extension for setting up a local tunnel in one click. It has a free 7-day trial.
My preferred option is ngrok. It's free for one concurrent tunnel client, with no time restriction. Woop! It's a little harder to use, but it does the job.
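Once a tunnel like ngrok is forwarding to port 5000, Slack's slash command arrives at /store as an ordinary form-encoded POST containing the verification token and the typed text. slackbox itself is a Node app, but purely as an illustration of what that endpoint sees, here is a hedged Python/Flask sketch (SLACK_TOKEN and the reply text are assumptions for the example):

import os
from flask import Flask, request

app = Flask(__name__)
SLACK_TOKEN = os.environ.get('SLACK_TOKEN')  # e.g. loaded from the .env file

@app.route('/store', methods=['POST'])
def store():
    # Slash commands arrive as a form-encoded POST with fields such as
    # 'token', 'command', 'text' and 'user_name'.
    if request.form.get('token') != SLACK_TOKEN:
        return 'invalid token', 403
    text = request.form.get('text', '')
    # ... hand 'text' off to the Spotify lookup here ...
    return 'Queued: %s' % text  # a plain-text body is echoed back into the channel

if __name__ == '__main__':
    app.run(port=5000)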

Could not retrieve the CDN endpoints in subscription with ID

Searched Google and SO - no luck.
Just got this message in Azure for 3 CDN endpoints.
There seems to be no way to know what is going on without MS support. It is a test account, and I do not recall setting this up. I have been through similarly obfuscated MS error messages before, only to discover that Azure had crashed.
What does it mean?
This isn't really a direct answer, but could help with the general problem of "what happens if the CDN goes down?".
There is a recent development called the "Progressive Web App".
Basically, unless it is served from localhost, everything has to be over HTTPS, but the script is cached as a local application in your browser.
When your app makes requests to the registered domain, these are intercepted by a callback you put in your serviceWorker.js, so you can even cache application data locally and sync it with the server occasionally (or on receive events if you're using WebSockets).
Since the Service Worker intercepts REST calls to the registered domain, this in theory makes it fairly easy to add to just about any framework.
https://developers.google.com/web/fundamentals/getting-started/codelabs/your-first-pwapp/
Sometimes there is a (global) problem with the CDN. It has happened before.
You can check the Azure CDN status on this page: https://azure.microsoft.com/en-us/status/
At this moment everything looks good; are you still having problems?

WebSockets noob working with Railo

I am admittedly a complete noob in all things server, Linux, and websockets. I finally managed to set up a VM running Apache, Tomcat, and Railo that I could connect to and serve up CFM pages, all the while learning UNIX command line navigation, server theory, etc, etc...
Here's my problem -- there is only one Railo websocket extension and it is super rinky-dink (I had to modify the CFC just to get the service to start) but I can't get a websocket connection up (I keep getting "unexpected code 200" in Google Chrome). There is minimal documentation, which is not helpful at all.
Basically, I am trying to do some prototyping for a future project that will use websockets. I like Railo for its speed, security, and excellent ability for very database heavy operations. I am interested in Node, but don't know how to get the same security and DB functionality out of Javascript as I can with CFML.
So I have a couple questions: what are my best options for WebSocket servers? Should I be trying to use Apache and/or Tomcat? People keep saying it's totally not worthwhile to have something like Node.js running the websockets portion and something else doing the heavy lifting behind it -- why is this? I'm more than happy writing WS handlers in whatever language if I can just get a nudge in the right direction, some excellent tutorials (I can't seem to find much in this department), or good feedback on how to, from the ground up, set up my Linux box to handle websockets -- and preferably how to handle both websockets and a robust language like Railo.
The Railo extension works fine for me.
What about submitting some test code so that we can debug it? Of course, the websockets project is very young and still in full development, so feel free to fork and submit patches or suggestions.
You have plenty of options:
Railo Google Group
https://groups.google.com/forum/?fromgroups#!forum/railo
Github Extension Repository
submit a bug in the Railo JIRA bug tracker
The main problem with node.js is that it is single-threaded: you won't be able to do background tasks with it, and local IO will block your server.
A solution I use is Go. It's very fast, has very good concurrency features, and has integrated websocket and JSON libraries (sample: http://gary.beagledreams.com/page/go-websocket-chat.html). An efficient web application server can be written in a few dozen lines of Go. You'll find there is still much less documentation on the internet than for Java or even node.js, though.
There are a few implementations of websockets in Java, but as I'm in the process of switching everything I had in Java over to Go, I haven't tested them. I use Google Gson for JSON encoding in Java and it's very good.
The "unexpected code 200" is caused by Railo's web socket server sending an outdated response. They changed the web socket spec and Chrome uses the newer spec.
The problem seems to be caused by chrome & co implementing the new spec, "draft-ietf-hybi-thewebsocketprotocol-17". It requires the server to respond with "HTTP/1.1 101 Switching Protocols" rather than 200 OK.
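To make that concrete: under the newer protocol (the hybi drafts, later RFC 6455), the server must answer the upgrade request with a 101 status and a Sec-WebSocket-Accept value derived from the client's key; a plain 200 OK is treated as a failed handshake, which is exactly what Chrome is complaining about. A small Python sketch of a compliant handshake response (an illustration, not Railo's code):

import base64
import hashlib

# Fixed GUID defined by the WebSocket spec (RFC 6455).
WS_GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11'

def handshake_response(sec_websocket_key):
    # Concatenate the client's Sec-WebSocket-Key with the GUID, SHA-1 hash it,
    # base64-encode the digest, and reply with 101 Switching Protocols.
    accept = base64.b64encode(
        hashlib.sha1((sec_websocket_key + WS_GUID).encode()).digest()
    ).decode()
    return ('HTTP/1.1 101 Switching Protocols\r\n'
            'Upgrade: websocket\r\n'
            'Connection: Upgrade\r\n'
            'Sec-WebSocket-Accept: %s\r\n\r\n' % accept)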
The solution here would be to either update the Railo web socket extension yourself or use some other solution:
Here is a complete demo of a web socket chat server written in PHP.
http://www.flynsarmy.com/2012/02/php-websocket-chat-application-2-0/
I have used this myself to implement a real-time HTML chat served from an Arch Linux machine that I had lying around. Configuration consisted of simply setting up Apache and PHP, then changing the IP address in index.html and in server.php to the external IP address of the server machine.
This flynsarmy demo includes a recent version of PHPWebSocket, an open source web socket server written entirely in PHP and contained in a single file. The demo hooks into three callbacks: connect, message received, and disconnect.
The important thing to note, for me, was that this web socket server supports text only, not binary, so while extending it for my own chat app I had to implement my own commands to help control the server. Commands in my case looked like this:
!kickusers: username, another_username, a_third_username
My server code would check the first character of all messages for a '!' and if present would treat it as a command. Then I slice up the string to get the command "kickusers" and a list of users to kick. Then I call the appropriate kick function and pass it the array of usernames.
Since my scenario was a chat client this meant that the user could literally type this command into chat and the server would accept and respond to it.
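The demo's server is PHP, but the dispatch idea is easy to sketch in any language. Here it is in Python, with !kickusers as the example command; broadcast and kick_users are hypothetical helpers standing in for the demo's callbacks:

def handle_message(msg):
    # Anything starting with '!' is treated as a server command, not chat.
    if not msg.startswith('!'):
        broadcast(msg)  # normal chat: echo to the other connected clients
        return
    command, _, args = msg[1:].partition(':')
    if command.strip() == 'kickusers':
        usernames = [u.strip() for u in args.split(',') if u.strip()]
        kick_users(usernames)

def broadcast(msg):
    pass  # stand-in: send msg to every connected client

def kick_users(usernames):
    pass  # stand-in: drop the listed users' connections

# Example: the user literally types this into chat.
handle_message('!kickusers: username, another_username, a_third_username')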
The way all this is deployed on my server is like so:
I have Apache serve the index.html page when the user goes to that location on my server in their browser. The only purpose Apache plays here is to give index.html to the client who requested it.
The index.html page contains HTML to display the chat and JavaScript to send and receive chat to/from the server. Basically, index.html is simply a chat client written in HTML and JavaScript, and it runs in the browser.
I run server.php via ssh on the server to start up the WEB SOCKET server (totally separate from Apache) which just sits there and handles chat stuff like echoing text to the other connected clients etc.
Though the Arch wiki on installing Apache and PHP is Arch-specific in how you install the packages, the sections on configuring Apache and PHP apply to everyone. I'll save you the Google query and give you the link here if you like: https://wiki.archlinux.org/index.php/LAMP
As for prototyping, the reason I gave the link to Flynsarmy's chat demo is because his comments are helpful, he wrote a blog about it, and it comes as a very simple yet complete example of how to do something with web sockets in php.

How to know the browser the user is using?

We have started developing an application for a location-aware emergency service. Users can connect through a computer, a smartphone, or even through WAP. We want to use cloud servers (GAE or AWS). We want to optimize the site for the user's device.
I cannot find out exactly how to detect the device or browser the user is using. With Apache, by analyzing the browser request, we could determine the browser type. But how do we learn that on cloud platforms like GAE or AWS? Is there any other way to learn which browser or device the user is using? Also, is it possible to know the IP address of the user on GAE or AWS?
Thanks in advance.
I have no experience with programming in the cloud, but the request headers (among them USER_AGENT) come from the client and should be present as usual.
For GAE / Python, the answer is in this question: User-Agent in Google App Engine python
For GAE / Java, a hint is in the GAE docs. There must be a request object containing all the headers.
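For what it's worth, in the GAE Python runtime both pieces of information are available on the incoming request object. A minimal webapp2 sketch (the route is illustrative):

import webapp2

class MainHandler(webapp2.RequestHandler):
    def get(self):
        # Standard HTTP header sent by the client's browser or device.
        user_agent = self.request.headers.get('User-Agent', 'unknown')
        # Client IP address as seen by the App Engine front end.
        remote_ip = self.request.remote_addr
        self.response.write('UA: %s, IP: %s' % (user_agent, remote_ip))

app = webapp2.WSGIApplication([('/', MainHandler)])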

Pitfalls of accessing a webserver on 127.0.0.1 from js with a public site

I'm thinking about exploring the idea of having our client software run as a service on a high port and listen on 127.0.0.1 for simple HTTP GET requests. The theory is that I would be able to access this service via JS from a web page that is served from my site.
1) User installs client software that installs itself as a service and waits for authenticated requests on 127.0.0.1:8080
2) When the user hits my home page, JS on the page makes an XMLHttpRequest to 127.0.0.1:8080 and asks for the status
3) The home page then makes another js request back to my web server sending the status that it received.
This would allow my users to upload, download, and edit files on a USB-attached device in real time from a browser. Polling could be the fallback method, which is close to what we do today.
Has anyone done this and what potential pitfalls are there? Will this even work?
I can't see any potential pitfalls. I do have a couple of points, however.
1/ You probably want to make sure your service only accepts incoming connections from the local machine (127.0.0.1); there's a sketch of this after the list. Otherwise, anyone could look at your JavaScript and figure out that it's talking to [your-ip]:8080. They could then try that themselves from a remote site (security hole).
2/ I wouldn't use port 8080 as it's commonly used for other things (alternate HTTP servers, etc.). Make it configurable and choose a nice high random-type value.
3/ I'm not sure what you're trying to do with point 3 but I think you're trying to send the status back to the user. In which case, why wouldn't the JavaScript on your home page just get the status in a single session and output/update the HTML to be presented to the user? Your "another js request back to my web server" doesn't make sense to me.
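On point 1, binding matters more than filtering: if the service binds to 127.0.0.1 rather than 0.0.0.0, remote hosts can never reach it at all. A minimal Python sketch of such a local status service (the port, the /status path, and the allowed origin are assumptions; the CORS header is there so a page served from the public site can read the response):

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PORT = 8443  # illustrative; make it configurable, as suggested in point 2

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != '/status':
            self.send_error(404)
            return
        body = json.dumps({'device': 'connected'}).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        # Let the page served from the public site read the response.
        self.send_header('Access-Control-Allow-Origin', 'https://example.com')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    # Binding to 127.0.0.1 (not 0.0.0.0) means only local processes can connect.
    HTTPServer(('127.0.0.1', PORT), StatusHandler).serve_forever()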
You may not be able to make an XMLHttpRequest to 127.0.0.1, as XMLHttpRequest is usually limited to the same domain the main content is served from. I'm not sure if this restriction applies when the server is on the client's machine. That being said, you could still create a <script> tag whose src points to 127.0.0.1 and have the web server return some JavaScript to run. If you only need a simple response, this could work well.
I think it is much better to avoid implementing application logic in JavaScript and HTML. Once the user clicks a button on the web page, the JavaScript should send a request to your service and let it do the rest of the work.
You could have problems with step 1 (Client installs itself) depending on your target user base.
You will need a customised install for each supported environment (Win2K, Vista, Linux, Mac OS 9/10, etc.).
If your user is on a locked-down work PC, this simply won't be allowed.
To some users this might look distressingly similar to a trojan unless you explicitly point out you will be installing software that runs as a service.
You didn't mention an uninstall procedure. Users resent "Adobe"-like software that installs itself and provides no sensible uninstall options.
Otherwise the approach is sound, and there are a couple of commercial products out there that use exactly this approach!
