Enforcing SaaS subscription requirements for client-based apps - google-chrome-extension

I want to create a SaaS extension for Chrome.
How do I ensure that users cannot use my extension's functionality once their subscription is no longer current?
My basic idea is that whenever they want to use my Chrome extension's functionality, the extension makes an AJAX request to my server to check whether today's date is before the subscription's end date in my DB.
The extension is obviously client-based, so even if I have code on the client side that only executes when my AJAX request reports a current subscription, couldn't an enterprising individual just look at my code and run it via the console in a way that bypasses the AJAX check?
Is there a way to enforce the subscription?
Edit:
This is mostly a conceptual question, but I'll try to be clearer.
All the JavaScript code needed for my app to function is on the user's local machine, in the extension's source files (it doesn't require access to my database to work).
So you could think of my code on their local machine as looking like this:
if (usersSubscriptionIsCurrent) {
runFeature()
}
And usersSubscriptionIsCurrent is true if the Ajax request to my server returns that their subscription is current.
Someone could still run my feature just by looking at the source code, and then typing runFeature() into their console.
I want to prevent that.
My extension relies on sending data from the extension to a related Chrome app, so I just had the idea that I could also send the data to my server, which could then forward it to the user's Chrome app if they have a current subscription. But yikes.
The more I think about it, the less I think it's possible for me to prevent, but I figured I'd ask in case anyone has a clever idea.

I think you are slightly confused about what counts as SaaS. Wikipedia:
Software as a service is a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted. SaaS is typically accessed by users using a thin client via a web browser.
Emphasis mine.
If your app / extension contains all the logic required, it does not qualify as SaaS. Furthermore, since it is always possible to copy and dissect your app, taking out all license checks, you can't protect it against a determined attacker.
There are ways to protect your code to some degree: obfuscation, offloading logic to (P)NaCl modules or native host modules, or, as Alex Belozerov suggested, loading the code at runtime. Again, all of that can be broken by a determined attacker.
But if you truly have SaaS in mind (and not just subscription-based licensing), your client app should be a thin client: that is, your app logic should be processed on a server, with the code safely away from clients. That is the only "sure" way to protect it. It incurs processing costs on your side, but that's what the subscription is supposed to cover in the first place.
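To make the thin-client shape concrete, here is a minimal sketch assuming an Express backend; the endpoint, the in-memory subscription store, and computeFeature() are illustrative stand-ins for your real auth, database, and app logic:

const express = require("express");
const app = express();
app.use(express.json());

// Placeholder auth + data layer -- swap in your real session/DB lookups.
const subscriptions = { "user-123": { endsAt: "2099-01-01" } };
function requireAuth(req, res, next) {
  req.userId = "user-123"; // in reality, derive this from a session cookie or token
  next();
}

// The actual feature logic lives only on the server, never in the extension.
function computeFeature(input) {
  return "processed: " + input;
}

app.post("/run-feature", requireAuth, (req, res) => {
  const sub = subscriptions[req.userId];
  if (!sub || new Date(sub.endsAt) < new Date()) {
    return res.sendStatus(402); // subscription lapsed
  }
  res.json({ result: computeFeature(req.body.input) });
});

app.listen(3000);

The extension then only collects input, POSTs it to /run-feature, and renders the returned result; there is nothing useful left to call from the console.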

You can fetch part of the code needed from the server. That way, if the user's subscription is over, they won't be able to run your feature because part of the code is missing. The concept of my idea:
var subscriptionStatusResponse = makeAjaxCall();
if (subscriptionStatusResponse.usersSubscriptionIsCurrent) {
    runFeature_localCode();                  // first part: code shipped with the extension
    subscriptionStatusResponse.remoteCode(); // second part: code delivered by the server
}
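A sketch of what the server side of this idea might return (Express; everything here is illustrative). Note that Manifest V3 has since forbidden executing remotely hosted code in extensions, so treat this as conceptual:

const express = require("express");
const app = express();

// Pretend subscription lookup -- replace with your real auth + DB check.
const subscriptionEndsAt = { "user-123": "2099-01-01" };

app.get("/subscription-status", (req, res) => {
  const userId = "user-123"; // in reality, derived from a session or token
  const current = new Date(subscriptionEndsAt[userId]) >= new Date();
  res.json({
    usersSubscriptionIsCurrent: current,
    // Without this missing piece the client-side half of the feature cannot run.
    remoteCode: current ? "function remotePart(data) { /* ... */ }" : null,
  });
});

app.listen(3000);

On the client, the returned string would still have to be turned into something callable (e.g. with new Function), which is exactly the kind of remote code execution that Manifest V3 disallows.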

Maybe the best solution is to check whether their subscription is current as soon as the extension starts, and then use the chrome.management API to uninstall or disable the extension if their subscription is over.
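Something along these lines, for example (the endpoint is illustrative; chrome.management.uninstallSelf() is usable without the "management" permission):

// Startup check in the background script / service worker.
chrome.runtime.onStartup.addListener(async () => {
  try {
    const res = await fetch("https://api.example.com/subscription-status", {
      credentials: "include",
    });
    const { usersSubscriptionIsCurrent } = await res.json();
    if (!usersSubscriptionIsCurrent) {
      chrome.management.uninstallSelf({ showConfirmDialog: true });
    }
  } catch (e) {
    // Network failure: decide whether to fail open or closed here.
  }
});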
I'd love to hear better ideas though.

Related

How to check if a url is malicious within the code?

I would like to take a URL input from a user and serve that URL to all other users in some context. I will show it to users within my website as a link labelled "More Details". I am aware that when you hover the cursor over the link it shows the URL, and it can more or less be judged whether it is real or malicious, but 99.9% of people won't think about such a thing and will just click it right away.
So my question is: can I detect whether the submitted link is real or malicious, and if so, how? If not, what can I do to at least improve security to some extent? I am using React on the frontend, Node.js on the backend, and multiple AWS resources for data and API management.
As far as I am aware, there's nothing specific that can be done on the AWS side to achieve this since this is specific to your backend implementation.
I am no expert on security, but maybe use the VirusTotal API to check whether a given URL is malicious? There are limits on the allowed number of requests. Also, as stated:
The public API is a free service, available for any website or application that is free to consumers. The API must not be used in commercial products or services
If you want to commercialize your service, you may get banned from using VirusTotal if you do not go with the paid route.
Maybe there are alternative solutions that are free for commercial use. But using such a service is your only route if you want to delegate URL security checks to a third-party service since AWS does not offer anything similar.
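For illustration, the VirusTotal v3 flow from Node looks roughly like this (subject to the request limits and licensing terms mentioned above; a real implementation would retry while the analysis is still queued):

const API_KEY = process.env.VT_API_KEY;

async function urlLooksClean(url) {
  // 1. Submit the URL for analysis.
  const submit = await fetch("https://www.virustotal.com/api/v3/urls", {
    method: "POST",
    headers: {
      "x-apikey": API_KEY,
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: new URLSearchParams({ url }),
  });
  const { data } = await submit.json();

  // 2. Fetch the analysis result (poll until it completes in real code).
  const analysis = await fetch(
    "https://www.virustotal.com/api/v3/analyses/" + data.id,
    { headers: { "x-apikey": API_KEY } }
  );
  const report = await analysis.json();
  const stats = report.data.attributes.stats;
  return stats.malicious === 0 && stats.suspicious === 0;
}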

How do I properly set up and deploy a private API exclusively for my frontend?

I am currently working on a web application. The client is designed in Vue.js and the server application is made with node.js and express.
As of now I plan to deploy both the client website and the Node.js app on the same server. Both will be addressed via two different, unique domains. The server will be set up manually with nginx.
The problem now is that this solution won't prevent a user from sending requests to the server outside the client that was made for it. Someone will be able to call the /register route (with Postman, curl, etc.) to create an account in an 'unofficial' way. I think the only clean solution is that only my Vue.js app would be able to perform such actions. However, since the server and the client are two different environments/applications, some sort of cross-origin-request mechanism (CORS, for instance) must be set up.
So I'm wondering: is this bad by design, or is it usual that way? If I wanted this not to be possible, should I address that issue and try to make the Express API as private as possible? If so, what are usual best practices for development and deployment / things to consider? Should I change my plan and work on a completely different architecture instead? How do 'bigger' sites manage to allow no requests outside the official, public developer APIs?
I think the only clean solution is that only my Vue.js-app would be able to perform such actions.
An API that is usable from a browser-based application is just open to the world. You cannot prevent use from other places. That's just how the WWW works. You can require that a user in your system is authenticated and that an auth credential is provided with each request (such as an auth cookie) before the API will provide any data. But even then, any hacker can sign up for your system, take the auth credential, and use your API for their own purposes. You cannot prevent that.
If I wanted this not to be possible, should I see to that issue and try to make the express-API as private as possible?
There is no such thing as a private API that is used from a browser-based application. Nothing that runs in a browser is private.
If you were thinking of using CORS protections to limit the use of your API, those only limit it from other browser-based applications, as CORS protections are enforced inside the browser. Any outside script using your API is not subject to CORS at all.
How do 'bigger' sites manage to allow no requests outside the official, public developer API's?
Bigger sites (such as Google) have APIs that require some sort of developer credential and that credential comes with particular usage rules (max number of requests over some time period, max data used, storage limits, etc...). These sites implement code in their API servers to verify that only an authorized client (one with the proper developer credential) is using the API and that the usage stays within the bounds that are afforded that developer credential. If not, the API will return some sort of 4xx or 5xx error.
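For illustration, a bare-bones version of such a credential check in Express might look like this (key names, quotas, and in-memory storage are made up for the sketch):

const issuedKeys = new Map([
  ["dev-key-abc123", { plan: "free", dailyQuota: 1000 }],
]);
const usageToday = new Map();

function requireApiKey(req, res, next) {
  const key = req.get("X-Api-Key");
  const credential = key && issuedKeys.get(key);
  if (!credential) {
    return res.status(401).json({ error: "missing or invalid API key" });
  }
  const used = (usageToday.get(key) || 0) + 1;
  usageToday.set(key, used);
  if (used > credential.dailyQuota) {
    return res.status(429).json({ error: "quota exceeded" });
  }
  next();
}

// app.use("/api", requireApiKey);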
Someone will be able to call the /register route (with postman, curl etc.) to create an account an 'unofficial' way.
Yes, this will likely be possible. Many sites nowadays use something like a captcha to require human intervention before a request to create an account can succeed. This can be successful at preventing fully automated creation of accounts. But it still doesn't stop some developer from manually creating an account, then grabbing that account's credentials and using them with your API.
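As a sketch, gating /register behind a server-side reCAPTCHA check might look roughly like this (assuming the client passes along the token it got from the captcha widget; names are illustrative):

const express = require("express");
const app = express();
app.use(express.json());

app.post("/register", async (req, res) => {
  // Verify the captcha token with Google's siteverify endpoint.
  const verify = await fetch("https://www.google.com/recaptcha/api/siteverify", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      secret: process.env.RECAPTCHA_SECRET,
      response: req.body.captchaToken,
    }),
  });
  const result = await verify.json();
  if (!result.success) {
    return res.status(400).json({ error: "captcha failed" });
  }
  // ...create the account as usual...
  res.sendStatus(201);
});

app.listen(3000);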
When talking about web applications, the only truly private APIs are APIs that are entirely within your server (one part of your server calling something in another part of your server). These private APIs can even be http requests, but they must either not be accessible to the outside world or they must require credentials that are never available to the outside world. Since they are not available to the outside world, they cannot be used from within a browser application.
OK, that was a lot of things you cannot do, what CAN you do?
First and foremost, an application design that keeps private APIs internal to the server (not sent from the client) is best. So, if you want to implement a piece of functionality that needs to call several APIs you would like to be private, then don't implement that functionality on the client. Implement that functionality on the server. Have the client make one request and get some data or HTML back that it can then display. Keep as much of the internals of the implementation of that feature on the server.
Second, you can require auth credentials for a user in your system for all API usage. While this won't prevent rogue usage, it gives you a bit more control because you can track usage, suspend user accounts when you find abuse, etc.
Third, you can implement usage rules for your public-facing APIs such as requests per minute, amount of data, etc... that your actual web application would never exceed so if they are exceeded, then it must be some unintended usage of the API. And, you could go further than that and detect usage patterns that do not happen in your client. For example, if you see an API user cycling through dozens of users, requesting all their profiles and you know that is something your regular client never does, you could detect that type of usage and block it.
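For example, the per-minute limit could be enforced with something like the express-rate-limit package (assuming it fits your stack; the numbers are placeholders far above what the real client would ever send):

const rateLimit = require("express-rate-limit");

const apiLimiter = rateLimit({
  windowMs: 60 * 1000, // 1 minute
  max: 30,             // legitimate clients stay far below this
  standardHeaders: true,
  message: { error: "rate limit exceeded" },
});

// app.use("/api", apiLimiter);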

Methods to protect client-side Chrome extension code

I'm working on creating a Chrome extension that uses client-side JS to automate a task on a website. Since this tool would be used by businesses, I'm planning to sell it under a subscription-based model. Previously, Chrome had a payments API that could prevent someone from having access to the client-side code without first paying, but they've since deprecated that API. Additionally, it used to be possible to fetch remote code and execute it within a Chrome extension, but Chrome has now also forbidden that. And on top of that, it is not acceptable to obfuscate code.
Given all of these very strict restrictions, I'm wondering what options we as developers have to enforce any amount of security for our code. Obviously this is client-side code, meaning there is no true way to fully protect it, but what I'm initially thinking is:
Minify/bundle the code. Minification is allowed, and I've been able to publish bundled/minified extensions before. This helps to hide the intent of functions by shortening their names.
Use chrome.scripting.executeScript to inject individual functions from the background script into tabs rather than have the code readily available on the page. This hides the logic a bit and makes the flow a bit harder to track.
Given the above, my main question is about payment status checks. I will have a backend with authentication that integrates with Stripe. I'm wondering if there's some way I can check with my server on a daily basis for paid status, store that status in chrome.storage, and then have checks throughout my code that read chrome.storage, without making it stupidly easy for someone to just do chrome.storage.sync.set({ paid: true }). My thought, to add at least another hurdle, is to have the server return an encrypted payload containing the id of the user and the time they last authenticated, as well as a key to decrypt it. The Chrome extension would then decrypt the payload and check that the date and id are accurate. For a user to get around this, they'd have to have some basic understanding of encryption and do it on a daily basis, which would hopefully be too annoying to be worth it.
To be clear, I understand that sending the key to the client makes it so that this isn't actually secure, and the user could even go through the code to manually remove the checks, but the goal here is to just make it hard enough that the average user would have a hard time figuring out how to overcome the challenges and it wouldn't be worth it.
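To make the idea concrete, here's roughly what I have in mind for the decrypt-and-check step using the Web Crypto API (the payload shape, field names, and 24-hour freshness window are just placeholders):

function fromBase64(s) {
  return Uint8Array.from(atob(s), (c) => c.charCodeAt(0));
}

// The server would return base64 fields { key, iv, payload } (AES-GCM).
async function verifyPaidStatus({ key, iv, payload }) {
  const cryptoKey = await crypto.subtle.importKey(
    "raw", fromBase64(key), "AES-GCM", false, ["decrypt"]
  );
  const plaintext = await crypto.subtle.decrypt(
    { name: "AES-GCM", iv: fromBase64(iv) }, cryptoKey, fromBase64(payload)
  );
  const { userId, checkedAt } = JSON.parse(new TextDecoder().decode(plaintext));

  // Treat the status as stale after ~24 hours so a daily re-check is required.
  const fresh = Date.now() - new Date(checkedAt).getTime() < 24 * 60 * 60 * 1000;
  await chrome.storage.local.set({ paid: fresh, paidUserId: userId });
  return fresh;
}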
Does anyone have any other strategies they're employing or just ideas in general in this new era of Chrome extensions?
The only other option I could think of that would actually secure a Chrome extension but that wouldn't fly by the Chrome team is having the extension code be encrypted at the time of download and the user would need to subscribe to get a key to decrypt it. But this would violate the obfuscation rule. Would be nice if we could talk to the Chrome team about allowing something like that where we could give them a key at time of submission.

Azure App Service Multi Instance: Do I need to change my web app code

I just discovered that Azure App Services can scale both up and out, where scaling out means creating multiple instances. So my question is: do I need to change my ASP.NET web app to support this? For example, if a user asks to run an async report that runs in the background and then comes back later to download the report, will it just work? What about security? If a user has authenticated, gotten a cookie, leaves the app alone for a while, and then continues, will it work? Is there any documentation to help?
If your code doesn't support it, you can always switch on server affinity. This ensures requests route back to the same server. However, this is not recommended; ideally you want any server to be able to respond, rather than only the one the user started with.
You don't need to change your code; it will just work. Azure is smart enough to route traffic to the servers for you, so as for your question about async reports, yes, that will work.
If you store information in the cookie, it should work without server affinity, but if you use session state, then you will most likely need to turn affinity on (depending on where the session is stored - in-proc, SQL). Here is an article about server affinity: https://blogs.msdn.microsoft.com/appserviceteam/2016/05/16/disable-session-affinity-cookie-arr-cookie-for-azure-web-apps/
Hope that helps

How to restrict Chrome Apps to only work on specific computers?

I'm developing a POS client using Chrome (packaged) Apps. It will run locally on the computers where it is installed and interact with the server via a web service. This app should only run on specific computers at the stores.
I know I can go to each store and install the .crx file in which case I don't have to publish the app to Chrome Web Store. However, I want it to be published to Chrome Web Store so that I can take advantage of its auto-updating feature.
What should I do to make sure that the app can only run on the stores' computers? (I can go to the stores and set up anything needed at the first installation.)
Options I have thought of:
Create some secret key and enter it into the app the first time it runs.
Build a small tool (a WinForms application) to generate time-based tokens and install it on the computers. The staff would need to enter the token each time they open the app.
Any better idea how to accomplish this?
You said the app needs to talk to a web service to work. That's the key to a simple approach. (Assume you don't care whether the staff acquires a nonfunctional copy of the client app.)
At startup, app checks for existence of a validation of some kind stored in chrome.storage.local. If it exists, startup continues.
If the validation is missing, the app checks for existence of a GUID stored in chrome.storage.local.
If the GUID is missing, generate and store one using something like window.crypto.getRandomValues().
Ask the server for a validation by sending the GUID and getting a response.
If a validation comes back, save it in chrome.storage.local and go back to the start of this sequence.
Otherwise tell the user to get lost.
A full-strength version of this approach would have some additional features:
Use an HMAC(GUID, secret) for the validation. I'm assuming the staff aren't tech superstars, so something simple like a boolean would probably suffice.
Optionally add a per-launch step that sends up the GUID and validation and confirms it's still valid each time.
When the validation is requested, you might prompt for the secret key you mentioned in your question. In normal cases this would be needed only at provisioning time.
In case you haven't figured it out yet, the server is now acting like a simple licensing server, so it's up to you to decide how to decide whether the validation request succeeds. Maybe it allows only N validations to exist at once, or after you're done provisioning you hardcode future validations to fail. Maybe it limits validation requests to certain IP addresses. You get to choose.
That's the gist. It's a simple DRM system that is easier to manage than the enter-secret-at-installation method, but that won't withstand an attack of more than 30 minutes (since a smart attacker will just inject another machine's GUID and HMAC validation into the duplicate machine's chrome.storage.local).
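Condensed into code, the startup sequence might look something like this (the endpoint and response shape are illustrative; an actual packaged app of that era would use the callback form of chrome.storage rather than promises):

async function ensureValidated() {
  let { validation, guid } = await chrome.storage.local.get(["validation", "guid"]);
  if (validation) return true;

  if (!guid) {
    // Generate and persist a per-machine GUID.
    guid = Array.from(crypto.getRandomValues(new Uint8Array(16)), (b) =>
      b.toString(16).padStart(2, "0")
    ).join("");
    await chrome.storage.local.set({ guid });
  }

  // Ask the licensing server for a validation (e.g. HMAC(guid, secret)).
  const res = await fetch("https://licensing.example.com/validate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ guid }),
  });
  if (!res.ok) return false; // "tell the user to get lost"

  ({ validation } = await res.json());
  await chrome.storage.local.set({ validation });
  return true;
}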
