Is there any way to restrict POST requests to my REST API only to requests coming from my own mobile app binary? This app will be distributed on Google Play and the Apple App Store, so it should be assumed that someone will have access to its binary and try to reverse engineer it.
I was thinking of something involving the app signatures, since every published app must be signed somehow, but I can't figure out how to do it in a secure way. Maybe a combination of getting the app signature, plus time-based hashes, plus app-generated key pairs and the good old security through obscurity?
I'm looking for something as foolproof as possible. The reason is that I need to deliver data to the app based on data gathered by the phone sensors, and if people can pose as my own app and send data to my API that wasn't processed by my own algorithms, it defeats its purpose.
I'm open to any effective solution, no matter how complicated. Tin foil hat solutions are greatly appreciated.
Any credentials that are stored in the app can be exposed by the user. In the case of Android, they can completely decompile your app and easily retrieve them.
If the connection to the server does not utilize SSL, they can be easily sniffed off the network.
Seriously, anybody who wants the credentials will get them, so don't worry about concealing them. In essence, you have a public API.
There are some pitfalls and it takes extra time to manage a public API.
Many public APIs still track by IP address and implement tarpits to simply slow down requests from any IP address that seems to be abusing the system. This way, legitimate users from the same IP address can still carry on, albeit slower.
You have to be willing to shut off an IP address or IP address range despite the fact that you may be blocking innocent and upstanding users at the same time as the abusers. If your application is free, it may give you more freedom since there is no expected level of service and no contract, but you may want to guard yourself with a legal agreement.
In general, if your service is popular enough that someone wants to attack it, that's usually a good sign, so don't worry about it too much early on, but do stay ahead of it. You don't want the reason for your app's failure to be because users got tired of waiting on a slow server.
Your other option is to have the users register, so you can block by credentials rather than IP address when you spot abuse.
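To make the per-IP throttling idea above concrete, here is a minimal sketch of a tarpit middleware in Express (the window size, threshold and delays are made-up numbers, and the in-memory map is not suitable for a multi-process deployment):

const express = require('express');
const app = express();

const hits = new Map();                       // ip -> request count in the current window
setInterval(() => hits.clear(), 60 * 1000);   // reset the counters every minute

app.use((req, res, next) => {
  const count = (hits.get(req.ip) || 0) + 1;
  hits.set(req.ip, count);
  if (count <= 60) return next();                      // normal traffic passes through
  const delay = Math.min((count - 60) * 250, 10000);   // abusers get progressively slower
  setTimeout(next, delay);                             // tarpit: still answer, just slowly
});

app.get('/data', (req, res) => res.json({ ok: true }));
app.listen(3000);

Legitimate users behind the same IP address still get answers, just more slowly, which matches the behaviour described above.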
Yes, it's public
This app will be distributed on Google Play and the Apple App Store, so it should be assumed that someone will have access to its binary and try to reverse engineer it.
From the moment it's on the stores it's public; therefore anything sensitive in the app binary must be considered potentially compromised.
The Difference Between WHO and WHAT is Accessing the API Server
Before I dive into your problem I would like to first clear up a misconception about who and what is accessing an API server. I wrote a series of articles around API and Mobile security, and in the article Why Does Your Mobile App Need An Api Key? you can read in detail the difference between who and what is accessing your API server, but I will extract the main takeaways from it here:
The what is the thing making the request to the API server. Is it really a genuine instance of your mobile app, or is it a bot, an automated script or an attacker manually poking around your API server with a tool like Postman?
The who is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAUTH2 flows.
Think about the who as the user your API server will be able to Authenticate and Authorize access to the data, and think about the what as the software making that request on behalf of the user.
So if you are not using user authentication in the app, then you are left with trying to attest what is doing the request.
Mobile apps should be as dumb as possible
The reason is that I need to deliver data to the app based on data gathered by the phone sensors, and if people can pose as my own app and send data to my API that wasn't processed by my own algorithms, it defeats its purpose.
It sounds to me like you are saying that you have algorithms running on the phone to process data from the device sensors before sending it to the API server. If so, you should reconsider this approach and instead just collect the sensor values, send them to the API server, and have the server run the algorithms.
As I said, anything inside your app binary is public, because, as you said yourself, it can be reverse engineered:
should be assumed that someone will have access to its binary and try to reverse engineer it.
Keeping the algorithms in the backend allows you to avoid revealing your business logic, and at the same time lets you reject requests with sensor readings that do not make sense (if that is possible to determine). It also brings the benefit of not having to release a new version of the app each time you tweak the algorithm or fix a bug in it.
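As a rough illustration of that split (a sketch only: the endpoint name, the payload shape and the plausibility bounds are assumptions, and runMyAlgorithm is a stand-in for your server-side processing):

const express = require('express');
const app = express();
app.use(express.json());

function looksPlausible(r) {
  // Reject values a real sensor could not have produced.
  return Number.isFinite(r.accelX) && Math.abs(r.accelX) < 200 &&
         Number.isFinite(r.timestamp) && r.timestamp <= Date.now();
}

function runMyAlgorithm(readings) {
  // The proprietary processing stays on the server, never in the app binary.
  return { score: readings.length };
}

app.post('/sensor-readings', (req, res) => {
  const readings = req.body.readings || [];
  if (readings.length === 0 || !readings.every(looksPlausible)) {
    return res.status(422).json({ error: 'implausible sensor data' });
  }
  res.json(runMyAlgorithm(readings));
});

app.listen(3000);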
Runtime attacks
I was thinking something involving the app signatures, since every published app must be signed somehow, but I can't figure out how to do it in a secure way.
Anything you do at runtime to protect the request you are about to send to your API can be reverse engineered with tools like Frida:
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
Your Suggested Solutions
Security is all about layers of defense, so you should add as many as you can afford and as are required by law (e.g. GDPR in Europe). Each of your proposed solutions is one more layer the attacker needs to bypass, and depending on their skill set and the time they are willing to spend on your mobile app, it may stop them from going any further; but in the end all of them can be bypassed.
Maybe a combination of getting the app signature, plus time-based hashes, plus app-generated key pairs and the good old security through obscurity?
Even when you use key pairs stored in the hardware trusted execution environment, all an attacker needs to do is use an instrumentation framework to hook into the function of your code that uses the keys, in order to extract or manipulate the function's parameters and return values.
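To make that concrete, Frida scripts are themselves written in JavaScript; a minimal sketch of such a hook might look like the following (the package, class and method names are hypothetical, purely to illustrate the technique):

// Run with: frida -U -f com.example.app -l hook.js (assumed package name)
Java.perform(function () {
  // Hypothetical app class that signs API requests with a hardware-backed key.
  var Signer = Java.use('com.example.app.RequestSigner');
  Signer.sign.overload('java.lang.String').implementation = function (payload) {
    var signature = this.sign(payload);      // call the original implementation
    console.log('payload:   ' + payload);    // observe what gets signed
    console.log('signature: ' + signature);  // observe the resulting value
    return signature;                        // hand it back unchanged
  };
});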
Android Hardware-backed Keystore
The availability of a trusted execution environment in a system on a chip (SoC) offers an opportunity for Android devices to provide hardware-backed, strong security services to the Android OS, to platform services, and even to third-party apps.
While it can be defeated, I still recommend you use it, because not all attackers have the skill set or are willing to spend the time on it. I would also recommend you read this series of articles about Mobile API Security Techniques to learn about some techniques that are complementary or similar to the ones you described. These articles will teach you how API Keys, User Access Tokens, HMAC and TLS Pinning can be used to protect the API, and how they can be bypassed.
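For instance, HMAC request signing (one of the techniques those articles cover) can be verified on the API server roughly like this; the header name, the signed string and the shared secret are assumptions for this sketch, and remember the secret also lives inside the app, so it can be extracted:

const crypto = require('crypto');
const express = require('express');
const app = express();
app.use(express.json());

const SHARED_SECRET = process.env.API_HMAC_SECRET || 'replace-me';

app.use((req, res, next) => {
  // In production you would sign the raw request body, not the re-serialized JSON.
  const message = req.method + req.originalUrl + JSON.stringify(req.body || {});
  const expected = crypto.createHmac('sha256', SHARED_SECRET).update(message).digest('hex');
  const received = req.get('X-Signature') || '';
  const a = Buffer.from(received);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !crypto.timingSafeEqual(a, b)) {
    return res.status(401).json({ error: 'bad request signature' });
  }
  next();
});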
Possible Better Solutions
Nowadays I see developers using Android SafetyNet to attest what is making the request to the API server, but they fail to understand that it's not intended to attest that the mobile app is what is making the request; instead it's intended to attest the integrity of the device. I go into more detail in my answer to the question Android equivalent of ios devicecheck. So should you use it? Yes, you should, because it is one more layer of defense, which in this case tells you that your mobile app is not installed on a rooted device, unless SafetyNet has been bypassed.
Is there any way to restrict POST requests to my REST API only to requests coming from my own mobile app binary?
You can give the API server a high degree of confidence that it is indeed accepting requests only from your genuine app binary by implementing the Mobile App Attestation concept, and I describe it in more detail in this answer I gave to the question How to secure an API REST for mobile app?, especially the sections Securing the API Server and A Possible Better Solution.
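On the API server side, the Mobile App Attestation concept boils down to only serving requests that carry a token issued by the attestation service. A minimal sketch (the header name and shared secret are assumptions; jsonwebtoken is just one way to verify a signed token):

const jwt = require('jsonwebtoken');
const express = require('express');
const app = express();

const ATTESTATION_SECRET = process.env.ATTESTATION_SECRET || 'replace-me';

app.use((req, res, next) => {
  try {
    // Throws if the token is missing, tampered with or expired.
    jwt.verify(req.get('X-App-Token') || '', ATTESTATION_SECRET);
    next();
  } catch (err) {
    res.status(401).json({ error: 'app attestation failed' });
  }
});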
Do you want to go the Extra Mile?
In any response to a security question I always like to reference the excellent work from the OWASP foundation.
For APIs
OWASP API Security Top 10
The OWASP API Security Project seeks to provide value to software developers and security assessors by underscoring the potential risks in insecure APIs, and illustrating how these risks may be mitigated. In order to facilitate this goal, the OWASP API Security Project will create and maintain a Top 10 API Security Risks document, as well as a documentation portal for best practices when creating or assessing APIs.
For Mobile Apps
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
OWASP - Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.
No. You're publishing a service with a public interface and your app will presumably only communicate via this REST API. Anything that your app can send, anyone else can send also. This means that the only way to secure access would be to authenticate in some way, i.e. keep a secret. However, you are also publishing your apps. This means that any secret in your app is essentially being given out also. You can't have it both ways; you can't expect to both give out your secret and keep it secret.
Though this is an old post, I thought I should share the updates from Google in this regard.
You can actually ensure that your Android application is the one calling the API by using the SafetyNet mobile attestation APIs. This adds a little overhead to the network calls and prevents your application from running on a rooted device.
I found nothing similar to SafetyNet for iOS. Hence, in my case, I checked the device configuration first in my login API and took different measures for Android and iOS. In the case of iOS, I decided to keep a shared secret key between the server and the application. As iOS applications are a little more difficult to reverse engineer, I think this extra key check adds some protection.
Of course, in both cases, you need to communicate over HTTPS.
As the other answers and comments imply, you can't truly restrict API access to only your app, but you can take different measures to reduce the attempts. I believe the best solution is to make requests to your API (from native code, of course) with a custom header like "App-Version-Key" (this key would be decided at compile time) and have your server check for this key to decide whether to accept or reject the request. Also, when using this method you SHOULD use HTTPS/SSL, as this will reduce the risk of people seeing your key by inspecting the request on the network.
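On the server, that check could look something like this minimal Express sketch (the header name comes from the answer above; where the key is stored and how it is derived at compile time are up to you):

const express = require('express');
const app = express();

const EXPECTED_KEY = process.env.APP_VERSION_KEY;   // the value baked into the app at compile time

app.use((req, res, next) => {
  if (!EXPECTED_KEY || req.get('App-Version-Key') !== EXPECTED_KEY) {
    return res.status(403).json({ error: 'forbidden' });   // reject clients without the key
  }
  next();
});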
Regarding Cordova/PhoneGap apps, I will be creating a plugin to do the above-mentioned method. I will update this answer when it's complete.
There is not much you can do, because once you let someone in they can call your APIs. The most you can do is the following:
Since you want only your application (with a specific package name and signature) to call your APIs, you can get the signature of your APK programmatically and send it to the server with every API call, and the server responds to the request only if the signature matches. (Or you can have a token API that your app calls at startup and then use that token for the other APIs, though the token must be invalidated after a few hours of inactivity; see the sketch after this answer.)
Then you need to run your code through ProGuard so no one can see what you are sending and how you encrypt it. With good obfuscation and encryption, decompiling becomes much harder.
Even the APK signature can be spoofed, with some effort, but this is the best you can do.
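A minimal server-side sketch of that token flow (the signature value, the token lifetime and the in-memory store are all assumptions; as noted above, an attacker who extracts the signature from the APK can still obtain tokens):

const crypto = require('crypto');
const express = require('express');
const app = express();
app.use(express.json());

const EXPECTED_SIGNATURE = process.env.APK_SIGNATURE_SHA256;   // hash of your signing certificate
const tokens = new Map();                                      // token -> expiry timestamp

app.post('/token', (req, res) => {
  if (!EXPECTED_SIGNATURE || req.body.signature !== EXPECTED_SIGNATURE) {
    return res.status(401).json({ error: 'unknown client' });
  }
  const token = crypto.randomBytes(32).toString('hex');
  tokens.set(token, Date.now() + 4 * 60 * 60 * 1000);          // invalidate after ~4 hours
  res.json({ token });
});

app.use((req, res, next) => {
  const expiry = tokens.get(req.get('Authorization'));
  if (!expiry || expiry < Date.now()) {
    return res.status(401).json({ error: 'invalid or expired token' });
  }
  next();
});

app.get('/data', (req, res) => res.json({ ok: true }));
app.listen(3000);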
Has anyone looked at Firebase App Check?
https://firebase.google.com/docs/app-check
Is there any way to restrict POST requests to my REST API only to requests coming from my own mobile app binary?
I'm not sure if there is an absolute solution.
But, you can reduce unwanted requests.
Use an App Check:
The "Firebase App Check" can be used cross-platform (https://firebase.google.com/docs/app-check) - credit to #Xande-Rasta-Moura
iOS: https://developer.apple.com/documentation/devicecheck
Android: https://android-developers.googleblog.com/2013/01/verifying-back-end-calls-from-android.html
Use BasicAuth (for API requests; see the sketch after this list)
Allow a user-agent header for mobile devices only (for API requests)
Use a robots.txt file to reduce bots
User-agent: *
Disallow: /
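A minimal Express sketch combining the BasicAuth and User-Agent ideas above (the credentials and the accepted User-Agent prefix are made up, and neither check stops a determined attacker, since both values travel with every request):

const express = require('express');
const app = express();

const EXPECTED_AUTH = 'Basic ' + Buffer.from('apiuser:apipassword').toString('base64');

app.use((req, res, next) => {
  if (req.get('Authorization') !== EXPECTED_AUTH) {
    return res.status(401).set('WWW-Authenticate', 'Basic').end();   // BasicAuth gate
  }
  const ua = req.get('User-Agent') || '';
  if (!ua.startsWith('MyMobileApp/')) {          // hypothetical app User-Agent
    return res.status(403).end();                // filters casual bots, easily spoofed
  }
  next();
});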
I've just started learning what an API is. I've gone through some documentation and posts, but I still don't get it.
I'm confused: is an API just about code, or is it only the URL part?
What is the difference between a URL and an API?
When someone wants to build an API in their project, what do they have to do?
I mean, do they write some code or just make some URL?
For example, in Express.js, when I write an endpoint I write it like this:
app.get("/user/id", (req, res) => {
  // some stuff...
});
So does this mean that this is my API, or what is it?
I'm very confused about APIs. Please explain.
The Wikipedia page is quite good: https://en.wikipedia.org/wiki/API
There are different kinds of APIs. The one you are probably thinking about is the 'Web API', also mentioned on this page.
Quote:
In computing, an application programming interface (API) is an interface that defines interactions between multiple software applications or mixed hardware-software intermediaries.[1] It defines the kinds of calls or requests that can be made, how to make them, the data formats that should be used, the conventions to follow, etc. It can also provide extension mechanisms so that users can extend existing functionality in various ways and to varying degrees.[2] An API can be entirely custom, specific to a component, or designed based on an industry-standard to ensure interoperability. Through information hiding, APIs enable modular programming, allowing users to use the interface independently of the implementation.
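To tie that back to the Express snippet in the question: the route you define is the API your server exposes, and any client that sends an HTTP request to it is using that API. A minimal sketch (names and port are only for illustration):

const express = require('express');
const app = express();

// This endpoint is (part of) your Web API: a contract saying
// "GET /user/<id> returns a JSON description of that user".
app.get('/user/:id', (req, res) => {
  res.json({ id: req.params.id, name: 'Alice' });
});

app.listen(3000);

// Any client (a browser, a mobile app, another server) can now use the API:
// fetch('http://localhost:3000/user/42').then(r => r.json()).then(console.log);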
I had this question for an interview.
Can you tell me how a server knows which tab or browser I am using for a specific website?
There is a header called User-Agent that is sent to the server when a user makes a request from the browser. It looks like this:
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20100101 Firefox/12.0
It tells you that the browser is Firefox, although there is some extra information included for historical reasons that you shouldn't worry about.
A header is a type of meta information that the user doesn't see, but which helps the server and the client communicate by sharing this kind of information. For example, this lets you serve different pages to different clients if a client is using an outdated browser.
You can't know what tab the client is using.
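For example, reading that header on the server (an Express sketch; the outdated-browser check is deliberately crude):

const express = require('express');
const app = express();

app.get('/', (req, res) => {
  const ua = req.get('User-Agent') || '';    // e.g. "Mozilla/5.0 (...) Firefox/12.0"
  if (/MSIE [1-8]\./.test(ua)) {             // very rough test for an old Internet Explorer
    return res.send('Please upgrade your browser.');
  }
  res.send('Hello, modern browser!');
});

app.listen(3000);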
See this Wikipedia article.
In computing, a user agent is software (a software agent) that is
acting on behalf of a user. One common use of the term refers to a web
browser telling a web site information about the browser and operating
system. This allows the web site to customize content for the
capabilities of a particular device, but also raises privacy issues.
There are other uses of the term "user agent". For example, an email
reader is a mail user agent. In many cases, a user agent acts as a
client in a network protocol used in communications within a
client–server distributed computing system. In particular, the
Hypertext Transfer Protocol (HTTP) identifies the client software
originating the request, using a user-agent header, even when the
client is not operated by a user. The Session Initiation Protocol
(SIP) protocol (based on HTTP) followed this usage. In the SIP, the
term user agent refers to both end points of a communications session.
User agent identification
When a software agent operates in a network protocol, it often identifies itself, its application type, operating system, software vendor, or software revision, by submitting a characteristic identification string to its operating peer. In HTTP, SIP, and NNTP protocols, this identification is transmitted in a header field User-Agent. Bots, such as Web crawlers, often also include a URL and/or e-mail address so that the Webmaster can contact the operator of the bot.
Use in HTTP
In HTTP, the User-Agent string is often used for content negotiation, where the origin server selects suitable content or operating parameters for the response. For example, the User-Agent string might be used by a web server to choose variants based on the known capabilities of a particular version of client software. The concept of content tailoring is built into the HTTP standard in RFC 1945 "for the sake of tailoring responses to avoid particular user agent limitations."
The User-Agent string is one of the criteria by which Web crawlers may
be excluded from accessing certain parts of a Web site using the
Robots Exclusion Standard (robots.txt file).
As with many other HTTP request headers, the information in the
"User-Agent" string contributes to the information that the client
sends to the server, since the string can vary considerably from user
to user.
Format for human-operated web browsers
The User-Agent string format is currently specified by section 5.5.3 of HTTP/1.1 Semantics and Content. The format of the User-Agent string in HTTP is a list of product tokens (keywords) with optional comments. For example, if a user's product were called WikiBrowser, their user agent string might be WikiBrowser/1.0 Gecko/1.0. The "most important" product component is listed first.
The parts of this string are as follows:
product name and version (WikiBrowser/1.0)
layout engine and version (Gecko/1.0)
During the first browser war, many web servers were configured to only send web pages that required advanced features, including frames, to clients that were identified as some version of Mozilla. Other browsers were considered to be older products such as Mosaic, Cello or Samba and would be sent a bare bones HTML document.
For this reason, most Web browsers use a User-Agent string value as follows:
Mozilla/[version] ([system and browser information]) [platform] ([platform details]) [extensions]
For example, Safari on the iPad has used the following:
Mozilla/5.0 (iPad; U; CPU OS 3_2_1 like Mac OS X; en-us) AppleWebKit/531.21.10 (KHTML, like Gecko) Mobile/7B405
The components of this string are as follows:
Mozilla/5.0: Previously used to indicate compatibility with the Mozilla rendering engine.
(iPad; U; CPU OS 3_2_1 like Mac OS X; en-us): Details of the system in which the browser is running.
AppleWebKit/531.21.10: The platform the browser uses.
(KHTML, like Gecko): Browser platform details.
Mobile/7B405: This is used by the browser to indicate specific enhancements that are available directly in the browser or through third parties. An example of this is Microsoft Live Meeting, which registers an extension so that the Live Meeting service knows if the software is already installed, which means it can provide a streamlined experience to joining meetings.
Before migrating to the Chromium code base, Opera was the most widely used web browser that did not have the User-Agent string with "Mozilla" (instead beginning it with "Opera"). Since July 15, 2013, Opera's User-Agent string begins with "Mozilla/5.0" and, to avoid encountering legacy server rules, no longer includes the word "Opera" (instead using the string "OPR" to denote the Opera version).
Format for automated agents (bots)
Automated web crawling tools can use a simplified form, where an important field is contact information in case of problems. By convention the word "bot" is included in the name of the agent. For example:
Googlebot/2.1 (+http://www.google.com/bot.html)
Automated agents are expected to follow rules in a special file called "robots.txt".
User agent spoofing
The popularity of various Web browser products has varied throughout the Web's history, and this has influenced the design of Web sites in such a way that Web sites are sometimes designed to work well only with particular browsers, rather than according to uniform standards by the World Wide Web Consortium (W3C) or the Internet Engineering Task Force (IETF). Web sites often include code to detect browser version to adjust the page design sent according to the user agent string received. This may mean that less-popular browsers are not sent complex content (even though they might be able to deal with it correctly) or, in extreme cases, refused all content. Thus, various browsers have a feature to cloak or spoof their identification to force certain server-side content. For example, the Android browser identifies itself as Safari (among other things) in order to aid compatibility.
Other HTTP client programs, like download managers and offline
browsers, often have the ability to change the user agent string.
Spam bots and Web scrapers often use fake user agents.
At times it has been popular among Web developers to initiate Viewable
With Any Browser campaigns, encouraging developers to design Web pages
that work equally well with any browser.
A result of user agent spoofing may be that collected statistics of
Web browser usage are inaccurate.
User agent sniffing
The term user agent sniffing refers to the practice of Web sites showing different content when viewed with a certain user agent. On the Internet, this will result in a different site being shown when browsing the page with a specific browser. One example of this is Microsoft Exchange Server 2003's Outlook Web Access feature. When viewed with Internet Explorer 6 or newer, more functionality is displayed compared to the same page in any other browsers. User agent sniffing is now considered poor practice, since it encourages browser-specific design and penalizes new browsers with unrecognized user agent identifications. Instead, the W3C recommends creating HTML markup that is standard, allowing correct rendering in as many browsers as possible, and to test for specific browser features rather than particular browser versions or brands.
Web sites specifically targeted towards mobile phones, like NTT
DoCoMo's I-Mode or Vodafone's Vodafone Live! portals, often rely
heavily on user agent sniffing, since mobile browsers often differ
greatly from each other. Many developments in mobile browsing have
been made in the last few years,[when?] while many older phones that
do not possess these new technologies are still heavily used.
Therefore, mobile Web portals will often generate completely different
markup code depending on the mobile phone used to browse them. These
differences can be small, e.g., resizing of certain images to fit
smaller screens, or quite extensive, e.g., rendering of the page in
WML instead of XHTML.
Encryption strength notations
Web browsers created in the United States, such as Netscape Navigator and Internet Explorer, previously used the letters U, I, and N to specify the encryption strength in the user agent string. Until 1996, when the United States government disallowed encryption with keys longer than 40 bits to be exported, vendors shipped various browser versions with different encryption strengths. "U" stands for "USA" (for the version with 128-bit encryption), "I" stands for "International" – the browser has 40-bit encryption and can be used anywhere in the world – and "N" stands (de facto) for "None" (no encryption). Following the lifting of export restrictions, most vendors supported 256-bit encryption.
I'm currently in the process of building a browser helper object.
One of the things the BHO has to do is to make cross-site requests that bypass the cross-domain policy.
For this, I'm exposing a __MyBHONameSpace.Request method that uses WebClient internally.
However, it has occurred to me that anyone that is using my BHO now has a CSRF vulnerability everywhere as a smart attacker can now make arbitrary requests from my clients' computers.
Is there any clever way to mitigate this?
The only way to fully protect against such attacks is to separate the execution context of the page's JavaScript and your extension's JavaScript code.
When I researched this issue, I found that Internet Explorer does provide a way to achieve creation of such context, namely via IActiveScript. I have not implemented this solution though, for the following reasons:
Lack of documentation / examples that combines IActiveScript with BHOs.
Lack of certainty about the future (e.g. https://stackoverflow.com/a/17581825).
Possible performance implications (IE is not known for its superb performance; how would two instances of a JavaScript engine for each page affect browsing speed?).
Cost of maintenance: I already had an existing solution which was working well, based on very reasonable assumptions. Because I'm not certain whether the alternative method (using IActiveScript) would be bugfree and future-proof (see 2), I decided to drop the idea.
What I have done instead is:
Accept that very determined attackers will be able to access (part of) my extension's functionality.
@Benjamin asked whether access to a persistent storage API would pose a threat to the user's privacy. I consider this risk to be acceptable, because a storage quota is enforced, all stored data is validated before it's used, and it doesn't give an attacker any more tools to attack the user. If an attacker wants to track the user via persistent storage, they can just use localStorage on some domain and communicate with that domain via an <iframe> using the postMessage API. This method works across all browsers, not just IE with my BHO installed, so it is unlikely that any attacker would dedicate time to reverse-engineering my BHO in order to use its API when there's a method that already works in all modern browsers (IE8+).
Restrict the functionality of the extension:
The extension should only be activated on pages where it needs to be activated. This greatly reduces the attack surface, because an attacker would have to get their code to run on https://trusted.example.com itself, rather than simply tricking the user into visiting an arbitrary page.
Create and enforce whitelisted URLs for cross-domain access at extension level (in native code (e.g. C++) inside the BHO).
For sensitive APIs, limit its exposure to a very small set of trusted URLs (again, not in JavaScript, but in native code).
The part of the extension that handles the cross-domain functionality does not share any state with Internet Explorer. Cookies and authorization headers are stripped from the request and response. So, even if an attacker manages to get access to my API, they cannot impersonate the user at some other website, because of missing session information.
This does not protect against sites that use the IP of the requestor for authentication (such as intranet sites or routers), but this attack vector is already covered by a correct implementation of a whitelist (see step 2).
"Enforce in native code" does not mean "hard-code in native code". You can still serve updates that include metadata and the JavaScript code. MSVC++ (2010) supports ECMAScript-style regular expressions <regex>, which makes implementing a regex-based whitelist quite easy.
If you want to go ahead and use IActiveScript, you can find sample code in the source code of ceee, Gears (both discontinued) or any other project that attempts to enhance the scripting environment of IE.
I have a web application that supports a variety of clients on various platforms including desktop browsers, mobile browsers, as well as mobile and tablet native applications. I am wondering if it is possible to detect, in a secure manner, which of these platforms is being used to connect to the service.
This would be useful information to have, and would enable use cases where a security decision could be made based on the client platform. For example, I could restrict access to certain portions of the service if a user was on a mobile client, or a browser with a known vulnerability.
I am aware of EFF's Panopticlick research, which uses a variety of browser-based attributes, such as User-Agent string, installed plugins, screen dimensions, etc. to establish a unique fingerprint for a client, but this doesn't meet the "verifiable" requirement, as all the information is compiled on the client and could easily be spoofed.
I need a solution that is verifiable on the server side that the information sent by the client is accurate. Does such a solution exist?
There is no way to know which user-agent/platform you're dealing with, because ultimately any information you might use to identify them comes from the client side.
Any attribute you use to fingerprint my browser or operating system can be faked by simply sending you different HTTP headers. There are dozens of browser-level HTTP header manipulators and HTTP request code libraries that do precisely that.
I would therefore highly recommend against making -any- security decision based on platform or user-agent of the client. Assume that whatever rules you may set for purposes of usability along those lines can be violated by a hacker.
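To see why, note that any HTTP client can claim to be any browser or platform; a minimal Node 18+ sketch (the URL and the spoofed string are arbitrary):

fetch('https://example.com/api/resource', {
  headers: {
    // Claim to be Safari on an iPhone, regardless of what is actually running.
    'User-Agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.0 Mobile/15E148 Safari/604.1'
  }
}).then(res => console.log(res.status));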