Getting MSISDN from mobile browser headers

What is the best way of going about this? I need to get MSISDN data from users accessing a mobisite to enhance the user experience.
I understand that not all gateways populate the headers fully, but I would like to have MSISDN capture as option one before falling back on a cookie-based model.

I know this is an old post, but I'd like to give my contribution.
I work for a mobile carrier, and here we have a feature where you can set some parameters for header enrichment. We create filters to match certain traffic passing through the GGSN (Gateway GPRS Support Node); it then opens the packets at layer 7 (when the application layer is HTTP, i.e. not protected with SSL) and writes the MSISDN, IMSI and other parameters into them.
So it is a carrier-dependent feature.

While some operators do this, the representation and mechanism depends entirely on the operator. There is no standard way to do this.

If you are willing to pay for it, try http://Bango.com. They provide an API, but you may need to redirect the user to their service.

As others have said, there is no standard way between mobile operators for passing the MSISDN in the HTTP headers.
Different operators vary on the header value used, some operators do not pass the MSISDN unless they "authorize" your website and others have more complicated means of passing the MSISDN (e.g. redirects to their network to pick up the header).
Developing a site for one specific operator is easy enough, developing for multiple is next to impossible if you need to rely on the header.
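As a rough illustration of the "header first, cookie fallback" approach, here is a minimal sketch in TypeScript (Node's built-in http module). Note that the header names below are only common examples, since each carrier decides what, if anything, it injects, and the cookie name is invented for the example.

```typescript
import { createServer, IncomingMessage } from "node:http";

// Carrier-specific enrichment headers vary; these names are only common examples.
const CANDIDATE_HEADERS = ["x-msisdn", "x-up-calling-line-id", "x-wap-msisdn"];

function msisdnFromHeaders(req: IncomingMessage): string | undefined {
  for (const name of CANDIDATE_HEADERS) {
    const value = req.headers[name];
    if (typeof value === "string" && value.trim() !== "") return value.trim();
  }
  return undefined;
}

createServer((req, res) => {
  // Option one: header enrichment injected by the carrier's gateway.
  const msisdn = msisdnFromHeaders(req);
  if (msisdn) {
    res.end(`MSISDN from gateway header: ${msisdn}`);
    return;
  }
  // Fallback: a cookie-based identifier set by some other registration flow.
  const cookie = req.headers.cookie ?? "";
  const match = /visitor_id=([^;]+)/.exec(cookie);
  res.end(match ? `Known visitor: ${match[1]}` : "Unknown visitor");
}).listen(8080);
```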

Related

Campaign tracking on Cross-Domain without redirect (no linker)

Let's say we have the following URL:
https://www.sale.com/?utm_source=CDTest3Newsletter&utm_medium=CDTest3Email&utm_campaign=CDTest3FallSale&utm_id=CDT3ID
A user clicks on the link, browses the site and then closes the session.
An hour later he/she navigates to www.purchase.com and a conversion occurs. Is there a way to track the conversion and relate it to utm_id=CDT3ID?
In summary, the conversion happens on the second domain and we want to relate it to the first domain's marketing campaign!
Please note I know how to enable the linker while redirecting from the origin domain to the target domain!
You have to realize that this kind of behavior is not standard. Therefore, it will require non-standard solutions.
Having said that, your real problem is not the attribution. In the described scenario, you are likely to lose the user completely. Purchase.com will have no idea that this client is supposed to have the same id as on the previous site. The linker adds an explicit _ga query param to the url for the ga library on the purchase.com to know to use that as a user id and not to generate a new one.
If you're not able to reliably pass the client id to the checkout TLD through front-end, you have to use your backend to match the user by the BE auth/session token. Same exact logic applies if you want to pass the attribution data. You just keep it on the backend, bound to the user session token and throw it to the user's cookie on checkout, then grab it with GTM and populate it however you like. Or you can as well just conduct a BE redirect, appending both the _ga and the UTM query params to the url.
There are a few more options if you're not using GA for your actual analysis, i.e. if you're able to match users and calculate attribution on your own, either through ETL or persistent derived tables/SQL. So, basically, if you download your GA data to third-party storage like Snowflake, Azure or BQ and then use a BI tool on top of that. But at this point those options should be pretty apparent from the issue and possible solution described above.
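As a concrete illustration of the "BE redirect" option above, here is a minimal sketch, assuming a hypothetical hand-off endpoint on sale.com that reads the visitor's _ga cookie and re-attaches the campaign's UTM parameters before redirecting to purchase.com. Whether your GA setup accepts the raw cookie value as a _ga linker parameter depends on the library and its allowLinker/linker configuration, so treat this purely as a sketch of moving the values across domains.

```typescript
import { createServer } from "node:http";

// Hypothetical hand-off endpoint on sale.com: forwards the GA client id and
// the UTM parameters (assumed to have been re-attached to this URL) to purchase.com.
createServer((req, res) => {
  const url = new URL(req.url ?? "/", "https://www.sale.com");
  const cookies = Object.fromEntries(
    (req.headers.cookie ?? "")
      .split(";")
      .map((c) => c.trim().split("=") as [string, string])
      .filter(([k, v]) => k && v)
  );

  const target = new URL("https://www.purchase.com/checkout");
  // Forward the GA client id so the analytics library on purchase.com can reuse it.
  if (cookies["_ga"]) target.searchParams.set("_ga", cookies["_ga"]);
  // Forward the campaign attribution captured on the first domain.
  for (const key of ["utm_source", "utm_medium", "utm_campaign", "utm_id"]) {
    const value = url.searchParams.get(key);
    if (value) target.searchParams.set(key, value);
  }

  res.writeHead(302, { Location: target.toString() });
  res.end();
}).listen(8080);
```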

Best practice for sending query parameters in a GET request?

I am writing a backend for my application that will accept query parameters from the front end, and then query my DB based on these parameters. This sounds to me like it should be a GET request, but since I have a lot of params that I'm passing with some of them being optional I think it would be easiest to do a POST request and send the search params in a request body. I know I can convert my params to a query string and append it to my GET request, but there has to be a better way because I will be passing different data types and will end up having to parse the params on the backend anyways if I do it this way.
This depends heavily on the context, but I would prefer using a GET request in your scenario.
What Request Method should I use
According to the widely accepted convention, one uses:
GET to read existing data
POST to create something new
More details can be found here: https://www.restapitutorial.com/lessons/httpmethods.html
How do I pass the parameters
Regarding the way to pass parameters, it is a less obvious thing. Unless there's something sensitive in the request parameters, it is perfectly fine to send them as part of URL.
Parameters may be either part of path:
myapi/customers/123
or a query string:
myapi?customer=123
Both options are feasible, and I'd say a choice depends heavily on the application domain model. One popular rule of thumb is:
use "parameters as a part of a path" for mandatory parameters
use "parameters as a query string" for optional parameters.
I'd recommend using POST in the case where there are a lot of parameters/options. There are a few reasons why I think it's better than GET:
Your url will be cleaner looking
You hide internal structure from the user (it's still visible if they use the Developer Tools of the browser though)
People can't easily change the options to adjust your query. Having them in the URL makes it simple to just modify and reload with other values; it's more work to do this with a POST.
However, if it's of any use that the URL you end up with can be bookmarked or shared, then you'd want all parameters encoded as part of the query, so using GET would be best in that case.
Another answer stated that POST should be used for creating something new, but I disagree. That might apply to PUT, but it's perfectly fine to use POST to allow more complex structures to be passed even when retrieving existing data.
For example, with POST you can send a JSON body object that has nested structure. This can be very handy and would be difficult to explode into a traditional GET query. You also have to worry about URL-encoding your data then decoding it when receiving it, which is a hassle.
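For illustration, here is a sketch of such a nested search payload sent with POST via fetch; the endpoint and field names are invented for the example.

```typescript
// Hypothetical search endpoint; field names are illustrative only.
const searchRequest = {
  text: "garden furniture",
  filters: {
    price: { min: 10, max: 250 },
    categories: ["outdoor", "wood"],
    inStockOnly: true,
  },
  sort: [{ field: "price", direction: "asc" }],
  page: { number: 1, size: 20 },
};

// A nested structure like this is awkward to flatten into ?filters.price.min=10&...,
// but trivial to send as a JSON body with POST.
const response = await fetch("https://api.example.com/products/search", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(searchRequest),
});
const results = await response.json();
console.log(results);
```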
For simple frontend-to-backend communication you don't really need REST to start with, as it targets cases where the server is accessed by a plethora of clients not under your control, or where a client has to access plenty of different servers and should work with all of them. REST should be aimed for if you see benefit in a server that can evolve freely in the future without having to fear breaking clients, as they will adapt to changes quite easily. Such strong properties, however, come at a price in terms of development overhead and careful design. Don't get me wrong, you can still aim for a REST architecture, but for such a simple application-to-backend scenario this sounds like overkill.
In a REST architecture a server will usually tell clients how it wants to receive input data. Think of HTML forms, where the method and enctype attributes specify which HTTP method to use and which representation format to convert the input to. Which HTTP method to use depends on the actual use case. If a server constantly receives the same request for the same input parameters and calculating the result may be costly, then caching the response once and serving further requests from that cache might take away a lot of unnecessary computation overhead from the server. The BBC, for example, claims that the cache is the single most important technology in keeping sites scalable and fast. I once read that they cache most articles for only a minute, but this is sufficient to spare them from retrieving the same content thousands and thousands of times, freeing up resources for other requests or tasks. It is no coincidence that caching is one of the few constraints REST has.
HTTP by default will allow caches to store response representations for requested URIs (including any query, path or matrix parameters) if requested via safe operations, such as HEAD or GET requests. Any unsafe operation invoked, however, will lead to a cache invalidation and therefore the removal of any stored representations for that target URI. Hence, any followup requests of that URI will reach the server in order to process a response for the requesting client.
Unfortunately caching isn't the only factor to consider when deciding between GET and POST, as the representation format the client is currently processing also has an influence on the decision. Think of a client processing the previous HTML response received from a server. The HTML response contains a form that teaches the client which fields the server expects as input, as well as the choices the client can make for certain input parameters. HTML is a perfect example where the media type restricts which HTTP methods are available (GET as default method and POST are supported) and which are not (all of the other HTTP methods). Other representation formats might only support POST (e.g. while application/soap+xml would allow for either GET or POST, at least in SOAP 1.2, I have never seen GET requests in reality, so everything is exchanged with POST).
A further point that may prevent you from using GET requests is the de facto limit on URI length that most HTTP implementations have. If you exceed this limit, some HTTP frameworks might not be able to process the message exchanged. Looking at the Web, however, one can find a workaround for such a limitation. In most Web shops the checkout area is split into different pages, where each page consists of a form that collects some input like address information or bank and payment data; together they act as a kind of wizard that guides the user through the payment process. Such a wizard style could be implemented here as well: parts of the request are sent via POST to a dedicated endpoint that takes care of collecting the data, and on the final "page" of the wizard the server asks for a final confirmation of the collected data and uses that resource as the GET target. This way the response remains cacheable even though the input data exceeded the typical URL limitation imposed by some HTTP frameworks.
While the arguments listed by Always Learning aren't wrong, I wouldn't rely on them from a security standpoint. They may filter out people with little knowledge, but they won't hinder knowledgeable ones (and there are plenty out there) for long from modifying the request before sending it to your server. So simply recommending POST as a way to make user edits harder feels odd to me.
So, in summary, I'd base the decision whether to use POST or GET for sending data to the server mainly on whether the response should be cacheable, as it is often requested, or not. In cases where the URI might get so large that certain HTTP frameworks fail to process the request, you are basically forced to use POST anyway, unless you can split the actual request into multiple smaller requests which act as a wizard for the data collection until a final confirmation request triggers the actual final HTTP call.
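To make the caching argument concrete, here is a minimal sketch (plain Node/TypeScript, arbitrary max-age) in which the GET response explicitly advertises cacheability so shared caches can serve repeat lookups, while the POST branch is left uncacheable, as intermediaries treat it by default.

```typescript
import { createServer } from "node:http";

createServer((req, res) => {
  if (req.method === "GET" && req.url?.startsWith("/report")) {
    // Safe, repeatable lookup: let shared caches serve identical requests for 60s.
    res.writeHead(200, {
      "Content-Type": "application/json",
      "Cache-Control": "public, max-age=60",
    });
    res.end(JSON.stringify({ generatedAt: new Date().toISOString() }));
    return;
  }
  if (req.method === "POST") {
    // Unsafe operation: caches will not store this response by default,
    // and any cached entry for the same URI is invalidated.
    res.writeHead(204);
    res.end();
    return;
  }
  res.writeHead(404);
  res.end();
}).listen(8080);
```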

Is it safe to use a UUID in a URL for semi-private data?

I run a landscaping company and have multiple crews. I want to provide each one with a custom URL (like mysite.com/xxxx-xxxx-xxxx) that shows their daily schedule. Going to the page will list the name, address and phone number of 5-10 customers for the day.
Is it safe/wise to use a UUID in a URL for semi-private data?
Depends on how safe you want it to be.
Are the UUIDs used for anything else? If not, they are fine for creating random URLs.
But, browser history would allow anyone using the same machine to find the URLs. Also, unless using https, a network sniffer could easily see the requested URLs and go to the same page.
Another concern is spider bots. Make sure nothing links to those pages and use a robots.txt to prevent indexing of the site, but you still might find that some of the pages show up on search engines. It might be better to have the UUID set in a cookie and check that to determine which employee it is, lest your semi-private pages start showing up on Google.
Whether or not that schema would work for you, depends on your threat model (as well as some implementation details). Without a concrete threat model, it is not possible to give a definitive answer to your question.
I can, however, give you some ideas about potential issues with the solution, so you can determine if they are relevant for your application. This is not a complete list.
On the implementation side of things:
Not all UUID generators are created equal. Ideally, you want to use a generator based on a cryptographically secure RNG, providing a UUID where every byte is chosen at random (see the sketch after this list)
Using the UUID for a database lookup or similar operation is not necessarily a constant-time operation (and thus there might be side-channel attacks unless you implement the lookup yourself)
Make sure your URI does not leak via referrer
Some tools attempt to detect 'secret' URLs to protect them from history synchronization or other automatic features. Your schema will most likely not be detected as 'secret'. It might be better to artificially lengthen your URI and to move your UUID into a query parameter.
You can further reduce attack surface with the usual methods (rate limiting, server hardening, etc.)
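A minimal sketch of those two implementation points, assuming Node's built-in crypto module; the variable names are illustrative only.

```typescript
import { randomUUID, timingSafeEqual } from "node:crypto";

// Generate the crew's link token from a cryptographically secure RNG.
const crewToken = randomUUID(); // version-4 UUID, 122 random bits

// Compare a presented token against a stored one without leaking timing information.
function tokensMatch(presented: string, stored: string): boolean {
  const a = Buffer.from(presented, "utf8");
  const b = Buffer.from(stored, "utf8");
  if (a.length !== b.length) return false; // the length itself is not secret here
  return timingSafeEqual(a, b);
}

console.log(tokensMatch(crewToken, crewToken)); // true
console.log(tokensMatch("00000000-0000-4000-8000-000000000000", crewToken)); // false
```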
On the conceptual side of things:
A single identifier for both identification and authentication is not necessarily a bad thing. However, in most cases there is a need for an identification-only identifier – you must not use the 'secret' UUID in those scenarios
If a 'crew' consists of multiple people: you cannot revoke access for a single crew member
Some software (antivirus, browser, etc.) treats information in URLs as public information, and might upload them without user interaction

Is it good to use POST for each HTTP verb in REST API for security reasons?

I am working on an enterprise web app. Our app would be deployed to production over HTTPS only. We are writing REST APIs that would be consumed by our HTML5 client only (not by others).
My team has decided to use POST for all REST verbs, because GET, PUT and DELETE expose the id in the URL.
What my team thinks is that having any type of data exposed in the URL is not a good idea, even if the id is not directly the database id.
Everybody seems to agree on using the POST HTTP method in place of GET, PUT and DELETE, passing http_method as a parameter and using that parameter to decide what operation to perform.
This is the only difference that we are planning to do for REST APIs. What do you think?
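For clarity, here is roughly what the proposed convention looks like on the wire versus plain HTTP; the endpoint and payload shape are invented for illustration, and the answers below explain what you give up with the first variant.

```typescript
// What the proposed convention looks like from the HTML5 client (illustrative only).
// Every call is a POST to one endpoint, and the "real" verb travels in the body.
await fetch("https://app.example.com/api", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    http_method: "DELETE", // the verb the server should act on
    resource: "orders",
    id: "42",              // the id the team wants to keep out of the URL
  }),
});

// The plain HTTP equivalent the answers below argue for:
await fetch("https://app.example.com/api/orders/42", { method: "DELETE" });
```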
By avoiding the use of GET you lose the ability to use caching. Hiding the id of the resource in the body will most likely cause you to violate the "identification of resources" constraint, which will make linking/hypermedia far more difficult.
I would assert that it will be almost impossible to get the benefits of a RESTful system if you limit yourself to POST. Having said that, considering you are only targeting HTML5 clients you probably don't need most of the REST benefits. Doing Json-RPC is probably more than adequate for your needs.
However, I still think throwing away GET for a false sense of security is a bad idea. The body of your POSTs is just as easy for some intermediary to access as an id in the URI.
You should use HTTP as it was intended so you can get the full benefits of the stack. The different verbs have different semantics for a reason.
You can feel like POST is more secure than GET all you want, but it's a false premise.
Who are you securing this data from? Man in the middle? That's what HTTPS solves.
Are you securing it from your end users? The ones who can download your entire application and study the source code at their leisure whenever they want? Or view the source of a page with a right click and see your payloads laid out in all their glory? Or simply turn the browser debugger on and watch all of your traffic? You're trying to secure it from those people?
POST doesn't hide anything from anybody.
You should probably have a look at this post as it pretty much explains what https does for you. So the question would be: who do you want to protect the URL from? It might be in the browser history or visible to someone standing behind the user, but not to anyone listening on the line.

What can POST do, GET can't do? [duplicate]

From what I can gather, there are three categories:
Never use GET and use POST
Never use POST and use GET
It doesn't matter which one you use.
Am I correct in assuming those three cases? If so, what are some examples from each case?
Use POST for destructive actions such as creation (I'm aware of the irony), editing, and deletion, because you can't hit a POST action in the address bar of your browser. Use GET when it's safe to allow a person to call an action. So a URL like:
http://myblog.org/admin/posts/delete/357
Should bring you to a confirmation page, rather than simply deleting the item. It's far easier to avoid accidents this way.
POST is also more secure than GET, because you aren't sticking information into a URL. And so using GET as the method for an HTML form that collects a password or other sensitive information is not the best idea.
One final note: POST can transmit a larger amount of information than GET. POST has no size restriction on the transmitted data, whilst GET URLs are limited by the browser (Internet Explorer, for example, caps them at 2,048 characters).
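A minimal sketch of that confirmation-page idea, assuming a plain Node/TypeScript server and reusing the delete URL from the answer; the in-memory storage is just a stand-in.

```typescript
import { createServer } from "node:http";

const posts = new Map<number, string>([[357, "My old blog post"]]);

createServer((req, res) => {
  const match = /^\/admin\/posts\/delete\/(\d+)$/.exec(req.url ?? "");
  if (!match) { res.writeHead(404); res.end(); return; }
  const id = Number(match[1]);

  if (req.method === "GET") {
    // Safe: just show a confirmation page, nothing is deleted yet.
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(`<form method="post"><p>Delete post ${id}?</p><button>Confirm</button></form>`);
  } else if (req.method === "POST") {
    // The destructive action only happens on POST, i.e. after explicit confirmation.
    posts.delete(id);
    res.writeHead(303, { Location: "/admin/posts" });
    res.end();
  } else {
    res.writeHead(405);
    res.end();
  }
}).listen(8080);
```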
In brief
Use GET for safe and idempotent requests
Use POST for neither safe nor idempotent requests
In details
There is a proper place for each. Even if you don't follow RESTful principles, a lot can be gained from learning about REST and how a resource oriented approach works.
A RESTful application will use GETs for operations which are both safe and idempotent.
A safe operation is an operation which does not change the data requested.
An idempotent operation is one in which the result will be the same no matter how many times you request it.
It stands to reason that, as GETs are used for safe operations they are automatically also idempotent. Typically a GET is used for retrieving a resource (a question and its associated answers on stack overflow for example) or collection of resources.
A RESTful app will use PUTs for operations which are not safe but idempotent.
I know the question was about GET and POST, but I'll return to POST in a second.
Typically a PUT is used for editing a resource (editing a question or an answer on stack overflow for example).
A POST would be used for any operation which is neither safe nor idempotent.
Typically a POST would be used to create a new resource for example creating a NEW SO question (though in some designs a PUT would be used for this also).
If you run the POST twice you would end up creating TWO new questions.
There's also a DELETE operation, but I'm guessing I can leave that there :)
Discussion
In practical terms, modern web browsers typically only support GET and POST reliably (you can perform all of these operations via JavaScript calls, but in terms of entering data in forms and pressing submit you've generally got the two options). In a RESTful application the POST will often be overridden to provide the PUT and DELETE calls as well.
But, even if you are not following RESTful principles, it can be useful to think in terms of using GET for retrieving / viewing information and POST for creating / editing information.
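A tiny sketch of the safe/idempotent/unsafe distinction described above, using fetch; the URLs are placeholders.

```typescript
// GET is safe: reading the same resource any number of times changes nothing.
const question = await fetch("https://example.com/questions/123").then((r) => r.json());

// PUT is idempotent but not safe: replaying the same full update leaves the
// resource in the same state as sending it once.
await fetch("https://example.com/questions/123", {
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ ...question, title: "Edited title" }),
});

// POST is neither: running it twice creates TWO new questions.
await fetch("https://example.com/questions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ title: "A brand new question" }),
});
```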
You should never use GET for an operation which alters data. If a search engine crawls a link to your evil op, or the client bookmarks it, it could spell big trouble.
Use GET if you don't mind the request being repeated (That is it doesn't change state).
Use POST if the operation does change the system's state.
Short Version
GET: Usually used for submitting search requests, or any request where you want the user to be able to pull up the exact page again.
Advantages of GET:
URLs can be bookmarked safely.
Pages can be reloaded safely.
Disadvantages of GET:
Variables are passed through url as name-value pairs. (Security risk)
Limited number of variables that can be passed. (Based upon browser. For example, Internet Explorer is limited to 2,048 characters.)
POST: Used for higher security requests where data may be used to alter a database, or a page that you don't want someone to bookmark.
Advantages of POST:
Name-value pairs are not displayed in url. (Security += 1)
Unlimited number of name-value pairs can be passed via POST. Reference.
Disadvantages of POST:
Pages that use POST data cannot be bookmarked (if you so desired).
Longer Version
Directly from the Hypertext Transfer Protocol -- HTTP/1.1:
9.3 GET
The GET method means retrieve whatever information (in the form of an entity) is identified by the Request-URI. If the Request-URI refers to a data-producing process, it is the produced data which shall be returned as the entity in the response and not the source text of the process, unless that text happens to be the output of the process.
The semantics of the GET method change to a "conditional GET" if the request message includes an If-Modified-Since, If-Unmodified-Since, If-Match, If-None-Match, or If-Range header field. A conditional GET method requests that the entity be transferred only under the circumstances described by the conditional header field(s). The conditional GET method is intended to reduce unnecessary network usage by allowing cached entities to be refreshed without requiring multiple requests or transferring data already held by the client.
The semantics of the GET method change to a "partial GET" if the request message includes a Range header field. A partial GET requests that only part of the entity be transferred, as described in section 14.35. The partial GET method is intended to reduce unnecessary network usage by allowing partially-retrieved entities to be completed without transferring data already held by the client.
The response to a GET request is cacheable if and only if it meets the requirements for HTTP caching described in section 13.
See section 15.1.3 for security considerations when used for forms.
9.5 POST
The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line. POST is designed to allow a uniform method to cover the following functions:
Annotation of existing resources;
Posting a message to a bulletin board, newsgroup, mailing list, or similar group of articles;
Providing a block of data, such as the result of submitting a form, to a data-handling process;
Extending a database through an append operation.
The actual function performed by the POST method is determined by the server and is usually dependent on the Request-URI. The posted entity is subordinate to that URI in the same way that a file is subordinate to a directory containing it, a news article is subordinate to a newsgroup to which it is posted, or a record is subordinate to a database.
The action performed by the POST method might not result in a resource that can be identified by a URI. In this case, either 200 (OK) or 204 (No Content) is the appropriate response status, depending on whether or not the response includes an entity that describes the result.
The first important thing is the meaning of GET versus POST:
GET should be used to... get... some information from the server,
while POST should be used to send some information to the server.
After that, a couple of things that can be noted :
Using GET, your users can use the "back" button in their browser, and they can bookmark pages
There is a limit in the size of the parameters you can pass with GET (2KB for some versions of Internet Explorer, if I'm not mistaken); the limit is much higher for POST, and generally depends on the server's configuration.
Anyway, I don't think we could "live" without GET : think of how many URLs you are using with parameters in the query string, every day -- without GET, all those wouldn't work ;-)
Apart from the length constraints difference in many web browsers, there is also a semantic difference. GETs are supposed to be "safe" in that they are read-only operations that don't change the server state. POSTs will typically change state and will give warnings on resubmission. Search engines' web crawlers may make GETs but should never make POSTs.
Use GET if you want to read data without changing state, and use POST if you want to update state on the server.
My general rule of thumb is to use Get when you are making requests to the server that aren't going to alter state. Posts are reserved for requests to the server that alter state.
One practical difference is that browsers and webservers have a limit on the number of characters that can exist in a URL. It's different from application to application, but it's certainly possible to hit it if you've got textareas in your forms.
Another gotcha with GETs - they get indexed by search engines and other automatic systems. Google once had a product that would pre-fetch links on the page you were viewing, so they'd be faster to load if you clicked those links. It caused major havoc on sites that had links like delete.php?id=1 - people lost their entire sites.
Use GET when you want the URL to reflect the state of the page. This is useful for viewing dynamically generated pages, such as those seen here. A POST should be used in a form to submit data, like when I click the "Post Your Answer" button. It also produces a cleaner URL since it doesn't generate a parameter string after the path.
Because GETs are purely URLs, they can be cached by the web browser and may be better used for things like consistently generated images. (Set an Expiry time)
One example from the gravatar page: http://www.gravatar.com/avatar/4c3be63a4c2f539b013787725dfce802?d=monsterid
GET may yield marginally better performance; some webservers write POST contents to a temporary file before invoking the handler.
Another thing to consider is the size limit. GETs are capped by the size of the URL; the HTTP standard itself sets no hard limit, but servers and browsers impose practical limits of their own.
Transferring more data than that should use a POST to get better browser compatibility.
Even less than that limit is a problem; as another poster wrote, anything in the URL could end up in other parts of the browser's UI, like history.
1.3 Quick Checklist for Choosing HTTP GET or POST
Use GET if:
The interaction is more like a question (i.e., it is a safe operation such as a query, read operation, or lookup).
Use POST if:
The interaction is more like an order, or
The interaction changes the state of the resource in a way that the user would perceive (e.g., a subscription to a service), or
The user be held accountable for the results of the interaction.
Source.
There is nothing you can't do per-se. The point is that you're not supposed to modify the server state on an HTTP GET. HTTP proxies assume that since HTTP GET does not modify the state then whether a user invokes HTTP GET one time or 1000 times makes no difference. Using this information they assume it is safe to return a cached version of the first HTTP GET. If you break the HTTP specification you risk breaking HTTP client and proxies in the wild. Don't do it :)
This traverses into the concept of REST and how the web was kinda intended on being used. There is an excellent podcast on Software Engineering radio that gives an in depth talk about the use of Get and Post.
Get is used to pull data from the server, where an update action shouldn't be needed. The idea being is that you should be able to use the same GET request over and over and have the same information returned. The URL has the get information in the query string, because it was meant to be able to be easily sent to other systems and people like a address on where to find something.
Post is supposed to be used (at least by the REST architecture which the web is kinda based on) for pushing information to the server/telling the server to perform an action. Examples like: Update this data, Create this record.
Using it to update state - like a GET of delete.php?id=5 to delete a page - is very risky. People found that out when Google's web accelerator started prefetching URLs on pages - it hit all the 'delete' links and wiped out peoples' data. Same thing can happen with search engine spiders.
POST can move large amounts of data while GET cannot.
But generally it's not about a shortcoming of GET, rather a convention to follow if you want your website/webapp to behave nicely.
Have a look at http://www.w3.org/2001/tag/doc/whenToUseGet.html
From RFC 2616:
9.3 GET
The GET method means retrieve whatever information (in the form of an entity) is identified by the Request-URI. If the Request-URI refers to a data-producing process, it is the produced data which shall be returned as the entity in the response and not the source text of the process, unless that text happens to be the output of the process.
9.5 POST
The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line. POST is designed to allow a uniform method to cover the following functions:
Annotation of existing resources;
Posting a message to a bulletin board, newsgroup, mailing list, or similar group of articles;
Providing a block of data, such as the result of submitting a form, to a data-handling process;
Extending a database through an append operation.
The actual function performed by the POST method is determined by the server and is usually dependent on the Request-URI. The posted entity is subordinate to that URI in the same way that a file is subordinate to a directory containing it, a news article is subordinate to a newsgroup to which it is posted, or a record is subordinate to a database.
The action performed by the POST method might not result in a resource that can be identified by a URI. In this case, either 200 (OK) or 204 (No Content) is the appropriate response status, depending on whether or not the response includes an entity that describes the result.
I use POST when I don't want people to see the QueryString or when the QueryString gets large. Also, POST is needed for file uploads.
I don't see a problem using GET though, I use it for simple things where it makes sense to keep things on the QueryString.
Using GET will allow linking to a particular page possible too where POST would not work.
The original intent was that GET was used for getting data back and POST was to be anything. The rule of thumb that I use is that if I'm sending anything back to the server, I use POST. If I'm just calling a URL to get back data, I use GET.
Read the article about HTTP on Wikipedia. It will explain what the protocol is and what it does:
GET
Requests a representation of the specified resource. Note that GET should not be used for operations that cause side-effects, such as using it for taking actions in web applications. One reason for this is that GET may be used arbitrarily by robots or crawlers, which should not need to consider the side effects that a request should cause.
and
POST
Submits data to be processed (e.g., from an HTML form) to the identified resource. The data is included in the body of the request. This may result in the creation of a new resource or the updates of existing resources or both.
The W3C has a document named URIs, Addressability, and the use of HTTP GET and POST that explains when to use what. Citing
1.3 Quick Checklist for Choosing HTTP GET or POST
Use GET if:
The interaction is more like a question (i.e., it is a
safe operation such as a query, read operation, or lookup).
and
Use POST if:
The interaction is more like an order, or
The interaction changes the state of the resource in a way that the user would perceive (e.g., a subscription to a service), or
The user be held accountable for the results of the interaction.
However, before the final decision to use HTTP GET or POST, please also consider considerations for sensitive data and practical considerations.
A practical example would be whenever you submit an HTML form: you specify either post or get for the form action, and PHP will populate $_GET and $_POST accordingly.
In PHP, the POST data limit is usually set in your php.ini. GET is limited by server/browser settings; the practical URL limits vary by implementation, from a couple of kilobytes in older browsers upwards.
From w3schools.com:
What is HTTP?
The Hypertext Transfer Protocol (HTTP) is designed to enable
communications between clients and servers.
HTTP works as a request-response protocol between a client and server.
A web browser may be the client, and an application on a computer that
hosts a web site may be the server.
Example: A client (browser) submits an HTTP request to the server;
then the server returns a response to the client. The response
contains status information about the request and may also contain the
requested content.
Two HTTP Request Methods: GET and POST
Two commonly used methods for a request-response between a client and
server are: GET and POST.
GET – Requests data from a specified resource
POST – Submits data to be processed to a specified resource
Well, one major thing is that anything you submit over GET is going to be exposed via the URL. Secondly, as Ceejayoz says, there is a limit on the number of characters in a URL.
Another difference is that POST generally requires two HTTP operations, whereas GET only requires one.
Edit: I should clarify--for common programming patterns. Generally responding to a POST with a straight up HTML web page is a questionable design for a variety of reasons, one of which is the annoying "you must resubmit this form, do you wish to do so?" on pressing the back button.
As answered by others, there's a limit on URL size with GET, and files can be submitted only with POST.
I'd like to add that one can add things to a database with a GET and perform actions with a POST. When a script receives a POST or a GET, it can do whatever the author wants it to do. I believe the lack of understanding comes from the wording the book chose or how you read it.
A script author should use POSTs to change the database and use GET only for retrieval of information.
Scripting languages provided many means with which to access the request. For example, PHP allows the use of $_REQUEST to retrieve either a POST or a GET. One should avoid this in favor of the more specific $_GET or $_POST.
In web programming, there's a lot more room for interpretation. There's what one should do and what one can do, but which one is better is often up for debate. Luckily, in this case, there is no ambiguity. You should use POSTs to change data, and you should use GET to retrieve information.
HTTP POST data doesn't have a specified limit on the amount of data, whereas different browsers have different limits for GETs. RFC 2068 states:
Servers should be cautious about depending on URI lengths above 255 bytes, because some older client or proxy implementations may not properly support these lengths
Specifically, you should use the right HTTP constructs for what they're intended for. HTTP GETs shouldn't have side effects and can be safely refreshed and stored by HTTP proxies, etc.
HTTP POSTs are used when you want to submit data to a URL resource.
A typical example of using HTTP GET is a search, e.g. Search?Query=my+query
A typical example of using HTTP POST is submitting feedback via an online form.
Gorgapor, mod_rewrite still often utilizes GET. It just allows you to translate a friendlier URL into a URL with a GET query string.
Simple version of POST, GET, PUT, DELETE:
use GET - when you want to get a resource, like a list of data based on an Id or Name
use POST - when you want to send data to the server. Keep in mind POST is a heavyweight operation, because for updates we should use PUT instead of POST
internally POST will create a new resource
use PUT - when you want to update an existing resource
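Since that summary is brief, here is a compact sketch of the usual verb-to-operation mapping it describes; the endpoint and fields are placeholders.

```typescript
// Typical mapping of verbs to CRUD operations (paths are placeholders):
await fetch("https://api.example.com/items?name=chair");             // GET: read a list or a single resource
await fetch("https://api.example.com/items", {                       // POST: create a new resource
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "chair", price: 40 }),
});
await fetch("https://api.example.com/items/7", {                     // PUT: replace/update an existing resource
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "chair", price: 35 }),
});
await fetch("https://api.example.com/items/7", { method: "DELETE" }); // DELETE: remove it
```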
