Reputation: 107072
A REST API can have arguments in several places:
1. In the request body, as part of a JSON payload or another MIME type
2. In the query string, e.g. /api/resource?p1=v1&p2=v2
3. As part of the URL path, e.g. /api/resource/v1/v2
What are the best practices and considerations of choosing between 1 and 2 above?
2 vs 3 is covered here.
Upvotes: 302
Views: 251256
Reputation: 1817
Current best practice for GET uses routes and query parameters, not bodies. According to some posts I've read, some systems will throw failures when trying to read a GET with a body.
However, I believe this system and way of thinking is outdated, and simply reading this post and others proves it well. It is my personal opinion that GET requests should read something like the example below, so that privacy practices and laws such as HIPAA can be properly implemented. Query strings and URLs are easily read while sitting between two endpoints, leading to some crazy, overstuffed query strings and other behaviors (like sending GETs as POSTs) that are simply symptoms of this outdated practice. Unfortunately, I am not a big enough cog in the machine to elicit such change.
As practitioners and pioneers of our art we should very much consider such a change, but we should also maintain the current best practices until that time:
GET /Humans?page=2&sort=desc
{
  "Name": "Bob",
  "Age": "44",
  "Region": "Greece"
}
aka
GET /route?metasearchinfo
{
  "private": "searchInfo"
}
The proposed example has HIPAA-type data and other private data in the body of the GET, the meta search data in the query string, and the exact URL path that identifies where we will query this data. In this query we don't betray any sensitive information, we aren't disguising a GET as a POST, and we keep things organized.
Again, feel free to disagree, but I think this kind of GET makes sense for a future that upholds privacy as an important ideal. Unfortunately, it isn't best practice at this time and will cause issues with some systems.
For those of you who don't realize it, it is not current best practice to put sensitive data in bodies on GETs. My point, however, is that we are still thinking primitively. What is sensitive versus what is not sensitive is defined by law, but laws change and not everyone agrees. If you went to a restaurant and they announced your name, you might not like that if you were Johnny Depp trying to stay incognito on a date with your brand new girlfriend.
Even better, from an eavesdropper's point of view: today all you have to do is look at the requests, find the ones with bodies, and you know exactly which ones to decrypt to get the information you want. If you have a secure system, you can use paging in the query string but put all the information in your GET bodies; that way you don't have to worry about what can and can't go into your query strings, and no one glancing at URLs can tell which bodies are worth decrypting.
That is why this system of query strings is primitive and needs to be updated; but until that time, as a fellow developer, I will recommend whatever is best practice for now.
Upvotes: -2
Reputation: 418
According to Service Design Patterns (2011) by R. Daigneau, a service can leverage the Tolerant Reader pattern. This ensures that the service functions properly when some of the content in the messages or media types it receives is unknown, or when data structures vary.
Software is likely to be developed in small incremental pieces over time, and the system design evolves naturally along the way. Unfortunately, this introduces the possibility of breaking changes as data items are added to, changed in, or removed from the message by any stakeholder. The service must be forward compatible and, under certain conditions, accept content that it does not fully understand. Exceptions should only be thrown when the message content clearly violates business rules.
Complementing the other answers, we can imagine a service that accepts message data from both the request body and the query string for the same requested item representation. This keeps it compatible with evolving clients.
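A minimal sketch of what such a tolerant endpoint might look like, using Flask here for illustration; the /items route and the category field are invented, not taken from the book:

# Tolerant Reader style endpoint: accepts the same argument from either
# the query string or the JSON body, and ignores fields it doesn't know.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/items", methods=["GET", "POST"])
def items():
    # silent=True returns None instead of raising on a missing/bad body
    body = request.get_json(silent=True) or {}
    # Prefer the query string, fall back to the body
    category = request.args.get("category") or body.get("category")
    # Unknown query parameters and extra body fields are simply ignored,
    # so older and newer clients can keep calling the same endpoint
    return jsonify({"category": category})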
Upvotes: 0
Reputation: 454
The reasoning I've always used is that POST, PUT, and PATCH requests presumably have payloads containing information that customers might consider proprietary, so the best practice is to put those payloads in the request body and not in the URL parameters. It's very likely that somewhere, somehow, URL text is being logged by your web server, and you don't want customer data splattered as plain text into your log file system.
That potential exposure via the URL isn't an issue for GET or DELETE or any of the other REST operations.
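To make the exposure concrete, here is a made-up access log entry in the common log format most web servers write by default; note that the query string is recorded verbatim while the request body is not:

192.0.2.10 - - [12/Mar/2015:10:21:45 +0000] "PUT /api/customers?ssn=123-45-6789 HTTP/1.1" 200 417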
Upvotes: 20
Reputation: 11807
What are the best practices and considerations of choosing between 1 and 2 above?
Usually the content body is used for the data that is to be uploaded/downloaded to/from the server, and the query parameters are used to specify the exact data requested. For example, when you upload a file you specify the name, MIME type, etc. in the body, but when you fetch a list of files you can use the query parameters to filter the list by some property of the files. In general, the query parameters are a property of the query, not of the data.
Of course, this is not a strict rule. You can implement it in whatever way you find appropriate or that works for you.
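A quick sketch of that split, using Python's requests library; the /files endpoint and field names are invented for illustration:

import requests

BASE = "https://example.com/api"

# Upload: the data and its properties travel in the request body
requests.post(BASE + "/files",
              json={"name": "report.pdf",
                    "mime_type": "application/pdf",
                    "content": "<base64-encoded bytes>"})

# Fetch: properties of the query (filters, paging) go in the query
# string, which requests encodes as ?mime_type=...&page=...
requests.get(BASE + "/files",
             params={"mime_type": "application/pdf", "page": 1})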
You might also want to check the Wikipedia article about query string, especially the first two paragraphs.
Upvotes: 126
Reputation: 7167
I'll assume you are talking about POST/PUT requests. Semantically, the request body should contain the data you are posting or patching.
The query string, as part of the URL (a URI), is there to identify which resource you are posting or patching.
You asked for best practices; the semantics above are mine. Of course, using your own rules of thumb should work too, especially if the web framework you use abstracts this into parameters.
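For example, a sketch of that split in Python's requests library; the /articles resource, the slug parameter, and the field names are hypothetical:

import requests

# The query string identifies *which* resource is being patched;
# the body carries the data being patched.
requests.patch("https://example.com/api/articles",
               params={"slug": "rest-best-practices"},
               json={"title": "Updated title"})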
You must know:
Upvotes: 61
Reputation: 107072
The following are my rules of thumb...
When to use the body:
- When the arguments don't have a flat key:value structure
- If the values are not human readable, such as serialized binary data
- When you have a very large number of arguments
When to use the query string:
- If the arguments are such that you want to see them while debugging
- If you want to be able to call the API manually while developing the code, e.g. with curl
- If arguments are common across many web services
- If you're already sending a different content-type such as application/octet-stream
Notice you can mix and match - put the common ones, the ones that should be debuggable, in the query string, and throw all the rest in the json.
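A sketch of such a mix in Python's requests library; the /search endpoint and field names are invented for illustration:

import requests

# Debuggable, common arguments go in the query string, where they are
# easy to read in logs and to type into curl; the bulky, structured
# payload goes in the JSON body.
requests.post("https://example.com/api/search",
              params={"page": 2, "sort": "desc"},
              json={"filters": {"region": "Greece", "age": {"min": 30}}})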
Upvotes: 43