user3455395

Reputation: 161

Strategy for protecting client side web services

I have a WCF service called by $.ajax({ url: 'service.svc?a=1', dataType: "JSONP", ...}) on one of the pages of mysite.com (the stack is 100% client side). I want to limit usage of the service to mysite.com users only. Is it possible to do this, and if so, how?

The only idea I have for now is introducing a 'via' parameter, which would help me see through which site my service was accessed.

P.S. I'm really struggling to come up with a good title, please feel free to change it!

Upvotes: 0

Views: 194

Answers (5)

Zdenek

Reputation: 710

While you can't make a 100% bulletproof solution in principle unless you implement moderated user accounts, here is an approach I used to thwart hotlinking on my server. And it's not about referers.

My server implements ETag and 304 Not Modified headers for certain files that would otherwise just rely on a timed cache. This saves bandwidth while still letting me see what was accessed. Then, the user's route to the target is tracked on the server side. Only when the milestones are met is the user given a credit to access the expensive resource. Credits can accumulate, so simultaneous accesses still work. Besides this, I also set cookies in the user's browser along the route and test them later. Fake clients are notoriously bad at cookies, if they handle them at all.
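As an illustration only, here is a minimal sketch of what such a gate could look like in an ASP.NET handler; the milestone cookie name, the session key and the credit logic are all hypothetical, the point is just the shape of the checks:

using System.Web;
using System.Web.SessionState;

// Hypothetical gatekeeper for an expensive resource: it only serves the resource
// when the visitor has earned a "credit" by passing earlier milestones (tracked in
// session) and carries the breadcrumb cookie those milestone pages set.
public class ExpensiveResourceHandler : IHttpHandler, IRequiresSessionState
{
    private const string ETag = "\"v1-of-expensive-resource\"";

    public void ProcessRequest(HttpContext context)
    {
        // Honour conditional requests so well-behaved clients re-use their cache.
        if (context.Request.Headers["If-None-Match"] == ETag)
        {
            context.Response.StatusCode = 304;
            return;
        }

        var crumb = context.Request.Cookies["crumb"];            // set by the milestone pages
        int credits = (context.Session["Credits"] as int?) ?? 0; // earned along the route

        if (crumb == null || credits <= 0)
        {
            context.Response.StatusCode = 403; // no route, no cookie: likely a hotlinker
            return;
        }

        context.Session["Credits"] = credits - 1; // spend one credit
        context.Response.Cache.SetCacheability(HttpCacheability.Private);
        context.Response.Cache.SetETag(ETag);
        context.Response.ContentType = "application/json";
        context.Response.Write("{\"expensive\":\"payload\"}");
    }

    public bool IsReusable { get { return false; } }
}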

Remember, there is no perfect solution, but I believe that if you implement this, your server won't get overloaded. I should know: 99% of my requests are stolen, and this filtering works!

Upvotes: 1

jitendra

Reputation: 209

You can authenticate the user and generate a one-time token with a sliding expiration time. Put that token into the database together with the authenticated user, and then send that token with each service request to authorize the user.
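A minimal sketch of that idea; a ConcurrentDictionary stands in for the database table, and the method names and the 20-minute window are chosen just for illustration:

using System;
using System.Collections.Concurrent;

// Sketch of a one-time token with sliding expiration, keyed by the token value.
public static class TokenStore
{
    private class Entry { public string User; public DateTime Expires; }

    private static readonly TimeSpan SlidingWindow = TimeSpan.FromMinutes(20);
    private static readonly ConcurrentDictionary<string, Entry> Tokens =
        new ConcurrentDictionary<string, Entry>();

    // Call after the user has authenticated; hand the returned token to the client.
    public static string Issue(string userName)
    {
        string token = Guid.NewGuid().ToString("N");
        Tokens[token] = new Entry { User = userName, Expires = DateTime.UtcNow + SlidingWindow };
        return token;
    }

    // Call at the start of every service operation.
    public static bool TryAuthorize(string token, out string userName)
    {
        userName = null;
        Entry entry;
        if (token == null || !Tokens.TryGetValue(token, out entry) || entry.Expires < DateTime.UtcNow)
            return false;

        entry.Expires = DateTime.UtcNow + SlidingWindow; // sliding: each call pushes expiry forward
        userName = entry.User;
        return true;
    }
}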

Upvotes: 1

Dhaval Patel

Reputation: 7591

If you are hosting your application on IIS you can just add this to your web.config:

<system.webServer>
  <httpProtocol>
    <customHeaders>
      <add name="Access-Control-Allow-Origin" value="*" />
      <add name="Access-Control-Allow-Methods" value="GET, POST" />
    </customHeaders>
  </httpProtocol>
</system.webServer>

For Access-Control-Allow-Origin you can set your application's address instead of *. Note that browsers only honour a single value in this header, so to allow several domains (e.g. http://domain1.com and http://domain2.com) you have to pick the matching one per request; a sketch of that follows the snippet below.

<system.webServer>
  <httpProtocol>
    <customHeaders>
      <add name="Access-Control-Allow-Origin" value="http://domain1.com" />
      <add name="Access-Control-Allow-Methods" value="GET, POST" />
    </customHeaders>
  </httpProtocol>
</system.webServer>
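Here is a rough sketch of that per-request selection; the Global.asax placement and the whitelist are just one way to do it, not something from this answer or the guide below:

using System;
using System.Web;

// Sketch: choose the Access-Control-Allow-Origin value per request instead of
// listing several domains in web.config (browsers only honour a single value).
public class Global : HttpApplication
{
    private static readonly string[] AllowedOrigins =
        { "http://domain1.com", "http://domain2.com" };

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        string origin = Request.Headers["Origin"];
        if (origin != null && Array.IndexOf(AllowedOrigins, origin) >= 0)
        {
            // Echo back the caller's origin only when it is on the whitelist.
            Response.AppendHeader("Access-Control-Allow-Origin", origin);
        }
    }
}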

You can also reach the goal by writing a behavior which adds a specific header to each message. Here is a guide: http://blogs.msdn.com/b/carlosfigueira/archive/2012/05/15/implementing-cors-support-in-wcf.aspx

There is a constant CorsConstants.Origin in the sample; you can set your own domain there instead.
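Stripped down, such a behavior comes down to a dispatch message inspector that stamps the header on every reply. The sketch below shows only that core idea (the guide above has the complete, endpoint-behavior-based version); the hard-coded origin is just an illustrative value:

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// Bare-bones dispatch message inspector that adds the CORS header to every reply.
// It still has to be attached to the endpoint via an IEndpointBehavior, as in the guide.
public class AddCorsHeaderInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel,
                                      InstanceContext instanceContext)
    {
        return null; // nothing to do on the way in
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        HttpResponseMessageProperty httpResponse;
        object prop;
        if (reply.Properties.TryGetValue(HttpResponseMessageProperty.Name, out prop))
        {
            httpResponse = (HttpResponseMessageProperty)prop;
        }
        else
        {
            httpResponse = new HttpResponseMessageProperty();
            reply.Properties.Add(HttpResponseMessageProperty.Name, httpResponse);
        }

        httpResponse.Headers["Access-Control-Allow-Origin"] = "http://mysite.com";
    }
}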

To check whether the response has the required header, you can use Fiddler.

Upvotes: 1

Rachit Patel

Reputation: 862

We have developed a web-based tool and we consume data through a WCF service, but we do not call the WCF service directly, because it is a REST service and we have not implemented any security in the service itself. So we call the service a different way (a minimal sketch follows the list):

1) We created the required handler files in the same project.

2) These handler files call our WCF service.

3) As our tool is web based, we check the session id whenever the handler is called.

4) If the session id matches, we pass the data along; otherwise we show a session-expired message.
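Here is a minimal sketch of such a proxy handler; the handler name, the session key and the internal service URL are all hypothetical, only the session check and the forwarding matter:

using System.Net;
using System.Web;
using System.Web.SessionState;

// Hypothetical proxy handler (e.g. DataProxy.ashx) living in the same web project
// as the pages. It only forwards to the unsecured WCF REST service when the caller
// carries a valid session.
public class DataProxyHandler : IHttpHandler, IReadOnlySessionState
{
    public void ProcessRequest(HttpContext context)
    {
        if (context.Session["UserId"] == null)
        {
            // No (or expired) session: show the session-expired message.
            context.Response.StatusCode = 401;
            context.Response.Write("Your session has expired. Please log in again.");
            return;
        }

        // Session is valid: call the internal WCF REST service and relay its data.
        using (var client = new WebClient())
        {
            string data = client.DownloadString(
                "http://localhost/InternalService.svc/data?a=" + context.Request.QueryString["a"]);
            context.Response.ContentType = "application/json";
            context.Response.Write(data);
        }
    }

    public bool IsReusable { get { return false; } }
}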

Please let me know if you are still unclear about this concept.

Upvotes: 3

zeebonk

Reputation: 5034

Assuming you don't want to force your visitors to log in/authenticate first: you can't.

To be able to limit usage to a certain group of users (in your case, visitors of mysite.com), the users need to send something (a key, password, or token) to your service to identify themselves. If you store this token in your client-side app (e.g. in the JavaScript), people can just extract the token and use it however they like. So that isn't possible. Neither can you trust any data sent by a browser (e.g. your 'via' param), because it can always be faked with simple tools. Those are pretty much all the options you have.

The real question is: why would you want to secure this content if it is already publicly available through the website itself? One could easily build a simple scraper to get your content if they wanted to.

Upvotes: 2
