Reputation: 3306
The Security and Design guidelines go to great lengths outlining various methods for making it more difficult for an attacker to compromise an in-app billing implementation.
They especially note how easy it is to reverse-engineer a .apk
file, even one obfuscated with ProGuard, and so they even recommend modifying all of the sample application code, especially the "known entry points and exit points".
What I find missing is any mention of the consequences of wrapping the verification logic in a single method, like the static Security.verify()
which returns a boolean
: it is good design practice (it reduces code duplication, is reusable, easier to debug, self-documenting, etc.), but now all an attacker needs to do is identify that one method and make it always return true
... So regardless of how many times I call it, delayed or not, randomly or not, it simply doesn't matter.
On the other hand, Java has no macros like C/C++, which would let me avoid source-code duplication without funnelling everything through the single exit point of a verify()
function (a sketch of the pattern I mean is below).
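To make the concern concrete, here is a minimal sketch of that kind of centralized check. It is modeled loosely on the in-app billing sample's Security class, not copied from it, so the details (method names, algorithm, encoding) are only illustrative:

    import java.security.PublicKey;
    import java.security.Signature;

    import android.util.Base64;

    public final class Security {

        // Single choke point: every purchase in the app is validated here.
        public static boolean verify(PublicKey publicKey, String signedData,
                                     String signature) {
            try {
                Signature sig = Signature.getInstance("SHA1withRSA");
                sig.initVerify(publicKey);
                sig.update(signedData.getBytes());
                return sig.verify(Base64.decode(signature, Base64.DEFAULT));
            } catch (Exception e) {
                // Any failure is treated as an invalid purchase.
                return false;
            }
        }
    }

An attacker who locates this one method in the decompiled .apk only has to patch its body to return true, and every call site, however many there are and however cleverly they are scheduled, is defeated at once.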
So my questions:
Is there an inherent tension between well-known software engineering/coding practices and designing for so-called security (at least in the context of Java/Android/secure transactions)?
What can be done to mitigate the side effects of "design for security", which can feel like shooting oneself in the foot by over-complicating software that could otherwise have been simpler, more maintainable and easier to debug?
Can you recommend good sources for further studying this subject?
Upvotes: 14
Views: 837
Reputation: 771
However tough the obfuscation methods are, there is always a way to reverse-engineer them. If your software becomes popular enough in the hacker community, eventually someone will try to reverse-engineer it.
Obfuscation is just a way of making the reverse-engineering process harder.
So is packing. Many packing methods are available, but so are ways to reverse-engineer them.
You can check www.tuts4you.com to see how many guides are available.
I am not an expert like many others here; this is just my experience from learning reverse engineering. Recently I have also seen a lot of guides on reverse-engineering Android applications. I even saw a Reversing Android challenge in a CTF, nullc0n if I remember correctly (not sure). If you want, I can look up the site and mention it.
Upvotes: 3
Reputation: 52936
As usual, it's a tradeoff. Making your code harder to reverse-engineer/crack involves making it less readable and harder to maintain. You decide how far to go, based on your intended user base, your own skills in the area, time/cost, etc. This is not specific to Android. Watch this Google I/O presentation for various stages of obfuscating and making your code tamper resistant. Then decide how far you are willing to go for your own apps.
On the other hand, you don't have to obfuscate/harden all of your code, just the part that deals with licensing. That is usually a very small part of the whole codebase and doesn't really need to change that often, so you could probably live with it being hard to follow/maintain. Just keep some notes on how it works, so you can remind yourself 2 years later :).
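As one illustration of the tamper-resistance idea (my own sketch, not something taken from the presentation): instead of funnelling the license check through a single method that returns a boolean, make the check produce a value the program actually needs, for example a key used to unlock premium content that was encrypted with a key derived from the genuine purchase payload. A hypothetical helper could look like this:

    import java.security.MessageDigest;

    final class UnlockHelper {

        // Hypothetical: derive the key for premium content from the signed
        // purchase payload itself. Content is assumed to have been encrypted
        // with the key derived from the genuine payload, so forged purchase
        // data yields a different digest and the content simply fails to
        // unlock. There is no single boolean an attacker can hard-code.
        static byte[] deriveContentKey(byte[] verifiedSignedData) throws Exception {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            return digest.digest(verifiedSignedData);
        }
    }

The price is exactly the one the question complains about: the code is less obvious, harder to unit-test and harder to explain to the next maintainer, which is why it is worth confining tricks like this to the small licensing corner of the codebase.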
Upvotes: 7
Reputation: 10662
The counter-productivity you are describing is the tip of the iceberg... No software is 100% bug-free on release, so what do you do when users start reporting problems?
How do you troubleshoot or debug field problems after you have disabled logging, stack traces and all the other information that helps reverse-engineers but also helps the legitimate development team?
Upvotes: 5