Reputation: 1693
I have an Android project that has grown over time, and the Gradle build times have grown with it.
It was bearable while the app was under the 65k method limit - around 14s. Now with multidex it takes 36s.
So my question is - are there any ways to "turn off" parts of the code that are not being used so it's back under the 65k limit?
For example, turn off the Amazon S3 SDK, which is brought in via Gradle and has thousands of methods.
I know you can strip code with proguard, but that just bumps up the build time even higher.
I'm happy with it crashing at runtime when I open the parts that use it, just want to make testing quicker.
At the moment when I remove amazon from gradle imports, I obviously get this:
Error:(24, 26) error: package com.amazonaws.auth does not exist
Is there a way to somehow ignore the error? I know that Picasso has a runtime check to see if you have OkHttp, and if you don't, it uses standard networking:
static Downloader createDefaultDownloader(Context context) {
    if (SDK_INT >= GINGERBREAD) {
        try {
            Class.forName("com.squareup.okhttp.OkHttpClient");
            return OkHttpLoaderCreator.create(context);
        } catch (ClassNotFoundException ignored) {
        }
    }
    return new UrlConnectionDownloader(context);
}
Is there something like this I could do? Or any other way?
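For example, could I wrap my own calls in the same kind of guard? Something roughly like this (the S3 class name is just an illustration of the idea):

```java
// Sketch of the same Class.forName trick applied to the S3 SDK. If the Amazon
// dependency were removed from Gradle, the lookup would fail and the app could
// skip the feature instead of crashing at class-load time.
public class S3Guard {

    static boolean s3OnClasspath() {
        try {
            Class.forName("com.amazonaws.services.s3.AmazonS3Client");
            return true;
        } catch (ClassNotFoundException ignored) {
            return false;
        }
    }

    public static void main(String[] args) {
        // With the dependency stripped, this prints false rather than crashing.
        System.out.println("S3 available: " + s3OnClasspath());
    }
}
```

Though as far as I can tell, that only helps at runtime - it doesn't make the compile errors from the missing package go away.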
Upvotes: 4
Views: 572
Reputation: 3620
It is possible to specify compile-time dependencies for each build type independently. I use this method to include "production-only" dependencies in only the release builds, reducing the method count for debug builds.
For example, I only include Crashlytics in release builds. So in build.gradle I include the dependency for only my release build (and beta and alpha):
releaseCompile('com.crashlytics.sdk.android:crashlytics:2.5.5@aar') {
    transitive = true;
}
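For completeness, with the beta and alpha build types mentioned above, the full dependency block might look like this (a sketch; the artifact coordinates are simply repeated from the release line):

```groovy
dependencies {
    // Only the non-debug build types pull in Crashlytics
    releaseCompile('com.crashlytics.sdk.android:crashlytics:2.5.5@aar') {
        transitive = true
    }
    betaCompile('com.crashlytics.sdk.android:crashlytics:2.5.5@aar') {
        transitive = true
    }
    alphaCompile('com.crashlytics.sdk.android:crashlytics:2.5.5@aar') {
        transitive = true
    }
}
```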
Then I abstract the functionality of Crashlytics into a class called CrashReportingService. In my debug source code, this class does nothing:
/app/src/debug/java/com/example/services/CrashReportingService.java:

public class CrashReportingService {

    public static void initialise(Context context) {
    }

    public static void logException(Throwable throwable) {
    }
}
And I flesh out the implementation in my release source code:
/app/src/release/java/com/example/services/CrashReportingService.java:

public class CrashReportingService {

    public static void initialise(Context context) {
        Fabric.with(context, new Crashlytics());
    }

    public static void logException(Throwable throwable) {
        Crashlytics.getInstance().core.logException(throwable);
    }
}
Crashlytics is now only included in release builds and there is no reference to Crashlytics in my debug builds. Back under 65k methods, hooray!
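The shared code always calls the facade and never names Crashlytics directly. Here is a minimal plain-Java sketch of the idea (names hypothetical, both pieces shown in one file for illustration; in the real project the two variants live in the debug and release source sets):

```java
// Plain-Java sketch of the facade pattern. The debug variant below is a
// no-op, so nothing in it references any Crashlytics class.
class CrashReportingService {

    public static void initialise(Object context) {
        // intentionally empty in debug builds
    }

    public static void logException(Throwable throwable) {
        // intentionally empty in debug builds
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        CrashReportingService.initialise(null);
        try {
            throw new IllegalStateException("simulated failure");
        } catch (IllegalStateException e) {
            // Calling code is identical in debug and release builds
            CrashReportingService.logException(e);
        }
        System.out.println("shared code ran without touching Crashlytics");
    }
}
```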
Upvotes: 1
Reputation: 29168
I have another option. It also helps to speed things up, though not in the way you asked for: use the Gradle daemon.
If you use the new Gradle build system with Android (or Android Studio) you might have realized that even the simplest Gradle call (e.g. gradle projects or gradle tasks) is pretty slow. On my computer it took around eight seconds for that kind of Gradle call. You can decrease this startup time (on my computer down to two seconds) if you tell Gradle to use a daemon to build. Just create a file named gradle.properties in the following directory:

/home/<username>/.gradle/ (Linux)
/Users/<username>/.gradle/ (Mac)
C:\Users\<username>\.gradle (Windows)

and add this line to it:

org.gradle.daemon=true
From now on Gradle will use a daemon to build, whether you are using Gradle from the command line or building in Android Studio. You could also place the gradle.properties file in the root directory of your project and commit it to your SCM system, but then you would have to do this for every project (if you want to use the daemon in every project).
Note: If you don’t build anything with Gradle for some time (currently 3 hours), it will stop the daemon, so that you will experience a long start-up time at the next build.
The Gradle Daemon is a long lived build process. In between builds it waits idly for the next build. This has the obvious benefit of only requiring Gradle to be loaded into memory once for multiple builds, as opposed to once for each build. This in itself is a significant performance optimization, but that's not where it stops.
A significant part of the story for modern JVM performance is runtime code optimization. For example, HotSpot (the JVM implementation provided by Oracle and used as the basis of OpenJDK) applies optimization to code while it is running. The optimization is progressive and not instantaneous. That is, the code is progressively optimized during execution which means that subsequent builds can be faster purely due to this optimization process.
Experiments with HotSpot have shown that it takes somewhere between 5 and 10 builds for optimization to stabilize. The difference in perceived build time between the first build and the 10th for a Daemon can be quite dramatic.
The Daemon also allows more effective in-memory caching across builds. For example, the classes needed by the build (e.g. plugins, build scripts) can be held in memory between builds. Similarly, Gradle can maintain in-memory caches of build data such as the hashes of task inputs and outputs, used for incremental building.
Upvotes: 0
Reputation: 117
The only realistic way of doing this (that I'm aware of) is to refactor your project so that your packages are split into separate modules. Each module then has its own Gradle build file, and a module only has to be recompiled when it is touched. You could, for instance, have a data access module and a UI module - that seems like a pretty natural split.
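As a sketch, the split might look like this in settings.gradle and the app module's build file (module names are hypothetical):

```groovy
// settings.gradle - declare the modules
include ':app', ':data', ':ui'

// app/build.gradle - the app depends on the library modules; Gradle only
// recompiles a module whose sources (or dependencies) actually changed
dependencies {
    compile project(':data')
    compile project(':ui')
}
```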
I realize that this is a disappointing answer, but the issue you're complaining about is that your build dependencies pull in all those extra, unnecessary libraries and methods - not that your code uses them.
The only other tip I can give you is that the Google Play Services kit has tens of thousands of method calls. If you depend on only the individual pieces you actually use, you stand a much better chance of staying beneath the 65k limit.
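For example, instead of the monolithic artifact you can list just the split artifacts you need (version number illustrative for that era of Play Services):

```groovy
dependencies {
    // Instead of everything:
    // compile 'com.google.android.gms:play-services:8.4.0'

    // ...depend only on the pieces you actually use:
    compile 'com.google.android.gms:play-services-maps:8.4.0'
    compile 'com.google.android.gms:play-services-location:8.4.0'
}
```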
Upvotes: 3