Reputation: 681
I was just wondering about the compiler speed of the Crystal programming language. It feels relatively slow:
➜ ~/Code/crystal/crystal_scheduler (master ✘)✹✭ ᐅ time crystal build --release src/crystal_scheduler.cr
34.64s user 1.10s system 93% cpu 38.174 total
➜ ~/Code/crystal/crystal_scheduler (master ✘)✹✭ ᐅ time crystal build --release src/crystal_scheduler.cr
36.11s user 0.83s system 93% cpu 39.465 total
➜ ~/Code/crystal/crystal_scheduler (master ✘)✹✭ ᐅ time crystal build src/crystal_scheduler.cr
8.09s user 0.89s system 181% cpu 4.956 total
The code is relatively small: two shards, two classes. Compared to the compile times I'm used to from Java, this feels long.
I get that compiling with --release is slower, but the Crystal GitBook states:
The reason for this is that performance without full optimizations is still pretty good and provides fast compile times, so you can use the crystal command almost as if it were an interpreter.
But 8s feels a bit slow to claim that you can use it "almost as if it were an interpreter".
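(For reference, the "interpreter-like" workflow presumably means something like crystal run, which compiles without optimizations and runs the program in one step:)
time crystal run src/crystal_scheduler.cr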
I was just wondering: a) is my compilation especially slow, or are these compile times normal? And b) how does this compare to other languages in your experience?
Compilation Stats:
Parse: 00:00:00.0007470 ( 0.25MB)
Semantic (top level): 00:00:00.3968920 ( 36.08MB)
Semantic (new): 00:00:00.0019210 ( 44.08MB)
Semantic (type declarations): 00:00:00.0355760 ( 44.08MB)
Semantic (abstract def check): 00:00:00.0012690 ( 44.08MB)
Semantic (ivars initializers): 00:00:00.0094640 ( 44.08MB)
Semantic (cvars initializers): 00:00:00.0394420 ( 44.08MB)
Semantic (main): 00:00:00.6025030 ( 108.14MB)
Semantic (cleanup): 00:00:00.0012750 ( 108.14MB)
Semantic (recursive struct check): 00:00:00.0018930 ( 108.14MB)
Codegen (crystal): 00:00:00.7354530 ( 140.27MB)
Codegen (bc+obj): 00:00:33.2533520 ( 140.27MB)
Codegen (linking): 00:00:00.3647440 ( 140.27MB)
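(The per-phase breakdown above is what the --stats flag prints, e.g.:)
crystal build --stats --release src/crystal_scheduler.cr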
My System:
➜ crystal -v
Crystal 0.22.0 (2017-04-20) LLVM 4.0.0
Hardware Overview:
Model Name: MacBook Pro
Model Identifier: MacBookPro11,1
Processor Name: Intel Core i5
Processor Speed: 2.8 GHz
Number of Processors: 1
Total Number of Cores: 2
L2 Cache (per Core): 256 KB
L3 Cache: 3 MB
Memory: 16 GB
Upvotes: 6
Views: 2323
Reputation:
Use crystal build --no-debug my_code.cr to speed up compilation a bit.
See: Why Crystal use too much memory? and Crystal for large scale programs
Also, I suggest adding the --progress flag. It prints more information about the compilation process, showing progress for every file generated during the codegen phase.
>>> crystal build -s --no-debug -p my_code.cr
Parse: 00:00:00.0008560 ( 0.34MB)
Semantic (top level): 00:00:00.2588280 ( 27.91MB)
Semantic (new): 00:00:00.0018450 ( 35.91MB)
Semantic (type declarations): 00:00:00.0263890 ( 35.91MB)
Semantic (abstract def check): 00:00:00.0015270 ( 35.91MB)
Semantic (ivars initializers): 00:00:00.0018980 ( 35.91MB)
Semantic (cvars initializers): 00:00:00.0158470 ( 35.91MB)
Semantic (main): 00:00:00.4168150 ( 60.10MB)
Semantic (cleanup): 00:00:00.0010650 ( 60.10MB)
Semantic (recursive struct check): 00:00:00.0008110 ( 60.10MB)
Codegen (crystal): 00:00:00.3381910 ( 68.10MB)
[12/13] [67/215] Codegen (bc+obj)
        ~~~~~~~~
        (1)
(1) Number of files processed; you can find these files in the Crystal cache directory.
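If you want to inspect that cache: depending on your Crystal version it lives under ~/.cache/crystal or ~/.crystal, and its location can be overridden with the CRYSTAL_CACHE_DIR environment variable. Newer Crystal releases can also print it for you:
crystal env    # prints CRYSTAL_CACHE_DIR among other compiler settings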
BTW, I compiled a whole Amber project with 10 shards in less than 30 seconds on an old Intel Celeron PC. Don't use the --release flag during development, only for production builds (LLVM takes a lot of time doing its optimizations). See the sketch below for keeping the two apart.
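A minimal sketch of the two workflows (assuming an entry point at src/app.cr; adjust paths to your project):
# fast iteration build while developing: no LLVM optimizations, no debug info
crystal build src/app.cr -o app --no-debug --progress
# optimized build, only when shipping to production
crystal build src/app.cr -o app --release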
>>> cat /proc/cpuinfo | grep "model name"
model name : Intel(R) Celeron(R) 2957U @ 1.40GHz
>>> shards list
Shards installed:
* amber (0.1.3)
* radix (0.3.8)
* kilt (0.1.0)
* slang (1.6.1)
* redis (1.8.0)
* quartz-mailer (0.1.0)
* kilt (0.1.0)
* smtp (0.1.0)
* granite_orm (0.6.2)
* kemalyst-validators (0.2.0)
* db (0.4.2)
* sqlite3 (0.8.2)
* db (0.4.2)
>>> time crystal build -p src/app.cr -o app --no-debug
real 0m24.320s
user 0m26.700s
sys 0m1.437s
Running the exact same build a second time is much faster, because unchanged object files are reused from the cache:
>>> time crystal build -p src/app.cr -o app --no-debug
real 0m6.225s
user 0m5.727s
sys 0m0.970s
Upvotes: 10