Reputation: 29483
How bad is it to use JavaScript (CoffeeScript) for implementing a heavy computational task? I am working on an optimization problem where an optimal solution cannot be computed quickly.
JavaScript was chosen in the first place because visualization is required, and rather than adding the overhead of communication between different processes, the decision was to implement everything in JavaScript.
I don't see a problem with that, especially when looking at the benchmarks game. But I keep getting asked: why on earth JavaScript?
I would argue as follows: it is an optimization problem, and NP-hard. It does not matter how much faster another language would be, since switching languages only changes the running time by a constant factor. Is that true?
Upvotes: 5
Views: 2491
Reputation: 707198
The only way to answer this question is to measure, and then evaluate those measurements, as every problem and application has different needs. There is no absolute answer that covers all situations.
If you implement your app/algorithm in JavaScript, profile that JavaScript to find the performance bottlenecks, optimize them as much as possible, and it's still too slow for your application, then you need a different approach.
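As a concrete starting point, here is a minimal timing sketch for finding out where the time actually goes; `evaluateCandidate()` is a hypothetical stand-in for whatever your inner loop does:

```javascript
// Accumulate time per phase to see which part dominates the total.
// evaluateCandidate() is a placeholder for your real hot path.
function profileSolve(candidates) {
  var evalTime = 0;
  var totalStart = Date.now();
  for (var i = 0; i < candidates.length; i++) {
    var t0 = Date.now();
    evaluateCandidate(candidates[i]);
    evalTime += Date.now() - t0;
  }
  var total = Date.now() - totalStart;
  console.log('total: ' + total + ' ms, evaluating: ' + evalTime +
              ' ms, everything else: ' + (total - evalTime) + ' ms');
}
```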
If, on the other hand, you already know that this is a massively time-consuming problem and that even in the fastest language possible it will still be a meaningful bottleneck to the performance of your application, then you already know that JavaScript is not the best choice, as it will seldom (if ever) be the fastest-executing option. In that case, you need to figure out whether communication between some sort of native-code implementation and the browser is feasible and will perform well enough, and go from there.
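That communication layer can be as simple as an HTTP round trip to a native solver; a sketch, where the `/solve` endpoint and the JSON shapes are assumptions:

```javascript
// Hypothetical: ship the problem instance to a native-code solver behind
// an HTTP endpoint, then visualize the returned solution in the browser.
function solveRemotely(problem, onSolved) {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/solve', true); // '/solve' is an assumed endpoint
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onSolved(JSON.parse(xhr.responseText));
    }
  };
  xhr.send(JSON.stringify(problem));
}
```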
As for NP-hard vs. a constant factor, I think you're fooling yourself. NP-hard means you first need to make the algorithm as smart as possible, so you've reduced the computation to the smallest/fastest possible problem. But even then, the constant factor can still be massively meaningful to your application. A constant factor could easily be 2x or even 10x, which would still be very meaningful even though it is constant. Imagine the NP-hard part took 20 seconds in native code and the constant factor for JavaScript was 10x: now you're looking at 20 sec vs. 200 sec. That's probably the difference between something that might work for a user and something that might not.
Upvotes: 1
Reputation: 308743
If JavaScript is working for you and meeting your requirements, what do you care what other people think?
One way to answer the question would be to benchmark it against an implementation in a "good" language (your terms, not mine) and see how much of a difference it makes.
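On the JavaScript side, that measurement can be a simple repetition harness; `runSolver()` and `instances` are placeholders for your code and test data. Run the same instances through the other implementation and compare:

```javascript
// Hypothetical harness: time several full runs and report the median,
// so a single garbage-collection pause doesn't skew the comparison.
function benchmark(runSolver, instances, repetitions) {
  var times = [];
  for (var r = 0; r < repetitions; r++) {
    var start = Date.now();
    for (var i = 0; i < instances.length; i++) {
      runSolver(instances[i]);
    }
    times.push(Date.now() - start);
  }
  times.sort(function (a, b) { return a - b; });
  return times[Math.floor(times.length / 2)]; // median, in milliseconds
}
```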
I don't buy the visualization argument. If your "good" language implementation were communicating with a front end, you might be able to have faster performance and visualization. You might be overstating the cost of communication to make yourself feel better.
I also don't like your last argument. JavaScript is single-threaded; another language might offer parallelism that JavaScript can't. The algorithm can make a huge difference; perhaps you've settled on one that is far from optimal.
I can tell you that no one in their right mind would consider using JavaScript for computationally intensive tasks like scientific computing. SO did have a reference to a JavaScript linear algebra library, but I doubt that it could be used for analysis of non-linear systems with millions of degrees of freedom. I don't know what kind of optimization problem you're dealing with.
With that said, I wonder if it's possible to treat this question fairly in a forum like this; it could lead to a lot of back and forth and argument.
Are you seeking a justification for your views or do you want alternatives? It's hard to tell.
Upvotes: 1
Reputation: 38676
Well, it's not exactly a constant factor; JavaScript is usually measured at some multiple (X times) slower than Java, and as you can see from the benchmarks shootout results, how much slower really depends on the algorithm. Those results are for V8, so it also depends on which browser you run it in: V8 is the top performer here, but JavaScript can run dramatically slower on other VMs, roughly 2x-10x.
If your problem can be subdivided across parallel processors, then the new Web Workers API can dramatically improve the performance of JavaScript. It's not restricted to a single thread anymore, and it can be really fast.
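A sketch of that pattern, assuming the search space splits into independent chunks; `splitSearchSpace()`, `searchChunk()`, the `solver.js` file name, and the message shapes are all assumptions:

```javascript
// main.js -- hypothetical: fan chunks of the search space out to workers
// and keep the best solution reported back.
var chunks = splitSearchSpace(problem, 4); // assumed helper
var best = null;
var pending = chunks.length;
chunks.forEach(function (chunk) {
  var worker = new Worker('solver.js'); // assumed worker script, see below
  worker.onmessage = function (e) {
    if (best === null || e.data.cost < best.cost) {
      best = e.data;
    }
    if (--pending === 0) {
      console.log('best solution found:', best);
    }
  };
  worker.postMessage(chunk);
});

// solver.js -- hypothetical worker: search its chunk, report the best found.
self.onmessage = function (e) {
  self.postMessage(searchChunk(e.data)); // searchChunk() is a placeholder
};
```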
Visualization can be done from the server or from the client. If you think lots of people are going to execute your program at once, you might not want to run it on the server: if one run eats up that much processing power, think what 1,000 of them would do to your server. With JavaScript you get a cheap parallel processor by federating the work across all your users' browsers. As far as visualization goes, it could also be done on the server and sent to the client as the computation progresses. It's just a matter of which you think is easier.
Upvotes: 1
Reputation: 120486
Brendan Eich (Mozilla's CTO and creator of JavaScript) seems to think so.
http://brendaneich.com/2011/09/capitoljs-rivertrail/
I took time away from the Mozilla all-hands last week to help out on-stage at the Intel Developer Forum with the introduction of RiverTrail, Intel’s technology demonstrator for Parallel JS — JavaScript utilizing multicore (CPU) and ultimately graphics (GPU) parallel processing power, without shared memory threads (which suck).
See especially his demo of JS creating a scene graph:
Here is my screencast of the demo. Alas, since RiverTrail currently targets the CPU and its short vector unit (SSE4), and my screencast software uses the same parallel hardware, the frame rate is not what it should be. But not to worry, we’re working on GPU targeting too.
At CapitolJS and without ScreenFlow running, I saw frame rates above 35 for the Parallel demo, compared to 3 or 2 for Sequential.
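For a sense of the programming model the post describes, the RiverTrail prototype exposed a ParallelArray type whose side-effect-free elemental functions the runtime could execute in parallel. A minimal illustrative sketch in that style (the prototype's API, which was never standardized):

```javascript
// Illustrative RiverTrail-style code: because the elemental function has
// no side effects, the runtime may apply it to many elements in parallel.
var pa = new ParallelArray([1, 2, 3, 4, 5, 6, 7, 8]);
var squared = pa.map(function (x) { return x * x; });
console.log(squared.get(0)); // 1
```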
Upvotes: 2