Reputation: 23
For example, with a CPU of speed 1 I can solve X amount of a problem with a time complexity of log N.
If I speed up the CPU 100 times, what would X be?
How can I find this out?
Is it the same as decreasing the input size by 100?
I need an explanation, not only answers.
Upvotes: 1
Views: 72
Reputation: 1
The original problem, as defined, does not ask for the [TIME]-domain complexity of an algorithm ( the System-under-Test ), because both the [TIME]-domain and the [SPACE]-domain complexities describe how the SuT's processing requirements scale with the problem size - with N, or better, with f( N ).
The O/P has asked a different question:
"... I can solve X amount of a problem ... what would be the X?"
which is exactly the shift of paradigm that was expressed by Dr. John Gustafson in his law for scaling a SuT's performance after some resource "improvement".
The execution workload of the whole task before the ( SuT ) improvement of the resources of the system is denoted W. It includes the execution workload of the part that does not benefit from the improvement of the resources and the execution workload of the part that does. The fraction of the execution workload that would benefit from the improvement of the resources is denoted by p. The fraction concerning the part that would not benefit from it is therefore 1 − p.
Then
W = ( 1 − p ) W + p W
It is the execution of the ( SuT ) part that benefits from the improvement of the resources that is sped up by a factor s after the improvement of the ( SuT ) resources. Consequently, the execution workload of the part that does not benefit from it remains the same.
The theoretical execution workload W( s ) of the whole task after the ( SuT ) improvement of the resources is then
W( s ) = ( 1 − p ) W + s p W
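A minimal Python sketch of the workload-scaling formula above; W, p and s follow the notation of the text, and the sample values ( W = 1.0, p = 1.0, s = 100 ) are only an illustration:

    def scaled_workload(W, p, s):
        """Theoretical workload W( s ) that fits in the original time budget
        after the fraction p of the work is sped up by a factor s."""
        return (1 - p) * W + s * p * W

    # If the whole task benefits from the 100x faster CPU ( p = 1 ),
    # the workload that fits in the same time budget grows 100x:
    print(scaled_workload(W=1.0, p=1.0, s=100))   # -> 100.0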
So your "improved" X is scaled the same way to 100 X, irrespective of the [TIME]-domain complexity f( N ). An attempt to instead reduce N by a factor of 100 ( MUL/DIV ) will, on one side, reduce the processing duration from a time ~ T_1_a( 100 X ) down to a time ~ T_1_b( X ), which in the case of the [TIME]-domain complexity posted above, f( N ) = log( N ), would yield a time saving of:
T_1_b = T_1_a - 2.0
i.e. 2 units of time ( since log10( 100 X ) - log10( X ) = log10( 100 ) = 2 ), yet without knowing anything about the performance of the "improved" resources.
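A quick numeric check of that saving, under the illustrative assumption that the run time is exactly log10( N ) time units ( the problem size X below is made up ):

    import math

    X = 10_000                      # hypothetical problem size

    t_1_a = math.log10(100 * X)     # time for the original, 100x larger input
    t_1_b = math.log10(X)           # time after shrinking the input by 100x

    print(t_1_a - t_1_b)            # -> 2.0, i.e. log10( 100 )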
Use Gustafson's Law to scale the workload, measured by a known problem size, and derive the resized workload that will get processed within the same [TIME]-domain budget T_1 after the 100x improvement of the speed of the SuT "resources". That larger input base ( SuT workload ) of X --> k X will still get processed in the same time T_1 == T_100, right because the SuT ( CPU ) was improved those 100 times:
T_1( X ) ~ log( X ) == log( k X ) ~ T_100( k X )
In more complex ( and more realistic ) setups, where only some fraction of the SuT resources gets improved { CPU | CACHE | RAM } while the fileIO remains the same "slow", the fileIO processing will remain the blocking bottleneck for fileIO-based workloads, and thus no such increase of the Gustafson's-Law-scaled workload W( 100 ) will be achievable...
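A small sketch of that effect, using the same formula as above; the fractions p = 0.5 and p = 0.1 are invented values purely for illustration of a fileIO-bound workload:

    def scaled_workload(W, p, s):
        # Only a fraction p of the work benefits from the 100x faster
        # { CPU | CACHE | RAM }; the fileIO part does not.
        return (1 - p) * W + s * p * W

    print(scaled_workload(W=1.0, p=1.0, s=100))   # -> 100.0 ( everything speeds up )
    print(scaled_workload(W=1.0, p=0.5, s=100))   # ->  50.5 ( fileIO-bound: far less )
    print(scaled_workload(W=1.0, p=0.1, s=100))   # ->  10.9 ( mostly fileIO: hardly helps )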
Upvotes: 0
Reputation: 14399
Time complexity has no relation to the speed of the machine on which the problem is being solved. Complexity is an attribute of the algorithm, not of the problem or of the CPU.
A faster CPU will simply solve the problem in less time. However, the faster CPU will solve any problem whatsoever in less time.
If you have an algorithm that solves a problem with time complexity log N and you pass it a problem set of size X, the time taken by a CPU will be proportional to log X.
Let's say that the time taken on one CPU is equal to ( a * log X ). On a faster CPU, the time taken will again be proportional to log X; however, on this CPU it will be something like ( b * log X ). And since the second CPU is faster than the first CPU, b will be less than a.
Upvotes: 1