Reputation: 185
Long story short: A\b works well, it just takes too much memory. What's the in-place option? Is there an in-place option?
I need to solve A\b a lot, and the number of allocations is giving me memory problems. I've tried gmres and similar iterative solvers, but I'm not getting solutions that are as accurate. I've tried playing with the relative tolerance, but my solutions still aren't working well. Note that A is a linear operator... A doesn't get too ill-conditioned, and it is fairly sparse.
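A minimal sketch of the pattern being described (the matrix and loop here are made up for illustration):

```julia
using LinearAlgebra

n = 100
A = rand(n, n)
b = rand(n)

# Each iteration allocates a fresh solution vector, and `\` also
# refactorizes A from scratch every time:
for _ in 1:100
    x = A \ b
end
```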
Upvotes: 0
Views: 643
Reputation: 19132
LinearSolve.jl is an interface over the linear solvers of the Julia ecosystem. Internally it uses the mutating forms (not just ldiv!, but also lu! etc., some of which are not compatible with sparse matrices) for performance wherever possible. It connects to multiple sparse solvers, so not just the default UMFPACK but also others like KLU and Krylov methods. It also does the other tricks that are required when you're solving a lot, like caching the symbolic factorization, which are not necessarily well-documented. It does all of this for you by default if you use the caching interface; what this entails in all of the sparse scenarios at maximum performance is basically best described by looking at the source code. So just use it, or look at the code.
Using LinearSolve.jl in this manner is fairly straightforward. For example, you define and solve a LinearProblem:
using LinearSolve
n = 4
A = rand(n,n)
b1 = rand(n); b2 = rand(n)
prob = LinearProblem(A, b1)
linsolve = init(prob)
sol1 = solve(linsolve)
#=
4-element Vector{Float64}:
-0.9247817429364165
-0.0972021708185121
0.6839050402960025
1.8385599677530706
=#
and then you can replace b:
linsolve = LinearSolve.set_b(sol1.cache,b2)
sol2 = solve(linsolve)
sol2.u
#=
4-element Vector{Float64}:
1.0321556637762768
0.49724400693338083
-1.1696540870182406
-0.4998342686003478
=#
or replace A and solve:
A2 = rand(n,n)
linsolve = LinearSolve.set_A(sol2.cache,A2)
sol3 = solve(linsolve)
sol3.u
#=
4-element Vector{Float64}:
-6.793605395935224
2.8673042300837466
1.1665136934977371
-0.4097250749016653
=#
and it will do the right thing: in solving those 3 equations it will have done only two factorizations (it only refactorizes after A is changed). Arguments like alias_A and alias_b can be passed to ensure zero extra memory is allocated (see the common solver options). With sparse matrices, this example would have performed only one symbolic factorization, 2 numerical factorizations, and 3 solves, provided A retained the same sparsity pattern. And so on; you get the point.
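The sparse case uses the same caching interface; a minimal sketch (the particular matrices here are made up for illustration, and A2 is built to keep the same sparsity pattern as A):

```julia
using LinearSolve, SparseArrays, LinearAlgebra

n = 100
A = sprand(n, n, 0.05) + I           # random sparse matrix with a nonzero diagonal
b1 = rand(n); b2 = rand(n)

prob = LinearProblem(A, b1)
linsolve = init(prob)
sol1 = solve(linsolve)               # symbolic + numeric factorization on first solve

# New b, same A: reuses the existing factorization, no refactorization
linsolve = LinearSolve.set_b(sol1.cache, b2)
sol2 = solve(linsolve)

# New A with the same sparsity pattern: the symbolic factorization is
# reused and only the numeric factorization is redone
A2 = A + 0.1 * sparse(I, n, n)
linsolve = LinearSolve.set_A(sol2.cache, A2)
sol3 = solve(linsolve)
```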
Note that the structs used for the interface are immutable, so within functions Julia will typically use interprocedural optimization and escape analysis to determine that they are not needed and eliminate them entirely from the generated code.
Upvotes: 3
Reputation: 185
Finally found it: ldiv! in the LinearAlgebra package. One would think that would show up more readily in a Google search.
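For reference, a minimal sketch of the in-place pattern with ldiv!: factorize once, then reuse the factorization and a preallocated output vector for every subsequent right-hand side.

```julia
using LinearAlgebra, Random

n = 4
A = rand(n, n)
b = rand(n)
x = similar(b)        # preallocated solution vector

F = lu(A)             # factor once (lu! can additionally reuse A's storage for dense A)
ldiv!(x, F, b)        # solves F \ b into x without allocating a new solution vector

# Reuse the same factorization for many right-hand sides:
for _ in 1:3
    rand!(b)
    ldiv!(x, F, b)
end
```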
Upvotes: 1