Reputation: 1
I browsed through XLA's documentation and cannot find a good way to express more complicated matrix operations like matrix_solve, matrix_triangular_solve, cholesky, and so on. How does XLA handle these? I know there is the catch-all "CustomCall" operation, but I wonder whether there are better ways.
Upvotes: 0
Views: 147
Reputation: 66
In general, the intention is for the actual computation to be specified in regular TensorFlow. Then you turn on XLA either via Just-In-Time compilation (https://www.tensorflow.org/performance/xla/jit), or Ahead-Of-Time compilation (.../xla/tfcompile).
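For reference, here is a minimal sketch of flipping on session-wide JIT, assuming the TF 1.x ConfigProto API described in the linked JIT page. The ops themselves are plain TensorFlow; anything XLA cannot compile simply falls back to the normal kernels.

```python
import tensorflow as tf

# Minimal sketch: enable XLA JIT for the whole session (TF 1.x-style API,
# as in the linked JIT docs). Ops XLA cannot handle fall back to regular kernels.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

a = tf.random_normal([4, 4])
b = tf.random_normal([4, 1])
x = tf.matrix_solve(a, b)  # ordinary TensorFlow op; XLA picks up what it can

with tf.Session(config=config) as sess:
    print(sess.run(x))
```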
In terms of underlying support for matrix solvers, note that in addition to typical dense matrix operations, XLA does support some control flow primitives. See https://www.tensorflow.org/performance/xla/operation_semantics, paying attention to the while-loop construct (#while) and to how to select outputs from different choices (#select); a sketch follows below.
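To make that concrete, here is a rough sketch of a lower-triangular solve (forward substitution) written only with dense ops plus tf.while_loop, which XLA lowers to its While construct; the one-hot update plays the role of a select. This is just an illustration under TF 1.x-style APIs, not a tuned solver, and I haven't measured how well XLA compiles it.

```python
import tensorflow as tf

def lower_triangular_solve(L, b):
    """Rough sketch: forward substitution for L x = b, with L lower-triangular.
    Uses only dense ops plus tf.while_loop; assumes b is a vector whose
    length is statically known."""
    n = b.get_shape().as_list()[0]       # static length assumed for simplicity
    x0 = tf.zeros_like(b)

    def cond(i, x):
        return i < n

    def body(i, x):
        row = tf.gather(L, i)            # i-th row of L
        bi = tf.gather(b, i)
        # entries j >= i of x are still zero, so the dot product covers only j < i
        xi = (bi - tf.reduce_sum(row * x)) / tf.gather(row, i)
        # write xi into slot i via a one-hot mask (a select-style update)
        x = x + xi * tf.one_hot(i, n, dtype=x.dtype)
        return i + 1, x

    _, x = tf.while_loop(cond, body, [tf.constant(0), x0])
    return x

# Example usage
L_mat = tf.constant([[2., 0., 0.], [1., 3., 0.], [4., 5., 6.]])
b_vec = tf.constant([2., 5., 32.])
x = lower_triangular_solve(L_mat, b_vec)
with tf.Session() as sess:
    print(sess.run(x))  # approximately [1., 1.333, 3.556]
```

Run under the JIT config above (or inside a jit scope), a graph like this should be compilable end-to-end by XLA, with While and Select showing up in the resulting HLO.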
I haven't worked out whether this will yield a great result, but at a high level it seems like the fundamental pieces are there.
(Sorry for the abbreviated links; I can't seem to post more than 2)
Upvotes: 1