Reputation: 139
In the inner product layer, I need to compute (top_diff * bottom_data) .* (2 * weight). That is, first I calculate the matrix product result = top_diff * bottom_data with caffe_cpu_gemm, and then take the element-wise product between weight and result.
Here is the relevant code:
const Dtype* weight = this->blobs_[0]->cpu_data();
if (this->param_propagate_down_[0]) {
  const Dtype* top_diff = top[0]->cpu_diff();
  const Dtype* bottom_data = bottom[0]->cpu_data();
  caffe_cpu_gemm<Dtype>(CblasTrans, CblasNoTrans, N_, K_, M_, (Dtype)1.,
      top_diff, bottom_data, (Dtype)1., this->blobs_[0]->mutable_cpu_diff());
}
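To make the computation concrete, the gemm call above (with CblasTrans on top_diff) computes top_diff^T * bottom_data, an N_ x K_ matrix with the same shape as the weight blob. The sketch below is a hypothetical plain-loop helper, not Caffe code: it performs that product and then applies the extra .* (2 * weight) factor described above. Note that the real Caffe call accumulates into the existing diff (beta = 1), while this sketch overwrites for clarity.

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch (not Caffe code): weight_diff = (top_diff^T * bottom_data) .* (2 * weight)
// top_diff:    M x N (row-major)
// bottom_data: M x K (row-major)
// weight:      N x K (row-major)
// weight_diff: N x K (row-major, overwritten)
void weight_grad_sketch(int M, int N, int K,
                        const std::vector<float>& top_diff,
                        const std::vector<float>& bottom_data,
                        const std::vector<float>& weight,
                        std::vector<float>& weight_diff) {
  for (int n = 0; n < N; ++n) {
    for (int k = 0; k < K; ++k) {
      // (top_diff^T * bottom_data)(n, k) = sum over the batch dimension M
      float acc = 0.f;
      for (int m = 0; m < M; ++m)
        acc += top_diff[m * N + n] * bottom_data[m * K + k];
      // Element-wise scaling by 2 * weight, as described above.
      weight_diff[n * K + k] = acc * 2.f * weight[n * K + k];
    }
  }
}
```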
For more understanding, I checked math_functions.cpp. It is implemented as follows:
template<>
void caffe_cpu_gemm<float>(const CBLAS_TRANSPOSE TransA,
    const CBLAS_TRANSPOSE TransB, const int M, const int N, const int K,
    const float alpha, const float* A, const float* B, const float beta,
    float* C) {
  int lda = (TransA == CblasNoTrans) ? K : M;
  int ldb = (TransB == CblasNoTrans) ? N : K;
  cblas_sgemm(CblasRowMajor, TransA, TransB, M, N, K, alpha, A, lda, B,
      ldb, beta, C, N);
}
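For reference, what this cblas_sgemm call computes is C = alpha * op(A) * op(B) + beta * C, where op(X) is X or X^T depending on the transpose flags, and lda/ldb are the stored column counts of the tightly packed row-major matrices. The plain-loop stand-in below (a sketch, not Caffe's or BLAS's implementation) illustrates those semantics:

```cpp
#include <cassert>

// Reference sketch of the row-major gemm semantics:
// C = alpha * op(A) * op(B) + beta * C, with op(X) = X or X^T.
// After op, A is M x K, B is K x N, C is M x N; all tightly packed row-major.
void sgemm_ref(bool transA, bool transB, int M, int N, int K,
               float alpha, const float* A, const float* B,
               float beta, float* C) {
  for (int i = 0; i < M; ++i) {
    for (int j = 0; j < N; ++j) {
      float acc = 0.f;
      for (int k = 0; k < K; ++k) {
        // When transposed, A is stored K x M and B is stored N x K.
        float a = transA ? A[k * M + i] : A[i * K + k];
        float b = transB ? B[j * K + k] : B[k * N + j];
        acc += a * b;
      }
      C[i * N + j] = alpha * acc + beta * C[i * N + j];
    }
  }
}
```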
I think I should perform the multiplication result = top_diff * bottom_data in caffe_cpu_gemm() and after that take the element-wise product with weight. How should I do this?
Many thanks! Any advice would be appreciated!
Upvotes: 1
Views: 891
Reputation: 2179
If you just want to perform an element-wise product between two matrices, you can use the following function to multiply them on the CPU:
void caffe_mul<float>(const int n, const float* a, const float* b, float* y)
If you want to do the same operation on a GPU, use this template
void caffe_gpu_mul<float>(const int N, const float* a, const float* b, float* y)
a and b are your matrices and y will contain the final result. N is the total number of elements in each matrix.
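The semantics of these functions can be sketched with a plain loop (this is a stand-in to show what the call computes, not Caffe's actual implementation):

```cpp
#include <cassert>

// Element-wise product, mirroring the semantics of caffe_mul:
// y[i] = a[i] * b[i] for every i in [0, n).
void elementwise_mul(const int n, const float* a, const float* b, float* y) {
  for (int i = 0; i < n; ++i)
    y[i] = a[i] * b[i];
}
```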
You can also use the 'Eltwise' layer, which already does this.
Upvotes: 1