Reputation: 121
Per Taking Solr To Production (https://lucene.apache.org/solr/guide/6_6/taking-solr-to-production.html), "Running Solr as root is not recommended for security reasons, and the control script start command will refuse to do so."
The persistent volume is provisioned successfully. However, when we claim it and mount it into the Pod's folder structure, the mounted folder is writable only by root. As a result, the SolrCloud microservice cannot store its configuration files, core/collection data, or backups on the persistent volume.
How should we address this permissions issue in Kubernetes, given that the Solr start script refuses to run as root?
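For reference, the approach we have been trying is the Kubernetes securityContext with runAsUser and fsGroup, which (for volume plugins that honour fsGroup) should make the kubelet set the mounted volume's group to that GID and mark it group-writable. A minimal sketch of the idea, with illustrative names (our full manifests are further below):

# Illustrative sketch only: pod-level securityContext so the mounted volume
# becomes group-writable by the non-root solr user (UID/GID 8983 in the official image).
apiVersion: v1
kind: Pod
metadata:
  name: solr-fsgroup-example        # hypothetical name
spec:
  securityContext:
    runAsUser: 8983                 # run the container process as the solr user
    fsGroup: 8983                   # kubelet chgrps supported volumes to this GID
  containers:
  - name: solr
    image: solr:7.0.0
    volumeMounts:
    - name: datadir
      mountPath: /opt/solr/server/data
  volumes:
  - name: datadir
    persistentVolumeClaim:
      claimName: solr-datadir       # hypothetical PVC name

Even with runAsUser and fsGroup set (see the StatefulSet below), the mounted directory still ends up writable only by root.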
Here is the Kubernetes client and server version information:
C:\Users\xxxx>kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.8+coreos.0", GitCommit:"fc34f797fe56c4ab78bdacc29f89a33ad8662f8c", GitTreeState:"clean", BuildDate:"2017-08-05T00:01:34Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
Please see the YAML, Dockerfile, and start script below.
YAML file:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: "solrclouddemo1"
  namespace: "com-xxx-cppseed-dev"
  labels:
    app: "solrclouddemo1"
    version: "1.0.0"
data:
  config-env: dev
  zookeeper-hosts: xxxx.com:2181
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "solrclouddemo1"
  namespace: "com-xxx-cppseed-dev"
  labels:
    app: "solrclouddemo1"
    version: "1.0.0"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "solrclouddemo1"
      version: "1.0.0"
  template:
    metadata:
      labels:
        app: "solrclouddemo1"
        version: "1.0.0"
        build: "252"
        developer: "XXX"
      annotations:
        prometheus.io/scrape.ne: 'true'
        prometheus.io/port: '8000'
    spec:
      serviceAccount: "default"
      containers:
      - env:
        - name: ENV
          valueFrom:
            configMapKeyRef:
              key: config-env
              name: "solrclouddemo1"
        - name: ZK_HOST
          valueFrom:
            configMapKeyRef:
              key: zookeeper-hosts
              name: "solrclouddemo1"
        - name: java_runtime_arguments
          value: ""
        image: "xxx.com:5100/com.xxx.cppseed/solrclouddemo1:1.0.0"
        imagePullPolicy: Always
        name: "solrclouddemo1"
        ports:
        - name: http
          containerPort: 8983
          protocol: TCP
        resources:
          requests:
            memory: "600Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "500m"
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: "solrclouddemo1"
  namespace: "com-xxx-cppseed-dev"
  labels:
    app: "solrclouddemo1"
    version: "1.0.0"
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8983
  selector:
    app: "solrclouddemo1"
    version: "1.0.0"
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: "solrclouddemo1"
  namespace: "com-xxx-cppseed-dev"
spec:
  selector:
    matchLabels:
      app: "solrclouddemo1"
      version: "1.0.0"
  minAvailable: 1
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: "solrclouddemo1"
  namespace: "com-xxx-cppseed-dev"
spec:
  selector:
    matchLabels:
      app: "solrclouddemo1"
  serviceName: "solrclouddemo1"
  replicas: 1
  template:
    metadata:
      labels:
        app: "solrclouddemo1"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - "solrclouddemo1"
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: "solrclouddemo1"
        command:
        - "/bin/bash"
        - "-c"
        - "/opt/docker-solr/scripts/startService.sh"
        imagePullPolicy: Always
        image: "xxx.com:5100/com.xxx.cppseed/solrclouddemo1:1.0.0"
        resources:
          requests:
            memory: "600Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "500m"
        ports:
        - containerPort: 8983
          name: http
        volumeMounts:
        - name: datadir
          mountPath: /opt/solr/server/data
      securityContext:
        runAsUser: 8983
        fsGroup: 8983
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
      selector:
        matchLabels:
          app: cppseed-solr
Dockerfile:
FROM xxx.com:5100/com.xxx.public/solr:7.0.0
LABEL maintainer="xxx.com"
ENV SOLR_USER="solr" \
    SOLR_GROUP="solr"
# AAF Authentication
ADD aaf/config/ /opt/solr/server/etc/
ADD aaf/etc/ /opt/solr/server/etc/
ADD aaf/jars/ /opt/solr/server/lib/
ADD aaf/security/ /opt/solr/
# Entrypoint
ADD docker/startService.sh /opt/docker-solr/scripts/
# Monitoring
VOLUME /etc
#ADD monitoring/monitoring.jar /monitoring.jar
ADD /etc/ /etc/
# Permissions
USER root
RUN apt-get install sudo -y && \
    chown -R $SOLR_USER:$SOLR_GROUP /opt/solr && \
    chown -R $SOLR_USER:$SOLR_GROUP /opt/docker-solr/scripts/ && \
    chmod 777 /opt/docker-solr/scripts/startService.sh
# && \ chmod 777 /monitoring.jar
WORKDIR /opt/solr
ENTRYPOINT ["startService.sh"]
startService.sh
#!/bin/bash
#
# docker-entrypoint for docker-solr
# Fail immediately if anything has a non-zero result status
set -e
# Optionally echo commands before running them for debugging.
if [[ "$VERBOSE" = "yes" ]]; then
set -x
fi
# execute command passed in as arguments.
# The Dockerfile has specified the PATH to include
# /opt/solr/bin (for Solr) and /opt/docker-solr/scripts (for our scripts
# like solr-foreground, solr-create, solr-precreate, solr-demo).
# Note: if you specify "solr", you'll typically want to add -f to run it in
# the foreground.
echo "Invoking solr-foreground"
# Allow the clients to pass in java_runtime_arguments to tune the solr runtime when invoking the pipeline
if [[ -z "${java_runtime_arguments}" ]]; then
echo "No java_runtime_arguments received, so using default values"
exec solr-foreground -c -noprompt $@
else
echo "Received custom java_runtime_arguments. User will be responsible for prefixing all values passed with -a to allow SolrCloud to accept them. User is also responsible for establishing the -a -javaagent:/monitoring.jar=8000-/etc/config/prometheus_jmx_config.yaml-/etc/config/prometheus_application_config.yaml-/metrics which is used for Prometheus monitoring"
exec solr-foreground -c -noprompt $java_runtime_arguments $@
fi
Upvotes: 1
Views: 1850
Reputation: 121
Workaround: Use initContainers
# Before the main containers start, this init container changes the ownership
# of the mounted data directory so the non-root solr user (UID 8983) can write to it.
initContainers:
- name: volume-mount-hack
  image: busybox
  resources:
    limits:
      cpu: 500m
      memory: 1Gi
    requests:
      cpu: 250m
      memory: 600Mi
  command:
  - /bin/sh
  - -c
  - "chown -R 8983:8983 /opt/solr/server/data"  # numeric UID:GID; the busybox image has no solr user
  volumeMounts:
  - name: datadir
    mountPath: /opt/solr/server/data
Make sure to use the same volumeMounts details in the container spec, along with runAsUser in the securityContext:
containers:
- name: "${APP_NAME}"
  imagePullPolicy: Always
  image: "${IMAGE_NAME}"
  env:
  - name: ENV
    valueFrom:
      configMapKeyRef:
        key: config-env
        name: "${APP_NAME}"
  - name: ZK_HOST
    valueFrom:
      configMapKeyRef:
        key: zookeeper-hosts
        name: "${APP_NAME}"
  - name: ZK_CLIENT_TIMEOUT
    value: "30000"
  - name: java_runtime_arguments
    value: "${JAVA_RUNTIME_ARGUMENTS}"
  command:
  - "/bin/bash"
  - "-c"
  - "/opt/docker-solr/scripts/startService.sh"
  resources:
    requests:
      memory: "600Mi"
      cpu: "250m"
    limits:
      memory: "1Gi"
      cpu: "500m"
  ports:
  - containerPort: 8983
    name: http
  volumeMounts:
  - name: datadir
    mountPath: /opt/solr/server/data
  securityContext:
    runAsUser: 8983
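To confirm the workaround took effect, you can check the ownership of the mounted directory from inside the running pod once it is up (the pod name here is illustrative):

# Hypothetical pod name; the directory should now be owned by UID 8983 (solr)
kubectl exec solrclouddemo1-0 -- ls -ld /opt/solr/server/data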
Upvotes: 1