Reputation: 21
I have a single-node Kubernetes cluster running in a VM in Azure. I have a service running an SCTP server on port 38412, and I need to expose that port externally. I tried changing the service type to NodePort, but with no success. I am using Flannel as the overlay network, on Kubernetes version 1.23.3.
This is my service.yaml file
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: fivegcore
    meta.helm.sh/release-namespace: open5gs
  creationTimestamp: "2022-02-11T09:24:09Z"
  labels:
    app.kubernetes.io/managed-by: Helm
    epc-mode: amf
  name: fivegcore-amf
  namespace: open5gs
  resourceVersion: "33072"
  uid: 4392dd8d-2561-49ab-9d57-47426b5d951b
spec:
  clusterIP: 10.111.94.85
  clusterIPs:
  - 10.111.94.85
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: tcp
    nodePort: 30314
    port: 80
    protocol: TCP
    targetPort: 80
  - name: ngap
    nodePort: 30090
    port: 38412
    protocol: SCTP
    targetPort: 38412
  selector:
    epc-mode: amf
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
As you can see, I changed the service type to NodePort:

NAMESPACE   NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)
open5gs     fivegcore-amf   NodePort   10.111.94.85   <none>        80:30314/TCP,38412:30090/SCTP
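A quick way to probe the SCTP NodePort from outside the VM looks like this (a sketch: it assumes ncat from the nmap package is built with SCTP support, that the Azure NSG lets the port in, and <node-public-ip> is a placeholder for the VM's public address):

$ ncat --sctp <node-public-ip> 30090
# a successful SCTP association stays open; "Ncat: Connection refused."
# or a timeout means nothing is answering on that port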
This is my ConfigMap. In it, the ngap dev entry is the server I want to connect to, which uses the default eth0 interface in the container.
apiVersion: v1
data:
  amf.yaml: |
    logger:
      file: /var/log/open5gs/amf.log
      #level: debug
      #domain: sbi
    amf:
      sbi:
      - addr: 0.0.0.0
        advertise: fivegcore-amf
      ngap:
        dev: eth0
      guami:
      - plmn_id:
          mcc: 208
          mnc: 93
        amf_id:
          region: 2
          set: 1
      tai:
      - plmn_id:
          mcc: 208
          mnc: 93
        tac: 7
      plmn_support:
      - plmn_id:
          mcc: 208
          mnc: 93
        s_nssai:
        - sst: 1
          sd: 1
      security:
        integrity_order: [ NIA2, NIA1, NIA0 ]
        ciphering_order: [ NEA0, NEA1, NEA2 ]
      network_name:
        full: Open5GS
      amf_name: open5gs-amf0
    nrf:
      sbi:
        name: fivegcore-nrf
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: fivegcore
    meta.helm.sh/release-namespace: open5gs
  creationTimestamp: "2022-02-11T09:24:09Z"
  labels:
    app.kubernetes.io/managed-by: Helm
    epc-mode: amf
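To confirm which address the AMF binds when ngap is given dev: eth0, the pod interface can be inspected (a sketch; <amf-pod> is a placeholder for the actual pod name):

$ kubectl -n open5gs get pods -l epc-mode=amf
$ kubectl -n open5gs exec <amf-pod> -- ip addr show eth0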
I exec'd into the container and checked whether the server is running. This is the netstat output from the container:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 10.244.0.31:37742 10.105.167.186:80 ESTABLISHED 1/open5gs-amfd
sctp 10.244.0.31:38412 LISTEN 1/open5gs-amfd
The sctp kernel module is also loaded on the host:
$ lsmod | grep sctp
sctp 356352 8
xt_sctp 20480 0
libcrc32c 16384 5 nf_conntrack,nf_nat,nf_tables,ip_vs,sctp
x_tables 49152 18 ip6table_filter,xt_conntrack,xt_statistic,iptable_filter,iptable_security,xt_tcpudp,xt_addrtype,xt_nat,xt_comment,xt_owner,ip6_tables,xt_sctp,ipt_REJECT,ip_tables,ip6table_mangle,xt_MASQUERADE,iptable_mangle,xt_mark
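To rule out the Azure side, inbound SCTP can also be watched on the node itself (a sketch, assuming tcpdump is installed on the host):

$ tcpdump -ni any sctp
# if nothing shows up while the client tries to connect, the traffic is
# being dropped before it reaches the node (e.g. by a network security group)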
Is it possible to expose this server externally?
Upvotes: 0
Views: 1138
Reputation: 11
We found that using a NodePort to translate ports breaks the SCTP protocol (see RFC 4960), because the SCTP transport address structure (embedded in each stream) will match the NodePort's port, not the SCTP port. Using port translation (e.g. a NodePort or a firewall) will therefore cause the decryptor to fail in the 5G core NFs we have worked with. This is obviously not a problem if the code listens on the same port as the NodePort. Port 38412 is only recommended, it is not required. Can you recompile your code to use a port in the NodePort range, or turn off the decryptor error function?
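A sketch of what that would look like at the Service level, assuming the AMF is rebuilt to listen on an in-range port (30412 here is an arbitrary, hypothetical pick):

ports:
- name: ngap
  nodePort: 30412   # port, targetPort and nodePort all identical,
  port: 30412       # so no SCTP port rewriting takes place and the
  protocol: SCTP    # transport address seen by the peer matches the
  targetPort: 30412 # port the server actually binds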
Upvotes: 1
Reputation: 11
You cannot expose port 38412 externally because the default NodePort range in Kubernetes is 30000-32767.
The best solution (which I tried, and it works) is to deploy a router/firewall between the Kubernetes cluster and the external srsRAN, and map SCTP port 38412 --> 31412 on the firewall (sketched below).
Initiate the connection from srsRAN/UERANSIM and the firewall will do the magic, making your RAN connect to the 5G core.
PS: I deployed this solution on OpenStack.
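On a Linux router the mapping could be expressed with iptables, for example (a sketch: <node-ip> is a placeholder for the cluster node's address, and the router's kernel needs SCTP conntrack/NAT support):

# rewrite inbound NGAP traffic to the NodePort the cluster exposes
iptables -t nat -A PREROUTING -p sctp --dport 38412 \
  -j DNAT --to-destination <node-ip>:31412
# ensure replies flow back through this box
iptables -t nat -A POSTROUTING -p sctp -d <node-ip> --dport 31412 -j MASQUERADE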
Upvotes: 1
Reputation: 15530
Neither AKS nor Flannel supports SCTP at the time of writing. Here are some details about it.
Upvotes: 1