Reputation: 8151
Most examples about Service Fabric show that after deployment, cluster endpoints appear magically, just as given in the service manifest, e.g. <cluster-url>:port/api/home
Some people on forums mention tweaking the load balancer to allow access to the port.
Why the difference in opinions? Which way is correct? When I tried, I was never able to access a deployed API endpoint in an Azure cluster (load balancer tweaked or not). OneBox worked, though.
Upvotes: 0
Views: 259
Reputation: 8151
The custom endpoints help text says "Custom endpoints allow for connections to applications running on this node type. Enter endpoints separated by a comma.". They can only be set while creating the cluster.
So apparently, without setting the port here, the outside world can never access that port. Or so is what this user felt in 2016.
Since there is a load balancer in front, if we add a probe from port x (public) to port y (backend pool), and we open port y in the firewall on all the nodes, then it should work too. How do we open the port?
What would happen if we mention a port (say 2345) in the service manifest? Then a port is opened in the firewall by SF for us, and it looks like this. And if there is a probe in the load balancer pointing to 2345, then it should work.
So unless there is a probe manually set by us in the load balancer, it should not work.
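As a sketch, declaring that fixed port in ServiceManifest.xml would look roughly like this (endpoint name is illustrative):

```xml
<!-- ServiceManifest.xml fragment (illustrative names, not a full manifest) -->
<Resources>
  <Endpoints>
    <!-- SF opens this port in the OS firewall on every node where the
         service runs; it does NOT configure the Azure load balancer. -->
    <Endpoint Name="ServiceEndpoint" Protocol="http" Port="2345" />
  </Endpoints>
</Resources>
```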
Upvotes: 0
Reputation: 11341
The main detail most people forget when building SF applications is that they are building distributed applications. When you deploy a service in a cluster, you need a way to find it, and in some cases it can move around the cluster, so the solution must be able to handle this distribution.
It works locally because you have a single endpoint (localhost (127.0.0.1) > Service) and you will always find your application there.
On SF, you hit a domain that maps to a load balancer, which maps to a set of machines, and one of those machines might have your application running on it (domain > LB IP > Nodes > Service).
The first things you need to know are:
Your service might not run on all nodes (machines) behind a load balancer. When the load balancer sends a request to a node where the service is not running, the request fails; the LB does not retry on another node. It forwards requests to random nodes, and in most cases it sticks open connections to the same machine. If you need your service running on all nodes, set the instance count to -1, and you might see it working just by opening the ports on the LB.
Each NodeType has one load balancer in front of it, so always set a placement constraint on the service to prevent it from starting on another NodeType that is not exposed externally.
Every port opened by your application is opened on a per-node basis. If you need external access, the port must be opened in the load balancer manually or via script. The ports assigned by SF to your service are meant to be managed internally, to avoid port conflicts between services running on the same node; SF does not open the ports in the LB for external access.
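For a stateless service, the instance count of -1 is set in the ApplicationManifest's default services; a minimal sketch (service and type names are illustrative):

```xml
<!-- ApplicationManifest.xml, DefaultServices fragment (illustrative names) -->
<DefaultServices>
  <Service Name="MyWebService">
    <!-- InstanceCount="-1" places one instance on every eligible node,
         so any node the LB picks can serve the request. -->
    <StatelessService ServiceTypeName="MyWebServiceType" InstanceCount="-1">
      <SingletonPartition />
    </StatelessService>
  </Service>
</DefaultServices>
```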
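A placement constraint can be set on the service type in ServiceManifest.xml; a sketch, assuming a node type named FrontEnd (the node-type property name may differ depending on how your cluster defines node properties):

```xml
<!-- ServiceManifest.xml fragment (illustrative names) -->
<ServiceTypes>
  <StatelessServiceType ServiceTypeName="MyWebServiceType">
    <!-- Keep the service on the node type that sits behind the
         externally exposed load balancer. "FrontEnd" is a placeholder. -->
    <PlacementConstraints>(NodeType == FrontEnd)</PlacementConstraints>
  </StatelessServiceType>
</ServiceTypes>
```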
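Opening the port in the load balancer means adding both a health probe and a load-balancing rule. A sketch of the relevant fragment inside a Microsoft.Network/loadBalancers ARM template resource, assuming port 2345 and placeholder variables for the resource IDs:

```json
{
  "comments": "Fragment of a Microsoft.Network/loadBalancers resource; names and variable references are placeholders",
  "probes": [
    {
      "name": "AppPortProbe2345",
      "properties": {
        "protocol": "Tcp",
        "port": 2345,
        "intervalInSeconds": 15,
        "numberOfProbes": 2
      }
    }
  ],
  "loadBalancingRules": [
    {
      "name": "AppPortLBRule2345",
      "properties": {
        "frontendIPConfiguration": { "id": "[variables('lbFrontendIpConfigId')]" },
        "backendAddressPool": { "id": "[variables('lbBackendPoolId')]" },
        "probe": { "id": "[variables('lbProbeId')]" },
        "protocol": "Tcp",
        "frontendPort": 2345,
        "backendPort": 2345
      }
    }
  ]
}
```

The same probe and rule can also be added after the fact through the portal or scripting tools, which is what "manually, or via script" refers to.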
There are many other approaches to exposing these services that you could also try.
Upvotes: 2