brycejl

Reputation: 1491

How do I access my Docker API container from my front-end container?

I'm new to docker and trying to wrap my head around the networking between containers. I'm running two containers, a container for my Node.js API server, and a container that holds my front-end React UI. Things work fine when running them both locally. The API server exposes port 3001, and from my React site I can make calls to localhost:3001/api.

Given that the idea of a container is that it can be run anywhere, how can I guarantee that these two container services can connect when not running on a local machine? I understand that networks can be set up between Docker containers, but that seems to not be applicable in this situation as the react container is not making the request, but rather the client accessing the react container (so localhost would now refer to their machine instead of my API container).

What is best practice for deploying this type of architecture?

What kind of setup is needed to guarantee that these containers can talk in a cloud deployment where the API host may be dynamically generated at deployment?

If relevant, I'm looking specifically to deploy to AWS ECS.

Edit:

The package.json proxy is only relevant in development, as the proxy doesn't take effect in a production build of a React app.

Upvotes: 2

Views: 3388

Answers (4)

nologin

Reputation: 1452

You have to prepare a VPC with at least one subnet (route tables, gateways, load balancer, etc.). The VPC and the subnet are configured with an IP range.

All instances in the subnet get an IP address out of that range, so your app instances can talk to each other. You can also assign fixed IPs to your apps.

For a blue/green deployment you need to create a (Fargate) ECS cluster. The corresponding AWS service for hosting your Docker images is AWS ECR, the container registry. I recommend using AWS Fargate instead of EC2-backed ECS instances - less money, more flexibility.

Info about VPCs and subnets: https://docs.aws.amazon.com/AmazonECS/latest/userguide/create-public-private-vpc.html

Info about Fargate and clusters: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-fargate.html

Upvotes: 0

Martin Löper

Reputation: 6649

As far as I understand, you want to deploy a classic two-tier application consisting of a React frontend and a Node.js backend to Amazon ECS (the production environment).

We set this up for our application some time ago and I want to outline the solution.

What is best practice for deploying this type of architecture?

That is a really tough question, since it also depends on some characteristics of the two tiers which are not fully specified in your question.

Frontend

The first question that comes to my mind is whether you really need to run the React UI from a Docker container at all. Is it dynamic content? If your React app is built properly for production - as outlined in [1] - it should be static content. The advantage of static content is that it can be cached easily and thus does not need to be served from a Docker container in production. Quite the opposite: I would consider serving static content from an ECS container in production a bad practice. What you should do instead is:

  • Create an S3 bucket and deploy your static assets into the bucket.
  • Optionally, but highly recommended for production: use some sort of Content Delivery Network (CDN) in order to distribute your content and cache it effectively at the edge. The AWS service landscape provides the CloudFront service for this purpose. Whether or not it pays off using CloudFront in turn depends on the traffic pattern of your application. You could serve the static assets from your S3 bucket directly, which will probably result in higher latency but could be more cost-effective.

All in all I would recommend: if you are planning to bring a serious application into production which is expected to receive a decent load of traffic and/or is designed as a Single Page Application (SPA), outsource your static assets to S3 and serve them via CloudFront.
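For illustration, the upload step could be as simple as the following sketch. The bucket name my-react-assets is a placeholder, and it assumes a standard Create React App production build in build/:

# build the static assets and sync them to the (hypothetical) bucket
npm run build
aws s3 sync build/ s3://my-react-assets --delete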

Backend

The best practice is straightforward here: create an Application Load Balancer (ALB) and a target group, and point the target group at your ECS service. ECS provides an integration with AWS Elastic Load Balancing. [2] The advantage of using an ALB here is that a DNS record is automatically created for you. [3]
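To make this concrete, here is a minimal sketch of the relevant fragment of an ECS service definition, as it could be passed to aws ecs create-service --cli-input-json. The names, ARN and port are placeholders, not values from the question:

{
  "serviceName": "node-api",
  "taskDefinition": "node-api:1",
  "desiredCount": 2,
  "loadBalancers": [
    {
      "targetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/api/0123456789abcdef",
      "containerName": "api",
      "containerPort": 3001
    }
  ]
}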

But what if I really need to use two containers in ECS?

If you decide not to outsource static assets because there are dynamic parts in your React stuff or the pricing of the solution outlined above is not appropriate, let me answer your second question:

What kind of setup is needed to guarantee that these containers can talk in a cloud deployment where the API host may be dynamically generated at deployment?

There are multiple strategies for wiring things together in ECS. I guess your React container does not need to connect to the Node.js container directly, right? Correct me if that assumption is wrong. To me the scenario looks like the following:

  1. Client --> Docker container 1 "React" (loading e.g. index.html)
  2. Client (e.g. using Ajax from inside index.html) --> Docker container 2 "Node.js"

If the two tiers are really fully independent, I would suggest creating two separate ECS services, each running a separate ECS task definition. Secondly, you create an application load balancer and activate the load balancer integration on each of those services. Finally, you create a separate target group for each service and assign a separate listener on the load balancer that forwards traffic to the respective target group.

Example:

  • Application Load Balancer with DNS name: my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com
  • Service A with React Frontend
  • Service B with Node.js Backend
  • Target Group A which redirects traffic to Service A
  • Target Group B which redirects traffic to Service B
  • ALB Listener A which redirects traffic on port 80 of your load balancer to Target Group A
  • ALB Listener B which redirects traffic on port 8080 of your load balancer to Target Group B
  • Optionally: A custom DNS record for your own domain which points at the load balancer (via an alias record) in order to provide a more customer-friendly name in the browser instead of the automatically created record in the aws zone elb.amazonaws.com.

Now the frontend is accessible on the standard HTTP port of your load balancer domain and the backend is accessible on port 8080. You could easily activate SSL on the load balancer and use port 443 instead. [4]
This allows SSL to be terminated on the load balancer instead of in your Docker container - a feature called SSL termination. [5]
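As a sketch, ALB Listener B from the example above could be described like this (input for aws elbv2 create-listener; the ARNs are placeholders):

{
  "LoadBalancerArn": "arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/my-loadbalancer/0123456789abcdef",
  "Protocol": "HTTP",
  "Port": 8080,
  "DefaultActions": [
    {
      "Type": "forward",
      "TargetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/backend/0123456789abcdef"
    }
  ]
}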

But what if those containers must communicate with each other?

With the approach outlined above, the containers are able to communicate with each other via an application load balancer. However, this is not ideal if this communication is internal by nature since it is routed via a public load balancer endpoint. If we want to make sure traffic does not leave the private network in between the containers, we COULD* place them together:

  • create a task definition in ECS [6] and put both containers into it
  • specify "NetworkMode": "bridge" for the task
  • specify a link between the containers using the property Links on the respective container definition (inside the task definition)

*there are multiple strategies again to achieve this and I am outlining the simplest one I know here (e.g. tasks can also be linked together privately using Service Discovery [7] or the task network mode awsvpc)
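To illustrate the bridge/links variant, here is a minimal sketch of such a task definition, as input for aws ecs register-task-definition. The family, container names, images, memory values and ports are placeholders:

{
  "family": "react-and-node",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "frontend",
      "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/frontend:latest",
      "memory": 256,
      "portMappings": [{ "containerPort": 80, "hostPort": 80 }],
      "links": ["backend"]
    },
    {
      "name": "backend",
      "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/backend:latest",
      "memory": 256,
      "portMappings": [{ "containerPort": 3001 }]
    }
  ]
}

With this in place, the frontend container can reach the backend container under the hostname backend (e.g. http://backend:3001) without the traffic leaving the host the task runs on.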

I know it is a complex and particularly broad topic, since there are multiple strategies, each with its own pros and cons, but I hope I could give you some useful references.

References

[1] https://create-react-app.dev/docs/production-build/
[2] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html
[3] https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.html#resource-record-sets-elb-dns-name-procedure
[4] https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html
[5] https://infra.engineer/aws/36-aws-ssl-offloading-with-an-application-load-balancer
[6] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html
[7] https://stackoverflow.com/a/57540515/10473469

Upvotes: 5

ahasbini

Reputation: 6901

seems to not be applicable in this situation as the react container is not making the request, but rather the client accessing the react container

I'm not very experienced with React, so correct me if I'm wrong, but from what I understand a client browser first connects to the React UI frontend, which in turn responds with webpages whose functionality sends API requests to the Node.js API server, using an embedded base URL and the assumption that it is listening on port 3001. In that case it doesn't seem to me that the React UI frontend container is connecting to the Node.js API container; it is just giving the client browser the means for the webpages to send requests to the right location.

Although I have a feeling that the proxy in package.json is the key (but again, it's not my area of expertise), what really needs to be done is to configure the React UI container to embed the proper hostname where the Node.js API is hosted, so that the client browser sends its requests to the correct destination. Taking into account what you've been able to do on your local machine, it appears to me that the port configuration for your containers has been done correctly; I'm going to assume that they are being "exposed". Hence, running your containers in the same manner on a server that has Docker on it will also expose the ports correctly.

So to wrap up, you would basically have a server that has a public hostname and Docker running on it, which in turn runs your containers and exposes their ports. The basic configuration needed is for the React UI container to supply the correct URL of the Node.js API, which will actually be the public hostname of your server (since Docker technically listens on the ports on the server and forwards the traffic inwards to the containers; that's what's meant by exposing, more like port-forwarding).
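One common way to do that embedding, assuming the frontend is built with Create React App, is a build-time environment variable; the variable name REACT_APP_API_URL and the endpoint below are just placeholders for illustration:

// Create React App inlines REACT_APP_* variables into the bundle at build time,
// so the deployed static files carry the public API hostname with them.
const API_BASE = process.env.REACT_APP_API_URL || 'http://localhost:3001';

fetch(`${API_BASE}/api/items`)
  .then((res) => res.json())
  .then((data) => console.log(data));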

Upvotes: 0

Kapil Khandelwal

Reputation: 1176

First things first. Since the Node.js API server and the front-end React UI are running in two different containers, you need to configure the proxy in the package.json of the React application.

"proxy": "http://<docker_container_name>:3001",

If you are still wondering what this proxy is and why it is required, please refer to this before reading further.

Now, since our services are running in two different containers, "proxy": "http://localhost:3001" won't work, as this would proxy the request within the front-end container itself. So, we need to tell the React server to proxy the request to the Node server that is running in the other container.

Hence, docker_container_name is the name of the Docker container in which the Node.js API server is running.

"proxy": "http://<docker_container_name>:3001",

NOTE: Make sure to expose port 3001 in the Node server container.

What if you do not want to expose the port of the Node server?

For this, I would recommend using docker-compose. Create a docker-compose.yml that looks something like this:

version: "3.7"
services:
  frontend:
    # Add other configuration options for frontend service
    depends_on:
      - backend  
  backend:
    # Add configuration options for backend service

Please refer to this to learn more about depends_on.
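For illustration, here is a slightly fuller sketch of that compose file (build paths and ports are assumptions). Because both services share the default Compose network, the frontend can reach the backend under its service name, e.g. "proxy": "http://backend:3001", while the backend's port 3001 is never published on the host:

version: "3.7"
services:
  frontend:
    build: ./frontend          # assumed project layout
    ports:
      - "80:80"                # only the frontend is published on the host
    depends_on:
      - backend
  backend:
    build: ./backend           # assumed project layout
    expose:
      - "3001"                 # reachable by other services, not from the host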

Upvotes: 1
