Reputation: 716
This is more of an architectural question than a coding issue, so please pardon me if this is the wrong place. I have an EC2 instance in a private VPC subnet where we will eventually store PII data, and under no circumstances can it have internet access. However, we need to install ETL tooling in Docker on it (Airflow, NiFi, Python, etc.), and of course we need to SSH into it from our local company VPC.
As far as I can see, there are two approaches:
1. Create another EC2 instance in a public subnet, install all our tools
there, and call the private EC2 instance from it,
so that I can move the PII data to S3 through a private endpoint.
Cons: doesn't this still raise a security concern, since the ETL EC2 instance
is exposed to the internet, from where someone could reach the PII data on the second instance?
Another option:
2. Create the EC2 instance in a public subnet, install all the tools, and
then finally move it into the private subnet.
Cons: if a tool crashes or any change is needed, we will have to
move it back to the public subnet, which again does not look like a proper way of handling it.
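For context on the "private endpoint" idea in option 1: an S3 gateway endpoint lets instances in a private subnet reach S3 without any internet path, and the endpoint can carry a policy restricting which bucket is reachable. A minimal sketch of such an endpoint policy document follows; the bucket name `my-pii-bucket` is a placeholder, not taken from the question:

```python
import json

# Sketch of an S3 gateway endpoint policy that only allows
# reads and writes on a single bucket. "my-pii-bucket" is a
# placeholder name, not from the question.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::my-pii-bucket/*",
        }
    ],
}

# Serialized form, as it would be supplied when creating the
# VPC endpoint via the AWS API or CLI.
policy_json = json.dumps(endpoint_policy, indent=2)
print(policy_json)
```

A policy like this means that even from inside the VPC, traffic through the endpoint can only touch the named bucket, which limits the blast radius if the ETL instance is compromised.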
I searched the internet for a tutorial or training on this but could not find one.
Any suggestion would be highly appreciated.
Upvotes: 1
Views: 329
Reputation: 238209
You don't need to use the internet at all if you don't want to. I assume that by "no internet access" you mean it is two-way: no access from the internet to the instance, and the instance cannot connect out to the internet either (i.e. no NAT gateway or any other proxy).
There are a couple of ways of doing this. One way is as follows:
Upvotes: 3
Reputation: 3063
I think both approaches are inherently sub-optimal.
If all you're trying to do is avoid exposing your compute instances to the internet, and your setup is Docker-based, simply set up your own Docker registry, either using ECR or Sonatype Nexus (on another server), upload your Docker images there, and have that node use the ECR/Nexus registry as its Docker registry.
That way, you're enjoying free access to all resources exposed as Docker images while maintaining security compliance.
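As a minimal sketch of wiring the private instance to such a registry, assuming a hypothetical Nexus pull-through proxy at `nexus.internal.example:8082` (the hostname and port are placeholders, not from the answer), the Docker daemon's `/etc/docker/daemon.json` could point at the internal mirror:

```json
{
  "registry-mirrors": ["https://nexus.internal.example:8082"]
}
```

With this in place, a `docker pull` on the private instance resolves through the internal mirror instead of reaching Docker Hub directly.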
Upvotes: 0