VinsanityL

Reputation: 830

Processing Messages in Amazon

This may be a silly question, but I have a big mental block right now: I can't see the connection between my SQS queue and the EC2 instances that will process the messages.

That is to say: a client completes a form on the web page (this web page is hosted on an EC2 instance), and that form is sent as a message to an SQS queue. After that, the goal of my cloud application is to take the form information from the message and run a .sh script with it. An example of this process is shown in the picture below:

[diagram: web form on EC2 → SQS queue → EC2 instances running the .sh script]

So, how can my SQS queue run a .sh script on an EC2 instance? The only way I figured out to do this with Python and boto is to create a "listener" that constantly reads the messages and "does something" with each one:

import time

# conn and q are the boto SQS connection and queue set up elsewhere.
while True:
    m = conn.receive_message(
        q,
        number_messages=1,
        message_attributes=['configName'],
        attributes='All'
    )
    if not m:
        time.sleep(5)
    else:
        config_name = str(m[0].message_attributes.get('configName').get('string_value'))
        rh = str(m[0].receipt_handle)
        # Process the message on this EC2 instance (run the .sh script with config_name)
        conn.delete_message_from_handle(q, rh)
        time.sleep(5)

So, as seen in the code above, I read the attributes of the message (only one in this case, for simplicity) and then have to process it.

How can I process the incoming messages in parallel on different EC2 instances? I don't see how, because I only have one listener, and if the listener is busy processing one message, it won't process any other until it finishes the first one. I want to process any number of messages in parallel (depending on the number of instances and the bill I want to pay, of course). And how can I know which EC2 instance will run my .sh program?

Is there another, much easier way to do this with any Amazon service?

Thanks

Upvotes: 2

Views: 492

Answers (1)

tster

Reputation: 18237

So, how can my SQS queue run a .sh script on an EC2 instance?

It looks like you have this question answered with your own code.
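
To make that concrete, here is a minimal sketch of the "do something" step: the worker that receives the message launches the shell script with the configName value as an argument. The script path is a placeholder, not something from your setup.

import subprocess

# Inside the "else" branch of the polling loop: hand the attribute value
# to the shell script. The path below is a placeholder.
config_name = str(m[0].message_attributes.get('configName').get('string_value'))
subprocess.check_call(['/opt/app/process_form.sh', config_name])
conn.delete_message_from_handle(q, m[0].receipt_handle)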

The only way I figured out to do this with Python and boto is to create a "listener" that constantly reads the messages and "does something" with each one.

Yup, that's how queues work.

How can I process the incoming messages in parallel on different EC2 instances?

Just run the code that processes the messages on multiple EC2 instances.

I don't see how, because I only have one listener, and if the listener is busy processing one message, it won't process any other until it finishes the first one.

This is true for a single thread. You can create multiple threads running that loop, and you can also create multiple processes running those threads (either on the same VM or on multiple VMs).
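
As a rough sketch (the region and queue name are placeholders, and the loop body mirrors the one in the question), each EC2 instance could start a few listener threads like this; running the same script on several instances then gives you parallelism across instances as well:

import threading
import time
import boto.sqs

def poll_queue(region, queue_name):
    # Each worker thread gets its own connection and queue handle.
    conn = boto.sqs.connect_to_region(region)
    q = conn.get_queue(queue_name)
    while True:
        m = conn.receive_message(q, number_messages=1,
                                 message_attributes=['configName'],
                                 attributes='All')
        if not m:
            time.sleep(5)
            continue
        config_name = str(m[0].message_attributes.get('configName').get('string_value'))
        # ... run the .sh script with config_name here ...
        conn.delete_message_from_handle(q, m[0].receipt_handle)

# Start several workers on this instance; run the same script on every
# EC2 instance that should help drain the queue.
workers = [threading.Thread(target=poll_queue, args=('us-east-1', 'form-queue'))
           for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()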

How can I know which EC2 instance will run my .sh program?

You don't. Whichever EC2 instance picks it up will run it. That's usually the intended behavior for decoupling front end and back end processing through a queue.

Is there another, much easier way to do this with any Amazon service?

No, this way is the simplest, assuming you are trying to decouple and/or offload the back-end processing from the front-end web servers. I don't think it's complex at all.

Also, remove your sleeps and use long polling instead.
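
With boto, long polling is just the wait_time_seconds argument to receive_message (up to 20 seconds), so the call blocks on the SQS side instead of you sleeping in the loop. A minimal sketch of the changed call:

# Long polling: SQS holds the request open for up to 20 seconds waiting
# for a message, so the explicit time.sleep(5) calls can go away.
m = conn.receive_message(
    q,
    number_messages=1,
    message_attributes=['configName'],
    attributes='All',
    wait_time_seconds=20
)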

Upvotes: 2
