Split messages from a single SQS queue into multiple SQS queues using EventBridge

In this post, I discuss how messages in an SQS queue can be split between multiple SQS queues while preserving their original payloads. The idea is to achieve this with a low-code solution using EventBridge Pipes, an Event Bus, and Event Rules.

EventBridge Pipe

EventBridge Pipes was introduced at re:Invent 2022. EB Pipes helps create point-to-point integrations between event producers and consumers with little to no code.

This was a great addition to the serverless services, since it reduces the custom code that was previously required to connect two services together. It also includes capabilities to filter and enrich events as they pass through the Pipe. As of now, EB Pipes supports streaming sources such as Kinesis, DynamoDB Streams, self-managed Apache Kafka, Amazon MSK, and Amazon MQ, as well as SQS.

About this project

Imagine a situation where you need to split the messages in an SQS queue into separate SQS queues based on their content. For example, an external producer sends messages to a single SQS queue, and you have to process those messages using different consumers, or the messages need to be processed with different priorities. So, first, you need to separate those messages into different SQS queues.

This can be achieved with a single Lambda function that first reads the messages from the source queue and then sends them to the different SQS queues.
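As a rough illustration, such a function could look like the sketch below. The queue URLs and the type-to-queue mapping are made-up assumptions for illustration, not part of the sample project:

```python
import json

# Hypothetical mapping from the message 'type' field to a target queue
# URL; in a real function these would come from configuration such as
# environment variables.
TARGET_QUEUES = {
    "OrderCreated": "https://sqs.eu-west-1.amazonaws.com/111111111111/order-created",
    "OrderUpdated": "https://sqs.eu-west-1.amazonaws.com/111111111111/order-updated",
}


def route(record_body: str) -> str:
    """Pick the target queue URL based on the message content."""
    message = json.loads(record_body)
    return TARGET_QUEUES[message["type"]]


def handler(event, context):
    """Lambda entry point, triggered by the source SQS queue. Forwards
    each record's original body to the matching target queue."""
    import boto3  # imported lazily so route() stays testable without AWS

    sqs = boto3.client("sqs")
    for record in event["Records"]:
        sqs.send_message(QueueUrl=route(record["body"]), MessageBody=record["body"])
```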

However, such a Lambda function concentrates a lot of business logic, plus the permissions to every target queue, in one place in order to route each message correctly. It also has to scale with demand and be reliable enough not to become a single point of failure. Further, whenever new targets are introduced, the function has to be extended to cater to those requirements, which is always a challenge.


High level architecture

Image: High level architecture


How it works

  1. EventBridge Pipe is configured to poll messages from the source SQS queue.

  2. There is no filter set up, which means all the messages in the source queue will be processed through the Pipe.

  3. When a message comes through the Pipe, it contains not only the original message in the body field but also a lot of SQS-related metadata. So, before sending the message to the targets, the original message has to be extracted from the payload. The enrichment Lambda function is used for this.

  4. Then, this message is sent to the Event Bus.

  5. There are event rules defined on the Event Bus, with conditions that are matched against this original message.

  6. Each rule has an SQS queue defined as the target.

  7. When a message's content matches a rule, that message is sent to the rule's target queue.
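The enrichment step (3) above can be sketched as a minimal Lambda handler. This is an assumption about the shape of the function, not necessarily the exact code in the sample repository: Pipes invokes the enrichment with a batch of SQS records, and the handler returns only the parsed body of each one.

```python
import json


def handler(records, context):
    """Enrichment handler for an EventBridge Pipe with an SQS source.

    Each incoming record carries the original message as a JSON string
    in 'body', surrounded by SQS metadata (messageId, receiptHandle,
    attributes, ...). Returning only the parsed bodies means the Event
    Bus rules downstream can match directly on fields like 'type'.
    """
    return [json.loads(record["body"]) for record in records]
```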

Test it yourself

I have created a sample application to test this scenario. It is built with AWS CDK v2 and Python, so you need CDK v2 and Python installed in your environment.

Set up

  1. Clone the repository: https://github.com/pubudusj/sqs-to-multiple-sqs

  2. Go into the cloned directory.

  3. To manually create a virtualenv on macOS and Linux:

    $ python3 -m venv .venv
  4. After the init process completes and the virtualenv is created, activate the virtualenv:

    $ source .venv/bin/activate
  5. If you are on a Windows platform, you would activate the virtualenv using:

    % .venv\Scripts\activate.bat
  6. Once the virtualenv is activated, you can install the required dependencies.

    $ pip install -r requirements.txt
  7. Then, deploy the application:

    $ cdk deploy
  8. Once the application is deployed, the output shows four values: SourceQueueUrl, TargetQueueOrderCreated, TargetQueueOrderUpdated, and PipeArn. Copy the value of SourceQueueUrl to test the application.
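For reference, the core wiring of the stack can be sketched in CDK v2 Python roughly as follows. All construct names are assumptions, and the inline enrichment function is a placeholder; the actual stack in the repository may be wired differently:

```python
from aws_cdk import Stack
from aws_cdk import aws_events as events
from aws_cdk import aws_iam as iam
from aws_cdk import aws_lambda as _lambda
from aws_cdk import aws_pipes as pipes
from aws_cdk import aws_sqs as sqs
from constructs import Construct


class SqsSplitterStack(Stack):
    """Illustrative stack only; names differ from the sample repository."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        source_queue = sqs.Queue(self, "SourceQueue")
        bus = events.EventBus(self, "SplitterBus")

        # Enrichment function that strips the SQS metadata and returns
        # only the original message bodies.
        enrichment_fn = _lambda.Function(
            self,
            "EnrichmentFn",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="index.handler",
            code=_lambda.Code.from_inline(
                "import json\n"
                "def handler(records, context):\n"
                "    return [json.loads(r['body']) for r in records]\n"
            ),
        )

        # Role the Pipe assumes: read from the queue, invoke the
        # enrichment, and put events onto the bus.
        pipe_role = iam.Role(
            self, "PipeRole", assumed_by=iam.ServicePrincipal("pipes.amazonaws.com")
        )
        source_queue.grant_consume_messages(pipe_role)
        enrichment_fn.grant_invoke(pipe_role)
        bus.grant_put_events_to(pipe_role)

        # Pipes only has an L1 construct (CfnPipe) as of this writing.
        pipes.CfnPipe(
            self,
            "SqsSplitterPipe",
            role_arn=pipe_role.role_arn,
            source=source_queue.queue_arn,
            enrichment=enrichment_fn.function_arn,
            target=bus.event_bus_arn,
        )
```

The Event Rules with the two target queues would then be attached to the same Event Bus.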


Test

  1. Here, I have configured two Event Bus rules.

  2. The first rule matches messages whose type field has the value ‘OrderCreated’. This rule has a target SQS queue named TargetQueueOrderCreated.

  3. The second rule matches messages whose type field has the value ‘OrderUpdated’. This rule has a target SQS queue named TargetQueueOrderUpdated.

  4. Send a message with type ‘OrderCreated’ to the source queue as follows:

     aws sqs send-message \
        --queue-url=SourceQueueUrl \
        --message-body '{"orderId":"125a2e1e-d420-482e-8008-5a606f4b2076",  "customerId": "a48516db-66aa-4dbc-bb66-a7f058c5ec24", "type": "OrderCreated"}'
  5. If you check TargetQueueOrderCreated, you will see the message has arrived in the queue with its original payload.

  6. Send a message with type ‘OrderUpdated’ to the source queue as follows:

     aws sqs send-message \
        --queue-url=SourceQueueUrl \
        --message-body '{"orderId":"125a2e1e-d420-482e-8008-5a606f4b2076", "customerId": "a48516db-66aa-4dbc-bb66-a7f058c5ec24", "type": "OrderUpdated"}'
  7. If you check TargetQueueOrderUpdated, you will see the message has arrived in the queue with its original payload.
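The rules above match on the type field using EventBridge event patterns, where each field in the pattern lists its accepted values (for example, a pattern like {"type": ["OrderCreated"]}). As a simplified model of this matching (an illustration only, with assumed rule names, not the EventBridge implementation):

```python
# Simplified model of the two rules' event patterns. In a real
# EventBridge event pattern, each field maps to a list of accepted
# values; the rule names here are assumptions for illustration.
RULES = {
    "OrderCreatedRule": {"type": ["OrderCreated"]},
    "OrderUpdatedRule": {"type": ["OrderUpdated"]},
}


def matching_rules(message: dict) -> list:
    """Return the names of rules whose pattern matches the message:
    for every field in the pattern, the message's value must be one
    of the accepted values."""
    return [
        name
        for name, pattern in RULES.items()
        if all(message.get(field) in values for field, values in pattern.items())
    ]
```

Fields that appear in the message but not in the pattern (such as orderId and customerId here) are ignored by the match, which is why the original payload passes through untouched.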


With EventBridge Pipes, it is easy to connect sources, especially streaming sources, to targets with minimal configuration where custom code was previously required. It scales well, is cost-effective, and requires very little maintenance.

Resources

  1. EventBridge Pipes documentation: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes.html

  2. CloudFormation API for Pipes: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-pipes-pipe.html

  3. Pipes Documentation for CDK v2 Python: https://docs.aws.amazon.com/cdk/api/v2/python/aws_cdk.aws_pipes/CfnPipe.html

  4. EventBridge Pipes pricing: https://aws.amazon.com/eventbridge/pricing/#Pipes


Please feel free to deploy this solution in your own AWS environment and share your experience with me. You can connect with me on LinkedIn: https://www.linkedin.com/in/pubudusj and Twitter: https://twitter.com/pubudusj

Keep building! Keep sharing!