Argos Mesh uses an event-driven architecture with RabbitMQ for asynchronous communication between microservices. This page documents the internal events used across the system.

Event Overview

The system publishes three main types of internal events:
  • ProductSoldInternalEvent - Published when a product is sold
  • ProductCreatedInternalEvent - Published when a new product is created
  • AlertInternalEvent - Published when suspicious activity is detected

ProductSoldInternalEvent

Published by the Orders service when a product sale transaction is completed. This event is consumed by the Sentinel service for fraud detection and monitoring.

Event Schema

package com.argos.orders.dto.event;

import java.time.LocalDateTime;

public record ProductSoldInternalEvent(
    Long productID,
    Integer quantity,
    String ipAddress,
    LocalDateTime timeStamp
) { }

Fields

  • productID (Long, required) - The unique identifier of the product that was sold
  • quantity (Integer, required) - The number of units sold in the transaction
  • ipAddress (String, required) - The client IP address from which the purchase was made. Used for fraud detection and geographic analysis.
  • timeStamp (LocalDateTime, required) - The exact date and time when the sale occurred, serialized as an ISO-8601 string
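
The timeStamp value is a LocalDateTime serialized as an ISO-8601 string (see Message Serialization below). A quick sketch, using only the JDK's java.time package, showing that this representation round-trips through LocalDateTime:

```java
import java.time.LocalDateTime;

public class TimestampDemo {
    public static void main(String[] args) {
        // Parse the ISO-8601 local date-time string used in event payloads
        LocalDateTime ts = LocalDateTime.parse("2026-03-05T14:30:45.123");

        // toString() produces the same ISO-8601 representation
        System.out.println(ts); // 2026-03-05T14:30:45.123
    }
}
```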

Message Configuration

Property        Type      Value
Queue           String    argos.sales.queue
Exchange        String    shop.exchange (Topic Exchange)
Routing Key     String    shop.event.sold

Event Example

{
  "productID": 1,
  "quantity": 5,
  "ipAddress": "192.168.1.100",
  "timeStamp": "2026-03-05T14:30:45.123"
}

Event Flow

The Orders service uses the shop.exchange Topic Exchange to allow multiple consumers to process sale events independently.

ProductCreatedInternalEvent

Published by the Orders service when a new product is successfully created in the system.

Event Schema

package com.argos.orders.dto.event;

import com.argos.orders.dto.ProductResponse;

public record ProductCreatedInternalEvent(ProductResponse response) { }

Fields

  • response (ProductResponse, required) - The complete product information, including the newly generated ID

Message Configuration

Property        Type      Value
Queue           String    argos.products.mgmt.queue
Exchange        String    shop.exchange (Topic Exchange)
Routing Key     String    shop.event.product.# (wildcard pattern)

Event Example

{
  "response": {
    "productID": 42,
    "productName": "Laptop",
    "productPrice": 999.99,
    "productStock": 50
  }
}

The queue binding uses the wildcard pattern shop.event.product.# so that future product-related events, such as shop.event.product.updated or shop.event.product.deleted, are routed to the same queue.
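
RabbitMQ topic exchanges match routing keys against binding patterns word by word: * matches exactly one dot-separated word and # matches zero or more. The sketch below (an illustrative TopicMatcher, not part of Argos Mesh) reimplements that matching to show which routing keys the documented patterns accept:

```java
public class TopicMatcher {
    // RabbitMQ topic semantics: '*' matches exactly one word,
    // '#' matches zero or more '.'-separated words.
    static boolean matches(String pattern, String key) {
        return match(pattern.split("\\."), 0, key.split("\\."), 0);
    }

    private static boolean match(String[] p, int pi, String[] k, int ki) {
        if (pi == p.length) return ki == k.length;
        if (p[pi].equals("#")) {
            // '#' may consume zero or more remaining words
            for (int skip = ki; skip <= k.length; skip++) {
                if (match(p, pi + 1, k, skip)) return true;
            }
            return false;
        }
        if (ki == k.length) return false;
        return (p[pi].equals("*") || p[pi].equals(k[ki])) && match(p, pi + 1, k, ki + 1);
    }

    public static void main(String[] args) {
        System.out.println(matches("shop.event.product.#", "shop.event.product.updated")); // true
        System.out.println(matches("shop.event.product.#", "shop.event.product"));         // true ('#' matches zero words)
        System.out.println(matches("shop.event.product.#", "shop.event.sold"));            // false
    }
}
```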

AlertInternalEvent

Published by the Sentinel service when suspicious activity or security threats are detected, such as DDoS attacks or unusual purchasing patterns.

Event Schema

package com.argos.notify.dto;

import java.time.LocalDateTime;

public record AlertInternalEvent(
    String type,
    String sourceIp,
    String severity,
    LocalDateTime timeStamp
) { }

Fields

  • type (String, required) - The type of alert detected. Common values:
      • DDoS Attack
      • Suspicious Purchase Pattern
      • Rate Limit Exceeded
      • Inventory Manipulation
  • sourceIp (String, required) - The IP address associated with the suspicious activity (e.g., 127.0.0.1, 192.168.1.100)
  • severity (String, required) - The severity level of the alert:
      • CRITICAL - Immediate action required
      • HIGH - Requires urgent attention
      • MEDIUM - Should be investigated
      • LOW - Informational
  • timeStamp (LocalDateTime, required) - When the alert was generated, serialized as an ISO-8601 string
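
One way to model the severity levels is an ordered Java enum, with declaration order encoding urgency. This is an illustrative sketch (the Severity enum and isUrgent helper are hypothetical, not the actual Sentinel or Notify implementation):

```java
public class SeverityDemo {
    // Declaration order defines the ordering: CRITICAL is most severe.
    enum Severity { CRITICAL, HIGH, MEDIUM, LOW }

    // Hypothetical helper: does this alert warrant an urgent notification?
    static boolean isUrgent(String severity) {
        return Severity.valueOf(severity).compareTo(Severity.HIGH) <= 0;
    }

    public static void main(String[] args) {
        System.out.println(isUrgent("CRITICAL")); // true
        System.out.println(isUrgent("MEDIUM"));   // false
    }
}
```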

Message Configuration

Property        Type      Value
Queue           String    argos.alert.queue
Exchange        String    alert.exchange (Topic Exchange)
Routing Key     String    argos.alert.# (wildcard pattern)

Event Example

{
  "type": "DDoS Attack",
  "sourceIp": "203.0.113.42",
  "severity": "CRITICAL",
  "timeStamp": "2026-03-05T14:35:22.789"
}

Consumers

The AlertInternalEvent is consumed by:
  • Notify Service - Sends notifications to administrators via email, SMS, or other channels
  • Security Dashboard - Updates real-time monitoring displays
  • Logging System - Archives alerts for compliance and audit purposes
CRITICAL severity alerts trigger immediate notifications and may automatically block the source IP address depending on system configuration.

RabbitMQ Configuration

Exchanges

The system uses two main Topic Exchanges:
Exchange Name     Purpose
shop.exchange     Product and sales events
alert.exchange    Security and monitoring alerts

Queues

Queue Name                  Bound To         Routing Key             Consumer
argos.sales.queue           shop.exchange    shop.event.sold         Sentinel Service
argos.products.mgmt.queue   shop.exchange    shop.event.product.#    Product Management
argos.alert.queue           alert.exchange   argos.alert.#           Notify Service

Message Serialization

All events are serialized as JSON using Jackson with the following configuration:
ObjectMapper mapper = new ObjectMapper();
mapper.registerModule(new JavaTimeModule());
mapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
The JavaTimeModule ensures that LocalDateTime fields are serialized as ISO-8601 strings rather than timestamps.

Event Publishing

From Orders Service

The Orders service publishes events using Spring AMQP:
// Publishing a ProductSoldInternalEvent
ProductSoldInternalEvent event = new ProductSoldInternalEvent(
    productId,
    quantity,
    clientIp,
    LocalDateTime.now()
);

rabbitTemplate.convertAndSend(
    "shop.exchange",
    "shop.event.sold",
    event
);

From Sentinel Service

The Sentinel service publishes alerts:
// Publishing an AlertInternalEvent
AlertInternalEvent alert = new AlertInternalEvent(
    "DDoS Attack",
    sourceIp,
    "CRITICAL",
    LocalDateTime.now()
);

rabbitTemplate.convertAndSend(
    "alert.exchange",
    "argos.alert.critical",
    alert
);

Event Consumption

Listening to Sales Events

@RabbitListener(queues = "argos.sales.queue")
public void handleProductSold(ProductSoldInternalEvent event) {
    // Process the sale event
    log.info("Product {} sold: {} units from IP {}",
        event.productID(),
        event.quantity(),
        event.ipAddress());
    
    // Perform fraud analysis
    fraudDetectionService.analyze(event);
}

Listening to Alert Events

@RabbitListener(queues = "argos.alert.queue")
public void handleAlert(AlertInternalEvent alert) {
    // Send notification based on severity
    if ("CRITICAL".equals(alert.severity())) {
        notificationService.sendUrgentAlert(alert);
    } else {
        notificationService.logAlert(alert);
    }
}

Best Practices

  • Idempotent consumers - Design consumers to handle duplicate events gracefully. Use unique transaction IDs or timestamps to detect and skip duplicate processing.
  • Dead letter queues - Implement dead letter queues (DLQs) for messages that fail processing, and configure retry policies with exponential backoff.
  • Monitoring - Monitor queue depths, message rates, and consumer lag to ensure healthy event processing. Set up alerts for queue buildup.
  • Schema evolution - When modifying event schemas, maintain backward compatibility: add new optional fields rather than modifying existing ones.
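
The duplicate-handling advice can be sketched as an idempotent consumer that remembers processed event keys. The class and key format below are illustrative; a production system would back the set with a persistent store such as Redis rather than in-process memory:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotentConsumer {
    // In-memory dedup store; production systems would persist this externally
    private final Set<String> processed = ConcurrentHashMap.newKeySet();

    // Returns true if the event was handled, false if it was a duplicate.
    public boolean handle(String eventKey) {
        if (!processed.add(eventKey)) {
            return false; // already seen: skip duplicate processing
        }
        // ... actual event processing would happen here ...
        return true;
    }
}
```

An event key might combine fields that uniquely identify the transaction, for example productID + "|" + timeStamp.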

Troubleshooting

Messages Not Being Consumed

  1. Verify the queue exists and is bound to the correct exchange
  2. Check the routing key matches the binding pattern
  3. Ensure the consumer service is running and connected to RabbitMQ
  4. Review consumer logs for exceptions

High Queue Depth

  1. Check if consumers are processing messages slowly
  2. Scale up the number of consumer instances
  3. Investigate if messages are being rejected and requeued
  4. Review message sizes and processing complexity
Use the RabbitMQ Management UI at http://localhost:15672 to monitor queues, exchanges, and message flows in real-time.
