Overview

MongoDB serves as the primary data store for Linqra, requiring a replica set configuration for high availability and data redundancy. The database is configured with a 3-node replica set architecture to ensure continuous operation and data consistency.

Data Structure

Core Data Collections

Collection | Purpose                            | Key Components
-----------|------------------------------------|-----------------------------------------------------------
Users      | User management and authentication | Account information, credentials, sessions, access tokens
Teams      | Organizational structure           | Team hierarchies, department mappings, member associations
Roles      | Access control and permissions     | Role definitions, permission sets, access policies

API Management

Collection    | Purpose                         | Key Components
--------------|---------------------------------|------------------------------------------------------------
Routes        | API routing configuration       | Endpoint definitions, route mappings, load balancing rules
Services      | Service registry and management | Service registry, dependencies, health status, versions
Documentation | API documentation               | API specifications, version history, endpoint docs

System Collections

Collection | Purpose                | Key Components
-----------|------------------------|--------------------------------------------------------------
Metrics    | Performance monitoring | Response times, request volumes, error rates, resource usage
Audit      | System auditing        | System events, user activities, config changes
Analytics  | System analytics       | Usage patterns, performance trends, health metrics

All collections are automatically replicated across the three nodes in the replica set, ensuring data redundancy and high availability.
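
Once the replica set described below is running, replication can be confirmed with a majority-acknowledged write; the healthcheck collection used here is purely illustrative and not part of Linqra's schema:

# Illustrative only: a majority-acknowledged write proves the data reached 2 of 3 nodes
docker exec -it mongodb1 mongosh -u root -p mongopw --authenticationDatabase admin --eval '
  db.getSiblingDB("linqra").healthcheck.insertOne(
    { ping: new Date() },
    { writeConcern: { w: "majority", wtimeout: 5000 } }
  )'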

Local Development Setup

Directory Preparation

Create the necessary data directories for the MongoDB replica set:

# Create data directories
sudo mkdir -p ~/IdeaProjects/linqra/.kube/mongodb/data1
sudo mkdir -p ~/IdeaProjects/linqra/.kube/mongodb/data2
sudo mkdir -p ~/IdeaProjects/linqra/.kube/mongodb/data3
sudo chmod -R 777 ~/IdeaProjects/linqra/.kube/mongodb/data*

# Create keyfile for internal replica set authentication
openssl rand -base64 756 > ~/IdeaProjects/linqra/.kube/mongodb/mongo-keyfile
chmod 600 ~/IdeaProjects/linqra/.kube/mongodb/mongo-keyfile
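
As a quick sanity check, confirm the keyfile exists with owner-only permissions; mongod will refuse to start if the keyfile is readable by group or others:

# Expect -rw------- (600) on the keyfile
ls -l ~/IdeaProjects/linqra/.kube/mongodb/mongo-keyfile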

Docker Compose Configuration

The MongoDB replica set is configured using Docker Compose. Here’s the complete configuration for all three nodes:

services:
  mongodb1:
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
    build:
      context: .
      dockerfile: ./.kube/mongodb/Dockerfile
    container_name: mongodb1
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: mongopw
      MONGO_REPLICA_SET_NAME: rs0
    ports:
      - "27017:27017"
    volumes:
      - ./.kube/mongodb/data1/:/data/db
      - ./.kube/mongodb/mongo-keyfile:/data/mongo-keyfile
    networks:
      - linqra-network
    command: mongod --bind_ip_all --replSet rs0 --port 27017 --keyFile /data/mongo-keyfile

  mongodb2:
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
    build:
      context: .
      dockerfile: ./.kube/mongodb/Dockerfile
    container_name: mongodb2
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: mongopw
      MONGO_REPLICA_SET_NAME: rs0
    ports:
      - "27018:27018"
    volumes:
      - ./.kube/mongodb/data2/:/data/db
      - ./.kube/mongodb/mongo-keyfile:/data/mongo-keyfile
    networks:
      - linqra-network
    command: mongod --bind_ip_all --replSet rs0 --port 27018 --keyFile /data/mongo-keyfile

  mongodb3:
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
    build:
      context: .
      dockerfile: ./.kube/mongodb/Dockerfile
    container_name: mongodb3
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: mongopw
      MONGO_REPLICA_SET_NAME: rs0
    ports:
      - "27019:27019"
    volumes:
      - ./.kube/mongodb/data3/:/data/db
      - ./.kube/mongodb/mongo-keyfile:/data/mongo-keyfile
    networks:
      - linqra-network
    command: mongod --bind_ip_all --replSet rs0 --port 27019 --keyFile /data/mongo-keyfile

networks:
  linqra-network:
    driver: bridge

Each node is configured with resource limits (1 CPU, 1GB memory) and uses the same keyfile for authentication. The only differences between nodes are their container names, ports, and data volume paths.
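
Assuming this configuration lives in a docker-compose.yml at the repository root, the nodes can be built and started with:

# Build the images and start all three nodes
docker compose up -d mongodb1 mongodb2 mongodb3

# All three containers should show a running status
docker ps --filter "name=mongodb"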

Host Configuration

Add the following entries to your hosts file so that clients running on the host can resolve the replica set member hostnames:

# Add to /etc/hosts
127.0.0.1       mongodb1
127.0.0.1       mongodb2
127.0.0.1       mongodb3

Without these entries, host-side clients such as mongosh, MongoDB Compass, or your application cannot resolve the member hostnames (mongodb1, mongodb2, mongodb3) that the replica set advertises after initialization.
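
You can verify the resolution before initializing the replica set:

# Each hostname should resolve to 127.0.0.1
getent hosts mongodb1 mongodb2 mongodb3   # Linux
ping -c 1 mongodb1                        # portable alternative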

Replica Set Initialization

  1. Connect to the first node (no primary exists until the set is initiated):
docker exec -it mongodb1 mongosh -u root -p mongopw --authenticationDatabase admin
  2. Initialize the replica set (rs.initiate takes only the configuration document; force is not a valid option here):
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongodb1:27017" },
    { _id: 1, host: "mongodb2:27018" },
    { _id: 2, host: "mongodb3:27019" }
  ]
});
  3. Verify the replica set status:
rs.status();

The replica set will automatically elect a primary node, with the remaining nodes becoming secondaries. Roles can change over time as new elections occur, for example after a node restart or a network interruption.
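
For a more compact view of member roles than the full rs.status() output, the status document can be filtered from the host; this one-liner is a convenience, not a required step:

# Print each member's hostname and current role (PRIMARY/SECONDARY)
docker exec -it mongodb1 mongosh -u root -p mongopw --authenticationDatabase admin \
  --eval 'rs.status().members.forEach(m => print(m.name, m.stateStr))'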

Connecting to the Replica Set

You can connect to the replica set using the following connection string:

mongosh "mongodb://root:mongopw@localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs0&authSource=admin"

This same connection string can be used in:

  • Your application configuration
  • MongoDB Compass
  • Other MongoDB clients
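
As a quick smoke test, db.hello() reports which member the client currently sees as primary:

# Ask the deployment which member is currently primary
mongosh "mongodb://root:mongopw@localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs0&authSource=admin" \
  --eval 'db.hello().primary'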

Connection Configuration

MongoDB connection settings for your application:

# MongoDB Connection Settings
export MONGODB_URI="mongodb://root:mongopw@localhost:27017,localhost:27018,localhost:27019/linqra?replicaSet=rs0&authSource=admin"
export MONGODB_DATABASE="linqra"

For production environments, it’s recommended to use secrets management for storing credentials.
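
With the variables exported in the current shell, connectivity can be verified with a simple ping:

# Smoke test using the exported settings
mongosh "$MONGODB_URI" --eval 'db.runCommand({ ping: 1 })'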

Replica Set Configuration

Linqra requires a 3-node MongoDB replica set configuration to ensure high availability and fault tolerance.

The replica set consists of:

  • 1 Primary node (handles all write operations)
  • 2 Secondary nodes (provide read scalability and automatic failover, demonstrated below)
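
Failover can be observed locally by stopping whichever container currently holds the primary role; this sketch assumes mongodb1 is primary, so adjust it to your cluster's actual state:

# Stop the node currently acting as primary (assumed to be mongodb1 here)
docker stop mongodb1

# Within a few seconds the remaining members elect a new primary
docker exec -it mongodb2 mongosh --port 27018 -u root -p mongopw --authenticationDatabase admin \
  --eval 'rs.status().members.forEach(m => print(m.name, m.stateStr))'

# The restarted node rejoins the set as a secondary
docker start mongodb1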

Benefits

  • High Availability: Automatic failover if the primary node becomes unavailable
  • Data Redundancy: Multiple copies of data across different nodes
  • Read Scalability: Secondary nodes can handle read operations (see the example below)
  • Disaster Recovery: Redundant copies across nodes simplify backup and recovery strategies
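
As one example of read scalability, a client can opt into secondary reads with the readPreference option; whether this trade-off is acceptable depends on your consistency requirements. The healthcheck collection here is the illustrative one from earlier:

# Route reads to a secondary when one is available (reads may be slightly stale)
mongosh "mongodb://root:mongopw@localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs0&authSource=admin&readPreference=secondaryPreferred" \
  --eval 'db.getSiblingDB("linqra").healthcheck.findOne()'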

Dockerfile

# All three replica set nodes are built from this same base image
FROM mongo:latest

Directory Structure

.kube/
└── mongodb/
    ├── Dockerfile
    ├── mongo-keyfile
    ├── data1/
    ├── data2/
    └── data3/

The data directories (data1, data2, data3) will be populated when the containers start running. The mongo-keyfile is generated during the setup process.
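
After the first successful startup you can confirm that the volumes are wired up correctly; a populated directory contains WiredTiger files, while an empty one usually indicates a mount problem:

# Expect WiredTiger files once mongodb1 has started at least once
ls ~/IdeaProjects/linqra/.kube/mongodb/data1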