
HowTo Deploy Synapse with Multiple Workers using docker-compose

Status: Working Again (25th of May 2023)

The update to v1.84.0 broke my setup today. The upgrade notes informed me that there had been some changes in the configuration which I had missed, so I had to adapt some things in both the shared homeserver.yaml and the worker configs. This means the example files below won't work with synapse versions before the v1.84.0 release.

Status: Finally working!!! (1st of March 2022)

I can finally announce a working docker-compose setup with two workers for Matrix-Synapse! So far that is one worker for client-related requests and one for federation requests. Of course the number of workers can be expanded; it only depends on how you route the requests to the workers and the main process using your reverse proxy (here: nginx).

Scope

I document my relevant config files except for my homeserver.yaml. For brevity I will only show the small parts of it that are relevant for getting the workers to communicate successfully with the main process. It is assumed that you already have a working homeserver.yaml based on a monolithic synapse setup. I will try to comment on some critical details so that you know what to adapt and what to adopt.

If you're looking for an example of synapse with workers without docker you might want to visit https://git.envs.net/envs/matrix-conf/

Thanks

... go out to Sloth (@sleuth:rexrobotics.org) for the collaboration and mutual support while working on this. If you would like to compare the files documented here with someone else's, have a look at Sloth's GitLab repo.

My docker-compose.yml

The base of it all

Please note that the container_name fields are crucial for the setup to work. They are needed so that the containers can address each other by name.

version: '3'

services:
  postgres:
    restart: unless-stopped
    networks:
      - default
    environment:
      POSTGRES_PASSWORD: "[some-secret-password]"
      POSTGRES_USER: synapse
      POSTGRES_DB: synapse
      POSTGRES_INITDB_ARGS: "--encoding='UTF8' --lc-collate='C' --lc-ctype='C'"
      POSTGRES_HOST_AUTH_METHOD: trust
    image: postgres:13-alpine
    volumes:
      - /volume1/docker/volumes/matrix-synapse/postgres:/var/lib/postgresql/data

  nginx:
    image: nginx:stable-alpine
    container_name: matrix-nginx
    volumes:
      - /volume1/docker/volumes/matrix-synapse/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - 8008:8008   ## These published ports fit my environment, where 8008 leads to another proxy server that handles ports 80, 443 and 8448
      - 84:8484
    depends_on:     ## That's actually very important! The startup order __must__ be: redis, postgres, synapse, worker[1:n], nginx
      - synapse
      - federation_worker
      - client_worker
    networks:
      - default


  synapse:
    image: matrixdotorg/synapse:latest
    container_name: matrix-synapse
    restart: "unless-stopped"
    environment:
      - SYNAPSE_REPORT_STATS=no
      - SYNAPSE_SERVER_NAME=ismus.net     # of course you must adjust this to your domain
      - SYNAPSE_CONFIG_PATH=/data/homeserver-postgres.yaml
      - TZ=Europe/Berlin
    depends_on: ["postgres"]
    volumes:
      - /volume1/docker/volumes/matrix-synapse:/data
    networks:
      - default
      
  client_worker:
    image: matrixdotorg/synapse:latest
    container_name: matrix-client
    restart: "unless-stopped"
    command:
      - 'run'
      - '--config-path=/data/homeserver-postgres.yaml'
      - '--config-path=/data/workers/synchrotron-1.yaml' # Important! You can _NOT_ do this by `- SYNAPSE_CONFIG_PATH=/data/workers/synchrotron-1.yaml` in the environment section!
    environment:
      - SYNAPSE_REPORT_STATS=no
      - SYNAPSE_SERVER_NAME=ismus.net
      - SYNAPSE_WORKER=synapse.app.generic_worker
      - TZ=Europe/Berlin
    depends_on:
      - synapse
    volumes:   
      - /volume1/docker/volumes/matrix-synapse:/data
    networks:
      - default

  federation_worker:
    image: matrixdotorg/synapse:latest
    container_name: matrix-federation
    restart: "unless-stopped"
    command:
      - 'run'  
      - '--config-path=/data/homeserver-postgres.yaml'
      - '--config-path=/data/workers/federation-1.yaml'
    environment:
      - SYNAPSE_REPORT_STATS=no
      - SYNAPSE_SERVER_NAME=ismus.net
      - SYNAPSE_WORKER=synapse.app.generic_worker
      - TZ=Europe/Berlin
    depends_on: 
      - client_worker
    volumes:
      - /volume1/docker/volumes/matrix-synapse:/data
    networks:
      - default
    
  redis:
    image: "redis:latest"
    container_name: matrix-redis    # homeserver.yaml addresses redis by this name
    restart: "unless-stopped"
    networks:
      - default
    
    
networks:
  default:
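To bring the stack up, something like the following should do (a sketch, assuming the docker-compose.yml lives in the current directory and the volume paths from above exist):

# start everything in the background; depends_on takes care of the ordering
docker-compose up -d

# check that all five containers are running
docker-compose ps

# follow the logs of the main process and both workers to watch them connect to redis and each other
docker-compose logs -f synapse client_worker federation_worker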

The nginx.conf so far:

The nginx instance has the task of distributing the different kinds of requests between the workers. I did not have much of an idea how to configure nginx before I found a template that looked much like the following, but I still had a lot to correct and figure out to make it work properly. If you want to re-balance the work between the two (or even more) workers, you will have to alter the map block and, if needed, add further upstreams per worker.

As mentioned above, my environment has the special case that there is another proxy server on top which is responsible for the domain and SSL, so I only needed the nginx container to distribute requests and pass traffic through between this upstream reverse proxy and the synapse processes. If you want to use the nginx container as the fully responsible server, have a look into the official synapse documentation and add the few missing parts.
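As an example of such re-balancing: if you added a hypothetical second client worker (a new container named matrix-client-2 with its own worker config listening on port 8085 - both the name and the port are my own invention here), the client upstream could simply list both servers, and nginx would balance between them round-robin by default:

upstream synapse_client {
        server matrix-client:8084;
        server matrix-client-2:8085;   # hypothetical second client worker
}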

events {
        worker_connections 1024;
}

http {

upstream synapse_master {
        server matrix-synapse:8888;            # Here neither localhost nor 127.0.0.1 nor 0.0.0.0 worked for me.
}

upstream synapse_client {
        server matrix-client:8084;             # But the container_name labels are resolved to the local IPs of the containers.
}

upstream synapse_federation {
        server matrix-federation:8083;         # That makes the docker setup quite fail-safe against changing network conditions, I guess.
}

map_hash_bucket_size 128;

map $request_uri $synapse_backend {
        default synapse_master;                # Important: Ensures that everything _not_ covered by a regex below goes to the master process by default and thus won't be lost!
                                               # The entries are basically copy-pasted from the official docs: https://matrix-org.github.io/synapse/latest/workers.html

        # Sync requests
        "~^/_matrix/client/(v2_alpha|r0)/sync$" synapse_client;
        "~^/_matrix/client/(api/v1|v2_alpha|r0)/events$" synapse_client;
        "~^/_matrix/client/(api/v1|r0)/initialSync$" synapse_client;
        "~^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$" synapse_client;

        # Federation requests
        "~^/_matrix/federation/v1/event/" synapse_federation;
        "~^/_matrix/federation/v1/state/" synapse_federation;
        "~^/_matrix/federation/v1/state_ids/" synapse_federation;
        "~^/_matrix/federation/v1/backfill/" synapse_federation;
        "~^/_matrix/federation/v1/get_missing_events/" synapse_federation;
        "~^/_matrix/federation/v1/publicRooms" synapse_federation;
        "~^/_matrix/federation/v1/query/" synapse_federation;
        "~^/_matrix/federation/v1/make_join/" synapse_federation;
        "~^/_matrix/federation/v1/make_leave/" synapse_federation;
        "~^/_matrix/federation/v1/send_join/" synapse_federation;
        "~^/_matrix/federation/v2/send_join/" synapse_federation;
        "~^/_matrix/federation/v1/send_leave/" synapse_federation;
        "~^/_matrix/federation/v2/send_leave/" synapse_federation;                                           
        "~^/_matrix/federation/v1/invite/" synapse_federation;
        "~^/_matrix/federation/v2/invite/" synapse_federation;
        "~^/_matrix/federation/v1/query_auth/" synapse_federation;
        "~^/_matrix/federation/v1/event_auth/" synapse_federation;
        "~^/_matrix/federation/v1/exchange_third_party_invite/" synapse_federation;
        "~^/_matrix/federation/v1/user/devices/" synapse_federation;
        "~^/_matrix/federation/v1/get_groups_publicised$" synapse_federation;
        "~^/_matrix/key/v2/query" synapse_federation;
        
        # Inbound federation transaction request
        "~^/_matrix/federation/v1/send/" synapse_federation;

        # Client API requests
        "~^/_matrix/client/(api/v1|r0|unstable)/publicRooms$" synapse_client;
        "~^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$" synapse_client;
        "~^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$" synapse_client;
        "~^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$" synapse_client;
        "~^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$" synapse_client;
        "~^/_matrix/client/(api/v1|r0|unstable)/account/3pid$" synapse_client;
        "~^/_matrix/client/(api/v1|r0|unstable)/devices$" synapse_client;
        "~^/_matrix/client/(api/v1|r0|unstable)/keys/query$" synapse_client;                                 
        "~^/_matrix/client/(api/v1|r0|unstable)/keys/changes$" synapse_client;
        "~^/_matrix/client/versions$" synapse_client;
        "~^/_matrix/client/(api/v1|r0|unstable)/voip/turnServer$" synapse_client;
        "~^/_matrix/client/(api/v1|r0|unstable)/joined_groups$" synapse_client;
        "~^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$" synapse_client;
        "~^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/" synapse_client;
        "~^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/event/" synapse_client;
        "~^/_matrix/client/(api/v1|r0|unstable)/joined_rooms$" synapse_client;
        "~^/_matrix/client/(api/v1|r0|unstable)/search$" synapse_client;

        # Registration/login requests
        "~^/_matrix/client/(api/v1|r0|unstable)/login$" synapse_client;
        "~^/_matrix/client/(r0|unstable)/register$" synapse_client;
        # Event sending requests
        "~^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact" synapse_client;
        "~^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send" synapse_client;
        "~^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state/" synapse_client;
        "~^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$" synapse_client;
        "~^/_matrix/client/(api/v1|r0|unstable)/join/" synapse_client;
        "~^/_matrix/client/(api/v1|r0|unstable)/profile/" synapse_client;                     
}

        server {
                listen 8008;
                listen [::]:8008;
                listen 8484;
                listen [::]:8484;
                server_name localhost;
                client_max_body_size 100M;

                location ~ ^(/_matrix|/_synapse|client) {
                        proxy_pass http://$synapse_backend;
                        proxy_set_header Host $host;
                        proxy_set_header X-Forwarded-Proto $scheme;
                        proxy_set_header X-Forwarded-For $remote_addr;
                }

                location / {
                        proxy_pass http://synapse_master;
                        proxy_set_header Host $host;
                        proxy_set_header X-Forwarded-Proto $scheme;
                        proxy_set_header X-Forwarded-For $remote_addr;
                }

                # ALTERNATIVELY!! you can redirect from your domain to any URL you want, e.g. a matrix webclient.
                # Note: enable only one of the two location / blocks - nginx won't start with duplicate locations.
                #location / {
                #        return 301 https://app.schildi.chat/#/login;
                #}
        }


}
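A quick way to check that the routing works (a sketch, assuming you run this on the docker host and port 8008 is published as in the docker-compose.yml above): request an endpoint that the map sends to the client worker and see which container logs it.

# this path matches "~^/_matrix/client/versions$" and should be answered by the client worker
curl -s http://localhost:8008/_matrix/client/versions

# the request should now show up here, not in the logs of matrix-synapse
docker logs matrix-client --tail 5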

Worker 1

synchrotron-1.yaml

worker_app: synapse.app.generic_worker
worker_name: generic_worker1

worker_listeners:
  - type: http
    port: 8084
    resources:
      - names: [client]
  - type: http
    port: 9094
    resources:
      - names: [replication]

send_federation: true

worker_log_config: /data/workers/logs/Worker1_log_config.yaml     # if you would like to have a separate log for each worker, you will need this config. But as I understand it you can also reuse the already existing log config of your main synapse process.

Worker 2

federation-1.yaml

worker_app: synapse.app.generic_worker
worker_name: generic_worker2

worker_listeners:
  - type: http
    port: 8083
    resources:
      - names: [federation]
  - type: http
    port: 9092
    resources:
      - names: [replication]

send_federation: true

worker_log_config: /data/workers/logs/Worker2_log_config.yaml
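The worker configs follow the same pattern, so scaling out mostly means adding more files like these. Here is a sketch of what a hypothetical third (client) worker could look like - the file name, worker_name and ports are freely chosen by me and must be unique. It additionally needs its own service block in the docker-compose.yml, an entry in the instance_map shown below, and matching routes in the nginx map.

worker_app: synapse.app.generic_worker
worker_name: generic_worker3                # must be unique across all workers

worker_listeners:
  - type: http
    port: 8085                              # point an nginx upstream at this port
    resources:
      - names: [client]
  - type: http
    port: 9095                              # register this port under instance_map in homeserver.yaml
    resources:
      - names: [replication]

worker_log_config: /data/workers/logs/Worker3_log_config.yaml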

Log config

Here you have an example of how a WorkerX_log_config.yaml could look. If something doesn't work, maybe check the indentation first after copy-pasting ...

version: 1

formatters:
  precise:
    format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'

filters:
  context:
    (): synapse.logging.context.LoggingContextFilter
    request: ""

handlers:
  file:
    class: logging.handlers.RotatingFileHandler
    formatter: precise
    filename: /data/workers/logs/Worker1.log
    maxBytes: 104857600
    backupCount: 10
    filters: [context]
    encoding: utf8
    level: DEBUG
  console:
    class: logging.StreamHandler
    formatter: precise
    level: INFO

loggers:
  synapse:
    level: INFO

  synapse.storage.SQL:
    level: INFO

  synapse.app.generic_worker:
    level: DEBUG

root:
  level: INFO
  handlers: [file, console]
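Since the per-worker log configs typically differ only in the log file name, you can generate further ones from the first (a sketch, assuming the naming scheme from above):

sed 's/Worker1/Worker2/g' Worker1_log_config.yaml > Worker2_log_config.yaml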

homeserver.yaml

Last but not least, the excerpts from the homeserver.yaml:

## Ports ##
        
listeners:
  - port: 8448
    bind_addresses: ['::']
    type: http
    tls: false
    x_forwarded: true
    resources:
      - names: [client]
        compress: true
      - names: [federation]  # Federation APIs
        compress: false

  - port: 8888               # this is the port the nginx upstream synapse_master points at
    tls: false
    bind_addresses: ['::']
    type: http
    x_forwarded: true
    resources:
      - names: [client]
        compress: true
      - names: [federation]
        compress: false

  # The HTTP replication port, referenced as "main" in the instance_map below
  - port: 9093
    bind_addresses: ['::']
    type: http
    resources:
      - names: [replication]
##################################################
snipsnap
##################################################

# Worker
worker_replication_secret: "some long and super secret password you don't need to memorize or copy because main and workers share it already"

# The following block was added to be compatible with v1.84.0, which broke my former config on the 24th of May 2023. Check also the worker configs above!
instance_map:
  main:
    host: matrix-synapse      # container_name of the main synapse container
    port: 9093                # its replication listener from the listeners block above
    tls: false
  generic_worker2:            # must match worker_name in federation-1.yaml
    host: matrix-federation   # container_name of the federation worker
    port: 9092                # its replication listener
    tls: false
  generic_worker1:            # must match worker_name in synchrotron-1.yaml
    host: matrix-client       # container_name of the client worker
    port: 9094                # its replication listener
    tls: false


redis:
  enabled: true
  host: matrix-redis   # the container_name of the redis service in the docker-compose.yml
  port: 6379

At least for me this setup does work. I hope it is of help to you, as I spent more than three whole days to get it to work. I searched the web a lot about setting up workers in a docker-compose context, knew only a little about the individual components, and tried a lot of pieces and hints that didn't work in the end.

Most of the time it was either

And now good luck with your own efforts :)

For documentation and transparency

or What Happened Before ...

Status: Not working yet (25th of February 2022)

While trying to give my docker-compose setup two workers (multi-container) - one for client, one for federation - I made some progress today, but I am struggling with a strange problem:

Old Outdated Information

As I experienced significant delays with my synapse instance when joining rooms on other instances like matrix.org, I was told that I urgently needed to split its CPU load across multiple processes. This way the load could be distributed over multiple CPU cores instead of only one, which would limit the processing speed for large rooms. Well, until now I knew next to nothing about so-called workers and how to set them up in a dockerized environment. On top of that: there is very little documentation and there are few tutorials on this. Or at least I couldn't find any.

So far my efforts have led me to some [general information about workers in synapse]. A few hints on how to configure separate workers in docker/docker-compose can be found in the official [README].

Meanwhile the developers are obviously [working on a "batteries included" docker-image for multiprocess synapse] which already makes use of workers. But unfortunately that's not ready for production at the moment.

What I understand so far
