howto_deploy_synapse_with_multiple_workers

<markdown>
# HowTo Deploy Synapse with Multiple Workers using docker-compose
## Status: Working Again (25th of May 2023)
The update to v1.84.0 broke my setup today. The [Upgrade-Info](https://github.com/matrix-org/synapse/blob/release-v1.84/docs/upgrade.md#upgrading-to-v1840) pointed out to me that there have been some changes in the configuration I had missed, so I had to adapt some things in both the shared homeserver.yaml and the worker configs. Hence the example files below won't work with synapse versions before the v1.84.0 release.
## Status: Finally working!!! (1st of March 2022)
I can finally announce a working docker-compose setup with 2 workers for [Matrix-Synapse](https://matrix-org.github.io/synapse/latest/welcome_and_overview.html)!
So far that is one worker for client-related requests and one for federation requests. Of course the number of workers is expandable; it only depends on how you route the requests to the workers and the main process using your reverse proxy (here: nginx).
## Scope
I document my relevant config files except my homeserver.yaml. For brevity I will only show the small parts of it that are relevant for getting the workers to communicate successfully with the main process. It is assumed that you already have a working homeserver.yaml based on a monolithic synapse setup. I will try to comment on some critical details so that you know what to adapt and what to adopt.
  
If you're looking for an example of synapse with workers _without_ docker you might want to visit https://git.envs.net/envs/matrix-conf/
### Thnx
... go out to Sloth (@sleuth:rexrobotics.org) for collaboration and mutual support while working on this. If you'd like to compare the files documented here to someone else's, have a look at [Sloth's gitlab repo](https://gitlab.com/Sleuth56/synapse-with-workers/-/tree/master/).
### My docker-compose.yml
#### The base of all
Please note that the container_name fields are crucial for the setup to work. You need them to address the containers between each other.
```
version: '3'
# ...
      - SYNAPSE_REPORT_STATS=no
      - SYNAPSE_SERVER_NAME=ismus.net     # of course you must adjust this to your domain
      - SYNAPSE_CONFIG_PATH=/data/homeserver-postgres.yaml
      - TZ=Europe/Berlin
    depends_on: ["postgres"]
    volumes:
      - /volume1/docker/volumes/matrix-synapse:/data
    networks:
# ...
```
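The compose excerpt above is shortened and does not show the worker containers themselves. The following is a hedged sketch of how they could look: the image tag and the entrypoint override are assumptions based on the common pattern for running `synapse.app.generic_worker` with the official image, and the worker config files are assumed to live under `/data/workers/`. The container_name values match the nginx upstreams and the instance_map shown further below.

```
  matrix-client:
    image: matrixdotorg/synapse:latest          # assumption: official image
    container_name: matrix-client               # resolved by the nginx upstream "synapse_client"
    entrypoint: ["python", "-m", "synapse.app.generic_worker",
                 "--config-path=/data/homeserver-postgres.yaml",
                 "--config-path=/data/workers/synchrotron-1.yaml"]
    volumes:
      - /volume1/docker/volumes/matrix-synapse:/data

  matrix-federation:
    image: matrixdotorg/synapse:latest
    container_name: matrix-federation           # resolved by the nginx upstream "synapse_federation"
    entrypoint: ["python", "-m", "synapse.app.generic_worker",
                 "--config-path=/data/homeserver-postgres.yaml",
                 "--config-path=/data/workers/federation-1.yaml"]
    volumes:
      - /volume1/docker/volumes/matrix-synapse:/data

  matrix-redis:                                 # needed for replication, see the redis block in homeserver.yaml
    image: redis:latest
    container_name: matrix-redis
```

All three services would of course need to join the same docker network as the main synapse container.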
### The nginx.conf so far:
Nginx has the task of distributing the different kinds of requests between the workers. I didn't have much of an idea how to configure nginx before I found a template that looked much like the following, but I still had a lot to correct and figure out to make it work properly. If you want to re-balance the work between the two (or even more) workers, you will have to alter the map block and, if needed, add further upstreams per worker.

As described above, I have the special case that in my environment there's another proxy server on top that is responsible for the domain and SSL, so I only needed the nginx container to distribute requests and pass traffic through between this upstream reverse proxy and the synapse processes. If you want to use the nginx container as the fully responsible server, have a look at the [official synapse documentation](https://matrix-org.github.io/synapse/latest/reverse_proxy.html) and add the few missing parts.
  
```
# ...
  
upstream synapse_master {
        server matrix-synapse:8888;            # Here neither localhost nor 127.0.0.1 nor 0.0.0.0 worked for me.
}

upstream synapse_client {
        server matrix-client:8084;             # But the container_name labels are resolved to the local IPs of the containers.
}

upstream synapse_federation {
        server matrix-federation:8083;         # That makes the docker setup quite fail-safe in case of changing network conditions, I guess.
}
  
# ...
  
map $request_uri $synapse_backend {
        default synapse_master;                # Important: makes sure that everything _not_ covered by the regexes goes to the master process by default and thus won't be lost!
                                               # The request patterns are basically copy-paste from the [official docs](https://matrix-org.github.io/synapse/latest/workers.html).
  
        # Sync requests
# ...
                        proxy_set_header X-Forwarded-For $remote_addr;
                }

                location / {
                        proxy_pass http://synapse_master;
                        proxy_set_header Host $host;
                        proxy_set_header X-Forwarded-Proto $scheme;
                        proxy_set_header X-Forwarded-For $remote_addr;
                }

                # ALTERNATIVELY!! (i.e. instead of the location / block above, nginx refuses to start with two
                # identical locations) you can redirect from your domain to any URL you want, e.g. a matrix webclient:
                # location / {
                #         return 301 https://app.schildi.chat/#/login;
                # }

        }
# ...
}
```
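The map block above is only shown partially in this excerpt. To give an idea of the shape of its entries, here is a minimal hedged sketch, not the complete set actually used: the patterns are taken (and in the federation case broadened for brevity) from the endpoint lists in the [official worker docs](https://matrix-org.github.io/synapse/latest/workers.html).

```
map $request_uri $synapse_backend {
        default synapse_master;

        # Sync requests go to the client worker
        ~*^/_matrix/client/(r0|v3|unstable)/sync$        synapse_client;
        ~*^/_matrix/client/(api/v1|r0|v3)/initialSync$   synapse_client;

        # Incoming federation traffic goes to the federation worker
        ~*^/_matrix/federation/                          synapse_federation;
        ~*^/_matrix/key/v2/query                         synapse_federation;
}
```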
### Worker 1
#### synchrotron-1.yaml
```
worker_app: synapse.app.generic_worker
worker_name: generic_worker1

worker_listeners:
  - type: http
    port: 8084
    resources:
      - names: [client]
  - type: http
    port: 9094
    resources:
      - names: [replication]

send_federation: true

worker_log_config: /data/workers/logs/Worker1_log_config.yaml     # if you like to have a separate log for each worker you will need this config. But I understand that you can also use the already existing config of your main synapse process.
```

### Worker 2
#### federation-1.yaml
```
worker_app: synapse.app.generic_worker
worker_name: generic_worker2

worker_listeners:
  - type: http
    port: 8083
    resources:
      - names: [federation]
  - type: http
    port: 9092
    resources:
      - names: [replication]

send_federation: true

worker_log_config: /data/workers/logs/Worker2_log_config.yaml
```
### Log config
Here is an example of what a WorkerX_log_config.yaml could look like. If something doesn't work, maybe check the indentation first after copy-pasting ... For Worker2 you would duplicate the file and adjust the filename (e.g. Worker2.log).
```
version: 1

formatters:
  precise:
    format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s- $'

filters:
  context:
    (): synapse.util.logcontext.LoggingContextFilter
    request: ""

handlers:
  file:
    class: logging.handlers.RotatingFileHandler
    formatter: precise
    filename: /data/workers/logs/Worker1.log
    maxBytes: 104857600
    backupCount: 10
    filters: [context]
    encoding: utf8
    level: DEBUG
  console:
    class: logging.StreamHandler
    formatter: precise
    level: INFO

loggers:
    synapse:
        level: INFO

    synapse.storage.SQL:
        level: INFO

    synapse.app.generic_worker:
        level: DEBUG

root:
    level: INFO
    handlers: [file, console]
```

### homeserver.yaml
Last but not least, the excerpts from the homeserver.yaml:

```
## Ports ##

listeners:
  - port: 8448
    bind_addresses: ['::']
    type: http
    tls: false
    x_forwarded: true
    resources:
      - names: [client]
        compress: true
      - names: [federation]  # Federation APIs
        compress: false

  - port: 8888
    tls: false
    bind_addresses: ['::']
    type: http
    x_forwarded: true

    resources:
      - names: [client]
        compress: true
      - names: [federation]
        compress: false

  # The HTTP replication port
  - port: 9093
    bind_address: '::'
    type: http
    resources:
      - names: [replication]

##################################################
snipsnap
##################################################

# Worker
worker_replication_secret: "some long and super secret password you don't need to memorize or copy because main and workers share it already"

# The following block was added to be compatible with v1.84.0 which broke my former config on the 24th of May 2023. Check also the worker configs above!
instance_map:
  main:
    host: matrix-synapse
    port: 9093
    tls: false
  generic_worker2:
    host: matrix-federation
    port: 9092
    tls: false
  generic_worker1:
    host: matrix-client
    port: 9094
    tls: false

redis:
  enabled: true
  host: matrix-redis
  port: 6379
```

At least for me this setup does work. I hope it is of help to you, as I spent more than three whole days to get it to work. I searched the web a lot about setting up workers in a docker-compose context, knew only a little about the single components, and tried a lot of pieces and hints that didn't work in the end.

Most of the time it was either
 - something with the regex
 - or how to correctly address the different containers between each other. As mentioned before, the only stable way to solve this turned out to be the container_name tags, which resolve to the current IP addresses even after taking the docker-compose environment down and up again.

And now good luck with your own efforts :)

## For documentation and transparency
### or What Happened Before ...
  
## Status: Not working yet (25th of February 2022)
 • Last modified: 2022/03/01 01:03
 • by homer